Stijn Hoppenbrouwers
email: stijnh@cs.ru.nl
Asking Questions about Asking Questions in Collaborative Enterprise Modelling
Keywords: Collaborative Modelling, Modelling Process, Question Asking, Answer Structuring, Enterprise Modelling, Collaboration Systems
In this paper we explore the subject of question asking as an inherent driver of enterprise modelling sessions, within the narrower context of the 'dialogue game' approach to collaborative modelling. We explain the context, but mostly report on matters directly concerning question asking and answer pre-structuring as a central issue in an ongoing effort aiming for the practice-oriented development of a series of dialogue games for collaborative modelling. We believe that our findings can be relevant and helpful to anyone concerned with planning, executing or facilitating collaborative modelling sessions, in particular when involving stakeholders untrained in systems thinking and modelling.
Introduction
In the field of collaborative enterprise modelling [START_REF] Renger | Challenges in collaborative modelling: a literature review and research agenda[END_REF][START_REF] Barjis | Collaborative, Participative and Interactive Enterprise Modeling[END_REF], in particular in combination with information systems and service engineering, an increasing industrial and academic interest is becoming visible in the combining of advanced collaborative technologies with various types of modelling [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF], e.g. for business process modelling, domain modelling, business rules modelling, or enterprise architecture modelling. This includes support for well established, even traditional setups for modelling sessions (like workshops, interview-like sessions, and multi-participant model reviews) but also more innovative, on-line incarnations thereof, both synchronous and asynchronous, both facilitated and unfacilitated, often related to social media, and often geographically distributed [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]. In addition, collaborative modelling is increasingly interwoven with operational (in addition to development) processes in enterprises; it may be initiated as part of a development project but will often become integrated with long-term, persistent 'maintenance' processes realizing enterprise model evolution. This shift in the context of application for enterprise modelling entails increasingly intense collaboration with business stakeholders not trained in established forms of systems modelling [START_REF] Zoet | An Agile way of Working, in Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF].
Collaborative enterprise modelling, as positioned above, includes a small number of approaches focusing on the understanding and support of the process of modelling. Specific approaches to this very much reflect views of what such a process essentially is, which may vary greatly. In most cases, emphasis is on 'collaborative diagram drawing' (for example [START_REF] Pinggera | Tracing the process of process modeling with modeling phase diagrams[END_REF]). A different (though not unrelated) approach chooses to view collaborative modelling as a model-oriented conversation in which propositions are exchanged and discussed [START_REF] Rittgen | Negotiating Models[END_REF][START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF].
Beyond theories concerning the nature of collaborative modelling lies the question of how to support collaborative model conceptualisation efforts (other than merely by providing some model editor), either by means of software or by less high-tech means. Our own, ongoing attempt to devise an effective practice-oriented framework for the structuring and guiding of modelling sessions has led us to develop something called 'dialogue games for modelling': game-like, open procedures in which explicit rules govern the interactions allowed and required within a structured conversation-for-modelling ( [START_REF] Hoppenbrouwers | Towards Games for Knowledge Acquisition and Modeling[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF]; see section 2 for more on this). For some time it has been clear to us that the questions underlying models and modelling efforts are (or should be) an explicit driving force behind the conversations that constitute modelling processes [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF]. In this paper, we directly address the issue of question asking, as well as the pre-structuring and guiding of answers to be given.
This paper is written more from a design point of view than from an analytical or observational (descriptive) point of view. It works directly towards application of the results presented in the design of operational dialogue games. We therefore work under the Design Science paradigm [START_REF] Hevner | Design Science in Information Systems Research[END_REF]. The ideas presented are a result of some experimental designs that were empirically validated on a small scale, but as yet they have merely a heuristic status; they are not established practices, nor have they been exhaustively validated. And yet, we believe that the presented approach to question asking, and answer pre-structuring and guiding, is approximately 'right' as well as simply 'useful', since it was not 'just thought up' but carefully distilled through a focused and multifaceted effort to understand, guide and support the systematic asking of questions in detailed conversations-for-modelling.
The main problem addressed is thus that of 'how to ask particular questions in order to guide and drive a conversation for modelling', down to the level of structuring and aiding the actual phrasing of questions. To the best of our knowledge, this matter has never been addressed with a similar, dedicated and detailed design focus in the field of enterprise modelling, or anywhere else. Purposeful question asking in general has received plenty of attention in the context of interviewing skills (see for example [START_REF] Bryman | Social Research Methods[END_REF], Chapter 18), but we could not find an adequately content-related, generative approach. In the field of speech generation, some attention has been given to model-based question generation (see for example [START_REF] Olney | Question generation from Concept Maps[END_REF]), but here results are too theoretical and too limited to be of help for our purpose. This is why we took a grassroots design approach grounded in observation and reflection on what modellers and facilitators do (or should do) when they formulate questions to drive and guide a modelling process. The result is a small but useful set of concepts and heuristics that can help participants in and facilitators of modelling sessions to think about and make explicit the questions to be asked, from the main questions behind the session as a whole, down to specific questions asked in highly focused parts of the session. While (as discussed) the results have not been tested at great length, they do reflect a focused effort to come up with generally useful concepts and heuristics, spanning several years and a fair number of experimental projects (only some of which have been published; most were graduate projects). For a considerable part, these experiments and studies were conducted in the wider context of the Agile Service Development project as reported in [START_REF] Lankhorst | Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF], and are now continued under the flag of the Collaborative Modelling Lab (CoMoLab) [17].
Dialogue Games for Collaborative Modelling
Our approach to developing means of guiding and structuring conversations-formodelling has led to the design and use of Dialogue Games. Previous to this, it was already theorized [START_REF] Rittgen | Negotiating Models[END_REF][START_REF] Hoppenbrouwers | Formal Modelling as a Grounded Conversation[END_REF] (backed up by analysis of observed collaborative modelling sessions [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]) that collaborative modelling as a conversation involves the setting and use of Rules constraining both the Interactions of the conversation as well as its chief outcome (the Model). The Interactions include both the stating of propositions and discussion of those propositions, leading to acceptance of some propositions by the participants. Accepted propositions at a given time constitute the Model at that time [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]. Apart from the primary result of modelling (the Model), results may be social in nature, e.g. reaching some level of understanding or consensus, or achieving a sense of ownership of the model. Such goals can also be part of the rules set, and they are also achieved through Interactions. The notions of Rules, Interactions and Models (the basics of the 'RIM framework') can be used for analysis of any modelling session, but they can also be used as a basis for designing support and guidance for such sessions -which is what we did next.
Dialogue Games initially are a theoretical notion from Argumentation Theory going back to [START_REF] Mann | Dialogue Games: Conventions of Human Interaction[END_REF]. A more operational incarnation of dialogue games, an educational tool, was devised in the form of the InterLoc system as reported in [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF][START_REF] Ravenscroft | Designing interaction as a dialogue game: Linking social and conceptual dimensions of the learning process[END_REF]. The core of this tool is an augmented 'chatbox' in which every contribution of the participants in a chat has to be preceded by an 'opener' chosen from a limited, preset collection (for example "I think that …"; "I want to ask a question: …"; "I agree, because: …"). Thus, the argumentation/discourse structure of the chat is constrained, and users become more aware of the structure of their conversation as it emerges. Also, the resulting chat log (available to the participants throughout) reflects the discourse structure quite transparently, including who said what, and when; this has proved useful both during the conversation and for later reference.
We took this concept and added to it the use of openers to constrain not only the type of contribution to the conversation, but also the format of the answer, for example "I propose the following Activity: …". This blended syntactic constraints with conversational constraints, and gave us access to introducing into dialogue games conceptual elements stemming from modelling languages. In addition, we showed that diagram techniques could easily and naturally be used in parallel to the chat, augmenting the verbal interaction step-by-step (as is common in most types of collaborative modelling) [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF].
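To make the opener mechanism concrete, the sketch below models a minimal opener-constrained chat log in Python. The opener texts, the keys and the ChatEntry structure are illustrative assumptions for this discussion, not the actual InterLoc or dialogue-game tooling.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical opener set: conversational openers plus a model-oriented opener
# that also constrains the format of the answer (an Activity proposal).
OPENERS = {
    "opinion": "I think that ...",
    "question": "I want to ask a question: ...",
    "agree": "I agree, because: ...",
    "propose_activity": "I propose the following Activity: ...",
}

@dataclass
class ChatEntry:
    author: str
    opener_key: str   # which opener was chosen
    body: str         # the free text following the opener
    timestamp: datetime

def add_entry(log: list[ChatEntry], author: str, opener_key: str, body: str) -> None:
    """Accept a contribution only if it starts from one of the preset openers."""
    if opener_key not in OPENERS:
        raise ValueError(f"Unknown opener; choose one of {sorted(OPENERS)}")
    log.append(ChatEntry(author, opener_key, body, datetime.now()))

log: list[ChatEntry] = []
add_entry(log, "Anna", "propose_activity", "Register incoming order")
print(OPENERS[log[0].opener_key].replace("...", log[0].body))
```

In such a setup the chat log doubles as the transparent record of who said what and when, as noted above.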
Some new ground was broken by our growing awareness that most conversations-for-modelling did not have one continuous and undivided focus: one big dialogue game (the whole modelling session) typically consists of a number of successive smaller dialogue games focusing on small, easily manageable problems/questions [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF]; the 'divide and conquer' principle. This principle is confirmed in the literature [START_REF] Prilla | Fostering Self-direction in Participatory Process Design[END_REF][START_REF] Andersen | Scripts for Group Model Building[END_REF]. It led to the introduction of the notion of 'Focused Conceptualisation' or FoCon [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF]: 'functional requirements' for modelling sessions (and parts thereof) including the expected type of 'input' (e.g. people; documents, conceptual structures) and desired 'output' (models in some modelling language, for some specific use; also, social results) as well as 'means to achieve the output': focus questions, sub-steps, and possibly some 'rules of the game'. Thus FoCons can help define highly focused dialogue games, with small sets of openers dedicated to answering focus questions that are just a part of the modelling conversation as a whole. Within such limited scopes of interaction, it is much easier to harness known principles from collaboration and facilitation technology (e.g. from brainstorming, prioritizing, problem structuring) to guide and support people in generating relevant and useful answers to questions [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF]. Importantly, this combines the 'information demand' of the general modelling effort with the HCI-like 'cognitive ergonomics' of the tasks set for the participants, which has to match their skills and expertise [START_REF] Wilmont | Abstract Reasoning in Collaborative Modeling[END_REF].
Part of the FoCon approach also is the distinction between the pragmatic goal/focus of a modelling effort and its semantic-syntactic goal/focus. As explained in [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF], pragmatic focus concerns the informational and communicational goal of the model: its intended use. One process model, for example, is not the other, even if it is drawn up in the same language (say, BPMN). What are the questions that the model needs to answer? Do they work towards, for example, generation of a workflow system? Process optimization? Establishing or negotiating part of an Enterprise Architecture? Do they concern an existing situation, or a future one? And so on.
Semantic-syntactic focus concerns the conceptual constraints on the model: typically, its modelling language. In some cases, such constraints may actually hardly be present, in which case the constraints are perhaps those of some natural language, or a subset thereof (controlled natural language). Practically speaking, a real life modelling effort may or may not have a clearly preset semantic-syntactic focus, but it should always have a reasonably clear pragmatic focus - if not, why bother about the model in the first place? In any case, the pragmatic focus is (or should be) leading with respect to the semantic-syntactic focus.
The pragmatic and semantic-syntactic goals are crucial for identifying and setting questions for modelling.
Questions and Answer Types as Drivers and Constraints
Perhaps the most central argumentation underlying this paper is this: 'if models are meant to provide information, then they aim to answer questions [START_REF] Hoppenbrouwers | A Fundamental View on the Process of Conceptual Modeling[END_REF] - explicitly or not. In that case, in order to provide pragmatic focus to a conversation-for-modelling, it seems quite important to be aware of what questions are to be asked in the specific modelling context; if people are not aware, how can they be expected to model efficiently and effectively?' This suggests that making 'questions asked' explicit (before, during or even after the event) seems at the least a useful exercise for any modelling session. There is of course a clear link here with standard preparations for interviews and workshops. Yet it transpires that in some of the more extreme (and unfortunate) cases, the explicit assignments given or questions asked remain rather coarse-grained, like 'use language L to describe domain D' (setting only the semantic-syntactic focus clearly). If experienced, context-aware experts are involved, perhaps the right questions are answered even if they are left implicit. However, if stakeholders are involved who have little or no modelling experience, and who generally feel insecure about what is expected of them, then leaving all but the most generic questions implicit seems suboptimal, to say the least. Disaster may ensue - and in many cases, it has. We certainly do not claim that modellers 'out there' never make explicit the lead questions underlying and driving their efforts. We do feel confident, however, in stating that in many cases, a lot can be gained in this respect. This is not just based on a 'professional hunch', but also on focused discussions with practitioners on this topic, and on a considerable number of research observations of modelling sessions in the field.
Once the importance of questions as a driving force behind conversations for modelling became clear, we became interested in the structures and mechanisms of question asking. It was a natural choice for us to embed this question in the context of dialogue games, where questions are one of the chief Interactions, following Rules, and directly conveying the goals underlying the assignment to create a Model (see section 2).
Questions are a prominent way of both driving and constraining conversations. They coax people into generating or at least expressing propositions aimed to serve a specific purpose (fulfil an information need), but they are also the chief conversational means by which 'answer space' is conceptually restricted, by setting limits of form (syntax) or meaning (semantics) that the answers have to conform to. As explained in Section 2, modelling languages put a 'semantic-syntactic focus' on the expressions that serve to fulfil the pragmatic goal of a modelling effort. Thus, even the demand or decision to use a modelling language is closely related to the asking of questions, and can be actively guided by them.
In the FoCon approach [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF] (Section 2), only some minimal attention was paid to the subject of 'focus questions'. We now are ready to address this subject in more depth, and head-on.
Structuring Questions and Answers in Dialogue Games
In our ongoing effort to better understand and structure 'dialogue games for modelling', we have developed a number of prototype dialogue games, still mostly in unpublished bachelor's and master's thesis projects (but also see [START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF][START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF], as well as [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]). Recently, these prototypes and studies have explicitly confronted us with questions about question asking. This has led us to define the following heuristic Question Asking Framework (QAF) for coherently combining questions and answers, which is put forward in an integrated fashion for the first time in this paper. The following main concepts are involved:
• Main conceptualization Goal(s) behind the questions to ask (G): pragmatic and possibly also semantic-syntactic goals underlying the creation of the model.
• The Questions to ask (Q): the actual, complete phrases used in asking focus questions within the conversation-for-modelling.
• The Answers, which are the unknown variable: the result to be obtained (A).
• Possibly, Form/Meaning constraints on the answer (F): an intensional description of the properties the answer should have (for example, that it should be stated in a modelling language, or that it should be an 'activity' or 'actor').
• Possibly, one or more Examples (E) of the kind of answer desired: an extensional suggestion for the answer.

While the QAF is by no means a big theoretical achievement, it does provide a good heuristic for the analysis and design of 'question structures' in dialogue games. It is helpful in systematically and completely identifying and phrasing questions and related items (the latter being rather important in view of active facilitation).
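As a minimal illustration of how the five QAF items hang together, the Python sketch below encodes them as one record. The field names and the example content (anticipating the GMB illustration below) are assumptions made for this sketch, not part of the framework itself.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative encoding of the QAF items (G, Q, A, F, E).
@dataclass
class QAFItem:
    goal: str                          # G: pragmatic (and possibly semantic-syntactic) goal
    question: str                      # Q: the complete focus-question phrase
    form: Optional[str] = None         # F: intensional constraint on the answer
    examples: list[str] = field(default_factory=list)          # E: extensional suggestions
    negative_examples: list[str] = field(default_factory=list) # clearly marked unwanted answers
    answer: Optional[str] = None       # A: the unknown variable, filled in during the session

variable_item = QAFItem(
    goal="Identify factors influencing student enrolment, to find possible interventions",
    question="What might influence the number of students enrolling in Computer Science?",
    form="A short nominal phrase (max. four words) describing something easily countable",
    examples=["Number of items produced", "Time spent on preparations"],
    negative_examples=["willingness to cooperate"],
)
```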
Below we will proceed to discuss the concepts of the QAF in more detail, as well as matters of sequence and dynamic context. We will use an explanatory example throughout, taken from our previous work in 'Group Model Building' (GMB), an established form of collaborative modelling in the field of Problem Structuring. Space is lacking here for an elaborate discussion of GMB; we will very briefly provide some information below, but for more we have to refer to [START_REF] Hoppenbrouwers | A Dialogue Game for Analysing Group Model Building: Framing Collaborative Modelling and its Facilitation[END_REF].
Illustration: Group Model Building
GMB is rooted in System Dynamics and involves the collaborative modelling of causal relations and feedback loops. It aims for the shared understanding between participants of the complex influences among system variables in some system (typically, a business situation calling for an intervention). The process of group model building aims to gradually tease out quantitative variables (providing an abstract analysis and representation of the problem focused on), causal relations between the variables (cause-effect, positive and negative), and feedback loops consisting of sets of circularly related variables. For our current purposes, we will only refer to some basic items in GMB, and show how the QAF items can be deployed in this context.
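To make these GMB building blocks tangible, the sketch below represents variables, signed causal relations and a feedback-loop check in Python; the variable names are invented for illustration and do not come from an actual GMB session.

```python
from dataclasses import dataclass
from typing import Literal

# Minimal sketch of the GMB building blocks referred to above.
@dataclass(frozen=True)
class CausalLink:
    cause: str
    effect: str
    polarity: Literal["+", "-"]   # positive or negative causal relation

links = [
    CausalLink("Number of open days", "Number of enrolling students", "+"),
    CausalLink("Number of enrolling students", "Workload of teaching staff", "+"),
    CausalLink("Workload of teaching staff", "Quality of teaching", "-"),
    CausalLink("Quality of teaching", "Number of enrolling students", "+"),  # closes a loop
]

def is_feedback_loop(chain: list[CausalLink]) -> bool:
    """A feedback loop is a set of circularly related variables."""
    return bool(chain) and chain[-1].effect == chain[0].cause and all(
        chain[i].effect == chain[i + 1].cause for i in range(len(chain) - 1)
    )

print(is_feedback_loop(links[1:]))   # True: the last three links form a loop
```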
Goal Questions
As drivers of the modelling session as a whole, Goal Questions can be posed. These should clearly describe the pragmatic goals of the session. Semantic-syntactic goals may in principle also be posed here, but things are more complicated for such goals: whether or not they should be explicitly communicated to the participants depends on whether the participants will or will not be directly confronted with the models and modelling language in the session. If not (and in many approaches, including ours, this is common enough), the semantic-syntactic goal is a covert one that is implicitly woven into the operational focus questions and answer restrictions (i.e. openers) of the Dialogue Games (Sections 4.3 and 4.4). This is in fact one of the main points of the dialogue game approach. We will therefore assume here that the semantic-syntactic goals are not explicitly communicated to the participants, though it is certainly always necessary that the overall semantic-syntactic goals of the modelling effort are established (not communicated) as well as possible and known to the organizers of the session.
Typically, Goal Questions consist of two parts: the main question (of an informative nature), and the intended use that this information will be put to, the purpose. For example:
Main question: "Please describe what factors play a role in increasing the number of students enrolling in the Computer Science curriculum, and how they are related".
Purpose: "This description will be used to identify possible ways of taking action that could solve the problem."
Typically, the main question has a 'WH word' (why, what, how, who, etc.) in it, but this is not a requirement. Clearly formulating the main questions is important, and may be hard in that the question may be difficult to formulate (a language issue), but in principle the main question as an item is straightforward. There may be more than one main question or assignment (for example expressing social goals like 'reach consensus'), but obviously too many questions will blur the pragmatic focus. As for explicitly stating the purpose: as argued in [START_REF] Hoppenbrouwers | Focused Conceptualisation: Framing Questioning and Answering in Model-Oriented Dialogue Games[END_REF], it very much influences the way people conceptualise the model, even at a subconscious level; this is why we advocate including it. Again, it is possible to include more than one purpose here, but this may decrease clarity of focus and can easily reduce the quality of the conceptualisation process and its outcome.
Importantly, main questions and purposes are not reflected in the openers of a Dialogue Game. They give a clear general context for the whole session, i.e. of the entire set of 'minigames' (FoCons) constituting the conversation-for-modelling. The Goal questions should be clearly communicated (if not discussed) before the session starts, and perhaps the participants should be reminded of them occasionally (possibly by displaying them frequently, if not continuously).
Focus Questions: Guiding the Conversation
The focus questions are by nature the most crucial item in the QAF. Without exception, they should be covered by at least one opener in their dialogue game, meaning that they are explicitly available as an interaction type to at least one type of participant (role) in at least one dialogue game. In most cases, focus questions will be posed by the facilitator; whether or not they can also be asked by other participants depends on the further game design.
We found that it is helpful to explicitly distinguish two parts of focus questions: the question part, and the topic part. Questions, for example "What might influence …?" are of course incomplete without also mentioning a (grammatical) object of the sentence: what specific entity or domain the main question is applied to. This may seem trivial, but it is crucial in view of actual 'generation' of questions because the topic part of a focus question is as important as the question part, and is highly context dependent. The topic part may be derived from an answer to a previous question that was given only seconds before. Also, the topic part is typically much more context-dependent with respect to terminology: whereas the question part phrasing may be generically useful in diverse contexts (fields, enterprises, departments; situations) the topic will require accurate knowledge of the way participants talk about their enterprise and refer to bits of it. The set of possible topic descriptions is most safely assumed to be infinite, or at least to be quite unpredictable and situational, and therefore 'open'.
As for the more generic 'question part': here too, many questions (being open questions more often than yes/no questions) will be started off with a phrase including a WH-word (often accompanied by a preposition, as in "for what", "by who", etc.).
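A small sketch of this composition, assuming a template-style mechanism: the generic question parts are reusable across sessions, while the topic parts are filled in from the situational context. The phrasings below are illustrative, not prescribed.

```python
# Illustrative question-part templates; topic parts come from the session context,
# e.g. from an answer given only seconds before.
QUESTION_PARTS = {
    "influences": "What might influence {topic}?",
    "polarity": "Does an increase in {cause} increase or decrease {effect}?",
}

def compose_focus_question(part_key: str, **topic_parts: str) -> str:
    """Combine a reusable question part with context-dependent topic parts."""
    return QUESTION_PARTS[part_key].format(**topic_parts)

print(compose_focus_question("influences", topic="the number of students enrolling"))
print(compose_focus_question("polarity",
                             cause="the number of open days",
                             effect="the number of students enrolling"))
```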
Clearly many questions are possible, but we do believe that for a particular set of topics/dialogue game types, their number is limited ('closed' sets seem possible, at least at a practical level). Points of view reflected by questions can be based on many different concepts and sources, for example:
• Meta-models (the syntax of a modelling language may dictate, for example, that every 'variable' should be at least a 'cause' or an 'effect' of another variable; causal relations should be marked as either positive (+) or negative (-), and so on)
• Aspects of enterprise systems (e.g. following the Zachman framework: why-how-what-who-where-when combined with the contextual-conceptual-logical-physical-detailed 'levels')
• Methods (e.g. questions based on intervention methods: brainstorming, categorizing, prioritizing, and so on)
• The classic 'current system' versus 'system-to-be' distinction

In fact, it is largely through the asking of focus questions that participants make explicit how they look at and conceptually structure the domains and systems under scrutiny, and also it is the way their 'world view' is imposed upon the conversation, and on other participants.
For all the QAF items, but for the focus questions in particular, great care must be taken that they are phrased clearly and above all understandably in view of the participants' capacities, skills, and expertise [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF]. This requires quite a high level of language awareness, proficiency and instinct on the part of, at least, the facilitator. Standard questions (or partial questions), that may have been tested and improved throughout a number of games, may offer some foothold here, but one must also be very much aware that question phrasings fit for one situation may be less appropriate and effective for others.
Forms: Constraining the Answer
Forms are the conceptual frames (in both the syntactic and the semantic sense) in which the answers are to be 'slotted'. The term refers to the 'form' (shape, structure) of the answer but also, and perhaps even more so, to the type of form that needs 'filling in' (template). Importantly, it is possible that the Form is in fact not there, meaning that in such cases the Goal and Focus questions do all the constraining. However, in particular in cases when some conceptual constraint (modelling language) is involved, offering a Form can be extremely helpful. If indeed we deal with collaborative 'modelling' (instead of, for example, 'decision making' or 'authoring' or 'brainstorming'), some conceptual constraining by means of some structured language seems as good as mandatory, by definition. Yet this does not mean such restricting Forms should necessarily accompany all focus questions: it is quite possible that in earlier phases of conceptualization, no strict form constraint is imposed, but that such constraint is introduced only as the effort is driven home to its end goals. Thus, some (sub) DGs may include Forms, while others may not.
In the basic Dialogue Game designs we have discussed so far, 'answer-openers' are provided that restrict the answer textually, as in "I propose the following variable: …". However, more advanced types of interfacing have always been foreseen here in addition to the basic opener [START_REF] Hoppenbrouwers | Exploring Dialogue Games for Collaborative Modeling, in E-Collaboration Technologies and Organizational Performance: Current and Future Trends[END_REF], for example the use of GUI-like forms [START_REF] Hoppenbrouwers | A Dialogue Game Prototype for FCO-IM, in On the Move to Meaningful Internet Systems[END_REF], and even interactive visualizations (simple diagrams). In principle, we can include good old 'model diagram drawing' here as well, though admittedly this does not fit in too well with our general FoCon approach and the verbal nature of conversations. Yet in the end, our credo is: whatever works, works.
Checking and enforcement of form-conform answering can be implemented in degrees. Below we suggest some (increasing) levels of forms checking:
• Unrestricted except by goal and focus questions
• Mere textual constraint (e.g. by using simple openers)
• Using typed fields for individual words
• Using typed fields and checking the fields syntactically
• Using typed fields and checking the fields semantically
• Offering a limited set of (checked) choices

Note that such checking/enforcing mechanisms are of course already well known in common information- and database system interfaces and functionality (data integrity checks, etc.) and in various kinds of advanced model and specification editors.
In addition to offering template-like forms, we found that it is a good idea to add some explicit verbal description and explanation of the conceptual constraints, for example: "A 'variable' is described as a short nominal phrase, preferably of no more than four words, describing something that causes changes in the problem variable, or is affected by such changes. Variables should concern things that are easily countable, usually a 'number' or 'quantity' of something".
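As a sketch of how the checking levels listed above might be implemented for this 'variable' constraint, consider the Python functions below; the concrete rules (the four-word limit, the countability hints) are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative checks for answers proposing a GMB 'variable', at increasing strictness.
def check_textual(answer: str) -> bool:
    """Mere textual constraint: the opener supplies the frame, any non-empty text passes."""
    return bool(answer.strip())

def check_syntactic(answer: str) -> bool:
    """Syntactic check on a typed field: a short phrase of at most four words."""
    return check_textual(answer) and len(answer.split()) <= 4

COUNTABILITY_HINTS = ("number of", "amount of", "time spent", "quantity of")

def check_semantic(answer: str) -> bool:
    """Semantic check: the variable should concern something easily countable."""
    return check_syntactic(answer) and answer.lower().startswith(COUNTABILITY_HINTS)

print(check_semantic("Number of rejections"))       # True
print(check_semantic("Willingness to cooperate"))   # False: not easily countable
```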
A final note on openers: while in this section we focused on conceptually constrained answer-openers, in view of Dialogue Games at large it is important to realize that more generic, conversation-oriented openers can be used alongside Forms, e.g. "I don't think that is a good idea, because …", "I really don't know what to say here", "I like that proposition because …", and so on. This makes it possible to blend discussion items and highly constrained/focused content items. Based on our experience with and observations of real life modelling sessions, such a blend is required to mirror and support the typical nature and structure of conversations-for-modelling. Given that a chat-like interface and log is present underlying the whole modelling process, advanced interfacing can still produce chat entries (automatically generated) while conversational entries can be more directly and manually entered in the chat.
Auxiliary Examples of Answers
The last QAF item is perhaps the least crucial one, and certainly an optional one, but still it can be of considerable help in effectively communicating constraints on answers. Examples of answers are complementary to Forms, where in logical terms Examples offer more of a (partial) 'extensional definition' than the 'intensional definition' which can be associated with Forms. In addition, it is possible to provide some (clearly marked!) negative examples: answers that are not wanted.
Generally it seems to work well enough to give examples that are illustrative rather than totally accurate. For example, 'variables' in GMB need to be quantifiable, i.e. should concern 'things that can be easily counted' (a phrasing typically used in constraining answers suggesting variables). Positive examples for 'variables' thus could be:

• "Number of items produced"
• "Time spent on preparations"
• "Number of kilometres travelled"
• "Number of rejections recorded"

whereas negative examples could be:

• NOT "willingness to cooperate"
• NOT "liberty to choose alternatives"
• NOT "aggressive feelings towards authority"

The need for the use of Examples varies. In general, they will be most useful when participants are confronted with some Question-Form combination for the first time, leaving them somewhat puzzled and insecure. Experience shows that it is often recommendable to remove examples as soon as 'the coin drops', but to keep them close at hand in case confusion strikes again.
Dynamic Sets and Sequences of Questions
When analysing, describing and supporting structured processes, it is always tempting to picture them as deterministic flows. As reported in [START_REF] Ssebuggwawo | Interactions, Goals and Rules in a Collaborative Modelling Session[END_REF][START_REF] Hoppenbrouwers | Method Engineering as Game Design: an Emerging HCI Perspective on Methods and CASE Tools[END_REF], actual dialogue structures are far too unpredictable to capture by such means, switching often between various foci and modes. This is one of the main reasons why we have opted for a rule-based, game-like approach from the start. However, this does not mean that modelling sessions and dialogue games are wholly unstructured. There certainly can be a logic behind them, reflecting the way they work towards their Goals in a rational fashion (often by means of interrelated sub-goals). Our way out of this is indeed to define a number of complementary FoCons (DGs) that cover all 'interaction modes' to be expected in a particular modelling session. The participants, and especially the facilitator, are then free (to a greater or lesser degree) to choose when they go to which DG, and thus also in which order. However, there may be some input required to start a certain FoCon; for example, in GMB it is no use trying to determine the nature of a feedback loop if its variables have not been adequately defined. Thus, a simple logic does present itself. In our experience, this logic is best operationalized by the plain mechanism of preconditions on DGs, making them available (or not) given the presence of some minimal information they need as 'input'. In addition, the facilitator has an important role to play in switching between DGs: determining both when to switch, and where to jump to. The definition of heuristics or even rules for making such decisions is a main interest for future research. Besides the simple input-based logic mentioned above, we expect that other aspects and best practices will be involved here, but we cannot put our finger on them yet.
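The precondition mechanism can be pictured with a small sketch, assuming a simple session state and invented game names; in this Python example, a dialogue game is offered to the facilitator only when its minimal input is present.

```python
# Sketch of input-based preconditions on dialogue games (names and rules are illustrative).
session_state = {
    "variables": ["Number of enrolling students", "Number of open days"],
    "causal_links": [],
}

DG_PRECONDITIONS = {
    "elicit_variables": lambda s: True,                          # always available
    "relate_variables": lambda s: len(s["variables"]) >= 2,      # needs variables as input
    "identify_feedback_loops": lambda s: len(s["causal_links"]) >= 2,
}

def available_games(state: dict) -> list[str]:
    """The facilitator chooses when to switch, but only among games whose preconditions hold."""
    return [name for name, precondition in DG_PRECONDITIONS.items() if precondition(state)]

print(available_games(session_state))   # ['elicit_variables', 'relate_variables']
```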
The above implies that the sequence in which questions are to be asked cannot be predicted, nor does it need to be. Which questions are asked in which order is determined by:
• Which questions are part of a particular DG (with some specific focus)
• In what order the questions are asked within that DG, which depends on active question choosing by the facilitator, but equally so on the highly unpredictable conversational actions taken by the participants
• In what order the session jumps from one DG to another, as mostly determined by the facilitator

In this sense, a modelling session has the character of a semi-structured interview rather than that of a structured one.
Finally we consider the challenge of generating, dynamically and on the spot, the detailed content of each question item during a series of interrelated DGs. We believe that in many cases, a manageable number of basic interaction modes can be discerned beforehand, i.e. in the preparatory phase of organized modelling sessions, and perhaps even as part of a stable 'way of working' in some organizational context. Thus, DGs can be designed, including:
• the question parts of Focus Questions
• the Forms
• the Examples

However, this excludes some more context-dependent items:

• both the main question and the purpose parts of the Goal questions
• the topic parts of the Focus Questions

These items will have to be formulated for and even during every specific DG. Some of them may be predictable, since they may be based on specific information about the domain available before the session is initiated. However, a (large) part of the domain-specific information may emerge directly from the actual session, and also 'previously available information' may change because of this. The main question and the purpose parts of the Goal questions at least can be determined in preparation of a particular session, typically in project context [START_REF] Hoppenbrouwers | Stakeholder Communication, in Agile Service Development -Combining Adaptive Methods and Flexible Solutions[END_REF][START_REF] Zoet | An Agile way of Working, in Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF], and will usually remain pretty stable during a modelling session. This leaves the topic parts of the Focus Questions: what topic the individual, opener-born question phrasings are applied to.
As discussed in Section 4.3, such topic phrasings are highly context specific. If they are to be inserted on the spot by facilitators or other participants in an unsupported environment, they will demand a lot from the domain awareness and language capacity of those involved. Fortunately, such capacity is usually quite well developed, and the task is challenging but not unfeasible -as has often been shown in practice. Yet let us also consider tool support. If partially automated support is involved (DGs as a case of collaboration technology [START_REF] Hoppenbrouwers | From Dialogue Games to m-ThinkLets: Overview and Synthesis of a Collaborative Modeling Approach[END_REF]), close interaction will be required between the question generator and the structured repository of information available so far. Needless to say this poses rather high demands on accessibility, performance, and well-structuredness of such a repository. Yet not in all cases will generation of questions (based on the knowledge repository) be fully automatic: in many cases, the facilitator or other participants may be offered a choice from the (limited) number of items relevant in the present context.
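A sketch of such supported topic selection, under the assumption of a simple in-memory repository: the tool proposes a limited set of context-relevant topics (here, variables not yet involved in any causal relation) from which the facilitator can choose.

```python
# Illustrative repository of what has been modelled so far; layout and selection rule are assumptions.
repository = {
    "variables": ["Number of enrolling students", "Number of open days", "Marketing budget"],
    "causal_links": [("Number of open days", "Number of enrolling students")],
}

def topic_candidates(repo: dict) -> list[str]:
    """Suggest variables that do not yet take part in any causal relation."""
    linked = {v for link in repo["causal_links"] for v in link}
    return [v for v in repo["variables"] if v not in linked]

for topic in topic_candidates(repository):
    print(f"What might influence {topic}, or be influenced by it?")
```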
Conclusion and Further Research
We have presented a discussion of a number of issues with respect to 'asking questions' in the context of collaborative modelling sessions in enterprise engineering. Central in this discussion was the Question Asking Framework (QAF), a heuristic construct whose concepts can help the analysis and design of question-related aspects of (in particular) highly focused sub-conversations. Our discussion was set against the background of 'Dialogue Games for modelling'. The findings have already been used, to a greater or lesser extent, in the design of prototype Dialogue Games in various modelling contexts.
We are now collaborating with two industrial parties who have taken up the challenge of bringing the Dialogue Game idea to life in real projects. We work towards the creation of a reasonably coherent set of support modules that enable the rapid development and evolution of Dialogue Games for many different purposes and situations, involving a number of different flavours of modelling - both in view of the modelling languages and techniques involved, and of the style and setup of collaboration [START_REF] Lankhorst | Agile Service Development: Combining Adaptive Methods and Flexible Solutions[END_REF]. Most of the ideas and concepts put forward in this paper have already played a role in design sessions, in which they turned out to be extremely helpful. Together with other concepts from the Dialogue Game approach, they enabled us to create a good and clear focus for talking about modelling sessions in a highly specific, support-oriented way. While further validation of the presented concepts certainly needs to be pursued in the near future, we do claim that a first reality check and operational validation has in fact been performed, with satisfactory results. Among many possible topics for further research, we mention some interesting ones:
• Effective capturing of generic rules for facilitation in DGs
• Decision making for jumping between DGs
• Optimal ways of communicating rules, goals, assignments and directives in DGs
• Interactive use of advanced visualisations blended with chat-like dialogues
• Limitations and advantages of on-line, distributed collaborative modelling using DGs
• Using DGs in system maintenance and as an extension of helpdesks
• Making intelligent suggestions based on design and interaction patterns and using AI techniques
• Automatically generating questions and guiding statements for use in DGs, based on natural language generation and advanced HCI techniques
Fig. 1. Concepts of the heuristic Question Asking Framework (QAF)

In Fig. 1, we show the basic concepts plus an informal indication (the arrows) of how the elements of the QAF are related in view of a generative route from Goal to Answer: based on the pragmatic and possibly also the semantic-syntactic goal of the effort at hand, a set of questions are to be asked. For each Question, and also very much dependent on its Goal, auxiliary means are both intensional (F) and extensional (E) descriptions of the sort of answer fulfilling Q. Combinations of Q, F and E should lead to A: the eventual Answer (which as such is out of scope of the framework).
Fig. 2. Example of a causal loop diagram resulting from a GMB dialogue game
Acknowledgements
We are grateful for early contributions made to the ideas presented in this paper by Niels Braakensiek, Jan Vogels, Jodocus Deunk, and Christiaan Hillen. Also thanks to Wim van Stokkum, Theodoor van Dongen, and Erik van de Ven.
"1003526"
] | [
"348023",
"300856"
] |
01484387 | en | [
"shs",
"info"
Julia Kaidalova
email: julia.kaidalova@jth.hj.se
Ulf Seigerroth
email: ulf.seigerroth@jth.hj.se
Tomasz Kaczmarek
email: t.kaczmarek@kie.ue.poznan.pl
Nikolay Shilov
Practical Challenges of Enterprise Modeling in the light of Business and IT Alignment
Keywords: Enterprise Modeling, Business and IT Alignment, EM practical challenges
The need to reduce the gap between organizational context and technology within an enterprise has been recognized and discussed by both researchers and practitioners. In order to solve this problem it is required to capture and analyze both business and IT dimensions of enterprise operation. In this regard, Enterprise Modeling is currently considered a widely used and powerful tool that enables and facilitates alignment of business with IT. The central role in the EM process is played by the EM practitioner - a person who facilitates and drives an EM project towards successful achievement of its goals. Conducting EM is a highly collaborative and nontrivial process that requires considerable skills and experience, since there are various challenges to manage and to deal with during the whole EM project. Despite a quite wide range of related research, the question of EM challenges needs further investigation, in particular concerning the viewpoint of EM practitioners. Thus, the purpose of this paper is to identify challenges that EM practitioners usually face during their modeling efforts, taking into consideration the potential influence of these challenges on the successful conduct of EM and on the alignment of Business and IT thereafter.
Introduction
Successful business management in a dynamically evolving environment demands considerable agility and flexibility from decision makers in order to remain competitive. As a part of business changes and business redesign, there is also a need to have a clear understanding of the current way of business operation. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF] argue that Enterprise Modeling (EM) is one of the most powerful and widely used means that meets both types of needs. They mark out two general purposes that EM can be used for. The first purpose is business development, for example, development of business vision and strategies, business operations redesign, development of the supporting information systems, whereas the second one is ensuring business quality, for example, knowledge sharing about business or some aspect of business operation, or decision-making.
EM is a process for creating enterprise models that represent different aspects of enterprise operation, for example, goals, strategies, needs [START_REF] Stirna | Integrating Agile Modeling with Participative Enterprise Modeling[END_REF]. The ability of enterprise models to depict and represent an enterprise from several perspectives to provide a multidimensional understanding makes EM a powerful tool that can also be used for Business and IT alignment (BITA) [START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF][START_REF] Wegmann | Business and IT Alignment with SEAM[END_REF]. In general, the problem of BITA has received great attention from both practitioners and researchers [START_REF] Chan | IT alignment: what have we learned[END_REF]; [START_REF] Luftman | Key issues for IT executives[END_REF]. This branch of EM focuses on the gap between the organizational context and technology (information systems in particular) that is pervasive in organization operations and provides a backbone as well as communication means for realizing the organization goals. Particularly, in the domain of modeling, similar calls for alignment of information systems and business emerged within various modeling efforts [START_REF] Grant | Strategic alignment and enterprise systems implementation: the case of Metalco[END_REF][START_REF] Holland | A Framework for Understanding Success and Failure in Enterprise Resource Planning System Implementation[END_REF][START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF].
EM is usually a participative and collaborative process, where various points of view are considered and consolidated [START_REF] Stirna | Integrating Agile Modeling with Participative Enterprise Modeling[END_REF]. The two parties of EM are participants from the enterprise itself and the EM practitioner (or facilitator) who leads the modeling session(s). The first group of stakeholders consists of enterprise employees who have to share and exchange their knowledge about enterprise operations (domain knowledge). There are various factors that can hinder the process of sharing knowledge between enterprise members; for example, as the project progresses the enterprise becomes less interested in allocating their most knowledgeable human resources to modeling sessions, since it can be considered a waste of time (Barjis, 2007). The second party of EM is the EM practitioner - a person who facilitates and drives the EM project process (partly or fully) towards effectively achieving its goals [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF]. This role is responsible for making sure that the project resources are used properly in order to achieve the goals of the project and to complete the project on time (ibid; [START_REF] Rosemann | Four facets of a process modeling facilitator[END_REF]). Thus, the EM practitioner needs to have considerable experience and a broad range of knowledge regarding EM execution, since various problems and challenges occur both during execution of EM sessions and follow-up stages of EM [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF].
The need for documentation guidelines related to EM has been revealed and highlighted by several researchers, cf. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF]. Identification of factors that can hinder successful application of EM can be considered as one aspect of such guidelines. Several researchers have claimed that there is a need to investigate challenging factors as an important component of EM practice (Bandara et al., 2006; [START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF][START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]). This has surfaced the need to investigate factors that are considered as challenging from the viewpoint of EM practitioners. In particular, it is interesting to identify challenges that EM practitioners are facing during both EM sessions and the follow-up stages of an EM project. Identification and description of these challenges can serve as a considerable help for EM practitioners, which can facilitate successful accomplishment of an EM project and in turn support BITA within the modeled enterprise. The research question of the paper is therefore defined as follows.
What challenges do enterprise modeling practitioners face during EM?

The rest of the paper is structured in the following way: Section 2 presents related research, Section 3 describes the research method that has been applied to address the research question, and in Section 4 and Section 5 results are presented. The paper then ends with conclusions and a discussion of future work in Section 6.
Related Research
A need to deal with a gap between organizational context and technology within the enterprise has been recognized and discussed by the research community for quite some time [START_REF] Orlikowski | An improvisational model for change management: the case of groupware[END_REF]. Several researchers have emphasized the need to capture dimensions of both business and IT during design and implementation of IS (i.e. cf. [START_REF] Gibson | IT-enabled Business Change: An Approach to Understanding and Managing Risk[END_REF]). In this respect, EM serves as a widely used and effective practice, because of the core capability of enterprise models to capture different aspects of enterprise operation. Thus, EM currently gets more and more recognition as a tool that can be used for alignment of business with IT [START_REF] Seigerroth | Enterprise Modelling and Enterprise Architecture: the constituents of transformation and alignment of Business and IT[END_REF].
Performing EM successfully is a nontrivial task that requires considerable skills and experience, since there are various issues to manage and to deal with during the whole EM project [START_REF] Stirna | Participative Enterprise Modeling: Experiences and Recommendations[END_REF]. Among core challenges of EM, [START_REF] Barjis | Collaborative, Participative and Interactive Enterprise Modeling[END_REF] highlights the complex sociotechnical nature of an enterprise and conflicting descriptions of the business given by different actors. [START_REF] Indulska | Business Process Modeling: Current Issues and Future Challenges[END_REF] present the work that is dedicated to current issues and future challenges of business process modeling with regard to three points of view: academics, practitioners, and tool vendors. The main findings of their work are two lists with top ten items: current business process modeling issues and future business process modeling challenges. They also mention a number of areas that attract the attention of practitioners, but still have not been considered by academics, for example, value of business process modeling, expectations management and others. [START_REF] Delen | Integrated modeling: the key to holistic understanding of the enterprise[END_REF] investigates the challenges of EM and identifies four challenges with regard to the decision maker's point of view: heterogeneous methods and tools, model correlation, representation extensibility, and enterprise model compiling.
Another research effort that investigates the question of EM challenges is presented by [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]. Their work identifies four challenges of EM, which will serve as a basis for our work. The first challenge is Degree of formalism. There are different modeling notations (from formal machine-interpretable languages to very informal rich pictures). The expressivity of the selected formalism impacts the final model. The second one is Degree of detail. It is the problem of deciding how many things need to be put into a model at different layers of EM in order to describe a certain situation. The third challenge is Accuracy of the view. It is the challenge of selecting a point of view during modeling. The fourth one is Change and model dependencies. This challenge refers to the fact that modeling is usually done in a constantly changing environment. Models should direct the change in the enterprise, but models also undergo changes. In multi-layered modeling, a change at one layer of the model might have consequences on other layers, and can reflect the change that the enterprise undergoes.
Apart from that, there are several research directions that we consider as related research; below we present three of them. The first is practical guidelines for performing EM. Guidelines are always created in response to challenges and problematic issues that arise during practical activities; therefore, it is possible to get an idea about EM challenges by looking at practical guidelines for performing EM. The second research direction is facets and competence of the EM practitioner, which focuses on key factors that determine the competence of the EM practitioner and highlights, first and foremost, the core questions that the EM practitioner is supposed to solve. The third related research direction is EM critical success factors, which focuses on the identification of factors that are crucial for the success of EM efforts. Since a significant part of EM efforts is done by the EM practitioner, it is possible to get an idea about EM challenges based on EM critical success factors. A combined overview of these related research directions provided us with a broad foundation regarding potential EM challenges. It helped us in further stages of the research, including the construction of interview questions and the conducting of interviews with respondents.
Practical guidelines to perform EM
There are several papers introducing different kinds of guidelines for carrying out EM. [START_REF] Stirna | Participative Enterprise Modeling: Experiences and Recommendations[END_REF] describe a set of experiences related to applying EM in different organizational contexts, after which they present a set of generic principles for applying participative EM. Their work marks out five high-level recommendations for using participative EM: assess the organizational context, assess the problem at hand, assign roles in the modeling process, acquire resources for the project in general and for preparation efforts in particular, and conduct modeling sessions. [START_REF] Stirna | Anti-patterns as a Means of Focusing on Critical Quality Aspects in Enterprise Modeling[END_REF] introduce guidelines for carrying out EM in the form of anti-patterns: common and reoccurring pitfalls of EM projects. The presented anti-patterns address three aspects of EM: the modeling product, the modeling process, and the modeling tool support. For example, the second group consists of anti-patterns such as "everybody is a facilitator", "the facilitator acts as domain expert", "concept dump" and others. The group addressing EM tool support contains, among others, "everyone embraces a new tool".
Facets and Competence of Enterprise Modeling Practitioner
The significance of the EM practitioner role for the overall success of an EM project is acknowledged and discussed by several researchers. Among others, [START_REF] Persson | Towards Defining a Competence Profile for the Enterprise Modeling Practitioner[END_REF] have presented a work that analyses competence needs for the EM practitioner with regard to different steps in the EM process. They consider that the EM process consists of the following activities: project inception and planning, conducting modeling sessions, and delivering a result that can be used for a subsequent implementation project. The two main competence areas identified are competences related to modeling (the ability to model; the ability to facilitate a modeling session) and competences related to managing EM projects (for example, the ability to select an appropriate EM approach and tailor it to fit the situation at hand; the ability to interview involved domain experts).
Another view on the competence of the EM practitioner is presented by [START_REF] Rosemann | Four facets of a process modeling facilitator[END_REF]. They argue that the key role of the modeling facilitator has not been researched so far, and present a framework that describes four facets (the driving engineer, the driving artist, the catalyzing engineer, and the catalyzing artist) that can be used by the EM practitioner.
Critical success factors
Critical success factors within the context of EM research can be defined as key factors that ensure that the modeling project progresses effectively and completes successfully [START_REF] Bandara | Factors and Measures of Business Process Modelling: Model Building Through a Multiple Case Study[END_REF]. [START_REF] Bandara | Factors and Measures of Business Process Modelling: Model Building Through a Multiple Case Study[END_REF] divide critical success factors of business process modeling into two groups: project-specific factors (stakeholder participation, management support, information resources, project management, modeler experience) and modeling-related factors (modeling methodology, modeling language, modeling tool).
The work of [START_REF] Rosemann | Critical Success Factors of Process Modelling for Enterprise Systems[END_REF] identifies the factors that influence process modeling success. Among them they mention: modeling methodology, modeling management, user participation, and management support.
Research Method
A general overview of the research path is presented in Figure 1. As a basis for the present work we have used the work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF], which is dedicated to multi-layered EM and its challenges in BITA. Our study started with the design of interviews that could fulfill two purposes: to validate the EM challenges preliminarily presented by [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF] and to identify other EM challenges. It is important to mention that both kinds of challenges were to be identified considering their potential influence on successful EM execution and, in turn, on the subsequent alignment of business and IT.
Interview design
In order to identify the practical challenges that EM practitioners face, it was decided to conduct semi-structured interviews. This kind of empirical research strategy is able to provide in-depth insight into the practice of EM and, what is even more important, it allows steering respondents in a desired direction in order to receive rich and detailed feedback.
The interview questions consisted of two parts that could support the investigation of EM challenges that have a potential to influence the BITA of the modeled enterprise: questions with the purpose of identifying challenges that EM practitioners face, and questions with the purpose of validating the preliminary set of EM challenges (identified in [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]). In combination, these two groups of questions were supposed to provide a comprehensive and integral picture of EM practical challenges. The questions were constructed in such a way that it was possible to identify challenges in both direct and indirect ways. Apart from the few examples below, the full list of questions can be accessed for download 1.
The first part of the interviews was intended to disclose the most significant challenges that respondents face during EM. In order to carry out this part of the interview, we designed a set of direct questions. The second group of questions had the particular intention of validating the preliminary set of EM challenges; this group included both direct and indirect questions. For example, the validation of the Degree of formalism challenge was done with the help of a direct question about how the respondents consider the degree of formalism, and a number of indirect questions. Having these two types of questions helped us to look into the real state of affairs instead of just checking it superficially. It should be noted that during further analysis, answers to both direct and indirect questions were taken into account when judging whether a respondent acknowledged one or another challenge. In other words, a challenge was considered as acknowledged by a particular respondent even if he/she acknowledged it only in answers to indirect questions.
The final question of the interviews was designed in such a way that we could conclude the discussion and get a filtered and condensed view on EM practical challenges. The intention here was to make the respondents reconsider and rank the challenges that they had just mentioned, so that it was possible to see which of those they consider the most important.
Selection of respondents
Since we chose interviews as the empirical method of our work, a significant part of the work was dedicated to choosing the right respondents. It was important to find people with considerable EM experience within SMEs. Finally, four respondents with 10-16 years of EM experience were chosen. The chosen EM practitioners have mostly been working with SMEs within Sweden: Respondent 1 (Managing Partner, Skye AB), Respondent 2 (Test Manager at The Swedish Board of Agriculture), Respondent 3 (Senior Enterprise Architect at Enterprise Design, Ferrologic AB), and Respondent 4 (Senior Business Consultant at the Department for Enterprise Design, Ferrologic AB).
Conduct of interviews
The interviews started with a preliminary stage during which the respondents were provided with a brief description of the previously identified EM challenges (from the work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF]). This stage had the goal of starting and facilitating further discussion by either admitting or denying the identified challenges. It also served as a warm-up that opened the main part of the interview, which came right after. The rest of the interviews consisted of a discussion of the prepared questions in a very open-ended manner. In other words, the respondents were able to build their answers and argumentation quite freely and unconstrained; however, the prepared interview questions served as a directive frame for our conversation.
Analysis of interview data and results generation
The interviews were recorded and analyzed afterwards. During the analysis of the interview data, our goal was to detect all challenges that were mentioned by the interview respondents and, what is even more important, to group the detected challenges logically. This was done by documenting the mentioned challenges in a structured manner and putting challenges that were related to each other into one coherent category. Thus, it was possible to generate the main part of the results: a set of conceptually structured EM practical challenges. Moreover, we could introduce another part of the results, namely general recommendations for dealing with the presented challenges. However, it is important to make a clear differentiation between the two deliverables of the present study, since the way of obtaining the general structure of EM practical challenges (analysis of interview data as such) differs from the way of obtaining general recommendations for dealing with those challenges (analysis of the generated challenges taking into consideration the interview data). The results of the interview study are presented in the next section.
Results of Interview Study
As would be expected, EM practitioners face various challenges during EM. Several statements of the respondents, which stressed that capturing the political and human aspects of an enterprise is the most difficult part of modeling (Respondent 1, Respondent 3), helped us to identify two central activities that unite these challenges (cf. Figure 2 below):
1. Extracting information about the enterprise
2. Transforming information into enterprise models
Fig. 2. Two challenging activities of EM.
Thus, it was possible to distinguish the first challenging activity, which is the extraction of information about the enterprise by the EM practitioner, from the second one, which is the further transformation of this information into enterprise models. Interestingly enough, two out of four respondents strongly emphasized the importance and complexity of the first activity, not the second one.
According to Respondent 3, all those challenges together are much smaller than the challenges related to getting the right information. Respondent 4 stressed that people-related issues are underestimated: "It is people that we are working with. We create models, we build models and we can be very specific about relations between them, but that is just technical stuff. The important thing is to get people [...]" (Respondent 4).
Below we present a detailed description of the challenges that have been identified. In order to generate the presented items we considered and, where possible, grouped all challenges that were mentioned by the interview respondents. The statements of the interview respondents that we relied on when identifying and generating EM challenges are available for download 2.
Challenges that are related to extracting information about enterprise
This group includes challenges that the EM practitioner faces while obtaining information about enterprise operation during EM workshops and other fact-finding activities.
Right information
This challenge is related to the fact that it is usually quite problematic to get information that is really relevant for solving a particular modeling problem. According to our respondents, quite often they need to be very persistent and astute while communicating with the enterprise in order to make people share their knowledge about enterprise operation. Often this leads to a situation where the EM practitioner finally has too much information, with different degrees of validity and accuracy. The answers also indicate the problem of fuzziness of information, white spots that the participants do not know about, and possible inaccuracies in the information obtained from them. This might pose a challenge for modeling, which typically requires accurate, complete and clear information.
Group dynamic and human behavior
Another challenge is that the EM practitioner is supposed to deal with a group of people that have various tempers, models of behavior and, what is even more important, relations between them. This undoubtedly leads to a unique group dynamic that has to be considered and controlled by the EM practitioner in order to steer modeling sessions efficiently.
Shared language and terminology
During an EM project, different stakeholders usually have different backgrounds and consequently different understandings of the terms used and the relations between these terms. This leads to various problems during EM sessions when stakeholders use different names to address the same concept or, on the contrary, the same names when talking about totally different things. In addition, in some cases employees of an enterprise use some unique terminology that the EM practitioner is not familiar with, so the EM practitioner needs to adapt in-flight. All these factors lead to a strong need to create shared terminology between project stakeholders in order to create a common ground for efficient communication.
The purpose of EM and roles of stakeholders within it
One of the most problematic issues during an EM project is to make the project stakeholders understand the essence of EM as such, since in most cases they are familiar neither with the executive details of EM nor with the idea of EM in general. Clarification might include different aspects: general enlightenment about the purposes and goals of the EM project; description of the roles and relevant responsibilities that different stakeholders are supposed to have within the EM project, together with a description of the EM practitioner role; explanation of the key capabilities of enterprise models, for example, the difference between enterprise models and other representative artifacts.
Challenges that are related to transforming information into enterprise models
This area includes challenges that the EM practitioner faces while transforming information about enterprise operation into enterprise models. In contrast to the process of obtaining information, this process mostly does not involve collaboration of the EM practitioner with other stakeholders. It is a process of creating enterprise models in some tangible or intangible form, so that it will be possible to use them further.
Degree of formalism
This challenge is related to the degree of formalism that is supposed to be used during the whole EM project, since existing modeling notations vary from very formal, machine-interpretable languages to very informal ones with quite rich pictures (where the EM practitioner decides how to document different kinds of findings). On the one hand, it is preferable to use a quite formal notation, since in this way enterprise models can be used and reused further, even in other projects. However, using a formal notation with some stakeholders can hinder the process of modeling, since they might become overloaded and stressed by describing enterprise operations in a way that is too formal for them. Thus, the choice of the degree of formalism is a quite challenging task that the EM practitioner is supposed to solve.
Degree of detail
This challenge is about how many details each layer of an enterprise model should have. The degree of detail can be high (which includes plenty of details within the model) or low (which gives a quite general view on enterprise operation). On the one hand, it is important to describe enterprise operation with a high degree of detail, so that it is possible to see as many elements and interactions between them as possible. However, sometimes it is crucial to have a general view on enterprise functioning, since stakeholders, on the contrary, are interested in a rather overall view of it. Thus, the challenge is to keep in the enterprise model only the important and required details.
Modeling perspective
This is the challenge of selecting a point of view during EM. Certainly, enterprise models are able to represent various views on enterprise functioning, which makes them indispensable for dealing with different views of stakeholders and with different aspects of enterprise operation. However, in some cases it can be problematic to understand the consequences of adopting a certain point of view at one layer of modeling. In addition, it might not be easy to see how this point of view at one layer will affect other layers.
Change and model dependencies
This challenge is related to the fact that EM is always done in a constantly changing environment, which causes the need to keep track of coming changes and update models accordingly. In multi-layered EM it can be quite problematic to keep track of the influence of a model change at one layer on models at other layers. Some tools enable automatic fulfillment of this task, whereas others do not have such a capability.
Scope of the area for investigation
This challenge is related to limiting the scope of interest during EM. On the one hand, it is important to have a rather broad overview of enterprise functioning, since it can provide a comprehensive and clear view of all actors and cause-effect relationships that take place within the modeled enterprise. However, having a very broad view can hinder efficient EM, since in this case the EM practitioner needs to analyze an enormous amount of information instead of focusing on the most problematic areas. Thus, it can be quite problematic to define the scope of investigation properly.
Overall conceptual structure of EM practical challenges
Taking into consideration the interview findings and the previous work of [START_REF] Kaczmarek | Multi-layered enterprise modeling and its challenges in business and IT alignment[END_REF], it was possible to build a conceptual structure of the challenges that EM practitioners face. With the help of the interviews it was possible to reveal a general conceptual distinction between two challenging areas of EM, which is why it was reasonable to divide EM challenges into two groups. The first group consists of challenges that are related to the extraction of information about the enterprise, i.e. extract the right information, manage group dynamic and human behavior, use shared language and terminology, and clarify the purpose of EM and the roles of stakeholders within it. The second group consists of challenges that are related to transforming the extracted information into models, i.e. choose the degree of formalism, choose the degree of detail, adapt the modeling perspective, keep track of change and model dependencies, and define and stick to the scope of the area for investigation (see Figure 3 below).
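For quick reference, the structure can also be written down in a compact, machine-readable form. The following minimal sketch is our own illustration; the nested-dictionary representation and the variable name are assumptions, and only the group and challenge names come from the study.

```python
# Illustrative representation of the conceptual structure of EM practical
# challenges identified in the interview study (two groups of challenges).
em_challenges = {
    "extracting information about the enterprise": [
        "right information",
        "group dynamic and human behavior",
        "shared language and terminology",
        "the purpose of EM and roles of stakeholders within it",
    ],
    "transforming information into enterprise models": [
        "degree of formalism",
        "degree of detail",
        "modeling perspective",
        "change and model dependencies",
        "scope of the area for investigation",
    ],
}

# Print the structure as an indented list.
for group, challenges in em_challenges.items():
    print(group)
    for challenge in challenges:
        print("  -", challenge)
```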
General Recommendations to Deal with EM Practical Challenges
In Section 4 we presented the results of the interview study, which concluded with building a conceptual structure of EM practical challenges. Afterwards we analyzed the created structure of EM challenges, while keeping in mind the views and opinions of the interview respondents; therefore it was also possible to generate a number of general recommendations that can help EM practitioners to cope with the identified EM challenges (see Table 1 below). The general recommendations have been generated taking into consideration the opinions of the interview respondents. For example, recommendation R15 ("Keep the balance between readability of the model and functionality of it depending on the given [...]") has been formulated considering statements of Respondent 1 and Respondent 3: "Sometimes you end up in a need to decide what would be the best: to create good graphical representation or to create sound and valid model. In some cases customers want to generate code from the model, so if the model is inconsistent they definitely get problems with their code generation." (Respondent 3); "The problem when you make the model in formal way is that, when you try to describe it, you can really get in trouble with communication" (Respondent 1). Another example is recommendation R23, which can deal with the challenge of defining the scope of the area for investigation. It has been formulated considering statements of Respondent 1 and Respondent 2: "We need to know what we should do and to focus on that." (Respondent 1); "If you have a problem and stakeholders think it lies in this area, it is not enough to look at that area, because you need larger picture to really understand the problem. That is why you always need to look at a bigger area in the beginning to get a total picture. It is important that you do not go too [...]" (Respondent 2).
Conclusions and Future Work
The need to successfully conduct EM in order to align business and IT is acknowledged and discussed, and thus the practical challenges of EM are turning out to be an important aspect to investigate. The main purpose of this work was to identify the challenges that EM practitioners face during EM. Correspondingly, the main finding of the work is a set of conceptually structured practical challenges of EM. It includes two groups of challenges that take place within EM: extracting information that is related to enterprise operation and transforming this information into models. The challenges discovered within the first activity are right information, group dynamic and human behavior, shared language and terminology, and the purpose of EM and roles of stakeholders within it. The second group involves the following challenges: degree of formalism, degree of detail, modeling perspective, change and model dependencies, and scope of the area for investigation. Moreover, the work introduced a number of general recommendations that can help EM practitioners to deal with the identified challenges.
From a practical point of view, the presented challenges and general recommendations can be considered as supportive guidelines for EM practitioners, which, in turn, can facilitate successful EM execution and subsequently ensure BITA. From a scientific point of view, the identified challenges and general recommendations contribute to the areas of EM practical challenges and documented guidelines for conducting EM, which, in a broad sense, makes an input to the question of successful EM execution and, correspondingly, to the question of BITA.
The study has several limitations, which we plan to address in future research. One of them is related to the fact that the data collected at this stage of the study was limited to the Swedish context. We plan to validate the results also for other regions. An important aspect of future work is therefore to elaborate the created conceptual structure of EM challenges into a comprehensive framework with the help of solid empirical contributions from international EM practitioners, since it is interesting to get a broader picture of EM practical challenges taking international modeling experience into consideration. Second, it would be useful to validate the results obtained from our initial group of practitioners with a larger, more diverse group. This is also subject of our future work. Another aspect that should be considered in the future is the enhancement of the recommendations for dealing with EM challenges.
Fig. 1. General research path: the work of Kaczmarek et al. (2012) on multi-layered enterprise modeling and its challenges in business and IT alignment served as input to the interview design (aimed at validation of the preliminarily identified EM challenges and identification of new EM challenges), followed by the selection of respondents (EM practitioners with significant experience of modeling with SMEs), the conduct of interviews, the analysis of data from the interviews, and results generation (the overall conceptual structure of EM practical challenges and general recommendations to deal with the identified challenges).
Fig. 3. Overall conceptual structure of EM practical challenges.
The next stage included the selection of respondents, after which it was possible to conduct the interviews. The collected empirical data was then analyzed, after which it was possible to generate the results in order to answer the research question.
Table 1. EM challenges and general recommendations to deal with them
Challenge area: Extracting enterprise-related information
- Right information: R1. Capture what stakeholders know for sure, not what they believe is true. R2. Build the group of participants for a modeling session from people with relevant knowledge and suitable social skills.
- Group dynamic and human behavior: R3. Make everyone involved. R4. Work with session participants as a group. R5. Avoid working with too large groups of participants during EM sessions. R6. Make sure that you are solving the right task that is given by the right people.
- Shared language and terminology: R7. Conduct some kind of education (for example, a warm-up introduction at the start of modeling sessions). R8. Depending on the audience, ground your explanation on literature, experiences from previous projects or even on [...]
Challenge area: Transforming information into enterprise models
- Degree of detail: Lift the focus if models are unnecessarily detailed. R17. It is usually reasonable to work with different degrees of detail, since often it is important to see the business on different levels. R18. When communicating with participants it is usually reasonable to step up from the current level of detail and start asking the WHY question instead of the HOW question. R19. Define the degree of detail at the initial stage of EM taking into consideration the goals and purpose of the EM project.
- Scope of the area for investigation: At the initial stage of EM look at a larger area than what stakeholders are describing, however, stay focused on the identified problematic areas during further stages.
http://hem.hj.se/~kaijul/PoEM2012/
http://hem.hj.se/~kaijul/PoEM2012/
Acknowledgements
This work was conducted in the context of the COBIT collaboration project, which is financed by the Swedish Foundation for International Cooperation in Research and Higher Education. COBIT is an international collaboration project between Jönköping University (Sweden), Poznan University of Economics (Poland) and St. Petersburg Institute for Informatics and Automation (Russia).
We acknowledge Kurt Sandkuhl, Karl Hammar and Banafsheh Khademhosseinieh for their valuable advice and interesting conceptual discussions during the process of writing this paper.
"1003527",
"1003528",
"1003529",
"992762"
] | [
"452135",
"452135",
"300731",
"471046"
] |
01484389 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484389/file/978-3-642-34549-4_5_Chapter.pdf | Ilia Bider
Erik Perjons
email: perjons@dsv.su.se
Mturi Elias
Untangling the Dynamic Structure of an Enterprise by Applying a Fractal Approach to Business Processes
Keywords: Business Process, Enterprise Modeling, Fractal Enterprise
A promising approach for analyzing and designing an enterprise is to consider it as a complex adaptive system (CAS) able to self-adjust to the changes in the environment. An important part of designing a CAS model is to untangle the dynamic structure of an enterprise. This paper presents a procedure for identifying all processes that exist in an enterprise as well as their interconnections. The procedure makes use of a number of process-assets and asset-processes archetypes. The first ones help to find out what assets are needed for a particular process, the second ones help to find out supporting processes that are needed to have each type of assets ready available for deployment. The procedure is based on the ideas of fractal organization where the same pattern is repeated on different levels. The uncovered dynamic structure of an enterprise can support strategic planning, change management, as well as discovering and preventing misbalances between its business processes. The paper also presents an example of applying the procedure to research activities of a university.
Introduction
One of the main characteristics of the environment in which a modern enterprise functions is its high dynamism due to globalization and speedy technological progress. To survive and grow in the dynamic environment with global competition for customers, capital and skilled workforce, a modern enterprise should be able to quickly adapt itself to changes in the environment, which includes using opportunities these changes offer for launching new products and services.
This new enterprise environment has already attracted attention of researchers who started to consider an enterprise as a complex adaptive system (CAS) able to selfadjust to the changes in the environment [START_REF] Piciocchi | Managing Change in Fractal Enterprises and IS Architectures from a Viable Systems Perspective[END_REF][START_REF] Valente | Demystifying the struggles of private sector paradigmatic change: Business as an agent in a complex adaptive system[END_REF][START_REF] Engler | Modeling an Innovation Ecosystem with Adaptive Agents[END_REF][START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF]. The long-term goal of our research project is to create a practical methodology for modeling an enterprise as a multilayered CAS capable of self-adaptation without centralized planning mechanism. Building such a model requires finding interconnections between various components of the enterprise. Such interconnections should allow efficient information exchange between the layers so that changes in various parts of the enterprise environment are promptly discovered and dealt with. The objective of having such a model is to help an enterprise to better understand its existing structure so that it could be fully exploited and/or improved.
In the short term, our research is currently focused on getting answers to the following two interconnected questions:
• How to find all processes that exist in an enterprise? This is not a trivial matter, as only the most visible processes catch the attention of management and consultants. These processes represent only the tip of the iceberg of what exists in the enterprise in half-documented, or in totally undocumented form (tacit knowledge).
• What types of interconnections exist between different business processes and how they can be represented in an enterprise model? The answer is needed to get a holistic view on the enterprise processes which is one of the objectives of having an enterprise model.
Besides helping to achieve our long terms goals, such answers, if found, have their own practical application. Without knowing all business processes and their interconnections, it is difficult to plan any improvement, or radical change. Changes introduced in some processes without adjusting the associated processes may have undesirable negative consequences. Having a map of all processes and their connections could help to avoid such situations.
This paper is devoted to finding answers to the above two questions. This is done based on the enterprise model from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF] that represents an enterprise as consisting of three types of components: assets (e.g., people, infrastructure, equipment, etc.), sensors and business process instances. The working hypothesis, when answering the questions above, is that the processes and their relationships can be uncovered via the following procedure. One starts with the visible part of the iceberg, so-called main processes. Here, as main we count processes that produce value for which some of the enterprise external stakeholders are ready to pay, e.g., customers of a private enterprise, or a local government paying for services provided to the public. Typical examples of main processes are hard (e.g., a computer) or soft (e.g., software system) product manufacturing, or service delivery (e.g., educational process at a university). When the main processes are identified, one proceeds "under water" following up assets that are needed to run the main processes. Each assets type requires a package of so-called supporting processes to have the corresponding assets in "working order" waiting to be deployed in the process instances of the main process. To supporting processes belong, for example, human resources (HR) processes (e.g., hiring or retiring members of staff) that insure the enterprise having right people to be engaged in its main processes.
To convert the working hypothesis above into a procedure that could be used in practice, we introduce:
• Process-assets archetypes (patterns) that help to find out what assets are needed for a particular process, especially for a main process from which we start unwinding, • Assets-processes archetypes (patterns) that help to find out supporting processes that are needed to have each type of assets ready available for deployment.
Having these archetypes/patterns will help us to unveil the dynamic process structure of an enterprise, starting from the main process and going downwards via the repeating pattern "a main process -> its assets -> processes for each asset -> assets for each process -> …". As a result we will get an indefinite tree consisting of the same types of elements. Such structures are known in the scientific literature under the name of fractal structures [START_REF] Mcqueen | Physics and fractal structures[END_REF].
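As an illustration of this repeating pattern, the following sketch shows how the unwinding could be expressed as a bounded recursive walk. It is our own illustration and not part of the cited model; all type, function, and constant names are assumptions, and it simplifies matters by applying the main-process asset list at every level, whereas Section 3.3 drops paying stakeholders for supporting processes.

```python
from dataclasses import dataclass, field
from typing import List

# Asset types of the process-assets archetype for main processes (Section 3.1).
ASSET_TYPES = [
    "paying stakeholders", "business process template", "workforce",
    "partners", "technical and informational infrastructure",
    "organizational infrastructure",
]

# Process types of the asset-processes archetype (Section 3.2).
SUPPORTING_PROCESS_TYPES = ["acquire", "maintain", "retire"]

@dataclass
class Node:
    kind: str                                   # "process" or "asset"
    label: str
    children: List["Node"] = field(default_factory=list)

def unwind(process_label: str, depth: int) -> Node:
    """Unfold the fractal tree: process -> assets -> supporting processes -> ..."""
    process = Node("process", process_label)
    if depth == 0:
        return process
    for asset_type in ASSET_TYPES:
        asset = Node("asset", f"{asset_type} of '{process_label}'")
        process.children.append(asset)
        for p_type in SUPPORTING_PROCESS_TYPES:
            asset.children.append(unwind(f"{p_type} {asset.label}", depth - 1))
    return process

# One level of unwinding for a main process.
tree = unwind("Research project", depth=1)
```

Even one level of unwinding yields 6 x 3 = 18 supporting processes around a single main process, which illustrates why the harnessing mechanisms discussed in Section 3.4 are needed.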
Based on the deliberations above, the goal of this paper is to introduce the processassets and asset-processes archetypes/patterns, and show how to use them in practice to untangle the dynamic structure of an enterprise. The example we use for the latter is from the academic world. We start from one of the main processes -research project -in the university world and unwind it according to the procedure outlined above. The example was chosen based on the authors having their own experience of this process type as well as easy access to the expertise of the colleagues. The chosen example does not mean that the procedure is applicable only to the university world. When discussing the archetypes, we will give examples from other types of enterprises as well.
The research presented in the paper is done in the frame of the design science paradigm [START_REF] Peffers | Design Science Research Methodology for Information Systems Research[END_REF][START_REF] Bider | Design science research as movement between individual and generic situation-problem-solution spaces[END_REF]. The goal of such kind of research is finding and testing a generic solution [START_REF] Bider | Design science research as movement between individual and generic situation-problem-solution spaces[END_REF], or artifact in terms of [START_REF] Peffers | Design Science Research Methodology for Information Systems Research[END_REF], for a class of practical problems. The archetypes and procedure of using them suggested in the paper constitutes a design science artifact for getting an answer for the two main questions discussed. Though most of the concepts used in building this artifact are not new, the artifact itself, which is the main contribution of the paper, as a whole is new and original. In addition, we do not know any research work specifically devoted to finding answers to the questions above. So our solution, even if not perfect, can be used in practice until a better one could be found.
The rest of the paper is structured in the following way. In Section 2, we present an overview of our three-layered enterprise model from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF]. In Section 3, we discuss process and assets archetypes (patterns). In section 4, we apply these patterns to unwind parts of the dynamical structure of a university. In Section 5, we discuss some related works. Section 6 discusses the results achieved and plans for the future.
The Assets-Sensors-Processes Model of an Enterprise
Our starting point is a systemic approach to enterprise modeling from [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF]. We consider an enterprise as a system that reacts to different situations constantly emerging in its environment or inside itself in order to maintain the balance between itself and the environment, or within itself. An emerging situation is dealt with by creating a respondent system [START_REF] Lawson | A Journey Through the Systems Landscape[END_REF] that is disbanded after the situation has been dealt with. The respondent system is built from the assets that the larger system already has. Some of these assets are people, or other actors (e.g., robots). Other assets are control elements, e.g., policy documents, which define the behavior of the respondent system.
To deal with emerging situations effectively, an enterprise creates templates for the majority of known types of situations. Such a template is known under different names, like project template, business process definition, business process type, or business process model. We will refer to it as to Business Process Template (BPT). BPT contains two parts:
1. Start conditions that describe a situation which warrants creation of a respondent system 2. Execution rules that describe a composition and behavior of a respondent system A respondent system created according to the BPT template has different names, e.g., a project or a case. We will refer to such a system as to Business Process Instance (BPI).
Note that BPTs can exist in an organization in an explicit or implicit form, or a combination of both. Explicit BPTs can exist as written documents (e.g. employee handbooks or position descriptions), as process diagrams, or built into computerized systems that support running BPIs according to the given BPTs. Implicit BPTs are in the heads of the people engaged in BPIs that follow the given BPTs. These BPTs belong to what is called tacit knowledge.
Based on the systemic view above, we consider an enterprise as consisting of three types of components, assets, sensors and BPIs, depicted in Fig. 1 and explained below:
1. Assets - everything the enterprise has at its disposal from which respondent systems can be built, e.g.:
─ People with their knowledge and practical experiences, beliefs, culture, sets of values, etc.
─ Physical artifacts - computers, telephone lines, production lines, etc.
─ Organizational artifacts, formal as well as informal - departments, teams, networks, roles, etc.
─ Information artifacts - policy documents, manuals, business process templates (BPTs), etc. To information artifacts belong both written (documented) artifacts and tacit artifacts - the ones that are imprinted in people's heads (e.g., culture)
The assets are relatively static, which means that by themselves they cannot change anything. Assets are activated when they are included in the other two types of components. Assets themselves can be changed by other types of components when the assets are set in motion for achieving some goals. Note that assets here are not regarded in pure mechanical terms. All "soft" assets, like sense of common goals, degree of collaborativeness, shared vision, etc., belong to the organizational assets. Note also that having organizational artifacts does not imply a traditional function oriented structure. Any kind of informal network or resource oriented structural units are considered as organizational artifacts.
2. Sensors - a set of (sub)systems, the goal of which is to watch the state of the enterprise itself and its environment and catch impulses and changes (trends) that require the firing of BPIs of certain types. We need a sensor (which might be a distributed one) for each BPT. The work of a sensor is governed by the Start Conditions of the BPT description (which is an informational artifact). A sensor can be fully automatic for some processes (e.g., an order placed by a customer in a web-based shop), or require human participation to detect changes in the system or its surroundings.
3. BPIs - a set of respondent systems initiated by sensors for reaching certain goals and disbanded when these goals are achieved. The behavior of a BPI system is governed by the Execution Rules of the corresponding BPT. Dependent on the type, BPIs can lead to changes being made in the assets layer. New people are hired or fired, departments are reorganized, roles are changed, new policies are adopted, BPT descriptions are changed, new BPTs are introduced, and obsolete ones are removed.
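A minimal sketch of how these three component types could be captured in software is given below. It is our own illustration (class, attribute, and example names are assumptions) and only mirrors the roles described above: a sensor governed by the Start Conditions of a BPT fires BPIs whose behavior is governed by its Execution Rules.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class BPT:
    """Business Process Template: start conditions plus execution rules."""
    name: str
    start_condition: Callable[[dict], bool]  # when a new BPI should be fired
    execution_rules: List[str]               # simplified here as textual rules

@dataclass
class BPI:
    """Business Process Instance: a respondent system built from available assets."""
    template: BPT
    assets_employed: List[str]
    goal_reached: bool = False

class Sensor:
    """Watches the state of the enterprise and its environment for one BPT."""
    def __init__(self, template: BPT):
        self.template = template

    def watch(self, state: dict, available_assets: List[str]) -> Optional[BPI]:
        if self.template.start_condition(state):
            return BPI(self.template, assets_employed=list(available_assets))
        return None

# Example: a fully automatic sensor for orders placed in a web-based shop.
order_bpt = BPT(
    name="handle customer order",
    start_condition=lambda state: state.get("new_order", False),
    execution_rules=["pick goods", "ship", "invoice"],
)
sensor = Sensor(order_bpt)
bpi = sensor.watch({"new_order": True}, ["warehouse staff", "ERP system"])
```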
Process-Assets and Asset-Processes Archetypes
In [START_REF] Bider | Modeling an Agile Enterprise: Reconciling Systems and Process Thinking[END_REF], we have discussed several types of interrelationships between the components of an enterprise overviewed in the previous section, namely:
1. Sensors and BPIs use assets to complete their mission: to discover the need to fire a BPI (in the case of a sensor), or to attain a goal (in the case of a BPI).
2. BPIs can change the assets.
3. A sensor, as well as a BPI, can be recursively decomposed using the assets-sensors-processes model of Fig. 1.
In this paper, we concentrate only on the first two types of relationships between the components of the enterprise, leaving the third type, process decomposition, outside the scope of this paper. In other words, we will not be discussing any details of the internal structure of processes, focusing only on what types of assets are needed for running process instances of a certain type and in what way process instances can affect the assets.
The Process-Assets Archetype for Main Processes
We consider as enterprise any organization the operational activities of which are financed by external stakeholders. It can, for example, be a private company that gets money for its operational activities from the customers, a head office of an interest organization that gets money from the members, or a public office that gets money from the taxpaying citizens or inhabitants. We consider a main (or core) process to be a process that produces value to the enterprise's external stakeholders for which they are willing to pay. Our definition of the term main (or core) process may not be the same as those of others [START_REF] Hammer | How Process Enterprises Really Work[END_REF][START_REF] Scheer | ARIS -Business Process Modeling[END_REF]. For example, we consider as main processes neither sales and marketing processes, nor product development processes in a product manufacturing company. However, our definition of the main process does cover processes of producing and delivering products and services for external stakeholders, which is in correspondence with other definitions of main processes [START_REF] Hammer | How Process Enterprises Really Work[END_REF][START_REF] Scheer | ARIS -Business Process Modeling[END_REF].
Main processes are the vehicles for generating money for operational activities. To get a constant cash flow, an enterprise needs to ensure that new business process instances (BPIs) of main processes are started with some frequency. To ensure that each started BPI can be successfully finished, the enterprise needs to have assets ready to be employed so that a new BPI gets enough of them when started. We consider that any main process requires the following six types of assets (see also Fig. 2 and 3):
1. Paying stakeholders. Examples: customers of a private enterprise, members of an interest organization, local or central government paying for services provided for the public.1 2. Business Process Templates (BPTs). Examples are as follows. For a production process in a manufacturing company, BPT includes product design and design of a technological line to produce the product. For a software development company that provides customer-built software, BPT includes a software methodology (project template) according to which their systems development is conducted. For a service provider, BPT is a template for service delivery. 3. Workforce -people trained and qualified for employment in the main process.
Examples: workers at the conveyor belt, physicians, researchers. 4. Partners. Examples: suppliers of parts in a manufacturing process, a lab that complete medical tests on behalf of a hospital. Partners can be other enterprises or individuals, e.g., retired workers that can be hired in case there is temporal lack of skilled workforce to be engaged in a particular process instance. 5. Technical and Informational Infrastructure -equipment required for running the main process. Examples: production lines, computers, communication lines, buildings, software systems etc. 6. Organizational Infrastructure. Examples: management, departments, teams, policies regulating areas of responsibilities and behavior.
Below we give some additional clarification on the list of assets above.
• The order in which the asset types are listed is arbitrary, and does not reflect the importance of assets of a given type; all of them are equally important. • Our notion of asset does not coincide with the one accepted in the world of finance [START_REF] Elliott | Financial Accounting and Reporting[END_REF]. Except the technical infrastructure, all assets listed above belong to the category of so-called intangible assets of the finance world. Intangible assets usually lack physical substance and their value is difficult to calculate in financial terms. Technical infrastructure belongs to the category of fixed (i.e., having physical substance) tangible (i.e., the value of which is possible to calculate in financial terms) assets.
• All of the following three types of assets - paying stakeholders, skilled workforce, and partners - belong to the category of stakeholders. We differentiate them by the role they play in the main business processes. Paying stakeholders, e.g., customers, pay for the value produced in the frame of process instances. The workforce directly participates in the process instances and gets compensation for this participation (e.g., in the form of salary). Partners provide the process with resources needed for process instances to run smoothly, e.g., electricity (power provider), money (banks or other types of investors), parts, etc. Partners get compensation for their products and services in the form of payment, profit sharing, etc.
Fig. 2. The process-assets archetype for main processes
Fig. 3. An example of instantiation of the process-assets archetype for main processes
The type of process (main) together with the types of assets required for running it constitutes a process-assets archetype2 for main processes. Graphically it is depicted in the form of Fig. 2, in which the process type is represented by an oval and asset types by rectangles. An arrow from the process to an asset shows the need to have this type of asset in order to successfully run process instances of the given type. A label on an arrow shows the type of asset. Instantiation of the archetype is done by inserting labels inside the oval and rectangles. Fig. 3 is an example of such an instantiation for a product manufacturing process.
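For illustration, an instantiation like the one in Fig. 3 could also be written down as a simple mapping from asset type to concrete asset. The sketch below is our own; only the process name, the paying stakeholders (customers), the BPT (product & technological process design) and the examples mentioned in the text are grounded, while the remaining values are hypothetical placeholders.

```python
# Hypothetical machine-readable form of the instantiation in Fig. 3.
product_manufacturing = {
    "process": "product manufacturing",
    "assets": {
        "paying stakeholders": "customers",
        "business process template": "product & technological process design",
        "workforce": "workers at the conveyor belt",          # example from the text
        "partners": "suppliers of parts",                      # example from the text
        "technical and informational infrastructure": "production lines",  # placeholder
        "organizational infrastructure": "management, departments, policies",  # placeholder
    },
}
```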
3.2
The Asset-Processes Archetype
In Section 3.1 we introduced six types of assets that are needed to ensure that BPIs of a main process run smoothly and with the required frequency. Each asset type requires a package of supporting processes to ensure that it is in a condition ready to be employed in BPIs of the main process. We present this package as consisting of three types of processes connected to the life-cycle of each individual asset (see also an example in Fig. 4):
1. Acquire -processes that result in the enterprise acquiring a new asset of a given type. The essence of this process depends on the type of asset, the type of the main process and the type of the enterprise. For a product-oriented enterprise, acquiring new customers (paying stakeholders) is done through marketing and sales processes. Acquiring skilled work force is a task completed inside a recruiting process. Acquiring a new BPT for a product-oriented enterprise is a task of new product and new technological process development. Creating a new BPT also results in introducing a new process in the enterprise. 2. Maintain -processes that help to keep existing assets in right shape to be employable in the BPIs of a given type. For customers, it could be Customer Relationship Management (CRM) processes. For workforce, it could be training.
For BPT, it could be product and process improvement. For technical infrastructure, it could be service. 3. Retire -processes that phase out assets that no longer can be used in the main process. For customers, it could be discontinuing serving a customer that is no longer profitable. For BPTs, it could be phasing out a product that no longer satisfies the customer needs. For workforce, it could be actual retirement.
Fig. 4. An example of instantiation of the asset archetype
The asset-processes archetype can be graphically presented in the form of Fig. 4. In it, the asset type is represented by a rectangle, and a process type by an oval. An arrow from the asset to a process shows that this process is aimed at managing assets of the given type. The label on the arrow shows the type of the process - acquire, maintain, or retire. Instantiation of the archetype is done by inserting labels inside the rectangle and ovals. Actually, Fig. 4 is an example of such an instantiation for the customer assets in a manufacturing company (on the difference between archetypes and instantiations, see Fig. 2 and 3 and the related text in Section 3.1).
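In the same hypothetical notation as above, the asset-processes archetype instantiated for the customer asset (cf. Fig. 4) could look as follows; the process names are the examples given in the text.

```python
# Asset-processes archetype instantiated for the customer asset of a
# manufacturing company (cf. Fig. 4): acquire, maintain, retire.
customers_asset = {
    "asset": "customers",
    "processes": {
        "acquire": "marketing and sales",
        "maintain": "customer relationship management (CRM)",
        "retire": "discontinuing serving customers that are no longer profitable",
    },
}
```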
Archetypes for Supporting Processes
Types of assets that are needed for a supporting process can be divided into two categories, general asset types, and specific ones. General types are the same as for the main process, except that a supporting process does not need paying stakeholders.
The other five types of assets needed for a main process: BPT, workforce, partners, technical and informational infrastructure, organizational infrastructure, might be needed for a supporting process as well. Note also that some supporting processes, e.g., servicing a piece of infrastructure, can be totally outsourced to a partner. In this case, only the partner's rectangle will be filled when instantiating the archetype for such a process.
In addition to the five types of assets listed above, other types of assets can be added for a specific category of supporting processes. We have identified two additional assets for supporting processes that acquire an asset belonging to the category of stakeholders, e.g., paying stakeholders, workforce, and partners:
• Value proposition, for example, a description of the products and/or services delivered to the customer, or the salary and other benefits that an employee gets.
• Reputation, for example, of being a reliable vendor, or being a great place to work.
Adding the above two asset types to the five already discussed gives us a new process-assets archetype, i.e., the archetype for acquiring stakeholders. An example of an instantiation of such an archetype is presented in Fig. 5. There might be other specific archetypes for supporting processes, but so far we have not identified any more of them.
Fig. 5. An example of instantiation of the process-assets archetype for acquiring stakeholders
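Continuing the hypothetical notation used above, the archetype for acquiring stakeholders simply adds the two specific asset types. In the instantiation of Fig. 5 they are filled with the product offers and the brand reputation; the other values below are assumed placeholders.

```python
# Hypothetical instantiation of the archetype for acquiring stakeholders (cf. Fig. 5).
acquiring_customers = {
    "process": "acquiring customers",
    "assets": {
        "business process template": "sales process template",        # placeholder
        "workforce": "sales staff",                                    # placeholder
        "partners": "resellers",                                       # placeholder
        "technical and informational infrastructure": "CRM system",    # placeholder
        "organizational infrastructure": "sales department",           # placeholder
        # the two asset types specific to acquiring stakeholders:
        "value proposition": "product offers",
        "reputation": "brand reputation",
    },
}
```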
Harnessing the Growth of the Processes-Assets Tree
Using archetypes introduced above, we can unwind the process structure of the enterprise. Potentially the resulting tree will grow down and in breadth indefinitely.
As an enterprise has a limited size, there should be some mechanisms that contain this growth and, eventually, stop it. We see several mechanisms that harness the growth:
• Some processes, e.g., maintenance of infrastructure, can be outsourced to a partner.
In this case, only the partner part of a corresponding archetype will be filled. • Some processes can share assets, e.g., workforce and BPT. For example, recruiting of staff can be done according to the same template and by the same employees working in the HR department independently whether the recruitment is done for the employees of main or supporting processes.
• Some processes can be used for managing more than one asset. For example, the assets Product offers from Fig. 5 (Value proposition asset) and Product&Technological process design from Fig. 3 (BPT asset) are to be acquired by the same process of New product development. There is too tight interconnection between these two assets so that they cannot be created separately, e.g.:
─ The offers should be attractive to the customer, so the product should satisfy some customer needs ─ The price should be reasonable, so the technological process should be designed to ensure this kind of a price • A process on an upper level of the tree can be employed as a supporting process on the lower level, which terminates the growth from the corresponding node. For example, one of the "supporting" processes for acquiring and maintaining the asset Brand reputation from Fig. 5 is the main production process itself which should provide products of good quality.
Testing the Model
The archetypes introduced in Section 3 were obtained by abstracting known facts about the structure and functioning of a manufacturing company. Therefore, testing the ideas should be done in a different domain. We choose to apply the model to an academic "enterprise", more exactly, we start unwinding the Research project process. The result of applying the process-assets archetype from Fig. 2 to this process is depicted in Fig. 6.
Fig. 6. Instantiation of the process-assets archetype for the main process: Research project
The main difference between Fig. 3, which instantiates product manufacturing, and Fig. 6 is that Research project has financiers rather than customers as paying stakeholders. The result of a research process is new knowledge that is accessible for everybody, but is financed by few, including private donors who might not directly benefit from their payments. Financiers can be of several sorts: • Research agencies giving grants created by local or central governments, or international organizations • Industrial companies that are interested in the development of certain areas of science • Individuals that sponsors research in certain areas Let us consider that a financier is a research agency giving research grants. Then, applying the asset-processes archetype from Section 3.2 to the leftmost node (Financiers) of Fig. 6, we get an instantiation of this archetype depicted in Fig. 7.
Fig. 7. Instantiation of the assets-processes archetype for a financier Research agency
Applying the Acquiring the stakeholders archetype from Section 3.3 to the leftmost node of Fig. 7 (Identifying & pursuing funding opportunities), we will get its instantiation depicted in Fig. 8 (only the first four assets are presented in this figure).
Fig. 8. Instantiation of the Acquiring stakeholders archetype to Identifying and pursuing funding opportunities
We made an experiment of interviewing two research team leaders at our institution based on Fig. 6, 7, and 8. They managed to identify their core research areas and what kind of reputation they use when applying for grants. This took some time, as they did not have explicit answers ready. They also noted that the model helps to better understand the supporting processes around their research work. This experiment, albeit limited, shows that the model can be useful in understanding the dynamic structure of an enterprise. However, more experiments are required to validate the usefulness of our approach.
Related Research
Analysis of enterprises based on the idea of fractality has been done by several researchers and practitioners, e.g., [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF], [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF], [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF]. Their approaches differ from ours, which comes as no surprise, as there is no accepted definition of what fractals mean with respect to the enterprise world. In essence, fractals are a high-level abstract idea of a structure with a recurring (recursive) pattern repeating on all levels. Depending on the perspective chosen for modeling a real-life phenomenon, this pattern will be different for different modelers. Below, due to size limitations, we only briefly summarize the works on fractal structures in enterprise modeling and show the difference between them and our approach.
The book by Hoverstadt [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF] uses the viable system model (VSM) to unfold the fractal structure of the enterprise via the system-subsystem relationship. Subsystems are considered as having the same structure and generic organizational characteristics as the system in which they are enclosed. The resulting structure helps to analyze whether there is a balance between the subsystems. Overall, our long-term goal is similar to Hoverstadt's: to create a methodology for modeling an enterprise as a multilayered complex adaptive system. However, we use a completely different approach to enterprise modeling: instead of system-subsystem relationships, we interleave processes and assets when building an enterprise model.
Another approach to the analysis of enterprise models based on the idea of fractality can be found in Sandkuhl & Kirikova [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF]. The idea is to find fractal structures in an enterprise model built using a general modeling technique. [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] analyzes two such models in order to find fractals in them. The results are mixed: some fractals are found, but the suspicion remains that many others are missed because they may not be represented in the models analyzed. The approach in [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] radically differs from ours. We have a hypothesis of a particular fractal structure to be found when analyzing an enterprise, while [START_REF] Sandkuhl | Analysing Enterprise Models from a Fractal Organisation Perspective -Potentials and Limitations[END_REF] tries to find any type of fractal structure based on the generic characteristics of organizational fractals.
Canavesio and Martinez [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] present a conceptual model for analyzing a fractal company, aiming at supporting a high degree of flexibility to react and adapt quickly to environmental changes. The main concepts are project, resource, goal, actor, plan, and relationships thereof. The approach from [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] differs from ours in the kind of fractals used for enterprise modeling. Fractals from [START_REF] Canavesio | Enterprise modeling of a project-oriented fractal company for SMEs networking[END_REF] concern the detailed structure of business processes, while we look only at the relationships between processes and assets.
A focus on process organization when applying fractal principles can be found in [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF]. [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] uses a pattern of sense-and-respond processes on different organizational levels, each consisting of the same pattern: requirement, execution and delivery. The difference between our approach and that of [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] is the same as mentioned above: [START_REF] Ramanathan | Fractal architecture for the adaptive complex enterprise[END_REF] looks at the details of individual processes, while we try to capture general relationships between different processes.
Discussion and Future Research
This paper suggests a new type of enterprise modeling that connects enterprise processes in a tree-like structure in which the main enterprise processes serve as the root of the tree. The tree expands by finding all assets needed for the smooth functioning of the main processes and, after that, by finding all supporting processes that are needed to manage these assets. The tree has a recursive/fractal form, in which instantiations of process-assets archetypes are interleaved with those of asset-processes archetypes.
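To make the recursive structure described above concrete, the following fragment sketches the interleaving of process and asset nodes as a small data structure. It is only an illustration: the node names, the single level of unfolding and the printing routine are our own assumptions, not part of the model itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    name: str
    managing_processes: List["Process"] = field(default_factory=list)  # supporting processes

@dataclass
class Process:
    name: str
    required_assets: List[Asset] = field(default_factory=list)

# Instantiation of the process-assets archetype for a main process (names are illustrative).
financiers = Asset("Financiers")
workforce = Asset("Research workforce")
main = Process("Research project", required_assets=[financiers, workforce])

# Instantiation of the asset-processes archetype for one asset: the supporting
# processes that acquire and maintain it.
financiers.managing_processes = [
    Process("Identifying & pursuing funding opportunities"),
    Process("Maintaining relationships with research agencies"),
]

def unfold(process: Process, depth: int, indent: str = "") -> None:
    """Print the interleaved process-asset tree down to a given depth."""
    print(f"{indent}[process] {process.name}")
    if depth == 0:
        return
    for asset in process.required_assets:
        print(f"{indent}  [asset] {asset.name}")
        for supporting in asset.managing_processes:
            unfold(supporting, depth - 1, indent + "    ")

unfold(main, depth=1)
```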
We see several practical areas where a model connecting all processes and assets in an enterprise could be applied, e.g.:
• As a help in strategic planning, for finding all branches of the processes-assets tree that require adjustments. For example, when sales plans a new campaign that will bring new customers, all assets required by the corresponding main process should be adjusted to accommodate the larger number of customers. This includes workforce, suppliers, infrastructure, etc. The calculation itself can be done with one of the known Systems Thinking methods, e.g., System Dynamics.
• To prevent "organizational cancer", as described in [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], p. 57, when a supporting process starts behaving as if it were a main one, disturbing the balance of the organizational structure. This is typical for IT departments that may start finding external "customers" for software developed for internal needs.
• As a help in radically changing direction. When all supporting processes are mapped in the tree, it will be easier for the enterprise to change its business activity by picking some supporting processes and converting them into main ones, while making appropriate adjustments to the tree. For example, a product manufacturing company could decide to become an engineering company. Such a decision can be made when manufacturing becomes unprofitable, while the company still has a very strong engineering department. An example of such a transformation is described in [START_REF] Hoverstadt | The Fractal Oragnization: Creating Sustainable Oragnization with the Viable System Model[END_REF], p. 74. Another example comes from the experience of the first author, who worked for a US company that made such a transformation twice. The first transformation was from being a software consulting business to becoming a software product vendor, when the consulting business could not accommodate the existing workforce. The second transformation went in the reverse direction, when the market for the company's line of products suddenly collapsed.
As far as future research is concerned, we plan to continue our work in several directions: • Continuing testing. The model presented in this paper has been tested only in a limited scope, and it requires further testing and elaboration. The next major step in our research is to build a full tree with Research project as a root. This will help us to further elaborate the model, and improve our catalog of process archetypes. Furthermore, we need to test this modeling technique in another domain, for example, to build a model for a software development company.
• Continuing work on the graphical representation of the model. Two aspects need to be covered in this respect:
─ Representing multiplicity, e.g., multiple and different assets of the same kind that require different supporting processes.
─ Representing shared assets and processes in the model, as discussed in Section 3.4.
• Using the processes-assets model as a foundation for modeling and designing an enterprise as a CAS (complex adaptive system). The different processes discovered with the procedure suggested in this paper are connected to different parts of the external and/or internal environment of the enterprise. If the participants of these processes are entrusted to watch and report on changes in their parts of the environment, this could create a set of effective sensors (see Section 2) covering all aspects of the enterprise environment. Connecting these sensors to adaptation processes that they can fire would close the "adaptation loop" (see the sketch after this list). As an example, assume that the recruiting process shows that it is becoming difficult to recruit a skilled workforce for a main process. This fact can fire an investigative process to find out the reason for these difficulties. It could be that nobody is willing to learn such skills any more, that competitors are expanding and offer better conditions (e.g., salary), or that the enterprise's reputation as a good place to work has been shattered. Based on the result of the investigation, appropriate changes can be made in the HR processes themselves or in completely different parts of the enterprise.
• Another application area of our processes-assets model is analyzing and representing process models in a repository. As pointed out in [START_REF] Shahzad | Requirements for a Business Process Model Repository: A Stakeholders' Perspective[END_REF], an attractive alternative to designing business processes from scratch is redesigning existing models. Such an approach requires the use of process model repositories that provide a location for storing and managing process knowledge for future reuse. One of the key challenges of designing such a repository is to develop a method to analyze and represent a collection of related processes [START_REF] Elias | A Business Process Metadata Model for a Process Model Repository[END_REF]. The process-assets and asset-processes archetypes provide a mechanism to analyze and represent the relationships between business processes in a repository. The processes-assets relationship structure, when represented in the repository, will serve as a navigation structure that determines the possible paths for accessing process models by imposing an organized layout on the repository's content.
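As announced in the CAS item above, the fragment below sketches how a supporting process could act as a sensor that fires an investigative (adaptation) process when a monitored indicator crosses a threshold. The indicator, the threshold and the triggering rule are hypothetical; they merely illustrate how the "adaptation loop" could be closed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sensor:
    name: str                           # the supporting process acting as a sensor
    read_indicator: Callable[[], float] # how the monitored value is obtained
    threshold: float
    adaptation_process: Callable[[], None]

    def check(self) -> None:
        value = self.read_indicator()
        if value > self.threshold:
            print(f"{self.name}: indicator {value:.1f} exceeds {self.threshold}, firing adaptation process")
            self.adaptation_process()

# Hypothetical indicator: average number of days needed to fill a vacancy.
def time_to_fill_vacancy() -> float:
    return 95.0  # would normally be measured from the HR system

def investigate_recruiting_difficulties() -> None:
    print("Investigative process started: analyse skills market, competitors' offers, employer reputation")

recruiting_sensor = Sensor(
    name="Recruiting process (sensor role)",
    read_indicator=time_to_fill_vacancy,
    threshold=60.0,
    adaptation_process=investigate_recruiting_difficulties,
)
recruiting_sensor.check()
```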
Fig. 1. An enterprise model consisting of three types of components: assets, sensors and BPIs
In some works, all paying stakeholders are considered as customers. We prefer to differentiate these two terms so as not to be drawn into discussions that are not relevant to the issues of this paper.
In this paper, we use the term archetype in its general meaning of "the original pattern or model of which all things of the same type are representations or copies", and not as a pattern of behavior, as is widely accepted in the Systems Thinking literature.
Acknowledgements
We are grateful to our colleagues, Paul Johannesson, Hercules Dalianis and Jelena Zdravkovic who participated in interviews related to the analysis of the research activity reported in Section 4. We are also thankful to David Alman, Gene Bellinger, Patrick Hoverstadt, Harold Lawson and anonymous reviewers whose comments on the earlier draft of this paper helped us to improve the text.
"1014409",
"1003536",
"1003537"
] | [
"300563",
"300563",
"300563"
] |
01484390 | en | ["shs", "info"] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484390/file/978-3-642-34549-4_7_Chapter.pdf
email: j.gordijn@vu.nl
Ivan Razo-Zapata
email: i.s.razozapata@vu.nl
Pieter De Leenheer
email: pieter.de.leenheer@vu.nl
Roel Wieringa
email: r.j.wieringa@utwente.nl
Challenges in Service Value Network Composition
Keywords: service value network, bundling, composition, e 3 service
Commercial services become increasingly important. Complex bundles of these services can be offered by multiple suppliers in a service value network. The e 3 service ontology proposes a framework for semi-automatically composing such a network. This paper addresses research challenges in service value network composition. As a demonstration of the state of the art, the e 3 service ontology is used. The challenges are explained using an example of an Internet service provider
Introduction
Services comprise a significant part of the economy. For instance, in the USA approximately 81.1% of employees worked in the service industry in 2011.¹
Increasingly, such services are ordered and/or provisioned online. For instance, a cinema ticket can be ordered via the Internet, but the customer still has to travel to the cinema, where the service is delivered. Viewing a film, by contrast, can be ordered and provisioned online. Other examples are an email inbox, web-page hosting, or voice over IP (VOIP). The focus of this paper is on services that can be offered and provisioned online; see also Sect. 2 about the virtual ISP example.
Services are ordered and provisioned in a service value network (SVN) (see e.g. [START_REF] Hamilton | Service value networks: Value, performance and strategy for the services industry[END_REF][START_REF] Christopher | Services Marketing: People, Technology, Strategy[END_REF][START_REF] Allee | A value network approach for modeling and measuring intangibles[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] for SVN and related concepts). At a minimum, an SVN consists of two actors, namely a supplier and a customer. However, in many cases the SVN will consist of multiple suppliers, each offering a service, who together satisfy a complex customer need. The package of services satisfying the complex customer need is called the service bundle. By using multi-supplier service bundles, each supplier can concentrate on its own core competence and can participate in satisfying a complex customer need which it could never satisfy on its own. Moreover, an SVN may contain the suppliers of the suppliers, and so on, until we reach the suppliers for which we can safely assume that their services can be provisioned in a known way.
The observation that an SVN may consist of many suppliers leads to the conclusion that the formation, or composition, of the SVN is a research question in its own right. Specifically, if the customer need is ordered and provisioned online, the composition process should be software-supported and at least semi-automatic. To this end, we introduce the notion of computational services; these are commercial services which are represented in a machine-readable way, so that software can (semi-)automatically reason about the required service bundle and the corresponding suppliers. We employ ontologies (see Sect. 3) for representation and reasoning purposes.
The e3service ontology [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] and its predecessor serviguration [START_REF] Baida | Software-aided Service Bundling -Intelligent Methods & Tools for Graphical Service Modeling[END_REF] are approaches to semi-automatically compose service value networks. The e3service approach takes two perspectives on service composition, namely a customer and a supplier perspective, and tries to generate a multi-supplier service bundle and the corresponding SVN to satisfy a complex customer need. We use the e3service ontology as the baseline for service value network composition.
The e3service ontology is not to be confused with Web service technologies such as SOAP, WSDL and UDDI [START_REF] Curbera | Unraveling the web services web: An introduction to soap, wsdl, and uddi[END_REF]. Whereas the focus of e3service is on the composition of commercial services, SOAP, WSDL and UDDI facilitate interoperability between software services executing on various software and hardware platforms. Nevertheless, commercial services can be (partly) implemented by means of web service technology. After sketching the state of the art of e3service, the contribution of this paper is to explain research challenges with respect to e3service, including potential solution directions. Although the research challenges are described in terms of the e3service work, we believe the challenges themselves are present in a broader context.
To facilitate the discussion, we first create a hypothetical example about a virtual Internet service provider (ISP) (Sect. 2). Thereafter, we discuss the state of the art with respect to e3service, using the virtual ISP example (Sect. 3). Then we briefly state our vision on the composition of SVNs (Sect. 4). Subsequently, we present the research directions (Sect. 5). Finally, we present our conclusions (Sect. 6).
Example: The virtual Internet service provider
To illustrate the capabilities of, and research issues with respect to, e3service, we have constructed a hypothetical educational example about a virtual Internet service provider. This example is inspired by the example in [START_REF] Chmielowiec | Technical challenges in market-driven automated service provisioning[END_REF][START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF].
The virtual Internet service provider example assumes that an end user (the customer) wants to compose an Internet service provider out of elementary services offered by potentially different suppliers. For example, an offered service bundle may include only basic Internet access (the bundle then consists of only one service). In contrast, a service bundle may be complex, such as basic Internet access, an email inbox, an email sending service (e.g. an SMTP service), web page hosting, voice over IP (telephony), a helpdesk, remote disk storage and back-up, and news. All these services can potentially be offered by different suppliers, so that a multi-supplier service bundle emerges. Moreover, some services may be self-services. For example, the helpdesk service may consist of 1st, 2nd and 3rd line support, and the customer performs the 1st line helpdesk by himself.
3 e3service: State of the art
This section summarizes the current state of the art of e3service. For a more detailed discussion, the reader is referred to [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF][START_REF] Ivan | Handbook of Service Description: USDL and its Methods, chapter Service Network Approaches[END_REF] and [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]. Although the identified research challenges exist outside the context of e3service, we take the state of the art of e3service as our point of departure.
Impedance mismatch between customer and supplier
A key problem in the composition of service value networks is the mismatch between the customer need and the service (bundle) offered by the supplier(s). The service bundle may contain several features (later called consequences) which are unwanted by the customer, or the bundle may miss features required by the customer.
Example. The user may want to communicate via text (e.g. email). However, the provider is offering a bundle consisting of email, voice over IP (VoIP), and Internet access. The mismatch lies in the VoIP service, which is not requested by the customer; Internet access, in turn, is a required service needed to enable email and VoIP.
To address this mismatch, e3service proposes two ontologies: (1) the customer ontology and (2) the supplier ontology, including automated reasoning capacity.
Customer ontology
The customer ontology borrows concepts and terminology from marketing (see e.g. [START_REF] Kotler | Marketing Management[END_REF] and [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]). Key notions in the customer ontology are need [START_REF] Kotler | Marketing Management[END_REF][START_REF] Arnd | How broad should the marketing concept be[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF] and consequence [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. According to [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF], a consequence is the result of consuming valuable service outcomes. A need may be specified by various consequences [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. In the current work on e3service (of Razo-Zapata et al., ibid.) we focus mainly on functional consequences. The previous example already illustrates the notions of need and consequence.
Supplier ontology
The supplier ontology is fully integrated with the e3value ontology [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF] and therefore borrows many concepts from it [START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. Key concepts in the e3value ontology are actors who perform value activities [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF]. Actors can exchange things of economic value (value objects) with each other via value transfers [START_REF] Gordijn | Value based requirements engineering: Exploring innovative e-commerce idea[END_REF].
Example. An actor can be an Internet service provider (ISP) who performs the activities of access provisioning, email inbox provisioning and email SMTP relaying, web/HTTP hosting, and more. To other actors (customers), a range of services (in terms of value objects) is offered, amongst others an email inbox, SMTP relay and hosting of web pages.
To be able to connect the supplier ontology with the customer ontology, value objects have consequences too [START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. These consequences are, from an ontological perspective, similar to the consequences identified by the customer ontology. This allows for matching both kinds of consequences. The fact that a value object can have multiple consequences (and vice versa) models the situation in which a customer obtains a value object as a whole (thus with all the consequences it consists of), whereas the customer might be interested in only a subset of these consequences. It is not possible to buy consequences separately, as they are packaged into a value object.
Reasoning support
In [START_REF] Ivan | Fuzzy verification of service value networks[END_REF] different reasoning processes are employed than in [START_REF] Kinderen | Needs-driven service bundling in a multi-supplier setting ? The computational e3service approach[END_REF]. We restrict ourselves to [START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. In [START_REF] Ivan | Fuzzy verification of service value networks[END_REF], reasoning is explained as a Propose-Critique-Modify (PCM) [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF] problem solving method, consisting of the following reasoning steps:
-Propose
  -Laddering: a technique to refine needs in terms of functional consequences [START_REF] Gutman | Laddering theory-analysis and interpretation[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. a complex need (N1) such as Assuring Business Continuity can be expressed in terms of: Data available in case of emergency (FC1), Application available 24/7 (FC2) and Regulatory compliance (FC3).
  -Offering: determination of which functional consequences can be offered by suppliers [23, 22, 21]. E.g. a backup service (S1) can offer FCs such as Data available in case of emergency (FCA), Redundancy (FCB) and Regulatory compliance (FCC), among others.
  -Matching: match customer-desired consequences with supplier-offered consequences [23, 22, 21]. E.g. the customer-desired FC1 can be matched with the supplier-offered FCA, and FC3 with FCC.
  -Bundling: finding multi-supplier service bundles that satisfy the customer need [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Ivan | Dynamic cluster-based service bundling: A value-oriented framework[END_REF]. Bundles may partly satisfy the need, may overlap, or may precisely satisfy the need. E.g. since S1 cannot provide all the customer-desired FCs, an extra service such as remote desktop (S2) offering FC2 can be combined with S1 to generate a solution bundle.
  -Linking: finding additional services needed by the suppliers that provide the service bundle [START_REF] Ivan | Fuzzy verification of service value networks[END_REF][START_REF] Razo-Zapata | Service value networks for competency-driven educational services: A case study[END_REF][START_REF] Gordijn | Generating service valuewebs by hierarchical configuration: An ipr case[END_REF]. E.g. a bundle composed of S1 and S2 might need to resolve dependencies for S2, such as a versioning service that provides the updated O.S. to S2.
-Verify: analysis of provided, missing and non-required functional consequences, and rating of the importance of these consequences using a fuzzy inference system [START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. an SVN providing FC1, FC2 and FC3 will fit the customer-desired consequences better (and will have a higher score) than any SVN providing only FC1 and FC2, or only FC2 and FC3.
-Critique: in case the configuration task is unsuccessful, identification of the source of failure [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. after composing an SVN offering FC1, FC2 and FC3, the customer might realize that FC3 is not relevant for him. In this case the customer can indicate that he would like to get alternative SVNs offering only FC1 and FC2.
-Modify: modify the service network of the service bundle based on the results of the critique step, cf. [START_REF] Balakrishnan Chandrasekaran | Design problem solving: A task analysis[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF]. E.g. based on this output, new SVNs can be composed to better fit the customer-desired consequences.
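To make these reasoning steps more tangible, the fragment below sketches matching, bundling and a simplified verification score for the running example (FC1-FC3, S1, S2). The greedy bundling strategy and the scoring weights are our own simplifications for illustration; the actual e3service reasoner relies on a fuzzy inference system, as described above.

```python
# Customer-desired functional consequences (from the laddering step).
desired = {"FC1: data available in emergency", "FC2: application available 24/7", "FC3: regulatory compliance"}

# Supplier-offered consequences per service (from the offering and matching steps).
offers = {
    "S1: backup service": {"FC1: data available in emergency", "FC3: regulatory compliance"},
    "S2: remote desktop": {"FC2: application available 24/7"},
}

def bundle(desired: set, offers: dict) -> set:
    """Greedy bundling: keep adding the service covering most still-missing consequences."""
    chosen, missing = set(), set(desired)
    while missing:
        best = max(offers, key=lambda s: len(offers[s] & missing), default=None)
        if best is None or not offers[best] & missing:
            break  # no remaining service can cover the missing consequences
        chosen.add(best)
        missing -= offers[best]
    return chosen

def verify(desired: set, chosen: set, offers: dict) -> float:
    """Simplified score: provided consequences count positively, non-required ones negatively."""
    provided = set().union(*(offers[s] for s in chosen)) if chosen else set()
    return len(provided & desired) - 0.5 * len(provided - desired)

svn = bundle(desired, offers)
print("Service bundle:", sorted(svn))
print("Score:", verify(desired, svn, offers))
```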
Vision on composition of service value networks
Our long term vision can be characterized as follows:
-A multi-perspective view on the composition and operation of service value networks. For instance, a business value perspective, a business process perspective and an IT perspective may be relevant. -Integration of the aforementioned perspectives (e.g. cf. [START_REF] Pijpers | Using conceptual models to explore business-ict alignment in networked value constellations case studies from the dutch aviation industry, spanish electricity industry and dutch telecom industry[END_REF]). Together, these perspectives provide a blueprint of the SVN at hand. -Various ways of composing SVNs, for instance hierarchical composition [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF][START_REF] Ivan | Fuzzy verification of service value networks[END_REF] with one party that executes the composition, in contrast to self-organizing composition, in which the participants themselves configure the SVN. -Operationalization of the SVN in terms of processes and supporting IT. In some cases IT can be dominant, as is for instance the case for the virtual ISP example. -Reconfiguration of the SVN. In some cases it is necessary to reconfigure the SVN based on quality monitoring, disappearing actors, etc.
Although the issues described above might seem only applicable to our vision, areas such as Service-oriented Enterprise Architecture also deal with them by aiming at transparently merging business services (commercial services), software services (web services), platform services and infrastructure services (IT architecture) (see e.g. [START_REF] Wegmann | Business-IT Alignment with SEAM for Enterprise Architecture[END_REF]).
Research challenges in service value networks
Terminologies for customer and supplier ontologies may differ
Theme. Ontology.
Description of the challenge. The current e3service approach makes two important assumptions. First, it is assumed that the customer and supplier ontology are linked to each other via a single consequence construct, whereas multiple (e.g. more detailed) customer consequences may map onto one supplier consequence, or vice versa. Second, it is assumed that both the customer and the suppliers use the same terminology for stating the consequences. This challenge therefore supposes, first, that the links between the customer and the suppliers can involve more complex constructs than is the case right now (i.e. just one concept: the consequence). Second, the challenge includes the idea that, given certain constructs to express what is needed and offered, the customer and suppliers can do so using different terminology.
Example. With respect to the first assumption, the virtual ISP example may suppose a global customer consequence 'being online' that maps onto a supplier consequence 'email' + 'internet access'. Considering the second assumption, in the virtual ISP example, the desired customer consequence can be 'communicate via text', whereas the stated supplier consequence can be 'electronic mail'.
Foreseen solution direction. Concerning the consequence as matching construct, the goal is to allow for the composition of consequences into more complex consequence constructs. E.g. various kinds of relationships between consequences can be identified. For instance, in [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF], a consequence can depend on other consequences, and can be in a core/enhancing bundling or an optional bundling relationship with other consequences. This can be extended with composition relationships.
With respect to the use of different terminologies, in [START_REF] Pedrinaci | Toward the next wave of services: Linked services for the web of data[END_REF][START_REF] Bizer | Linked Data -The Story So Far[END_REF] a solution is proposed to match various functionalities, expressed in different terminologies, in the context of web services. This kind of solution may also be of use for commercial services that are expressed in different terminologies.
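A minimal illustration of how differing customer and supplier vocabularies could be bridged is sketched below. The equivalence table is purely hypothetical; in practice such mappings would be derived from shared vocabularies or linked-data resources, as suggested above.

```python
# Hypothetical alignment table between customer phrasing and supplier phrasing of consequences.
EQUIVALENT_TERMS = {
    "communicate via text": {"electronic mail", "instant messaging"},
    "being online": {"internet access"},
}

def matches(customer_consequence: str, supplier_consequence: str) -> bool:
    """True if the supplier-offered consequence satisfies the customer-desired one."""
    if customer_consequence == supplier_consequence:
        return True
    return supplier_consequence in EQUIVALENT_TERMS.get(customer_consequence, set())

print(matches("communicate via text", "electronic mail"))  # True
print(matches("communicate via text", "voice over IP"))    # False
```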
The notion of consequence is too high-level a construct
Theme. Ontology.
Description of the challenge. Currently, the e3service ontology matches customer needs with supplier service offerings via the notion of consequence. In [START_REF] Kinderen | Reasoning about customer needs in multi-supplier ict service bundles using decision models[END_REF], a distinction is made between functional consequences and quality consequences. However, a more detailed structuring of the notion of consequence would be useful. It is, for instance, possible to distinguish various quality consequences, such as timely provisioning of the service, stability of the service supplier, etc.
Example. In case of the virtual ISP example it is possible that a supplier offers Internet access and another supplier offers VoIP (via the offered Internet access). In such a case, it is important that Internet access has sufficient quality for the VoIP service. In this context, quality can be stated by the bandwidth and latency of the network connection, which should be sufficient to carry the VoIP connection.
Foreseen solution direction. An ontology of both functional consequences and quality consequences should be developed. Functional consequences are highly domain dependent, but for quality consequences, theories on software quality can be of use, as well as SERVQUAL [START_REF] Parasuraman | A conceptual model of service quality and its implicationt[END_REF], a theory on quality properties of commercial services. Finally, the Unified Service Description Language (USDL)² [5] may be a source.
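Once quality consequences are structured, the kind of quality reasoning mentioned in the example above could be checked automatically, as sketched below. The bandwidth and latency figures are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class QualityConsequence:
    bandwidth_kbps: float  # sustained bandwidth offered/required
    latency_ms: float      # maximum one-way latency offered/required

def supports(offered: QualityConsequence, required: QualityConsequence) -> bool:
    """An offered quality consequence supports a required one if it is at least as good."""
    return (offered.bandwidth_kbps >= required.bandwidth_kbps
            and offered.latency_ms <= required.latency_ms)

internet_access = QualityConsequence(bandwidth_kbps=1024, latency_ms=40)   # offered by the access supplier
voip_requirement = QualityConsequence(bandwidth_kbps=100, latency_ms=150)  # required by the VoIP supplier

print(supports(internet_access, voip_requirement))  # True: the access service can carry the VoIP service
```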
Matching of customer needs with supplier service offerings is broker-based
Theme. Reasoning.
Description of the challenge. Different approaches can be followed to match customer needs with supplier offerings. In this paper, we distinguish hierarchical matching and self-organizing matching. In the case of hierarchical matching, there is a party (e.g. a broker) that controls and executes the matching process; suppliers are simply told by the broker to provide their services in a bundle. In the current work ([START_REF] Ivan | Fuzzy verification of service value networks[END_REF]) matching is done via a broker. The position of the party who performs the matching is powerful from a business perspective, since such a party determines which actors provide which services. Other matching models can be distinguished, for instance self-organizing models, in which actors collaborate and negotiate about the service bundle to be provided to the customer and there is no central coordinator.
Example. In the virtual ISP example, the current e3service implementation would include a specific party (the broker) who performs the matching process. This process includes eliciting customer needs, finding the appropriate service bundles and assigning specific services in the bundle to individual suppliers.
Foreseen solution direction. Hierarchical matching is currently partly supported by e3service, via an intermediate top-level party performing the matching process.
The current process can be extended by supporting multiple matching parties organized in a matching hierarchy. Additionally, self-organizing matching should be supported (as this is an entirely different business model), e.g. via gossiping protocols, which avoid central components such as a broker (see e.g. [START_REF] Datta | Autonomous gossiping: A self-organizing epidemic algorithm for selective information dissemination in wireless mobile ad-hoc networks[END_REF] for gossiping in computer networks).
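The fragment below sketches, in a strongly reduced form, how such a self-organizing alternative could work: suppliers repeatedly exchange ("gossip") their offered consequences with random peers until some peer can assemble a bundle covering the customer need, without any central broker. This toy protocol is our own illustration and does not describe any specific existing gossiping algorithm.

```python
import random

random.seed(1)

class Peer:
    def __init__(self, name, offered):
        self.name = name
        self.known = {name: set(offered)}  # view of the network, grown by gossiping

    def gossip_with(self, other):
        # Exchange views; a real protocol would exchange bounded digests, not full views.
        merged = {**self.known, **other.known}
        self.known, other.known = dict(merged), dict(merged)

    def covers(self, need):
        return need <= set().union(*self.known.values())

peers = [Peer("ISP-A", {"internet access"}),
         Peer("Mail-B", {"email inbox", "SMTP relay"}),
         Peer("VoIP-C", {"voice over IP"})]
need = {"internet access", "email inbox", "voice over IP"}

rounds = 0
while not any(p.covers(need) for p in peers) and rounds < 20:
    a, b = random.sample(peers, 2)
    a.gossip_with(b)
    rounds += 1

initiator = next((p for p in peers if p.covers(need)), None)
if initiator:
    print(f"After {rounds} gossip rounds, {initiator.name} can propose a bundle from {sorted(initiator.known)}")
else:
    print("No peer reached full coverage within the round limit")
```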
Restricted knowledge used for need and consequence elicitation
Theme. Reasoning.
Description of the challenge. The current implementation of e3service supposes business-to-consumer (B2C) and business-to-business (B2B) relationships. B2C interaction plays a role during customer need and consequence elicitation, based on the customer need and the service catalogues. B2B relationships play a role during the linking process: if a supplier offers a service to the customer, it is possible that the supplier itself requires services from other suppliers; this is referred to as linking. Customer-to-customer (C2C) interaction may also play a role during the customer need and consequence elicitation process. For instance, a service value network with its associated consequences may be built that closely resembles the service value network (and consequences) generated for another customer.
Example. Suppose that a particular customer uses a bundle of Internet access + email (inbox and SMTP) + VoIP, and is satisfied with this bundle. Via a recommender system, this customer may publish his/her experiences with the service bundle at hand. The service value web configuration components may then use information about this published bundle as an example for other service bundles.
Foreseen solution direction. Customer-to-customer recommendation mechanisms may be used as input to create a recommender system that registers customers' scores on particular consequences. These scores can then be used in the customer need and consequence elicitation process.
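A very simple form of this idea is sketched below: scores that earlier customers gave to consequences of bundles they used are aggregated and then used to rank candidate bundles during elicitation. The ratings and the ranking rule are invented for illustration.

```python
from collections import defaultdict

# Hypothetical customer-to-customer ratings (1-5) of consequences in bundles they used.
ratings = [
    ("internet access", 5), ("email inbox", 4), ("voice over IP", 2),
    ("internet access", 4), ("voice over IP", 3),
]

scores = defaultdict(list)
for consequence, score in ratings:
    scores[consequence].append(score)

avg = {c: sum(v) / len(v) for c, v in scores.items()}

def rank_bundles(bundles):
    """Rank candidate bundles by the average community score of their consequences."""
    return sorted(bundles, key=lambda b: sum(avg.get(c, 3.0) for c in b) / len(b), reverse=True)

print(rank_bundles([{"internet access", "email inbox"}, {"internet access", "voice over IP"}]))
```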
Implementation of e3service by web services
Theme. Software tool support.
Description of the challenge. The software implementation of e3service is currently Java- and RDF-based. It is possible to think of the software as a set of web services and associated processes that perform the composition of the SVN. Moreover, these web services may be offered (and requested) by multiple suppliers and the customer, so that the composition becomes a distributed task.
Example. In the virtual ISP example, each enterprise that potentially wants to participate in an SVN can offer a set of web services. These web services allow the enterprise to participate in the composition process.
Foreseen solution direction. We foresee the use of web-service standards such as SOAP and WSDL to build a configurator that runs in as decentralized a manner as possible (meaning: at the customer and supplier sites). Moreover, a self-organizing implementation obviously should support a fully decentralized architecture.
Conclusion
In this paper, we have introduced a number of research challenges with respect to commercial service value networks in general and the e3service ontology in particular. The list of challenges is by no means complete. The first challenge is to allow a more complex conceptualization of service characteristics, as well as the use of different terminology by the customer and the suppliers of services. Another research challenge is to develop a more detailed ontology for functional and quality consequences. Currently, e3service uses a brokerage approach for matching; a different approach to be investigated is the self-organizing approach. A further research challenge is how to use customer-to-customer interactions in the process of eliciting customer needs and consequences.
Finally, a research challenge is how to implement the current e3service framework in terms of web services.
¹ See http://www.bls.gov/fls/flscomparelf.htm, table 7, visited June 21st, 2012.
² http://www.internet-of-services.com/index.php?id=570&L=0, visited June 21st, 2012.
Acknowledgments. The research leading to these results has received funding from the NWO/Jacquard project VALUE-IT no 630.001.205.
"1003538",
"1003539",
"1003540"
] | [
"62433",
"62433",
"62433",
"487805",
"303060"
] |
01484400 | en | ["shs", "info"] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484400/file/978-3-642-34549-4_10_Chapter.pdf
Christophe Feltus
email: christophe.feltus@tudor.lu
François Vernadat
email: francois.vernadat@eca.europa.eu
Enterprise Architecture Enhanced with Responsibility to Manage Access Rights -Case Study in an EU Institution
Keywords: Access rights management, Business/IT alignment, Enterprise architecture, Responsibility, Case study
An innovative approach is proposed for aligning the different layers of the enterprise architecture of a European institution. The main objective of the alignment targets the definition and the assignment of the access rights needed by the employees according to business specifications. This alignment is realized by considering the responsibility and the accountabilities (doing, deciding and advising) of these employees regarding business tasks. Therefore, the responsibility (modeled in a responsibility metamodel) is integrated with the enterprise architecture metamodel using a structured method. The approach is illustrated and validated with a dedicated case study dealing with the definition of access rights assigned to employees involved in the user account provisioning and management processes.
Introduction
Access rights management is the process encompassing the definition, deployment and maintenance of access rights required by the employees to get access to the resources they need to perform the activities assigned to them. This process is central to the field of information security because it impacts most of the functions of the information systems, such as the configuration of the firewalls, the access to the file servers or/and the authorization to perform software operations. Furthermore, the management of access rights is complex because it involves many employee profiles, from secretaries to top managers, and concerns all the company layers, from the business to the technical ones. On one hand, access rights to IT components must be defined based on functional requirements (defining who can or must use which functionality) and, on the other hand, based on governance needs (defining which responsibility exists at the business level). The functional requirements advocate that, to perform an activity, the employee must hold the proper access rights. The governance needs are those defined by governance standards and norms and those aiming at improving the quality and the accuracy of these access rights [START_REF] Feltus | Enhancement of CIMOSA with Responsibility Concept to Conform to Principles of Corporate Governance of IT[END_REF].
Practically, one can observe [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF] that the existing access control models [START_REF] Clark | A comparison of commercial and military computer security policies. Security and Privacy[END_REF][START_REF] Covington | Securing context-aware applications using environment roles[END_REF][START_REF] Ferraiolo | Proposed nist standard for role-based access control[END_REF][START_REF] Karp | From abac to zbac: The evolution of access control models[END_REF][START_REF] Covington | A contextual attribute-based access control model. On the Move to Meaningful Internet Systems[END_REF][START_REF] Lang | A exible attribute based access control method for grid computing[END_REF] and rights engineering methods [START_REF] Crook | Modelling access policies using roles inrequirements engineering[END_REF][START_REF] He | A framework for privacy-enhanced access control analysis in requirements engineering[END_REF][START_REF] Neumann | A scenario-driven role engineering process for functional rbac roles[END_REF] do not permit to correctly fulfill these needs, mostly because they are handled at the technical layer by isolated processes, which are defined and deployed by the IT department or by an isolated company unit that, generally, does not consider their management according to the governance needs. To address this problem, the paper proposes an approach based on the employees' responsibilities that are identified and modeled by considering these governance needs. On one hand, the modeling of the responsibility concept permits to consider several dimensions of the links that associate an employee with the activities he/she has to perform. On the other hand, the integration of the responsibility in a business/IT alignment method, for the engineering of access rights, permits to engineer and deploy the rights strictly necessary for the employees, thereby avoiding too permissive (and possibly harmful) access rights.
Enterprise architecture frameworks (EAFs) can be used to model the interrelations between different abstraction layers of a company (e.g. the business, the application and the technical layers) and, according to different aspects such as behavior, the information or the static structure [START_REF] Lankhorst | and the ArchiMate team[END_REF]. These models provide views that are understandable by all stakeholders and support decision making, highlighting potential impacts on the whole enterprise. For instance, the enterprise architecture models can be used to understand the impact of a new business service integrated in the business layer on the technical layer and, consequently, enable analysis of some required server capacity. Conversely, the failure of a server has an impact on one or more applications and therefore on business services. The enterprise architecture models support analysis of the impact of various events or decisions and as such the improvement of alignment. For supporting the alignment between the enterprise layers, the EAFs have undergone major improvements during the first decade of the 2000's and some significant frameworks have been developed such as ArchiMate [START_REF] Lankhorst | and the ArchiMate team[END_REF], the Zachman framework [START_REF] Zachman | The Zachman Framework For Enterprise Architecture: Primer for Enterprise Engineering and Manufacturing By[END_REF] or TOGAF [START_REF]TOGAF (The Open Group Architecture Framework)[END_REF]. Even if the advantages of EAFs are not to be demonstrated anymore, the high abstraction level of the modeled concepts and of the links between these concepts makes it sometimes difficult to use the EAFs to perform, verify or justify concrete alignments. In particular, EAFs do not permit to engineer precisely the access rights provided to the employee at an application layer based on the specification from a business layer.
The paper proposes a contribution to help solving the problem of alignment of access rights with business responsibility originating from governance requirements. The solution extends a particular EAF promoted by the European Commission and used at the European Court of Auditors (ECA) with concepts for representing responsibility at a business level. This extension is obtained by integrating the ECA EA metamodel with the responsibility metamodel of our previously developed Responsibility Modeling Language [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF][START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]. The foreseen advantage of integrating both is the enhancement of the alignment among the concepts from the business perspective, the concepts from the application perspective and the concepts from the technical perspective (see Sect. 3). Ultimately, this alignment will support the definition of the access rights to be provisioned to employees, based on their responsibilities. The applicability of the improved metamodel is demonstrated through a case study performed in a real setting.
The paper is structured as follows. In the next section, the responsibility metamodel is introduced. In Section 3, the ECA EA metamodel is presented and, in Section 4, both are integrated. In section 5, a case study related to the User provisioning and User account management processes is presented. Finally, in Section 6, some conclusions are provided.
Modeling responsibility
The responsibility metamodel (Fig. 1) was elaborated on the basis of a literature review. As explained in previous papers [START_REF] Feltus | Strengthening employee's responsibility to enhance governance of IT: COBIT RACI chart case study[END_REF][START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF], we first analyzed how responsibility is dealt with in information technology professional frameworks, in the fields of requirements engineering and role engineering, and in the field of access rights and access control models [START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]. This literature review was then completed with an analysis of the state of the art on responsibility in the field of human sciences. The responsibility metamodel and its most meaningful concepts have been defined in previous works of the authors [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. The most significant concepts for access rights management are: the concept of responsibility, which is composed of all accountabilities related to one single business task and which, in order to be honored, requires rights (the resources provided by the company to the employee, among which the access rights to information) and capabilities (the qualities, skills or resources intrinsic to the employee). The accountability represents the obligation related to what has to be done concerning a business task and the justification that it is done to someone else, under threat of sanction(s) [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. Three types of accountability can be defined: the accountability of doing, which concerns the act of realizing a business task; the accountability of advising, which concerns the act of providing consultancy to allow the realization of the task; and the accountability of deciding, which concerns the act of directing, making decisions and providing authorization regarding a business task. An employee is assigned one or more responsibilities, which may additionally be gathered into business role(s).
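To make these concepts more concrete, the fragment below renders them as a small set of classes. This is our own simplified reading of Fig. 1; the instantiation at the end (an account creation task) is a hypothetical example, not taken from the case study.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class AccountabilityType(Enum):
    DOING = "doing"        # realizing the business task
    ADVISING = "advising"  # providing consultancy for the task
    DECIDING = "deciding"  # directing, deciding, authorizing the task

@dataclass
class BusinessTask:
    name: str

@dataclass
class Accountability:
    type: AccountabilityType
    task: BusinessTask
    answers_to: str  # to whom the justification is given

@dataclass
class Responsibility:
    accountabilities: List[Accountability]
    rights: List[str] = field(default_factory=list)        # e.g. access rights to information
    capabilities: List[str] = field(default_factory=list)  # skills/qualities of the employee

@dataclass
class Employee:
    name: str
    responsibilities: List[Responsibility] = field(default_factory=list)

# Hypothetical instantiation.
create_account = BusinessTask("Create a user account")
agent = Employee("Administrative agent", [Responsibility(
    accountabilities=[Accountability(AccountabilityType.DOING, create_account, answers_to="Head of unit")],
    rights=["write access to the identity management application"],
    capabilities=["knowledge of the account creation procedure"],
)])
print(agent)
```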
ECA EA metamodel
To support the management of its information systems (IS), the European Commission has developed a dedicated architecture framework named CEAF², which has been deployed in several other European institutions, notably the European Court of Auditors (ECA). The particularity of the CEAF is that it is business and IT oriented and provides a framework for the business entities in relation to IT usage and the supporting infrastructure. Considering the business as being at the heart of the framework allows continual business/IT alignment. In addition to its four perspectives, namely "business", "functional", "application" and "data", the CEAF also contains a set of architecture standards that gather methods, vocabulary and rules to comply with. One such rule states, for instance, at the business layer, that the IT department of the ECA (DIT), responsible for the management of information technology, needs to understand the business activities in order to automate them. The DIT has defined its own enterprise architecture metamodel, the ECA EA metamodel, based on the CEAF (see Fig. 2). This metamodel is formalized using an entity-relationship model and is made operational using the Corporate Modeler Suite³. It is made of the same four vertical layers as the CEAF, each representing a perspective in the architecture, i.e.:
• The business layer, formalizing the main business processes of the organization (process map and process flows in terms of activities).
• The functional layer, defining the views needed to describe the business processes in relation with business functions and services.
• The application layer, describing the IT applications or ISs and the data exchanges between them.
• The technical layer, describing the IT infrastructure in terms of servers, computers, network devices, security devices, and so forth.
Each layer includes a set of generic objects relevant for the layer and may contain different types of views. Each view is based on one diagram template (Fig. 2). The concepts that are relevant in the context of this paper (i.e. those to be integrated with the concepts of the responsibility metamodel) are described in the next section.
Integrated ECA EA-responsibility metamodel
In this section, the integration of the ECA EA metamodel with the responsibility metamodel is presented. The method proposed by [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] was used for integrating the metamodels. The three steps of the method are (1) preparation for integration, (2) investigation and definition of the correspondences and (3) integration of both metamodels.
Preparation for integration
Preparing the integration first goes through a primary activity for selecting the subset of concepts from the metamodels relevant for integration. Secondly, a common language for representing both metamodels is selected.
1) Subset of concepts concerned by the integration
This activity of selecting the appropriate subset of concepts considered for the integration has been added to the method of [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] and is required to focus on the concepts from the metamodels that are meaningful for the assignment of accountabilities regarding business tasks to employees and for the definition of the rights and capabilities required for them. The subset of concepts concerned by the integration, in the ECA EA metamodel of Fig. 2, includes:
• The concept of role. This concept is used, according to the ECA EA metamodel documentation, to represent the notion of an entity executing a task of a process. It is associated with the concept of task, which it realizes, and with the concept of organization, to which it belongs.
• The concept of task. This concept is used to describe how activities are performed. A task is achieved by a single actor (not represented in the ECA EA metamodel), is performed continuously and cannot be interrupted. The task is associated with the concept of role, which realizes it, with the concept of activity, to which it belongs, and with the concept of function, which it uses.
• The concept of function. This concept enables the break-down of an IS into functional blocks and functionality items within functional domains. A functional block is defined by the business concepts that it manages on behalf of the IS, combining the functions (functions related to business objects) and the production rules of the data that it communicates. It is associated with the concept of task, with the concept of IS (the application) that implements it, and with the concept of entity that it accesses in CRUD mode (Create, Read, Update and Delete).
• The concept of entity. This concept represents the business data items conveyed by the IS or handled by an application. In the latter case, it refers to information data. This means that the physical data model implemented in systems/databases is not described. The entity is accessed by the function, is associated with flows, is defined by attributes and relationships, and is stored in a datastore.
• The concept of application. This concept represents a software component that contributes to a service for a dedicated business line or for a particular system. Regarding its relations with other concepts: the application is used by the application service, is made of one or more other application(s), uses a technology, sends and receives flow items, and implements functions.
In the responsibility metamodel (see Sect. 2), the following concepts defined in [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF] are kept: responsibility, business role, business task, right, capability, accountability and employee.
2) Selection of a common representation language
For the integration step, UML is used because it is precise enough for this purpose, standard and commonly used. As a consequence, the ECA EA metamodel, formalized using the entity-relationship model, has been translated into a UML class diagram (Fig. 2).
Investigation and definition of the correspondences
In [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF], the author explains that this second step consists in analyzing the correspondences between classes of the two metamodels. These correspondences exist if correspondences among pairs of classes exist and if correspondences between instances of these classes taken pair-wise can be generalized. The correspondences can be identified by analyzing the semantic definitions of the classes and can be validated on instances in models created by instantiating both metamodels for different case studies. Based on the definitions of concepts and on the authors' experience with the case study presented in Sect. 5, three correspondence cases between the concepts of the ECA EA metamodel and the responsibility metamodel have been identified:
• Role from the ECA EA metamodel and business role from the responsibility metamodel: the concept of role in the ECA EA metamodel is represented in the business architecture, is an element that belongs to the organization and realizes business tasks. Hence, it reflects a business role rather than an application role and corresponds, as a result, to the business role of the responsibility metamodel (cf. application role / Role Based Access Control [START_REF] Feltus | Enhancement of Business IT Alignment by Including Responsibility Components in RBAC, 5 th Busital workshop[END_REF]).
• Entity from the ECA EA metamodel and information from the responsibility metamodel: the concept of entity in the ECA EA metamodel is equivalent to the concept of information from the responsibility metamodel. Instances of both concepts are accessed by a human or by an application component, and specific access rights are necessary to access them.
• Task from the ECA EA metamodel and business task from the responsibility metamodel: the concept of task in the ECA EA metamodel and the concept of business task from the responsibility metamodel have the same meaning. The task from the ECA EA metamodel is part of the business architecture and corresponds to a task performed on the business side. According to the definition of the ECA concept, it can be noticed that a task is performed by a single actor. This is a constraint that does not exist in the responsibility metamodel and that needs to be considered at the integration step.
Integration of metamodels
The third step defined in [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] corresponds to the integration of both metamodels. During the analysis of the correspondences between the metamodel concepts, some minor divergences have been observed. To be able to treat the corresponding elements as sufficiently equivalent during this third step despite these divergences, the divergences are analyzed in depth and the correspondence rules are formalized, so that the integration is well defined and precise.
Consequently, to construct the integrated metamodel that enriches the ECA EA metamodel with the responsibility metamodel, a set of integration rules has been defined:
(1) when a concept of the ECA EA metamodel corresponds to a concept of the responsibility metamodel, the name of the ECA EA concept is kept;
(2) when a concept of the responsibility metamodel has no corresponding concept in the ECA EA metamodel, the concept is added to the integrated metamodel under its responsibility-metamodel name;
(3) when a correspondence exists but the definitions of the concepts conflict, the concepts are integrated, the name of the ECA EA concept is kept, and the integration constraints to be respected when using the integrated metamodel are added;
(4) when concepts exist in both metamodels but in different forms, the integration choice is motivated case by case.
In the sequel, correspondences between classes are considered first, followed by correspondences between associations between classes.
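Phrased as a decision procedure, the four rules could look as sketched below; this is merely our paraphrase of the rules, not an artifact of the integration method itself.

```python
def integrate(eca_name=None, resp_name=None, conflict=False, exists_differently=False):
    """Return (name in the integrated metamodel, whether integration constraints are needed)."""
    if exists_differently:
        return ("decided case by case", False)   # rule (4)
    if eca_name and resp_name:
        return (eca_name, conflict)              # rules (1) and (3): ECA EA name kept,
                                                 # constraints recorded when definitions conflict
    if resp_name:
        return (resp_name, False)                # rule (2): concept only in the responsibility metamodel
    return (eca_name, False)                     # concept only in the ECA EA metamodel: kept as is


print(integrate("Role", "Business role"))                  # ('Role', False)
print(integrate(resp_name="Accountability"))               # ('Accountability', False)
print(integrate("Task", "Business task", conflict=True))   # ('Task', True)
```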
1) UML classes integration
a) Classes that correspond exactly: The role from the ECA EA metamodel and the business role from the responsibility metamodel match exactly, as do the entity from the ECA EA metamodel and the information from the responsibility metamodel.
b) Classes that only exist in one metamodel: Employee, responsibility, right (including the type of right giving access to information), capability and accountability only exist in the responsibility metamodel. Function only exists in the ECA EA metamodel.
c) Classes that correspond under constraints: The business task from the responsibility metamodel and the task from the ECA EA metamodel correspond only partially. In the ECA EA metamodel, a task is performed by a single actor. The ECA EA metamodel description does not define the granularity of a business task; for instance, it does not state whether "doing a task", "advising on the performance of a task" and "making decisions during the realization of a task" are three tasks or a single one. In the first case, three actors may be assigned separately to each of the three propositions, whereas in the latter case only one actor is assigned. In the responsibility metamodel, many employees may be assigned to many responsibilities regarding one business task; in practice this is often what happens, for instance in courts during trials. Therefore, in the integrated metamodel, a task may be concerned by more than one accountability, these accountabilities composing responsibilities assigned to one or more employees. For instance, consider the task of deploying a new software component on the ECA network. A first responsibility is to effectively deploy the solution. It is assigned to an IT system administrator who is accountable towards the manager of his unit: he must justify the realization (or absence thereof) of the deployment and may be sanctioned positively or negatively by the unit manager. The unit manager, concerning this deployment, is responsible for making the right decisions, for instance deciding the best period of the day for the deployment or giving the go/no-go for production after the tests have been performed. This responsibility is handled directly by the unit manager, who must justify his decisions and is sanctioned accordingly by his own superior, for instance the department manager, and so forth. This illustrates how several responsibilities may relate to the same task while being assigned to different employees or roles.
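Using hypothetical Responsibility and Accountability classes (our own shorthand; only the task, actors and obligations are taken from the example above), the deployment example can be written down as follows to show one task carrying several responsibilities assigned to different employees.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Accountability:
    obligation: str   # e.g. "do", "decide about", "advise"
    task: str         # the single business task it concerns
    towards: str      # to whom the realization must be justified


@dataclass
class Responsibility:
    description: str
    accountabilities: List[Accountability] = field(default_factory=list)
    assigned_to: str = ""


TASK = "Deploy a new software component on the ECA network"

deploy = Responsibility(
    description="Effectively deploy the solution",
    accountabilities=[Accountability("do", TASK, towards="unit manager")],
    assigned_to="IT system administrator",
)

decide = Responsibility(
    description="Decide the deployment period and the go/no-go for production",
    accountabilities=[Accountability("decide about", TASK, towards="department manager")],
    assigned_to="unit manager",
)
```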
d) Classes that exist differently in both metamodels
The concept of access right from the responsibility metamodel and the concept of access mode from the ECA EA metamodel are represented differently. The access right is a type of right in the responsibility metamodel that semantically corresponds to an access mode in the ECA EA metamodel. In the ECA EA metamodel, an entity is accessed by a function which, additionally, is associated with a task and with the application of the IS that implements it. The access right is thus already present in the ECA EA metamodel, but it is associated with the concept of task through the intermediary of the function. In the integrated metamodel, the concept of function is preserved, since it usefully connects concepts from the business architecture, the application architecture and the data architecture. However, to restrict the usage of a function to what is strictly necessary, it is no longer associated with a task; instead it is required by a responsibility and necessary for an accountability. As such, an employee with the accountability of doing a task gets the right to use a certain function, an employee with the accountability of deciding about the execution of a task gets the right to use another function, and so forth. For example, to record an invoice, a bookkeeper requires the use of the function "encode new invoice", which is associated with write access to the invoicing data.
Additionally, the financial controller who controls the invoice requires the use of the "control invoice" function, which is associated with read access to the same invoicing data.
2) UML associations integration
a) Associations from the responsibility metamodel that complete or replace, in the integrated metamodel, associations from the ECA EA metamodel: The direct association between a role and a task in the ECA EA metamodel is replaced by a composition of associations: "a business role is a gathering of responsibilities, themselves made of a set of accountabilities concerning a single business task". This composition is more precise and is therefore retained. The association between a task and the function it uses in the ECA EA metamodel is replaced by two associations: "an accountability concerning a single business task requires right(s)" and "one type of right is the right to use a function".
b) Associations from the responsibility metamodel that do not exist in the ECA EA metamodel: The following associations are present only in the responsibility metamodel and are simply included in the integrated metamodel: "a responsibility requires capabilities", "a responsibility requires rights", "an employee is assigned to one or more responsibility(ies) and to one or more business role(s)", "a capability is necessary for a business task" and "a right is necessary for a business task".
The metamodel resulting from the integration is shown in Fig. 3.
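The restructured access rights can be illustrated with the invoice example above. The sketch below again uses illustrative class names of our own rather than the metamodel's UML classes: an accountability no longer carries a raw access mode but a right to use a function, and the function carries the access mode on the entity.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class Function:
    name: str
    entity_access: Dict[str, str]   # entity -> access mode (subset of CRUD)


@dataclass
class RightToUseFunction:
    function: Function              # one type of right in the integrated metamodel


@dataclass
class Accountability:
    obligation: str
    task: str
    required_right: RightToUseFunction


encode_invoice  = Function("encode new invoice", {"invoicing data": "Write"})
control_invoice = Function("control invoice",    {"invoicing data": "Read"})

bookkeeper = Accountability("do", "record an invoice",
                            RightToUseFunction(encode_invoice))
controller = Accountability("control", "record an invoice",
                            RightToUseFunction(control_invoice))
```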
Case study
This section reports on the application of the integrated metamodel developed in the previous section to a real-world case study from a European institution, in order to validate its applicability and its contribution to the engineering of more accurate access rights. The integrated metamodel was applied to the management of the access rights provided to employees involved in the user provisioning and user account management processes. The case study was performed over fourteen months, from January 2011 to February 2012. During this period, twelve meetings were organized with the DIT managers of the institution and with the access right administrator to model and assess the processes and to elaborate and assign a set of thirteen responsibilities.
Process description
The user provisioning process is about providing, adapting or removing a user's access rights depending on whether he is a newcomer arriving at the Court, an employee or external staff member whose status or job changes, or someone temporarily or definitively leaving the Court. The status of an employee or external staff member changes when, for instance, his job category, department or name changes or when the end date of his contract is modified. The management of users' identities and access rights is an area in which the DIT is heavily involved. Indeed, since each employee of the ECA needs different access rights on the various ISs, these access rights must be provided accurately according to the user profile.
To manage these rights, the DIT has acquired the Oracle Identity Management (OIM) tool. This tool is central to the identity and user account management activity and, as illustrated by Fig. 4, is connected, on the one hand, to the applications that provision the user profiles (COMREF and eAdmin 4 ) and, on the other hand, to the user directories that provision access rights rules (eDir, Active Directory (AD), Lotus Notes (LN), and so forth). COMREF is the central human resources database of the European Commission used by the HR management tool Sysper2 5 . The main COMREF database is located in the EC data center and contains a set of information items about officials and employees, such as the type of contract, occupation, grade, marital status, date of birth, place of work, department, career history and so forth. This information is synchronized every day with the COMREF_ECA 6 datastore and with the OIM tool. In parallel, additional information is uploaded into the OIM tool for the subset of data relative to ECA workers (employees or external staff), directly from the ECA, e.g. the office number, the entry ID card, the phone numbers, the telephone PIN code, and so forth. This information is also synchronized daily with the central COMREF database.
At the business layer, processes have been defined to support the activities of the employees who manage the system (such as the system administrators) or use it (such as the secretaries who fill in the data related to the PIN codes or phone numbers). The case study focuses on one of these processes, the user provisioning and user account management process. This process aims at defining an ordered set of tasks to manage the request, establishment, issue, suspension, modification or closure of user accounts and, accordingly, to provide the employees with a set of user privileges to access IT resources. More specifically, the case study focuses on the evolution of this process, due to the recent enhancement of the automation of the provisioning loop between the COMREF database and OIM, and on the new definition of the responsibilities of the employees involved in this process.
4 eAdmin is a tool to manage administrative data such as office numbers.
5 Sysper2 is the Human Resource Management solution of the European Commission that supports personnel recruitment, career management, organization charts, time management, etc.
6 COMREF_ECA is a dedicated mirror in Luxembourg of the COMREF database for the officials and employees of the ECA.
Definition and assignment of the responsibilities
A sequence of four steps is applied to model the responsibilities of the employees involved in the upgraded user provisioning and user accounts management process.
1) Identification of business tasks
The business tasks are defined by instantiating the concept of task from the integrated metamodel (Fig. 3). In this step, the tasks for which responsibilities have to be defined are identified; tasks that are performed by an application component, for which defining a responsibility is inappropriate according to the definition of responsibility in Sect. 2, are not considered. After the provisioning process enhancement, six tasks remain: "Release Note d'information 7 ", "Complete Sysper2 data entry", "Assign an office number using eAdmin", "Assign a phone number and a PIN code", "Enter phone number and PIN code in OIM" and "Perform auto provisioning and daily reconciliation".
2) Identification of the accountabilities
The accountability, as explained in Sect. 2, defines which obligation(s) compose a responsibility for a business task and which justification is expected. In the ECA EA-responsibility metamodel, this concept of accountability has been preserved, since it is important to distinguish what the accountabilities of the ECA employees regarding the business tasks really are. In this step, for each of the tasks, the existing accountabilities are reviewed for each of the responsibilities. Mainly three of them have been retained: the obligation to "Do", which composes the responsibility of performing the task; the obligation to "Decide about", which composes the responsibility of being accountable for the performance of a task; and the obligation to "Advise", which composes the responsibility of giving advice for the performance of the task. For example, three types of accountability concern the task "Assign a phone number and a PIN code" and the task "Assign an office number using eAdmin". Three examples explained later in the text are provided in Tables 1-3.
3) Identification of the rights and capabilities
The rights and capabilities are elements required by a responsibility and necessary to achieve accountabilities (Fig. 1). Both concepts have naturally been introduced in the integrated metamodel in Fig. 3. In this step, it is analyzed, accountability by accountability, which capabilities and which rights are necessary to realize the accountability. In the integrated ECA EA-responsibility metamodel, the access right (which is a type of right) is no longer directly associated with the realization of an action involving an information item (e.g. reading a file); it is a right to use a function that realizes both an action (e.g. CRUD) regarding an entity and the use of an application that manipulates this entity. For instance, the Responsibility OIM 7 (Table 1) assigned to Barbara Smith requires using the function that realizes Read-Write access in eAdmin.
Once the responsibilities have been modeled, they can be assigned to employees, considering their role in the organization. As shown in Fig. 3, a responsibility may be assigned directly to an employee or to a role.
4) Assignment of the responsibilities to the employees
In the case study, some responsibilities are directly assigned to employees and others are assigned to roles. For instance, the Responsibility OIM 1 (Table 2) is made of the accountability to do the task "Release Note d'information" and is assigned to the role Human Resources Directorate/RCD (recruitment and career development), whereas the Responsibility OIM 10 (Table 3) is made of the accountability to decide about the task "Enter phone number and PIN code in OIM" and is assigned directly to the employee Francis Carambino.
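Bringing the four steps together, the sketch below instantiates two of the thirteen responsibilities with values taken from Tables 2 and 3; the Responsibility class is again our own shorthand. It shows that a responsibility may be assigned either to a role or directly to an employee.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Responsibility:
    name: str
    task: str
    accountability: str
    rights: List[str] = field(default_factory=list)
    capabilities: List[str] = field(default_factory=list)
    assigned_to_role: Optional[str] = None
    assigned_to_employee: Optional[str] = None


oim_1 = Responsibility(
    name="Responsibility OIM 1",
    task='Release "Note d\'information"',
    accountability="Doing",
    rights=["Read HR workflow", "Read Information Note template", "Use editing tool"],
    capabilities=["Ability to edit official documents", "HR training"],
    assigned_to_role="Human Resources Directorate / RCD",
)

oim_10 = Responsibility(
    name="Responsibility OIM 10",
    task="Enter phone number and PIN code in OIM",
    accountability="Deciding",
    rights=["Read-Write access to OIM phone number and PIN code applications"],
    capabilities=["Computer sciences education", "Two years of experience in OIM administration"],
    assigned_to_employee="Francis Carambino",
)
```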
Case study analysis
The instantiation of the responsibilities, after the mapping of the responsibility metamodel onto the ECA EA metamodel, yields a set of thirteen responsibilities, from which the following observations can be made.
1) Better definition of accountabilities of employees regarding the tasks
Before the case study was performed, the description of the process according to the ECA EA metamodel alone only provided a list of the roles responsible for performing the tasks. As a result, this description was not accurate enough to know which employees perform which tasks, and which other employees decide about them, give advice and so forth. For instance, some employees did not appear in the process description although they were involved in it; this was the case of the IAM 8 Service Manager. The description of the process according to the integrated metamodel gives a clear view of all the accountabilities and their assignment to the employees.
2) Explicit formalization of capabilities required by employees to meet their accountabilities
Before the case study, the description of the process did not address the employee capabilities necessary to perform the accountabilities. Employees were assigned to responsibilities without knowing beforehand whether they were capable of assuming them. The description of the process according to the integrated metamodel clearly highlights the capabilities necessary to perform the tasks. For instance, to "Complete Sysper2 data entry", the employee needs both Sysper2 and SQL training and, if someone else is assigned to this responsibility, the same training is required.
3) Explicit formalization of the rights and access rights required by the employees to meet their accountabilities
Another difference in the process description after the case study is that the rights, and more specifically the access rights, needed to perform an accountability are clearly enumerated. For instance, to "Complete Sysper2 data entry", it is necessary to have the Read-Write and Modify access right on all Sysper2 functions and the right to use another system called RETO 9 .
4) Possibility to associate tasks to responsibilities or to roles
The final improvement is the possibility to assign a task either to a role or to a responsibility rather than directly to an employee. This offers more flexibility and reduces the risk of providing access rights to employees who do not need them. As an example, all employees with the role Human Resources Directorate/RCD are assigned to the responsibility to "Release Note d'information", whereas only one employee advises about the assignment of offices.
Some other concepts of the responsibility metamodel have not been introduced in the integrated metamodel yet and have not been illustrated in the case study. Indeed, as explained in Section 2, checking the employee's commitment during the assignment of a responsibility or a role was not in the scope of this case study. However, other cases in the ECA have shown that commitment influences the way employees accept their responsibilities. For instance, in 2010, the ECA bought a highly sophisticated tool to support problem management. During the deployment of the tool in production, the employees were not informed about their new responsibilities related to the usage of the tool. As a result, they did not commit to these responsibilities and the tool was not used properly or up to expectations. The same problem occurred at a later stage when a decision was made to use a tool to manage the CMDB 10 .
Conclusions
The paper has presented a method to improve the alignment between the different layers of an enterprise architecture metamodel and, thereby, to enhance the management of the access rights provided to employees based on their accountabilities. The method is based on the integration of an enterprise architecture framework with a responsibility metamodel. The integration of both metamodels has been illustrated using the three-step approach proposed by [START_REF] Petit | Some methodological clues for defining a unified enterprise modelling language[END_REF] and has been applied to the ECA EA metamodel, an EAF of a European institution. A validation has been realized on a real case study related to the user provisioning and user account management processes. The objectives of this case study were to validate (1) the applicability of the integrated metamodel and (2) the engineering of more accurate access rights compared to the solutions reviewed in [START_REF] Feltus | ReMoLa: Responsibility Model Language to Align Access Rights with Business Process Requirements[END_REF]. The validation has been performed in four phases. First, the accountabilities of the employees regarding the tasks of the process have been defined. Next, the capabilities required to perform these accountabilities have been formalized. Thirdly, the required rights and access rights have been formalized. Finally, the employees have been associated with responsibilities or with roles. The output of these phases was a set of thirteen responsibilities. The validation shows that using the combination of the ECA EA metamodel and the responsibility metamodel brings benefits compared to using the ECA EA metamodel only. Additionally, compared to the other approaches, the method offers further possibilities and advantages, including a more precise definition of the accountabilities of employees regarding tasks, an explicit formalization of the rights and capabilities required by the employees to perform the accountabilities (traceability between accountabilities and rights), and formal associations of employees with responsibilities or with business roles. The approach has also been validated, in parallel, on other processes from the healthcare sector; the results are available in [START_REF] Feltus | Enhancing the ArchiMate ® Standard with a Responsibility Modeling Language for Access Rights Management[END_REF].
Fig. 1. The responsibility metamodel UML diagram.
Fig. 2. ECA EA metamodel UML diagram.
Fig. 3. The responsibility metamodel integrated with the ECA EA metamodel.
Fig. 4. Overview of the ECA OIM architecture.
Table 1. Responsibility OIM 7.
Task: Assign an office number using eAdmin
Accountability: Doing
Employee: Barbara Smith
Accountable towards: Reynald Zimmermann
Backup: Antonio Sanchis
Role: Logistic administrator
Backup role: Logistic Head of Unit
Right: Read-Write access in eAdmin
Capability: eAdmin manipulation training
Table 2. Responsibility OIM 1.
Task: Release "Note d'Information"
Accountability: Doing
Employee: All
Accountable towards: Gerald Hadwen
Role: Human Resources Directorate/RCD
Backup role: RCD Unit Manager
Right: Read HR workflow, Read Information Note template and Use editing tool
Capability: Ability to edit official documents and HR training
Table 3. Responsibility OIM 10.
Task: Enter phone number and PIN code in OIM
Accountability: Deciding
Employee: Francis Carambino
Accountable towards: Marco Jonhson
Backup: Philippe Melvine
Role: OIM Administrator
Backup role: IAM Service Manager
Right: Read-Write access to OIM tool-Phone number application and Read-Write access to OIM tool-PIN code application
Capability: Computer sciences education, two years experience in OIM administration
The Enterprise Engineering Team (EE-Team) is a collaboration between Public Research Centre Henri Tudor, Radboud University Nijmegen and HAN University of Applied Sciences. www.ee-team.eu
CEAF means European Commission Enterprise Architecture Framework.
Modeler Suite from CaseWise (http://www.casewise.com/products/modeler)
7 In English: Information note.
8 Identity and Access Management.
9 RETO (Reservation TOol) is a personal identification number booking tool common to all institutions.
10 Configuration Management Database, in accordance with ITIL.
Acknowledgements
This work has been partially sponsored by the Fonds National de la Recherche Luxembourg, www.fnr.lu, via the PEARL programme.
"837175",
"986246",
"17748"
] | [
"364917",
"364917",
"371421",
"452132",
"487813"
] |
Dirk Van Der Linden
email: dirk.vanderlinden@tudor.lu
Stijn Hoppenbrouwers
email: stijn.hoppenbrouwers@han.nl
Challenges of Identifying Communities with Shared Semantics in Enterprise Modeling
Keywords: enterprise modeling, conceptual understanding, personal semantics, community identification, semantics clustering
In this paper we discuss the use and challenges of identifying communities with shared semantics in Enterprise Modeling. People tend to understand modeling meta-concepts (i.e., a modeling language's constructs or types) in a certain way and can be grouped by this understanding. Having an insight into the typical communities and their composition (e.g., what kind of people constitute a semantic community) would make it easier to predict how a conceptual modeler with a certain background will generally understand the meta-concepts he uses, which is useful for e.g., validating model semantics and improving the efficiency of the modeling process itself. We demonstrate the use of psychometric data from two studies involving experienced (enterprise) modeling practitioners and computing science students to find such communities, discuss the challenge that arises in finding common real-world factors shared between their members to identify them by and conclude that the common (often implicit) grouping properties such as similar background, focus and modeling language are not supported by empirical data.
Introduction
The modeling of an enterprise typically comprises the modeling of many aspects (e.g., processes, resources, rules), which themselves are typically represented in a specialized modeling language or method (e.g., BPMN [START_REF]Object Management Group: Business process model and notation (bpmn) ftf beta 1 for version 2[END_REF], e3Value [START_REF] Gordijn | e-service design using i* and e3value modeling[END_REF], RBAC [START_REF] Ferrariolo | Role-based access control (rbac): Features and motivations[END_REF]). Most of these languages share similar meta-concepts (e.g., processes, resources, restrictions 5 ). However, from language to language (and modeler to modeler) the way in which these meta-concepts are typically used (i.e., their intended semantics) can differ. For example, one modeler might typically intend restrictions to be deontic in nature (i.e., open guidelines that ought to be the case), while a different modeler might typically consider them as alethic conditions (i.e., rules that are strict logical necessities). They could also differ in whether they typically interpret results as being material or immaterial 'things'. If one is to integrate or link such models (i.e., the integrative modeling step in enterprise modeling [START_REF] Lankhorst | Enterprise architecture modelling-the issue of integration[END_REF][START_REF] Kuehn | Enterprise Model Integration[END_REF][START_REF] Vernadat | Enterprise modeling and integration (EMI): Current status and research perspectives[END_REF][START_REF] Opdahl | Interoperable language and model management using the UEML approach[END_REF]) and ensure the consistency and completeness of the involved semantics, it is necessary to be aware of the exact way in which such a meta-concept was used by the modeler. If this is not explicitly taken into account, problems could arise from, e.g., treating superficially similar concepts as being the same or eroding the nuanced view from specific models when they are combined and made (internally) consistent.
To deal more effectively with such semantic issues it is necessary to have some insight into the "mental models" of the modeler. It is important to gain such insight because people generally do not think in the semantics of a given modeling language, but in the semantics of their own natural language [START_REF] Sowa | The Role of Logic and Ontology in Language and Reasoning[END_REF]. Furthermore, some modeling languages do not have an official, agreed-upon specification of their semantics [START_REF] Ayala | A comparative analysis of i*-based agent-oriented modeling languages[END_REF] and if they do, there is no guarantee that their semantics are complete or consistent [START_REF] Breu | Towards a formalization of the Unified Modeling Language[END_REF][START_REF] Nuffel | Enhancing the formal foundations of bpmn by enterprise ontology[END_REF][START_REF] Wilke | UML is still inconsistent! How to improve OCL Constraints in the UML 2.3 Superstructure[END_REF], let alone that users might deliberately or unconsciously ignore the official semantics and invent their own [START_REF] Henderson-Sellers | UML -the Good, the Bad or the Ugly? Perspectives from a panel of experts[END_REF]. Understanding the intended semantics of a given model thus can not come only from knowledge of the language and its semantics, but requires us to spend time understanding the modeler who created the model.
However, one cannot realistically be expected to look into each individual modeler's semantic idiosyncrasies. Instead, a generalized view on how people with a certain background typically understand the common meta-concepts could be used to infer, to some degree of certainty, the outline of their conceptual understanding. Such (stereo)types of modelers could be found by identifying communities of modelers that share similar semantic tendencies for given concepts and analyzing whether they have any shared properties that allow us to treat them as one. As language itself is inherently the language of community [START_REF] Perelman | The New Rhetoric: A Treatise on Argumentation[END_REF] (regardless of whether that community is bound by geography, biology, shared practices and techniques [START_REF] Wenger | Communities of practice: The organizational frontier[END_REF] or simply speech and natural language [START_REF] Gumperz | The speech community[END_REF][START_REF] Hoppenbrouwers | Freezing language : conceptualisation processes across ICT-supported organisations[END_REF]), it is safe to assume that there are communities which share a typical way of using (and understanding) modeling language concepts. This is not to say that such communities would be completely homogeneous in their semantics, but merely that they contain enough overlap to be treated as belonging together during a process which integrates models originating from their members, without expecting strong inconsistencies to arise in the final product.
Finding such communities based on, for example, empirical data is not a difficult matter in itself. However, going from simply finding communities to understanding them and generalizing them (i.e., to be able to predict on basis of empirical data or prior experience that communities of people which share certain properties will typically have certain semantics) is the difficult step. To do so it is necessary to find identifiers -properties that are shared between the members of a community. These identifiers (e.g., dominant modeling language, focus on specific aspects) are needed to be able to postulate that a given modeler, with a given degree of certainty, belongs to some community and thus likely shares this community's typical understanding of a concept.
In workshop sessions held with companies and practitioners from the Agile Service Development (ASD) 6 project who are involved in different kinds of (collaborative) domain modeling (e.g., enterprise modeling, knowledge engineering, systems analysis) we have found that there are a number of common identifiers modelers are typically (and often implicitly) grouped by. That is, on the basis of these properties they are often assigned to collaborate on some joint domain modeling task. These properties include, for example, a similar background and education, the aspects they focus on when modeling (e.g., processes, goals), the sector in which they do so (e.g., government, health care, telecommunications), and the modeling languages used. It seems, thus, that in practice it is assumed that those who share a background or use similar modeling languages and methods will model alike.
While the wider context of our work is to build towards a theory of how people understand typical modeling meta-concepts (which can aid enterprise modelers with creating integrated models), this paper focuses first on testing the above assumption. To do so we hypothesize that these commonly used properties (e.g., sector, focus, used modeling language) should be reflected in communities that share a similar semantic understanding of common modeling meta-concepts. To test this we investigate the personal semantics of practitioners and students alike, group them by shared semantics and investigate whether they share these, or indeed any, properties. If this is found to be so, it could mean that it is possible to predict, to a certain degree, what (range of) understanding a modeler has of a given concept.
At this stage of our empirical work we have enough data from two of our studies into the conceptual understanding of the common meta-concepts amongst practitioners and students (cf. [START_REF] Van Der Linden | Initial results from a study on personal semantics of conceptual modeling languages[END_REF] for some initial results) to have found several communities that share a similar understanding of conceptual modeling meta-concepts. However, we have begun to realize the difficulties inherent in properly identifying them. The rest of this paper is structured as follows. In Section 2 we discuss the used data and how we acquired it. In Section 3 we demonstrate how this kind of data can be analyzed to find communities, discuss the difficulties in identifying common properties amongst their members and reflect on the hypothesis. Finally, in Section 4 we conclude and discuss our future work.
Methods and Used Data Samples
The data used in this paper originates from two studies using semantic differentials into the personal semantics participants have for a number of meta-concepts common to modeling languages and methods used in Enterprise Modeling. The Semantic Differential [START_REF] Osgood | The Measurement of Meaning[END_REF] is a psychometric method that can be used to investigate what connotative meanings apply to an investigated concept, e.g., whether an investigated concept is typically considered good or bad, intuitive or difficult. It is widely used in information systems research and there are IS-specific guidelines in order to ensure quality of its results [START_REF] Verhagen | A framework for developing semantic differentials in is research: Assessing the meaning of electronic marketplace quality (emq)[END_REF]. We use semantic differentials to investigate the attitude participants have towards actors, events, goals, processes, resources, restrictions and results and to what degree they can be considered natural, human, composed, necessary, material, intentional and vague things. These concepts and dimensions originate from our earlier work on categorization of modeling language constructs [START_REF] Van Der Linden | Towards an investigation of the conceptual landscape of enterprise architecture[END_REF]. The resulting data is in the form of a matrix with numeric scores for each concept-dimension combination (e.g., whether an actor is a natural thing, whether a result is a vague thing). Each concept-dimension combination has a score ranging from 2 to -2, denoting respectively agreement and disagreement that the dimension 'fits' with their understanding. A more detailed overview of the way we apply this method is given in [START_REF] Van Der Linden | Beyond terminologies: Using psychometrics to validate shared ontologies[END_REF].
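To show the shape of this data, the snippet below builds such a score matrix for one fictitious participant with pandas; the numbers are invented and serve only to illustrate the concept-by-dimension structure and the [-2, 2] range.

```python
import pandas as pd

concepts   = ["actor", "event", "goal", "process", "resource", "restriction", "result"]
dimensions = ["natural", "human", "composed", "necessary", "material", "intentional", "vague"]

# Scores for one (fictitious) participant: 2 = the dimension fits their understanding, -2 = it does not.
scores = pd.DataFrame(
    [[ 1,  2,  0,  1, -1,  2, -2],   # actor
     [ 0, -1,  1,  1, -1,  0,  0],   # event
     [-1,  1,  1,  2, -2,  2,  1],   # goal
     [-1,  0,  2,  1, -1,  1,  0],   # process
     [ 1, -1,  0,  1,  2, -1, -1],   # resource
     [-1,  1,  0,  2, -2,  2,  0],   # restriction
     [ 0,  0,  1,  1,  0,  1,  1]],  # result
    index=concepts, columns=dimensions)

print(scores.loc["goal", "intentional"])   # e.g. 2: goals are seen as intentional things
```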
The practitioner data sample (n=12) results from a study which was carried out in two internationally operating companies focused on supporting clients with (re)design of organizations and enterprises. The investigated practitioners all had several years of experience in applying conceptual modeling techniques. We inquired into the modeling languages and methods they use, what sector(s) they operate in, what they model, and what kind of people they mostly interact with. The student data sample (n=19) results from an ongoing longitudinal study into the (evolution of) understanding computing and information systems science students have of modeling concepts. This study started when the students began their studies and had little to no experience. We inquired into their educational (and where applicable professional) background, knowledge of modeling or programming languages and methods, interests and career plans in order to see whether these could be used as identifying factors for a community.
To find communities of people that share semantics (i.e., score similarly for a given concept) we analyzed the results using repeated bisection clustering and Principal Component Analysis (PCA). The PCA results and their visualization (see Figs. 1 and 2) demonstrate (roughly) the degree to which people share a (semantically) similar understanding of the investigated concepts (for the given dimensions) and can thus be grouped together.
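As an indication of how such an analysis can be carried out, the sketch below projects one concept's scores onto two principal components and clusters the participants with scikit-learn. The data is randomly generated here, and BisectingKMeans (available in scikit-learn 1.1 and later) is only a rough stand-in for the repeated bisection clustering used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import BisectingKMeans   # scikit-learn >= 1.1

rng = np.random.default_rng(0)
# One row per participant: the seven dimension scores given for a single concept (e.g. "goal").
scores = rng.integers(-2, 3, size=(12, 7)).astype(float)

# Two principal components, used to inspect semantic distance between participants.
components = PCA(n_components=2).fit_transform(scores)

# Group participants whose scores lie close together into candidate communities.
labels = BisectingKMeans(n_clusters=3, random_state=0).fit_predict(scores)

for participant, (coords, cluster) in enumerate(zip(components, labels), start=1):
    print(f"participant {participant:2d}: PC1={coords[0]:+.2f}, PC2={coords[1]:+.2f}, cluster {cluster}")
```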
General Results & Discussion
Most importantly, the results support the idea that people can be clustered based on their personal semantics. The PCA data proved to be a more useful resource for investigating the clusters and general semantic distance than the (automated) clustering itself, as we found it was hard to estimate parameters like optimal cluster size and similarity cutoffs a priori. As shown in Figs. 1 and 2 there are easily detectable clusters (i.e., communities) for most of the investigated concepts, albeit varying in their internal size and variance. The closer two participants are on both axes, the more similar (the quantification of) their semantics are.
While clusters of people that share a semantic understanding can be found for students and practitioners alike, the samples differ in the degree to which larger clusters can be found. Internal variance is generally greater for students (i.e., the semantics are more 'spread out'). This may be explained by the greater number of neutral attitudes practitioners display towards most of the dimensions (i.e., they lack strongly polarized attitudes), causing a lower spread of measurable semantics. Such neutral attitudes might be a reflection of the necessity to be able to interact effectively with stakeholders who hold different viewpoints. Nonetheless, the practitioners are still easily divided into communities based on their semantic differences. To demonstrate, we will discuss some of the clusters we found for the understanding practitioners and students have of goals, processes, resources and restrictions. The immediately obvious difference between the practitioners and students is that, where there are clusters to be found amongst the practitioners, they differ mostly on one axis (i.e., component), whereas the students often differ wildly on both axes. Of particular interest for testing our hypothesis are participants 3 & 8, and 2, 7 & 10 from the practitioner data sample. The first community clusters together very closely for their understanding of restrictions (and goals, albeit to a lesser degree) while differing only slightly for most other concepts. This means one would expect them to share some real-world properties. Perhaps they are people specialized in goal modeling, or share a typical way of modeling restrictions in a formal sense. The second community (participants 2, 7 & 10) clusters together very closely for resources, fairly closely for goals and restrictions, while being strongly different when it comes to their understanding of processes. One could expect this to imply that they have some shared focus on resources, for instance through a language they use (e.g., value-exchange or deployment languages), resources being often strongly connected to goals (as either requiring them or resulting in their creation). Conversely, one would not necessarily expect there to be much overlap between the participants with regard to processes, as they are grouped with a wide spread.
For the students, there are several potentially interesting communities to look at. Participants 4 & 8 differ strongly for several concepts (e.g., their strong differentiation on two components for resources, and for processes and restrictions), but they have an almost identical understanding of goals. One would expect that some property shared between them might be used to identify other participants that cluster together for goals, but do not necessarily share other understandings. Participants 3, 6 & 19 also cluster together closely for one concept, resources, but differ in their understanding of the other investigated concepts. As such, if (some) experience in the form of having used specific programming and modeling languages is correlated with their conceptual understanding, one would expect to find some reflection of that in the clusterings of these students.
However, when we add the information we have about the participants (see Tables 1 and 2) to these clusters, we run into some problems. It is often the case that communities do not share (many) pertinent properties, or when they do, there are other communities with the same properties that are far removed from them in terms of their conceptual understanding. Take for instance participants 2, 7 & 10 (highlighted with a gray oval) from the practitioner data sample. While they share some properties (e.g., operating in the same sector, having some amount of focus on processes, and interacting with domain experts), when we look at other communities it is not as simple to use this combination of properties to uniquely identify them. For instance, participants 3 & 8 (highlighted with a black rectangle) cluster together closely in their own right, but also share some overlapping properties (both operate in the government sector). Thus, merely looking at the sector a modeler operates in is not enough to identify them. Looking at the combination of sector and focus is not enough either, as under these conditions participants 8 and 10 should be grouped together because they both have a focus on rules. When we finally look at the combination of sector, focus and interaction we have a somewhat better chance of uniquely identifying communities, although there are still counter-examples. Participant 9 (highlighted with a gray rectangle), for example, shares all the properties with participants 2, 7 & 10, but is conceptually far removed from all the others. In general the dataset shows this trend, providing both examples and counterexamples for most of these property combinations, making it generally very difficult, if not flat-out impossible, to identify communities.
We face the same challenge in the student data sample, although even more pronounced, on an almost individual level. There are participants that share the same properties while having wildly varying conceptual understandings. There seems to be some differentiation on whether participants have prior experience, but even then this sole property does not have enough discriminatory power. Take for example participants 4 & 8 (highlighted with a black rectangle) and participants 3, 6 & 19 (highlighted with a gray oval). Both these communities cluster closely together for a specific concept, but then differ on other concepts. One could expect this to have to do with a small number of properties differing between them, which is the case, as there is consistently a participant with some prior experience in programming and scripting languages amongst them. However, if this property really is the differentiating factor, one would expect that on the other concepts the participants with prior experience (4 & 6) would be further removed from the other participants than the ones without experience are, which is simply not the case. It thus seems rather difficult to link these properties to the communities and their structure.
This challenge could be explained by a number of things. First and foremost would be a simple lack of properties (or of granularity in those properties, as might be the case in the student data sample) to identify communities by, while it is also possible that the investigated concepts were not at the right abstraction level (i.e., either too specific or too vague), or that the investigated concepts were simply not the concepts people use to model. The simplest explanation is that the properties we attempt to identify communities by are not the right (i.e., properly discriminating) ones. It is possible (especially for the student data sample) that some of the properties are not necessarily wrong, but that they are not discriminative enough. For example, knowing what modeling languages someone uses could be described in more detail, because a language could have multiple versions that are in use, and it is possible (indeed quite likely) that a language as used is not the same as the 'official' language. However, this line of reasoning is problematic for two reasons: the first is that these are the properties that are used by practitioners to (naively) group modelers together, the second is that there is no clear-cut way to identify reasonable other properties that are correlated with modeling practice. If these properties are not useful, we would have to reject the hypothesis on the grounds of them being a 'bad fit' for grouping people. Other properties that could be thought of include reflections of the cultural background of modelers; however, these are less likely to be of influence in our specific case, as the Enterprise Modelers we investigate are all set in a Western European context and there is little cultural diversity in this sense.
Another explanation could be that the meta-concepts we chose are not at the right abstraction level (i.e., concept width), meaning that they are either too vague or too specific. For example, some modelers could typically think at near-instantiation level while others think in vaguer terms. If concepts are very specific one would actually expect to find differences much faster (as the distance between people's conceptual understandings can be expected to be larger), which makes it easier to find communities. If they are (too) vague, though, people would not differ much because there are not enough properties to differ on in the first place. However, the way we set up our observations rules out the vagueness possibility, as participants were given a semantic priming task before the semantic differential task of each concept. What we investigated was thus their most typical specific understanding of a concept. For this reason it is unlikely that the abstraction level of the concepts was the cause of the challenge of identifying the communities.
Finally, the most obvious explanation could be a flaw in our preliminary work, namely that we did not select the right concepts, irrespective of their abstraction level. Considering that the concepts were derived from an analysis of conceptual modeling languages and methods used for many aspects of enterprises, and that there simply does not seem to be a way to do without most of them, we find it very unlikely that this is the case. The unlikely option that what we investigated was not actually the modeling concept, but something else entirely (e.g., someone considering their favorite Hollywood actors instead of a conceptual modeling interpretation of actor) can also be ruled out, as the priming task in our observation rules out this possibility. It thus seems far more plausible that these potential issues did not contribute to the challenge we face, and we should move towards accepting that identifying communities of modelers based on the investigated properties might not be a feasible thing to do.
While we had hoped that these observations would have yielded a positive result to the hypothesis, the lack of support we have shown means that a theory of predicting how modelers understand the key concepts they use, and thus what the additional 'implicit' semantics of a model could be (as alluded to earlier) is likely not feasible. Nonetheless, the observations do help to systematically clarify that these different personal understandings exist, can be measured, and might be correlated to communication and modeling breakdown due to unawareness of linguistic prejudice. Eventually, in terms of Gregor's [START_REF] Gregor | The nature of theory in information systems[END_REF] types of theories in information systems this information can be used by enterprise modelers and researchers alike to build design theories supporting model integration in enterprise modeling by pointing out potentially sensitive aspects of models' semantics.
If we simply wanted to discount the possibility of these properties being good ways of identifying communities that share a semantic understanding of some concepts, we would be done. But there is more of an issue here, as these properties are being used in practice to identify communities and group people together. Thus, given these findings we have to reject the hypothesis as stated in our introduction, while as of yet not being able to replace it with anything but a fair warning and a call for more understanding: do not just assume (conceptual) modelers will model alike just because they have been using the same languages, come from the same background or work in the same area.
To summarize, we have shown that the often implicit assumption that people have strongly comparable semantics for the common modeling meta-concepts if they share expertise in certain sectors, a modeling focus and used languages cannot be backed up by our empirical investigation. While not an exhaustive disproof of the hypothesis by any means, it casts enough doubt on it that it would be sensible practice for Enterprise Modelers to be more careful and double-check their assumptions when modeling together with, or using models from, other practitioners.
Conclusion and Future Work
We have shown a way to discover communities that share semantics of conceptual modeling meta-concepts through analysis of psychometric data and discussed the difficulties in identifying them through shared properties between their members. On basis of this we have rejected the hypothesis that modelers with certain shared properties (such as used languages, background, focus, etc.) can be easily grouped together and expected to share a similar understanding of the common conceptual modeling meta-concepts.
Our future work involves looking at the used properties in more detail (i.e., what exactly a used language constitutes) and a more detailed comparison of the results of practitioners and students in terms of response polarity and community distribution. Furthermore we will investigate whether there is a correlation between the specific words that a community typically uses to refer to its concepts.
Fig. 1. Principal components found in the data of concept-specific understandings for practitioners. The visualization represents (roughly) the distance between the understandings of individual participants: the further apart two participants are on both axes (i.e., horizontally and vertically different coordinates), the more different their conceptual understanding has been measured to be. Shown are the distances between participants for their understanding of goals, processes, resources and restrictions, with some discussed participants highlighted. Colored boxes and circles are used to highlight some interesting results that are discussed in more detail in the text.
Fig. 2. Principal components found in the data of concept-specific understandings for students. The visualization is read as in Fig. 1; shown are the distances between participants for their understanding of goals, processes, resources and restrictions, with some discussed participants highlighted.
Table 1. Comparison of some practitioners based on the investigated properties. The proprietary language is an in-house language used by one of the involved companies.
| No. | Used languages | Sector | Focus | Interacts with |
| 3 | Proprietary | Financial, Government | Knowledge rules, processes, data | Analysts, modelers |
| 8 | UML, OWL, RDF, Mindmap, Rulespeak, Proprietary | Government, Healthcare | Rules | Business professionals, policymakers, lawyers |
| 2 | Proprietary | Government | Knowledge systems, processes | Managers, domain experts |
| 7 | Proprietary, UML, Java | Government, spatial planning | Business processes, process structure | Domain experts, IT specialists |
| 10 | Proprietary, xml, xslt | Government, finance | Processes, rules, object definitions for systems | Domain experts, java developers |
Table 2. Comparison of some students based on investigated properties. Profiles are standardized packages of coursework students took during secondary education, nature being natural sciences, technology a focus on physics and health a focus on biology.
| No. | Study | Profile | Prior experience |
| 4 | Computing Science | Nature & Technology & Health | Some programming and scripting experience |
| 8 | Computing Science | Nature & Technology | None |
| 3 | Information Systems | Nature & Technology | None |
| 6 | Computing Science | Nature & Technology | Programming experience |
| 19 | Information Systems | Nature & Health | None |
The ASD project (www.novay.nl/okb/projects/agile-service-development/7628) was a collaborative research initiative focused on methods, techniques and tools for the agile development of business services. The ASD project consortium consisted of Be Informed, BiZZdesign, Everest, IBM, O&i, PGGM, RuleManagement Group, Voogd & Voogd, CRP Henri Tudor, Radboud University Nijmegen, University Twente, Utrecht University & Utrecht University of Applied Science, TNO and Novay.
Acknowledgments. This work has been partially sponsored by the Fonds National de la Recherche Luxembourg (www.fnr.lu), via the PEARL programme.
The Enterprise Engineering Team (EE-Team) is a collaboration between Public Research Centre Henri Tudor, Radboud University Nijmegen and HAN University of Applied Sciences. www.ee-team.eu
"1002484"
] | [
"371421",
"300856",
"452132",
"348023",
"300856",
"452132"
] |
Eystein Mathisen
email: eystein.mathisen@uin.no
John Krogstie
email: krogstie@idi.ntnu.no
Modeling of Processes and Decisions in Healthcare -State of the Art and Research Directions
In order to be able to deliver efficient and effective decision support technologies within healthcare, it is important to be able to understand and describe decision making in medical diagnosis, treatment and administrative processes. This paper outlines how information can be synthesized, interpreted and used during decision making in dynamic healthcare environments. We intend to develop a set of modeling constructs that describe the decision requirements forming the basis for adequate situation awareness in clinical processes. We propose that a separate decision perspective will 1) enhance the shared understanding of the decision context among clinical staff, and 2) provide a better understanding of how we can design information system support for complex cognitive tasks in dynamic work environments.
Introduction
The clinical and administrative processes in today's healthcare environments are becoming increasingly complex and intertwined and the provision of clinical care involves a complex series of physical and cognitive activities. A multitude of stakeholders and healthcare providers with the need for rapid decision-making, communication and coordination, together with the steadily growing amount of medical information, all contribute to the view of healthcare as a complex cognitive work domain.
The healthcare environment can also be characterized as a very dynamic work environment, in which clinicians rapidly switch between work activities and tasks. The process is partially planned, but at the same time driven by events and interruptions [START_REF] Clancy | Applications of complex systems theory in nursing education, research, and practice[END_REF][START_REF] Dahl | Context in care--requirements for mobile context-aware patient charts[END_REF].
To be able to cope with the dynamism and complexity of their environments, many organizations have been forced to restructure their operations and integrate complex business processes across functional units and across organizational boundaries [START_REF] Fawcett | Process integration for competitive success: Benchmarking barriers and bridges[END_REF]. This has in many cases led to the adoption of process-oriented approaches and enterprise modeling for the management of organizational operations. Process modeling is used within organizations as a method to increase the focus on and knowledge of organizational processes, and functions as a key instrument to organize activities and to improve the understanding of their interrelationships [START_REF] Recker | Business process modeling : a comparative analysis[END_REF]. Today, there is a large number of modeling languages with associated notations, as we will discuss in more detail in Section 3.
Recent work within the healthcare domain has studied how one can best adopt process orientation and process-oriented information systems in order to provide effective and efficient solutions for healthcare processes, exemplified by the concepts of patient care, clinical pathways or patient trajectories [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF]. The adoption of process orientation in the healthcare sector addresses the quality of the outcomes of care processes (e.g. clinical outcomes and patient satisfaction) as well as improvements in operational efficiency [START_REF] Fryk | A Modern Process Perspective, Process Mapping and Simulation in Health Care[END_REF].
In this context, it is important to note that research has shown that performance differences between organizations operating in dynamic and complex environments are related to how people decide and act [START_REF] Bourgeois | Strategic Decision Processes in High Velocity Environments: Four Cases in the Microcomputer Industry[END_REF]. Hence, the focus of this research relates to how clinical decision-makers adapt to dynamic and complex healthcare environments and how information is synthesized, interpreted and used during decision-making in these contexts. The concept of decision-making is not a well-researched phenomenon in relation to the mapping and modeling of healthcare processes. It is argued here that the complexity of organizational decision making in general (see e.g. [START_REF] Langley | Opening up Decision Making: The View from the Black Stool[END_REF]) is not reflected in the various modeling languages and methods that are currently available, even though decision making is an inherent and important part of every non-trivial organizational process. Thus, we want to investigate how decision-making expertise can be expressed in enterprise models describing healthcare processes.
The organization of the paper is as follows: Section 2 describes and discusses some of the most prevalent challenges within healthcare. Section 3 presents the theoretical background for the project, with focus on (process) modeling and situation awareness as a prerequisite for decision making, followed by a presentation of decision making theories and process modeling in healthcare. Section 4 gives an overview of the proposed research directions for this area while section 5 provides closing remarks.
Challenges in the Healthcare Domain
The healthcare domain is a typical risky, complex, uncertain and time-pressured work environment. Healthcare workers experience many interruptions and disruptions during a shift. Resource constraints with regard to medical equipment/facilities and staff availability, qualifications, shift and rank (organizational hierarchy) are commonplace. Clinical decisions made under these circumstances can have severe consequences. Demands for care can vary widely because every patient is unique. This uniqueness implies that the patient's condition, diagnosis and the subsequent treatment processes are highly situation-specific. Work is performed on patients whose illnesses and response to medical treatment can be highly unpredictable. Medical care is largely oriented towards cognitive work like planning, problem solving and decision making. In addition, the many practical activities needed to perform medical care - often including the use of advanced technology - require cognitive work as well. Thus, the needs of individual patients depend on the synchronization of clinical staff, medical equipment and tools as well as facilities (e.g. operating rooms). The management of procedures for a set of operating rooms or an intensive care unit must be planned, and the associated resources and activities require coordination [START_REF] Nemeth | The context for improving healthcare team communication[END_REF]. Planning, problem solving and decision making involve the assessment of resource availability, resource allocation, the projection of future events, and assessment of the best courses of action. According to Miller et al. [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF], members of a healthcare team must coordinate the acquisition, integration and interpretation of patient and team-related information to make appropriate patient care decisions.
Clinicians face two types of data processing challenges in decision-making situations:
1. Deciding on medical acts - what to do with the patient.
2. Deciding on coordination acts - which patient to work on next.
Knowing what has been going on in the clinical process enables clinicians to adapt their plans and coordinate their work with that of others. In addition to patient data, these decisions are informed by data about what other personnel are doing and which resources (rooms and equipment) are in use.
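To make the distinction concrete, the following toy sketch separates the two decision types: a medical act for a given patient, and a coordination act that picks the next patient given current resource availability. The data structures, acuity scale and thresholds are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Patient:
    name: str
    acuity: int                 # 1 (stable) .. 5 (critical); an assumed scale
    needs_operating_room: bool

def decide_medical_act(patient: Patient) -> str:
    """Decision type 1: what to do with this patient (a medical act)."""
    return "escalate to intensive care" if patient.acuity >= 4 else "continue ward treatment"

def decide_next_patient(waiting: List[Patient], free_operating_rooms: int) -> Patient:
    """Decision type 2: which patient to work on next (a coordination act),
    informed by which resources are currently available."""
    eligible = [p for p in waiting if not p.needs_operating_room or free_operating_rooms > 0]
    return max(eligible, key=lambda p: p.acuity)

waiting = [Patient("A", 3, False), Patient("B", 5, True), Patient("C", 2, False)]
chosen = decide_next_patient(waiting, free_operating_rooms=1)
print(chosen.name, "->", decide_medical_act(chosen))   # B -> escalate to intensive care
```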
From the above discussion, we argue that communication and collaboration for informed decision making leading to coordinated action are among the most prevalent challenges experienced within healthcare. Lack of adequate team communication and care coordination is often mentioned as a major reason for the occurrence of adverse events in healthcare [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF][START_REF] Reader | Communication skills and error in the intensive care unit[END_REF]. According to Morrow et al. [START_REF] Morrow | Reducing and Mitigating Human Error in Medicine[END_REF], errors and adverse events in medical care are related to four broad areas of medical activities: medical device use, medication use, team collaboration, and diagnostic/decision support. In [START_REF] Eisenberg | The social construction of healthcare teams, in Improving Healthcare Team Communication[END_REF], Eisenberg discusses communication and coordination challenges related to healthcare teams and points out the following requirements of these teams:
• Building shared situational awareness that contributes to the development of shared mental models.
• Continuously refreshing and updating the medical team's understanding of the changing context with new information.
• Ensuring that team members adopt a notion of team accountability that enables them to relate their work to the success of the team.
In section 3 we will look more closely at the theoretical underpinnings of the proposed research, starting with an overview of perspectives to process modeling.
Theoretical Background
Perspectives to Process Modeling
A process is a collection of related, structured tasks that produce a specific service or product to address a certain goal for some actors. Process modeling has been performed in connection with IT and organizational development at least since the 1970s. The archetypical way to look at processes is as a transformation, according to an IPO (input-process-output) approach. Whereas early process modeling languages had this as their basic approach [START_REF] Gane | Structured Systems Analysis: Tools and Techniques[END_REF], variants have appeared as process modeling has been integrated with other types of conceptual modeling. Process modeling is usually done in some organizational setting. One can look upon an organization and its information system abstractly as being in a state (the current state, often represented as a descriptive 'as-is' model) that is to be evolved to some future wanted state (often represented as a prescriptive 'to-be' model). These states are often modeled, and the state of the organization is perceived (differently) by different persons through these models. Different usage areas of conceptual models are described in [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF][START_REF] Nysetvold | Assessing Business Process Modeling Languages Using a Generic Quality Framework[END_REF]:
1. Human sense-making: The descriptive model of the current state can be useful for people to make sense of and learn about the current perceived situation.
2. Communication between people in the organization: Models can have an important role in human communication. Thus, in addition to supporting the sense-making process for the individual, descriptive and prescriptive models can act as a common framework supporting communication between people.
3. Computer-assisted analysis: This is used to gain knowledge about the organization through simulation or deduction, often by comparing a model of the current state and a model of a future, potentially better state.
4. Quality assurance, ensuring e.g. that the organization acts according to a certified process developed for instance as part of an ISO-certification process.
5. Model deployment and activation: To integrate the model of the future state in an information system directly. Models can be activated in three ways: (a) through people, where the system offers no active support; (b) automatically, for instance as an automated workflow system; (c) interactively, where the computer and the users co-operate [START_REF] Krogstie | Interactive Models for Supporting Networked Organisations[END_REF].
6. To be a prescriptive model used to guide a traditional system development project, without being directly activated.
Modeling languages can be divided into classes according to the core phenomena classes (concepts) that are represented and focused on in the language. This has been called the perspective of the language [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF][START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF]. Languages in different perspectives might overlap in what they express, but emphasize different concepts as described below. A classic distinction regarding modeling perspectives is between the structural, functional, and behavioral perspective [19]. Through other work, such as [START_REF] Curtis | Process modeling[END_REF], [START_REF] Mili | Business process modeling languages: Sorting through the alphabet soup[END_REF], F3 [START_REF] Bubenko | Facilitating fuzzy to formal requirements modeling[END_REF], NATURE [START_REF] Jarke | Theories underlying requirements engineering: an overview of NATURE at Genesis[END_REF], [START_REF] Krogstie | Conceptual Modelling in Information Systems Engineering[END_REF][START_REF] Zachman | A framework for information systems architecture[END_REF], additional perspectives have been identified, including object, goal, actor, communicational, and topological. The perspectives thus identified for conceptual modeling are:
Behavioral perspective: Languages following this perspective go back to the early sixties, with the introduction of Petri-nets [START_REF] Petri | Kommunikation mit Automaten[END_REF]. In most languages with a behavioral perspective the main phenomena are 'states' and 'transitions' between 'states'. State transitions are triggered by 'events' [START_REF] Davis | A comparison of techniques for the specification of external system behavior[END_REF].
Functional perspective: The main phenomena class in the functional perspective is 'transformation': a transformation is defined as an activity which, based on a set of phenomena, transforms them into another set of phenomena.
Structural perspective: Approaches within the structural perspective concentrate on describing the static structure of a system. The main construct of such languages is the 'entity'.
Goal and Rule perspective: Goal-oriented modeling focuses on 'goals' and 'rules'. A rule is something which influences the actions of a set of actors. In the early nineties, one started to model so-called rule hierarchies, linking goals and rules at different abstraction levels.
Object-oriented perspective: The basic phenomena of object oriented modeling languages are those found in most object oriented programming languages; 'Objects' with unique id and a local state that can only be manipulated by calling methods of the object. The process of the object is the trace of the events during the existence of the object. A set of objects that share the same definitions of attributes and operations compose an object class.
Communication perspective: The work within this perspective is based on language/action theory from philosophical linguistics [START_REF] Winograd | Understanding Computers and Cognition: A New Foundation for Design[END_REF]. The basic assumption of language/action theory is that persons cooperate within work processes through their conversations and through mutual commitments taken within them.
Actor and role perspective: The main phenomena of modeling languages within this perspective are 'actor' and 'role'. The background for modeling in this perspective comes both from organizational science, work on programming languages, and work on intelligent agents in artificial intelligence.
Topological perspective: This perspective relates to the topological ordering between the different concepts. The best background for conceptualization of these aspects comes from the cartography and CSCW fields, differentiating between space and place [START_REF] Dourish | Re-space-ing place: "place" and "space" ten years on[END_REF][START_REF] Harrison | Re-place-ing space: the roles of place and space in collaborative systems[END_REF]. 'Space' describes geometrical arrangements that might structure, constrain, and enable certain forms of movement and interaction; 'place' denotes the ways in which settings acquire recognizable and persistent social meaning through interaction.
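As an illustration of the behavioral perspective described at the start of this list, the following minimal sketch represents a process fragment as a place/transition net: places hold tokens (the state), and transitions fire when their input places are marked (the events). The net and its labels are invented for illustration and do not correspond to any particular modeling language.

```python
# A two-step care process expressed as a minimal place/transition net.
marking = {"patient_admitted": 1, "lab_ordered": 0, "result_reported": 0}

transitions = {
    "order_lab":     ({"patient_admitted": 1}, {"lab_ordered": 1}),
    "report_result": ({"lab_ordered": 1},      {"result_reported": 1}),
}

def enabled(name: str) -> bool:
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name: str) -> None:
    assert enabled(name), f"transition '{name}' is not enabled in the current marking"
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n
    for p, n in post.items():
        marking[p] += n

fire("order_lab")
fire("report_result")
print(marking)   # {'patient_admitted': 0, 'lab_ordered': 0, 'result_reported': 1}
```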
Situation and context awareness
A clinician's situation awareness is the key feature for the success of the decision process in medical decision-making. In general, decision makers in complex domains must do more than simply perceive the state of their environment in order to have good situation awareness. They must understand the integrated meaning of what they perceive in light of their goals. Situation awareness incorporates an operator's understanding of the situation as a whole, which forms the basis for decision-making. The integrated picture of the current situation may be matched to prototypical situations in memory, each prototypical situation corresponding to a 'correct' action or decision.
Figure 1 shows the model of situation awareness in decision making and action in dynamic environments. Situation awareness (SA) is composed of two parts: situation and awareness. Pew [START_REF] Pew | The state of Situation Awareness measurement: heading toward the next century[END_REF] defines a 'situation' as "a set of environmental conditions and system states with which the participant is interacting that can be characterized uniquely by a set of information, knowledge and response options." The second part ('awareness') is primarily a cognitive process resulting in awareness. Some definitions put a higher emphasis on this process than the other (situation). For example Endsley [START_REF] Endsley | Toward a Theory of Situation Awareness in Dynamic Systems[END_REF] defines SA as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future". The model in Figure 1 defines three levels of situation awareness. The first level is perception, which refers to the perception of critical cues in the environment. Examples of relevant cues in a clinical setting are patient vital signs, lab results and other team member's current activities. The second level (comprehension) involves an understanding of what the integrated cues mean in relation to the clinician's goals. Here, a physician or a team of medical experts will combine information about past medical history, the current illness(es) and treatments to try to understand the significance of data about the patient's condition. The third level is related to projection, i.e. understanding what will happen with the patient in the future. Using the understanding of the current situation, a clinician or a healthcare team can for instance predict a patient's response to a particular treatment process [START_REF] Wright | Building shared situation awareness in healthcare settings[END_REF].
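The three levels can also be paraphrased as a small, purely illustrative sketch for a hypothetical patient-monitoring situation; the cues, thresholds and wording are invented and only serve to show how perception, comprehension and projection build on each other.

```python
def perceive(monitor_feed: dict) -> dict:
    """Level 1 - perception: pick up the relevant cues from the environment."""
    return {"heart_rate": monitor_feed["hr"], "systolic_bp": monitor_feed["sys_bp"]}

def comprehend(cues: dict, goal: str = "keep the patient hemodynamically stable") -> dict:
    """Level 2 - comprehension: integrate the cues in light of the current goal."""
    unstable = cues["heart_rate"] > 120 and cues["systolic_bp"] < 90
    return {"assessment": "possible shock" if unstable else "stable", "goal": goal}

def project(understanding: dict, trend: str) -> str:
    """Level 3 - projection: anticipate the near-future state of the patient."""
    if understanding["assessment"] == "possible shock" and trend == "worsening":
        return "expect deterioration: prepare fluids and alert the team"
    return "continue routine observation"

cues = perceive({"hr": 132, "sys_bp": 84})
print(project(comprehend(cues), trend="worsening"))
```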
According to Endsley and Garland [START_REF] Endsley | Situation awareness: analysis and measurement[END_REF], situation awareness is in part formed by the availability of information. This information can be obtained from various sources such as sensory information from the environment, visual/auditory displays, decision aids and support systems, extra-and intra-team communication and team member background knowledge and experience. These information sources will have different levels of reliability giving rise to different levels of confidence in various information sources. Information is primarily aimed at: 1) reducing uncertainty in decision-making and 2) interpretation and sense making in relation to the current situation. Hence, situation awareness is derived from a combination of the environment, the system's displays and other people (team members) as integrated and interpreted by the individual.
In the context of figure 1, mental models help (or block) a person or a team in the process of determining what information is important to attend to, as well as helping to form expectations. Without a 'correct' mental model it would be difficult to obtain satisfactory situation awareness. Processing novel cues without a good mental model strains the working memory and makes achieving SA much harder and much more prone to error. Mental models provide default information (expected characteristics of elements) that help form higher levels of SA even when needed data is missing or incomplete. Mental models affect the way we handle decisions and actions under uncertainty.
Furthermore, any model of information behavior must indicate something about different stakeholders' information needs and sources. SA is a vital component of the decision making process regardless of the dynamics of the environment within which the decisions are made. SA shapes the mental model of the decision maker and as such influences the perceived choice alternatives and their outcomes. Although Endsley's work on situation awareness originated within the military and aviation domains, there has been an increasing interest from other areas of research. Within the field of medical decision making research, Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] pointed out the limitations of the classical paradigm of decision research and called for more research within medical decision-making as it occurs in the natural setting. Drawing on the concepts of naturalistic decision making and situation awareness, Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] argue that this will enable us to better understand decision processes in general, develop strategies for coping with suboptimal conditions, develop expertise in decision-making, as well as obtain a better understanding of how decision-support technologies can successfully mediate decision processes within medical decision making. Examples of research efforts covering situation awareness and decision making within healthcare can be found within anesthesiology [START_REF] Gaba | Situation Awareness in Anesthesiology[END_REF], primary care [START_REF] Singh | Exploring situational awareness in diagnostic errors in primary care[END_REF], surgical decision making [START_REF] Jalote-Parmar | Situation awareness in medical visualization to support surgical decision making[END_REF], critical decision making during health/medical emergencies [START_REF] Paturas | Establishing a Framework for Synchronizing Critical Decision Making with Information Analysis During a Health/Medical Emergency[END_REF] and within evidence-based medical practices in general [START_REF] Falzer | Cognitive schema and naturalistic decision making in evidence-based practices[END_REF]. Decision making theories will be further elaborated in section 3.3.
Returning to the model of situation awareness presented in figure 1, we notice that there are two factors that constrain practitioners in any complex and dynamic work domain [START_REF] Morrow | Reducing and Mitigating Human Error in Medicine[END_REF]: 1) the task/system factors and 2) the individual/team cognitive factors. Task/system factors focuses on the characteristics of the work environment and the cognitive demands imposed on the practitioners operating in the domain under consideration. According to Vicente [START_REF] Vicente | Cognitive Work Analysis : Toward Safe, Productive, and Healthy Computer-Based Work[END_REF], this is called the ecological approach and is influenced by the physical and social reality. The cognitive factors, called the cognitivist approach, is concerned with how the mental models, problem solving strategies, decision making and preferences of the practitioners are influenced by the constraints of the work domain.
In the next section we will discuss the main features of decision making, thus covering the cognitivist perspective. In section 4 we also look closer at how enterprise or process models can be used to describe the task environment (i.e. the ecology).
Theories of clinical decision making -from decision-analytic to intuitive decision models
According to the cognitivist perspective, the level of situation awareness obtained is, among other factors, influenced by the practitioner's goals, expectations, mental model (problem understanding), and training. With reference to Endsley's model in fig. 1, we see that decision making is directly influenced by a person's or a team's situation awareness.
The decision making process can be described in more than one way. A classic description of decision making relates the concept to the result of a gradual process - the decision process - performed by an actor: the decision maker. The philosopher Churchman puts it this way: The manager is the man who decides among alternative choices. He must decide which choice he believes will lead to a certain desired objective or set of objectives [START_REF] Churchman | Challenge to Reason[END_REF]. The decision-making process is described with various action steps and features from one definition to another. Typical steps are the generation of solution alternatives, evaluation of the impact/consequences of options and choice of solutions based on evaluation results and given criteria [START_REF] Ellingsen | Decision making and information. Conjoined twins?[END_REF]. Mintzberg et al. [START_REF] Mintzberg | The Structure of "Unstructured" Decision Processes[END_REF] have identified three central phases in a general decision making process: 1) identification, 2) development and 3) selection, described by a set of supporting 'routines' and the dynamic factors explaining the relationship between the central phases and the supporting routines. The identification phase consists of the decision recognition and diagnosis routines, while the development phase consists of the search and design routines. Finally, the selection phase is a highly iterative process that consists of the screening, evaluation-choice and authorization routines. In a similar manner, Power [START_REF] Power | Decision Support Systems: Concepts and Resources for Managers[END_REF] defines a decision process as consisting of seven stages or steps: 1) defining the problem, 2) deciding who should decide, 3) collecting information, 4) identifying and evaluating alternatives, 5) deciding, 6) implementing and 7) follow-up assessment. In an attempt to improve decision support in requirements engineering, Alenljung and Persson [START_REF] Alenljung | Portraying the practice of decision-making in requirements engineering: a case of large scale bespoke development[END_REF] combine Mintzberg's and Power's staged decision process models. Mosier and Fischer [START_REF] Mosier | Judgment and Decision Making by Individuals and Teams: Issues, Models, and Applications[END_REF] discuss decision making in terms of both front-end judgment processes and back-end decision processes. The front-end processes involve handling and evaluating the importance of cues and information, formulating a diagnosis, or assessing the situation. According to Mosier and Fischer, the back-end processes involve retrieving a course of action, weighing different options, or mentally simulating a possible response. This is illustrated in figure 2.
Fig. 2. Components of the decision making process (adapted from [47]): a front-end judgment process followed by a back-end decision process.
The decision making process is often categorized into rational/analytical and naturalistic/intuitive decision making [START_REF] Roy | Decision-making models[END_REF]. This distinction refers to two broad categories of decision-making modes that are not mutually exclusive. This implies that any given decision process in reality consists of analytical as well as intuitive elements. Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF] argues that the cognitive processes taking place during clinical decision making can be located along a cognitive continuum, which ranges between intuition and rational analysis. Models of rational-analytical decision-making can be divided into two different approaches: the normative and the descriptive approach. The classical normative economic theory assumes complete rationality during decision-making processes, using axiomatic models of uncertainty and risk (e.g. probability theory or Bayesian theory) and utility (including multi-attribute utility theory), as illustrated by Expected Utility Theory [50] and Subjective Expected Utility [START_REF] Savage | Foundations of Statistics[END_REF]. Here, the rationally best course of action is selected among all available possibilities in order to maximize returns. Theories of rational choice represent, however, an unrealistic model of how decision makers act in real-world settings. It has been shown that there is a substantial non-rational element in people's thinking and behavior, along with practical limits to human rationality. These factors are evident in several descriptive theories, exemplified by Prospect Theory [START_REF] Kahneman | Procpect Theory: An analysis of decision under risk[END_REF], Regret Theory [START_REF] Loomes | Regret Theory: An Alternative Theory of Rational Choice Under Uncertainty[END_REF] as well as Simon's theory of bounded rationality [START_REF] Simon | Rational decision making in business organizations[END_REF]. According to Simon, the limits of human rationality are imposed by the complexity of the world, the incompleteness of human knowledge, the inconsistencies of individual preferences and belief, the conflicts of value among people and groups of people, and the inadequacy of the amount of information people can process/compute. The limits to rationality are not static, but depend on the organizational context in which the decision-making process takes place. In order to cope with bounded rationality, clinical decision makers rely on cognitive short-cutting mechanisms or strategies, called heuristics, which allow the clinician to make decisions when facing poor or incomplete information. There are, however, some disadvantages related to the use of heuristics. In some circumstances heuristics lead to systematic errors called biases [START_REF] Gorini | An overview on cognitive aspects implicated in medical decisions[END_REF] that influence the process of medical decision making in a way that can lead to undesirable effects in the quality of care.
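For the rational-analytical end of this continuum, the classical rule is to choose the option whose expected utility - the sum over its possible outcomes of probability times utility - is highest. The sketch below applies this rule to two invented treatment options; the probabilities and utilities are made up purely for illustration.

```python
options = {
    "surgery":      [(0.80, 0.9), (0.20, 0.1)],   # (probability, utility) pairs
    "conservative": [(0.95, 0.6), (0.05, 0.3)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.3f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("rationally preferred option:", best)   # surgery (0.740 vs 0.585)
```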
At the other end of the cognitive continuum proposed by Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF], one finds naturalistic or intuitive decision making models. Since the 1980s, a considerable amount of research has been conducted on how people make decisions in real-world complex settings (see for example [START_REF] Klein | Naturalistic decision making[END_REF]). One of the most important features of naturalistic decision-making is the explicit attempt to understand how people handle complex tasks and environments. According to Zsambok [START_REF] Zsambok | Naturalistic Decision Making (Expertise: Research & Applications[END_REF], naturalistic decision making can be defined as "how experienced people working as individuals or groups in dynamic, uncertain, and often fast-paced environments, identify and assess their situation, make decisions, and take actions whose consequences are meaningful to them and to the larger organization in which they operate". Different decision models that are based on the principles of naturalistic decision making are Recognition-primed Decision Model [START_REF] Klein | Naturalistic decision making[END_REF][START_REF] Zsambok | Naturalistic Decision Making (Expertise: Research & Applications[END_REF], Image theory [START_REF] Beach | The Psychology of Decision Making: People in Organizations[END_REF], the Scenario model [START_REF] Beach | The Psychology of Decision Making: People in Organizations[END_REF] and Argument-driven models [START_REF] Lipshitz | Decision making as argument-driven action[END_REF]. Details of these models will not be discussed further in this paper.
Research in healthcare decision making has largely been occupied with the 'decision event', i.e. a particular point in time when a decision maker considers different alternatives and chooses a possible course of action. Apart from the naturalistic decision making field, Kushniruk [START_REF] Kushniruk | Analysis of complex decision-making processes in health care: cognitive approaches to health informatics[END_REF] and Patel et al. [START_REF] Patel | Emerging paradigms of cognition in medical decisionmaking[END_REF] has proposed a greater focus on medical problem solving, i.e. the processes that precede the decision event. In essence, this argument is in line with Endsley's model of situation awareness.
Turning our attention to the environmental perspective in Endsley's SA model, we will in the next section discuss the modeling of healthcare processes and workflows.
Process modeling within healthcare
Process modeling in healthcare has previously been applied in the analysis and optimization of pathways, during requirements elicitation for clinical information systems and for general process quality improvement [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF][START_REF] Becker | Health Care Processes -A Case Study in Business Process Management[END_REF][START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF][START_REF] Staccini | Modelling health care processes for eliciting user requirements: a way to link a quality paradigm and clinical information system design[END_REF][START_REF] Petersen | Patient Care across Health Care Institutions: An Enterprise Modelling Approach[END_REF]. Other approaches, mainly from the human-factors field, have used process models as a tool for building shared understanding within teams (e.g. [START_REF] Fiore | Process mapping and shared cognition: Teamwork and the development of shared problem models, in Team cognition: Understanding the factors that drive process and performance2004[END_REF]). The adoption of traditional process modeling in healthcare is challenging in many respects. The challenges can, among other factors, be attributed to [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF][START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF]:
• Interrupt and event driven work, creating the need for dynamic decision making and problem solving.
• Processes that span multiple medical disciplines, involving complex sets of medical procedures.
• Different types of, and often individualized, treatments.
• A large number of possible and unpredictable patient care pathways.
• Many inputs (resources and people) that can be used in different places.
• Frequent changes in technology, clinical procedures and reorganizations.
In addition, there are different levels of interacting processes in healthcare, as in other organizational domains. Lenz et al. [START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF] made a distinction between site-specific and site-independent organizational processes (e.g. medical order entry, patient discharge or result reporting) and medical treatment processes (e.g. diagnosis or specific therapeutic procedures). These distinctions are shown in Table 1. In a similar manner, Miller et al. [START_REF] Miller | Care Coordination in Intensive Care Units: Communicating Across Information Spaces[END_REF] identified four nested hierarchical levels of decision making, including 1) unit resource coordination, 2) care coordination, 3) patient care planning and 4) patient care delivery. They conclude that care coordination and decision making involve two distinct 'information spaces': one associated with the coordination of resources (levels 1 & 2 above) and one with the coordination and administration of patient care (levels 3 & 4) ([10], p. 157). These levels are not independent. Miller et al. found a strong association between patient-related goals and team coordination goals, and called for more research regarding the modeling of information flows and conceptual transitions (i.e. coordination activities) across information spaces. In the remainder of this section we will present a few examples of process modeling efforts related to healthcare settings. This is not a comprehensive review, but serves to illustrate the type of research that has been done in the area.
Fiore et al. suggests that process modeling can be used as a problem-solving tool for cross-functional teams. They argue that process modeling efforts can lead to the construction of a shared understanding of a given problem [START_REF] Fiore | Process mapping and shared cognition: Teamwork and the development of shared problem models, in Team cognition: Understanding the factors that drive process and performance2004[END_REF]. Here, the modeling process in itself enables team members to improve a limited understanding of the business process in question. In a similar manner, Aguilar-Savén claims that business process modeling enables a common understanding and analysis of a business process and argues that a process model can provide a comprehensive understanding of a process [START_REF] Aguilar-Savén | Business process modelling: Review and framework[END_REF].
In an attempt to investigate how process models can be used to build shared understanding within healthcare teams, Jun et al. identified eight distinct modeling methods and evaluated how healthcare workers perceived the usability and utility of different process modeling notations [START_REF] Jun | Health Care Process Modelling: Which Method When?[END_REF]. Among the modeling methods evaluated were traditional flowcharts, data flow diagrams, communication diagrams, swim-lane activity diagrams and state transition diagrams. The study, which included three different cases in a real-world hospital setting, concluded that healthcare workers considered the usability and utility of traditional flowcharts better than those of other diagram types. However, the complexity within the healthcare domain indicated that the use of a combination of several diagrams was necessary.
Rojo et al. applied BPMN when describing the anatomic pathology sub-processes in a real-world hospital setting [START_REF] Rojo | Implementation of the Business Process Modelling Notation (BPMN) in the modelling of anatomic pathology processes[END_REF]. They formed a multidisciplinary modeling team consisting of software engineers, health care personnel and administrative staff. The project was carried out in six stages: informative meetings, training, process selection, definition of work method, process description and process modeling. They concluded that the modeling effort resulted in an understandable model that easily could be communicated between several stakeholders.
Addressing the problem of aligning healthcare information systems to healthcare processes, Lenz et al. developed a methodology and a tool (Mapdoc), used to model clinical processes [START_REF] Lenz | Towards a continuous evolution and adaptation of information systems in healthcare[END_REF]. A modified version of UML's Activity Diagram was used to support interdisciplinary communication and cultivate a shared understanding of relevant problems and concerns. Here, the focus was to describe the organizational context of the IT application. They found process modeling to be particularly useful in projects where organizational redesign was among the goals.
Ramudhin et al. observed that modeling efforts within healthcare often involved the combination of multiple modeling methods or additions to existing methodology [START_REF] Ramudhin | A Framework for the Modelling, Analysis and Optimization of Pathways in Healthcare[END_REF]. They proposed an approach that involved the development of a new modeling framework customized for the healthcare domain, called medBPM. One novel feature of the framework was that all relevant aspects of a process were presented in one single view. The medBPM framework was tested in a pilot project in a US hospital. Preliminary results were encouraging with regard to the framework's ability to describe both "as-is" (descriptive) and "to-be" (prescriptive) processes.
In a recent paper, Fareedi et al. identified roles, tasks, competences and goals related to the ward round process in a healthcare unit [START_REF] Ali Fareedi | Modelling of the Ward Round Process in a Healthcare Unit[END_REF]. They used a formal approach to implement the modeling results in the form of an ontology using OWL and the Protégé ontology editor. The overall aim was to improve the effectiveness of information systems use in healthcare by using the model to represent the information needs of clinical staff. Another point made by the authors was the formal ontology's direct applicability in improving the information flow in the ward round process. An ontological approach was also taken by Fox et al.: in the CREDO project the aim was to use an ontological approach to task and goal modeling in order to support complex treatment plans and care pathways [ ].
A common feature of all the languages used in these research efforts is that they presuppose a rational decision maker following relatively simple if-then-else or case-switch structures leading to a choice between one of several known alternatives. Here, the decision process itself is embedded in the upstream activities/tasks preceding the decision point. The decision point then simply acts as the point in time when a commitment to action is made. This is unproblematic for trivial, structured decision episodes, but falls short of describing the factors influencing an unstructured problem/decision situation like the ones encountered within complex and dynamic healthcare processes.
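The following sketch (an invented example, not taken from any of the cited languages) illustrates the point: once encoded in a conventional process language, the decision point reduces to a branch over an already-available variable, while the decision context that the authors argue should be modeled stays implicit.

```python
def triage_gateway(lab_value: float) -> str:
    # In a typical process model the whole "decision" collapses into this test.
    if lab_value > 10.0:
        return "start treatment plan A"
    return "start treatment plan B"

# What such a model leaves implicit: who decides, under which goals, with which
# (possibly incomplete and changing) information, and in which situation.
implicit_decision_context = {
    "decision_maker": "attending physician",
    "goals": ["stabilize the patient", "avoid unnecessary intervention"],
    "information_sources": ["lab results", "patient history", "team and room availability"],
    "situation_dependence": "high (interrupts, evolving patient state, resource constraints)",
}

print(triage_gateway(12.3))
```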
Research Directions
The objective of our research is to use different conceptualizations and models of situation awareness in combination with models of clinical decision making as a "theoretical lens" for capturing and describing the decision requirements (i.e. knowledge/expertise, goals, resources, and information, communication/coordination needs) related to the perception, comprehension and projection of a situation leading up to a critical decision. The aim is to investigate how we can model these requirements as extensions to conventional process modeling languages (e.g. BPMN), possibly in the form of a discrete decision perspective [START_REF] Curtis | Process modeling[END_REF]. The GRAI Grid formalism, as described for instance in [START_REF] Lillehagen | Active Knowledge Modeling of Enterprises[END_REF][START_REF] Ravat | Collaborative Decision Making: Perspectives and Challenges[END_REF], is of particular interest to investigate further, as it focuses on the decisional aspects of the management of systems. The GRAI grid defines decision centres (points where decisions are made) as well as the informational relationships among these decision points.
In our work, the following preliminary research questions have been identified:
• What can the main research results within clinical decision making and situation awareness tell us about how experts adapt to complexity and dynamism, and synthesize and interpret information in context for the purpose of decision making in dynamic work environments?
• How can we model the concepts of a "situation" and "context" in complex and dynamic healthcare processes characterized by high levels of coordination, communication and information needs?
• Will the use of a separate decision perspective in a process model enhance the knowledge building process [START_REF] Fiore | Towards an understanding of macrocognition in teams: developing and defining complex collaborative processes and products[END_REF] and the shared understanding of the decision context among a set of stakeholders?
• Will the use of a separate decision perspective in process models lead to a better understanding of how we can design information system support for decision-making tasks in dynamic work environments?
To address these areas one needs to design and evaluate a set of modeling constructs that makes it possible to represent aspects of coordination, communication and decision making in clinical processes. This involves identifying relevant case(s) from a healthcare work environment and collecting data using participant observation and interviews of subjects in their natural work settings that can be used as a basis for further research work. The development of the modeling constructs can be done using the principles from design science described for instance by Hevner et al. [START_REF] Hevner | Design Science in Information Systems Research[END_REF] and March et al. [START_REF] March | Design and Natural Science Research on Information Technology[END_REF]. Hevner et al. [START_REF] Hevner | Design Science in Information Systems Research[END_REF] define design science as an attempt to create artifacts that serve human purposes, as opposed to natural and social sciences, which try to understand reality as is. We intend to develop a set of modeling constructs (i.e. design artifacts) that can describe the decision requirements that form the basis for adequate situation awareness in complex and dynamic healthcare processes. By developing a decision view, it is possible to envision process models that communicate a decision-centric view in addition to the traditional activity-, role-or information-centred views. From the previous discussion on situation awareness and decision making models, we intend to define what conceptual elements should be included in the decision view. Taking into consideration Endsley's model of situation awareness, the concept of a situation is central along with what constitutes timely, relevant information attuned to the decision maker's current (but probably changing) goals. A number of criteria have been defined to characterize and assess the quality of enterprise and process models and modeling languages (see for instance [START_REF] Krogstie | Model-Based Development and Evolution of Information Systems: A Quality Approach[END_REF]). Hence, the model constructs developed in relation to the previously mentioned decision view must be evaluated with respect to a set of modeling language quality criteria.
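As a purely hypothetical illustration of what such modeling constructs might look like, the sketch below encodes a candidate "decision view" as data structures; all names and fields are our own assumptions, not constructs defined in the paper, and they merely combine the situation-awareness levels with goals, information needs and coordination aspects discussed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationNeed:
    description: str
    source: str          # e.g. bedside monitor, lab system, another team member
    reliability: str     # e.g. "high", "uncertain"

@dataclass
class Situation:
    cues: List[str]          # level 1: what must be perceived
    interpretation: str      # level 2: what the cues mean with respect to the goals
    projection: str          # level 3: the expected near-future development

@dataclass
class DecisionPoint:
    name: str
    decision_maker_role: str
    goals: List[str]
    situation: Situation
    information_needs: List[InformationNeed] = field(default_factory=list)
    coordination_with: List[str] = field(default_factory=list)   # other roles or units
```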
Conclusion
In this paper we have discussed the state of the art in modeling of processes and decisions within healthcare. The paper relates three strands of research: 1) healthcare process modeling, 2) situation awareness and decision-making theories, and 3) decision support technologies, with the overall aim of improving decision quality within healthcare.
Studying the dynamic decision-making process under complex conditions can lead us to a better understanding of the communication, coordination and information needs of healthcare personnel operating in dynamic and challenging environments. In addition, we propose that the ability to express these insights as one of several modeling perspectives in healthcare process models could prove useful for capturing the requirements that must be imposed on information system support in dynamic work environments.
Fig. 1. Situation awareness (from [31], p. 35)
Table 1. Categorization of healthcare processes ([START_REF] Lenz | IT Support for Healthcare Processes -Premises, Challenges[END_REF])

                           Organizational processes            Patient treatment processes
Site-independent           Generic process patterns            Clinical guidelines
Site-specific adaptation   Organization-specific workflows     Medical pathways
http://www.w3.org/TR/owl-features/
http://protege.stanford.edu
Janis Stirna
Jānis Grabis
email: grabis@rtu.lv
Martin Henkel
email: martinh@dsv.su.se
Jelena Zdravkovic
email: jelenaz@dsv.su.se
Capability Driven Development - an Approach to Support Evolving Organizations
Keywords: Enterprise modeling, capabilities, capability driven development, model driven development
The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development, taking into account changes in the application context of the solution - Capability Driven Development (CDD). A meta-model for representing business and IS designs, consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is proposed. The use of the meta-model is exemplified by a case from the energy efficiency domain. A number of issues related to the use of the CDD approach, namely the capability delivery application, the CDD methodology, and tool support, are also discussed.
Introduction
In order to improve alignment between business and information technology, information system (IS) developers continuously strive to increase the level of abstraction of development artifacts. A key focus area is making the IS designs more accessible to business stakeholders to articulate their business needs more efficiently. These developments include object-orientation, component based development, business process modeling, enterprise modeling (EM) and software services design. These techniques are mainly aimed at capturing relatively stable, core properties of business problems and at representing functional aspects of the IS [START_REF] Wesenberg | Enterprise Modeling in an Agile World[END_REF]. However, the prevalence and volatility of the Internet shifts the problem solving focus to capturing instantaneous business opportunities [START_REF]Cloud Computing: Forecasting Change[END_REF] and increases the importance of nonfunctional aspects. Furthermore, the context of use for modern IS is not always predictable at the time of design; instead, an IS should have the capability to support different contexts. Hence, we should consider the context of use and under which circumstances the IS, in congruence with the business system, can provide the needed business capability. A system's capability is thus determined not only at design-time but also at run-time, when the system's ability to handle changes in context is put to the test. The following anecdotal evidence can be used to illustrate the importance of capabilities. A small British bakery was growing successfully and decided to promote its business by offering its cupcakes at a discount via the collective buying website Groupon. As a result it had to bake 102 000 cupcakes and suffered losses comparable to its yearly profit. The bakery did not have mechanisms in place to manage the unforeseen and dramatic surge in demand - it did not have the capability of baking 102 000 cupcakes, nor mechanisms for foreseeing the consequences. Another example is a mobile telecommunications company offering telephone services over its network, similar in all respects to traditional fixed-line providers. Such a service consists of the same home telephone, with an additional box between the telephone and the wall. However, unlike ordinary fixed-line telephony, it cannot connect to emergency services (112) in the event of a power outage. In this case the provided capability is unstable in a changing context.
A capability-driven approach to development should be able to alleviate such issues and to produce solutions that fit the actual application context.
From the business perspective, we define a capability as being the ability to continuously deliver a certain business value in dynamically changing circumstances. Software applications (and their execution environments) are an integral part of capabilities. This means that it is important to tailor these applications with regard to functionality, usability, reliability and other factors required by users operating in varying contexts. That puts pressure on software development and delivery methods. The software development industry has responded by elaborating Model Driven Development (MDD) methods and by adopting standardized design and delivery approaches such as service-oriented architecture and cloud computing. However, there are a number of major challenges when it comes to making use of MDD to address business capabilities:
§ The gap between business requirements and current MDD techniques. Model driven approaches and tools still operate with artifacts defined on a relatively low abstraction level.
§ Inability to model execution contexts. In complex and dynamically changing business environments, modeling just a service providing business functionality in a very limited context of execution is not sufficient.
§ High cost for developing applications that work in different contexts. Software developers, especially SMEs, have difficulties marketing their software globally because of the effort it takes to adhere to localization requirements and constraints in the context where the software will be used.
§ Limited support for modeling changes in non-functional requirements. Model driven approaches focus on functional aspects at a given time point, rather than representing the evolution of both functional and non-functional system requirements over time.
§ Limited support for "plasticity" in applications. The current context-aware and front-end adaptation systems focus mainly on technical aspects (e.g., location awareness and using different devices) rather than on business context awareness.
§ Limited platform usage. There is limited modeling support for defining the ability of the IS to make use of new platforms, such as cloud computing platforms. Cloud computing is a technology-driven phenomenon, and there is little guidance for the development of cloud-based business applications.
We propose to support the development of capabilities by using EM techniques as a starting point of the development process, and to use model-based patterns to describe how the software application can adhere to changes in the execution context. Our vision is to apply enterprise models representing enterprise capabilities to create executable software with built-in contextualization patterns thus leading to Capability Driven Development (CDD).
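The following rough sketch only illustrates this intuition and is not the capability meta-model presented later in the paper: a capability observes its run-time context and switches between delivery patterns accordingly. It reuses the bakery anecdote from the introduction, with an invented demand threshold and invented pattern descriptions.

```python
# A capability selecting a delivery pattern based on its run-time context.
delivery_patterns = {
    "normal": lambda orders: f"bake in-house ({orders} orders/day)",
    "surge":  lambda orders: f"cap daily intake and subcontract the overflow ({orders} orders/day)",
}

def assess_context(orders_per_day: int) -> str:
    # The threshold is invented; in CDD it would come from modeled context indicators.
    return "surge" if orders_per_day > 500 else "normal"

def deliver_capability(orders_per_day: int) -> str:
    return delivery_patterns[assess_context(orders_per_day)](orders_per_day)

print(deliver_capability(200))       # normal demand
print(deliver_capability(102000))    # the Groupon surge from the bakery example
```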
The objective of this paper is to present the capability meta-model, to discuss its feasibility by using an example case, and to outline a number of open development issues related to practical adoption of the CDD approach.
The research approach taken in this paper is conceptual and argumentative. Concepts used in enterprise modeling, context representation and service specification are combined together to establish the capability meta-model. Preliminary validation and demonstration of the CDD approach is performed using an example of designing a decision support system for optimizing energy flows in a building. Application of the meta-model is outlined by analyzing its role in development of capability delivery applications. The CDD methodology is proposed following the principles of agile, iterative and real-time software development methodologies.
The remainder of the paper is organized as follows. Section 2 presents related work. In section 3 requirements for CDD are discussed. Section 4 presents the CDD meta-model. It is applied to an example case in section 5. Section 6 discusses aspects of the development methodology needed for the CDD approach. The paper ends with some concluding remarks in section 7.
Related Work
In the strategic management discipline, a company's resources and capabilities have long been seen as the primary source of profitability and competitive advantage - [START_REF] Barney | Firm Resources and Sustained Competitive Advantage[END_REF] has united them into what has become known as the resource-based view of the company. Accordingly, Michael Porter's value chain identifies top-level activities with the capabilities needed to accomplish them [START_REF] Porter | Competitive Advantage: Creating and Sustaining Superior Performance[END_REF]. In Strategy Maps and Balanced Scorecards, Kaplan and Norton also analyze capabilities through the company's perspectives, e.g. financial, customers', and others [START_REF] Kaplan | Strategy Maps: Converting Intangible Assets into Tangible Outcomes[END_REF]. Following this, in the research within Business-IT alignment, there have been attempts to consider resources and capabilities as the core components in enterprise models, more specifically, in business value models [START_REF] Osterwlader | Modeling value propositions in e-Business[END_REF][START_REF] Kinderen | Reasoning about customer needs in multi-supplier ICT service bundles using decision models[END_REF]. However, in none of these works are capabilities formally linked to IS models. In the SOA reference architecture [START_REF] Oasis | Reference Architecture Foundation for Service Oriented Architecture Version 1.0[END_REF] capability has been described as a business functionality that, through a service, delivers a well-defined user need. However, in the specification, not much attention is given to the modeling of capability, nor is it linked to software services. In the Web Service research, capability is considered purely on the technical level, through service level agreements and policy specifications [START_REF] Papazoglou | Design Methodology for Web Services and Business Processes[END_REF].
In order to reduce development time, to improve software quality, and to increase development flexibility, MDD has established itself as one of the most promising software development approaches. However, [START_REF] Asadi | MDA-Based Methodologies: An Analytical Survey[END_REF] show that the widely practiced MDD specialization - Model Driven Architecture [START_REF] Kleppe | MDA Explained[END_REF] - and the methodologies following it mainly assume requirements as given a priori. [START_REF] Loniewski | A Systematic Review of the Use of Requirements Engineering Techniques in Model-Driven Development[END_REF] and [START_REF] Yue | A systematic review of transformation approaches between user requirements and analysis models[END_REF] indicate that MDA starts with system analysis models. They also survey various methods for integrating requirements into an overall model-driven framework, but do not address the issue of requirements origination. There is limited evidence of MDA providing the promised benefits [START_REF] Mohagheghi | Where Is the Proof? -A Review of Experiences from Applying MDE in Industry[END_REF]. Complexity of tools, their methodological weaknesses, and the too low abstraction level of development artifacts are among the main areas of improvement for MDD tools [START_REF] Henkel | Pondering on the Key Functionality of Model Driven Development Tools: the Case of Mendix[END_REF].
Business modeling and Enterprise Modeling (EM) [START_REF]Perspectives on Business Modelling: Understanding and Changing Organisations[END_REF] have been used for business development and early requirements elicitation for many years, but a smooth (nearly automated) transition to software development has not been achieved due to the immaturity of the existing approaches and a lack of tools. Enterprise-wide models are also found in [17], where the enterprise architecture of ArchiMate is extended with an intentional aspect capturing the goals and requirements for creating an enterprise system. A comparable solution is developed in [START_REF] Pastor | Linking Goal-Oriented Requirements and Model-Driven Development[END_REF], where a generic process is presented for linking i* and the OO-Method as two representatives of Goal-Oriented Requirements Engineering (GORE) and MDD, respectively. In [START_REF] Zikra | Bringing Enterprise Modeling Closer to Model-Driven Development[END_REF] a recent analysis of the current state in this area is presented, and a meta-model for integrating EM with MDD is proposed.
Model driven approaches also show promise for the development of cloud-based applications, which has been extensively discussed at the 1st International Conference on Cloud Computing and Service Sciences, cf. [START_REF] Esparza-Peidro | Towards the next generation of model driven cloud platforms[END_REF][START_REF] Hamdaqa | A reference model for developing cloud applications[END_REF]. However, these investigations are currently at the conceptual level and aim at demonstrating the potential of MDD for cloud computing. A number of European research projects, e.g. REMICS and SLA@SOI, have been defined in this area.
Methods for capturing context in applications and services have achieved a high level of maturity, and they provide a basis for applying context information in software development and execution. [START_REF] Vale | COMODE: A framework for the development of contextaware applications in the context of MDE[END_REF] describe MDD for context-aware applications, where the context model is bound to a business model and encompasses information about the user's location, time, profile, etc. Context awareness has been extensively explored for Web Services, both methods and architectures, as reported in [START_REF]Enabling Context-Aware Web Services: Methods, Architectures, and Technologies[END_REF]. It is also studied in relation to workflow adaptation [START_REF] Smanchat | A survey on context-aware workflow adaptations[END_REF]. Lately, [START_REF] Hervas | A Context Model based on Ontological Languages; a proposal for Information Visualisation[END_REF] has suggested a formal context model composed of ontologies describing users, devices, environment and services. In [START_REF] Liptchinsky | A Novel Approach to Modeling Context-Aware and Social Collaboration Processes[END_REF] an extension to state charts for capturing context-dependent variability in processes has been proposed.
Non-functional aspects of service-oriented applications are controlled using QoS data and SLAs. Dynamic binding and service selection methods allow replacing underperforming services at run-time [START_REF] Comuzzi | A framework for QoS-based Web service contracting[END_REF]. However, QoS and SLAs focus only on a limited number of technical performance criteria, with little regard to the business value of these criteria.
In summary, there are a number of contributions addressing the problem of adjusting the IS depending on the context; however, the business capability concept is not explicitly addressed in this context.
Requirements for Capability Driven Development
In this section we discuss a number of requirements motivating the need for CDD.
Currently, the business situation in which the IS will be used is predetermined at design time, and at run-time only adaptations that are within the scope of the planned situation can usually be made. However, emerging business contexts require rapid responses to changes in the business context and the development of new capabilities, which also requires run-time configuration and adjustment of applications. In this respect a capability modeling meta-model linking business designs with application contexts and IS components is needed.
Designing capabilities is a task that combines both business and IS knowledge. Hence both domains need to be integrated in such a way that allows establishing IS support for the business capabilities.
Current EM and business development approaches have grown from the principle that a single business model is owned by a single company. In spite of distributed value chains and virtual organizations [START_REF] Davidow | The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century[END_REF] this way of designing organizations and their IS still prevails. The CDD approach would aim to support co-development and co-existence of several business models by providing "connection points" between business models based on goals and business capabilities.
Most of the current MDD approaches are only efficient at generating relatively simple data processing applications (e.g. form-driven). They do not support, for example, complex calculations, advanced user interfaces, or scalability of the application in the cloud. CDD should advance the state of the art by supporting the modeling of the application execution context; this includes modeling the ability to switch service providers and platforms. Furthermore, the capability approach would also allow deploying more adequate security measures, by designing overall security approaches at design-time and then customizing them at deployment and run-time.
Foundation for Capability Driven Development
The capability meta-model presented in this section provides the theoretical and methodological foundation for CDD. The meta-model is developed on the basis of industrial requirements and related research on capabilities. An initial version of the meta-model is given in Figure 1. The meta-model has three main sections:
§ Enterprise and capability modeling. This focuses on developing organizational designs that can be configured according to the context dependent capabilities in which they will be used, i.e. it captures a set of generic solutions applicable in many different business situations.
§ Capability delivery context modeling. This represents the situational context under which the solutions should be applied, including indicators for measuring the context properties.
§ Capability delivery patterns. These represent reusable solutions for reaching business goals under different situational contexts. The context defined for the capability should match the context in which the pattern is applicable.
Enterprise and Capability Modeling
This part covers the modeling of business goals, key performance indicators (KPIs), and the business processes needed to accomplish the goals. We also specify the resources required to perform the processes. The associations between these modeling components are based on the meta-model of the EM approach EKD [29]. The concept of capability extends this meta-model towards being suitable for CDD.
Capability expresses an ability to reach a certain business objective within the range of certain contexts by applying a certain solution. Capability essentially links together business goals with patterns by providing contexts in which certain patterns (i.e. business solutions) should be applicable.
Each capability supports or is motivated by one business goal. In principle, business goals can be seen as internal means for designing and managing the organization, and capabilities as offerings to external customers. A capability requires or is supported by specific business processes, is provided by specific roles, and needs certain resources and IS components. The distinguishing characteristic of a capability is that it is designed to be provided in a specific context. The desired goal fulfillment levels can be defined by using a set of goal fulfillment indicators - Goal KPIs.
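To make the relationships among these concepts concrete, the sketch below renders the main meta-model elements as Python data classes. This is only an illustrative rendering; the class and attribute names are our assumptions and are not prescribed by the meta-model in Figure 1.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative rendering of the main meta-model concepts; names are assumptions.

@dataclass
class KPI:
    name: str
    target: float                       # desired level of goal fulfillment (Goal KPI)

@dataclass
class Goal:
    description: str
    kpis: List[KPI] = field(default_factory=list)

@dataclass
class ProcessVariant:
    name: str                           # e.g. "Template based audit"

@dataclass
class Process:
    name: str
    variants: List[ProcessVariant] = field(default_factory=list)

@dataclass
class ContextType:
    name: str                           # e.g. "building ICT integration level"

@dataclass
class Capability:
    name: str
    goal: Goal                          # supports / is motivated by one goal
    processes: List[Process]            # requires / is supported by processes
    resources: List[str]                # resources and IS components (simplified)
    context_types: List[ContextType]    # the context it is designed to be provided in
```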
Context Modeling
The context is any information that can be used to characterize the situation in which the capability can be provided. It describes circumstances, i.e. the context situation, such as geographical location, platforms and devices used, as well as business conditions and environment. These circumstances are defined by different context types. The context situation represents the current context status. Each capability delivery pattern is valid for a specific set of context situations, as defined by the pattern validity space. The context KPIs are associated with a specific capability delivery pattern. They represent context measurements which are of vital importance for the capability delivery. The context KPIs are used to monitor whether the pattern chosen for capability delivery is still valid for the current context situation. If the pattern is not valid, then capability delivery should be dynamically adjusted by applying a different pattern or reconfiguring the existing pattern (i.e., changing the delivery process, reassigning resources, etc.). Technically, the context information is captured using a context platform in a standardized format (e.g. XCoA). Context values change according to the situation. The context determines how a capability is delivered, which is represented by a pattern.
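The following sketch illustrates, under our own simplifying assumptions, how context KPIs could be used at run-time to check whether the currently selected pattern is still valid for the observed context situation; the attribute and helper names (validity_space, observe_context, select_pattern, reconfigure) are hypothetical and not part of the meta-model.

```python
def pattern_is_valid(pattern, situation) -> bool:
    """A pattern stays valid while the observed context situation lies in its validity space."""
    return all(situation.get(ctx_type) in allowed
               for ctx_type, allowed in pattern.validity_space.items())

def monitor_capability(capability, observe_context, select_pattern, reconfigure):
    """Simplified run-time loop: observe context values and switch or reconfigure patterns."""
    pattern = select_pattern(capability, observe_context())
    while True:
        situation = observe_context()            # e.g. values pushed by the context platform
        if not pattern_is_valid(pattern, situation):
            candidate = select_pattern(capability, situation)
            if candidate is not pattern:
                pattern = candidate              # apply a different pattern
            else:
                reconfigure(pattern, situation)  # or adjust the existing one
        yield pattern                            # hand the active pattern to the delivery engine
```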
Capability Delivery Pattern
A pattern is used to: "describe a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem in such a way that you can use this solution a million times over, without ever doing it the same way twice" [START_REF] Alexander | A pattern language[END_REF]. This principle of describing a reusable solution to a recurrent problem in a given context has been adopted in various domains such as software engineering, information system analysis and design [START_REF] Gamma | Design Patterns: Elements of Reusable Object-Oriented Software Architecture[END_REF] as well as organizational design.
Organizational patterns have proven to be a useful way for the purpose of documenting, representing, and sharing best practices in various domains (c.f. [START_REF] Niwe | Organizational Patterns for B2B Environments-Validation and Comparison[END_REF]).
In the CDD approach we amalgamate the principle of reuse and execution of software patterns with the principle of sharing best practices of organizational patterns. Hence, capability delivery patterns are generic and abstract design proposals that can be easily adapted, reused, and executed. Patterns will represent reusable solutions in terms of business process, resources, roles and supporting IT components (e.g. code fragments, web service definitions) for delivering a specific type of capability in a given context. In this regard the capability delivery patterns extend the work on task patterns performed in the MAPPER project [START_REF] Sandkuhl | Evaluation of Task Pattern Use in Web-based Collaborative Engineering[END_REF].
Each pattern describes how a certain capability is to be delivered within a certain context and what resources, processes, roles and IS components are needed. In order to provide a fit between required and available resources, KPIs for monitoring the quality of capability delivery are defined in accordance with the organization's goals. The KPIs measure whether the currently available resources are sufficient in the current context. In order to resolve resource availability conflicts, conflict resolution rules are provided.
Example Case
To exemplify the proposed approach we model the case of a building operator aiming to run its buildings efficiently and in an environmentally sustainable manner. The case is inspired by the FP7 project EnRiMa - "Energy Efficiency and Risk Management in Public Buildings" (proj. no. 260041). The objective of the EnRiMa project is to develop a decision support system (DSS) for optimizing energy flows in a building. In this paper we envision how this service will be used once the DSS is operational. The challenge that the capability driven approach should address is the need to operate different buildings (e.g. new, old, carbon neutral) under different market conditions (e.g. fixed energy prices, flexible prices), with different energy technologies (e.g. energy storage, photovoltaic (PV)), and with different ICT technologies (e.g. smart sensors, advanced ICT infrastructure, closed ICT infrastructure, remote monitoring, no substantial ICT support). The EnRiMa DSS aims to provide building-specific optimization by using customized energy models describing the energy flows of each building. The optimization can be based on building data from the onsite building management systems, for example giving the current temperature and ventilation air flow. The project also aims to provide a DSS that can be installed onsite or deployed in the cloud.
Fig. 2. A generic goal model for a building operator
Enterprise Modeling
The top goal is refined into a number of sub-goals, each linked to one or several KPIs. This is a simplification; in real life there are more sub-goals and KPIs to consider than figure 2 shows. In this particular case the decomposition of the top goal into the five sub-goals should be seen in conjunction with the KPIs, i.e. the building operator wants to achieve all of the sub-goals, but since that is not possible for each particular building, the owner defines specific KPIs to be used for the optimization tasks.
In summary, KPIs are used when designing the capabilities to set the level of goal fulfillment expected from them; in the capability driven approach presented here we thus use indicators to define the different levels of goal fulfillment that can be expected.
Processes are central for coordinating the resources that are needed for a capability. In this case there are processes that are executed once, e.g. for the initial configuration of the system, and then re-executed when the context changes. We here include four basic processes:
Energy audit and configuration process. As a part of improving the energy efficiency of a building there is a need to perform an energy audit and to configure the decision support system with general information about the building. The energy audit results in a model of the building's energy flows, for example to determine how much of the electricity goes to heating, and to determine the efficiency level of the technical equipment (such as boilers). Besides the energy flow there is also a need to configure the system with information about the glass area of the building, hours of operation and so on. Depending on the desired capability the process can take a number of variants, ranging from simple estimation to full-scale audits. Note that if the context changes, for example if the installed energy technology in the building changes, the configuration needs to be repeated. We define two variants of this process: Template based - using generic building data to estimate energy flows; Full energy audit - doing a complete energy flow analysis, leading to a detailed model of the building.
ICT infrastructure integration process. To continuously optimize the energy efficiency of a building there is a need to monitor the building's behavior via its installed building management system. For example, by monitoring the temperature changes the cooling system can be optimized to not compensate for small temperature fluctuations. This process can take several variants, depending on the context in the form of the building management system's ability to integrate with external systems. In this case we define two variants: Manual data entry - data entered manually; Integration - data fetched directly from the building management system. The actual integration process depends on which building management system is installed (e.g. the Siemens Desigo system).
Deployment process. Depending on the access needs, the decision support system can be executed at the building site, at a remote location, or on a cloud platform provided by an external provider. Process variants: On-site, External, Cloud provider.
Energy efficiency monitoring and optimization process. This process is at the core of delivering the capability, i.e. monitoring, analyzing and optimizing the energy flows is what can lead to lower energy consumption. A very basic variant, addressing a simple context, is to just monitor for failures in one of the building systems. A more advanced variant, catering to highly automated buildings, is to perform a daily, automated analysis to change the behavior of the installed building technologies. Process variants: Passive monitoring - monitoring for failures; Active optimization - performing pro-active optimizations based on detailed estimations. Depending on the context, variants of these processes can be activated, as illustrated by the sketch below; this is described further in the next section.
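As an illustration only, the following sketch shows one way the choice of process variants could be driven by context properties for this case; the property names ("ict_integration", "deployment") and the selection rules are assumptions made for the example, not part of the EnRiMa project.

```python
def select_process_variants(context: dict) -> dict:
    """Pick one variant per process for the building-operator case (illustrative rules only)."""
    high_ict = context.get("ict_integration") == "high"
    return {
        "Energy audit and configuration": "Full energy audit" if high_ict else "Template based",
        "ICT infrastructure integration": "Integration" if high_ict else "Manual data entry",
        "Deployment": context.get("deployment", "Cloud provider"),
        "Energy monitoring and optimization": "Active optimization" if high_ict else "Passive monitoring",
    }

# Example: an older building with a closed ICT infrastructure, operated on-site
print(select_process_variants({"ict_integration": "low", "deployment": "On-site"}))
```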
Context Modeling
The DSS can be deployed to a wide range of contexts. To exemplify the varying conditions we here describe two simplified context types:
Older building, low ICT monitoring - where the building has a low degree of ICT integration abilities, and the overall desire of the building owner is to monitor the building's energy usage and minimize costs.
Modern building, high ICT infrastructure -where integration with the building system is possible, a building model allowing continuous optimizations is possible, and the building owner wants to balance CO 2 emissions and cost minimization.
Each of these context types can be addressed by capabilities (see figures 3 and 4) that guide the selection of the right processes or process variants; this will be further described in the section on patterns. The examples here present the enterprise models at design-time. To detect a context change at runtime we define a set of context KPIs. These allow us to monitor the goal fulfillment at runtime by comparing it with the measurable situational properties. For example, the context KPI "Energy consumption 200 kWh/m2" should be compared with the actual energy consumption (see figure 3).
Capability Delivery Patterns
The EnRiMa DSS will be used to balance various, often contradictory, operator goals, e.g. to lower the energy costs in buildings and to reduce CO2 emissions. Each building, however, is different, and thus the context of execution for the system will vary. Therefore we design a set of process variants. The role of capability delivery patterns is to capture and represent which process variants should be used in which contexts to deliver which capabilities. For example, if a building has the Siemens Desigo building management system, then a pattern describes how to integrate it with the EnRiMa DSS and which executable components (e.g. web services) should be used. If the building has a closed system, then manual data input should be used instead. Table 1 shows two capabilities and their relation to variants of the energy audit and of the integration with the existing ICT systems of the building. Moreover, we identify those context KPIs that can be of use when monitoring the process execution.
Table 1. Example of two context patterns, each making use of process variants.
Capability delivery pattern contains | Capability: Old building, low ICT | Capability: Modern building, high ICT
ICT infrastructure integration process | Pattern: Manual data entry | Pattern: Integrate with Siemens Desigo
Energy audit and configuration process | Pattern: Template based audit | Pattern: Run full energy audit
The patterns shown here omit details such as forces and usage guidelines, e.g. explaining how to apply and use the processes and/or executable services. In a real-life case these should be developed and included in the pattern body.
Discussion
In this section we will discuss issues pertinent to usage of CDD, namely capability delivery application (CDA), CDD methodology, and tool support.
Capability Delivery Application
A company requesting a particular capability represents it using the concepts of the CDD meta-model. The main principle of CDD is that, in comparison to traditional development methods, the software design part is supported by improving both the analysis side and the implementation side. On the analysis side, the capability representation is enriched and architectural decisions are simplified by using patterns. On the implementation side, the detailed design complexity is reduced by relying on, for example, traditional web services or cloud-based services. The resulting CDA is a composite application based on external services.
Figure 5 shows the three conceptual layers of the CDA: (1) the enterprise modeling layer; (2) the design layer; and (3) the execution layer. The EM layer is responsible for the high-level representation of the required capabilities. The design layer is responsible for composing meta-capabilities from capability patterns, which is achieved by coupling patterns with executable services. The execution layer is responsible for executing the capability delivery application and adjusting it to the changing context.
The requested capability is modeled using EM techniques and according to the capability meta-model described in this paper. The patterns are analyzed in order to identify atomic capabilities that can be delivered by internal or external services, using a set of service selection methods that build on existing ones [START_REF] Chen | A method for context-aware web services selection[END_REF]. The availability of internal services is identified by matching the capability definition against the enterprise architecture; a set of matching rules will have to be elaborated for this purpose.
Fig. 5. Layered view of capability delivery application
A process composition language is used to orchestrate the services selected for delivering the requested capability. The process composition model includes multiple process execution variants [START_REF] Lu | On managing business processes variants[END_REF]. The capabilities are delivered with different front-ends, which are modelled using an extended user interface modelling language. The external services used in the CDA should be able to deliver the requested performance in the defined context. The necessary service functionality and the non-functional requirements corresponding to the context definition are transformed into a service provisioning blueprint [START_REF] Nguyen | Blueprint Template Support for Engineering Cloud-Based Services[END_REF], which is used as a starting point for binding capability delivery models with executable components and their deployment environment. The service provisioning blueprint also includes KPIs to be used for monitoring the capability delivery. We envision that the CDA is deployed together with its simulation model and run-time adjustment algorithms based on goal and context KPIs. The key task of these algorithms is to enact the appropriate process execution variant in response to context changes.
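As a rough illustration of what a service provisioning blueprint could carry, the snippet below assembles a small blueprint-like structure from a capability's context definition, KPIs, and candidate services; the field names and the example endpoint are our assumptions and do not follow the blueprint template of the cited work literally.

```python
import json

def make_blueprint(capability_name, context_definition, kpis, services):
    """Assemble a simplified provisioning blueprint for binding delivery models to executable components."""
    blueprint = {
        "capability": capability_name,
        "context": context_definition,        # e.g. {"ict_integration": "high"}
        "monitoring_kpis": kpis,              # goal and context KPIs to monitor at run-time
        "services": [                         # candidate executable components
            {"name": s["name"], "endpoint": s["endpoint"], "sla": s.get("sla", {})}
            for s in services
        ],
    }
    return json.dumps(blueprint, indent=2)

print(make_blueprint(
    "Modern building, high ICT",
    {"ict_integration": "high"},
    [{"name": "Energy consumption", "target": "200 kWh/m2"}],
    [{"name": "DesigoIntegration", "endpoint": "https://example.org/desigo", "sla": {"availability": "99%"}}],
))
```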
Business capabilities could also be delivered using traditional service-oriented and composite applications. However, the envisioned CDA better suits the requirements of CDD by providing integration with enterprise models and built-in algorithms for dynamically adjusting the application in response to a changing execution context.
The Process of Capability Driven Development
An overview of the envisioned CDD process is shown in Figure 6. It includes three main capability delivery cycles: 1) development of the capability delivery application; 2) execution of the capability delivery application; and 3) capability refinement and pattern updating. These three cycles address the core requirements of CDD by starting development with enterprise-level organizational and IS models, adjusting the capability delivery at application run-time, and establishing and updating capability delivery patterns. CDD should also encompass run-time adjustment algorithms, because the capability is delivered in a changing context where both business aspects (e.g., the current business situation (growth, decline), priorities, personnel availability) and technical aspects (e.g., location, device, workload) matter. Once the CDA is developed and deployed, it is continuously monitored and adjusted according to the changing context. Monitoring is performed using the KPIs included in the system during development, and adjustment is made using algorithms provided by the CDD methodology.
To support the development of the CDA, a CDD methodology is needed. It is based on agile and model driven IS development principles and consists of the CDD development process, a language for representing capabilities according to the CDD meta-model, as well as modeling tools. The main principles of the CDD methodology should be:
§ Use of enterprise models understandable to business stakeholders,
§ Support for heterogeneous development environments as opposed to a single vendor platform,
§ Equal importance of both design-time and run-time activities with a clear focus on different development artifacts,
§ Rapid development of applications specific to a business challenge,
§ Search for the most economically and technically advantageous solution.
Tool support is also important for CDD. Since EM is a part of CDD, a modeling tool is needed for this purpose. It should mainly address the design phase, because at run-time the tools provided by the target platform will be used.
We are currently planning to develop an open source Eclipse-based tool for CDD and will use the Eclipse EMF plug-in and other relevant plug-ins as the development foundation. Models are built on the basis of extensions of modeling languages such as EKD, UML and executable BPMN 2.0.
Concluding Remarks and Future Work
We have proposed Capability Driven Development - an approach that integrates organizational development with IS development, taking into account changes in the application context of the solution. We have presented a meta-model for representing business designs and exemplified it with a case from the energy efficiency domain. This is, in essence, research in progress, and hence we have also discussed a number of issues for future work related to the use of the CDD approach, namely the capability delivery application, the CDD methodology, and tool support.
Two important challenges to be addressed are the availability of patterns and the implementation of algorithms for dynamic adjustment of the CDA. In order to ensure pattern availability, an infrastructure and methods for the life-cycle management of patterns are required. In some cases, incentives for sharing patterns among companies can be devised, which is particularly promising in the field of energy efficiency. There could be a large number of different adjustment algorithms; their elaboration and implementation should follow a set of general, open principles for incorporating algorithms developed by third parties.
The main future directions are a thorough validation of the capability meta-model and the formulation of rules for matching required capabilities to existing or envisioned enterprise resources represented in the form of enterprise models and architectures.
Fig. 1. The initial capability meta-model
Fig. 6. Capability Driven Development methodology
| 38,875 | [
"977607",
"1002486",
"1003544",
"942421"
] | [
"300563",
"302733",
"300563",
"300563"
] |
01353135 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://hal.science/hal-01353135/file/Liris-5889.pdf | Fernando De
Katherine Breeden
Blue Noise through Optimal Transport
CR Categories: I.4.1 [Image Processing and Computer Vision]: Digitization and Image Capture - Sampling. Keywords: blue noise, power diagram, capacity constraints
We present a fast, scalable algorithm to generate high-quality blue noise point distributions of arbitrary density functions. At its core is a novel formulation of the recently-introduced concept of capacityconstrained Voronoi tessellation as an optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike previous work. We exploit the variational nature of this formulation to design an efficient optimization technique of point distributions via constrained minimization in the space of power diagrams. Our mathematical, algorithmic, and practical contributions lead to high-quality blue noise point sets with improved spectral and spatial properties.
Introduction
Coined by [START_REF] Ulichney | Digital Halftoning[END_REF], the term blue noise refers to an even, isotropic, yet unstructured distribution of points. Blue noise was first recognized as crucial in dithering of images since it captures the intensity of an image through its local point density, without introducing artificial structures of its own. It rapidly became prevalent in various scientific fields, especially in computer graphics, where its isotropic properties lead to high-quality sampling of multidimensional signals, and its absence of structure prevents aliasing. It has even been argued that its visual efficacy (used to some extent in stippling and pointillism) is linked to the presence of a blue-noise arrangement of photoreceptors in the retina [START_REF] Yellott | Spectral consequences of photoreceptor sampling in the rhesus retina[END_REF]].
Previous Work
Over the years, a variety of research efforts targeting both the characteristics and the generation of blue noise distributions have been conducted in graphics. Arguably the oldest approach to algorithmically generate point distributions with a good balance between density control and spatial irregularity is through error diffusion [START_REF] Floyd | An adaptive algorithm for spatial grey scale[END_REF][START_REF] Ulichney | Digital Halftoning[END_REF], which is particularly well adapted to low-level hardware implementation in printers. Concurrently, a keen interest in uniform, regularity-free distributions appeared in computer rendering in the context of anti-aliasing [START_REF] Crow | The aliasing problem in computer-generated shaded images[END_REF]. [START_REF] Cook | Stochastic sampling in computer graphics[END_REF] proposed the first dart-throwing algorithm to create Poisson disk distributions, for which no two points are closer together than a certain threshold. Considerable efforts followed to modify and improve this original algorithm [START_REF] Mitchell | Generating antialiased images at low sampling densities[END_REF][START_REF] Mccool | Hierarchical Poisson disk sampling distributions[END_REF][START_REF] Jones | Efficient generation of Poisson-disk sampling patterns[END_REF][START_REF] Bridson | Fast Poisson disk sampling in arbitrary dimensions[END_REF][START_REF] Gamito | Accurate multidimensional Poisson-disk sampling[END_REF]. Today's best Poisson disk algorithms are very efficient and versatile [START_REF] Dunbar | A spatial data structure for fast Poisson-disk sample generation[END_REF][START_REF] Ebeida | Efficient maximal Poisson-disk sampling[END_REF], even running on GPUs [START_REF] Wei | Parallel Poisson disk sampling[END_REF][START_REF] Bowers | Parallel Poisson disk sampling with spectrum analysis on surfaces[END_REF][START_REF] Xiang | Parallel and accurate Poisson disk sampling on arbitrary surfaces[END_REF]. Fast generation of irregular low-discrepancy sequences has also been proposed [START_REF] Niederreiter | Random Number Generation and Quasi-Monte-Carlo Methods[END_REF][START_REF] Lemieux | Fast capacity constrained Voronoi tessellation[END_REF]; however, these methods based on the radical-inverse function rarely generate high-quality blue noise.
Figure 1: Memorial. Our variational approach allows sampling of arbitrary functions (e.g., a high-dynamic range image courtesy of P. Debevec), producing high-quality, detail-capturing blue noise point distributions without spurious regular patterns (100K points, 498 s).
In an effort to allow fast blue noise generation, the idea of using patterns computed offline was raised in [Dippé and Wold 1985]. To remove potential aliasing artifacts due to repeated patterns, [START_REF] Cohen | Wang tiles for image and texture generation[END_REF] recommended the use of non-periodic Wang tiles, which subsequently led to improved hierarchical sampling [START_REF] Kopf | Recursive Wang tiles for real-time blue noise[END_REF]] and a series of other tile-based alternatives [START_REF] Ostromoukhov | Fast hierarchical importance sampling with blue noise properties[END_REF]Lagae and Dutré 2006;[START_REF] Ostromoukhov | Sampling with polyominoes[END_REF]. However, all precalculated structures used in this family of approaches rely on the offline generation of high-quality blue noise.
Consequently, a number of researchers focused on developing methods to compute point sets with high-quality blue noise properties, typically by evenly distributing points over a domain via Lloyd-based iterations [McCool and Fiume 1992;[START_REF] Deussen | Floating points: A method for computing stipple drawings[END_REF][START_REF] Secord | Weighted Voronoi stippling[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF][START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF], electro-static forces [START_REF] Schmaltz | Electrostatic halftoning[END_REF], statistical-mechanics interacting Gaussian particle models [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]], or farthest-point optimization [Schlömer et al. 2011]. These iterative methods consistently generate much improved point distributions, albeit at sometimes excessive computational complexity.
Finally, recent efforts have provided tools to analyze point sets using spatial/spectral [Lagae and Dutré 2008;Schlömer and Deussen 2011] and differential [START_REF] Wei | Differential domain analysis for non-uniform sampling[END_REF] methods. Extensions to anisotropic [Li et al. 2010b;[START_REF] Xu | Blue noise sampling of surfaces[END_REF], non-uniform [START_REF] Wei | Differential domain analysis for non-uniform sampling[END_REF], multiclass [START_REF] Wei | Multi-class blue noise sampling[END_REF]], and general spectrum sampling [START_REF] Zhou | Point sampling with general noise spectrum[END_REF]] have also been recently introduced.
Motivation and Rationale
Despite typically being slower, optimization methods based on iterative displacements of points have consistently been proven superior to other blue noise generation techniques. With the exception of [START_REF] Schmaltz | Electrostatic halftoning[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF], these iterative approaches rely on Voronoi diagrams and Lloyd's relaxations [START_REF] Lloyd | Least squares quantization in PCM[END_REF]]. To our knowledge, the use of Lloyd's algorithm for blue noise sampling was first advocated in [START_REF] Mccool | Hierarchical Poisson disk sampling distributions[END_REF] to distribute points by minimizing the root mean square (RMS) error of the quantization of a probability distribution. However, the authors noticed that a "somewhat suboptimal solution" was desirable to avoid periodic distribution: Lloyd's algorithm run to convergence tends to generate regular regions with point or curve defects, creating visual artifacts. Hence, a limited number of iterations was used in practice until [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] proposed the use of a Capacity-Constrained Voronoi Tessellation (CCVT), a rather drastic change in which a constraint of equi-area partitioning is added to algorithmically ensure that each point conveys equal visual importance. However, this original approach and its various improvements rely on a discretization of the capacities, and thus suffer from a quadratic complexity, rendering even GPU implementations [Li et al. 2010a] unable to gracefully scale up to large point sets. Two variants were recently proposed to improve performance, both providing an approximation of CCVT by penalizing the area variance of either Voronoi cells [START_REF] Chen | Variational blue noise sampling[END_REF] or Delaunay triangles [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF]].
Contributions
In this paper, we show that CCVT can be formulated as a constrained optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike related work. The variational nature of our formulation is also amenable to a fast, scalable, and reliable numerical treatment. Our resulting algorithm will be shown, through spectral analysis and comparisons, to generate high-grade blue noise distributions. Key differences from previous methods include:
• a reformulation of CCVT as a continuous constrained minimization based on optimal transport, as opposed to the discretized approximation suggested in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF];
• an optimization procedure over the space of power diagrams that satisfies the capacity constraints up to numerical precision, as opposed to an approximate capacity enforcement in the space of Delaunay triangulations [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] or Voronoi diagrams [START_REF] Chen | Variational blue noise sampling[END_REF];
• a regularity-breaking procedure to prevent local aliasing artifacts that occur in previous approaches.
Redefining Blue Noise through Optimal Transport
Before presenting our algorithm for point set generation, we spell out our definition of blue noise as a constrained transport problem. We consider an arbitrary domain D over which a piecewise-continuous positive field ρ (e.g., intensity of an image) is defined.
Background
Two crucial geometric notions will be needed. We briefly review them next for completeness.
Optimal Transport. The optimal transport problem, dating back to Gaspard Monge [START_REF] Villani | Optimal Transport: Old and New[END_REF]], amounts to determining the optimal way to move a pile of sand to a hole of the same volume-where "optimal" means that the integral of the distances by which the sand is moved (one infinitesimal unit of volume at a time) is minimized.
The minimum "cost" of moving the piled-up sand to the hole, i.e., the amount of sand that needs to be moved times the Lp distance it has to be moved, is called the p-Wasserstein metric. The 2-Wasserstein metric, using the L2 norm, is most common, and is often referred to as the earth mover's distance. Optimal transport has recently been of interest in many scientific fields; see [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF][START_REF] Bonneel | Displacement interpolation using Lagrangian mass transport[END_REF][START_REF] De Goes | An optimal transport approach to robust reconstruction and simplification of 2d shapes[END_REF].
Power Diagrams. Given a set of points X = {x_i} with associated weights W = {w_i}, the power cell of x_i is defined as
$$V_i^w = \{\, x \in D \;\mid\; \|x - x_i\|^2 - w_i \le \|x - x_j\|^2 - w_j,\; \forall j \,\}.$$
The power diagram of (X, W) is the cell complex formed by the power cells V_i^w. Note that when the weights are all equal, the power diagram coincides with the Voronoi diagram of X; power diagrams and their associated dual (called regular triangulations) thus generalize the usual Voronoi/Delaunay duality.
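As a small illustration of this definition, the function below assigns an arbitrary query point to its power cell; with all weights equal it reduces to nearest-neighbor (Voronoi) assignment. This is a naive O(n) sketch, not the cell-complex construction used later via CGAL.

```python
import numpy as np

def power_cell_owner(x, points, weights):
    """Index i of the power cell containing x: argmin_i ||x - x_i||^2 - w_i."""
    d2 = np.sum((points - x) ** 2, axis=1) - weights
    return int(np.argmin(d2))

pts = np.array([[0.2, 0.2], [0.8, 0.7]])
print(power_cell_owner(np.array([0.5, 0.5]), pts, np.zeros(2)))           # 1: closer to point 1 (Voronoi case)
print(power_cell_owner(np.array([0.5, 0.5]), pts, np.array([0.3, 0.0])))  # 0: the weight w_0 enlarges cell 0
```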
Blue Noise as a Constrained Transport Problem
Sampling a density function ρ(x) consists of picking a few representative points xi that capture ρ well. This is, in essence, the halftoning process that a black-and-white printer or a monochrome pointillist painter uses to represent an image. In order to formally characterize a blue noise distribution of points, we see sampling as the process of aggregating n disjoint regions Vi (forming a partition V of the domain D) into n points xi: if ρ is seen as a density of ink over D, sampling consists in coalescing this distribution of ink into n Dirac functions (i.e., ink dots).
We can now revisit the definition of blue noise sampling through the following requirements:
A. Uniform Sampling: all point samples should equally contribute to capturing the field ρ. Consequently, their associated regions Vi must all represent the same amount m of ink:
$$m_i = \int_{V_i} \rho(x)\, dx \;\equiv\; m.$$
B. Optimal Transport: the total cost of transporting ink from the distribution ρ to the finite point set X should be minimized, thus representing the most effective aggregation. This ink transport cost for an arbitrary partition V is given as
$$E(X, \mathcal{V}) = \sum_i \int_{V_i} \rho(x)\, \|x - x_i\|^2\, dx,$$
i.e., as the sum per region of the integral of all displacements of the local ink distribution ρ to its associated ink dot.
C. Local Irregularity: the point set should be void of visual artifacts such as Moiré patterns and other aliasing effects; that is, it should be free of local spatial regularity.
Note that the first requirement implies that the resulting local point density will be proportional to ρ, as often required in importance sampling. The second requirement favors an isotropic distribution of points, since such partitions minimize the transport cost. The final requirement prevents regular or hexagonal grid patterns from emerging. Together, these three requirements provide a density-adapted, isotropic, yet unstructured distribution of points, capturing the essence of blue noise as a constrained transport problem.
Power Diagrams vs. Voronoi Diagrams
While the cost E may resemble the well-known CVT energy [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF], the reader will notice that it is more general, as the cells Vi are not restricted to be Voronoi. In fact, [START_REF] Aurenhammer | Minkowski-type theorems and least-squares clustering[END_REF] proved that capacity-constrained partitions (requirement A) that minimize the cost E (requirement B) for a given point set are power diagrams. So, instead of searching through the entire space of possible partitions, we can rather restrict the partitions V to be power diagrams, that is, Vi ≡ V_i^w. Within this subspace of partitions, the cost functional E coincides with the HOT$_{2,2}$ energy of [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF] (i.e., the power diagram version of the CVT energy). This difference is crucial: while methods restricting their search to Delaunay meshes [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] or Voronoi diagrams [START_REF] Chen | Variational blue noise sampling[END_REF] can only approximate the constraints in requirement A, this power diagram formulation has the additional variables (weights) necessary to allow exact constraint enforcement, thus capturing sharp features much more clearly than previous methods (see Sec. 5).
In fact, all of our results exhibit uneven weights as demonstrated in Fig. 2, reinforcing the importance of power vs. Voronoi diagrams.
Variational Formulation
Leveraging the fact that requirements A and B can only be enforced for power diagrams, we describe next our variational characterization of blue noise distributions of weighted point sets (X, W ). Requirement C will be enforced algorithmically, as discussed in Sec. 4.6, by detecting regularity and locally jittering the point set to guide our optimization towards non-regular distributions.
Functional Extremization
We can now properly formulate our constrained minimization to enforce requirements A and B.
Lagrangian formulation.
A common approach to deal with a constrained minimization is to use Lagrange multipliers Λ={λi}i=1...n to enforce the n constraints (one per point) induced by requirement A. The resulting optimization procedure can be stated as:
$$\text{Extremize}\quad E(X, W) + \sum_i \lambda_i\, (m_i - m)$$
with respect to xi, wi, and λi, where the functional E is now clearly labeled with the point set and its weights as input (since we know that only power diagrams can optimize the constrained transport energy), and mi is the amount of ink in the region V w i :
$$E(X, W) = \sum_i \int_{V_i^w} \rho(x)\, \|x - x_i\|^2\, dx, \qquad m_i = \int_{V_i^w} \rho(x)\, dx. \qquad (1)$$
Simpler formulation. The Lagrangian multipliers add undue complexity: they contribute an additional n variables to the optimization. Instead, one can extremize a simpler function F depending only on the weighted point set: we show in the appendix that the extremization above is equivalent to finding a stationary point of the following scalar functional:
$$F(X, W) = E(X, W) - \sum_i w_i\, (m_i - m). \qquad (2)$$
With n fewer variables to deal with, we will show in Sec. 4 that blue noise generation can be efficiently achieved.
Figure: Test from [START_REF] Secord | Weighted Voronoi stippling[END_REF] (20K points). While [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] does not capture density gradients very cleanly (see close-ups), our result is similar to CCVT [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] on this example, at a fraction of the CPU time. Comparative data courtesy of the authors.
Functional Properties
The closed-form expression of our functional allows us not only to justify the Lloyd-based algorithmic approaches previously used in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a], but also to derive better numerical methods to find blue noise point sets by exploiting a few key properties.
Concavity in W: For a fixed set of points X, the Hessian of our functional w.r.t. weights is the negated weighted Laplacian operator, as shown in the appendix. Consequently, extremizing F is actually a maximization with respect to all wi's. This is an important insight that will lead us to an efficient numerical approach comparable in speed to recent approximate CCVT methods [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF], but much faster than the quadratic scheme used in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF][Li et al. 2010a].
Gradient in X: Now for a fixed set of weights W, our functional is the HOT$_{2,2}$ energy of [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF] (i.e., the power diagram version E of the CVT energy), with one extra term due to the constraints. Several numerical methods can be used to minimize this functional. Note that, surprisingly, the functional gradient w.r.t. positions turns out to be simply
$$\nabla_{x_i} F = 2\, m_i\, (x_i - b_i), \quad \text{with} \quad b_i = \frac{1}{m_i} \int_{V_i^w} x\, \rho(x)\, dx, \qquad (3)$$
because the boundary term of the Reynolds' transport theorem cancels out the gradients of the constraint terms (see appendix). Extremizing F thus implies that we are looking for a "centroidal power diagram", as xi and its associated weighted barycenter bi have to match to ensure a zero gradient.
Discussion
We now discuss the key differences between our transport-based formulation and previous CCVT methods.
Discrete vs. Continuous Formulation. The initial CCVT method and its improvements [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF][START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a] adopted a discrete formulation in which the density function ρ is represented by a finite set of samples, with the number of samples being "orders of magnitude" larger than the number n of points. Blue noise point sets are then generated via repeated energydecreasing swaps between adjacent clusters, without an explicit use of weights. This discrete setup has several numerical drawbacks. First, while samples can be thought of as quadrature points for capacity evaluation, their use causes accuracy issues: in essence, using samples amounts to quantizing capacities; consequently, the transport part of the CCVT formulation is not strictly minimized. Second, the computational cost induced by the amount of swaps required to reach convergence is quadratic in the number of samples-and thus impractical beyond a few thousand points. Instead, we provided a continuous functional whose extremization formally encodes the concept behind the original CCVT method [START_REF] Balzer | Capacity-constrained Voronoi diagrams in finite spaces[END_REF].
The functional F in Eq. 2 was previously introduced in [START_REF] Aurenhammer | Minkowski-type theorems and least-squares clustering[END_REF]] purely as a way to enforce capacity constraints for a fixed point set; here we extend F as a function of weights wi and positions xi, and the closed-form gradient and Hessian we explicitly derived will permit, in the next section, the development of a fast numerical treatment to generate high-quality blue noise distributions in a scalable fashion, independently of the sampling size of the density function.
Approximate vs. Exact Constraints. Attempts at dealing with CCVT through continuous optimization have also been investigated by sacrificing exact enforcement of capacity constraints. In [START_REF] Balzer | Voronoi treemaps for the visualization of software metrics[END_REF][START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF]], for instance, a point-by-point iterative approach is used to minimize the capacity variance of Voronoi cells to best fit the capacity constraints; [START_REF] Chen | Variational blue noise sampling[END_REF] recommend adding the capacity variance as a penalty term to the CVT energy instead; [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] take a dual approach by minimizing capacity variance on Delaunay triangles instead of Voronoi cells. These different variants all mix the requirements of good spatial distribution and capacity constraints into a single minimization, leading to an over-constrained formulation. Minima of their functionals thus always represent a tradeoff between capacity enforcement and isotropic spatial distribution. Instead, our formulation allows exact capacity constraints by controlling the power diagram through the addition of a weight per vertex: we can now optimize distribution quality while constraining capacity, resulting in high quality blue noise sampling of arbitrary density field (see quadratic ramp in Fig. 10 for a comparison with recent methods).
Numerical Optimization
We now delve into the numerical methods and algorithmic details we use to efficiently generate blue noise point distribution based on our variational formulation.
Overall Strategy
We proceed with point set generation by computing a critical point of the functional F defined in Eq. 2: we extremize the functional F by repeatedly performing a minimization step over positions followed by a projection step over weights to enforce constraints. The power diagram of the weighted point set is updated at each step via the CGAL library [2010]. While this alternating procedure is typical for non-linear multivariable problems, we will benefit from several properties of the functional as already alluded to in Sec. 3:
• enforcing the capacity constraints for a fixed set of point positions is a concave maximization;
• minimizing F for a fixed set of weights is akin to the minimization of the CVT energy, for which fast methods exist;
• staying clear of regular patterns is enforced algorithmically through a simple local regularity detection and removal.
These three factors conspire to result in a fast and scalable generation of high-quality blue noise point sets as we discuss next.
Constraint Enforcement
For a given set of points X, we noted in Sec. 3.1 that finding the set of weights Wopt to enforce that all capacities are equal is a concave maximization. Fast iterative methods can thus be applied to keep computational complexity to a minimum.
Since the Hessian of F(X, W) is equal to the negated weighted Laplacian $\Delta_{w,\rho}$ (see appendix), Newton iterations are particularly appropriate to find the optimal set of weights Wopt. At each iteration, we thus solve the sparse (Poisson) linear system:
$$\Delta_{w,\rho}\, \delta = \big( m - m_1,\; m - m_2,\; \ldots,\; m - m_n \big)^t, \qquad (4)$$
where the right-hand side of the equation is equal to the current gradient of F w.r.t. weights. A standard line search with Armijo condition [START_REF] Nocedal | Numerical optimization[END_REF] is then performed to adapt the step size along the vector δ before updating the vector W of current weights. Given that the Hessian is sparse and symmetric, many linear solvers can be used to efficiently solve the linear system used in each Newton iteration; in our implementation, we use the sparse QR factorization method in [START_REF] Davis | Algorithm 915, SuiteSparseQR: Multifrontal multithreaded rank-revealing sparse QR factorization[END_REF]. Typically, it only takes 3 to 5 such iterations to bring the residual of our constraints to within an accuracy of $10^{-12}$.
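A minimal sketch of one such Newton step follows, assuming the weighted Laplacian has already been assembled as a sparse matrix; it uses a plain sparse solve with tiny regularization instead of the sparse QR factorization cited above, and reduces step-size control to simple backtracking on the constraint residual.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_weight_step(laplacian, masses, target_mass, weights, eval_masses):
    """One Newton step on the weights: solve Eq. (4) and backtrack on the residual.

    laplacian   : (n x n) sparse weighted Laplacian (the negated Hessian of F w.r.t. weights)
    masses      : current cell capacities m_i
    target_mass : the common capacity m
    eval_masses : callback recomputing capacities for a trial set of weights
    """
    rhs = target_mass - masses                             # gradient of F w.r.t. weights
    n = laplacian.shape[0]
    A = (laplacian + 1e-9 * sp.identity(n)).tocsc()        # weights are defined up to a constant shift
    delta = spla.spsolve(A, rhs)                           # Newton direction
    step, residual0 = 1.0, np.linalg.norm(rhs)
    for _ in range(20):                                    # simple Armijo-like backtracking
        trial = weights + step * delta
        if np.linalg.norm(target_mass - eval_masses(trial)) < residual0:
            return trial
        step *= 0.5
    return weights                                         # no improvement found
```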
Transport Minimization
For a fixed set of weights W , we can move the locations of the n points in order to improve the cost of ink transport F(X, W ). Previous CCVT-based methods [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]Li et al. 2010a] used Lloyd's algorithm as the method of choice for their discrete optimization. In our continuous optimization context, we have more options. A Lloyd update where positions xi are moved to the barycenter bi of their associated weighted cell V w i can also be used to reliably decrease the transport cost: indeed, we prove in the appendix that the gradient of F(X, W ) is a natural extension of the gradient of the regular CVT energy. However, Lloyd's algorithm is a special case of a gradient descent that is known to suffer from linear convergence [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF]. We improve the convergence rate through line search, again using adaptive timestep gradient descent with Armijo conditions as proposed in [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]]. Note that quasi-Newton iterations as proposed in [START_REF] Liu | On Centroidal Voronoi Tessellation -energy smoothness and fast computation[END_REF] for the CVT energy are not well suited in our context: alternating weight and position optimizations renders the approximation of the Hessian matrix from previous gradients inaccurate, ruining the expected quadratic convergence.
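The position update can be sketched as follows; this is a simplified illustration using the gradient of Eq. (3) and an Armijo backtracking rule, with the recomputation of cells, masses, and barycenters abstracted behind callbacks rather than reproducing the actual implementation.

```python
import numpy as np

def position_descent_step(points, masses, barycenters, eval_F, c=1e-4):
    """One adaptive-timestep gradient step on point positions (Armijo backtracking).

    points, barycenters : (n, 2) arrays; masses : (n,) array of cell capacities m_i
    eval_F              : callback returning the transport functional for trial positions
    """
    grad = 2.0 * masses[:, None] * (points - barycenters)   # Eq. (3)
    f0, g2 = eval_F(points), np.sum(grad * grad)
    step = 1.0
    while step > 1e-12:
        trial = points - step * grad
        if eval_F(trial) <= f0 - c * step * g2:              # Armijo sufficient-decrease condition
            return trial
        step *= 0.5
    return barycenters.copy()                                # fall back to a plain Lloyd update
```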
Density Integration
Integrations required by our formulation can be easily handled through quadrature. However, poor quadrature choices may impair the convergence rate of our constraint enforcement. Given that blue noise sampling is most often performed on a rectangular greyscale image, we design a simple and exact procedure to compute integrals of the density field ρ inside each cell, as it is relatively inexpensive. Assuming that ρ is given as a strictly-positive piecewise constant field, we first compute the value m used in our capacity constraints by simply summing the density values times the area of each constant regions (pixels, typically), divided by n. We then perform integration within each V w i in order to obtain the mass mi, the barycenter bi, and the individual transport cost for each V w i . We proceed in three steps. First, we rasterize the edges of the power diagram and find intersections between the image pixels and each edge. Next we perform a scan-line traversal of the image and construct pixel-cell intersections. Integrated densities, barycenters, and transport costs per cell are then accumulated through simple integration within each pixel-cell intersection where the density is constant. Note that our integration differs from previous similar treatments (e.g., [START_REF] Secord | Weighted Voronoi stippling[END_REF]Lecot and Lévy 2006]) as we provide robust and exact computation not only for cell capacities, but also for their barycenters and transport costs-thus avoiding the need for parameter tweaking required in quadrature approximations.
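For illustration, the snippet below approximates this integration by assigning every pixel center to its power cell and accumulating mass, barycenter, and transport cost per cell; unlike the exact clipping described above, pixels are not split across cell boundaries, and the brute-force O(pixels × n) assignment is a simplification introduced only for brevity.

```python
import numpy as np

def accumulate_cells(density, points, weights):
    """Approximate per-cell mass m_i, barycenter b_i and transport cost E_i on a pixel grid."""
    h, w = density.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([(xs.ravel() + 0.5) / w, (ys.ravel() + 0.5) / h])  # pixel centers in [0,1]^2
    rho = density.ravel() / (h * w)                                          # pixel masses
    # power distance |x - x_i|^2 - w_i; argmin over i gives the owning power cell (naive O(P*n))
    d2 = ((pix[:, None, :] - points[None, :, :]) ** 2).sum(-1) - weights[None, :]
    owner = np.argmin(d2, axis=1)
    n = len(points)
    m = np.bincount(owner, weights=rho, minlength=n)
    bx = np.bincount(owner, weights=rho * pix[:, 0], minlength=n)
    by = np.bincount(owner, weights=rho * pix[:, 1], minlength=n)
    b = np.column_stack([bx, by]) / np.maximum(m, 1e-16)[:, None]
    sq = ((pix - points[owner]) ** 2).sum(-1)
    E = np.bincount(owner, weights=rho * sq, minlength=n)
    return m, b, E
```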
Boundary Treatment
While some of the results we present use a periodic domain (see Sec. 5), most sampling applications involve a bounded domain D, often given as a convex polygon (as in the case of a simple image). Dealing with boundaries in our approach is straightforward. First, boundary power cells are clipped by D before computing their cell barycenters bi and capacities mi. Second, the coefficients of the weighted Laplacian ∆ w,ρ are computed through the ratio of (possibly clipped) dual edge lengths and primal edge lengths, as proposed in [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]]. Thence, the presence of boundaries adds only limited code and computational complexity and it does not affect the convergence rates of any of the steps described above. Note that other boundary treatments could be designed as well, using mirroring or other typical boundary conditions if needed.
Detecting & Breaking Regularities
The numerical procedure described so far solely targets requirements A and B, and as such, nothing preempts regularity. In fact, hexagonal lattices are solutions to our extremization problem in the specific case of constant density and a toroidal domain - and these solutions correspond to "deep" extrema of our functional, as the cost of ink transport E reaches a global minimum on such regular packings of points. Instead, we algorithmically seek "shallow" extrema to prevent regularity (see inset).
For capacity-constrained configurations, local regularities are easily detected by evaluating the individual terms E_i measuring the transport cost within each region V_i^w: we assign a regularity score r_i per point as the local absolute deviation of E_i, i.e.,
$$ r_i = \frac{1}{|\Omega_i|} \sum_{j \in \Omega_i} |E_i - E_j|, $$
where Ω_i is the one-ring of x_i in the regular triangulation of (X, W). We then refer to the region around a point x_i as aliased if r_i < τ, where the threshold τ = 0.25 m² in all our experiments. When aliased, a point and its immediate neighbors are jittered by a Gaussian noise with a spatial variance of 1.0/ρ(x_i) and maximum magnitude √m to break symmetries as recommended in [START_REF] Lucarini | Symmetry-break in Voronoi tessellations[END_REF]. To prevent a potential return to the same crystalline configuration during subsequent optimization steps, we further relocate 1% of the aliased points to introduce defects. Since our numerical approach relies on a line search with Armijo rule (seeking local extrema), starting the optimization from this stochastically scrambled configuration will fall back to a nearby, shallower extremum, hence removing regularity as demonstrated in Fig. 5.
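A minimal Python sketch of this detection-and-jittering step is given below. It is not the authors' implementation: the one-ring Ω_i is stood in for by any user-supplied neighborhood structure (e.g., k-nearest neighbors), the domain is assumed to be the unit square for relocation, and `rho_at` is an assumed callable returning the density at a point.

```python
import numpy as np

def break_regularity(X, rho_at, cell_cost, neighbors, m_bar,
                     tau_factor=0.25, relocate_frac=0.01, rng=None):
    """Detect 'aliased' (locally regular) points and scramble them.

    cell_cost : (n,) per-cell transport costs E_i,
    neighbors : list of index arrays, neighbors[i] standing in for the one-ring Omega_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    r = np.array([np.abs(cell_cost[i] - cell_cost[neighbors[i]]).mean() for i in range(n)])
    aliased = r < tau_factor * m_bar ** 2                  # threshold tau = 0.25 m^2

    X_new = X.copy()
    hit = set(np.flatnonzero(aliased))
    for i in list(hit):
        hit.update(neighbors[i].tolist())                  # jitter each aliased point and its neighbors
    for i in hit:
        sigma = np.sqrt(1.0 / max(rho_at(X[i]), 1e-12))    # spatial variance 1/rho(x_i)
        noise = rng.normal(0.0, sigma, size=2)
        cap = np.sqrt(m_bar)                               # maximum magnitude sqrt(m)
        norm = np.linalg.norm(noise)
        if norm > cap:
            noise *= cap / norm
        X_new[i] = X[i] + noise

    idx = np.flatnonzero(aliased)
    if len(idx):                                           # relocate ~1% of aliased points to add defects
        k = max(1, int(relocate_frac * len(idx)))
        for i in rng.choice(idx, size=k, replace=False):
            X_new[i] = rng.random(2)
    return X_new, aliased
```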
It is worth pointing out that all CVT-based methods (including the existing CCVT schemes) may result in point distributions with local regular patterns. While a few approaches avoided regularity by stopping optimization before convergence, we instead prevent regularity by making sure we stop at shallow minima. This τ -based shallowness criterion can be seen as an alternative to the temperature parameter proposed in [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]], where the level of excitation of a statistical particle model controls the randomness on the formation of point distributions. Our simple approach is numerically robust and efficient: in practice, we observed that the proposed regularity breaking routine takes place at most once in each example test, independently of the value of τ .
Optimization Schedule
We follow a simple optimization schedule to make the generation process automatic and efficient for arbitrary inputs. We start with a random distribution of points conforming to ρ (better initialization strategies could be used, of course). We then proceed by systematically alternating optimization of weights (to enforce constraints, Sec. 4.2) and positions (to minimize transport cost, Sec. 4.3). Weight optimization is initialized with zero weights, and iterated until ‖∇_W F‖ ≤ 0.1 m (the capacity m is used to properly adapt the convergence threshold to the number of points n and the density ρ). For positions, we optimize our functional until ‖∇_X F‖ ≤ 0.1 √(n m³) (again, scaling is chosen here to account for density and number of points). We found that performing Lloyd steps until the gradient norm is below 0.2 √(n m³) saves computation (it typically requires 5 iterations); only then do we revert to a full-blown adaptive timestep gradient descent until convergence (taking typically 10 iterations). Once an extremum of F is found, we apply the regularity detecting-and-breaking procedure presented in Sec. 4.6, and, if an aliased point was found and jittered, we start our optimization again. This simple schedule (see pseudocode in Fig. 6) was used as is on all our results.
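For readability, here is a hedged Python sketch of that alternating schedule. The four callables (weight optimization, position line search, Lloyd step, regularity breaking) are placeholders for the routines of Sections 4.2-4.6 and are not defined in the paper under these names.

```python
import numpy as np

def blue_noise_schedule(X0, optimize_weights, optimize_positions, lloyd_step,
                        detect_and_break, grad_X_norm, m_bar, max_outer=100):
    """Minimal sketch of the alternating weight/position optimization schedule."""
    n = len(X0)
    X = X0.copy()
    W = np.zeros(n)                                    # weights initialized to zero
    tol_pos = 0.1 * np.sqrt(n * m_bar ** 3)            # position threshold 0.1 sqrt(n m^3)
    tol_lloyd = 2.0 * tol_pos                          # Lloyd phase stops at 0.2 sqrt(n m^3)

    for _ in range(max_outer):
        W = optimize_weights(X, W)                     # enforce capacities (Sec. 4.2)
        while grad_X_norm(X, W) > tol_lloyd:           # cheap Lloyd steps first
            X = lloyd_step(X, W)
            W = optimize_weights(X, W)
        while grad_X_norm(X, W) > tol_pos:             # then full line-search gradient descent
            X = optimize_positions(X, W)
            W = optimize_weights(X, W)
        X, found_aliased = detect_and_break(X, W)      # regularity check (Sec. 4.6)
        if not found_aliased:
            return X, W                                # converged to a shallow extremum
    return X, W
```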
Results
We ran our algorithm on a variety of inputs: from constant density (Fig. 8) to photos (Fig. 1, 3, and 4) and computer-generated images (Fig. 2 and 10), without any need for parameter tuning. Various illustrations based on zoneplates, regularity, and spectral analysis are used throughout the paper to allow easy evaluation of our results and to demonstrate how they compare to previous work.
Spectral Properties. The special case of blue noise point distribution for a constant density in a periodic domain has been the subject of countless studies. It is generally accepted that such a point distribution must have a characteristic blue-noise profile for the radial component of its Fourier spectra, as well as low angular anisotropy [START_REF] Ulichney | Digital Halftoning[END_REF]. This profile should exhibit no low frequencies (since the density is constant), a high peak around the average distance between adjacent points, along with a flat curve end to guarantee white noise (i.e., no distinguishable features) in the high frequency range. Fig. 8 demonstrates that we improve upon the results of all previous CCVT-related methods, and fare arguably better than alternative methods such as [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF]; in particular, we systematically (i.e., not just on average over several distributions, but for every single run) get a flat spectrum in low and high frequencies, while keeping high peaks at the characteristic frequency. Note also that the method of [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] appears to slowly converge to our results when the ratio m/n (using their notation) goes to infinity with, evidently, much larger timings (Fig. 7).
Spatial Properties. We also provide evaluations of the spatial properties of our results. Fig. 8 shows two insightful visualizations of the typical spatial arrangement of our point distributions, side by side with results of previous state-of-the-art methods. The second row shows the gaps between white discs centered on sampling points with a diameter equal to the mean distance between two points; notice the uniformity of gap distribution in our result. The third row compares the number of neighbors for the Voronoi region of each site; as pointed out in [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF], the enforcement of the capacity constraints favors heterogeneous valences, with fewer noticeable regular regions. Finally, the minimum distance among all points normalized by the radius of a disc in a hexagonal tiling is a measure of distribution quality, known as the normalized Poisson disk radius, and recommended to be in the range [0.65, 0.85] by [Lagae and Dutré 2008]. In all our constant density blue noise examples, the normalized radius is in the range [0.71, 0.76].
Quadratic Ramp. Another common evaluation of blue noise sampling is to generate a point set for an intensity ramp, and count the number of points for each quarter of the ramp. Fig. 10 compares the point sets generated by our technique vs. state-of-the-art methods [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF]. While all the methods recover approximately the right counting of points per quarter, our result presents a noticeably less noisy, yet unstructured distribution of points.
Zoneplates. We also provide zoneplates in Fig. 8 for the function sin(x² + y²). Each zoneplate image was created via 32x32 copies of a 1024-point blue noise patch, followed by a Mitchell reconstruction filter to generate a 1024x1024 image with an average of one point per pixel as suggested in [Lagae and Dutré 2006]. Observe the presence of a second noise ring in previous methods, as opposed to the anti-aliased reconstruction achieved by our method.
Complexity. Previous CCVT methods analyzed the (worst-case) time complexity of a single iteration of their optimization approach. One iteration of our algorithm involves the construction of a 2D power diagram, costing O(n log n). It also involves the enforcement of the capacity constraints via a concave maximization w.r.t. the weights using a step-adaptive Newton method; the time complexity of this maximization is of the order of a single Newton step since the convergence rate is quadratic (see [START_REF] Nocedal | Numerical optimization[END_REF] for a more detailed proof), and therefore incurs the linear cost of solving a sparse (Poisson) linear system. For N-pixel images and n points, the total complexity of our algorithm thus becomes O(n log n + N), with the extra term corresponding to the cost of locating the pixels within each power cell through scan-line traversal. This is significantly better than the discrete versions of CCVT, which were either O(n² + nN log(N/n)) [START_REF] Balzer | Capacity-constrained Voronoi diagrams in continuous spaces[END_REF] or O(n² + nN) [Li et al. 2010a], and of the same order as the CCVT approximations in [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF]. However, we cannot match the efficiency of the multi-scale statistical particle model introduced in [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF], which scales linearly with the number of points and produces results arguably comparable with the best current methods of blue noise generation. One of our examples samples the luminance of a high dynamic range 512x768 image (see supplemental material), an order of magnitude more complex than the largest results demonstrated by CCVT-based methods. Note that we purposely developed a code robust to any input and any points-to-pixels ratio. However, code profiling revealed that about 40% of computation time was spent on the exact integration described in Sec. 4.4; depending on the targeted application, performance could thus be easily improved through quadrature [Lecot and Lévy 2006] and/or input image resampling if needed.
Stopping Criteria. As discussed in Sec. 4.7, we terminate optimization when ‖∇F‖ < ε, i.e., the first-order condition for identifying a locally optimal solution to a critical point search [Nocedal and Wright 1999]. Recent optimization-based blue noise methods [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF], on the other hand, have used the decrease of the objective function per iteration as their stopping criterion. However, a small decrease in the functional does not imply convergence, since a change of functional value depends both on the functional landscape and the step size chosen in each iteration. Favoring guaranteed high quality vs. improved timing, we prefer adopting the first-order optimality condition as our termination criterion for robust generation of blue noise distributions. Despite this purposely stringent convergence criterion, the performance of our method is similar to [START_REF] Chen | Variational blue noise sampling[END_REF] with their recommended termination based on functional decrease, but twice as fast if the method of [START_REF] Chen | Variational blue noise sampling[END_REF] is modified to use a stricter termination criterion based on the norm of the functional gradient. [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] advocate a fixed number of iterations, which, again, does not imply either convergence or high-quality results. Our timings and theirs are, however, similar for the type of examples the authors used in their paper. See Fig. 9 for a summary of the timings of our algorithm compared to the CCVT-based methods of [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF][START_REF] Chen | Variational blue noise sampling[END_REF] for the generation of blue noise sampling of a constant density field.
Future Work
We note that our numerical treatment is ripe for GPU implementations as each element (from power diagram construction to line search) is known to be parallelizable. The scalability of our approach should also make blue noise generation over non-flat surfaces and 3D volumes practical since our formulation and numerical approach generalizes to these cases without modification. Blue noise meshing is thus an obvious avenue to explore and evaluate for numerical benefits. On the theoretical side it would be interesting to seek a fully variational definition of blue noise that incorporates requirements A, B and C altogether. Generating anisotropic and multiclass sampling would also be desirable, as well as extending our regularity-breaking procedure to other CVT-based methods. Finally, the intriguing connection between HOT meshes [START_REF] Mullen | HOT: Hodge Optimized Triangulations[END_REF]] and our definition of blue noise (which makes the Hodge-star for 0-forms not just diagonal, but constant) may deserve further exploration.
Acknowledgements. We wish to thank the authors of [START_REF] Chen | Variational blue noise sampling[END_REF][START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] for providing data for comparisons, and Christian Lessig for proof-reading. FdG, KB, and MD acknowledge the valuable support of NSF grants DGE-1147470 and CCF-1011944 throughout this project.
Appendix: Functional Properties and Derivatives
In this appendix, we provide closed-form expressions for the first and second derivatives of the functional F defined in Eq. 2.
Notation: We denote by e_ij the regular edge between two adjacent points x_i and x_j, and by e*_ij the dual edge separating the partition regions V_i^w and V_j^w. (Remember that x ∈ e*_ij iff ‖x − x_i‖² − w_i = ‖x − x_j‖² − w_j.) We also refer to the average value of the field ρ over e*_ij as ρ_ij, and to the one-ring of x_i in the regular triangulation of (X, W) as Ω_i.
Reynolds transport theorem: The derivatives of F are most directly found by Reynolds theorem, which states that the rate of change of the integral of a scalar function f within a volume V is equal to the volume integral of the change of f , plus the boundary integral of the rate at which f flows through the boundary ∂V of outward unit normal n; i.e., in terse notation:
$$ \nabla \int_V f(x)\, dV \;=\; \int_V \nabla f(x)\, dV \;+\; \int_{\partial V} f(x)\, (\nabla x \cdot n)\, dA. $$
W.r.t. weights: Since the regions $V_i^w$ partition the domain D, the sum of all capacities is constant; hence,
$$ \nabla_{w_i} m_i + \sum_{j \in \Omega_i} \nabla_{w_i} m_j = 0. $$
Moreover, Reynolds theorem applied to the capacities yields
$$ \nabla_{w_i} m_j = -\frac{\rho_{ij}}{2}\, \frac{|e^*_{ij}|}{|e_{ij}|}. $$
Next, by using both Reynolds theorem and the equality of power distances along dual edges, one obtains
$$ \nabla_{w_i} E(X, W) = \sum_{j \in \Omega_i} (w_j - w_i)\,(\nabla_{w_i} m_j), \qquad
\nabla_{w_i} \Big[ \sum_j w_j (m_j - m) \Big] = m_i - m + \sum_{j \in \Omega_i} (w_j - w_i)(\nabla_{w_i} m_j). $$
Therefore, the gradient simplifies to
$$ \nabla_{w_i} F(X, W) = m - m_i. $$
Combining the results above yields that the Hessian of F with respect to weights is simply a negated weighted Laplacian operator:
$$ \nabla^2_W F(X, W) = -\Delta^{w,\rho} \quad \text{with} \quad \Delta^{w,\rho}_{ij} = -\frac{\rho_{ij}}{2}\, \frac{|e^*_{ij}|}{|e_{ij}|}. $$
For fixed points, F is thus a concave function in weights and there is a unique solution W_opt for any prescribed capacity constraints.
W.r.t. position:
We first note that $\nabla_{x_i} m_i + \sum_{j \in \Omega_i} \nabla_{x_i} m_j = 0$ as in the weight case. Using the definition of the weighted barycenter b_i (Eq. 3), Reynolds theorem then yields
$$ \nabla_{x_i} E(X, W) = 2\, m_i (x_i - b_i) + \sum_{j \in \Omega_i} (w_j - w_i)(\nabla_{x_i} m_j), \qquad
\nabla_{x_i} \Big[ \sum_j w_j (m_j - m) \Big] = \sum_{j \in \Omega_i} (w_j - w_i)(\nabla_{x_i} m_j). $$
Therefore:
$$ \nabla_{x_i} F(X, W) = 2\, m_i (x_i - b_i). $$
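These closed-form gradients can be sanity-checked numerically on a pixelized toy problem. The sketch below is an illustration, not part of the paper: it approximates power cells by assigning pixel centers, evaluates F(X, W) = Σ_i [E_i − w_i(m_i − m)], and compares central finite differences against m − m_i and 2 m_i(x_i − b_i). Because of the pixel discretization the match is only approximate.

```python
import numpy as np

def F_and_cells(X, W, rho):
    """Pixelized evaluation of F(X, W) together with per-cell masses and barycenters."""
    h, w = rho.shape
    ys, xs = np.mgrid[0:h, 0:w]
    P = np.stack([(xs.ravel() + 0.5) / w, (ys.ravel() + 0.5) / h], axis=1)
    pw = rho.ravel() / rho.size
    d2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    lab = np.argmin(d2 - W[None, :], axis=1)
    n = len(X)
    m = np.bincount(lab, weights=pw, minlength=n)
    E = np.bincount(lab, weights=pw * d2[np.arange(len(P)), lab], minlength=n)
    bx = np.bincount(lab, weights=pw * P[:, 0], minlength=n) / np.maximum(m, 1e-12)
    by = np.bincount(lab, weights=pw * P[:, 1], minlength=n) / np.maximum(m, 1e-12)
    m_bar = pw.sum() / n
    F = np.sum(E - W * (m - m_bar))
    return F, m, np.stack([bx, by], axis=1), m_bar

rng = np.random.default_rng(0)
rho = np.ones((256, 256))
X = rng.random((8, 2)); W = 0.01 * rng.standard_normal(8)
F, m, b, m_bar = F_and_cells(X, W, rho)
eps = 1e-4
# dF/dw_0 should be close to m_bar - m[0]
Wp = W.copy(); Wp[0] += eps
Wm = W.copy(); Wm[0] -= eps
print((F_and_cells(X, Wp, rho)[0] - F_and_cells(X, Wm, rho)[0]) / (2 * eps), m_bar - m[0])
# dF/dx_0 (first coordinate) should be close to 2 m[0] (x_0 - b_0)[0]
Xp = X.copy(); Xp[0, 0] += eps
Xm = X.copy(); Xm[0, 0] -= eps
print((F_and_cells(Xp, W, rho)[0] - F_and_cells(Xm, W, rho)[0]) / (2 * eps), 2 * m[0] * (X[0, 0] - b[0, 0]))
```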
Equivalence of Optimizations:
The constrained minimization with Lagrangian multipliers (Eq. 1) is equivalent to extremizing the functional F (Eq. 2). Indeed, observe that any solution of the Lagrangian formulation is a stationary point of the functional F, since we just derived that a null gradient implies that m_i = m (constraints are met) and x_i = b_i (centroidal power diagram). Another way to understand this equivalence is to observe that the gradient with respect to weights of the Lagrangian formulation is Δ^{w,ρ}(W + Λ); hence, extremization induces that W = −Λ + constant, and the Lagrange multipliers can be directly replaced by the (negated) weights.
Figure 2: Fractal. Optimal transport based blue noise sampling of a Julia set image (20K points). Colors of dots indicate (normalized) weight values, ranging from -30% to 188% of the average squared edge length in the regular triangulation. The histogram of the weights is also shown on top of the color ramp.
Figure 3: Zebra. Since our approach accurately captures variations of density, we can blue-noise sample images containing both fuzzy and sharp edges (160K-pixel original image (top right) courtesy of Frédo Durand). 40K points, generated in 159 seconds.
Figure 4: Stippling. Test from [START_REF] Secord | Weighted Voronoi stippling[END_REF] (20K points). While [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] does not capture density gradients very cleanly (see close-ups), our result is similar to CCVT [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] on this example, at a fraction of the CPU time. Comparative data courtesy of the authors.
Figure 5: Breaking Regularity. Optimization of F with a strict convergence threshold (‖∇_X F‖ ≤ 10⁻⁵) can produce regularity (left), as revealed by a valence-colored visualization (top) and the distribution of local transport costs E_i (bottom). After jittering and relocating aliased regions (middle, colored cells), further optimization brings the point set to a shallower (i.e., less regular) configuration (right) as confirmed by valences and transport costs.
Figure 6: Pseudocode of the blue noise algorithm (input: domain D, density ρ, and number of points n; initialization: n random points inside D conforming to ρ; output: n points satisfying blue noise requirements A, B, and C).
Figure 8: Comparisons. Different blue noise algorithms are analyzed for the case of constant density over a periodic domain; columns: CVT [START_REF] Du | Centroidal Voronoi Tessellations: Applications and algorithms[END_REF] stopped at α = 0.75, CCDT [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF], CapCVT [START_REF] Chen | Variational blue noise sampling[END_REF], [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] with T = 1/2, CCVT [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF], and our algorithm. Top row: distributions of 1024 points; Second row: gaps between white discs centered on sampling points, over black background. Notice the uniformity of gap distribution in the two rightmost point sets. Third row: coloring based on number of neighbors for the Voronoi region of each site; Fourth row: 1024x1024 zoneplates for the function sin(x² + y²) (see Sec. 5 or [Lagae and Dutré 2006] for details). Fifth row: mean periodograms for 10 independent point sets (except for [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] for which only 5 point sets were available). Sixth row: radial power spectra; note the pronounced peak in our result, without any increase of regularity. Last row: anisotropy in dB ([START_REF] Ulichney | Digital Halftoning[END_REF], p. 56). Data/code for [START_REF] Fattal | Blue-noise point sampling using kernel density model[END_REF] and [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF] courtesy of the authors.
Figure 7: Discrete vs. Continuous CCVT. Our timings as a function of the number of points exhibit a typical n log n behavior, systematically better than the n² of [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF]; yet, our radial spectra (inset, showing averages over 10 runs with 1024 points) even outperform the fine 1024-sample CCVT results. (Here, CCVT-X stands for X "points-per-site" as in [START_REF] Balzer | Capacityconstrained point distributions: A variant of Lloyd's method[END_REF].)
Figure 9: Performance. Our method (in grey) performs well despite a stringent convergence criterion (‖∇F‖ < 0.1 √(n m³)). The method of [START_REF] Chen | Variational blue noise sampling[END_REF] (in green) behaves similarly when using a loose stopping criterion based on the functional decrease per iteration, but becomes twice as slow (in blue) if the termination is based on the norm of the functional gradient to guarantee local optimality. The code released by [START_REF] Xu | Capacity-constrained Delaunay triangulation for point distributions[END_REF] (in orange) also exhibits comparable performance by terminating the optimization not based on convergence, but after a fixed number of iterations.
Figure 10: Ramp. Blue noise sampling of a quadratic density function with 1000 points. The percentages in each quarter indicate ink density in the image, and point density in the examples. Observe that our method returns the best matching of the reference percentages, while still presenting an even and unstructured distribution. Comparative data courtesy of the authors.
for graphics applications.
Power Diagrams. From a point set X = {x_i}_{i=1...n} a natural partition of a domain D can be obtained by assigning every location in D to its nearest point x_i ∈ X. The region V_i assigned to point x_i is known as its Voronoi region, and the set of all these regions forms a partition called the Voronoi diagram. While this geometric structure (and its dual, the Delaunay triangulation of the point set) has found countless applications, power diagrams offer an even more general way to partition a domain based on a point set. They involve the notion of a weighted point set, defined as a pair (X, W) = {(x_1, w_1), . . . , (x_n, w_n)}, where X is a set of points and W = {w_i}_{i∈1...n} are real numbers called weights. The power distance from a position x to a weighted point (x_i, w_i) is defined as ‖x − x_i‖² − w_i, where ‖·‖ indicates the Euclidean distance. Using this definition, with each x_i we associate a power cell (also called weighted Voronoi region).
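A small sketch of this definition follows (hypothetical code, not from the paper): each location of a sampling grid is labeled by the weighted point minimizing the power distance, which yields the power cells; setting all weights to zero recovers the ordinary Voronoi diagram.

```python
import numpy as np

def power_cells(points, weights, grid_res=512):
    """Label each location of a [0,1]^2 grid with the index of its power cell."""
    g = (np.arange(grid_res) + 0.5) / grid_res
    gx, gy = np.meshgrid(g, g)
    P = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d2 = ((P[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # squared Euclidean distances
    power = d2 - weights[None, :]                               # power distance ||x - x_i||^2 - w_i
    return np.argmin(power, axis=1).reshape(grid_res, grid_res)

# Zero weights give the Voronoi diagram; unequal weights grow or shrink the cells.
pts = np.random.default_rng(1).random((16, 2))
labels_voronoi = power_cells(pts, np.zeros(16))
labels_power = power_cells(pts, np.linspace(-0.01, 0.01, 16))
```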
"7709",
"752254"
] | [
"74355",
"21398",
"413089",
"74355"
] |
01484447 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484447/file/SIIE.pdf | Keywords: speech recognition, deep neural network, acoustic modeling
This paper addresses the topic of deep neural networks (DNN). Recently, DNN has become a flagship in the fields of artificial intelligence. Deep learning has surpassed stateof-the-art results in many domains: image recognition, speech recognition, language modelling, parsing, information retrieval, speech synthesis, translation, autonomous cars, gaming, etc. DNN have the ability to discover and learn complex structure of very large data sets. Moreover, DNN have a great capability of generalization. More specifically, speech recognition with DNN is the topic of our work in this paper. We present an overview of different architectures and training procedures for DNN-based models. In the framework of transcription of broadcast news, our DNN-based system decreases the word error rate dramatically compared to a classical system.
I. INTRODUCTION
More and more information appear on Internet each day. And more and more information is asked by users. This information can be textual, audio or video and represents multimedia information. About 300 hours of multimedia is uploaded per minute [START_REF] Lee | Spoken Content Retrieval -Beyond Cascading Speech Recognition with Text Retrieval[END_REF]. It becomes difficult for companies to view, analyze, and mine the huge amount of multimedia data on the Web. In these multimedia sources, audio data represents a very important part. Spoken content retrieval consists in "machine listening" of data and extraction of information. Some search engines like Google, Yahoo, etc. perform the information extraction from text data very successfully and give a response very quickly. For example, if the user wants to get information about "Obama", the list of several textual documents will be given by Google in a few seconds of search. In contrast, information retrieval from audio documents is much more difficult and consists of "machine listening" of the audio data and detecting instants at which the keywords of the query occur in the audio documents. For example, to find all audio documents speaking about "Obama".
Not only individual users, but also a wide range of companies and organizations are interested by these types of applications. Many business companies are interested to know what is said about them and about their competitors on broadcast news or on TV. In the same way, a powerful indexing system of audio data would benefit archives. Well organized historical archives can be rich in term of cultural value and can be used by researchers or general public.
This work was funded by the ContNomina project supported by the French National Research Agency (ANR) under contract ANR-12-BS02-0009.
All authors are with the Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France, Inria, Villers-lès-Nancy, F-54600, France, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506, France (e-mail: fohr@loria.fr, illina@loria.fr, mella@loria.fr).
Classical approach for spoken content retrieval from audio documents is speech recognition followed by text retrieval [START_REF] Larson | Spoken Content Retrieval: A Survey of Techniques and Technologies[END_REF]. In this approach, the audio document is transcribed automatically using a speech recognition engine and after this the transcribed text is used for the information retrieval or opinion mining. The speech recognition step is crucial, because errors occurring during this step will propagate in the following step.
In this article, we will present the new paradigm used for speech recognition: Deep Neural Networks (DNN). This new methodology for automatic learning from examples achieves better accuracy compared to classical methods. In section II, we briefly present automatic speech recognition. Section III gives an introduction to deep neural networks. Our speech recognition system and an experimental evaluation are described in section IV.
II. AUTOMATIC SPEECH RECOGNITION
An automatic speech recognition system requires three main sources of knowledge: an acoustic model, a phonetic lexicon and a language model [START_REF] Deng | Machine Learning Paradigms for Speech Recognition[END_REF]. Acoustic model characterizes the sounds of the language, mainly the phonemes and extra sounds (pauses, breathing, background noise, etc.). The phonetic lexicon contains the words that can be recognized by the system with their possible pronunciations. Language model provides knowledge about the word sequences that can be uttered. In the state-of-the-art approaches, statistical acoustic and language models, and to some extent lexicons, are estimated using huge audio and text corpora.
Automatic speech recognition consists in determining the best sequence of words Ŵ that maximizes the likelihood:
$$ \hat{W} = \arg\max_{W} P(X|W)\, P(W) \qquad (1) $$
where P(X|W), known as the acoustic probability, is the probability of the audio signal (X) given the word sequence W. This probability is computed using the acoustic model. P(W), known as the language probability, is the a priori probability of the word sequence, computed using the language model.
New Paradigm in Speech Recognition: Deep Neural Networks (Dominique Fohr, Odile Mella and Irina Illina)
A. Acoustic modeling
Acoustic modeling is mainly based on Hidden Markov Models (HMM). An HMM is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states [START_REF] Rabiner | A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition[END_REF]. An HMM is a finite state automaton with N states, composed of three components {A, B, Π}: A is the transition probability matrix (a_ij is the transition probability from state i to state j), Π is the prior probability vector (π_i is the prior probability of state i), and B is the emission probability vector (b_j(x) is the probability of emitting observation x in state j).
In speech recognition, the main advantage of using HMM is its ability to take into account the dynamic aspects of the speech. When a person speaks quickly or slowly, the model can correctly recognize the speech thanks to the self-loop on the states.
To model the sounds of a language (phones), a three-state HMM is commonly chosen (cf. Fig. 1). These states capture the beginning, central and ending parts of a phone. In order to capture the coarticulation effects, triphone models (a phone in a specific context of previous and following phones) are preferred to context-independent phone models.
Until 2012, emission probabilities were represented by a mixture of multivariate Gaussian probability distribution functions modeled as:
$$ b_j(x) = \sum_{m=1}^{M} c_{jm}\, \mathcal{N}(x;\, \mu_{jm}, \Sigma_{jm}) \qquad (2) $$
The parameters of Gaussian distributions are estimated using the Baum-Welch algorithm.
A tutorial on HMM can be found in [START_REF] Rabiner | A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition[END_REF]. These models were successful and achieved best results until 2012.
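As a small illustration of the emission probability of Eq. (2), the numpy sketch below evaluates log b_j(x) for one state with diagonal-covariance Gaussian components; parameter names and the diagonal-covariance assumption are illustrative, not taken from the systems discussed in this paper.

```python
import numpy as np

def log_gmm_emission(x, log_c, mu, var):
    """log b_j(x) for one HMM state: a mixture of diagonal-covariance Gaussians.

    log_c : (M,) log mixture weights; mu, var : (M, D) means and variances; x : (D,).
    """
    d = x[None, :] - mu
    log_norm = -0.5 * np.log(2 * np.pi * var).sum(axis=1)       # log of the Gaussian normalizers
    log_gauss = log_norm - 0.5 * (d * d / var).sum(axis=1)      # log N(x; mu_m, Sigma_m)
    a = log_c + log_gauss
    amax = a.max()
    return amax + np.log(np.exp(a - amax).sum())                # stable log-sum-exp over components

# Example: a 2-component mixture over 3-dimensional features.
rng = np.random.default_rng(0)
mu = rng.standard_normal((2, 3)); var = np.ones((2, 3)); log_c = np.log([0.4, 0.6])
print(log_gmm_emission(rng.standard_normal(3), log_c, mu, var))
```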
B. Language modeling
Historically, the most common approach for language modeling is based on statistical n-gram models. An n-gram model gives the probability of a word w_i given the n−1 previous words: P(w_i | w_{i−n+1}, …, w_{i−1}).
These probabilities are estimated on a huge text corpus. To avoid a zero probability for unseen word sequences, smoothing methods are applied, the best known smoothing method being proposed by Kneiser-Ney [START_REF] Kneser | Improved Backing-off for m-gram Language Modeling[END_REF].
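For illustration, a toy unsmoothed maximum-likelihood bigram estimator is sketched below; real systems apply smoothing such as the Kneser-Ney method cited above, which is not implemented here.

```python
from collections import Counter

def train_bigram(sentences):
    """Unsmoothed MLE bigram model: P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1})."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        uni.update(toks[:-1])
        bi.update(zip(toks[:-1], toks[1:]))
    return lambda prev, w: bi[(prev, w)] / uni[prev] if uni[prev] else 0.0

p = train_bigram(["the cat sat", "the cat ran"])
print(p("the", "cat"))   # 1.0
print(p("cat", "sat"))   # 0.5
```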
C. Search for the best sentence
The optimal computation of the sentence to recognize is not tractable because the search space is too large. Therefore, heuristics are applied to find a good solution. The usual way is to perform the recognition in two steps:
• The aim of this first step is to remove words that have a low probability to belong to the sentence to recognize. A word lattice is constructed using beam search. This word lattice contains best word hypotheses. Each hypothesis consist of words, their acoustic probabilities, language model probabilities and time boundaries of the words.
• The second step consists in browsing the lattice using additional knowledge to generate the best hypothesis. Usually, the performance of automatic speech recognition is evaluated in terms of Word Error Rate (WER), i.e. the number of errors (insertions, deletion and substitutions) divided by the number of words in the test corpus.
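The WER just defined can be computed with a standard word-level edit distance; the sketch below is illustrative and is not the scoring tool used for the experiments reported later.

```python
def wer(reference, hypothesis):
    """Word Error Rate: (substitutions + deletions + insertions) / number of reference words."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = minimal edit cost between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = dp[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ~ 0.167
```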
III. DEEP NEURAL NETWORKS
In 2012, an image recognition system based on Deep Neural Networks (DNN) won the Image net Large Scale Visual Recognition Challenge (ILSVCR) [START_REF] Krizhevsky | ImageNet Classification with Deep Convolutional Neural Networks[END_REF]. Then, DNN were successfully introduced in different domains to solve a wide range of problems: speech recognition [START_REF] Xiong | Achieving Human Parity in Conversational Speech Recognition[END_REF], speech understanding, parsing, translation [START_REF] Macherey | Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation[END_REF], autonomous cars [START_REF] Bojarski | End to End Learning for Self-Driving Cars[END_REF], etc. [START_REF] Deng | A Tutorial Survey of Architectures, Algorithms and Applications for Deep Learning[END_REF]. Now, DNN are very popular in different domains because they allow to achieve a high level of abstraction of large data sets using a deep graph with linear and non-linear transformations. DNN can be viewed as universal approximators. DNN obtained spectacular results and now their training is possible thanks to the use of GPGPU (General-Purpose Computing on Graphics Processing Units).
A. Introduction
Deep Neural Networks are composed of neurons that are interconnected. The neurons are organized into layers. The first layer is the input layer, corresponding to the data features. The last layer is the output layer, which provides the output probabilities of classes or labels (classification task).
The output y of the neuron is computed as the non-linear weighted sum of its input. The neuron input x i can be either the input data if the neuron belongs to the first layer, or the output of another neuron. An example of a single neuron and its connections is given in Figure 2.
A DNN is defined by three types of parameters [11]:
• The interconnection pattern between the different layers of neurons; • The training process for updating the weights w i of the interconnections;
• The activation function f that converts a neuron's weighted input to its output activation (cf. equation in Fig. 2). The widely used activation function is the non-linear weighted sum. Using only linear functions, neural networks can separate only linearly separable classes. Therefore, nonlinear activation functions are essential for real data. Figure 3 shows some classical non-linear functions as sigmoid, hyperbolic tangent (tanh), RELU (Rectified Linear Units), and maxout. Theoretically, the gradient should be computed using the whole training corpus. However, the convergence is very slow because the weights are updated only once per epoch. One solution of this problem is to use Stochastic Gradient Descent (SGD). It consists in computing the gradient on a small set of training samples (called mini-batch) and in updating the weights after each mini-batch. This speeds up the training process.
During the training, it may happen that the network learns features or correlations that are specific to the training data rather than generalize the training data to be applicable to the test data. This phenomenon is called overfitting. One solution is to use a development set that should be as close as possible to the test data. On this development set, recognition error is calculated at each epoch of the training. When the error begins to increase, the training is stopped. This process is called early stopping. Another solution to avoid overfitting consists in using regularization. It consists in inserting a constraint to the error function to restrict the search space of weights. For instance, the sum of the absolute values of the weights can be added to the error function [START_REF] Goodfellow | Deep Learning[END_REF]. One more solution to avoid overfitting is dropout [START_REF] Srivastava | Dropout: A Simple Way to Prevent Neural Networks from[END_REF]. The idea is to "remove" randomly some neurons during the training. This prevents neurons from co-adapting too much and performs model averaging.
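A minimal sketch of mini-batch SGD with early stopping on a development set is given below, using a toy logistic-regression model so the gradient is explicit; this is only an illustration of the training ideas above, not the recipe used for the DNNs discussed in this paper.

```python
import numpy as np

def sgd_train(X, y, X_dev, y_dev, lr=0.1, batch=32, max_epochs=100, patience=5):
    """Mini-batch SGD for a logistic-regression toy model, with early stopping."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1]); b = 0.0
    best = (np.inf, w.copy(), b)
    bad = 0
    for epoch in range(max_epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X), batch):
            i = idx[s:s + batch]
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))     # sigmoid predictions
            g = p - y[i]                                   # gradient of the cross-entropy loss
            w -= lr * X[i].T @ g / len(i)                  # update after every mini-batch
            b -= lr * g.mean()
        p_dev = 1.0 / (1.0 + np.exp(-(X_dev @ w + b)))
        dev_err = np.mean((p_dev > 0.5) != y_dev)          # error on the development set
        if dev_err < best[0]:
            best, bad = (dev_err, w.copy(), b), 0
        else:
            bad += 1
            if bad >= patience:                            # early stopping
                break
    return best[1], best[2]
```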
C. Different DNN architectures
There are different types of DNN regarding the architecture [START_REF] Lecun | Deep Learning[END_REF]:
• MultiLayer Perceptron (MLP): each neuron of a layer is connected with all neurons of the previous layer (feedforward and unidirectional). • Recurrent Neural Network (RNN): when it models a sequence of inputs (time sequence), the network can use information computed at previous time (t-1) while computing output for time t. Fig. 4 shows an example of a RNN for language modeling: the hidden layer h(t-1) computed for the word t-1 is used as input for processing the word t [START_REF] Mikolov | Statistical Language Models based on Neural Networks[END_REF]. • Long Short-Term Memory (LSTM) is a special type of RNN. The problem with RNN is the fact that the gradient is vanishing, and the memory of past events decreases. Sepp Hochreiter and Jürgen Schmidhuber [START_REF] Hochreiter | Long Short-Term Memory[END_REF] have proposed a new recurrent model that has the capacity to recall past events. They introduced two concepts: memory cell and gates. These gates determine when the input is significant enough to remember or forget the value, and when it outputs a value. Fig. 5 displays the structure of an LSTM.
• Convolutional Neural Network (CNN) is a special case of Feedforward Neural Network. The layer consists of filters (cf. Fig. 6). The parameters of these filters are learned. One advantage of this kind of architecture is the sharing of parameters, so there are fewer parameters to estimate. In the case of image recognition, each filter detects a simple feature (like a vertical line, a contour line, etc.). In deeper layer, the features are more complex (cf. Fig. 7). Frequently, a pooling layer is used. This layer allows a non-linear downsampling: max pooling (cf. Fig. 8) computes maximum values on sub-region. The idea is to reduce the size of the data for the following layers. An example of stateof-the-art acoustic model using CNN is given in Fig. 9. The main advantage of RNN and LSTM is their ability to take into account temporal evolution of the input features. These models are widely used for natural language processing. Strong point of CNN is the translation invariance, i.e. the skill of discover structure patterns regardless the position. For acoustic modelling all these structures can be exploited. Fig. 6. Example of a convolution with a filter
$$ \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} $$
Original image is in green, filter applied on bottom right of image is in orange and convolution result is in pink.
A difficult DNN issue is the choice of the hyperparameters: number of hidden layers, number of neurons per layer, choice of non-linear functions, choice of learning rate adaptation function. Often, some hyperparameters are adjusted experimentally (trial and error), because they depend on the task, the size of the database and data sparsity.
D. DNN-based acoustic model
As said previously, for acoustic modeling, HMM with 3 left-to-right states are used to model each phone or contextual phone (triphone). Typically, there are several thousand of HMM states in a speech recognition system.
In the DNN-based acoustic model, contextual phone HMMs are kept but all the Gaussian mixtures of the HMM states (equation 2) are replaced by a DNN. Therefore, the DNN-based acoustic model computes the observation probability b_j(x) of each HMM phone state given the acoustic signal using DNN networks [START_REF] Hinton | Deep Neural Networks for Acoustic Modeling in Speech Recognition[END_REF]. The input of the DNN will be the acoustic parameters at time t. The DNN outputs correspond to all HMM states, one output neuron for one HMM state.
In order to take into account contextual effects, the acoustic vectors from a time window centered on time t (for instance from time t-5 to t+5) are put together.
To train the DNN acoustic model, the alignment of the training data is necessary: for each frame, the corresponding HMM state that generated this frame should be known. This alignment of the training data is performed using a classical GMM-HMM model.
E. Language model using DNN
A drawback of classical N-gram language models (LM) is their weak ability of generalization: if a sequence of words was not observed during training, N-gram model will give poor probability estimation. To address this issue, one solution is to move to a continuous space representation. Neural networks are efficient for carrying out such a projection. To take into account the temporal structure of language (word sequences), RNN have been largely studied. The best NNbased language models use LSTM and RNN [23][24].
IV. KATS (KALDI BASED TRANSCRIPTION SYSTEM)
In this section we present the KATS speech recognition system developed in our speech group. This system is built using Kaldi speech recognition toolkit, freely available under the Apache License. Our KATS system can use GMM-based and DNN-based acoustic models.
A. Corpus
The training and test data were extracted from the radio broadcast news corpus created in the framework of the ESTER project [START_REF] Povey | The Kaldi Speech Recognition Toolkit[END_REF]. This corpus contains 300 hours of manually transcribed shows from French-speaking radio stations (France Inter, Radio France International and TVME Morocco). Around 250 h were recorded in studio and 50h on telephone. 11 shows corresponding to 4 hours of speech (42000 words) were used for evaluation.
B. Segmentation
The first step of our KATS system consists in segmentation and diarization. This module splits and classifies the audio signal into homogeneous segments: non-speech segments (music and silence), telephone speech and studio speech. For this, we used the toolkit developed by LIUM [START_REF] Rouvier | An Open-source State-of-the-art Toolbox for Broadcast News Diarization[END_REF]. We processed separately telephone speech and studio speech in order to estimate two sets of acoustic models; studio models and telephone models.
C. Parametrization
The speech signal is sampled at 16 kHz. For analysis, 25 ms frames are used, with a frame shift of 10 ms. 13 MFCC were calculated for each frame completed by the 13 delta and 13 delta-delta coefficients leading to a 39-dimension observation vector. In all experiments presented in this paper, we used MCR (Mean Cepstral Removal).
D. Acoustic models
In order to compare GMM-HMM and DNN-HMM acoustic models, we used the same HMM models with 4048 senones. The only difference is the computation of the emission probability (b j (x) of equation 2): for GMM-HMM it is a mixture of Gaussians, for DNN-HMM, it is a deep neural network. Language model and lexicon stay the same. For GMM-HMM acoustic models, we used 100k Gaussians. For DNN, the input of the network is the concatenation of 11 frames (from t-5 to t+5) of 40 parameters. The network is a MLP with 6 hidden layers of 2048 neurons per layer (cf. Fig. 10). The output layer has 4048 neurons (corresponding to 4048 senones). The total number of parameters in DNN-HMM is about 30 millions.
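For readers who want a concrete picture of these dimensions, the rough numpy sketch below builds an MLP with the sizes quoted above (11 stacked frames of 40 parameters, six hidden layers of 2048 units, 4048 senone outputs) and checks the parameter count. The ReLU activation and random initialization are assumptions for illustration; this is not the Kaldi implementation used in KATS.

```python
import numpy as np

def init_dnn(rng, dims=(11 * 40, 2048, 2048, 2048, 2048, 2048, 2048, 4048)):
    """Random parameters for an MLP: 440 inputs -> 6 hidden layers of 2048 -> 4048 senone outputs."""
    return [(rng.standard_normal((i, o)) * np.sqrt(2.0 / i), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def forward(params, frames, t):
    """State posteriors for frame t from the 11-frame context window [t-5, t+5]."""
    x = frames[t - 5:t + 6].reshape(-1)                  # concatenate 11 frames of 40 features
    for Wl, bl in params[:-1]:
        x = np.maximum(x @ Wl + bl, 0.0)                 # hidden layers (ReLU assumed here)
    Wl, bl = params[-1]
    z = x @ Wl + bl
    z -= z.max()                                         # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()                                   # posterior over the 4048 HMM states

rng = np.random.default_rng(0)
params = init_dnn(rng)
print(sum(W.size + b.size for W, b in params))           # ~30.2 million parameters, as in the text
frames = rng.standard_normal((100, 40))
posterior = forward(params, frames, t=50)
```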
E. Language models and lexicon
Language models were trained of huge text corpora: newspaper corpus (Le Monde, L'Humanité), news wire (Gigaword), manual transcriptions of training corpus and web data. The total size was 1.8 billion words. The n-gram language model is a linear combination of LM models trained on each text corpus. In all experiments presented in this paper, only a 2-gram model is used with 40 million bigrams and a lexicon containing 96k words and 200k pronunciations.
F. Recognition results
Recognition results in terms of word error rate for the 11 shows are presented in Table 1. The confidence interval of these results is about +/-0.4 %. Two systems are compared. These systems use the same lexicon and the same language models but differ by their acoustic models: GMM-HMM and DNN-HMM, so, the comparison is fair. For all shows, the DNN-based system outperforms the GMM-based system. The WER difference is 5.3% absolute, and 24% relative. The improvement is statistically significant. The large difference in performance between the two systems suggests that DNNbased acoustic models achieves better classification and has generalization ability.
Shows
V. CONCLUSION
From 2012, deep learning has shown excellent results in many domains: image recognition, speech recognition, language modelling, parsing, information retrieval, speech synthesis, translation, autonomous cars, gaming, etc. In this article, we presented deep neural networks for speech recognition: different architectures and training procedures for acoustic and language models are visited. Using our speech recognition system, we compared GMM and DNN acoustic models. In the framework of broadcast news transcription, we shown that the DNN-HMM acoustic model decreases the word error rate dramatically compared to classical GMM-HMM acoustic model (24% relative significant improvement).
The DNN technology is now mature to be integrated into products. Nowadays, main commercial recognition systems (Microsoft Cortana, Apple Siri, Google Now and Amazon Alexa) are based on DNNs.
Fig. 1 .
1 Fig. 1. HMM with 3 states, left-to-right topology and selfloops, commonly used in speech recognition.
Fig. 2 .
2 Fig. 2. Example of one neuron and its connections.
Fig. 3 .
3 Fig. 3. Sigmoid, RELU, tangent hyperbolic and maxout nonlinear functions
Fig. 4 .
4 Fig. 4. Example of a RNN.
Fig. 5 .
5 Fig. 5. Example of LSTM with three gates: input gate, forget gate, output gate and a memory cell (from [19]).
Fig. 7 .
7 Fig. 7. Feature visualization of convolutional network trained on ImageNet from Zeiler and Fergus [20].
Fig. 8 .
8 Fig. 8. Max pooling with a 2x2 filter (from www.wildml.com)
Fig. 9 .
9 Fig. 9. The very deep convolutional system proposed by IBM for acoustic modeling: 10 CNN, 4 pooling, 3 full connected (FC MLP) (from [22]).
Fig. 10 .
10 Fig. 10. Architecture of the DNN used in KATS system.
Table 1 .
1 Word Error Rate (%) for the 11 shows obtained using the GMM-HMM and DNN-HMM KATS systems.
# words GMM-HMM DNN-HMM
20070707_rfi (France) 5473 23.6 16.5
20070710_rfi (France) 3020 22.7 17.4
20070710_france_inter 3891 16.7 12.1
20070711_france_inter 3745 19.3 14.4
20070712_france_inter 3749 23.6 16.6
20070715_tvme (Morocco) 2663 32.5 26.5
20070716_france_inter 3757 20.7 17.0
20070716_tvme (Morocco) 2453 22.8 17.0
20070717_tvme (Morocco) 2646 25.1 20.1
20070718_tvme (Morocco) 2466 20.2 15.8
20070723_france_inter 8045 22.4 17.4
Average 41908 22.4 17.1
ACKNOWLEDGMENT
This work was funded by the ContNomina project supported by the French National Research Agency (ANR) under contract ANR-12-BS02-0009. | 23,286 | [
"15652",
"15902",
"15663"
] | [
"420403",
"420403",
"420403"
] |
01484479 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484479/file/FINAL%20ENGLISH%20VERSION%20Simondon-matrixTotal.pdf | Rémi Jardat
email: r.jardat@istec.fr
Strategy Matrixes as Technical Objects: Using the Simondonian concepts of concretization, milieu, allagmatic principles and transindividuality in a business strategy context
Keywords: Simondon, strategy matrix, transindividual Subject classification codes: xxx?
Strategy matrixes as technical objects: Using the Simondonian concepts of concretization, milieu, and
transindividuality in a business strategy context.
Introduction
To date, most management research papers drawing on the work of Gilbert Simondon have relied, chiefly, on a single text, On the Mode of Existence of Technical Objects (1958), which was, in fact, the philosopher's second and ancillary doctoral thesis. To get to the core of Simondon's thinking we must turn to a seminal and far more challenging work, which is his first and main thesis: Individuation in the light of the notions of Form and Information. Simondon never succeeded in having it published in his lifetime, due to serious health problems that persisted until his death in 1989. His full thesis, as well as other equally important philosophical tracts, was not published until quite recently (Simondon 2005). In the meantime, the great originality and wealth of his thinking became a source of inspiration for a small circle of no lesser thinkers than Edgar Morin and Bruno Latour (Barthélémy, 2014, 180-186). But only Stiegler (1994, 1996, 2001, 2004, 2006) has truly delved into Simondon's ideas headlong, while openly declaring himself a Simondonian. More recently, his focus has been squarely on proletarianization seen through a pattern of alienation, as witnessed in relations between man and the technical object (or technological system), described by Simondon in his secondary thesis (1958, 328-329, 337). In relying not only on Simondon's secondary thesis, but also on notions developed in his main thesis, including some fundamental schema and concepts found there, this paper seeks to make a novel contribution to management research by taking a broader approach to Simondon than has been the case with studies undertaken so far.
The empirical data used in this paper consist of archival materialsthat allow us to trace the emergence in France of a field of knowledge that underlies strategic management studies. In the modernizing upheaval of the post-war years, strategic management tools appearedthere amid the rise of a "field," as defined by Bourdieu (1996: 61), that wasfraught withdisputes about legitimacy; and it exhibited "metastable" inner and outer surroundings, or amilieu, as defined by Simondon (2005: 16, 32-33), all of which proved conducive to the crystallizationof new ways of thinking. In what was a largely local, intellectual breeding ground (albeitone open to certain outside influences), a debate of ideas eventually gave rise to what Michel Foucault termed a savoir (1969 : 238) in terms of organizational strategy materials; namely, a set of objects, modes of expression, concepts and theoretical choicesshaped according to their own, specific body of rules.
In keeping with Michel Foucault"s "archaeological" approach,which is driven by data collection, we reviewed a set of post-war academic, specialistand institutional literature up until 1975,when the field of strategic management studies seems to have becomerelatively stable, or at least hadattained metastability, as far as itsrationale and discursive rules were concerned.
The author conducted an initial Foucauldiananalysis of this material, as part of an extensive research study, whose results remain unpublished and have not yet been translated into English. For our present purposes, we have returned to that material, recognizing it asan important trace back to a metastable system of knowledge in a structuration stage, where management-related technical objects were being created.
(1) Using those archives, this paper focuses onanalyzing the technicityand degree of individuation behind strategic matrixes, while looking at how they originated.
Hence, wehave tested and validated the relevancy of evaluating an abstract, cognitivemanagement-related object by reference to ascale that Simondon had developed for concrete technical objects. We also show that the "concretization" and "technicity" categories have retained their relevancy for studying the production of new managerial matrixes in a variety of cultural contexts.
(2) Our findings call for an initial, theoretical discussion, concerning the notion of technical culture. Specifically, we shall see how the Simondonian notion of transindividualism makes it possible to address factors governing theemergence and transmission of these objects.
(3) In a second discussion, on epistemological issues, Foucauldian archeology andSimondonianallagmaticprincipleswill be contrastedin terms of how they open up new insights or tensions regarding the strategic matrix. Such an exercise is possible because the genesis of a management-related technical object brings into play,simultaneously,principles of both operation and structure. It also, offers management a valuableglimpse intothe realm that is occupied by whatSimondon calls the essence of technicity (Simondon, 1958, 214).
Genesis and concretization of strategy matrixes in the French industrial and institutional milieu
Can strategy matrixes by studied as technical objects and, if so, to what extent do Simondonianconcepts help explain their success, manner of use, and limitations? In addressing that question, we shall examinein (1.1)how Simondonuses the notions oftechnicityandthe technicalindividualin relation to material objects, which allowstheseconcepts to be applied to abstract technological objects.Then, in(1.2) using an archive of strategy-related knowledge defined according to certain, specific parameters,we shall examinetechnicity and degree of individuation inthe context of strategymatrixes.Lastly, in (1.3) we will try to determine the extent to which matrixes do or do not develop their own technological milieus, as they are transmitted across most every continent andcultural context.
The Simondonian notion of Technicity: ontology, the individualand milieu.
Simondon defines three stages underpinning the ontological status of technology by introducing the differences between "technical (or technological) elements," "technical individuals," and "technical totalities" or "ensembles."The isolated, individual technological object is comprised of technological elements, or components; and, for the purposes of broad-scalefabrication and applications, the object must be brought together with a variety of other technological objects and integrated into a vast installation, or ensemble.
Figure 1: Different versions of a technological object, the adze, by Leroi-Gourhan (1971 [1943], p. 187).
The mature technological object, as described by Simondon, would appear to correspond to adze n° 343.
To illustrate this point, while drawing on a related study undertaken by André Leroi-Gourhan [1943] (1971, 184-189), let us consider how Simondonlooks at the process of development of the seemingly simple woodworking tool, the adze (Simondon, 1958, 89-89):
The technologicalelements of the adze consist not only of its various physical parts (the blade and the shaft) but also the convergence of the totality of each of its functions as a tool: "a cutting edge,""a heavy, flat part that runs from the socket to the cutting edge," and "a cutting edge that is more strongly steel-plated than any other part" The adze is a technical individual because it is a unity of constituent elements that are brought together to generate a productive "resonance," as each part is held in place by, and supports, the other, to ensure correct functioning and offer resistance against the forces of wear and tear:
"The molecular chains in the metal have a certain orientation which varies according tolocation, as is the case withwood whose fibers are so positioned as togive the greatest solidness and elasticity, especially in the middle sectionsbetween the cutting edge and the heavy flat sectionextending from the socket to the cutting edge; this area close to the cutting edge becomes elastically deformed during woodworking because it acts as wedge and lever on the piece of wood in the lifting process."
It is as if this tool "as a whole were made of a plurality of differently functioning zones soldered together."
The adze-as-technical-object is totally inconceivable, and could not have been manufactured efficiently, had it not been for the technical ensemble that gave it its shape and was ultimately transmitted across time:
The tool is not made of matter and form only; it is made of technical elements developed in accordance with a certain scheme of functioning and assembled as a stable structure by the manufacturing process. The tool retains in itself the result of the functioning of a technical ensemble. The production of a good adze requires the technical ensemble ofthe foundry, the forge, and the quench hardening process.
That three-stage ontology ofelement-individual-ensemble is behind whatSimondonterms the technicity of the object, and this is what makes it possible to generalize the concept beyond material objects alone: it is made of "technical elements developed according to a certain scheme of functioning and assembled as a stable structure by the manufacturing process." 1 Technical objects exhibit"qualities that relate neither to pure matter nor to pure form, but that belong to the intermediary level of schemes" (Simondon, 1958, p. 92). Technicity has much more to do withthe relational than the material2 : that is, the technological object is nothing more than an ensemble of relationships between technical elements, as expressed in thought, that areestablished, implemented then repeatedly reintroduced, re-activated. And the ensemble drivesits design, manufacture, use and maintenance.
For Simondon, technicity is a rich and complex notion: there are degrees of technicity, and through the process he dubs concretization, an object evolves and becomes increasingly technical. As Simondon uses the term, it is not at all to be taken in direct opposition to the notion of abstraction. Concretization of a technical object occurs through a series of improvements, which can be progressive and incremental, or sometimes even brutal, as the object condenses its various functions into a tighter and tighter cohesion, using fewer constitutive elements, holding out the possibility of enhanced performance, greater structural integrity, better reliability and optimal productivity of manufacture: "with each structural element performing several functions instead of just one" (Simondon, 1958, p. 37). Under concretization, as each technical element grows in sophistication, another process, called individuation, simultaneously ensures that the technical object becomes indivisible and autonomous in the technical field. In the case of cathode tubes, for example, "the successive precisions and reinforcements incorporated into this system serve to counter any unforeseen drawbacks that arise during its use and transform improvements into stable features" (pp. 36-37). In that light, "modern electronic tubes" can be seen as more individualized objects, because they are more concretized than the "primitive triode" (ibid.).
In the world of technical objects, the different degrees of technicity reflect a more general Simondonian ontology, which introduces several stages of individuation (Simondon, 2005). For Simondon, an individual, that is, any given entity, is never really complete but is constantly engaged in a process of becoming. In the Simondonian ontology, the question of being or not being an individual is a side issue. It is more salient to ask whether an entity is or is not engaged in the process of individuation, whereby a technical object, for example, tends to become more and more of an individual. In that perspective, there is no stable, static individual/non-individual dichotomy but, rather, successive degrees of individuation or dis-individuation, with the death of a human being, and the attendant decomposition, being a prime illustration.
Moreover, technical objects cannot undergo further individuation without progressively co-creating their associated environment, or milieu. The milieu is a part of the technical object's (and also the human being's) surroundings, and, whenever it is sufficiently close to the object, it contributes to its creation, potentially to the point of modifying its basic attributes, while also providing the object with the resources needed for its proper functioning. That is a singular notion insofar as it challenges the entity vs. environment duality traditionally invoked in management science, or the inside versus outside dichotomy found in modern life sciences. Indeed, just as living beings have their own interior milieu, where the workings of their vital mechanisms depend on an extra-cellular fluid environment (not unlike the saltwater sea environment that harbored the first single-celled organisms), technical individuals develop in conjunction with their environment, which is both within and without. It is in those exact terms that Simondon explains the technical object of the 1950s (1958, p. 70):
The associated milieu is the mediator of the relationship between manufactured technical elements and natural elements within which the technical being functions. That is not unlike the ensemble created by oil and water in motion within the Guimbal turbine and around it.
That idea is of paramount importance in understanding the triadic concept of the technical element, the technical individual and the technical ensemble. An individual can be identified by the unity of its associated milieu. That is, on the individual level, technical elements "do not have an associated milieu" (Simondon, 1958, p. 80), whereas technical ensembles do not depend on any one, single milieu: "We can identify a technical individual when the associated milieu exists as prerequisite of its functioning, and we can identify an ensemble when the opposite is the case" (Simondon, 1958, p. 75).
"
The living being is an individual that brings with it its associated milieu" (Simondon, 1958, p. 71)
The technicity of strategy matrixes: an overview of their genesis based on archives obtained from the field in post-war France.
The archives that we have compiled consist of the entire set of strategy-related literature published in France from 1945 to 1976. For the sake of completeness, relevant texts and periodicals from the national library, the Bibliothèque Nationale de France (BNF), have also been included. The list of the 200 archives, selected from among thousands of brochures, periodicals and other texts, as well as their content, has been kept available for scientific reference. Our selection was guided by the idea that "strategists" are those who attempt to describe management-strategy-related practices or theory for market actors who are not directly affected by the strategy itself: usually, academics, journalists, business gurus and popularizers of business strategy, etc. In that period, well before the three leading strategy consulting firms (BCG, ADL, McKinsey) appeared on the scene in France and introduced highly elaborate strategy tools, there was a relatively rich variety of local, strategically-focused firms producing their own matrixes. After looking at documents focusing specifically on corporate strategy, we drew up the following list, which we have presented in chronological order (each matrix is covered by a separate monographic study, shown in Appendixes 1 through 6):
"Sadoc"s Table of Ongoing Adaptation to Market Changes" [START_REF] Octave | Stratégies type de développement de l"entreprise[END_REF]See Appendix 1.
The "Panther/Elephant" Matrix [START_REF] Charmont | La Panthèreoul"elephant[END_REF], See Appendix 2 A French translation of the "Ansoff Product/Market Growth Matrix" for Concentric Diversification (1970). See Appendix 3.
"Morphological Boxes" and"Morphological Territories" [START_REF] Dupont | Prévision à long termeetstratégie[END_REF]. See Appendix 4. The"Houssiaux Matrix"for analyzing relations between multinational business enterprises and national governments (Houssiaux, 1970).See Appendix 5.
The "Bijon matrix" analyzing the link between firms" profitability and market share [START_REF] Bijon | Recherched"unetypologie des entreprises face au développement[END_REF].See Appendix 6.
Out of all of these strategic analysis tools, only the American models, which have since been translated into French (Ansoff, 1970), have stood the test of time and continue to serve as key reference guides for strategy professionals and instructors alike (see Appendix 7). Unfortunately, all purely French-designed models have fallen into obscurity, even though the range of strategic options they offer is as broad as those found in both contemporary and subsequent American tools (which is the case, most notably, with the Sadoc/Gélinier matrix (1963)). One can only wonder if cultural bias played a role in this turn of events, where American culture's soft power eventually swept away everything in its path.
Of course, that would be a far too simplistic explanation. More intriguing, and more pertinent to our current discussion, is the role played by technical culture, under the definition put forward by Simondon (1958). All of these matrixes (i) share most of their technical elements in common and (ii) can be classified into technical ensembles with considerable overlap. Yet (iii) they differ greatly in terms of their degrees of concretization and the intensity of the role played by their milieu. To paraphrase sociologist Bruno Latour (2006, 57), such a matrix "puts down the entire world on a flimsy sheet of paper."
(i) Technical elements common to most matrix ensembles.
All of the matrixes under study give a composite picture of market position in relation to a relatively high number of strategic choices (16 under the Houssiaux matrix), using a two-dimensional chart that facilitates memorization and ranking. The technical elements of a matrix are thus extremely simple: two axes, with segmentation variables, yielding a two-dimensional segmentation result. It should be noted, however, that, depending on the matrixes, some elements are more specialized and sophisticated than others. Whereas the morphological boxes (Appendix 4) and the panther/elephant matrix (Appendix 2) have axes that are fully segmented, alternating between different types of strategic parameters, others use graduated scales: identifying products at a greater or lesser remove from the company's current line of activity, in the case of the Ansoff matrix (Appendix 3); degree of centralization of state regulatory policies, in the case of the Houssiaux matrix (Appendix 5); graduation of one of the two axes (phase of the product life cycle) in the case of Gélinier's "Table of Adaptation to Market Changes" (Appendix 1).
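To make the distinction between a merely segmented axis and a graduated one more concrete, the following minimal sketch (purely illustrative; the names and example values are ours, not drawn from the archives) represents a strategy matrix as two axes whose crossing yields the two-dimensional segmentation result described above:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Axis:
    name: str
    segments: List[str]      # ordered categories along the axis
    graduated: bool = False  # True if the order encodes a scale (a gradient between two poles)

@dataclass
class StrategyMatrix:
    horizontal: Axis
    vertical: Axis

    def cells(self) -> List[Tuple[str, str]]:
        """Every combination of segments: the two-dimensional segmentation result."""
        return [(h, v) for v in self.vertical.segments for h in self.horizontal.segments]

# Hypothetical example: two graduated three-segment axes produce nine cells.
example = StrategyMatrix(
    horizontal=Axis("distance from current products", ["same", "related", "new"], graduated=True),
    vertical=Axis("distance from current markets", ["same", "related", "new"], graduated=True),
)
print(len(example.cells()))  # 9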
These same basic elements continue to appear in subsequent or newer strategy planning models, as a simple bibliographical search reveals. Entering a query into the EBSCO Business Source Premier database, using the key words "strategy" and "matrix" and the Boolean search operators "and" and "or," yielded 70 articles recently published by academic journals. Of those, 14 display strategic choice matrixes, some of which deal with "marketing strategy" but use a similar matrix structure. It should be noted that some recent matrixes are "skewed": the polarity of one of the axes is reversed in relation to the orthogonal axis, making it difficult to track a diagonal gradient (e.g. Azmi, 2008). But they are notable less for the elements they contain than for their arrangement. However that may be, these matrixes have been analyzed and included in the list annexed in Appendix 7, and illustrated in Appendix 8.
(ii) Technical ensembles that are found in virtually every matrix and fall within the scope of a metastable discourse specific to post-war France.
The axes of each matrix typically included a range of economic or institutional factors necessary to build a strategic knowledge base, well before the three major, classic Anglo-American models were introduced, as we showed in a previous paper on these archives (Author). Previously, after collecting archival literature and analyzing it in relation to Michel Foucault's archeological approach, we examined the institutional conditions that made it possible for the object of knowledge, the firm's strategic choice, to emerge and be identified. In addition, we explored how it became possible to classify the various strategic choices by categories. That discursive and institutional ensemble that gave rise to the "strategic choice" as object of knowledge is, in our view, the technical ensemble in which the object "strategy matrix" has been created.
It must be borne in mind that a special set of circumstances prevailed in postwar France, as the State and business enterprises vied to become the sole, fixed point of reference for stakeholders when making commercial/economic decisions. At issue was the question of who should ultimately hold sway over management thinking: the State (Pierre, Stratégies économiques, études théoriques et applications aux entreprises; Macaux, Les concentrations d'entreprises, débats qui ont suivi la conférence de M. de Termont), trade unions (Pouderoux, p. 242), employers and business managers (Termont, 1955; Macaux; Demonque, 1966) or consultants (Gélinier, "La fonction de contrôle des programmes"). The country's industrial economics, which had been largely imported from Anglo-American models, offered at least a discursive answer to the question, well before the writings of Michael Porter appeared between 1970 and 1980. Notably, the French translation of Edith Penrose's The Theory of the Growth of the Firm, in 1961, offered valuable insight into the interplay between growth, profitability, firm size and the industrial sector, which underlay the "standard strategy scenarios" (e.g. Gélinier, 1963) established by French strategists. Penrose's work also helped resolve some of the above-mentioned controversies that had embroiled French institutions regarding who would serve as the point of reference and undertake decision-making initiatives.
The formulation of standard strategy scenarios also gave new life to information and reporting plans, as well as to the tenor of economic debate, by refocusing them on the now-legitimate pursuit of corporate success.
In sum, despite cultural particularities, the intensifying competitive pressure on business enterprises, coupled with the ability to collect sectoral economic data, created a metastable setting such that the technical object "matrix" gained utility and could be produced locally by different authors (albeit with certain variations), although its technical elements were virtually, if not entirely, identical.
(iii) Sharply contrasting degrees of concretization.
The study of the matrixes that emerged in the French discursive sphere during the postwar period highlighted several functions or operations (see Appendix monographs n°1 to n°6), which converged toward legitimizing the choices made by executive management and reinforcing its ability to use arguments to exhibit its mastery of the complex and changing reality of the organizational environment:
Compressing function, insofar as matrixes offer the corporate manager several succinct criteria for decision-making and control, making it possible to reduce the number of profit-generating/growth factors at the corporate level, keeping only those that appear to be relevant;
Linking function, because matrixes render the changing situation of the business enterprise more comprehensible through their invariant laws, which allows simplicity to guide complex choices;
Totalizing function, offering the company director the assurance that linking could apply to a seamless, boundless world.
However, as was the case for the material technical objects studied by Simondon, the capacity to deliver these properties in abstract technical objects, such as strategy matrixes, was bound to generate unexpected or "unlooked-for side effects" (Simondon, 1958, p. 40). And there are, indeed, a number of underlying tensions between the three principal properties of the strategic object:
The tension between linking and totalizing: the orientation toward totalizing leads to embracing an overly broad picture of reality, as variables are so numerous and disparate that linking them becomes impossible. As a result, it is difficult to understand how an elaborated Houssiaux matrix (Appendix 5), for example, can serve as a guide for management in arbitrating differences and establishing priorities between the overseas subsidiaries of a multinational firm positioned in various boxes of the matrix.
The tension between linking and compressing: this occurs as the ideal of finding the "one best way" of taking a course of action, on the one hand, is pitted against the ideal of the firm that can remain flexible and open to active participation by stakeholders. Typological matrixes such as the panther/elephant matrix (Charmont) are cruel reminders of that difficulty.
The tension between compressing and totalizing: is it possible to give a condensed overview of corporate courses of action and financial performance and, at the same time, describe the totality of strategy factors? That is what explains the continual oscillation between highly reductive 4-box matrixes and 9- or 16-box models intended to describe reality in more detail.
These tensions can be resolved more or less successfully depending on the matrixes employed. Indeed, according to how they are arranged, the technical elements of certain matrixes can serve multi-functional and complementary purposes, including acting as sources of mediation, which are aimed at alleviating these tensions:
Categorization, or mediation between compressing and linking.
The intersection between two segmented axes brings about a coincidence between two company-related typologies, and a kind of natural correspondence seems to develop between them. Typically, in the panther/elephant matrix (Appendix 2) or in the Houssiaux matrix (Appendix 5), that categorization generates an action-reaction type of linking schema. That is, these matrixes gave rise to a series of connections between certain types of courses of action adopted by rival firms and the (state-controlled) contextual settings, on the one hand, and strategic counter-measures or responses, on the other.
Hierarchical ordering, or mediation between linking and totalizing. For centuries, science has striven to decipher and manipulate nature by trial and error. The classic examples for engineering specialists are Taylor expansions, which make it possible to approximate most functions as closely as one wishes by simply adding a series of functions with increasing exponents (x, x², x³, etc.). This approach, which involves making repeated adjustments until the formula converges toward a fixed point, relies on applying a hierarchy: the coefficient is determined for x, where the degree of x is one, then two, then three, etc., until the desired level of precision is reached. In a quite similar fashion, a hierarchy can be applied to approximate as closely as desired a reality that is never entirely attainable. None of those mechanisms used to perform partial totalizations would work were it not possible to prioritize the descriptive parameters from the most important to the least important. The ranking of criteria must then be seen, to a lesser degree, as a form of totalizing that opens a zone of reconciliation in which linking and totalizing can co-exist. Matrixes whose axes are not only segmented according to A, B, C, etc., but also graduated between a pole that minimizes the value of a parameter and another that maximizes it (e.g.: a product that is more or less different from the firm's current line of activity in the Ansoff matrix [Appendix 3], or a State that is more or less authoritative in the Houssiaux matrix) apply such a hierarchical ordering.
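For readers who want the analogy spelled out, the Taylor expansion alluded to here can be written as follows (a standard formula recalled only to illustrate the idea of ordering contributions from the most to the least important; it adds nothing specific to the matrixes themselves):

\[ f(x) \approx f(a) + f'(a)\,(x-a) + \frac{f''(a)}{2!}\,(x-a)^2 + \frac{f'''(a)}{3!}\,(x-a)^3 + \cdots \]

Each successive term refines the approximation, and truncating the series at any degree yields a "partial totalization" whose precision depends on how far down the hierarchy of terms one goes.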
Interpolation, or mediation between totalizing and compressing. Graduated axes, particularly when they present a continuum of options, like the horizontal axis of the Ansoff matrix (1970), use a linear interpolation, i.e. showing intermediate categories at finer and finer intervals along the matrix's generating axis. By offering a spectrum of options ranging between two extremes, it is possible to play very locally with economies of scale when greater precision is desired.
When that logic is taken to the extreme, a continuous gradient appears in the matrix, for example in the Morphological Territory or in the "concentric diversifications" cell of the Ansoff matrix.
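A minimal sketch of what such linear interpolation amounts to, assuming (our assumption, for illustration only) that the two poles of a graduated axis can be coded as numerical values:

def interpolate_axis(pole_min: float, pole_max: float, levels: int) -> list:
    """Subdivide a graduated axis into `levels` evenly spaced positions between its two poles."""
    step = (pole_max - pole_min) / (levels - 1)
    return [pole_min + i * step for i in range(levels)]

# A coarse reading of the axis (4 intermediate categories) versus a finer, interpolated one (16):
print(interpolate_axis(0.0, 1.0, 4))
print(interpolate_axis(0.0, 1.0, 16))

Refining the number of levels is the computational counterpart of the "finer and finer intervals" described above: the axis still compresses reality into a single dimension, but the continuum lets the reader zoom in wherever greater precision is wanted.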
All three means of "tension-dampening" can be achieved at the same time through a single technical configuration, which we see in both the Ansoff matrix and the Houssiaux matrix, as well as in the matrixes created in the 1970s (BCG, ADL and McKinsey) and even in more recent examples (e.g. the "Entrepreneurial Strategy Matrix" [Sonfield & Lussier, 1997; Sonfield et al., 2001] and the "Ethnographic Strategic Matrix" [Paramo Morales]). There is a diagonal line that is clearly implied, though not expressed explicitly, in the matrix, emerging from the milieu sector of the matrix, as in the case of the Ansoff model (1970), which serves as a first example (see Appendix 3). Indeed, through the diagonal gradient, these matrixes present all strategic options in successive strata, whether in continuous, discrete, or cumulative series. However, stratifying data can involve arranging it in hierarchical order (between a lower and a higher graded status) as well as dividing it into categories (because there are different ranges or strata of data). It can also entail interpolation, because intermediate levels or grades of sample data can be represented spatially in the form of a radial (stratified) graph (polar visibility graph) or a rectilinearly layered (stratified) drawing (see Figure 2 below), making it possible to create a multi-scale visualization, so that a viewer can zoom in for a more detailed view.
[Table 1 fragment: for the Stage III Ansoff matrix (Appendix 3) and the Stage IV Bijon matrix (Appendix 6), the rows indicate which functions are performed by the horizontal axis, the vertical axis, the diagonal axis, and the milieu demarcated by the axes.]
By arranging these matrixes according to the number of functions performed simultaneously in each matrix element, and according to the associated milieu, we can identify the different degrees of concretization that steadily intensify as we move from the morphological box toward the Ansoff matrix. Although the Bijon matrix closely resembles the latter in functional terms, it boasts an additional technical element (the diagonal axis), which tends to dilute the functions carried out by the two orthogonal axes and confuse the reader regarding their milieu of interaction. Thus, it cannot be seen as an advancement compared to the Ansoff matrix, but is, instead, a regression. Because it includes an additional axis to show a diagonal gradient, the Bijon matrix is closer to the ideal-type model of the "primitive and abstract" technical object where "each structure has a clearly defined function, and, generally, a single one" (Simondon, 1958: 41). The Ansoff matrix, whose finely graduated and polarized axes act in synergy with a milieu that follows the reader's natural points of orientation (left-right, up-down) and is enough to suggest a concentric gradient of diversification, belongs to a more "concrete" stage. Importantly, it meets the criterion laid out by Simondon (1958: 41) whereby "a function can be performed by a number of synergistically associated structures," whereas, through the corresponding milieu that is established, "each structure performs ... essential and positive functions that are integrated into the functioning of the ensemble" (ibid.: 41). Lastly, the Ansoff matrix also exhibits this same type of refinement, which ultimately "reduces the residual antagonisms that subsist between the functions" (ibid.: 46). The finer and finer graduations on each axis give rise to (1) hierarchical ordering and (2) interpolation, which reduce the residual incompatibilities that exist between the functions, namely, (1) between linking and totalizing, and (2) between totalizing and compressing.
This chart is not intended to recount the history of matrix models (especially since no chronological order is given) but, rather, to identify their technical genesis, which, in the interest of simplicity, has been broken down into different stages of concretization:
stage I, marked by the emergence of the box, using the example of the morphological box and the panther/elephant matrix; stage II, which saw the introduction of the incomplete-gradient matrix, illustrated by the Gélinier/Sadoc matrix and the morphological territories; stage III, marked by the diagonal-gradient matrix making full use of the stratification properties of the axes and the cross-axes milieu, illustrated by the Ansoff matrix; and stage IV, which witnessed the hyper-specialized matrix. The latter stage reflects a kind of hypertely, which Simondon described in these words (1958: 61):
The evolution of technical objects can move towards some form of hypertely, resulting in a technical object becoming overly specialized and ill-adapted to even the slightest change in operating or manufacturing needs and requirements.
(iv) Tracing the genesis of the strategy matrix through a process of individuation.
Our archival research, which was limited to literature produced in France between the years 1960 and 1971, showed that different iterations of the same technical object, the strategy matrix, emerged over the years, while presenting highly variable degrees of concretization. By classifying the objects according to their degree of concretization, we obtain a picture of the technical genesis of the matrix. The fact that only the most technologically evolved version, the type III matrix, is still in use in its original form suggests that the genesis of the strategy matrix was a form of technical progress.
However that may be, given the extremely local nature of the findings, their validity and relevancy remain problematic.
Cross-cultural validity and relevancy of the technical genesis of strategy matrixes
We looked at a recent worldwide sample of strategy matrixes produced in academia (see the chart in Appendix 7 and the figures in Appendix 8). Such matrixes continue to be produced throughout the world and, in reviewing them, it is easy to immediately identify the four common stages of individuation of the object-matrix (see the "technical status" column in the chart). A review of the sample prompts the following remarks:
(1) The "strategy matrix" technical object is used in a wide variety of geographical locations and specialized contexts (the Anglo-American and Hispanic worlds, and in India, etc.), but there is no technical stage that seems to be country-or context-specific. Although the production of strategy matrixes can be associated with a certain "technical culture,"it transcends the expected cultural divisions.
(2) Stage III of the diagonal matrix is the most widely used and reproduced, which gives credence to the idea that it is the most "concrete" stage, offering the most flexibility, and represents an advance in relation to stages I and II.
(3) The long-standing production of Stage I matrixes is noteworthy. This phenomenon was described and explained by Simondon himself with regard to "material" technical objects (Simondon, 1958).
(4) Each stage IV hyper-specialized matrix exhibits a singular and distinct architecture, without having benefitted from any substantial re-use or generalization. Here, the Simondonian concept of hypertely seems to apply to these matrixes as a whole.
We did not, however, find any studies that attempted, as we have here, to give careful consideration to why and how a strategy matrix came to be designed, let alone to explain how choices were made regarding its structure, its components and their interactions, synergies and/or incompatibilities. Although strategy matrixes appear to have thrived over a long period of time, no "mechanology" (Simondon, 1958: 81) of them as technical objects seems to have been developed or taken into account by the authors. It would seem that the question of outlining a framework for describing and teaching the technical culture of matrixes remains to be investigated (Simondon, 1958: 288).
2. Theoretical Discussion: the transindividual and laying the ground for a technical culture to come.
Simondon"sargument that the essence of the technical object resides in the scheme of the system in which it existsand not in its matter or form (1958, p. 6) opens the way for two complementary avenues of research for management sciences. The first consists of developing an approach for examining all kinds of abstract management tools as technical objects. That is what we just illustrated in considering the strategy matrix and is various iterations.In the invention of a strategy matrix, the generation of a diagonal and dynamic milieu offers a clear illustration of the Simondonian process where (Simondon, 1958, p. 71):
The unity of the future associated milieu (where cause-and-effect relationships will be deployed to allow the new technical object to function) is represented, acted out, like a role that is not based on any real character, but is played out by the schemes of the creative imagination.
The only thing that separates this case from Simondon's studies on the mechanical and electronic devices of his day is that, here, the "schemes of the creative imagination" and the scheme of the technical object both exist in a cognitive state.
A second possibility presented by this conception of the technical-object-as-a-scheme resides in the opportunity to develop a technical culture.
It goes without saying that an encyclopedic, manual-like overview cannot begin to provide a real understanding of management tools in all their strengths and limitations. Conversely, reading pointed case studies or sharing professional experiences (even those put down in text form) cannot give sufficient insight for fully understanding the importance of choosing the right technical tool from among the wide range of existing models, let alone from among those that remain to be created. It would seem that the "general technology or mechanology" that Simondon had hoped for (1958, p. 58) holds out the possibility of providing management sciences with novel responses to this question. We shall now attempt to advance that argument while relying on the findings of our study of the "strategy matrix" as technical object.
Developing a true technical culture through strategy matrixes would, in our view, accomplish the ideal described by Simondon (1958, p. 335):
Above the social community of work, and beyond the interindividual relationship that does not arise from the actual performance of an activity, there is a mental and practical universe where technicity emerges, where human beings communicate through what they invent.
This presupposes, above all, that, between the specialist, the instructor and the student, "the technical object be taken for what it is at essence, that is, the technical object as invented, thought out, willed, and assumed by a human subject" (ibid.). Insofar as the essence of technicity resides in the concretization of a scheme of individuation, developing a technical culture of management hinges more on the transmission of the genesis of management tools than on the transmission of their history alone. Specifically, transmitting a technical culture of the strategy-matrix object would entail explicating the synergetic functioning of its components and the degree of technicity involved in each of its different iterations, so that the student or the manager is able to invent or perfect his own matrixes, while remaining fully aware of the specific cognitive effects he wishes to impart with this tool and each of its variants. In that way, a relationship can be formed with the technical object by "creating a fruitful connection between the inventive and organizing capacities of several subjects" (ibid., p. 342). With matrixes, that would mean teaching learners and users to create a more or less successful synergetic interaction between the functions of compressing, totalizing, linking and stratification, as defined above (§1.2). Simondon (1958, p. 335) defines the relationship that develops between inventors, users and humans as "transindividual" whenever a technical object is "appreciated and known for what is at its essence." Some might rightfully wonder whether the usual educational approach to strategy matrixes reflects such a transindividual relationship or whether, to the contrary, it fails to encourage sufficient consideration of the importance of the symbolic machines of management, which risks turning future managers into "proletarized workers," to borrow the expression coined by Stiegler (Mécréance et Discrédit, t. 1). We know only too well that it is indeed possible to work as a proletariat while still being a "manipulator of symbols," in the words of Robert Reich (1992).
Simondon posited the notion of transindividualism because, in his thinking, a human, like all living beings, is never definitively an individual: "the individual is neither complete nor substantial in itself" (2005, p. 216). Ever engaged in a necessarily incomplete process of individuation, he has at his core a "reservoir of becoming" and remains a pre-individual (2005: 167). That enables a part of himself, which is identical to other humans, to fuse with a superior individuated entity. Here, Simondon is describing two separate things (2005: 167). On the one hand, there is an affective dimension, which we will not address here; but there is also a cognitive dimension, which consists of scheme-systems of thought. Put in more contemporary terms, while taking into account the rise of the cognitive sciences, it can be said that universality and our subconscious cognitive faculties represent the pre-individual reservoir of each human being, whereas the universal understanding of technical schemes among highly dissimilar people is a decidedly transindividual act, which occurs when human beings who are quite unalike activate the same mental operations in a similar way. The Simondonian transindividual can, in this way, be seen as a core notion of a universal technical culture, which cuts across ethnic cultures and corporate cultures alike, provided that an understanding of the genesis or lineage of technology (notably, managerial techniques) is sufficiently developed. Hopefully, the reader of these lines is only too well aware that it is that very type of transindividual relationship that has begun to develop, here, between himself or herself and the creators of strategy matrixes.
Epistemological discussion: archeology and allagmatic operations
In previous studies (Author), we showed how the four above-mentioned operations (compressing, linking, totalizing and stratification) could be read through the lens of Michel Foucault's rules governing discursive formations (1969) and could be extended well beyond matrix models to cover all of the concepts generated by French strategists in the 1960s, encompassing a wide variety of technical elements. Performing such an "archeology of strategy-related knowledge" revealed that the strategy-related data that we collected was stratified, via cognitive tools such as matrixes, according to the institutional positions adopted by executive management. In other words, the epistemological stratification of strategy-related data reproduced the hierarchical stratification of the firm. The utopia of an all-powerful, all-knowing executive management was, in a certain manner, created by the very structure of the strategic concepts (Author).
The obvious limitations of an archeological approach are that it merely allows us to identify constants in the structure and the structuration of concepts, and uncover blind spots in concept formation. It must be remembered, too, that archeology is archeo-logy, which means it focuses on particular historical moments, seeking to regroup conceptual tools under the same banner without classifying them in relation to each other, or on the basis of their lineage or forms of succession. In contrast, by viewing cognitive tools not only as concepts but as technical objects, after Simondon's example, it is possible to identify their genesis and make a cross comparison according to their degree of technicity. The Simondonian notion of concretization is intended, here, to complement the Foucauldian rules of discursive formation (Foucault, 1969).
That idea can be developed even further. We need merely consider that Foucauldian archeology, far from being solely a form of structuralism (which Foucault repeatedly denied, to no avail), constitutes what Gilbert Simondon termed an allagmatic operation, or a "science of operations." The operation-based dimension of Foucauldian archeology becomes clear, for example, through the scheme-systems that Foucault suggested be employed to identify the rules of concept formation in any given corpus of knowledge (although that must be seen as only a preliminary step). Below, we present an outline of Foucault's procedures of intervention in relation to discursive statements (1969: 78):
Foucault"s procedures of intervention (1969) Operations performed through the "strategy matrix" considered as a technical object" Techniques of rewriting Redistribution of a model into a twodimensional type model Methods of transcribingaccording to a more or less formalized and artificial language Assigning a name to each strategy (e.g."concentric diversification") Methods of translating quantitative statements into qualitative formulations, and vice-versa Place categories on a continuum on each axis Methods ofexpanding approximations of statements and refining their exactitude Make a transition fromdiscrete categories to continuous gradients The way in which the domain of validity of statements is delimited, again through a process of expansion and refinement The way in which a type of statement is transferred from one field of application to another The methods of systematizing propositions that already exist, insofar as they were previously formulated, but in a separate state Include pre-designated strategies (organic growth, diversification) in an over-arching system as potential outcomes or courses of action. Methods of rearranging statements that are already related or linked to each other but have been recombined or redistributed into a new system Stratify the scope of possibilities within the block or partition of the matrix denoted asmilieu.
Table 2: Foucault's procedures of intervention and the operations performed by type III strategy matrixes.
By their very wording, these "rules of concept formation" reveal exactly how they operate. "Transcribing," "translating," "redistributing" (or "rearranging"), etc., are as much cognitive operations as discursive practices. Even if Foucauldian archeology does not draw explicitly on Simondonian terminology, it undeniably establishes a nexus between operation and structure. Written by Michel Foucault as a work of theory, but also as a defense and illustration of the approach he had adopted in his previous works (Foucault, 1961, 1962, 1966) at the height of the structuralist vogue, The Archeology of Knowledge seems almost to be out to confirm Simondon's assertion that "a science of operations cannot be achieved unless the science of structures senses, from within, the limits of its own domain" (2005: 531). Simondon uses the term "allagmatic" to describe the "theory of operations" (2005, p. 529). Our study of matrixes seeks to illustrate the intimate links that bind operation and structure, but in light of the conceptual groundwork laid out by Simondon. In Simondon's view, an operation "precedes and leads up to the appearance of a structure or modifies it" (ibid., p. 529). He provides a simple illustration by describing the gesture made by a surveyor who traces a line parallel to a straight line through a point lying outside that straight line. The surveyor's act is structured on "the parallelism that exists between a straight line in relation to another straight line," whereas the operation behind that act is "the gesture through which he performs the tracing without really taking much notice of what it is he is tracing." The important thing here is that the operation, the "gesture," has its own schema of how it is to be carried out. Indeed, to trace a straight line, a whole series of turns of the wrist and movements of the arm, for example, are called into play. The operation entailed in tracing a straight line requires adopting an array of angular positions, in contrast to the parallel lines themselves that will result from the act of tracing. The scheme of the operation (a variation of angles) is thus by no means an exact reflection of the scheme of the structure (strict alignment) needed to carry out the operation itself. Similarly, it can be said that the operations performed by a matrix (dynamic stratification, oriented gradient) do not reflect the static, symmetric, and isotropic schema that underlies the structural framework of each matrix box. The applicability of these concepts to strategy matrixes is obvious. Executive management is ever confronted by concerns that are syncretic, super-saturated and contradictory, and there is a constant need to refine and summarize strategy-related data and link it intelligibly. The inventor of a strategy matrix crystalizes this field of tensions into a two-dimensional structure that aims to classify, rank, interpolate and stratify it, while offering a metastable solution to any incompatibilities and conflicting expectations.
The "type III strategy matrix" astechnological individual performs Management teams that compile strategy-related data and input itinto the matrix blocks modulate the data. The result of that operation is, if successful, a syncreticstrategic vision.Here, the matrix has played a rolethat Simondon calls "Formsignal." As for the management researcher, he also engages in a type of conversion action.For him, these conversion actions are neither modulation or demodulation but "analogy," in the full sense of the term as used by Simondon. Modulation and demodulation link operation and structure, whereas analogy links two operations with each other. This is why Simondon calls an analogy an "équivalencetransopératoire" (ibid., p. 531). Specifically, when the researcher or the instructor explicates the genesisof the strategy-matrix-as-technical-object,he or she creates a useful link between theinventor"s crystallizationof the matrix, on the one hand, and, the crystallization that consists in the reader"s understanding of that very same schema, thanks to the information storage and schematizing machine that is his brain, on the other hand.That process is made possible by the fact that we share the same facultiesof intelligence, which are a part of our common transindividuality. In Simondon words, "It is human understanding of, and knowledge about, the same operative schemas that human thought transfers "(2005 : 533). In an analogical operation, Simondonian epistemologyis superimposed onto the ontology. And let us end with a salient quote from the Simondonianphilosopher Jean-HuguesBarthélémy (2014: 27):
In contemplating all things in terms of their genesis, human thought participates in the construction of its own thinking, instead of confronting it directly, because "understanding" genesis is itself still a genesis followed by understanding.
Conclusion
This paper seeks, first and foremost, to make a unique theoretical contribution to management science: we have developed a transcultural theory on the essence of strategy matrixes and their technological genesis. We have also sought to draw attention to significant methodological issues by testing and validating a study of cognitive management tools, principally by drawing parallels with Simondonian concepts regarding electronic and mechanical technical objects from the 1950s. In addition, our contribution may be seen as having a number of implications for epistemology: we have highlighted the important structurationalist, as opposed to structuralist, workings behind Foucauldian archeology. By studying the rules of concept formation that apply to management science, seen as a field of knowledge, we have sought to examine strategic management tools and concepts through an allagmatic perspective, viewing them as technical objects. Lastly, our research can have interesting repercussions for education, for we have outlined an educational approach to examining the technological culture of management based upon building a link between the transindividual and those who create management systems.
The controversies that have arisen pitting individualism against holism, universalism against culturalism, structure against dynamism, and being against nothingness, are a reflection of the great, perplexing difficulties that continue to haunt Western thought.
With Simondon, the notion of genesis is given pride of place, mainly because it alone "presupposes the unity containing plurality" (2005: 266), and is seen as a solver of aporia. The fact that a human being is engaged in a continuous genesis of itself is also a fundamental principle behind Simondon's concept of the transindividual. The allagmatic (2005: 429), which seeks to grasp the relationship between operations and structure, opens the way for resolving other incompatibilities. We hope that, in elaborating these topics in the context of specific management objects, our findings will incite the academic community to someday devise a true technical culture of management. And although that day may prove to be a long way off, we can only hope that Simondon's wish, expressed in 1958, will ultimately be realized (p. 298):
Through the generalization of the fundamental 'schemas', a 'technic of all techniques' could be developed: just as pure sciences have sets of precepts and rules to be followed, we might imagine creating a pure technology or a general technology.
Figure 5. Gélinier/Sadoc's matrix.
The underlying idea behind this matrix is that a firm whose product x competition outcome is unfavorable (typically illustrated by Gélinier in box A4, showing a product in decline stage in a context of intense competition) must change its product focus toward a mix that is more favorable (the arrow drawn by Gélinier points to box C2, indicating a product in the growth stage on a niche market). The product adaptation process is "ongoing" whenever the firm engages in a variety of business activities, where certain ones, as indicated in the upper right-hand section of the matrix, will have to undergo adaptation. The need to implement business changes came as part of a national industrial restructuring effort in the postwar period, after CECA and, later, the Common Market raised the possibility of "converting marginal businesses."
This chart can be viewed as a combination of two technical elements (graduated axes) located within a milieu (demarcated by space on the sheet of paper) that allows them to interact. Each segmented axis projects into the space all available options (i.e. all products or competitive situations falling within one of the types of pre-defined categories). It should be noted that the products axis is not only segmented but graduated as well, since the order of the segments reflects the law of the changing market reality, in contrast with the axis depicting competitive situations. At the same time, the space occupied by the matrix portrays 25 types of strategic situations, reducing the memorizing effort required to interpret the axes and their graduation. Hence, the matrix performs both a totalizing and compressing function. However, there is no clear, explicit method for linking the elements that explain the overall logic of adapting to market changes: financial synergy, the cash flow rationale, and the technology trajectory. In this pioneering technical object, which closely resembles the matrix designed by Arthur D. Little, the underlying portfolio assumptions are confined to a risk minimization strategy, at best. Structurally, there is no clear means of locating the milieu or zone of interaction between the two matrix axes; that is, there is no diagonal line created by the interaction of the different characteristics on each matrix axis.
Appendix 2: The "Panther/Elephant" matrix
A new management approach is beginning to appear on the horizon and is poised to challenge if not surpass the traditional "best management practices" spirit. For it is becoming increasingly clear that the quality of business management is no longer enough to guarantee success, as managers find themselves faced by an emerging breed of "flexible, fearless, but highly successful and visionary entrepreneurs."
Claude Charmont has proposed a model relying on all of these assumptions, giving it a form that represents one of the first and most highly original uses of strategy matrixes.
It classifies firms according to their business outlook within a two-dimensional array (a "square matrix"), with the first variable representing the degree of "best-practice spirit," and the second measuring the degree of "entrepreneurial spirit" (the memorization of 4 quadrants is thus reduced to the memorization of two axes). No diagonal effect is produced by combining axes, and there appears to be no means of circulating within the two-dimensional space, so that the matrix does not generate its own milieu.
[Figure: translation of the Ansoff diversification matrix; labels include "New Type" and "Conglomerate (Heterogeneous) Diversification," with the note "(1) Related marketing efforts/systems and technology," the normal reading orientation (in the direction of the slope of the diagonal line), and lines at an iso-distance from the firm's current situation.] The matrix thus converts diversification options into a continuum defined by their distance from the current situation, portrayed as concentric circles dubbed "contiguous zones."
From a functional point of view, the Ansoff matrix can be considered simply as a condensation, into a single object, of elements that appear in the morphological box and morphological territory model. Taking two more ungainly tools and combining them into a single, more "concretized" tool that is technically more sophisticated is analogous to the laboratory machines whose fit is not yet optimal, as described by Simondon to illustrate the pre-individual stages that mark the genesis of a technical object. Likewise, depending on whether or not the firm's growth (tr) exceeds its financial capacity (te), it will position itself to the left or to the right of the median line (te/tr = 1):
[Figure: translation of the Bijon plane; on one side of the median line the firm loses its financial equilibrium, on the other it improves its cash flow.] The most favorable situation for the firm is that of "industry leader," shown in the quadrant te > tr > tm. That situation can deteriorate in either of two directions, each of which is linked to a specific type of management error: a) a "myopic view of the environment," in which a firm that is growing slower than the market experiences a dramatic loss in its growth capacity (a scenario depicted in the area below the main diagonal), and b) "disregard of financial imperatives," where a growth crisis also places the firm in a difficult financial situation. In this approach, the path taken by a firm can be seen in the model (Bijon did not create the model used for this paper). Although there are considerable differences in the parameters at play, as well as in the underlying commercial and economic factors, the outcomes obtained from using these models are likely to be scarcely different.
[Figure labels: most favourable situation; deterioration due to adopting a myopic view of the environment; deterioration of the financial situation due to disregard of financial imperatives.]
The four strategic alternatives (Tavana, "Euclid: Strategic Alternative Assessment Matrix") | Category x Brand | Stage IV | Although the axes have their own gradient, the "jigsaw" segmentation of the bidimensional space has a central pole, in discrepancy with the angular position of the poles of the axes; two gradients compete with and blur each other.
The strategy reference point matrix (Fiegenbaum et al., "Strategic Reference Point Theory") | Time x Internal-External x Inputs-Outputs | Stage IV | A three-dimensional matrix drawn in rough perspective, which destroys the milieu of the matrix; the object has lost its individuality.
The product-process matrix (Spencer, "An analysis of the product-process matrix and repetitive manufacturing") | Product structure x Process
Table 3. Technical stages of contemporary matrixes
Figure 2. Two types of stratification graphs
Fig. 6. The Panther/Elephant matrix
Fig. 7. The Ansoff diversification matrix (below: a re-transcription in English of the French document)
Figure 8. Stratified milieu within the Ansoff diversification matrix
Figure 1. The firm's growth rate compared to the market growth rate
Fig. 14. The second segmentation within the Bijon matrix
Fig. 16. Strategic trends awkwardly suggested by the Bijon matrix
Table 1: The Strategy Matrix as Technical Object, viewed by degrees of intensifying concretization and stages of development
The Charmont matrix is shown below:
Strong "best management practices" spirit / weak entrepreneurial spirit: 3. Conservative, well-managed firms.
Strong "best management practices" spirit / strong entrepreneurial spirit: 4. Firms enjoying fast-growing diversification but selective in exploring new avenues to profits.
Weak "best management practices" spirit / weak entrepreneurial spirit: 1. Bureaucratic and conservative firms.
Weak "best management practices" spirit / strong entrepreneurial spirit: 2. Dynamic, forward-moving firms characterized by a high number of failed ventures.
This relationship is not inconsistent with realistic metaphysics. Although Simondon did not advocate substantialism, he adhered to the philosophy of a "realism of relationships" (Barthélémy, 2008: 18-34).
These visual depictions follow the example of Simondon's technical Atlas, which was used to support his arguments (Simondon, 1958).
1. "The living being is an individual who carries within himself his associated milieu" (Simondon, 1958, p. 71)
List of references
(Author)
(Author)
Azmi, Feza Tabassum (2008). Organizational Learning: Crafting a Strategic Framework. ICFAI Journal of Business Strategy, Vol. 5, Issue 2, pp. 58-70.
Banerjee, Saikat (2008). Strategic Brand-Culture Fit: A conceptual framework for brand management. Journal of Brand Management, Vol. 15, Issue 5, pp. 312-321.
Barthélémy, Jean-Hugues (2008). Simondon ou l'encyclopédisme génétique. Paris: Presses Universitaires de France.
Barthélémy, Jean-Hugues (2014). Simondon. Paris: Les Belles Lettres.
Appendix 1: The "Sadoc/Gélinier Matrix"
Gélinier (1963, pp. 158-169) designed a matrix portraying the correlation between certain types of situations and appropriate strategic responses, containing 8 variable values. It makes cross-tabulations between variables, but only for variables 1 and 2, through "Sadoc's Table of Ongoing Adaptation to Market Changes."
Appendix 3. Ansoff's diversification matrix
In a work that has been translated into French, Ansoff (1970, chap. 7) lays out his thoughts on diversification strategies, using a matrix that exhibits a high degree of technicity.
Appendix 4: "Morphological Boxes" and "Morphological Territories"
In Prévision à long terme et stratégie, Christophe Dupont attempts to establish a link between technology planning and strategic management. He presents two analytical tools that seem to have played a primordial role in the genesis of matrixes: the "morphological box" and the "morphological territory."
The "morphological box" is a technology forecasting and planning tool that is still used, to this day, in France (Godet, 1997), for all kinds of forward-looking studies.
Every possible configuration is represented by an n-tuple [Pij], with a combination of values using a set of descriptive parameters indicating possible future scenarios or situations (following the example given in Dupont's book, we have shown variables in sextuples). Some parameters have fewer possible alternatives than others, and "prohibited" scenarios are indicated with an "X."
Fig. 9. The morphological boxes (lines: descriptive parameters; columns: options)
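As an illustration only (the parameter names, options and prohibited combination below are hypothetical, not taken from Dupont, and the distance measure, a simple count of differing parameter values, is our assumption since the text does not specify the metric), a morphological box and the "territories" built around the current situation can be sketched as follows:

from itertools import product

# Illustrative descriptive parameters and their options.
parameters = {
    "technology": ["current", "adjacent", "new"],
    "market": ["domestic", "export"],
    "distribution": ["direct", "dealers", "licensing"],
}
prohibited = {("new", "export", "licensing")}  # scenarios marked with an "X"

# Every admissible configuration is an n-tuple of parameter values.
configurations = [c for c in product(*parameters.values()) if c not in prohibited]

def distance(a: tuple, b: tuple) -> int:
    """Assumed distance between scenarios: number of parameter values that differ."""
    return sum(1 for x, y in zip(a, b) if x != y)

# Group scenarios into concentric "territories" at increasing remove from the current situation.
current = ("current", "domestic", "direct")
territories = {}
for c in configurations:
    territories.setdefault(distance(current, c), []).append(c)

for d in sorted(territories):
    print(f"distance {d}: {len(territories[d])} scenario(s)")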
The author then introduces the notion of the difference, or distance, between the possible scenarios (in the same way that the distance between vectors is calculated in mathematics), which leads to the definition of "morphological territories," that is, concentric zones in which future situations are shown at a further and further remove from the current situation.
Appendix 5: The Houssiaux matrix
[Chart: type of strategy adopted depending on disparities between industrial policies; strategies in light of disparities in industrial policies; list of situations.]
It should be noted that interpreting the policy recommendation is relatively straightforward: the multinational firm should adopt a less-integrated business model as the degree of divergence between the national policies rises.
J. Houssiaux's chart is canonical but is not a "matrix" strictly speaking. In a true matrix, two different parameters are represented in each matrix cell in order to show a unique and unrepeated combination of values. Here, to the contrary, the model repeats a value ("severity" of state policies), placing it in two different blocks on either side of the main diagonal line. That explains why the chart is perfectly symmetrical, forming a rectangle that has been cut into two congruent triangles, with the same value in both the upper and lower halves. A single triangle, using either the upper or lower half of the chart, would have sufficed for presenting all of the information shown here. And so, not only is this not the most optimal use of space, it illustrates a very poor use of the compressing effect.
Nonetheless, the graduated axes of the matrix generate a diagonal slope. Similarly, the fact that the full range of possible industrial policy options is covered by each axis performs a good totalizing function.
Appendix 6: The Bijon matrix
The author laid out a theory of making the right strategic choice, based on the perceived growth potential of the firm and its markets, respectively. The model shows a "two-dimensional space" divided into six sectors, and requires a minimum of mathematical proficiency if it is to be used to good effect.
To construct this type of "matrix," the author defines three values, the third of which proves more difficult to express as a testable value than the first two:
"The market growth rate (tm)" If the firm is highly diversified "a different approach may have to be adopted, separately, for each of the firm"s business units" (p. 224)
"The "reasonable" growth rate (te) is the highest growth rate that the firm can allow itself to achieve without making a structural change to its balance sheet "
It is a "function of its cash-flow, its ability to negotiate borrowings on financial markets, and make sensible income-producing investments" (p. 224).
The firm's position (or the position of a diversified firm's business unit) is shown on these three parameters in a plane (te/tr) x (te/tm).
Depending on whether or not the firm grows faster than the market where it operates, it will occupy one or the other side of the diagonal on this plane:
(Figure labels: "The firm increases its market share" / "The firm's market share decreases".)
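Before looking at how this model and the BCG matrix differ, the construction just described can be made concrete with a small sketch. The reading of tr as the firm's own growth rate is an assumption (the excerpt does not define the third value explicitly), and only the diagonal test is implemented, since the boundaries of the six sectors are not given here.

```python
def bijon_position(tm, te, tr):
    """Locate a firm (or business unit) in Bijon's (te/tr) x (te/tm) plane.

    tm -- market growth rate
    te -- "reasonable" growth rate the balance sheet can sustain
    tr -- the firm's own growth rate (assumed meaning of the third value)
    """
    coords = (te / tr, te / tm)   # coordinates in the plane
    if tr > tm:
        side = "the firm grows faster than its market and gains market share"
    elif tr < tm:
        side = "the firm grows more slowly than its market and loses share"
    else:
        side = "the firm sits on the diagonal: its market share is stable"
    return coords, side

# Example: a unit growing at 12% in a market growing at 8%, with a
# sustainable ("reasonable") rate of 10%.
print(bijon_position(tm=0.08, te=0.10, tr=0.12))
```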
Let us look, first, at the differences: As regards the calculated values, the BCG matrix cross-tabulates industry growth rate factors and their relative market shares. These values may appear to be constructed solely from data visible from outside the firm/industry, independently of its financial structure, management, etc., whereas the Bijon matrix cross-tabulates growth rates in terms of the value (te), which is clearly a variable dependent on the firm's balance sheet structure.
The other differences (division of the matrix into 4 parts instead of 6, no express requirement to use a portfolio with the Bijon matrix) are minor by comparison to the difference mentioned above.
- There are also a number of important points that the models have in common:
For one thing, the planned or forecast values for both models are very similar. Indeed, in both cases, they create a diagonal effect within the matrix that naturally draws the eyes in the direction of its slope, to view the path taken by the firm.
In addition, when examining the commercial and economic laws underlying both models, we recognize an even closer similarity. On the one hand, the BCG matrix enjoys economic relevancy only because it bears out the law of the stages of industrial maturity, which itself is founded on an interpretation of the "experience curve": the more an industrial sector matures, the more a dominant market position in that particular sector is required in order to generate a cash flow from that sector. On the other hand, the Bijon model has a predictive value only if the "reasonable growth rate" value (te) is constantly updated, insofar as it measures a firm's capacity to supply capital that it has not applied toward its own growth. Although there is no explicit law of maturity justifying this model, the presence of the value (te) ensures that it is, in fact, taken into account, in the event that it indeed proves valid. The Bijon model rests on weaker assumptions than those inherent to the BCG model, and reveals itself to be more general in scope. It could be said, then, that the primary difference between the two tools is their difference in presentation: while the BCG matrix takes into account the firm's financial resources only implicitly, through the law requiring that a balanced portfolio be maintained, which guides the manner in which its results are interpreted, the Bijon matrix displays its internal features explicitly in the matrix coordinates.
In contrast, the need "not to lag behind when entering the market" is, in the case of the [START_REF] Spencer | An analysis of the product-process matrix and repetitive manufacturing[END_REF] Product structure x Process
The diagonal gradient is extremely explicit | 72,263 | [
"12868"
] | [
"57129"
] |
01484503 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484503/file/NER2017_DiscreteMotorImagery.pdf | Sébastien Rimbert
Cecilia Lindig-León
Mariia Fedotenkova
Laurent Bougrain
Modulation of beta power in EEG during discrete and continuous motor imageries
In most Brain-Computer Interface (BCI) experimental paradigms based on Motor Imageries (MI), subjects perform continuous motor imagery (CMI), i.e. a repetitive and prolonged intention of movement, for a few seconds. To improve efficiency, e.g. to detect a motor imagery faster, the purpose of this study is to show the difference between a discrete motor imagery (DMI), i.e. a single short MI, and a CMI. The results of an experiment involving 13 healthy subjects suggest that a DMI generates a robust post-MI event-related synchronization (ERS). Moreover, the event-related desynchronization (ERD) produced by a DMI seems less variable in certain cases compared to a CMI.
I. INTRODUCTION
Motor imagery (MI) is the ability to imagine performing a movement without executing it [START_REF] Avanzino | Motor imagery influences the execution of repetitive finger opposition movements[END_REF]. MI has two different components, namely the visual-motor imagery and the kinesthetic motor imagery (KMI) [START_REF] Neuper | Imagery of motor actions: Differential effects of kinesthetic and visual-motor mode of imagery in single-trial {EEG}[END_REF]. KMI generates an event-related desynchronization (ERD) and an event-related synchronization (ERS) in the contralateral sensorimotor area, which is similar to the one observed during the preparation of a real movement (RM) [START_REF] Pfurtscheller | Event-related eeg/meg synchronization and desynchronization: basic principles[END_REF]. Compared to a resting state, before a motor imagery, firstly there is a gradual decrease of power in the beta band (16-30 Hz) [START_REF] Kilavik | The ups and downs of beta oscillations in sensorimotor cortex[END_REF] of the electroencephalographic signal, called ERD. Secondly, a minimal power level is maintained during the movement. Finally, from 300 to 500 milliseconds after the end of the motor imagery, there is an increase of power called ERS or post-movement beta rebound with a duration of about one second.
Emergence of ERD and ERS patterns during and after a MI has been intensively studied in the Brain-Computer Interface (BCI) domain [START_REF] Jonathan Wolpaw | Brain-Computer Interfaces: Principles and Practice[END_REF] in order to define detectable commands for the system. Hence, a better understanding of these processes could allow for the design of better interfaces between the brain and a computer system. Additionally, they could also play a major role in applications where MIs are involved, such as rehabilitation for stroke patients [START_REF] Butler | Mental practice with motor imagery: evidence for motor recovery and cortical reorganization after stroke[END_REF] or monitoring consciousness during general anesthesia [START_REF] Blokland | Decoding motor responses from the eeg during altered states of consciousness induced by propofol[END_REF].
Currently, most of the paradigms based on MIs require the subject to perform the imagined movement several times for a predefined duration. In this study, such a task is commonly referred to as a continuous motor imagery (CMI). However, first, the duration of the experiment is long; second, a succession of flexions and extensions generates an overlapping of ERD and ERS patterns, making the signal less detectable.
*This work has been supported by the Inria project BCI LIFT. 1 Neurosys team, Inria, Villers-lès-Nancy, F-54600, France. 2 Artificial Intelligence and Complex Systems, Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506. 3 Neurosys team, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506.
In fact, one simple short MI, referred to in this article as a discrete motor imagery (DMI), could be more useful for two reasons. Firstly, a DMI could be used to combat fatigue and boredom for BCI users, improving ERD and ERS production [START_REF] Ahn | Performance variation in motor imagery braincomputer interface: a brief review[END_REF]. Secondly, the ERD and ERS generated by the DMI could be detectable at a higher quality and more rapidly compared to a CMI. This was found in a previous study that established a relationship between the duration of MI and the quality of the extracted ERS and showed that a brief MI (i.e. a 2-second MI) could be more efficient than a sustained MI [START_REF] Thomas | Investigating brief motor imagery for an erd/ers based bci[END_REF]. Our main hypothesis is that a DMI generates robust ERD and ERS patterns which could be detectable by a BCI system. To analyze and compare the modulation of beta band activity during a RM, a DMI and a CMI, we computed time-frequency maps, topographic maps and ERD/ERS%.
II. MATERIAL AND METHODS
A. Participants
13 right-handed healthy volunteer subjects took part in this experiment (7 men and 6 women, from 19 to 43 years old). They had no medical history which could have influenced the task. All subjects gave their agreement and signed an informed consent form approved by the INRIA ethical committee before participating.
1) Real movement: The first task consisted of an isometric flexion of the right index finger on a computer mouse. A low frequency beep indicated when the subject had to execute the task.
2) Discrete imagined movement: The second task was a DMI of the previous real movement.
3) Continuous imagined movement: The third task was a four-second CMI of the real movement of the first task. More precisely, the subject imagined several (around four) flexions and extensions of the right index finger. This way, the DMI differed from the CMI by the repetition of the imagined movement. The number of imagined flexions was fixed (4 MIs). For this task, two beeps, respectively with low and high frequencies, separated by a four-second delay, indicated the beginning and the end of the CMI.
B. Protocol
Each of the three tasks introduced in section II corresponds to a session. The subjects completed three sessions during the same day. All sessions were split into several runs. Breaks of a few minutes were planned between sessions and between runs to avoid fatigue. At the beginning of each run, the subject was told to relax for 30 seconds. Condition 1, corresponding to RMs, was split into 2 runs of 50 trials.
Conditions 2 and 3, corresponding to discrete and continuous imagined movements respectively, were each split into 4 runs of 25 trials. Thus, 100 trials were performed by subjects for each task. Each experiment began with condition 1 as session 1. Conditions 2 and 3 were randomized to avoid possible bias caused by fatigue, gel drying or another confounding factor. For conditions 1 and 2, the timing scheme of a trial was the same: one low frequency beep indicated the start, followed by a rest period of 12 seconds. For condition 3, a low frequency beep indicated the start of the 4-second MI, followed by a rest period of 8 seconds. The end of the MI was announced by a high frequency beep (Fig. 1).
C. Electrophysiological data
EEG signals were recorded through the OpenViBE [START_REF] Renard | Openvibe: An open-source software platform to design, test and use brain-computer interfaces in real and virtual environments[END_REF] platform with a commercial REFA amplifier developed by TMS International. The EEG cap was fitted with 9 passive electrodes re-referenced with respect to the common average reference across all channels over the extended international 10-20 system positions. The selected electrodes are FC3, C3, CP3, FCz, Fz, CPz, FC4, C4 and CP4. Skin-electrode impedances were kept below 5 kΩ.
D. EEG data analysis
We performed time-frequency analysis using the spectrogram method (Fig. 2). The spectrogram is the squared magnitude of the short-time Fourier transform. As the analysis window we used a Gaussian window with α = 2.5 [START_REF] Harris | On the use of windows for harmonic analysis with the discrete Fourier transform[END_REF], with subsequent segments overlapping by one time point. The length of the window was chosen so as to give a frequency resolution of ∆f = 1 Hz.
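As an illustration, the computation can be reproduced along the following lines (a Python/SciPy sketch, not the authors' code). The 256 Hz sampling rate is an assumption (it is not stated in this section), the α-to-standard-deviation conversion assumes the gausswin convention, and "overlap by one time point" is read here as consecutive windows shifted by one sample; all three points are assumptions.

```python
import numpy as np
from scipy import signal

FS = 256                  # sampling rate in Hz (assumed; not stated here)
NPERSEG = FS              # 256-sample window -> frequency resolution of 1 Hz
ALPHA = 2.5               # Gaussian window parameter reported above
STD = (NPERSEG - 1) / (2 * ALPHA)   # assumes the gausswin convention for alpha

def trial_spectrogram(x):
    """Squared magnitude of the STFT for one single-channel trial (e.g. C3)."""
    f, t, s = signal.spectrogram(
        x,
        fs=FS,
        window=('gaussian', STD),
        nperseg=NPERSEG,
        noverlap=NPERSEG - 1,   # consecutive segments shifted by one sample
        mode='magnitude',
    )
    return f, t, s ** 2

def grand_average_map(trials):
    """Average the per-trial maps, as done for the grand average in Fig. 2."""
    return np.mean([trial_spectrogram(x)[2] for x in trials], axis=0)
```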
To evaluate this modulation more precisely, we computed the ERD/ERS% using the "band power method" [START_REF] Pfurtscheller | Event-related eeg/meg synchronization and desynchronization: basic principles[END_REF], implemented in Matlab. First, the EEG signal is filtered between 15 and 30 Hz (beta band) for all subjects using a 4th-order Butterworth band-pass filter. Then, the signal is squared for each trial and averaged over trials. It is then smoothed using a 250-millisecond sliding window with a 100 ms shifting step. Finally, the averaged power of a baseline corresponding to the 2 seconds before each trial is subtracted from the averaged power of each window, and the result is divided by the baseline power.
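A minimal sketch of this pipeline, written here in Python rather than the Matlab code actually used, could look as follows; the sampling rate and the zero-phase filtering are assumptions.

```python
import numpy as np
from scipy import signal

FS = 256                         # sampling rate in Hz (assumed)
WIN = int(0.250 * FS)            # 250 ms smoothing window
STEP = int(0.100 * FS)           # 100 ms shifting step
B, A = signal.butter(4, [15, 30], btype='bandpass', fs=FS)

def erd_ers_percent(trials, n_baseline):
    """ERD/ERS% curve for one electrode.

    trials     -- array (n_trials, n_samples) of epochs for that electrode
    n_baseline -- number of samples in the 2 s pre-trial baseline
    """
    # 1) band-pass filter in the beta band (15-30 Hz); zero-phase filtering
    #    is an assumption, the filtering direction is not specified above
    filtered = signal.filtfilt(B, A, trials, axis=1)
    # 2) square and average over trials
    power = np.mean(filtered ** 2, axis=0)
    # 3) smooth with a 250 ms sliding window shifted by 100 ms
    starts = np.arange(0, power.size - WIN + 1, STEP)
    smoothed = np.array([power[s:s + WIN].mean() for s in starts])
    # 4) express each value relative to the pre-trial baseline power
    baseline = power[:n_baseline].mean()
    return (smoothed - baseline) / baseline * 100.0
```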
In addition, we computed the topographic maps of the ERD/ERS% modulations for all subjects (see Fig. 3).
III. RESULTS
A. Electrophysiological results
To verify whether a DMI generates ERD and ERS patterns as detectable as those of a CMI, we studied the following three features: (i) the time-frequency analysis for electrode C3, (ii) the relative beta power for electrode C3 and (iii) the topographic map built from the 9 selected electrodes. Electrode C3 is suitable for monitoring right hand motor activity. A grand average was calculated over the 13 subjects. We used a Friedman test to analyze whether the ERSs differed significantly across the three conditions. Because participants were asked to close their eyes, the alpha band was disturbed (confirmed by the time-frequency analysis) and was not considered for this study. Consequently, the values corresponding to the desynchronization appear smaller because they were only analyzed in the beta band. For this reason, section III is mainly focused on the ERS.
1) Real movement: Fig. 2.A illustrates a strong synchronization in the 17-20 Hz band appearing 2 seconds after the start beep, confirming the activity in the beta band. The ERD/ERS% averages (Fig. 2.D) indicate that one second after the cue, the power in the beta band increases by around 80%, reaches its maximum and returns to the baseline 4 seconds later. The evolution from ERD to ERS is rapid (less than one second) and should be linked to the type of movement performed by the subjects. Interestingly, each subject (except Subject 13) has the same ERD/ERS% profile (i.e. a strong beta rebound) after the real movement. Subject 13 has no beta rebound after the movement but has a stronger ERD; this is particularly true for the other conditions. The grand average topographic map (Fig. 3) shows that the ERS is strongest over the area of electrode C3. However, the ERS is also present around other electrodes, including the ipsilateral one.
2) Discrete motor imagery: Fig. 2.B shows a strong modulation in the 16-22 Hz band starting 2 seconds after the start beep. The post-MI ERS reaches 28%, which is weaker than in the other tasks (Fig. 2.E). Some subjects (S1, S2, S5, S6, S10) show a robust ERS produced by the DMI, while others have no beta rebound. This confirms that a DMI could be used in the BCI domain. The lack of beta rebound (S3, S4, S11) could be caused by the difficulty of the DMI task. Indeed, post-experiment questionnaires showed that some subjects had difficulties in performing this task. The grand average (Fig. 3) shows a desynchronization of around 5% over the C3 area. One second later, the beta rebound appears, mostly around the C3 area.
3) Continuous motor imagery: During the CMI, the subjects imagined several movements in a time window of 4 seconds. Fig. 2.C shows a global decrease of activity during the CMI and a stronger modulation in the 16-21 Hz band after the MI. The grand average shows a low desynchronization during this time window. It is interesting to note that some subjects (S2, S10) have no desynchronization during the CMI task, which could have a negative effect on the classification phase. Other subjects (S6, S1, S7) have a different profile, in which a first ERS is reached one second after the beginning of the CMI; the power then increases and decreases again, being modulated for 3 seconds. This ERD can be considered as the concatenation of several ERDs and ERSs due to the realization of several MIs. Indeed, for some subjects (S1, S6 or S9) the first ERD (23%) is reached during the first second after the MI. The topographic map shows that during the first second after the start beep, an ERD is slightly visible, but it is difficult to identify a synchronization or a desynchronization. Understanding individual ERD and ERS profiles across subjects for the CMI task is crucial to improving the classification phase in a BCI.
4) Comparison between RM, DMI and CMI: We observe that the ERS is stronger for a real movement. In fact, the beta rebound is 60% larger for a RM than for a MI. Although the ERS is stronger during a DMI than a CMI for some subjects (S2 and S6), this result is not statistically significant according to the Friedman test. The ERS of the CMI is stronger than the ERS of a DMI on average. For both DMI and CMI, the ERD is stronger and lasts longer than for the real movement. For some subjects (S1, S6 and S10), the ERD produced during the CMI is more variable and seems to be the result of a succession of ERDs and ERSs generated by several MIs.
IV. DISCUSSION
The subjects carried out voluntary movements, DMI and CMI of an isometric flexion of the right hand index finger. Results show that the power of the beta rhythm is modulated during the three tasks. The comparison between ERSs suggests that subjects on average have a stronger ERS during a CMI than a DMI. However, this is not the case for all subjects.
A. EEG system
It is well established that a large number of electrodes allows a good estimation of the global average potential of the whole head [START_REF] Dien | Issues in the application of the average reference: review, critiques, and recommendations[END_REF]. Although we focused on specific electrodes, our results were similar when using a derivation method, and corresponded to the literature. We chose to study C3 without derivation because we are interested in designing a minimal system to detect ERD and ERS under general anesthesia conditions.
B. ERD/ERS modulation during real movements
The results are coherent with previous studies describing ERD/ERS% modulations during motor actions. The weakness of the ERD can be linked to the instruction, which focused more on the precision than on the speed of the movement [START_REF] Pastötter | Oscillatory correlates of controlled speed-accuracy tradeoff in a response-conflict task[END_REF].
C. ERS modulation during motor imageries
The results show that the beta rebound is lower after a DMI or a CMI than after a real movement, which has already been demonstrated previously [START_REF] Schnitzler | Involvement of primary motor cortex in motor imagery: a neuromagnetic study[END_REF]. However, the novelty is that the beta rebound is stronger on average after a CMI than after a DMI, except for a few subjects.
D. ERD modulation during continuous motor imagery
When the subjects performed the CMI, the ERD was highly variable during the first 4 seconds. For some subjects, our hypothesis is that there are intermediate ERDs and ERSs within this period. The difficulty is that the CMI involves several MIs that are not synchronized across trials, unlike the DMI, which starts and ends at roughly the same time in each trial, due to the cue. Normally, for a continuous real movement, the ERD is sustained during the execution of the movement [START_REF] Erbil | Changes in the alpha and beta amplitudes of the central eeg during the onset, continuation, and offset of longduration repetitive hand movements[END_REF]. However, in our data it is possible to detect several ERDs during the 4 seconds of CMI in which the subject performed 3 or 4 MIs. This suggests that the ERD and ERS components overlap in time when a CMI is performed. Several studies already illustrate the concept of overlap of various functional processes constituting the beta components during RMs [START_REF] Kilavik | The ups and downs of beta oscillations in sensorimotor cortex[END_REF]. This could explain why the ERD during a CMI could be less detectable and more variable than the ERD during a DMI. To validate this hypothesis, we plan to design a new study to explore how two fast successive movements (or MIs) affect the signal in the beta frequency band.
V. CONCLUSIONS
This article examined the modulation of beta power in EEG during a real movement, a discrete motor imagery (DMI) and a continuous motor imagery (CMI). We showed that during a real voluntary movement corresponding to an isometric flexion of the right index finger, a low ERD appeared, followed by a rapid and powerful ERS. Subsequently, we showed that the ERD and ERS components were still modulated by both a DMI and a CMI. The ERS is present in both cases and shows that a DMI could be used in the BCI domain. In future work, a classification based on the beta rebound of a DMI and a CMI will be performed to complete this study and confirm the potential impact of the DMI task in the BCI domain, to save time and avoid fatigue.
Fig. 1 .
1 Fig. 1. Timing schemes of a trial for each task: Real Movement (RM, top); Discrete Motor Imagery (DMI, middle); Continuous Motor Imagery (CMI, bottom). The DMI and CMI sessions are randomized.
Fig. 2 .
2 Fig. 2. Left side: time-frequency grand average (n = 13) analysis for the RM (A), the DMI (B), the CMI (C) for electrode C3. A red color corresponds to strong modulations in the band of interest. Right side: grand average ERD/ERS% curves (in black, GA) estimated for the RM (D), the DMI (E), the CMI (F) within the beta band (15-30 Hz) for electrode C3. The average for each subject is also presented.
Fig. 3 .
3 Fig. 3. Topographic map of ERD/ERS% (grand average, n=13) in the 15-30 Hz beta band during Real Movement (top), Discrete Motor Imagery (middle) and Continuous Motor Imagery (bottom). The red color corresponds to a strong ERS (+50%) and a blue one to a strong ERD (-40%). The green line indicates when the start beep sounds and the purple line indicates when the end beep sounds to stop the CMI. On this extrapolated map, only the recorded electrodes are considered (FC3, C3, CP3, FCz, Fz, CPz, FC4, C4, CP4).
"774179",
"1062"
] | [
"213693",
"213693",
"213693",
"413289",
"213693"
] |
01484574 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01484574/file/chi-teegi-interactivity.pdf | Jérémy Frey
email: jeremy.frey@inria.fr
Renaud Gervais
email: renaud.gervais@inria.fr
Thibault Lainé
email: thibault.laine@inria.fr
Maxime Duluc
email: maxime.duluc@inria.fr
Hugo Germain
email: hugo.germain@inria.fr
Stéphanie Fleck
email: stephanie.fleck@univ-lorraine.fr
Fabien Lotte
email: fabien.lotte@inria.fr
Martin Hachet
email: martin.hachet@inria.fr
Scientific Outreach with Teegi, a Tangible EEG Interface to Talk about Neurotechnologies
Keywords: Tangible Interaction, EEG, BCI, Scientific Outreach ACM Classification H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities, H.5.2 [User Interfaces]: Interaction styles, H.1.2 [User/Machine Systems]: Human information processing, I.2.6 [Learning]: Knowledge acquisition
Teegi is an anthropomorphic and tangible avatar exposing a users' brain activity in real time. It is connected to a device sensing the brain by means of electroencephalography (EEG). Teegi moves its hands and feet and closes its eyes along with the person being monitored. It also displays on its scalp the associated EEG signals, thanks to a semi-spherical display made of LEDs. Attendees can interact directly with Teegi -e.g. move its limbs -to discover by themselves the underlying brain processes. Teegi can be used for scientific outreach to introduce neurotechnologies in general and brain-computer interfaces (BCI) in particular.
Introduction
Teegi (Figure 1) is a Tangible ElectroEncephaloGraphy (EEG) Interface that enables novice users to get to know more about something as complex as neuronal activity, in an easy, engaging and informative way. Indeed, EEG measures the brain activity in the form of electrical currents, through a set of electrodes placed on the scalp and connected to an amplifier (Figure 2). EEG is widely used in medicine for diagnostic purposes and is also increasingly explored in the field of Brain-Computer Interfaces (BCI). BCIs enable a user to send input commands to interactive systems without any physical motor activity or to monitor brain states [START_REF] Pfurtscheller | Motor imagery and direct brain-computer communication[END_REF][START_REF] Frey | Framework for electroencephalography-based evaluation of user experience[END_REF]. For instance, a BCI can enable a user to move a cursor to the left or right of a computer screen by imagining left or right hand movements respectively. BCI is an emerging research area in Human-Computer Interaction (HCI) that offers new opportunities. Yet, these emerging technologies feed fears and dreams in the general public ("telepathy", "telekinesis", "mind-control", ...). Many fantasies are linked to a misunderstanding of the strengths and weaknesses of such new technologies. Moreover, BCI design is highly multidisciplinary, involving computer science, signal processing, cognitive neuroscience and psychology, among others. As such, fully understanding and using BCI can be difficult.
In order to mitigate the misconceptions surrounding EEG and BCI, we introduced Teegi in [START_REF] Frey | Teegi: Tangible EEG Interface[END_REF], as a new system based on a unique combination of spatial augmented reality, tangible interaction and real-time neurotechnologies. With Teegi, a user can visualize and analyze his or her own brain activity in real-time, on a tangible character that can be easily manipulated, and with which it is possible to interact. Since this first design, we switched from projection-based and 3D tracking technologies to an LED-based semi-spherical display (Figure 3). All the electronics are now embedded. This way, Teegi became self-contained and can be easily deployed outside the lab. We also added servomotors to Teegi, so that he can move and be moved. This way, we can more intuitively describe how hand and foot movements are linked to specific brain areas and EEG patterns. Our first exploratory studies in the lab showed that interacting with Teegi seemed to be easy, motivating, reliable and informative. Since then, we confirmed that Teegi is a relevant training and scientific outreach tool for the general public. Teegi as a "puppet" -an anthropomorphic augmented avatar -proved to be a good solution in the field to break the ice with the public and explain complex phenomena to people from all horizons, from children to educated adults. We tested Teegi across continents and cultures during scientific fairs before thousands of attendees, in India as well as in France.
Description of the system
The installation is composed of three elements: the EEG system that records brain signals from the scalp, a computer that processes those signals and the puppet Teegi, with which attendees interact.
EEG signals can be acquired from various amplifiers, from medical grade equipment to off-the-shelf devices. The choice of system mainly depends on the brain states that one wants to describe through Teegi. For instance, our installation focuses on the brain areas involved in motor activity, hence we require electrodes over the parietal zone. We use Brain Products' LiveAmp 1 and Neuroelectrics' Enobio 2 systems. The former has 32 gel-based electrodes, which give more accurate readings but are more tedious to set up. The Enobio has 20 "dry" electrodes, making it easier to switch the person whose brain activity is being monitored, but it is more prone to artifacts -e.g. if the person is not sitting. Both systems are mobile and wireless.
The readings are sent to a computer. Those signals are acquired and processed by OpenViBE 3 , an open-source software platform dedicated to BCIs. OpenViBE acts as an abstraction layer between the amplifiers and Teegi, sending processed EEG signals over wifi to Teegi -for more technical details about the signal processing, see [START_REF] Frey | Teegi: Tangible EEG Interface[END_REF].
Teegi is 3D printed, designed to be attractive and to hold the various electronic components. It embeds a Raspberry Pi 3 and NiMH batteries (autonomy of approximately 2 hours). A Python script on the Raspberry Pi handles the 402 LEDs (Adafruit NeoPixel) covering the "head", which are connected to its GPIO pins. For a smoother display, the light of the LEDs is diffused by a 3 mm thick cap made of acrylic glass. Two 8-by-8 white LED matrices form the eyes. The script also commands the servomotors placed in the hands and feet, 4 Dynamixel XL320.
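The software path from the processing computer to the display can be pictured with a short sketch. Everything below (the UDP transport, the message layout, the electrode-to-LED mapping and the colour scale) is assumed for illustration; it is not the actual Teegi code, and `set_led` stands for whatever LED driver is used on the Raspberry Pi.

```python
import json
import socket

# Hypothetical mapping from electrodes to groups of LED indices on the
# semi-spherical display (the real layout covers 402 LEDs).
LED_GROUPS = {"C3": range(0, 20), "C4": range(20, 40), "Cz": range(40, 60)}

def color_for(erd_ers_pct):
    """Blue for desynchronization (ERD), red for synchronization (ERS)."""
    level = max(-40.0, min(50.0, erd_ers_pct))
    if level < 0:
        return (0, 0, int(255 * -level / 40.0))     # ERD -> shades of blue
    return (int(255 * level / 50.0), 0, 0)          # ERS -> shades of red

def run(set_led):
    """Receive per-electrode ERD/ERS% values and drive the LEDs.

    `set_led(index, rgb)` is a placeholder for the actual LED driver
    (e.g. a NeoPixel object attached to the Raspberry Pi's GPIO).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5005))      # port chosen arbitrarily for the sketch
    while True:
        payload, _ = sock.recvfrom(1024)
        values = json.loads(payload)  # e.g. {"C3": -25.0, "C4": 12.0}
        for electrode, pct in values.items():
            for idx in LED_GROUPS.get(electrode, []):
                set_led(idx, color_for(pct))
```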
Scenario
Teegi possesses two operating modes: avatar and puppet. As an avatar, it uses the EEG system and directly translates the brain states being recorded into movements and a brain activity display. As a puppet, the EEG is not used and one can interact freely with Teegi (move its limbs, close its eyes with a trigger), as a way to discover which brain regions are involved in specific motor activities or in vision.
Typically, a demonstration of Teegi starts by letting the audience play with the puppet mode. When one closes Teegi's eyes, she notices that the display changes at the "back" of the head. We then explain that the occipital area holds the primary visual cortex. When one moves the left hand, a region situated on the right part of Teegi's scalp is illuminated. When the right hand is moved, it is the opposite: LEDs situated on the left turn blue or red. We take this opportunity to explain that the body is contralaterally controlled; the right hemisphere controls the left part of the body and vice versa. Depending on the audience, we can go further and explain the phenomenon of desynchronization that takes place within the motor cortex when there is a movement, and the synchronization that occurs between neurons when it ends.
With a few intuitive interactions, Teegi is a good mediator for explaining basic neuroscience. When used as an avatar, the LED display and Teegi's servomotors are linked to the EEG system -for practical reasons, one of the demonstrators wears the EEG cap. We demonstrate that when the EEG user closes her eyes, Teegi closes his. Moreover, Teegi's hands and feet move according to the corresponding motor activity (real or imagined) detected in the EEG signal. During the whole activity, Teegi's brain areas are illuminated according to the real-time EEG readings.
Audience and Relevance
The demonstration is suitable for any audience: students, researchers, naive or expert in BCI. We would like to meet with our HCI peers to discuss the utility of tangible avatars that are linked to one's physiology. We believe that such interfaces, promoting self-investigation and anchored in reality, are a good example of how the field could contribute to education (e.g. [START_REF] Ms Horn | Comparing the use of tangible and graphical programming languages for informal science education[END_REF]), especially when it comes to rather abstract information. Teegi could also foster discussions about the pitfalls of BCI; for example, it is difficult to avoid artifacts and perform accurate brain measurements.
Overall, Teegi aims at deciphering complex phenomena as well as raising awareness about neurotechnologies. Besides scientific outreach, in the future we will explore how Teegi could be used to better learn BCIs and, in medical settings, how it could help to facilitate stroke rehabilitation.
Figure 1 :
1 Figure 1: Teegi displays brain activity in real time by means of electroencephalography. It can be used to explain to novices or to children how the brain works.
Figure 2 :
2 Figure 2: An electroencephalography (EEG) cap.
Figure 3 :
3 Figure 3: Teegi possesses a semi-spherical display composed of 402 LEDs (left) which is covered by a layer of acrylic glass (right).
http://www.brainproducts.com/
http://www.neuroelectrics.com/
http://openvibe.inria.fr/
Acknowledgments
We want to thank Jérémy Laviole and Jelena Mladenović for their help and support during this project. | 9,880 | [
"562",
"962497",
"1003562",
"1003563",
"1003564",
"946542",
"4180",
"18101"
] | [
"179935",
"487838",
"179935",
"179935",
"179935",
"179935",
"234713",
"179935",
"3102"
] |
01484636 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01484636/file/NER2017_ImprovingClassificationWithBetaRebound%20%281%29.pdf | Sébastien Rimbert
Cecilia Lindig-León
Laurent Bougrain
Profiling BCI users based on contralateral activity to improve kinesthetic motor imagery detection
Kinesthetic motor imagery (KMI) tasks induce brain oscillations over specific regions of the primary motor cortex within the contralateral hemisphere of the body part involved in the process. This activity can be measured through the analysis of electroencephalographic (EEG) recordings and is particularly interesting for Brain-Computer Interface (BCI) applications. The most common approach for classification consists of analyzing the signal during the course of the motor task within a frequency range including the alpha band, which attempts to detect the Event-Related Desynchronization (ERD) characteristics of the physiological phenomenon. However, to discriminate right-hand KMI and left-hand KMI, this scheme can lead to poor results on subjects for whom the lateralization is not significant enough. To solve this problem, we propose that the signal be analyzed at the end of the motor imagery within a higher frequency range, which contains the Event-Related Synchronization (ERS). This study found that 6 out of 15 subjects have a higher classification rate after the KMI than during the KMI, due to a higher lateralization during this period. Thus, for this population we can obtain a significant improvement of 13% in classification by taking into account the user's lateralization profile.
I. INTRODUCTION
Brain-Computer interfaces (BCI) allow users to interact with a system using brain activity modulation, mainly in electroencephalographic (EEG) signals [START_REF]Brain-Computer Interfaces: Principles and Practice[END_REF]. One major interaction mode is based on the detection of modulations of sensorimotor rhythms during a kinesthetic motor imagery (KMI), i.e., the ability to imagine performing a movement without executing it [START_REF] Guillot | Brain activity during visual versus kinesthetic imagery: an FMRI study[END_REF], [START_REF] Neuper | Imagery of motor actions: Differential effects of kinesthetic and visualmotor mode of imagery in single-trial EEG[END_REF]. More precisely, modulations of the alpha (7-13 Hz) and beta (15-25 Hz) rhythms can be observed by measuring Event-Related Desynchronization (ERD) or Synchronization (ERS). In particular, before and during an imagined movement, there is a gradual decrease of power, mainly in the alpha band. Furthermore, after the end of the motor imagery, in the beta band, there is an increase of power called ERS or post-movement beta rebound [START_REF] Pfurtscheller | Event-related EEG/MEG synchronization and desynchronization: basic principles[END_REF].
A KMI generates an activity over specific regions of the primary motor cortex within the contralateral hemisphere of the body part used in the process [START_REF] Pfurtscheller | Functional brain imaging based on ERD/ERS[END_REF]. Some BCIs are based on this contralateral activation to differentiate the cerebral activity generated by a right-hand KMI from a left-hand KMI [START_REF] Qin | ICA and Committee Machine-Based Algorithm for Cursor Control in a BCI System[END_REF]. Usually, the modulation corresponding to a user interaction is scanned in specific frequency bands such as Alpha, Beta or Alpha+Beta. This activity is mainly observed during the KMI in the 8-30 Hz band, which merges the alpha and beta bands, or after the KMI in the beta band [START_REF] Hashimoto | EEG-based classification of imaginary left and right foot movements using beta rebound[END_REF].
*This work has been supported by the Inria project BCI LIFT. 1 Neurosys team, Inria, Villers-lès-Nancy, F-54600, France. 2 Artificial Intelligence and Complex Systems, Université de Lorraine, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506. 3 Neurosys team, CNRS, LORIA, UMR 7503, Vandoeuvre-lès-Nancy, F-54506.
Detection rates for these two KMI tasks vary with subjects and could be improved. Indeed, between 15% and 30% of users are considered as BCI-illiterate and cannot control a BCI [START_REF] Allison | Could Anyone Use a BCI[END_REF]. In this article, we suggest that some of the so-called BCI-illiterate subjects have poor performance due to poor lateralization during the KMI task. Several studies showed activity only in the contralateral area [START_REF] Pfurtscheller | Motor imagery and direct braincomputer communication[END_REF] for a KMI, but other studies showed that ERD and ERS are also present in the ipsilateral area [START_REF] Fok | An eeg-based brain computer interface for rehabilitation and restoration of hand control following stroke using ipsilateral cortical physiology[END_REF], which could be a problem for BCI classification.
To our knowledge, no study compares classifier accuracy based on signals observed during the KMI versus after the KMI. In this article, we hypothesize that specific profiles of BCI users can be defined based on the contralateral activity of the ERD and the ERS. We define three BCI profiles based on accuracy: users with good accuracy i) during the KMI in the Alpha band, ii) during the KMI in the Alpha+Beta band and iii) after the KMI in the Beta band. We also show that the accuracy is linked to the absence or presence of a contralateral activity during the observed periods.
II. MATERIAL AND METHODS
A. Participants
Fifteen right-handed healthy volunteer subjects took part in this experiment (11 men and 4 women, 19 to 43 years old). They had no medical history which could have influenced the task. All experiments were carried out with the consent agreement (approved by the ethical committee of INRIA) of each participant and following the statements of the WMA declaration of Helsinki on ethical principles for medical research involving human subjects [START_REF] Medical | World medical association declaration of Helsinki: ethical principles for medical research involving human subjects[END_REF].
B. Electrophysiological data
EEG signals were recorded by the OpenViBE [START_REF] Renard | Openvibe: An open-source software platform to design, test and use brain-computer interfaces in real and virtual environments[END_REF] platform from fifteen right-handed healthy subjects at 256 Hz using a commercial REFA amplifier developed by TMS International. The EEG cap was fitted with 26 passive electrodes, namely Fp1; Fpz; Fp2; Fz; FC5; FC3; FC1; FCz; FC2; FC4; FC6; C5; C3; C1; Cz; C2; C4; C6; CP5; CP3; CP1; CPz; CP2; CP4; CP6 and Pz, re-referenced with respect to the common average reference across all channels and placed by using the international 10-20 system positions to cover the primary sensorimotor cortex.
C. Protocol
Subjects were asked to perform two different kinesthetic motor imageries, i.e. to imagine the feeling of the movement (left hand and right hand). They were seated in a comfortable chair with their arms at their sides, in front of a computer screen showing the cue that indicated the task to perform. The whole session consisted of 4 runs containing 10 trials per task, for a total of 40 trials per class.
Two panels were simultaneously displayed on the screen, which were associated from left to right, to the left hand and right hand. Each trial was randomly presented and lasted for 12 seconds, starting at second 0 with a cross at the center of each panel and an overlaid arrow indicating for the next 6 seconds the task to be performed.
D. Common Spatial Pattern
We used the algorithm called Common Spatial Pattern (CSP) to extract motor imagery features from EEG signals;
this generated a series of spatial filters that were applied to decompose multi-dimensional data into a set of uncorrelated components [START_REF] Blankertz | Optimizing spatial filters for robust EEG single-trial analysis [revealing tricks of the trade[END_REF]. These filters aim to extract elements that simultaneously maximize the variance of one class, while minimizing the variance of the other one. This algorithm has been used for all conditions: the three frequency bands (Alpha, Beta and Alpha+Beta) during the ERD (0-6s) and ERS (6-12s) time windows (Figure 2).
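As a rough illustration of this feature-extraction step, CSP is commonly implemented through a generalized eigendecomposition of the class covariance matrices. The sketch below (plain NumPy/SciPy, not the implementation used in the study) also computes the usual log-variance features that would then feed a linear classifier such as the LDA mentioned in Fig. 2; the number of retained filter pairs is an arbitrary choice here.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Spatial filters maximizing the variance of one class while
    minimizing the variance of the other.

    trials_a, trials_b -- arrays (n_trials, n_channels, n_samples)
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))       # trace-normalized covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = eigh(ca, ca + cb)
    order = np.argsort(eigvals)                # ascending eigenvalues
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                 # (2 * n_pairs, n_channels)

def log_variance_features(trials, filters):
    """Log of the normalized variance of each spatially filtered signal."""
    feats = []
    for x in trials:
        z = filters @ x                        # (n_filters, n_samples)
        v = z.var(axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)
```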
E. ERD/ERS patterns
To evaluate more precisely the modulation which appeared during the two different time windows, we computed the ERD/ERS% using the "band power method" [START_REF] Pfurtscheller | Event-related EEG/MEG synchronization and desynchronization: basic principles[END_REF] with a Matlab code. First, the EEG signal was filtered considering one of the three different frequency bands (7-13 Hz, Alpha band; 15-25 Hz, Beta band; 8-30 Hz, Alpha+Beta band) for all subjects using a 4th-order Butterworth band-pass filter. Then, the signal was squared for each trial and averaged over trials. It was then smoothed using a 250-ms sliding window with a 100 ms shifting step. Finally, the averaged power computed for each window was subtracted and then divided by the averaged power of a baseline corresponding to a 2 s window before each trial. This transformation was multiplied by 100 to obtain percentages. This process can be summarized by the following equation:
ERD/ERS% = ((x² - BL²) / BL²) × 100, (1)
where x² is the average of the squared signal over all trials and samples of the studied window, BL² is the mean of a baseline segment taken at the beginning of the corresponding trial, and ERD/ERS% is the percentage of the oscillatory power estimated for each step of the sliding window. This is done for all channels separately.
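As an illustrative numerical example (with invented values), a window whose averaged power x² equals 0.8 against a baseline BL² of 1.0 yields (0.8 - 1.0) / 1.0 × 100 = -20%, i.e. an ERD, whereas a window power of 1.5 yields +50%, i.e. an ERS.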
ERD and ERS are difficult to observe from the EEG signal. Indeed, an EEG signal expresses the combination of activities from several neuronal sources. One of the most effective and accurate techniques used to extract events is the averaging technique [START_REF] Quiroga | Single-trial event-related potentials with wavelet denoising[END_REF]. We decided to use this technique to represent the modulation of power of the Alpha, Beta and Alpha+Beta rhythms for the two KMI tasks.
III. RESULTS
A. Three BCI user profiles
Table 2 shows the best accuracy obtained for each subject on a discriminative task of left-hand and right-hand KMI according to the three profiles defined in Section I. Thus, 6 subjects have a higher accuracy looking at the Beta band after the KMI, 3 subjects have a higher accuracy looking at the Alpha band during the KMI and 6 subjects have a higher accuracy looking at the Alpha+Beta band during the KMI. The best averaged accuracy over subjects was obtained considering modulations during the KMI (in the Alpha or Alpha+Beta bands). However, looking at the individual performances, we can see that 6 subjects were better considering the Beta band after the KMI. For this population we can obtain a significant improvement of 13% in classification considering the activity after the KMI versus during the KMI. Using the best profile for each subject improves the averaged accuracy by 6%.
B. Classification rate and contralateral ERD/ERS activity
Subjects with a higher accuracy in the Beta band after the KMI (Profile 2) have a strong contralateral ERS during this period and a bilateral desynchronization during the KMI in the Alpha and Alpha+Beta bands (see subject 2, Fig. 4). This result is confirmed by the grand average map (Fig. 3), which also shows an ipsilateral ERD after the KMI. Finally, a bilateral ERD during the KMI, together with a contralateral ERS and an ipsilateral ERD after the KMI, could explain the high accuracy for these subjects. To validate our hypothesis, we show that the contralateral activity of subject 2 is higher for KMI tasks in the post-KMI period in the Beta band (Fig. 5).
Conversely, subjects with a higher accuracy in the Alpha and Alpha+Beta bands during the KMI (Profiles 1 and 3) have a strong contralateral ERD during the task (Fig. 3 and Fig. 4). After the KMI, in the three frequency bands, they have no contralateral ERS or beta rebound on the motor cortex (see subject 10, Fig. 4). Figure 6 shows that the contralateral activity of subject 10 is higher for KMI tasks during the KMI period in the Alpha band.
Fig. 6. Box plots of the power spectrum for Subject 10 (Profile 1) within the alpha band and the beta band over electrodes C3 and C4 for right-hand and left-hand KMIs. It can be noticed that there is a higher difference between the contralateral activity during the KMI period in the alpha band.
IV. DISCUSSION
Subjects carried out left-hand KMIs and right-hand KMIs. Results show that 6 out of 15 subjects had a higher classification accuracy based on the post-KMI period in the beta band. This specific accuracy is due to a higher lateralization of ERD and ERS during this period.
Our study shows results which could allow the design of an adaptive BCI based on contralateral activity over the motor cortex. The importance of BCI user profiles, especially for patients with severe motor impairments, has already been established by other studies [START_REF] Hohne | Motor imagery for severly motor-impaired patients: Evidence for brain-computer interfacing as superior control solution[END_REF]. Moreover, it appears that there can be important changes in the contralateral activity depending on the choice of the frequency band [START_REF] Ang | Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b[END_REF], [START_REF] Duprès | Supervision of timefrequency features selection in EEG signals by a human expert for brain-computer interfacing based on motor imagery[END_REF]. This is why, if we intend to design an adaptive BCI based on the specific contralateral activity of the motor cortex, it is necessary to merge these two approaches.
More subjects are necessary to refine these BCI user profiles. However, we investigated other KMIs (not detailed in this article), especially combined KMIs (i.e. right-hand and left-hand KMI together versus right-hand KMI), and it appears that some subjects have the same BCI profile.
V. CONCLUSIONS
In this article, we analyzed classification accuracies to discriminate right-hand and left-hand kinesthetic motor imageries. More specifically, we distinguished two periods (i.e., during the KMI and after the KMI) for three frequency bands (Alpha, Beta and Alpha+Beta). We defined three BCI profiles based on the accuracy of 15 subjects: users with a good accuracy i) during the KMI in the Alpha band, ii) during the KMI in the Alpha+Beta band and iii) after the KMI in the Beta band. This work showed that 6 out of 15 subjects had a higher classification accuracy after the KMI in the Beta band, due to a contralateral ERS activity on the motor cortex. Finally, taking into account the user's lateralization profile, we obtained a significant improvement of 13% in classification for these subjects. This study shows that users with a low accuracy when analyzing the EEG signals during the KMI cannot be considered as BCI-illiterate. Thus, in future work, an automatic method to profile BCI users will be developed, allowing the design of an adaptive BCI based on the best period in which to observe a contralateral activity on the motor cortex.
Fig. 1 .
1 Fig. 1. Time scheme for the 2-class setup: left-hand KMI and right-hand KMI. Each trial was randomly presented and lasted for 12 second(s). During the first 6 seconds, users were asked to perform the motor imagery indicated by the task cue. The use of each body part was indicated by the presence of arrows: an arrow pointing to the left side on the left panel for a left hand KMI, an arrow pointing to the right side on the right panel for a right hand KMI. After 6s, the task cue disappeared and the crosses were remaining for the next 6 seconds indicating the pause period before the next trial started.
Fig. 2 .
2 Fig. 2. Accuracy results obtained by a Linear Discriminant Analysis (LDA) and using the CSP algorithm as feature extraction on the 2 classes (left-hand KMI and right-hand KMI) for 15 subjects. The classification method was applied on three frequency band (Alpha, Beta and Alpha+Beta) on the ERD time window (0-6s) and on the ERS time window (6-12s).
Fig. 3 .
3 Fig. 3. Topographic map of ERD/ERS% on three frequency bands (Alpha:7-13 Hz; Beta:15-25 Hz; Alpha+Beta:8-30 Hz) for two KMI tasks (left-hand and right-hand). Profile 1 represents grand average for Subject 10, 13 and 14, who have better performance during the ERD phase (0-6 seconds) in Alpha band. Profile 2 represents grand average for Subject 2, 4, 5, 6, 7 and 12, who have better performance during the ERS phase (6-12 seconds) in Beta band. Profile 3 represents grand average for Subject 1, 3, 8, 9, 11 and 15, who have better performance during the ERD phase (0-6 seconds) in Alpha+Beta band. The red color corresponds to a strong ERS and a blue one to a strong ERD.
Fig. 4 .
4 Fig. 4. Topographic map of ERD/ERS% in three frequency bands (Alpha:7-13 Hz; Beta:15-25 Hz; Alpha+Beta:8-30 Hz) for two KMI tasks (left hand and right hand). Subject 10 is representative of Profile 1. Subject 2 is representative of Profile 2. Subject 11 is representative of Profile 3. The red color corresponds to a strong ERS and a blue one to a strong ERD.
Fig. 5 .
5 Fig. 5. Box plots of the power spectrum for Subject 2 (Profile 2) within the Alpha band and the Beta band over electrodes C3 and C4 for right-hand and left-hand KMIs. It can be noticed that there is a higher difference between the contralateral activity during the post-KMI period in the Beta band.
"1062"
] | [
"213693",
"213693",
"213693"
] |
01484673 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484673/file/978-3-642-36611-6_10_Chapter.pdf | Björn Johansson
email: bjorn.johansson@ics.lu.se
Feedback in the ERP Value-Chain: What Influence has Thoughts about Competitive Advantage
Keywords: Competitive Advantage, Enterprise Resource Planning (ERP), ERP Development, Resource-Based View, Value-Chain
Different opinions about whether an organization gains a competitive advantage (CA) from an enterprise resource planning (ERP) system exist. However, this paper describes another angle of the much reported competitive advantage discussion. The basic question in the paper concerns how thoughts about receiving competitive advantage from customizing ERPs influence feedback in ERP development. ERP development is described as having three stakeholders: an ERP vendor, an ERP partner or re-seller, and the ERP end-user or client. The question asked is: What influence has thoughts about receiving competitive advantage on the feedback related to requirements in ERP development? From a set of theoretical propositions eight scenarios are proposed. These scenarios are then illustrated from interviews with stakeholders in ERP development. From initial research, evidence for six of these eight scenarios was uncovered. The main conclusion is that thoughts about competitive advantage seem to influence the feedback, but not really in the way that was initially assumed. Instead of, as was assumed, having a restrictive view of providing feedback, stakeholders seem to be more interested in having a working feedback loop in the ERP value-chain, making the parties in a specific value-chain more interested in competing with parties in other ERP value-chains.
Introduction
Competitive Advantage (CA) and how organizations gain CA from Information and Communication Technologies (ICTs) are subjects that have been discussed extensively. Different opinions on the answer to the question as to whether ICTs enable organizations to gain CA exist. Some proponents, such as Carr [START_REF] Carr | IT Doesn't Matter[END_REF], claim that the technology is irrelevant since it can be treated as a commodity. Others, such as Tapscott [START_REF] Tapscott | The engine that drives success[END_REF], argue for its importance, while still other writers say it depends on how the technology is used and that it is how business processes are managed that is primary for gaining CA [START_REF] Smith | IT doesn't matter -business processes do: a critical analysis of Nicholas Carr's I.T[END_REF]. However, in reviewing the academic literature there seems to be a common understanding that it is not the technology as such that eventually provides organizations with CA but how the technology is managed and used [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF].
However, in this paper another perspective of CA in relation to Enterprise Resource Planning systems (ERPs) is discussed, and that is how the ERP value-chain stakeholders' interests in maintaining or improving their CA may influence feedback related to requirements of ERPs. When distinguishing between the stakeholders in the ERP value-chain and their relative positions, the subject becomes more complex. The research builds on a set of propositions suggesting what gives stakeholders in the ERP value-chain their CA. The propositions are then presented as win-lose scenarios that are discussed using preliminary findings from an empirical study.
The principal question addressed in this paper is: What influence has thoughts about receiving competitive advantage on the feedback related to requirements in ERP development?
The rest of the paper is organized as follows: The next section defines ERPs and describes the ERP value-chain and its stakeholders. Section 3 then defines CA and describes ERPs and CA from the resource-based view of the firm perspective. This is followed by a presentation of the propositions and a table suggesting CA scenarios in relation to the different stakeholders in the ERP value-chain. The penultimate section presents eight scenarios together with some preliminary findings from our own as well as extant studies. Finally, some concluding remarks together with directions for future research are presented.
ERPs, the ERP Value-Chain and its Stakeholders
ERPs are often defined as standardized packaged software designed with the aim of integrating the internal value chain with an organization's external value chain through business process integration [START_REF] Lengnick-Hall | The role of social and intellectual capital in achieving competitive advantage through enterprise resource planning (ERP) systems[END_REF][START_REF] Rolland | Bridging the Gap Between Organisational Needs and ERP Functionality[END_REF], as well as providing the entire organization with common master data [START_REF] Hedman | ERP systems impact on organizations[END_REF]. Wier et al. [START_REF] Wier | Enterprise resource planning systems and non-financial performance incentives: The joint impact on corporate performance[END_REF] argue that ERPs aim at integrating business processes and ICT into a synchronized suite of procedures, applications and metrics which transcend organizational boundaries. Kumar and van Hillegersberg [START_REF] Kumar | ERP experiences and evolution[END_REF] claim that ERPs that originated in the manufacturing industry were the first generation of ERPs. Development of these first-generation ERPs was an inside-out process proceeding from standard inventory control (IC) packages, to material requirements planning (MRP), to manufacturing resource planning (MRP II), and then eventually expanding into a software package supporting the entire organization (second-generation ERPs). This evolved software package is sometimes described as the next generation of ERP and labeled ERP II, which, according to Møller [START_REF] Møller | ERP II: a conceptual framework for next-generation enterprise systems[END_REF], could be described as the next generation of enterprise systems (ESs).
This evolution has increased the complexity not only of usage, but also in the development of ERPs. The complexity comes from the fact that ERPs are systems that are supposed to integrate the organization (both inter-organizationally as well as intra-organizationally) and its business processes into one package [START_REF] Koch | ERP-systemer: erfaringer, ressourcer, forandringer[END_REF]. It can be assumed that ERPs as well as how organizations use ERPs have evolved significantly from a focus on manufacturing to include service organizations [START_REF] Botta-Genoulaz | An investigation into the use of ERP systems in the service sector[END_REF]. These changes have created a renewed interest in developing and selling ERPs. Thus, the ERP market is a market that is in flux. This impacts not only the level of stakeholder involvement in an ERP value-chain [START_REF] Ifinedo | ERP systems success: an empirical analysis of how two organizational stakeholder groups prioritize and evaluate relevant measures[END_REF][START_REF] Somers | A taxonomy of players and activities across the ERP project life cycle[END_REF], but also how these different stakeholders gain CA from developing, selling, or using ERPs. It is clear that a user organization no longer achieves CA just by implementing an ERP [START_REF] Karimi | The Impact of ERP Implementation on Business Process Outcomes: A Factor-Based Study[END_REF][START_REF] Kocakulah | Enterprise Resource Planning (ERP): managing the paradigm shift for success[END_REF]. Fosser et al., [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF] present evidence that supports this and at the same time show that for some organizations there is a need to implement an ERP system for at least achieving competitive parity. They also claim that the way the configuration and implementation is accomplished can enhance the possibility to gain CA from an ERP system, but an inability to exploit the ERP system can bring a competitive disadvantage. This is in line with the assumption from the resource-based view that it is utilization of resources that makes organizations competitive and just implementing ERPs provides little, if any, CA [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF]. One reason for this could be that the number of organizations that have implemented ERPs has exploded. Shehab et al. [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF] claim that the price of entry for running a business is to implement an ERP, and they even suggest that it can be a competitive disadvantage if you do not have an ERP system. Beard and Sumner [START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF] argue that through reduction of costs or by increasing organizations revenue, ERPs may not directly provide organizations with CA. Instead, they suggest that advantages could be largely described as value-adding through an increase of information, faster processing, more timely and accurate transactions, and better decision-making.
In contrast to the above analysis, development of ERPs is described as a value-chain consisting of different stakeholders, as shown in Figure 1. The value-chain differs between different business models; however, it can be claimed that the presented value-chain is commonly used in the ERP market. The presented value-chain can be seen as an ERP business model that has at least three different stakeholders: ERP software vendors, ERP resellers/distributors, and ERP end-user organizations (or ERP customers). It can be said that all stakeholders in the value-chain, to some extent, develop the ERP further. What is clear, however, is that feedback from users related to requirements is of importance for future development. The software vendors develop the core of the system, which they then "sell" to their partners, who act as resellers or distributors of the specific ERP. These partners quite often make changes to the system or develop what could be labeled add-ons to the ERP core. These changes or add-ons are then implemented in order to customize the ERP for a specific customer. In some cases the customer develops the ERP system further, either by configuration or customization. At this stage of the value-chain it can be argued that the "original" ERP system could have changed dramatically from its basic design. This ERP development value-chain may result in the ERP software vendors not having as close a connection to the end-user as they would choose, and they do not always understand which functionalities are added to the end-users' specific ERP systems. Feedback in the ERP value-chain is therefore essential for future development. The stakeholders in the ERP value-chain have different roles; accordingly, they have different views of the CA gained from ERPs. One way of describing this is to use a concept from the resource-based view: core competence [START_REF] Javidan | Core competence: What does it mean in practice?[END_REF]. Developing ERPs is normally the ERP software vendor's core competence. The ERP resellers/distributors' core competence should also be closely related to ERPs, but it is unclear whether development should be their core competence. Their core competences could or should be marketing and implementing ERPs. However, this probably varies between ERP resellers/distributors; for some, development of add-ons could constitute one of their core competences. When it comes to end-user organizations, it can be said that ERP development is definitely not their core competence. However, they are involved in the ERP development value-chain, since it is crucial for an organization to have alignment between its business processes and supporting technology. To discuss this further, ERPs and CA are examined from the resource-based view of the firm in the next section.
ERP and Competitive Advantage seen from the Resource-Based View
Whether an organization (the customer in figure 1) gains CA from software applications depends, according to Mata et al. [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF], as well as Kalling [START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF], on how these resources are managed. The conclusion Mata et al. [START_REF] Mata | Information technology and sustained competitive advantage: A resource-based analysis[END_REF] draw is that among attributes related to software applications -capital requirements, proprietary technology, technical skills, and managerial software applications skills -it is only the managerial software application skills that can provide sustainability of CA. Barney [START_REF] Barney | Firm resources and sustained competitive advantage[END_REF] concludes that sources of sustained CA are and must be focused on heterogeneity and immobility of resources. This conclusion builds on the assumption that if a resource is evenly distributed across competing organizations and if the resource is highly mobile, the resource cannot produce a sustained competitive advantage as described in the VRIO framework (Table 1).
The VRIO framework aims at identifying resources with the potential to provide sustained competitive advantage by answering the questions in Table 1: is a resource or capability valuable, rare, costly to imitate, and exploited by the organization? If all questions are answered affirmatively, the specific resource has the potential to deliver sustained competitive advantage to the organization. However, to do that, it has to be efficiently and effectively organized; Barney [23] describes this as exploiting the resource. The framework, Table 1, which employs Barney's [START_REF] Barney | Firm resources and sustained competitive advantage[END_REF] notions about CA and ICT in general, has been used extensively [START_REF] Lengnick-Hall | The role of social and intellectual capital in achieving competitive advantage through enterprise resource planning (ERP) systems[END_REF][START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF][START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF]. What the conducted research implies is that CA can be difficult, but not impossible, to achieve if the resource is difficult to reproduce (e.g. through the role of history, causal ambiguity and social complexity). Fosser et al. [START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF] conclude that the real value of the resource is not the ICT in itself, but the way the managers exploit it, which is in line with the resource-based view of the firm and the value, rareness, imitability and organization (VRIO) framework.
Quinn and Hilmer [START_REF] Quinn | Strategic Outsourcing[END_REF] argue that organizations can increase CA by concentrating on resources which provide unique value for their customers. There are many different definitions of CA; however, a basic definition is that the organization achieves above-normal economic performance. If this situation is maintained, the CA is deemed to be sustained. Based on the discussion above and the statement made by Quinn and Hilmer [START_REF] Quinn | Strategic Outsourcing[END_REF], Table 2 suggests what the outcome of CA could be and how it potentially could be gained by the different stakeholders in the ERP development value-chain, including the end-user. There are some conflicts between attributes for gaining CA, such as developing competitively priced software with high flexibility, and developing software that is easy to customize while, at the same time, achieving CA by developing exclusive add-ons.
If the organization is a first mover in the sense that it is the first organization that uses this type of resource in a specific way, it can quite easily gain CA, but it will probably only be temporary. The length of time that the CA lasts depends on how hard or expensive it is for others to imitate the usage of that resource. This means that the question of how resources are exploited by the organization is the main factor when it comes to whether the CA becomes sustainable or not.
Levina and Ross [START_REF] Levina | From the vendor's perspective: Exploring the value proposition in information technology outsourcing(1)(2)[END_REF] describe the value proposition in outsourcing from a vendor's perspective. They claim that the value derived from vendors is based on their ability to develop complementary core competencies. From an ERP perspective, it can be suggested that vendors, as well as distributors (Figure 1), provide value by delivering complementary core competencies to their customers. The evolution of ERPs has made these resources easier to imitate. However, a major barrier to imitation is the cost of implementation [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF][START_REF] Davenport | Holistic management of mega-package change: The case of SAP[END_REF]. The resource-based view claims that a resource has to be rare, or heterogeneously distributed, to provide CA. In the case of ERPs, this kind of resource is not rare. There are many possibilities for organizations to implement different ERPs, and the evolution of ICT has made it feasible for more organizations to implement ERPs by decreasing the costs of using ERPs. However, as described by Barney [23] and Shehab et al. [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF], failure to implement an ERP can also lead to an organization suffering competitive disadvantages.
The CA from ERPs would probably be negated by duplication as well as by substitution. If, for instance, the ERP resellers sold their add-ons to the ERP software vendor, the duplication of that add-on would be quicker and the CA that the ERP reseller previously had would be gradually eroded. However, if they kept the add-on as "their" unique solution, other ERP resellers or ERP software vendors would probably find a substitute to the add-on or develop their own. This implies a conflict between vendors and resellers when it comes to CA and the development of "better" ERPs. This can be explained by realizing that ERP resellers/distributors often develop add-ons which have a specific functionality for solving a particular problem for their customer. This can be seen as one way of customization, where resellers/distributors use their domain knowledge about the customers' industry in addition to their knowledge about the specific customer. This, in effect, allows resellers to increase their CA and earn abnormal returns. Another way is for resellers to sell the add-on to other resellers resulting in the resellers decreasing their CA in the long run. It is probable that resellers who sell their add-on solutions to other resellers would see it as not influencing their CA since they sell the add-on to customers already using the same ERP system and this would not make ERP end-user organizations change resellers. However, the question remains whether the same would apply if the resellers sold the add-on to the software vendor. The answer would depend on the incentives that the resellers had for doing that. If the add-ons were to be implemented in the basic software, the possibility of selling the add-on to client organizations, as well as to other resellers, would disappear.
Beard and Sumner [START_REF] Beard | Seeking strategic advantage in the post-net era: viewing ERP systems from the resource-based perspective[END_REF] investigate whether a common systems approach for implementing ERPs can provide a CA. The focus of their research was to investigate what happens when a variety of firms within the same industry adopt the same system and employ almost identical business processes. Their conclusion is that ERPs are increasingly a requirement for staying competitive (i.e. competitive parity), and that ERPs can yield at most a temporary CA. From this it can be suggested that ERP end-user organizations want a "cheap" system that they can use to improve their business processes, thereby making a difference compared with other organizations in the same industry. But, since ERPs encourage organizations to implement standardized business processes (so-called "best practice", Wagner and Newell [START_REF] Wagner | Best' For Whom?: The Tension Between 'Best Practice' ERP Packages And Diverse Epistemic Cultures In A University Context[END_REF]), organizations get locked in by the usage of the system and then, depending on whether they are a first mover or not, they receive only a temporary CA. This implies that ERP end-user organizations often implement an ERP with the objective of having a "unique" ERP system. But does the ERP customer want a unique ERP system? If customers believe they have a unique business model, it is likely they would want a unique ERP system. However, they also want a system with high internal interoperability, as well as one compatible with external organizations' systems. It is likely that end-user organizations need a system that is not the same as their competitors'. This is congruent with the ERP resellers/distributors: they receive their CA by offering their customers the knowledge of how to customize an ERP using the industry's best practices and, at the same time, how to implement functionality that makes the ERP system uniquely different from their competitors' systems. Based on this discussion, the next section presents some propositions on how thoughts about achieving CA from the uniqueness of an ERP system influence the feedback on requirements in the ERP value-chain.
Propositions on how thoughts about Competitive Advantage influence requirements feedback
Proposition 1: Both resellers and end-users (encouraged by resellers) in the ERP value-chain see customization as a way of achieving Competitive Advantage (CA). This could result in resistance to providing software vendors with the information necessary for them to develop ERPs further in the direction of standardization and thereby decreasing the resellers' need to customize the system.
Kalling [START_REF] Kalling | Gaining competitive advantage through information technology: a resource-based approach to the creation and employment of strategic IT resources[END_REF] suggested that the literature on resource protection focuses, to a large extent, on imitation, trade and substitution. He proposed that development of a resource can also be seen as a protection of the resource. Referring to Liebeskind [START_REF] Liebeskind | Knowledge, strategy, and the theory of the firm[END_REF], Kalling posited that the ability to protect and retain resources arises from the fact that resources are asymmetrically distributed among competitors. The problem, according to Kalling, is how to protect more intangible resources such as knowledge. Relating this to ERPs, it follows that knowledge about a specific usage situation of an ERP would be hard to protect by legal means, such as contracts. Another way of protecting resources is, as described by Kalling, to "protect by development." This means that an organization protects existing resources by developing resources in a way that flexibility is increased by adjusting and managing present resources. In the ERP case this could be described as customizing existing ERPs, thereby sustaining CA gained from using the ERP system. Kalling describes this as a way of increasing a time advantage. From the different ERP stakeholders' perspectives, it could be argued that both protection by development, as well as trying to increase the time advantage, influences the direction in which ERPs are developed.
Proposition 2: The conflict between the different parties in the ERP value-chain over how they believe they will gain CA influences the feedback in the ERP value-chain. This tends to increase the cost of both development and maintenance of ERP systems.
The discussion and propositions so far suggest that decision-makers in organizations, and their beliefs regarding how to gain and sustain CA by customization of ERPs, are a major hindrance to the development of future ERPs. This emanates from the assumption that organizations (end-users and resellers) protect what customization they have made. The reason why they do so is based on their belief that they will sustain a CA gained by developing, selling or using customized ERPs. However, returning to Table 2 and the suggestion as to what constitutes CA for the different stakeholders, it can be concluded that there are some generic influencing factors. The conflicting goals of the three parties in the ERP value-chain increase complexity in the marketplace. From a resource-based perspective, first-mover advantage could be seen as something that influences all stakeholders and their possibility to gain, and to some extent sustain, CA. The same could also be said about speed of implementation. The main suggestion is that even if the role of history, causal ambiguity and social complexity influences the organizations' possibility to gain CA, the management skills that the organizations have are crucial.
When looking at what improves the market share of the three different stakeholders in the ERP value-chain, it can be proposed that there are no direct conflicts amongst stakeholders. The reason is that they all have different markets and different customers; therefore they do not compete directly with one another. In reality, they have each other as customers and/or providers, as described in Figure 1. It is suggested that further development of ERPs carried out by vendors could result in a higher degree of selling directly to end-customers, or in other ways of delivering ERPs to end-customers, so that the partners would be driven to insolvency and replaced by, for instance, application service provision (ASP) [START_REF] Bryson | Designing effective incentive-oriented contracts for application service provider hosting of ERP systems[END_REF][START_REF] Johansson | Deciding on Using Application Service Provision in SMEs[END_REF], software as a service (SaaS) [START_REF] Jacobs | Enterprise software as service: On line services are changing the nature of software[END_REF] or open source [START_REF] Johansson | ERP systems and open source: an initial review and some implications for SMEs[END_REF][START_REF] Johansson | Diffusion of Open Source ERP Systems Development: How Users Are Involved, in Governance and Sustainability in Information Systems[END_REF]. The first step in this direction would probably be signaled if the add-ons that partners currently deliver to end-customers were implemented in the core product. From this it can be concluded that there is a potential conflict between the different parties in the value-chain when it comes to how different stakeholders gain CA and how that influences future ERP development.
ERP software vendors become competitive if they utilize their resources to develop ERPs that are attractive to the market. ERP resellers/distributors thus need to utilize their resources to become attractive partners when implementing ERPs. Furthermore, ERP end-users need to use the ERP system so that it supports their businesses. In other words, it is how end-user organizations employ the ERP that is of importance, and it could be that having a unique ERP system (Table 1) is not as important as has previously been believed. In other words, while customization is in the interests of the resellers this may not be the case for the end users.
Millman [START_REF] Millman | What did you get from ERP, and what can you get?[END_REF] posits that ERPs are the most expensive but least value-derived implementation of ICT support. The reason for this, according to Millman, is that a lot of ERP functionality is either not used or is implemented in the wrong way. The wrong implementation results from ERPs being customized to fit the business processes, instead of changing the processes so that they fit the ERP [START_REF] Millman | What did you get from ERP, and what can you get?[END_REF]. However, according to Light [START_REF] Light | Going beyond "misfit" as a reason for ERP package customisation[END_REF], there are more reasons for customization than just the need to achieve a functionality fit between the ERP and the organization's business processes. He believes that from the vendor's perspective, customizations might be seen as fuelling the development process. From an end-user's perspective, Light describes customization as a value-added process that increases the system's acceptability and efficiency [START_REF] Light | Going beyond "misfit" as a reason for ERP package customisation[END_REF]. He further reasons that customization might occur as a form of resistance or protection against implementation of a business process that could be described as "best practice." One reason why end-user organizations get involved in ERP development is that they want to adjust their ERPs so that they support their core competences.
Proposition 3: End-users of ERPs are encouraged by ERP resellers in their basic assumption about how they gain CA. Resellers want to sustain their own CA by suggesting and delivering high levels of ERP customization.
The main conclusion so far can be formulated as follows: highly customized ERPs deliver better opportunities for CA for the resellers in the ERP value-chain, while decreasing the opportunity for both ERP software vendors and ERP end-user organizations to attain CA.
To discuss this further, in the next section we propose various scenarios supported by some early empirical data.
Scenarios describing ERP related Competitive Advantage
In this section, eight possible scenarios describing, from a CA perspective, how thoughts about receiving competitive advantage from a customized ERP system may play out are presented. The description is based on semi-structured interviews with an ERP vendor, ERP reseller consultants and ERP customers, and on recently published studies of two Norwegian companies presented by Fosser et al. [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF]. The interviews with the ERP vendor and the ERP reseller consultants were part of an on-going research project investigating requirements management. The project aimed at gaining knowledge about the factors that influence future development of ERPs. In total, 11 interviews were conducted with different executives at a major ERP vendor organization and three interviews were conducted with ERP consultants at a reseller organization. The reseller organization implements and supports different ERP systems, and one of their "products" is the ERP system that is developed by the ERP vendor. The interviews with ERP customers come from the study done by Fosser et al. [START_REF] Fosser | ERP Systems and competitive advantage: Some initial results[END_REF][START_REF] Fosser | Organisations and vanilla software: What do we know about ERP systems and competitive advantage?[END_REF] (in total 19 interviews), which was part of a research project that aimed at understanding competitive advantage in an ERP context. Citations from interviews done in these different studies are used to illustrate findings and flesh out the content of Table 3.
Scenario A: It can be said that this is probably the situation that all stakeholders in a business relationship ideally want. However, achieving a win-win-win situation in an ERP development value-chain is not straightforward. From the vendors' perspective it means that they should develop an ERP system that is so generic that the reseller could sell it to a lot of different clients to generate revenue from licenses, and at the same time so specific that the end-users could gain a CA from the usage of the standardized system. However, if the vendor manages to develop such a generic form of ERP, it is likely that end-users would demand an extensive customization effort. The result could then be that the re-seller could sell a lot of consultancy hours for adjusting the software to the business processes in the client's organization. A quotation from an ERP consultant at an ERP reseller organization describes a situation when the feedback loop worked as a win-win-win situation. The ERP consultant said: "Before the ERP vendor merged with a bigger ERP vendor we had a close relationship that actually made it possible to have requests from a specific customer implemented in the system. Now we don't know who to talk with and even if we get a contact with them (the vendor) they are not really interested". He (the ERP consultant) continues with stating that: "We developed a very interesting add-on for a customer, that we then tried to get implemented in the base system but it was impossible. So, we started to sell this add-on to other ERP resellers (of the same system). We did so because we think it will benefit us in the long run if customers feel that the system is interesting -In that way we will probably increase our market".
If this continues for some time, it probably ends with a situation as in Scenario E, in which the vendor loses and the re-seller and clients win. We see this as a possibility if the re-sellers spend so much time with clients developing ERP systems offering CA, generating many consultancy hours, but at the cost of not marketing the base ERP system to new clients. Our early data gathering suggests this scenario is common among the stakeholders. One example supporting this situation is the following statement from an executive at the ERP vendor (the same ERP vendor that was mentioned above by the consultant at the ERP reseller).
The executive at the ERP vendor said that: "We don't have enough knowledge about how the system is used and what the user of the system actually wants to have. This makes that future development of the system is extremely hard and it is a fact that there are problems with requirements management in ERP development" Director of Program Management.
Comparing the citation from the consultant with the one from the vendor, there seems to be a contradiction: the consultant finds it hard to provide feedback, while the vendor perceives a lack of feedback. From the CA perspective this is hard to explain; however, what can be said is that this specific consultant sees an opportunity to increase its CA by providing feedback to the vendor. The reason why it does not happen is probably related to a lack of resources at the vendor, or to the lack of a clear relationship between the parties. One way for the vendor to deal with this is to establish a closer relationship with some ERP resellers -for example through a partner program giving some benefits to resellers that have a close relationship with the vendor. However, this demands that they, for instance, follow a specific process for implementation of the ERP.
This could then result in the situation described in scenario B, in which both the vendor and the re-seller have a win-win situation while the client has a disadvantaged position especially if they do not customize the software to the extent whereby they gain CA. The following quotations from ERP customers describe this situation. "An ERP system is something you just need to do business today. But the way we have implemented it and configured it has given us a competitive advantage." Assistant Director of Logistics.
"I believe that it is mostly a system you need to have. But an ERP system can be utilized to achieve a competitive advantage, if you are skillful." Senior Consultant.
"It keeps us on the same level as our competitors. We are focusing on quality products. That is our competitive advantage. An ERP system cannot help us with that". The Quality Manager.
"I don't think we have got any competitive advantage. All our competitors are running such a system, so it is just something we need to have. It is actually a competitive disadvantage because we have not managed to get as far as the others, with the system." Managing Director.
All these citations describe the situation in which the customers see ERP implementation as a necessity to avoid competitive disadvantage. To some extent it can be said that they understand customization as something you do to gain CA, which implies that they all are interested in what other customers do, and that could be seen as something that hinders feedback, resulting in the scenario B situation. Another reason why the situation could result in scenario B is that it has been shown that if clients customize to a high extent, the long-term maintenance costs of the ERP system become so great that the benefits are lost. The following statement from a developer at the ERP vendor supports scenario B.
"It is clearly seen that when a customer implement the ERP system for the first time they customize a lot. When they then upgrade with a new version the extensive customization is much less and when they upgrade with version 3 and/or 4 they hardly don't do any customization. The reason is must likely that they have discovered that customization cost a lot at the same time as they have discovered that they are not that unique that they thought when implementing the first version" Program Manager A.
In the long run this could also result in scenario F. Scenario F describes the situation where the vendor starts to lose market share because clients have problems achieving CA resulting in a bad reputation for the ERP product. The situation of less customization and less demand on add-ons could also result in scenario C. In scenario C, we see a vendor by-passing the reseller and working directly with the client enabling them both to gain a CA. This is somewhat supported by an executive at the ERP vendor, who says: "However, there will probably be a day when the partners not are needed -at least for doing adjustments of ERPs. This is not a problem since the rules of the game always change. And there will still be a need for partners. The partners see themselves as … they understand the customer's problem." Program Manager B.
Scenario D is an interesting scenario since it is only the vendor that ends up in a winning position. It could be explained by the fact that if the vendor manages to develop a generic ERP system, and thereby gains a more or less monopoly status, it will have the possibility to sell many licenses. It also shows the situation in which the vendor does not seem to be dependent on feedback from customers in the development of the ERP. A quotation from an ERP customer describes this clearly: "I try to exploit the available tools in SAP without investing money in new functionality. There are a lot of possibilities in the ERP systems, e.g. HR, which we are working with to utilize our resources more efficiently." Director of Finance.
It could also be that the client needs to buy and implement the ERP since it is more or less a necessity to implement an ERP to obtain competitive parity. This means that ERP end-users use the ERP as standardized software and do not feel that providing feedback to the vendor is of importance.
Scenario G is probably a situation that the vendor would not allow to continue. However, from the perspective of an ERP customer, one motive for restricting the feedback could be justified by this citation: "We have a unique configuration of the system that fits our organization and this gives us a competitive advantage. The IS department is very important in this context." Assistant Director of Logistics. Another citation suggests that providing feedback could be a way of gaining competitive advantage: "I actually hold lectures about how we do things in our organization. I tell others about the big things, but I think it is the small things that make us good. All the small things are not possible to copy. I think it is a strength that we have a rumor for being good at ERP and data warehouse. It gives [us] a good image. Though, we are exposed to head hunters from other organizations." Director of IS.
The empirical data so far did not provide any evidence for scenario G or scenario H. Regarding scenario H, a "prisoner's dilemma game" [START_REF] Tullock | Adam Smith and the Prisoners' Dilemma[END_REF] suggests that it could happen that all parties lose; however, research on the prisoner's dilemma also makes clear that if the "game" is repeated, the involved parties start to cooperate [START_REF] Tullock | Adam Smith and the Prisoners' Dilemma[END_REF]. It can therefore more or less be assumed that, in the ERP value-chain case, the stakeholders will in the long run work in the direction of scenario A. This also means, to some extent, that none of the scenarios in which clients lose (B, D, F and H) will be sustainable in the long run.
Concluding remarks and future research
Using an innovative value-chain analysis considering the ERP vendor, reseller and client, we developed eight scenarios to examine our research question: "What influence do thoughts about receiving competitive advantage have on the feedback related to requirements in ERP development?" The preliminary empirical research found evidence to support six of the eight scenarios. As the other two were the least likely to occur, the findings encourage further systematic research in the future to flesh out the findings and to look particularly at ERP acquisitions in a variety of settings. As ERP systems are ubiquitous in modern corporations, it is vital that managers consider the value such systems offer in the long term. Furthermore, the analysis offers a more in-depth understanding of the dynamics of the ERP development value-chain, its complexity and its impact on competitive advantage for the different stakeholders.
However, returning to the question of how thoughts about CA influence feedback in ERP development, it can be stated that they do seem to influence the feedback, but not quite in the way that was initially assumed. Instead of restricting the feedback they provide, as was assumed, stakeholders seem to be more interested in having a working feedback loop in the ERP value-chain, making the parties in a specific value-chain more interested in competing with parties in other ERP value-chains.
For the future, it will be interesting also to try to reveal the patterns that emerge in the value chain and investigate which scenarios are more sustainable in the long-term and how clients can position themselves more effectively to improve their competitive advantage.
Figure 1. Stakeholders in the ERP value-chain
Table 1. The VRIO framework [START_REF] Barney | Gaining and sustaining competitive advantage[END_REF]
Is a resource or capability…
| Valuable? | Rare? | Costly to Imitate? | Exploited by Organisation? | Competitive Implications | Economic Performance |
| No | --- | --- | No | Competitive Disadvantage | Below Normal |
| Yes | No | --- | --- | Competitive Parity | Normal |
| Yes | Yes | No | --- | Temporary Competitive Advantage | Above Normal |
| Yes | Yes | Yes | Yes | Sustained Competitive Advantage | Above Normal |
Table 2. ERP value-chain stakeholders and competitive advantage
| Stakeholder | Outcome of Competitive Advantage | Gained through |
| ERP Software Vendor | High level of market share in the ERP market (e.g. the number of software licenses sold) | Competitively priced software; Highly flexible software; Ease of implementing the software; Ease of customizing the software |
| ERP Resellers/distributor | High level of market share in the ERP consultancy market (e.g. consultancy hours delivered) | Knowledge about the customer's business; High level of competence in development of add-ons that are seen as attractive by the ERP end-user organization; High level of competence at customization |
| ERP end-user organization | High level of market share in the customer-specific market (e.g. products or services sold; rising market share; lower costs) | Being competitive in its own market; Implementing an ERP system that supports its business processes; Implementing an ERP system that is difficult for competitors to reproduce |
Table 3. Scenarios describing win or lose relationships
| Scenario | Vendor | Re-Seller | Client (end user) |
| A | Win | Win | Win |
| B | Win | Win | Lose |
| C | Win | Lose | Win |
| D | Win | Lose | Lose |
| E | Lose | Win | Win |
| F | Lose | Win | Lose |
| G | Lose | Lose | Win |
| H | Lose | Lose | Lose |
"1001319"
] | [
"344927"
] |
https://inria.hal.science/hal-01484675/file/978-3-642-36611-6_12_Chapter.pdf
Rogerio Atem De Carvalho
Björn Johansson
email: bjorn.johansson@ics.lu.se
Towards More Flexible Enterprise Information Systems
Keywords: Enterprise Information Systems, Domain Specific Languages, Design Patterns, Statechart Diagrams, Natural Language Processing
The aim of this paper is to present the software development techniques used to build the EIS Patterns development framework, a testbed for a series of techniques that aim at giving more flexibility to EIS in general. Some of these techniques are customizations or extensions of practices created by the agile software development movement, while others represent new proposals. This paper also aims at helping to promote more discussion around EIS development questions, since most research papers in the EIS area focus on deployment, IT, or business-related issues, leaving the discussion of development techniques under-addressed.
Introduction
In Information Systems, flexibility can be understood as the quality of a given system of being adaptable in a cost- and effort-effective and efficient way. Although it is usual to hear from Enterprise Information Systems (EIS) vendors that their systems are highly flexible, practice has shown that customizing this type of system is still a costly task, mainly because they are still based on relatively old software development practices and tools. In this context, the EIS Patterns framework1 is a research project which aims at providing a testbed for a series of relatively recent techniques nurtured in the Agile methods communities and ported to the EIS arena.
The idea of suggesting and testing new ways for developing EIS was born from accumulated research and experience on more traditional methods, such as Model Driven Development (MDD), on top of the open source ERP5 system [START_REF] Smets-Solanes | ERP5: A Next-Generation, Open-Source ERP Architecture[END_REF]. ERP5 represents a fully featured and complex EIS core, making it hard to test the ideas here presented in their pure form, thus it was decided to develop a simpler framework to serve as a proof of concept of proposed techniques.
This paper is organized as follows: the next topic summarizes the series of papers that forms the timeline of research done on top of ERP5; following this, the proposed techniques are presented, and finally some conclusions and possible directions are listed.
Background
In order to understand this proposal, it is necessary to know the basis from where it was developed, which is formed by a series of approaches developed on top of ERP5. Following the dominant tendency of the past decade, which was using MDD, the first approach towards a formalization of a deployment process for ERP5 was to develop a high-level modeling architecture and a set of reference models [START_REF] Campos | Modeling Architecture and Reference Models for the ERP5 Project[END_REF], as well as the core of a development process [START_REF] Carvalho | A Development Process Proposal for the ERP5 System[END_REF]. This process evolved to the point of providing a complete set of integrated activities, covering the different abstraction levels involved by supplying, according to the Geram [START_REF]IFIP -IFAC GERAM: Generalized Enterprise Reference Architecture and Methodology, IFIP -IFAC Task Force on Architectures for Enterprise Integration[END_REF] framework, workflows for Enterprise, Requirements, Analysis, Design, and Implementation tasks [START_REF] Monnerat | Enterprise Systems Modeling: the ERP5 Development Process[END_REF].
Since programming is the task that provides the "real" asset in EIS development, which is the source code that reflects the business requirements, programming activities must also be covered. Therefore, in "ERP5: Designing for Maximum Adaptability" [START_REF] Carvalho | ERP5: Designing for Maximum Adaptability[END_REF] it is presented how to develop on top of the ERP5's documentcentric approach, while in "Using Design Patterns for Creating Highly Flexible EIS" [START_REF] Carvalho | Using Design Patterns for Creating Highly Flexible Enterprise Information Systems[END_REF], the specific design patterns used to derive concepts from the system's core are presented. Complimentary, in "Development Support Tools for ERP" [START_REF] Carvalho | Development Support Tools for Enterprise Resource Planning[END_REF] two comprehensive sets of ERP5's development support tools are presented: (i) Productrelated tools that support code creation, testing, configuration, and change management, and (ii) Process-related tools that support project management and team collaboration activities. Finally, in "ERP System Implementation from the Ground up: The ERP5 Development Process and Tools" [START_REF] Carvalho | ERP System Implementation from the Ground up: The ERP5 Development Process and Tools[END_REF], the whole picture of developing on top of ERP5 is presented, locating usage of the tools in each development workflow, and defining its domain-specific development environment (DSDE).
Although it was possible to develop a comprehensive MDD-based development process for the ERP5 framework, the research and development team responsible for proposing this process developed at the same time an Enterprise Content Management solution [START_REF] Carvalho | An Enterprise Content Management Solution Based on Open Source[END_REF] and experimented with Agile techniques for both managing the project and constructing the software. Porting this experimentation to the EIS development arena lead to the customization of a series of agile techniques, as presented in "Agile Software Development for Customizing ERPs" [START_REF] Carvalho | Agile Software Development for Customizing ERPs[END_REF].
The work on top of ERP5 provided a strong background, in both research and practical terms, sufficient to identify the types of relatively new software development techniques that could be used in other EIS development projects. Moreover, this exploration of a real-world, complex system has shown that further advances could be obtained by going deeper into some of the techniques used, as well as by applying them in a lighter framework, where experimentation results could be quickly obtained.
Enters EIS Patterns
EIS Patterns is a simple framework focused on testing new techniques for developing flexible EIS. It was conceived with Lego sets in mind: very basic building blocks that can be combined to form different business entities. Therefore, it was built around three very abstract concepts, each one with three subclasses representing two "opposite" derived concepts and an aggregator of these first two, forming the structure presented in Fig. 1. Fig. 1 is interpreted as follows:
Resource: is anything that is used for production.
-Material: product, component, tool, document, raw material etc.
-Operation: human operation and machine operation, as well as their derivatives.
-Kit: a collective of material and/or immaterial resources. Ex.: bundled services and components for manufacturing.
Node: is an active business entity that transforms resources.
-Person: employee, supplier's contact person, drill operator etc.
-Machine: hardware, software, drill machine, bank account etc.
-Organization: a collective of machines and/or persons, such as manufacturing cell, department, company, government.
Movement: is a movement of a Resource between two Nodes.
-Transformation: is a movement inside a node, in other words, the source and destination are the same node; it represents the transformation of a resource by machine or human work, such as drilling a metal plate or writing a report.
-Transportation: is a movement of resources between two nodes, for example, moving a component from one workstation to another, sending an order from the supplier to the customer.
-Process: a collective of transformations and/or transportations, in other words, a business process.
Besides the obvious "is a" and "is composed by" relationships presented in the ontology in Fig. 1, a chain of relationships denotes how business processes are implemented: "a Process coordinates Node(s) to perform Operation(s) that operates on Work Item(s)". The semantic meaning of this chain is that process objects control under which conditions node objects perform operations in order to transform or transport resources. This leads to another special relationship, "a Movement encapsulates an Operation", which means that a movement object encapsulates the execution of an operation. In practical terms, an operation is the abstract description of a production operation, which is implemented by one or more node objects' methods. When this operation is triggered by a process object, it defers the actual execution to a pre-configured node object's method, and this execution is logged by a movement object, which stores all parameters, date and time, and the results of this execution. Therefore, an operation is an abstract concept which can be configured to defer execution to different methods, from different objects, in accordance with the intents of a specific business process instance. In other words, a business process abstraction keeps its logic, while specific results can be obtained by configuration.
Although this execution deference can appear complex, it is a powerful mechanism which allows a given business process model to be implemented in different ways, according to different modeling-time or even runtime contexts. In other words, the same process logic can be implemented in different ways, for different applications, thus leveraging the power of reuse.
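A minimal sketch of this deference mechanism, in Python, is given below. The class and method names (Operation, Movement, Process, configure, execute, run, drill) are illustrative assumptions and do not reproduce the framework's actual API; the point is only that the abstract operation delegates execution to whichever node method it was configured with, and that the resulting movement object logs the call.

```python
from datetime import datetime


class Movement(object):
    """Logs one execution of an operation: parameters, timestamp and result."""
    def __init__(self, operation, node, params, result):
        self.operation = operation
        self.node = node
        self.params = params
        self.result = result
        self.when = datetime.now()


class Operation(object):
    """Abstract production operation; defers execution to a configured node method."""
    def __init__(self, name):
        self.name = name
        self._method_name = None

    def configure(self, method_name):
        # bind the abstract operation to a concrete node method, per process instance
        self._method_name = method_name

    def execute(self, node, **params):
        result = getattr(node, self._method_name)(**params)
        return Movement(self, node, params, result)


class Process(object):
    """Coordinates nodes performing operations; keeps the resulting movements."""
    def __init__(self):
        self.movements = []

    def run(self, operation, node, **params):
        movement = operation.execute(node, **params)
        self.movements.append(movement)
        return movement


class Machine(object):
    def drill(self, plate):
        return "drilled %s" % plate


transform = Operation("transform material")
transform.configure("drill")      # same abstract operation, different binding per context
process = Process()
process.run(transform, Machine(), plate="metal plate 42")
```

In this reading, reconfiguring the operation, rather than rewriting the process logic, is what adapts the same business process template to a new context.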
It is important to note that in this environment, Processes control the active elements, the Nodes, which in turn operate on top of the passive ones, the Resources. In programming terms, this means that processes are configurable, nodes are extended, and resources are typically "data bag" classes. Therefore, extending the nodes for complying with new business requirements becomes the next point where flexibility must take place.
Using Decorators to Create a Dynamic System
Usually, class behavior is extended by creating subclasses, however, this basic technique can lead to complex, hard to maintain, and even worse, hard-coded class hierarchies. One of the solutions to avoid this is to use the Decorator design pattern [START_REF] Gamma | Design Patterns -Elements of Reusable Object-Oriented Software[END_REF], taking into account the following matters: -While subclassing adds behavior to all instances of the original class, decorating can provide new behavior, at runtime, for individual objects. At runtime means that decoration is a "pay-as-you-go" approach to adding responsibilities.
-Using decorators allows mix-and-matching of responsibilities.
-Decorator classes are free to add operations for specific functionalities.
-Using decorators facilitates system configuration; however, typically, it is necessary to deal with lots of small objects. Hence, by using decorators it is possible, during a business process realization, to associate and/or dissociate different responsibilities with node objects -in accordance with the process logic -providing two main benefits: (i) the same object, with the same identifier, is used during the whole business process, so there is no need to create different objects of different classes, and (ii) given (i), auditing is facilitated, since it is not necessary to follow different objects; instead, the decoration of the same object is logged. Moreover, it is possible to follow the same object during its whole life-cycle, including through different business processes: after an object is created and validated -meaning that it reflects a real-world business entity -it will keep its identity forever 2.
An important remark is that decorators must keep a set of rules of association, which is responsible for allowing or prohibiting objects to be assigned to new responsibilities. If a given object respects the rules of association of a given decorator, it can be decorated by it. At this point, defining a flexible way of ensuring contracts between decorators and decorated objects is of interest.
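The sketch below illustrates one possible reading of this runtime decoration combined with a rule of association; the names (Node, EmployeeDecorator, can_decorate, decorate, report_hours) are hypothetical and the attachment of responsibilities is simplified, so it should be taken as an illustration of the idea rather than as the framework's implementation.

```python
class Node(object):
    """A person, machine or organization; gains responsibilities at runtime."""
    def __init__(self, name):
        self.name = name
        self.skills = set()


class EmployeeDecorator(object):
    """Adds employee responsibilities to a node object, keeping its identity."""
    @staticmethod
    def can_decorate(node):
        # rule of association: only nodes that are not already employees qualify
        return "employee" not in node.skills

    def decorate(self, node):
        if not self.can_decorate(node):
            raise ValueError("association rule violated for %s" % node.name)
        node.skills.add("employee")
        # attach the new operation to this individual object only
        node.report_hours = lambda hours: "%s reported %dh" % (node.name, hours)
        return node


alice = Node("Alice")
EmployeeDecorator().decorate(alice)
print(alice.report_hours(8))   # same object, new responsibility, fully auditable
```

Because the object itself never changes identity, decorating and un-decorating it over time leaves a single, continuous audit trail.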
Should-dsl: a language for contract checking
Although Should-dsl was originally created as a domain specific language for checking expectations in automated tests [START_REF] Tavares | A tool stack for implementing Behavior-Driven Development in Python Language[END_REF], in the EIS Patterns framework it is also used to provide highly readable contract verifiers, such as:
associated |should| be_decorated_by(EmployeeDecorator)
In the case above the rule is self-explanatory: "the associated object should be decorated by the Employee Decorator", meaning that for someone to get a manager's skills he or she should have the basic employee's skills first. Besides being human readable, these rules are queryable: for a given decorator it is possible to obtain its rules, as well as the reverse -for a given node object, it is possible to identify which decorators it can use. Query results, together with the analysis of textual requirements using Natural Language Processing, are used to help configure applications built on top of the framework.
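should-dsl itself already provides the |should| infix operator; the snippet below is not the library's implementation, but a minimal imitation of such an infix check built with operator overloading, showing how a rule of association like the one above can be made both executable and queryable. All class and matcher names here are invented for illustration.

```python
class _Should(object):
    """Minimal imitation of an infix |should| operator (not the real should-dsl)."""
    def __init__(self, value=None):
        self.value = value

    def __ror__(self, value):          # left side:  candidate |should|
        return _Should(value)

    def __or__(self, matcher):         # right side: ... | be_decorated_by(X)
        if not matcher(self.value):
            raise AssertionError("association contract violated")
        return True


should = _Should()


def be_decorated_by(decorator_class):
    # a matcher: checks the candidate against the decorator's rule of association
    return lambda candidate: decorator_class.can_decorate(candidate)


class EmployeeDecorator(object):
    rules = "candidate must represent a person node"

    @staticmethod
    def can_decorate(candidate):
        return getattr(candidate, "is_person", False)


class PersonNode(object):
    is_person = True


PersonNode() |should| be_decorated_by(EmployeeDecorator)
```

Keeping the rule both human-readable and attached to the decorator (here as the rules attribute) is what makes it queryable later on.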
Using Natural Language Processing to Find Candidate Decorators
It is also possible to parse textual requirements, find the significant terms and use them to query the decorators' documentation, so the framework can suggest possible decorators to be used in accordance with the requirements. Decorators' methods that represent business operations -the components of business processes -are specially tagged, making it possible to query their documentation as well as to obtain their category. Categories are used to classify these operations; for instance, it is possible to have categories such as "financial", "logistics", "manufacturing" and so on. In that way, the framework can suggest, from its base of decorators, candidates for the users' requirements.
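One plausible shape for this suggestion step is sketched below, using only the Python standard library; the tagging convention (a category attribute set by an operation decorator) and the keyword-overlap matching are assumptions for illustration, not the framework's actual NLP pipeline.

```python
import re


def operation(category):
    """Tags a decorator method as a business operation of a given category."""
    def tag(method):
        method.category = category
        return method
    return tag


class PaymentDecorator(object):
    @operation("financial")
    def pay_supplier(self, invoice):
        """Registers the payment of a supplier invoice."""


def keywords(text):
    return set(re.findall(r"[a-z]+", text.lower())) - {"the", "a", "an", "of", "to"}


def suggest(requirement, decorators):
    """Ranks decorators by overlap between requirement terms and operation docstrings."""
    wanted = keywords(requirement)
    ranked = []
    for dec in decorators:
        ops = [m for m in vars(dec).values() if hasattr(m, "category")]
        hits = sum(len(wanted & keywords(m.__doc__ or "")) for m in ops)
        if hits:
            ranked.append((hits, dec.__name__))
    return [name for _, name in sorted(ranked, reverse=True)]


print(suggest("the system must register supplier invoice payment", [PaymentDecorator]))
```

A more elaborate implementation could replace the keyword overlap with stemming or a proper NLP toolkit, but the querying of tagged, documented operations would stay the same.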
A Domain-Specific and Ubiquitous Language for Modeling Business Process
The ontology presented in Fig. 1, although simple, is abstract enough to represent entities involved in any business process. Moreover, by appropriately using a statechart diagram, it is possible to use a single model to describe a business process, define active entities, as well as to simulate the process.
In order to better describe this proposal, Fig. 2 shows a simple quotation process. Taking into account that a class diagram was used to represent the structural part of the business process 3, by explicitly declaring the objects responsible for the transitions it is possible to identify the active elements of the process, all of the Person type: sales_rep, verifier, approver, and contractor, as well as how they collaborate to perform the business process, by attaching the appropriate method calls. Additionally, in some states, a method is declared with the "/do" tag, to indicate that a simulation can be run when the process enters these states.
To run these state machine models, Yakindu (www.yakindu.org) could be used. By adapting the statechart execution engine, it is possible to run the model while making external calls to automated tests, giving the user the view of the live system running, as proposed by Carvalho et al. [START_REF] Carvalho | Business Language Driven Development: Joining Business Process Models to Automated Tests[END_REF].
Fig. 2. A simple quotation process using the proposed concepts.
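Independently of the modeling tool, such a process model can also be rendered textually. The sketch below is an illustrative guess at a dictionary-based template for the quotation process of Fig. 2; the state and operation names are invented, and only the actors (sales_rep, verifier, approver, contractor) come from the model described above.

```python
# A textual sketch of a process template for the quotation process of Fig. 2.
# States and operation names are illustrative, not the paper's exact model.
QUOTATION_TEMPLATE = {
    "initial": "draft",
    "transitions": [
        # (from_state,  operation/event,      responsible node, to_state)
        ("draft",       "fill_quotation",     "sales_rep",      "filled"),
        ("filled",      "verify_stock",       "verifier",       "verified"),
        ("verified",    "approve_quotation",  "approver",       "approved"),
        ("approved",    "sign_contract",      "contractor",     "closed"),
    ],
}


def next_state(template, state, event):
    """Returns (new_state, responsible) or raises if the event is not allowed."""
    for source, operation, responsible, target in template["transitions"]:
        if source == state and operation == event:
            return target, responsible
    raise ValueError("event %r not allowed in state %r" % (event, state))


state = QUOTATION_TEMPLATE["initial"]
state, actor = next_state(QUOTATION_TEMPLATE, state, "fill_quotation")
print(state, actor)   # filled sales_rep
```

The same structure can drive a simulation run (walking the transitions and calling test stubs) or configure the live process objects, which is what keeps the model and the running system aligned.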
An Inoculable Workflow Engine
Workflow engines provide the basis for the computational realization of business processes. Basically, there are two types of workflow engines: (i) associated to application development platforms or (ii) implemented as software libraries.
EIS Patterns uses Extreme Fluidity (xFluidity), a variation of the type (ii) workflow engine, developed as part of the framework. xFluidity is an inoculable (and expellable) engine that can be injected into any Python object, turning it workflow-aware. Symmetrically, it can be expelled from the object, turning the object back to its initial structure when necessary. It was developed in this way because type (i) engines force you to use a given environment to develop your applications, while type (ii) engines force you to use specific objects to implement workflows, most of the time creating a mix of application-specific code and workflow-specific statements. With xFluidity it is possible to define a template workflow and insert the code necessary to make it run inside the business objects, while keeping the programming style, standards, naming conventions, and patterns of the development team. In EIS Patterns, xFluidity is used to configure Process objects, making them behave as business process templates.
Currently xFluidity is a state-based machine; however, it could be implemented using other notations, such as Petri Nets. In that case, no changes would be necessary in the inoculated objects, given that these objects do not need to know which notation is in use; they simply follow the template.
Conclusions and Further Directions
This paper briefly presents a series of techniques that can be applied to make EIS more flexible, including the use of dynamic languages 4 . Although the EIS Patterns framework is a work in progress, it is developed on top of research and practical experience obtained during the development of the ERP5 framework.
This experience led to the use of an abstract core to represent all concepts, while providing flexibility through the use of the Decorator pattern. On top of this technique, Natural Language Processing (NLP) and automated contract checking are used to further improve reuse and, as a side effect, enhance system documentation, given that developers are forced to provide code documentation as well as to define association contracts through should-dsl, which is a formal way of defining the requirements for the use of decorators to expand the functionality of Node objects.
The integrated use of an inoculable workflow engine, a domain-specific and ubiquitous language, and should-dsl to check association contracts is innovative and provides more expressiveness to the models and the source code, through the use of a single language for all abstraction levels, which reduces the occurrence of translation errors across these levels. This is an important point: more expressive code facilitates change and reuse, thus increasing flexibility.
Further improvements include the development of a workflow engine based on BPMN, in order to make the proposal more adherent to current trends, and advances in the use of NLP algorithms to ease the identification and reuse of concepts.
Fig. 1. Ontology representing the EIS Patterns core. Fig. 1 is interpreted as follows:
Resource: anything that is used for production.
-Material: product, component, tool, document, raw material etc.
-Operation: human operation and machine operation, as well as their derivatives.
-Kit: a collective of material and/or immaterial resources, e.g. bundled services and components for manufacturing.
Node: an active business entity that transforms resources.
-Person: employee, supplier's contact person, drill operator etc.
-Machine: hardware, software, drill machine, bank account etc.
-Organization: a collective of machines and/or persons, such as a manufacturing cell, department, company, or government.
Movement: a movement of a Resource between two Nodes.
Initially discussed at the EIS Development blog through a series of posts entitled EIS Patterns, starting in December (http://eis-development.blogspot.com).
A more complete discussion on using decorators, with examples, can be found at http://eis-development.blogspot.com.br/2011/03/enterprise-information-systems-patterns_09.html
For a discussion on this see http://eis-development.blogspot.com.br/2010/09/is-java-betterchoice-for-developing.html | 19,289 | [
"1003572",
"1001319"
] | [
"487851",
"487852",
"344927"
] |
01484676 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484676/file/978-3-642-36611-6_13_Chapter.pdf | D Mansilla
email: tina.dmansilla@educ.ar
Pollo - Cattaneo
P Britos
García -Martínez
A Proposal of a Process Model for Requirements Elicitation in Information Mining Projects
Keywords: Process, elicitation, information mining projects, requirements
The problem addressed by an information mining project is transforming the existing business information of an organization into useful knowledge for decision making. Consequently, the traditional software development process for requirements elicitation cannot be used as-is to acquire the information required by information mining processes. In this context, a requirements gathering process for information mining projects is presented, emphasizing the following phases: conceptualization, business definition, and information mining process identification.
Introduction
Traditional Software Engineering offers tools and processes for software requirements elicitation which are used for creating automated information systems. Requirements are referred to as a formal specification of what needs to be developed; they are descriptions of the system behaviour [START_REF] Sommerville | Requirements Engineering: A Good Practice Guide[END_REF].
Software development projects usually begin by obtaining an understanding of the business domain and the rules that govern it. Understanding the business domain helps to identify requirements at the business level and at the product level [START_REF] Lauesen | Software Requirements. Styles and Techniques[END_REF], which define the product to be built considering the context where it will be used. Models such as the Context Diagram, Data Flow Diagrams and others are used to graphically represent the business process under study and serve as validation tools for these business processes. A functional analyst is oriented towards gathering data about the inputs and outputs of the software product to be developed and how that information is transformed by the software system.
Unlike software development projects, the problem addressed by information mining projects is to transform the existing information of an organization into useful knowledge for decision making, using analytical tools [START_REF] Pollo-Cattaneo | Proceso de Educción de Requisitos en Proyectos de Explotación de Información[END_REF]. Models for requirements elicitation and project management, by focusing on the software product to be developed, cannot be used directly to acquire the information required by information mining processes. In this context, it is necessary to transform the existing experience in the use of requirements elicitation tools in the software development domain into knowledge that can be used to build the models employed in business intelligence projects and in information mining processes [START_REF] Pollo-Cattaneo | Ingeniería de Proyectos de Explotación de Información[END_REF] [5] [6].
This work describes the problem (section 2) and presents a proposal for a process model for requirements elicitation in information mining projects (section 3), emphasizing three phases: Conceptualization (section 3.1), Business Definition (section 3.2) and Information Mining Process Identification (section 3.3). Then, a case study is presented (section 4), and conclusions and future lines of work are proposed (section 5).
State of current practice
Currently, several disciplines have been standardized in order to incorporate best practices learned from experience and from new discoveries.
The discipline of project management, for example, has generated a body of knowledge where the different process areas of project management are defined. Software engineering specifies different software development methodologies, like the software requirements development process [START_REF] Sommerville | Requirements Engineering: A Good Practice Guide[END_REF]. On the other hand, related to information mining projects, there are some methodologies for developing information mining systems, such as CRISP-DM [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF], P3TQ [START_REF] Pyle | Business Modeling and Business Intelligence[END_REF], and SEMMA [START_REF]SAS Enterprise Miner: SEMMA[END_REF].
In the field of information mining there is no single process for managing projects [START_REF] Pollo-Cattaneo | Metodología para Especificación de Requisitos en Proyectos de Explotación de Información[END_REF]. However, there are several approaches that attempt to integrate the knowledge acquired in traditional software development projects, like the Kimball Lifecycle [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF] and a project management framework for medium and small organizations [START_REF] Vanrell | Modelo de Proceso de Operación para Proyectos de Explotación de Información[END_REF]. In [START_REF] Britos | Requirements Elicitation in Data Mining for Business Intelligence Projects[END_REF] an operative approach regarding information mining project execution is proposed, but it does not detail which elicitation techniques can be used in a project.
The problem is that the previously mentioned approaches emphasize work methodologies associated with information mining projects and do not adapt traditional software engineering requirements elicitation techniques. In this situation, it is necessary to understand which activities should be carried out and which traditional elicitation techniques can be adapted for use in information mining projects.
Proposed Elicitation Requirement Process Model
The proposed process defines a set of high-level activities that must be performed as part of the business understanding stage presented in the CRISP-DM methodology, and it can be used in the business requirements definition stage of the Kimball Lifecycle. This process breaks down the problem of requirements elicitation in information mining projects into several phases, each of which transforms the knowledge acquired in the earlier stage. Figure 1 shows the strategic phases of an information mining project, focusing on the proposed requirements elicitation activities. The project management layer deals with the coordination of the different activities needed to achieve the objectives; defining the activities in this layer is beyond the scope of this work. This work identifies activities related to the process exposed in [START_REF] Kimball | The Data Warehouse Lifecycle Toolkit[END_REF], and can be used as a guide for the activities to be performed in an information mining project.
Business Conceptualization Phase.
The Business Conceptualization phase is the phase of the elicitation process in which the analyst comes to understand the language used by the organization and the specific vocabulary of the business. Table 1 summarizes the inputs and outputs of the Business Conceptualization phase. Interviews with business users define the information-related problems that the organization has. The first activity is to identify the list of people that will be interviewed; this is done as part of the business process data gathering activity.
In these interviews, information related to the business processes is collected and modeled in use cases. A business process is defined as the process of using the business on behalf of a customer and describes how the different events in the system occur, allowing the customer to start, execute and complete the business process [START_REF] Jacobson | The Object Advantage. Business Process Reengineering with Object Technology[END_REF]. The Business Analyst should collect the specific vocabulary used in the business processes in order to obtain both a description of the different tasks performed in each function and the terminology used in each use case.
The use case modeling task uses the information acquired during business data gathering and, as the last activity of this phase, generates these models.
Business Definition Phase
This phase defines the business in terms of concepts, vocabulary and information repositories. The objective is to document the concepts related to the business processes gathered in the Business Conceptualization Phase and to discover their relationships with other terms or concepts. A dictionary is the proposed tool to define these terms. The structure of a concept can be defined as shown in Table 3. Once the dictionary is completed, the analyst begins to analyze the various repositories of information in the organization. It is also important to determine the information volume, as this data can be used to select the information mining processes applicable to the project. The acquired information is used to build a map, or a model, that shows the relationship between the business use cases, the business concepts and the information repositories. This triple relationship can be used as the starting point for any information mining technique.
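To make the dictionary structure of Table 3 tangible, a possible machine-readable form is sketched below; the paper prescribes no particular notation, so the field names simply mirror Table 3 and the example entry is taken from the case study in section 4.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    definition: str
    data_structure: list = field(default_factory=list)   # data items contained in the concept
    relationships: list = field(default_factory=list)    # related concepts
    processes: list = field(default_factory=list)        # business processes that use it

selling_customer = Concept(
    name="Selling Customer",
    definition="A person who offers a property for sale",
    data_structure=["name and last name", "contact information"],
    relationships=["Property", "Property Appraisal"],
    processes=["Sell a Property"],
)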
Identification of Information Mining Process Phase
The objective of this phase is to define which information mining process can be used to solve the problems identified in the business processes. There are several processes that can be used [START_REF] Pollo-Cattaneo | Ingeniería de Procesos de Explotación de Información[END_REF], for instance:
─ Discovery of behavior rules (DBR)
─ Discovery of groups (DOG)
─ Attribute weighting of interdependence (AWI)
─ Discovery of membership group rules (DMG)
─ Weighting rules of behavior or membership groups (WMG)
This phase does not require any previous input, so its activities can be performed in parallel with the activities of the Business Conceptualization phase. Table 4 shows the inputs and outputs of this phase.
Analysis of the problems in the list has to be done. This analysis can be done using the model known as "Language Extended Lexicon (LEL)" [17][18] and can be used as a foundation of the work performed in this phase: breaking down the problem into several symbols presented in the LEL model. This model shows 4 general types of symbol, subject, object, verb and state.
To select a suitable information mining process, a decision table is proposed. The table analyses the LEL structures, the concepts identified in the Business Conceptualization phase, the existing information repositories and the problems to be solved. All this information is analyzed together and, according to this analysis, an information mining process is selected as the best option for the project. Table 5 shows the conditions and rules identified as a foundation in this work. An important remark is that subject discovery refers to concepts or subjects that have not been identified as part of the business domain. The objective of the table is to be able to decide, through the analysis of the information gathered about the business, which information mining process can be applied to the project. It should also be noted that this decision table will incorporate new knowledge and new rules, with the aim of improving the selection criteria; with more projects used as input and more experience acquired in these projects, the rules proposed in the table can be adjusted, leading to better selection choices.
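The decision table of Table 5 can be read as a lookup from condition values to candidate techniques. The sketch below encodes it directly; the boolean encoding and the function name are ours, and since only the two conditions listed in Table 5 are available here, rules that share the same values (e.g. R02 and R03) are returned together as candidates.

RULES = [
    # (verb associates subjects and objects?, factor analysis needed to obtain groups?, technique)
    (True,  False, "DBR"),   # R01: discovery of behavior rules
    (False, True,  "DOG"),   # R02: discovery of groups
    (False, True,  "AWI"),   # R03: attribute weighting of interdependence
    (True,  True,  "DMG"),   # R04: discovery of membership group rules
    (True,  True,  "WMG"),   # R05: weighting rules of behavior or membership groups
]

def candidate_processes(associates, needs_group_analysis):
    return [t for a, g, t in RULES if a == associates and g == needs_group_analysis]

# In the case study the verb "to offer" associates a subject (customer) with an
# object (property) and no group discovery is required, which points to DBR.
print(candidate_processes(True, False))   # ['DBR']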
The Identification of Information Mining Process phase is the last phase of the process. The tasks that follow depend on the project management process and the tasks defined for the project.
Proof of concept
A case study is presented next to demonstrate the proposed model.
Business Description
A real estate agency works mainly with residential properties in the suburban area. It is led by two owners, partners in equal shares. This real estate agency publishes its portfolio in different media, mostly local real estate magazines. The published properties are in the mid-range value segment, up to three hundred thousand dollars. It has only one store, where all the employees work. The following business roles are covered: a real estate agent, a salesman, administrative collaborators and several consultants.
Process Execution
The first step of the process consists of two activities: identifying the project stakeholders and setting up the list of people to be interviewed. In this case, with the little business information that we have, we can identify three stakeholders: the two owners and the real estate agent.
The second step is to set up the interviews with the stakeholders and gather information related to the business under study. The following paragraph describes the information obtained in the interviews.
This agency focuses on leasing and selling real estate. Any person can offer a house for sale. If a house is for sale, the real estate agent will estimate the best offer for the property being sold. When a person is interested in buying a home, they complete a form with their contact details and the characteristics that the property must meet. If there are any properties that meet the requested criteria, they are presented to the customer. The real estate agency considers as clients those who have offered a home for sale or have already begun the process of buying an offered home, and considers as interested customers those persons who are consulting on the offered properties or are looking for properties to buy. If an interested customer agrees on the purchase of a property, he or she becomes a client of the agency and the process of buying the property begins. The customer contact information and the property details are stored in an Excel file.
In this case, we can identify the following Business Use Cases:
─ Sell a property use case: the action of a person selling a property.
─ Buy a property use case: the action of a person buying a property.
─ Show a property managed by the real estate agency use case: the action of showing a property available for sale to interested parties.
For the Business Definition Phase, the business concept dictionary is created. From the gathered information, the concepts shown in Table 6 can be identified. The identified concepts are analyzed in order to find relationships between them. A class model can show the basic relationships between the concepts identified in the case.
From the gathered business information, a problem found is that the real estate agent wants to know, when a property is offered for sale, which customers could be interested in buying it. Following this identification, a LEL analysis is performed for each problem on the list. In this case, the analysis finds the symbols presented in Table 7. For the verb symbol "To offer a property", the idea is the action of showing a property to a customer and the impact is that the property is shown to the interested party; for the state symbol "Interested", the idea is a customer state achieved when a property meets his or her requirements and the impact is that the property must satisfy the customer's requirements.
With the obtained LEL analysis, the information repositories and the defined business concepts, the information mining process to apply in the project is determined. The decision table presented in section 3.3 is used, checking its conditions against the gathered information. The result of this analysis states that the project can apply the discovery of behavior rules (DBR) process.
Conclusion
This work presents a proposal of a process model for requirements elicitation in information mining projects and shows how existing elicitation techniques can be adapted to these projects. The process breaks down into three phases: in the first phase the business is analyzed (Conceptualization phase); later, a business model is built and defined to understand its scope and the information it manages (Business Definition phase); and finally, the business problems found and the information repositories that store the business data are used as input for a decision table to establish which information mining technique can be applied to a specific information mining project (Identification of an Information Mining Process).
As a future line of work, several cases are being identified to support the empirical case presented here, emphasizing the validation of the decision table presented in section 3.3.
Fig. 1. Information Mining Process phases.
Table 1 .
1 Business Conceptualization phase inputs and outputs.
Phase Task Input Input product Representation Transformation technique Output Output Product Representation
Business Understanding Project Definition Project KickOff Project Sponsors Analysis List of users to be interviewed List of users to be interviewed template.
Business Conceptualization Business Process data gathering List of users to be interviewed List of users to be interviewed Interviews Workshops Gathered Information Information gathering template
Business Building Model Gathered Information Information Template gathering Analysis of information gathered Use Case Model Use Case Model template
Table 2. Business Definition phase inputs and outputs.
Fig. 1 (legend): a Project Management layer spans the stages Business Understanding, Data Understanding, Data Preparation, Modeling, Evaluation and Deployment; the phases Business Conceptualization, Business Definition and Identification of Information Mining Process fall within Business Understanding.
Table 3 .
3 Concept Structure
Structure element Description
Concept Term to be defined
Definition Description of the concept meaning.
Data structure Description of data structures contained in the concept
Relationships A List of Relationships with other concepts
Processes A list of processes that use this concept
Table 4 .
4 Inputs and Outputs of Identification of Information Mining Process Phase
Phase Task Input product Input Representation Transformation technique Output Product Output Representation
Identify Use Case Use Case Documentation Problem List Problem List
Identifica- Business Model Model Analysis Template
tion of Problems Template
Information Mining Process Select an information mining process Problem List Concept Dictionary Problem List Template Dictionary Template LEL Analysis An informa-tion Mining process to be applied
Table 5. Information mining process selection decision table.
Conditions (rules R01 / R02 / R03 / R04 / R05):
─ The action represented by a verb associates subjects and objects? Yes / No / No / Yes / Yes
─ Is an analysis of factors required to obtain a group of subjects or objects? No / Yes / Yes / Yes / Yes
Actions:
─ The technique to be applied is: DBR / DOG / AWI / DMG / WMG
Table 6 .
6 Identified Business Concepts
Selling Customer: A person who offers a property for sale Property Appraisal: Appraisal of property for sale.
Structure: Name and Last Name Structure: Appraisal value (Number)
Contact Information Property ID
Transaction Currency
Relationships: Property Relationships: Property
Property appraisal Customer
Busines Process: Sell a Property Business Processes Sell a Property
Offer a Property
Table 7. Real estate agency problem-related symbols.
Property [Object] - Idea: the object that the real estate agency sells; it has its own attributes. Impact: it is sold to a Customer.
Customer [Subject] - Idea: a person interested in buying a property; a person who is selling a property. Impact: fills a form with buying criteria.
To Offer a property [Verb] Interested [Status] | 19,136 | [
"1003573",
"1003574",
"1003575",
"992693"
] | [
"346011",
"346011",
"300134",
"487856",
"487857"
] |
01484679 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484679/file/978-3-642-36611-6_16_Chapter.pdf | Per Svejvig
Torben Storgaard
Charles Møller
email: charles@business.aau.dk
Hype or Reality: Will Enterprise Systems as a Service become an Organizing Vision for Enterprise Cloud Computing in Denmark?
Keywords: Cloud computing, software as a service (SaaS), enterprise systems, organizing vision, institutional theory
Cloud computing is at "the peak of inflated expectations" on the Gartner Hype Cycle from 2010. Service models constitute a layer in the cloud computing model, and Software as a Service (SaaS) is one of the important service models. Software as a Service provides complete business applications delivered over the web; more specifically, when delivering enterprise systems (ES) applications such as ERP, CRM and others, we can further categorize the model as an Enterprise Systems as a Service (ESaaS) model. However, it is said that ESaaS is one of the last frontiers for cloud computing due to security risks, downtime and other factors. The hype about cloud computing and ESaaS made us speculate about our local context, Denmark: what is the current situation and how might ESaaS develop? We are asking the question: Will ESaaS become an organizing vision in Denmark? We used empirical data from a database with more than 1150 Danish organizations using ES, informal contacts with vendors, etc. The result of our study is very surprising, as none of the organizations in the database apply ESaaS, although recent information from vendors indicates more than 50 ESaaS implementations in Denmark. We discuss the distance between the community discourse and the current status of real ESaaS implementations.
Introduction
Cloud computing is on everybody's lips today and is promoted as a silver bullet for solving several of the past problems with IT by offering pay per use, rapid elasticity, on-demand self-service, simple scalable services and (perhaps) multi-tenancy [START_REF] Wohl | Cloud Computing[END_REF]. Cloud computing is furthermore marketed as a cost saving strategy appealing well to the post-financial-crisis situation of many organizations with cloud's "Opex over Capex story and ability to buy small and, if it works, to go big" [START_REF] Schultz | Enterprise Cloud Services: the agenda[END_REF]. Cloud computing has even been named by Gartner "as the number one priority for CIOs in 2011" [START_REF] Golden | Cloud CIO: 5 Key Pieces of Rollout Advice[END_REF]. In addition, Gartner positions cloud computing at the "peak of inflated expectations" on the Gartner Hype Cycle, predicting 2 to 5 years to mainstream adoption [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF].
Service models constitute a layer in the cloud computing model, and Software as a Service (SaaS) is one of the important types of service models. Software as a Service provides complete business applications delivered over the web; more specifically, when delivering enterprise systems (ES) applications such as ERP, CRM and others, we can further categorize the model as an Enterprise Systems as a Service (ESaaS) model. In this paper we use ESaaS interchangeably with SaaS but also as a more specific concept. As cloud computing is still an evolving paradigm, its definitions, use cases, underlying technologies, issues, risks and benefits are still being refined [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF].
Software as a Service (SaaS) embraces cloud applications for social networks, office suites, CRM, video processing etc. One example is Salesforce.com, a business productivity (CRM) application which relies completely on the SaaS model [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF], consisting of the Sales Cloud, Service Cloud and Chatter Collaboration Cloud [START_REF]The leader in customer relationship management (CRM) & cloud computing[END_REF], residing on "[Salesforce.com] servers, allowing customers to customize and access applications on demand" [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF].
However, enterprise-wide system applications, and especially ERP, have been considered the last frontier for SaaS, and companies have put forward the following reasons preventing them from considering ESaaS (in prioritized sequence): (1) ERP is too basic and strategic to running our business, (2) security concerns, (3) ability to control our own upgrade process, (4) downtime risk, (5) greater on-premise functionality, (6) requires heavy customizations, and finally (7) already invested in IT resources and don't want to reduce staff [START_REF]SaaS ERP: Trends and Observations[END_REF]. A very recent example shows the potential problem with cloud and ESaaS: Amazon had an outage of its cloud services lasting for several days and affecting a large number of customers [START_REF] Thibodeau | Amazon Outage Sparks Frustration, Doubts About Cloud[END_REF].
Despite these resisting factors, there seems to be a big jump in ESaaS interest, with 39% of respondents willing to consider ESaaS according to Aberdeen's 2010 ERP survey, which is a 61% increase in willingness from the 2009 to the 2010 survey [START_REF] Subramanian | Big Jump in SaaS ERP Interest[END_REF]; this is furthermore supported by a very recent report from Panorama Consulting Group [START_REF][END_REF] stating the adoption rate of ESaaS to be 17%.
The adoption pattern of ESaaS varies in at least two dimensions: company size and application category. Small companies are more likely to adopt SaaS, followed by mid-size organizations [START_REF]SaaS ERP: Trends and Observations[END_REF], which might be explained by large companies having a more complex and comprehensive information infrastructure [as defined in 12] compared to small and mid-size companies. CRM applications are more frequent than ERP applications [START_REF]SaaS ERP: Trends and Observations[END_REF], where a possible explanation can be the perception of ERP as too basic and strategic to run the business in an ESaaS model.
Most recently, the Walldorf, Germany based ERP vendor SAP has launched an on-demand ERP (SaaS) solution, SAP Business By Design, which can be seen as a prototypical ESaaS model [START_REF] Sap | SAP Business ByDesign[END_REF]. SAP Business By Design is a fully integrated on-demand Enterprise Resource Planning (ERP) and business management software solution for small and medium sized enterprises (SMEs). It is a complete Software as a Service (SaaS) offering for 10-25 users, available on most major markets. However, real cases are actually hard to locate.
Enterprise Systems as a Service -Global and Local Context
Cloud computing appears to have emerged very recently as a subject of substantial industrial and academic interest, though its meaning, scope and fit with respect to other paradigms are hotly debated. For some researchers, clouds are a natural evolution towards full commercialization of grid systems, while for others they may be dismissed as a mere rebranding of existing pay-per-use or pay-as-you-go technologies [START_REF] Antonopoulos | Cloud Computing: Principles, Systems and Applications[END_REF].
Cloud computing is a very broad concept and an umbrella term for refined on demand services delivered by the cloud [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF]. The multiplicity in understanding of the term is probably fostered by the "beyond amazing hype level" [START_REF] Wohl | Cloud Computing[END_REF] underlining the peak in Gartner's Hype Cycle [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF]. Many stakeholders (vendors, analysts etc.) jump on the bandwagon inflating the term and "if everything is a cloud, then it gets very hard to see anything" [START_REF] Wohl | Cloud Computing[END_REF], so we need to be very explicit about using the term. We follow the US National Institute of Standards and Technology (NIST) definition [START_REF] Mell | The NIST Definition of Cloud Computing[END_REF]:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This cloud model promotes availability and is composed of five essential characteristics, three service models, and four deployment models as illustrated in figure 1 below [START_REF] Williams | A quick start guide to cloud computing: moving your business into the cloud[END_REF].
The notion of the "cloud" as a technical concept is used as a metaphor for the internet and was in the past used to represent the telephone network as an abstraction of the underlying infrastructure [START_REF] Baan | Business Operations Improvement, The New Paradigm in Enterprise IT[END_REF]. There are different deployment models for cloud computing, such as private clouds operated solely for one organization, community clouds shared by several organizations, public clouds, and hybrid clouds as a composition of two or more clouds (private, community or public) [START_REF] Beck | Agile Software Development Manifesto[END_REF]. The term virtual private cloud has also entered the scene, analogous to VPN. There is a controversy about whether private clouds (virtual or not) really are cloud computing [START_REF] Wohl | Cloud Computing[END_REF].
Cloud computing can be divided into three layers namely [START_REF] Voorsluys | Introduction to Cloud Computing[END_REF]: (1) Infrastructure as a Service (IaaS), (2) Platform as a Service (PaaS) and (3) Software as a Service (SaaS). The focus in this paper is on the enterprise systems as a service (ESaaS) where "SaaS is simply software that is delivered from a server in a remote location to your desktop, and is used online" [START_REF] Wohl | Software as a Service (SaaS)[END_REF]. ESaaS usage is expected to expand in 2011 [START_REF] O'neill | Cloud Computing in 2011: 3 Trends Changing Business Adoption[END_REF].
The air is also charged with cloud computing and SaaS in Denmark. Many of the issues discussed in this paper apply to the local Danish context, but there are also additional points to mention. First, Denmark has a lot of small and medium sized organizations (SMEs), which are expected to be more willing to adopt ESaaS [START_REF]SaaS ERP: Trends and Observations[END_REF].
Second, Local Government Denmark (LGDK) (an interest group and member authority of the Danish municipalities) tried to implement a driving license booking system based on Microsoft's Azure PaaS, but ran into technical and juridical problems. The technical problems were related to the payment module, logon and data extraction from the cloud based solution [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF]. The legal issue was more serious, as LGDK (and the municipalities) was accused by the Danish Data Protection Agency of breaking the act on processing of personal data, especially concerning the location of data [START_REF] Elkaer | Datatilsynet farer i flaesket på KL over cloud-flop[END_REF]. LGDK decided to withdraw the cloud solution and replaced it with an on-premise solution, commenting that "cloud computing is definitely more difficult, and harder, than what is mentioned in the booklets" [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF].
Finally, the CIO of the LEGO Group, a well-known Danish global enterprise within toy manufacturing, stated in the news media that "cloud is mostly hot air". Cloud can only deliver a small fraction of the services that LEGO needs and cannot replace their "customized SAP, Microsoft, Oracle and ATG [e-commerce] platforms with end to end business process support". LEGO uses the cloud for specific point solutions such as "spam and virus filtering", "credit card clearing" and load-testing of applications, but "[t]o put our enterprise-platform on the public cloud is Utopia" [START_REF] Nielsen | LEGO: Skyen er mest varm luft[END_REF].
This section has described the global and local context for ESaaS, and both contexts will probably influence Danish organizations and their willingness to adopt such solutions. In the next section we will look into a theoretical framing of the impact of cloud computing on enterprise systems in Denmark.
IS Innovations as Organizing Visions
An organizing vision (OV) can be considered a collective, cognitive view of how new technologies enable success in information systems innovation. This model is used to analyze ESaaS in Denmark. Swanson and Ramiller [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF] take institutional theory into IS research and propose the concept of the organizing vision in IS innovation, which they define as "a focal community idea for the application of information technology in organizations" [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF]. Earlier research has argued that early adoption of a technological innovation is based on rational choice while later adoption is institutionalized. However, Swanson and Ramiller suggest that institutional processes are engaged from the beginning. Interorganizational communities create and employ organizing visions of IS innovations. Examples are CASE tools, e-commerce, client-server [START_REF] Ramiller | Organizing Visions for Information Technology and the Information Systems Executive Response[END_REF] and Application Service Providers (ASP) [START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF], comparable to management fads like BPR, TQM and quality circles [START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF][START_REF] Abrahamson | Management Fashion: Lifecycles, Triggers, and Collective Learning Processes[END_REF]. The organizing vision is important for early and later adoption and diffusion. The vision supports interpretation (to make sense of the innovation), legitimation (to establish the underlying rationale) and mobilization (to activate, motivate and structure the material realization of the innovation) [START_REF] Swanson | The Organizing Vision in Information Systems Innovation[END_REF][START_REF] Currie | The organizing vision of application service provision: a process-oriented analysis[END_REF].
The OV model presents different institutional forces such as community discourse, community structure and commerce and business problematic, which are used in the analysis of ESaaS in Denmark.
Research Methodology
The research process started in early 2011, where we applied different data collection methods: (1) queries into the HNCO database of ERP and CRM systems, (2) informal dialogue with ES vendors, and finally (3) a literature search on cloud computing and SaaS (research and practitioner oriented). The second author is employed at Herbert Nathan & Co (HNCO), a Danish management consulting company within the area of ERP, which maintains a database of the top 1000 companies in Denmark and their usage of enterprise systems. However, we did not find any customers in the database using ESaaS, which was surprising. We repeated our study in spring 2012 and surprisingly got the same result as one year before. ES as a Service is apparently not used by the top 1000 companies in Denmark. However, informal talks with vendors indicate that there might be about 50 references in Denmark, but we have only been able to confirm a small number of these claimed references.
Analysis
The table below shows the analysis concerning ESaaS as an organizing vision (adapted from Figure 3):
Institutional force: Community discourse.
Global context: Cloud computing has been named by Gartner "as the number one priority for CIOs in 2011" [START_REF] Golden | Cloud CIO: 5 Key Pieces of Rollout Advice[END_REF]. Gartner positions cloud computing at the "peak of inflated expectations" on the Gartner Hype Cycle, predicting 2 to 5 years to mainstream adoption [START_REF] Fenn | Hype Cycle for Emerging Technologies[END_REF]. The Aberdeen survey and the Panorama Consulting Group report show a big jump in interest in ESaaS / SaaS [START_REF] Subramanian | Big Jump in SaaS ERP Interest[END_REF][START_REF][END_REF]. Amazon had an outage of its cloud services lasting for several days and affecting a large number of customers [START_REF] Thibodeau | Amazon Outage Sparks Frustration, Doubts About Cloud[END_REF]; this case received very much press coverage, and it would be natural to expect it to have a negative impact on the perception of cloud computing.
Local context: The global discourse is part of the local Danish discourse, but local stories also shape the local context. Denmark has a lot of small and medium sized organizations (SMEs), which are expected to be more willing to adopt ESaaS [START_REF]SaaS ERP: Trends and Observations[END_REF]; that might fertilize the ground for faster adoption of ESaaS. Local Government Denmark (LGDK) tried to implement a driving license booking system based on Microsoft's Azure PaaS, but ran into technical and juridical problems [START_REF] Elkaer | Derfor gik kommunernes cloud-forsøg i vasken[END_REF]. The CIO of the LEGO Group stated in the news media that "cloud is mostly hot air" and that cloud can only deliver a small fraction of the services that LEGO needs. For small and medium sized companies the business conditions might be different, and the arguments in favor of cloud computing and ESaaS / SaaS might be more prevailing.
Table 1. Analysis of ESaaS as an organizing vision.
Table 1 above shows the conditions for ESaaS to become an organizing vision, although it would be too early to claim that it already is one, especially because the link to practice appears to be uncertain. Our knowledge about the 50 implementations in Denmark is very limited, and we do not know the status of these implementations (pilots, just started, normal operation, abandoned, etc.).
Discussions
First of all, the research indicates that the organizing vision of ESaaS in Denmark is perhaps at too preliminary a stage to make sense of. The evidence is scarce or inaccessible, which indicates that the idea is either non-existent or at an immature state. Given the vast amount of interest, we assume that the concept is either immature or that the ideas will emerge under a different heading that we have not been able to identify.
In any case, we can use the idea of the organizing vision as a normative model for the evolution of the cloud computing concept in an enterprise systems context. This is comparable to Gartner's hype cycle: after the initial peak of inflated expectations we will gradually move into the slope of enlightenment. The organizing vision could be a normative model for making sense of these developments. But only future research will tell.
As a final comment on the organizing vision of ESaaS, the following quote from Larry Ellison, the CEO of Oracle, from September 2008 sums up the experiences:
The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements. The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?
Conclusion
This paper has sought to further our understanding of cloud computing and SaaS, with a special focus on ESaaS. We described the global and local context for cloud computing and ESaaS / SaaS. We furthermore presented institutional theory, extended by the work of Swanson and Ramiller on their concept of organizing visions. We asked the question: Will ESaaS become an organizing vision in Denmark? The paper can give some initial and indicative answers to this question: the community discourse supports ESaaS as an organizing vision, but the current status of real ESaaS implementations is uncertain.
The paper has only been able to scratch the surface and to give some initial thoughts about ESaaS in the local context. However, it sets the stage for longer-term research challenges concerning ESaaS. First, an obvious extension of this paper is to study the Danish market in much more detail by interviewing the actors in the community structure, especially ESaaS customers. Second, comparative studies between countries would also be interesting: does an organizing vision such as ESaaS diffuse similarly or differently, and what shapes the diffusion and adoption? Finally, the theoretical framework by Swanson and Ramiller is appealing for studying the adoption and diffusion of technology, possibly extended by this paper's approach with the global and local context.
"990471",
"1003581",
"1003582"
] | [
"19908",
"487863",
"300821"
] |
01484684 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484684/file/978-3-642-36611-6_20_Chapter.pdf | Christian Leyh
email: christian.leyh@tu-dresden.de
Lars Crenze
ERP System Implementations vs. IT Projects: Comparison of Critical Success Factors
Keywords: ERP systems, IT projects, implementation, critical success factors, CSF, literature review, comparison
Introduction
Today's enterprises are faced with the globalization of markets and fast changes in the economy. In order to be able to cope with these conditions, the use of information and communication systems as well as technology is almost mandatory. Specifically, the adoption of enterprise resource planning (ERP) systems, as standardized systems that encompass the actions of whole enterprises, has become an important factor in today's business. Therefore, during the last few decades, ERP system software has represented one of the fastest growing segments in the software market; indeed, these systems are one of the most important recent developments within information technology [START_REF] Deep | Investigating factors affecting ERP selection in the made-to-order SME sector[END_REF], [START_REF] Koh | Change and uncertainty in SME manufacturing environments using ERP[END_REF].
The demand for ERP applications has increased for several reasons, including competitive pressure to become a low-cost producer, expectations of revenue growth, and the desire to re-engineer the business to respond to market challenges. A properly selected and implemented ERP system offers several benefits, such as considerable reductions in inventory costs, raw material costs, lead time for customers, production time, and production costs [START_REF] Somers | The impact of critical success factors across the stages of enterprise resource planning implementations[END_REF]. The strong demand for ERP applications resulted in a highly fragmented ERP market and a great diffusion of ERP systems throughout enterprises of nearly every industry and every size [START_REF] Winkelmann | Experiences while selecting, adapting and implementing ERP systems in SMEs: a case study[END_REF], [START_REF] Winkelmann | Teaching ERP systems: A multi-perspective view on the ERP system market[END_REF]. This multitude of software manufacturers, vendors, and systems implies that enterprises that use or want to use ERP systems must strive to find the "right" software as well as to be aware of the factors that influence the success of the implementation project. Remembering these so-called critical success factors (CSFs) is of high importance whenever a new system is to be adopted and implemented or a running system needs to be upgraded or replaced. Errors during the selection, implementation, or maintenance of ERP systems, incorrect implementation approaches, and ERP systems that do not fit the requirements of the enterprise can all cause financial disadvantages or disasters, perhaps even leading to insolvencies. Several examples of such negative scenarios can be found in the literature (e.g. [START_REF] Barker | ERP implementation failure: A case study[END_REF], [START_REF] Hsu | Avoiding ERP pitfalls[END_REF]).
However, it is not only errors in implementing ERP systems that can have a negative impact on enterprises; errors within other IT projects (e.g., implementations of BI, CRM, or SCM systems) can be damaging as well. Due to the fast-growing and changing evolution of technology, it is especially necessary for enterprises to at least keep in touch with the latest technologies. For example, buzzwords like "cloud computing" or "Software as a Service (SaaS)" can very often be read in management magazines. Therefore, to cope with implementations of these and other systems, it is mandatory for enterprises to be aware of the CSFs for these IT projects as well.
In order to identify the factors that affect ERP system implementations or IT projects, several case studies, surveys, and even some literature reviews have already been conducted by various researchers. However, a comparison of the factors affecting ERP implementation and IT project success has only rarely been done. Being aware of the differences between the CSFs for ERP and IT projects is important for enterprises in order to make sure they have, or acquire, the "right" employees (project leaders, project team members, etc.) with adequate know-how and experience.
To gain insight into the different factors affecting ERP implementation and IT project success, we performed a CSF comparison. We conducted two literature reviews, more specifically, systematic reviews of articles in different databases and among several international conference proceedings. This also served to update the existing reviews by including current literature.
The CSFs reported in this paper were derived from 185 papers dealing with ERP systems and from 56 papers referring to factors affecting IT projects' success. The frequency of the occurrence of each CSF was counted. The aggregated results of these reviews as well as the comparison of the reviews will be presented in this paper.
Therefore, the paper is structured as follows: Within the next section our literature review methodology will be outlined in order to render our reviews reproducible. The third section deals with the results of the literature reviews and the comparison of the reviews. We will point out the factors that are the most important and those that seem to have little influence on the success of ERP implementations and IT projects. Finally, the paper concludes with a summary of the results as well as a critical acclaim for the conducted literature reviews.
Research Methodology -Literature Review
Both literature reviews to identify the aforementioned CSFs were performed via several steps, similar to the approach suggested by Webster & Watson [START_REF] Webster | Analyzing the past, preparing the future: Writing a literature review[END_REF]. In general, they were systematic reviews based on several databases that provide access to various IS journals. For the ERP system CSFs, we performed an additional search in the proceedings of several IS conferences. During the review of the ERP papers we identified 185 papers with relevant information concerning CSFs within five databases and among proceedings of five international IS conferences. However the overall procedure for the ERP system review will not be part of this paper. It is described in detail in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF].
The steps of the IT projects' CSF review procedure are presented below. These steps are similar to the ERP CSF review [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]. An overview of the steps is given in Figure 1. However, due to our experience during the first review (duplicates, relevant papers per database and/or proceedings), we reduced the number of databases and did not perform a review among conference proceedings. Step 1: The first step involved defining the sources for the literature review. For this approach, as mentioned, due to our earlier experience in the review procedure, two databases were identified -"Academic Search Complete" and "Business Source Complete." The first contains academic literature and publications of several academically taught subjects with specific focus on humanities and social sciences. The second covers more practical topics. It contains publications in the English language from 10,000 business and economic magazines and other sources.
Step 2: Within this step, we had to define the search terms for the systematic review. Keywords selected for this search were primarily derived from the keywords supplied and used by the authors of some of the relevant articles identified in a preliminary literature review. It must be mentioned that the search term "CSF" was not used within the Academic Search Complete database since this term is also predominantly used in medical publications and journals. As a second restriction, we excluded the term "ERP" from the search procedure in the Business Source Complete database to focus on IT projects other than ERP projects. However, this restriction could not be used within the first database due to missing functionality.
Step 3: During this step, we performed the initial search according to steps 1 and 2 and afterwards eliminated duplicates. Once the duplicates were eliminated, 507 articles remained.
Step 4: The next step included the identification of irrelevant papers. During the initial search, we did not apply any restrictions besides the ones mentioned above. The search was not limited to the research field of IS; therefore, papers from other research fields were included in the results as well. These papers had to be excluded. This was accomplished by reviewing the abstracts of the papers and, if necessary, by looking into the papers' contents. In total, this approach yielded 242 papers that were potentially relevant to the field of CSFs for IT projects.
Step 5: The fifth and final step consisted of a detailed analysis of the remaining 242 papers and the identification of the CSFs. Therefore, the content of all 242 papers was reviewed in depth for the purpose of categorizing the identified success factors. Emphasis was placed not only on the wording of these factors but also on their meaning. After this step, 56 relevant papers that suggested, discussed, or mentioned CSFs remained. The results of the analysis of these 56 papers are described in the following section. A list of these papers will not be part of this article but it can be requested from the first author.
Results of the Literature Review -Critical Success Factors Identified
The goal of the performed reviews was to gain an in-depth understanding of the different CSFs already identified by other researchers. As stated previously, 185 papers that referred to CSFs of ERP implementation projects were identified, as were 56 papers referring to CSFs of IT projects. The identified papers consist of those that present single or multiple case studies, survey results, literature reviews, or CSFs conceptually derived from the chosen literature. They were reviewed again in depth in order to determine the various concepts associated with CSFs. For each paper, the CSFs were captured along with the publication year, the type of data collection used, and the companies (i.e., the number and size) from which the CSFs were derived.
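The frequency counts behind Table 4 (and Table 3 for the IT projects) amount to a simple tally over the captured papers. The sketch below illustrates that step only; the authors do not state which tooling they used, and the sample records are invented for illustration.

from collections import Counter

papers = [
    {"year": 2009, "method": "case study",
     "csfs": ["Top management support and involvement", "Project management", "User training"]},
    {"year": 2010, "method": "survey",
     "csfs": ["Project management", "Organizational fit of the ERP system"]},
]

# Aggregate how often each CSF category occurs across all reviewed papers.
frequency = Counter(csf for paper in papers for csf in paper["csfs"])
for csf, count in frequency.most_common():
    print(f"{csf}: {count}")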
To provide a comprehensive understanding of the different CSFs and their concepts, we described the ERP implementation CSFs in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF] and [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]. There, the detailed definitions of the ERP implementation CSFs can be found. Since most of those CSFs can be matched with CSFs of IT projects (as shown later) we will not describe them within this paper.
Critical Success Factors for ERP System Implementations
Overall, 31 factors (as described in [START_REF] Leyh | Critical success factors for ERP system implementation projects: A literature review[END_REF], [START_REF] Leyh | Critical success factors for ERP system selection, implementation and postimplementation[END_REF]) were identified referring to factors influencing the ERP system implementation success. In most previous literature reviews, the CSFs were grouped without as much attention to detail; therefore, a lower number of CSFs was used (e.g., [START_REF] Somers | The impact of critical success factors across the stages of enterprise resource planning implementations[END_REF], [START_REF] Loh | Critical elements for a successful enterprise resource planning implementation in small-and medium-sized enterprises[END_REF], [START_REF] Finney | ERP implementation: A compilation and analysis of critical success factors[END_REF]). However, we took a different approach in our review. For the 31 factors, we used a larger number of categories than other researchers, as we expected the resulting distribution to be more insightful. If more broad definitions for some CSFs might be needed at a later time, further aggregation of the categories is still possible.
All 185 papers were published between the years 1998 and 2010. Table 1 shows the distribution of the papers based on publication year. Most of the papers were published between 2004 and 2009. Starting in 2004, about 20 papers on CSFs were published each year. Therefore, a review every two or three years would be reasonable in order to update the results of previously performed literature reviews. The identified CSFs and each factor's total number of occurrences in the reviewed papers are shown in the Appendix in Table 4. Top management support and involvement, Project management, and User training are the three most-named factors, with each being mentioned in 100 or more articles.
Regarding the data collection method, we must note that the papers we analyzed for CSFs were distributed as follows: single or multiple case studies -95, surveys -55, and literature reviews or articles in which CSFs are derived from chosen literature -35.
Critical Success Factors for IT Projects
In the second literature review, 24 factors referring to the success of IT projects were identified. Again, we used a larger number of categories and did not aggregate many of the factors, since this approach had proven useful in our first CSF review. All 56 papers were published between 1982 and 2011. Table 2 shows the distribution of the papers by publication year. Most of the papers were published between 2004 and 2011, although some are older than 15 years; we included these older papers in the review as well. Table 3 shows the results of our review, i.e., the identified CSFs and each factor's total number of occurrences in the reviewed papers. Project management and Top management support are the two most frequently named factors, each being mentioned in 30 or more articles. These factors are followed by Organizational structure, Solution fit, and Resources management, all mentioned in nearly half of the analyzed articles. As Table 3 shows, due to the smaller number of relevant papers the differentiation between the individual CSFs is not as clear as for the ERP CSFs; most factors differ by only a few mentions. Regarding the data collection method, the papers we analyzed for IT projects' CSFs were distributed as follows: single or multiple case studies -16, surveys -27, and literature reviews or articles in which CSFs are derived from chosen literature -13.
Comparison of the Critical Success Factors
As mentioned earlier, we identified 31 CSFs dealing with the success of ERP system implementations and 24 factors affecting IT projects' success. The factors are titled according to the naming used most often in the literature, which means we had to deal with different terms in the two reviews. Nevertheless, most of the CSFs (despite their different naming) can be found in both reviews. Table 4 in the Appendix provides an overview of the CSF matching.
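Operationally, the matching reported in Table 4 amounts to a synonym mapping between the two coding schemes. The fragment below sketches how such a mapping could be represented; the sample entries follow the matches named in this paper, while the structure itself is our illustrative assumption.

```python
# Hypothetical mapping from IT-project CSF labels to ERP-implementation CSF labels.
# A value of None marks a factor without a counterpart in the other review.
it_to_erp_csf = {
    "Project management": "Project management",
    "Top management support": "Top management support and involvement",
    "Solution fit": "Organizational fit of the ERP system",
    "Resources management": None,   # no ERP-side counterpart
    "Working conditions": None,     # no ERP-side counterpart
}

unmatched = [factor for factor, match in it_to_erp_csf.items() if match is None]
print("IT-project CSFs without an ERP counterpart:", unmatched)
```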
As shown, nine CSFs occur only in the review of the ERP literature; these factors therefore appear to be specific to ERP implementation projects. Most of these nine factors are not cited very often, so they seem to be less important than the CSFs mentioned in both reviews. Nevertheless, two of the nine - Business process reengineering (BPR) and ERP system configuration - are in the top 10. Since an ERP implementation has a large impact on an enterprise and its organizational structures, BPR is important for adapting the enterprise so that it appropriately fits the ERP system. On the other hand, it is also important to implement the right modules and functionalities of an ERP system and to configure them to fit the way the enterprise conducts business. As not all IT projects have as large an impact on an organization as ERP implementations do, configuration (or BPR of the organization's structure) is a less important success factor for them.
In the review of the IT project literature, two factors - Resources management and Working conditions - have no match in the ERP implementation CSF list. The former, however, ranks in the top five of this review and seems to be an important factor for IT projects' success.
Comparing the top five, the two most often cited factors are the same in both reviews (see Table 3 and Table 4). These top two are followed by different factors in each review. It can nevertheless be stated that project management and the involvement and support of top management are important for every IT project and ERP implementation. Solution fit (rank #3) and Organizational fit of the ERP system (rank #8), which are matched, are both important factors, but even more so for IT projects. This is also supported by Organizational structure, which is #4 for IT projects but only #27 for ERP implementations. For IT projects, a fitting structure within the enterprise is important since BPR (as mentioned above) is not a factor for those projects. For ERP implementations, the "right" organizational structure is less important, since BPR is done during almost every ERP implementation project and the structure is therefore changed to fit the ERP system.
Conclusion and Limitations
The aim of our study was to gain insight into the research field of CSFs for ERP implementations and for IT projects and to compare those CSFs. Research on the fields of ERP system implementations and IT projects and their CSFs is a valuable step toward enhancing an organization's chances for implementation success [START_REF] Finney | ERP implementation: A compilation and analysis of critical success factors[END_REF]. Our study reveals that several papers, i.e., case studies, surveys, and literature reviews, focus on CSFs. All in all, we identified 185 relevant papers for CSFs dealing with ERP system implementations. From these existing studies, we derived 31 different CSFs. The following are the top three CSFs that were identified: Top management support and involvement, Project management, and User training. For factors affecting IT projects' success, we identified 56 relevant papers citing 24 different CSFs. Here, Project management, Top management support, and Solution fit are the top three CSFs.
As shown in Table 1 and Table 2, most of the papers in both reviews were published after 2004. In the ERP review in particular, about 20 or more CSF papers have been published each year since 2004. One conclusion is therefore that new literature reviews on the CSFs of ERP systems, and even on the CSFs of IT projects, should be conducted every two or three years in order to update the results.
Due to rapidly evolving technology, it is increasingly important for companies to stay up to date and to at least keep in touch with the latest developments. This also applies to small and medium-sized enterprises (SMEs). Especially in the ERP market, which became saturated in the large-company segment at the beginning of this century, many ERP manufacturers have shifted their focus to the SME segment because of its low ERP penetration rates. Large market potential therefore awaits ERP manufacturers addressing this segment, and the same holds for other software and IT solutions. To cooperate with larger enterprises that have highly developed IT infrastructures, SMEs need to improve their own IT systems and infrastructure as well. CSF research should therefore also focus on SMEs, given the considerable differences between large-scale companies and SMEs: ERP implementation projects and IT projects must be adapted to the specific needs of SMEs, and the importance of certain CSFs may differ depending on the size of the organization. We thus conclude that an explicit focus on CSFs for SMEs is necessary in future research.
Regarding our literature reviews, a few limitations must be mentioned as well. We cannot be certain that we identified all relevant papers published in journals and conference proceedings, since we made a specific selection of five databases and five international conferences, and set even more restrictions when conducting the IT projects' review. Journals not included in our databases and the proceedings of other conferences might therefore also contain relevant articles. Another limitation is the coding of the CSFs. We tried to reduce subjectivity by formulating coding rules and by discussing the coding of the CSFs among several independent researchers; nevertheless, other researchers may code the CSFs in other ways.
Figure 1. Progress of the IT projects literature review
Table 1. Paper distribution of ERP papers

Year    2010  2009  2008  2007  2006  2005  2004  2003  2002  2001  2000  1999  1998
Papers     6    29    23    23    25    18    23    11    12     5     6     3     1
Table 2. Paper distribution of IT project papers

Year    2011  2010  2009  2008  2007  2006  2005  2004  2003  2002  2001  1998  1995  1993  1987  1983  1982
Papers     4     5     6     6     3    10     4     5     1     2     1     1     2     2     2     1     1
Table 3. IT projects' CSFs in rank order based on frequency of appearance in the analyzed literature

Factor                                        Number of instances
Project management                            31
Top management support                        30
Organizational structure                      26
Solution fit                                  26
Resources management                          25
User involvement                              24
Knowledge & experience                        23
Budget / available resources                  20
Stakeholder management                        19
Leadership                                    18
User training                                 18
Working conditions                            18
Commitment and motivation of the employees    17
Implementation approach                       17
Communication                                 15
Strategy fit                                  15
Change management                             14
Team organization                             14
Corporate environment                         10
Monitoring                                    10
Project scope                                 10
Risk management                                8
Corporate culture                              6
Legacy systems and IT structure                3
Appendix

| 22,564 | [ "1003471" ] | [ "96520", "96520" ] |
01484685 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484685/file/978-3-642-36611-6_21_Chapter.pdf | Anjali Ramburn
email: anjali.ramburngopaul@uct.ac.za
Lisa Seymour
email: lisa.seymour@uct.ac.za
Avinaash Gopaul
Understanding the Role of Knowledge Management during the ERP Implementation Lifecycle: Preliminary Research Findings Relevant to Emerging Economies
Keywords: Knowledge Management, ERP Implementation, ERP Implementation Phase, Emerging Economy
This work-in-progress paper presents a preliminary analysis of the challenges of knowledge management (KM) experienced in the ERP implementation phase. It is part of ongoing research focusing on the role of KM during the ERP implementation lifecycle in both large and medium organizations in South Africa. One of the key research objectives is to investigate the core KM challenges faced by these organizations. A review of the existing literature reveals a lack of comprehensive KM research across the different ERP implementation phases, particularly in emerging economies. Initial findings identify lack of process, technical and project knowledge as key challenges. Other concerns include poor understanding of the need for change, lack of contextualization and lack of management support. This paper closes some of the identified research gaps in this area and should benefit large organizations in the South African economy.
Introduction
Background and Context
Organizations are continuously facing challenges, causing them to rethink and adapt their strategies, structures, goals, processes and technologies in order to remain competitive [START_REF] Bhatti | Critical Success Factors for the Implementation of Enterprise Resource Planning (ERP): Empirical Validation. 2nd International Conference on Innovation in Information Technology[END_REF], [START_REF] Holland | A critical success factors model for ERP implementation[END_REF]. Many large organizations are now dependent on ERP systems for their daily operations, and an increasing number of organizations are investing in ERP systems in South Africa. There have been many implementations in the South African public sector, such as the SAP implementations at the City of Cape Town and the Tshwane Metropolitan Council. The implementation process is, however, described as costly, complex and risky, and firms are often unable to derive the benefits of these systems despite huge investments. Half of all ERP implementations fail to meet the adopting organizations' expectations [START_REF] Jasperson | Conceptualization of Post-Adoptive Behaviours Associated with Information Technology Enabled Work Systems[END_REF]. This has been attributed to the disruptive and threatening nature of ERP implementations [START_REF] Zorn | The Emotionality of Information and Communication Technology Implementation[END_REF], [START_REF] Robey | Learning to Implement Enterprise Systems: An Exploratory Study of the Dialectics of Change[END_REF]. This process can, however, be made less challenging and more effective through proper use of knowledge management (KM) throughout the ERP lifecycle phases. Managing ERP systems knowledge has been identified as a critical success factor and as a key driver of ERP success [START_REF] Leknes | The role of knowledge management in ERP implementation: a case study in Aker Kvaerner[END_REF]. An ERP implementation is a dynamic continuous improvement process, and "a key methodology supporting ERP continuous improvement would be knowledge management" [START_REF] Mcginnis | Incorporating of Knowledge Management into ERP continuous improvement: A research framework[END_REF].
Research Problem, Objective and Scope
Very little work has been conducted to date that assesses the practices and techniques employed to effectively explain the impact of KM in the ERP systems lifecycle [START_REF] Parry | The importance of knowledge management for ERP systems[END_REF], [START_REF] Sedera | Knowledge Management for ERP success[END_REF]. Current research in the context of KM focuses mostly on knowledge sharing and integration challenges during the actual ERP adoption process, offering only a static perspective of KM and ERP implementation [START_REF] Suraweera | Dynamics of Knowledge Leverage in ERP Implementation[END_REF], [START_REF] Gable | The enterprise system lifecycle: through a knowledge management lens[END_REF], [START_REF] Markus | Towards a Theory of Knowledge Reuse: Types of Knowledge Reuse Situations and Factors in Reuse Success[END_REF]. A number of organizations see the ERP go-live as the end of the cycle, and very little emphasis has been given to the post-implementation phases.
This research explores the ERP implementation lifecycle from a KM perspective within a South African context and aims to provide a comprehensive understanding of the role of KM practices during the ERP implementation lifecycle. One of the key objectives is to investigate the KM challenges faced by organizations while implementing ERP systems. This paper therefore presents findings on the KM challenges experienced during the implementation phase of an ERP system. It should be noted that the results discussed in this paper are an interpretation of the initial findings, which are still under review. This analysis will be further developed and elaborated in the subsequent research phases.
Literature Review
Enterprise Resource Planning Systems
An ERP system can be defined as "an information system that enables the integration of transaction-based data and business processes within and across functional areas in an enterprise" [START_REF] Parry | The importance of knowledge management for ERP systems[END_REF]. Some of the key enterprise functions that ERP systems support include supply chain management, inventory control, sales, manufacturing scheduling, customer relationship management, financial and cost management and human resources [START_REF] Sedera | Knowledge Management for ERP success[END_REF], [START_REF] Soffer | ERP modeling: a comprehensive approach[END_REF]. Despite the cost intensive, lengthy and risky process, the rate of implementation of ERP systems has increased over the years. Most of the large multinational organizations have already adopted ERPs as their de facto standard with the aim of increasing productivity, efficiency and organizational competitiveness [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Role and Challenges of Knowledge Management
KM is defined as an ongoing process in which knowledge is created, shared, transferred to those who need it, and made available for future use in the organization [START_REF] Chan | Knowledge management for implementing ERP in SMEs[END_REF]. Effective use of KM in ERP implementation has the potential to improve organizational efficiency during the ERP implementation process [START_REF] Leknes | The role of knowledge management in ERP implementation: a case study in Aker Kvaerner[END_REF]. Successful transfer of knowledge between the different ERP implementation stakeholders, such as the client, implementation partner and vendor, is important for the successful implementation of an ERP system.
Use of KM activities during the ERP implementation phase ensures reduced implementation costs, improved user satisfaction as well as strategic and competitive business advantages through effective product and process innovation during use of ERP [START_REF] Sedera | Knowledge Management for ERP success[END_REF]. Organizations should therefore be aware of and identify the knowledge requirement for any implementation. However, a number of challenges hindering the proper diffusion of KM activities during the ERP implementation phase have been highlighted. The following potential knowledge barriers have been identified by [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Knowledge Is Embedded in Complex Organizational Processes. ERP systems' capabilities and functionalities span different departments and involve many internal and external users, leading to a diversity of interests and competencies in specific knowledge areas. A key challenge is to overcome any conflicting interests so that knowledge can be integrated in a way that promotes standardization and transparency.
Knowledge Is Embedded in Legacy Systems. Users are reluctant to use the new system, constantly comparing its capabilities to those of the legacy systems. This is a prevalent mindset which needs to be anticipated, and [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF] suggest making the ERP system look outwardly similar to the legacy system through customization. This can be achieved by "integrating knowledge through mapping of the information, processes, and routines of the legacy system into the ERP systems with the use of conversion templates" [START_REF] Pan | Knowledge Integration as a key problem in an ERP Implementation[END_REF].
Knowledge Is Embedded in Externally Based Processes. ERP systems link external systems to internal ones; as a result, external knowledge from suppliers and consultants needs to be integrated in the system. This can be a tedious process, and the implementation team needs to ensure that essential knowledge is integrated from the initial implementation phases onwards through personal and working relationships.
Gaps in the Literature
The literature review indicates that most studies performed in the context of KM and ERP implementation offer a one-dimensional, static view of the actual ERP adoption phases without emphasizing the overall dynamic nature of ERP systems. Furthermore, previous studies have failed to provide a holistic view of the challenges, importance, different dimensions and best practices of KM during the whole ERP implementation cycle.
Research Method
Research Paradigm and Approach
This research employs an interpretive epistemology, which is well suited to a study focused on theory building, where the ERP implementation challenges faced by organizations are explored from a knowledge perspective [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF]. A qualitative rather than quantitative research method is deemed suitable, as qualitative research emphasizes non-positivist, non-linear and cyclical forms of inquiry, allowing the researcher to gain new insights into the research area with each iteration and thereby to reach a better understanding of the social world [START_REF] Leedy | Practical research: planning and design[END_REF], [START_REF] Strauss | Basics of Qualitative Research: Grounded Theory Procedure and Techniques[END_REF].
Grounded theory seems particularly applicable in the current context, as there has been no exhaustive analysis of the barriers, dimensions and role of KM across the whole ERP implementation lifecycle in organizations. Grounded theory as used in this research is an "inductive, theory-discovering methodology that allows the researcher to develop a theoretical account of the general features of a topic, while simultaneously grounding the account in empirical observations of data" [START_REF] Glaser | The Discovery of Grounded Theory: Strategies for Qualitative Research[END_REF], [START_REF] Orlikowski | CASE tools as organisational change: Investigating incremental and radical changes in systems development[END_REF].
Semi-Structured interviews targeting different ERP implementation stakeholders are being conducted in an organization currently in their ERP implementation phase. The aim is to interview as many participants as possible until theoretical saturation is achieved. Approval for this research has been obtained from the University of Cape Town's ethical committee. Participants have been asked to sign a voluntary participant consent form and their anonymity has been assured.
All the interviews have been recorded and transcribed. Iterative analysis of the collected data has enabled the researcher to understand and investigate the main research problems posed. The transcripts of the interviews have been read a number of times to identify, conceptualise, and categorise emerging themes.
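Although this coding is an interpretive activity, the bookkeeping behind it can be illustrated with a small sketch. The excerpts and code labels below are hypothetical examples shaped after the categories reported later in this paper, not actual study data.

```python
from collections import defaultdict

# Hypothetical open codes attached to transcript excerpts during analysis.
coded_excerpts = [
    ("trainers lacked key SAP skills", "Trainer's lack of technical knowledge"),
    ("unsure when the system goes live", "Lack of project knowledge"),
    ("training examples came from another branch", "Poor contextualization of knowledge"),
]

# Group excerpts under their emerging themes for later constant comparison.
themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[code].append(excerpt)

for theme, excerpts in themes.items():
    print(theme, "->", excerpts)
```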
Case Description
This section provides a brief overview of the case organization. Founded in 1923, the company has a number of branches throughout South Africa and employs over 39 000 people. The organization is currently rolling out the SAP Project Portfolio Management module across its different branches in the country. The project is in the implementation stage, and an organization-wide initial training involving the employees has already been conducted. The interviews were carried out in one of the organization's divisions in Cape Town, and purposive sampling was used to select the interviewees. All the chosen participants had been through the training and were impacted by the SAP implementation process.
Preliminary Findings
Preliminary research findings indicate several challenges with regards to KM in the ERP implementation phase. Most of the barriers identified were either directly or indirectly related to the inadequacies and inefficiencies of knowledge transfer. The section below provides a comprehensive account of the major challenges that have been identified.
Knowledge Management Challenges
Trainer's Lack of Process Knowledge. Interviewees mentioned that the training provided was inadequate in various ways. The trainers were not knowledgeable enough; they lacked key SAP skills and did not understand the process from the users' perspective. Since none of the trainers had any experience as end users of the system, there were some inconsistencies in their understanding of the new system from a user perspective. Ownership of roles and tasks was not clearly defined. The trainers also lacked the expertise to engage with the different problems that surfaced during the training, and there was no clarification of the information and process flow between the different departments and individuals as per their role definitions.
"However what makes it difficult is that the trainers do not work with the project. They do not know the process entirely and are not aware of what is happening in the background, they only collect data."
Trainer's Lack of Technical Knowledge. The technical knowledge and qualification of the trainers were put into question. The trainers were the admin support technicians who are experts in the current system the interviewees use but did not have enough expertise to deal with the upcoming ERP system. "I think they did not know the system themselves, I had been in training with them for the current program we use and they were totally 100% clued up. You could have asked them anything, they had the answers."
Interviewees' Lack of Technical Knowledge. Interviewees also struggled with the use and understanding of the ERP system. They found the user interface and navigation far more complex than those of their existing system. As a result, they were overcome with frustration and did not see the importance of the training. "I have not used the system before, so I do not understand it. We struggled with the complexity of the system. The number of steps we had to do made it worse. No one understood what and why we were doing most of the steps."
Lack of Knowledge on Need for Change. The interviewees did not understand the benefits of using SAP from a strategic perspective. They questioned the implementation of the new system as they felt their previous system could do everything they needed it to. They had never felt the need for a new system.
Lack of Project knowledge. Interviewees were unaware of the clear project objectives, milestones and deployment activities. The interviewees did not have any information regarding the status of the project activities. They were only aware of the fact that they had to be trained in SAP as this would be their new system in the future but did not exactly know by when they were required to start using the system. Some of them believed they were not near the implementation stage, and the training was only a pilot activity to test whether they were ready for implementation. However, others hoped that the implementation had been cancelled due to the number of problems experienced in the training sessions.
Poor Project Configuration Knowledge. Another key concern voiced related to the complexity of the ERP system as opposed to the existing system the participants are using. They have been working with the current system for a number of years and believed it operated in the most logical way, the same way as to how their minds would function. On the other hand, the ERP system was perceived as complex, the number of steps required to perform for a task seem to have increased drastically. This may be attributed to the lack of system configuration knowledge which could have been essential in substantially decreasing the number of steps required to perform a particular task.
Lack of Knowledge on Management Initiatives. The interviewees felt they did not have to use or understand the system until they got the 'go ahead' from top and middle management. Interviews indicated that top and middle management had not supported the initiative as yet. Interviewees had received no information or communication on planning, adoption and deployment of the new system from management; hence they showed no commitment towards using the new system.
Conclusions and Implications
This paper reports on preliminary findings based on the implementation activities of an ERP system in a large engineering company in Cape Town. The findings show a number of intra-organizational barriers to efficient knowledge transfer. Inadequate training, lack of technical and project knowledge, lack of management support and the absence of change management initiatives were cited as the major KM challenges. Other fundamental KM challenges concern process knowledge and the customization and contextualization of knowledge. It appears that, in a large organization with multiple branches throughout South Africa, understanding the process and contextualizing and customizing the training content from the users' perspective are key aspects to consider during an ERP implementation.
This research is still ongoing and the subsequent research phases focus on providing a holistic view of the role, different dimensions and best practices of KM during the entire ERP implementation cycle. Upon completion, this research will be of immediate benefit to both academics and practitioners.
From an academic perspective, this study will explore the whole ERP implementation lifecycle from a KM perspective, hence contributing to the existing body of knowledge in this area by attempting to offer a better explanation of the existing theories and frameworks. Since there has not been any study that looked at the entire lifecycle of ERP implementation through a KM perspective in South Africa, this research is unique in nature and is expected to break some new ground in South Africa, aiming to provide an advancement of knowledge in this particular field. Through a practical lens, this research should be of immediate benefit to large and medium organizations. The results of this study can also be useful and applicable to international companies with global user bases.
Lack of Knowledge on Change Management Initiatives. Managing change is arguably one of the primary concerns of ERP implementation, yet the analysis shows the lack of importance attributed to this area. Lack of proper communication channels and planning, coupled with the absence of change management initiatives, resulted in employees' confusion, instability and resistance, as shown by the quotes below. "We should not have used SAP at all, they should scrap it…If someone new came and asked me whether they should go for the training, I would tell them, try your best to get out of it."
Knowledge Dump (Information Overload). Information overload was another identified challenge. The training included people from different departments who are associated with different aspects of the process. As a result, the trainers covered various tasks related to various processes in one training session instead of focusing on the specific processes that the interviewees understood and were involved with. The participants became confused with regard to their role definition and the ownership of the different activities, and the trainers were unable to clear up this confusion. This caused a certain level of panic in the group; subsequently, participants lost interest in the training and regarded it as unproductive.
Poor Contextualization of Knowledge. Another concern raised was the lack of customization of the training materials and exercises used, resulting in a poor focus on local context. Interviewees could not relate to the training examples given, as they were based on the process flow from a different suburb. Interviewees said each suburb has its own way of operating and its own terms and terminologies. The fact that the examples used came from Johannesburg and not from Cape Town made it harder for the interviewees to understand the overall process. "The examples they used were from Joburg, so they work in a different way to us. The examples should have been customised to how we work in order for us to better understand the process."
| 21,464 | [ "1003587", "1003468" ] | [ "303907", "303907", "303907" ] |
01484689 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484689/file/978-3-642-36611-6_25_Chapter.pdf | Nuno Ferreira
email: nuno.ferreira@i2s.pt
Nuno Santos
email: nuno.santos@ccg.pt
Pedro Soares
email: psoares@ccg.pt
Ricardo J Machado
Dragan Gašević
email: dgasevic@acm.org
Transition from Process- to Product-level Perspective for Business Software
Keywords: Enterprise logical architecture, Information System Requirement Analysis, Design, Model Derivation
When there are insufficient inputs for a product-level approach to requirements elicitation, a process-level perspective is an alternative way of achieving the intended base requirements. We define a V+V process approach that supports the creation of the intended requirements, beginning in a process-level perspective and evolving to a product-level perspective through successive model derivation, with the purpose of creating context for the implementation teams. The requirements are expressed through models, namely logical architectural models and stereotyped sequence diagrams. These models, along with the entire approach, are validated using the architecture validation method ARID.
Introduction
A typical business software development project is coordinated so that the resulting product properly aligns with the business model intended by the leading stakeholders. The business model normally allows the requirements to be elicited by providing the product's required needs. In situations where organizations focused on software development are not capable of properly eliciting requirements for the software product, due to insufficient stakeholder input or uncertainty in defining a proper business model, process-level requirements elicitation is an alternative approach. The process-level requirements assure that the organization's business needs are fulfilled. However, it is absolutely necessary to assure that product-level (IT-related) requirements are perfectly aligned with process-level requirements and, hence, with the organization's business requirements.
One of the possible representations of an information system is its logical architecture [START_REF] Castro | Towards requirements-driven information systems engineering: the Tropos project[END_REF], resulting from a process of transforming business-level and technological-level decisions and requirements into a representation (model). It is necessary to promote an alignment between the logical architecture and other supporting models, like organizational configurations, products, processes, or behaviors. A logical architecture can be considered a view of a system composed of a set of problem-specific abstractions supporting functional requirements [START_REF] Azevedo | Refinement of Software Product Line Architectures through Recursive Modeling Techniques In[END_REF].
In order to properly support technological requirements that comply with the organization's business requirements, we present in this paper an approach composed of two V-Models [START_REF] Haskins | Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities[END_REF], the V+V process. The requirements are expressed through logical architectural models and stereotyped sequence diagrams [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF] in both a process- and a product-level perspective. The first execution of the V-Model acts in the analysis phase and regards a process-level perspective. The second execution of the V-Model regards a product-level perspective and enables the transition from analysis to design through the execution of the product-level 4SRS method [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF]. Our approach assures proper compliance between the process- and the product-level requirements through a set of transition steps between the two perspectives.
This paper is structured as follows: section 2 presents the V+V process; section 3 describes the method assessment through ARID; in section 4 we present an overview of the process-to product-level transition; in section 5 we compare our approach with other related works; and in section 6 we present the conclusions.
A V+V Process Approach for Information System's Design
At a macro-process level, the development of information systems can be regarded as a cascaded lifecycle, if we consider typical and simplified phases: analysis, design and implementation. We encompass our first V-Model (at process-level) within the analysis phase and the second V-Model (at product-level) in the transition between the analysis and the design. One of the outputs of any of our V-Models is the logical architectural model for the intended system. This diagram is considered a design artifact but the design itself is not restricted to that artifact. We have to execute a V+V process to gather enough information in the form of models (logical architectural model, B-type sequence diagrams and others) to deliver, to the implementation teams, the correct specifications for product realization.
Regarding the first V-Model, we note that it is executed from a process-level perspective. The way the term process is used in this approach could lead to misinterpretation. Since the term has different meanings depending on the context, in our process-level approach we acknowledge that: (1) real-world activities of a business software production process are the context for the problem under analysis; (2) in a software model context [START_REF] Conradi | Process Modelling Languages. Software Process: Principles, Methodology, and Technology[END_REF], a software process is composed of a set of activities related to software development, maintenance, project management and quality assurance. To define the scope of our work, and in line with these acknowledgments, we characterize our process-level perspective by: (1) being related to real-world activities (including business activities); (2) when related to software, having those activities encompass the typical software development lifecycle. Our process-level approach is characterized by the use of refinement (as one kind of functional decomposition) and the integration of system models. Activities and their interfaces in a process can be structured or arranged in a process architecture [START_REF] Browning | Modeling impacts of process architecture on cost and schedule risk in product development[END_REF].
Our V-Model approach (inspired by the "Vee" process model [START_REF] Haskins | Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities[END_REF]) suggests a roadmap for product design based on business needs elicited in an early analysis phase. The approach requires the identification of business needs; then, by successive artifact derivation, it is possible to transition from a business-level perspective to an IT-level perspective while aligning the requirements with the derived IT artifacts. Additionally, within the analysis phase, this approach assures the transition from business needs to requirements elicitation.
In this section, we present our approach, based on successive and specific artifacts generation. In the first V-Model (at the process-level), we use Organizational Configurations (OC) [START_REF] Evan | Toward a theory of inter-organizational relations[END_REF], A-type and B-type sequence diagrams [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF], (business) Use Case models (UCs) and a process-level logical architectural model. The generated artifacts and the alignment between the business needs and the context for product design can be inscribed into this first V-Model.
The presented approach encompasses two V-Models, hereafter referred to as the V+V process and depicted in Fig. 1. The first V deals with the process-level perspective, and its vertex is supported by the process-level 4SRS method detailed in [START_REF] Ferreira | Derivation of Process-Oriented Logical Architectures: An Elicitation Approach for Cloud Design[END_REF]. The execution of the process-level 4SRS method results in a validated architectural model, which creates context for product-level requirements elicitation and uncovers hidden requirements for the intended product design. The purpose of the first execution of the V-Model is to elicit requirements at a high business level in order to create context for product design; it can be considered a business elicitation method (like the Business Modeling discipline of RUP).
Fig. 1. The V+V process approach
The second execution of the V-Model is done from a product-level perspective, and its vertex is supported by the product-level 4SRS method detailed in [START_REF] Machado | Transformation of UML Models for Service-Oriented Software Architectures[END_REF]. The product-level V-Model gathers information from the context for product design (CPD) in order to create a new model referred to as Mashed UCs. Using the information present in the Mashed UCs model, we create A-type sequence diagrams, detailed in [START_REF] Machado | Requirements Validation: Execution of UML Models with CPN Tools[END_REF]. These diagrams are input for the creation of (software) Use Case Models that have associated textual descriptions of the requirements for the intended system. Using the 4SRS method in the vertex, we derive from those requirements a Logical Architectural model. Using a process identical to the one used in the process-level V-Model, we create B-type sequence diagrams and assess the Logical Architectural model.
The V-Model representation provides a balanced process representation and, simultaneously, ensures that each step is verified before moving on to the next. As seen in Fig. 1, the artifacts are generated based on the rationale and the information contained in previously defined artifacts, i.e., A-type diagrams are based on OCs, the (business) use case model is based on A-type sequence diagrams, the logical architecture is based on the (business) use case model, and B-type sequence diagrams comply with the logical architecture. The V-Model also assures validation of artifacts based on previously modeled artifacts (e.g., besides the logical architecture, B-type sequence diagrams are validated by A-type sequence diagrams). The aim of this manuscript is not to detail the inner execution of the V-Model, nor the rules that enable the transition from the process- to the product-level, but rather to present the overall V+V process within the macro-process of information systems development.
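Read operationally, one V execution is a chain of model transformations in which each artifact is derived from, and validated against, its predecessors. The sketch below is a schematic rendering of that chain under our own naming assumptions; the function names are placeholders and do not implement the actual 4SRS tabular transformations detailed in the cited work.

```python
# Schematic pipeline for one V execution: each step consumes the previous model.
# For the product-level V, the input would be the Mashed UC models instead of OCs.
def derive_a_type_diagrams(organizational_configurations): ...
def derive_use_case_model(a_type_diagrams): ...
def run_4srs(use_case_model): ...              # yields the logical architecture
def derive_b_type_diagrams(logical_architecture, a_type_diagrams): ...

def execute_v_model(organizational_configurations):
    a_diagrams = derive_a_type_diagrams(organizational_configurations)
    use_cases = derive_use_case_model(a_diagrams)
    architecture = run_4srs(use_cases)
    b_diagrams = derive_b_type_diagrams(architecture, a_diagrams)
    # B-type diagrams are then checked against the A-type diagrams and the
    # architecture during the ARID-based assessment.
    return architecture, b_diagrams
```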
In both V-Models, the assessment is made using an adaptation of ARID (presented in the next section) and by using B-type sequence diagrams to check whether the architectural elements present in the resulting Logical Architectural model are contained in the depicted scenarios.
The first V produces a process-level logical architecture (which can be considered the information system logical architecture); the second V produces a product-level logical architecture (which can be considered the business software logical architecture). In each V-Model, the models created in succession on the descending (left) side of the V represent the refinement of requirements and the creation of system specifications. On the ascending (right) side, the models represent the integration of the discovered logical parts and their involvement in a cross-side validation effort that contributes to the inner validation required for macro-process evolution.
V-Model Process Assessment with ARID
In both V-Model executions, the assessments that result from comparing A- and B-type sequence diagrams produce Issues documents. These documents are one of the outputs of the Active Reviews for Intermediate Designs (ARID) method [START_REF] Clements | Active Reviews for Intermediate Designs[END_REF][START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF] used to assess each V-Model execution. The ARID method is a combination of the Architecture Tradeoff Analysis Method (ATAM) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF] with Active Design Review (ADR) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF]. In turn, ATAM can be seen as an improved version of the Software Architecture Analysis Method (SAAM) [START_REF] Clements | Evaluating software architectures: methods and case studies[END_REF]. These methods conduct reviews of architectural decisions, namely of the quality attribute requirements and the degree to which specific quality goals are aligned with and satisfied. The ADR method targets architectures under development, performing evaluations on parts of the global architecture. These features made ARID our method of choice for evaluating the in-progress logical architecture and for helping to determine the need for further refinements, improvements, or revisions before assuming that the architecture is ready to be delivered to the teams responsible for implementation. This delivery is called the context for product implementation (CPI).
Fig. 2. Assessment of the V+V execution using ARID
In Fig. 2, we present the simplified interactions between the ARID-related models in the V+V process. In this figure, we can see the macro-process associated with both V-Models, the transition from one to the other (later detailed) and the ARID models that support the assessment of the V+V execution.
The Project Charter regards information that is necessary for the ongoing project and relates to project management terminology and content [START_REF]Project Management Institute: A Guide to the Project Management Body of Knowledge (PMBOK® Guide)[END_REF]. This document encompasses information regarding the project requirements in terms of human and material resources, skills, training, context for the project, stakeholder identification, amongst others. It explicitly contains principles and policies of the intended practice with people from different perspectives in the project (analysis, design, implementation, etc.). It also allows having a common agreement to refer to, if necessary, during the project execution.
The Materials document contains the information necessary for creating a presentation of the project. It comprises collected seed scenarios based on OCs (or Mashed UCs), A-type sequence diagrams and (business or software) Use Case Models. Parts of the Logical Architectural model are also incorporated in the presentation delivered to the stakeholders (including the software engineers responsible for implementation). The purpose of this presentation is to inform the team about the logical architecture, put the seed scenarios up for discussion, and create the B-type sequence diagrams based on the presented information.
The Issues document holds information regarding the evaluation of the presented logical architecture. If the logical architecture is positively assessed, we can assume that consensus has been reached to proceed within the macro-process. If not, the Issues document makes it possible to trigger a new iteration of the corresponding V-Model execution in order to adjust the previously resulting logical architecture and make the corrections necessary to comply with the seed scenarios. The main causes for such an adjustment are: (1) bad decisions made in the corresponding 4SRS method execution; (2) B-type sequence diagrams not complying with all the A-type sequence diagrams; (3) the created B-type sequence diagrams not covering the entire logical architecture; (4) the need to explicitly place a design decision in the logical architectural model, usually done by using a common architectural pattern and injecting the necessary information into the use case textual descriptions that are input for the 4SRS.
The adjustment of the logical architectural model (by iterating the same V-Model) suggests the construction of a new use case model or, in the case of a new scenario, the construction of new A-type sequence diagrams. The new use case model captures user requirements of the revised system under design. At the same time, through the application of the 4SRS method, it is possible to derive the corresponding logical architectural model.
Our application of common architectural patterns includes business, analysis, architectural and design patterns as defined in [START_REF] Azevedo | Systematic Use of Software Development Patterns through a Multilevel and Multistage Classification[END_REF]. By applying them as early as possible in development (in early analysis and design), it is possible to incorporate business requirements into the logical architectural model and, at the same time, to assure that the resulting model is aligned with the organization's needs and complies with the established non-functional requirements. The design patterns are used when there is a need to detail or refine parts of the logical architecture, which by itself triggers a new iteration of the V-Model.
In the second V, after being positively assessed by the ARID method, the business software logical architectural model is considered a final design artifact that must be divided into products (applications) for later implementation by the software teams.
Process-to Product-level Transition
As stated before, a process-level V-Model can be executed for business requirements elicitation purposes, followed by a product-level V-Model for defining the software functional requirements. The V+V process is useful for both kinds of stakeholders, organizations and technicians, but it is necessary to assure that the two perspectives properly reflect the same system. In order to assure an aligned transition between the process- and product-level perspectives in the V+V process, we propose a set of transition steps whose execution is required to create the Mashed UC model referred to in Fig. 1 and Fig. 2. The detail of the transition rules is the subject of future publications.
As in [START_REF] Azevedo | Refinement of Software Product Line Architectures through Recursive Modeling Techniques In[END_REF][START_REF] Machado | Refinement of Software Architectures by Recursive Model Transformations[END_REF], we propose the use of the 4SRS through recursive executions with the purpose of deriving a new logical architecture. The transition steps are structured as follows: (1) Architecture Partitioning, where the Process-level Architectural Elements (AEpc's) under analysis are classified by their computation execution context in order to define the software boundaries to be transformed into Product-level (software) Use Cases (UCpt's); (2) Use Case Transformation, where AEpc's are transformed into software use cases and actors that represent the system under analysis, through a set of transition patterns that must be applied as rules; (3) Original Actors Inclusion, where the original actors related to the use cases from which the process-level architectural elements were derived (in the first V execution) must be included in the representation; (4) an analysis of the model for redundancies; and (5) Gap Filling, where any requirement that is intended to be part of the design but is not yet represented is added in the form of use cases.
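Read as a procedure, these five transition steps form a small pipeline from the process-level logical architecture to a set of Mashed UC models. The sketch below is our schematic paraphrase of those steps; all helper names are placeholders, and the detailed transition patterns remain those defined by the cited rules.

```python
# Schematic paraphrase of the process-to-product transition; helpers are stubs.
def partition_by_execution_context(architecture): ...        # step 1
def transform_to_use_cases(partition): ...                    # step 2 (transition patterns)
def include_original_actors(use_cases, partition): ...        # step 3
def remove_redundancies(use_cases): ...                       # step 4
def fill_gaps(use_cases, missing_requirements): ...           # step 5

def derive_mashed_uc_models(process_level_architecture, missing_requirements):
    models = []
    for partition in partition_by_execution_context(process_level_architecture) or []:
        use_cases = transform_to_use_cases(partition)
        use_cases = include_original_actors(use_cases, partition)
        use_cases = remove_redundancies(use_cases)
        use_cases = fill_gaps(use_cases, missing_requirements)
        models.append(use_cases)
    return models  # one Mashed UC model per partition
```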
By defining these transition steps, we assure that the product-level (software) use cases (UCpt) are aligned with the architectural elements of the process-level logical architectural model (AEpc); i.e., the software use case diagrams reflect the needs of the information system logical architecture. Applying these transition rules to all the partitions of an information system logical architecture yields a set of Mashed UC models.
Comparison with Related Work
An important view considered in our approach regards the architecture. What is an architecture? The literature offers a plethora of definitions, but most agree that an architecture concerns both structure and behavior, is abstracted to the level of significant decisions, may conform to an architectural style, is influenced by its stakeholders and by the environment in which it is intended to be instantiated, and encompasses decisions based on some rationale or method.
It is acknowledged in software engineering that a complete system architecture cannot be represented using a single perspective [START_REF] Sungwon | Designing logical architectures of software systems[END_REF][START_REF] Kruchten | The 4+1 View Model of Architecture[END_REF]. Using multiple viewpoints, such as logical diagrams, sequence diagrams or other artifacts, contributes to a better representation of the system and, as a consequence, to a better understanding of it. Our stereotyped use of sequence diagrams adds more representational value to the specific model than, for instance, that presented in Kruchten's 4+1 perspective [START_REF] Kruchten | The 4+1 View Model of Architecture[END_REF]. This kind of representation also enables the testing of sequences of system actions that are meaningful at the software architecture level [START_REF] Bertolino | An explorative journey from architectural tests definition down to code tests execution[END_REF]. Additionally, the use of such stereotyped sequence diagrams in the first stage of the analysis phase (user requirements modeling and validation) provides a friendlier perspective for most stakeholders, making it easier for them to establish a direct correspondence between what they initially stated as functional requirements and what the model already describes.
Conclusions and Outlook
We presented an approach to create context for business software implementation teams in situations where requirements cannot be properly elicited. Our approach is based on successive model construction and the recursive derivation of logical architectures, and uses model derivation to create use cases based on high-level representations of the desired system interactions. The approach assures that validation tasks are performed continuously throughout the modeling process. It allows for validating: (1) the final software solution against the initially expressed business requirements; (2) the B-type sequence diagrams against the A-type sequence diagrams; (3) the logical architectures, by traversing them with B-type sequence diagrams. These validation tasks, specific to the V-Model, are the subject of a future publication.
It is well known that domain-specific needs, namely business needs, are a fast-changing concern that must be tackled. Process-level architectures must be structured so that potentially changing domain-specific needs remain local in the architecture representation. Our proposed V+V process encompasses the derivation of a logical architecture representation that is aligned with domain-specific needs, and any change made to those needs is reflected in the logical architectural model through successive derivation of the supporting models (OCs, A- and B-type sequence diagrams, and use cases). Additionally, traceability between those models is built in by construction and intrinsically integrated in our V+V process.
Acknowledgments
This work has been supported by project ISOFIN (QREN 2010/013837).
| 23,545 | [ "1002459", "1002460", "1002461", "991637", "1002440" ] | [ "486560", "486561", "486561", "300854", "486532" ] |
01484690 | en | [ "shs", "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484690/file/978-3-642-36611-6_3_Chapter.pdf | Julian Faasen
Lisa F Seymour
email: lisa.seymour@uct.ac.za
Joachim Schuler
email: joachim.schuler@hs-pforzheim.de
SaaS ERP adoption intent: Explaining the South African SME perspective
Keywords: Software as a Service, Cloud computing, Enterprise Resource Planning, SaaS ERP, South African SME, Information Systems adoption
This interpretive research study explores the intention to adopt SaaS ERP software within South African SMEs. Semi-structured interviews with participants from different industry sectors were performed, and seven multidimensional factors emerged explaining the current reluctance to adopt. While improved IT reliability and perceived cost reduction were seen as benefits, they were dominated by other concerns. Reluctance to adopt was attributed to systems performance and availability risk; sunk cost and satisfaction with existing systems; data security risk; loss of control and lack of vendor trust; and finally functionality fit and customization limitations. The findings provide new insights into the slow adoption of SaaS ERP in South Africa and provide empirically supported data to guide future research efforts. They can also be used by SaaS vendors to address perceived shortcomings of SaaS ERP software.
Introduction
Small and medium enterprises (SMEs) are major players in every economy and make a significant contribution to employment and Gross Domestic Product (GDP) [START_REF] Seethamraju | Adoption of ERPs in a medium-sized enterprise-A case study[END_REF]. In the past, many organizations were focused on local markets, but have been forced to respond to competition on a global level as well [START_REF] Shehab | Enterprise resource planning: An integrative review[END_REF]. The role of the SME in developing countries such as South Africa is considered critical in terms of poverty alleviation, employment creation and international competitiveness [START_REF] Berry | The Economics of Small, Medium and Micro Enterprises in South Africa, Trade and Industrial Policy Strategies[END_REF]. However, resource limitations have made it difficult for many smaller organizations to enter new markets and compete against their larger counterparts. Thus SMEs in all countries are forced to seek innovative ways to become more efficient and competitive within a marketplace rife with uncertainty. Adoption of Information Systems (IS) is viewed as a way for SMEs to become more competitive and to drive business benefits such as cost reduction, improved profitability, enhanced customer service, new market growth opportunities and more efficient operating relationships with trading partners [START_REF] Premkumar | A meta-analysis of research on information technology implementation in small business[END_REF]. Many organizations have adopted Enterprise Resource Planning (ERP) software in an attempt to achieve such benefits.
ERP software facilitates the integration of cross-functional business processes in order to improve operational efficiencies and business performance. If used correctly, ERP software can drive bottom-line results and enhance competitive advantage. Whilst most large organizations world-wide have managed to acquire ERP software [START_REF] Klaus | What is ERP? Information Systems Frontiers[END_REF], it has been reported that many SMEs have been unwilling to adopt ERP software due to the high cost and risk involved [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF]. However, an alternative to on-premise enterprise software has been made possible with the advent of the Software as a Service (SaaS) model.
SaaS as a subset of cloud computing involves the delivery of web-based software applications via the internet. SaaS is essentially an outsourcing arrangement, where enterprise software is hosted on a SaaS vendor's infrastructure and rented by customers at a fraction of the cost compared with traditional on-premise solutions. Customers access the software using an internet browser and benefit through lower upfront capital requirements [START_REF] Feuerlicht | SOA: Trends and directions[END_REF], faster deployment time [START_REF] Benlian | A transaction cost theoretical analysis of software-as-a-service (SAAS)-based sourcing in SMBs and enterprises[END_REF]; [START_REF] Deyo | Software as a service (SaaS): A look at the migration of applications to the web[END_REF], improved elasticity [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF], flexible monthly installments [START_REF] Armbrust | A view of cloud computing[END_REF] and more predictable IT budgeting [START_REF] Benlian | A transaction cost theoretical analysis of software-as-a-service (SAAS)-based sourcing in SMBs and enterprises[END_REF]; [START_REF] Hai | SaaS and integration best practices[END_REF]. Countering these benefits are concerns around software reliability, data security [START_REF] Hai | SaaS and integration best practices[END_REF]; [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF]; [START_REF] Kern | Application service provision: Risk assessment and mitigation[END_REF] and long-term cost savings [START_REF] Hestermann | Magic quadrant for midmarket and tier 2oriented ERP for product-centric companies[END_REF]. Customization limitations [START_REF] Chong | Architecture strategies for catching the long tail[END_REF] and integration challenges [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF] are considered major concerns relating to SaaS offerings. Furthermore, concerns relating to data security and systems availability have raised questions as to the feasibility of SaaS for hosting mission-critical software.
Despite the perceived drawbacks of SaaS, Gartner suggests that SaaS ERP solutions are attracting growing interest in the marketplace [START_REF] Hestermann | Magic quadrant for ERP for Product-Centric Midmarket Companies[END_REF]. Traditional ERP vendors such as SAP have begun expanding their product ranges to include SaaSbased offerings. The success of Salesforce's SaaS CRM solution provides further evidence that the SaaS model is capable of delivering key business functionality. However, the adoption of SaaS ERP software has been reported as slow [START_REF] Hestermann | Magic quadrant for ERP for Product-Centric Midmarket Companies[END_REF] and appears to be confined to developed countries. Despite the plethora of online content promoting the benefits of SaaS ERP software, there is a lack of empirical research available that explains the slow rate of adoption. Thus, the purpose of this study is to gain an understanding of the reluctance to adopt SaaS ERP software within South African SMEs. This research is considered important as SaaS is a rapidly growing phenomenon with widespread interest in the marketplace. Furthermore, this study aims to narrow the research gap by contributing towards much-needed empirical research into SaaS ERP adoption.
Literature Review
A number of pure-play SaaS vendors as well as traditional ERP providers are offering ERP software via the SaaS model. Krigsman [START_REF] Krigsman | The 2011 focus experts' guide to enterprise resource planning[END_REF] summarized the major SaaS ERP vendors and offerings and found that many are offering the major six core modules: Financial Management, Human Resources Management, Project Management, Manufacturing, Service Operations Management and Supply Chain Management.
However, according to Aberdeen Group, only nine SaaS vendors actually offered pure SaaS ERP software and services [START_REF] Wailgum | SaaS ERP Has Buzz, But Who Are the Real Players[END_REF]. A Forrsights survey found that 15% of survey participants were planning adoption of SaaS ERP by 2013 [START_REF] Kisker | ERP Grows Into The Cloud: Reflections From SuiteWorld[END_REF]. However, two-thirds of those firms were planning to complement their existing on-premise ERP software with a SaaS offering. Only 5% of survey participants planned to replace most/all of their on-premise ERP systems within 2 years (from the time of their survey). These findings provide evidence of the slow rate of SaaS ERP adoption. It should also be noted that popular SaaS ERP vendors such as Netsuite and Epicor were not yet providing SaaS ERP products in South Africa during the time of this study in 2011. Given the scarcity of SaaS ERP literature, a literature review of the factors potentially influencing this slow adoption was performed based on prior studies relating to on-premise ERP adoption, IS adoption, SaaS, ASP and IS outsourcing (Figure 1). The major factors identified are structured according to the Technology-Organization Environment (TOE) framework [START_REF] Tornatzky | The process of technological innovation[END_REF]. For parsimonious reasons only these factors that were confirmed from our results are discussed in the results section of this paper.
Research Method
The primary research question was to identify why South African SMEs are reluctant to consider the adoption of SaaS ERP. Given the lack of research available an inductive interpretive and exploratory approach was deemed appropriate. The study also contained deductive elements as past research was used to generate an initial model. Walsham [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF] posits that past theory in interpretive research is useful as a means of creating a sensible theoretical basis for informing the initial empirical work. To reduce the risk of relying too heavily on theory, a significant degree of openness to the research data was maintained through continual reassessment of initial assumptions [START_REF] Walsham | Interpretive case studies in IS research: Nature and method[END_REF].
Non-probability purposive sampling [START_REF] Saunders | Research methods of business students[END_REF] was used to identify suitable organizations to interview and ethics approval from the University was obtained prior to commencing data collection. The sample frame consisted of South African SMEs with between 50 and 200 employees [START_REF]Small Business Act 102[END_REF]. One participating organization contained 250 employees and was included due to difficulties finding appropriate interview candidates. SMEs in different industry segments were targeted to increase representation. Furthermore, SMEs that operated within traditional ERP-focussed industries (e.g. manufacturing, logistics, distribution, warehousing and financial services, etc.) were considered to improve the relevance of research findings. The majority of participants interviewed were key decision makers within their respective organizations to accurately reflect the intention to adopt SaaS ERP software within their respective organizations. Table 1 provides a summary of company and participant demographics. Data was collected using semi-structured interviews with questions which were initially guided by a priori themes extracted from the literature review. However, the researcher practised flexibility by showing a willingness to deviate from the initial research questions in order to explore new avenues [START_REF] Myers | The qualitative interview in IS research: Examining the craft[END_REF].
Data analysis was conducted using the general inductive approach, where research findings emerged from the significant themes in the raw research data [START_REF] Thomas | A general inductive approach for analyzing qualitative evaluation data[END_REF]. To enhance the quality of analysis member checking, thick descriptions, code-recode and audit trail strategies [START_REF] Anfara | Qualitative analysis on stage: Making the research process more public[END_REF] were employed.
During interviews, it was apparent that the term "ERP" was sometimes used to represent functionality provided by a number of disparate systems. Thus the term ERP was used in terms of how the participant's companies used their business software collectively to fulfil the role of ERP software. Table 2 below provides an overview of the software landscape for each of the companies interviewed. Companies used a combination of off-the-shelf, bespoke, vertical ERP or modular ERP applications.
In this study, intention to adopt SaaS ERP software is defined as the degree to which the organization (SME) considers replacing all or most of their on-premise enterprise software with SaaS ERP software. SaaS ERP was defined as web-based ERP software that is hosted by SaaS ERP vendors and delivered to customers via the internet. The initial engagement with participants focussed primarily on multi-tenant SaaS ERP offerings, implying that a single instance of the ERP software would be shared with other companies. At the time of this study SaaS ERP was not easily available from vendors in South Africa. Irrespective of the availability, none of the companies interviewed had an intention of adopting SaaS ERP software in the future. However, one participant suggested a positive intention towards adoption of SaaS applications: "Microsoft CRM is available on the SaaS model...that's the way companies are going and we are seriously considering going that way" (Participant B). His company was in the process of planning a trial of SaaS CRM software. However, Participant B's organization was also in the process implementing on-premise ERP software. The findings are inconsistent with global Gartner and Forrsights surveys which reported a willingness and intention to adopt SaaS ERP software within small and mid-sized organizations [START_REF]SaaS ERP: Trends and observations[END_REF]; [START_REF] Kisker | ERP Grows Into The Cloud: Reflections From SuiteWorld[END_REF].
The main objective of this research was to explore the factors that impacted the reluctance to consider SaaS ERP software adoption within South African SMEs. The following 7 themes emerged and are discussed in the following sections:
1. Perceived cost reduction (driver)
2. Sunk cost and Satisfaction with existing system (inhibitor)
3. Systems performance and availability risk (inhibitor)
4. Improved IT reliability (driver)
5. Data security risk (inhibitor)
6. Loss of control and Vendor trust (inhibitor)
7. Functionality Fit and Customization Limitations (inhibitor)
Perceived cost reduction
In line with the literature cost reductions were envisaged in terms of initial hardware and infrastructure [START_REF] Kaplan | SaaS survey shows new model becoming mainstream[END_REF]; [START_REF] Torbacki | SaaS-direction of technology development in ERP/MRP systems[END_REF]; [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF] and were perceived as having a positive effect on intention to adopt SaaS ERP. However, participants also referred to the high cost of maintaining their on-premise ERP applications and potential long term operational cost savings with SaaS ERP. "..it's the ongoing running costs, support and maintenance, that makes a difference" (Participant B). However, these high costs were often justified in terms of the value that their onpremise systems provided: "...if it's considered important then cost is very much a side issue" (Participant D).
Sunk cost and Satisfaction with existing systems
The intention to adopt SaaS ERP was negatively affected by sunk cost and satisfaction with their existing systems. This was the 2nd most dominant theme. Sunk cost represents irrecoverable costs incurred during the acquisition and evolution of their existing IT systems.
"...if you're company that's got a sunk cost in ERP...the hardware and staff and training them up...what is the benefit of moving across to a SaaS model?" (A1).
"...if we were starting today with a clean slate, with not having a server room full of hardware, then definitely...SaaS would be a good idea" (D) Satisfaction with existing systems relates to the perception of participants that their existing enterprise software was fit for purpose.
"...whenever you've got a system in place that ticks 90% of your boxes and it's reliable...why change, what are we going to gain, will the gain be worth the pain and effort and the cost of changing" (A1). The effect of sunk costs towards SaaS ERP adoption intent could not be verified within academic literature but is consistent with the 2009 Aberdeen Group survey, where organizations showed reluctance towards adoption due to past investment in IT [START_REF]SaaS ERP: Trends and observations[END_REF]. Both sub-themes were also related to a lack of perceived benefits towards changing to alternatives such as SaaS ERP.
"
…you're constantly investing in the current system and you're depreciating those costs over three, five, years. So… if you've got those sunk costs…even if you could save 30% you'd have to weigh it up around the investment" (A1).
This is in agreement with research which states that organizations adopt technology innovations only if they consider the technology to be capable of addressing a perceived performance gap or to exploit a business opportunity [START_REF] Premkumar | Adoption of new information technologies in rural small businesses[END_REF].
System performance and availability risk
Concerns over systems performance and availability risk were the dominant reasons for the reluctance to adopt SaaS ERP. This was commented on by all participants. Systems performance and availability risk concerns were primarily related to bandwidth concerns in South Africa. More specifically, bandwidth cost, internet latency limitations and bandwidth reliability (uptime) were considered factors which impacted the performance and availability of SaaS ERP solutions, thus impacting adoption intent. These findings are in line with literature which suggests that systems performance and availability concerns have a negative impact on ASP adoption [START_REF] Lee | Determinants of success for application service provider: An empirical test in small businesses[END_REF] and SaaS adoption [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF].
"The cheapest, I suppose is the ADSL, with 4MB lines, but they tend to fall over, cables get stolen" (Participant D).
"They can't guarantee you no downtime, but I mean there are so many factors locally that they've got no control of. You know, you have a parastatal running the bulk of our bandwidth system" (E) Systems performance and availability was associated with the risk of losing access to mission-critical systems and the resulting impact on business operations. Although bandwidth has become cheaper and more reliable in South Africa over the past decade, organizations and SaaS vendors are still faced with a number of challenges in addressing the risks associated with performance and availability of SaaS ERP software.
Improved IT Reliability
Most participants felt that SaaS ERP would be beneficial as a means of providing them with improved reliability of their core business software due to sophisticated platform technology, regular software updates, more effective backups and better systems redundancy. These sub-themes were considered major benefits of SaaS ERP software for SMEs interviewed. The perceived benefits of redundancy, backing up and received software updates were expressed as follows:
"I think it will be a safer option ...if they've got more expensive infrastructure with redundancy built in" (C1).
"...the other advantage is in terms of backing up and protecting of data…at least that becomes somebody else's responsibility" (E). "...it's probably more often updated...because it's been shared across a range of customers; it has to really be perfect all the time" (A1).
The benefit of improved IT reliability becomes more evident when one considers many SMEs often lack the required skills and resources to manage their on-premise enterprise systems effectively [START_REF] Kamhawi | Enterprise resource-planning systems adoption in Bahrain: Motives, benefits, and barriers[END_REF]; [START_REF] Ramdani | SMEs & IS innovations adoption: A review and assessment of previous research[END_REF] thus making on-demand sourcing models such as SaaS more attractive: "...having ERP software in-house that you maintain…does come with huge human resource constraint's." and "I'm not in the business of managing ERP systems, I'm in the business of book publishing and distribution...SaaS ERP makes all the sense in the world...you focus on just using it for your business rather than you run the product as well" (A1).
Data Security Risk
Data security concerns were the fourth most dominant explanation and were related to concerns around the security and confidentiality of business information hosted on SaaS vendor infrastructure. Senior management provided the majority of responses. Data security concerns related to external hacking, risks from inside the SaaS vendor environment and from other clients sharing the infrastructure.
"...somebody somewhere at some level has got to have access to all of that information and it's a very off-putting factor for us" (E). "they've got a large number of other clients accessing the same servers" (D)
This confirms data security risk as of the major inhibitors of SaaS ERP adoption [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF], [START_REF] Hai | SaaS and integration best practices[END_REF], [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF]; [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF]. Issues relating to vendor control over privileged access and segregation of data between SaaS tenants [START_REF] Brodkin | Gartner: Seven cloud-computing security risks[END_REF] appear to be strong concerns. Whilst SaaS vendors claim that their solutions are more secure, SaaS is generally considered suitable for applications with low data security and privacy concerns [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF].
Ensuring that sufficient data security mechanisms are in place is also critical in terms of regulatory compliance when moving applications into the cloud [START_REF] Armbrust | A view of cloud computing[END_REF]. South African organizations would also need to consider the new Protection of Personal Information Act.
Loss of Control and Lack of Vendor Trust
A number of participants associated SaaS ERP with a loss of control over their software and hardware components. They also raised concerns around trusting vendors with their mission-critical software solutions. This was the 3rd most dominant theme, with the majority of responses coming from senior management: "...if they decide to do maintenance...there's nothing we can do about it...you don't have a choice" (C2).
"...they sort of cut corners and then you end up getting almost a specific-to-SLA type of service" (A2). "Obviously the disadvantage is the fact that you are putting a lot of trust in another company and you've got to be sure that they are going to deliver because your entire business now is running on the quality of their staff, their turnaround times" (A1).
Participants felt that being reliant on vendors introduced risk that may affect the performance, availability and security of their mission critical applications. This is related to literature suggesting that organizations prefer in-house systems due to the risk of losing control over mission critical applications [START_REF] Benlian | Drivers of SaaS-Adoption-An empirical study of different application types[END_REF]. The linkage between lack of vendor trust and two other themes, systems performance and availability risk and data security risk, are consistent with Heart's [START_REF] Heart | Who Is out there? Exploring Trust in the Remote-Hosting Vendor Community[END_REF] findings.
In this study, systems performance and availability risk was primarily related to bandwidth constraints (cost, internet latency and reliability). Thus, in the context of this study, the vendor trust aspect is very much related to SaaS vendors to ensure data security and ISPs to ensure internet connectivity uptime.
Functionality Fit and Customization Limitations
Functionality fit refers to the degree to which ERP software matches the organization's functionality requirements. This was the least dominant concern, with three participants raising concerns around the lack of flexibility of SaaS ERP software and the limited ability to customize it: "...it's got enhanced modules like book production....it gets quite complex, so that's for instance one of the modules that's quite niche that you don't get in typical ERP...I think if you were starting from scratch and you had nothing, the benefit would be that if we put (current ERP software) in, the product and the people who put it in for you understand the industry whereas...but would there be anyone within SAP or Oracle who really understands the book industry?" (A).
"I think the disadvantages are flexibility...most of them won't allow too much of customization" (B).
"They do have a certain amount of configurability in the program...but when it comes down to the actual software application, they (ERP vendor) say this is what you get...and if you want to change, that's fine but then we'll make the change available to everybody...so you lose your competitive advantage" (D). Functionality fit is considered an important factor which effects on-premise ERP software adoption [START_REF] Buonanno | Factors affecting ERP system adoption: A comparative analysis between SMEs and large companies[END_REF] [START_REF] Markus | The enterprise systems experience-from adoption to success[END_REF]. There are a limited number of vendors providing pure SaaS ERP software services [START_REF] Ramdani | SMEs & IS innovations adoption: A review and assessment of previous research[END_REF] and SaaS ERP vendors are providing core ERP modules that cater for a wider market segment [START_REF] Krigsman | The 2011 focus experts' guide to enterprise resource planning[END_REF]. However, niche organizations that require highly specific functionality may find SaaS ERP software unsuitable, since the SaaS ERP business process logic may not fit their organization's functionality requirements.
Customization of ERP software is viewed as a means of accommodating the lack of functionality fit between the ERP software and the organization's functionality requirements, however, customization is limited within multi-tenancy SaaS ERP software [START_REF] Xin | Software-as-a Service Model: Elaborating Client-Side Adoption Factors[END_REF]; [START_REF] Chong | Architecture strategies for catching the long tail[END_REF].
Organizations could adopt SaaS ERP to fulfil standard functionality (accounting, warehousing, etc) whilst retaining in-house bespoke software to deliver specific functionality required but then integration complexity could become an issue. Various integration options are available for SaaS users. Platform as a service (PaaS) solutions provided by SalesForce.com (using Force.com and AppExchange) provide organizations with opportunities for purchasing 3rd party plugins that address integration needs [START_REF] Deyo | Software as a service (SaaS): A look at the migration of applications to the web[END_REF]. However, changes to the SaaS software (e.g. software upgrades or customization) could break 3rd party interfaces [START_REF] Hai | SaaS and integration best practices[END_REF]. Alternatively, organizations can make use of the standard web application programming interfaces (APIs) provided by the SaaS solution providers [START_REF] Chong | Architecture strategies for catching the long tail[END_REF]; [START_REF] Hai | SaaS and integration best practices[END_REF]. This enables SaaS vendors to continuously provide updates to functionality without breaking existing integrations [START_REF] Hai | SaaS and integration best practices[END_REF]. However, these integration solutions have raised concerns around data security since multiple customers are transacting via the same web APIs [START_REF] Sun | Software as a service: An integration perspective[END_REF].
The purpose of this research was to investigate reluctance by South African SMEs to consider the SaaS ERP business model. The following 7 themes emerged, in order from most significant to least, based on the participant perceptions, personal experience and organizational context (Figure 2).
1. Systems performance and availability risk (inhibitor)
2. Sunk cost and Satisfaction with existing system (inhibitor)
3. Loss of control and Vendor trust (inhibitor)
4. Data security risk (inhibitor)
5. Improved IT reliability (driver)
6. Perceived cost reduction (driver)
7. Functionality Fit and Customization Limitations (inhibitor)

Reluctance to adopt SaaS ERP was predominantly attributed to system performance and availability risk; data security risk; and loss of control and lack of vendor trust. Furthermore, loss of control and lack of vendor trust was found to increase the risks associated with systems performance and availability and the risks associated with data security. Thus organizations believed that in-house systems afforded them more control over their mission-critical software. The presence of sunk costs appeared to negatively affect their perceptions towards the degree of cost reduction gains on offer with SaaS ERP software. Satisfaction with existing systems was associated with a lack of perceived benefits towards SaaS ERP software (why should we change when our current systems work?).
There was an acknowledgement that the SaaS ERP model would provide improved IT reliability but it also would come with reduced functionality fit and customization limitations.
Lack of control and vendor trust concerns dominate in the South African environment and this is exacerbated by high risks of unavailability attributed to the poor network infrastructure of the country. Concerns regarding cable theft were even reported. The findings in this study are not necessarily representative of all organizations in South Africa and due to the lack of SaaS ERP vendor presence in South Africa, it is reasonable to assume that South African organizations lack sufficient awareness around SaaS ERP software capabilities and this may have introduced a significant degree of bias.
By providing empirically supported research into SaaS ERP adoption, this research has attempted to narrow the research gap and to provide a basis for the development of future knowledge and theory. SaaS vendors in particular may be able to benefit through comparing these findings with their own surveys and establishing new and innovative ways to address the inhibitors of SaaS ERP adoption intent.
These research findings suggest similarities between the satisfaction with existing systems factor and the diffusion of innovations (DOI) model construct "relative advantage". Other data segments (not included within this paper) also suggest a possible relationship with two other DOI constructs "observability" and "trialability". Therefore the use of DOI theory for future research into SaaS ERP adoption might improve understanding.
Fig. 1. Model derived from the broad literature.
Fig. 2. An explanation of SME reluctance to adopt SaaS ERP. Negative effects are indicated by a negative sign (-) and positive effects by a positive sign (+).
Table 1. Company and Participant Demographics.

Company code  Participant code  Position                 Experience  Industry                        Employees
A             A1                Digital Director         10 years +  Book publishing & distribution  250
A             A2                IT Operations Manager    17 years    Book publishing & distribution  250
B             B                 Head of IT               20 years +  Financial Services              120
C             C1                Chief Operating Officer  20 years +  Specialized Health Services     50
C             C2                IT Consultant            7 years +   Specialized Health Services     50
D             D                 Financial Director       20 years +  Freight Logistics Provider      200
E             E                 Managing Director        20 years +  Medical Distribution            137
Table 2. Software landscape for companies interviewed.

Current Software Landscape                  A    B    C    D    E
Using industry-specific ERP software        Yes  No   No   No   No
Using component-based ERP software          No   No   Yes  Yes  Yes
Using off-the-shelf software                Yes  Yes  Yes  Yes  Yes
Using Bespoke (customized) software         Yes  Yes  Yes  Yes  No
Implementation of ERP software in progress  No   Yes  No   No   No
| 32,816 | [
"1003468",
"1003469"
] | [
"303907",
"303907",
"487694"
] |
01484691 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484691/file/978-3-642-36611-6_5_Chapter.pdf | P Pytel
email: ppytel@gmail.com
P Britos
email: paobritos@gmail.com
R García-Martínez
A Proposal of Effort Estimation Method for Information Mining Projects Oriented to SMEs
Keywords: Effort Estimation method, Information Mining, Small and Mediumsized Enterprises, Project Planning, Software Engineering
Software projects need to predict cost and effort, with the associated quantity of resources, at the beginning of every project. Information Mining projects are no exception to this requirement, particularly when they are required by Small and Medium-sized Enterprises (SMEs). The existing estimation method for Information Mining projects is not reliable for small-sized projects because it tends to overestimate the required effort. Therefore, considering the characteristics of these projects developed with the CRISP-DM methodology, an estimation method oriented to SMEs is proposed in this paper. First, the main features of SMEs' projects are described and applied as cost drivers of the new method, together with the corresponding formula. The method is then validated by comparing its results to those of the existing estimation method using real SME projects. As a result, it can be seen that the proposed method produces a more accurate estimation than the existing method for small-sized projects.
Introduction
Information Mining consists in the extraction of non-trivial knowledge which is located (implicitly) in the available data from different information sources [START_REF] Schiefer | Process Information Factory: A Data Management Approach for Enhancing Business Process Intelligence[END_REF]. That knowledge is previously unknown and it can be useful for some decision making process [START_REF] Stefanovic | Supply Chain Business Intelligence Model[END_REF]. Normally, for an expert, the data itself is not the most relevant but it is the knowledge included in their relations, fluctuations and dependencies. Information Mining Process can be defined as a set of logically related tasks that are executed to achieve [START_REF] Curtis | Process Modelling[END_REF], from a set of information with a degree of value to the organization, another set of information with a greater degree of value than the initial one [START_REF] Ferreira | Integration of Business Processes with Autonomous Information Systems: A Case Study in Government Services[END_REF]. Once the problem and the customer's necessities are identified, the Information Mining Engineer selects the Information Mining Processes to be executed. Each Information Mining Process has several Data Mining Techniques that may be chosen to carry on the job [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF]. Thus, it can be said that, Data Mining is associated to the technology (i.e. algorithms from the Machine Learning's field) while Information Mining is related to the processes and methodologies to complete the project successfully. In other words, while Data Mining is more related to the development tasks, Information Mining is closer to Software Engineering activities [START_REF] García-Martínez | Towards an Information Mining Engineering[END_REF]. However, not all the models and methodologies available in Software Engineering can be applied to Information Mining projects because they do not handle the same practical aspects [START_REF] Rodríguez | Estimación Empírica de Carga de Trabajo en Proyectos de Explotación de Información[END_REF]. Therefore, specific models, methodologies, techniques and tools need to be created and validated in order to aid the Information Mining practitioners to carry on a project.
As in every software project, Information Mining projects begin with a set of activities that are referred to as project planning. This requires predicting the effort with the necessary resources and associated cost. Nevertheless, the usual effort estimation methods applied in Conventional Software Development projects cannot be used in Information Mining projects because the characteristics considered are different. For example COCOMO II [START_REF] Boehm | Software Cost Estimation with COCOMO II[END_REF], one of the most widely used estimation methods for Conventional Software projects, uses the quantity of source code lines as a parameter. This is not useful for estimating an Information Mining project because the data mining algorithms are already available in commercial tools, and therefore it is not necessary to develop software. Estimation methods for Information Mining projects should use more representative characteristics, such as the quantity of data sources, the level of integration within the data and the type of problem to be solved. In that respect, only one specific analytical estimation method for Information Mining projects has been found after a documentary research. This method, called Data Mining Cost Model (or DMCoMo), is defined in [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF]. However, from a statistical analysis of DMCoMo performed in [START_REF] Pytel | Estudio del Modelo Paramétrico DMCoMo de Estimación de Proyectos de Explotación de Información[END_REF], it has been found that this method tends to overestimate the effort, principally in small-sized projects that are usually required by Small and Medium-sized Enterprises [START_REF] García-Martínez | Ingeniería de Proyectos de Explotación de Información para PYMES[END_REF].
In this context, the objective of this paper is proposing a new effort estimation method for Information Mining projects considering the features of Small and Medium-sized Enterprises (SMEs). First, the estimation method DMCoMo is described (section 2), and the main characteristics of SMEs' projects are identified (section 3). Then an estimation method oriented to SMEs is proposed (section 4) comparing its results to DMCoMo method using real projects data (section 5). Finally, the main conclusions and future research work are presented (section 6).
DMCoMo Estimation Method
Analytical estimation methods (such as COCOMO) are constructed by applying regression methods to the available historical data in order to obtain mathematical relationships between the variables (also called cost drivers), which are formalized through mathematical formulas used to calculate the estimated effort. DMCoMo [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF] defines a set of 23 cost drivers to perform the cost estimation, which are associated with the main characteristics of Information Mining projects. These cost drivers are classified into six categories, which are included in table 1 as specified in [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF]. Once the values of the cost drivers are defined, they are introduced into the mathematical formulas provided by the method. DMCoMo has two formulas which have been defined by linear regression with the information of 40 real projects of different business types (such as marketing, meteorological and medical projects). The first formula uses all 23 cost drivers as variables (formula named MM23) and should be used when the project is well defined, while the second formula only uses 8 cost drivers (MM8) and should be used when the project is partially defined. As a result of introducing the values into the corresponding formula, the quantity of men x month (MM) is calculated. However, as pointed out by the authors, the behaviour of DMCoMo in projects outside the 90 to 185 men x month range is unknown. From a statistical analysis of its behaviour performed in [START_REF] Pytel | Estudio del Modelo Paramétrico DMCoMo de Estimación de Proyectos de Explotación de Información[END_REF], DMCoMo always tends to overestimate the estimated effort (i.e. all project estimations are always bigger than 60 men x month). Therefore, DMCoMo could be used in medium and big-sized projects but it is not useful for small-sized projects. As these are the projects normally required by Small and Medium-sized Enterprises, a new estimation method for Information Mining projects is proposed considering the characteristics of small-sized projects.
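To illustrate how a parametric method of this kind is applied in practice, the sketch below evaluates a generic linear cost model once the cost driver ratings have been fixed. The driver subset, coefficients and intercept used here are invented placeholders for illustration only; they are not the published DMCoMo coefficients for MM8 or MM23.

```python
# Minimal sketch of applying a DMCoMo-style linear cost model.
# The coefficients and intercept below are invented placeholders,
# not the published DMCoMo regression coefficients.

def linear_cost_model(ratings, coefficients, intercept):
    """Estimated effort (men x month) as a linear combination of cost drivers."""
    return intercept + sum(coefficients[driver] * value
                           for driver, value in ratings.items())

# Hypothetical reduced model using three of the DMCoMo cost driver names.
placeholder_coefficients = {"NTAB": 2.5, "DISP": 1.8, "MFAM": -0.9}
project_ratings = {"NTAB": 1, "DISP": 2, "MFAM": 3}

effort_mm = linear_cost_model(project_ratings, placeholder_coefficients,
                              intercept=60.0)
print(f"Estimated effort: {effort_mm:.1f} men x month")
```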
SMEs' Information Mining Projects
According to the Organization for Economic Cooperation and Development (OECD) Small and Medium-sized Enterprises (SMEs) and Entrepreneurship Outlook report [START_REF]Organization for Economic Cooperation and Development: OECD SME and Entrepreneurship Outlook[END_REF]: "SMEs constitute the dominant form of business organization in all countries world-wide, accounting for over 95 % and up to 99 % of the business population depending on country". However, although the importance of SMEs is well known, there is no universal criterion to characterise them. Depending on the country and region, there are different quantitative and qualitative parameters used to recognize a company as SMEs. For instance, at Latin America each country has a different definition [START_REF] Álvarez | Manual de la Micro[END_REF]: while Argentina considers as SME all independent companies that have an annual turnover lower than USD 20,000 (U.S. dollars maximum amount that depends on the company's activities), Brazil includes all companies with 500 employees or less. On the other hand, the European Union defines as SMEs all companies with 250 employees or less, assets lower than USD 60,000 and gross sales lower than USD 70,000 per year. In that respect, International Organization for Standardization (ISO) has recognized the necessity to specify a software engineering standard for SMEs and thus it is working in the ISO/IEC 29110 standard "Lifecycle profiles for Very Small Entities" [START_REF]International Organization for Standardization: ISO/IEC DTR 29110-1 Software Engineering -Lifecycle Profiles for Very Small Entities (VSEs) -Part 1: Overview[END_REF]. The term 'Very Small Entity' (VSE) was defined by the ISO/IEC JTC1/SC7 Working Group 24 [START_REF] Laporte | Developing International Standards for VSEs[END_REF] as being "an entity (enterprise, organization, department or project) having up to 25 people".
From these definitions (and our experience), in this paper an Information Mining project for SMEs is demarcated as a project performed at a company of 250 employees or less (at one or several locations) where the high-level managers (usually the company's owners) need non-trivial knowledge extracted from the available databases to solve a specific business problem with no special risks at play. As the company's employees usually do not have the necessary experience, the project is performed by contracted outsourced consultants. From our experience, the project team can be restricted up to 25 people (including both the outsourced consultants and the involved company staff) with maximum project duration of one year.
The initial tasks of an Information Mining project are similar to those of a Conventional Software Development project. The consultants need to elicit both the necessities and desires of the stakeholders, and also the characteristics of the available data sources within the organization (i.e. existing data repositories). Although the outsourced consultants must have a minimum of knowledge and experience in developing Information Mining projects, they may or may not have experience in similar projects in the same business type, which could facilitate the tasks of understanding the organization and its related data. As the data repositories are often not properly documented, the organization's experts should be interviewed. However, experts are normally scarce and reluctant to get involved in the elicitation sessions. Thus, the willingness of the personnel and the supervisors is required to identify the correct characteristics of the organization and the data repositories. As the project duration is quite short and the structure of the organization is centralized, it is considered that the elicited requirements will not change.
On the other hand, the Information and Communication Technology (ICT) infrastructure of SMEs is analysed. In [START_REF] Ríos | El Pequeño Empresario en ALC, las[END_REF] it is indicated that more than 70% of Latin America's SMEs have an ICT infrastructure, but only 37% have automated services and/or proprietary software. Normally, commercial off-the-shelf software is used (such as spreadsheet managers and document editors) to register the management and operational information. The data repositories are not large (from our experience, less than one million records) but are implemented in different formats and technologies. Therefore, the data formatting, data cleaning and data integration tasks will require considerable effort if there are no software tools available to perform them, because ad-hoc software would have to be developed to implement these tasks.
Proposed Effort Estimation Method Oriented to SMEs
For specifying the effort estimation method oriented to SMEs, first, the cost drivers used to characterize a SMEs' project are defined (section 4.1) and then the corresponding formula is presented (section 4.2). This formula has been obtained by regression using real projects information. From 44 real information mining projects available, 77% has been used for obtaining the proposed method's formula (section 4.2) and 23% for validation of the proposed method (section 5). This means that 34 real projects have been used for obtaining the formula and 10 projects for validation.
These real Information Mining projects have been collected by researchers from the Information Systems Research Group of the National University of Lanus (GISI-DDPyT-UNLa), the Information System Methodologies Research Group of the Technological National University at Buenos Aires (GEMIS-FRBA-UTN), and the Information Mining Research Group of the National University of Rio Negro at El Bolson (SAEB-UNRN). It should be noted that all these projects had been performed applying the CRISP-DM methodology [START_REF] Chapman | CRISP-DM 1.0 Step by step BI guide Edited by SPSS[END_REF]. Therefore, the proposed estimation method can be considered reliable only for Information Mining projects developed with this methodology.
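As an illustration of how a formula of this kind can be derived, the sketch below fits a linear model by ordinary least squares to a set of historical project records, holding back roughly 23% of them for validation. The project data generated here is synthetic and only stands in for the 44 collected projects, whose individual values are not reproduced in this paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for the collected project records: eight cost driver
# values per project plus the observed effort in men x month.
n_projects, n_drivers = 44, 8
X = rng.integers(1, 6, size=(n_projects, n_drivers)).astype(float)
true_weights = rng.normal(0.5, 1.0, size=n_drivers)
effort = X @ true_weights + rng.normal(0.0, 0.5, size=n_projects)

# Roughly 77% of the projects for fitting, the remaining 23% for validation.
split = int(round(0.77 * n_projects))          # 34 projects for fitting
X_fit, y_fit = X[:split], effort[:split]
X_val, y_val = X[split:], effort[split:]       # 10 projects held back

# Ordinary least squares with an intercept column appended.
A_fit = np.column_stack([X_fit, np.ones(len(X_fit))])
coefficients, *_ = np.linalg.lstsq(A_fit, y_fit, rcond=None)

A_val = np.column_stack([X_val, np.ones(len(X_val))])
prediction_error = A_val @ coefficients - y_val
print("fitted coefficients:", np.round(coefficients, 2))
print("mean absolute validation error:",
      round(float(np.mean(np.abs(prediction_error))), 2))
```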
Cost Drivers
Considering the characteristics of Information Mining projects for SMEs indicated in section 3, eight cost drivers are specified. Few cost drivers have been identified in this version because, as explained in [START_REF] Chen | Finding the right data for software cost modeling[END_REF], when an effort estimation method is created, many of the non-significant data should be ignored. As a result the model is prevented from being too complex (and therefore impractical), the irrelevant and codependent variables are removed, and the noise is also reduced. The cost drivers have been selected based on the most critical tasks of CRISP-DM methodology [START_REF] Chapman | CRISP-DM 1.0 Step by step BI guide Edited by SPSS[END_REF]: in [START_REF] Domingos | 10 challenging problems in data mining research[END_REF] it is indicated that building the data mining models and finding patterns is quite simple now, but 90% of the effort is included in the data pre-processing (i.e. "Data Preparation" tasks performed at phase III of CRISP-DM). From our experience, the other critical tasks are related to "Business Understanding" phase (i.e. "understanding of the business' background" and "identifying the project success" tasks). The proposed cost factors are grouped into three groups as follows:
Cost drivers related to the project:
• Information Mining objective type (OBTY) This cost driver analyses the objective of the Information Mining project and therefore the type of process to be applied based on the definition performed in [START_REF] Garcia-Martinez | Information Mining Processes Based on Intelligent Systems[END_REF]. The allowed values for this cost drivers are indicated in table 2.
Table 2. Values of OBTY cost driver

Value  Description
1      It is desired to identify the rules that characterize the behaviour or the description of an already known class.
2      It is desired to identify a partition of the available data without having a previously known classification.
3      It is desired to identify the rules that characterize the data partitions without a previously known classification.
4      It is desired to identify the attributes that have a greater frequency of incidence over the behaviour or the description of an already known class.
5      It is desired to identify the attributes that have a greater frequency of incidence over a previously unknown class.
• Level of collaboration from the organization (LECO)
The level of collaboration from the members of the organization is analysed by reviewing whether the high-level management (i.e. usually the SME's owners), the middle-level management (supervisors and department heads) and the operational personnel are willing to help the consultants to understand the business and the related data (especially in the first phases of the project). If the Information Mining project has been contracted, it is assumed that at least the high-level management should support it. The possible values for this cost factor are shown in table 3.
Table 3. Values of LECO cost driver

Value  Description
1      Both managers and the organization's personnel are willing to collaborate on the project.
2      Only the managers are willing to collaborate on the project while the rest of the company's personnel is indifferent to the project.
3      Only the high-level managers are willing to collaborate on the project while the middle-level manager and the rest of the company's personnel is indifferent to the project.
4      Only the high-level managers are willing to collaborate on the project while the middle-level manager is not willing to collaborate.
Cost Drivers related to the available data:
• Quantity and type of the available data repositories (AREP)
The data repositories to be used in the Information Mining process are analysed (including data base management systems, spread-sheets and documents among others). In this case, both the quantity of data repositories (public or private from the company) and the implementation technology are studied. In this stage, it is not necessary to know the quantity of tables in each repository because their integration within a repository is relatively simple as it can be performed with a query statement. However, depending on the technology, the complexity of the data integration tasks could vary. The following criteria can be used:
- If all the data repositories are implemented with the same technology, then the repositories are compatible for integration.
- If the data can be exported into a common format, then the repositories can be considered as compatible for integration because the data integration tasks will be performed using the exported data.
- On the other hand, if there are non-digital repositories (i.e. written paper), then the technology should not be considered compatible for the integration. But the estimation method is not able to predict the required time to perform the digitalization because it could vary on many factors (such as quantity of papers, length, format and diversity among others).

The possible values for this cost factor are shown in table 4.

Table 4. Values of AREP cost driver

Value  Description
3      Between 2 and 5 data repositories, non-compatible technology for integration.
4      More than 5 data repositories, compatible technology for integration.
5      More than 5 data repositories, non-compatible technology for integration.
• Total quantity of available tuples in main table (QTUM)
This variable ponders the approximate quantity of tuples (records) available in the main table to be used when applying data mining techniques. The possible values for this cost factor are shown in table 5.

• Knowledge level about the data sources (KLDS)
The knowledge level about the data sources considers whether the data repositories and their tables are properly documented; in other words, whether a document exists that defines the technology in which each repository is implemented, the characteristics of the tables' fields, and how the data is created, modified and/or deleted. When this document is not available, it will be necessary to hold meetings with experts (usually those in charge of data administration and maintenance) to explain them. As a result, the required project effort should be increased depending on the collaboration of these experts in helping the consultants.
The possible values for this cost factor are shown in table 7.
Table 7. Values of KLDS cost driver

Value  Description
1      All the data tables and repositories are properly documented.
2      More than 50% of the data tables and repositories are documented and there are available experts to explain the data sources.
3      Less than 50% of the data tables and repositories are documented but there are available experts to explain the data sources.
4      The data tables and repositories are not documented but there are available experts to explain the data sources.
5      The data tables and repositories are not documented, and the available experts are not willing to explain the data sources.
6      The data tables and repositories are not documented and there are no available experts to explain the data sources.
Cost drivers related to the available resources:
• Knowledge and experience level of the information mining team (KEXT)

This cost driver studies the ability of the outsourced consultants that will carry out the project. Both the knowledge and experience of the team in similar previous projects are analysed by considering the similarity of the business type, the data to be used and the expected goals. It is assumed that when there is greater similarity, the effort should be lower. Otherwise, the effort should be increased. The possible values for this cost factor are shown in table 8.
• Functionality and usability of available tools (TOOL)
This cost driver analyses the characteristics of the information mining tools to be utilized in the project and its implemented functionalities. Both the data preparation functions and the data mining techniques are reviewed.
The possible values for this cost factor are shown in table 9.
Table 8. Values of KEXT cost driver

Value  Description
1      The information mining team has worked with similar data in similar business types to obtain the same objectives.
2      The information mining team has worked with different data in similar business types to obtain the same objectives.
3      The information mining team has worked with similar data in other business types to obtain the same objectives.
4      The information mining team has worked with different data in other business types to obtain the same objectives.
5      The information mining team has worked with different data in other business types to obtain other objectives.
Table 9. Values of TOOL cost driver

Value  Description
1      The tool includes functions for data formatting and integration (allowing the importation of more than one data table) and data mining techniques.
2      The tool includes functions for data formatting and data mining techniques, and it allows importing more than one data table independently.
3      The tool includes functions for data formatting and data mining techniques, and it allows importing only one data table at a time.
4      The tool includes only functions for data mining techniques, and it allows importing more than one data table independently.
5      The tool includes only functions for data mining techniques, and it allows importing only one data table at a time.
Estimation Formula
Once the values of the cost drivers had been specified, they were used to characterize 34 information mining projects with their real effort, collected by co-researchers as indicated before. A multivariate linear regression method [START_REF] Weisberg | Applied Linear Regression[END_REF] has been applied to obtain a linear equation of the form used by COCOMO family methods [START_REF] Boehm | Software Cost Estimation with COCOMO II[END_REF]. As a result, the following formula is obtained:

PEM = 0.80 OBTY + 1.10 LECO - 1.20 AREP - 0.30 QTUM - 0.70 QTUA + 1.80 KLDS - 0.90 KEXT + 1.86 TOOL - 3.30    (1)

where PEM is the effort estimated by the proposed method for SMEs (in men x month), and the cost drivers are: information mining objective type (OBTY), level of collaboration from the organization (LECO), quantity and type of the available data repositories (AREP), total quantity of available tuples in the main table (QTUM) and in auxiliary tables (QTUA), knowledge level about the data sources (KLDS), knowledge and experience level of the information mining team (KEXT), and functionality and usability of available tools (TOOL). The values for each cost driver are defined in tables 2 to 9 of section 4.1.
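A minimal sketch of how formula (1) can be evaluated is shown below. The coefficients are those reported above; the example ratings are hypothetical and would in practice be taken from tables 2 to 9.

```python
# Sketch of evaluating the proposed SME-oriented estimation formula (1).
PEM_COEFFICIENTS = {
    "OBTY": 0.80, "LECO": 1.10, "AREP": -1.20, "QTUM": -0.30,
    "QTUA": -0.70, "KLDS": 1.80, "KEXT": -0.90, "TOOL": 1.86,
}
PEM_INTERCEPT = -3.30

def estimate_effort_pem(ratings):
    """Estimated effort in men x month for a project rated on all eight drivers."""
    missing = set(PEM_COEFFICIENTS) - set(ratings)
    if missing:
        raise ValueError(f"missing cost driver ratings: {sorted(missing)}")
    return PEM_INTERCEPT + sum(PEM_COEFFICIENTS[d] * ratings[d]
                               for d in PEM_COEFFICIENTS)

# Hypothetical mid-range ratings for a small project.
example_ratings = {"OBTY": 4, "LECO": 2, "AREP": 3, "QTUM": 3,
                   "QTUA": 2, "KLDS": 3, "KEXT": 3, "TOOL": 2}
print(f"PEM = {estimate_effort_pem(example_ratings):.2f} men x month")
```

With these example ratings the formula yields approximately 2.6 men x month, which is of the same order of magnitude as the small projects used later for validation.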
Validation of the Proposed Estimation Method
In order to validate the estimation method defined in section 4, the data of the other 10 collected information mining projects is used to compare the accuracy of the proposed method against both the real effort and the effort estimated by the DMCoMo method. A brief description of these projects with their applied effort (in men x month) is shown in table 10.
P1
The business objective is classifying the different types of cars and reviewing the acceptance of the clients, and detecting the characteristics of the most accepted car.
The process of discovering behaviour rules is used.
P2
As there is not big increment in the middle segment, the company wants to gain market by attracting new customers. In order to achieve that, it is required to determine the necessities of that niche market.
The process of discovering behaviour rules is used.
P3
The high management of a company have decided to enhance and expand their market presence by launching a new product. The new concept will be proclaimed as a new production unit which aimed to create more jobs, more sales and therefore more revenue.
The processes of discovering behaviour rules and weighting of attributes are used.
P4
It is necessary to identify the customer behaviour in order to understand which type of customer is more inclined to buy any package of products. The desired objective is increasing the level of acceptance and sales of product packages.
The process of discovering behaviour rules is used.
P5
The objectives of the project are performing a personalized marketing campaign for the clients, and locating the ads in the optimal places (i.e. the places with the highest click-through rate, CTR).
The process of discovering group-membership rules is used.
The real effort is 9.35 men x months.
P6
The objective is to analyse the causes of the diseases that babies present when they are born, considering the economic, social and educational level, and also the age of the mother.
The processes of discovering behaviour rules and weighting of attributes are used.
P7
The help desk sector of a governmental organization employs a software system to register each received phone call. As a result, it is possible to identify a repair request, a change, or a malfunction of any computer in order to assign a technician who will solve the problem.
The process of discovering group-membership rules is used.
P8
The objective is improving the image of the company to the customers by having a better distribution service. This means finding the internal and external factors of the company that affect the delay of the orders to be delivered to customers.
The process of discovering group-membership rules is used.
P9
The purpose is achieving the best global technologies, the ownership of independent intellectual property rights, and the creation of an internationally famous brand among the world-class global automotive market.
The processes of discovering group-membership rules and weighting of the attributes are used.
P10
It has been decided to identify the key attributes that produce good quality wines. Once these attributes are detected, the lower quality wines should be improved accordingly.
The processes of discovering behaviour rules and weighting of attributes are used.
The real effort is 1.56 men x months.
Using the collected project data, the values of the DMCoMo cost drivers are determined in order to calculate the corresponding estimations. Both the formula that uses 8 cost factors (MM8 column) and the formula that uses 23 cost factors (MM23 column) are applied, obtaining the values shown in table 11.
#     P1 P2 P3 P4 P5 P6 P7 P8 P9 P10
NTAB   1  0  0  3  1  0  1  2  0  0
NTUP   1  1  1  5  3  1  1  0  1  1
NATR   7  1  1  5  3  1  1  3  1  1
DISP   1  1  1  2  2  1  1  4  1  1
PNUL   0  1  1  2  1  2  2  1  1  1
DMOD   1  4  1  2  5  1  1  0  1  0
DEXT   1  0  0  1  2  2  2  0  2  2
NMOD   1  2  2  3  3  2  2  1  2  2
TMOD   0  1  1  3  1  1  3  1  4  4
MTUP   1  1  1  3  3  1  1  0  1  1
MATR   1  2  2  3  3  2  2  3  2  2
MTEC   1  1  1  5  3  1  4  1  1  4
NFUN   3  1  1  3  2  1  1  0  1  2
SCOM   1  1  1  0  1  1  1  1  1  1
TOOL   1  1  1  1  1  1  1  1  1  1
COMP   3  5  0  2  1  0  1  4  3  2
NFOR   3  3  3  3  1  1  3  2  3  1
NDEP   4  2  2  1  4  1  3  0  4  2
DOCU   5  2  3  2  2  2  2  2  5  5
SITE   3  1  1  0  2  1  3  0  3  3
KDAT   4  3  3  2  4  1  2  1  3  2
ADIR   1  1  1  4  2  1  2  6  1  1
MFAM   3  5  5  3  1  5  3  0  4  4
Similarly, the same procedure is performed to calculate the effort by applying the formula specified in section 3.2 for the proposed estimation method oriented to SMEs (PEM column), as shown in table 12.
Finally, in table 13 the estimated efforts are compared with the real effort of each project (REf column). The efforts calculated by the DMCoMo method (MM8 and MM23 columns) and by the proposed method for SMEs (PEM column) are indicated with their corresponding error (i.e. the difference between the real effort and the value calculated by each method). The relative error for the estimation of the proposed method is also shown (calculated as the error divided by the real effort). This comparison is reflected in a boxplot graph (figure 1), where the behaviour of the real and calculated efforts is shown by indicating the minimum and maximum values (thin line), the standard deviation range (thick line) and the average value (marker). When analysing the results of the DMCoMo method in table 13, it can be seen that the average error is very large (approximately 86 men x months for both formulas) with an error standard deviation of about ± 20 men x months. DMCoMo always tends to overestimate the effort of the project (i.e. the error values are always negative), with a ratio greater than 590% (the smallest difference corresponding to project #6). This behaviour can also be seen graphically in figure 1. In addition, all estimated values are bigger than 60 men x months, which is the maximum threshold value previously identified for SMEs projects. These results confirm the conclusions of [START_REF] Marbán | A cost model to estimate the effort of data mining projects (DMCoMo)[END_REF]: the DMCoMo estimation method is not recommended to predict the effort of small-sized information mining projects.
On the other hand, when the results of the proposed method for SMEs are analysed, it can be seen that the average error is approximately 1.46 men x months with an error standard deviation of approximately ± 2 men x months. In order to study the behaviour of the proposed method against the real effort, a second boxplot graph is presented in figure 2. From this graph, the proposed method appears to slightly underestimate the real effort. The minimum values are similar (1.56 men x months for the real effort and 1.08 men x months for the proposed method), as are the maximum values (11.63 men x months for REf and 9.80 for PEM) and the averages (5.77 and 4.51 men x months respectively). Finally, if the real and estimated efforts of each project are compared using a bar chart (figure 3), it can be seen that the estimations of the proposed method are not completely accurate:
─ Projects #1, #3, #5, #8 and #9 have estimated efforts with an absolute error smaller than one men x month and a relative error lower than 10%.
─ Projects #2 and #10 have an estimated effort smaller than the real one, with a relative error lower than 35%. In this case, the average error is about 0.74 men x months, with a maximum error of one men x month (project #2).
─ Finally, projects #4, #6 and #7 have an estimated effort with a relative error greater than 35% (but lower than 60%). In this case, the maximum error is nearly 7 men x months (project #6) and the average error is 3.81 men x months.
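The error columns of table 13 follow directly from the real and estimated efforts; the short Python sketch below recomputes them for the proposed method. The variable names are illustrative only, and the effort values are copied from tables 12 and 13.

```python
# Minimal sketch: reproducing the error columns of Table 13 for the proposed
# method (REf = real effort, PEM = estimated effort, both in men x months).

projects = {
    "P1": (2.41, 2.58), "P2": (7.00, 6.00), "P3": (1.64, 1.48),
    "P4": (3.65, 1.68), "P5": (9.35, 9.80), "P6": (11.63, 5.10),
    "P7": (6.73, 3.78), "P8": (5.40, 4.88), "P9": (8.38, 8.70),
    "P10": (1.56, 1.08),
}

errors = []
for name, (ref, pem) in projects.items():
    error = ref - pem            # positive values mean underestimation
    relative = error / ref       # relative to the real effort
    errors.append(error)
    print(f"{name:>3}: error = {error:+.2f}, relative = {relative:+.1%}")

mean_abs_error = sum(abs(e) for e in errors) / len(errors)
print(f"mean absolute error = {mean_abs_error:.2f} men x months")  # about 1.46
```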
Conclusions
Software projects need to predict their cost and effort, together with the associated quantity of resources, at the beginning of every project. Predicting the effort required to perform an information mining project is likewise necessary for Small and Medium-sized Enterprises (SMEs). Considering the characteristics of these projects developed with the CRISP-DM methodology, an estimation method oriented to SMEs has been proposed, defining eight cost drivers and an estimation formula.
From the validation of the proposed method, it has been seen that the proposed method produces a more accurate estimation than the DMCoMo method for small-sized projects. However, even though the overall behaviour of the proposed method is similar to the real project behaviour, it tends to underestimate slightly (the average error is smaller than 1.5 men x months). It can be highlighted that 50% of the estimations have a relative error smaller than 10%, and 20% have a relative error between 11% and 35%. For the rest of the estimations, the relative error is smaller than 57%. Nevertheless, in all cases the absolute error is smaller than 7 men x months. These errors could be due to the existence of other factors affecting the project effort which have not been considered in this version of the estimation method.
As future research work, the identified issues will be studied in order to provide a more accurate version of the estimation method oriented to SMEs, by studying the dependencies between the cost drivers and then adding new cost drivers or redefining the existing ones. Another possible approach is modifying the existing estimation formula by using an exponential regression with more collected real project data.
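As an illustration of the exponential alternative mentioned above, the sketch below fits a log-linear model on the ten validation projects (driver ratings from table 12, real efforts from table 13) using scikit-learn. With only ten projects this merely demonstrates the fitting procedure; it is not a recalibration proposed by the authors.

```python
# Illustration only: fitting a COCOMO-style exponential (log-linear) model,
# effort = exp(b0 + sum(bi * driver_i)), on the ten validation projects.
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: OBTY, LECO, AREP, QTUM, QTUA, KLDS, KEXT, TOOL (from Table 12)
X = np.array([
    [1, 1, 3, 3, 1, 3, 2, 3], [1, 1, 1, 3, 1, 3, 5, 5],
    [4, 1, 1, 3, 3, 2, 5, 3], [1, 4, 3, 5, 1, 1, 2, 3],
    [3, 2, 2, 5, 2, 3, 1, 5], [4, 1, 1, 2, 1, 1, 5, 5],
    [3, 2, 1, 4, 1, 1, 2, 3], [1, 4, 1, 3, 2, 1, 1, 3],
    [5, 1, 1, 3, 3, 3, 4, 5], [4, 1, 2, 2, 1, 1, 4, 3],
])
# Real efforts in men x months (REf column of Table 13)
y = np.array([2.41, 7.00, 1.64, 3.65, 9.35, 11.63, 6.73, 5.40, 8.38, 1.56])

model = LinearRegression().fit(X, np.log(y))   # linear fit on log(effort)
predicted = np.exp(model.predict(X))           # back-transform to men x months
print(np.round(predicted, 2))
```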
---
-Number of Data Models (NMOD) -Types of Data Model (TMOD) -Number of Tuples for each Data Model (MTUP) -Number and Type of Attributes for each Data Model (MATR) -Techniques Availability for each Data Model (MTEC)
Development Platform
-Number and Type of Data Sources (NFUN) -Distance and Communication Form (SCOM)
Techniques and Tools
-Tools Availability (TOOL) -Compatibility Level between Tools and Other Software (COMP) -Training Level of Tool Users (NFOR)
Project
-Number of Involved Departments (NDEP) -Documentation (DOCU) -Multisite Development (SITE)
Project Staff
-Problem Type Familiarity (MFAM) -Data Knowledge (KDAT) -Directive Attitude (ADIR)
Fig. 1. Boxplot graph comparing the behaviour of the Real Effort with the efforts calculated by DMCoMo and by the proposed estimation method for SMEs
Fig. 2. Boxplot graph comparing the behaviour of the Real Effort with the effort calculated by the proposed estimation method for SMEs
Fig. 3. Bar graph comparing for each project the Real Effort (REf) and the effort calculated by the proposed estimation method for SMEs (PEM)
Table 1. Cost Drivers used by DMCoMo
Category Cost Drivers
Source Data
-Number of Tables (NTAB) -Number of Tuples (NTUP) -Number of Table Attributes (NATR) -Data Dispersion (DISP) -Nulls Percentage (PNUL) -Data Model Availability (DMOD) -External Data Level (DEXT)
Table 4. Values of AREP cost driver
Value Description
1 Only 1 available data repository.
2 Between 2 and 5 data repositories with compatible technology for integration.
3
Table 5. Values of QTUM cost driver
• Total quantity of available tuples in auxiliary tables (QTUA): This variable ponders the approximate quantity of tuples (records) available in the auxiliary tables (if any) used to add additional information to the main table (such as a table used to determine the product characteristics associated with the product ID of the sales main table). Normally, these auxiliary tables include fewer records than the main table. The possible values for this cost factor are shown in table 6.
Value Description
1 Up to 100 tuples from main table.
2 Between 101 and 1,000 tuples from main table.
3 Between 1,001 and 20,000 tuples from main table.
4 Between 20,001 and 80,000 tuples from main table.
5 Between 80,001 and 5,000,000 tuples from main table.
6 More than 5,000,000 tuples from main table.
Table 6. Values of QTUA cost driver
Value Description
1 No auxiliary tables used.
2 Up to 1,000 tuples from auxiliary tables.
3 Between 1,001 and 50,000 tuples from auxiliary tables.
4 More than 50,000 tuples from auxiliary tables.
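The ordinal scales of tables 5 and 6 can be applied mechanically when rating a project. The following Python helpers are a hypothetical illustration of that lookup; the function names are not part of the proposed method, but the thresholds are copied from the two tables.

```python
# Hypothetical helpers mapping raw tuple counts to the QTUM / QTUA ratings
# defined in Tables 5 and 6.

def rate_qtum(main_table_tuples):
    """Rating for the total quantity of available tuples in the main table."""
    thresholds = [(100, 1), (1_000, 2), (20_000, 3),
                  (80_000, 4), (5_000_000, 5)]
    for limit, value in thresholds:
        if main_table_tuples <= limit:
            return value
    return 6  # more than 5,000,000 tuples

def rate_qtua(auxiliary_table_tuples):
    """Rating for the total quantity of tuples in the auxiliary tables."""
    if auxiliary_table_tuples == 0:
        return 1  # no auxiliary tables used
    if auxiliary_table_tuples <= 1_000:
        return 2
    if auxiliary_table_tuples <= 50_000:
        return 3
    return 4  # more than 50,000 tuples

print(rate_qtum(15_000), rate_qtua(2_500))  # 3 3
```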
Table 10. Data of the information mining projects used for the validation
# Business Objectives Information Mining Objectives Real Effort (men x month)
Table 11. Effort calculated by DMCoMo method
#                  P1    P2    P3    P4     P5     P6    P7    P8     P9    P10
MM8 (men x month)  84.23 67.16 67.16 118.99 110.92 80.27 96.02 116.87 97.63 105.32
MM23 (men x month) 94.88 51.84 68.07 111.47 122.52 81.36 92.49 89.68  98.74 103.13
Table 12. Effort calculated by the proposed estimation method oriented to SMEs
#    OBTY LECO AREP QTUM QTUA KLDS KEXT TOOL PEM (men x month)
P1    1    1    3    3    1    3    2    3   2.58
P2    1    1    1    3    1    3    5    5   6.00
P3    4    1    1    3    3    2    5    3   1.48
P4    1    4    3    5    1    1    2    3   1.68
P5    3    2    2    5    2    3    1    5   9.80
P6    4    1    1    2    1    1    5    5   5.10
P7    3    2    1    4    1    1    2    3   3.78
P8    1    4    1    3    2    1    1    3   4.88
P9    5    1    1    3    3    3    4    5   8.70
P10   4    1    2    2    1    1    4    3   1.08
Table 13. Comparison of the calculated efforts (in men x month)
DMCoMo PROPOSED METHOD
# REf MM8 REf -MM8 MM23 REf -MM23 PEM REf -PEM Relative Error
P1 2.41 84.23 -81.82 94.88 -92.47 2.58 -0.17 -7.2%
P2 7.00 67.16 -60.16 51.84 -44.84 6.00 1.00 14.3%
P3 1.64 67.16 -65.52 68.07 -66.43 1.48 0.16 9.8%
P4 3.65 118.99 -115.34 111.47 -107.82 1.68 1.97 54.0%
P5 9.35 110.92 -101.57 122.52 -113.17 9.80 -0.45 -4.8%
P6 11.63 80.27 -68.65 81.36 -69.73 5.10 6.53 56.1%
P7 6.73 96.02 -89.29 92.49 -85.76 3.78 2.95 43.8%
P8 5.40 116.87 -111.47 89.68 -84.28 4.88 0.52 9.6%
P9 8.38 97.63 -89.26 98.74 -90.36 8.70 -0.33 -3.9%
P10 1.56 105.32 -103.75 103.13 -101.56 1.08 0.48 30.9%
Average Error 88.68 85.64 1.46
Error Variance 380.28 428.99 3.98
The real projects data used for regression is available at: http://tinyurl.com/bm93wol
Acknowledgements
The research reported in this paper has been partially funded by research project grants 33A105 and 33B102 of National University of Lanus, by research project grants 40B133 and 40B065 of National University of Rio Negro, and by research project grant EIUTIBA11211 of Technological National University at Buenos Aires.
Also, the authors wish to thank the researchers who provided the examples of real SMEs Information Mining Projects used in this paper. | 39,486 | [
"1003594",
"1003595",
"992693"
] | [
"300134",
"487857",
"346011",
"487856",
"487857"
] |
01484694 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01484694/file/978-3-642-36611-6_8_Chapter.pdf | Wipawee Uppatumwichian
email: wipawee.uppatumwichian@ics.lu.se
Understanding the ERP system use in budgeting
Keywords: structuration theory, budgeting, ERP system, IS use
This paper investigates the enterprise resource planning (ERP) system use in budgeting in order to explain how and why ERP systems are used or not used in budgeting practices. Budgeting is considered as a social phenomenon which requires flexibility for decision-making and integration for management control. The analysis at the activity level, guided by the concept of 'conflict' in structuration theory (ST), suggests that ERP systems impede flexibility in decision-making. However, the systems have the potential to facilitate integration in management control. The analysis at the structural level, guided by the concept of 'contradiction' in ST, concludes that the ERP systems are not widely used in budgeting. This is because the systems support the integration function alone while budgeting assumes both roles. This paper offers an explanation of ERP system non-use from a utilitarian perspective. Additionally, it calls for solutions to improve ERP use, especially for the integration function.
Introduction
The advance in information system (IS) technologies has promised many improved benefits to organisations [START_REF] Davenport | Putting the enterprise into the enterprise system[END_REF][START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF]. However, such improvements are often hindered by unwillingness to accept new IS technologies [START_REF] Davis | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology[END_REF][START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF]. This results in IS technology non-use [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF] and/or workarounds [START_REF] Taylor | Understanding Information Technology Usage: A Test of Competing Models[END_REF][START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF] and, inevitably, moderate business benefits. For this reason, a tradition of IS use research has been well established in the discipline [START_REF] Pedersen | Modifying adoption research for mobile interent service adoption: Cross-disciplinary interactions In[END_REF] to investigate how and why users use or do not use certain IS technologies.
In the field of accounting information system (AIS) research, previous work has indicated that there is a limited amount of research, as well as understanding, of the use of enterprise resource planning (ERP) systems to support management accounting practices [START_REF] Scapens | ERP systems and management accounting change: opportunities or impacts? A research note[END_REF][START_REF] Granlund | Extending AIS research to management accounting and control issues: A research note[END_REF][START_REF] Elbashir | The role of organisational absorptive capacity in strategic use of business intelligence to support integrated management control systems[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. Up to now, the available research results conclude that most organisations have not yet embraced the powerful capacity of the ERP systems to support the management accounting function [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF]. Many studies have reported a consistently limited ERP use in the management accounting function using data from many countries across the globe, such as Egypt [START_REF] Jack | Enterprise Resource Planning and a contest to limit the role of management accountants: A strong structuration perspective[END_REF], Australia [START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF], Finland [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Hyvönen | Management accounting and information systems: ERP versus BoB[END_REF][START_REF] Kallunki | Impact of enterprise resource planning systems on management control systems and firm performance[END_REF][START_REF] Chapman | Information system integration, enabling control and performance[END_REF] and Denmark [START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF]. Several researchers have in particular called for more research contributions on the ERP system use in the management accounting context, and especially on how the systems might be used to support the two key functions in management accounting: decision-making and management control [START_REF] Granlund | Extending AIS research to management accounting and control issues: A research note[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF][START_REF] Rom | Management accounting and integrated information systems: A literature review[END_REF]. This paper responds to that call by uncovering the ERP system use in budgeting. In relation to other management accounting activities, budgeting is considered to be the most suitable social phenomenon under investigation. This is because budgeting is a longstanding control procedure [START_REF] Davila | Management control systems in early-stage startup companies[END_REF] which continues to soar in popularity among modern organisations [START_REF] Libby | Beyond budgeting or budgeting reconsidered? A survey of North-American budgeting practice[END_REF].
In addition, it assumes the dual roles of decision-making and management control [START_REF] Abernethy | The role of budgets in organizations facing strategic change: an exploratory study[END_REF].
Budgeting is considered as a process undertaken to achieve a quantitative statement for a defined time period [START_REF] Covaleski | Budgeting reserach: Three theoretical perspectives and criteria for selective integration In[END_REF]. A budget cycle can be said to cover activities such as (1) budget construction, (2) consolidation, (3) monitoring and (4) reporting. The levers of control (LOC) framework [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF] suggests that budgeting can be used interactively for decision-making and diagnostically for management control. This is in line with modern budgeting literature [START_REF] Abernethy | The role of budgets in organizations facing strategic change: an exploratory study[END_REF][START_REF] Frow | Continuous budgeting: Reconciling budget flexibility with budgetary control[END_REF] whose interpretation is that budgeting assumes the dual roles. However, the degree of combination between these two roles varies according to management's judgements in specific situations [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF]. This dual role requires budgeting to be more flexible for decision-making yet integrative for management control [START_REF] Uppatumwichian | Analysing Flexibility and Integration needs in budgeting IS technologies In[END_REF].
Given the research gaps addressed and the flexible yet integrative roles of budgeting, this paper seeks to uncover how the ERP systems are used in budgeting as well as to explain why the ERP systems are used or not used in budgeting.
This paper proceeds as follows. The next section provides a literature review in the ERP system use literature with regard to the integration and flexibility domains. Section 3 discusses the concepts of conflict and contradiction in structuration theory (ST) which is the main theory used. After that, section 4 deliberates on the research method and case companies contained in this study. Subsequently, section 5 proceeds to data analysis based on the conflict and contradiction concepts in ST in order to explain how and why ERP systems are used or not used in budgeting. Section 6 ends this paper with conclusions and research implications.
The ERP literature review on flexibility and integration
This section reviews the ERP literature on the integration and flexibility domains, as it has been suggested previously that budgeting possesses these dual roles. It starts with a brief discussion of what the ERP system is and its relation to accounting. It then proceeds to discuss conflicting conclusions in the literature about how the ERP system can be used to promote flexibility and integration.
The ERP system, in essence, is an integrated cross-functional system containing many selectable software modules which span to support numerous business functions that a typical organisation might have such as accounting and finance, human resources, and sales and distributions [START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. The system can be considered as a reference model which segments organisations into diverse yet related functions through a centralised database [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF]. The ERP system mandates a rigid business model which enforces underlying data structure, process model as well as organisational structure [START_REF] Kumar | ERP expiriences and evolution[END_REF] in order to achieve an ultimate integration between business operation and IS technology [START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
The ERP system has become a main research interest within the IS discipline, as well as its sister discipline of AIS research, since the inception of this system in the early 1990s [START_REF] Granlund | Moderate impact of ERPS on management accounting: a lag or permanent outcome?[END_REF][START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF]. Indeed, it can be said that AIS gave rise to the modern ERP system, because accounting was one of the early business operations in which IS technology was employed to hasten the process [START_REF] Granlund | Introduction: problematizing the relationship between management control and information technology[END_REF]. A research finding [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF] posits that the ERP systems require implementing organisations to set up the systems according to either 'accounting' or 'logistic' modes, which forms a different control locus in organisations. Such an indication strongly supports the close relationship that accounting has with the modern ERP system.
In relation to the flexibility domain, research to date has provided contradictory conclusions on the relationship between the ERP system and flexibility. One research stream considers the ERP system to impose a stabilising effect on organisations because of its lack of flexibility in relation to changing business conditions [START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF][START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF][START_REF] Hyvönen | Management accounting and information systems: ERP versus BoB[END_REF][START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF][START_REF] Light | ERP and best of breed: a comparative analysis[END_REF][START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF]. Akkermans et al. [START_REF] Akkermans | The impact of ERP on supply chain management: Exploratory findings from a European Delphi study[END_REF], for example, report that leading IT executives perceive the ERP system as a hindrance to strategic business initiatives. The ERP system is said to have low system flexibility which does not correspond to the changing networking organisation mode. This line of research concludes that a lack of flexibility in the ERP system can pose a direct risk to organisations because the ERP system reference model is not suitable for the business processes [START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF][START_REF] Strong | Understanding organization--Enterprise system fit: A path to theorizing the information technology artifact[END_REF]. In addition, the lack of flexibility results in two possible lines of actions from users: (1) actions in the form of inaction, that is, a passive resistance not to use the ERP systems [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF], or (2) actions to reinvent the systems, i.e. a workaround [START_REF] Boudreau | Enacting integrated information technology: A human agency perspective[END_REF]. The other stream of research maintains that ERP system implementation improves flexibility in organisations [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF][START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF][START_REF] Cadili | On the interpretative flexibility of hosted ERP systems[END_REF]. Shang and Seddon [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF], for example, propose that the ERP system contributes to increased flexibility in organisational strategies. This is because the modular IT infrastructure of the ERP system allows organisations to cherry-pick modules which support their current business initiatives. In the same line, Brazel and Dang [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF] posit that ERP implementation allows more organisational flexibility to generate financial reports.
Cadili and Whitley [START_REF] Cadili | On the interpretative flexibility of hosted ERP systems[END_REF] support this view to a certain extent, as they assert that the flexibility of an ERP system tends to decrease as the system grows in size and complexity.
With regard to the integration domain, a similarly contradictory conclusion on the role of the ERP system in integration is presented in the literature. One stream of research posits that the reference model embedded in the ERP system [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF], which enforces a strict data definition across organisational units through a single database, enables integration and control [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF][START_REF] Chapman | Information system integration, enabling control and performance[END_REF][START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF][START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF]. Some of the benefits mentioned in the literature after an ERP implementation are: reporting capability [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF], information quality [START_REF] Häkkinen | Life after ERP implementation: Long-term development of user perceptions of system success in an after-sales environment[END_REF], decision-making [START_REF] Spathis | Enterprise systems implementation and accounting benefits[END_REF] and strategic alliance [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF]. Another stream of research raises serious criticism of the view that ERP implementation will enable organisational integration. Quattrone and Hopper [START_REF] Quattrone | What is IT?: SAP, accounting, and visibility in a multinational organisation[END_REF], for example, argue that the ERP system is at best a belief that activities can be integrated by making transactions visible and homogenous. Dechow and Mouritsen [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF] explicitly support this view by indicating that: "[The] ERP systems do not define what integration is and how it is to be developed". They argue that it is not possible to manage integration around the ERP systems, or any other IS systems. Often, means of integration other than IS, such as lunch-room observations, are more fruitful for organisational integration and control. In many cases, it is argued that integration can only be achieved through a willingness to throw away some data and integrate less information [START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
Theoretical background
A review of IS use research [START_REF] Pedersen | Modifying adoption research for mobile interent service adoption: Cross-disciplinary interactions In[END_REF] has indicated that there are three main explanatory views which are widely used in IS use research. First, the utilitarian view holds that users are rational in their choice of system use. This stream of research often employs the technology acceptance model [START_REF] Davis | Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology[END_REF] or media richness theory [START_REF] Daft | Organizational Information Requirements, Media Richness and Structural Design[END_REF] to explain system use. Second, the social influence view deems that social mechanisms are of importance in enforcing system use in particular social contexts [START_REF] Fishbein | Belief, attitude, intention and behaviour: An introduction to theory and research[END_REF]. Third and last, the contingency view [START_REF] Drazin | Alternative Forms of Fit in Contingency Theory[END_REF] explains that people decide to use or not to use systems through personal characteristics and situational factors. Factors such as behavioural control [START_REF] Taylor | Understanding Information Technology Usage: A Test of Competing Models[END_REF], as well as skills and recipient attributes [START_REF] Treviño | Making Connections: Complementary Influences on Communication Media Choices, Attitudes, and Use[END_REF], serve as explanations for system use/non-use.
Aware of these theoretical alternatives in the literature, the author chooses to approach this research through the lens of ST. The theory is considered to have the potential to uncover ERP use based on the utilitarian view. ST is appealing to the ERP system use study because the flexible yet integrative roles of budgeting fit into the contradiction discussion in social sciences research. It has been discussed that most modern theories, along with social practices, represent contradictions in themselves [START_REF] Robey | Accounting for the Contradictory Organizational Consequences of Information Technology: Theoretical Directions and Methodological Implications[END_REF]. Anthony Giddens, the founder of ST, explicitly supports the aforementioned argument. He writes: "don't look for the functions social practices fulfil, look for the contradiction they embody!" [START_REF] Giddens | Central problems in social theory[END_REF].
The heart of ST is an attempt to treat human actions and social structures as a duality rather than a dualism. To achieve this, Giddens bridges the two opposing philosophical views of functionalism and interpretivism. Functionalism holds that social structures are independent of human actions. Interpretivism, on the contrary, holds that social structures exist only in human minds. It is maintained that structures exist as human actors apply them. They are the medium and outcome of human interactions. ST is appealing to IS research because of its vast potential to uncover the interplay of people with technology [START_REF] Poole | Structuration theory in information systems research: Methods and controversies In[END_REF][START_REF] Walsham | Information systems strategy formation and implementation: The case of a central government agency[END_REF].
This paper focuses particularly on one element of ST, which is the concept of conflict and contradiction. According to Walsham [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF], this concept is largely ignored in the literature as well as in the IS research. Giddens defines contradiction as "an opposition or disjunction of structural principles of social systems, where those principles operate in terms of each other but at the same time contravene one another" [START_REF] Giddens | Central problems in social theory[END_REF]. To supplement contradiction which occurs at the structural level, he conceptualises conflict, which is claimed to occur at the level of social practice. In his own words, conflict is a "struggle between actors or collectives expressed as definite social practices" [START_REF] Giddens | Central problems in social theory[END_REF]. Based on the original writing, Walsham [START_REF] Walsham | Cross-Cultural Software Production and Use: A Structurational Analysis[END_REF] interprets conflicts as the real activity and contradiction as the potential basis for conflict which arises from structural contradictions.
This theorising has immediate application to the study of ERP systems use in budgeting. It is deemed that the flexibility (in decision-making) and integration (in management control) inherent in budgeting are the real activities that face business controllers in their daily operations with budgeting. Meanwhile, ERP systems and budgeting are treated as two different social structures [START_REF] Orlikowski | The Duality of Technology: Rethinking the Concept of Technology in Organizations[END_REF] which form the potential basis for conflict due to the clash between these structures. The next section discusses the research method and the case organisations involved in this study.
Research method and case description
This study employs an interpretative case study method according to Walsham [START_REF] Walsham | Interpretive Case Studies in IS Research: Nature and Method[END_REF]. The primary research design is a multiple case study [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF] in which the researcher investigates a single phenomenon [START_REF] Gerring | What is a case study and what is it good for?[END_REF], namely the use of ERP systems in budgeting. This research design is based on rich empirical data [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF][START_REF] Eisenhardt | Building Theories from Case Study Research[END_REF]; therefore, it tends to generate better explanations in response to the initial research aim to describe and explain ERP system use in budgeting.
Eleven for-profit organisations from Thailand are included in this study. To be eligible for the study, these organisations meet the following three criteria. First they have installed and used an ERP system for finance and accounting functions for at least two years to ensure system maturity [START_REF] Nicolaou | Firm Performance Effects in Relation to the Implementation and Use of Enterprise Resource Planning Systems[END_REF]. Second, they employ budgeting as the main management accounting control. Third they are listed on a stock exchange to ensure size and internal control consistency due to stock market regulations [START_REF] Grabski | A review of ERP research: A future agenda for accounting information systems[END_REF].
This research is designed with triangulation in mind [START_REF] Miles | Qualitative data analysis : an expanded sourcebook[END_REF] in order to improve the validity of the research findings. Based on Denzin [START_REF] Denzin | The Reserach Act: A Theoretical Introduction to Sociological Methods[END_REF]'s triangulation typologies, methodological triangulation is applied in this study. Interviews, which are the primary data collection method, were conducted with twenty-one business controllers in eleven for-profit organisations in Thailand in autumn 2011. These interviews were conducted at the interviewees' locations; therefore, data from several other sources, such as internal documentation and system demonstrations, were available to the researcher for the methodological triangulation purpose. The interviews followed a semi-structured format lasting approximately one to two hours on average. Interview participants are business controllers who are directly responsible for budgeting as well as IS technologies within their organisations. Interview participants include, for example, chief financial officer (CFO), accounting vice president, planning vice president, accounting policy vice president, management accounting manager, business analyst, and business intelligence manager. Appendix 1 provides an excerpt of the interview guide. All interview participants have been working for their current organisations for a considerable amount of time, ranging between two and twenty years. Therefore it is deemed that they are knowledgeable of the subject under investigation. All interviews were recorded, transcribed and analysed in Nvivo8 data analysis software. Coding was performed following the inductive coding technique [START_REF] Miles | Qualitative data analysis : an expanded sourcebook[END_REF] using a simple two-level scheme: an open-ended general etic coding followed by a more specific emic coding, in order to allow maximum interweaving within the data analysis. Appendix 2 provides an example of the coding process performed in this research.
With regard to the case companies, the organisations selected represent core industries of Thailand, such as the energy industry (Cases A-C), the food industry (Cases D-G) and the automobile industry (Cases H and I). The energy group is the backbone of Thailand's energy production chain, which accounts for more than half of the country's energy demands. The food industry group includes business units of global food companies and Thai food conglomerates which export foods worldwide. The automobile industry group is directly involved in the production and distribution chains of the world's leading automobile brands. For the two remaining cases, Case J is a Thai business unit of a global household electronic appliance company, and Case K is a Thai hospitality conglomerate which operates numerous five-star hotels and luxury serviced apartments throughout the Asia Pacific region. In terms of IS technologies, all of these companies employ both ERP and spreadsheets (SSs) for budgeting functions. However, some also have access to BI applications. Some companies employ off-the-shelf BI solutions for budgeting purposes, such as the Cognos BI system, whereas others choose to develop their own BI systems in collaboration with IS/IT consultants. This type of in-house BI is referred to as "own BI". Table 1 provides a description of each case organisation. The next section presents the data analysis obtained from these organisations.
Analysis
The analysis is presented based on the theoretical section presented earlier. It starts with the 'conflict' between (1) the ERP system and flexibility and (2) the ERP system and integration at the four budgeting activity levels. These two sections aim to explain how the ERP systems are used or not used in budgeting. The paper then proceeds to discuss the 'contradiction' between the ERP system and budgeting at a structural level in order to suggest why the ERP systems are used or not used to support budgeting activities.
Conflict at the activity level: ERP system and flexibility
Flexibility, defined as business controllers' discretion to use IS technologies for budget-related decision-making [START_REF] Ahrens | Accounting for flexibility and efficiency: A field study of management control systems in a restaurant chain[END_REF], is needed throughout the budgeting process.
Based on a normal budgeting cycle, there are two important activities in relation to the flexibility definition: (1) budget construction, and (2) budget reporting. These two activities require business controllers to construct a data model on an IS technology which takes into account the complex environmental conditions [START_REF] Frow | Continuous budgeting: Reconciling budget flexibility with budgetary control[END_REF][START_REF] Chenhall | Management control systems design within its organizational context: findings from contingency-based research and directions for the future[END_REF] to determine the best possible alternatives.
In the first activity of budget construction, this process requires a high level of flexibility because budgets are typically constructed in response to specific activities and conditions presented in each business unit. The ERP system is not called upon for budget construction in any case company because of the following two reasons: (1) the technology is developed in a generic manner such that it cannot be used to support any specific budgeting process. The Vice President Information Technology in Case I mentions: "SAP [ERP] is too generic1 for budgeting. […] They [SAP ERP developers] have to develop something that perfectly fits with the nature of the business, but I know it is not easy to do because they have to deal with massive accounting codes and a complicated chart of accounts". This suggestion is similar to the reason indicated by the Financial Planning Manager in Case F who explains that her attempt to use an ERP system for budgeting was not successful because "SAP [ERP] has a limitation when it comes to revenue handling. It cannot handle any complicated revenue structure". (2) The technology is not flexible enough to accommodate changes in business conditions which are the keys to forecasting future business operations. The Central Accounting Manager in Case G suggests that the ERP system limits what business controllers can do with their budgeting procedures in connection with volatile environments. She explicitly mentions that: "our [budgeting] requirements change all the time. The ERP system is fixed; you get what the system is configured for. It is almost impossible to alter the system. Our Excel [spreadsheets] can do a lot more than the ERP system. For example, our ERP system does not contain competitor information. In Excel, I can just create another column and put it in".
In the second activity of budget reporting, all cases run basic financial accounting reports from the ERP systems, and then they further edit the reports to fit their managerial requirements and variance analysis in spreadsheets. The practice is also similar in Cases A, B and E, where the ERP systems are utilised for budget monitoring (see more discussion in the next section). For example, the Corporate Accounting Manager in Case D indicates how the ERP system is not flexible for reporting and how he works around it: "When I need to run a report from the ERP system, I have to run many reports then I mix them all in Excel [spreadsheets] to get exactly what I want". The Business Intelligence Manager in Case K comments on why she sees that the ERP system is not flexible enough for variance analysis: "It is quite hard to analyse budgeting information in the ERP system. It is hard to make any sense out of it because everything is too standardised".
In summary, the empirical data suggests the ERP systems are not used to support the flexibility domain in budgeting since that there is a clear conflict between the ERP system and the flexibility required in budgeting activities. The ERP systems put limitations on what business controllers can or cannot do with regard to flexibility in budgeting. For example business controller cannot perform complicated business forecasting which is necessary for budget construction on the ERP system. This conflict is clearly addressed by the Financial Planning Manager in Case F who states: "The SAP [ERP] functions are not flexible enough [for budgeting] but it is quite good for [financial] accounting".
Conflict at the activity level: ERP system and integration
Integration, defined as the adoption of IS technologies to standardise data definitions and structures across data sources [START_REF] Goodhue | The impact of data Integration on the costs and benefits of information systems[END_REF], is needed for budget control. Based on a normal budgeting cycle, there are two important activities in relation to the definition of integration: (1) budget consolidation, and (2) budget monitoring. Various departmental budgets are consolidated together at an organisational level, which is subsequently used for comparison with actual operating results generated from financial accounting for monitoring purposes.
In the first activity of budget consolidation, none of the case companies is reported to be using the ERP system for this function. The majority of budgets are constructed and consolidated outside the main ERP system, typically in spreadsheets (except Case B, which uses a mixture of spreadsheets and BI). The CFO in Case H gives an overview of the company budgeting process: "We do budgeting and business planning processes on Excel [spreadsheets]. It is not only us that do it like this. All of the six [Southeast Asian] regional companies also follow this practice. Every company has to submit budgets on spreadsheets to the regional headquarters. The budget consolidation is also completed on spreadsheets". Regardless of a company's choice to bypass the ERP system for budget consolidation, all the case companies are able to use their ERP systems to prepare and consolidate financial statements for a financial accounting purpose at a specific company level, but not necessarily at a group level. These financial accounting statements will be used to support the second activity of budget monitoring.
In the second activity of budget monitoring, three case companies (Cases A, B and E) report that they use their ERP systems for budget monitoring purposes. The Planning Vice President in Case B mentions: "SAP [ERP] is more like a place which we put budgeting numbers into. We use it to control budgets. We prepare budgets outside the system but we put the final budget numbers into it for a controlling purpose so that we can track budget spending in relation to the purchasing function in SAP [ERP]". A similar use of the ERP systems is presented in Cases A and E, where budgets are loaded into SAP ERP Controlling (CO) and Project System (PS) modules for budget spending monitoring. Note that only the final budget numbers (after budget consolidation in spreadsheets) are loaded into the ERP system for a control purpose alone. The ERP system does not play a part in any budget construction processes in these three cases, as it is mentioned in the previous section that budget construction is entirely achieved outside the main ERP system.
In conclusion, the empirical data suggests that the ERP systems are not widely used to support the integration domain in budgeting. However the empirical data suggests that the ERP systems have the potential to support budget integration as it has been shown earlier that all case companies use the ERP system to prepare financial statements and some cases use the ERP systems to monitor budget spending/achievement. Regardless of the potential that the ERP systems offer, these companies have not widely used the ERP systems to support budgeting practice.
Companies have yet to realise this hidden potential of the ERP system [START_REF] Kallunki | Impact of enterprise resource planning systems on management control systems and firm performance[END_REF] to integrate currently separated financial accounting (e.g. financial statement preparation) and management accounting (e.g. budgeting) practices.
Contradiction at the structural level
Based on the discussions at the two activity levels presented in earlier sections, this section builds on the concept of contradiction in ST to explain how and why the ERP systems are used or not used in budgeting. Budgeting as a social practice is deemed to operate in terms of flexibility and integration, while at the same time these contravene each other. It has been shown earlier that the four main budgeting activities in a typical budgeting cycle (budget construction, budget consolidation, budget monitoring and budget reporting) belong equally to both the integration and flexibility domains. With regard to these four activities, it has been shown that they remain outside the main ERP systems, with the exception of the budget monitoring activity alone; in this activity, a minority of the case companies use the ERP systems to support the work function. It has also been noted that the ERP systems have the potential to consolidate budgeting information, but it seems that companies have not yet decided to utilise this capability offered by the systems.
Explanations based on the utilitarian view, through the conflict and contradiction concepts in ST, deem that the ERP systems are not used in the budgeting activities because the systems have the capability to support the integration function alone. Compared with budgeting practice, which needs flexibility in decision-making as well as integration in management control, the ERP systems are obviously not suitable to support budgeting. Figure 1 shows the overall discussion about the contradiction between the ERP systems and budgeting at a structural level. It explains the shifts in the roles of budgeting activities from flexibility in activity one, budget construction, to integration in activity two, budget consolidation, and so on. It also elaborates how the ERP systems can have the potential to support some particular activities (such as budget consolidation and budget monitoring) but not the others.
So why do the ERP systems support the integration but not the flexibility in budgeting? Despite all the endlessly fancy claims made by numerous ERP vendors, the basic assumption of the ERP system is a reference model which enforces an underlying data structure, business process and organisational structure. The procedures described by the system must be strictly adhered to throughout organisational task executions [START_REF] Kallinikos | Deconstructing information packages: Organizational and behavioural implications of ERP systems[END_REF]. Therefore it is hard, or even impossible, to alter the systems in response to new business requirements or circumstances, because such change is contradictory to the most basic principle of the systems.
So how can we address the limitations of the ERP systems in supporting the flexibility needs in budgeting? As Figure 1 explains, other types of IS technologies, such as spreadsheets and business intelligence (BI), must be called upon to support the activities that the ERP systems cannot accommodate [START_REF] Hyvönen | A virtual integration-The management control system in a multinational enterprise[END_REF]. These technologies are built and designed on assumptions different from those of the ERP systems; therefore they can accommodate the flexibility in budgeting. These systems can be combined to support strategic moves made by top management, in line with the LOC framework [START_REF] Simons | How New Top Managers Use Control Systems as Levers of Strategic Renewal[END_REF].
Conclusions and implications
This paper investigates how and why the ERP systems are used or not used in budgeting. It builds on the concepts of conflict and contradiction in ST, approached from the utilitarian view of IS technology use. Budgeting is treated as a social practice which portrays the two consecutive but contradictory roles of flexibility and integration. Using empirical data from eleven case companies in Thailand, the analysis at the activity level reveals that the ERP systems are not used to support the flexibility domain in budgeting because the systems impede business controllers from performing the flexibility-related activities in budgeting, namely budget construction and budget reporting. The analysis of the integration-related budgeting functions reveals that the ERP systems are not widely used to support these activities either. However, it strongly suggests the systems' capability to support the integration function in budgeting, as the systems are widely used to generate financial reports, and some case companies use the ERP systems for budget monitoring purposes. The analysis at the structural level concludes why the ERP systems are not widely used to support budgeting. It is deemed that there is a contradictory relationship between the ERP systems and budgeting because the systems operate only in terms of integration, while the budgeting process assumes both roles. For this reason, other types of IS technologies, such as spreadsheets and BI, are called upon to accommodate tasks that cannot be supported in the main ERP systems.
This research result concurs with previous research conclusions that the ERP systems may pose a flexibility issue to organisations because the systems cannot be tailored or changed in response to business conditions or user requirements [START_REF] Booth | The impacts of enterprise resource planning systems on accounting practice -The Australian experience[END_REF][START_REF] Rom | Enterprise resource planning systems, strategic enterprise management systems and management accounting: A Danish study[END_REF][START_REF] Soh | Cultural fits and misfits: Is ERP a universal solution?[END_REF][START_REF] Akkermans | The impact of ERP on supply chain management: Exploratory findings from a European Delphi study[END_REF]. Hence it does not support research findings which conclude that the ERP systems promote flexibility in organisations [START_REF] Brazel | The Effect of ERP System Implementations on the Management of Earnings and Earnings Release Dates[END_REF]. In addition, it corresponds to previous findings which indicate that the ERP systems may assist integration in organisations [START_REF] Shang | Assessing and managing the benefits of enterprise systems: the business manager's perspective[END_REF][START_REF] Quattrone | A 'time-space odyssey': management control systems in two multinational organisations[END_REF]. At the least, the ERP systems can support company-wide data integration, which is significant in accounting and management control, but not necessarily company-wide business process integration [START_REF] Dechow | Enterprise resource planning systems, management control and the quest for integration[END_REF][START_REF] Dechow | Management Control of the Complex Organization: Relationships between Management Accounting and Information Technology In[END_REF].
The use of the utilitarian view to generate explanations for ERP system use/non-use is still somewhat limited. There are many aspects that the utilitarian view cannot capture. For example, it cannot explain why the ERP systems are not widely used to support the budget integration functions despite the system capabilities for financial consolidation and budget monitoring. This suggests that other views, such as the social view as well as the contingency view suggested in prior literature, are necessary in explaining ERP system use/non-use. Therefore future IS use research should employ theories and insights from many perspectives to gain insights into the IS use/non-use phenomena.
The results presented in this study should be interpreted with careful attention. A case study, by definition, makes no claim to be typical. The case study approach rests on a small, idiosyncratic and predominantly non-numerical sample, so there is no way to establish the probability that the findings generalise to a larger population. Instead, the hallmark of the case study approach lies in theory building [START_REF] Eisenhardt | Theory building from cases: Opportunities and challenges[END_REF], which can be transposed beyond the original sites of study.
The research offers two new insights to the IS research community. First, it explains the limited use of ERP systems in budgeting from a utilitarian perspective. It holds that the ERP systems have the potential to support only half of the budgeting activities: explicitly, the systems can support the integration role in management control but not the flexibility role in decision-making. Second, it shows that business controllers recognise the limitations imposed by the ERP systems and choose to rely on other IS technologies, especially spreadsheets, to accomplish their budgeting tasks. Spreadsheet use is problematic in itself; issues such as spreadsheet errors and fraud are well documented in the literature. Therefore, academia should look for solutions that improve the use of professionally designed IS technologies (e.g., the ERP system or the BI) in organisations and reduce the reliance on spreadsheets in budgeting as well as in other business activities.
For practitioners, this research is a warning to make informed decisions about IT/IS investments. ERP vendors often persuade prospective buyers to think that their systems are multipurpose. This research shows at least one of the many business functions in which the ERP systems do not excel. Thus any further IT/IS investment must be made with serious consideration of the business function that needs support, as well as of the overall business strategies guiding the entire organisation.
Figure 1. Contradiction between budgeting and ERP system
Table 1. Case company description
Case Main Activities Owner ERP SSs BI
A Power plant Thai SAP Yes Magnitude
B Oil and Petrochemical Thai SAP Yes Cognos
C Oil refinery Thai SAP Yes -
D Frozen food processor Thai SAP Yes -
E Drinks and dairy products Foreign SAP Yes Magnitude
F Drinks Foreign SAP Yes Own BI
G Agricultural products Thai BPCS Yes -
H Truck Foreign SAP Yes -
I Automobile parts Thai SAP Yes Own BI
J Electronic appliances Foreign JDE Yes Own BI
K Hotels and apartments Thai Oracle Yes IDeaS
The italics shown in the interview excerpts represent the author's intention to emphasise certain information in the original interview text. This practice is used throughout the paper.
Appendix 1: Interview guide
How do you describe your business unit information?
What IS technologies are used in relation to the budgeting procedure?
What are the budgeting procedures in your organisation?
What are the characteristics of pre-budget information gathering and analysis?
How does your business organisation prepare a budget?
How does your business organisation consolidate budget(s)?
How does your business organisation monitor budgets?
How does your business organisation prepare budget-related reports?
How does your organisation direct strategic management?
How does your organisation control normative management?

Interview excerpt: "From what I understand I think SAP is developing an industrial product line but the budgeting function is very small so they think that it might not worth an investment. First I think that is why they brought in the BI. Second, I think budgeting is something for business students. So they have to develop something that perfectly fits with the nature of the business, but I know it is not easy to do because they have to deal with massive accounting codes and a complicated chart of accounts."
"1003597"
] | [
"344927"
] |
01484775 | en | [
"spi",
"math"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01484775/file/Surprisepressureversion3versionTCAndromarch7.pdf | Thomas Carraro
email: thomas.carraro@iwr.uni-heidelberg.de
Eduard Marušić-Paloka
Andro Mikelić
email: mikelic@univ-lyon1.fr
Effective pressure boundary condition for the filtration through porous medium via homogenization
Keywords: homogenization, stationary Navier-Stokes equations, stress boundary conditions, effective tangential velocity jump, porous media
We present homogenization of the viscous incompressible porous media flows under stress boundary conditions at the outer boundary. In addition to Darcy's law describing filtration in the interior of the porous medium, we derive rigorously the effective pressure boundary condition at the outer boundary. It is a linear combination of the outside pressure and the applied shear stress. We use the two-scale convergence in the sense of boundary layers, introduced by Allaire and Conca [SIAM J. Math. Anal., 29 (1997), pp. 343-379] to obtain the boundary layer structure next to the outer boundary. The approach allows establishing the strong L 2 -convergence of the velocity corrector and identifica-
Introduction
The porous media flows are of interest in a wide range of engineering disciplines including environmental and geological applications, flows through filters etc. They take place in a material which consists of a solid skeleton and billions of interconnected fluid filled pores. The flows are characterised by large spatial and temporal scales. The complex geometry makes direct computing of the flows, and also reactions, deformations and other phenomena, practically impossible. In the applications, the mesoscopic modeling is privileged and one search for effective models where the information on the geometry is kept in the coefficients and which are valid everywhere. The technique which allows replacing the physical models posed at the microstructure level by equations valid globally, is called upscaling. Its mathematical variant, which gives also the rigorous relationship between the upscaled and the microscopic models is the homogenization technique.
It has been applied to a number of porous media problems, starting from the seminal work of Tartar [START_REF] Tartar | Convergence of the homogenization process[END_REF] and the monograph [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF]. Many subjects are reviewed in the book [START_REF] Hornung | Homogenization and Porous Media[END_REF]. See also the references therein.
Frequently, one has processes on multiple domains and model-coupling approaches are needed. Absence of the statistical homogeneity does not allow direct use of the homogenization techniques. Examples of situations where the presence of an interface breaks the statistical homogeneity are
• the flow of a viscous fluid over a porous bed,
• the forced infiltration into a porous medium.
2
The tangential flow of an unconfined fluid over a porous bed is described by the law of Beavers and Joseph [START_REF] Beavers | Boundary conditions at a naturally permeable wall[END_REF] and it was rigorously derived in [START_REF] Jäger | On the interface boundary condition of Beavers, Joseph, and Saffman[END_REF] and [START_REF] Marciniak-Czochra | Effective pressure interface law for transport phenomena between an unconfined fluid and a porous medium using homogenization[END_REF] using a combination of the homogenization and boundary layer techniques. The forced injection problem was introduced in [START_REF] Levy | On boundary conditions for fluid flow in porous media[END_REF] and the interface conditions were rigorously established and justified in [START_REF] Carraro | Effective interface conditions for the forced infiltration of a viscous fluid into a porous medium using homogenization[END_REF].
A particular class of the above problems is derivation of the homogenized external boundary conditions for the porous media flows. In the case of the zero velocity at the external boundary of the porous medium, one would impose zero normal component of the Darcy velocity as the homogenized boundary condition. The behavior of the velocity and pressure field close to the flat external boundary, with such boundary condition, has been studied in [START_REF] Jäger | On the Flow Conditions at the Boundary Between a Porous Medium and an Impervious Solid[END_REF], using the technique from [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF]. The error estimate in 2D, for an arbitrary geometry has been established in [START_REF] Marušić-Paloka | An Error Estimate for Correctors in the Homogenization of the Stokes and Navier-Stokes Equations in a Porous Medium[END_REF].
The case of the velocity boundary conditions could be considered as "intuitively" obvious. Other class of problems arises when we have a contact of the porous medium with another fluid flow and the normal contact force is given at the boundary. It describes the physical situation when the upper boundary of the porous medium in exposed to the atmospheric pressure and wind (see e.g. [START_REF] Coceal | Canopy model of mean winds through urban areas[END_REF]). Or, more generally, when the fluid that we study is in contact with another given fluid. Assuming that the motion in porous medium is slow enough that the interface Σ between two fluids can be seen as immobile. Intuitively, it is expected that the homogenized pressure will take the prescribed value at the boundary.
In this article we study the homogenization of the stationary Navier-Stokes equations with the given normal contact force at the external boundary and we will find out that the result is more rich than expected.
Setting of the problem
We start by defining the geometry. Let and d be two positive constants.
Let Ω = (0, ) × (-d, 0) ⊂ R 2 be a rectangle. We denote the upper boundary by
Σ = {(x 1 , 0) ∈ R 2 ; x 1 ∈ (0, ) } .
The bottom of the domain is denoted by
Γ = {(x 1 , -d) ∈ R 2 ; x 1 ∈ (0, ) } .
We set Γ = ∂Ω\Σ . Let A ⊂⊂ R 2 be a smooth domain such that A ⊂ (0, 1) 2 ≡ Y . The unit pore is Y * = Y \A. Now we choose the small parameter ε 1 such that ε = /m, with m ∈ N and define
T ε = {k ∈ Z 2 ; ε(k + A) ⊂ Ω } , Y * ε,k = ε(k + Y * ) , A ε k = ε (k + A).
The fluid part of the porous medium is now
Ω ε = Ω\ k∈Tε ε (k + A). Finally, B ε = k∈Tε ε (k + A)
is the solid part of the porous medium and its boundary is
S ε = ∂B ε .
[Figure: schematic of the perforated domain, showing the outer boundary Σ on top, the boundary Γ, the periodic solid inclusions A and the solid part B ε .]
On Σ we prescribe the normal stress and Γ is an impermeable boundary. In the dimensionless form, the Stokes problem that we study reads
-µ ∆u ε + ∇p ε = F , div u ε = 0 in Ω ε , (1)
T(u ε , p ε ) e 2 = H = (P, Q) on Σ, u ε = 0 on S ε ∪ Γ, (2)
(u ε , p ε ) is -periodic in x 1 . (3)
Here T(v, q) denotes the stress tensor and D v the rate of strain tensor
T(v, q) = -2µ Dv + q I , Dv = 1 2 ∇v + (∇v) t
and µ is a positive constant.
Assumption 1. We suppose ∂A ∈ C 3 , F ∈ C 1 (Ω) 2 and P = P (x 1 ), Q = Q(x 1 ) being elements of C 1 per [0, ].
For the existence, uniqueness and regularity of solutions to Stokes problem (1)-
, under Assumption 1, we refer e.g. to [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF], Sec. 4.7.
Furthermore, we consider the full stationary incompressible Navier-Stokes system
-µ ∆u 1,ε + (u 1,ε ∇)u 1,ε + ∇p 1,ε = F , div u 1,ε = 0 in Ω ε (4) T(u 1,ε , p 1,ε ) e 2 = H = (P, Q) on Σ, u 1,ε = 0 on S ε ∪ Γ (5) (u 1,ε , p 1,ε ) is -periodic in x 1 . (6)
Existence of a solution for problem (4)-( 6) is discussed in Sec. 5.
Our goal is to study behavior of solutions to (1)-( 3) and ( 4)-( 6) in the limit when the small parameter ε → 0.
The main result
Our goal is to describe the effective behavior of the fluid flow in the above described situation. The filtration in the bulk is expected to be described by Darcy's law and we are looking for the effective boundary condition on the upper boundary Σ. To do so, we apply various homogenization techniques, such as two-scale convergence ( [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] , [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]) and the two-scale convergence for boundary layers ( [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]). We prove the following result:
Theorem 1. Let us suppose Assumption 1 and let (u ε , p ε ) be the solution of problem ( 1)-(3).
90
Then there exists an extension of p ε to the whole Ω, denoted again by the same symbol, such that
p ε → p 0 strongly in L 2 (Ω), ( 7
)
where p 0 is the solution of problem div K(∇p 0 -F) = 0 in Ω, ( 8
)
p 0 is -periodic in x 1 , n • K (∇p 0 -F) = 0 on Γ, (9)
p 0 = C π P + Q on Σ, (10)
with K the permeability tensor, defined by (83), and C π the boundary layer pressure stabilisation constant given by (41).
Next, let (w, π) be the solution of the boundary layer problem (36)-(38).
Then, after extending u ε and w by zero to the perforations, we have
( u ε (x) − ε P (x 1 ) w(x/ε) ) / ε 2 ⇀ V weakly in L 2 (Ω), (11)
u ε (x) / ε 2 ⇀ P (x 1 ) ( ∫_{G*} w 1 (y) dy ) δ Σ e 1 + V weak* in M(Ω), (12)
( u ε − ε P (x 1 ) w(x/ε) ) / ε 2 − Σ_{k=1}^{2} w k (x/ε) ( F k − ∂p 0 /∂x k ) → 0 strongly in L 2 (Ω), (13)
where G * is the boundary layer fluid/solid interface given by (28), V satisfies the Darcy law V = K(F − ∇p 0 ), M(Ω) denotes the set of Radon measures on Ω and δ Σ is the Dirac measure concentrated on Σ, i.e. ⟨δ Σ , ψ⟩ = ∫_Σ ψ(x 1 , 0) dx 1 .
An analogous result holds for the homogenization of the stationary Navier-Stokes equations (4)-(6).
Theorem 2. Under the assumptions on the geometry and the data from Theorem (1), there exist solutions (u 1,ε , p 1,ε ) of problem ( 4)-( 6) such that convergences ( 7), ( 11)-( 13) take place.
Proof of Theorem 1
The proof is divided in several steps. First we derive the a priori estimates.
Then we pass to the two-scale limit for boundary layers, in order to determine the local behavior of the solution in vicinity of the boundary. Once it is achieved, we subtract the boundary layer corrector from the original solution and use the classical two-scale convergence to prove that the residual converges towards the limit that satisfies the Darcy law. At the end we prove the strong convergences.
Step one: A priori estimates
We first recall that in Ω ε Poincaré and trace constants depend on ε in the following way
|φ| L 2 (Ωε) ≤ C ε |∇φ| L 2 (Ωε) ( 14
)
|φ| L 2 (Σ) ≤ C √ ε |∇φ| L 2 (Ωε) , ∀ φ ∈ H 1 (Ω ε ) , φ = 0 on S ε (15)
We also recall that the norms |Dv| L 2 (Ωε) and |∇v| L 2 (Ωε) are equivalent, due to the Korn's inequality, which is independent of ε (see e.g. [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF]).
Here and in the sequel we assume that u ε is extended by zero to the whole Ω. In order to extend the pressure p ε we need Tartar's construction from his seminal paper [START_REF] Tartar | Convergence of the homogenization process[END_REF]. It relies on the related construction of the restriction operator, acting from the whole domain Ω to the pore space Ω ε . In our setting we deal with the functional spaces
X 2 = {z ∈ H 1 (Ω) 2 ; z = 0 for x 2 = -d } X ε 2 = {z ∈ X 2 ; z = 0 on S ε } .
Then, after [START_REF] Tartar | Convergence of the homogenization process[END_REF] and the detailed review in [START_REF] Allaire | One-Phase Newtonian Flow[END_REF], there exists a continuous restric-
120 tion operator R ε ∈ L(X 2 , X ε 2 ), such that div (R ε z) = div z + k∈Tε 1 |Y * ε,k | χ ε,k A ε k div z dx, ∀ z ∈ X 2 , |R ε z| L 2 (Ωε) ≤ C (ε |∇z| L 2 (Ω) + |z| L 2 (Ω) ) , ∀ z ∈ X 2 , |∇R ε z| L 2 (Ωε) ≤ C ε (ε |∇z| L 2 (Ω) + |z| L 2 (Ω) ) , ∀ z ∈ X 2 ,
where χ ε,k denotes the characteristic function of the set Y * ε,k , k ∈ T ε . Through a duality argument, it gives an extension of the pressure gradient and it was found in [START_REF] Lipton | Darcy's law for slow viscous flow past a trationary array of bubbles[END_REF] that the pressure extension p is given by the explicit formula
pε = p ε in Ω ε 1 |Y * ε,k | Y * ε,k p ε dx in Y * ε,k for each k ∈ T ε . (16)
For details we refer to [START_REF] Allaire | One-Phase Newtonian Flow[END_REF]. In addition, a direct computation yields
Ωε p ε div (R ε z) dx = Ω pε dx div z dx , ∀ z ∈ X 2 . (17)
Both the velocity and the pressure extensions are, for simplicity, denoted by the same symbols as the original functions (u ε , p ε ).
It is straightforward to see that:
Lemma 1. Let (u ε , p ε ) be the solution to problem (1), [START_REF] Allaire | One-Phase Newtonian Flow[END_REF]. Then there exists 125 some constant C > 0, independent of ε, such that
|∇u ε | L 2 (Ω) ≤ C √ ε ( 18
)
|u ε | L 2 (Ω) ≤ C ε 3/2 ( 19
)
|p ε | L 2 (Ω) ≤ C √ ε . (20)
Proof. We start from the variational formulation of problem (1), ( 2)
µ Ωε Du ε : Dv dx = Σ H • v dS + Ωε F • v dx, ∀ v ∈ V (Ω ε ) , (21)
V (Ω ε ) = {v ∈ H 1 (Ω ε ) 2 ; div v = 0, v = 0 on S ε ∪ Γ, v is -periodic in x 1 }
Using u ε as the test function and applying ( 14)-( 15) yield
µ Ωε |Du ε | 2 dx = Σ H • u ε dS + Ωε F • u ε dx ≤ C √ ε|Du ε | L 2 (Ωε) .
Now ( 14) implies ( 18) and ( 19). Since we have extended the pressure to the solid part of Ω, using Tartar's construction, ( 18) and ( 17) imply
|p ε | L 2 (Ω)/R = sup g∈L 2 (Ω)/R Ω p ε g dx |g| L 2 (Ω)/R = sup z∈X2 Ωε p ε div (R ε z) dx |z| H 1 (Ω) 2 ≤ C ε |∇u ε | L 2 (Ω) ,
giving the pressure estimate (20).
Step two: Two-scale convergence for boundary layers
We recall the definition and some basic compactness results for two-scale convergence for boundary layers due to Allaire and Conca [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]. In the sequel, if the index y is added to the differential operators D y , ∇ y , div y , then the derivatives are taken with respect to the fast variables y 1 , y 2 instead of x 1 , x 2 .
Let G = (0, 1) × ( -∞ , 0) be an infinite band. The bounded sequence (φ ε ) ε>0 ⊂ L 2 (Ω) is said to two-scale converge in the sense of the boundary layers if there
exists φ 0 (x 1 , y) ∈ L 2 (Σ × G) such that 1 ε Ω φ ε (x) ψ x 1 , x ε dx → Σ G φ 0 (x 1 , y) ψ(x 1 , y) dx 1 dy , (22)
for all smooth functions ψ(x 1 , y) defined in Σ × G, with bounded support, such
that y 1 → ψ(x 1 , y 1 , y 2 ) is 1-periodic.
We need the following functional space
D 1 = {ψ ∈ C ∞ (G) ; ψ is 1 -periodic in y 1
and compactly supported in
y 2 ∈ (-∞, 0]} Now D 1 # (G) is the closure of D 1 in the norm |ψ| D 1 # (G) = |∇ψ| L 2 (G) .
It should be noticed that such functions do not necessarily vanish as y 2 → -∞. For that kind of convergence we have the following compactness result from [START_REF] Allaire | Boundary Layers in the Homogenization of a Spectral Problem in Fluid-Solid Structures[END_REF]:
Theorem 3. 1. Let us suppose 1 √ ε |φ ε | L 2 (Ω) ≤ C . ( 23
)
Then there exists φ 0 ∈ L 2 (Σ × G) and a subsequence, denoted by the same indices, such that φ ε → φ 0 two-scale in the sense of boundary layers.
2. Let us suppose
1 √ ε |φ ε | L 2 (Ω) + ε |∇φ ε | L 2 (Ω) ≤ C. ( 25
)
Then there exists
φ 0 ∈ L 2 (Σ; D 1 # (G)
) and a subsequence, denoted by the same indices, such that φ ε → φ 0 two-scale in the sense of boundary layers [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] ε ∇φ ε → ∇ y φ 0 two-scale in the sense of boundary layers . (
Using the a priori estimates, we now undertake our first passing to the limit.
Before we start we define
C = -∞ j=0 (j e 2 + ∂A ) , M = -∞ j=0 (j e 2 + A) , G * = G\ -∞ j=0 (j e 2 + A ). ( 28
)
We introduce the space D 1 #0 (G * ) defined similarly as D 1 # (G) but on G * and such that its elements have zero trace on C. Thus, we take
D 1 = {ψ ∈ C ∞ (G * ) ; ψ| C = 0 , ψ is 1 -periodic in y 1 ,
and compactly supported in y 2 ∈ (-∞, 0]} .
Then D 1 #0 (G * ) is its closure in the norm |ψ| D 1 #0 (G * ) = |∇ψ| L 2 (G * )
. Those functions do vanish as y 2 → -∞ due to the zero trace on C that prevents them to 135 tend to a constant.
Lemma 2. Let (v 0 , q 0 ) ∈ L 2 (Σ; D 1 #0 (G * )) × L 2 (Σ; L 2 loc (G *
)) be given by the boundary layer problem
-µ∆ y v 0 + ∇ y q 0 = 0, div y v 0 = 0 in G * , (29)
-2µ D y v 0 + q 0 I e 2 = H for y 2 = 0 , v 0 = 0 on C, (30)
(v 0 , q 0 ) is 1-periodic in y 1 , v 0 → 0 as y 2 → -∞ . (31)
Then
1 ε u ε → v 0 two-scale in the sense of boundary layers (32)
∇u ε → ∇ y v 0 two-scale in the sense of boundary layers .
Proof. The a priori estimates ( 19) and ( 18) and the compactness theorem 3 imply the existence of some
v 0 ∈ L 2 (Σ; D 1 #0 (G * )) such that v 0 = 0 on M and 1 ε u ε → v 0 two-scale in the sense of boundary layers ( 34
)
∇u ε → ∇ y v 0 two-scale in the sense of boundary layers . ( 35
)
Now we take the test function
z ε (x) = z x 1 , x ε ∈ D 1 #0 (G * ) 2 such that div y z = 0 and z(x 1 , • ) = 0 in M in the variational formulation for (1), (2) 2µ ε Ωε εD u ε ε : εD z ε dx - Ωε p ε div z ε dx = Σ H • z ε dS + Ωε F • z ε dx. Since ∂z ε ∂x j = ε -1 ∂z ∂y j + δ 1j ∂z ∂x 1
we get on the limit 2µ
Σ G D y v 0 (x 1 , y) : D y z(x 1 , y) dy dx 1 = Σ H• 1 0 z(x 1 , y 1 , 0)dy 1 dx 1 .
Furthermore, since div u ε = 0 it easily follows that div y v 0 = 0. Thus there exists q 0 ∈ L 2 (Σ; L 2 loc (G * )) such that (v 0 , q 0 ) satisfy ( 29)-(31).
The boundary layer corrector (v 0 , q 0 ) can be decomposed as v 0 = P (x 1 ) w(y) 145 and q 0 = P (x 1 ) π(y) + Q(x 1 ) , where
-µ∆ y w + ∇ y π = 0 , div y w = 0 in G * , (36)
(-2µ D y w + π I) e 2 = e 1 for y 2 = 0 , w = 0 on C, ( 37
) (w, π) is 1-periodic in y 1 , w → 0 as y 2 → -∞ . (38)
Problem (36), (38) is of the boundary layer type. Existence of the solution and exponential decay can be proved as in [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF]. We have Theorem 4. Problem (36), (38) has a unique solution (w, π)
∈ D 1 #0 (G * ) × L 2
loc (G * ). Furthermore, there exists a constant C π such that
150 |e α |y2| ( π -C π ) | L 2 (G * ) ≤ C (39) |e α |y2| w | L 2 (G * ) + |e α |y2| ∇ w | L 2 (G * ) ≤ C . ( 40
)
for some constants C, α > 0 .
In the sense of (39) we write
C π = lim_{y 2 → −∞} π(y). (41)
Using (40) yields
∫_0^1 w 2 (y 1 , y 2 ) dy 1 = 0, ∀ y 2 ≤ 0. (42)
(42) implies
∫_{G*} w 2 dy = 0. (43)
Remark 3. Integrating (37) with respect to y 1 yields
e 1 = 1 0 -µ ∂w 1 ∂y 2 e 1 + ∂w 2 ∂y 1 e 1 + 2 ∂w 2 ∂y 2 e 2 + π e 2 (y 1 , 0) dy 1 .
Equating the second components gives
0 = 1 0 -2 µ ∂w 2 ∂y 2 + π (y 1 , 0) dy 1 = 1 0 2 µ ∂w 1 ∂y 1 + π (y 1 , 0) dy 1 = = 1 0 π(y 1 , 0) dy 1 .
If we test (36) with w k and (80) by w and combine, we get
C π = K -1 22 1 0 w 2 1 (y 1 , 0) dy 1 + 1 0 -2µ ∂w k ∂y 2 + π k e 2 (y 1 , 0) w(y 1 , 0) dy 1 .
Finally, we denote
J = {y 2 ∈] -∞, 0] ; (y 1 , y 2 ) ∈ M , y 1 ∈]0, 1[ }. Denoting m A = min{y 2 ∈ [0, 1] ; (y 1 , y 2 ) ∈ A } , M A = max{y 2 ∈ [0, 1] : (y 1 , y 2 ) ∈ A } .
The set J is then a union of disjoint intervals We now know the behavior of (u ε , p ε ) in vicinity of Σ. To get additional information of the behavior far from the boundary we deduce the boundary layer corrector from (u ε , p ε ) and define
J 0 = ] 0, m A [ , J i =]i -1 + M A , i + m a [ , i = 1,
U ε (x) = u ε (x) -ε P (x 1 ) w(x/ε) , P ε (x) = p ε (x) -[P (x 1 ) π(x/ε) + Q(x 1 )] .
The stress tensor T(v, q) = 2µ Dv -q I for such approximation satisfies
T(U ε , P ε ) = T(u ε , p ε ) -P (x 1 ) (2µD y w -πI) - -2µε dP dx 1 w 1 w 2 /2 w 2 /2 0 = T(u ε , p ε )- - P (x 1 ) 2µ ∂w1 ∂y1 -π + 2µε dP dx1 w 1 -Q µ P (x 1 ) ∂w1 ∂y2 + ∂w2 ∂y1 + ε dP dx1 w 2 µ P (x 1 ) ∂w1 ∂y2 + ∂w2 ∂y1 + ε dP dx1 w 2 P (x 1 ) 2µ ∂w2 ∂y2 -π -Q(x 1 )
By direct computation we get
-div T(U ε , P ε ) = f ε , (44)
f ε ≡ F + µε d 2 P dx 2 1 (w + w 1 e 1 ) + dP dx 1 2µ ∂w ∂y 1 -πe 1 + µ∇ y w 1 - dQ dx 1 e 1 , ( 45
) div U ε = -ε dP dx 1 w 1 in Ω ε , (46)
U ε = 0 on S ε , U ε = -ε P (x 1 ) w(x/ε) on Γ, (47)
(-2µ D U ε + P ε I) e 2 = 0 on Σ . ( 48
)
We want to derive appropriate a priori estimates for (U ε , P ε ). However, according to (46), the divergence of U ε is still too large for our purpose. Thus we need to compute the additional divergence corrector.
Lemma 3. There exists Φ ∈ H 2 (G * ) 2 such that div y Φ = w 1 in G * , (49)
Φ is 1-periodic in y 1 , Φ = 0 on C , Φ(y 1 , 0) = Ce 2 , (50)
e γ|y2| Φ ∈ L 2 (G * ) 4 and |Φ(y 1 , y 2 )| ≤ Ce -γ|y2| , for some γ > 0. (51)
Proof. We follow [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF] and search for Φ in the form
Φ = ∇ y ψ + curl y h = ∂ψ ∂y 1 - ∂h ∂y 2 , ∂ψ ∂y 2 + ∂h ∂y 1 .
The function ψ solves Again, assuming that U ε is extended by zero to the pores B ε we extend P ε using the formula ( 16) to prove:
170 -∆ y ψ = w 1 (y) in G * , ∂ψ ∂n = 0 on C, (52)
∂ψ ∂y 2 = d 0 = const. for y 2 = 0, ψ is 1-periodic in y 1 , (53)
Lemma 4. |∇U ε | L 2 (Ω) ≤ C ε (55) |U ε | L 2 (Ω) ≤ C ε 2 (56)
|P ε | L 2 (Ω) ≤ C . ( 57
)
Proof. It is straightforward to see that for the right-hand side, we have
|f ε | L 2 (Ω) ≤ C .
Furthermore
f ε = F -( dP dx 1 C π + dQ dx 1 ) e 1 + g ε , with |g ε | L 2 (Ω) = O( √ ε).
The idea is to test the system (44) with
Ũε = U ε + ε 2 dP dx 1 (x 1 ) Φ x ε , ( 58
)
where Φ is constructed in lemma 3. By the construction
div Ũε = ε 2 d 2 P dx 2 1 Φ ε 1 with Φ ε (x) = Φ(x/ε) . Thus |div Ũε | L 2 (Ω) ≤ C ε 5/2 .
The weak form of (44) reads
2µ Ωε D U ε : D z dx - Ωε P ε div z dx = Ωε f ε z dx , ∀ z ∈ X ε 2 (59) so that Ωε P ε div z dx ≤ C ( |D U ε | L 2 (Ωε) + ε )| z| H 1 (Ωε) , ∀ z ∈ X ε 2 . ( 60
)
Next we use identity [START_REF] Jäger | On the Flow Conditions at the Boundary Between a Porous Medium and an Impervious Solid[END_REF] to obtain the estimate
Ω P ε div z dx = Ωε P ε div (R ε z) dx ≤ C ε ( |D U ε | L 2 (Ωε) +ε ) |z| H 1 (Ω) , (61)
∀ z ∈ X 2 . Since div : X 2 → L 2 (Ω) is a surjective continuous operator, (61) yields | P ε | L 2 (Ω) ≤ C ( ε -1 |D U ε | L 2 (Ωε) + 1 ) . ( 62
)
Now we take z = Ũε as a test function in (59). To be precise, we observe that Ũε is not exactly in X ε 2 since it is not equal to zero for x 2 = -d. But, that value is exponentially small, of order e -γ/ε , so it can be easily corrected by lifting its boundary value by a negligibly small function. Thus, slightly abusing the notation, we consider it as an element of X ε 2 . Then, due to the (58)
Ωε P ε div Ũε dx = ε 2 Ωε P ε d 2 P dx 2 1 Φ ε 1 dx ≤ C ε |D U ε | L 2 (Ωε) + C ε 2 . (63)
Consequently, we get (55)-(57) .
At this point we use the classical two-scale convergence (see e.g. [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF], [START_REF] Allaire | Homogenization and two-scale convergence[END_REF]).
For readers' convenience we recall basic definitions and compactness results.
Let Y = [0, 1] 2 and let C ∞ # (Y ) be the set of all C ∞ functions defined on Y and periodic with period 1. We say that a sequence (v ε ) ε>0 , from L 2 (Ω), twoscale converges to a function
v 0 ∈ L 2 (Ω) if lim ε→0 Ω v ε (x) ψ x, x ε dx → Ω Y v 0 (x, y) ψ(x, y)dx dy , for any ψ ∈ C ∞ 0 (Ω; C ∞ # (Y )
). For such convergence we have the following compactness result from [START_REF] Allaire | Homogenization and two-scale convergence[END_REF] and [START_REF] Nguetseng | A general convergence result for a functional related to the theory of homogenization[END_REF] that we shall need in the sequel Theorem 5.
• Let (v ε ) ε>0 be a bounded sequence in L 2 (Ω). Then we can extract a subsequence that two-scale converges to some v 0 ∈ L 2 (Ω × Y ).
• Let (v ε ) ε>0 be a sequence in H 1 (Ω) such that v ε and ε ∇v ε are bounded in L 2 (Ω). Then, there exists a function v 0 ∈ L 2 (Ω; H 1 # (Y )) and a subsequence for which
v ε → v 0 in two-scales, (64)
ε ∇v ε → ∇ y v 0 in two-scales. ( 65
)
Lemma 5. Let (U ε , P ε ) be the solution of the residual problem ( 46)-( 48). Then
ε -2 U ε → U 0 in two-scales, (66)
ε -1 ∇U ε → ∇ y U 0 in two-scales, (67)
P ε → P 0 in two-scales, (68)
where
(U 0 , P 0 , Q 0 ) ∈ L 2 (Ω; H 1 # (Y * )) × H 1 (Ω) × L 2 (Ω; L 2 (Y * )/R) is the solu- tion of the two-scale problem -µ ∆ y U 0 + ∇ y Q 0 + ∇ x P 0 = F -( dQ dx 1 + C π dP dx 1 ) e 1 in Y * × Ω, ( 69
)
div y U 0 = 0 in Y * × Ω, (70)
U 0 = 0 on S × Ω, (U 0 , Q 0 ) is 1 -periodic in y, (71)
div x Y U 0 dy = 0 in Ω, Y U 0 dy • n = 0 on Γ, P 0 = 0 on Σ. ( 72
)
Proof. Using the estimates (55)-(57) we get that there exist
U 0 ∈ L 2 (Ω; H 1 # (Y )) and P 0 ∈ L 2 (Ω × Y ) such that ε -2 U ε → U 0 in two-scales, ε -1 ∇U ε → ∇ y U 0 in two-scales, P ε → P 0 two-scale.
It follows directly that U 0 (x, y) = 0 for y ∈ A.
First, for ψ(x, y) ∈ C ∞ (Y × Ω), periodic in y, such that ψ = 0 for y ∈ A 0 ← Ω dP dx 1 (x 1 ) w 1 x, x ε ψ x, x ε dx = ε -1 Ω div U ε ψ x, x ε dx = - Ω ε ∇ x ψ x, x ε + ∇ y ψ x, x ε • U ε (x) ε 2 dx → (73) → Ω Y U 0 • ∇ y ψ(x, y) dy dx ⇒ div y U 0 = 0 .
We then test equations ( 44)-( 48) with
m ε (x) = m x, x ε , where m ∈ H 1 (Ω; H 1 # (Y )), m = 0 for y ∈ M . 195 0 ← ε Ω f ε m ε dx = 2µ Ω D U ε (x) D y m x, x ε + εD x m x, x ε dx - Ω P ε (x) εdiv x m(x, x/ε) + div y m(x, x/ε) dx → - Ω Y P 0 (x, y) div y m(x, y) dy dx. (74)
Thus ∇ y P 0 = 0 implying P 0 = P 0 (x) .
Next we test system (44)-(48) with Z ε (x) = Z x, x ε , where Z ∈ H 1 (Ω; H 1 # (Y )), such that div y Z = 0 and Z = 0 for y ∈ A. It yields
Ω [F - dP (x 1 )C π + Q(x 1 ) dx 1 e 1 ] Y Z dy ← Ω f ε Z ε = - Ω P ε (x)div x Z(x, x/ε) dx + 2µ ε Ω D U ε (x) D y Z(x, x ε ) + εD x Z(x, x ε ) dx → (75) → 2µ Ω Y D y U 0 (x, y) D y Z(x, y) dy dx - Ω Y P 0 (x) div x Z(x, y) dy dx .
We conclude that ∇ x P 0 ∈ L 2 (Ω) and (U 0 , P 0 ) satisfies equations ( 69)-( 71).
200
The effective filtration velocity boundary conditions are determined by picking a smooth test-function ψ ∈ C ∞ (Ω), periodic in x 1 , ψ = 0 on Σ, and testing
div Ũε = ε 2 dP dx 1 Φ ε 1 18
with it. It gives
- Ωε dP dx 1 (x 1 ) Φ 1 x ε ψ(x) dx = ε -2 Ωε div Ũε (x) ψ(x) dx = = - Ωε ε -2 Ũε (x) • ∇ψ(x) dx - 0 Ũ ε 2 (x 1 , -d) ψ(x 1 , -d) dx 1 . ( 76
)
The last integral on the right hand side is negligible due to the exponential decay of w and Φ. The first integral on the right hand side, due to (66), converges and, due to the construction of Ũε ,
205 lim ε→0 Ω ε -2 Ũε (x) • ∇ψ(x) dx = lim ε→0 Ω ε -2 U ε (x) • ∇ψ(x) dx = = Ω Y U 0 (x, y) dy • ∇ψ(x) dx .
For the left-hand side in (76) we get
Ω d 2 P dx 2 1 (x 1 ) Φ 1 x ε ψ(x) dx ≤ C √ ε .
Thus
Ω Y U 0 dy • ∇ψ dx = 0 meaning that div x Y U 0 dy = 0 in Ω , Y U 0 dy • n = 0 on Γ.
We still need to determine the boundary condition for P 0 on Σ.
Let b be a smooth function defined on Ω×Y , such that div y b = 0 and b = 0 on Γ and b = 0 for y ∈ A . We now use b ε (x) = b(x, x/ε) as a test function in 210 (44)-( 48). We obtain
Ω f ε • b ε dx = 2µ Ω D U ε D x b • , • ε + ε -1 D y b • , • ε dx - ( 77
) Ω P ε div x b • , • ε dx → 2µ Ω Y D y U 0 D y b dydx - Ω P 0 div x Y b dy dx.
As for the left-hand side, we have
Ω f ε • b ε dx → Ω [F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] ( Y b dy) dx so that 2µ Ω Y D y U 0 D y b dydx - Ω P 0 div x Y b dy dx = Ω Y b [F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] dydx.
Using ( 69)-( 72) yields
Ω P 0 div Y b dy dx = - Ω ∇P 0 • Y b dy dx. It implies 2µ Σ Y
b • e 2 dy P 0 dx = 0 and, finally, P 0 = 0 on Σ.
Proving uniqueness of a weak solution for problem (69)-( 72) is straightforward.
Step four: Strong convergence 215
We start by proving the strong convergence for the pressure. We follow the approach from [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF]. Let {z ε } ε>0 be a sequence in X 2 such that z ε z 0 weakly in H 1 (Ω) .
Then we have
Ω P ε div z ε dx - Ω P 0 div z dx = Ω P ε div (z ε -z) dx + Ω ( P ε -P 0 ) div z dx.
For two integrals on the right-hand side we have lim ε→0 Ω ( P ε -P 0 ) div z dx = 0 and
Ω P ε div (z ε -z) dx = Ωε P ε div R ε (z ε -z) dx = 2µ Ωε D U ε ε εD(R ε (z ε -z) ) dx → 0 as ε → 0 .
Using surjectivity of the operator div : X 2 → L 2 (Ω) we conclude that P ε → P 0 strongly in L 2 (Ω).
Next we prove the strong convergence for the velocity. We define
U 0,ε (x) = 2 k=1 w k (x/ε) F k (x) - ∂ ∂x k P 0 (x) + C π P (x 1 ) + Q(x 1 )
.
Then for the L 2 -norms we have
Ωε U ε ε 2 -U 0,ε 2 dx ≤ C 2µ ε 2 Ωε D U ε ε 2 -U 0,ε 2 dx = = C 2µ ε -2 Ωε | D U ε | 2 dx + 2µ ε 2 Ωε | D U 0,ε | 2 dx - -4µ Ωε D U ε ε ε D U 0,ε dx .
Using the smoothness of U 0 we get, as ε → 0
220 (i) ε 2 Ωε | D U 0,ε | 2 dx = Ωε |D y U 0,ε | 2 dx + O(ε) → Ω×Y * |D y U 0 | 2 dx dy . (ii) 2µ Ω×Y * | D y U 0 | 2 dx = Ω (F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ) Y * U 0 dydx . (iii) 2µε -2 Ωε |D U ε | 2 dx = 2µε -2 Ωε D U ε D Ũε dx + O( √ ε) . (iv) 2µε -2 Ωε D U ε D Ũε dx -ε -2 Ωε P ε div Ũε dx = Ωε (F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ) U ε ε 2 dx + O( √ ε). (v) ε -2 Ωε P ε div Ũε dx = Ωε P ε d 2 P dx 2 1 Φ ε dx → 0 . (vi) (iii), (iv) and (v) ⇒ 2µ ε -2 Ωε |D U ε | 2 dx → Ω [ F - d(P (x 1 )C π + Q(x 1 )) dx 1 e 1 ] Y * U 0 dydx. (vii) Ωε D U ε ε ε D U 0,ε dx → Ω×Y * | D y U 0 | 2 dxdy. Thus lim ε→0 Ωε U ε ε 2 -U 0,ε 2 dx = 0 .
Step five: Weak* convergence of the boundary layer corrector
To prove convergence [START_REF] Coceal | Canopy model of mean winds through urban areas[END_REF] we need to show that
ε -1 P (x 1 ) w(x/ε) P (x 1 ) ( G * w(y) dy)δ Σ weak* in M(Ω) .
Thus we take the test function z ∈ C(Ω) 2 and, using the exponential decay of w, we get
230 1 ε Ω P (x 1 ) w x ε z(x) dx = 1 ε 0 P (x 1 ) 0 ε log ε w x ε z(x) dx 2 dx 1 + O(ε) = = 0 P (x 1 ) z(x 1 , 0) 0 -∞ w x 1 ε , y 2 dy 2 dx 1 + O(ε | log ε|) .
Using the well known property of the mean of a periodic function (see e.g. [START_REF] Sanchez-Palencia | Non-homogeneous media and vibration theory[END_REF])
yields lim ε→0 0 P (x 1 ) z(x 1 , 0) 0 -∞ w x 1 ε , y 2 dy 2 dx 1 = = 0 P (x 1 ) z(x 1 , 0) 0 -∞ 1 0 w(y) dy 1 dy 2 dx 1 = = 0 P (x 1 ) z(x 1 , 0) G * w(y) dy dx 1 = G * w(y) dy P (x 1 ) δ Σ | z . 4.6.
Step six: Separation of scales and the end of the proof of Theorem 1
We can separate the variables in ( 69)-( 72) by setting
U 0 (x, y) = 2 k=1 w k (y) F k (x) - ∂ ∂x k (Q(x 1 ) + C π P (x 1 ) + P 0 (x) ) , (78)
Q 0 (x, y) = 2 k=1 π k (y) F k (x) - ∂ ∂x k (Q(x 1 ) + C π P (x 1 ) + P 0 (x) ) , (79)
with
235 -µ∆w k + ∇π k = e k , div w k = 0 in Y * , (80)
w k = 0 on S, (w k , π k ) is 1 -periodic. (81)
Inserting the separation of scales formulas (78)-( 79) into (69)-(72) yields
div K [ F -∇ (P 0 + C π P + Q) ] = 0 in Ω, P 0 = 0 on Σ, P 0 is -periodic in x 1 , n • K [ F -∇ (P 0 + C π P + Q) ] = 0 on Γ. . (82)
Here
K = [K ij ] = [ Y w i j dy] (83)
stands for the positive definite and symmetric permeability tensor. System (82) is a well-posed mixed boundary value problem for a linear elliptic equation for P 0 .
Nevertheless, it is important to note that P 0 is not the limit or homogenized pressure since
p ε (x) = P ε (x) + π x ε P (x 1 ) + Q(x 1 ) . Obviously p ε p 0 ≡ P 0 + C π P + Q .
This ends the proof of theorem 1 since the limit pressure is p 0 and it satisfies the boundary value problem ( 8)- [START_REF] Carraro | Effective interface conditions for the forced infiltration of a viscous fluid into a porous medium using homogenization[END_REF].
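As a quick illustration of how the homogenized quantities are used, the following minimal Python sketch checks that a candidate permeability tensor K is symmetric positive definite and evaluates the Darcy velocity V = K(F − ∇p 0 ) at a point; the numerical entries are made-up placeholders, not the values computed in the paper.

```python
# Minimal sketch with hypothetical numbers: sanity-check a permeability tensor
# and evaluate the Darcy filtration velocity V = K (F - grad p0) at one point.
import numpy as np

K = np.array([[0.020, 0.001],
              [0.001, 0.015]])            # placeholder permeability entries

assert np.allclose(K, K.T), "K must be symmetric"
assert np.all(np.linalg.eigvalsh(K) > 0), "K must be positive definite"

F = np.array([0.0, -1.0])                 # body force at the evaluation point
grad_p0 = np.array([0.0, -0.8])           # gradient of the homogenized pressure

V = K @ (F - grad_p0)                     # Darcy velocity
print("Darcy velocity V =", V)
```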
Proof of Theorem 2
We start by proving that problem (4)-(6) admits at least one solution satisfying estimates (18)-(20).
It is well known that in the case of the stress boundary conditions, the inertia term poses difficulties and existence results for the stationary Navier-Stokes system can be obtained only under conditions on data and/or the Reynolds number (see e.g. [START_REF] Conca | The Stokes and Navier-Stokes equations with boundary conditions involving the pressure[END_REF]). Presence of many small solid obstacles in the porous media flows corresponds to a small Reynolds number, expressed through the 245 presence of ε in Poincaré's and trace estimates ( 14) and [START_REF] Helmig | Model coupling for multiphase flow in porous media[END_REF].
In order to estimate the inertia term we need fractional order Sobolev spaces.
we recall that
H 1/2 (Ω) 2 = {z ∈ L 2 (Ω) 2 | Ez ∈ H 1/2 (R 2 ) 2 }, where E : H 1 (Ω) 2 → H 1 (R 2 ) 2
is the classical Sobolev extension map. It is defined on the spaces H α (Ω), α ∈ (0, 1) through interpolation (see [START_REF] Constantin | Navier-Stokes equations[END_REF], Chapter 6).
Next, after [START_REF] Constantin | Navier-Stokes equations[END_REF], Chapter 6, one has
Ωε (u 1,ε ∇)u 1,ε • v dx ≤ C|u 1,ε | H 1/2 (Ω) 2 |∇u 1,ε | L 2 (Ω) 2 |v| H 1/2 (Ω) 2 , ∀v ∈ V (Ω ε ). (84)
Using ( 14) in (84) yields
Ωε (u 1,ε ∇)u 1,ε • u 1,ε dx ≤ Cε|∇u 1,ε | 3 L 2 (Ω) 2 . ( 85
)
Now it is enough to have an a priori estimate for the H 1 -norm. With such 250 estimate the standard procedure would give existence of a solution. It consists of defining a finite dimensional Galerkin approximation and using the a priori estimate and Brouwer's theorem to show that it admits a solution satisfying a uniform H 1 -a priori estimate. Finally, we let the number of degrees of freedom in the Galerkin approximation tend to infinity and obtain a solution through 255 the elementary compactness. For more details we refer to the textbook of Evans [START_REF] Evans | Partial Differential Equations: Second Edition[END_REF], subsection 9.1.
We recall that the variational form of ( 4)-( 6) is
L ε u 1,ε , v = 2µ Ωε Du 1,ε : Dv dx + Ωε (u 1,ε ∇)u 1,ε • v dx- - Ωε F • v dx - Σ H • v dS = 0, ∀v ∈ V (Ω ε ). (86)
Then, for ε ≤ ε 0 ,
L ε u 1,ε , u 1,ε ≥ 2µ|Du 1,ε | 2 L 2 (Ωε) 4 -Cε|Du 1,ε | 3 L 2 (Ωε) 4 -C √ ε|Du 1,ε | L 2 (Ωε) 4 ≥ ≥ C 1 ε 2 > 0, if |Du 1,ε | L 2 (Ωε) 4 = 1 √ ε . ( 87
)
As a direct consequence of (87), Brouwer's theorem implies existence of at least one solution for the N dimensional Galerkin approximation corresponding to (86) (see [START_REF] Evans | Partial Differential Equations: Second Edition[END_REF], subsection 9.1). After passing to the limit N → +∞, we obtain existence of at least one solution u ε for problem (86), such that |Du
1,ε | 2 L 2 (Ωε) 4 ≤ C √ ε|Du 1,ε | 2 L 2 (Ωε) 4 + C √ ε|Du 1,ε | L 2 (Ωε) 4 ,
implying estimates ( 18)-( 20).
Now we have
Ωε (u 1,ε ∇)u 1,ε • v ≤ Cε|∇u 1,ε | 2 L 2 (Ω) 2 |∇v| L 2 (Ω) 2 ≤ Cε 2 |∇v| L 2 (Ω) 2 , ∀v ∈ V (Ω ε ) (88)
and we conclude that in the calculations from subsections 4.2-4.4 the inertia term does not play any role. Hence it does not contribute to the homogenized 260 problem either. This observation concludes the proof of Theorem 2.
Numerical confirmation of the effective model
In this section we use a direct computation of the boundary layer corrector (36-38) and the microscopic problem (1-3) to numerically confirm the estimate (39)
|π − C π | L 2 (G*) = O(√ε)
and the strong convergence of the effective pressure (7). For the pressure we find out
|p ε − p 0 | L 2 (Ω) = O(√ε),
which is consistent with the corrector type results from [START_REF] Jäger | On the boundary condition on the contact interface between a porous medium and a free fluid[END_REF].
Confirmation of boundary layer estimate
We start with estimate (39). For this we need to compute the value C π , which is the limit value of the boundary layer pressure π for y 2 → −∞, see (41). Since the boundary layer problem is defined on an unbounded domain, we need to cut the domain and compute C π,cut , which is the approximation of C π on a cut-off domain with |y 2 | large enough so that the difference |C π − C π,cut | is smaller than the machine precision. Since the value π(y) stabilizes to C π exponentially fast, we expect that a boundary layer with a few unit cells gives an accurate approximation. Furthermore, the cut-off boundary layer is computed by the finite element method. Thus, we compute C h π,cut , where the superscript h indicates the Galerkin approximation, and we have to assure that the discretization error |C π,cut − C h π,cut | is small enough.
For the numerical approximation we first introduce the cut-off domain
G * l := G\ -l j=0
(j e 2 + A)
and then consider the following cut-off boundary layer problem Problem 1 (Cut-off boundary layer problem). Find w and π, both 1-periodic in y 1 , such that it holds in the interior
-µ∆ y w l + ∇ y π l = 0 in G * l , (89)
∇ • w l = 0 in G * l , (90)
and on the boundaries (-2µD y w l + π l I) = e 1 for y 2 = 0, (91)
w l = 0 on C (92) w l,2 = ∂w l,1 ∂y 2 = 0 on Γ l , (93)
where Γ l = (0, 1) × l is the lower boundary of the cut-off domain.
The inclusions are defined as in Figure 1. The solid domain A is (a) circular in the isotropic case with radius r = 0.25 and center (0.5, 0.5), see Figure 1a. Problem (89)-(93) is approximated by the finite element method (FEM) using a Taylor-Hood element [START_REF] Taylor | A numerical solution of the Navier-Stokes equations using the finite element technique[END_REF] with bi-quadratic elements for the velocity and bilinear for the pressure. Since the inclusions are curvilinear we use a quadratic description of the finite element boundaries (iso-parametric finite elements). The stabilized pressure value of the boundary layer is defined in our computations as C h π,cut := π l,h (y 1 , l), i.e. it is the pressure value at the lower boundary of G * l . To define the value C h π,cut we have performed a test with increasing l to obtain the minimal length l of the cut-off domain for which the pressure value reaches convergence (up to machine precision). A shorter domain would introduce a numerical error and a longer domain would increase the computational costs without adding more accuracy.
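A heavily simplified sketch of the kind of Stokes solve behind problem (89)-(93) is given below, written for the legacy FEniCS/DOLFIN API with P2/P1 Taylor-Hood triangles. It deliberately omits the solid inclusions, the periodicity in y 1 and the iso-parametric Q2/Q1 quadrilaterals actually used in the paper; the mesh sizes, the cut-off depth and the boundary handling are assumptions, not the authors' implementation.

```python
# Toy Taylor-Hood Stokes solve with a prescribed traction on the top boundary,
# on a plain rectangular band (no inclusions, no periodicity) -- only meant to
# illustrate the weak form behind the cut-off boundary layer problem.
from dolfin import *

mu = Constant(1.0)                      # viscosity (assumed)
depth = 7.0                             # cut-off depth l (assumed)
mesh = RectangleMesh(Point(0.0, -depth), Point(1.0, 0.0), 16, 112)

V_el = VectorElement("Lagrange", mesh.ufl_cell(), 2)   # quadratic velocity
Q_el = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # linear pressure
W = FunctionSpace(mesh, MixedElement([V_el, Q_el]))

(w, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)

# Mark the top boundary y2 = 0; the natural boundary term of this weak form is
# (2*mu*D(w) - p*I) n = g, so g = (-1, 0) mimics (-2*mu*D(w) + p*I) e2 = e1.
facets = MeshFunction("size_t", mesh, mesh.topology().dim() - 1, 0)
CompiledSubDomain("on_boundary && near(x[1], 0.0)").mark(facets, 1)
ds_top = Measure("ds", domain=mesh, subdomain_data=facets)(1)

a = (2.0 * mu * inner(sym(grad(w)), sym(grad(v)))
     - p * div(v) - q * div(w)) * dx
L = dot(Constant((-1.0, 0.0)), v) * ds_top

# In the real geometry w = 0 holds on the inclusion boundaries C; this toy
# setup only clamps the artificial bottom boundary Gamma_l.
bottom = CompiledSubDomain("on_boundary && near(x[1], -7.0)")
bc = DirichletBC(W.sub(0), Constant((0.0, 0.0)), bottom)

wp = Function(W)
solve(a == L, wp, bc)
w_h, p_h = wp.split(deepcopy=True)
print("toy bottom pressure value =", p_h(0.5, -depth + 1e-8))
```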
In Table 1 the values of π l,h (y 1 , l) for increasing numbers of inclusions l are reported. It can be observed that one inclusion is enough to get the exact value C π = 0 for the circular inclusions. In the case of elliptical inclusions the pressure is stabilized for l ≥ 7, and the effect of the cut-off domain can be seen only for smaller domains. Figure 2 shows a visualization of the boundary layer pressure π in the cut-off domain with seven inclusions. A convergence check with globally refined meshes has shown that the discretization error is of the order O(10 −8 ).
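The stabilization test behind Table 1 can be sketched as follows; the bottom-pressure values and the tolerance are entirely hypothetical and only illustrate how the minimal cut-off length l can be detected.

```python
# Sketch with made-up numbers: detect the smallest cut-off length l for which
# the bottom pressure value has stabilized (the logic behind Table 1).
import numpy as np

# hypothetical bottom pressures for l = 1, 2, ..., 8 inclusions
pi_bottom = np.array([0.2043, 0.2139, 0.2157, 0.21612,
                      0.216163, 0.2161642, 0.2161642, 0.2161642])
tol = 1e-7
diffs = np.abs(np.diff(pi_bottom))
l_min = int(np.argmax(diffs < tol)) + 2   # diff i compares l = i+1 and l = i+2
print("pressure stabilized for l >=", l_min)
```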
After computing the constant C h π,cut we proceed with the confirmation of the estimate (39) and plot in Figure 3 the convergence curves. We confirm the expected convergence rate
|π -C π | L 1 (G * ) = O( ) and |π -C π | L 2 (G * ) = O( √ ).
Confirmation of effective pressure values
The next step is the confirmation of the estimate [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF]. For a stress tensor defined by the constant contact stress (P, Q) and a right hand side which depends only on x 2 we have the analytical exact solution for the effective pressure
p 0 (x 2 ) = C π P + Q - 0 x2 f 2 (z) dz - K 12 K 22 0 x2 f 1 (z) dz. (94)
To compute it we need the vales K 12 and K 22 of the permeability tensor. These are defined as follows with the 1-periodic solution w i c (i = 1, 2) of the i th cell problem
K ij := Y * w i c,j dx, 29 10 -3 10
-∆w i c + ∇π i c = e i in Y * , ∇ • w i c = 0 in Y * , w i c = 0 on ∂A
where Y * is the unit pore domain of the cell problem with the corresponding 305 inclusion A. The inclusions are defined as in our previous work [START_REF] Carraro | Pressure jump interface law for the Stokes-Darcy coupling: Confirmation by direct numerical simulations[END_REF]. They correspond to one cell of problem (89)-( 93) and they are shown on Figure 1.
Therefore, we use the values of the permeability tensor computed therein and reported in Table 2. We use the extension pε h (16) for the microscopic pressure, where the subscript denotes the finite element approximation of the microscopic 310 problem obtained with Taylor-Hood elements, as for the cut-off boundary layer. With the expression of the effective pressure and the extension pressure we compute the convergence estimates. For the test case we use the values (P, Q)
for the normal component of the stress tensor and f (x) for the right hand side, needed in formula (94), as reported in Table 3. The results with the expected convergence rates are depicted in Figure 4. Finally, figures 5 and 6 show the velocity components, the velocity magnitude and the pressure in the microscopic problem for circles and ellipses. To simplify the visualization these figures show a microscopic problem with nine inclusions, so that the boundary layer is clearly visible. 320
Conclusion
The novelty of the result is in the boundary condition on Σ. The value of the Darcy pressure on the upper boundary Σ is now prescribed and its value depends not only on the given applied pressure force Q but also on the shear Thus, in interior of the domain, the velocity is plain Darcean, while in vicinity of the upper boundary, a boundary layer term ε P (x 1 ) w(x/ε) dominates.
The result can be used for the development of the model-coupling strategies, see [START_REF] Helmig | Model coupling for multiphase flow in porous media[END_REF] and [START_REF] Mosthaf | A coupling concept for two-phase compositional porous medium and single-phase compositional free flow[END_REF].
Remark 1. If ∂A ∈ C 3 then the regularity theory for the Stokes operator applies and (39), (40) hold pointwise. For more details on the regularity see e.g. [START_REF] Boyer | Mathematical Tools for the Study of the Incompressible Navier-Stokes Equations and Related Models[END_REF].
Remark 2. Let the solution w to system (36)-(38) be extended by zero to M. Let b > a > 0 be arbitrary constants. Then, since div w = 0, integrating div w over the part of G* between the two depths yields ∫_0^1 w 2 (y 1 , b) dy 1 − ∫_0^1 w 2 (y 1 , a) dy 1 = 0.
The mapping t → ∫_0^1 π(y 1 , t) dy 1 is constant on each of the intervals J i ; if those constants are denoted by c i , then c 0 = 0 and lim_{i→∞} c i = C π .
Remark 4. Let us suppose that the boundary layer geometry has the mirror symmetry with respect to the axis {y 1 = 1/2}. Then w 2 and π are uneven functions with respect to this axis and C π = 0. In particular, this result applies to the case of circular inclusions.
4.3. Step three: Derivation of the Darcy law via classical two-scale convergence
In the proof of Lemma 3, n = (n 1 , n 2 ) denotes the exterior unit normal on C and t = (−n 2 , n 1 ) the tangent. The constant d 0 is chosen in a way that problem (52)-(53) admits a solution. By simple integration it turns out that d 0 = −∫_{G*} w 1 (y) dy. Since the right-hand side is in H 1 (G*), the problem has a solution ψ ∈ H 3 (G*) that can be chosen to have an exponential decay |ψ| H 1 (G* ∩ {|y 2 |>s}) ≤ C e −γs . (54) Next we use the trace theorem and construct a y 1 -periodic function h ∈ H 3 (G*) such that ∂h/∂t = curl h · n = 0 on C, while ∂h/∂n = curl h · t is prescribed so that Φ = ∇ y ψ + curl y h vanishes on C (achieved if we take h(y 1 , 0) = const.). The function Φ, constructed above, satisfies (49) and (50). Exponential decay (54) of ψ implies exponential decay of h in the same sense and, finally, gives (51).
Figure 1: Unit-cell inclusion geometries: (a) circular inclusion; (b) elliptical inclusion.
Figure 2: Visualization of boundary layer pressure and cut-off domain.
Figure 3: Confirmation of convergence for the boundary layer problem.
Table 3: Values used for the computations: µ, P(x 1 ), Q(x 1 ), f 1 (x), f 2 (x), C π (circles), C π (ellipses).
Figure 4: Confirmation of convergence for the microscopic problem.
Figure 5: Visualization of the microscopic velocity components u 1 , u 2 and pressure with elliptical inclusions.
Figure 6: Visualization of the microscopic velocity components u 1 , u 2 and pressure with circular inclusions.
|Du 1,ε | L 2 (Ω ε ) 4 ≤ 1/√ε. After plugging this information into estimate (85), equation (86) yields the energy estimate 2µ |Du 1,ε | 2 L 2 (Ω ε ) 4 ≤ C √ε |Du 1,ε | 2 L 2 (Ω ε ) 4 + C √ε |Du 1,ε | L 2 (Ω ε ) 4 .
Table 1: Stabilization of C π in the cut-off domain with increasing number of inclusions.
Table 2: Values of the permeability tensor components.
The work of T.C. was supported by the German Research Council (DFG) through project "Multiscale modeling and numerical simulations of Lithium ion battery electrodes using real microstructures" (CA 633/2-1). 2 The work of EMP was supported in part by the grant of the Croatian science foundation No 3955, Mathematical modelling and numerical simulations of processes in thin and porous domains 3 The research of A.M. was supported in part by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program "Investissements d'Avenir" (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). | 45,393 | [
"1003612",
"2599"
] | [
"231495",
"444777",
"521754"
] |
01007328 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2010 | https://hal.science/hal-01007328/file/GSFA.pdf | Dr B Girault
Dr A S Schneider
Prof E Arzt
Dr C P Frick
J Schmauch
INM K.-P Schmitt
Christof Schwenk
Strength Effects in Micropillars of a Dispersion Strengthened Superalloy**
By Baptiste Girault * , Andreas S. Schneider, Carl P. Frick and Eduard Arzt In order to realize the full potential of emerging microand nanotechnologies, investigations have been carried out to understand the mechanical behavior of materials as their internal microstructural constraints or their external size is reduced to sub-micron dimensions. [1,2] Focused ion beam (FIB) manufactured pillar compression techniques have been used to investigate size-dependent mechanical properties at this scale on a variety of samples, including single-crystalline, [3][4][5][6][7][8][9][10] nanocrystalline, [11] precipitatestrengthened, [12,13] and nanoporous [14,15] metals. Tests revealed that single-crystal metals exhibit strong size effects in plastic deformation, suggesting that the mechanical strength of the metal is related to the smallest dimension of the tested sample. Among the various explanations that have been pointed out to account for such a mechanical behavior, one prevailing theory developed by Greer and Nix [6] invokes ''dislocation starvation.'' It assumes that dislocations leave the pillar via the surface before dislocation multiplication occurs. To accommodate the induced deformation new dislocations have to be nucleated, which requires high stresses. [6,16] This theory has been partially substantiated by direct in situ transmission electron microscope (TEM) observations of FIB manufactured pillars which demonstrate a clear decrease in mobile dislocations with increasing deformation, a result ascribed to a progressive exhaustion of dislocation sources. [17] Another origin of size-dependent strengthening may lie in the constraints on active dislocation sources exerted by the external surface, i.e., source-controlled mechanisms. [18][19][20] A clear understanding of the mechanisms responsible for the size effects in plastic deformation is still missing and other origins of strength modification with size remains somewhat controversial. [17] Unlike pure metals, pillars with an internal size parameter smaller than the pillar diameter would be expected to exhibit no size effect, reflecting the behavior of bulk material. This was demonstrated for nanocrystalline [11] and nanoporous Au. [14,15] Nickel-titanium pillars with semi-coherent precipitates approximately 10 nm in size and spacing also exhibited no size dependence, although results are difficult to interpret due to the concurrent martensitic phase transformation. [13] Conversely, precipitate strengthened superalloy pillars were reported to show size-dependent behavior, a result left largely unexplained. [12,21] Therefore, a strong need exists to further explore the influence of internal size parameters on the mechanical properties of small-scale single crystals, to better understand the associated mechanisms responsible for the size effect.
The present paper investigates the uniaxial compression behavior of highly alloyed, focused ion beam (FIB) manufactured micropillars, ranging from 200 up to 4000 nm in diameter. The material used was the Ni-based oxide-dispersion strengthened (ODS) alloy Inconel MA6000. Stress-strain curves show a change in slip behavior comparable to those observed in pure fcc metals. Contrary to pure Ni pillar experiments, high critical resolved shear stress (CRSS) values were found independent of pillar diameter. This suggests that the deformation behavior is primarily controlled by the internal obstacle spacing, overwhelming any pillar-size-dependent mechanisms such as dislocation source action or starvation.
The research presented here investigates the mechanical behavior of single-crystalline micropillars made of a dispersion strengthened metal with a small internal size scale: the oxide-dispersion strengthened (ODS) Inconel MA6000, 1 which is a highly strengthened Ni-based superalloy produced by means of mechanical alloying. This high-energy ball milling process produces a uniform dispersion of refractory particles (Y 2 O 3 ) in a complex alloy matrix, and is followed by thermo-mechanical and heat treatments (hot-extrusion and hot-rolling) to obtain a large grained microstructure (in the millimeter range). MA6000 has a nominal composition of Ni-15Cr-4.5A1-2.5Ti-2Mo-4W-2Ta-0.15Zr-0.01B-0.05C-1.1Y 2 O 3 , in wt%. Previous studies carried out on bulk MA6000 showed that its strength is due to the oxide dispersoids and to coherent precipitates of globular-shaped g 0 -(Ni 3 Al/Ti) particles, which are formed during the heat treatment. Depending on the studies, the average sizes in these two-particle populations are about 20-30 and 275-300 nm, respectively. [22][START_REF] Singer | High Temperature Alloys for Gas Turbines and Other Applications[END_REF][START_REF] Reppich | High Temperature Alloys for Gas Turbines and Other Applications[END_REF][START_REF] Heilmaier | [END_REF] TEM investigations of our sample revealed a dense distribution of oxide particles with diameter and spacing well below 100 nm; however, no indications of g 0 -precipitates were found (Fig. 1(a)). Thus, in contrast to a recent study on nanocrystalline pillars, [11] the tested specimens have no internal grain boundaries, which would impede the dislocations from leaving the sample, but have a characteristic length scale smaller than the pillar diameter.
Experimental
Bulk MA6000 was mechanically and chemically polished. The polishing process and testing were carried out in a plane allowing access to elongated grains of several millimeters in size. Pillar manufacturing, testing, and analysis were similar to the study by Frick et al. [26]. Micro- and nanopillars with diameters ranging from 200 to 4000 nm and a diameter-to-length aspect ratio of approximately 3:1 were machined with a FIB FEI Nova 600 NanoLab DualBeam TM . All pillars were FIB manufactured within the same grain (Fig. 1(b)) in order to avoid any crystallographic orientation changes that could activate different slip systems. To minimize any FIB-related damage, a decreasing ionic current intensity from 0.3 nA down to 10 pA was used as appropriate with decreasing pillar diameters [27]. The pillars were subsequently compressed in load-control mode by an MTS XP nanoindenter system equipped with a conical diamond indenter with a flat 10 µm diameter tip under ambient conditions. Loading rates varied between 1 and 250 µN s −1 depending on pillar diameter in order to obtain equal stress rates of 20 MPa s −1 .
The pillar diameter, measured at the top of the column, was used to calculate the engineering stress. It is important to mention that the pillars had a slight taper of approximately 2.7° on average, with a standard deviation of 0.5°. Hence, stress as defined in this study represents an upper bound to the stress experienced by the sample during testing.
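For concreteness, the conversion from applied load and top diameter to engineering stress reads as follows; the load value in the usage line is a made-up example.

```python
# Sketch: engineering stress from the applied load and the top diameter;
# because of the taper this is an upper bound on the true stress.
import math

def engineering_stress_MPa(load_uN, top_diameter_nm):
    area_um2 = math.pi / 4.0 * (top_diameter_nm / 1000.0) ** 2   # µm^2
    return load_uN / area_um2                                    # µN/µm^2 = MPa

print(engineering_stress_MPa(load_uN=60.0, top_diameter_nm=304))  # ~ 826 MPa
```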
Figure 1(c) and (d) shows representative post-compression scanning electron microscope (SEM) micrographs of 304 and 1970 nm diameter pillars. Pillars with diameters above 1000 nm retained their cylindrical shape and showed multiple slip steps along their length; in some cases, barreling was observed. Samples below this approximate size tended to show localized deformation at the top with fewer, concentrated slip steps, which have been observed in previous studies, e.g., see Ref. [28]. Independent of pillar size, multiple slip was observed. High-magnification pictures of the sidewalls showed fewer slip steps in the vicinity of particles, emphasizing that particles act as efficient dislocation obstacles.
Electron backscattered diffraction (EBSD) measurements showed that the pillars were cut in a grain with the ⟨110⟩ crystallographic orientation aligned normal to the sample surface. Among the 12 possible slip systems in fcc crystals, only four present a non-zero Schmid factor, equal to 0.41. The slip bands were oriented at approximately 34° with respect to the pillar axis, nearly matching the expected 35.3° angle of the {111}⟨110⟩ slip system for a ⟨110⟩ oriented fcc crystal.
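The statement that only four of the twelve fcc slip systems see a non-zero Schmid factor under ⟨110⟩ compression can be checked directly; the sketch below (an independent check, not part of the original analysis) enumerates the {111}⟨110⟩ systems and evaluates m = |cos φ| · |cos λ| for a [110] loading axis:

```python
import numpy as np

load = np.array([1.0, 1.0, 0.0])
normals = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]                      # the four {111} planes
directions = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]  # <110> directions

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

factors = []
for n in normals:
    for b in directions:
        if np.dot(n, b) == 0:  # the slip direction must lie in the slip plane
            m = abs(np.dot(unit(load), unit(n))) * abs(np.dot(unit(load), unit(b)))
            factors.append(round(m, 2))

print(len(factors), sorted(factors))  # 12 systems: eight at 0.0 and four at 0.41
```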
1 Inconel MA6000 is a trademark of Inco Alloys International, Inc., Huntington, WV.
Results and Discussion
Typical engineering stress-strain curves are shown in Figure 2. The features of the stress-strain curves changed with decreasing pillar diameter. Larger pillars displayed a stress-strain curve with strain hardening similar to the bulk material. Below approximately 2000 nm, staircase-like stress-strain curves with plastic strain bursts separated by elastic loading segments were observed. This has been demonstrated in previous single-crystalline micropillar studies, where strain bursts were related to dislocation avalanches. [10,26] For pillars smaller than 1000 nm in diameter, the staircase-like shape below about 4% strain is followed by large bursts extending over several percent strain, which give the appearance of strain softening. The large bursts are consistent with SEM observations showing highly localized deformation on a few glide planes for pillars with diameters below 1000 nm. This behavior suggests that, for small pillar diameters, the dispersoid particles no longer promote homogeneous deformation, as they do in bulk alloys. The pillars hence exhibit a size effect in their slip behavior.
By contrast, the flow stresses are comparable for all pillar diameters and do not exhibit a size effect (Fig. 2). This is highlighted in Figure 3, where the flow stress measured at 3% strain is plotted as a function of pillar diameter and compared with previous results on pure Ni micropillars. [4,26] Whereas the pure Ni exhibits the frequently reported size effect, our data are independent of pillar diameter and lie close to the bulk value (critical resolved shear stress (CRSS) of about 500 MPa [START_REF] Singer | High Temperature Alloys for Gas Turbines and Other Applications[END_REF]). Best power-law fits gave a relationship between flow stress σ and diameter d of σ ∝ d^-0.65 and d^-0.62 for [111] and [2 6 9] Ni, respectively; for MA6000, the exponent is -0.04 ± 0.02, a value close to zero.
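The exponents quoted above come from fits of the form σ = A·d^n, which in practice is a straight-line fit in log-log coordinates; the sketch below illustrates the procedure on made-up data (the diameters and stresses are invented for the example, not the measured values of this study):

```python
import numpy as np

# Invented (diameter in nm, flow stress in MPa) pairs, for illustration only.
d = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
sigma = np.array([1230.0, 1210.0, 1250.0, 1190.0, 1220.0])

# sigma = A * d**n  =>  ln(sigma) = ln(A) + n * ln(d): fit a line in log-log space.
n, lnA = np.polyfit(np.log(d), np.log(sigma), 1)
print(f"fitted exponent n = {n:+.3f}")
# A size-independent data set gives n close to 0; a pure-metal "smaller is
# stronger" data set gives a strongly negative exponent, e.g. around -0.6.
```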
In contrast to the study on a superalloy containing only coherent precipitates, [12] this study clearly shows that incoherent particles can give rise to an internal size parameter, which dominates over any pillar-size effect in the entire size range investigated. The oxide particle spacing in our study is below 100 nm, which is much smaller than the pillar diameters. [22][START_REF] Singer | High Temperature Alloys for Gas Turbines and Other Applications[END_REF][START_REF] Reppich | High Temperature Alloys for Gas Turbines and Other Applications[END_REF][START_REF] Heilmaier | [END_REF] It is notable that the extrapolated MA6000 strength values and the pure Ni data in Figure 3 intersect at a pillar diameter of about 150 nm, close to the oxide particle spacing. The smallest pillars still contain about 10 oxide particles, the largest about 40 000. In the latter case, continuous stress-strain curves as in the bulk are expected due to averaging effects; in the smaller pillars, stochastic effects would explain the staircase-like behavior.
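The particle counts quoted for the smallest and largest pillars follow from a simple volume argument; the sketch below reproduces their order of magnitude, assuming a 3:1 aspect ratio and a mean center-to-center oxide spacing of about 150 nm (both values are assumptions made for this estimate, not data reported in the study):

```python
import math

def oxide_particle_count(d_nm, aspect_ratio=3.0, spacing_nm=150.0):
    """Rough count: pillar volume divided by spacing**3, i.e. one particle per
    cube of edge `spacing`. Intended only as an order-of-magnitude estimate."""
    volume_nm3 = math.pi * (d_nm / 2.0) ** 2 * (aspect_ratio * d_nm)
    return volume_nm3 / spacing_nm ** 3

print(round(oxide_particle_count(200)))    # ~6: of order ten particles in the smallest pillars
print(round(oxide_particle_count(4000)))   # ~45000: of order 40 000 particles in the largest pillars
```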
The absence of a size effect in single-crystalline MA6000 implies that neither the starvation theory nor source-controlled mechanisms may be applicable. The high density of internal obstacles is likely to prevent dislocations from exiting excessively through the surface, and the small obstacle spacing, compared to the pillar diameter, makes source operation insensitive to surface effects. As a result, the flow stress is determined by the interactions of dislocations and obstacles, as in bulk alloys. Size effects might, however, be expected for pillar diameters below the oxide particle spacing, i.e., below about 100 nm, but these are beyond the scope of the present study.
Conclusions
In summary, compression tests were carried out on single-crystal pillars of an ODS-Ni superalloy (MA6000). The following conclusions were drawn:
i) As in pure fcc metals, the superalloy pillars undergo a change in slip behavior. Pillars thinner than 2000 nm showed staircase-like stress-strain curves. The localized strain bursts suggest that the non-shearable particles no longer manage to homogenize slip as they do in bulk alloys.
ii) Contrary to single-crystal studies on pure metals, no dependence of the yield stress on sample size was measured. A high, constant strength was found, which is comparable to the highest flow stress value published for pure Ni pillars (with a diameter of 150 nm).
iii) These results suggest that size-dependent mechanisms such as dislocation starvation or source exhaustion are not operative in a dispersion strengthened alloy. Instead, the strong internal hardening dominates over any specimen size effect.
Fig. 1. TEM plane view of the MA6000 microstructure (a) and SEM images of (b) the location of the pillar series (white circles) with respect to grain boundaries (white dotted lines); (c) and (d) show deformed pillars with diameters of 304 and 1970 nm, respectively. Pictures were taken at a 52° tilt angle relative to the surface normal.
Fig. 2. Representative compressive stress-strain behavior for MA6000 pillars of various diameters ranging from approximately 200 to 4000 nm.
Fig. 3. Logarithmic plot of the critical resolved shear stress (CRSS) at 3% strain for all [111] MA6000 pillars tested. The error bars correspond to the standard deviation of six tests on different pillars of similar diameter. For comparison, 0.2% offset compressive stresses are shown for pure [2 6 9] Ni [5] and 3% offset values for [111] Ni. [START_REF] Singer | High Temperature Alloys for Gas Turbines and Other Applications[END_REF] The solid lines represent best power-law fits.
01485082 | en | shs | 2014 | https://minesparis-psl.hal.science/hal-01485082/file/Dubois%20et%20al%202014%20IPDM%20co-design.pdf
Louis-Etienne Dubois, Pascal Le Masson, Benoît Weil, Patrick Cohendet
From organizing for innovation to innovating for organization: how co-design brings about change in organizations
Amongst the plethora of methods that have been developed over the years to involve users, suppliers, buyers or other stakeholders in the design of new objects, co-design has been advertised as a way to generate innovation in a more efficient and more inclusive manner. Yet, empirical evidence that demonstrates its innovativeness is still hard to come by. Moreover, the fact that co-design workshops are gatherings of participants with few design credentials and often no prior relationships raises serious doubts about its potential to generate novelty. In this paper, we study the contextual elements of 21 workshops in order to better understand what co-design really yields in terms of design outputs and relational outcomes. Our data suggest that co-design emerges in crisis situations and that it is best used as a two-time intervention. We suggest using collaborative design activities as a way to bring about change through innovation.
INTRODUCTION
Open, cross-boundary, participative, collaborative, distributed: whatever the word used, innovation has become a practice known to involve a wide array of actors [START_REF] Chesbrough | Open innovation: The new imperative for creating and profiting from technology[END_REF][START_REF] Remneland-Wikhamn | Open innovation climate measure: The introduction of a validated scale[END_REF]. Collaborative design activities, also known as codesign, are increasingly used to design new products, services and even public policies with users, citizens and other stakeholders [START_REF] Sanders | Co-creation and the new landscapes of design[END_REF][START_REF] Berger | Co-designing modes of cooperation at the customer interface: learning from exploratory research[END_REF].
While its tools and methods, as well as its benefits for design purposes, have been discussed at length, the settings in which such activities arise and more importantly its effects on the groups, organizations and design collectives remain to this date misunderstood [START_REF] Kleinsmann | Why do (n't) actors in collaborative design understand each other? An empirical study towards a better understanding of collaborative design[END_REF][START_REF] Schwarz | Sustainist Design Guide[END_REF]). Yet, initial contexts, which can be defined by and explored through the relationship between stakeholders, should be of major interest for they play a significant role in the unfolding of collaborative design or joint innovation processes [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Furthermore, the fact that co-design workshops often involves participants who lack design credentials and do not share some sort of common purpose raises serious questions on the potential for innovation and motivations to take part in such time-consuming activities.
The purpose of this paper is to shed light on the context of co-design activities and its outputs, arguing that it may be used as a change management intervention while being advertised as a design and innovation best practice. Through a multiple-case study, we investigate the contextual elements of 21 workshops in which stakeholders gather, often for the very first time, to design new products, services or processes together. Following an overview of the literature on innovation, design and collaboration, we suggest based on our results that co-design is in fact a two-phase intervention in which relationships must first be reinforced through design activities before innovation issues can be tackled.
LITERATURE REVIEW
Over the past decades, innovation has received increased attention from practitioners and academics altogether, resulting in new forms for organizing such activities and a large body of literature on its every dimension [START_REF] Remneland-Wikhamn | Open innovation climate measure: The introduction of a validated scale[END_REF]. Garel & Mock (2011:133) argue that "innovation requires a collective action and an organized environment". In other words, we need on one side people, preferably with relevant knowledge and skills (expertise), and on the other side, a collaborative setting in which diverse yet compatible collectives can come together to design new products, services or processes. This classic innovation scheme holds true for not only standard R&D teams, but also for new and more open forms of innovation in which users interact with industry experts in well-defined platforms [START_REF] Piller | Mass customization: reflections on the state of the concept[END_REF][START_REF] Von Hippel | Democratizing innovation[END_REF]. Accordingly, the literature review is structured as follows: first on the rationale behind the need for organized environment in which stakeholders can design and innovate, and then on the collective action that drives the collaboration between them.
From open innovation [START_REF] Chesbrough | Open innovation: The new imperative for creating and profiting from technology[END_REF] to participative design [START_REF] Schuler | Participatory design: Principles and practices[END_REF], the call for broader involvement in organizations' design, NPD and innovation activities has been heard widely and acted upon by many [START_REF] Von Hippel | Democratizing innovation[END_REF][START_REF] Hatchuel | Teaching innovative design reasoning: How concept-knowledge theory can help overcome fixation effects[END_REF]. Seen as a response to mounting competitive pressures, cross-boundaries practices are a way for organizations broadly taken to remain innovative, adaptive and flexible [START_REF] Teece | Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance[END_REF]. The case for openness, as put forth in the literature, is built on the promise of reduced uncertainty, more efficient processes, better products and positive market reaction to the introduction of the innovation [START_REF] Diener | The Market for Open Innovation[END_REF][START_REF] Thomke | Experimentation matters: Unlocking the potential of new technologies for innovation[END_REF]. Rather than focusing on cost reductions alone, open and collaborative are to be implemented for the value-added and creativity they bring to the table [START_REF] Remneland-Wikhamn | Transaction cost economics and open innovation: implications for theory and practice[END_REF]. As a result, organizations are increasingly engaging with their stakeholders to tap into their knowledge base, leverage value co-creation potential and integrate them in various stages of new product or service development activities [START_REF] Lusch | Competing through service: insights from service-dominant logic[END_REF][START_REF] Mota Pedrosa | Customer Integration during Innovation Development: An Exploratory Study in the Logistics Service Industry[END_REF]. However, openness to outside ideas does not come naturally. The existence of the well-documented "not-invented-here" syndrome [START_REF] Katz | Investigating the Not Invented Here (NIH) syndrome: A look at the performance, tenure, and communication patterns of 50 R & D Project Groups[END_REF] has academics constantly remind us that open innovation can only emerge in settings where the culture welcomes and nurtures ideas from outsiders [START_REF] Hurley | Innovation, market orientation, and organizational learning: an integration and empirical examination[END_REF].
The field of design has long embraced this participatory trend. Designers have been looking for more than forty years now for ways to empower users and make them more visible in the design process (Stewart & Hyysaalo, 2008). As a result, multiple approaches now coexist and have yielded a rich literature often focused on visualisation tools, design techniques and the benefits of more-inclusive objects that are obtained through sustained collaboration with users. Whether empathic (e.g. [START_REF] Koskinen | Empathic design: User experience in product design[END_REF]), user-centered (e.g. Norman & Draper, 1986), participatory [START_REF] Schuler | Participatory design: Principles and practices[END_REF] or contextual (e.g. [START_REF] Wixon | Contextual design: an emergent view of system design[END_REF]), streams of "user-active" design presupposes engaging with willing participants in order to improve the construction process and output. Still too often, the interactions between those who know and those who do remain shallow, and are limited to having users discuss the design of services or products [START_REF] Luke | Co-designing Services in the Co-futured City. Service Design: On the Evolution of Design Expertise[END_REF]. Worse, the multiplication of participatory design approaches, often calling a rose by another name, has resulted in practical and theoretical perplexity. According to Sanders et al. (2010: 195) "many practices for how to involve people in designing have been used and developed during the years» and as a untended consequence, «there is some confusion as to which tools and techniques to use, when, and for what purpose". Amongst these «better design methods», co-design seeks the active participation and integration of users' viewpoints throughout the entire design process. More than a glorified focus group, outsiders gather to create the object, not just discuss about it. According to Pillar et al. (2011: 9), the intended purpose of these activities "is to utilize the information and capabilities of customers and users for the innovation process". As such, co-design is often portrayed as a way to facilitate mass customization through platforms, merely enabling better design in settings where users are already willing to take part in the process. Yet, a more accurate and acknowledged definition of co-design refers to it as a creative and collective approach "applied across the whole span of a design process, (where) designers and people not trained in design are working together in the design development process" (Sanders & Stappers, 2008:6).
However, active participation of stakeholders and users in innovation or design processes does not always lead to positive outcomes. For one, [START_REF] Christensen | The Innovator's Dilemma[END_REF] has studied situations (dilemmas) in which intensively catering to existing users leads to diminishing returns and loss of vision. Da Mota Pedrosa (2012) also demonstrate that too much user integration in innovation process becomes detrimental to an organization, and that the bulk of the interactions should occur early in the ideation process rather than in the later development and production stages.
Finally, [START_REF] Holzer | Construction of Meaning in Socio Technical Networks: Artefacts as Mediators between Routine and Crisis Conditions[END_REF] raises the all-important mutual understanding hurdles that heterogeneous innovation groups face, which sometimes translates into a lack of shared meaning and conflict.
Collective action and collaboration in innovation are also well documented in the literature.
Choosing partners, whether it is your suppliers, buyers or other firms, is often portrayed as a strategic, yet highly contextual decision, where issues of trustworthiness, confidentiality and relevance are paramount [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Prior relationships, mutual understanding and common identity are also said to play a role in the successful development of social cohesion and innovation [START_REF] Coleman | Social Capital in the Creation of Human Capital[END_REF][START_REF] Dyer | Creating and managing a high performance knowledge-sharing network: the Toyota case[END_REF]. In other words, engaging in exploratory activities across boundaries requires that the actors know and trust each other, are willing to play nice and share a minimum of behavioural norms [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF]. Along the same lines, Fleming et al. (2007:444) state that "closed social structures engender greater trust among individuals », which in turn generate more collaboration, creativity and ultimately more innovation. Simply put, the absence of social proximity or relationships precludes the expression of creativity and the emergence of novelty. This translates into settings or contexts in which open dialogue, inclusiveness and collaboration amongst individual leads to new objects [START_REF] Remneland-Wikhamn | Transaction cost economics and open innovation: implications for theory and practice[END_REF]. Without a common purpose, groups are bound to failure or conflict, for "goal incongruence hinders (the construction of) a joint solution [START_REF] Xie | Antecedents and consequences of goal incongruity on new product development in five countries: A marketing view[END_REF]. While relevant to our study, this literature remains elusive on more open forms of collaborative design, where relationships are multiple, often not obvious (i.e. not one firm and its few suppliers, but rather a "many-tomany" format) and not held together by contractual ties [START_REF] Hinde | Relationships: A dialectical perspective[END_REF]. Moreover, repeated interactions and equal commitment, two important drivers of collaboration in design [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF], are unlikely in ad-hoc formats such co-design in which interests are seldom shared (i.e. users are hardly committed at improving the firm's bottom line). Figure 1 below combines these streams in the literature, in which collectives and expertise are considered as innovation inputs. Yet first-hand observation of co-design activities leads one to denote that it 1) often involves people who lack design credentials and 2) gathers people with little to none prior history of working together or even sometimes a desire to collaborate. Our extended immersion in a setting that holds co-design workshops on a regular basis, as well as observations of several workshops in Europe has yielded few successful design outputs to account for. 
What's more, participants are seldom lead-users with relevant deep knowledge [START_REF] Von Hippel | Lead users: a source of novel product concepts[END_REF], nor driven by shared values or purpose as in an organization or an innovation community [START_REF] Raymond | The cathedral and the bazaar[END_REF]Adler & Heckscher, 1996). As opposed to Saxenian & Sabel (2008: 390) who argue that «it is possible to foster collaboration (…) only when social connections have become so dense and reliable that it is almost superfluous to do so», collaborative design workshops often take place in relational deserts. Very few experts and poor relationships: what can we really expect?
To a point, this situation is consistent with what authors such as [START_REF] Granovetter | The Strength of Weak Ties[END_REF][START_REF] Glasser | Economic action and social structure: The problem of embeddedness[END_REF] on the strength of weak ties and [START_REF] Burt | Structural Holes: The Structure of Competition[END_REF] on network embeddedness have studied. They demonstrate that too much social and cognitive proximity can be detrimental to innovation.
When too much social cohesion exists, knowledge variety and access to sources elsewhere in the network are hindered, thus limiting one's ability to generate novelty [START_REF] Uzzi | Social Structure and Competition in Interfirm Networks: The Paradox of Embeddedness[END_REF]. This phenomenon, described as the "paradox of embeddedness" [START_REF] Uzzi | Social Structure and Competition in Interfirm Networks: The Paradox of Embeddedness[END_REF], shows that knowledge homogeneity can be an obstacle to collaboration, especially when it is geared towards innovation. Noteboom (2000) further explains this paradox by talking about "cognitive distance", where, following an inverted-U curve, too little or too much proximity results in suboptimal innovation outcomes. Being further away, in both cognitive and social terms, is also said to help avoid creativity hurdles such as "group-think" [START_REF] Janis | Victims of groupthink[END_REF] and fear of the outside world [START_REF] Coleman | Community conflict[END_REF]. While the jury is still out on whether weak or strong ties are best for innovation, authors have suggested that network configuration and density should be adapted to the nature of the task (i.e. weak for exploration vs. strong for exploitation, Noteboom, 2000) or to external conditions [START_REF] Rowley | Redundant Governance Structures: An Analysis of Structural ad Relational Embeddedness in the Steel and Semiconductor Industries[END_REF]. In other words, this literature argues that despite facing challenges in getting heterogeneous groups to collaborate, those who do can expect a proper innovation pay-off. Then again, going back to the workshops witnessed over the past two years, we are still confronted with the same problem: even with weak ties, they still do not yield novelty. If no one configuration results in innovation, could it be that we are looking at it the wrong way?
More importantly, could it be that co-design is geared to generate more than just new objects?
And so we ask: what drives, as in one of our cases, elders, students, caregivers and representatives from an insurance company to design together? Or toddlers, teachers, architects and school board officials to get together to re-invent the classroom? In other words, why does co-design always seem to emerge in settings where basic conditions for collaboration and innovation are lacking [START_REF] Huxham | Creating collaborative advantage[END_REF]. Are such weak ties really generative? This, we argue, calls for a broader investigation of co-design; one that does not separate new object design (outputs) from effects on the design collective (outcomes). Hence, this paper addresses the following questions: what defines co-design contexts and what do workshops really yield?
RESEARCH METHOD AND EMPIRICAL BASE
Following a multiple case study methodology [START_REF] Eisenhardt | Building theories from case study research[END_REF][START_REF] Yin | Case study research. Design and methods[END_REF], covering both retrospective and current cases, we investigate different contexts (i.e. organizational, pedagogical, industrial, etc.) in which co-design is used, and the relationships between participants. Through semi-structured or sometimes informal interviews, as well as observation of both the planning and execution phases of workshops, we investigate the background setting, prior relationships between participants and actual outputs of 21 different co-design workshops in 4 countries (France, Finland, Netherlands, Belgium). In total, we interviewed 20 participants (interviews lasting anywhere from 15 to 60 minutes at a time) and witnessed 10 live workshops (lasting 5 to 8 hours each time).
Since co-design is still an emerging phenomenon theory-wise, the methodology was designed in a way that was coherent with our research object (Edmonson & McManus, 2007;[START_REF] Von Krogh | Phenomenon-based Research in Management and Organisation Science: When is it Rigorous and Does it Matter?[END_REF]). As such, our desire to contribute to the development of a co-design theory invited a broad study of different contexts and workshops. Adopting a "grounded theorist" posture (Glaser & Strauss, 1967), we opted for a qualitative study of multiple dimensions of a same phenomenon [START_REF] Shah | Building Better Theory by Bridging the Quantitative-Qualitative Divide[END_REF]. Moreover, we used different collection tools to better apprehend our research object in all its complexity (Eisenhardt & Grabner, 2007).
Cases for this study were selected based on opportunity sampling, meaning we studied both past workshops and attended live experiments as they became available to us. For retrospective cases, we made sure that they were less than a year old and that access to both participants and documentation was readily available, to prevent any time bias. Only one case was older (C1), yet it was thoroughly documented in a book shortly after, thus preventing distortion.
Our questions touched on relational dimensions amongst participants, their thoughts on the workshop, and what they personally and collectively took away from their experience. To ensure the coherence of our data, we only studied co-design workshops that had a similar format, in terms of length (1 day), protocol (divergent-convergent sequence), tools and number of participants (15-25 at a time). Furthermore, as we still lack co-design theory to guide us in the identification of cases, we simply made sure that they 1) involved a wide array of participants and stakeholders (the "co") and 2) focused on the creation of a new object (the "design", as opposed to testing of existing concepts). Interviews were conducted during and after the workshops, recorded when possible and transcribed by the lead author. Key excerpts were later shared with the respondents during two group interviews, to ensure the validity of the interview content. Data collected for this article was made anonymous, in order to protect any sensitive innovation material, design issues or interpersonal conflicts from leaking out in the open.
Once transcribed, we then looked into our interview material, as well as secondary sources, for any information that could help us assess prior relationships (or lack thereof) between participants. Quotes pertaining to working or personal relationships, apprehensions towards collaboration and potential conflicts were highlighted, and in turn codified into first-level categories. We also asked about (for retrospective cases) or observed the tangible design outputs of each workshop in order to see if the initial design goals were met. The lead author first conducted this task, followed by a discussion of the results with the co-authors, all reaching agreement on the coded data, the different categories and the subsequent interpretation of the results. Table 1 below sums up the contextual elements of the 21 cases studied in this article.
RESULTS
Our data reveals co-design often emerge out of little prior relationships or out of weak ties, with some cases even supporting claims about the presence of underlying malfunctions and poor collaboration climate. It should be mentioned that some of the cases are still underway, and that accordingly our attention is on the initial context and relationships between the stakeholders involved. Most participants are usually sitting around the same table for the first time, and very few of them have any prior design or collaboration experience to account for. According to one of the project leaders in the H1 case: "this is an opportunity to really get to meet the colleagues and get out of this isolated environment". This case, just like the W1 workshop, where stakeholders have not had to work together before, but are now forced to do so is common across most of our data. On this point, the respondent in charge of the W1 case stated that before " there was enough funding for every project, but now they have to come up with an integrated and coherent plan, instead of all pulling in different directions". A claim echoed by one participant (E1), pointing out to the fact "doctoral students come different schools and never really talk to each other". Lack of personal interactions was also raised by one of the dean in the F1 workshop: " we must find ways to get back in touch with both students and local businesses, something we've lost lately". Prior relationships are not assessed by face time, for many stakeholders have met in the past without engaging in anything more than shallow conversations, let alone design activities. As the L1 facilitator explains: " participants had never really exchanged in the past. It's sad because they all work together, but don't interact very much in the end". As a result, cases such as W1 or GT1 show that relationships are improved or created during workshops. In the former, one participant was satisfied about having met "new people around her with whom to work again in the near future", while in the latter, the facilitator believed that the real outcome of the day was "creating mutual interest amongst participants".
Some participants also touched on the lack of trust and collaboration with their colleagues, or similar negative state-of-minds towards the group. The host of the C1 case used these words to sum up prior relationships amongst the stakeholders: "designers and architects see the parents, professors and students as hurdles, they feel as if involving them in the process will only slow things down and bring new problems". Along the same lines, another participant identified the challenges of dealing with "everyone (who) arrive initially with their own pet project or personal needs" and in finding ways to bring everything together. Finally, the host of the N1 case said that they used "co-design because of the economical and trust crises", adding that it was the only good way to go in order to "connect the top-down system with the bottom-up movement".
These two cases, while working on projects that vary by scale and nature, were also both targeting stakeholders or neighborhood facing harsher conditions; "the poor schools, not just the wealthy ones". Workshops, once again, aiming for the most difficult conditions possible.
Other cases raised even more dramatic or sensitive issues amongst stakeholders, with some of them confessing about the absence of meaning or coherence in their day-to-day activities.
Workshops such as F1 or P1 had participants expressing feelings of uselessness in their job.
The host of the latter case explained: " what we are going through here is a meaning crisis, for whom and why are we working in this organization?" Access to material and notes used in the planning of some cases also support our hypothesis by pointing out to sometimes conflictual or tensed settings. For instance, the ED1 animation protocol states that "the client should quickly go over the workshop introduction, to briefly set the stage and avoid raising the sensitive issues". The animation protocol also reads like this: "the facilitator will need to refocus the discussions and remind participants of what is sought after and allowed during the workshop".
Respondents pointed out to different kind of gaps between what they wished to achieve (often expressed by the title of the workshop) and the current reality. Whether it was for an organizational structure ill suited for innovation "in which management usually does all the innovation and simply explains it to others after" as in the P1 case or a lack of relevant knowledge that leads the university (F1) dean " unsure of what to do to help students cope with today's changing environment". It can also just be that some participants have no or little design resources to spare, and as one facilitator puts it, "they come with their issue hoping one codesign workshop will solve it". In other cases, such as GT1 or CS1, what is missing are common language, criteria or working methods. "They had no idea on how to work together" said the facilitator of the former, adding that "what they ended up creating where filters and criteria to assess the quality of concepts to come". For all A and O cases, organizations turn to co-design when they lack the proper knowledge or skills to conduct the activities internally. In the A3 case, participants from the sponsoring firm confessed that they not only needed outsiders and experts to weigh in on the technological dimensions of the product-to-be, but also extended interactions with users to help them sort out real needs from all the needs identified through market research. Finally, for cases such as H1 and E1, stakeholders do not lack knowledge or skills on the content of the workshop, but rather on the means to go about organizing collaboration in design. According to the E1 facilitator, "it seemed as if the participants were as much, if not more, interested in our animation protocol than we tried to achieve with it". While some stakeholders leave with the protocol, other leave with participants by recruiting them out of the workshop. Firms involved in A and O cases are systematically on the lookout for participants who could help them fill internal knowledge gaps beyond the one-day workshops. As the facilitator of the O2 case confessed, the firm is convinced that "if they identify one good talent to recruit, then their investment is worth every penny. At that point, reaching a working prototype just becomes a bonus". One participant from the A2 case even adds that organizations are "not even hiding the fact that they also use co-design as a recruiting-in-action tool".
Secondary data such as planning documents or animation protocols also provide another interesting element: the name of the workshop, or in other words, what brings the stakeholders together. Hence, most cases seem to display both a small level of ambition and a low novelty target. Workshops focusing on concepts such as "classroom of the future", "energy-efficient buildings" or "facilitating care for elders" are fields that have been thoroughly explored for some time already. The fact that stakeholders get to it only now could be interpreted as a symptom of their inability to get it before, when such considerations were still emerging. In other words, workshop "titles" are often an open window into the collective's "common problem", rather than their "common purpose". But while they do not suggest any real innovative outputs, the way the design work is being distributed amongst active stakeholder is in itself quite original. Other significant secondary data can be found in the design output, or lack there of as in most of our cases. While many of them allowed for knowledge to be externalized, new functionalities to be identified and innovative fields (rather than precise innovation) to be suggested for later work, only 1 of our 21 cases (A3) yielded an artifact that could reach the market in the near future.
Hence, the results show that initial contexts and their underlying malfunctions vary. At the most simple level, our cases reveal that the problems may be of knowledge, skills or relational nature.
These dimensions serve as first-level constructs, in which four different levels can be used to further define the tensions faced by the collectives. They may affect individuals, organizations, institutions (value networks) or society at large (i.e. cities, territories). These problems are not mutually exclusive and can be embedded or interlocked with one another.
Complex and deep-level tensions are not only signs that collaboration is unlikely, but that the need for facilitation amongst the stakeholder is essential to achieve any real design outputs.
Table 2 presented on the next page sums up these dimensions and levels built from our cases.
DISCUSSION
Based on our results, we argue that co-design should not be considered as a best practice, but rather as a crisis symptom. If it was indeed used to foster innovation and surpass classic methods to design new products or services, both the aim of the workshops and its results would support such claims. More importantly, if co-design was only used as a way to facilitate dialogue and build better teams, we would not still find real design tasks and expectations that bring stakeholders together. Rather, our results suggest that groups resort to co-design only when crises undermine their ability to collectively create using conventional approaches.
Workshops, it seems, are used as Trojan horses: getting design collectives operational (again) by working on the design of products or services. As Godet (1977:21) argues: "action accelerates change". In our cases, designing together fosters change amongst stakeholders.
And as such, co-design rises in a field that tends to respond to crises by inventing new methods, modes of organizing and management principles [START_REF] Midler | Compétion par l'innovation et dynamique des systèmes de conception dans les entreprises françaises-Une comparaison de trois secteurs[END_REF].
Resorting to the word "crises" is certainly not neutral: it holds a meaning often seen as dramatic and negative. This choice of words stems from both interviews excerpts, where some participants raised "meaning" (case P1) or "trust" (case C1) crises, and from existing innovation literature (e.g. [START_REF] Godet | Crise de la prévision, essor de la prospective: exemples et méthodes[END_REF][START_REF] Midler | Compétion par l'innovation et dynamique des systèmes de conception dans les entreprises françaises-Une comparaison de trois secteurs[END_REF][START_REF] Spina | Factors influencing co-design adoption: drivers and internal consistency[END_REF]). Scharmer (2009:2) argues that «the crisis of our time reveals the dying of an old social structure and way of thinking, an old way of institutionalizing and enacting collective social forms». The difference here is that we highlight crises not just to describe the initial setting in which the innovation or design efforts unfold, but actually as the sine qua non condition in which it can take place. By crisis, we simply point out to, as Godet (1977:20) said, "a gap between reality and aspirations".
In a collaborative design setting that aspires to innovate, the gap lies in the lack of prior relationships between the participants and the absence of experts or innovation credentials around the table. Contrary to the literature on proper innovation settings, co-design occurs where there is little potential for collaboration. Yet, while tense contexts are known to hinder innovation [START_REF] Clauß | The Influence of the Type of Relationship on the Generation of Innovations in Buyer-Supplier Collaborations[END_REF], our cases show that stakeholders push through and keep on co-designing. It seems that outcomes such as stronger ties, learning and change trump design outputs.
The change management literature can help us better understand co-design if it is indeed an intervention in response to a crisis, although it remains different on many levels. For one, co-design does not always target pre-existing and stable collectives (such as a team or an organization), but rather disgruntled individuals often coming together for the first time. Nor does change management seek to regroup individuals: its purpose is to capitalize on "common purpose" and "shared values" as accelerators of organizational progress (Roy, 2010: 48). And while change management is about «moving from strategic orientation to action» (Rondeau et al. 2005:7) and helping individuals cope with disruption [START_REF] Rondeau | Transformer l'organisation : Vers un modèle de mise en oeuvre[END_REF], co-design appears to be moving from action to strategic orientation, by not focusing on the change itself but rather on what needs to be designed by the collective. Such interventions do not seek to bring change according to a precise plan, but rather to bring the collective to a level where higher-level activities such as design and innovation can be better executed. As change-management guru Kotter (1995:12) advocates, research has demonstrated that only "one or two years into transformation (can we) witness the introduction of new products". What co-design seems to do, on the contrary, is operate change for and by design, meaning innovation activities precede the existence of a proper collaboration setting. If this holds true, collectives should no longer be considered as mere inputs in the design of new objects, but on the contrary, as an important result of the design activity. Rather than treating collectives as an input of design, the interventions we studied suggest a reversed outlook where design becomes an input of collectives. Hence, controlled disruption through co-design is not only possible, but desirable in order to achieve both design targets and renewed collectives. It thus becomes a new managerial tool for bringing about change, not just new objects. According to [START_REF] Hatchuel | The two pillars of new management research[END_REF], advances in management are precisely, as we have tried to demonstrate here again, responses to difficulties of collective action. Such outcomes on the collective are not taken into account by the existing design literature. For instance, Hatchuel et al. (2011) identify five potential design outputs: research questions, products, emerging concepts, skills and knowledge. While it could be said that the last two dimensions are improved along the way, a new design collective should be seen as a desirable outcome of its own.
IMPLICATIONS
This discussion raises in turn an important question: why then do we co-design if not to design?
Solving crises, (re)creating design collectives, exchanging knowledge, building foundations for later work: these are all legitimate, yet unsung co-design outcomes. Knowing that innovation and its subversive nature can disrupt collectives [START_REF] Hatchuel | Services publics: la Subversion par les nouveaux produits[END_REF], co-design can be used as a "controlled disruption" that bonds people together through design activities. Again, if this holds true, then we must also revisit the criteria used to assess its performance by including dimensions not only on the new objects, but also on the new collectives. As one workshop organizer told us: "the real success lies in knowing which citizens to mobilize for future projects (…) this came a bit as a surprise, but it represents an enormous potential". Creation of a design collective does not mean that the exact same individuals will be involved the next time around, only that the workshop participants benefit from increased awareness, mutual consideration and minimal knowledge that can be used in future projects in which they all hold stakes. For the innovation manager of the "A" cases, co-designing has translated into new design automatisms, where such workshops are now to be held "systematically in any project that is heavily client-centered". While prior work has addressed the links between innovation and change (e.g. [START_REF] Henderson | Architectural innovation: the reconfiguration of existing product technologies and the failure of established firms[END_REF][START_REF] Kim | Strategy, value innovation, and the knowledge economy[END_REF]), the latter is often described as the primary and intended target of interventions on the former. What we put forth here is the idea that change is the indirect, yet most important, outcome of co-design. What's more, its specificity and strength is that this change is mediated by the new objects, which should still be pursued. Rather than treating it as a handicap, the lack of prior relationships can be turned into an asset.
Organizations should not give up or delay innovation efforts, as poor contexts may turn out to be quite conducive for new collectives and ideas to emerge. According to [START_REF] Morin | Pour une crisologie[END_REF], crises highlight flaws and opportunities, but more importantly trigger actions that lead to new solutions.
The author adds, as Schumpeter (1939) before him, that "within the destruction of crisis lies creation". Since the existing literature on co-design fails to account for relationships and focuses on the new objects, we argue that the complexity of current innovation issues calls for more such design interventions that create both tangible outputs and relational outcomes. In terms of practical implications, our results offer a reversed outlook on the organizational change or configuration-for-innovation reasoning. While we have known for some time now that innovation and design can trigger change [START_REF] Schott | Process innovations and improvements as a determinant of the competitive position in the international plastic market[END_REF][START_REF] March | Footnotes to organizational change[END_REF], our understanding of co-design takes it one step further, making the case that such disruption can be purposefully introduced and managed as a means to bring about desired changes in disgruntled groups or disassembled collectives. Starting from the proposition that design can help cope with collaboration issues, organizations can envision new means to bring about change through such activities. Moreover, knowing that co-design activities need some sort of crisis to emerge, practitioners may want to hold back on hosting workshops in positive settings in which they are unlikely to bring about changes in collectives, if not negative outcomes.
As Noteboom (2000:76) explains, «people can collaborate without agreeing, (but) it is more difficult to collaborate without understanding, and it is impossible to collaborate if they don't make sense to each other». Hence, starting from a difficult context where collaboration is unlikely, it appears as if engaging in design activities allows not for complete agreement, but at least for the construction of a common repertoire of practices and references. Most cases studied here fail to produce significant design outputs; however, most hold the promise of subsequent tangible results. Hence, based on classic transaction-cost economics, this "two-time" collaboration dynamic depicts the first co-design workshop as the investment phase, whereas subsequent workshops allow tangible returns (design outputs) to emerge. In that sense, participants or organizations disappointed about co-design could be compared to risk-averse or impatient investors pulling out of the market too soon. This means that organizations do not have to resort to hybrid weak-and-strong-ties configurations [START_REF] Uzzi | Social Structure and Competition in Interfirm Networks: The Paradox of Embeddedness[END_REF], but can rather adopt a sequential approach of strengthening weak ties through design activities and then mobilizing this more cohesive collective towards solving innovation issues. Briefly put, it may be the answer to the paradox of embeddedness: strong ties can indeed lead to innovation, but only when the ties have first been built or mediated through the co-construction of a common object.
LIMITS AND FUTURE RESEARCH
Lastly, our research design holds two methodological limits that ought to be discussed. First, the use of retrospective cases holds the risk of historical distortions and maturation. Second, discussing crises, albeit not in such terms, with participants may at times have raised sensitive issues. To minimize the impact of both time and emotions, we interviewed a wide array of participants and centered our questions on specific instances of the workshop [START_REF] Hubert | Retrospective reports of strategic-level managers: Guidelines for increasing their accuracy[END_REF]. More importantly, we relied on secondary sources used in the planning or the facilitation of the workshop in order to assess contexts without falling into data maturation traps.
Future research on collaborative design activities should pursue the ongoing theorization of co-design and extend this paper by conducting a larger-scale quantitative study of relationships before and after workshops. It ought to further define the nature of the ties between stakeholders, and their evolution as they go through collaborative design. More importantly, future research should seek longitudinal cases in which repeated workshops would allow our claims on the "design-co-design" sequence required to reach innovative design to be further validated.
Finally, other dimensions of the workshop that may influence both design outputs and collective outcomes such as animation protocols, formats or goals (what products, services or processes should collectives with poor relationships attempt to design?) should also be further studied.
CONCLUSION
Puzzled by both theoretical and empirical inconsistencies about what is to be expected from codesign, we conducted this multiple case study hoping to better understand the contextual elements and different results of collaborative design workshops. Our data has shown that codesign's natural environment was one of crisis, whether of knowledge, skill or relational nature.
Rather than seeing this situation as a hurdle for collaboration or an impossible setting for innovation, we have argued that it could on the contrary be overcome through the engagement of stakeholders in design activities and used as a leverage to change management. Results also point out to a sequence in which initial weak ties are strengthened by design, which in turns can lead to new objects to be designed by strong collectives. As a consequence, we have advised organizations to tackle internal or network malfunctions through innovation first, rather than addressing innovation issues only once the collective has reached collaboration maturity.
Figure 1. What co-design should look like based on innovation and collaboration theory.
Figure 2. What co-design looks like based on empirical evidence.
Table 1. Summary of the contextual elements for 21 co-design workshops studied (columns: case, purpose, participants, prior relationships, design outputs).

Case L1. Purpose: designing an application to improve sales and the customer in-store experience. Participants: sales clerks, IT and marketing employees, customers, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders. Design outputs: early concepts from IT staff are not well received; no consensual concept emerges.

Case A1. Purpose: using RFID technology to locate items in the store in real time. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders. Design outputs: functionalities emerge, but no working concept or prototype is designed.

Case A2. Purpose: designing the in-store offices of the future for sales associates and managers. Participants: store employees and management, ergonomists, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders; some returning participants (A1). Design outputs: client intends to recommend testing some of the new office concepts.

Case A3. Purpose: using no-contact technology to improve the in-store customer experience. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders; some returning participants (A2). Design outputs: successful design of a working smartphone application prototype; in-store real testing is planned.

Case O1. Purpose: designing new sensors to better measure athletes' performance. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders. Design outputs: functionalities emerge, but no working concept or prototype is designed.

Case O2. Purpose: using Kinect-like technology to create new in-store interactions with customers. Participants: store employees and management, IT experts, customers, students. Prior relationships: little to none; employees have designed together internally, but never with other stakeholders; no returning participants (O1). Design outputs: functionalities emerge, but no working concept or prototype is designed.
Table 2. Design collectives' initial malfunctions (nature: knowledge, skills or relational; levels and example manifestations below).
Individual: loss of meaning; low motivation on the job; evolving roles.
Organizational: no common criteria or methods; poor innovation structure; lack of specific skills.
Institutional: unsure of who to work with; mistrust; poor links with local partners.
Social: wealth inequalities.
01485086 | en | shs | 2013 | https://minesparis-psl.hal.science/hal-01485086/file/Kroll%20Le%20Masson%20Weil%202013%20ICED%20final.pdf
Dr Ehud Kroll (email: kroll@aerodyne.technion.ac.il), Pascal Le Masson
MODELING PARAMETER ANALYSIS DESIGN MOVES WITH C-K THEORY
Keywords: parameter analysis, C-K theory, conceptual design, design theory
The parameter analysis methodology of conceptual design is studied in this paper with the help of C-K theory. Each of the fundamental design moves is explained and defined as a specific sequence of C-K operators and a case study of designing airborne decelerators is used to demonstrate the modeling of the parameter analysis process in C-K terms. The theory is used to explain how recovery from an initial fixation took place, leading to a breakthrough in the design process. It is shown that the efficiency and innovative power of parameter analysis is based on C-space "de-partitioning". In addition, the role of K-space in driving the concept development process is highlighted.
1
INTRODUCTION Studying a specific method with the aid of a theory is common in scientific areas [START_REF] Reich | A theoretical analysis of creativity methods in engineering design: casting and improving ASIT within C-K theory[END_REF][START_REF] Shai | Creativity and scientific discovery with infused design and its analysis with C-K theory[END_REF]. It allows furthering our understanding of how and why the method works, identifying its limitations and area of applicability, and comparing it to other methods using a common theoretical basis. At the same time, interpreting and demonstrating the method from the theoretic perspective can provide empirical validation of the theory. The current study focuses on using C-K theory to clarify the (implicit) theoretical grounds and logic of a pragmatic design method called Parameter Analysis (PA). It also helps to explain some practical issues in C-K design theory. C-K theory [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF][START_REF] Le Masson | Strategic management of innovation and design[END_REF][START_REF] Hatchuel | Towards an ontology of design: lessons from C-K design theory and Forcing[END_REF] is a general descriptive model with a strong logical foundation, resulting in powerful expressive capabilities. The theory models design as interplay between two spaces, the space of concepts (C-space) and the space of knowledge (K-space). Four operators, C→K, K→C, C→C and K→K, allow moving between and within these spaces to facilitate a design process. Space K contains all established, or true, propositions, which is all the knowledge available to the designer. Space C contains "concepts", which are undecidable propositions (neither true nor false) relative to K, that is, partially unknown objects whose existence is not guaranteed in K. Design processes aim to transform undecidable propositions into true propositions by jointly expanding spaces C and K through the action of the four operators. This expansion continues until a concept becomes an object that is well defined by a true proposition in K. Expansion of C yields a tree structure, while that of K produces a more chaotic pattern. PA [START_REF] Kroll | Innovative conceptual design: theory and application of parameter analysis[END_REF][START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF] is an empirically-derived method for doing conceptual design. It was developed initially as a descriptive model after studying designers in action and observing that their thought process involved continuously alternating between conceptual-level issues (concept space) and descriptions of hardware 1 (configuration space). The result of any design process is certainly a member of configuration space, and so are all the elements of the design artifact that appear, and sometimes also disappear, as the design process unfolds. Movement from one point to another in configuration space represents a change in the evolving design's physical description, but requires conceptual reasoning, which is done in concept space. The concept space deals with "parameters", which in this context are functions, ideas and other conceptual-level issues that provide the basis for anything that happens in configuration space. 
Moving from concept space to configuration space involves a realization of the idea in a particular hardware representation, and moving back, from configuration to concept space, is an abstraction or generalization, because a specific hardware serves to stimulate a new conceptual thought. It should be emphasized that concept space in PA is epistemologically different from C-space in C-K theory, as explained in [START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF]. To facilitate the movement between the two spaces, a prescriptive model was conceived, consisting of three distinct steps, as shown in Figure 1. The first step, Parameter Identification (PI), consists primarily of the recognition of the most dominant issues at any given moment during the design process. In PA, the term "parameter" specifically refers to issues at a conceptual level. These may include the dominant physics governing a problem, a new insight into critical relationships between some characteristics, an analogy that helps shed new light on the design task, or an idea indicating the next best focus of the designer's attention. Parameters play an important role in developing an understanding of the problem and pointing to potential solutions. The second step is Creative Synthesis (CS). This part of the process represents the generation of a physical configuration based on the concept recognized within the parameter identification step. Since the process is iterative, it generates many physical configurations, not all of which will be very interesting. However, the physical configurations allow one to see new key parameters, which will again stimulate a new direction for the process. The third component of PA, the Evaluation (E) step, facilitates the process of moving away from a physical realization back to parameters or concepts.
Evaluation is important because one must consider the degree to which a physical realization represents a possible solution to the entire problem. Evaluation also points out the weaknesses of the configurations and possible areas of improvement for the next design cycle.
1 Hardware descriptions or representations are used here as generic terms for the designed artifact; however, nothing in the current work excludes software, services, user experience and similar products of the design process.

PA's repetitive PI-CS-E cycles are preceded by a Technology Identification (TI) stage of looking into fundamental technologies that can be used, thus establishing several starting points, or initial conditions. A cursory listing of each candidate technology's pros and cons follows, leading the designer to pick the one that seems most likely to succeed. PA proved to be useful and intuitive, yet more efficient and innovative than conventional "systematic design" approaches [START_REF] Kroll | Design theory and conceptual design: contrasting functional decomposition and morphology with parameter analysis[END_REF].
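To make the flow of the method concrete, the following Python sketch renders TI followed by repeated PI-CS-E cycles as a simple control loop. It is only one possible interpretation offered here for illustration, not part of PA's published formalism; the step functions (`ti`, `pi`, `cs`, `evaluate`) and the toy stubs in the usage example are assumptions of this sketch.

```python
# Minimal control-flow sketch of PA: TI once, then repeated PI-CS-E cycles
# driven by the evaluation's decision ("accept", "improve" or "backtrack").
def parameter_analysis(need, ti, pi, cs, evaluate, max_cycles=20):
    technologies = ti(need)            # TI: candidate technologies, best one first
    parameter = technologies[0]        # the chosen technology acts as the first parameter
    configuration = None
    for _ in range(max_cycles):
        configuration = cs(parameter)                 # CS: realize the idea in hardware
        behavior, decision = evaluate(configuration)  # E: deduce behavior, decide how to go on
        if decision == "accept":
            break
        if decision == "improve":
            parameter = pi(behavior)   # PI: new conceptual issue drawn from what was learned
        else:                          # "backtrack": abandon the path, take another technology
            technologies = technologies[1:] or technologies
            parameter = technologies[0]
    return configuration

# Toy usage with stub step functions (purely illustrative):
design = parameter_analysis(
    need="keep a 10 g airborne sensor below a 3 m/s descent rate",
    ti=lambda need: ["flexible parachute", "rigid parachute", "gas-filled balloon"],
    pi=lambda behavior: "address: " + behavior,
    cs=lambda parameter: "configuration realizing '" + parameter + "'",
    evaluate=lambda conf: ("meets the requirement", "accept"),
)
print(design)
```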
The present study attempts to address some questions and clarify some of the fundamental notions of both PA and C-K theory. Among them:
- What exactly are the elements of C-space and K-space? C-K theory distinguishes between the spaces based on the logical status of their members ("undecidable" propositions are concepts, and "true" or "false" ones are knowledge items), but it can still benefit from a clear and consistent definition of the structure and contents of these spaces.
- What is the exact meaning of the C-K operators? In particular, is there a C→C operator, and does it mean that one concept is generated from another without use of knowledge?
- How should C-K diagrams be drawn? How can these diagrams capture the time-dependence of the design process? How exactly should the arrows representing the four operators be drawn?
- If PA is a proven design method and C-K is a general theory of design, does the latter provide an explanation for everything that is carried out in the former? Does C-K theory explain the specific design strategy inherent in PA, and in particular, the latter's claim that it supports innovative design?

The PA method of conceptual design is demonstrated in the next section by applying it to a design task. The steps of PA are then explained with the notions of C-K theory, followed by a detailed interpretation of the case study in C-K terms. The paper concludes with a discussion of the results of this study and their consequences in regard to both PA and C-K theory. For brevity, the focus here is on the basic steps of PA, leaving out the preliminary stage of TI. The role of the case study in this paper is merely to demonstrate various aspects; the results presented are general and have been derived by logical reasoning and not by generalizing from the case study.
PARAMETER ANALYSIS APPLICATION EXAMPLE
The following is a real design task that originated in industry and was later changed slightly for confidentiality reasons. It was assigned to teams of students (3-4 members each) in engineering design classes, who were directed to use PA for its solution after receiving about six hours of instruction and demonstration of the method. The design process presented here is based on one team's written report, with slight modifications for clarity and brevity.

The task was to design the means of deploying a large number (~500) of airborne sensors for monitoring air quality and composition, wind velocities, atmospheric pressure variations, etc. The sensors were to be released at an altitude of ~3,000 m from an under-wing container carried by a light aircraft and stay as long as possible in the air, with the descent rate not exceeding 3 m/s (corresponding to the sensor staying airborne for over 15 minutes). Each sensor contained a small battery, electronic circuitry and a radio transmitter, and was packaged as a 10 by 50-mm cylinder weighing 10 g. It was necessary to design the aerodynamic decelerators to be attached to the payload (the sensors), and the method of their deployment from a minimum weight and size container. The following focuses on the decelerator design only.

The design team began with analyzing the need, carrying out some preliminary calculations. These showed that the drag coefficient C_D of a parachute-shaped decelerator is about 2, so to balance a total weight of 12-15 g (10 g sensor plus 2-5 g assumed for the decelerator itself), the parachute's diameter would be ~150 mm. If the decelerator were a flat disk perpendicular to the flow, C_D reduces to ~1.2, and if it were a sphere, C_D drops to ~0.5, with the corresponding diameters being about 200 and 300 mm, respectively. It was also clear that such large decelerators would be difficult to pack compactly in large numbers, that they should be strong enough to sustain aerodynamic loads, particularly during deployment, when the relative velocity between them and the surrounding air is high, and that, being disposable, they should be relatively cheap to make and assemble. Further, the sturdier the decelerator is made, the heavier it is likely to be; and the heavier it is, the larger it has to be in order to provide enough area to generate the required drag force.

Technology identification began with the team identifying deceleration of the sensors as the most critical aspect of the design. For this task they came up with the technologies of flexible parachute, rigid parachute, gas-filled balloon and hot-air balloon. Reviewing some pros and cons of each technology, they chose the flexible parachute for further development. Figure 2 is a detailed description of a portion of the PA process carried out by the design team.
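The preliminary sizing quoted above follows from a simple drag balance, and the short sketch below reproduces it. The air density, gravitational acceleration and the 15 g total mass are assumptions chosen here to reproduce the order of magnitude stated in the text, not values taken from the team's report.

```python
import math

g = 9.81      # m/s^2
rho = 1.0     # kg/m^3, rough air density around 3,000 m altitude (assumed)
v = 3.0       # m/s, maximum allowed descent rate
mass = 0.015  # kg: 10 g sensor plus ~5 g assumed for the decelerator itself

weight = mass * g  # at the limiting descent rate, drag must balance weight

for shape, cd in [("flexible parachute", 2.0), ("flat disk", 1.2), ("sphere", 0.5)]:
    area = 2.0 * weight / (rho * v**2 * cd)        # from drag = 0.5*rho*v^2*cd*area
    diameter_mm = 1000.0 * math.sqrt(4.0 * area / math.pi)
    print(f"{shape:18s} C_D = {cd:3.1f} -> diameter ~ {diameter_mm:.0f} mm")
# prints roughly 145, 185 and 290 mm, matching the ~150/200/300 mm estimates above
```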
Figure 2. Description of the reasoning process used to design airborne decelerators (columns: PA step | Reasoning process | Outcome)

PI1 | The first conceptual issue (parameter) should be the chosen technology. | Parameter: "Produce a large enough drag force using a flexible parachute"
CS1 | Which particular physical configuration would realize the flexible parachute concept? | Configuration: A 150-mm dia. hemispherical parachute, connected to the sensor with cords.
E1 | Given the physical configuration, what is the behavior? Shall we try to improve the last configuration or backtrack? | Drag force is ok and compact packing can be done by folding, but the parachute may not open and cords may tangle. Decision: try another technology from the TI stage.
PI2 | Use the new technology for the decelerator design. | Parameter: "Use a rigid parachute to generate drag force"
CS2 | Which particular physical configuration would realize the rigid parachute concept? | Configuration: A 150-mm diagonal square pyramid with the sensor rigidly attached.
E2 | Given the physical configuration, what is the behavior? Shall we try to improve the last configuration or backtrack? | Drag force is ok but compact packing is impossible because these configurations cannot nest in each other. Decision: try to improve the design by finding a way to pack it compactly.
PI3 | How can the last configuration be improved? Combine the idea of a flexible parachute that can be folded for packing with a rigid parachute that doesn't have cords and doesn't require a strong "pull" to open. | Parameter: "Use a frame + flexible sheet construction that can fold like an umbrella; use a spring for opening"
(The intermediate rows CS3, E3, PI4 and CS4 are summarized in the text below.)
E4 | Given the physical configuration, what is the behavior? Shall we try to improve the last configuration or backtrack? | This would work, seems cheap to make, and shouldn't have deployment problems. But how will the "gliders" be packed and released in the air? Decision: continue with this configuration: design the container, packing arrangement, and method of deployment.
The first concept (PI1) is based on a small parachute that will provide the necessary drag force while allowing compact packing. The following creative synthesis step (CS1) realizes this idea in a specific hardware by sketching and sizing it with the help of some calculations. Having a configuration at hand, evaluation can now take place (E1), raising doubts about the operability of the solution.

The next concept attempted (PI2) is the rigid parachute from the TI stage, implemented as a square pyramid configuration (CS2), but found to introduce a new problem (packing) when evaluated (E2).

A folding, semi-rigid parachute is the next concept realized and evaluated, resulting in the conclusion that parachutes are not a good solution. This brings a breakthrough in the design: dissipating energy by frictional work can also be achieved by a smaller drag force over a larger distance, so instead of a vertical fall the payload can be carried by a "glider" in a spiraling descent (PI4). The resulting configuration (CS4) shows an implementation of the last concept in words and a sketch, followed by an evaluation (E4) and further development (not shown here).

It is interesting to note a few points in this process. First, when the designers carried out preliminary calculations during the need analysis stage, they already had a vertical drag device in mind, exhibiting the sort of fixation in which a seemingly simple problem triggers the most straightforward solution. Second, technology identification yielded four concepts, all still relevant for vertical descent, and all quite "standard". A third interesting point is that when the "umbrella" concept failed (E3), the designers chose not to attempt another technology identified at the outset (such as the gas-filled balloon), but instead used the insights and understanding gained during the earlier steps to arrive at a totally new concept, that of a "glider" (PI4). And while in hindsight this last concept may not seem that innovative, it actually represents a breakthrough in the design process, because this concept was not apparent at all at the beginning.
INTERPRETATION OF PARAMETER ANALYSIS IN C-K TERMS
Technology identification, which is not elaborated here, establishes the root concept, C0, as the important aspect of the task to be designed first. The actual PA process consists of three steps that are applied repeatedly (PI, CS and E) and involves two types of fundamental entities: parameters (ideas, conceptual-level issues) and configurations (hardware representations, structure descriptions). In addition, the E step deduces the behavior of a configuration, followed by a decision as to how to proceed. The interpretation in C-K terms is based on the premise that because knowledge is not represented explicitly in PA, and because a design should be considered tentative (undecidable in C-K terms) until it is complete, both PA's parameters and configurations are entities of C-K's C-space.

The parameter identification (PI) step begins with the results of an evaluation step that establishes the specific behavior of a configuration in K-space by deduction ("given structure, find behavior") and makes a decision about how to proceed. There are three possible decisions that the evaluation step can make:
1. Stop the process if it is complete (in this case there is no subsequent PI step), or
2. Try to improve the undesired behavior of the evolving configuration (this is the most common occurrence), or
3. Use a specific technology (from technology identification, TI) for the current design task. This can happen at the beginning of the PA process, after establishing (in TI) which is the most promising candidate for further development, or if the evaluation results in a decision to abandon the current sequence of development and start over with another technology.

In C-K terms, the current behavior and the decision on how to proceed are knowledge items in K-space, so generating a new concept (for improvement, or a totally new one) begins with a K→C operator. This, in turn, triggers a C→C operator, as shown in Figure 3. The K→C operator carries the decision plus domain knowledge into C-space, while the C→C operator performs the actual derivation of the new concept. Two cases can be distinguished: the PI step can begin with a decision to improve the current design (case 2 above), as in Figure 3a, or it can begin with a decision to start with a new technology (case 3 above), as in Figure 3b. In both cases, the result of the PI step is always a new concept in C-K terms, which in PA terms is a parameter. In the following diagrams we shall use round-cornered boxes to denote C-K concepts that stand for PA parameters, and regular boxes for C-K concepts that represent PA configurations. The red numbers show the order of the process steps.

The creative synthesis (CS) step starts with a parameter, a PA concept, and results in a new configuration. It involves a realization of an idea in hardware representation by particularization or instantiation (the opposite of generalization). It usually requires some quantitative specification of dimensions, materials, etc. that are derived by calculation. In terms of C-K theory, if PA's parameters and configurations are elements of C-space, then the CS step should start and end in C-space. However, because knowledge is required to realize an idea in hardware and perform quantitative reasoning, a visit to K-space is also needed. The CS step therefore begins with searching for the needed knowledge by a C→K operator that triggers a K→K (deriving specific results from existing knowledge). The new results, in turn, are used by a K→C operator to activate a C→C that generates the new concept, which is a PA configuration that realizes the parameter in hardware. This interpretation of CS as a sequence of four C-K operators is depicted in Figure 4a.
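As a compact illustration of this reading (not the authors' formalism), the sketch below encodes PI, CS and E as the operator sequences just described and replays the first cycle of the decelerator example. The class, the function names and the way knowledge items are recorded are assumptions made only for this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:                    # element of C-space: a PA parameter or configuration
    name: str
    label: str
    parent: Optional["Concept"] = None
    kind: str = "parameter"       # "parameter" (idea) or "configuration" (hardware)

K = []      # K-space: established propositions, which grow as the design proceeds
ops = []    # chronological record of the C-K operators used

def pi_step(parent, name, idea):
    # PI = K->C (carry the decision plus domain knowledge) then C->C (derive the concept)
    ops.extend(["K->C", "C->C"])
    return Concept(name, idea, parent, kind="parameter")

def cs_step(parameter, name, hardware, derived_knowledge):
    # CS = C->K (search knowledge), K->K (derive specifics), K->C, C->C (new configuration)
    ops.extend(["C->K", "K->K", "K->C", "C->C"])
    K.append(derived_knowledge)
    return Concept(name, hardware, parameter, kind="configuration")

def e_step(configuration, behavior, decision):
    # E = C->K (look for the evaluation knowledge) then K->K (deduce behavior, decide)
    ops.extend(["C->K", "K->K"])
    K.append(configuration.label + ": " + behavior + " -> " + decision)
    return decision

# Replay of the first PI-CS-E cycle of the decelerator example:
c0 = Concept("C0", "airborne decelerator")
c1 = pi_step(c0, "C1", "produce a large enough drag force using a flexible parachute")
c2 = cs_step(c1, "C2", "150-mm hemispherical canopy with cords",
             "canopy diameter follows from the drag balance")
decision = e_step(c2, "may not open, cords may tangle", "try another technology")
print(ops)   # ['K->C', 'C->C', 'C->K', 'K->K', 'K->C', 'C->C', 'C->K', 'K->K']
```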
In PA, parameters (concepts, ideas) cannot be evaluated, only configurations. This means that the evaluation (E) step begins with a configuration or structure and tries to deduce its behavior, from which it will make a decision (any one of the three described above). This means that a C→K operator is used to trigger a K→K; the former is the operation of looking for the knowledge necessary for the evaluation, while the latter is the actual deductive reasoning that leads to deriving the specific behavior and making the decision as to how to proceed. This is shown in Figure 4b.

The design process began with the need, the problem to solve, as stated by the customer. A need analysis stage produced greater understanding of the task and the design requirements. This took place entirely in K-space and is not shown here. Next, technology identification focused the designers on the issue of deceleration (C0), found possible core technologies, listed their pros and cons, and made a choice of the best candidate. The following description of the PA process commences at this point. Figure 5 shows the first cycle of PI-CS-E as described in Figure 2 and depicted with the formalism of Figures 3 and 4. Note that while C0 does not have a meaning of parameter or configuration in PA terms, the result of the first partition, C1, is a PA parameter, while the second partition generates the configuration C2. This first cycle ended with a decision to abandon the flexible parachute concept and use another technology identified earlier (in TI) instead.

For brevity, the demonstration now skips to the last PI-CS-E cycle as depicted in Figure 6. It began with the evaluation result of step E3 (see Figure 2) shown at the lower right corner of Figure 6. The designers concluded that parachutes, flexible or rigid, were not a good solution path, and called for trying something different. They could, of course, opt for the balloon technologies identified earlier, but thanks to their better understanding of the problem at that point, they decided to take a different look at the problem (PI4 in Figure 6). They realized that their previous efforts had been directed at designing vertical decelerators, but that from the energy dissipation viewpoint a spiraling "glider" concept might work better. The C-K model of this step depicts a "de-partition", or growing of the tree structure in C-space upward, at its root. This phenomenon, also demonstrated in chapter 11 of [START_REF] Le Masson | Strategic management of innovation and design[END_REF], represents moving toward a more general or wider concept, and in our case, redefining the identity of C0: decelerator to C0': vertical drag decelerator and partitioning C7 to C0' and C8.
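The de-partition move alone can be illustrated with a very small sketch: the tree in C-space is grown upward by re-identifying the old root and hanging it, together with a new sibling concept, under a wider root. The dictionary representation, the selection of child concepts shown and the wording of the wider concept C7 are assumptions of this sketch; the labels follow the example above.

```python
# Parent links for part of the C-space tree before the de-partition (illustrative).
parent = {
    "C1: flexible parachute": "C0: decelerator",
    "C3: rigid parachute": "C0: decelerator",
}

def departition(old_root, redefined_root, wider_root, new_sibling):
    """Grow the tree upward at its root: the old root keeps its partitions but is
    re-identified, and both it and a new sibling become partitions of a wider root."""
    for child, p in list(parent.items()):
        if p == old_root:
            parent[child] = redefined_root
    parent[redefined_root] = wider_root
    parent[new_sibling] = wider_root

departition(old_root="C0: decelerator",
            redefined_root="C0': vertical drag decelerator",
            wider_root="C7: decelerator (the wider concept)",
            new_sibling="C8: spiraling 'glider' descent")
for child, p in parent.items():
    print(child, "->", p)
```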
5 DISCUSSION

C-K theory has been clarified by this study with regard to its spaces and operators. Elements of C-space correspond to both PA's parameters (concepts) and configurations (structures); thus they have the following structure: "there exists an object Name, for which the group of (behavioral) properties B1, B2, … can be made with the group of structural characteristics S1, S2, …". For example, concept C2 (a PA configuration) and concept C5 (a PA parameter) in Figure 6 can be described as:
"there exists an object C 2 , for which the group of properties B 1 = produces vertical drag (inherited from C 0 ') B 2 = based on flexible parachute (inherited from C 1 ) can be made with the group of characteristics S 1 = 150-mm dia. hemispherical canopy S 2 = cords for sensor attachment" "there exists an object C 5 , for which the group of properties B 1 = produces vertical drag (inherited from C 0 ') B 2 = based on rigid parachute (inherited from C 3 ) B 3 = built as an umbrella, i.e., folding frame and flexible skin can be made with the group of characteristics S 1 = 150x150mm square pyramid shape (inherited from C 4 )" The interesting thing to note is that except for the root concept in C-K (which is not defined as a PA entity), all other concepts have some attributes (properties and/or characteristics). But because a C-K concept can be either a PA parameter of configuration and PA excludes the possibility of having configurations without parameters to support them, the concepts in C-K sometimes have only properties (i.e., behavioral attributes), and sometimes properties plus characteristics (structural attributes); however, a concept cannot have characteristics and no properties. Need analysis, although not elaborated here, is the stage of studying the design task in terms of functions and constraints, and generating the design requirements (specifications). It takes place entirely in K-space. Technology identification also takes place mostly in K-space. The basic entities of PA, parameters (conceptual-level issues, ideas) and configurations (embodiments of ideas in hardware) have been shown to reside in C-K's C-space. However, all the design "moves" in PA-PI, CS and Ewhich facilitate moving between PA's spaces, require excursions to C-K's K-space, as shown in Figures 3 and4. In particular, the importance of investigating K-space when studying design becomes clear by observing how the acquisition of new knowledge (modeled with dark background in Figures 5 and6) that results from evaluating the evolving design is also the driver of the next step. It should be noted, however, that the tree structure of C-space is not chronological, as demonstrated by the de-partition that took place. To capture the time-dependence of the design process, C-K's concepts were labeled with a running index and the operator arrows numbered. This method of drawing C-K diagrams is useful for providing an overall picture of the design process, but is incorrect in the sense that when a C-K concept is evaluated and found to be deficient, leading to abandoning its further development (as with concepts C 2 and C 6 of Figure 6, for example), it should no longer show in C-space, as its logical status is now "decidable." Some of the ancestors of such 'false' concepts may also need to be dropped from C-space, depending on the exact outcome of the pertinent evaluation.
C-K theory is, by definition, a model of the design process, and does not contain a strategy for designing. However, modeling PA with C-K theory helps to clarify the former's strategy in several respects. First, PA is clearly a depth-first method, attempting to improve and modify the evolving design as much as possible and minimizing backtracking. It also uses a sort of heuristic "cost function" that guides the process to address the more difficult and critical aspects first. This strategy is very different from, for example, the breadth-first functional analysis and morphology method of systematic design [START_REF] Pahl | Engineering design: a systematic approach[END_REF], where all the functions are treated concurrently.
A second clarification of PA regards its support of innovation. As many solution-driven engineers do, the designers of the decelerator example also began with straightforward, known solutions for vertical descent (parachutes, balloons). This fixation often limits the designer's ability to innovate; however, the PA process demonstrated here allowed recovery from the effect of the initial fixation by learning (through the repeated evaluation of "standard" configurations) during the development process (generating new knowledge in C-K terms) and discovery of a final solution that was not included in the fixation-affected initial set of technologies. Moreover, C-K theory allowed identifying de-partitioning of the concept space as the exact mechanism through which the innovation was achieved.
6 CONCLUSION

C-K theory was shown to be able to model PA's steps, which are fundamental design "moves": generating an idea, implementing an idea in hardware representation, and evaluating a configuration. It also showed that PA supports innovative design by providing a means for recovering from fixation effects. Conversely, PA helped to clarify the structure of C-K's concepts, operators and C-space itself, and to emphasize the importance of K-space expansions.

Many interesting issues still remain for future research: What particular knowledge and capabilities are needed by the designer when deciding what are the most dominant aspects of the problem in TI, and the most critical conceptual-level issues in each PI step? What exactly happens in K-space during PA as related to the structures of knowledge items and their role as drivers of the design process? Are there additional innovation mechanisms in PA that can be explained with C-K theory? Can C-K theory help compare PA to other design methodologies? In addition, we have already begun a separate investigation of the logic of PA as a special case of Branch and Bound algorithms, where design path evaluation is used for controlling the depth-first strategy in a way that ensures efficiency and innovation.
Figure 1. The prescriptive model of parameter analysis consists of repeatedly applying parameter identification (PI), creative synthesis (CS) and evaluation (E)
Parameter: "Produce a large enough drag force using a flexible parachute"CS1Which particular physical configuration would realize the flexible parachute concept?Configuration: A 150-mm dia. hemispherical parachute, connected to the sensor with cords. E1 Given the physical configuration, what is the behavior? Drag force is ok and compact packing can be done by folding, but the parachute may not open and cords may tangle. Shall we try to improve the last configuration or backtrack? Try another technology from the TI stage. PI2 Use the new technology for the decelerator design. Parameter: "Use a rigid parachute to generate drag force" CS2 Which particular physical configuration would realize the rigid parachute concept? Configuration: A 150-mm diagonal square pyramid with the sensor rigidly attached. E2 Given the physical configuration, what is the behavior?
Figure 2 .
2 Figure 2. Description of the reasoning process used to design airborne decelerators
Figure 3. C-K model of parameter identification (PI): (a) applies to the common case encountered during PA and (b) shows starting with a new technology
Figure 4. C-K model of (a) creative synthesis (CS) and (b) evaluation (E). Dark background denotes a new knowledge item
Figure 5. C-K model of the first PI-CS-E cycle of the decelerator design
Figure 6. C-K model of the fourth PI-CS-E cycle, demonstrating a "de-partition"
ACKNOWLEDGMENTS
The first author is grateful to the chair of "Design Theory and Methods for Innovation" at Mines ParisTech for hosting him in February 2012 and 2013 for furthering this work, and for the partial support of this research provided by the ISRAEL SCIENCE FOUNDATION (grant no. 546/12). | 31,677 | [
"1003635",
"1111",
"1099"
] | [
"84142",
"39111",
"39111"
] |
01485098 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2013 | https://minesparis-psl.hal.science/hal-01485098/file/Hatchuel%20Weil%20Le%20Masson%202011%20ontology%20of%20expansion%20v15.pdf | Armand Hatchuel
email: hatchuel@ensmp.fr
Benoit Weil
email: bweil@ensmp.fr
Pascal Le Masson
email: lemasson@ensmp.fr
Towards an ontology of design: lessons from C-K design theory and Forcing 1
Keywords:
In this paper we present new propositions about the ontology of design and a clarification of its position in the general context of rationality and knowledge. We derive such an ontology from a comparison between formal design theories developed in two different scientific fields: Engineering and Set theory. We first build on the evolution of design theories in engineering, where the quest for domain-independence and "generativity" has led to formal approaches, such as C-K theory, that are independent of what has to be designed. Then we interpret Forcing, a technique in Set theory developed for the controlled invention of new sets, as a general design theory. Studying similarities and differences between C-K theory and Forcing, we find a series of common notions like "d-ontologies", "generic expansion", "object revision", "preservation of meaning" and "K-reordering". Altogether they form an "ontology of design" which is consistent with unique aspects of design.
Part 1. Introduction
What is design? Or, in more technical terms, can we clarify as rigorously as possible some of the main features of an ontology of design? In this paper, we develop an approach to such an ontology that became possible thanks to the following developments:
-the elaboration, in the field of engineering, of formal design theories, like C-K theory (Hatchuel and Weil 2003, 2009), which are independent of any engineering domain and avoid too strong restrictions about what is designed.
-the exploration of design theories that could have emerged in other fields from a similar process of abstraction and generalization. In this paper, we introduce Forcing [START_REF] Cohen | The independence of the Continuum Hypothesis[END_REF], a technique and branch of Set theory that generalized extension procedures to the creation of new collections of sets. It presents, from our point of view, specific traits of a design theory with highly general propositions.
These design theories offered unique material for a comparative investigation. The study of their similarities and differences is the core subject of this paper. It will lead us to what can be named an ontology of expansion, which clarifies the nature of design. This ontology is not postulated but revealed by common assumptions and structures underlying these design theories. Therefore, our findings only reach the ontological features consistent with existing formalizations of design. Yet, to our knowledge, such an ontology of expansion, as well as the interpretation of Forcing as a design theory, had not been investigated and formulated in the existing literature.
Before presenting the main hypotheses and the structure of this paper some preliminary material on design theories is worth mentioning.
Formal design theories. In the field of engineering, efforts to elaborate formal (or formalized) design theories have been persistent during the last decades [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF][START_REF] Reich | A Critical Review of General Design Theory[END_REF][START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF]. "Formal" means the attempt to reach rigor, if possible logical and mathematical rigor, both in the formulation of hypotheses and the establishment of findings. It also delineates the limited scope and purpose of these theories. Formal design theories (in the following we will say design theories or design theory) are only one part of the literature about design. They neither encompass all the findings of design research [START_REF] Finger | A Review of Research in Mechanical Engineering Design[END_REF]Cross 1993), nor describe all activities involved in design in professional contexts. For instance, it is well known that design practice is shaped by managerial, social and economic forces that may not be captured by formal design theories. Yet, this does not mean that design theories have no impact. Such forces are influenced by how design is described and organized. Actually, it is well documented that design theories, in engineering and in other fields, have contributed to change dominant design practices in Industry [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF].
Still, the main purpose of design theory is the advancement of design science by capturing the type of reasoning (or model of thought) which is specific to design. As an academic field, design theory has its specific object and cannot be reduced to decision theory, optimization theory or problem-solving theory. Therefore, recent design theories focus on what is called the creative or "generative" [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF]) aspects of design. Indeed, any design engineer (or designer) uses standard techniques to select or optimise existing solutions. However, design theories target the rationale, the models of thought and reasoning that only appear in design. This special attention does not deny the importance of routinized tasks in design, and design theory should, conceptually, account for both creative and routinized aspects of design, even if it does not include all routinized techniques used in design. Likewise, in mathematics, Set theory is able to account for the core assumptions of Arithmetics, Algebra and Analysis, yet it cannot replace these branches of mathematics. Finally, by focusing on creative design, design theory can complement decision theory by helping engineering and social sciences (economics, management, political science…) to better capture the human capacity to intentionally create new things or systems.
Research methodology. For sure, there is no unique way to explore an ontology of design. However, in this paper we explore a research path that takes into account the cumulative advancement of theoretical work in engineering design. The specific material and methodology of this research follows from two assumptions about the potential contribution of design theories to the identification of an ontology of design.
Assumption 1: Provided they reach a high level of abstraction and rigor, design theories model ontological features of design.
Assumption 2: Provided there is a common core of propositions between design theories developed in different fields, this core can be seen as an ontology of design.
An intuitive support for these assumptions and the method they suggest, can be found using an analogy with Physics. If the goal of our research was to find an ontology of "matter" or "time" consistent with contemporary knowledge in Physics, a widely accepted method would be to look in detail to common or divergent assumptions about "matter" or "time" in contemporary theories and physics. And clearly, there is a wide literature about the implications of Special relativity and Quantum mechanics for the elaboration of new ontologies of time and matter. Similarly, our method assumes that design theories have already captured a substantial part of our knowledge about design and may be valid guides for the exploration of an ontology of design.
Outline of the paper. In this section (part 1) we outline the trends towards generality and domain-independence followed by design theories in engineering. They are well illustrated by the specific features of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF]. Then we discuss the existence of design theories in other fields like science and mathematics. We suggest that in Mathematics, Forcing, a method in Set theory developed for the generation of new sets, can be seen as a design theory. In Part 2 and 3 we give an overview of the principles and findings of both C-K theory and Forcing. In part 4, we compare the assumptions and rationale of both theories. In spite of their different contents and contexts, we find that C-K theory and Forcing present common features that unveil an ontology of design that we characterize as an "ontology of expansion".
1.1-Design theories in engineering: recent trends
In engineering, the development of formal design theories can be seen as a quest for more generality, abstraction and rigor. This quest followed a variety of paths and it is out of the scope of this paper to provide a complete account of all theoretical proposals that occurred in the design literature during the last decades. We shall briefly overview design theories which have been substantially discussed in the literature. Still, we will analyse in more detail C-K theory as an example of formal theories developed in engineering that present a high level of generality. Then we will compare it to a theory born in Mathematics.
Brief overview of design theories. In the field of engineering, General Design Theory [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF] and Axiomatic Design (AD) [START_REF] Suh | Principles of Design[END_REF] were among the first to present formalized approaches to design. Both approaches define the design of a new entity as the search for ideal mappings between required functions and selected attributes. The core value of these theories was to model these functions and attributes with mathematical structures that helped to define and warrant specific operations of design. For instance, in GDT, Hausdorff spaces of functions and attributes were assumed. Such mathematical structures offered the possibility to map new desired intersections of functions by the "best" adapted intersection of attributes. In AD, matrix algebras and information theory were introduced. They are used to model types of mappings between attributes (called design parameters) and functions (called functional requirements). These matrices define the design output. Thus, ideal designs can be axiomatically defined as particular matrix structures (AD's first axiom) and associated with the ideal information required from the design user (AD's second axiom). GDT and AD were no longer dependent on any specific engineering domain but still relied on special mathematical structures that aimed to model and warrant "good" mappings.
After GDT and AD, the discussion about design theory followed several directions. Authors introduced a more dynamic and process-based view of design (eg. FBS, [START_REF] Gero | Design prototypes: a knowledge representation schema for design[END_REF])); they insisted on the role of recursive logic [START_REF] Zeng | On the logic of design[END_REF] as well as decomposition and combination aspects in design (Zeng and Gu 1999a, b). For this research, we only need to underline that these discussions triggered the quest for more general mathematical assumptions that could: i) capture both mapping and recursive processes within the same framework; ii) account for the "generative" aspect of design. Two recent theories are good examples of these trends.
The first one, called Coupled Design Process (CDP) [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], kept the distinction between spaces of functions and spaces of attributes (or design parameters) but allowed them to evolve dynamically by introducing general topological structures (closure sets and operators). Thanks to these structures, CDP captured, for instance, the introduction of new functions different from those defined at the beginning of the design process. It also described new forms of interplay between functions and attributes, which could be generated by available databases and not only by some fixed or inherited definitions of the designed objects. Thus CDP extended GDT, and the idea of a satisfactory mapping was replaced by a co-evolution of functions and attributes. Its mathematical assumptions also accounted for the non-linear and non-deterministic aspects of design.
The second design theory is called C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. C-K theory is consistent with the dynamics captured by CDP. However, it is no longer built on the distinction between function and attribute spaces. Instead, it intends to model the special logic that allows a "new object" to appear. This generative aspect is commonly placed at the heart of the dynamics of design. C-K theory models design as the necessary interplay between known and "unknown" (undecidable) propositions. Attributes and functions are seen as tentative constraints used for the stepwise definition of an unknown and desired object. They also play a triggering role in the production of new knowledge. New attributes and new functions are both causes and consequences of changes in the knowledge available. C-K theory explains how previous definitions of objects are revised and new ones can appear, threatening the consistency of past knowledge. Thus the core idea of C-K theory is to link the definition process of a new object to the activation of new knowledge, and conversely.
Authors point out [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF] that these design theories present neither radically different nor contradictory points of view about design. Rather, they can be seen as steps belonging to the same endeavour: developing design theory on more general grounds by seeking domain independence and increased "generativity":
-Domain-independence. Formal design theories aimed to define design reasoning without any object definitions or assumptions coming from specific engineering domains. Thus design theory evolves towards a discipline that could be axiomatically built and improved through empirical field research or theory-driven experiments. A similar evolution has already happened in the fields of decision science or machine learning.
-Increased generativity. Design has often been seen as a sophisticated, ill-structured or messy type of problem solving. This vision was introduced by Herbert Simon, but it needs to be extended by introducing a unique aspect of design: the intention to produce "novel" and creative things (Hatchuel 2002). Authors have recently called "generativity" this intentional search for novelty and surprises, which has driven the development of design theories through more abstract mathematical assumptions [START_REF] Hatchuel | A systematic approach to design theories using generativeness and robustness[END_REF].
This second property of design deserves some additional remarks, because it has crucial theoretical consequences about what can be defined as a design task.
-Intuitively, the formulation of a design task has to maintain some indeterminacy of its goals and some unknown aspects about what has to be designed. If there is already a deterministic and complete definition of the desired object, design is done or is reduced to the implementation of a constructive predefined algorithm. For instance, finding the solution of an equation when the existence of the solution and the solving procedure are already known should not be seen as a design task.
-However, in practice the frontiers of design are fuzzy. For instance, one can generate "novel" objects from random variables: is it design? For sure, a simple lottery will not appear as a design process. However, random fractal figures present to observers complex and surprising forms. Yet, surprises are not enough to characterize design. The literature associates design work with the fact that novelty and surprises are: i) intentionally generated [START_REF] Schön | ) Varieties of Thinking. Essays from Harvard's Philosophy of Education Research Center[END_REF]; or ii) if they appear by accident, they may be used as a resource for the design task (an effect that has been popularised as "serendipity"). Authors already model design practice as a combination of intentionality and indeterminacy [START_REF] Gero | Creativity, emergence and evolution in design: concepts and framework[END_REF][START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF].

Domain independence and the modelling of generative processes are the main drivers of recent design theory in engineering. On both aspects, C-K theory can be seen as a good representative of the present stage of abstraction and generality in the field. It will be described in more detail and compared to Forcing for the main purpose of this paper: an investigation of the ontology of design.
The status of design theory: beyond engineering can we find formal design theories?
As described before, the evolution in engineering has led to design theories that are no more linked to engineering disciplines and domains. This is a crucial observation for our research. If design theory is independent of what is designed, an ontology of design becomes possible. Similarly an ontology of decision became possible when decision theory was no more dependent of its context of application. However, the independence from engineering domains does not prove that design theory has reached a level of generality that is acceptable outside engineering. Hence, our research of an ontology of design would be more solidly grounded if we could also rely on design theories that emerged in other fields than engineering. Yet, are there other fields where general design theories can be found? And do they present common aspects with engineering design theories? A complete answer to such questions would require a research program in philosophy, Art and science that is beyond the scope of one paper. Thus, we focused our inquiry on potential design theories in science and mathematics. We will introduce Forcing in Set theory and explain why, from our point of view, it can be seen as a general design theory.
Design theory in science. In standard scientific research, the generation of new models and discoveries is commonplace. Yet, classically, the generative power of science is based upon a dynamic interaction between theory and experimental work. This view of science has been widely discussed and enriched. At the end of the 19th century, the mathematician Henri Poincaré suggested that the formation/construction of hypotheses is a creative process [START_REF] Poincaré | Science and Hypothesis. science and hypothesis was originally[END_REF]. Since then, it has often been argued that the interaction between theories and experiments follows no deterministic path, or that radically different theories could present a good fit with the same empirical observations. Proposed by Imre Lakatos [START_REF] Worral | Imre Lakatos, the methodology of scientific research programmes[END_REF], the idea of "research programmes" (which can be interpreted in our view as "designed research programs") seemed to better account for the advancement of science than a neutral adjustment of theory to facts. In modern Physics (relativity theory or quantum mechanics) the intentional generation of new theories is an explicit process. These theories 2 are expected to be conceived in order to meet specific requirements: consistency with previously established knowledge, unification of partial theories, mathematical tractability, capacity to be tested experimentally, the prediction of new facts, and so forth. Thus, it is acceptable to say that in classic science new theories are designed. However, to our knowledge, there is no formal design theory that has emerged as a general one in this field. This is a provisional observation and further research is needed. However, this state of information contrasts with what can be found in Mathematics, where the generation of new objects has been modelled.
Design theory in mathematics: the forcing model in Set theory. Following again Henri Poincaré, it is now widely accepted that mathematical objects are created (i.e. designed) to reach increased generality, tractability and novelty. Yet, these views are not enough to offer a formal design theory. For our specific research program, a chapter of Set theory, called Forcing, deserves special attention because it builds a highly general process for the design of new objects (set models) within the field of Set theory. Forcing generates new sets that verify the axioms of set theory (i.e. new "models" of Set theory). It is also a theory that proves why such a technique has general properties and applications. Forcing played a major role in the solution of famous mathematical problems of the 20th century. For instance, we will see how Forcing has been used to generate new real numbers that changed existing ideas about the cardinality (i.e. the "size") of uncountable infinite sets. For sure, Forcing is embedded in the mathematical world of Set theory. However, the level of abstraction of Set theory is such that the following hypothesis can be made and will be justified after a more detailed presentation of Forcing:
Hypothesis: Due to the abstraction of Set theory, Forcing can be seen as a general design theory
A comparative approach between design theories. Towards a clarification of the ontology of design
If Forcing can be seen as a general design theory, then different design theories could have reached, independently, a high level of abstraction. And if these theories present a common core of propositions, this core would be a good description of what design is essentially about and what makes design reasoning possible. Thus our research method was not to postulate an ontology of design and discuss its validity but to infer it from the comparison of design theories coming from different scientific contexts. In this paper, we focus our comparison on i) C-K theory as a representative of design theories in engineering design; and ii) Forcing as a general design theory in Set theory. This comparison was structured by the following questions:
Q1: What are the similarities and differences between C-K theory and Forcing?
Q2: What are the common propositions between such theories? What does this "common core" tell us about the ontology of design?
In spite of their different backgrounds, we found consistent correspondences between both theories. As expected, they offer new ground for the clarification of an ontology of design. However, such comparison presents limitations that have to be acknowledged. Forcing is a mathematical theory well established in the field of Set theory. The scope of C-K theory is broader. It aims to capture the design of artefacts and systems including physical and symbolic components. Its mathematical formalization is still an open research issue. Therefore, in this paper, we only seek for insights revealed by the comparison of their structural assumptions and operations when they are interpreted as two models of design. Our claim is that a more rigorous discussion about the ontology of design can benefit from such comparative examination of the structure of design theories.
Part 2. C-K theory: modelling design as a dual expansion
C-K theory has been introduced by Hatchuel and Weil [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. C-K theory attempts to describe the operations needed to generate new objects presenting desired properties. The conversation about C-K theory in the literature treats both its theoretical implications and potential developments (Kazakçi and Tsoukias 2005; Salustri 2005; Reich et al. 2010; Shai et al. 2009; Dym et al. 2005 ; Hendriks and Kazakçi 2010; Sharif Ullah et al. 2011) 3 . In this section we will present the main principles of C-K theory. They are sufficient to study its correspondences with Forcing (more detailed accounts and discussions can be found in [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF][START_REF] Hendriks | A formal account of the dual extension of knowledge and concept in C-K design theory[END_REF][START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF])).
C-K theory: notions and operators.
Intuitive motivation of C-K theory: what is a design task? C-K theory focuses on a puzzling aspect of design [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]: the theoretical and practical difficulties of defining the departure point of a design task. In professional contexts such departure points are called "specifications", "programs" or "briefs". But in all cases, their definition is problematic: they have to indicate some desired properties of an object without being able to give a constructive definition of this object, or without being able to warrant its existence by pre-existing knowledge. This explains why a design task cannot be fully captured by the mere task of a mapping between attributes and functions. Design only appears when such mapping is driven by an equivocal, incomplete, fuzzy or paradoxical formulation. Thus, to better approach design, we need to model a type of reasoning that begins with a proposition that speaks of an object which is desirable, yet partially unknown, and whose construction is undecided with the available knowledge. But this intuitive interpretation leads to difficult modelling issues. How can we reason on objects (or collections of objects) whose existence is undecidable? Moreover, because the desired objects are partially unknown, their design will require the introduction of new objects or propositions that were unknown at the beginning of the process. The aim of C-K theory was to give a formal account of these intuitive observations and their consequences.
Concept and knowledge spaces. The name "C-K theory" mirrors the assumption that design can be modelled as the interplay between two interdependent spaces having different structures and logics: the space of concepts (C) and the space of knowledge (K). "Space" means here collections of propositions that have different logical status and relations. The structures of these two spaces determine the core propositions of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF]. Space K contains established (true) propositions or propositions with a clear logical status. Space C is the space where the progressive construction of desired objects is attempted. In this space, we find propositions about objects whose existence is undecided by the propositions available in K: these propositions of space C are called "concepts" in C-K theory. Examples of concepts are propositions like "there exists a flying boat" or "there exists a smarter way to learn tennis".

Design begins when one first concept C0 is used as a trigger of a design process. Design is then described as the special transformation of C0 into other concepts until it becomes possible to reject their undecidability by a proof of existence or non-existence in the K-space available at the moment of the proof (the propositions become decidable in the new K-space). The crucial point here is that in space C, the desired unknown objects (or collections of these objects) can only be characterized comprehensionally (i.e. by their properties) and not extensionally. If a true extensional definition of these objects existed in K or was directly deducible from existing K (i.e. there is a true constructive proof of their existence in K), then the design task would already have been done. Now, when a new object is designed, its existence becomes true in space K, the space of known objects and propositions with a decided logical status, i.e. its concept becomes a proposition of K. To summarize:
-Space K contains all established (true) propositions (the available knowledge).
-Space C contains "concepts", which are propositions undecided by K (neither true nor false in K) about some desired and partially unknown objects x.

3 There is also documented material on its practical applications in several industrial contexts [START_REF] Elmquist | Towards a new logic for Front End Management: from drug discovery to drug design in pharmaceutical R&D[END_REF][START_REF] Mahmoud-Jouini | Managing Creativity Process in Innovation Driven Competition[END_REF][START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF](Hatchuel et al. 2004)[START_REF] Hatchuel | The design of science based-products: an interpretation and modelling with C-K theory[END_REF][START_REF] Gillier | Managing innovation fields in a cross-industry exploratory partnership with C-K design theory[END_REF](Elmquist and Le Masson 2009).
It follows from these principles that the structure of C is constrained by the special constructive logic of objects whose existence is undecided. The structure of K is a free parameter of the theory. This corresponds to the observation that design can use all types of knowledge. K can be modelled with simple graph structures, rigid taxonomies, flexible "object" structures, special topologies [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF] or Hilbert spaces if there are stochastic propositions in K. What counts from the point of view of C-K theory is that the structure of K allows distinguishing between decided and undecidable propositions. Indeed, the K space of an engineer is different from an industrial designer's one: the latter may include perceptions, emotions, theories about color and form, and this will directly impact the objects they will design; but, basically, from the point of view of design theory their model of reasoning can be the same.
Concepts, as defined in C-K theory, attempt to capture the ambiguity and equivocality of "briefs" and "specifications". Therefore, concepts are propositions of the form: "There exists a (non-empty) class of objects x, for which a group of properties $p_1, p_2, \ldots, p_k$ holds in K" 4. Because concepts are assumed to be undecidable propositions in K, the collection of objects x that they "capture" has unusual structures. This is a crucial point of C-K theory that can be illustrated with an example E of a design task:
Example E: let us consider the design task E of "new tyres (for ordinary cars) without rubber". The proposition "there exists a (non-empty) class of tyres for ordinary cars without rubber" is a concept, as it can be assumed to be undecidable within our present knowledge. For sure, existing tyres for ordinary cars are all made with rubber and there are no existing, or immediately constructible, tyres without rubber. Moreover, we know no established and invariant truth that forbids the existence of such new objects, which we call "no-rubber tyres" (example E will be used as an illustration in all sections of this paper). C-K theory highlights the fact that the design task E (and any design task) creates the necessity to reason consistently on "no-rubber tyres", whose existence is undecidable in K. These objects form a class that corresponds to a formula that is undecidable in K.
At this stage, the mathematical formulation of C-K theory is still a research issue, and a key aspect of this discussion is the interpretation and formalization of the unknown and undecidable aspects of a "concept" 5. However, turning undecided concepts into defined and constructible things is what a design task requires, and it is this process that is tentatively described by C-K theory. Necessarily, these operations are "expansions" in both K and C:
-in K, we can attempt to "expand" the available knowledge (intuitively, this means learning and experimenting) if we want to reach a decidable definition of the initial concept;
-in C, we can attempt to add new properties to the first concept in order to reach decidability. This operation, which we call a partition, is also an expansion of the definition of the designed object (see below). If I say that I want to design a boat that can fly, I can logically expect that I have to add some properties to the usual definition of boats.
4 It can also be formulated as: "The class of objects x, for which a group of properties $p_1, p_2, \ldots, p_k$ holds in K, is non-empty".
5 The literature about C-K theory discusses two ways to treat this issue:
-the class of "non rubber tyres for ordinary cars" can be seen as a special kind of set, called C-set, for which the existence of elements is K-undecidable [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. This is the core idea of the theory and the most challenging aspect of its modelling. Clearly assuming "elements" of this C-set will be contradictory with the status of the concept, or we would have to speak of elements without any possibility to define them or to construct them. This is in contradiction with the classic elementarist approach of sets (see Jech, Dehornoy). It means that the propositions "Cset is empty" or "a C-set is non-empty" is K-undecided and only after design is done we will be able to decide this question. Technically, Hatchuel and Weil suggest that C-Sets could be axiomatized within ZF if we reject the axiom of choice and the axiom of regularity, as these axioms assume necessarily the existence of elements. More generally, in space C, where the new object is designed, the membership relation of Set theory has a meaning only when the existence of elements is proved.
-Hendriks and Kazakci [START_REF] Hendriks | Design as Imagining Future Knowledge, a Formal Account[END_REF] have studied an alternative formulation of C-K theory only based on first order logic. They make no reference to C-sets and they reach similar findings about the structure of Design reasoning.
The core proposition of C-K theory is that design appears when both expansions interact. And C-K theory studies the special structure and consequences of such interplay.
-The design process: partitions and C-K operators. As a consequence of the assumptions of C-K theory, design can only proceed by a step-by-step "partitioning" of the initial concept or its corresponding class. Due to the undecidability of concepts and associated classes, "partitions" of a concept cannot be a complete family of disjoint propositions. In the language of C-K theory, partitions are one or several new classes obtained by adding properties (coming from K) to the existing concepts. If $C_k$ is the concept "there exists a non-empty class of objects which verify the properties $p_0, p_1, p_2, \ldots$ and $p_k$", a partition will add a new property $p_{k+1}$ to obtain a new concept $C_{k+1}$. Such partitions create a partial order where $C_{k+1} > C_k$. However, in space C the class associated with $C_{k+1}$ is not included in the class associated with $C_k$, as no extensional meaning holds in space C. There is no warranted existence of any element of a class associated with a concept. These additions form a "nested" collection of concepts. Beginning with concept $C_0$, this partitioning operation may be repeated whenever there is an available partitioning property in K, and until the definition of an object is warranted in K.
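As an illustration only (the representation below is a toy we introduce, not the formal definition of partitions), the step-by-step partitioning can be sketched as follows, using example E's tyre concept and some hypothetical candidate properties from K:

```python
# Illustrative sketch of step-by-step partitioning (assumed toy representation):
# a concept is the conjunction of its accumulated properties; a partition adds
# one property drawn from K and yields a finer, nested concept.

class Concept:
    def __init__(self, properties, parent=None):
        self.properties = tuple(properties)
        self.parent = parent
        self.children = []

    def partition(self, new_property):
        """Refine the concept by adding a property from K (C_k -> C_{k+1})."""
        child = Concept(self.properties + (new_property,), parent=self)
        self.children.append(child)
        return child

# Example E: initial concept C0 and partitions by materials assumed available in K.
c0 = Concept(("tyre for ordinary cars", "without rubber"))
materials_in_K = ["with plastics", "with metal alloys", "with ceramics"]
partitions = [c0.partition(m) for m in materials_in_K]

for c in partitions:
    print(" AND ".join(c.properties))
# Each child is a finer (nested) definition; whether any of them is realizable
# remains undecided until K is expanded by tests, prototypes or new materials.
```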
Having in mind the interplay between C and K, this partitioning process has specific and unique [START_REF] Ullah | On some unique features of Ck theory of design[END_REF] features.
-Each new partition of a concept has an unknown status that has to be "tested" in K. "Testing" means activating new knowledge that may check the status of the new partition (mock-ups, prototypes, experimental plans are usual knowledge expansions related to a group of partitions).
-Testing a partition has two potential outputs: i) the new partition is true or false, thus forms an expansion in K, or is still undecidable and forms an expansion in C; ii) testing may also expand existing knowledge in a way which is not related to the status of the tested partition (surprises, discoveries, serendipity…). Such new knowledge can be used to generate new partitions and so forth... Finally, "testing" the partition of a concept always expands C or expands K by generating new truths. Hence the more we generate unknown objects in C, the more we may increase the expansion of K.
Example E: Assume that the concept of "non-rubber tyres" is partitioned by the type of material that replaces rubber. This depends on the knowledge we have in K about materials: for instance, plastics, metal alloys and ceramics. Thus we have three possible partitions: "non-rubber tyres with plastics", "non-rubber tyres with metal alloys" and "non-rubber tyres with ceramics". These partitions may create new objects. And testing these partitions may lead to new knowledge in K, for instance new types of plastics, or new materials that are neither plastics, metal alloys nor ceramics!
By combining all the assumptions and operations described in C-K theory, the following propositions hold (Hatchuel and Weil 2003, 2009; Hendriks and Kazakci 2010):
-Space C necessarily has a tree structure that follows the partitions of $C_0$ (see Fig 1).
-A design solution is the concept $C_k$ that is called the first conjunction, i.e. the first concept to become a true proposition in K. It can also be defined by the series of partitioning properties $(p_1, p_2, \ldots, p_k)$ that forms the design path going from the initial concept $C_0$ to $C_k$. When $C_k$ becomes true in K (a design is reached), the class associated with the series of concepts $(C_0, C_1, C_2, \ldots, C_k)$ verifies the property: $\forall i,\ i = 0, \ldots, k-1,\ C_k \subset C_i$ 6, i.e. it becomes possible to use the inclusion relationship, since the existence of the elements of $C_k$ is true in K and these elements are also included in all concepts that are "smaller" than $C_k$.
-The other classes resulting from partitions of $C_0$ are concept expansions that do not form a proposition belonging to K.
-All operations described in C-K theory are obtained through four types of operators within each space and between spaces: C-C, C-K, K-K, and K-C. The combination of these four operators is assumed to capture the specific features of design, including creative processes and seemingly "chaotic" evolutions of a real design work [START_REF] Hatchuel | The design of science based-products: an interpretation and modelling with C-K theory[END_REF]. From the point of view of C-K theory, standard models of thought and rationality do not model concepts and can be interpreted as K-K operators.
C-K theory: findings and issues
For sure, neither C-K theory nor any other design theory will warrant the existence of "tyres without rubber". Design theories, like C-K theory, only model the reasoning and operations of the design process and capture some of its "odd" aspects. Thus C-K theory introduces the notion of "expanding partition" which captures a wide range of creative mechanisms.
Expanding partition: generating new objects through chimeras and "crazy" concepts. In C-K theory, it is crucial to distinguish between two types of partitions in space C: expanding and restricting ones. To do so we need to introduce some additional structure in K: the definition of known objects. In example E, the attribute "made with rubber" is assumed to be a common attribute of all known tyres in K. Therefore, the partition "without rubber" is not a known property of the class of objects "tyres". This partition is called an expanding partition, as it attempts to expand the definition of tyres by creating new tyres, which are different from existing ones. Suppose that the concept is now "a cheaper tyre" and the first partition is "a cheaper tyre using rubber coloured in white": if "tyres with white rubber" are known in K, this is called a restricting partition. Restricting partitions only act as selectors among existing objects in K, while expanding partitions have two important roles:
-they revise the definition of objects and potentially create new ones; they are a vehicle for intentional novelty and surprise in design;
-they guide the expansion of knowledge in new directions that cannot be deduced from existing knowledge.
The generative power captured by C-K theory comes from the combination of these two effects of expanding partitions. Revising the definition of objects allows new potential objects to emerge (at least as concepts). But this is not enough to warrant their existence in K. Expanding partitions also foster the exploration of new knowledge, which may help to establish the existence of new objects. Thus, expanding partitions capture what is usually called imagination, inspiration, analogies or metaphors. These are well-known ingredients of creativity. However, their impact on design was not easy to assess and seemed rather irrational. C-K theory models these mechanisms as expanding partitions through the old and simple technique of chimera forming 7: partially defining a new object by unexpected attributes (this definition can be seen as crazy or monstrous regarding existing knowledge). Yet this is only one part of the mechanism. C-K theory unveils two distinct effects of these chimeras: they allow for new definitions of things and they guide the expansion of new knowledge. By disentangling these two roles and the value of their interplay and superposition, C-K theory explains the rationality of chimeras and seemingly "crazy" concepts in design: they force the designer to explore new sources of knowledge which could, surprisingly, generate new objects different from the "crazy concepts". It is worth mentioning that this is not classic trial-and-error reasoning. Trials are not only selected among a list of predefined possibilities; they are regenerated through C and K expansions. The acquired knowledge is not only due to errors but also comes from unexpected explorations. Finally, most potential trials stay at the stage of chimeras, and yet have generated new knowledge.
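In practice, whether a partition is expanding or restricting reduces to a simple check against the known definitions in K. The sketch below illustrates this rule on invented toy data (the attribute sets are hypothetical):

```python
# Illustrative rule (toy data): a partition is *restricting* if the added attribute
# already appears among the known attributes of existing objects of the class in K,
# and *expanding* if no known object of the class carries it.

known_tyres_in_K = [
    {"rubber", "black"},
    {"rubber", "white"},   # hypothetical: white-rubber tyres are assumed known in K
]

def partition_kind(added_attribute, known_objects):
    known = any(added_attribute in obj for obj in known_objects)
    return "restricting" if known else "expanding"

print(partition_kind("white", known_tyres_in_K))           # restricting: selects among known tyres
print(partition_kind("without rubber", known_tyres_in_K))  # expanding: revises the definition of tyres
```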
Example E: the concept of "non-rubber tyres using plastics" may appear as a chimera and rather "crazy" if known plastics do not fit with usual tyre requirements. But, from another point of view, it may trigger the investigation of plastics offering better resistance. Again, these new plastics may not fit. The same process could happen with ceramics and metal alloys and still only reach undecidable concepts. Meanwhile space K would have been largely expanded: new alloys, new plastics, new ceramics and more. Then, and only then, new partitions can appear in space C, for instance introducing new structures combining multiple layers of different materials and new shapes… The whole logic of space C will change: the first partitions will no longer be on types of materials but on new structural forms that were not known at the beginning of the design process.
An important issue: introducing new objects and the preservation of meaning.
Actually, expanding partitions also raise important issues. If, in example E, design succeeds, then "tyres without rubber" will exist in K. Now, if in K the definition of a tyre was "a special wheel made with rubber", such a definition is no longer consistent with the newly designed object and has to be changed. The design of "tyres without rubber" renders the old definition of tyres obsolete. Yet, revising the definition of tyres may impact other definitions, like the definition of wheels, and so on. Thus, the revision of definitions has to be done without inconsistencies between all old and new objects in K. Clearly, any design should include a rigorous reordering of names and definitions in K in order to preserve the meaning of old and new things. Otherwise definitions will become less consistent and the whole space K will be endangered. Finally, design theory underlines a hidden, yet necessary, impact of design: the perturbation of names and definitions. It warns about the necessity to reorganize knowledge in order to preserve meaning in K, i.e. the consistency of definitions in K.
What is the generality of the principles and issues raised by C-K theory? Are there implicit assumptions about design that limit the generality of the theory? We now explore existing similarities and differences between C-K theory and a general technique of modern set theory called "forcing". This comparison will guide us towards an ontology of "expansion" as a core ontological feature of design.
Part 3. Design inside Set theory: the Forcing method.
Can we find design theory or methods in mathematics? If a crucial feature of design is the intentional generation of new objects, several design approaches can be found. A branch of mathematics called intuitionism even perceives the mathematician as a "creative subject" [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. Within more traditional mathematics there is a wide variety of "extensions" which can be interpreted as design techniques. "Extension" means transforming some existing mathematical entity M (a group, a ring, a field,…) in order to generate a larger one N, related to but different from M, that contains new entities which verify new selected properties.
The design of complex numbers. Extension procedures are usually dependent on the specific mathematical structure that is extended. Classic maths for engineering includes an example of such ad hoc extensions: the generation of complex numbers from real ones. The procedure shows clear features of a design process. Real numbers have no "negative squares"; yet, we can generate new entities called "complex numbers" that are "designed" to verify such a strange property. The method uses special properties of the division of polynomials. Let us divide any polynomial by a polynomial which has no real root (for instance $X^2+1$); we generate equivalence classes built on the remainder of the division. The equivalence classes obtained by the polynomial division by $X^2+1$ are all of the form $aX+b$ where $(a, b) \in \mathbb{R}^2$. These equivalence classes have a field structure (i.e. with an addition, a multiplication…) and this field contains the field of real numbers (all the equivalence classes where $a = 0$). In this field the polynomial $X^2+1$ belongs to the equivalence class of 0, i.e. $X^2+1 \equiv 0$. Hence the classes can be renamed with the name $ai+b$, where $i$ verifies $i^2+1 = 0$, i.e. $i$ can be considered as the complex (or imaginary) root of $x^2+1=0$. Just like the equivalence classes to which they correspond, these $ai+b$ entities form a field (with an addition, a multiplication…), i.e. a new set of designed numbers which have the standard properties of reals plus new ones. It is worth mentioning that with the design of complex numbers the definition of "number", like the definition of "tyre" in example E, had to be revised. And the most striking revision is that the new imaginary number $i$ is not a real number, yet all the powers of $i^2$ are real! Clearly, this extension method is dependent on the specific algebra of the ground structure, i.e. the field of real numbers. Therefore, if an extension method acts on general and abstract structures, then it could be interpreted as a general design theory. This is precisely the case of Forcing, discovered by Paul Cohen in 1963 [START_REF] Cohen | The independence of the Continuum Hypothesis[END_REF][START_REF] Cohen | The independence of the Continuum Hypothesis II[END_REF][START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cross N (1993) Science and design methodology: A review[END_REF] 8. It generalizes the extension logic to arbitrary sets and allows the generation of new collections of sets. We first present the principles of Forcing to support the idea that Forcing is a design theory; then, we study its correspondence with C-K theory.
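A minimal computational sketch of this construction (our illustration; it assumes nothing beyond the quotient rule $X^2 \equiv -1$ stated above) represents the equivalence class of $aX+b$ as the pair (a, b) and checks that the class of X behaves as the imaginary unit:

```python
# Sketch of arithmetic in R[X]/(X^2+1): the class of a*X + b is stored as (a, b).
# Multiplication reduces modulo X^2 + 1, i.e. X^2 is replaced by -1.

def add(u, v):
    (a, b), (c, d) = u, v
    return (a + c, b + d)

def mul(u, v):
    # (aX+b)(cX+d) = ac*X^2 + (ad+bc)*X + bd  ==  (ad+bc)*X + (bd - ac)  mod X^2+1
    (a, b), (c, d) = u, v
    return (a * d + b * c, b * d - a * c)

i = (1.0, 0.0)        # the class of X, renamed "i"
one = (0.0, 1.0)      # the class of the constant polynomial 1

print(mul(i, i))               # (0.0, -1.0): i^2 is the real number -1
print(add(mul(i, i), one))     # (0.0, 0.0): i satisfies X^2 + 1 = 0
print(mul((2.0, 3.0), (1.0, -1.0)))  # (1.0, -5.0): (2i+3)(i-1) = i - 5
```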
Forcing: designing new collections of sets
Forcing has been described by historians of Set theory as « a remarkably general and flexible method with strong intuitive underpinnings for extending models of set theory » [START_REF] Kanamori | The Mathematical Development of Set Theory from Cantor to Cohen[END_REF]. Let us recall what "models of set theory" are before describing the Forcing operations.
-Models of Set theory. Set theory is built on a short list of axioms called the Zermelo-Fraenkel axiomatic (ZF) [START_REF] Jech | Set Theory[END_REF] 9. They define the equality, union, separation and well formation of sets. They also postulate the existence of some special sets. A model of set theory is any collection of sets that verifies ZF; it is also called a model of ZF. In the engineering world, the conventional definition of a thing or a class of things (for instance, the definition of tyres) plays the role of a "model of tyres", even if real-life conventions are less demanding than mathematical ones; thus, a model of tyres is a collection of sets of tyres that verify the usual definition of tyres 10. In the industrial world, thanks to technical standards, most engineering objects are defined through models (for example, machine elements).
8 He has been awarded a Fields medal for this work.
9 Properly speaking, ZF has infinitely many axioms: its axiomatization consists of six axioms and two axiom schemas (comprehension and replacement), which are infinite collections of axioms of similar form. We thank an anonymous reviewer for this remark.
-Why Forcing? Independent and undecidable propositions in Set theory. After the elaboration of ZF, set theorists faced propositions (P), like the "axiom of choice" and the "continuum hypothesis" 11, that seemed difficult to prove or reject within ZF. This difficulty could mean that these propositions were independent of the axioms of ZF, hence undecidable within ZF, so that models of ZF could either verify or refute these propositions. Now, proving the existence of a model of ZF that does not verify the axiom of choice is the same type of issue as proving that there is a model of tyres with no rubber. One possible proof in both cases is to design such a model! Actually, designing new models of ZF is not straightforward, and this is where Forcing, the general method invented by Paul Cohen, comes in.
The forcing method: ground models, generic filters and extensions
Forcing assumes the existence of a first model M of ZF, called the ground model, and then offers a constructive procedure for a new model N, called the extension model, different from M, which refutes or verifies P and yet is a model of ZF. In other words, Forcing generates new collections of sets (i.e. models) and preserves ZF. Hence, it creates new sets but preserves what can be interpreted as their meaning, i.e. the basic rules of sets. Forcing is not part of the basic knowledge for engineering science and is only taught in advanced Set theory courses. Therefore, a complete presentation of Forcing is beyond the scope of this paper; we will avoid unnecessary mathematical details and focus on the most insightful aspects of Forcing 12 needed to establish the findings of this paper. Moreover, it is precisely because Forcing is a very general technique that one can understand its five main elements and its logic without a complete background in advanced Set theory.
-The first element of Forcing is a ground model M: a well formed collection of sets, a model of ZF.
-The second element is the set of forcing conditions that will act on M. To build new sets from M, we have to extract elements according to some conditions that can be defined in M. Let us call (Q, <) a set of candidate conditions Q together with a partial order relation < on Q. This partially ordered set (Q, <) is completely defined in M. From Q, we can extract conditions that form series of compatible and increasingly refined conditions $(q_0, q_1, q_2, \ldots, q_i)$ with, for any $i$, $q_i < q_{i-1}$; this means that each condition refines the preceding one. The result of each condition is a subset of M. Hence the series $(q_i)$ builds a series of nested sets, each one being included in the preceding set of the series. Such a series of conditions generates a filter 13 F on Q. A filter can be interpreted as a step-by-step definition of some object or some set of objects, where each step refines the preceding definition by adding new conditions.
-The third element of Forcing is the dense subsets of (Q, <): a dense subset D of Q is a set of conditions so that any condition in Q can be refined by at least one condition belonging to this dense subset. One property of dense subsets is that they contain very long (almost "complete") definitions of things (or sets) on M, since every condition in Q, whatever its "length", can always be refined by a condition in D.
-The fourth element of Forcing (its core idea!) is the formation of a generic filter G which, step by step, completely defines a new set that is not in M! Now how is it possible to jump out of the box M? Forcing uses a very general technique: it creates an object that has a property that no other object of M can have! (Remark: this is similar to an expanding partition in the language of C-K theory.) Technically, a generic filter is defined as a filter that intersects all dense subsets. In general this generic filter defines a new set that is not in M 14 but is still defined by conditions from Q, defined on M. Thus, G builds a new object that is necessarily different from all objects defined in M. We can interpret G as a collector of all the information available in M used to create something new that is not in M.
-The fifth element of Forcing is the construction method of the extended model N. The new set G is used as the foundation stone for the generation of new sets, by systematically combining G with other sets of M (this collection is usually called M(G)). The union of M and M(G) is the extension model N. (Fig 2 illustrates how G is built with elements of M, yet G is not in M; then N is built with combinations of G and M.) A crucial aspect of Forcing is the necessity to organize carefully the naming of the sets of M when they are embedded in the extension model N. Thus, elements of M have two names, the old one and the new one. The generic set G, the newly designed object, has one unique name, as it was not present in M.
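The second to fourth elements (conditions, order, filters, dense subsets) can be made concrete on the simple poset used in Cohen's example below: finite 0-1 sequences ordered by extension. The sketch below is only a finite illustration of the definitions — an actual generic filter requires infinitely many refinement steps and cannot be exhibited on finite data:

```python
from itertools import product

# Finite illustration (toy truncation) of the forcing vocabulary: conditions are
# finite 0-1 sequences; q refines p when q extends p. This does NOT build a true
# generic filter, which needs the full infinite poset.

def refines(q, p):
    """q refines p (is a stronger condition) iff q extends p as a sequence."""
    return len(q) >= len(p) and q[:len(p)] == p

def is_filter(F, Q):
    """Footnote 13: non-empty; nested (if q in F refines p in Q, then p in F);
    any two members of F have a common refinement inside F."""
    if not F:
        return False
    nested = all(p in F for q in F for p in Q if refines(q, p))
    directed = all(any(refines(s, p) and refines(s, q) for s in F)
                   for p in F for q in F)
    return nested and directed

def is_dense(D, Q):
    """Every condition of Q admits a refinement belonging to D."""
    return all(any(refines(d, q) for d in D) for q in Q)

Q = [tuple(bits) for n in range(4) for bits in product((0, 1), repeat=n)]
F = [(), (1,), (1, 0), (1, 0, 1)]      # a chain of successive refinements: a filter
D = [q for q in Q if len(q) == 3]      # the "longest" conditions: dense in this truncation

print(is_filter(F, Q), is_dense(D, Q))  # True True
```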
The main Forcing theorems. Paul Cohen invented Forcing and proved a series of theorems that highlighted the generality of the design process. The main results can be synthesized as follows:
-Forcing preserves ZF: Whenever a generic filter G exists, the new model N is a model of ZF. Hence, ZF is preserved and the new sets are not meaningless objects.
-Forcing controls all properties of N: all properties of the elements of N are strictly dependent on the conditions $(q_0, \ldots, q_i)$ that formed the generic filter. This means that any true proposition T in N is such that there exists some $q_i$ in G that forces it: $q_i \Vdash T$. Hence, the appropriate generic filter G warrants the existence of new models of sets with desired properties. The impact of Forcing on Set theory has been paramount, and at the same time historians of mathematics acknowledge its surprising power: « Set theory had undergone a sea-change and beyond how the subject was enriched it is difficult to convey the strangeness of it » ([START_REF] Kanamori | The Mathematical Development of Set Theory from Cantor to Cohen[END_REF]).
14 G is not in M as soon as Q follows the splitting condition: for every condition p, there are two conditions q and q′ that refine p but are incompatible (there is no condition that refines both q and q′). Demonstration (see [START_REF] Jech | Set Theory[END_REF], exercise 14.6, p. 223): suppose that G is in M and consider D = Q \ G. For any p in Q, the splitting condition implies that there are q and q′ that refine p and are incompatible; so one of the two is not in G, hence is in D. Hence any condition of Q is refined by an element of D. Hence D is dense. So G is not generic.
3.3. An example of Forcing: the generation of new real numbers
To illustrate Forcing we give a simple application due to Cohen [START_REF] Jech | Set Theory[END_REF]: the forcing of real numbers from integers (see Fig 3). Ground model: the sets of integers (the power set built on the set of integers $\mathbb{N}$). Forcing conditions Q: the conditions can be written as (0,1)-functions defined on finite subsets of $\mathbb{N}$: take a finite series of ordered integers (1, 2, 3, 4, …, k) and assign to each integer a value 0 or 1; we obtain a k-list, e.g. (0, 1, 1, 1, …, 0). The condition is defined over the first k integers and, among these, it extracts some integers (those with value 1) and leaves the others (value 0). It also describes the set of all numbers beginning with this sequence of selected integers, which can be assimilated to the reals written in base 2 whose first k binary digits coincide with it. Then, let us build a more refined condition by keeping this first list and assigning to k+1 a value 0 or 1, without changing the values of the preceding k-list. We obtain a new condition of length k+1 that refines the first one. The operation can be repeated indefinitely. This extension defines the order relation on the conditions Q. Note that (Q, <) follows the splitting condition: for any condition p = (q(0), q(1), …, q(k)), there are always two conditions that refine p and are incompatible: (q(0), q(1), …, q(k), 0) and (q(0), q(1), …, q(k), 1). A series of ordered conditions from length 1 to length k forms a filter; all sets of conditions that contain a refinement of every condition are dense subsets. Generic filter: it is formed with the infinite series of conditions that intersects all dense subsets. Hence, the generic filter G builds an infinite list of selected integers, and G is not in M. This follows directly from the splitting condition (see footnote 14), or it can also be demonstrated as follows: for any 0-1 function g in M, $D_g = \{q \in Q : q \not\subset g\}$ is dense, so it meets G, so that G is different from any g. Hence G forms a new real number (this is the demonstration given in [START_REF] Jech | Set Theory[END_REF]). Note that any real number written in base 2 corresponds to a function g.
Hence G forms a real number that is different from any real number of the ground model written in base 2 15, 16.
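The two steps that make this argument work — the splitting condition and the density of $D_g$ — can be checked mechanically on finite conditions (again a toy illustration of Cohen's diagonal reasoning, not a construction of G itself, which is infinite):

```python
# Toy illustration of the two steps behind Cohen's argument, with conditions as
# finite 0-1 sequences (refines is redefined here so the snippet is self-contained).

def refines(q, p):
    return len(q) >= len(p) and q[:len(p)] == p

def split(p):
    """Splitting condition: two incompatible one-bit refinements of p."""
    return p + (0,), p + (1,)

def escape_step(p, g):
    """Density of D_g = {q : q is not an initial segment of g}: extend p by one
    bit chosen to disagree with the 0-1 function g at position len(p)."""
    return p + (1 - g(len(p)),)

g = lambda n: n % 2          # a ground-model "real" written in base 2: 0,1,0,1,...

p = (0, 1, 0)                # a condition agreeing with g so far
q0, q1 = split(p)
print(refines(q0, p), refines(q1, p), refines(q0, q1))  # True True False: incompatible refinements

q = escape_step(p, g)        # a refinement of p that already disagrees with g
print(q, q[len(p)] != g(len(p)))                        # (0, 1, 0, 0) True
# Since every condition can be refined into D_g, D_g is dense; a generic filter
# meets it, so the real coded by G differs from g -- and from every ground-model real.
```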
Part 4: C-K theory and Forcing: a correspondence that uncovers an ontology of expansion
Now that we have presented C-K theory and Forcing we can come back to our hypotheses and research questions.
4.1-Forcing as a general design theory
The previous brief introduction to Forcing brings enough material to discuss our claim that Forcing is a general design theory, not an ad hoc technique.
-Design task: like any design project Forcing needs targeted properties for the new sets to be generated. However, Forcing gives no recipe to find generic filters for all desired properties about sets. It only explains how such generation is conceivable without creating nonsense in the world of Sets.
-Generality: Forcing uses only universal techniques like the definition of a new "thing" through a series of special refinements. Indeed, the basic assumptions of Forcing are the axioms of Set theory and the existence of ground models of sets. However, Set theory is one of the most universal languages that are available.
-Generativity: Novelty is obtained by a general method called generic filter which is independent of the targeted sets. The generic filter builds a new set that is different from any set that would have been built by a classic combination of existing conditions within M. Thanks to this procedure, the generic filter is different from any such combination. Thus genericity creates new things by stepping out the combinatorial process within M.
These three observations support the idea that Forcing can be interpreted as a general design theory. Indeed, the word "design" is not part of the Forcing language, and it is the notion of "extension" that is used in Forcing and other branches of mathematics. But it is precisely the aim of a design science to unify distinct procedures that appear in different fields under different names, if they present an equivalent structure. Such unification is easier when we can compare abstract and general procedures. And Forcing shows that, like design theories in engineering, extensions in mathematics have evolved towards more general models 17. Let us now come back to our comparison and establish similarities and differences between both theories, and why these reveal specific ontological elements of design.
4.2-C-K theory and Forcing: similarities and differences
At first glance, both theories present a protocol that generates new things that were not part of the existing background. Yet, similarities and differences between these approaches will lead us to highlight common features which may be explicit in both approaches, or implicit in one and explicit in the other. As main common aspects we find: knowledge expandability, knowledge "voids", and generic expansions. Together they form a basic substrate that makes design possible and unique.
a) Knowledge expandability, invariant and designed ontologies (d-ontologies)
Knowledge expandability. Clearly, the generation of new objects needs new knowledge. In C-K theory it is an explicit operation. C-K theory assumes knowledge expansions that are not only the result of induction rules, which can be interpreted as K-K operations. New knowledge is also obtained by C-K and K-C operators, which have a triggering and guiding role through the formation of expanding partitions and concepts [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. But where is such new knowledge in Forcing? Is this a major difference between the two theories? As already remarked by Poincaré [START_REF] Poincaré | Science and Hypothesis. science and hypothesis was originally[END_REF], one essential method to create novelty in mathematics is the introduction of induction rules which generate actual infinities. In Set theory, the axiom of infinity allows such repeated induction and plays the role of an endless supplier of new objects. Without such an expansion technique, generic filters are impossible and Forcing disappears. Thus, both theories assume a mechanism for K expandability even if they use different techniques to obtain it.
Invariant ontologies and the limits of design. The background of Forcing is Set theory. Actually, Forcing creates new models of ZF, but ZF is explicitly unchanged by Forcing. The existence of such invariant structures is implicit in C-K theory and relates to implicit assumptions about the structure of K. C-K theory lacks some explicit rules about knowledge: at least, some minimal logic and basic wording that allow consistent deduction and learning. These common rules are thus necessary to the existence of design and, like ZF, may not be changed by design. Yet, it is really difficult to establish ex ante which invariant rules should never be changed by design. This issue unveils an interesting ontological limitation for a design theory: to formulate a design theory we need a minimal language and some pre-established knowledge that is invariant by design! Intuitively, we could expect that the more general this invariant ontology is, the more generative the design theory will be. But it could be argued that a too minimal invariant ontology would hamper the creative power of design. We can only signal this issue, which deserves further research.
By contrast, we can also define variable ontologies, i.e. all the definitions, objects and rules that can be changed by design. These variable ontologies correspond to the classic definition of ontologies in computer science or artificial intelligence [START_REF] Gruber | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF]. They are generated and renewed by design: we suggest calling them designed ontologies or d-ontologies, as a reminder that they result from a previous design process. Most human knowledge is built on such d-ontologies. Finally, the ontology of design highlights a specific aspect of knowledge. It is not only a matter of truth or validity. Design science also discusses how knowledge shapes, or is shaped by, the frontier between what is invariant and what is designed at the ontological level.
Example E: In the tyre industry, rubber should be seen as a designed ontological element of tyres. Yet, for obvious economic and managerial reasons, it could be considered as an invariant one. In any human activity, the frontier between what is invariant and what can be changed by design is a tough and conflictual issue. The role of design theory is not to tell where such a frontier should be, but to establish that the existence of such a frontier is an invariant ontological element of design, in all domains, be it in mathematics or in engineering.
Beyond invariant ontologies, design needs to generate designed ones and this needs another interesting aspect of knowledge: the presence of knowledge "voids".
b) Knowledge voids: undecidability and independence
In Set theory, Forcing is used to design models of Sets such that some models satisfy the property P while others verify its negation. These models prove that P is undecidable within Set theory.
When this happens, P can be interpreted as a void in the knowledge about sets. Conversely, the presence of such voids is a condition of Forcing. In C-K theory, concepts are also undecidable propositions that can similarly be seen as voids. Yet, their undecidability is assumed, and they are necessary to start and guide the design process 18. Thus knowledge voids are a common ontology in both theories. Their existence, detection and formulation are a crucial part of the ontology of design. The word "void" is used as a metaphor that conveys the image that these "voids" have to be intentionally "filled" by design 19. As proved in Forcing, they signal the existence of independent structures in existing knowledge or in a system of axioms.
Example E. If one succeeds in designing "tyres without rubber", it will be confirmed that: i) the concept of "tyres without rubber" was undecidable within previous knowledge; ii) the d-ontology of tyres has become independent of the d-ontology of rubber.
Thus C-K theory and Forcing present consistent views about undecidability and highlight its importance for a science of design: it is both a necessary hypothesis for starting design (C-K theory) and a hypothesis that can only be proved by design (Forcing).
This finding leads to three propositions that explain why an ontology of design is so specific and requires dedicated modelling efforts:
-The ontology of design is not linked to the accumulation of knowledge, but to the formation of independent structures (voids) in cumulated knowledge.
-The specific rationality of design is to "fill" such voids in order to create desired things -"filling" means to prove the independence between two propositions in K.
-The existence of such desired things remains undecidable as long as they are not designed.
c) Design needs generic processes for expansion
Generic and expanding expansions. In C-K theory, a design solution is a special path $(C_0, \ldots, C_k)$ of the expanded tree of concepts in space C. This design path is obtained through a series of refinements which form a new true proposition in K. Whenever this series is established, several results hold. The partitions that form the design solution are proved compatible in K and define a new class of objects which verifies the first $C_0$ (initially undecidable in K). Comparing with Forcing, this design path is also a filter, as the path is generated by a step-by-step refinement process. It is also a generic filter in C 20. Hence, the design path is a generic filter in C which includes $C_0$ and "forces" a new set of objects that verify $C_0$. Yet the generation of novelty in C-K theory is not obtained by the mathematical chimera of an actual infinity of conditions as in Forcing. It is warranted by: i) the assumption of $C_0$ as an undecidable proposition in K at the beginning of design; and ii) at least one expanding partition and one expansion in K, which are necessary to form one new complete design path. Thus, genericity also exists in C-K theory, but it is built not by an infinite induction but by introducing new truths and revising the definition of objects. Finally, C-K theory and Forcing differ in the technique that generates novelty, but both can be seen as generic expansions, as they are obtained by expansions designed to generate an object that is different from any existing object or any combination of existing objects. Thus, generic expansions are a core element of the ontology of design.
Expanding partitions as potential forcings: C-K theory adopts a "real world" perspective. Not all knowledge is given at the beginning of the design process, and C-K operators aim to expand this knowledge. This can be interpreted, yet only as a metaphor, in the forcing language. We could say that expanding partitions offer new potential forcing conditions. However, increasing potential forcings is possible only if expanding partitions are not rejected in K because they contradict some invariant ontology.
K-reordering, new namings and preservation of meaning. In both Forcing and C-K theory, design generates new things. In mathematics, the generation of new real numbers by Forcing made it necessary to rediscuss the cardinality of the continuous line. We call K-reordering these K-K operations that are needed to account for the safe introduction of new objects with all their consequences. For instance, design needs new names to avoid confusion and to distinguish new objects. Interpretation rules will be necessary to preserve meaning with old and new names. As mentioned before, such issues are explicitly addressed in Forcing 21. In the first formulations of C-K theory [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF], K-reordering was implicit. Now it is clear that it should receive explicit attention in any design theory. To avoid creating nonsense, design needs such careful K-reordering. These theoretical findings have been confirmed by empirical observations of design teams. During such experiments, authors observed the generation of "noun phrases" [START_REF] Mabogunje | Noun Phrases as Surrogates for Measuring Early Phases of the Mechanical Design Process[END_REF]: this is the response of designers facing the need to invent new names to describe objects defined by unexpected series of attributes. These "noun phrases" also allow some partial K-reordering that preserves meaning during the conversations at work.
20 Proof: for any dense subset D of C, there is a refinement of $C_k$ that is in D. But since $C_k$ is also in K, any refinement of $C_k$ is in K and cannot be in C. Hence $C_k$ is in D.
21 The output of Forcing is not one unique new set G, but a whole extended model N of ZF. The building of the extension model combines subsets of the old ground model M and the new set G. Thus new names have to be carefully redistributed so that an element of M with name a gets a new name a′ when considered as an element of the new set. As a consequence of these preserving rules the extension set is well formed and obeys the ZF axioms (Jech 2000).
Part 5: Discussion and conclusion: an ontology of design.
In the preceding sections we have compared two design theories coming from different fields. Our main assumption was that these theories are sufficiently general to bring solid insights about what design is and what some of its ontological features are. We also expected that these common features would appear when each design theory is used to mirror the other.
What we have found is that an ontology of design is grounded in an ontology of expansion. This means that in any design domain, model or methodology we have to find a common group of basic assumptions and features that warrant a consistent model of expansion. Or, to put it more precisely: if we find a reasoning process where these features are present, we can consider it as a design process. What are these features? We have assumed that these features can be inductively obtained from the comparison between two general design theories in different fields. We found six ontological features that we summarize in the first column of Table 1, where we recall the corresponding elements of each feature for both Forcing (column 2) and C-K theory (column 3).
-An ontology of design needs a dynamic frontier between invariant ontologies and designed ontologies. This proposition has important implications for the status of design. Design cannot be defined as an applied science or as the simple use of prior knowledge. Invariant ontologies can be seen as some sort of universal laws. Yet designed ontologies are not deduced from these laws; their design needs extra knowledge and revised definitions. Moreover, it is not possible to stabilize ex ante the frontier between these two ontologies. For sure, generic expansions need some minimal and invariant knowledge. But design theories say nothing about what such a minimal frontier could be. Take the field of contemporary art: even if artwork was not studied in this research, we can conjecture that the invariant ontologies that bear on present artistic work are rather limited. Each artist can design and decide what should stay as an invariant ontology for her own future work. In mathematics we can find similar discussions when axiomatics and foundations are in debate. Therefore, an ontology of design may contribute to the debate about the creative aspects of mathematical work [START_REF] Kazakçi | Is "creative subject" of Brouwer a designer? -an Analysis of Intuitionistic Mathematics from the Viewpoint of C-K Design Theory?[END_REF]. Applying such categories to analyse our own work, we have to acknowledge that the ontology of expansion that we have found is a designed and not an invariant one. It depends on the design theories that we have compared in this paper. New ontological features of design may appear if we study other theories. However, by grounding our work on theories that present a high level of generality, we can reasonably expect that we have at least captured some invariant features of design.
-An ontology of design acknowledges voids in knowledge: modelling unknownness. The notion of "voids" opens a specific perspective on knowledge structures. It should not be confused with the usual "lack of knowledge" about something that already exists or is well defined. It is correct to say that "we lack knowledge about the presence of water on Mars": in this sentence the notions of presence, Mars and water do not have to be designed. Instead, knowledge voids designate unknown entities whose existence requires design work. Thus it is not consistent, from our point of view, to say that "we lack knowledge about tyres without rubber": if we want to know something about them, we have to design them first! These findings open difficult questions that need further research: can we detect all "voids" in knowledge? Are there limits to such an inquiry? Are there different possibilities to conceptualize this metaphor? In our research we modelled "voids" with notions like undecidability and independence, which are linked to the common background of C-K theory and Forcing. To challenge these interpretations, further work is needed to explore new models of what we called "concepts" and "unknown objects". A similar evolution happened with the notion of uncertainty, which was traditionally modelled with probability theory before more general models were suggested (like possibility theories).
-An ontology of design needs generic processes for the formation of new things. An important finding of our comparison is that generating new things needs generic expansions, which are neither pure imagination nor pure combination of what is already known. What we have found is that design needs a specific superposition and interplay of both chimeras and knowledge expansions. C-K theory insists on the dual role of expanding partitions, which allow the identity and definition of objects to be revised. Forcing is not obtained by a finite combination of elements of the ground model: it needs first to break out of the ground model by building a new object, the generic filter, and then to recombine it with old ones. This was certainly the most difficult mechanism to capture. Design theories like C-K theory and Forcing clarify such mechanisms, but they are difficult to express in ordinary language. Our research shows that common notions like idea generation, "problem finding" or "serendipity" are only images or elements of a more complex cognitive mechanism. Indeed, it is the goal of theories to clarify what is confused in ordinary language, and design theories have attempted to explain what usually remained obscure in design. Still, it is a challenge to account more intuitively for the notion of generic expansion.
-An ontology of design needs mechanisms for the preservation of meaning and knowledge reordering. This finding signals the price to pay if we want to design and to continue expanding knowledge and things. At the core of these operations we find the simplest and yet most complex task: consistent naming. It is naming that controls the proper reordering of knowledge when design is active. Naming is also necessary to accurately identify new "voids", i.e. new undecidable concepts or independent knowledge structures. Naming is also a central task for any industrial activity and organisation. An ontology of expansion tells us that the most consistent way to organize names is to remember how the things we name have been designed and thus differentiated from existing things. Yet, in practice names tend to have an existence of their own, and it is well documented that this contributes to fixation effects [START_REF] Jansson | Design Fixation[END_REF]. It is also documented that in innovative industries, engineering departments are permanently producing a flow of new objects; thus a complete K-reordering becomes almost impossible, and this process continuously threatens the validity of naming and component interchangeability [START_REF] Giacomoni | M et gestion des évolutions de données techniques : impacts multiples et interchangeabilité restreinte[END_REF].
Limitations and further research. To conclude we must stress again that our findings are limited by our material and research methodology. Our comparative work could be extended and strengthened by introducing other formal design theories, provided they are more general than C-K theory and Forcing and reveal new ontological features.
An alternative to our work would be to study design from the point of view of its reception, which can be interpreted as a continuation of design or as a K-reordering process, both taking place beyond the designer's work (by clients, users, experts, critics, media, etc.). There is also a wide body of scientific work on perception that has influenced many designers, such as Gestalt theory, contrast and color theory, etc. One issue for further research could be to compare the ontology of expansion that we have found for design to existing ontologies of perception.
We also acknowledge that, for instance, social or psychological approaches to design could lead to different perspectives on what design is about. However, the clarification of an ontology of design may contribute to new explorations of the social and psychological conditions of design.
The frontier between invariant and designed ontologies can be interpreted from a social perspective. Design, as we have found it, requires consistent naming and K-reordering, and this also means that special social work and training are needed for the acceptance of design activities. Human societies need both invariance and evolution. Words, rules and habits cannot change too rapidly, but they also need to evolve by design. Thus one can ask if there are social systems that are more or less consistent with the ontology of expansion that we have described. Social and psychological structures indeed play an important role in the fixation of ontologies and in design training and learning. It will be the task of future research to link such theoretical advances to more empirical observations of design tasks [START_REF] Agogué | The Impact of Examples on Creative Design: Explaining Fixation and Stimulation Effects[END_REF].
Implications for design practice. The practical lesson of this theoretical research is rather simple. According to our findings, design has a specific ontology, anchored in subtle and difficult cognitive mechanisms like knowledge voids, generic expansions and K-reorderings. Thus we can better understand why design practice can be disconcerting, controversial and stressful, and also why empirical design research is so demanding and complex [START_REF] Blessing | What is Engineering Design Research?[END_REF]. The good news is that design theory can cope with the cognitive "chaos" that seems to emerge from design. We understand that design corresponds to a type of rationality that cannot be reduced to standard learning or problem solving. The rationality of design is richer and more general than other rationalities. It keeps the logic of intention but accepts the undecidability of its target; it aims at exploring the unknown and it is adapted to the exploitation of the emergent. Yet, its ontology can be explained, and like any other science, design science can make the obscure and the complex clearer and simpler.
Figure 1: C-K diagram
Figure 2: The forcing method
Figure 3: The generation of Cohen reals by Forcing
Table 1: Ontology of design as a common core of design theories

Ontology of design | Forcing | C-K theory
Invariant ontologies (frontier) | Axioms of Set theory | Basic logic and language; invariant objects
Designed ontologies | New models of Sets | New families of objects
Knowledge expansions | Inductive rules (axiom of infinity) | Discovery or guided exploration
"Voids", undecidability and independence | Independent axioms of Set theory | Concepts and independent structures in K
Generic expansions (generating new things) | Generic filter | Design path with expanding partitions and K-expansions
K-reordering, naming and preservation of meaning | Building rules for the extension model | New names and reorganising the definition of designed ontologies

These findings have several implications and open areas for further research that we briefly discuss now.
For instance, there is an active quest for new theoretical physics based on String theory that could replace the standard model of particles.
It may be surprising that the inclusion relation becomes possible: it becomes possible only when the existence is proved.
The idea of Design as chimera forming can be traced back to Yoshikawa's GDT[START_REF] Yoshikawa | Design Theory for CAD/CAM integration[END_REF] (see the Frodird, p. 177), although the authors didn't use the term chimera and the theoretical properties of such operations were not fully described in the paper.
Such models of things are also present in Design theories (see for instance the "entity set" in GDT[START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF]).
The two propositions of this type that gave birth to the forcing method are well known in set theory. The first one is "every set of nonempty sets has a choice function" (the axiom of choice); the second one concerns the existence of infinite cardinals that are intermediate between the cardinal of the integers and the cardinal of the reals, the question addressed by the continuum hypothesis.
Complete presentations of Forcing can be easily found in standard textbooks in advanced set theory[START_REF] Kunen | The Interplay Between Creativity issues and Design Theories: a new perspective for Design Management Studies?[END_REF][START_REF] Jech | Set Theory[END_REF][START_REF] Cohen | Set Theory and the Continuum Hypothesis. Addison-Wesley, Cross N (1993) Science and design methodology: A review[END_REF]
Filters are standard structures in Set theory. A filter F is a set of conditions of Q with the following properties: non empty; nestedness (if p < q and p in F then q is in F) and compatibility (if p, q are in F, then there is s in F such that s < p and s < q).
To give a hint on this strange property and its demonstration: Cohen follows, as he explains himself, the reasoning of Cantor diagonalization. He shows that the "new" real is different from any real g written in base 2 by showing that there is at least one condition in G that differentiates G and this real (this corresponds to the fact that G intersects Dg, the set of conditions that are not included in g).
Forcing is a mathematical tool that can Design new sets using infinite series of conditions. In real Design, series of conditions are not always infinite.
There are several forms of extensions in Mathematics that cannot be even mentioned in this paper. Our claim is that Forcing, to our knowledge, presents the highest generality in its assumptions and scope.
In a simulation study of C-K reasoning[START_REF] Kazakçi | Simulation of Design reasoning based on C-K theory: a model and an example application[END_REF] voids could be modelled, since knowledge was assumed to have a graph structure
One can also use the image of a "hole". The metaphor of "holes" has been suggested by Udo Lindemann during a presentation about "Creativity in engineering" (SIG Design Theory Workshop, February 2011). It is a good image of the undecidable propositions, or concepts in C-K theory, that trigger a design process. Udo Lindemann showed that such "holes" can be detected with engineering methods when they are used to find design ways that were not yet explored.
"3386",
"1099",
"1111"
] | [
"39111",
"39111",
"39111"
] |
01485144 | en | [
"shs"
Armand Hatchuel
Yoram Reich
Pascal Le Masson
Benoit Weil
Akin Kazakci
Beyond Models and Decisions: Situating Design through generative functions
This paper aims to situate Design by comparison to scientific modeling and optimal Decision. We introduce "generative functions" characterizing each of these activities. We formulate inputs, outputs and specific conditions of the generative functions corresponding to modeling ($G_m$), Optimization ($G_o$) and Design ($G_d$): $G_m$ follows the classic view of modeling as a reduction of observed anomalies in knowledge by assuming the existence of unknown objects that may be observed and described with consistency and completeness. $G_o$ is possible when free parameters appear in models. $G_d$ bears on recent Design theory, which shows that design begins with unknown yet not observable objects to which desired properties are assigned and have to be achieved by design. On this basis we establish that: i) modeling is a special case of Design; ii) the definition of design can be extended to the simultaneous generation of objects (as artifacts) and knowledge. Hence, the unity and variety of design can be explained, and we establish Design as a highly general generative function that is central to both science and decision. Such findings have several implications for research and education.
INTRODUCTION: THE NATURE OF DESIGN THEORY
1. Research goals: in this paper, we aim to situate Design theory by comparison to Science as a modeling activity and to Decision as an optimization activity. To tackle this critical issue, we introduce and formalize generative functions that characterize these three activities. From the study of these generative functions we show that: i) modeling is a special case of design; and ii) Design can be seen as the simultaneous generation of artifacts and knowledge. 2. Research motivation and background. Contemporary Design theories have reached a high level of formalization and generality [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. They establish that design can include, yet cannot be reduced to, classic types of cognitive rationality (problem-solving, trial and error, etc.) [START_REF] Dorst | Design Problems and Design Paradoxes[END_REF][START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF]). Even if one finds older pioneers to this approach, modern attempts can be traced back to [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF] and have been followed by a series of advancements which endeavored to reach a theory of design that is independent of what is designed, and that can rigorously account for the generative (or creative) aspects of Design [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. General Design Theory [START_REF] Yoshikawa | General Design Theory and a CAD System[END_REF], Coupled Design Process [START_REF] Braha | Topologial structures for modelling engineering design processes[END_REF], Infused Design [START_REF] Shai | Infused Design: I Theory[END_REF]), Concept-Knowledge (C-K theory) [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF] are representatives of such endeavor. The same evolution has occurred in the domain of industrial design where early esthetic orientations have evolved towards a comprehensive and reflexive approach of design that seeks new coherence [START_REF] Margolin | Design in History[END_REF]. Such academic corpus opens new perspectives about the situation of Design theory within the general landscape of knowledge and science: the more Design theory claims its universality, the more it is necessary to explain how it can be articulated to other universal models of thought well known to scientists. Still the notion of "Design theory" is unclear for the non-specialist: it has to be better related to standard forms of scientific activity. To advance in this direction this paper begins to answer simple, yet difficult, questions like: what is different between Design and the classic scientific method? Why design theory is not simply a decision theory? In this paper we focus on the relation between design, modeling and optimization; the latter are major and dominant references across all sciences. 3. Methodology. Authors (Cross 1993;[START_REF] Zeng | On the logic of design[END_REF][START_REF] Horvath | A treatise on order in engineering design research[END_REF]) have already attempted to position Design in relation with Science. [START_REF] Rodenacker | Methodisches Konstruieren. 
Konstruktionsbücher[END_REF] considered that Design consisted in: i) analyzing "physical phenomena" based on scientific modeling; and ii) "inverting" the logic by beginning with selecting a function and addressing it by using known physical models of the phenomena (see p. 22 in [START_REF] Rodenacker | Methodisches Konstruieren. Konstruktionsbücher[END_REF]). After WW2, Simon's approach of the artificial proposed a strong distinction between science and design [START_REF] Simon | The Sciences of the Artificial[END_REF]. However, Simon's Design theory was reduced to problem solving and did not capture specific traits of design [START_REF] Hatchuel | Towards Design Theory and expandable rationality: the unfinished program of Herbert Simon[END_REF][START_REF] Dorst | Design Problems and Design Paradoxes[END_REF]. [START_REF] Farrell | The Simon-Kroes model of technical artitacts and the distinction between science and design[END_REF] criticized the Simonian distinction, considering that design and science have a lot in common. Still science and design are not specified with enough rigor and precision in these comparisons. Our aim is to reach more precise propositions about "scientific modeling" and "optimal decision" and to establish similarities and differences with design theory, at the level of formalization allowed by recent design theories. The core of this paper is the analysis of modeling, decision and design through generative functions.
For each generative function we define its inputs and outputs, as well as the assumptions and constraints to be verified by these functions. This common formal language will help us establish the relations and differences between design theory, modeling theory and decision theory. 4. Paper outline. Section 2 presents a formal approach of classic modeling theory and decision theory. Section 3 shows why Design differs from modeling and decision theory. Section 4 outlines differences in the status of the "unknown" in each case. We show that modeling can be interpreted as the design of knowledge. It establishes that science and decision are centrally dependent of our capacity to design.
MODELING AND DECISION: UNKNOWN OBJECTS AS OBSERVABLES
Modeling: anomalies and unknown objects
The classic task of Science (formed in the 19 th Century), was to establish the "true laws of nature". This definition has been criticized during the 20 th century: more pragmatic notions about Truth were used to define scientific knowledge, based on falsifiability [START_REF] Poincaré | Science and Hypothesis[END_REF][START_REF] Popper | The Logic of Scientific Discovery[END_REF]; laws are interpreted as provisional "scientific models" [START_REF] Kuhn | The Structure of Scientific Revolutions[END_REF]McComas 1998;[START_REF] Popper | The Logic of Scientific Discovery[END_REF]. The conception of "Nature" itself has been questioned. The classic vision of "reality" was challenged by the physics of the 20 th century (Relativity theory, Quantum Mechanics). The environmental dangers of human interventions provoked new discussions about the frontiers of nature and culture. Yet, these new views have not changed the scientific method i.e. the logic of modeling. It is largely shared that Science produces knowledge using both observations and models (mostly mathematical, but not uniquely). The core of the scientific conversation is focused on the consistency, validity, testability of models, and above all, on how models may fit existing or experimentally provoked observations. To understand similarities and differences between Design theory and modeling theory, we first discuss the assumptions and generative function that define modeling theory.
The formal assumptions of modeling
Modeling is so common that its basic assumptions are widely accepted and rarely reminded. To outline the core differences or similarities between Modeling theory and Design theory, these assumptions have to be clarified and formalized. We adopt the following notations: -X i is an object i that is defined by its name "X i " and by additional properties.
-K i (X i ) is the established knowledge about X i (e.g. the collection of its properties). Under some conditions described below, they may form a model of X i -K(X i ) is the collection of models about all the X i s. At this stage we only need to assume that K follows the classic axioms of epistemic logic [START_REF] Hendricks | Mainstream and Formal Epistemology[END_REF]) (see section 4). Still, modeling theory needs additional assumptions (these are not hypotheses; they are not discussed):
A1. Observability of objects and independence from the observer. Classic scientific modeling assumes that considered objects X i are observable: it means that the scientist (as the observer) can perceive and/or activate some observations x i about X i . The quality and reliability of these observations is an issue that is addressed by statistics theory. These observations may impact on what is known K i (X i ) and even modify some parameters of X i i.e. some subsets of K i (X i ) but it is usually assumed that observations do not provoke the existence of X i , i.e. the existence of the X i s is independent of the observer. For instance in quantum mechanics, the position and momentum of a particle are dependent of the observation, not its existence, mass or other physical characteristics. (Here we adopt what is usually called the positivistic approach of Science. Our formalization also fits with a constructivist view of scientific modeling but it would be too long to establish it in this paper.) A2. Model consistency and completeness: K(X i ) is a model of the X i s if two conditions defined by the scientist are verified:
-Consistency: the scientist can define a consistency function H, that tests K(X i ) (no contradictions, no redundant propositions, simplicity, unity, symmetry etc…):
H(K(X i )) true means K(X i ) is a consistent model.
- Completeness: we call Y the collection of observations (or data coming from these observations) that can be related to the X i s. The scientist can define a completeness function D that checks (K(X i )-Y): D(K(X i )-Y) holds means that K(X i ) sufficiently predicts Y. Obviously, there is no universal formulation of H and D. Scientific communities tend to adopt common principles for consistency and completeness. For our research, what counts is the logical necessity of some H and D functions to control the progress of modeling.
Notations: For the sake of simplicity, we will write ∆H > 0 (resp. ∆D > 0) when consistency (resp. completeness) of knowledge has increased.
A3. Modeling aims to reduce knowledge anomalies. The modeling activity (the research process) is stimulated by two types of "anomalies" that may appear separately or together:
- K(X i ) seems inconsistent according to H. For instance K(X i ) may lack unity or present contradictions. For instance Ockham's razor is a criterion of economy in the constitution of K.
- New observations Y appear or are provoked by an experiment, and do not fit, according to D, with what is described or expected by K(X i ). Or K(X i ) predicts observations Y * that have still never happened or are contradictory with available ones. For instance the Higgs boson was predicted by the standard theory of particles and was observed several decades after its prediction.
A4. Hypothesizing and exploring unknown objects. Facing anomalies, the scientist makes the hypothesis that there may exist an unknown object X x , observable but not yet observed, that would reduce the anomalies if it verifies some properties. Anomalies are perceived as signs of the existence of X x , and the modeling process will activate two interrelated activities.
- The elaboration of K(X x ) will hopefully provide a definition of X x and validate its expected properties. Optimization procedures can routinize such elaboration [START_REF] Schmidt | Distilling free-form natural laws from experimental data[END_REF].
- The expansion of Y, i.e. new provoked observations (experimental plans), may also increase information about (X x , K(X x )).
Ideally, the two series should converge towards an accepted model K x (X x ) that increases H and D. This process may also provoke a revision of previous knowledge K(X i ) that we will note K'(X i ) throughout the paper (revised knowledge on X i ).
Some examples of scientific modeling
Example 1: X-rays. When the story of X rays began, many objects were already known (modelled): electricity, light, electromagnetic waves and photography were common K(X i ) for scientists. Research was stimulated by the formation of a photographic anomaly Y: a photosensitive screen became fluorescent when Crookes tubes were discharged in a black room. Roentgen hypothesized the existence of an unknown radiation, X x , that was produced by the Crookes tube and could produce a visible impact on photographic screens. It took a long period of work combining hypothesis building and experimental testing before X rays were understood and the photographic anomaly reduced.
Example 2: New planets. We find a similar logic in the discovery of Neptune and then Pluto, the "planet X". In the 1840s, astronomers had detected a series of irregularities in the path of Uranus, an anomaly Y which could not be entirely explained by Newtonian gravitational theory applied to the then-known seven planets (the established K(X i )). Le Verrier proposed a new model with eight planets (K'(X i ), K(X x )) in which the irregularities are resolved if the gravity of a farther, unknown planet X x was disturbing Uranus' path around the Sun. Telescopic observations confirming the existence of a major planet were made by Galle, working from Le Verrier's calculations. The story followed the same path with the discovery of Pluto, which was predicted in the late 19 th century to explain newly discovered anomalies in Uranus' trajectory (new Y). For decades, astronomers suggested several possible celestial coordinates (i.e. multiple possible K(X x )) for what was called the "planet X". Interestingly enough, even today astronomers go on studying other models K(X x ) to explain Uranus' trajectory, integrating for instance new knowledge on Neptune's mass, gained by Voyager 2's 1989 flyby of Neptune.
Corollary assumptions in modeling theory
Modeling theory is driven by the criteria of consistency H and completeness D that allow detecting anomalies of knowledge before any explanation has been found. Hence, modeling needs the independence between X x and the criteria that judge the consistency and completeness of K(X i ): H and D. This assumption is necessary because H(K(X i ) and D(K(X i )-Y)) have to be evaluated when X x is still unknown and its existence not warranted (only K(X i ) and Y are known). Still, as soon as (X x , K(X x )) are formulated, even as hypotheses, H and D can take into account this formulation. Finally, modeling can be described through what we call a generative function G m . Definition: in all the following, we call generative function a transformation where the output contains at least one object (X x , K(X x )) that was unknown in the input of the function [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]) and which knowledge has been increased during the transformation. In the case of modeling, this generative function can be structurally defined as:
G_m : (K(X_i), Y) → (K(X_x), K'(X_j))
under the conditions that:
- D(K(X_i) - Y) does not hold (i.e. there is an anomaly in the knowledge input of G_m);
- H(K'(X_j) ∪ K(X_x)) - H(K(X_i)) > 0, i.e. ∆H > 0 (the new models are more consistent than the previous ones);
- D((K(X_i) ∪ K'(X_j) ∪ K(X_x)) - Y) holds, i.e. ∆D > 0 (the new models better fit with the observations).
The generative function G m only acts on knowledge but not on the existence of modeled objects. It helps to detect the anomalies and reduce distance between knowledge and observations.
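As an illustration, Example 2 above (the discovery of Neptune) can be mapped onto this notation; the mapping below is ours and is only meant to fix ideas:

```latex
% Example 2 rewritten in the notation of G_m (illustrative mapping, not from the paper).
\[
  G_m : \big( \underbrace{K(X_i)}_{\text{Newtonian gravitation + 7 known planets}},\;
              \underbrace{Y}_{\text{irregularities in Uranus' path}} \big)
        \longrightarrow
        \big( \underbrace{K(X_x)}_{\text{model of an 8th planet (Neptune)}},\;
              \underbrace{K'(X_j)}_{\text{revised planetary orbits}} \big)
\]
% Anomaly: D(K(X_i) - Y) does not hold, since seven planets cannot account for Y.
% After Le Verrier's model and Galle's observation, D((K(X_i) \cup K'(X_j) \cup K(X_x)) - Y) holds.
```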
Decisions and decidable parameters: models as systems of choice
The output of a modeling process is a transformation of K(X i ) that includes a new model K x (X x ) that defines an observable X x and captures its relations with other X i s. A decision issue appears when this new object X x can be a potential instrument for action through some program about X x .
Models as programs
The path from the discovery of a new object to a new technology is a classic view (yet limited as we will see in later sections) of design and innovation. This perspective assumes that K(X x ) can be decomposed into two parts: K u (X x ) which is invariant and K f (Xx) which offers free parameters (d 1 , d 2 ,..,d i ) that can be decided within some range of variation. Example 3: X rays consisted in a large family of electromagnetic radiations, described by a range of wavelengths and energies. The latter appeared as free parameters that could be controlled and selected for some purpose. The design of specific X-rays artefacts could be seen as the "best choice" among these parameters in relation to specific requirements: functionality, cost, danger, etc. The distinction between K f and K u clarifies the relation between the discovery of a new object and the discovery of a decision space of free parameters where the designer may "choose" a strategy. Decision theory and/or optimization theory provide techniques that guide the choice of these free parameters.
Optimization: generating choices
The literature about Decision theory and optimization explores several issues: decision with uncertainty, multicriteria or multiple agents decision making, etc. In all cases, the task is to evaluate and select among alternatives. Classic "optimization theory" explores algorithms that search the "best" or "most satisficing" choices among a decision space which contains a very large number of free possibilities - a number so large that systematic exploration of all possibilities is infeasible even with the most powerful computers. In recent decades optimization algorithms have been improved through inspiring ideas coming from material science (simulated annealing) or biomimicry (genetic algorithms, ant based algorithms…). However, from a formal point of view, the departure point of all these algorithms is a decision space (K(X x ), D(d j ), O(d j )), where:
- K(X x ) is an established model of X x ,
- D(d j ) is the space of acceptable decisions about the d j s, which are the free parameters of X x ,
- O(d j ) is the set of criteria used to select the "optimal" group of decisions D * (d j ).
The task of these algorithms can be seen as a generative function G o that transforms the decision space into D * (d j ), which is the optimal decision:
G_o : (K(X_x), D(d_j), O(d_j)) → D*(d_j), so that D*(d_j) ⊂ D(d_j) and O(D*(d_j)) holds.
From the comparison of G m and G o , it appears that they both generate new knowledge, but in a different way. Modeling may introduce new X x when optimization only produces knowledge on the structure of K(X x ) from the perspective of some criterion O. (If O was independent of X x (for instance, if O is a universal cost function), it could be possible to integrate both functions in one unique modeling function including optimization G_m,o : (K(X_i), Y) → (K'(X_j), K(X_x), D*(d_j)) where H, D and O hold. Yet, in most cases, O may depend on the knowledge acquired about X x .) We now compare these structural propositions to the generative function associated to Design theory.
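A minimal computational sketch of G_o, with an invented decision space and criterion; the functions, parameter names and numbers below are hypothetical, loosely echoing Example 3, and are not taken from the paper:

```python
# Toy illustration of the generative function G_o (hypothetical data, not from the paper).
# K(X_x): an established model linking one free parameter (a wavelength) to contrast and dose.
# D(d):  the admissible range for that free parameter.
# O(d):  the criterion used to select the "optimal" decision D*.

def contrast(wavelength_pm):          # part of K(X_x): invented response curve
    return 1.0 / (1.0 + abs(wavelength_pm - 60.0) / 20.0)

def dose(wavelength_pm):              # part of K(X_x): invented dose model
    return 100.0 / wavelength_pm

def objective(wavelength_pm):         # O(d): favor contrast, penalize dose
    return contrast(wavelength_pm) - 0.1 * dose(wavelength_pm)

# D(d): acceptable decisions, here a grid from 20 to 150 (arbitrary units)
decision_space = [20.0 + 0.5 * k for k in range(261)]

# G_o: exhaustive search of the decision space against the criterion O
d_star = max(decision_space, key=objective)
print(f"D* = {d_star:.1f}, O(D*) = {objective(d_star):.3f}")
```

This is only a grid search; the simulated annealing or genetic algorithms mentioned above would replace the `max` line, but the structure (decision space in, D* out) is the same.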
3 DESIGN: THE GENERATION OF NEW OBJECTS
Intuitively, Design aims to define and realize an object X x that does not already exist, or that could not be obtained by a deduction from existing objects and knowledge. This intuition has been formalized by recent design theories [START_REF] Hatchuel | A systematic approach of design theories using generativeness and robustness[END_REF]. However, it mixes several assumptions that imply, as a first step of our analysis, strong differences between Design and modeling and need to be carefully studied. In the next developments we will follow the logic of C-K design theory to formalize the generative function of Design [START_REF] Hatchuel | A new approach to innovative design: an introduction to C-K theory[END_REF][START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF].
- Unknowness, desirability and unobservability
Unknown objects X x are necessary to modeling theory. Design also needs unknown objects X x . According to C-K design theory, these objects do not exist and hence are not observable when design begins. They will exist only if design succeeds. Actually, when design starts, these objects are unknown and only desirable. How is it possible? They are assigned desirable properties P(X x ) and they form a concept (X x , P(X x )), where P is the only proposition that is formulated about the specific unknown X x that has to be created by design. Similarly to the O of G o , P refers to a set of criteria to be met by X x . Moreover, within existing K(X i ), the existence of such a concept is necessarily undecidable [START_REF] Hatchuel | C-K design theory: an advanced formulation[END_REF]. X x is not assumed as an observable object like in modeling, thus it can be viewed as an imaginary object. In design, X x is only partially imagined: design only needs that we imagine the concept of an object, but its complete definition has to be elaborated and realized. This has important consequences for the generative function of Design.
- Design as decided anomalies
Like in modeling, we again assume K(X i ). Now, Design is possible only if between the concept (X x , P(X x )) and K(X i ) the following relations hold:
- (K(X i ) ⟹ P(X x )) is wrong (i.e. what we know about X i s cannot imply the existence of X x );
- (K(X i ) ⟹ non(P(X x ))) is wrong (i.e. what we know about the X i s cannot forbid the existence of X x ).
These relations mean that K(X i ) is neither a proof of the existence of X x , nor a proof of its nonexistence. Hence, the existence of (X x , P(X x )) is undecidable, yet desirable, under K(X i ). Remark: undecidability can be seen as the anomaly specific to Design. It is not an observed anomaly, a distance between observations and K(X i ); it is a decided anomaly created by the designer when she builds the concept (X x , P(X x )). This makes a major difference between modeling and design.
- The generative function of design: introducing determination function
Design theory is characterized by a specific generative function G d that aims to build some K(X x ) that proves the existence of X x and P(X x ). As we know that K(X i ) cannot prove this existence, Design will need new knowledge. This can be limited to K(X x ) or, in the general case, this can require, like in modeling, to revise (X i , K(X i )) into (X j , K'(X j )) different from X x . These (X j , K'(X j )) were also unknown when design began, thus design includes modeling. The generative function of design G d is:
G_d : (K(X_i), P(X_x)) → (K'(X_j), K(X_x))
with the following conditions (two are identical for modeling and the third is specific to design):
- ∆H ≥ 0, which means that Design creates objects that maintain or increase consistency;
- ∆D ≥ 0, which means that Design maintains or increases completeness;
- (K(X_i) ∪ K'(X_j) ∪ K(X_x)) ⟹ ((X_x exists) and (P(X_x) holds)).
The third condition can be called a determination function as it means that Design needs to create the knowledge that determines the realization of X x and the verification of P(X x ). This condition did not appear in the generative function of modeling. We will show that it was implicit in its formulation.
- Design includes decision, yet free parameters have to be generated
Design could appear as a special case of decision theory: it begins with a decided anomaly and it aims to find some free parameters that, when "optimized", will warrant P(X x ). However, the situation is different from the decision theory analyzed previously: when design begins, the definition parameters of X x are unknown; they have to be generated before being decided.
-Design observes "expansions" i.e. potential components of X x
As mentioned earlier, when Design begins, X x is not observable; it will be observed only when its complete definition will be settled, its existence warranted and made observable. So what can be observed during design if X x still does not exist? We may think that we could observe "some aspects" of X x . This is not a valid formulation as it assumes that X x is already there and we could capture some of its traits. But X x cannot be "present" until we design it and prove its existence. What can be, and is, done is to build new objects that could potentially be used as components of X x . These objects can be called expansions C i (X x ) (we use here the language of C-K design theory). Their existence and properties cannot be deduced from K(X i ), they have to be observed and modeled. Obviously if one of these expansions C j (X x ) verifies P, it can be seen as a potential design of X x . Usually, these expansions only verify some property P' that is a necessary (but not sufficient) condition for P. By combining different expansions, X x will be defined and P verified. The notion of "expansion" unifies a large variety of devices, material or symbolic, usually called sketches, mock-sup, prototypes, demonstrators, simulation etc. These devices are central for Design practice and are well documented in the literature [START_REF] Goldschmidt | The dialectics of sketching[END_REF][START_REF] Tversky | What do sketches say about thinking[END_REF][START_REF] Subrahmanian | Boundary Objects and Prototypes at the Interfaces of Engineering Design[END_REF]. Still they received limited attention in science (except in experimental plans) because they were absent of Modeling or Decision theory. Observing expansions generates two different outputs: i) some "building bricks" that could be used to form X x ; ii) new knowledge that will stimulate modeling strategies or new expansions. Thus, G d can be formulated more precisely by introducing expansions in its output:
G_d : (K(X_i), P(X_x)) → (K'(X_j), C_i(X_x)), and some subgroup C_m of the expansions is such that X_x = ∩ C_m(X_x) and verifies P.
- G d does not generate a pure combination of the X i s: design goes out of the box
This is a corollary of all previous findings. Because X x is unknown and undecidable when related to K(X i ), if a successful design exists, it will be composed of expansions that are different from any of the X i s (and outside the topology of the X i s). Hence, there is no combination of the X i s that would compose X x . G d necessarily goes out of the X i s' box! Creativity is not something added to design. Genuine design is creative by definition and necessity.
Example 4: the design of electric cars. The use of electric power in cars is not a design task. It is easy to compose an electric car with known components. Design begins for instance with the concept: "an electric car with an autonomous range that is not too far from existing cars using fuel power". Obviously, this concept was both highly desired by carmakers and undecidable some years ago. Today, it is easy to observe all the new objects and knowledge that have been produced in existing electric cars that are now proposed, thus observable: new architectures, new battery technologies and management systems, new car heating and cooling systems, new stations for charging or for battery exchange… New types of cars have also been proposed, like the recent Twizy by Renault, which won the Red Dot "best of the best" design award in 2012. Still, commercialized cars could be seen as only expansions of the concept as none of them has reached the same autonomy as existing fuel cars (circa 700 km). From a theoretical point of view commercial products are only economic landmarks of an ongoing design process. This example also illustrates the variety of design propositions, predicted by the theory.
4 COMPARISON AND GENERALIZATION: DESIGN AS THE SIMULTANEOUS GENERATION OF ARTEFACTS AND MODELS
Now we can compare similarities and differences between Design, modeling and Decision theories. Table 1 synthesizes what we have learned about their generative functions.
Table 1: Comparison of generative functions
| Generative function | Modeling: G_m | Decision: G_o | Design: G_d |
| Status of the unknown | X_x is unknown, yet observable and independent; Y forms an anomaly | X_x presents free parameters to be decided; the optimum is unknown | X_x is unknown, assigned desirable properties, not observable |
| Input | (K(X_i), Y), with Y not explained by the X_i's | (K(X_x), D(d_i), O(d_i)) | (K(X_i), P(X_x)), with P(X_x) undecidable relative to K(X_i) |
| Output | K'(X_j), K(X_x) | D*(d_i) | K'(X_j), K(X_x) |
| Conditions | consistency ∆H > 0; completeness ∆D > 0 | O(D*(d_j)) holds | ∆H ≥ 0; ∆D ≥ 0; determination: (X_x exists) and (P(X_x) holds) |
Discovery, invention and the status of the unknown
One can first remark the structural identity between the outputs of G d and G m . It explains why it is actually cumbersome to distinguish between "invention" and "discovery": in both cases, a previously unknown object has been generated. Yet this distinction is often used to distinguish between science and design. The difference appears in the assumptions on the unknown in each generative function: in modeling, the unknown is seen as an "external reality" that may be observed; in design, it is a desirable entity to bring to existence. The structure of the generative functions will show us that these differences mask deep similarities between modeling and Design.
Modeling as a special form of Design
We can now reach the core of our research by examining how these generative functions can be combined. Three important findings can be established.
Proposition 1: Design includes modeling and decision. This is obvious from the structure of G m .
Proof: Design needs to observe and test expansions as potential components of X x :
G_d : (K(X_i), P(X_x)) → (K'(X_j), C_i(X_x)), so that X_x = ∩ C_m(X_x) and verifies P.
If for some X u = ∩ C m (X x ), P(X u ) does not hold, (non-P(X u )) can be interpreted as an observed anomaly. Let us set Y = non-P(X u ); Y appears as a provoked observation. If K(X u ) is the available knowledge about X u , then G d leads to a modeling issue corresponding to the following generative function: (K(X_u), Y) → (K'(X_j), K(X_z)), where X z is a new unknown object that has to be modeled and observed.
Example 5: Each time a prototype (∩ C m (X x )) fails to meet design targets, it is necessary to build a scientific modeling of the failure. One famous historical example occurred at GE Research in the 1920s, where Langmuir's study of light bulb blackening led to the discovery of plasma, which earned him the Nobel Prize in 1932 [START_REF] Reich | The Making of American Industrial Research, Science and Business at GE and Bell[END_REF].
Proposition 2: Modeling needs design. This proposition seems less obvious: where is design in the reduction of anomalies that characterizes modeling? Actually Design is implicit in the conditions of the generative function of modeling: D((K(X_i) ∪ K'(X_j) ∪ K(X_x)) - Y) holds.
Proof: This condition simply says that adding K(X x ) to available knowledge explains Y. Now, checking that this condition holds may require an unknown experimental setting that should desirably fit with the requirements of D. Let us call E x this setting and D r (E x ) these requirements. Hence, the generative function of modeling G_m : (K(X_i), Y) → (K(X_x), K'(X_j)) is now dependent on a design function:
G_d : (K(X_i), D_r(E_x)) → (K'(X_j), K(E_x))
Example 6: There are numerous examples in the history of science where modeling was dependent on the design of new experimental settings (instruments, machines, reactors,…). In the case of the Laser, the existence of this special form of condensed light was theoretically predicted by Einstein as early as 1917 (a deduction from available K(X i )). Yet, the type of experimental "cavity" where the phenomena could appear was unknown and would have to meet extremely severe conditions. Thus, the advancement of knowledge in the field was dependent on Design capabilities [START_REF] Bromberg | Engineering Knowledge in the Laser Field[END_REF].
Proposition 3: Modeling is a special form of Design. This proposition will establish that in spite of their differences, modeling is an implicit Design. Let us interpret modeling using the formal generative function of Design. Such operations are precisely those where the value of formalization is at its peak. Intuitively modeling and Design seem two logics with radically different views of the unknown; yet structurally, modeling is also a design activity.
Proof: we have established that the generative function of modeling G m is a special form of G d .
G_m : (K(X_i), Y) → (K'(X_j), K(X_x)) with the conditions:
a. D(K(X_i) - Y) does not hold;
b. H(K'(X_j) ∪ K(X_x)) - H(K(X_i)) > 0;
c. D((K(X_i) ∪ K'(X_j) ∪ K(X_x)) - Y) holds.
Now, instead of considering an unknown object X x to reduce the anomaly created by Y, let us consider an unknown knowledge K x (note that we do not write K(X x ) but K x ). In addition, we assume that K x verifies the following properties b' and c', which are obtained by replacing K(X x ) by K x in conditions b and c (recall that condition a is independent of K x and thus is unchanged):
b'. H(K'(X_j) ∪ K_x) - H(K(X_i)) > 0;
c'. D((K(X_i) ∪ K'(X_j) ∪ K_x) - Y) holds.
Remark that K x , like X x , is unknown and not observable: it has to be generated (designed). If we set a function T(K x ) that is true if "(b' and c') holds", then G m is equivalent to the design function:
G_d : (K(X_i), T(K_x)) → (K'(X_j), K_x)
Proof: if design succeeds then T(K x ) is true; this implies that c' holds, i.e. K x reduces the anomaly Y. Thus, modeling is equivalent to a design process where the generation of knowledge is designed.
Conditioning the "realism" of K x : with this interpretation of modeling, we miss the idea that K x is about an observable and independent object X x . Design may lead to an infinite variety of K x which all verify T(K x ). We need an additional condition that would control the "realism" of K x . "Realism" was initially embedded in the assumption that there is an observable and independent object. Now assume that we introduce a new design condition V(K x ) which says: K x should be designed independently from the designer. This would force the designer to only use observations and test expansions (for instance knowledge prototypes) that are submitted to the judgment of other scientists. Actually, this condition is equivalent to the assumption of an independent object X x .
Proof: to recognize that X x exists and is independent of the scientists, we need to prove that two independent observers reach the same knowledge K(X x ). Conditioning the design of K x by V(K x ) is equivalent to assuming the existence of an independent object. This completes our proof that modeling is a special form of Design.
Generalization: design as the simultaneous generation of objects and knowledge
Design needs modeling but modeling can be interpreted as the design of new knowledge. Therefore we can generalize design as a generative function that simultaneously applies to a couple (X x , K(X x )):
- Let us call Z_i = (X_i, K(X_i)) and Z_x = (X_x, K(X_x)).
- In classic epistemic logic, for all U, K(K(U)) = K(U): this only means that we know what we know; and as K(X_x) ⟹ X_x, then K(X_x, K(X_x)) = (K(X_x), K(K(X_x))) according to the distribution axiom [START_REF] Hendricks | Mainstream and Formal Epistemology[END_REF], which means that K is consistent with implication rules.
- Then, K(Z_i) = K(X_i, K(X_i)) = (K(X_i), K(K(X_i))) = (K(X_i), K(X_i)) = K(X_i); and similarly K(Z_x) = K(X_x).
The generalized generative function G dz can be written with the same structure as G d :
G_dz : (K(Z_i), L(Z_x)) → (K'(Z_j), K(Z_x))
where L(Z_x) is the combination of all desired properties related to the couple (X_x, K(X_x)):
- the assigned property to X_x : P(X_x);
- the conditions on K(X_x): consistency ∆H > 0; completeness ∆D > 0.
Example 7: there are many famous cases where new objects and new knowledge are generated, e.g. the discovery of "neutral currents" and the bubble chamber to "see" them at CERN in the 1960s [START_REF] Galison | How Experiments End[END_REF], or the DNA double helix and the X-ray diffraction of biological molecules, needed for the observation [START_REF] Crick | What Mad Pursuit: A Personal View of Scientific Discovery[END_REF]. This result establishes that the generative function of design is not specific to objects or artefacts. The standard presentations of modeling or design are partial visions of Design. Confirming the orientation of contemporary Design theory, our research brings rigorous support to the idea that Design is a generative function that is independent of what is designed and simultaneously generates objects and the knowledge about these objects according to the desired properties assigned to each of them.
5
CONCLUDING REMARKS AND IMPLICATIONS. 1. Our aim was to situate design and design theory by comparison to major standard references like scientific modeling and Decision theory. To reach this goal, we have not followed the classic discussions about science and design. Contemporary design theory offers a new way to study these issues. It has reached a level of formalization that can be used to organize a rigorous comparison of design, modeling and optimization. We use this methodology to reach novel and precise propositions. Our findings confirm previous research that insisted more on the similarities between Design and Science. But it goes beyond such general statements: we have introduced the notion of generative functions which permits to build a common formal framework for our comparison. We showed that design, modeling and decision correspond to various visions of the unknown. Beyond these differences, we have established that modeling (hence optimization) could be seen as special forms of design and we have made explicit the conditions under which such proposition holds. Finally we have established the high generality of Design that simultaneously generates objects and knowledge. These findings have two series of implications, which are also areas for further research: 2. On the unity and variety of forms of design: tell us what is that unknown that you desire… Establishing that design generates simultaneously objects and knowledge clarifies the unity of design. Engineers, Scientists, Architects, product creators are all designers. They do not differ in the structure of their generative functions, they differ in the desired properties they assign to the objects (or artifacts) and in the knowledge they generate. Scientists desire artefacts and knowledge that verify consistency, completeness and determination. They tend to focus on the desires of their communities. Engineers give more importance to the functional requirements of the artefacts they build; they also design knowledge that can be easily learned, transferred and systematized in usual working contexts. Architects have desires in common with engineers regarding the objects they create. But they do not aim at a systematized knowledge about elegance, beauty or urban values. Professional identities tend to underestimate the unity of design and tend to overemphasize the specificity of their desires and to confuse it with the generative functions they have to enact. This has led to persistent misunderstandings and conflicts. It has also fragmented the scientific study of design. It is still common to distinguish between "the technology and the design" of a productas if generating a new technology was not the design of both artefacts and knowledge. Our research certainly calls for an aggiornamento of the scientific status of Design where its unity will be stressed and used as a foundation stone for research and education.
On the relations between Science and Design
In this paper we avoid the usual debates about the nature of Science, knowledge and Design. We add nothing to the discussions on positivist and constructivist conceptions of reality. Our investigations focus on the operational logic and structure of each type of activity. We find that the status of the unknown is a key element of the usual distinction between design-as-artifact-making and Science-as-knowledge-creation. Still we also establish that Design offers a logic of the unknown that is more general and includes the logic of scientific Knowledge. Design makes explicit what it desires about the unknown. We establish that Science also designs knowledge according to desires but they are implicit or related to a community (not to the unique judgment of one researcher). Obviously, these findings should be better related to contemporary debates in epistemology and philosophy of Science. This task goes largely beyond the scope of this paper. Finally, our main conclusion is that Design theory can serve as an integrative framework for modeling and decision. By introducing desirable unknowns in our models of thought, Design does not create some sort of irrationality or disorder. Instead it offers a rigorous foundation stone to the main standards of scientific thinking.
| 42,275 | [
"3386",
"1111",
"1099",
"10954"
] | [
"39111",
"63133",
"39111",
"39111",
"39111"
] |
01422161 | en | [
"math"
] | 2024/03/04 23:41:48 | 2020 | https://hal.science/hal-01422161v2/file/navier_slip_v2_0.pdf | Jean-Michel Coron
Frédéric Marbach
Franck Sueur
Small-time global exact controllability of the Navier-Stokes equation with Navier slip-with-friction boundary conditions *
Introduction
Description of the fluid system
We consider a smooth bounded connected domain Ω in R^d , with d = 2 or d = 3. Although some drawings will depict Ω as a very simple domain, we do not make any other topological assumption on Ω. Inside this domain, an incompressible viscous fluid evolves under the Navier-Stokes equations. We will name u its velocity field and p the associated pressure. We assume that we are able to act on the fluid flow only on an open part Γ of the full boundary ∂Ω, where Γ intersects all connected components of ∂Ω (this geometrical hypothesis is used in the proofs of Lemma 2). On the remaining part of the boundary, ∂Ω \ Γ, we assume that the fluid flow satisfies Navier slip-with-friction boundary conditions. Hence, (u, p) satisfies:
∂_t u + (u • ∇) u - ∆u + ∇p = 0   in Ω,
div u = 0   in Ω,
u • n = 0   on ∂Ω \ Γ,
N(u) = 0   on ∂Ω \ Γ.   (1)
Here and in the sequel, n denotes the outward pointing normal to the domain. For a vector field f , we introduce [f ] tan its tangential part, D(f ) the rate of strain tensor (or shear stress) and N (f ) the tangential Navier boundary operator defined as:
[f]_tan := f - (f • n) n,   (2)
D_ij(f) := (1/2) (∂_i f_j + ∂_j f_i),   (3)
N(f) := [D(f) n + M f]_tan.   (4)
Eventually, in (4), M is a smooth matrix valued function, describing the friction near the boundary. This is a generalization of the usual condition involving a single scalar parameter α ≥ 0 (i.e. M = αI d ). For flat boundaries, such a scalar coefficient measures the amount of friction. When α = 0 and the boundary is flat, the fluid slips along the boundary without friction. When α → +∞, the friction is so intense that the fluid is almost at rest near the boundary and, as shown by Kelliher in [START_REF] Kelliher | Navier-Stokes equations with Navier boundary conditions for a bounded domain in the plane[END_REF], the Navier condition [D(u)n + αu] tan = 0 converges to the usual Dirichlet condition.
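As a small numerical illustration of definitions (2)-(4), the following sketch evaluates the Navier operator at a single boundary point; the normal, velocity, velocity gradient and friction matrix are made-up values chosen only for this example:

```python
import numpy as np

# Toy evaluation of the operators (2)-(4) at one boundary point (made-up data).
n = np.array([0.0, 0.0, 1.0])                # outward unit normal at the point
u = np.array([1.0, 0.5, 0.0])                # velocity there (tangent: u @ n == 0)
grad_u = np.array([[0.1, 0.0, 0.3],          # made-up velocity gradient, entry (i, j) = d u_j / d x_i
                   [0.0, 0.2, 0.1],
                   [0.0, 0.0, -0.3]])
M = 0.5 * np.eye(3)                          # friction matrix, here the scalar case M = alpha * Id

def tangential(f, n):
    """[f]_tan := f - (f . n) n, definition (2)."""
    return f - np.dot(f, n) * n

D = 0.5 * (grad_u + grad_u.T)                # rate of strain tensor, definition (3)
N_u = tangential(D @ n + M @ u, n)           # Navier boundary operator, definition (4)

print("D(u) n + M u =", D @ n + M @ u)
print("N(u)         =", N_u)                 # the Navier condition asks N(u) = 0
```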
Controllability problem and main result
Let T be an allotted positive time (possibly very small) and u * an initial data (possibly very large). The question of small-time global exact null controllability asks whether, for any T and any u * , there exists a trajectory u (in some appropriate functional space) defined on [0, T] × Ω, which is a solution to (1), satisfying u(0, •) = u * and u(T, •) = 0. In this formulation, system (1) is seen as an underdetermined system. The controls used are the implicit boundary conditions on Γ and can be recovered from the constructed trajectory a posteriori. We define the space L^2_γ(Ω) as the closure in L^2(Ω) of smooth divergence free vector fields which are tangent to ∂Ω \ Γ. For f ∈ L^2_γ(Ω), we do not require that f • n = 0 on the controlled boundary Γ. Of course, due to the Stokes theorem, such functions satisfy ∫_Γ f • n = 0. The main result of this paper is the following small-time global exact null controllability theorem:
Theorem 1. Let T > 0 and u * ∈ L^2_γ(Ω). There exists u ∈ C^0_w([0, T]; L^2_γ(Ω)) ∩ L^2((0, T); H^1(Ω)), a weak controlled trajectory (see Definition 1) of (1), satisfying u(0, •) = u * and u(T, •) = 0.
Remark 1. Even though a unit dynamic viscosity is used in equation (1), Theorem 1 remains true for any fixed positive viscosity ν thanks to a straightforward scaling argument. Some works also consider the case when the friction matrix M depends on ν (see [START_REF] Paddick | Stability and instability of Navier boundary layers[END_REF] or [START_REF] Wang | Boundary layers in incompressible Navier-Stokes equations with Navier boundary conditions for the vanishing viscosity limit[END_REF]). This does not impact our proofs in the sense that we could still prove that: for any ν > 0, for any T > 0, for any smooth M ν , for any initial data u * , one can find boundary controls (depending on all these quantities) driving the initial data back to the null equilibrium state at time T.
Remark 2. Theorem 1 is stated as an existence result. The lack of uniqueness comes both from the fact that multiple controls can drive the initial state to zero and from the fact that it is not known whether weak solutions are unique for the Navier-Stokes equation in 3D (in 2D, it is known that weak solutions are unique). Still in the 3D case, if the initial data u * is smooth enough, it would be interesting to know if we can build a strong solution to (1) driving u * back to zero (in 2D, global existence of strong solutions is known). We conjecture that building strong controlled trajectories is possible. What we do prove here is that, if the initial data u * is smooth enough, then our small-time global approximate null control strategy drives any weak solution starting from this initial state close to zero.
[Figure: the domain Ω, with the controlled part Γ of the boundary and the uncontrolled part ∂Ω \ Γ, on which u • n = 0 and [D(u) n + M u]_tan = 0.]
Although most of this paper is dedicated to the proof of Theorem 1 concerning the null controllability, we also explain in Section 5 how one can adapt our method to obtain small-time global exact controllability towards any weak trajectory (and not only the null equilibrium state).
A challenging open problem as a motivation
The small-time global exact null controllability problem for the Navier-Stokes equation was first suggested by Jacques-Louis Lions in the late 80's. It is mentioned in [START_REF] Lions | Exact controllability for distributed systems. Some trends and some problems[END_REF] in a setting where the control is a source term supported within a small subset of the domain (this situation is similar to controlling only part of the boundary). In Lions' original question, the boundary condition on the uncontrolled part of the boundary is the Dirichlet boundary condition. Using our notations and our boundary control setting, the system considered is:
∂_t u + (u • ∇) u - ∆u + ∇p = 0   in Ω,
div u = 0   in Ω,
u = 0   on ∂Ω \ Γ.   (5)
Global results
The second approach goes the other way around: see the viscous term as a perturbation of the inviscid dynamic and try to deduce the controllability of Navier-Stokes from the controllability of Euler. This approach is efficient to obtain small-time results, as inviscid effects prevail in this asymptotic. However, if one does not control the full boundary, boundary layers appear near the uncontrolled boundaries ∂Ω \ Γ. Thus, most known results try to avoid this situation.
In [START_REF] Coron | Global exact controllability of the 2D Navier-Stokes equations on a manifold without boundary[END_REF], the first author and Fursikov prove a small-time global exact null controllability result when the domain is a manifold without border (in this setting, the control is a source term located in a small subset of the domain). Likewise, in [START_REF] Fursikov | Exact controllability of the Navier-Stokes and Boussinesq equations[END_REF], Fursikov and Imanuvilov prove small-time global exact null controllability when the control is supported on the whole boundary (i.e. Γ = ∂Ω). In both cases, there is no boundary layer.
Another method to avoid the difficulties is to choose more gentle boundary conditions. In a simple geometry (a 2D rectangular domain), Chapouly proves in [START_REF] Chapouly | On the global null controllability of a Navier-Stokes system with Navier slip boundary conditions[END_REF] small-time global exact null controllability for Navier-Stokes under the boundary condition ∇ × u = 0 on uncontrolled boundaries. Let [0, L] × [0, 1] be the considered rectangle. Her control acts on both vertical boundaries at x 1 = 0 and x 1 = L. Uncontrolled boundaries are the horizontal ones at x 2 = 0 and x 2 = 1. She deduces the controllability of Navier-Stokes from the controllability of Euler by linearizing around an explicit reference trajectory u 0 (t, x) := (h(t), 0), where h is a smooth profile. Hence, the Euler trajectory already satisfies all boundary conditions and there is no boundary layer to be expected at leading order.
For Navier slip-with-friction boundary conditions in 2D, the first author proves in [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF] a small-time global approximate null controllability result. He proves that exact controllability can be achieved in the interior of the domain. However, this is not the case near the boundaries. The approximate controllability is obtained in the space W -1,∞ , which is not a strong enough space to be able to conclude to global exact null controllability using a local result. The residual boundary layers are too strong and have not been sufficiently handled during the control design strategy.
For Dirichlet boundary conditions, Guerrero, Imanuvilov and Puel prove in [START_REF] Guerrero | Remarks on global approximate controllability for the 2-D Navier-Stokes system with Dirichlet boundary conditions[END_REF] (resp. [START_REF] Guerrero | A result concerning the global approximate controllability of the Navier-Stokes system in dimension 3[END_REF]) for a square (resp. a cube) where one side (resp. one face) is not controlled, a small time result which looks like global approximate null controllability. Their method consists in adding a new source term (a control supported on the whole domain Ω) to absorb the boundary layer. They prove that this additional control can be chosen small in L p ((0, T ); H -1 (Ω)), for 1 < p < p 0 (with p 0 = 8/7 in 2D and 4/3 in 3D). However, this norm is too weak to take a limit and obtain the result stated in Open Problem (OP) (without this fully supported additional control). Moreover, the H -1 (Ω) estimate seems to indicate that the role of the inner control is to act on the boundary layer directly where it is located, which is somehow in contrast with the goal of achieving controllability with controls supported on only part of the boundary.
All the examples detailed above tend to indicate that a new method is needed, which fully takes into account the boundary layer in the control design strategy.
The "well-prepared dissipation" method
In [START_REF] Marbach | Small time global null controllability for a viscous Burgers' equation despite the presence of a boundary layer[END_REF], the second author proves small-time global exact null controllability for the Burgers equation on the line segment [0, 1] with a Dirichlet boundary condition at x = 1 (implying the presence of a boundary layer near the uncontrolled boundary x = 1). The proof relies on a method involving a well-prepared dissipation of the boundary layer. The sketch of the method is the following:
1. Scaling argument. Let T > 0 be the small time given for the control problem. Introduce ε ≪ 1 a very small scale. Perform the usual small-time to small-viscosity fluid scaling u ε (t, x) := εu(εt, x), yielding a new unknown u ε , defined on a large time scale [0, T /ε], satisfying a vanishing viscosity equation. Split this large time interval in two parts: [0, T ] and [T, T /ε].
2. Inviscid stage. During [0, T ], use (up to the first order) the same controls as if the system was inviscid. This leads to good interior controllability (far from the boundaries, the system already behaves like its inviscid limit) but creates a boundary layer residue near uncontrolled boundaries.
3. Dissipation stage. During the long segment [T, T /ε], choose null controls and let the system dissipate the boundary layer by itself thanks to its smoothing term. As ε → 0, the long time scale compensates exactly for the small viscosity. However, as ε → 0, the boundary layer gets thinner and dissipates better.
The key point in this method is to separate steps 2 and 3. Trying to control both the inviscid dynamic and the boundary layer at the end of step 2 is too hard. Instead, one chooses the inviscid controls with care during step 2 in order to prepare the self-dissipation of the boundary layer during step 3. This method will be used in this paper and enhanced to prove our result. In order to apply this method, we will need a very precise description of the boundary layers involved.
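For the reader's convenience, the scaling invoked in step 1 above can be spelled out explicitly (a standard computation; the pressure rescaling is the natural one):

```latex
% Step 1 made explicit: small time <-> small viscosity.
% If (u, p) solves the unit-viscosity system on [0, T], set, for 0 < \varepsilon \ll 1,
\[
  u^\varepsilon(t,x) := \varepsilon\, u(\varepsilon t, x), \qquad
  p^\varepsilon(t,x) := \varepsilon^2\, p(\varepsilon t, x), \qquad
  t \in [0, T/\varepsilon].
\]
% Then
\[
  \partial_t u^\varepsilon + (u^\varepsilon \cdot \nabla) u^\varepsilon
  - \varepsilon \Delta u^\varepsilon + \nabla p^\varepsilon
  = \varepsilon^2 \big[ \partial_t u + (u \cdot \nabla) u - \Delta u + \nabla p \big](\varepsilon t, x)
  = 0,
\]
% so u^\varepsilon solves a Navier-Stokes system with vanishing viscosity \varepsilon on the long
% time interval [0, T/\varepsilon], with the same divergence-free and boundary conditions
% (the boundary operators are linear in u, so they are preserved by the rescaling).
```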
Boundary conditions and boundary layers for Navier-Stokes
Physically, boundary layers are the fluid layers in the immediate vicinity of the boundaries of a domain, where viscous effects prevail. Mathematically, they appear when studying vanishing viscosity limits while maintaining strong boundary conditions. There is a huge literature about boundary conditions for partial differential equations and the associated boundary layers. In this paragraph, we give a short overview of some relevant references in our context for the Navier-Stokes equation.
Adherence boundary condition
The strongest and most commonly used boundary condition for Navier-Stokes is the full adherence (or no-slip) boundary condition u = 0. This condition is most often referred to as the Dirichlet condition although it was introduced by Stokes in [START_REF] Gabriel | On the effect of the internal friction of fluids on the motion of pendulums[END_REF]. Under this condition, fluid particles must remain at rest near the boundary. This generates large amplitude boundary layers. In 1904, Prandtl proposed an equation describing the behavior of boundary layers for this adherence condition in [START_REF] Prandtl | Uber flussigkeits bewegung bei sehr kleiner reibung[END_REF]. Heuristically, these boundary layers are of amplitude O(1) and of thickness O( √ ν) for a vanishing viscosity ν. Although his equation has been extensively studied, much is still to be learned. Both physically and numerically, there exists situations where the boundary layer separates from the border: see [START_REF] Cowley | Computer extension and analytic continuation of Blasius' expansion for impulsive flow past a circular cylinder[END_REF], [START_REF] Guyon | Hydrodynamique physique[END_REF], [START_REF] Van Dommelen | On the Lagrangian description of unsteady boundary-layer separation. I. General theory[END_REF], or [START_REF] Van Dommelen | The spontaneous generation of the singularity in a separating laminar boundary layer[END_REF]. Mathematically, it is known that solutions with singularities can be built [START_REF] Weinan | Blowup of solutions of the unsteady Prandtl's equation[END_REF] and that the linearized system is ill-posed in Sobolev spaces [START_REF] Gérard | On the ill-posedness of the Prandtl equation[END_REF]. The equation has also been proved to be ill-posed in a non-linear context in [START_REF] Guo | A note on Prandtl boundary layers[END_REF]. Moreover, even around explicit shear flow solutions of the Prandtl equation, the equation for the remainder between Navier-Stokes and Euler+Prandtl is also ill-posed (see [START_REF] Grenier | Boundary layers[END_REF] and [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF]).
Most positive known results fall into two families. First, when the initial data satisfies a monotonicity assumption, introduced by Oleinik in [START_REF] Oleȋnik | On the mathematical theory of boundary layer for an unsteady flow of incompressible fluid[END_REF], [START_REF] Oleȋnik | Mathematical models in boundary layer theory[END_REF]. See also [START_REF] Alexandre | Well-posedness of the Prandtl equation in Sobolev spaces[END_REF], [START_REF] Gérard-Varet | Gevrey Stability of Prandtl Expansions for 2D Navier-Stokes[END_REF], [START_REF] Masmoudi | Local-in-time existence and uniqueness of solutions to the Prandtl equations by energy methods[END_REF] and [START_REF] Xin | On the global existence of solutions to the Prandtl's system[END_REF] for different proof techniques in this context. Second, when the initial data are analytic, it is both proved that the Prandtl equations are well-posed [START_REF] Sammartino | Zero viscosity limit for analytic solutions, of the Navier-Stokes equation on a half-space. I. Existence for Euler and Prandtl equations[END_REF] and that Navier-Stokes converges to an Euler+Prandtl expansion [START_REF] Sammartino | Zero viscosity limit for analytic solutions of the Navier-Stokes equation on a half-space. II. Construction of the Navier-Stokes solution[END_REF]. For historical reviews of known results, see [START_REF] Weinan | Boundary layer theory and the zero-viscosity limit of the Navier-Stokes equation[END_REF] or [START_REF] Nickel | Prandtl's boundary-layer theory from the viewpoint of a mathematician[END_REF]. We also refer to [START_REF] Maekawa | The Inviscid Limit and Boundary Layers for Navier-Stokes Flows[END_REF] for a comprehensive recent survey.
Physically, the main difficulty is the possibility that the boundary layer separates and penetrates into the interior of the domain (which is prevented by the Oleinik monotonicity assumption). Mathematically, the Prandtl equations lack regularization in the tangential direction, thus exhibiting a loss of derivative (which can be circumvented within an analytic setting).
Friction boundary conditions
Historically speaking, the adherence condition is posterior to another condition stated by Navier in [START_REF] Navier | Mémoire sur les lois du mouvement des fluides[END_REF] which involves friction. The fluid is allowed to slip along the boundary but undergoes friction near the impermeable walls. Originally, it was stated as:
u • n = 0 and [D(u)n + αu]_tan = 0, (6)
where α is a scalar positive coefficient. Mathematically, α can depend (smoothly) on the position and be a matrix without changing much the nature of the estimates. This condition has been justified from the boundary condition at the microscopic scale in [START_REF] Coron | Derivation of slip boundary conditions for the Navier-Stokes system from the Boltzmann equation[END_REF] for the Boltzmann equation. See also [START_REF] Golse | From the Boltzmann equation to the Euler equations in the presence of boundaries[END_REF] or [START_REF] Masmoudi | From the Boltzmann equation to the Stokes-Fourier system in a bounded domain[END_REF] for other examples of such derivations.
Although the adherence condition is more popular in the mathematical community, the slip-with-friction condition is actually well suited for a large range of applications. For instance, it is an appropriate model for turbulence near rough walls [START_REF] Edward | Lectures in mathematical models of turbulence[END_REF] or in acoustics [START_REF] Geymonat | On the vanishing viscosity limit for acoustic phenomena in a bounded region[END_REF]. It is used by physicists for flat boundaries but also for curved domains (see [START_REF] Einzel | Boundary condition for fluid flow: curved or rough surfaces[END_REF], [START_REF] Guo | Slip boundary conditions over curved surfaces[END_REF] or [START_REF] Panzer | The effects of boundary curvature on hydrodynamic fluid flow: calculation of slip lengths[END_REF]). Physically, α has the dimension of 1/b, where b is a length called the slip length. Computing this parameter for different situations, both theoretically and experimentally, is important for nanofluidics and polymer flows (see [START_REF] Barrat | Large slip effect at a nonwetting fluid-solid interface[END_REF] or [START_REF] Bocquet | Flow boundary conditions from nano-to micro-scales[END_REF]).
Mathematically, the convergence of the Navier-Stokes equation under the Navier slip-with-friction condition to the Euler equation has been studied by many authors. For 2D, this subject is studied in [START_REF] Thierry Clopeau | On the vanishing viscosity limit for the 2D incompressible Navier-Stokes equations with the friction type boundary conditions[END_REF] and [START_REF] Kelliher | Navier-Stokes equations with Navier boundary conditions for a bounded domain in the plane[END_REF]. For 3D, this subject is treated in [START_REF] Gung | Boundary layer analysis of the Navier-Stokes equations with generalized Navier boundary conditions[END_REF] and [START_REF] Masmoudi | Uniform regularity for the Navier-Stokes equation with Navier boundary condition[END_REF]. To obtain more precise convergence results, it is necessary to introduce an asymptotic expansion of the solution u ε to the vanishing viscosity Navier-Stokes equation involving a boundary layer term. In [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], Iftimie and the third author prove a boundary layer expansion. This expansion is easier to handle than the Prandtl model because the main equation for the boundary layer correction is both linear and well-posed in Sobolev spaces. Heuristically, these boundary layers are of amplitude O( √ ν) and of thickness O( √ ν) for a vanishing viscosity ν.
Slip boundary conditions
When the physical friction between the inner fluid and the solid boundary is very small, one may want to study an asymptotic model describing a situation where the fluid perfectly slips along the boundary. Sadly, the perfect slip situation is not yet fully understood in the mathematical literature.
2D. In the plane, the situation is easier. In 1969, Lions introduced in [START_REF] Lions | Quelques méthodes de résolution des problèmes aux limites non linéaires[END_REF] the free boundary condition ω = 0. This condition is actually a special case of (6) where α depends on the position and α(x) = 2κ(x), where κ(x) is the curvature of the boundary at x ∈ ∂Ω. With this condition, good convergence results can be obtained from Navier-Stokes to Euler for vanishing viscosities.
3D. In the space, for flat boundaries, slipping is easily modeled with the usual impermeability condition u • n = 0 supplemented by any of the following equivalent conditions:
∂ n [u] tan = 0, (7)
[D(u)n] tan = 0, (8)
[∇ × u] tan = 0. (9)
For general non-flat boundaries, these conditions cease to be equivalent. This situation gives rise to some confusion in the literature about which condition correctly describes a true slip condition. Formally, condition (8) can be seen as the limit when α → 0 of the usual Navier slip-with-scalar-friction condition (6). As for condition (9), it can be seen as the natural extension in 3D of the 2D Lions free boundary condition. Let x ∈ ∂Ω. We note T_x the tangent space to ∂Ω at x. The Weingarten map (or shape operator) M_w(x) at x is defined as a linear map from T_x into itself such that M_w(x)τ := ∇_τ n for any τ in T_x. The image of M_w(x) is contained in T_x. Indeed, since |n|^2 = 1 in a neighborhood of ∂Ω, 0 = ∇_τ(|n|^2) = 2n • ∇_τ n = 2n • M_w τ for any τ.
Lemma 1 ([5], [START_REF] Gung | Boundary layer analysis of the Navier-Stokes equations with generalized Navier boundary conditions[END_REF]). If Ω is smooth, the shape operator M_w is smooth. For any x ∈ ∂Ω it defines a self-adjoint operator with values in T_x. Moreover, for any divergence free vector field u satisfying u • n = 0 on ∂Ω, we have:
[D(u)n + M_w u]_tan = (1/2) (∇ × u) × n. (10)
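As a simple illustration (not needed in the sequel), consider the case where Ω is a ball of radius R. On ∂Ω, the outward normal is n(x) = x/R and, for any tangent vector τ, ∇_τ n = τ/R, so that M_w(x) = R^{-1} Id on T_x. In this case, (10) reads:
[D(u)n + R^{-1} u]_tan = (1/2) (∇ × u) × n.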
Even though it is a little unusual, it seems that condition (9) actually better describes the situation of a fluid slipping along the boundary. The convergence of the Navier-Stokes equation to the Euler equation under this condition has been extensively studied. In particular, let us mention the works by Beirao da Veiga, Crispo et al. (see [START_REF] Veiga | On the sharp vanishing viscosity limit of viscous incompressible fluid flows[END_REF], [START_REF] Beirão | Sharp inviscid limit results under Navier type boundary conditions. An L p theory[END_REF], [START_REF] Beirão | Concerning the W k,p -inviscid limit for 3-D flows under a slip boundary condition[END_REF], [START_REF] Beirão | The 3-D inviscid limit result under slip boundary conditions. A negative answer[END_REF], [START_REF] Beirão | A missed persistence property for the Euler equations and its effect on inviscid limits[END_REF], [START_REF] Veiga | Reducing slip boundary value problems from the half to the whole space. Applications to inviscid limits and to non-Newtonian fluids[END_REF] and [START_REF] Crispo | On the zero-viscosity limit for 3D Navier-Stokes equations under slip boundary conditions[END_REF]), by Berselli et al. (see [START_REF] Carlo | Some results on the Navier-Stokes equations with Navier boundary conditions[END_REF], [START_REF] Berselli | On the vanishing viscosity limit of 3D Navier-Stokes equations under slip boundary conditions in general domains[END_REF]) and by Xiao, Xin et al. (see [START_REF] Wang | Vanishing viscous limits for 3D Navier-Stokes equations with a Navier-slip boundary condition[END_REF], [START_REF] Wang | Boundary layers in incompressible Navier-Stokes equations with Navier boundary conditions for the vanishing viscosity limit[END_REF], [START_REF] Xiao | On the vanishing viscosity limit for the 3D Navier-Stokes equations with a slip boundary condition[END_REF], [START_REF] Xiao | Remarks on vanishing viscosity limits for the 3D Navier-Stokes equations with a slip boundary condition[END_REF] and [START_REF] Xiao | On the inviscid limit of the 3D Navier-Stokes equations with generalized Navier-slip boundary conditions[END_REF]).
The difficulty comes from the fact that the Euler equation (which models the behavior of a perfect fluid, not subject to friction) is only associated with the u • n = 0 boundary condition for an impermeable wall. Any other supplementary condition will be violated for some initial data. Indeed, as shown in [START_REF] Beirão | A missed persistence property for the Euler equations and its effect on inviscid limits[END_REF], even the persistence property is false for condition (9) for the Euler equation: choosing initial data such that (9) is satisfied does not guarantee that it will be satisfied at time t > 0.
Plan of the paper
The paper is organized as follows:
• In Section 2, we consider the special case of the slip boundary condition (9). This case is easier to handle because no boundary layer appears. We prove Theorem 1 in this simpler setting in order to explain some elements of our method.
• In Section 3, we introduce the boundary layer expansion that we will be using to handle the general case and we prove that we can apply the well-prepared dissipation method to ensure that the residual boundary layer is small at the final time.
• In Section 4, we introduce technical terms in the asymptotic expansion of the solution and we use them to carry out energy estimates on the remainder. We prove Theorem 1 in the general case.
• In Section 5 we explain how the well-prepared dissipation method detailed in the case of null controllability can be adapted to prove small-time global exact controllability to the trajectories.
A special case with no boundary layer: the slip condition
In this section, we consider the special case where the friction coefficient M is the shape operator M w . On the uncontrolled boundary, thanks to Lemma 1, the flow satisfies:
u • n = 0 and [∇ × u] tan = 0. (11)
In this setting, we can build an Euler trajectory satisfying this overdetermined boundary condition. The Euler trajectory by itself is thus an excellent approximation of the Navier-Stokes trajectory, up to the boundary. This allows us to present some elements of our method in a simple setting before moving on to the general case which involves boundary layers.
As in [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF], our strategy is to deduce the controllability of the Navier-Stokes equation in small time from the controllability of the Euler equation. In order to use this strategy, we are willing to trade small time against small viscosity using the usual fluid dynamics scaling. Even in this easier context, Theorem 1 is new for multiply connected 2D domains and for all 3D domains since [START_REF] Coron | On the controllability of the 2-D incompressible Navier-Stokes equations with the Navier slip boundary conditions[END_REF] only concerns simply connected 2D domains. This condition was also studied in [START_REF] Chapouly | On the global null controllability of a Navier-Stokes system with Navier slip boundary conditions[END_REF] in the particular setting of a rectangular domain.
Domain extension and weak controlled trajectories
We start by introducing a smooth extension O of our initial domain Ω. We choose this extended domain in such a way that Γ ⊂ O and ∂Ω \ Γ ⊂ ∂O (see Figure 2.1 for a simple case). This extension procedure can be justified by standard arguments. Indeed, we already assumed that Ω is a smooth domain and, up to reducing the size of Γ, we can assume that its intersection with each connected component of ∂Ω is smooth. From now on, n will denote the outward pointing normal to the extended domain O (which coincides with the outward pointing normal to Ω on the uncontrolled boundary ∂Ω\ Γ). We will also need to introduce a smooth function ϕ : R d → R such that ϕ = 0 on ∂O, ϕ > 0 in O and ϕ < 0 outside of Ō.
Moreover, we assume that |ϕ(x)| = dist(x, ∂O) in a small neighborhood of ∂O. Hence, the normal n can be computed as -∇ϕ close to the boundary and extended smoothly within the full domain O. In the sequel, we will refer to Ω as the physical domain where we try to build a controlled trajectory of (1). Things happening within O \ Ω are technicalities corresponding to the choice of the controls and we advise the reader to focus on the true physical phenomena happening inside Ω.
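For instance, if O were a ball of radius R centered at the origin, one could take ϕ(x) = R - |x| near ∂O, so that -∇ϕ(x) = x/|x| coincides with the outward unit normal and |ϕ(x)| = dist(x, ∂O) there; any smooth extension of ϕ which is positive inside O and negative outside then fits the requirements above.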
Figure 2: Extension of the physical domain Ω ⊂ O (the conditions u • n = 0 and [D(u)n + M u]_tan = 0 hold on the uncontrolled boundary ∂Ω \ Γ, while the control acts through Γ).
Definition 1. Let T > 0 and u_* ∈ L^2_γ(Ω). Let u ∈ C^0_w([0, T]; L^2_γ(Ω)) ∩ L^2((0, T); H^1(Ω)). We will say that u is a weak controlled trajectory of system (1) with initial condition u_* when u is the restriction to the physical domain Ω of a weak Leray solution in the space C^0_w([0, T]; L^2(O)) ∩ L^2((0, T); H^1(O)) on the extended domain O, which we still denote by u, to:
∂ t u + (u • ∇)u -∆u + ∇p = ξ in O, div u = σ in O, u • n = 0 on ∂O, N (u) = 0 on ∂O, u(0, •) = u * in O, (12)
where ξ ∈ H^1((0, T), L^2(O)) ∩ C^0([0, T], H^1(O)) is a forcing term supported in Ō \ Ω, σ is a smooth non homogeneous divergence condition also supported in Ō \ Ω and u_* has been extended to O such that the extension is tangent to ∂O and satisfies the compatibility condition div u_* = σ(0, •).
Allowing a non vanishing divergence outside of the physical domain is necessary both for the control design process and because we did not restrict ourselves to controlling initial data satisfying u * • n = 0 on Γ. Defining weak Leray solutions to (12) is a difficult question when one tries to obtain optimal functional spaces for the non homogeneous source terms. For details on this subject, we refer the reader to [START_REF] Farwig | A new class of weak solutions of the Navier-Stokes equations with nonhomogeneous data[END_REF], [START_REF] Farwig | Global weak solutions of the Navier-Stokes equations with nonhomogeneous boundary data and divergence[END_REF] or [START_REF] Raymond | Stokes and Navier-Stokes equations with a nonhomogeneous divergence condition[END_REF]. In our case, since the divergence source term is smooth, an efficient method is to start by solving a (stationary or evolution) Stokes problem in order to lift the non homogeneous divergence condition. We define u σ as the solution to:
∂ t u σ -∆u σ + ∇p σ = 0 in O, div u σ = σ in O, u σ • n = 0 on ∂O, N (u σ ) = 0 on ∂O, u σ (0, •) = u * in O. (13)
Smoothness (in time and space) of σ immediately gives smoothness on u σ . These are standard maximal regularity estimates for the Stokes problem in the case of the Dirichlet boundary condition. For Navier boundary conditions (sometimes referred to as Robin boundary conditions for the Stokes problem), we refer to [START_REF] Shibata | On a generalized resolvent estimate for the Stokes system with Robin boundary condition[END_REF], [START_REF] Shibata | On the Stokes equation with Robin boundary condition[END_REF] or [START_REF] Shimada | On the L p -L q maximal regularity for Stokes equations with Robin boundary condition in a bounded domain[END_REF]. Decomposing u = u σ + u h , we obtain the following system for u h :
∂ t u h + (u σ • ∇)u h + (u h • ∇)u σ + (u h • ∇)u h -∆u h + ∇p h = ξ -(u σ • ∇)u σ in O, div u h = 0 in O, u h • n = 0 on ∂O, N (u h ) = 0 on ∂O, u h (0, •) = 0 in O. (14)
Defining weak Leray solutions to (14) is a standard procedure. They are defined as measurable functions satisfying the variational formulation of (14) and some appropriate energy inequality. For in-depth insights on this topic, we refer the reader to the classical references by Temam [START_REF] Temam | Theory and numerical analysis[END_REF] or Galdi [START_REF] Galdi | An introduction to the Navier-Stokes initial-boundary value problem[END_REF]. In our case, let L^2_div(O) denote the closure in L^2(O) of the space of smooth divergence free vector fields tangent to ∂O. We will say that u_h ∈ C^0_w([0, T]; L^2_div(O)) ∩ L^2((0, T); H^1(O)) is a weak Leray solution to (14) if it satisfies the variational formulation:
- ∫_O u_h ∂_t φ + ∫_O ((u_σ • ∇)u_h + (u_h • ∇)u_σ + (u_h • ∇)u_h) φ + 2 ∫_O D(u_h) : D(φ) + 2 ∫_∂O [M u_h]_tan φ = ∫_O (ξ - (u_σ • ∇)u_σ) φ, (15)
for any φ ∈ C ∞ c ([0, T ), Ō) which is divergence free and tangent to ∂O. We moreover require that they satisfy the so-called strong energy inequality for almost every τ < t:
|u_h(t)|^2_{L^2} + 4 ∫_{(τ,t)×O} |D(u_h)|^2 ≤ |u_h(τ)|^2_{L^2} - 4 ∫_{(τ,t)×∂O} [M u_h]_tan u_h + ∫_{(τ,t)×O} σ u_h^2 + 2(u_h • ∇)u_σ u_h + 2(ξ - (u_σ • ∇)u_σ) u_h. (16)
In (16), the boundary term is well defined. Indeed, from the Galerkin method, we can obtain strong convergence of Galerkin approximations u^n_h towards u_h in L^2((0, T); L^2(∂O)) (see [56, page 155]). Although uniqueness of weak Leray solutions is still an open question, it is easy to adapt the classical Leray-Hopf theory proving global existence of weak solutions to the case of Navier boundary conditions (see [START_REF] Thierry Clopeau | On the vanishing viscosity limit for the 2D incompressible Navier-Stokes equations with the friction type boundary conditions[END_REF] for 2D or [START_REF] Iftimie | Inviscid limits for the Navier-Stokes equations with Navier friction boundary conditions[END_REF] for 3D). Once the forcing terms ξ and σ are fixed, there thus exists at least one weak Leray solution u to (12).
In the sequel, we will mostly work within the extended domain. Our goal will be to explain how we choose the external forcing terms ξ and σ in order to guarantee that the associated controlled trajectory vanishes within the physical domain at the final time.
Time scaling and small viscosity asymptotic expansion
The global controllability time T is small but fixed. Let us introduce a positive parameter ε ≪ 1. We will be even more ambitious and try to control the system during the shorter time interval [0, εT ]. We perform the scaling: u ε (t, x) := εu(εt, x) and p ε (t, x) := ε 2 p(εt, x). Similarly, we set ξ ε (t, x) := ε 2 ξ(εt, x) and σ ε (t, x) := εσ(εt, x). Now, (u ε , p ε ) is a solution to the following system for t ∈ (0, T ):
∂ t u ε + (u ε • ∇) u ε -ε∆u ε + ∇p ε = ξ ε in (0, T ) × O, div u ε = σ ε in (0, T ) × O, u ε • n = 0 on (0, T ) × ∂O, [∇ × u ε ] tan = 0 on (0, T ) × ∂O, u ε | t=0 = εu * in O. (17)
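As a quick consistency check of this scaling (not part of the argument itself), each term of the equation satisfied by u transforms with the same factor ε^2: writing s = εt,
∂_t u^ε(t, x) = ε^2 (∂_s u)(εt, x), (u^ε • ∇)u^ε(t, x) = ε^2 ((u • ∇)u)(εt, x), ε ∆u^ε(t, x) = ε^2 (∆u)(εt, x), ∇p^ε(t, x) = ε^2 (∇p)(εt, x),
while div u^ε(t, x) = ε (div u)(εt, x) = σ^ε(t, x), which is why ξ is rescaled by ε^2 and σ by ε.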
Due to the scaling chosen, we plan to prove that we can obtain |u^ε(T, •)|_{L^2(O)} = o(ε) in order to conclude with a local result. Since ε is small, we expect u^ε to converge to the solution of the Euler equation. Hence, we introduce the following asymptotic expansion:
u ε = u 0 + εu 1 + εr ε , (18)
p ε = p 0 + εp 1 + επ ε , (19)
ξ ε = ξ 0 + εξ 1 , (20)
σ ε = σ 0 . (21)
Let us provide some insight behind expansion (18)-(21). The first term (u 0 , p 0 , ξ 0 , σ 0 ) is the solution to a controlled Euler equation. It models a smooth reference trajectory around which we are linearizing the Navier-Stokes equation. This trajectory will be chosen in such a way that it flushes the initial data out of the domain in time T. The second term (u 1 , p 1 , ξ 1 ) takes into account the initial data u * , which will be flushed out of the physical domain by the flow u 0 . Eventually, (r ε , π ε ) contains higher order residues. We need to prove |r^ε(T, •)|_{L^2(O)} = o(1) in order to be able to conclude the proof of Theorem 1.
A return method trajectory for the Euler equation
At order O(1), the first part (u 0 , p 0 ) of our expansion is a solution to the Euler equation. Hence, the pair (u 0 , p 0 ) is a return-method-like trajectory of the Euler equation on (0, T ):
∂ t u 0 + u 0 • ∇ u 0 + ∇p 0 = ξ 0 in (0, T ) × O, div u 0 = σ 0 in (0, T ) × O, u 0 • n = 0 on (0, T ) × ∂O, u 0 (0, •) = 0 in O, u 0 (T, •) = 0 in O, (22)
where ξ 0 and σ 0 are smooth forcing terms supported in Ō \ Ω. We want to use this reference trajectory to flush any particle outside of the physical domain within the fixed time interval [0, T ]. Let us introduce the flow Φ 0 associated with u 0 :
Φ 0 (t, t, x) = x, ∂ s Φ 0 (t, s, x) = u 0 (s, Φ 0 (t, s, x)). (23)
Hence, we look for trajectories satisfying:
∀x ∈ Ō, ∃t x ∈ (0, T ), Φ 0 (0, t x , x) / ∈ Ω. (24)
We do not require that the time t x be the same for all x ∈ O. Indeed, it might not be possible to flush all of the points outside of the physical domain at the same time. Property ( 24) is obvious for points x already located in Ō \ Ω. For points lying within the physical domain, we use: [START_REF] Dardé | On the reachable set for the one-dimensional heat equation[END_REF] such that the flow Φ 0 defined in (23) satisfies [START_REF] Duoandikoetxea | Moments, masses de Dirac et décomposition de fonctions[END_REF]. Moreover, u 0 can be chosen such that:
Lemma 2. There exists a solution (u 0 , p 0 , ξ 0 , σ 0 ) ∈ C^∞([0, T] × Ō, R^d × R × R^d × R) to system (22) such that the flow Φ 0 defined in (23) satisfies (24). Moreover, u 0 can be chosen such that:
∇ × u 0 = 0 in [0, T ] × Ō. (25)
Moreover, (u 0 , p 0 , ξ 0 , σ 0 ) are compactly supported in (0, T ). In the sequel, when we need it, we will implicitly extend them by zero after T .
This lemma is the key argument of multiple papers concerning the small-time global exact controllability of Euler equations. We refer to the following references for detailed statements and construction of these reference trajectories. First, the first author used it in [START_REF] Coron | Contrôlabilité exacte frontière de l'équation d'Euler des fluides parfaits incompressibles bidimensionnels[END_REF] for 2D simply connected domains, then in [START_REF] Coron | On the controllability of 2-D incompressible perfect fluids[END_REF] for general 2D domains when Γ intersects all connected components of ∂Ω. Glass adapted the argument for 3D domains (when Γ intersects all connected components of the boundary), for simply connected domains in [START_REF] Glass | Contrôlabilité exacte frontière de l'équation d'Euler des fluides parfaits incompressibles en dimension 3[END_REF] then for general domains in [START_REF] Glass | Exact boundary controllability of 3-D Euler equation[END_REF]. He also used similar arguments to study the obstructions to approximate controllability in 2D when Γ does not intersect all connected components of the boundary for general 2D domains in [START_REF] Glass | An addendum to a J. M. Coron theorem concerning the controllability of the Euler system for 2D incompressible inviscid fluids[END_REF]. Here, we use the assumption that our control domain Γ intersects all connected parts of the boundary ∂Ω. The fact that condition (25) can be achieved is a direct consequence of the construction of the reference profile u 0 as a potential flow: u 0 (t, x) = ∇θ 0 (t, x), where θ 0 is smooth.
Convective term and flushing of the initial data
We move on to order O(ε). Here, the initial data u * comes into play. We build u 1 as the solution to:
∂ t u 1 + u 0 • ∇ u 1 + u 1 • ∇ u 0 + ∇p 1 = ∆u 0 + ξ 1 in (0, T ) × O, div u 1 = 0 in (0, T ) × O, u 1 • n = 0 on (0, T ) × ∂O, u 1 (0, •) = u * in O, (26)
where ξ 1 is a forcing term supported in Ō \ Ω. Formally, equation (26) also takes into account a residual term ∆u 0 . Thanks to (25) and to the identity ∆u 0 = ∇(div u 0 ) - ∇ × (∇ × u 0 ), we have ∆u 0 = ∇(div u 0 ) = ∇σ 0 . It is thus smooth, supported in Ō \ Ω and can be canceled by incorporating it into ξ 1 . The following lemma is natural thanks to the choice of a good flushing trajectory u 0 :
Lemma 3. Let u * ∈ H 3 (O) ∩ L 2 div (O). There exists a force ξ 1 ∈ C 1 ([0, T ], H 1 (O)) ∩ C 0 ([0, T ], H 2 (O)) such that the associated solution u 1 to system (26) satisfies u 1 (T, •) = 0. Moreover, u 1 is bounded in L ∞ ((0, T ), H 3 (O)).
In the sequel, it is implicit that we extend (u 1 , p 1 , ξ 1 ) by zero after T . This lemma is mostly a consequence of the works on the Euler equation, already mentioned in the previous paragraph, due to the first author in 2D, then to Glass in 3D. However, in these original works, the regularity obtained for the constructed trajectory would not be sufficient in our context. Thus, we provide in Appendix A a constructive proof which enables us to obtain the regularity for ξ 1 and u 1 stated in Lemma 3. We only give here a short overview of the main idea of the proof. The interested reader can also start with the nice introduction given by Glass in [START_REF] Glass | Contrôlabilité de l'équation d'Euler tridimensionnelle pour les fluides parfaits incompressibles[END_REF].
The intuition behind the possibility to control u 1 is to introduce ω 1 := ∇ × u 1 and to write (26) in vorticity form, within the physical domain Ω:
∂ t ω 1 + u 0 • ∇ ω 1 -ω 1 • ∇ u 0 = 0 in (0, T ) × Ω, ω 1 (0, •) = ∇ × u * in Ω. (27)
The term (ω 1 • ∇) u 0 is specific to the 3D setting and does not appear in 2D (where the vorticity is merely transported). Nevertheless, even in 3D, the support of the vorticity is transported by u 0 . Thus, thanks to hypothesis (24), ω 1 will vanish inside Ω at time T provided that we choose null boundary conditions for ω 1 on the controlled boundary Γ when the characteristics enter the physical domain. Hence, we can build a trajectory such that ω 1 (T, •) = 0 inside Ω. Combined with the divergence free condition and null boundary data, this yields that u 1 (T, •) = 0 inside Ω, at least for simple geometries.
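To make the transport structure explicit, one can differentiate along the characteristics of u 0 : using (23) and (27),
(d/dt) [ω^1(t, Φ^0(0, t, x))] = (∂_t ω^1 + (u^0 • ∇)ω^1)(t, Φ^0(0, t, x)) = ((ω^1 • ∇) u^0)(t, Φ^0(0, t, x)),
so that, in 2D, ω^1 is constant along the characteristics, while in 3D its support (though not its value) is still transported by Φ^0, which is what is used above.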
Energy estimates for the remainder
In this paragraph, we study the remainder defined in expansion [START_REF] Coron | On the controllability of 2-D incompressible perfect fluids[END_REF]. We write the equation for the remainder in the extended domain O:
∂ t r ε + (u ε • ∇) r ε -ε∆r ε + ∇π ε = f ε -A ε r ε , in (0, T ) × O, div r ε = 0 in (0, T ) × O, [∇ × r ε ] tan = -∇ × u 1 tan on (0, T ) × ∂O, r ε • n = 0 on (0, T ) × ∂O, r ε (0, •) = 0 in O, (28)
where we used the notations:
A ε r ε := (r ε • ∇)(u 0 + εu 1 ), (29)
f ε := ε∆u 1 -ε(u 1 • ∇)u 1 . (30)
We want to establish a standard L ∞ (L 2 ) ∩ L 2 (H 1 ) energy estimate for the remainder. As usual, formally, we multiply equation ( 28) by r ε and integrate by parts. Since we are considering weak solutions, some integration by parts may not be justified because we do not have enough regularity to give them a meaning. However, the usual technique applies: one can recover the estimates obtained formally from the variational formulation of the problem, the energy equality for the first terms of the expansion and the energy inequality of the definition of weak solutions (see [56, page 168] for an example of such an argument). We proceed term by term:
∫_O ∂_t r^ε • r^ε = (1/2) (d/dt) ∫_O |r^ε|^2 , (31)
∫_O (u^ε • ∇) r^ε • r^ε = -(1/2) ∫_O (div u^ε) |r^ε|^2 , (32)
-ε ∫_O ∆r^ε • r^ε = ε ∫_O |∇ × r^ε|^2 - ε ∫_∂O (r^ε × (∇ × r^ε)) • n, (33)
∫_O ∇π^ε • r^ε = 0. (34)
In (32), we will use the fact that div u ε = div u 0 = σ 0 is known and bounded independently of r ε . In (33), we use the boundary condition on r ε to estimate the boundary term:
∫_∂O (r^ε × (∇ × r^ε)) • n = ∫_∂O (r^ε × (∇ × u^1)) • n = ∫_O div(r^ε × ω^1) = ∫_O (∇ × r^ε) • ω^1 - r^ε • (∇ × ω^1) ≤ (1/2) ∫_O |∇ × r^ε|^2 + (1/2) ∫_O |ω^1|^2 + (1/2) ∫_O |r^ε|^2 + (1/2) ∫_O |∇ × ω^1|^2 . (35)
We split the forcing term estimate as:
∫_O f^ε • r^ε ≤ (1/2) |f^ε|_2 (1 + |r^ε|_2^2). (36)
Combining estimates (31)-(34), (35) and (36) yields:
(d/dt) |r^ε|_2^2 + ε |∇ × r^ε|_2^2 ≤ 2ε |u^1|^2_{H^2} + |f^ε|_2 + (ε + |σ^0|_∞ + 2 |A^ε|_∞ + |f^ε|_2) |r^ε|_2^2 . (37)
Applying Grönwall's inequality by integrating over (0, T ) and using the null initial condition gives:
|r ε | 2 L ∞ (L 2 ) + ε |∇ × r ε | 2 L 2 (L 2 ) = O(ε). (38)
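In more detail, a sketch of this Grönwall step: writing (37) as (d/dt)|r^ε|_2^2 ≤ a(t) + b(t) |r^ε|_2^2 with a := 2ε|u^1|^2_{H^2} + |f^ε|_2 and b := ε + |σ^0|_∞ + 2|A^ε|_∞ + |f^ε|_2, Grönwall's lemma and r^ε(0, •) = 0 give
|r^ε(t)|_2^2 ≤ ( ∫_0^T a ) exp( ∫_0^T b ).
Since u^1 is bounded in L^∞((0, T), H^3(O)) by Lemma 3, definition (30) gives |f^ε|_2 = O(ε) uniformly in time, so ∫_0^T a = O(ε), while ∫_0^T b is bounded uniformly in ε; integrating (37) once more then also bounds ε ∫_0^T |∇ × r^ε|_2^2 and yields (38).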
This paragraph proves that, once the source terms ξ ε and σ ε are fixed as above, any weak Leray solution to (17) is small at the final time. Indeed, thanks to Lemma 2 and Lemma 3, u 0 (T) = u 1 (T) = 0. At the final time, the expansion (18) and estimate (38) give:
|u ε (T, •)| L 2 (O) ≤ ε |r ε (T, •)| L 2 (O) = O(ε 3/2 ). (39)
Regularization and local arguments
In this paragraph, we explain how to chain our arguments in order to prove Theorem 1. We will need to use a local argument to finish bringing the velocity field exactly to the null equilibrium state (see paragraph 1.4.1 for references on null controllability of Navier-Stokes):
Lemma 4 ([START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]). Let T > 0. There exists δ_T > 0 such that, for any u_* ∈ H^3(O) which is divergence free, tangent to ∂O, satisfies the compatibility assumption N(u_*) = 0 on ∂O and is of size |u_*|_{H^3(O)} ≤ δ_T, there exists a control ξ ∈ H^1((0, T), L^2(O)) ∩ C^0([0, T], H^1(O)) supported outside of Ω such that the strong solution to (12) with σ = 0 satisfies u(T, •) = 0.
In this context of small initial data, the existence and uniqueness of a strong solution is proved in [START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]. We also use the following smoothing lemma for our Navier-Stokes system:
Lemma 5. Let T > 0. There exists a continuous function C_T with C_T(0) = 0, such that, if u_* ∈ L^2_div(O) and u ∈ C^0_w([0, T]; L^2_div(O)) ∩ L^2((0, T); H^1(O)) is a weak Leray solution to (12) with ξ = 0 and σ = 0, then:
∃ t_u ∈ [0, T], |u(t_u, •)|_{H^3(O)} ≤ C_T(|u_*|_{L^2(O)}). (40)
Proof. This result is proved by Temam in [84, Remark 3.2] in the harder case of the Dirichlet boundary condition. His method can be adapted to the Navier boundary condition and one could track the constants to make the shape of the function C_T explicit. For the sake of completeness, we provide a standalone proof in a slightly more general context (see Lemma 9, Section 5).
We can now explain how we combine these arguments to prove Theorem 1. Let T > 0 be the allowed control time and u * ∈ L 2 γ (Ω) the (potentially large) initial data to be controlled. The proof of Theorem 1 follows the following steps:
• We start by extending Ω into O as explained in paragraph 2.1. We also extend the initial data u * to all of O, still denoting it by u * . We choose an extension such that u * • n = 0 on ∂O and σ * := div u * is smooth (and supported in O \ Ω). We start with a short preparation phase where we let σ decrease from its initial value to zero, relying on the existence of a weak solution once a smooth σ profile is fixed, say σ(t, x) := β(t)σ * , where β smoothly decreases from 1 to 0. Then, once the data is divergence free, we use Lemma 5 to deduce the existence of a time T_1 ∈ (0, T/4) such that u(T_1, •) ∈ H^3(O). This is why we can assume that the new "initial" data has H^3 regularity and is divergence free. We can thus apply Lemma 3.
• Let T 2 := T /2. Starting from this new smoother initial data u(T 1 , •), we proceed with the small-time global approximate controllability method explained above on a time interval of size T 2 -T 1 ≥ T /4.
For any δ > 0, we know that we can build a trajectory starting from u(T_1, •) and such that u(T_2, •) is smaller than δ in L^2(O). In particular, it can be made small enough such that C_{T/4}(δ) ≤ δ_{T/4}, where C_{T/4} is given by Lemma 5 and δ_{T/4} by Lemma 4.
• Repeating the regularization argument of Lemma 5, we deduce the existence of a time T_3 ∈ (T_2, 3T/4) such that u(T_3, •) is smaller than δ_{T/4} in H^3(O).
• We use Lemma 4 on the time interval [T_3, T_3 + T/4] to reach exactly zero. Once the system is at rest, it stays there until the final time T. This concludes the proof of Theorem 1 in the case of the slip condition. For the general case, we will use the same proof skeleton, but we will need to control the boundary layers. In the following sections, we explain how we can obtain small-time global approximate null controllability in the general case.
Boundary layer expansion and dissipation
As in the previous section, the allotted physical control time T is fixed (and potentially small). We introduce an arbitrary mathematical time scale ε ≪ 1 and we perform the usual scaling u ε (t, x) := εu(εt, x) and p ε (t, x) := ε 2 p(εt, x). In this harder setting involving a boundary layer expansion, we do not try to achieve approximate controllability towards zero in the smaller physical time interval [0, εT ] like it was possible to do in the previous section. Instead, we will use the virtually long mathematical time interval to dissipate the boundary layer. Thus, we consider (u ε , p ε ) the solution to:
∂ t u ε + (u ε • ∇) u ε -ε∆u ε + ∇p ε = ξ ε in (0, T /ε) × O, div u ε = σ ε in (0, T /ε) × O, u ε • n = 0 on (0, T /ε) × ∂O, N (u ε ) = 0 on (0, T /ε) × ∂O, u ε | t=0 = εu * in O. (41)
Here again, we do not expect to reach exactly zero with this part of the strategy. However, we would like to build a sequence of solutions such that |u(T, •)|_{L^2(O)} = o(1). As in Section 2, this will allow us to apply a local result with a small initial data, a fixed time and a fixed viscosity. Due to the scaling chosen, this condition translates into proving that |u^ε(T/ε, •)|_{L^2(O)} = o(ε).
Following and enhancing the original boundary layer expansion for Navier slip-with-friction boundary conditions proved by Iftimie and the third author in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], we introduce the following expansion:
u^ε(t, x) = u^0(t, x) + √ε v(t, x, ϕ(x)/√ε) + εu^1(t, x) + . . . + εr^ε(t, x), (42)
p^ε(t, x) = p^0(t, x) + εp^1(t, x) + . . . + επ^ε(t, x). (43)
The forcing terms are expanded as:
ξ^ε(t, x) = ξ^0(t, x) + √ε ξ^v(t, x, ϕ(x)/√ε) + εξ^1(t, x), (44)
σ ε (t, x) = σ 0 (t, x). (45)
Compared with expansion (18), expansion (42) introduces a boundary correction v. Indeed, u 0 does not satisfy the Navier slip-with-friction boundary condition on ∂O. The purpose of the second term v is to recover this boundary condition by introducing the tangential boundary layer generated by u 0 . In equations (42) and (43), the missing terms are technical terms which will help us prove that the remainder is small. We give the details of this technical part in Section 4. We use the same profiles u 0 and u 1 as in the previous section (extended by zero after T). Hence, u ε ≈ √ε v after T and we must understand the behavior of this boundary layer residue that remains after the short inviscid control strategy.
Boundary layer profile equations
Since the Euler system is a first-order system, we have only been able to impose a single scalar boundary condition in (22) (namely, u 0 • n = 0 on ∂O). Hence, the full Navier slip-with-friction boundary condition is not satisfied by u 0 . Therefore, at order O(√ε), we introduce a tangential boundary layer correction v. This profile is expressed in terms both of the slow space variable x ∈ O and a fast scalar variable z = ϕ(x)/√ε. As in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], v is the solution to:
∂_t v + [(u^0 • ∇)v + (v • ∇)u^0]_tan + u^0_♭ z ∂_z v - ∂_zz v = ξ^v in R_+ × Ō × R_+, ∂_z v(t, x, 0) = g^0(t, x) in R_+ × Ō, v(0, x, z) = 0 in Ō × R_+, (46)
where we introduce the following definitions:
u^0_♭(t, x) := -(u^0(t, x) • n(x))/ϕ(x) in R_+ × O, (47)
g^0(t, x) := 2χ(x) N(u^0)(t, x) in R_+ × O. (48)
Unlike in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF], we introduced an inhomogeneous source term ξ v in (46). This corresponds to a smooth control term whose x-support is located within Ō \ Ω. Using the transport term, this outside control will enable us to modify the behavior of v inside the physical domain Ω. Let us state the following points about equations (46), (47) and (48):
• The boundary layer profile depends on d + 1 spatial variables (d slow variables x and one fast variable z) and is thus not set in curvilinear coordinates. This approach used in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] lightens the computations. It is implicit that n actually refers to the extension -∇ϕ of the normal (as explained in paragraph 2.1) and that this extends formulas (2) defining the tangential part of a vector field and (4) defining the Navier operator inside O.
• The boundary profile is tangential, even inside the domain. For any x ∈ Ō, z ≥ 0 and t ≥ 0, we have v(t, x, z) • n(x) = 0. It is easy to check that, as soon as the source term satisfies ξ v • n = 0, the evolution equation (46) preserves the orthogonality relation v • n = 0, which holds at the initial time. This orthogonality property is the reason why equation (46) is linear. Indeed, the quadratic term (v • n)∂ z v would have had to be taken into account if it did not vanish. In the sequel, we will check that our construction satisfies the property ξ v • n = 0.
• In (48), we introduce a smooth cut-off function χ, satisfying χ = 1 on ∂O. This is intended to help us guarantee that v is compactly supported near ∂O, while ensuring that v compensates the Navier slip-with-friction boundary trace of u 0 . See paragraph 3.4 for the choice of χ.
• Even though ϕ vanishes on ∂O, u 0 ♭ is not singular near the boundary because of the impermeability condition u 0 • n = 0. Since u 0 is smooth, a Taylor expansion proves that u 0 ♭ is smooth in Ō.
Large time asymptotic decay of the boundary layer profile
In the previous paragraph, we defined the boundary layer profile through equation (46) for any t ≥ 0. Indeed, we will need this expansion to hold on the large time interval [0, T/ε]. Thus, we prefer to define it directly for any t ≥ 0 in order to stress that this boundary layer profile does not depend in any way on ε. It is implicit that, for t ≥ T, the Euler reference flow u 0 is extended by 0. Hence, for t ≥ T, system (46) reduces to a parametrized heat equation on the half line z ≥ 0 (where the slow variables x ∈ O play the role of parameters):
∂_t v - ∂_zz v = 0 in R_+ × O, for t ≥ T, ∂_z v(t, x, 0) = 0 in {0} × O, for t ≥ T. (49)
The behavior of the solution to (49) depends on its "initial" data v(x, z) := v(T, x, z) at time T . Even without any assumption on v, this heat system exhibits smoothing properties and dissipates towards the null equilibrium state. It can for example be proved that:
|v(t, x, •)|_{L^2(R_+)} ≲ t^{-1/4} |v(x, •)|_{L^2(R_+)}. (50)
However, as the equation is set on the half-line z ≥ 0, the energy decay obtained in (50) is rather slow. Moreover, without any additional assumption, this estimate cannot be improved. It is indeed standard to prove asymptotic estimates for the solution v(t, x, •) involving the corresponding Green function (see [START_REF] Bartier | Improved intermediate asymptotics for the heat equation[END_REF], [START_REF] Duoandikoetxea | Moments, masses de Dirac et décomposition de fonctions[END_REF], or [START_REF] Elena | Decay of solutions to parabolic conservation laws[END_REF]). Physically, this is due to the fact that the average of v is preserved under its evolution by equation (49). The energy contained by low frequency modes decays slowly. Applied at the final time t = T/ε, estimate (50) yields:
|√ε v(T/ε, •, ϕ(•)/√ε)|_{L^2(O)} = O(ε^{1/2 + 1/4 + 1/4}), (51)
where the last ε^{1/4} factor comes from the Jacobian of the fast variable scaling (see [56, Lemma 3, page 150]). Hence, the natural decay O(ε) obtained in (51) is not sufficient to provide an asymptotically small boundary layer residue in the physical scaling. After division by ε, we only obtain a O(1) estimate. This motivates the fact that we need to design a control strategy to enhance the natural dissipation of the boundary layer residue after the main inviscid control step is finished.
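The preservation of the average invoked above can be checked directly (assuming sufficient decay as z → +∞): by the Neumann condition in (49),
(d/dt) ∫_0^{+∞} v(t, x, z) dz = ∫_0^{+∞} ∂_zz v(t, x, z) dz = -∂_z v(t, x, 0) = 0.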
Our strategy will be to guarantee that v satisfies a finite number of vanishing moment conditions for k ∈ N of the form:
∀x ∈ O, ∫_{R_+} z^k v(x, z) dz = 0. (52)
These conditions also correspond to vanishing derivatives at zero for the Fourier transform in z of v (or of its even extension to R). If we succeed in killing enough moments in the boundary layer at the end of the inviscid phase, we can obtain arbitrarily good polynomial decay properties. For s, n ∈ N, let us introduce the following weighted Sobolev spaces:
H^{s,n}(R) := { f ∈ H^s(R); Σ_{α=0}^{s} ∫_R (1 + z^2)^n |∂^α_z f(z)|^2 dz < +∞ }, (53)
which we endow with their natural norm. We prove in the following lemma that vanishing moment conditions yield polynomial decays in these weighted spaces for a heat equation set on the real line.
Lemma 6. Let s, n ∈ N and f_0 ∈ H^{s,n+1}(R) satisfying, for 0 ≤ k < n,
∫_R z^k f_0(z) dz = 0. (54)
Let f be the solution to the heat equation on R with initial data f 0 :
∂_t f - ∂_zz f = 0 in R, for t ≥ 0, f(0, •) = f_0 in R, for t = 0. (55)
There exists a constant C_{s,n} independent of f_0 such that, for 0 ≤ m ≤ n,
|f(t, •)|_{H^{s,m}} ≤ C_{s,n} |f_0|_{H^{s,n+1}} ( ln(2 + t) / (2 + t) )^{1/4 + n/2 - m/2}. (56)
Proof. For small times (say t ≤ 2), the function of t on the right-hand side of (56) is bounded from below by a positive constant. Thus, inequality (56) holds because the considered energy decays under the heat equation. Let us move on to large times, i.e. assuming t ≥ 2. Using the Fourier transform in z → ζ, we compute:
f̂(t, ζ) = e^{-tζ^2} f̂_0(ζ). (57)
Moreover, from Plancherel's equality, we have the following estimate:
|f(t, •)|^2_{H^{s,m}} ≲ Σ_{j=0}^{m} ∫_R (1 + ζ^2)^s |∂^j_ζ f̂(t, ζ)|^2 dζ. (58)
We use (57) to compute the derivatives of the Fourier transform:
∂^j_ζ f̂(t, ζ) = Σ_{i=0}^{j} ζ^{i-j} P_{i,j}(tζ^2) e^{-tζ^2} ∂^i_ζ f̂_0(ζ), (59)
where the P_{i,j} are polynomials with constant numerical coefficients (for instance, for j = 1 one has P_{0,1}(X) = -2X and P_{1,1}(X) = 1). The energy contained at high frequencies decays very fast. For low frequencies, we will need to use assumptions (54). Writing a Taylor expansion of f̂_0 near ζ = 0 and taking into account these assumptions yields the estimates:
|∂^i_ζ f̂_0(ζ)| ≲ |ζ|^{n-i} |∂^n_ζ f̂_0|_{L^∞} ≲ |ζ|^{n-i} |z^n f_0(z)|_{L^1} ≲ |ζ|^{n-i} |f_0|_{H^{0,n+1}}. (60)
We introduce ρ > 0 and we split the energy integral at a cutting threshold:
ζ_*(t) := ( ρ ln(2 + t) / (2 + t) )^{1/2}. (61)
High frequencies. We start with high frequencies |ζ| ≥ ζ * (t). For large times, this range actually almost includes the whole spectrum. Using ( 58) and ( 59) we compute the high energy terms:
W^♯_{j,i,i'}(t) := ∫_{|ζ| ≥ ζ_*(t)} (1 + ζ^2)^s e^{-2tζ^2} |ζ|^{i-j} |ζ|^{i'-j} |P_{i,j}(tζ^2)| |P_{i',j}(tζ^2)| |∂^i_ζ f̂_0| |∂^{i'}_ζ f̂_0| dζ. (62)
Plugging estimate ( 60) into (62) yields:
W^♯_{j,i,i'}(t) ≤ |f_0|^2_{H^{0,n+1}} ( e^{-t(ζ_*(t))^2} / |t|^{n-j+1/2} ) ∫_R (1 + ζ^2)^s e^{-tζ^2} (tζ^2)^{n-j} |P_{i,j}(tζ^2) P_{i',j}(tζ^2)| t^{1/2} dζ. (63)
The integral in ( 63) is bounded from above for t ≥ 2 through an easy change of variable. Moreover,
e^{-t(ζ_*(t))^2} = e^{-(ρt/(2+t)) ln(2+t)} = (2 + t)^{-ρt/(2+t)} ≤ (2 + t)^{-ρ/2}. (64)
Hence, for t ≥ 2, combining ( 63) and ( 64) yields:
W^♯_{j,i,i'}(t) ≲ (2 + t)^{-ρ/2} |f_0|^2_{H^{0,n+1}}. (65)
In (61), we can choose any ρ > 0. Hence, the decay obtained in (65) can be arbitrarily good. This is not the case for the low frequency estimates, which are capped by the number of vanishing moments assumed on the initial data f_0.
Low frequencies. We move on to low frequencies |ζ| ≤ ζ * (t). For large times, this range concentrates near zero. Using ( 58) and ( 59) we compute the low energy terms:
W^♭_{j,i,i'}(t) := ∫_{|ζ| ≤ ζ_*(t)} (1 + ζ^2)^s e^{-2tζ^2} |ζ|^{i-j} |ζ|^{i'-j} |P_{i,j}(tζ^2)| |P_{i',j}(tζ^2)| |∂^i_ζ f̂_0| |∂^{i'}_ζ f̂_0| dζ. (66)
Plugging estimate (60) into (66) yields:
W^♭_{j,i,i'}(t) ≤ |f_0|^2_{H^{0,n+1}} ∫_{|ζ| ≤ ζ_*(t)} (1 + ζ^2)^s |ζ|^{2n-2j} |P_{i,j}(tζ^2) P_{i',j}(tζ^2)| e^{-2tζ^2} dζ. (67)
The function τ → |P i,j (τ )P i ′ ,j (τ )| e -2τ is bounded on [0, +∞) thanks to the growth comparison theorem. Moreover, (1 + ζ 2 ) s can be bounded by (1 + ρ) s for |ζ| ≤ |ζ * (t)|. Hence, plugging the definition ( 61) into (67) yields:
W^♭_{j,i,i'}(t) ≲ |f_0|^2_{H^{0,n+1}} ( ρ ln(2 + t) / (2 + t) )^{1/2 + n - j}. (68)
Hence, choosing ρ = 1 + 2n -2m in equation ( 61) and summing estimates (65) with ( 68) for all indexes 0 ≤ i, i ′ ≤ j ≤ m concludes the proof of (56) and Lemma 6.
We will use the conclusion of Lemma 6 for two different purposes. First, it states that the boundary layer residue is small at the final time. Second, estimate (56) can also be used to prove that the source terms generated by the boundary layer in the equation of the remainder are integrable in large time. Indeed, for n ≥ 2, f_0 and f satisfying the assumptions of Lemma 6, we have:
|f|_{L^1(H^{2,n-2})} ≲ |f_0|_{H^{2,n+1}}. (69)
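Indeed, a short verification: applying (56) with s = 2 and m = n - 2 gives
|f(t, •)|_{H^{2,n-2}} ≲ |f_0|_{H^{2,n+1}} ( ln(2 + t) / (2 + t) )^{1/4 + n/2 - (n-2)/2} = |f_0|_{H^{2,n+1}} ( ln(2 + t) / (2 + t) )^{5/4},
and the right-hand side is integrable over (0, +∞), which yields (69).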
Preparation of vanishing moments for the boundary layer profile
In this paragraph, we explain how we intend to prepare vanishing moments for the boundary layer profile at time T using the control term ξ v of equation (46). In order to perform computations within the Fourier space in the fast variable, we want to get rid of the Neumann boundary condition at z = 0. This can be done by lifting the inhomogeneous boundary condition g 0 to turn it into a source term. We choose the simple lifting -g 0 (t, x)e^{-z}. The homogeneous boundary condition will be preserved via an even extension of the source term. Let us introduce V(t, x, z) ∈ R^d defined for t ≥ 0, x ∈ Ō and z ∈ R by:
V (t, x, z) := v(t, x, |z|) + g 0 (t, x)e -|z| . (70)
We also extend implicitly ξ v by parity. Hence, V is the solution to the following evolution equation:
∂_t V + (u^0 • ∇)V + BV + u^0_♭ z ∂_z V - ∂_zz V = G^0 e^{-|z|} + G̃^0 |z| e^{-|z|} + ξ^v in R_+ × Ō × R_+, V(0, x, z) = 0 in Ō × R_+, (71)
where we introduce:
B_{i,j} := ∂_j u^0_i - (n • ∂_j u^0) n_i + (u^0 • ∇ n_j) n_i for 1 ≤ i, j ≤ d, (72)
G^0 := ∂_t g^0 - g^0 + (u^0 • ∇) g^0 + B g^0, (73)
G̃^0 := -u^0_♭ g^0. (74)
The null initial condition in (71) is due to the fact that u 0 (0, •) = 0 and hence g 0 (0, •) = 0. Similarly, we have g 0 (t, •) = 0 for t ≥ T since we extended u 0 by zero after T. As remarked for equation (46), equation (71) also preserves orthogonality with n. Indeed, the particular structure of the zeroth-order operator B is such that ((u 0 • ∇)V + BV) • n = 0 for any function V such that V • n = 0. We compute the partial Fourier transform V̂(t, x, ζ) := ∫_R V(t, x, z) e^{-iζz} dz. We obtain:
∂_t V̂ + (u^0 • ∇) V̂ + (B + ζ^2 - u^0_♭) V̂ - u^0_♭ ζ ∂_ζ V̂ = 2G^0/(1 + ζ^2) + 2G̃^0 (1 - ζ^2)/(1 + ζ^2)^2 + ξ̂^v. (75)
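The right-hand side of (75) can be checked from the elementary Fourier transforms of the lifting profiles (a short computation, recalled here for convenience):
∫_R e^{-|z|} e^{-iζz} dz = 2/(1 + ζ^2), ∫_R |z| e^{-|z|} e^{-iζz} dz = 2(1 - ζ^2)/(1 + ζ^2)^2,
while the Fourier transform of z ∂_z V is -(V̂ + ζ ∂_ζ V̂), which produces the terms -u^0_♭ V̂ - u^0_♭ ζ ∂_ζ V̂ in (75).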
To obtain the decay we are seeking, we will need to consider a finite number of derivatives of V̂ at ζ = 0. Thus, we introduce:
Q_k(t, x) := ∂^k_ζ V̂(t, x, ζ = 0). (76)
Let us compute the evolution equations satisfied by these quantities. Indeed, differentiating equation ( 75) k times with respect to ζ yields:
∂_t ∂^k_ζ V̂ + (u^0 • ∇) ∂^k_ζ V̂ + (B + ζ^2 - u^0_♭) ∂^k_ζ V̂ + 2kζ ∂^{k-1}_ζ V̂ + k(k-1) ∂^{k-2}_ζ V̂ - u^0_♭ (ζ ∂_ζ + k) ∂^k_ζ V̂ = ∂^k_ζ [ 2G^0/(1 + ζ^2) + 2G̃^0 (1 - ζ^2)/(1 + ζ^2)^2 + ξ̂^v ]. (77)
Now we can evaluate at ζ = 0 and obtain:
∂_t Q_k + (u^0 • ∇) Q_k + B Q_k - u^0_♭ (k + 1) Q_k = ∂^k_ζ [ 2G^0/(1 + ζ^2) + 2G̃^0 (1 - ζ^2)/(1 + ζ^2)^2 + ξ̂^v ]_{ζ=0} - k(k-1) Q_{k-2}. (78)
In particular:
∂_t Q_0 + (u^0 • ∇) Q_0 + B Q_0 - u^0_♭ Q_0 = 2G^0 + 2G̃^0 + ξ̂^v|_{ζ=0}, (79)
∂_t Q_2 + (u^0 • ∇) Q_2 + B Q_2 - 3u^0_♭ Q_2 = -2Q_0 - 4G^0 - 12G̃^0 + ∂^2_ζ ξ̂^v|_{ζ=0}. (80)
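For instance, (80) can be recovered from (78) with k = 2: the Taylor expansions 1/(1 + ζ^2) = 1 - ζ^2 + O(ζ^4) and (1 - ζ^2)/(1 + ζ^2)^2 = 1 - 3ζ^2 + O(ζ^4) give
∂^2_ζ [ 2G^0/(1 + ζ^2) ]_{ζ=0} = -4G^0 and ∂^2_ζ [ 2G̃^0 (1 - ζ^2)/(1 + ζ^2)^2 ]_{ζ=0} = -12G̃^0,
while the term -k(k-1)Q_{k-2} reduces to -2Q_0.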
These equations can be brought back to ODEs using the characteristics method, by following the flow Φ 0 . Moreover, thanks to their cascade structure, it is easy to build a source term ξ v which prepares vanishing moments. We have the following result:
Lemma 7. Let n ≥ 1 and u 0 ∈ C^∞([0, T] × Ō) be a fixed reference flow as defined in paragraph 2.3. There exists ξ v ∈ C^∞(R_+ × Ō × R_+) with ξ v • n = 0, whose x-support is in Ō \ Ω and whose time support is compact in (0, T), such that:
∀ 0 ≤ k < n, ∀x ∈ Ō, Q_k(T, x) = 0. (81)
Moreover, for any s, p ∈ N, for any 0 ≤ m ≤ n, the associated boundary layer profile satisfies:
|v(t, •, •)|_{H^p_x(H^{s,m}_z)} ≲ ( ln(2 + t) / (2 + t) )^{1/4 + n/2 - m/2}, (82)
where the hidden constant depends on the functional space and on u 0 but not on the time t ≥ 0.
Proof. Reduction to independent control of n ODEs. Once n is fixed, let n' := ⌊(n-1)/2⌋. We start by choosing smooth even functions of z, φ_j for 0 ≤ j ≤ n', such that ∂^{2k}_ζ φ̂_j(0) = δ_{jk}. We then compute iteratively the moments Q_{2j} (odd moments automatically vanish by parity), using controls of the form ξ^v_j(t, x, z) := ξ^v_j(t, x) φ_j(z) to control Q_{2j} without interfering with the previously constructed controls. When computing the control at order j, all lower order moments 0 ≤ i < j are known, and their contribution, such as the one of Q_0 in (80), can be seen as a known source term.
Reduction to a null controllability problem. Let us explain why (79) is controllable. First, by linearity and since the source terms G^0 and G̃^0 are already known, fixed and tangential, it suffices to prove that, starting from zero and without these source terms, we could reach any smooth tangential state. Moreover, since the flow flushing property (24) is invariant through time reversal, it is also sufficient to prove that, in the absence of source term, we can drive any smooth tangential initial state to zero. These arguments can also be formalized using a Duhamel formula following the flow for equation (79).
Null controllability for a toy system. We are thus left with proving a null controllability property for the following toy system:
∂ t Q + (u 0 • ∇)Q + BQ + λQ = ξ in (0, T ) × Ō, Q(0, •) = Q * in Ō, (83)
where B(t, x) is defined in (72) and λ(t, x) is a smooth scalar-valued amplification term. Thanks to the flushing property (24) and to the fact that Ō is bounded, we can choose a finite partition of unity described by functions η_l for 1 ≤ l ≤ L, with 0 ≤ η_l(x) ≤ 1 and Σ_l η_l ≡ 1 on Ō, where the support of η_l is a small ball B_l centered at some x_l ∈ Ō. Moreover, we extract our partition such that: for any 1 ≤ l ≤ L, there exists a time t_l ∈ (ǫ, T - ǫ) such that dist(Φ 0 (0, t, B_l), Ω) ≥ δ/2 for |t - t_l| ≤ ǫ, where ǫ > 0. Let β : R → R be a smooth function with β = 1 on (-∞, -ǫ) and β = 0 on (ǫ, +∞). Let Q^l be the solution to (83) with initial data Q^l_* := η_l Q_* and null source term ξ. We define:
Q(t, x) := Σ_{l=1}^{L} β(t - t_l) Q^l(t, x), (84)
ξ(t, x) := Σ_{l=1}^{L} β'(t - t_l) Q^l(t, x). (85)
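A short verification of the properties claimed below: since each Q^l solves (83) with ξ = 0, definition (84) gives ∂_t Q + (u^0 • ∇)Q + BQ + λQ = Σ_l β'(t - t_l) Q^l = ξ; at t = 0 one has t - t_l < -ǫ, so β(t - t_l) = 1 and Q(0, •) = Σ_l η_l Q_* = Q_*; moreover β'(t - t_l) vanishes unless |t - t_l| ≤ ǫ, a time range on which the support of Q^l(t, •), transported by Φ^0 from B_l, stays at distance at least δ/2 from Ω; finally, since t_l ≤ T - ǫ, every β(t - t_l) vanishes at t = T.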
Thanks to the construction, formulas (84) and (85) define a solution to (83) with a smooth control term ξ supported in Ō \ Ω, satisfying ξ • n = 0 and such that Q(T, •) = 0. Decay estimate. For small times t ∈ (0, T), when ξ v = 0, estimate (82) can be seen as a uniform in time estimate and can be obtained similarly to the well-posedness results proved in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF]. For large times, t ≥ T, the boundary layer profile equation boils down to the parametrized heat equation (49) and we use the conclusion of Lemma 6 to deduce (82) from (56).
Staying in a small neighborhood of the boundary
The boundary layer correction defined in (46) is supported within a small x-neighborhood of ∂O. This is legitimate because Navier boundary layers do not exhibit separation behaviors. Within this x-neighborhood, this correction lifts the tangential boundary layer residue created by the Euler flow but generates a non vanishing divergence at order √ε. In the sequel, we will need to find a lifting profile for this residual divergence (see (116)). This will be possible as long as the extension n(x) := -∇ϕ(x) of the exterior normal to ∂O does not vanish on the x-support of v. However, there exists at least one point in O where ∇ϕ = 0 because ϕ is a non identically vanishing smooth function with ϕ = 0 on ∂O. Hence, we must make sure that, despite the transport term present in equation (46), the x-support of v will not encounter points where ∇ϕ vanishes.
We consider the extended domain O. Its boundary coincides with the set {x ∈ R d ; ϕ(x) = 0}. For any δ ≥ 0, we define V δ := {x ∈ R d ; 0 ≤ ϕ(x) ≤ δ}. Hence, V δ is a neighborhood of ∂O in Ō. For δ large enough, V δ = Ō. As mentioned in paragraph 3.1, ϕ was chosen such that |∇ϕ| = 1 and |ϕ(x)| = dist(x, ∂O) in a neighborhood of ∂O. Let us introduce η > 0 such that this is true on V η . Hence, within this neighborhood of ∂O, the extension n(x) = -∇ϕ(x) of the outwards normal to ∂O is well defined (and of unit norm). We want to guarantee that v vanishes outside of V η .
Considering the evolution equation ( 75), we see it as an equation defined on the whole of O. Thanks to its structure, we see that the support of V is transported by the flow of u 0 . Moreover, V can be triggered either by fixed polluting right-hand side source term or by the control forcing term. We want to determine the supports of these sources such that V vanishes outside of V η .
Thanks to definitions ( 48), ( 73) and ( 74), the unwanted right-hand side source term of ( 75) is supported within the support of χ. We introduce η χ such that supp(χ) ⊂ V ηχ . For δ ≥ 0, we define:
S(δ) := sup{ ϕ(Φ^0(t, t', x)) ; t, t' ∈ [0, T], x ∈ V_δ } ≥ δ. (86)
With this notation, V_{η_χ} includes the zone where pollution might be emitted. Hence V_{S(η_χ)} includes the zone that might be reached by some pollution. Iterating once more, V_{S(S(η_χ))} includes the zone where we might want to act using ξ v to prepare vanishing moments. Eventually, V_{S(S(S(η_χ)))} corresponds to the maximum localization of non vanishing values for v. First, since u 0 is smooth, Φ 0 is smooth. Moreover, ϕ is smooth. Hence, (86) defines a smooth function of δ. Second, due to the condition u 0 • n = 0, the characteristics cannot leave or enter the domain and thus follow the boundaries. Hence, S(0) = 0. Therefore, by continuity of S, there exists η_χ > 0 small enough such that S(S(S(η_χ))) ≤ η. We assume χ is fixed from now on.
Controlling the boundary layer exactly to zero
In view of what has been proved in the previous paragraphs, a natural question is whether we could have controlled the boundary layer exactly to zero (instead of controlling only a finite number of modes and relying on self-dissipation of the higher order ones). This was indeed our initial approach but it turned out to be impossible. The boundary layer equation (46) is not exactly null controllable at time T. In fact, it is not even exactly null controllable in any finite time greater than T. Indeed, since u^0(t, •) = 0 for t ≥ T, v is the solution to (49) for t ≥ T. Hence, reaching exactly zero at time T is equivalent to reaching exactly zero at any later time.
Let us present a reduced toy model to explain the difficulty. We consider a rectangular domain and a scalar-valued unknown function v solution to the following system:
∂ t v + ∂ x v -∂ zz v = 0 [0, T ] × [0, 1] × [0, 1], v(t, x, 0) = g(t, x) [0, T ] × [0, 1], v(t, x, 1) = 0 [0, T ] × [0, 1], v(t, 0, z) = q(t, z) [0, T ] × [0, 1], v(0, x, z) = 0 [0, 1] × [0, 1]. (87)
System ( 87) involves both a known tangential transport term and a normal diffusive term. At the bottom boundary, g(t, x) is a smooth fixed pollution source term (which models the action of N (u 0 ), the boundary layer residue created by our reference Euler flow). At the left inlet vertical boundary x = 0, we can choose a Dirichlet boundary value control q(t, z). Hence, applying the same strategy as described above, we can control any finite number of vertical modes provided that T ≥ 1.
However, let us check that it would not be reasonable to try to control the system exactly to zero at any given time T ≥ 1. Let us consider a vertical slice located at x ⋆ ∈ (0, 1) of the domain at the final time and follow the flow backwards by defining:
v_⋆(t, z) := v(t, x_⋆ + (t − T), z). (88)
Hence, letting T_⋆ := T − x_⋆ ≥ 0 and using (88), v_⋆ is the solution to a one-dimensional heat system:
∂ t v ⋆ -∂ zz v ⋆ = 0 [T ⋆ , T ] × [0, 1], v ⋆ (t, 0) = g ⋆ (t) [T ⋆ , T ], v ⋆ (t, 1) = 0 [T ⋆ , T ], v ⋆ (0, z) = q ⋆ (z) [0, 1], (89)
where g ⋆ (t) := g(t, x ⋆ + (t -T )) is smooth but fixed and q ⋆ (z) := q(T ⋆ , z) is an initial data that we can choose as if it was a control. Actually, let us change a little the definition of v ⋆ to lift the inhomogeneous boundary condition at z = 0. We set:
v ⋆ (t, z) := v(t, x ⋆ + (t -T ), z) -(1 -z)g ⋆ (t). (90)
Hence, system (89) reduces to:
∂_t v_⋆ − ∂_zz v_⋆ = −(1 − z) g′_⋆(t) [T_⋆, T] × [0, 1], v_⋆(t, 0) = 0 [T_⋆, T], v_⋆(t, 1) = 0 [T_⋆, T], v_⋆(0, z) = q_⋆(z) [0, 1], (91)
where we change the definition of q_⋆(z) := q(T_⋆, z) − (1 − z) g_⋆(T_⋆). Introducing the Fourier basis adapted to system (91), e_n(z) := sin(nπz), we can solve explicitly for the evolution of v_⋆:
v_⋆^n(T) = e^{−n²π²T} v_⋆^n(0) − ∫_{T_⋆}^{T} e^{−n²π²(T−t)} ⟨1 − z, e_n⟩ g′_⋆(t) dt. (92)
If we assume that the pollution term g vanishes at the final time, equation ( 92) and exact null controllability would impose the choice of the initial control data:
q_⋆^n = ⟨1 − z, e_n⟩ ∫_{T_⋆}^{T} e^{n²π²t} g′_⋆(t) dt. (93)
Even if the pollution term g is very smooth, there is nothing good to be expected from relation (93).
Hoping for cancellations or vanishing moments is not reasonable because we would have to guarantee this relation for all Fourier modes n and all x_⋆ ∈ [0, 1]. Thus, the boundary data control that we must choose has exponentially growing Fourier modes. Heuristically, it belongs to the dual of a Gevrey space. The intuition behind relation (93) is that the control data emitted from the left inlet boundary undergoes a heat regularization process as it moves towards its final position. In the meantime, the fixed polluting boundary data is injected directly at positions within the domain and undergoes less smoothing. This prevents any hope of proving exact null controllability for system (87) within reasonable functional spaces and explains why we had to resort to a low-modes control process.
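To make the obstruction more concrete, here is a rough asymptotic computation; it assumes in addition that g′_⋆(T) ≠ 0, an assumption made only for this heuristic:
\[
\langle 1-z,e_n\rangle=\int_0^1(1-z)\sin(n\pi z)\,dz=\frac{1}{n\pi},
\qquad
\int_{T_\star}^{T}e^{n^2\pi^2 t}\,g_\star'(t)\,dt\;\sim\;\frac{g_\star'(T)}{n^2\pi^2}\,e^{n^2\pi^2 T}
\quad\text{as } n\to+\infty,
\]
so that (93) would force |q_⋆^n| ∼ |g′_⋆(T)| e^{n²π²T}/(n³π³): the required Fourier coefficients grow faster than any exponential in n, which is the quantitative content of the Gevrey-dual heuristic.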
Theorem 1 is an exact null controllability result. To conclude our proof, we use a local argument stated as Lemma 4 in paragraph 2.6 which uses diffusion in all directions. Boundary layer systems like [START_REF] Van Dommelen | The spontaneous generation of the singularity in a separating laminar boundary layer[END_REF] exhibit no diffusion in the tangential direction and are thus harder to handle. The conclusion of our proof uses the initial formulation of the Navier-Stokes equation with a fixed O(1) viscosity.
Estimation of the remainder and technical profiles
In the previous sections, we presented the construction of the Euler reference flushing trajectory u 0 , the transported flow involving the initial data u 1 and the leading order boundary layer correction v. In this section, we follow on with the expansion and introduce technical profiles, which do not have a clear physical interpretation. The purpose of the technical decomposition we propose is to help us prove that the remainder we obtain is indeed small. We will use the following expansion:
u ε = u 0 + √ ε {v} + εu 1 + ε∇θ ε + ε {w} + εr ε , (94)
p ε = p 0 + ε {q} + εp 1 + εµ ε + επ ε , (95)
where v, w and q are profiles depending on t, x and z. For such a function f (t, x, z), we use the notation {f } to denote its evaluation at z = ϕ(x)/ √ ε. In the sequel, operators ∇, ∆, D and div only act on x variables. We will use the following straightforward commutation formulas:
div {f } = {div f } -n • {∂ z f } / √ ε (96) ∇ {f } = {∇f } -n {∂ z f } / √ ε, (97)
N({f}) = {N(f)} − (1/2) {[∂_z f]_tan} / √ε, (98)
ε∆{f} = ε{∆f} + √ε ∆ϕ {∂_z f} − 2√ε {(n • ∇)∂_z f} + |n|² {∂_zz f}. (99)
Within the x-support of boundary layer terms, |n| 2 = 1.
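As a sanity check, formulas (96) and (97) follow from a one-line chain-rule computation (recalled here only for convenience), differentiating the evaluation at z = ϕ(x)/√ε and using n = -∇ϕ:
\[
\partial_{x_i}\{f\}
=\{\partial_{x_i}f\}+\frac{\partial_{x_i}\varphi}{\sqrt{\varepsilon}}\,\{\partial_z f\}
=\{\partial_{x_i}f\}-\frac{n_i}{\sqrt{\varepsilon}}\,\{\partial_z f\}.
\]
Summing over i applied to the components of a vector field gives (96), stacking the components gives (97), and iterating the identity yields (99).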
Formal expansions of constraints
In this paragraph, we are interested in the formulation of the boundary conditions and the incompressibility condition for the full expansion. We plug expansion (94) into these conditions and identify the successive orders of power of √ ε.
Impermeability boundary condition
The impermeability boundary condition u ε • n = 0 on ∂O yields:
u^0 • n = 0, (100)
v(•, •, 0) • n = 0, (101)
u^1 • n + ∂_n θ^ε + w(•, •, 0) • n + r^ε • n = 0. (102)
By construction of the Euler trajectory u 0 , equation ( 100) is satisfied. Since the boundary profile v is tangential, equation ( 101) is also satisfied. By construction, we also already have u 1 • n = 0. In order to be able to carry out integrations by part for the estimates of the remainder, we also would like to impose r ε • n = 0. Thus, we read (102) as a definition of ∂ n θ ε once w is known:
∀t ≥ 0, ∀x ∈ ∂O, ∂ n θ ε (t, x) = -w(t, x, 0) • n. (103)
Incompressibility condition
The (almost) incompressibility condition div u ε = σ 0 in O (σ 0 is smooth forcing terms supported outside of the physical domain Ω) yields:
div u^0 − n • {∂_z v} = σ^0, (104)
{div v} -n • {∂ z w} = 0, (105)
div u 1 + div ∇θ ε + {div w} + div r ε = 0. (106)
In ( 105) and (106), we used formula (96) to isolate the contributions to the divergence coming from the slow derivatives with the one coming from the fast derivative ∂ z . By construction div u 0 = σ 0 , div u 1 = 0, n • ∂ z v = 0 and we would like to work with div r ε = 0. Hence, we read (105) and (106) as:
n • {∂ z w} = {div v} , (107)
−∆θ^ε = {div w}. (108)
Navier boundary condition
Last, we turn to the slip-with-friction boundary condition. Proceeding as above yields by identification:
N (u 0 ) - 1 2 [∂ z v] tan z=0 = 0, (109)
N (v) z=0 - 1 2 [∂ z w] tan z=0 = 0, (110)
N (u 1 ) + N (∇θ ε ) + N (w) z=0 + N (r ε ) = 0. (111)
By construction, (109) is satisfied. We will choose a basic lifting to guarantee (110). Last, we read (111) as an inhomogeneous boundary condition for the remainder:
N (r ε ) = g ε := -N (u 1 ) -N (∇θ ε ) -N (w) z=0 . (112)
Definitions of technical profiles
At this stage, the three main terms u 0 , v and u 1 are defined. In this paragraph, we explain step by step how we build the following technical profiles of the expansion. For any t ≥ 0, the profiles are built sequentially from the values of v(t, •, •). Hence, they will inherit from the boundary layer profile its smoothness with respect to the slow variables x and its time decay estimates obtained from Lemma 6.
Boundary layer pressure
Equation ( 46) only involves the tangential part of the symmetrical convective product between u 0 and v. Hence, to compensate its normal part, we introduce as in [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] the pressure q which is defined as the unique solution vanishing as z → +∞ to:
(u 0 • ∇)v + (v • ∇)u 0 • n = ∂ z q. (113)
Hence, we can now write:
∂ t v + (u 0 • ∇)v + (v • ∇)u 0 + u 0 ♭ z∂ z v -∂ zz v -n∂ z q = 0. (114)
This pressure profile vanishes as soon as u 0 vanishes, hence in particular for t ≥ T . For any p, s, n ∈ N, the following estimate is straightforward:
|q(t, •, •)| H 1 x (H 0,0 z ) |v(t, •, •)| H 2 x (H 0,2 z ) . (115)
Second boundary corrector
The first boundary layer corrector v generates a non vanishing slow divergence and a non vanishing tangential boundary flux. The role of the profile w is to lift two unwanted terms that would be too hard to handle directly in the equation of the remainder. We define w as:
w(t, x, z) := -2e -z N (v)(t, x, 0) -n(x) +∞ z div v(t, x, z ′ )dz ′ (116)
Definition (116) allows to guarantee condition (110). Moreover, under the assumption |n(x)| 2 = 1 for any x in the x-support of the boundary layer, this definition also fulfills condition (105). In equation ( 116) it is essential that n(x) does not vanish on the x-support of v. This is why we dedicated paragraph 3.4 to proving we could maintain a small enough support for the boundary layer. For any p, s, n ∈ N, the following estimates are straightforward:
|[w(t, •, •)] tan | H p x (H s,n z ) |v(t, •, •)| H p+1 x (H 1,1 z ) , (117)
|w(t, •, •) • n| H p x (H 0,n z ) |v(t, •, •)| H p+1 x (H 0,n+2 z ) , (118)
|w(t, •, •) • n| H p x (H s+1,n z ) |v(t, •, •)| H p+1 x (H s,n z ) . (119)
Estimates ( 117), ( 118) and ( 119) can be grossly summarized sub-optimally by:
|w(t, •, •)| H p x (H s,n z ) |v(t, •, •)| H p+1 x (H s+1,n+2 z ) . (120)
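For the reader's convenience, let us check directly that definition (116) fulfills (110) and (107); this only unpacks the definition, recalling that N(v) is tangential and that |n| = 1 on the x-support of v:
\[
\partial_z w = 2e^{-z}\,N(v)(t,x,0) + n(x)\,\operatorname{div} v(t,x,z),
\qquad\text{hence}\qquad
\big[\partial_z w\big]_{\mathrm{tan}}\big|_{z=0} = 2\,N(v)\big|_{z=0},
\quad
n\cdot\partial_z w = \operatorname{div} v .
\]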
Inner domain corrector
Once w is defined by (116), the collateral damage is that this generates a non vanishing boundary flux w • n on ∂O and a slow divergence. For a fixed time t ≥ 0, we define θ ε as the solution to:
∆θ ε = -{div w} in O, ∂ n θ ε = -w(t, •, 0) • n on ∂O. (121)
System ( 121) is well-posed as soon as the usual compatibility condition between the source terms is satisfied. Using Stokes formula, equations ( 96) and (105), we compute:
∫_∂O w(t, •, 0) • n = ∫_∂O {w} • n = ∫_O div{w} = ∫_O {div w} − ε^{-1/2} ∫_O n • {∂_z w} = ∫_O {div w} − ε^{-1/2} ∫_O {div v} = ∫_O {div w} − ε^{-1/2} ∫_O div{v} − ε^{-1} ∫_O n • {∂_z v} = ∫_O {div w} − ε^{-1/2} ∫_∂O {v} • n = ∫_O {div w}, (122)
where we used twice the fact that v is tangential. Thus, the compatibility condition is satisfied and system (121) has a unique solution. The associated potential flow ∇θ ε solves:
∂ t ∇θ ε + u 0 • ∇ ∇θ ε + (∇θ ε • ∇) u 0 + ∇µ ε = 0, in O for t ≥ 0, div ∇θ ε = -{div w} in O for t ≥ 0, ∇θ ε • n = -w |z=0 • n on ∂O for t ≥ 0, (123)
where the pressure term µ ε := -∂ t θ ε -u 0 •∇θ ε absorbs all other terms in the evolution equation (see [START_REF] Weinan | Boundary layer theory and the zero-viscosity limit of the Navier-Stokes equation[END_REF]).
Estimating roughly θ ε using standard regularity estimates for the Laplace equation yields:
|θ ε (t, •)| H 4 x |{div w} (t, •)| H 2 x + |w(t, •, 0) • n| H 3 x ε 1 4 |w(t)| H 4 x (H 0,0 z ) + ε -1 4 |w(t)| H 3 x (H 1,0 z ) + ε -3 4 |w(t)| H 2 x (H 2,0 z ) + |v(t)| H 3 x (H 0,1 z ) ε -3 4 |w(t)| H 4 x (H 2,0 z ) + |v(t)| H 3 x (H 0,1 z ) , (124)
where we used [55, Lemma 3, page 150] to benefit from the fast variable scaling. Similarly,
|θ ε (t, •)| H 3 x ε -1 4 |w(t)| H 3 x (H 1,0 z ) + |v(t)| H 2 x (H 0,1 z ) , (125)
|θ ε (t, •)| H 2 x ε 1 4 |w(t)| H 2 x (H 0,0 z ) + |v(t)| H 1 x (H 0,1 z ) . (126)
Equation for the remainder
In the extended domain O, the remainder is a solution to:
∂ t r ε -ε∆r ε + (u ε • ∇) r ε + ∇π ε = {f ε } -{A ε r ε } in O for t ≥ 0, div r ε = 0 in O for t ≥ 0, N (r ε ) = g ε on ∂O for t ≥ 0, r ε • n = 0 on ∂O for t ≥ 0, r ε (0, •) = 0 in O at t = 0. (127)
Recall that g ε is defined in (112). We introduce the amplification operator:
A ε r ε := (r ε • ∇) u 0 + √ εv + εu 1 + ε∇θ ε + εw -(r ε • n) ∂ z v + √ ε∂ z w (128)
and the forcing term:
f ε :=(∆ϕ∂ z v -2(n • ∇)∂ z v + ∂ zz w) + √ ε(∆v + ∆ϕ∂ z w -2(n • ∇)∂ z w) + ε(∆w + ∆u 1 + ∆∇θ ε ) -(v + √ ε(w + u 1 + ∇θ ε ))•∇ (v + √ ε(w + u 1 + ∇θ ε )) -(u 0 • ∇)w -(w • ∇)u 0 -u 0 ♭ z∂ z w + (w + u 1 + ∇θ ε ) • n∂ z v + √ εw -∇q -∂ t w. (129)
In ( 128) and (129), many functions depend on t, x and z. The differential operators ∇ and ∆ only act on the slow variables x and the evaluation at z = ϕ(x)/ √ ε is done a posteriori in (127). The derivatives in the fast variable direction are explicitly marked with the ∂ z operator. Moreover, most terms are independent of ε, except where explicitly stated in θ ε and r ε . Expansion (94) contains 4 slowly varying profiles and 2 boundary layer profiles. Thus, computing ε∆u ε using formula (99) produces 4 + 2 × 4 = 12 terms. Terms ∆u 0 and {∂ zz v} have already been taken into account respectively in [START_REF] Weinan | Blowup of solutions of the unsteady Prandtl's equation[END_REF] and [START_REF] Grenier | Spectral stability of Prandtl boundary layers: an overview[END_REF]. Term ∆r ε is written in (127). The remaining 9 terms are gathered in the first line of the forcing term (129).
Computing the non-linear term (u ε • ∇)u ε using formula (97) produces 6 × 4 + 6 × 2 × 2 = 48 terms. First, 8 have already been taken into account in ( 22), ( 26), ( 46) and (123). Moreover, 6 are written in (127) as (u ε • ∇)r ε , 7 more as the amplification term (128) and 25 in the second and third line of (129). The two missing terms
{(v • n)∂ z v} and {(v • n)∂ z w} vanish because v • n = 0.
Size of the remainder
We need to prove that equation ( 127) satisfies an energy estimate on the long time interval [0, T /ε]. Moreover, we need to estimate the size of the remainder at the final time and check that it is small. The key point is that the size of the source term {f ε } is small in L 2 (O). Indeed, for terms appearing at order O(1), the fast scaling makes us win a ε 1 4 factor (see for example [56, Lemma 3, page 150]). We proceed as we have done in the case of the shape operator in paragraph 2.5.
The only difference is the estimation of the boundary term [START_REF] Geymonat | On the vanishing viscosity limit for acoustic phenomena in a bounded region[END_REF]. We have to take into account the inhomogeneous boundary condition g ε and the fact that, in the general case, the boundary condition matrix M is different from the shape operator M w . Using [START_REF] Carlo | Some results on the Navier-Stokes equations with Navier boundary conditions[END_REF] allows us to write, on ∂O:
(r^ε × (∇ × r^ε)) • n = ((∇ × r^ε) × n) • r^ε = 2 (N(r^ε) + [(M − M_w) r^ε]_tan) • r^ε. (130)
Introducing smooth extensions of M and M w to the whole domain O also allows to extend the Navier operator N defined in (4), since the extension of the normal n extends the definition of the tangential part [START_REF] Barrat | Large slip effect at a nonwetting fluid-solid interface[END_REF]. Using (130), we transform the boundary term into an inner term:
∫_∂O (r^ε × (∇ × r^ε)) • n = 2 ∫_∂O g^ε • r^ε + ((M − M_w) r^ε) • r^ε = 2 ∫_O div [(g^ε • r^ε) n + (((M − M_w) r^ε) • r^ε) n] ≤ λ |∇r^ε|²_2 + C_λ |r^ε|²_2 + |g^ε|²_2 + |∇g^ε|²_2, (131)
for any λ > 0 to be chosen and where C λ is a positive constant depending on λ. We intend to absorb the |∇r ε | 2 2 term of (131) using the dissipative term. However, the dissipative term only provides the norm of the symmetric part of the gradient. We recover the full gradient using the Korn inequality. Indeed, since div r ε = 0 in O and r ε • n = 0 on ∂O, the following estimate holds (see [23, Corollary 1, Chapter IX, page 212]):
|r ε | 2 H 1 (O) ≤C K |r ε | 2 L 2 (O) + C K |∇ × r ε | 2 L 2 (O) . (132)
We choose λ = 1/(2C K ) in (131). Combined with (132) and a Grönwall inequality as in paragraph 2.5 yields an energy estimate for t ∈ [0, T /ε]:
|r ε | 2 L ∞ (L 2 ) + ε |r ε | 2 L 2 (H 1 ) = O(ε 1 4 ), (133)
as long as we can check that the following estimates hold:
|A^ε|_{L¹(L^∞)} = O(1), (134)
ε |g^ε|²_{L²(H¹)} = O(ε^{1/4}), (135)
|f^ε|_{L¹(L²)} = O(ε^{1/4}). (136)
In particular, the remainder at time T /ε is small and we can conclude the proof of Theorem 1 with the same arguments as in paragraph 2.6. Therefore, it only remains to be checked that estimates (134), ( 136) and (135) hold on the time interval [0, T /ε]. In fact, they even hold on the whole time interval [0, +∞).
Estimates for A ε . The two terms involving u 0 and u 1 vanish for t ≥ T . Thus, they satisfy estimate (134). For t ≥ 0, we estimate the other terms in A ε in the following way:
√ ε |∇v(t)| L ∞ √ ε |v(t)| H 3 x (H 1,0 z ) , (137)
ε |∇w(t)| L ∞ ε |w(t)| H 3 x (H 1,0 z ) , (138)
|∂ z v(t)| L ∞ |v(t)| H 2 x (H 2,0 z ) , (139) √ ε |∂ z w(t)| L ∞ √ ε |w(t)| H 2 x (H 2,0 z ) , (140)
ε ∇ 2 θ ε (t) L ∞ ε |θ ε (t)| H 4 . (141)
Combining these estimates with (124) and (120) yields:
A ε L 1 (L ∞ ) u 0 L 1 [0,T ] (H 3 ) + ε u 1 L 1 [0,T ] (H 3 ) + v L 1 (H 5 x (H 3,2 z )) . (142)
Applying Lemma 7 with p = 5, n = 4 and m = 2 concludes the proof of (134).
Estimates for g ε . For t ≥ 0, using the definition of g ε in (112), we estimate:
ε N (u 1 )(t) 2 H 1 ε u 1 (t) 2 H 2 , (143)
ε |N (∇θ ε )(t)| 2 H 1 ε |θ ε (t)| 2 H 3 , (144)
ε N (w) |z=0 (t) 2 H 1 ε |w(t)| 2 H 2 x (H 1,1 z ) . (145)
Combining these estimates with (125) and (120) yields:
ε g ε 2 L 2 (H 1 ) ε u 1 2 L 2 [0,T ] (H 2 ) + ε 3 4 v 2 L 2 (H 4 x (H 2,3 z )) . (146)
Applying Lemma 7 with p = 4, n = 4 and m = 3 concludes the proof of (135).
Estimates for f^ε. For t ≥ 0, we estimate the 36 terms involved in the definition of f^ε in (129). The conclusion is that (136) holds as soon as v is bounded in L¹(H⁴_x(H^{3,4}_z)). This can be obtained from Lemma 7 with p = 4, n = 6 and m = 4. Let us give a few examples of some of the terms requiring the most regularity. The key point is that all terms of (129) appearing at order O(1) involve a boundary layer term and thus benefit from the fast variable scaling gain of ε^{1/4} in L² of [56, Lemma 3, page 150]. For example, with (120):
|{∂ zz w} (t)| L 2 ε 1 4 |w(t)| H 1 x (H 2,0 z ) ε 1 4 |v(t)| H 2 x (H 3,2 z ) . (147)
Using ( 125) and (120), we obtain:
ε |∆∇θ ε (t)| L 2 ε 3 4 |w(t)| H 3 x (H 1,0 z ) + |v(t)| H 2 x (H 0,1 z ) ε 3 4 |v(t)| H 4 x (H 2,2 z ) . (148)
The time derivative {∂_t w} can be estimated easily because the time derivative commutes with the definition of w through formula (116). Moreover, ∂_t v can be recovered from its evolution equation (46):
|{∂_t w}(t)|_{L²} ≲ ε^{1/4} |∂_t w(t)|_{H¹_x(H^{0,0}_z)} ≲ ε^{1/4} |∂_t v(t)|_{H²_x(H^{1,2}_z)} ≲ ε^{1/4} (|v(t)|_{H³_x(H^{2,4}_z)} + |ξ_v(t)|_{H³_x(H^{2,4}_z)}). (149)
The forcing term ξ_v is smooth and supported in [0, T]. As a last example, consider the term (∇θ^ε • n)∂_z v. We use the injection H¹ ↪ L⁴, which is valid in 2D and in 3D, and estimate (126):
|(∇θ ε • n) {∂ z v} (t)| L 2 |∇θ ε (t)| H 1 |{∂ z v} (t)| H 1 ε 1 4 |v(t)| H 3 x (H 1,2 z ) |v(t)| H 2 x (H 1,0 z ) . (150)
As (82) both yields L ∞ and L 1 estimates in time, this estimation is enough to conclude. All remaining nonlinear convective terms can be handled in the same way or even more easily. The pressure term is estimated using (115).
These estimates conclude the proof of small-time global approximate null controllability in the general case. Indeed, both the boundary layer profile (thanks to Lemma 7) and the remainder are small at the final time. Thus, as announced in Remark 2, we have not only proved that there exists a weak trajectory going approximately to zero, but that any weak trajectory corresponding to our source terms ξ ε and σ ε goes approximately to zero. We combine this result with the local and regularization arguments explained in paragraph 2.6 to conclude the proof of Theorem 1 in the general case.
Global controllability to the trajectories
In this section, we explain how our method can be adapted to prove small-time global exact controllability to other states than the null equilibrium state. Since the Navier-Stokes equation exhibits smoothing properties, all conceivable target states must be smooth enough. Generally speaking, the exact description of the set of reachable states for a given controlled system is a difficult question. Already for the heat equation on a line segment, the complete description of this set is still open (see [START_REF] Dardé | On the reachable set for the one-dimensional heat equation[END_REF] and [START_REF] Martin | On the reachable states for the boundary control of the heat equation[END_REF] for recent developments on this topic). The usual circumvention is to study the notion of global exact controllability to the trajectories. That is, we are interested in whether all known final states of the system are reachable from any other arbitrary initial state using a control: Theorem 2. Let T > 0. Assume that the intersection of Γ with each connected component of ∂Ω is smooth. Let ū ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)) be a fixed weak trajectory of (1) with smooth ξ. Let u * ∈ L 2 γ (Ω) be another initial data unrelated with ū. Then there exists u ∈ C 0 w ([0, T ]; L 2 γ (Ω)) ∩ L 2 ((0, T ); H 1 (Ω)) a weak trajectory of (1) with u(0,
•) = u * satisfying u(T, •) = ū(T, •).
The strategy is very similar to the one described in the previous sections to prove the global null controllability. We start with the following lemma, asserting small-time global approximate controllability to smooth trajectories in the extended domain.
Lemma 8. Let T > 0. Let (ū, ξ, σ) ∈ C ∞ ([0, T ] × Ō) be a fixed smooth trajectory of [START_REF] Bocquet | Flow boundary conditions from nano-to micro-scales[END_REF]. Let u * ∈ L 2 div (O) be another initial data unrelated with ū. For any δ > 0, there exists u ∈ C 0 w ([0, T ]; L 2 div (O)) ∩ L 2 ((0, T ); H 1 (O)) a weak Leray solution of (12) with u(0,
•) = u * satisfying |u(T ) -ū(T )| L 2 (O) ≤ δ.
Proof. We build a sequence u (ε) to achieve global approximate controllability to the trajectories. Still using the same scaling, we define it as:
u (ε) (t, x) := 1 ε u ε t ε , x , (151)
where u ε solves the vanishing viscosity Navier-Stokes equation [START_REF] Glass | Approximate Lagrangian controllability for the 2-D Euler equation. Application to the control of the shape of vortex patches[END_REF] with initial data εu * on the time interval [0, T /ε]. As previously, this time interval will be used in two different stages. First, a short stage of fixed length T to achieve controllability of the Euler system by means of a return-method strategy. Then, a long stage [T, T /ε], during which the boundary layer dissipates thanks to the careful choice of the boundary controls during the first stage. During the first stage, we use the expansion:
u ε = u 0 + √ ε {v} + εu 1,ε + . . . , (152)
where u 1,ε is built such that u 1,ε (0, •) = u * and u 1,ε (T, •) = ū(εT, •). This is the main difference with respect to the null controllability strategy. Here, we need to aim for a non zero state at the first order. Of course, this is also possible because the state u 1,ε is mostly transported by u 0 (which is such that the linearized Euler system is controllable). The profile u 1,ε now depends on ε. However, since the reference trajectory belongs to C ∞ , all required estimates can be made independent on ε. During this first stage, u 1,ε solves the usual first-order system [START_REF] Weinan | Blowup of solutions of the unsteady Prandtl's equation[END_REF]. For large times t ≥ T , we change our expansion into:
u ε = √ ε {v} + εū(εt, •) + . . . , (153)
where the boundary layer profile solves the homogeneous heat system (49) and ū is the reference trajectory solving the true Navier-Stokes equation. As we have done in the case of null controllability, we can derive the equations satisfied by the remainders in the previous equations and carry on both well-posedness and smallness estimates using the same arguments. Changing expansion (152) into (153) allows to get rid of some unwanted terms in the equation satisfied by the remainder. Indeed, terms such as ε∆u 1 or ε(u 1 ∇)u 1 don't appear anymore because they are already taken into account by ū. One important remark is that it is necessary to aim for ū(εT ) ≈ ū(0) at the linear order and not towards the desired end state ū(T ). Indeed, the inviscid stage is very short and the state will continue evolving while the boundary layer dissipates. This explains our choice of pivot state. We obtain:
u (ε) (T ) -ū(T ) L 2 (O) = O ε 1 8 , (154)
which concludes the proof of approximate controllability.
We will also need the following regularization lemma:
Lemma 9. Let T > 0. Let ū ∈ C^∞([0, T] × Ō) be a fixed smooth function with ū • n = 0 on ∂O. There exists a smooth function C, with C(0) = 0, such that, for any r_* ∈ L²_div(O) and any r ∈ C⁰_w([0, T]; L²_div(O)) ∩ L²((0, T); H¹(O)), weak Leray solution to:
∂_t r − ∆r + (ū • ∇)r + (r • ∇)ū + (r • ∇)r + ∇π = 0 in [0, T] × O, div r = 0 in [0, T] × O, r • n = 0 on [0, T] × ∂O, N(r) = 0 on [0, T] × ∂O, r(0, •) = r_* in O, (155)
the following property holds true:
∃t r ∈ [0, T ], |r(t r , •)| H 3 (O) ≤ C |r * | L 2 (O) . (156)
Proof. This regularization lemma is easy in our context because we assumed a lot of smoothness on the reference trajectory ū and we are not demanding anything on the time t_r at which the solution becomes smoother. We only sketch out the steps that we go through. We repeatedly use the Korn inequality from [68, Theorem 10.2, page 299] to derive estimates from the symmetrical part of gradients. Let P denote the usual orthogonal Leray projector on divergence-free vector fields tangent to the boundaries. We will use the fact |∆r|_{L²} ≲ |P∆r|_{L²}, which follows from the maximal regularity result for the Stokes problem with div r = 0 in O, r • n = 0 and N(r) = 0 on ∂O. Our scheme is inspired by [START_REF] Galdi | An introduction to the Navier-Stokes initial-boundary value problem[END_REF].
Weak solution energy estimate. We start with the usual weak solution energy estimate (which is included in the definition of a weak Leray solution to (155)), formally multiplying (155) by r and integrating by parts. We obtain:
∃C 1 , for a.e. t ∈ [0, T ], |r(t)| 2 L 2 (O) + t 0 |r(t ′ )| 2 H 1 (O) dt ′ ≤ C 1 |r * | 2 L 2 (O) . (157)
In particular (157) yields the existence of 0 ≤ t 1 ≤ T /3 such that:
|r(t_1)|_{H¹(O)} ≤ √(3C_1/T) |r_*|_{L²(O)}. (158)
Strong solution energy estimate. We move on to the usual strong solution energy estimate, multiplying (155) by P∆r and integrating by parts. We obtain:
∃C 2 , ∀t ∈ [t 1 , t 1 + τ 1 ], |r(t)| 2 H 1 (O) + t t1 |r(t ′ )| 2 H 2 (O) dt ′ ≤ C 2 |r(t 1 )| 2 H 1 (O) , (159)
where τ 1 ≤ T /3 is a short existence time coming from the estimation of the nonlinear term and bounded below as a function of |r(t 1 )| H 1 (O) . See [32, Theorem 6.1] for a detailed proof. Our situation introduces an unwanted boundary term during the integration by parts of ∂ t r, È∆r :
∫_{t_1}^{t} ∫_∂O [D(r)n]_tan • [∂_t r]_tan = − ∫_{t_1}^{t} ∫_∂O (M r) • ∂_t r. (160)
Luckily, the Navier boundary conditions helps us win one space derivative. When M is a scalar (or a symmetric matrix), this term can be seen as a time derivative. In the general case, we have to conduct a parallel estimate for ∂ t r ∈ L 2 by multiplying equation (155) by ∂ t r, which allows us to maintain the conclusion (159). In particular, this yields the existence of 0 ≤ t 2 ≤ 2T /3 such that:
|r(t_2)|_{H²(O)} ≤ √(C_2/τ_1) |r(t_1)|_{H¹(O)}. (161)
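To make the preceding remark concrete: when M is symmetric and independent of time, the boundary term appearing in (160) is indeed an exact time derivative, thanks to the elementary identity
\[
\int_{\partial O}(Mr)\cdot\partial_t r=\frac{1}{2}\,\frac{d}{dt}\int_{\partial O}(Mr)\cdot r .
\]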
Third energy estimate. We iterate once more. We differentiate (155) with respect to time to obtain an evolution equation on ∂ t r which we multiply by ∂ t r and integrate by parts. We obtain:
∃C 3 , ∀t ∈ [t 2 , t 2 + τ 2 ], |∂ t r(t)| 2 L 2 (O) + t t2 |∂ t r(t ′ )| 2 H 1 (O) dt ′ ≤ C 3 |∂ t r(t 2 )| 2 L 2 (O) , (162)
where τ_2 is a short existence time bounded from below as a function of |∂_t r(t_2)|_{L²(O)}, which is bounded at time t_2 since we can compute it from equation (155). Using (162), we deduce an L^∞(H²) bound on r, seeing (155) as a Stokes problem for r. Using the same argument as above, we find a time t_3 such that r ∈ H³ with a quantitative estimate. Now we can prove Theorem 2. Even though ū is only a weak trajectory on [0, T], there exists 0 ≤ T_1 < T_2 ≤ T such that ū is smooth on [T_1, T_2]. This is a classical statement (see [START_REF] Temam | Behaviour at time t = 0 of the solutions of semilinear evolution equations[END_REF], Remark 3.2, for the case of Dirichlet boundary conditions). We will start our control strategy by doing nothing on [0, T_1]. Thus, the weak trajectory u will move from u_* to some state u(T_1) which we will use as new initial data. Then, we use our control to drive u(T_1) to ū(T_2) at time T_2. After T_2, we choose null controls. The trajectory u follows ū. Hence, without loss of generality, we can assume that T_1 = 0 and T_2 = T. This allows us to work with a smooth reference trajectory.
To finish the control strategy, we use the local result from [START_REF] Guerrero | Local exact controllability to the trajectories of the Navier-Stokes system with nonlinear Navier-slip boundary conditions[END_REF]. According to this result, there exists δ T /3 > 0 such that, if we succeed to prove that there exists 0 < τ < 2T /3 such that |u(τ )ū(τ )| H 3 (O) ≤ δ T /3 , then there exist controls driving u to ū(T ) at time T . If we choose null controls r := uū satisfies the hypothesis of Lemma 9. Hence, there exists δ > 0 such that C(δ) ≤ δ T /3 and we only need to build a trajectory such that |u(T /3)ū(T /3)| L 2 (O) ≤ δ, which is precisely what has been proved in Lemma 8. This concludes the proof of Theorem 2.
Perspectives
The results obtained in this work can probably be extended in following directions:
• As stated in Remark 2, for the 3D case, it would be interesting to prove that the constructed trajectory is a strong solution of the Navier-Stokes system (provided that the initial data is smooth enough). Since the first order profiles are smooth, the key point is whether we can obtain strong energy estimates for the remainder despite the presence of a boundary layer. In the uncontrolled setting, an alternative approach to the asymptotic expansion of [START_REF] Iftimie | Viscous boundary layers for the Navier-Stokes equations with the Navier slip conditions[END_REF] consists in introducing conormal Sobolev spaces to perform energy estimates (see [START_REF] Masmoudi | Uniform regularity for the Navier-Stokes equation with Navier boundary condition[END_REF]).
• As proposed in [START_REF] Glass | Approximate Lagrangian controllability for the 2-D Euler equation. Application to the control of the shape of vortex patches[END_REF], [START_REF] Glass | Prescribing the motion of a set of particles in a three-dimensional perfect fluid[END_REF] then [START_REF] Glass | Lagrangian controllability at low Reynolds number[END_REF], respectively for the case of perfect fluids (Euler equation) then very viscous fluids (stationary Stokes equation), the notion of Lagrangian controllability is interesting for applications. It is likely that the proofs of these references can be adapted to the case of the Navier-Stokes equation with Navier boundary conditions thanks to our method, since the boundary layers are located in a small neighborhood of the boundaries of the domain which can be kept separated from the Lagrangian trajectories of the considered movements. This adaptation might involve stronger estimates on the remainder.
• As stated after Lemma 2, the hypothesis that the control domain Γ intersects all connected components of the boundary ∂Ω of the domain is necessary to obtain controllability of the Euler equation. However, since we are dealing with the Navier-Stokes equation, it might be possible to release this assumption, obtain partial results in its absence, or prove that it remains necessary. This question is also linked to the possibility of controlling a fluid-structure system where one tries to control the position of a small solid immersed in a fluid domain by a control on a part of the external border only. Existence of weak solutions for such a system is studied in [START_REF] Gérard | Existence of weak solutions up to collision for viscous fluid-solid systems with slip[END_REF].
• At least for simple geometric settings of Open Problem (OP), our method might be adapted to the challenging Dirichlet boundary condition. In this case, the amplitude of the boundary layer is O(1) instead of O( √ ε) here for the Navier condition. This scaling deeply changes the equations satisfied by the boundary layer profile. Moreover, the new evolution equation satisfied by the remainder involves a difficult term
(1/√ε) (r^ε • n) ∂_z v.
Well-posedness and smallness estimates for the remainder are much harder and might involve analytic techniques. We refer to paragraph 1.5.1 for a short overview of some of the difficulties to be expected.
More generally speaking, we expect that the well-prepared dissipation method can be applied to other fluid mechanics systems to obtain small-time global controllability results, as soon as asymptotic expansions for the boundary layers are known.
A Smooth controls for the linearized Euler equation
In this appendix, we provide a constructive proof of Lemma 3. The main idea is to construct a force term ξ 1 such that ∇ × u 1 (T, •) = 0 in O. Hence, the final time profile U := u 1 (T, •) satisfies:
∇ • U = 0 in O, ∇ × U = 0 in O, U • n = 0 on ∂O. (163)
For simply connected domains, this implies that U = 0 in O. For multiply connected domains, the situation is more complex. Roughly speaking, a finite number of non vanishing solutions to (163) must be ruled out by sending in appropriate vorticity circulations. For more details on this specific topic, we refer to the original references: [START_REF] Coron | On the controllability of 2-D incompressible perfect fluids[END_REF] for 2D, then [START_REF] Glass | Exact boundary controllability of 3-D Euler equation[END_REF] for 3D. Here, we give an explicit construction of a regular force term such that ∇ × u 1 (T, •) = 0. The proof is slightly different in the 2D and 3D settings, because the vorticity formulation of ( 26) is not exactly the same. In both cases, we need to build an appropriate partition of unity.
A.1 Construction of an appropriate partition of unity
First, thanks to hypothesis [START_REF] Duoandikoetxea | Moments, masses de Dirac et décomposition de fonctions[END_REF], the continuity of the flow Φ 0 and the compactness of Ō, there exists δ > 0 such that: ∀x ∈ Ō, ∃t x ∈ (0, T ), dist Φ 0 (0, t x , x), Ω ≥ δ.
Hence, there exists a smooth closed control region K ⊂ Ō such that K ∩ Ω = ∅ and:
∀x ∈ Ō, ∃t_x ∈ (0, T), Φ^0(0, t_x, x) ∈ K. (165)
Figure 3: Paving the control region K with appropriate squares.
Thanks to (165) and to the continuity of the flow Φ^0:
∀x ∈ Ō, ∃ε_x > 0, ∃t_x ∈ (ε_x, T − ε_x), ∃m_x ∈ {1, . . . , M}, ∀t′ ∈ (0, T), ∀x′ ∈ Ō, |t′ − t_x| < ε_x and |x − x′| < ε_x ⇒ Φ^0(0, t′, x′) ∈ C_{m_x}. (166)
By compactness of Ō, we can find ε > 0 and balls B_l for 1 ≤ l ≤ L, covering Ō, such that:
∀l ∈ {1, . . . , L}, ∃t_l ∈ (ε, T − ε), ∃m_l ∈ {1, . . . , M}, ∀t ∈ (t_l − ε, t_l + ε), Φ^0(0, t, B_l) ⊂ C_{m_l}. (167)
Hence, each ball spends a positive amount of time within a given square (resp. cube) where we can use a local control to act on the u^1 profile. This square (resp. cube) can be of one of two types as constructed above: either of inner type, or of boundary type. We also introduce a smooth partition of unity η_l for 1 ≤ l ≤ L, such that 0 ≤ η_l(x) ≤ 1, Σ_l η_l ≡ 1 and each η_l is compactly supported in B_l. Last, we introduce a smooth function β : R → [0, 1] such that β ≡ 1 on (−∞, −ε) and β ≡ 0 on (ε, +∞).
A.2 Planar case
We consider the initial data u_* ∈ H³(O) ∩ L²_div(O) and we split it using the constructed partition of unity. Writing (26) in vorticity form, ω^1 := ∇ × u^1 can be computed as Σ_l ω_l, where ω_l is the solution to:
∂ t ω l + (div u 0 )ω l + u 0 • ∇ ω l = ∇ × ξ l in (0, T ) × Ō, ω l (0, •) = ∇ × (η l u * ) in Ō. (168)
We consider ω̃_l, the solution to (168) with ξ_l = 0. Setting ω_l := β(t − t_l) ω̃_l defines a solution to (168), vanishing at time T, provided that we can find ξ_l such that ∇ × ξ_l = β′(t − t_l) ω̃_l. The main difficulty is that we need ξ_l to be supported in Ō \ Ω. Since β′ ≡ 0 outside of (−ε, ε), β′(t − t_l) ω̃_l is supported in C_{m_l} thanks to (167), because the support of ω̃_l is transported by (168). We distinguish two cases.
Inner balls. Assume that C_{m_l} is an inner square. Then B_l does not intersect ∂O. Indeed, the streamlines of u^0 follow the boundary ∂O. If there existed x ∈ B_l ∩ ∂O, then Φ^0(0, t_l, x) ∈ ∂O could not belong to C_{m_l}, which would violate (167). Hence, B_l must be an inner ball. Then, thanks to Stokes' theorem, the average of ω_l(0, •) on B_l is null (since the circulation of η_l u_* along its border is null). Moreover, this average is preserved under the evolution by (168) with ξ_l = 0. Thus, the average of ω̃_l is identically null. It remains to be checked that, if w is a zero-average scalar function supported in an inner square, we can find functions (ξ_1, ξ_2) supported in the same square such that ∂_1 ξ_2 − ∂_2 ξ_1 = w. Up to translation, rescaling and rotation, we can assume that the inner square is C = [0, 1]². We define:
a(x_2) := ∫_0^1 w(x_1, x_2) dx_1, (169)
b(x_2) := ∫_0^{x_2} a(x) dx, (170)
ξ_1(x_1, x_2) := −c′(x_1) b(x_2), (171)
ξ_2(x_1, x_2) := −c(x_1) a(x_2) + ∫_0^{x_1} w(x, x_2) dx, (172)
where c : R → [0, 1] is a smooth function with c ≡ 0 on (−∞, 1/4) and c ≡ 1 on (3/4, +∞). Thanks to (169), a vanishes for x_2 ∉ [0, 1]. Thanks to (170), b vanishes for x_2 ≤ 0 (because a(x_2) = 0 when x_2 ≤ 0) and for x_2 ≥ 1 (because b(x_2) = ∫_C w = 0 for x_2 ≥ 1). Thanks to (171) and (172), (ξ_1, ξ_2) vanish outside of C and ∂_1 ξ_2 − ∂_2 ξ_1 = w. Thus, we can build ξ_l, supported in C_{m_l}, such that ∇ × ξ_l = β′(t − t_l) ω̃_l.
Moreover, thanks to this explicit construction, the spatial regularity of ξ_l is at least as good as that of ω̃_l, which is the same as that of ∇ × (η_l u_*). If u_* ∈ H³(O), then ξ_l ∈ C¹([0, T], H¹(O)) ∩ C⁰([0, T], H²(O)). This remains true after summation with respect to 1 ≤ l ≤ L and for the constructions exposed below. If the initial data u_* were smoother, we could also build smoother controls.
Boundary balls. Assume that C_{m_l} is a boundary square. Then, B_l can either be an inner ball or a boundary ball and we can no longer assume that the average of ω̃_l is identically null. However, the same construction also works. Up to translation, rescaling and rotation, we can assume that the boundary square is C = [0, 1]², with the side x_2 = 0 inside O and the side x_2 = 1 in R² \ O.
Figure 4: A boundary square.
We start by extending w from C ∩ Ō to C, choosing a regular extension operator. Then, we use the same formulas (169), (170), (171) and (172). One checks that this defines a force which vanishes for x_1 ≤ 0, for x_1 ≥ 1 and for x_2 ≤ 0.
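For completeness, the curl identity claimed after (172) can be verified in two lines, using only b′ = a (which follows from (170)):
\[
\partial_1\xi_2=-c'(x_1)\,a(x_2)+w(x_1,x_2),
\qquad
\partial_2\xi_1=-c'(x_1)\,b'(x_2)=-c'(x_1)\,a(x_2),
\]
so that ∂_1ξ_2 − ∂_2ξ_1 = w, as required.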
A.3 Spatial case
In 3D, each vorticity patch ω l satisfies:
∂ t ω l + ∇ × (ω l × u 0 ) = ∇ × ξ l in (0, T ) × Ō, ω l (0, •) = ∇ × (η l u * ) in Ō. (173)
Equation ( 173) preserves the divergence-free condition of its initial data. Hence, proceeding as above, the only thing that we need to check is that, given a vector field w = (w 1 , w 2 , w 3 ) : R 3 → R 3 such that:
support(w) ⊂ (0, 1) 3 , (174)
div(w) = 0, (175)
we can find a vector field ξ = (ξ 1 , ξ 2 , ξ 3 ) : R 3 → R 3 such that:
∂ 2 ξ 3 -∂ 3 ξ 2 = w 1 , (176)
∂ 3 ξ 1 -∂ 1 ξ 3 = w 2 , (177)
∂ 1 ξ 2 -∂ 2 ξ 1 = w 3 , (178)
support(ξ) ⊂ (0, 1)³. (179)
As in the planar case, we distinguish the case of inner and boundary cubes.
Inner cubes. Let a ∈ C ∞ (R, R) be such that:
∫_0^1 a(x) dx = 1, (180)
support(a) ⊂ (0, 1). (181)
We define:
ξ_1(x_1, x_2, x_3) := a(x_1) h(x_2, x_3), (182)
ξ_2(x_1, x_2, x_3) := ∫_0^{x_1} (∂_2 ξ_1 + w_3)(x, x_2, x_3) dx, (183)
ξ_3(x_1, x_2, x_3) := ∫_0^{x_1} (∂_3 ξ_1 − w_2)(x, x_2, x_3) dx, (184)
where h : R² → R will be specified later on. From (183), one has (178). From (184), one has (177). From (174), (175), (183) and (184), one has (176). Using (174), (182), (183) and (184), one checks that (179) holds if h satisfies:
support(h) ⊂ (0, 1)², (185)
∂ 2 h(x 2 , x 3 ) = W 2 (x 2 , x 3 ), (186)
∂ 3 h(x 2 , x 3 ) = W 3 (x 2 , x 3 ), (187)
where
W_2(x_2, x_3) := −∫_0^1 w_3(x, x_2, x_3) dx, (188)
W_3(x_2, x_3) := ∫_0^1 w_2(x, x_2, x_3) dx. (189)
From (174), (175), (188) and (189), one has:
support(W_2) ⊂ (0, 1)², support(W_3) ⊂ (0, 1)², (190)
∂_2 W_3 − ∂_3 W_2 = 0. (191)
We define h by
h(x_2, x_3) := ∫_0^{x_2} W_2(x, x_3) dx, (192)
so that (186) holds. From (190), (191) and (192), one gets (187). Finally, from (188), (190) and (192), one sees that (185) holds if and only if:
k(x_3) = 0, (193)
where
k(x_3) := −∫_0^1 ∫_0^1 w_3(x_1, x_2, x_3) dx_1 dx_2. (194)
Using (174), (175) and (194), one sees that k′ ≡ 0 and support(k) ⊂ (0, 1), which implies (193).
Boundary cubes. Now we consider a boundary cube. Up to translation, scaling and rotation, we assume that we are considering the cube C = [0, 1]³ with the face x_1 = 0 lying inside O and the face x_1 = 1 lying in R³ \ O. Similarly as in the planar case, we choose a regular extension of w to C. We set ξ_1 = 0 and we define ξ_2 by (183) and ξ_3 by (184). One has (176), (177), (178) in C ∩ Ō with support(ξ) ∩ Ō ⊂ C.
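For completeness, the compatibility condition (191) used in the inner-cube construction above follows directly from (174) and (175):
\[
\partial_2 W_3-\partial_3 W_2=\int_0^1\big(\partial_2 w_2+\partial_3 w_3\big)(x,x_2,x_3)\,dx
=-\int_0^1\partial_1 w_1(x,x_2,x_3)\,dx=0,
\]
since w_1 vanishes at x_1 = 0 and x_1 = 1 by (174).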
Figure 1: Setting of the main Navier-Stokes control problem.
* Work supported by ERC Advanced Grant 266907 (CPDENL) of the 7th Research Framework Programme (FP7). | 117,612 | [
"12845",
"177864"
] | [
"1005052",
"1005054",
"1005052",
"27730"
] |
01485213 | en | [
"shs",
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01485213/file/SocialCommunicationBus.pdf | Rafael Angarita
Nikolaos Georgantas
Cristhian Parra
James Holston
email: jholston@berkeley.edu
Valérie Issarny
Leveraging the Service Bus Paradigm for Computer-mediated Social Communication Interoperability
Keywords: Social Communication, Computer-mediated Communication, Interoperability, Middleware, Service-oriented Architecture
Computer-mediated communication can be defined as any form of human communication achieved through computer technology. From its beginnings, it has been shaping the way humans interact with each other, and it has influenced many areas of society. There exist a plethora of communication services enabling computer-mediated social communication (e.g., Skype, Facebook Messenger, Telegram, WhatsApp, Twitter, Slack, etc.). Based on personal preferences, users may prefer a communication service rather than another. As a result, users sharing same interests may not be able to interact since they are using incompatible technologies. To tackle this interoperability barrier, we propose the Social Communication Bus, a middleware solution targeted to enable the interaction between heterogeneous communication services. More precisely, the contribution of this paper is threefold: (i), we propose a survey of the various forms of computer-mediated social communication, and we make an analogy with the computing communication paradigms; (ii), we revisit the eXtensible Service Bus (XSB) that supports interoperability across computing interaction paradigms to provide a solution for computer-mediated social communication interoperability; and (iii), we present Social-MQ, an implementation of the Social Communication Bus that has been integrated into the AppCivist platform for participatory democracy.
I. INTRODUCTION
People increasingly rely on computer-mediated communication for their social interactions (e.g., see [START_REF] Pillet | Email-free collaboration: An exploratory study on the formation of new work habits among knowledge workers[END_REF]). This is a direct consequence of the global reach of the Internet combined with the massive adoption of social media and mobile technologies that make it easy for people to view, create and share information within their communities almost anywhere, anytime. The success of social media has further led -and is still leading -to the introduction of a large diversity of social communication services (e.g., Skype, Facebook, Google Plus, Telegram, Instagram, WhatsApp, Twitter, Slack, ...). These services differ according to the types of communities and interactions they primarily aim at supporting. However, existing services are not orthogonal and users ultimately adopt one service rather than another based on their personal experience (e.g., see the impact of age on the use of computerbased communication in [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF]). As a result, users who share similar interests from a social perspective may not be able to interact in a computer-mediated social sphere because they adopt different technologies. This is particularly exacerbated by the fact that the latest social media are proprietary services that offer an increasingly rich set of functionalities, and the function of one service does not easily translate -both socially and technically-into the function of another. As an illustration, compare the early and primitive computer-mediated social communication media that is email with the richer social network technology. Protocols associated with the former are rather simple and email communication between any two individuals is now trivial, independent of the mail servers used at both ends. On the other hand, protocols associated with today's social networks involve complex interaction processes, which prevent communication across social networks.
The above issue is no different from the long-standing issue of interoperability in distributed computing systems, which requires mediating (or translating) the protocols run by the interacting parties for them to be able to exchange meaningful messages and coordinate. And, while interoperability in the early days of distributed systems was essentially relying on the definition of standards, the increasing complexity and diversity of networked systems has led to the introduction of various interoperability solutions [START_REF] Issarny | Middleware-layer connector synthesis: Beyond state of the art in middleware interoperability[END_REF]. In particular, today's solutions allow connecting networked systems in a nonintrusive way, i.e., without requiring to modify the systems [START_REF] Spitznagel | A compositional formalization of connector wrappers[END_REF], [START_REF] Mateescu | Adaptation of service protocols using process algebra and on-the-fly reduction techniques[END_REF], [START_REF] Gierds | Reducing adapter synthesis to controller synthesis[END_REF], [START_REF] Bennaceur | Automated synthesis of mediators to support component interoperability[END_REF], [START_REF] Bennaceur | A unifying perspective on protocol mediation: interoperability in the future internet[END_REF]. These solutions typically use intermediary software entities whose name differ in the literature, e.g., mediators [START_REF] Wiederhold | Mediators in the architecture of future information systems[END_REF], wrappers [START_REF] Spitznagel | A compositional formalization of connector wrappers[END_REF], mediating adapters [START_REF] Mateescu | Adaptation of service protocols using process algebra and on-the-fly reduction techniques[END_REF], or binding components [START_REF] Bouloukakis | Integration of Heterogeneous Services and Things into Choreographies[END_REF]. However, the key role of this software entity, whatever its name, is always the same: it translates the data model and interaction processes of one system into the ones of the other system the former needs to interact with, assuming of course that the systems are functionally compatible. In the following, we use the term binding component to refer to the software entity realizing the necessary translation. The binding component is then either implemented in full by the developer, or synthesized -possibly partially -by a dedicated software tool (e.g., [START_REF] Bennaceur | Automated synthesis of mediators to support component interoperability[END_REF]).
The development of binding components depends on the architecture of the overall interoperability system, since the components need to be deployed in the network and connected to the systems for which they realize the necessary data and process translation. A successful architectural paradigm for the interoperability system is the (Enterprise) Service Bus. A service bus introduces a reference communication protocol and data model to translate to and from, as well as a set of commodity services such as service repository, enforcing quality of service and service composition. Conceptually, the advantage of the service bus that is well illustrated by the analogy with the hardware bus from which it derives, is that it acts as a pivot communication protocol to which networked systems may plug into. Then, still from a conceptual perspective, a networked system becomes interoperable "simply" by implementing a binding component that translates the system's protocol to that of the bus. It is important to highlight that the service bus is a solution to middleware-protocol interoperability; it does not deal with application-layer interoperability [START_REF] Issarny | Middleware-layer connector synthesis: Beyond state of the art in middleware interoperability[END_REF], although nothing prevents the introduction of higher-level domain-specific buses.
This paper is specifically about that topic: introducing a "social communication bus" to allow interoperability across computer-mediated social communication paradigms. Our work is motivated by our research effort within the AppCivist project (http://www.appcivist.org/) [START_REF] Pathak | AppCivist -A Service-oriented Software Platform for Socially Sustainable Activism[END_REF]. AppCivist provides a software platform for participatory democracy that leverages the reach of the Internet and the powers of computation to enhance the experience and efficacy of civic participation. Its first instance, AppCivist-PB, targets participatory budgeting, an exemplary process of participatory democracy that lets citizens prepare and select projects to be implemented with public funds by their cities [START_REF] Holston | Engineering software assemblies for participatory democracy: The participatory budgeting use case[END_REF]. For city-wide engagement, AppCivist-PB must enable citizens to participate with the Internet-based communication services they are the most comfortable with. In current practice, for example, seniors and teenagers (or youngsters under 18) are often the most common participants of this process [START_REF] Hagelskamp | Public Spending, by the People. Participatory Budgeting in the United States and Canada in 2014 -15[END_REF], and their uses of technology can be fairly different. While seniors prefer traditional means of communication like phone calls and emails [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], a typical teenager will send and receive 30 texts per day [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF]. The need for interoperability in this context is paramount since the idea is to include people in the participatory processes without leaving anyone behind. This has led us to revisit the service bus paradigm, for the sake of social communication across communities, to gather together the many communities of our cities.
The contributions of our paper are as follows:
• Social communication paradigms: Section II surveys the various forms of computer-mediated social communication supported by today's software services and tools. We then make an analogy with the communication paradigms implemented by middleware technologies, thereby highlighting that approaches to middleware interoperability conveniently apply to computer-mediated social communication interoperability.
• Social Communication Bus architecture: Section III then revisits the service bus paradigm for the domain-specific context of computer-mediated social interactions. We specifically build on the XSB bus [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF] that supports interoperability across interaction paradigms as opposed to interoperability across heterogeneous middleware protocols implementing the same paradigm. The proposed bus architecture features the traditional concepts of bus protocols and binding components, but those are customized for the sake of social interactions, whose couplings differ along the social and presence dimensions.
• Social Communication Bus instance for participatory democracy: Section IV refines our bus architecture, introducing the Social-MQ implementation that leverages state-of-the-art technologies. Section V then introduces how Social-MQ is used by the AppCivist-PB platform to enable reaching out to a larger community of citizens in participatory budgeting campaigns. Finally, Section VI summarizes our contributions and introduces some perspectives for future work.
II. COMPUTER-MEDIATED SOCIAL COMMUNICATION
A. Computer-mediated Social Communication: An Overview Social communication technologies change the way humans interact with each other by influencing identities, relationships, and communities [START_REF] Thurlow | Computer mediated communication: Social interaction and the internet[END_REF]. Any human communication achieved through, or with the help of, computer technology is called computer-mediated communication [START_REF] Thurlow | Computer mediated communication: Social interaction and the internet[END_REF], or as we call it in our work, computer-mediated social communication to highlight the fact that we are dealing with human communication. In this paper, we more specifically focus on text-and voicebased social communication technologies. These social communication technologies are usually conceived as Internetbased services -which we call communication services -that allow individuals to communicate between them [START_REF] Richter | Functions of social networking services[END_REF]. Popular communication services include: Skype, which focuses on video chat and voice call services; Facebook Messenger, Telegram, WhatsApp, Slack, and Google Hangouts, which focus on instant messaging services; Twitter, which enables users to send and read short (140-character) messages; email, which lets users exchange messages, and SMS, which provides text messaging services for mobile telephony and also for the Web.
Depending on the communication service, users can send messages directly to each other or to a group of users; for example, a user can send an email directly to another user or to a mailing list where several users participate. In the former case, the users communicating via direct messaging "know each other". It does not mean that they have to know each other personally, it means they have to know the address indicating where and how to send messages directly to each other. In the latter example, communication is achieved via an intermediary: the mailing list address. In this case, senders do not specify explicitly the receivers of their messages; instead, they only have to know the address of the intermediary to which they can send messages. The intermediary then sends messages to the relevant receivers, or receivers ask the intermediary for messages they are interested in. Another example of an intermediary is Twitter, where users can send messages to a topic. Interested users can subscribe to that topic and retrieve published messages.
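The distinction between direct addressing and communication through an intermediary can be sketched as follows; the code is purely illustrative and the names are ours, not those of any particular communication service.

from typing import Callable

class DirectChannel:
    """Interpersonal communication: the sender must know the receiver's address."""

    def send(self, sender: str, receiver_address: str, body: str) -> None:
        # In a real service this would deliver the message to the given address.
        print(f"{sender} -> {receiver_address}: {body}")

class Intermediary:
    """Group communication: senders and receivers only need to know the intermediary."""

    def __init__(self) -> None:
        self._subscribers: dict = {}

    def subscribe(self, topic: str, deliver: Callable[[str, str], None]) -> None:
        self._subscribers.setdefault(topic, []).append(deliver)

    def publish(self, sender: str, topic: str, body: str) -> None:
        for deliver in self._subscribers.get(topic, []):
            deliver(sender, body)

In the first case the sender addresses the receiver explicitly; in the second case, as with a mailing list or a Twitter topic, messages are routed through the intermediary to whoever subscribed.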
Overall, existing communication services may be classified according to the types of interactions they support [START_REF] Walther | Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction[END_REF]: interpersonal non-mediated communication, where individuals interact directly; impersonal group communication, where people interact within a group; and impersonal notifications, where people interact in relation with some events to be notified. Our goal is then to leverage the technical interoperability solutions introduced for distributed systems for the specific domain of computer-mediated social communication so as to enable users to interact across communication services.
B. Computer-mediated Social Communication: A Technical Perspective
Communication protocols underlying distributed systems may be classified along the following coupling dimensions [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF], [START_REF] Eugster | The many faces of publish/subscribe[END_REF]:
• Space coupling: A tight (resp. loose) space coupling means that the sender and target receiver(s) need (resp. do not need) to know about each other to communicate.
• Time coupling: A tight time coupling indicates that the sender and target receiver(s) have to be online at the same time to communicate, whereas a loose time coupling allows the receiver to be offline at the time of emission; the receiver will receive messages when it is online again.
• Synchronization coupling: Under a tight synchronization coupling, the sender is blocked while sending a message and the receiver(s) is (are) blocked while waiting for a message. Under a loose synchronization coupling, the sender is not blocked, and the target receiver(s) can get messages asynchronously while performing some concurrent activity.
Following the above, we may define the coupling dimensions associated with computer-mediated social communication as:
• Social coupling: It is analogous to space coupling and refers to whether or not participants need to know each other to communicate.
• Presence coupling: It is analogous to the time coupling concept and refers to whether participants need to interact simultaneously.
• Synchronization coupling: Since the interacting components are humans, the synchronization coupling is always loose, as people can carry out other activities after sending a message or while waiting for one. Hence, we do not consider this specific coupling in the remainder.
We may then characterize the types of interactions of communication services in terms of the above couplings (see Table I for a summary and Table II for the related classification of popular services):
• Interpersonal non-mediated communication: Communicating parties need to know each other. Thus, the social coupling is tight. However, the presence coupling may be either tight or loose. Communication services enforcing a tight presence coupling relate to video/voice calls and chat systems. On the other hand, base services like email, SMS, and instant messaging adopt a loose presence coupling.
• Impersonal group communication: The social coupling is loose because any participant may communicate with a group without needing to know its members. A space serves as an area that holds all the information making up the communication; to participate, users modify the information in the space. The presence coupling may be either loose or tight. As an example of tight presence coupling, shared meeting notes may be deleted once a meeting is over, so that newcomers cannot read them; similarly, newcomers in a Q&A session cannot hear previous discussions. Conversely, a service may implement loose presence coupling so that a participant (group member) can write a post-it note and leave it available to anybody entering the meeting room. In addition, groups can be either closed or open [START_REF] Liang | Process groups and group communications: Classifications and requirements[END_REF]. In a closed group, only members can send messages; in an open group, non-members may also send messages to the group. Video/voice conferences and real-time multi-user chat systems are examples of group communication with a tight presence coupling. Message forums, file sharing, and multi-user messaging systems are examples of group communication with a loose presence coupling.
• Impersonal notifications: The social and presence couplings are loose. Participants do not need to know each other to interact. They communicate on the basis of shared interests (aka hashtags or topics). Twitter and Instagram are popular examples of such services.
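To make the classification concrete, the following Python sketch (not part of the original text) encodes the coupling properties of a few of the services named above as plain data; the service list and the helper function are illustrative only, and Table II remains the authoritative classification.

    # True = tight coupling, False = loose coupling
    SERVICE_COUPLINGS = {
        "video/voice call":  {"social": True,  "presence": True},
        "email":             {"social": True,  "presence": False},
        "SMS":               {"social": True,  "presence": False},
        "instant messaging": {"social": True,  "presence": False},
        "multi-user chat":   {"social": False, "presence": True},
        "message forum":     {"social": False, "presence": False},
        "Twitter topic":     {"social": False, "presence": False},
    }

    def is_interpersonal(service):
        """Tight social coupling marks interpersonal non-mediated communication."""
        return SERVICE_COUPLINGS[service]["social"]

    # Note: services with loose social coupling may be either impersonal group
    # communication or impersonal notifications; the couplings alone do not
    # distinguish the two (see Table I and Table II).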
C. Communication Service Interoperability
In general, users prefer a type of social interaction over the others [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF], [START_REF] Joinson | Self-esteem, interpersonal risk, and preference for e-mail to face-to-face communication[END_REF]. This preference translates into favoring certain communication services. For example, someone may never want to interact directly and thus uses email whenever possible. Further, the adoption of specific communication service instances for social interactions increasingly limits the population of users with which an individual can communicate. Our work focuses on the study of interoperability across communication services, including services promoting different types of social interaction. This is illustrated in Figure 1. We then need to study the extent to which different types of social interaction may be reconciled and when it is appropriate to synthesize the corresponding communication protocol adaptation. To do so, we build upon the eXtensible Service Bus (XSB) [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF], an approach to reconcile the middleware protocols run by networked systems across the various coupling dimensions (i.e., space, time, synchronization). This leads us to introduce the Social Communication Bus paradigm.
III. THE SOCIAL COMMUNICATION BUS
A. The eXtensible Service Bus
The eXtensible Service Bus (XSB) [START_REF] Georgantas | Serviceoriented distributed applications in the future internet: The case for interaction paradigm interoperability[END_REF], [START_REF] Kattepur | Analysis of timing constraints in heterogeneous middleware interactions[END_REF] defines a connector that abstracts and unifies three interaction paradigms found in distributed computing systems: client-server, a common paradigm for Web services where a client communicates directly with a server; publish-subscribe, a paradigm for content broadcasting; and tuple-space, a paradigm for sharing data with multiple users who can read and modify that data. XSB is implemented as a common bus protocol that enables interoperability among services employing heterogeneous interactions following one of these computing paradigms. It also provides an API based on the post and get primitives to abstract the native primitives of the client-server (send and receive), publish-subscribe (publish and retrieve), and tuple-space interactions (out, take, and read).
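As an illustration only (the real XSB API is defined in the cited work, not here), the mapping of native primitives onto the common post/get pair can be sketched in Python as a lookup table plus a thin facade; the class and table below are invented for this explanation.

    # Native primitives of each interaction paradigm, as listed above.
    NATIVE_PRIMITIVES = {
        "client-server":     {"post": "send",    "get": "receive"},
        "publish-subscribe": {"post": "publish", "get": "retrieve"},
        "tuple-space":       {"post": "out",     "get": "take"},  # "read" is a non-destructive get
    }

    class XsbFacade:
        """Expose post/get and delegate to the paradigm-specific connector."""
        def __init__(self, paradigm, connector):
            self.names = NATIVE_PRIMITIVES[paradigm]
            self.connector = connector

        def post(self, scope, message):
            getattr(self.connector, self.names["post"])(scope, message)

        def get(self, scope):
            return getattr(self.connector, self.names["get"])(scope)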
In this work, we present the Social Communication Bus as a higher-level abstraction: XSB abstracts the interactions of distributed computing interaction paradigms, while the Social Communication Bus abstracts interaction at the human level, that is, computer-mediated social communication. Nonetheless, the Social Communication Bus relies on the XSB architectural paradigm. Most notably, the proposed Social Communication Bus inherits from XSB the approach to cross-paradigm interoperability that allows overcoming the coupling heterogeneity of protocols.
Fig. 2: The Social Communication Bus architecture
B. Social Communication Bus Architecture
Figure 2 introduces the architecture of the Social Communication Bus. The bus revisits the integration paradigm of the conventional Enterprise Service Bus [START_REF] Chappell | Enterprise service bus[END_REF] to enable interoperability across the computer-mediated social communication paradigms presented in Section II and concrete communication services implementing them.
In more detail, and as depicted, the Social Communication Bus implements a common intermediate bus protocol that facilitates the interconnection of heterogeneous communication services: plugging in a new communication service only requires implementing a conversion from the protocol of the service to that of the bus, thus considerably reducing the development effort. This conversion is realized by a dedicated component, called Binding Component (BC), which connects the communication service to the Social Communication Bus. The implemented binding then overcomes communication heterogeneity at both the abstract level (i.e., it solves coupling mismatches) and the concrete level (i.e., it solves data and protocol message mismatches). The BCs perform the bridging between a communication service and the Social Communication Bus by relying on the SC connectors. An SC connector provides access to the operations of a particular communication service and to the operations of the Social Communication Bus. Communication services can thus communicate in a loosely coupled fashion via the Social Communication Bus.
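The role of a BC can be pictured as two pumping directions between a service-side SC connector and the bus-side connector. The class below is a simplification written for this presentation (class and method names are invented) and it omits the data and protocol message adaptation that a real BC also performs.

    class BindingComponent:
        def __init__(self, service_connector, bus_connector,
                     translate_out=None, translate_in=None):
            self.service = service_connector                    # speaks the service's native protocol
            self.bus = bus_connector                            # speaks the bus protocol
            self.translate_out = translate_out or (lambda m: m) # service -> bus adaptation
            self.translate_in = translate_in or (lambda m: m)   # bus -> service adaptation

        def pump_service_to_bus(self, scope):
            message = self.service.get(scope)        # receive from the communication service
            self.bus.post(scope, self.translate_out(message))

        def pump_bus_to_service(self, scope):
            message = self.bus.get(scope)            # receive from the Social Communication Bus
            self.service.post(scope, self.translate_in(message))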
The Social Communication Bus architecture not only reduces the development effort but also allows solving the interoperability issues presented in Section II-C as follows:
• Social coupling mediation.
C. The API for Social Communication (SC API)
To reconcile the different interfaces of communication services and connect them to the Social Communication Bus, we introduce a generic abstraction. The proposed abstraction comes in the form of a Social Communication Application Programming Interface (SC API). The SC API abstracts communication operations executed by the human user of a communication service, such as sending or receiving a message. We also assume that these operations are exported by the communication service in a public (concrete) API, native to the specific communication service. This enables deploying interoperability artifacts (i.e., BCs) between heterogeneous communication services that leverage these APIs.
The SC API expresses basic common end-to-end interaction semantics shared by the different communication services, while it abstracts potentially heterogeneous semantics that is proper to each service. The SC API relies on the two following basic primitives:
• a post() primitive employed by a communication service to send a message;
• a get() primitive employed by a communication service to receive a message.
To describe a communication service according to the SC API, we propose a generic interface description (SC-IDL). This interface describes a communication service's operations, including the name and type of their parameters. The description is complemented with the following communication service information: name, its name; address, the address of the endpoint of its public API; protocol, its middleware protocol (e.g., HTTP, SMTP, AMQP, MQTT); and social_properties, which specifies whether the communication service handles messages when its users are offline.
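Since an SC-IDL description is ordinary JSON (see the listings in Section V), it can be manipulated directly; the small validation helper below is a sketch written for this explanation, using the top-level field names that appear in those listings, and does not reflect the actual schema checks performed by Social-MQ.

    import json

    REQUIRED_FIELDS = ("name", "address", "protocol", "operations", "properties")

    def load_sc_idl(text):
        """Parse an SC-IDL document and check that its top-level fields are present."""
        description = json.loads(text)
        missing = [field for field in REQUIRED_FIELDS if field not in description]
        if missing:
            raise ValueError("incomplete SC-IDL, missing: " + ", ".join(missing))
        return description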
D. Higher-order Binding Components
BCs are in charge of the underlying middleware protocol conversion and application data adaptation between a communication service and the Social Communication Bus. As presented in XSB, BCs do not alter the behavior and properties of the communication services associated with them; that is, they do not change the end-to-end communication semantics. However, since these communication services can be heterogeneous and can belong to different providers, it may be desirable to improve their end-to-end semantics to satisfy additional user requirements and to mediate social and presence coupling incompatibilities. To this end, we introduce higher-order BCs, which are BCs capable of altering the perceived behavior of communication services. We propose the two following higher-order BC capabilities:
• Handling offline receiver: this case relates to the mediation of presence coupling in computer-based social communication, and it occurs when the receiver is not online and uses a communication service that does not support offline message reception. Even though the server hosting this communication service is up and running, it discards received messages if the recipient is offline. A higher-order BC will send undelivered messages when the receiver logs back into the system. We do not enforce this capability; instead, we let users decide whether they want to accept offline messages or not.
• Handling unavailable receiver: this case is similar to the previous one but from a computing perspective, related to fault tolerance; for example, the server providing the receiver service is down, or there is no connectivity between the BC and the receiver. The BC will send undelivered messages once the receiver service is available again. In contrast to the previous case, this capability is provided by higher-order BCs by default.
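The "handling offline receiver" capability can be sketched as store-and-forward buffering. In Social-MQ undelivered messages are actually kept in a database (Figure 5 (b)), so the in-memory list and the method names below are simplifications introduced here for illustration.

    class HigherOrderBC:
        def __init__(self, deliver, receiver_online, accept_offline=True):
            self.deliver = deliver                  # pushes a message to the communication service
            self.receiver_online = receiver_online  # probes receiver/service availability
            self.accept_offline = accept_offline    # user opt-in for offline message reception
            self.pending = []                       # undelivered messages

        def forward(self, message):
            if self.receiver_online():
                self.deliver(message)
            elif self.accept_offline:
                self.pending.append(message)        # keep instead of discarding

        def on_receiver_available(self):
            while self.pending:                     # flush in FIFO order once back online
                self.deliver(self.pending.pop(0))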
IV. IMPLEMENTATION
A. Social-MQ: An AMQP-based Implementation of the Social Communication Bus
Social-MQ leverages the AMQP protocol as the Social Communication Bus. AMQP has several open source implementations, and its direct and publish/subscribe models serve well the purpose of social interactions: interpersonal non-mediated communication, impersonal group communication, and impersonal notifications. Additionally, AMQP has proven to have good reliability and performance in real-world critical applications in a variety of domains [START_REF] Appel | Towards Benchmarking of AMQP[END_REF]. We use RabbitMQ [START_REF]Rabbitmq[END_REF] as the AMQP implementation.
The bus comes along with a BC generator (see Figure 3). The generator takes as input the description of a communication service (SC-IDL), chooses the corresponding SC connector from the Implementation Pool, and produces a Concrete BC connecting the communication service with Social-MQ. The BC generator is implemented on the Node.js [START_REF]Node.js[END_REF] platform, which is based on the Chrome JavaScript virtual machine. Node.js implements the reactor pattern, which allows building highly concurrent applications. Currently, BCs are generated for the Node.js platform only; we intend to support other languages or platforms in future versions of the bus. Social-MQ currently supports four middleware protocols: AMQP, HTTP, MQTT, and SMTP.
Fig. 3: BC Generator
Fig. 4: Social-MQ architecture
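The generation step can be pictured as selecting a connector by the protocol field of the SC-IDL. The pool below is hypothetical (only the four protocol names come from the text) and the returned dictionary merely stands in for the generated Node.js binding component.

    import json

    IMPLEMENTATION_POOL = {          # one SC connector per supported protocol
        "AMQP": "AmqpConnector",
        "HTTP": "HttpConnector",
        "MQTT": "MqttConnector",
        "SMTP": "SmtpConnector",
    }

    def generate_bc(sc_idl_text):
        description = json.loads(sc_idl_text)
        protocol = description["protocol"]
        if protocol not in IMPLEMENTATION_POOL:
            raise ValueError("unsupported middleware protocol: " + protocol)
        return {
            "service": description["name"],
            "service_side_connector": IMPLEMENTATION_POOL[protocol],
            "bus_side_connector": IMPLEMENTATION_POOL["AMQP"],  # Social-MQ itself is AMQP-based
        }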
Figure 4 illustrates the connection of communication services to Social-MQ. All the associated BCs are AMQP publishers and/or subscribers so that they can communicate with Social-MQ. In more detail:
• BC 1 exposes an HTTP endpoint so that the HTTP communication service can send messages to it, and it can act as an HTTP client to post messages to the communication service.
• BC 2 acts as an SMTP server and client to communicate with Email.
• BC 3 has MQTT publisher and subscriber capabilities to communicate with the MQTT communication service.
The above BCs are further refined according to the actual application data used by AppCivist, Email, and Facebook Messenger.
The interested reader may find a set of BCs generated by Social-MQ at https://github.com/rafaelangarita/bc-examples. These BCs can be executed and tested easily by following the provided instructions.
B. Social-MQ Implementation of Social Interaction Mediation
The loosely coupled interaction model between communication services provided by Social-MQ allows the mediation between the various types of social interactions supported by the connected communication services:
• Social coupling mediation: In the publish/subscribe model implemented by Social-MQ (Figure 5 (a)), senders publish a message to an address in Social-MQ instead of sending it directly to a receiver. Receivers can subscribe to this address and be notified when new messages are published. This way, all social communication paradigms can interact using the publish/subscribe model.
• Presence coupling mediation: When a communication service cannot receive messages because it is not available or its user is offline, messages intended for it are sent to a database, to be queried and delivered when the communication service can receive messages again (Figure 5 (b)).
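For illustration, the publish side of the social coupling mediation can be written with the Python AMQP client pika against a RabbitMQ broker; the exchange name, broker host and the storage function mentioned in the comments are assumptions made for this sketch and do not describe the actual Social-MQ code.

    import pika

    def publish_to_bus(exchange, message, host="localhost"):
        """Senders publish to an exchange instead of addressing a receiver directly."""
        connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
        channel = connection.channel()
        channel.exchange_declare(exchange=exchange, exchange_type="fanout")
        channel.basic_publish(exchange=exchange, routing_key="", body=message)
        connection.close()

    # Presence coupling mediation (sketch): if a subscriber's service is offline,
    # a BC would persist the message, e.g. store_for_later(subscriber, message),
    # and replay it once the service can receive messages again.
    publish_to_bus("appcivist.weekly.notification", b"new forum post digest")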
V. THE APPCIVIST USE CASE
A. The AppCivist Platform for Participatory Democracy
To illustrate our approach, we elaborate on the use of Social-MQ to enable the AppCivist application for participatory democracy [START_REF] Pathak | AppCivist -A Service-oriented Software Platform for Socially Sustainable Activism[END_REF], [START_REF] Holston | Engineering software assemblies for participatory democracy: The participatory budgeting use case[END_REF] to interoperate with various communication services. This way, the citizens participating in AppCivist actions may keep interacting using the social media they prefer.
AppCivist allows activist users to compose their own applications, called Assemblies, using relevant Web-based components enabling democratic assembly and collective action. AppCivist provides a modular platform of services that range from proposal making and voting to communication and notifications. Some of these modules are offered as services implemented within the platform itself (e.g., proposal making), but for others, it relies on existing services. One such case is that of communication and notifications. Participatory processes often rely on a multitude of diverse users, who do not always coincide in their technological choices. For instance, participatory budgeting processes involve people from diverse backgrounds and of all ages: from adolescents (or youngsters under 18) to seniors [START_REF] Hagelskamp | Public Spending, by the People. Participatory Budgeting in the United States and Canada in 2014 -15[END_REF]. Naturally, their technology adoption can be fairly different. While seniors favor traditional means of communication like phone calls and emails [START_REF] Dickinson | Keeping in touch: Talking to older people about computers and communication[END_REF], a typical teenager will send and receive 30 texts per day [START_REF] Lenhart | Teens, social media & technology overview 2015[END_REF]. The need for interoperability in this context is acute, and the Social Communication Bus is a perfect fit, with its ability to bridge the communication services that power computer-based social communication. In the following, we discuss three communication scenarios: (i) impersonal notifications interconnected with impersonal group communication; (ii) interpersonal non-mediated communication interconnected with impersonal group communication; and (iii) interpersonal non-mediated communication interconnected with impersonal notifications. The last scenario also illustrates the presence coupling mediation feature of Social-MQ.
B. Impersonal Notifications Interconnected with Impersonal Group Communication
In this scenario, users of AppCivist interact via the impersonal notification paradigm by using a notification system implemented with AMQP, as described in Listing 1. This notification system sends messages to concerned or interested users when different events occur in AppCivist, for example, when a user posts a new forum message. This scenario is illustrated in Figure 6 (a). AppCivist is connected to Social-MQ via BC 1 ; however, there is no need for protocol mediation, since both AppCivist and Social-MQ use AMQP. Mailing List is another system, which exists independently of AppCivist. It is a traditional mailing list in which users communicate with each other using the impersonal group communication paradigm by sending emails to the group email address. Mailing List is connected to Social-MQ via BC 2 , and it is described in Listing 2. It accepts messages whether receivers are online or offline (properties.offline.handling = true, Listing 2), owing to the loose presence coupling nature of email communication. Now, suppose users of Mailing List want to be notified when a user posts a new forum message in AppCivist. Since AppCivist is an AMQP-based notification system, BC 1 can act as an AMQP subscriber, receive notifications of new forum posts, and publish them in Social-MQ. In the same way, BC 2 acts as an AMQP subscriber and receives notifications of new forum posts; however, this time BC 2 receives the notifications from Social-MQ. Finally, BC 2 sends an email to Mailing List using the SC Connector SMTP.
Listing 1: AppCivist AMQP SC-IDL { "name":"AppCivist" "address":"appcivist.littlemacondo.com:5672", "protocol":"AMQP", "operations":[ "notify":{ "interaction":"one-way", "type":"data", "scope":"assembly_id.forum.post", "post_message":[ {"name":"notification", "type":"text", "optional":"false"}] } ], "properties":[ {"offline":"true"} ] } Listing 2: Mailing List SC-IDL { "name":"Mailing List", "address":"mailinglist_server", "protocol":"SMTP", "operations":[ "receive_email":{ "interaction":"one-way", "type":"data", "scope":"mailinglist_address", "get_message":[ {"name":"subject", "type":"emailSubject", "optional":"true"}, {"name":"message", "type":"messageBody", "optional":"true"}, {"name":"attachment", "type":"file", "optional":"true"}] }, "send_email":{ //same as receive_email } ], "properties":[ {"offline":"true"} ] }
C. Interpersonal Non-mediated Communication Interconnected with Impersonal Group Communication
In the scenario illustrated in Figure 6 (b), there is an AppCivist communication service called Weekly Notifier. It queries the AppCivist database once a week, extracts the messages posted in AppCivist forums during the last week, builds a message from them, and sends the message to concerned users using interpersonal non-mediated communication via HTTP. That is, it is an HTTP client, so it sends the message to an HTTP server. Now, suppose we want Weekly Notifier to communicate with Mailing List. BC 1 exposes an HTTP endpoint to which Weekly Notifier can post HTTP messages. Unlike the previous case, we need to modify the original Weekly Notifier communication service, since it needs to send messages to the endpoint exposed by BC 1 and it needs to specify Mailing List as a recipient.
Listing 3: Weekly Notifier SC-IDL { "name":"AppCivist", "address":"", "protocol":"HTTP", "operations":[ "notify":{ "interaction":"one-way", "type":"data", "post_message":[ {"name":"notification", "type":"text", "optional":"false"}] } ], "properties":[ {"offline":"false"} ] }
D. Interpersonal Non-mediated Communication Interconnected with Impersonal Notifications
After having introduced the previous scenario, we can pose the following question: what if messages sent by Weekly Notifier must be sent to multiple receivers? Should Weekly Notifier know them all and send the message individually to each one of them? Independently of the communication services registered in Social-MQ and their social communication paradigms, they can all interact in a fully decoupled fashion in terms of social coupling.
Social-MQ takes advantage of the AMQP concept of exchanges, which are entities to which messages can be sent. Exchanges can then route messages to receivers, or interested receivers can subscribe to them. In the scenario illustrated in Figure 6 (c), Weekly Notifier sends HTTP messages directed to the Social-MQ exchange named AppCivist weekly notification. Interested receivers can then subscribe to AppCivist weekly notification to receive messages from Weekly Notifier. Finally, Mailing List and the instant messaging communication service, IM (Listing 4), can subscribe to AppCivist weekly notification via their corresponding BCs.
Listing 4: IM SC-IDL { "name":"IM", "address":"mqtt.example", "protocol":"MQTT", "operations":[ "receive_message":{ "interaction":"one-way", "type":"data", "scope":"receiver_id", "get_message":[ {"name":"message", "type":"text", "optional":"false"}] }, "receive_attachement":{ "interaction":"one-way", "type":"data", "scope":"receiver_id", "get_message":[ {"name":"message", "type":"file", "optional":"false"}] }, "send_message":{ //same as receive_message }, "send_attachement":{ //same as receive_attachement } ], "properties":[ {"offline":"false"} ] }
E. Assessment
In this section, we have studied three case studies illustrating how Social-MQ can solve the problem of computer-mediated social communication interoperability. These case studies are implemented for the AppCivist application for participatory democracy. As a conclusion, we argue that: (i), Social-MQ can be easily integrated into existing or new systems since it is non-intrusive and most of its processes are automated; (ii), regarding performance and scalability, Social-MQ is implemented on top of technologies that have proven to have high performance and scalability in real-world critical applications; and (iii), Social-MQ allows AppCivist users to continue using the communication service they prefer, enabling to reach a larger community of citizens, and promoting citizen participation.
VI. CONCLUSION AND FUTURE WORK
We have presented an approach to enable social communication interoperability in heterogeneous environments. Our main objective is to let users use their favorite communication service. More specifically, the main contributions of this paper are: a classification of the social communication paradigms in the context of computing; an Enterprise Service Bus-based architecture to deal with the social communication interoperability; and a concrete implementation of the Social Communication Bus studying real-world scenarios in the context of participatory democracy.
For our future work, we plan to present the formalization of our approach and to incorporate popular communication services such as Facebook Messenger, Twitter, and Slack. The interoperability with these kinds of services poses additional challenges, since the systems they belong to can be closed; for example, Facebook Messenger allows sending and receiving messages only to and from participants that are already registered in the Facebook platform. Another key issue to study is the security & privacy aspect of the Social Communication Bus to ensure that privacy needs of users communicating across heterogeneous social media are met. Last but not least, our studies will report the real-world experiences of AppCivist users regarding the Social Communication Bus.
Fig. 1: Social communication interoperability
Fig. 5: (a) Space coupling mediation; (b) Presence coupling mediation
Fig. 6: Use cases: (a) impersonal notifications interconnected with impersonal group communication; (b) interpersonal non-mediated communication interconnected with impersonal group communication; and (c) interpersonal non-mediated communication interconnected with impersonal notifications
TABLE I: Properties of computer-mediated social interactions
TABLE II: Classification of popular communication services
ACKNOWLEDGMENTS
This work is partially supported by the Inria Project Lab CityLab (citylab.inria.fr), the Inria@SiliconValley program (project.inria.fr/siliconvalley) and the Social Apps Lab (citrisuc.org/initiatives/social-apps-lab) at CITRIS at UC Berkeley. The authors also acknowledge the support of the CivicBudget activity of EIT Digital (www.eitdigital.eu).
hal-01485231 (2017): https://hal.science/hal-01485231/file/BougharrafRJPCa2017_Postprint.pdf
B. Lakhrissi, H. Bougharraf (email: hafida.bougharraf@gmail.com), R. Benallal, T. Sahdane, D. Mondieig, Ph. Negrier, S. Massip, M. Elfaydy, B. Kabouchi
Study of 5-Azidomethyl-8-hydroxyquinoline Structure by X-ray Diffraction and HF-DFT Computational Methods
Keywords: 5-azidomethyl-8-hydroxyquinoline, X-ray diffraction, single crystal structure, hydrogen bonding, HF-DFT, HOMO-LUMO
INTRODUCTION
The 8-hydroxyquinoline molecule is a widely studied ligand. It is frequently used due to its biological effects, ascribed to the complexation of specific metal ions such as copper(II) and zinc(II) [1,2]. These chelating properties determine its antibacterial action [3][4][5]. Aluminum(III) 8-hydroxyquinolinate has great application potential in the development of organic light-emitting diodes (OLEDs) and electroluminescent displays [6][7][8][9][10]. One of the serious problems of this technology is the failure of these devices at elevated temperatures. Also, the use of 8-hydroxyquinoline in liquid-liquid extraction is limited because of its high solubility in acidic and alkaline aqueous solutions. In order to obtain materials with improved properties for these specific applications, some 8-hydroxyquinoline derivatives have been synthesized. The antitumor and antibacterial properties of these compounds are extensively studied [11][12][13][14][15].
The literature presents X-ray crystal structure analysis of some derivatives of 8-hydroxyquinoline. It was shown, for example, that 8-hydroxyquinoline N-oxide crystallizes in the monoclinic system with space group P2 1 /c, Z = 4, and presents intramolecular H-bonding [16]. Gavin et al. reported the synthesis of 8-hydroxyquinoline derivatives [17,18] and their X-ray crystal structure analysis. 7-Bromoquinolin-8-ol structure was determined as monoclinic with space group C2/c, Z = 8. Its ring system is planar [11]. Recently, azo compounds based on 8-hydroxyquinoline derivatives attract more attention as chelating agents for a large number of metal ions [START_REF] Hunger | Industrial Dyes, Chemistry, Properties, Applications[END_REF][START_REF] La Deda | [END_REF]. Series of heteroarylazo 8-hydroxyquinoline dyes were synthesized and studied in solution to determine the most stable tautomeric form. The X-ray analysis revealed a strong intramolecular H-bond between the hydroxy H and the quinoline N atoms. This result suggests that the synthesized dyes are azo compounds stable in solid state [21].
In the present work, we choose one of the 8-hydroxyquinoline derivatives, namely 5-azidomethyl-8-hydroxyquinoline (AHQ) (Scheme 1), also known for its applicability in the extraction of some metal ions. To the best of our knowledge, the structural and geometrical data of the AHQ molecule have not been reported to date; several techniques have been used to understand its behavior in different solvents [22], but many aspects of this behavior remain unknown. Here we report for the first time the structural characterization of the AHQ molecule by X-ray diffraction analysis and the results of our calculations using density functional theory (B3LYP) and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set, which are chosen to study the structural, geometric and charge transfer properties of the AHQ molecule in the ground state.
EXPERIMENTAL
Synthesis of 5-Azidomethyl-8-hydroxyquinoline
All chemicals were purchased from Aldrich or Acros (France). 5-Azidomethyl-8-hydroxyquinoline was synthesized according to the method described by Himmi et al. [22], by reaction of sodium azide with 5-chloromethyl-8-hydroxyquinoline hydrochloride in refluxing acetone for 24 h (Scheme 1).
A suspension of 5-chloromethyl-8-hydroxyquinoline hydrochloride (1 g, 4.33 mmol) in acetone (40 mL) was added dropwise to NaN3 (1.3 g, 17 mmol) in acetone (10 mL). The mixture was refluxed for 24 h. After cooling, the solvent was evaporated under reduced pressure and the residue was partitioned between CHCl3/H2O (150 mL, 1 : 1). The organic phase was isolated, washed with water (3 × 20 mL) and dried over anhydrous magnesium sulfate. The solvent was removed by rotary evaporation under reduced pressure to give a crude product, which was purified by recrystallization from ethanol to give the pure product as a white solid (0.73 g, 85%).
Characterization of 5-Azidomethyl-8-hydroxyquinoline
The structure of the product was confirmed by 1H and 13C NMR and IR spectra. Melting points were determined on an automatic IA 9200 digital melting point apparatus in capillary tubes and are uncorrected. 1H NMR spectra were recorded on a Bruker 300 WB spectrometer at 300 MHz for solutions in DMSO-d6. Chemical shifts are given as δ values with reference to tetramethylsilane (TMS) as internal standard. Infrared spectra were recorded from 400 to 4000 cm-1 on a Bruker IFS 66v Fourier transform spectrometer using KBr pellets. Mass spectrum was recorded on THERMO Electron DSQ II.
Mp: 116-118°C; IR (KBr) (cm-1): ν 2090 (C-N3, stretching); 1H NMR (300 MHz, DMSO-d6), δ ppm = 7.04-8.90 (m, 4H, quinoline), 4.80 (s, 1H, OH), 2.48 (s, 2H, aromatic-CH2-N3); 13C NMR (75 MHz, DMSO-d6), δ ppm = 51.38, 110.47, 122.69, 127.550, 130.03, 133.17, 139.29, 148.72, 154.54.
Differential Scanning Calorimetry
To study the thermal behavior and to verify a possible phase transition [23] for the studied product, differential scanning calorimetric (DSC) analysis using ~4 mg samples was performed on Perkin-Elmer DSC-7 apparatus. Samples were hermetically sealed into aluminum pans. The heating rate was 10 K/min.
Crystallographic Data and Structure Analysis
X-ray powder diffraction analysis was performed on an Inel CPS 120 diffractometer. The diffraction lines were collected on a 4096-channel detector over an arc of 120° centered on the sample. The CuKα1 (λ = 1.5406 Å) radiation was obtained by means of a curved quartz monochromator at a voltage of 40 kV and a current of 25 mA. The powder was put in a Lindemann glass capillary 0.5 mm in diameter, which was rotated to minimize preferential orientations. The experiment, providing a good signal-to-noise ratio, took approximately 8 h under normal temperature and pressure. The refinement of the structure was performed using the Materials Studio software [24]. For the single-crystal experiment, a colorless single crystal of 0.12 × 0.10 × 0.05 mm size was selected and mounted on a Rigaku Ultrahigh diffractometer with a microfocus X-ray rotating anode tube (45 kV, 66 mA, CuKα radiation, λ = 1.54187 Å). The structure was solved by direct methods using the SHELXS-97 [START_REF] Sheldrick | SHELXS-97 Program for the Refinement of Crystal Structure[END_REF] program and the Crystal Clear-SM Expert 2.1 software.
Theoretical Calculations
Density functional theory (DFT) calculations were performed to determine the geometrical and structural parameters of the AHQ molecule in the ground state, because this approach has a greater accuracy in reproducing experimental geometries. It requires less time and offers similar accuracy for middle-sized and large systems, and it has recently been widely used to study chemical and biochemical phenomena [START_REF] Assyry | [END_REF]27]. All calculations were performed with the Gaussian program package [START_REF] Frisch | Gaussian 03, Revision D.01 and D.02[END_REF], using B3LYP and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set. The starting geometry of the compound was taken from the X-ray refinement data.
RESULTS AND DISCUSSION
Thermal analysis revealed no solid-solid phase transitions (Fig. 1). The melting temperature (mp = 115°C) was in agreement with the value measured in capillary with visual fixation of melting point. The melting heat found by DSC for the compound was ΔH = 155 J/g. X-ray diffraction patterns for AHQ powder at 295 K (Fig. 2) show a good agreement between calculated profile and the experimental result.
The results of refinement for both powder and single crystal techniques converged practically to the same crystallographic structure. Data collection parameters are given in Table 1.
The structure of the AHQ molecule and the packing view calculated from single-crystal diffraction data are shown in Figs. 3 and 4, respectively.
Figure 3 indicates the atom numbering and the anisotropic displacement parameters of disordered pairs in the ORTEP drawing of the AHQ molecule. Absorption corrections were carried out by the semi-empirical method from equivalents. The calculation of average values of intensities gives Rint = 0.0324 for 1622 independent reflections. A total of 6430 reflections were collected in the 7.34° to 68.12° θ range. The final refinement, with anisotropic atomic displacement parameters for all atoms, converged to R1 = 0.0485, wR2 = 0.1312. The unit cell parameters obtained for the single crystal are: a = 12.2879(9) Å, b = 4.8782(3) Å, c = 15.7423(12) Å, β = 100.807(14)°, which indicates that the structure is monoclinic with the space group P21/c. The crystal packing of AHQ shows that the molecule is not planar (Fig. 4). The orientation of the azide group is defined by the torsion angles C(5)-C(7)-C(12)-N(13) [80.75(19)°] and C(8)-C(7)-C(12)-N(13) [-96.42(18)°] obtained by X-ray crystallography (Table 3).
It is well known that hydrogen bonds between the molecule and its environment play an important role in stabilizing the supramolecular structure formed with the neighboring molecules [START_REF] Kadiri | [END_REF]30]. Figure 5 and Table 2 show the intra- and intermolecular hydrogen bonds present in the crystal structure of AHQ. Weak intramolecular O-H•••N hydrogen bonding is present between the phenol donor and the adjacent pyridine N-atom acceptor [O11-N1 = 2.7580(17) Å and O11-H11•••N1 = 115.1(16)°] (Fig. 5a). A moderate intermolecular O-H•••N hydrogen bond is also present [O11-N11 = 2.8746(17) Å and O11-H11•••N1 = 130.1(17)°]. The acceptor function of the oxygen atom is employed by two weak intermolecular C-H•••O hydrogen bonds, whose parameter values are reported in Table 2. Because of the intramolecular hydrogen bonding, the phenol ring is twisted slightly; the torsion angle N(1)-C(6)-C(10)-O(11) is 1.9(2)°. In addition, all the H-bonds involving neighboring molecules are practically in the same ring plane (Fig. 5b).
The standard geometrical parameters were minimized at the DFT (B3LYP) level with the 6-311G(d,p) basis set, then re-optimized at the HF level using the same basis set [START_REF] Frisch | Gaussian 03, Revision D.01 and D.02[END_REF] for a better description. The initial geometry was generated from the X-ray refinement data, and the optimized structures were confirmed to be minimum-energy conformations. The energies and dipole moments for the DFT and HF methods are -18501.70 eV and 2.5114 D, and -18388.96 eV and 2.2864 D, respectively.
The molecular structure of AHQ by optimized DFT (B3LYP) is shown in Fig. 6. The geometry parameters available from experimental data (1), optimized by DFT (B3LYP) (2) and HF (3) of the molecule are presented in Table 3. The calculated and experimental structural parameters for each method were compared.
As seen from Table 3, most of the calculated bond lengths and bond angles are in good agreement with the experimental ones. The largest differences are observed for the N(1)-C(6) bond, with a value of 0.012 Å for the DFT method, and the N(14)-N(15) bond, with a difference of 0.037 Å for the HF method.
For the bond angles, the largest differences occur at the O(11)-C(10)-C(9) bond angle, with a deviation of 4.64° for the DFT method and 4.75° for the HF method. When the X-ray structure of AHQ is compared to the optimized one, the most notable discrepancy is observed in the orientation of the azide moiety, which is defined by the torsion angles C(5)-C(7)-C(12)-N(13) [80.75(19)°] and C(7)-C(12)-N(13)-N(14) [47.0(2)°] obtained by X-ray crystallography; these torsion angles have been calculated to be -66.5778° and -62.3307° for DFT and -65.2385° and -62.3058° for HF, respectively. This larger deviation from the experimental values arises because the theoretical calculations were performed for an isolated molecule, whereas the experimental data were recorded in the solid state and are related to molecular packing [31].
Figure 7 shows the patterns of the HOMO and LUMO of the 5-azidomethyl-8-hydroxyquinoline molecule calculated at the B3LYP level. Generally, this diagram shows the charge distribution around the different types of donor and acceptor bonds present in the molecule in the ground and first excited states. The HOMO, as an electron donor, represents the ability to donate an electron, while the LUMO, as an electron acceptor, represents the ability to receive an electron [32][33][34]. The energy values of the LUMO and HOMO and their energy gap reflect the chemical activity of the molecule. In our case, the calculated energy value of the HOMO is -6.165424 eV and that of the LUMO is -1.726656 eV in the gaseous phase. The energy separation between the HOMO and LUMO is 4.438768 eV; this low value of the HOMO-LUMO energy gap is generally associated with a high chemical reactivity [35,36] and explains the eventual charge transfer interaction within the molecule, which is responsible for the bioactive properties of AHQ [37].
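For reference, the gap quoted above is simply the difference between the two frontier orbital energies, using the values reported in the text:

    \[
      E_{\mathrm{gap}} = E_{\mathrm{LUMO}} - E_{\mathrm{HOMO}}
                       = -1.726656\ \mathrm{eV} - (-6.165424\ \mathrm{eV})
                       = 4.438768\ \mathrm{eV}.
    \]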
Table 2. Geometry of the intra- and intermolecular hydrogen bonds; columns: D, H, A, D-H (Å), H•••A (Å), D•••A (Å), D-H•••A (deg). Symmetry codes: 1: x, y, z; 2: 1 - x, -y, -z; 3: 1 - x, 1/2 + y, 1/2 - z.
Supplementary Material
Crystallographic data for the structure of 5-azidomethyl-8-hydroxyquinoline have been deposited at the Cambridge Crystallographic Data Center (CCDC 1029534). This information may be obtained on the web: http://www.ccdc.cam.ac.uk/deposit.
CONCLUSION
In the present work, 5-azidomethyl-8-hydroxyquinoline was synthesized and its chemical structure was confirmed using 1H NMR, 13C NMR and X-ray diffraction. The DSC analysis revealed no solid-solid transition for this product. The unit cell parameters obtained for the single crystal are: a = 12.2879(9) Å, b = 4.8782(3) Å, c = 15.7423(12) Å, β = 100.807(14)°, which indicates that the structure is monoclinic, P21/c, with Z = 4 and Z' = 1. The crystal structure is stabilized by intra- and intermolecular O-H•••N and C-H•••O hydrogen bonds. This system of hydrogen bonds involves two neighboring molecules in the same plane. The geometric parameters of the AHQ compound in the ground state, calculated by the density functional theory (B3LYP) and Hartree-Fock (HF) methods with the 6-311G(d,p) basis set, are in good agreement with the X-ray results, except for the torsion angles, which deviate from the experimental values because the geometry of the crystal structure is subject to intermolecular forces, such as van der Waals interactions and crystal packing forces, whereas only intramolecular interactions were considered for the isolated molecule. The energy gap was found using HOMO and LUMO calculations; its rather low value indicates a possible charge transfer within the molecule.
Scheme 1. Synthesis of the 5-azidomethyl-8-hydroxyquinoline molecule.
Fig. 1. DSC thermogram for AHQ material: a, heating and b, cooling.
Fig. 3. ORTEP drawing of AHQ showing the atom numbering. Displacement ellipsoids are drawn at the 50% probability level. H atoms are represented as small circles.
Fig. 4. Crystal packing of the AHQ chains.
Fig. 5. View of H-bonding as dashed lines; H atoms not involved are omitted.
Fig. 7. Molecular orbital surfaces and energy levels for the HOMO and LUMO of the AHQ compound computed at the DFT/B3LYP/6-311G(d,p) level.
Table 1. Crystallographic data for the AHQ molecule
Parameter | Monocrystal | Powder
Temperature, K | 260(2) | 295
Wavelength, Å | 1.54187 | 1.54056
Space group | P21/c | P21/c
a, Å | 12.2879(9) | 12.2643(12)
b, Å | 4.8782(3) | 4.8558(6)
c, Å | 15.7423(12) | 15.6838(14)
β, deg | 100.807(14) | 100.952(7)
Volume, Å3 | 926.90(11) | 917.01(17)
Z(Z') | 4(1) | 4(1)
Density (calcd.), g/cm3 | 1.435 | 1.450
Table 3. Structural parameters of AHQ determined experimentally by X-ray diffraction (1) and calculated by the DFT (B3LYP) (2) and HF (3) methods.
hal-01485243 (2017): https://inria.hal.science/hal-01485243/file/RR-9038.pdf
George Bosilca, Clément Foyer, Emmanuel Jeannot, Guillaume Mercier, Guillaume Papauré
Online Dynamic Monitoring of MPI Communications: Scientific User and Developer Guide
Keywords: MPI, Monitoring, Communication Pattern, Process Placement
Understanding application communication patterns became increasingly relevant as the complexity and diversity of the underlying hardware along with elaborate network topologies are making the implementation of portable and efficient algorithms more challenging. Equipped with the knowledge of the communication patterns, external tools can predict and improve the performance of applications either by modifying the process placement or by changing the communication infrastructure parameters to refine the match between the application requirements and the message passing library capabilities. This report presents the design and evaluation of a communication monitoring infrastructure developed in the Open MPI software stack and able to expose a dynamically configurable level of detail about the application communication patterns, accompanied by a user documentation and a technical report about the implementation details.
Introduction
With the expected increase in application concurrency and input data size, one of the most important challenges to be addressed in the forthcoming years is that of data transfers and locality, i.e. how to improve data accesses and transfers in the application. Among the various aspects of locality, one particular issue stems from both the memory and the network. Indeed, the transfer time of data exchanges between processes of an application depends on both the affinity of the processes and their location. A thorough analysis of an application's behavior and of the target underlying execution platform, combined with clever algorithms and strategies, has the potential to dramatically improve the application communication time, making it more efficient and robust to changing network conditions (e.g. contention). In general, the consensus is that the performance of many existing applications could benefit from improved data locality [START_REF] Hoefler | An overview of topology mapping algorithms and techniques in high-performance computing[END_REF].
Hence, to compute an optimal -or at least an efficient -process placement we need to understand on one hand the underlying hardware characteristics (including memory hierarchies and network topology) and on the other hand how the application processes are exchanging messages. The two inputs of the decision algorithm are therefore the machine topology and the application communication pattern. The machine topology information can be gathered through existing tools, or be provided by a management system. Among these tools Netloc/Hwloc [START_REF] Broquedis | hwloc: A generic framework for managing hardware affinities in hpc applications[END_REF] provides a (almost) portable way to abstract the underlying topology as a graph interconnecting the various computing resources. Moreover, the batch scheduler and system tools can provide the list of resources available to the running jobs and their interconnections.
To address the second point and understand the data exchanges between processes, precise information about the application communication patterns is needed. Existing tools either address the issue at a high level, failing to provide accurate details, or they are intrusive and deeply embedded in the communication library. To confront these issues we have designed a light and flexible monitoring interface for MPI applications that possesses the following features. First, the need to monitor more than simply two-sided communications (a communication where the source and destination of the message explicitly invoke an API for each message) is becoming prevalent. As such, our monitoring support is capable of extracting information about all types of data transfers: two-sided, one-sided (or Remote Memory Access) and I/O. In the scope of this report, we focus our analysis on one-sided and two-sided communications. We record the number of messages, the sum of message sizes and the distribution of the sizes between each pair of processes. We also record how these messages have been generated (direct user calls via the two-sided API, automatically generated as a result of collective algorithms, related to one-sided messages). Second, we provide mechanisms for the MPI applications themselves to access this monitoring information, through the MPI Tool interface. This makes it possible to dynamically enable or disable the monitoring (to record only specific parts of the code, or only during particular time periods) and gives the ability to introspect the application behavior. Last, the output of this monitoring provides different matrices describing this information for each pair of processes. Such data is available on-line (i.e. during the application execution) and/or off-line (i.e. for post-mortem analysis and optimization of a subsequent run).
We have conducted experiments to assess the overhead of this monitoring infrastructure and to demonstrate its effectiveness compared to other solutions from the literature.
The outline of this report is as follows: in Section 2 we present the related work. The required background is exposed in Section 3. We then present the design in Section 4, and the implementation in Section 5. Results are discussed in Section 6 while the scientific conclusion is exposed in Section 7. The user documentation of the monitoring component is to be found in Section 8 with an example and the technical details are in Section 9.
Related Work
Monitoring an MPI application can be achieved in many ways but in general relies on intercepting the MPI API calls and delivering aggregated information. We present here some example of such tools.
PMPI is a customizable profiling layer that allows tools to intercept MPI calls. Therefore, when a communication routine is called, it is possible to keep track of the processes involved as well as the amount of data exchanged. However, this approach has several drawbacks. First, managing MPI datatypes is awkward and requires a conversion at each call. And last but not least, it cannot comprehend some of the most critical data movements, as an MPI collective is eventually implemented by point-to-point communications but the participants in the underlying data exchange pattern cannot be guessed without the knowledge of the collective algorithm implementation. For instance, a reduce operation is often implemented with an asymmetric tree of point-to-point sends/receives in which every process has a different role (root, intermediary and leaves). Known examples of stand-alone libraries using PMPI are DUMPI [START_REF] Janssen | A simulator for large-scale parallel computer architectures[END_REF] and mpiP [START_REF] Vetter | Statistical scalability analysis of communication operations in distributed applications[END_REF].
Score-P [START_REF] Knüpfer | Score-P: A Joint Performance Measurement Run-Time Infrastructure for Periscope[END_REF] is another tool for analyzing and monitoring MPI programs. This tool is based on different but partially redundant analyzers that have been gathered within a single tool to allow both online and offline analysis. Score-P relies on MPI wrappers and call-path profiles for online monitoring. Nevertheless, the application monitoring support offered by these tools is kept outside of the library, limiting the access to the implementation details and the communication pattern of collective operations once decomposed.
PERUSE [START_REF] Keller | Implementation and Usage of the PERUSE-Interface in Open MPI[END_REF] took a different approach by allowing the application to register callbacks that are raised at critical moments in the point-to-point request lifetime, providing an opportunity to gather information on state changes inside the MPI library and therefore gaining a very low-level insight on what data (not only point-to-point but also collectives), how and when, is exchanged between processes. This technique has been used in [START_REF] Brown | Tracing Data Movements Within MPI Collectives[END_REF][START_REF] Keller | Implementation and Usage of the PERUSE-Interface in Open MPI[END_REF]. Despite these interesting outcomes, the PERUSE interface failed to gain traction in the community.
We see that no existing tool provides monitoring that is both lightweight and precise (e.g. showing the decomposition of collective communications).
Background
The Open MPI Project [START_REF] Gabriel | Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation[END_REF] is a comprehensive implementation of the MPI 3.1 standard [START_REF] Forum | MPI: A Message-Passing Interface Standard[END_REF] that was started in 2003, taking ideas from four earlier institutionallybased MPI implementations. It is developed and maintained by a consortium of academic, laboratory, and industry partners, and distributed under a modified BSD open source license. It supports a wide variety of CPU and network architectures that is used in the HPC systems. It is also the base for a number of vendors commercial MPI offerings, including Mellanox, Cisco, Fujitsu, Bull, and IBM. The Open MPI software is built on the Modular Component Architecture (MCA) [START_REF] Barrett | Analysis of the Component Architecture Overhead in Open MPI[END_REF], which allows for compile or runtime selection of the components used by the MPI library. This modularity enables experiments with new designs, algorithms, and ideas to be explored, while fully maintaining functionality and performance. In the context of this study, we take advantage of this functionality to seamlessly interpose our profiling components along with the highly optimized components provided by the stock Open MPI version.
MPI Tool is an interface that was added in the MPI-3 standard [START_REF] Forum | MPI: A Message-Passing Interface Standard[END_REF]. This interface allows the application to configure internal parameters of the MPI library, and also to get access to internal information from the MPI library. In our context, this interface offers a convenient and flexible way to access the monitored data stored by the implementation as well as to control the monitoring phases.
Process placement is an optimization strategy that takes into account the affinity of processes (represented by a communication matrix) and the machine topology to decrease the communication costs of an application [START_REF] Hoefler | An overview of topology mapping algorithms and techniques in high-performance computing[END_REF]. Various algorithms to compute such a process placement exist, one being TreeMatch [START_REF] Jeannot | Process Placement in Multicore Clusters: Algorithmic Issues and Practical Techniques[END_REF] (designed by a subset of the authors of this article). We can distinguish between static process placement which is computed from traces of previous runs, and dynamic placement computed during the application execution (See experiments in Section 6).
Design
The monitoring generates the application communication pattern matrix. The order of the matrix is the number of processes and each (i, j) entry gives the amount of communication between process i and process j. It outputs several values and hence several matrices: the number of bytes and the number of messages exchanged. Moreover it distinguishes between point-to-point communications and collective or internal protocol communications.
It is also able to monitor collective operations once they are decomposed into point-to-point communications. Therefore, it requires intercepting the communication inside the MPI library itself, instead of relinking weak symbols to a third-party dynamic library, which allows this component to be used in parallel with other profiling tools (e.g. PMPI).
For scalability reasons, we can automatically gather the monitoring data into one file instead of dumping one file per rank.
To sum up, we aim at covering a wide spectrum of needs, with different levels of complexity for various levels of precision. It provides an API for each application to enable, disable or access its own monitoring information. Otherwise, it is possible to monitor an application without any modification of its source code by activating the monitoring components at launch time and to retrieve results when the application completes. We also supply a set of mechanisms to combine monitored data into communication matrices. They can be used either at the end of the application (when MPI_Finalize is called), or post-mortem. For each pair of processes, an histogram of geometrically increasing message sizes is available.
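As a sketch of how the per-pair matrices can be consumed off-line, the Python fragment below reads a dumped matrix and symmetrizes it into an affinity matrix such as a placement tool would take as input; the file name and the plain whitespace-separated row format are assumptions of this example, not the component's documented output format.

    def load_matrix(path):
        with open(path) as f:
            return [[int(v) for v in line.split()] for line in f if line.strip()]

    def symmetric_affinity(matrix):
        """Fold (i, j) and (j, i) traffic into one undirected affinity value per pair."""
        n = len(matrix)
        return [[matrix[i][j] + matrix[j][i] for j in range(n)] for i in range(n)]

    bytes_sent = load_matrix("monitoring_bytes.mat")   # hypothetical dump file name
    affinity = symmetric_affinity(bytes_sent)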
Implementation
The precision needed for the results led us to implement the solution within the Open MPI stack. The component described in this article has been developed in a branch of Open MPI (available at [13]) that will soon be made available in the stock version. As we were planning to intercept all types of communications (two-sided, one-sided and collectives), we have exposed a minimalistic common API for the profiling as an independent engine, and then linked all the MCA components doing the profiling with this engine. Due to the flexibility of the MCA infrastructure, the active components can be configured at runtime, either via mpiexec arguments or via the API (implemented with the MPI Tool interface).
In order to cover the wide range of operations provided by MPI, four components were added to the software stack: one in the collective communication layer (COLL), one in the one-sided layer (remote memory accesses, OSC), one in the point-to-point management layer (PML), and finally one common layer capable of orchestrating the information gathered by the other layers and recording the data. This set of components, when activated at launch time (through the mpiexec option --mca pml_monitoring_enable x), monitors all specified types of communications, as indicated by the value of x. The design of Open MPI allows for easy distinctions between different types of communication tags, and x allows the user to include or exclude tags related to collective communications, or to other internal coordination (these are called internal tags, in opposition to external tags that are available to the user via the MPI API). Specifically, the PML layer sees communications once collectives have been decomposed into point-to-point operations. COLL and OSC both work at a higher level, in order to be able to record operations that do not go through the PML layer, for instance when using dedicated drivers. Therefore, as opposed to the MPI standard profiling interface (PMPI) approach where the MPI calls are intercepted, we monitor the actual point-to-point calls that are issued by Open MPI, which yields much more precise information. For instance, we can infer the underlying topologies and algorithms behind the collective operations, such as the tree topology used for aggregating values in an MPI_Reduce call. However, this comes at the cost of a possible redundant recording of data for collective operations, when the data path goes through both the COLL and the PML components.
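For illustration, a typical launch line enabling the monitoring at start-up could look as follows (the application name my_app is a placeholder; the MCA option itself is the one described above):

    mpiexec -n 16 --mca pml_monitoring_enable 2 ./my_app

With the value 2, internal messages (e.g. those generated by the decomposition of collectives) are recorded separately from the messages explicitly issued by the user, as detailed in the user documentation (Section 8).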
For an application to enable, disable or access its own monitoring, we implemented a set of callback functions using MPI Tool. At any time, it is possible to know the amount of data exchanged between a pair of processes since the beginning of the application or just in a specific part of the code. Furthermore, the final summary dumped at the end of the application gives a detailed output of the data exchanged between processes for each point-to-point, one-sided and collective operation. The user is then able to refine the results.
Internally, these components use an internal process identifier (id) and a single associative array employed to translate sender and receiver ids into their MPI_COMM_WORLD counterparts. Our mechanism is therefore oblivious to communicator splitting, merging or duplication. When a message is sent, the sender updates three arrays: the number of messages, the size (in bytes) sent to the specific receiver, and the message size distribution. Moreover, to distinguish between external and internal tags, one-sided emitted and received messages, and collective operations, we maintain five versions of the first two arrays. Also, the histogram of the message size distribution is kept for each pair of ids, and goes from 0-byte messages to messages of more than 2^64 bytes. Therefore, the memory overhead of this component is at most 10 arrays of N 64-bit elements, in addition to the N arrays of 66 64-bit elements for the histograms, with N being the number of MPI processes. These arrays are lazily allocated, so they only exist for a remote process if there are communications with it.
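The per-message bookkeeping described above can be summarized by the following minimal C sketch. It is not the actual Open MPI code: the array names and the translation helper are simplified stand-ins for the internal data structures described in Section 9, and the arrays are assumed to be already allocated.

    #include <stddef.h>
    #include <stdint.h>
    #include <math.h>

    #define HIST_SIZE 66          /* 0-byte bucket + buckets up to 2^64 and beyond */

    /* Simplified per-peer counters, indexed by the rank in MPI_COMM_WORLD. */
    static uint64_t *sent_count;   /* number of messages sent to each peer */
    static uint64_t *sent_size;    /* number of bytes sent to each peer    */
    static uint64_t *size_hist;    /* HIST_SIZE buckets per peer           */

    /* Hypothetical helper: translate a (communicator, rank) pair into the
     * corresponding rank in MPI_COMM_WORLD, as the real component does with
     * its associative array. */
    extern int translate_to_world_rank(void *comm, int rank);

    static void record_send(void *comm, int dst, size_t bytes)
    {
        int world_dst = translate_to_world_rank(comm, dst);
        int bucket = (bytes == 0) ? 0 : 1 + (int)floor(log2((double)bytes));

        sent_count[world_dst] += 1;
        sent_size[world_dst]  += bytes;
        size_hist[world_dst * HIST_SIZE + bucket] += 1;
    }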
In addition to the amount of data and the number of messages exchanged between processes, we keep track of the type of collective operations issued on each communicator: one-to-all operations (e.g. MPI_Scatter), all-to-one operations (e.g. MPI_Gather) and all-to-all operations (e.g. MPI_Alltoall). For the first two types of operations, the root process records the total amount of data sent and received, respectively, and the count of operations of each kind. For all-to-all operations, each process records the total amount of data sent, and the count of operations. All these pieces of data can be flushed into files either at the end of the application or when requested through the API.
Results
We carried out the experiments on an InfiniBand cluster (HCA: Mellanox Technologies MT26428 (ConnectX IB QDR)). Each node features two Intel Xeon Nehalem X5550 CPUs with 4 cores (2.66 GHz) per CPU.
Overhead Measurement
One of the main issues of monitoring is the potential impact on the application time-to-solution. As our monitoring can be dynamically enabled and disabled, we can compute an upper bound of the overhead by measuring the impact with the monitoring enabled on the entire application. We wrote a micro-benchmark that computes the overhead induced by our component for various kinds of MPI functions, and measured this overhead for both the shared- and distributed-memory cases. The number of processes varies from 2 to 24 and the amount of data ranges from 0 up to 1MB. Fig. 1 displays the results as heatmaps (the median of a thousand measures). Blue shades correspond to low overhead while yellow shades correspond to higher overhead. As expected, the overhead is more visible in a shared-memory setting, where the cost of the monitoring is more significant compared with the decreasing cost of data transfers. Also, as the overhead is related to the number of messages and not to their content, the overhead decreases as the size of the messages increases. Overall, the median overhead is 4.4% and 2.4% for the shared- and distributed-memory cases respectively, which shows that our monitoring is cost effective.
We have also built a second micro-benchmark that performs a series of all-to-all operations only (with no computation) for a given buffer size. In Fig. 2, we show the average difference between the monitored and non-monitored times as the exchanged buffer size varies, normalized to one all-to-all call and one process. We also plot, as error bars, the 95% confidence interval computed with the Student paired t-test.
We see that when the buffer size is small (less than 50 integers), the monitoring time is statistically longer than the non-monitoring time. On average, monitoring one all-to-all call for one process takes around 10 ns. However, when the buffer size increases, the error bars cover both negative and positive values, meaning that, statistically, there is no difference between the monitoring time and the non-monitoring time. This is explained as follows: when the buffer size increases, the execution time increases while the monitoring time stays constant (we have the same number of messages). Therefore, the whole execution time is less stable (due to noise in the network traffic and software stack), and hence the difference between the monitored and non-monitored cases becomes less visible and is hidden by this noise.
In order to measure the impact on applications, we used some of the NAS parallel benchmarks, namely BT, CG and LU. The choice of these tests is deliberate: we picked the ones with the highest number of MPI calls, in order to maximize the potential impact of the monitoring on the application. Table 1 shows the results, which are averages of 20 runs. Shaded rows mean that the measures display a statistically significant difference (using Student's t-test) between a monitored run and a non-monitored one.
Only the BT, CG and LU kernels have been evaluated, as they are the ones issuing the largest number of messages per process. They are therefore the ones for which the monitoring overhead should be most visible.
Overall, we see that the overhead is consistently below 1% and on average around 0.35%. Interestingly, for the LU kernel, the overhead seems slightly correlated with the message rate, meaning that the larger the communication activity, the higher the overhead. For the CG kernel, however, the timings are so small that it is hard to see any influence of this factor beyond measurement noise.
We have also tested the Minighost mini-application [START_REF] Barrett | Minighost: a miniapp for exploring boundary exchange strategies using stencil computations in scientific parallel computing[END_REF], which computes a stencil in various dimensions, to evaluate the overhead. An interesting feature of this mini-application is that it outputs the percentage of time spent performing communication. In Fig. 3, we depict the overhead depending on this communication ratio. We have run 114 different executions of the Minighost application and have split these runs into four categories depending on the percentage of time spent in communications (0%-25%, 25%-50%, 50%-75% and 75%-100%). A point represents the median overhead (in percent) and the error bars represent the first and third quartiles. We see that the median overhead increases with the percentage of communication. Indeed, the more time spent in communication, the more visible the overhead of monitoring these communications. However, the overhead accounts for only a small percentage.
MPI Collective Operations Optimization
In these experiments we have executed an MPI_Reduce collective call on 32 and 64 ranks (on 4 and 8 nodes respectively), with a buffer whose size ranges between 1×10^6 and 2×10^8 integers, rank 0 acting as the root. We took advantage of the Open MPI infrastructure to block the dynamic selection of the collective algorithm and instead forced the reduce operation to use a binary tree algorithm. Since we monitor the collective communications once they have been broken down into point-to-point communications, we are able to identify details of the collective algorithm implementation and expose the underlying binary tree (see Fig. 4b). This provides a much more detailed understanding of the underlying communication pattern than existing tools, where the use of a higher-level monitoring tool (e.g. PMPI) completely hides it. We then computed a new process placement with the TreeMatch algorithm and compared it with the placement obtained using high-level monitoring (which does not see the tree and is hence equivalent to the round-robin placement). Results are shown in Fig. 4a. We see that the optimized placement is much more efficient than the one based on high-level monitoring. For instance, with 64 ranks and a buffer of 5×10^6 integers, the walltime is 338 ms vs. 470 ms (39% faster).
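The exact options used to force the binary tree variant are not listed here. As one possible way to do it with the tuned collective component (the parameter names below exist in Open MPI, but the numeric identifier of the binary tree algorithm depends on the Open MPI version and should be checked with ompi_info; the benchmark name is a placeholder), the launch line could look like:

    mpiexec -n 64 \
            --mca coll_tuned_use_dynamic_rules 1 \
            --mca coll_tuned_reduce_algorithm <binary-tree-id> \
            --mca pml_monitoring_enable 2 ./reduce_bench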
Use Case: Fault Tolerance with Online Monitoring
In addition to the usage scenarios mentioned above, the proposed dynamic monitoring tool has been demonstrated in one of our recent works. In [START_REF] Cores | An application-level solution for the dynamic reconfiguration of mpi applications[END_REF], we have used the dynamic monitoring feature to compute the communication matrix during the execution of an MPI application. The goal was to perform elastic computations in case of node failures or when new nodes become available. The runtime system migrates MPI processes when the number of computing resources changes. To this end, the TreeMatch [START_REF] Jeannot | Process Placement in Multicore Clusters: Algorithmic Issues and Practical Techniques[END_REF] algorithm is used to recompute the process mapping onto the available resources. The algorithm decides how to move processes based on the application's gathered communication matrix: the more two processes communicate, the closer they shall be re-mapped onto the physical resources. Gathering the communication matrix was performed online using the callback routines of the monitoring: such a result would not have been possible without the tool proposed in this report.
Static Process Placement of Applications
We have tested the TreeMatch algorithm for static placement, to show that the monitoring provides relevant information enabling execution optimization. To do so, we first monitor the application using the monitoring tool proposed in this report; second, we build the communication matrix (here using the number of messages); then we apply the TreeMatch algorithm to this matrix and the topology of the target architecture; and last, we re-execute the application using the newly computed mapping. Different settings (kind of stencil, stencil dimension, number of variables per stencil point, and number of processes) are shown in Fig. 5. We see that the gain is up to 40% when compared to round-robin placement (the standard MPI placement) and 300% against random placement. The performance decrease, when any, is never greater than 2%.
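As an illustration of the last step, one way to re-execute an application with an explicit mapping in Open MPI is a rankfile. The file below is a hypothetical example (host names, slots and the mapping itself are made up; in practice they would come from the TreeMatch output):

    rank 0=node01 slot=0
    rank 1=node01 slot=2
    rank 2=node02 slot=0
    rank 3=node02 slot=2

    mpiexec -n 4 --rankfile my_rankfile ./my_app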
Scientific Conclusions
Parallel applications tend to use a growing number of computational resources connected via complex communication schemes that naturally diverge from the underlying network topology. Optimizing an application's performance requires identifying any mismatch between the application communication pattern and the network topology, and this demands a precise mapping of all data exchanges between the application processes.
In this report we proposed a new monitoring framework to consistently track all types of data exchanges in MPI applications. We have implemented the tool as a set of modular components in Open MPI, allowing fast and flexible low-level monitoring (with collective operations decomposed into their point-to-point expression) of all types of communications supported by the MPI-3 standard (including one-sided communications and IO). We have also provided an API based on the MPI Tool standard, allowing applications to monitor their state dynamically and to focus the monitoring on only critical portions of the code. The basic usage of this tool does not require any change in the application, nor any special compilation flag. The gathered data can be provided at different granularities, either as communication matrices or as histograms of message sizes. Another significant feature of this tool is that it leaves the PMPI interface available for other usages, allowing additional monitoring of the application with more traditional tools.
Micro-benchmarks show that the overhead is minimal for intra-node communications (over shared memory) and barely noticeable for large messages or distributed memory. Once applied to real applications, the overhead remains hardly visible (at most a few percent). Having such a precise and flexible monitoring tool opens the door to dynamic process placement strategies, and could lead to highly efficient placements. Experiments show that this tool enables large gains in both the dynamic and static cases. The fact that the monitoring records the communications after the decomposition of collectives into point-to-point operations allows optimizations that were not otherwise possible.
User Documentation
This section details how the component is to be used. This documentation presents the concepts on which we based our component's API, and the different options available. It first explains how to use the component, then summarizes it in a quick-start tutorial.
Introduction
MPI_Tool is a concept introduced in the MPI-3 standard. It allows MPI developers, or third parties, to offer a portable interface to different tools. These tools may be used to monitor an application, measure its performance, or profile it.
MPI_Tool is an interface that eases the addition of external functions to an MPI library. It also allows the user to control and monitor given internal variables of the runtime system.
The present section introduces the use of the MPI_Tool interface from a user point of view, and facilitates the usage of the Open MPI monitoring component. This component allows for precisely recording the message exchanges between nodes during the execution of MPI applications. The number of messages and the amount of data exchanged are recorded, including or excluding internal communications (such as those generated by the implementation of the collective algorithms).
This component offers two types of monitoring, depending on whether the user wants fine control over the monitoring or just an overall view of the messages. The fine control allows the user to access the results from within the application, and lets them reset the variables when needed. Fine control is achieved via the MPI_Tool interface, which requires the code to be adapted by adding a specific initialization function. However, the basic overall monitoring is achieved without any modification of the application code.
Whether you are using one version or the other, the monitoring needs to be enabled with parameters added when calling mpiexec, or globally in your Open MPI MCA configuration file ($HOME/.openmpi/mca-params.conf). Three new parameters have been introduced:
--mca pml_monitoring_enable value This parameter sets the monitoring mode.
value may be:
0 monitoring is disabled
1 monitoring is enabled, with no distinction between user issued and library issued messages.
≥ 2 monitoring enabled, with a distinction between messages issued from the library (internal) and messages issued from the user (external).
--mca pml_monitoring_enable_output value This parameter enables the automatic flushing of the monitored values during the call to MPI_Finalize. This option is to be used only without MPI_Tool, or with value = 0. value may be:
0 final output flushing is disabled
1 final output flushing is done in the standard output stream (stdout)
2 final output flushing is done in the error output stream (stderr)
≥ 3 final output flushing is done in the file whose name is given with the pml_monitoring_filename parameter
Each MPI process flushes its own recorded data. The pieces of information can be aggregated either with the use of PMPI (see Section 8.4) or with the distributed script test/monitoring/profile2mat.pl.
--mca pml_monitoring_filename filename This parameter sets the file to which the resulting monitoring output is flushed. The output is a communication matrix of both the number of messages and the total size of the data exchanged between each pair of nodes. This parameter is needed if pml_monitoring_enable_output ≥ 3.
Also, in order to run an application with some of the monitoring disabled, you need to add the following parameters at mpiexec time:
--mca pml ^monitoring This parameter disables the monitoring component of the PML framework
--mca osc ^monitoring This parameter disables the monitoring component of the OSC framework
--mca coll ^monitoring This parameter disables the monitoring component of the COLL framework
Without MPI_Tool
This mode should be used to monitor the whole application from its start until its end. It is defined such that you can record the amount of communications without any code modification.
In order to do so, you have to have Open MPI compiled with monitoring enabled. When you launch your application, you need to set the parameter pml_monitoring_enable to a value > 0, and, if pml_monitoring_enable_output ≥ 3, to set the pml_monitoring_filename parameter to a proper filename, whose path must exist.
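Putting the three parameters together, a launch line for this mode could look as follows (the application name and the output path are placeholders; the parameters themselves are the ones described above):

    mpiexec -n 4 \
            --mca pml_monitoring_enable 2 \
            --mca pml_monitoring_enable_output 3 \
            --mca pml_monitoring_filename ./prof/monitoring \
            ./my_app

Each process then dumps its counters when MPI_Finalize is called, and the per-rank files can be aggregated afterwards as explained above.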
With MPI_Tool
This section explains how to monitor your applications with the use of MPI_Tool.
How it works
MPI_Tool is a layer added on top of the standard MPI implementation. As such, it must be noted that it may have an impact on performance.
As this functionality is orthogonal to the core one, the MPI_Tool initialization and finalization are independent from MPI's. There is no restriction regarding the order of the different calls. Also, the MPI_Tool interface initialization function can be called more than once within the execution, as long as the finalize function is called as many times.
MPI_Tool introduces two types of variables, control variables and performance variables, referred to respectively as cvar and pvar. These variables can be used to dynamically tune the library to best fit the needs of the application. They are defined by the library (or by an external component), and accessed with the accessor functions specified in the standard. The variables are named uniquely throughout the application. Every variable, once defined and registered within the MPI engine, is given an index that will not change during the entire execution.
As for the monitoring without MPI_Tool, you need to start your application with the control variable pml_monitoring_enable properly set. Although it is not required, you can also add to your command line the desired filename to flush the monitoring output. As long as no filename is provided, no output can be generated.
Initialization
The initialization is made by a call to MPI_T_init_thread. This function takes two parameters: the first one is the desired level of thread support, the second one is the provided level of thread support. It has the same semantics as the MPI_Init_thread function. Please note that the first of the two functions to be called (between MPI_T_init_thread and MPI_Init_thread) may influence the second one with respect to the provided level of thread support. The goal of this function is to initialize the control and performance variables.
In order to use the performance variables within one context without influencing those from another context, a variable has to be bound to a session. A session is created by calling MPI_T_pvar_session_create.
In addition to being bound to a session, a performance variable may also depend on an MPI object. For example, the pml_monitoring_flush variable needs to be bound to a communicator. To do so, you use the MPI_T_pvar_handle_alloc function, which takes as parameters the session, the id of the variable, the MPI object (i.e. MPI_COMM_WORLD in the case of pml_monitoring_flush), a reference to the performance variable handle and a reference to an integer value. The last parameter allows the user to receive some additional information about the variable, or about the MPI object bound to it. As an example, when binding to the pml_monitoring_flush performance variable, the last parameter is set to the length of the current filename used for the flush, if any, and 0 otherwise; when binding to the pml_monitoring_messages_count performance variable, the parameter is set to the size of the bound communicator, as it corresponds to the expected size of the array (in number of elements) when retrieving the data. This parameter is used to let the application determine the amount of data to be returned when reading the performance variables. Please note that the handle_alloc function takes the variable id as a parameter. In order to retrieve this value, you have to call MPI_T_pvar_get_index, which takes as an IN parameter a string containing the name of the desired variable.
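The following C snippet sketches this initialization sequence for the pml_monitoring_flush variable. Error handling is reduced to a minimum, and MPI_T_PVAR_CLASS_GENERIC is assumed to be the class of the flush variable, as stated in the accessing-variables example further below.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, flush_idx, count;
        MPI_T_pvar_session session;
        MPI_T_pvar_handle  flush_handle;

        MPI_Init(&argc, &argv);
        MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

        /* Retrieve the index of the flush performance variable. */
        MPI_T_pvar_get_index("pml_monitoring_flush",
                             MPI_T_PVAR_CLASS_GENERIC, &flush_idx);

        /* Create a session and bind the variable to MPI_COMM_WORLD.
         * On return, count holds the length of the current flush filename. */
        MPI_T_pvar_session_create(&session);
        MPI_T_pvar_handle_alloc(session, flush_idx, MPI_COMM_WORLD,
                                &flush_handle, &count);

        /* ... start/stop the monitoring, see the next section ... */

        MPI_T_pvar_handle_free(session, &flush_handle);
        MPI_T_pvar_session_free(&session);
        MPI_T_finalize();
        MPI_Finalize();
        return 0;
    }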
How to use the performance variables
Some performance variables are defined in the monitoring component:
pml_monitoring_flush Allows the user to define a file to which the recorded data is flushed.
pml_monitoring_messages_count Allows the user to access, within the application, the number of messages exchanged through the PML framework with each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
pml_monitoring_messages_size Allows the user to access, within the application, the amount of data exchanged through the PML framework with each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
osc_monitoring_messages_sent_count Allows the user to access, within the application, the number of messages sent through the OSC framework to each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
osc_monitoring_messages_sent_size Allows the user to access, within the application, the amount of data sent through the OSC framework to each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
osc_monitoring_messages_recv_count Allows the user to access, within the application, the number of messages received through the OSC framework from each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
osc_monitoring_messages_recv_size Allows the user to access, within the application, the amount of data received through the OSC framework from each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
coll_monitoring_messages_count Allows the user to access, within the application, the number of messages exchanged through the COLL framework with each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
coll_monitoring_messages_size Allows the user to access, within the application, the amount of data exchanged through the COLL framework with each node of the bound communicator (MPI_Comm). This variable returns an array of unsigned long integers, one entry per node.
coll_monitoring_o2a_count Allows the user to access, within the application, the number of one-to-all collective operations across the bound communicator (MPI_Comm) for which the process was defined as root. This variable returns a single unsigned long integer.
coll_monitoring_o2a_size Allows the user to access, within the application, the amount of data sent as one-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
coll_monitoring_a2o_count Allows the user to access, within the application, the number of all-to-one collective operations across the bound communicator (MPI_Comm) for which the process was defined as root. This variable returns a single unsigned long integer.
coll_monitoring_a2o_size Allows the user to access, within the application, the amount of data received from all-to-one collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
coll_monitoring_a2a_count Allows the user to access, within the application, the number of all-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer.
coll_monitoring_a2a_size Allows the user to access, within the application, the amount of data sent as all-to-all collective operations across the bound communicator (MPI_Comm). This variable returns a single unsigned long integer. The communications between a process and itself are not taken into account.
In case of uncertainty about how a collective operation is categorized, please refer to the list given in Table 2.
Once bound to a session and to the proper MPI object, these variables may be accessed through a set of given functions. It must be noted here that each of the functions applied to the different variables needs, in fact, to be called with the handle of the variable.
The first variable may be modified using the MPI_T_pvar_write function. The other variables may be read using MPI_T_pvar_read but cannot be written. Stopping the flush performance variable, with a call to MPI_T_pvar_stop, forces the counters to be flushed into the given file, resetting the counters to 0 at the same time. Also, binding a new handle to the flush variable resets the counters. Finally, please note that the size and count performance variables may overflow for very large amounts of communications.
The monitoring starts with the call to MPI_T_pvar_start and lasts until the moment you call the MPI_T_pvar_stop function.
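As a minimal sketch of a monitoring phase (it reuses the session and flush_handle from the initialization snippet above, and ./prof/phase is a hypothetical output filename):

    static void monitored_phase(MPI_T_pvar_session session,
                                MPI_T_pvar_handle  flush_handle)
    {
        char filename[] = "./prof/phase";

        /* Starting the variable enables the monitoring. */
        MPI_T_pvar_start(session, flush_handle);

        /* Set the file into which the counters will be flushed. */
        MPI_T_pvar_write(session, flush_handle, filename);

        /* ... communications to be monitored ... */

        /* Stopping the variable flushes the counters to ./prof/phase
         * and resets them to 0, ending the phase. */
        MPI_T_pvar_stop(session, flush_handle);
    }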
Once you are done with the monitoring, you can clean everything up by calling MPI_T_pvar_handle_free to free the allocated handles, MPI_T_pvar_session_free to free the session, and MPI_T_finalize to indicate the end of your use of performance and control variables.
Overview of the calls
To summarize the previous information, here is the list of available performance variables, and the outline of the different calls to be used to properly access the monitored data through the MPI_Tool interface.
• pml_monitoring_flush
• pml_monitoring_messages_count
• pml_monitoring_messages_size
• osc_monitoring_messages_sent_count
• osc_monitoring_messages_sent_size
• osc_monitoring_messages_recv_count
• osc_monitoring_messages_recv_size
• coll_monitoring_messages_count
• coll_monitoring_messages_size
• coll_monitoring_o2a_count
• coll_monitoring_o2a_size
• coll_monitoring_a2o_count
• coll_monitoring_a2o_size
• coll_monitoring_a2a_count
• coll_monitoring_a2a_size
Add to your command line at least the --mca pml_monitoring_enable parameter, set to a non-zero value. Sequence of MPI_Tool calls:
1. MPI_T_init_thread Initialize the MPI_Tool interface
2. MPI_T_pvar_get_index To retrieve the variable id
3. MPI_T_pvar_session_create To create a new context in which to use your variable
4. MPI_T_pvar_handle_alloc To bind your variable to the proper session and MPI object
5. MPI_T_pvar_start To start the monitoring
6. Now you do all the communications you want to monitor
7. MPI_T_pvar_stop To stop and flush the monitoring
Use of LD_PRELOAD
In order to automatically generate communication matrices, you can use the monitoring_prof tool, which can be found in test/monitoring/monitoring_prof.c. When launching your application, add the following option in addition to the --mca pml_monitoring_enable parameter:
-x LD_PRELOAD=ompi_install_dir/lib/monitoring_prof.so
This library automatically gathers the sent and received data into one communication matrix. Note, however, that the use of the monitoring MPI_Tool variables within the code may interfere with this library. The main goal of this library is to avoid dumping one file per MPI process, and instead to gather all pieces of information into a single file.
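A complete launch line combining the MCA parameter and the preloaded library could therefore look as follows (the application name is a placeholder; ompi_install_dir stands for your Open MPI installation prefix, as in the option shown above):

    mpiexec -n 16 --mca pml_monitoring_enable 2 \
            -x LD_PRELOAD=ompi_install_dir/lib/monitoring_prof.so \
            ./my_app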
The resulting communication matrices are as close as possible to the effective amount of data exchanged between nodes. It has to be kept in mind, however, that because of the stacking of the logical layers in Open MPI, the amount of data recorded as part of collective or one-sided operations may be duplicated when the PML layer handles the communication. For an exact measure of the communications, the application must use MPI_Tool's monitoring performance variables to potentially subtract double-recorded data.
Examples
We first present an example that uses MPI_Tool to define phases during which the monitoring component is active. A second snippet then shows how to access the monitoring performance variables with MPI_Tool.
Monitoring Phases
You can execute the following example with mpiexec -n 4 --mca pml_monitoring_enable 2 test_monitoring. Please note that the prof directory needs to already exist in order to retrieve the dumped files. An extract of the example code (test_monitoring.c) is reproduced below, followed by an explanation of the resulting dump format.
test_monitoring.c (extract)
    #include <stdlib.h>
    #include <stdio.h>
    #include <mpi.h>

    static const void* nullbuff = NULL;
    static MPI_T_pvar_handle flush_handle;
    static const char flush_pvar_name[] = "pml_monitoring_flush";
    static const char flush_cvar_name[] = "pml_monitoring_enable";
    static int flush_pvar_idx;

    int main(int argc, char* argv[])
    ...
In the dumped profile, for each kind of communication (point-to-point, one-sided and collective), you find all the related information. There is one line per pair of communicating peers. Each line starts with a letter describing the kind of communication, as follows:
E External messages, i.e. issued by the user
I Internal messages, i.e. issued by the library
S Sent one-sided messages, i.e. writing accesses to the remote memory
R Received one-sided messages, i.e. reading accesses to the remote memory
C Collective messages
This letter is followed by the rank of the issuing process and the rank of the receiving one. Then come the total amount of bytes exchanged and the count of messages. For point-to-point entries (i.e. E or I entries), the line is completed by the full distribution of message sizes in the form of a histogram. See the variable size_histogram in Section 9.1.1 for the corresponding values. If the filtering between external and internal messages is disabled, the I lines are merged with the E lines, keeping the E header.
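Since the original sample output is not reproduced here, the following lines are a purely hypothetical illustration of the format just described (ranks, byte totals and histogram buckets are made up):

    E 0 1 2048 2 0 0 0 0 0 0 0 0 0 0 0 2 0 ...
    C 0 1 4096 1

In this illustration, process 0 sent two external point-to-point messages totalling 2048 bytes to process 1 (two 1024-byte messages, hence the 2 in the histogram bucket covering sizes from 1024 to 2047 bytes), plus one collective contribution of 4096 bytes.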
The end of the summary gives per-communicator information, where you find the name of the communicator, the ranks of the processes included in this communicator, and the amount of data sent (or received) for each kind of collective, with the corresponding count of operations of each kind. The first integer corresponds to the rank of the process that sent or received data through the given collective operation type.
Accessing Monitoring Performance Variables
The following snippet presents how to access the performance variables defined as part of the MPI_Tool interface. The session allocation is not presented, as it is the same as in the previous example. Please note that, contrary to the pml_monitoring_flush variable, the class of the monitoring performance variables is MPI_T_PVAR_CLASS_SIZE, whereas the flush variable is of class GENERIC. Also, performance variables are only to be read.
test/monitoring/example_reduce_count.c (extract)
    MPI_T_pvar_handle count_handle;
    int count_pvar_idx;
    const char count_pvar_name[] = "pml_monitoring_messages_count";
    uint64_t *counts;

    /* Retrieve the proper pvar index */
    MPIT_result = MPI_T_pvar_get_index(count_pvar_name, MPI_T_PVAR_CLASS_SIZE,
                                       &count_pvar_idx);
    if (MPIT_result != MPI_SUCCESS) {
        printf("cannot find monitoring MPI_T \"%s\" pvar, check that"
               " you have monitoring pml\n", count_pvar_name);
        MPI_Abort(MPI_COMM_WORLD, MPIT_result);
    }

    /* Allocating a new PVAR in a session will reset the counters */
    MPIT_result = MPI_T_pvar_handle_alloc(session, count_pvar_idx,
                                          MPI_COMM_WORLD, &count_handle, &count);

    /* OPERATIONS ON COUNTS */
    ...
    free(counts);

    MPIT_result = MPI_T_pvar_stop(session, count_handle);
    if (MPIT_result != MPI_SUCCESS) {
        printf("failed to stop handle on \"%s\" pvar, check that you"
               " have monitoring pml\n", count_pvar_name);
        MPI_Abort(MPI_COMM_WORLD, MPIT_result);
    }

    MPIT_result = MPI_T_pvar_handle_free(session, &count_handle);
    if (MPIT_result != MPI_SUCCESS) {
        printf("failed to free handle on \"%s\" pvar, check that you"
               " have monitoring pml\n", count_pvar_name);
        MPI_Abort(MPI_COMM_WORLD, MPIT_result);
    }
Technical Documentation of the Implementation
This section describes the technical details of the components' implementation. It is of no use from a user point of view, but is meant to facilitate the work of future developers who would debug or enrich the monitoring components. The architecture of the components is as follows: the Common component is the main part, where the magic occurs, while the PML, OSC and COLL components are the entry points to the monitoring tool from the software stack point of view. The relevant files can be found in accordance with the partial directory tree presented in Figure 6.
Common
This part of the monitoring components is the place where data is managed. It centralizes all recorded information and the translation hash table, and ensures a unique initialization of the monitoring structures. This component is also where the MCA variables (to be set as part of the command line) are defined and where the final output, if any is requested, is dealt with.
The header file defines the unique monitoring version number, different preprocessing macros for printing information using the monitoring output stream object, and the ompi monitoring API (i.e. the API to be used inside the Open MPI software stack, not the one exposed to the end user). It has to be noted that the mca_common_monitoring_record_* functions are to be used with the destination rank translated into the corresponding rank in MPI_COMM_WORLD. This translation is done using mca_common_monitoring_get_world_rank.
The use of this function may be limited by how the initialization occurred (see Section 9.2).
Common monitoring
The common_monitoring.c file defines multiple variables that have the following uses:
mca_common_monitoring_hold is the counter that keeps track of whether the common component has already been initialized or whether it is to be released. The operations on this variable are atomic, to avoid race conditions in a multi-threaded environment.
mca_common_monitoring_output_stream_obj is the structure used internally by Open MPI for output streams. The monitoring output stream states that this output is for debugging, so the actual output will only happen when OPAL is configured with --enable-debug. The output is sent to the stderr standard output stream. The prefix field, initialized in mca_common_monitoring_init, states that every log message emitted from this stream object will be prefixed by "[hostname:PID] monitoring: ", where hostname is the configured name of the machine running the process and PID is the process id, padded to 6 digits with leading zeros if needed.
mca_common_monitoring_enabled is the variable retaining the original value given to the MCA option system, for example as part of the command line. The corresponding parameter is pml_monitoring_enable. This variable is not to be written by the monitoring component. It is used to reset the mca_common_monitoring_current_state variable between phases. The value given to this parameter also defines whether or not the filtering between internal and external messages is enabled.
mca_common_monitoring_current_state is the variable used to determine the actual current state of the monitoring. This variable is the one used to define phases.
mca_common_monitoring_output_enabled is a variable, set by the MCA engine, that states whether or not the user requested a summary of the monitored data to be streamed out at the end of the execution. It also states whether the output should go to stdout, stderr or a file. If a file is requested, the next two variables have to be set. The corresponding parameter is pml_monitoring_enable_output. Warning: this variable may be set to 0 in case the monitoring is also controlled with MPI_Tool. We cannot both control the monitoring via MPI_Tool and expect an accurate answer upon MPI_Finalize.
mca_common_monitoring_initial_filename works the same way as mca_common_monitoring_enabled. This variable is, and has to be, only used as a placeholder for the pml_monitoring_filename parameter. It has to be handled very carefully, as it has to live as long as the program and it has to be a valid pointer address, whose content is not to be released by the component. The way MCA handles variables (especially strings) makes it very easy to create segmentation faults, but it deals with the memory release of the content. So, in the end, mca_common_monitoring_initial_filename is just to be read.
mca_common_monitoring_current_filename is the variable the monitoring component actually works with. This variable is the one to be set through MPI_Tool's control variable pml_monitoring_flush. Even though this control variable is prefixed with pml, for historical and convenience reasons, it depends on the common section for its behavior.
pml_data and pml_count are arrays of unsigned 64-bit integers recording respectively the cumulated amount of bytes sent from the current process to another process p, and the count of messages. The entry at index i corresponds to the data sent to the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application. If the filtering is disabled, these variables gather all information regardless of the tags; in this case, the next two arrays are, obviously, not used, even though they will still be allocated. The pml_data and pml_count arrays, and the nine arrays described next, are allocated, initialized, reset and freed all at once, and are contiguous in memory.
filtered_pml_data and filtered_pml_count are arrays of unsigned 64-bit integers recording respectively the cumulated amount of bytes sent from the current process to another process p, and the count of internal messages. The entry at index i corresponds to the data sent to the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application. The internal messages are defined as messages sent through the PML layer with a negative tag. They are issued, for example, by the decomposition of collective operations.
osc_data_s and osc_count_s are arrays of unsigned 64-bit integers recording respectively the cumulated amount of bytes sent from the current process to another process p through one-sided operations, and the count of messages. The entry at index i corresponds to the data sent to the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application.
osc_data_r and osc_count_r are arrays of unsigned 64-bit integers recording respectively the cumulated amount of bytes received by the current process from another process p through one-sided operations, and the count of messages. The entry at index i corresponds to the data exchanged with the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application.
coll_data and coll_count are arrays of unsigned 64-bit integers recording respectively the cumulated amount of bytes sent from the current process to another process p, in the case of all-to-all or one-to-all operations, or received from another process p by the current process, in the case of all-to-one operations, and the count of messages. The entry at index i corresponds to the data exchanged with the process p of id i in MPI_COMM_WORLD. These arrays are of size N, where N is the number of nodes in the MPI application. The communications are thus considered symmetrical in the resulting matrices.
size_histogram is an array of unsigned 64-bit integers recording the distribution of the sizes of PML messages, filtered or not, between the current process and a process p. This histogram uses a log-2 scale. The index 0 is for empty messages. Messages of size between 1 and 2^64 are recorded as follows: for a given size S, with 2^k ≤ S < 2^(k+1), the element at index k + 1 of the histogram is incremented (see the index formula under log10_2 below, and the sketch after this list). This array is of size N × max_size_histogram, where N is the number of nodes in the MPI application.
max_size_histogram is a constant value corresponding to the number of histogram elements kept per peer in the size_histogram array. It is stored here to avoid having its value scattered here and there in the code. This value is used to compute the total size of the memory block to be allocated, initialized, reset or freed; that total size equals (10 + max_size_histogram) × N elements, where N corresponds to the number of nodes in the MPI application. This value is also used to compute the index of the histogram of a given process p; this index equals i × max_size_histogram, where i is p's id in MPI_COMM_WORLD.
log10_2 is a cached value of the common (decimal) logarithm of 2. This value is used to compute the index at which the histogram is incremented. For a non-empty message of size S, this index j is computed as j = 1 + ⌊log10(S) / log10(2)⌋, where log10 is the decimal logarithm.
rank_world is the cached rank of the current process in MPI_COMM_WORLD.
nprocs_world is the cached value of the size of MPI_COMM_WORLD.
common_monitoring_translation_ht is the hash table used to translate the rank of any process p, of rank r in any communicator, into its rank in MPI_COMM_WORLD. It lives as long as the monitoring components do.
In any case, we never monitor communications between one process and itself.
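As referenced above, the histogram indexing can be sketched in C as follows (a standalone illustration of the formula, not the actual Open MPI code):

    #include <math.h>
    #include <stdio.h>

    static const double log10_2 = 0.30102999566398119521; /* log10(2) */

    /* Return the histogram bucket for a message of 'size' bytes:
     * bucket 0 is reserved for empty messages, and a size S with
     * 2^k <= S < 2^(k+1) falls into bucket k + 1. */
    static int histogram_index(size_t size)
    {
        if (size == 0) return 0;
        return 1 + (int)floor(log10((double)size) / log10_2);
    }

    int main(void)
    {
        printf("%d %d %d %d\n",
               histogram_index(0),      /* 0  */
               histogram_index(1),      /* 1  */
               histogram_index(1024),   /* 11 */
               histogram_index(1025));  /* 11 */
        return 0;
    }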
The different functions to access the MPI_Tool performance variables are pretty straightforward. Note that for PML, OSC and COLL, the notify function is the same for both the count and size performance variables. At binding, it sets the count output parameter to the size of MPI_COMM_WORLD, as requested by the MPI-3 standard (for arrays, the parameter should be set to the number of elements of the array). Also, the notify function is responsible for starting the monitoring when any monitoring performance variable handle is started, and it disables the monitoring when any monitoring performance variable handle is stopped. The flush control variable behaves as follows. On binding, it returns the size of the filename defined, if any, and 0 otherwise. On the start event, this variable also enables the monitoring, as the performance variables do, but it also disables the final output, even though it was previously requested by the end user. On the stop event, this variable flushes the monitored data to the proper output stream (i.e. stdout, stderr or the requested file). Note that these variables are to be bound only to the MPI_COMM_WORLD communicator. So far, the behavior in case of a binding to another communicator has not been tested.
The flushing itself is decomposed into two functions. The first one (mca_common_monitoring_flush) is responsible for opening the proper stream. If it is given 0 as its first parameter, it does nothing and no error is propagated, as this corresponds to disabled monitoring. The filename parameter is only taken into account if fd is strictly greater than 2. Note that upon flushing, the record arrays are reset to 0. Also, the flushing code in common_monitoring.c calls the specific flushing of the per-communicator collectives monitoring data.
For historical reasons, and because the PML layer is the first one to be loaded, the MCA parameters and the monitoring_flush control variable are linked to the PML framework. The other performance variables, though, are linked to their proper frameworks.
Common Coll Monitoring
In addition to the monitored data kept in the arrays, the monitoring component also provides a per-communicator set of records. It keeps pieces of information about collective operations. As we cannot know how the data are actually exchanged (see Section 9.4), we added this complement to the final summary of the monitored operations.
We keep the per-communicator data set as part of the coll_monitoring_module. Each data set is also kept in a hash table, with the communicator structure address as the hash key. This data set keeps track of the amount of data sent through a communicator with collective operations and the count of each kind of operation. It also caches the list of the processes' ranks, translated to their ranks in MPI_COMM_WORLD, as a string, the rank of the current process, translated into its rank in MPI_COMM_WORLD, and the communicator's name.
The process list is generated with the following algorithm. First, we allocate a string long enough to contain it. We define long enough as 1 + (d + 2) × s, where d is the number of digits of the highest rank in MPI_COMM_WORLD and s the size of the current communicator. We add 2 to d to account for the comma and the space between ranks, and 1 to ensure there is enough room for the NULL character terminating the string. Then, we fill the string with the proper values, and adjust the final size of the string.
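A minimal, self-contained sketch of this string generation is given below; the variable names are ours, not those of the actual Open MPI code:

    #include <stdio.h>
    #include <stdlib.h>

    /* Build "r0, r1, r2, ..." from the communicator ranks translated to
     * MPI_COMM_WORLD; 'digits' is the number of digits of the highest rank. */
    static char *build_rank_list(const int *world_ranks, int comm_size, int digits)
    {
        size_t max_len = 1 + (size_t)(digits + 2) * comm_size;
        char *list = malloc(max_len);
        size_t used = 0;

        if (NULL == list) return NULL;
        list[0] = '\0';
        for (int i = 0; i < comm_size; ++i) {
            used += snprintf(list + used, max_len - used, "%s%d",
                             (i > 0) ? ", " : "", world_ranks[i]);
        }
        return list;  /* caller may shrink it with realloc(list, used + 1) */
    }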
When possible, this string generation happens when the communicator is being created. If it fails, it is attempted again when the communicator is being released.
The lifetime of this data set is different from that of its corresponding communicator. It is actually destroyed only once its data has been flushed (at the end of the execution or at the end of a monitoring phase). To this end, the structure keeps a flag indicating whether it is safe to release it or not.
PML
As specified in Section 9.1.1, this component works closely with the common component. They were initially merged, but were later separated in order to propose a cleaner and more logical architecture.
This module is the first one to be initialized by the Open MPI software stack; it is thus the one responsible for the proper initialization of, for example, the translation hash table. Open MPI relies on the PML layer to add the processes' logical structures as far as communicators are concerned.
To this end, and because of the way the PML layer is managed by the MCA engine, this component has some specific variables to manage its own state, in order to be properly instantiated. The module selection process works as follows. All the PML modules available for the framework are loaded, initialized and asked for a priority. The higher the priority, the higher the odds of being selected. This is why our component returns a priority of 0. Note that the priority is returned, and the initialization of the common module is done at this point, only if the monitoring has been requested by the user.
If everything works properly, we should not be selected. The next step in the PML initialization is to finalize every module that is not the selected one, and then close the components that are not used. At this point the winning component and its module are saved for the PML. The variables mca_pml_base_selected_component and mca_pml, defined in ompi/mca/pml/base/pml_base_frame.c, are now initialized. This is the point where we install our interception layer. We also mark ourselves as initialized, in order to know, on the next call to the component_close function, that we actually have to be closed this time. Note that adding our layer requires adding the MCA_PML_BASE_FLAG_REQUIRE_WORLD flag in order to request the whole list of processes to be given at the initialization of MPI_COMM_WORLD, so we can properly fill our hash table. The downside of this trick is that it disables the Open MPI optimization of adding them lazily.
Once that is done, we are properly installed, and we can monitor every message going through the PML layer. As we only monitor messages from the emitter side, we only actually record when messages are issued with the MPI_Send, MPI_Isend or MPI_Start functions.
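The interception itself can be pictured with the following simplified C sketch. It is not the actual Open MPI code: the structure of a real PML component is richer, and the function type, the forwarding pointer and the recording helper (analogous to the record_send sketch given earlier) are stand-ins for the internal symbols described above.

    #include <stddef.h>

    /* Hypothetical, simplified view of a PML send entry point. */
    typedef int (*pml_send_fn_t)(const void *buf, size_t count, size_t type_size,
                                 int dst, int tag, void *comm);

    /* Original (selected) PML send function, saved when the interception
     * layer is installed. */
    static pml_send_fn_t real_pml_send;

    /* Recording helper of the common component. */
    extern void record_send(void *comm, int dst, size_t bytes);

    static int monitoring_pml_send(const void *buf, size_t count, size_t type_size,
                                   int dst, int tag, void *comm)
    {
        /* Record on the emitter side only, then forward to the real PML.
         * A negative tag would be counted as internal traffic in the real
         * component; this sketch does not make the distinction. */
        record_send(comm, dst, count * type_size);
        return real_pml_send(buf, count, type_size, dst, tag, comm);
    }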
OSC
This layer is responsible for remote memory access operations and thus has its own specificities. Even though the component selection process is quite close to the PML one, some aspects of the usage of OSC modules led us to adapt the interception layer.
The first problem comes from how the module is accessed inside the components. In the OSC layer, the module is part of the ompi_win_t structure. This implies that it is possible to access the proper field of the structure directly to find the reference to the module, and this is how it is done. Because of that, it is not possible to simply replace a module with one of ours that would have saved the original module. The first solution was then to "extend" (in the ompi manner of extending objects) the module with a structure that would have contained as its first field a union type of every possible module. We would then have copied their field values, saved their functions, and replaced them with pointers to our interception functions. This solution was implemented, but a second problem was encountered, stopping us from going with it.
The second problem was that osc/rdma internally uses a hash table to keep track of its modules and allocated segments, with the module's pointer address as the hash key. Hence, it was not possible for us to modify this address, as the RDMA module would then not be able to find the corresponding segments. This also implies that it is not possible for us to extend the structures either. Therefore, we could only modify the common fields of the structures to keep our "module" adapted to any OSC component. We designed templates, dynamically adapted for each kind of module.
To this end, and for each kind of OSC module, we generate and instantiate three variables: OMPI_OSC_MONITORING_MODULE_VARIABLE(template) is the structure that keeps the addresses of the original module functions of a given component type (i.e. RDMA, PORTALS4, PT2PT or SM). It is initialized once, and referred to in order to propagate the calls after the initial interception. There is one generated for each kind of OSC component.
OMPI_OSC_MONITORING_MODULE_INIT(template) is a flag to ensure the module variable is only initialized once, in order to avoid race conditions. There is one generated for each OMPI_OSC_MONITORING_MODULE_VARIABLE(template), thus one per kind of OSC component.
OMPI_OSC_MONITORING_TEMPLATE_VARIABLE(template) is a structure containing the address of the interception functions. There is one generated for each kind of OSC component.
The interception is done with the following steps. First, we follow the selection process. Our priority is set to INT_MAX in order to ensure that we are the selected component. Then we perform the selection ourselves. This gives us the opportunity to modify the communication module as needed. If it is the first time a module of this kind of component is used, we extract from the given module the functions' addresses and save them into the OMPI_OSC_MONITORING_MODULE_VARIABLE(template) structure, after setting the initialization flag. Then we replace the original functions in the module with our interception ones.
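Schematically, for a given component type, an interception function then records the access and forwards the call to the saved function. The sketch below is a simplification with made-up type and field names, not the real template macros:

    #include <stddef.h>

    /* Hypothetical, simplified view of an OSC put entry point. */
    typedef int (*osc_put_fn_t)(const void *origin, size_t bytes,
                                int target_rank, void *win);

    /* Stand-in for OMPI_OSC_MONITORING_MODULE_VARIABLE(template): the
     * original functions saved at the first use of this component type. */
    static struct { osc_put_fn_t osc_put; } saved_module;

    /* Recording helper of the common component (sent side of one-sided). */
    extern void record_osc_send(void *win, int dst, size_t bytes);

    /* Stand-in for the interception function placed in the module. */
    static int monitoring_osc_put(const void *origin, size_t bytes,
                                  int target_rank, void *win)
    {
        record_osc_send(win, target_rank, bytes);
        return saved_module.osc_put(origin, bytes, target_rank, win);
    }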
To make everything work for each kind of component, the variables are generated together with the corresponding interception functions. These operations are done at compilation time. An issue appeared with the use of PORTALS4, whose symbols are only available when the corresponding cards are present on the system. In the header files, where we define the template functions and structures, template refers to the OSC component name.
We found two drawbacks to this solution. First, the readability of the code is poor. Second, this solution does not automatically adapt to new components: if a new component is added, the code in ompi/mca/osc/monitoring/osc_monitoring_component.c needs to be modified in order to monitor the operations going through it. Even though the modification is three lines long, it may be preferable to have the monitoring work without any modification related to other components.
A second solution for the OSC monitoring could have been the use of a hash table. We would have saved in the hash table the structure containing the original functions' addresses, with the module address as the hash key. Our interception functions would then have searched the hash table for the corresponding structure on every call, in order to propagate the function calls. This solution was not implemented because it has a higher memory footprint when a large number of windows is allocated. Also, the cost of our interceptions would then have been higher, because of the search in the hash table; this was the main reason we chose the first solution. The OSC layer is designed to be very cost-effective in order to take the best advantage of background communication and communication/computation overlap. This second solution would however have given us the adaptability our solution lacks.
COLL
The collective module (or, to be closer to reality, modules) is part of the communicator. The module selection is made with the following algorithm. First, all available components are selected, queried and sorted in ascending order of priority. A module may provide part or all of the operations, keeping in mind that modules with higher priority may take its place. The sorted list of modules is iterated over, and for each module and each operation, if the function's address is not NULL, the previous module is replaced with the current one, and so is the corresponding function. Every time a module is selected it is retained and enabled (i.e. the coll_module_enable function is called), and every time it gets replaced, it is disabled (i.e. the coll_module_disable function is called) and released.
When the monitoring module is queried, the priority returned is INT_MAX, to ensure that our module comes last in the list. Then, when enabled, all the previous function-module couples are kept as part of our monitoring module. The modules are retained to avoid having them freed when released by the selection process. To preserve the error detection in the communicator (i.e. an incomplete collective API), if, for a given operation, there is no corresponding module, we set this function's address to NULL. Symmetrically, when our module is released, we propagate this call to each underlying module, and we also release the objects. Also, when the module is enabled, we initialize the per-communicator data record, which gets released when the module is disabled.
When a collective operation is called, whether blocking or non-blocking, we intercept the call and record the data in two different entries. The operations are grouped into three kinds: one-to-all operations, all-to-one operations and all-to-all operations.
For one-to-all operations, the root process of the operation computes the total amount of data to be sent, and keeps it as part of the per-communicator data (see Section 9.1.2). Then it updates the common_monitoring array with the amount of data each peer has to receive in the end. As we cannot predict the actual algorithm used to communicate the data, we assume the root sends everything directly to each process.
For all-to-one operations, each non-root process computes the amount of data to send to the root and updates the common_monitoring array at index i, with i being the rank in MPI_COMM_WORLD of the root process. As we cannot predict the actual algorithm used to communicate the data, we assume each process sends its data directly to the root. The root process computes the total amount of data to receive and updates the per-communicator data.
For all-to-all operations, each process computes, for each other process, the amount of data to both send to and receive from it. The amount of data to be sent to each process p is added to the common_monitoring array at index i, with i being the rank of p in MPI_COMM_WORLD. The total amount of data sent by a process is also added to the per-communicator data.
For every rank translation, we use the common_monitoring_translation _ht hash table.
Inria
(a) MPI_Send (b) MPI_Send (prog. overhead) (c) MPI_Bcast
Figure 1 :
1 Figure 1: Monitoring overhead for MPI_Send, MPI_Bcast, MPI_Alltoall, MPI_Put and MPI_Get operations. Left: distributed memory, right: shared memory.
Figure 1 :
1 Figure 1: Monitoring overhead for MPI_Send, MPI_Alltoall and MPI_Put operations. Left: distributed memory, right: shared memory. (cont.)
Figure 2 :
2 Figure 2: Mircobenchmark experiments.
Figure 3 :
3 Figure 3: Minighost application overhead in function of the communication percentage of the total execution time.
Figure 4 :
4 Figure 4: MPI_Reduce Optimization
Figure 5 :
5 Figure 5: Average gain of TreeMatch placement vs. Round Robin and random placements for various Minighost runs
final output flushing is disable 1
1 final output flushing is done in the standard output stream (stdout) 2 final output flushing is done in the error output stream (stderr) ≥ 3 final output flushing is done in the file which name is given with the pml_monitoring_filename parameter.
-
-mca pml ˆmonitoring This parameter disable the monitoring component of the PML framework --mca osc ˆmonitoring This parameter disable the monitoring component of the OSC framework --mca coll ˆmonitoring This parameter disable the monitoring component of the COLL framework Inria 8.2 Without MPI_Tool
1 . 3 . 4 . 5 . 7 .
13457 MPI_T_init_thread Initialize the MPI_Tools interface 2. MPI_T_pvar_get_index To retrieve the variable id MPI_T_session_create To create a new context in which you use your variable MPI_T_handle_alloc To bind your variable to the proper session and MPI object MPI_T_pvar_start To start the monitoring 6. Now you do all the communications you want to monitor MPI_T_pvar_stop To stop and flush the monitoring
test_monitoring.c (extract) #include <stdlib.h> #include <stdio.h> #include <mpi.h> static const void* nullbuff = NULL; static MPI_T_pvar_handle flush_handle; static const char flush_pvar_name[] = "pml_monitoring_flush"; static const char flush_cvar_name[] = "pml_monitoring_enable"; static int flush_pvar_idx; int main(int argc, char*
As it show on the sample profiling, for each kind of communication (pointto-point, one-sided and collective), you find all the related informations. There is one line per peers communicating. Each line start with a lettre describing the kind of communication, such as follows:EExternal messages, i.e. issued by the user I Internal messages, i.e. issued by the library S Sent one-sided messages, i.e. writing access to the remote memory Inria R Received one-sided messages, i.e. reading access to the remote memory C Collective messages
test/monitoring/example_reduce_count.c (extract) MPI_T_pvar_handle count_handle; int count_pvar_idx; const char count_pvar_name[] = "pml_monitoring_messages_count"; uint64_t*counts; /* Retrieve the proper pvar index */ MPIT_result = MPI_T_pvar_get_index(count_pvar_name, MPI_T_PVAR_CLASS_SIZE, &count_pvar_idx); if (MPIT_result != MPI_SUCCESS) { printf("cannot find monitoring MPI_T \"%s\" pvar, check that" " you have monitoring pml\n", count_pvar_name); MPI_Abort(MPI_COMM_WORLD, MPIT_result); } /* Allocating a new PVAR in a session will reset the counters */ MPIT_result = MPI_T_pvar_handle_alloc(session, count_pvar_idx, MPI_MAX, MPI_COMM_WORLD); /* OPERATIONS ON COUNTS */ ... free(counts); MPIT_result = MPI_T_pvar_stop(session, count_handle); if (MPIT_result != MPI_SUCCESS) {
Table 1 :
1
Kernel Class NP Monitoring time Non mon. time #msg/proc Overhead #msg/sec
bt A 16 6.449 6.443 2436.25 0.09% 6044.35
bt A 64 1.609 1.604 4853.81 0.31% 193066.5
bt B 16 27.1285 27.1275 2436.25 0.0% 1436.87
bt B 64 6.807 6.8005 4853.81 0.1% 45635.96
bt C 16 114.6285 114.5925 2436.25 0.03% 340.06
bt C 64 27.23 27.2045 4853.81 0.09% 11408.15
cg A 16 0.1375 0.1365 1526.25 0.73% 177600.0
cg A 32 0.103 0.1 2158.66 3.0% 670650.49
cg A 64 0.087 0.0835 2133.09 4.19% 1569172.41
cg B 8 11.613 11.622 7487.87 -0.08% 5158.27
cg B 16 6.7695 6.7675 7241.25 0.03% 17115.0
cg B 32 3.8015 3.796 10243.66 0.14% 86228.33
cg B 64 2.5065 2.495 10120.59 0.46% 258415.32
cg C 32 9.539 9.565 10243.66 -0.27% 34363.87
cg C 64 6.023 6.0215 10120.59 0.02% 107540.76
lu A 8 8.5815 8.563 19793.38 0.22% 18452.14
lu A 16 4.2185 4.2025 23753.44 0.38% 90092.45
lu A 32 2.233 2.2205 25736.47 0.56% 368816.39
lu A 64 1.219 1.202 27719.36 1.41% 1455323.22
lu B 8 35.2885 35.2465 31715.88 0.12% 7190.08
lu B 16 18.309 18.291 38060.44 0.1% 33260.53
lu B 32 9.976 9.949 41235.72 0.27% 132271.75
lu B 64 4.8795 4.839 44410.86 0.84% 582497.18
lu C 16 72.656 72.5845 60650.44 0.1% 13356.19
lu C 32 38.3815 38.376 65708.22 0.01% 54783.24
lu C 64 20.095 20.056 70765.86 0.19% 225380.19
Overhead for the BT, CG and LU NAS kernels collective algorithm communications. With this pattern, we have computed a
RR n°9038
A proof-of-concept version of this monitoring has been implemented in MPICH Inria
Nevertheless, a precise monitoring is still possible with the use of the monitoring API.RR n°9038
Inria
Publisher Inria Domaine de Voluceau -Rocquencourt BP 105 -78153 Le Chesnay Cedex inria.fr ISSN 0249-6399
Acknowledgments
This work is partially funded under the ITEA3 COLOC project #13024, and by the USA NSF grant #1339820. The PlaFRIM experimental testbed is being developed with support from Inria, LaBRI, IMB and other entities: Conseil Régional d'Aquitaine, FeDER, Université de Bordeaux and CNRS.
MPI_Comm_rank(MPI_COMM_WORLD, &rank); MPI_Comm_size(MPI_COMM_WORLD, &size); to = (rank + 1) % size; from = (rank + size - | 75,644 | [
"748686",
"15678",
"176883"
] | [
"135613",
"409750",
"409750",
"409750",
"456313"
] |
01485251 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01485251/file/SooPRE2017_Postprint.pdf | Heino Soo
David S Dean
Matthias Krüger
Particles with nonlinear electric response: Suppressing van der Waals forces by an external field
We study the classical thermal component of Casimir, or van der Waals, forces between point particles with highly anharmonic dipole Hamiltonians when they are subjected to an external electric field. Using a model for which the individual dipole moments saturate in a strong field (a model that mimics the charges in a neutral, perfectly conducting sphere), we find that the resulting Casimir force depends strongly on the strength of the field, as demonstrated by analytical results. For a certain angle between the external field and center-to-center axis, the fluctuation force can be tuned and suppressed to arbitrarily small values. We compare the forces between these particles with those between particles with harmonic Hamiltonians and also provide a simple formula for asymptotically large external fields, which we expect to be generally valid for the case of saturating dipole moments.
I. INTRODUCTION
Neutral bodies exhibit attractive forces, called van der Waals or Casimir forces depending on context. The earliest calculations were formulated by Casimir, who studied the force between two metallic parallel plates [1], and generalized by Lifshitz [2] for the case of dielectric materials. Casimir and Polder found the force between two polarizable atoms [3]. Although van der Waals forces are only relevant at small (micron scale) distances, they have been extensively measured (see, e.g., Refs. [4,5]). With recent advances in measurement techniques, including the microelectromechanical systems (MEMS) framework [START_REF] Gad-El Hak | The MEMS Handbook[END_REF], Casimir-Polder forces become accessible in many other interesting conditions.
Due to the dominance of van der Waals forces in nanoscale devices, there has been much interest in controlling such forces. The full Lifshitz theory for van der Waals forces [2] shows their dependence on the electrical properties of the materials involved. Consequently, the possibility of tuning a material's electric properties opens up the possibility of tuning fluctuation-induced interactions. This principle has been demonstrated in a number of experimental setups, for instance, by changing the charge carrier density of materials via laser light [START_REF] Chen | [END_REF]8], as well as inducing phase transformations by laser heating, which of course engenders a consequent change in electrical properties [8]. There is also experimental evidence of the reduction of van der Waals forces for refractive-indexmatched colloids [9][10][11]. The question of forces in external fields, electric and magnetic, has been studied in several articles [12][13][14][15][16][17][18][19]. When applying external fields, materials with a nonlinear electric response (which exhibit "nonlinear optics") open up a variety of possibilities; these possibilities are absent in purely linear systems where the external field and fluctuating field are merely superimposed. Practically, metamaterials are promising candidates for Casimir force modulation, as they can exhibit strongly nonlinear optical properties [20,21] and their properties can be tuned by external fields [22]. The nature and description of fluctuation-induced effects in nonlinear systems are still under active research [23][24][25][26], including critical systems, where the underlying phenomenon is per se nonlinear [11]. For example, in Ref. [26], it was shown that nonlinear properties may alter Casimir forces over distances in the nanoscale. However, in the presence of only a small number of explicit examples, more research is needed to understand the possibilities opened up by nonlinear materials.
In this article, we consider an analytically solvable model for (anharmonic) point particles with strongly nonlinear responses. This is achieved by introducing a maximal, limiting value for the polarization of the particles, i.e., by confining the polarization vector in anharmonic potential wells. Casimir forces in such systems appear to be largely unexplored, even at the level of two-particle interactions. We find that strong external electric fields can be used to completely suppress the Casimir force in such systems. We discuss the stark difference of forces compared with the case of harmonic dipoles and give an asymptotic formula for the force in strong external fields, which we believe is valid in general if the involved particles have a maximal value for the polarization (saturate). In order to allow for analytical results, we restrict our analysis to the classical (high temperature) limit. However, similar effects are to be expected in quantum (low temperature) cases.
We start by computing the Casimir force for harmonic dipoles in an external field in Sec. II, where in Sec. II B we discuss the role of the angle between the field and the center-to-center axis. In Sec. III A we introduce the nonlinear (anharmonic) well model and compute the Casimir force in an external field in Sec. III C. We finally give an asymptotic expression for high fields in Sec. III D.
II. FORCE BETWEEN HARMONIC DIPOLES IN A STATIC EXTERNAL FIELD
A. Model
Classical van der Waals forces can be described by use of quadratic Hamiltonians describing the polarization of the 1 particles involved [27][28][29]. We introduce the system comprising two dipole carrying particles having the Hamiltonian,
H (h) = H (h) 1 + H (h) 2 + H int , (1)
H (h) i = p 2 i 2α -p i • E, (2)
H int = -2k[3(p 1 • R)(p 2 • R) -p 1 • p 2 ], (3)
where p i is the instantaneous dipole moments of particle i.
Here α denotes the polarizability, where, for simplicity of presentation, we choose identical particles. The external, homogeneous static electric field E couples to p i in the standard manner. The term H int describes the nonretarded dipole-dipole interaction in d = 3 dimensions with the coupling constant
k = 1 4πε 0 R -3 , ( 4
)
where R = |R| with R the vector connecting the centers of the two dipoles, while R denotes the corresponding unit vector. Since we are considering purely classical forces, retardation is irrelevant. Here ε 0 is the vacuum permittivity, and we use SI units. Inertial terms are irrelevant as well and have been omitted. (Since the interaction does not depend on, e.g., the change of p i with time, inertial parts can be integrated out from the start in the classical setting.)
B. Casimir force as a function of the external field
The force F for the system given in Eqs. ( 1)-( 3), at fixed separation R, can be calculated from (as the external electric field is stationary, the system is throughout in equilibrium)
F = 1 β ∂ R ln Z, ( 5
)
where Z = d 3 p 1 d 3 p 2 exp (-βH ) is the partition function, with the inverse temperature β = 1/k B T and H is the Hamiltonian of the system. By using the coupling constant k from Eq. ( 4), this may also be written as
F = 1 β (∂ R k) ∂ k Z Z . ( 6
)
Furthermore, we are interested in the large separation limit, and write the standard series in inverse center-to-center distance (introducing R ≡ |R|),
F = 1 β (∂ R k) ∂ k Z Z k=0 + 1 β (∂ R k)k ∂ 2 k Z Z - ∂ k Z Z 2 k=0
+ O(R -10 ). [START_REF] Chen | [END_REF] In this series, the first term is of order R -4 , while the second is of order R -7 . The external electric field induces finite (average) dipole moments. For an isolated particle, this is (index 0 denoting an isolated particle, or k = 0)
p i 0 = d 3 p i exp (-βH i )p i d 3 p i exp (-βH i ) . ( 8
)
For the case of harmonic particles, Eq. ( 2), this naturally gives
p i 0 = αE. ( 9
)
FIG. 1. Casimir force between harmonic dipoles as a function of the strength of the external field. The angle between the field and the center-to-center vector R is chosen ϕ = arccos ( 1 √ 3 ). The force component decaying with ∼R -4 [discussed after Eq. ( 10)] then vanishes, so that the force decays as ∼R -7 .
The mean dipole moments of the isolated particles in Eq. ( 9), induced by the external electric field, give rise to a force decaying as R -4 , i.e., the first term in Eq. ( 7). This can be made more explicit by writing
∂ k Z Z k=0 = 2 p 1 0 • p 2 0 -6( p 1 0 • R)( p 2 0 • R). ( 10
)
Representing a force decaying as R -4 , this term dominates at large separations. From Eq. ( 10), the dependence on the angle between E and R becomes apparent. The induced force to order R -4 can be either attractive (e.g., R E) or repulsive (e.g., R ⊥ E) [START_REF] Jackson | Classical Electrodynamics[END_REF]. We are aiming at reducing the Casimir force through the electric field, and thus, term by term, try to obtain small prefactors. The considered term ∼R -4 is readily reduced by choosing R • Ê = 1 √ 3 , for which this term is exactly zero, ( ∂ k Z Z ) k=0 = 0. See the inset of Fig. 1 for an illustration. In the following sections we will thus study the behavior of the term ∼R -7 as a function of the external field, keeping this angle throughout.
C. Force for the angle R •
Ê = 1 √ 3 For R • Ê = 1 √
3 , the force is of order R -7 for large R, and reads
F | R• Ê= 1 √ 3 = ∂ R k 2 2β ∂ 2 k Z Z k=0 + O(R -10 ). ( 11
)
The discussion up to here, including Eq. ( 11), is valid generally, i.e., for any model describing individual symmetric particles, where the induced polarization is in the direction of the applied field. For the case of harmonic dipoles, i.e., for Eq. ( 2), we denote F = F h . Calculating (
∂ 2 k Z
Z ) k=0 for this case yields a result which is partly familiar from the case of harmonic dipoles in the absence of external fields (denoted F 0 ),
F h = 1 + 2 3 αβE 2 F 0 + O(R -10 ), (12)
F 0 = - 72 β α 4πε 0 2 R -7 . ( 13
)
Again, for zero field, E → 0, this is in agreement with the Casimir-Polder force in the classical limit [27], given by F 0 .
As the field is applied, the force increases, being proportional to E 2 for αβE 2 1. This is due to interactions of a dipole induced by the E field with a fluctuating dipole [compare also (34) below]. The term proportional to E 2 is naturally independent of T . The force as a function of external field is shown in Fig. 1.
The Casimir force given by Eq. ( 12) is thus tunable through the external field, but it can only be increased due to the square power law. While this might be useful for certain applications, we shall in the following investigate the case of highly nonlinear particles. The fact that the force in Eq. ( 13) is proportional to α 2 suggests that reduction of the force could be achieved, if the polarizabilities were dependent on the external field. In the next section, we will investigate a model for saturating particle dipole moments, where indeed the forces can be suppressed.
III. FORCE BETWEEN SATURATING DIPOLES IN AN EXTERNAL FIELD
A. Model: Infinite wells
The response of a harmonic dipole to an external field is by construction linear for any value of the field [see Eq. ( 9)], and the polarization can be increased without bound. We aim here to include saturation by introducing a limit P for the polarization, such that |p i | < P at all times and for all external fields. This can be achieved by modifying the Hamiltonian in Eq. ( 2), assigning an infinite value for |p i | > P . The potential for |p i | obtained in such a way is illustrated in Fig. 2.
As we aim to study the effect of saturation, while keeping the number of parameters to a minimum, we additionally take the limit α → ∞. This yields an infinite well potential (see FIG. 2. Illustration of a simple potential for the individual dipoles, which describes saturation. A parabola of curvature α -1 is cut off by a hard "wall" at the value P . Practically, we simplify even further by letting the polarizability α tend to infinity, so that the potential of Eq. ( 14) is approached. Physically, α → ∞ means α βP 2 .
the lower curves of Fig. 2 for the approach of this limit),
H (w) i = -p i • E, |p i | < P, ∞, otherwise. (14)
Such models have been studied extensively in different contexts, as, e.g., asymmetric quantum wells of various shapes [START_REF] Rosencher | [END_REF][32][33], two-level systems with permanent dipole moments [34], and dipolar fluids [35]. These systems are also known to be tunable with an external electric field [36,37]. However, the Casimir effect has not been investigated. This model, for example, mimics free electrons confined to a spherical volume, such as in a perfectly conducting, neutral sphere. The maximum value for the dipole moment in this case is the product of the radius and the total free charge of the sphere. The charge distribution in a sphere has, additionally to the dipole moment, higher multipole moments, e.g., quadrupolar. For a homogeneous external field, the Hamiltonian in Eq. ( 14) is, however, precise, as higher multipoles couple to spatial derivatives (gradients) of the field [START_REF] Jackson | Classical Electrodynamics[END_REF], and only the dipole moment couples to a homogeneous field. Also, the interaction part, Eq. ( 3), contains, in principle, terms with higher multipoles. These do not, however, play a role for the force at the order R -7 .
B. Polarization and polarizability
We start by investigating the polarization of an individual particle as a function of the field E, resulting from Eq. ( 14), which is defined in Eq. ( 8). It can be found analytically,
p i 0 = Q(βEP)P Ê, (15)
Q(x) = 1 x (x 2 -3x + 3)e 2x -x 2 -3x -3 (x -1)e 2x + x + 1 . ( 16
)
Note that the product βEP is dimensionless. For a small external field, we find the average polarization is given by
p i 0 = 1 5 βP 2 E + O(E 3 ). ( 17
)
We hence observe, as expected, that for a small field the particles respond linearly, with a polarizability α 0 ≡ 1 5 βP 2 . This polarizability depends on temperature, as it measures how strongly the particles' thermal fluctuations in the well are perturbed by the field. We may now give another interpretation of the limit α → 0 in Fig. 2: In order to behave as a "perfect" well, the curvature, given by α -1 , must be small enough to fulfill α α 0 . The normalized polarization [i.e., Q(βEP) = | p i 0 | P ] is shown in Fig. 3 as a function of external field. For small values of E, one sees the linear increase, according to Eq. ( 17). In the large field limit, the polarization indeed saturates to P Ê. The dimensionless axis yields the relevant scale for E, which is given through (βP ) -1 . At low temperature (or large P ), saturation is approached already for low fields, while at high temperature (or low P ), large fields are necessary for saturation.
Another important quantity related to the polarization is the polarizability, which is a measure of how easy it is to induce or change a dipole moment in a system. For harmonic FIG. 3. Characterization of an isolated particle described by the well model. The mean dipole moment [see Eq. ( 15)] and polarizations [see Eqs. ( 20) and ( 21)]. P is the "width" of the well potential, and α 0 ≡ 1 5 βP 2 denotes the zero-field polarizability.
particles, it is independent of external fields [see Eq. ( 9)]. In the case of particles with a nonlinear response, the field-dependent polarizability tensor α ij is of interest. It is defined through the linear response,
α ij = ∂ p i ∂E j . ( 18
)
Note that this derivative is not necessarily taken at zero field E, so that α ij is a function of E. Indices i and j denote the components of vectors (in contrast to previous notation). The polarizability tensor as defined in Eq. ( 18) is measured in the absence of any other particle (in other words, at coupling k = 0). α ij can be deduced directly from the function Q in Eq. ( 16). In general, we can write
α ij (β,E,P ) = A ij (βEP )α 0 . ( 19
)
Recall the zero-field polarizability is given as α 0 ≡ 1 5 βP 2 [see Eq. ( 17)]. For the isolated particle, the only special direction is provided by the external field E, and it is instructive to examine the polarizability parallel and perpendicular to it. Taking, for example, E along the z axis, the corresponding dimensionless amplitudes A = A zz and
A ⊥ = A xx = A yy are A (x) = 5 d dx Q(x), ( 20
)
A ⊥ (x) = 5 1 x Q(x). ( 21
)
The amplitudes for parallel and perpendicular polarizability are also shown in Fig. 3. The direct connection with the polarization is evident. For small fields, where the polarization grows linearly, the polarizability is independent of E. Analytically,
A (x) = 1 -3 35 x 2 + O(x 3 ), (22)
A ⊥ (x) = 1 -1 35 x 2 + O(x 3 ). ( 23
)
For large fields, i.e., when βEP is large compared to unity, the polarizability reduces due to saturation effects. Asymptotically for large fields, the polarizability amplitudes are given as
A (x) = 10x -2 + O(x -3 ), ( 24
)
A ⊥ (x) = 5x -1 -10x -2 + O(x -3 ). ( 25
)
The parallel polarizability α falls off as E -2 and the parallel polarizability α ⊥ as E -1 . The different power laws may be expected, as near saturation, changing the dipole's direction is a softer mode compared to changing the dipole's absolute value.
C. Casimir force
The Casimir force between particles described by the well potential, Eq. ( 14), is computed from the following Hamiltonian,
H (w) = H (w) 1 + H (w) 2 + H int , ( 26
)
H (w) i = -p i • E, |p i | < P, ∞, otherwise, (27)
with the interaction potential H int given in Eq. ( 3). The discussion in Sec. II regarding the angle of the external field holds similarly here, i.e., Eq. ( 11) is valid and the force decaying as R -4 vanishes for the angle R • Ê = 1 √ 3 . Therefore, we continue by studying the R -7 term at this angle. Using Eq. ( 11), the Casimir force can be found analytically,
F w = f w (βEP )F 0 + O(R -10 ), (28)
with the zero-field force
F 0 = - 72 β α 0 4πε 0 2 R -7 , (29)
and the dimensionless amplitude
f w (x) = 25 3 1 x 4 (x 2 + 3) sinh(x) -3x cosh(x) [x cosh(x) -sinh(x)] 2 ×[(2x 2 + 21)x cosh(x) -(9x 2 + 21) sinh(x)]. (30)
Again, α 0 ≡ 1 5 βP 2 is the zero-field polarizability [see Eq. ( 17)]. The force is most naturally expressed in terms of F 0 , which is the force at zero field, equivalent to Eq. ( 13). The amplitude f w is then dimensionless and depends, as the polarization, on the dimensionless combination βEP.
The force is shown in Fig. 4. For zero external fields, the curve starts at unity by construction, where the force is given by F 0 . The force initially increases for small values of βEP, in accordance with our earlier analysis of harmonic dipoles. After this initial regime of linear response, the Casimir force decreases for βEP 1, and, for βEP 1, asymptotically approaches zero as E -1 ,
F w = - 48P 3 (4πε 0 ) 2 R -7 E -1 + O(E -2 ). ( 31
)
This behavior yields an enormous potential for applications: By changing the external field, the force can be switched on or off. The asymptotic law in Eq. ( 31) gives another intriguing insight: For large fields, the force is independent of temperature. This is in contrast to the fact that (classical) fluctuation-induced forces in general do depend on temperature. This peculiar observation is a consequence of cancellations between factors of β, and might yield further possibilities for applications. This is demonstrated in Fig. 5, where we introduced a reference temperature T 0 . Indeed, we see that for small values of E, the force does depend on temperature, while for large fields, the curves for different values of temperature fall on top of each other. As a remark, we note that F 0 is inversely proportional to temperature, in contrast to F 0 for harmonic particles in Eq. ( 13). This is because the zero-field polarizability depends on temperature for the well potentials considered here.
Regarding experimental relevance, it is interesting to note that, in a somewhat counterintuitive way, larger values of P lead to stronger dependence on the external field E (the important parameter is βEP). We thus expect that larger particles FIG. 5. Temperature dependence of the Casimir force for saturating particles. For small E, the force decreases with temperature because the zero-field polarizability is α 0 = 1 5 βP 2 . For large E, the force is unexpectedly independent of T . are better candidates for observing the effects discussed here. For example, for a gold sphere of radius 100 nm, we estimate P = 5 × 10 -19 Cm, so that βEP ∼ 1 for E = 10 mV/m at room temperature.
D. Asymptotic formula for high fields
What is the physical reason for the decay of the force for large field E observed in Fig. 4? For large values of βEP, the force may be seen as an interaction between a stationary dipole and a fluctuating one. This is corroborated by a direct computation of the force between a stationary dipole q, pointing in the direction of the electric field, and a particle with the Hamiltonian
H (s) 1 = p 2 2α + p 2 ⊥ 2α ⊥ -p • E, (32)
where "perpendicular" and "parallel" refer to the direction of the E field as before. The two such hypothetical particles interact via the Hamiltonian
H (s) int = -2k[3(p • R)(q • R) -p • q]. (33)
Choosing the angle between R and E as before, we find for the force between these particles (to leading order in k),
F s = -24α ⊥ q 2 1 4πε 0 2 R -7 . ( 34
)
This result can be related to Eq. [START_REF] Rosencher | [END_REF]. Substituting q = P Ê, the value at saturation, and α ⊥ = 5/(βEP )α 0 = P /E [using the leading term for large field from Eq. ( 25)], we find
F s = -24 P 3 E 1 4πε 0 2 R -7 . ( 35
)
This is identical to Eq. ( 31), except for a factor of 2. This is expected, as this factor of 2 takes into account the force from the first fixed dipole interacting with the second fluctuating one and vice versa. We have thus demonstrated that Eq. ( 34) may be used to describe the behavior of the force for large values of E. The importance of this observation lies in the statement, that such reasoning might be applicable more generally: in the case of more complex behavior of p(E), i.e., more complex (or realistic) particles. We believe that the value of q at saturation and the polarizability α ⊥ near saturation can be used to accurately predict the force in the limit of large external fields via Eq. (34).
IV. SUMMARY
We have demonstrated how the classical Casimir-Polder force between two saturating dipoles can be suppressed by applying an external static electric field. Of special interest is the angle ϕ = arccos ( 1 √ 3 ) between the external field and the vector connecting the dipoles, for which the deterministic dipole-dipole interaction vanishes. The remaining "Casimir-Polder" part can then be tuned and is arbitrarily suppressed at large values of external fields due to the vanishing polarizability. The force in this case decays as E -1 . This is in strong contrast to harmonic dipoles, which experience an increase of the force in the presence of an external field, growing with E 2 . We also provided a simple formula to estimate the force between particles under strong fields. It would be interesting to extend the results here to macroscopic objects composed of such dipole carrying particles, where multibody effects will potentially change the physics for dense systems. However, for dilute systems, where the pairwise approximation of van der Waals forces is accurate, the results obtained here are directly applicable and thus the modulation of Casimir or van der Waals forces predicted here will apply to a certain extent. Of course, an important main difference in more than two-body systems is that the deterministic component of the interaction cannot be obviously canceled by a uniform electric field, as there is more than one center-to-center vector, denoted by R in this article, separating the interacting dipoles.
FIG. 4 .
4 FIG. 4. Casimir force between two saturating particles in an external electric field E. The angle between the field and the vector R is ϕ = arccos ( 1 √ 3 ).
ACKNOWLEDGMENTS
We thank G. Bimonte, T. Emig, N. Graham, R. L. Jaffe, and M. Kardar for useful discussions. This work was supported by Deutsche Forschungsgemeinschaft (DFG) Grant No. KR 3844/2-1 and MIT-Germany Seed Fund Grant No. 2746830. | 24,116 | [
"14411"
] | [
"237119",
"498426",
"136813",
"237119",
"498426"
] |
01485412 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://inria.hal.science/hal-01485412/file/iwspa17-alaggan-HAL-PREPRINT.pdf | Mohammad Alaggan
email: mohammad.alaggan@inria.fr
Mathieu Cunche
email: mathieu.cunche@inria.fr§marine.minier@loria.fr
Marine Minier
Non-interactive (t, n)-Incidence Counting from Differentially Private Indicator Vectors *
. Given one or two differen-
tially private indicator vectors, estimating the distinct count of elements in each [START_REF] Balu | Challenging Differential Privacy: The Case of Non-Interactive Mechanisms[END_REF] and their intersection cardinality (equivalently, their inner product [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF]) have been studied in the literature, along with other extensions for estimating the cardinality set intersection in case the elements are hashed prior to insertion [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF]. The core contribution behind all these studies was to address the problem of estimating the Hamming weight (the number of bits set to one) of a bit vector from its differentially private version, and in the case of inner product and set intersection, estimating the number of positions which are jointly set to one in both bit vectors.
We develop the most general case of estimating the number of positions which are set to one in exactly t out of n bit vectors (this quantity is denoted the (t, n)-incidence count), given access only to the differentially private version of those bit vectors. This means that if each bit vector belongs to a different owner, each can locally sanitize their bit vector prior to sharing it, hence the non-interactive nature of our algorithm.
Our main contribution is a novel algorithm that simultaneously estimates the (t, n)-incidence counts for all t ∈ {0, . . . , n}. We provide upper and lower bounds to the estimation error.
Our lower bound is achieved by generalizing the limit of two-party differential privacy [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] into nparty differential privacy, which is a contribution of independent interest. In particular we prove a lower bound on the additive error that must be incurred by any n-wise inner product of n mutually differentiallyprivate bit vectors.
Our results are very general and are not limited to differentially private bit vectors. They should apply to a large class of sanitization mechanism of bit vectors which depend on flipping the bits with a constant probability.
Some potential applications for our technique include physical mobility analytics [START_REF] Musa | Tracking unmodified smartphones using wi-fi monitors[END_REF], call-detailrecord analysis [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF], and similarity metrics computation [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF].
Introduction
Consider a set of n bit vectors, each of size m. Let a be the vector with m components, in which a i ∈ {0, . . . , n} is the sum of the bits in the i-th position in each of the n bit vectors. Then the (t, n)-incidence count is the number of positions i such that a i = t. Let the incidence vector Φ be the vector of n + 1 components in which Φ t is the (t, n)-incidence count, for t ∈ {0, . . . , n}. It should be noted that t Φ t = m, since all m buckets must be accounted for. Φ can also be viewed as the frequency of elements or histogram of a.
Now consider the vector ã resulting from the sanitized version of those vectors, if they have been sanitized by probabilistically flipping each bit b independently with probability 0 < p < 1/2:
b → b ⊕ Bernoulli(p) . (1)
Then each component of ã will be a random variable 1 defined as: ãi = Binomial(a i , 1 -p) + Binomial(na i , p). This is because (1) can be rewritten as: b → Bernoulli(p) if b = 0 and b → Bernoulli(1 -p) if b = 1, and there are a i bits whose value is one, and n -a i bits whose value is zero, and the sum of identical Bernoulli random variables is a Binomial random variable.
Finally, define Ψ to be the histogram of ã, similarly to Φ. To understand Ψ consider entry i of Φ, which is the number Φ i of buckets containing i ones out of n. Take one such bucket; there is a probability that the i ones in that bucket be turned into any of j = 0, 1, . . . , n. The vector describing such probabilistic transformation follows a multinomial distribution. This is visually illustrated in Figure 1, by virtue of an example on two bit vectors.
The main contribution of this paper is a novel algorithm to estimate the true incidence vector Φ given the sanitized incidence vector Ψ and p.
This model captures perturbed Linear Counting Sketches (similar to [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF] which is not a flipping model), and BLIP [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF][START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF] (a differentially-private Bloom filter).
In [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], Alaggan, Gambs, and Kermarrec showed that when the flipping probability satisfies (1-p)/p = exp( ) for > 0, then this flipping mechanism will satisfy -differential privacy (cf. Definition 2.1). This means that the underlying bit vectors will be pro- 1 Which is a special case of the Poisson binomial distribution, where there are only two distinct means for the underlying Bernoulli distributions. The mean and variance of ãi are defined as the sums of the means and variances of the two underlying binomial distribution, because they are independent. Figure 1: An example to our model. There are two bit vectors and a represents the number of bits set to one in each adjacent position, while Φ represents the histogram of a. For example Φ 1 is the number of entries in a which are equal to 1 (shown in red). The rest of the diagram shows what happens to entries of a if the bit vectors are sanitized by randomly and independently flipping each of their bits with probably p < 1/2, and how the histogram consequently changes to the random variable Ψ. In particular, that Φ t is probabilistically transformed into a vector-valued Multinomial random variable.
tected with non-interactive randomized-responsedifferential privacy in which = ln((1 -p)/p).
Summary of Our Results
We find that our results are best presented in terms of another parameter 0 < η < 1 instead of p. Let η be such that the flipping probably p = 1/2 -η/2. We will not reference p again in this paper.
In our presentation and through the entirety of this paper, both η (which we will reference as "the privcay parameter") and (which will be referenced as "the differential privacy parameter") are completely interchangeable; since one fully determines the other through the relation
= ln( 1 + η 1 -η ) . (2)
However, our theoretical results will be presented in terms of η for the sake of simplicity of presentation.
On the other hand, the experimental evaluation will be presented in terms of ; since is the differential privacy parameter and it will provide more intuition to the reader about the privacy guarantees provided for the reported utility (additive error). In a practical application, one may decide the value of first to suit their privacy and utility needs and then compute the resulting η value that is then given to our algorithm. A discussion on how to choose is provided in [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], which may also aid the reader with having an intuition to the value of used in our experimental evaluation and why we decided to use those values.
(t, n)-Incidence Estimation. In the following we describe upper U and lower bounds L to the additive error. That is,
max i |Ψ i -Φ i | U and L min i |Ψ i -Φ i |,
in which Φ is the estimate output by our algorithm (for the upper bound) or the estimate output by any algorithm (for the lower bound). L and U may depend on m, the size of the bit vectors, n, the number of bit vectors, η, the privacy parameter, and β, the probability that the bounds fails for at least one i.
Upper Bound. Theorem 4.4 states that there exist an algorithm that is -differentially private that, with probability at least 1 -β, simultaneously estimates Φ i for all i with additive error no more than
√ 2m • O(η -n ) • ln 1 β • ln(n + 1) .
Note that this is not a trivial bound since it is a bound on estimating n > 2 simultaneous n-wise inner products. Additionally, in relation to the literature on communication complexity [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF], we consider the numberin-hand rather than number-on-forehead communication model, which is more strict.
The O(η -n ) factor is formally proven, but in practice the actual value is much smaller, as explained in Section 6.1. A discussion of the practicality of this bound given the exponential dependence on n is given in Section 4.2.
Lower Bound. In Theorem 5.10 we generalize the results of [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] to multiple bit vectors and obtain the lower bound that for all i, any -differentially private algorithm for approximating Φ i must incur additive error
Ω √ m log 2 (m) • β • 1 -η 1 + η ,
with probability at least 1 -β over randomness of the bit vectors and the randomness of the perturbation.
It is worth noting that the upper bounds hold for all values of , but the lower bound is only shown for < 1. Also notice that this lower bound does not depend on n.
The result also presents a lower bound on the additive error that must be incurred by any such algorithm for estimating n-wise inner product. The relation between the n-wise inner product and (t, n)incidence is made explicit in the proof of Theorem 5.10.
In Section 2, we start by presenting differential privacy, after which we discuss the related work in Section 3, then in Section 4 we describe the the (t, n)incidence counting algorithm and prove its upper bounds. The lower bound on n-wise inner product is then presented in Section 5. Finally, we finish by validating our algorithm and bounds on a real dataset in Section 6 before concluding in Section 7.
Background
Differential Privacy
The notion of privacy we are interested is Differential Privacy [START_REF] Dwork | Differential Privacy[END_REF]. It is considered a strong definition of privacy since it is a condition on the sanitization mechanism that holds equally well for any instance of the data to be protected. Furthermore, it makes no assumptions about the adversary. That is, the adversary may be computationally unbounded and has access to arbitrary auxiliary information. To achieve this, any differentially private mechanism must be randomized. In fact, the definition itself is a statement about a probabilistic event where the probability is taken only over the coin tosses of such mechanism. The intuition behind differential privacy is that the distribution of the output of the mechanism should not change much (as quantified by a parameter ) when an individual is added or removed from the input. Therefore, the output does not reveal much information about that individual nor even about the very fact whether they were in the input or not. Definition 2.1 ( -Differential Privacy [START_REF] Dwork | Differential Privacy[END_REF]). A randomized function F : {0, 1} n → {0, 1} n is -differentially private, if for all vectors x, y, t ∈ {0, 1} n :
Pr[F(x) = t] exp( • x -y H ) Pr[F(y) = t] , (3)
in which xy H is the Hamming distance between x and y, that is, the number of positions at which they differ. The probability is taken over all the coin tosses of F.
The parameter is typically small and is usually thought of as being less than one. The smaller its value the less information is revealed and more private the mechanism is. However, it also means less estimation accuracy and higher estimation error. Therefore the choice of a value to use for is a trade-off between privacy and utility. To the best of our knowledge there is no consensus on a method to decide what this value should be. In some of the literature relevant to differentially private bit vectors [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF], an attack-based approach was adopted as a way to choose the largest (and thus highest utility) possible such that the attacks fail. Given the attacks from [START_REF] Alaggan | BLIP: Non-Interactive Differentially-Private Similarity Computation on Bloom Filters[END_REF] we can choose up to three without great risk.
Related Work
Incidence counting has been studied in the streaming literature as well as in the privacy-preserving algorithms literature under the names: t-incidence counting [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF], occurrence frequency estimation [START_REF] Cormode | Finding the Frequent Items in Streams of Data[END_REF][START_REF] Datar | Estimating Rarity and Similarity over Data Stream Windows[END_REF], or distinct counting [START_REF] Mir | Pan-Private Algorithms via Statistics on Sketches[END_REF]. We use these terms interchangeably to mean an accurate estimate of the distinct count, not an upper or lower bound on it.
There are several algorithms in the streaming literature that estimate the occurrence frequency of different items or find the most frequent items [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF][START_REF] Datar | Estimating Rarity and Similarity over Data Stream Windows[END_REF][START_REF] Cormode | Finding the Frequent Items in Streams of Data[END_REF]. The problem of occurrence frequency estimation is related to that of incidence counting in the following manner: they are basically the same thing except the former reports normalized relative values. Our algorithm, instead, reports all the occurrence frequencies, not just the most frequent ones. We face the additional challenging that we are given a privacy-preserving version of the input instead of its raw value, but since in our application (indicator vectors) usually m n, we use linear space in n, rather than logarithmic space like most streaming algorithms.
The closest to our work is the t-incidence count estimator of Dwork, Naor, Pitassi, Rothblum, and Yekhanin [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF]. Their differentially private algorithm takes the private stream elements a i before sanitation and sanitizes them. To the contrary, our algorithm takes the elements a i after they have already been sanitized. An example inspired by [START_REF] Alaggan | Sanitization of Call Detail Records via Differentially-Private Bloom Filters[END_REF] is that of call detail records stored by cell towers. Each cell tower stores the set of caller/callee IDs making calls for every time slot (an hour or a day for instance), as an indicator vector. After the time slot ends, the resulting indicator vector is submitted to a central facility for further analysis that involves multiple cell towers. Our work allows this central facility to be untrusted, which is not supported by [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF].
In subsequent work, Mir, Muthukrishnan, Nikolov, and Wright [START_REF] Mir | Pan-Private Algorithms via Statistics on Sketches[END_REF] propose a p-stable distribution-based sketching technique for differentially private distinct count. Their approach also supports deletions (i.e. a i may be negative), which we do not support. However, to reduce the noise, they employ the exponential mechanism [START_REF] Mcsherry | Mechanism Design via Differential Privacy[END_REF], which is known to be computationally inefficient. Their algorithm also faces the same limitations than the ones of [START_REF] Dwork | Pan-Private Streaming Algorithms[END_REF].
Upper Bounds
The algorithm we present and the upper bounds thereof depend on the probabilistic linear mapping A between the observed random variable Ψ and the unknown Φ which we want to estimate. In fact, A and its expected value A = E[A ] are the primary objects of analysis of this section. Therefore we begin by characterizing them.
Recall that Ψ is the histogram of ã (cf. Figure 1) and that the distribution of ãi is Z(n, p, a i ) in which Z(n, p, j) = Binomial(j, 1 -p) + Binomial(n -j, p) , (4) and p < 1/2. The probability mass function of
Z(n, p, j) is presented in Appendix A.
In what follows we drop the n and p parameters of Z(n, p, j) since they are always implied from context. We will also denote P (Z(j)) be the probability vector characterizing Z(j):
(Pr[Z(j) = 0], Pr[Z(j) = 1], . . . , Pr[Z(j) = n]).
Finally, e i will denote the ith basis vector. That is, the vector whose components are zero except the ith component which is set to one.
The following proposition defines the probabilistic linear mapping A between Ψ and Φ. Proposition 4.1. Let A be a matrix random variable whose jth column independently follows the multinomial distribution Multinomial(Φ j , P (Z(j))). Then the histogram of ã is the sum of the columns of A : Ψ = A 1 in which 1 = (1, 1, . . . , 1), and thus Ψ = j Multinomial(Φ j , P (Z(j))). Proof. Since Ψ is the histogram of ã, it is thus can be written as Ψ = i e ãi = j i∈{k|a k =j} e ãi . Then since 1) the sum of k independent and identical copies of Multinomial(1, p), for any p, has distribution Multinomial(k, p), and 2) |{k | a k = j}| = Φ j by definition, and 3) e ãi is a random variable whose distribution is Multinomial(1, P (Z(a i ))), then the result follows; because i∈{k|a k =j} e ãi has distribution Multinomial(Φ j , P (Z(j))).
The following corollary defines the matrix A, which is the the expected value of A . Corollary 4.2. Let A ∈ R (n+1)×(n+1) be the matrix whose jth column is P (Z(j)). Then EΨ = AΦ.
Proof. Follows from the mean of the multinomial distribution: E[Multinomial(Φ j , P (Z(j)))] = Φ j P (Z(j)).
It is also worth noting that due to the symmetry in (4), we have that
A ij = Pr[Z(j) = i] = Pr[Z n-j = n -i] = A n-i,n-j .
(5) For the rest of the paper we will be working exclusively with 1 -normalized versions of Ψ and Φ. That is, the normalized versions will sum to one. Since they both originally sum to m, dividing both of them by m will yield a vector that sums to one. The following corollary extends the results of this section to the case when Ψ and Φ are normalized to sum to one. In the following, diag(x) is the diagonal matrix whose off-diagonal entries are zero and whose diagonal equals x.
Corollary 4.3. Ψ = A 1 = A diag(1/Φ)Φ =⇒ Ψ/m = A diag(1/Φ)(Φ/m) , and consequently EΨ = AΦ =⇒ EΨ/m = AΦ/m .
The Estimation Algorithm
Let Φ = Φ/m and Ψ = Ψ/m be the 1 -normalized versions of Φ and Ψ.
Intuition. The first step in our algorithm is to establish a confidence interval2 of diameter f (δ)/2 around the perturbed incidence vector Ψ, such that, with probability at least 1 -β, its expected value x def = A Φ is within this interval. Note that this confidence interval depends only of public parameters such as η, m, and n, but not on the specific Ψ vector. Afterwards, we use linear programming to find a valid incidence vector within this interval that could be the preimage of Ψ, yielding the vector y def = A Φ . Since x is within this interval with probability at least 1 -β then the linear program has a solution with probability at least 1 -β. Consequently, x and y are within ∞ distance f (δ) from each other, with probability at least 1-β. It remains to establish, given this fact, the ∞ distance between the true Φ and the estimated Ψ , which is an upper bound to the additive error of the estimate. The details are provided later in Section 4.2.
Our estimation algorithm will take Ψ and A as input and will produce an estimate Φ to Φ. It will basically use linear programming to guarantee that
Ψ -A Φ ∞ f (δ)/2 . ( 6
)
The notation x ∞ is the max norm or ∞ norm and is equal to max i |x i |. Suitable constraints to guarantee that Φ is a valid frequency vector (that its components are nonnegative and sum to 1) are employed. These constraints cannot be enforced in case the naïve unbiased estimator A -1 Ψ is used (it would be unbiased because of Corollary 4.3). This linear program is shown in Algorithm 1.
The objective function of the linear program. The set of constraints of the linear program specify a finite convex polytope with the guarantee that, with probability 1 -β, the polytope contain the true solution, and that all points in this polytope are within a bounded distance from the true solution. We are then simply using the linear program as a linear constraint solver that computes an arbitrary point within this polytope. In particular, we are not using the linear program as an optimization mechanism. Hence, the reader should not be confused by observing that the objective function which the linear program would normally minimize is simply a constant (zero) which is independent of the LP solution.
From a practical point of view, however, it matters which point inside the polytope gets chosen. In particular, the polytope represents the probabilisticallybounded preimage of the perturbed observation. It is unlikely that the true solution lies exactly on or close to the boundary of such polytope, and is rather expected, probabilistically speaking, to exist closer to the centroid of the polytope that to its boundary. We have experimentally validated that, for low n, the centroid of the polytope is at least twice as close to the true solution than the output of the linear program (using the interior point method) which is reported in Section 6. Unfortunately, it is computationally intensive to compute the centroid for high n and thus we were not able to experimentally validate this claim in these cases. This also means that the centroid method is not practical enough. Instead, we recomment the use of the interior point algorithm for linear programming which is more likely to report a point from the interior of the polytope that the simplex algorithm which always reports points exactly on the boundary. We have also experimentally validated that the former always produces better estimates than the latter, even though both of them do satisfy our upper bound (which is independent of the LP algorithm used). An alternative theoretical analysis which provides an formal error bound for the centroid method could be the topic of future work. In Section 6 we only report results using the interior method algorithm.
Parameter Selection. The remainder of this section and our main result will proceed to show sufficient conditions that, with high probability, make (6) imply Φ -Φ ∞ δ for user-specified accuracy requirement δ. These conditions will dictate that either one of δ, , or m depend on the other two. Typically the user will choose the two that matter to him most and let our upper bounds decide the third. For example, if the user wants m to be small for efficiency and δ also be small for accuracy, then she will have to settle for a probably large value of which sacrifices privacy. Sometimes the resulting combination may be unfeasible or uninteresting. For instance, maybe m is required to be too large to fit in memory or secondary storage. Or perhaps δ will be required to be greater than one, which means that the result will be completely useless. In these cases the user will have to either refine his choice of parameters or consider whether his task is privately computable in the randomized response model. It may also be the case that a tighter analysis may solve this problem, since some parts of our analysis are somewhat loose bounds and there may be room for improvement. The probability 1 -β that the bound holds can be part of the trade-off as well.
Algorithm 1 Linear Program
Given Ψ and η, solve the following linear program for the variable Φ , in which f (δ
)/2 = A -1 ∞ 2 ln(1/β) ln(n + 1)/m. minimize 0 , s.t. ∀i -f (δ)/2 j Ψj -A ij Φ j f (
δ)/2 , and ∀i Φ i 0 , and
i Φ i = 1 .
Then output Φ = m Φ as the estimate of Φ.
Upper Bounding the Additive Error
As explained earlier, the first step is to find an ∞ ball of confidence around the the expected value of the perturbed incidence vector. This is is provided by Theorem B.1 through a series of approximations and convergences between probability distributions, which are detailed in two lemmas, all in the appendix. The high level flow and the end result is shown in the following theorem and is meant only to be indicative. For details or exact definitions of particular symbols, kindly refer to Appendix B.
Theorem 4.4. The component-wise additive error between the estimated incidence vector output by Algorithm 1 and the true incidence vector is
Φ -Φ ∞ √ 2m • O(η -n ) • ln 1 β • ln(n + 1).
Proof. Assuming the matrix A is nonsingular, the matrix norm (of A -1 ) induced by the max norm is, by definition:
A -1 ∞ = sup x =0 A -1 x ∞ /
x ∞ , and since A is nonsingular we can substitute x = Ay in the quantifier:
A -1 ∞ = sup y =0 A -1 Ay ∞ / Ay ∞ without loss of gen- erality, yeilding sup y =0 { y ∞ / Ay ∞ }. Thus for all y = 0, A -1 ∞ y ∞ Ay ∞
. If we multiply both sides by Ay
∞ / A -1
∞ (which is positive), we get:
A -1 ∞ y ∞ Ay ∞ .
In the following, we let y = Φ -Φ . The rest of the proof begins by upper bounding the following expression using the preceding derivation: Practicality of the bound. The factor O(η -n ) grows exponentially with n since η < 1. Therefore, if the bound is used in this form it may be useful for parameter selection only for very small n. In practice, however, the O(η -n ) factor is a over-estimation and its effective value is asymptotically sub-exponential. We discuss this issue and propose a practical solution in Section 6.1.
A -1 -1 ∞ Φ -Φ ∞ A( Φ -Φ ) ∞ = A Φ -A Φ ∞ = A Φ + Ψ -Ψ -A Φ ∞ A Φ -Ψ ∞ + A Φ -Ψ ∞ 2 A Φ -Ψ ∞ (LP constraint) 2 m CDF -1 G(aR+M,bR) (1 -β) (By Lemma B.3;Φ j ↑, n ↑) = 2 m (M + Rβ ) → 2 m (E 2 + (E 3 -E 1 )β ) (By Lemma B.2;η ↓) → 2
Lower Bounds
In this section we generalize the results of [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF] to multiple bit strings and obtain the lower bound on approximating $\Phi_i$. In the rest of this section we use $\lg(x)$ to denote the logarithm to base 2, and we let $\mu_0 = 1/2 - \eta/2$ and $\mu_1 = 1/2 + \eta/2$.

Definition 5.1 (Strongly $\alpha$-unpredictable bit source) [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF], Definition 3.2]. For $\alpha \in [0,1]$, a random variable $X = (X_1,\ldots,X_m)$ taking values in $\{0,1\}^m$ is a strongly $\alpha$-unpredictable bit source if for every $i\in\{1,\ldots,m\}$ we have
$$\alpha \leq \frac{\Pr[X_i=0 \mid X_1=x_1,\ldots,X_{i-1}=x_{i-1},X_{i+1}=x_{i+1},\ldots,X_m=x_m]}{\Pr[X_i=1 \mid X_1=x_1,\ldots,X_{i-1}=x_{i-1},X_{i+1}=x_{i+1},\ldots,X_m=x_m]} \leq 1/\alpha,$$
for every $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_m \in \{0,1\}^{m-1}$.

Definition 5.2 ($\beta$-closeness). Two random variables $X$ and $Y$ are $\beta$-close if the statistical distance between their distributions is at most $\beta$:
$$\frac{1}{2}\sum_v \big|\Pr[X = v] - \Pr[Y = v]\big| \leq \beta,$$
where the sum is over the set $\mathrm{supp}(X) \cup \mathrm{supp}(Y)$.
Definition 5.3 (Min-entropy). The min-entropy of a random variable $X$ is $H_\infty(X) = \inf_{x\in\mathrm{supp}(X)} \lg\frac{1}{\Pr[X=x]}$.
Proposition 5.4. (Min-entropy of strongly αunpredictable bit sources) If X is a strongly α-unpredictable bit source, then X has min-entropy at least m lg(1 + α).
Proof. Let p = Pr[X i = 1 | X 1 = x 1 , . . . , X i-1 = x i-1 , X i+1 = x i+1 , . . . , X m = x m ]
for any $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_m \in \{0,1\}^{m-1}$. Then we know that $\alpha \leq (1-p)/p \leq 1/\alpha$, and thus $p \leq 1/(1+\alpha)$. We can then verify that no string in the support of $X$ has probability greater than $1/(1+\alpha)^m$. Thus $X$ has min-entropy at least $\beta m$, in which $\beta = \lg(1+\alpha) \geq \alpha$.

Lemma 5.5 (A uniformly random bit string conditioned on its sanitized version is an unpredictable bit source). Let $X$ be a uniform random variable on bit strings of length $m$, and let $X'$ be a perturbed version of $X$, such that $X'_i = \mathrm{Bernoulli}(\mu_0)$ if $X_i = 0$ and $\mathrm{Bernoulli}(\mu_1)$ otherwise. Then $X$ conditioned on $X'$ is a strongly $\frac{1-\eta}{1+\eta}$-unpredictable bit source.
Proof. Observe that since $X$ is a uniformly random bit string, $X_i$ and $X_j$ are independent random variables for $i\neq j$. Since $X'_i$ depends only on $X_i$ for all $i$, and not on any other $X_j$ for $j\neq i$, $X'_i$ and $X'_j$ are also independent random variables. Then, using Bayes' theorem and the uniformity of $X$, we can verify that for all $x'\in\{0,1\}^m$ and for all $x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_m\in\{0,1\}^{m-1}$
$$\alpha \leq \frac{\Pr[X_i=0 \mid X_1=x_1,\ldots,X_{i-1}=x_{i-1},X_{i+1}=x_{i+1},\ldots,X_m=x_m,\,X'=x']}{\Pr[X_i=1 \mid X_1=x_1,\ldots,X_{i-1}=x_{i-1},X_{i+1}=x_{i+1},\ldots,X_m=x_m,\,X'=x']} \leq 1/\alpha,$$
in which $\alpha = \mu_0/\mu_1 = (1-\eta)/(1+\eta)$.
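For concreteness, the per-bit perturbation used throughout (a two-coin randomized response with $\mu_0 = 1/2 - \eta/2$ and $\mu_1 = 1/2 + \eta/2$) can be sketched as follows; the function name and the use of NumPy are ours.

```python
import numpy as np

def randomized_response(bits, eta, rng=None):
    """Report each bit as 1 with probability mu1 if it is 1, and mu0 if it is 0.

    This per-bit mechanism is ln((1+eta)/(1-eta))-differentially private.
    """
    rng = rng or np.random.default_rng()
    mu0, mu1 = 0.5 - eta / 2.0, 0.5 + eta / 2.0
    p_one = np.where(bits == 1, mu1, mu0)     # per-bit probability of reporting a 1
    return (rng.random(bits.shape) < p_one).astype(np.uint8)
```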
Lemma 5.6. Let $S_1,\ldots,S_n$ be $n$ uniform random variables on bit strings of length $m$, and for all $1\leq i\leq n$ let $S'_i$ be a perturbed version of $S_i$, such that for all $1\leq j\leq m$, $S'_{ij} = \mathrm{Bernoulli}(\mu_0)$ if $S_{ij}=0$ and $\mathrm{Bernoulli}(\mu_1)$ otherwise. Let $Y$ be a vector such that $Y_j = \prod_i S_{ij}$ and $Y'$ be a vector such that $Y'_j = \prod_i S'_{ij}$. Then $Y$ conditioned on $Y'$ is a strongly $\big(\frac{1-\eta}{1+\eta}\big)^n$-unpredictable bit source, and therefore has min-entropy at least $m\lg\big(1 + \big(\frac{1-\eta}{1+\eta}\big)^n\big)$.

Proof. Follows the same line as the proof of Lemma 5.5.

Theorem 5.7 [START_REF] Mcgregor | The Limits of Two-Party Differential Privacy[END_REF], Theorem 3.4]. There is a universal constant $c$ such that the following holds. Let $X$ be an $\alpha$-unpredictable bit source on $\{0,1\}^m$, let $Y$ be a source on $\{0,1\}^m$ with min-entropy $\gamma m$ (independent from $X$), and let $Z = X\cdot Y \bmod k$, for some $k\in\mathbb{N}$, be the inner product of $X$ and $Y$ mod $k$. Then for every $\beta\in[0,1]$, the random variable $(Y,Z)$ is $\beta$-close to $(Y,U)$, where $U$ is uniform on $\mathbb{Z}_k$ and independent of $Y$, provided that
$$m \geq c\cdot\frac{k^2}{\alpha\gamma}\cdot\lg\frac{k}{\gamma}\cdot\lg\frac{k}{\beta}.$$
Theorem 5.8 [11, Theorem 3.9]. Let $P(x,y)$ be a randomized protocol which takes as input two uniformly random bit vectors $x, y$ of length $m$ and outputs a real number. Let $P$ be $\ln\big(\frac{1+\eta}{1-\eta}\big)$-differentially private and let $\beta \geq 0$. Then with probability at least $1-\beta$ over the inputs $x, y \leftarrow \{0,1\}^m$ and the coin tosses of $P$, the output differs from $x^T y$ by at least
$$\Omega\left(\frac{\sqrt{m}}{\lg(m)}\cdot\beta\cdot\frac{1-\eta}{1+\eta}\right).$$
Theorem 5.9. Let $P(S_1,\ldots,S_n) = \sum_j\prod_i S_{ij}$ be the $n$-wise inner product of the vectors $S_1,\ldots,S_n$. If for all $i$, $S_i$ is a uniform random variable on $\{0,1\}^m$, and $S'_i$ is the perturbed version of $S_i$, such that $S'_{ij} = \mathrm{Bernoulli}(\mu_0)$ if $S_{ij}=0$ and $\mathrm{Bernoulli}(\mu_1)$ otherwise, then with probability at least $1-\beta$ the output of any algorithm taking $S'_1,\ldots,S'_n$ as inputs will differ from $P(S_1,\ldots,S_n)$ by at least
$$\Omega\left(\frac{\sqrt{m}}{\lg(m)}\cdot\beta\cdot\frac{1-\eta}{1+\eta}\right).$$
Proof. Without loss of generality, take $S_1$ to be one vector and $Y$, with $Y_j = \prod_{i=2}^n S_{ij}$, to be the other vector. Then we will use Theorem 5.8 to bound $S_1^T Y$. To use Theorem 5.8, we first highlight that $S'_i$ is a $\ln\big(\frac{1+\eta}{1-\eta}\big)$-differentially private version of $S_i$. Then, since Theorem 5.8 depends on Theorem 5.7, we will show that $S_1$ and $Y$ satisfy the conditions of the latter theorem. Theorem 5.7 concerns the inner product between two bit sources, one of which is an unpredictable bit source while the other has linear min-entropy. Lemma 5.5 shows that $S_1$ conditioned on its sanitized version $S'_1$ is an $\alpha$-unpredictable bit source, and Lemma 5.6 shows that $Y$ has linear min-entropy (assuming $n$ is constant in $m$).

Theorem 5.10. Let $S_1,\ldots,S_n$ be uniformly random binary strings of length $m$ and let $S'_i$ be a perturbed version of $S_i$, such that $S'_{ij} = \mathrm{Bernoulli}(\mu_0)$ if $S_{ij}=0$ and $\mathrm{Bernoulli}(\mu_1)$ otherwise. Then let the vectors $v, v'$ of length $m$ be such that $v_i = \sum_j S_{ji}$ and $v'_i = \sum_j S'_{ji}$, and let the vector $\Phi = (\Phi_0,\ldots,\Phi_n)$, in which $\Phi_i = |\{j : v_j = i\}|$ is the frequency of $i$ in $v$, and similarly $\Phi'$ the frequencies in $v'$. Then with probability at least $1-\beta$ the output of any algorithm taking $S'_1,\ldots,S'_n$ as inputs differs from $\Phi_i$, for all $i$, by at least
$$\Omega\left(\frac{\sqrt{m}}{\lg(m)}\cdot\beta\cdot\frac{1-\eta}{1+\eta}\right).$$
Proof. We will proceed by reducing $n$-wise inner product to frequency estimation. Since Theorem 5.9 rules out accurately computing the former, the theorem follows. The reduction is as follows. Let $P(j,A) = \prod_{i\in A} S_{ij}$ be the product of the bits in a particular position $j$ across a subset $A$ of the binary strings. Observe that $\sum_j P(j,[n])$, with $[n] = \{1,\ldots,n\}$, is the $n$-wise inner product of all the binary strings. Similarly, let $\bar{P}(j,A) = \prod_{i\in A}(1 - S_{ij})$ be the product of the negated bits. Finally, denote $Q(A) = \sum_j P(j,A)\,\bar{P}(j,A^C)$, in which $A^C = [n]\setminus A$ is the complement of the set $A$. Now we claim that
$$\Phi_k = \sum_{A\subseteq[n],\,|A|=k} Q(A),$$
in which the sum is over all subsets of $[n]$ of size $k$. This can be seen since, for a set $A$ of size $k$, $P(j,A)\,\bar{P}(j,A^C)$ is one only if $\sum_i S_{ij} = k$. Since there may be several sets $A$ of the same size $k$, we can conclude that the sum over all such sets, $\sum_{A\subseteq[n],|A|=k} P(j,A)\,\bar{P}(j,A^C)$, is one if and only if $\sum_i S_{ij} = k$, and thus the sum (over all $j$) of the former quantity is the count (frequency) of the latter.
We will then show why the result follows, first for $\Phi_0$ and $\Phi_n$ and then for $\Phi_1,\Phi_2,\ldots,\Phi_{n-1}$. According to this reduction, $\Phi_0$ (resp. $\Phi_n$) is equivalent to the $n$-wise inner product of $\{1-S_1, 1-S_2, \ldots, 1-S_n\}$ (resp. $\{S_1, S_2, \ldots, S_n\}$), and thus if one were able to compute $\Phi_0$ (resp. $\Phi_n$) within error $\gamma$, they would also be able to compute those two $n$-wise inner products within error $\gamma$. We then employ the lower bound on the $n$-wise inner product from Theorem 5.9 to lower bound $\gamma$ for $\Phi_0$ and $\Phi_n$. For $\Phi_i$ with $i\notin\{0,n\}$, $\Phi_i$ is equivalent to the sum of $\binom{n}{i}$ $n$-wise inner products. In the case where all but one of those $n$-wise inner products are zero, an estimate of $\Phi_i$ within error $\gamma$ gives an estimate for a particular $n$-wise inner product within error $\gamma$ as well, in which case we can invoke Theorem 5.9 again to lower bound $\gamma$ for $\Phi_i$.
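As a quick sanity check of the reduction identity $\Phi_k = \sum_{|A|=k} Q(A)$, the following sketch verifies it numerically on small random instances (the enumeration of subsets is exponential in $n$ and is for illustration only; all names are ours).

```python
import numpy as np
from itertools import combinations

def incidence_vector(S):
    """Phi[k] = #{j : sum_i S[i, j] == k} for an n x m 0/1 matrix S."""
    n, _ = S.shape
    return np.bincount(S.sum(axis=0), minlength=n + 1)

def incidence_via_products(S):
    """Recover Phi from subset products, following the reduction in the proof of Theorem 5.10."""
    n, _ = S.shape
    phi = np.zeros(n + 1, dtype=int)
    for k in range(n + 1):
        for A in combinations(range(n), k):
            comp = [i for i in range(n) if i not in A]
            # P(j, A) * Pbar(j, A^C) is 1 exactly when column j has ones on A and zeros on A^C.
            phi[k] += (S[list(A), :].prod(axis=0) * (1 - S[comp, :]).prod(axis=0)).sum()
    return phi

rng = np.random.default_rng(0)
S = rng.integers(0, 2, size=(4, 50))
assert np.array_equal(incidence_vector(S), incidence_via_products(S))
```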
Experimental Evaluation
We use the Sapienza dataset [START_REF] Barbera | CRAW-DAD dataset sapienza/probe-requests (v. 2013-09-10[END_REF] to evaluate our method. It is a real-life dataset composed of wireless probe requests sent by mobile devices in various locations and settings in Rome, Italy. We only use the MAC address part of the dataset, as typical physical analytics systems do [START_REF] Musa | Tracking unmodified smartphones using wi-fi monitors[END_REF]. It covers a university campus as well as city-wide, national, and international events. The data was collected over three months between February and May 2013, and contains around 11 million probes sent by 162305 different devices (different MAC addresses); this is therefore the size ($m$) of our indicator vectors. The released data is anonymized. The dataset contains eight settings, called POLITICS1, POLITICS2, VATICAN1, VATICAN2, UNIVERSITY, TRAINSTATION, THEMALL, and OTHERS. Each setting is composed of several files. Files are labeled according to the day of capture, and files within the same setting occurring on the same day are numbered sequentially. In our experiments we set the parameter $n \in \{1, 2, \ldots, 21\}$, indicating the number of sets we want to experiment on. Then we pick $n$ random files from all settings and proceed to estimate their $t$-incidence according to our algorithm. We add 1 to all incidence counts so that the $t$-incidence of the random subset is nonzero for all $t$; this reduces the computational overhead that would otherwise be needed to find a combination of files with nonzero incidence for every $t$ when $n$ is large. This is unlikely to affect the results since the additive error will be much larger than 1 (about $O(\sqrt{m})$) anyway.
The additive error reported is the maximum additive error across all t. In real-life datasets, the additive error would be a problem only for low values of t (closer to the "intersection"), since the true value may be smaller than the additive error. However, for high t (closer to the "union"), high additive error is unlikely to be damaging to utility. This is a property of most real-world datasets since they are likely to follow a Zipf distribution. If this is the case it may be useful to consider employing the estimated union (or high t) to compute the intersection (or low t) via the inclusion-exclusion principle instead.
Calibrating to the Dataset
In our experiments we observe that the value of $\|A^{-1}\|_\infty$ may be too high for small $\epsilon$, making it useless as an upper bound in this case. This is due to the definition of the induced norm, which takes the maximum over all vectors whose max norm is 1; this maximum is achieved for vectors in $\{-1,1\}^{n+1}$. However, in reality it is unlikely that the error vector will be this large, and thus it may never actually reach this upper bound (as confirmed by the experiments). Instead, we consider the maximum over $\Gamma = \{-\gamma,\gamma\}^{n+1}$ for $\gamma < 1$ and use the fact that linearity implies $\max_{x\in\Gamma}\|A^{-1}x\|_\infty = \gamma\,\|A^{-1}\|_\infty$. We empirically estimate $\gamma$ by estimating, from the dataset, the multinomial distribution of $\Phi$ for each $n$ and each $\epsilon$; we then sample vectors from this distribution, run our algorithm on them, and compute $\gamma$ from the resulting error vectors. We stress that this calibration process does not use any aspect of the dataset other than the distribution of $\Phi$, and that $\gamma$ depends only on $n$ and $\epsilon$ and not on the actual incidence vector. Therefore, in a real-life situation where no dataset is available prior to deployment to run this calibration on, it suffices to have prior knowledge of (or an expectation about) the distribution of the incidence vectors. For most applications it should follow a power-law distribution.
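The calibration loop could look roughly as follows. This is only a sketch under our own assumptions: the perturbed incidence is simulated per column through $Z(j)$ of Appendix A, `estimate` stands for any estimator such as the LP of Algorithm 1, and the summary statistic (a high quantile of the normalized maximum error) is our choice, since the exact recipe is not fully specified here.

```python
import numpy as np

def calibrate_gamma(phi_probs, n, m, eta, estimate, trials=200, rng=None):
    """Empirically estimate the effective scaling factor gamma for ||A^{-1}||_inf."""
    rng = rng or np.random.default_rng(0)
    p = 0.5 - eta / 2.0                       # mu0: a 0-bit is reported as 1 with this probability
    gammas = []
    for _ in range(trials):
        phi = rng.multinomial(m, phi_probs)   # plausible incidence vector for this n
        psi = np.zeros(n + 1, dtype=int)
        for j, count in enumerate(phi):       # perturb columns with value j via Z(j)
            z = rng.binomial(j, 1 - p, size=count) + rng.binomial(n - j, p, size=count)
            psi += np.bincount(z, minlength=n + 1)
        err = (estimate(psi) - phi) / m       # normalized error vector
        gammas.append(np.abs(err).max())
    return float(np.quantile(gammas, 0.9))    # conservative summary (our choice)
```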
In Figure 2, all the lines represent the $1-\beta$ quantile. For instance, the Sapienza line shows the $1-\beta$ quantile over 1000 runs; for the other line, the upper bound value was computed to hold with probability at least $1-\beta$. The value of $\beta$ we used is 0.1. The corresponding values for the lower bound are independent of $n$ and are $\{1.3\times10^{-5}, 8.7\times10^{-6}, 5.3\times10^{-6}, 3.2\times10^{-6}, 1.9\times10^{-6}, 1.2\times10^{-6}, 7.1\times10^{-7}\}$, corresponding to the respective values of $\epsilon$ on the x-axis. We observe that the upper bound is validated by the experiments, as it is very close to the observed additive error. In addition, the additive error itself resulting from our algorithm is very small even for $\epsilon$ as small as 0.5. For $\epsilon = 0.1$ the increase in additive error is unavoidable, since such a relatively high error may be necessary to guarantee the stronger privacy requirement in this case.
Conclusion
We have presented a novel algorithm for estimating incidence counts of sanitized indicator vectors. It can also be used to estimate the $n$-wise inner product of sanitized bit vectors, exploiting the relationship described in the proof of Theorem 5.10. We provided a theoretical upper bound whose accuracy is validated by experiments on a real-life dataset. Moreover, we extended a previous lower bound on the 2-wise inner product to the $n$-wise inner product. Finally, we evaluated our algorithm on a real-world dataset and validated its accuracy, the general upper bound, and the lower bound.
Figure 2: The additive error $\|\hat{\Phi} - \Phi\|_\infty$ (on the y-axis) is plotted against the differential privacy parameter $\epsilon$ (on the x-axis) and the number of vectors $n$ (in different subplots). The y-axis is in logarithmic scale while the x-axis is linear.

We use the bound $p_i = A_{ij}(1-A_{ij}) \leq 1/4$ (for any $j$, since in the limit $A_{ij}(1-A_{ij}) = A_{ik}(1-A_{ik})$ for all $j,k$). The bound holds since $A_{ij}$ is a probability value in $(0,1)$ and the maximum of the polynomial $x(1-x)$ is $1/4$. Consequently, $F_{\eta\downarrow}(x) \geq \big(\mathrm{erf}\big(x\sqrt{2/m}\big)\big)^{n+1}$.
Therefore $F^{-1}_{\eta\downarrow}(q) = \sqrt{m/2}\;\mathrm{erf}^{-1}\big(q^{1/(n+1)}\big)$. Hence, $M + RC(1-\beta) = \sqrt{m/2}\;D(n,\beta)$.

Proof. Consider the following transformation of the random variable $A'$:
$$\begin{aligned}
m\,\|A\Phi' - \Psi'\|_\infty &= m\,\big\|\big(A - A'\,\mathrm{diag}(1/\Phi)\big)(\Phi/m)\big\|_\infty = m\,\big\|\big(A\,\mathrm{diag}(\Phi) - A'\big)\,\mathrm{diag}(1/\Phi)(\Phi/m)\big\|_\infty \\
&= m\,\big\|\big(A\,\mathrm{diag}(\Phi) - A'\big)\mathbf{1}/m\big\|_\infty = \big\|\big(A\,\mathrm{diag}(\Phi) - A'\big)\mathbf{1}\big\|_\infty = \|A\Phi - A'\mathbf{1}\|_\infty \\
&= \max_i\big\{\big|A_{i\bullet}\Phi - A'_{i\bullet}\mathbf{1}\big|\big\} = \max_i\Big|\sum_j\big(A_{ij}\Phi_j - A'_{ij}\big)\Big| \overset{d}{=} \max_i\Big|\sum_j\big(A_{ij}\Phi_j - \mathrm{Binomial}(\Phi_j, A_{ij})\big)\Big|,
\end{aligned}$$
since the marginal distribution of a multinomial random variable is the binomial distribution, which in turn converges in distribution to the normal distribution, by the central limit theorem, as $\min_j\Phi_j$ grows (which is justified in Lemma C.1),
$$\overset{d}{\to} \max_i\Big|\sum_j\Big(A_{ij}\Phi_j - \mathcal{N}\big(\Phi_j A_{ij},\ \sigma^2_{ij} = \Phi_j A_{ij}(1-A_{ij})\big)\Big)\Big|$$
$$= \max_i \mathrm{HalfNormal}\big(\theta_i^2\big), \qquad \text{in which } \theta_i^2 = \sum_j \Phi_j A_{ij}(1-A_{ij}).$$
The last convergence result is due to the fact that maxima approach the Gumbel distribution, and we therefore choose a Gumbel distribution matching the median and interquantile range of the actual distribution of the maximum of HalfNormals, whose CDF is the product of their CDFs. This is done by setting $a$ and $b$ to be the parameters of a Gumbel distribution $G(a,b)$ with zero median and interquantile range of one, and then using the fact that the Gumbel distribution belongs to a location-scale family, which also implies that it is uniquely defined by its median and interquantile range (two unknowns and two equations).
C Discrete Uniform Distribution on Φ
We treat the case of uniform streams in this section. We call the vector $a = (a_1,\ldots,a_m)$ uniform if $a_i$ is uniform on the range $\{0,\ldots,n\}$. Considering the marginal distribution of the resulting incidence vector, we observe that in this case $\mathbb{E}\Phi_i = \mathbb{E}\Phi_j$ for all $i,j$. However, $\Phi_i$ will be strongly concentrated around its mean.
In the above, $G$ is the Gumbel distribution, $\beta' = a - b\ln(-\ln(1-\beta))$, and $R, M$ (which depend on $n$ and $\eta$) and $a, b$ (which are absolute constants) are all defined in Lemma B.3 and subsequently approximated in Lemma B.2. It remains to show that $\|A^{-1}\|_\infty = O(\eta^{-n})$, which holds since $\eta^{-n}$ is the largest eigenvalue of $A^{-1}$. The growth of $\Phi_j$ may either be justified or quantified in probability by Lemma C.1.
Lemma B.3. Let $F(x) = \prod_i \mathrm{erf}\Big(x\big/\sqrt{\sum_j 2\Phi_j A_{ij}(1-A_{ij})}\Big)$ be a cumulative distribution function (CDF) and let $M = F^{-1}(1/2)$ and $R = F^{-1}(3/4) - F^{-1}(1/4)$ be the median and the interquantile range of the distribution represented by $F$, respectively. Additionally, let $C(x) = c_0\ln\log_2(1/x)$, in which $c_0 = 1/\ln\log_4(4/3)$. Then, if $\alpha < 1$ is a positive real number, with probability at least $1-\beta$ we have that $\|A\Phi' - \Psi'\|_\infty \leq m^{-1}\big(M + RC(1-\beta)\big)$.
As $n$ grows, this converges in distribution to the Gumbel distribution, by the extreme value theorem [START_REF] Coles | An Introduction to Statistical Modeling of Extreme Values[END_REF]: $\overset{d}{\to} G(aR+M, bR)$, in which $R = F^{-1}(3/4) - F^{-1}(1/4)$, $M = F^{-1}(1/2)$, $a = -\ln(\ln 2)/\ln(\log_4(4/3))$, and $b = -1/\ln(\log_4(4/3))$, where $F(x) = \prod_i F_i(x)$ and $F_i(x)$ is the CDF of $\mathrm{HalfNormal}(\theta_i^2)$. Therefore, computing the quantile function of the Gumbel distribution at $1-\beta$ shows that with probability at least $1-\beta$ we have $m\|A\Phi' - \Psi'\|_\infty \leq M + R\big(a - b\ln(-\ln(1-\beta))\big) = M + R\,\frac{\ln(-\log_2(1-\beta))}{\ln(\log_4(4/3))}$.
The word "interval" is inappropriate here since the random variable is a vector. Technically, " ∞-ball" would be more appropriate.
If $\eta$ is not small enough, then the probability matrix $A$ approaches the identity matrix $I$ and $\Psi$, a known quantity, becomes close to the unknown quantity $\Phi$; hence, we can substitute it instead. For practical purposes, if this is the case, then we would not need this lemma and could use Lemma B.3 directly. In special cases where $\Phi$ is close to uniform, we may quantify the probability of setting $\Phi_j = m/(n+1)$ by Lemma C.1.
supported by Cisco grant CG# 593780.
A The Probability Mass Function (PMF) of Z(j)
Since $Z(j) = \mathrm{Binomial}(j, 1-p) + \mathrm{Binomial}(n-j, p)$, its PMF is given by the convolution $\Pr[Z(j)=i] = \sum_\ell \Pr[\mathrm{Binomial}(j, 1-p) = \ell]\;\Pr[\mathrm{Binomial}(n-j, p) = i-\ell]$. Consider one term in the summation, $t_\ell$, which equals $\binom{j}{\ell}(1-p)^\ell p^{j-\ell}\,\binom{n-j}{i-\ell} p^{i-\ell}(1-p)^{n+\ell-i-j}$. Since the ratio $t_{\ell+1}/t_\ell$ is a rational function in $\ell$, the summation over $\ell$ can be represented as a hypergeometric function, given that $i+j \leq n$. The case of $i+j > n$ is computed by symmetry as in [START_REF] Coles | An Introduction to Statistical Modeling of Extreme Values[END_REF]. The notation $_2F_1$ denotes the Gauss hypergeometric function, and $(x)_k$ is the rising factorial notation, also known as the Pochhammer symbol.
B Bounding Deviation of A from its Mean
Theorem B.1 (Bounding deviation of A ). Let α = f (δ)/2 and β be positive real numbers less than one. Then with probability at least
Proof. We use Lemma B.2 and the fact that $E(n,x)$ approaches $\big(Z - \ln(\pi Z - \pi\ln(\pi))\big)/\sqrt{2}$ as $n$ approaches $\infty$ (according to its expansion at $n\to\infty$), in which $Z(x) = \ln\big(2n^2/\ln^2(4/x)\big)$. This is a good approximation even for $n \geq 1$, except for $x = 1$, where it becomes a good approximation for $n \geq 4$. Let
, and c 0 = 1/ ln log 4 (4/3).
Proof. We use Lemma B.3. Since $A$ goes to a rank-1 matrix as fast as $\eta^n$ (its smallest eigenvalue), we see that for every $i$, $A_{ij}(1-A_{ij})$ approaches a value that does not depend on $j$; call it $p_i$. Therefore, $\sum_j 2\Phi_j A_{ij}(1-A_{ij}) \to 2p_i\sum_j\Phi_j = 2mp_i$, a value which does not depend on the particular, unknown, composition. Thus, since the choice of the weak $(n+1)$-composition $\Phi$ of $m$ does not matter, we set $\Phi_j = m/(n+1)$ in the statement of Lemma B.3 and proceed³. Therefore $F(x)$ approaches $F_{\eta\downarrow}(x)$.
We could compute the limit $p_i$ if we require dependence on $\eta$ for fine tuning; however, we will instead use the bound $p_i \leq 1/4$. Therefore, we consider an even stronger model in which $\mathbb{E}\Phi_i$ is still equal to $\mathbb{E}\Phi_j$ for all $i,j$, but $\Phi_i$ is marginally almost uniform on its range. An algorithm doing well in this latter case (with higher variance) can intuitively do at least as well in the former case (with less variance).
The vector $\Phi$ is a vector of $n+1$ elements but with only $n$ degrees of freedom, since it has to sum to $m$. Therefore, we cannot consider the discrete uniform product distribution on its entries. Instead, we will consider the joint uniform distribution on all nonnegative integer vectors which sum to $m$. All such vectors form the set of weak $(n+1)$-compositions of $m$ [15, p. 25].
$1-\beta$, then with probability at least $1-\beta$, $\min_j\Phi_j \geq \delta$, assuming $\Phi$ is picked uniformly at random from all weak $(n+1)$-compositions of $m$.
Proof. Notice that the sum of Φ must be m, therefore, it has only n degrees of freedom instead of n + 1. In fact, Φ is the multivariate uniform distribution on weak (n + 1)-compositions 4 of m. Notice that the marginal distribution of Φ j is not Uniform(0, m), but rather lower values of Φ j have strictly higher probability than greater ones.
Consider the compositions of $m$ into exactly $n+1$ parts, in which each part is greater than or equal to $\delta$. There are exactly⁵ $C_{n+1}(m;\delta)$ of them.
Hence, the joint probability that all entries of $\Phi$ exceed a desired threshold $\delta$, simultaneously, is $\frac{C_{n+1}(m;\delta)}{C_{n+1}(m)}$. In the rest of this proof we will use $\left[ n \atop k \right]$, the unsigned Stirling cycle number (i.e., Stirling numbers of the first kind), $x^{\underline{n}} = x(x-1)\cdots(x-(n-1))$ the falling factorial power, and $x^{\overline{n}} = x(x+1)\cdots(x+(n-1))$ the rising factorial power. We will also use the identity
All the definitions and a proof of the aforementioned identity can be found in [10].

⁴ A weak $k$-composition of an integer $n$ is a way of writing $n$ as the sum of $k$ non-negative integers (zero is allowed) [15, p. 25]. It is similar to integer partitions except that the order is significant. The number of such weak compositions is $\binom{n+k-1}{k-1}$.
and then, substituting µ = m + 1 and ν = n + 1 for readability
which is true when the sufficient condition $(\mu - \nu\delta)^{\overline{k}} - (1-\beta)\,\mu^{\overline{k}} \geq 0$ holds for all $1 \leq k \leq n$. Equivalently, when | 47,443 | [
"915035",
"5208",
"1084364"
] | [
"203831",
"206120",
"203831",
"206120",
"206040",
"450090"
] |
01485736 | en | [
"info"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/hal-01485736/file/driving4HAL.pdf | Antonio Paolillo
email: paolillo@lirmm.fr
Pierre Gergondet
email: pierre.gergondet@aist.go.jp
Andrea Cherubini
email: cherubini@lirmm.fr
Marilena Vendittelli
email: vendittelli@diag.uniroma.it
Abderrahmane Kheddar
email: kheddar@lirmm.fr
Autonomous car driving by a humanoid robot
Enabling a humanoid robot to drive a car requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and whole-body motions to ingress/egress the car. In this paper, we present a sensor-based reactive framework for realizing the central part of the complete task, consisting in driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements, to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal references are sent to the robot controller to achieve the driving task with the humanoid. We present results from a driving experiment with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.
Introduction
The potential of humanoid robots in disaster contexts has been exhibited recently at the DARPA Robotics Challenge (DRC), where robots performed complex locomotion and manipulation tasks (DARPA Robotics Challenge, 2015). The DRC has shown that humanoids should be capable of operating machinery originally designed for humans. The DRC utility car driving task is a good illustration of the complexity of such tasks.
Worldwide, to have the right to drive a vehicle, one must obtain a license, which requires months of practice followed by an examination. To make a robot drive in similar conditions, its perception and control algorithms should reproduce human driving skills.
If the vehicle can neither be customized nor automated, it is more convenient to think of a robot in terms of anthropomorphic design. A driving robot must have motion capabilities for operations such as: reaching the vehicle, entering it, sitting in a stable posture, controlling its commands (e.g., ignition, steering wheel, pedals), and finally egressing it. All these skills can be seen as action templates, to be tailored to each vehicle and robot, and, more importantly, to be properly combined and sequenced to achieve driving tasks.
Noticeable research is currently made, to automate the driving operation of unmanned vehicles, with the ultimate goal of reproducing the tasks usually performed by human drivers [START_REF] Nunes | Guest editorial introducing perception, planning, and navigation for intelligent vehicles[END_REF][START_REF] Liu | Vision-based real-time lane marking detection and tracking[END_REF][START_REF] Hentschel | Autonomous robot navigation based on open street map geodata[END_REF], by relying on visual sensors [START_REF] Newman | Navigating, recognizing and describing urban spaces with vision and lasers[END_REF][START_REF] Broggi | Sensing requirements for a 13,000 km intercontinental autonomous drive[END_REF][START_REF] Cherubini | Autonomous visual navigation and laser-based moving obstacle avoidance[END_REF]. The success of the DARPA Urban Challenges [START_REF] Buehler | Special issue on the 2007 DARPA Urban Challenge, part I-III[END_REF][START_REF] Thrun | Stanley: The robot that won the DARPA Grand Challenge[END_REF], and the impressive demonstrations made by Google (Google, 2015), have heightened expectations that autonomous cars will very soon be able to operate in urban environments. Considering this, why bother making a robot drive a car, if the car can make its way without a robot? Although both approaches are not exclusive, this is certainly a legitimate question.
One possible answer springs from the complexity of autonomous cars, which host a distributed robot, with various sensors and actuators controlling the different tasks. With a centralized robot, such embedded devices can be removed from the car. The reader may also wonder when a centralized robot should be preferred to a distributed one, i.e., a fully automated car.
We answer this question through concrete application examples. In the DRC [START_REF] Pratt | The DARPA Robotics Challenges[END_REF], one of the eight tasks that the robots must accomplish is driving a utility vehicle. The reason is that in disaster situations, the intervention robot must operate vehicles -usually driven by humans -to transport tools, debris, etc. Once the vehicle reaches the intervention area, the robot should execute other tasks (e.g., turning a valve, operating a drill). Without a humanoid, these tasks can hardly be achieved by a single system. Moreover, the robot should operate cranks or other tools attached to the vehicle [START_REF] Hasunuma | A tele-operated humanoid robot drives a backhoe[END_REF][START_REF] Yokoi | A tele-operated humanoid operator[END_REF]. A second demand comes from the car manufacturing industry [START_REF] Hirata | Fuel consumption in a driving test cycle by robotic driver considering system dynamics[END_REF]. In fact, current crash-test dummies are passive and non-actuated. Instead, in crash situations, real humans perform protective motions and stiffen their body, all behaviors that are programmable on humanoid robots. Therefore, robotic crash-test dummies would be more realistic in reproducing typical human behaviors.
These applications, along with the DRC itself and with the related algorithmic questions, motivate the interest in developing a robot driver. However, this requires the solution of an unprecedented "humanoid-in-the-loop" control problem. In our work, we successfully address this, and demonstrate the capability of a humanoid robot to drive a real car. This work is based on preliminary results carried out with the HRP-4 robot, driving a simulated car [START_REF] Paolillo | Toward autonomous car driving by a humanoid robot: A sensor-based framework[END_REF]. Here, we add new features to that framework, and present experiments with the humanoid HRP-2Kai driving a real car outdoors on an unknown road.
The proposed framework presents the following main features:
• car steering control, to keep the car at a defined center of the road;
• car velocity control, to drive the car at a desired speed;
• admittance control, to ensure safe manipulation of the steering wheel;
• three different driving strategies, allowing intervention or supervision of a human operator, in a smooth shared autonomy manner.
The modularity of the approach allows each of the modules that compose the framework to be easily enabled or disabled. Furthermore, to achieve the driving task, we propose to use only standard sensors for a common full-size humanoid robot, i.e., a monocular camera mounted on the head of the robot, the Inertial Measurement Unit (IMU) in the chest, and the force sensors at the wrists. Finally, since the approach is purely reactive, it does not need any a priori knowledge of the environment. As a result, the framework allows the robot -under certain assumptions -to drive along a previously unknown road.
The paper organization reflects the schematic description of the approach given in the next Sect. 2, at the end of which we also provide a short description of the paper sections.
Problem formulation and proposed approach
The objective of this work is to enable a humanoid robot to autonomously drive a car at the center of an unknown road, at a desired velocity. More specifically, we focus on the driving task and, therefore, consider the robot sitting in the car, already in a correct driving posture.
Most of the existing approaches have achieved this goal by relying on teleoperation (DRC-Teams, 2015;[START_REF] Kim | Approach of team SNU to the DARPA Robotics Challenge finals[END_REF][START_REF] Mcgill | Team THOR's adaptive autonomy for disaster response humanoids[END_REF]. Atkeson and colleagues [START_REF] Atkeson | NO RESETS: Reliable humanoid behavior in the DARPA Robotics Challenge[END_REF] propose a hybrid solution, with teleoperated steering and autonomous speed control. The velocity of the car, estimated with stereo cameras, is fed back to a PI controller, while LIDAR, IMU and visual odometry data support the operator during the steering procedures.
In [START_REF] Kumagai | Achievement of recognition guided teleoperation driving system for humanoid robots with vehicle path estimation[END_REF], the gas pedal is teleoperated and a local planner, using robot kinematics for vehicle path estimation, and point cloud data for obstacle detection, enables autonomous steering. An impedance system is used to ensure safe manipulation of the steering wheel.
Other researchers have proposed fully autonomous solutions. For instance, in [START_REF] Jeong | Control strategies for a humanoid robot to drive and then egress a utility vehicle for remote approach[END_REF], autonomous robot driving is achieved by following the proper trajectory among obstacles, detected with laser measurements. LIDAR scans are used in [START_REF] Rasmussen | Perception and control strategies for driving utility vehicles with a humanoid robot[END_REF] to plan a path for the car, while the velocity is estimated with a visual odometry module.
The operation of the steering wheel and gas pedal is realized with simple controllers.
We propose a reactive approach for autonomous driving that relies solely on standard humanoid sensor equipment, thus making it independent of the vehicle's sensing capabilities, and that does not require expensive data processing for building local representations of the environment and planning safe paths. In particular, we use data from the robot on-board camera and IMU to close the autonomous driving feedback loop. The force measured at the robot wrists is exploited to operate the car steering wheel.
In designing the proposed solution, some simplifying assumptions have been introduced, to capture the conceptual structure of the problem, without losing generality:
1. The car brake and clutch pedals are not considered, and the driving speed is assumed to be positive and independently controlled through the gas pedal. Hence, the steering wheel and the gas pedals are the only vehicle controls used by the robot for driving.
2. The robot is already in its driving posture on the seat, with one hand on the steering wheel, the foot on the pedal, and the camera pointing the road, with focal axis aligned with the car sagittal plane. The hand grasping configuration is unchanged during operation.
3. The road is assumed to be locally flat, horizontal, straight, and delimited by parallel borders1 . Although global convergence can be proved only for straight roads, turns with admissible curvature bounds are also feasible, as shown in the Experimental section. Instead, crossings, traffic lights, and pedestrians are not negotiated, and road signs are not interpreted.
Given these assumptions, we propose the control architecture in Fig. 1. The robot sits in the car, with its camera pointing to the road. The acquired images and IMU data are used by two branches of the framework running in parallel: car steering and velocity control. These are described below.
The car steering algorithm guarantees that the car is maintained at the center of the road.
To this end, the IMU is used to get the camera orientation with respect to the road, while an image processing algorithm detects the road borders (road detection). These borders are used to compute the visual features feeding the steering control block. Finally, the computed steering wheel reference angle is transformed by the wheel operation block into a desired trajectory for the robot hand that is operating the steering wheel. This trajectory can be adjusted by an admittance system, depending on the force exchanged between the robot hand and the steering wheel.
The car velocity control branch aims at making the car progress at a desired speed, through the gas pedal operation by the robot foot. A Kalman Filter (KF) fuses visual and inertial data to estimate the velocity of the vehicle (car velocity estimation) sent as feedback to the car velocity control, which provides the gas pedal reference angle for obtaining the desired velocity. The pedal operation block transforms this signal into a reference for the robot foot.
Finally, the reference trajectories for the hand and the foot respectively operating the steering wheel and the pedal, are converted into robot postural tasks, by the task-based quadratic programming controller.
The driving framework, as described above, allows a humanoid robot to autonomously drive a car along an unknown road, at a desired velocity. We further extend the versatility of our framework by implementing three different "driving modes", in order to ease human supervision and intervention if needed:
• Autonomous. Car steering and velocity control are both enabled, as indicated above, and the robot autonomously drives the car without any human aid. • Assisted. A human takes care of the road detection, by manually selecting the visual features (road borders), and of the car velocity estimation and control, by teleoperating the robot ankle. The selected features are then used by the steering controller to compute the robot arm command.
• Teleoperated. Both the robot hand and foot are teleoperated, for steering wheel and gas pedal operation, respectively. The reference signals are sent to the task-based quadratic programming control through a keyboard or joystick. The human uses the robot camera images as visual feedback for driving.
For each of the driving modes, the car steering and velocity controllers are enabled or disabled, as described in Table 1. The human user/supervisor can intervene at any moment during the execution of the driving task, to select one of the three driving modes. The selection, as well as the switching between modes, is done by pressing the appropriate joystick (or keyboard) buttons.
The framework has a modular structure, as presented in Fig. 1. In the following Sections, we detail the primitive functionalities required by the autonomous mode, since the assisted and teleoperation modes use a subset of such functionalities.
The rest of the paper is organized as follows. Section 3 describes the model used for the car-robot system. Then, the main components of the proposed framework are detailed. Sect. 4 presents the perception part, i.e., the algorithms used to detect the road and to estimate the car velocity. Section 5 deals with car control, i.e., how the feedback signals are transformed into references for the steering wheel and for the gas pedal, while Sect. 6 focuses on humanoid control, i.e., on the computation of the commands for the robot hand and foot. The experiments carried out with HRP-2Kai are presented in Sect. 7. Finally, Sect. 8 concludes the paper and outlines future research perspectives.
Modelling
The design of the steering controller is based on the car kinematic model. This is a reasonable choice since, for nonholonomic systems, it is possible to cancel the dynamic parameters via feedback, and to solve the control problem at the velocity level, provided that the velocity issued by the controller is differentiable [START_REF] De Luca | Kinematics and Dynamics of Multi-Body Systems[END_REF]. To recover the dynamic system control input, it is however necessary to know the exact dynamic model, which is in general not available. Although some approximations are therefore necessary, these do not affect the controller in the considered scenario (low accelerations, flat and horizontal road).
On-line car dynamic parameter identification could be envisaged, and seamlessly integrated in our framework, whenever the above assumptions are not valid. Note, however, that the proposed kinematic controller would remain valid, since it captures the theoretic challenge of driving in the presence of nonholonomic constraints.
To derive the car control model, consider the reference frame F w placed on the car rear axle midpoint W , with the y-axis pointing forward, the z-axis upward and the x-axis completing the right handed frame (see Fig. 2a). The path to be followed is defined as the set of points that maximize the distance from both the left and right road borders. On this path, we consider a tangent Frenet Frame F p , with origin on the normal projection of W on the path. Then, the car configuration with respect to the path is defined by x, the Cartesian abscissa of W in F p , and by θ, the car orientation with respect to the path tangent (see Fig. 2b).
Describing the car motion through the model of a unicycle, with an upper curvature bound c M ∈ R + , x and θ evolve according to:
$$\dot{x} = v\sin\theta, \qquad \dot{\theta} = \omega, \qquad \left|\frac{\omega}{v}\right| < c_M, \quad (1)$$
where v and ω represent respectively the linear and angular velocity of the unicycle. The front wheel orientation φ can be approximately related to v and ω through:
$$\phi = \arctan\left(\frac{\omega l}{v}\right), \quad (2)$$
with l the constant distance between the rear and front wheel axes2 . The parameters r, the radius of the wheel, and β, characterizing the grasp configuration, are also shown here.
Note that a complete car-like model could have been used, for control design purposes, by considering the front wheels orientation derivative as the control input. The unicycle stabilizing controller adopted in this paper can in fact be easily extended to include the dynamics of the front wheels orientation, for example through backstepping techniques. However, in this case, a feedback from wheel orientation would have been required by the controller, but is, generally, not available. A far more practical solution is to neglect the front wheels orientation dynamics, usually faster than that of the car, and consider a static relationship between the front wheels orientation and the car angular velocity. This will only require a rough guess on the value of the parameter l, since the developed controller shows some robustness with respect to model parameters uncertainties as will be shown in Sect. 5.
The steering wheel is shown in Fig. 3, where we indicate, respectively with F h and F s , the hand and steering wheel reference frames. The origin of F s is placed at the center of the wheel, and α is the rotation around its z-axis, that points upward. Thus, positive values of α make the car turn left (i.e., lead to negative ω).
Neglecting the dynamics of the steering mechanism [START_REF] Mohellebi | Adaptive haptic feedback steering wheel for driving simulators[END_REF], assuming the front wheels orientation φ to be proportional to the steering wheel angle α, controlled by the driver hands, and finally assuming small angles ωl/v in (2), leads to:
$$\alpha = k_\alpha\,\frac{\omega}{v}, \quad (3)$$
with k α a negative 3 scalar, characteristic of the car, accounting also for l.
The gas pedal is modeled by its inclination angle ζ, that yields a given car acceleration a = dv/dt. According to experimental observations, at low velocities, the relationship between the pedal inclination and the car acceleration is linear:
$$\zeta = k_\zeta\, a. \quad (4)$$
The pedal is actuated by the motion of the robot foot, that is pushing it (see Fig. 4a).
ture constraint in (1).
3 Because of the chosen angular conventions. Assuming small values of ∆q a and ∆ζ, the point of contact between the foot and the pedal can be considered fixed on both the foot and the pedal, i.e., the length of the segment C 2 C 3 in Fig. 4b can be considered close to zero4 . Hence, the relationship between ∆q a and ∆ζ is easily found to be
$$\Delta\zeta = \frac{l_a}{l_p}\,\Delta q_a, \quad (5)$$
where l a (l p ) is the distance of the ankle (pedal) rotation axis from the contact point of the foot with the pedal.
The robot body reference frame $F_b$ is placed on the robot chest, with x-axis pointing forward and z-axis upward. Both the accelerations measured by the IMU and the humanoid tasks are expressed in this frame. We also indicate with $F_c$ the robot camera frame (see Fig. 2). Its origin is in the optical center of the camera, with z-axis coincident with the focal axis. The y-axis points downwards, and the x-axis completes the right-handed frame. $F_c$ is tilted by an angle $\gamma$ (taken positive downwards) with respect to the frame $F_w$, whereas the vector $p^w_c = (x^w_c, y^w_c, z^w_c)^T$ indicates the position of the camera frame expressed in the car reference frame. Now, the driving task can be formulated. It consists in leading the car onto the path, and aligning it with the path tangent:
(x, θ) → (0, 0) , (6)
while driving at a desired velocity:
v → v * . (7)
Task (6) is achieved by the steering control that uses the kinematic model (1), and is realized by the robot hand acting on the steering wheel according to the steering angle $\alpha$. Concurrently, (7) is achieved by the car velocity control, realized by the robot foot operating the gas pedal.
Perception
The block diagram of Fig. 1 shows our perception-action approach. At a higher level, the perception block, whose details are described in this Section, provides the feedback signals for the car and robot control.
Road detection
This Section describes the procedure used to derive the road visual features, required to control the steering wheel. These visual features are: (i) the vanishing point (V ), i.e., the intersection of the two borders, and (ii) the middle point (M ), i.e., the midpoint of the segment connecting the intersections of the borders with the image horizontal axis. Both are shown in Fig. 5.
Hence, road detection consists of extracting the road borders from the robot camera images. After this operation, deriving the vanishing and middle point is trivial. Since the focus of this work is not to advance the state-of-the-art on road/lane detection, but rather to propose a control architecture for humanoid car driving, we develop a simple image processing algorithm for road border extraction. More complex algorithms can be used to improve the detection and tracking of the road [START_REF] Liu | Vision-based real-time lane marking detection and tracking[END_REF][START_REF] Lim | Real-time implementation of vision-based lane detection and tracking[END_REF][START_REF] Meuter | A novel approach to lane detection and tracking[END_REF][START_REF] Nieto | Real-time lane tracking using Rao-Blackwellized particle filter[END_REF], or even to detect road markings [START_REF] Vacek | Road-marking analysis for autonomous vehicle guidance[END_REF]. However, our method has the advantage of being based solely on vision, avoiding the complexity induced by integration of other sensors [START_REF] Dahlkamp | Selfsupervised monocular road detection in desert terrain[END_REF][START_REF] Ma | Simultaneous detection of lane and pavement boundaries using model-based multisensor fusion[END_REF]. Note that more advanced software is owned by car industries, and therefore hard to find in open-code source or binary.
Part of the road borders extraction procedure follows standard techniques used in the field of computer vision [START_REF] Laganière | OpenCV 2 Computer Vision Application Programming Cookbook: Over 50 recipes to master this library of programming functions for real-time computer vision[END_REF] and is based on the OpenCV library [START_REF] Bradski | The OpenCV library[END_REF] that provides ready-to-use methods for our vision-based algorithm. More in detail, the steps used for the detection of the road borders on the currently acquired image are described below, with reference to Fig. 6.
• From the image, a Region Of Interest (ROI), shown with white borders in Fig. 6a, is manually selected at the initialization, and kept constant during the driving experiment. Then, at each cycle of the image processing, we compute the average and standard deviation of hue and saturation channels of the HSV (Hue, Saturation and Value) color space on two central rectangular areas in the ROI. These values are considered for the thresholding operations described in the next step.
• Two binary images (Fig. 6b and 6c) are obtained by discerning the pixels in the ROI whose hue and saturation values are in the ranges (average ± standard deviation) defined in the previous step. This operation allows the road to be detected while being adaptive to color variations. The HSV value channel is not considered, in order to be robust to luminosity changes.
• To remove "salt and pepper noise", the dilation and erosion operators are applied to the binary images. Then, the two images are merged by using the OR logic operator to obtain a mask of the road (Fig. 6d).
• The convex hull of the areas greater than a given threshold is computed on the mask found in the previous step; then, a Gaussian filter is applied for smoothing. The result is shown in Fig. 6e.
• The Canny edge detector (Fig. 6f), followed by Hough transform (Fig. 6g) are applied, to detect the line segments on the image.
• Similar segments are merged5 , as depicted in Fig. 6h.
This procedure gives two lines corresponding to the image projection of the road borders. However, in real working conditions, it may happen that one or both of the borders are not detectable, because of noise in the image or failures in the detection process. For this reason, we added a recovery strategy, as well as a tracking procedure, to the pipeline. The recovery strategy consists in substituting the borders that are not detected with artificial ones, defined offline as oblique lines that, according to the geometry of the road and to the configuration of the camera, most likely correspond to the road borders. This allows the computation of the vanishing and middle point even when one (or both) real road borders are not correctly detected. On the other hand, the tracking procedure gives continuity and robustness to the detection process, by taking into account the borders detected in the previous image. It consists of a simple KF, with state composed of the slope and intercept of the two borders⁶.
From the obtained road borders (shown in red in Fig. 6a), the vanishing and middle point are derived, with simple geometrical computations. Their values are then smoothed with a low-pass frequency filter, and finally fed to the steering control, that will be described in Sect. 5.1.
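A minimal OpenCV sketch of the color-thresholding and line-extraction stage described above is given below. It is a simplification (a single sampling patch instead of two, no convex hull step, placeholder thresholds), and all function and variable names are illustrative.

```python
import cv2
import numpy as np

def detect_road_border_segments(bgr, roi):
    """Return candidate border segments (x1, y1, x2, y2) in ROI coordinates."""
    x, y, w, h = roi
    hsv = cv2.cvtColor(bgr[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    # Hue/saturation statistics on a central patch assumed to belong to the road.
    patch = hsv[h//3:2*h//3, w//3:2*w//3].reshape(-1, 3).astype(np.float32)
    mean, std = patch.mean(axis=0), patch.std(axis=0)
    lo = np.clip([mean[0]-std[0], mean[1]-std[1], 0], 0, 255).astype(np.uint8)
    hi = np.clip([mean[0]+std[0], mean[1]+std[1], 255], 0, 255).astype(np.uint8)
    mask = cv2.inRange(hsv, lo, hi)
    # Morphological clean-up, smoothing, edges and Hough lines.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(cv2.dilate(mask, kernel), kernel)
    mask = cv2.GaussianBlur(mask, (9, 9), 0)
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi/180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```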
Car velocity estimation
To keep the proposed framework independent of the car characteristics, we propose to estimate the car speed $v$ by using only the robot sensors, avoiding information coming from the car equipment, such as GPS or speedometer. To this end, we use the robot camera to measure the optical flow, i.e., the apparent motion of selected visual features, due to the relative motion between camera and scene.
The literature in the field of autonomous car control provides numerous methods for estimating the car speed by means of optical flow [START_REF] Giachetti | The use of optical flow for road navigation[END_REF][START_REF] Barbosa | Velocity estimation of a mobile mapping vehicle using filtered monocular optical flow[END_REF]. To improve the velocity estimate, the optical flow can be fused with inertial measurements, as done in the case of aerial robots, in [START_REF] Grabe | On-board velocity estimation and closedloop control of a quadrotor UAV based on optical flow[END_REF]. Inspired by that approach, we design a KF, fusing the acceleration measured by the robot IMU and the velocity measured with optical flow.
Considering the linear velocity and acceleration along the forward car axis y w as state ξ = (v a) T of the KF, we use a simple discrete-time stochastic model to describe the car motion:
$$\xi_{k+1} = \begin{pmatrix} 1 & \Delta T \\ 0 & 1 \end{pmatrix}\xi_k + n_k, \quad (8)$$
with ∆T the sampling time, and n k the zero-mean white gaussian noise. The corresponding output of the KF is modeled as:
η k = ξ k + m k , (9)
where m k indicates the zero-mean white gaussian noise associated to the measurement process. The state estimate is corrected, thanks to the computation of the residual, i.e., the difference between measured and predicted outputs. The measurement is based on both the optical flow (v OF ), and the output of the IMU accelerometers (a IM U ). Then, the estimation of the car velocity v will correspond to the first element of state vector ξ. The process to obtain v OF and a IM U is detailed below.
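A possible discrete-time implementation of this filter is sketched below; the noise covariances are placeholders to be tuned, and the class and variable names are ours.

```python
import numpy as np

class CarVelocityKF:
    """Kalman filter for the state xi = [v, a], fusing optical-flow speed and IMU acceleration.

    The prediction model is Eq. (8) and both measurements observe the state directly, Eq. (9).
    """
    def __init__(self, dt, q=1e-3, r_v=0.5, r_a=0.2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition of Eq. (8)
        self.H = np.eye(2)                           # measurement model of Eq. (9)
        self.Q = q * np.eye(2)                       # process noise covariance (placeholder)
        self.R = np.diag([r_v, r_a])                 # measurement noise covariance (placeholder)
        self.x, self.P = np.zeros(2), np.eye(2)

    def step(self, v_of, a_imu):
        # Prediction.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correction using the residual between measurement and prediction.
        z = np.array([v_of, a_imu])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                             # estimated car speed v
```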
Measure of the car speed with optical flow
To measure the car velocity v OF in the KF, we use optical flow. Optical flow can be used to reconstruct the motion of the camera, and from that, assuming that the transformation from the robot camera frame to the car frame is known, it is straightforward to derive the vehicle velocity.
More in detail, the 6D velocity vector v c of the frame F c can be related to the velocity of the point tracked in the image ẋp through the following relation:
ẋp = Lv c , (10)
where the interaction matrix L is expressed as follows [START_REF] Chaumette | Visual servo control, Part I: Basic approaches[END_REF]:
$$L = \begin{pmatrix}
-\frac{S_x}{z_g} & 0 & \frac{x_p}{z_g} & \frac{x_p y_p}{S_y} & -\left(S_x + \frac{x_p^2}{S_x}\right) & \frac{S_x}{S_y}\,y_p \\[4pt]
0 & -\frac{S_y}{z_g} & \frac{y_p}{z_g} & S_y + \frac{y_p^2}{S_y} & -\frac{x_p y_p}{S_x} & -\frac{S_y}{S_x}\,x_p
\end{pmatrix} \quad (11)$$
Here, (x p , y p ) are the image coordinates (in pixels) of the point on the ground, expressed as (x g , y g , z g ) in the camera frame (see Fig. 7). Furthermore, it is S x,y = f α x,y , where f is the camera focal length and α x /α y the pixel aspect ratio. In the computation of L, we consider that the image principal point coincides with the image center. As shown in Fig. 7b, the point depth z g can be reconstructed through the image point ordinate y p and the camera configuration (tilt angle γ and height z w c ):
$$z_g = \frac{z^w_c\,\cos\epsilon}{\sin(\gamma+\epsilon)}, \qquad \epsilon = \arctan\frac{y_p}{S_y}. \quad (12)$$
Actually, the camera velocity $v_c$ is computed by taking into account $n$ tracked points, i.e., in (10), we consider respectively $\bar{L} = (L_1 \cdots L_n)^T$ and $\bar{\dot{x}}_p = (\dot{x}_{p,1} \cdots \dot{x}_{p,n})^T$, instead of $L$ and $\dot{x}_p$. Then, $v_c$ is obtained by solving a least-squares problem⁷:
$$v_c = \arg\min_\chi \|\bar{L}\chi - \bar{\dot{x}}_p\|^2. \quad (13)$$
The reconstruction of $\bar{\dot{x}}_p$ in (13) is based on the computation of the optical flow. However, during the navigation of the car, the vibration of the engine, poorly textured views and other unmodeled effects add noise to the measurement process [START_REF] Giachetti | The use of optical flow for road navigation[END_REF]. Furthermore, other factors, such as variable light conditions, shadows, and repetitive textures, can jeopardize feature tracking. Therefore, raw optical flow, as provided by off-the-shelf algorithms -e.g., from the OpenCV library [START_REF] Bradski | The OpenCV library[END_REF] -gives noisy data that are insufficient for accurate velocity estimation; filtering and outlier rejection techniques must therefore be added.
Since roads are generally poor in features, we use a dense optical flow algorithm, which differs from sparse algorithms in that it computes the apparent motion of all the pixels of the image plane. Then, we filter the dense optical flow, first according to geometric rationales, and then with an outlier rejection method [START_REF] Barbosa | Velocity estimation of a mobile mapping vehicle using filtered monocular optical flow[END_REF]. The whole procedure is described below, step-by-step (a code sketch follows the list):
• Take two consecutive images from the robot on-board camera.
• Consider only the pixels in a ROI that includes the area of the image plane corresponding to the road. This ROI is kept constant along all the experiment and, thus, identical for the two consecutive frames.
• Covert the frames to gray scale, apply a Gaussian filter, and equalize with respect to the histogram. This operation reduces the measurement noise, and robustifies the method with respect to light changes.
• Compute the dense optical flow, using the Farnebäck algorithm [START_REF] Farnebäck | chapter Two-Frame Motion Estimation Based on Polynomial Expansion[END_REF] implemented in OpenCV.
• Since the car is supposed to move forward, in the dense optical flow vector, consider only those elements pointing downwards on the image plane, and discard those not having a significant centrifugal motion from the principal point. Furthermore, consider only contributions with length between an upper and a lower threshold, and whose origin is on an image edge (detected applying Canny operator).
• Reject the outliers, i.e., the contributions $(\dot{x}_{p,i}, \dot{y}_{p,i})$, $i\in\{1,\ldots,n\}$, such that $\dot{x}_{p,i} \notin [\bar{x}_p \pm \sigma_x]$ and $\dot{y}_{p,i} \notin [\bar{y}_p \pm \sigma_y]$, where $\bar{x}_p$ ($\bar{y}_p$) and $\sigma_x$ ($\sigma_y$) are the average and standard deviation of the horizontal (vertical) optical flow contributions. This operation is made separately for the contributions of the right and left side of the image, where the module and the direction of the optical flow vectors can be quite different (e.g., on turns).
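The sketch below illustrates the dense-flow computation and a simplified version of the filtering steps (the centrifugal-direction and edge checks are omitted, and the magnitude thresholds are placeholders).

```python
import cv2
import numpy as np

def filtered_dense_flow(prev_gray, gray, roi):
    """Farneback dense flow on the road ROI, kept only for plausible forward-motion vectors."""
    x, y, w, h = roi
    p = cv2.equalizeHist(cv2.GaussianBlur(prev_gray[y:y+h, x:x+w], (5, 5), 0))
    c = cv2.equalizeHist(cv2.GaussianBlur(gray[y:y+h, x:x+w], (5, 5), 0))
    flow = cv2.calcOpticalFlowFarneback(p, c, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.hypot(dx, dy)
    # Geometric filtering: downward-pointing vectors with bounded magnitude.
    keep = (dy > 0) & (mag > 0.5) & (mag < 30.0)
    dx, dy = dx[keep], dy[keep]
    if dx.size == 0:
        return dx, dy
    # Statistical outlier rejection around mean +/- one standard deviation.
    ok = (np.abs(dx - dx.mean()) < dx.std()) & (np.abs(dy - dy.mean()) < dy.std())
    return dx[ok], dy[ok]
```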
The final output of this procedure, $\bar{\dot{x}}_p$, is fed to (13) to obtain $v_c$, which is then low-pass filtered. To transform the velocity $v_c$ in frame $F_c$, obtained from (13), into the velocity $v_w$ in the car frame $F_w$, we apply:
v w = W w c v c , (14)
with W w c the twist transformation matrix
$$W^w_c = \begin{pmatrix} R^w_c & S^w_c R^w_c \\ 0_{3\times 3} & R^w_c \end{pmatrix}, \quad (15)$$
R w c the rotation matrix from car to camera frame, and S w c the skew symmetric matrix associated to the position p w c of the origin of F c in F w .
Finally, the speed of the car is set as the y-component of v w : v OF = v w,y . This will constitute the first component of the KF measurement vector.
Measure of the car acceleration with robot accelerometers
The IMU mounted on-board the humanoid robot is used to measure acceleration, in order to improve the car velocity estimation through the KF. In particular, given the raw accelerometer data, we first compensate the gravity component, with a calibration executed at the beginning of each experiment8 . This gives a b , the 3D robot acceleration, expressed in the robot frame F b . Then, we transform a b in the car frame F w , to obtain:
a w = R w b a b , (16)
where R w b is the rotation matrix relative to the robot body -vehicle transformation. Finally, a IM U is obtained by selecting the y-component of a w . This will constitute the second component of the KF measurement vector.
Car control
The objective of car control is (i) to drive the rear wheel axis center W along the curvilinear path that is equally distant from the left and right road borders (see Fig. 2b), while aligning the car with the tangent to this path, and (ii) to track desired vehicle velocity v * . Basically, car control consists in achieving tasks ( 6) and ( 7), with the steering and car velocity controllers described in the following subsections.
Steering control
Given the visual features extracted from the images of the robot on-board camera, the visionbased steering controller generates the car angular velocity input ω to regulate both x and θ to zero. This reference input is eventually translated in motion commands for the robot hands.
The controller is based on the algorithm introduced by [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF] for unicycle corridor following, and recently extended to the navigation of humanoids in environments with corridors connected through curves and T-junctions [START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF]. In view of Assumption 3 in Sect. 2, the same algorithm can be applied here. For the sake of completeness, in the following, we briefly recall the derivation of the features model (that can be found, for example, also in [START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF]) and the control law originally presented by [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF]. In doing so, we illustrate the adaptations needed to deal with the specificity of our problem.
The projection matrix transforming the homogeneous coordinates of a point, expressed in F p , to its homogeneous coordinates in the image, is:
$$P = K\, T^c_w\, T^w_p, \quad (17)$$
where K is the camera calibration matrix [START_REF] Ma | An Invitation to 3-D Vision: From Images to Geometric Models[END_REF], T c w the transformation from the car frame F w to F c , and T w p from the path frame F p to F w .
As intuitive from Fig. 2, the projection matrix depends on both the car coordinates, and the camera intrinsic and extrinsic parameters. Here, we assume that the camera principal point coincides with the image center, and we neglect image distortion. Furthermore, P has been computed neglecting the z-coordinates of the features, since they do not affect the control task. Under these assumptions, using P , the abscissas of the vanishing and middle point, respectively denoted by x v and x m , can be expressed as [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF][START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF]:
x_v = k_1 \tan θ ,
x_m = k_2 \, x / c_θ + k_3 \tan θ + k_4 ,     (18)

where

k_1 = -S_x / c_γ ,
k_2 = -S_x s_γ / z^w_c ,
k_3 = -S_x c_γ - S_x s_γ \, y^w_c / z^w_c ,
k_4 = -S_x s_γ \, x^w_c / z^w_c .
We denote cos(*) and sin(*) with c_* and s_*, respectively. Note that, with respect to the visual features model in [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF][START_REF] Vassallo | Visual servoing and appearance for navigation[END_REF], the expression of the middle point changes, due to the introduction of the lateral and longitudinal displacements, x^w_c and y^w_c respectively, of the camera frame with respect to the car frame. As a consequence, to regulate the car position to the road center, we must define a new visual feature \tilde{x}_m = x_m - k_4. Then, the navigation task (6) is equivalent to the following visual task:

(\tilde{x}_m , x_v) → (0, 0) .     (19)

In fact, according to (18), asymptotic convergence of x_v and \tilde{x}_m to zero implies convergence of x and θ to zero, achieving the desired path following task.
Feedback stabilization of the dynamics of \tilde{x}_m is given by the following angular velocity controller [START_REF] Toibero | Switching visual servoing approach for stable corridor navigation[END_REF]:

ω = \frac{k_1}{k_1 k_3 + \tilde{x}_m x_v} \left( -\frac{k_2}{k_1} v x_v - k_p \tilde{x}_m \right) ,     (20)

with k_p a positive scalar gain. This controller guarantees asymptotic convergence of both \tilde{x}_m and x_v to zero, under the conditions that v > 0, and that k_2 and k_3 have the same sign, which is always true if (i) γ ∈ (0, π/2) and (ii) y^w_c > -z^w_c / tan γ, two conditions always verified with the proposed setup.
Note that this controller has been obtained considering the assumption of parallel road borders. Nevertheless, this assumption can be easily relaxed since we showed in [START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF] that the presence of non-parallel borders does not jeopardize the controller's local convergence.
To realize the desired ω in (20), the steering wheel must be turned according to (3):
α = k_α \frac{k_1}{k_1 k_3 + \tilde{x}_m x_v} \left( -\frac{k_2}{k_1} x_v - k_p \frac{\tilde{x}_m}{v} \right) ,     (21)

where \tilde{x}_m and x_v are obtained by the image processing algorithm of Sect. 4.1, while the value of v is estimated through the velocity estimation module presented in Sect. 4.2.
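As an illustration of how (18)-(21) fit together, the short sketch below computes the steering command from the extracted features (Python/NumPy). The geometric parameters and gains take the values reported in Sect. 7; the helper name is ours, and the final division by v assumes the kinematic map between ω and α implied by (3) and (21).

import numpy as np

def steering_command(x_m, x_v, v, gamma=0.2145, S_x=535.0,
                     x_c=-0.4, y_c=1.0, z_c=1.5, k_p=3.0, k_alpha=-5.0):
    # Feature-model constants of eq. (18); (x_c, y_c, z_c) is the camera
    # position in the car frame and gamma its tilt angle (values of Sect. 7).
    k1 = -S_x / np.cos(gamma)
    k2 = -S_x * np.sin(gamma) / z_c
    k3 = -S_x * np.cos(gamma) - S_x * np.sin(gamma) * y_c / z_c
    k4 = -S_x * np.sin(gamma) * x_c / z_c
    xm_t = x_m - k4                      # shifted middle-point feature
    # Angular velocity controller of eq. (20) ...
    omega = k1 / (k1 * k3 + xm_t * x_v) * (-(k2 / k1) * v * x_v - k_p * xm_t)
    # ... mapped to the steering wheel angle of eq. (21).
    return k_alpha * omega / v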
Car velocity control
In view of the assumption of low acceleration, and by virtue of the linear relationship between the car acceleration and the pedal angle (eq. ( 4)), to track a desired car linear velocity v * we designed a PID feedback controller to compute the gas pedal command:
ζ = k_{v,p} e_v + k_{v,i} \int e_v \, dt + k_{v,d} \frac{d}{dt} e_v .     (22)

Here, e_v = (v^* - v) is the difference between the desired and current value of the velocity, as computed by the car velocity estimation block, while k_{v,p}, k_{v,i} and k_{v,d} are the positive proportional, integral and derivative gains, respectively. In the design of the velocity control law, we decided to insert an integral action to compensate for constant disturbances (like, e.g., the effect of a small road slope) at steady state. The derivative term helped achieve a damped control action. The desired velocity v^* is set constant here.
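A discrete-time sketch of the PID law (22) is given below (Python); the gains are those quoted in Sect. 7, while the function and state names are illustrative.

def gas_pedal_pid(v_des, v, state, dt, kp=1e-8, ki=2e-9, kd=3e-9):
    # state = (integral of e_v, previous e_v); returns (zeta, updated state).
    integ, e_prev = state
    e_v = v_des - v                  # velocity error e_v = v* - v
    integ += e_v * dt                # integral action (rejects constant disturbances)
    deriv = (e_v - e_prev) / dt      # derivative action (damps the response)
    zeta = kp * e_v + ki * integ + kd * deriv
    return zeta, (integ, e_v)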
Robot control
This section presents the lower level of our controller, which enables the humanoid robot to turn the driving wheel by α, and push the pedal by ζ.
Wheel operation
The reference steering angle α is converted to the reference pose of the hand grasping the wheel, through the rigid transformation
T^{b*}_h = T^b_s(α) \, T^s_h(r, β) .
Here, T b * h and T b s are the transformation matrices expressing respectively the poses of frames F h and F s in Fig. 3 with respect to F b in Fig. 2a. Constant matrix T s h expresses the pose of F h with respect to F s , and depends on the steering wheel radius r, and on the angle β parameterizing the hand position on the wheel.
For a safe interaction between the robot hand and the steering wheel, it is obvious to think of an admittance or impedance controller, rather than solely a force or position controller [START_REF] Hogan | Impedance control -An approach to manipulation. I -Theory. II -Implementation. III -Applications[END_REF]. We choose to use the following admittance scheme:
f -f * = M ∆ẍ + B∆ ẋ + K∆x, (23)
where f and f * are respectively the sensed and desired generalized interaction forces in F h ; M , B and K ∈ R 6×6 are respectively the mass, damping and stiffness diagonal matrices. As a consequence of the force f applied on F h , and on the base of the values of the admittance matrices, ( 23) generates variations of pose ∆x, velocity ∆ ẋ and acceleration ∆ẍ of F h with respect to F s . Thus, the solution of ( 23) leads to the vector ∆x that can be used to compute the transformation matrix ∆T , and to build up the new desired pose for the robot hands:
T^b_h = T^{b*}_h \, ∆T .     (24)
In cases where the admittance controller is not necessary, we simply set ∆T = I.
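For illustration, a single-axis, explicit-Euler sketch of the admittance law (23) is given below (Python); it integrates the mass-damper-spring dynamics driven by the force error to obtain the pose offset used in (24). Parameter names are generic, and this integration scheme is only one possible choice, not the actual implementation.

def admittance_step(f, f_des, dx, dxd, m, b, k, dt):
    # Solve (23) for the acceleration along one axis, then integrate once
    # for the velocity offset dxd and once for the pose offset dx.
    dxdd = (f - f_des - b * dxd - k * dx) / m
    dxd = dxd + dxdd * dt
    dx = dx + dxd * dt
    return dx, dxd

# Example with the compliant x-axis parameters reported in Sect. 7:
# dx, dxd = admittance_step(f=2.0, f_des=0.0, dx=0.0, dxd=0.0,
#                           m=2000.0, b=1600.0, k=20.0, dt=0.005)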
Pedal operation
Since there exists a linear relationship between the variation of the robot ankle and the variation of the gas pedal angle, to operate the gas pedal it is sufficient to move the ankle joint angle q a . From ( 22), we compute the command for the robot ankle's angle as:
q_a = \frac{ζ}{ζ_{max}} (q_{a,max} - q_{a,min}) + q_{a,min} .     (25)
Here, q a,max is the robot ankle configuration, at which the foot pushes the gas pedal, producing a significant car acceleration. Instead, at q a = q a,min , the foot is in contact with the pedal, but not yet pushing it. These values depend both on the car type, and on the position of the foot with respect to the gas pedal. A calibration procedure is run before starting driving, to identify the proper values of q a,min and q a,max . Finally, ζ max is set to avoid large accelerations, while saturating the control action.
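The mapping from the pedal command to an ankle reference is then a one-line affair; a sketch (Python), with the saturation at ζ_max mentioned above, reads:

def ankle_command(zeta, q_a_min, q_a_max, zeta_max):
    # Saturate the gas command, then map it linearly onto the calibrated
    # ankle range, following eq. (25).
    zeta = max(0.0, min(zeta, zeta_max))
    return zeta / zeta_max * (q_a_max - q_a_min) + q_a_min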
Humanoid task-based control
As shown above, wheel and pedal operation are realized respectively in the operational space (by defining a desired hand pose T^b_h) and in the articular space (via the desired ankle joint angle q_a). Both can be realized using our task-based quadratic programming (QP) controller, assessed in complex tasks such as ladder climbing [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF]. The joint angles and the desired hand pose are formulated as errors that appear among the sum of weighted least-squares terms in the QP cost function. Other intrinsic robot constraints are formulated as linear expressions of the QP variables, and appear in the constraints. The QP controller is solved at each control step. The QP variable vector x = (\ddot{q}^T, λ^T)^T gathers the joint accelerations \ddot{q} and the linearized friction cones' base weights λ, such that the contact forces f are equal to K_f λ (with K_f the discretized friction cone matrix). The desired acceleration \ddot{q} is integrated twice to feed the low-level built-in PD control of HRP-2Kai. The driving task with the QP controller writes as follows:

minimize over x:   \sum_{i=1}^{N} w_i \| E_i(q, \dot{q}, \ddot{q}) \|^2 + w_λ \| λ \|^2
subject to: 1) dynamic constraints; 2) sustained contact positions; 3) joint limits; 4) non-desired collision avoidance constraints; 5) self-collision avoidance constraints,
where w i and w λ are task weights or gains, and E i (q, q, q) is the error in the task space. Details on the QP constraints (since they are common to most tasks) can be found in [START_REF] Vaillant | Multi-contact vertical ladder climbing with an HRP-2 humanoid[END_REF].
Here, we detail the tasks used specifically during the driving (i.e. after the driving posture is reached). We use four (N = 4) set-point objective tasks; each task i is defined by its associated task error ε_i, so that

E_i = K^p_i ε_i + K^v_i \dot{ε}_i + \ddot{ε}_i .
The driving wheel of the car has been modeled as another 'robot' having one joint (rotation).
We then merged the model of the driving wheel to that of the humanoid and linked them, through a position and orientation constraint, so that the desired driving wheel steering angle α, as computed by ( 24), induces a motion on the robot (right arm) gripper. The task linking the humanoid robot to the driving wheel 'robot' is set as part of the QP constraints, along with all sustained contacts (e.g. buttock on the car seat, thighs, left foot).
The steering angle α (i.e. the posture of the driving wheel robot) is a set-point task (E 1 ). The robot whole-body posture including the right ankle joint control (pedal) is also a setpoint task (E 2 ), which realizes the angle q a provided by ( 25). Additional tasks were set to keep the gaze direction constant (E 3 ), and to fix the left arm, to avoid collisions with the car cockpit during the driving operation (E 4 ).
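To give a flavour of how these set-point tasks enter the cost, the schematic sketch below (Python/NumPy) assembles the weighted least-squares system in the joint accelerations. It deliberately omits the contact-force variables and all QP constraints (dynamics, contacts, joint limits, collisions), which in the real controller are handled by the whole-body QP solver; it is therefore only an unconstrained stand-in, with illustrative function names.

import numpy as np

def task_rows(w, J, Jdot_qdot, eps, eps_dot, kp):
    # One set-point task: drive E = kp*eps + 2*kp*eps_dot + eps_ddot to zero,
    # with eps_ddot = J qdd + Jdot_qdot and Kv = 2 Kp as in Table 2.
    kv = 2.0 * kp
    A = np.sqrt(w) * J
    b = -np.sqrt(w) * (kp * eps + kv * eps_dot + Jdot_qdot)
    return A, b

def solve_tasks(tasks):
    # tasks: list of (A, b) pairs from task_rows(); returns the joint
    # accelerations minimizing the stacked weighted least-squares cost.
    A = np.vstack([a for a, _ in tasks])
    b = np.concatenate([r for _, r in tasks])
    qdd, *_ = np.linalg.lstsq(A, b, rcond=None)
    return qdd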
Experimental results
We tested our driving framework with the full-size humanoid robot HRP-2Kai built by Kawada Industries. For the experiments, we used the Polaris Ranger XP900, the same utility vehicle employed at the DRC. HRP-2Kai has 32 degrees of freedom, is 1.71 m tall and weighs 65 kg. It is equipped with an Asus Xtion Pro 3D sensor, mounted on its head and used in this work as a monocular camera. The Xtion camera provides images at 30 Hz with a resolution of 640 × 480 pixels. From camera calibration, it results S x S y = 535 pixels. In the presented experiments, x w c = -0.4 m, y w c = 1 m and z w c = 1.5 m were manually measured. However, it would be possible to estimate the robot camera position, with respect to the car frame, by localization of the humanoid [START_REF] Oriolo | Humanoid odometric localization integrating kinematic, inertial and visual information[END_REF], or by using the geometric information of the car (that can be known, e.g., in the form of a CAD model, as shown in Fig. 2). HRP-2Kai is also equipped with an IMU (of rate 500 Hz) located in the chest. Accelerometer data have been merged with the optical flow to estimate the car linear velocity, as explained in Sect. 4.2. Furthermore, a built-in filter processes the IMU data to provide an accurate measurement of the robot chest orientation. This is kinematically propagated up to the Xtion sensor to get γ, the tilt angle of the camera with respect to the ground.
The task-based control is realized through the QP framework (see Sect. 6.3), which allows the robot to achieve different tasks concurrently. Table 2 gives the weights of the 4 set-point tasks described in Sect. 6.3; note that K^v_i = 2 × K^p_i. As for the gains in Section 5, we set k_{v,p} = 10^{-8}, k_{v,d} = 3 · 10^{-9} and k_{v,i} = 2 · 10^{-9} to track the desired car velocity v^*, whereas in the steering wheel controller we choose the gain k_p = 3, and we set the parameter k_α = -5. While the controller gains have been chosen as a tradeoff between reactivity and control effort, the parameter k_α was roughly estimated. Given the considered scenario, an exact knowledge of this parameter is generally not possible, since it depends on the car characteristics. It is however possible to show that, at the kinematic level, this kind of parameter uncertainty will induce a non-persistent perturbation on the nominal closed-loop dynamics.
Proving the boundedness of the perturbation term induced by parameter uncertainties would allow to conclude about the local asymptotic stability of the perturbed system. In general, this would imply a bound on the parameter uncertainty, to be satisfied to preserve local stability. While this analysis is beyond the scope of this paper, we note also that in practice it is not possible to limit the parameter uncertainty, that depends on the car and the environment characteristics. Therefore, we rely on the experimental verification of the visionbased controller robustness, delegating to the autonomous-assisted-teleoperated framework the task of taking the autonomous mode controller within its region of local asymptotic stability. In other words, when the system is too far from the equilibrium condition, and convergence of the vision-based controller could be compromised, due to model uncertainties and unexpected perturbations, the user can always resort to the other driving modes.
In the KF used for the car velocity estimation, the process and measurement noise covariance matrices are set to diag(10^{-4}, 10^{-4}) and diag(10^{2}, 10^{2}), respectively. Since the forward axis of the robot frame is aligned with the forward axis of the vehicle frame, to get a_IMU we did not apply the transformation (16), but simply collected the acceleration along the forward axis of the robot frame, as given by the accelerometers. The sampling time of the KF was set to ∆T = 0.002 s (500 Hz being the frequency of the IMU measurements). The cut-off frequencies of the low-pass filters applied to the visual features and the car velocity estimate were set to 8 and 2.5 Hz, respectively.
At the beginning of each campaign of experiments, we arrange the robot in the correct driving posture in the car as shown in Fig. 9a. This posture (except for the driving leg and arm) is assumed constant during driving: all control parameters are kept constant. At initialization, we also correct eventual bad orientations of the camera with respect to the ground plane, by applying a rotation to the acquired image, and by regulating the pitch and yaw angles of the robot neck, so as to align the focal axis with the forward axis of the car reference frame. The right foot is positioned on the gas pedal, and the calibration procedure described in Sect. 6.2 is used to obtain q a,max and q a,min .
To ease full and stable grasping of the steering wheel, we designed a handle, fixed to the wheel (visible in Fig. 9a), allowing the alignment of the wrist axis with that of the steer. With reference to Fig. 3, this corresponds to configuring the hand grasp with r = 0 and, to comply with the shape of the steering wheel, β = 0.57 rad. Due to the robot kinematic constraints, such as joint limits and auto-collisions avoidance, imposed by our driving configuration, the range of the steering angle α is restricted from approximately -2 rad to 3 rad. These limits cause bounds on the maximum curvature realizable by the car. Nevertheless, all of the followed paths were compatible with this constraint. For more challenging maneuvers, grasp reconfiguration should be integrated in the framework.
With this grasping setup, we achieved a good alignment between the robot hand and the steering wheel. Hence, during driving, the robot did not violate the geometrical constraints imposed by the steering wheel mechanism. In this case, the use of the admittance control for safe manipulation is not necessary. However, we showed in [START_REF] Paolillo | Toward autonomous car driving by a humanoid robot: A sensor-based framework[END_REF], that the admittance control can be easily plugged in our framework, whenever needed. In fact, in that work, an HRP-4, from Kawada Industries, turns the steering wheel with a more 'humanlike' grasp (r = 0.2 m and β = 1.05 rad, see Fig. 8a). Due to the characteristics of both the grasp and the HRP-4 hand, admittance control is necessary. For sake of completeness, we report, in Fig. 8b-8d, plots of the admittance behavior relative to that experiment. In particular, to have good tracking of the steering angle α, while complying with the steering wheel geometric constraint, we designed a fast (stiff) behavior along the z-axis of the hand frame, F h , and a slow (compliant) along the x and y-axes. To this end, we set the admittance parameters: m x = m y = 2000 kg, m z = 10 kg, b x = b y = 1600 kg/s, b z = 240 kg/s, and k x = k y = 20 kg/s 2 , k z = 1000 kg/s 2 . Furthermore, we set the desired forces f * x = f * z = 0 N, while along the y-axis of the hand frame f * y = 5 N, to improve the grasping stability. Note that the evolution of the displacements along the x and y-axes (plots in Fig. 8b-8c), are the results of a dynamic behavior that filters the high frequency of the input forces, while along the z-axis the response of the system is more reactive.
In the rest of this section, we present the HRP-2Kai outdoor driving experiments. In particular, we present the results of the experiments performed at the authorized portion of the AIST campus in Tsukuba, Japan. A top view of this experimental field is shown in Fig. 9b. The areas highlighted in red and yellow correspond to the paths driven using the autonomous and teleoperated mode, respectively, as further described below. Furthermore, we present an experiment performed at the DRC finals, showing the effectiveness of the assisted driving mode. For a quantitative evaluation of the approach, we present the plots of the variables of interest. The same experiments are shown in the video available at https://youtu.be/SYHI2JmJ-lk, which also allows a qualitative evaluation of the online image processing. Quantitatively, we successfully carried out 14 experiments out of 15 repetitions, executed at different times between 10:30 a.m. and 4 p.m., proving image processing robustness in different light conditions.
First experiment: autonomous car driving
In the first experiment, we tested the autonomous mode, i.e., the effectiveness of our framework to make a humanoid robot drive a car autonomously. For this experiment, we choose v * = 1.2 m/s, while the foot calibration procedure gave q a,max = -0.44 rad and q a,min = -0.5 rad.
Figure 10 shows eight snapshots taken from the video of the experiment. The car starts with an initial lateral offset, that is corrected after a few meters. The snapshots (as well as the video) of the experiment show that the car correctly travels at the center of a curved path, for about 100 m. Furthermore, one can observe that the differences in the light conditions (due to the tree shadows) and in the color of the road, do not jeopardize the correct detection of the borders and, consequently, the driving performance.
Figure 11 shows the plots related to the estimation of the car speed, as described in Sect. 4.2. On the top, we plot a IMU , the acceleration along the forward axis of the car, as reconstructed from the robot accelerometers. The center plot shows the car speed measured with the optical flow-based method (v OF ), whereas the bottom plot gives the trace of the car speed v obtained by fusing a IMU and v OF . Note that the KF reduces the noise of the v OF signal, a very important feature for keeping the derivative action in the velocity control law (22).
As well known, reconstruction from vision (e.g., the "structure from motion" problem) suffers from a scale problem, in the translation vector estimate [START_REF] Ma | An Invitation to 3-D Vision: From Images to Geometric Models[END_REF]. This issue, due to the loss of information in mapping 2D to 3D data, is also present in optical flow velocity estimation methods. Here, this can lead to a scaled estimate of the car velocity. For this reason, we decided to include another sensor information in the estimation process: the acceleration provided by the IMU. Note, however, that in the current state of the work, the velocity estimate accuracy has been only evaluated qualitatively. In fact, that high accuracy is only important in the transient phases (initial error recovery and curve negotiation). Instead, it can be easily shown that the perturbation induced by velocity estimate inaccuracy on the features dynamics vanishes at the regulation point corresponding to the desired driving task, and that by limiting the uncertainty on the velocity value, it is possible to preserve local stability. In fact, the driving performance showed that the estimation was accurate enough, for the considered scenario. In different conditions, finer tuning of the velocity estimator may be necessary.
Plots related to the steering wheel control are shown in Fig. 12a. The steering control is activated about 8 s after the start of the experiment and, after a transient time of a few seconds, it leads the car to the road center. Thus, the middle and vanishing points (the top and center plots, respectively) correctly converge to the desired values, i.e., x m goes to k 4 = 30 pixels (since γ = 0.2145 rad -see expression of k 4 in Sect. 5.1), and x v to 0. The bottom plot shows the trend of the desired steering command α, as computed from the visual features, and from the estimated car speed according to (21). The same signal, reconstructed from the encoders (black dashed line) shows that the steering command is smoothed by the task-based quadratic programming control, avoiding undesirable fast signal variations.
Fig. 12b presents the plots of the estimated vs desired car speed (top) and the ankle angle command sent to the robot to operate the gas pedal and drive the car at the desired velocity (bottom).
Also in this case, after the initial transient, the car speed converges to the nominal desired values (no ground truth was available). The oscillations observable at steady state are due to the fact that the resolution of the ankle joint is coarser than that of the gas pedal. Note, in fact, that even if the robot ankle moves in a small range, the car speed changes significantly. The noise on the ankle command, as well as the initial peak, are due to the derivative term of the gas pedal control (22). However, the signal is smoothed by the task-based quadratic programming control (see the dashed black line, i.e., the signal reconstructed from the encoders).
In the same campaign of experiments, we performed ten autonomous car driving experiments. In nine of them (including the one presented just above), the robot successfully drove the car for the entire path. One of the experiments failed due to a critical failure of the image processing. It was not possible to perform experiments on other tracks (with different road shapes and environmental conditions), because our application was rejected after the complex administrative paperwork required to access other roads in the campus.
Second experiment: switching between teleoperated and autonomous modes
In some cases, the conditions ensuring the correct behaviour of the autonomous mode are risky. Thus, it is important to allow a user to supervise the driving operation, and control the car if required. As described in Sect. 2, our framework allows a human user to intervene at any time, during the driving operation, to select a particular driving strategy. The second experiment shows the switching between the autonomous and teleoperated modes.
In particular, in some phases of the experiment, the human takes control of the robot, by selecting the teleoperated mode. In these phases, proper commands are sent to the robot, to drive the car along two very sharp curves, connecting two straight roads traveled in autonomous mode. Snapshots of this second experiment are shown in Fig. 13.
For this experiment we set v * = 1.5 m/s, while after the initial calibration of the gas pedal, q a,min = -0.5 rad and q a,max = -0.43 rad. Note that the difference in the admissible ankle range with respect to the previous experiment is due to a slightly different position of the robot foot on the gas pedal.
Figure 14a shows the signals of interest for the steering control. In particular, one can observe that when the control is enabled (shadowed areas of the plots) there is the same correct behavior of the system seen in the first experiment. When the user asks for the teleoperated mode (non-shadowed areas of the plots), the visual features are not considered, and the steering command is sent to the robot via keyboard or joystick by the user. Between 75 and 100 s, the user controlled the robot (in teleoperated mode) to make it steer on the right as much as possible. Because of the kinematic limits and of the grasping configuration, the robot saturated the steering angle at about -2 rad even if the user asked a wider steering. This is evident on the plot of the steering angle command of Fig. 14a (bottom): note the difference between the command (blue continuous curve), and the steering angle reconstructed from the encoders (black dashed curve).
Similarly, Fig. 14b shows the gas pedal control behavior when switching between the two modes. When the gas pedal control is enabled, the desired car speed is properly tracked by operating the robot ankle joint (shadowed areas of the top plot in Fig. 14b). On the other hand, when the control is disabled (non-shadowed areas of the plots), the ankle command (blue curve in Fig. 14b, bottom), as computed by (25), is not considered, and the robot ankle is teleoperated with the keyboard/joystick interface, as noticeable from the encoder plot (black dashed curve). At the switching between the two modes, the control keeps sending commands to the robot without any interruption, and the smoothness of the signals allows to have continuous robot operation. In summary, the robot could perform the entire experiment (along a path of 130 m ca., for more than 160 s) without the need to stop the car. This was achieved thanks to two main design choices. Firstly, from a perception viewpoint, monocular camera and IMU data are light to be processed, allowing a fast and reactive behavior. Secondly, the control framework at all the stages (from the higher level visual control to the low level kinematic control) guarantees smooth signals, even at the switching moments.
The same experiment presented just above was performed five other times during the same day. Four experiments were successful, while two failed due to human errors during teleoperation.
Third experiment: assisted driving at the DRC finals
The third experiment shows the effectiveness of the assisted driving mode. This strategy was used to make the robot drive at the DRC finals, where the first of the eight tasks consisted in driving a utility vehicle along a straight path, with two sets of obstacles. We successfully completed the task, by using the assisted mode. Snapshots taken from the DRC finals official video [START_REF] Darpatv | Team AIST-NEDO driving on the second day of the DRC finals[END_REF] are shown in Fig. 15. The human user teleoperated HRP-2Kai remotely, by using the video stream from the robot camera as the only feedback from the challenge field. In the received images, the user selected, via mouse, the proper artificial road borders (red lines in the figure), to steer the car along the path. Note that these artificial road borders, manually set by the user, may not correspond to the real borders of the road. In fact, they just represent geometrical references -more intuitive for humans -to easily define the vanishing and middle points and steer the car by using (21). Concurrently, the robot ankle was teleoperated to achieve a desired car velocity. In other words, with reference to the block diagram of Fig. 1, the user provides the visual features to the steering control, and the gas pedal reference to the pedal operation block. Basically, s/he takes the place of the road detection and car velocity estimation/control blocks. The assisted mode could be seen as a sort of shared control between the robot and the a human supervisor, and allows the human to interfere with the robot operation if required. As stated in the previous section, at any time, during the execution of the driving experience, the user can instantly and smoothly switch to one of the other two driving modes. At the DRC, we used a wide angle camera, although the effectiveness of the assisted mode was also verified with a Xtion camera.
Conclusions
In this paper, we have proposed a reactive control architecture for car driving by a humanoid robot on unknown roads. The proposed approach consists in extracting road visual features to determine a reference steering angle that keeps the car at the center of the road. The gas pedal, operated by the robot foot, is controlled by estimating the car speed using visual and inertial data. Three different driving modes (autonomous, assisted, and teleoperated) extend the versatility of our framework. The experimental results carried out with the humanoid robot HRP-2Kai have shown the effectiveness of the proposed approach. The assisted mode was successfully used to complete the driving task at the DRC finals.
The driving task has been addressed as an illustrative case study of humanoids controlling human-tailored devices. Beyond the driving experience itself, we believe that humanoids are among the most suitable platforms for helping humans with everyday tasks, and the proposed work shows that complex real-world tasks can actually be performed in autonomous, assisted and teleoperated ways. Obviously, the complexity of the task also comes with the complexity of the framework design, from both the perception and control points of view. This led us to make some working assumptions that, in some cases, limited the range of application of our methods.
Further investigations shall deal with the task complexity, to advance the state of the art of the algorithms and make humanoids capable of helping humans with dirty, dangerous and demanding jobs. Future work will aim at making the autonomous mode work efficiently in the presence of sharp curves. To this end, and to overcome the problem of limited steering motions, we plan to include in the framework the planning of variable grasping configurations, to achieve more complex manoeuvres. We also plan to address driving on uneven terrain, where the robot must additionally stabilize its attitude against sharp changes of the car orientation. Furthermore, the introduction of obstacle avoidance algorithms based on optical flow will improve driving safety. Finally, we plan to add brake control and to perform the entire driving task, including car ingress and egress.
Figure 1: Conceptual block diagram of the driving framework.
Figure 2: Side (a) and top view (b) of a humanoid robot driving a car, with the relevant variables.
Figure 3: The steering wheel, with rotation angle α, and the hand and steering frames, F_h and F_s. The parameters r, the radius of the wheel, and β, characterizing the grasp configuration, are also shown.
Figure 4: (a) The robot foot operates the gas pedal by regulating the ankle joint angle q_a, to set a pedal angle ζ and yield a car acceleration a. (b) Geometric relationship between the ankle and gas pedal angles.
Figure 5: The images of the road borders define the middle and vanishing points, respectively M and V. Their abscissa values are denoted x_m and x_v.
Figure 6: Main steps of the road detection algorithm: (a) on-board camera image with the detected road borders in red, and the vanishing and middle points in cyan and green, respectively; (b) first color detection; (c) second color detection; (d) mask obtained after dilation and erosion; (e) convex hull after Gaussian filtering; (f) Canny edge detection; (g) Hough transform; (h) merged segments. Although the acquired robot image (a) is shown in gray-scale here, the proposed road detection algorithm processes color images.
Figure 7: Schematic representation of the robot camera looking at the road. (a) Any visible Cartesian point (x_g, y_g, z_g) on the ground has a projection on the camera image plane, whose coordinates expressed in pixels are (x_p, y_p). (b) The measurement of this point on the image plane, together with the camera configuration parameters, can be used to estimate the depth z_g of the point.
Figure 8: Left: setup of an experiment that requires admittance control on the steering hand. Right: output of the admittance controller in the hand frame during the same experiment.
Figure 9: The posture taken by HRP-2Kai during the experiments (a: driving posture) and the experimental area at the AIST campus (b: top view).
Figure 10: First experiment: autonomous car driving.
Figure 11: First experiment: autonomous car driving. Acceleration a_IMU measured with the robot IMU (top), linear velocity v_OF measured with the optical flow (center), and car speed v estimated by the KF (bottom).
Figure 12: First experiment: autonomous car driving. (a) Middle point abscissa x_m (top), vanishing point abscissa x_v (center), and steering angle α (bottom). (b) Car speed v (top), already shown in Fig. 11, and ankle joint angle q_a (bottom).
Figure 13: Second experiment: switching between teleoperated and autonomous modes.
Figure 14: Second experiment: switching between teleoperated and autonomous modes. (a) Middle point abscissa x_m (top), vanishing point abscissa x_v (center), and steering angle α (bottom). (b) Car speed v (top) and ankle joint angle q_a (bottom).
Figure 15: Third experiment: assisted driving mode at the DRC finals. Snapshots taken from the DRC official video.
Table 1: Driving modes. For each mode, the steering and the car velocity control are enabled or disabled accordingly.

Driving mode    Steering control    Car velocity control
Autonomous      enabled             enabled
Assisted        enabled*            disabled
Teleoperated    disabled            disabled
* Road detection is assisted by the human.

Table 2: QP weights and set-point gains.

        E_1    E_2                E_3     E_4
w       100    5                  1000    1000
K_p     5      1 (ankle = 100)    10      10
The assumption on parallel road borders can be relaxed, as proved in (Paolillo et al., 2016). We maintain the assumption here to keep the description of the controller simpler, as will be shown in Sect. 5.1.
Bounds on the front wheels orientation characterizing common service cars induce the maximum curva-
For the sake of clarity, in Fig. 4b the length of the segment C_2 C_3 is drawn much bigger than zero. However, this length, along with the angles ∆q_a and ∆ζ, is almost null.
For details on this step, refer to[START_REF] Paolillo | Vision-based maze navigation for humanoid robots[END_REF].
Although 3 parameters are sufficient if the borders are parallel, a 4-dimensional state vector will cover all cases, while guaranteeing robustness to image processing noise.
To solve the least-square problem, n ≥ 3 points are necessary. In our implementation, we used the openCV solve function, and in order to filter the noise due to few contributions, we set n ≥ 25. If n < 25, we set v c = 0.
The assumption on horizontal road in Sect. 3 avoids the need for repeating this calibration.
Acknowledgments
This work is supported by the EU FP7 strep project KOROIBOT www.koroibot.eu, and by the Japan Society for Promotion of Science (JSPS) Grant-in-Aid for Scientific Research (B) 25280096. This work was also in part supported by the CNRS PICS Project ViNCI. The authors deeply thank Dr. Eiichi Yoshida for taking in charge the administrative procedures in terms of AIST clearance and transportation logistics, without which the experiments could not be conducted; Dr Fumio Kanehiro for lending the car and promoting this research; Hervé Audren and Arnaud Tanguy for their kind support during the experiments.
Peter Hine
Wim Pyckhout-Hintzen
Structure of interacting aggregates of silica nanoparticles in a polymer matrix: Small-angle scattering and Reverse Monte-Carlo simulations
Reinforcement of elastomers by colloidal nanoparticles is an important application where microstructure needs to be understood -and if possible controlled -if one wishes to tune macroscopic mechanical properties. Here the three-dimensional structure of big aggregates of nanometric silica particles embedded in a soft polymeric matrix is determined by Small Angle Neutron Scattering. Experimentally, the crowded environment leading to strong reinforcement induces a strong interaction between aggregates, which generates a prominent interaction peak in the scattering. We propose to analyze the total signal by means of a decomposition in a classical colloidal structure factor describing aggregate interaction and an aggregate form factor determined by a Reverse Monte Carlo technique. The result gives new insights in the shape of aggregates and their complex interaction in elastomers. For comparison, fractal models for aggregate scattering are also discussed.
Figures : 10
Tables : 3 I. INTRODUCTION
There is an intimate relationship between microscopic structure and mechanical properties of composite materials [1][START_REF] Nielsen | Mechanical Properties of Polymers and Composite[END_REF][START_REF] Frohlich | [END_REF][4][5]. Knowledge of both is therefore a prerequisite if one wishes to model this link [6][7][8]. A precise characterization of the three-dimensional composite structure, however, is usually difficult, as it has often to be reconstructed from two-dimensional images made on surfaces, cuts or thin slices, using electron microscopy techniques or Atomic Force Microscopy [9][10][11]. Scattering is a powerful tool to access the bulk structure in a nondestructive way [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF][START_REF] Peterlik | [END_REF]. X-ray scattering is well suited for many polymer-inorganic composites [14][15][16], but neutron scattering is preferred here due to the extended q-range (with respect to standard x-ray lab-sources), giving access to length scales between some and several thousand Angstroms. Also, cold neutrons penetrate more easily macroscopically thick samples, and they offer the possibility to extract the conformation of polymer chains inside the composite in future work [17]. Small Angle Neutron Scattering (SANS) is therefore a method of choice to unveil the structure of nanocomposites. This article deals with the structural analysis by SANS of silica aggregates in a polymeric matrix. Such structures have been investigated by many authors, often with the scope of mechanical reinforcement [18][19][20][21], but sometimes also in solution [22][23][24]. One major drawback of scattering methods is that the structure is obtained in reciprocal space. It is sometimes possible to read off certain key features like fractal dimensions directly from the intensity curves, and extensive modeling can be done, e.g. in the presence of a hierarchy of fractal dimensions, using the famous Beaucage expressions [25]. Also, major progress has been made with inversion to real space data [26]. Nonetheless, complex structures like interacting aggregates of filler particles embedded in an elastomer for reinforcement purposes are still an important challenge. The scope of this article is to report on recent progress in this field.
II. MATERIALS AND METHODS
II.1 Sample preparation.
We briefly recall the sample preparation, which is presented in [27]. The starting components are aqueous colloidal suspensions of silica from Akzo Nobel (Bindzil 30/220 and Bindzil 40/130), and nanolatex polymer beads. The latter was kindly provided by Rhodia. It is a coreshell latex of randomly copolymerized Poly(methyl methacrylate) (PMMA) and Poly(butylacrylate) (PBuA), with some hydrophilic polyelectrolyte (methacrylic acid) on the surface. From the analysis of the form factors of silica and nanolatex measured separately by SANS in dilute aqueous solutions we have deduced the radii and polydispersities of a lognormal size distribution of spheres [27]. The silica B30 has an approximate average radius of 78 Å (resp. 96 Å for B40), with about 20% (resp. 28%) polydispersity, and the nanolatex 143 Å (24% polydispersity).
Colloidal stock solutions of silica and nanolatex are brought to desired concentration and pH, mixed, and degassed under primary vacuum in order to avoid bubble formation. Slow evaporation of the solvent at T = 65°C under atmospheric pressure takes about four days, conditions which have been found suitable for the synthesis of smooth and bubble-free films without any further thermal treatment. The typical thickness is between 0.5 and 1 mm, i.e. films are macroscopically thick.
II.2 Small Angle Neutron Scattering.
The data discussed here have been obtained in experiments performed at ILL on beamline D11 [27]. The wavelength was fixed to 10.0 Å and the sample-to-detector distances were 1.25 m, 3.50 m, 10.00 m, 36.70 m, with corresponding collimation distances of 5.50 m, 5.50 m, 10.50 m and 40.00 m, respectively. Primary data treatment has been done following standard procedures, with the usual subtraction of empty cell scattering and H 2 O as secondary calibration standard [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF]. Intensities have been converted to cm -1 using a measurement of the direct beam intensity. Background runs of pure dry nanolatex films show only incoherent scattering due to the high concentration of protons, as expected for unstructured random copolymers. The resulting background is flat and very low as compared to the coherent scattering in the presence of silica, and has been subtracted after the primary data treatment.
III. STRUCTURAL MODELLING
III.1 Silica-latex model nanocomposites.
We have studied silica-latex nanocomposites made by drying a mixture of latex and silica colloidal solutions. The nanometric silica beads can be kept from aggregating during the drying process by increasing the precursor solution pH, and thus their electric charge.
Conversely, aggregation can be induced by reducing the solution pH. The resulting nanocomposite has been shown to have very interesting mechanical properties even at low filler volume fraction. The reinforcement factor, e.g., which is expressed as the ratio of Youngs modulus of the composite and the one of its matrix, E/E latex , can be varied by a factor of several tens at constant volume fraction of silica (typically from 3 to 15%) [28,29]. In this context it is important to recognize that the silica-polymer interface is practically unchanged from one sample to the other, in the sense that there are no ligands or grafted chains connecting the silica to the matrix. There might be changes to the presence of ions, but their impact on the reinforcement factor appears to be of 2 nd order [30]. Possible changes in the matrix properties are cancelled in the reinforcement factor representation, the influence of the silica structure is thus clearly highlighted in our experiments. Using a simplified analysis of the structural data measured by SANS, we could show that (i) the silica bead aggregation was indeed governed by the solution pH, and (ii) the change in aggregation number N agg was accompanied by a considerable change in reinforcement factor at constant silica volume fraction. Although we had convincing evidence for aggregation, it seemed difficult to close the gap and verify that the estimated N agg was indeed compatible with the measured intensity curves. This illustrates one of the key problems in the physical understanding of the reinforcement effect: interesting systems for reinforcement are usually highly crowded, making structural analysis complicated and thereby impeding the emergence of a clear structure-mechanical properties relationship. It is the scope of this article to propose a method for structural analysis in such systems.
III.2 Modelling the scattered intensity for interacting aggregates.
For monodisperse silica spheres of volume V_si, the scattered intensity due to some arbitrary spatial organization can be decomposed into the product of contrast ∆ρ, volume fraction of spheres Φ, structure factor, and the normalized form factor of individual spheres, P(q) [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF][START_REF] Peterlik | [END_REF]. If in addition the spheres are organized in monodisperse aggregates, the structure factor can be separated into the intra-aggregate structure factor S_intra(q), and a structure factor describing the center-of-mass correlations of aggregates, S_inter(q):

I(q) = ∆ρ^2 Φ V_si S_inter(q) S_intra(q) P(q)     (1)

Here the product S_intra(q) P(q) can also be interpreted as the average form factor of aggregates, as it would be measured at infinite dilution of aggregates. In order to be able to compare it to the intensity in cm^-1, we keep the prefactors and define the aggregate form factor P_agg(q) = ∆ρ^2 Φ V_si S_intra(q) P(q).
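For reference, a small numerical sketch of eq. (1) is given below (Python/NumPy), using the standard homogeneous-sphere form factor for P(q). The structure factors S_inter and S_intra are assumed to be supplied by the models discussed in the following subsections, and consistent units are assumed throughout; function names are illustrative.

import numpy as np

def sphere_form_factor(q, R):
    # Normalized form factor P(q) of a homogeneous sphere of radius R.
    x = q * R
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def intensity(q, drho, phi, R_si, S_inter, S_intra):
    # Scattered intensity of interacting aggregates, eq. (1) (sketch).
    V_si = 4.0 / 3.0 * np.pi * R_si**3
    return drho**2 * phi * V_si * S_inter * S_intra * sphere_form_factor(q, R_si)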
The above mentioned conditions like monodispersity are not completely met in our experimental system. However, it can be considered sufficiently close to such an ideal situation for this simple scattering law to be applicable. The small polydispersity in silica beads, e.g., is not expected to induce specific aggregate structures. At larger scale, the monodispersity of the aggregates is a working hypothesis. It is plausible because of the strong scattering peak in I(q), which will be discussed with the data. Strong peaks are usually associated with ordered and thus not too polydisperse domain sizes [31].
To understand the difficulty of the structural characterization of the nanocomposites discussed here, one has to see that aggregates of unknown size interact with each other through an unknown potential, which determined their final (frozen) structure. Or from a more technical point of view, we know neither the intra-nor the inter-aggregate structure factor, respectively denoted S intra (q) (or equivalently, P agg (q)), and S inter (q).
In the following, we propose a method allowing the separation of the scattered intensity in P agg (q) and S inter (q), on the assumption of (a) a (relative) monodispersity in aggregate size, and (b) that P agg is smooth in the q-range around the maximum of S inter . The inter-aggregate structure factor will be described with a well-known model structure factor developed for simple liquids and applied routinely to repulsively interacting colloids [START_REF] Hansen | Theory of Simple Liquids[END_REF][START_REF] Hayter | [END_REF][34]. The second factor of the intensity, the aggregate form factor, will be analyzed in two different ways. First, P agg will be compared to fractal models [25]. Then, in a second part, its modeling in direct space by Reverse Monte Carlo will be implemented and discussed [35][36][37][38][39].
Determination of the average aggregation number and S inter .
Aggregation number and aggregate interaction need to be determined first. The silica-latex nanocomposites discussed here have a relatively well-ordered structure of the filler phase, as can be judged from the prominent correlation peak in I(q), see Fig. 1 as an example for data.
The peak is also shown in the upper inset in linear scale. The position of this correlation peak q o corresponds to a typical length scale of the sample, 2π/q o , the most probable distance between aggregates. As the volume fraction (e.g., Φ = 5% in Fig. 1) and the volume of the elementary silica filler particles V si are known, one can estimate the average aggregation number:
N_agg = (2π/q_o)^3 Φ / V_si     (2)
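In practice this is a one-line estimate; a sketch (Python) is given below. The exact value of N_agg depends on how the elementary particle volume V_si is averaged over the silica size distribution, so the numbers quoted in the text are only reproduced approximately when a single nominal radius is used.

import numpy as np

def aggregation_number(q0, phi, R_si):
    # Eq. (2): most probable aggregate spacing 2*pi/q0, cubed, times the
    # silica volume fraction, divided by the volume of one silica bead.
    V_si = 4.0 / 3.0 * np.pi * R_si**3
    return (2.0 * np.pi / q0) ** 3 * phi / V_si

# e.g. aggregation_number(q0=3.9e-3, phi=0.05, R_si=78.0) gives a value of order 100.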
Two ingredients are necessary for the determination of the inter-aggregate structure factor.
The first one is the intensity in absolute units, or alternatively the independent measurement of scattering from isolated silica particles, i.e. at high dilution and under known contrast conditions and identical resolution. The second is a model for the structure factor of objects in repulsive interaction. We have chosen a well-known quasi-analytical structure factor based on the Rescaled Mean Spherical Approximation (RMSA) [START_REF] Hayter | [END_REF]34]. Originally, it was proposed for colloidal particles of volume V, at volume fraction Φ, carrying an electrostatic charge Q, and interacting through a medium characterized by a Debye length λ D . In the present study, we use this structure factor as a parametrical expression, with Q and λ D as parameters tuning the repulsive potential. The Debye length, which represents the screening in solutions, corresponds here to the range of the repulsive potential, whereas Q allows varying the intensity of the interaction. Although the spatial organization of the silica beads in the polymer matrix is due to electrostatic interactions in solution before film formation, we emphasize that this original meaning is lost in the present, parametrical description.
For the calculation of S inter , Φ is given by the silica volume fraction, and the aggregate volume V = 4π/3 R e 3 by N agg V si , with N agg determined by eq.( 2). R e denotes the effective radius of a sphere representing an aggregate. In principle, we are thus left with two parameters, Q and λ D .
The range λ D must be typically of the order of the distance between the surfaces of neighboring aggregates represented by effective charged spheres of radius R e , otherwise the structure factor would not be peaked as experimentally observed. As a starting value, we have chosen to set λ D equal to the average distance between neighboring aggregate surfaces. We will come back to the determination of λ D below, and regard it as fixed for the moment. Then only the effective charge Q remains to be determined.
Here the absolute units of the intensity come into play. N agg is known from the peak position, and thus also the low-q limit of S intra (q→0), because forward scattering of isolated objects gives directly the mass of an aggregate [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF]. The numerical value of the (hypothetical) forward scattering in the absence of interaction can be directly calculated using eq.( 1), setting S intra = N agg and S inter = 1. Of course the aggregates in our nanocomposites are not isolated, as their repulsion leads to the intensity peak and a depression of the intensity at small angles.
The limit of I(q→0) contains thus also an additional factor, S inter (q→0). In colloid science, this factor is known as the isothermal osmotic compressibility [START_REF] Neutrons | X-ray and Light: Scattering Methods Applied to Soft Condensed Matter[END_REF], and here its equivalent can be deduced from the ratio of the isolated aggregate limit of the intensity (S intra = N agg , S inter = 1), and the experimentally measured one I(q→0). It characterizes the strength of the aggregate-aggregate interaction.
Based on the RMSA-structure factor [START_REF] Hayter | [END_REF]34], we have implemented a search routine which finds the effective charge Q reproducing S inter (q→0). With λ D fixed, we are left with one free parameter, Q, which entirely determines the q-dependence of the inter-aggregate structure
factor. An immediate cross-check is that the resulting S inter (q) is peaked in the same q-region as the experimental intensity. In Fig. 1, the decomposition of the intensity in S inter (q) and S intra (q) is shown. It has been achieved with an aggregation number of 93, approximately forty charges per aggregate, and a Debye length of 741 Å, i.e. 85% of the average surface-tosurface distance between aggregates, and we come now back to the determination of λ D .
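The charge-search routine mentioned above can be sketched as a simple bisection (Python); here s_inter_rmsa stands for an RMSA structure-factor implementation (e.g. of the Hayter-Penfold type), which is assumed to be available and is not reproduced, and all names are illustrative.

def find_effective_charge(S0_target, lambda_D, phi, R_e, s_inter_rmsa,
                          Q_lo=0.0, Q_hi=500.0, tol=1e-3, q_small=1e-5):
    # Bisection on the effective aggregate charge Q so that the RMSA
    # structure factor reproduces the measured compressibility S_inter(q->0).
    for _ in range(100):
        Q = 0.5 * (Q_lo + Q_hi)
        S0 = s_inter_rmsa(q_small, Q, lambda_D, phi, R_e)
        if abs(S0 - S0_target) < tol:
            break
        if S0 > S0_target:
            Q_lo = Q     # repulsion too weak: increase the charge
        else:
            Q_hi = Q     # repulsion too strong: decrease the charge
    return Q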
In Fig. 2, a series of inter-aggregate structure factors is shown with different Debye lengths: 50%, 85% and 125% of the distance between neighboring aggregate surfaces (872 Å). The charges needed to obtain the measured compressibility are 27, 40 and 64.5, respectively. In Fig. 2, the inter-aggregate structure factors are seen to be peaked in the vicinity of the experimentally observed peak, with higher peak heights for the lower Debye lengths.
Dividing the measured intensity I(q) by ∆ρ 2 Φ V si P(q) S inter yields S intra , also presented in the plot. At low-q, these structure factors decrease strongly, then pass through a minimum and a maximum at intermediate q , and tend towards one at large q (not shown). The high-q maximum is of course due to the interaction between primary particles.
In the low-q decrease, it can be observed that a too strong peak in S inter leads to a depression of S intra at the same q-value. Conversely, a peak that is too weak leads to a shoulder in S intra .
Only at intermediate values of the Debye length (85%), S intra is relatively smooth. In the following, it is supposed that there is no reason for S intra to present artefacts in the decrease from the Guinier regime to the global minimum (bumps or shoulders), and set the Debye length to the intermediate value (85%) for this sample. We have also checked that small variations around this intermediate Debye length (80 to 90%) yield essentially identical structure factors, with peak height differences of a view percent. This procedure of adjusting λ D to the value with a smooth S intra has been applied to all data discussed in this paper.
Fitting S intra using geometrical and fractal models.
Up to now, we have determined the inter-aggregate structure factor, and then deduced the experimental intra-aggregate structure factor S intra as shown in Fig. 2 by dividing the intensity by S inter according to eq.(1). To extract direct-space information from S intra for aggregates of unknown shape, two types of solutions can be sought. First, one can make use of the knowledge of the average aggregation number, and construct average aggregates in real space. This supposes some idea of possible structures, which can then be Fourier-transformed and compared to the experimental result S intra (q). For example, one may try small crystallites [40], or, in another context, amorphous aggregates [41]. Another prominent case is the one of fractal structures, which are often encountered in colloidal aggregation [42 -44].
Let us quickly discuss the scattering function of finite-sized fractals using the unified law with both Guinier regime and power law dependence [25,45]. An isolated finite-sized object with fractal geometry described by a fractal dimension d has three distinct scattering domains. At low q (roughly q < 1/R g ), the Guinier law reflects the finite size and allows the measurement of the aggregate mass from the intensity plateau, and of the radius of gyration R g from the low-q decay. At intermediate q (q > 1/R g ), the intensity follows a power law q -d up to the high-q regime (q > 1/R), which contains the shape information of the primary particles (of radius R) making up the aggregate. Generalizations to higher level structures have also been used [46][47][48][49]. Here we use a two-level description following Beaucage [25]:
I(q) = G_1 \exp(-q^2 R_{g1}^2/3) + B_1 \left( \frac{[\mathrm{erf}(q R_{g1}/\sqrt{6})]^3}{q} \right)^{d} \exp(-q^2 R_{g2}^2/3) + G_2 \exp(-q^2 R_{g2}^2/3) + B_2 \left( \frac{[\mathrm{erf}(q R_{g2}/\sqrt{6})]^3}{q} \right)^{p}     (3)
Note that there is no interaction term like S inter in eq.( 1), and that eq.( 3) accounts only for intra-aggregate structure in this case. The first term on the right-hand-side of eq.( 3) is the Guinier expression of the total aggregate. The second term, i.e. the first power law, corresponds to the fractal structure of the aggregate, the error function allowing for a smooth cross-over. This fractal law is weighted by the Guinier expression of the second level, which is the scattering of the primary silica particle in our case; this effectively suppresses the fractal law of the first level at high q. This is followed by an equivalent expression of the higher level, i.e. a Guinier law of primary particles followed by the power-law, which is the Porod law of the primary particles in this case.
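A direct transcription of eq. (3) is straightforward; a sketch (Python/SciPy) is given below, where d and p are taken as the magnitudes of the two power-law exponents (p = 4 reproduces the Porod decay q^-4 of the primary particles).

import numpy as np
from scipy.special import erf

def beaucage_two_level(q, G1, Rg1, B1, d, G2, Rg2, B2, p=4.0):
    # Two-level unified (Beaucage) scattering function, eq. (3).
    q = np.asarray(q, dtype=float)
    cutoff = np.exp(-q**2 * Rg2**2 / 3.0)      # suppresses the fractal law at high q
    guinier1 = G1 * np.exp(-q**2 * Rg1**2 / 3.0)
    power1 = B1 * (erf(q * Rg1 / np.sqrt(6.0))**3 / q)**d * cutoff
    guinier2 = G2 * np.exp(-q**2 * Rg2**2 / 3.0)
    power2 = B2 * (erf(q * Rg2 / np.sqrt(6.0))**3 / q)**p
    return guinier1 + power1 + guinier2 + power2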
Fitting S intra using Reverse Monte Carlo.
The second solution to extract real-space information from S intra is to fit the intra-aggregate structure factor by a Monte-Carlo approach which we describe here. It has been called
Reverse Monte Carlo (RMC) [35][36][37][38][39] because it is based on a feed-back between the structure in direct and reciprocal space, which makes it basically an automatic fitting procedure once the model is defined. The application of RMC to the determination of the aggregate structure from the scattered intensity is illustrated (in 2D) in Fig. 3. RMC was performed with a specially developed Fortran program as outlined in the Appendix. The method consists in generating representative aggregate shapes by moving elements of the aggregate in a random way -these are the Monte Carlo steps -, and calculate the corresponding structure factor at each step. The intensity is then compared to the experimentally measured one, which gives a criterion whether the Monte Carlo step is to be accepted or not. Monte-Carlo steps are repeated until no further improvement is obtained. If the algorithm converges, the outcome is a structure compatible with the scattered intensity. As an immediate result, it allows us to verify that an aggregate containing N agg filler particles -N agg being determined from the peak position q o -produces indeed the observed scattered intensity.
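For illustration, a generic Metropolis-type sketch of such a fitting loop is given below (Python/NumPy). It is not the Fortran program used here: the silica form factor, excluded-volume constraints and the exact χ² definition of the appendix are omitted, and S_intra is evaluated with the simple Debye formula for point-like beads.

import numpy as np

def debye_structure_factor(q, pos):
    # Intra-aggregate structure factor of N point-like beads (Debye formula).
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    s = np.empty(len(q))
    for k, qk in enumerate(q):
        x = qk * d
        s[k] = np.where(x > 0, np.sin(x) / np.where(x > 0, x, 1.0), 1.0).sum() / len(pos)
    return s

def reverse_monte_carlo(q, s_target, pos, steps=10000, move=20.0, T=1e-3):
    # Random single-bead moves, accepted if chi^2 decreases or with a
    # Boltzmann-like probability otherwise (generic RMC sketch).
    rng = np.random.default_rng(0)
    chi2 = np.sum((debye_structure_factor(q, pos) - s_target) ** 2)
    for _ in range(steps):
        i = rng.integers(len(pos))
        old = pos[i].copy()
        pos[i] = old + rng.normal(scale=move, size=3)
        new = np.sum((debye_structure_factor(q, pos) - s_target) ** 2)
        if new < chi2 or rng.random() < np.exp(-(new - chi2) / T):
            chi2 = new                    # accept the move
        else:
            pos[i] = old                  # reject and restore the bead
    return pos, chi2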
IV. APPLICATION TO EXPERIMENTAL RESULTS
IV.1 Moderate volume fraction of silica (Φ = 5%, B30).
Aggregate interaction.
We now apply our analysis to the measured silica-latex nanocomposite structures [27]. We start with the example already discussed before (Figs. 1 and 2), i.e. a sample with a moderate silica volume fraction of 5% and neutral solution pH before solvent evaporation. From the peak position (q = 3.9 x 10 -3 Å -1 ), an average aggregation number of N agg = 93 can be deduced using eq. (2). The aggregate mass gives us the hypothetical low-q limit of the intensity for non-interacting aggregates using eq. (1), with S inter = 1, of 9550 cm -1 . The measured value is much lower, approximately 450 cm -1 (with some error induced by the extrapolation), so the isothermal compressibility due to the interaction between aggregates amounts to about 0.05.
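In other words, the number quoted here is simply the ratio of the measured low-q limit to the hypothetical non-interacting one, i.e. the low-q limit of S inter (a quick check of the numbers given above):

S_{\mathrm{inter}}(q\to 0) \approx \frac{I_{\mathrm{measured}}(q\to 0)}{I_{\mathrm{non\text{-}interacting}}(q\to 0)} = \frac{450\ \mathrm{cm^{-1}}}{9550\ \mathrm{cm^{-1}}} \approx 0.05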
This rather low number expresses the strong repulsive interaction. The charged spheres representing the aggregates in the inter-aggregate structure factor calculation have the same volume as the aggregates, and thus an equivalent radius of R e = 367 Å. The surface-to-surface distance between spheres is therefore 872 Å. Following the discussion of Fig. 2, we have set the screening length λ D to 85% of this value, 741 Å. Using this input in the RMSA calculation, together with the constraint on the compressibility, an electric charge of 40 elementary charges per aggregate is found. The corresponding S inter is plotted in Fig. 2.
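For the reader's convenience, and assuming the average center-to-center distance D is identified with 2π/q_o, the surface-to-surface distance quoted above follows from

D \approx \frac{2\pi}{q_o} = \frac{2\pi}{3.9\times 10^{-3}\ \text{Å}^{-1}} \approx 1.61\times 10^{3}\ \text{Å}, \qquad D - 2R_e \approx 1611 - 734 \approx 880\ \text{Å},

which is consistent with the 872 Å quoted above within the rounding of q_o.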
Fractal modeling.
A fit with a two-level fractal, eq. (3), has been performed with the aggregate form factor P agg obtained by dividing the experimental intensity by S inter . The result is shown in Fig. 4. There are several parameters to the fit, some of which can be fixed independently. The slope of the high-q power law, for instance, has been fixed to p = -4, in agreement with the Porod law. The radius of gyration of the primary particles is 76 Å, and the corresponding prefactor G 2 can be deduced from the particle properties [27] and concentration (103 cm -1 ). For comparison, the form factor of the individual particle is shown in Fig. 4 as a one-level Beaucage function, i.e.
using only the last two terms of eq. ( 3). Furthermore, we have introduced the G 1 value of 9550 cm -1 calculated from N agg , i.e. from the peak position. Fitting yields the radius of gyration of aggregates (1650 Å), and a fractal dimension of 1.96. At intermediate q, however, the quality of the fit is less satisfying. The discrepancy is due to the minimum of S intra (cf. Fig.
2) around 0.02 Å -1 , a feature which is not captured by the model used here (eq. ( 3)).
Reverse Monte Carlo.
We now report on the results of the implementation of an RMC routine applied to the structure of the sample discussed above (Φ = 5%, pH 7). In Fig. 5, we plot the evolution of χ 2 (cf. appendix) as a function of the number of Monte Carlo tries for each bead (on average), starting from a random initial condition as defined in the appendix. For illustration purposes, this is compared to the χ 2 obtained from different initial conditions, i.e. aggregates constructed according to the same rule but with a different random seed. Such initial aggregate structures are also shown on the left-hand side of Fig. 6. In all cases, the χ 2 value is seen to decrease in Fig. 5 by about two orders of magnitude within five Monte Carlo steps per bead. It then levels off to a plateau, around which it fluctuates due to the Boltzmann criterion.
We have checked that much longer runs do not further improve the quality of the fit, cf. the inset of Fig. 5. The corresponding aggregates at the beginning and at the end of the simulation run are also shown in Fig. 6. They are of course different depending on the initial condition and angle of view, but their statistical properties are identical; otherwise their Fourier transforms would not fit the experimental data. It is interesting to see how similar the final, rather elongated aggregate structures look.
Having established that the algorithm robustly produces aggregates with similar statistical properties, we now compare the result to the experimental intensity in Fig. 7. Although some minor deviations between the intensities are still present, the agreement over five decades in intensity is quite remarkable. It shows that the aggregation number determined from the peak position q o is indeed a reasonable value, as it allows the construction of a representative aggregate with almost identical scattering behavior. In the lower inset of Fig. 7, the RMC result for the aggregate form factor P agg is compared to the experimental one (obtained by dividing the I(q) of Fig. 7 by S inter ). The fit is good, especially as the behavior around 0.02 Å -1 is better described than in the case of the fractal model, Fig. 4.
The radius of gyration can be calculated from the positions of the primary particles in one given realization. We find R g around 1150 Å, a bit smaller than with the fractal model (1650 Å), a difference probably due to the fact that we are only approaching the low-q plateau. For the comparison of the fractal model to RMC, let us recall that both apply only to P agg , i.e. after the separation of the intensity into aggregate form factor P agg and structure factor S inter . Both methods give the same fractal dimension d of the aggregates because this corresponds to the same slope of P agg . The aggregate form factor P agg , and thus the intensity, are better (although not perfectly) fitted with RMC. This is true notably for the minimum around 0.02 Å -1 , presumably because the nearest-neighbor correlations inside each aggregate are captured by a physical model of touching beads. Last but not least, RMC gives snapshots of 3D real-space structures compatible with the scattered intensity, which validates the determination of N agg using eq. (2).
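A minimal sketch of how such a radius of gyration can be computed from the bead centers of one RMC realization is given below (equal-mass point beads are assumed; the finite bead size, which adds a small constant contribution, is neglected here).

```python
import numpy as np

def radius_of_gyration(positions):
    """R_g of a set of equal-mass beads; positions is an (N, 3) array in Angstrom."""
    r = np.asarray(positions, dtype=float)
    r_cm = r.mean(axis=0)                       # center of mass
    return np.sqrt(((r - r_cm)**2).sum(axis=1).mean())

# usage on a hypothetical realization of N_agg = 93 bead centers:
# Rg = radius_of_gyration(bead_centers)         # approx. 1150 A in the text
```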
For the sake of completeness, we have tested RMC with aggregation numbers different from the one deduced from the peak position. Taking a very low aggregation number (i.e., smaller than the value obtained with eq. (2)) leads to bad fits, whereas higher aggregation numbers give fits that look acceptable at first sight. The problem with too high aggregation numbers is that the peak position of S inter is different from the position of the intensity peak, due to conservation of silica volume. RMC compensates for this by introducing an oscillation in S intra (or equivalently, P agg ) which effectively shifts the peak to its experimentally measured position.
In the upper inset of Fig. 7, P agg curves presenting such an artefact (N agg = 120 and 150) are compared to the one with the nominal aggregation number, N agg = 93 (filled symbols). The oscillation around 0.004 Å -1 is not present for N agg = 93, and becomes stronger as the aggregation number deviates more from the value determined from the intensity peak position, eq. (2).
IV.2 Evolution with silica volume fraction.
In the preceding section we have analyzed a sample at moderate silica volume fraction, 5%. It is now interesting to check if the same type of modeling can be applied to higher silica volume fractions and bigger aggregates (i.e., lower solution pH), where the structure factor can be seen to be more prominent directly from I(q).
Evolution of structure with silica volume fraction (Φ = 5 and 10%, B30).
In Fig. 8, two data sets corresponding to a lower pH of 5, for Φ = 5% and 10% (symbols) are compared to their RMC fits, in linear representation in order to emphasize the peaks. The parameters used for these calculations are given in Table 1, together with the aggregation numbers deduced from the peak position (using eq. ( 2)). As expected, these are considerably higher than at pH 7 [27]. Concerning the Debye length, it is interesting to note that its value relative to the inter-aggregate distance increases with volume fraction. As we have seen in section III.2, a higher Debye length leads to a weaker peak. This tendency is opposite to the influence of the volume fraction, and we have checked that the peak in S inter is comparable in height in both cases, i.e. the two tendencies compensate.
At first sight of Fig. 8, it is surprising that the intensity at 10% is lower than the one at 5%. This is only true at small q - the 10% intensity is higher in the Porod domain, as it should be, cf. P agg shown in the inset in log scale. At both concentrations, the aggregate shape seems to be unchanged (similar fractal dimension d, 2.25 and 2.3 for 5% and 10%, respectively), and together with the shift in peak position by a factor of 2^{1/3} (as Φ is doubled) to a region where P agg is much lower, this explains the observed decrease in intensity. We will see in the discussion of a series with the silica B40 that this behavior is not general, and that aggregation depends (as observed before [27]) on the type of bead.
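The factor mentioned here can be read off from volume conservation, assuming the peak position simply tracks the aggregate number density (V_si below denotes the volume of a single silica bead; the notation is introduced only for this estimate):

q_o \propto \frac{2\pi}{D} \propto \left(\frac{\Phi}{N_{agg}\,V_{si}}\right)^{1/3} \;\Rightarrow\; \frac{q_o(2\Phi)}{q_o(\Phi)} = 2^{1/3} \approx 1.26 \quad \text{at fixed } N_{agg}.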
For illustration, the scattered intensity corresponding to the random initial condition of RMC (cf. appendix) is also shown in Fig. 8. The major initial deviation from the experimental values underlines the capacity of the RMC algorithm to converge quickly (cf. Fig. 5) towards a very satisfying fit of the experimental intensity. Note that there is a small angle upturn for the sample at 10%. This may be due to aggregation on a very large scale, which is outside the scope and the possibilities of our method.
Evolution of structure with silica volume fraction (Φ = 3% - 15%, B40)
We now turn to a series of samples with different, slightly bigger silica beads (denoted B40), in a highly aggregated state (low pH), and with a larger range of volume fractions. In Fig. 9 the intensities are plotted with the RMC fits, for the series Φ = 3 - 15%, at pH 5, silica B40.
The parameters used for the calculations are given in the Table 2.
The fits shown in Fig. 9 are very good, which demonstrates that the model works well over a large range of volume fractions, i.e. varying aggregate-aggregate interaction. Concerning the Debye length and charge parameters, we have checked that the peaks in S inter are comparable in height (within 10%). Only their position shifts, as was observed with the smaller silica (B30). Unlike the case of B30, however, the intensities show a 'normal' increase with increasing volume fraction, which suggests a different evolution of aggregate shape and size for the bigger beads.
The case of the lowest volume fraction, Φ = 3%, deserves some discussion. The aggregation number is estimated at 188 using eq. (2). The peak is rather weak due to the low concentration, and it is also close to the minimum q-value. We thus had to base our analysis on an estimate of I(q→0), 700 cm -1 . The resulting inter-aggregate structure factor S inter is, as expected, only slightly peaked (peak height 1.1). We found that some variation of N agg does not deteriorate the quality of the fit, i.e. small variations do not introduce artificial oscillations in the aggregate form factor. We have, e.g., checked that the aggregate form factors P agg for N agg = 120 and 200 are equally smooth. At higher or lower aggregation numbers, like 230 or 100, oscillations appear in P agg . It is concluded that in this rather dilute case the weak ordering does not allow for a precise determination of N agg . For higher volume fractions, Φ > 3%, the aggregation numbers given in Table 2 are trustworthy.
V. DISCUSSION
V.1 Uniqueness of the solution.
The question of the uniqueness of the solution found by RMC arises naturally. Here two different levels need to be discussed. The first one concerns the separation in aggregate form and structure factor. We have shown that the aggregate parameters (N agg , aggregate interaction) are fixed by the boundary conditions. Only in the case of weak interaction (Φ = 3%), acceptable solutions with quite different aggregation numbers (between about 120 and 200) can be found. In the other cases, variations by some 15% in N agg lead to bad intensity fits or artefacts in the aggregate form factor P agg . We can thus confirm that one of the main objectives is reached, namely that it is possible to find an aggregate of well-defined mass (given by eq.( 2)), the scattering of which is compatible with the intensity.
The second level is to know to what extent the RMC realizations of aggregates are unique solutions. It is clear from the procedure that many similar realizations are created as the number of Monte Carlo steps increases (e.g., the plateau in Fig. 5), all with a comparable quality of fit. In Fig. 5, this is also seen to be independent of the initial condition, and Figs. 6 and 8 illustrate how far this initial condition is from the final structure. All the final realizations have equivalent statistical properties, and they can be regarded as representatives of a class of aggregates with identical scattering. However, no unique solution exists.
V.2 From aggregate structure to elastomer reinforcement.
We have shown in previous work that the mechanical properties of our nanocomposites depend strongly on aggregation number and silica volume fraction [28][29][30]. The aggregation number was estimated from the position, and we have now confirmed that such aggregates are indeed compatible with the complete scattering curves. It is therefore interesting to see how the real-space structures found by our method compare to the mechanical properties of the nanocomposites.
The low-deformation reinforcement factors of the series in silica volume fraction (B40, pH 5, Φ = 3 - 15%) are recalled in Table 3 [30]. E/E latex is found to increase considerably with Φ, much more than N agg . Aggregate structures resulting from the RMC procedure applied to the data in Fig. 9 are shown in Fig. 10. At low Φ, aggregates are rather elongated, and with increasing Φ they are seen to become slightly bulkier. We have determined their radii of gyration and fractal dimensions with a one-level Beaucage fit, using only the first two terms of the right-hand side of eq. (3) and applying the same method as in section IV.1. The results are summarized in Table 3. The fractal dimension is found to increase with Φ, as expected from Fig. 10. The aggregate radius R g first decreases, then increases again. If we compare R g to the average distance between aggregates D (from the peak position of S inter ), we find a crowded environment. The aggregates appear to be tenuous structures, with an overall radius of gyration bigger than the average distance between aggregates, which suggests aggregate interpenetration.
In a recent article [30], we have determined the effective aggregate radius and fractal dimension from a mechanical model relating E/E latex to the compacity of aggregates. The numerical values are different (aggregate radii between 1200 and 980 Å, fractal dimensions between 2.1 and 2.45) due to the mechanical model which represents aggregates as spheres, but the tendency is the same: Radii decrease as Φ increases, implying bulkier aggregates with higher fractal dimensions. Only the increase in radius found at 15% is not captured by the mechanical model.
Our picture of reinforcement in this system is based on the idea of percolation of hard silica structures in the matrix. Due to the (quasi-)incompressibility of the elastomer matrix, strain in any direction is accompanied by lateral compression, thus pushing aggregates together and creating mechanical percolation. Aggregates are tenuous, interpenetrating structures. The higher the silica volume fraction, the more compact the aggregates (higher d), and the stronger the percolating links. At low Φ, N agg is more or less constant, which implies that the aggregates decrease in size, cf. Table 3 for both the fractal and the RMC analysis. Above 6%, N agg increases, and the aggregates become denser and grow again in size. At the same time, aggregates come closer (D goes down). This moves the system closer to percolation and leads to the important increase in the reinforcement factor. In other systems, this is also what the reinforcement curves as a function of filler volume fraction suggest [28], where extremely strong structures made of the percolating hard filler phase are found above a critical volume fraction [50].
VI. CONCLUSION
We have presented a complete analysis of the scattering function of complex spectra arising from strongly aggregated and interacting colloidal silica aggregates in nanocomposites. The main result is the validation of the determination of the average aggregation number by a complete fit of the data. This is achieved by a separation of the scattered intensity into a product of aggregate form and structure factors. The aggregate form factor can then be described either by a model or by Reverse Monte Carlo modeling. The use of the decomposition of I(q) into a product is based on the assumption that aggregates are similar in size. This is justified by the strong peak in intensity, which indicates strong ordering, incompatible with too high a polydispersity in size.
Fractal and RMC-modelling appear to be complementary, with the advantage of generality and simplicity for the fractal model, whereas RMC needs numerical simulations adapted to each case. However, RMC does not rely on approximations (Guinier), and by its geometrical construction it connects local configurations (bead-bead) to the global structure. RMC thus gives a real space picture of aggregates compatible with I(q), and thereby confirms calculation of aggregation numbers from the peak positions.
To finish, possible improvements of our method can be discussed. Technically, the introduction of the spectrometer resolution function is straightforward, but it would not fundamentally change the results and would considerably slow down the algorithm. A more ambitious project would be to get rid of the separation into aggregate form and structure factor by performing an RMC simulation of a large system containing many aggregates [51]. It will be interesting to see if the Monte Carlo algorithm converges spontaneously towards more or less monodisperse aggregates, or if very different solutions, not considered in the present work, exist.

APPENDIX: Reverse Monte Carlo algorithm for scattering from aggregates.
A.1 Initial aggregate construction
The first step is to build an initial aggregate which can then evolve according to the Monte-Carlo rules in order to fit the experimental intensity I(q) of the nanocomposites. From the intensity peak position and eq. (2), the aggregation number N agg is known. The primary particles are the silica beads, with a radius drawn from a size distribution function [27]. The initial aggregate is constructed by adding particles to a seed particle placed at the origin. Each new particle is positioned by randomly choosing one of the particles which are already part of the aggregate and attaching the new particle to it in a random direction. Then, collisions with all particles in the aggregate at this stage are checked, and the new particle is accepted if there are no collisions. This is repeated until N agg is reached. Two realizations of initial aggregate structures are shown in Fig. 6.
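A minimal Python sketch of this construction step is given below. Monodisperse beads are used for brevity, whereas the actual program draws radii from the measured size distribution; the function name, the default bead radius and the variable names are illustrative, not those of the original Fortran code.

```python
import numpy as np

def build_initial_aggregate(n_agg, radius=80.0, rng=None):
    """Random sequential growth of a touching-bead aggregate (cf. A.1)."""
    rng = np.random.default_rng(rng)
    centers = [np.zeros(3)]                             # seed bead at the origin
    while len(centers) < n_agg:
        anchor = centers[rng.integers(len(centers))]    # pick an existing bead
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
        candidate = anchor + 2.0 * radius * direction   # touching contact
        # accept only if the new bead does not overlap any existing bead
        dists = np.linalg.norm(np.array(centers) - candidate, axis=1)
        if np.all(dists >= 2.0 * radius - 1e-9):
            centers.append(candidate)
    return np.array(centers)
```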
A.2 Monte-Carlo steps
The Monte-Carlo steps are designed to change the shape of the aggregate, in order to reach closer agreement with the scattering data. To do this, the local aggregate topology has to be determined. The aim is to identify particles which can be removed from the aggregate without breaking it up, i.e. particles which sit on the (topological) surface of the aggregate. Moving such particles to another position in the aggregate leads to a new structure with updated topology. A Monte-Carlo step thus consists in randomly choosing one of the particles which can be removed, and repositioning it in contact with some other, randomly chosen particle, again in a random direction. As before, it is checked that there are no collisions with the other particles of the aggregate.
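The corresponding move could be sketched as follows (again illustrative, reusing numpy from the previous block; in particular, the test for whether a bead can be removed without breaking up the aggregate is done here by a simple connectivity check on the contact network, which is one possible implementation of the 'topological surface' criterion described above).

```python
def contact_graph(centers, radius=80.0, tol=1.0):
    """Adjacency sets of beads in contact (center distance close to 2R)."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return [set(np.where((d[i] < 2 * radius + tol) & (np.arange(n) != i))[0]) for i in range(n)]

def is_removable(i, adjacency, n):
    """True if the aggregate stays connected when bead i is removed."""
    remaining = set(range(n)) - {i}
    if not remaining:
        return False
    stack, seen = [next(iter(remaining))], set()
    while stack:                                        # depth-first search without bead i
        j = stack.pop()
        if j in seen:
            continue
        seen.add(j)
        stack.extend(adjacency[j] - {i} - seen)
    return seen == remaining

def monte_carlo_move(centers, radius=80.0, rng=None):
    """Pick a removable bead and re-attach it at a random new contact position."""
    rng = np.random.default_rng(rng)
    n = len(centers)
    adj = contact_graph(centers, radius)
    movable = [i for i in range(n) if is_removable(i, adj, n)]
    i = movable[rng.integers(len(movable))]
    others = np.delete(centers, i, axis=0)
    for _ in range(100):                                # retry until a collision-free spot is found
        anchor = others[rng.integers(len(others))]
        u = rng.normal(size=3); u /= np.linalg.norm(u)
        new = anchor + 2 * radius * u
        if np.all(np.linalg.norm(others - new, axis=1) >= 2 * radius - 1e-9):
            trial = centers.copy(); trial[i] = new
            return trial
    return centers                                      # no valid move found
```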
A.3 Fit to experimental intensity
Each Monte-Carlo step is evaluated by the calculation of the orientationally averaged aggregate form factor P agg (q) , which is multiplied by S inter (q), cf. eq. ( 1), and compared to the experimental intensity I(q). The comparison is done in terms of χ 2 :
\chi^2 = \frac{1}{N} \sum_i \frac{\left[ I_{RMC}(q_i) - I(q_i) \right]^2}{\sigma^2}    (A.1)
where the difference between the RMC prediction and the experimental intensity is summed over the N q-values. The statistical error σ was kept fixed in all calculations. In our algorithm, the move is accepted if it improves the agreement between the theoretical and experimental curves, or if the increase in χ 2 is moderate, in order to allow for some fluctuations. This is implemented by a Boltzmann criterion on ∆χ 2 :

exp(-∆χ 2 / B) > random number in the interval [0,1]    (A.2)
In the present implementation, B has been fixed to at most 1% of the plateau value of χ 2 . This plateau-value was found to be essentially independent of the choice of B. Given the quality of the fits, a simulated annealing approach was therefore not necessary.
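Putting the pieces together, the evaluation and acceptance steps might look as follows. This is a sketch reusing numpy and the monte_carlo_move helper from the previous blocks; the orientational average is done here with the Debye formula for identical spheres, scale stands for the contrast and concentration prefactors of eq. (1) (not reproduced here), and B is tied to the starting χ2 for simplicity, whereas the text fixes it at no more than 1% of the plateau value.

```python
def sphere_amplitude(q, R):
    x = q * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def aggregate_form_factor(q, centers, radius=80.0):
    """Orientationally averaged P_agg(q) of identical beads (Debye formula), unnormalized."""
    rij = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    iu = np.triu_indices(len(centers), k=1)
    qr = np.outer(q, rij[iu])                                 # shape (Nq, Npairs)
    debye = len(centers) + 2.0 * np.sum(np.sinc(qr / np.pi), axis=1)
    return sphere_amplitude(q, radius)**2 * debye

def chi2(I_model, I_exp, sigma):
    return np.mean(((I_model - I_exp) / sigma)**2)            # cf. eq. (A.1)

def rmc_fit(q, I_exp, S_inter, centers, sigma, n_steps=10000, B=None, scale=1.0, rng=None):
    """Reverse Monte Carlo loop with the Boltzmann acceptance criterion (A.2)."""
    rng = np.random.default_rng(rng)
    best = chi2(scale * aggregate_form_factor(q, centers) * S_inter, I_exp, sigma)
    B = B if B is not None else 0.01 * best                   # simplification, see lead-in
    for _ in range(n_steps):
        trial = monte_carlo_move(centers, rng=rng)
        c2 = chi2(scale * aggregate_form_factor(q, trial) * S_inter, I_exp, sigma)
        if c2 < best or np.exp(-(c2 - best) / B) > rng.random():
            centers, best = trial, c2
    return centers, best
```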
Figure captions
Figure 1: Structure of silica-latex nanocomposite (Φ = 5%, pH 7, B30) as seen by SANS. The experimental intensity is represented in log scale, and in linear scale in the upper inset. In the lower inset, the two structure factors S inter and S intra are shown. Such a decomposition is the result of our data analysis as in the text.

Figure 2: Structure factors (for Φ = 5%, pH 7, B30) obtained with different Debye lengths and charges, but identical compressibility: λ D = 436 Å (50%), Q = 64.5; λ D = 741 Å (85%), Q = 40; λ D = 1090 Å (125%), Q = 27. In parentheses, the Debye lengths as a fraction of the inter-aggregate surface distance (872 Å). In the inset, a zoom on the artefact in S intra observed at 50%, but not at 85%, is shown.

Figure 3: Schematic drawing illustrating the Reverse Monte Carlo algorithm applied to the generation of aggregates. An internal filler particle like the black bead can not be removed without destroying the aggregate.

Figure 5: Evolution of χ 2 with the number of Monte Carlo tries per bead for three different initial conditions. In the inset, a long run with 300 tries per bead.

Figure 6: Graphical representations of aggregate structures. Two initial configurations are shown on the left. The structures on the right are snapshots after 300 (top) and 30 (bottom) tries per bead, each starting from the initial configurations on the left.

Figure 7: Structure of silica-latex nanocomposite (Φ = 5%, pH 7, B30) compared to the RMC model prediction (N agg = 93, solid line). In the lower inset the aggregate form factor is compared to the RMC result. In the upper inset, the RMC results (P agg ) for higher aggregation numbers (N agg = 120 and 150, solid lines) are compared to the nominal one (N agg = 93, symbols).

Figure 8: SANS intensities of samples (B30, pH 5) with silica volume fractions of 5% and 10% (symbols). The solid lines are the RMC results. For illustration, the intensity of the RMC algorithm calculated from the initial aggregate configuration is also shown (10%). In the inset, aggregate form factors P agg are compared.

Figure 9: Structure of silica-latex nanocomposites (symbols, Φ = 3% - 15%, pH 5, B40) compared to the RMC model predictions (see text for details).

Figure 10: Snapshots of aggregate structures at different silica volume fractions as calculated by RMC (pH 5, B40 series).
Tables

Table 1: Parameters used for a successful decomposition in S inter and an artefact-free P agg , for series B30, pH 5. The Debye length is given as a multiple of the surface-to-surface distance between neighboring aggregates.

Φ      Debye length factor   Charge   N agg
5%     60%                   61       430
10%    175%                  52       309

Table 2: Parameters used for a successful decomposition in S inter and an artefact-free P agg , for series B40, pH 5. The Debye length is given as a multiple of the surface-to-surface distance between neighboring aggregates.

Φ      Debye length factor   Charge   N agg
3%     120%                  52       120-200
6%     150%                  58       168
9%     150%                  78       196
12%    250%                  63       238
15%    275%                  55       292

Table 3: Series B40, pH 5. Fractal dimension d and radius of gyration R g from a one-level Beaucage fit, compared to R g determined by RMC and to the inter-aggregate distance D from S inter . The last column recalls the mechanical reinforcement factor of these samples.

Φ      d      R g (Å) fractal   R g (Å) RMC   D (Å)   E/E latex
3%     1.6    3470              2830          2400    2.8
6%     2.0    2640              1690          2000    6.4
9%     2.2    2290              2090          1780    23.2
12%    2.3    2150              1870          1750    29.6
15%    2.4    2550              2680          1750    42.5
Acknowledgements: Work conducted within the scientific program of the European Network of Excellence Softcomp: 'Soft Matter Composites: an approach to nanoscale functional materials', supported by the European Commission. Silica and latex stock solutions were a gift from Akzo Nobel and Rhodia. Help by Bruno Demé (ILL, Grenoble) as local contact on D11 and beam time by ILL are gratefully acknowledged, as well as support by the instrument responsible Peter Lindner. Thanks also to Rudolf Klein (Konstanz) for fruitful discussions on structure factors.
Franz Dietrich
Probabilistic opinion pooling generalized Part one: General agendas
Keywords: Probabilistic opinion pooling, judgment aggregation, subjective probability, probabilistic preferences, vague/fuzzy preferences, agenda characterizations, a unified perspective on aggregation
How can several individuals' probability assignments to some events be aggregated into a collective probability assignment? Classic results on this problem assume that the set of relevant events -the agenda -is a σ-algebra and is thus closed under disjunction (union) and conjunction (intersection). We drop this demanding assumption and explore probabilistic opinion pooling on general agendas. One might be interested in the probability of rain and that of an interest-rate increase, but not in the probability of rain or an interest-rate increase. We characterize linear pooling and neutral pooling for general agendas, with classic results as special cases for agendas that are σ-algebras. As an illustrative application, we also consider probabilistic preference aggregation. Finally, we unify our results with existing results on binary judgment aggregation and Arrovian preference aggregation. We show that the same kinds of axioms (independence and consensus preservation) have radically different implications for different aggregation problems: linearity for probability aggregation and dictatorship for binary judgment or preference aggregation.
Introduction
This paper addresses the problem of probabilistic opinion pooling. Suppose several individuals (e.g., decision makers or experts) each assign probabilities to some events. How can these individual probability assignments be aggregated into a collective probability assignment, while preserving probabilistic coherence? Although this problem has been extensively studied in statistics, economics, and philosophy, one standard assumption is seldom questioned: the set of events to which probabilities are assigned -the agenda -is a -algebra: it is closed under negation (complementation) and countable disjunction (union) of events. In practice, however, decision makers or expert panels may not be interested in such a rich set of events. They may be interested, for example, in the probability of a blizzard and the probability of an interest-rate increase, but not in the probability of a blizzard or an interest-rate increase. Of course, the assumption that the agenda is a -algebra is convenient: probability functions are de…ned onalgebras, and thus one can view probabilistic opinion pooling as the aggregation of probability functions. But convenience is no ultimate justi…cation. Real-world expert committees typically do not assign probabilities to all events in a -algebra. Instead, they focus on a limited set of relevant events, which need not contain all disjunctions of its elements, let alone all disjunctions of countably in…nite length.
There are two reasons why a disjunction of relevant events, or another logical combination, may not be relevant. Either we are not interested in the probability of such 'arti…cial'composite events. Or we (or the decision makers or experts) are unable to assign subjective probabilities to them. To see why it can be di¢ cult to assign a subjective probability to a logical combination of 'basic'events -such as 'a blizzard or an interest-rate increase'-note that it is not enough to assign probabilities to the underlying basic events: various probabilistic dependencies also a¤ect the probability of the composite event, and these may be the result of complex causal interconnections (such as the causal e¤ects between basic events and their possible common causes).
We investigate probabilistic opinion pooling for general agendas, dropping the assumption of a -algebra. Thus any set of events that is closed under negation (complementation) can qualify as an agenda. The general notion of an agenda is imported from the theory of binary judgment aggregation (e.g., List andPettit 2002, 2004;Pauly and van Hees 2006;[START_REF] Dietrich | Judgment Aggregation: (Im)Possibility Theorems[END_REF]Dietrich andList 2007a, 2013;[START_REF] Nehring | Abstract Arrovian Aggregation[END_REF][START_REF] Dokow | Aggregation of binary evaluations[END_REF][START_REF] Dietrich | The premise-based approach to judgment aggregation[END_REF]. We impose two axiomatic requirements on probabilistic opinion pooling:
(i) the familiar 'independence'requirement, according to which the collectively assigned probability for each event should depend only on the probabilities that the individuals assign to that event; (ii) the requirement that certain unanimous individual judgments should be preserved; we consider stronger and weaker variants of this requirement.
We prove two main results:
For a large class of agendas -with -algebras as special cases -any opinion pooling function satisfying (i) and (ii) is linear: the collective probability of each event in the agenda is a weighted linear average of the individuals' probabilities of that event, where the weights are the same for all events. For an even larger class of agendas, any opinion pooling function satisfying (i) and (ii) is neutral: the collective probability of each event in the agenda is some (possibly non-linear) function of the individuals'probabilities of that event, where the function is the same for all events.
We state three versions of each result, which di¤er in the nature of the unanimitypreservation requirement and in the class of agendas to which they apply. Our results generalize a classic characterization of linear pooling in the special case where the agenda is -algebra [START_REF] Aczél | Lectures on Functional Equations and their Applications[END_REF][START_REF] Aczél | A characterization of weighted arithmetic means[END_REF][START_REF] Mcconway | Marginalization and Linear Opinion Pools[END_REF]. 1 For a -algebra, every neutral pooling function is automatically linear, so that neutrality and linearity are equivalent here [START_REF] Mcconway | Marginalization and Linear Opinion Pools[END_REF][START_REF] Wagner | Allocation, Lehrer Models, and the Consensus of Probabilities[END_REF]. 2As we will see, this fact does not carry over to general agendas: many agendas permit neutral but non-linear opinion pooling functions.
Some of our results apply even to agendas containing only logically independent events, such as 'a blizzard' and 'an interest-rate increase' (and their negations), but no disjunctions or conjunctions of these events. Such agendas are relevant in practical applications where the events in question are only probabilistically dependent (correlated), but not logically dependent. If the agenda is a -algebra, by contrast, it is replete with logical interconnections. By focusing on -algebras alone, the standard results on probabilistic opinion pooling have therefore excluded many realistic applications.
We also present a new illustrative application of probabilistic opinion pooling, namely to probabilistic preference aggregation. Here each individual assigns subjective probabilities to events of the form 'x is preferable than y'(or 'x is better than y'), where x and y range over a given set of alternatives. These probability 1 Speci…cally, if the agenda is a -algebra (with more than four events), linear pooling functions are the only pooling functions which satisfy independence and preserve unanimous probabilistic judgments [START_REF] Aczél | Lectures on Functional Equations and their Applications[END_REF]Wagner 1980, McConway 1981). Linearity and neutrality (the latter sometimes under the names strong label neutrality or strong setwise function property) are among the most widely studied properties of opinion pooling functions. Linear pooling goes back to Stone (1961) or even Laplace, and neutral pooling to [START_REF] Mcconway | Marginalization and Linear Opinion Pools[END_REF] and [START_REF] Wagner | Allocation, Lehrer Models, and the Consensus of Probabilities[END_REF]. For extensions of (or alternatives to) the classic characterization of linear pooling, see [START_REF] Wagner | Allocation, Lehrer Models, and the Consensus of Probabilities[END_REF][START_REF] Wagner | On the Formal Properties of Weighted Averaging as a Method of Aggregation[END_REF], [START_REF] Aczél | Aggregation Theorems for Allocation Problems[END_REF], [START_REF] Genest | Pooling operators with the marginalization property[END_REF], [START_REF] Mongin | Consistent Bayesian aggregation[END_REF][START_REF] Chambers | An ordinal characterization of the linear opinion pool[END_REF]. All these works retain the assumption that the agenda is a -algebra. [START_REF] Genest | Combining Probability Distributions: A Critique and Annotated Bibliography[END_REF] and [START_REF] Clemen | Combining Probability Distributions from Experts in Risk Analysis[END_REF] provide surveys of the classic literature. For opinion pooling under asymmetric information, see [START_REF] Dietrich | Bayesian group belief[END_REF]. For the aggregation of qualitative rather than quantitative probabilities, see [START_REF] Weymark | Aggregating Ordinal Probabilities on Finite Sets[END_REF]. For a computational, non-axiomatic approach to the aggregation of partial probability assignments, where individuals do not assign probabilities to all events in the -algebra, see [START_REF] Osherson | Aggregating disparate estimates of chance[END_REF].
assignments may be interpreted as beliefs about which preferences are the 'correct' ones (e.g., which correctly capture objective quality comparisons between the alternatives). Alternatively, they may be interpreted as vague or fuzzy preferences. We then seek to arrive at corresponding collective probability assignments.
Each of our linearity or neutrality results (with one exception) is logically tight: the linearity or neutrality conclusion follows if and only if the agenda falls into a relevant class. In other words, we characterize the agendas for which our axiomatic requirements lead to linear or neutral aggregation. We thereby adopt the state-of-the-art approach in binary judgment-aggregation theory, which is to characterize the agendas leading to certain possibilities or impossibilities of aggregation. This approach was introduced by Nehring and Puppe (2002) in related work on strategy-proof social choice and subsequently applied throughout binary judgment-aggregation theory. One of our contributions is to show how it can be applied in the area of probabilistic opinion pooling.
We conclude by comparing our results with their analogues in binary judgmentaggregation theory and in Arrovian preference aggregation theory. Interestingly, the conditions leading to linear pooling in probability aggregation correspond exactly to the conditions leading to a dictatorship of one individual in both binary judgment aggregation and Arrovian judgment aggregation. This yields a new uni…ed perspective on several at …rst sight disparate aggregation problems.
The framework
We consider a group of n 2 individuals, labelled i = 1; :::; n, who have to assign collective probabilities to some events.
The agenda. Let Ω be a non-empty set of possible worlds (or states). An event is a subset A of Ω; its complement ('negation') is denoted A^c := Ω\A. The agenda is the set of events to which probabilities are assigned. Traditionally, the agenda has been assumed to be a σ-algebra (i.e., closed under complementation and countable union, and thereby also under countable intersection). Here, we drop that assumption. As already noted, we may exclude some events from the agenda, either because they are of no interest, or because no probability assignments are available for them. For example, the agenda may contain the events that global warming will continue, that interest rates will remain low, and that the UK will remain in the European Union, but not the disjunction of these events. Formally, we define an agenda as a non-empty set X of events which is closed under complementation, i.e., A ∈ X ⇒ A^c ∈ X. Examples are X = {A, A^c} or X = {A, A^c, B, B^c}, where A and B may or may not be logically related.
An example of an agenda without conjunctions or disjunctions. Suppose each possible world is a vector of three binary characteristics. The …rst takes the value 1 if atmospheric CO 2 is above some threshold, and 0 otherwise. The second takes the value 1 if there is a mechanism to the e¤ect that if atmospheric CO 2 is above that threshold, then Arctic summers are ice-free, and 0 otherwise. The third takes the value 1 if Arctic summers are ice-free, and 0 otherwise. Thus the set of possible worlds is the set of all triples of 0s and 1s, excluding the inconsistent triple in which the …rst and second characteristics are 1 and the third is 0, i.e., = f0; 1g3 nf(1; 1; 0)g. We now de…ne an agenda X consisting of A; A ! B; B, and their complements, where A is the event of a positive …rst characteristic, A ! B the event of a positive second characteristic, and B the event of a positive third characteristic. (We use the sentential notation 'A ! B'for better readability; formally, each of A, B, and A ! B are subsets of . 3 ) Although there are some logical connections between these events (in particular, A and A ! B are inconsistent with B c ), the set X contains no conjunctions or disjunctions.
Probabilistic opinions. We begin with the notion of a probability function.
The classical focus on agendas that are -algebras is motivated by the fact that such functions are de…ned on -algebras. Formally, a probability function on aalgebra is a function P : ! [0; 1] such that P ( ) = 1 and P is -additive (i.e., P (A 1 [A 2 [:::) = P (A 1 )+P (A 2 )+::: for every sequence of pairwise disjoint events A 1 ; A 2 ; ::: 2 ). In the context of an arbitrary agenda X, we speak of 'opinion functions'rather than 'probability functions'. Formally, an opinion function for an agenda X is a function P : X ! [0; 1] which is probabilistically coherent, i.e., extendable to a probability function on the -algebra generated by X. Thisalgebra is denoted (X) and de…ned as the smallest -algebra that includes X. It can be constructed by closing X under countable unions and complements. 4 In our expert-committee example, we have (X) = 2 , and an opinion function cannot assign probability 1 to all of A, A ! B, and B c . (This would not be extendable to a well-de…ned probability function on 2 , given that A \ (A ! B) \ B c = ?.) We write P X to denote the set of all opinion functions for the agenda X. If X is a -algebra, P X is the set of all probability functions on it.
Opinion pooling. Given the agenda X, a combination of opinion functions across the n individuals, (P_1, ..., P_n), is called a profile (of opinion functions). An (opinion) pooling function is a function F : P_X^n → P_X, which assigns to each profile (P_1, ..., P_n) a collective opinion function P = F(P_1, ..., P_n), also denoted P_{P_1,...,P_n}. For instance, P_{P_1,...,P_n} could be the arithmetic average (1/n)P_1 + ... + (1/n)P_n.
Linearity and neutrality. A pooling function is linear if there exist real-valued weights w_1, ..., w_n ≥ 0 with w_1 + ... + w_n = 1 such that, for every profile (P_1, ..., P_n) ∈ P_X^n,

P_{P_1,...,P_n}(A) = Σ_{i=1}^n w_i P_i(A) for all A ∈ X.
If w i = 1 for some 'expert' i, we obtain an expert rule given by P P 1 ;:::;Pn = P i . More generally, a pooling function is neutral if there exists some function D : [0; 1] n ! [0; 1] such that, for every pro…le (P 1 ; :::; P n ) 2 P n X , P P 1 ;:::;Pn (A) = D(P 1 (A); :::; P n (A)) for all A 2 X:
(1)
We call D the local pooling criterion. Since it does not depend on the event A, all events are treated equally ('neutrality'). Linearity is the special case in which D is a weighted linear averaging criterion of the form D(x) = Σ_{i=1}^n w_i x_i for all x ∈ [0,1]^n. Note that, while every combination of weights w_1, ..., w_n ≥ 0 with sum-total 1 defines a proper linear pooling function (since linear averaging preserves probabilistic coherence), a given non-linear function D : [0,1]^n → [0,1] might not define a proper pooling function. Formula (1) might not yield a well-defined -i.e., probabilistically coherent -opinion function. We will show that whether there can be neutral but non-linear pooling functions depends on the agenda in question. If the agenda is a σ-algebra, the answer is known to be negative (assuming |X| > 4). However, we will also identify agendas for which the answer is positive. Some logical terminology. An event A is contingent if it is neither the empty set ∅ (impossible) nor the universal set Ω (necessary). A set S of events is consistent if its intersection ∩_{A∈S} A is non-empty, and inconsistent otherwise. A set S of events entails another event B if the intersection of S is included in B (i.e., ∩_{A∈S} A ⊆ B).
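To make the notion of event-wise linear pooling concrete, here is a small illustrative sketch (not taken from the paper): it pools two opinion functions over a tiny agenda with a weighted linear rule; the agenda labels, probability values and weights are invented for the example.

```python
from typing import Dict, Sequence

Event = str                      # events named by labels, e.g. "A" and its complement "not-A"
Opinion = Dict[Event, float]     # an opinion function: event -> probability

def linear_pool(opinions: Sequence[Opinion], weights: Sequence[float]) -> Opinion:
    """Event-wise weighted average: the same weights are used for every event,
    so the rule is independent, neutral and, in particular, linear."""
    assert abs(sum(weights) - 1.0) < 1e-12 and all(w >= 0 for w in weights)
    events = opinions[0].keys()
    return {A: sum(w * P[A] for w, P in zip(weights, opinions)) for A in events}

# two individuals on the agenda {A, not-A}, pooled with weights (0.7, 0.3)
P1 = {"A": 0.9, "not-A": 0.1}
P2 = {"A": 0.4, "not-A": 0.6}
print(linear_pool([P1, P2], [0.7, 0.3]))   # approx. {'A': 0.75, 'not-A': 0.25}
```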
Two kinds of applications. It is useful to distinguish between two kinds of applications of probabilistic opinion pooling. We may be interested in either of the following:
(a) the probabilities of certain propositions expressed in natural language, such as 'it will rain tomorrow'or 'the new legislation will be repealed';
(b) the distribution of some real-valued (or vector-valued) random variable, such as the number of insurance claims over a given period, or tomorrow's price of a given share, or the weight of a randomly picked potato from some farm.
Arguably, probabilistic opinion pooling on general agendas is more relevant to applications of type (a) than to applications of type (b). An application of type (a) typically gives rise to an agenda expressible in natural language which does not constitute a -algebra. It is then implausible to replace X with the -algebra (X), many elements of which represent unduly complex combinations of other events. Further, even when (X) is …nite, it may be enormous. If X contains at least k logically independent events, then (X) contains at least 2 2 k events, so its size grows double-exponentially in k.5 This suggests that, unless k is small, (X) may be too large to serve as an agenda in practice. By contrast, an application of type (b) plausibly gives rise to an agenda that is a -algebra. Here, the decision makers may need a full probability distribution over the -algebra, and they may also be able to specify such a distribution. For instance, a market analyst estimating next month's distribution of Apple's share price might decide to specify a log-normal distribution. This, in turn, requires the speci…cation of only two parameters: the mean and the variance of the exponential of the share price. We discuss opinion pooling problems of type (b) in a companion paper [START_REF] Dietrich | Probabilistic opinion pooling generalized -Part two: The premise-based approach[END_REF], where they are one of our principal applications.
Axiomatic requirements on opinion pooling
We now introduce some requirements on opinion pooling functions.
The independence requirement
Our …rst requirement, familiar from the literature, says that the collective probability of each event in the agenda should depend only on the individual probabilities of that event. This requirement is sometimes also called the weak setwise function property.
Independence. For each event A ∈ X, there exists a function D_A : [0,1]^n → [0,1] (the local pooling criterion for A) such that, for all P_1, ..., P_n ∈ P_X, P_{P_1,...,P_n}(A) = D_A(P_1(A), ..., P_n(A)).
One justi…cation for independence is the Condorcetian idea that the collective view on any issue should depend only on individual views on that issue. This re ‡ects a local, rather than holistic, understanding of aggregation. (On a holistic understanding, the collective view on an issue may be in ‡uenced by individual views on other issues.) Independence, understood in this way, becomes less compelling if the agenda contains 'arti…cial'events, such as conjunctions of intuitively unrelated events, as in the case of a -algebra. It would be implausible, for instance, to disregard the individual probabilities assigned to 'a blizzard' and to 'an interest-rate increase'when determining the collective probability of the disjunction of these events. Here, however, we focus on general agendas, where the Condorcetian justi…cation for independence is more plausible.
There are also two pragmatic justi…cations for independence; these apply even when the agenda is a -algebra. First, aggregating probabilities issue-by-issue is informationally and computationally less demanding than a holistic approach and thus easier to implement in practice. Second, independence prevents certain types of agenda manipulation -the attempt by an agenda setter to in ‡uence the collective probability assigned to some events by adding other events to, or removing them from, the agenda.6 Nonetheless, independence should not be accepted uncritically, since it is vulnerable to a number of well-known objections.7
The consensus-preservation requirement
Our next requirement says that if all individuals assign probability 1 (certainty) to an event in the agenda, then its collective probability should also be 1.
Consensus preservation. For all A 2 X and all P 1 ; :::; P n 2 P X , if, for all i, P i (A) = 1, then P P 1 ;:::;Pn (A) = 1.
Like independence, this requirement is familiar from the literature, where it is sometimes expressed as a zero-probability preservation requirement. In the case of general agendas, we can also formulate several strengthened variants of the requirement, which extend it to other forms of consensus. Although these variants are not as compelling as their original precursor, they are still defensible in some cases. Moreover, when the agenda is a -algebra, they all collapse back into consensus preservation in its original form.
To introduce the di¤erent extensions of consensus preservation, we begin by drawing a distinction between 'explicitly revealed', 'implicitly revealed', and 'unrevealed'beliefs: Individual i's explicitly revealed beliefs are the probabilities assigned to events in the agenda X by the opinion function P i . Individual i's implicitly revealed beliefs are the probabilities assigned to any events in (X)nX by every probability function on (X) extending the opinion function P i ; we call such a probability function an extension of P i and use the notation P i . These probabilities are 'implied'by the opinion function P i . For instance, if P i assigns probability 1 to an event A in the agenda X, this 'implies'an assignment of probability 1 to all events B outside the agenda that are of the form B A. Individual i's unrevealed beliefs are probabilities for events in (X)nX that cannot be deduced from the opinion function P i . These are only privately held. For instance, the opinion function P i may admit extensions which assign probability 1 to an event B but may also admit extensions which assign a lower probability. Here, individual i's belief about B is unrevealed.
Consensus preservation in its original form concerns only explicitly revealed beliefs. The …rst strengthened variant extends the requirement to implicitly revealed beliefs. Let us say that an opinion function P on X implies certainty of an event A if P (A) = 1 for every extension P of P .
Implicit consensus preservation. For all A 2 (X) and all P 1 ; :::; P n 2 P X , if, for all i, P i implies certainty of A, then P P 1 ;:::;Pn also implies certainty of A.
This ensures that whenever all individuals either explicitly or implicitly assign probability 1 to some event, this is preserved at the collective level. Arguably, this requirement is almost as plausible as consensus preservation in its original form.
The second extension concerns unrevealed beliefs. Informally, it says that a unanimous assignment of probability 1 to some event should never be overruled, even if it is unrevealed. This is operationalized as the requirement that if every individual's opinion function is consistent with the assignment of probability 1 to some event (so that we cannot rule out the possibility of the individuals'privately making that assignment), then the collective opinion function should also be consistent with it. Formally, we say that an opinion function P on X is consistent with certainty of an event A if there exists some extension P of P such that P (A) = 1.
Consensus compatibility. For all A 2 (X) and all P 1 ; :::; P n 2 P X , if, for all i, P i is consistent with certainty of A, then P P 1 ;:::;Pn is also consistent with certainty of A.
The rationale for this requirement is a precautionary one: if it is possible that all individuals assign probability 1 to some event (though this may be unrevealed), the collective opinion function should not rule out certainty of A.
A third extension of consensus preservation concerns conditional beliefs. It looks more complicated than consensus compatibility, but it is less demanding. Its initial motivation is the idea that if all individuals are certain of some event in the agenda conditional on another event, then this conditional belief should be preserved collectively. For instance, if everyone is certain that there will be a famine, given a civil war, this belief should also be held collectively. Unfortunately, however, we cannot de…ne individual i's conditional probability of an event A, given another event B, simply as P i (AjB) = P i (A \ B)=P i (B) (where P i (B) 6 = 0 and P i is individual i's opinion function). This is because, even when A and B are in X, the event A \ B may be outside X and thus outside the domain of P i . So, we cannot know whether the individual is certain of A given B. But we can ask whether he or she could be certain of A given B, i.e., whether P i (AjB) = 1 for some extension P of P .
This motivates the requirement that if each individual could be certain of A given B, then the collective opinion function should also be consistent with this 'conditional certainty'. Again, this can be interpreted as requiring the preservation of certain unrevealed beliefs. A unanimous assignment of conditional probability 1 to one event, given another, should not be overruled, even if it is unrevealed.
We capture this in the following way. Suppose there is a …nite set of pairs of events in X -call them (A; B), (A 0 ; B 0 ), (A 00 ; B 00 ), and so on -such that each individual could be simultaneously certain of A given B, of A 0 given B 0 , of A 00 given B 00 , and so on. Then the collective opinion function should also be consistent with conditional certainty of A given B, A 0 given B 0 , and so on. Formally, for any …nite set S of pairs (A; B) of events in X, we say that an opinion function P on X is consistent with conditional certainty of all (A; B) in S if there exists some extension P of P such that P (AjB) = 1 for all (A; B) in S for which P (B) 6 = 0.
Conditional consensus compatibility. For all …nite sets S of pairs of events in X and all P 1 ; :::; P n 2 P X , if, for all i, P i is consistent with conditional certainty of all (A; B) in S, then P P 1 ;:::;Pn is also consistent with conditional certainty of all (A; B) in S.
The following proposition summarizes the logical relationships between the di¤erent consensus-preservation requirements; a proof is given in the Appendix.
Proposition 1 (a) Consensus preservation is implied by each of (i) implicit consensus preservation, (ii) consensus compatibility, and (iii) conditional consensus compatibility, and is equivalent to each of (i), (ii), and (iii) if the agenda X is a -algebra. (b) Consensus compatibility implies conditional consensus compatibility.
Each of our characterization results below uses consensus preservation in either its original form or one of the strengthened forms. Implicit consensus preservation does not appear in any of our results; we have included it here for the sake of conceptual completeness.8 4 When is opinion pooling neutral?
We now show that, for many agendas, the neutral pooling functions are the only pooling functions satisfying independence and consensus preservation in either its original form or one of the strengthened forms. The stronger the consensuspreservation requirement, the larger the class of agendas for which our characterization of neutral pooling holds. For the moment, we set aside the question of whether independence and consensus preservation imply linearity as well as neutrality; we address this question in the next section.
Three theorems
We begin with the strongest of our consensus-preservation requirements, i.e., consensus compatibility. If we impose this requirement, our characterization of neutral pooling holds for a very large class of agendas: all non-nested agendas. We call an agenda X nested if it has the form X = fA; A c : A 2 X + g for some set X + ( X) that is linearly ordered by set-inclusion, and non-nested otherwise. For example, binary agendas of the form X = fA; A c g are nested: take X + := fAg, which is trivially linearly ordered by set-inclusion. Also, the agenda X = f( 1; t]; (t; 1) : t 2 Rg (where the set of possible worlds is = R) is nested: take X + := f( 1; t] : t 2 Rg, which is linearly ordered by set-inclusion.
By contrast, any agenda consisting of multiple logically independent pairs
A; A c is non-nested, i.e., X is non-nested if X = fA k ; A c k : k 2 Kg with jKj 2 such that every subset S X containing precisely one member of each pair fA k ; A c k g (with k 2 K) is consistent.
As mentioned in the introduction, such agendas are of practical importance because many decision problems involve events that exhibit only probabilistic dependencies (correlations), but no logical ones. Another example of a non-nested agenda is the one in the expert-committee example above, containing A, A ! B, B, and their complements.
Theorem 1 (a) For any non-nested agenda X, every pooling function F : P n X ! P X satisfying independence and consensus compatibility is neutral.
(b) For any nested agenda X ( 6 = f?; g), there exists a non-neutral pooling function F : P n X ! P X satisfying independence and consensus compatibility.
Part (b) shows that the agenda condition used in part (a) -non-nestedness -is tight: whenever the agenda is nested, non-neutral pooling functions become possible. However, these pooling functions are non-neutral only in a limited sense: although the pooling criterion D A need not be the same for all events A 2 X, it must still be the same for all A 2 X + , and the same for all A 2 XnX + (with X + as de…ned above), so that pooling is 'neutral within X + 'and 'neutral within XnX + '. This is clear from the proof. 9What happens if we weaken the requirement of consensus compatibility to conditional consensus compatibility? Both parts of Theorem 1 continue to hold, though part (a) becomes logically stronger, and part (b) logically weaker. Let us state the modi…ed theorem explicitly:
Theorem 2 (a) For any non-nested agenda X, every pooling function F : P n X ! P X satisfying independence and conditional consensus compatibility is neutral. (b) For any nested agenda X ( 6 = f?; g), there exists a non-neutral pooling function F : P n X ! P X satisfying independence and conditional consensus compatibility.
The situation changes once we weaken the consensus requirement further, namely to consensus preservation simpliciter. The class of agendas for which our characterization of neutrality holds shrinks signi…cantly, namely to the class of path-connected agendas. Path-connectedness is an important condition in judgment-aggregation theory, where it was introduced by Nehring and Puppe (2010) (under the name 'total blockedness') and has been used, for example, to generalize Arrow's theorem (Dietrich andList 2007a, Dokow and[START_REF] Dokow | Aggregation of binary evaluations[END_REF].
To define path-connectedness, we require one preliminary definition. Given an agenda X, we say that an event A ∈ X conditionally entails another event B ∈ X, written A ⊢* B, if there exists a subset Y ⊆ X (possibly empty, but not uncountably infinite) such that {A} ∪ Y entails B, where, for non-triviality, Y ∪ {A} and Y ∪ {B^c} are each consistent. For instance, if ∅ ≠ A ⊆ B ≠ Ω, then A ⊢* B (take Y = ∅; in fact, this is even an unconditional entailment). Also, for the agenda of our expert committee, X = {A, A^c, A → B, (A → B)^c, B, B^c}, we have A ⊢* B (take Y = {A → B}).
We call an agenda X path-connected if any two events A, B ∈ X \ {∅, Ω} can be connected by a path of conditional entailments, i.e., there exist events A_1, ..., A_k ∈ X (k ≥ 1) such that A = A_1 ⊢* A_2 ⊢* ... ⊢* A_k = B.
An example of a path-connected agenda is X := {A, A^c : A ⊆ R is a bounded interval}, where the underlying set of worlds is Ω = R. For instance, there is a path of conditional entailments from [0,1] ∈ X to [2,3] ∈ X given by [0,1] ⊢* [0,3] ⊢* [2,3]. To establish [0,1] ⊢* [0,3], it suffices to conditionalize on the empty set of events Y = ∅ (i.e., [0,1] even unconditionally entails [0,3]). To establish [0,3] ⊢* [2,3], one may conditionalize on Y = {[2,4]}.
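To make the definitions of conditional entailment and path-connectedness concrete, here is a brute-force Python sketch for finite agendas (an illustrative assumption, not part of the original text); it searches over all candidate sets Y and then checks mutual reachability in the resulting entailment graph.

    from itertools import combinations

    def consistent(events):
        # A collection of events (frozensets of worlds) is consistent iff it has a common world.
        inter = None
        for E in events:
            inter = E if inter is None else inter & E
        return inter is None or len(inter) > 0

    def conditionally_entails(A, B, agenda, omega):
        # A conditionally entails B iff {A} u Y entails B for some Y, with Y u {A} and Y u {B^c} consistent.
        Bc = omega - B
        for r in range(len(agenda) + 1):
            for Y in combinations(agenda, r):
                Y = list(Y)
                if consistent(Y + [A]) and consistent(Y + [Bc]) and not consistent(Y + [A, Bc]):
                    return True
        return False

    def path_connected(agenda, omega):
        contingent = [E for E in agenda if E and E != omega]
        reach = {A: {A} for A in contingent}
        for A in contingent:
            for B in contingent:
                if A != B and conditionally_entails(A, B, agenda, omega):
                    reach[A].add(B)
        changed = True
        while changed:  # transitive closure; fine for small agendas
            changed = False
            for A in contingent:
                closure = set().union(*(reach[C] for C in reach[A]))
                if not closure <= reach[A]:
                    reach[A] |= closure
                    changed = True
        return all(B in reach[A] for A in contingent for B in contingent)

The search is exponential in the agenda size and is meant only to make the definitions tangible on toy examples.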
Many agendas are not path-connected, including all nested agendas (≠ {∅, Ω}) and the agenda in our expert-committee example. The following result holds.
Theorem 3 (a) For any path-connected agenda X, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is neutral.
(b) For any non-path-connected agenda X (finite and distinct from {∅, Ω}), there exists a non-neutral pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Proof sketches
We now outline the proofs of Theorems 1 to 3. (Details are given in the Appendix.) We begin with part (a) of each theorem. Theorem 1(a) follows from Theorem 2(a), since both results apply to the same agendas but Theorem 1(a) uses a stronger consensus requirement.
To prove Theorem 2(a), we define a binary relation ~ on the set of all contingent events in the agenda. Recall that two events A and B are exclusive if A ∩ B = ∅ and exhaustive if A ∪ B = Ω. For any A, B ∈ X \ {∅, Ω}, we define
A ~ B ⇔ there is a finite sequence A_1, ..., A_k ∈ X of length k ≥ 1 with A_1 = A and A_k = B such that any adjacent A_j, A_{j+1} are neither exclusive nor exhaustive.
Theorem 2(a) then follows immediately from the following two lemmas (proved in the Appendix).
Lemma 1 For any agenda X (≠ {∅, Ω}), the relation ~ is an equivalence relation on X \ {∅, Ω}, with exactly two equivalence classes if X is nested, and exactly one if X is non-nested.
Lemma 2 For any agenda X (≠ {∅, Ω}), a pooling function satisfying independence and conditional consensus compatibility is neutral on each equivalence class with respect to ~ (i.e., the local pooling criterion is the same for all events in the same equivalence class).
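Lemma 1 is easy to check by computer on small examples. The sketch below (an illustration under the same finite-agenda assumption as before, not part of the original proofs) computes the equivalence classes of ~ by treating 'neither exclusive nor exhaustive' as an edge relation and taking connected components.

    def tilde_classes(agenda, omega):
        contingent = [E for E in agenda if E and E != omega]
        def linked(A, B):
            return bool(A & B) and (A | B) != omega  # neither exclusive nor exhaustive
        classes = []
        for E in contingent:
            touching = [c for c in classes if any(linked(E, F) for F in c)]
            merged = {E}.union(*touching) if touching else {E}
            classes = [c for c in classes if c not in touching] + [merged]
        return classes

    omega = frozenset({1, 2, 3, 4})
    nested = [frozenset({1}), frozenset({2, 3, 4}), frozenset({1, 2}), frozenset({3, 4})]
    non_nested = [frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 3}), frozenset({2, 4})]
    print(len(tilde_classes(nested, omega)))      # 2, as Lemma 1 predicts for a nested agenda
    print(len(tilde_classes(non_nested, omega)))  # 1, as Lemma 1 predicts for a non-nested agenda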
The proof of Theorem 3(a) uses the following lemma (broadly analogous to a lemma in binary judgment-aggregation theory; e.g., Nehring and Puppe 2010, Dietrich and List 2007a).
Lemma 3 For any pooling function satisfying independence and consensus preservation, and all events A and B in the agenda X, if A ⊢* B then D_A ≤ D_B, where D_A and D_B are the local pooling criteria for A and B, respectively. (Here D_A ≤ D_B means that, for all (p_1, ..., p_n), D_A(p_1, ..., p_n) ≤ D_B(p_1, ..., p_n).)
To see why Theorem 3(a) follows, simply note that D_A ≤ D_B whenever there is a path of conditional entailments from A ∈ X to B ∈ X (by repeated application of the lemma); thus, D_A = D_B whenever there are paths in both directions, as is guaranteed if the agenda is path-connected and A, B ∉ {∅, Ω}.
Part (b) of each theorem can be proved by explicitly constructing a non-neutral pooling function, for an agenda of the relevant kind, which satisfies independence and the appropriate consensus-preservation requirement. In the case of Theorem 3(b), this pooling function is very complex, and hence we omit it in the main text. In the case of Theorems 1(b) and 2(b), the idea can be described informally. Recall that a nested agenda X can be partitioned into two subsets, X^+ and X \ X^+ = {A^c : A ∈ X^+}, each of which is linearly ordered by set-inclusion. The pooling function constructed has the property that (i) all events A in X^+ have the same local pooling criterion D = D_A, which can be defined, for example, as the square of a linear pooling criterion (i.e., we first apply a linear pooling criterion and then take the square), and (ii) all events in X \ X^+ have the same 'complementary' pooling criterion D*, defined as D*(x_1, ..., x_n) = 1 - D(1 - x_1, ..., 1 - x_n) for all (x_1, ..., x_n) ∈ [0,1]^n. Showing that the resulting pooling function is well-defined and satisfies all the relevant requirements involves some technicality, in part because we allow the agenda to have any cardinality.
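A minimal numerical sketch of this construction follows (two individuals, equal weights and the specific numbers are purely illustrative assumptions, not part of the proof).

    def linear(x, w):
        return sum(wi * xi for wi, xi in zip(w, x))

    def D(x, w):
        # Pooling criterion used for events in X_plus: square of a linear criterion.
        return linear(x, w) ** 2

    def D_star(x, w):
        # 'Complementary' criterion used for events in X \ X_plus.
        return 1 - D([1 - xi for xi in x], w)

    w = [0.5, 0.5]                 # illustrative weights
    x = [0.3, 0.7]                 # individual probabilities for some A in X_plus
    x_c = [1 - xi for xi in x]     # individual probabilities for A^c
    print(D(x, w))                 # 0.25
    print(D_star(x_c, w))          # 0.75, so the collective probabilities of A and A^c sum to 1

The two criteria differ, so pooling is not neutral across the whole agenda, yet complementary events still receive collective probabilities summing to one.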
When is opinion pooling linear?
As we have seen, for many agendas, only neutral pooling functions can satisfy our two requirements. But are these functions also linear? As we now show, the answer depends on the agenda. If we suitably restrict the class of agendas considered in part (a) of each of our previous theorems, we can derive linearity rather than just neutrality. Similarly, we can expand the class of agendas considered in part (b) of each theorem, and replace non-neutrality with non-linearity.
Three theorems
As in the previous section, we begin with the strongest consensus-preservation requirement, i.e., consensus compatibility. While this requirement leads to neutrality for all non-nested agendas (by Theorem 1), it leads to linearity for all non-nested agendas above a certain size.
Theorem 4 (a) For any non-nested agenda X with |X \ {Ω, ∅}| > 4, every pooling function F : P_X^n → P_X satisfying independence and consensus compatibility is linear. (b) For any other agenda X (≠ {∅, Ω}), there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and consensus compatibility.
Next, let us weaken the requirement of consensus compatibility to conditional consensus compatibility. While this requirement leads to neutrality for all non-nested agendas (by Theorem 2), it leads to linearity only for non-simple agendas. Like path-connected agendas, non-simple agendas play an important role in binary judgment-aggregation theory, where they are the agendas susceptible to the analogues of Condorcet's paradox: the possibility of inconsistent majority judgments (e.g., Dietrich and List 2007b, Nehring and Puppe 2007).
To define non-simplicity, we first require a preliminary definition. We call a set of events Y minimal inconsistent if it is inconsistent but every proper subset Y' ⊊ Y is consistent. Examples of minimal inconsistent sets are (i) {A, B, (A ∩ B)^c}, where A and B are logically independent events, and (ii) {A, A → B, B^c}, with A, B, and A → B as defined in the expert-committee example above. In each case, the three events are mutually inconsistent, but any two of them are mutually consistent. The notion of a minimal inconsistent set is useful for characterizing logical dependencies between the events in the agenda. Trivial examples of minimal inconsistent subsets of the agenda are those of the form {A, A^c} ⊆ X, where A is contingent. But many interesting agendas have more complex minimal inconsistent subsets. One may regard sup{|Y| : Y ⊆ X is minimal inconsistent} as a measure of the complexity of the logical dependencies in the agenda X. Given this idea, we call an agenda X non-simple if it has at least one minimal inconsistent subset Y ⊆ X containing more than two (but not uncountably many10) events, and simple otherwise. For instance, the agenda consisting of A, A → B, B and their complements in our expert-committee example is non-simple (take Y = {A, A → B, B^c}).
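For finite agendas, minimal inconsistent subsets, and hence non-simplicity, can be enumerated directly. The following Python sketch (illustrative only; the encoding of the expert-committee agenda as truth-value assignments is an assumption made here for concreteness) does this by brute force.

    from itertools import combinations

    def inconsistent(events):
        inter = None
        for E in events:
            inter = E if inter is None else inter & E
        return inter is not None and len(inter) == 0

    def is_non_simple(agenda):
        # Non-simple: some minimal inconsistent subset has more than two events.
        for r in range(3, len(agenda) + 1):
            for Y in combinations(agenda, r):
                if inconsistent(Y) and all(not inconsistent(Y[:i] + Y[i + 1:]) for i in range(r)):
                    return True
        return False

    # Expert-committee example: worlds are truth-value assignments to (A, A -> B, B),
    # excluding the one ruled-out assignment (1, 1, 0).
    worlds = [(a, c, b) for a in (0, 1) for c in (0, 1) for b in (0, 1) if (a, c, b) != (1, 1, 0)]
    omega = frozenset(worlds)
    ev = lambda i: frozenset(w for w in worlds if w[i] == 1)
    A, A_implies_B, B = ev(0), ev(1), ev(2)
    agenda = [A, omega - A, A_implies_B, omega - A_implies_B, B, omega - B]
    print(is_non_simple(agenda))  # True: {A, A -> B, B^c} is minimal inconsistent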
Non-simplicity lies logically between non-nestedness and path-connectedness: it implies non-nestedness, and is implied by path-connectedness (if X ≠ {Ω, ∅}).11
10 This countability addition can often be dropped because all minimal inconsistent sets Y ⊆ X are automatically finite or at least countable. This is so if X is finite or countably infinite, and also if the underlying set of worlds Ω is countable. It can further be dropped in case the events in X are represented by sentences in a language. Then, provided this language belongs to a compact logic, all minimal inconsistent sets Y ⊆ X are finite (because any inconsistent set has a finite inconsistent subset). By contrast, if X is a σ-algebra and has infinite cardinality, then it usually contains events not representing sentences, because countably infinite disjunctions cannot be formed in a language. Such agendas often have uncountable minimal inconsistent subsets. For instance, if X is the σ-algebra of Borel-measurable subsets of R, then its subset Y = {R\{x} : x ∈ R} is uncountable and minimal inconsistent. This agenda is nonetheless non-simple, since it also has many finite minimal inconsistent subsets Y with |Y| ≥ 3 (e.g., Y = {{1,2}, {1,3}, {2,3}}).
11 To give an example of a non-nested but simple agenda X, let X = {A, A^c, B, B^c}, where the events A and B are logically independent, i.e., A ∩ B, A ∩ B^c, A^c ∩ B, A^c ∩ B^c ≠ ∅. Clearly, this agenda is non-nested. It is simple since its only minimal inconsistent subsets are {A, A^c} and {B, B^c}. To give an example of a non-path-connected but non-simple agenda, let X consist of A, A → B, B and their complements, as in our example above. We have already observed that it is non-simple. To see that it is not path-connected, note, for example, that there is no path of conditional entailments from B to B^c.
To see how exactly non-simplicity strengthens non-nestedness, note the following fact (Dietrich 2013):
Fact (a) An agenda X (with |X \ {Ω, ∅}| > 4) is non-nested if and only if it has at least one subset Y with |Y| ≥ 3 such that (Y \ {A}) ∪ {A^c} is consistent for each A ∈ Y.
(b) An agenda X (with |X \ {Ω, ∅}| > 4) is non-simple if and only if it has at least one inconsistent subset Y (of countable size) with |Y| ≥ 3 such that (Y \ {A}) ∪ {A^c} is consistent for each A ∈ Y.
Note that the characterizing condition in (b) can be obtained from the one in (a) simply by replacing 'subset Y' with 'inconsistent subset Y (of countable size)'.
Theorem 5 (a) For any non-simple agenda X with |X \ {Ω, ∅}| > 4, every pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility is linear. (b) For any simple agenda X (finite and distinct from {∅, Ω}), there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility.
Finally, we turn to the least demanding consensus requirement, namely consensus preservation simpliciter. We have seen that this requirement leads to neutral pooling if the agenda is path-connected (by Theorem 3). To obtain a characterization of linear pooling, path-connectedness alone is not enough. In the following theorem, we impose an additional condition on the agenda. We call an agenda X partitional if it has a subset Y which partitions Ω into at least three non-empty events (where Y is finite or countably infinite), and non-partitional otherwise. (A subset Y of X partitions Ω if the elements of Y are individually non-empty, pairwise disjoint, and cover Ω.) For instance, X is partitional if it contains (non-empty) events A, A^c ∩ B, and A^c ∩ B^c; simply let Y = {A, A^c ∩ B, A^c ∩ B^c}.
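In the finite case, partitionality is again straightforward to test by brute force, as in this Python sketch (illustrative only; the small example agenda is an assumption made here).

    from itertools import combinations

    def is_partitional(agenda, omega):
        # Partitional: some subset of the agenda partitions omega into at least three non-empty cells.
        for r in range(3, len(agenda) + 1):
            for Y in combinations(agenda, r):
                disjoint = all(not (E & F) for E, F in combinations(Y, 2))
                if all(Y) and disjoint and frozenset().union(*Y) == omega:
                    return True
        return False

    omega = frozenset({1, 2, 3})
    cells = [frozenset({1}), frozenset({2}), frozenset({3})]
    agenda = cells + [omega - E for E in cells]
    print(is_partitional(agenda, omega))  # True: the three singletons partition omega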
Theorem 6 (a) For any path-connected and partitional agenda X, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is linear. (b) For any non-path-connected (finite) agenda X, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Part (b) shows that one of the theorem's agenda conditions, path-connectedness, is necessary for the characterization of linear pooling (which is unsurprising, as it is necessary for the characterization of neutral pooling). By contrast, the other agenda condition, partitionality, is not necessary: linearity also follows from independence and consensus preservation for some non-partitional but path-connected agendas. So, the agenda conditions of part (a) are non-minimal. We leave the task of finding minimal agenda conditions as a challenge for future research.12

Despite its non-minimality, the partitionality condition in Theorem 6 is not redundant: if it were dropped (and not replaced by something else), part (a) would cease to hold. This follows from the following (non-trivial) proposition:
Proposition 2 For some path-connected and non-partitional (finite) agenda X, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence (even neutrality) and consensus preservation.13

Readers familiar with binary judgment-aggregation theory will notice that the agenda which we construct to prove this proposition violates an important agenda condition from that area, namely even-number negatability (or non-affineness) (see Dietrich 2007, Dietrich and List 2007, Dokow and Holzman 2010). It would be intriguing if the same condition turned out to be the correct minimal substitute for partitionality in Theorem 6.
Proof sketches
We now describe how Theorems 4 to 6 can be proved. (Again, details are given in the Appendix.) We begin with part (a) of each theorem. To prove Theorem 4(a), consider a non-nested agenda X with |X \ {Ω, ∅}| > 4 and a pooling function F satisfying independence and consensus compatibility. We want to show that F is linear. Neutrality follows from Theorem 1(a). From neutrality, we can infer linearity by using two lemmas. The first contains the bulk of the work, and the second is an application of Cauchy's functional equation (similar to its application in Aczél 1966, Aczél and Wagner 1980, and McConway 1981). Let us write 0 and 1 to denote the n-tuples (0, ..., 0) and (1, ..., 1), respectively.
Lemma 4 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral and consensus-compatible pooling function for a non-nested agenda X with |X \ {Ω, ∅}| > 4, then
D(x) + D(y) + D(z) = 1 for all x, y, z ∈ [0,1]^n with x + y + z = 1.    (2)

Lemma 5 If a function D : [0,1]^n → [0,1] with D(0) = 0 satisfies (2), then it takes the linear form
D(x_1, ..., x_n) = w_1 x_1 + ... + w_n x_n for all x ∈ [0,1]^n
for some non-negative weights w_1, ..., w_n with sum 1.
The proof of Theorem 5(a) follows a similar strategy, but replaces Lemma 4 with the following lemma:
Lemma 6 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral and conditional-consensus-compatible pooling function for a non-simple agenda X, then (2) holds.
Finally, Theorem 6(a) can also be proved using a similar strategy, this time replacing Lemma 4 with the following lemma:
Lemma 7 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral and consensus-preserving pooling function for a partitional agenda X, then (2) holds.
Part (b) of each of Theorems 4 to 6 can be proved by constructing a suitable example of a non-linear pooling function. In the case of Theorem 4(b), we can re-use the non-neutral pooling function constructed to prove Theorem 1(b) as long as the agenda satisfies |X \ {Ω, ∅}| > 4; for (small) agendas with |X \ {Ω, ∅}| ≤ 4, we construct a somewhat simplistic pooling function generating collective opinion functions that only assign probabilities of 0, 1/2, or 1. The constructions for Theorems 5(b) and 6(b) are more difficult; the one for Theorem 5(b) also has the property that collective probabilities never take values other than 0, 1/2, or 1.
Classic results as special cases
It is instructive to see how our present results generalize classic results in the literature, where the agenda is a σ-algebra (especially Aczél 1966, Aczél and Wagner 1980, and McConway 1981). Note that, for a σ-algebra, all the agenda conditions we have used reduce to a simple condition on agenda size:
Lemma 8 For any agenda X (≠ {Ω, ∅}) that is closed under pairwise union or intersection (i.e., any agenda that is an algebra), the conditions of non-nestedness, non-simplicity, path-connectedness, and partitionality are equivalent, and are each satisfied if and only if |X| > 4.
Note, further, that when X is a σ-algebra, all of our consensus requirements become equivalent, as shown by Proposition 1(a). It follows that, in the special case of a σ-algebra, our six theorems reduce to two classical results:
Theorems 1 to 3 reduce to the result that all pooling functions satisfying independence and consensus preservation are neutral if |X| > 4, but not if |X| = 4;
Theorems 4 to 6 reduce to the result that all pooling functions satisfying independence and consensus preservation are linear if |X| > 4, but not if |X| = 4.
The case |X| < 4 is uninteresting because it implies that X = {∅, Ω}, given that X is a σ-algebra. In fact, we can derive these classic theorems not only for σ-algebras, but also for algebras. This is because, given Lemma 8, Theorems 3 and 6 have the following implication:
Corollary 1 For any agenda X that is closed under pairwise union or intersection (i.e., any agenda that is an algebra), (a) if |X| > 4, every pooling function F : P_X^n → P_X satisfying independence and consensus preservation is linear (and by implication neutral); (b) if |X| = 4, there exists a non-neutral (and by implication non-linear) pooling function F : P_X^n → P_X satisfying independence and consensus preservation.
Probabilistic preference aggregation
To illustrate the use of general agendas, we now present an application to probabilistic preference aggregation, a probabilistic analogue of Arrovian preference aggregation. A group seeks to rank a set K of at least two (mutually exclusive and exhaustive) alternatives in a linear order. Let Ω_K be the set of all strict orderings ≻ over K (asymmetric, transitive, and connected binary relations). Informally, K can represent any set of distinct objects, e.g., policy options, candidates, social states, or distributions of goods, and an ordering ≻ over K can have any interpretation consistent with a linear form (e.g., 'better than', 'preferable to', 'higher than', 'more competent than', 'less unequal than', etc.).
For any two distinct alternatives x and y in K, let x ≻ y denote the event that x is ranked above y; i.e., x ≻ y denotes the subset of Ω_K consisting of all those orderings ≻ in Ω_K such that x ≻ y. We define the preference agenda as the set
X_K = {x ≻ y : x, y ∈ K with x ≠ y},
which is non-empty and closed under complementation, as required for an agenda (this construction draws on Dietrich and List 2007a). In our opinion pooling problem, each individual i submits probability assignments for the events in X_K, and the group then determines corresponding collective probability assignments. An agent's opinion function P : X_K → [0,1] can be interpreted as capturing the agent's degrees of belief about which of the various pairwise comparisons x ≻ y (in X_K) are 'correct'; call this the belief interpretation. Thus, for any two distinct alternatives x and y in K, P(x ≻ y) can be interpreted as the agent's degree of belief in the event x ≻ y, i.e., the event that x is ranked above (preferable to, better than, higher than ...) y. (On a different interpretation, the vague-preference interpretation, P(x ≻ y) could represent the degree to which the agent prefers x to y, so that the present framework would capture vague preferences over alternatives as opposed to degrees of belief about how they are ranked in terms of the appropriate criterion.) A pooling function, as defined above, maps n individual such opinion functions to a single collective one.
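For a concrete picture of the preference agenda, the following Python sketch (illustrative only; it simply encodes strict linear orders as permutations) constructs Ω_K and X_K for a small set of alternatives.

    from itertools import permutations

    def preference_agenda(K):
        # Worlds are strict linear orders over K, encoded as permutations: (a, b, c) means a > b > c.
        worlds = list(permutations(K))
        def ranked_above(x, y):
            return frozenset(w for w in worlds if w.index(x) < w.index(y))
        events = {(x, y): ranked_above(x, y) for x in K for y in K if x != y}
        return worlds, events

    worlds, X_K = preference_agenda(['a', 'b', 'c'])
    print(len(worlds))   # 6 strict orderings of three alternatives
    print(len(X_K))      # 6 events of the form 'x ranked above y'
    # Closure under complementation: 'b above a' is the complement of 'a above b'.
    assert X_K[('b', 'a')] == frozenset(worlds) - X_K[('a', 'b')]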
What are the structural properties of this preference agenda?

Lemma 9 For a preference agenda X_K, the conditions of non-nestedness, non-simplicity, and path-connectedness are equivalent, and are each satisfied if and only if |K| > 2; the condition of partitionality is violated for any K.
The proof that the preference agenda is non-nested if and only if |K| > 2 is trivial. The analogous claims for non-simplicity and path-connectedness are well-established in binary judgment-aggregation theory, to which we refer the reader.14 Finally, it is easy to show that any preference agenda violates partitionality.
Since the preference agenda is non-nested, non-simple, and path-connected when |K| > 2, Theorems 1(a), 2(a), 3(a), 4(a), and 5(a) apply; but Theorem 6(a) does not, because partitionality is violated. Let us here focus on Theorem 5. This theorem has the following corollary for the preference agenda:
Corollary 2 For a preference agenda X_K, (a) if |K| > 2, every pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility is linear; (b) if |K| = 2, there exists a non-linear pooling function F : P_X^n → P_X satisfying independence and conditional consensus compatibility.
It is interesting to compare this result with Arrow's classic theorem. While Arrow's theorem yields a negative conclusion if |K| > 2 (showing that only dictatorial aggregation functions satisfy its requirements), our linearity result does not have any negative flavour. We obtain this positive result despite the fact that our axiomatic requirements are comparable to Arrow's. Independence, in our framework, is the probabilistic analogue of Arrow's independence of irrelevant alternatives: for any pair of distinct alternatives x, y in K, the collective probability for x ≻ y should depend only on individual probabilities for x ≻ y. Conditional consensus compatibility is a strengthened analogue of Arrow's weak Pareto principle (an exact analogue would be consensus preservation): it requires that, for any two pairs of distinct alternatives, x, y ∈ K and v, w ∈ K, if all individuals are certain that x ≻ y given that v ≻ w, then this agreement should be preserved at the collective level. The analogues of Arrow's universal domain and collective rationality are built into our definition of a pooling function, whose domain and co-domain are defined as the set of all (by definition coherent) opinion functions over X_K.
Thus our result points towards an alternative escape route from Arrow's impossibility theorem (though it may be practically applicable only in special contexts): if we enrich Arrow's informational framework by allowing degrees of belief over different possible linear orderings as input and output of the aggregation (or alternatively, vague preferences, understood probabilistically), then we can avoid Arrow's dictatorship conclusion. Instead, we obtain a positive characterization of linear pooling, despite imposing requirements on the pooling function that are stronger than Arrow's classic requirements (in so far as conditional consensus compatibility is stronger than the analogue of the weak Pareto principle).
On the belief interpretation, the present informational framework is meaningful so long as there exists a fact of the matter about which of the orderings in Ω_K is the 'correct' one (e.g., an objective quality ordering), so that it makes sense to form beliefs about this fact. On the vague-preference interpretation, our framework requires that vague preferences over pairs of alternatives are extendable to a coherent probability distribution over the set of 'crisp' orderings in Ω_K.
There are, of course, substantial bodies of literature on avoiding Arrow's dictatorship conclusion in richer informational frameworks and on probabilistic or vague preference aggregation. It is well known, for example, that the introduction of interpersonally comparable preferences (of an ordinal or cardinal type) is sufficient for avoiding Arrow's negative conclusion (e.g., Sen 1970). Also, different models of probabilistic or vague preference aggregation have been proposed.15 A typical assumption is that, for any pair of alternatives x, y ∈ K, each individual prefers x to y to a certain degree between 0 and 1. However, the standard constraints on vague or fuzzy preferences do not require individuals to hold probabilistically coherent opinion functions in our sense; hence the literature has tended to generate Arrow-style impossibility results. By contrast, it is illuminating to see that a possibility result on probabilistic preference aggregation can be derived as a corollary of one of our new results on probabilistic opinion pooling.
A unified perspective
Finally, we wish to compare probabilistic opinion pooling with binary judgment aggregation and Arrovian preference aggregation in its original form. Thanks to the notion of a general agenda, we can represent each of these other aggregation problems within the present framework.
To represent binary judgment aggregation, we simply need to restrict attention to binary opinion functions, i.e., opinion functions that take only the values 0 and 1.16 Binary opinion functions correspond to consistent and complete judgment sets in judgment-aggregation theory, i.e., sets of the form J ⊆ X which satisfy ∩_{A∈J} A ≠ ∅ (consistency) and contain a member of each pair A, A^c ∈ X (completeness).17 A binary opinion pooling function assigns to each profile of binary opinion functions a collective binary opinion function. Thus, binary opinion pooling functions correspond to standard judgment aggregation functions (with universal domain and consistent and complete outputs). To represent preference aggregation, we need to restrict attention both to the preference agenda, as introduced in Section 7, and to binary opinion functions, as just defined. Binary opinion functions for the preference agenda correspond to linear preference orders, as familiar from preference aggregation theory in the tradition of Arrow. Here, binary opinion pooling functions correspond to Arrovian social welfare functions.
The literature on binary judgment aggregation contains several theorems that use axiomatic requirements similar to those used here. In the binary case, however, these requirements lead to dictatorial, rather than linear, aggregation, as in Arrow's original impossibility theorem in preference-aggregation theory. In fact, Arrow-like theorems are immediate corollaries of the results on judgment aggregation, when applied to the preference agenda (e.g., Dietrich and List 2007a, List and Pettit 2004). In particular, the independence requirement reduces to Arrow's independence of irrelevant alternatives, and the unanimity-preservation requirements reduce to variants of the Pareto principle.
How can the same axiomatic requirements lead to a positive conclusion, linearity, in the probabilistic framework and to a negative one, dictatorship, in the binary case? The reason is that, in the binary case, linearity collapses into dictatorship because the only well-defined linear pooling functions are dictatorial here. Let us explain this point. Linearity of a binary opinion pooling function F is defined just as in the probabilistic framework: there exist real-valued weights w_1, ..., w_n ≥ 0 with w_1 + ... + w_n = 1 such that, for every profile (P_1, ..., P_n) of binary opinion functions, the collective truth-value of any given event A in the agenda X is the weighted arithmetic average w_1 P_1(A) + ... + w_n P_n(A). Yet, for this to define a proper binary opinion pooling function, some individual i must get a weight of 1 and all others must get a weight of 0, since otherwise the average w_1 P_1(A) + ... + w_n P_n(A) could fall strictly between 0 and 1, violating the binary restriction. In other words, linearity is equivalent to dictatorship here.18

We can obtain a unified perspective on several distinct aggregation problems by combining this paper's linearity results with the corresponding dictatorship results from the existing literature (adopting the unification strategy proposed in Dietrich and List 2010). This yields several unified characterization theorems applicable to probability aggregation, judgment aggregation, and preference aggregation. Let us state these results. The first combines Theorem 4 with a result due to Dietrich (2013); the second combines Theorem 5 with a result due to Dietrich and List (2013); and the third combines Theorem 6 with the analogue of Arrow's theorem in judgment aggregation (Dietrich and List 2007a, Dokow and Holzman 2010). In the binary case, the independence requirement and our various unanimity requirements are defined as in the probabilistic framework, but with a restriction to binary opinion functions.19

Theorem 4+ (a) For any non-nested agenda X with |X \ {Ω, ∅}| > 4, every binary or probabilistic opinion pooling function satisfying independence and consensus compatibility is linear (where linearity reduces to dictatorship in the binary case). (b) For any other agenda X (≠ {∅, Ω}), there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and consensus compatibility.
18 To be precise, for (trivial) agendas with X \ {Ω, ∅} = ∅, the weights w_i may differ from 1 and 0. But it still follows that every linear binary opinion pooling function (in fact, every binary opinion pooling function) is dictatorial here, for the trivial reason that there is only one binary opinion function and thus only one (dictatorial) binary opinion pooling function.
19 In the binary case, two of our unanimity-preservation requirements (implicit consensus preservation and consensus compatibility) are equivalent, because every binary opinion function is uniquely extendible to σ(X). Also, conditional consensus compatibility can be stated more easily in the binary case, namely in terms of a single conditional judgment rather than a finite set of conditional judgments.

Theorem 5+ (a) For any non-simple agenda X with |X \ {Ω, ∅}| > 4, every binary or probabilistic opinion pooling function satisfying independence and conditional consensus compatibility is linear (where linearity reduces to dictatorship in the binary case). (b) For any simple agenda X (finite and distinct from {∅, Ω}), there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and conditional consensus compatibility.
Theorem 6+ (a) For any path-connected and partitional agenda X, every binary or probabilistic opinion pooling function satisfying independence and consensus preservation is linear (where linearity reduces to dictatorship in the binary case). (b) For any non-path-connected (finite) agenda X, there exists a non-linear binary or probabilistic opinion pooling function satisfying independence and consensus preservation.20
By Lemma 9, Theorems 4+, 5+, and 6+ are relevant to preference aggregation insofar as the preference agenda X_K satisfies each of non-nestedness, non-simplicity, and path-connectedness if and only if |K| > 2, where K is the set of alternatives. Recall, however, that the preference agenda is never partitional, so that part (a) of Theorem 6+ never applies. By contrast, the binary result on which part (a) is based applies to the preference agenda, as it uses the weaker condition of even-number negatability (or non-affineness) instead of partitionality (and that weaker condition is satisfied by X_K if |K| > 2). As noted above, it remains an open question how far partitionality can be weakened in the probabilistic case.21
A Proofs
We now prove all our results. In light of the mathematical connection between the present results and those in our companion paper on 'premise-based' opinion pooling for σ-algebra agendas (Dietrich and List, 'Probabilistic opinion pooling generalized - Part two: The premise-based approach'), one might imagine two possible proof strategies: either one could prove our present results directly and those in the companion paper as corollaries, or vice versa. In fact, we will mix those two strategies. We will prove parts (a) of all present theorems directly (and use them in the companion paper to derive the corresponding results), while we will prove parts (b) directly in some cases and as corollaries of corresponding results from the companion paper in others. This Appendix is organised as follows. In Sections A.1 to A.5, we prove parts (a) of Theorems 2 to 6, along with related results. Theorem 1(a) requires no independent proof, as it follows from Theorem 2(a). In Section A.6, we clarify the connection between the two papers, and then prove parts (b) of all present theorems. Finally, in Section A.7, we prove Propositions 1 and 2.
A.1 Proof of Theorem 2(a)
As explained in the main text, Theorem 2(a) follows from Lemmas 1 and 2. We now prove these lemmas. To do so, we will also prove some preliminary results.
Lemma 10 Consider any agenda X.
(a) ~ defines an equivalence relation on X \ {∅, Ω}.
(b) A ~ B ⇔ A^c ~ B^c for all events A, B ∈ X \ {∅, Ω}.
(c) A ⊆ B ⇒ A ~ B for all events A, B ∈ X \ {∅, Ω}.
(d) If X ≠ {∅, Ω}, the relation ~ has either a single equivalence class, namely X \ {∅, Ω}, or exactly two equivalence classes, each one containing exactly one member of each pair A, A^c ∈ X \ {∅, Ω}.
Proof. (a) Reflexivity, symmetry, and transitivity on X \ {∅, Ω} are all obvious (we have excluded ∅ and Ω to ensure reflexivity).
(b) It su¢ ces to prove one direction of implication (as (A c ) c = A for all A 2 X). Let A; B 2 Xnf?; g with A B. Then there is a path A 1 ; :::; A k 2 X from A to B such that any neighbours A j ; A j+1 are non-exclusive and non-exhaustive. So A c 1 ; :::; A c k is a path from A c to B c , where any neighbours A c j ; A c j+1 are non-exclusive (as (d) Let X 6 = f?; g. Suppose the number of equivalence classes with respect to is not one. As Xnf?; g 6 = ?, it is not zero. So it is at least two. We show two claims: Claim 1. There are exactly two equivalence classes with respect to . Proof of Claim 2. For a contradiction, let Z be an ( -)equivalence class containing the pair A; A c . By assumption, Z is not the only equivalence class, so there is a B 2 Xnf?; g with B 6 A (hence B 6 A c ). Then either A \ B = ? or A [ B = . In the …rst case, B A c , so that B A c by (c), a contradiction. In the second case, A c B, so that A c B by (c), a contradiction.
A c j \ A c j+1 = (A j [ A j+1 ) c 6 = c = ?) and non-exhaustive (as A c j [ A c j+1 = (A j \ A j+1 ) c 6 = ? c = ). So, A c B c .
Proof of Lemma 1. Consider an agenda X 6 = f?; g. By Lemma 10(a), is indeed an equivalence relation on Xnf?; g. By Lemma 10(d), it remains to prove that X is nested if and only if there are exactly two equivalence classes. Note that X is nested if and only if Xnf?; g is nested. So we may assume without loss of generality that ?; = 2 X.
First, suppose there are two equivalence classes. Let X + be one of them. By Lemma 10(d), X = fA; A c : A 2 X + g. To complete the proof that X is nested, we show that X + is linearly ordered by set-inclusion . Clearly, is re ‡exive, transitive, and anti-symmetric. We must show that it is connected. So, let A; B 2 X + ; we prove that A B or B A.
Since A 6 B c (by Lemma 10(d)), either A \ B c = ? or A [ B c = . So, either A B or B A.
Conversely, let X be nested. So X = fA; A c : A 2 X + g for some set X + that is linearly ordered by set inclusion. Let A 2 X + . We show that A 6 A c , implying that X has at least -so by Lemma 10(d) exactly -two equivalence classes. For a contradiction, suppose A A c . Then there is a path A 1 ; :::; A k 2 X from A = A 1 to A c = A k such that, for all neighbours A j ; A j+1 , A j \ A j+1 6 = ? and A j [ A j+1 6 = . Since each event C 2 X either is in X + or has its complement in X + , and since A 1 = A 2 X + and A c k = A 2 X + , there are neighbours A j ; A j+1 such that A j ; A c j+1 2 X + . So, as X + is linearly ordered by , either
A j A c j+1 or A c j+1 A j , i.e., either A j \ A j+1 = ? or A j [ A j+1 = , a contradiction.
We now give a useful re-formulation of the requirement of conditional consensus compatibility for opinion pooling on a general agenda X. Note first that an opinion function is consistent with certainty of A (∈ X) given B (∈ X) if and only if it is consistent with certainty of the event 'B implies A' (i.e., with zero probability of the event B\A, or 'B but not A'). This observation yields the following reformulation of conditional consensus compatibility (in which the roles of A and B have been interchanged):

Implication preservation. For all P_1, ..., P_n ∈ P_X, and all finite sets S of pairs (A, B) of events in X, if every opinion function P_i is consistent with certainty that A implies B for all (A, B) in S (i.e., some extension of P_i in P_{σ(X)} assigns zero probability to A\B for all pairs (A, B) ∈ S), then so is the collective opinion function P_{P_1,...,P_n}.

Proposition 3 For any agenda X, a pooling function F : P_X^n → P_X is conditional consensus compatible if and only if it is implication preserving.
Proof of Lemma 2. Let F be an independent and conditional-consensus-compatible pooling function for agenda X. For all A 2 X, let D A be the pooling criterion given by independence. We show that D A = D B for all A; B 2 X with A \ B 6 = ? and A [ B 6 = . This will imply that D A = D B whenever A B (by induction on the length of a path from A to B), which completes the proof. So, let A; B 2 X with A \ B 6 = ? and A [ B 6 = . Notice that A \ B, A [ B, and AnB need not belong to X. Let x 2 [0; 1] n ; we show that D A (x) = D B (x). As A \ B 6 = ? and A c \ B c = (A [ B) c 6 = ?, there are P 1 ; :::; P n 2 P (X) such that P i (A \ B) = x i and P i (A c \ B c ) = 1 x i for all i = 1; :::; n. Now consider the opinion functions P 1 ; :::; P n 2 P X given by P i := P i j X . Since P i (AnB) = 0 and P i (BnA) = 0 for all i, the collective opinion function P P 1 ;:::;Pn has an extension P P 1 ;:::;Pn 2 P (X) such that P P 1 ;:::;Pn (AnB) = P P 1 ;:::;Pn (BnA) = 0, by implication preservation (which is equivalent to conditional consensus compatibility by Proposition 3). So P P 1 ;:::;Pn (A) = P P 1 ;:::;Pn (A \ B) = P P 1 ;:::;Pn (B), and hence, P P 1 ;:::;Pn (A) = P P 1 ;:::;Pn (B). So, using the fact that P P 1 ;:::;Pn (A) = D A (x) (as P i (A) = x i for all i) and P P 1 ;:::;Pn (B) = D B (x) (as P i (B) = x i for all i), we have
D A (x) = D B (x).
A.2 Proof of Theorem 3(a)
As explained in the main text, Theorem 3(a) follows from Lemma 3, which we now prove.
Proof of Lemma 3. Let F : P n X ! P X be independent and consensus-preserving. Let A; B 2 X such that A ` B, say in virtue of (countable) set Y X. Write D A and D B for the pooling criterion for A and B, respectively. Let x = (x 1 ; :::; x n ) 2 [0; 1] n . We show that D A (x) D B (x). As \ C2fAg[Y C is non-empty but has empty intersection with B c (by the conditional entailment), it equals its intersection with B, so \ C2fA;Bg[Y C 6 = ?. Similarly, as \ C2fB c g[Y C is non-empty but has empty intersection with A, it equals its intersection with A c , so
\ C2fA c ;B c g[Y C 6 = ?. Hence there exist ! 2 \ C2fA;Bg[Y C and ! 0 2 \ C2fA c ;B c g[Y C
. For each individual i, we de…ne a probability function P i : (X) ! [0; 1] by P i := x i ! + (1 x i ) ! 0 (where ! ; ! 0 : (X) ! [0; 1] are the Dirac-measures at ! and ! 0 , respectively), and we then let P i := P i j X . As each P i satis…es P i (A) = P i (B) = x i , P P 1 ;:::;Pn (A) = D A (P 1 (A); :::; P n (A)) = D A (x), P P 1 ;:::;Pn (B) = D B (P 1 (B); :::; P n (B)) = D B (x).
Further, for each P i and each C 2 Y , we have P i (C) = 1, so that P P 1 ;:::;Pn (C) = 1 (by consensus preservation). Hence P P 1 ;:::;Pn (\ C2Y C) = 1, since 'countable inter-sections preserve probability one'. So, P P 1 ;:::;Pn (\ C2fAg[Y C) = P P 1 ;:::;Pn (A) = D A (x), P P 1 ;:::;Pn (\ C2fBg[Y C) = P P 1 ;:::;Pn (B) = D B (x).
To prove that D A (x) D B (x), it su¢ ces to show that P P 1 ;:::;Pn (\ C2fAg[Y C) P P 1 ;:::;Pn (\ C2fBg[Y C). This is true because
\ C2fAg[Y C = \ C2fA;Bg[Y \ C2fBg[Y C,
where the identity holds by an earlier argument.
A.3 Proof of Theorem 4(a)
As explained in the main text, Theorem 4(a) follows from Theorem 1(a) via Lemmas 4 and 5.22 It remains to prove both lemmas. We draw on a known agenda characterization result and a technical lemma.
Proposition 4 (Dietrich 2013) For any agenda X, the following are equivalent:
(a) X is non-nested with |X \ {Ω, ∅}| > 4;
(b) X has a (consistent or inconsistent) subset Y with |Y| ≥ 3 such that (Y \ {A}) ∪ {A^c} is consistent for each A ∈ Y;
(c) X has a (consistent or inconsistent) subset Y with |Y| = 3 such that (Y \ {A}) ∪ {A^c} is consistent for each A ∈ Y.

Lemma 11 If D : [0,1]^n → [0,1] is the local pooling criterion of a neutral pooling function for an agenda X (≠ {Ω, ∅}), then
(a) D(x) + D(1 - x) = 1 for all x ∈ [0,1]^n,
(b) D(0) = 0 and D(1) = 1, provided the pooling function is consensus-preserving.

Proof. (a) As X ≠ {Ω, ∅}, we may pick some A ∈ X \ {Ω, ∅}. For each x ∈ [0,1]^n, there exist (by A ≠ ∅, Ω) opinion functions P_1, ..., P_n ∈ P_X such that (P_1(A), ..., P_n(A)) = x, which implies that (P_1(A^c), ..., P_n(A^c)) = 1 - x and D(x) + D(1 - x) = P_{P_1,...,P_n}(A) + P_{P_1,...,P_n}(A^c) = 1.
(b) Given consensus preservation, D(1) = 1. By part (a), D(0) = 1 - D(1). So D(0) = 0.
Proof of Lemma 4. Let D be the local pooling criterion of such a pooling function for such an agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. By Proposition 4, there exist A; B; C 2 X such that each of the sets
A := A c \ B \ C, B := A \ B c \ C, C := A \ B \ C c
is non-empty. For all individuals i, since x i + y i + z i = 1 and since A ; B ; C are pairwise disjoint non-empty members of (X), there exists a P i 2 P (X) such that P i (A ) = x i , P i (B ) = y i and P i (C ) = z i . By construction,
P i (A [ B [ C ) = x i + y i + z i = 1 for all i:
(3)
Let P i := P i j X for each individual i. For the pro…le (P 1 ; :::; P n ) 2 P n X thus de…ned, we consider the collective opinion function P P 1 ;:::;Pn . We complete the proof by proving two claims.
Claim 1. P (A ) + P (B ) + P (C ) = P (A [ B [ C ) = 1 for some P 2 P (X) extending P P 1 ;:::;Pn .
The …rst identity holds for all extensions P 2 P (X) of P , by pairwise disjointness of A ; B ; C . For the second identity, note that each P i has an extension P i 2 P (X) for which P i (A [ B [ C ) = 1, so that by consensus compatibility P P 1 ;:::;Pn also has such an extension.
Consider any individual i. We de…ne D i : [0; 1] ! [0; 1] by D i (t) = D(0; :::; 0; t; 0; :::; 0), where t occurs at position i in (0; :::; 0; t; 0; :::; 0). By (7), D i (s + t) = D i (s) + D i (t) for all s; t 0 with s + t 1. As one can easily check, D i can be extended to a function D i : [0; 1) ! [0; 1) such that D i (s + t) = D i (s) + D i (t) for all s; t 0, i.e., such that D i satis…es the non-negative version of Cauchy's functional equation. So, there is some w i 0 such that D i (t) = w i t for all t 0 (by Theorem 1 in [START_REF] Aczél | Lectures on Functional Equations and their Applications[END_REF]. Now, for all x 2 [0; 1] n , D(x) = X n i=1 D i (x i ) (by repeated application of ( 7)), and so (as
D i (x i ) = D i (x i ) = w i x i ) D(x) = X n i=1 w i x i . Applying the latter with x = 1 yields D(1) = X n i=1 w i , hence X n i=1 w i = 1.
A.4 Proof of Theorem 5(a)
As explained in the main text, Theorem 5(a) follows from Theorem 2(a) via Lemmas 6 and 5. 23 It remains to prove Lemma 6.
Proof of Lemma 6. Let D be the local pooling criterion of a neutral and conditionalconsensus-compatible pooling function for a non-simple agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. As X is non-simple, there is a (countable) minimal inconsistent set Y X with jY j 3. Pick pairwise distinct A; B; C 2 Y . Let
A := \ E2Y nfAg E, B := \ E2Y nfBg E, C := \ E2Y nfCg E.
As (X) is closed under countable intersections, A ; B ; C 2 (X). For each i, as x i + y i + z i = 1 and as A ; B ; C are (by Y 's minimal inconsistency) pairwise disjoint non-empty members of (X), there exists a P i 2 P (X) such that
P i (A ) = x i ; P i (B ) = y i ; P i (C ) = z i .
By construction,
P i (A [ B [ C ) = x i + y i + z i = 1 for all i. (8)
Now let P i := P i j X for each individual i, and let P := P P 1 ;:::;Pn . We derive four properties of P (Claims 1-4), which then allow us to show that D(x) + D(y) + D(z) = 1 (Claim 5).
Claim 1. P (\ E2Y nfA;B;Cg E) = 1 for all extensions P 2 P (X) of P .
For all E 2 Y nfA; B; Cg, we have E A [ B [ C , so that by (8) P 1 (E) = ::: = P n (E) = 1, and hence P (E) = 1 (by consensus preservation, which follows from conditional consensus compatibility by Proposition 1(a)). So, for any extension P 2 P (X) of P , we have P (E) = 1 for all E 2 Y nfA; B; Cg. Thus P (\ E2Y nfA;B;Cg E) = 1, as 'countable intersections preserve probability one'.
Claim 2. P (A c [ B c [ C c ) = 1 for all extensions P 2 P (X) of P .
Let P 2 P (X) be an extension of P . Since A \ B \ C is disjoint from \ E2Y nfA;B;Cg E, which has P -probability one by Claim 1, P (A \ B \ C) = 0. This implies Claim 2, since
A c [ B c [ C c = (A \ B \ C) c . Claim 3. P ((A c \ B \ C) [ (A \ B c \ C) [ (A \ B \ C c )) = 1 for some extension P 2 P (X) of P . As A c \ B c is disjoint with each of A ; B ; C , it is disjoint with A [ B [ C ,
which has P i -probability of one for all individuals i by (8). So, P i (A c \ B c ) = 0, i.e., P i (A c nB) = 0, for all i. Analogously, P i (A c nC) = 0 and P i (B c nC) = 0 for all i. Since, as just shown, each P i has an extension P i which assigns zero probability to A c nB, A c nC and B c nC, by conditional consensus compatibility (and Proposition 3) the collective opinion function P also has an extension P 2 P (X) assigning zero probability to these three events, and hence, to their union
(A c nB)[(A c nC)[(B c nC) = (A c \B c )[(A c \C c )[(B c \C c ).
In other words, with P -probability of zero at least two of A c ; B c ; C c hold. Further, with P -probability of one at least one of A c ; B c ; C c holds (by Claim 2). So, with P -probability of one exactly one of A c ; B c ; C c holds. This is precisely what had to be shown. A.5 Proof of Theorem 6(a)
As explained in the main text, Theorem 6(a) follows from Theorem 3(a) via Lemmas 7 and 5 (while applying Lemma 11(b)). It remains to prove Lemma 7.
Proof of Lemma 7. Let D be the local pooling criterion for such a pooling function for a partitional agenda X. Consider any x; y; z 2 [0; 1] n with sum 1. Since X is partitional, some countable Y X partitions into at least three non-empty events. Choose distinct A; B; C 2 Y . For each individual i, since x i + y i + z i = 1 and since A, B and C are pairwise disjoint and non-empty, there is some P i 2 P X such that P i (A) = x i ; P i (B) = y i ; P i (C) = z i .
Let P be the collective opinion function for this pro…le. Since Y is a countable partition of and P can be extended to a ( -additive) probability function,
P E2Y P (E) = 1. Now,
A.6 Proof of parts (b) of all theorems
Parts (b) of three of the six theorems will be proved by reduction to results in the companion paper. To prepare this reduction, we first relate opinion pooling on a general agenda X to premise-based opinion pooling on a σ-algebra agenda, as analysed in the companion paper. Consider any agenda X and any σ-algebra agenda Σ of which X is a subagenda. (A subagenda of an agenda is a subset which is itself an agenda, i.e., a non-empty subset closed under complementation.) For instance, Σ could be σ(X). We can think of a pooling function F for X as being induced by a pooling function F' for the larger agenda Σ. Formally, a pooling function F' : P_Σ^n → P_Σ for agenda Σ induces the pooling function F : P_X^n → P_X for (sub)agenda X if F' and F generate the same collective opinions within X, i.e.,
F(P_1|_X, ..., P_n|_X) = F'(P_1, ..., P_n)|_X for all P_1, ..., P_n ∈ P_Σ.
(Strictly speaking, we further require that P_X = {P|_X : P ∈ P_Σ}, but this requirement holds automatically in standard cases, e.g., if X is finite or σ(X) = Σ.24) We call F' the inducing pooling function, and F the induced one.

Lemma 13 Consider an agenda X and the corresponding σ-algebra agenda Σ = σ(X). Any pooling function for X is
(a) induced by some pooling function for agenda Σ;
(b) independent (respectively, neutral, linear) if and only if every inducing pooling function for agenda Σ is independent (respectively, neutral, linear) on X, where 'every' can further be replaced by 'some';
(c) consensus-preserving if and only if every inducing pooling function for agenda Σ is consensus-preserving on X, where 'every' can further be replaced by 'some';
(d) consensus-compatible if and only if some inducing pooling function for agenda Σ is consensus-preserving;
(e) conditional-consensus-compatible if and only if some inducing pooling function for agenda Σ is conditional-consensus-preserving on X
(where in (d) and (e) the 'only if' claim assumes that X is finite).
Proof of Lemma 13. Consider an agenda X, the generated -algebra = (X), and a pooling function F for X.
(a) For each P 2 P X , …x an extension in P denoted P . Consider the pooling function F for de…ned by F (P 1 ; :::; P n ) = F (P 1 j X ; :::; P n j X ) for all P 1 ; :::; P n 2 P .Clearly, F induces F (regardless of how the extensions P of P 2 P X were chosen).
(b) We give a proof for the 'independence'case; the proofs for the 'neutrality' and 'linearity'cases are analogous. Note (using part (a)) that replacing 'every'by 'some'strengthens the 'if'claim and weakens the 'only if'claim. It thus su¢ ces to prove the 'if'claim with 'some', and the 'only if'claim with 'every'. Clearly, if some inducing F is independent on X, then F inherits independence. Now let F be independent with pooling criteria D A ; A 2 X. Consider any F : P n ! P n inducing F . Then F is independent on X with the same pooling criteria as for F because for all A 2 X and all P 1 ; :::; P n 2 P we have F (P 1 ; :::; P n )(A) = F (P 1 j X ; :::; P n j X )(A) as F induces F = D A (P 1 j X (A); :::; P n j X (A)) by F 's independence = D A (P 1 (A); :::; P n (A)).
(c) As in part (b), it su¢ ces to prove the 'if'claim with 'some', and the 'only if' claim with 'every'. Clearly, if some inducing F is consensus-preserving on X, F inherits consensus preservation. Now let F be consensus-preserving and induced by F . Then F is consensus-preserving on X because, for all A 2 X and which is either (X) or, if X is …nite, any -algebra which includes X. Our proof of Lemma 13 can be extended to this generalized statement (drawing on Lemma 15 and using an argument related to the 'Claim'in the proof of Theorem 1(b) of the companion paper).
Lemma 14 If a pooling function for a σ-algebra agenda Σ is independent on a subagenda X (where X is finite or σ(X) = Σ), then it induces a pooling function for agenda X.

The proof draws on a measure-theoretic fact in which the word 'finite' is essential:

Lemma 15 Every probability function on a finite sub-σ-algebra of a σ-algebra Σ can be extended to a probability function on Σ.
Proof. Let Σ₀ be a finite sub-σ-algebra of the σ-algebra Σ, and consider any P₀ ∈ P_{Σ₀}. Let 𝒜 be the set of atoms of Σ₀, i.e., the (⊆-)minimal events in Σ₀ \ {∅}. As Σ₀ is finite, 𝒜 must partition Ω. So ∑_{A∈𝒜} P₀(A) = 1. For each A ∈ 𝒜, let Q_A be a probability function on Σ such that Q_A(A) = 1. (Such functions exist, since each Q_A could for instance be the Dirac measure at some ω_A ∈ A.) Then P := ∑_{A∈𝒜} P₀(A) Q_A defines a probability function on Σ, because (given the identity ∑_{A∈𝒜: P₀(A)≠0} P₀(A) = 1) it is a convex combination of probability functions on Σ. Further, P extends P₀, because it agrees with P₀ on 𝒜, hence on Σ₀.
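In the finite case, the construction in this proof is easy to make explicit. The Python sketch below (an illustration under the simplifying assumption that Ω is finite and Σ is its full power set, which is not required by the lemma) extends a probability function defined on the atoms of a finite sub-σ-algebra by placing each atom's mass on a single chosen world.

    def extend_probability(P0, atoms, pick_world=None):
        # P0: probability of each atom of the finite sub-sigma-algebra; the atoms partition omega.
        # The extension is P = sum over atoms A of P0(A) * Q_A, with Q_A the Dirac measure at a chosen world in A.
        pick_world = pick_world or {A: next(iter(A)) for A in atoms}
        def P(event):
            return sum(P0[A] for A in atoms if pick_world[A] in event)
        return P

    atoms = [frozenset({0, 1}), frozenset({2}), frozenset({3})]
    P0 = {atoms[0]: 0.5, atoms[1]: 0.25, atoms[2]: 0.25}
    P = extend_probability(P0, atoms)
    print(P(frozenset({0, 1})))   # 0.5: agrees with P0 on the atom {0, 1}
    print(P(frozenset({2, 3})))   # 0.5: additivity across atoms
    print(P(frozenset({0})))      # 0.5 or 0.0, depending on the world chosen inside {0, 1}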
Proof of Lemma 14. Suppose the pooling function F for -algebra agenda is independent on subagenda X, and that X is …nite or (X) = . Let 0 := (X).
If X is …nite, so is 0 . Each P 2 P X can by de…nition be extended to a function in P 0 , which (by Lemma 15 in case 0 is a …nite -algebra distinct from ) can be extended to a function in P . For any Q 2 P X , pick an extension Q 2 P . De…ne a pooling function F 0 for X by F 0 (Q 1 ; :::; Q n ) := F (Q 1 ; :::; Q n )j X for all Q 1 ; :::; Q n 2 P X . Now F induces F 0 for two reasons. First, for all P 1 ; :::; P n 2 P , F 0 (P 1 j X ; :::; P n j X ) = F (P 1 j X ; :::; P n j X )j X = F (P 1 ; :::; P n )j X , where the second '='holds as F is independent on X. Second, P X = fP j X : P 2 P g, where ' 'is trivial and ' 'holds because each P 2 P X equals P j X .
Proof of parts (b) of Theorems 1-6.
By their construction, the numbers p_1, ..., p_4 given by (10)-(12) satisfy condition (b) and the equation p_1 + ... + p_4 = 1. To complete the proof of conditions (a)-(b), it remains to show that p_1, ..., p_4 ≥ 0. We do this by proving two claims.
Claim 1. p_4 ≥ 0, i.e., (t_12 + t_13 + t_23)/2 ≤ 1.
We have to prove that T(q_12) + T(q_13) + T(q_23) ≤ 2. Note that q_12 + q_13 + q_23 = q_1 + q_2 + q_1 + q_3 + q_2 + q_3 = 2(q_1 + q_2 + q_3) ≤ 2.
We distinguish three cases.
Case 1: All of q_12, q_13, q_23 are at least 1/2. Then, by (i)-(iii), T(q_12) + T(q_13) + T(q_23) ≤ q_12 + q_13 + q_23 ≤ 2, as desired.
Case 2: At least two of q_12, q_13, q_23 are below 1/2. Then, again using (i)-(iii), T(q_12) + T(q_13) + T(q_23) < 1/2 + 1/2 + 1 = 2, as desired.
Case 3: Exactly one of q_12, q_13, q_23 is below 1/2. Suppose q_12 < 1/2 ≤ q_13 ≤ q_23 (otherwise just switch the roles of q_12, q_13, q_23). For all δ ≥ 0 such that q_23 + δ ≤ 1, the properties (i)-(iii) of T imply that
T(q_13) + T(q_23) ≤ T(q_13 - δ) + T(q_23 + δ).    (13)
Since the graphical intuition for (13) is clear, let us only give an informal proof, stressing visualisation. Dividing by 2, we have to show that the average value a_1 := (1/2)T(q_13) + (1/2)T(q_23) is at most the average value a_2 := (1/2)T(q_13 - δ) + (1/2)T(q_23 + δ).
One might wonder why the pooling function constructed in this proof violates conditional consensus compatibility. (It must do so, because otherwise pooling would be linear, hence neutral, by Theorem 5(a).) Let Ω and X be as in the proof, and consider a profile with complete unanimity: all individuals i assign probability 0 to ω_1, 1/4 to ω_2, 1/4 to ω_3, and 1/2 to ω_4. As {ω_1} is the difference of two events in X (e.g. {ω_1, ω_2} \ {ω_2, ω_3}), implication preservation (which is equivalent to conditional consensus compatibility) would require ω_1's collective probability to be 0 as well. But ω_1's collective probability is (in the notation of the proof) given by p_1 = (t_12 + t_13 - t_23)/2 = (T(q_12) + T(q_13) - T(q_23))/2.
Here, q_kl is the collective probability of {ω_k, ω_l} under a linear pooling function, so that q_kl is the probability which each individual assigns to {ω_k, ω_l}. So
p_1 = (T(1/4) + T(1/4) - T(1/2))/2 = T(1/4) - T(1/2)/2,
which is strictly positive as T is strictly concave on [0, 1/2] with T(0) = 0.
(c) Let A; B 2 Xnf?; g. If A B, then A B due to a direct connection, because A; B are neither exclusive (as A \ B = A 6 = ?) nor exhaustive (as A [ B = B 6 = ).
Claim 2 .
2 Each class contains exactly one member of any pair A; A c 2 Xnf?; g. Proof of Claim 1. For a contradiction, let A; B; C 2 Xnf?; g be pairwise not ( -)equivalent. By A 6 B, either A \ B = ? or A [ B = . We may assume the former case, because in the latter case we may consider A c ; B c ; C c instead of A; B; C. (Note that A c ; B c ; C c are again pairwise non-equivalent by (b) and A c \ B c = (A [ B) c = c = ?.) Now, since A \ B = ?, we have B A c , whence A c B by (c). By A 6 C, there are two cases: either A \ C = ?, which implies C A c , whence C A c by (c), so that C B (as A c B and is transitive by (a)), a contradiction; or A [ C = , which implies A c C, whence A c C by (c), so that again we derive the contradiction C B, which completes the proof of Claim 1.
Claim 2 .
2 D(x) + D(y) + D(z) = 1. Consider an extension P 2 P (X) of P P 1 ;:::;Pn of the kind in Claim 1. As P (A [ B [ C ) = 1, and as the intersection of A c with A [ B [ C is A , P (A c ) = P (A ):(4)Since A c 2 X, we further have P (A c ) = P P 1 ;:::;Pn (A c ) = D(P 1 (A c ); :::; P n (A c )), whereP i (A c ) = P i (A c ) = x i for each individual i. So, P (A c ) = D(x).This and (4) imply that P (A ) = D(x). Analogously, P (B ) = D(y) and P (C ) = D(z). So, Claim 2 follows from Claim 1. Proof of Lemma 5. Consider any D : [0; 1 n ] ! [0; 1] such that D(0) = 0 and D(x) + D(y) + D(z) = 1 for all x; y; z 2 [0; 1] n with x + y + z = 1: (5) We have D(1) = 1 (since D(1) + D(0) + D(0) = 1 where D(0) = 0) and D(x) + D(1 x) = 1 for all x 2 [0; 1] (6) (since D(x) + D(1 x) + D(0) = 1 where D(0) = 0). Using (5) and then (6), for all x; y 2 [0; 1] n with x + y 2 [0; 1] n , 1 = D(x) + D(y) + D(1 x y) = D(x) + D(y) + 1 D(x + y). So, D(x + y) = D(x) + D(y) for all x; y 2 [0; 1] n with x + y 2 [0; 1] n :
Claim 4 .
4 P (A )+P (B )+P (C ) = P (A [B [C ) = 1 for some extension P 2 P (X) of P . Consider an extension P 2 P (X) of P of the kind in Claim 3. The …rst identity follows from the pairwise disjointness of A ; B ; C . Regarding the second identity, note that A [ B [ C is the intersection of the events \ E2Y nfA;B;Cg E and (A c \ B \ C) [ (A \ B c \ C) [ (A \ B \ C c ), each of which has P -probability of one by Claims 1 and 3. So P (A [ B [ C ) = 1. Claim 5. D(x) + D(y) + D(z) = 1. Consider an extension P 2 P (X) of P of the kind in Claim 4. As P (A [ B [ C ) = 1 by Claim 4, and as the intersection of A c with A [ B [ C is A , P (A c ) = P (A ):(9)Since A c 2 X, we also have P (A c ) = P P 1 ;:::;Pn (A c ) = D(P 1 (A c ); :::; P n (A c )),where P i (A c ) = P i (A c ) = x i for all individuals i. So P (A c ) = D(x). This and (9) imply that P (A ) = D(x). Similarly, P (B ) = D(y) and P (C ) = D(z). So Claim 5 follows from Claim 4.
for each E 2 Y nfA; B; Cg, we have P (E) = 0 by consensus preservation (as P i (E) = 0 for all i). So P (A) + P (B) + P (C) = 1. Hence D(x) + D(y) + D(z) = 1 because P (A) = D(P 1 (A); :::; P n (A)) = D(x); P (A) = D(P 1 (B); :::; P n (B)) = D(y); P (A) = D(P 1 (C); :::; P n (C)) = D(z).
First, Theorems 2(b) and 6(b) follow directly from Theorems 1(b) and 3(b), respectively, since consensus compatibility implies conditional consensus compatibility (by Proposition 1) and as non-neutrality implies non-linearity. Second, we derive Theorems 1(b), 3(b) and 5(b) from the corresponding results in the companion paper, namely Theorems 1(b), 3(b), and 5(b), respectively. The matrix of our three-equation system into triangular form:
An agenda X (with jXnf ; ?gj > 4) is non-nested if and only if it has at least one subset Y with jY j 3 such that (Y nfAg) [ fA c g is consistent for each A 2 Y . (b) An agenda X (with jXnf ; ?gj > 4) is non-simple if and only if it has at least one inconsistent subset Y (of countable size) with jY j 3 such that
Recalling that p 4 = 1 (p 1 + p 2 + p 3 ), we also havep 4 = 1 t 12 + t 13 + t 23 2 :
1 t 12 1 t 13 1 1 t 23 1 A ! ! 0 @ 0 @ 1 1 1 1 1 -1 t 12 t 13 t 12 1 1 t 13 t 12 2 t 23 + t 13 t 12 2 1 t 23 +t 13 t 12 1 t 12 A . 1 A
The system therefore has the following solution:
p 3 = t 23 + t 13 t 12 2 (10)
p 2 = t 12 t 13 + t 23 + t 13 t 12 2 = t 12 + t 23 t 13 2 (11)
p 1 = t 12 t 12 + t 23 t 13 2 = t 12 + t 13 t 23 2
This assumes that the -algebra contains more than four events.
Note that A ! B ('if A then B') is best interpreted as a non-material conditional, since its negation, unlike that of a material conditional, is consistent with the negation of its antecedent, A (i.e., A c \ (A ! B) c 6 = ?). (A material conditional is always true when its antecedent is false.) The only assignment of truth-values to the events A, A ! B, and B that is ruled out is (1; 1; 0). If we wanted to re-interpret ! as a material conditional, we would have to rule out in addition the truth-value assignments (0; 0; 0), (0; 0; 1), and (1; 0; 1), which would make little sense in the present example. The event A ! B would become A c [ B (= (A \ B c ) c ), and the agenda would no longer be free from conjunctions or disjunctions. However, the agenda would still not be a -algebra. For a discussion of non-material conditionals, see, e.g.,[START_REF] Priest | An Introduction to Non-classical Logic[END_REF].
Whenever X contains A and B, then(X) contains A [ B, (A [ B) c , (A [ B) c [ B,and so on. In some cases, all events may be constructible from events in X, so that (X) = 2 .
For instance, if X contains k = 2 logically independent events, say A and B, then X includes a partition A of into 2 k = 4 non-empty events, namely A = fA \ B; A \ B c ; A c \ B; A c \ B c g, and hence X includes the set f[ C2C C : C Ag containing 2 2 k = 16 events.
When X is a -algebra,[START_REF] Mcconway | Marginalization and Linear Opinion Pools[END_REF] shows that independence (his weak setwise function property) is equivalent to the marginalization property, which requires aggregation to commute with the operation of reducing the -algebra to some sub--algebra X. A similar result holds for general agendas
X.7 When the agenda is a -algebra, independence con ‡icts with the preservation of unanimously held judgments of probabilistic independence, assuming non-dictatorial aggregation[START_REF] Genest | Further Evidence against Independence Preservation in Expert Judgement Synthesis[END_REF][START_REF] Bradley | Aggregating Causal Judgments[END_REF]. Whether this objection also applies in the case of general agendas depends on the precise nature of the agenda. Another objection is that independence is not generally compatible with external Bayesianity, the requirement that aggregation commute with Bayesian updating of probabilities in light of new information.
An interesting fourth variant is the requirement obtained by combining the antecedent of implicit consensus preservation with the conclusion of consensus compatibility. This condition weakens both implicit consensus preservation and consensus compatibility, while still strengthening the initial consensus preservation requirement.
As a consequence, full neutrality follows even for nested agendas if independence is slightly strengthened by requiring that D A = D A c for some A 2 Xnf?; g.
A generalized de…nition of partitionality is possible in Theorem 6: we could de…ne X to be partitional if there are …nite or countably in…nite subsets Y; Z X such that the set fA \ C : A 2 Y g, with C = \ B2Z B, partitions C into at least three non-empty events. This de…nition generalizes the one in the main text, because if we take Z = ?, then C becomes (= \ B2? B) and Y simply partitions . But since we do not know whether this generalized de…nition renders partitionality logically minimal in Theorem 6, we use the simpler de…nition in the main text.
In this proposition, we assume that the underlying set of worlds satis…es j j 4.
To see that X K is non-simple if jKj > 2, choose three distinct alternatives x; y; z 2 K and note that the three events x y; y z; and z x in X K are mutually inconsistent, but any pair of them is consistent, so that they form a minimal inconsistent subset of X K .
A model in which individuals and the collective specify probabilities of selecting each of the alternatives in K (as opposed to probability assignments over events of the form 'x is ranked above y') has been studied, for instance, by[START_REF] Intriligator | A Probabilistic Model of Social Choice[END_REF], who has characterized a version of linear averaging in it. Similarly, a model in which individuals have vague or fuzzy preferences has been studied, for instance, by[START_REF] Billot | Aggregation of preferences: The fuzzy case[END_REF] and more recently by Piggins and Perote-Peña (2007) (see also[START_REF] Sanver | Sophisticated Preference Aggregation[END_REF].
Formally, a binary opinion function is a function f : X ! f0; 1g that is extendible to a probability function on (X), or equivalently, to a truth-function on (X) (i.e., a f0; 1g-valued function on (X) that is logically consistent).
Speci…cally, a binary opinion function f : X ! f0; 1g corresponds to the consistent and complete judgment set fA 2 X : f (A) = 1g.
In the binary case in part (a), partionality can be weakened to even-number negatability or non-a¢ neness. SeeDietrich and List (2007a) and[START_REF] Dokow | Aggregation of binary evaluations[END_REF].
Of course, one could also state uni…ed versions of Theorems 1 to 3 on neutral opinion pooling, by combining these theorems with existing results on binary judgment aggregation. We would simply need to replace the probabilistic opinion pooling function F : P n X ! P X with a binary or probabilistic such function.
This uses Lemma 11(b) below, where consensus preservation holds by consensus compatibility.
This uses Lemma 11(b), where consensus preservation holds by conditional consensus compatibility.
In these cases, each opinion function in P X is extendable not just to a probability function on (X), but also to one on . In general, extensions beyond (X) may not always be possible,
pooling on general agendas'(September 2007). Dietrich was supported by a Ludwig Lachmann Fellowship at the LSE and the French Agence Nationale de la Recherche (ANR-12-INEG-0006-01). List was supported by a Leverhulme Major Research Fellowship (MRF-2012-100) and a Harsanyi Fellowship at the Australian National University, Canberra. 1
axiomatic requirements on the induced pooling function F -i.e., independence and the various consensus requirements -can be related to the following requirements on the inducing pooling function F for the agenda (introduced and discussed in the companion paper): Independence on X. For each A in subagenda X, there exists a function D A : [0; 1] n ! [0; 1] (the local pooling criterion for A) such that, for all P 1 ; :::; P n 2 P , P P 1 ;:::;Pn (A) = D A (P 1 (A); :::; P n (A)).
Consensus preservation. For all A 2 and all P 1 ; :::; P n 2 P , if P i (A) = 1 for all individuals i then P P 1 ;:::;Pn (A) = 1.
Consensus preservation on X. For all A in subagenda X and all P 1 ; :::; P n 2 P , if P i (A) = 1 for all individuals i then P P 1 ;:::;Pn (A) = 1.
Conditional consensus preservation on X. For all A; B in subagenda X and all P 1 ; :::; P n 2 P , if, for each individual i, P i (AjB) = 1 (provided P i (B) 6 = 0), then P P 1 ;:::;Pn (AjB) = 1 (provided P P 1 ;:::;Pn (B) 6 = 0). 25 The following lemma establishes some key relationships between the properties of the induced and the inducing pooling functions:
Lemma 12 Suppose a pooling function F for a -algebra agenda induces a pooling function F for a subagenda X (where X is …nite or (X) = ). Then:
F is independent (respectively, neutral, linear) if and only if F is independent (respectively, neutral, linear) on X; F is consensus-preserving if and only if F is consensus-preserving on X;
This lemma follows from a more general result on the correspondence between opinion pooling on general agendas and on -algebra agendas. 26 as is well-known from measure theory. For instance, if = R, X consists of all intervals or complements thereof, and = 2 R , then (X) contains the Borel-measurable subsets of R, and it is well-known that measures on (X) may not be extendable to = 2 R (a fact related to the Banach-Tarski paradox). 25 If one compares this requirement with that of conditional consensus compatibility for a general agenda X, one might wonder why the new requirement involves only a single conditional certainty (i.e., that of A given B), whereas the earlier requirement involves an entire set of conditional certainties (which must be respected simultaneously). The key point is that if each P i is a probability function on , then the simpli…ed requirement as stated here implies the more complicated requirement from the main text.
26 More precisely, Lemma 12 is a corollary of a slightly generalized statement of Lemma 13, in P 1 ; :::; P n 2 P such that P 1 (A) = = P n (A) = 1, we have F (P 1 ; :::; P n )(A) = F (P 1 j X ; :::; P n j X )(A) as F induces F = 1 as F is consensus preserving.
(d) First, let F be consensus-compatible and X …nite. We de…ne F as follows. For any P 1 ; :::; P n 2 P , consider the event A in which is smallest subject to having probability one under each P i . This event exists and is constructible as A = \ A2 (X):P 1 (A)= =P n (A)=1 A, drawing on …niteness of = (X) and the fact that intersections of …nitely many events of probability one have probability one. Clearly, A is the union of the supports of the functions P i . We de…ne F (P 1 ; :::; P n ) as any extension in P of F (P 1 j X ; ::::; P n j X ) assigning probability one to A . Such an extension exists because F is consensuscompatible and each P i j X is extendable to a probability function (namely P i ) assigning probability one to A . Clearly, F induces F . It also is consensuspreserving: for all P 1 ; :::; P n 2 P and A 2 , if P 1 (A) = = P n (A) = 1, then A includes the above-constructed event A , whence F (P 1 ; :::; P n )(A) = 1 as F (P 1 ; :::
Conversely, let some inducing pooling function F be consensus-preserving. To see why F is consensus-compatible, consider P 1 ; :::; P n 2 P X and A 2 such that each P i has an extension P i 2 P for which P i (A) = 1. We show that some extension P 2 P of F (P 1 ; :::; P n ) satis…es P (A) = 1. Simply let P be F (P 1 ; :::; P n ) and note that P is indeed an extension of F (P 1 ; :::; P n ) (as F induces F ) and P (A) = 1 (as F is consensus-preserving).
(e) First, let F be conditional-consensus-compatible, and let X be …nite. We de…ne F as follows. For a pro…le (P 1 ; :::; P n ) 2 P n , consider the (…nite) set S of pairs (A; B) in X such that P i (AjB) = 1 for each i with P i (B) 6 = 0 (equivalently, such that P i (BnA) = 0 for each i). Since F is conditional-consensus-compatible (and since in the last sentence we can replace each 'P i 'with 'P i j X '), there is an extension P 2 P of F (P 1 j X ; :::; P n j X ) such that P (AjB) = 1 for all (A; B) 2 S for which P (B) 6 = 0. Let F (P 1 ; :::; P n ) := P . Clearly, F induces F and is conditional-consensus-preserving on X.
Conversely, let some inducing F be conditional-consensus-preserving on X.
To check that F is conditional-consensus-compatible, consider P 1 ; :::; P n 2 P X and a …nite set S of pairs (A; B) in X such that each P i can be extended to P i 2 P with P i (AjB) = 1 (provided P i (B) 6 = 0). We require an extension P 2 P of F (P 1 ; :::; P n ) such that P (AjB) = 1 for all (A; B) 2 S for which P (B) 6 = 0. Now P := F (P 1 ; :::; P n ) is such an extension, since F induces F and is conditional-consensus-preserving on X.
Which pooling functions for induce ones for X? Here is a su¢ cient condition: derivations are similar for the three results; we thus spell out the derivation only for Theorem 1(b). Consider a nested agenda X 6 = f ; ?g. By the companion paper's Theorem 1(b) (see also the footnote to it), some pooling function F for agenda := (X) is independent on X, (globally) consensus preserving and nonneutral on X. By Lemma 14, F induces a pooling function for (sub)agenda X, which by Lemma 12 is independent, consensus-compatible, and non-neutral.
Finally, we prove Theorem 4(b) directly rather than by reduction. Consider an agenda X 6 = f?; g which is nested or satis…es jXnf?; gj 4. If X is nested, the claim follows from Theorem 1(b), since non-neutrality implies non-linearity. Now let X be non-nested and jXnf?; gj 4. We may assume without loss of generality that ?; 6 2 X (as any independent, consensus-compatible, and nonneutral pooling function for agenda X 0 = Xnf?; g induces one for agenda X). Since jXj 4, and since jXj > 2 (as X is non-nested), we have jXj = 4, say X = fA; A c ; B; B c g. By non-nestedness, A and B are logically independent, i.e., the events A \ B, A \ B c , A c \ B, and A c \ B c are all non-empty. On P n X , consider the function F : (P 1 ; ::; P n ) 7 ! T P 1 , where T (p) is 1 if p = 1, 0 if p = 0, and 1 2 if p 2 (0; 1). We complete the proof by establishing that (i) F maps into P X , i.e., is a proper pooling function, (ii) F is consensus-compatible, (iii) F is independent, and (iv) F is non-linear. Claims (iii) and (iv) hold trivially.
Proof of (i): Let P 1 ; :::; P n 2 P X and P := F (P 1 ; :::; P n ) = T P 1 . We need to extend P to a probability function on (X). For each atom C of (X) (i.e., each C 2 fA \ B; A \ B c ; A c \ B; A c \ B c g), let P C be the unique probability function on (X) assigning probability one to C. We distinguish between three (exhaustive) cases.
Case 1 : P 1 (E) = 1 for two events E in X. Without loss of generality, let P 1 (A) = P 1 (B) = 1, and hence, P 1 (A c ) = P 1 (B c ) = 0. It follows that P (A) = P (B) = 1 and P (A c ) = P (B c ) = 0. So P extends (in fact, uniquely) to a probability function on (X), namely to P A\B .
Case 2 : P 1 (E) = 1 for exactly one event E in X. Without loss of generality, assume P 1 (A) = 1 (hence, P 1 (A c ) = 0) and P 1 (B); P 1 (B c ) 2 (0; 1). Hence, P (A) = 1, P (A c ) = 0 and P (B) = P (B c ) = 1 2 . So P extends (again uniquely) to a probability function on (X), namely to 1 2 P A\B + 1 2 P A\B c . Case 3 : P 1 (E) = 1 for no event E in X. Then P 1 (A); P 1 (A c ); P 1 (B); P 1 (B c ) 2 (0; 1), and so P (A) = P (A c ) = P (B) = P (B c ) = 1 2 . Hence, P extends (nonuniquely) to a probability function on (X), e.g., to
. Proof of (ii): Let P 1 ; :::; P n 2 P X and consider any C 2 (X) such that each P i extends to some P i 2 P (X) such that P i (C) = 1. (It only matters that P 1 has such an extension, given the de…nition of F .) We have to show that P := F (P 1 ; :::; P n ) = T P 1 is extendable to a P 2 P (X) such that P (C) = 1. We verify the claim in each of the three cases considered in the proof of (i). In Cases 1 and 2, the claim holds because the (unique) extension P 2 P (X) of P has the same support as P 1 . (In fact, in Case 1 P = P 1 .) In Case 3, C must intersect with each event in X (otherwise some event in X would have zero probability under P 1 , in contradiction with Case 3) and include more than one of the atoms A \ B, A \ B c , A c \ B, and A c \ B c (again by Case 3). As is easily checked, C (A\B)[(A c \B c ) or C (A\B c )[(A c \B). So, to ensure that the extension P or P satis…es P (C) = 1, it su¢ ces to specify P as 1 2 P A\B + 1 2 P A c \B c in the …rst case, and as 1 2 P A\B c + 1 2 P A c \B in the second case.
A.7 Proof of Propositions 1 and 2
Proof of Proposition 1. Consider an opinion pooling function for an agenda X.
We …rst prove part (b), by showing that conditional consensus compatibility is equivalent to the restriction of consensus compatibility to events A expressible as ([ (C;D)2S (CnD)) c for …nite S X X. This fact follows from the equivalence of conditional consensus compatibility and implication preservation (Proposition 3) and the observation that, for any such set S, an opinion function is consistent with zero probability of all CnD with (C; D) 2 S if and only if it is consistent with zero probability of [ (C;D)2S (CnD), i.e., probability one of ([ (C;D)2S (CnD)) c . We now prove part (a) The claims made about implicit consensus preservation and consensus compatibility have already been proved (informally) in the main text. It remains to show that conditional consensus compatibility implies consensus preservation and is equivalent to it if X = (X). As just shown, conditional consensus compatibility is equivalent to the restriction of consensus compatibility to events A of the form ([ (C;D)2S (CnD)) c for some …nite set S X X. Note that, for any A 2 X, we may de…ne S as f(A c ; A)g, so that ([ (C;D)2S (CnD)) c = (A c nA) c = A. So, conditional consensus compatibility implies consensus preservation and is equivalent to it if X = (X).
Proof of Proposition 2. Assume j j 4. We can thus partition into four nonempty events and let X consist of any union of two of these four events. The set X is indeed an agenda since A 2 X , A c 2 X. Since nothing depends on the sizes of the four events, we assume without loss of generality that they are singleton, i.e., that = f! 1 ; ! 2 ; ! 3 ; ! 4 g and X = fA : jAj = 2g.
Step 1. We here show that X is path-connected and non-partitional. Nonpartitionality is trivial. To establish path-connectedness, we consider events A; B 2 X and must construct a path of conditional entailments from A to B. This is done by distinguishing between three cases.
Case 1 : A = B. Then the path is trivial, since A ` A (take Y = ?).
Case 2 : A and B have exactly one world in common. Call it !, and let ! 0 be the unique world in n(A [ B). Then A ` B in virtue of Y = ff!; ! 0 gg. Case 3 : A and B have no world in common. We may then write
Step 2. We now construct a pooling function (P 1 ; :::; P n ) 7 ! P P 1 ;:::;Pn that is independent (in fact, neutral), consensus-preserving, and non-linear. As an ingredient of the construction, consider …rst a linear pooling function L : P n X ! P X (for instance the dictatorial one given by (P 1 ; :::; P n ) 7 ! P 1 ). We shall transform L into a non-linear pooling function that is still neutral and consensus-preserving. First, …x a transformation T : [0; 1] ! [0; 1] such that:
(Such a T exists; e.g. T (x) = 4(x 1=2) 3 + 1=2 for all x 2 [0; 1].) Now, for any P 1 ; :::; P n 2 P X and A 2 X, let P P 1 ;:::;Pn (A) := T (L(P 1 ; :::; P n )(A)). We must prove that, for any P 1 ; :::; P n 2 P X , the function P P 1 ;:::;Pn , as just de…ned, can indeed be extended to a probability function on (X) = 2 . This completes the proof, as it establishes that we have de…ned a proper pooling function and this pooling function is neutral (since L is neutral), consensus-preserving (since L is consensus-preserving and T (1) = 1), and non-linear (since L is linear and T a non-linear transformation).
To show that P P 1 ;:::;Pn can be extended to a probability function on (X) = 2 , we consider any probability function Q on 2 and show that T Qj X extends to a probability function on 2 (which completes our task, since Qj X could be L(P 1 ; :::; P n ) for P 1 ; :::; P n 2 P X ). It su¢ ces to prove that there exist real numbers p k = p Q k , k = 1; 2; 3; 4, such that the function on 2 assigning p k to each f! k g is a probability function and extends T Qj X , i.e., such that (a) p 1 ; p 2 ; p 3 ; p 4 0 and
For all k 2 f1; 2; 3; 4g, let q k := Q(f! k g); and for all k; l 2 f1; 2; 3; 4g with k < l, let q kl := Q(f! k ; ! l g). In order for p 1 ; :::; p 4 to satisfy (b), they must satisfy the system p k + p l = T (q kl ) for all k; l 2 f1; 2; 3; 4g with k < l.
Given p 1 + p 2 + p 3 + p 4 = 1, three of these six equations are redundant. Indeed, consider k; l 2 f1; 2; 3; 4g, k < l, and de…ne k 0 ; l 0 2 f1; 2; 3; 4g, k 0 < l 0 , by fk 0 ; l 0 g = f1; 2; 3; 4gnfk; lg. As p k + p l = 1 p k 0 p l 0 and T (q kl ) = T (1 q k 0 l 0 ) = 1 T (q k 0 l 0 ), the equation p k + p l = T (q kl ) is equivalent to p k 0 + p l 0 = T (q k 0 l 0 ). So (b) reduces (given p 1 + p 2 + p 3 + p 4 = 1) to the system p 1 + p 2 = T (q 12 ), p 1 + p 3 = T (q 13 ), p 2 + p 3 = T (q 23 ). This is a system of three linear equations in three variables p 1 ; p 2 ; p 3 2 R. To solve it, let t kl := T (q kl ) for all k; l 2 f1; 2; 3; 4g, k < l. We …rst bring the coe¢ cient Let SL be the straight line segment in R 2 joining the points (q 13 ; T (q 13
)) and (q 23 + ; T (q 23 + )), and let SL be the straight line segment joining the points (q 13 ; T (q 13 )) and (q 23 ; T (q 23 )). Since a 1 and a 2 are, respectively, the second coordinates of the points on SL and SL with the …rst coordinate 1 2 q 13 + 1 2 q 23 , it su¢ ces to show that SL is 'below'SL. This follows once we prove that T 's graph is 'below'SL (as T is convex on [1=2; 1] and SL joins two points on T 's graph on [1=2; 1]). If q 13 1=2, this is trivial by T 's convexity on [1=2; 1]. Now let q 13 < 1=2. Let SL 0 be the straight line segments joining the points (q 13 ; T (q 13 )) and (1 (q 13 ); T (1 (q 13 ))), and let SL 00 be the straight line segment joining the points (1 (q 13
); T (1 (q 13 ))) and (q 23 + ; T (q 23 + )). Check using T 's properties that LS 0 passes through the point (1=2; 1=2). This implies that (*) T 's graph is 'below'SL 0 on [1=2; 1], and that (**) SL 00 is steeper than SL 0 (by T 's convexity on [1=2; 1]). Also, (***) T 's graph is 'below' SL 00 (again by T 's convexity on [1=2; 1]). In sum, on [1=2; 1], T 's graph is (by (*) and (***)) 'below'both SL 0 and SL 00 which are both 'below'SL by (**). So, still on [1=2; 1], T 's graph is 'below'SL. This proves (13). Applying (13) with = 1 q 23 , we obtain T (q 13 ) + T (q 23 ) T (q 13 (1 + q 23 )) + T (1):
On the right side, T (1) = 1 and (as q 13 (1 + q 23 ) 1 q 12 and as T is increasing) T (q 13 (1 + q 23 )) T (1 q 12 ) = 1 T (q 12 ). So T (q 13 ) + T (q 23 ) 1 + 1 T (q 12 ), i.e., T (q 12 ) + T (q 13 ) + T (q 23 ) 2, as claimed.
Claim 2. p k 0 for all k = 1; 2; 3.
We only show that p 1 0, as the proofs for p 2 and p 3 are analogous. We have to prove that t 13 + t 23 t 12 0, i.e., that T (q 13 ) + T (q 23 ) T (q 12 ), or equivalently, that T (q 1 +q 3 )+T (q 2 +q 3 ) T (q 1 +q 2 ). As T is increasing, it su¢ ces to establish that T (q 1 ) + T (q 2 ) T (q 1 + q 2 ). We again consider three cases.
Case 1 : q 1 + q 2 1=2. Suppose q 1 q 2 (otherwise swap the roles of q 1 and q 2 ). For all 0 such that q 1 0, we have T (q 1 ) + T (q 2 ) T (q 1 ) + T (q 2 + ), as T is concave on [0; 1=2] and 0 q 1 q 1 q 2 q 2 + 1=2. So, for = q 1 , T (q 1 ) + T (q 2 ) T (0) + T (q 2 + q 1 ) = T (q 1 + q 2 ):
Case 2 : q 1 + q 2 > 1=2 but q 1 ; q 2 1=2. By (i)-(iii),
T (q 1 ) + T (q 2 ) q 1 + q 2 T (q 1 + q 2 ).
Case 3 : q 1 > 1=2 or q 2 > 1=2. Suppose q 2 > 1=2 (otherwise swap q 1 and q 2 in the proof). Then q 1 < 1=2, since otherwise q 1 + q 2 > 1. Let y := 1 q 1 q 2 . As y < 1=2, an argument analogous to that in Case 1 yields T (q 1 )+T (y) T (q 1 +y), i.e., T (q 1 )+T (1 q 1 q 2 ) T (1 q 2 ). So, by (i), T (q 1 )+1 T (q 1 +q 2 ) 1 T (q 2 ), i.e., T (q 1 ) + T (q 2 ) T (q 1 + q 2 ). | 114,698 | [
"6630"
] | [
"15080",
"301309",
"328453"
] |
01485803 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485803/file/978-3-642-41329-2_10_Chapter.pdf | Karl Hribernik
Thorsten Wuest
Klaus-Dieter Thoben
Towards Product Avatars Representing Middle-of-Life Information for Improving Design, Development and Manufacturing Processes
Keywords: PLM, Product Avatar, BOL, Intelligent Products, Digital Representation, Information, Data 1
In today's globalized world, customers increasingly expect physical products and related information of the highest quality. New developments bring the entire product lifecycle into focus. Accordingly, an emphasis must be placed upon the need to actively manage and share product lifecycle information. The so-called Product Avatar represents an interesting approach to administrate the communication between intelligent products and their stakeholders along the product lifecycle. After its initial introduction as a technical concept, the product avatar now revolves around the idea individualized digital counterparts as targeted digital representations of products enabling stakeholders to benefit from value-added services built on product lifecycle information generated and shared by Intelligent Products. In this paper, first the concept of using a Product Avatar representation of product lifecycle information to improve the first phases, namely design, development and manufacturing will be elaborated on. This will be followed by a real life example of a leisure boat manufacturer incorporating these principles to make the theoretical concept more feasible.
INTRODUCTION
In today's globalized world, customers increasingly expect physical products and related information of the highest quality. New developments bring the entire product lifecycle into focus, such as an increased sensibility regarding sustainability. Accordingly, an emphasis must be placed upon the need to actively manage and share product lifecycle information.
The so-called Product Avatar [START_REF] Hribernik | The product avatar as a product-instance-centric information management concept[END_REF]] represents an interesting approach to administrate the communication between intelligent products and their stakeholders. After its initial introduction as a technical concept, the Product Avatar now revolves around the idea individualized digital counterparts as targeted digital representations of products enabling stakeholders to benefit from value-added services built on product lifecycle information generated and shared by Intelligent Products [START_REF] Wuest | Can a Product Have a Facebook? A New Perspective on Product Avatars in Product Lifecycle Management[END_REF].
During the middle of life phase (MOL) of a product a broad variety of data and consequently information can be generated, communicated and stored. The ready availability of this item-level information creates potential benefits for processes throughout the product lifecycle. Specifically in the beginning-of-life phase (BOL) of the product lifecycle, opportunities are created to continuously improve future product generations by using item-level MOL information in design, development and manufacturing processes. However, in order to make use of the information, its selection and presentation has to be individualized, customized and presented according to the stakeholders' requirements. For example, in the case of design processes, this means taking the needs of design engineers into account, and during manufacturing, the production planner.
In this paper, first the concept of using a product avatar representation of product lifecycle information to improve the first phases, namely design, development and manufacturing will be elaborated on. This will be followed by a real life example of a leisure boat manufacturer incorporating these principles to make the theoretical concept more feasible.
PRODUCT LIFECYCLE MANAGEMENT AND INTELLIGENT PRODUCTS
The theoretical foundation for the Product Avatar concept is on the one hand Product Lifecycle Management. This can be seen as the overarching data and information source from which the product Avatar retrieves the bits and pieces according to the individual needs of a stakeholder. On the other hand, this depends on Intelligent Products being able to gather and communicate data and information during the different lifecycle phases. In the following both areas are introduced as a basis for the following elaboration on the Product Avatar concept
Product Lifecycle Management
Every product has a lifecycle. Manufacturers are increasingly becoming aware of the benefits inherent in managing those lifecycles [START_REF] Sendler | Das PLM-Kompendium. Referenzbuch des Produkt-Lebenszyklus-Managements[END_REF]. Today's products are becoming increasingly complicated. For example, the amount of component parts is increasing. Simultaneously, development, manufacturing and usage cycles are accelerating [START_REF] Sendler | Das PLM-Kompendium. Referenzbuch des Produkt-Lebenszyklus-Managements[END_REF] and production is being distributed geographically. These trends highlight the need for innovative concepts for structuring and handling product related information efficiently throughout the entire lifecycle. On top that, customer demand for more customisation and variation stresses the need for a PLM at item and not merely type-level [Hribernik, Pille, Jeken, Thoben, Windt & Busse, 2010]. Common graphical representations of the product lifecycle encompass three phases beginning of life (BOL), Middle of Life (MOL) and End of Life (EOL) -arranged either in a circle or in a linear form (see Figure 1). The linear form represents the product lifecycle "from the cradle to the grave".
The social web offers a number of opportunities for item-level PLM. For example, Web 2.0-based product information acquisition could contribute to the improvement of the quality of future products [START_REF] Merali | Web 2.0 and Network Intelligence[END_REF][START_REF] Gunendran | Methods for the capture of manufacture best practice in product lifecycle management[END_REF]].
Intelligent Products
Intelligent Products are physical items, which may be transported, processed or used and which comprise the ability to act in an intelligent manner. McFarlane et al.
[McFarlane, Sarma, Chirn, Wong, Ashton, 2003] define the Intelligent Product as "...a physical and information based representation of an item [...] which possesses a unique identification, is capable of communicating effectively with its environment, can retain or store data about itself, deploys a language to display its features, production requirements, etc., and is capable of participating in or making decisions relevant to its own destiny."
The degree of intelligence an intelligent product may exhibit varies from simple data processing to complex pro-active behaviour. This is the focus of the definitions in [McFarlane, Sarma, Chirn, [START_REF] Mcfarlane | Auto ID systems and intelligent manufacturing control[END_REF]] and [START_REF] Kärkkäinen | Intelligent products -a step towards a more effective project delivery chain[END_REF]. Three dimensions of characterization of Intelligent Products are suggested by [START_REF] Meyer | Intelligent Products: A Survey[END_REF]]: Level of Intelligence, Location of Intelligence and Aggregation Level of Intelligence. The first dimension describes whether the Intelligent Product exhibits information handling, problem notification or decision making capabilities. The sec-ond shows whether the intelligence is built into the object, or whether it is located in the network. Finally, the aggregation level describes whether the item itself is intelligent or whether intelligence is aggregated at container level. Intelligent Products have been shown to be applicable to various scenarios and business models. For instance, Kärkkäinen et al. describe the application of the concept to supply network information management problems [START_REF] Kärkkäinen | Intelligent products -a step towards a more effective project delivery chain[END_REF]. Other examples are the application of the Intelligent Products to supply chain [START_REF] Ventä | Intelligent and Systems[END_REF], manufacturing control [McFarlane, Sarma, Chirn, Wong, Ashton, 2003], and production, distribution, and warehouse management logistics [Wong, McFarlane, Zaharudin, Agrawal, 2009]. A comprehensive overview of fields of application for Intelligent Products can be found in survey paper by Meyer et al [START_REF] Meyer | Intelligent Products: A Survey[END_REF].
Thus, an Intelligent Product is more than just the physical productit also includes the enabling information infrastructure. Up to now, Intelligent Products are not "socially intelligent" [START_REF] Erickson | Social systems: designing digital systems that support social intelligence[END_REF] in that they could create their own infrastructure to communicate with human users over or store information in. However, Intelligent Products could make use of available advanced information infrastructures designed by socially intelligent users, consequently enhancing the quality of information and accessibility for humans who interact with them.
PRODUCT AVATAR
One approach to representing the complex information flows connected to item-level PLM of an intelligent product is the Product Avatar. This concept describes a digital counterpart of the physical Intelligent Product which exposes functionality and information to stakeholders of the product's lifecycle via a user interface [
Concept behind the Product Avatar
The concept of the Product Avatar describes a distributed and de-centralized approach to the management of relevant, item-level information throughout a product's lifecycle [Hribernik, Rabe, Thoben, & Schumacher, 2006]. At its core lies the idea that each product should have a digital counterpart by which it is represented towards the different stakeholders involved in its lifecycle. In the case of Intelligent Products, this may also mean the implementation of digital representations towards other Intelligent Products. Consequently, the Avatar concept deals with establishing suitable interfaces towards different types of stakeholder. For Intelligent Products, the interfaces required might be, for example services, agents or a common messaging interfaces such as QMI. For human stakeholders, such as the owner, producer or designer, these interfaces may take the shape, e.g., of dedicated desktop applications, web pages or mobile "apps" tailored to the specific information and interaction needs. This contribution deals with the latter.
Example for Product Avatar Application during the MOL Phase
In order to make the theoretical concept of a Product Avatar more feasible, a short example based on a real case will be given in this section.
The authors successfully implemented a Product Avatar application for leisure boats in the usage (MOL) phase of the lifecycle for the stakeholder group "owner" by using the channel of the popular Social Network Service (SNS) Facebook. The goal was to create additional benefits for users by providing services (e.g. automatic logbook with location based services). The rationale behind using the popular SNS Facebook was that users are already familiar with the concept and that the inherent functions of the SNS expanded by the PLM based services increase the possibilities for new services around the core product of a leisure boat (see Figure 3). The Product Avatars main function in this phase was, to provide pre-defined (information) services to users. The PLM information needed were either based on a common data base where all PLM data and information for the individual product were stored or derived through a mediating layer, e.g. the Semantic Mediator (Hribernik, Kramer, Hans, Thoben, 2010), from various available databases. Among the services implemented was a feature to share the current location of the boat including an automatic update of the weather forecast employing Google maps and Yahoo weather. Additionally, information like the current battery load or fuel level were automatically shared on the profile (adjustable by the user for data security reasons). (see Figure 4)
PRODUCT AVATAR APPLICATION IN THE BOL PHASE
In this section the practical example of a Product Avatar for a leisure boat, shortly introduced with a focus on MOL in the section before, will be described in more detail focusing on the BOL phase. First the stakeholders with an impact on the BOL phase will be presented and discussed briefly. Afterwards, some insights on MOL data capturing through sensor application and the different existing prototypes are introduced. The last sub-section will then give three examples of how MOL data can be applied during the BOL phase in a beneficial way for different stakeholders.
Stakeholders
The stakeholders having an impact on BOL processes can be clustered in two main groups: data producing (MOL) and data exploiting (BOL).
The group of data producing stakeholders during the MOL phase is fairly large and diverse. The main stakeholders with the biggest impact are: Users (owners): This stakeholder controls what data will be communicated (data security). Furthermore, they are responsible for the characteristic of the data captured through the way they use the boat. Producers: This group has an impact on updates (software), what sensors are implemented and what services available all influencing the data availability and quality. Maintenance: This group on the one hand produces relevant data themselves when repairing the boat but also ensures the operation readiness of the sensors etc.
In the BOL phase, the stakeholders are more homogenious as all have a common interest of building the boat. However, they have different needs towards possible MOL data application. There are two main groups to be identified:
OEMs: This stakeholder is responsible for the overall planning and production of the boat and later the contact towards the customer. He has the strongest interest in learning about the "real" usage of the boat based on MOL data. Suppliers: This group is mostly integrated in the planning process through the OEM. However, even so indirectly included in planning activities, MOL data can be of high value for their operations.
Depending on the Customer Order Decoupling Point, the user might also fall into this category of important stakeholders during the BOL phase. However, the user at this stage is mostly considered not to be directly involved in the product developing activities and has to rely on the OEMs communication.
4.2
Capturing of MOL Data
Today's technological development, especially in the field of sensor technology, presents almost unlimited possibilities of data capturing. Of course this is limited by common sense and economic reasons.
To capture MOL data of a leisure boat, the development included three stages of prototypes.
The first stage of the so called Universal Marine Gateway (UMG) were three sensors (humidity, pressure and temperature) connected to a processing unit (here: Bea-gleBone) and mounted in an aquarium. In this lab prototype (see Figure 5) first hands on experience was gathered and the software interfaces with PLM data infrastructure was tested. The next stage, the UMG Prototype MK.II (see Figure 6) incorporated the findings of the first lab prototype on a miniature model of a boat in order to learn about the effects of a mobile application and wireless communication on the data quality and impact on capturing still in a secure environment. The final stage,UMG Prototype MK. III (see Figure 7), consists of a fully functional and live size boat where a set of sensors, based on the findings of the earlier stage tests, is implemented. This prototype will be tested under realistic settings and different scenarios. The practical implementation of the sensor equipment implies a series of challenges. E.g. the sensors need to be protected against damage coused by impact when debarking on shore. On the other hand they have to be "open" to the sourrounding environment to measure correctly. Other challenges include how the captured data is communicated "in the wild" to the data base.
Application, Limitation and Discussion
In this sub-section three exemplary cases of utilization and application of MOL data of a leisure boat by the BOL stakeholders through the Product Avatar are presented below. It is however evident that the use cases are just a short description without going into detail as this would exceed the purpose of this paper. The first use case of MOL data is based on the Product Avatar supplying data of location, temperature, humidity in combination with a timestamp to boat designers.
Ideally they can derive information on not only suitable material (e.g. what kind of wood can withstand high humidity and sun) and dimensioning of certain details (e.g. sunroof more likely to be used in tropical environments) but also on equipment needed under the circumstances (e.g. heating system or air conditioning).
Whereas the benefit of the first use case could also be realized utilizing other methods, the second one is more technical. The Product Avatar provides information directly to the suppliers of the Boat OEM, namely the engine manufacturer. Through aggregated data of, on the one side, the engine itself, e.g. rpm or heat curve and on the other side supplying information about the conditions it is used, e.g frequency, runtime, but also outside temperature etc the engine designers can reduce the risk of over-engineering. When a boat is just used a few times a year, the durability of the engine module might not be as important.
The third use case is in between the former two. Whilst it is unlikely that MOL data can influence manufacturing processes directly, it definiately can influence them indirectly through the process planning. An example is that through location based data and accompanying legal information for that location, both provided by the Product Avatar, the production planner can change the processes. So can it be necessary to e.g. add Shark-Inspired Boat Surface on the hull instead of using toxic paint as for the region the boat is mostly used the toxic paint is illegal. This could also be an application for the MOL phase again, notifying boat users not to enter a certain area (e.g. a coral reef) as they might inflict damage to the environment, which might be valued by environmelntal conscious users
CONCLUSION AND OUTLOOK
This paper presented an introduction on the basic principles of PLM and Intelligent Products as a basis for the concept of a Product Avatar as a digital representation of a physical product. After Introducing the theoretical concept and giving an example of application of PLM data during the MOL phase, the usage of MOL data during the BOL phase was elaborated. To do so, the main stakeholders of both phases were derived and the process towards data capturing on leisure boats was briefly introduced. This was followed by three hypothetical use cases on how MOL data provided by a Product Avatar can be beneficial for the stakeholders.
In conclusion, the Product Avatar can only be as good as the existing data and information and, very importantly, the knowledge on what information and data is needed in what way (e.g. format) through which channel by which individual stakeholder.
In the next steps the Product Avatar concept will be expanded and evaluated further through scenarios as described in the use cases.
Fig. 1 .
1 Fig. 1. -Phases of the Product Lifecycle
Fig. 2 .
2 Fig. 2. -Digital, Stakeholder specific Representation of a Product through a Product Avatar
Fig. 3 .
3 Fig. 3. -Screenshot of the Facebook Product Avatar for the Usage phase (MOL)
Fig. 4 .
4 Fig. 4. -Screenshot of an excerpt of information provided by the Product Avatar
Fig. 5 .
5 Fig. 5. -Lab Prototype "Universal Marine Gateway" (UMG) with Example Sensors
Fig. 6 .
6 Fig. 6. -UMG Prototype Mk. II "in Action"
Fig. 7 .
7 Fig. 7. -Prototype boat for sensor integration and testing
ACKNOWLEDGEMENT
This work has partly been funded by the European Commission through the BOMA "Boat Management" project in FP7 SME-2011-1 "Research for SMEs". The authors gratefully acknowledge the support of the Commission and all BOMA project partners. The results presented are partly based on a student project at the University of Bremen. The authors would like to thank the participating students for their significant contributions: Anika Conrads, Erdem Galipoglu, Rijad Merzic, Anna Mursinsky, Britta Pergande, Hanna Selke and Mustafa Severengiz. | 20,657 | [
"996300",
"991770",
"989864"
] | [
"217679",
"217679",
"217679"
] |
01485806 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485806/file/978-3-642-41329-2_13_Chapter.pdf | Steve Rommel
email: steve.rommel@ipa.fraunhofer.de
Andreas Fischer
email: andreas.fischer@ipa.fraunhofer.de
Additive Manufacturing -A Growing Possibility to Lighten the Burden of Spare Parts Supply
Keywords: Additive Manufacturing, Spare Parts, Spare Parts Management 1
ou non, émanant des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
INTRODUCTION
Considering the global but also the local market and the competition of corporations within these markets constant improve of products and processes are required to search and find more cost effective solutions to manufacture products. At the same time services and products need to offer growing possibilities and ways for customers to individualize, specialize and improvise these products. This fact holds also true for spare parts and its market importance.
Today's spare parts industry is characterized by high volume production with sometimes specialized products, long distance transportation and extensive warehousing, resulting in huge inventory of spare parts. These spare parts even hold the risk of being outdated or not usable at the time of need so that they are often scrapped afterwards. On an industrial level, reacting to this burden, companies are competing or collaborating with OEMs, to provide a variety of maintenance services and products which in turn are limited with regards to the broadness and flexibility of their service solutions, especially when design or feature changes to spare parts (copies or OEMmanufactured) are required.
Additive Manufacturing offers new sometimes unimaginable possibilities for manufacturing a product which have the potential to change the logistical and business requirements and therefore create the possibilities to lighten the burden. Being a new possibility the following aspects of business need to be further developed with the focus on additive manufacturing: standardization of manufacturing processes logistics product and process management certification process Product and business management.
The goal of funded and private research projects is to develop a model which incorporates the old and new requirements of manufacturing and the market demands to assist companies to better compete in the market settings.
ADDITITVE MANUFACTURING AND SPARE PARTS MANAGEMENT
ADDITIVE MANUFACTURING TECHNOLOGIES
Additive Manufacturing and its technologies involve all technologies used to manufacture a product by adding (placing and bonding) layers of the specific material to each other in a predetermined way. These so called layers are generally speaking 2Dcross-sections of the products 3D-model. AM therefore is creating the geometry as well as the material characteristics during the build predetermined by the material selected. The contour is created in the x-y-direction. The z-direction creates the volume and therefore the 3rd dimension.
Additive Manufacturing offers the possibility to optimize products after each run of parts being build and the lessons learned. There is generally speaking little to no limitations to the freedom in design given by this process. Complex shapes and functional parts can be realized by these innovative processes directly from CAD data. Two examples of technologies are Selective Laser Sintering (SLS) as shown in Figure 1 and Fused Deposition Modeling (FDM) as shown in Figure 2. In order to hold a finished real product in hand two main process steps need to be performed [START_REF] Gebhardt | Generative Fertigungsverfahren: Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF]):
1. Developing of the cross-sections (layers) of the 3D-model 2. Fabrication of the physical product. In order to withstand the test of being a true addition or even alternative to conventional manufacturing technologies additive manufactured products are looked at possessing the same mechanical and technological characteristics as comparable conventionally manufactured products. This does not equal the fact that its material characteristics have to be exactly the same to the once of conventional technologies. This can be a false point of view and a limiting factor to the use of additive manufacturing, because the new freedom of design offered also offers the possibility to create new products which may look completely different but perform the required function equally well or better.
The thinking has to shift from "get exactly the same product and its material characteristics with another technology, so I can compare it" to "get the same performance and functionality of the product regardless of the manufacturing technology" used.
Besides the benefit of using the 3D-model data directly for the manufacturing process of a product exist additional benefits listed below [START_REF] Gebhardt | Generative Fertigungsverfahren: Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Hopkinson | Rapid Manufacturing: An Industrial Revolution for the Digital Age: The Next Industrial Revolution[END_REF]:
Benefits of AM Integration of functions, increase in complexity of products and components and including of internal structures for the stability of the product Manufacturing of products very difficult or not possible to manufacture with traditional/conventional manufacturing technologies (e.g. undercuts …) Variation of spare products -ability to adopt "local" requirements to the same products and therefore supply local markets with the product and its expected features, low effort and no true impact on the manufacturing Customizationtwo forms of customization are possible -Manufacturer customization (MaCu), and Client customization (CliCu) No tooling, and reduction in process steps One-piece or small volume series manufacturing is possible, Product-on-demand Alternative logistic strategies based on the current requirements give the AM an enormous flexibility with regards to the strategy of the business model Table 1. Benefits of Additive Manufacturing
SPARE PARTS MANAGEMENT
Spare parts nowadays have become a sales and production factor, especially for any manufacturing company with a highly automated, complex and linked machinery park and production setup. Any production down-time caused by the failure of a component of any equipment within production lines will lead not only to capacity issues but also to monetary losses. These losses may come from [START_REF] Biedermann | Ersatzteilmanagement: Effiziente Ersatzteillogistik für Industrieunternehmen (VDI-Buch)[END_REF]:
Distribution: sales lost due to products not being manufactured Production: unused material, increase in material usage due to reduced capacity, additional over-time for employees to balance the inventory and make up for the lost production, additional maintenance costs of 2-30% of overall production costs Purchasing and supply chain: increase in storage for spare parts and costs incurred due to the purchase of spare parts These points alone illustrate the need for each corporation to choose the right spare parts strategy in order to reduce to risk to the business to a minimum by determining the right balance between the minimum inventory and the availability to delivery spare parts intime to prevent production down-time or dissatisfied customers. When choosing this strategy on important aspect is the type of spare parts. According to DIN 13306 a spare part is an "item intended to replace a corresponding item in order to retain or maintain the original required function of the item". Biedermann is defining the items in the following way [START_REF] Biedermann | Ersatzteilmanagement: Effiziente Ersatzteillogistik für Industrieunternehmen (VDI-Buch)[END_REF]:
Spare part: item, group of items or complete products intended to replace damaged, worn-out or missing items, item groups or products Reserve/Back-up item: item, which is allocated to one or more machines (anlagen), and therefore not used individually and in disposition and stored for the purpose of maintenance, Back-up items are usually expensive and are characterized by a low inventory level with a high monetary value Consumable item: item, which due to its setup will be consumed during use and has no economically sound way of maintenance Another aspect in determining the strategy is the type of maintenance the company is choosing or offering. There are three basic strategies:
Total Preventive Maintenance (TPM): characterized by the performance of inspections, maintenance work and replacement of components prior to the failure of the equipment. Scheduled Maintenance or Reliability Centered Maintenance: strategy where the replacement of an item is as the term says planned ahead of time. Corrective Maintenance or Repair which is also called Risk Based Maintenance: an item fails and will be replaced in order to convert the installations or equipment back into production mode.
Besides the mentioned type of spare part and the maintenance strategy the following aspects play an equally important role when selecting the strategy failure behavior and failure rate of the item reliability requirements level of information available and receivable back-up solutions Alternatives amongst others.
The decision on the strategy and the type of spare parts determines the logistics and supply chain model to be chosen and therefore the cost for the logistics portion.
SPARE PARTS LOGISTICS
The current spare parts logistics strategies are typically focusing on the procurement of spare parts from an already established supplier. In many cases this supplier is responsible for manufacturing the initial primary products. This bares the benefits of an already established business relationship, defined and common understanding of the requirements for the products and services offered, clear definition of the responsibilities, established logistic, established payment modalities and a common understanding of the expectations of either party. On the other hand there are some downfalls like lack of innovative ideas, unwanted dependence of each other, or an increase in logistics costs to name a few.
The logistics strategy itself is determined by two groups of factors:
NEW PROCESS DESIGN AND BUSINESS MODEL
PROPOSED PROCESS FLOW
Derived from the limitations and effects of the current supply strategies of spare parts, customer feedback from questionnaires and based on the processes of AM in combination with the product figure 5 illustrates a generic conventional process flow model and figure 6 the proposed preliminary process flow model.
As with standard process models the proposed process flow model covers all the process steps starting with the input from the market in the form of customer orders and customer feedback up until the delivery of the finished product to the customer.
Fig. 5. Conventional process flow
Fig. 6. Preliminary Process Model
The process model will remain subject to continuous updates for some time due to the early development stage of the technologies. Producing parts using Additive Manufacturing technologies has an impact on multiple levels and multiple areas of a business' operation.
IMPACT OF AM TECHNOLOGIES ON SPARE PARTS MANUFACTURING
The impact of using AM technologies to manufacture spare parts is described in the following subsections. This section presents only an overview of the main benefits.
REDUCTION OR ELIMINATION OF TOOLING.
Conventional manufacturing like injection molding requires various tools in order to fabricate a product from start to finish. This results not only in costs for the tool build and tooling material, but also in time for the tool build, setup procedures during production periods and maintenance activities in order to keep the tools and therefore production running. Additionally tooling often has to be stored for a defined time after the end of production (EOP) to be able to produce spare parts when needed.
There are two possible alternatives to the conventional way of manufacturing spare parts: one is the fabrication of products, including their spare parts, strictly using Additive Manufacturing technologies from the start, thus eliminating tooling completely. The other alternative is to manufacture primary products using conventional technologies, including tooling, but to manufacture spare parts using Additive Manufacturing technologies.
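A minimal sketch of the trade-off between the two alternatives is given below. All cost figures (tooling cost, unit costs, storage cost) are hypothetical placeholders chosen for illustration only, not values taken from this paper.

```python
# Illustrative break-even comparison: conventional production with tooling vs.
# tool-less Additive Manufacturing of spare parts. All numbers are assumptions.

def total_cost_conventional(units, tooling_cost=50_000.0, unit_cost=5.0,
                            tool_storage_per_year=1_000.0, years_after_eop=10):
    """Conventional route: tooling investment plus tool storage after EOP."""
    return tooling_cost + unit_cost * units + tool_storage_per_year * years_after_eop

def total_cost_am(units, unit_cost=40.0):
    """AM route: no tooling, but a higher cost per part."""
    return unit_cost * units

if __name__ == "__main__":
    for units in (100, 500, 1_000, 2_000, 5_000):
        conv = total_cost_conventional(units)
        am = total_cost_am(units)
        better = "AM" if am < conv else "conventional"
        print(f"{units:>5} spare parts: conventional {conv:>9.0f}, AM {am:>9.0f} -> {better}")
```

Under such assumptions, AM tends to win for small spare-part volumes, while conventional tooling pays off only above a break-even quantity.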
In order to decide which alternative is to be preferred, the suggestion is to perform an analysis of the spare part in question to determine the potentials and risks of using Additive Manufacturing. Depending on the spare part's characteristics and the spare parts strategy, the following benefits can be achieved: reduction or elimination of tooling; freeing up storage space occupied by tooling that is no longer needed; freeing up storage space occupied by already produced products; reduction in logistics costs; freeing up production time otherwise spent producing parts ahead of time after EOP; and reduction of obsolete or excessive spare parts being produced at EOP and disposed of if not required.
In the case of spare parts an additional benefit is that product failures causing the need for spare parts can be examined and corrective actions can be implemented into the product design without the need to also change or update tooling data, tooling and processes.
REDUCING COMPLEXITY.
The manufacturing of spare parts directly from 3D CAD data significantly reduces the complexity in organizational and operational processes e.g. reduction of data transfers and conversion for the various tools and equipment.
On the other hand, handling data is much more convenient than handling real parts, but it also requires a secured loop in order to ensure correct data handling and storage. Within the mega trend of customization and individualization of products it is very easy to produce many different versions and personalized products with very little additional effort, both short-term and long-term. Data handling for all the versions will be the limitation.
MANUFACTURING "ON DEMAND" AND "ON LOCATION".
The main advantage of additively manufactured spare parts is the possibility to produce these parts on demand. Two alternative models of this process are possible. In the first, the spare parts are kept in stock in very small numbers; customer demand triggers the delivery of the parts from stock and the immediate production of the desired number of parts to refill the stock. The second is to eliminate the stock and directly produce the number and version of parts that the customer demands. The delivery time will be longer, but no capital will be tied up in spare parts sitting in storage. Another advantage is the future production on location: production on location envisions sending the 3D-CAD part data with additional information regarding the building process, materials and tolerances to a production site close to the customer. The parts could be manufactured in independent or dependent production facilities that have clearly defined and certified Additive Manufacturing capacities. This model could have a large impact on the logistics and will be evaluated. The impact of production on demand, on location and with local material is recapped in Table 5.
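The two on-demand models described above can be contrasted with a rough sketch like the one below; the unit value, stock level and lead times are hypothetical assumptions, not figures from this paper.

```python
# Rough comparison of the two on-demand models: a small safety stock that is
# refilled after each demand vs. pure production to order. Numbers are assumed.

def stock_and_refill(unit_value=200.0, safety_stock=5, ship_days=2):
    """Model 1: deliver from a small stock, rebuild the stock immediately."""
    return {"capital_tied_up": unit_value * safety_stock, "lead_time_days": ship_days}

def make_to_order(build_days=3, ship_days=2):
    """Model 2: no stock, build the requested quantity on demand."""
    return {"capital_tied_up": 0.0, "lead_time_days": build_days + ship_days}

print("stock & refill :", stock_and_refill())
print("make to order  :", make_to_order())
```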
BUSINESS MODEL OPPORTUNITIES.
Staying competitive using traditional business model concepts is becoming more and more difficult. Customization and the response time to customer needs are two critical success factors. 21st-century companies have to focus on moving physical products as well as their information quickly through retail, distribution, assembly, manufacture and supply. This is part of the value proposition manufacturing and service providers offer to their customers. Using Additive Manufacturing can provide a significant competitive advantage to a company.
Business models which consist of a deeper cooperation between suppliers and receivers of on-demand parts, with possible virtual networks, will have to be developed. The stakeholders involved vary depending on the type of spare part and the setup between the manufacturer and the user. Depending on the business model, each player has a different level of involvement and therefore adds a different level of value creation to the overall product.
CONCLUSIONS AND OUTLOOK
Implementing and using Additive Manufacturing to produce spare parts is a viable option not only for becoming familiar with a new, emerging manufacturing technology, but it also presents opportunities to offer products and services to customers which fit their desires and requirements regarding time- and cost-effective delivery.
It is, however, important to take into account that Additive Manufacturing also has its current limitations, such as part size, surface finish quality and production volume (number of parts). Additive Manufacturing and its benefits have the potential for an enormous economic impact by reducing inventory levels to an absolute minimum as well as reducing logistics costs significantly.
Fig. 1. Schematic diagram of SLS [following VDI 3404]
Fig. 3. Two main process steps of AM
Fig. 4. Process steps of AM
Fig. 7. Key stakeholders of an Additive Manufactured spare parts logistics
Table 2. Parameter selection for inbound logistics | Rommel, Fraunhofer IPA (following Michalak 2009)
Place of spare parts production: internal / external
Location of spare parts manufacturer (sourcing): local / domestic / global
Number of possible spare parts manufacturers: single / multiple
Vertical production integration: components / modular
Allocation concepts: stock / JIT / postponement

Table 3. Parameter selection for outbound logistics | Rommel, Fraunhofer IPA (following Michalak 2009)
Outbound logistics structure, vertical (steps of distribution): single-step / multiple-step
Outbound logistics structure, horizontal (number of distribution units): single / multiple
Sales strategy: intensive / selective / exclusive
Storage location structure (if needed): central / local
Table 4. Parameters for selecting the storage location strategy | Rommel, Fraunhofer IPA (following Schulte 2005)
Assortment: broad (trending towards centralized storage) / limited (trending towards decentralized storage)
Delivery time: sufficient (centralized) / fastest delivery, specific time … (decentralized)
Product value: high (centralized) / low (decentralized)
Level of concentration of manufacturing sites: one source (centralized) / multiple sources (decentralized)
Customer structure: few big-sized companies (centralized) / many small-sized companies (decentralized)
Specific storage requirements: yes (centralized) / no (decentralized)
Specific national/regional requirements: few (centralized) / many (decentralized)
Table 5. Impact of Additive Manufacturing on spare parts
On demand: no more warehousing for spare parts, including space, building maintenance, energy for climate control and workers…; no more logistics for scrapping unused old spare parts; no more time limitations for spare parts support
On location: worldwide service without limitations; no more logistics for end products; faster response time over long distances; social benefits of job creation in the local area; cultural adaption
Local material: reaction to local requirements; environmentally friendly; much less raw material logistics
| 20,043 | ["1003698", "1003699"] | ["443235", "443235"] |
01485808 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485808/file/978-3-642-41329-2_15_Chapter.pdf | Karlheinz J Hoeren
email: karlheinz.hoeren@uni-due.de
Gerd Witt
email: gerd.witt@uni-due.de
Karlheinz P J Hoeren
Design-Opportunities and Limitations on Additive Manufacturing Determined by a Suitable Test-Specimen
Keywords: Additive Manufacturing, Laser Beam Melting, Fused Layer Modeling, Laser Sintering, test-specimen
INTRODUCTION
Additive manufacturing can be described as a direct, tool-less and layer-wise production of parts based on 3D product model data. This data can be based on image-generating measuring procedures like CT (computed tomography), MRI (magnetic resonance imaging) and 3D scanning, or, as in the majority of cases, on a 3D-CAD construction. Due to the layer-wise and tool-less build-up principle, additive manufacturing offers a huge amount of freedom for designers compared to conventional manufacturing processes. For instance undercuts, lightweight constructions or inner cavities can be built up without a significant rise in manufacturing costs. However, there are some specific limitations on the freedom of construction in additive manufacturing. These limitations can partly be attributed to the layer-wise principle of build-up, which all additive manufacturing technologies have in common, but also to the individual restrictions that come along with every single manufacturing technology. [START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF] [2] [START_REF] Gebhardt | Generative Fertigungsverfahren[END_REF] In the following, after a short description of the additive manufacturing technologies Laser Beam Melting (LBM), Laser Sintering (LS) and Fused Layer Modeling (FLM), the geometry of a test-specimen that has been designed by the chair of manufacturing technologies of the University of Duisburg-Essen is introduced. Based on this geometry, the design opportunities and limitations of the described technologies are evaluated.
LASER BEAM MELTING (LBM)
Besides Electron Beam Melting, LBM is the only way of directly producing metal parts in a powder-based additive manufacturing process. [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fun-damentals, terms and definitions, quality pa-rameter, supply agreements[END_REF] [START_REF] Wiesner | Selective Laser Melting -Eine Verfahrensvariante des Strahlschmelzens[END_REF] In the course of the LBM process, parts are built up by repeatedly lowering the build platform, recoating the build area with fresh powder from the powder supply and selectively melting the metal powder by means of a laser beam. For the schematic structure of an LBM machine see Figure 1. [START_REF] Sehrt | Möglichkeiten und Grenzen der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF] Melting metal powder requires a lot of energy, so a large amount of thermal energy is led into the build area. In order to lead this process heat away from the build plane, and in order to keep the parts in place, supports (or support structures) are needed in Laser Beam Melting. Support structures have to be placed underneath every surface that is inclined by less than about 45° towards the build platform. They are built up simultaneously with the part and consist of the same material. Thus parts have to be mechanically separated from their supports after the LBM process, for example by sawing, milling or snapping off. As a result, the surface quality in supported areas is significantly reduced and LBM parts are often reworked with processes like abrasive blasting, barrel finishing or electrolytic polishing. [7] [8]
Fig. 1. -schematic illustration of the LBM process
LASER SINTERING (LS)
In Laser Sintering, unlike LBM, plastic powder is used as the basic material. Regarding the procedure, LBM and LS are very similar (see Figure 2). One of the main differences is that in LS no support structures are necessary. This is because in LS the powder bed is heated to a temperature just below the melting point of the powder, so the energy that has to be introduced by the laser for melting the powder is very low. Therefore only little additional heat has to be led away from the build area. On the one hand, this amount of energy can be compensated by the powder; on the other hand, due to the smaller temperature gradient, the curl effect is less likely to occur. The curl effect causes a part to bend inside the powder bed and become deformed or even collide with the recoating unit. The latter would lead to a process breakdown.
FUSED LAYER MODELING (FLM)
In FLM, slices of the part are built up by extruding an ABSplus wire through the heated nozzles of a movable printing head (see Figure 3). The printing head is moved in the x-, y-plane of the FLM machine to build up a layer. When a layer is finished, the build platform is lowered by a layer thickness and the next layer is built up. [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fun-damentals, terms and definitions, quality pa-rameter, supply agreements[END_REF] [1]
Fig. 3. -schematic illustration of the FLM process
Since FLM is not a powder-based procedure like LS or LBM, there is no powder to prevent the heated ABSplus from bending up or down inside the build chamber. Therefore in FLM, supports are needed. In contrast to supports used in LBM, these supports only have the function of holding the part in place. One special thing about supports in FLM is that a second, soluble material is extruded through a second nozzle in order to build up the supports. This way, when the FLM process is finished, the part can be put into an alkaline solution and the supports are dissolved. As a consequence, supports are not the main reason for the low finish quality of parts produced by FLM. However, the high layer thickness, which is one of the factors that make FLM cheap compared to other additive manufacturing technologies, impacts the finish quality in a negative way.
DESIGN OF THE TEST-SPECIMEN
The chair of manufacturing technologies of the University of Duisburg-Essen has developed a test-specimen to convey the limits of additive manufacturing technologies. This specimen is designed to illustrate the smallest buildable wall thicknesses, gap widths and cylinder and bore diameters depending on their orientation inside the build chamber. Thus diameters/thicknesses between 0.1 and 1 mm are built up at intervals of 0.1 mm and diameters/thicknesses between 1 and 2 mm are built up at intervals of 0.25 mm (see Figure 4). In addition, the test specimen contains walls with different angles towards the build platform (x-, y-plane), in order to show the change in surface quality of downskin surfaces with increasing/decreasing angles. Furthermore a bell-shaped geometry is built up in order to give a visualisation of the so-called stair effect. This effect characterises the lack of reproduction accuracy, depending on the orientation of a surface towards the x-, y-plane, that results from parts being built up layer-wise. For a further evaluation of the test-specimen, besides visual inspection, the distances between individual test features are made large enough to enable the use of a coordinate measuring machine. However, the chief difference in the design of this test specimen compared to other test-specimens of the chair of manufacturing technologies [START_REF] Wegner | Design Rules For Small Geometric Features In Laser Sintering[END_REF] [START_REF] Reinhardt | Ansätze zur Qualitäts-bewertung von generativen Fertigungsverfahren durch die Einführung eines Kennzahlen-systems[END_REF] is that this specimen is designed to suit the special requirements that come along with additive manufacturing by technologies using supports (especially LBM). Besides the features described earlier, these special requirements result in the following problems:
CURL-EFFECT
The curl effect, which was already mentioned in the description of LBM, needs special attention. Since the production of large surfaces inside the x-, y-plane is directly connected with a stronger occurrence of the curl effect, this has to be avoided. Therefore the test-specimen is divided into eleven platforms containing the individual test features. The platforms are connected by small bridges, positioned at a z-level below the upskin surfaces of the platforms. This way, a large test-specimen can be produced without melting large surfaces, especially at a z-level that may have an influence on the features to be evaluated.
SUPPORTS
As described before, in some additive manufacturing processes supports have to be built up with the parts for several reasons. Since supports often have to be removed manually, the geometry of the test-specimen should require few and not very massive supports. This way production costs for support material and post-processing requirements can be kept at a low level.
In most cases the critical angle between the build platform and a surface to be built is at about 45 degrees. Thus the downskin surface of each platform of the test specimen is equipped with a 60-degree groove. This way the amount of support that has to be placed under the platforms is significantly reduced without lowering process stability (see Figure 5). Additionally there are two kinds of test features on the test-specimen which require supports. Since walls and cylinders that are oriented parallel to the build platform cannot be built without supports, the platforms containing these features are placed at the outside of the test specimen. By this means, the features are accessible for manual post-processing and visual inspection.
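The 45-degree rule of thumb used above can be expressed as a small check; the exact critical angle is material- and parameter-dependent, so the value below is only the assumption stated in the text.

```python
# Minimal sketch: flag a downskin surface as needing support structures when
# its inclination towards the build platform falls below a critical angle.

def needs_support(downskin_angle_deg: float, critical_angle_deg: float = 45.0) -> bool:
    return downskin_angle_deg < critical_angle_deg

# The 60-degree groove under each platform keeps its downskin faces above the
# assumed critical angle, which is why almost no support is required there.
for angle in (90, 60, 50, 45, 40, 0):
    print(f"{angle:>3} deg: support needed = {needs_support(angle)}")
```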
RECOATING
In powder- or liquid-based additive manufacturing processes, different recoating systems are used to supply the build platform with fresh material. One thing most recoating systems have in common is some kind of blade that pushes the material from one side of the build chamber to the other. Especially in LBM, the surfaces that have just been built up tend to show local elevations. In the majority of cases, these elevations are the edges of the part, slightly bending up as a result of the melting process. The combination of these circumstances can cause scratching between the recoating unit and the part to varying extents, depending on the build material and the build parameters used. In order to keep this phenomenon from affecting the features of the test-specimen, all platforms of the test specimen are oriented at an angle of 45 degrees towards the recoating unit. This way, harsh contacts between the recoating unit and the long edges of the platforms can be avoided. However, there is another problem connected with the recoating direction that may have an influence on the results when small features are built up. As the test-specimen is designed to show the most filigree features that can be produced with each additive manufacturing process, the diameters and wall thicknesses have to be decreased to the point where they cannot be built up anymore. At this point, the features either are snapped off by the recoating unit, or they cannot be built up as a connected object anymore. In both cases, fragments of the test features are pushed across the powder bed by the recoating unit. In order to prevent these fragments from influencing the build process, by getting stuck between the recoating unit and another area of the part or by snapping other test features, the platforms and test features are arranged in a suitable way. For instance, all diameters and wall thicknesses decrease along the recoating direction. Additionally, platforms with gaps are placed behind platforms with filigree features, so the space above them can be used as an outlet zone.
PRODUCTION OF TEST-SPECIMENS WITH LBM, LS AND FLM
In the following, the results of visual inspections and measurements on test-specimens built of Hastelloy X (LBM with an EOSINT M270 Laser Beam Melting system), glass-filled polyamide (LS with a FORMIGA P 100 Laser Sintering system) and ABSplus (FLM with a Stratasys Dimension 1200es Fused Layer Modeling system) are discussed. For inspections and measurements, the test-specimens made of Hastelloy X and glass-filled polyamide have been freed from powder adhesions by blasting with glass beads (LS), respectively corundum and glass beads (LBM). On the test-specimen produced by FLM, only the supports have been removed by putting it into an alkaline bath.
WALL-THICKNESSES
A look at the minimum producible wall thicknesses shows that the most filigree walls can be produced in LBM. Additionally, walls in LBM show the slightest deviation from the specified dimensions (see Figure 7). However, in LBM there is a considerable difference between walls oriented parallel to the recoating unit and walls oriented orthogonal to the recoating unit. The walls oriented parallel to the recoating unit can only be built down to a thickness of 0.7 mm; thinner walls have been snapped off by the recoating unit (see Figure 6). Especially in FLM, but also in LS, one can observe that below a certain threshold, in spite of decreasing nominal dimensions, the measured wall thicknesses do not become any thinner. In FLM this can be explained by the fact that an object that is built up consists at least of its contours. Taking into account the diameter of an ABSplus wire and the fact that it is squeezed onto the former layer, it is clear that the minimum wall thickness is situated at about 1 mm. In LS, the explanation is very similar; however, the restricting factor is not the thickness of a wire, but the focus diameter of the LS system in combination with the typical powder adhesions. The wall thicknesses along the z-axis in the powder-based manufacturing technologies (LBM and LS) are always slightly thicker than the nominal size (see Figure 8). This is explained by the fact that, when melting the first layers of the walls, especially in LS, excess energy is led into the powder underneath the walls and melts additional powder particles. In LBM this effect is less pronounced, since the manual removal of supports affects the results.
Fig. 8. -results of measuring minimal wall-thicknesses along the z-axis
In FLM, the course of the measured wall thicknesses is erratic within the range from 0.25 to 1.0 mm. This can be explained by considering the layer thickness in FLM, which is 0.254 mm. That way, a nominal thickness of 0.35 mm, for example, can be represented by either one or two layers (0.254 mm or 0.508 mm). Since the resolution in FLM is very coarse, this effect can also be seen by a visual inspection of the walls (see Figure 9).
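The step-like course can be reproduced with a small sketch: a wall built along z can only be an integer number of layers thick. The rounding convention used by the slicer is an assumption here; the text only states that one or two layers are possible.

```python
# Sketch of the layer quantisation that makes measured FLM wall thicknesses
# erratic: with 0.254 mm layers, only integer multiples of the layer thickness
# can be built. Rounding to the nearest multiple is an assumed convention.

LAYER_MM = 0.254

def buildable_thickness_mm(nominal_mm, layer_mm=LAYER_MM):
    layers = max(1, round(nominal_mm / layer_mm))
    return layers * layer_mm

for nominal in (0.25, 0.35, 0.5, 0.6, 0.75, 0.9, 1.0):
    print(f"nominal {nominal:.2f} mm -> built {buildable_thickness_mm(nominal):.3f} mm")
```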
CYLINDERS
The test-specimen contains cylinders with a polar angle of 0 degrees and cylinders with a polar angle of 45 and 90 degrees, each in the negative x- and y-direction. The orientation along the x-axis has been chosen since process stability in LBM is a lot higher if the unsupported cylinders with a polar angle of 45 degrees do not grow against the recoating direction.
Fig. 10. -results of measuring minimal cylinder-diameters
Comparing the cylinders with a polar angle of 0 degrees shows again that in LBM the most filigree features can be built up with the best accuracy (see Figure 10). However, breaks are visible at a height of about 5 mm in cylinders with a diameter of less than 0.9 mm (see Figure 11). These breaks are a result of the scratching between the cylinders and the recoater blade. This time the blade did not snap off the cylinders, since the geometry is more flexible; the cylinders were thus able to flip back into their former positions and be built on. The results in LS are comparable to those of the minimal wall thicknesses along the x- and y-axis (see Figure 10). The smallest possible diameter in LS is 0.5 mm. In FLM, however, only cylinders with a diameter of 2 mm can be built up.
Fig. 12. -form deviation of FLM cylinders
The results concerning cylinders with a polar angle of 45 and 90 degrees in LBM and LS correlate with the results of cylinders with a polar angle of 0 degrees regarding their accuracy. In FLM it is striking that smaller diameters can be built with increasing polar angles (see Figure 12). At a polar angle of 90 degrees, even cylinders with a diameter of 0.1 mm can be built. However, with increasing polar angles the form deviation in FLM becomes more visible. Due to the coarse resolution in FLM, caused by thick layers and a thick ABSplus wire, the deviation in form and diameter for small cylinders becomes so large that inspecting their diameter is not possible anymore from 0.9 mm downwards (see Figure 13).
GAPS AND BORES
The evaluation of gaps and bores is reduced to a visual inspection. This is due to the fact that the accuracy of such filigree bores cannot be usefully inspected with a coordinate measuring machine, since the diameter of the measurement tip would be on the same scale as the diameter of the bores and the irregularities that are to be inspected. The results of the visual inspection are summarised in Table 1. One striking aspect concerning bores in LS is their quality, which is worse compared to the other manufacturing technologies. This becomes clear both by inspecting the smallest depictable diameters and by taking a look at the huge form deviation of bores in LS (see Figure 14). The explanation for both form deviation and resolution is found in the way energy is contributed in LS. As described above, in LS less energy is necessary to melt the powder compared to LBM. Thus the threshold between melting the powder and not melting the powder is much smaller. Consequently, if excess energy is led into the part, surrounding powder is melted and form deviations occur.
ANGLES TOWARD BUILD-PLATFORM
The test-specimen contains five walls, inclined from 80 to 40 degrees towards the build platform in steps of 10 degrees (see Figures 15-17). These walls serve as a visualisation of the decreasing surface quality with decreasing angles towards the build platform. Again the walls are inclined towards the negative x-direction in order to raise process stability and avoid process aborts. If possible, these walls should be built without support structures, so deviations in form and surface quality can be displayed within the critical area. In LS, the surface quality appears hardly affected by different angles towards the build platform (see Figure 15). Even at an angle of 40 degrees, the stair effect (visibility of layers on strongly inclined walls) is not visible. Taking a look at the walls built by FLM, it becomes clear that the stair effect in FLM is visible right from the beginning (see Figure 16). This is due to the coarse resolution of FLM. Additionally, the wall inclined by 40 degrees has an even worse surface quality than the other walls. In FLM, supports are created automatically, so users are not able to remove supports before starting an FLM process. The wall inclined by 40 degrees was built up with supports; thus the lack of surface quality results from the connection between supports and part.
Fig. 17. -Anlges towards build-platform in LBM
The walls in LBM convey a strong influence of the angle between part and build platform on the surface quality of downskin surfaces (see Figure 17). A first discoloration of the surface can be seen on the wall inclined by 60 degrees. This discoloration is a result of process heat not being able to leave the part, due to the fact that these walls do not have support structures. At an inclination of 50 degrees, a serious deterioration of the surface quality becomes visible. This deterioration becomes even stronger at an inclination of 40 degrees. Additionally, the edge of the wall inclined by 40 degrees appears frayed. The reason for this can be found in the fact that with decreasing angle towards the build platform and increasing heat accumulation inside the part, the curl effect becomes stronger. In this case, the recoating unit starts scratching the curled edge. This is a first sign that at this angle of inclination process aborts may occur, depending on the orientation of the part towards the recoating unit.
STAIR-EFFECT
The bell-shaped feature on the test-specimen serves as a visualisation of the stair effect. Comparing the built test-specimens, a clear difference in surface quality can be recognised.
In LBM, steps are only slightly visible at an angle of 10 to 15 degrees towards the build platform (see Figure 18). Due to the thin layer thickness in LBM, the whole bell profile appears very fine and smooth. Taking a look at the LS bell profile, it becomes clear that the surfaces are a bit rougher than in LBM. The stair effect is already visible at an angle of 20 degrees. In FLM, as mentioned above, single layers are always visible due to the coarse resolution of the technology. In spite of this, the bell profile conveys that, using the FLM technology, angles of less than about 20 degrees inevitably lead to a loss of shape.
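The geometric reason for these observations can be sketched as follows: each layer of thickness d produces a horizontal step of width d / tan(theta) on a surface inclined by theta towards the build platform. The LS layer thickness used below is a typical assumed value; the LBM and FLM values follow the thicknesses mentioned in these papers.

```python
# Sketch of the stair effect: horizontal step width per layer as a function of
# surface inclination. Layer thicknesses are illustrative assumptions.
import math

LAYER_MM = {"LBM": 0.020, "LS": 0.060, "FLM": 0.254}

def step_width_mm(layer_mm, angle_deg):
    return layer_mm / math.tan(math.radians(angle_deg))

for process, d in LAYER_MM.items():
    widths = ", ".join(f"{a} deg: {step_width_mm(d, a):.3f} mm" for a in (40, 20, 15, 10))
    print(f"{process}: {widths}")
```

The step width grows rapidly below about 20 degrees, which is consistent with the loss of shape observed for the FLM bell profile.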
CONCLUSIONS
Comparing the different test-specimens built by LBM, LS and FLM, the first thing to be recognised is that in LBM the most filigree structures can be produced with the best accuracy. However, it becomes clear that the LBM process is much more complex than, for example, the FLM process. Both designers and operators have to be aware of the typical constraints that are connected with the process-specific characteristics of LBM. This becomes particularly obvious considering the huge influence that part orientation and supports have on process stability and part quality. As mentioned above, LS is very similar to LBM concerning the course of the procedure. This similarity can also be seen when comparing the test-specimens. In LS, most features are just slightly less filigree than in LBM. Due to the fact that support structures are not needed for LS, a lot of time and money can be saved in pre- and, as a consequence, post-processing. In addition, the process handling is easier and process aborts are a lot less likely.
Taking a look at the FLM process, it is obvious that this technology is far less complex and filigree than LBM and LS. Fine features often cannot be displayed and deviations in form and dimension can often be recognised. However, the FLM process is very easy to handle. Supports are constructed automatically and, when the part is built up, they can be removed in an alkaline bath. Additionally, no precautions have to be taken and no cleaning effort is necessary for handling any powder. The FLM technology is much cleaner than LBM and LS and therefore much more suitable for an office environment. The last thing to be taken into account for this comparison is process costs: the FLM technology is a lot cheaper than LBM (which is the most expensive technology) and LS.
Fig. 2. -schematic illustration of the LS process
Fig. 4. -test-specimen made of glass-filled polyamide 12 by LS
Fig. 5. -downskin surface of the test-specimen produced by LBM after support removal
Fig. 6. -snapped walls, oriented parallel to the recoating unit in LBM
Fig. 7. -results of measuring minimal wall thicknesses along the y-axis (parallel to the recoating unit in LBM)
Fig. 9. -minimal wall thicknesses along the z-axis in FLM
Fig. 11. -breaks in LBM cylinders
Fig. 13. -results of measuring cylinders with a polar angle of 45 and 90 degrees manufactured by FLM
Fig. 14. -form deviation of bores along the y-axis in LS
Fig. 15. -angles towards the build platform in LS
Fig. 16. -angles towards the build platform in FLM
Fig. 18. -comparison of bell-shaped features on the test-specimen built by LBM, LS and FLM
Table 1. -smallest depictable bores and gaps determined by visual inspection
| 24,265 | ["1003700", "1003701"] | ["300612", "300612"] |
01485809 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485809/file/978-3-642-41329-2_16_Chapter.pdf | Stefan Kleszczynski
email: stefan.kleszczynski@uni-due.de
Joschka Zur Jacobsmühlen
Jan T Sehrt
Gerd Witt
email: gerd.witt@uni-due.de
Mechanical Properties of Laser Beam Melting Components Depending on Various Process Errors
Keywords: Additive Manufacturing, Laser Beam Melting, process errors, mechanical properties, High Resolution Imaging
Additive Manufacturing processes are constantly gaining more influence. The layer-wise creation of solid components by joining formless materials allows tool-free generation of parts with very complex geometries. Laser Beam Melting is one Additive Manufacturing process which allows the production of metal components with very good mechanical properties suitable for industrial applications, for example in the field of medical technologies or aerospace. Despite this potential, a breakthrough of the technology has not occurred yet. One of the main reasons for this is the lack of process stability and quality management. Due to the principle of the process, the mechanical properties of the components strongly depend on the process parameters used for production. As a consequence, incorrect parameters or process errors will influence part properties. For that reason, possible process errors were identified and documented using high resolution imaging. In a next step, tensile test specimens with pre-defined process errors were produced. The influence of these defects on mechanical properties was examined by determining the tensile strength and the elongation at break. The results from mechanical testing are validated with microscopy studies on error samples and tensile specimens. Finally, this paper gives a summary of the impact of process errors on mechanical part quality. As an outlook, the suitability of high resolution imaging for error detection is discussed. Based on these results, a future contribution to quality management is envisaged.
Introduction
Additive Manufacturing (AM) offers many advantages for manufacturing of complex and individual parts. It provides a tool-free production, whereby physical parts are created from virtual solid models in a layer by layer fashion [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF]. In a first step the layer data is gained by slicing virtual 3D CAD models into layers of a certain thickness. Layer information could also be gained by slicing data from 3D scanning or CT scanning. In the following build process the layer information is converted into physical parts by creating and joining the respective layers. The principle of layer creation classifies the AM process [START_REF]VDI-Guideline 3404: Additive fabrication -Rapid technologies (rapid prototyping) -Fundamentals, terms and definitions, quality parameter, supply agreements[END_REF]. Laser Beam Melting (LBM) as an AM process offers the opportunity of small volume production of metal components. Here a thin layer of metal powder is deposited onto the build platform. In a next step the powder is molten into solid material by moving a laser beam (mostly Nd-or Yb-fibre laser source) across the current cross-section of the part. After this, the build platform is lowered and the two process stages are repeated iteratively until the solid metal part is fully produced (figure 1). As a result of the process the produced components show very good mechanical properties, which are widely comparable to conventionally processed materials [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF] or in some cases even better [START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF]. The density of components reaches approximately 100 % [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF], [6 -8]. Potential applications for LBM components are located in the domain of medical implants, FEM optimized lightweight components or the production of turbine blades with internal cooling channels [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Wohlers | Wohlers Report 2011 -Annual Worldwide Progress Report[END_REF].
There are about 158 factors influencing the process [START_REF] Sehrt | Möglichkeiten und Grenzen bei der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF], of which the parameters laser power, scanning velocity, hatch distance (distance between melt traces) and layer thickness have been reported as the most influential [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF][START_REF] Sehrt | Möglichkeiten und Grenzen bei der generativen Herstellung metallischer Bauteile durch das Strahlschmelzverfahren[END_REF]. These main process parameters are often related by means of the volume energy density E_v [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF][START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF], which is defined as:
E_v = P_l / (h · v_s · d)    (1)
Fig. 1. Schematic process principle of LBM
where P_l stands for the laser power, h for the hatch distance, v_s for the scanning velocity and d for the powder layer thickness. Since the process of layer creation determines the resulting part properties [START_REF] Gibson | Additive Manufacturing Technologies -Rapid Prototyping to Direct Digital Manufacturing[END_REF][START_REF] Gebhardt | Generative Fertigungsverfahren -Rapid Prototyping -Rapid Tooling -Rapid Manufacturing[END_REF][START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF][START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF], wrong process parameters or technical defects in certain machine components can also cause process errors which deteriorate mechanical properties. Spierings et al. [START_REF] Spierings | Designing material properties locally with Additive Manufacturing technology SLM[END_REF] show that the resulting part porosity mainly depends on the process parameters used and significantly affects the mechanical properties. In addition, a correlation between volume energy density and the respective part properties is investigated, with the result that volume energy density can be considered the parameter determining part porosity. Yasa et al. [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF] investigate the influence of double exposure strategies on resulting part properties. It is noted that the application of re-melting is able to improve surface quality and reduce part porosity.
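Equation (1) can be evaluated with a few lines of code; the baseline parameter values below are plausible placeholders for illustration, not the qualified machine parameters used in this work.

```python
# Minimal sketch of equation (1): volume energy density from laser power,
# hatch distance, scanning velocity and layer thickness. Baseline values are
# assumed placeholders, not the parameter set of the experiments.

def volume_energy_density(laser_power_w, hatch_mm, scan_speed_mm_s, layer_mm):
    """E_v in J/mm^3 = P_l / (h * v_s * d)."""
    return laser_power_w / (hatch_mm * scan_speed_mm_s * layer_mm)

baseline = {"laser_power_w": 195.0, "hatch_mm": 0.09,
            "scan_speed_mm_s": 1000.0, "layer_mm": 0.02}
e_ref = volume_energy_density(**baseline)
print(f"reference E_v = {e_ref:.1f} J/mm^3")

# Varying a single parameter by +/-40 %, as done for the error samples below:
for factor in (0.6, 1.4):
    varied = dict(baseline, scan_speed_mm_s=baseline["scan_speed_mm_s"] * factor)
    print(f"v_s x {factor:.1f}: E_v = {volume_energy_density(**varied):.1f} J/mm^3")
```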
Due to high security requirements of some potential domains of applications and actual standardisation efforts, a demand for suitable quality control for LBM technologies has been reported [START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF][START_REF] Lott | Design of an Optical system for the In Situ Process Monitoring of Selective Laser Melting (SLM)[END_REF][START_REF] Kruth | Feedback control of selective laser melting[END_REF]. Thus far, some different approaches for process control and process monitoring have been given in literature. Kruth et al. monitor the current melt pool using a coaxial imaging system and control laser power to hold the size of the melt pool constant [START_REF] Kruth | Feedback control of selective laser melting[END_REF]. As the thermal conductivity of metal powder is about three orders of magnitude lower than those of solid metal [START_REF] Meiners | Direktes Selektives Laser Sintern einkomponentiger metallischer Werkstoffe[END_REF] this system can improve the part quality for overhanging structures by lowering the laser power when the size of the melt pool shows fluctuations in these certain regions. Lott et al. [START_REF] Lott | Design of an Optical system for the In Situ Process Monitoring of Selective Laser Melting (SLM)[END_REF] improve this approach by adding additional lighting to resolve melt pool dynamics at higher resolution. In [START_REF] Craeghs | Online Quality Control of Selective Laser Melting[END_REF] images of the deposited powder layers are taken additionally using a CCD camera. This enables the detection of coating errors due to a damaged coater blade. Doubenskaia et al. [START_REF] Doubenskaia | Optical System for On-Line Monitoring and Temperature Control in Selective Laser Melting Technology[END_REF] use an optical system consisting of an infra-red camera and a pyrometer for visualisation of the build process and online temperature measurements. All approaches previously mentioned feature an implementation into the optical components or the machine housing of the respective LBM system. This makes it elaborate and expensive to equip existing LBM machines with these systems. Moreover the coaxial monitoring systems are limited to the inspection of the melt pool. The result of melting remains uninspected. The CCD camera used in [START_REF] Craeghs | Online Quality Control of Selective Laser Melting[END_REF] is restricted to the inspection of the powder layer. Possible errors within the compound of melt traces cannot be resolved.
In this work the influence of process errors on resulting part properties is investigated. First selected process errors are provoked and documented using a high resolution imaging system, which is able to detect errors at the scale of single melt traces. A further description of the imaging system is shown in paragraph 2.1 and in [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF][START_REF] Jacobsmühlen | High Resolution Imaging for Inspection of Laser Beam Melting systems[END_REF]. In general, process errors can influence process stability and part quality [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF]. Therefore error samples are built by manipulating the main exposure parameters and exposure strategies. Next, tensile specimens with selected errors are built and tested. The results are validated by microscopy studies on the tested tensile specimens. Finally a correlation between tensile strength, elongation at break, porosity and error type is discussed.
Method
LBM and high resolution system
For the experiments in this work an EOSINT M 270 LBM system (EOS GmbH, Germany) is used. Hastelloy X powder is used as material, which is a nickel-base superalloy suitable for applications such as gas turbine blades. The documentation of process errors is carried out with an imaging system consisting of a monochrome 29-megapixel CCD camera (SVS29050 by SVS-VISTEK GmbH, Germany). A tilt and shift lens (Hartblei Macro 4/120 TS Superrotator by Hartblei, Germany) helps to reduce perspective distortion by shifting the camera back and allows placing the focal plane on the build platform using its tilt ability. A 20 mm extension tube reduces the minimum object distance of the lens. The imaging system is mounted in front of the LBM system using a tube construction which provides adjustable positioning in height and distance from the machine window (figure 2). Two orthogonally positioned LED line lights provide lighting for the build platform. Matt reflectors on the machine back and the recoater are used to obtain diffuse lighting from a close distance, which was found to yield the best surface images. The field of view is limited to a small substrate platform (10 cm x 10 cm) to enable the best possible resolving power (25 µm/pixel to 35 µm/pixel) [START_REF] Jacobsmühlen | High Resolution Imaging for Inspection of Laser Beam Melting systems[END_REF]. Image acquisition after powder deposition and laser exposure is triggered automatically using limit switches of the machine's coater blade and laser hourmeter.
Determination of mechanical properties
Test specimens for tensile testing are built as cylindrical raw parts by LBM. The final specimen shape is produced by milling the raw parts into the standardised shape according to DIN 50125 -B 5x25 [START_REF]DIN 50125 -Testing of metallic materials -Tensile test pieces[END_REF]. Tensile tests are performed according to the specifications of DIN 50125; a Galdabini Quasar 200 machine is used for the tests. The fragments of the test specimens are used for further microscopy studies, for which unstressed material from the specimens' thread heads is prepared into grinding samples. The microscopy studies are carried out using Olympus and Novex microscopes. The porosity of the error samples is determined using an optical method according to [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF] and [START_REF]VDI-Guideline 3405 -2. Entwurf. Additive manufacturing processes, Rapid Manufacturing -Beam melting of metallic parts -Qualification, quality assurance and post processing[END_REF], where the acquired images are converted to black and white images using a constant threshold value. Finally, the ratio of black pixels, representing the porosity, is measured.
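A minimal sketch of this optical porosity measurement is given below; the file name and the threshold value are assumptions for illustration, since the actual threshold used in the study is not stated.

```python
# Sketch of the optical porosity measurement: binarise a photomicrograph with
# a constant threshold and report the dark-pixel ratio as porosity.
import numpy as np
from PIL import Image

def optical_porosity(path, threshold=100):
    """Return porosity in percent as the fraction of pixels darker than threshold."""
    gray = np.asarray(Image.open(path).convert("L"))
    pores = gray < threshold          # dark pixels are taken as pores
    return 100.0 * pores.mean()

if __name__ == "__main__":
    # 'micrograph.png' is a hypothetical example file.
    print(f"porosity: {optical_porosity('micrograph.png'):.3f} %")
```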
Documentation of process errors
Process errors
An overview of typical process errors has been given in previous work [START_REF] Kleszczynski | Error Detection in Laser Beam Melting Systems by High Resolution Imaging[END_REF]. In this paper the main focus is on errors that influence part quality and in particular on errors that affect mechanical properties. As mentioned in paragraph 1, mechanical properties strongly depend on the process parameters, which define the energy input for melting the powder and consequently the ratio of distance and width of the single melt traces. Technical defects of the laser source or the choice of wrong process parameter sets could therefore worsen the compound of layers and melt traces, leading to porous regions. On the other hand, too much energy input could lead to heat accumulation. In this case surface tensions of the melt induce the formation of superelevated regions, which could endanger process stability by causing collisions with the recoating mechanism. However, higher energy inputs have been reported to increase mechanical properties due to a better compound of melt traces and lower porosity [START_REF] Yasa | Application of Laser Re-Melting on Selective Laser Melting parts[END_REF]. To provoke errors of these two kinds, the main process parameters laser power, hatch distance and scanning velocity are changed by 20 % and 40 % around the standard value, which was found by systematic qualification experiments (see [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF]). Additionally, the layer thickness is doubled from 20 µm to 40 µm for one sample while keeping the process parameters constant. As illustrated in equation 1, these variations directly affect the energy input. Another sample is built using a double exposure strategy, resulting in higher energy inputs. For the experiments the stripe exposure strategy is used, whereby the cross sections of the current parts are separated into stripes of 5 mm length. The overlap value for these stripes is another process parameter which could affect the compound of melt traces. Therefore one sample with the lowest possible stripe overlap value of 0.01 mm is built.
Error samples
Figure 3 shows an image which was recorded during the build process of the error samples using the high resolution imaging system. The samples are arranged in a matrix. The first three rows of the matrix represent the process parameters scanning velocity (vs), laser power (Pl) and hatch distance (h). These values are varied in the columns from -40 % to +40 % in steps of 20 %. The last row of the matrix contains a reference sample (left), a sample built with double layer thickness (mid left), a sample built with double exposure (mid right) and a sample built with the lowest possible stripe overlap value (right). As can be seen from figure 3, samples representing higher volume energy densities (reduced scanning velocity/hatch, increased laser power or double exposure) appear much brighter and smoother than samples representing low volume energy density (increased scanning velocity/hatch, reduced laser power or double layer thickness). A closer look at the documented error samples (figure 4) shows that surface irregularities and coarse particles are visible on the sample with double layer thickness.
The sample with 40 % enlarged hatch distance indicates a poor connection of melt traces, which could be a signal for increased porosity. The sample with 40 % increased laser power shows a strong connection of melt traces, although some superelevated regions are visible at the edges. A comparison of the high resolution images with images taken from microscopy confirms these impressions. Here it is clearly visible that there is no connection between the melt traces for the sample built with enlarged hatch distance. The error samples with parameters increased or decreased by 40 % show the strongest deviation from the reference sample. These parameter sets are used for the production of tensile test specimens. Additionally, tensile test specimens representing the standard, double layer, double exposure and reduced stripe overlap parameter sets are built.
Mechanical Properties
Tensile strength
For each type of error, six specimens are produced as described in paragraph 2.2 to ensure statistical certainty. The determined values for tensile strength and the associated standard deviations are presented in figure 5. Additionally, the calculated values for volume energy density are added to the chart. As can be seen, the respective bars representing tensile strength and volume energy density show a similar trend for almost all specimens. In the case of the "double layer" specimen this trend is not applicable: higher values for the mean tensile strength are determined (comparing "double layer" to "H + 40 %" and "Vs + 40 %"), although this specimen has the lowest value for the volume energy density. Here it is remarkable that specimen "P -40 %" shows a tensile strength which is about 14 % (117 MPa) lower than that of specimen "double layer", while the values for volume energy density are at the same level. Specimens produced using higher energy input parameter sets show higher tensile strength values, approaching the range reported in the literature [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF]. At this point it has to be stated that the maximum value (1110 MPa) is achieved after heat treatment.
Elongation at break
Figure 6 shows the determined values for the elongation at break compared to the calculated values of volume energy density. Unlike the results for tensile strength, there seems to be no connection between the elongation at break and the volume energy density. Furthermore, there are no significantly divergent trends recognizable between high energy input and low energy input parameter sets. It is remarkable that three different levels of values are recognizable in the chart. First, there is the level of about 30 % elongation at break, which is determined for most of the specimens (reference, double exposure, stripe overlap, double layer, Vs + 40 %). Second, there is the level of about 25 % to 28 % elongation at break, which is detected for four specimens (Vs -40 %, H -40 %, P + 40 %, H + 40 %). Here it is remarkable that the three high energy input parameter sets (Vs -40 %, H -40 %, P + 40 %) show the lowest standard deviation compared to all other specimens. Finally, the lowest value for elongation at break is measured for specimen "P -40 %", representing the parameter set with the lowest energy input. In the literature, the values for elongation at break for Hastelloy X are located in the range of 22-60 % [START_REF] Sehrt | Anforderungen an die Qualifizierung neuer Werkstoffe für das Strahlschmelzen[END_REF] depending on the respective heat treatment. With the exception of specimen "P -40 %", all determined values are within this range. However, the determined values are at least 50 % lower than the maximal values reported.
Fig. 6. Results from determination of elongation at break compared to calculated values of volume energy density
Porosity
After mechanical testing, selected specimens are used for the determination of porosity using microscopy according to the procedure described in paragraph 2.2. For the reference specimen the porosity is determined to be 0.04 %, which is comparable to results from previous studies [4-7] emphasising that LBM components achieve up to 99 % density. Specimen "Pl -40 %", which has the lowest value for volume energy density, shows the highest porosity; the determined value is 3.94 %. The results from the porosity analysis (as presented in figure 7) underline previously published statements that porosity grows with decreasing energy input. It has to be stated that in general the "high energy input" specimens show very similar porosity values (0.020 % to 0.027 %, see figure 7). The determined porosity values are higher for the low energy input specimens (0.227 % to 3.938 %), which confirms the assumption that porosity is strongly dependent on the energy input. The porosity values of the "reference" and the "reduced stripe overlap" specimens differ from each other by 0.02 %, the reduced stripe overlap specimen showing the lower porosity value. This is remarkable because the "reduced stripe overlap" was suspected to increase part porosity. One explanation for this result might be found in the exposure strategy: as mentioned in paragraph 3.1, the cross sections of the parts are subdivided into stripes of a certain width. After exposure these stripes are rotated, and gaps in the compound of melt traces could be closed during exposure of the next layer. On the other hand, it has to be stated that the details from the photomicrographs used for the analysis show only one certain area of the whole cross section. Moreover, pores are distributed stochastically, which makes it difficult to make a statement with an accuracy level of a hundredth of a percent. Figure 8 shows photomicrographs of the reference specimen (middle), the reduced scanning speed specimen (top, highest tensile strength) and the reduced laser power specimen (bottom, lowest tensile strength). Specimen "vs -40 %" shows few and small pores; the porosity value is 0.025 %. The same appearance is visible in the photomicrographs of the reference sample, which shows slightly more but still small pores. Specimen "Pl -40 %", in contrast, shows clearly more and bigger pores, which seem to be distributed stochastically (figure 8, bottom).
Discussion
The results presented in the previous sections show that the mechanical properties strongly depend on the process parameters. In general it can be stated that increasing energy input improves tensile strength and reduces porosity. It is to be expected that porosity affects tensile strength, since irregularities like pores induce crack formation under mechanical load. The elongation at break, on the other hand, is not systematically affected by the different energy input parameter sets.
There are some groups of parameter sets which show values at similar levels, but there is no general connection between energy input and elongation at break for the investigated material; the exposure strategies seem to have more influence in this case. As can be seen from figure 6, the three high energy input parameter specimens "Vs -40 %", "H -40 %" and "P + 40 %" show similar values for elongation at break. The "double exposure" specimen has a calculated volume energy density which is comparable to those of the other high energy specimens. Nevertheless, the elongation at break of this sample lies in the same region as that of the "reference" sample and some "low energy input" samples. One possible explanation is that the "double exposure" sample was built using two different energy input parameter sets: one for melting the powder and another for re-melting the produced layer. Thus the heat flow has been different from that of the "high energy input parameter" samples, which has evidently induced different mechanical properties. The "high energy input" specimens show improved values for tensile strength but lower values for elongation at break compared to the reference sample. In contrast, the "double exposure" sample shows an improved value for tensile strength at constant ductility. Figure 9 compares the results of the tensile and porosity studies depending on the volume energy density. For this purpose the respective numbers of the specimens are plotted into the chart; for identification see the explanation in the chart. Comparing the two logarithmic interpolations shows that they run contrary to each other, and both quantities seem to approach horizontal asymptotes for high values of the volume energy density. The tendencies in the tensile tests underline the results from the porosity determination (specimen 7: double layer, Rm = 833 MPa, porosity 0.227 %; specimen 8: hatch distance plus 40 %, Rm = 813 MPa, porosity 1.633 %). In this case specimen 8 shows a higher porosity and a lower tensile strength. Comparing these results with the images from figure 4 allows the conclusion that a poor connection of melt traces causes higher tensile strength values than no connection of melt traces. Specimen 7 shows that the previously mentioned correlation between tensile strength, volume energy density and porosity is not applicable to every kind of error: here the low value for the volume energy density does not follow the interpolation for tensile strength and porosity of the other specimens.
Fig. 9. Connection between porosity, tensile strength and volume energy density
This shows that the volume energy density is suitable only for estimating general tendencies of tensile strength and porosity. A more significant influence comes from the type of error, i.e. the kind of energy input or exposure strategy.
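The "logarithmic interpolations" mentioned above can be reproduced with a simple fit of the form y = a·ln(E_V) + b for both quantities. The data points below are illustrative assumptions loosely inspired by the reported magnitudes, not the study's data set.

```python
import numpy as np

# Illustrative data (E_V in J/mm^3, Rm in MPa, porosity in %); values are assumptions.
e_v = np.array([40.0, 55.0, 70.0, 90.0, 120.0])
rm  = np.array([700.0, 780.0, 810.0, 830.0, 840.0])
por = np.array([3.9, 1.6, 0.23, 0.04, 0.03])

# Fit y = a * ln(E_V) + b ("logarithmic interpolation") for both quantities.
a_rm, b_rm = np.polyfit(np.log(e_v), rm, 1)
a_po, b_po = np.polyfit(np.log(e_v), por, 1)
print(a_rm, b_rm)   # positive slope: tensile strength grows with energy density
print(a_po, b_po)   # negative slope: porosity drops with energy density
```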
Conclusions
In this paper a brief demonstration of documenting possible process errors in LBM by using a high resolution imaging system was given. The validation via microscopy shows a good correlation with the recorded images. High resolution imaging might be an alternative and more pragmatic approach for process monitoring and quality management in LBM, because the system is easy to implement and compatible with every LBM system that features a window for the inspection of the process.
In a second step the impact of process errors on tensile strength, porosity and elongation at break was investigated. It could be shown that a higher energy input mostly induces higher values for tensile strength and lower porosities, while a lower volume energy density leads to lower tensile strength and higher porosity. For some error samples, however, the volume energy density is not in direct correlation with the resulting part properties. This was noticed by comparing the tensile strength of samples with similar values for volume energy density, which differed by about 117 MPa; here the nature of the melt trace connection seems to have the bigger influence. The disagreement between volume energy density and resulting part properties was especially noticeable in the determination of the elongation at break. Some samples built with "high energy parameter sets" showed a reduced elongation at break, which suggests that the higher energy input embrittles the material compared to the reference specimen. At the same time another specimen with a comparably high level of volume energy density, resulting in a higher tensile strength, showed higher values for the elongation at break, at the same level as specimens produced with low energy input parameters.
Nevertheless, all determined mean values for tensile strength and elongation at break were in the range of known values from conventionally produced samples. Only the sample with the lowest tensile strength, lowest elongation at break and highest porosity, which was produced by reducing the laser power by 40 %, showed values at the lower end of the known range. The elongation at break, which is a measure of the ductility of materials, did not reach more than 50 % of the known maximum value from the literature. This means that for applications where a high elongation at break is required, heat treatments are still necessary to improve this particular part property.
For future work, further investigation of the influence of varying process parameters is necessary for different materials and different machine systems, which might use other laser sources or inert gases for flooding the process chamber. Especially for the elongation at break it would be interesting to analyse the influence of different exposure strategies. Using high resolution imaging systems to collect data for different error types and materials could be a useful tool to create a knowledge database which links process parameters, resulting surface images and resulting mechanical part properties. In a next step, an automated image analysis could detect significant differences in the structure of melt traces and might therefore also be applicable to quality management and production documentation.
Fig. 2. Camera setup in front of LBM system EOSINT M 270
Fig. 3. Documentation of error samples using high resolution imaging
Fig. 5. Results from determination of tensile strength compared to calculated values of volume energy density
Fig. 7. Summary of determined porosity values
Acknowledgment
The IGF project 17042 N initiated by the GFaI (Society for the Promotion of Applied Computer Science, Berlin, Germany), has been funded by the Federal Ministry of Economics and Technology (BMWi) via the German Federation of Industrial Research Associations (AiF) following a decision of the German Bundestag. | 31,408 | [
"1003702",
"1003701"
] | [
"300612",
"303510",
"300612",
"300612"
] |
01485814 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485814/file/978-3-642-41329-2_20_Chapter.pdf | Steffen Nowotny
email: steffen.nowotny@iws.fraunhofer.de
Sebastian Thieme
David Albert
Frank Kubisch
Robert Kager
Christoph Leyens
Generative Manufacturing and Repair of Metal Parts through Direct Laser Deposition using Wire Material
In the field of Laser Additive Manufacturing, modern wire-based laser deposition techniques offer advantageous solutions for combining the high quality level of layer-by-layer fabrication of high value parts with the industry's economical requirements regarding productivity and energy efficiency. A newly developed coaxial wire head allows for omni-directional welding operation and thus the use of wire even for complex surface claddings as well as for the generation of three-dimensional structures. Currently, several metallic alloys such as steel, titanium, aluminium and nickel are available for the generation of defect-free structures. Even cored wires containing carbide hardmetals can be used for the production of extra wear-resistant parts. Simultaneous heating of the wire using efficient electric energy significantly increases the deposition rate and energy efficiency. Examples of application are light-weight automotive parts, turbine blades of nickel super alloys, and complex inserts of injection moulds.
Introduction
Laser buildup welding is a well-established technique in industrial applications of surface cladding and direct fabrication of metallic parts. As construction materials, powders are widely used because of the large number of available alloys and the simple matching with the shaped laser beam. However, the powder utilization is always less than 90 %, with characteristic values in the range of 60 %, so a considerable part of the material is lost. Additionally, the metal dust poses a risk to machine, operators and environment. The alternative use of wires as buildup material offers a number of advantages: the material utilization is always 100 %, largely independent of the part's shape and size. The process is clean and safe, so the effort for protecting personnel and environment is much lower. Also, the wire feed is completely independent of gravity, which is a great advantage especially in applications of three-dimensional material deposition.
The main challenge compared to powder is the realization of an omni-directional welding operation with stable conditions of the wire supply. The only possible solution is therefore to feed the wire coaxially along the centre axis of the laser beam. This technology requires a complex optical system which permits the integration of the wire material into the beam axis without any shadowing of the laser beam itself. Accordingly, the work presented here was focused on the development of a new optics system for the practical realization of the centric wire supply as well as on the related process development for the defect-free manufacturing of real metallic parts.
Laser wire deposition head
Based on test results of previous multi beam optics [START_REF] Nowotny | Laser Cladding with Centric Wire Supply[END_REF], a new optical system suitable for solid-state lasers (slab, disk, fiber) has been developed. The optical design of the head shown in Figure 1 is based on reflective optical elements and accommodates a power range of up to 4 kW. The laser beam is symmetrically split into three parts so that the wire can be fed along the centre axis without blocking the beam. The partial beams are then focused into a circular spot with a diameter ranging from 1.8 to 3 mm. The setup enables a coaxial arrangement for beam and wire, which makes the welding process completely independent of weld direction. The coaxial alignment is even stable in positions that deviate from a horizontal welding setup.
The wire is fed to the processing head via a hose package that contains wire feeder, coolant supply and protection gas delivery. Wire feeders from all major manufacturers can be easily adapted to this setup and are selected based on wire type, feed rate and operation mode. Typical wire diameters range from 0.8 to 1.2 mm. However, the new technology is in principle also suitable for finer wires of about 300 µm in diameter. The wires can be used in cold- and hot-wire setups to implement energy source combinations. The new laser wire processing head is useful for large-area claddings as well as for additive multilayer depositions to build three-dimensional metallic structures.
Fig. 1. Laser processing optic with coaxial wire supply
For process monitoring, the wire deposition head may be equipped with a camera-based system which measures the dimensions and temperature of the melt bath simultaneously during the running laser process. Optionally, an optical scanning system controls the shape and dimension of the generated material volume in order to correct the build-up strategy if necessary [START_REF] Hautmann | Adaptive Laser Welding (in german) REPORT[END_REF].
Deposition process and results
Fig. 2 shows a typical laser wire cladding process during a multiple-track deposition.
The process shows a stable behaviour with extremely low emissions of spatter and dust compared to powder-based processes. The integrated on-line temperature regulation keeps the temperature of the material at a constant level throughout the manufacturing process.
Fig. 2. Process of laser wire deposition of a metal volume
The part is built from a large number of single tracks, which are placed according to a special build-up strategy. This strategy is designed by computer calculation prior to the laser generative process. Normally, intermediate machining between the tracks and layers is not necessary. The primary process parameters laser power, wire feeding rate and welding speed have to be adapted to each other to enable a continuous melt flow of the wire into the laser-induced melt pool. For given primary parameters, the process stability depends on the heat flow regime during the build-up process. Besides the temperature regulation mentioned above, interruptions between selected layers may also be useful to cool down the material. If necessary, active gas cooling of the material can be applied [START_REF] Beyer | High-Power Laser Materials Processing Proceedings of the 31st International Congress on Applications of Lasers and Electro-Optics[END_REF].
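The mutual adaptation of welding speed and wire feed rate follows a simple volume balance: the wire volume fed per unit time must equal the volume of the deposited track. The sketch below illustrates this estimate; the function names and the example numbers are assumptions, not the authors' planning software.

```python
import math

def required_wire_feed(track_area_mm2, welding_speed_m_min, wire_diameter_mm):
    """Wire feed rate (m/min) so that the fed wire volume matches the deposited track volume."""
    wire_area_mm2 = math.pi * (wire_diameter_mm / 2.0) ** 2
    return welding_speed_m_min * track_area_mm2 / wire_area_mm2

def deposition_rate_cm3_h(track_area_mm2, welding_speed_m_min):
    """Single-track build-up rate in cm^3/h."""
    mm3_per_min = track_area_mm2 * welding_speed_m_min * 1000.0   # mm^2 * (mm/min)
    return mm3_per_min * 60.0 / 1000.0                            # -> cm^3/h

# Assumed track cross-section of 0.8 mm^2 at 2.0 m/min with a 1.0 mm wire:
print(required_wire_feed(0.8, 2.0, 1.0))     # ~2.0 m/min wire feed
print(deposition_rate_cm3_h(0.8, 2.0))       # ~96 cm^3/h, of the order of the reported rate
```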
Fig. 3 shows the cross section of a generated structure of the nickel super alloy INCONEL718. The solidified structure is defect-free and each layer is metallurgically bonded to the next. Through optimization of the process parameters, even the crack-sensitive IN718 structure is crack-free. An optimized build-up strategy allows a minimal surface roughness of RZ = 63 µm. A layer thickness of 1.4 mm and a build-up rate of 100 cm³/h can be achieved with 3.0 kW laser power and 2.0 m/min welding speed. Simultaneous heating of the wire using efficient electric energy (hot-wire deposition) significantly increases the deposition rate, up to about 160 cm³/h [START_REF] Pajukoski | Laser Cladding with Coaxial Wire Feeding Proceedings of the 31st International Congress on Applications of Lasers and Electro-Optics[END_REF].
Fig. 3. Cross-section of a laser generated wall of INCONEL718
Figure 4 illustrates two examples of layer-by-layer generated parts. Figure 4a shows a turbine blade made of INCONEL718; the blade is 100 mm high and has a hollow internal structure. The inlet tube in Figure 4b is 85 mm high and consists of the light-weight alloy AlMg5. The height of the single layers is 0.4 mm for the Ni alloy and 0.7 mm for the Al alloy.
Fig. 4. Turbine blade out of INCONEL718; inlet tube out of AlMg5
Summary
The current state of laser wire deposition shows the wide range of potential applications of this new technique. In addition to the well-established powder welding and powder-bed melting techniques, wires represent an advantageous alternative for high-quality laser deposition. A specially developed laser head with coaxial wire supply permits omni-directional welding operation and thus opens up new dimensions in additive manufacturing. Equipment for on-line process regulation is also available and can be used for quality management; in particular, the regulation concerns the melt bath's dimensions and its surface temperature.
As construction material, commercially available welding feedstock wires can be used. The material utilization is always 100 %, the welding process is clean, and the hot-wire cladding variant advantageously increases productivity and energy efficiency. The generated metal structures are completely dense, an important precondition for a high mechanical strength of the final parts. The surface roughness is typically lower than RZ = 100 µm, and the model-to-part accuracy lies in the range of a few tenths of a millimetre.
Examples of application are corrosion protection coatings on cylinders, turbine parts of Nickel and Titanium [START_REF] Brandl | Deposition of Ti-6Al-4V using laser and wire Surface & Coatings[END_REF] alloys as well as light-weight parts for automobile use. | 9,161 | [
"1003706"
] | [
"488104",
"488104",
"488104",
"488104",
"488104",
"96520"
] |
01485816 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485816/file/978-3-642-41329-2_22_Chapter.pdf | Kesheng Wang
email: kesheng.wang@ntnu.no
Quan Yu
email: quan.yu@ntnu.no
Product Quality Inspection Combining with Structure Light System, Data Mining and RFID Technology
Keywords: Quality inspection, Structured Light System, Data mining, RFID
INTRODUCTION
Product quality inspection is a nontrivial procedure during the manufacturing process, both for semi-finished and end products. 3D vision inspection has developed rapidly and is increasingly applied in product quality inspection, offering high precision and wide applicability compared with commonly used 2D vision approaches. 3D vision is superior for the inspection of multi-feature parts because it provides height information. 3D vision techniques comprise approaches based on different working principles [START_REF] Barbero | Comparative study of different digitization techniques and their accuracy[END_REF]. Among these, the Structured Light System (SLS) is a cost-effective technique for industrial production [START_REF] Pernkopf | 3D surface acquisition and reconstruction for inspection of raw steel products[END_REF][START_REF] Xu | Real-time 3D shape inspection system of automotive parts based on structured light pattern[END_REF][START_REF] Skotheim | Structured light projection for accurate 3D shape determination[END_REF]. Specific patterns are projected onto the inspected product and the camera captures the corresponding images; the 3D measurement information of the product is retrieved from these images in the form of a point cloud. With the generated 3D point cloud, automated quality inspection can be performed with little human interference, and data mining approaches are commonly used for this purpose [START_REF] Ravikumar | Machine learning approach for automated visual inspection of machine components[END_REF][START_REF] Lin | Measurement method of three-dimensional profiles of small lens with gratings projection and a flexible compensation system[END_REF].
However, although the quality can be determined on the basis of the SLS and data mining approaches, this information only becomes truly valuable when it is used to improve the production process and to achieve real-time data access and quality traceability.
In this paper, Radio Frequency Identification (RFID) technology is used to integrate the quality information with the product. RFID uses a wireless non-contact radio system to identify objects and transfer data from tags attached to movable items to readers; it is fast and reliable, and does not require line of sight or physical contact between reader and tagged object. By assigning an RFID tag to each inspected product, it is possible to identify the product type and query its quality inspection history. An assembly quality inspection problem is selected as a case study to test the feasibility of the proposed system. The proposed approach can be an alternative for SMEs facing fast product type updates in a fast changing market.
The paper is organized as follows: Section 1 introduces the general applications of 3D vision in manufacturing and the importance of combining SLS, data mining and RFID technology. Section 2 introduces the architecture of the combined system and the working process of each system level. Section 3 presents a case study on the feasibility of the system combining the three techniques. Section 4 concludes on the applicability of the combined system.
QUALITY INSPECTION SYSTEM ARCHITECTURE ON THE BASIS OF STRUCTURE LIGHT SYSTEM, DATA MINING AND RFID
The RFID 3D quality inspection system combines product quality inspection with RFID tracing and tracking. With an RFID tag attached to the product, the system takes pictures of the inspected product as input, generates the 3D point cloud and finally writes the quality-related information of the product into the RFID tag. Thus, it becomes possible to monitor the product quality along the production line and to achieve real-time quality control.
2.1 System architecture
The quality inspection system comprises 4 levels, as shown in Figure 1: the 3D vision level, the data processing level, the computational intelligence level and the RFID level. Within each level of the system, the data is converted in sequence into the point cloud, the feature vector, the quality information and the writable RFID data. Each level is introduced in the following:
1. The 3D vision level consists of the Structured Light System, which uses a camera together with a projector to generate the point cloud of the inspected product.
2. The data processing level comprises the determination and extraction of quality-related features. The product quality is quantified on the basis of the point cloud according to the design requirements. The feature vector generated in this step is the input of the next level.
3. The computational intelligence level uses data mining approaches to achieve automated quality classification on the basis of the feature vector.
4. The RFID level comprises the RFID hardware and software, which achieve product tracking and control by writing to and reading the RFID tag attached to the product.
Introduction of the Structure light system
The Structured Light System (SLS) is a typical 3D vision technique. An SLS accomplishes point cloud acquisition by projecting specific patterns onto the measured object and capturing the corresponding images. The point cloud of the object surface is then generated by image analysis.
The hardware of an SLS consists of a computer, an image capture device and a projector. Figure 2 shows the working process of a typical SLS, which can be divided into 4 steps:
1. Step 1 is the pattern projection. A coded light pattern is projected onto the scene by the projector. The pattern can be either a single one or a series, depending on the type of code.
2. Step 2 is the image recording. The inspected object is captured by the camera, and the captured images are stored in sequence if pattern series are used. The scene is captured beforehand without the object as a reference. Comparing the images with and without the inspected object shows that the pattern is distorted by the object, which encodes the height information.
3. Step 3 is the phase map generation. The images captured in step 2 are analysed by the computer with fringe analysis techniques on the basis of the pattern encoding rule. The wrapped phase maps are obtained first and are then unwrapped to obtain maps with a continuous phase distribution.
4. Step 4 is the transformation from phase to height. The height value of each image pixel is derived from its phase by phase calibration or phase-height mapping, using the reference obtained in step 2. After calibration, the pixels in the image are transformed to points in metric units and the height value of each pixel is calculated, so that the 3D point cloud of the inspected object is formed.
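The final phase-to-height step can be illustrated with a minimal sketch. A simple linear phase-height mapping is assumed here; real systems use a calibrated mapping, and the calibration factor and array sizes below are illustrative assumptions.

```python
import numpy as np

def phase_to_height(unwrapped_phase, reference_phase, k_height_per_rad):
    """Convert an unwrapped phase map to a height map relative to the reference plane.

    Assumes a linear phase-height mapping h = k * (phi_obj - phi_ref); in practice
    the mapping is obtained by calibration.
    """
    return k_height_per_rad * (unwrapped_phase - reference_phase)

# Synthetic example: flat reference scene and an object that distorts the pattern.
phi_ref = np.zeros((1200, 1600))
phi_obj = phi_ref.copy()
phi_obj[400:800, 600:1000] += 2.0            # phase shift caused by the object (rad)
height = phase_to_height(phi_obj, phi_ref, 0.5)   # assumed 0.5 mm per radian
print(height.max())                           # 1.0 (mm) over the object region
```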
Decision support using data mining approaches
To achieve automated product quality inspection, it is important to use computational intelligence approaches that classify the product quality with little human interference. Data mining methods are often used to analyze massive data and extract the knowledge needed to ascertain product quality [START_REF] Wang | Applying data mining to manufacturing: the nature and implications[END_REF]. Data mining based quality inspection requires input variables related to product quality. Although the point cloud can be acquired with the SLS in the form of 3D coordinates, it generally contains a huge number of points, which makes it hardly possible to apply data-driven methods to the classification problem directly. The point cloud generated by the SLS therefore has to be processed according to the specifics of the product, and the mass of points is converted into a vector containing the most useful quality-related parameters. It is efficient to focus on the part of the point cloud that best represents the product quality, which can be seen as the Region of Interest (ROI) of the point cloud. Further, a vector X = {x_1, x_2, ..., x_n} is extracted from the points, comprising feature values x_i calculated from the point cloud. Thus, the large point cloud is converted to a single vector of geometrical features which best represent the product quality information. With this simplification it becomes feasible to select suitable data mining approaches that perform the quality classification on the basis of the extracted feature vectors. Three typical data mining approaches commonly used for classification problems are Artificial Neural Networks (ANN), decision trees and Support Vector Machines (SVM).
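A minimal sketch of such a reduction from ROI points to a feature vector is given below. The chosen features (mean/max/min height, height spread, fitted tilt) and the function name are illustrative assumptions, not the exact features used in the study.

```python
import numpy as np

def roi_features(points, x_range, y_range):
    """Reduce the points inside a rectangular ROI to a small feature vector.

    points: (N, 3) array of x, y, z coordinates from the SLS.
    """
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]))
    roi = points[m]
    z = roi[:, 2]
    # Fit a plane z = a*x + b*y + c to estimate the inclination of the ROI.
    A = np.c_[roi[:, 0], roi[:, 1], np.ones(len(roi))]
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return np.array([z.mean(), z.max(), z.min(), z.std(), np.hypot(a, b)])
```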
Decision Tree
A decision tree is a data mining approach applied in many real-world applications as a solution to classification problems. A decision tree is a flowchart-like tree structure, where each internal node denotes a test on an attribute, each branch represents an outcome of the test, and each leaf node holds a class label. The construction of decision tree classifiers does not require any domain knowledge or parameter setting and is therefore appropriate for exploratory knowledge discovery.
C4.5 is a classic algorithm for decision tree induction. Its successor C5.0 is available in IBM SPSS Modeler®. Using this software, it is easy to accomplish decision tree induction and testing.
Artificial Neural Networks
As another effective data mining approach, an Artificial Neural Network consists of layers with neurons on each of them. Several parameters of an ANN are adjustable, such as the number of hidden layers and neurons, the transfer functions between layers and the training method. A powerful ANN toolbox is available in Matlab® and can be highly customized by the user to obtain the best result.
Support Vector Machines (SVM)
A Support Vector Machine (SVM) is a supervised learning method for data analysis and pattern recognition. The standard SVM is designed for binary classification: given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one of the two categories. For multi-class classification, a commonly used approach is to construct K separate SVMs, in which the kth model yk(x) is trained using the data from class Ck as the positive examples and the data from the remaining K-1 classes as the negative examples; this is known as the one-versus-the-rest approach.
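The one-versus-the-rest construction described above can be sketched with an off-the-shelf library. scikit-learn is used here purely as an illustration (it is not the toolchain named in the paper), and the synthetic data stands in for the feature vectors obtained from the SLS.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Synthetic stand-in data: feature vectors X and assembly class labels y in {1..5}.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = rng.integers(1, 6, size=120)

# One binary SVM per class, trained against the remaining classes.
clf = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:96], y[:96])
accuracy = (clf.predict(X[96:]) == y[96:]).mean()
print(f"correctly classified instances: {accuracy:.1%}")
```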
RFID level
Radio Frequency Identification (RFID) is one of numerous technologies grouped under the term Automatic Identification (Auto ID), such as bar codes, magnetic inks, optical character recognition, voice recognition, touch memory, smart cards and biometrics. Auto ID technologies are a new way of controlling information and material flow, especially suitable for large production networks [START_REF] Elisabeth | The RFID Technology and Its Current Applications[END_REF]. RFID is the use of a wireless non-contact radio system to transfer data from a tag attached to an object for the purposes of identification and tracking. In general terms, it is a means of identifying a person or object using a radio frequency transmission; the technology can be used to identify, track, sort or detect a wide variety of objects [START_REF] Lewis | A Basic Introduction to RFID technology and Its Use in the Supply Chain[END_REF]. RFID systems can be classified by their working frequency, i.e. Low Frequency (LF), High Frequency (HF), Ultra High Frequency (UHF) and Microwave. Different frequencies work for different media, e.g. UHF is not applicable to metal whereas HF is metal friendly. Thus, the working frequency has to be chosen on the basis of the tracked objects.
The hardware of an RFID system includes RFID tags, RFID readers and RFID antennas. An RFID tag is an electronic device that can store and transmit data to a reader in a contactless manner using radio waves and can be read-only or read-write. Tag memory can be factory or field programmed, partitionable, and optionally permanently locked, which enables users to save customized information in the tag and read it everywhere, or to kill the tag when it will not be used anymore. Bytes left unlocked can be rewritten more than 100,000 times, which ensures a long useful life. Moreover, tags can be classified by their power method, i.e. passive tags without power supply, semi-passive tags with a battery, and active tags with battery, processor and I/O ports. A power supply increases the cost of the tag but enhances the reading performance. Furthermore, middleware is required as a platform for managing the acquired RFID data and routing it between tag readers and other enterprise systems. Recently, RFID has become an increasingly interesting technology in many fields such as agriculture, manufacturing and supply chain management.
CASE STUDY
In this paper, a wheel assembly problem is proposed as a case study for the implementation combining the SLS, data mining approaches and RFID technology. In the first step, the assembly quality classification is introduced. Secondly, the point cloud of the object is acquired using the SLS and converted into the feature vector, which is defined according to the assembly requirements and provided to the data mining classifier as input. Finally, the quality is decided by the classifier and converted to RFID data, which is saved in the RFID tag attached to the object.
Problem description
To verify the feasibility of the proposed 3D vision based quality inspection, LEGO® wheel assembly inspection is taken as the example in this paper. The objective is to check the assembly quality. A wheel consists of 2 components, the rim and the tire. Possible errors occurring during the assembly process, as shown in Figure 3, are divided into 5 classes according to the relative position of the rim and tire:
1. Wheel assembly without errors
2. Tire is compressed
3. There exists an offset for one side
4. Rim is detached from the tire
5. Rim is tilted
Fig. 3. Wheel assembly classification
Each class has a corresponding inner layout, as shown in Figure 4.
The section views show the differences among the classes.
Fig. 4. Inner layout of the wheel
2D vision cannot distinguish some of these cases because of the similarity of the pictures, as shown in Figure 5.
Fig. 5. Similarity from the top of the view
It can be seen that there is not much difference between the two wheels in the top view image; the height, however, differs when seen from the side. A Structured Light System (SLS) is an effective solution here, since it provides the 3D point cloud of the inspected part, so that the real height values of the parts are obtained and errors can be recognized directly from the metric information.
Feature extraction for the classification
The hardware configuration is shown in Figure 6. The image capture device is a SONY XCG-U100E industrial camera with UXGA resolution (1600×1200 pixels) and Gigabit Ethernet (GigE) interface, used together with a Fujinon 9 mm lens. A BenQ MP525 digital projector is employed to project the patterns. The hardware control and the image processing are performed with the commercial software Scorpion®. After calibration, the accuracy of the measurement reaches 0.01 mm in this case study. Regarding the 5 classes in the wheel assembly problem, it is important to extract the features most related to the assembly status from the point cloud. The extracted features describe the pose of a wheel, i.e. its height and inclination. For each profile, 5 feature values are extracted; thus, a vector XS = {X1, X2, …, X6} is used to denote a wheel and serves as the input of the data mining approaches.
3.3 Quality information embedded using the RFID tag
After the assembly quality has been determined using the SLS and the data mining decision support system, the quality information is written into the RFID tag placed in the wheel. Thus, the quality information is kept together with the product for future checks. In this wheel assembly quality inspection problem, the tag access is completed with the Reader Test Tool of the RFID reader, as shown in Figure 10.
Fig. 10. RFID reader test tool
In this case study, the OMNI-ID RFID tag, the SIRIT RFID reader and the IMPINJ near-field antenna are used to construct the RFID system, as shown in Figure 11. Then the tagged tire and a rim are assembled. Because the EPC code of each RFID tag is unique, each wheel is given a unique identity. Finally, the assembly quality of the wheel is inspected using the SLS and the data mining based decision support system. After the classification, the quality information is written into the tag and kept with the product, as shown in Figure 13.
Fig. 13. Quality inspection and information writing
The memory of the OMNI-ID RFID tag is divided into three parts, which are allocated to the EPC code of 96 bits, the user data of 512 bits and the TID data of 64 bits. The 512 bits of user data are reserved for customized information, which corresponds to 64 ASCII characters. In this case, the quality inspection related information is written into the user memory of the RFID tag. The information comprises the classified assembly quality, the inspection time and the inspection date, in the form "QUALITY=X TIME=HH:MM DATE=DD.MM.YY". Because the tag memory is written in hexadecimal form, the text has to be converted to HEX before writing. Supposing the quality is classified as 1 and the inspection is taken at 14:05 on 30.04.13, the information is converted from ASCII characters to hexadecimal as shown in Figure 14. The information writing is completed using the reader test tool, as shown in Figure 15.
Fig. 15. RFID tag information control
After the quality information has been written into the user memory of the tag, it can be read out with any other RFID reader. Using the same decoding approach, the hexadecimal data can be restored to characters for later checks.
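The ASCII-to-HEX conversion described above is straightforward to script. The sketch below is a minimal illustration; the helper names are mine, and the 512-bit check mirrors the user-memory size given for the OMNI-ID tag.

```python
def quality_record(quality, time_str, date_str):
    """Build the record in the form QUALITY=X TIME=HH:MM DATE=DD.MM.YY."""
    return f"QUALITY={quality} TIME={time_str} DATE={date_str}"

def to_tag_hex(text, user_bits=512):
    """Encode the ASCII record as hex for writing into the tag's user memory."""
    data = text.encode("ascii")
    if len(data) * 8 > user_bits:
        raise ValueError("record does not fit into the user memory")
    return "0x" + data.hex().upper()

def from_tag_hex(hex_str):
    """Restore the characters from the hexadecimal read back from the tag."""
    return bytes.fromhex(hex_str.removeprefix("0x")).decode("ascii")

rec = quality_record(1, "14:05", "30.04.13")
print(to_tag_hex(rec))                   # hex string to be written with the reader test tool
print(from_tag_hex(to_tag_hex(rec)))     # round-trip check
```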
Result and analysis
During the quality inspection test, the inspection-related information is written into the RFID tag after the classification. The inspection time and date are available using the reader test tool, as shown in Figure 16.
Fig. 16. Acquiring the RFID tag information
Combined with the quality classification result, the text to be written is
QUALITY=1 TIME=07:27 DATE=21.05.13
which is converted to HEX as follows:
0x5155414C4954593D312054494D453D30373A323720444154453D32312E30352E3133
The converted HEX is written into the user memory of the tag via the command line of the reader test tool, as shown in Figure 17.
CONCLUSIONS
The combination of the Structured Light System, the data mining approach and RFID technology is tested in this paper. The SLS is applicable to the proposed wheel assembly quality classification problem. The feature definition on the basis of the point cloud, made according to the assembly requirements, is suitable for similar product types. The feature vector extraction provides the SVM classifier with usable inputs and achieves 95.8 % correctly classified instances. Meanwhile, the RFID system successfully converts the quality inspection result into an acceptable data format for the tag and writes the information into it. This step improves the traceability of the product quality. Supposing multiple SLS inspection stations are assigned along the assembly line, the quality inspection results are saved in the RFID tag at each of them; the earlier inspection results are then available to the system before the product enters the following processing station. The tag-embedded information does not require remote database access, and interruptions due to product quality issues can be avoided. In future work, the middleware for integrating the three systems will be developed.
Fig. 1. System architecture
Fig. 2. General working process of a Structured Light System
Fig. 6. SLS hardware
Fig. 7. Point cloud acquired with SLS
Fig. 9. Feature extraction
Fig. 11. RFID system hardware
Fig. 12. Attach an RFID tag to a tire
Fig. 14. Conversion from the ASCII characters to the hexadecimal
Fig. 17. Information writing using the RFID system
Table 1. Results of SVM
Six measures are obtained for the classifier on the basis of 4 outcomes: True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN), which construct a confusion matrix:
TP (correctly accepted)    FN (incorrectly refused)
FP (incorrectly accepted)  TN (correctly refused)
For each validation, the confusion matrices of each class are constructed respectively, and the final confusion matrix for each validation contains the average values for all classes combined. The 6 measures are defined as follows:
1. Correctly Classified Instances (CCI): percentage of samples correctly classified.
2. Incorrectly Classified Instances (ICI): 100 % - CCI. | 21,868 | [
"1003708",
"1003709"
] | [
"50794",
"50794"
] |
01485817 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485817/file/978-3-642-41329-2_23_Chapter.pdf | Philipp Sembdner
Stefan Holtzhausen
Christine Schöne
email: christine.schoene@tu-dresden.de
Ralph Stelzer
Additional Methods to Analyze Computer Tomography Data for Medical Purposes and Generatively Produced Technical Components
Keywords: Computer tomography, Reverse Engineering, 3D-Inspection
INTRODUCTION
In the context of industrial manufacturing and assembly, continuous quality analysis is absolutely necessary to guarantee compliance with the production guidelines predefined for an operation. Optical 3D measuring systems for the contactless measurement of component geometries and surfaces are increasingly being applied in the production process. These measuring techniques are also being used more and more often in combination with automated manufacturing supervision processes to maintain consistently high standards of quality [START_REF] Bauer | Handbuch zur industriellen Bildverarbeitung -Qualitätssicherung in der Praxis[END_REF].
One disadvantage of such systems is that it is only possible to inspect visible regions of the manufactured object. It is impossible to check inner areas of components or joints, such as welded, soldered or adhesive joints, by means of these nondestructive measuring techniques. Here, it makes sense to inspect the formation of blowholes or inclusions in pre-series manufacturing to optimise the production process or to safeguard the quality standards during series manufacturing [START_REF] Zabler | Röntgen-Computertomographie in der industriellen Fertigung (Kraftfahrzeug-Zulieferer) -Anwendungen und Entwicklungsziele[END_REF].
Computer tomography (CT) is an imaging technology that provides a proven solution to this problem. State of the art in the medical environment, this technology has become more and more established in other technological fields. However, in mechanical engineering, we are faced with other requirements that must be fulfilled by the procedure, both in terms of the definition of the measuring task and strategy and with consideration of the issue of measuring uncertainty.
Because high accuracy is needed, micro CT systems are frequently used, resulting in huge data volumes in the form of high-resolution slice images. However, the increase in the capacities of computer systems in recent years makes image analysis, as well as 3D modelling, based on these slice images a promising technology. Consequently, it is necessary to develop efficient analysis strategies for data gathered by means of imaging techniques to find new strategies for quality assurance and process optimisation. The Reverse Engineering team at the Chair of Engineering Design and CAD of the Dresden University of Technology has been studying the analysis and screening of CT data, at first mainly from medicine [START_REF] Schöne | Individual Contour Adapted Functional Implant Structures in Titanium[END_REF][START_REF] Sembdner | Forming the interface between doctor and designing engineeran efficient software tool to define auxiliary geometries for the design of individualized lower jaw implants[END_REF], for several years. An example is given in Fig. 1, in which a discrete 3D model is generated from CT data. In this process, the calculation of the iso-surfaces is performed by means of the Marching Cubes algorithm [START_REF] Seibt | Umsetzung eines geeigneten Marching Cubes Algorithmus zur Generierung facettierter Grenzflächen[END_REF]. In the next step, the segmented model of the lower jaw bone is used for operation planning and the design of an individual implant for the patient. Due to industry demand, processing of CT image data in the technical realm is becoming more and more important. The paper elucidates opportunities for component investigation from CT data by means of efficient image processing strategies and methods using the example of a soldered tube joint.
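An iso-surface extraction of this kind can be illustrated with an off-the-shelf Marching Cubes implementation. The sketch below uses scikit-image as a stand-in (not the authors' own modules); the synthetic volume and the iso-level are assumptions, while the voxel spacing values are borrowed from the medical example in Table 1.

```python
import numpy as np
from skimage import measure

# volume: 3D array of CT gray values (slice, row, column); iso_level separates the
# segmented tissue/material from the background -- both assumed here for illustration.
volume = np.random.rand(64, 64, 64)
iso_level = 0.5
verts, faces, normals, values = measure.marching_cubes(
    volume, level=iso_level, spacing=(1.0, 0.44, 0.44))
print(verts.shape, faces.shape)   # vertices and triangles of the discrete 3D model
```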
FUNDAMENTALS
Computer tomography (CT) is an imaging technique; as a result of its application, we obtain stacks of slice images. As a rule, these data are available in the DICOM format, which is established in medical applications. Apart from the intrinsic image data, it includes a file header incorporating the most essential information about the generated image. Relevant information here includes patient data, image position and size, pixel or voxel distance and colour intensity. In the industrial realm, the images are frequently saved as raw data (RAW) or in a standardised image format such as TIFF. For further processing of the image data in the context of the three-dimensional object, additional geometric data (pixel distance, image position etc.) must then be available separately. The colour intensity values of the image data, which depend on density, are saved at various bit depths (8 bit and higher); in medical applications, a 12 bit scale is often used. Evaluation of CT data is often made more challenging by measuring noise and the formation of artefacts due to outshining. It is impossible to solve these problems simply by using individual image filters; for this reason, noise and artefact reduction are discussed in many publications [START_REF] Hahn | Verfahren zur Metallartefaktreduktion und Segmentierung in der medizinischen Computertomographie[END_REF].
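Reading such a DICOM slice stack into a volume, together with the geometric information from the file header, can be sketched as follows. This is a Python illustration with pydicom, not the program modules mentioned below; it assumes a complete series of at least two slices from one acquisition.

```python
import numpy as np
import pydicom

def load_slice_stack(paths):
    """Read a DICOM series and return the gray-value volume plus its voxel spacing."""
    slices = [pydicom.dcmread(p) for p in paths]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))   # sort along z
    volume = np.stack([s.pixel_array for s in slices])            # (k, m, n)
    dy, dx = (float(v) for v in slices[0].PixelSpacing)           # in-plane pixel distance
    dz = abs(float(slices[1].ImagePositionPatient[2]) -
             float(slices[0].ImagePositionPatient[2]))            # slice distance
    return volume, (dz, dy, dx)
```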
Characteristics
In the following, the authors elucidate the CT data analysis methods implemented at the Chair as program modules to read, process and display CT data.
INSPECTION OF A SOLDERED TUBE JOINT
The task was to inspect two soldered flange joints on a pipe elbow in co-operation with a manufacturer and supplier of hydraulic hose pipes (see Fig. 2). The goal was to dimension the tube joint to withstand higher pressure values. It was first necessary to demonstrate impermeability. What this means, in practical terms, is that the quantity and size of blowholes or inclusions of air (area per image, volume in the slice stack) in the soldering joints have to be inspected in order to guarantee that a closed soldering circle of about 3…4 mm can be maintained. It is possible to execute the measurements using pre-series parts, sample parts from production, or parts returned due to complaints.
Methods for blowholes
A slice image resulting from the CT record is shown in Fig. 3. On the right side, one may clearly see inclusions in the region of the soldering joint. It is necessary to detect these positions and to quantify their area. If this is done using several images in the slice stack, we can draw conclusions regarding the blowholes' volume.
Fig. 3. -Slice image with blowholes in the soldering region
To guarantee that only the zone of the soldering joint is considered for the detection of air inclusions, instead of erroneously detecting inclusions in the tubes themselves, we first determine the tube's centre point in the cross section. This approach only works if the slices are perpendicular to the tube centre line, so that the internal contour of the tube forms a circle. The centre point is determined as follows (Fig. 4). An object filter detects each separate object in the image; the objects are marked with polygons (in our example, rectangles). The identified objects lying in the soldering region are then selected as a function of the calculated centre point and the given soldering circle diameter (= outer diameter of the inner tube). In this search, a tolerance is added to the soldering circle diameter (in our example: ±10 %), since the goal is to record only the soldering joint rather than to detect inclusions in the tube itself. Within the selected rectangles, the bright pixels (in our example with gray scale 4095) are identified; they represent the air inclusions for which we are searching. The area of the blowholes in the image can then be quantified using the known pixel width in both image directions.
The result of the blowhole detection depends especially on the choice of an adequate threshold, which has to be predefined by the user. If the same threshold is used over several slice images, its value may have to be readjusted.
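The detection described above can be condensed into a short sketch: restrict the search to an annulus around the soldering circle, threshold the bright pixels and label the connected objects. The function name is mine, and the centre, tolerance and threshold are the user-chosen inputs mentioned in the text.

```python
import numpy as np
from scipy import ndimage

def blowhole_area(slice_img, centre, solder_diameter_px, tol, threshold, px_area_mm2):
    """Count blowholes in the soldering annulus of one slice and sum their area.

    centre: (row, col) of the tube centre; tol: relative tolerance on the soldering
    circle diameter (e.g. 0.10); threshold: gray value above which a pixel counts
    as an air inclusion; px_area_mm2: area of one pixel.
    """
    rr, cc = np.indices(slice_img.shape)
    radius = np.hypot(rr - centre[0], cc - centre[1])
    r_nom = solder_diameter_px / 2.0
    annulus = (radius > r_nom * (1.0 - tol)) & (radius < r_nom * (1.0 + tol))
    bright = slice_img >= threshold
    labels, n_objects = ndimage.label(bright & annulus)   # separate inclusions
    return n_objects, labels.astype(bool).sum() * px_area_mm2
```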
Generation of 3D freeform cross sections
Analysis of planar (2D) cross sections of the CT data record does not allow a complete analysis of cylindrical or freeform inner structures in one view. Especially when evaluating a soldered joint, alternative slice images provide a way to represent the area to be soldered rolled out on a plane. To do this, slice images are generated which follow Spline surfaces according to their mathematical representation. The basis for this approach is that all slice images with their local, two-dimensional co-ordinate systems <st> are transformed into a global co-ordinate system <xyz> (Fig. 6).
The procedure to create a freeform slice through the slice image stack can be described as follows:
1. In one or more CT slice images, the soldered joint is marked by a Spline curve (shown in red colour in Fig. 7). In this process, the quantity of defined curves is arbitrary. The quantity of supporting points per layer has to be the same in order to establish the mass point matrix Gab. 2. Now we can calculate the Spline surface with the help of the mass points. Subsequently, on this surface, discrete points are calculated by iteration of the <uv> coordinates. As a result, a point cloud of three-dimensional points is created, which can be visualised both in 2D as a curved slice image (Fig. 9), and in 3D space as a triangulised object (Fig. 8). -Rolled out cross section of a tube joint Thus, it is now possible to make qualitative statements about the joint's impermeability. Furthermore, one may perform an analysis to measure, for example, the size of the blowhole on the generated slice image. However, if this cross section is executed repeatedly in the region of the soldering position, concentric to the originally defined cross section, then we obtain a number of these three-dimensional panorama views of the soldered joint. Visual inspection of these views in their entirety, without taking into account further evaluation strategies, may provide an initial estimate of impermeability.
SUMMARY
The use of computer tomography in industry offers great potential for the contactless and nondestructive recording of non-visible component regions. Since it is possible in this context to apply a significantly higher radiation level than for medical CTs, the measuring uncertainty can be clearly reduced, with a concomitant increase in data volume. Additionally, since the test objects are usually stationary, movement blur can as a rule be avoided [START_REF] Zabler | Röntgen-Computertomographie in der industriellen Fertigung (Kraftfahrzeug-Zulieferer) -Anwendungen und Entwicklungsziele[END_REF].
The results provided by computer tomography can be used equally effectively for various tasks. In the narrower sense of Reverse Engineering, it is possible to use the data for modelling, for example. However, the most common applications come from measuring analyses, such as wall thickness analyses and test methods within the context of quality assurance. The example discussed in the paper shows that the implementation of efficient analysis strategies is essential for process monitoring and automation. The option of generating arbitrary freeform cross sections by means of a slice image stack particularly opens up new strategies for component investigation.
Fig. 1. - Application of medical CT for operation planning [START_REF] Sembdner | Forming the interface between doctor and designing engineeran efficient software tool to define auxiliary geometries for the design of individualized lower jaw implants[END_REF]
Fig. 2. - Tube joint with two soldered flanges
Fig. 4. - Determination of the tube centre point in the slice image
Fig. 6. - Representation of the slice images in a compound of slice images. Left: in a global reference co-ordinate system, all slice images are unambiguously defined; this way, indexing of a voxel V is possible by the indices n, m, k. Right: in this global reference system, one may define an arbitrary Spline surface, whose discrete surface points P may be unambiguously transformed into the reference system.
Fig. 7. - Marking of the soldered joint in the CT image
Fig. 8. - 3D cross section through a tube joint in the region of the soldering position
                          Medical CT of a skull        Industrial CT of a tube joint
Image format              DICOM                        RAW, TIFF
Image size                512 x 512 pixels             991 x 991 pixels
Pixel size                0.44 x 0.44 mm               0.08 x 0.08 mm
Distance between images   1 mm                         0.08 mm
Image number              238                          851
Data volume               121 MB                       1550 MB
Measuring volume          about 225 x 225 x 238 mm     about 80 x 80 x 68 mm
Table 1. - Comparison of a medical and an industrial CT data record
Table 1 offers an example outlining the difference between a medical CT and an industrial micro-CT based on two typical data records. In this representation, the differences in accuracy can be seen. It is possible for high-resolution industrial CTs to achieve measuring inaccuracy values of less than 80 µm. This results in a clearly higher data volume, which is frequently many times that of a medical CT (in the example shown in Table 1, it is approximately 13 times greater). It is very difficult to handle such a flood of data. Consequently, either powerful computer systems capable of handling huge data volumes have to be available, or the data stock has to be reduced, which, in turn, leads to losses in accuracy. For the latter option, one solution is to remove individual layers from the slice stack, thereby reducing image resolution or colour intensity. | 14,650 | [
"1003710"
] | [
"96520",
"96520",
"96520",
"96520"
] |
01485818 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485818/file/978-3-642-41329-2_24_Chapter.pdf | György Gyurecz
email: gyurecz.gyorgy@bgk.uni-obuda.hu
Gábor Renner
email: renner@vision.sztaki.hu
Correction of Highlight Line Structures
Keywords: Highlight lines, highlight line structure
Introduction
Most important class A surfaces can be found on cars, airplanes, ship hulls, household appliances, etc. Beyond functional criteria, the design of class A surfaces involves aspects concerning style and appearance. Creating tools supporting the work of a stylist is a challenging task in CAD and CAGD.
A highlight line structure is a series of highlight lines, representing visually the reflection and the shape error characteristics of the surface. They are calculated as the surface imprint of the linear light source array placed above the surface [START_REF] Beier | Highlight-line algorithm for real time surface quality assessment[END_REF].
The structures are evaluated by the pattern and the individual shape of the highlight lines. A comprehensive quality inspection can be carried out by comparing the highlight line structures obtained with different light source and surface position settings. A uniform or smoothly changing highlight line pattern is essential for a high quality highlight line structure.
Following the inspection, the defective highlight curve segments are selected and corrected. Based on the corrected highlight curves, the parameters of the surface producing the new highlight line structure can be calculated [START_REF] Gyurecz | Correcting Fine Structure of Surfaces by Genetic Algorithm[END_REF].
In our method the correction of highlight line structure is carried out in two steps. First, sequences of evaluation points are defined to measure the error in terms of distance and angle functions. Next, these functions are smoothed and based on the new function values, new highlight line points are calculated. New highlight curve curves are constructed using these points. The outline of the method is summarized in Figure 1. For a point on the highlight line d(u,v)=0 holds, which must be solved for the control points of S(u,v). To design high quality surfaces, this relation has to be computed with high accuracy. We developed a robust method for computing points on highlight lines, which is described in detail in [START_REF] Gyurecz | Robust computation of reflection lines[END_REF].
The highlight lines are represented by curves constructed by interpolation in B-Spline form. For the calculation of P i control points of C(t) curves system of equation is solved, where the unknowns are the control points [START_REF] Pigel | The NURBS Book[END_REF].
Q_k = C(\bar{t}_k) = \sum_{i=0}^{n} N_{i,r}(\bar{t}_k) P_i    (2)
The parameter values tk of the highlight points Qk are set proportional to the chord distance between highlight points. To ensure C2 continuity of the curves, the degree r of the basis function N is set to 3.
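The chord-length parameterization and cubic B-Spline interpolation described here can be illustrated with an off-the-shelf routine. The sketch below uses scipy instead of the authors' own solver, so it is only an illustration of the idea; the point data are made up.

```python
import numpy as np
from scipy.interpolate import splev, splprep

def interpolate_highlight_points(Q):
    """Interpolating cubic B-spline through highlight points Q (N x 3), chord-length parameterized."""
    chords = np.linalg.norm(np.diff(Q, axis=0), axis=1)
    u = np.concatenate(([0.0], np.cumsum(chords)))
    u /= u[-1]                                     # parameters proportional to chord distance
    tck, _ = splprep(Q.T, u=u, s=0, k=3)           # cubic (k=3), interpolating (s=0)
    return tck, u

Q = np.array([[0, 0, 0], [1, 0.2, 0], [2, 0.1, 0.1], [3.2, 0.4, 0.1], [4, 0.3, 0.2]], float)
tck, u = interpolate_highlight_points(Q)
print(np.allclose(np.array(splev(u, tck)).T, Q, atol=1e-6))   # curve passes through the points
```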
Selection of the defective highlight curve segments
Selection identifies the location of the correction by fitting a sketch curve on the surface around the defective region. This is carried out with the interactive tools of the CAD system. To identify the affected highlight curve segments C_i, i=0...N, and their endpoints A_i and B_i, the intersection points with the sketch curve are searched. The identification is carried out by an algorithm utilizing an exhaustive search method [START_REF] Deb | Optimization for Engineering Design: Algorithms and Examples[END_REF]. The tangents T_i^1 and T_i^2 corresponding to the endpoints are also identified; they are utilized in the subsequent correction process.
In Figure 2, the defective curve segments are shown in bold; the endpoints are marked by solid squares. The dashed curve represents the user drawn sketch curve.
Evaluation of the highlight line pattern
The structure of the selected highlight curve segments is evaluated on sequences s_j, j=0...M of highlight points E_{0,0}, ... E_{i,j} ... E_{N,M} spanning over the defective segment in the crosswise direction. The sequences include the correct highlight curve points E_{0,j}, E_{1,j} and E_{N-1,j}, E_{N,j} needed to ensure the continuity of the corrected highlight segments with the adjoining unaffected region. We evaluate the structure error by d_j distance and α_j angle functions defined on the sequences s_j. The distance function represents the inequalities of the structure in the crosswise direction; the angle function characterizes the structure error along the highlight curves.
The location of each evaluation point is determined in the surrounding of the corresponding highlight curve point, in the direction perpendicular to the curve (see Fig. 3 and Fig. 4).
Definition of distance and angle error functions
The distance error function is defined by the d_{i,j} distances between the consecutive sequence elements:

d_{i,j} = ‖H_i‖ = ‖E_{i+1,j} - E_{i,j}‖    (5)

where H_i denotes the vector between the consecutive sequence points E_{i,j} and E_{i+1,j}.
The angle error function is defined by the α_{i,j} angles between the consecutive H_i vectors:

α_{i,j} = arccos( (H_i · H_{i+1}) / (‖H_i‖ ‖H_{i+1}‖) )    (6)
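A short sketch of Equations (5) and (6) as reconstructed above is given below. Treating H_i as the difference vector between consecutive sequence points is an assumption consistent with the text, and the function name is illustrative only.

import numpy as np

def error_functions(E_j):
    """Distance and angle error functions for one point sequence s_j.

    E_j : (N+1, 3) array of evaluation points E_{0,j} ... E_{N,j} taken
          crosswise over the highlight curves.
    Returns (d, alpha): d[i] = ||E_{i+1,j} - E_{i,j}|| and alpha[i] is the
    angle between consecutive difference vectors H_i and H_{i+1}.
    """
    E_j = np.asarray(E_j, dtype=float)
    H = np.diff(E_j, axis=0)                      # H_i = E_{i+1,j} - E_{i,j}
    d = np.linalg.norm(H, axis=1)                 # Eq. (5)
    cos_a = np.einsum('ij,ij->i', H[:-1], H[1:]) / (d[:-1] * d[1:])
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))  # Eq. (6)
    return d, alpha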
Based on the new, smoothed function values, points for the new highlight curves are obtained (an example error function is shown in Figure 5, and its smoothed counterpart in Figure 6).
Calculation of the new highlight curve points
The new function values are calculated by a least-squares approximation method applied to the original functions. Continuity with the highlight line structure of the adjoining region is ensured by constraints on the function end tangents. The tangents are calculated as T_j^1 = E_{0,j} - E_{1,j} and T_j^2 = E_{N-1,j} - E_{N,j}.
Figure 7 shows the calculation of the new R_{i,j} points (indicated by solid squares).
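The sketch below illustrates one way of smoothing a sampled error function by least squares while enforcing the two end-tangent (slope) constraints. The paper does not state the basis used for the approximation, so the low-degree polynomial model and the KKT solution of the constrained problem are assumptions made for this example.

import numpy as np

def smooth_error_function(y, slope_start, slope_end, degree=3):
    """Least-squares smoothing of an error function with end-tangent constraints.

    y           : sampled error function values (Eq. 5 or 6) at i = 0..N
    slope_start : prescribed derivative at i = 0 (continuity with the correct region)
    slope_end   : prescribed derivative at i = N
    Fits a polynomial of the given degree by least squares, subject to the
    two slope constraints, by solving the corresponding KKT system.
    """
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    A = np.vander(x, degree + 1)            # columns x^degree ... x^0

    def deriv_row(xv):
        # derivative of each monomial basis function at xv
        p = np.arange(degree, -1, -1)
        return p * np.where(p > 0, xv ** np.maximum(p - 1, 0), 0.0)

    C = np.vstack([deriv_row(x[0]), deriv_row(x[-1])])
    b = np.array([slope_start, slope_end])
    # KKT system: minimize ||A c - y||^2 subject to C c = b
    K = np.block([[A.T @ A, C.T], [C, np.zeros((2, 2))]])
    rhs = np.concatenate([A.T @ y, b])
    coeffs = np.linalg.solve(K, rhs)[: degree + 1]
    return np.polyval(coeffs, x)            # smoothed values at the sample positions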
Construction of the corrected highlight curve segments
The new C_i highlight curve segments are cubic B-Splines constructed from the new R_{i,j} points by a constrained least-squares curve fitting method [START_REF] Pigel | The NURBS Book[END_REF]. The points to be approximated are the new highlight curve points R_{i,0} ... R_{i,j} ... R_{i,M}, arranged by the highlight curves C_i. The constraints are the segment endpoints A_i and B_i and the endpoint tangents T_i^1 and T_i^2. The parameter values u_{Ai} and u_{Bi} of the new segments correspond to the segment endpoints A_i and B_i. For the calculation of the P_i control points, a system of equations is solved. The parameters u_k of the curve points Q_k are defined on u_k = u_{Ai} ... u_{Bi}; the parameter values are set proportional to the chord distances between the highlight curve points.
Application and Examples
The method is implemented in the Rhino 4 NURBS modeler. The calculation of the new highlight curve points and the construction of the corrected highlight curve segments are written in C++; the calculation and selection of highlight curves is realized in VBA. We tested our method on several industrial surfaces. In Fig. 9 and Fig. 10, two highlight curve structures before and after correction are presented. The defective surface area is selected interactively; the evaluation and correction of highlight lines is automated. The parameters of the automatic correction can be adjusted by the designer.
The method is successfully implemented in surface modeling software (Rhino 4) widely used in industrial shape design. The method is applicable to surfaces with a uniform or changing highlight line pattern and to a wide range of highlight line errors. The applicability of the method was proved on a number of industrial surfaces.
Fig. 1. Block diagram of the highlight line structure improvement method
Fig. 2. Selection of the defective highlight curve segments
Fig. 3. Definition of the evaluation point sequences
Fig. 4. Calculation of the evaluation points
In Figure 5, an error function constructed from points E_{i,j}, i=0..N, with N=5 is presented. The function values for i=2...N-2 correspond to points on defective highlight curves; the rapid and irregular changes represent the defects in the highlight curve structure. The function values at i=0,1 and N-1,N correspond to points on highlight curves of the adjoining correct pattern.
Fig. 5. Error function example
Fig. 6. The error function after smoothing
Fig. 7. Calculation of points for new highlight curve segments
Fig. 8. New highlight curve segment
Fig. 9. Car body element before and after correction
Fig. 10.
"1003711",
"1003712"
] | [
"461402",
"306576"
] |
01485819 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485819/file/978-3-642-41329-2_25_Chapter.pdf | George L Kovács
email: kovacs.gyorgy@sztaki.mta.hu
Imre Paniti
email: paniti.imre@sztaki.mta.hu
Re-make of Sheet Metal Parts of End of Life Vehicles -Research on Product Life-Cycle Management
Keywords: end of life vehicle, sheet-metal, incremental sheet forming, sustainability, product life-cycle
Some definitions and abbreviations
The motivation for the example used in this study comes from the EU Directive 2000/53/EC [1a], according to which, by 2015, the portion of each vehicle that should be recycled or reused has to increase to 95%. To avoid being too abstract, we use the management of sheet metal parts of worn-out or crashed cars as the example. We show a possible way of evaluating and measuring values during PLCM. We also deal with the complexity and problems of decision making during the processing of used sheet-metal parts, if the main goal is remake (reuse). The decisions concern dismantling and remake, e.g. re-cycling or re-use with or without repair, how to repair, etc. Some definitions and abbreviations are given first to help better understand some expressions in the context of worn out, broken or simply End of Life (EOL) vehicles. Remake, re-use and re-cycling should not be mixed up with waste management, as our goal is to use everything instead of producing waste.
─ ELV (End of Life Vehicle, EOL vehicle): cars after collision or worn-out cars
─ Shredder (S): a powerful machine that breaks and tears everything into small pieces (shreds), almost like a mill
Introduction and state-of-the-art
Management of EOV (EOL vehicles) is a rather complicated task, which needs to process several technical and legal steps (paperwork). All parts of all vehicles have to be under control during their whole Life-Cycle, including permissions to run, licence plates, permissions to stop, informing local authorities, etc. In this paper we deal only with technical aspects of a restricted set of parts, namely with sheet-metal parts.
2.1
Some sources (for example [START_REF]Depollution and Shredder Trial on End of Life Vehicles in Ireland[END_REF]) simplify the procedure to the following 3 steps:
Depollution; Dismantling; Shredding
The main goal is achieved, but several points (decision points) remain open and several questions unanswered: nobody really knows what to do, how to do it, and what the consequences of certain activities are. What happens with the different parts? Is there anything worth repairing? There are others who claim that the process is a little more complicated. The following EU suggestion does not go into details; it is a straightforward average procedure including paperwork, which is crucial if someone deals with EOVs.
2.2
According to [2] the procedure of dismantling should be the following:
─ Delivery of EOL vehicle
─ Papers and documents are fixed, permissions issued (or checked if issued by others)
─ Put the car onto a dry bed: remove dangerous materials and liquids, store everything professionally
─ Select useful parts, take them out and store under roof
─ Sell/offer for selling the tested parts
─ Press the body to help economic delivery
─ Re-use raw materials

A little more precise description of the removal sequence of different important parts/materials, with more details but still without decision points, is the following:
─ Remove battery and tanks filled with liquid gas
─ Remove explosive cartridges of airbags and safety belts
─ Remove gasoline, oils, lubricants, cooling liquid, anti-freeze, brake liquid, air-conditioner liquid
─ Most careful removal of parts containing quicksilver (mercury)
─ Remove catalysts
─ Remove metal parts containing copper, aluminium, magnesium
─ Remove tyres and bigger size plastic parts (fender/bumper, panels, containers for liquids)
─ Remove windshields and all glass products
2.3
The IDIS web site [3] has the following opinion:
The International Dismantling Information System (IDIS) was developed by the automotive industry to meet the legal obligations of the EU End of Life Vehicle (ELV) directive and has been improved to an information system with vehicle manufacturer compiled information for treatment operators to promote the environmental treatment of End-of-Life-Vehicles, safely and economically. The system development and improvement is supervised and controlled by the IDIS2 Consortium formed by automotive manufacturers from Europe, Japan, Malaysia, Korea and the USA, covering currently 1747 different models and variants from 69 car brands.
The access to and the use of the system is free of charge. The basic steps of dismantling suggested by IDIS2 are as follows:
─ Pre-treatment: batteries, pyrotechnics, fuels, AC (air conditioner) draining, catalysts
─ Dismantling: controlled parts to be removed, tires, other
2.4
At GAZ Autoschool [4] in the UK the following are underlined as the most important steps to follow:
1. Removing vehicle doors, bonnet, boot, hatch. Removing these items early in the dismantling process enables easier access to vehicle interior, reduces restriction in work bays and minimises the risk of accident damage to potentially valuable components. 2. Removing interior panels, trim, fittings and components. This is a relatively clean and safe operation which maximises the resale opportunities available for items whose value depends on appearance/condition and which may be damaged if left on the vehicle. Components to be removed include dashboard, instrument panel, heater element, control stalks, steering column. 3. Remove light clusters: An easy process but one which needs care to avoid damage.
Once removed items need to be labeled and stored to enable potential re-sale. 4. Removal of wiring harness: The harness should be removed without damage, meaning that all electrical components are unclipped and the wires pulled back through into the interior of the car so that it can be removed complete and intact.
Harness should be labeled and then stored appropriately. 5. Removal of Engine and Gearbox: This will involve the use of an engine hoist, trolley jacks and axle stands, and will often necessitate working under the vehicle for a short period to remove gear linkages etc. Often the dirtiest and most physical task. Engine and gearbox oil together with engine coolant will need to be drained and collected for storage. 6. Engine dismantling: Engines are kept for resale where possible. 7. Gearbox dismantling: Gearboxes are kept for resale where possible. 8. Brakes and shock absorbers: Brake components are checked and offered for resale where they are serviceable.
2.5
Finally we refer to [START_REF] Kazmierczak | Lersř Parkallé: A Case Study of Serial-Flow Car Disassembly: Ergonomics, Productivity and Potential System Performance[END_REF], which is a survey and case study on serial-flow car disassembly. The suggested technology is represented in Fig. 1, where one can see that the system uses five stations and four buffers.
At stations 1-3, glass, rubber and the interior are removed. At station 4 the "turning machine" rotates cars upside down to facilitate engine and gearbox unfastening. At station 5 the engine and gearbox are removed.
The procedure is the following:
1. Take a lot of pictures before you begin the disassembly process, including pictures of the interior. This important issue is rarely mentioned by other authors 2. Get a box of zip lock plastic bags in each size available to store every nut, bolt, hinge, clip, shim, etc. Make color marks to all. 3. Make sure you have a pen and a notebook by your side at all times to document any helpful reminders, parts in need of replacement and to take inventory 4. Remove the fenders, hood and trunk lid with the assistance of at least one able body to avoid damage and personal injury 5. Remove the front windshield and the rear window by first removing the chrome molding from the outside of the car, being careful not to scratch the glass. 6. This would be a good point to gut the interior. Remove the seats, doors and interior panels, carpeting and headliner. 7. Clear the firewall and take all the accessories off the engine.
8. Go through your notebook and highlight all the parts that need to be replaced and make a separate "to do" list for ordering them.
The most characteristic feature of the above study is its practical approach: be very careful and document everything; use bags, colour pens and a notebook to keep track of all parts and activities. This non-stop bookkeeping may hinder effective and fast work, but it surely helps in knowing and tracing everything, if needed.
Sheet-metal parts' management
It can be seen from the previous examples that practically nobody deals with the special problems of sheet metal parts of EOL vehicles. It is clear that sheet metal parts are only a certain percentage of an EOL vehicle, but it is also clear that almost every EOL vehicle has several sheet metal parts which could be re-used with or without corrections, with or without re-painting. This makes us believe that it is worthwhile to deal with sheet metal parts separately; moreover, in the following part of our study this will be our only issue. To think and to speak about re-use (re-make, re-shape) as a practical issue, we need as a minimum:
(a) a proper dismantling technology to remove sheet-metal parts without causing damage to them
(b) a measurement technology to evaluate the dismantled part, and software to compare the measured values with the requested values, to decide whether the dismantled part is appropriate or needs correction and whether it can be used for another vehicle, and finally
(c) a technology to correct slightly damaged sheets, based on CAD/CAM information. This information may come from design (new or requested parts) or from measurements by a scanner (the actual, dismantled part).
Our staff, using the ISF technology and the robotic laboratory of SZTAKI, is able to perform the requested operations on our machines and with our software.
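As a small illustration of point (b) above, the sketch below compares a scanned part with its nominal (CAD) shape and gives a simple verdict. It assumes the two point sets are already registered and in correspondence; in practice an alignment step (e.g. ICP) would precede this, and the tolerance value is a placeholder, not a value from this study.

import numpy as np

def sheet_deviation(measured_pts, nominal_pts, tolerance_mm=1.0):
    """Compare a scanned sheet-metal part with its nominal (CAD) shape.

    measured_pts, nominal_pts : (n, 3) arrays of corresponding points,
    assumed to be already aligned in a common coordinate frame.
    Returns per-point deviations plus a simple verdict: 'OK' (use as is)
    or 'CORRECT' (send to ISF for correction).
    """
    dev = np.linalg.norm(np.asarray(measured_pts, dtype=float)
                         - np.asarray(nominal_pts, dtype=float), axis=1)
    verdict = 'OK' if dev.max() <= tolerance_mm else 'CORRECT'
    return dev, {'max': dev.max(), 'mean': dev.mean(), 'verdict': verdict}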
A sheet-metal decision sequence
Our approach needs to follow a rather complicated decision sequence; it will be detailed only for sheet metal (SM) parts, supposing the available Incremental Sheet Forming (ISF) facilities. It uses the abbreviations defined above and emphasizes the decision types and points (the full numbered sequence is listed further below). This decision sequence can naturally be taken into account as a small and short period of the PLCM, namely of the EOL cars' sheet-metal parts. Each move and activity has a certain actual price, which is commonly accepted; however, we know that these prices are not really correct: they do not support sustainability and, on the other hand, often increase negative effects.
3.2
Evaluation of costs and advantages of the re-use and re-make of sheet-metal parts of EOVs
After such a long (or long-looking) decision procedure we need a methodology for evaluating everything we do or do not perform to obtain re-usable sheet-metal parts from parts of EOL vehicles.
The simplest way would be to simply compare the costs and prices of all involved parts (good, to be corrected, etc.), services (scanning, ISF, manual work, painting, etc.) and conditions (shredder or repair, etc.).
Today that is the only way some people follow, if any. Generally a very quick look at the EOV is enough to send it to the shredder, as this is the simplest decision with the smallest risk. To be more precise, the risk is there, but rather hidden.
The cost/value estimations and comparisons can be performed relatively easily; however, the results correspond only to the present economic-political situation and to the actual financial circumstances, and would not say anything about the future, which is embedded in "sustainability", "footprint" and "side-effects" (see later on). We believe that there exist appropriate tools and means for "real", "future-centric" evaluations; thus we need to find and use them.
Our choice is the KILT model and the TYPUS metrics, which will be explained and used in this study; for details see [START_REF] Michelini | Integrated Design for Sustainability: Intelligence for Eco-Consistent Products and Services[END_REF], [START_REF] Michelini | Knowledge Enterpreneurship and Sustainable Growth[END_REF], [START_REF] Michelini | Knowledge Society Engineering, A Sustainable Growth Pledge[END_REF] and [8a].
The main goal of the above tools is to model and quantify the complete delivery (all products, side-products, trash and their effects, i.e. all results) of a firm, and to model all interesting and relevant steps of the LC (or LCM). It is clear from the definitions (see later and the references) that any production step can be evaluated and can be understood as a cost value. If we speak about car production, the input is raw material, machining equipment, design information, and the people who work; the output (delivery) is the car. Sheet metal production is a small part of car making, generally prior to body assembly. For our study we take into consideration only sheet-metal parts.
We consider and make measurements, comparisons, re-make by using ISF and re-painting, and other actions. These can hardly be compared with the "simple" processes used in new car manufacturing. Every step of the decision sequence below can be investigated one by one, taking into account all effects and side-effects. For the sake of simplicity, considering only the input (the sheet metal part to be measured and perhaps corrected) and the output (the sheet-metal part ready to be used again) may be enough.
Fig. 2. shows some qualitative relationships, which cannot be avoided if environmental issues, sustainability, re-use and our future are important points.
Fig. 2 gives a general picture of our main ideas, and it needs some explanation. See [8a] for more details. It is a rather simplified view of some main players in the production/service arena, however it still shows quite well certain main qualitative relationships. We believe that these can be used to understand what is going on in our (engineering-manufacturing-sustainable) world.
Fig. 2. Re-use, PLCM, ecology, sustainability and KILT
The TYPUS/KILT metrics, methodology and model give us a possibility to better understand and evaluate production results and their components in terms of the K, I, L, T values. They give us a method of calculations and comparisons based on realistic values. The side effects and 2nd and 3rd order effects, etc. mean the following: let us consider a simple and simplified example: to produce a hybrid car (today) means (among others) to produce and build in two engines, two engines need more metal than one (side effect), to produce more metal we need more electrical energy and more ores (2nd order side effect), to produce more electricity more fuel is necessary and to produce more ores needs more miners' work (3rd order side effect), etc., and it could be continued. It is a hard task to know how deep and how broad we should go with such calculations. And if we take a look at our example there are several other viewpoints that could be taken into account. Just one example: the increased water consumption during mining. We have to confess that in the recent study on sheetmetal parts of EOL cars we do not deal with the side-effects at all. The reason is simply that we are in the beginning of the research and only try to define what should we do in this aspect.
Today the whole world, or at least most countries, understands the importance of natural resources and the environment. Based on this understanding, re-use and re-cycling are becoming more and more important in everyday life, as is the decrease of CO2 emissions, etc. All of this requires keeping the consumption of energy, water, natural resources, manpower, etc. at a moderate, sustainable level. This leads to sustainable development, or even to sustainability.
(Figure 2 relates the ecological footprint, the life-cycle and PLCM, re-use, re-cycling, sorting and disassembly, the 1st, 2nd, 3rd and higher order effects, sustainability and sustainable development, and the TYPUS/KILT metrics and methodology.)
3.3
The KILT model and the TYPUS metrics.
Just as a reminder, we repeat some main points of the KILT model and TYPUS, which are properly explained in [6, 7, 8 and 8a]. TYPUS metrics means Tangibles Yield per Unit of Service. It is measured in money, on an ecological basis. It reflects the total energy and material consumption of (all) (extended) products of a given unit, e.g. of an enterprise. But it can be applied to bigger units (e.g. virtual enterprises), to any smaller units (e.g. a workshop or one machine), or to any selected actions (e.g. painting, bending, cutting, etc.) of any complexity. In this study everything is about the sheet-metal management of EOVs; however, due to the complexity of the problems and to the status of the research, we still do not make real value calculations.
The metrics assumes several things, such as a life-cycle function and the material and energy provisions during manufacturing, operation, repair, reuse or dismissal, etc.
KILT is an arbitrarily but properly chosen implementation of TYPUS; we could imagine other realizations as well. However, the given definitions currently seem to be the best to manage the requested goals, as far as the authors believe. The related TYPUS metrics is further discussed later on. In earlier models and considerations, the delivered quantity (all outputs), Q, was assumed to depend on the contributed financial (I) and human (L) capitals; here, in addition, the know-how (K, innovation) and the tangibles (T) have non-negligible effects.
The relationship still works as a multiplication of the four factors and looks as follows:
Q = f (K, I, L, T)
Summarizing the different factors, each of them carries some content as capital, knowledge, activity, material, etc. at the same time:
─ K: technical capital: knowledge, technology, know-how, etc. (intangibles)
─ I: financial capital: investment, capital, etc.
─ L: human capital: labor, traditional labor, human efforts, welfare charges, etc.
─ T: natural capital: tangible resources, such as material, consumables, ecologic fees, utilities, commodities, etc.
All the contributed technical (K), financial (I), human (L) and natural (T) capitals are included, and there is a tetra-linear dependence, which assumes operation near equilibrium assets. The KILT models reliably describe the delivered product quantities, Q. If one contribution is lacking (any of the above factors has a value of 0), the balance is lame, and the reckoned productivity figures are untruthful or meaningless.
The tetra-linear dependence means the equivalence of the assets and their synergic, cumulated action. The company return is optimal when the (scaled) factors are balanced; the current scaling expresses the four capitals in money (the comparison of non-homogeneous quantities is meaningless; the output Q has a proper value only with homogeneous inputs). The return vanishes or becomes a loss if one contribution disappears. The loss represents the imbalance between the constituent flows (know-how, money, work out-sourcing, bought semi-finished parts, etc.).
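A tiny illustrative sketch of the tetra-linear (multiplicative) relation follows. The scaling constant and the money-based scaling of the four capitals are assumptions for the example, since the text does not fix them numerically; the point shown is only that the delivered quantity vanishes when any contribution is missing.

def kilt_output(K, I, L, T, k=1.0):
    """Illustrative tetra-linear KILT relation Q = k * K * I * L * T.

    K, I, L, T are the four capitals, assumed to be already expressed in money.
    If any contribution is missing (zero), the delivered quantity Q vanishes,
    mirroring the 'lame balance' described in the text.
    """
    return k * K * I * L * T

# e.g. kilt_output(1.0, 1.0, 1.0, 0.0) == 0.0 : no tangible resources, no delivery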
The TYPUS metrics.
TYPUS, tangibles yield per unit service: the measurement plot covers the materials supply chain, from procurement, to recovery, so that every enjoyed product-service has associated eco-figures, assembling the resources consumption and the induced falls-off requiring remediation. The results are expressed in money. The point is left open, but, it needs to be detailed, to provide quantitative (legal metrology driven) assessment of the "deposit-refund" balance.
The metrics is an effective standard, aiming at the natural capital intensive exploitation. The supply chain lifecycle visibility needs monitoring and recording the joint economic/ecologic issues, giving quantitative assessment of all input/output materials and energy flows.
We have to apply these considerations to the "remake / no remake" decision sequence for sheet metal parts of EOVs, taking into account all technological steps and human actions, including their side effects, if we can understand, define, measure and quantify them. This will take some more time and a lot of researchers' effort. On the positive side, we are convinced that the above discussed metrics and model can be used between any two points of the PLC, i.e. all costs, outputs, results, effects and side effects can be measured, calculated and evaluated in the decision sequence of sheet metal parts of EOL cars in our case.
Some technological issues of ISF
There are several open issues concerning the ISF technology and 3D measurements.
We have to make several experiments with the scanner system to obtain an exact view of the measured sheet, and with the software systems which compare the different surfaces with each other and with the accepted shape's data. These may be results of scanning, but more often they come from the design (CAD/CAM) processes. Finally the software dictates, and the humans generally accept, what should be done using ISF.
On the ISF side we still have problems with accurate shape and thickness measurements. This work is currently running with high effort. Fig. 3 presents an ISF experiment with an industrial robot using a 50 cm x 50 cm frame for the sheet. For car parts we can use the same robot, but a larger and different frame will be needed; it is on the design table already.
Fig. 3. ISF experiment with a FANUC robot
From the technological point of view, ISF consists of the gradual plastic deformation of a metal sheet by the action of a spherical forming tool whose trajectory is numerically controlled. Interest in the evolution of ISF is rather old; it started in 1967 with the patent of Leszak [START_REF] Leszak | Apparatus and Process for Incremental Dieless Forming[END_REF]. This idea and technology is still active today in the field of producing sheet metal and polystyrene parts in small batch and one-of-a-kind production, rapid prototypes, medical aid manufacturing and architectural design. A specific forming tool is mounted on the machine spindle or on a robot, and it is moved according to a well-defined tool path to form the sheet into the desired shape. Several ISF strategies have been developed which mainly differ in equipment and forming procedure. In particular, the process can be divided into:
─ Single Point Incremental Forming (SPIF)
Here the sheet metal is shaped by a single tool (with a faceplate supporting the initial level of the sheet).
─ Two Points Incremental Forming (TPIF),
where the sheet metal shaping is ensured by: a) two counter tools or b) a local die support that is a sort of partial / full die.
In Full Die Incremental Forming the tool shapes the sheet alongside a die; this die could be produced from cheap materials such as wood, resin or low cost steel; the use of a die ensures a better and more precise shape of the final piece.
As a repair tool, only SPIF or TPIF with synchronised counter tools can be considered, because the manufacturing of full or partial dies needs more time and money. On the other hand, SPIF has some drawbacks compared to TPIF with two counter tools.
Experimental investigations and numerical analysis were carried out by Shigekazu Tanaka et al. [START_REF] Tanaka | Residual Stress In Sheet Metal Parts Made By Incremental Forming Process[END_REF] to examine the residual stress in sheet metal parts obtained by incremental forming operations, because distortions were observed after removing the outer portion of the incrementally formed sheet metal part. Results showed that "tension residual stress is produced in the upper layer of the sheet and compression stress in the lower", and furthermore that the stress increases with the increase of the tool diameter [START_REF] Tanaka | Residual Stress In Sheet Metal Parts Made By Incremental Forming Process[END_REF].
Crina Radu [START_REF] Radu | Analysis of the Correlation Accuracy-Distribution of Residual Stresses in the Case of Parts Processed by SPIF, Mathematical Models and Methods in Modern Science[END_REF] analysed the correlation between the accuracy of parts processed by SPIF using different values of process parameters and the distribution of the residual stresses induced in the sheets as results of cold incremental forming.
The hole drilling strain gauge method was applied to determine the residual stresses distribution through the sheet thickness. Experiments showed that the increase of tool diameter and incremental step depth increased residual stresses, which led to higher geometrical deviations [START_REF] Radu | Analysis of the Correlation Accuracy-Distribution of Residual Stresses in the Case of Parts Processed by SPIF, Mathematical Models and Methods in Modern Science[END_REF].
J. Zettler et al. stated in their work that SPIF indicate "great residual stresses to sheet during the forming which lead to geometrical deviations after releasing the fixation of the sheet". They introduced a spring back compensation procedure in which an optical measurement system is used for measuring the part geometry after the forming [START_REF] Zettler | Springback Compensation for Incremental Sheet Metal Forming Applications, 7. LS-DYNA Anwenderforum[END_REF].
Silva [START_REF] Silva | Revisiting single-point incremental forming and formability/failure diagrams by means of finite elements and experimentation[END_REF] et al. made some Experimental Investigations and Numerical Analysis to evaluate the applicability and accuracy of their analytical framework for SPIF of metal sheets. They stated that "plastic deformation occurs only in the small radial slice of the component being formed under the tool. The surrounding material experiences elastic deformation and, therefore, it is subject of considerably lower stresses."
In order to compensate springback more effectively, in-process (online) residual stress measurements are suggested. Residual stress measurement methods can be characterized according to the length scale over which the stresses balance.
Feasible Non-Destructive Testing (NDT) Methods based on a summary of Withers et al. [START_REF] Withers | Residual stress part 2-nature and origins[END_REF] are Ultrasonic and Magnetic Barkhausen noise (MBN) measurements. By comparing these methods we can say that ultrasonic solutions can be used for nonferromagnetic materials too, but for the evaluation of Multiple Residual Stress Components the Barkhausen noise (BN) testing is preferable. The work of Steven Andrew White showed that "BN testing is capable of providing near-surface estimates of axial and hoop stresses in feeder piping, and could likely be adapted for in situ feeder pipe inspection or quality assurance of stress relief during manufacture" [16].
By adapting the MBN solution of Steven Andrew White to Incremental Forming of sheet metals we can realize an enhanced concept of J. Zettler et al. [START_REF] Zettler | Springback Compensation for Incremental Sheet Metal Forming Applications, 7. LS-DYNA Anwenderforum[END_REF] where the optical measurement system is replaced/extended by a MBN measurement device integrated into a forming tool. This solution may allow finishing the manufacturing/repairing of a part with high geometrical accuracy however, without releasing the fixation.
Conclusions and further plans
Our real goal is to give some means and tools to calculate the different values which correspond to the different phases of the life-cycle of a product (PLC). We especially emphasize re-use and re-cycling as important LC phases, due to the approaching water, energy and raw material shortages. Generally, by product we mean anything which is used by simple users (a car, a cup, a bike, or a part of them, etc.), anything which is used by dedicated users to produce or manage other products (a machine tool, a robot, a house, a test environment, etc.), or anything which is used to manage everything else (a firm, a factory, a ministry, etc.). We differentiate between simple products and extended products (as traditional and extended enterprises) and between tangible and intangible parts (aspects), and service is taken into account as a product, too.
In the present study we restrict ourselves to a very narrow part of the PLCM of cars: to evaluate the sheet-metal parts of EOL cars, and then to decide whether to re-make (repair, use) them or let them go to the shredder to be dismissed.
During our research to assist the re-use of sheet metal parts we had several problems to solve in order to produce as little waste as possible and to prefer re-use or re-make. There were several machine and human decisions which need support. An important form of assistance is 3D modelling and visualisation to help human decision making when a simple view is not exact enough. The set and methods of decision making led us towards cognitive info-communication; this direction should be extended and explained in more detail in the future.
We showed that the above explained simple multiplicative form of KILT cannot yet be used for economically useful calculations; it contains only several ideas and qualitative relationships pointing in the right direction. We plan to find proper relationships to use our ideas and formulae in real-world situations, to assist not only designers and engineers in their work, but politicians and other decision makers as well. These studies and their resulting calculations, values and suggestions on how to proceed will be presented in a following study. Specific applications to ISF technology may mean simplifications and easier understanding and use of the metrics and the model.
Fig. 1. Serial-Flow Car Disassembly
1. Car arrives: on wheels or on a trailer; papers and documents are fixed, permission issued (or checked if issued by others)
2. It goes or is taken to the dismantling bed (dry bed)
3. Remove liquids and dangerous materials (unconditional)
4. Decision 1: Shredder (S) or dismantling (DM) or delayed decision after the beginning of dismantling (DD). The decision is made basically by a human, eventually assisted by measurements or even by 3D part modeling
   (a) if S: no more work to do: the car goes to the shredder and then to burial (dismissal)
   (b) if DM: disassembly starts, parts are taken off one by one until the last one, based on a given protocol for all car types
   (c) if DD: disassembly starts, parts are taken off one by one, sorted and stored, until the next decision can be made
5. Decision 2: S or DM, done at any time. The decision may be partial shredder (PS) and partial dismantling (PD) after a while
   (a) if PS&PD: certain parts are taken apart, the rest goes to the shredder
   (b) if DM, or DD and PD are done, we now have a lot of parts organised somehow
6. Decision 3: Select sheet metal parts: automatically, manually or in a hybrid way; keep the SMs, put away the rest
   (a) Examine all SM parts; first a thickness (TH) measurement is done
   (b) if TH is too small, the part goes to S. The rest goes to border measurement (SB), as the borders of a sheet may easily be damaged during dismantling
   (c) SB may be done by optics and AI and/or by a human, or both after each other
   (d) if SB is repairable or good, make a shape measurement (SHM)
   (e) Compare the measured sheet (MS) to the standard shape (SH). SH can be taken by measuring a failure-free sample, or from any appropriate catalogue. For processing we need CAD data in both cases
   (f) Compare SH with MS and calculate the differences; this is the deviation from standard (DS)
7. Decision 4: if DS is small enough (defined by the customer who will need the part, or taken as a generally accepted average value), the part goes to repair. The rest goes to the shredder
8. Decision 5: repair by hand, by ISF or combined; any sequence is possible
   (a) if ISF: the part goes to the ISF centre together with its CAD/CAM code, and is processed
   (b) if manual or combined: the part goes to a worker, when needed, after or before ISF
   (c) if ISF is done, a final measurement is needed (SHM)
9. Decision 6: if accepted, the part goes to the shop or to a workshop for painting, and then to a shop
   (a) if rejected, it goes back to 6.6
10. The part is accepted, sent to the shop or to business again
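The sketch below encodes the sheet-metal branch of the decision sequence above (Decisions 3-5) as a simple routing function. The numeric thresholds and the return labels are placeholders for illustration, not values taken from the study; in practice they would be set by the customer or taken as generally accepted averages.

def route_sheet_metal_part(thickness_mm, border_ok, deviation_mm,
                           min_thickness_mm=0.6, max_deviation_mm=5.0):
    """Simplified routing of one dismantled sheet-metal part.

    thickness_mm : measured sheet thickness (TH)
    border_ok    : border inspection (SB) result, True if good or repairable
    deviation_mm : deviation from the standard shape (DS)
    Returns 'SHREDDER' or 'REPAIR' (ISF and/or manual work, followed by a
    final shape measurement before acceptance).
    """
    if thickness_mm < min_thickness_mm:   # Decision 3(b): sheet too thin
        return 'SHREDDER'
    if not border_ok:                     # Decision 3(d): borders beyond repair
        return 'SHREDDER'
    if deviation_mm <= max_deviation_mm:  # Decision 4: close enough to be worth repairing
        return 'REPAIR'                   # Decision 5: ISF and/or manual correction
    return 'SHREDDER'                     # too far from the standard shape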
"1003713",
"1003714"
] | [
"306576",
"488112",
"306576"
] |
01485820 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485820/file/978-3-642-41329-2_26_Chapter.pdf | Martin Hardwick
email: hardwick@steptools.com
David Loffredo
Joe Fritz
Mikael Hedlind
Enabling the Crowd Sourcing of Very Large Product Models
Keywords: Data exchange, Product Models, CAD, CAM, STEP, STEP-NC
INTRODUCTION
Part 21 is a specification for how to format entities describing product data [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 21: Implementation methods: Clear text encoding of the exchange structure[END_REF]. The format is minimal to maximize upward compatibility and simple to allow for quick implementation. It was invented before XML, though not before SGML, and it makes no special allowance for URL's [START_REF]Uniform Resource Identifiers (URI): Generic Syntax[END_REF].
Several technical data exchange standards use Part 21. They include STEP for mechanical products, STEP-NC for manufacturing products and IFC for building products. Over twenty years, substantial installed bases have been developed for all three with many Computer Aided Design (CAD) systems reading and writing STEP, a growing number of Computer Aided Manufacturing (CAM) systems reading and writing STEP-NC, and many Building Information Management (BIM) systems reading and writing IFC.
The data described by STEP, STEP-NC and IFC is continually growing [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. STEP was first standardized as ISO 10303-203 for configuration controlled assemblies, and as ISO 10303-214 for automotive design. Both protocols describe the same kinds of information, and have taken turns at the cutting edge. Currently they are being replaced by ISO 10303-242 which will add manufacturing requirements, such as tolerances and surface finishes, to the product data [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 242: Application protocol: Managed Model-based 3D Engineering[END_REF].
STEP-NC is a related standard for manufacturing process and resource data. It has been tested by an industry consortium to verify that it has all the features necessary to replace traditional machining programs. They recently determined that it is ready for implementation and new interfaces are being developed by the Computer Aided Manufacturing (CAM) system vendors [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 238: Application Protocols: Application interpreted model for computerized numerical controllers[END_REF].
IFC describes a similar set of standards for building design and construction. IFC has made four major releases with the most recent focused on enabling the concurrent modeling of building systems. These include the systems for electric power, plumbing, and Heating, Ventilation and Air Conditioning (HVAC). The building structural elements such as floors, rooms and walls were already covered by previous editions. With the new release, different contractors will be able to share a common model during the construction and maintenance phases of a building [START_REF]ISO 16739: Industry Foundation Classes for data sharing in the construction and facility management industries[END_REF].
All three models are being used by a large community to share product data but Part 21 has been showing its age for several years. In the last ten years there have been six attempts to replace it with XML [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 28: Implementation methods: XML representations of EXPRESS schemas and data, using XML schemas[END_REF]. To date, none has succeeded but there is a growing desire for a more powerful and flexible product data format.
This paper describes an extension to Part 21 to enable the crowd sourcing of very large product models. Extending the current format has the advantage of continuing to support the legacy which means there will be a large range of systems that can already read and write the new data. The new edition has two key new capabilities:
1. The ability to distribute data between model fragments linked together using URI's. 2. The ability to define intelligent interfaces that assist the user in linking, viewing and running the models.
The next section describes the functionalities and limitations of Part 21 Editions 1 and 2. The third section describes how the new format enables massive product databases. The fourth section describes how the new format enables crowdsourcing. The fifth section outlines the applications being used to test the specification. The last section contains some concluding remarks.
EDITIONS 1 AND 2 OF PART 21
STEP, STEP-NC and IFC describe product data using information models. Each information model has a schema described in a language called EXPRESS that is also one of the STEP standards [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. EXPRESS was defined by engineers for engineers. Its main goal was to give clear, concise definitions to product geometry and topology.
An EXPRESS schema defines a set of entities. Each entity describes something that can be exchanged between two systems. The entity may describe something simple such as a Cartesian point or something sophisticated such as a boundary representation. In the latter case the new entity will be defined from many other entities and the allowed data structures. The allowed data structures are lists, sets, bags and arrays. Each attribute in an entity is described by one of these data structures, another entity or a selection of the above. The entities can inherit from each other in fairly advanced ways including AND/OR combinations. Finally EXPRESS has rules to define constraints: a simple example being a requirement for a circle radius to be positive; a more complex example being a requirement for the topology of a boundary representation to be manifold. Part 21 describes how the values of EXPRESS entities are written into files. A traditional Part 21 file consists of a header section and a data section. Each file starts with the ISO part number (ISO-10303-21) and begins the header section with the HEADER keyword. The header contains at least three pieces of information a FILE_DESCRIPTION which defines the conformance level of the file, a FILE_NAME and a FILE_SCHEMA. The FILE_NAME includes fields that can be used to describe the name of the file, a time_stamp showing the time when it was written, the name and organization of the author of the file. The FILE_NAME can also include the name of the preprocessing system that was used to write the file, and the name of the CAD system that created the file. One or more data sections follow the header section. In the first edition only one was allowed and this remains the case for most files. The data section begins with the keyword DATA, followed by descriptions of the data instances in the file. Each instance begins with an identifier and terminates with a semicolon ";". The identifier is a hash symbol "#" followed by an unsigned integer. Every instance must have an identifier that is unique within this file but the same identifier can be given to another instance in another file. This includes another version of the same file.
ISO-10303-21;
HEADER;
FILE_DESCRIPTION((''),'2;1');
FILE_NAME('','',(''),(''),'','','');
FILE_SCHEMA(('AUTOMOTIVE_DESIGN'));
ENDSEC;
DATA;
#10=ORIENTED_EDGE('',*,*,#44820,.T.);
#20=EDGE_LOOP('',(#10));
#30=FACE_BOUND('',#20,.T.);
ENDSEC;
END-ISO-10303-21;
The identifier is followed by the name of the entity that defines the instance. The names are always capitalized because EXPRESS is case insensitive. The name of the instance is then followed by the values of the attributes, listed between parentheses and separated by commas. Let's look at instance #30. This instance is defined by an entity called FACE_BOUND. The entity has three attributes. The first attribute is an empty string, the second is a reference to an EDGE_LOOP and the third is a Boolean with the value True. The EXPRESS definition of FACE_BOUND is shown below. FACE_BOUND is an indirect subtype of representation_item. The first attribute of FACE_BOUND (the string) is defined by this super-type. Note also that the "bound" attribute of face_bound is defined to be a loop entity, so EDGE_LOOP must be a subtype of LOOP.

ENTITY face_bound
  SUBTYPE OF (topological_representation_item);
  bound : loop;
  orientation : BOOLEAN;
END_ENTITY;

Part 21 had two main design goals: the files should be easy for engineers to read and write, and the format should be as minimal as possible. The first goal was met by requiring the files to be encoded in simple ASCII, by requiring all of the data to be in one file, and by requiring every identifier to be an unsigned integer that only has to be unique within the context of one file. The latter condition was presumed to make it easier for engineers to write parsers for the data.
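As an illustration of how simple the instance syntax is to process, the sketch below parses a single data line such as #30=FACE_BOUND('',#20,.T.); into its identifier, entity name and attribute strings. It is a simplified example, not part of the standard or of any existing toolkit: it ignores multi-line instances, strings containing commas or parentheses, and the other details of the full grammar.

import re

INSTANCE_RE = re.compile(r"^#(\w+)\s*=\s*([A-Za-z0-9_]+)\s*\((.*)\)\s*;\s*$")

def split_attributes(body):
    """Split a Part 21 attribute list on top-level commas only."""
    attrs, depth, current = [], 0, ''
    for ch in body:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == ',' and depth == 0:
            attrs.append(current.strip())
            current = ''
        else:
            current += ch
    if current.strip():
        attrs.append(current.strip())
    return attrs

def parse_instance(line):
    """Return (identifier, entity name, attribute strings) for one data line."""
    m = INSTANCE_RE.match(line.strip())
    if not m:
        raise ValueError('not a simple instance line: ' + line)
    ident, entity, body = m.groups()
    return ident, entity, split_attributes(body)

# parse_instance("#30=FACE_BOUND('',#20,.T.);") -> ('30', 'FACE_BOUND', ["''", '#20', '.T.'])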
The second design goal was met by minimizing the number of keywords and structural elements (nested parentheses). In most cases, the only keyword is the name of the entity and unless there are multiple choices there are no other keywords listed. The multiple choices are rare. They happen if an attribute can be defined by multiple instances of the same type. An example would be an attribute which could be a length or a time. If both possibilities are represented as floating point numbers then a keyword is necessary to indicate which has been chosen.
Making the Part 21 format simple was helpful in the early years as some users developed EXPRESS models and hand populated those files. However as previously mentioned there are thousands of definitions in STEP, STEP-NC and IFC and to make matters worse the definitions are normalized to avoid insertion and deletion anomalies. Consequently, it quickly became too difficult for engineers to parse the data by hand and a small industry grew up to manage it using class libraries. This industry then assisted the CAD, CAM and BIM vendors as they implemented their data translators.
The two design goals conflict when minimizing the number of keywords makes the files harder to read. This was dramatically illustrated when XML became popular. The tags in XML have allowed users to create many examples of relatively easy to understand, self-describing web data. However, for product models containing thousands of definitions the tags are less helpful. The following example recodes the first line of the data section of the previous example in XML.
<data-instance ID="i10">
  <type IDREF="oriented_edge">
    <attribute name="name" type="label"></attribute>
    <attribute name="edge_start" type="vertex" value="derived"/>
    <attribute name="edge_end" type="vertex" value="derived"/>
    <attribute name="edge_element" type="edge"><instance-ref IDREF="i44820"/></attribute>
    <attribute name="orientation" type="BOOLEAN">TRUE</attribute>
  </type>
</data-instance>

Six XML data formats have been defined for STEP in two editions of a standard known as Part 28 [START_REF]ISO 16739: Industry Foundation Classes for data sharing in the construction and facility management industries[END_REF]. The example above shows the most verbose format, called the Late Binding, which tried to enable intelligent data mining applications. Other formats were more minimal, though none as minimal as Part 21. In practice the XML tags add little value to large product models because anyone who wants to parse the data needs to process the EXPRESS anyway. They also cause occasional update problems because, in XML, adding new choices means adding new tags, which can mean that old data (without the tags) is no longer valid.
The relative failure of Part 28 has been mirrored by difficulties with Part 21 Edition 2. This edition sought to make it easier for multiple standards to share data models. At the time STEP was moving to an architecture where it would be supporting tens or hundreds of data exchange protocols each tailored to a specific purpose and each re-using a common set of definitions. Edition 2 made it possible to validate a STEP file in all of its different contexts by dividing the data into multiple sections each described by a different schema. In practice, however, the applications have folded down to just three that are becoming highly successful: STEP for design data, STEP-NC for manufacturing data and IFC for building and construction date.
The failures of XML and Edition 2 need to be balanced against the success of Edition 1. This edition is now supported by nearly every CAD, CAM and BIM system. Millions of product models are being made by thousands of users. Consequently there is an appetite for more, and users would like to be able to create massive product models using crowd sourcing.
MASSIVE PRODUCT MODELS
The following subsections describe how Edition 3 enables very large product models. The first two subsections describe how model fragments can be linked by URI's in anchor and reference sections. The third subsection describes how the transport of collections of models is enabled using ZIP archives. The fourth subsection describes how the population of a model is managed using a schema population.
Anchor section
The syntax of the new anchor section is simple. It begins with the keyword ANCHOR and ends with the keyword ENDSEC. Each line of the anchor section gives an external name for one of the entities in the model. The external name is a reference that can be found using the fragment identifier of a URL. For example, the URL www.server.com/assembly.stp#front_axle_nauo references the front_axle in the following anchor section.
ANCHOR;
<front_axle_nauo> = #123;
<rear_axle_nauo> = #124;
<left_wheel_nauo> = #234;
<right_wheel_nauo> = #235;
ENDSEC;
Unlike the entity instance identifiers of Edition 1, anchor names are required to be unique and consistent across multiple versions of the exchange file. Therefore, although the description of the front_axle in chasis.stp may change, the front_axle anchor remains constant.
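A small sketch of how an application might resolve such a URI against a parsed ANCHOR section follows. The anchor table is represented here as a plain Python dict, and the line pattern assumes the simple "<name> = #id;" form shown above; fetching remote files and keeping tables current is left to the application.

from urllib.parse import urldefrag
import re

ANCHOR_LINE_RE = re.compile(r"<([^>]+)>\s*=\s*(#\w+)\s*;")

def parse_anchor_section(text):
    """Build a name -> local identifier table from an ANCHOR section."""
    return dict(ANCHOR_LINE_RE.findall(text))

def resolve(uri, anchor_tables):
    """Resolve 'http://host/file.stp#anchor' to (file URL, local identifier).

    anchor_tables maps a file URL to its parsed anchor table.
    """
    base, fragment = urldefrag(uri)
    return base, anchor_tables[base][fragment]

# illustrative use with the anchors listed above:
# tables = {'http://www.server.com/assembly.stp': parse_anchor_section(anchor_text)}
# resolve('http://www.server.com/assembly.stp#front_axle_nauo', tables) -> (..., '#123')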
Reference section
The reference section follows the anchor section and enables references into another file. Together the reference and anchor sections allow very large files to be split into fragments.
The reference section begins with the keyword REFERENCE and ends with the keyword ENDSEC. Each line of the reference section gives a URI for an entity instance defined in an external file. In this example, the external file contains references to the anchors given in the previous example. The file is defining a manufacturing constraint on an assembly [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 28: Implementation methods: XML representations of EXPRESS schemas and data, using XML schemas[END_REF]. The example uses names for the entity identifiers. This is another new feature of Edition 3. Instead of requiring all entity identifiers to be numbers they can be given names to make it easier for casual users to code examples, and for systems to merge data sets from multiple sources. Numbers are used for the entities #124, #125 and #126 because it is traditional but the rest of the content has adopted the convention of giving each instance a name to indicate its function and a qualifier to indicate its type. Thus "chasis_pd" indicates that this instance is the product_definition entity of the chasis.stp file.
REFERENCE;
#front_axle = <http://www.server.com/assembly.stp#front_axle_nauo>;
#rear_axle = <http://www.server.com/assembly.stp#rear_axle_nauo>;
#left_wheel = <http://www.server.com/assembly.stp#left_wheel_nauo>;
#right_wheel = <http://www.server.com/assembly.stp#right_wheel_nauo>;
ENDSEC;
DATA
ZIP Archives and Master Directories
The anchor and reference sections allow a single earlier-edition file to be split into multiple new files but this can result in management problems. The old style led to files that were large and difficult to edit outside of a CAD system, but all of the data was in one file which was easier to manage. ZIP archives allow Part 21 Edition 3 to split the data and continue the easy data management. A ZIP archive is a collection of files that can be e-mailed as a single attachment. The contents of the archive can be any collection including another archive. A ZIP archive is compressed and may reduce the volume by as much as 70%. Many popular file formats such as ".docx" are ZIP files and can be accessed using ZIP tools (sometimes only after changing the file extension to .zip) Edition 3 allows any number of STEP files to be included in an archive. Each file in the directory can be linked to the other files in the ZIP using relative addresses and to other files outside of the ZIP using absolute addressing. Relative addresses to files outside the ZIP are not allowed so that applications can deploy the zipped data at any location in a file system.
References into the ZIP file are allowed but only via a master directory stored in the root. This Master directory describes where all the anchors in the ZIP can be found using the local name of the file. Outside the archive the only visible name is that of the archive itself. If there is a reference to this name then a system is required to open the master directory and look for the requested anchor.
In Figure 1 the file ISO-10303-21.txt is the master directory. It contains the forwarding references to the other files and it is the file that can be referenced from outside of the archive using the name of the archive.
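The sketch below shows how an application might look an anchor up through such a master directory. It assumes the root file ISO-10303-21.txt forwards each anchor to "local_file.stp#anchor" using the same angle-bracket syntax as the ANCHOR section; that encoding is an assumption made for this example, not a statement of the standard.

import zipfile
from urllib.parse import urldefrag

def find_anchor_in_archive(archive_path, anchor_name):
    """Follow the master directory of a STEP ZIP archive to locate an anchor.

    Returns (member file name inside the ZIP, local anchor name), assuming
    master-directory lines of the form <anchor_name> = <member.stp#anchor>;
    """
    with zipfile.ZipFile(archive_path) as zf:
        directory = zf.read('ISO-10303-21.txt').decode('utf-8', errors='replace')
        for line in directory.splitlines():
            if line.strip().startswith('<' + anchor_name + '>'):
                # right-hand side of the forwarding entry
                target = line.split('=', 1)[1].strip().strip(';').strip().strip('<>')
                member, fragment = urldefrag(target)
                return member, fragment
    raise KeyError(anchor_name + ' not found in master directory')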
Schema population with time stamps
The support for data distribution in Part 21 Edition 3 gives rise to a problem for applications that want to search a complete data set. If the data is distributed and normalized then there may be "orphan" files that contain outbound references but no inbound ones. For example, Figure 2 shows how a file may establish a relationship between a workplan and a workpiece by storing URI's to those items but not have a quality that needs to be referenced from anywhere else. The schema population includes all the entity instances in all the data sections of the file.
If there is a reference section, then the schema population also includes the schema populations of all the files referenced by URIs. If the header has a schema_population definition then the schema population also includes the schema population of each file in the set of external_file_locations.
The last inclusion catches the "orphan files". The following code fragment gives an example. In this example the STEP file shown is referencing two other files. There are two other attributes in each reference. An optional time stamp shows when the reference was last checked. An optional digital signature validates the integrity of the referenced file. The time stamp and signature enable better data management in contractual situations. Clearly if the data is distributed there will be opportunities for mistakes and mischief.
SCHEMA_POPULATION(
  ('http://www.server.com/workplan.stp', '2013-05-01T10:00:00', 'digital-signature-1'),
  ('http://www.server.com/workpiece.stp', '2013-05-01T10:00:00', 'digital-signature-2'));
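The sketch below illustrates how an application could assemble the full schema population by following both the reference-section URIs and the schema_population file locations transitively. The helper extract_file_uris is hypothetical, standing in for a real Part 21 parser that returns the file URIs found in a given exchange file.

from urllib.parse import urldefrag

def schema_population_files(start_file, extract_file_uris):
    """Collect every file that contributes to the schema population.

    start_file        : URL or path of the root exchange file
    extract_file_uris : hypothetical helper returning all file URIs found in a
                        file's REFERENCE section and schema_population header
    Returns the set of contributing files, followed breadth-first.
    """
    seen, queue = set(), [start_file]
    while queue:
        current = queue.pop(0)
        if current in seen:
            continue
        seen.add(current)
        for uri in extract_file_uris(current):
            target, _fragment = urldefrag(uri)
            if target and target not in seen:
                queue.append(target)
    return seen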
INTELLIGENT INTERFACES
The following subsections describe how Edition 3 enables crowdsourcing using intelligent interfaces. The first subsection describes how JavaScript has been added to the model. The second subsection describes how some of the data rules have been relaxed for easier programming. The third subsection describes how application specific programming is enabled using data tags. The last subsection summarizes the options for making the programming more modular.
JavaScript
The goal of adding JavaScript to Part 21 Edition 3 is to make managing the new interfaces easier by encapsulating the tasks that can be performed on those interfaces as methods. For example, a JavaScript function can check that a workpiece is valid before it is linked to a workplan. In Edition 3 a file can include a library of JavaScript functions to operate on an object model of the anchors and references, but not necessarily the data sections: in many cases the volume of the data sections would overwhelm a JavaScript interpreter.
The following three-step procedure is used for the conversion:
1. Read the exchange structure and create a P21.Model object with anchor and reference properties.
2. Read the JavaScript program definitions listed in the header section.
3. Execute the JavaScript programs with the "this" variable set to the P21.Model object.
For more details see Annex F of the draft specification at www.steptools.com/library/standard/. The procedure creates one object for each exchange file and gives it the behaviour defined in the JavaScript. The execution environment then uses those objects in its application. For example, in the workplan/workpiece example above, workplan and workpiece are object models for two exchange structures and the application checks for compatibility before linking them.
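A sketch of the host side of this procedure is shown below; P21.parse and the headerScripts property stand for a reader that scans only the anchor and reference sections, and both names are assumptions rather than part of the draft.

function loadModel(uri, readText) {
  const text = readText(uri);              // 1. read the exchange structure
  const model = P21.parse(text);           //    assumed: returns a P21.Model with anchors and references
  for (const src of model.headerScripts) { // 2. program definitions listed in the header section
    new Function(src).call(model);         // 3. execute with "this" set to the P21.Model object
  }
  return model;
}

// Usage: build models for two files, then run the compatibility check before linking.
// const workplan = loadModel('operations.stp', readText);
// const workpiece = loadModel('workpiece.stp', readText);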
Data relaxation
In order to be reusable, a product model needs to be flexible and extensible. The information models defined by the STEP, STEP-NC and IFC standards have been carefully designed over many releases to achieve these qualities. Interface programming is different because an interface can be created as a contingent arrangement of anchors and references for a specific purpose. The information model has not changed, so application programming for translation systems stays the same, but for interface programming the requirements are different. Therefore, two relaxations have been applied to the way data is defined for interfaces in the new edition.
1. The instance identifiers are allowed to be alphanumeric.
2. The values identified can be lists and literals.
Editions 1 and 2 of Part 21 restricted the format so that every identifier had to be an unsigned integer. This helped emphasize that the identifiers would not be consistent across files, and at the time it was thought to make it easier for parsers to construct symbol tables. The symbol-table argument no longer holds: every modern system uses a hash table for its symbols, and these tables are agnostic with respect to the format of the identifier.
Requiring numbers for identifiers has always made hand editing harder than necessary. Therefore, Edition 3 supports alphanumeric names. The following example of unit definition illustrates the advantage. Each unit definition follows a pattern. Unit definitions also show why a more flexible approach to defining literals is desirable. The following is a file that defines some standard constants. The new identifiers can be used in the data section as well as the anchor and reference sections.
Tags for fast data caching
In many cases the JavaScript functions operating on the interfaces need additional data to be fully intelligent. Therefore, the new edition allows additional values to be tagged into the reference and data sections. Each tag has a category and a value. The category describes its purpose and the value is described by a literal. A tag can, for example, record the data checked by the JavaScript function of the previous example: when two pieces of data are being linked, it is important to know that the workpiece is ready for use by manufacturing. The tag data may be initialized by a pre-processor or created by other means.
Another role for tags is as a place to cache links to visualization information. Again this data may be summarized from the STEP, STEP-NC or IFC information model and the tags allow it to be cached at a convenient place for rapid display.
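A sketch of the caching pattern, assuming the object model exposes the tag groups of an instance as a plain map (the draft defines the file syntax for tags but not this API), could look as follows.

// Return the cached display data stored in a tag, computing and caching it on first use.
// computeBoundingBox is passed in because its implementation is outside the scope of this sketch.
function boundingBoxOf(instance, computeBoundingBox) {
  if (!instance.tags.bounding_box) {
    instance.tags.bounding_box = computeBoundingBox(instance);  // expensive traversal, done once
  }
  return instance.tags.bounding_box;                            // cheap on later calls
}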
The last example shows tags being used to document the STEP ARM to AIM mapping. Those who have worked on STEP and STEP-NC know that they have two definitions: a requirements model describing the information requirements; and an interpreted model that maps the requirements into a set of extensible resources [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 1: Overview and fundamental principles[END_REF]. The tags can become a way to represent the mapping between these two models in the data.
REFERENCE;
#1234 {x_axis:(#3091,#3956)}{y_axis: (#2076)} = <#machine_bed>;
#4567 {z_axis:(#9876,#5273)}= <#tool_holder>;
ENDSEC;
A quick summary of the new data options in Edition 3 is that it allows URLs to be placed between angular brackets ("<>") and application-specific data to be placed between curly brackets ("{}").
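As a rough illustration of this summary, the following sketch splits a reference-section line into its identifier, its curly-bracket tag groups and its angular-bracket target; it deliberately ignores quoting and nesting and is not a full Edition 3 parser.

function parseReferenceLine(line) {
  const id = (line.match(/#[A-Za-z0-9_]+/) || [null])[0];         // alphanumeric identifiers are allowed
  const tags = [...line.matchAll(/\{([^}]*)\}/g)].map(m => m[1]); // application specific data in {}
  const uri = (line.match(/<([^>]*)>/) || [null, null])[1];       // URL or anchor in <>
  return { id, tags, uri };
}

// parseReferenceLine("#1234 {x_axis:(#3091,#3956)}{y_axis: (#2076)} = <#machine_bed>;")
// -> { id: "#1234", tags: ["x_axis:(#3091,#3956)", "y_axis: (#2076)"], uri: "#machine_bed" }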
Modularity options
Part 21 Edition 3 has three options for modularizing STEP, STEP-NC and IFC data.
1. Continue using traditional files, but surround those files with interfaces referencing into the data.
2. Create a product data web linked by URLs.
3. Create a ZIP archive of replaceable components.
The traditional approach to STEP implementation creates a massive symbol table using an EXPRESS compiler and then reads the exchange data into objects described by the table. This is expensive both with respect to processing time and software investment, but efficient if all of the data is being translated into a CAD system.
The new edition allows the Part 21 data to be arranged into interfaces for lightweight purposes such as checking tolerances, placing subsystems and running processes. Therefore, alternate implementation paradigms are possible. Three being considered include:
1. A JavaScript programming environment can process just the data in an interface. In this type of implementation the Part 21 files are rapidly scanned to create the objects required for the anchor and reference sections.
2. A web browser environment can be activated by including an "index.html" file in the ZIP archive along with code defining the P21 object model of the interface. This type of implementation will be similar to the previous one but with streaming used to read the Part 21 data.
3. The third type of implementation is an extended Standard Data Access Interface (SDAI). The SDAI is an application programming interface for Edition 1 and 2 data that can be applied to Edition 3 because of upward compatibility.
One option for an SDAI is to merge all the Edition 3 data into one large file for traditional CAD translation processing. Another option is to execute the JavaScript and serve web clients.
APPLICATIONS
PMI Information for Assemblies
The first in-progress application is the management of Product Manufacturing Information (PMI) for assemblies. Figure 3 shows a flatness tolerance on one of the bolts in an assembly. In the data, a usage chain is defined to show which of the six copies of the bolt has the tolerance.
Fig. 3. -Assembly tolerances
The data for the example can be organized in many ways. One is the traditional single file, which will work well for small data sets. For large assemblies the new specification enables a three-layer organization. The lowest layer is the components in the model. The second layer is the assemblies and sub-assemblies. The third layer is the PMI necessary to manufacture the assembly.
In order for this organization to work the product components must expose their product coordinate systems to the assembly modules and their faces to the PMI modules. Similarly the assembly modules must expose their product structure to the PMI modules. The following shows the resulting anchor and reference sections of the PMI module. This code is then referenced in the data sections of the PMI modules.
Data Model Assembly
STEP and STEP-NC are related standards that share definitions. The two models can be exported together by integrated CAD/CAM systems, but if different systems make the objects then they must be linked outside of a CAD system. In STEP-NC a workplan executable is linked to the shape of the workpiece being machined. The two entities can be exported as anchors in the two files and an intelligent interface can link them on demand. The following code shows the interface of a linker file with open references to the two items that must be linked. The linker JavaScript program given earlier sets these references after checking the validity of the workpiece and workplan data sets. It also sets the name of the reference to indicate whether the workpiece represents the state of the part before the operation (as-is) or after the operation (to-be).
REFERENCE;
#exec = $;
#shape = $;
#name = $;
ENDSEC;
DATA;
#10=PRODUCT_DEFINITION_PROCESS(#name,'',#exec,'');
#20=PROCESS_PRODUCT_ASSOCIATION('','',#shape,#10);
ENDSEC;
A number of CAM vendors are implementing export interfaces for STEP-NC [START_REF]Industrial automation systems and integration -Product data representation and exchange -Part 238: Application Protocols: Application interpreted model for computerized numerical controllers[END_REF]. They export the process data which needs to be integrated with workpiece data to achieve interoperability. The workpieces define the cutting tools, fixtures and machines as well as the as-is and to-be removal volumes.
Next Generation Manufacturing
The last application is the control of manufacturing operations. Today manufacturing machines are controlled using Gcodes generated by a CAM system [START_REF] Hardwick | A roadmap for STEP-NCenabled interoperable manufacturing[END_REF]. Each code describes one or more axis movements. A part is machined by executing millions of these codes in the right order with the right setup and the right tooling. Change is difficult, which makes manufacturing inflexible and causes long delays while engineers validate incomplete models. Part 21 Edition 3 can replace these codes with JavaScript running STEP-NC. The broad concept is to divide the STEP-NC program into modules, each describing one of the resources in the program. For example, one module may define a toolpath and another module may define a workingstep. The modules can be put into a ZIP archive so the data volume will be smaller and the data management easier. The JavaScript defined for each module makes it intelligent. For example, for a workingstep, the script can control the tool selection and set tool compensation parameters.
Before a script is started other scripts may be called to make decisions. For instance an operation may need to be repeated because insufficient material was removed, or an operation may be unnecessary because a feature is already in tolerance. Such functionalities can be programmed in today's Gcode languages but only with difficulty.
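A sketch of what a workingstep module script could look like is given below; every method on the control object (measureFeature, selectTool, setCompensation, executeToolpath) is an assumed interface used only to illustrate the idea, since no such control API has been standardized.

function runWorkingstep(ctrl, workingstep) {
  // Skip the operation if the feature is already in tolerance.
  if (ctrl.measureFeature(workingstep.feature).inTolerance) {
    return;
  }
  ctrl.selectTool(workingstep.tool);
  ctrl.setCompensation(workingstep.tool.compensation);
  // Run the toolpath, repeating once if insufficient material was removed.
  for (let pass = 0; pass < 2; pass++) {
    ctrl.executeToolpath(workingstep.toolpath);
    if (ctrl.measureFeature(workingstep.feature).inTolerance) break;
  }
}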
The JavaScript environment is suited to manufacturing because it is event driven. Performance should not be an issue because in practice machine controls operate by running look-ahead programs to predict future movements. Changing the look-ahead to operate on JavaScript instead of Gcode is probably a better use of resources.
For an example of how such a system might operate see Figure 4, which is a screen capture of the following WebGL application: http://www.steptools.com/demos/nc-frames.html?moldy/

The new edition adds URIs and JavaScript to enable the crowdsourcing of massive product models. Other supporting features include a schema population to keep track of all the components, ZIP archives to enable better data management, data relaxation to enable easier interface programming, and data tags to allow application-specific programming.
The new specification is not yet finished. Additional extensions are still being considered. They include allowing the files in a ZIP archive to share a common header section, merging the anchor and reference sections into an interface section and loosening the syntax to allow for lists of URI's.
The current specification can be accessed at: http://www.steptools.com/library/standard/p21e3_dis_preview.html Implementation tools are being developed to enable testing. They include libraries to read and write the data into a standalone JavaScript system called NodeScript, libraries to stream the data into web browsers, and libraries to run the scripts in an SDAI. The NodeScript implementation is available as open source at the following location. http://www.steptools.com/library/standard/ The next steps (pun intended) are to:
1. Complete the prototype implementations so that they can be used to verify the specification.
2. Submit the specification to ISO for review as a Draft International Standard (DIS).
3. Respond to the international review with additional enhancements for the additional requirements.
4. Begin the development of common object models for design and manufacturing applications. For example, object models for the execution of machine processes, and object models for the definition of assembly tolerances.
5. Create applications to demonstrate the value of the specification. The applications will include attention grabbing ones that use kinematics to show the operation of products and machines, value added ones that use the JavaScript to link data sets and create massive product models, and manufacturing ones to verify tolerances while processes are running.
ISO-10303-21;
HEADER;
/* Exchange file generated using ST-DEVELOPER v1.5 */
FILE_DESCRIPTION(
/* description */ (''),
/* implementation_level */ '2; 1');
FILE_NAME(
/* name */ 'bracket1',
/* time_stamp */ '1998-03-10T10:47:06-06:00',
/* author */ (''),
/* organization */ (''),
/* preprocessor_version */ 'ST-DEVELOPER v1.5',
/* originating_system */ 'EDS -UNIGRAPHICS 13.0',
/* authorisation */ '');
FILE_SCHEMA (('CONFIG_CONTROL_DESIGN')); /* AP203 */
ENDSEC;
DATA;
#10 = ORIENTED_EDGE('',*,*,#44820,.T.);
#20 = EDGE_LOOP('',(#10));
#30 = FACE_BOUND('',#20,.T.);
#40 = ORIENTED_EDGE('',*,*,#44880,.F.);
#50 = EDGE_LOOP('',(#40));
#60 = FACE_BOUND('',#50,.T.);
#70 = CARTESIAN_POINT('',(-1.31249999999997,14.594,7.584));
#80 = DIRECTION('',(1.,0.,3.51436002694883E-15));
…
ENDSEC;
END-ISO-10303-21;
REFERENCE;
/* assembly definitions for this constraint */
#outer_seal_nauo = <assembly.stp#outer_seal>;
#outer_bearing_nauo = <assembly.stp#outer_bearing>;
#right_wheel_nauo = <assembly.stp#right_wheel>;
#rear_axle_nauo = <assembly.stp#rear_axle>;
/* Product definitions */
#seal_pd = <assembly.stp#seal_pd>;
#bearing_pd = <assembly.stp#bearing_pd>;
#wheel_pd = <assembly.stp#wheel_pd>;
#axle_pd = <assembly.stp#axle_pd>;
#chasis_pd = <assembly.stp#chasis_pd>;
ENDSEC;
Fig. 1. -Zip Archive
Fig. 2. -Link files for massive databases
<http://www.iso10303.org/part41/si_base_units.stp#METRE>;
#kilogram = <http://www.iso10303.org/part41/si_base_units.stp#KILOGRAM>;
#second = <http://www.iso10303.org/part41/si_base_units.stp#SECOND>;
ENDSEC;
DATA;
/* Content extracted from part 41:2013 */
#5_newton=DERIVED_UNIT_ELEMENT(#meter,1.0);
#15_newton=DERIVED_UNIT_ELEMENT(#kilogram,1.0);
#25_newton=DERIVED_UNIT_ELEMENT(#second,-2.0);
#newton=SI_FORCE_UNIT((#5_newton,#15_newton,#25_newton),*,$,.NEWTON.);
#5_pascal=DERIVED_UNIT_ELEMENT(#meter,-2.0);
#25_pascal=DERIVED_UNIT_ELEMENT(#newton,1.0);
#pascal=SI_PRESSURE_UNIT((#5_pascal,#25_pascal),*,$,.PASCAL.);
<bolt.stp#bolt>;
#nut = <nut.stp#nut>;
#rod = <rod.stp#rod>;
#plate = <plate.stp#plate>;
#l-bracket = <l-bracket.stp#l-bracket>;
#bolt_shape = <bolt.stp#bolt_shape>;
#nut_shape = <nut.stp#nut_shape>;
#rod_shape = <rod.stp#rod_shape>;
#plate_shape = <plate.stp#plate_shape>;
#l-bracket_shape = <l-bracket.stp#l-bracket_shape>;
#bolt_wcs = <bolt.stp#bolt_wcs>;
#nut_wcs = <nut.stp#nut_wcs>;
#rod_wcs = <rod.stp#rod_wcs>;
#plate_wcs = <plate.stp#plate_wcs>;
#l-bracket_wcs = <l-bracket.stp#l-bracket_wcs>;
#bolt_top_face = <bolt.stp#bolt_top_face>;
ENDSEC;
Fig. 4. -WebGL Machining
"1003715"
] | [
"33873",
"488114",
"488114",
"366312",
"469866"
] |
01485824 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485824/file/978-3-642-41329-2_2_Chapter.pdf | Fumihiko Kimura
email: fumihiko.kimura@hosei.ac.jp
IT Support for Product and Process Development in Japan and Future Perspective
Keywords: Product development, Process development, IT support, CAD, CAM
Due to the globalization of market and manufacturing activity, manufacturing industry in industrially advanced countries is facing difficult problems, such as severe competition with the low-cost production in developing countries and radical changes of customer requirements in industrially mature countries, etc. For coping with these problems, it is important to identify hidden or potential customer expectation, and to develop systematized design and manufacturing technology to augment human expertise for innovative product development. It is known that the strength of Japanese manufacturing industry comes from the intimate integration of sophisticated human expertise and highly efficient production capability. For keeping the competitiveness of the Japanese industry, it is strongly required to systematize the product and process development throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities and for automating well-understood engineering processes. In this paper, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described, and important technological issues and their IT support solutions are discussed. Finally, the future perspective for advanced IT support is investigated.
INTRODUCTION
Manufacturing is a basic discipline for sustaining the economy of advanced industrialized countries, such as Japan, the USA and Europe. However, due to the recent trends of globalization in manufacturing activities, difficult issues are arising, such as severe competition with the very low-cost production of developing countries, risk management for the global distribution of product development and production activities, environmental problems for achieving sustainable manufacturing, etc. In this paper, the recent advances of product and process development technology, current issues and future perspectives are reviewed from the standpoint of IT support, and particularly Japanese activities are discussed for keeping the competitiveness in the future.
It is well known that the strength of Japanese manufacturing industry comes from the intimate integration of sophisticated human expertise and highly efficient production technology. For keeping the competitiveness of the Japanese industry, it is essential to maintain the quality and volume of the expert human work force, but it is predicted that the population of Japanese working-age people will decrease to about half by the year 2050. Therefore it is strongly required to systematize the product and process development technology throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities and for automating well-understood engineering processes. This new IT-supported way of product and process development will rationalize the current human-dependent processes, and achieve efficient global collaboration among industrially developed countries and developing countries. The underlying research and development activities are discussed in this paper.
In the next section, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described in section 3. Then important technological issues and their IT support solutions are discussed in sections 4 to 7. Finally future perspective for advanced IT support is shown in section 8.
CURRENT ISSUES IN MANUFACTURING
Technological, economical, and social situations for manufacturing are changing rapidly in recent years. There are many issues manufacturing industry is facing today, especially in industrially advanced countries. Major issues are reviewed, and specific problems with Japanese industry are discussed.
Energy and resource constraints
It is needless to say that the industrial technology of advanced countries cannot be fully diffused to developing countries because of its excessive consumption of energy and resources. For example, today's standard automobiles cannot be directly spread, and small, light-weight, energy-efficient vehicles should be developed for mass use in the developing countries. Technology innovation is required for such new developments. Japan and other advanced countries import large amounts of energy sources and other resources. There are always risks of disruption of those supplies due to natural disasters and political problems.
Competition in global market
A big consumer market is emerging in developing countries, and completely new categories of very cheap products are required for adapting to the global market needs. It is fairly hard for Japanese industry to shift its technology from high-end products to commodity products. This change requires not only a new product strategy but also new technology.
Radical changes of world market
In industrially advanced countries, most products have already spread among consumers, and, generally speaking, consumer products do not sell well. A big problem is how to inspire hidden or potential demand for new products.
Service design for product production
Fundamentally, it is very important for manufacturing industry to capture the potential social expectation for the future, and to propose a vision and scenario for approaching it. For example, it is an urgent issue to realize a society of higher resource efficiency. Maintenance of social infrastructure, such as roads and bridges, is a good example, and new technology is urgently desired for efficient life cycle management.
Population problem
In Japan, and similarly in other advanced countries, the labour force in manufacturing industry will decrease rapidly over the coming 50 years. For sustaining the industrial level, it is very important to amplify human intellectual ability by IT support, and to automate production processes as much as possible.
FUTURE DIRECTION FOR MANUFACTURING TECHNOLOGY INNOVATION
For analyzing the issues explained in the previous section, a role of design and manufacturing engineering is investigated, and the issues are classified according to the customer expectation and technology systematization.
Figure 1 shows the key role of design and manufacturing engineering. For enhancing the quality of life (QOL), social expectation for technology development and new products is expressed in the form of explicit demands, social scenarios or a general social vision, through market mechanisms and other social/political mechanisms. Depending on the level of industrialization of the society, this expectation is either expressed very clearly or not expressed explicitly. Driven by such expectation, or independently of it, design and manufacturing engineering tries to develop and propose various attractive technology options to society in the form of new systems and products, based on contributions from basic science and engineering.
Traditionally what customers want to get was clear, and known to customers themselves and to engineers. In such cases, according to the social expectation, appropriate technology options can be identified, and the developed systems and products are accepted by customers for better QOL. This case corresponds to the Class I and Class II problems in Figure 2, and is further explained below.
Today, especially in industrially advanced societies, customer demand or social expectation is not explicitly expressed, but only potentially noticed. Then, even though advanced technology and products are available, they may not be accepted by society, and the customers are frustrated. There seems to be a big discrepancy between customer wish and producer awareness. This case corresponds to the Class III problem in Figure 2, and is further explained below.

Class I Problem: Customer expectation is clearly expressed, and the surrounding conditions for manufacturing activity are well known to producers. Therefore the problems are well understood, and systematized technology can be effectively applied.
Class II Problem: Customer expectation is clearly expressed, but surrounding conditions are not well captured or not known. In this case, product and production technology must be adapted to the changing and unknown situations. Integration of human expertise is required for problem solving.
Class III Problem: Customer expectation is not explicitly expressed or is not known. In this case, the problems are not clearly described, and tight coupling exists between identification of customer expectation and corresponding technology for problem solving. A co-creative approach between customers and producers is mandatory.

The current situation of Japanese manufacturing industry is explained by use of the above classification. Figure 3 shows the manufacturing problem classification with two coordinates: customer expectation and adaptability to surrounding conditions. Here adaptability to surrounding conditions is considered to depend on technology systematization, and it can be characterized by technology innovativeness and maturity.
Fig. 3. -Current trend of product development
Under a clear target setting, Japanese industry has traditionally been very strong at adapting to difficult technology problems, even with varying conditions and unknown technology issues, by applying front-edge technology and very sophisticated human involvement, and at producing very high-quality, attractive products. This is a Class II problem, and the Japanese approach is called "Suriawase", which means sophisticated integration of human expertise with problem solving.
If the problem is well understood, and the technology to solve it is mature, the whole product development and production process can be systematized, and possibly automated. This is a Class I problem. Products which belong to this class tend to be mass-production products, and their competitiveness mainly depends on a low price. The current difficulty of Japanese manufacturing industry is the technological and organizational inflexibility to adapt to this problem. Simply applying the sophisticated Class II problem-solving method to Class I problems results in very expensive products with excessive quality.
If we look at the world market situation, two expanding or emerging market areas are recognized, as shown in Figure 3. One area is a mass production commodity product area, where products belong to the Class I problem, and the price is the most critical competitive factor. Another area is an innovative product area, where products belong to the Class III problem. The most important factor for Class III products is to identify customers' potential or hidden expectation, and to incorporate appropriate advanced knowledge from basic science and engineering toward product innovation.
Based on the above discussion, important future directions for manufacturing technology development are considered.
Identification of customer expectation
As product technology is advancing so rapidly, customers are normally unable to capture the vision of future society, and tend to be frustrated with the products and systems proposed by the producers. It is important to develop a methodology to search for the hidden and potential social expectation, and to identify the various requirements of customers for daily-life products and social infrastructure. It is effective to utilize IT methods to promote observation, prediction and information sharing for a mass-population society. This issue is discussed in Section 4.
Systematization of technology
The manufacturing problems become complex, such as large system problems, complexity problems due to multi-disciplinary engineering, and extremely difficult requirements toward product safety and energy efficiency, etc. Traditional human-dependent approaches alone cannot cope with the problems effectively, and it is mandatory to introduce advanced IT support, and to systematize and integrate various engineering disciplines into design and engineering methods. These issues are discussed in Sections 5, 6 and 7.
POTENTIAL CUSTOMER EXPECTATION
It is often argued that recent innovative products, such as a smart phone or a hybrid vehicle, could not be commercialized by conventional market research activity, because the impacts of those innovative products are difficult for normal consumers to imagine due to their technological difficulty. Many customers have vague expectations for new products, but they cannot express their wish correctly, therefore their expectation cannot be satisfied. Manufacturers can offer new products based on their revolutionary technology, but it is not easy to match their design intention with customers' real wish. It is very important to develop a methodology to capture hidden or potential customer expectation. In recent years, Japanese industry and the research community have heavily discussed this issue, and proposed various practical methods for observing the potential social expectation [START_REF]Discovery of Social Wish through Panoramic Observation[END_REF]. It is still premature to develop systematic methods, but several useful existing methods are discussed:
─ systematic survey of existing literature,
─ multi-disciplinary observation,
─ observation of social dilemma and trade-off,
─ collection of people's intuitive concern,
─ deep analysis of already known social concern,
─ re-examination of past experiences.
Modelling, simulation, and social experiments are useful tools for prediction and information sharing. IT support is very effective for data mining and bibliometrics. Combination of information network and sensor capability has a big potential for extracting unconsciously hidden social wish. Promising approaches are advocated as a Cyber-Physical System [START_REF] Lee | Computing Foundations and Practice for Cyber-Physical Systems: A Preliminary Report[END_REF]. A huge number of sensors are distributed into the society, and various kinds of information are collected and analysed. There are many interesting trials in Japan, such as energy consumption trends of supermarkets, combination of contents and mobility information, zero-emission agriculture, healthcare information, etc. Important aspects are collection of demand-side information and combination of different kinds of industrial activities. It is expected that, by capturing latent social or customer wish, social infrastructure and individual QOL are better servicified by the manufacturing industry.
LARGE SYSTEM AND COMPLEXITY PROBLEMS
By the diversity and vagueness of customer requirements, industrial products and systems tend to become large in scale and complicated. Large system problems typically occur for designing social infrastructure or complicated products like a space vehicle, etc. Complexity problems are often related with multi-disciplinary engineering, such as designing mechatronics products.
For coping with large system problems, various system engineering methods are already well developed, but these methods are not fully exploited for product design and manufacturing practices. Those methods include the following system synthesis steps [START_REF]Towards Solving Important Social Issues by System-Building Through System Science and Technology[END_REF]:
─ common understanding via system modelling,
─ subsystem decomposition and structuring,
─ quantitative analysis of subsystem behaviour,
─ scenario setting and system validation.
There are important viewpoints for system design, such as harmonization of local optimization and global optimization, multi-scale consideration, structural approach to sensitivity analysis, etc. Standardization and modular concept are essential for effective system decomposition. The V-Model approach in system engineering is valid, but decomposition, verification and validation processes become very complicated with multi-disciplinary engineering activities in product design and manufacturing.
Various kinds of model-based approaches are proposed, and standardization of model description schemes and languages is progressing. For coping with large-scale and complexity problems, it is important to take into account the following aspects:
─ modelling and federation at various abstraction levels and granularity,
─ modular and platform approach,
─ multi-disciplinary modelling.
For system decomposition and modularization, it is effective to utilize the concept of function modelling as a basis, instead of physical building blocks. Product development processes are modelled, starting from requirement modelling, via function modelling and structure modelling, to product modelling. This modelling should be performed in multi-disciplinary domains, and appropriately federated. Many research works are being performed, but industrial implementations are not yet fully realized.
UPSTREAM DESIGN PROBLEMS
It is argued that inappropriate product functionality and product defects are often caused at the upstream of product development processes. It is very expensive and time-consuming to remedy such problems at the later stages of product development, because many aspects of products have been already fixed. It is very effective to spend more time and effort at the stages of product requirement analysis, concept design and function design.
In Japan, it is currently a big problem that products tend to have excessive functions, and become the so-called "Galapagos" products, as shown in Figure 4. "Galapagos" products are products designed to incorporate as much available leading technology as possible for product differentiation, which results in very expensive products. Now the big market is expanding into developing countries. As "Galapagos" products do not sell well in such markets, it is required to eliminate excessive functions and to make the products cheaper. However, it is difficult to compete with cheap products designed from scratch especially for such markets.
Fig. 4. -Identification of essential requirement
This problem arises from the ambiguity of product requirement identification. The following approach is important for coping with this problem:
─ identification of essential product requirements,
─ realization of essential functions with science-based methods,
─ rationalization and simplification of traditional functions and processes,
─ minimization of required resources.
The above approach cannot be implemented by conventional technology only, but requires dedicated advanced technology specifically tailored for the target products, such as extremely light-weighted materials, highly energy-efficient processes, etc. This is a way that Japanese industry can take for competitiveness.
A systematic approach to upstream design is a very active research issue in Japan. An interesting approach is advocated under the name 1DCAE [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]. There are many IT tools available today for doing precise engineering simulation. However, it is very cumbersome to use those tools for upstream conceptual design activity. Also, those tools are inconvenient for thinking about and understanding product functional behaviour in an intuitive way. 1DCAE tries to establish a methodology to systematically utilize any available methods and tools to enhance the true engineering understanding of the product characteristics to be designed, and to support the conceptualization of the products. 1DCAE aims to exhaustively represent and analyse product functionality, performance and possible risks at an early stage of design, and to provide a methodology for visualizing the design process and promoting the designer's awareness of possible design problems.
Figure 5 shows a 1DCAE approach for mechatronics product design. For conceptual design, various existing IT tools are used based on mathematical analysis and physics, such as dynamics, electronics and control engineering. Through good understanding of the product concept and functional behaviour, detailed product models are developed.
Fig. 5. -1DCAE approach for mechatronics product design [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]
Figure 6 represents a 1DCAE approach for large-scale system design, such as a spacecraft. In this case, system optimization and risk mitigation are the most important design targets. The design process is optimized by using the DSM (Design Structure Matrix) method, and every necessary technological aspect of the products is modelled at appropriate granularity for simulation.
IMPORTANCE OF BASIC TECHNOLOGY
In addition to the various system technologies, the basic individual technology is also important. In recent years, remarkable progress of product and process development technology has been achieved by use of high speed computing capability. As indicated in Figure 1, there are many interesting and useful research results in basic science and engineering which could be effectively applied for practical design and manufacturing. However, many of those are not yet utilized. Science-based approach enables generally applicable reliable technology. Quite often the so-called expert engineering know-how or "Suriawase" technology can be rationalized by the science-based developments. By these developments, traditional manufacturing practices relying on veteran expert engineers and workers can be replaced by comprehensive automated systems, as discussed in the next section. By sophisticated engineering simulation with supercomputers, extreme technologies have been developed, such as light-weighted high-strength materials, low-friction surfaces, nano-machining, low-energy consumption processes, etc. Advanced modelling technology is developed, which can represent volumetric information including graded material properties and various kinds of defects in the materials. Powerful measurement methods, such as neutron imaging, are being available to visualize internal structure of components and assemblies. By such precise modelling, accuracy of computer simulation is very much enhanced, and delicate engineering phenomena can be captured, which is difficult for physical experiments.
Ergonomic modelling and robotics technology have evolved, and the behaviour of human-robot interaction can be simulated precisely by computer. This is a basis for designing comprehensively automated production systems, as discussed in the next section.

There are several critical issues for realizing future-oriented IT support systems, as shown in Figure 7. One of the important problems is to integrate well-developed practical design methods into IT support systems. There are many such methods: Quality Function Deployment (QFD), Functional Modelling, First Order Analysis (FOA), Design for X (DfX), Design for Six Sigma (DFSS), Design Structure Matrix (DSM), Optimization Design, Design Review, Failure Mode and Effect Analysis (FMEA), Fault Tree Analysis (FTA), Life Cycle Assessment (LCA), etc. For implementing those methods in digital support systems, it is necessary to represent pertinent engineering information, such as qualitative/quantitative product behaviour, functional structure, tolerances, errors and disturbances, etc. It is still premature to install such engineering concepts into practical IT support systems. Figure 9 shows an example of product model representation which can accommodate various kinds of disturbances arising during production and product usage, and can support the computerization of practical reliability design methods. Further theoretical work and prototype implementation are desired for practical use.
Fig. 1. -Importance of design and manufacturing engineering
Fig. 2. -Classification of design and manufacturing problems [START_REF] Ueda | Value Creation and Decision-Making in Sustainable Society[END_REF]
Fig. 6. -1DCAE approach for large system design [START_REF] Ohtomi | The Challenges of CAE in Japan; Now and Then, Keynote Speech[END_REF]
Fig. 8. -Digital Pre-Validation of Production Lines [6]
Fig. 9. -Model based engineering with disturbances [7]
FUTURE PERSPECTIVE FOR ADVANCED IT SUPPORT
Various kinds of CAD/CAM systems are effectively utilized in industry today, and they have already become indispensable tools for daily product and process development work. However, their functionality is not satisfactory with respect to the future requirements for IT support discussed in the previous sections. Two important aspects of advanced IT support for product and process development are identified. One is comprehensive support of intelligent human engineers for creative product design, and the other is systematic rationalization and automation of well-developed engineering processes.
The possible configuration of advanced IT support for product and process development is shown in Figure 7. "Virtual Product Creation" deals with intelligent human support for product design, and "Error-Free Manufacturing Preparation" performs comprehensive support and automation of well developed engineering processes. A core part of the system is "Integrated Life Cycle Modelling", which represents all the necessary product and process information for intelligent support and automation. Technologies discussed in Sections 5 to 7 are somehow integrated in these system modules.
Fig. 7. -IT support for product and process development
In Japanese industry, some of those system functionalities are implemented individually as in-house applications, and some are realized as commercially available IT support systems. Figure 8 shows an example of digital pre-validation of production lines for electronics components. Recently, Japanese companies operate such factories in foreign countries. By using digital pre-validation, most of the required engineering work can be done in Japan before the actual installation of the factory equipment in foreign countries. The line design workload of human experts can be radically reduced by this support system. This system incorporates much sophisticated modelling and evaluation engineering know-how, and exhibits characteristics that differentiate it from conventional factory simulation systems.
SUMMARY
With the globalization of market and manufacturing activity, manufacturing industry in industrially advanced countries is facing difficult problems, such as severe competition with the low-cost production in developing countries and radical changes of customer requirements in industrially mature countries, etc. For coping with these problems, it is important to identify hidden or potential customer expectation, and to develop systematized design and manufacturing technology to augment human expertise for innovative product development. As Japan expects a radical decrease of its population over the coming 50 years, it is very important to systematize the product and process development technology throughout the total product life cycle, and to introduce IT methods and tools for supporting creative and intelligent human activities, and for automating well-understood engineering processes. In this paper, current issues in manufacturing industry are generally reviewed. Future directions of manufacturing industry are described, and important technological issues and their IT support solutions are discussed from the viewpoints of potential customer expectation identification, large system and complexity problems, upstream design problems and important basic technology. Finally, the future perspective for advanced IT support is investigated.
"1003717"
] | [
"375074"
] |
01485826 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485826/file/978-3-642-41329-2_31_Chapter.pdf | M Borzykh
U Damerow
C Henke
A Trächtler
W Homberg
Modell-Based Approach for Self-Correcting Strategy Design for Manufacturing of Small Metal Parts
Keywords: Metal parts, punch-bending process, control strategies, model-based design, manufacturing engineering
Compliance with increasing requirements on the final product often constitutes a challenge in the manufacturing of metal parts. A common problem is the precise reproduction of the geometrical form. Reasons for form deviations can be, e.g., varying properties of the semi-finished product as well as wear of the punch-bending machine or of the punch-bending tool itself. Usually the process parameters are adjusted manually when a new production scenario is introduced or after the deviation between the actual form of the produced pieces and the designed form becomes apparent. The choice of new process parameters is normally based on the experience of the machine operators. This leads to a time-consuming and expensive procedure right in the early stages of production scenarios as well as during the established production process. Furthermore, the trend towards miniaturization of part sizes, along with narrowing tolerances and increasing material strengths, drastically pushes up the requirements on the production process. Aiming at a reduction of the scrap rate and of the setup time of production scenarios, a model-based approach is chosen to design a self-correcting control strategy. The strategy is designed by modeling the bending process. In the first step, the bending process is analyzed on the model by varying the process variables that significantly influence the process. This is done in corresponding simulations. After that, the correlations between the significant variables and the geometrical deviation were defined and different self-correcting control strategies were designed and tested. In order to identify and validate the simulation and to test the quality of the self-correcting control strategies, a special experimental tool was built up. The experimental tool is equipped with an additional measurement device and can be operated on a universal testing machine. Finally, the self-correcting control strategies were tested under real production conditions on the original tool in order to address further influences of the punch-bending machine on the manufacturing process.
INTRODUCTION
The increasing international competition on the one hand and the trend toward miniaturization of components on the other hand represent challenges for manufacturers of electrical connection technology. To meet these challenges, new production technologies with smart tools have to be developed.
Complex metal parts, e.g. plug contacts used in electrical connection technology, are currently produced on cam-disc-based punch-bending machines. These machines work purely mechanically and use the same adjustments for all production steps. Due to the ongoing trend of reduction in size of the produced parts with simultaneously decreasing tolerances and the use of high-strength materials, geometrical deviations of the final product appear increasingly. The use of punch-bending machines with NC-controlled axes allows a more flexible setup in comparison to cam-disc-based machines.
Figure 1 presents the active structure of the conventional bending process using a punch-bending machine with two NC-controlled axes. The material flow runs from the feed/punch through the bending and correction punch down to the chute. The advantage here is that the operator selects a product to be manufactured and the movements of each axis are automatically generated. In this case, two punches are used for the production: the bending punch and the correction punch. The operator thereby receives the status and information about the machine, but not about the manufacturing process. Today, when undesirable geometrical deviations appear, new process parameters have to be set by the operator based on his or her personal experience. These targeted interventions are only possible when the punch-bending machine is stopped. Hence, this procedure is very time consuming, especially when it has to be performed more than once. Besides that, frequent leaving of the tolerances leads to a high scrap rate. The failure to reproduce the form of the element within the allowable tolerances is caused by varying shape or strength of the semi-finished material (flat wire) as well as by the thermal and dynamical behavior and wear phenomena of the punch-bending machine itself or of the punch-bending tool.
OBJECTIVE
The aim of a project at the Fraunhofer Institute for Production Technology (IPT) in cooperation with the University of Paderborn is to develop a punch-bending machine that is able to react adaptively to changing properties of the process as well as to variability of the flat wire properties. This aim is pursued by implementing a self-correcting control strategy. Figure 2 shows the enhancement of a controlled process keeping the nominal dimension within the tolerances compared with the current non-controlled situation.
Fig. 2. -Non-controlled and controlled processes
The short-circuit bridge (Fig. 3) was employed as the basic element for the development of the control strategy. The geometrical shape is created in the first two bending steps and with the last bending step the opening dimension is adjusted. In order to keep the opening dimension of the short circuit bridge within the tolerances, it is necessary to first detect when it leaves the allowable interval and then to take appropriate corrective action with a punch in the next step. The development of a self-correcting control strategy requires all components of the process to be taken into account. Therefore, the machine behavior has to be analyzed as well as the behavior of the tool, the flat wire and the workpiece shape. Additional measurement devices have to be developed in order to measure process variables such as the opening dimension of the short circuit bridge and the punch force. In a self-correcting control strategy, the measured process variables are used to calculate the corrected punch movement by an algorithm in a closed-loop mode. Furthermore, the position accuracy of the punch-bending machine axes as well as of the punches of the tool is analyzed by means of displacement transducers.
The desired approach is similar to the VDI guidelines for the design methodology for mechatronic systems [VDI-Guideline 2206 ( 2004)]. The objective of this guideline is to provide methodological support for the cross-domain development of mechatronic systems. In our case, these domains are bending process, modeling and control engineering.
ANALYZING THE INITIAL PROCESS
To gain a basic understanding of the current process flow, the process design, the tool design, the behavior of the punch-bending machine as well as the material used for the flat wire have to be analyzed. The punch-bending tool is used to produce the short circuit bridge with three bending steps. The punches used for the single bending steps are driven by the NC-axes of the punch-bending machine. It could be observed that the geometrical deviations of the workpiece occur within a short time; therefore wear phenomena are unlikely to be responsible for problems with the shape of the final product, and their influence can be neglected. Geometrical deviations could also result from the positioning accuracy of the NC-axes or from varying properties of the semi-finished material. The positioning accuracy as given by the machine manufacturer is within a tolerance of 0.02 mm, which was confirmed by additional measurements with a laser interferometer. This accuracy is sufficient for the bending process. Finally, the deviations of the part's geometry are most probably caused by changes of the flat band properties. To investigate the properties of the flat band, the model-based approach with subsequent identification and validation on an experimental tool was chosen.
MODEL-BASED ANALYSIS OF BENDING PROCESS
TEST ON AN EXPERIMENTAL TOOL
In order to investigate the properties of the real flat wire, an experimental tool representing the significant bending operations of the production was built up. The tool can be operated on a universal testing machine, allowing measurement of the punch movement and force during the whole bending process. The experimental tool is used to investigate the impact on the geometrical dimensions when the thickness and width of the flat wire change. A reduction of the thickness t of the flat wire at a constant width w showed the punch force to decrease clearly (Figure 6a). But when the thickness t is kept constant and the width w of the flat wire is reduced, there is a significantly smaller decrease of the punch force (Figure 6b). This behavior could also be observed when measuring the punch force during the manufacturing of the short-circuit bridge in the production tool. There the punch is always moved to a fixed end position, so by means of the punch force the changing thickness of the flat wire can be detected indirectly. The thickness of the flat wire varies by +/-0.015 mm but remains within the admissible production tolerance set by the manufacturer. Furthermore, it could be observed that the change of thickness affects the opening dimension significantly.
MEASUREMENT DEVICE
The opening dimension is a decisive parameter for the functioning of the short circuit bridge and has to be checked in quality assurance procedures. In order to check and to adjust the opening dimension in a defined way, it has to be measured at runtime during the manufacturing process by means of contact or contactless measurement methods.
Because the short circuit bridge is formed within the tool and access to it is restricted, a contactless optical measurement device has proven to be the most appropriate. For keeping the opening dimension of the short-circuit bridge within the tolerance range of 1.2 mm, a measurement accuracy of about 0.02 mm is indispensable. The measurement device has to be fast enough to detect the opening dimension of each workpiece at a production speed of 60 parts per minute. Consequently, an optical measurement device has been found to be the most appropriate one.
For testing the function of the measurement method, a self-developed setup was chosen. A schematic setup of the measurement device is shown below (Fig. 7). A shadow of the short circuit bridge at the level of the opening dimension is cast by a flat LED backlight to avoid a perspective error [START_REF] Hentschel | Feinwerktechnik, Mikrotechnik[END_REF]. The shadow is received through an objective and produces dark areas on a CCD linear image sensor which detects the transition between light and dark. Knowing the size of the pixels and their position in the line, it is possible to calculate the opening dimension. The information of the CCD sensor is processed via a real-time IO-system manufactured by dSpace GmbH and transferred to MatLab/Simulink, where the opening dimension is calculated. For first investigations this self-developed setup was chosen to prove the functioning of the measurement method and to keep the costs low. The measurement result is influenced by the relative position of the measured object to the CCD sensor on the one hand and by vibration, shock and contamination in the punch-bending machine on the other. First, the width of the measurement object was changed in the range in which the current opening dimension varies. A very good linear relation between the width B of the measurement object and the number of dark pixels can be observed. The measurement accuracy per pixel is about 0.02 mm including measurement tolerances and is accurate enough to recognize a departure from the tolerance early. By varying the distance A at a constant width B the measurement becomes inaccurate. But observation of the real process has shown that possible movement in direction A is negligible because the short circuit bridge is fixed in the tool during the bending operations.
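A sketch of the opening-dimension calculation from the sensor line is given below, using the linear relation reported above; the calibration constants and the threshold are placeholders that would come from reference measurements, not values from the actual setup.

// Convert the dark-pixel count of the CCD line into an opening dimension in mm.
const MM_PER_PIXEL = 0.02;   // approximate resolution reported above (placeholder)
const OFFSET_MM = 0.0;       // placeholder calibration offset

function openingDimension(pixelLine, threshold) {
  const darkPixels = pixelLine.filter(v => v < threshold).length;  // pixels inside the shadow
  return OFFSET_MM + MM_PER_PIXEL * darkPixels;                    // linear calibration
}

// Example: openingDimension(sensorValues, 128) for an 8-bit grey-value line.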
In order to investigate vibrations or shocks in the process, an acceleration sensor was attached to the optical measurement device. When the punch-bending machine is running at 60 RPM, accelerations of about 0.2 m/s² could be detected, which will not affect the measurement.
Further investigations have shown that the change of thickness of the flat wire impacts the opening dimension of the short-circuit bridge. The thickness of the flat wire can be estimated indirectly by measuring the punch force in the production tool. This method showed reliable results and will keep costs low if an already existing force sensor is used.
SELF-CORRECTING STRATEGY
To build up a self-correcting strategy, it is necessary to detect the opening dimension of each workpiece, especially when it begins to drift from the desired value towards one of the tolerance limits. In the next step, the punch movement has to be adapted by a defined value to correct the opening dimension. Because there is only very little time between the measurement and the correcting step, a closed-loop control for the trend correction is used. Therefore, the information on the current opening dimension is used for correcting the opening dimension of the next short circuit bridge. This is possible because the changes of the opening dimension are small enough.
After that, the punch force of the first bending step is used to determine the influence of the flat wire thickness. The information given by the punch force applied to a part can be used for the same part because there is enough time between the measurement and the correcting bending step. So for the self-correcting strategy the opening dimension from one short circuit bridge before (yi-1) and the maximum punch force of the first bending step from the previous and current short circuit bridge (Fi-1 and Fi) are used together with additional constant terms (k1, k2).
ui = k1 (ydesired - yi-1) + k2 (Fi - Fi-1)    (1)
where i is the part number.
The coefficient k1 is calculated from the relationship between the plastic change of the opening dimension and the position of the punch actuator using the bending model. The term (Fi - Fi-1) represents a discrete differentiator of the maximum punch force of the first bending step, and the coefficient k2 is calculated from the relationship between the change of the wire thickness and the maximum punch force, likewise using the bending model.
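A minimal sketch of the correction law of equation (1) is given below, assuming the reconstructed form ui = k1 (ydesired - yi-1) + k2 (Fi - Fi-1). The gain values and the example measurements are placeholders, since the real k1 and k2 are derived from the bending model.

```python
def correction_value(y_desired, y_prev, f_curr, f_prev, k1, k2):
    """Trend correction of the punch position: deviation of the previous
    opening dimension plus a discrete differentiator of the maximum punch
    force of the first bending step."""
    return k1 * (y_desired - y_prev) + k2 * (f_curr - f_prev)


# Illustrative correction over a small batch of parts (all numbers assumed).
y_desired = 1.20            # desired opening dimension in mm
k1, k2 = 0.8, 0.001         # placeholder gains taken from a bending model
y_prev, f_prev = 1.20, 950.0
for y_measured, f_measured in [(1.21, 952.0), (1.23, 955.0), (1.22, 953.0)]:
    u = correction_value(y_desired, y_prev, f_measured, f_prev, k1, k2)
    print(f"adjust punch by {u:+.4f} mm")
    y_prev, f_prev = y_measured, f_measured
```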
Figure 9 illustrates the extended active structure of the new self-correcting bending tool on the punch-bending machine with two NC-controlled axes. The conventional bending tool is extended by two components. The first new component is the integrated measuring equipment, i.e. the camera system and the force sensor. The second new component is located in the information processing: the information about the current article is collected and processed, it is determined whether the opening dimension is within the tolerance or not, and the new adjustment path for the correction punch is calculated. Thus, the on-line process regulation is realized. Additionally, the operator must set the tolerance and the desired value for the opening dimension. A first verification of the self-correcting strategy was carried out on the real short-circuit bridge by using the experimental tool. These tests showed very good results with a stable performance of the closed-loop control. Nevertheless, the experimental tool could not be used to test the self-correcting strategy under production conditions. By implementing the optical measurement device into the production tool and the algorithm of the self-correcting control strategy into the controller of the punch-bending machine, a test under production conditions could be carried out. At the production speed of 60 RPM of the punch-bending machine, the opening dimension as well as the punch force could be measured reliably, and the closed-loop control showed a stable behaviour, so the opening dimension of the short-circuit bridge could be held within the tolerances (Fig. 10).
CONCLUSIONS
The trend in electrical connection technology goes towards a minimization of the metal part size and a narrowing of tolerances. Because of the unavoidably varying properties of the high-strength materials, the small tolerances can only be kept at the cost of a high scrap rate and a large expenditure of time. In this case, the production process of a short-circuit bridge was used to reduce the scrap rate and the setup time of the process. Therefore, a self-correcting strategy based on a closed-loop control was built up. This self-correcting strategy uses geometrical dimensions of the workpiece measured during the bending process in order to keep the opening dimension of the short-circuit bridge within the tolerances by a correcting bending step over the whole production period.
Fig. 1. - Active structure of the conventional bending process
Fig. 3. - Short-circuit bridge
Fig. 4. - Setup of the MBS model: a) Model of the bending process; b) Modelling of the workpiece
Fig. 5.
Fig. 6. - Influence of the thickness and width of the flat wire concerning the punch force
[Demant, C. (2011)]. The shadow is received through an objective and produces dark areas on a CCD linear image sensor, which detects the transition between light and dark. Knowing the size of the pixels and their position in the line, it is possible to calculate the opening dimension.
Fig. 7. - Setup of the optical measurement device
Figure 8 shows the schematic structure of the process control. The calculation of the control input (ui) for the punch actuator is shown in equation (1). Thus, the control law corresponds to the discrete I-controller [Shinners, Stanley M. (1998)].
Fig. 8. - Schematic design of the closed-loop control for the trend correction
Fig. 9. - Active structure of the self-correcting bending process
Fig. 10. - Measured trend of the opening dimension without (a) and with (b) the self-correcting strategy | 18,161 | [
ACKNOWLEDGEMENTS
We express our deep gratitude to the AiF/ ZIM for funding this project. We would like to gratefully acknowledge the collaborative work and support by our project partners Otto Bihler Maschinenfabrik GmbH & Co. KG and Weidmüller Interface GmbH & Co. KG. | 18,161 | [
"1003720",
"1003721",
"1003722",
"1003723",
"1003724"
] | [
"446543",
"74348",
"446543",
"74348",
"74348"
] |
01485827 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485827/file/978-3-642-41329-2_32_Chapter.pdf | Michiko Matsuda
email: matsuda@ic.kanagawa-it.ac.jp
Fumihiko Kimura
email: fumihiko.kimura@hosei.ac.jp
Digital Eco-Factory as An IT Support Tool for Sustainable Manufacturing
Keywords: Production modelling, Software agent, Sustainable production planning, Virtual factory, Environmental simulation
The use of a digital eco-factory as an IT tool for sustainable discrete manufacturing has been proposed. In this paper, details of the digital eco-factory and its construction methods are discussed. A digital eco-factory is a virtual factory and IT support platform on which a production scenario is examined from various viewpoints. When a digital eco-factory is used, the environmental impact of the planned production scenario is examined in addition to productivity and manufacturability. A digital eco-factory is constructed on a digital factory, and a digital factory is constructed on a virtual production line. Multi-agent technologies can be applied to model an actual shop floor and its components. All components are configured as software agents, called "machine agents." Furthermore, manufactured products are also configured as software agents, called "product agents." In the digital eco-factory, there are three panels which provide a user interface from three different viewpoints: a plant panel, a product panel and an environmental index panel. By using a digital eco-factory, a production system designer can carry out a pre-assessment of the configuration of the production line and of the production scenario, and a factory equipment vendor can show the performance of his equipment.
INTRODUCTION
Achieving a low-carbon society in order to conserve the global environment has been pursued for a long time, but progress does not come easily. Although mechanical products such as cars, personal computers, mobile phones, household appliances and daily-use equipment are indispensable in everyday life, their recycle/reuse systems are still at the level of considering recycling or reuse of their materials or component parts only. For society to proceed further, the recycling-based manufacturing system must be improved fundamentally from the viewpoint of sustainability. At present, manufacturing enterprises are required to optimize the service to the product user while considering the sustainability of the global environment, and it has become usual to design the whole product life cycle before production. At the design stage, various kinds of CADE (CAD for Environment) systems are used as IT support tools, and the ICT investment for the use of these tools is becoming a large expense. Moreover, starting with the ISO 14000 series (e.g. [START_REF]An Empirical Study of the Energy Consumption in Automotive Assembly[END_REF][2][3]) for environmental management, which was issued in 1996, the methodologies for life cycle assessment techniques are being standardized (e.g. [4]). In line with this trend, an IT support tool with low ICT investment is strongly desired at the real manufacturing scene [5] for estimating the production cost and environmental impact of production plans before actual production.
The authors have proposed using a digital eco-factory as an IT support tool for green production planning [START_REF] Matsuda | Digital eco-factory as an IT platform for green production, Design for innovative value towards sustainable society[END_REF][START_REF] Matsuda | Configuration of the Digital Eco-Factory for Green Production[END_REF][START_REF] Matsuda | Usage of a digital eco-factory for green production preparation[END_REF]. A digital eco-factory is a virtual factory and integrated IT platform on which a production scenario is simulated and examined from various viewpoints. When the proposed digital eco-factory is used, the green performance of the planned production scenario is examined in addition to productivity and manufacturability, at the same time and with various granularities such as the machine level, product level and factory level. In the future, when this digital eco-factory becomes available as a Web service such as a cloud service or SaaS (Software as a Service), it will be possible to use IT support tools for sustainable manufacturing with low investment. As a first step in this direction, the digital eco-factory must be implemented in practice.
In this paper, the detailed internal structure of a digital eco-factory is discussed and determined for a practical implementation. First, technical requirements and IT solutions for them are presented, and a conceptual structure of a digital eco-factory for discrete manufacturing is shown. Modelling of a production line for the construction of the virtual factory is discussed in detail. Then, the usage and control of a digital eco-factory are explained. A digital eco-factory is operated based on the execution of virtual manufacturing. Finally, an example of a trial implementation is introduced.
IT SUPPORT TOOL FOR SUSTAINABLE MANUFACTURING
Requirements for the IT support tool
General idea of an IT support tool
The production engineer and the plant manager use the IT support tool to assess and examine the production scenario before the actual execution of the production. The product designer also uses this tool to consider the production process in the product life cycle. Moreover, manufacturing device and equipment developers use this tool to examine and show the capability and environmental efficiency of new devices and equipment. The IT support tool shows performance and environmental impact from various viewpoints by simulating the input production scenario. Figure 1 shows the general idea of the IT support tool. This IT support tool is called a "digital eco-factory." Production lines of an actual factory are modelled as a virtual factory in the digital eco-factory. Virtual manufacturing is performed according to the input scenario in the virtual factory, and its performance is observed from the product viewpoint, the production line viewpoint and other viewpoints. There are functional requirements for the digital eco-factory as an IT support tool from the systematic view, the monitoring view and the user interface view [START_REF] Matsuda | Digital eco-factory as an IT platform for green production, Design for innovative value towards sustainable society[END_REF][START_REF] Matsuda | Configuration of the Digital Eco-Factory for Green Production[END_REF]. The major functional requirements are the following:
- easy input of the production scenario, such as device/equipment configuration, production schedule, process plan, manufactured product data, optimization parameters and changes of schedule/plan,
- precise simulation of the production scenario from the machine view, process view and product view,
- simulation which also includes peripheral equipment, such as air conditioners, in addition to the equipment directly used in production,
- computation of environmental items such as the amount of raw materials and various energy intensities (e.g. CO2, NOx, SOx, energy consumption) in addition to conventional items such as production cost and delivery time,
- monitoring of the status of each and every process (machine), each and every product, and the system as a whole, and
- monitoring of the relationships between environmental indicators and cost-oriented conventional indicators such as delivery time and production cost.
A digital eco-factory as an IT support tool
To fulfil the above functional requirements, the digital eco-factory must be a robust IT platform for simulation of various production scenarios, pre-assessment of various line configurations, and comparison of several production processes. Furthermore, technologies are required for the proper evaluation of each process by carefully making individual components one by one, and assessing the entire factory based on them.
To implement this on an actual IT platform, it is important to construct precise models of the production line, the production process and the target product, including both static properties and dynamic behaviour. In other words, the core of the digital eco-factory is a digital factory in which the actual machines, production lines and the factory are
mirrored. There are several previous studies about the digital factory (e.g. [START_REF] Freedman | An overview of fully integrated digital manufacturing technology[END_REF][START_REF] Bley | Integration of product design and assembly planning in the digital Factory[END_REF][START_REF] Monostori | Agent-based systems for manufacturing[END_REF]) in which production lines are modelled statically. Based on these results, the authors proposed using multi-agent technology to model factory elements statically and dynamically [START_REF] Matsuda | Flexible and autonomous production planning directed by product agents[END_REF][START_REF] Matsuda | Agent Oriented Construction of A Digital Factory for Validation of A Production Scenario[END_REF]. Moreover, it is proposed to construct a digital eco-factory using this agent-based digital factory. The conceptual structure of the proposed digital eco-factory is shown in Figure 2. In Figure 2, the digital factory is the basis of the digital eco-factory. The digital factory is constructed on the virtual production line, which models an actual production line and its components. All components are configured as software agents, called "machine agents." In addition to machine agents, manufactured products are also configured as software agents, called "product agents." In the digital eco-factory, there are three panels, each of which provides a user interface from a different viewpoint: the plant panel, the product panel and the environmental index panel. The operator of the digital eco-factory can input the production scenario, the configuration of the shop floor, the control policy for the production line, the energy saving policy, the granularity of environmental indexes, etc. through the user interfaces of the panels. The operator can also observe the progress and results of virtual production through the user interfaces of the panels. The product panel controls the progression of virtual production by the creation of product agents. The structure of a machine capability model is schematically shown in Figure 3. A machine capability model consists of the specification data of a machine, the operations which the machine can perform, knowledge on how to operate processes, required utilities such as air and light, and knowledge on how to calculate cost-related items and environmental indexes. Operation data has operation orders and its own operation conditions. Operation data consists of the performed operation types such as machining, screwing and bonding, the tools and jigs corresponding to each operation, the energy consumption for each single operation and other operation information such as the operation method and control algorithm. The machine capability model provides the associated production line ID and its position in the line. If the associated production line ID is the same, machines are in the same production line, and the associated line position shows the order of machine positioning. A machine agent has its own machine capability model. The plant panel has templates for machine capability models and fills in an adequate template when initially setting up a machine agent.
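To make the structure of the machine capability model more concrete, a minimal Python sketch is given below. The field and class names are illustrative assumptions that only mirror the items listed above (specification data, operations with tools/jigs and energy consumption, required utilities, line ID and line position); they are not an actual schema of the digital eco-factory.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Operation:
    op_type: str                          # e.g. "machining", "screwing", "bonding"
    tools_and_jigs: List[str]             # tools and jigs used for this operation
    energy_per_operation_kwh: float       # energy consumption of one single operation
    other_info: Dict[str, str] = field(default_factory=dict)  # method, control algorithm


@dataclass
class MachineCapabilityModel:
    machine_id: str
    specification: Dict[str, str]         # static specification data of the machine
    operations: List[Operation]           # operations the machine can perform
    required_utilities: List[str]         # e.g. compressed air, light
    line_id: str                          # associated production line ID
    line_position: int                    # order of the machine in that line


# A machine agent would hold one such model and answer job requests with it.
mounter_model = MachineCapabilityModel(
    machine_id="mounter-1",
    specification={"vendor": "example", "max_speed": "60 parts/min"},
    operations=[Operation("mounting", ["nozzle-A"], 0.02)],
    required_utilities=["compressed air"],
    line_id="PCA-line-1",
    line_position=2,
)
```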
MODELING FOR VIRTUAL PRODUCTION
Product agent
A product agent is a collective designation for agents such as a workpiece agent, which carries the workpiece data and the machining process data, and a part agent, which carries the part data and the assembly process data. According to the production schedule, a product agent with its product model and process plan data is created by the product panel. The product model and process plan data are prepared outside of the digital eco-factory using a design assist system such as a CAD/CAM system. Usually, the product model and process plan are included in a production scenario. The activity diagram of the product agent is shown in Figure 5. A product agent has a machine allocation rule, the process plan for completing the product and the product model. When the production request is accepted, the product agent allocates jobs to adequate machine agents in the order given by the process plan, monitors the product condition in the virtual operation by collecting productivity data and environmental data from the machine agents, and reports the production status of the product.
Production scenario
The production scenario under review is input to the digital eco-factory. Usually, a production scenario is prepared by a production engineer such as a process planner. A production scenario is validated by running the virtual production following the scenario. By repeatedly modifying and validating a scenario, a proper production scenario is selected from the economical and environmental points of view. The formal structure of the production scenario is shown in Figure 6. A production scenario is constructed from the product data, which describe the target of the manufacturing, the processes, which are job sequences for producing the product, and the rules and methods for executing the virtual production. Product data includes the data of the component parts and the workpiece data. A process consists of sub-processes; a minimal sub-process is a job which is executed on some resource. The rules and methods include the methodology and optimized parameters for production line control, dispatching rules for scheduling and the theory for machine allocation [START_REF] Matsuda | Usage of a digital eco-factory for green production preparation[END_REF][START_REF] Matsuda | Agent Oriented Construction of A Digital Factory for Validation of A Production Scenario[END_REF]. Fig. 6. - Structure of a production scenario [START_REF] Matsuda | Usage of a digital eco-factory for green production preparation[END_REF]
A DIGITAL ECO-FACTORY
Construction of the virtual production line
A virtual production line is constructed by machine agents and product agents. A sequence diagram of virtual manufacturing in a virtual production line is shown in Figure 7. The virtual production line configuration is given as activations of machine agents by the plant panel. According to the production scenario, the product panel creates product agents. When a product agent is created, the first step of the virtual manufacturing procedure is that the product agent requests the machine status from the machine agents. Depending on the replies, the product agent requests the job execution on
the allocated machine. The machine agent replies with the estimated job starting time and the product agent confirms the job request. Furthermore, the product agent requests an AGV agent to transfer virtual things such as material, mounted parts and tools to the machine for virtual execution of the job. The machine agent proceeds with the virtual operation according to the scheduled job list in proper order. When the machine agent starts the virtual operation, the machine agent notices the starting to the product agent. During execution of the virtual operations in the machine agent, the machine agent reports the condition and status to the product agent and others. The product agent makes and sends the report to the product panel. The plant panel sets up the configuration of a virtual production line, and shows the operating condition and status of machines which are used to manufacture products in a virtual production line. Figure 8 shows the activity diagram of a plant panel. The configuration of the production line and details of component machines (devices and equipment) are provided from outside of the digital eco-factory by the operator. When a new machine is indicated, the plant panel sets up machine agents with a machine capability model corresponding to the input configuration by using templates of the machine capability model. On the other hand, when an already set up machine is indicated, the plant panel activates the corresponding machine agent with associated production line information. During execution of virtual production, by communicating with machine agents, the plant panel monitors the machine status on the virtual production line, collects operation condition data, productivity data and environmental data, and calculates the total of the economical and environmental index. When the configuration is changed, an operator could indicate this through deletion/generation of machine agents through the plant panel.
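The job-allocation dialogue between a product agent and the machine agents described above can be sketched as a simple message exchange. The sketch below is a deliberately simplified illustration (no AGV transfer, no explicit confirmation step, no capability matching), and the class and method names are assumptions rather than the interface of the actual agent platform.

```python
class MachineAgent:
    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.job_list = []

    def status(self):
        # Reply to a status request; the queue length serves as a simple load measure.
        return {"machine": self.machine_id, "queued_jobs": len(self.job_list)}

    def request_job(self, job):
        # Accept the job and reply with an estimated starting slot (queue position).
        self.job_list.append(job)
        return {"estimated_start_slot": len(self.job_list)}


class ProductAgent:
    def __init__(self, product_id, process_plan):
        self.product_id = product_id
        self.process_plan = process_plan      # ordered list of required jobs

    def allocate(self, machines):
        # For each job, ask all machines for their status and pick the least loaded one.
        for job in self.process_plan:
            chosen = min(machines, key=lambda m: m.status()["queued_jobs"])
            reply = chosen.request_job((self.product_id, job))
            print(self.product_id, job, "->", chosen.machine_id, reply)


machines = [MachineAgent("printer-1"), MachineAgent("mounter-1")]
ProductAgent("pcb-001", ["print", "mount"]).allocate(machines)
```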
Fig. 1. - General idea of an IT support tool
Fig. 2. - Conceptual structure of the digital eco-factory
Machine agent and machine capability model
A machine agent is a software agent which has its machine capability model. According to the production line configuration, machine agents are set up by the plant panel. A machine agent simulates the behaviour and activity of the manufacturing machine by referring to the machine capability model. Manufacturing machines represent all of the devices/equipment on the shop floor, including human operators. In other words, the machine capability model statically describes a machine's data, and a machine agent dynamically represents a machine's performance. Machine agents communicate with each other and autonomously structure a production line on the virtual shop floor.
Fig. 3. - Structure of a machine capability model
Fig. 4. - Activity diagram for a machine agent
Fig. 5. - Activity diagram for a product agent
Fig. 7. - Sequence diagram of a virtual production line
A digital eco-factory will support sustainable discrete manufacturing by virtually executing production. For future work, more trial implementations are required, and a further detailed design should be generated based on the results of these trial implementations.
ACKNOWLEDGEMENTS
The authors thank members of research project titled "the digital eco factory" by FAOP (FA Open Systems Promotion Forum) in MSTC (Manufacturing Science and Technology Center), Japan for fruitful discussions and their supports. The authors are also grateful to Dr. Udo Graefe, retired from the National Research Council of Canada for his helpful assistance with the writing of this paper in English.
Product panel.
The product panel creates the product agent with a process model by referring to the production scenario and product model, and shows the progression and status of the manufactured product on the virtual shop floor from the productivity and environmental views. The activity diagram of the product panel is shown in Figure 9. At first, the product panel analyses the production scenario, which includes manufactured product data such as the parts structure, production amount, delivery period and process plan, and generates the production schedule plan. According to this plan, the product panel creates product agents and inputs them to the digital factory to start production for each product. As the virtual production proceeds, the product panel monitors the progression and status of the products on the virtual shop floor from the productivity and environmental views. The product panel displays the product status, collects environmental data and productivity data, calculates an environmental index by communicating with the product agents, and reports them.
Environmental index panel.
The environmental index panel shows green performance indexes such as carbon dioxide emissions and energy consumption at the machine level, production line level and plant/factory level. At the plant level, green performance indexes from plant utilities such as air compressor, air conditioning, exhaust air and lighting are also included. Figure 10 shows the activity diagram of an environmental index panel. The environmental index panel calculates green performance indexes based on the operating condition report from a machine agent by referring to the machine capability model. And the index panel generates a green performance report for each machine. This report is called the machine index report. Using this machine index report, the index panel calculates green performance indexes for production lines and produces a line index report. Then, using this line index report and utility consumptions, the index panel calculates green performance indexes of the whole plant. Utility consumptions are calculated in parallel using reported data such as power consumption and airflow volume from machine agents and referring to machine capability models.
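The hierarchical roll-up performed by the environmental index panel (machine index reports aggregated into line reports, which together with the utility consumption form the plant report) can be sketched as follows. The report fields and numbers are illustrative assumptions only.

```python
from collections import defaultdict

# Machine index reports as produced per machine (values are illustrative).
machine_reports = [
    {"machine": "printer-1", "line": "PCA-1", "energy_kwh": 1.2, "co2_kg": 0.6},
    {"machine": "mounter-1", "line": "PCA-1", "energy_kwh": 2.5, "co2_kg": 1.3},
    {"machine": "reflow-1",  "line": "PCA-1", "energy_kwh": 9.8, "co2_kg": 5.1},
]
# Plant utilities such as air conditioning and lighting (illustrative values).
utility_consumption = {"energy_kwh": 4.0, "co2_kg": 2.1}


def line_index(reports):
    """Aggregate machine index reports into one report per production line."""
    lines = defaultdict(lambda: {"energy_kwh": 0.0, "co2_kg": 0.0})
    for r in reports:
        lines[r["line"]]["energy_kwh"] += r["energy_kwh"]
        lines[r["line"]]["co2_kg"] += r["co2_kg"]
    return dict(lines)


def plant_index(line_reports, utilities):
    """Sum all line reports and add the plant utility consumption."""
    total = {"energy_kwh": utilities["energy_kwh"], "co2_kg": utilities["co2_kg"]}
    for rep in line_reports.values():
        total["energy_kwh"] += rep["energy_kwh"]
        total["co2_kg"] += rep["co2_kg"]
    return total


lines = line_index(machine_reports)
print(lines)
print(plant_index(lines, utility_consumption))
```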
USAGE OF A DIGITAL ECO-FACTORY
Green performance simulation
Using the digital eco-factory, the productivity and green performance of a production scenario can be simulated and evaluated. The sequence flow in the digital eco-factory is shown in Figure 11. The sequence flow of the digital factory, which is the core of the digital eco-factory, is shown in Figure 7. In Figure 11, the relationships among the three panels and the digital factory are clarified. The production plan, which indicates the workpiece/part input order to the production line, and the production scenarios are input to the product panel. By changing the production plan, the creation order of the product agents can be controlled, and by changing the production scenario, the job allocation to machine agents by product agents can be controlled. The production line configuration is input to the plant panel. By changing the line configuration data, the activation of machine agents through the plant panel can be controlled. As a result, various production plans, line configurations and production scenarios are easily comparable by using the digital eco-factory. The three panels monitor and report the green performance simulation in the digital factory, each from its own viewpoint.
Trial example
The proposed concept of the digital eco-factory was applied to a PCA (Printed Circuit Assembly) line. This trial system was implemented using the commercially available multi-agent simulator "artisoc." The PCA line consists of a solder paste printing machine, three electronic part mounters, a reflow furnace and a testing machine. In the PCA line, the processes on these machines proceed in sequence, and the machines' capabilities are modelled as individual machine agents to allow a precise simulation. Six types of printed boards are produced, differing in the number of mounted electronic components and the temperature of the solder. When a blank PCB (Printed Circuit Board) is input to the solder paste printing machine, the production process is started. A PCB is modelled as a part agent, which is one kind of product agent. Figure 12 shows the modelling concept for the PCA line and parts of the concrete descriptions of some of the agents in "artisoc". In this example, there are two PCA lines. Figure 13 shows displays of the execution example for the virtual production of the PCA. The animation display for the condition of the agents is seen in the upper left part of Figure 13, and the window in the upper right part is the control panel for setting the production volume for each type of PCA. The power consumption of each machine on each line is monitored from the environmental view in the lower part of Figure 13. The block graph at the lower left shows the power consumption of each machine in PCA line no. 1; the power consumption of the reflow soldering oven is predominantly large. The same phenomenon can be seen for the PCA line at the lower right.
CONCLUSIONS
For the practical implementation of the proposed digital eco-factory, the detailed design of the digital eco-factory is discussed in this paper. The key items are how to precisely model the production activities statically and dynamically. It is proposed that multi-agent technologies be applied for modelling the production line and the production behaviour. All elements configuring the production line are implemented as software agents, including the manufactured products. The agents communicate with each other and autonomously construct a virtual production line. Through the virtual manufacturing in the virtual production line, environmental effects can be estimated. The small trial example shows the effectiveness of the proposed implementation method of the digital eco-factory.
"1003725",
"1003717"
] | [
"488126",
"375074"
] |
01485828 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485828/file/978-3-642-41329-2_33_Chapter.pdf | Elmira Kh
Dusalina
Nafisa I Yusupova
email: yussupova@ugatu.ac.ru
Gyuzel R Shakhmametova
Elmira Kh Dusalina
email: dusalina.elmira@gmail.com
Enterprises Monitoring for Crisis Preventing Based on Knowledge Engineering
Keywords: Enterprise monitoring, crisis preventing, bankruptcy, decision support system, expert system technology, data mining
INTRODUCTION
Enterprise monitoring is a component of the management process; it represents the continuous observation and analysis of an enterprise's activity with tracking of the dynamics of changes. Monitoring of enterprises is important for identifying possible signs of crisis states, preventing them and keeping enterprises safe. Crisis states of an enterprise include insolvency, an inability to pay debts and bankruptcy. A quick estimation of changes in the financial state of the enterprise allows management decisions to be realized at an early stage of the crisis and negative consequences for the enterprise to be avoided. One of the main enterprise crises is bankruptcy. Research in the field of bankruptcy monitoring has been carried out for a long time and can be found in the papers of many scientists, as well as in IT solutions. These problems are considered in detail in [START_REF] Yusupova | Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring[END_REF]. In this article, a decision support system (DSS) for enterprise monitoring based on knowledge engineering is proposed. The second section describes the state of the art in enterprise bankruptcy forecasting, the third one the problem statement. The decision support system (DSS) for enterprise monitoring is considered in the fourth section. The fifth section is devoted to the DSS modules in detail, and in the sixth section the analysis of the efficiency of the DSS implementation is presented.
STATE OF ART
There are two main approaches to enterprise bankruptcy forecasting in modern business and financial performance practice [START_REF] Belyaev | Anti-Crisis Management[END_REF]. Quantitative methods are based on financial data and include the following coefficients: the Altman Z-coefficient (USA); the Taffler coefficient (Great Britain); the two-factor model (USA); the Beaver metrics system and others. A qualitative approach to enterprise bankruptcy forecasting relies on the comparison of the financial data of the enterprise under review with the data of bankrupt businesses (Argenti A-account, Scone method). An integrated points-based system used for the comprehensive evaluation of business solvency includes characteristics of both the quantitative and the qualitative approach. An apparent advantage of these methods is their systematic and complex approach to forecasting the signs of crisis development; their weakness lies in the fact that the models are quite complicated for decision making in the case of a multi-criteria problem, and it is also worth mentioning that the resulting forecasting decision is rather subjective. The analysis of methods for predicting enterprise bankruptcy and the analysis of the capabilities of well-known IT solutions in this field showed that the development of a decision support system for bankruptcy monitoring is needed [START_REF] Yusupova | Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring[END_REF]. The data required for anti-crisis management are semi-structured data in the majority of cases, and therefore the application of intelligent information technologies is necessary [START_REF] Jackson | Introduction to Expert Systems[END_REF][START_REF] Duk | Data Mining[END_REF].
Financial and economic application software available on the market nowadays is quite varied and heterogeneous. The necessity to develop such software products is dictated by the need of enterprises to promptly receive management data in due time and to forecast the signs of crisis development. To one extent or another, tools for anti-crisis management are available in a number of ready-made IT solutions [START_REF] Yusupova | Intelligent Information Technologies in the Decision Support System for Enterprises Bankruptcy Monitoring[END_REF]. However, the data analysis in many software products actually consists of providing the necessary strategic materials, while software products should meet increasing needs such as the analysis and forecasting of enterprise financial performance in the next reporting period.
The distinguishing feature of the presented research is the possibility of forecasting indications of fraudulent bankruptcy at its early stages, when it is still possible to take preventive measures.
PROBLEM STATEMENT
Enterprise monitoring involves the observation of an enterprise's activity and the timely detection and assessment of adverse effects, and it includes the creation and implementation of modern techniques providing automation of data collection and transmission. The goal of enterprise monitoring is the early detection, warning and prevention of crisis (bankruptcy) signs based on the analysis of enterprise financial indicators. Enterprise monitoring allows the organization of information transparency of the bankruptcy process, the reduction of subjectivity in the assessment of an enterprise's economic situation, and the early warning of the decision maker in case signs of a false bankruptcy are present.
The data required for enterprise monitoring are both structured (characterized by a large volume and representing diverse information that contains hidden patterns) and semi-structured, which creates an information processing problem. Therefore, it is not always possible to solve the problem of decision making support without the application of intelligent information technologies.
The authors of the study aim to develop models and algorithms based on intelligent technologies for the detection of the crisis state of an enterprise while it is still in its early stages, allowing timely changes of the development strategy of the enterprise. This will increase the stability and economic independence of the enterprise, as well as reduce the impact of the human (subjective) factor on important management decisions. The decision support system for crisis management is discussed in this article using the example of bankruptcy monitoring.
DECISION SUPPORT SYSTEM FOR MONITORING BANKRUPTCY
The major aspect of the bankruptcy monitoring problem is the analysis and timely identification of the signs of fraudulent bankruptcies [START_REF] Belyaev | Anti-Crisis Management[END_REF]. The basis of the whole complex of techniques for the decision support system (DSS) is the legally approved methodical instructions on accounting and analysis of an enterprise's financial position and solvency, which group the enterprises depending on the level of bankruptcy risk, as well as techniques for the identification of the signs of fictitious and deliberate bankruptcy. These techniques are currently used by auditors and arbitration managers. To develop the decision support system for monitoring enterprise bankruptcy, the authors propose the following general scheme of the DSS (Figure 2) and use knowledge engineering, expert system (ES) technology [START_REF] Jackson | Introduction to Expert Systems[END_REF] and data mining (DM) technology [START_REF] Senthil Kumar | Knowledge Discovery Practices and Emerging Application of Data Mining: Trends and New Domains[END_REF].
The expert system technology underlies two modules of DSS in bankruptcy monitoring [START_REF] Shakhmametova | Expert System for Decision Support in Anti-Crisis Monitoring[END_REF]:
- a module for grouping companies depending on the level of bankruptcy risk (module 1);
- a module for the identification of the signs of illegal bankruptcy (module 2).
Primary, intermediate and resulting data are stored in the main decision support database, organized according to the relational model. To keep the decision support system operating, the primary data on the company is imported into the system either automatically or manually. Interaction between the DSS and the user is carried out by means of an interface subsystem.
In the first phase of the DSS the enterprise is classified according to the degree of the threat of bankruptcy by means of module 1 of the expert system (Figure 3). Depending on the results, the enterprise is either checked for signs of fraudulent bankruptcy (I step, module 1 of the expert system), or financial performance is forecasted using the data mining technology (II step, DM module). In the third phase on the basis of the forecasted values the signs of the deliberate bankruptcy are identified (III step, module 2 of expert system). On the IV step a report is made for the decision maker.
DSS MODULES
Expert system modules
ES module 1, for grouping enterprises in accordance with the degree of bankruptcy threat, classifies an enterprise on the basis of its financial indicators into five groups:
- group 1 - solvent enterprises that are able to pay their current obligations fully and within the prescribed period at the expense of their current economic activity or liquid assets (G1);
- group 2 - enterprises without sufficient financial resources to ensure their solvency (G2);
- group 3 - enterprises with bankruptcy signs as established by law (G3);
- group 4 - enterprises that face a direct threat of the institution of bankruptcy proceedings (G4);
- group 5 - enterprises in respect of which an arbitral tribunal has accepted for consideration an application for recognition of the enterprise as bankrupt (G5).
This grouping allows determining which enterprises should be analyzed for potential signs of deliberate bankruptcy and which enterprises, in respect of which bankruptcy proceedings have already been entered, should be analyzed for potential signs of fictitious bankruptcy. A production model of knowledge representation is applied for the development of the knowledge base (Table 1). Here R1 - false bankruptcy signs are present; R2 - false bankruptcy signs are present and fixed assets can be withdrawn from the enterprise; R3 - false bankruptcy signs are absent and the enterprise pays out compulsory payments; R4 - false bankruptcy signs are absent and the enterprise is in a difficult financial situation; R5 - false bankruptcy signs are present and a deliberate accumulation of debts for subsequent cancellation is taking place.
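The production rules of module 1 can be read as a simple decision procedure. The sketch below encodes only the rules visible in the Table 1 fragment; the indicators K1, K2, K3 and K10 are opaque financial coefficients whose meaning is not given here, and the remaining rules of the knowledge base are not reproduced.

```python
def classify_enterprise(k):
    """Assign a bankruptcy-threat group (G1..G5) from financial indicators,
    following the visible fragment of the module 1 knowledge base."""
    if k["K1"] <= 6:
        return "G1"
    if k["K1"] > 6 and k["K2"] >= 1:
        return "G1"
    if k["K1"] > 6 and k["K2"] < 1:
        return "G2"
    if k.get("K3") == 1:
        return "G3"
    if k.get("K10") == 1:
        return "G5"
    return "unclassified"   # further rules of the knowledge base are not shown


print(classify_enterprise({"K1": 4, "K2": 0.8}))   # -> G1
print(classify_enterprise({"K1": 8, "K2": 0.5}))   # -> G2
```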
Data mining module
The problem which is solved by the data mining module in DSS in monitoring enterprise bankruptcy is the problem of forecasting financial indicators of the enterprise (company) and is considered in detail in [START_REF] Yusupova | Data Mining Application for Anti-Crisis Management[END_REF]. This problem can be seen as a problem of forecasting the time series, as the data for the prediction of financial indicators are presented in the form of measurement sequences, collated at non-random moments of time.
The dynamics of many financial and economic indicators have a stable fluctuation component. In order to obtain accurate predictive estimates, it is necessary to represent correctly not only the trend but the seasonal components as well. The use of data mining methods in time series forecasting makes the solution of this task possible. These methods have a number of benefits: the possibility to process large volumes of data; the possibility to discover hidden patterns; and the use of neural networks in forecasting, which allows obtaining a result of the required accuracy without determining the precise mathematical dependence.
There are a number of other benefits of data mining, such as basic data pre-processing, data storage and transformation, batch processing, importing and exporting of large volumes of data, the availability of data pre-processing units as well as ample opportunities for data analysis and forecasting. An algorithm for forecasting the companies' financial indicators has been developed (Figure 4). Forecasting of enterprise financial indicators in the DSS can be performed by means of a number of DM techniques, such as partial and complex data pre-processing, autocorrelation analysis, the "sliding window" method and neural networks. In solving the problem of forecasting a time series with the aid of a neural net, it is required to input the values of several adjacent counts from the initial set of data into the analyzer. This method of data sampling is called a "sliding window" (window - because only a certain area of data is highlighted, sliding - because this window "moves" across the whole data set). The sliding window transformation has the parameter "depth of plunging" - the number of "past" counts in the window.
The software implementation of the data mining module to forecast the financial indicators of the enterprise is performed by means of the analytical platform [8]. As mentioned above, the data mining module is realized by the following main steps: primary data input; application of the "sliding window"; neural network construction and training; and forecasting.
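A minimal sketch of the sliding-window transformation that precedes the neural network training is given below. The window depth of 3 and the indicator values are arbitrary illustrations, since the paper states that the window size is chosen per indicator and per enterprise; the neural network itself, which is built in the analytical platform, is not sketched here.

```python
def sliding_window(series, depth):
    """Turn a time series into (input window, next value) pairs,
    where `depth` is the number of "past" counts in the window."""
    pairs = []
    for i in range(len(series) - depth):
        pairs.append((series[i:i + depth], series[i + depth]))
    return pairs


indicator = [1.10, 1.15, 1.12, 1.20, 1.25, 1.22, 1.30]   # illustrative values
for window, target in sliding_window(indicator, depth=3):
    print(window, "->", target)
# Each pair feeds the neural network: the window is the input,
# the target is the value to be forecasted for the next reporting period.
```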
Each of the financial indicators has its own prediction algorithm, which includes the size of the step of the sliding window, the neural network (NN) structure, and the form of the activation function and its value (Table 3). These parameters are defined for each enterprise individually.
ES module 1, for grouping enterprises in accordance with the degree of bankruptcy threat, correctly determined the specific group membership of the enterprises in all cases. For the efficiency analysis of the proposed decision support system, a comparative analysis of the DSS results with the results of classical methods was conducted. According to the classical methods, enterprise 1 is unprofitable, which confirms the results of the decision support system. Enterprise 4 is functioning effectively, which also confirms the results of the decision support system. Enterprises 2 and 3 were identified by the classical methods as functioning effectively, but this conclusion was not confirmed by the real data, namely by the results of the analysis of the enterprises' accounting balances. This situation demonstrates the high efficiency of the system. Therefore, the proposed decision support system diagnoses the financial condition of enterprises more accurately. The analysis of the effectiveness of the data mining module is based on the comparative analysis of the financial indicators for the same period of time, obtained directly from the enterprise and forecasted through data mining. A fragment of the analysis of the effectiveness of data mining, with the deviations of the forecasted values of the financial indicators from the actual data, is presented in Table 5.
The analysis of the effectiveness of data mining for value forecasting showed that the deviations of the forecasted values from the real data are in the range from 1.35% to 8.74%. The average deviation is about 6.5%, which is quite a good result for forecasting.
The analysis of ES module 2, for detecting false bankruptcy signs, also showed a high efficiency of the system. This conclusion is confirmed by the analysis of Table 6.
Thus, it may be concluded that the considered informational support methods are adequate for a complex decision support system designed to prevent crises in enterprise monitoring.
CONCLUSIONS
A decision support system for bankruptcy monitoring including a data mining module has been developed. The decision maker using the DSS may be a top manager or a supervisory authority. Users of the system are able to monitor the major trends in the economic processes of the enterprise. With the help of the expert system, the enterprise is classified according to the degree of the bankruptcy threat. Then, with the help of data mining means, neural networks in particular, the enterprise financial indicators can be forecasted for a definite period of time (for example for 3 months).
The aim of the neural network at this stage is to capture the regularities of the changes of the financial indicators and detect them. Then, on the basis of the forecasted indicators, the signs of illegal bankruptcy of the enterprise are identified with the help of the expert system. The condition of the enterprise is thus determined not for the present moment but for a definite future period (for example for 3 months). This gives an opportunity to take measures preventing the enterprise from fraudulent bankruptcy. The efficiency analysis reveals good results of the DSS implementation for bankruptcy monitoring. This research has been supported by grants № 11-07-00687-а and № 12-07-00377-а of the Russian Foundation for Basic Research and the grant "The development of tools to support decision making in different types of management activities in industry with semi-structured data based on the technology of distributed artificial intelligence" of the Ministry of Education of the Russian Federation.
Fig. 1. - Enterprise monitoring system
Fig. 2. - The general scheme of the DSS in monitoring enterprise bankruptcy
Fig. 3. - The steps of using the DSS modules
Fig. 4. - Main stages of the data mining module
Table 1. Expert system module 1 - knowledge base fragment
ES module 2, for detecting false bankruptcy signs, determines the presence of a false bankruptcy based on the analysis of financial coefficients. The financial coefficients (provision of debtor obligations by assets, net assets value, share of long-term investments in assets, share of creditor debts in liabilities, etc.) are calculated on the basis of the enterprise's financial indicators. Table 2 shows the operational rules of the expert system module for detecting the presence of false bankruptcy signs.
№ Rules of production
Rule 1 If K1 ≤ 6, Then G1
Rule 2 If K1 > 6 And K2 ≥ 1, Then G1
Rule 3 If K1 > 6 And K2 < 1, Then G2
Rule 4 If K3 = 1, Then G3
…
Rule 11 If K10 = 1, Then G5
…
№ Rules of production
Rule 1 X1 : If K1(tj+Δt)<K1(tj), Then R1, Or X2
Rule 2 X2 : If K2(tj+Δt)<K2(tj), Then R1, Or X3
Rule 3 X3 : If K3(tj+Δt)<K3(tj), Then R1, Or X4
Rule 4 X4 : If K4(tj+Δt)<K4(tj), Then R2, Or X5
Rule 5 X5 : If K5(tj+Δt)≥K5(tj), Then R2, Or X6
Rule 6 X6 : If K6(tj+Δt)=K6(tj), Then X7, Or X8
Rule 7 X7 : If K7(tj+Δt)<K7(tj), Then R1, Or R4
Rule 8 X8 : If K6(tj+Δt)>K6(tj), Then X9, Or R3
Rule 9 X9 : If K7(tj+Δt)≤K7(tj), Then R5, Or R4
… …
Table 2. Expert system module 2 - knowledge base fragment
Table 3. Algorithms of data mining application to forecast enterprise's financial indicators
DSS IMPLEMENTATION EFFICIENCY ANALYSIS
The DSS has been used in the state monitoring of a number of industrial and agro-industrial enterprises of the Republic of Bashkortostan (Russia) (Table 4).
Table 4. The enterprises' characteristics
Table 5. Deviations of forecasted values from actual data, in percentage
Enterprise 1. DSS result: false bankruptcy signs are present, fixed assets can be withdrawn from the enterprise. Real situation: investigating authorities initiated a verification.
Enterprise 2. DSS result: false bankruptcy signs are absent, the enterprise is in a difficult financial situation. Real situation: the enterprise continues working, compulsory payments debts increase.
Enterprise 3. DSS result: false bankruptcy signs are present. Real situation: the enterprise continues working, enterprise obligations constitute 95%.
Table 6. Comparative analysis of the ES module 2 application results | 19,380 | [
"1003726",
"1003727",
"1003728"
] | [
"488127",
"488127",
"488127"
] |
01485832 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485832/file/978-3-642-41329-2_37_Chapter.pdf | Clemens Schwenke
email: clemens.schwenke@tu-dresden.de
Thomas Wagner
email: thomas.wagner2@tu-dresden.de
Klaus Kabitzsch
email: klaus.kabitzsch@tu-dresden.de
Event Based Identification and Prediction of Congestions in Manufacturing Plants
Keywords: Semiconductor AMHS, Model building, Event analysis, congestion prevention
Introduction
In modern semiconductor industry, more and more highly integrated customized wafer products have to be produced in shorter and shorter periods of time. Consequently, a large amount of production steps for a big variety of different products is carried out on the one hand. On the other hand, the production equipment is used flexibly, so that the transport system in a highly automated factory has to be adjusted frequently to new routes for wafer transports between stations. In general, these stations are connected by automated material handling systems (AMHS) which are complex interwoven networks of transport elements such as conveyor belts, rotary tables and other handling devices.
Besides adjusting the transport system to new demands, the operating engineers of automated material handling systems face one main problem: AMHS often show congestion phenomena, which reduce the throughput in a wafer fabrication facility (fab). Most of the time, congestions result in queues of work pieces waiting for other work pieces because stations are temporarily overloaded or switched off (down times).
Modern material handling systems provide features to detect and adjust to undesirable situations automatically and resolve congestions. For this feature intelligent material flow routing rules have to be implemented as well as rules for altering the feed of new work pieces entering the system.
But in real fabs, practitioners ask themselves the following questions: which rules have to be implemented, and, more importantly, how can these rules be determined systematically?
In order to analyze the overall performance of transport systems, event data of material passing certain waypoints of the system has been collected in log files. But the task of analyzing transient congestions and extracting conclusions is often still a Sisyphean undertaking for three reasons. First, this task is still carried out mostly manually by visually inspecting log files. Second, sometimes the efforts of studying log files do not result in generally applicable rules. Third, sometimes congestions seem unexplainable.
In order to free the expert from this time consuming task, this paper introduces an approach to how event based congestion analysis and prediction can be executed automatically.
Consequently, the steps of an approach for semi automatic event data inspection are provided. These steps include collecting of relevant event data, building a state model of the transport process, identification of temporarily overloaded segments, backtracking to influencing segments and congestion analysis in order to extract rules for prediction of congestions.
As a result, rules for the prediction of congestion occurrence are derived. For validation, these rules are applied to new event data of the same AMHS, so that congestions are predicted early enough for operators to be able to take action. For an exemplary use case, a set of trace data of wafer lots in an automated production line has been used to prove the approach. All steps of this workflow have been implemented in a software framework and tested against real fab data.
This paper is structured as follows. Related work is considered in Section 2. The approach for data inspection is described in Section 3, which also includes an exemplary validation. Conclusions are drawn and an outlook is given in Section 4.
Related Work
The authors investigated several possibilities to identify and analyze congestions in a given transport system. Consequently, some relevant approaches are discussed and the disadvantages of those that initially seem most obvious and useful are portrayed. The first thought when transport systems are to be examined is queuing systems. However, pure queuing theory could not easily be applied to the authors' real-world use case, because arrival rates, service durations and sometimes even capacities of the system's elements change constantly. The second thought is time series, but the application of pure classic time series analysis was not feasible, because models could not explain the observed phenomena exactly enough, nonsense correlations were found, or the calculation costs were too high. Third, the authors considered state model building for examining event sequences. Finally, the authors combined findings from several specific fields. Therefore, the consideration of related work covers work in state model building, queuing theory and time series.
State model building
Automatic state model building requires event logs as input data and provides highly aggregated information about state changes in a system. The prerequisite is that recorded events can be understood as notifications of state changes of entities. Briefly, the main essence of state model building is the extraction of a graph out of traces of events.
In the case of material flow systems, events are recorded when loads enter workstations where they are processed, or when they enter conveyor segments where they are transported to a succeeding workstation. In the extracted graph, nodes represent states of loads being in a certain station or transport segment, and edges represent the transfer of a load into another station or transport segment. For this kind of event data, an event always indicates that a load entered a work station or transport segment. The use of discrete state models describing a system's (or device's) behavior as a sequence of possible steps was studied successfully before [START_REF] Kemper | Trace based analysis of process interaction models[END_REF].
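The extraction of such a graph from event traces can be sketched as counting transitions between consecutive events of the same load. The event-log format below (timestamp, load ID, entered segment or station) is an illustrative assumption and not the format of any particular AMHS log.

```python
from collections import defaultdict

# Illustrative event log: (timestamp, load id, segment/station entered).
events = [
    (0, "lot-1", "conveyor-A"), (5, "lot-1", "table-B"), (9, "lot-1", "station-C"),
    (2, "lot-2", "conveyor-A"), (8, "lot-2", "table-B"), (15, "lot-2", "station-C"),
]


def build_state_model(events):
    """Nodes are segments/stations; edges count the observed transitions of loads."""
    edges = defaultdict(int)
    last_state = {}
    for _, load, state in sorted(events):      # process events in time order
        if load in last_state:
            edges[(last_state[load], state)] += 1
        last_state[load] = state
    return edges


for (src, dst), count in build_state_model(events).items():
    print(f"{src} -> {dst}: {count} transitions")
```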
On the one hand, state models are useful to monitor or identify business processes [START_REF] Agrawal | Mining process models from workflow logs[END_REF]. Van der Aalst et al. [START_REF] Van Der Aalst | Business process mining: An industrial application[END_REF] used state model building as a method for analyzing business processes, where events are generated when certain work steps begin and end. In so-called process mining, models of business processes shall be recovered or checked. Additionally, the relevance of business steps can be evaluated and performance indicators are calculated based on event logs. The main problems are to recover adequate models and to identify relevant process steps, since the log data of business processes, involving humans and external events, oftentimes contain non-deterministic portions. The resulting models then have to be mostly analyzed manually, sometimes including a few automatically calculated performance parameters if applicable.
On the other hand, state model building can be used for the examination of event logs of machines or transport systems, for example in semiconductor industry or in logistics applications. Compared to extracted models of business processes, the extracted models of logistic and manufacturing applications are more deterministic but contain many more states, so that sophisticated tailored analysis approaches are necessary to detect and explain unwanted phenomena, such as extremely varying delays [START_REF] Gellrich | Modeling of Transport Times in Partly Observable Factory Logistic Systems based on Event Logs[END_REF] or changing reject occurrence [START_REF] Shanthikumar | Queueing theory for semiconductor manufacturing systems: a survey and open problems[END_REF].
In order to enrich a pure state model with more information, Vasyutynskyy suggested the combination of state model building with calculation of performance indicators, such as overall throughput times, holding times and inter arrival times on states. Consequently the result is called an extended state model [START_REF] Vasyutynskyy | Analysis of Internal Logistic Systems Based on Event Logs[END_REF].
State models can be used as the basis for a detailed analysis of congestions if they manifest as tailbacks of loads waiting for preceding loads [START_REF] Schwenke | Event-based recognition and source identification of transient tailbacks in manufacturing plants[END_REF]. In this work, an approach to automatically carry out transient tailback recognition and cause identification was introduced. In order to identify the origins and causes of observed tailbacks, historic event log data of loads passing certain waypoints were inspected. The approach is based on the analysis of holding times and capacities of transport segments. As a result, complete lists of tailbacks and affected segments are provided. In addition, for each tailback an initial cause event is determined. However, this tailback analysis approach does not relate the occurrence of tailbacks to the constantly altering arrival rates of new loads entering the system. Therefore it was necessary to investigate different approaches to enable a successful prognosis of congestions.
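Following the idea that tailbacks manifest as unusually long holding times on transport segments, a very simple detection rule can be sketched as follows. The threshold factor and the holding-time values are assumptions for illustration only; the cited approach additionally takes segment capacities and cause events into account, which is omitted here.

```python
def detect_congested_segments(holding_times, factor=2.0):
    """Flag segments whose latest holding time exceeds `factor` times the
    segment's historical average, as a simple indicator of a forming tailback."""
    congested = []
    for segment, times in holding_times.items():
        history, latest = times[:-1], times[-1]
        if history and latest > factor * (sum(history) / len(history)):
            congested.append(segment)
    return congested


# Illustrative holding times in seconds per segment (last value is the newest).
holding_times = {
    "conveyor-A": [4.0, 4.2, 3.9, 4.1, 9.5],
    "table-B": [2.0, 2.1, 2.0, 2.2, 2.1],
}
print(detect_congested_segments(holding_times))   # -> ['conveyor-A']
```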
Queuing Theory
Queuing theory is a tool for estimating performance indicators in networks of waiting lines and service stations. The service stations take a certain amount of time, e.g., for processing one work piece. The work pieces, or loads, travel through the system and wait in line in front of the service stations, thus forming queues.
The main application is the design of queuing systems, [START_REF] Beranek | A Method of Predicting Queuing at Library Online PCs[END_REF], [START_REF] Horling | Using Queuing Theory to Predict Organizational Metrics[END_REF]. At the design time important questions are: What is the average queue length, how long are average waiting times Wq, and how many service stations are needed? For answering the question of the average waiting time in the queue Wq, Formula (1) can be used [START_REF] Gross | Fundamentals of queuing theory[END_REF].
(1)
The arguments for this Formula are the complete waiting time W that was spent in a system of queue and service station. The time W includes the average service duration . Alternatively the time Wq can be calculated using the arrival rate λ and service rate μ.
The basic assumption in queuing theory is stable arrival-and service rates. In contrast, these rates change frequently in the investigated real world systems, e.g. depending on product mix and order situation. As a result, Formula (1) for estimating Wq, was not directly applicable. The average time loads spend in a conveyor segment is called in the following.
Time series analysis
Classic time series analysis provides many disciplines [START_REF] Hamilton | Time series analysis[END_REF]. For this work the most important ones are the following. The first one is time series analysis in the time domain, where oftentimes trends and seasons are extracted by developing linear models until the residues cannot be minimized anymore and are similar to stochastic white noise. This approach is used in, e.g., economics, biology and agriculture [START_REF] Mead | Statistical Methods in Agriculture and Experimental Biology[END_REF]. Sometimes this approach is also used in physics or engineering but only as a last resort if model building using known facts did not provide useful results [START_REF] Palit | Computational intelligence in time series forecasting: Theory & engineering applications[END_REF]. For example, time series analysis is used in the field of predictive maintenance to model trends, seasons and noise of deterioration indicators [START_REF] Krause | A generic Approach for Reliability Predictions considering non-uniformly Deterioration Behaviour[END_REF]. But these approaches try to smooth of outliers instead of explaining them.
The second discipline often is applied for modeling the remaining residues, after trend and season are extracted, by estimating auto regressive moving average (ARIMA) models. ARIMA models often are used to model processes in economics, especially in financial industry trying to predict effects in the stock market [START_REF] Wang | Stock market trend prediction using ARIMA-based neural networks[END_REF]. This is done by assuming the stochastic nature of the unexplainable processes [START_REF] Bollerslev | Modeling and pricing long memory in stock market volatility[END_REF], [START_REF] Nelson | The time series behavior of stock market volatility and returns[END_REF]. Therefore, the main ingredients of these models are two parts, the auto regressive (AR) part and the moving average (MA) part. The AR part tries to model the time series by explaining the current value mainly by the previous value. The MA part models a white noise so, that in conjunction with the AR part the given time series can be approximated. Unexplainable peaks in general are considered outliers and are smoothed [START_REF] Breen | Economic significance of predictable variations in stock index returns[END_REF].
In contrast, the authors needed to explain the outliers, instead of smoothing them. As a result, the above mentioned time series analysis approaches were not applicable. One reason for this is that in reality peaks are not always stochastic and do not solely depend on the previous value.
Summary
When first confronted, the authors tested the following approach. First, the inter arrival times and holding times on conveyor segment in front of stations or rotary tables were examined for aggregated periods of time. With Formula (1) of the queuing theory, the waiting times W at stations were estimated at these time periods, but they did not match the actually observed holding times.
By a different approach, the authors aimed to produce forecast models in order to estimate autoregressive moving average (ARMA) models for inter arrival times and holding times. The predictions of these models were used to estimate the current waiting time W applying Formula (1) to the forecasted arrival rates.
Unfortunately, the quality of these models was not good enough for reliable predictions, because of one important fact. Forecast models tend to smooth peaks, because most of the times they are considered outliers. But in contrast, in the use case of investigating transport system data, the peaks of holding times are the sought after and to be explained congestions. Consequently, generic naive time series analysis was not constructive.
As examined by the authors, the key to understanding the establishing and resolving of congestions is the combination of system knowledge with time series analysis. Congestions can travel through the system like waves, superimpose and thus, cause significantly varying waiting times on certain conveyor segments in front of service stations. Therefore, there is a true relation between only certain arrival rates, service rates and waiting times. As a result, the authors integrated a step in the overall approach that selects only the relevant time series before they are investigated further.
The suggested analysis approach consists of a workflow of five general steps, see Figure 1. First event data of the AMHS has to be collected.
Fig. 1. Workflow of Analysis of Congestions
Second, a state model has to be extracted from of this event data. Third, overloaded segments of the transport system have to be identified. Fourth, the relevant source segments that feed loads into the system are identified by systematic backtracking.
Finally, the ultimate purpose of analyzing congestions by correlating them with arrival rates at source segments can be carried out. The four first steps are prerequisites for the last step. All five steps are described in detail in the following.
Logging of event data
The first step is the collection of event data in the factory's AMHS. That is, at each relevant conveyor belt or rotary table an event is logged. The event contains the essential information in the three fundamental attributes, timestamp, segment number and load number. Based on these elementary attributes, a graph of the transport system can be built in the second step.
State model building
In this step, a state model of the transport system is extracted from the logged event data. The authors published applications of this method before in [START_REF] Schwenke | Event-based recognition and source identification of transient tailbacks in manufacturing plants[END_REF], [START_REF] Wagner | Modeling and wafer defect analysis in semiconductor automated material handling systems[END_REF]. But for completeness, the algorithm of the method isbriefly described.
The algorithm extracts all relevant entities for building an extended state-transition model of a given log file of a given transport system. Consequently, the resulting model will consist of the following entities. S = {s 1 ; s 2 ; ... ; s n }
(2) S is the finite not empty set of states, representing transport system elements, e.g., rotary tables or linear conveyor modules as well as storage elements (stockers) or production equipment (work stations). L = {l 1 ; l 2 ; ... ; l m }
L is the finite set of loads, representing the moved entities, e.g., wafer carrier.
T C S X S (4)
T is the finite not empty set of transitions, representing interconnections between the single elements. T is a subset of all ordered pairs of states. One element of T is a binary relation over S. For example, (s 1 ; s 2 ) T with s 1 ; s 2 S; represents a transition from State s 1 to s 2 . For instance, a rotary table can be used as a crossing, unification or split of transport streams. Therefore it is connected to several other elements and can exhibit multiple transitions.
The event log is an ordered sequence of events E as follows.
E = {(τ 1 ; s 1 ; l 1 ); ... ; (τ N ; s N ; l N )} ( 5)
One event e = (τ; l; s) E is defined as a triple consisting of timestamp τ Z, state s S and load l L. Z is the set of timestamps τ, so that Z = {τ 1 ;...; τ N }. The above mentioned entities S, T and L can be systematically extracted from this ordered sequence of events as shown in Figure 2.
Fig. 2. State Model Building
The model building algorithm is a loop that processes ach event separately in one individual loop cycle. This loop consists of steps for extracting elements from events as well as for finding or creating model entities, so that they can be included into the model. Conditional decisions allow for breaking out of the loop if not all steps have to be carried out, because current entities are already part of the model.
Identification of overloaded segments
After the state model is extracted, the third step of the overall workflow can be executed. Overloaded segments are results of congestions. In the state-transition model these segments are states. The suspect states are identified by finding states that sometimes exhibit unusual long holding times ω. Longer holding times are an effect of previous loads holding up succeeding loads and therefore affect the average holding times of loads on certain states.
For each state one corresponding average holding time ω(s i ) can be calculated. The states that exhibit unusual long average holding times compared to the average holding time over all states, ω(s i ) > , are suspects to be affected by at least transient congestion effects. This comparison has to be executed for many fractions of time.
Fig. 3. Identification of States that temporarily exhibit congestions
As a result, a set of congestion states S effect C S is found.
Backtracking to influencing segments
In order to find states that influence the holding time of congestion states, a backtracking is carried out. This is necessary to compare the time series of only those states that actually can have an influence and not others that are unlikely to have an impact. This backtracking is carried out for each congestion state. The relevant influencing states are called feeding states. In that context, a feeding state is the closest preceding state that either exhibits more than one outgoing transition d out (s) ≥ 2 or that is a load source of the system, e.g. a production equipment input. Other states that represent linear conveyor segments and are closer do not have to be considered because the arrival rates do not differ from the feeding state. On each identified feeding state, recursively the same backtracking to previous feeding states is carried out. This recursion terminates when a number of maximum backtracking depth b max is reached or if no more preceding states can be found in the statetransition model.
The result of this algorithm is a tree of feeding states for each congestion state, for example see Figure 4. The dashed line marks the maximum backtracking depth selected by the user. In this case, the time series of two feeding states sf1 and sf2 have to be considered in the congestion analysis step described in the next Subsection. Increased backtracking depths can result in longer forecast lead times for congestions but also cause higher calculation costs since more states have to be considered.
Congestion analysis
Once the above mentioned prerequisites are available, the actual congestion analysis can be started. The approach presented focuses on the diagnosis and prediction of tailback events caused by the dynamic interactions of different transport system elements or areas. Other possible causes of tailbacks, like random failures of single transport elements, are much less related to the system behavior which is observable using the event logs described in Section 3.1 and are therefore not considered. However, the prediction of such tailbacks could be tackled using semantic information about the transport systems hardware, e.g. mean time between failure considerations.
Here, the progression of the inter arrival times (IAT) of the identified feeding states are considered in order to identify conditions that provoked the anomalies on the congestion states. This approach allows to draw inferences from temporary different workload situations on different load sources about the manifestation of transient tailbacks, e.g. due to temporary load concentration or mutual obstruction.
Depending on the situation, not every feeding state identified in Section 3.4 has an influence to the appearance of tailbacks on the congestion states. To select the relevant subset of feeding states, several methods of selection can be considered. One approach could be to weigh the IATs of the different feeding state and ignore the ones that transport only a seemingly irrelevant amount of lots. However, the simplified example depicted in Figure 5 suggests that this is a misleading approach.
Fig. 5. Influence of a low frequency feeding state on congestion probability
In Figure 5, a congestion state A (see Figure 6) is shown which receives its loads at a rate of approximately three loads per minute from a major source B. If no other sources participate, no congestions appear at this state A as indicated by the red line. However, there exists another feeding state C which considerably influences the holding times on state A. Although state C only contributes to the traffic with around one load in nine minutes (green line) to around one load in 4 minutes (blue line), it significantly increases the holding times of A and therefore even causes congestions (noticeable peaks in the blue line).
In the presented case, this is caused by a dead-lock prevention mechanism implemented in the transport systems controllers that block all traffic from C down to D once a load enters the critical area shown as a red cuboid in Figure 6. Therefore, it is necessary to measure the real influence of a feeding state's IAT on the HT of a congestion state regardless of its arrival rate. To achieve this, a correlation approach is used as a first step. For this purpose, the time series of holding time of the congestion state is correlated with the time series of the inter arrival times of all identified feeding states.
Depending on their distance, it takes a certain amount of time until the inter arrival times of the feeding states affect the holding times of the congestion state. To take these delays into account, each IAT time series is cross-correlated with the HT time series, using a default maximum lag as defined below where N is the number of observations in the time series.
( ) (6)
As a second step, in the resulting correlation values r l for each lag l, it is then sought for the maximum negative correlation. That is, the maximum suggested impact of a decreasing IAT (increasing arrival rate) of the feeding states on increased HTs of the congestion state. This correlation value r l is then checked for significance by comparing it with the values of the approximated 99% confidence interval c (∝=0.01):
√ (7)
Third, once the set of significant feeding states S^sig has been found and the corresponding lags yielding the maximum negative correlation for each state noted, the critical inter-arrival rates must be identified. These are the ones that cause overload situations on the congestion state, if they appear in combination. For this purpose, a set V for each feeding state is constructed as follows.
{⋁ }
With . In summary, this set V contains a mapping from the interarrival times of one feeding state to the corresponding holding times of the congestion state, shifted by l to compensate for the time delay between cause and effect as mentioned above. Afterwards, V is sorted by its IAT values in descending order, i.e., from the least to the most frequent lot appearance. Next, the HT components of V are scanned along the falling IAT values. Once the holding time on the congestion state first reaches or exceeds the critical value ( ) ̅( ) as described in Section 3.3, a previous is used to define a rule indicating the danger of congestion. This rule sets a Boolean value ( ) depending on whether this value is undershot. The parameter k can be used to manipulate the lead time and sensitivity of the congestion prognosis by choosing more conservative, i.e. larger, values of , so that the warning signal (
) is set earlier. This procedure is repeated for every and the resulting rules are combined to one single rule, suggesting a high congestion probability once all of the conditions are met.
The construction of these rules will now be demonstrated by the example shown in Figure 6. In this example, the state A's relevant was found at 60 seconds.
In Figure 7 the ordered set V is shown for the congestion state A and the feeding state B. For this state, the congestion effects began to manifest once the IAT of state B was less than or equal to 27 seconds. Using a parameter value of 3 for the lead time parameter k, i.e., the next larger value is used which is 28 seconds. In this example the congestion indication rule can be defined as follows.
(9)
A second rule is derived from the holding-and inter-arrival times of the states A and C as shown in Figure 8. Here, congestions on state A appeared once the inter-arrival times of state C was lower than or equal to 86 seconds. Using again a k of 1, the corresponding congestion indication rule therefore is as follows. [START_REF] Gross | Fundamentals of queuing theory[END_REF] As a last step, the resulting rules must be combined to reflect the mentioned interrelation of the IATs between the corresponding states. A combined rule would be noted as follows. 0 (11)
Conclusion and Outlook
The presented approach has been implemented into a comprehensive analysis framework. User input is only required to define the input parameters k and the maximum backtracking depth b max to influence the prognosis lead time. Subsequently, the warning rules are derived fully automatically and can afterwards be evaluated against the current transport system behavior at runtime. The derived warning rules for congestion prognosis will serve as a basis for dynamic routing approaches within the transport system controllers. If their conditions are met, the controllers will be alarmed about possible future congestion situations. As a possible countermeasure, they can reroute part of the incoming traffic flow across different system parts, thus gaining a consistent lot flow while sacrificing only a small amount of transport speed for a few lots.
In the use cases investigated, the derived rules predicted the observed congestions accurately enough to allow for effective prevention measures in most cases. However, the approach also exhibited a few limitations that have to be considered regarding the choice of the parameter values k and b max .
If the maximum backtracking depth is set too large, then too many feeding states have to be considered, eventually causing interferences between the growing variety of load situations. That means, several groups of feeding states may cause overload situations on a single congestion states independently. Using just the cross-correlation approach, this can neither be reliably distinguished nor can it be expressed using just AND conjunctions of the warning rules.
In addition, the lead time parameter k must be chosen carefully, since small values may reduce the prognosis horizon too much. On the other hand, values too large may provoke a lot of false-positive congestion warnings.
Therefore, future work will focus on defining metrics to aid the system experts choosing the right parameter values. In addition, the authors will investigate a wider set of influencing variables to determine their suitability for making the predictions more accurate.
Fig. 4 .
4 Fig. 4. Tree of influencing neighbor feeding states. Result of backtracking algorithm.
Fig. 6 .
6 Fig. 6. Excerpt of the example system showing a congestion state A and the feeding states B and C
Fig. 7 .
7 Fig. 7. Critical IAT for feeding state B
Fig. 8 .
8 Fig. 8. Critical IAT for feeding state C | 30,118 | [
"1003732",
"1003733",
"1003734"
] | [
"96520",
"96520",
"96520"
] |
01485833 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485833/file/978-3-642-41329-2_38_Chapter.pdf | Dipl.-Ing Gerald Rehage
M.Sc. Dipl.-Ing.(FH Frank Bauer
Prof. Dr.-Ing Jürgen Gausemeier
email: juergen.gausemeier@uni-paderborn.de
Dr. rer Nat Benjamin Jurke
Dr -Ing Peter Pruschek
email: peter.pruschek@gildemeister.com
Intelligent Manufacturing Operations Planning, Scheduling and Dispatching on the Basis of Virtual Machine Tools
Keywords: Operations planning, scheduling, dispatching, machine tools, simulations, industry 4.0
Today, numerical-controlled machine tools are used for flexible machining of increasingly individualized products. The selection of the most economic tool, machining strategy and clamping position is part of the manufacturing operations planning and bases on the employees' practical knowledge. The NC programmer is supported by current CAM systems with material removal simulation and collision detection. This early validation avoids damages and increases the productivity of the machines. The benefit of simulations can be increased by a better model accuracy. In common CAM systems the machine behaviour is often insufficiently reproduced; for example, the dynamic characteristics of axes, the tool change and the chronological synchronization are simplified or neglected. In view of complex operations, a slow trial run on the real machine or substantial safety margins are necessary. The described deficits can be solved by virtual machine tools. Thereby a virtual numerical control and a detailed machine kinematic is used for an accurate simulation. The result is an error-free NC program which can be directly used on the real machine. In addition, the exact processing time is determined and supplied for the operations scheduling and dispatching. Furthermore, virtual machine tools provide promising approaches for an automated optimization of machining operations and machine set up. Parameters for the optimization of the processing time are, for example, different clamping positions or tooling arrangement. Simulating several parameters requires lots of computational power. Hence, the vision of the project "InVorMa" is a cloud application, which supports the operation planning, scheduling and dispatching of machine tools. A computer cluster (cloud) provides noticeably faster results than a single computer. Therefore, the machine tools and the manufacturing equipment of the user are cloned one-to-one in the cloud. The manufacturing documents are optimized by the cloud application before they are forwarded to the shop floor. The optimization involves the NC program for each machine as well as the distribution of orders. The practical knowledge of the manufacturing planner and the results of the optimizations are pre-processed for reuse by an integrated knowledge base.
INTRODUCTION
Manufacturing in high-wage countries requires the efficient use of resources. Increasingly individualized products require a highly flexible production system [START_REF] Abele | Zukunft der Produktion -Herausforderungen, Forschungsfelder, Chancen[END_REF]. In the field of machining of metals, the needed flexibility is achieved by numericalcontrolled machine tools. It is the function of the manufacturing planner to ensure a rational use of the operating means. This is based on his practical knowledge and furthermore on the utilization of machine simulations to avoid damage and increase productivity from the office. The aim of the project "Intelligent Manufacturing Opera- The current procedure of operation planning was documented with the pilot users as groundwork for the requirements of a simulative assistance in this field. Figure 1 shows the summarised steps, tasks and results. In the operation planning, the manufacturing methods, production resources and sequences are determined according to firm-specific goals (such as punctuality, profitability, quality). In addition, the processing time and setting time is predicted on the basis of empirical values and the pre-calculation. In this phase the order of raw and purchased parts are initiated. Results are the routing sheet, the allowed times and the procurement orders.
It is the work task of the operations scheduling and dispatching to determine the start time and sequence of manufacturing orders as well as the allocation of resources with regard to the allowed times, scheduled delivery dates and the disposable machines. The Result is the up-dated master scheduling.
The last step is the NC programming for every manufacturing operation on numerical-controlled machines. Today, CAM-Systems are used for NC programming away from the machine. These provide an automatic calculation of the tool path for predefined shapes (e.g. plane surface, island, groove) deduced from given CAD models of blank, finished part, tool and fixture. The setting of technological machining parameters, used tools and clamping positions is still a manual task of the NC programmer. Hereby, he has a huge impact on the processing time and quality. Results are the NC program, the process sheet and the sketch of set up.
APPLICATION OF VIRTUAL MACHINE TOOLS
Nowadays, the CAD supported NC programming includes the simulation of machining. This kind of verification has become quite popular due to the process reliability of machining with 4 to 5 axes [START_REF] Rieg | Handbuch Konstruktion[END_REF]. Against the background of reduced batch sizes, the simulation achieves an increasing acceptation also for machines with 3 axes, since it is possible to reduce the test runs for new workpieces and special tools and moreover to reduce the risk of discard. Therefor, common CAM systems provide a material removal simulation and an integrated collision detection. The material removal simulation shows the change of the workpiece during the machining. The automated collision detection reports any unwanted contact between the tool (shank, holder), workpiece und fixture. However, the reproduction of the real machine behaviour is mostly reproduced insufficient by these systems. For example, the dynamic characteristics of the axes, the movement of PLC controlled auxiliary axes, the automatic tool and pallet change, as well as the time synchronization of all movements are only simplified implemented or even neglected [START_REF] Kief | CNC-Handbuch[END_REF]. The basis of all common CAM systems is the emulation of the calculated tool paths by an imitated control. Therefore, the machine independent source code CLDATA (cutter location data) [START_REF]DIN 66215: Programmierung numerisch gesteuerter Arbeitsmaschinen -CLDATA Allgemeiner Aufbau und Satztypen[END_REF] is used instead of the control manufacturer specific NC program that run on the real machine. The machine specific NC program is compiled after the simulation by a post processor to adapt the source code to the exact machine configuration. The wear points of integrated simulations are known to the NC programmer, they are compensated by tolerant safe distances. This causes extended processing times and with it unused machine capacities. The result of the integrated simulation in CAM systems is a checked syntax, tool path, zero point and collision-free run of the NC program and additionally the approximated processing time. Nevertheless, a slow and careful test run is still necessary for complex machining operations due to the low modeling accuracy. The optimized machining by utilization of simulations requires a reliable verification of the operations that are defined in the NC program. The simulations in the contemporary CAM systems can't provide this due to the mentioned deficits.
Fig. 2. -The simulation models of the virtual machine tool
An approach to optimize the NC program away from the machine is the realistic simulation with virtual machining tools [START_REF]DMG Powertools -Innovative Software Solutions[END_REF]. This includes the implementation of a virtual numerical control with the used NC interpolation as well as the behaviour of the PLC and the actuators. Additionally the entire machine kinematics, the shape of the workspace and peripheries are reproduced in the virtual machine (figure 2). Input data is the shape of the blank and the used fixture as well as the machine specific NC program. The virtual machine tool enables the execution of the same tests as the real machine. This includes optimizing parameters (for example different clamping positions or tooling arrangements) to reduce the processing time. The result is an absolutely reliable NC program, which can run straight on the real machine. Additionally, the reliable processing time is determined by the simulation and made available to the operations scheduling.
However, the variation of parameters (for example the clamping position) or adaptations (to minimize unnecessary operations and tool changes) have to be done manually by the user in the NC program. A new simulation run is necessary after each change and the result must be analysed and compared to previous simulations by the user. This is an iterative process until a subjective optimum (concerning time, costs, quality) is found. The simulation on a single PC runs only 2 to 10 times faster than the real processing time depending on the complexity of the workpiece. Today, the simulation of complex and extensive machining takes too long for multiple optimization runs.
VISION: CLOUD-APPLICATION TO SUPPORT PROCESS PLANNING
The illustrated possibilities of virtual machine tools offer promising approaches for optimizing the machining and setting up of the machine. The vision of the project InVorMa is a cloud application, which supports the employees in the planning, scheduling and dispatching of manufacturing operations on tooling machines (figure 3). Instead of passing the manufacturing order and documents directly to the shop floor, the relevant data is previously optimized by the cloud application. The optimization involves the NC program of individual machines as well as the efficient scheduling and dispatching of orders to individual machines. The user obtains the service over the internet from a cloud service, this provides considerably more rapid results compared to a simulation on local hardware. Recent market studies emphasize the potential benefits of an automated routing sheet generation, the integration of expert knowledge and the planning validation through simulation [START_REF] Denkena | Quo vadis Arbeitsplanung? Marktstudie zu den Entwicklungstrends von Arbeitsplanungssoftware[END_REF].
FIELDS OF ACTION
In the light of the presented tasks of operations planning, scheduling and dispatching as well as the exposed potentials and disadvantages of virtual machine tools, there are four fields of action (figure 4).
1. A significant increase in simulation speed is the basis of the intended optimization.
The main approach is the use of powerful hardware in a computer cluster. For some time, "Cloud computing" is a highly topical technological trend [START_REF]Fujitsu Launches Cloud Service for Analytical Simulations[END_REF]. However, this technology has not yet been used for the simulation of virtual machine tools.
2. The optimized machining result from the evaluation of possible resource combinations and parameters. Depending on the workpiece shapes to be manufactured, there are different combinations of available tool machines, tools, fixtures for the machining. For example, the machine configuration and parameters can be used to control the clamping position, the tooling arrangement in the magazine and the superposition of the feed speed. To optimize the machining process, the possible combinations have to be simulated and evaluated automatically. 3. Optimizing the machining on each machine does not necessarily lead to an efficient scheduling. This requires a cross-machine optimization witch considers the processing time, the resource management and the occupancy rate of all machines.
Waiting orders have to be economically dispatched to available machines. The operation scheduling needs to adapt continuously to the current situation, such as new orders and failures of machines or workers. Nowadays, operation planning is based essentially on the experience of the responsible manufacturing planner. The combination of resources as well as the machine settings is chosen with regard to the shape and the mechanical behaviour of the workpiece. If the machining result does not reach the expectations, this will be considered in further planning tasks. Therefore, a computer-aided optimization requires an aimed processing and reuse of technical and practical knowledge.
CONCEPT AS A CLOUD APPLICATION
The operation planning is assisted by the verification and optimization of the NC program and machine set up by the use of simulations. In addition, the operations scheduling and dispatching is improved by a pre-selection of resources and providing of reliable processing times. Figure 5 shows the system architecture of the cloud application with its modules "Production Optimizer", "Setup Optimizer", "Simulation Scheduler" and the "Virtual Manufacturing" as basis.
The optimization steps in each module are supported by a "Knowledge Base", which provides both, technical and practical knowledge from previous simulations.
The interface for incoming and outgoing information is part of this "Knowledge Base". The user sends the manufacturing order and documents (blank description, NC program) as well as the desired firm-specific goals to the cloud application. This represents the new bridge between the customers' CAPP-System (Computer-aided process planning) and the shop floor control. First of all, the "Knowledge Base" determines suitable machine tools by reference to the machining operations described in the NC program and the existing resources. The result is a combination of resourcescomposed of machine, fixture and toolfor each machining step. The selection bases on the description of relations between resources and possible machining operations. Empirical data from previous simulations like the process times are reused to estimate the processing time on each of the suitable resource combination. This outcome is utilized by the "Production Optimizer" to accomplish a cross-machine optimization using a mathematical model and a job shop scheduling. This takes account of the processing time, delivery date, batch size, current resource disposability, machine costs material availability, set up time, maintenance plan and the shift schedule. The master schedule sets the framework for the detailed operations scheduling and dispatching. Scheduling is the assignment of starting and completion dates to operations on the route sheet. Selecting and sequencing of waiting operations to specific machines is called dispatching. Here, the real time situation in the shop floor is provided thru the "Knowledge Base" to ensure a reliable scheduling.
In the next step, the NC program is optimized for the selected machine tool by the "Setup Optimizer". It varies systematically the parameters of the NC program and evaluates the simulation results from the virtual machine tool. For example, the target is to determine a timesaving workpiece clamping position, to remove collisions, to minimize the tool change times and empty runs or to maximize the cutting speed profile. The parameters that are being evaluated are chosen by a special algorithm in an adequate distance in order to reduce the number of simulation runs and to quickly identify the optimum parameter range. The result is an optimized, verified NC program and the necessary parameters to set up the machine. The results of all performed optimizations are saved in the "Knowledge Base". It links workpiece information, configurations and technological parameters with already conducted simulation results. Thus, it is possible to early identify parameters with a high potential for optimization as well as relevant parameter ranges for new NC programs. This restriction for the scope of solutions reduces the number of necessary simulation runs too. All simulation orders from the "Setup Optimizer" are managed by the "Simulation Scheduler" and distributed to the virtual machine tools and hardware capabilities. To increase the simulation speed, extensive NC programs are divided into sub-programs, that are simulated parallel and the results combined again afterwards.
The prerequisite for achieving the overall aim is the customized "Virtual Manufacturing" with virtual images of all available machine tools and manufacturing equipment. This includes the virtual machine tool in that current version as well as tools, holders and fixtures as CAD models.
If it is necessary, multiple instances of a virtual machine are generated in the computer cluster of the cloud application. For further improvements, potentials for the parallelization of separated computations are considered. For example, the calculations for the collision detection and the simulation of the numerical control systems can be executed on different CPU cores.
CONCLUSIONS
The fully automated operation planning will not be realized in a short period of time.
Instead, the paradigm of Industry 4.0 pushes decision-making techniques to support the user. The presented project combines approaches of knowledge reusing, advanced planning and scheduling and reliable machine simulations in a cloud application.
Virtual machine tools are used to verify and improve the machining without interrupting the production process on the shop floor. In addition, a more efficient distribution of manufacturing orders to the machine tools is addressed. This enables an increase in efficiency without changing the existing machine tools.
The following tasks are part of the next project phase: Characteristics and a taxonomy to describe manufacturing processes and resources are defined for the "Knowledge Base". Simultaneously, a concept is developed to speed up the simulation run; this includes software and hardware technologies. Furthermore, a basic model for scheduling and dispatching is developed; this can be adjusted later to the customers' framework.
FUNDING NOTE
This research and development project is / was funded by the German Federal Ministry of Edu-cation and Research (BMBF) within the Leading-Edge Cluster "Intelligent Technical Systems OstWestfalenLippe" (it's OWL) and managed by the Project Management Agency Karlsruhe (PTKA). The author is responsible for the contents of this publication.
Planning, Scheduling and Dispatching on the Basis of Virtual Machine Tools" (InVorMa) is a cloud-based simulation of machine tools and it is developed by the Heinz Nixdorf Institute and the Decision Support & Operations Research Lab (DSOR) of the University of Paderborn as well as the Faculty of Engineering sciences and Mathematics of the University of Applied Sciences Bielefeld in cooperation with the machine tool manufacturer Gildermeister Drehmaschinen GmbH. The companies Strothmann GmbH and Phoenix Contact GmbH & Co. KG support the definition of requirements as well as the following validation phase as pilot users.
Fig. 1 .
1 Fig. 1. -Summarized as-is process of operations planning, scheduling and dispatching
Fig. 3 .
3 Fig. 3. -Cloud application for supporting the operations planning, scheduling and dispatching
Fig. 4 .
4 Fig. 4. -Fields of action
Fig. 5 .
5 Fig. 5. -Architecture of the cloud application | 19,435 | [
"1003735"
] | [
"488132",
"488132",
"488132",
"488133",
"488133"
] |
01485834 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485834/file/978-3-642-41329-2_39_Chapter.pdf | Marius Essers
email: marius.essers@tu-dresden.de
Martin Erler
email: martin.erler@tu-dresden.de
Andreas Nestler
email: andreas.nestler@tu-dresden.de
Alexander Brosius
Dipl.-Ing Marius Eßers
Dipl.-Ing Martin Erler
Dr Priv.-Doz -Ing
Methodological Issues in Support of Selected Tasks of the Virtual Manufacturing Planning
Keywords: Virtual Machining, Virtual Manufacturing, Virtual Machine Tooling, Virtual Machine
des établissements d'enseignement et de recherche français ou étrangers, des laboratoires publics ou privés.
INTRODUCTION
In the area of production planning there are many points of application for the supporting use of simulation models due to the multitude of influencing variables to be taken into account. In principle all unique activities to be implemented for the design of a manufacturing system and the manufacturing processes to be planned can be simulated. The challenge is in developing models with suitable representations and visualisations as well as furnishing them with additional, growing physical characteristics. This affects all activities to be planned for drafting, design and optimisation of manufacturing processes in component production [01]. With the inclusion of physical characteristics from models, increasingly realistic statements for technological matters in particular can be attained. Realistic, in the sense of the planning of target stipulations, means attaining sufficient accuracy for the results relating to a relevant point of observation. Thus with the knowledge of sufficiently accurate machining forces for planned operations, the power and energy considerations can also be incorporated into the simulation, e.g. for the reduction of the energy expended through the evaluation for the design of operations based on low energy requirements [02]. For the best possible process design, conditions typical to the planning must be selected, whereby the various different display options incorporated for an increase in process planning quality can be tested and which can be linked to a comprehensive procedure [03]. Proving techniques, which will analyse the processes already designed as a follow-up, must also be integrated for verification purposes. The functionality existing at present for commercial and non-commercial machining simulation systems amounts primarily to the classical collision avoidance [04], the predominantly geometry-based visualisation of the overall system Machine-Tool-Workpiece [05] and selected optimisation on the basis of the NC code [06].
For the best possible process design and verification, the machining process must take account of the working process of the mechanical processing and its effect on the physical complete system Machine-Tool-Workpiece. In addition, further processes, e.g. the setting up of the machine, must also be taken into account in order to avoid potential fault sources.
The objective is the combination of a long term process simulation and a point-intime-specific system simulation with a high degree of detailing (Figure 1).
Fig. 1. Trends of process simulation
The high degree of detailing enables the planner to use additional functions of the process simulation for substantiated forecasting for special problem cases for a defined assessment period. These types of functions are not available in commercially available planning systems or are only available in rudimentary forms. This approach also counteracts transient performance problems occurring when working with a high degree of detail, which severely restrict the assessment period, in order to economically facilitate the overall process simulation with physical characteristics. The following investigations into methodical aspects are performed as examples of the virtual design of a milling machining centre and the milling process to be planned.
METHODOLOGICAL ASPECT OF SHORT PERIOD SIMULATION OF MANUFACTURING SYSTEMS 1
Here the degree of detailing can be considered as an alternative illustration for a given process section. That means that for an assessment period alternative illustrations are possible, whereby each one can depict another degree of abstraction. That applies both to the representations and the characteristics of the models.
For the illustration of the most comprehensive range of characteristics in the models of the mechanical machining, a multitude of application domains must be combined [07]. So that various different methods can be used, logically a universal user interface must be created for the design and implementation of the simulation.
The SimulationX development environment was selected for these requirements. SimulationX can be used for interdisciplinary drafting, the modelling and analysis of physical-technical systems on a common platform [08]. In standard form it offers domains in the fields of drive technology and electrical engineering, flexible multibody mechanics and others. Additional models can be coupled to expand the functional scope. In principle the models can communicate via a locally shared memory or via a network. The basis for the coupling is an interface definition. Alongside a propriety interface, an example functional mock-up interface (FMI) of the MODELISAR project [09] can also be used. This enables a modular approach, whereby the computing resources will also have an influence on the costs of the degree of detail. The object-oriented implementation of the manufacturing system is sub-divided into important sub-systems with bidirectional interfaces, whereby clamping systems are currently not considered (Figure 2).
The virtual workpiece, virtual tool and virtual machine tool sub-systems are dealt with in more detail below. The working point for the process will be explained in the process simulation section.
Cutter-workpiece engagement Cutter Workpiece
Machine Tool
Fig. 2. -Object-oriented illustration of a manufacturing system
Virtual Workpiece
The real workpiece undergoes continuous change during the machining. This affects both the external form as well as the stiffness and the mass which are changing due to the removal of material. Where large volumes are to be machined, the mass has a greater effect on the complete system and for example on the expected energy con-sumption. For smaller components with thin-walled structures the smallest amount of material removal has a substantial effect on the stiffness. There are well-proven techniques existing for the representation of workpiece geometry. Alternatively the mapping of a coupled NC simulation core (NCSK) [START_REF] Lee | Tool load balancing at simultaneous five-axis ball-end milling via the NC simulation kernel[END_REF] can be applied via a Z-map [START_REF] Inui | Fast Visualization of NC-Milling Result using graphics Acceleration Hardware[END_REF] or via a 3-dexel model. The widely-used data format STL is used as a basis. With this, the starting geometry is imported and the virtual finished geometry exported. Volumetric and dimensional calculations, for example, can be carried out based on this.
In principle, modelling is available as an ideally stiff workpiece. Firstly a check is carried out to ascertain whether a modular replacement system can be used to illustrate the changing stiffness. However, this does not enable the complex geometry to be completely illustrated. Therefore a freely available FEA software module is incorporated by means of Co-Simulation [START_REF] Calculix | A Free Software Three-Dimensional Structural Finite Element Program[END_REF] to show the structural mechanics of the workpiece (Figure 3). With this model representation forces, which can be used for deformation, can be applied to the node points.
Virtual Tool
If rough calculations are to be carried out then the tool can be adopted initially as ideally stiff (Figure 4a). There are various different illustrative models available for further detailing. If one ignores the wear on the cutting edge of the tool then it undergoes no geometric change in the process. An approximated representation of the tool with two flexible multi-bodies represents the shaft and the cutting part (Figure 4b). For a more exact implementation of the tool stiffness the FEM software module is available again (Figure 4c).
Virtual Machine Tool
As a minimum requirement on a virtual machine tool the kinematics must be implemented in order to be able to realise detailed movement information. The existing objects of the ITI mechanics library from the CAx development platform SimulationX are utilised for this. As an example a 3-axis portal-design vertical milling machine is modelled (Figure 5). Information from the machine documentation is sufficient to illustrate further characteristics. This is critical for the illustration of masses and centres of gravity for the machine components. The stiffnesses of the guides are modelled through springdamper systems. The machine components are primarily adopted as ideally stiff, so that all simulated machine deformations are created by the guides. The illustration of the cascade controller and the electrical illustration of the translational drive components is implemented through the "controller" components. An electrical drive motor ("driveMotor") and a ball screw drive ("ballScrewDrive") from the ITI mechanics library are used in these components. With stage one the representation of a Z-map is applied, built upon the kinematics of the penetration between tool and workpiece. In addition, an analytical geometry model calculates geometric intervention points within the XY-level of the tool in the time range of a tooth feed without taking vibrations into account (Figure 8) The intervention points determined are used for the calculation of an average cutting depth (Formula 1). Formula 1. -Berechnung der averaged cutting thickness auf geometrischer Grundlage Furthermore, the average cutting depth (Formula 2 and Figure 9) can be shown via a calculated value through the penetrated heights and their number in Zdirection. A force model, which delivers the machining forces which can be arbitrary in terms of magnitude, direction and application can in turn be applied to the geometric sizes determined. These are applied to the workpiece and tool in the vicinity of the working point for the process and will result in the deflection of the sub-system involved. This deflection in turn has a direct influence on the common penetration and the resultant geometric machining parameters.
f
METHODICAL ASPECTS OF THE SIMULATION OVER A LONG PERIOD OF TIME
The simulation over a long period of time should be considered here to be a complete simulation process for an existing NC program. To do so systems are employed here, which are closer to the real machine/controller combination than is possible with the classical post-processor CAD/CAM or NC programming systems. Alongside the verification -so, the assurance of freedom from errors in the sense of collision avoidance and guaranteed achievement of the required surface qualities and dimensional accuracy -the objective of a simulation process is also increasingly the reduction of unutilised safety reserves, which lie in the technical process parameters and which finally lead to a non-optimum primary processing time or secondary processing time. In order to be able to utilise these reserves and thus to be able to reduce machining times and tool costs, the consideration of further influences is required in the simulation.
With BC code based verification systems this is not normally possible, as the constraint that the NC code is dealt with as a whole significantly increases the difficulty of a more detailed evaluation. Influences of the machine tool control system for example remain almost completely disregarded.
Coupled with a real CNC controller a simulation on the other hand can provide significantly more statements about an NC program and the resultant machining [START_REF] Kadir | Virtual machine tools and virtual machining-A technological review[END_REF], as it evaluates the control signals generated for the individual axes directly, for example.
Virtual Machine Tool Environment (VMTE)
Alongside the transfer and processing of the data, the incorporation of information generated on the CNC control system side (e.g. corner rounding, reduced approach torques or feed limitations) also requires a simulation model, where this can be illustrated. One such model has been developed -the Virtual Machine Tool Environment (VMTE). It provides a basic model in the sense of a Co-Simulation, which enables the processing of CNC control system control signals to machine model transformation information. Two important requirements arise from the application conditions described:
The VMTE shall be quick and simple to create, as well as realtime-capable and modular, in order to achieve many iterations, a broad application spectrum and a high utility value.
The rapid path to VMTE
The process developed for creating the simulation models allows the use of generally available information and data for a machine tool and thus very quick creation of a VMTE, so that this can be economically used in many new areas of application.
Prerequisites.
In order to be able to use the VMTE as a planning and development tool, it must be able to be quickly adapted to the specific tasks for which it is required. Frequent changes to the machine configuration are normal in the early phase of development and planning as part of the process development and process checking. A wide range of different machines with differing configurations is also necessary for basic training and advanced training purposes. The consideration of established machine design variations in conjunction with the final operational sequence is an important point of the operational fine planning for the utilisation of VMTE in the manufacturing phase.
Machine tools are generally based on serial kinematics. The most important tasks in the development of a model for illustrating these serial kinematics are the development of the kinematics on the basis of the machine configuration and the linking of the graphical data with each axis. The basic mechanism for the transformation of the original CAD data and the transfer into graphical data as well as its population with machine functions can be largely generalised such that a VMTE can be created in less than 30 minutes.
Generalised virtual machine.
In order to achieve this the emphasis is not on the linking, in order to illustrate the kinematics, but rather the axes which represent the real components. This approach is closer to reality and results in the position of the axes being directly adjustable with respect to one another. There is also no need for additional parameters such as in the DH convention [START_REF] Denavit | A kinematic notation for lower-pair mechanisms based on matrices[END_REF]. The mobility of the model (and thus the movement of its axes) is achieved through the decomposition of one axis into two axis modules and their displacement or rotation with respect to one another.
In doing so an axis module has a zero point and a connection point, whose position and orientation with respect to one another will be described through a transformation matrix and which define the interior of the module (Figure 10). The combination of two modules (basic part and mobile part) describes the configuration of an axis. The mobile part will be moved relative to the basic part (likewise through a transformation matrix), whereby the typical axis characteristics will be realized. The set-up interface which is thus finally fully parameterized can also be considered an interface and can be supplied with information provided from outside. The CNC control system provides one such interface for example.
VMTE for simulation of a long period of time
Many intrinsic controller characteristics can also be considered with large time spans during the verification of the analysis through the coupling of the machine model and a real or virtual CNC control system, and these would otherwise have to have been modelled. Due to its very short creation time, ability to be fully parameterised and intrinsic consideration of CNC controller influences the kinematic base model provided offers itself as a basis for a virtual machine environment (Figure 12). The fully parameterised interface enables changes to be made to the configuration whilst the simulation is running, so that various different configurations can be used as the subject-matter of the simulation or so that the simulation can take account of the configuration changes.
OUTLOOK -THE COMPREHENSIVE MANUFACTURING SIMULATION
Sub-processes that are detected in the VMTE but cannot be simulated there in sufficient detail can be considered and evaluated downstream, or in parallel, through a magnified view of arbitrary resolution with an increased degree of detail. Not only can the time steps be reduced (in effect slowing down the simulated motion), but the resolution of the model in question (e.g. FE meshes) can also be increased. The combination of a simulation over a large time period with a detailed process simulation enables a comprehensive evaluation of the complete manufacturing process as well as the parameters and influencing variables involved, both in a holistic context and in detail.
In order to achieve this, the two simulations must interact with one another. This is achieved through the parametrisation of the two simulations. Thus the VMTE can transfer a parameterised machine tool model to the process simulation and can detect and specify the periods of time to be considered. The highly accurate analyses in small time periods received from the process simulation can be returned and used for consideration or correction of the VMTE in large time periods, where small changes have a significant impact.
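A schematic sketch of such an interaction between the long-term VMTE simulation and the detailed process simulation is given below. Every interface name (run_until, export_machine_model, refine, correct) is a hypothetical placeholder, since the paper does not define the actual APIs; the sketch only illustrates the alternation and feedback described above.

```python
# Hypothetical interfaces for the two simulation levels; the method names are
# assumptions made for this sketch, not part of any existing VMTE or
# process-simulation API.

def co_simulate(vmte, process_sim, critical_windows, t_end, dt_coarse):
    """Alternate between the long-term VMTE run and the detailed process simulation."""
    t = 0.0
    while t < t_end:
        # 1. Coarse, long-term simulation step in the VMTE
        state = vmte.run_until(t + dt_coarse)

        # 2. If the current interval was flagged as critical, hand over a
        #    parameterised machine model and the time window to the detailed simulation
        window = next((w for w in critical_windows if w.start <= t < w.end), None)
        if window is not None:
            machine_model = vmte.export_machine_model()
            detailed_result = process_sim.refine(machine_model, state,
                                                 t, t + dt_coarse)
            # 3. Feed the highly resolved result back to correct the coarse model
            vmte.correct(detailed_result)

        t += dt_coarse
    return vmte.results()
```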
The approach presented unites geometric/kinematic simulation methods, which cover large time periods at a restricted degree of detail, with highly detailed methods such as multi-body simulation (MKS) and FEA analysis for small time periods. In this way the advantages of both methods can be utilised and their disadvantages reduced.
Fig. 3. Geometric (a) and structural-mechanical (b) representation of the workpiece
Fig. 4. Tool representation models
Fig. 5. Kinematics of the Mikromat 4V HSC machine (a) and a simplified visualisation (b)
Fig. 6. Control-related and electrical implementation of an axis
Fig. 7.
Fig. 9. Geometrical cutter-workpiece engagement
On the basis of the two calculated engagement values, the Victor/Kienzle [START_REF] Kienzle | Spezifische Schnittkräfte bei der Metallbearbeitung[END_REF] force model for calculating the machining force can be applied. Because this procedure only permits 3-axis machining and only a rough determination of the machining forces, the considerably more accurate NCSK has been coupled via an FMI for Co-Simulation as a further alternative. The NCSK works with a three-dexel model.
Fig. 10. Generalized axis pattern for serial kinematics
Fig. 11. Configuration scheme for axis chains
Fig. 12. Sample VMT Ops Ingersoll Funkenerosion GmbH SH 650
Acknowledgement: This work is kindly supported by the AiF ZIM Project SimCAP (KF 2693604GC1).
"1003736",
"1003737",
"1003738",
"1003739"
] | [
"96520",
"96520",
"96520",
"96520"
] |
01485836 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485836/file/978-3-642-41329-2_40_Chapter.pdf | Marcus Petersen
email: marcus.petersen@uni-paderborn.de
Jürgen Gausemeier
email: juergen.gausemeier@uni-paderborn.de
Dipl.-Inf Marcus Petersen
Prof. Dr.-Ing Jürgen Gausemeier
A Comprehensive Framework for the Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components
Keywords: Manufacturing Process Planning, Functional Graded Components, Expert System, Specification Technique, Sustainable Production
INTRODUCTION
Functional gradation denotes a continuous distribution of properties over at least one of the spatial dimensions of a component consisting of only one material. This distribution is tailored according to the intended application of the component [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF].
Application areas for the use of functional graded components can be found, for example, in the automotive industry. Car interior door panels, for instance, are usually plastic parts that are supposed to absorb the impact energy of a lateral crash to an assured extent. The resulting deformation, however, must in no case lead to an injury of the car's passengers. To achieve the desired deformation behaviour it is necessary to assign exactly defined material properties to specific locations of the door panel. By a functional gradation, e.g. of the hardness, the functionality of the component can be considerably extended: the formerly purely decorative interior door panel becomes a functional element of the passive vehicle safety.
Functional graded components provide a resource-conserving alternative for modern composite materials [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF] and therefore offer high potential to achieve a sustainable production. Instead of using post-processing steps to create the composites and their graded properties, the gradation is produced during their moulding process. This process integration for example shortens the manufacturing process chain for the production of the component and increases the energy efficiency significantly.
The production of functional graded components requires complex manufacturing process chains, such as thermo-mechanically coupled process steps [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. While there are several materials-science approaches for developing an isolated process step to achieve a certain material structure, the holistic design of connected manufacturing process chains is much more difficult. In section two, an exemplary manufacturing process chain is therefore used to demonstrate our approach. To realise the full potential of functional gradation, a computer-aided framework for the planning and optimisation of such manufacturing process chains is introduced in the corresponding subsection, whereupon the hierarchical process chain synthetisation as part of the Expert System is presented in section three. Section four summarises the approach and identifies the significant future research challenges.
FUNCTIONAL GRADED COMPONENTS
Exemplary Manufacturing Process Chain
The manufacturing process chains for functional graded components are characterised by strong interdependencies between the components and the applied manufacturing processes as well as between the process steps themselves. In line with the interior door panel presented in section 1, a manufacturing process chain for self-reinforced polypropylene composites is used here as a demonstrator. This process chain uses a thermo-mechanical hot-compaction process to integrate the functional gradation into self-reinforced polypropylene composites by processing layered semi-finished textile products on a thermoplastic basis. The semi-finished textile products were previously stretched and provide a self-reinforcement based on macromolecular orientation. This self-reinforcement leads to a sensitive behaviour regarding pressure and thermal treatments and is therefore essential for the functional gradation of the composite [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF], [START_REF] Bledzki | Functional graded self-reinforced polypropylene sheets[END_REF].
Figure 1 shows the exemplary manufacturing process chain for self-reinforced polypropylene composites, starting with a gradual preheating of the semi-finished textile products in a partially masked IR-preheating station. In the next step, the thermal gradation of the product is enhanced through consolidation by a special compression moulding tool. The tool was specifically designed for thermo-mechanical gradation. For this reason, both tool halves can be tempered differentially and completely independently of each other. Furthermore, this tool applies a mechanical gradation through a local pressure reduction of up to 30% due to the triangular geometry. A cooling phase is necessary before demoulding the self-reinforced polypropylene composite [START_REF] Paßmann | Prozessinduzierte Gradierung eigenverstärkter Polypropylen-Faserverbunde beim Heißkompaktieren und Umformen[END_REF].
Fig. 1. Exemplary manufacturing process chain for self-reinforced polypropylene composites
Comprehensive Planning Framework
The exemplary manufacturing process chain for self-reinforced polypropylene composites is characterised by strong interdependencies (cf. section 2.1). These interdependencies are typical for the production of components with functionally graded properties and need to be considered. Therefore a comprehensive planning framework for the planning and optimisation of manufacturing process chains is under development. This framework integrates several methods, tools and knowledge obtained by laboratory experiments and industrial cooperation projects in which the concept of functional gradation has been analysed. The planning process within the framework is continuously assisted by the modules "Component Description", "Expert System" and "Modelling and Process Chain Optimisation" [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. Figure 2 gives an overview of the structure of the planning framework and the information exchanges between the modules. The input information for the manufacturing process planning is provided by the computer-aided design (CAD) model of the component and the intended graded properties. Based on this information, several alternative process chains for the manufacturing of the component are synthesised by means of the framework. After this, the process parameters of each process chain are optimised based on empirical models. The best manufacturing process chain is described using a dedicated specification technique for production systems in the last step of the planning process [START_REF] Gausemeier | Planning of Manufacturing Processes for Graded Components[END_REF].
The Component Description module enables the desired graded properties to be integrated into the CAD model of the component. The model usually consists of geometric features (e.g. cylinder or disc), which are extracted after loading the model. These features allow the framework to consider the geometry of the whole component and to pre-select reasonable gradients according to the geometry. This pre-selection increases the efficiency of describing the intended gradient, since the manufacturing planner can directly provide the desired graded properties by modifying the parameters of the proposed gradients. If the CAD model does not contain any geometric feature, or the user does not want to use one of the pre-selected gradients, the component is divided into small volume elements. These so-called voxels enable the component model to be locally addressed and can be used as supporting points for the function-based integration of the component's graded properties [START_REF] Bauer | Feature-based component description for functional graded parts[END_REF]. Based on the enhanced CAD model of the first module, the Expert System synthesises several alternative process chains for manufacturing the component. For that purpose, all the manufacturing processes available in the knowledge base are filtered according to the component description, such as material, geometry or the desired graded properties (e.g. hardness or ductility). To realise this filtering process, the content of the knowledge base is structured by an ontology. The ontology classifies the process steps with regard to their characteristics and connects the information of the knowledge base via relations between the content elements. An inference machine is applied to draw conclusions from the ontology, especially with respect to the varied interdependencies between the manufacturing processes. These conclusions provide the main information for connecting the process steps of the knowledge base during the synthetisation of reasonable manufacturing process chains according to the enhanced CAD model of the component. The synthetisation of several alternative manufacturing process chains by a hierarchical process chain synthesis is described in section three.
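As an illustration of the filtering idea described above, the following sketch checks candidate process steps against a component description. The process-step records, attribute names and threshold logic are invented for this example; the actual Expert System works on an ontology with an inference machine rather than on plain dictionaries.

```python
PROCESS_STEPS = [
    {"name": "hot compaction", "materials": {"polypropylene"},
     "gradable_properties": {"hardness", "stiffness"}, "max_thickness_mm": 8},
    {"name": "laser hardening", "materials": {"steel"},
     "gradable_properties": {"hardness"}, "max_thickness_mm": 50},
    {"name": "thermoforming", "materials": {"polypropylene"},
     "gradable_properties": set(), "max_thickness_mm": 12},
]

def filter_process_steps(component, steps=PROCESS_STEPS):
    """Keep only process steps compatible with the component description."""
    candidates = []
    for step in steps:
        if component["material"] not in step["materials"]:
            continue
        if component["thickness_mm"] > step["max_thickness_mm"]:
            continue
        # at least one desired graded property must be producible by the step
        if component["graded_properties"] & step["gradable_properties"]:
            candidates.append(step["name"])
    return candidates

door_panel = {"material": "polypropylene", "thickness_mm": 4,
              "graded_properties": {"hardness"}}
print(filter_process_steps(door_panel))   # -> ['hot compaction']
```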
The exemplary manufacturing process chain for self-reinforced polypropylene composites is, for example, characterised by the fact that the initial material temperature, which is adjusted during IR preheating in preparation for the compression moulding process, has a strong influence on the mouldability of the component.
Those and all the other interdependencies mentioned above need to be considered during the pairwise evaluation of process steps to ensure the compatibility of the synthesised process chains for the manufacturing of the component. All process chains with incompatible process steps are disregarded. Thus the result of the Expert System module is a set of several alternative process chains which are capable of producing the component [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF].
The parameters of a preferred set of manufacturing process chains are optimised by means of the Modelling and Process Chain Optimisation module [START_REF] Biermann | Computer-Aided Planning and Optimisation of Manufacturing Processes for Functional Graded Components[END_REF]. To accomplish this, predictions of empirical models based on several experiments, measurements and simulations of samples provide a comprehensive solution space (cf. [START_REF] Wagner | Efficient modeling and optimisation of the property gradation of self-reinforced polypropylene sheets within a thermo-mechanical compaction process[END_REF]). Modern empirical modelling techniques are then used as surrogates for the processes and a hybrid hierarchical multi-objective optimisation is utilised to identify the optimal setup for each process step of a manufacturing process chain. In the context of functional gradation, design and analysis of computer experiments (DACE) models have proven to show a very good prediction quality [START_REF] Sieben | Empirical Modeling of Hard Turning of AISI 6150 Steel Using Design and Analysis of Computer Experiments[END_REF], [START_REF] Wagner | Analysis of a Thermomechanically Coupled Forming Process Using Enhanced Design and Analysis of Computer Experiments[END_REF].
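The following toy sketch illustrates the surrogate-based optimisation idea: a cheap empirical model stands in for the real process, and a simple random search over the process window replaces the hybrid hierarchical multi-objective optimisation used in the framework. All parameter names, bounds and response formulas are invented for illustration and do not correspond to measured process data.

```python
import random

def surrogate_hardness(temp_c, pressure_bar):
    """Toy empirical model: predicted hardness of the graded zone (arbitrary units)."""
    return 50 + 0.30 * (temp_c - 160) - 0.002 * (temp_c - 160) ** 2 + 0.8 * pressure_bar

def surrogate_energy(temp_c, pressure_bar):
    """Toy empirical model: predicted energy demand of the process step (arbitrary units)."""
    return 0.05 * temp_c + 0.4 * pressure_bar

def optimise_step(n_samples=5000, seed=1):
    """Random search over an assumed process window for a weighted two-objective goal."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        temp = rng.uniform(150.0, 200.0)       # assumed temperature window in degrees C
        pressure = rng.uniform(10.0, 40.0)     # assumed pressure window in bar
        score = surrogate_hardness(temp, pressure) - 2.0 * surrogate_energy(temp, pressure)
        if score > best_score:
            best, best_score = (temp, pressure), score
    return best, best_score

print(optimise_step())
```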
Finally, the process chain that is best capable of producing the functionally graded component with regard to the component description is described using a dedicated specification technique. This fundamental specification is based on a process sequence and a resource diagram [START_REF] Gausemeier | Integrative Development of Product and Production System for Mechatronic Products[END_REF].
Figure 3 shows an extract of the optimised process sequence for the manufacturing of self-reinforced functional graded polypropylene composites with an example set of process step parameters for the compression moulding. Further information about the specification technique can be found in [START_REF] Gausemeier | Integrative Development of Product and Production System for Mechatronic Products[END_REF].
Fig. 3. Hierarchical process chain synthetisation as part of the Expert System module
The next section gives an overview about the underlying principles of the manufacturing process chain synthetisation within the Expert System.
HIERARCHICAL PROCESS CHAIN SYNTHETISATION
The Expert System within the planning framework synthesises several alternative manufacturing process chains for functional graded components in a hierarchical way. This synthetisation is assisted by two steps -the "Core Process Selection" and the "Process Chain Synthetisation". The component requirements provided by the Component Description module, such as the enhanced CAD model, the material or general requirements (e.g. the surface quality) constitute the product attributes for the requirements profile of the component. This radar chart profile [START_REF] Fallböhmer | Generieren alternativer Technologieketten in frühen Phasen der Produktentwicklung[END_REF] and also the component requirements represent the input information for the Expert System (cf. Figure 4).
Core Process Selection
The Core Process Selection (according to [START_REF] Ashby | Materials Selection in Mechanical Design -Das Original mit Übersetzungshilfen[END_REF]) marks the first synthetisation loop of the Expert System and results in the core process for the manufacturing of the component. Thereby the process step which fulfils the requirements of the component according to the requirements profile in the best way is selected to be the core process. This manufacturing process also establishes the root process i.e. the starting point for the hierarchic process chain synthetisation within the iteration loops of the Expert System (cf. section 3.2). At first all the manufacturing process steps available in the knowledge base are structured according to each product attribute of the requirement profile for the manufacturing of the component. For this purpose the Expert System utilises matrix tables, in which the manufacturing processes are displayed in the rows and their ability range of the current product attribute is represented in the columns. These so called selection diagrams provide the basis for the automatic selection of the core process. Figure 5 shows an example of such a matrix table for the product attribute "tolerance".
Based on these selection diagrams, all the manufacturing processes which do not match the product requirements in the defined range are removed. For the other process steps, a process profile is created in addition to the requirement profile. The process profile is presented to the user within the planning framework to explain the results of the selection process. These profiles show the fulfilment of the component requirements by the given manufacturing processes in a comprehensive way (cf. Figure 4).
The manufacturing process step with the highest fulfilment of the product attributes is selected to be the core process and the unfulfilled requirements form the main input for the Process Chain Synthetisation as new requirement profile.
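A minimal sketch of this selection logic is given below: each process step is scored against the requirement profile, and the step with the highest fulfilment becomes the core process, while its unfulfilled requirements are passed on. The requirement values and capability data are invented placeholders, and the simple smaller-is-better scoring is an assumption of this example.

```python
REQUIREMENTS = {"tolerance_mm": 0.05, "surface_roughness_um": 1.6, "hardness_gradation": True}

PROCESS_CAPABILITIES = {
    "hot compaction":     {"tolerance_mm": 0.08, "surface_roughness_um": 1.2, "hardness_gradation": True},
    "injection moulding": {"tolerance_mm": 0.05, "surface_roughness_um": 2.0, "hardness_gradation": False},
}

def fulfilment(capability, requirements=REQUIREMENTS):
    """Fraction of product attributes the process step can satisfy."""
    satisfied = 0
    for attr, required in requirements.items():
        if isinstance(required, bool):
            satisfied += capability[attr] == required
        else:  # smaller-is-better numeric attributes such as tolerance
            satisfied += capability[attr] <= required
    return satisfied / len(requirements)

def select_core_process(capabilities=PROCESS_CAPABILITIES):
    scores = {name: fulfilment(cap) for name, cap in capabilities.items()}
    core = max(scores, key=scores.get)
    unfulfilled = [a for a, req in REQUIREMENTS.items()
                   if fulfilment({a: capabilities[core][a]}, {a: req}) < 1]
    return core, scores[core], unfulfilled

# -> ('hot compaction', 0.66..., ['tolerance_mm']); the open tolerance requirement
#    becomes the input of the Process Chain Synthetisation
print(select_core_process())
```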
Process Chain Synthetisation
The Process Chain Synthetisation starts only if the Core Process Selection ends up with some unfulfilled requirements. This step of the Expert System tries to reduce the unfulfilled requirements down to a minimum by creating several alternative process chains.
To create the process chains, the Expert System restarts the Core Process Selection as a loop, in which the unfulfilled requirements of a completed iteration loop provide the input information for the next iteration. This loop continues until no further process step can be found to fulfil the open requirements of the requirements profile.
After every iteration loop, a pairwise evaluation of the newly selected manufacturing process and the already connected process steps is performed to ensure the compatibility of the synthesised process chain (see the sketch after this paragraph). If there is only one incompatible process step in the process chain, a new alternative process chain is started without this step, but with its own unfulfilled requirement profile. This new process chain is also considered during the following iterations, whereby newly selected process steps are integrated into every suitable process chain. If the Expert System has to consider two or more alternative process chains, the Process Chain Synthetisation continues until no further process step can be found to fill up any of the open unfulfilled requirement profiles of the process chains.
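Continuing the sketch above, the following fragment illustrates the iterative synthetisation loop with a pairwise compatibility check; the compatibility table and the coverage data are again invented placeholders, and the greedy selection of the step covering the most open requirements is a simplification of the actual inference-based procedure.

```python
INCOMPATIBLE = {("hot compaction", "laser hardening")}   # unordered example pairs

def compatible(chain, candidate):
    return all((a, candidate) not in INCOMPATIBLE and (candidate, a) not in INCOMPATIBLE
               for a in chain)

def synthesise_chain(core, open_requirements, step_covers):
    """step_covers maps a process step to the requirements it can fulfil."""
    chain, open_reqs = [core], set(open_requirements)
    while open_reqs:
        # pick the compatible step that covers the most open requirements
        candidates = [(len(open_reqs & covered), step)
                      for step, covered in step_covers.items()
                      if step not in chain and compatible(chain, step)
                      and open_reqs & covered]
        if not candidates:
            break                       # no further step can be found
        gain, best = max(candidates)
        chain.append(best)
        open_reqs -= step_covers[best]
    return chain, open_reqs             # remaining open requirements, if any

covers = {"precision milling": {"tolerance_mm"}, "laser hardening": {"hardness_gradation"}}
print(synthesise_chain("hot compaction", {"tolerance_mm"}, covers))
```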
The result of the hierarchical process chain synthetisation is a set of several alternative process chains which are all able to achieve the desired component requirements (cf. Figure 6).
CONCLUSIONS AND OUTLOOK
Functional graded components offer an innovative and sustainable approach for customisable smart products. Thus a comprehensive framework for the computer-aided planning and model-based optimisation of components with functional graded properties has been presented and demonstrated with an application example. Future work includes the enhancement of the knowledge base with additional manufacturing process steps, materials and interdependencies as well as the adjustment of the ontology. Furthermore the inference rules of the expert system have to be expanded to realise the synthetisation of more complex manufacturing process chains and their pairwise evaluation. The Expert System of the comprehensive planning framework is able to automatically synthesise process chains for the manufacturing of a component with functionally graded properties. However the final selection of the best process chain for the specific production objective must still be conducted manually since it is not always obvious which alternative fulfils all the requirements according to the objective in the best way. The Analytic Hierarchy Process (cf. [START_REF] Saaty | The analytic hierarchy processplanning, priority setting, resource allocation[END_REF]) may offer an effective approach to handle the highly diverse characteristics of the decision criteria while not overstraining the decision process with data acquisition and examination.
Fig. 2. Planning framework for the computer-aided planning and optimisation of manufacturing processes for functional graded components
Fig. 4. Part of the optimised process sequence for the manufacturing of self-reinforced polypropylene composites
Fig. 5. Example of a selection diagram (according to [11])
Fig. 6. Set of alternative process chains for the interior door panel given by the Expert System
ACKNOWLEDGEMENT
The work in this contribution is based upon investigations of the Collaborative Transregional Research Centre (CRC) Transregio 30, which is kindly supported by the German Research Foundation (DFG).
01485841 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485841/file/978-3-642-41329-2_8_Chapter.pdf | Lapo Chirici
email: lapochirici@gmail.com
Kesheng Wang
email: kesheng.wang@ntnu.no
A 'lean' Fuzzy Rule to Speed-up a Taylor-made Warehouse Management Process
Keywords: Logistics, Warehouse management, Putaway process, Fuzzy rules, Data Mining
Minimizing the inventory storage cost and, as a consequence, optimizing the storage capacity based on Stock Keeping Unit (SKU) features is a challenging problem in operations management. To accomplish this objective, experienced managers usually make effective decisions based on common sense and practical reasoning models. An approach based on fuzzy logic can be considered a good alternative to classical inventory control models. The purpose of this paper is to present a methodology which assigns incoming products to storage locations in storage departments/zones in order to reduce material handling cost and improve space utilization. An iterative process mining algorithm based on fuzzy logic sets and association rules is proposed, which extracts interesting patterns, in terms of fuzzy rules, from centralized process datasets stored as quantitative values.
INTRODUCTION
In this era of drastic and unforeseen change, manufacturers with a global view put strong effort into achieving lean production, outsourcing their components, and managing the complexity of their supply chains [START_REF] Blecker | RFID in Operation and Supply Chain Management -Research and Application[END_REF].
Warehouse management plays a vital role as a central actor in any kind of industry, and the put-away process is a key activity that significantly influences and challenges warehouse performance.
In this dynamic operating environment, reducing operational mistakes and providing accurate real-time inventory information to stakeholders become basic requirements to qualify for orders. Here, an OLAP-based intelligent system called the Fuzzy Storage Assignment System (FSAS) is proposed to easily manipulate the decision support data and rationalize production in terms of the storage location assignment problem (SLAP).
Under conditions of information uncertainty, fuzzy logic systems provide methodologies for carrying out approximate reasoning processes with the information available. Identifying an approach that can bring out the peculiarities of the key warehouse operations is essential to track storage priorities in terms of Stock Keeping Units (SKUs) [START_REF] Chirici | A Tailor Made RFID-fuzzy Based Model to Optimize the Warehouse Managment[END_REF]. Hence the need to develop a put-away decision tree that automates the analysis of possible hidden rules, useful to discover the most appropriate storage location assignment decisions. Examples of SKU features are their dimensions, weights, loading values and popularity. All of these are important in order to find the relationship between the SKU properties and the assigned storage location. The aim of the paper is to create an algorithm able to provide the best allocation position for SKUs in a just-in-time manner and with lean and intelligent stock rotation. This approach supports strategic decisions to optimize functionality and minimize costs in a fully automated warehouse.
THE "PUTAWAY'S DILEMMA"
Manage the SLAP
Warehouse storage decisions influence the main key performance indicators of a warehouse, such as order picking time and cost, productivity, shipping (and inventory) accuracy and storage density (Frazelle, 2002). Customers are always looking to obtain more comprehensive services and shorter response times. The storage location assignment problem (SLAP) is therefore essential: it assigns incoming products to storage locations in well-defined departments/zones in order to reduce material handling cost and improve space utilization (Gu et al. 2007).
Handling the storage location process is an activity that requires the supervision of several relevant factors. Up to now, some warehouse management systems (WMS) have been developed to acquire "simple data" from the warehouse operators and record them so that the computer can support intelligent slotting (storage location selection), in such a way as to ensure a constant quality of the available information [START_REF] Chede | Fuzzy Logic Analysis Based on Inventory Considering Demand and Stock Quantity on Hand[END_REF].
Besides that, both the lack of relevant data and the low customization capability of WMS for supporting the put-away process highlight a common problem the warehouse manager has to deal with. Put-away decisions are thus often based on human knowledge, unavoidably affected by a considerable degree of inaccuracy (and consequently long order times), which can have a negative impact on customer satisfaction [START_REF] Zou | The Applications of RFID Technology in Logistics Management and Constraints[END_REF].
Previous theories on SLAP
A warehouse is used to store inventories during all phases of the logistics process (James et al., 2001). The five key operations in a warehouse are receiving, put-away, storage, order picking as well as utilizing and shipping (Frazelle, 2002). Hausman suggested in 1976 that warehouse storage planning involves decisions on the storage policy and the specific location assignment. In general, there is a wide variety of storage policies, such as random storage, zoning, shortest/closest driveway, open location, etc. (Michael et al., 2006). As each storage strategy has its own characteristics, there are different ways to solve the storage location assignment problem (SLAP). Brynzer and Johansson (1996) addressed SLAP by proposing a strategy to pre-aggregate components and information for the picking work in storehouses, leveraging the product's structure/shape in order to reduce order picking times. Pan and Wu (2009) developed an analytical model for the pick-and-pass system [START_REF] Convery | RFID Technology for Supply Chain Optimization: Inventory Management Applications and Privacy Issues[END_REF], [START_REF] Ho | Providing decision support functionality in warehouse management using the RFID-based fuzzy association rule mining approach[END_REF]. Their approach was founded on three algorithms that optimally allocate items in the storage, analyzing a priori the cases of a single picking zone, a picking line with unequal-sized zones, and a picking line with equal-sized zones in a pick-and-pass system. A nonlinear integer programming model built on a branch-and-bound algorithm was developed to support class-based storage implementation decisions, considering the storage space, handling costs and area reduction (Muppani and Adil, 2008).
Introducing Fuzzy Logic
Fuzzy logic has already proven its worth as a tool to deal with real-life problems that are full of ambiguity, imprecision and vagueness [START_REF] Chirici | A Tailor Made RFID-fuzzy Based Model to Optimize the Warehouse Managment[END_REF]. Fuzzy logic is derived from classical Boolean logic and implements soft linguistic variables on a continuous range of truth values defined between the conventional binary extremes. It can be considered a superset of conventional set theory. Since fuzzy logic handles approximate information in a systematic way, it is ideal for controlling non-linear systems and for modeling complex systems where only an inexact model exists or where ambiguity or vagueness is common. A typical fuzzy system consists of a rule base, membership functions and an inference procedure. Fuzzy logic is a superset of conventional Boolean logic that has been extended to handle the concept of partial truth: truth values between "completely true" and "completely false". In classical set theory, a subset U of a set S can be defined as a mapping from the elements of S to the elements of the set {0, 1}, U: S -> {0, 1} [START_REF] Zadeh | Fuzzy sets[END_REF].
The mapping may be represented as a set of ordered pairs, with exactly one ordered pair present for each element of S. The first element of the ordered pair is an element of the set S, and the second element is an element of the set {0, 1}. The value zero is used to represent non-membership, and the value one is used to represent complete membership. The truth or falsity of the statement 'X is in U' is determined by finding the ordered pair whose first element is X: the statement is true if the second element of the ordered pair is 1, and false if it is 0.
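The difference between a crisp characteristic function and a fuzzy membership function can be illustrated with a few lines of code; the triangular membership and its breakpoints are assumptions chosen only for this example.

```python
def crisp_low(height_m, threshold=0.5):
    """Classical set: an item either is or is not 'low'."""
    return 1 if height_m <= threshold else 0

def fuzzy_low(height_m, peak=0.0, foot=1.0):
    """Triangular membership: full membership at `peak`, none beyond `foot`."""
    if height_m <= peak:
        return 1.0
    if height_m >= foot:
        return 0.0
    return (foot - height_m) / (foot - peak)

for h in (0.2, 0.5, 0.8):
    print(f"height={h:.1f} m  crisp={crisp_low(h)}  fuzzy={fuzzy_low(h):.2f}")
```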
FROM FUZZIFICATION TO SLAM
Online Analytical Process
In order to collect and provide quality data for business intelligence analysis, the use of a decision support system (DSS) becomes crucial to assist managers in critical problem-solving areas (Dunham, 2002). Online analytical processing (OLAP) is a DSS tool which allows data to be accessed and parsed on a flexible and timely basis. Moreover, OLAP enables analysts to explore, create and manage enterprise data in multidimensional ways (Peterson, 2000). The decision maker is therefore able to measure the business data at different, deeper levels and aggregate them depending on his specific needs. According to Dayal and Chaudhuri (1997), the typical operations performed by OLAP software can be divided into four aspects: (i) roll up, (ii) drill down, (iii) slice and dice and (iv) pivot. With the use of OLAP, the data can be viewed and processed in a real-time and efficient way. Artificial Intelligence (AI) is one of the techniques that support comprehensive knowledge representation and practical manipulation strategies (Robert, 1990). By the use of AI, the system is able to learn from past experience and handle uncertain and imprecise environments (Pham et al., 1996). According to Chen and Pham (2006), a fuzzy logic controller system comprises three main processes: fuzzification, rule-base reasoning and defuzzification. Petrovic et al. (2006) argued that fuzzy logic is capable of managing decision-making problems with the aim of optimizing more than one objective, which shows that fuzzy logic can be adopted to meet the multi-objective put-away operation in the warehouse industry. Lau et al. (2008) proposed a stochastic search technique called fuzzy logic guided genetic algorithms (FLGA) to assign items to suitable locations such that the sum of the total travelling time of the workers to complete all orders is minimized. Given the advantages of OLAP and AI techniques in supporting decision making, an intelligent put-away system, namely the Fuzzy Storage Assignment System (FSAS), is proposed for real-world warehouse operation to enhance the performance of the WMS. Two key elements are embraced: (1) Online Analytical Processing (OLAP) in the Data Capture and Analysis Module (DCAM); and (2) a fuzzy logic system in the Storage Location Assignment Module (SLAM), with the objective of achieving the optimal put-away decision, minimizing the order cycle time, material handling cost and damage of items.
Fuzzy Storage Assignment System.
The Fuzzy Storage Assignment System (FSAS) is designed to capture distributed item data (including the warehouse status) from different organizations along the supply chain. The crucial step is the conversion of data into information to support the correct put-away decision for SLAP [START_REF] Lam | Development of an OLAP Based Fuzzy Logic System for Supporting Put Away Decision[END_REF]. The tangible influence on warehouse performance is immediately recognizable: the system also allows warehouse workers to visualize a real-time report on the status of SKUs, both arriving and already stocked in the warehouse. The architecture of the FSAS is illustrated in Figure 1. Generally, the FSAS consists of two modules: (1) the Data Capture and Analysis Module (DCAM) and (2) the Storage Location Assignment Module (SLAM). These are used to achieve the research objectives through a fully automated storage recommendation system. The OLAP component provides calculation and a multidimensional structure of the data. Warehouse management bases its strategic decisions on this information about the SKUs and the warehouse when formulating the fuzzy rules for SLAP. Through holistic manipulation of quality information, the warehouse engineers are able to develop a set of specific rules or algorithms that fit their unique daily operations, warehouse configuration and operational objectives. The DCAM provides the refined parameters of the SKUs and the warehouse, which act as the input of the next module, the SLAM, for generating the automatic recommendation for SLAP.
Last but not least, the data mart is developed to store the refined parameters and the fuzzy rules (acting as a fuzzy rule repository), directly and specifically supporting the SLAP.
Storage Location Assignment Module (SLAM)
The Storage Location Assignment Module is used to decide the correct storage location for arriving SKUs, based on the analyzed information and the fuzzy rule set from the DCAM. Its major component is the fuzzy logic system, which consists of fuzzy sets, fuzzy rules and fuzzy inference. The fuzzy rule base is a set of rules that integrates elements of the selected storage strategies, the experience and knowledge of experts, and regulations. It is characterized by an IF (condition) THEN (action) structure.
The set of rules determines each item's storage location: the system matches the characteristics of the SKU and the current warehouse state (conditions) against the fuzzy rules and then derives the action (where the item should be stored). Finally, the automatic put-away solution is generated. The SLAM starts from the data mart of the preceding DCAM, which provides the parameters in a format compatible with the fuzzy system; these parameters are then the input into the fuzzy system that is specifically developed to support the SLAP. The output of the fuzzy system is interpreted as the recommended final storage location for the inbound cargo, the warehouse workers store the inbound cargo as recommended, and finally the storage information is updated in the WMS.
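A possible way to represent and evaluate such an IF-THEN rule base is sketched below. The rule contents loosely mirror the examples in Table 2, but the membership values of the sample SKU and the use of the minimum operator for AND are simplifying assumptions of this illustration, not the company's actual rule set.

```python
RULES = [
    {"if": {"item_height": "low", "item_width": "short", "item_weight": "small"},
     "then": ("storage_department_capability", "low")},
    {"if": {"popularity": "high", "turnover_rate": "high", "storage_days": "short"},
     "then": ("storage_zone_accessibility", "good")},
    {"if": {"item_value": "high", "item_height": "high", "item_weight": "high"},
     "then": ("tier_selection", "medium")},
]

def firing_strength(rule, memberships):
    """AND of all conditions, realised here as the minimum of the memberships."""
    return min(memberships[var][term] for var, term in rule["if"].items())

def evaluate(memberships):
    """Return the fired conclusions sorted by their strength."""
    fired = [(firing_strength(r, memberships), r["then"]) for r in RULES]
    return sorted(fired, reverse=True)

# Fuzzified description of one inbound SKU (degrees of membership per term)
sku = {
    "item_height": {"low": 0.8, "high": 0.1},
    "item_width": {"short": 0.7},
    "item_weight": {"small": 0.9, "high": 0.05},
    "item_value": {"high": 0.3},
    "popularity": {"high": 0.6},
    "turnover_rate": {"high": 0.5},
    "storage_days": {"short": 0.9},
}
print(evaluate(sku))
```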
The "Golden zone" partition.
There are a golden zone (the most accessible), a silver zone (of medium accessibility) and a bronze zone (the least accessible). In addition, there are three subzones inside each storage zone, A, B and C, ordered by accessibility, with subzone A having the highest accessibility.
THE REAL CASE
Problem identification
Generally, Espresso Siena & Co. is characterized by a large amount of requests to be handled in its warehouse operation. Efficient storage location assignment may minimize cost as well as the damage rate and thereby increase customer satisfaction. However, the current SLAP practice of deciding the storage department, location and suitable tier relies on the warehouse manager and is based on his knowledge. Problems may arise when the wrong storage environment is offered to the stored item (resulting in deterioration of item quality) and when the storage location process takes too long (resulting in a longer inbound shipment processing cycle). This is caused by insufficient data availability and the lack of a systematic decision support system in the decision process. According to past experience, cargo stored in a high tier of a pallet rack, or items with higher loading weight or loading height, have a higher probability of getting damaged, because of the difficulty of controlling the pallet truck well. The more expensive the cargo, the higher the loss the warehouse suffers from the damage.
To ensure that accurate and real-time data can be used, the FSAS is proposed for integrating data, extracting quality data from different data sources and assigning appropriate storage locations to the inbound items, in a way that minimizes the risk of damage and the resulting loss during the put-away and storage process.
Deployment of Online Analytical Process in DCAM.
SKU data and warehouse data are captured and transferred into the centralized data warehouse from the data source systems. Through the OLAP application it is possible to build up a multidimensional data model called a star schema. This is composed of a central fact table and a set of surrounding dimension tables, and each table has its own attributes of various data types. The users are able to view the data at different levels of detail, so the warehouse engineer can generate real-time reports for decision-making. In fact, the OLAP function allows the statistics of SKU activities for a specific period of time to be determined, covering the SKU dimensions, storage environment, warehouse information, etc. [START_REF] Laurent | Scalable Fuzzy Algorithms for Data Management and Analysis: Methods and Design[END_REF]. This gives the warehouse operator the possibility to master the critical decision support data. To ensure that the OLAP approach functions properly, the OLAP data cube needs to be built in advance on the OLAP server. The cube is developed as a star schema (Figure 2) consisting of dimensions, measures and calculated members.
Dimensions.
In the "SKU" dimension, the "SKU_ID" and "Product Type" fields are used to find the dimensions of the SKU and the other characteristics relevant for the storage department selection. In the "Invoice" dimension, the "Invoice_ID", "SKU_ID" and "Invoice_Type" fields are used to find the activity patterns of SKUs for deciding the location of the SKU inside the department.
In "Time" dimension, the "Delivery Date" and "Arrival Date" field are used to find the expected storage time for the SKU and the number of transaction during the specific period.
Measures.
"Loading Item Height", "Loading Item Width", "Unit Cost" and "Unit_Cube" etc. are all used to provide critical information for the warehouse manager, in order to realize fuzzy rule composition and perform as a fuzzy input for implication process.
Calculated Member.
The calculated members compute the mean of "Popularity", "Turnover", "Cube_Movement", "Pick_Density", etc., which are needed for fuzzy rule composition and the implication process.
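The following sketch shows how such calculated members could be derived from a toy fact table with an OLAP-style aggregation in pandas; the column names and the formulas for popularity, turnover and cube movement are assumptions made for this illustration.

```python
import pandas as pd

# Toy fact table standing in for the invoice records of the OLAP cube
facts = pd.DataFrame({
    "SKU_ID":       ["A1", "A1", "A1", "B2", "B2"],
    "Invoice_Type": ["out", "out", "in", "out", "in"],
    "Quantity":     [10, 5, 20, 2, 4],
    "Unit_Cube":    [0.02, 0.02, 0.02, 0.15, 0.15],
})

picks = facts[facts["Invoice_Type"] == "out"]

calculated = picks.groupby("SKU_ID").agg(
    popularity=("Quantity", "size"),    # number of picking transactions in the period
    turnover=("Quantity", "sum"),       # units shipped in the period
).reset_index()

# cube movement = shipped units * unit cube, one of the measures used as fuzzy input
volume = (picks["Quantity"] * picks["Unit_Cube"]).groupby(picks["SKU_ID"]).sum()
calculated["cube_movement"] = calculated["SKU_ID"].map(volume)

print(calculated)
```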
Deployment of fuzzy system in SLAM
The fuzzy rules and membership functions of the company first have to be formulated in the fuzzy system for each parameter. The parameters (Table 1) and the fuzzy rules of the different rule sets are specifically set by the warehouse manager, in order to truly reflect the operational conditions of the product families concerned. The formulation draws on the knowledge of experts, revised against past experience of daily warehouse operation; this historical revision can be supported by the OLAP reports of the preceding DCAM module. Different sets of fuzzy rules, with their particular parameters, determine the storage zone/department, storage location and tier level for the item. The fuzzy rules are stored in the knowledge database and defined as conditional statements in IF-THEN form [START_REF] Lam | Development of an OLAP Based Fuzzy Logic System for Supporting Put Away Decision[END_REF], [START_REF] Li | Mining Belief-Driven Unexpected Sequential Patterns and Implication Rules in Rare Association Rule Mining and Knowledge Discovery: Technologies for Infrequent and Critical Event Detection[END_REF]. Some examples of fuzzy rules are shown in Table 2. The warehouse manager determines the membership function of each parameter, with values ranging from 0 to 1. More than one type of membership function exists, some based on Gaussian distribution functions, others on sigmoid curves, or on quadratic and cubic polynomial curves. For this case study, since the manager's knowledge can be expressed through trapezoidal and triangular membership functions, the graphic forms of the membership functions of the example parameters are shown in Figure 3.
The MATLAB Fuzzy Logic Toolbox is used to create and execute the fuzzy inference systems. With the above fuzzy rules and the required data, the final storage location for the incoming item is automatically generated by the Fuzzy Logic Toolbox for SLAP.
In order to demonstrate the feasibility of the system, one supplier delivery is fed into the FSAS. When the market operations department enters the relevant data into the ERP, these data are extracted by the central data warehouse and then passed to the OLAP module. At the same time, the warehouse department is informed and starts to go through its slotting decision tree.
CONCLUSIONS
This research introduces the design and implementation of the FSAS, which embraces fuzzy theory to improve warehouse capacity and optimize the put-away process. The implementation of the proposed methodology for warehouse management has been demonstrated successfully through simulation. By incorporating the error measurement and the complexity of the process into the fitness evaluation, the generalized fuzzy rule sets can become less complex and more accurate. For the generation of new fuzzy rules, the membership functions are assumed to be static and known; other fuzzy learning methods should be considered to dynamically adjust the parameters of the membership functions and enhance model accuracy. A future contribution of this endeavour will be to validate the decision model so that it can be launched in case companies. As manufacturers and retailers increasingly emphasize the just-in-time inventory strategy, delivery orders will become more frequent and of smaller lot size. This creates considerable demand for effective put-away processes in warehouses, since the put-away process matches the characteristics of the stored item with those of the storage location. To achieve this standard, warehouse operators first need to master the characteristics of the incoming items and the storage locations and then make the correct match, minimizing material handling cost, product damage and order cycle time. An OLAP-based intelligent Fuzzy Storage Assignment System (FSAS) is well suited to integrating day-by-day operational knowledge from the human mind, supporting a key warehouse operation, the put-away process, and minimizing product damage and material handling cost. The FSAS enables the warehouse operators to make put-away decisions based on (i) real-time decision support data with different query dimensions and (ii) recommendations for SLAP that mimic the warehouse manager. Further research on enhancing the generation of fuzzy rules is planned to improve the accuracy of the storage location assignment. As the database for the put-away process has been well developed in the DCAM, it can also provide an overview of the past performance of the warehouse.
Fig. 1. Fuzzy Storage Assignment System algorithm
(a): Portion of a six-tier warehouse; (b): Software to design a customized warehouse in 3D
Fig. 2. The relational database structure of DCAM
Fig. 3. The MATLAB graphic function model
Table 1. The parameters taken into account to optimize the put-away decision process
RULE 1: IF Loading Item Height is Low AND Loading Item Width is Short AND Loading Item Length is Short AND Loading Item Weight is Small AND Loading Item Cube is Small THEN Capability of Storage Department is Low
RULE 2: IF Popularity is High AND Turnover Rate is High AND Cube Movement is High AND Pick Density is not High AND Expected Storage Days is Short THEN Accessibility of Storage Zone is Good
RULE 3: IF Loading Item Value is High AND Loading Item Height is High AND Loading Item Weight is High THEN Tier Selection is Medium
Table 2. Fuzzy Association Decision Rule
"1003746",
"1003708"
] | [
"366408",
"50794"
] |
01485842 | en | ["info"] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485842/file/978-3-642-41329-2_9_Chapter.pdf | Lars Wolter
email: lars.wolter@tu-berlin.de
Haygazun Hayka
Rainer Stark
email: rainer.stark@ipk.fraunhofer.de
Improving the Usability of Collaboration Methods and Technologies in Engineering
Keywords: Collaboration, collaborative engineering, PLM, heterogeneous IT landscape, intellectual property rights
INTRODUCTION
Industry sets new requirements for collaborative engineering due to technological improvements in products and product development methods, the increasing complexity of supply chains and the trend towards virtual teams. The paper discusses these requirements and the resulting fields in need of action. Each field offers different opportunities for industry: technology can provide better integration and usability, processes can become more transparent and standardized, and the human factors can receive more attention to increase the motivation of the stakeholders. To utilize all the opportunities in these fields, collaborative engineering processes, methods and tools cannot focus only on the technical goals and how to achieve them; they must also consider the human factor, including the other stakeholders and their reasons. Only then can task work be united with teamwork for successful collaboration.
The need for Collaborative Engineering
The increasing complexity of consumer products and industrial goods also increases the complexity of their development. This is caused by a rising number of parts in each individual product and especially by the combination of multiple engineering domains in a single product. Additional complexity in today's product development originates from release cycles that need to get shorter to stay competitive. To address this complexity, a company needs to involve more people in the development process, each being an expert in his domain. This also includes the engineers who are experts for specific domains, parts, functionalities or steps in the development and manufacturing process.
As with common meetings, collaboration is a task that does not produce anything by itself but needs to be done well to be successful. Therefore collaboration needs to be efficient and natural for all participants. Collaboration in product development happens on many different levels, ranging from asynchronous groupware systems and telecooperation solutions to viewing collaboration and fully featured interactive collaboration. Additionally, collaboration can be done locally or across long distances, and with or without the use of digital tools. There is also a difference between collaboration inside companies, which happens intensively, and collaboration with partners and suppliers, which is performed less intensively [START_REF] Müller | Study on Collaborative Product Development and Digital Engineering Tools[END_REF], as shown in figure 1. This increases the complexity of collaboration management, which in the past only needed to supply meeting rooms and a telephone number to enable collaboration. To manage all the collaboration scenarios in companies today, additional people need to be involved who are experts in the area of collaboration.
The focus of this paper lies on engineering scenarios within virtual engineering; it does not address collaboration during concept development, production or sales.
Fig. 1. -Distribution of Engineering Working Time [START_REF] Müller | Study on Collaborative Product Development and Digital Engineering Tools[END_REF]. Communication, coordination and negotiation happen intensively in the industry.
Problems of Collaborative Engineering
In collaboration there is a large field of problems that can be analyzed and addressed. This article focuses on collaboration supported by digital tools that an engineer uses during his product development tasks. For an engineer, the types of tasks have changed over time. Much more information is needed to fulfil his day-to-day tasks, forcing him to spend a lot of time searching for and acquiring this information, not only from IT systems but also from colleagues or partners. This also means he has to supply the information he generates during his work, either by storing it in an IT system or by communicating it directly to other people. This additional overhead is already part of the collaboration happening in a company. Even basic collaboration tasks without digital tools introduce a lot of problems: a message left on a post-it can get lost during cleaning, the block of post-its can be empty, or you try to find someone for a talk but don't know the room that person is currently in. These problems translate directly to collaboration using digital tools, for example the email getting lost in the spam folder, no free space on the network drives, or the WebEx session that cannot be initiated because of a too restrictive firewall. These kinds of problems can be addressed very effectively by rules and good organization, which are necessary to achieve robustness in using digital engineering technology [START_REF] Stark | The way forward of Virtual Product Creationhow to achieve robustness in using digital engineering technology?[END_REF]. But digital tools for collaboration introduce new kinds of problems that need to be addressed separately.
Heterogeneity.
The increasing fragmentation of companies, either through outsourcing or by integrating other companies, leads to the use of multiple IT systems dedicated to the same kind of task. This is also true for collaboration tools, whether stand-alone or integrated, because most of them can only be connected to tools of the same vendor. The telephone, for example, does not have this problem; it can be used with every other telephone from different vendors.
Tool Acceptance.
An engineer today has to use multiple digital tools to solve his engineering tasks; this includes CAx, PDM, ERP, MES, Excel, Outlook and various other tools. The number of tools increases in a collaboration situation because an extra collaboration tool needs to be used. The same goes for engineering tools that normally only the collaboration partner uses. This is a frustrating situation for the engineer, because it may be a tool he uses only irregularly, when contacting a supplier for example, and which sometimes differs from his regular tools in usage and even in methodical concepts. The engineer naturally resists using this extra tool for collaboration, resulting in reduced efficiency or less collaboration. This also happens in local collaboration scenarios, where multiple people discuss a situation with a specific tool, for example in design reviews, where a dedicated operator is necessary to operate the tool, introducing an extra layer into the interaction.
Protection of intellectual property.
The ideas behind new products as well as the methods and processes to produce them are of constant interest to all competitors in the market. Therefore this knowledge needs to be kept secret. Collaboration using digital tools is normally associated with sharing your own digital data. This results in trust problems with any kind of new or specialized tool which connects multiple stakeholders for collaborative purposes. In contrast, generally used tools like email enjoy very high trust even if they are used insecurely. The challenge is to make any collaboration tool trusted to protect the intellectual property; otherwise it cannot be used effectively.
How are these problems currently addressed
The market has a multitude of solutions to achieve collaboration in engineering. The vendors of larger software products have started to integrate collaboration features in their products. This includes features common to current social media products, like text, audio and video chat, combined with the sharing of product data. One example of those solutions is 3D-Live in Catia v6 from Dassault-Systemes. Such a system works very well in a homogeneous environment, in which all participants use the same CAD program that tightly integrates into the PDM of the same vendor to make product data sharing available for collaborative engineering. Using a different CAD system breaks all the collaboration features. This problem can be solved by deploying all applications to every user, forcing the engineer to use multiple IT systems in collaborative scenarios. Another way to address the heterogeneity problem is to use stand-alone collaboration solutions. Stand-alone solutions are a less complex extra piece of software, but also need a context switch and the conversion of the data from the engineering tool. Multiple vendors have developed stand-alone tools to allow collaboration without the need to supply the whole authoring tool set to all users.
There are two kinds of stand-alone solutions for collaboration. Screensharing solutions are the easiest to use in this category and don't need any data conversion. They work by capturing the contents of the screen or of a specific application window and transmitting them as a video stream to the other participants. These solutions are well known in web conferencing and are widely used because of their ease of use. The user just starts the screensharing solution and decides which screen to share; after the setup, he only operates his preferred tool. Products in this category are WebEx from Cisco or WebConf from Adobe. However, the concept of screensharing lacks the possibility of equal participation in the collaboration scenario, because only one participant can present his screen to all others. Even when using multiple monitors, a monitor for each participant would be needed to present all the different views.
The second type of solution is applications that use their own visualization engine to display geometric models. Most of these solutions import data from the engineering tool and share them across multiple participants. Because these tools are separate from the authoring tools, they normally only import common exchange formats. This means the collaborating engineer needs to export the product data from his application in a format suitable for the collaboration tool, and all the data need to be transmitted to all participants. This conversion and transfer of product data induces long setup times. This kind of tool can have a lot of functionality for altering the view or leaving annotations and can therefore support the collaboration very well, but it requires additional tool knowledge from the engineer. Changes that can only be done in the authoring system would need an export of the changed product data and redistribution of those changes to all participants. Examples for this kind of tools are.
The most basic way of collaboration thru sharing data is handled asynchronously. Data management solutions allow locking of complete files by different users but there is no technique available that can merge two differently changed CAD files like Microsoft Word is offering for documents. Most PDM and other data storage systems ease the asynchronous collaboration with large files by duplicating them to different sites making them rapidly available from different locations. This is still limited in speed, but is already well established among the industries.
The last kind of well-established collaboration solutions consists of web based groupware solutions. They are often based around whole communication solutions for email, task and workflow management. They allow the users, to exchange tasks, documents and other information using wikis forums or blogs. Newer systems also incorporate so called social office functions to allow commenting, rating and sharing of information across the company intranet. Examples for these kinds of systems are Microsoft Office with Sharepoint and Outlook or open source groupware solutions like Liferay, Tiki-Wiki or other portal solutions. These solutions are independent of product development but can be customized to fit specific products and companies. The type of collaboration is limited, but due to the inclusion of real time communication and web based editors for documents these solutions are not only used for cooperation but also full collaboration.
Research to increase the collaboration efficiency
Some research is done to support the engineering collaboration. One approach is to let different CAD systems communicate with each other, to allow collaborative design even with heterogeneous CAD systems. One of those approaches in [START_REF] Li | Real-Time Collaborative Design With Heterogeneous CAD Systems Based on Neutral Modeling Commands[END_REF] uses a common set of commands. Every CAD system translates its own authoring commands to this common set which is distributed to all participants and at each target platform converted to a command of that specific CAD system. Therefor it does not need to exchange any product data beforehand. This approach allows concurrent design from the beginning; it does not allow modification of existing product data. Other Research is done to allow better network communication even in firewalled scenarios across companies for communication applications [START_REF] Stark | Verteilte Design Reviews in heterogenen Systemwelten[END_REF]. This is a very fundamental type of research affecting nearly all collaboration attempts which communi-cate thru the internet. Also very fundamental is research to resolve conflicts in collaboratively authored CAD. This is addressed for example by research activities to integrate Boolean operations in CAD systems [START_REF] Zheng | Conflict resolution of Boolean operations by integration in real-time collaborative CAD systems[END_REF].
There is also research activity to enhance distributed design reviews. There are solutions that adapt to different kind of devices allowing the use of large VR-Cave-Systems together with participants only using desktop computers during the collaboration. One such approach is documented in [START_REF] Daily | Distributed Design Review in Virtual Environments[END_REF]. The setup they are using uses VRML models to visualize 3D data but can also show the screens of the participants using screensharing.
Many research projects focus on the protection of the intellectual property. Everything related to encryption is only protecting the data while it is traveling between the collaboration participants. If the aim is to not let one of the participants misuse data from other participants, any encryption is of no use. Therefor some research projects describe methods and algorithms to watermark geometric models [START_REF] Kuo | A Blind Robust Watermarking Scheme for 3D Triangular Mesh Models Using 3D Edge Vertex Detection[END_REF]. This way it can be traced back where copies originated allowing the owner to sue them. Other methods reduce the details in certain areas by using multiresolution meshes and specifying the detail priorities of certain areas [START_REF] Zyda | User-controlled creation of multiresolution meshes[END_REF].
On the other hand there is also research going on for 3D model reconstruction from image sequences. With flaws this was possible 1996 from camera images [START_REF] Beardsley | 3D model acquisition from extended image sequences[END_REF]. Better algorithms, the good quality of the rendered images and faster computers allow much better reconstruction [START_REF] Snavely | Scene Reconstruction and Visualization from Internet Photo Collections: A Survey[END_REF]. But for all reconstruction algorithms it is important that there is at least one image from every feature necessary to reconstruct the original model. If the backside or inside of an object is never shown, it cannot be reconstructed.
1.5
Areas in need of Action
Technology.
The Technology for collaboration is in a very good state, but there are technological problems when different solutions need to be connected. The rapid increase in information and the expectation of its global availability introduces a new field of information management that does not require a central distribution point but intelligent information containers that can manage the containing information and is able to route those information to systems and participants in a collaborative scenario that need them.
Besides the technical areas that are in need of action, the collaboration between engineers needs to address human factors to make the collaboration to something an engineer wants to do instead of something he needs to do. This can be seen in interaction with tools, where a user is pleased to use a tablet-pc to read its newspaper using simple gestures. This kind of technology common to consumer products needs to be adapted to solutions for the industry.
Processes.
The business processes need to address the specialty in collaborative processes. When working in a collaborative manner, each step has a meaning for the task the group does together, like the patterns in [START_REF] De Vreede | Collaboration Engineering: Designing Repeatable Processes for High-Value Collaborative Tasks[END_REF]. This can also lead to additional tasks for collaborative processes, because after a collaborative session the gathered information needs to be converged from multiple participants before it can be evaluated.
Methods.
The different methods of collaboration can be as coarse has to choose from personal meeting, phone call or use of email to very fine grained methods that define the format of the emails to send for specific tasks. Having a method for handling specific collaboration tasks is a must, to ensure correct flow of the information.
NEW SOLUTIONS FOR COLLABORATIVE ENGINEERING
To fill some of the gaps presented above the following solution are described. These solutions handle different collaboration scenarios like local collaboration on multitouch tables as well as remote collaboration over the network. The technology for touch is well established in the consumer area. The technology is also very mature in its use. To be of a real benefit in the industry not only the technology needs to be used, but also the methods must adapt to the scenarios, where multitouch environments can be used to raise the efficiency of engineering processes. Dur-ing the research at the Fraunhofer IPK methods where developed to visualize product structures on multitouch tables [START_REF] Woll | Kollaborativer Design Review am Multitouchtisch[END_REF]. The requirement is a good usability with touch devices but also be understandable und usable in a multi-person-scenario where the participants have different views onto the multitouch device. This resulted in a Voronoi [START_REF] Balzer | Voronoi treemaps for the visualization of software metrics[END_REF] based structure as seen in figure 1. This special structure was analyzed to see if it can fulfill some typical tasks in engineering like search and compare operations with this structure [START_REF] Schulze | Intuitive Interaktion mit Strukturdaten aus einem PLM-System[END_REF] Fig. 3. -Multi-User Multitouch environment for design reviews.
Using current Touch Technology for local Cooperation
In figure 3 this special application can be seen in a multi user environment. It allows multiple participants in a meeting to either work in their own workplace or to cooperate with some or all of the other participants. This allows part of the cooperating group to prepare their content while others are discussing a previous item. The example is the door of a car. In this example the expert for the door opener can discuss some details with the chassis expert while the experts for the window-lifter-system illuminate the use of an extra-ordinary expensive part to the management.
Technology for secure and instant collaboration
The here presented solution constitutes a combination of screen sharing and the local visualization at each participant. In contrast to screen sharing not the whole program window or the whole desktop is being transmitted, but only the 2D image of the rendered 3D model, which is superimposed with the images of all participants. A correct superposition is necessary so that every participant can correctly perceive the visual impression of the complete product and properly interpret the correlations and distances between the components. This collaboration technique focuses on different scenarios shown in figure 4. All participants shown in figure 4 see a 3D representation of the object being reviewed, in this case a truck. The parts in blue are locally existent as 3D-Modells and are locally rendered on the computer and the rendered image is transferred to all participants. The gray parts of the model do not exist on the local computer. They are just 2D images streamed from one of the other participants. All views share the same point of view and orientation while looking at the truck. This information is also shared among the users and consists of a simple matrix. The scenario can also incorporate special participants like the mobile lead engineer which only needs a web browser to join the session. He is not supplying any 3D model, he just consumes the images. The opposing case is the PDM System at site B which just renders it locally stored data and sends it to the others. This participant does not consume any information. All the other participants, the OEM at site east and the two suppliers deliver their own data and consume from the others. The OEM holds the 3D models of the chassis, while supplier A holds the cabin and supplier B the wheels. They only deliver their own property as images without being afraid that for example supplier B can steal the 3D model data from supplier A.
To achieve the correct superposition a so-called depth image is transmitted additionally. The depth images can be used, to decide for every pixel from which 2D image the respective pixel should be used figure 5.
To create, transmit or show it on a screen, a 2D image of a 3D model must be rendered. This process is called image synthesis, where for each pixel on the screen must be determined, which part of the 3D model it represents. The color of that point is being shown at this pixel (figure 5, bottom left). The depth image is constructed on the same principle. This is achieved by storing a distance value instead of a color value. If you interpret this distance value as a color, so is the value that represents the pixel brighter for larger distances and darker for shorter distances.
At multiple participants a color-and depth rendering is created that are all collected. These can now be used to construct a joint image. Condition is that all participants have the same point of view. This joint point of view acts as a zero point of the viewed scene and is reached by using the aforementioned matrix that is shared among the participants. This aspect corresponds to the technique of local visualization where likewise just the view matrix needs to be exchanged.
Assuming there are three participants with color renderings C1-C3 and depth renderings D1-D3. Then for each pixel it is analyzed which of the three Dx is the darkest. If it is D3 then the color value of C3 is being used. That way a new image is assembled where all models of all three participants are integrated. An interpolation allows combining images with different resolutions as long as the aspect ratio is maintained. This is of advantage when generating images on hardware with different performances. The images from different sources can still be combined. The realization of this technology was executed in a prototypic collaboration tool.
Increased acceptance through collab-oration using CAD systems
To approach the problem of acceptance, methods were evaluated to include collaboration functionality in existing CAD systems. Through the strong link with the rendering an easy connection via a plugin is difficult. In the course of work the CAD systems Spaceclaim and NX were extended via plugins in a way that they communicate their viewing location with other participants and could react to commands. Doing that the CAD systems showed limitations of different magnitude. The realization of the plugin proved more difficult for NX than for Spaceclaim. An automatic reacting e.g. to the network commands could not be realized with the NX API until the end of the project. Nonetheless the realization of the plugins proved that a collaborative coupling of two very diverse CAD systems is possible and a consequent commitment across manufacturers towards an open adaption of the API according to the CPO could create new possibilities. Similar as in [START_REF] Li | Real-Time Collaborative Design With Heterogeneous CAD Systems Based on Neutral Modeling Commands[END_REF] it was also tested to fulfill some modeling tasks using heterogeneous CAD systems. But the target was different as in the paper because the focus is still located in the design review scenario where existing CAD Models need to be examined and possibly changed. Therefor the problem of identifying the same parts in the different systems needs to be managed. The research in this area is still ongoing, to deliver a solid solution.
CONCLUSIONS
The increasing need for effective collaboration solutions is a challenge for today's companies. But system vendors and research facilities are continuing to generate better solutions. Useful cross vendor programs like the codex of PLM openness can ensure that future collaboration solutions will not suffer because of heterogeneous IT landscapes. The continuous evolvement of user interaction technologies from the consumer market into the industry gives the opportunities for user friendly and easy to use solutions. The presented solutions and ongoing research work introduce new accents for collaboration. These accents can be used by the industry to bring new concepts of collaboration into the companies to increase their collaboration abilities.
Fig. 2 .
2 Fig. 2. -Touch optimized visualization of a product structure
Fig. 4 .
4 Fig. 4. -The different collaboration scenarios condensed in a single collaboration.
Fig. 5 .
5 Fig. 5. -Description of the image merging algorithm that merges two rendered images displayed at the top into a single image in the bottom middle. Therefor it uses a depth image explained on bottom left. Bottom right describes the basic principle of rendering a 3D scene into a 2D image in conjunction to the rendering of a depth image. | 26,624 | [
"1003747",
"1003748"
] | [
"86624",
"306935",
"306935"
] |
01485931 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485931/file/978-3-642-38530-8_2_Chapter.pdf | Henrich C Pöhls
Stefan Peters
email: petersstefan@gmx.net
Kai Samelin
Joachim Posegga
Hermann De Meer
email: demeer@fim.uni-passau.de
Malleable Signatures for Resource Constrained Platforms
Malleable signatures allow the signer to control alterations to a signed document. The signer limits alterations to certain parties and to certain parts defined during signature generation. Admissible alterations do not invalidate the signature and do not involve the signer. These properties make them a versatile tool for several application domains, like e-business and health care. We implemented one secure redactable and three secure sanitizable signature schemes on secure, but computationally bounded, smart card. This allows for a secure and practically usable key management and meets legal standards of EU legislation. To gain speed we securely divided the computing tasks between the powerful host and the card; and we devise a new accumulator to yield a useable redactable scheme. The performance analysis of the four schemes shows only a small performance hit by the use of an off-the-shelf card.
Introduction
Digital signatures are technical measures to protect the integrity and authenticity of data. Classical digital schemes that can be used as electronic signatures must detect any change that occurred after the signature's generation. Digital signatures schemes that fulfill this are unforgeable, such as RSA-PSS. In some cases, controlled changes of signed data are required, e.g., if medical health records need to be sanitized before being made available to scientists. These allowed and signer-controlled modifications must not result in an invalid signature and must not involve the signer. This rules out re-signing changed data or changes applied to the original data by the signer. Miyazaki et al. called this constellation the "digital document sanitization problem" [START_REF] Miyazaki | Digital documents sanitizing problem[END_REF]. Cryptographic solutions to this problem are sanitizable signatures (SSS) [START_REF] Ateniese | Sanitizable Signatures[END_REF] or redactable signatures (RSS) [START_REF] Johnson | Homomorphic signature schemes[END_REF]. These have been shown to solve a wide range of situations from secure routing or anonymization of medical data [START_REF] Ateniese | Sanitizable Signatures[END_REF] to e-business settings [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF][START_REF] Pöhls | Sanitizable Signatures in XML Signature -Performance, Mixing Properties, and Revisiting the Property of Transparency[END_REF][START_REF] Tan | Applying sanitizable signature to web-service-enabled business processes: Going beyond integrity protection[END_REF]. For a secure and practically usable key management, we implemented four malleable signature schemes on an off-the-shelf smart card. Hence, all the algorithms that involve a parties secret key run on the smart card of that party. Smart cards are assumed secure storage and computation devices which allow to perform these actions while the secret never leaves the card's protected computing environment. However, they are computationally bounded.
Contribution
To the best of our knowledge, no work on how to implement these schemes on resource constraint platforms like smart cards exists. Additional challenges are sufficient speed and low costs. Foremost, the smart card implementation must be reasonably fast and manage all the secrets involved on a resource constraint device. Secondly, the implementation should run on off-the-shelf smart cards; cheaper cards only offer fast modular arithmetics (e.g., needed for RSA signatures). The paper's three core contribution are the:
(1) analysis and selection of suitable and secure schemes;
(2) implementation of three SSSs and one RSS scheme to measure runtimes;
(3) construction of a provably secure RSS based on our newly devised accumulator with a semi-trusted third party.
Previously only accumulators with fully-trusted setups where usably fast. This paper shows how to relax this requirement to a semi-trusted setup. Malleable signatures on smart cards allow fulfilling the legal requirement of keeping keys in a "secure signature creation device" [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF].
Overview and State of the Art of Malleable Signatures
With a classical signature scheme, Alice generates a signature σ using her private key sk sig and the SSign algorithm. Bob, as a verifier, uses Alice's public key pk sig to verify the signature on the given message m. Hence, the authenticity and integrity of m is verified. Assume Alice's message m is composed of a uniquely reversible concatenation of blocks, i.e., m = (m [START_REF] Ahn | Computing on authenticated data[END_REF], m[2], . . . , m[ ]). When Alice uses a RSS, it allows that every third party can redact a block
m[i] ∈ {0, 1} * . To redact m[i] from m means creating a m without m[i], i.e., m = (. . . , m[i -1], m[i + 1], . . .
= (. . . , m[i -1], m[i] , m[i + 1], . . . ).
In comparison to RSSs, sanitization requires a secret, denoted as sk san , to derive a new signature σ , such that (m , σ ) verifies under the given public keys.
A secure RSS or SSS must at least be unforgeable and private. Unforgeability is comparable to classic digital signature schemes allowing only controlled modifications. Hence, a positive verification of m by Bob means that all parts of m are authentic, i.e., they have not been altered in a malicious way. Privacy inhibits a third party from learning anything about the original message, e.g., from a signed redacted medical record, one cannot retrieve any additional information besides what is present in the given redacted record.
The concept behind RSSs has been introduced by Steinfeld et al. [START_REF] Steinfeld | Content extraction signatures[END_REF] and by Johnson et al. [START_REF] Johnson | Homomorphic signature schemes[END_REF]. The term SSS has been coined by Ateniese et al. [START_REF] Ateniese | Sanitizable Signatures[END_REF]. Brzuska et al. formalized the standard security properties of SSSs [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. RSSs were formalized for lists by Samelin et al. [START_REF] Samelin | Redactable signatures for independent removal of structure and content[END_REF]. We follow the nomenclatures of Brzuska et al. [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. If possible, we combine explanations of RSSs and SSSs to indicate relations. In line with existing work we assume the signed message m to be split in blocks m[i], indexed by their position. W.l.o.g., we limit the algorithmic descriptions in this paper to simple structures to increase readability. Algorithms can be adapted to work on other data-structures. We keep our notation of Sanitizer general, and also cater for multiple sanitizers, denoted as Sanitizer i [START_REF] Canard | Sanitizable signatures with several signers and sanitizers[END_REF]. Currently, there are no implementations of malleable signatures considering multi-sanitizer environments. A related concept are proxy signatures [START_REF] Mambo | Proxy signatures for delegating signing operation[END_REF]. However, they only allow generating signatures, not controlled modifications. We therefore do not discuss them anymore. For implementation details on resource constrained devices, refer to [START_REF] Okamoto | Extended proxy signatures for smart cards[END_REF].
Applications of Malleable Signatures
One reason to use malleable signatures is the unchanged root of trust: the verifier only needs to trust the signer's public key. Authorized modifications are specifically endorsed by the signer in the signature and subsequent signature verification establishes if none or only authorized changes have occurred. In the e-business setting, SSS allows to control the change and to establish trust for intermediary entities, as explained by Tan and Deng in [START_REF] Tan | Applying sanitizable signature to web-service-enabled business processes: Going beyond integrity protection[END_REF]. They consider three parties (manufacturer, distributor and dispatcher ) that carry out the production and the delivery to a forth party, the retailer. The distributor produces a malleable signature on the document and the manufacturer and dispatcher become sanitizers.Due to the SSS, the manufacturer can add the product's serial number and the dispatcher adds shipment costs. The additions can be done without involvement of the distributor. Later, the retailer is able to verify all the signed information as authentic needing only to trust the distributor. Legally binding digital signatures must detect "any subsequent change" [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF], a scheme by Brzuska et al. was devised to especially offer this public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF].
Another reason to use a malleable signature scheme is their ability to sign a large data set once, and then to only partly release this information while retaining verifiability. This privacy notion allows their application in healthcare environments as explained by Ateniese et al. [START_REF] Ateniese | Sanitizable Signatures[END_REF]. For protecting trade secrets and for data protection it is of paramount important to use a private scheme. Applications that require to hide the fact that a sanitization or redaction has taken place must use schemes that offer transparency, which is stronger than privacy [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. However, the scheme described by Tan and Deng is not private according to the state-of-the-art cryptographic strict definition [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF].
Motivation for Smart Cards
To facilitate RSSs and SSSs in practical applications, they need to achieve the same level of integrity and authenticity assurance as current standard digital signatures. This requires them to be unforgeable while being linkable to the legal entity that created the signature on the document. To become fully recognized by law, i.e., to be legally equivalent to hand-written signatures, the signature needs to be created by a "secure signature creation device" (SSCD) [START_REF] Ec | Directive 1999/93/EC from 13 December 1999 on a Community framework for electronic signatures[END_REF]. Smart cards serve as such an SSCD [START_REF] Meister | Protection profiles and generic security targets for smart cards as secure signature creation devices -existing solutions for the payment sector[END_REF]. They allow for using a secret key, while providing a high assurance that the secret key does not leave the confined environment of the smart card. Hence, smart cards help to close the gap and make malleable signatures applicable for deployment in real applications. State of the art secure RSSs and SSSs detect all modifications not endorsed by the signer as forgeries. Moreover, Brzuska et al. present a construction in [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] and show that their construction fulfills EU's legal requirements [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF].
Sanitizable and Redactable Signature Schemes
We assume the verifier trusts and possesses the Signer's public key pk sig and can reconstruct all other necessary information from the message-signature pair (m, σ) alone. Existing schemes have the following polynomial time algorithms: SSS := (KGen sig , KGen san , Sign SSS , Sanit SSS , Verify SSS , Proof SSS , Judge SSS ) RSS := (KGen sig , Sign RSS , Verify RSS , Redact RSS ) Key Generation (SSS, RSS). Generates key pairs. Only SSSs need KGen san .
(pk sig , sk sig ) ← KGen sig (1 λ ), (pk i san , sk i san ) ← KGen san (1 λ ) Signing (SSS, RSS). Requires the Signer's secret key sk sig . For Sign SSS , it additionally requires all sanitizers' public keys {pk 1 san , . . . , pk n san }. adm describes the sanitizable or redactable blocks, i.e., adm contains their indices.
(m, σ) ← Sign SSS (m, sk sig , {pk 1 san , . . . , pk n san }, adm), (m, σ) ← Sign RSS (m, sk sig )
Sanitization (SSS) and Redaction (RSS).
The algorithms modify m according to the instruction in mod, i.e., m ← mod(m). For RSSs, mod contains the indices to be redacted, while for SSSs, mod contains index/message pairs {i, m[i] } for those blocks i to be sanitized. They output a new signature σ for m . SSSs require a sanitizer's private key, while RSSs allow for public alterations.
(m , σ ) ← Sanit SSS (m, mod, σ, pk sig , sk i san ), (m , σ ) ← Redact RSS (m, mod, σ, pk sig ) Verification (SSS, RSS). The output bit d ∈ {true, false} indicates the correctness of the signature with respect to the supplied public keys.
d ← Verify SSS (m, σ, pk sig , {pk 1 san , . . . , pk n san }), d ← Verify RSS (m, σ, pk sig ) Proof (SSS).
Uses the signer's secret key sk sig , message/signature pairs and the sanitizers' public keys to output a string π ∈ {0, 1} * for the Judge SSS algorithm.
π ← Proof SSS (sk sig , m, σ, {(m i , σ i ) | i ∈ N + }, {pk 1
san , . . . , pk n san }) Judge (SSS). Using proof π and public keys it decides d ∈ {Sig, San i } indicating who created the message/signature pair (Signer or Sanitizer i ).
d ← Judge SSS (m, σ, pk sig , {pk 1 san , . . . , pk n san }, π)
Security Properties of RSSs and SSSs
We consider the following security properties as formalized in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF][START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] :
Unforgeability (SSS, RSS) assures that third parties cannot produce a signature for a "fresh" message. "Fresh" means it has been issued neither by the signer, nor by the sanitizer. This is similar to the unforgeability requirements of standard signature schemes.
Immutability (SSS, RSS) immutability prevents the sanitizer from modifying non-admissible blocks. Most RSSs do treat all blocks as redactable, but if they differentiate, immutability exists equally, named "disclosure secure" [START_REF] Samelin | Redactable signatures for independent removal of structure and content[END_REF].
Privacy (SSS, RSS) inhibits a third party from reversing alterations without knowing the original message/signature pair.
Accountability (SSS) allows to settle disputes over the signature's origin.
Trade secret protection is initially achieved by the above privacy property. Cryptographically stronger privacy notions have also been introduced:
Unlinkability (SSS, RSS) prohibits a third party from linking two messages.
All current notions of unlinkability require the use of group signatures [START_REF] Brzuska | Unlinkability of sanitizable signatures[END_REF]. Schemes for statistical notions of unlinkability only achieve the less common notion of selective unforgeability [START_REF] Ahn | Computing on authenticated data[END_REF]. We do not consider unlinkability, if needed it can be achieved using a group signature instead of a normal signature [START_REF] Canard | Implementing group signature schemes with smart cards[END_REF].
Transparency (SSS, RSS) says that it should be impossible for third parties to decide which party is accountable for a given signature-message pair.
However, stronger privacy has to be balanced against legal requirements. In particular, transparent schemes do not fulfill the EU's legal requirements for digital signatures [START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF]. To tackle this, Brzuska et al. devised a non-transparent, yet private, SSS with non-interactive public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. Their scheme does not impact on privacy and fulfills all legal requirements [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF][START_REF] Pöhls | The role of data integrity in eu digital signature legislation -achieving statutory trust for sanitizable signature schemes[END_REF].
Non-interactive public accountability (SSS, RSS) offers a public judge, i.e., without additional information from the signer and/or sanitizer any third party can identify who created the message/signature pair (Sig or San i ).
Implementation on Smart Cards
First, the selected RSSs and SSSs must be secure following the state-of-the-art definition of security, i.e, immutable, unforgeable, private and either transparent or public-accountable. Transparent schemes can be used for applications with high privacy protection, e.g., patient records. Public accountability is required for a higher legal value [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. Second, the schemes underlying cryptographic foundation must perform well on many off-the-shelf smart cards. Hence, we chose primitives based on RSA operations computing efficiently due to hardware acceleration.
The following schemes fulfill the selection criterions and have been implemented: Each participating party has its own smart card, protecting each entities' secret key. The algorithms that require knowledge of the private keys sk sig or sk i san are performed on card. Hence, at least Sign and Sanit involve the smart card. When needed, the host obtains the public keys out of band, e.g., via a PKI.
SSS Scheme BFF + 09 [5]
The scheme's core idea is to generate a digest for each admissible block using a tag-based chameleon hash [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF]. Finally, all digests are signed with a standard sig-nature scheme. At first, let S := (SKGen, SSign, SVerify) be a regular UNF-CMA secure signature scheme. Moreover, let CH := (CHKeyGen, CHash, CHAdapt) be a tag-based chameleon hashing scheme secure under random-tagging attacks. Finally, let PRF be a pseudo random function and PRG a pseudo random generator. We modified the algorithms presented in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] to eliminate the vulnerability identified by Gong et al. [START_REF] Gong | Fully-secure and practical sanitizable signatures[END_REF]. See [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] for the algorithms and the security model.
Key Generation: KGen sig on input of 1 λ generates a key pair (sk, pk) ← SKGen(1 λ ), chooses a secret κ ← {0, 1} λ and returns (sk sig , pk sig ) ← ((sk, κ), pk). KGen san generates a key pair (sk ch san , pk ch san ) ← CHKeyGen(1 λ ). Signing: Sign on input of m, sk sig , pk ch san , adm it generates nonce ← {0, 1} λ , computes x ← PRF(κ, nonce), followed by tag ← PRG(x), and chooses
r[i] $ ← {0, 1} λ for each i ∈ adm at random. For each block m[i] ∈ m let h[i] ← CHash(pk ch san , tag, (m, m[i]), r[i]) if i ∈ adm m[i]
[i] ∈ m, h[i] ← CHash(pk ch san , tag, (m, m[i]), r[i]) if i ∈ adm m[i]
otherwise and returns SVerify(pk san , (h, pk ch san , adm), σ 0 ), where h = (h[0], . . . , h[l]). Proof: Proof on input of sk sig , m, σ, pk ch san and a set of tuples {(m i , σ i )} i∈N from all previously signer generated signatures it tries to lookup a tuple (pk ch san , tag,
m[j], r[j]) such that CHash(pk ch san , tag, (m, m[j]), r[j]) = CHash(pk ch san , tag i , (m i , m i [j]), r i [j]). Set tag i ← PRG(x i ), where x i ← PRF(κ, nonce i ). Return π ← (tag i , m i , m i [j], j, pk sig , pk ch san , r[j] i , x i ).
If at any step an error occurs, ⊥ is returned.
Judge: Judge on input of m, a valid σ, pk sig , pk ch san and π obtained from Proof checks that pk sig = pk sig π and that π describes a non-trivial collision under CHash(pk san , •, •, •) for the tuple (tag, (j, m[j], pk sig ), r[j]) in σ. It verifies that tag π = PRG(x π ) and on success outputs San, else Sig.
3.2 SSS Scheme BFF + 09 [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] on Smart Card.
In this scheme, the algorithms Sign, Proof and CHAdapt from Sanit require secret information. The smart card's involvement is illustrated in Fig. 1. First, During KGen sig we generate κ as a 1024 Bit random number using the smart card's pseudo random generator and store it on card. To obtain x, illustrated as invocation of PRF(•, •), the host passes a nonce to the card, which together with κ forms the input for the PRF implementation on card. The card returns x to the host. On the host system, we let tag ← PRG(x). Second, CHAdapt used in Sanit requires a modular exponentiation using d as exponent. d is part of the 2048 Bit private RSA key obtained by CHKeyGen. The host computes only the
intermediate result i = ((H(tag, m, m[i]) • r e ) • (H(tag , m , m [i]) -1
)) mod N from the hash calculation described in [START_REF] Brzuska | Security of Sanitizable Signatures Revisited[END_REF] and sends i to the smart card. The final modular exponentiation is performed by the smart card using the RSA decrypt operation, provided by the Java Card API2 , to calculate r = i d mod N and returns r . Finally, to execute the Proof algorithm on the Signer's host requires the seed x as it serves as the proof that tag has been generated by the signer.
To obtain x, the host proceeds exactly as in the Sign algorithm, calling the PRF implementation on the card with the nonce as parameter.
SSS Schemes BFLS09 [6] and BPS12 [8]
The core idea is to create and verify two signatures: first, fixed blocks and the Sanitizer's pk san must bear a valid signature under Signer's pk sig . Second, admissible blocks must carry a valid signature under either pk sig or pk san . The scheme by Brzuska et al. [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] is a modification of the scheme proposed by Brzuska et al. [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF], that is shown to achieve message level public accountability [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] using an additional algorithm called Detect. Both, BFF + 09 and BPS12, solely build upon standard digital signatures. We implemented both; due to space restrictions and similarities, we only describe the BPS12 scheme, which achieves blockwise public accountability. Refer to [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF] and [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] for the security model. In this section, the uniquely reversible concatenation of all non-admissible blocks within m is denoted FIX m , that of all admissible blocks is denoted as adm m .
Key Generation: On input of 1 λ KGen sig generates a key pair (pk sig , sk sig ) ← SKGen(1 λ ). KGen san generates a key pair (pk san , sk san ) ← SKGen(1 λ ).
Signing 3.4 SSS Schemes BFLS09 [START_REF] Brzuska | Sanitizable signatures: How to partially delegate control for authenticated data[END_REF] and BPS12 [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF] on Smart Card.
We implemented Sign and Sanit with involvement of the smart card. Fig. 2 illustrates the interactions. The algorithms are executed on the host system as Host SC hσ FIX = H(0, mfix, adm, pk san ) σFIX SSign (sksig, hσ FIX )
hσ [i] = H(1, i, m[i],
pk san , pk sig , tag, ⊥)
σ[i] SSign(sksig, hσ [i])
For each m[i] ∈ m:
Sign Sign
Host SC
h σ FULL = H(1, i, m [i],
pk san , pk sig , tag, tag )
σ [i] SSign (sksan, h σ FULL )
For each m[i] ∈ mod:
Sanit Sanit
RSS Scheme PSPdM12 [24]
The scheme's core idea is to hash each block and accumulate all digests with a cryptographic accumulator. This accumulator value is signed with a standard signature scheme. Each time a block is accumulated, a witness that it is part of the accumulated value is generated. Hence, the signed accumulator value is used to provide assurance that a block was signed given the verifier knows the block and the witness. A redaction removes the block and its witness. They further extended the RSS's algorithms with Link RSS , Merge RSS . We omit them, as they need no involvement of the smart card because they require no secrets. Refer to [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] for details on the security model.
Building block: Accumulator. For more details than the algorithmic description, refer to [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF][START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF][START_REF] Lipmaa | Secure accumulators from euclidean rings without trusted setup[END_REF][START_REF] Sander | Efficient accumulators without trapdoor extended abstracts[END_REF]. We require the correctness properties to hold [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF].
ACC consists of five PPT algorithms ACC := (Setup, Gen, Dig, Proof, Verf):
Setup. Setup on input of the security parameter λ returns the parameters parm, i.e., parm ← Setup(1 λ )
Gen. Gen, on input of the security parameter λ and parm outputs pk i.e., pk ← Gen(1 λ , parm).
Dig. Dig, on input of the set S, the public parameter pk outputs an accumulator value a and some auxiliary information aux, i.e, (a, aux) ← Dig(pk, S)
Proof. Proof, on input of the public parameter pk, a value y ∈ Y pk and aux returns a witness p from a witness space P pk , and ⊥ otherwise, i.e., p ← Proof(pk, aux, y, S)
Verf. On input of the public parameters parm, public key pk, an accumulator a ∈ X pk , a witness p, and a value y ∈ Y pk Verf outputs a bit d ∈ {0, 1} indicating whether p is a valid proof that y has been accumulated into a, i.e., d ← Verf(pk, a, y, p). Note, X pk denotes the output and Y pk the input domain based on pk; and parm is always correctly recoverable from pk.
Our Trade-off between Trust and Performance. Pöhls et al. [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] require ACC to be collision-resistant without trusted setup. Foremost, they require the ACC's setup to hide certain values used for the parameter generation from untrusted parties, as knowledge allows efficient computation of collisions and thus forgeries of signatures. All known collision-resistant accumulators based on number theoretic assumptions either require a trusted third party (TTP), named the accumulator manager [START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF][START_REF] Li | Universal accumulators with efficient nonmembership proofs[END_REF], or they are very inefficient. As said, the TTP used for setup of the ACC must be trusted not to generate collisions to forge signatures. However, existing schemes without TTP are not efficiently implementable, e.g., the scheme introduced by Sander requires a modulus size of 40, 000 Bit [START_REF] Sander | Efficient accumulators without trapdoor extended abstracts[END_REF].
Our trade-off still requires a TTP for the setup, but inhibits the TTP from forging signatures generated by signers. In brief, we assume that the TTP which signs a participant's public key also runs the ACC setup. The TTP already has as a secret the standard RSA modulus n = pq, p, q ∈ P. If we re-use n as the RSAaccumulator's modulus [START_REF] Benaloh | One-way accumulators: A decentralized alternative to digital signatures[END_REF], the TTP could add new elements without detection. However, if we add "blinding primes" during signing, neither the TTP nor the signer can find collisions, as long as the TTP and the signer do not collude. We call this semi-trusted setup. Note, as we avoid algorithms for jointly computing a modulus of unknown factorization, we do not require any protocol runs. Thus, keys can be generated off-line. The security proof is in the appendix.
On this basis we build a practically usable undeniable RSS, as introduced in [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF].
It is based on a standard signature scheme S := (SKGen, SSign, SVerify) and our accumulator with semi-trusted setup ACC := (Setup, Gen, Dig, Proof, Verf).
Key Generation:
The algorithm KeyGen generates (sk S , pk S ) ← SKGen(1 λ ). It lets parm ← Setup(1 λ ) and pk ACC ← Gen(1 λ , parm). The algorithm returns ((pk S , parm, pk ACC ), (sk S )).
Signing: Sign on input of sk S , pk ACC and a set S, it computes (a, aux) ← Dig(pk ACC , (S)). It generates P = {(y i , p i ) | p i ← Proof(pk ACC , aux, y i , S) | y i ∈ S}, and the signature σ a ← SSign(sk S , a). The tuple (S, σ s ) is returned, where σ s = (pk S , σ a , {(y i , p i ) | y i ∈ S}).
Verification: Verify on input of a signature σ = (pk S , σ a , {(y i , p i ) | y i ∈ S}), parm and a set S first verifies that σ a verifies under pk S using SVerify. For each element y i ∈ S it tries to verify that Verf(pk ACC , a, y i , p i ) = true. In case Verf returns false at least once, Verify returns false and true otherwise.
Redaction: Redact on input of a set S, a subset R ⊆ S, an accumulated value a, pk S and a signature σ s generated with Sign first checks that σ s is valid using Verify. If not ⊥ is returned. Else it returns a tuple (S , σ s ), where σ s = (pk S , σ a , {(y i , p i ) | y i ∈ S }) and S = S \ R.
3.6 RSS Scheme PSPdM12 [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] on Smart Card. This scheme involves the smart card for the algorithms Setup and Sign, illustrated in Fig. 3. We use the smart card to obtain the blinding primes of the modulus described in Sect. 3.5, needed by Setup. To compute these primes on card, we generate standard RSA parameters (N, e, d) with N being of 2048 Bit length, but store only N on card and discard the exponents. On the host system this modulus is multiplied with that obtained from the TTP to form the modulus used by ACC. Additionally, the smart card performs SSign to generate σ a .
Performance and Lessons Learned
We implemented in Java Card [START_REF] Chen | Java Card Technology for Smart Cards: Architecture and Programmer's Guide[END_REF] 2.2.1 on the "SmartC@fé R Expert 4.x" from Giesecke and Devrient [START_REF] Giesecke | SmartC@fé R Expert 4[END_REF]. The host system was an Intel i3-2350 Dual Core 2.30 GHz with 4 GiB of RAM. For the measurements in Tab. 1, we used messages with 10, 25 and 50 blocks of equal length, fixed to 1 Byte. The block size has little impact as inputs are hashed. However, the number of blocks impacts performance in some schemes. 3.12 5 7.16 5 13.24 5 2.60 5 6.65 5 12.74 5 0.016 0.039 0.084 0.043 0.051 0.060 0.001 0.001 0.002 [START_REF] Pöhls | Transparent mergeable redactable signatures with signer commitment and applications[END_REF] 11.16 5 59.97 5 221.97 5 1.42 3.17 6.32 1.32 3.12 6.12 -4 -4 -4 -4 -4 -4 4 Algorithm not defined by scheme 5 Involves smart card operations Table 1. Performance of SSS prototypes; median runtime in seconds and Redact operations modify all sanitizable blocks. The BFLS12 scheme allows multiple sanitizers and was measured with 10 sanitizers. Verify and Judge always get sanitized or redacted messages. The results for the BFLS12 scheme include the verification against all possible public keys (worst-case). We measured the complete execution of the algorithms, including those steps performed on the host system. We omit the time KeyGen takes for 2048 bit long key pairs, as keys are usually generated in advance.
We carefully limited the involvement of the smart card, hence we expect the performance impact to be comparable to the use of cards in regular signature schemes. For the RSS we have devised and proven a new collision-resistant accumulator. If one wants to compare, BPS12 states around 0.506s for signing 10 blocks with 4096 bit keys [START_REF] Brzuska | Non-interactive public accountability for sanitizable signatures[END_REF]. We only make use of the functions exposed by the API. Hence, our implementations are portable to other smart cards, given they provide a cryptographic co-processor that supports RSA algorithms. We would have liked direct access to the cryptographic co-processor, as raised in [START_REF] Tews | Performance issues of selective disclosure and blinded issuing protocols on java card[END_REF], instead of using the exposed ALG RSA NOPAD as a workaround.
Experiment Semi -Trusted -Collision -ResistancePK ACC A (λ) parm $ ← Setup(1 λ ) (pk * , p * , m * , a * ) ← A ODig(•,•) (1 λ , parm)
where oracle ODig, on input of Si, pk i returns: (ai, auxi) ← Dig(pk i , Si) (answers/queries indexed by i,
1 ≤ i ≤ k) Pi = {(sj, pi) | pi ← Proof(pk i , auxi, sj, Si), sj ∈ Si} return (ai, Pi) return 1, if: Verf(pk * , a * , m * , p * ) = 1 and ∃i, 1 ≤ i ≤ k : ai = a * and m * / ∈ Si Fig. 4. Collision-Resistance with Semi-Trusted Setup Part I Experiment Semi -Trusted -Collision -ResistancePARM ACC A (λ) (parm * , s * ) ← A(1 λ ) (pk * , p * , m * , a * ) ← A ODig(•,•),GetPk() (1 λ , s * )
where oracle ODig, on input of pk i , Si: (ai, auxi) ← Dig(pk i , Si) (answers/queries indexed by i, We say that an accumulator ACC with semi-trusted setup is collision-resistant for the public key generator, iff for every PPT adversary A, the probability that the game depicted in Fig.
1 ≤ i ≤ k) Pi = {(sj, pi) | pi ← Proof(pk i , auxi, sj, Si), sj ∈ Si} return (ai, Pi) where oracle GetPk returns: pk j ← Gen(1 λ , parm * ) (answers/queries indexed by j, 1 ≤ j ≤ k ) return 1, if: Verf(pk * , a * , m * , p * ) = 1 and ∃i, 1 ≤ i ≤ k : ai = a * , m * / ∈ Si and ∃j, 1 ≤ j ≤ k : pk * = pk j
A returns 1, is negligible (as a function of λ).
The basic idea is to let the adversary generate public key pk. The other part is generated by the challenger. Afterwards, the adversary has to find a collision.
Definition 2 (Collision-Resistance with Semi-Trusted Setup (Part II)).
We say that an accumulator ACC with semi-trusted setup is collision-resistant for the parameter generator, iff for every PPT adversary A, the probability that the game depicted in The basic idea is to either let the adversary generate the public parameters parm, but not any public keys; they are required to be generated honestly. Afterwards, the adversary has to find a collision.
Setup. The algorithm Setup generates two safe primes p 1 and q 1 with bit length λ. It returns n 1 = p 1 q 1 .
Gen. On input of the parameters parm, containing a modulus n 1 = p 1 q 1 of unknown factorization and a security parameter λ, the algorithm outputs a multi-prime RSA-modulus N = n 1 n 2 , where n 2 = p 2 q 2 , where p 2 , q 2 ∈ P are random safe primes with bit length λ.
Verf. On input of the parameters parm = n 1 , containing a modulus N = p 1 q 1 p 2 q 2 = n 1 n 2 of unknown factorization, a security parameter λ, an element y i , an accumulator a, and a corresponding proof p i , it checks, whether p yi i (mod N ) = a and if n 1 | N and n 2 = N n1 / ∈ P. If either checks fails, it returns 0, and 1 otherwise Other algorithms: The other algorithms work exactly like the standard collision-free RSA-accumulator, i.e., [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF].
Theorem 1 (The Accumulator is Collisions-Resistant with Semi-Trusted Setup.). If either the parameters parm or the public key pk has been generated honestly, the sketched construction is collision-resistant with semi-trusted setup.
Proof. Based on the proofs given in [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF], we have to show that an adversary able to find collisions is able to find the e th root of a modulus of unknown factorization. Following the definition given in As parm is public knowledge, every party can compute n 2 = N n1 . For this proof, we assume that the strong RSA-assumption [START_REF] Barić | Collision-free accumulators and fail-stop signature schemes without trees[END_REF] holds in (Z/n 1 Z) and (Z/n 2 Z). Moreover, we require that gcd(n 1 , n 2 ) = 1 holds. As (Z/N Z) ∼ = (Z/n 1 Z) × (Z/n 2 Z) we have a group isomorphism ϕ 1 . Furthermore, as the third party knows the factorization of n 1 , we have another group isomorphism ϕ 2 . It follows: (Z/N Z) ∼ = (Z/p 1 Z) ×(Z/q 1 Z) ×(Z/n 2 Z). Assuming that A can calculate the e th root in (Z/N Z), it implies that it can calculate the e th root in (Z/n 2 Z), as calculating the e th root in (Z/pZ), with p ∈ P is trivial. It follows that A breaks the strong RSA-assumption in (Z/n 2 Z). Building a simulation and an extractor is straight forward. II) Malicious Signer. Similar to I). III) Outsider. Outsiders have less knowledge, hence a combination of I) and II).
Obviously, if the factorization of n 1 and n 2 is known, one can simply compute the e-th root in (Z/N Z). However, we assumed that signer and TTP do not collude. All other parties can collude, as the factorization of n 2 remains secret with overwhelming probability.
otherwise and computes σ 0 ← SSign(sk sig , (h, pk ch san , adm)), where h = (h[0], . . . , h[l]). It returns σ = (σ 0 , tag, nonce, r[0], . . . , r[k]), where k = |adm|. Sanitizing: Sanit on input of a message m, information mod, a signature σ = (σ 0 , tag, nonce, adm, r[0], . . . , r[k]), pk sig and sk ch san checks that mod is admissible and that σ 0 is a valid signature for (h, pk san , adm). On error, return ⊥. It sets m ← mod(m), chooses values nonce $ ← {0, 1} λ and tag $ ← {0, 1} 2λ and replaces each r[j] in the signature by r [j] ← CHAdapt(sk ch san , tag, (m, m[j]), r[j], tag , (m , m [j])). It assembles σ = (σ 0 , tag , nonce , adm, r [0], . . . , r [k]), where k = |adm|, and returns (m , σ ). Verification: Verify on input of a message m, a signature σ = (σ 0 , tag, nonce, adm, r[0], . . . , r[k]), pk sig and pk ch san lets, for each block m
Fig. 1 .
1 Fig. 1. BFF + 09: Data flow for algorithms Sign, CHAdapt and Proof
Fig. 2 .
2 Fig. 2. BFLS09: Data flow between smart card and host for Sign and Sanit
Fig. 3 .
3 Fig. 3. PSPdM12: Data flow between smart card and host for Sign and Setup
Fig. 5 .
5 Fig. 5. Collision-Resistance with Semi-Trusted Setup Part II
Fig. A returns 1 ,
1 is negligible (as a function of λ).
Fig. A and Fig. A, we have three cases: I) Malicious Semi-Trusted Third Party.
). Redacting further requires that the third-party is also able to compute a new valid signature σ for m that verifies under Alice's public key pk sig . Contrary, in an SSS, Alice decides for each block m[i] whether sanitization by a designated third party, denoted Sanitizer, is admissible or not. Sanitization means that Sanitizer i can replace each admissible block m[i] with an arbitrary string m[i] ∈ {0, 1} * and hereby creates a modified message m
modified to eliminate the vulnerability identified by Gong et al.[START_REF] Gong | Fully-secure and practical sanitizable signatures[END_REF].
RSA implementation must not apply any padding operations to its input. Otherwise, i is not intact anymore. We use Java Card's ALG RSA NOPAD to achieve this.
Is funded by BMBF (FKZ:13N10966) and ANR as part of the ReSCUeIT project. The research leading to these results was supported by "Regionale Wettbewerbsfähigkeit und Beschäftigung", Bayern, 2007-2013 (EFRE) as part of the SECBIT project (http://www.secbit.de) and the European Community's Seventh Framework Programme through the EINS Network of Excellence (grant agreement no. [288021]). | 40,325 | [
"1001640",
"1003786",
"1003787",
"1003788",
"979976"
] | [
"98761",
"98761",
"98761",
"98761",
"98761"
] |
01485933 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485933/file/978-3-642-38530-8_4_Chapter.pdf | Daniel Schreckling
Stephan Huber
Focke Höhne
Joachim Posegga
URANOS: User-Guided Rewriting for Plugin-Enabled ANdroid ApplicatiOn Security
URANOS is an Android application which uses syntactical static analysis to determine in which component of an Android application a permission is required. This work describes how the detection and analysis of widely distributed and security critical adware plugins is achieved. We show, how users can trigger bytecode rewriting to (de)activate selected or redundant permissions in Android applications without sacrificing functionality. The paper also discusses performance, security, and legal implications of the presented approach.
Introduction
Many Smartphone operating systems associate shared resources with permissions. API calls accessing such resources require permissions to gain the necessary privileges. Once an application obtains these privileges, it can generally access all items stored in the respective resource. Additionally, such privileges often remain valid until the application is deinstalled or updated. These properties conflict with the emerging privacy needs of users: growing sensitivity calls for protecting the data that applications, vendors, or providers could use to generate individual user profiles. Unfortunately, current coarse-grained permission systems provide only limited control over, and information about, an application. Hence, an informed consent to the use of permissions is far from being achievable.
In Android, numerous analyses of the permissions requested by applications [START_REF] Chan | Droidchecker: analyzing android applications for capability leak[END_REF][START_REF] Gibler | AndroidLeaks: automatically detecting potential privacy leaks in android applications on a large scale[END_REF][START_REF] Hornyack | These aren't the droids you're looking for: retrofitting android to protect data from imperious applications[END_REF][START_REF] Stevens | Investigating user privacy in android ad libraries[END_REF][START_REF] Zhou | Dissecting android malware: Characterization and evolution[END_REF] substantiate this problem. Permissions increase the attack surface of an application [START_REF] Chin | Analyzing inter-application communication in android[END_REF][START_REF] Bugiel | XManDroid: A New Android Evolution to Mitigate Privilege Escalation Attacks[END_REF][START_REF] Grace | Systematic Detection of Capability Leaks in Stock Android Smartphones[END_REF] and of the platform executing it. Thus, excessively granted permissions enable new exploit techniques. Static analysis and runtime monitoring frameworks have been developed to detect permission-based platform and application vulnerabilities. There are also Android core extensions enabling the deactivation of selected permissions. However, such frameworks either interfere with the usability of the application and render it unusable, or they only provide permission analysis on separate hosts.
Thus, there is a strong need for flexible security solutions which do not aim at generality and precision but couple lightweight analysis and permission modification mechanisms.
We define URANOS, an application rewriting framework for Android which enables the selective deactivation of permissions for specific application contexts, e.g. plugins. The contributions of this paper include an on-device static analysis to detect permissions and their usage, selective on-device rewriting to guarantee user-specific permission settings, and a prototype implementing detection and rewriting in common Android applications.
Our contribution is structured as follows: Section 2 gives a high-level overview of URANOS, Section 3 provides the necessary background, and the components of URANOS are explained in Section 4. Section 5 discusses performance, limitations, and legal implications. Finally, Section 6 lists related work before Section 7 summarizes our conclusions.

We strive for an efficient on-device framework (see Figure 1) for Android which allows users to selectively disable permissions assigned to an application. To preserve functionality, a static analysis infers the permissions required during execution from the bytecode. For efficiency, we exploit existing knowledge about the permission requirements of Android API calls, resource access, intent broadcasting, etc. The detected permissions are compared with the permissions requested in the application manifest to identify excessive permissions. Additionally, we scan the bytecode for plugins using a pre-generated database of API methods and classes used in popular adware. These plugins define a context for each bytecode instruction. This allows us to infer the permissions exclusively required for plugins or for the application hosting the plugins. We communicate this information to the user. Depending on his needs, the user can enable or disable permissions for specific application contexts.
Disabled and excessive permissions can be completely removed from the manifest. However, removing an effectively required permission will trigger a security exception during runtime. If these exceptions are unhandled the application will terminate. Therefore, URANOS additionally adapts the application bytecode and replaces the API calls in the respective call context by feasible wrappers.
This combination of analysis and rewriting allows a user to generate operational applications compliant with his security needs. Unfortunately, compliant but rewritten Android applications are neither directly installed nor updated by Android, since repackaging them invalidates the original developer signature. Therefore, URANOS also delivers an application manager service which replaces applications with their rewritten counterparts and ensures their updates.
Background
This section gives a short overview of the structure of Android applications, their execution environment, and the permission system in Android.
Android Applications
Applications (Apps) are delivered in zipped packages (apk files). They contain multimedia content for the user interface, configuration files such as the manifest, and the bytecode which is stored in a Dalvik executable (dex file). Based on the underlying Linux, Android allots user and group IDs to each application.
Four basic types of components can be used to build an App: activities, services, content providers, and broadcast receivers. Activities constitute the user interface of an application. Multiple activities can be defined, but only one activity can be active at a time. Services are used to perform time-consuming or background tasks. Specific API functions trigger remote procedure calls and can be used to interact with services. Applications can define content providers to share their structured data with other Apps. To retrieve this data, so-called ContentResolvers must be used. They use URIs to access a provider and query it for specific data. Finally, broadcast receivers enable applications to exchange intents. An intent expresses the intention to perform an action within or on a component. Such actions include the display of a picture or the opening of a specific web page.
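For illustration (a minimal sketch, not code from the paper; the helper method is hypothetical), an implicit intent asking the system to open a web page looks as follows:

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Sends an implicit ACTION_VIEW intent; the system resolves a matching activity.
static void openWebPage(Activity activity, String url) {
    Intent view = new Intent(Intent.ACTION_VIEW, Uri.parse(url));
    activity.startActivity(view);
}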
Developers usually use these components defined in the Android API and the SDK to build, compile, and pack Apps. The resulting apk is signed with the developer's private key and distributed via the official Android market, other markets, or delivered directly to a Smartphone.
Dalvik Virtual Machine
Bytecode is stored in Dalvik executables (dex files) and is executed in a register based virtual machine (VM) called Dalvik. Each VM runs in its own application process. The system process Zygote starts at boot time and manages the spawning of new VMs. It also preloads and preinitializes the core library classes.
Dex files are optimized for size and data sharing among classes. In contrast to standard Java archives, the dex does not store several class files. All classes are compiled into one single dex file. This reduces load time and supports efficient code management. Type-specific mappings are defined over all classes and map constant properties of the code to static identifiers, such as constant values, class, field, and method names. The bytecode can also contain developer code not available on the platform, e.g. third-party code, such as plugins (see Figure 2).
Bytecode instructions use numbered register sets for their computations. For method calls, registers passed as arguments are simply copied into new registers only valid during method execution.
Android Permissions
Android permissions control application access to resources. Depending on their potential impact, Android distinguishes four levels: normal, dangerous, signature and signatureOrSystem. Unlike normal permissions, which do not directly cause financial harm to users, dangerous and system permissions control access to critical resources and may enable access to private data. Granted signature or signatureOrSystem permissions grant access to essential system services and data. During installation, permissions are assigned as requested in the manifest. The user only approves dangerous permissions. Normal permissions are granted without notification, and signature or signatureOrSystem permissions verify that the application requesting them has been signed with the key of the device manufacturer.
Resource access can be obtained through API calls, the processing of intents, and through access to content providers and other system resources such as an SD card. Thus, permission enforcement varies with the type of resource accessed. In general, permission assignment and enforcement can be described using a label model as depicted in Figure 2. Each system resource or system service is labeled with the set of permissions it requires to be accessed. An application uses the public API to trigger resource access. This request is forwarded to the system. The system libraries, the binder, and the implementation of the libraries finally execute the resource access. We abstract from the details of the binder-library pair and call this entity the central permission monitor. It checks whether an application trying to access a resource with label L_x has been assigned this label. If not, access is forbidden and an appropriate security exception is thrown.
Android also places permission checks in the API and RPC calls [START_REF] Felt | Android permissions demystified[END_REF]. Thus, security exceptions may already occur although the access requests have not yet reached the permission monitor. As such checks may be circumvented by reflection, the actual enforcement happens in the system.
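To illustrate such an API-level guard (a hedged sketch; the actual framework code differs), a component can verify that its caller, or the application itself, holds a permission before forwarding a request:

import android.content.Context;
import android.content.pm.PackageManager;

// Throws if the given permission has not been granted; mirrors the checks
// Android places in front of the central permission monitor.
static void requirePermission(Context ctx, String permission) {
    if (ctx.checkCallingOrSelfPermission(permission)
            != PackageManager.PERMISSION_GRANTED) {
        throw new SecurityException("missing permission: " + permission);
    }
}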
The URANOS Framework
This section explains our system in more detail. To ease the understanding we complement our description with Figure 3.
Application Processing
To process the manifest and bytecode of an application, URANOS must obtain access to its apk. Depending on how the developer decides to publish an apk, it is stored in different file system locations: the regular application storage, the application storage on an SD card, or storage which prevents the forwarding (forward-lock) of the application. The PackageManager API offered by Android can be used to retrieve the path and filename of the apks.
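A sketch of this lookup (the helper method and package name are illustrative):

import android.content.pm.ApplicationInfo;
import android.content.pm.PackageManager;

// Resolves the file system path of an installed application's apk.
static String apkPath(PackageManager pm, String packageName)
        throws PackageManager.NameNotFoundException {
    ApplicationInfo info = pm.getApplicationInfo(packageName, 0);
    return info.sourceDir;   // e.g. /data/app/com.example.app-1.apk
}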
Regular applications are able to obtain reading access to other apks. Thus, as a regular application, URANOS can copy apks to a local folder and process them. With root permissions, it can also process forward-locked applications.
Apks are extracted to obtain access to the manifest and the dex file. We enhanced the dex-tools from the Android source tree. It directly operates on the bytecode and can extract information required for our analysis. Thus, we avoid intermediate representations. Handles to manifest and bytecode are forwarded [START_REF] Backes | App-Guard -Real-time policy enforcement for third-party applications[END_REF] to the static analysis and rewriting components of our framework.
Permission Detection
Next, we parse the manifest and retrieve the set P_apk of permissions requested by the App. Afterwards, we scan the bytecode to find all invoke instructions and determine the correct signature of the methods invoked. Invoke instructions use identifiers pointing to entries of a management table which contains complete method signatures. From this table we derive the set I of methods potentially invoked during execution. As this is a syntactical process, the set I may contain methods which are never called.
We then use the function π to compute P_M = ⋃_{m ∈ I} π(m), i.e. π maps a method m to the set of permissions required to invoke m at runtime. Thus, P_M reflects the permissions required by the application to execute all methods in I. The function π is based on the results of Felt et al. [START_REF] Felt | Android permissions demystified[END_REF] which associate actions in an Android App, e.g. method calls, with the required permissions.
The use of content providers or intents may also require permissions. However, specific targets of both can be specified using ordinary strings. To keep our analysis process simple, we search the dex for strings which match the pattern of a content provider URI or of an activity class name defined in the Android API. If a pattern is matched, we add the respective permission to the set P_P of provider permissions or to the set P_I of intent permissions, respectively.
At the end of this process we intersect the permissions specified in the manifest with the permissions extracted from the bytecode, i.e. P_val = P_apk ∩ (P_M ∪ P_P ∪ P_I), to obtain the validated permissions likely to be required for the execution of the application. Our heuristics induce an over-approximation in this set. Section 5 explains why it does not influence the security of our approach.
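The following sketch outlines how π and the permission sets can be represented in code (the method-to-permission map and its single entry are illustrative assumptions modeled on published permission maps, not the actual URANOS data):

import java.util.*;

public class PermissionDetection {
    // pi: maps a method signature to the permissions needed to invoke it.
    static final Map<String, Set<String>> PI = new HashMap<>();
    static {
        PI.put("Landroid/telephony/TelephonyManager;->getDeviceId()Ljava/lang/String;",
               Collections.singleton("android.permission.READ_PHONE_STATE"));
    }

    // P_M: union of pi(m) over all potentially invoked methods in I.
    static Set<String> permissionsForMethods(Set<String> invoked) {
        Set<String> pM = new HashSet<>();
        for (String m : invoked) {
            Set<String> p = PI.get(m);
            if (p != null) pM.addAll(p);
        }
        return pM;
    }

    // P_val = P_apk ∩ (P_M ∪ P_P ∪ P_I)
    static Set<String> validatedPermissions(Set<String> pApk, Set<String> pM,
                                            Set<String> pP, Set<String> pI) {
        Set<String> union = new HashSet<>(pM);
        union.addAll(pP);
        union.addAll(pI);
        union.retainAll(pApk);
        return union;
    }
}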
Context Detection
Based on P_val, we now determine the App components in which the methods requiring these permissions are called. For this purpose, we define the execution context of an instruction. It is the signature of the method and of the class in which the instruction is executed. This definition is generic and can be applied to various detection problems. We focus on widely distributed plugins for Android.
To give users a better understanding of the possible impact of the plugins hosted by the analyzed Apps, we manually assign each plugin to one of the following four categories: passive advertising, active advertising, audio advertising, and feature extensions.
Passive advertising plugins display advertisements as soon as an activity of the hosting application is active. They are usually integrated into the user interface with banners as placeholders. Active advertising plugins are similar to pop-up windows and do not require a hosting application. They use stand-alone activities or services, intercept intents, or customize events to become active. Audio advertising is a rather new plugin category which intercepts control sequences and interferes with the user by playing audio commercials or similar audio content, e.g. while hearing the call signal on the phone. Feature extensions provide additional functionality that a user or developer may utilize. Among many others, they include in-app billing or developer plugins easing the debugging process.
To detect plugins in an application, we perform the same steps used when building the plugin signatures. We scan the application manifest and bytecode for the names listed above and investigate which libraries have to be loaded at runtime. From this information we build a signature and try to match it against our plugin database. This process also uses fuzzy patterns to match the strings inferred from the application. We assume that plugins follow common naming conventions: full class names should start with the top-level Internet domain name, continue with the appropriate company name, and end with the class name. If we do not find matches on full class names, we search for longest common prefixes. If they contain at least two subdomains, we continue searching for the other names to refine the plugin match. In this way we can account for smaller or intentional changes in class or package naming and prevent a considerable decline of the detection rate.
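A sketch of this fuzzy matching (the plugin package names, class, and threshold encoding are illustrative; the real database is pre-generated):

import java.util.Arrays;
import java.util.List;

public class PluginMatcher {
    // Assumed excerpt of the plugin database.
    static final List<String> PLUGIN_PACKAGES =
            Arrays.asList("com.admob.android", "com.mobclix.android");

    // Returns the matching plugin package, or null if the class belongs to the App.
    static String matchPlugin(String fullClassName) {
        for (String pkg : PLUGIN_PACKAGES) {
            if (fullClassName.startsWith(pkg + ".")) {
                return pkg;                        // exact namespace match
            }
        }
        for (String pkg : PLUGIN_PACKAGES) {       // fuzzy match via longest common prefix
            String prefix = longestCommonPrefix(fullClassName, pkg);
            if (prefix.split("\\.").length >= 2) { // prefix spans at least two subdomains
                return pkg;
            }
        }
        return null;
    }

    static String longestCommonPrefix(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return a.substring(0, i);
    }
}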
The ability to detect classes of plugins allows us to determine execution contexts. During the bytecode scanning, we track the context C. As soon as our analysis enters the namespace of a plugin class, we change C. It is defined by the name N_plugin of the plugin, or by the name N_apk of the application if no plugin matches. We generate a map from each method call to its calling context. Together with the function π, this implicitly defines a map γ from permissions to calling contexts. We can now distinguish four types of permissions:
- Dispensable permissions p ∈ P_apk \ P_val are not required by the application.
- Application-only permissions p ∈ P_apk are exclusively required for the hosting application to run, i.e. γ(p) = {N_apk}.
- Plugin-only permissions p ∈ P_apk are exclusively required for the execution of a plugin, i.e. γ(p) ∩ {N_apk} = ∅.
- Hybrid permissions p ∈ P_apk are required by both the hosting application and a plugin, i.e. γ(p) does not match the conditions of the other three permission types.
A sketch of this classification is given below.
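The classification can be expressed compactly as follows (a sketch; set representations and names are assumptions):

import java.util.Collections;
import java.util.Set;

enum PermissionType { DISPENSABLE, APP_ONLY, PLUGIN_ONLY, HYBRID }

class PermissionClassifier {
    // contexts = gamma(p): the set of contexts in which permission p is required.
    static PermissionType classify(String p, Set<String> pVal,
                                   Set<String> contexts, String nApk) {
        if (!pVal.contains(p))                            return PermissionType.DISPENSABLE;
        if (contexts.equals(Collections.singleton(nApk))) return PermissionType.APP_ONLY;
        if (!contexts.contains(nApk))                     return PermissionType.PLUGIN_ONLY;
        return PermissionType.HYBRID;
    }
}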
This result is communicated to the user in step [START_REF] Bugiel | XManDroid: A New Android Evolution to Mitigate Privilege Escalation Attacks[END_REF]. He gets an overview of the types of permissions and the context in which they are required. The user can enable or disable them in the entire application, only in the plugin, or only in the hosting application. The next section shows how to support this feature with the help of bytecode rewriting and without modifying Android.
Rewriter
In general, dispensable permissions are not required for the execution and do not need to be assigned to the application. They can be removed from the manifest. The same holds for permissions which should be disabled for the entire application. Thus, the first rewriting step is performed on the application manifest: it revokes the permissions that are either not required or not desired.
However, withdrawing permissions from an application may render it unusable. Calls to methods which require permissions will throw exceptions. If they are not handled correctly, the runtime environment could eventually interrupt execution. To avoid this problem, to enable the deactivation of permissions in only specific application components, and to retain an unmodified Android core, the activation or deactivation of permissions triggers a rewriting process (3). It is guided by the results of the syntactical analysis (4). The rewriter, described in this section, adapts the bytecode in such a way that the App can be executed safely even without the permissions it originally requested.
API Methods
For each method whose execution requires a permission, we provide two types of wrappers [START_REF] Conti | CRePE: context-related policy enforcement for android[END_REF] to replace the original call. Regular API method calls which require a permission, can be wrapped by simple try and catch blocks as depicted by WRAPPER1 in Listing 1.1. If the permission required to execute the API call has been withdrawn, we catch the exception and return a feasible default value. In case the permission is still valid, the original method is called. In contrast, the second wrapper WRAPPER2 (Listing 1.2) completely replaces the original API call and only executes a default action. Evidently, rewriting could be reduced to only WRAPPER2. But, WRAPPER1 reduces the number of events at which an application has to be rewritten and reinstalled. Assume that a user deactivates a permission for the entire application. The permission is removed from the manifest and all methods requiring it are wrapped. Depending on the wrapper and the next change in the permission settings a rewriting may be avoided because the old wrapper already handles the new settings, e.g. the reactivation of the permission.
Wrappers are static methods and apart from one additional instance argument for non-static methods, they inherit the number and type of arguments from the methods they wrap. This makes it easy to automatically derive them from the API. Additionally, it simplifies the rewriting process as follows.
URANOS delivers a dex file which contains the bytecode of all wrappers. This file is merged with the original application dex using the dx compiler libraries. The new dex now contains the wrappers but does not make use of them, yet. In the next step we obtain the list of method calls which need to be replaced from the static analysis component [START_REF] Chin | Analyzing inter-application communication in android[END_REF]. The corresponding invoke instructions are relocated in the new dex and the old method identifiers are exchanged with the identifiers of the corresponding wrapper methods.
Here, the rewriting process is finished even if the wrapped method is non-static. At bytecode level, the replacement of a non-static method with a static one simply induces a new interpretation of the registers involved in the call. The register originally storing the object instance is now interpreted as the first method argument. Thus, we pass the instance register to the wrapper in the first argument and can leave all other registers in the bytecode untouched. We illustrate this case in Listing 1.3. It shows bytecode mnemonics for the invocation of the API method getDeviceId as obtained by a disassembler. The instruction invoke-virtual calls the method getDeviceId on an instance of class TelephonyManager. It is rewritten to a static call in Listing 1.4 and passes the instance as an argument to the static wrapper method.
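At source level, the static wrapper that the rewritten invoke-static call targets could look like this (a sketch following the WRAPPER1 pattern; the default value is an assumption):

import android.telephony.TelephonyManager;

// Replaces TelephonyManager.getDeviceId(), which needs READ_PHONE_STATE at runtime.
public static String wrapGetDeviceId(TelephonyManager tm) {
    try {
        return tm.getDeviceId();          // original API call
    } catch (SecurityException se) {
        return "000000000000000";         // feasible default value
    }
}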
Reflection
Android supports reflective method calls. They use strings to retrieve or generate instances of classes and to call methods on such instances. These operations can be constructed at runtime. Hence, the targets of reflective calls are not decidable during analysis and calls to API methods may remain undetected. Therefore, we wrap the methods which can trigger reflective calls, i.e. invoke and newInstance. During runtime, these wrappers check the Method instance passed to invoke or the class instance on which newInstance is called. Depending on its location in the bytecode, the reflection wrapper is constructed in such a way that it passes the invocation to the appropriate wrapper methods (see above) or executes the function in the original bytecode. This does not require dynamic monitoring but can be integrated in the bytecode statically. Reflection calls show low performance and are used very infrequently. Thus, this rewriting will not induce high additional overhead.
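A possible shape of such a reflection wrapper (simplified; the dispatch to the wrapper sketched above stands in for the table generated during rewriting):

import java.lang.reflect.Method;
import android.telephony.TelephonyManager;

// Replaces Method.invoke(); redirects wrapped targets to their static wrappers.
public static Object wrapInvoke(Method m, Object receiver, Object[] args)
        throws Exception {
    if (m.getDeclaringClass() == TelephonyManager.class
            && "getDeviceId".equals(m.getName())) {
        return wrapGetDeviceId((TelephonyManager) receiver);
    }
    return m.invoke(receiver, args);      // untouched targets run as before
}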
Content Providers
Similar to reflective calls, we handle content providers. Providers must be accessed via content resolvers (see Section 3), which forward the operations to be performed on a content provider: query, insert, update, and delete. They throw security exceptions if the required read or write permissions are not assigned to the application. As these methods specify the URI of the content provider, we replace all operations by a static wrapper which passes their call to a monitor. It checks whether the operation is allowed before executing it.
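A sketch of such a resolver wrapper enforcing a simple URI-based policy (the blocked authority is an example, not the actual URANOS policy):

import android.content.ContentResolver;
import android.database.Cursor;
import android.net.Uri;

// Replaces ContentResolver.query(); the monitor checks the target URI first.
public static Cursor wrapQuery(ContentResolver cr, Uri uri, String[] projection,
                               String selection, String[] selectionArgs,
                               String sortOrder) {
    if (uri != null && "com.android.contacts".equals(uri.getAuthority())) {
        return null;                      // read access to this provider disabled
    }
    return cr.query(uri, projection, selection, selectionArgs, sortOrder);
}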
Intents
In general, intents are not problematic as they are handled in the central monitor of Android, i.e. the enforcement does not happen in the application. If an application sends an intent to a component which requires permissions it does not hold, an exception is written to the error log; the corresponding action is not executed, but the application does not crash. Thus, our rewriting must cover situations in which only some instructions in specific execution contexts must not send or receive intents. The control over sending can be realized by wrappers handling the available API methods such as startActivity, broadcastIntent, startService, and bindService. The wrappers implement monitors which first analyse the intent to be sent. Depending on the target, the sending is aborted. By rewriting the manifest, we can control which intents a component can receive. This excludes explicit intents which directly address an application component. Here, we assume that the direct access of a system component to an application can be considered legitimate.
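A sketch of an intent-sending wrapper (the blocked-package set is an assumption standing in for the user's policy):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import android.app.Activity;
import android.content.ComponentName;
import android.content.Intent;

// Example policy: packages to which no intents may be sent (illustrative).
static final Set<String> BLOCKED =
        new HashSet<>(Arrays.asList("com.example.adnetwork"));

// Replaces Activity.startActivity(); drops intents aimed at blocked targets.
public static void wrapStartActivity(Activity a, Intent i) {
    ComponentName target = i.resolveActivity(a.getPackageManager());
    if (target != null && BLOCKED.contains(target.getPackageName())) {
        return;                           // silently drop the intent
    }
    a.startActivity(i);
}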
Application Management
We realize permission revocation by repackaging applications. First, our App manager obtains the manifest and dex (6) from the rewriter. For recovery, we first backup the old dex file and its corresponding manifest. All other resources, such as libraries, images, audio or video files, etc. are not backed up as they remain untouched. They are extracted from the original apk (7), signed with the URANOS key together with the new bytecode and manifest. The signed application is then directly integrated into a new apk. This process is slow due to the zip compression of the archive. In the end, the application manager assists the user to deinstall the old and install the new application [START_REF] Enck | On lightweight mobile phone application certification[END_REF][START_REF] Felt | Android permissions demystified[END_REF].
In the background, we also deploy a dedicated update service. It mimics the update functionality of Android but also operates on the applications re-signed by URANOS. We regularly query the application market for updates, inform the user about them, and assist the update process by deinstalling the old App, rewriting the new App, and installing it. Similarly, the App manager provides support for deinstallation and recovery.
Discussion
Performance
To assess the performance of our approach, we downloaded over 180 popular applications from the Google Play Store. The URANOS App was adjusted in such a way that it automatically starts analysing and rewriting newly installed applications. Our benchmark measured the analysis time, i.e. the preprocessing of the dex (pre) and the execution context detection (det), and the rewriting time, i.e. the merging of wrappers (wrap), the rewriting of the resulting dex (rew), and the total time required to generate the final apk (tot). The analysis and rewriting phases were repeated 11 times for each App. The first measurement was ignored, as memory management and garbage collection often greatly influence the first measurements and are hard to reproduce because they heavily depend on the phone state. For the rewriting process, we always selected three random permissions to be disabled. If there were fewer permissions, we disabled all of them. All measurements were conducted on a Motorola RAZR XT910, running Android 4.0.4 on a 3.0.8 kernel. Due to space restrictions, this contribution only discusses a selection of applications and their performance figures. An overview of the complete results, a report on the impact of our rewriting on the App functionality, and the App itself are available at http://web.sec.uni-passau.de/research/uranos/.
Apart from the time measurements mentioned above, Table 1 enumerates the number of plugins the application contains (#pl), the number of permissions requested (#pm), the number of classes (#cl) in the dex, and the size of the apk. In particular, the apk size has a tremendous impact on the generation of the rewritten application due to apk compression. This provides potential for optimization, in particular if we look at the rather small time required to merge the wrapper file of 81 kB into the complex dex structure and to redirect the method calls. This complexity is also reflected in the time for pre-processing the dex to extract the information required to work on the bytecode.
We can also see that the number of classes and permissions included in an application influences the analysis time. Classes increase the administrative overhead in a dex; thus, their number also increases the effort to search for the appropriate code locations. Here, Shazam and Instagram are two extreme examples. In turn, the number of permissions increases the number of methods which have to be considered during analysis and rewriting. In our measurements, we do not include execution overhead: the time required for the additional instructions in the bytecode is negligible and within measuring tolerance. Thus, although the generation of the final apk is slow, our measurements certify that analysis and rewriting of Android bytecode can be implemented efficiently on a Smartphone. While other solutions run off-device and focus on precision, such as the web interface provided by Woodpecker [START_REF] Felt | Android permissions demystified[END_REF], URANOS can deliver timely feedback to the user. With this information he can decide about further countermeasures also provided by our system.
Limitations
As we have already stated above, our analysis uses approximations. In fact, P_M is an overapproximation of the permissions required by method calls, e.g. there may be methods in the bytecode which are never executed. Thus, the mere existence of API calls does not justify a permission assignment to an application. On the other hand, P_P ∪ P_I is an underapproximation, as we only consider strings as a means to communicate via intents or to access resources. There are numerous other ways for such operations which our heuristic does not cover.
Attackers or regular programmers can achieve underapproximations by hiding intent or provider access with various implementation techniques. In this case, URANOS will alert the user that a specific permission may not be needed. The user will deactivate the respective permission and no direct damage is caused. Overapproximation can be achieved by simply placing API calls in the bytecode which are never executed. In this case, our analysis does not report the permissions mapped to those dead API calls as dispensable. Thus, the overapproximation may give the user a false sense of security concerning the application. Therefore, URANOS also allows the deactivation of permissions in the hosting application and not only in the plugin.
Attackers may also hide plugin code by obfuscating it, e.g. by renaming all plugin APIs. In this case, URANOS will not detect the plugin. This will prevent the user from disabling permissions for this plugin. In this case, it is still possible to remove permissions for the whole application. Plugin providers which have an interest in the use of their plugins will not aim for obfuscated APIs.
Legal Restrictions
If software satisfies copyright law's fundamental requirement of originality, it is protected by international and national law, such as the Universal Copyright Convention, the WIPO Copyright Treaty, Article 10 of the international TRIPS agreement of the WTO, and the European Directive 2009/24/EC. In general, these directives prohibit the manipulation, reproduction, or distribution of source code if the changes are not authorized by its rights holder. No consent for modification is required if the software is developed under an open source licensing model or if minor modifications are required for repair or maintenance. To achieve interoperability of an application, even reverse engineering may be allowed. However, any changes must not interfere with the regular exploitation of the affected application and the legitimate interest of the rights holder.
URANOS cannot satisfy any of the conditions mentioned above. First of all, all actions are performed automatically; thus, it is not possible to ask the rights holder for permission to alter the software. One may argue that URANOS rewrites the application in order to ensure correct data management. Unfortunately, the changes described above directly interfere with the interests of the rights holder of the application.
On the other hand, one may argue that a developer must inform the user how the application processes and uses his personal data, as highlighted in the "Joint Statement of Principles" of February 22nd, 2012, signed by global players like Amazon, Apple, Google, Hewlett-Packard, Microsoft and Research In Motion. However, current systems only allow an informed consent of insufficient quality. In particular when using plugins, a developer would need to explain how user data is processed. But developers only use APIs to libraries without knowing internal details. To provide adequate information about the use of data, a developer would have to understand and/or reverse engineer the plugin mechanisms he uses. So, for most plugins or libraries, the phrasing of correct terms of use is impossible. Yet, this fact does not justify application rewriting. The user can still refuse the installation. If, despite deficient information, he decides to install the software, he must stick to the legal restrictions and use it as is.
In short: URANOS and most security systems which are based on application rewriting conflict with international and most national copyright protection legislation. This situation is paradoxical as such systems try to protect private data from being misused by erroneous or malicious application logic. Thus, they try to enforce data protection legislation but are at the same time limited by copyright protection laws.
Related Work
This section focuses on recent work addressing permission problems in Android. We distinguish two types of approaches: Analysis and monitoring mechanisms.
Permission Analysis
One of the first publications analysing the Android permission system is Kirin [START_REF] Enck | On lightweight mobile phone application certification[END_REF]. It analyzes the application manifest and compares it with simple and pre-defined rules. In contrast to URANOS, rules can only describe security critical combinations of permissions and simply prevent an application from being installed.
The off-device analysis in [START_REF] Chin | Analyzing inter-application communication in android[END_REF] is more sophisticated. It defines attack vectors which are based on design flaws and allow for the misuse of permissions. Chin et al. describe secure coding guidelines and permissions as a means to mitigate these problems. Their tool, ComDroid, can support developers to detect such flaws but it does not help App users in detecting and mitigating such problems.
This lack of user support also holds for Stowaway [START_REF] Felt | Android permissions demystified[END_REF]. This tool is focused on permissions which are dispensable for the execution of an application. Comparable to URANOS, Stowaway runs a static analysis on the bytecode. However, this analysis is designed for a server environment. While it provides better precision through a flow analysis, it cannot correct the detected problems, and its analysis times exceed those of URANOS by several orders of magnitude.
Similar to Stowaway, AndroidLeaks [START_REF] Gibler | AndroidLeaks: automatically detecting potential privacy leaks in android applications on a large scale[END_REF] uses an off-device analysis which detects privacy leaks. Data which is generated by operations which are subject to permission checks are tracked through the application to data sinks using static information flow analysis. AndroidLeaks supports the human analyst. The actual end user can not directly benefit from this system.
DroidChecker [START_REF] Chan | Droidchecker: analyzing android applications for capability leak[END_REF] and Woodpecker [START_REF] Grace | Systematic Detection of Capability Leaks in Stock Android Smartphones[END_REF] use inter-procedural control flow analyzes to look for permission vulnerabilities, such as the confused deputy (CD) vulnerability. However, DroidChecker additionally uses taint tracking to detect privilege escalation vulnerabilities in single applications while Woodpecker targets system images. Techniques applied in Woodpecker were also used to investigate the functionality of in-app advertisement [START_REF] Grace | Unsafe exposure analysis of mobile in-app advertisements[END_REF]. URANOS is based on an extended collection of advertisement libraries used in this work. Similar analytical work with a less comprehensive body has been conducted in [START_REF] Stevens | Investigating user privacy in android ad libraries[END_REF].
Enhanced Permission Monitoring
An early approach which modifies the central security monitor in Android to introduce an enriched permission system of finer granularity is Saint [START_REF] Ongtang | Semantically Rich Application-Centric Security in Android[END_REF]. However, Saint mainly focuses on inter-application communication. CRePE [START_REF] Conti | CRePE: context-related policy enforcement for android[END_REF] goes one step further and extends Android permissions with contextual constraints such as time, location, etc. However, CRePE does not consider the execution context in which permissions are required. Similar holds for Apex [START_REF] Nauman | Apex: extending android permission model and enforcement with user-defined runtime constraints[END_REF]. It manipulates the Android core implementation to modify the permissions framework and also introduces additional constraints on the usage of permissions.
Approaches such as QUIRE [START_REF] Dietz | QUIRE: Lightweight Provenance for Smart Phone Operating Systems[END_REF] or IPC inspection of Felt et al. [START_REF] Felt | Permission redelegation: attacks and defenses[END_REF] focus on the runtime prevention of CD attacks. QUIRE defines a lightweight provenance system for permissions. Enforcement in this framework boils down to the discovery of chains which misuse the communication to other apps. IPC inspection solves this problem by reinstantiating apps with the privileges of their callers.
Both approaches require an OS manipulation and consider an application to be monolithic. This prevents them from recognizing execution contexts for permissions. The same deficiencies hold for XManDroid [START_REF] Bugiel | XManDroid: A New Android Evolution to Mitigate Privilege Escalation Attacks[END_REF] which extends the goal of QUIRE and IPC inspection by also considering colluding applications.
Similar to IPC inspection and partially based on QUIRE is AdSplit [START_REF] Shekhar | AdSplit: separating smartphone advertising from applications[END_REF]. It targets advertisement plugins and also uses multi-instantiation. It separates the advertisement from its hosting application and executes them in independent processes. Although mentioned in their contribution, Shekhar et al. do not aim at deactivating permissions in one part of the application or at completely suppressing communication between the separated application components.
Leontiadis et al. also do not promote a complete deactivation of permissions and separate applications and advertising libraries to avoid overprivileged execution [START_REF] Leontiadis | Don't kill my ads!: balancing privacy in an ad-supported mobile application market[END_REF]. A trade-off between user privacy and advertisement revenue is proposed. A separated permission and monitoring system controls the responsible processing of user data and allows to interfere with IPC if sufficient revenue has been produced. URANOS could be coupled with such a system by only allowing the deactivation of permissions if sufficient revenue has been produced. However, real-time monitoring would destroy the lightweight character of URANOS.
Although developed independently, AdDroid [START_REF] Pearce | AdDroid: Privilege separation for applications and advertisers in Android[END_REF] realizes a lot of the ideas proposed by Leontiadis et al. AdDroid proposes specific permissions for advertisement plugins. Of course, this requires modifications to the overall Android permission system. Further, to obtain a privilege separation, AdDroid also proposes a process separation of advertisement functionalities from the hosting application. Additionally, the Android API is proposed to be modified according to the advertisement needs. It remains unclear how such a model should be enforced. The generality of URANOS could contribute to such an enforcement.
Two approaches which are very similar to URANOS are I-Arm Droid [START_REF] Davis | I-arm-droid: A rewriting framework for in-app reference monitors for android applications[END_REF] and AppGuard [START_REF] Backes | App-Guard -Real-time policy enforcement for third-party applications[END_REF]. Both systems rewrite Android bytecode to enforce user-defined policies. I-Arm Droid does not run on the Android device and is designed to enforce developer-defined security policies by means of inlined reference monitors at runtime. The flexibility of the inlining process is limited, as all method calls are replaced by monitors. Selective deactivation of permissions is not possible. The same holds for AppGuard. While it can be run directly on the device, its rewriting process replaces all critical method calls. AppGuard compares to URANOS as it uses a similarly resource- and user-friendly deployment mechanism which does not require root access on the device.
Conclusions
The permission system and application structure of today's Smartphones do not provide a good foundation for an informed consent of users. URANOS takes a first step in this direction by providing enhanced feedback. The user is able to select which application component should run with which set of permissions. Thus, although our approach cannot provide detailed information about an application's functionality, the user benefits from a finer granularity of permission assignment.
If in doubt, he is not confronted with an all-or-nothing approach but can selectively disable critical application components. The execution contexts we define in our work are general and can describe many different types of application components. Furthermore, we require users neither to manipulate nor to root their Smartphones. Instead, we maintain the regular install, update, and recovery procedures.
Our approach is still slow when integrating the executable code into a fully functional application. However, this overhead is not directly induced by our efficient analysis or rewriting mechanisms. In fact, we highlighted the practical and security impact of a trade-off between a precise and complete flow analysis and a lightweight but fast and resource-saving syntactical analysis which can run on the user's device without altering its overall functionality.
Fig. 1. High-level overview of URANOS
Fig. 3. System Overview

Listing 1.1. Wrapper pattern one
public static WRAPPER1 {
    try { API_CALL_ACTION; }
    catch (SecurityException se) { DEFAULT_ACTION; }
}

Listing 1.2. Wrapper pattern two
public static WRAPPER2 {
    DEFAULT_ACTION;
}
Table 1. Selection of analyzed and rewritten applications
App #pl #pm #cl apk[MB] pre[ms] det[ms] wrap[ms] rew[ms] tot[ms]
100 Doors 4 3 757 14.4 1421 356 1690 4277 9073
Angry Birds 10 6 873 24.4 1863 640 2308 5767 50408
Bugvillage 13 8 1127 3.1 1819 1214 3425 6832 18092
Coin Dozer 11 6 855 14.7 2028 788 2605 6457 56749
Fruit Ninja 8 8 1472 19.2 2520 1197 3657 7955 144374
Instagram 7 7 2914 12.9 5168 1906 8114 17031 39908
Logo Quiz 3 2 232 9.7 553 96 701 1939 7729
Shazam 8 13 2822 4.4 4098 3214 7837 15182 27263
Skyjumper 3 4 292 0.9 772 257 1222 2991 4106
Acknowledgements
The research leading to these results has received funding from the European Union's FP7 project COMPOSE, under grant agreement 317862. | 43,585 | [
"1003790",
"1003791",
"1003792",
"1003788"
] | [
"98761",
"98761",
"98761",
"98761"
] |
01485935 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485935/file/978-3-642-38530-8_6_Chapter.pdf | Michael Lackner
email: michael.lackner@tugraz.at
Reinhard Berlach
email: reinhard.berlach@tugraz.at
Wolfgang Raschke
email: wolfgang.raschke@tugraz.at
Reinhold Weiss
email: rweiss@tugraz.at
Christian Steger
email: steger@tugraz.at
A Defensive Virtual Machine Layer to Counteract Fault Attacks on Java Cards
Keywords: Java Card, Defensive Virtual Machine, Countermeasure, Fault Attack
The objective of Java Cards is to protect security-critical code and data against a hostile environment. Adversaries perform fault attacks on these cards to change the control and data flow of the Java Card Virtual Machine. These attacks confuse the Java type system, jump to forbidden code or remove run-time security checks. This work introduces a novel security layer for a defensive Java Card Virtual Machine to counteract fault attacks. The advantages of this layer from the security and design perspectives of the virtual machine are demonstrated. In a case study, we demonstrate three implementations of the abstraction layer running on a Java Card prototype. Two implementations use software checks that are optimized for either memory consumption or execution speed. The third implementation accelerates the run-time verification process by using the dedicated hardware protection units of the Java Card.
Introduction
A Java Card enables Java applets to run on a smart card. The primary purpose of using a Java Card is the write-once, run-everywhere approach and the ability of post-issuance installation of applets [START_REF] Sauveron | Multiapplication smart card: Towards an open smart card? Information Security Technical Report[END_REF]. These cards are used in a wide range of applications (e.g., digital wallets and transport tickets) to store security-critical code, data and cryptographic keys. Currently, these cards are still very resource-constrained devices that include an 8- or 16-bit processor, 4kB of volatile memory and 128kB of non-volatile memory. To make a Java Card Virtual Machine run on such a constrained device, a subset of Java is used [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. Furthermore, special Java Card security concepts, such as the Java Card firewall [START_REF]Oracle: Runtime Environment Specification. Java Card Platform[END_REF] and a verification process for every applet [START_REF] Leroy | Bytecode verification on Java smart cards[END_REF], were added. The Java Card firewall is a run-time security feature that protects an applet against illegal access from other applets. For every access to a field or method of an object, this check is performed. Unfortunately, the firewall security mechanism can be circumvented by applets that do not comply with the Java Card specification. Such applets are called malicious applets.
To counteract malicious applets, a bytecode verification process is performed. This verification is performed either on-card or off-card for every applet [START_REF] Leroy | Bytecode verification on Java smart cards[END_REF]. Note that this bytecode verification is a static process and not performed during applet execution. The reasons for this static approach are the high resource needs of the verification process and the hardware constraints of the Java Card. This behavior is now abused by adversaries. They upload a valid applet onto the card and perform a fault attack (FA) during applet execution. Adversaries are now able to create a malicious applet out of a valid one [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF].
A favorite time for performing an FA is during the fetching process. At this time, the virtual machine (VM) reads the next Java bytecode values from memory. An adversary that performs an FA at this time can change the read-out values. The VM then decodes the malicious bytecodes and executes them, which leads to a change in the control and data flow of the applet. A valid applet is mutated by such an FA into a malicious applet [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF][START_REF] Hamadouche | Subverting Byte Code Linker service to characterize Java Card API[END_REF] and gains unauthorized access to secret code and data [START_REF] Markantonakis | Smart card security[END_REF][START_REF] Bar-El | The Sorcerer's Apprentice Guide to Fault Attacks[END_REF].
To counteract an FA, a VM must perform run-time security checks to determine if the bytecode behaves correctly. In the literature, different countermeasures, such as control-flow checks [START_REF] Sere | Evaluation of Countermeasures Against Fault Attacks on Smart Cards[END_REF], double checks [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF], integrity checks [START_REF] Bouffard | Evaluation of the Ability to Transform SIM Applications into Hostile Applications[END_REF] and method encryption [START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF], have been proposed. Barbu [3] proposed a dynamic attack countermeasure in which the VM executes either standard bytecodes or bytecodes with additional security checks.
All these works do not concentrate on the question of how these security mechanisms can be smoothly integrated into a Java Card VM. For this integration, we propose adding an additional security layer into the VM. This layer abstracts the access to internal VM resources and performs run-time security checks to counteract FAs. The primary contributions of this paper are the following:
- Introduction of a novel defensive VM (D-VM) layer to counteract FAs during run-time. Access to security-critical resources of the VM, such as the operand stack (OS), local variables (LV) and bytecode area (BA), is handled using this layer.
- Usage of the D-VM layer as a dynamic countermeasure. Based on the actual security level of the card, different implementations of the D-VM layer are used. For a low-security level, the D-VM implementation uses fewer checks than for a high-security level. The security level depends on the credibility of the currently executed applet and run-time information received by hardware or software modules.
- A case study of a defensive VM using three different D-VM layer implementations. The API of the D-VM layer is used by the Java Card VM to perform run-time checks on the currently executing bytecode.
- The defensive VMs are executed on a smart card prototype with specific HW security features to speed up the run-time verification process. The resulting run-time and main memory consumption of all implemented D-VM layers are presented.
Section 2 provides an overview of attacks on Java Cards and the current countermeasures against them. Section 3 describes the novel D-VM layer presented in this work and its integration into the Java Card design. Furthermore, the method by which the D-VM layer enables the concept of dynamic countermeasures is presented. Section 4 presents implementation details regarding how the three D-VM implementations are inserted into the smart card prototype. Section 5 analyzes the additional costs for the D-VM implementations based on the execution and main memory overhead. Finally, the conclusions and future work are discussed in Section 6.
Related Work
In this section, the basics of the Java Card VM and work related to FA on Java Cards are presented. Then, an analysis of work regarding methods of counteracting FAs and securing the VM are presented. Finally, an FA example is presented to demonstrate the danger posed by such run-time attacks for the security of Java Cards.
Java Card Virtual Machine
A Java Card VM is software that is executed on a microprocessor. The VM itself can be considered a virtual computer that executes Java applets stored in the data area of the physical microprocessor. To be able to execute Java applets, the VM uses internal data structures, such as the OS or the LV, to store interim results of logical and combinatorial operations. All of these internal data structures are general objects for adversaries that attack the Java Card [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF][START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF][START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF].
For every method invocation performed by the VM, a new Java frame [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF] is created. This frame is pushed to the Java stack and removed from it when the method returns. In most VM implementations, this frame internally consists of three primary parts. These parts have static sizes during the execution of a method. The first frame part is the OS on which most Java operations are performed. The OS is the source and destination for most of the Java bytecodes. The second part is the LV memory region. The LV are used in the same manner as the registers on a standard CPU. The third part is the frame data, which holds all additional information needed by the VM and Java Card Runtime Environment (JCRE) [START_REF]Oracle: Runtime Environment Specification. Java Card Platform[END_REF]. This additional information includes, for example, return addresses and pointers to internal VM-related data structures.
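As an illustration (field names and types are assumptions; concrete layouts differ between VM implementations), such a frame can be pictured as:

// Conceptual sketch of a Java Card frame; all sizes are fixed per method.
class Frame {
    short[] operandStack;    // OS: source and destination of most bytecodes
    short[] localVariables;  // LV: method arguments and local registers
    short returnAddress;     // frame data: where to continue after return
    short frameDataPointer;  // frame data: further VM/JCRE bookkeeping
}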
Attacks on Java Cards
Loading an applet that does not conform to the specification defined in [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF] onto a Java Card is a well-known problem called a logical attack (LA). After an LA, different applets on the card are no longer protected by the so-called Java sandbox model. Through this sandbox, an applet is protected from illegal write and read operations of other applets. To perform an LA, an adversary must know the secret key to install applets. This key is known for development cards, but it is highly protected for industrial cards and only known by authorized companies and authorities. In conclusion, LAs are no longer security threats for current Java Cards.
Side-channel analyses are used to gather information about the currently executing method or instructions by measuring how the card changes environment parameters (e.g., power consumption and electromagnetic emission) during runtime. Integrated circuits influence the environment around them but can also be influenced by the environment. This influence is abused by an FA to change the normal control and data flow of the integrated circuit. Such FAs include glitch attacks on the power supply and laser attacks on the cards [START_REF] Bar-El | The Sorcerer's Apprentice Guide to Fault Attacks[END_REF][START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF]. By performing side-channel analyses and FAs in combination, it is possible to break cryptographic algorithms to receive secret data or keys [START_REF] Markantonakis | Smart card security[END_REF].
In 2010, a new group of attacks called combined attacks (CA) was introduced. These CAs combine LAs and FAs to enable the execution of ill-formed code during run-time [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF]. An example of a CA is the removal of the checkcast bytecode to cause type confusion during run-time. Then, an adversary is able to break the Java sandbox model and obtain access to secret data and code stored on the card [START_REF] Barbu | Attacks on Java Card 3.0 Combining Fault and Logical Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF]. In this work, we concentrate on countering FAs during the execution of an applet using our D-VM layer.
Countermeasures Against Java Card Attacks
Since approximately 2010, an increasing number of researchers have started concentrating on the question of what tasks must be performed to make a VM more robust against FAs and CAs. Several authors [START_REF] Sere | Checking the Paths to Identify Mutant Application on Embedded Systems[END_REF][START_REF] Bouffard | Evaluation of the Ability to Transform SIM Applications into Hostile Applications[END_REF] suggest adding an additional security component to the Java Card applet. In this component, they store checksums calculated over basic blocks of bytecodes. These checksums are calculated off-card in a static process and added to a new component of the applet. During run-time, the checksum of executed bytecodes is calculated using software and compared with the stored checksums. If these checksums are not the same, a security exception is thrown.
Another FA countermeasure is the use of control-flow graph information [START_REF] Sere | Evaluation of Countermeasures Against Fault Attacks on Smart Cards[END_REF]. To enable this approach, a control-flow graph over basic blocks is calculated offcard and stored in an additional applet component. During run-time, the current control-flow graph is calculated and compared with the stored control graph.
In [START_REF] Razafindralambo | A Dynamic Syntax Interpretation for Java Based Smart Card to Mitigate Logical Attacks[END_REF], the authors propose storing a countermeasure flag in a new applet component to indicate whether the method is encrypted. They perform this encryption using a secret key and the Java program counter for the bytecode of every method. Through this encryption, they are able to counteract attacks that change the control-flow of an applet to execute illegal code or data.
Another countermeasure against FAs that target the data stored on the OS is presented in [START_REF] Barbu | Java Card Operand Stack:Fault Attacks, Combined Attacks and Countermeasures[END_REF]. In this work, integrity checks are performed when data are pushed or popped onto the OS. Through this approach, the OS is protected against FAs that corrupt the OS data.
Another run-time check against FAs is proposed in [START_REF] Dubreuil | Type Classification against Fault Enabled Mutant in Java Based Smart Card[END_REF][START_REF] Lackner | Towards the Hardware Accelerated Defensive Virtual Machine -Type and Bound Protection[END_REF], in which they create separate OSes for each of the two data types, integralValue and reference. With this approach of splitting the OS, it is possible to counteract type-confusion attacks. A drawback is that in both works, the applet must be preprocessed.
In [START_REF] Barbu | Dynamic Fault Injection Countermeasure[END_REF], the authors propose a dynamic countermeasure to counteract FAs. Bytecodes are implemented in different versions inside the VM, a standard version and an advanced version that performs additional security checks. The VM is now able to switch during run-time from the standard to the advanced version. By using unused Java bytecodes, an applet programmer can explicitly call the advanced bytecode versions.
Most current FA countermeasures either add an additional security component to the applet or rely on off-card preprocessing of the applet. This leads to drawbacks such as an increased applet size and compatibility problems for VMs that do not support the new applet components. In this work, we propose a D-VM layer that performs checks on the currently executing bytecode. These checks are based on a run-time policy and do not require an off-card preprocessing step or an additional applet component.
EMAN4 Attack: Jump Outside the Bytecode Area
In 2011, the run-time attack EMAN4 was found [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF]. In that work, a laser was used to force the values read out from the EEPROM to 0x00. With this laser attack, an adversary is able to change the Java bytecode of post-issuance installed applets during their execution.
The attack targets the moment when the VM fetches the operands of the goto_w bytecode from the EEPROM. Generally, the goto_w bytecode is used to perform a jump operation inside a method. The goto_w bytecode consists of the opcode byte 0xa8 and two offset bytes for the branch destination [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. This branch offset is added to the actual Java program counter to determine the next executed bytecode. An adversary who changes this offset is able to manipulate the control flow of the applet.
With the help of the EMAN4 attack it is possible to move the Java program counter outside the applet bytecode area (BA), as illustrated in Figure 1. This is done by changing the offset parameters of the goto_w bytecode from 0xFF20 to 0x0020 during the fetch process of the VM. The jump destination address of the EMAN4 attack is a data array outside the bytecode area. This data array was previously filled with adversary-defined data. After the laser attack, the VM executes the values of the data array. This execution of adversary-definable data leads to considerably more critical security problems, such as memory dumps [START_REF] Bouffard | The Next Smart Card Nightmare[END_REF]. In this work we counteract the EMAN4 attack with our control-flow policy. This policy only allows the VM to fetch bytecodes that lie inside the bytecode area.
Fig. 1. The EMAN4 run-time attack changes the jump address 0xFF20 to 0x0020, which leads to the security threat of executing bytecode outside the defined BA of the current applet [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF].
Defensive VM Layer
In this work, we propose adding a novel security layer to the Java Card. Through this layer, access to internal structures (e.g., OS, LV and BA) of the VM is handled. In reference to its defensive nature and its primary use for enabling a defensive VM, we name this layer the defensive VM (D-VM) layer. An overview of the D-VM layer and the D-VM API, which is used by the VM, is depicted in Figure 2 and is explained in detail below.
Functionalities offered by the D-VM API include, for example, pushing and popping data onto the OS, writing to and reading from the LV, and fetching Java bytecodes. It is possible for the VM to implement all Java bytecodes by using these API functions. The pseudo-code example in Listing 1.1 shows the process of fetching a bytecode and the implementation of the sadd bytecode using our D-VM API approach. The sadd bytecode pops two values of integral data type from the OS and pushes the sum as an integral data type back onto the OS. A developer specialized in VM security is able to implement and choose the appropriate countermeasures within the D-VM layer. These countermeasures are based on state-of-the-art knowledge and the hardware constraints of the smart card architecture. Programmers implementing the VM do not need to know these security techniques in detail but rather just use the D-VM API functions.
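To make the idea concrete, the small C sketch below mimics Listing 1.1 with a stand-in operand stack; the function names and the stack layout are ours and only illustrate how a bytecode like sadd can be written purely against a D-VM-style API.

#include <stdint.h>
#include <stdio.h>

/* Minimal stand-in for the D-VM API: a tiny operand stack.
 * A real D-VM layer would add type, bound and control-flow checks here. */
static uint16_t OS[16];
static uint8_t  os_size = 0;

static void     dvm_push_integralData(uint16_t v) { OS[os_size++] = v; }
static uint16_t dvm_pop_integralData(void)        { return OS[--os_size]; }

/* sadd: pop two short values from the operand stack and push their sum. */
static void bytecode_sadd(void) {
    uint16_t b = dvm_pop_integralData();
    uint16_t a = dvm_pop_integralData();
    dvm_push_integralData((uint16_t)(a + b));
}

int main(void) {
    dvm_push_integralData(2);
    dvm_push_integralData(3);
    bytecode_sadd();
    printf("sadd result: %u\n", dvm_pop_integralData()); /* prints 5 */
    return 0;
}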
If HW features are used, the D-VM layer communicates with these units and configures them through specific instructions. Through this approach, it is also very easy to alter the SW implementations by changing the D-VM layer implementation without changing specific Java bytecode implementations. It is possible to fulfill the same security policy on different smart card platforms where specific HW features are available.
On a code size-constrained smart card platform, an implementation that has a small code size but requires more main memory or execution time is used. The appropriate implementations of security features within the D-VM API are used without the need to change the entire VM.
Dynamic Countermeasures: The D-VM layer is also a further step to enable dynamic fault attack countermeasures such as that proposed by Barbu in [START_REF] Barbu | Dynamic Fault Injection Countermeasure[END_REF]. In this work, he proposes a VM that uses different bytecode implementations depending on the actual security level of the smart card. If an attack or malicious behavior is detected, the security level is decreased. This decreased security leads to an exchange of the implemented bytecodes with more secure versions. In these more secure bytecodes, different additional checks, such as double reads, are implemented, which leads to decreased run-time performance.
Our D-VM layer further advances this dynamic countermeasure concept. Depending on the actual security level, an appropriate D-VM layer implementation is used. Therefore, the entire bytecode implementation remains unchanged, but it is possible to dynamically add and change security checks during run-time. An overview of this dynamic approach is outlined in Figure 3. The actual security level of the card is determined by HW sensors (e.g., brightness and supply voltage) and the behavior of the executing applet. For example, at a high security level, the D-VM layer can perform a read operation after pushing a value into the OS memory to detect an FA. At a lower security level, the D-VM layer performs additional bound, type and control-flow checks.
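One possible way to realize this switching in software is a table of function pointers selected by the current check level, as in the following sketch; the level names, the double-read check and the mapping between levels and checks are illustrative assumptions, not the mechanism of the prototype.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical check intensities derived from HW sensors and applet behaviour. */
typedef enum { CHECKS_MINIMAL, CHECKS_FULL } check_level_t;

/* One D-VM layer implementation = one set of entry points. */
typedef struct {
    void     (*push)(uint16_t value);
    uint16_t (*pop)(void);
} dvm_layer_t;

static uint16_t stack[16];
static uint8_t  sp = 0;

/* Fast variant: no extra checks. */
static void     push_fast(uint16_t v) { stack[sp++] = v; }
static uint16_t pop_fast(void)        { return stack[--sp]; }

/* Hardened variant: re-reads the stored value to detect a fault on the write. */
static void push_checked(uint16_t v) {
    stack[sp] = v;
    if (stack[sp] != v) { printf("fault detected\n"); return; }
    sp++;
}
static uint16_t pop_checked(void) { return stack[--sp]; }

static const dvm_layer_t layers[] = {
    { push_fast,    pop_fast    },   /* CHECKS_MINIMAL: trusted context    */
    { push_checked, pop_checked },   /* CHECKS_FULL: suspicious context    */
};

int main(void) {
    check_level_t level = CHECKS_FULL;        /* chosen dynamically at run-time */
    const dvm_layer_t *dvm = &layers[level];  /* active D-VM layer              */
    dvm->push(42);
    printf("popped: %u\n", dvm->pop());
    return 0;
}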
Security Context of an Applet: Another use case for the D-VM layer is the post-issuance installation of applets on the card. We focus on the user-centric ownership model (UCOM) [START_REF] Akram | A Paradigm Shift in Smart Card Ownership Model[END_REF] in which Java Card users are able to load their own applets onto the card. For the UCOM approach, each newly installed applet is assigned a defined security level at installation time. The security level depends on how trustworthy the applet is. For example, the security level for an applet signed with a valid key from the service provider is quite high, which results in a high execution speed. Such an applet should be contrasted with an applet that has no valid signature and is loaded onto the card by the Java Card owner. This applet will run at a low security level with many run-time checks but a slower execution speed. Furthermore, access to internal resources and applets installed on the card could be restricted by the low security level.
Security Policy
This chapter introduces the three security policies used in this work. With the help of these policies, it is possible to counteract the most dangerous threats that jeopardize security-critical data on the card. The type and bound policies are taken from [START_REF] Lackner | Towards the Hardware Accelerated Defensive Virtual Machine -Type and Bound Protection[END_REF] and are augmented with a control-flow policy. The fulfillment of the three policies on every bytecode is checked by three different D-VM layer implementations using our D-VM API.
Control-Flow Policy:
The VM is only allowed to fetch bytecodes that are within the borders of the currently active method's BA. Fetching bytecodes that are outside of this area is not allowed. The currently valid method BA changes when a new method is invoked or a return statement is executed. Because of this policy, it is no longer possible for control-flow changing bytecodes (e.g., goto_w and if_scmp_w) to jump outside of the reserved bytecode memory area. This policy counters the EMAN4 attack [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF] on the Java Card and all other attacks that rely on the execution of a data array or of code of another applet that is not inside the current BA.
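The essence of this check can be sketched in a few lines of C; the bound variables and return codes are illustrative, and a real card would react by muting itself rather than returning an error code.

#include <stdint.h>
#include <stdio.h>

/* Bytecode area (BA) bounds of the currently active method; updated on
 * method invocation and return. The values here are illustrative. */
static uint16_t ba_start = 0x0100;
static uint16_t ba_end   = 0x01FF;

/* Returns the bytecode at 'pc' only if 'pc' lies inside the active BA;
 * otherwise signals a control-flow policy violation. */
static int dvm_fetch(const uint8_t *memory, uint16_t pc, uint8_t *out) {
    if (pc < ba_start || pc > ba_end) {
        return -1; /* e.g. an EMAN4-style jump into a data array */
    }
    *out = memory[pc];
    return 0;
}

int main(void) {
    static uint8_t memory[0x0300];
    uint8_t op;
    printf("fetch inside BA : %d\n", dvm_fetch(memory, 0x0120, &op)); /*  0 */
    printf("fetch outside BA: %d\n", dvm_fetch(memory, 0x0220, &op)); /* -1 */
    return 0;
}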
Type Policy: Java bytecodes are strongly typed in the VM specification [START_REF]Oracle: Virtual Machine Specification. Java Card Platform[END_REF]. This typing means that for every Java bytecode, the type of operand that the bytecode expects and the type of the result stored in the OS or LV are clearly defined. An example is the sastore bytecode, which stores a short value in an element of a short array object. The sastore bytecode uses the top three elements from the OS as operands. The first element is the address of the array object, which is of type reference. The second element is the index operand of the array, which must be of type short. The third element is the value, which is stored within the array element and is of type short.
Type confusion between values of integral data (boolean, byte or short) and object references (byte[], short[] or class A, for example) is a serious problem for Java Cards [START_REF] Vertanen | Java Type Confusion and Fault Attacks[END_REF][START_REF] Mostowski | Malicious Code on Java Card Smartcards: Attacks and Countermeasures[END_REF][START_REF] Iguchi-Cartigny | Developing a Trojan applets in a smart card[END_REF][START_REF] Vetillard | Combined Attacks and Countermeasures[END_REF][START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF][START_REF] Hamadouche | Subverting Byte Code Linker service to characterize Java Card API[END_REF]. To counter these attacks, we divide all data types into the two main types, integralData and reference. Note that this policy does not prevent type confusion inside the main type reference between array and class types.
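A minimal illustration of such a two-type check, with an assumed tagged operand stack, could look as follows; the structure layout is ours and is not how any particular card stores its type information.

#include <stdint.h>
#include <stdio.h>

typedef enum { TYPE_INTEGRAL = 0, TYPE_REFERENCE = 1 } vm_type_t;

/* Each operand stack entry carries its main type next to its value. */
typedef struct { uint16_t value; vm_type_t type; } os_entry_t;

static os_entry_t OS[16];
static uint8_t    os_size = 0;

static void dvm_push(uint16_t value, vm_type_t type) {
    OS[os_size].value = value;
    OS[os_size].type  = type;
    os_size++;
}

/* Pop only succeeds if the stored main type matches the expected one. */
static int dvm_pop(vm_type_t expected, uint16_t *out) {
    os_size--;
    if (OS[os_size].type != expected) {
        return -1; /* type confusion detected */
    }
    *out = OS[os_size].value;
    return 0;
}

int main(void) {
    uint16_t v;
    dvm_push(0x1234, TYPE_REFERENCE);             /* e.g. an array reference     */
    printf("%d\n", dvm_pop(TYPE_INTEGRAL, &v));   /* -1: reference used as short */
    return 0;
}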
Bound Policy: Most Java Card bytecodes push and pop data onto the OS or read and write data in the LV, which can be considered similar to registers. The OS is the main component for most Java bytecode operations. Similar to buffer overflow attacks on C programs [START_REF] Cowan | Buffer overflows: attacks and defenses for the vulnerability of the decade[END_REF], it is possible to overflow the reserved memory space for the OS and LV. An adversary is then able to set the return address of a method to any value. Such an attack was first found in 2011 by Bouffard [START_REF] Bouffard | Combined Software and Hardware Attacks on the Java Card Control Flow[END_REF][START_REF] Bouffard | The Next Smart Card Nightmare[END_REF]. An overflow of the OS happens when too many values are pushed onto or popped from the OS. An LV overflow happens when an incorrect LV index is accessed. This index parameter is decoded as an operand of several LV-related bytecodes (e.g., sstore, sload and sinc) and is therefore stored permanently in the non-volatile memory. Thus, changing this operand through an FA gives an attacker access to memory regions outside the reserved LV memory region. These memory regions are created for every invoked method and are not changed during the method execution. Therefore, in this work, we permit Java bytecodes to operate only within the reserved OS and LV memory regions.
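The sketch below illustrates these bound checks for an assumed OS and LV layout; the sizes and error codes are arbitrary and only show where the comparisons take place.

#include <stdint.h>
#include <stdio.h>

#define OS_SIZE 16
#define LV_SIZE  8

static uint16_t OS[OS_SIZE]; static uint8_t os_top = 0;
static uint16_t LV[LV_SIZE];

/* Push refuses to grow the operand stack beyond its reserved region. */
static int dvm_push(uint16_t v) {
    if (os_top >= OS_SIZE) return -1;   /* OS overflow attempt  */
    OS[os_top++] = v;
    return 0;
}

/* Pop refuses to read below the bottom of the operand stack. */
static int dvm_pop(uint16_t *out) {
    if (os_top == 0) return -1;         /* OS underflow attempt */
    *out = OS[--os_top];
    return 0;
}

/* LV access checks the index operand (e.g. of sstore/sload) against the
 * region reserved for the current method. */
static int dvm_lv_store(uint8_t index, uint16_t v) {
    if (index >= LV_SIZE) return -1;    /* faulted index outside the LV area */
    LV[index] = v;
    return 0;
}

int main(void) {
    printf("store LV[3]  : %d\n", dvm_lv_store(3, 7));    /*  0 */
    printf("store LV[200]: %d\n", dvm_lv_store(200, 7));  /* -1 */
    return 0;
}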
Java Card Prototype Implementation
In this work three implementations of the D-VM layer are proposed to perform run-time security checks on the currently executing bytecode. Two implementations perform all checks in SW to ensure our security policies. One implementation uses dedicated HW protection units to accelerate the run-time verification process. The implementations of the D-VM layer were added into a Java Card VM and executed on a smart card prototype. This prototype is a cycle-accurate SystemC [START_REF]IEEE: Open SystemC Language Reference Manual IEEE Std 1666-2005[END_REF] model of an 8051 instruction set-compatible processor. All software components, such as the D-VM layer and the VM, are written in C and 8051 assembly language.
D-VM Layer Implementations
This section presents the implementation details for the three implemented D-VM layers used to create a defensive VM. All three implemented D-VM layers fulfill the security policy presented in Chapter 3 but differ from each other in the detailed manner in which the policies are satisfied. The key characteristic of the two SW D-VM implementations is that they use different implementations of the type-storing approach to counteract type confusion. The run-time type information (integralData or reference) used to perform run-time checks can be stored either in a type bitmap (memory optimization) or in the actual word size of the microprocessor (speed optimization). The HW Accelerated D-VM uses a third approach and stores the type information in an additional bit of the main memory. Through this approach, the HW can easily store and check the type information for every OS and LV entry. An overview of how the type-storing policy is ensured by our D-VM implementations and a memory layout overview are shown in Figure 4.

Bit Storing D-VM: The type information for every entry of the OS and LV is represented by a one-bit entry in a type bitmap. A problem with this approach is that the run-time overhead is quite high because different shift and modulo operations must be performed to store and read the type information from the type bitmap. These operations (shift and modulo) are, for the 8051 architecture, computationally expensive and thus lead to longer execution times. An advantage of the bit-storing approach is the low memory overhead required to hold the type information in the type bitmap.
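The bookkeeping behind the bitmap can be illustrated by the following small accessors; the helper names are ours, and the shift and modulo operations they contain are exactly the ones that make this variant slow on the 8051.

#include <stdint.h>
#include <stdio.h>

#define OS_ENTRIES 16

/* 0 = integralData, 1 = reference; one bit per OS/LV entry. */
static uint8_t type_bitmap[(OS_ENTRIES + 7) / 8];

static void type_set(uint8_t index, uint8_t type) {
    uint8_t mask = (uint8_t)(1u << (index % 8));
    if (type) type_bitmap[index / 8] |= mask;
    else      type_bitmap[index / 8] &= (uint8_t)~mask;
}

static uint8_t type_get(uint8_t index) {
    return (uint8_t)((type_bitmap[index / 8] >> (index % 8)) & 1u);
}

int main(void) {
    type_set(5, 1);                                /* entry 5 holds a reference */
    printf("type of entry 5: %u\n", type_get(5));  /* 1                         */
    printf("type of entry 6: %u\n", type_get(6));  /* 0 (integralData)          */
    return 0;
}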
Word Storing D-VM:
The run-time performance of the type storing and reading process is increased by storing the type information using the natural word size of the processor and of the data bus on which the memory for the OS and LV is located. Every element in the OS and LV is extended with a type element of one word size such that it can be processed very quickly by the architecture. By choosing this implementation, the memory consumption of the type-storing process increases compared with the previously introduced SW Bit Storing D-VM. Pseudo-codes for writing to the top of the OS stack for the bit- and word-storing approaches are shown in Listings 1.2 and 1.3.

HW Accelerated D-VM: Performing type and bound checks in SW to fulfill our security policy consumes a lot of computational power. Types must be loaded, checked and stored for almost every bytecode. The bounds of the OS and LV must be checked such that no bytecode performs an overflow. The HW Accelerated D-VM layer uses specific HW protection units of the smart card to accelerate these security checks. New protection units (bound protection and type protection) are able to check whether the current memory move (MOV) operation is operating within the correct memory bounds. The type information for the OS and LV entries is stored as an additional type bit for every main memory word. The information is decoded into new assembly instructions that specify which memory region (OS, LV or BA) and with which data type (integralData or reference) the MOV operation should write or read data. An overview of the HW Accelerated D-VM is shown in Figure 5. Depending on the assembly instruction, the HW protection units perform four security operations:
- Check if the Java opcode is fetched from the currently active BA.
- Check if the destination address of the operation is within the memory area of the OS or LV. If the operation is not within these two bounded areas, a HW security exception is thrown.
- For every write operation, write the type decoded in the CPU instruction into the accessed memory word.
- For every read operation, check if the stored type is equal to the type decoded in the CPU instruction. If they are not equal, throw a HW security exception.
Prototype Results
In this section, we present the overall computational overhead of the three implemented D-VM layers and their main memory consumption. All of them are compared with a VM implementation without the D-VM layer. The speed comparison is performed for different groups of bytecodes using self-written micro-benchmarks in which all bytecodes under test are measured. These test programs first perform an initialization phase in which the operands needed by the bytecode under test are written into the OS or LV. After the execution of the bytecode under test, its effects on the OS or LV are removed. Note that our smart card platform has no data or instruction cache. Therefore, no caching effects need to be taken into account for the test programs.
Computational Overhead
Speed comparisons for specific bytecodes are shown in Figure 6. For example, the Java bytecode sload requires 148% more execution time with the Word Storing D-VM. With the Bit Storing D-VM, the execution overhead is 212%. The increased overhead is due to the expensive calculations used to store the type information in a bitmap. With the HW Accelerated D-VM, the execution speed decreases by only 4% because all type and bound checks are performed in HW. Additional run-time statistics for groups of bytecodes are listed in Table 1.
As expected, the Bit Storing D-VM consumes the most overall run-time, with an increase of 208%. The Word Storing D-VM needs 142% more run-time. The HW Accelerated D-VM has only 6% more overhead.
Main Memory Consumption
The HW Accelerated D-VM requires one type bit per 8 bits of data to store the type information during run-time. This results in an overall main memory increase of 12.5%. The Word Storing D-VM requires in the worst case 33% more memory because one type byte holds the type information for two data bytes. The Bit Storing D-VM requires approximately 6.25% more memory in the case in which the entire memory is filled with OS and LV data, because it requires one type bit per 16 bits of data.
Conclusions and Future Work
This work presents a novel security layer for the virtual machine (VM) on Java Cards. Because it is intended to defend against fault attacks (FAs), it is called the defensive VM (D-VM) layer. This layer provides access to security-critical resources of the VM, such as the operand stack, the local variables and the bytecode area. Inside this layer, security checks, such as type checks, bound checks and control-flow checks, are performed to protect the card against FAs, which are executed during run-time to change the control and data flow of the currently executing bytecode. By storing different implementations of the D-VM layer on the card, it is possible to choose the appropriate security implementation based on the current security level of the card: for example, low security with high execution speed, or high security with low execution speed. Through this approach, the number of security checks can be increased during run-time by switching among D-VM implementations. Furthermore, it is possible to assign a trustworthy applet a low security level, which results in high execution performance, and vice versa. Another advantage is the concentration of the security checks inside the layer.
To demonstrate this novel security concept, we implemented three D-VM layers on a smart card prototype. All three layers fulfill the same security policy (control-flow, type and bound) for bytecodes but differ in their implementation details. Two D-VM layer implementations are fully implemented in software but differ in the manner in which the type information is stored. The Bit Storing D-VM has the highest run-time overhead, 208%, but the lowest memory increase, 6.25%. The Word Storing D-VM decreases the run-time overhead to 142% but consumes approximately 33% more memory. The HW Accelerated D-VM uses dedicated Java Card HW to accelerate the run-time verification process and has an execution overhead of only 6% and a memory increase of 12.5%.
In future work, we will focus on the question of which sensor data should be used to increase the internal security of the Java Card. Another question is how many security states are required and how much they differ in their security needs.
Listing 1.1. Pseudo-code for fetching a bytecode and implementing the sadd bytecode through the D-VM API.

Fig. 2. The VM executes Java Card applets and uses the newly introduced D-VM layer to secure the Java Card against FAs.

Fig. 3. Based on the current security level of the VM, an appropriate D-VM layer implementation is chosen.

Fig. 4. The Bit Storing D-VM stores the type information for every OS and LV entry in a type bitmap. The Word Storing D-VM stores the type information below the value in the reserved OS and LV spaces. The HW Accelerated D-VM holds the type information as an additional type bit, which increases the memory size of a word from 8 bits to 9 bits.

Listing 1.2. Operations needed to push an element onto the OS by the Bit Storing D-VM.
dvm_push_integralData(value) {
  // push value onto OS and increase OS size
  OS[size++] = value;
  // store type information into type bitmap, INT -> integralData type
  bitmap[size/8] = INT << (size % 8);
}

Listing 1.3. Operations needed to push an element onto the OS by the Word Storing D-VM.
dvm_push_integralData(value) {
  // push value onto OS and increase OS size
  OS[size++] = value;
  // store type information into next memory word, INT -> integralData type
  OS[size++] = INT;
}

Fig. 5. Overview of the HW Accelerated D-VM implementation using new typed assembly instructions to access VM resources (OS, LV and BA). Malicious Java bytecodes violating our run-time policy are detected by the newly introduced HW protection units.
Table 1. Speed comparison for different groups of bytecodes compared with a VM without the D-VM layer.

Bytecode Groups    HW Accelerated D-VM   Word Storing D-VM   Bit Storing D-VM
Arithmetic/Logic   +7%                   +146%               +240%
LV Access          +5%                   +185%               +243%
OS Manipulation    +5%                   +151%               +231%
Control Transfer   +7%                   +113%               +173%
Array Access       +5%                   +130%               +166%
Overall            +6%                   +142%               +208%
 | 38,073 | [
Acknowledgments The authors would like to thank the Austrian Federal Ministry for Transport, Innovation, and Technology, which funded the CoCoon project under the FIT-IT contract FFG 830601. We would also like to thank our project partner NXP Semiconductors Austria GmbH. | 38,073 | [
"974301",
"1003793",
"1003794",
"1003795",
"1003796"
] | [
"65509",
"65509",
"65509",
"65509",
"65509"
] |
01485938 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01485938/file/978-3-642-38530-8_9_Chapter.pdf | Pierre Dusart
email: pierre.dusart@xlim.fr
Sinaly Traoré
Lightweight Authentication Protocol for Low-Cost RFID Tags
Keywords:
Providing security in low-cost RFID (Radio Frequency Identification) tag systems is a challenging task because low-cost tags cannot support strong cryptography which needs costly resources. Special lightweight algorithms and protocols need to be designed to take into account the tag constraints. In this paper, we propose a function and a protocol to ensure pre-shared key authentication.
Introduction
In the future, optical bar codes based systems will be replaced by Radio Frequency Identification systems. These systems are composed of two parts:
- an RFID tag, which replaces the bar code;
- an RFID reader, which handles the information sent by the tag.
The tag consists of a microchip which communicates with a reader through a small integrated antenna. Various external form factors can be used: the tag can look like a sheet of paper or a plastic card, or it can be integrated below a bar code for backward compatibility with existing devices.
RFID tags offer many advantages over optical bar codes [START_REF] Agarwal | RFID: Promises and Problems[END_REF]:
- the use of a microchip enables a range of functionalities like computing capability or readable/writable storage. The stored data, depending on the capacity of the tag, can range from a static identification number to rewritable user data.
- the use of an RF antenna enables communication between the reader and the tag without line of sight, from a distance of several decimeters [START_REF] Weis | Rfid (radio frequency identification): Principles and applications 3[END_REF]. A reader can communicate sequentially with up to a hundred tags per second.
To provide more functionality than bar codes, the tag may require data storage. For example, the price of a product can be stored in the tag [3]. To know the price of a product, the customer can query the tag directly instead of asking the database server connected to the cash register. With these new features, the adoption of RFID technology is growing: inventory without unpacking [START_REF] Östman | Rfid 5 most common applications on the shop floor[END_REF], prevention of counterfeiting [START_REF] James | Fda, companies test rfid tracking to prevent drug counterfeiting[END_REF] and quality chains with environmental sensing [START_REF] Miles | RFID Technology and Applications[END_REF] are deployed applications. Tag systems can be easily adapted for universal deployment by various industries at low prices.
But a new technology must also take into account problems inherited from legacy systems. For example in a shop, security problems to deal with are:
- an item is exchanged for another (for RFID, this means substituting a fake tag for the genuine one);
- a price is changed without authorization by a malicious user (for RFID, this means writing to a tag); ...
In addition, the privacy problem must be considered in some contexts, i.e., a user must not unintentionally reveal information about himself. For RFID, this means that a tag should reveal its identity only to authenticated partners.
To cope with security and privacy problems, the first idea is to use asymmetric cryptography (e.g., RSA [START_REF] Rivest | A method for obtaining digital signatures and public-key cryptosystems[END_REF]) as in public key infrastructures. Unfortunately, tags with strong cryptography [START_REF] Feldhofer | Strong crypto for rfid tags -a comparison of low-power hardware implementations[END_REF] and tamper-resistant hardware [START_REF] Kömmerling | Design principles for tamper-resistant smartcard processors[END_REF] are too expensive for wide deployment.
Hence a constrained class of cryptography [START_REF] Poschmann | Lightweight cryptography -cryptographic engineering for a pervasive world[END_REF], named lightweight cryptography, has appeared.
The aim of this paper is to propose a protocol and its related computational function. Section 2 introduces the system model and the underlying assumptions for our protocol. Then related work is presented in section 3. The protocol environment is described in section 4. Section 5 presents the protocol details and the computational functions. Section 6 provides an analysis of some security constraints and shows that the protocol satisfies the lightweight class. Section 7 illustrates how our protocol behaves against cryptographic attacks.
System model and assumptions
We consider a system with one RFID tag reading system and several low-cost RFID tags. We assume that each tag shares a secret K with the reader, established in a secure manner before the beginning of the communication (e.g., at the manufacturing stage). The aim of the communication is to authenticate the tag, i.e., to find its identity and prove that it belongs to the system (by knowing the same secret).
The tag is passively powered by the reader, thus:
- the communication needs to be short (speed and simplicity of an algorithm are usually qualifying factors);
- the communication can be interrupted at any time if the reader does not supply enough energy to the tag.
For cost reasons, the standard cryptographic primitives (hash functions, digital signatures, encryption) are not implemented (not enough computation power is available, or too much memory would be required). Hence, we need a protocol using primitives with a low complexity. This property, named the "lightweight property" [START_REF] Poschmann | Lightweight cryptography -cryptographic engineering for a pervasive world[END_REF], consists in using basic boolean operations like XOR, AND, etc. The security of protocols also needs a good random number generator [START_REF] Hellekalek | Good random number generators are (not so) easy to find[END_REF]. This part can be handled by the reader environment, where the available resources can be greater and more costly (e.g., a computer connected to a tag reading system).
Related work
RFID technology needs security mechanisms to ensure the tag identity. Tag spoofing, where an attacker replaces the genuine tag by its own creation, is defeated if good authentication mechanisms are used. But classical authentication solutions use cryptographic primitives like AES [START_REF]Advanced encryption standard[END_REF] or hash functions (SHA1 [START_REF] Eastlake | US Secure Hash Algorithm 1 (SHA1)[END_REF] or MD5 [START_REF] Rivest | The MD5 Message-Digest Algorithm[END_REF]) which are not adapted to low-cost RFID tags. It is thus necessary to look for new primitives suitable for this resource-constrained environment. In [START_REF] Vajda | Lightweight authentication protocols for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Lmap: A real lightweight mutual authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | M 2 ap: A minimalist mutual-authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Emap: An efficient mutual-authentication protocol for low-cost rfid tags[END_REF], the authors suggest some protocol families based on elementary arithmetic (e.g., binary bit addition or modular addition by a power of 2). However, in [START_REF] Defend | Cryptanalysis of two lightweight rfid authentication schemes[END_REF], B. Defend et al. defeated the XOR and SUBSET protocols given in [START_REF] Vajda | Lightweight authentication protocols for low-cost rfid tags[END_REF] by learning the key sequence. They proved that with few resources, an attacker can recover the session keys of these two protocols. The LMAP, M 2 AP and EMAP protocols proposed respectively in [START_REF] Peris-Lopez | Lmap: A real lightweight mutual authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | M 2 ap: A minimalist mutual-authentication protocol for low-cost rfid tags[END_REF][START_REF] Peris-Lopez | Emap: An efficient mutual-authentication protocol for low-cost rfid tags[END_REF] allow a mutual authentication between the reader and the tag but are also completely broken [START_REF] Li | Security analysis of two ultra-lightweight rfid authentication protocols[END_REF] by key recovery attacks. In [START_REF] Lee | Efficient rfid authentication protocols based on pseudorandom sequence generators[END_REF], the authors proposed a family of protocols, called S-protocols, based on a family of generic random number generators that they introduced in the same paper. They presented a formal proof which guarantees the resistance of the S-protocol against desynchronization [START_REF] Lo | De-synchronization attack on rfid authentication protocols[END_REF][START_REF] Van Deursen | Security of rfid protocols -a case study[END_REF] and impersonation [START_REF]Sixth International Conference on Availability, Reliability and Security, ARES 2011[END_REF] attacks. With a small modification, they proposed the family of S*-protocols, which not only has the properties of the S-protocols but also allows a mutual authentication between the reader and the tag. However, the authors do not show that their generic functions are compatible with lightweight RFID tags.
In [START_REF] Yeh | Securing rfid systems conforming to epc class 1 generation 2 standard[END_REF], Yeh proposes a protocol corrected by Habibi [START_REF] Habibi | Attacks on a lightweight mutual authentication protocol under epc c-1 g-2 standard[END_REF], but attacks [START_REF] Castro | Another fallen hash-based rfid authentication protocol[END_REF] appeared using O(2^17) off-line evaluations of the main function. Recently, some protocols have also been defined in ISO/IEC WD 29167-6. Since they use an AES engine [START_REF] Song | Security improvement of an rfid security protocol of iso/iec wd 29167-6[END_REF], they are out of the scope of this paper.
Protocol requirements and Specifications
We want to use a very simple dedicated protocol which uses a non-invertible function h. We provide a protocol in which the tag identity is sent in a secure manner and the tag is authenticated according to a challenge given by the reader. Then the reader shows that it knows a secret key by calculating an answer to the tag challenge.
We present the authentication protocol: the reader needs to verify the identity of the tag. For the verification of the tag identity iD, the RFID reader R sends a challenge C to the tag T. Next, the tag proves its identity iD by computing a response using the common secret K shared with the reader. We avoid taking K = 0 for maximum security. Denoting this response by Auth, the authentication phase is presented in the following scheme:
- R → T : C = (C_0, C_1, ..., C_15), where the C_i are randomly chosen bytes.
- T → R : Auth = [iD ⊕ h_K(C), h_iD(C)]
To verify, the reader computes h_K(C) using its challenge C and the key K, and can then retrieve the identity of the tag. Next, the authentication of the tag can be verified by computing h_iD(C) using the result of the previous computation and the first challenge. The protocol allows card authentication by the reader. It can be adapted to allow mutual authentication with a slight modification: a challenge C' (which can be a counter) is sent with the tag response Auth. Next, the reader should respond with the computation of h_{K⊕C'}(C ⊕ iD).
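The reader-side logic can be sketched as follows; the function h is replaced here by a dummy placeholder with the right interface (the real h is defined in Section 5), so the sketch only illustrates the message handling, not the cryptography.

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Placeholder for the 16-byte keyed function h defined in Section 5;
 * any implementation with the same interface can be plugged in here. */
static void h(const uint8_t key[16], const uint8_t msg[16], uint8_t out[16]) {
    for (int i = 0; i < 16; i++) out[i] = (uint8_t)(key[i] ^ msg[i] ^ 0x63); /* dummy */
}

/* Reader side: recover iD from Auth and check the second half of Auth. */
static int verify_tag(const uint8_t K[16], const uint8_t C[16],
                      const uint8_t auth[32] /* iD xor h_K(C) || h_iD(C) */) {
    uint8_t hk[16], id[16], hid[16];
    h(K, C, hk);
    for (int i = 0; i < 16; i++) id[i] = (uint8_t)(auth[i] ^ hk[i]);
    h(id, C, hid);
    return memcmp(hid, auth + 16, 16) == 0 ? 0 : -1;
}

/* Tag side, for the round trip: Auth = [iD ^ h_K(C), h_iD(C)]. */
static void tag_answer(const uint8_t K[16], const uint8_t iD[16],
                       const uint8_t C[16], uint8_t auth[32]) {
    uint8_t hk[16];
    h(K, C, hk);
    for (int i = 0; i < 16; i++) auth[i] = (uint8_t)(iD[i] ^ hk[i]);
    h(iD, C, auth + 16);
}

int main(void) {
    uint8_t K[16] = {1}, iD[16] = {7}, C[16] = {9}, auth[32];
    tag_answer(K, iD, C, auth);
    printf("tag accepted: %s\n", verify_tag(K, C, auth) == 0 ? "yes" : "no");
    return 0;
}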
Proposal description
Our protocol uses a function h that is composed of two sub-functions S and f taking respectively one and two bytes as input. The function h used in the protocol must be lightweight (for low-cost devices) and satisfy some properties:
- it must be like a one-way function (the input cannot be retrieved from the output);
- its output must appear random;
- its output length must be sufficient to provide enough intrinsic security (to avoid replay and exhaustive authentication search).
We define an input size and an output size of 16 bytes for h, and the same size for the secret key K. The output is presented in 16-byte form so that an algorithm defined on bytes can be iterated. Function f, which processes byte data blocks, and a substitution function S are described in the following subsections.
Function design
f function Here we define the function f which needs two input bytes to produce an output result of one byte.
f : F_256 × F_256 → F_256, (x, y) → z with

z := [x ⊕ ((255 - y) ≫ 1)] + 16·[((255 - x) ⊕ (y ≪ 1)) mod 16] mod 256,   (1)

where ⊕ is the bitwise exclusive or, + represents the classical integer addition, n ≫ 1 divides n by 2, n ≪ 1 multiplies n by 2 and keeps the result modulo 256 by not taking a possible overflow into account, and "16·" is the classical multiplication by 16. In subsection 6.2, we explain how to keep these various operations lightweight by using 8-bit registers.
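A direct C transcription of equation (1) could look as follows; note that the shift directions (right shift on 255 − y, left shift on y) reflect our reading of the formula, and that the casts to 8-bit values implement the reductions modulo 256.

#include <stdint.h>
#include <stdio.h>

/* f as defined in equation (1): all quantities are bytes, so the additions
 * and the left shift are reduced modulo 256 by 8-bit arithmetic. */
static uint8_t f(uint8_t x, uint8_t y) {
    uint8_t left  = (uint8_t)(x ^ (uint8_t)((uint8_t)(255 - y) >> 1));
    uint8_t right = (uint8_t)(((uint8_t)(255 - x) ^ (uint8_t)(y << 1)) & 0x0F);
    return (uint8_t)(left + 16u * right);
}

int main(void) {
    printf("f(0x12, 0x34) = 0x%02x\n", f(0x12, 0x34));
    printf("f(0x34, 0x12) = 0x%02x\n", f(0x34, 0x12)); /* differs: f is non-symmetric */
    return 0;
}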
We have the following properties:
- f is non-symmetric, i.e., for all pairs (x, y) in F_256 × F_256, the function verifies f(x, y) ≠ f(y, x);
- f has a uniform distribution of values, i.e., for all z in F_256, the function verifies
#{(x, y) ∈ F_256 × F_256 : f(x, y) = z} = 256.
These properties can be easily verified. Hence we consider that the f function is one-way: one cannot retrieve the right (x, y) entry from the value z. The function h inherits this property. Let i ∈ {0, ..., 15} be a vector index and j ∈ {1, 2, 3, 4} be a round index. Let M = (M_0, ..., M_15) be a vector of 16 bytes. The entries used by the function f depend on the vector index i and the round index j. We define:
F^j_i(M) = f(M_i, M_((i+2^(j-1)) mod 16)) and F^j(M) = (F^j_0(M), F^j_1(M), ..., F^j_15(M)).
A working example of these indexes can be found in the table 2.
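The index pattern referred to by Table 2 can also be generated programmatically; the short program below prints, for each round j, which state bytes F^j combines.

#include <stdio.h>

/* Prints, for each round j, which state byte is paired with which:
 * F^j_i uses (M_i, M_((i+2^(j-1)) mod 16)). */
int main(void) {
    for (int j = 1; j <= 4; j++) {
        int offset = 1 << (j - 1);      /* 1, 2, 4, 8 for j = 1..4 */
        printf("round %d (offset %d): ", j, offset);
        for (int i = 0; i < 16; i++) {
            printf("(%d,%d) ", i, (i + offset) % 16);
        }
        printf("\n");
    }
    return 0;
}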
S function Our S function is not a new one. We choose the AES [START_REF]Advanced encryption standard[END_REF][START_REF] Daemen | The Design of Rijndael: AES -The Advanced Encryption Standard[END_REF] SubBytes function for the quality of its properties.
The SubBytes transformation is a non-linear byte substitution. For example, the eight-bits data "00000000" is transformed into B = "01100011".
To avoid attacks based on simple algebraic properties, the SubBytes transformation is defined as the composition of the following two transformations in the finite field F_{2^8}, with the chosen structure representation

F_{2^8} ≅ F_2[X]/(X^8 + X^4 + X^3 + X + 1).
The first transformation takes the multiplicative inverse in the Galois field GF(2^8), which is known to have good non-linearity properties; the 8-bit element "00000000" ({00} in hexadecimal format) is mapped to itself. Next, the result is combined with an invertible affine transformation:
x → Ax ⊕ B,
where A is a fixed 8 × 8 matrix over GF(2), B is the byte defined above, and ⊕ operates as "exclusive or" on the individual bits of a byte.

The SubBytes transformation is also chosen to avoid any fixed point (S(a) = a), any opposite fixed point (S(a) = ā) and any self-invertible point (S(a) = S^(-1)(a)).
Because it is based on many mathematical objects, the SubBytes function could seem difficult to implement, but the transformation can be reduced to an 8-bit substitution box. Hence, for any element, the result can be found by a table lookup (see Figure 7 of [START_REF]Advanced encryption standard[END_REF]: substitution values for the byte {xy} in hexadecimal format).
We define by S the following transformation: let M = (M_0, ..., M_15) be a 16-byte vector. Let S be the function which associates M with the vector
S(M) = (SubBytes(M_0), ..., SubBytes(M_15)).
Description of the authentication function
h : (C, K) -→ H
Formally, we will follow the tag computation. First, we add the challenge to key by Xor operation, i.e. we calculate
D = C ⊕ K = (C 0 ⊕ K 0 , . . . , C 15 ⊕ K 15 ).
Then we apply the substitution S to D. The first state M^0 is initialized by M^0 = S(D). Then, we calculate the following values:

M^1 = S(F^1(M^0)) ⊕ K,
M^2 = S(F^2(M^1)) ⊕ K,
M^3 = S(F^3(M^2)) ⊕ K,
M^4 = S(F^4(M^3)) ⊕ K.

Finally, the function returns H = M^4 = (M^4_0, ..., M^4_15). We denote the result H by h_K(C).
The figure 1 summarizes this description and a more classical definition can be found through the algorithm 1.
Input: C, K
Output: H
M^0 = S(C ⊕ K)
for j = 1 to 4 do
  M^j = S(F^j(M^(j-1))) ⊕ K
end for
H = M^4
return H

Fig. 1. Authentication function.
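Putting the pieces together, one possible C sketch of h is shown below; the S-box is computed on the fly from inversion and the affine map instead of using the stored 256-byte table, and the shift directions inside f follow our reading of equation (1), so this is an illustration of the construction rather than a reference implementation.

#include <stdint.h>
#include <stdio.h>

/* GF(2^8) multiplication with the AES polynomial x^8 + x^4 + x^3 + x + 1. */
static uint8_t gmul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1) p ^= a;
        uint8_t hi = (uint8_t)(a & 0x80);
        a = (uint8_t)(a << 1);
        if (hi) a ^= 0x1b;
        b >>= 1;
    }
    return p;
}

/* AES SubBytes, built from inversion plus the affine map. */
static uint8_t S(uint8_t x) {
    uint8_t inv = 0;
    if (x != 0) {
        for (int y = 1; y < 256; y++) {
            if (gmul(x, (uint8_t)y) == 1) { inv = (uint8_t)y; break; }
        }
    }
    uint8_t r = inv, s = inv;
    for (int i = 1; i <= 4; i++) {
        r = (uint8_t)((uint8_t)(r << 1) | (r >> 7));  /* rotate left by 1 */
        s ^= r;
    }
    return (uint8_t)(s ^ 0x63);                       /* S(0x00) = 0x63   */
}

/* f from equation (1). */
static uint8_t f(uint8_t x, uint8_t y) {
    uint8_t left  = (uint8_t)(x ^ (uint8_t)((uint8_t)(255 - y) >> 1));
    uint8_t right = (uint8_t)(((uint8_t)(255 - x) ^ (uint8_t)(y << 1)) & 0x0F);
    return (uint8_t)(left + 16u * right);
}

/* h_K(C): one initial substitution, then four rounds of F, S and key mixing. */
static void h(const uint8_t K[16], const uint8_t C[16], uint8_t H[16]) {
    uint8_t M[16], T[16];
    for (int i = 0; i < 16; i++) M[i] = S((uint8_t)(C[i] ^ K[i]));   /* M^0 */
    for (int j = 1; j <= 4; j++) {
        int off = 1 << (j - 1);
        for (int i = 0; i < 16; i++) T[i] = S(f(M[i], M[(i + off) % 16]));
        for (int i = 0; i < 16; i++) M[i] = (uint8_t)(T[i] ^ K[i]);  /* M^j */
    }
    for (int i = 0; i < 16; i++) H[i] = M[i];
}

int main(void) {
    uint8_t K[16] = {0}, C[16] = {0}, H[16];
    K[0] = 1;                          /* the paper advises against K = 0 */
    h(K, C, H);
    for (int i = 0; i < 16; i++) printf("%02x", H[i]);
    printf("\n");
    return 0;
}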
Analysis
Protocol security
The identity of the tag is not revealed directly: the tag's identity iD is masked by h_K(C), an output of the h function which appears random. But the reader can still determine the identity iD using the shared secret key K. The reader then verifies that this identity has been used to compute the second part of the authentication.
Algorithm 1 Tag computations
Input: C = (C_0, ..., C_15), K = (K_0, ..., K_15)
Output: H = (H_0, ..., H_15)
{Comment: Computation of M^0 = S(C ⊕ K)}
for i = 0 to 15 do
  M_i ← S(C_i ⊕ K_i)
end for
{Comment: Computation of S(F^j(M^(j-1)))}
for j = 1 to 4 do
  for i = 0 to 15 do
    k ← M_i ⊕ ((255 - M_((i+2^(j-1)) mod 16)) ≫ 1)
    l ← ((255 - M_i) ⊕ (M_((i+2^(j-1)) mod 16) ≪ 1)) mod 16
    t ← (k + 16·l) mod 256
    Temp_i ← S(t)
  end for
  {Comment: Computation of M^j = Temp ⊕ K}
  for i = 0 to 15 do
    M_i ← Temp_i ⊕ K_i
  end for
end for
for i = 0 to 15 do
  H_i ← M_i
end for
return H

At this state, the reader is sure that the tag with identity iD knows the secret key K.
But as mentioned in Section 4, mutual authentication can be set up by adding the following steps. The reader shows that it knows K and iD by computing h_{K⊕C'}(C ⊕ iD), where C' is the challenge given by the tag. The tag authenticates the reader by performing the same computation and comparing the proposed result with its own. If they are equal, mutual authentication is achieved. Now we consider two cases:
- Fake tag: the tag receives the challenge C. It can arbitrarily choose a number iD to try to enter the system, but it does not know K and therefore cannot compute the first part of the authentication response.
- Fake reader: the reader chooses and sends C. Next, it receives a proper tag authentication. It can find neither iD from h_iD(C) (because h is a one-way function) nor K.
Lightweight
We have to establish that function could be programmed using usual assembler instructions. We refer to ASM51 Assembler [30]. First we use 8-bit registers. To represent an entry of 128 bits, eight registers or space blocks must be reserved.
Next we can implement the f function defined by (1) using very simple instructions using a register named A and a carry C:
- The computation of A ≪ 1 can be translated by CLR C (Clear Carry) followed by RLC A (Rotate Left through Carry). The computation of A ≫ 1 can be translated by RRC A (Rotate Right through Carry).
-The computation of 255 -A can be translated by CPL A, the complemented value. -The bitwise-xor is classically translated by XRL.
-The modular reduction by 16 can by translated by AND 0x0F.
- The multiplication by 16 can be translated by four left shifts, or by AND 0x0F followed by SWAP, which swaps nibbles.
- The modular addition (mod 256) can be translated simply by ADD, without taking care of possible carries, since an 8-bit register is used.
The SubBytes function can be implemented by a table lookup, as explained in Figure 7 of [START_REF]Advanced encryption standard[END_REF]. This part of the AES algorithm can be computed with few gates compared to the whole AES (the most costly part being the key expansion, according to Table 3 of [START_REF] Hamalainen | Design and implementation of low-area and low-power aes encryption hardware core[END_REF]). Now we claim that the properties of the h function presented in Section 5 are satisfied:
- the overflows in f are intended and contribute to the non-reversibility of the h function;
- the output appears random (subsection 6.4);
- the avalanche criterion (subsection 6.3) shows that the output distribution of f carries over well to the outputs of h.
Strict Avalanche Criterion
The strict avalanche criterion was originally presented in [START_REF] Forré | The strict avalanche criterion: Spectral properties of boolean functions and an extended definition[END_REF], as a generalization of the avalanche effect [START_REF] Webster | On the design of s-boxes[END_REF]. It was introduced for measuring the amount of nonlinearity in substitution boxes (S-boxes), like in the Advanced Encryption Standard (AES). The avalanche effect tries to reflect the intuitive idea of high-nonlinearity: a very small difference in the input producing a high change in the output, thus an avalanche of changes.
Denote by HW the Hamming weight and DH(x, y) = HW (x ⊕ y) the Hamming distance.
Mathematically, the avalanche effect can be formalized by
∀x, y such that DH(x, y) = 1, average(DH(F(x), F(y))) = n/2,
where F is a candidate to have the avalanche effect. So the Hamming distance between the output of an n-bit random input and the output obtained by randomly flipping one of its bits should be, on average, n/2. That is, a minimum input change (one single bit) is amplified and produces a maximum output change (half of the bits) on average. First we show that if an input bit is changed, then the modification will change an average of one half of the following byte. The input byte x is changed to x′ with a difference ∆x of one bit. After the first SubBytes transformation, the difference will be S(x ⊕ k) ⊕ S(x′ ⊕ k) = S(y) ⊕ S(y ⊕ ∆x), with y = x ⊕ k. We have on average

(1/(256·8)) Σ_y Σ_{∆x, HW(∆x)=1} HW(S(y) ⊕ S(y ⊕ ∆x)) ≈ 4,
where HW is the Hamming weight. Hence an average of four bits will change if the difference is of one bit. Furthermore, for any difference ∆x,

(1/256) Σ_{y ∈ F_256} HW(S(y) ⊕ S(y ⊕ ∆x)) = 4.
Our function satisfies the avalanche effect as

(1/256^2) Σ_x Σ_y HW(x ⊕ S(f(x, y))) ≈ 4.
Next we show that if an input bit is changed, then the modification will be spread over all the bytes of the output. Suppose that a bit of the k-th byte M^0_k is changed (1 ≤ k ≤ 16). Then M^1 is also changed, as the SubBytes substitution is not a constant function. At the first round, the bytes k and k + 1 will be modified. At the second round, the bytes k, k + 2, k + 1 and k + 3 will be modified. Then eight bytes will be modified and, at the end, all 16 bytes will be modified.
For example, suppose the first input byte is changed (M^0_0 is changed). Then, since M^0_0 is used to compute M^1_0 and M^1_15, a difference appears in M^1_0 and M^1_15, and so on. We trace the difference diffusion in the following table:
Table 1. Diffusion table: bytes modified by a difference in M_0, round by round.
First Xor: M_0
j = 1:     M_0, M_15
j = 2:     M_0, M_13, M_14, M_15
j = 3:     M_0, M_9, M_10, M_11, M_12, M_13, M_14, M_15
j = 4:     M_0, ..., M_15 (all bytes)
Last Xor:  M_0, ..., M_15 (all bytes)
If another byte is changed, the same remark works by looking in the dependence table 2.
Hence for any input difference, the modification will change an average of one half of the output.
Security Quality
To evaluate the security quality, we take Y = 1 and X = 0. We consider the iterated outputs of the authentication function. Hence we test the series h_Y(X), h_Y(h_Y(X)), ... as a random bitstream with the NIST test suite [START_REF] Williams | A statistical test suite for the validation of random number generators and pseudo random number generators for cryptographic applications[END_REF]. The bitstream satisfies all the tests (parameters of the NIST software: 10^6 input bits, 10 bitstreams).
Hardware Complexity: Implementation and computational Cost
We choose an 8-bit CPU tag for cost reasons. We implemented the authentication function on a MULTOS card [START_REF]Multos: Multos developer's guide[END_REF] without difficulty. This card is not a low-cost card, but we only test the implementation with basic instructions. The code size of the authentication function (with the S-box table) without manual optimization is 798 bytes. We can optimize the memory usage:
- the S-box table can be placed in a read-only memory area: 256 bytes are needed for the AES SubBytes table.
- the variables placed in Random Access Memory can be optimized. For the internal state computation, one has to represent M with 16 bytes, and we need two supplementary temporary bytes: at each round, a state byte value M_i is used twice to compute the next state. In fact, M^j_i is used to compute M^(j+1)_i and M^(j+1)_((i+2^(j-1)) mod 16). After the computation of these two variables, the space allocated for the variable M^j_i can be reused. Next we compute the value M^(j+1)_((i+2^(j-1)) mod 16), which depends on M^j_((i+2^(j-1)) mod 16) and another byte. Now we can free the memory space of M^j_((i+2^(j-1)) mod 16) and compute another byte of M^(j+1), step by step. Hence we use only two additional bytes to compute the next state of M.
We evaluate the computational time with a PC (Intel CoreDuo T9600, 2.8 GHz): 30 s for 10^7 authentications with a program written in C, i.e., 3 µs per authentication.
Privacy
Even if RFID technology is used to identify items in tracing systems, in many cases this technology could cause infringements of privacy rights. We do not prevent the tracing system from recording information, but we need to protect the tag iD from external recording. Hence, if an attacker records all transactions between a tag and a reader, he cannot determine whether the same tag has been read once or many times. Conversely, a fake reader can determine whether it has previously queried a tag by always sending the same challenge and recording the responses, but it cannot learn the real iD of the tag.
Attacks
The attacker's aim is to validate its tag identity. He can do this by producing a response to a challenge. If he can exploit the attack in a feasible way, then we say that the protocol is broken. Such a success of the attacker might be achieved with or without recovering the secret key shared by the reader and the tag. Hence a large key size is not enough to prove that the protocol cannot be broken with brute force attack. We might also take into account other attacks where the attacker can record, measure and study the tag responses. The necessary data could be obtained in a passive or in an active manner. In case of a passive attack, the attacker collects messages from one or more runs without interfering with the communication between the parties. In case of an active attack, the attacker impersonates the reader and/or the tag, and typically replays purposefully modified messages observed in previous runs of the protocol.
Recording Attacks
Replay attack by recording: An attacker tries to extract the secret of a tag. He uses a reader and knows the commands needed to perform exchanges with the tag. He queries the tag many times. By listening to different requests, one can record n complete answers. A complete record is composed of a challenge C and the associated response Auth. Next, if a recorded challenge C is used or reused, then the attacker knows the correct response Auth. This attack works, but:
- The attacker must have time to record all the possibilities;
- To create a fake tag, the tag must have 2^128 · (2 · 2^128) bits (e.g., 10^60 TB) of memory to store the previous records and return the right answer. If this type of tag exists, it is not a commercial one.
- The challenge C, generated by the reader environment, is supposed to be random. So for a fixed C, the probability of having the right answer is very low.
Relay Attack [START_REF] Kasper | An embedded system for practical security analysis of contactless smartcards[END_REF]: the attacker makes a link between the reader and tag; it's a kind of Man-in-the-Middle attack. He creates independent connections with reader and tag and relays messages between them. Hence a tag can be identified without being in the reader area. The problem can be treated by security environment protections. A partial solution to protect tag against this attack [START_REF] Schneier | Rfid cards and man-in-the-middle attacks[END_REF] is to limit its communication distance, but this countermeasure limits the potential of RFID tags. A better way is to activate a distance-bounding protocol [START_REF] Hancke | An rfid distance bounding protocol[END_REF].
Man-In-The-Middle attack: A man-in-the-middle attack is not possible because our proposal is based on a mutual authentication, in which two random numbers (C, C ), refreshed at each iteration of the protocol, are used. One cannot forge new responses using challenge differences because h iD (C+∆) = h iD (C)+∆ and h K (C +∆) = h K (C)+∆. In the same way, h K⊕C ⊕∆ (C ⊕iD) = h K⊕C (C ⊕ iD) ⊕ ∆.
Side channels attacks
Timing attack: a timing attack [START_REF] Kocher | Timing attacks on implementations of diffie-hellman, rsa, dss, and other systems[END_REF] is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute the cryptographic algorithm. The attack exploits the fact that every operation in a computer takes a certain time to execute. If the time cost of an operation depends on the key value or on input values, one can retrieve these secret values by a timing attack. Hence, during the implementation, we must be aware of the timing attack. For the computation of the tag authentication, the time cost of the operations is the same whatever the value of the key. Next, for the reader authentication, the tag must compare the reader response with its own computation. With a poor, but unfortunately classical, implementation, if a difference between two bytes is found, the algorithm stops and returns the information "Authentication failed". This kind of program is sensitive to timing attacks: the execution time differs according to whether the mismatch is found early or not found at all. To be immune from this attack, we always perform a fixed number of steps; the response is sent only when the whole response has been verified. One can also add dummy cycles to equilibrate the parts of an implementation. Hence our function is resistant to timing attacks.
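One common way to obtain such a fixed number of steps is a comparison that accumulates the differences instead of returning at the first mismatch; the following generic C sketch illustrates the idea and is not the card's actual code.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Compares two 16-byte authentication values in a fixed number of steps:
 * the loop never exits early, so the running time does not leak how many
 * leading bytes matched. */
static int equal_fixed_time(const uint8_t a[16], const uint8_t b[16]) {
    uint8_t diff = 0;
    for (size_t i = 0; i < 16; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff == 0;
}

int main(void) {
    uint8_t x[16] = {0}, y[16] = {0};
    y[15] = 1;
    printf("equal: %d\n", equal_fixed_time(x, x)); /* 1 */
    printf("equal: %d\n", equal_fixed_time(x, y)); /* 0 */
    return 0;
}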
Power consumption attack: an attacker studies the power consumption [START_REF] Kocher | Differential power analysis[END_REF] of the tag. He can do this by monitoring the power delivered by the reader to the tag. As the consumption of the chip depends on the executed instructions, the attacker can observe (SPA) the different parts of an algorithm. Here the algorithm does not need to be secret and the operations do not depend on the key values. One can also use random dummy cycles to disrupt the observation of the same part of the program execution. Hence our function is SPA-resistant.
Mathematical Attacks
Lucky Authentication: A attacker tries to have a good authentication with a fake tag. He sends (C Nowadays, this probability is sufficient for a good security.
Active Attack: Suppose that an attacker queries the tag T by sending C = 0 as challenge. Then, to determine the secret K, it must solve the equation
S(F^4(S(F^3(S(F^2(S(F^1(S(K))) ⊕ K)) ⊕ K)) ⊕ K)) ⊕ K = H,   (2)
where H is the response of T and the unknowns are the bytes of K. Since in each round of the algorithm the operations are performed modulo 16 or modulo 256 and the results are processed through substitution tables, equation (2) is very difficult to analyze algebraically.
Linear [START_REF] Matsui | Linear cryptoanalysis method for des cipher[END_REF] or differential [START_REF] Biham | Differential cryptanalysis of des-like cryptosystems[END_REF] attacks: These attacks depend especially on properties of the substitution function. First recall that, for a function g from F_{2^m} to F_{2^m}, a differential pair (α, β) is linked with the equation g(x ⊕ α) ⊕ g(x) = β. The differential attack is based on finding pairs for which the probability

P({x ∈ F_{2^m} : g(x ⊕ α) ⊕ g(x) = β})

is high. If such a pair exists, then the attack is feasible. Our function resists this attack well. Indeed, the substitution function S is constructed by composing a power function with an affine map, which protects against differential attacks. Our h function inherits these properties: considering the output z of f(x, y) described in paragraph 5.1, it is easy to verify (as in paragraph 6.3) that for all α, β ∈ F_256,

#{z ∈ F_256 : S(z ⊕ α) ⊕ S(z) = β} ≤ 4.
This avoids the existence of a differential pair such that the probability P({x ∈ F_256 : S(x ⊕ α) ⊕ S(x) = β}) is high.
To mount a linear attack, one aims at assigning credibilities to equations of the type ⟨α, x⟩ ⊕ ⟨β, S(x)⟩ = 0, with α, β ∈ F_256.

We know that for all α and β not identically equal to zero, the equation has a number of solutions close to 128, which makes the linear attack expensive.
Desynchronizing attack
In a desynchronization attack, the adversary aims to disrupt the key update, leaving the tag and the reader in a desynchronized state in which future authentication would be impossible. In contrast to some other protocols [START_REF] Van Deursen | Security of rfid protocols -a case study[END_REF], the key does not change in our authentication protocol. This is not a lack of security: the key may change during stocktaking or subscription renewal, by replacing the tag with another one carrying the new key.
Conclusion
We have presented a lightweight authentication protocol for low-cost RFID tags. The internal functions are well adapted to an 8-bit CPU with little memory and no cryptoprocessor, even though a precise evaluation of the building cost and performance of a tag supporting our protocol (i.e. very few CPU functions and less than 1 Kbyte of memory) would have to be carried out with a manufacturer.
We use the security qualities of the AES S-boxes to build a function, specifically dedicated to authentication, that preserves them. The notion of privacy and the classical attacks are addressed. The proposed version is light in terms of implementation and comes at a reduced cost, which makes it usable on RFID systems. Even if these systems are intended for simple applications such as a secure photocopy counter or stock management in a small shop, the security level reached here makes more ambitious applications conceivable.
Table 2. Dependency table
Table 3. NIST statistical test results
Test name and percentage of passing sequences (significance level α = 0.01)
1. Frequency Test (Monobit) 99/100
2. Frequency Test (Block) 100/100
3. Runs Test 100/100
4. Longest Run of Ones 99/100
5. Binary Matrix Rank Test 98/100
6. Discrete Fourier Transform Test 98/100
7. Non-Overlapping Template 98/100
8. Overlapping Template 98/100
9. Maurers Universal Statistical 100/100
10. Linear Complexity Test 100/100
11. Serial Test 99/100
12. Approximate Entropy Test 100/100
13. Cumulative Sums (Cusum) Test 98/100
14. Random Excursions Test 90/93
15. Random Excursion Variant Test 91/93
Acknowledgements
The authors want to thank the anonymous reviewers for their constructive comments, which helped to improve this paper, and Damien Sauveron for proofreading preliminary versions.
"6208",
"1003802"
] | [
"444304",
"302584"
] |
01485970 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485970/file/978-3-642-37635-1_12_Chapter.pdf | Carlos G López Pombo
email: clpombo@dc.uba.ar
Pablo F Castro
email: pcastro@dc.exa.unrc.edu.ar
Nazareno M Aguirre
email: naguirre@dc.exa.unrc.edu.ar
Thomas S E Maibaum
Satisfiability Calculus: The Semantic Counterpart of a Proof Calculus in General Logics
Since its introduction by Goguen and Burstall in 1984, the theory of institutions has been one of the most widely accepted formalizations of abstract model theory. This work was extended by a number of researchers, José Meseguer among them, who presented General Logics, an abstract framework that complements the model-theoretical view of institutions by defining the categorical structures that provide a proof theory for any given logic. In this paper we intend to complete this picture by providing the notion of Satisfiability Calculus, which might be thought of as the semantic counterpart of the notion of proof calculus, and which provides the formal foundations for those proof systems that use model construction techniques to prove or disprove a given formula, thus "implementing" the satisfiability relation of an institution.
Introduction
The theory of institutions, presented by Goguen and Burstall in [START_REF] Goguen | Introducing institutions[END_REF], provides a formal and generic definition of what a logical system is, from a model theoretical point of view. This work evolved in many directions: in [START_REF] Meseguer | General logics[END_REF], Meseguer complemented the theory of institutions by providing a categorical characterization for the notions of entailment system (also called π-institutions by other authors in [START_REF] Fiadeiro | Generalising interpretations between theories in the context of π-institutions[END_REF]) and the corresponding notion of proof calculi; in [START_REF] Goguen | Institutions: abstract model theory for specification and programming[END_REF][START_REF] Tarlecki | Moving between logical systems[END_REF] Goguen and Burstall, and Tarlecki, respectively, extensively investigated the ways in which institutions can be related; in [START_REF] Sannella | Specifications in an arbitrary institution[END_REF], Sannella and Tarlecki studied how specifications in an arbitrary logical system can be structured; in [START_REF] Tarlecki | Abstract specification theory: an overview[END_REF], Tarlecki presented an abstract theory of software specification and development; in [START_REF] Mossakowski | Comorphism-based Grothendieck logics[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF] and [START_REF] Diaconescu | Logical foundations of CafeOBJ[END_REF][START_REF] Diaconescu | Grothendieck institutions[END_REF], Mossakowski and Tarlecki, and Diaconescu, respectively, proposed the use of institutions as a foundation for heterogeneous environments for software specification. Institutions have also been used as a very general version of abstract model theory [START_REF] Diaconescu | Institution-independent Model Theory[END_REF], offering a suitable formal framework for addressing heterogeneity in specifications [START_REF] Mossakowski | The heterogeneous tool set[END_REF][START_REF] Tarlecki | Towards heterogeneous specifications[END_REF], including applications to UML [START_REF] Cengarle | A heterogeneous approach to UML semantics[END_REF] and other languages related to computer science and software engineering.
Extensions of institutions to capture proof theoretical concepts have been extensively studied, most notably by Meseguer [START_REF] Meseguer | General logics[END_REF]. Essentially, Meseguer proposes the extension of entailment systems with a categorical concept expressive enough to capture the notion of proof in an abstract way. In Meseguer's words:
A reasonable objection to the above definition of logic 5 is that it abstracts away the structure of proofs, since we know only that a set Γ of sentences entails another sentence ϕ, but no information is given about the internal structure of such a Γ ϕ entailment. This observation, while entirely correct, may be a virtue rather than a defect, because the entailment relation is precisely what remains invariant under many equivalent proof calculi that can be used for a logic. Before Meseguer's work, there was an imbalance in the definition of a logic in the context of institution theory, since the deductive aspects of a logic were not taken into account. Meseguer concentrates on the proof theoretical aspects of a logic, providing not only the definition of entailment system, but also complementing it with the notion of proof calculus, obtaining what he calls a logical system. As introduced by Meseguer, the notion of proof calculus provides, intuitively, an implementation of the entailment relation of a logic. Indeed, Meseguer corrected the inherent imbalance in favour of models in institutions, enhancing syntactic aspects in the definition of logical systems.
However, the same lack of an operational view observed in the definition of entailment systems still appears with respect to the notion of satisfiability, i.e., the satisfaction relation of an institution. In the same way that an entailment system may be "implemented" in terms of different proof calculi, a satisfaction relation may be "implemented" in terms of different satisfiability procedures. Making these satisfiability procedures explicit in the characterization of logical systems is highly relevant, since many successful software analysis tools are based on particular characteristics of these satisfiability procedures. For instance, many automated analysis tools rely on model construction, either for proving properties, as with model-checkers, or for finding counterexamples, as with tableaux techniques or SAT-solving based tools. These techniques constitute an important stream of research in logic, in particular in relation to (semi-)automated software validation and verification.
These kinds of logical systems can be traced back to the works of Beth [START_REF] Beth | The Foundations of Mathematics[END_REF]17], Herbrand [START_REF] Herbrand | Investigation in proof theory[END_REF] and Gentzen [START_REF] Gentzen | Investigation into logical deduction[END_REF]. Beth's ideas were used by Smullyan to formulate the tableaux method for first-order predicate logic [START_REF] Smullyan | First-order Logic[END_REF]. Herbrand's and Gentzen's works inspired the formulation of resolution systems presented by Robinson [START_REF] Robinson | A machine-oriented logic based on the resolution principle[END_REF]. Methods like those based on resolution and tableaux are strongly related to the semantics of a logic; one can often use them to guide the construction of models. This is not possible in pure deductive methods, such as natural deduction or Hilbert systems, as formalized by Meseguer. In this paper, our goal is to provide an abstract characterization of this class of semantics based tools for logical systems. This is accomplished by introducing a categorical characterization of the notion of satisfiability calculus which embraces logical tools such as tableaux, resolution, Gentzen style sequents, etc. As we mentioned above, this can be thought of as a formalization of a semantic counterpart of Meseguer's proof calculus. We also explore the concept of mappings between satisfiability calculi and the relation between proof calculi and satisfiability calculi.
The paper is organized as follows. In Section 2 we present the definitions and results we will use throughout this paper. In Section 3 we present a categorical formalization of satisfiability calculus, and prove relevant results underpinning the definitions. We also present examples to illustrate the main ideas. Finally in Section 4 we draw some conclusions and describe further lines of research.
Preliminaries
From now on, we assume the reader has a nodding acquaintance with basic concepts from category theory [START_REF] Mclane | Categories for working mathematician[END_REF][START_REF] Fiadeiro | Categories for software engineering[END_REF]. Below we present the basic definitions and results we use throughout the rest of the paper. In the following, we follow the notation introduced in [START_REF] Meseguer | General logics[END_REF].
An Institution is an abstract formalization of the model theory of a logic by making use of the relationships existing between signatures, sentences and models. These aspects are reflected by introducing the category of signatures, and by defining functors going from this category to the categories Set and Cat, to capture sets of sentences and categories of models, respectively, for a given signature. The original definition of institutions is the following:
Definition 1. ([1]
) An institution is a structure of the form Sign, Sen, Mod, {|= Σ } Σ∈|Sign| satisfying the following conditions:
-Sign is a category of signatures, -Sen : Sign → Set is a functor. Let Σ ∈ |Sign|, then Sen(Σ) returns the set of Σ-sentences, -Mod :
Sign op → Cat is a functor. Let Σ ∈ |Sign|, then Mod(Σ) returns the category of Σ-models, -{|= Σ } Σ∈|Sign| , where |= Σ ⊆ |Mod(Σ)| × Sen(Σ), is a family of binary relations, and for any signature morphism σ : Σ → Σ', Σ-sentence φ ∈ Sen(Σ) and Σ'-model M' ∈ |Mod(Σ')|, the following |=-invariance condition holds:
M' |= Σ' Sen(σ)(φ) iff Mod(σ op )(M') |= Σ φ .
Roughly speaking, the last condition above says that the notion of truth is invariant with respect to notation change. Given Σ ∈ |Sign| and Γ ⊆ Sen(Σ), Mod(Σ, Γ ) denotes the full subcategory of Mod(Σ) determined by those models M ∈ |Mod(Σ)| such that M |= Σ γ, for all γ ∈ Γ . The relation |= Σ between sets of formulae and formulae is defined in the following way: given Σ ∈ |Sign|, Γ ⊆ Sen(Σ) and α ∈ Sen(Σ), Γ |= Σ α if and only if M |= Σ α, for all M ∈ |Mod(Σ, Γ )|.
An entailment system is defined in a similar way, by identifying a family of syntactic consequence relations, instead of a family of semantic consequence relations. Each of the elements in this family is associated with a signature. These relations are required to satisfy reflexivity, monotonicity and transitivity. In addition, a notion of translation between signatures is considered. Definition 2. ([2]) An entailment system is a structure of the form Sign, Sen, { Σ } Σ∈|Sign| satisfying the following conditions:
-Sign is a category of signatures, -Sen : Sign → Set is a functor. Let Σ ∈ |Sign|; then Sen(Σ) returns the set of Σ-sentences, and -{ Σ } Σ∈|Sign| , where Σ ⊆ 2 Sen(Σ) × Sen(Σ), is a family of binary relations such that for any Σ, Σ ∈ |Sign|, {φ} ∪ {φ i } i∈I ⊆ Sen(Σ), Γ, Γ ⊆ Sen(Σ), the following conditions are satisfied: 1. reflexivity: {φ} Σ φ, 2. monotonicity: if Γ Σ φ and Γ ⊆ Γ , then Γ Σ φ, 3. transitivity: if Γ Σ φ i for all i ∈ I and {φ i } i∈I Σ φ, then Γ Σ φ, and 4. -translation: if Γ Σ φ, then for any morphism σ : Σ → Σ in Sign, Sen(σ)(Γ ) Σ Sen(σ)(φ).
Definition 3. ([2]
) Let Sign, Sen, { Σ } Σ∈|Sign| be an entailment system. Its category Th of theories is a pair O, A such that:
-O = { Σ, Γ | Σ ∈ |Sign| and Γ ⊆ Sen(Σ) }, and
-A = σ : Σ, Γ → Σ , Γ Σ, Γ , Σ , Γ ∈ O, σ : Σ → Σ is a morphism in Sign and for all γ ∈ Γ, Γ Σ Sen(σ)(γ)
.
In addition, if a morphism σ : Σ, Γ → Σ , Γ satisfies Sen(σ)(Γ ) ⊆ Γ , it is called axiom preserving. By retaining those morphisms of Th that are axiom preserving, we obtain the subcategory Th 0 . If we now consider the definition of Mod extended to signatures and sets of sentences, we get a functor Mod : Th op → Cat defined as follows: let
T = Σ, Γ ∈ |Th|, then Mod(T ) = Mod(Σ, Γ ). Definition 4. ([2]) Let Sign, Sen, { Σ } Σ∈|Sign| be an entailment system and Σ, Γ ∈ |Th 0 |. We define • : 2 Sen(Σ) → 2 Sen(Σ) as follows: Γ • = γ Γ Σ γ .
This function is extended to elements of Th 0 , by defining it as follows:
Σ, Γ • = Σ, Γ • . Γ • is called the theory generated by Γ . Definition 5. ([2]
) Let Sign, Sen, { Σ } Σ∈|Sign| and Sign , Sen , { Σ } Σ∈|Sign | be entailment systems, Φ : Th 0 → Th 0 be a functor and α : Sen → Sen • Φ a natural transformation. Φ is said to be α-sensible if and only if the following conditions are satisfied:
1. there is a functor Φ : Sign → Sign such that sign • Φ = Φ • sign, where sign and sign are the forgetful functors from the corresponding categories of theories to the corresponding categories of signatures, that when applied to a given theory project its signature, and
2. if Σ, Γ ∈ |Th 0 | and Σ , Γ ∈ |Th 0 | such that Φ( Σ, Γ ) = Σ , Γ , then (Γ ) • = (∅ ∪ α Σ (Γ )) • , where ∅ = α Σ (∅) 6 .
Φ is said to be α-simple if and only if
Γ = ∅ ∪α Σ (Γ ) is satisfied in Condition 2, instead of (Γ ) • = (∅ ∪ α Σ (Γ )) • .
It is straightforward to see, based on the monotonicity of • , that α-simplicity implies α-sensibility. An α-sensible functor has the property that the associated natural transformation α depends only on signatures. Now, from Definitions 1 and 2, it is possible to give a definition of logic by relating both its modeltheoretic and proof-theoretic characterizations; a coherence between the semantic and syntactic relations is required, reflecting the soundness and completeness of standard deductive relations of logical systems.
Definition 6. ([2]
) A logic is a structure of the form Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| satisfying the following conditions:
-Sign, Sen, { Σ } Σ∈|Sign| is an entailment system, -Sign, Sen, Mod, {|= Σ } Σ∈|Sign| is an institution, and the following soundness condition is satisfied: for any
Σ ∈ |Sign|, φ ∈ Sen(Σ), Γ ⊆ Sen(Σ): Γ Σ φ implies Γ |= Σ φ .
A logic is complete if, in addition, the following condition is also satisfied: for any Σ ∈ |Sign|, φ ∈ Sen(Σ), Γ ⊆ Sen(Σ): -Sign, Sen, { Σ } Σ∈|Sign| is an entailment system, -P : Finally, a logical system is defined as a logic plus a proof calculus for its proof theory.
Γ |= Σ φ implies Γ Σ φ .
Th 0 → Struct P C is a functor. Let T ∈ |Th 0 |, then P(T ) ∈ |Struct P C | is the proof-theoretical structure of T , -Pr : Struct P C → Set
Definition 8. ([2]) A logical system is a structure of the form Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| , P, Pr, π satisfying the following conditions:
-Sign, Sen, Mod, { Σ } Σ∈|Sign| , {|= Σ } Σ∈|Sign| is a logic, and -Sign, Sen, { Σ } Σ∈|Sign| , P, Pr, π is a proof calculus.
Satisfiability Calculus
In Section 2, we presented the definitions of institutions and entailment systems. Additionally, we presented Meseguer's categorical formulation of proof that provides operational structure for the abstract notion of entailment. In this section, we provide a categorical definition of a satisfiability calculus, providing a corresponding operational formulation of satisfiability. A satisfiability calculus is the formal characterization of a method for constructing models of a given theory, thus providing the semantic counterpart of a proof calculus. Roughly speaking, the semantic relation of satisfaction between a model and a formula can also be implemented by means of some kind of structure that depends on the model theory of the logic. The definition of a satisfiability calculus is as follows:
Definition 9. [Satisfiability Calculus] A satisfiability calculus is a structure of the form Sign, Sen, Mod, {|= Σ } Σ∈|Sign| , M, Mods, µ satisfying the following conditions:
-Sign, Sen, Mod, {|= Σ } Σ∈|Sign| is an institution, -M :
Th 0 → Struct SC is a functor. Let T ∈ |Th 0 |, then M(T ) ∈ |Struct SC | is the model structure of T , -Mods : Struct SC → Cat is a functor. Let T ∈ |Th 0 |, then Mods(M(T )) is
the category of canonical models of T ; the composite functor Mods • M : Th 0 → Cat will be denoted by models, and µ : models op → Mod is a natural transformation such that, for each T = Σ, Γ ∈ |Th 0 |, the image of µ T : models op (T ) → Mod(T ) is the category of models Mod(T ). The map µ T is called the projection of the category of models of the theory T .
The intuition behind the previous definition is that, for any theory T , the functor M assigns a model structure for T in the category Struct SC 7 . For instance, in propositional tableaux, a good choice for Struct SC is the collection of legal tableaux, where the functor M maps a theory to the collection of tableaux obtained for that theory. The functor Mods projects those particular structures that represent sets of conditions that can produce canonical models of a theory T = Σ, Γ (i.e., the structures that represent canonical models of Γ ). For example, in the case of propositional tableaux, this functor selects the open branches of tableaux, that represent satisfiable sets of formulae, and returns the collections of formulae obtained by closuring these sets. Finally, for any theory T , the functor µ T relates each of these sets of conditions to the corresponding canonical model. Again, in propositional tableaux, this functor is obtained by relating a closured set of formulae with the models that can be defined from these sets of formulae in the usual ways [START_REF] Smullyan | First-order Logic[END_REF].
Example 1.
[Tableaux Method for First-Order Predicate Logic] Let us start by presenting the tableaux method for first-order logic. Let us denote by I F OL = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| the institution of first-order predicate logic. Let Σ ∈ |Sign| and S ⊆ Sen(Σ); then a tableau for S is a tree such that:
1. the nodes are labeled with sets of formulae (over Σ) and the root node is labeled with S, 2. if u and v are two connected nodes in the tree (u being an ancestor of v), then the label of v is obtained from the label of u by applying one of the following rules:
where, in the last rules, c is a new constant and t is a ground term. A sequence of nodes s_0 -τ_0^{α_0}→ s_1 -τ_1^{α_1}→ s_2 -τ_2^{α_2}→ . . . is a branch if: a) s_0 is the root node of the tree, and b) for all i ≤ ω, s_i → s_{i+1} occurs in the tree, τ_i^{α_i} is an instance of one of the rules presented above, and α_i are the formulae of s_i to which the rule was applied. A branch s_0 -τ_0^{α_0}→ s_1 -τ_1^{α_1}→ s_2 -τ_2^{α_2}→ . . . in a tableau is saturated if there exists i ≤ ω such that s_i = s_{i+1}. A branch s_0 -τ_0^{α_0}→ s_1 -τ_1^{α_1}→ s_2 -τ_2^{α_2}→ . . . in a tableau is closed if there exist i ≤ ω and α ∈ Sen(Σ) such that {α, ¬α} ⊆ s_i. Let s_0 -τ_0^{α_0}→ s_1 -τ_1^{α_1}→ s_2 -τ_2^{α_2}→ . . . be a branch in a tableau. Examining the rules presented above, it is straightforward to see that every s_i with i < ω is a set of formulae. In each step, we have either the application of a rule decomposing one formula of the set into its constituent parts with respect to its major connective, while preserving satisfiability, or the application of the rule [false] denoting the fact that the corresponding set of formulae is unsatisfiable. Thus, the limit set of the branch is a set of formulae containing sub-formulae (and "instances" in the case of quantifiers) of the original set of formulae for which the tableau was built. As a result of this, every open branch expresses, by means of a set of formulae, the class of models satisfying them.
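How open branches determine models can be made concrete with a small propositional sketch (our own illustration: only ¬, ∧ and ∨ are handled, formulae are nested tuples, and the quantifier rules are left out):

```python
# Formulae are nested tuples: ('var','p'), ('not',f), ('and',f,g), ('or',f,g)

def expand(branch):
    """Return all saturated open branches extending `branch` (a frozenset).

    Decomposed formulae are dropped from the branch to guarantee termination;
    the rules in the text keep them, which makes no difference for the models
    an open branch describes.
    """
    for f in branch:
        rest = branch - {f}
        if f[0] == 'and':
            return expand(rest | {f[1], f[2]})
        if f[0] == 'or':
            return expand(rest | {f[1]}) + expand(rest | {f[2]})
        if f[0] == 'not' and f[1][0] == 'not':
            return expand(rest | {f[1][1]})
        if f[0] == 'not' and f[1][0] == 'and':       # De Morgan
            return expand(rest | {('or', ('not', f[1][1]), ('not', f[1][2]))})
        if f[0] == 'not' and f[1][0] == 'or':
            return expand(rest | {('and', ('not', f[1][1]), ('not', f[1][2]))})
    pos = {f[1] for f in branch if f[0] == 'var'}
    neg = {f[1][1] for f in branch if f[0] == 'not'}     # only literals remain here
    return [] if pos & neg else [branch]

# Two formulae, one open branch: q true and p false is the described model.
print(expand(frozenset({('or', ('var', 'p'), ('var', 'q')), ('not', ('var', 'p'))})))
```

Each returned branch contains only literals; reading the positive ones as true yields a model of the original set, mirroring the limit sets of open branches described above.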
In order to define the tableau method as a satisfiability calculus, we provide formal definitions for M, Mods and µ. The proofs of the lemmas and properties shown below are straightforward using the introduced definitions. The interested reader can find these proofs in [START_REF] Lopez Pombo | Satisfiability calculus: the semantic counterpart of a proof calculus in general logics[END_REF]. First, we introduce the category Str Σ,Γ of tableaux for sets of formulae over signature Σ and assuming the set of axioms Γ . In Str Σ,Γ , objects are sets of formulae over signature Σ, and morphisms represent tableaux for the set occurring in their target and having subsets of the set of formulae occurring at the end of open branches, as their source.
Definition 10. Let Σ ∈ |Sign| and Γ ⊆ Sen(Σ), then we define Str Σ,Γ = O, A such that O = 2 Sen(Σ) and A = {α : {A i } i∈I → {B j } j∈J | α = {α j } j∈J }, where for all j ∈ J , α j is a branch in a tableau for Γ ∪ {B j } with leaves ∆ ⊆ {A i } i∈I . It should be noted that ∆ |= Σ Γ ∪ {B j }. The functor M must be understood as the relation between a theory in |Th 0 | and its category of structures representing legal tableaux. So, for every theory T , M associates the strict monoidal category [START_REF] Mclane | Categories for working mathematician[END_REF] Str Σ,Γ , ∪, ∅ , and for every theory morphism σ : Σ, Γ → Σ , Γ , M associates a morphism σ : Str Σ,Γ → Str Σ ,Γ which is the homomorphic extension of σ to the structure of the tableaux.
Definition 12. M : Th 0 → Struct SC is defined as M( Σ, Γ ) = Str Σ,Γ , ∪, ∅ and M(σ : Σ, Γ → Σ , Γ ) = σ : Str Σ,Γ , ∪, ∅ → Str Σ ,Γ , ∪, ∅ , the homo- morphic extension of σ to the structures in Str Σ,Γ , ∪, ∅ .
Lemma 3. M is a functor.
In order to define M ods, we need the following auxiliary definition, which resembles the usual construction of maximal consistent sets of formulae. Definition 13. Let Σ ∈ |Sign|, ∆ ⊆ Sen(Σ), and consider {F i } i<ω an enumeration of Sen(Σ) such that for every formula α, its sub-formulae are enumerated before α. Then Cn(∆) is defined as follows:
- Cn(∆) = ∪_{i<ω} Cn_i(∆), where
- Cn_0(∆) = ∆, and
- Cn_{i+1}(∆) = Cn_i(∆) ∪ {F_i}, if Cn_i(∆) ∪ {F_i} is consistent, and Cn_i(∆) ∪ {¬F_i} otherwise.
Given Σ, Γ ∈ |Th 0 |, the functor Mods provide the means for obtaining the category containing the closure of those structures in Str Σ,Γ that represent the closure of the branches in saturated tableaux.
Definition 14. Mods : Struct SC → Cat is defined as:
Mods( Str Σ,Γ , ∪, ∅ ) = { Σ, Cn( ∆) | (∃α : ∆ → ∅ ∈ ||Str Σ,Γ ||) ( ∆ → ∅ ∈ α ∧ (∀α : ∆ → ∆ ∈ ||Str Σ,Γ ||)(∆ = ∆))}
and for all σ : Σ → Σ ∈ |Sign| (and σ : Str Σ,Γ , ∪, ∅ → Str Σ ,Γ , ∪, ∅ ∈ ||Struct SC ||), the following holds: Now, from Lemmas 3, 4, and 5, and considering the hypothesis that I F OL is an institution, the following corollary follows.
Mods( σ)( Σ, Cn( ∆) ) = Σ , Cn(Sen(σ)(Cn( ∆))) .
Corollary 1. Sign F OL , Sen F OL , Mod F OL , {|= Σ F OL } Σ∈|Sign F OL | , M, Mods, µ is a satisfiability calculus.
Another important kind of system used by automatic theorem provers are the so-called resolution methods. Below, we show how any resolution system conforms to the definition of satisfiability calculus.
Example 2. [Resolution Method for First-Order Predicate Logic]
Let us describe resolution for first-order logic as introduced in [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF]. We use the following notation: [] denotes the empty list; [A] denotes the unitary list containing the formula A; ℓ_0, ℓ_1, . . . are variables ranging over lists; and ℓ_i + ℓ_j denotes the concatenation of lists ℓ_i and ℓ_j. Resolution builds a list of lists representing a disjunction of conjunctions. The rules for resolution are the following:
[¬¬]: from ℓ_0 + [¬¬A] + ℓ_1 infer ℓ_0 + [A] + ℓ_1
[¬]: from ℓ_0 + [¬A] + ℓ_1 and ℓ'_0 + [A] + ℓ'_1 infer ℓ_0 + ℓ_1 + ℓ'_0 + ℓ'_1
[∧]: from ℓ_0 + [A ∧ A'] + ℓ_1 infer ℓ_0 + [A, A'] + ℓ_1
[¬∨]: from ℓ_0 + [¬(A ∨ A')] + ℓ_1 infer ℓ_0 + [¬A, ¬A'] + ℓ_1
[∨]: from ℓ_0 + [A ∨ A'] + ℓ_1 infer ℓ_0 + [A] + ℓ_1 and ℓ_0 + [A'] + ℓ_1
[¬∧]: from ℓ_0 + [¬(A ∧ A')] + ℓ_1 infer ℓ_0 + [¬A] + ℓ_1 and ℓ_0 + [¬A'] + ℓ_1
[∀]: from ℓ_0 + [∀x : A(x)] + ℓ_1 infer ℓ_0 + [A[x/t]] + ℓ_1, for any closed term t
[∃]: from ℓ_0 + [∃x : A(x)] + ℓ_1 infer ℓ_0 + [A[x/c]] + ℓ_1, for a new constant c
where A(x) denotes a formula with free variable x, and A[x/t] denotes the formula resulting from replacing variable x by term t everywhere in A. For the sake of simplicity, we assume that lists of formulae do not have repeated elements. A resolution is a sequence of lists of formulae. If a resolution contains an empty list (i.e., []), we say that the resolution is closed; otherwise it is an open resolution. For every signature Σ ∈ |Sign| and each Γ ⊂ Sen(Σ), we denote by Str Σ,Γ the category whose objects are lists of formulae, and where every morphism σ : [A 0 , . . . , A n ] → [A 0 , . . . , A m ] represents a sequence of application of resolution rules for [A 0 , . . . , A m ]. Then, Struct SC is a category whose objects are Str Σ,Γ , for each signature Σ ∈ |Sign| and set of formulae Γ ∈ Sen(Σ), and whose morphisms are of the form σ : Str Σ,Γ → Str Σ ,Γ , obtained by homomorphically extending σ : Σ, Γ → Σ , Γ in ||Th 0 ||.
As for the case of Example 1, the functor M : Th 0 → Struct SC is defined as M( Σ, Γ ) = Str Σ,Γ , ∪, ∅ , and Mods : Struct SC → Set is defined as in the previous example.
A typical use for the methods involved in the above described examples is the search for counterexamples of a given logical property. For instance, to search for counterexamples of an intended property in the context of the tableaux method, one starts by applying rules to the negation of the property, and once a saturated tableau is obtained, if all the branches are closed, then there is no model of the axioms and the negation of the property, indicating that the latter is a theorem. On the other hand, if there exists an open branch, the limit set of that branch characterizes a class of counterexamples for the formula. Notice the contrast with Hilbert systems, where one starts from the axioms, and then applies deduction rules until the desired formula is obtained.
Mapping Satisfiability Calculi
In [START_REF] Goguen | Institutions: abstract model theory for specification and programming[END_REF] the original notion of morphism between Institutions was introduced. Meseguer defines the notion of plain map in [START_REF] Meseguer | General logics[END_REF], and in [START_REF] Tarlecki | Moving between logical systems[END_REF] Tarlecki extensively discussed the ways in which different institutions can be related, and how they should be interpreted. More recently, in [START_REF] Goguen | Institution morphisms[END_REF] all these notions of morphism were investigated in detail. In this work we will concentrate only on institution representations (or comorphisms in the terminology introduced by Goguen and Rosu), since this is the notion that we have employed to formalize several concepts arising from software engineering, such as data refinement and dynamic reconfiguration [START_REF] Castro | Towards managing dynamic reconfiguration of software systems in a categorical setting[END_REF][START_REF] Castro | A categorical approach to structuring and promoting Z specifications[END_REF]. The study of other important kinds of functorial relations between satisfiability calculi are left as future work. The following definition is taken from [START_REF] Tarlecki | Moving between logical systems[END_REF], and formalizes the notion of institution representation.
M |= γ Sign (Σ) γ Sen Σ (α) iff γ M od Σ (M ) |= Σ α .
An institution representation γ : I → I expresses how the "poorer" set of sentences (respectively, category of models) associated with I is encoded in the "richer" one associated with I . This is done by:
constructing, for a given I-signature Σ, an I -signature into which Σ can be interpreted, translating, for a given I-signature Σ, the set of Σ-sentences into the corresponding I -sentences, obtaining, for a given I-signature Σ, the category of Σ-models from the corresponding category of Σ -models.
The direction of the arrows shows how the whole of I is represented by some parts of I . Institution representations enjoy some interesting properties. For instance, logical consequence is preserved, and, under some conditions, logical consequence is preserved in a conservative way. The interested reader is referred to [START_REF] Tarlecki | Moving between logical systems[END_REF] for further details. In many cases, in particular those in which the class of models of a signature in the source institution is completely axiomatizable in the language of the target one, Definition 16 can easily be extended to map signatures of one institution to theories of another. This is done so that the class of models of the richer one can be restricted, by means of the addition of axioms (thus the need for theories in the image of the functor γ Sign ), in order to be exactly the class of models obtained by translating to it the class of models of the corresponding signature of the poorer one. In the same way, when the previously described extension is possible, we can obtain what Meseguer calls a map of institutions [2, definition 27] by reformulating the definition so that the functor between signatures of one institution and theories of the other is γ T h : Th 0 → Th 0 . This has to be γ Sen -sensible (see definition 5) with respect to the entailment systems induced by the institutions I and I . Now, if Σ, Γ ∈ |Th 0 |, then γ T h0 can be defined as follows: γ T h0 ( Σ, Γ ) = γ Sign (Σ), ∆ ∪ γ Sen Σ (Γ ) , where ∆ ⊆ Sen(ρ Sign (Σ)). Then, it is easy to prove that γ T h0 is γ Sen -simple because it is the γ Sen -extension of γ T h0 to theories, thus being γ Sen -sensible.
The notion of a map of satisfiability calculi is the natural extension of a map of institutions in order to consider the more material version of the satisfiability relation. In some sense, if a map of institutions provides a means for representing one satisfiability relation in terms of another in a semantics preserving way, the map of satisfiability calculi provides a means for representing a model construction technique in terms of another. This is done by showing how model construction techniques for richer logics express techniques associated with poorer ones.
Definition 17. Let S = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| , M, Mods, µ and S = Sign , Sen , Mod , {|= Σ } Σ∈|Sign | , M , Mods , µ be satisfiability calculi. Then, ρ Sign , ρ Sen , ρ M od , γ : S → S is a map of satisfiability calculi if and only if:
1. ρ Sign , ρ Sen , ρ M od : I → I is a map of institutions, and 2. γ : models op • ρ T h0 → models op is a natural transformation such that the following equality holds:
(commutative 2-cell diagram over Th_0 and Th'_0, built from Mod, models^op, ρ_{Th_0}, ρ^Mod, µ, µ' and γ)
Roughly speaking, the 2-cell equality in the definition says that the translation of saturated tableaux is coherent with respect to the mapping of institutions.
Example 3. [Mapping Modal Logic to First-Order Logic] A simple example of a mapping between satisfiability calculi is the mapping between the tableau method for propositional logic, and the one for first-order logic. It is straightforward since the tableau method for first-order logic is an extension of that of propositional logic.
Let us introduce a more interesting example. We will map the tableau method for modal logic (as presented by Fitting [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF]) to the first-order predicate logic tableau method. The mapping between the institutions is given by the standard translation from modal logic to first-order logic. Let us recast here the tableau method for the system K of modal logic. Recall that formulae of standard modal logic are built from boolean operators and the "diamond operator" ♦. Intuitively, the formula ♦ϕ says that ϕ is possibly true in some alternative state of affairs.
Conclusions
Methods like resolution and tableaux are strongly related to the semantics of a logic; one can often use them to construct models, a characteristic that is missing in purely deductive methods, such as natural deduction or Hilbert systems, as formalized by Meseguer. In this paper, we provided an abstract characterization of this class of semantics-based techniques for logical systems. This was accomplished by introducing a categorical characterization of the notion of satisfiability calculus, which covers logical tools such as tableaux, resolution, Gentzen-style sequents, etc. Our new characterization of a logical system, which includes the notion of satisfiability calculus, provides both a proof calculus and a satisfiability calculus, which essentially implement the entailment and satisfaction relations, respectively. There clearly exist connections between these calculi that are worth exploring, especially when the underlying structure used in both definitions is the same (see Example 1).
A close analysis of the definitions of proof calculus and satisfiability calculus takes us to observe that the constraints imposed over some elements (e.g., the natural family of functors π Σ,Γ : proofs( Σ, Γ ) → Sen( Σ, Γ ) and µ Σ,Γ : models op ( Σ, Γ ) → Mod( Σ, Γ )) may be too restrictive, and working on generalizations of these concepts is part of our further work. In particular, it is worth noticing that partial implementations of both the entailment relation and the satisfiability relation are gaining visibility in the software engineering community. Examples on the syntactic side are the implementation of less expressive calculi with respect to an entailment, as in the case of the finitary definition of the reflexive and transitive closure in the Kleene algebras with tests [START_REF] Kozen | Kleene algebra with tests[END_REF], the case of the implementation of rewriting tools like Maude [START_REF] Clavel | All About Maude -A High-Performance Logical Framework, How to Specify, Program and Verify Systems in Rewriting Logic[END_REF] as a partial implementation of equational logic, etc. Examples on the semantic side are the bounded model checkers and model finders for undecidable languages, such as Alloy [START_REF] Jackson | Alloy: a lightweight object modelling notation[END_REF] for relational logic, the growing family of SMT-solvers [START_REF] Moura | Satisfiability modulo theories: introduction and applications[END_REF] for languages including arithmetic, etc. Clearly, allowing for partial implementations of entailment/satisfiability relations would enable us to capture the behaviors of some of the above mentioned logical tools. In addition, functorial relations between partial proof calculi (resp., satisfiability calculi) may provide a measure for how good the method is as an approximation of the ideal entailment relation (resp., satisfaction relation). We plan to explore this possibility, as future work.
is a functor. Let T ∈ |Th 0 |, then Pr(P(T )) is the set of proofs of T ; the composite functor Pr • P : Th 0 → Set will be denoted by proofs, and π : proofs → Sen is a natural transformation such that for each T = Σ, Γ ∈ |Th 0 | the image of π T : proofs(T ) → Sen(T ) is the set Γ • . The map π T is called the projection from proofs to theorems for the theory T .
Lemma 1. Let Σ ∈ |Sign| and Γ ⊆ Sen(Σ); then Str Σ,Γ , ∪, ∅ , where ∪ : Str Σ,Γ × Str Σ,Γ → Str Σ,Γ is the typical bi-functor on sets and functions, and ∅ is the neutral element for ∪, is a strict monoidal category.
Using this definition we can introduce the category of legal tableaux, denoted by Struct SC .
Definition 11. Struct SC is defined as O, A where O = {Str Σ,Γ | Σ ∈ |Sign| ∧ Γ ⊆ Sen(Σ)}, and A = { σ : Str Σ,Γ → Str Σ ,Γ | σ : Σ, Γ → Σ , Γ ∈ ||Th 0 ||}, the homomorphic extension of the morphisms in ||Th 0 ||.
Lemma 2. Struct SC is a category.
Lemma 4. Mods is a functor.
Finally, the natural transformation µ relates the structures representing saturated tableaux with the model satisfying the set of formulae denoted by the source of the morphism.
Definition 15. Let Σ, Γ ∈ |Th 0 |, then we define µ Σ : models op ( Σ, Γ ) → Mod F OL ( Σ, Γ ) as µ Σ ( Σ, ∆ ) = Mod( Σ, ∆ ).
Fact 1. Let Σ ∈ |Sign F OL | and Γ ⊆ Sen F OL (Σ). Then µ Σ,Γ is a functor.
Lemma 5. µ is a natural transformation.
Definition 16. ([5]) Let I = Sign, Sen, Mod, {|= Σ } Σ∈|Sign| and I = Sign , Sen , Mod , {|= Σ } Σ∈|Sign | be institutions. Then, γ Sign , γ Sen , γ M od : I → I is an institution representation if and only if: γ Sign : Sign → Sign is a functor, γ Sen : Sen → γ Sign • Sen is a natural transformation, and γ M od : (γ Sign ) op • Mod → Mod is a natural transformation, such that for any Σ ∈ |Sign|, the function γ Sen Σ : Sen(Σ) → Sen (γ Sign (Σ)) and the functor γ M od Σ : Mod (γ Sign (Σ)) → Mod(Σ) preserve the following satisfaction condition: for any α ∈ Sen(Σ) and M ∈ |Mod(γ Sign (Σ))|,
Authors' note: Meseguer refers to a logic as a structure that is composed of an entailment system together with an institution, see Def.
∅ is not necessarily the empty set of axioms. This fact will be clarified later on.
Notice that the target of functor M, when applied to a theory T , is not necessarily a model, but a structure which, under certain conditions, can be considered a representation of the category of models of T .
[∧]: X ∪ {A ∧ B} ⟶ X ∪ {A ∧ B, A, B}
[∨]: X ∪ {A ∨ B} ⟶ X ∪ {A ∨ B, A} | X ∪ {A ∨ B, B}
[¬1]: X ∪ {¬¬A} ⟶ X ∪ {¬¬A, A}
[¬2]: X ∪ {A} ⟶ X ∪ {A, ¬¬A}
[false]: X ∪ {A, ¬A} ⟶ Sen(Σ)
[DM1]: X ∪ {¬(A ∧ B)} ⟶ X ∪ {¬(A ∧ B), ¬A ∨ ¬B}
[DM2]: X ∪ {¬(A ∨ B)} ⟶ X ∪ {¬(A ∨ B), ¬A ∧ ¬B}
[∀]: X ∪ {(∀x)P(x)} ⟶ X ∪ {(∀x)P(x), P(t)}
[∃]: X ∪ {(∃x)P(x)} ⟶ X ∪ {(∃x)P(x), P(c)}
Notice that ρ Sign ( {pi}i∈I ) = R, {pi}i∈I , where {pi}i∈I ∈ |Sign K |.
Acknowledgements
The authors would like to thank the anonymous referees for their helpful comments. This work was partially supported by the Argentinian Agency for Scientific and Technological Promotion (ANPCyT), through grants PICT PAE 2007 No. 2772, PICT 2010 No. 1690, PICT 2010 No. 2611 and PICT 2010 No. 1745, and by the MEALS project (EU FP7 programme, grant agreement No. 295261). The fourth author gratefully acknowledges the support of the National Science and Engineering Research Council of Canada and McMaster University.
semantics for modal logic is given by means of Kripke structures. A Kripke structure is a tuple W, R, L , where W is a set of states, R ⊆ W × W is a relation between states, and L : W → 2 AP is a labeling function (AP is a set of atomic propositions). Note that a signature in modal logic is given by a set of propositional letters: {p i } i∈I . The interested reader can consult [START_REF] Blackburn | Modal logic[END_REF].
In [START_REF] Fitting | Tableau methods of proof for modal logics[END_REF] modal formulae are prefixed by labels denoting semantic states. Labeled formulae are then terms of the form : ϕ, where ϕ is a modal formula and is a sequence of natural numbers n 0 , . . . , n k . The relation R between these labels is then defined in the following way: R ≡ ∃n : , n = . The new rules are the following:
The rules for the propositional connectives are the usual ones, obtained by labeling the formulae with a given label. Notice that labels denote states of a Kripke structure. This is related in some way to the tableau method used for first-order predicate logic. Branches, saturated branches and closed branches are defined in the same way as in Example 1, but considering the relations between sets to be also indexed by the relation used at that point. Thus,
must be understood as follows: the set s i+1 is obtained from s i by applying rule τ i to formula α i ∈ s i under the accessibility relation R i .
Assume Sign F OL , Sen F OL , M F OL , Mods F OL , {|= Σ F OL } Σ∈|Sign F OL | , µ F OL is the satisfiability calculus for first-order predicate logic, denoted by SC F OL , and Sign K , Sen K , M K , Mods K , {|= Σ K } Σ∈|Sign K |,µ K is the satisfiability calculus for modal logic, denoted by SC K . Consider now the standard translation from modal logic to first-order logic. Therefore, the tuple ρ Sign , ρ Sen , ρ M od is defined as follows [START_REF] Blackburn | Modal logic[END_REF]: Definition 18. ρ Sign : Sign K → Sign F OL is defined as ρ Sign ( {p i } i∈I ) = R, {p i } i∈I by mapping each propositional variable p i , for all i ∈ I, to a firstorder unary logic predicate p i , and adding a binary predicate R, and ρ Sign (σ :
R , and p i to p i for all i ∈ I. Lemma 6. ρ Sign is a functor.
α) where:
for all M = S, R,
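The sentence translation of this map is the well-known standard translation of modal formulae into first-order formulae over the extended signature; the following sketch (our own rendering, restricted to the ¬, ∧, ♦ fragment, with illustrative constructor names) shows the idea:

```python
def ST(phi, x, n=0):
    """Standard translation ST_x from modal logic K into first-order logic.

    Propositional letters become unary predicates applied to the current
    world variable; the diamond becomes an existential quantification over
    an R-successor.  First-order formulae are returned as strings.
    """
    tag = phi[0]
    if tag == 'p':
        return f"p{phi[1]}({x})"
    if tag == 'not':
        return f"¬{ST(phi[1], x, n)}"
    if tag == 'and':
        return f"({ST(phi[1], x, n)} ∧ {ST(phi[2], x, n)})"
    if tag == 'dia':
        y = f"x{n + 1}"
        return f"∃{y}.(R({x},{y}) ∧ {ST(phi[1], y, n + 1)})"

# ♦(p1 ∧ ¬p2) becomes ∃x1.(R(x0,x1) ∧ (p1(x1) ∧ ¬p2(x1)))
print(ST(('dia', ('and', ('p', 1), ('not', ('p', 2)))), 'x0'))
```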
The proof that this is a mapping between institutions relies on the correctness of the translation presented in [START_REF] Blackburn | Modal logic[END_REF]. Using this map we can define a mapping between the corresponding satisfiability calculi. The natural transformation: γ : ρ T h0 • models op → models op is defined as follows.
is defined as:
Finally, the following lemma prove the equivalence of the two cells shown in Definition 17.
This means that building a tableau using the first-order rules for the translation of a modal theory, then obtaining the corresponding canonical model in modal logic using γ, and therefore obtaining the class of models by using µ, is exactly the same as obtaining the first-order models by µ and then the corresponding modal models by using ρ M od . | 40,467 | [
"1003760",
"1003761",
"1003762",
"977624"
] | [
"131288",
"92878",
"488156",
"92878",
"488156",
"92878",
"64587"
] |
01485971 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485971/file/978-3-642-37635-1_13_Chapter.pdf | Till Mossakowski
Oliver Kutz
Christoph Lange
Semantics of the Distributed Ontology Language: Institutes and Institutions
The Distributed Ontology Language (DOL) is a recent development within the ISO standardisation initiative 17347 Ontology Integration and Interoperability (OntoIOp). In DOL, heterogeneous and distributed ontologies can be expressed, i.e. ontologies that are made up of parts written in ontology languages based on various logics. In order to make the DOL meta-language and its semantics more easily accessible to the wider ontology community, we have developed the notion of institute: institutes are like institutions, but with signature partial orders, and are based on standard set-theoretic semantics rather than category theory. We give an institute-based semantics for the kernel of DOL and show that this is compatible with institutional semantics. Moreover, as it turns out, beyond their greater simplicity, institutes have some further surprising advantages over institutions.
Introduction
OWL is a popular language for ontologies. Yet, the restriction to a decidable description logic often hinders ontology designers from expressing knowledge that cannot (or can only in quite complicated ways) be expressed in a description logic. A current practice to deal with this problem is to intersperse OWL ontologies with first-order axioms in the comments or annotate them as having temporal behaviour [START_REF] Smith | Relations in biomedical ontologies[END_REF][START_REF] Beisswanger | BioTop: An upper domain ontology for the life sciences -a description of its current structure, contents, and interfaces to OBO ontologies[END_REF], e.g. in the case of bio-ontologies where mereological relations such as parthood are of great importance, though not definable in OWL. However, these remain informal annotations to inform the human designer, rather than first-class citizens of the ontology with formal semantics, and will therefore unfortunately be ignored by tools with no impact on reasoning. Moreover, foundational ontologies such as DOLCE, BFO or SUMO use full first-order logic or even first-order modal logic.
A variety of languages is used for formalising ontologies. 4 Some of these, such as RDF, OBO and UML, can be seen more or less as fragments and notational variants of OWL, while others, such as F-logic and Common Logic (CL), clearly go beyond the expressiveness of OWL.
This situation has motivated the Distributed Ontology Language (DOL), a language currently under active development within the ISO standard 17347 Ontology Integration and Interoperability (OntoIOp). In DOL, heterogeneous and distributed ontologies can be expressed. At the heart of this approach is a graph of ontology languages and translations [START_REF] Mossakowski | The Onto-Logical Translation Graph[END_REF], shown in Fig. 1. What is the semantics of DOL? Previous presentations of the semantics of heterogeneous logical theories [START_REF] Tarlecki | Towards heterogeneous specifications[END_REF][START_REF] Diaconescu | Grothendieck institutions[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF][START_REF] Kutz | Carnap, Goguen, and the Hyperontologies: Logical Pluralism and Heterogeneous Structuring in Ontology Design[END_REF][START_REF] Mossakowski | The Onto-Logical Translation Graph[END_REF] relied heavily on the theory of institutions [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF]. The central insight of the theory of institutions is that logical notions such as model, sentence, satisfaction and derivability should be indexed over signatures (vocabularies). In order to abstract from any specific form of signature, category theory is used: nothing more is assumed about signatures other than that (together with suitable signature morphisms) they form a category.
However, the use of category theory diminishes the set of potential readers: "Mathematicians, and even logicians, have not shown much interest in the theory of institutions, perhaps because their tendency toward Platonism inclines them to believe that there is just one true logic and model theory; it also doesn't much help that institutions use category theory extensively."
(J. Goguen and G. Roşu in [START_REF] Goguen | Institution morphisms[END_REF], our emphasis) Indeed, during the extensive discussions within the ISO standardisation committee in TC37/SC3 to find an agreement concerning the right semantics for the DOL language, we (a) encountered strong reservations to base the semantics entirely on the institutional approach in order not to severely limit DOL's potential adoption by users, and (b) realised that a large kernel of the DOL language can be based on a simpler, category-free semantics. The compromise that was found within OntoIOp therefore adopted a twolayered approach: (i) it bases the semantics of a large part of DOL on a simplification of the notion of institutions, namely the institute-based approach presented in this paper that relies purely on standard set-theoretic semantics, and (ii) allows an elegant addition of additional features that do require a full institution-based approach. Indeed, it turned out that the majority of work in the ontology community either disregards signature morphisms altogether, or uses only signature inclusions. The latter are particularly important for the notion of ontology module, which is essentially based on the notion of conservative extension along an inclusion signature morphisms, and related notions like inseparability and uniform interpolation (see also Def. 5 below). Another use case for signature inclusions are theory interpretations, which are used in the COLORE repository of (first-order) Common Logic ontologies. Indeed, COLORE uses the technique of extending the target of a theory interpretation by suitable definitions of the symbols in the source. The main motivation for this is probably the avoidance of derived signature morphisms; as a by-product, also renamings of symbols are avoided.
There are only rare cases where signature morphisms are needed in their full generality: the renaming of ontologies, which so far has only been used for combinations of ontologies by colimits. Only here, the full institution-based approach is needed. However, only relatively few papers are explicitly concerned with colimits of ontologies. 5Another motivation for our work is the line of signature-free thinking in logic and ontology research; for example, the ISO/IEC standard 24707:2007 Common Logic [START_REF]Common Logic: Abstract syntax and semantics[END_REF] names its signature-free approach to sentence formation a chief novel feature: "Common Logic has some novel features, chief among them being a syntax which is signature-free . . . " [START_REF]Common Logic: Abstract syntax and semantics[END_REF] Likewise, many abstract studies of consequence and satisfaction systems [START_REF] Gentzen | Investigations into logical deduction[END_REF][START_REF] Scott | Rules and derived rules[END_REF][START_REF] Avron | Simple consequence relations[END_REF][START_REF] Carnielli | Analysis and synthesis of logics: how to cut and paste reasoning systems[END_REF] disregard signatures. Hence, we base our semantics on the newly introduced notion of institutes. These start with the signature-free approach, and then introduce signatures a posteriori, assuming that they form a partial order. While this approach covers only signature inclusions, not renamings, it is much simpler than the category-based approach of institutions. Of course, for features like colimits, full institution theory is needed. We therefore show that institutes and institutions can be integrated smoothly.
Institutes: Semantics for a DOL Kernel
The notion of institute follows the insight that central to a model-theoretic view on logic is the notion of satisfaction of sentences in models. We also follow the insight of institution theory that signatures are essential to control the vocabulary of symbols used in sentences and models. However, in many logic textbooks as well as in the Common Logic standard [START_REF]Common Logic: Abstract syntax and semantics[END_REF], sentences are defined independently of a specific signature, while models always interpret a given signature. The notion of institute reflects this common practice. Note that the satisfaction relation can only meaningfully be defined between models and sentences where the model interprets all the symbols occurring in the sentence; this is reflected in the fact that we define satisfaction per signature. We also require a partial order on models; this is needed for minimisation in the sense of circumscription.
Moreover, we realise the goal of avoiding the use of category theory by relying on partial orders of signatures as the best possible approximation of signature categories. This also corresponds to common practice in logic, where signature extensions and (reducts against these) are considered much more often than signature morphisms.
Definition 1 (Institutes). An institute I = (Sen, Sign, ≤, sig, Mod, |=, .| . ) consists of a class Sen of sentences; a partially ordered class (Sign, ≤) of signatures (which are arbitrary sets); a function sig : Sen → Sign, giving the (minimal) signature of a sentence (then for each signature Σ , let Sen(Σ ) = {ϕ ∈ Sen | sig(ϕ) ≤ Σ }); for each signature Σ , a partially ordered class Mod(Σ ) of Σ -models; for each signature Σ , a satisfaction relation
|= Σ ⊆ Mod(Σ ) × Sen(Σ ); -for any Σ 2 -model M, a Σ 1 -model M| Σ 1 (called the reduct), provided that Σ 1 ≤ Σ 2 ,
such that the following properties hold:
-given Σ 1 ≤ Σ 2 , for any Σ 2 -model M and any Σ 1 -sentence ϕ M |= ϕ iff M| Σ 1 |= ϕ (satisfaction is invariant under reduct), -for any Σ -model M, given Σ 1 ≤ Σ 2 ≤ Σ , (M| Σ 2 )| Σ 1 = M| Σ 1
(reducts are compositional), and for any
Σ -models M 1 ≤ M 2 , if Σ ≤ Σ , then M 1 | Σ ≤ M 2 | Σ (reducts preserve the model ordering).
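To make the definition concrete before turning to the examples, here is a miniature institute of propositional logic (our own illustration, not part of DOL or of the standard): signatures are finite sets of propositional variables ordered by inclusion, models are truth-value assignments, and the reduct simply forgets variables. The final loop checks the condition that satisfaction is invariant under reduct.

```python
from itertools import product

# Sentences are nested tuples: ('var','p'), ('not',f), ('and',f,g), ('or',f,g)

def sig(phi):
    """Minimal signature of a sentence: the variables occurring in it."""
    if phi[0] == 'var':
        return {phi[1]}
    return set().union(*(sig(sub) for sub in phi[1:]))

def models(sigma):
    """All Σ-models; finite, since signatures are finite sets of variables here."""
    sigma = sorted(sigma)
    return [dict(zip(sigma, bits)) for bits in product([False, True], repeat=len(sigma))]

def satisfies(m, phi):
    if phi[0] == 'var':  return m[phi[1]]
    if phi[0] == 'not':  return not satisfies(m, phi[1])
    if phi[0] == 'and':  return satisfies(m, phi[1]) and satisfies(m, phi[2])
    if phi[0] == 'or':   return satisfies(m, phi[1]) or satisfies(m, phi[2])

def reduct(m, sigma1):
    """M|_Σ1: forget the variables outside Σ1 (requires Σ1 ≤ signature of m)."""
    return {v: m[v] for v in sigma1}

# Satisfaction is invariant under reduct: Σ1 = {p,q} ≤ Σ2 = {p,q,r}, sig(phi) ≤ Σ1
phi = ('or', ('var', 'p'), ('not', ('var', 'q')))
for m in models({'p', 'q', 'r'}):
    assert satisfies(m, phi) == satisfies(reduct(m, {'p', 'q'}), phi)
```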
We give two examples illustrating these definitions, by phrasing the description logic ALC and Common Logic CL in institute style:
Example 2 (Description Logics ALC). An institute for ALC is defined as follows: sentences are subsumption relations C_1 ⊑ C_2 between concepts, where concepts follow the grammar
C ::= A | ⊤ | ⊥ | C_1 ⊓ C_2 | C_1 ⊔ C_2 | ¬C | ∀R.C | ∃R.C
Here, A stands for atomic concepts. Such sentences are also called TBox sentences. Sentences can also be ABox sentences, which are membership assertions of individuals in concepts (written a : C, where a is an individual constant) or pairs of individuals in roles (written R(a, b), where R is a role, and a, b are individual constants).
Signatures consist of a set A of atomic concepts, a set R of roles and a set I of individual constants. The ordering on signatures is component-wise inclusion. For a sentence ϕ, sig(ϕ) contains all symbols occurring in ϕ.
Σ -models consist of a non-empty set ∆ , the universe, and an element of ∆ for each individual constant in Σ , a unary relation over ∆ for each concept in Σ , and a binary relation over ∆ for each role in Σ . The partial order on models is defined as coincidence of the universe and the interpretation of individual constants plus subset inclusion for the interpretation of concepts and roles. Reducts just forget the respective components of models. Satisfaction is the standard satisfaction of description logics.
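A direct, finite-model reading of this satisfaction relation can be sketched as follows (our own illustration: concepts are encoded as nested tuples and only TBox sentences C1 ⊑ C2 are checked):

```python
# Concepts: ('A',name), ('top',), ('bot',), ('and',C,D), ('or',C,D),
#           ('not',C), ('all',R,C), ('some',R,C)   -- constructor names are ours

def ext(c, m):
    """Extension of a concept in a model m = {'universe', 'concepts', 'roles'}."""
    U, A, R = m['universe'], m['concepts'], m['roles']
    t = c[0]
    if t == 'A':    return A[c[1]]
    if t == 'top':  return set(U)
    if t == 'bot':  return set()
    if t == 'and':  return ext(c[1], m) & ext(c[2], m)
    if t == 'or':   return ext(c[1], m) | ext(c[2], m)
    if t == 'not':  return U - ext(c[1], m)
    if t == 'all':   # ∀R.C: every R-successor is in C
        cc = ext(c[2], m)
        return {x for x in U if all(y in cc for (x2, y) in R[c[1]] if x2 == x)}
    if t == 'some':  # ∃R.C: some R-successor is in C
        cc = ext(c[2], m)
        return {x for x in U if any(y in cc for (x2, y) in R[c[1]] if x2 == x)}

def satisfies_subsumption(m, c1, c2):
    """M |= C1 ⊑ C2 iff the extension of C1 is contained in that of C2."""
    return ext(c1, m) <= ext(c2, m)

M = {'universe': {1, 2},
     'concepts': {'Person': {1, 2}, 'Happy': {2}},
     'roles':    {'hasChild': {(1, 2)}}}
print(satisfies_subsumption(M, ('some', 'hasChild', ('A', 'Happy')), ('A', 'Person')))  # True
```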
An extension of ALC named SROIQ [START_REF] Horrocks | The Even More Irresistible SROIQ[END_REF] is the logical core of the Web Ontology Language OWL 2 DL 6 .
Example 3 (Common Logic -CL). Common Logic (CL) has first been formalised as an institution in [START_REF] Kutz | Carnap, Goguen, and the Hyperontologies: Logical Pluralism and Heterogeneous Structuring in Ontology Design[END_REF]. We here formalise it as an institute.
A CL-sentence is a first-order sentence, where predications and function applications are written in a higher-order like syntax as t(s). Here, t is an arbitrary term, and s is a sequence term, which can be a sequence of terms t 1 . . .t n , or a sequence marker. However, a predication t(s) is interpreted like the first-order formula holds(t, s), and a function application t(s) like the first-order term app(t, s), where holds and app are fictitious symbols (denoting the semantic objects rel and fun defined in models below). In this way, CL provides a first-order simulation of a higher-order language. Quantification variables are partitioned into those for individuals and those for sequences.
A CL signature Σ (called vocabulary in CL terminology) consists of a set of names, with a subset called the set of discourse names, and a set of sequence markers. The partial order on signatures is componentwise inclusion, with the requirement that a name is a discourse name in the smaller signature if and only if it is a discourse name in the larger signature. sig obviously collects the names and sequence markers present in a sentence.
A Σ -model consists of a set UR, the universe of reference, with a non-empty subset UD ⊆ UR, the universe of discourse, and four mappings:
rel from UR to subsets of UD * = {< x 1 , . . . , x n > |x 1 , . . . , x n ∈ UD} (i.e., the set of finite sequences of elements of UD); fun from UR to total functions from UD * into UD;
-int from names in Σ to UR, such that int(v) is in UD if and only if v is a discourse name; -seq from sequence markers in Σ to UD * .
The partial order on models is defined as M 1 ≤ M 2 iff M 1 and M 2 agree on all components except perhaps rel, where we require rel 1 (x) ⊆ rel 2 (x) for all x ∈ UR 1 = UR 2 . Model reducts leave UR, UD, rel and fun untouched, while int and seq are restricted to the smaller signature.
Interpretation of terms and formulae is as in first-order logic, with the difference that the terms at predicate resp. function symbol positions are interpreted with rel resp. fun in order to obtain the predicate resp. function, as discussed above. A further difference is the presence of sequence terms (namely sequence markers and juxtapositions of terms), which denote sequences in UD * , with term juxtaposition interpreted by sequence concatenation. Note that sequences are essentially a second-order feature. For details, see [START_REF]Common Logic: Abstract syntax and semantics[END_REF].
Working within an arbitrary but fixed Institute
Like with institutions, many logical notions can be formulated in an arbitrary but fixed institute. However, institutes are more natural for certain notions used in the ontology community.
The notions of 'theory' and 'model class' in an institute are defined as follows:
Definition 4 (Theories and Model Classes). A theory T = (Σ ,Γ ) in an institute I consists of a signature Σ and a set of sentences Γ ⊆ Sen(Σ ). Theories can be partially ordered by letting
(Σ 1 ,Γ 1 ) ≤ (Σ 2 ,Γ 2 ) iff Σ 1 ≤ Σ 2 and Γ 1 ⊆ Γ 2 .
The class of models Mod(Σ ,Γ ) is defined as the class of those Σ -models satisfying Γ . This data is easily seen to form an institute I th of theories in I (with theories as "signatures").
The following definition is taken directly from [START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF], 7 showing that central notions from the ontology modules community can be seamlessly formulated in an arbitrary institute:
Definition 5 (Entailment, inseparability, conservative extension).
- A theory T1 Σ-entails T2 if T2 |= ϕ implies T1 |= ϕ for all sentences ϕ with sig(ϕ) ≤ Σ;
- T1 and T2 are Σ-inseparable if T1 Σ-entails T2 and T2 Σ-entails T1;
- T2 is a Σ-conservative extension of T1 if T2 ≥ T1 and T1 and T2 are Σ-inseparable;
- T2 is a conservative extension of T1 if T2 is a Σ-conservative extension of T1 with Σ = sig(T1).
Note the use of sig here directly conforms to institute parlance. In contrast, since there is no global set of sentences in institutions, one would need to completely reformulate the definition for the institution representation and fiddle with explicit sentence translations.
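As a small illustration of our own, in the description logic institute sketched above: let T1 have the signature with atomic concepts A and B (no roles or individuals) and axioms {A ⊑ B}, and let T2 extend it with a further concept C and the additional axiom A ⊑ C. Then T1 ≤ T2, and T2 is an {A, B}-conservative extension of T1: every model of T1 can be expanded to a model of T2 (e.g. by interpreting C as the whole universe), so T2 entails no new sentences over {A, B}. In contrast, the theory extending T1 with C and the axiom B ⊑ ⊥ is not a conservative extension of T1, since it entails B ⊑ ⊥, a sentence over {A, B} that T1 does not entail.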
From time to time, we will need the notion of 'unions of signatures':
Definition 6 (Signature unions). A signature union is a supremum (least upper bound) in the signature partial order. Note that signature unions need not always exist, nor be unique; in either of these cases, the enclosing construct containing the union is undefined.
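For instance (our own observation, based on the CL signature ordering above): two CL signatures that share a name n, declared as a discourse name in the one and as a non-discourse name in the other, have no upper bound at all, since any larger signature would have to agree with both on whether n is a discourse name; consequently their union, and any construct built on it, is undefined.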
Institute Morphisms and Comorphisms
Institute morphisms and comorphisms relate two given institutes. A typical situation is that an institute morphism expresses the fact that a "larger" institute is built upon a "smaller" institute by projecting the "larger" institute onto the "smaller" one. Somewhat dually to institute morphisms, institute comorphisms allow us to express the fact that one institute is included in another one. (Co)morphisms play an essential role for DOL: the DOL semantics is parametrised over a graph of institutes and institute morphisms and comorphisms. The formal definitions are as follows:
Definition 7 (Institute morphism). Given institutes I1 = (Sen1, Sign1, ≤1, sig1, Mod1, |=1, ·|·) and I2 = (Sen2, Sign2, ≤2, sig2, Mod2, |=2, ·|·), an institute morphism ρ = (Φ, α, β) : I1 → I2 consists of
- a monotone map Φ : (Sign1, ≤1) → (Sign2, ≤2),
- a sentence translation function α : Sen2 → Sen1, and
- for each I1-signature Σ, a monotone model translation function β_Σ : Mod1(Σ) → Mod2(Φ(Σ)),
such that
- M1 |=1 α(ϕ2) if and only if β_Σ(M1) |=2 ϕ2 holds for each I1-signature Σ, each model M1 ∈ Mod1(Σ) and each sentence ϕ2 ∈ Sen2(Φ(Σ)) (satisfaction condition);
- Φ(sig1(α(ϕ2))) ≤ sig2(ϕ2) for any sentence ϕ2 ∈ Sen2 (sentence coherence);
- model translation commutes with reduct, that is, given Σ1 ≤ Σ2 in I1 and a Σ2-model M, β_{Σ2}(M)|_{Φ(Σ1)} = β_{Σ1}(M|_{Σ1}).
The dual notion of institute comorphism is then defined as:
Definition 8 (Institute comorphism). Given institutes I1 = (Sen1, Sign1, ≤1, sig1, Mod1, |=1, ·|·) and I2 = (Sen2, Sign2, ≤2, sig2, Mod2, |=2, ·|·), an institute comorphism ρ = (Φ, α, β) : I1 → I2 consists of
- a monotone map Φ : (Sign1, ≤1) → (Sign2, ≤2),
- a sentence translation function α : Sen1 → Sen2, and
- for each I1-signature Σ, a monotone model translation function β_Σ : Mod2(Φ(Σ)) → Mod1(Σ),
such that
- M2 |=2 α(ϕ1) if and only if β_Σ(M2) |=1 ϕ1 holds for each I1-signature Σ, each model M2 ∈ Mod2(Φ(Σ)) and each sentence ϕ1 ∈ Sen1(Σ) (satisfaction condition);
- sig2(α(ϕ1)) ≤ Φ(sig1(ϕ1)) for any sentence ϕ1 ∈ Sen1 (sentence coherence);
- model translation commutes with reduct, that is, given Σ1 ≤ Σ2 in I1 and a Φ(Σ2)-model M in I2, β_{Σ2}(M)|_{Σ1} = β_{Σ1}(M|_{Φ(Σ1)}).
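A typical instance (sketched here informally as our own illustration) is the standard translation of description logics into Common Logic along the lines of the usual first-order translation: Φ maps a DL signature to a CL vocabulary containing its atomic concepts, roles and individuals as discourse names; α maps, e.g., the sentence C ⊑ D to the CL rendering of ∀x. C(x) → D(x); and β_Σ reads off a DL interpretation from a CL model by taking the universe of discourse as ∆ and the extensions given by rel as the interpretations of concepts and roles. The satisfaction condition then amounts to the correctness of the first-order translation.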
Some important properties of institute (co-)morphisms will be needed in the technical development below:
Definition 9 (Model-expansive, (weakly) exact, (weak) amalgamation). An institute comorphism is model-expansive if each β_Σ is surjective. It is easy to show that model-expansive comorphisms faithfully encode logical consequence, that is, Γ |= ϕ iff α(Γ) |= α(ϕ).
An institute comorphism ρ = (Φ, α, β) : I1 → I2 is (weakly) exact, if for each signature extension Σ1 ≤ Σ2 the square

Mod_{I1}(Σ2) ←β_{Σ2}− Mod_{I2}(Φ(Σ2))
     | ·|_{Σ1}              | ·|_{Φ(Σ1)}
     ↓                      ↓
Mod_{I1}(Σ1) ←β_{Σ1}− Mod_{I2}(Φ(Σ1))

admits (weak) amalgamation, i.e. for any M2 ∈ Mod_{I1}(Σ2) and M1′ ∈ Mod_{I2}(Φ(Σ1)) with M2|_{Σ1} = β_{Σ1}(M1′), there is a (not necessarily unique) M2′ ∈ Mod_{I2}(Φ(Σ2)) with β_{Σ2}(M2′) = M2 and M2′|_{Φ(Σ1)} = M1′.
Given these definitions, a simple theoroidal institute comorphism ρ : I 1 -→ I 2 is an ordinary institute comorphism ρ : I 1 -→ I th 2 (for I th 2 , see Def. 4). Moreover, an institute comorphism is said to be model-isomorphic if β Σ is an isomorphism. It is a subinstitute comorphism (cf. also [START_REF] Meseguer | General logics[END_REF]), if moreover the signature translation is an embedding and sentence translation is injective. The intuition is that theories should be embedded, while models should be represented exactly (such that model-theoretic results carry over).
A DOL Kernel and Its Semantics
The Distributed Ontology Language (DOL) shares many features with the language HetCASL [START_REF] Mossakowski | HetCASL -Heterogeneous Specification[END_REF] which underlies the Heterogeneous Tool Set Hets [START_REF] Mossakowski | The Heterogeneous Tool Set[END_REF]. However, it also adds a number of new features:
- minimisation of models following the circumscription paradigm [START_REF] Mccarthy | Circumscription -A Form of Non-Monotonic Reasoning[END_REF][START_REF] Lifschitz | Circumscription[END_REF];
- ontology module extraction, i.e. the extraction of a subtheory that contains all relevant logical information w.r.t. some subsignature [START_REF] Konev | Formal properties of modularization[END_REF];
- projections of theories to a sublogic;
- ontology alignments, which involve partial or even relational variants of signature morphisms [START_REF] David | François Scharffe, and[END_REF];
- combination of theories via colimits, which has been used to formalise certain forms of ontology alignment [START_REF] Zimmermann | Formalizing Ontology Alignment and its Operations with Category Theory[END_REF][START_REF] Kutz | Chinese Whispers and Connected Alignments[END_REF];
- referencing of all items by URLs, or, more generally, IRIs [START_REF] Lange | LoLa: A Modular Ontology of Logics, Languages, and Translations[END_REF].
Sannella and Tarlecki [START_REF] Sannella | Specifications in an arbitrary institution[END_REF][START_REF] Sannella | Foundations of Algebraic Specification and Formal Software Development[END_REF] show that the structuring of logical theories (specifications) can be defined independently of the underlying logical system. They define a kernel language for structured specification that can be interpreted over an arbitrary institution.
Similar to [START_REF] Sannella | Specifications in an arbitrary institution[END_REF] and also integrating heterogeneous constructs from [START_REF] Tarlecki | Towards heterogeneous specifications[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF], we now introduce a kernel language for heterogeneous structured specifications for DOL. We will use the term "structured ontology" instead of "structured specification" to stress the intended use for DOL.
Since DOL involves not only one, but possibly several ontology languages, we need to introduce the notion of a 'heterogeneous logical environment'.
Definition 10 (Heterogeneous logical environment). A heterogeneous logical environment is defined to be a graph of institutes and institute morphisms and (possibly simple theoroidal) comorphisms, where we assume that some of the comorphisms (including all obvious identity comorphisms) are marked as default inclusions. The default inclusions are assumed to form a partial order on the institutes of the logic graph. If I1 ≤ I2, the default inclusion is denoted by ι : I1 → I2. For any pair of institutes I1 and I2, if their supremum exists, we denote it by I1 ∪ I2, and the corresponding default inclusions by ι_i : I_i → I1 ∪ I2.
We are now ready for the definition of heterogeneous structured ontology.
Definition 11 (Heterogeneous structured ontology – DOL kernel language). Let a heterogeneous logical environment be given. We inductively define the notion of heterogeneous structured ontology (in the sequel: ontology). Simultaneously, we define functions Ins, Sig and Mod yielding the institute, the signature and the model class of such an ontology. Let O be an ontology with institute I and signature Σ and let Σ_min, Σ_fixed be subsignatures of Σ such that Σ_min ∪ Σ_fixed is defined. Intuitively, the interpretation of the symbols in Σ_min will be minimised among those models interpreting the symbols in Σ_fixed in the same way, while the interpretation of all symbols outside Σ_min ∪ Σ_fixed may vary arbitrarily. Then O minimize Σ_min, Σ_fixed is an ontology with:
Ins(O minimize Σ_min, Σ_fixed) := I
Sig(O minimize Σ_min, Σ_fixed) := Σ
Mod(O minimize Σ_min, Σ_fixed) := {M ∈ Mod(O) | M|_{Σ_min ∪ Σ_fixed} is minimal in Fix(M)}
where Fix(M) = {M′ ∈ Mod(O)|_{Σ_min ∪ Σ_fixed} | M′|_{Σ_fixed} = M|_{Σ_fixed}}.
The full DOL language adds further language constructs that can be expressed in terms of this kernel language. Furthermore, DOL allows the omission of translations along default inclusion comorphisms, since these can be reconstructed in a unique way.
Logical consequence. We say that a sentence ϕ is a logical consequence of a heterogeneous structured ontology O, written O |= ϕ, if any model of O satisfies ϕ.
Monotonicity. Similar to [START_REF] Sannella | Foundations of Algebraic Specification and Formal Software Development[END_REF], Ex. 5.1.4, we get:
Proposition 12. All structuring operations of the DOL kernel language except minimisation are monotone in the sense that they preserve model class inclusion:
Mod(O1) ⊆ Mod(O2) implies Mod(op(O1)) ⊆ Mod(op(O2)). (Union is monotone in both arguments.)
Indeed, the minimisation is a deliberate exception: its motivation is to capture nonmonotonic reasoning.
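As a small illustration of our own (assuming, for the sake of the example, a first-order-style institute whose model ordering compares the extensions of predicates pointwise): let Σ contain a unary predicate P and constants a, b, let Σ_min = {P} and Σ_fixed = {a, b}, and let O1 = ⟨Σ, {P(a)}⟩ and O2 = ⟨Σ, {P(a), P(b)}⟩. In O1 minimize Σ_min, Σ_fixed the extension of P is forced to be the singleton containing the value of a, while in O2 minimize Σ_min, Σ_fixed it is forced to contain exactly the values of a and b. Hence, although Mod(O2) ⊆ Mod(O1), a minimal model of O2 in which a and b denote different elements is not a model of O1 minimize Σ_min, Σ_fixed, so minimisation is indeed not monotone—exactly the circumscription-style behaviour intended.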
Proposition 13. If reducts are surjective, minimize is anti-monotone in Σ min .
Proof. Let O be an ontology with institute I and signature Σ and let Σ^1_min, Σ^2_min, Σ_fixed ⊆ Σ be subsignatures such that Σ^1_min ≤ Σ^2_min, and Σ^i_min ∪ Σ_fixed is defined for i = 1, 2. Let Fix1 and Fix2 be defined as Fix above, using Σ^1_min and Σ^2_min, respectively.
We first considered integrating a module extraction operator into the kernel language of heterogeneous structured ontologies. However, there are so many different notions of ontology module and techniques of module extraction used in the literature that we would have to define a whole collection of module extraction operators, a collection that moreover would quickly become obsolete and incomplete. We refrained from this, and instead provide a relation between heterogeneous structured ontologies that is independent of the specificities of particular module extraction operators. Still, it is possible to define all the relevant notions used in the ontology modules community within an arbitrary institute, namely the notions of conservative extension, inseparability, uniform interpolant, etc. The reason is that these notions are typically defined in set-theoretic parlance about signatures (see Def. 5).
The full DOL language is based on the DOL kernel and also includes a construct for colimits (which is omitted here, because its semantics requires institutions) and ontology alignments (which are omitted here, because they do not have a model-theoretic semantics). The full DOL language is detailed in the current OntoIOp ISO 17347 working draft, see ontoiop.org. There, an alternative semantics to the above direct set-theoretic semantics is also given: a translational semantics. It assumes that all involved institutes can be translated to Common Logic, and gives the semantics of an arbitrary ontology by translation to Common Logic (and then using the above direct semantics). The two semantics are compatible, see [START_REF] Mossakowski | Three Semantics for the Core of the Distributed Ontology Language[END_REF] for details. However, the translational semantics has some important drawbacks. In particular, the semantics of ontology modules (relying on the notion of conservative extension) is not always preserved when translating to Common Logic. See [START_REF] Mossakowski | Three Semantics for the Core of the Distributed Ontology Language[END_REF] for details.
An Example in DOL
As an example of a heterogeneous ontology in DOL, we formalise some notions of mereology. Propositional logic is not capable of describing mereological relations, but it can describe the basic categories over which the DOLCE foundational ontology [START_REF] Masolo | Ontology library[END_REF] defines mereological relations. The same knowledge can be formalised more conveniently in OWL, which additionally allows for describing (not defining!) basic parthood properties. As our OWL ontology redeclares as classes the same categories that the propositional logic ontology Taxonomy had introduced as propositional variables, using different names but satisfying the same disjointness and subsumption axioms, we observe that it interprets the former. Mereological relations are frequently used in lightweight OWL ontologies, e.g. biomedical ontologies in the EL profile (designed for efficient reasoning with a large number of entities, a frequent case in this domain), but these languages are not fully capable of defining these relations. Therefore, we finally provide a full definition of several mereological relations in first-order logic, in the Common Logic language, by importing, translating and extending the OWL ontology. We use Common Logic's second-order facility of quantifying over predicates to concisely express the restriction of the variables x, y, and z to the same taxonomic category (cf. the Common Logic axioms reproduced below).

Recall (following [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF]) that an institution I = (Sign, Sen, Mod, |=) consists of a category Sign of signatures, a sentence functor Sen : Sign → Set, a model functor Mod : Sign^op → CAT (whose action on a signature morphism σ yields the reduct ·|_σ), and a family of satisfaction relations |=_Σ ⊆ |Mod(Σ)| × Sen(Σ), such that for each σ : Σ → Σ′ in Sign the following satisfaction condition holds:

M′ |=_{Σ′} σ(ϕ)  iff  M′|_σ |=_Σ ϕ

for each M′ ∈ |Mod(Σ′)| and ϕ ∈ Sen(Σ), expressing that truth is invariant under change of notation and context.

With institutions, a few more features of DOL can be equipped with a semantics:
- renamings along signature morphisms [START_REF] Sannella | Specifications in an arbitrary institution[END_REF],
- combinations (colimits), and
- monomorphic extensions.
Due to the central role of inclusions of signatures for institutes, we also need to recall the notion of inclusive institution. Definition 15 ([31]). An inclusive category is a category having a broad subcategory which is a partially ordered class.
An inclusive institution is one with an inclusive signature category such that the sentence functor preserves inclusions. We additionally require that such institutions have inclusive model categories, have signature intersections (i.e. binary infima), which are preserved by Sen, and have well-founded sentences, which means that there is no sentence that occurs in all sets of an infinite chain of strict inclusions

· · · → Sen(Σ_n) → · · · → Sen(Σ_1) → Sen(Σ_0)

that is the image (under Sen) of a corresponding chain of signature inclusions.
Definition 16. Given institutions I and J, an institution morphism [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF] written µ = (Φ, α, β) : I → J consists of a functor Φ : Sign^I → Sign^J, a natural transformation α : Sen^J ∘ Φ → Sen^I and a natural transformation β : Mod^I → Mod^J ∘ Φ^op, such that the following satisfaction condition holds for all Σ ∈ Sign^I, M ∈ Mod^I(Σ) and ϕ′ ∈ Sen^J(Φ(Σ)):

M |=^I_Σ α_Σ(ϕ′)  iff  β_Σ(M) |=^J_{Φ(Σ)} ϕ′
Definition 17. Given institutions I and J, an institution comorphism [START_REF] Goguen | Institution morphisms[END_REF] denoted as ρ = (Φ, α, β) : I → J consists of a functor Φ : Sign^I → Sign^J, a natural transformation α : Sen^I → Sen^J ∘ Φ, and a natural transformation β : Mod^J ∘ Φ^op → Mod^I such that the following satisfaction condition holds for all Σ ∈ Sign^I, M′ ∈ Mod^J(Φ(Σ)) and ϕ ∈ Sen^I(Σ):

M′ |=^J_{Φ(Σ)} α_Σ(ϕ)  iff  β_Σ(M′) |=^I_Σ ϕ.
Let InclIns (CoInclIns) denote the quasicategory of inclusive institutions and morphisms (comorphisms). Furthermore, let Class denote the quasicategory of classes and functions. Note that (class-indexed) colimits of sets in Class can be constructed in the same way as in Set. Finally, call an institute locally small, if each Sen(Σ ) is a set. Let Institute (CoInstitute) be the quasicategory of locally small institutes and morphisms (comorphisms).
Proposition 18. There are functors F co : CoInstitute → CoInclIns and F : Institute → InclIns.
Proof. Given an institute I = (Sen I , Sign I , ≤ I , sig I , Mod I , |= I , .| . ), we construct an institution F(I) = F co (I) as follows: (Sign I , ≤ I ) is a partially ordered class, hence a (thin) category. We turn it into an inclusive category by letting all morphisms be inclusions. This will be the signature category of F(I).
For each signature Σ , we let Sen F(I) (Σ ) be Sen I (Σ ) (here we need local smallness of I). Then Sen F(I) easily turns into an inclusion-preserving functor. Also, Mod F(I) (Σ ) is Mod I (Σ ) turned into a thin category using the partial order on Mod I . Since reducts are compositional and preserve the model ordering, we obtain reduct functors for F(I). Satisfaction in F(I) is defined as in I. The satisfaction condition holds because satisfaction is invariant under reduct.
Given an institute comorphism ρ = (Φ, α, β) : I1 → I2, we define an institution comorphism F_co(ρ) : F(I1) → F(I2) as follows. Φ obviously is a functor from Sign_{F(I1)} to Sign_{F(I2)}. If sig(ϕ) ≤ Σ, by sentence coherence, sig(α(ϕ)) ≤ Φ(Σ). Hence, α : Sen1 → Sen2 can be restricted to α_Σ : Sen1(Σ) → Sen2(Φ(Σ)) for any I1-signature Σ. Naturality of the family (α_Σ)_{Σ ∈ Sign1} follows from the fact that the α_Σ are restrictions of a global α. Each β_Σ is functorial because it is monotone. Naturality of the family (β_Σ)_{Σ ∈ Sign1} follows from model translation commuting with reduct. The satisfaction condition is easily inherited from the institute comorphism.
The translation of institute morphisms is similar.
Proposition 19. There are functors G_co : CoInclIns → CoInstitute and G : InclIns → Institute, such that G_co ∘ F_co ≅ id and G ∘ F ≅ id.
Proof. Given an inclusive institution I = (Sign I , Sen I , Mod I , |= I ), we construct an institute G(I) = G co (I) as follows: (Sign I , ≤ I ) is the partial order given by the inclusions.
Sen_{G(I)} is the colimit of the diagram of all inclusions Sen_I(Σ1) → Sen_I(Σ2) for Σ1 ≤ Σ2. This colimit is taken in the quasicategory of classes and functions. It exists because all involved objects are sets (the construction can be given as a quotient of the disjoint union, following the usual construction of colimits as coequalisers of coproducts). Let µ_Σ : Sen_I(Σ) → Sen_{G(I)} denote the colimit injections. For a sentence ϕ, let S(ϕ) be the set of signatures Σ such that ϕ is in the image of µ_Σ. We show that S(ϕ) has a least element. For if not, choose some Σ0 ∈ S(ϕ). Assume that we have chosen Σn ∈ S(ϕ). Since Σn is not the least element of S(ϕ), there must be some Σ′ ∈ S(ϕ) such that Σn ≰ Σ′. Then let Σ_{n+1} = Σn ∩ Σ′; since Sen preserves intersections, Σ_{n+1} ∈ S(ϕ). Moreover, Σ_{n+1} < Σn. This gives an infinite descending chain of signature inclusions in S(ϕ), contradicting I having well-founded sentences. Hence, S(ϕ) must have a least element, which we use as sig(ϕ).
Mod G(I) (Σ ) is the partial order of inclusions in Mod I (Σ ), and also reduct is inherited. Since Mod G(I) is functorial, reducts are compositional. Since each Mod G(I) (σ ) is functorial, reducts preserve the model ordering. Satisfaction in G(I) is defined as in I. The satisfaction condition implies that satisfaction is invariant under reduct.
Given an institution comorphism ρ = (Φ, α, β ) : I 1 -→ I 2 , we define an institute comorphism G co (ρ) : G(I 1 ) -→ G(I 2 ) as follows. Φ obviously is a monotone map from Sign G(I 1 ) to Sign G(I 2 ) .
α : Sen_{G(I1)} → Sen_{G(I2)} is defined by exploiting the universal property of the colimit Sen_{G(I1)}: it suffices to define a cocone Sen_{I1}(Σ) → Sen_{G(I2)} indexed over signatures Σ in Sign_{I1}. The cocone is given by composing α_Σ with the inclusion of Sen_{I2}(Φ(Σ)) into Sen_{G(I2)}. Commutativity of a cocone triangle follows from that of a cocone triangle for the colimit Sen_{G(I2)} together with naturality of α. This construction also ensures sentence coherence.
Model translation is just given by the β Σ ; the translation of institution morphisms is similar.
Finally, G • F ∼ = id follows because Sen can be seen to be the colimit of all Sen(Σ 1 ) → Sen(Σ 2 ). This means that we can even obtain G • F = id. However, since the choice of the colimit in the definition of G is only up to isomorphism, generally we obtain only G • F ∼ = id. The argument for G co • F co ∼ = id is similar, since isomorphism institution morphisms are also isomorphism institution comorphisms.
It should be noted that F_co : CoInstitute → CoInclIns is "almost" left adjoint to G_co : CoInclIns → CoInstitute: by the above remarks, w.l.o.g., the unit η : Id → G_co ∘ F_co can be chosen to be the identity. Hence, we need to show that for each institute comorphism ρ : I1 → G(I2), there is a unique institution comorphism ρ^# : F(I1) → I2 with G(ρ^#) = ρ. The latter condition easily ensures uniqueness. Let ρ = (Φ, α, β). We construct ρ^# as (Φ, α^#, β). Clearly, Φ also is a functor from Sign_{F(I1)} into Sign_{I2} (which is a supercategory of Sign_{G(I2)}). A similar remark holds for β, but only if the model categories in I2 consist of inclusions only. α^# can be constructed from α by passing to the restrictions α_Σ. Altogether we get:

Proposition 20. F_co : CoInstitute → CoInclIns is left adjoint to G_co : CoInclIns → CoInstitute if institutions are restricted to model categories consisting of inclusions only.

Since also G_co ∘ F_co ≅ id, CoInstitute comes close to being a coreflective subcategory of CoInclIns.
We also obtain: Proposition 21. For the DOL kernel language, the institute-based semantics (over some institute-based heterogeneous logical environment E) and the institution-based semantics (similar to that given in [START_REF] Sannella | Specifications in an arbitrary institution[END_REF][START_REF] Mossakowski | Heterogeneous logical environments for distributed specifications[END_REF], over F applied to E) coincide up to application of G to the Ins component of the semantics.
Conclusion
We have taken concepts from the area of formal methods for software specification and applied them to obtain a kernel language for the Distributed Ontology Language (DOL), including a semantics, and have thus provided the syntax and semantics of a heterogeneous structuring language for ontologies. The standard approach here would be to use institutions to formalise the notion of logical system. However, aiming at a simpler presentation of the heterogeneous semantics, we have developed the notion of institute, which allows us to obtain a set-based semantics for a large part of DOL. Institutes can be seen as institutions without category theory.
Goguen and Tracz [START_REF] Goguen | An implementation-oriented semantics for module composition[END_REF] have a related set-theoretic approach to institutions: they require signatures to be tuple sets. Our approach is more abstract, because signatures can be any partial order. Moreover, the results of Sect. 7 show that institutes integrate nicely with institutions. That is, we can have the cake and eat it, too: we can abstractly formalise various logics as institutes, a formalisation which, being based on standard set-theoretic methods, can be easily understood by the broader ontology communities that are not necessarily acquainted with category-theoretic methods. Moreover, the possibility to extend the institute-based formalisation to a full-blown institution which is compatible with the institute (technically, this means that the functor G defined in Prop. 19, applied to the institution, should yield the institute) allows a smooth technical integration of further features into the framework which do require institutions, such as colimits.
This work provides the semantic backbone for the Distributed Ontology Language DOL, which is being developed in the ISO Standard 17347 Ontology Integration and Interoperability, see ontoiop.org. An experimental repository for ontologies written in different logics and also in DOL is available at ontohub.org.
Fig. 1. An initial logic graph for the Distributed Ontology Language DOL (figure not reproduced here).
presentations: For any institute I, signature Σ ∈ |Sign_I| and finite set Γ ⊆ Sen_I(Σ) of Σ-sentences, the presentation ⟨I, Σ, Γ⟩ is an ontology with:
Ins(⟨I, Σ, Γ⟩) := I
Sig(⟨I, Σ, Γ⟩) := Σ
Mod(⟨I, Σ, Γ⟩) := {M ∈ Mod(Σ) | M |= Γ}
union: For any signature Σ ∈ |Sign|, given ontologies O1 and O2 with the same institute I and signature Σ, their union O1 and O2 is an ontology with:
Ins(O1 and O2) := I
Sig(O1 and O2) := Σ
Mod(O1 and O2) := Mod(O1) ∩ Mod(O2)
extension: For any ontology O with institute I and signature Σ and any signature extension Σ ≤ Σ′ in I, O with Σ′ is an ontology with:
Ins(O with Σ′) := I
Sig(O with Σ′) := Σ′
Mod(O with Σ′) := {M′ ∈ Mod(Σ′) | M′|_Σ ∈ Mod(O)}
hiding: For any ontology O′ with institute I and signature Σ′ and any signature extension Σ ≤ Σ′ in I, O′ hide Σ is an ontology with:
Ins(O′ hide Σ) := I
Sig(O′ hide Σ) := Σ
Mod(O′ hide Σ) := {M′|_Σ | M′ ∈ Mod(O′)}
minimisation: the case O minimize Σ_min, Σ_fixed given above in Definition 11.
translation along a comorphism: For any ontology O with institute I and signature Σ and any institute comorphism ρ = (Φ, α, β) : I → I′, O with ρ is an ontology with:
Ins(O with ρ) := I′
Sig(O with ρ) := Φ(Σ)
Mod(O with ρ) := {M′ ∈ Mod_{I′}(Φ(Σ)) | β_Σ(M′) ∈ Mod(O)}
If ρ is simple theoroidal, then Sig(O with ρ) is the signature component of Φ(Σ).
hiding along a morphism: For any ontology O with institute I and signature Σ and any institute morphism µ = (Φ, α, β) : I → I′, O hide µ is an ontology with:
Ins(O hide µ) := I′
Sig(O hide µ) := Φ(Σ)
Mod(O hide µ) := {β_Σ(M) | M ∈ Mod(O)}
Derived operations. We also define the following derived operation generalising union to arbitrary pairs of ontologies: For any ontologies O1 and O2 with institutes I1 and I2 and signatures Σ1 and Σ2, if the supremum I1 ∪ I2 exists (with default inclusions ι_i = (Φ_i, α_i, β_i) : I_i → I1 ∪ I2) and the union Σ = Φ_1(Σ1) ∪ Φ_2(Σ2) is defined, the generalised union of O1 and O2, by abuse of notation also written as O1 and O2, is defined as
(O1 with ι_1 with Σ) and (O2 with ι_2 with Σ)
Let M ∈ Mod(O minimize Σ^2_min, Σ_fixed). Then M is an O-model such that M|_{Σ^2_min ∪ Σ_fixed} is minimal in Fix2(M). We show that M|_{Σ^1_min ∪ Σ_fixed} is minimal in Fix1(M): let M′ be in Fix1(M). By surjectivity of reducts, it can be expanded to a (Σ^2_min ∪ Σ_fixed)-model M″. Now M″ ∈ Fix2(M), because all involved models agree on Σ_fixed. Since M|_{Σ^2_min ∪ Σ_fixed} is minimal in Fix2(M), M|_{Σ^2_min ∪ Σ_fixed} ≤ M″. Since reducts preserve the model ordering, M|_{Σ^1_min ∪ Σ_fixed} ≤ M′. Hence, M ∈ Mod(O minimize Σ^1_min, Σ_fixed).
5 Relations between Ontologies
Besides heterogeneous structured ontologies, DOL features the following statements about relations between heterogeneous structured ontologies:
interpretations: Given heterogeneous structured ontologies O1 and O2 with institutes I1 and I2 and signatures Σ1 and Σ2, we write O1 ∼∼∼> O2 (read: O1 can be interpreted in O2) for the conjunction of
1. I1 ≤ I2 with default inclusion ι = (Φ, α, β) : I1 → I2,
2. Φ(Σ1) ≤ Σ2, and
3. β_{Σ1}(Mod(O2)|_{Φ(Σ1)}) ⊆ Mod(O1).
modules: Given heterogeneous structured ontologies O1 and O2 over the same institute I with signatures Σ1 and Σ2, and given another signature Σ ≤ Σ1 (called the restriction signature), we say that O1 is a model-theoretic (consequence-theoretic) module of O2 w.r.t. Σ if for any O1-model M1, M1|_Σ can be extended to an O2-model (resp. O2 is a conservative extension of O1, see Def. 5). It is easy to see that the model-theoretic module relation implies the consequence-theoretic one. However, the converse is not true in general; compare [START_REF] Lutz | Conservative Extensions in Expressive Description Logics[END_REF] for an example from description logic, and see [START_REF] Kutz | Conservativity in Structured Ontologies[END_REF] for more general conservativity preservation results.
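As a small illustration of our own (in a description logic institute): take O2 with signature {A, B, C} and axioms {A ⊑ B, B ⊑ C}, O1 with signature {A, B} and axiom {A ⊑ B}, and restriction signature Σ = {A, B}. Every O1-model M1 satisfies A^{M1} ⊆ B^{M1}, and M1|_Σ = M1 can be extended to an O2-model by interpreting C as the whole universe; hence O1 is a model-theoretic (and therefore also a consequence-theoretic) module of O2 w.r.t. Σ.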
For the purposes of this paper, "ontology" can be equated with "logical theory".
To make this more explicit, as of January 2013, Google Scholar returns about 1 million papers for the keyword 'ontology', around 10.000 for the keyword 'colimits', but only around 200 for the conjunctive query.
See also http://www.w3.org/TR/owl2-overview/
There are two modifications: 1. We use ≤ where[START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF] write ⊆. 2. In[START_REF] Lutz | Deciding inseparability and conservative extensions in the description logic EL[END_REF], all these notions are defined relative to a query language. This can also be done in an institute by singling out a subinstitute (see end of Sect. 3 below), which then becomes an additional parameter of the definition.
Set is the category having all small sets as objects and functions as arrows.
CAT is the category of categories and functors. Strictly speaking, CAT is not a category but only a so-called quasicategory, which is a category that lives in a higher set-theoretic universe.
Note, however, that non-monotonic formalisms can only indirectly be covered this way, but compare, e.g.,[START_REF] Guerra | Composition of Default Specifications[END_REF].
This is a quite reasonable assumption met by practically all institutions. Note that by contrast, preservation of unions is quite unrealistic-the union of signatures normally leads to new sentences combining symbols from both signatures.
Acknowledgements: We would like to thank the OntoIOp working group within ISO/TC 37/SC 3 for providing valuable feedback, in particular Michael Grüninger, Pat Hayes, Maria Keet, Chris Menzel, and John Sowa. We also want to thank Andrzej Tarlecki, with whom we collaborate(d) on the semantics of heterogeneous specification, Thomas Schneider for help with the semantics of modules, and Christian Maeder, Eugen Kuksa and Sören Schulze for implementation work. This work has been supported by the DFGfunded Research Centre on Spatial Cognition (SFB/TR 8), project I1-[OntoSpace], and EPSRC grant "EP/J007498/1".
(forall (x y z)
  (if (and (X x) (X y) (X z))
    (and %% now list all the axioms
      (if (and (isPartOf x y) (isPartOf y x)) (= x y)) %% antisymmetry
      (if (and (isProperPartOf x y) (isProperPartOf y z)) (isProperPartOf x z)) %% transitivity; can't be expressed in OWL together with asymmetry
      (iff (overlaps x y) (exists (pt) (and (isPartOf pt x) (isPartOf pt y))))
      (iff (isAtomicPartOf x y) (and (isPartOf x y) (Atom x)))
      (iff (sum z x y) (forall (w) (iff (overlaps w z) (and (overlaps w x) (overlaps w y)))))
      (exists (s) (sum s x y))))))) %% existence of the sum
}
Relating Institutes and institutions
In this section, we show that institutes are a certain restriction of institutions. We first recall Goguen's and Burstall's notion of institution [START_REF] Goguen | Institutions: Abstract model theory for specification and programming[END_REF], which they have introduced as a formalisation of the intuitive notion of logical system. We assume some acquaintance with the basic notions of category theory and refer to [START_REF] Adámek | Abstract and Concrete Categories[END_REF] or [START_REF] Mac | Categories for the Working Mathematician[END_REF] for an introduction. | 47,437 | [
"769746"
] | [
"461380",
"258630",
"461380",
"461380",
"421435"
] |
01485976 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485976/file/978-3-642-37635-1_2_Chapter.pdf | Francisco Durán
email: duran@lcc.uma.es
Fernando Orejas
email: orejas@lsi.upc.edu
Steffen Zschaler
email: szschaler@acm.org
Behaviour Protection in Modular Rule-Based System Specifications
Model-driven engineering (MDE) and, in particular, the notion of domain-specific modelling languages (DSMLs) is an increasingly popular approach to systems development. DSMLs are particularly interesting because they allow encoding domain-knowledge into a modelling language and enable full code generation and analysis based on high-level models. However, as a result of the domain-specificity of DSMLs, there is a need for many such languages. This means that their use only becomes economically viable if the development of new DSMLs can be made efficient. One way to achieve this is by reusing functionality across DSMLs. On this background, we are working on techniques for modularising DSMLs into reusable units. Specifically, we focus on DSMLs whose semantics are defined through in-place model transformations. In this paper, we present a formal framework of morphisms between graph-transformation systems (GTSs) that allow us to define a novel technique for conservative extensions of such DSMLs. In particular, we define different behaviour-aware GTS morphisms and prove that they can be used to define conservative extensions of a GTS.
Introduction
Model-Driven Engineering (MDE) [START_REF] Schmidt | Model-driven engineering[END_REF] has raised the level of abstraction at which systems are developed, moving development focus from the programming-language level to the development of software models. Models and specifications of systems have been around in the software industry from its very beginning, but MDE articulates them so that the development of information systems can be at least partially automated. Thus models are being used not only to specify systems, but also to simulate, analyze, modify and generate code for such systems. Particularly useful concepts in MDE are domain-specific modelling languages (DSMLs) [START_REF] Van Deursen | Domain-specific languages: An annotated bibliography[END_REF]. These languages offer concepts specifically targeted at a particular domain. On the one hand this makes it easier for domain experts to express their problems and requirements. On the other hand, the higher amount of knowledge embedded in each concept allows for much more complete generation of executable solution code from a DSML model [START_REF] Hemel | Code generation by model transformation: A case study in transformation modularity[END_REF] as compared to a model expressed in a general-purpose modelling language.
DSMLs can only be as effective as they are specific for a particular domain. This implies that there is a need for a large number of such languages to be developed. However, development of a DSML takes additional effort in a software-development project. DSMLs are only viable if their development can be made efficient. One way of achieving this is by allowing them to be built largely from reusable components. Consequently, there has been substantial research on how to modularise language specifications. DSMLs are often defined by specifying their syntax (often separated into concrete and abstract syntax) and their semantics. While we have reasonably good knowledge of how to modularise DSML syntax, the modularisation of language semantics is an as yet unsolved issue.
DSML semantics can be represented in a range of different ways-for example using UML behavioural models [START_REF] Engels | Dynamic meta modeling: A graphical approach to the operational semantics of behavioral diagrams in UML[END_REF][START_REF] Fischer | Story diagrams: A new graph rewrite language based on the unified modeling language[END_REF], abstract state machines [START_REF] Di Ruscio | Extending AMMA for supporting dynamic semantics specifications of DSLs[END_REF][START_REF] Chen | Semantic anchoring with model transformations[END_REF], Kermeta [START_REF] Muller | Weaving executability into object-oriented metalanguages[END_REF], or in-place model transformations [START_REF] De Lara | Automating the transformation-based analysis of visual languages[END_REF][START_REF] Rivera | Analyzing rule-based behavioral semantics of visual modeling languages with Maude[END_REF]. In the context of MDE it seems natural to describe the semantics by means of models, so that they may be integrated with the rest of the MDE environment and tools. We focus on the use of in-place model transformations.
Graph transformation systems (GTSs) were proposed as a formal specification technique for the rule-based specification of the dynamic behaviour of systems [START_REF] Ehrig | Introduction to the algebraic theory of graph grammars[END_REF]. Different approaches exist for modularisation in the context of the graph-grammar formalism [START_REF] Corradini | The category of typed graph grammars and its adjunctions with categories of derivations[END_REF][START_REF]Handbook of Graph Grammars and Computing by Graph Transformations[END_REF][START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. All of them have followed the tradition of modules inspired by the notion of algebraic specification module [START_REF] Ehrig | Fundamentals of Algebraic Specification 2. Module Specifications and Constraints[END_REF]. A module is thus typically considered as given by an export and an import interface, and an implementation body that realises what is offered in the export interface, using the specification to be imported from other modules via the import interface. For example, Große-Rhode, Parisi-Presicce, and Simeoni introduce in [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF] a notion of module for typed graph transformation systems, with interfaces and implementation bodies; they propose operations for union, composition, and refinement of modules. Other approaches to modularisation of graph transformation systems include PROGRES Packages [START_REF] Schürr | The PROGRES-approach: Language and environment[END_REF], GRACE Graph Transformation Units and Modules [START_REF] Kreowski | Graph transformation units and modules[END_REF], and DIEGO Modules [START_REF] Taentzer | DIEGO, another step towards a module concept for graph transformation systems[END_REF]. See [START_REF] Heckel | Classification and comparison of modularity concepts for graph transformation systems[END_REF] for a discussion on these proposals.
For the kind of systems we deal with, the type of module we need is much simpler. For us, a module is just the specification of a system, a GTS, without import and export interfaces. Then, we build on GTS morphisms to compose these modules, and specifically we define parametrised GTSs. The instantiation of such parameterized GTS is then provided by an amalgamation construction. We present formal results about graph-transformation systems and morphisms between them. Specifically, we provide definitions for behaviour-reflecting and -protecting GTS morphisms and show that they can be used to infer semantic properties of these morphisms. We give a construction for the amalgamation of GTSs, as a base for the composition of GTSs, and we prove it to protect behaviour under certain circumstances. Although we motivate and illustrate our approach using the e-Motions language [START_REF] Rivera | A graphical approach for modeling time-dependent behavior of DSLs[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF], our proposal is language-independent, and all the results are presented for GTSs and adhesive HLR systems [START_REF] Lack | Adhesive categories[END_REF][START_REF] Ehrig | Adhesive high-level replacement categories and systems[END_REF].
Different forms of GTS morphisms have been used in the literature, taking one form or another depending on their concrete application. Thus, we find proposals centered on refinements (see, e.g., [START_REF] Heckel | Horizontal and vertical structuring of typed graph transformation systems[END_REF][START_REF] Große-Rhode | Spatial and temporal refinement of typed graph transformation systems[END_REF][START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF]), views (see, e.g., [START_REF] Engels | A combined reference model-and viewbased approach to system specification[END_REF]), and substitutability (see [START_REF] Engels | Flexible interconnection of graph transformation modules[END_REF]). See [START_REF] Engels | Flexible interconnection of graph transformation modules[END_REF] for a first attempt at a systematic comparison of the different proposals and notations. None of these notions fit our needs, and none of them coincide with our behaviour-aware GTS morphisms.
Moreover, as far as we know, parameterised GTSs and GTS morphisms, as we discuss them, have not been studied before. Heckel and Cherchago introduce parameterised GTSs in [START_REF] Heckel | Structural and behavioural compatibility of graphical service specifications[END_REF], but their notion has little to do with our parameterised GTSs. In their case, the parameter is a signature, intended to match service descriptions. They however use a double-pullback semantics, and have a notion of substitution morphism which is related to our behaviour preserving morphism.
Our work is originally motivated by the specification of non-functional properties (NFPs), such as performance or throughput, in DSMLs. We have been looking for ways in which to encapsulate the ability to specify non-functional properties into reusable DSML modules. Troya et al. used the concept of observers in [START_REF] Troya | Simulating domain specific visual models by observation[END_REF][START_REF] Troya | Model-driven performance analysis of rulebased domain specific visual models[END_REF] to model nonfunctional properties of systems described by GTSs in a way that could be analysed by simulation. In [START_REF] Durán | On the reusable specification of non-functional properties in DSLs[END_REF], we have built on this work and ideas from [START_REF] Zschaler | Formal specification of non-functional properties of component-based software systems: A semantic framework and some applications thereof[END_REF] to allow the modular encapsulation of such observer definitions in a way that can be reused in different DSML specifications. In this paper, we present a full formal framework of such language extensions. Nevertheless, this framework is independent of the specific example of non-functional property specifications, but instead applies to any conservative extension of a base GTS.
The way in which we think about composition of reusable DSML modules has been inspired by work in aspect-oriented modelling (AOM). In particular, our ideas for expressing parametrised metamodels are based on the proposals in [START_REF] Clarke | Generic aspect-oriented design with Theme/UML[END_REF][START_REF] Klein | Reusable aspect models[END_REF]. Most AOM approaches use syntactic notions to automate the establishment of mappings between different models to be composed, often focusing primarily on the structural parts of a model. While our mapping specifications are syntactic in nature, we focus on composition of behaviours and provide semantic guarantees. In this sense, our work is perhaps most closely related to the work on MATA [START_REF] Whittle | MATA: A unified approach for composing UML aspect models based on graph transformation[END_REF] or semantic-based weaving of scenarios [START_REF] Klein | Semantic-based weaving of scenarios[END_REF].
The rest of the paper begins with a presentation of a motivating example expressed in the e-Motions language in Section 2. Section 3 introduces a brief summary of graph transformation and adhesive HLR categories. Section 4 introduces behaviour-reflecting GTS morphisms, the construction of amalgamations in the category of GTSs and GTS morphisms, and several results on these amalgamations, including the one stating that the morphisms induced by these amalgamations protect behaviour, given appropriate conditions. The paper finishes with some conclusions and future work in Section 5.
NFP specification with e-Motions
In this section, we use e-Motions [START_REF] Rivera | A graphical approach for modeling time-dependent behavior of DSLs[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF] to provide a motivating example, adapted from [START_REF] Troya | Simulating domain specific visual models by observation[END_REF], as well as intuitions for the formal framework developed. However, as stated in the previous section, the framework itself is independent of such a language.
e-Motions is a Domain Specific Modeling Language (DSML) and graphical framework developed for Eclipse that supports the specification, simulation, and formal anal- ysis of DSMLs. Given a MOF metamodel (abstract syntax) and a GCS model (a graphical concrete syntax) for it, the behaviour of a DSML is defined by in-place graph transformation rules. Although we briefly introduce the language here, we omit all those details not relevant to this paper. We refer the interested reader to [START_REF] Rivera | Formal specification and analysis of domain specific models using Maude[END_REF][START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF] or http://atenea.lcc.uma.es/e-Motions for additional details.
Figure 1(a) shows the metamodel of a DSML for specifying Production Line systems for producing hammers out of hammer heads and handles, which are generated in respective machines, transported along the production line via conveyors, and temporarily stored in trays. As usual in MDE-based DSMLs, this metamodel defines all the concepts of the language and their interconnections; in short, it provides the language's abstract syntax. In addition, a concrete syntax is provided. In the case of our example, this is sufficiently well defined by providing icons for each concept (see Figure 1(b)); connections between concepts are indicated through arrows connecting the corresponding icons. Figure 2 shows a model conforming to the metamodel in Figure 1(a) using the graphical notation introduced in the GCS model in Figure 1(b).
The behavioural semantics of the DSML is then given by providing transformation rules specifying how models can evolve. Figure 3 shows an example of such a rule. The rule consists of a left-hand side matching a situation before the execution of the rule and a right-hand side showing the result of applying the rule. Specifically, this rule shows how a new hammer is assembled: a hammer generator a has an incoming tray of parts and is connected to an outgoing conveyor belt. Whenever there is a handle and a head available, and there is space in the conveyor for at least one part, the hammer generator can assemble them into a hammer. The new hammer is added to the parts set of the outgoing conveyor belt in time T, with T some value in the range [a.pt -3, a.pt + 3], and where pt is an attribute representing the production time of a machine. The complete semantics of our production-line DSML is constructed from a number of such rules covering all kinds of atomic steps that can occur, e.g., generating new pieces, moving pieces from a conveyor to a tray, etc. The complete specification of a Production Line example using e-Motions can be found at http://atenea.lcc. uma.es/E-motions/PLSExample.
For a Production Line system like this one, we may be interested in a number of non-functional properties. For example, we would like to assess the throughput of a production line, or how long it takes for a hammer to be produced. Figure 4(a) shows the metamodel for a DSML for specifying production time. It is defined as a parametric model (i.e., a model template), defined independently of the Production Line system. It uses the notion of response time, which can be applied to different systems with different meanings. The concepts of Server, Queue, and Request and their interconnections are parameters of the metamodel, and they are shaded in grey for illustration purposes. Figure 4(b) shows the concrete syntax for the response time observer object. Whenever that observer appears in a behavioural rule, it will be represented by that graphical symbol.
Figure 4(c) shows one of the transformation rules defining the semantics of the response time observer. It states that if there is a server with an in queue and an out queue and there initially are some requests (at least one) in the in queue, and the out queue contains some requests after rule execution, the last response time should be recorded to have been equal to the time it took the rule to execute. Similar rules need to be written to capture other situations in which response time needs to be measured, for example, where a request stays at a server for some time, or where a server does not have an explicit in or out queue.
Note that, as in the metamodel in Figure 4(a), part of the rule in Figure 4(c) has been shaded in grey. Intuitively, the shaded part represents a pattern describing transformation rules that need to be extended to include response-time accounting. The lower part of the rule describes the extensions that are required. So, in addition to reading Figure 4(c) as a 'normal' transformation rule (as we have done above), we can also read it as a rule transformation, stating: "Find all rules that match the shaded pattern and add ResponseTime objects to their left- and right-hand sides as described." In effect, observer models become higher-order transformations [START_REF] Tisi | On the use of higher-order model transformations[END_REF].
To use our response-time language to allow specification of production time of hammers in our Production Line DSML, we need to weave the two languages together. For this, we need to provide a binding from the parameters of the response-time metamodel (Figure 4(a)) to concepts in the Production Line metamodel (Figure 1(a)). In this case, assuming that we are interested in measuring the response time of the Assemble machine, the binding might be as follows:
-Server to Assemble; -Queue to LimitedContainer as the Assemble machine is to be connected to an arbitrary LimitedContainer for queuing incoming and outgoing parts; -Request to Part as Assemble only does something when there are Parts to be processed; and -Associations:
• The in and out associations from Server to Queue are bound to the corresponding in and out associations from Machine to Tray and Conveyor, respectively; and • The association from Queue to Request is bound to the association from Container to Part.
As we will see in Section 4, given DSMLs defined by a metamodel plus a behaviour, the weaving of DSMLs will correspond to amalgamation in the category of DSMLs and DSML morphisms. Figure 5 shows the amalgamation of an inclusion morphism between the model of an observer DSML, M Obs , and its parameter sub-model M Par , and the binding morphism from M Par to the DSML of the system at hand, M DSML , the Production Line DSML in our example. The amalgamation object M DSML is obtained by the construction of the amalgamation of the corresponding metamodel morphisms and the amalgamation of the rules describing the behaviour of the different DSMLs.
In our example, the amalgamation of the metamodel corresponding morphisms is shown in Figure 6 (note that the binding is only partially depicted). The weaving process has added the ResponseTime concept to the metamodel. Notice that the weaving process also ensures that only sensible woven metamodels can be produced: for a given binding of parameters, there needs to be a match between the constraints expressed in the observer metamodel and the DSML metamodel. We will discuss this issue in more formal detail in Section 4.
The binding also enables us to execute the rule transformations specified in the observer language. For example, the rule in Figure 3 matches the pattern in Figure 4(c), given this binding: In the left-hand side, there is a Server (Assemble) with an in-Queue (Tray) that holds two Requests (Handle and Head) and an out-Queue (Conveyor). In the right-hand side, there is a Server (Assemble) with an in-Queue (Tray) and an out-Queue (Conveyor) that holds one Request (Hammer). Consequently, we can apply the rule transformation from the rule in Figure 4(c). As we will explain in Section 4, the semantics of this rule transformation is provided by the rule amalgamation illustrated in Figure 7, where we can see how the obtained amalgamated rule is similar to the Assemble rule but with the observers in the RespTime rule appropriately introduced.
(Figure 5, amalgamation square: the parameter model MPar = MMPar ⊕ RlsPar is included into the observer model MObs = MMObs ⊕ RlsObs and mapped by the binding BMM ⊕ BRls into MDSML = MMDSML ⊕ RlsDSML; the amalgamation object M̂DSML = (MMDSML ⊗ MMObs) ⊕ (RlsDSML ⊗ RlsObs) completes the square.)
Clearly, such a separation of concerns between a specification of the base DSML and specifications of languages for non-functional properties is desirable. We have used the response-time property as an example here. Other properties can be defined easily in a similar vein as shown in [START_REF] Troya | Simulating domain specific visual models by observation[END_REF] and at http://atenea.lcc.uma.es/index. php/Main_Page/Resources/E-motions/PLSObExample. In the following sections, we discuss the formal framework required for this and how we can distinguish safe bindings from unsafe ones.
The e-Motions models thus obtained are automatically transformed into Maude [4] specifications [START_REF] Rivera | On the behavioral semantics of real-time domain specific visual languages[END_REF]. See [START_REF] Rivera | Formal specification and analysis of domain specific models using Maude[END_REF] for a detailed presentation of how Maude provides an accurate way of specifying both the abstract syntax and the behavioral semantics of models and metamodels, and offers good tool support both for simulating and for reasoning about them.
Graph transformation and adhesive HLR categories
Graph transformation [START_REF]Handbook of Graph Grammars and Computing by Graph Transformations[END_REF] is a formal, graphical and natural way of expressing graph manipulation based on rewriting rules. In graph-based modelling (and meta-modelling), graphs are used to define the static structures, such as class and object ones, which represent visual alphabets and sentences over them. We formalise our approach using the typed graph transformation approach, specifically the Double Pushout (DPO) algebraic approach, with positive and negative (nested) application conditions [START_REF] Ehrig | Theory of constraints and application conditions: From graphs to high-level structures[END_REF][START_REF] Habel | Correctness of high-level transformation systems relative to nested conditions[END_REF]. We, however, carry out our formalisation for weak adhesive high-level replacement (HLR) categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. Some of the proofs in this paper assume that the category of graphs at hand is adhesive HLR. Thus, in the rest of the paper, when we talk about graphs or typed graphs, keep in mind that we actually mean some type of graph whose corresponding category is adhesive HLR. Specifically, the category of typed attributed graphs, the one of interest to us, was proved to be adhesive HLR in [START_REF] Ehrig | Fundamental theory for typed attributed graph transformation[END_REF].
Generic notions
The concepts of adhesive and (weak) adhesive HLR categories abstract the foundations of a general class of models, and come together with a collection of general semantic techniques [START_REF] Lack | Adhesive categories[END_REF][START_REF] Ehrig | Adhesive high-level replacement categories and systems[END_REF]. Thus, e.g., given proofs for adhesive HLR categories of general results such as the Local Church-Rosser, or the Parallelism and Concurrency Theorem, they are automatically valid for any category which is proved an adhesive HLR category. This framework has been a break-through for the DPO approach of algebraic graph transformation, for which most main results can be proved in these categorical frameworks, and then instantiated to any HLR system. Definition 1. (Van Kampen square) Pushout ( 1) is a van Kampen square if, for any commutative cube with (1) in the bottom and where the back faces are pullbacks, we have that the top face is a pushout if and only if the front faces are pullbacks.
A f r r m $ $ a C n $ $ c A f r r m # # (1) B g r r b C n # # D d B g r r A f r r m $ $ D C n $ $ B g r r D Definition 2. (Adhesive HLR category) A category C with a morphism class M is called adhesive HLR category if
-M is a class of monomorphisms closed under isomorphisms and closed under composition and decomposition, -C has pushouts and pullbacks along M -morphisms, i.e., if one of the given morphisms is in M , then also the opposite one is in M , and M -morphisms are closed under pushouts and pullbacks, and pushouts in C along M -morphisms are van Kampen squares.
In the DPO approach to graph transformation, a rule p is given by a span (L
l ← K r → R)
with graphs L, K, and R, called, respectively, left-hand side, interface, and right-hand side, and some kind of monomorphisms (typically, inclusions) l and r. A graph transformation system (GTS) is a pair (P, π) where P is a set of rule names and π is a function
mapping each rule name p into a rule L l ←-K r -→ R.
An application of a rule p : 1) and ( 2), which are pushouts in the corresponding graph category, leading to a direct transformation G p,m =⇒ H.
L l ←-K r -→ R to a graph G via a match m : L → G is constructed as two gluings (
p : L m (1) K l o o r / / (2) R G D o o / / H
We only consider injective matches, that is, monomorphisms. If the matching m is understood, a DPO transformation step G p,m =⇒ H will be simply written
G p =⇒ H. A transformation sequence ρ = ρ 1 . . . ρ n : G ⇒ * H via rules p 1 , . . . , p n is a sequence of transformation steps ρ i = (G i pi,mi ==⇒ H i ) such that G 1 = G, H n = H,
and consecutive steps are composable, that is, G i+1 = H i for all 1 ≤ i < n. The category of transformation sequences over an adhesive category C, denoted by Trf(C), has all graphs in |C| as objects and all transformation sequences as arrows.
Transformation rules may have application conditions. We consider rules of the form (L
l ←-K r -→ R, ac), where (L l ←-K r -→ R) is a normal
rule and ac is a (nested) application condition on L. Application conditions may be positive or negative (see Figure 8). Positive application conditions have the form ∃a, for a monomorphism a : L → C, and demand a certain structure in addition to L. Negative application conditions of the form a forbid such a structure. A match m : L → G satisfies a positive application condition ∃a if there is a monomorphism q : C → G satisfying q • a = m. A matching m satisfies a negative application condition a if there is no such monomorphism. Given an application condition ∃a or a, for a monomorphism a : L → C, another application condition ac can be established on C, giving place to nested application conditions [START_REF] Habel | Correctness of high-level transformation systems relative to nested conditions[END_REF]. For a basic application condition ∃(a, ac C ) on L with an application condition ac C on C, in addition to the existence of q it is required that q satisfies ac C . We write m |= ac if m satisfies ac. ac C ∼ = ac C denotes the semantical equivalence of ac C and ac C on C.
C q L a o o m K l o o r / / R G D o o / / H (a) Positive application condition C / q L a o o m K l o o r / / R G D o o / / H (b) Negative application condition
To improve readability, we assume projection functions ac, lhs and rhs, returning, respectively, the application condition, the left-hand side and the right-hand side of a rule. Thus, given a rule r = (L l ←-K r -→ R, ac), ac(r) = ac, lhs(r) = L, and rhs(r) = R.
Given an application condition ac L on L and a monomorphism t : L → L , then there is an application condition Shift(t, ac L ) on L such that for all m : [START_REF] Parisi-Presicce | Transformations of graph grammars[END_REF] a notion of rule morphism very similar to the one below, although we consider rules with application conditions, and require the commuting squares to be pullbacks.
L → G, m |= Shift(t, ac L ) ↔ m = m • t |= ac L . ac L L t / / m L m Shift(t, ac L ) G Parisi-Presicce proposed in
Definition 3. (Rule morphism) Given transformation rules p i = (L i li ← K i ri → R i , ac i ), for i = 0, 1, a rule morphism f : p 0 → p 1 is a tuple f = (f L , f K , f R ) of graph mono- morphisms f L : L 0 → L 1 , f K : K 0 → K 1 ,
and f R : R 0 → R 1 such that the squares with the span morphisms l 0 , l 1 , r 0 , and r 1 are pullbacks, as in the diagram below, and such that ac 1 ⇒ Shift(f L , ac 0 ).
p 0 : f ac 0 L 0 f L pb K 0 l0 o o r0 / / f K pb R 0 f R p 1 : ac 1 L 1 K 1 l1 o o r1 / / R 1
The requirement that the commuting squares are pullbacks is quite natural from an intuitive point of view: the intuition of morphisms is that they should preserve the "structure" of objects. If we think of rules not as a span of monomorphisms, but in terms of their intuitive semantics (i.e., L\K is what should be deleted from a given graph, R\K is what should be added to a given graph and K is what should be preserved), then asking that the two squares are pullbacks means, precisely, to preserve that structure. I.e., we preserve what should be deleted, what should be added and what must remain invariant. Of course, pushouts also preserve the created and deleted parts. But they reflect this structure as well, which we do not want in general.
Fact 1 With componentwise identities and composition, rule morphisms define the category Rule .
Proof Sketch. Follows trivially from the fact that ac ∼ = Shift(id L , ac), pullback composition, and that given morphisms f • f such that
p 0 : f ac 0 L 0 f L pb K 0 l0 o o r0 / / pb R 0 p 1 : f ac 1 L 1 f L pb K 1 l1 o o r1 / / pb R 1 p 2 : ac 2 L 2 K 2 l2 o o r2 / / R 2 then we have Shift(f L , Shift(f L , ac 0 )) ∼ = Shift(f L • f L , ac 0 ).
A key concept in the constructions in the following section is that of rule amalgamation [START_REF] Boehm | Amalgamation of graph transformations with applications to synchronization[END_REF][START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. The amalgamation of two rules p 1 and p 2 glues them together into a single rule p to obtain the effect of the original rules. I.e., the simultaneous application of p 1 and p 2 yields the same successor graph as the application of the amalgamated rule p. The possible overlapping of rules p 1 and p 2 is captured by a rule p 0 and rule morphisms f : p 0 → p 1 and g : p 0 → p 2 . 2) and (3) are pushouts, l and r are induced by the universal property of (2) so that all subdiagrams commute, and ac = Shift( f L , ac 2 ) ∧ Shift( g L , ac 1 ).
ac 0 L 0 f L | | g L " " (1)
K 0 } } " " l0 o o r0 / / (2) R 0 ~! ! (3)
ac 2 L 2 f L | | K 2 } } l2 o o r2 / / R 2 } } ac 1 L 1 g L " " K 1 ! ! l1 o o r1 / / R 1 ac L K l o o r / / R
Notice that in the above diagram all squares are either pushouts or pullbacks (by the van Kampen property) which means that all their arrows are monomorphisms (by being an adhesive HLR category).
We end this section by introducing the notion of rule identity.
Definition 5. (Rule-identity morphism) Given graph transformation rules
p i = (L i li ←-K i ri -→ R i , ac i )
, for i = 0, 1, and a rule morphism f : p 0 → p 1 , with f = (f L , f K , f R ), p 0 and p 1 are said to be identical, denoted p 0 ≡ p 1 , if f L , f K , and f R are identity morphisms and ac 0 ∼ = ac 1 .
Typed graph transformation systems
A (directed unlabeled) graph G = (V, E, s, t) is given by a set of nodes (or vertices) V , a set of edges E, and source and target functions s, t :
E → V . Given graphs G i = (V i , E i , s i , t i ), with i = 1, 2, a graph homomorphism f : G 1 → G 2 is a pair of functions (f V : V 1 → V 2 , f E : E 1 → E 2 ) such that f V • s 1 = s 2 • f E and f V • t 1 = t 2 • f E .
With componentwise identities and composition this defines the category Graph.
Given a distinguished graph TG, called type graph, a TG-typed graph (G, g G ), or simply typed graph if TG is known, consists of a graph G and a typing homomorphism g G : G → T G associating with each vertex and edge of G its type in TG. However, to enhance readability, we will use simply g G to denote a typed graph (G, g G ), and when G2 g 2
G1
k : :
g 1 $ $ TG f / / TG (a) Forward retyping functor. G2 g 2 / / G 2 g 2
G1
k : : the typing morphism g G can be considered implicit, we will often refer to it just as G. A TG-typed graph morphism between TG-typed graphs (G i , g i :
g 1 $ $ / / G 1 k : : g 1 $ $ TG f / / TG (b) Backward retyping functor.
G i → T G), with i = 1, 2, denoted f : (G 1 , g 1 ) → (G 2 , g 2 ) (or simply f : g 1 → g 2 ), is a graph morphism f : G 1 → G 2 which preserves types, i.e., g 2 • f = g 1 .
Graph TG is the category of TG-typed graphs and TG-typed graph morphisms, which is the comma category Graph over TG.
If the underlying graph category is adhesive (resp., adhesive HLR, weakly adhesive) then so are the associated typed categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF], and therefore all definitions in Section 3.1 apply to them. A TG-typed graph transformation rule is a span p = L l ← K r → R of injective TG-typed graph morphisms and a (nested) application condition on L. Given TG-typed graph transformation rules
p i = (L i li ← K i ri → R i , ac i ), with i = 1, 2, a typed rule morphism f : p 1 → p 2 is a tuple (f L , f K , f R ) of TG-typed
graph monomorphisms such that the squares with the span monomorphisms l i and r i , for i = 1, 2, are pullbacks, and such that ac 2 ⇒ Shift(f L , ac 1 ). TG-typed graph transformation rules and typed rule morphisms define the category Rule TG , which is the comma category Rule over TG.
Following [START_REF] Corradini | The category of typed graph grammars and its adjunctions with categories of derivations[END_REF][START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF], we use forward and backward retyping functors to deal with graphs over different type graphs. A graph morphism f : TG → TG induces a forward retyping functor f > : Graph TG → Graph TG , with f > (g 1 ) = f • g 1 and f > (k : g 1 → g 2 ) = k by composition, as shown in the diagram in Figure 9(a). Similarly, such a morphism f induces a backward retyping functor f < : Graph TG → Graph TG , with f < (g 1 ) = g 1 and f < (k : g 1 → g 2 ) = k : g 1 → g 2 by pullbacks and mediating morphisms as shown in the diagram in Figure 9(b). Retyping functors also extends to application conditions and rules, so we will write things like f > (ac) or f < (p) for some application condition ac and production p. Notice, for example, that given a graph morphism f : TG → TG , the forward retyping of a production p = (L
l ← K r → R, ac) over TG is a production f > TG (p) = (f > TG (L) f > TG (l) ←--f > TG (K) f > TG (r) --→ f > TG (R), f > TG (ac)) over TG , defining an induced morphism f p : p → f > TG (p) in Rule.
Since f p is a morphism between rules in |Rule TG | and |Rule TG |, it is defined in Rule, forgetting the typing. Notice also that f > TG (ac) ∼ = Shift(f p L , ac). As said above, to improve readability, if G → TG is a TG-typed graph, we sometimes refer to it just by its typed graph G, leaving TG implicit. As a consequence, if f : TG → TG is a morphism, we may refer to the TG -typed graph f > (G), even if this may be considered an abuse of notation.
The following results will be used in the proofs in the following section. Proposition 1. (From [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF]) (Adjunction) Forward and backward retyping functors are left and right adjoints; i.e., for each f : TG → TG we have f > f < : TG → TG . Remark 1. Given a graph monomorphism f : TG → TG , for all k : G 1 → G 2 in Graph TG , the following diagram is a pullback:
f < (G 1 ) f < (k ) / / pb f < (G 2 ) G 1 k / / G 2
This is true just by pullback decomposition.
Remark 2. Given a graph monomorphism f : TG → TG , and given monomorphisms k : G 0 → G 1 and h : G 0 → G 2 in Graph TG , if the following diagram on the left is a pushout then the diagram on the right is also a pushout:
G 0 k / / h po G 1 h f < (G 0 ) f < (k ) / / f < (h ) po f < (G 1 ) f < ( h ) G 2 k / / G f < (G 2 ) f < ( k ) / / f < ( G)
Notice that since in an adhesive HLR category all pushouts along M -morphisms are van Kampen squares, the commutative square created by the pullbacks and induced morphisms by the backward retyping functor imply the second pushout.
f < (G 1 ) / / G 1 f < (G 0 ) / / 4 4 G 0 4 4 f < ( G ) / / | | G | | f < (G 2 ) / / 5 5 G 2 5 5
T G / / T G Remark 3. Given a graph monomorphism f : TG → TG , and given monomorphisms k : G 0 → G 1 and h : G 0 → G 2 in Graph TG , if the diagram on the left is a pushout (resp., a pullback) then the diagram on the right is also a pushout (resp., a pullback):
G 0 k / / h G 1 h f > (G 0 ) f > (k) / / f > (h) f > (G 1 ) f > ( h) G 2 k / / G f > (G 2 ) f > ( k) / / f > ( G)
Remark 4. Given a graph monomorphism f : TG → TG , and a TG -typed graph transformation rule p = (L
l ← K r → R, ac), if a matching m : L → C satisfies ac, that is, m |= ac, then, f < (m) |= f < (ac).
A typed graph transformation system over a type graph TG, is a graph transformation system where the given graph transformation rules are defined over the category of TGtyped graphs. Since in this paper we deal with GTSs over different type graphs, we will make explicit the given type graph. This means that, from now on, a typed GTS is a triple (TG, P, π) where TG is a type graph, P is a set of rule names and π is a function mapping each rule name p into a rule (L
l ← K r → R, ac) typed over TG.
The set of transformation rules of each GTS specifies a behaviour in terms of the derivations obtained via such rules. A GTS morphism defines then a relation between its source and target GTSs by providing an association between their type graphs and rules. Definition 6. (GTS morphism) Given typed graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 , with f = (f TG , f P , f r ), is given by a morphism f TG : TG 0 → TG 1 , a surjective mapping f P : P 1 → P 0 between the sets of rule names, and a family of rule morphisms
f r = {f p : f > T G (π 0 (f P (p))) → π 1 (p)} p∈P1 .
Given a GTS morphism f : GTS 0 → GTS 1 , each rule in GTS 1 extends a rule in GTS 0 . However if there are internal computation rules in GTS 1 that do not extend any rule in GTS 0 , we can always consider that the empty rule is included in GTS 0 , and assume that those rules extend the empty rule.
Please note that rule morphisms are defined on rules over the same type graph (see Definition 3). To deal with rules over different type graphs we retype one of the rules to make them be defined over the same type graph.
Typed GTSs and GTS morphisms define the category GTS. The GTS amalgamation construction provides a very convenient way of composing GTSs. Definition 7. (GTS Amalgamation). Given transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , the amalgamated GTS GTS = GTS 1 + GTS0 GTS 2 is the GTS ( TG, P , π) constructed as follows. We first construct the pushout of typing graph morphisms f TG : TG 0 → TG 1 and g TG : TG 0 → TG 2 , obtaining morphisms f TG : TG 2 → TG and g TG : TG 1 → TG. The pullback of set morphisms f P : P 1 → P 0 and g P : P 2 → P 0 defines morphisms f P : P → P 2 and g P : P → P 1 . Then, for each rule p in P , the rule π(p) is defined as the amalgamation of rules f > TG (π 2 ( f P (p))) and g > TG (π 1 ( g P (p))) with respect to the kernel rule f > TG (g > TG (π 0 (g P ( f P (p))))).
GTS 0 g # # f { { GTS 1 g # # GTS 2 f { { GTS
Among the different types of GTS morphisms, let us now focus on those that reflect behaviour. Given a GTS morphism f : GTS 0 → GTS 1 , we say that it reflects behaviour if for any derivation that may happen in GTS 1 there exists a corresponding derivation in GTS 0 . Definition 8. (Behaviour-reflecting GTS morphism) Given graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 is behaviourreflecting if for all graphs G, H in |Graph TG1 |, all rules p in P 1 , and all matches
m : lhs(π 1 (p)) → G such that G p,m =⇒ H, then f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H) in GTS 0 .
Morphisms between GTSs that only add to the transformation rules elements not in their source type graph are behaviour-reflecting. We call them extension morphisms.
Definition 9. (Extension GTS morphism) Given graph transformation systems GTS
i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 , with f = (f TG , f P , f r ), is an extension morphism if f TG is a monomorphism and for each p ∈ P 1 , π 0 (f P (p)) ≡ f < TG (π 1 (p)).
That an extension morphism is indeed a behaviour-reflecting morphism is shown by the following lemma.
Lemma 1. All extension GTS morphisms are behaviour-reflecting.
Proof Sketch. Given graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, let a GTS morphism f : GTS 0 → GTS 1 be an extension morphism. Then, we have to prove that for all graphs G, H in |Graph TG1 |, all rules p in P 1 , and all matches
m : lhs(π 1 (p)) → G, if G p,m =⇒ H then f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H). Assuming transformation rules π 1 (p) = (L 1 l1 ←-K 1 r1 -→ R 1 , ac 1 ) and π 0 (f P (p)) = (L 0 l0 ←-K 0 r0
-→ R 0 , ac 0 ), and given the derivation
ac 1 L 1 m po K 1 l1 o o r1 / / po R 1 G D o o / / H
since f is an extension morphism, and therefore f TG is a monomorphism, and l 1 and m are also monomorphisms, by Remark 2 and Definition 8, we have the diagram
ac 0 ∼ = L 0 K 0 l1 o o r1 / / R 0 f < TG (ac 1 ) f < TG (L 1 ) f < TG (m) po f < TG (K 1 ) f < TG (l1) o o f < TG (r1) / / po f < TG (R 1 ) f < TG (G) f < TG (D) o o / / f < TG (H)
Then, given the pushouts in the above diagram and Remark 4, we have the derivation
f < TG (G) f P (p),f < TG (m) ======⇒ f < TG (H).
Notice that Definition 9 provides specific checks on individual rules. In the concrete case we presented in Section 2, the inclusion morphism between the model of an observer DSML, M Obs , and its parameter sub-model M Par , may be very easily checked to be an extension, by making sure that the features "added" in the rules will be removed by the backward retyping functor. In this case the check is particularly simple because of the subgraph relation between the type graphs, but for a morphism as the binding morphism between M Par and the DSML of the system at hand, M DSML , the check would also be relatively simple. Basically, the backward retyping of each rule in M DSML , i.e., the rule resulting from removing all elements not target of the binding map, must coincide with the corresponding rule, and the application conditions must be equivalent.
Since the amalgamation of GTSs is the basic construction for combining them, it is very important to know whether the reflection of behaviour remains invariant under amalgamations.
Proposition 2. Given transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and the amalgamation GTS = GTS 1 + GTS0 GTS 2 of GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , if f TG is a monomorphism and g is an extension morphism, then g is also an extension morphism.
GTS 0 f / / g GTS 1 g GTS 2 f / / GTS Proof Sketch.
Let it be GTS = ( TG, P , π). We have to prove that for each p ∈ P , π 1 ( g P (p)) ≡ g < TG ( π(p)). By construction, rule π(p) is obtained from the amalgamation of rules g > TG (π 1 ( g P ( p))) and f > TG (π 2 ( f P ( p))). More specifically, without considering application conditions by now, the amalgamation of such rules is accomplished by constructing the pushouts of the morphisms for the left-hand sides, for the kernel graphs, and for the right-hand sides.
By Remark 2, we know that if the diagram
f > TG (g > TG (L 0 )) ∼ = g > TG (f > TG (L 0 )) f > TG (g f P (p) L ) g > TG (f g P (p) L ) / / g > TG (L 1 ) g p L f > TG (L 2 ) f p L / / L
is a pushout, then if we apply the backward retyping functor g < T G to all its components (graphs) and morphisms, the resulting diagram is also a pushout.
g < TG ( g > TG (f > TG (L 0 ))) g < TG ( f > TG (g f P (p) L )) g < TG ( g > TG (f g P (p) L )) / / g < TG ( g > TG (L 1 )) g < TG ( g p L ) g < TG ( f > TG (L 2 )) g < TG ( f p L ) / / g < TG ( L)
Because, by Proposition 1, for every f : TG → TG and every TG-type graph G and morphism g, since f T G is assumed to be a monomorphism,
f < (f > (G)) = G and f < (f > (g)) = g, we have g < TG ( g > TG (f > TG (L 0 )) = f > TG (L 0 ), g < TG ( g > TG (f g P (p) L )) = f g P (p) L , and g < TG ( g > TG (L 1 )) = L 1 . By pullback decomposition in the corresponding retyping diagram, g < TG ( f > TG (L 2 )) = f > TG (g < TG (L 2 )
). Thus, we are left with this other pushout:
f > TG (L 0 ) f > TG (g < TG (g f P (p) L
))
f g P (p) L / / L 1 g < TG ( g p L ) f > TG (g < TG (L 2 )) g < TG ( f p L ) / / g < TG ( L) Since g is an extension, L 0 ∼ = g < TG (L 2 ), which, because f TG is a monomorphism, implies f > TG (L 0 ) ∼ = f > TG (g < TG (L 2 )
). This implies that g < TG ( L) ∼ = L 1 . Similar diagrams for kernel objects and right-hand sides lead to similar identity morphisms for them. It only remains to see that ac(π 1 ( g P (p))) ∼ = ac( g < TG ( π(p))). By the rule amalgamation construction, ac = f > TG (ac 2 ) ∧ g > TG (ac 1 ). Since g is an extension morphism, ac 2 ∼ = g > TG (ac 0 ). Then, ac ∼ = f > TG (g > TG (ac 0 )) ∧ g > TG (ac 1 ). For f , as for any other rule morphism, we have ac 1 ⇒ f > TG (ac 0 ). By the Shift construction, for any match m 1 :
L 1 → C 1 , m 1 |= ac 1 iff g > TG (m 1 ) |= g > TG (ac 1
) and, similarly, for any match m 0 :
L 0 → C 0 , m 0 |= ac 0 iff f > TG (m 0 ) |= f > TG (ac 0 ). Then, ac 1 ⇒ f > TG (ac 0 ) ∼ = g > TG (ac 1 ) ⇒ g > TG (f > TG (ac 0 )) ∼ = g > TG (ac 1 ) ⇒ f > TG (g > TG (ac 0 )).
And therefore, since ac = f > (g > TG (ac 0 )) ∧ g > TG (ac 1 ) and g > TG (ac 1 ) ⇒ f > TG (g > TG (ac 0 )), we conclude ac ∼ = g > TG (ac 1 ). When a DSL is extended with observers and other alien elements whose goal is to measure some property, or to verify certain invariant property, we need to guarantee that such an extension does not change the semantics of the original DSL. Specifically, we need to guarantee that the behaviour of the resulting system is exactly the same, that is, that any derivation in the source system also happens in the target one (behaviour preservation), and any derivation in the target system was also possible in the source one (behaviour reflection). The following definition of behaviour-protecting GTS morphism captures the intuition of a morphism that both reflects and preserves behaviour, that is, that establishes a bidirectional correspondence between derivations in the source and target GTSs. Definition 10. (Behaviour-protecting GTS morphism) Given typed graph transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, a GTS morphism f : GTS 0 → GTS 1 is behaviour-protecting if for all graphs G and H in |Graph TG1 |, all rules p in P 1 , and all matches m : lhs(π 1 (p)) → G, g < TG (G)
g P (p),g < TG (m) ======⇒ g < TG (H) ⇐⇒ G p,m =⇒ H
We find in the literature definitions of behaviour-preserving morphisms as morphisms in which the rules in the source GTS are included in the set of rules of the target GTS. Although these morphisms trivially preserve behaviour, they are not useful for our purposes. Works like [START_REF] Heckel | Horizontal and vertical structuring of typed graph transformation systems[END_REF] or [START_REF] Große-Rhode | Formal software specification with refinements and modules of typed graph transformation systems[END_REF], mainly dealing with refinements of GTSs, only consider cases in which GTSs are extended by adding new transformation rules. In our case, in addition to adding new rules, we are enriching the rules themselves.
The main result in this paper is related to the protection of behaviour, and more precisely on the behaviour-related guarantees on the induced morphisms.
Theorem 1. Given typed transformation systems GTS i = (TG i , P i , π i ), for i = 0, 1, 2, and the amalgamation GTS = GTS 1 + GTS0 GTS 2 of GTS morphisms f : GTS 0 → GTS 1 and g : GTS 0 → GTS 2 , if f is a behaviour-reflecting GTS morphism, f TG is a monomorphism, and g is an extension and behaviour-protecting morphism, then g is behaviourprotecting as well.
GTS 0 f / / g GTS 1 g GTS 2 f
/ / GTS Proof Sketch. Since g is an extension morphism and f TG is a monomorphism, by Proposition 2, g is also an extension morphism, and therefore, by Lemma 1, also behaviour-reflecting. We are then left with the proof of behaviour preservation.
Given a derivation G 1
p1,m1 ==⇒ H 1 in GTS 1 , with π 1 (p 1 ) = (L 1 l1 ←-K 1 r1
-→ R 1 , ac 1 ), since f : GTS 0 → GTS 1 is a behaviour-reflecting morphism, there is a corresponding derivation in GTS 0 . Specifically, the rule f P (p 1 ) can be applied on f < TG (G 1 ) with match f < TG (m 1 ) satisfying the application condition of production π 0 (f P (p 1 )), and resulting in a graph f < TG (H 1 ).
f < TG (G 1 ) f P (p1),f < TG (m1) =======⇒ f < TG (H 1 )
Moreover, since g is a behaviour-protecting morphism, this derivation implies a corresponding derivation in GTS 2 .
By the amalgamation construction in Definition 7, the set of rules of GTS includes, for each p in P , the amalgamation of (the forward retyping of) the rules π 1 ( g
P (p)) = (L 1 l1 ←-K 1 r1 -→ R 1 , ac 1 ) and π 2 ( f P (p)) = (L 2 l2 ←-K 2 r2 -→ R 2 , ac 2 ), with kernel rule π 0 (f P ( g P (p))) = π 0 (g P ( f P (p))) = (L 0 l0 ←-K 0 r0 -→ R 0 , ac 0 ).
First, notice that for any TG graph G, G is the pushout of the graphs g < TG (G), f < TG (G) and f < TG ( g < TG (G)) (with the obvious morphisms). This can be proved using a van Kampen square, where in the bottom we have the pushout of the type graphs, the vertical faces are the pullbacks defining the backward retyping functors and on top we have that pushout.
Thus, for each graph G in GTS, if a transformation rule in GTS 1 can be applied on g < TG (G), the corresponding transformation rule should be applicable on G in GTS. The following diagram focus on the lefthand sides of the involved rules.
f > TG (g > TG (L 0 )) = g > TG (f > TG (L 0 )) g p 2 L t t f p 1 L * * f > TG (g > TG (m0))= g > TG (f > TG (m0)) f > TG (L 2 ) f p L * * f > TG (m2) f > TG (g > TG (g < TG ( f < TG (G)))) = g > TG (f > TG (f < TG ( g < TG (G)))) t t * * g > TG (L 1 ) g p L t t g > TG (m1) f > TG ( f < TG (G)) g2 + + L m g > TG ( g < TG (G)) g1 s s G
As we have seen above, rules g P (p), f P (p), and f P (g P (p)) = g P (f P (p)) are applicable on their respective graphs using the matchings depicted in the above diagram. Since, by the amalgamation construction, the top square is a pushout, and
g 1 • g > TG (m 1 )•f p1 L = g 2 • f > TG (m 2 )•g p2 L , then there is a unique morphism m : L → G making g 1 • g > TG (m 1 ) = m • g p L and g 2 • f > TG (m 2 ) = m • f p L
. This m will be used as matching morphism in the derivation we seek.
By construction, the application condition ac of the amalgamated rule p is the conjunction of the shiftings of the application conditions of g P (p) and f P (p). Then, since We can then conclude that rule p is applicable on graph G with match m satisfying its application condition ac. Indeed, given the rule π
(p) = ( L l ←-K r -→ R, ac) we have the following derivation: ac L m po K l1 o o r1 / / po R G D o o / / H
Let us finally check then that D and H are as expected. To improve readability, in the following diagrams we eliminate the retyping functors. For instance, for the rest of the theorem
L 0 denotes f > TG (g > TG (L 0 )) = g > TG (f > TG (L 0 )), L 1 denotes g > TG (L 1 ), etc.
First, let us focus on the pushout complement of l : K → L and m : L → G. Given rules g P (p), f P (p), and f P (g P (p)) = g P (f P (p)) and rule morphisms between them as above, the following diagram shows both the construction by amalgamation of the morphism l : K → L, and the construction of the pushout complements for morphisms l i and m i , for i = 0 . . . 2.
L 0 v v m0 K 0 v v l0 o o L 2 m2 K 2 l2 o o L 1 v v m1 K 1 v v l1 o o L m K l o o G 0 v v D 0 v v l0 o o G 2 D 2 l2 o o G 1 v v D 1 u u l1 o o G D l o o r r X
By the pushout of D 0 → D 1 and D 0 → D 2 , and given the commuting subdiagram
D 0 v v G 2 D 2 o o G 1 x x D 1 u u o o G D o o
there exists a unique morphism D → G making the diagram commute. This D is indeed the object of the pushout complement we were looking for. By the pushout of K 0 → K 1 and K 0 → K 2 , there is a unique morphism from K to D making the diagram commute. We claim that these morphisms K → D and D → G are the pushout complement of K → L and L → G. Suppose that the pushout of K → L and K → D were L → X and D → X for some graph X different from G. By the pushout of K 1 → D 1 and K 1 → L 1 there is a unique morphism G 1 → X making the diagram commute. By the pushout of K 2 → D 2 and K 2 → L 2 there is a unique morphism G 2 → X making the diagram commute. By the pushout of G 0 → G 1 and G 0 → G 2 , there is a unique morphism G → X. But since L → X and D → X are the pushout of K → L and K → D, there is a unique morphism X → G making the diagram commute. Therefore, we can conclude that X and G are isomorphic. Theorem 1 provides a checkable condition for verifying the conservative nature of an extension in our example, namely the monomorphism M Par → M Obs being a behaviour-protecting and extension morphism, M Par → M DSML a behaviour-reflecting morphism, and MM Par → MM DSML a monomorphism.
In the concrete application domain we presented in Section 2 this result is very important. Notice that the parameter specification is a sub-specification of the observers DSL, making it particularly simple to verify that the inclusion morphism is an extension and also that it is behaviour-protecting. The check may possibly be reduced to checking that the extended system has no terminal states not in its parameter sub-specification. Application conditions should also be checked equivalent. Forbidding the specification of application conditions in rules in the observers DSL may be a practical shortcut.
The morphism binding the parameter specification to the system to be analysed can very easily be verified behaviour-reflecting. Once the morphism is checked to be a monomorphism, we just need to check that the rules after applying the backward retyping morphism exactly coincide with the rules in the source GTS. Checking the equivalence of the application conditions may require human intervention. Notice that with appropriate tools and restrictions, most of these restrictions, if not all, can be automatically verified. We may even be able to restrict editing capabilities so that only correct bindings can be specified.
Once the observers DSL are defined and checked, they can be used as many times as wished. Once they are to be used, we just need to provide the morphism binding the parameter DSL and the target system. As depicted in Figures 6 for the metamodels the binding is just a set of pairs, which may be easily supported by appropriate graphical tools. The binding must be completed by similar correspondences for each of the rules. Notice that once the binding is defined for the metamodels, most of the rule bindings can be inferred automatically.
Finally, given the appropriate morphisms, the specifications may be merged in accordance to the amalgamation construction in Definition 7. The resulting system is guaranteed to both reflect and preserve the original behaviour by Theorem 1.
Conclusions and future work
In this paper, we have presented formal notions of morphisms between graph transformation systems (GTSs) and a construction of amalgamations in the category of GTSs and GTS morphisms. We have shown that, given certain conditions on the morphisms involved, such amalgamations reflect and protect behaviour across the GTSs. This result is useful because it can be applied to define a notion of conservative extensions of GTSs, which allow adding spectative behaviour (cf. [START_REF] Katz | Aspect categories and classes of temporal properties[END_REF]) without affecting the core transformation behaviour expressed in a GTS.
There are of course a number of further research steps to be taken-both in applying the formal framework to particular domains and in further development of the framework itself. In terms of application, we need to provide methods to check the preconditions of Theorem 1, and if possible automatically checkable conditions that imply these, so that behaviour protection of an extension can be checked effectively. This will enable the development of tooling to support the validation of language or transformation compositions. On the part of the formal framework, we need to study relaxations of our definitions so as to allow cases where there is a less than perfect match between the base DSML and the DSML to be woven in. Inspired by [START_REF] Katz | Aspect categories and classes of temporal properties[END_REF], we are also planning to study different categories of extensions, which do not necessarily need to be spectative (conservative), and whether syntactic characterisations exist for them, too.
Fig. 1 .
1 Fig. 1. Production Line (a) metamodel and (b) concrete syntax (from [45]).
Fig. 2 .
2 Fig. 2. Example of production line configuration.
Fig. 3 .
3 Fig. 3. Assemble rule indicating how a new hammer is assembled (from [45]).
Sample response time rule.
Fig. 4 .
4 Fig. 4. Generic model of response time observer.
Fig. 5 .
5 Fig. 5. Amalgamation in the category of DSMLs and DSML morphisms.
Fig. 6 .
6 Fig. 6. Weaving of metamodels (highlighting added for illustration purposes).
Fig. 7 .
7 Fig. 7. Amalgamation of the Assemble and RespTime rules.
Fig. 8 .
8 Fig. 8. Positive and negative application conditions.
Definition 4 .
4 (Rule amalgamation) Given transformation rules p i : (L i li← K i ri → R i , ac i ), for i = 0,1, 2, and rule morphisms f : p 0 → p 1 and g : p 0 → p 2 , the amalgamated production p 1 + p0 p 2 is the production (L l ← K r → R, ac) in the diagram below, where subdiagrams (1), (
Fig. 9 .
9 Fig. 9. Forward and backward retyping functors.
m 1
1 |= ac 1 ⇐⇒ m |= Shift( g p L , ac 1 ) and m 2 |= ac 2 ⇐⇒ m |= Shift( f p L , ac 2 ), and therefore m 1 |= ac 1 ∧ m 2 |= ac 2 ⇐⇒ m |= ac.
By a similar construction for the righthand sides we get the pushout K
Please, notice the use of the cardinality constraint 1.. * in the rule in Figure4(c). It is out of the scope of this paper to discuss the syntactical facilities of the e-Motions system.
Acknowledgments
We are thankful to Andrea Corradini for his very helpful comments. We would also like to thank Javier Troya and Antonio Vallecillo for fruitful discussions and previous and on-going collaborations this work relies on. This work has been partially supported by CICYT projects TIN2011-23795 and TIN2007-66523, and by the AGAUR grant to the research group ALBCOM (ref. 00516). | 60,060 | [
"1003768",
"872871",
"950126"
] | [
"198404",
"85878",
"327716"
] |
01485978 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485978/file/978-3-642-37635-1_4_Chapter.pdf | Irina Mȃriuca Asȃvoae
Frank De Boer
Marcello M Bonsangue
email: marcello@liacs.nl
Dorel Lucanu
Jurriaan Rot
email: jrot@liacs.nl
Bounded Model Checking of Recursive Programs with Pointers in K
Keywords: pushdown systems, model checking, the K framework
We present an adaptation of model-based verification, via model checking pushdown systems, to semantics-based verification. First we introduce the algebraic notion of pushdown system specifications (PSS) and adapt a model checking algorithm for this new notion. We instantiate pushdown system specifications in the K framework by means of Shylock, a relevant PSS example. We show why K is a suitable environment for the pushdown system specifications and we give a methodology for defining the PSS in K. Finally, we give a parametric K specification for model checking pushdown system specifications based on the adapted model checking algorithm for PSS.
Introduction
The study of computation from a program verification perspective is an effervescent research area with many ramifications. We take into consideration two important branches of program verification which are differentiated based on their perspective over programs, namely model-based versus semantics-based program verification.
Model-based program verification relies on modeling the program as some type of transition system which is then analyzed with specific algorithms. Pushdown systems are known as a standard model for sequential programs with recursive procedures. Intuitively, pushdown systems are transition systems with a stack of unbounded size, which makes them strictly more expressive than finite
The research of this author has been partially supported by Project POSDRU/88/ 1.5/S/47646 and by Contract ANCS POS-CCE, O2.1.2, ID nr 602/12516, ctr.nr 161/15.06.2010 (DAK). The research of this author has been funded by the Netherlands Organisation for state systems. More importantly, there exist fundamental decidability results for pushdown systems [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF] which enable program verification via model checking [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF].
Semantics-based program verification relies on specification of programming language semantics and derives the program model from the semantics specification. For example, the rewriting logic semantics project [START_REF] Meseguer | The Rewriting Logics Semantics Project[END_REF] studies the unification of algebraic denotational semantics with operational semantics of programming languages. The main incentive of this semantics unification is the fact that the algebraic denotational semantics is executable via tools like the Maude system [10], or the K framework [START_REF] Roşu | An Overview of the K Semantic Framework[END_REF]. As such, a programming language (operational) semantics specification implemented with these tools becomes an interpreter for programs via execution of the semantics. The tools come with model checking options, so the semantics specification of a programming language have for-free program verification capabilities.
The current work solves the following problem in the rewriting logic semantics project: though the semantics expressivity covers a quite vast and interesting spectrum of programming languages, the offered verification capabilities via model checking are restricted to finite state systems. Meanwhile, the fundamental results from pushdown systems provide a strong incentive for approaching the verification of this class of infinite transition systems from a semantics-based perspective. As such, we introduce the notion of pushdown system specifications (PSS), which embodies the algebraic specification of pushdown systems. Furthermore, we adapt a state-of-the-art model checking algorithm for pushdown systems [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] to work for PSS and present an algebraic specification of this algorithm implemented in the K tool [START_REF] Roşu | K-Maude: A Rewriting Based Tool for Semantics of Programming Languages[END_REF]. Our motivating example is Shylock, a programming language with recursive procedures and pointers, introduced by the authors in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF].
Related work. K is a rewriting logic based framework for the design, analysis, and verification of programming languages, originating in the rewriting logic semantics project. K specifies transition systems and is built upon a continuationbased technique and a series of notational conventions to allow for more compact and modular executable programming language definitions. Because of the continuation-based technique, K specifications resemble PSS where the stack is the continuation. The most complex and thorough K specification developed so far is the C semantics [START_REF] Ellison | An Executable Formal Semantics of C with Applications[END_REF].
The standard approach to model checking programs, used for K specifications, involves the Maude LTL model checker [START_REF] Eker | The Maude LTL Model Checker[END_REF] which is inherited from the Maude back-end of the K tool. The Maude LTL checker, by comparison with other model checkers, presents a great versatility in defining the state properties to be verified (these being given as a rewrite theory). Moreover, the actual model checking is performed on-the-fly, so that the Maude LTL checker can verify systems with states that involve data in types of infinite cardinality under the assumption of a finite reachable state space. However, this assumption is infringed by PSS because of the stack which is allowed to grow unboundedly, hence the Maude LTL checker cannot be used for PSS verification.
The Moped tool for model checking pushdown systems was successfully used for a subset of C programs [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] and was adapted for Java with full recursion, but with a fixed-size number of objects, in jMoped [START_REF] Esparza | A BDD-Based Model Checker for Recursive Programs[END_REF]. The WPDS ++ tool [START_REF] Kidd | WPDS++: A C++ Library for Weighted Pushdown Systems[END_REF] uses a weighted pushdown system model to verify x86 executable code. However, we cannot employ any of these dedicated tools for model checking pushdown systems because we work at a higher level, namely with specifications of pushdown system where we do not have the actual pushdown system.
Structure of the paper. In Section 2 we introduce pushdown system specifications and an associated invariant model checking algorithm. In Section 3 we introduce the K framework by showing how Shylock's PSS is defined in K. In Section 4 we present the K specification of the invariant model checking for PSS and show how a certain type of bounded model checking can be directly achieved.
Model Checking Specifications of Pushdown Systems
In this section we discuss an approach to model checking pushdown system specifications by adapting an existing model checking algorithm for ordinary pushdown systems. Recall that a pushdown system is an input-less pushdown automaton without acceptance conditions. Basically, a pushdown system is a transition system equipped with a finite set of control locations and a stack. The stack consists of a non-a priori bounded string over some finite stack alphabet [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF][START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF]. The difference between a pushdown system specification and an ordinary pushdown system is that the former uses production rules with open terms for the stack and control locations. This allows for a more compact representation of infinite systems and paves the way for applications of model checking to recursive programs defined by means of structural operational semantics.
We assume a countably infinite set of variables Var = {v 1 , v 2 , . . .}. A signature Σ consists of a finite set of function symbols g 1 , g 2 , . . ., each with a fixed arity ar(g 1 ), ar(g 2 ), . . .. Function symbols with arity 0 are called constants. The set of terms, denoted by T Σ (Var ) and typically ranged over by s and t, is inductively defined from the set of variables Var and the signature Σ. A substitution σ replaces variables in a term with other terms. A term s can match term t if there exists a substitution σ such that σ(t) = s. A term t is said to be closed if no variables appear in t, and we use the convention that these terms are denoted as "hatted" terms, i.e., t.
A pushdown system specification (PSS) is a tuple (Σ, Ξ, Var , ∆) where Σ and Ξ are two signatures, Var is a set of variables, and ∆ is a finite set of production rules (defined below). Terms in T Σ (Var ) define control locations of a pushdown system, whereas terms in T Ξ (Var ) define the stack alphabet. A production rule in ∆ is defined as a formula of the form (s, γ) ⇒ (s , Γ ) , where s and s are terms in T Σ (Var ), γ is a term in T Ξ (Var ), and Γ is a finite (possibly empty) sequence of terms in T Ξ (Var ). The pair (s, γ) is the source of the rule, and (s , Γ ) is the target. We require for each rule that all variables appearing in the target are included in those of the source. A rule with no variables in the source is called an axiom. The notions of substitution and matching are lifted to sequences of terms and to formulae as expected.
Example 1. Let Var = {s, t, γ}, let Σ = {0, a, +} with ar(0) = ar(a) = 0 and ar(+) = 2, and let Ξ = {L, R} with ar(L) = ar(R) = 0. Moreover consider the following three production rules, denoted as a set by ∆:
(a, γ) ⇒ (0, ε) (s + t, L) ⇒ (s, R) (s + t, R) ⇒ (t, LR) .
Then (Σ, Ξ, Var , ∆) is a pushdown system specification. Given a pushdown system specification P = (Σ, Ξ, Var , ∆), a concrete configuration is a pair ŝ, Γ where ŝ is a closed term in T Σ (Var ) denoting the current control state, and Γ is a finite sequence of closed terms in T Ξ (Var ) representing the content of the current stack. A transition ŝ, γ • Γ -→ ŝ , Γ • Γ between concrete configurations is derivable from the pushdown system specification P if and only if there is a rule r = (s r , γ r ) ⇒ (s r , Γ r ) in ∆ and a substitution σ such that σ(s r ) = ŝ, σ(γ r ) = γ, σ(s r ) = ŝ and σ(Γ r ) = Γ . The above notion of pushdown system specification can be extended in the obvious way by allowing also conditional production rules and equations on terms.
Continuing on Example 1, we can derive the following sequence of transitions:
a + (a + a), R -→ a + a, LR -→ a, RR -→ 0, R .
Note that no transition is derivable from the last configuration 0, R .
A pushdown system specification P is said to be locally finite w.r.t. a concrete configuration ŝ, Γ , if the set of all closed terms appearing in the configurations reachable from ŝ, Γ by transitions derivable from the rules of P is finite. Note that this does not imply that the set of concrete configurations reachable from a configuration ŝ, Γ is finite, as the stack is not bounded. However all reachable configurations are constructed from a finite set of control locations and a finite stack alphabet. An ordinary finite pushdown system is thus a pushdown system specification which is locally finite w.r.t. a concrete initial configuration ĉ0 , and such that all rules are axioms, i.e., all terms appearing in the source and target of the rules are closed.
For example, if we add (s, L) ⇒ (s+a, L) to the rules of the pushdown system specification P defined in Example 1, then it is not hard to see that there are infinitely many different location reachable from a, L , meaning that P is not locally finite w.r.t. the initial configuration a, L . However, if instead we add the rule (s, L) ⇒ (s, LL) then all reachable configurations from a, L will only use a or 0 as control locations and L as the only element of the stack alphabet. In this case P is locally finite w.r.t. the initial configuration a, L .
A Model Checking Algorithm for PSS
Next we describe a model checking algorithm for (locally finite) pushdown system specifications. We adapt the algorithm for checking LTL formulae against pushdown systems, as presented in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], which, in turn, exploits the result from [START_REF] Bouajjani | Reachability Analysis of Pushdown Automata: Application to Model Checking[END_REF],
where it is proved that for any finite pushdown system the set R(ĉ 0 ) of all configurations reachable from the initial configuration ĉ0 is regular. The LTL model checking algorithm in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] starts by constructing a finite automaton which recognizes this set R(ĉ 0 ). This automaton has the property that ŝ, Γ ∈ R(ĉ 0 ) if the string Γ is accepted in the automaton, starting from ŝ.
According to [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], the automaton associated to R(ĉ 0 ), denoted by A post * , can be constructed in a forward manner starting with ĉ0 , as described in Fig. 1. We use the notation x ∈ T Σ (Var ) for closed terms representing control states in P, γ, γ1 , γ2 ∈ T Ξ (Var ) for closed terms representing stack letters, ŷx,γ for the new states of the A post * automaton, f for the final states in A post * , while ŷ, ẑ, û stand for any state in A post * . The transitions in A post * are denoted by ŷ γ ẑ or ŷ ε ẑ. The notation ŷ Γ ẑ, where Γ = γ1 ..γ n , stands for ŷ γ1 .. γn ẑ.
In Fig. 1 we present how the reachability algorithm in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] for generating A post * can be adjusted to invariant model checking pushdown system specifications. We emphasize that the transformation is minimal and consists in:
(a) The modification in the lines containing the code:
"for all ẑ such that x, γ → ẑ,ˆ is a rule in the pushdown system do" i.e., lines 9, 12, 15 in Fig. 1, where instead of rules in the pushdown system we use transitions derivable from the pushdown system specification as follows: "for all ẑ such that x, γ -→ ẑ,ˆ is derivable from P do" (b) The addition of lines 1, 10, 13, 16 where the state invariant φ is checked to hold in the newly discovered control state y.
This approach for producing the A post * in a "breadth-first" manner is particularly suitable for specifications of pushdown systems as we can use the newly discovered configurations to produce transitions based on ∆, the production rules in P. Note that we assume, without loss of generality, that the initial stack has one symbol on it. Note that in the algorithm Apost* of [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF], the set of states of the automaton is determined statically at the beginning. This is clearly not possible starting with a PSS, because this set is not known in advance, and could be infinite if the algorithm does not terminate. Hence, the states that are generated when needed, that is, in line 9, 12 and 15, where the derivable transitions are considered.
We give next some keynotes on the algorithm in Fig. 1. The "trans" variable is a set containing the transitions to be processed. Along the execution of the algorithm Apost*(φ, P), the transitions of the A post * automaton are incrementally deposited in the "rel" variable which is a set where we collect transitions in the A post * automaton. The outermost while is executed until the end, i.e., until "trans" is empty, only if all states satisfy the control state formula φ. Hence, the algorithm in Fig. 1 verifies the invariant φ. In case φ is a state invariant for the pushdown system specification, the algorithm collects in "rel" the entire automaton A post * . Otherwise, the algorithm stops at the first encountered state x which does not satisfy the invariant φ.
Note that the algorithm in Fig. 1 assumes that the pushdown system specification has only rules which push on the stack at most two stack letters. This assumption is inherited from the algorithm for A post * in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] where the requirement is imposed without loss of generality. The approach in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF] is to adopt a standard construction for pushdown systems which consists in transforming the rules that push on the stack more than two stack letters into multiple rules that push at most two letters. Namely, any rule r in the pushdown system, of the form x, γ → x , γ1 ..γ n with n ≥ 3, is transformed into the following rules:
x, γ → x , νr,n-2 γn , x , νr,i → x , νr,i-1 γi+1 , x , νr,1 → x , γ1 γ2
where 2 ≤ i ≤ n -2 and νr,1 , .., νr,n-2 are new stack letters. This transformation produces a new pushdown system which simulates the initial one, hence the assumption in the A post * generation algorithm does not restrict the generality.
However, the aforementioned assumption makes impossible the application of the algorithm Apost* to pushdown system specifications P for which the stack can be increased with any number of stack symbols. The reason is that [START_REF] Roşu | K-Maude: A Rewriting Based Tool for Semantics of Programming Languages[END_REF] for all ẑ such that x, γ -→ ẑ, γ1..γn is derivable from P with n ≥ 2 do 16 if ẑ |= φ then return false; 17 trans := trans ∪{ẑ γ1 ŷẑ,γ 1 };
18 rel := rel ∪{ŷ ẑ,ν(r,i) γi+2 ŷẑ,ν(r,i+1) | 0 ≤ i ≤ n -2};
where r denotes x, γ -→ ẑ, γ1..γn and ν(r, i), 1 ≤ i ≤ n -2 are new symbols (i.e., ν is a new function symbol s.t. ar(ν) = 2) and ŷẑ,ν(r,0) = ŷẑ,γ 1 and ŷẑ,ν(r,n-1) = ŷ 19 for all P defines rule schemas and we cannot identify beforehand which rule schema applies for which concrete configuration, i.e., we cannot identify the r in ν r,i .
û ε ŷẑ,ν(r,i) ∈ rel, 0 ≤ i ≤ n -2 do 20 trans := trans ∪{û γi+2 ŷẑ,ν(r,i+1) | 0 ≤ i ≤ n -2};
Our solution is to obtain a similar transformation on-the-fly, as we apply the Apost* algorithm and discover instances of rule schemas which increase the stack, i.e., we discover r. This solution induces a localized modification of the lines 15 through 20 of the Apost* algorithm, as described in Fig. 2. We denote by Apost*gen the Apost* algorithm in Fig. 1 with the lines 15 through 20 replaced by the lines in Fig. 2. The correctness of the new algorithm is a rather simple generalization of the one presented in [START_REF] Schwoon | Model-Checking Pushdown Systems[END_REF].
3 Specification of Pushdown Systems in K
In this section we introduce K by means of an example of a PSS defined using K, and we justify why K is an appropriate environment for PSS. A K specification evolves around its configuration, a nested bag of labeled cells denoted as content label , which defines the state of the specified transition system. The movement in the transition system is triggered by the K rules which define transformations made to the configuration. A key component in this mechanism is introduced by a special cell, labeled k, which contains a list of computational tasks that are used to trigger computation steps. As such, the K rules that specify transitions discriminate the modifications made upon the configuration based on the current computation task, i.e., the first element in the k-cell. This instills the stack aspect to the k-cell and induces the resemblance with a PSS. Namely, in a K configuration we make the conceptual separation between the k-cell, seen as the stack, and the rest of the cells which form the control location. Consequently, we promote K as a suitable environment for PSS.
In the remainder of this section we describe the K definition of Shylock by means of a PSS that is based on the operational semantics of Shylock introduced in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. In Section 3.1 we present the configuration of Shylock's K implementation with emphasis on the separation between control locations and stack elements. In Section 3.2 we introduce the K rules for Shylock, while in Section 3.3 we point out a methodology of defining in K production rules for PSS. We use this definition to present K notations and to further emphasize and standardize a K style for defining PSS.
Shylock's K Configuration
The PSS corresponding to Shylock's semantics is given in terms of a programming language specification. First, we give a short overview of the syntax of Shylock as in [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF], then describe how this syntax is used in Shylock's K-configuration.
A Shylock program is finite set of procedure declarations of the form p i :: B i , where B i is the body of procedure p i and denotes a statement defined by the grammar:
B ::= a.f := b | a := b.f | a := new | [a = b]B | [a = b]B | B + B | B; B | p
We use a and b for program variables ranging over G ∪ L, where G and L are two disjoint finite sets of global and local variables, respectively. Moreover we assume a finite set F of field names, ranged over by f . G, L, F are assumed to be defined for each program, as sets of Ids, and we assume a distinguished initial program procedure main.
Hence, the body of a procedure is a sequence of statements that can be: assignments or object creation denoted by the function " := " where ar(:=) = 2 (we distinguish the object creation by the "new" constant appearing as the second argument of ":="); conditional statements denoted by "[ ] "; nondeterministic choice given by " + "; and function calls. Note that K proposes the BNF notation for defining the language syntax as well, with the only difference that the variables are replaced by their respective sorts.
A K configuration is a nested bag of labeled cells where the cell content can be one of the predefined types of K, namely K , Map, Set, Bag, List. The K configuration used for the specification of Shylock is the following:
⟨ ⟨K⟩k ⟨ ⟨Map⟩var ⟨ ⟨Map⟩fld* ⟩h ⟩heap ⟨ ⟨Set⟩G ⟨Set⟩L ⟨Set⟩F ⟨Map⟩P ⟩pgm ⟨K⟩kAbs ⟩
The pgm-cell is designated as a program container where the cells G, L, F maintain the above described finite sets of variables and fields associated to a program, while the cell P maintains the set of procedures stored as a map, i.e., a set of map items p → B.
The heap-cell contains the current heap H which is formed by the variable assignment cell var and the field assignment cell h. The var cell contains the mapping from local and global variables to their associated identities ranging over N ⊥ = N ∪ {⊥}, where ⊥ stands for "not-created". The h cell contains a set of fld cells, each cell associated to a field variable from F . The mapping associated to each field contains items of type n →m, where n, m range over the object identities space N ⊥ . Note that any fld-cell always contains the item ⊥ →⊥ and ⊥ is never mapped to another object identity.
Intuitively, the contents of the heap-cell form a directed graph with nodes labeled by object identities (i.e., values from N ⊥ ) and arcs labeled by field names.
Moreover, the contents of the var-cell (i.e., the variable assignment) define entry nodes in the graph. We use the notion of visible heap, denoted as R(H), for the set of nodes reachable in the heap H from the entry nodes.
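As a small illustration, the visible heap R(H) can be computed by a plain graph search; the following Python sketch is our own encoding (None stands for ⊥) and assumes the heap is given by a variable assignment plus one partial map per field.

```python
def visible_heap(var, fld):
    """R(H): object identities reachable from the entry nodes. `var` maps
    variable names to identities (None plays the role of bottom), `fld`
    maps each field name to a partial map between identities."""
    reach = set()
    frontier = [n for n in var.values() if n is not None]
    while frontier:
        n = frontier.pop()
        if n in reach:
            continue
        reach.add(n)
        for succ in fld.values():
            m = succ.get(n)
            if m is not None and m not in reach:
                frontier.append(m)
    return reach
```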
The k-cell maintains the current continuation of the program, i.e., a list of syntax elements that are to be executed by the program. Note that the sort K amounts to an associative list of items separated by the " ↷ " arrow (read "followed by"). The kAbs-cell is introduced for handling the heap modifications required by the semantics of certain syntactic operators. In this way, we maintain in the cell k only the "pure" syntactic elements of the language, and move into kAbs any additional computational effort used by the abstract semantics for object creation, as well as for procedure call and return.
In conclusion, the k-cell stands for the stack in a PSS P, while all the other cells, including kAbs, together form the control location. Hence the language syntax in K essentially gives a sub-signature of the stack signature in P, while the rest of the cells give the control location signature in P.
Shylock's K Rules
We present here the K rules which implement the abstract semantics of Shylock, according to [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. Besides introducing the K notation for rules, we also emphasize the separation of concerns induced by viewing the K definitions as PSS.
In K we distinguish between computational rules that describe state transitions, and structural rules that only prepare the current state for the next transition. Rules in K have a bi-dimensional localized notation that stands for "what is above a line is rewritten into what is below that line, in a particular context given by the matching with the elements surrounding the lines". Note that solid lines encode a computational rule in K, which is associated with a rewrite rule, while dashed lines denote a structural rule in K, which is compiled by the K-tool into a Maude equation.
The production rules in PSS are encoded in K by computational rules which basically express changes to the configuration triggered by an atomic piece of syntax matched at the top of the stack, i.e., the k-cell. An example of such encoding is the following rule:
rule ⟨ (a.f := b ⇒ •) ••• ⟩k ⟨ ••• v(a) ↦ (_ ⇒ v(b)) ••• ⟩fld(f) ⟨ v ⟩var when v(a) ≠Bool ⊥
which reads as: if the first element in the cell k is the assignment a.f := b then this is consumed from the stack and the map associated to the field f , i.e., the content of the cell fld(f ), is modified by replacing whatever object identity was pointed by v(a) with v(b), i.e., the object identity associated to the variable b by the current variable assignment v, only when a is already created, i.e., v(a) is not ⊥. Note that this rule is conditional, the condition being introduced by the keyword "when". We emphasize the following notational elements in K that appear in the above rule: " " which stands for "anything" and the ellipses "•••". The meaning of the ellipses is basically the same as " " the difference being that the ellipses appear always near the cell walls and are interpreted according to the contents of the respective cell. For example, given that the content of the k-cell is a list of computational tasks separated by " ", the ellipses in the k-cell from the above rule signify that the assignment a.f := b is at the top of the stack of the PSS. On the other hand, because the content of a fld cell is of sort Map which is a commutative sequence of map items, the ellipses appearing by both walls of the cell fld denote that the item v(a) → may appear "anywhere" in the fld-cell. Meanwhile, the notation for the var cell signifies that v is the entire content of this cell, i.e., the map containing the variable assignment. Finally, "•" stands for the null element in any K sort, hence "•" replacing a.f := b at the top of the k-cell stands for ε from the production rules in P.
All the other rules for assignment, conditions, and sequence are each implemented by means of a single computational rule which considers the associated piece of syntax at the top of the k-cell. The nondeterministic choice is implemented by means of two computational rules which replace B 1 + B 2 at the top of a k-cell by either B 1 or B 2 .
Next we present the implementation of one of the most interesting rules in Shylock namely object creation. The common semantics for an object creation is the following: if the current computation (the first element in the cell k) is "a:=new", then whatever object was pointed by a in the var-cell is replaced with the "never used before" object "oNew " obtained from the cell kAbs . Also, the fields part of the heap, i.e., the content of h-cell, is updated by the addition of a new map item "oNew → ⊥". However, in the semantics proposed by Shylock, the value of oNew is the minimal address not used in the current visible heap which is calculated by the function min(R(H) c ) that ends in the normal form oNew(n). This represents the memory reuse mechanism which is handled in our implementation by the kAbs-cell. Hence, the object creation rules are:
rule ⟨ a := new ••• ⟩k ⟨ H ⟩heap ⟨ • ⇒ min(R(H)^c) ⟩kAbs
rule ⟨ a := new ••• ⟩k ⟨ H_h ⟩h ⟨ oNew(n) (• ⇒ update H_h with n ↦ ⊥) ⟩kAbs
rule ⟨ (a := new ⇒ •) ••• ⟩k ⟨ ••• a ↦ (_ ⇒ n) ••• ⟩var ⟨ H_h ⇒ H′_h ⟩h ⟨ (oNew(n) ↷ updated(H′_h)) ⇒ • ⟩kAbs
where "min(R(H) c )" finds n, the first integer not in R(H), and ends in oNew(n), then "update Bag with MapItem" adds n → ⊥ to the map in each cell fld contained in the h-cell and ends in the normal form updated(Bag). Note that all the operators used in the kAbs-cell are implemented equationally, by means of structural K-rules. In this manner, we ensure that the computational rule which consumes a := new from the top of the k-cell is accurately updating the control location with the required modification.
The rules for procedure call/return are presented in Fig. 3. They follow the same pattern as the rules for object creation. The renaming scheme defined for resolving name clashes induced by the memory reuse for object creation is based in Shylock on the concept of cut points, as introduced in [START_REF] Rinetzky | A Semantics for Procedure Local Heaps and its Abstractions[END_REF]. Cut points are objects in the heap that are referred to from both local and global variables, and as such, are subject to modifications during a procedure call.
Fig. 3. K-rules for procedure call and return in Shylock.
Recording cut points in extra logical variables allows for a sound return in the calling procedure, enabling a precise abstract execution w.r.t. object identities.
For more details on the semantics of Shylock we refer to [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF].
Shylock as PSS
The benefit of Shylock's K specification lies in the rules for object creation, which implement the memory reuse mechanism, and in those for procedure call/return, which implement the renaming scheme. Each element in the memory reuse mechanism is implemented equationally, i.e., by means of structural K rules which have an equational interpretation when compiled into Maude. Hence, if we interpret Shylock as an abstract model for the standard semantics, i.e., with standard object creation, the K specification of Shylock's abstract semantics renders an equational abstraction. As such, Shylock is yet another witness to the versatility of the equational abstraction methodology [START_REF] Meseguer | Equational Abstractions[END_REF].
Under the assumption of a bounded heap, the K specification for Shylock is a locally finite PSS and compiles in Maude into a rewriting system. Obviously, in the presence of recursive procedures the stack grows unboundedly and, even if Shylock produces a finite pushdown system, the equivalent transition system is infinite and so is the associated rewriting system. We illustrate this with the following example.
Example 2. The following Shylock program, denoted as pgm0, is the basic example we use for Shylock. It involves a recursive procedure p0 which creates an object g.
gvars: g
main :: p0
p0 :: g := new; p0
In a standard semantics, because the recursion is infinite, so is the set of object identities used for g. However, Shylock's memory reuse guarantees a finite set of object identities, namely ⊥, 0, 1. Hence, the pushdown system associated with the Shylock program pgm0 is finite and has the following (ground) rules:
(g:⊥, main) → (g:⊥, p0; restore(g:⊥))
(g:⊥, p0) → (g:⊥, g := new; p0; restore(g:⊥))
(g:⊥, g := new) → (g:0, ε)
(g:0, p0) → (g:0, g := new; p0; restore(g:0))
(g:0, g := new) → (g:1, ε)
(g:1, p0) → (g:1, g := new; p0; restore(g:1))
(g:1, g := new) → (g:0, ε)
Note that we cannot obtain the pushdown system by the exhaustive execution of Shylock[pgm0], because the exhaustive execution is infinite due to the recursive procedure p0. For the same reason, the Shylock[pgm0] specification does not comply with the prerequisites of Maude's LTL model checker. Moreover, we cannot directly use the dedicated pushdown system model checkers, as these work with the pushdown system automaton, while Shylock[pgm0] is a pushdown system specification. This example creates the premises for the discussion in the next section, where we present a K-specification of a model checking procedure amenable to pushdown system specifications.
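For illustration, the ground rules above can be written down as plain data and explored with a bounded search. The sketch below is our own encoding ("B" stands for ⊥); it shows that only the three control locations g:⊥, g:0, g:1 occur, while the unbounded stack is exactly what forces the cap on the search, and what a (k)Apost*-style construction handles without one. The rule format matches the post_star sketch given earlier.

```python
# Ground rules of the pushdown system for pgm0; "B" stands for the bottom value.
rules = [
    (("g", "B"), "main",   ("g", "B"), ("p0", "restore(g:B)")),
    (("g", "B"), "p0",     ("g", "B"), ("g:=new", "p0", "restore(g:B)")),
    (("g", "B"), "g:=new", ("g", "0"), ()),
    (("g", "0"), "p0",     ("g", "0"), ("g:=new", "p0", "restore(g:0)")),
    (("g", "0"), "g:=new", ("g", "1"), ()),
    (("g", "1"), "p0",     ("g", "1"), ("g:=new", "p0", "restore(g:1)")),
    (("g", "1"), "g:=new", ("g", "0"), ()),
]

def explore(rules, init, max_stack=6):
    """Breadth-first exploration of configurations with a cap on the stack
    depth; without the cap the search would not terminate."""
    from collections import deque
    seen, frontier = set(), deque([init])
    while frontier:
        ctrl, stack = frontier.popleft()
        if (ctrl, stack) in seen or len(stack) > max_stack:
            continue
        seen.add((ctrl, stack))
        if not stack:
            continue
        top, rest = stack[0], stack[1:]
        for (c, g, c2, rhs) in rules:
            if (c, g) == (ctrl, top):
                frontier.append((c2, tuple(rhs) + rest))
    return seen

confs = explore(rules, (("g", "B"), ("main",)))
print(sorted({ctrl for ctrl, _ in confs}))   # three control locations: g:B, g:0, g:1
```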
Model Checking K Definitions
We recall that the PSS perspective over K definitions enables the verification by model checking of a richer class of programs, which allow (infinite) recursion. In this section we focus on describing kApost*(φ, P), the K specification of the algorithm Apost*gen. Note that kApost*(φ, P) is parametric, where the two parameters are P, the K specification of a pushdown system, and φ, a control state invariant. We describe kApost*(φ, P) while justifying its behavioral equivalence with the algorithm Apost*gen.
The while loop in Apost*gen, in Fig. 1, is maintained in kApost* by the application of rewriting until the term reaches the normal form, i.e., no other rule can be applied. This is ensured by the fact that from the initial configuration:
Init ≡ ⟨ ⟨•⟩traces ⟨•⟩traces′ ⟨x₀ γ₀ f⟩trans ⟨•⟩rel ⟨•⟩memento ⟨φ⟩formula ⟨true⟩return ⟩collect
the rules keep applying as long as the trans-cell is nonempty. We assume that the rewrite rules are applied at random, so we need to direct/pipeline the flow of their application via matching and conditions. The notation rulei [label] in the beginning of each rule hints, via [label], at which part of the Apost*gen algorithm that rule is handling. In the following we discuss each rule and justify its connection with code fragments in Apost*gen.
The last rule, ruleP, performs the exhaustive unfolding for a particular configuration in the cell trace. We use this rule in order to have a parametric definition of the kApost* specification, where one of the parameters is P, i.e., the K specification of the pushdown system. Recall that the other parameter is the specification of the language defining the control state invariant properties which are to be verified on the produced pushdown system. The rules of kApost*(φ, P), namely rule1-rule7 and ruleP, are listed in Fig. 4, each annotated with the fragment of Apost*gen it implements. ruleP takes a configuration ⟨x⟩ctrl ⟨γ Γ⟩k of P and gives, based on the rules in P, all the configurations ⟨z_i⟩ctrl ⟨Γ_i Γ⟩k, 0 ≤ i ≤ n, obtained from ⟨x⟩ctrl ⟨γ Γ⟩k after exactly one rewrite.
The pipeline stages are given by the following sequence of rule applications:
rule3 ruleP (rule4 + rule5 + rule6)* rule7
The cell memento is filled in the beginning of the pipeline, rule3, and is emptied at the end of the pipeline, rule7. We use the matching on a nonempty memento for localizing the computation in Apost*gen at lines 7-20. We explain next the pipeline stages. Firstly, note that when no transition derived from P is processed by kApost* we enforce the cells traces and traces′ to be empty (with the matching ⟨•⟩traces ⟨•⟩traces′). This happens in rules 1 and 2 because the respective portions in Apost*gen do not need new transitions derived from P to update "trans" and "rel".
The other cases, namely when the transitions derived from P are used for updating "trans" and "rel", are triggered in rule3 by placing the desired configuration in the cell traces, while the cell traces′ is empty. At this point, since all the other rules match on either an empty traces or a nonempty traces′, only ruleP can be applied. This rule populates traces′ with all the next configurations obtained by executing P.
After the application of ruleP, only one of the rules 4, 5, 6 can apply, because these are the only rules in kApost* matching an empty traces and a nonempty traces′.
Among the rules 4,5,6 the differentiation is made via conditions as follows:
rule4 handles all the cases when the new configuration has a control location z which does not satisfy the state invariant φ (i.e., lines 10, 13, 16 in Apost*gen). In this case we close the pipeline and the algorithm by emptying all the cells traces, traces′, trans. Note that all the rules handling the while loop match on at least a nonempty cell traces, traces′, or trans, with a pivot in a nonempty trans.
Rules 5 and 6 apply disjointly from rule4, because both have the condition z |= φ. Next we describe these two rules. rule5 handles the case when the semantic rule in P which matches the current ⟨x, γ⟩ does not increase the size of the stack; this case is associated with lines 9 and 11, 12 and 14 in Apost*gen. rule6 handles the case when the semantic rule in P which matches the current ⟨x, γ⟩ increases the stack size; it is associated with lines 15 and 17-20 in Apost*gen.
Both rules 5 and 6 use the memento cell which is filled upon pipeline initialization, in rule3. The most complicated rule is rule6, because it handles a for all piece of code, i.e., lines 17 -20 in Fig. 2. This part is reproduced by matching the entire content of cell rel with Rel, and using the projection operator:
Rel[ γ z 1 , .., z n ] := {u | (u, γ, z 1 ) ∈ Rel}, .., {u | (u, γ, z n ) ∈ Rel}
where z 1 , .., z n in the left hand-side is a list of z-symbols, while in the right hand-side we have a list of sets. Hence, the notation:
(Rel[new(z, γ′), news(x, γ, z, γ′, Γ)] Γ news(x, γ, z, γ′, Γ), y) in the trans cell of rule6 stands for lines 17 and 19-20 in Fig. 2. (Note that instead of the notation r for a rule ⟨x, γ⟩ → ⟨ẑ, γ′Γ⟩ we use the equivalent unique representation (x, γ, ẑ, γ′, Γ), and that instead of ŷ_{ẑ,ν(r,0)} we use directly ŷ_{ẑ,γ′}, i.e., new(z, γ′), while instead of ŷ_{ẑ,ν(r,n-1)} in Fig. 2 we use directly ŷ.) Also, the notation in the cell rel, "new(z, γ′), news(x, γ, z, γ′, Γ) Γ news(x, γ, z, γ′, Γ), y", stands for line 18 in Fig. 2. Rules 4, 5, 6 match on a nonempty traces′-cell and an empty traces, and no other rule matches alike. rule7 closes the pipeline when the traces′ cell becomes empty, by making the memento cell empty. Note that traces′ empties because rules 4, 5, 6 keep consuming it.
Example 3. We recall that the Shylock program pgm0 from Example 2 was not amenable to exhaustive semantic execution or to Maude's LTL model checker, due to the recursive procedure p0. Likewise, dedicated model checkers for pushdown systems, which could handle the recursive procedure p0, cannot be used because Shylock[pgm0], the pushdown system obtained from Shylock's PSS, is not available. However, we can employ kApost* for Shylock's K-specification in order to discover the reachable state space, the Apost* automaton, as well as the pushdown system itself. In Fig. 5 we describe the first steps in the execution of kApost*(true, Shylock[pgm0]) and the reachability automaton generated automatically by kApost*(true, Shylock[pgm0]).
Bounded Model Checking for Shylock
One of the major problems in model checking programs which manipulate dynamic structures, such as linked lists, is that it is not possible to bound a priori the state space of the possible computations. This is due to the fact that programs may manipulate the heap by dynamically allocating an unbounded number of new objects and by updating reference fields. This implies that the reachable state space is potentially infinite for Shylock programs with recursive procedures. Consequently for model checking purposes we need to impose some suitable bounds on the model of the program.
A natural bound for model checking Shylock programs, without necessarily restricting their capability of allocating an unbounded number of objects, is to impose constraints on the size of the visible heap [START_REF] Bouajjani | Context-Bounded Analysis of Multithreaded Programs with Dynamic Linked Structures[END_REF]. Such a bound still allows for storage of an unbounded number of objects onto the call-stack, using local variables. Thus termination is guaranteed with heap-bounded model checking of the form |=_k φ, meaning |= φ ∧ le(k), where le(k) verifies whether the size of the visible heap is smaller than k. To this end, we define the set of atomic propositions (φ ∈) Rite as the smallest set defined by the following grammar:
r ::= ε | x | ¬x | f | r.r | r + r | r *
where x ranges over variable names (to be used as tests) and f over field names (to be used as actions). The atomic propositions in Rite are basically expressions from the Kleene algebra with tests [START_REF] Kozen | Kleene Algebra with Tests[END_REF], where the global and local variables are used as nominals while the fields constitute the set of basic actions. The K specification of Rite is based on the circularity principle [START_REF] Goguen | Circular Coinductive Rewriting[END_REF][START_REF] Bonsangue | A Decision Procedure for Bisimilarity of Generalized Regular Expressions[END_REF] to handle the possible cycles in the heap. We employ Rite with kApost*(φ, P), i.e., φ ∈ Rite, for verifying heap-shape properties of Shylock programs. For the precise definition of the interpretation of these expressions in a heap we refer to the companion paper [START_REF] Rot | Interacting via the Heap in the Presence of Recursion[END_REF]. We conclude with Example 4, showing a simple invariant property of a Shylock program: the program pgmList there induces, on some computation path, an unbounded heap. When we apply the heap-bounded model checking specification, by instantiating φ with the property le(10), we collect all lists with a length smaller than or equal to 10. We can also check the heap-shape property "(¬first + first.next*.last)". This property says that either the first object is not defined or the last object is reached from first via the next field.
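As a down-to-earth rendering of the example property, the following Python sketch checks (¬first + first.next*.last) on a single concrete heap (variables mapped to identities, one partial map per field, None for ⊥). It is only a check for this specific expression, not the general semantics of Rite, for which we refer to the companion paper; all names are ours.

```python
def next_star(fld, start):
    """Identities reachable from `start` via the field `next` (start included)."""
    seen, frontier = set(), [start]
    while frontier:
        n = frontier.pop()
        if n is None or n in seen:
            continue
        seen.add(n)
        frontier.append(fld.get("next", {}).get(n))
    return seen

def example_property_holds(var, fld):
    """(not first) + first.next*.last on one concrete heap: either `first` is
    not created, or the identity of `last` is reachable from `first` via `next`."""
    first, last = var.get("first"), var.get("last")
    if first is None:
        return True
    return last is not None and last in next_star(fld, first)
```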
Conclusions
In this paper we introduced pushdown system specifications (PSS) with an associated invariant model checking algorithm Apost*gen. We showed why the K framework is a suitable environment for pushdown system specifications, but not for their verification via the for-free model checking capabilities available in K. We gave a K specification of invariant model checking for pushdown system specifications, kApost*, which is behaviorally equivalent to Apost*gen. To the best of our knowledge, no other model checking tool has the flexibility of having structured atomic propositions and working with the generation of the state space on-the-fly.
Future work includes the study of the correctness of our translation of Shylock into the K framework as well as of the translation of the proposed model checking algorithm and its generalization to any LTL formula. From a more practical point of view, future applications of pushdown system specifications could be found in semantics-based transformation of real programming languages like C or Java or in benchmark-based comparisons with existing model-based approaches for program verification.
Fig. 1. The algorithm for obtaining Apost* adapted for pushdown system specifications.
Fig. 2. The modification required by the generalization of the algorithm Apost*.
Fig. 4. kApost*(φ, P).
Fig. 5. The first pipeline iteration for kApost*(true, Shylock[pgm0]) and the automatically produced reachability automaton at the end of kApost*(true, Shylock[pgm0]). Note that for legibility reasons we omit certain cells appearing in the control state, like ⟨g⟩G ⟨•⟩L ⟨•⟩F ⟨main ↦ p0, p0 ↦ g := new; p0⟩P in the pgm cell, which do not change along the execution. Hence, for example, the ctrl-cell is filled in rule3 with both cells heap and pgm.
Example 4. The following Shylock program pgmList creates a potentially infinite linked list which starts in object first and ends with object last.
gvars: first, last
lvars: tmp
flds: next
main :: last := new; last.next := last; first := last; p0
p0 :: tmp := new; tmp.next := first; first := tmp; (p0 + skip)
Scientific Research (NWO), CoRE project, dossier number: 612.063.920.
Acknowledgments. We would like to thank the anonymous reviewers for their helpful comments and suggestions. | 47,009 | [
"1003769",
"1003770",
"895511",
"966814",
"1003771"
] | [
"452729",
"20495",
"135222",
"135222",
"20495",
"452729",
"135222",
"20495"
] |
01485980 | en | [
"info"
] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485980/file/978-3-642-37635-1_6_Chapter.pdf | Roberto Bruni
email: bruni@di.unipi.it
Andrea Corradini
email: andrea@di.unipi.it
Fabio Gadducci
email: gadducci@di.unipi.it
Alberto Lluch Lafuente
email: alberto.lluch@imtlucca.it
Andrea Vandin
email: andrea.vandin@imtlucca.it
Adaptable Transition Systems
Keywords: Adaptation, autonomic systems, control data, interface automata
Introduction
Self-adaptive systems have been advocated as a convenient solution to the problem of mastering the complexity of modern software systems, networks and architectures. In particular, self-adaptivity is considered a fundamental feature of autonomic systems, that can specialise to several other self-* properties like selfconfiguration, self-optimisation, self-protection and self-healing. Despite some valuable efforts (see e.g. [START_REF] Salehie | Self-adaptive software: Landscape and research challenges[END_REF][START_REF] Lints | The essentials in defining adaptation[END_REF]), there is no general agreement on the notion of adaptivity, neither in general nor in software systems. There is as well no widely accepted foundational model for adaptivity. Using Zadeh's words [START_REF] Zadeh | On the definition of adaptivity[END_REF]: "it is very difficult -perhaps impossible-to find a way of characterizing in concrete terms the large variety of ways in which adaptive behavior can be realized". Zadeh's concerns were conceived in the field of Control Theory but are valid in Computer Science as well. Zadeh' skepticism for a concrete unifying definition of adaptivity is due to the attempt to subsume two aspects under the same definition: the external manifestations of adaptive systems (sometimes called black-box adaptation), and the internal mechanisms by which adaptation is achieved (sometimes called white-box adaptation).
The limited effort placed so far in the investigation of the foundations of adaptive software systems might be due to the fact that it is not clear what are the characterising features that distinguish adaptive systems from those that are not so. For instance, very often a software system is considered "self-adaptive" if it "modifies its own behavior in response to changes in its operating environment" [START_REF] Oreizy | An architecture-based approach to self-adaptive software[END_REF], when the software system realises that "it is not accomplishing what the software is intended to do, or better functionality or performance is possible" [START_REF] Robertson | Introduction to self-adaptive software: Applications[END_REF]. But, according to this definition, almost any software system can be considered self-adaptive, since any system of a reasonable complexity can modify its behaviour (e.g. following one of the different branches of a conditional statement) as a reaction to a change in its context of execution (e.g. values of variables or parameters). Consider the automaton of Fig. 1, which models a server providing a task execution service. Each state has the format s{q} [r] where s can be either D (the server is down) or U (it is up), and q, r are possibly empty sequences of t symbols representing, respectively, the lists of tasks scheduled for execution and the ones received but not scheduled yet. Transitions are labelled with t? (receive a task), u! (start-up the server), s! (schedule a task), f! (notify the conclusion of a task), and d! (shut-down the server). Annotations ? and ! denote input and output actions, respectively. Summing up, the server can receive tasks, start up, schedule tasks and notify their termination, and eventually shut down. Now, is the modelled server self-adaptive? One may argue that indeed it is, since the server schedules tasks only when it is up. Another argument can be that the server is self-adaptive since it starts up only when at least one task has to be processed, and shuts down only when no more tasks have to be processed. Or one could say that the server is not adaptive, because all transitions just implement its ordinary functional behaviour. Which is the right argument? How can we handle such diverse interpretations? White-box adaptation. White-box perspectives on adaptation allow one to specify or inspect (part of) the internal structure of a system in order to offer a clear separation of concerns to distinguish changes of behaviour that are part of the application or functional logic from those which realise the adaptation logic.
In general, the behaviour of a component is governed by a program and according to the traditional, basic view, a program is made of control (i.e. algorithms) and data. The conceptual notion of adaptivity we proposed in [START_REF] Bruni | A conceptual framework for adaptation[END_REF] requires to identify control data which can be changed to adapt the component's behaviour. Adaptation is, hence, the run-time modification of such control data. Therefore, a component is adaptable if it has a distinguished collection of control data that can be modified at run-time, adaptive if it is adaptable and its control data are modified at run-time, at least in some of its executions, and self-adaptive if it modifies its own control data at run-time.
Several programming paradigms and reference models have been proposed for adaptive systems. A notable example is the Context Oriented Programming paradigm, where the contexts of execution and code variations are first-class citizens that can be used to structure the adaptation logic in a disciplined way [START_REF] Salvaneschi | Context-oriented programming: A programming paradigm for autonomic systems (v2)[END_REF]. Nevertheless, it is not the programming language what makes a program adaptive: any computational model or programming language can be used to implement an adaptive system, just by identifying the part of the data that governs the adaptation logic, that is the control data. Consequently, the nature of control data can vary considerably, including all possible ways of encapsulating behaviour: from simple configuration parameters to a complete representation of the program in execution that can be modified at run-time, as it is typical of computational models that support meta-programming or reflective features.
The subjectivity of adaptation is captured by the fact that the collection of control data of a component can be defined in an arbitrary way, ranging from the empty set ("the system is not adaptable") to the collection of all the data of the program ("any data modification is an adaptation"). This means that white-box perspectives are as subjective as black-box ones. The fundamental difference lies in who is responsible of declaring which behaviours are part of the adaptation logic and which not: the observer (black-box) or the designer (white-box).
Consider again the system in Fig. 1 and the two possible interpretations of its adaptivity features. As elaborated in Sect. 3, in the first case control data is defined by the state of the server, while in the second case control data is defined by the two queues. If instead the system is not considered adaptive, then the control data is empty. This way the various interpretations are made concrete in our conceptual approach. We shall use this system as our running example.
It is worth to mention that the control data approach [START_REF] Bruni | A conceptual framework for adaptation[END_REF] is agnostic with respect to the form of interaction with the environment, the level of contextawareness, the use of reflection for self-awareness. It applies equally well to most of the existing approaches for designing adaptive systems and provides a satisfactory answer to the question "what is adaptation conceptually?". But "what is adaptation formally?" and "how can we reason about adaptation, formally?".
Contribution. This paper provides an answer to the questions we raised above. Building on our informal discussion, on a foundational model of component based systems (namely, interface automata [START_REF] De Alfaro | Game models for open systems[END_REF][START_REF] De Alfaro | Interface automata[END_REF], introduced in Sect. 2), and on previous formalisations of adaptive systems (discussed in Sect. 5) we distill in Sect. 3 a core model of adaptive systems called adaptable interface automata (aias). The key feature of aias are control propositions evaluated on states, the formal counterpart of control data. The choice of control propositions is arbitrary but it imposes a clear separation between ordinary, functional behaviours and adaptive ones. We then discuss in Sect. 4 how control propositions can be exploited in the specification and analysis of adaptive systems, focusing on various notions proposed in the literature, like adaptability, feedback control loops, and control synthesis. The approach based on control propositions can be applied to other computational models, yielding other instances of adaptable transition systems. The choice of interface automata is due to their simple and elegant theory.
Background
Interface automata were introduced in [START_REF] De Alfaro | Interface automata[END_REF] as a flexible framework for componentbased design and verification. We recall here the main concepts from [START_REF] De Alfaro | Game models for open systems[END_REF].
Definition 1 (interface automaton). An interface automaton P is a tuple ⟨V, V^i, A^I, A^O, T⟩, where V is a set of states; V^i ⊆ V is the set of initial states, which contains at most one element (if V^i is empty then P is called empty); A^I and A^O are two disjoint sets of input and output actions (we denote by A = A^I ∪ A^O the set of all actions); and T ⊆ V × A × V is a deterministic set of steps (i.e. (u, a, v) ∈ T, (u, a, v′) ∈ T implies v = v′).
Example 1. Figure 2 presents three interface automata modelling respectively a machine Mac (left), an execution queue Exe (centre), and a task queue Que (right). Intuitively, each automaton models one component of our running example (cf. Fig. 1). The format of the states is as in our running example. The initial states are not depicted on purpose, because we will consider several cases. Here we assume that they are U, {} and [], respectively. The actions of the automata have been described in Sect. 1. The interface of each automaton is implicitly denoted by the action annotation: ? for inputs and ! for outputs.
Given B ⊆ A, we sometimes use P|_B to denote the automaton obtained by restricting the set of steps to those whose action is in B. Similarly, the set of actions in B labelling the outgoing transitions of a state u is denoted by B(u). A computation ρ of an interface automaton P is a finite or infinite sequence of consecutive steps (or transitions) {(u_i, a_i, u_{i+1})}_{i<n} from T (thus n can be ω).
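The definitions above are easy to prototype; the following Python sketch (our own names and encoding, not part of the paper) represents an interface automaton together with the restriction P|_B and B(u) notations used below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IA:
    """Interface-automaton sketch: states, at most one initial state,
    disjoint input/output alphabets, deterministic steps (triples (u, a, v))."""
    states: frozenset
    init: frozenset          # empty or a singleton
    inputs: frozenset
    outputs: frozenset
    steps: frozenset

    @property
    def actions(self):
        return self.inputs | self.outputs

    def restrict(self, B):
        """P|_B: keep only the steps whose action belongs to B."""
        return IA(self.states, self.init, self.inputs, self.outputs,
                  frozenset(s for s in self.steps if s[1] in B))

    def enabled(self, u, B=None):
        """B(u): the actions enabled in state u, optionally intersected with B."""
        acts = {a for (v, a, _) in self.steps if v == u}
        return acts if B is None else acts & set(B)
```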
A partial composition operator is defined for automata: in order for two automata to be composable, their interfaces must satisfy certain conditions.
Definition 2 (composability). Let P and Q be two interface automata. Then, P and Q are composable if A^O_P ∩ A^O_Q = ∅. Let shared(P, Q) = A_P ∩ A_Q and comm(P, Q) = (A^O_P ∩ A^I_Q) ∪ (A^I_P ∩ A^O_Q) be the sets of shared and communication actions, respectively. Thus, two interface automata can be composed if they share input or communication actions only.
Two composable interface automata can be combined in a product as follows.
Definition 3 (product). Let P and Q be two composable interface automata. Then the product P ⊗ Q is the interface automaton ⟨V, V^i, A^I, A^O, T⟩ such that V = V_P × V_Q; V^i = V^i_P × V^i_Q; A^I = (A^I_P ∪ A^I_Q) \ comm(P, Q); A^O = A^O_P ∪ A^O_Q; and T is the union of
{((v, u), a, (v′, u)) | (v, a, v′) ∈ T_P ∧ a ∉ shared(P, Q) ∧ u ∈ V_Q} (i.e. P steps),
{((v, u), a, (v, u′)) | (u, a, u′) ∈ T_Q ∧ a ∉ shared(P, Q) ∧ v ∈ V_P} (i.e. Q steps), and
{((v, u), a, (v′, u′)) | (v, a, v′) ∈ T_P ∧ (u, a, u′) ∈ T_Q ∧ a ∈ shared(P, Q)} (i.e. steps where P and Q synchronise over shared actions).
In words, the product is a commutative and associative operation (up to isomorphism) that interleaves non-shared actions, while shared actions are synchronised in broadcast fashion, in such a way that shared input actions become inputs, communication actions become outputs.
Example 2. Consider the interface automata Mac, Exe and Que of Fig. 2. They are all pairwise composable and, moreover, the product of any two of them is composable with the remaining one. The result of applying the product of all three automata is depicted in Fig. 3 (left).
States in P ⊗ Q where a communication action is output by one automaton but cannot be accepted as input by the other are called incompatible or illegal.
Definition 4 (incompatible states). Let P and Q be two composable interface automata. The set incompatible(P, Q) ⊆ V_P × V_Q of incompatible states of P ⊗ Q is defined as {(u, v) ∈ V_P × V_Q | ∃a ∈ comm(P, Q) . (a ∈ A^O_P(u) ∧ a ∉ A^I_Q(v)) ∨ (a ∈ A^O_Q(v) ∧ a ∉ A^I_P(u))}.
Example 3. In our example, the product Mac ⊗ Exe ⊗ Que depicted in Fig. 3 (left) has several incompatible states, namely all those of the form "s{t}[t]" or "s{t}[tt]". Indeed, in those states, Que is willing to perform the output action s! but Exe is not able to perform the dual input action s?.
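Continuing the sketch started after Definition 1, the product and the incompatible states can be computed directly from Definitions 3 and 4; the helper names (composable, shared, comm, product, incompatible) are ours.

```python
def composable(P, Q):
    """Definition 2: no shared output actions."""
    return not (P.outputs & Q.outputs)

def shared(P, Q):
    return P.actions & Q.actions

def comm(P, Q):
    return (P.outputs & Q.inputs) | (P.inputs & Q.outputs)

def product(P, Q):
    """Definition 3: interleave non-shared actions, synchronise shared ones."""
    assert composable(P, Q)
    sh, steps = shared(P, Q), set()
    for (v, a, v2) in P.steps:
        if a not in sh:
            steps |= {((v, u), a, (v2, u)) for u in Q.states}      # P steps
    for (u, a, u2) in Q.steps:
        if a not in sh:
            steps |= {((v, u), a, (v, u2)) for v in P.states}      # Q steps
    for (v, a, v2) in P.steps:
        for (u, b, u2) in Q.steps:
            if a == b and a in sh:
                steps.add(((v, u), a, (v2, u2)))                   # synchronised
    return IA(frozenset((v, u) for v in P.states for u in Q.states),
              frozenset((v, u) for v in P.init for u in Q.init),
              frozenset((P.inputs | Q.inputs) - comm(P, Q)),
              frozenset(P.outputs | Q.outputs),
              frozenset(steps))

def incompatible(P, Q):
    """Definition 4: a communication output that the partner cannot accept."""
    bad = set()
    for v in P.states:
        for u in Q.states:
            for a in comm(P, Q):
                if (a in P.enabled(v, P.outputs) and a not in Q.enabled(u, Q.inputs)) or \
                   (a in Q.enabled(u, Q.outputs) and a not in P.enabled(v, P.inputs)):
                    bad.add((v, u))
    return bad
```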
The presence of incompatible states does not forbid to compose interface automata. In an open system, compatibility can be ensured by a third automata called the environment which may e.g. represent the context of execution or an adaptation manager. Technically, an environment for an automaton R is a non-empty automaton E which is composable with R, synchronises with all output actions of R (i.e. A I E = A O R ) and whose product with R does not have incompatible states. Interesting is the case when R is P ⊗Q and E is a compatible environment, i.e. when the set incompatible(P, Q)×V E is not reachable in R⊗E. Compatibility of two (composable, non-empty) automata is then expressed as the existence of a compatible environment for them. This also leads to the concept of compatible (or usable) states cmp(P ⊗ Q) in the product of two composable interface automata P and Q, i.e. those for which an environment E exists that makes the set of incompatible states incompatible(P , Q) unreachable in P ⊗ Q ⊗ E.
Example 4. Consider again the interface automata Mac, Exe and Que of Fig. 2. Automata Mac and Exe are trivially compatible, and so are Mac and Que. Exe and Que are compatible as well, despite of the incompatible states {t}[t] and {t}[tt] in their product Exe ⊗ Que. Indeed an environment that does not issue a second task execution requests t! without first waiting for a termination notification (like the one in Fig. 4) can avoid reaching the incompatible states.
We are finally ready to define the composition of interface automata.
Definition 5 (composition). Let P and Q be two composable interface automata. The composition P | Q is the interface automaton ⟨V, V^i, A^I_{P⊗Q}, A^O_{P⊗Q}, T⟩ such that V = cmp(P ⊗ Q); V^i = V^i_{P⊗Q} ∩ V; and T = T_{P⊗Q} ∩ (V × A × V).
Adaptable Interface Automata
Adaptable interface automata extend interface automata with atomic propositions (state observations) a subset of which is called control propositions and play the role of the control data of [START_REF] Bruni | A conceptual framework for adaptation[END_REF].
Definition 6 (adaptable interface automata). An adaptable interface automaton ( aia) is a tuple P, Φ, l, Φ c such that P = V, V i , A I , A O , T is an interface automaton; Φ is a set of atomic propositions, l : V → 2 Φ is a labelling function mapping states to sets of propositions; and Φ c ⊆ Φ is a distinguished subset of control propositions.
Abusing the notation we sometimes call P an aia with underlying interface automaton P , whenever this introduces no ambiguity. A transition (u, a, u ) ∈ T is called an adaptation if it changes the control data, i.e. if there exists a proposition φ ∈ Φ c such that either φ ∈ l(u) and φ ∈ l(u ), or vice versa. Otherwise, it is called a basic transition. An action a ∈ A is called a control action if it labels at least one adaptation. The set of all control actions of an aia P is denoted by A C P .
Example 6. Recall the example introduced in Sect. 1. We raised the question whether the interface automaton S of Fig. 1 is (self-)adaptive or not. Two arguments were given. The first argument was "the server schedules tasks only when it is up". That is, we identify two different behaviours of the server (when it is up or down, respectively), interpreting a change of behaviour as an adaptation.
We can capture this interpretation by introducing a control proposition that records the state of the server. More precisely, we define the aia Switch(S) in the following manner. The underlying interface automaton is S; the only (control) proposition is up, and the labelling function maps states of the form U{. . .}[. . .] into {up} and those of the form D{. . .}[. . .] into ∅. The control actions are then u and d. The second argument was "the system starts the server up only when there is at least one task to schedule, and shuts it down only when no task has to be processed ". In this case the change of behaviour (adaptation) is triggered either by the arrival of a task in the waiting queue, or by the removal of the last task scheduled for execution. Therefore we can define the control data as the state of both queues. That is, one can define an aia Scheduler(S) having as underlying interface automaton the one of Fig. 1, as control propositions all those of the form queues status q r (with q ∈ { , t}, and r ∈ { , t, tt}), and a labelling function that maps states of the form s{q}[r] to the set {queues status q r }. In this case the control actions are s, f and t.
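An aia adds little machinery on top of the IA sketch above: a labelling with atomic propositions and a chosen control subset, from which adaptations and control actions are derived. The following Python sketch is our own illustration of Definition 6 (IA is the interface-automaton class defined earlier).

```python
from dataclasses import dataclass

@dataclass
class AIA:
    """Adaptable interface automaton sketch (Definition 6)."""
    ia: IA                   # the underlying interface automaton
    props: frozenset         # atomic propositions
    label: dict              # state -> set of propositions holding in it
    control: frozenset       # control propositions, a subset of props

    def is_adaptation(self, step):
        """A step is an adaptation iff it changes some control proposition."""
        u, _, v = step
        cu = set(self.label.get(u, ())) & self.control
        cv = set(self.label.get(v, ())) & self.control
        return cu != cv

    def control_actions(self):
        """Actions labelling at least one adaptation."""
        return {s[1] for s in self.ia.steps if self.is_adaptation(s)}
```

For instance, Switch(S) would label states U{...}[...] with {"up"} and take {"up"} as control set, while Scheduler(S) would label each state with its queues_status proposition and take all such propositions as control set.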
Computations. The computations of an aia (i.e. those of the underlying interface automata) can be classified according to the presence of adaptation transitions. For example, a computation is basic if it contains no adaptive step, and it is adaptive otherwise. We will also use the concepts of basic computation starting at a state u and of adaptation phase, i.e. a maximal computation made of adaptive steps only.
Coherent control. It is worth to remark that what distinguishes adaptive computations and adaptation phases are not the actions, because control actions may also label transitions that are not adaptations. However, very often an aia has coherent control, meaning that the choice of control propositions is coherent with the induced set of control actions, in the sense that all the transitions labelled with control actions are adaptations.
Composition. The properties of composability and compatibility for aia, as well as product and composition operators, are lifted from interface automata.
Definition 7 (composition). Let P and Q be two aia whose underlying interface automata P′ and Q′ are composable. The composition P | Q is the aia ⟨P′ | Q′, Φ, l, Φ^c⟩ such that the underlying interface automaton is the composition of P′ and Q′; Φ = Φ_P ⊎ Φ_Q (i.e. the set of atomic propositions is the disjoint union of the atomic propositions of P and Q); Φ^c = Φ^c_P ⊎ Φ^c_Q; and l is such that l((u, v)) = l_P(u) ∪ l_Q(v) for all (u, v) ∈ V (i.e. a proposition holds in a composed state if it holds in its original local state).
Since the control propositions of the composed system are the disjoint union of those of the components, one easily derives that control coherence is preserved by composition, and that the set of control actions of the product is obtained as the union of those of the components.
Exploiting Control Data
We explain here how the distinguishing features of aia (i.e. control propositions and actions) can be exploited in the design and analysis of self-adaptive systems. For the sake of simplicity we will focus on aia with coherent control, as it is the case of all of our examples. Thus, all the various definitions/operators that we are going to define on aia may rely on the manipulation of control actions only.
Design
Well-formed interfaces. The relationship between the set of control actions A C P and the alphabets A I P and A O P is arbitrary in general, but it could satisfy some pretty obvious constraints for specific classes of systems.
Definition 8 (adaptable, controllable and self-adaptive ATSs). Let P be an aia. We say that P is adaptable if A^C_P ≠ ∅; controllable if A^C_P ∩ A^I_P ≠ ∅; self-adaptive if A^C_P ∩ A^O_P ≠ ∅.
Intuitively, an aia is adaptable if it has at least one control action, which means that at least one transition is an adaptation. An adaptable aia is controllable if control actions include some input actions, or self-adaptive if control actions include some output actions (which are under control of the aia).
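On the AIA sketch introduced above, Definition 8 amounts to a few set intersections; a minimal illustration (our own helper names):

```python
def adaptable(P):
    """A^C_P is non-empty."""
    return bool(P.control_actions())

def controllable(P):
    """Some control action is an input of P."""
    return bool(P.control_actions() & P.ia.inputs)

def self_adaptive(P):
    """Some control action is an output of P."""
    return bool(P.control_actions() & P.ia.outputs)

def fully_self_adaptive(P):
    """Adaptable, and no control action is an input."""
    return adaptable(P) and not (P.control_actions() & P.ia.inputs)
```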
From these notions we can derive others. For instance, we can say that an adaptable aia is fully self-adaptive if A C P ∩ A I P = ∅ (the aia has full control over adaptations). Note that hybrid situations are possible as well, when control actions include both input actions (i.e. actions in A I P ) and output actions (i.e. actions in A O P ). In this case we have that P is both self-adaptive and controllable. Example 7. Consider the aia Scheduler(S) and Switch(S) described in Example 6, whose underlying automaton (S) is depicted in Fig. 1. Switch(S) is fully self-adaptive and not controllable, since its control actions do not include input actions, and therefore the environment cannot force the execution of control actions directly. On the other hand, Scheduler(S) is self-adaptive and controllable, since some of its control actions are outputs and some are inputs.
Consider instead the interface automaton A in the left of Fig. 5, which is very much like the automaton Mac ⊗ Exe ⊗ Que of Fig. 3, except that all actions but f have been turned into input actions and states of the form s{t}[tt] have been removed. The automaton can also be seen as the composition of the two automata on the right of Fig. 5. And let us call Scheduler(A) and Switch(A) the aia obtained by applying the control data criteria of Scheduler(S) and Switch(S), respectively. Both Scheduler(A) and Switch(A) are adaptable and controllable, but only Scheduler(A) is self-adaptive, since it has at least one control output action (i.e. f!). Composition. As discussed in Sect. 3, the composition operation of interface automata can be extended seamlessly to aia. Composition can be used, for example, to combine an adaptable basic component B and an adaptation manager M in a way that reflects a specific adaptation logic. In this case, natural well-formedness constraints can be expressed as suitable relations among sets of actions. For example, we can define when a component M controls another component B as follows.
Definition 9 (controlled composition). Let B and M be two composable aia. We say that M controls B in B | M if A^C_B ∩ A^O_M ≠ ∅. In addition, we say that M controls completely B in B | M if A^C_B ⊆ A^O_M.
This definition can be used, for instance, to allow or to forbid mutual control. For example, if a manager M is itself at least partly controllable (i.e. A^C_M ∩ A^I_M ≠ ∅), a natural requirement to avoid mutual control would be that the managed component B and M are such that A^O_B ∩ A^C_M = ∅, i.e. that B cannot control M.
Example 8. Consider the adaptable server depicted on the left of Fig. 5 as the basic component whose control actions are d, u and s. Consider further the controller of Fig. 6 as the manager, which controls completely the basic component.
A superficial look at the server and the controller may lead one to think that their composition yields the adaptive server of Fig. 1, yet this is not the case. Indeed, the underlying interface automata are not compatible, due to the existence of (unavoidable) incompatible states.
Control loops and action classification. The distinction between input, output and control actions is suitable to model some basic interactions and well-formedness criteria as we explained above. More sophisticated cases such as control loops are better modelled if further classes of actions are distinguished. As a paradigmatic example, let us consider the control loop of the MAPE-K reference model [START_REF]An Architectural Blueprint for Autonomic Computing[END_REF], illustrated in Fig. 7. This reference model is the most influential one for autonomic and adaptive systems. The name MAPE-K is due to the main activities of autonomic manager components (Monitor, Analyse, Plan, Execute) and the fact that all such activities operate and exploit the same Knowledge base.
According to this model, a self-adaptive system is made of a component implementing the application logic, equipped with a control loop that monitors the execution through suitable sensors, analyses the collected data, plans an adaptation strategy, and finally executes the adaptation of the managed component through some effectors. The managed component is considered to be an adaptable component, and the system made of both the component and the manager implementing the control loop is considered as a self-adaptive component.
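A minimal, purely illustrative skeleton of such a MAPE-K loop is sketched below in Python; the managed component is assumed to expose sense() and apply() as sensors and effectors, and the "degraded"/"reconfigure" values are placeholders of ours, not part of the reference model.

```python
class MAPEK:
    """Minimal MAPE-K skeleton: the four activities share one knowledge base,
    and Execute is the step that changes the managed component's control data."""
    def __init__(self, managed):
        self.managed = managed
        self.knowledge = {}

    def monitor(self):
        self.knowledge["observations"] = self.managed.sense()

    def analyse(self):
        obs = self.knowledge.get("observations", {})
        self.knowledge["symptoms"] = [k for k, v in obs.items() if v == "degraded"]

    def plan(self):
        self.knowledge["plan"] = [("reconfigure", s)
                                  for s in self.knowledge.get("symptoms", [])]

    def execute(self):
        for action in self.knowledge.get("plan", []):
            self.managed.apply(action)     # i.e. modify the component's control data

    def step(self):
        self.monitor(); self.analyse(); self.plan(); self.execute()
```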
Analysis and verification
Property classes. By the very nature of adaptive systems, the properties that one is interested in verifying on them can be classified according to the kind of computations they concern, so that the usual verification (e.g. model checking) problem P |= ψ (i.e. "does the aia P satisfy property ψ?") is instantiated on some of the computations of P depending on the class of ψ.
For example, some authors (e.g. [START_REF] Zhao | Model checking of adaptive programs with modeextended linear temporal logic[END_REF][START_REF] Zhang | Modular verification of dynamically adaptive systems[END_REF][START_REF] Kulkarni | Correctness of component-based adaptation[END_REF]) distinguish the following three kinds of properties. Local properties are "properties of one [behavioral] mode", i.e. properties that must be satisfied by basic computations only. Adaptation properties are to be "satisfied on interval states when adapting from one behavioral mode to another ", i.e. properties of adaptation phases. Global properties "regard program behavior and adaptations as a whole. They should be satisfied by the adaptive program throughout its execution, regardless of the adaptations.", i.e. properties about the overall behaviour of the system.
To these we add the class of adaptability properties, i.e. properties that may fail for local (i.e. basic) computations, and that need the adapting capability of the system to be satisfied.
Definition 10 (adaptability property). Let P be an aia. A property ψ is an adaptability property for P if P |= ψ and P|_{A_P \ A^C_P} ⊭ ψ.
Example 9. Consider the adaptive server of Fig. 1 and the aia Scheduler(S) and Switch(S), with initial state U{}[]. Consider further the property "whenever a task is received, the server can finish it". This is an adaptability property for Scheduler(S) but not for Switch(S). The main reason is that in order to finish a task it first has to be received (t) and scheduled (s), which is part of the adaptation logic in Scheduler(S) but not in Switch(S). In the latter, indeed, the basic computations starting from state U{}[] are able to satisfy the property.
Weak and strong adaptability. aia are also amenable for the analysis of the computations of interface automata in terms of adaptability. For instance, the concepts of weak and strong adaptability from [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] can be very easily rephrased in our setting. According to [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] a system is weakly adaptable if "for all paths, it always holds that as soon as adaptation starts, there exists at least one path for which the system eventually ends the adaptation phase", while a system is strongly adaptable if "for all paths, it always holds that as soon as adaptation starts, all paths eventually end the adaptation phase".
Strong and weak adaptability can also be characterised by formulae in some temporal logic [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF], ACTL [START_REF] De Nicola | Action versus state based logics for transition systems[END_REF] in our setting.
Definition 11 (weak and strong adaptability). Let P be an aia. We say that P is weakly adaptable if P |= AG EF EX{A P \ A C P }true, and strongly adaptable if P |= AG AF (EX{A P }true ∧ AX{A P \ A C P }true).
The formula characterising weak adaptability states that along all paths (A) it always (G) holds that there is a path (E) where eventually (F) a state will be reached where a basic step can be executed (EX{A P \ A C P }true). Similarly, the formula characterising strong adaptability states that along all paths (A) it always (G) holds that along all paths (A) eventually (F) a state will be reached where at least one step can be fired (EX{A P }true) and all fireable actions are basic steps (AX{A P \ A C P }true). Apart from its conciseness, such characterisations enables the use of model checking techniques to verify them.
Example 10. The aia Switch(S) (cf. Fig. 1) is strongly adaptable, since it does not have any infinite adaptation phase. Indeed, every control action (u or d) leads to a state where only basic actions (t, f or s) can be fired. On the other hand, Scheduler(S) is weakly adaptable, due to the presence of loops made of adaptive transitions only (namely, t, s and f), which introduce an infinite adaptation phase. Consider now the aia Scheduler(A) and Switch(A) (cf. Fig. 5). Both are weakly adaptable due to the loops made of adaptive transitions only: e.g. in Switch(A) there are cyclic behaviours made of the control actions u and d.
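A simple sufficient check in the spirit of this example is to look for cycles made of adaptation steps only: such a cycle yields an infinite adaptation phase, so the aia cannot be strongly adaptable. The sketch below does exactly this on the AIA class introduced earlier; it is not a full ACTL model checker (in particular, it ignores reachability from the initial state and deadlocks).

```python
def has_control_only_cycle(P):
    """True iff some cycle uses adaptation steps only, i.e. the aia admits an
    infinite adaptation phase (so it cannot be strongly adaptable)."""
    succ = {}
    for (u, a, v) in P.ia.steps:
        if P.is_adaptation((u, a, v)):
            succ.setdefault(u, set()).add(v)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {s: WHITE for s in P.ia.states}

    def dfs(u):
        colour[u] = GREY
        for v in succ.get(u, ()):
            if colour[v] == GREY or (colour[v] == WHITE and dfs(v)):
                return True
        colour[u] = BLACK
        return False

    return any(colour[s] == WHITE and dfs(s) for s in P.ia.states)
```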
Reverse engineering and control synthesis
Control data can also guide reverse engineering activities. For instance, is it possible to decompose an aia S into a basic adaptable component B and a suitable controller M ? We answer in the positive, by presenting first a trivial solution and then a more sophisticated one based on control synthesis.
Basic decomposition. In order to present the basic decomposition we need some definitions. Let P^⊥B denote the operation that, given an automaton P, results in an automaton P^⊥B which is like P but where actions in B ⊆ A have been complemented (inputs become outputs and vice versa). Formally, P^⊥B = ⟨V, V^i, (A^I \ B) ∪ (A^O ∩ B), (A^O \ B) ∪ (A^I ∩ B), T⟩. This operation can be trivially lifted to aia by preserving the set of control actions.
It is easy to see that interface automata have the following property. If P is an interface automaton and O_1, O_2 are sets of actions that partition A^O_P (i.e. A^O_P = O_1 ⊎ O_2), then P is isomorphic to P^⊥O_1 | P^⊥O_2. This property can be exploited to decompose an aia P as M | B by choosing M = P^⊥(A^O_P \ A^C_P) and B = P^⊥(A^O_P ∩ A^C_P).
Example 11. Consider the server Scheduler(S) (cf. Fig. 1). The basic decomposition provides the manager with underlying automaton depicted in Fig. 9 (left) and the basic component depicted in Fig. 9 (right). Vice versa, if the server Switch(S) (cf. Fig. 1) is considered, then the basic decomposition provides the manager with underlying automaton depicted in Fig. 9 (right) and the basic component depicted in Fig. 9 (left).
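The complementation operator and the basic decomposition can be sketched directly on the IA/AIA classes above; the assignment of M and B below follows the choice stated in the text as we read it, and is our own rendering rather than the paper's construction.

```python
def complement(P, B):
    """P^⊥B: actions in B swap direction (inputs become outputs and vice versa)."""
    B = frozenset(B)
    return IA(P.states, P.init,
              frozenset((P.inputs - B) | (P.outputs & B)),
              frozenset((P.outputs - B) | (P.inputs & B)),
              P.steps)

def basic_decomposition(P):
    """Basic decomposition of an aia P into a manager M and a base B:
    M keeps the control outputs, B receives them as inputs."""
    ctrl_out = frozenset(P.control_actions()) & P.ia.outputs
    other_out = P.ia.outputs - ctrl_out
    M = complement(P.ia, other_out)   # P^⊥(A^O_P \ A^C_P)
    B = complement(P.ia, ctrl_out)    # P^⊥(A^O_P ∩ A^C_P)
    return M, B
```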
Decomposition as control synthesis. In the basic decomposition both M and B are isomorphic (and hence of equal size) to the original aia S, modulo the complementation of some actions. It is however possible to apply heuristics in order to obtain smaller non-trivial managers and base components. One possibility is to reduce the set of actions that M needs to observe (its input actions). Intuitively, one can make the choice of ignoring some input actions and collapse the corresponding transitions. Of course, the resulting manager M must be checked for the absence of non-determinism (possibly introduced by the identification of states) but will be a smaller manager candidate. Once a candidate M is chosen we can resort to solutions to the control synthesis problem.
We recall that the synthesis of controllers for interface automata [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is the problem of solving the equation P | Y ⪯ Q for a given system Q and component P, i.e. finding a component Y such that, when composed with P, it results in a system which refines Q. An interface automaton R refines an interface automaton S if (i) A^I_R ⊆ A^I_S, (ii) A^O_R ⊆ A^O_S, and (iii) there is an alternating simulation relation ⪯ from R to S and two initial states u ∈ V^i_R, v ∈ V^i_S such that (u, v) ∈ ⪯ [1]. An alternating simulation relation from an interface automaton R to an interface automaton S is a relation ⪯ ⊆ V_R × V_S such that for all (u, v) ∈ ⪯ and all a ∈ A^O_R(u) ∪ A^I_S(v) we have (i) A^I_S(v) ⊆ A^I_R(u), (ii) A^O_R(u) ⊆ A^O_S(v), and (iii) there are u′ ∈ V_R, v′ ∈ V_S such that (u, a, u′) ∈ T_R, (v, a, v′) ∈ T_S and (u′, v′) ∈ ⪯.
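The refinement check can be sketched as a greatest-fixpoint computation that repeatedly removes violating pairs. The sketch below follows the conditions quoted above literally (reading A^I_R(u) as the input actions enabled at u) and reuses the hypothetical encoding from the earlier sketches; it is not the algorithm of the cited work.

def enabled(aut, kind):
    """Map each state to its enabled input or output actions (kind: 'inputs' or 'outputs')."""
    acts = set(aut[kind])
    table = {s: set() for s in aut["states"]}
    for (s, a, t) in aut["trans"]:
        if a in acts:
            table[s].add(a)
    return table

def refines(r, s):
    """Alternating-simulation based refinement check, as described in the text."""
    if not (set(r["inputs"]) <= set(s["inputs"]) and set(r["outputs"]) <= set(s["outputs"])):
        return False
    in_r, out_r = enabled(r, "inputs"), enabled(r, "outputs")
    in_s, out_s = enabled(s, "inputs"), enabled(s, "outputs")
    succ_r, succ_s = {}, {}
    for (u, a, u2) in r["trans"]: succ_r.setdefault((u, a), set()).add(u2)
    for (v, a, v2) in s["trans"]: succ_s.setdefault((v, a), set()).add(v2)
    rel = {(u, v) for u in r["states"] for v in s["states"]}
    changed = True
    while changed:
        changed = False
        for (u, v) in list(rel):
            ok = in_s[v] <= in_r[u] and out_r[u] <= out_s[v]
            if ok:
                for a in out_r[u] | in_s[v]:
                    # matching a-successors must exist and be related again
                    if not any((u2, v2) in rel
                               for u2 in succ_r.get((u, a), ())
                               for v2 in succ_s.get((v, a), ())):
                        ok = False; break
            if not ok:
                rel.discard((u, v)); changed = True
    return any((u, v) in rel for u in r["init"] for v in s["init"])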
The control synthesis solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] can be lifted to aia in the obvious way. The equation under study in our case is B | M ⪯ P. The usual case is when B is known and M is to be synthesised, but it may also happen that M is given and B is to be synthesised. The solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] can be applied in both cases, since the composition of interface automata is commutative. Our methodology is illustrated with the latter case, i.e. we first fix a candidate M derived from P; then the synthesis method of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is used to obtain B. Our procedure is not always successful: it may be the case that no decomposition is found.
Extracting the adaptation logic. In order to extract a less trivial manager from an aia P we can proceed as follows. We define the bypassing of an action set B ⊆ A in P as P|_{B,≡}, which is obtained from P|_B (the aia obtained from P by deleting the transitions whose action belongs to B) by collapsing the states via the equivalence relation induced by {u ≡ v | (u, a, v) ∈ T_P ∧ a ∈ B}. The idea is then to choose a subset B of A_P \ A^C_P (i.e. containing no control action) that the manager M need not observe. The candidate manager M is then P⊥_{A^O_P \ A^C_P}|_{B,≡}. Of course, if the result is not deterministic, this candidate must be discarded: more observations may be needed.
Extracting the application logic. We are left with the problem of solving the equation B | M ⪯ P for given P and M. It is sufficient to use the solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF], which defines B to be (M | P⊥)⊥, where P⊥ abbreviates P⊥_{A_P}. If the obtained B and M are compatible, the reverse engineering problem has been solved. Otherwise we are guaranteed that no suitable managed component B exists for the candidate manager M, since the solution of [START_REF] Bhaduri | Interface synthesis and protocol conversion[END_REF] is sound and complete; a different choice of control data or hidden actions should then be made.
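One possible reading of the bypass operation in code: delete the transitions labelled in B and merge their endpoints with a union-find structure, then reject the candidate if the quotient is non-deterministic. This is only a sketch under our own encoding; in particular the determinism test is a plain syntactic check.

def bypass(aut, b):
    """Remove the transitions labelled in b and collapse their endpoints."""
    parent = {s: s for s in aut["states"]}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    kept = set()
    for (u, a, v) in aut["trans"]:
        if a in b:
            union(u, v)          # endpoints of a bypassed transition become equivalent
        else:
            kept.add((u, a, v))
    states = {find(s) for s in aut["states"]}
    trans = {(find(u), a, find(v)) for (u, a, v) in kept}
    deterministic = len({(u, a) for (u, a, _) in trans}) == len(trans)
    quotient = {
        "states": states,
        "init": {find(s) for s in aut["init"]},
        "inputs": set(aut["inputs"]) - b,
        "outputs": set(aut["outputs"]) - b,
        "trans": trans,
        "control": set(aut.get("control", ())) - b,
    }
    return quotient, deterministic   # a non-deterministic candidate must be discarded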
Example 12. The manager Scheduler(S)⊥_{u,d}|_{{u,d},≡} (see Fig. 10, left) and the manager Switch(S)⊥_{f,s}|_{{s},≡} (see Fig. 10, right) are obtained by removing some observations. For the former we obtain no solution, while for the latter we obtain the same base component as in the basic decomposition (Fig. 9, left).
Related Work
Our proposal for the formalisation of self-adaptive systems takes inspiration from many former works in the literature. Due to lack of space we focus our discussion on the most relevant related works only.
S[B] systems [START_REF] Merelli | A multi-level model for self-adaptive systems[END_REF] are a model for adaptive systems based on 2-layered transition systems. The base transition system B defines the ordinary (and adaptable) behaviour of the system, while S is the adaptation manager, which imposes some regions (subsets of states) and transitions between them (adaptations). Further constraints are imposed by S via adaptation invariants. Adaptations are triggered in order to change region (e.g. in case of local deadlock). Weak and strong adaptability formalisations (cast in our setting in Sect. 4.2) are introduced.
Mode automata [START_REF] Maraninchi | Mode-automata: About modes and states for reactive systems[END_REF] have also been advocated as a suitable model for adaptive systems. For example, the approach of [START_REF] Zhao | Model checking of adaptive programs with modeextended linear temporal logic[END_REF] represents adaptive systems with two layers: the functional layer, which implements the application logic and is represented by state machines called adaptable automata, and the adaptation layer, which implements the adaptation logic and is represented by a mode automaton. Adaptation here is the change of mode. The approach considers three different kinds of specification properties (cf. Sect. 4.2): local, adaptation, and global. An extension of linear-time temporal logic (LTL) called mLTL is used to express them.
The most relevant difference between aia and S[B] system or Mode automata is that our approach does not impose a two-layered asymmetric structure: aia can be composed at will, possibly forming towers of adaptation [START_REF] Bruni | A conceptual framework for adaptation[END_REF] in the spirit of the MAPE-K reference architecture, or mutual adaptation structures. In addition, each component of an adaptive system (be it a manager or a managed component, or both) is represented with the same mathematical object, essentially a well-studied one (i.e. interface automata) decorated with some additional information (i.e. control propositions).
Adaptive Featured Transition Systems (A-FTS) have been introduced in [START_REF] Cordy | Model checking adaptive software with featured transition systems[END_REF] for the purpose of model checking adaptive software (with a focus on software product lines). A-FTS are a sort of transition system whose states are composed of the local state of the system, its configuration (the set of active features) and the configuration of the environment. Transitions are decorated with executability conditions on the valid configurations. Adaptation corresponds to reconfiguration (changing the system's features); hence, in terms of our white-box approach, system features play the role of control data. The authors introduce the notion of resilience as the ability of the system to satisfy properties despite environmental changes (which essentially coincides with the notion of black-box adaptability of [START_REF] Hölzl | Towards a system model for ensembles[END_REF]). Properties are expressed in AdaCTL, a variant of the computation-tree temporal logic CTL. Contrary to aia, which are equipped with suitable composition operations, A-FTS are seen in [START_REF] Cordy | Model checking adaptive software with featured transition systems[END_REF] as monolithic systems.
Concluding Remarks
We presented a novel approach for the formalisation of self-adaptive systems, based on the notion of control propositions (and control actions). Our proposal has been presented by instantiating it on a well-known model for component-based systems, interface automata, but it can be applied to other foundational formalisms as well. In particular, we would like to verify its suitability for basic specification formalisms of concurrent and distributed systems such as process calculi. As future work, we envision the investigation of more specific notions of refinement, taking into account the possibility of relating systems with different kinds of adaptability, and of general mechanisms for control synthesis that can also account for non-deterministic systems. Furthermore, our formalisation can be the basis for reconciling the white-box and black-box perspectives on adaptation under the same hood, since models of the latter are usually based on variants of transition systems or automata. For instance, control synthesis techniques such as those used to modularise a self-adaptive system (white-box adaptation), or model checking techniques for game models (e.g. [START_REF] Alur | Alternating-time temporal logic[END_REF]), can be used to decide whether, and to what extent, a system is able to adapt so as to satisfy its requirements despite the environment (black-box adaptation).
Fig. 1. Is it self-adaptive?
Fig. 2. Three interface automata: Mac (left), Exe (centre), and Que (right).
Fig. 3. The product Mac ⊗ Exe ⊗ Que (left) and the composition Mac | Exe | Que (right).
Fig. 4. An environment.
Example 5. Consider the product Mac ⊗ Exe ⊗ Que depicted in Fig. 3 (left). All states of the form s{t}[t] and s{t}[tt] are incompatible, and states D{}[tt] and U{}[tt] are not compatible either, since no environment can prevent them from entering the incompatible states. The remaining states are all compatible. The composition Mac | Exe | Que is the interface automaton depicted in Fig. 3 (right).
Fig. 5. An adaptable server (left) and its components (right).
Fig. 6. A controller.
Fig. 7. MAPE-K loop.
Fig. 8. MAPE-K actions.
aia can be composed so as to adhere to the MAPE-K reference model, as schematised in Fig. 8. First, the autonomic manager component M and the managed component B have their functional input and output actions, respectively I ⊆ A^I_M, O ⊆ A^O_M, I′ ⊆ A^I_B, O′ ⊆ A^O_B, such that no dual action is shared (i.e. comm(B, M) ∩ (I ∪ I′) = ∅) but inputs may be shared (i.e. possibly I ∩ I′ ≠ ∅). The managed component is controllable and hence has a distinguished set of control actions C = A^C_B. The duals of such control actions, i.e. the output actions of M that synchronise with the input control actions of B, can be regarded as effectors F ⊆ A^O_M, i.e. output actions used to trigger adaptation. In addition, M also has sensor input actions S ⊆ A^I_M to sense the status of B, notified via emit output actions E ⊆ A^O_B. Clearly, the introduced sets partition inputs and outputs, i.e. I ⊎ S = A^I_M, O ⊎ F = A^O_M, E ⊎ O′ = A^O_B and I′ ⊎ C = A^I_B.
In the basic decomposition, the manager and the base component are identical to the original system and only differ in their interface. All output control actions are governed by the manager M and become inputs in the base component B. Outputs that are not control actions become inputs in the manager. This decomposition has some interesting properties: B is fully controllable and, if P is fully self-adaptive, then M completely controls B.
Fig. 9. A basic decomposition.
Fig. 10. Bypassed managers for Scheduler(S) (left) and Switch(S) (right).
Research partially supported by the EU through the FP7-ICT Integrated Project 257414 ASCEns (Autonomic Service-Component Ensembles). | 43,222 | [
"1003772",
"1003773",
"894474",
"1003774",
"1003775"
] | [
"366408",
"366408",
"366408",
"301837",
"301837"
] |
01485982 | en | [ "info" ] | 2024/03/04 23:41:48 | 2012 | https://inria.hal.science/hal-01485982/file/978-3-642-37635-1_8_Chapter.pdf | Andrea Corradini
Reiko Heckel
Frank Hermann
Susann Gottmann
Nico Nachtigall
Transformation Systems with Incremental Negative Application Conditions
Keywords: graph transformation, concurrent semantics, negative application conditions, switch equivalence
Introduction
Graph Transformation Systems (GTSs) are an integrated formal specification framework for modelling and analysing structural and behavioural aspects of systems. The evolution of a system is modelled by the application of rules to the graphs representing its states and, since typically such rules have local effects, GTSs are particularly suitable for modelling concurrent and distributed systems where several rules can be applied in parallel. Thus, it is no surprise that a large body of literature is dedicated to the study of the concurrent semantics of graph transformation systems [START_REF] Corradini | Graph processes[END_REF][START_REF] Baldan | Concurrent Semantics of Algebraic Graph Transformations[END_REF][START_REF] Baldan | Processes for adhesive rewriting systems[END_REF].
The classical results include, among others, the definitions of parallel production and shift equivalence [START_REF] Kreowski | Is parallelism already concurrency? part 1: Derivations in graph grammars[END_REF], exploited in the Church-Rosser and Parallelism theorems [START_REF] Ehrig | Introduction to the Algebraic Theory of Graph Grammars (A Survey)[END_REF]: briefly, derivations that differ only in the order in which independent steps are applied are considered to be equivalent. Several years later, taking inspiration from the theory of Petri nets, deterministic processes were introduced [START_REF] Corradini | Graph processes[END_REF]; they are a special kind of GTSs, endowed with a partial order, and can be considered as canonical representatives of shift-equivalence classes of derivations. Next, the unfolding of a GTS was defined as a typically infinite non-deterministic process which summarises all the possible derivations of the GTS [START_REF] Baldan | Unfolding semantics of graph transformation[END_REF]. Recently, all these concepts have been generalised to transformation systems based on (M-)adhesive categories [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF][START_REF] Corradini | Subobject Transformation Systems[END_REF][START_REF] Baldan | Unfolding grammars in adhesive categories[END_REF].
In this paper, we consider the concurrent semantics of GTSs that use the concept of Negative Application Conditions (NACs) for rules [START_REF] Habel | Graph Grammars with Negative Application Conditions[END_REF], which is widely used in applied scenarios. A NAC allows one to describe a sort of forbidden context, whose presence around a match inhibits the application of the rule.
These inhibiting effects introduce several dependencies among transformation steps that require a shift of perspective from a purely local to a more global point of view when analysing such systems.
Existing contributions that generalise the concurrent semantics of GTSs to the case with NACs [START_REF] Lambers | Parallelism and Concurrency in Adhesive High-Level Replacement Systems with Negative Application Conditions[END_REF][START_REF] Ehrig | Parallelism and Concurrency Theorems for Rules with Nested Application Conditions[END_REF] are not always satisfactory. While the lifted Parallelism and Concurrency Theorems provide adequate constructions for composed rules specifying the effect of concurrent steps, a detailed analysis of the possible interleavings of a transformation sequence reveals problematic effects caused by the NACs. As shown in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF], unlike the case without NACs, the notion of sequential independence among derivation steps is not stable under switching. More precisely, it is possible to find a derivation made of three direct transformations s = (s1; s2; s3) where s2 and s3 are sequentially independent, and a derivation s′ = (s2; s3; s1) that is shift equivalent to s (obtained with the switchings 1 ↔ 2 and 2 ↔ 3), but where s2 and s3 are sequentially dependent on each other. This is a serious problem from the concurrent semantics point of view because, for example, the standard colimit technique [START_REF] Corradini | Graph processes[END_REF] used to generate the process associated with a derivation does not work properly, since the causalities between steps do not form a partial order in general.
In order to address this problem, we introduce a restricted kind of NACs, based on incremental morphisms [START_REF] Heckel | DPO Transformation with Open Maps[END_REF]. We first show that sequential independence is invariant under shift equivalence if all NACs are incremental. Next we analyse to what extent systems with general NACs can be transformed into systems with incremental NACs. For this purpose, we provide an algorithmic construction INC that takes as input a GTS and yields a corresponding GTS with incremental NACs only. We show that the transformation system obtained via INC simulates the original one, i.e., each original transformation sequence induces one in the derived system. Thus, this construction provides an over-approximation of the original system. We also show that this simulation is even a bisimulation if the NACs of the original system are obtained as colimits of incremental NACs.
In the next section we review main concepts for graph transformation systems. Sect. 3 discusses shift equivalence and the problem that sequential independence with NACs is not stable in general. Thereafter, Sect. 4 presents incremental NACs and shows the main result on preservation of independence.
Sect. 5 presents the algorithm for transforming systems with general NACs into those with incremental ones and shows under which conditions the resulting system is equivalent. Finally, Sect. 6 provides a conclusion and sketches future developments. The proofs of the main theorems are included in the paper.
Basic Definitions
In this paper, we use the double-pushout approach [START_REF] Ehrig | Graph grammars: an algebraic approach[END_REF] to (typed) graph transformation, occasionally with negative application conditions [START_REF] Habel | Graph Grammars with Negative Application Conditions[END_REF]. However, we will state all definitions and results at the level of adhesive categories [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF]. A category is adhesive if it is closed under pushouts along monomorphisms (hereafter monos) as well as under pullbacks, and if all pushouts along a mono enjoy the van Kampen property. That means that when such a pushout is the bottom face of a commutative cube such as in the left of Fig. 1, whose rear faces are pullbacks, the top face is a pushout if and only if the front faces are pullbacks. In any adhesive category we have uniqueness of pushout complements along monos, monos are preserved by pushouts, and pushouts along monos are also pullbacks. As an example, the category of typed graphs for a fixed type graph TG is adhesive [START_REF] Ehrig | Fundamentals of Algebraic Graph Transformation[END_REF]. A direct transformation G ⇒p,m H from G to H exists if a double-pushout (DPO) diagram can be constructed as in the right of Fig. 1, where (1) and (2) are pushouts.
(The left of Fig. 1 shows the van Kampen cube; the right shows the DPO diagram for a rule L ← K → R over G ← D → H, with pushouts (1) and (2).)
The applicability of rules can be restricted by specifying negative conditions requiring the non-existence of certain structures in the context of the match. A (negative) constraint on an object L is a morphism n : L → L̂. A morphism m : L → G satisfies n (written m |= n) iff there is no mono q : L̂ → G such that n; q = m. A negative application condition (NAC) on L is a set of constraints N. A morphism m : L → G satisfies N (written m |= N) if and only if m satisfies every constraint in N, i.e., ∀n ∈ N : m |= n.
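For finite graphs this satisfaction check can be phrased as a search for an injective extension of the match. The brute-force Python sketch below (our own encoding: a graph is a set of nodes plus a set of directed edges; labels and typing are ignored) is exponential and only meant to illustrate the definition.

from itertools import permutations

def satisfies_constraint(match, constraint, graph):
    """m |= n: no injective extension of `match` maps the whole constraint graph into `graph`.
    A constraint is (lhs_nodes, nac_nodes, nac_edges) with lhs_nodes contained in nac_nodes."""
    lhs_nodes, nac_nodes, nac_edges = constraint
    extra = [x for x in nac_nodes if x not in lhs_nodes]
    used = set(match.values())
    candidates = [g for g in graph["nodes"] if g not in used]
    for image in permutations(candidates, len(extra)):
        q = dict(match)
        q.update(zip(extra, image))          # injective by construction
        if all((q[u], q[v]) in graph["edges"] for (u, v) in nac_edges):
            return False                     # forbidden context found: constraint violated
    return True

def satisfies_nac(match, constraints, graph):
    return all(satisfies_constraint(match, c, graph) for c in constraints)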
All along the paper we shall consider only monic matches and monic constraints: possible generalisations are discussed in the concluding section. A graph transformation system (GTS) G consists of a set of rules, possibly with NACs. A derivation in G is a sequence of direct transformations s = (G0 ⇒p1,m1 G1 ⇒p2,m2 · · · ⇒pn,mn Gn) such that all pi are in G; we denote it also as s = s1; s2; . . . ; sn, where sk = (Gk−1 ⇒pk,mk Gk) for k ∈ {1, . . . , n}.
Independence and Shift Equivalence
Based on the general framework of adhesive categories, this section recalls the relevant notions of sequential independence and shift equivalence and illustrates the problem that independence is not stable under switching in the presence of NACs. In the DPO approach, two consecutive direct transformations s1 = G0 ⇒p1,m1 G1 and s2 = G1 ⇒p2,m2 G2, as in Fig. 2, are sequentially independent if there exist morphisms i : R1 → D2 and j : L2 → D1 such that j; r*1 = m2 and i; l*2 = m*1.
In this case, using the local Church-Rosser theorem [8], it is possible to construct a derivation s′ = G0 ⇒p2,m′2 G′1 ⇒p1,m′1 G2 where the two rules are applied in the opposite order. We write s1; s2 ∼sh s′ to denote this relation. Given a derivation s = s1; s2; . . . ; si; si+1; . . . ; sn containing sequentially independent steps si and si+1, we denote by s′ = switch(s, i, i + 1) the equivalent derivation s′ = s1; s2; . . . ; s′i; s′i+1; . . . ; sn, where si; si+1 ∼sh s′i; s′i+1. Shift equivalence ≡sh over derivations of G is defined as the transitive and context closure of ∼sh, i.e., the least equivalence relation containing ∼sh and such that if s ≡sh s′ then s1; s; s2 ≡sh s1; s′; s2 for all derivations s1 and s2.
Example 1 (context-dependency of independence with NACs). Fig. 3 presents three transformation sequences starting from graph G0 via rules p1, p2 and p3. Rule p3 has a NAC, which is indicated by dotted lines (one node and two edges).
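Treating direct transformations as opaque steps and sequential independence as a given predicate, switching and the induced equivalence class can be sketched as below. Note that this context-insensitive view of independence is exactly what Example 1 shows to be unsound for general NACs; it is essentially justified when there are no NACs or, by the main result of this paper, when all NACs are incremental. The sketch is ours and does not construct the actual switched matches.

def switch(deriv, i, independent):
    """Swap steps i and i+1 (1-based), if they are sequentially independent."""
    j = i - 1
    if not independent(deriv[j], deriv[j + 1]):
        raise ValueError("steps are not sequentially independent")
    out = list(deriv)
    out[j], out[j + 1] = out[j + 1], out[j]
    return out

def shift_equivalence_class(deriv, independent):
    """Enumerate all interleavings reachable by repeatedly switching independent steps."""
    seen, todo = {tuple(deriv)}, [list(deriv)]
    while todo:
        d = todo.pop()
        for i in range(1, len(d)):
            if independent(d[i - 1], d[i]):
                e = d[:i - 1] + [d[i], d[i - 1]] + d[i + 1:]
                if tuple(e) not in seen:
                    seen.add(tuple(e)); todo.append(e)
    return seen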
(Diagram of Fig. 2: two consecutive DPO diagrams with NACs N1 and N2, sharing the intermediate object G1.)
In the first sequence s = G0 ⇒p1,m1 G1 ⇒p2,m2 G2 ⇒p3,m3 G3 = (s1; s2; s3), shown in the top of Fig. 3, steps s1 and s2 are sequentially independent, and so are s2 and s3. After switching the first and the second step we derive s′ = switch(s, 1, 2) = (s2; s1; s3) (middle of Fig. 3), so that both sequences are shift equivalent (s ≡sh s′). Since s1 and s3 are independent, we can perform a further switch s″ = switch(s′, 2, 3) = (s2; s3; s1), shown in the bottom sequence of Fig. 3. However, steps s2 and s3 are dependent on each other in s″, because the match for rule p3 will not satisfy the corresponding NAC for a match into G0. Hence, independence can change depending on the derivation providing the context, even if derivations are shift equivalent.
In this section we show that, under certain assumptions on the NACs of the rules, the problem identified in Ex. 1 does not occur. Intuitively, for each constraint n : L → L̂ in a NAC we will require that it is incremental, i.e., that L̂ does not extend L in two (or more) independent ways. Therefore, if there are two different ways to decompose n, one has to be an extension of the other. Incremental arrows have been considered in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF] for a related problem: here we present the definition for monic arrows only, because along the paper we stick to monic NACs.
Definition 1 (incremental monos and NACs). A mono f : A → B is called incremental if, for any pair of decompositions g1; g2 = f = h1; h2 through objects O and O′, where all morphisms are monos, there is either a mediating morphism o : O → O′ or o′ : O′ → O such that the resulting triangles commute. A monic NAC N over L is incremental if each constraint n : L → L̂ ∈ N is incremental.
Example 2 (Incremental NACs). The left diagram below shows that the negative constraint n3 : L3 → L̂3 ∈ N3 of rule p3 of Ex. 1 is not incremental, because L̂3 extends L3 in two independent ways.
Intuitively, the problem stressed in Ex. 1 is due to the fact that rules p1 and p2 delete from G0 two independent parts of the forbidden context for p3. Therefore p3 depends on the firing of p1 or on the firing of p2, while p1 and p2 are independent. This form of or-causality from sets of independent events is known to be a source of ambiguities in the identification of a reasonable causal ordering among the involved events, as discussed in [START_REF] Langerak | Causal ambiguity and partial orders in event structures[END_REF]. The restriction to incremental NACs that we consider here is sufficient to avoid such problematic situations (as proved in the main result of this section), essentially because if both p1 and p2 delete from G0 part of an incremental NAC, then they cannot be independent, since the NAC cannot be factorized in two independent ways. Incrementality of monos enjoys some nice properties: it is preserved by decomposition of arrows, and it is both preserved and reflected by pushouts along monos, as stated in the next propositions.
Proposition 1 (decomposition of monos preserves incrementality). Let f : A → B be an incremental arrow and let f = g; h with monos g : A → C and h : C → B. Then both g and h are incremental.
Proposition 2 (preservation and reflection of incrementality by POs). Let B → D ← C be the pushout of the monic arrows B ← A → C, with f : A → C and f* : B → D the corresponding sides. Then f is incremental if and only if f* is incremental.
We come now to the main result of this section: if all NACs are incremental, then sequential independence of direct transformations is invariant with respect to the switch of independent steps.
Theorem 1 (invariance of independence under shift equivalence). Assume transformation sequences s = G0 ⇒p1,m1 G1 ⇒p2,m2 G2 ⇒p3,m3 G3 and s′ = G0 ⇒p2,m′2 G′1 ⇒p3,m′3 G′2 ⇒p1,m′1 G3, using rules p1, p2, p3 with incremental NACs only, such that s ≡sh s′ with s′ = switch(switch(s, 1, 2), 2, 3). Then G1 ⇒p2,m2 G2 and G2 ⇒p3,m3 G3 are sequentially independent if and only if G0 ⇒p2,m′2 G′1 and G′1 ⇒p3,m′3 G′2 are. Proof. Let N1, N2
G 0 p1,m1 =⇒ G 1 p2,m2 =⇒ G 2 and G 0 p2,m 2 =⇒ G 1 p1,m 1 =⇒ G 2 according to the proof of the G0 G 1 p 1 ,m 1 G1 p 2 ,m 2 + 3 G2 ↓ L1 m 1 o o v v (3) K1 l 1 O O r 1 o o v v (6) R1 o o v v L3 m 3 c c èe k k G 1 (2) D 2 O O (5) G2 (8) R2 O O F F D 1 / / o o (1) D * 2 / / o o O O (4) D2 / / o o (7) K2 l 2 / / r 2 o o O O F F G0 D1 O O G1 L2 m 2 O O E E (a) Match m 3 : L3 → G0 O * | | ( ( O e 1 ( ( i 1 1 O e 2 } } L3 q L3 / / n / / o * 2 2 m 3 2 2 D * 2 } } ( ( (1)
D1
( ( =⇒ G 3 are independent. Then, there exists
D 1 } } G0 (b) Induced morphism L3 → D 1
L 3 → D 2 commuting with m 3 such that m * 3 = L 3 → D 2 → G 1 satises N 3 . Also, G 1 p1,m 1 =⇒ G 2 and G 2 p3,m3
=⇒ G 3 are independent because equivalence of s and s requires to switch them, so there exists
L 3 → D 2 commuting with m 3 such that m 3 = L 3 → D 2 → G 1 satises N 3 .
There exists a morphism L 3 → D * 2 commuting the resulting triangles induced by pullback [START_REF] Corradini | Subobject Transformation Systems[END_REF]. To show that m
3 = L 3 → D * 2 → D 1 → G 0 = L 3 → D * 2 → D 1 → G 0 sat- ises N 3 ,
by way of contradiction, assume n : L 3 → L3 ∈ N 3 with morphism q : L3 → G 0 commuting with m 3 . We can construct the cube in Fig. 4 We show that e 2 : O ↔ L3 is an isomorphism. First of all, e 2 is a mono by pullback (FR) and mono D 1 → G 0 . Pushout (TOP) implies that the morphism pair (e 1 , e 2 ) with e 1 : O → L3 and e 2 : O → L3 is jointly epimorphic. By com- mutativity of i; e 2 = e 1 , we derive that also (i; e 2 , e 2 ) is jointly epi. By denition of jointly epi, we have that for arbitrary (f, g) it holds that i; e 2 ; f = i; e 2 ; g and e 2 ; f = e 2 ; g implies f = g. This is equivalent to e 2 ; f = e 2 ; g implies f = g. Thus, e 2 is an epimorphism. Together with e 2 being a mono (see above) we conclude that e 2 is an isomorphism, because adhesive categories are balanced [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF]. This means, there exists a mediating morphism L3 → O → D 1 which contradicts the earlier assumption that
L 3 → D * 2 → D 1 satises N 3 .
Example 3. If in Fig. 3 we replace rule p3 by rule p4 of Fig. 5, which has an incremental NAC, so that s = G0 ⇒p1,m1 G1 ⇒p2,m2 G2 ⇒p4,m4 G3 = (s1; s2; s4), then the problem described in Ex. 1 does not arise anymore, because s2 and s4 are not sequentially independent, and they remain dependent in the sequence s″ = s2; s4; s1.
Let us start with some auxiliary technical facts that hold in adhesive categories and that will be exploited to show that the compilation algorithm terminates, which requires some ingenuity because sometimes a single constraint can be compiled into several ones.
Definition 2 (finitely decomposable monos). A mono f : A → B is called at most k-decomposable, with k ≥ 0, if for any sequence of monos f1; f2; · · · ; fh = f in which no fi is an iso, it holds that h ≤ k. Mono f is called k-decomposable if it is at most k-decomposable and either k = 0 and f is an iso, or there is a mono-decomposition as above with h = k. A mono is finitely decomposable if it is k-decomposable for some k ∈ N. A 1-decomposable mono is called atomic.
From the definition it follows that all and only the isos are 0-decomposable. Furthermore, any atomic (1-decomposable) mono is incremental, but the converse is false in general. For example, in Graph the mono {•} → {• → •} is incremental but not atomic. Actually, it can be shown that in Graph all incremental monos are at most 2-decomposable, but there exist adhesive categories with k-decomposable incremental monos for any k ∈ N.
Furthermore, every nitely decomposable mono f : A B can be factorized as A K g B where g is incremental and maximal in a suitable sense.
Proposition 3 (decomposition and incrementality). Let f : A B be nitely decomposable. Then there is a factorization A K g B of f such that g is incremental and there is no
K such that f = A K K g B,
where K K is not an iso and K K g B is incremental. In this case we call g maximally incremental w.r.t. f .
Proposition 4 (preservation and reection of k-decomposability).
Let the square to the right be a pushout and a be a mono. Then b is a k-decomposable mono if and only if d is a kdecomposable mono.
A (1) / / a / / b B d C / / c / / D
In the following construction of incremental NACs starting from general ones, we will need to consider objects that are obtained starting from a span of monos, like pushout objects, but that are characterised by weaker properties. We describe now how to transform a rule p with arbitrary nitely decomposable constraints into a set of rules with simpler constraints: this will be the basic step of the algorithm that will compile a set of rules with nitely decomposable NACs into a set of rules with incremental NACs only.
B ' ' d ' ' . . ( ( A / / h / / 8 8 a 8 8 & & b & & A O O a O O b D / / g / / G C 7
Denition 4 (compiling a rule with NAC). Let p = L K R, N be a rule with NAC, where the NAC N = {n i :
L L i | i ∈ [1, s]}
is a nite set of nitely decomposable monic constraints and at least one constraint, say n j , is not incremental. Then we dene the set of rules with NACs INC (p, n j ) in the following way. L j be a decomposition of n j such that k is maximally incremental w.r.t. n j (see Prop. 3). Then INC (p, n j ) = {p , p j }, where: 1. p is obtained from p by replacing constraint n j : L L j with constraint
n j : L M j . 2. p j = M j K R , N , where M j K R is obtained by apply- ing rule L K R to match n j : L M j , as in the next diagram. L n j (1) K (2) o o l o o / / r / / R M j K o o l * o o / / r * / / R Furthermore, N is a set of constraints N = N 1 ∪ • • • ∪ N s obtained as follows. (1) N j = {k : M j L j }. (2) For all i ∈ [1, s] \ {j}, N i = {n ih : M j L ih | L i L ih M j is a quasi-pushout of L i L M j }.
Before exploring the relationship between p and INC (p, n j ) let us show that the denition is well given, i.e., that in Def. 4(b).2 the applicability of
L K R to match n j : L M j is guaranteed. K 5 5 ( ( / / / / (1) X (2)
/ / / / • L / / n j / / ( ( nj 5 5 M j / / / / L j
In fact, by the existence of a pushout complement of K L nj L j we can build a pushout that is the external square of the diagram on the right; next we build the pullback (2) and obtain K X as mediating morphism. Since (1) + ( 2) is a pushout, (2) is a pullback and all arrows are mono, from Lemma 4.6 of [START_REF] Lack | Adhesive and quasiadhesive categories[END_REF] we have that (1) is a pushout, showing that K L M j has a pushout complement.
The following result shows that INC (p, n j ) can simulate p, and that if the decomposition of constraint n j has a pushout complement, then also the converse is true.
{Li} / q ) ) L o o {n i } o o m (1)
K (2) o o l o o / / r / / R m * G D o o l * o o / / r
= id {• 1 } , n : {• 1 } {• 1 → • 2 → • 3
} be a rule (the identity rule on graph {• 1 }) with a single negative constraint n, which is not incremental. Then according to Def. 4 we obtain
INC (p, n) = {p , p 1 } where p = id {• 1 } , n : {• 1 } {• 1 → • 2 } and p 1 = id {• 1 →• 2 } , n : {• 1 → • 2 } {• 1 → • 2 → • 3 } . Note
G = {• 2 ← • 1 → • 2 → • 3 }, and let x be the inclusion morphism from {• 1 → • 2 } to G. Then G p1,x =⇒ G, but the induced inclusion match m : {• 1 } → G does not satisfy constraint n.
Starting with a set of rules with arbitrary (but finitely decomposable) NACs, the construction of Def. 4 can be iterated in order to get a set of rules with incremental NACs only, which we shall denote INC(P). As expected, INC(P) simulates P, and they are equivalent if all NACs are obtained as colimits of incremental constraints.
Definition 5 (compiling a set of rules). Let P be a finite set of rules with NACs, such that all constraints in all NACs are finitely decomposable. Then the set INC(P) is obtained by the following procedure. ⇒ H for all G. 4. Suppose that each constraint of each rule in P is the colimit of incremental monos, i.e., for each constraint L → L̂, L̂ is the colimit object of a finite diagram {L → Li}i∈I of incremental monos. Then P and INC(P) are equivalent, i.e., we also have that G ⇒INC(P) H implies G ⇒P H.
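The driver loop of Definition 5 can be mirrored in Python if the category-theoretic ingredients are abstracted into callbacks. The rule representation and the callback names below are our assumptions: inc_step stands for the construction INC(p, n) of Definition 4, is_incremental for the test of Definition 1, and decomposability for the k of Definition 2; the selection of a constraint of maximal decomposability matches the pseudocode of Definition 5 and underlies the termination measure (the degree in N^k) used in the proof.

def compile_incremental(rules, is_incremental, decomposability, inc_step):
    """Compute INC(P): iterate the basic compilation step until every constraint is incremental."""
    rules = set(rules)
    while True:
        pending = [(r, c) for r in rules
                   for c in r.constraints if not is_incremental(c)]
        if not pending:
            return rules
        # pick a non-incremental constraint of maximal decomposability, as in Def. 5
        rule, c = max(pending, key=lambda rc: decomposability(rc[1]))
        rules = (rules - {rule}) | set(inc_step(rule, c))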
Proof. Point 2 is obvious, given the guard of the while loop, provided that it terminates. Also the proofs of points 3 and 4 are pretty straightforward, as they follow by repeated applications of Prop. 6. The only non-trivial proof is that of termination.
To this aim, let us use the following lexicographic ordering, denoted N k , for a xed k ∈ N, that is obviously well-founded. The elements of N k are sequences of natural numbers of length k, like σ = σ 1 σ 2 . . . σ k . The ordering is dened as σ < σ i σ h < σ h , where h ∈ [1, k] (b). In this case rule p is obtained from p by replacing the selected constraint with one that is at most ( k -1)-decomposable. Furthermore, each other constraint n i is replaced by a set of constraints, obtained as quasi-pushouts of n i and n j . If n i is incremental, so are all the new constraints obtained as quasi-pushouts, by Prop. 5(4), and thus they don't contribute to the degree. If instead n i is non-incremental, then it is h-decomposable for h ≤ k, by denition of k. Then by Prop. 5(3) all constraints obtained as proper quasi-pushouts are at most (h -1)-decomposable, and only one (obtained as a pushout) will be h-decomposable.
Discussion and Conclusion
In our quest for a stable notion of independence for conditional transformations, we have defined a restriction to incremental NACs that guarantees this property (Thm. 1). Incremental NACs turn out to be quite powerful, as they are sufficient for several case studies of GTSs. In particular, the well-studied model transformation from class diagrams to relational database models [START_REF] Hermann | Ecient Analysis and Execution of Correct and Complete Model Transformations Based on Triple Graph Grammars[END_REF] uses incremental NACs only. In an industrial application for translating satellite software (pages 14-15 in [START_REF] Ottersten | Interdisciplinary Centre for Security, Reliability and Trust -Annual Report 2011[END_REF]), we used a GTS with more than 400 rules, where only 2 of them have non-incremental NACs. Moreover, the non-incremental NACs could also have been avoided by some modifications of the GTS. Incremental morphisms have been considered recently in [START_REF] Heckel | DPO Transformation with Open Maps[END_REF], in a framework different from but related to ours, where, by requiring that matches are open maps, one can restrict the applicability of transformation rules without using NACs.
We have also presented a construction that compiles a set of rules with general (finitely decomposable) NACs into a set of rules with incremental NACs only. For NACs that are obtained as colimits of incremental ones, this compilation yields an equivalent system, i.e., for every transformation in the original GTS there exists a compatible step in the compiled one and vice versa (Thm. 2), and therefore the rewrite relation on graphs stays the same. In the general case, the compiled system provides an over-approximation of the original GTS, which nevertheless can still be used to analyse the original system.
In fact, our intention is to define a stable notion of independence on transformations with general NACs. Using the compilation, we can declare a two-step sequence independent if this is the case for all of its compilations or, more liberally, for at least one of them. Both relations should lead to notions of equivalence that are finer than the standard shift equivalence, but that behave well thanks to Thm. 1. Moreover, independence should be expressed directly on the original system, rather than via compilation. Such a revised relation will be the starting point for developing a more advanced theory of concurrency for conditional graph transformations, including processes and unfoldings of GTSs.
The main results in this paper can be applied to arbitrary adhesive transformation systems with monic matches. However, in some cases (as for attributed graph transformation systems) the restriction to injective matches is too strict (rules contain terms that may be mapped by the match to equal values). As shown in [START_REF] Hermann | Analysis of Permutation Equivalence in Madhesive Transformation Systems with Negative Application Conditions[END_REF], the concept of NAC-schema provides a sound and intuitive basis for the handling of non-injective matches for systems with NACs. We are confident that an extension of our results to general matches is possible based on the concept of NAC-schema.
Another interesting topic that we intend to study is the complexity of the algorithm of Def. 5, and the size of the set of rules with incremental constraints, INC(P), that it generates. Furthermore, we plan to extend the presented results for shift equivalence to the notion of permutation equivalence, which is coarser and still sound according to [START_REF] Hermann | Analysis of Permutation Equivalence in Madhesive Transformation Systems with Negative Application Conditions[END_REF]. Finally, we also intend to address the problem identified in Ex. 1 at a more abstract level, by exploiting the event structures with or-causality of events that are discussed in depth in [START_REF] Langerak | Causal ambiguity and partial orders in event structures[END_REF].
Fig. 1. van Kampen condition (left) and DPO diagram (right).
Fig. 2. Sequential independence.
Fig. 3. Independence of p2 and p3 is not preserved by switching with p1.
two independent ways: by the loop on 1 in O 3 , and by the outgoing edge with one additional node 2 in O 3 . Indeed, there is no mediating arrow from O 3 to O 3 or vice versa relating these two decompositions. Instead the constraint n 4 : L 4 → L4 ∈ N 4 of rule p 4 of Fig. 5 is incremental: it can be decomposed in only one non-trivial way, as shown in the top of the right diagram, and for any other possible decomposition one can nd a mediating morphism (as shown for one specic case).
f : A B be an incremental arrow and f = g; h with monos g : A C andh : C B.Then both h and g are incremental.
and N 3 3 satisfy N 3 ,p2,m 2 =⇒ G 1 and G 1 p3,m 3 =⇒ G 2 .
3332132 be the NACs of p 1 , p 2 and p 3 , respectively. Due to sequential independence of G 1 p2,m2 =⇒ G 2 and G 2 p3,m3 =⇒ G 3 , match m 3 : L 3 → G 2 extends to a match m * 3 : L 3 → G 1 satisfying N 3 . Using that both m 3 and m * we show below that the match m 3 : L 3 → G 0 , that exists by the classical local Church-Rosser, satises N 3 , too. This provides one half of the independence of G 0 By reversing the two horizontal sequences in the diagram above with the same argument we obtain the proof for the other half, i.e., that the comatch of p 2 into G 2 satises the equivalent right-sided NAC of N 2 , which is still incremental thanks to Prop. 2. Finally reversing the vertical steps yields the reverse implication, that independence of the upper sequence implies independence of the lower. The diagram in Fig. 4(a) shows a decomposition of the transformations
Fig. 4. Constructions for the proof of Thm. 1.
Matches L 3 → D * 2 → D 1 and L 3 → D * 2 → D 1 satisfy N 3 because they are prexes of matches m * 3 and m 3 , respectively; indeed, it is easy to show that m; m |= n ⇒ m |= n for injective matches m, m and constraint n.
(b) as follows. The bottom face is pushout (1), faces front left (FL), front right (FR) and top (TOP) are constructed as pullbacks. The commutativity induces unique morphism O * → D * 2 making the back faces commuting and thus, all faces in the cube commute. Back left face (BL) is a pullback by pullback decomposition of pullback (TOP+FR) via (BL+(1)) and back right face (BR) is a pullback by pullback decomposition of pullback (TOP+FL) via (BR+(1)). We obtain o * : L 3 → O * as induced morphism from pullback (BL+FL) and using the assumption m 3 = n; q.
Further, by the
van Kampen property, the top face is a pushout. Since the constraint is incremental and L 3 → O → L3 = L 3 → O → L3 , without loss of generality we have a morphism i : O → O commuting the triangles.
Fig. 5. Rule p4 with incremental NAC.
Fig. 6. Rule p3 (left) and the set INC({p3}) = {p31, p32} (right).
Fig. 7. Quasi-pushout of monos in an adhesive category.
CDC
such that the mediating morphism g : D → G is mono.2. LetB A C be a span of monos. If objects B and C are nite (i.e., they have a nite number of subobjects), then the number of non-isomorphic distinct quasi-pushouts of the span is nite. 3. In span B A b C, suppose that mono b is k-decomposable, and that B d is a quasi-pushout based on A , where h : A A is not an iso. Then mono d : B D is at most (k -1)-decomposable. 4. Quasi-pushouts preserve incrementality: if B d D C is a quasi-pushout of B A b C and b is incremental, then also d : B D is incremental.
L
j has no pushout complement, then INC (p, n j ) = {p }, where p if obtained from p by dropping constraint n j . (b) Otherwise, let L n j M j k
INC(P) := P
while (there is a rule in INC(P) with a non-incremental constraint) do
  let k = max{ k | there is a k-decomposable non-incremental constraint in INC(P) }
  let n be a k-decomposable non-incremental constraint of p ∈ INC(P)
  set INC(P) := (INC(P) \ {p}) ∪ INC(p, n)
endwhile
return INC(P)
Theorem 2 (correctness and conditional completeness of compilation).
, namely the set INC ({p 3 }) = {p 31 , p 32 } containing rules with incremental NACs only. It is not dicult to see that p 3 can be applied to a match if and only if either p 31 or p 32 can be applied to the same match (determined by the image of node 1), and the eect of the rules is the same (adding a new node). In fact, if either p 31 or p 32 can be applied, then also p 3 can be applied to the same match, because at least one part of its NAC is missing (the loop if p 31 was applied, otherwise the edge). Viceversa, if p 3 can be applied, then either the loop on 1 is missing, and p 31 is applicable, or the loop is present but there is no non-looping edge from 1, and thus p 32 can be applied. As a side remark, notice that the NACs p 31 or p 32
5 Transforming General NACs into Incremental NACs
In this section we show how to compile a set of rules P with arbitrary NACs into a (usually much larger) set of rules INC (P ) having incremental NACs only. The construction guarantees that every derivation using rules in P can be transformed into a derivation over INC (P ). Additionally, we show that P and INC (P ) are actually equivalent if all constraints in P are obtained as colimits of incremental constraints.
The example shown in Fig.
6
can help getting an intuition about the transformation. It shows one possible outcome (indeed, the algorithm we shall present is non-deterministic) of the application of the transformation to rule p 3
Example 4. Fig. 6 shows one possible outcome of INC ({p 3 }), as discussed at the beginning of this section. As the NAC of p 3 is a colimit of incremental arrows, {p 3 } and INC ({p 3 }) are equivalent. Instead, let p
of n j (L n j M j k L j ) has a pushout complement, then G INC (p,nj ) =⇒ H implies
G p =⇒ H.
Fig. 8. DPO diagram with NAC
Proposition 6 (relationship between p and INC (p, n j )). In the hypotheses
of Def. 4, if G p =⇒ H then G INC (p,nj ) =⇒ H. Furthermore, if the decomposition
* / / H
that all constraints in INC (p, n) are incremental, but the splitting of n as n ; n does not have a pushout complement. Indeed, we can nd a graph to which p 1 is applicable but p is not, showing that the condition we imposed on NACs to prove that p and INC (p, n j ) are equivalent is necessary. In fact, let
1 .
1 The algorithm of Def. 5 terminates. 2. INC (P ) contains rules with incremental NACs only. 3. INC (P ) simulates P , i.e., G
P =⇒ H implies G INC (P )
is the highest position at which σ and σ dier. Now, let k be the minimal number such that all non-incremental constraints in P are at most k-decomposable, and dene the degree of a rule p, deg(p), as the σ ∈ N k given by σ i = |{n | n is an i-decomposable non-incremental constraint of p}| Dene deg(Q) for a nite set of rules as the componentwise sum of the degrees of all the rules in Q. we conclude by showing that at each iteration of the loop of Def. 5 the degree deg(INC (P )) decreases strictly. Let p be a rule and n be a non-incremental constraint, k-decomposable for a maximal k. The statement follows by showing that INC (p, n) has at least one k-decomposable non-incremental constraint less than p, while all other constraints are at most ( k -1)-decomposable. This is obvious if INC (p, n) is obtained according to point (a) of Def. 4. Otherwise, let INC (p, n) = {p , p j } using the notation of point
Next | 35,599 | [
"1003773",
"1003778",
"1003779",
"1003780",
"1003781"
] | [
"366408",
"300751",
"104741",
"104741",
"104741"
] |
01486026 | en | [ "info" ] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01486026/file/978-3-642-38493-6_13_Chapter.pdf | Behrooz Nobakht
email: bnobakht@liacs.nl
Frank S De Boer
Mohammad Mahdi Jaghoori
email: m.jaghouri@lacdr.leidenuniv.nl
The Future of a Missed Deadline
Keywords: actors, application-level scheduling, real-time, deadlines, futures, Java
Introduction
In real-time applications, rigid deadlines necessitate stringent scheduling strategies. Therefore, the developer must ideally be able to program the scheduling of different tasks inside the application. Real-Time Specification for Java (RTSJ) [START_REF] Jcp | RTSJ v1 JSR 1[END_REF][START_REF]RTSJ v1.1 JSR 282[END_REF] is a major extension of Java, as a mainstream programming language, aiming at enabling real-time application development. Although RTSJ extensively enriches Java with a framework for the specification of real-time applications, it yet remains at the level of conventional multithreading. The drawback of multithreading is that it involves the programmer with OS-related concepts like threads, whereas a real-time Java developer should only be concerned about high-level entities, i.e., objects and method invocations, also with respect to real-time requirements.
The actor model [START_REF] Scott | A foundation for actor computation[END_REF] and actor-based programming languages, which have re-emerged in the past few years [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF][START_REF] Armstrong | Programming Erlang: Software for a Concurrent World[END_REF][START_REF] Haller | Scala actors: Unifying thread-based and eventbased programming[END_REF][START_REF] Broch | An Asynchronous Communication Model for Distributed Concurrent Objects[END_REF][START_REF] Varela | Programming dynamically reconfigurable open systems with SALSA[END_REF], provide a different and promising paradigm for concurrency and distributed computing, in which threads are transparently encapsulated inside actors. As we will argue in this paper, this paradigm is much more suitable for real-time programming because it enables the programmer to obtain the appropriate high-level view which allows the management of complex real-time requirements.
In this paper, we introduce an actor-based programming language Crisp for real-time applications. Basic real-time requirements include deadlines and time-outs. In Crisp, deadlines are associated with asynchronous messages and timeouts with futures [START_REF] Frank | A complete guide to the future[END_REF]. Crisp further supports a general actor-based mechanism for handling exceptions raised by missed deadlines. By the integration of these basic real-time control mechanisms with the application-level policies supported by Crisp for scheduling of the messages inside an actor, more complex real-time requirements of the application can be met with more flexibility and finer granularity.
We formalize the design of Crisp by means of structural operational semantics [START_REF] Gordon D Plotkin | The origins of structural operational semantics[END_REF] and describe its implementation as a full-fledged programming language. The implementation uses the Java and Scala languages together with extensions of the Akka library. We illustrate the use of the programming language with an industrial case study from SDL Fredhopper, which provides enterprise-scale distributed e-commerce solutions on the cloud.
The paper continues as follows: Section 2 introduces the language constructs and provides informal semantics of the language with a case study in Section 2.1. Section 3 presents the operational semantics of Crisp. Section 4 follows to provide a detailed discussion on the implementation. The case study continues in this section with further details and code examples. Section 5 discusses related work of research and finally Section 6 concludes the paper and proposes future line of research.
Programming with deadlines
In this section, we introduce the basic concepts underlying the notion of "deadlines" for asynchronous messages between actors. The main new constructs specify how a message can be sent with a deadline, how the message response can be processed, and what happens when a deadline is missed. We discuss the informal semantics of these concepts and illustrate them using a case study in Section 2.1.
Figure 1 introduces a minimal version of the real-time actor-based language Crisp. Below we discuss the two main new language constructs, presented at lines (7) and (8) of the grammar.
How to send a message with a deadline? The construct f = e 0 ! m(e) deadline(e 1 ) describes an asynchronous message with a deadline specified by e 1 (of type T time ). Deadlines can be specified using a notion of time unit such as millisecond, second, minute or other units of time. The caller expects the callee (denoted by e 0 ) to process the message within the units of time specified by e 1 . Here processing a message means starting the execution of the process generated by the message. A deadline is missed if and only if the callee does not start processing the message within the specified units of time.
What happens when a deadline is missed? Messages received by an actor
C ::= class N begin V ? {M } * end (1) Msig ::= N(T x) (2)
M ::= {Msig == {V ; } ? S} (3)
V ::= var {{x},
+ : T {= e} ? }, + (4)
S ::= x := e | (5)
:
:= x := new T(e ? ) | (6)
:= f = e ! m(e) deadline(e) | (7)
:
:= x := f.get(e ? ) | (8)
:
:= return e | (9)
::= S ; S | (10)
:
:= if (b) then S else S end | (11)
:
:= while (b) { S } | (12)
:
:= try {S} catch(TException x) { S } (13)
Fig. 1: A kernel version of the real-time programming language. The bold scripted keywords denote the reserved words in the language. The over-lined v denotes a sequence of syntactic entities v. Both local and instance variables are denoted by x. We assume distinguished local variables this, myfuture, and deadline which denote the actor itself, the unique future corresponding to the process, and its deadline, respectively. A distinguished instance variable time denotes the current time. Any subscripted type T specialized denotes a specialized type of general type T; e.g. T Exception denotes all "exception" types. A variable f is in T f uture . N is a name (identifier) used for classes and method names. C denotes a class definition which consists of a definition of its instance variables and its methods; M sig is a method signature; M is a method definition; S denotes a statement. We abstract from the syntax the side-effect free expressions e and boolean expressions b.
generate processes. Each actor contains one active process and all its other processes are queued. Newly generated processes are inserted in the queue according to an application-specific policy. When a queued process misses its deadline it is removed from the queue and a corresponding exception is recorded by its future (as described below). When the currently active process is terminated the process at the head of the queue is activated (and as such dequeued). The active process cannot be preempted and is forced to run to completion. In Section 4 we discuss the implementation details of this design choice.
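The queue discipline just described can be approximated by the following Python sketch (ours, not the paper's Java/Scala/Akka implementation): each message becomes a queued task carrying an absolute deadline and a future; a task whose deadline has already passed when it would be activated is dropped and its future records the missed deadline; the activated task runs to completion, with no preemption. The earliest-deadline-first ordering below merely stands in for an application-specific scheduling policy.

import heapq, time

class Future:
    def __init__(self):
        self.value, self.exception, self.done = None, None, False
    def resolve(self, v):
        self.value, self.done = v, True
    def abort(self, exc):
        self.exception, self.done = exc, True

class DeadlineMissed(Exception):
    pass

class Actor:
    """One active process at a time; pending processes are queued with absolute deadlines."""
    def __init__(self):
        self.queue = []          # entries: (absolute_deadline, seq, task, future)
        self.seq = 0
    def send(self, task, deadline):
        fut = Future()
        heapq.heappush(self.queue, (time.time() + deadline, self.seq, task, fut))
        self.seq += 1
        return fut
    def step(self):
        # drop queued processes whose deadline passed before they could start
        while self.queue and self.queue[0][0] < time.time():
            _, _, _, fut = heapq.heappop(self.queue)
            fut.abort(DeadlineMissed())      # the miss is recorded in the future
        if self.queue:
            _, _, task, fut = heapq.heappop(self.queue)
            try:
                fut.resolve(task())          # runs to completion, no preemption
            except Exception as e:
                fut.abort(e)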
How to process the response of a message with a deadline? In the above example of an asynchronous message, the future result of processing the message is denoted by the variable f which has the type of Future. Given a future variable f , the programmer can query the availability of the result by the construct
v = f.get(e)
The execution of the get operation terminates successfully when the future variable f contains the result value. In case the future variable f records an exception, e.g. in case the corresponding process has missed its deadline, the get operation is aborted and the exception is propagated. Exceptions can be caught by try-catch blocks.
Listing 1: Using try-catch for processing future values
1 try {
2   x = f.get(e)
3   S_1
4 } catch(Exception x) {
5   S_2
6 }
For example, in Listing 1, if the get operation raises an exception, control is transferred to line (5); otherwise, the execution continues at line (3). In the catch block, the programmer also has access to the exception that occurred, which can be of any kind, including an exception caused by a missed deadline. In general, any uncaught exception gives rise to abortion of the active process and is recorded by its future. Exceptions in our actor-based model are thus propagated by futures.
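Reusing the Future class from the sketch above, the behaviour of f.get(e), which returns the value, re-raises a recorded exception such as a missed deadline, or gives up after the timeout discussed next, can be approximated as follows; the polling loop and the names are our simplification.

import time

def get(future, timeout):
    """Block until the future is resolved, re-raise a recorded exception, or time out."""
    deadline = time.time() + timeout
    while not future.done:
        if time.time() >= deadline:
            raise TimeoutError("f.get(e) timed out")
        time.sleep(0.01)                 # a real runtime would block on a condition variable
    if future.exception is not None:
        raise future.exception           # exceptions are propagated through futures
    return future.value

# mirrors Listing 1: try { x = f.get(e); S_1 } catch (Exception x) { S_2 }
def listing1(f, e):
    try:
        x = get(f, e)
        return ("S_1", x)
    except Exception as exc:
        return ("S_2", exc)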
The additional parameter e of the get operation is of type T_time and specifies a timeout; i.e., the get operation will time out after the specified units of time.
This challenging task involves working on difficult issues, such as the performance of information retrieval algorithms, the scalability of dealing with huge amounts of data and satisfying large numbers of user requests per unit of time, the fault tolerance of complex distributed systems, and the executive monitoring and management of large-scale information retrieval operations. Fredhopper offers its services and facilities to e-commerce companies (customers) as services (SaaS) over a cloud computing infrastructure (IaaS), which gives rise to various challenges regarding resource management techniques, the customer cost model, and service level agreements (SLAs).
To orchestrate different services such as FAS or data processing, Fredhopper relies on a service controller (referred to as the Controller). The Controller is responsible for passively managing the different service installations of each customer. For instance, in one scenario, a customer submits their data along with a processing request to their data hub server. The Controller then picks up the data and initiates a data processing job (usually an ETL job) in a data processing service. When the data processing is complete, the result is published back to the customer environment and additionally becomes available through the FAS services. Figure 2 illustrates this scenario.
In the current implementation of Controller, at Step 4, a data job instance is submitted to a remote data processing service. Afterwards, the future response of the data job is determined by a periodic remote check on the data service (Step 4). When the job is finished, Controller continues to retrieve the data job results (Step 5) and eventually publishes it to customer environment (Step 6).
In terms of system responsiveness, Step 4 may never complete.
Step 4 failure can have different causes. For instance, at any moment of time, there are different customers' data jobs running on one data service node; i.e. there is a chance that a data service becomes overloaded with data jobs preventing the periodic data job check to return. If Step 4 fails, it leads the customer into an unbounded waiting situation. According to SLA agreements, this is not acceptable. It is strongly required that for any data job, the customer should be notified of the result: either a completed job with success/failed status, a job that is not completed, or a job with an unknown state. In other words, Controller should be able to guarantee that any data job request terminates.
To illustrate the contribution of this paper, we extract a closed-world simplified version of the scenario in Figure 2 from Controller. In Section 4, we provide an implementation-level usage of our work applied to this case study.
Operational Semantics
We describe the semantics of the language by means of a two-tiered labeled transition system: a local transition system describes the behavior of a single actor and a global transition system describes the overall behavior of a system of interacting actors. We define an actor state as a pair p, q , where p denotes the current active process of the actor, and q denotes a queue of pending processes.
Each pending process is a pair (S, τ ) consisting of the current executing statement S and the assignment τ of values to the local variables (e.g., formal parameters). The active process consists of a pair (S, σ), where σ assigns values to the local variables and additionally assigns values to the instance variables of the actor.
Local transition system
The local transition system defines transitions among actor configurations of the form p, q, φ , where (p, q) is an actor state and for any object o identifying a created future, φ denotes the shared heap of the created future objects, i.e., φ(o), for any future object o existing in φ, denotes a record with a field val which represents the return value and a boolean field aborted which indicates abortion of the process identified by o.
In the local transition system we make use of the following axiomatization of the occurrence of exceptions. Here (S, σ, φ) ↑ v indicates that S raises an exception v:
-(x = f.get(), σ, φ) ↑ σ(f ) where φ(σ(f )).aborted = true, - (S, σ, φ) ↑ v try{S}catch(T u){S }↑v
where v is not of type T, and,
- (S, σ, φ) ↑ v (S; S, σ, φ) ↑ v .
We present here the following transitions describing internal computation steps (we denote by val(e)(σ) the value of the expression e in σ and by f [u → v] the result of assigning the value v to u in the function f ).
Assignment statement is used to assign a value to a variable: (x = e; S, σ), q, φ → (S, σ[x → val(e)(σ)]), q, φ Returning a result consists of setting the field val of the future of the process:
(return e; S, σ), q, φ → (S, σ), q, φ[σ(myfuture).val → val(e)(σ)] Initialization of timeout in get operation assigns to a distinguished (local) variable timeout its initial absolute value: (x = f.get(e); S, σ), q, φ →
(x = f.get(e); S, σ[timeout → val(e + time)(σ), q, φ
The get operation is used to assign the value of a future to a variable: (x = f.get(); S, σ), q, φ → (S, σ[x → φ(σ(f )).val]), q, φ where φ(σ(f )).val = ⊥ .
Timeout is operationally presented by the following transition: (x = f.get(); S, σ), q, φ → (S, σ), q, φ where σ(time) < σ(timeout).
The try-catch block semantics is presented by: (S, σ), q, φ → (S , σ ), q , φ (try{S}catch(T x){S }; S , σ), q, φ → (try{S }catch(T x){S }; S , σ), q , φ Exception handling. We provide the operational semantics of exception handling in a general way in the following:
(S, σ, φ) ↑ v (try{S}catch(T x){S }; S , σ), q, φ → (S ; S , σ[x → v]), q, φ
where the exception v is of type T.
Abnormal termination of the active process is generated by an uncaught exception:
(S, σ, φ) ↑ v (S; S , σ), q, φ → (S , σ ), q , φ where q = (S , τ ) • q and σ is obtained from restoring the values of the local variables as specified by τ (formally, σ (x) = σ(x), for every instance variable x, and σ (x) = τ (x), for every local variable x), and φ (σ(myfuture)).aborted = true (φ (o) = φ(o), for every o = σ(myfuture)).
Normal termination is presented by: (E, σ), q, φ → (S, σ ), q , φ where q = (S, τ ) • q and σ is obtained from restoring the values of the local variables as specified by τ (see above). We denote by E termination (identifying S; E with S).
Deadline missed. Let (S , τ ) be some pending process in q such that τ (deadline) < σ(time). Then (S, σ), q, φ → p, q , φ where q results from q by removing (S , τ ) and φ (τ (myfuture)).aborted = true (φ (o) = φ(o), for every o = τ (myfuture)).
A message m(τ ) specifies for the method m the initial assignment τ of its local variables (i.e., the formal parameters and the variables this, myfuture, and deadline). To model locally incoming and outgoing messages we introduce the following labeled transitions.
Incoming message. Let the active process p belong to the actor τ (this) (i.e., σ(this) = τ (this) for the assignment σ in p):
p, q, φ m(τ ) ---→ p, insert(q, m(v, d)), φ
where insert(q, m(τ )) defines the result of inserting the process (S, τ ), where S denotes the body of method m, in q, according to some application-specific policy (described below in Section 4).
Outgoing message. We model an outgoing message by:
(f = e 0 ! m(ē) deadline(e 1 ); S, σ), q, φ m(τ ) ---→ (S, σ[f → o]), q, φ
Global transition system
A (global) system configuration S is a pair (Σ, φ) consisting of a set Σ of actor states and a global heap φ which stores the created future objects. We denote actor states by s, s , s , etc.
Local computation step. The interleaving of local computation steps of the individual actors is modeled by the rule:
(s, φ) → (s , φ ) ({s} ∪ Σ, φ) → ({s } ∪ Σ, φ )
Communication. Matching a message sent by one actor with its reception by the specified callee is described by the rule:
(s 1 , φ) m(τ ) ---→ (s 1 , φ ) (s 2 , φ) m(τ ) ---→ (s 2 , φ) ({s 1 , s 2 } ∪ Σ, φ) → ({s 1 , s 2 } ∪ Σ, φ )
Note that only an outgoing message affects the shared heap φ of futures. where
Progress of
Σ = { (S, σ ), q, φ | (S, σ), q, φ ∈ Σ, σ = σ[time → σ(time) + δ]}
for some positive δ.
Implementation
We base our implementation on Java's concurrent package: java.util.concurrent. The implementation consists of the following major components:
1. An extensible language API that owns the core abstractions, architecture, and implementation. For instance, the programmer may extend the concept of a scheduler to take full control of how, i.e., in what order, the processes of the individual actors are queued (and as such scheduled for execution).
We illustrate the scheduler extensibility with an example in the case study below. 2. Language Compiler that translates the modeling-level programs into Java source. We use ANTLR [START_REF] Parr | Antlr[END_REF] parser generator framework to compile modelinglevel programs to actual implementation-level source code of Java. 3. The language is seamlessly integrated with Java. At the time of programming, language abstractions such as data types and third-party libraries from either Crisp or Java are equally usable by the programmer.
We next discuss the underlying deployment of actors and the implementation of real-time processes with deadlines.
Deploying actors onto JVM threads. In the implementation, each actor owns a main thread of execution, that is, the implementation does not allocate one thread per process because threads are costly resources and allocating to each process one thread in general leads to a poor performance: there can be an arbitrary number of actors in the application and each may receive numerous messages which thus give rise to a number of threads that goes beyond the limits of memory and resources. Additionally, when processes go into pending mode, their correspondent thread may be reused for other processes. Thus, for better performance and optimization of resource utilization, the implementation assigns a single thread for all processes inside each actor.
Consequently, at any moment in time, there is only one process that is executed inside each actor. On the other hand, the actors share a thread which is used for the execution of a watchdog for the deadlines of the queued processes (described below) because allocation of such a thread to each actor in general slows down the performance. Further this sharing allows the implementation to decide, based on the underlying resources and hardware, to optimize the allocation of the watchdog thread to actors. For instance, as long as the resources on the underlying hardware are abundant, the implementation decides to share as less as possible the thread. This gives each actor a better opportunity with higher precision to detect missed deadlines.
Implementation of processes with deadlines. A process itself is represented in the implementation by a data structure which encapsulates the values of its local variables and the method to be executed. Given a relative deadline d as specified by a call we compute at run-time its absolute deadline (i.e. the expected starting time of the process) by
TimeUnit.toMillis(d) + System.currentTimeMillis()
which is a soft real-time requirement. As in the operational semantics, in the real-time implementation always the head of the process queue is scheduled for execution. This allows the implementation of a default earliest deadline first (EDF) scheduling policy by maintaining a queue ordered by the above absolute time values for the deadlines.
The important consequence of our non-preemptive mode of execution for the implementation is the resulting simplicity of thread management because preemption requires additional thread interrupts that facilitates the abortion of a process in the middle of execution. As stated above, a single thread in the implementation detects if a process has missed its deadline. This task runs periodically and to the end of all actors' life span. To check for a missed deadline it suffices to simply check for a process that the above absolute time value of its deadline is smaller than System.currentTimeMillis(). When a process misses its deadline, the actions as specified by the corresponding transition of the operational semantics are subsequently performed. The language API provides extension points which allow for each actor the definition of a customized watchdog process and scheduling policy (i.e., policy for enqueuing processes). The customized watchdog processes are still executed by a single thread.
Fredhopper case study. As introduced in Section 2.1, we extract a closedworld simplified version from Fredhopper Controller. We apply the approach discussed in this paper to use deadlines for asynchronous messages.
Listing 2 and 3 present the difference in the previous Controller and the approach in Crisp. The left code snippet shows the Controller that uses polling to retrieve data processing results. The right code snippet shows the one that uses messages with deadlines. When the approach in Crisp in the right snippet is applied to Controller, it is guaranteed that all data job requests are terminated in a finite amount of time. Therefore, there cannot be complains about never receiving a response for a specific data job request. Many of Fredhopper's customers rely on data jobs to eventually deliver an e-commerce service to their end users. Thus, to provide a guarantee to them that their job result is always published to their environment is critical to them. As shown in the code snippet, if the data job request is failed or aborted based on a deadline miss, the customer is still eventually informed about the situation and may further decide about it. However, in the previous version, the customer may never be able to react to a data job request because its results are never published.
In comparison to the Controller using polling, there is a way to express timeouts for future values. However, it does not provide language constructs to specify a deadline for a message that is sent to data processing service. A deadline may be simulated using a combination of timeout and periodic polling approaches (Listing 2). Though, this approach cannot guarantee eventual termination in all cases; as discussed before that Step 4 in Figure 2 may never complete. Controller is required to meet certain customer expectations based on an SLA. Thus, Controller needs to take advantage of a language/library solution that can provide a higher level of abstraction for real-time scheduling of concurrent messages. When messages in Crisp carry a deadline specification, Controller is able to guarantee that it can provide a response to the customer. This termination guarantee is crucial to the business of the customer.
Additionally, on the data processing service node, the new implementation takes advantage of the extensibility of schedulers in Crisp. As discussed above, the default scheduling policy used for each actor is EDF based on the deadlines carried by incoming messages to the actor. However, this behavior may be extended and replaced by a custom implementation from the programmer. In this case study, the priority of processes may differ if they the job request comes from specific customer; i.e. apart from deadlines, some customers have priority over others because they require a more real-time action on their job requests while others run a more relaxed business model. To model and implement this custom behavior, a custom scheduler is developed for the data processing node.
In the above listings, Listing 5 defines a custom scheduler that determines the priority of two processes with custom logic for specific customer. To use the custom scheduler, the only requirement is that the class DataProcessor defines a specific class variable called scheduler in Listing 4. The custom scheduler is picked up by Crisp core architecture and is used to schedule the queued processes. Thus, all processes from customer A have priority over processes from other customers no matter what their deadlines are.
We use Controller's logs for the period of February and March 2013 to examine the evaluation of Crisp approach. We define customer satisfaction as a property that represents the effectiveness of futures with deadline. s 1 s 2 88.71% 94.57%
Table 1: Evaluation Results
For a customer c, the satisfaction can be denoted by s = r F c rc ; in which r F c is the number of finished data processing jobs and r c is the total number of requested data processing jobs from customer c. We extracted statistics for completed and never-ended data processing jobs from Controller logs (s 1 ). We replayed the logs with Crisp approach and measured the same property (s 2 ). We measured the same property for 180 customers that Fredhopper manages on the cloud. In this evaluation, a total number of about 25000 data processing requests were included. The results show 6% improvement in Table 1 (that amounts to around 1600 better data processing requests). Because of data issues or wrong parameters in the data processing requests, there are requests that still fail or never end and should be handled by a human resource.
You may find more information including documentation and source code of Crisp at http://nobeh.github.com/crisp.
Related Work
The programming language presented in this paper is a real-time extension of the language introduced in [START_REF] Nobakht | Programming and deployment of active objects with application-level scheduling[END_REF]. This new extension features integration of asynchronous messages with deadlines and futures with timeouts; a general mechanism for handling exceptions raised by missed deadlines; high-level specification of application-level scheduling policies; and a formal operational semantics.
To the best of our knowledge the resulting language is the first implemented real-time actor-based programming language which formally integrates the above features.
In several works, e.g, [START_REF] Aceto | Modelling and Simulation of Asynchronous Real-Time Systems using Timed Rebeca[END_REF] and [START_REF] Nielsen | Semantics for an Actor-Based Real-Time Language[END_REF], asynchronous messages in actor-based languages are extended with deadlines. However these languages do not feature futures with timeouts, a general mechanism for handling exceptions raised by missed deadlines or support the specification of application-level scheduling policies. Futures and fault handling are considered in the ABS language [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF]. This work describes recovery mechanisms for failed get operations on a future. However, the language does not support the specification of real-time requirements, i.e., no deadlines for asynchronous messages are considered and no timeouts on futures. Further, when a get operation on a future fails, [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF] does not provide any context or information about the exception or the cause for the failure. Alternatively, [START_REF] Broch Johnsen | ABS: A core language for abstract behavioral specification[END_REF] describes a way to "compensate" for a failed get operation on future. In [START_REF] Bjørk | User-defined schedulers for real-time concurrent objects[END_REF], a real-time extension of ABS with scheduling policies to model distributed systems is introduced. In contrast to Crisp, Real-Time ABS is an executable modeling language which supports the explicit specification of the progress of time by means of duration statements for the analysis of real-time requirements. The language does not support however asynchronous messages with deadlines and futures with timeouts.
Two successful examples of actor-based programming languages are Scala and Erlang. Scala [START_REF] Haller | Scala actors: Unifying thread-based and eventbased programming[END_REF][START_REF]Coordination Models and Languages, volume 4467, chapter Actors That Unify Threads and Events[END_REF] is a hybrid object-oriented and functional programming language inspired by Java. Through the event-based model, Scala also provides the notion of continuations. Scala further provides mechanisms for scheduling of tasks similar to those provided by concurrent Java: it does not provide a direct and customizable platform to manage and schedule messages received by an individual actor. Additionally, Akka [START_REF] Typesafe | [END_REF] extends Scala's actor programming model and as such provides a direct integration with both Java and Scala. Erlang [START_REF] Armstrong | Programming Erlang: Software for a Concurrent World[END_REF] is a dynamically typed functional language that was developed at Ericsson Computer Science Laboratory with telecommunication purposes [START_REF] Corrêa | Actors in a new "highly parallel" world[END_REF]. Recent developments in the deployment of Erlang support the assignment of a scheduler to each processor [START_REF] Lundin | Inside the Erlang VM, focusing on SMP[END_REF] (instead of one global scheduler for the entire application) but it does not, for example, support application-level scheduling policies. In general, none these languages provide a formally defined real-time extension which integrates the above features.
There are well-known efforts in Java to bring in the functionality of asynchronous message passing onto multicore including Killim [START_REF] Srinivasan | Kilim: Isolation-Typed Actors for Java[END_REF], Jetlang [START_REF] Rettig | Jetlang Library[END_REF], Ac-torFoundry [START_REF] Rajesh | Actor frameworks for the JVM platform: a comparative analysis[END_REF], and SALSA [START_REF] Varela | Programming dynamically reconfigurable open systems with SALSA[END_REF]. In [START_REF] Rajesh | Actor frameworks for the JVM platform: a comparative analysis[END_REF], the authors present a comparative analysis of actor-based frameworks for JVM platform. Most of these frameworks support futures with timeouts but do not provide asynchronous messages with deadlines, or a general mechanism for handling exceptions raised by missed deadlines. Further, pertaining to the domain of priority scheduling of asynchronous messages, these efforts in general provide a predetermined approach or a limited control over message priority scheduling. As another example, in [START_REF] Maia | Combining rtsj with fork/join: a priority-based model[END_REF] the use of Java Fork/Join is described to optimize mulicore applications. This work is also based on a fixed priority model. Additionally, from embedded hardwaresoftware research domain, Ptolemy [START_REF] Eker | Taming heterogeneity -the ptolemy approach[END_REF][START_REF] Lee | Actor-oriented design of embedded hardware and software systems[END_REF] is an actor-oriented open architecture and platform that is used to design, model and simulate embedded software. Their approach is hardware software co-design. It provides a platform framework along with a set of tools.
In general, existing high-level programming languages provide the programmer with little real-time control over scheduling. The state of the art allows specifying priorities for threads or processes that are used by the operating system, e.g., Real-Time Specification for Java (RTSJ [START_REF] Jcp | RTSJ v1 JSR 1[END_REF][START_REF]RTSJ v1.1 JSR 282[END_REF]) and Erlang. Specifically in RTSJ, [START_REF] Zerzelidis | A framework for flexible scheduling in the RTSJ[END_REF] extensively introduces and discusses a framework for applicationlevel scheduling in RTSJ. It presents a flexible framework to allow scheduling policies to be used in RTSJ. However, [START_REF] Zerzelidis | A framework for flexible scheduling in the RTSJ[END_REF] addresses the problem mainly in the context of the standard multithreading approach to concurrency which in general does not provide the most suitable approach to distributed applications. In contrast, in this paper we have shown that an actor-based programming language provides a suitable formal basis for a fully integrated real-time control in distributed applications.
Conclusion and future work
In this paper, we presented both a formal semantics and an implementation of a real-time actor-based programming language. We presented how asynchronous messages with deadline can be used to control application-level scheduling with higher abstractions. We illustrated the language usage with a real-world case study from SDL Fredhopper along the discussion for the implementation. Currently we are investigating further optimization of the implementation of Crisp and the formal verification of real-time properties of Crisp applications using schedulability analysis [START_REF] Fersman | Schedulability analysis using two clocks[END_REF].
where φ results from φ by extending its domain with a new future object o such that φ (o).val =⊥ 4 and φ (o).aborted = false, τ (this) = val(e 0 )(σ), τ (x) = val(e)(σ), for every formal parameter x and corresponding actual parameter e, τ (deadline) = σ(time) + val(e 1 )(σ), τ (myfuture) = o.
Time. The following transition uniformly updates the local clocks (represented by the instance variable time) of the actors. (Σ, φ) → (Σ , φ)
Listing 4: Data Processor class | 34,230 | [
"1003810",
"1003770",
"1003811"
] | [
"121723",
"488223",
"20495",
"121723"
] |
01486032 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://inria.hal.science/hal-01486032/file/978-3-642-38493-6_2_Chapter.pdf | Andrea Cerone
email: acerone@scss.tcd.ie
Matthew Hennessy
email: matthew.hennessy@scss.tcd.ie
Massimo Merro
email: massimo.merro@univr.it
Modelling MAC-layer communications in wireless systems (Extended abstract)
We present a timed broadcast process calculus for wireless networks at the MAC-sublayer where time-dependent communications are exposed to collisions. We define a reduction semantics for our calculus which leads to a contextual equivalence for comparing the external behaviour of wireless networks. Further, we construct an extensional LTS (labelled transition system) which models the activities of stations that can be directly observed by the external environment. Standard bisimulations in this novel LTS provide a sound proof method for proving that two systems are contextually equivalent. In addition, the main contribution of the paper is that our proof technique is also complete for a large class of systems.
Introduction
Wireless networks are becoming increasingly pervasive with applications across many domains, [START_REF] Rappaport | Wireless communications -principles and practice[END_REF][START_REF] Akyildiz | Wireless sensor networks: a survey[END_REF]. They are also becoming increasingly complex, with their behaviour depending on ever more sophisticated protocols. There are different levels of abstraction at which these can be defined and implemented, from the very basic level in which the communication primitives consist of sending and receiving electromagnetic signals, to the higher level where the basic primitives allow the set up of connections and exchange of data between two nodes in a wireless system [START_REF] Tanenbaum | Computer Networks[END_REF].
Assuring the correctness of the behaviour of a wireless network has always been difficult. Several approaches have been proposed to address this issue for networks described at a high level [START_REF] Nanz | Static analysis of routing protocols for ad-hoc networks[END_REF][START_REF] Merro | An Observational Theory for Mobile Ad Hoc Networks (full paper)[END_REF][START_REF] Godskesen | A Calculus for Mobile Ad Hoc Networks[END_REF][START_REF] Ghassemi | Equational reasoning on mobile ad hoc networks[END_REF][START_REF] Singh | A process calculus for mobile ad hoc networks[END_REF][START_REF] Kouzapas | A process calculus for dynamic networks[END_REF][START_REF] Borgström | Broadcast psi-calculi with an application to wireless protocols[END_REF][START_REF] Cerone | Modelling probabilistic wireless networks (extended abstract)[END_REF]; these typically allow the formal description of protocols at the network layer of the TCP/IP reference model [START_REF] Tanenbaum | Computer Networks[END_REF]. However there are few frameworks in the literature which consider networks described at the MAC-Sublayer of the TCP/IP reference model [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF][START_REF] Merro | A timed calculus for wireless systems[END_REF]. This is the topic of the current paper. We propose a process calculus for describing and verifying wireless networks at the MAC-Sublayer of the TCP/IP reference model. This calculus, called the Calculus of Collision-prone Communicating Processes (CCCP), has been largely inspired by TCWS [START_REF] Merro | A timed calculus for wireless systems[END_REF]; in particular CCCP inherits its communication features but simplifies considerably the syntax, the reduction semantics, the notion of observation, and as we will see the behavioural theory. In CCCP a wireless system is considered to be a collection of wireless stations which transmit and receive messages. The transmission of messages is broadcast, and it is time-consuming; the transmission of a message v can require several time slots (or instants). In addition, wireless stations in our calculus are sensitive to collisions; if two different stations are transmitting a value over a channel c at the same time slot a collision occurs, and the content of the messages originally being transmitted is lost.
More specifically, in CCCP a state of a wireless network (or simply network, or system) will be described by a configuration of the form Γ W where W describes the code running at individual wireless stations and Γ represents the communication state of channels. At any given point of time there will be exposed communication channels, that is channels containing messages (or values) in transmission; this information will be recorded in Γ.
Such systems evolve by the broadcast of messages between stations, the passage of time, or some other internal activity, such as the occurrence of collisions and their consequences. One of the topics of the paper is to capture formally these complex evolutions, by defining a reduction semantics, whose judgments take the form Γ 1 W 1 Γ 2 W 2 . The reduction semantics satisfies some desirable properties such as time determinism, patience and maximal progress [START_REF] Nicollin | The algebra of timed processes, atp: Theory and application[END_REF][START_REF] Hennessy | A process algebra for timed systems[END_REF][START_REF] Yi | A Calculus of Real Time Systems[END_REF].
However the main aim of the paper is to develop a behavioural theory of wireless networks. To this end we need a formal notion of when two such systems are indistinguishable from the point of view of users. Having a reduction semantics it is now straightforward to adapt a standard notion of contextual equivalence:
Γ 1 W 1 Γ 2 W 2 .
Intuitively this means that either system, Γ 1 W 1 or Γ 2 W 2 , can be replaced by the other in a larger system without changing the observable behaviour of the overall system. Formally we use the approach of [START_REF] Honda | On reduction-based process semantics[END_REF], often called reduction barbed congruence; the only parameter in the definition is the choice of primitive observation or barb. Our choice is natural for wireless systems: the ability to transmit on an idle channel, that is a channel with no active transmissions.
As explained in papers such as [START_REF] Rathke | Deconstructing behavioural theories of mobility[END_REF][START_REF] Hennessy | A distributed Pi-calculus[END_REF], contextual equivalences are determined by so-called extensional actions, that is the set of minimal observable interactions which a system can have with its external environment. For CCCP determining these actions is non-trivial. Although values can be transmitted and received on channels, the presence of collisions means that these are not necessarily observable. In fact the important point is not the transmission of a value, but its successful delivery. Also, although the basic notion of observation on systems does not involve the recording of the passage of time, this has to be taken into account extensionally in order to gain a proper extensional account of systems.
The extensional semantics determines an LTS (labelled transition system) over configurations, which in turn gives rise to the standard notion of (weak) bisimulation equivalence between configurations. This gives a powerful co-inductive proof technique: to show that two systems are behaviourally equivalent it is sufficient to exhibit a witness bisimulation which contains them.
One result of this paper is that weak bisimulation in the extensional LTS is sound with respect to the touchstone contextual equivalence: if two systems are related by some bisimulation in the extensional LTS then they are contextually equivalent. However, the main contribution is that completeness holds for a large class of networks, called well-formed. If two such networks are contextually equivalent then there is some bisimulation, based on our novel extensional actions, which contains them. In [START_REF] Merro | A timed calculus for wireless systems[END_REF], a sound but not complete bisimulation based proof method is developed for (a different form of) reduction barbed congruence. Here, by simplifying the calculus and isolating novel extensional actions we obtain both soundness and completeness.
The rest of the paper is organised as follows: in Section 2 we define the syntax which we will use for modelling wireless networks. The reduction semantics is given in Section 3 from which we develop in the same section our notion of reduction barbed congruence. In Section 4 we define the extensional semantics of networks, and the (weak) bisimulation equivalence it induces. In Section 5 we state the main results of the paper, namely that bisimulation is sound with respect to barbed congruence and, for a large class of systems, it is also complete. Detailed proofs of the results can be found in the associated technical report [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF]. The latter also contains an initial case study showing the usefulness of our proof technique. Two particular instances of networks are compared; the first forwards two messages to the external environment using a TDMA modulation technique, the second performs the same task by routing the messages along different stations.
The calculus
Formally we assume a set of channels Ch, ranged over by c, d, • • • , and a set of values Val, which contains a set of data-variables, ranged over by x, y, • • • and a special value err; this value will be used to denote faulty transmissions. The set of closed values, that is those not containing occurrences of variables, are ranged over by v, w, • • • . We also assume that every closed value v ∈ Val has an associated strictly positive integer δ v , which denotes the number of time slots needed by a wireless station to transmit v.
A channel environment is a mapping Γ : Ch → N × Val. In a configuration Γ W where Γ(c) = (n, v) for some channel c, a wireless station is currently transmitting the value v for the next n time slots. We will use some suggestive notation for channel environments: Γ t c : n in place of Γ(c) = (n, w) for some w, Γ v c : w in place of Γ(c) = (n, w) for some n. If Γ t c : 0 we say that channel c is idle in Γ, and we denote it with Γ c : idle. Otherwise we say that c is exposed in Γ, denoted by Γ c : exp. The channel environment Γ such that Γ c : idle for every channel c is said to be stable.
The syntax for system terms W is given in Table 1, where P ranges over code for programming individual stations, which is also explained in Table 1. A system term W is a collection of individual threads running in parallel, with possibly some channels restricted. Each thread may be either an inactive piece of code P or an active code of the form c[x].P. This latter term represents a wireless station which is receiving a value from the channel c; when the value is eventually received the variable x will be replaced with the received value in the code P. The restriction operator νc : (n, v).W is nonstandard, for a restricted channel has a positive integer and a closed value associated with it; roughly speaking, the term νc : (n, v).W corresponds to the term W where The syntax for station code is based on standard process calculus constructs. The main constructs are time-dependent reception from a channel c?(x).P Q, explicit time delay σ.P, and broadcast along a channel c ! u .P. Here u denotes either a data-variable or closed value v ∈ Val. Of the remaining standard constructs the most notable is matching, [b]P, Q which branches to P or Q, depending on the value of the Boolean expression b. We leave the language of Boolean expressions unspecified, other than saying that it should contain equality tests for values, u 1 = u 2 . More importantly, it should also contain the expression exp(c) for checking if in the current configuration the channel c is exposed, that is it is being used for transmission.
In the construct fix X.P occurrences of the recursion variable X in P are bound; similarly in the terms c?(x).P Q and c[x].P the data-variable x is bound in P. This gives rise to the standard notions of free and bound variables, α-conversion and capture-avoiding substitution; we assume that all occurrences of variables in system terms are bound and we identify systems up to α-conversion. Moreover we assume that all occurrences of recursion variables are guarded; they must occur within either a broadcast, input or time delay prefix, or within an execution branch of a matching construct. We will also omit trailing occurrences of nil, and write c?(x).P in place of c?(x).P nil.
Our notion of wireless networks is captured by pairs of the form Γ W, which represent the system term W running in the channel environment Γ. Such pairs are called configurations, and are ranged over by the metavariable C.
(Snd) Γ c ! v .P c!v ----→ σ δv .P (Rcv) Γ c : idle Γ c?(x).P Q c?v ----→ c[x].P (RcvIgn) ¬rcv(W, c) Γ W c?v ----→ W (Sync) Γ W 1 c!v ----→ W 1 Γ W 2 c?v ----→ W 2 Γ W 1 | W 2 c!v ----→ W 1 | W 2 (RcvPar) Γ W 1 c?v ----→ W 1 Γ W 2 c?v ----→ W 2 Γ W 1 | W 2 c?v ----→ W 1 | W 2
Reduction semantics and contextual equivalence
The reduction semantics is defined incrementally. We first define the evolution of system terms with respect to a channel environment Γ via a set of SOS rules whose judg-
ments take the form Γ W 1 λ ---→ W 2 .
Here λ can take the form c!v denoting a broadcast of value v along channel c, c?v denoting an input of value v being broadcast along channel c, τ denoting an internal activity, or σ, denoting the passage of time. However these actions will also have an effect on the channel environment, which we first describe, using a functional upd λ (•) : Env → Env, where Env is the set of channel environments.
The channel environment upd λ (Γ) describes the update of the channel environment Γ when the action λ is performed, is defined as follows: for λ = σ we let upd σ (Γ) t c : (n -1) whenever Γ t c : n, upd σ (Γ) v c : w whenever Γ v c : w.
For λ = c!v we let upd c!v (Γ) be the channel environment such that
upd c!v (Γ) t c : δ v if Γ c : idle max(δ v , k) if Γ c : exp upd c!v (Γ) v c : v if Γ c : idle err if Γ c : exp where Γ t c : k. Finally, we let upd c?v (Γ) = upd c!v (Γ) and upd τ (Γ) = Γ.
Let us describe the intuitive meaning of this definition. When time passes, the time of exposure of each channel decreases by one time unit 3 . The predicates upd c!v (Γ) and upd c?v (Γ) model how collisions are handled in our calculus. When a station begins broadcasting a value v over a channel c this channel becomes exposed for the amount of time required to transmit v, that is δ v . If the channel is not free a collision happens. As a consequence, the value that will be received by a receiving station, when all transmissions over channel c terminate, is the error value err, and the exposure time is adjusted accordingly.
For the sake of clarity, the inference rules for the evolution of system terms, Γ
W 1 λ ---→ W 2 ,
are split in four tables, each one focusing on a particular form of activity.
Table 2 contains the rules governing transmission. Rule (Snd) models a non-blocking broadcast of message v along channel c. A transmission can fire at any time, independently on the state of the network; the notation σ δ v represents the time delay operator σ iterated δ v times. So when the process c ! v .P broadcasts it has to wait δ v time units before the residual P can continue. On the other hand, reception of a message by a time-guarded listener c?(x).P Q depends on the state of the channel environment. If the channel c is free then rule (Rcv) indicates that reception can start and the listener evolves into the active receiver c[x].P.
The rule (RcvIgn) says that if a system can not receive on the channel c then any transmission along it is ignored. Intuitively, the predicate rcv(W, c) means that W contains among its parallel components at least one non-guarded receiver of the form c?(x).P Q which is actively awaiting a message. Formally, the predicate rcv(W, c) is the least predicate such that rcv( c?(x).P Q, c) = true and which satisfies the equations rcv(P
+ Q, c) = rcv(P, c) ∨ rcv(Q, c), rcv(W 1 | W 2 , c) = rcv(W 1 , c) ∨ rcv(W 2 , c) and rcv(νd.W, c) = rcv(W, c) if d c.
The remaining two rules in Table 2 (Sync) and (RcvPar) serve to synchronise parallel stations on the same transmission [START_REF] Hennessy | Bisimulations for a calculus of broadcasting systems[END_REF][START_REF] Nicollin | The algebra of timed processes, atp: Theory and application[END_REF][START_REF] Prasad | A calculus of broadcasting systems[END_REF].
Example 1 (Transmission). Let C 0 = Γ 0 W 0 , where Γ 0 c, d : idle and W 0 = c! v 0 | d?(x).nil ( c?(x).Q ) | c?(x).P where δ v 0 = 2.
Using rule (Snd) we can infer
Γ 0 c! v 0 c!v 0
-----→ σ 2 ; this station starts transmitting the value v 0 along channel c. Rule (RcvIgn) can be used to derive the transition Γ 0 d?(x).nil ( c?(x).Q ) c?v 0 -----→ d?(x).nil ( c?(x).Q ), in which the broadcast of value v 0 along channel c is ignored. On the other hand, Rule (RcvIgn) cannot be applied to the configuration Γ 0 c?(x).P , since this station is waiting to receive a value on channel c; however we can derive the transition Γ 0 c?(x).P c?v 0 -----→ c[x].P using Rule (Rcv). We can put the three transitions derived above together using rule (Sync), leading
to the transition C 0 c!v ----→ W 1 , where W 1 = σ 2 | d?(x).nil ( c?(x).Q ) | c[x].P.
The transitions for modelling the passage of time, Γ W σ ---→ W , are given in Table 3. In the rules (ActRcv) and (EndRcv) we see that the active receiver c[x].P continues to wait for the transmitted value to make its way through the network; when the allocated transmission time elapses the value is then delivered and the receiver evolves to { w / x }P. The rule (SumTime) is necessary to ensure that the passage of time does not resolve non-deterministic choices. Finally (Timeout) implements the idea that c?(x).P Q is a time-guarded receptor; when time passes it evolves into the alternative Q. However this only happens if the channel c is not exposed. What happens if it is exposed is explained later in Table 4. Finally, Rule (TimePar) models how σ-actions are derived for collections of threads.
Example 2 (Passage of Time
). Let C 1 = Γ 1 W 1 , where Γ 1 (c) = (2, v 0 ), Γ 1 d : idle and W 1 is the system term derived in Example 1.
We show how a σ-action can be derived for this configuration. First note that Γ 1 σ 2 σ ---→ σ; this transition can be derived using Rule (Sleep). Since d is idle in Γ 1 , we can apply Rule (TimeOut) to infer the transition Γ 1 d?(x).nil ( c?(x).Q ) σ ---→ c?(x).Q ; time passed before a value could be broadcast along channel d, causing a timeout in the Table 3 Intensional semantics: timed transitions Table 4 is devoted to internal transitions Γ W τ ---→ W . Let us first explain rule (RcvLate). Intuitively the process c?(x).P Q is ready to start receiving a value on an exposed channel c. This means that a transmission is already taking place. Since the process has therefore missed the start of the transmission it will receive an error value. Thus Rule (RcvLate) reflects the fact that in wireless systems a broadcast value cannot be correctly received by a station in the case of a misalignment between the sender and the receiver.
(TimeNil) Γ nil σ ---→ nil (Sleep) Γ σ.P σ ---→ P (ActRcv) Γ t c : n, n > 1 Γ c[x].P σ ---→ c[x].P (EndRcv) Γ t c : 1, Γ v c : w Γ c[x].P σ ---→ { w / x }P (SumTime) Γ P σ ---→ P Γ Q σ ---→ Q Γ P + Q σ ---→ Γ P + Q (Timeout) Γ c : idle Γ c?(x).P Q σ ---→ Q (TimePar) Γ W 1 σ ---→ W 1 Γ W 2 σ ---→ W 2 Γ W 1 | W 2 σ ---→ W 1 | W 2 Table 4 Intensional semantics: internal activity (RcvLate) Γ c : exp Γ c?(x).P Q τ --→ c[x].{ err / x }P (Tau) Γ τ.P τ --→ P (Then) b Γ = true Γ [b]P, Q τ --→ σ.P (Else) b Γ = false Γ [b]P, Q τ --→ σ.Q
The remaining rules are straightforward except that we use a channel environment dependent evaluation function for Boolean expressions b Γ , because of the presence of the exposure predicate exp(c) in the Boolean language. However in wireless systems it is not possible to both listen and transmit within the same time unit, as communication is half-duplex, [START_REF] Rappaport | Wireless communications -principles and practice[END_REF]. So in our intensional semantics, in the rules (Then) and (Else), the execution of both branches is delayed of one time unit; this is a slight simplification, Table 5 Intensional semantics: -structural rules
(TauPar) Γ W 1 τ --→ W 1 Γ W 1 | W 2 τ --→ W 1 | W 2 (Rec) {fix X.P/X}P λ ---→ W Γ fix X.P λ ---→ W (Sum) Γ P λ ---→ W λ ∈ {τ, c!v} Γ P + Q λ ---→ W (SumRcv) Γ P c?v ----→ W rcv(P, c) Γ c : idle Γ P + Q c?v ----→ W (ResI) Γ[c → (n, v)] W c!v ----→ W Γ νc:(n, v).W τ --→ νc:upd c!v (Γ)(c).W (ResV) Γ[c → (n, v)] W λ ---→ W , c λ Γ νc:(n, v).W λ ---→ νc:(n, v).W
as evaluation is delayed even if the Boolean expression does not contain an exposure predicate.
Example 3. Let Γ 2 be a channel environment such that Γ 2 (c) = (1, v), and consider the configuration
C 2 = Γ 2 W 2 ,
where W 2 has been defined in Example 2.
Note that this configuration contains an active receiver along the exposed channel c. We can think of such a receiver as a process which missed the synchronisation with a broadcast which has been previously performed along channel c; as a consequence this process is doomed to receive an error value.
This situation is modelled by Rule (RcvLate), which allows us to infer the transition Γ 2 c?(x).Q The final set of rules, in Table 5, are structural. Here we assume that Rules (Sum), (SumRcv) and (SumTime) have a symmetric counterpart. Rules (ResI) and (ResV) show how restricted channels are handled. Intuitively moves from the configuration Γ νc:(n, v).W are inherited from the configuration Γ[c → (n, v)] W; here the channel environment Γ[c → (n, v)] is the same as Γ except that c has associated with it (temporarily) the information (n, v). However if this move mentions the restricted channel c then the inherited move is rendered as an internal action τ, (ResI). Moreover the information associated with the restricted channel in the residual is updated, using the function upd c!v (•) previously defined.
We are now ready to define the reduction semantics; formally, we let
Γ 1 W 1 Γ 2 W 2 whenever Γ 1 W 1 λ ---→ W 2 and Γ 2 = upd λ (Γ 1 ) for some λ = τ, σ, c!v.
Note that input actions cannot be used to infer reductions for computations; following the approach of [START_REF] Milner | Communicating and Mobile Systems: The π-calculus[END_REF][START_REF] Sangiorgi | The Pi-Calculus -A Theory of Mobile Processes[END_REF] reductions are defined to model only the internal of a system. In order to distinguish between timed and untimed reductions in Let C i = Γ i W i , i = 0, • • • , 2 be as defined in these examples. Note that Γ 1 = upd c!v 0 (Γ 0 ) and Γ 2 = upd σ (Γ 1 ). We have already shown that C 0 c!v 0 -----→ W 1 ; this transition, together with the equality Γ 1 = upd c!v 0 (Γ 0 ), can be used to infer the reduction
Γ 1 W 1 Γ 2 W 2 we use Γ 1 W 1 σ Γ 2 W 2 if Γ 2 = upd σ (W 1 ) and Γ 1 W 1 i Γ 2 W 2 if Γ 2 = upd λ (Γ 1 ) for some λ = τ, c!v.
C 0 i C 1 . A similar argument shows that C 1 σ C 2 . Also if we let C 3 denote Γ 2 W 3 we also have C 2 i C 3 since Γ 2 = upd τ (Γ 2 ).
W 1 = σ | c! w 1 | c[x].P. Let Γ 1 := upd c!w 0 (Γ), that is Γ 1 (c) = (1, w 0 )
. This equality and the transition above lead to the instantaneous reduction
C i C 1 = Γ 1 W 1 .
For C 1 we can use the rules (RcvIgn), (Snd) and (Sync) to derive the transition We now define a contextual equivalence between configurations, following the approach of [START_REF] Honda | On reduction-based process semantics[END_REF]. This relies on two crucial concepts: a notion of reduction, already been defined, and a notion of minimal observable activity, called a barb.
C 1 c!w 1 -----→ W 2 , where W 2 = σ | σ | c[x].P.
While in other process algebras the basic observable activity is chosen to be an output on a given channel [START_REF] Sangiorgi | The Pi-Calculus -A Theory of Mobile Processes[END_REF][START_REF] Hennessy | A distributed Pi-calculus[END_REF], for our calculus it is more appropriate to rely on the exposure state of a channel: because of possible collisions transmitted values may never be received. Formally, we say that a configuration Γ W has a barb on channel c, written Γ W ↓ c , whenever Γ c : exp. A configuration Γ W has a weak barb on c, denoted by Γ W ⇓ c , if Γ W * Γ W for some Γ W such that Γ W ↓ c . As we will see, it turns out that using this notion of barb we can observe the content of a message being broadcast only at the end of its transmission. This is in line with the standard theory of wireless networks, in which it is stated that collisions can be observed only at reception time [START_REF] Tanenbaum | Computer Networks[END_REF][START_REF] Rappaport | Wireless communications -principles and practice[END_REF]. Definition 1. Let R be a relation over configurations.
(1) R is said to be barb preserving if
Γ 1 W 1 ⇓ c implies Γ 2 W 2 ⇓ c , whenever (Γ 1 W 1 ) R (Γ 2 W 2 ). (2) It is reduction-closed if (Γ 1 W 1 ) R (Γ 2 W 2 ) and Γ 1 W 1 Γ 1 W 1 imply there is some Γ 2 W 2 such that Γ 2 W 2 * Γ 2 W 2 and (Γ 1 W 1 ) R (Γ 2 W 2 ).
Table 6 Extensional actions
(Input) Γ W c?v ----→ W Γ W c?v -→ upd c?v (Γ) W (Time) Γ W σ ---→ W Γ W σ -→ upd σ (Γ) W (Shh) Γ W c!v ----→ W Γ W τ -→ upd c!v (Γ) W (TauExt) Γ W τ --→ W Γ W τ -→ Γ W (Deliver) Γ(c) = (1, v) Γ W σ ---→ W Γ W γ(c,v) -→ upd σ (Γ) W (Idle) Γ c : idle Γ W ι(c) -→ Γ W (3) It is contextual if Γ 1 W 1 R Γ 2 W 2 , implies Γ 1 (W 1 | W) R Γ 2 (W 2 | W) for all processes W.
Reduction barbed congruence, written , is the largest symmetric relation over configurations which is barb preserving, reduction-closed and contextual.
Example 6. We first give some examples of configurations which are not barbed congruent; here we assume that Γ is the stable environment.
-Γ c! v 0 Γ c! v 1 ; let T = c?(x).[x = v 0 ]d! ok nil,
, where d c and ok is an arbitrary value. It is easy to see that
Γ c! v 0 | T ⇓ d , whereas Γ c! v 1 | T ⇓ d . -Γ c! v Γ σ.c! v ; let T = [exp(c)]d! ok , nil. In this case we have that Γ c! v | T ⇓ d , while Γ σ.c! v | T ⇓ d .
On the other hand, consider the configurations Γ c! v 0 | c! v 1 and Γ c! err , where δ v 0 = δ v 1 and for the sake of convenience we assume that δ err = δ v 0 . In both cases a communication along channel c starts, and in both cases the value that will be eventually delivered to some receiving station is err, independently of the behaviour of the external environment. This gives us the intuition that these two configurations are barbed congruent. Later in the paper we will develop the tools that will allow us to prove this statement formally.
Extensional Semantics
In this section we give a co-inductive characterisation of the contextual equivalence between configurations, using a standard bisimulation equivalence over an extensional LTS, with configurations as nodes, but with a special collection of extensional actions; these are defined in Table 6.
Rule (Input) simply states that input actions are observable, as is the passage of time, by Rule (Time). Rule (TauExt) propagates τ-intensional actions to the extensional semantics. Rule (Shh) states that broadcasts are always treated as internal activities in the extensional semantics. This choice reflects the intuition that the content of a message being broadcast cannot be detected immediately; in fact, it cannot be detected until the end of the transmission. Rule (Idle) introduces a new label ι(c), parameterized in the channel c, which is not inherited from the intensional semantics. Intuitively this rules states that it is possible to observe whether a channel is exposed. Finally, Rule (Deliver) states that the delivery of a value v along channel c is observable, and it corresponds to a new action whose label is γ(c, v). In the following we range over extensional actions by α.
Example 7. Consider the configuration Γ c! v , where Γ is the stable channel environment. By an application of Rule (Shh) we have the transition
Γ c! v τ -→ Γ σ δ v , with Γ c : exp. Furthermore, Γ c! v ι(c)
-→ since channel c is idle in Γ. Notice that Γ σ δ v cannot perform a ι(c) action, and that the extensional semantics gives no information about the value v which has been broadcast.
The extensional semantics endows configurations with the structure of an LTS. Weak extensional actions in this LTS are defined as usual, and the formulation of bisimulations is facilitated by the notation
C α =⇒ C , which is again standard: for α = τ this denotes C -→ * C while for α τ it is C τ -→ * α -→ τ -→ * C .
Definition 2 (Bisimulations). Let R be a symmetric binary relation over configurations. We say that R is a (weak) bisimulation if for every extensional action α, whenever Example 6. Recall that in this example we assumed that Γ is the stable channel environment; further, δ v 0 = δ v 1 = δ err = k for some k > 0.
C 1 R C 2 , then C 1 α =⇒ C 1 implies C 2 α =⇒ C 2 for some C 2 satisfying C 1 R C 2 We let ≈ be the the largest bisimulation. Example 8. Let us consider again the configurations Γ W 0 = c! v 0 | c! v 1 , Γ W 1 = c! err of
We show that Γ W 0 ≈ Γ W 1 by exhibiting a witness bisimulation S such that Γ W 0 S Γ W 1 . To this end, let us consider the relation
S = { (∆ W 0 , ∆ W 1 ) , (∆ σ k | c! v 1 , ∆ σ k ) , (∆ c! v 0 , ∆ σ k ) , (∆ σ j | σ j , ∆ σ j ) | ∆ t c : n, ∆ (c) = (n, err) for some n > 0, j ≤ k}
Note that this relation contains an infinite number of pairs of configurations, which differ by the state of channel environments.This is because input actions can affect the channel environment of configurations. It is easy to show that the relation S is a bisimulation which contains the pair (Γ 0 W 0 , Γ 1 W 1 ), therefore Γ W 0 ≈ Γ W 1 .
One essential property of weak bisimulation is that it does not relate configurations which differ by the exposure state of some channel:
Proposition 2. Suppose Γ 1 W 1 ≈ Γ 2 W 2 .
Then for any channel c, Γ 1 c : idle iff Γ 2 c : idle.
Full abstraction
The aim of this section is to prove that weak bisimilarity in the extensional semantics is a proof technique which is both sound and complete for reduction barbed congruence.
Theorem 1 (Soundness). C 1 ≈ C 2 implies C 1 C 2 .
Proof. It suffices to prove that bisimilarity is reduction-closed, barb preserving and contextual. Reduction closure follows from the definition of bisimulation equivalence. The preservation of barbs follows directly from Proposition 2. The proof of contextuality on the other hand is quite technical, and is addressed in detail in the associated technical report [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF]. One subtlety lies in the definition of τ-extensional actions, which include broadcasts. While broadcasts along exposed do not affect the external environment, and hence cannot affect the external environment, this is not true for broadcasts performed along idle channels. However, we can take advantage of Proposition 2 to show that these extensional τ-actions preserve the contextuality of bisimilar configurations.
To prove completeness, the converse of Theorem 1, we restrict our attention to the subclass of well-formed configurations. Informally Γ W is well-formed if the system term W does not contain active receivers along idle channels; a wireless station cannot be receiving a value along a channel if there is no value being transmitted along it.
Definition 3 (Well-formedness). The set of well-formed configurations WNets is the least set such that for all processes P (i)
Γ P ∈ Wnets, (ii) if Γ c : exp then Γ c[x].P ∈ WNets, (iii) is closed under parallel composition and (iv) if Γ[c → (n, v)] W ∈ WNets then Γ νc : (n, v).W ∈ WNets.
By focusing on well-formed configurations we can prove a counterpart of Proposition 2 for our contextual equivalence: This means that, if we restrict our attention to well-formed configurations, we can never reach a configuration which is deadlocked; at the very least time can always proceed.
Proposition 3. Let Γ 1 W 1 , Γ 2 W 2 be two well formed configurations such that Γ 1 W 1 Γ 2 W 2 .
Theorem 2 (Completeness). On well-formed configurations, reduction barbed congruence implies bisimilarity.
The proof relies on showing that for each extensional action α it is possible to exhibit a test T α which determines whether or not a configuration Γ W can perform the action α. The main idea is to equip the test with some fresh channels; the test T α is designed so that a configuration Γ W | T α can reach another one C = Γ W | T , where T is determined uniquely by the barbs of the introduced fresh channel; these are enabled in Γ T , if and only if C can weakly perform the action α.
The tests T α are defined by performing a case analysis on the extensional action α:
T τ = eureka! ok
T σ = σ.(τ.eureka! ok + fail! no )
T γ(c,v) = νd:(0, •).((c[x].([x=v]d! ok , nil) + fail! no ) | σ 2 .[exp(d)]eureka! ok , nil | σ.halt! ok )
T c?v = (c ! v .eureka! ok + fail! no ) | halt! ok
T ι(c) = ([exp(c)]nil, eureka! ok ) + fail! no | halt! ok
where eureka, fail, halt are arbitrary distinct channels and ok, no are two values such that δ ok = δ no = 1.
For the sake of simplicity, for any action α we also define the tests T′ α as follows:
T′ τ = T′ σ = eureka! ok
T′ γ(c,v) = νd:(0, •).(σ.d! ok nil | σ.[exp(d)]eureka! ok , nil | halt! ok )
T′ c?v = σ δ v .eureka! ok | halt! ok
T′ ι(c) = σ.eureka! ok | halt! ok
Proposition 5 (Distinguishing contexts). Let Γ W be a well-formed configuration, and suppose that the channels eureka, halt, fail do not appear free in W, nor are they exposed in Γ. Then for any extensional action α, Γ W α =⇒ Γ′ W′ iff Γ W | T α * Γ′ W′ | T′ α .
A pleasing property of the tests T α is that they can be identified by the (both strong and weak) barbs that they enable in a computation rooted in the configuration Γ W | T α .
Proposition 6 (Uniqueness of successful testing components). Let Γ W be a configuration such that eureka, halt, fail do not appear free in W, nor they are exposed in Γ. Suppose that Γ W | T α * C for some configuration C such that
- if α = τ, σ, then C ↓ eureka , C ⇓ eureka , C ⇓ fail ;
- otherwise, C ↓ eureka , C ↓ halt , C ⇓ eureka , C ⇓ halt , C ⇓ fail .
Then C = Γ′ W′ | T′ α for some configuration Γ′ W′ .
Note the use of the fresh channel halt when testing some of these actions. This is because of a time mismatch between a process performing the action, and the test used to detect it. For example the weak action ι(c) =⇒ does not involve the passage of time, but the corresponding test uses a branching construct which needs at least one time step to execute. Requiring a weak barb on halt in effect prevents the passage of time.
Outline proof of Theorem 2: It is sufficient to show that reduction barbed congruence is a bisimulation. As an example suppose
Γ 1 W 1 Γ 2 W 2 and Γ 1 W 1 γ(c,v) -→ Γ 1 W 1 .
We show how to find a matching move from Γ 2 W 2 .
Suppose that
Γ 1 W 1 γ(c,v) -→ Γ′ 1 W′ 1 ; we need to show that Γ 2 W 2 γ(c,v) =⇒ Γ′ 2 W′ 2 for some Γ′ 2 W′ 2 such that Γ′ 1 W′ 1 and Γ′ 2 W′ 2 are reduction barbed congruent. By Proposition 5 we know that Γ 1 W 1 | T γ(c,v) * Γ′ 1 W′ 1 | T′ γ(c,v) . By the hypothesis it follows that Γ 1 W 1 | T γ(c,v) and Γ 2 W 2 | T γ(c,v) are reduction barbed congruent, therefore Γ 2 W 2 | T γ(c,v) * C 2 for some C 2 reduction barbed congruent to Γ′ 1 W′ 1 | T′ γ(c,v) . Let C 1 = Γ′ 1 W′ 1 | T′ γ(c,v) . It is easy to check that C 1 ↓ eureka , C 1 ↓ halt , C 1 ⇓ fail and C 1 ⇓ eureka , C 1 ⇓ halt . By definition of reduction barbed congruence and Proposition 3 we obtain that C 2 ↓ eureka , C 2 ↓ halt , C 2 ⇓ eureka , C 2 ⇓ halt and C 2 ⇓ fail . Proposition 6 then ensures that C 2 = Γ′ 2 W′ 2 | T′ γ(c,v) for some Γ′ 2 , W′ 2 . An application of Proposition 5 leads to Γ 2 W 2 γ(c,v) =⇒ Γ′ 2 W′ 2 . Now standard process calculi techniques enable us to infer from this that Γ′ 1 W′ 1 and Γ′ 2 W′ 2 are reduction barbed congruent, as required.
Conclusions and Related work
In this paper we have given a behavioural theory of wireless systems at the MAC level. We believe that our reduction semantics, given in Section 2, captures much of the subtlety of intensional MAC-level behaviour of wireless systems. We also believe that our behavioural theory is the only one for wireless networks at the MAC-Layer which is both sound and complete. The only other calculus which considers such networks is TCWS from [START_REF] Merro | A timed calculus for wireless systems[END_REF] which contains a sound theory; as we have already stated we view CCCP as a simplification of this TCWS, and by using a more refined notion of extensional action we also obtain completeness.
We are aware of only two other papers modelling networks at the MAC-Sublayer level of abstraction, these are [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF][START_REF] Wang | A timed calculus for mobile ad hoc networks[END_REF]. They present a calculus CWS which views a network as a collection of nodes distributed over a metric space. [START_REF] Lanese | An operational semantics for a calculus for wireless systems[END_REF] contains a reduction and an intensional semantics and the main result is their consistency. In [START_REF] Wang | A timed calculus for mobile ad hoc networks[END_REF], time and node mobility is added.
On the other hand there are numerous papers which consider the problem of modelling networks at a higher level. Here we briefly consider a selection; for a more thorough review see [START_REF] Cerone | Modelling mac-layer communications in wireless systems[END_REF].
Nanz and Hankin [START_REF] Nanz | Static analysis of routing protocols for ad-hoc networks[END_REF] have introduced an untimed calculus for Mobile Wireless Networks (CBS ), relying on a graph representation of node localities. The main goal of that paper is to present a framework for specification and security analysis of communication protocols for mobile wireless networks. Merro [START_REF] Merro | An Observational Theory for Mobile Ad Hoc Networks (full paper)[END_REF] has proposed an untimed process calculus for mobile ad-hoc networks with a labelled characterisation of reduction barbed congruence, while [START_REF] Godskesen | A Calculus for Mobile Ad Hoc Networks[END_REF] contains a calculus called CMAN, also with mobile ad-hoc networks in mind. Singh, Ramakrishnan and Smolka [START_REF] Singh | A process calculus for mobile ad hoc networks[END_REF] have proposed the ω-calculus, a conservative extension of the π-calculus. A key feature of the ω-calculus is the separation of a node's communication and computational behaviour from the description of its physical transmission range. Another extension of the π-calculus, which has been used for modelling the LUNAR ad-hoc routing protocol, may be found in [START_REF] Borgström | Broadcast psi-calculi with an application to wireless protocols[END_REF].
In [START_REF] Cerone | Modelling probabilistic wireless networks (extended abstract)[END_REF] a calculus is proposed for describing the probabilistic behaviour of wireless networks. There is an explicit representation of the underlying network, in terms of a connectivity graph. However this connectivity graph is static. In contrast Ghassemi et al. [START_REF] Ghassemi | Equational reasoning on mobile ad hoc networks[END_REF] have proposed a process algebra called RBPT where topological changes to the connectivity graph are implicitly modelled in the operational semantics rather than in the syntax. Kouzapas and Philippou [START_REF] Kouzapas | A process calculus for dynamic networks[END_REF] have developed a theory of confluence for a calculus of dynamic networks and they use their machinery to verify a leader-election algorithm for mobile ad hoc networks.
station waiting to receive a value along d. Finally, since Γ 1 n c : 2, we can use Rule (ActRcv) to derive Γ 1 c[x].P σ ---→ c[x].P. At this point we can use Rule (TimePar) twice to infer a σ-action performed by C 1 . This leads to the transition C 1 σ ---→ W 2 , where W 2 = σ | c?(x).Q | c[x].P.
τ ---→ c[x].{err/x}Q. As we will see, Rule (TauPar), introduced in Table 5, ensures that τ-actions are propagated to the external environment. This means that the transition derived above allows us to infer the transition C 2 τ ---→ W 3 , where W 3 = σ | c[x].{err/x}Q | c[x].P.
Proposition 1 (Maximal Progress and Time Determinism). Suppose C σ C 1 ; then C σ C 2 implies C 1 = C 2 , and C i C 3 for any C 3 .
Example 4. We now show how the transitions we have inferred in Examples 1-3 can be combined to derive a computation fragment for the configuration C 0 considered in Example 1.
Example 5 (Collisions). Consider the configuration C = Γ W, where Γ c : idle and W = c! w 0 | c! w 1 | c?(x).P ; here we assume δ w 0 = δ w 1 = 1. Using rules (Snd), (RcvIgn), (Rcv) and (Sync) we can infer the transition Γ W c!w 0 -----→ W 1 . This transition gives rise to the reduction C Γ 2 W 2 , where Γ 2 = upd c!w 1 (Γ 1 ). Note that, since Γ 1 c : exp we obtain that Γ 2 (c) = (1, err). The broadcast along a busy channel caused a collision to occur. Finally, rules (Sleep), (EndRcv) and (TimePar) can be used to infer the transition C 2 σ ---→ W 3 = nil | nil | {err/x}P. Let Γ 3 := upd σ (Γ 2 ); then this transition induces the timed reduction C 2 σ C 3 = Γ 3 W 3 , in which an error is received instead of either of the transmitted values w 0 , w 1 .
Table 1. CCCP: Syntax
W ::= P station code
c[x].P active receiver
W 1 | W 2 parallel composition
νc:(n, v).W channel restriction
P, Q ::= c ! u .P broadcast
c?(x).P Q receiver with timeout
σ.P delay
τ.P internal activity
P + Q choice
[b]P, Q matching
X process variable
nil termination
fix X.P recursion
Channel Environment: Γ : Ch → N × Val
In νc:(n, v).W, channel c is local to W, and the transmission of value v over channel c will take place for the next n slots of time.
Table 2. Intensional semantics: transmission
For convenience we assume 0 -1 to be 0.
Supported by SFI project SFI 06 IN.1 1898. Author partially supported by the PRIN 2010-2011 national project "Security Horizons" | 42,926 | [
"1003818",
"1003819",
"1003820"
] | [
"22205",
"22205",
"542958"
] |
01483419 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://theses.hal.science/tel-01483419v2/file/these_A_BOISARD_Olivier_2016.pdf | Directeur De Thèse
Michel Paindavoine. Jury: Philippe Coussy, Christophe Garcia, Andres Perez-Uribe, Robert M. French, Yann LeCun.
Optimization and implementation of bio-inspired feature extraction frameworks for visual object recognition
Industry has growing needs for so-called "intelligent systems", capable not only of acquiring data, but also of analysing it and making decisions accordingly. Such systems are particularly useful for video-surveillance, in which case alarms must be raised in case of an intrusion. For cost saving and power consumption reasons, it is better to perform that processing as close to the sensor as possible. To address that issue, a promising approach is to use bio-inspired frameworks, which consist in applying computational biology models to industrial applications. The work carried out during this thesis consisted in selecting bio-inspired feature extraction frameworks and optimizing them with the aim of implementing them on a dedicated hardware platform, for computer vision applications.
First, we propose a generic algorithm, which may be used in several use case scenarios, having an acceptable complexity and a low memory footprint. Then, we propose optimizations for a more global framework, based on precision degradation in computations, hence easing its implementation on embedded systems. Results suggest that while the framework we developed may not be as accurate as the state of the art, it is more generic. Furthermore, the optimizations we proposed for the more complex framework are fully compatible with other optimizations from the literature, and provide encouraging perspectives for future developments. Finally, both contributions have a scope that goes beyond the sole frameworks that we studied, and may be used in other, more widely used frameworks as well.
So here I am, after three years spent playing around with artificial neurons. That went fast, and I guess I would have needed twice as long to get everything done. That was a great experience, which allowed me to meet extraordinary people without whom those years wouldn't have been the same.
First of all, I wish to thank my mentor Michel Paindavoine for letting me be his student, along with my co-mentors Olivier Brousse and Michel Doussot.
Résumé
Industry has growing needs for so-called intelligent systems, capable of analysing the signals acquired by sensors and of making decisions accordingly. Those systems are particularly useful for video-surveillance or quality control applications.
For cost and power consumption reasons, it is desirable that the decision be made as close to the sensor as possible. To address this issue, a promising approach is to use so-called bio-inspired methods, which consist in applying computational models stemming from biology or cognitive science to industrial problems. The work carried out during this doctorate consisted in selecting bio-inspired feature extraction methods and optimizing them with the aim of implementing them on dedicated hardware platforms for computer vision applications. First, we propose a generic algorithm that can be used in different use cases, with an acceptable complexity and a small memory footprint. Then, we propose optimizations for a more general method, essentially based on a simplification of the data coding, as well as a hardware implementation based on those optimizations. Both contributions can moreover be applied to many methods other than the ones studied in this document.
Chapter 1
General introduction
1.1 The need for intelligent systems
Automating tedious or dangerous tasks has been an ongoing challenge for centuries.
Many tools have been designed to that end. Among them lie computing machines, which assist human beings in calculations or even perform them entirely. Such machines are everywhere nowadays, in devices that fit into our pockets. However, despite the fact that they are very efficient for mathematical operations that are complicated for our brains, they usually perform poorly at tasks that are easy for us, such as recognizing a landmark on a picture or analysing and understanding a scene.
There are many applications for systems that are able to analyze their environments and to make decisions accordingly. In fact, Alan Turing, one of the founders of modern computing, estimated that one of the ultimate goals of computing is to build machines that could be said to be intelligent [1]. Perhaps one of the most well known applications of such technology would be autonomous vehicles, e.g. cars that would be able to drive themselves with little to no help from humans. In order to drive safely, those machines obviously need to retrieve information from different channels, e.g. audio or video. Such systems may also be useful for access control in areas that need to be secured, or for quality control on production chains, e.g. as was proposed for textile products in [2].
One could think of two ways to achieve a machine of that kind: either engineer how it should process the information, or use methods allowing it to learn and determine that processing automatically. Those techniques form a research field that has been active for decades, called Machine Learning, which is part of the broader science of Artificial Intelligence (AI).
Machine Learning
In 1957, the psychologist Frank Rosenblatt proposed the Perceptron, one of the first systems capable of learning automatically without being explicitly programmed. He proposed a mathematical model, and also built a machine implementing that learning behaviour; he tested it with success on a simple letter recognition application. Its principle is very simple: the input image is captured by a retina, producing a small black and white image of the letter -black corresponds to 1, and white to 0. A weighted sum of those pixels is performed, and the sign function is applied to the result -for instance, one could state that the system must return 1 when the letter to recognize is an A, and -1 if it is a B. If the system returns the wrong value, then the weights are corrected so that the output becomes correct. A more formal, mathematical description of the Perceptron is provided later, in Section 2.1.1.1 on page 9. The system is also illustrated in Figure 1.2. Since the Perceptron, many trainable frameworks have been proposed, most of them following a neuro-inspired approach like the Perceptron or a statistical approach. They are described in Section 2.1.
Figure 1.2: Perceptron applied to pattern recognition. Figure 1.2a shows a hardware implementation, and Figure 1.2b presents the principle: each cell of the retina captures a binary pixel and returns 0 when white, 1 when black. Those pixels are connected to so-called input units, and are used to compute a weighted sum. If that sum is positive, then the net returns 1, otherwise it returns -1. Training a Perceptron consists in adjusting its weights. For a more formal and rigorous presentation, see page 9.
Recently, Machine Learning -and AI in general -gained renown from the spectacular research breakthroughs and applications initiated by companies such as Facebook, Google, Microsoft, Twitter, etc. For instance, Google DeepMind recently developed AlphaGo, a program capable of beating the world champion of Go [3]. Facebook is also using AI to automatically detect, localize and identify faces in pictures [4]. However those applications are meant to be performed on machines with high computational power, and it is beyond question to run such programs on constrained architectures, like those one expects to find on autonomous systems. Indeed, such devices fall into the field of Embedded Systems, which shall be presented now.
Embedded systems
Some devices are part of larger systems, in which they perform one task in particular, e.g. control the amount of gas that should be injected into the motor of a vehicle. Those so-called embedded systems must usually meet high constraints in terms of volume, power consumption, cost, timing and robustness. Indeed, they are often used in autonomous systems carrying batteries with limited power. In the case of mass produced devices such as phones or cars, it is crucial that their cost is as low as possible. Furthermore, they are often used in critical systems, where they must process information and deliver the result on time without error -any malfunction of those systems may lead to disastrous consequences, especially in the case of autonomous vehicles or military equipment. All those constraints also mean that embedded systems have very limited computational power.
Many research teams have proposed implementations of embedded intelligent systems, as shown in Section 2.2.2. The work proposed in this thesis falls into that research field.
However, as we shall see many of those implementations require high-end hardware, thus leading to potentially high cost devices. The NeuroDSP project 3 , in the frame of which this PhD thesis was carried out, aims to provide a device at a lower cost with a low power consumption.
NeuroDSP: a neuro-inspired integrated circuit
The goal of the research project of which this PhD is part is to design a chip capable of performing the computations required by the "intelligent" algorithms presented earlier.
As suggested in its name, NeuroDSP primarily focuses on the execution of algorithms based on the neural networks theory, among which lie the earlier mentioned Perceptron.
As shown in Section 2.1, the main operators needed to support such computations are linear signal processing operators such as convolution, pooling operators and non-linear functions. Most Digital Signal Processing (DSP) operators, such as convolution, actually need similar features -hence that device shall also be able to perform DSP operation, for signal preprocessing for instance. As we shall see, all those operations may be, most of the time, performed in parallel, thus leading to a single-instruction-multiple-data (SIMD) architecture, in which the same operations is applied in parallel to a large amount of data. The main advantage of this paradigm is obviously to carry those operations faster, potentially at a lower clock frequency. As the power consumption of a device is largely related to its clock frequency, SIMD may also allow a lower power consumption.
NeuroDSP is composed of 32 so called P-Neuro blocks, each basically consisting of a cluster of 32 Processing Elements (PE), thus totalling 1024 PE. A PE may be seen as an artificial neuron performing a simple operation on some data. All PEs in a single P-Neuro perform the same operation, along the lines of the aforementioned SIMD paradigm. A NeuroDSP device may then carry out signal processing and decision making operations. Since 1024 neurons may not be enough, they may be multiplexed to emulate larger systems -of course at a cost in terms of computation time. When timing is so critical that multiplexing is not a satisfying option, it is possible to use several NeuroDSP devices in cascade. The device's architecture is illustrated in Figure 1.3.
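As a purely illustrative sketch of the SIMD principle (this is not NeuroDSP code, and the partitioning below is only an assumption made for the example), a weighted sum over 1024 inputs could be split into 32 clusters of 32 lanes, each lane handling one product and each cluster reducing its own lane results:

import numpy as np

N_PNEURO, N_PE = 32, 32                      # 32 clusters of 32 processing elements

def simd_weighted_sum(weights, inputs):
    # one neuron with 1024 synapses, i.e. one value per processing element
    w = weights.reshape(N_PNEURO, N_PE)
    x = inputs.reshape(N_PNEURO, N_PE)
    partial = (w * x).sum(axis=1)            # each cluster reduces its 32 products
    return partial.sum()                     # final reduction across clusters

Larger neurons would be handled by looping that partition several times over the same hardware, which is the multiplexing trade-off mentioned above.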
Document overview
While NeuroDSP was designed specifically to run signal processing and decision making routines, such algorithms are most of the time too resource consuming to be performed efficiently on that type of device. It is therefore mandatory to optimize them, which is the main goal of the research work presented here.
In Chapter 2, a comprehensive tour of the works related to our research is proposed.
After presenting the theoretical background of machine learning as well as algorithms inspired by biological data, the main contributions concerning their implementations are shown. A discussion is also proposed, from which arises the problem that this document aims to address, namely: how may a preprocessing algorithm be optimized given particular face and pedestrian detection applications, and how may the data be efficiently encoded so that few hardware resources are used?
The first part of that problem is addressed in Chapter 3. While focusing on a preprocessing algorithm called HMAX, the main works in the literature concerning feature selection are recalled. Our contribution to that question is then proposed.
Chapter 4 presents our contribution to the second part of the raised problems, concerning data encoding. After recalling the main research addressing that issue, we show how a preprocessing algorithm may be optimized so that it may process data coded on a few bits only, with little to no performance drop. An implementation on reconfigurable hardware is then proposed.
Finally, Chapter 5 draws final thoughts and conclusions about the work proposed here.
The main problems and results are recalled, as well as the limitations. Future research directions are also proposed.
Chapter 2
Related works and problem statement
This chapter proposes an overview of the frameworks used in the pattern recognition field. Both its theoretical backbone and the main implementation techniques shall be presented. It is shown here that one of the key problems of many PR frameworks is their computational cost. The approaches addressing it mainly consist in either using machines with high parallel processing capabilities and high computational power, or on the contrary in optimizing the algorithms so they can be run with fewer resources. The problematics underlying the work proposed in this thesis, which follows the second paradigm, shall also be stated.
Theoretical background
In this section, the major theoretical contributions to PR are presented. The principal classification frameworks are first presented to the reader. Then, a description of several descriptors, which aim to capture the useful information from the processed images and to get rid of the noise, is proposed.
Classification frameworks
The classification of an unknown datum, also called vector or feature vector, consists in predicting the category it belongs to. Perhaps the simplest classification framework is Nearest Neighbour. It consists in storing examples of feature vectors in memory, each associated with the category it belongs to. To classify an unknown feature vector, one simply uses a distance (e.g. Euclidean or Manhattan) to determine the closest example.
The classifier then returns the category associated to that selected vector. While really simple, that framework however has many issues. The most obvious are its memory footprint and its computational cost: the more examples we have, the more expensive that framework is. From a theoretical point of view, that framework is also very sensitive to outliers; any peculiar feature vector, for instance in the case of a labelling error, may lead to disastrous classification performance. A way to improve this framework is to take not only the closest feature vector, but the K closest, and to make them vote for the category.
The retained category is then the one having the most votes [START_REF] Fix | Discriminatory analysis, nonparametric discrimination[END_REF]. That framework is called K-Nearest Neighbour (KNN). While this technique may provide better generalization and reduce the effects due to outliers, it still requires lots of computational resources.
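As an illustration, that decision rule may be sketched in a few lines of Python (using NumPy; the stored examples, their labels and the value of K are assumed to be given as arrays):

import numpy as np

def knn_predict(x, examples, labels, k=3):
    # Euclidean distances between the unknown vector and all stored examples
    distances = np.linalg.norm(examples - x, axis=1)
    # indices of the K closest examples
    nearest = np.argsort(distances)[:k]
    # majority vote among their categories
    votes = labels[nearest]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# usage: examples is an (N, d) array, labels an (N,) array of categories
# category = knn_predict(unknown_vector, examples, labels, k=5)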
There exist many other pattern classification frameworks. The most used of those frameworks shall now be described. Neural networks are presented first. A presentation of the Support Vector Machines framework shall follow. Finally, Ensemble Learning methods are presented. This document focuses on feedforward architectures only -non-feedforward architectures, such as Boltzmann Machines [START_REF] Hinton | Optimal perceptual inference[END_REF][START_REF] David | A learning algorithm for boltzmann machines[END_REF], Restricted Boltzmann Machines [START_REF] Rumelhart | Parallel Distributed Processing -Explorations in the Microstructure of Cognition: Foundations[END_REF][START_REF] Bengio | Classification using discriminative restricted Boltzmann machines[END_REF] and Hopfield networks [START_REF] Hopfield | Neural networks and physical systems with emergent collective computational abilities[END_REF] shall not be described here. We also focus on supervised learning frameworks, as opposed to unsupervised learning, such as self-organizing maps [START_REF] Kohonen | Self-organized formation of topologically correct feature maps[END_REF]. In supervised learning, each example is manually associated to a category, while in unsupervised learning the model "decides" by itself which vector goes to which category.
Neural Networks
Artificial Neural Networks (NN) are machine learning frameworks inspired by biological neural systems, used both for classification and regression tasks. Neural networks are formed of units called neurons, interconnected to each other by synapses. Each synapse has a synaptic weight, which represents a parameter of the model that shall be tuned during training. During prediction, each neuron performs a sum of its inputs, weighted by the synaptic weights. A non-linear function called activation function is then applied to the result, thus giving the neuron's activation, which feeds the neurons connected to the outputs of the considered one. In this thesis, only feedforward networks shall be considered. In those systems, neurons are organized in successive layers, where each unit in a layer gets inputs from units in the previous layer and feeds its activation to units in the next layer. The layer getting the input data is called the input layer, while the layer from which the network's prediction is read is the output layer. Such a framework is represented in Figure 2.1. For a complete overview of the existing neural networks, a good review is given in [START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF]. Figure 2.1: In each layer, units get their inputs from neurons in the previous layer and feed their outputs to units in the next layer.
Perceptron
The perceptron is one of the most fundamental contribution to the Neural Network field, and was introduced by Rosenblatt in 1962 in [START_REF] Rosenblatt | Principles of neurodynamics: perceptrons and the theory of brain mechanisms[END_REF]. It is represented in Figure 2.2. It has only two layers: the input layer and the output layer. A "dummy"
unit is added to the input layer, the activation of which is always 1 -the weight w 0 associated to that unit is called bias. Those layers are fully connected, meaning each output unit is connected to all input units. Thus, the total input value z of a neuron with N inputs and a bias w 0 is given by:
z = w_0 + Σ_{i=1}^{N} w_i x_i    (2.1)
Or, in an equivalent, more compact matrix notation:
z = W^T x    (2.2)
with x = (1, x 1 , x 2 , . . . , x n ) T and W = (w 0 , w 1 , w 2 , . . . , w N ) T . W is called weight vector.
In the case where there is more than one output unit, then W becomes a matrix where the i-th column is the weight vector for the i-th output unit. By denoting M the number of output units, z i the input value of the i-th output unit and z = (z 1 , z 2 , . . . , z M ), one may write:
z = W^T x    (2.3)
The output unit's activation function f is as follows:
∀x ∈ R, f(x) = +1 if x > θ; 0 if x ∈ [-θ, θ]; -1 if x < -θ    (2.4)
where θ represents a threshold (θ ≥ 0).
To train a Perceptron, it is fed with each feature vector x in the training set along with the corresponding target category t. Let's consider for now that we only have two different categories: +1 and -1. The idea is that, if the network predicts the wrong category, the difference between the target and the prediction, weighted by a learning rate and the input value, is added to the weights and bias. If the prediction is correct, then no modifications is made. The training algorithm is shown in more details for a
Perceptron having a single output unit in Algorithm 1. It is easily extensible to systems with several output units; the only major difference is that t is replaced by a target vector t, the components of which may be +1 or -1.
n ← number of input units; η ← learning rate
Initialize all weights and bias to 0
while stopping condition is false do
    forall (x = (x_1, x_2, . . . , x_n), t) in training set do
        y ← f(w_0 + w_1 x_1 + w_2 x_2 + · · · + w_n x_n)
        for i ← 1 to n do
            w_i ← w_i + η x_i (t - y)
        end
        w_0 ← w_0 + η (t - y)
    end
end
Algorithm 1: Learning rule for a perceptron with one output unit.
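For illustration purposes only, Algorithm 1 may be transcribed as follows in Python (with NumPy, a zero threshold θ and a fixed number of epochs as the stopping condition):

import numpy as np

def f(z, theta=0.0):
    # activation function of Equation (2.4)
    if z > theta:
        return 1
    if z < -theta:
        return -1
    return 0

def train_perceptron(X, t, eta=0.1, epochs=100):
    n = X.shape[1]              # number of input units
    w = np.zeros(n)             # weights
    w0 = 0.0                    # bias
    for _ in range(epochs):     # stopping condition: fixed number of passes
        for x, target in zip(X, t):
            y = f(w0 + np.dot(w, x))
            if y != target:     # the update is non-zero only on errors
                w += eta * x * (target - y)
                w0 += eta * (target - y)
    return w, w0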
If there exists a hyperplane separating the two categories, then the problem is said to be linearly separable. In that case, the perceptron convergence theorem [START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF][START_REF] Michael | Brains, Machines, and Mathematics[END_REF][START_REF] Hertz | Introduction to the Theory of Neural Computation[END_REF][START_REF] Minsky | Perceptrons -An Intro to Computational Geometry Exp Ed[END_REF] states that such a hyperplane shall be found in a finite number of iterations -even if one cannot know that number a priori. However, that condition is required, meaning the perceptron is not able to solve non-linearly separable problems. Therefore, it is not possible to train a perceptron to perform the XOR operation. This is often referred to as the "XOR problem" in the literature, and was one of the main reasons why neural networks had not known great popularity in industrial applications in the past. A way to address this class of problems is to use several layers instead of a single one.
A network with one or more intermediate, "hidden" layers between the input and output layers is called a Multi-Layer Perceptron (MLP). Its units use a differentiable, non-linear activation function, such as the hyperbolic tangent:
∀x ∈ R, f(x) = tanh(x)    (2.5)
or the very similar bipolar sigmoid:
∀x ∈ R, f(x) = 2 / (1 + e^{-x}) - 1    (2.6)
Those functions' curves are represented in Figure 2.4. The MLP's training algorithm is somewhat more complicated, and follows the Stochastic Gradient Descent approach. Let E be the cost function measuring the error between the expected result and the network's prediction. The goal is to minimize E, the shape of which is unknown. The principle of the algorithm achieving that is called back-propagation of error [START_REF] Rumelhart | Learning Internal Representations by Error Propagation[END_REF][START_REF] Rumelhart | Learning representations by back-propagating errors[END_REF].
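To give an idea of what one iteration of that procedure looks like, here is a minimal sketch (not the exact algorithm of the cited references) of the forward pass and of one stochastic gradient step for a network with a single hidden layer, tanh activations and a squared-error cost:

import numpy as np

def forward(x, W1, b1, W2, b2):
    h = np.tanh(W1 @ x + b1)        # hidden activations
    y = np.tanh(W2 @ h + b2)        # output activations
    return h, y

def sgd_step(x, t, W1, b1, W2, b2, eta=0.01):
    h, y = forward(x, W1, b1, W2, b2)
    # E = 0.5 * ||y - t||^2 ; back-propagate its gradient
    delta2 = (y - t) * (1.0 - y ** 2)          # output layer error term
    delta1 = (W2.T @ delta2) * (1.0 - h ** 2)  # hidden layer error term
    W2 -= eta * np.outer(delta2, h)
    b2 -= eta * delta2
    W1 -= eta * np.outer(delta1, x)
    b1 -= eta * delta1
    return 0.5 * np.sum((y - t) ** 2)          # current value of the cost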
RBF
Radial Basis Function networks were proposed initially by Broomhead and
Lowe [START_REF] Broomhead | Radial basis functions, multi-variable functional interpolation and adaptive networks[END_REF][START_REF] Broomhead | Multivariable Functional Interpolation and Adaptive Networks[END_REF] and fall in the kernel methods family. They consist in three layers: an input layer similar to the Perceptron's, a hidden layer containing kernels and an output layer. Here, a kernel i is a radial basis function f i (hence the name of the network) that measures the proximity of the input pattern x with a learnt pattern p i called center, according to a radius β i . It typically has the following form:
f_i(x) = exp(-||x - p_i|| / β_i)    (2.7)
The output layer is similar to a Perceptron: the hidden and output units are fully connected by synapses having synaptic weights, which are determined during the training stage. The network is illustrated in Figure 2.5.
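The prediction stage of such a network may be sketched as follows (illustrative Python code; the centers, radii and output weights are assumed to have been learnt beforehand, e.g. with the clustering procedure of Appendix A):

import numpy as np

def rbf_predict(x, centers, radii, W, b):
    # hidden layer: one kernel per learnt center, as in Equation (2.7)
    activations = np.exp(-np.linalg.norm(centers - x, axis=1) / radii)
    # output layer: a Perceptron-like weighted sum
    return W @ activations + b

# centers: (K, d) array, radii: (K,) array, W: (M, K) weights, b: (M,) biases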
To determine the kernels' parameters, one may adopt different strategies. Centers may be directly drawn from the training set, and radii may be arbitrarily chosen -however such an empirical solution leads to poor results. A more efficient way is to use a clustering algorithm that gathers the centers into clusters, the center of which shall represent an example center while the corresponding radius is evaluated w.r.t the proximity with other kernels. Such an algorithm is presented in Appendix A. The computational power and the memory required by this network grow linearly with the number of kernels.
While the training method presented in Appendix A tends to reduce the number of kernels, it may still be quite large. There exist sparse kernel machines, which work in a similar way to RBF networks but are designed to use as few kernels as possible, like the Support Vector Machines described in Section 2.1.1.2.
Spiking Neural Network
All the models presented above treat the information at the level of the neurons' activations. Spiking neural networks intend to describe the behaviour of the neurons at a lower level. That model was first introduced by Hodgkin et al. [START_REF] Hodgkin | A quantitative description of membrane current and its application to conduction and excitation in nerve[END_REF], who proposed a description of the propagation of the action potentials between biological neurons. There exist different variations of the spiking models, but the most used nowadays is probably the "integrate and fire" model, where the neuron's inputs are accumulated over time. When the total reaches a threshold, the neuron fires and emits a spike.
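A very simplified, discrete-time sketch of such an integrate-and-fire unit is given below (illustration only; it ignores the leak and refractory mechanisms found in more realistic models):

def integrate_and_fire(weighted_inputs, threshold=1.0):
    # weighted_inputs: sequence of input contributions received over time
    potential = 0.0
    spike_times = []
    for t, contribution in enumerate(weighted_inputs):
        potential += contribution          # accumulate the inputs over time
        if potential >= threshold:         # threshold reached: the neuron fires
            spike_times.append(t)
            potential = 0.0                # reset after the spike
    return spike_times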
Thus, the information sent by a neuron is not carried by a numerical value, but rather by the order of the spikes and the duration between two spikes. It is still an active research subject, with many applications in computer vision -Masquelier and Thorpe proposed the "spike timing dependent plasticity" (STDP) algorithm, which allows unsupervised learning of visual features [START_REF] Masquelier | Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity[END_REF].
Support Vector Machines
Support Vector Machines (SVM) determine a decision boundary using only a subset of the training vectors, namely those closest to the boundary [START_REF] Bishop | Pattern recognition and machine learning[END_REF]. The selected vectors are called support vectors. After selecting them, the decision boundary's parameters are optimized so that it is as far as possible from all support vectors. Typically, a quasi-Newton optimization process could be chosen to that end; however its description lies beyond the scope of this document. Figure 2.6
shows an example of their determination as well as the resulting decision boundary.
Ensemble learning
The rationale behind Ensemble Learning frameworks is that instead of having one classifier, it may be more efficient to use several ones [START_REF] Opitz | Popular ensemble methods: an empirical study[END_REF][START_REF] Polikar | Ensemble based systems in decision making[END_REF][START_REF] Rokach | Ensemble-based classifiers[END_REF][START_REF] Schapire | The strength of weak learnability[END_REF]. Those classifiers are called weak classifiers, and the final decision results from their predictions. There exist several paradigms, among which Boosting [START_REF] Breiman | Arcing classifier (with discussion and a rejoinder by the author)[END_REF] in particular.
Boosting algorithms are known for their computational efficiency during prediction. A good example is their use in Viola and Jones's famous face detection algorithm [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. The speed of the algorithm comes partly from the fact that the classifier is composed of a cascade of weak classifiers, in which all regions of the image that are clearly not faces are discarded by the top-level classifier. If the data goes through it, then it is "probably a face", and is processed by the second classifier, which either discards or accepts it, and so on. This allows irrelevant data and noise to be rapidly eliminated. Boosting is also known to be slightly more efficient than SVM for multiclass classification tasks with HMAX [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], which is described in Section 2.1.2.2.
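The principle of such a cascade may be summed up by the following sketch (the stage functions are hypothetical; each stage returns True when the candidate region may still contain a face):

def cascade_classify(region, stages):
    # stages: list of weak classifiers ordered from cheapest to most selective
    for stage in stages:
        if not stage(region):
            return False        # rejected early: clearly not a face
    return True                 # accepted by every stage: probably a face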
Feature extraction frameworks
Signal processing approach
Classical approaches More than ten years ago, Lowe proposed a major contribution in computer vision with his Scale Invariant Feature Transform (SIFT) descriptor [START_REF] David | Distinctive Image Features from Scale-Invariant Keypoints[END_REF],
which became quickly very popular due to its efficiency. Its primary aim was to provide, as suggested by its name, features that are invariant to the scale and to some extent to the orientation and small changes in viewpoint. It consists in matching features from the unknown image to a set of learnt features at different locations and scales, followed by a Hough transform that gathers the matched points in the image into clusters, which represent detected objects. The matching is operated by a fast nearest-neighbour algorithm, that indicates for a given feature the closest learnt feature. However, doing so at every locations and scale would be very inefficient, as most of the image probably does not contain much information. In order to find locations which are the most likely to hold information, a Difference of Gaussian (DoG) filter bank is applied to the image. Each DoG filter behaves as a band-pass filter, selecting edges at a specific spatial frequency and allowing to find features at a specific scale. Extrema are then evaluated across all those scales in the whole image, and constitute a set of keypoints at which the aforementioned matching operations are performed. As for rotation invariance, it is brought by the computation of gradients that are local to each keypoint. Before performing the actual matching, the data at a given keypoint is transformed according to those gradients so that any variability caused by the orientation is removed.
Bay et al. proposed in [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] a descriptor aiming to reproduce the result of the state of the art algorithm, but much faster to compute. They called their contribution SURF, for Speeded-Up Robust Features. It provides properties similar to SIFT (scale and rotation invariance), with a speed-up of 2.93X on a feature extraction task, where both frameworks were tuned to extract the same number of keypoints. Like SIFT, SURF consists in a detector that takes care of finding keypoints in the image, cascaded with a descriptor that computes features at those keypoints. The keypoints are evaluated using a simple approximation of the Hessian matrix, which can be efficiently computed thanks to the integral image representation, i.e. an image where each pixel contains the sum of all the original image's pixels located left and up to it [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. Descriptors are then computed locally using Haar wavelets, which can also be computed with the integral image [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]. Another popular framework for feature extraction is Histograms of Oriented Gradients (HOG) [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. It may be used in many object detection applications, though it was primarily designed for the detection of human beings. It consists in computing the gradients at each pixel, and making each of those gradients vote for a particular bin of a local orientation histogram. The weight with which each gradient votes is a linear function of its norm and of the difference between its orientation and the orientation of the closest bins' centers. Those histograms are then normalized over overlapping spatial blocks, and the result forms the feature vector. The classifier used here is typically a linear SVM, presented in Section 2.1.1.
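Coming back to the integral image used by SURF above, it is straightforward to compute and to use; the following NumPy sketch illustrates it (the zero-padded first row and column are one possible convention):

import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img over the rectangle [0, y) x [0, x)
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)), mode="constant")

def box_sum(ii, y0, x0, y1, x1):
    # sum of the original pixels in the rectangle [y0, y1) x [x0, x1),
    # obtained with only four memory accesses whatever the box size
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]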
Like many feature extraction frameworks, there exist some variations of the HOG feature descriptor. Dalal and Triggs present two of them in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]: R-HOG and C-HOG, respectively standing for "Rectangular HOG" and "Circular HOG". The difference with the HOG lies in the shape of the overlapping spatial blocks used for the gradient normalization. R-HOG is somewhat close to the SIFT descriptor presented earlier, except that computations are performed at all locations, thus providing a dense feature vector. C-HOG is somewhat trickier to implement due to the particular shape it induces, and shall not be presented here. All three frameworks provide similar recognition performances, which were the state of the art at that time.
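A much simplified version of that descriptor may be written as follows (illustrative NumPy code; it uses 8 x 8 pixel cells and hard assignment to the histogram bins instead of the weighted vote and block normalization scheme described above):

import numpy as np

def simple_hog(img, cell=8, nbins=9):
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # unsigned orientation, in [0, 180) degrees
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    n_cells_y, n_cells_x = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((n_cells_y, n_cells_x, nbins))
    for i in range(n_cells_y):
        for j in range(n_cells_x):
            mag = magnitude[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            ori = orientation[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j], _ = np.histogram(ori, bins=nbins,
                                         range=(0.0, 180.0), weights=mag)
    # L2 normalization of each cell histogram (instead of block normalization)
    norms = np.linalg.norm(hist, axis=2, keepdims=True) + 1e-6
    return (hist / norms).ravel()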
There are many other descriptors for images, like FAST [START_REF] Rosten | Machine Learning for High-Speed Corner Detection[END_REF][START_REF] Schmidt | An Evaluation of Image Feature Detectors and Descriptors for Robot Navigation[END_REF], and we shall not describe them in detail here as it lies beyond the scope of this document. However it is worth detailing another type of framework, based on so-called wavelets, which allow frequency information to be retrieved while keeping local information -which is not possible with the classical Fourier transform.
Figure 2.7: Invariant scattering convolution network [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Each layer applies a wavelet decomposition U λ to its inputs, and feeds the next layer with the filtered images U λ (x). At each layer, a low-pass filter is applied to the filtered images and the results are sub-sampled. The resulting so-called "scattering coefficients" S λ (x) are kept to form the feature vector.
Wavelets Wavelets have enjoyed great success in many signal processing applications, such as signal compression or pattern recognition, including for images. They are linear operators decomposing a signal locally on a frequency basis. A wavelet decomposition consists in applying a "basis" linear filter, called the mother wavelet, to the signal. It is then dilated in order to extract features of different sizes and, in the case of images, rotated so that it responds to different orientations. An excellent and comprehensive guide to the theory and practice of wavelets is given in [START_REF] Mallat | A Wavelet Tour of Signal Processing[END_REF].
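As a toy example, a single level of the simplest wavelet decomposition, the Haar wavelet, may be computed on an image as follows (the image sides are assumed to be even):

import numpy as np

def haar_level(img):
    # pairwise averages (low-pass) and differences (high-pass) along the rows
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # same operation along the columns, giving the four sub-bands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # approximation (low-pass twice)
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # detail sub-band
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # detail sub-band
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail sub-band
    return ll, lh, hl, hh

Applying the same operation again on the approximation ll gives the next, coarser level of the decomposition.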
Wavelets are used as the core operators of the Scattering Transform frameworks. Among them lie the Invariant Scattering Convolution Networks (ISCN), introduced by Bruna and Mallat [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. They follow a feedforward, multistage structure, along the lines of ConvNet described in Section 2.1.2.3, though contrary to ConvNet its parameters are fixed, not learnt. They alternate wavelet decompositions with low-pass filters and subsampling -the function of which is to provide invariance in order to raise classification performances. Each stage computes a wavelet decomposition of the images produced at the previous stage, and feed the resulting filtered images to the next stage. At each stage the network also outputs a low-pass filtered and sub-sampled version of those decompositions -the final feature vector is the concatenation of those output features.
Figure 2.7 sums up the data flow of this framework. It should be noted that in practice, not all wavelet are applied at each stage to all images: indeed it is shown in [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF] that some of those wavelet cascades do not carry information, and thus their computation may be avoided, which allows to reduce the algorithmic complexity. Variations of the ISCN with invariance to rotation are also presented in [START_REF] Sifre | Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination[END_REF][START_REF] Oyallon | Deep Roto-Translation Scattering for Object Classification[END_REF], which may be used for texture [START_REF] Sifre | Rotation, Scaling and Deformation Invariant Scattering for Texture Discrimination[END_REF] or objects [START_REF] Oyallon | Deep Roto-Translation Scattering for Object Classification[END_REF] classification.
A biological approach: HMAX
Some frameworks are said to be biologically plausible. In such case, their main aim is not so much to provide a framework as efficient as possible in terms of recognition rates or computation speed, but rather to propose a model of a biological system. One of the most famous of such frameworks is HMAX, which also happens to provide decent recognition performances. The biological background was proposed by Riesenhuber and
Poggio in [START_REF] Riesenhuber | Hierarchical models of object recognition in cortex[END_REF], on the base of the groundbreaking work of Hubel and Wiesel [START_REF] Hubel | Receptive fields, binocular interaction, and functional architecture in the cat's visual cortex[END_REF]. Its usability for actual object recognition scenarios was stated by Serre et al. 8 years later in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. It is a model of the ventral visual system in the cortex of the primates, accounting for the first 100 to 200 ms of processing of visual stimuli. As its name suggests -HMAX stands for "Hierarchical Max" -that model is built in a hierarchical manner. Four successive stages, namely S1, C1, S2 and C2 process the visual data in a feedforward way. The S1 and S2 layers are constituted of simple cells, performing linear operations or proximity evaluations, while the C1 and C2 contain complex cells that provide some degrees of invariance. Figure 2.8 sums up the structure of this processing chain. Let's now describe each stage in detail.
The S1 stage consists in a Gabor filter bank. Gabor filters -which are here two dimensional, as we process images -are linear filters responding to patterns of a given spatial frequency and orientation. They are a particular form of the wavelets described in Section 2.1.2.1. A Gabor filter is described as follows:
G(x, y) = exp(-(x_0² + γ² y_0²) / (2σ²)) × cos(2π x_0 / λ)    (2.8)
x 0 = x cos θ + y sin θ and y 0 = -x sin θ + y cos θ (2.9)
where γ is the filter's aspect ratio, θ its orientation, σ the Gaussian effective width and λ the cosine wavelength. The S2 stage aims to compare the input features to a dictionary of learnt features.
There are different ways to build up that dictionary. In [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF] it is proposed to simply crop patches of different sizes in images in C1 space at random position and scales.
During feedforward, patches are cropped from images in C1 space at all locations and scales, and are compared to each learnt feature. The comparison operator is a radial basis function, defined as follows:
∀i ∈ {1, 2, . . . , N }, r_i(X) = exp(-β ||X - P_i||)    (2.10)
where X is the input patch from the previous layer, P i the i-th learnt patch in the dictionary and β > 0 is a tuning parameter. Therefore, the closer the input patch is to the S2 unit learnt patch, the stronger the S2 unit fires.
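The equations above lend themselves to a direct transcription; the sketch below builds one S1 Gabor kernel from Equations (2.8)-(2.9) and computes the S2 response of Equation (2.10) for a single patch (illustration only; the parameter values in the example and the missing C1/C2 pooling steps are not those of the reference implementation):

import numpy as np

def gabor_kernel(size, theta, sigma, lam, gamma=0.3):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x0 = x * np.cos(theta) + y * np.sin(theta)
    y0 = -x * np.sin(theta) + y * np.cos(theta)
    # Equations (2.8)-(2.9): Gaussian envelope times an oriented cosine
    return (np.exp(-(x0 ** 2 + gamma ** 2 * y0 ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * x0 / lam))

def s2_response(patch, learnt_patch, beta=1.0):
    # Equation (2.10): the closer the patch is to the learnt one,
    # the stronger the unit fires
    return np.exp(-beta * np.linalg.norm(patch - learnt_patch))

# example: an 11x11 S1 filter tuned to 45 degrees (parameter values are arbitrary)
# kernel = gabor_kernel(11, theta=np.pi / 4, sigma=4.5, lam=5.6)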
Finally, a complete invariance to locations and scales of the features in C1 space is reached in the C2 stage. Each C2 unit pools over all S2 units sharing the same learnt pattern, and simply keeps the maximum value. Those values are then serialized in order to form the feature vector. The descriptor HMAX provides is well suited to detect the presence of an object in cluttered images, though the complete invariance to location and scales brought by C2 removes information related to its location. This issue is addressed in [START_REF] Chikkerur | What and where: A Bayesian inference theory of attention[END_REF] -however that model lies beyond the scope of this thesis and shall not be discussed here. Concerning the Gabor filters in S1, σ represents the spread of their Gaussian envelopes and λ the wavelength of their underlying cosine functions [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF].
Convolutional Neural Networks
Convolutional Neural Networks (ConvNets) alternate convolution and pooling layers, much like the S1 and C1 layers of HMAX, followed by a fully connected layer similar to a MLP. However, the parameters of the convolution kernels are not predefined, but rather learnt at the same time as the weights in the final classifier. Thus, the feature extraction and classification models are both tuned simultaneously, using an extension of the back-propagation algorithm. An example of this model is presented in Figure 2.9. That framework became very popular since the industry demonstrated its efficiency, and is today actively used by big companies such as Facebook, Google, Twitter, Amazon and Microsoft. A particular implementation of that framework, tuned to perform best at face recognition tasks, was proposed by Garcia et al. [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. However, the large amount of parameters to be optimized by the training algorithm requires a huge amount of data in order to avoid overfitting, lots of computational power and lots of time -still, pretrained models are provided by the community, making that problem avoidable.
Frameworks implementations
Software implementations
There exists many implementation of the descriptors and classifier described in Section 2.1. Some of them are available in general purpose software packages, like the widespread Scikit-learn python package [START_REF] Pedregosa | Scikit-learn: Machine learning in Python[END_REF]. SVM also have a high performance dedicated library with LIBSVM [START_REF] Chang | LIBSVM: A library for support vector machines[END_REF]. Other frameworks, more dedicated to neural networks -and particularly deep learning -are accelerated on GPUs, like Theano [START_REF] Bastien | Theano: new features and speed improvements[END_REF][START_REF] Bergstra | Theano: a CPU and GPU math expression compiler[END_REF],
Caffe [START_REF] Jia | Caffe: Convolutional architecture for fast feature embedding[END_REF], Torch [START_REF] Collobert | Torch7: A Matlablike Environment for Machine Learning[END_REF], cuDNN [START_REF] Woolley | cuDNN: Efficient Primitives for Deep Learning[END_REF] and the recently released TensorFlow [START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF]. There also exist frameworks more oriented towards neuroscience, such as PyNN [START_REF] Davison | PyNN: A Common Interface for Neuronal Network Simulators[END_REF] and NEST [START_REF] Plesser | Nest: the neural simulation tool[END_REF].
The Parallel Neural Circuit Simulator (PCSIM) allows to handle large-scale models composed of several networks that may use different neural models, and is able to handle several millions of neurons and synapses [START_REF] Pecevski | PCSIM: a parallel simulation environment for neural circuits fully integrated with Python[END_REF]. As for spiking neural networks, the BRIAN framework [START_REF] Goodman | Brian: A Simulator for Spiking Neural Networks in Python[END_REF][START_REF] Dan | The Brian Simulator[END_REF] provides an easy to use simulation environment. Uetz and Behnke proposed a hierarchical neural network framework along with its implementation on GPU [START_REF] Uetz | Large-scale object recognition with CUDA-accelerated hierarchical neural networks[END_REF], using the CUDA framework.
This framework was especially designed for large-scale object recognition. The authors claim a very low testing error rate of 0.76 % on MNIST, a popular hand-written digit dataset initially provided by Burges et al [START_REF] Christopher | Mnist database[END_REF], and 2.87 % on the general purpose NORB dataset [START_REF] Lecun | Learning methods for generic object recognition with invariance to pose and lighting[END_REF].
Embedded systems
Optimizations for software implementations, both on CPU and GPU, of the SIFT and SURF frameworks have also been proposed [START_REF] Kim | A fast feature extraction in object recognition using parallel processing on CPU and GPU[END_REF]. It has also been shown that wavelets are very efficient to compute, even with low hardware resources [START_REF] Courroux | Use of wavelet for image processing in smart cameras with low hardware resources[END_REF], which makes them a reasonable choice for feature extraction on embedded systems. Furthermore, an embedded version of the SpiNNaker board described in Section 2.2.2, aimed at autonomous robots and programmable using the C language or languages designed for neural network programming, is presented in [START_REF] Galluppi | Event-based neural computing on an autonomous mobile platform[END_REF].
Hardware implementations
As shown in Section 2.2.1, GPUs are very efficient platforms for the implementation of classification and feature extraction frameworks, particularly for neuromorphic algorithms, due to their highly parallel architecture. Field Programmable Gate Arrays (FPGA) are another family of massively parallel platforms, and as such are also good candidates for efficient implementations. They are reconfigurable hardware devices, in which the user implement algorithms at a hardware level. Therefore, they provide a much finer control than the GPU: one implements indeed the communication protocols, the data coding, how computations are performed, etc. -though they utilization is also more complicated. FPGAs are configured using hardware description languages, like VHDL or Verilog.
Going further down in the abstraction levels, there also exists fully analogical neural network implementations that use a component called memristor [START_REF] Brousse | Neuro-inspired learning of low-level image processing tasks for implementation based on nano-devices[END_REF][START_REF] Chabi | Robust neural logic block (NLB) based on memristor crossbar array[END_REF][START_REF] Choi | An electrically modifiable synapse array of resistive switching memory[END_REF][START_REF] He | Design and electrical simulation of on-chip neural learning based on nanocomponents[END_REF][START_REF] Liao | Design and Modeling of a Neuro-Inspired Learning Circuit Using Nanotube-Based Memory Devices[END_REF][START_REF] Retrouvey | Electrical simulation of learning stage in OG-CNTFET based neural crossbar[END_REF][START_REF] Retrouvey | Electrical simulation of learning stage in OG-CNTFET based neural crossbar[END_REF][START_REF] Snider | From Synapses to Circuitry: Using Memristive Memory to Explore the Electronic Brain[END_REF][START_REF] Versace | The brain of a new machine[END_REF][START_REF]Molecular-junction-nanowire-crossbar-based neural network[END_REF]. The resistance of such components can be controlled by the electric charge that goes through it. That resistance value is analogous to a synaptic weight. As it is still at the fundamental research level, analogical neural network shall not be studied here.
Neural networks
The literature concerning hardware implementations of neural networks is substantial.
A very interesting and complete survey was published in 2010 by Misra et al [START_REF] Misra | Artificial neural networks in hardware: A survey of two decades of progress[END_REF]. Feedforward neural networks are particularly well suited for hardware implementations, since the layers are, by definition, computed sequentially: the data goes through each layer successively, so that while layer i processes image k, image k + 1 is processed by layer i - 1, which naturally enables pipelining. Another strategy is, on the contrary, to implement a single layer on the device and to use layer multiplexing to sequentially load and apply each layer to the data, thus saving lots of hardware resources at the expense of a higher processing time [START_REF] Himavathi | Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization[END_REF]. However, it has been demonstrated that neural networks that are not feedforward may also be successfully implemented on hardware [START_REF] Ly | High-Performance Reconfigurable Hardware Architecture for Restricted Boltzmann Machines[END_REF][START_REF] Coussy | Fully-Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories[END_REF].
There also exist hardware implementations of general-purpose bio-inspired frameworks, such as Perplexus, which offers among other things the capability for hardware devices to self-evolve, featuring dynamic routing and automatic reconfiguration [START_REF] Upegui | The perplexus bio-inspired reconfigurable circuit[END_REF]; it is particularly suited for large-scale biological system emulation. Architectures of adaptive size have also been proposed, which can dynamically scale themselves when needed [START_REF] Héctor | A networked fpga-based hardware implementation of a neural network application[END_REF].
While the mentioned works intend to be general-purpose frameworks with no particular application in mind, some contributions also propose implementations for very specific purposes, such as the widespread face detection and identification task [START_REF] Yang | Implementation of an rbf neural network on embedded systems: real-time face tracking and identity verification[END_REF], or more peculiar applications such as gas sensing [START_REF] Benrekia | FPGA implementation of a neural network classifier for gas sensor array applications[END_REF] or classification of data acquired from magnetic probes [START_REF] Nguyen | FPGA implementation of neural network classifier for partial discharge time resolved data from magnetic probe[END_REF].
Some frameworks received special consideration from the community in those attempts.
After presenting the works related to HMAX, the next paragraphs present the numerous and promising approaches for ConvNet implementations. The many contributions that concern Spiking Neural Networks are presented afterwards.
HMAX Many contributions about hardware architectures for HMAX have been proposed by Al Maashri and his colleagues [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF][START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF][START_REF] Debole | FPGA-accelerator system for computing biologically inspired feature extraction models[END_REF][START_REF] Maashri | Accelerating neuromorphic vision algorithms for recognition[END_REF][START_REF] Park | Saliencydriven dynamic configuration of HMAX for energy-efficient multi-object recognition[END_REF][START_REF] Sun Park | An FPGAbased accelerator for cortical object classification[END_REF]. Considering that in HMAX the most resource-consuming stage is, by far, the S2 layer [START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF], a particular effort was made in [START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF] to propose a suitable hardware accelerator for that part. In that paper, Al Maashri et al. proposed a stream-based correlation, where input data is streamed to several pattern-matching engines performing the required correlation operations in parallel. The whole model, including the other layers, was implemented on a single-FPGA and on a multi-FPGA platform, which respectively provide 23× and 89× speedups compared with a CPU implementation running on a system with a quad-core 3.2 GHz Xeon processor and 24 GB of memory. The single-FPGA platform uses a Virtex-6 FX-130T, and the multi-FPGA one embeds four Virtex-5 SX-240T, all of which are high-end devices.
Those systems did not have any drop in accuracy compared to the CPU implementation.
A complete framework allowing neuromorphic algorithms to be mapped to multi-FPGA systems is presented by Park et al. in [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF]. The chosen hardware platform is called Vortex [START_REF] Park | A reconfigurable platform for the design and verification of domain-specific accelerators[END_REF]; it was designed to implement and map hardware accelerators for stream-based applications. One of the biggest challenges for such systems is inter-device communication, which is addressed in that work with the design of specific network interfaces. The framework also allows the mapping to be achieved in a standardized way, with the help of a specially designed tool called Cerebrum. As a proof of concept, a complete image processing pipeline was implemented, cascading a preprocessing stage, a visual saliency determination and an object recognition module using HMAX.
That pipeline was also implemented on CPU in C/C++ and on GPU with CUDA for comparison. The gain provided by the system is a speedup of 7.2× compared to the CPU implementation and 1.1× compared to the GPU implementation. As for the power efficiency, the gain is 12.1× compared to the CPU implementation and 2.3× compared to the GPU implementation.
Kestur et al proposed with their CoVER system [START_REF] Kestur | Emulating Mammalian Vision on Reconfigurable Hardware[END_REF] a multi-FPGA based implementation of visual attention and classification algorithms -the latter being operated by HMAX -that aims to process high resolution images nearly in real time. It has a pre-processing stage, followed by either an image classification or a saliency detection algorithm, or both, depending on the chosen configuration. Each process uses a hardware accelerator running on an FPGA device. The architecture was implemented on a DNV6F6-PCIe prototyping board, which embeds six high-end Virtex6-SX475T FPGAs: one of them is used for image preprocessing and routing data, another one to compute HMAX's S1 and C1 feature maps, two perform the computations of HMAX's S2 and C2
features, and the remaining two are used both as repeaters and to compute the saliency maps.
To our knowledge, the most recent hardware architecture for HMAX was proposed in 2013 by Orchard et al [99]. It was successfully implemented on a Virtex 6 ML605 board, which carries a XC6VLX240T FPGA. The implementation is almost identical to the original HMAX described in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], and is able to process 190 images per second with less than 1% loss in recognition rate compared with standard software implementations, for both binary and multiclass object recognition tasks. One of the major innovations of this contribution is the use of separable filters for the S1 layer: it was indeed shown that all filters used in HMAX, at least in the original version presented in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF], may be expressed as separable filters or as linear combinations of separable filters, which considerably reduces the utilization of FPGA resources. That engine is composed of three submodules: a wrapper that takes care of communications with other modules, a weight loader that manages the convolution kernels' coefficients, and the convolution engine itself, which performs the actual computation. In order to perform the convolution operations in streams, the convolution engine stores a stripe of the image and performs convolutions as soon as enough data is available, so that for a K × K convolution kernel the system needs to store K - 1 lines. Thus, the system can output one pixel per clock cycle. That engine reached the to-date state of the art in terms of energy efficiency, with 2.76 GOPS/mW.
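The separable-filter optimization mentioned above can be illustrated with a short sketch (a minimal illustration under our own assumptions, not the implementation of [99]): a 2-D kernel of rank r can be written as a sum of r outer products of 1-D filters, obtained for instance from its SVD, and Gabor-like S1 kernels typically have a very low rank, so only a few pairs of 1-D convolutions are needed instead of one full 2-D convolution.

    import numpy as np
    from scipy.signal import convolve2d

    def separable_terms(kernel, energy=0.999):
        # Decompose a 2-D kernel into a sum of separable (column x row) 1-D filter pairs via SVD.
        u, s, vt = np.linalg.svd(kernel)
        keep = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
        return [(np.sqrt(s[i]) * u[:, i:i+1], np.sqrt(s[i]) * vt[i:i+1, :]) for i in range(keep)]

    def filter_separable(image, kernel):
        # Apply the 2-D filter as a sum of column-then-row 1-D convolutions: roughly 2K MACs per
        # pixel and per retained term, instead of K * K for the direct 2-D convolution.
        out = np.zeros(image.shape)
        for col, row in separable_terms(kernel):
            out += convolve2d(convolve2d(image, col, mode="same"), row, mode="same")
        return out

For a K × K kernel of rank r, the cost per pixel thus drops from K² to about 2rK multiply-accumulates, which is the saving exploited by the hardware architecture above.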
ConvNet
To our knowledge, the most recent effort concerning the implementation of ConvNets on hardware lies in the Origami project [START_REF] Cavigelli | Origami: A Convolutional Network Accelerator[END_REF]. The contributors claim that their integrated circuit is low-power enough to be embeddable, while handling networks that only workstations with GPUs could handle before. To achieve this, the pixel stream is first, if necessary, cropped to a Region Of Interest (ROI) by a dedicated module. A filter bank is then run on that ROI. Each filter consists in a combination of channels, each performing multiplication-accumulation (MAC) operations on the data it receives.
Each channel then sums its final results individually and outputs the pixel values in the stream. That system achieves a high throughput of 203 GOPS when running at 700 MHz, and consumes 744 mW.
Spiking Neural Networks Due to the potentially low computational resources they need, SNNs also have their share of hardware implementation attempts. Perhaps the best known is the Spiking Neural Network Architecture (SpiNNaker) project [START_REF] Furber | The SpiNNaker Project[END_REF]. It may be described as a massively parallel machine capable of simulating neuromorphic systems in real time, i.e. it respects biologically plausible timings. It is basically a matrix of interconnected processors (up to 2,500 in the largest implementation), split into several nodes of 18 processors, each processor simulating 1,000 neurons. The main advantage of using spikes is that the information is carried by the firing timing, as explained in Section 2.1.1.1, page 12; thus each neuron needs to send only small packets to the other neurons. However, the huge number of those packets and of potential destinations makes it challenging to route them efficiently. In order to guarantee that each emitted packet arrives on time at the right destination, the packet itself only contains the identifier of the emitting neuron. The router then forwards it to the appropriate processors according to that identifier, depending on the network's topology, and more precisely on which neurons the emitting neuron is connected to.
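A minimal sketch of this source-based routing principle is given below (an illustration of the idea only, with made-up neuron identifiers and core names; it does not reproduce SpiNNaker's actual router): the packet carries only the identifier of the neuron that fired, and a routing table derived from the network topology maps that identifier to the set of cores hosting its target neurons.

    from collections import defaultdict

    # Hypothetical topology: neuron id -> ids of the neurons it projects to.
    topology = {0: [3, 4], 1: [4], 2: [3, 5]}
    # Hypothetical placement: neuron id -> core that simulates it.
    core_of = {0: "A", 1: "A", 2: "B", 3: "C", 4: "C", 5: "D"}

    # The routing table is built once from the topology: source neuron -> destination cores.
    routing_table = defaultdict(set)
    for src, targets in topology.items():
        for tgt in targets:
            routing_table[src].add(core_of[tgt])

    def route(packet):
        # A packet is nothing more than the identifier of the neuron that fired.
        return sorted(routing_table[packet])

    print(route(0))  # ['C']: both targets of neuron 0 live on core C, a single packet suffices
    print(route(2))  # ['C', 'D']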
IBM and the EPFL (École Polytechnique Fédérale de Lausanne) collaborated to start a large and very ambitious research program: the Blue Brain project, which aims to use an IBM Blue Gene supercomputer to simulate mammalian brains, first those of small animals like rodents, and eventually the human brain [START_REF] Markram | The blue brain project[END_REF]. However, it is highly criticized by the scientific community, mostly for its cost, the lack of realism in the choice of its goals and the contributions it led to [START_REF] Kupferschmidt | Virtual rat brain fails to impress its critics[END_REF]. While still ongoing, that project led to the creation of SyNAPSE, meaning Systems of Neuromorphic Adaptive Plastic Scalable Electronics. Since the Blue Brain project needed a supercomputer, the aim of SyNAPSE is to design a somewhat more constrained system; the TrueNorth chip [START_REF] Merolla | A million spiking-neuron integrated circuit with a scalable communication network and interface[END_REF] was proposed in the frame of that project. Large-scale spiking neural networks using neuromorphic hardware-compatible models have also been proposed [START_REF] Krichmar | Large-scale spiking neural networks using neuromorphic hardware compatible models[END_REF], run in a simulation environment. The authors backed the proposition that neural networks may be useful for both engineering and modeling purposes, and supported the fact that spiking neural networks are particularly well suited to the Address Event Representation communication scheme, which consists in transmitting only the information about particular events instead of the full information, which is particularly useful to reduce the required bandwidth and computations.
However, that strategy lies beyond the scope of this document.
Other frameworks implementations
There exist many academic works that are yet to be mentioned, for both classifiers and descriptors. As for classifiers, Kim et al proposed a bio-inspired processor for real-time object detection, achieving a high throughput (201.4 GOPS) while consuming 496 mW.
Other frameworks for pattern recognition systems that are not biologically inspired have been proposed. For instance, Hussain et al proposed an efficient implementation of the simple KNN algorithm [START_REF] Hussain | An adaptive implementation of a dynamically reconfigurable K-nearest neighbour classifier on FPGA[END_REF], and an implementation of the almost-equally-simple Naive Bayes framework is proposed in [START_REF] Hongying Meng | FPGA implementation of Naive Bayes classifier for visual object recognition[END_REF]. Anguita et al proposed a framework allowing user-defined FPGA cores for SVMs to be generated [START_REF] Anguita | A FPGA Core Generator for Embedded Classification Systems[END_REF]. An implementation of Gaussian Mixture Models, which from a computational point of view are somewhat close to RBF nets and as such may require lots of memory and hardware resources, has also been presented [START_REF] Shi | An Efficient FPGA Implementation of Gaussian Mixture Models-Based Classifier Using Distributed Arithmetic[END_REF]. Concerning feature extraction, the popular SIFT descriptor has been successfully implemented on FPGA devices [START_REF] Bonato | A Parallel Hardware Architecture for Scale and Rotation Invariant Feature Detection[END_REF][START_REF] Yao | An architecture of optimised SIFT feature detection for an FPGA implementation of an image matcher[END_REF], as has SURF [START_REF] Svab | FPGA based Speeded Up Robust Features[END_REF].
Some companies also proposed their own neural network implementations, long before the arrival of ConvNet, HMAX and other hierarchical networks. Intel proposed an analog neural processor called ETANN in 1989 [START_REF] Holler | An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses[END_REF]. While harder to implement and not as flexible as their digital counterparts, analog devices are much faster. That processor embeds 64 PEs acting as as many neurons, and 10,240 connections. The device was parameterizable by the user through a software tool called BrainMaker. A digital neural architecture called L-Neuro was presented by Philips for the first time in 1992 [START_REF] Mauduit | Lneuro 1.0: a piece of hardware LEGO for building neural network systems[END_REF][START_REF] Duranton | L-Neuro 2.3: a VLSI for image processing by neural networks[END_REF]. It was designed with modularity as a primary concern, and is thus easily interconnected with other modules, which makes it scalable. In its later version, that system was composed of 12 DSP processors, achieving 2 GOP/s with a 1.5 GB/s bandwidth, and was successfully used for PR applications.
IBM also proposed its own neural processor, the Zero Instruction Set Computer (ZISC) [START_REF] Madani | ZISC-036 Neuroprocessor Based Image Processing[END_REF]. It was composed of a matrix of processing elements, each acting like a kernel function of an RBF network, as detailed in Section 2.
Discussion
In the previous sections of this chapter, the theoretical background of pattern recognition was presented as well as different implementations of pattern recognition framework on different platforms. This Section is dedicated to the comparison of those frameworks.
Descriptors and then classifiers shall be discussed in terms of robustness and complexity, with an emphasis on how well they may be embedded. Afterwards, the problematics underlying the research work presented here shall be stated.
Concerning descriptors, SURF was shown in [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] to be both more accurate and faster than SIFT.
The accuracy brought by HMAX for computer vision was groundbreaking [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. It showed better performances than SIFT in many object recognition tasks, mainly on the Caltech101 dataset. Those results were corroborated by the work of Moreno et al, who compared the performances of HMAX and SIFT on object detection and face localization tasks, and found that HMAX indeed performed better than SIFT [START_REF] Moreno | A Comparative Study of Local Descriptors for Object Category Recognition: SIFT vs HMAX[END_REF]. It is also worth mentioning the very interesting work of Jarrett et al [START_REF] Jarrett | What is the Best Multi-Stage Architecture for Object Recognition?[END_REF], in which they evaluated the contribution of several properties of different computer vision frameworks applied to object recognition. That paper confirms and generalizes the aforementioned work of Moreno et al: it states that multi-stage architectures in general, which include HMAX and ConvNets, perform better than single-stage ones, such as SIFT.
ConvNets achieve outstandingly good performances on large datasets such as MNIST [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF] or ImageNet [START_REF] Szegedy | Going Deeper with Convolutions[END_REF][START_REF] He | Delving deep into rectifiers: Surpassing human-level performance on imagenet classification[END_REF]; in comparison, HMAX's performances are lower. However, the number of parameters a ConvNet must optimize is very large, so it needs a huge amount of data to be trained properly; indeed, models with many parameters are known to be more subject to overfitting [START_REF] Bishop | Pattern recognition and machine learning[END_REF]. If data is scarce, it is worth considering a framework with fewer parameters, such as HMAX: as explained in Section 2.1.2.2, its training stage simply consists in cropping images at random locations and scales.
Although that randomness is clearly suboptimal and has been the subject of optimization works in the past [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF], it presents the advantage of being very simple.
(Fragment of Table 2.2, ConvNet row: accuracy very high on large datasets; requires a large training dataset.)
Furthermore, while it has been stated that HMAX's accuracy is related to the number of features in the S2 dictionary, the performance does not improve much beyond 1,000 patches. Assuming only 1 patch per image is cropped during training, one would then require about 1,000 training images, which is much lower than the tens of thousands usually gathered to train a ConvNet [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. That state of things led to the thought that, while working in many situations, ConvNet may not be the most suitable tool for all applications, particularly when the training set is small. Another possibility would be to use an Invariant Scattering Convolution Network as the first layers of a ConvNet, as suggested in [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF], instead of optimizing the weights of the convolution kernels during the training stage.
Due to their performances, those three multistage architectures -ConvNet, ISCN and HMAX -seem like the most promising options for most computer vision applications.
However, another important aspect that must be taken into account is that of their respective complexities: they have different requirements in terms of computational resources and memory, which shall be decisive when choosing one of them, especially in the case of embedded systems. In that respect, legacy descriptors such as HOG, SIFT and SURF in particular are interesting alternatives.
In order to set boundaries to the present work, a few descriptors must be chosen so that most of the effort can focus on them. To that end, Table 2.2 sums up the main features of the presented descriptors. As the aim is to achieve state of the art accuracy, the work presented in this thesis shall mostly relate to the three aforementioned multistage architectures: ConvNet, ISCN and HMAX.
Classifiers
For a given application, after selecting the descriptor believed to be the most appropriate, one must choose a classifier. Like descriptors, classifiers have different characteristics in terms of robustness, complexity and memory footprint, both for training and prediction. Most of the time, the classification stage itself is not the most demanding in a processing chain, and thus may not need to be accelerated. In the cases where such acceleration is needed, the literature on the subject is already substantial (see Section 2.1.1). For those reasons, the present document shall not address hardware acceleration for classification. However, as the choice of the classifier plays a decisive role in the robustness of the system, the useful criteria for classifier selection shall be presented.
Let's first consider the training stage. As it shall in any case be performed on a workstation and not on an embedded system, constraints in terms of complexity and memory footprint are not so high. However, a clear distinction must be made between iterative training algorithms and the others. An iterative algorithm processes the training samples one by one, or by batches: it does not need to load all the data at once, and is therefore well suited for training with lots of samples. On the other hand, non-iterative algorithms such as SVM or RBF training need the whole dataset in memory, which is not a problem for reasonably small datasets but may become one when there are many data points; obviously the limit depends on the hardware configuration used to train the machine, though in any case efficient training requires powerful hardware.
The classifier must also be efficient during prediction; here, "efficiency" means speed, as robustness depends largely on training. Feedforward frameworks, like most of those presented here, have the advantage of being fast compared to more complex frameworks. In linear classifiers such as perceptrons or linear SVMs, classification often simply consists in a matrix multiplication, which is well optimized even on non-massively-parallel architectures like CPUs, thanks to libraries such as LAPACK [START_REF]Lapack -linear algrebra package[END_REF] or BLAS [START_REF]Blas -basic linear algebra subprogram[END_REF].
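As an illustration (a minimal sketch that does not correspond to any specific system discussed here), the prediction step of a linear classifier over a batch of feature vectors reduces to a single matrix product, which NumPy typically dispatches to an underlying BLAS routine:

    import numpy as np

    def linear_predict(features, weights, bias):
        # features: (n_samples, n_features); weights: (n_classes, n_features); bias: (n_classes,)
        scores = features @ weights.T + bias   # one GEMM call, handled by BLAS
        return np.argmax(scores, axis=1)       # predicted class index per sample

    # Toy usage with random data, purely illustrative.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 256))          # 4 feature vectors of dimension 256
    W = rng.standard_normal((2, 256))          # hypothetical 2-class linear classifier
    b = np.zeros(2)
    print(linear_predict(X, W, b))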
The speed of kernel machines, e.g. RBF networks or certain types of SVM, is often directly related to the number of kernel functions used. For instance, the more training examples, the more kernels an RBF net may have (see Appendix A). Particular care must therefore be taken during the training stage of such nets, so that the number of kernels stays at a manageable level. Finally, ensemble learning frameworks such as boosting algorithms are often used when speed is critical in an application, and have been demonstrated to be very efficient in the case of face detection for instance [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF].
Those considerations put aside, according to the literature HMAX is best used with either AdaBoost or SVM classifiers respectively for one-class and multi-class classification tasks [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Concerning ISCN, it is suggested to use a SVM for prediction [START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Concerning ConvNet, it embeds its own classification stage which typically takes the form of an MLP [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Lecun | Convolutional networks and applications in vision[END_REF]. Now that the advantages and drawbacks of both the classification and feature extraction frameworks have been stated, the next section proposes a comparison between different implementation techniques.
Implementations comparison
In order to implement those frameworks, a naive approach would be to run them on a CPU, as it is probably the most widespread computing machine. However, that would be particularly inefficient, as those frameworks are highly parallel while such devices are by nature sequential: a program consists in a list of successive instructions that are run one after the other. Their main advantage, however, is that they are fairly easy to program. For that reason, CPU implementations remain a quasi-mandatory step when testing a framework.
GPUs are also fairly widespread devices, even in mainstream machines. The advent of video games demanding more and more resources dedicated to graphics processing led to a massive production of those devices, which provoked a dramatic drop in costs. For those reasons they are a platform of choice for many neuromorphic applications.
While GPUs are somewhat more complicated to program than CPUs, the advent of higher-level languages such as CUDA made their programming reasonably accessible. The number of frameworks using that kind of platform, and moreover their success, show that it is a very popular piece of hardware for that purpose [START_REF] Bastien | Theano: new features and speed improvements[END_REF][START_REF] Collobert | Torch7: A Matlablike Environment for Machine Learning[END_REF][START_REF] Woolley | cuDNN: Efficient Primitives for Deep Learning[END_REF][START_REF] Abadi | TensorFlow: Large-scale machine learning on heterogeneous systems[END_REF][START_REF]CUDA Implementation of a Biologically Inspired Object Recognition System[END_REF]. However, their main disadvantages are their volume and power consumption, the latter being in the order of magnitude of 10 W. For embeddable systems the power consumption should not go beyond 1 W, which is where reconfigurable hardware devices are worth considering.
FPGAs present two major drawbacks. First, they are not as massively produced as GPUs and CPUs, which raises their cost. Their other downside actually goes along with their greatest quality: they are entirely reconfigurable, from the way the computations are organized to the data coding scheme, and such flexibility comes at the price of a higher development time (and thus cost) than CPUs and GPUs. However, their power consumption is most of the time below 1 W, and can be optimized with appropriate coding principles. They are also much smaller than GPUs, and the low power consumption leads to cooler circuits, which saves the energy and space that would normally be required to keep the device at a reasonable temperature. Furthermore, they are reconfigurable at a much finer grain than GPUs, and thus provide even more parallelization than the latter. All these criteria make FPGAs good candidates for embedded implementations of computer vision algorithms.
Problem statement
The NeuroDSP project presented in Section 1.4 aims to propose an integrated circuit for embedded neuromorphic computations, with strong constraints in terms of power consumption, volume and cost. The ideal solution would be to produce the device as an Application Specific Integrated Circuit (ASIC); however, its high cost makes it a realistic choice only when the chip is guaranteed to be sold in high quantities, which may be a bit optimistic for a first model. For that reason, we chose to implement that integrated circuit on an FPGA. As one of the aims of NeuroDSP is to be cost-efficient, we aim to run those neuromorphic algorithms on mid-range hardware. Towards that end, one must optimize them w.r.t. two aspects: complexity and hardware resource consumption. The first aspect may be addressed by identifying which parts of the algorithm are the most important, and which parts can be discarded. A way to address the second aspect is to optimize data encoding, so that computations require less logic. Those considerations lead to the following problematics, which shall form the matter of the present document:
• How may neuromorphic descriptors be chosen appropriately and how may their complexity be reduced?
• How may the data handled by those algorithms be efficiently coded so as to reduce hardware resources?
Conclusion
In this chapter we presented the works related to the present document. The problematics that we aimed to address were also stated. The aim of the contributions presented here is to implement efficient computer vision algorithms on embedded devices, with high constraints in terms of power consumption, volume, cost and robustness. The primary use case scenario concerns image classification tasks.
There exist many theoretical frameworks for classifying data, be it images, one-dimensional signals or others. Naive algorithms such as Nearest Neighbour have the advantage of being really simple to implement; however, they may achieve poor classification performance, and cost too much memory and computational power when used on large datasets. More sophisticated frameworks, such as neural networks, SVMs or ensemble learning algorithms, can achieve better results.
In order to help the classifier, it is also advisable to use a descriptor, the aim of which is to extract relevant features from the sample to be processed. Among such descriptors figures HMAX, which is inspired by neurophysiological data acquired on mammals. Such frameworks are said to be neuro-inspired, or bio-inspired. Another popular framework is ISCN, which decomposes the input image with particular filters called wavelets.
One of the most popular frameworks nowadays is ConvNet, which is basically a classifier with several preprocessing layers that act as a descriptor. While impressively efficient, it needs to be trained with a huge amount of training data, which is a problem for applications where data is scarce. In such cases it may seem more reasonable to use other descriptors, such as HMAX or ISCN, in combination with a classifier. The algorithms mentioned above are most of the time particularly well suited for parallel processing. While it is easier to implement them on CPU using languages such as C, the efficiency gained when running them on massively parallel architectures makes the effort worthwhile. There exist several frameworks using GPU acceleration; however, GPUs are ill-suited for most embedded applications where power consumption is critical. FPGAs are better candidates in those cases, and contributions about implementations on such devices have been proposed.
The aim of the work presented in this document is to implement those demanding algorithms on mid-range reconfigurable hardware platforms. To achieve that, it is necessary to adapt them to the architecture; such a study is called "Algorithm-Architecture Matching" (AAM). That need raises two issues: how may those frameworks be reduced, and how may the data handled for computation be efficiently encoded, so as to use as few hardware resources as possible? The present document proposes solutions addressing those two questions.
Chapter 3
Feature selection
This chapter addresses the first question stated in Chapter 2, concerning the optimization of a descriptor for specific applications. The first contribution presented here is related to a face detection task, while the second one proposes optimizations adapted to a pedestrian detection task. In both cases, the optimization scheme and rationale are presented, along with a study of the complexity of major frameworks addressing the considered task. Accuracies obtained with the proposed descriptors are compared to those obtained with the original framework and with the systems described in the literature.
Those changes in accuracy are then put in perspective with the computational gain.
General conclusions are presented at the end of this Chapter.
Feature selection for face detection
This Section focuses on a handcrafted feature extractor for a face detection application.
We start from a descriptor derived from HMAX and propose a detailed complexity analysis; we also determine where the most crucial information lies for that specific application, and we propose optimizations that reduce the algorithm's complexity.
After reminding the reader of the major techniques used in face detection, we present our contribution, which consists in finding and keeping the most important information extracted by a framework derived from HMAX. Performance comparisons with state-of-the-art frameworks are also presented.
Detecting faces
For many applications, either mainstream or professional, face detection is a crucial issue. Its most obvious use case is to address security problems, e.g. identifying a person may help in deciding whether access should be granted or denied. It may also be useful in human-machine interaction, for instance when a device should react in some way if a human user shows particular states such as distress, pain or unconsciousness; to do that, the first step is to detect and locate the person's face. That second scenario falls into embedded systems, which explains our interest in optimizing face detection frameworks. Among the most used face detection techniques are Haar-like feature extraction and, as usual, ConvNets. We shall now describe the use of those two paradigms for this particular problem, as well as a framework called HMIN which is the basis of our work.
Cascade of Haar-like features
Before the spread of ConvNets, one of the most popular frameworks for face detection was the Viola-Jones algorithm [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]; it is still very popular, as it is readily available in numerous widely used image processing tools, such as OpenCV [START_REF]Itseez. Open source computer vision library[END_REF]. As we shall see, the main advantages of this framework are its speed and its decent performance.
Framework description
Viola and Jones' framework is built on two main ideas [START_REF] Viola | Robust real-time face detection[END_REF]: using low-level features that are easy and fast to compute, the so-called Haar-like features, in combination with a boosting classifier that selects and classifies the most relevant ones. Classifiers are cascaded so that the most obviously non-face regions of the image are discarded first, allowing more computational time to be spent on the most promising regions. A naive implementation of the Haar-like features may use convolution kernels consisting of 1 and -1 coefficients, as illustrated in Figure 3.1. Such features may be computed efficiently using an image representation proposed in [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF] called the integral image. In such a representation, the pixel located at (x, y) takes as value the sum of the original image's pixels located in the rectangle defined by the points (0, 0) and (x, y), as shown in Figure 3.2. To compute such an image F one may use the following recurrence [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF]:
F (x, y) = F (x -1, y) + s (x, y) , (3.1)
with
s (x, y) = s (x, y -1) + f (x, y) (3.2)
where f (x, y) is the original image's pixel located at (x, y).

(Figure 3.1 caption: They can be seen as convolution kernels where the grey parts correspond to +1 coefficients, and the white ones to -1. Such features can be computed efficiently using integral images [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. Point coordinates are presented here for later use in the equations characterizing feature computations.)

Using this representation, the computation of a Haar-like feature may be performed with few addition and subtraction operations. Moreover, the number of operations does not depend on the scale of the considered feature. Let's consider first the feature on the left of Figure 3.1, and let's assume its top-left corner is located at (x_1, y_1) and its bottom-right corner at (x_2, y_2). Given the integral image II, its response r_l(x_1, y_1, x_2, y_2) is given by
r_l(x_1, y_1, x_2, y_2) = F(x_1, y_g, x_2, y_2) - F(x_1, y_1, x_2, y_g)   (3.3)

with F(x_1, y_1, x_2, y_2) the integral of the values in the rectangle delimited by (x_1, y_1) and (x_2, y_2), expressed as

F(x_1, y_1, x_2, y_2) = II(x_2, y_2) + II(x_1, y_1) - II(x_1, y_2) - II(x_2, y_1)   (3.4)

where II(x, y) is the value of the integral image at location (x, y). As for the response r_r(x_1, y_1, x_2, y_2) of the feature on the right, we have:

r_r(x_1, y_1, x_2, y_2) = F(x_w, y_2, x_g, y_1) - F(x_1, y_1, x_g, y_2) - F(x_w, y_1, x_2, y_2)   (3.5)
The locations of the points are shown in Figure 3.1. Once features are computed, they are classified using a standard classifier, such as a perceptron for instance. If the classifier does not reject the features as "not-face", complementary features are computed and classified, and so on until either all features are computed and classified as "face", or the image is rejected. This cascade of classifiers allows most non-face images to be rejected early in the process, which is one of the main reasons for the framework's low complexity. Now that we have described the so-called Viola-Jones framework, we shall study its computational complexity.
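To make the arithmetic above concrete, here is a minimal sketch (illustrative only, with the convention that both rectangle corners are inclusive) of the integral image recurrence of Equations 3.1-3.2 and of the constant-cost box sum that underlies every Haar-like feature response:

    import numpy as np

    def integral_image(f):
        # II(x, y): sum of all original pixels in the rectangle from (0, 0) to (x, y),
        # equivalent to the recurrences (3.1)-(3.2).
        return f.cumsum(axis=0).cumsum(axis=1)

    def box_sum(ii, x1, y1, x2, y2):
        # Sum of the original pixels in the rectangle delimited by (x1, y1) and (x2, y2),
        # computed with at most 3 additions/subtractions whatever the rectangle size.
        total = ii[x2, y2]
        if x1 > 0:
            total -= ii[x1 - 1, y2]
        if y1 > 0:
            total -= ii[x2, y1 - 1]
        if x1 > 0 and y1 > 0:
            total += ii[x1 - 1, y1 - 1]
        return total

    # A two-rectangle Haar-like response is then simply the difference of two box sums.
    img = np.arange(25, dtype=np.int64).reshape(5, 5)
    ii = integral_image(img)
    print(box_sum(ii, 1, 1, 3, 3), img[1:4, 1:4].sum())  # both print 108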
(Figure 3.2 caption: Integral image representation. II(X, Y) = Σ_{x=1}^{X} Σ_{y=0}^{Y} f(x, y) is its value at the point with coordinates (X, Y), and f(x, y) is the value of the original image at location (x, y) [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF].)

Complexity analysis Let's now evaluate the complexity involved by that algorithm when classifying images. The first step of the computation of those Haar-like features on an image is to compute its integral image. According to Equations 3.1 and 3.2, it takes only 2 additions per pixel. Then, the complexity C_II^VJ of this process for a w × h image is given by

C_II^VJ = 2wh.   (3.6)
That serves as the basis of the computation of the Haar-like features, as we saw earlier.
The complexity highly depends on the number of computed features, and for this study we shall stick to the implementation proposed in the original paper [START_REF] Viola | Robust real-time face detection[END_REF]. In that work, the authors have a total of 6060 features to compute; however, they also claimed that, given the cascade of classifiers they used, only N_f = 8 features are computed on average. From [START_REF] Viola | Robust real-time face detection[END_REF], we know that each feature needs from 6 to 9 operations to compute; we shall consider here that, on average, a feature needs N_op = 7.5 operations. We note that, thanks to the computation based on the integral image, the number of operations does not depend on the size of the computed feature. After that, the features are classified; however, we focus our analysis on the feature extraction only, so we do not take that aspect into account here. Thus, denoting C_F^VJ the complexity involved at this stage, we have

C_F^VJ = N_op N_f.   (3.7)
In addition, images must be normalized before being processed. Viola et al. proposed in [START_REF] Viola | Robust real-time face detection[END_REF] to normalize the contrast of the image using its standard deviation σ, given by

σ = √( m² - (1/N) Σ_{i=0}^{N} x_i² ),   (3.8)

where m is the mean of the pixels of the image, N = wh is the number of pixels and x_i is the value of the i-th pixel. Those values may be computed simply as

m = II(W, H) / (wh)   (3.9)

(1/N) Σ_{i=0}^{N} x_i² = II_2(W, H) / (wh)   (3.10)
where II_2 denotes the integral image representation of the original image with all its pixels squared. The computation of that integral image thus needs one squaring operation per pixel, to which we must add the computations required by the integral image itself, which leads to a total of 3wh operations. Computing m requires a single operation, as does computing (1/N) Σ_{i=0}^{N} x_i². As the feature computation is entirely linear and since the normalization simply consists in multiplying the feature by the standard deviation, that normalization may simply be applied after the feature computation, involving a single operation per feature. Thus, the complexity C_N^VJ involved by image normalization is given by

C_N^VJ = 3wh + N_f   (3.11)
From Equations 3.6, 3.7 and 3.11, the framework's global complexity is given by
C^VJ = C_II^VJ + C_F^VJ + C_N^VJ = 5wh + (N_op + 1) N_f,   (3.12)

which, considering the implementation proposed in [START_REF] Viola | Robust real-time face detection[END_REF], i.e. with w = h = 24, N_f = 8 and N_op = 7.5, leads to a total of 2948 operations. Although strikingly low, it must be emphasized here that this value is an average; when a face is actually detected, all 6060 features must be computed and classified, which then leads to 54,390 operations. However, for fair comparison we shall stick to the average value later in the document.
Now that we have evaluated the complexity of the processing of a single w × h image, let's evaluate it in the case where we scan a scene in order to find and locate faces. Normally, one would typically use several sizes of descriptors in order to find faces of different sizes [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]; however, in order to simplify the study we shall stick here to a single scale. Let W and H respectively be the width and height of the frame to process, and let N_w be the number of windows processed in the image. If we classify subwindows at each location of the image, we have

N_w = (W - w + 1)(H - h + 1).   (3.13)

The integral images are first computed on the whole 640 × 480 image; after that, features must be computed, normalized and classified for each window. From Equations 3.6, 3.7, 3.11 and 3.13 we know that we need

C^VJ = 2WH + N_op N_f N_w + 3WH + N_f N_w   (3.14)
     = 5WH + N_f N_w (N_op + 1)   (3.15)
     = 5WH + N_f (W - w + 1)(H - h + 1)(N_op + 1).   (3.16)
In the case of a 640 × 480 image, with w = 24, h = 24, N_f = 8 and N_op = 7.5 as before, we get C^VJ = 20.7 MOP. Figure 3.3 shows the repartition of that complexity into several types of computations, considering that we derive from the above analysis that we need 4WH + N_op N_f additions and WH multiplications.
Memory footprint
Let's now evaluate the memory required by that framework when processing a 640 × 480 image. Assuming the pixels of the integral image are coded as 32-bit integers, the integral image would require 1.2 MB to be stored entirely. Assuming ROIs are evaluated sequentially on the input image, that 6060 features are computed at most and that each feature is coded as a 32-bit integer, we would require 24.24 kB to store the features. Thus, the total memory footprint required by that framework would be, in that case, 1.48 MB. That framework also has the great advantage that a single integral image may be used to compute features of various scales, without the need to compute, store and manage an image pyramid, as required by other frameworks; more information about image pyramids is available in Section 3.1.3.2.
We presented the use of Haar-like features in combination with the AdaBoost classifier for the face detection task, as proposed by Viola and Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. We shall now present and analyse another major tool for this task, the Convolutional Face Finder (CFF). The framework is shown in Figure 3.4.
It should be noted that, during the prediction stage, the network can in fact process the whole image at once, instead of running the full computation window by window [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Long | Fully Convolutional Networks for Semantic Segmentation[END_REF]. This technique saves lots of computations, and is readily implemented if one considers the N1 layer as a convolution filter bank with kernels of size 6 × 7, and the N2 layer as another filter bank with 1 × 1 convolution kernels [START_REF] Long | Fully convolutional networks for semantic segmentation[END_REF].
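As an illustration of that trick (a minimal sketch with made-up sizes, not the actual CFF code), a fully connected layer trained on flattened 6 × 7 feature patches can be reused as a bank of 6 × 7 correlation kernels, so that applying it densely over a whole feature map yields one score map per unit:

    import numpy as np
    from scipy.signal import correlate2d

    def dense_as_convolution(feature_map, fc_weights, patch_shape):
        # fc_weights: (n_units, patch_h * patch_w), assumed flattened in row-major order.
        # Returns an (n_units, H - patch_h + 1, W - patch_w + 1) stack of score maps.
        ph, pw = patch_shape
        kernels = fc_weights.reshape(-1, ph, pw)
        # Correlation (not convolution) reproduces the dense layer's dot product at each position.
        return np.stack([correlate2d(feature_map, k, mode="valid") for k in kernels])

    # Toy usage: 3 hypothetical N1-like units with 6x7 receptive fields applied to a 20x30 map.
    rng = np.random.default_rng(0)
    maps = dense_as_convolution(rng.standard_normal((20, 30)), rng.standard_normal((3, 42)), (6, 7))
    print(maps.shape)  # (3, 15, 24)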
Complexity analysis Let's now evaluate the complexity involved by the CFF algorithm. Denoting C_XX^CFF the complexity brought by layer XX, and neglecting the classification as done in Section 3.1.1.1, we have

C^CFF = C_C1^CFF + C_S1^CFF + C_T1^CFF + C_C2^CFF + C_S2^CFF + C_T2^CFF,   (3.17)

where TX represents a non-linearity layer, in which a hyperbolic tangent is applied to each feature of the input feature map.

(Figure 3.4 caption: Convolutional Face Finder [50]. This classifier is a particular topology of a ConvNet, consisting in a first convolution layer C1 with four trained convolution kernels, a first sub-sampling layer S1, a second convolution layer C2 partially connected to the previous layer's units, a second sub-sampling layer S2, a partially connected layer N1 and a fully connected layer N2 with one output unit.)

Let's first evaluate C_C1^CFF. It consists in 4 convolutions, which are mainly composed of Multiplication-Accumulation (MAC) operations; we assume each MAC corresponds to a single operation, as it may be done on dedicated hardware. Thus we have
C_C1^CFF = 4 × 5 × 5 (W - 4)(H - 4)   (3.18)
         = 100WH - 400(W + H) + 1600.   (3.19)

Since the S1 layer consists in the computation of means of features over contiguous non-overlapping receptive fields, each feature is involved once and only once in the computation of a mean, which also requires a MAC operation per pixel. At this point we have 4 feature maps of size (W - 4) × (H - 4), and so

C_S1^CFF = 4(W - 4)(H - 4)   (3.20)
         = 4WH - 16(W + H) + 64.   (3.21)
Now, the non-linearity layer must be applied: a hyperbolic tangent function is applied to each feature of the 4 feature maps of size W_S1 × H_S1, with

W_S1 = (W - 4)/2   (3.22)
H_S1 = (H - 4)/2,   (3.23)

and thus, considering the best case where a hyperbolic tangent may be computed in a single operation,

C_T1^CFF = 4 × (W - 4)/2 × (H - 4)/2   (3.24)
         = WH - 4(W + H) + 16.   (3.25)
The C2 layer consists in 20 convolutions, the complexity of which may be derived from Equation 3.18. Then, there are 6 element-wise sums of feature maps, which after the convolutions are of dimensions

((W - 4)/2 - 2) × ((H - 4)/2 - 2),   (3.26)

and thus we have

C_C2^CFF = (20 × 3 × 3 + 6 × 3 × 3) ((W - 4)/2 - 2)((H - 4)/2 - 2)   (3.27)
         = 9 × 26 (W/2 - 4)(H/2 - 4)   (3.28)
         = 234 (WH/4 - 2(W + H) + 16)   (3.29)
         = 58.5WH - 468(W + H) + 3744.   (3.30)

The complexity of the S2 layer may be derived from Equations 3.20 and 3.26, giving

C_S2^CFF = 3.5WH - 28(W + H) + 224.   (3.31)
And finally, the complexity of the last non-linearity may be expressed as

C_T2^CFF = 14 W_S2 H_S2   (3.32)

with

W_S2 = (1/2) ((W - 4)/2 - 2)   (3.33)
H_S2 = (1/2) ((H - 4)/2 - 2).   (3.34)

As shown in [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF] and [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF], and as recalled earlier, the features may be efficiently extracted at once on the whole image, by applying all the convolutions and subsamplings directly to it. Thus, we may compute that complexity directly by reusing Equation 3.36, and we get 50.7 MOP.
Memory footprint
Let's now evaluate the memory required by the CFF framework.
As in Section 3.1.1.1, we shall consider here the case where we process a 640 × 480 image, without an image pyramid. The first stage produces 4 feature maps of size 636 × 476; assuming the values are coded using a single-precision floating-point scheme, hence using 32 bits, that stage requires a total of 4.84 MB. As the non-linearity and subsampling stages may be performed in place, they do not bring any further memory need. The second convolution stage, however, produces 20 feature maps of size 316 × 236. Using the same encoding scheme as before, we need 59.7 MB. We should also take into account the memory needed by the weights of the convolution and subsampling layers, but it is negligible compared to the values obtained previously. Hence, the total memory footprint is 64.54 MB. It should be noted that that amount would be much higher in the case where we process an image pyramid, as usually done. However, we stick to an evaluation on a single scale here for consistency with the complexity study.
This Section was dedicated to the description and study of the CFF framework. Let's now do the same study on another framework, which we refer to as HMIN.
HMIN
Framework description In order to detect and locate faces in images, one may use HMAX, which was described in Section 2.1.2.2. However, using that framework to locate an object requires different ROIs of the image to be processed separately. In such a case, the S2 and C2 layers of HMAX provide little gain in performance, as they are mostly useful for object detection in clutter [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Considering the huge gain in computational complexity when not using the two last layers, we propose here to use only the first two layers for our application. In the rest of the document, the framework constituted by the S1 and C1 layers of HMAX shall be referred to as HMIN.
We presented the so-called HMIN framework, on which we base our further investigations. We shall now study its complexity, along the lines of what we have proposed earlier for Viola-Jones and the CFF.
Complexity analysis
The overall complexity C_HMIN involved by the two stages S1 and C1 of HMIN is simply

C_HMIN = C_S1^HMIN + C_C1^HMIN   (3.37)

where C_S1^HMIN and C_C1^HMIN are respectively the complexities of the S1 and C1 layers.
The S1 layer consists in a total of 64 convolutions on the W × H input image. Different kernel sizes are involved, but it is important that all feature maps fed into the C1 layer are of the same size. Thus, the convolution must be computed at all positions, even those where the center of the convolution kernel is on the edge of the image. Missing pixels may take any value: either simply 0, or the value of the nearest pixel for instance.
Denoting k_i the size of the convolution kernel at scale i, as presented in the filter column of Table 2.1, we may write

C_S1^HMIN = 4 Σ_{i=1}^{16} WH k_i² = 36146 WH.   (3.38)

As for the C1 layer, it may be applied as follows: first, the element-wise maximum operations across pairs of feature maps are computed, which take 8WH operations; then we apply the windowed max pooling. Since there is a 50% overlap between the receptive fields of two contiguous C1 units, and neglecting the border effects, each feature of each S1 feature map is involved in 4 computations of maximums. Since those operations are computed on 32 feature maps, and adding the complexity of the first computation, we get

C_C1^HMIN = 8WH + 8 × 4WH = 40WH.   (3.39)

This Section was dedicated to the presentation of several algorithms suited for face detection, including HMIN, which shall serve as the basis of our work. The next Section is dedicated to our contributions in the effort of optimizing HMIN.
HMIN optimizations for face detection
In this Section we propose optimizations for HMIN, specific to face detection applications. We begin by analysing the output of the C1 layer, and we then propose our simplifications accordingly. Experimental results are then shown. This work is based on the one presented in [START_REF] Boisard | Optimizations for a bio-inspired algorithm towards implementation on embedded platforms[END_REF], which we pushed further as described below.
C1 output
As HMIN intends to be a general-purpose descriptor, it aims to grasp features of various types. Figure 3.6 shows an example of the C1 feature maps for a face. The eyes, nose and mouth are the most prominent objects of the face, and as such one can expect HMIN to be particularly sensitive to them, as it is based on the mammalian visual system; this can indeed easily be seen in Figure 3.6. One can also see that the eyes and mouth are more salient when θ = π/2, and that the nose is more salient when θ = 0. Furthermore, one can also see that the extracted features are redundant across C1 maps of neighbouring scales and same orientations. We therefore propose to keep only the S1 filters with orientation θ = π/2. Due to the redundancy across scales, we also propose to sum the outputs of the S1 layer, which is equivalent to summing the remaining kernels of the filter bank to produce one unique 37 × 37 convolution kernel. The smaller kernels are padded with zeros so that they are all 37 × 37 and may be summed coefficient-wise. This operation is summed up in Figure 3.7. Figure 3.8 also shows the output of that unique kernel applied to the image of a face.

(Figure 3.7 caption: S1 convolution kernel sum. Kernels smaller than 37 × 37 are padded with 0's so that they are all 37 × 37. Kernels are then summed element-wise so as to produce the kernel on the right. It is worth mentioning the proximity of that kernel with one of the features selected by the AdaBoost algorithm in the Viola-Jones framework [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF], shown in Figure 3.1.)
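That kernel merging step can be sketched as follows (illustrative only; the actual S1 filters at orientation θ = π/2 are the Gabor kernels whose parameters are given in Table 2.1, which the random kernels below merely stand in for): each smaller kernel is zero-padded to 37 × 37, centred, and all padded kernels are summed coefficient-wise into a single filter.

    import numpy as np

    def merge_s1_kernels(kernels, size=37):
        # Zero-pad each kernel to size x size (centred) and sum them element-wise.
        merged = np.zeros((size, size))
        for k in kernels:
            kh, kw = k.shape
            top, left = (size - kh) // 2, (size - kw) // 2
            merged[top:top + kh, left:left + kw] += k
        return merged

    # Toy usage with random odd-sized kernels standing in for the 16 S1 scales (7x7 up to 37x37).
    rng = np.random.default_rng(0)
    fake_kernels = [rng.standard_normal((s, s)) for s in range(7, 39, 2)]
    print(merge_s1_kernels(fake_kernels).shape)  # (37, 37)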
Since we now only have one feature map, we must adapt the C1 layer. As all C1 units now pool over the only remaining scale, we propose to take the median value N_m among the N_s values shown in Table 2.1, namely 16, as the width of the pooling window. Following the lines of the original model, the overlap between the receptive fields of two neighbouring C1 units shall be ∆_m = 8. We shall refer to this descriptor as HMIN_θ=π/2 later on.
Let's now evaluate the complexity involved in this model. We have a single K × K convolution kernel, with K = 37. Applying it to a W × H image thus requires an amount of MAC operations given by

C_S1 = (W - K + 1)(H - K + 1).   (3.41)

As for the C1 layer, it needs

C_C1 = (W - K + 1)(H - K + 1)   (3.42)

maximum operations. As for the memory footprint, since we produce a single (W - K + 1) × (H - K + 1) feature map of single-precision floating-point numbers, that optimized version of HMIN needs 4(W - K + 1)(H - K + 1) bytes.

HMIN^R_θ=π/2
Following what has been done earlier, we propose to reduce the algorithmic complexity even further. Indeed, we process somewhat "large" 128 × 128 face images with a large 37 × 37 convolution kernel. Perhaps we do not need such a fine resolution; in fact, the CFF takes very small 32 × 36 images as inputs. Thus, we propose to divide the complexity of the convolution layer by 16 by simply resizing the convolution kernel to 9 × 9 using bicubic interpolation, thanks to Matlab's imresize function with its default parameters. Finally, the maximum pooling layer is adapted by dividing its parameters by 4 as well: the receptive fields are 4 × 4, with 2 × 2 overlaps between two receptive fields. Hence, our new descriptor, which we shall refer to as HMIN^R_θ=π/2 later on, expects 32 × 32 images as inputs, thus providing vectors of the exact same dimensionality as HMIN_θ=π/2. The complexity involved by that framework is expressed as
C_HMIN = C_S1^HMIN + C_C1^HMIN,   (3.43)

with

C_S1^HMIN = 9 × 9 × W × H = 81WH   (3.44)
C_C1^HMIN = 4WH,   (3.45)

which leads to

C_HMIN = 85WH.   (3.46)
As we typically expect 32 × 32 images as inputs, the classification of a single image would take 82.9 kOP. Extracting features from a 640 × 480 image as done previously would require 26.1 MOP, and the memory footprint would be the same as for HMIN_θ=π/2, assuming we can neglect the memory needed to store the coefficients of the 9 × 9 kernel; hence we need here 1.22 MB.
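The kernel reduction itself can be sketched as follows (a hypothetical Python equivalent of the Matlab imresize call mentioned above, using OpenCV's bicubic interpolation; the interpolation coefficients differ slightly from Matlab's defaults):

    import numpy as np
    import cv2

    def reduce_kernel(kernel_37, new_size=9):
        # Shrink the 37x37 summed S1 kernel to new_size x new_size with bicubic interpolation.
        return cv2.resize(kernel_37.astype(np.float32), (new_size, new_size),
                          interpolation=cv2.INTER_CUBIC)

    # The 9x9 kernel costs 81 MACs per output pixel instead of 1369, i.e. roughly a 16x reduction.
    k37 = np.random.default_rng(0).standard_normal((37, 37)).astype(np.float32)
    print(reduce_kernel(k37).shape)  # (9, 9)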
Experiments
Test on LFWCrop grey
In this Section, we evaluate the different versions of HMIN presented in the previous Section. To perform the required tests, face images were provided by the Cropped Labelled Face in the Wild (LFW crop) dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF], which shall be used as positive examples.
Negative examples were obtained by cropping patches at random positions from the "background" class - which shall be referred to as "Caltech101-background" - of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. All feature vectors v = (v_1, v_2, \dots, v_N) are normalized so that the lowest value is set to 0 and the maximum value is set to 1, producing a vector v' = (v'_1, v'_2, \dots, v'_N):
\forall i \in \{1, \dots, N\} \quad v'_i = \frac{\tilde{v}_i}{\max_{k \in \{1,\dots,N\}} \tilde{v}_k} \qquad (3.47)
\forall i \in \{1, \dots, N\} \quad \tilde{v}_i = v_i - \min_{k \in \{1,\dots,N\}} v_k \qquad (3.48)
For each version of HMIN, we needed to train a classifier. We selected 500 images at random from LFW crop and another 500 from Caltech101-background. We chose to use an RBF classifier. The images were also transformed according to the descriptor, i.e. resized to 128 × 128 for both HMIN and HMIN_{θ=π/2}, and resized to 32 × 32 for HMIN^R_{θ=π/2}. The kerneling parameter of the RBF network was set to µ = 2 - see Appendix A for more information about the RBF learning procedure that we used.
After training, 500 positive and 500 negative images were selected at random among the images that were not used for training to build the testing set. All images were, again, transformed w.r.t the tested descriptor, the feature vectors were normalized and classification was performed. Table 3.1 shows the global accuracies for each descriptor, using a naive classification scheme with no threshold in the classification function. Figure 3.9
shows the Receiver Operating Characteristic curves obtained for all those classifiers on that dataset. In order to build those curves, we apply the classification process to all testing images, and for each classification we compare its confidence to a threshold.
That confidence is the actual output of the RBF classifier, and indicates how certain the classifier is that its prediction is correct. If the confidence is higher than the threshold, then the classification is kept; otherwise it is rejected. By modifying that threshold, we make the process more or less tolerant. If the network is highly tolerant, it tends to produce higher false and true positive rates; if it is not tolerant, then on the contrary it tends to produce lower true and false positive rates. The ROC curves show how the true positive rate evolves w.r.t. the false positive rate.
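As an illustration, the thresholding scheme described above may be sketched as follows. This is a simplified view in which the confidence is taken directly as the score for the "face" class; labels equal 1 for faces and 0 for non-faces.

import numpy as np

def roc_points(confidences, labels, thresholds):
    # confidences: RBF outputs for the "face" class; labels: ground truth (1 = face)
    confidences = np.asarray(confidences)
    labels = np.asarray(labels)
    points = []
    for t in thresholds:
        kept = confidences >= t   # classifications whose confidence passes the threshold
        tpr = np.sum(kept & (labels == 1)) / np.sum(labels == 1)
        fpr = np.sum(kept & (labels == 0)) / np.sum(labels == 0)
        points.append((fpr, tpr))
    return points  # one (false positive rate, true positive rate) pair per threshold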
Test on CMU
The CMU Frontal Face Images [START_REF] Sung | Cmu frontal face images test set[END_REF] dataset consists of grayscale images showing scenes with one or several persons (or characters) facing the camera, or sometimes looking slightly away. Sample images are presented in Figure 3.10. It is useful for studying the behaviour of a face detection algorithm on whole scenes, rather than the simple classification of whole images into "Face" and "Not Face" categories. In particular, it has been used in the literature to evaluate the precision of the CFF [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF] and Viola-Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF].
We carried out our experiment as follows. We selected 500 images from the LFW crop dataset [START_REF] Gary | Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments[END_REF] and 500 images from the Caltech101-background [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF] to train an RBF classifier using the kerneling parameter µ = 2. The images were all resized to 32 × 32, their histograms were equalized and we extracted features using HMIN^R_{θ=π/2}; hence the feature vectors have 225 components.
After training, all images of the dataset were processed as follows. A pyramid is created from each image, meaning we build a set of copies of the same image at different sizes. Starting from the original size, each image's width and height are 1.2 times smaller than the previous one's, and so on until it is no longer possible to produce an image larger than 32 × 32. Then, 32 × 32 patches were cropped at all positions of all images at all scales. The patches' histograms were equalized, and we extracted their HMIN^R_{θ=π/2} feature vectors, which fed the RBF classifier.
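A minimal sketch of that scanning scheme is given below; the resize step is a nearest-neighbour stand-in for the actual interpolation, and histogram equalization is omitted.

import numpy as np

def resize_nn(img, h, w):
    # nearest-neighbour resize, standing in for a proper (e.g. bicubic) interpolation
    ys = (np.arange(h) * img.shape[0] / h).astype(int)
    xs = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[np.ix_(ys, xs)]

def pyramid_patches(img, patch=32, scale=1.2):
    # yields every 32x32 patch of every level of a 1.2-ratio pyramid
    h, w = img.shape
    while min(h, w) >= patch:
        level = resize_nn(img, h, w)
        for y in range(h - patch + 1):
            for x in range(w - patch + 1):
                yield level[y:y + patch, x:x + patch], (x, y, w, h)
        h, w = int(h / scale), int(w / scale)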
We tested the accuracy of the classifications with several tolerance values, and the detections were compared to the provided ground truth [START_REF] Sung | Cmu frontal face images test set[END_REF]. We use a definition of a correctly detected face close to the one Garcia et al. proposed in [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]: we consider that a detection is valid if it contains both eyes and the mouth of the face and if the ROI's area is not bigger than 1.2 times the area of the square just wrapping the rectangle delimited by the eyes and mouth, i.e. that square and rectangle share the same centroid and the width of the square equals the larger dimension of the rectangle. For each face in the ground truth, we check that it was correctly detected using the aforementioned criterion - success counts as a "true positive", while failure counts as a "false negative". Then, for each region of the image that does not correspond to a correctly detected face, we check whether the system classified it as a "not-face" - in which case it counts as a "true negative" - or as a face - in which case it counts as a "false positive". Some faces in CMU are too small to be detected by the system, and thus are not taken into account.
Figure 3.11: ROC curve obtained on the CMU dataset. The chosen classifier is an RBF, trained with features extracted from 500 faces of the LFW crop dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF] and 500 non-face images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces at various scales, where the dimensions of the images are successively reduced by a factor 1.2. A face was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "face" was counted as a false positive.
Table 3.2: Complexity and accuracy of face detection frameworks. The false positive rates of CFF and Viola-Jones are drawn from [START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF][START_REF] Viola | Robust real-time face detection[END_REF], and thus are approximate. All false positive rates are obtained with a 90% accuracy. The "Classification" column gives the complexity involved when computing a single patch of the size expected by the corresponding framework, which is indicated in the "Input size" column. The "Frame" column indicates the complexity of the algorithm when scanning a 640 × 480 image. The complexities and memory footprints shown here only take into account the feature extraction, and not the classification. It should be noted that in the case of the processing of an image pyramid, both CFF and HMIN would require a much higher amount of memory.
Test on Olivier dataset
In order to evaluate our system in more realistic scenarios, we created our own dataset specifically for that task. We acquired a video from a fixed camera of a person moving in front of a static background, with his face looking at the camera - an example of a frame extracted from that video is presented in Figure 3.12. The training and evaluation procedure is the same as in Section 3.1.3.2: we trained an RBF classifier with features extracted with HMIN^R_{θ=π/2} from 500 face images of the LFW crop dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF], and from 500 images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. We labeled the location of the face in each frame by hand, so that the region contains both eyes and the mouth of the person, and nothing more, in order to be consistent with the CMU dataset [START_REF] Sung | Cmu frontal face images test set[END_REF]. Correct detections and false positives were evaluated using the same method as in Section 3.1.3.2: a face is considered correctly detected if at least one ROI encompassing its eyes and mouth is classified as "face", and if that ROI is not more than 20% bigger than the face according to the ground truth.
Each non-face ROI classified as a face is considered to be a false positive.
With that setup, we obtained a 2.38% error rate for a detection rate of 79.72% - more detailed results are shown in Figure 3.13. Furthermore, we process the video frame by frame, without using any knowledge of the results from the previous frames.
Figure 3.13: ROC curves obtained with HMIN^R_{θ=π/2} on the "Olivier" dataset. As in Figure 3.11, the chosen classifier is an RBF, trained with features extracted from 500 faces of the LFW crop dataset [START_REF] Huang | Robust face detection using Gabor filter features[END_REF] and 500 non-face images cropped from images of the "background" class of the Caltech101 dataset [START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces at various scales, where the dimensions of the images are successively reduced by a factor 1.2. A face was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "face" was counted as a false positive.
The next Section is dedicated to a pedestrian detection application.
Feature selection for pedestrian detection
In this Section, we aim to propose a descriptor for pedestrian detection applications.
The proposed descriptor is based on the same rationale as in Section 3.1. Comparisons in terms of computational requirements and accuracy shall be established with two of the most popular pedestrian detection algorithms.
Detecting pedestrians
With the arrival of autonomous vehicles, pedestrian detection has become a very important issue. It is also vital in many security applications, for instance to detect intrusions into a forbidden zone. For this last scenario, one could think that a simple infrared camera would be sufficient - however, such a device cannot determine by itself whether a hot object is really a human or an animal, which may be a problem in video-surveillance applications. It is therefore crucial to provide a method allowing that decision to be made.
In this Section, we propose to use an algorithm similar to the one presented in Section 3.1.1, although this time it has been specifically optimized for the detection of pedestrians. One of the state-of-the-art systems - the ranking of which depends greatly on the considered dataset - is the work proposed by Sermanet et al. [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF], in which they tuned a ConvNet for this specific task. However, as we shall see, it requires lots of computational power, and we intend to produce a system needing as few resources as possible. Thus, we also compare our system to another popular descriptor called HOG, which has proven efficient for this task. We shall now describe those two frameworks, then we shall study their computational requirements.
HOG
Histogram of Oriented Gradients (HOG) is a very popular descriptor, particularly well suited to pedestrian detection [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. As its name suggests, it consists in computing approximations of local gradients in small neighborhoods of the image and using them to build histograms, which indicate the major orientations across small regions of the image. Its popularity comes from its very small algorithmic complexity and ease of implementation.
We focus here on the implementation given in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], assuming RGB input images as for the face detection task presented in Section 3.1.1. The first step is to compute gradients at each position of the image. Each gradient then contributes by voting for the dominant orientation of its neighborhood. Normalization is then performed across areas spanning several of those histograms, thus providing the HOG descriptor that is fed to the final classifier - typically an SVM with a linear kernel - which decides whether the image is of a person.
Gradients computation
Using the same terminology as in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], we are interested in the so-called "unsigned" gradients, i.e. we are not directly interested in the argument θ of the gradient, but rather in θ mod π. Keeping that in mind, in order to compute the gradient at each location we use an approximation based on convolution filters. Gradients are computed separately for each of the R, G and B channels - for each location, only the gradient with the highest norm is kept.
Two feature maps H and V are produced from the input image, respectively using the kernels [-1, 0, 1] and [-1, 0, 1]^T. At each location, the values across the two feature maps may be seen as the components of the 2D gradient, from which we can compute its argument and norm. Respectively denoting G(x, y), φ_{[0,π]}(G(x, y)) and ‖G(x, y)‖ the gradient at location (x, y), its "unsigned" argument and its norm, and H(x, y) and V(x, y) the features from the H and V feature maps at location (x, y), we have
\|G(x, y)\| = \sqrt{H(x, y)^2 + V(x, y)^2} \qquad (3.49)
\varphi_{[0,\pi]}(G(x, y)) = \arctan\left(\frac{V(x, y)}{H(x, y)}\right) \bmod \pi \qquad (3.50)
The result of that process is shown in Figure 3.14. It is important to note here that the convolutions are performed so that the output feature maps have the same width and height as the input image. This may be ensured by cropping images slightly bigger than actually needed, or by padding the image with 1 pixel on each side, either with 0's or by replicating its border.
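A compact sketch of that gradient stage is given below; border handling is done here by zero padding, and the per-channel selection keeps, at each pixel, the channel whose gradient has the highest norm.

import numpy as np

def hog_gradients(img_rgb):
    img = img_rgb.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1, :] = img[:, 2:, :] - img[:, :-2, :]      # convolution with [-1, 0, 1]
    gy[1:-1, :, :] = img[2:, :, :] - img[:-2, :, :]      # convolution with [-1, 0, 1]^T
    norm = np.sqrt(gx ** 2 + gy ** 2)
    best = norm.argmax(axis=2)                           # channel with the highest norm
    rows, cols = np.indices(best.shape)
    gx, gy = gx[rows, cols, best], gy[rows, cols, best]
    norm = norm[rows, cols, best]
    angle = np.arctan2(gy, gx) % np.pi                   # "unsigned" argument in [0, pi)
    return norm, angle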
Binning
Now that we have the information we need about the gradients, i.e. their norms and arguments, we use them to perform the non-linearity proposed in this framework. The image is divided into so-called cells, i.e. non-overlapping regions of N_c × N_c pixels, as illustrated in Figure 3.14. For each cell, we compute a histogram as follows.
The half-circle of unsigned angles is evenly divided into B bins. The center c i of the i-th bin is given by the centroid of the bin's boundaries, as shown in Figure 3.15. Each gradient in the cell votes for the two bins with the centers closest to its argument. Calling those bins c l and c h , the weights of its votes w l and w h depend on the difference between its argument and the bin center, and on its norm:
w_h = \|G(x, y)\| \cdot \frac{\varphi(G(x, y)) - c_l}{c_h - c_l} \qquad (3.51)
w_l = \|G(x, y)\| \cdot \frac{c_h - \varphi(G(x, y))}{c_h - c_l} \qquad (3.52)
We end up with one histogram per cell. Assuming the input image is of size W × H and that N_c divides both W and H, we have a total of WH/N_c^2 histograms. We associate each histogram with its corresponding cell to build a so-called histogram map.
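The binning stage can be sketched as follows, using for illustration N_c = 8 pixel cells and B = 9 orientation bins (values commonly used with this descriptor, and assumed here); the bilinear vote follows Equations 3.51 and 3.52, with a circular wrap since the orientations are unsigned.

import numpy as np

def cell_histograms(norm, angle, n_c=8, bins=9):
    h, w = norm.shape
    hist = np.zeros((h // n_c, w // n_c, bins))
    bin_width = np.pi / bins                       # angular width of one bin
    for y in range(n_c * (h // n_c)):
        for x in range(n_c * (w // n_c)):
            pos = angle[y, x] / bin_width - 0.5    # position relative to the bin centres
            lo = int(np.floor(pos)) % bins
            hi = (lo + 1) % bins
            w_hi = pos - np.floor(pos)             # vote shares of Eqs. (3.51)-(3.52)
            cy, cx = y // n_c, x // n_c
            hist[cy, cx, lo] += norm[y, x] * (1.0 - w_hi)
            hist[cy, cx, hi] += norm[y, x] * w_hi
    return hist                                    # the "histogram map"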
Local normalization
The last step provides some invariance to luminosity among histograms. The histogram map is divided into overlapping blocks, each containing 2 × 2 histograms. The stride between two overlapping blocks is 1, so that the whole histogram map is covered. All the bins' values of those histograms form a vector v(x_h, y_h) = (v_1(x_h, y_h), v_2(x_h, y_h), \dots, v_{BN_b^2}(x_h, y_h)) having BN_b^2 components, where (x_h, y_h) is the location of the block's top-left corner in the histogram map coordinate frame. We compute its normalized vector v'(x_h, y_h) using the so-called L2-norm [36] normalization:
\forall i \in \{1, \dots, BN_b^2\} \quad v'_i(x_h, y_h) = \min\left(\frac{v_i(x_h, y_h)}{\sqrt{\|v(x_h, y_h)\|_2^2 + \epsilon^2}},\ 0.2\right) \qquad (3.53)
where \epsilon is a small value avoiding divisions by 0.
Thus we obtain a set of vectors v'(x_h, y_h), which are finally concatenated in order to form the feature vector fed into an SVM classifier.
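The normalization and concatenation may then be sketched as below, with 2 × 2 blocks of histograms, a stride of 1 and the clipped normalization of Equation 3.53; ε is an assumed small constant.

import numpy as np

def hog_descriptor(hist, clip=0.2, eps=1e-3):
    n_y, n_x, b = hist.shape
    blocks = []
    for y in range(n_y - 1):                       # overlapping 2x2 blocks, stride 1
        for x in range(n_x - 1):
            v = hist[y:y + 2, x:x + 2, :].ravel()  # 4 histograms -> 4*B components
            v = np.minimum(v / np.sqrt(np.sum(v ** 2) + eps ** 2), clip)
            blocks.append(v)
    return np.concatenate(blocks)                  # feature vector fed to the SVM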
Complexity analysis
Let's evaluate the complexity of extracting HOG features from a W × H image. As we saw, the first step of the extraction is the convolutions, which require 6WH operations per channel, followed by the computation of the squared norms, which requires 3WH operations per channel; thus at this point we need 3(3 + 6)WH = 27WH operations. Afterwards, we need to compute the maximum values across the three channels for each location, leading to 2WH more operations. Finally, we must compute the gradients' arguments, which we assume involves one operation for the division, one for the arc-tangent and one for the modulus; hence 3WH more operations. The total amount of operations at this stage is thus given by
C HOG grad = 32W H (3.54)
Next, we perform the binning. We assume that finding the lower and higher bins takes two operations: one for finding the lower bin, and another to store the index of the higher bin. From Equation 3.51, we see that computing w_h takes one subtraction and one division, assuming c_h - c_l is pre-computed, to which we add one operation for the multiplication by \|G(x, y)\|, thus totaling 3 operations. The same goes for the computation of w_l. Finally, w_h and w_l are both accumulated into the corresponding bins, each requiring one more operation. This is done at each location of the feature maps, thus this stage needs a total number of operations of
C^{HOG}_{hist} = 8WH. \qquad (3.55)
As for the normalization, it is performed at
N_p = (W_h - 1) \times (H_h - 1) \qquad (3.56)
positions, with
W_h = \frac{W}{8}, \qquad (3.57)
H_h = \frac{H}{8}. \qquad (3.58)
Each normalization involves the computation of a Euclidean norm, a division of each component of the vector by a scalar, and finally a comparison. Since the sum and the square root may be considered to take a single operation, which is very small compared to the total, we chose to neglect them to make the calculation more tractable. The Euclidean distance itself requires one subtraction followed by a MAC operation per component. Thus, extracting features from a 64 × 128 image as suggested in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] takes 344.7 kOP.
When scanning an image to locate pedestrians, we may use the same method as usual [START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF][START_REF] Garcia | Convolutional face finder: A neural architecture for fast and robust face detection[END_REF]. Using Equation 3.63 on a 640 × 480 image, we get a complexity of 12.96 MOP.
The repartition of the computational effort is presented in Figure 3.16.
Memory footprint
Let's now evaluate the memory footprint required by the extraction of HOG features from a 640 × 480 input image. When computing the gradients, the first step consists in producing 2 feature maps from the convolutions, of the same size as the input image. We consider here that each feature of the feature maps shall be coded as a 16-bit integer; hence we need 2 × 2 × 640 × 480 = 1.23 × 10^6 bytes at this stage. Then, the modulus and argument of the gradient are computed at each feature location. We assume here that those data shall be stored using a single-precision floating-point scheme, hence 32 bits per value, and then we need 2.45 MB. As for the histograms, since there is no overlap between cells, they may be evaluated in place - hence, they do not bring further memory requirements. Finally comes the memory needed by the normalization stage; neglecting the border effect, one normalized vector is computed at each cell location, which corresponds to an 8 × 8 area in the original image. Hence, 4800 normalized vectors are computed, each having 36 components, which leads to 691.2 kB.
Thus, the memory footprint of the HOG framework is 4.37 MB.
We presented and analysed the HOG algorithm for pedestrian detection. In the next Section, we describe a particular ConvNet architecture optimized for that same task.
ConvNet
As for many other applications, ConvNets have proven very efficient for pedestrian detection. Sermanet et al. proposed in [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF] a ConvNet specifically designed for that purpose.
Presentation
We now review the architecture of that system, using the same notations as in Section 3.1.1.2. First of all, we assume images use the Y'UV representation.
In this representation, the Y channel represents the luma, i.e the luminosity, while the U and V channels represent coordinates of a color in a 2D space. The Y channel is processed separately from the UV channels in the ConvNet.
The Y channel first goes through the C_Y1 convolution stage, which consists of 32 kernels, all 7 × 7, followed by an absolute-value rectification - i.e. we apply a point-wise absolute value function on all output feature maps [START_REF] Kavukcuoglu | Learning convolutional feature hierarchies for visual recognition[END_REF] - followed by a local contrast normalization, which is performed as follows [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]:
v_i = m_i - m_i * w \qquad (3.64)
\sigma = \sqrt{\sum_{i=1}^{N} w * v_i^2} \qquad (3.65)
y_i = \frac{v_i}{\max(c, \sigma)} \qquad (3.66)
where m_i is the i-th un-normalized feature map, * denotes the convolution operator, w is a 9 × 9 Gaussian blur convolution kernel with normalized weights, and N is the number of feature maps. The resulting feature maps are concatenated to form the feature vector to be classified, which is performed with a classical linear classifier. That architecture is summarized in Figure 3.17.
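The local contrast normalization of Equations 3.64-3.66 may be sketched as follows; the normalized 9 × 9 Gaussian kernel w is approximated here by scipy's gaussian_filter, and both sigma and the constant c are illustrative parameters, not values taken from the original system.

import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_norm(maps, sigma=2.0, c=1.0):
    # maps: array of shape (N, H, W) holding the N un-normalized feature maps m_i
    v = np.stack([m - gaussian_filter(m, sigma) for m in maps])                         # Eq. (3.64)
    sigma_map = np.sqrt(np.sum([gaussian_filter(vi ** 2, sigma) for vi in v], axis=0))  # Eq. (3.65)
    return v / np.maximum(c, sigma_map)                                                 # Eq. (3.66)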
Complexity analysis
Let's now evaluate the amount of operations needed for a W × H Y'UV image to be processed by that ConvNet. Denoting C_X the complexity involved in layer X, and along the lines of the calculation done in Section 3.1.1.2, we have
C_{C_Y1} = 32 \times 7 \times 7 \times (W - 6)(H - 6) \qquad (3.67)
C_{S_Y1} = 32 \times 9 \times \frac{W - 6}{3} \cdot \frac{H - 6}{3} \qquad (3.68)
C_{S_{UV}0} = 2 \times 9 \times \frac{W}{3} \cdot \frac{H}{3} \qquad (3.69)
C_{C2} = 2040 \times 9 \times 9 \times 2 \times (W_{S_{UV}0} - 8)(H_{S_{UV}0} - 8) \qquad (3.70)
C_{S2} = 68 \times 2 \times 2 \times \frac{W_{C2}}{2} \cdot \frac{H_{C2}}{2} \qquad (3.71)
where W X and H X respectively denote the width and height of the X feature maps.
The C_UV1 layer has full connections between its input and output feature maps. Thus, denoting N_I and N_O respectively the number of input and output feature maps, a total of N_I N_O convolutions are performed. Inside this layer, this produces N_I N_O feature maps, which are summed feature-wise so as to produce the N_O output feature maps. This leads to
C_{C_{UV}1} = 2 \times 6 \times 6 \times (W_{S_{UV}0} - 4)(H_{S_{UV}0} - 4). \qquad (3.72)
We shall now evaluate the complexity involved by the absolute-value rectifications, which are performed on the C_Y1, C_UV1 and C2 feature maps. They need one operation per feature; thus, denoting C(A_X) the complexity involved by those operations on feature map X, we have
C_{A_{C_Y1}} = 32 W_{C_Y1} H_{C_Y1} \qquad (3.73)
C_{A_{C_{UV}1}} = 6 W_{C_{UV}1} H_{C_{UV}1} \qquad (3.74)
C_{A_{C2}} = 68 W_{C2} H_{C2}. \qquad (3.75)
Finally, we evaluate the complexity brought by the local contrast normalizations. From Equations 3.64, 3.65 and 3.66, we see that the first step consists in a convolution by a 9 × 9 kernel G followed by a pixel-wise subtraction between two feature maps. Assuming the input feature map is w × h and that the convolution is performed so that the output feature map is the same size as the input feature map, the required amount of operations at this step is given by
C N 1 (w, h) = 2 × 9 × 9 × wh = 162wh. (3.76)
The second step involves squaring each feature of the w × h output feature map, which implies wh operations. The result is again convolved with G, implying 81wh operations, and the resulting features are summed feature-wise across the n feature maps, implying nwh sums. Finally, we produce a "normalization map" by taking the square root of all features, which involves wh operations assuming a square root takes only one operation.
Hence:
C N 2 (w, h, n) = (83 + n) wh (3.77)
The final normalization step consists in computing, for each feature of the normalization map, the maximum value between that feature and the constant c, which leads to wh operations, and in performing feature-wise divisions between the n maps computed in Equation 3.64 and those maximums, which leads to nwh operations. Thus we have
C N 3 (w, h, n) = (1 + n) wh, (3.78)
and the complexity brought by a local contrast normalization on n w × h feature maps is given by
C N (w, h, n) = (246 + 2n) wh. (3.79)
The overall complexity is given by
C_{ConvNet} = C_{C_Y1} + C_{S_Y1} + C_{S_{UV}0} + C_{C2} + C_{S2} + C_{A_{C_Y1}} + C_{A_{C_{UV}1}} + C_{A_{C2}} + C_N(W_{C_Y1}, H_{C_Y1}, 32) + C_N(W_{C_{UV}1}, H_{C_{UV}1}, 6) + C_N(W_{C2}, H_{C2}, 68) \qquad (3.80)
which leads to
C_{ConvNet} = 1568 W_{C_Y1} H_{C_Y1} + 288 W_1 H_1 + 18 W_{S_{UV}0} H_{S_{UV}0} + 330480 W_{C2} H_{C2} + 272 W_{S2} H_{S2} + 24 W_1 H_1 + 342 W_{C_Y1} H_{C_Y1} + 264 W_{C_{UV}1} H_{C_{UV}1} + 450 W_{C2} H_{C2} \qquad (3.81)
with
W_{C_Y1} = W - 6 \qquad (3.82)
H_{C_Y1} = H - 6 \qquad (3.83)
W_{S_{UV}0} = \frac{W}{3} \qquad (3.84)
H_{S_{UV}0} = \frac{H}{3} \qquad (3.85)
W_1 = W_{S_Y1} = W_{C_{UV}1} = \frac{W_{C_Y1}}{3} = W_{S_{UV}0} - 4 \qquad (3.86)
H_1 = H_{S_Y1} = H_{C_{UV}1} = \frac{H_{C_Y1}}{3} = H_{S_{UV}0} - 4 \qquad (3.87)
W_{C2} = W_1 - 8 \qquad (3.88)
H_{C2} = H_1 - 8 \qquad (3.89)
W_{S2} = \frac{W_{C2}}{2} \qquad (3.90)
H_{S2} = \frac{H_{C2}}{2} \qquad (3.91)
HMAX optimizations for pedestrian detection
We propose optimizations along the lines of what was explained in Section 3.1.2. When we were looking for faces, we hand-crafted the convolution kernel so that it responded best to horizontal features, in order to extract the eyes and mouth for instance. However, in the case of pedestrians it intuitively seems more satisfactory to detect vertical features. Thus, we propose to keep the same kernel as represented in Figure 3.7, but rotated by 90°. As in Section 3.1.2, we have two descriptors: HMIN_{θ=0} and HMIN^R_{θ=0}. For consistency with what was done for faces in Section 3.1.2 and with the HOG [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] and ConvNet [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF] algorithms, HMIN_{θ=0} expects 64 × 128 input images and consists of a single 37 × 37 convolution kernel. As for HMIN^R_{θ=0}, it expects 16 × 32 inputs and consists of a 9 × 9 convolution kernel.
Experiments
In order to test our optimizations, we used the INRIA pedestrian dataset, originally proposed to evaluate the performance of the HOG algorithm [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. That dataset is divided into two subsets: a training set and a testing set. Hence, we simply trained the system described in Section 3.2.2 on the training set and evaluated it on the testing set.
Results are shown in Figure 3.18, which is a ROC curve produced as done for faces in Section 3.1.3.1. All images were resized to 16 × 32 before processing. Comparisons with HOG and ConvNet features are shown in Table 3.3.
In this Section, we proposed and evaluated optimizations of the so-called HMIN descriptor applied to pedestrian detection. The next Section is dedicated to a discussion of the results that we obtained both here and in the previous Section, which was related to face detection.
Discussion
Let's now discuss the results obtained in the two previous Sections, where we described a feature extraction framework and compared its performance, both in terms of accuracy and complexity, against major algorithms. The drop of performance is more important here than it was for faces, as shown in Figure 3.9. However, the gain in complexity is as significant as in Section 3.1.
Table 3.3: Complexity and accuracy of human detection frameworks. The false positive rate of the HOG has been drawn from the DET curve shown in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], and thus is approximate. The false positive rates presented here correspond to a 90% detection rate. As in Table 3.2, the "Classification" column gives the complexity involved when computing a single patch of the size expected by the corresponding framework, which is indicated in the "Input size" column. The "Frame" column indicates the complexity of the algorithm when applied to a 640 × 480 image. Furthermore, the complexities involved by HMIN are computed from Equation 3.46, with the input size shown in the column on the right. The results of the ConvNet may not be shown here, as its evaluation strategy differs from what was done in [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF] - using the evaluation protocol detailed in [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF], HOG produces approximately three times as many false positives as the ConvNet. Furthermore, the miss rate of the HOG was determined on a scene-scanning task, while we evaluated our framework on a simpler classification task. Thus, comparisons of the accuracy of those frameworks are difficult, although the preliminary results presented here show a clear disadvantage in using HMIN^R_{θ=0}. Finally, the complexities and memory footprints shown here only take into account the feature extraction, and not the classification. It should also be noted that both are evaluated without an image pyramid, and that in that case they would be much higher than evaluated here.
The results of our framework are summed up in Table 3.2 for face detection and in Table 3.3 for pedestrian detection. First of all, we see from the ROC curves shown in Figures 3.11 and 3.18 that the accuracy of our framework is significantly higher for face detection than for pedestrian detection - although comparing performances on two different tasks is dangerous, those results seem to indicate that our framework would operate much better in the first case. However, the raw accuracy is significantly lower than those of the other frameworks presented here, be it for face or human detection. This is probably due to the fact that our frameworks HMIN^R_{θ=x} (2) produce features that are much simpler than those of the other frameworks - indeed, the feature vector for a 32 × 32 input image has only 225 components. Among all other frameworks, the only one that may be considered better in that respect is Viola-Jones, where on average only 8 features are computed, although in the worst case that amount rises dramatically to 6060. Nevertheless, the Viola-Jones and HOG algorithms are both slightly less complex than HMIN^R_{θ=x}. There is also a substantial literature about their implementations on hardware [START_REF] Mizuno | Architectural Study of HOG Feature Extraction Processor for Real-Time Object Detection[END_REF][START_REF] Jacobsen | FPGA implementation of HOG based pedestrian detector[END_REF][START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF][START_REF] Hahnle | Fpga-based real-time pedestrian detection on high-resolution images[END_REF][START_REF] Hsiao | An FPGA based human detection system with embedded platform[END_REF][START_REF] Negi | Deep pipelined one-chip FPGA implementation of a real-time image-based human detection algorithm[END_REF][START_REF] Komorkiewicz | Floating point HOG implementation for real-time multiple object detection[END_REF][START_REF] Kelly | Histogram of oriented gradients front end processing: An FPGA based processor approach[END_REF][START_REF] Tam | Implementation of real-time pedestrian detection on FPGA[END_REF][START_REF] Lee | HOG feature extractor circuit for realtime human and vehicle detection[END_REF][START_REF] Chen | An Efficient Hardware Implementation of HOG Feature Extraction for Human Detection[END_REF][START_REF] Karakaya | Implementation of HOG algorithm for real time object recognition applications on FPGA based embedded system[END_REF][START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF][START_REF] Ngo | An area efficient modular architecture for real-time detection of multiple faces in video stream[END_REF][START_REF] Cheng | An FPGA-based object detector with dynamic workload balancing[END_REF][START_REF] Gao | Novel FPGA based Haar classifier face detection algorithm acceleration[END_REF][START_REF] Das | Modified architecture for real-time face detection using FPGA[END_REF]. In particular, the main difficulties of the HOG algorithm for hardware implementations, i.e. the highly non-linear computations of the arc-tangents, divisions and square roots, have been addressed in [START_REF] Kadota | Hardware Architecture for HOG Feature Extraction[END_REF].
As for the CFF, it was also optimized and successfully implemented on hardware devices [START_REF] Farrugia | Fast and robust face detection on a parallel optimized architecture implemented on FPGA[END_REF] and on embedded processors [START_REF] Roux | Embedded Convolutional Face Finder[END_REF][START_REF] Roux | Embedded facial image processing with Convolutional Neural Networks[END_REF].
However, one can expect HMIN^R_{θ=x} to be implemented easily on FPGA, with really low resource utilization - that aspect shall be tested in future developments. Furthermore, the only framework that beats HMIN in terms of memory footprint is Viola-Jones - an aspect that is crucial when porting an algorithm onto an embedded system, especially in industrial use cases where constraints may be really high in that respect. Moreover, while HMIN^R_{θ=x} may not seem as attractive as the other frameworks presented here, it has a very interesting advantage: it is generic. Indeed, both ConvNet implementations presented in this Chapter were specifically designed for a particular task: face detection or pedestrian detection. As for Viola-Jones, it may be used for tasks other than face detection, as was done for instance for pedestrian detection [START_REF] Viola | Detecting pedestrians using patterns of motion and appearance[END_REF] - however, a different task might need different Haar-like features, which would be implemented differently than the simple ones presented in Section 3.1.1.1. In terms of hardware implementation, that difference would almost certainly mean code modifications, while with HMIN^R_{θ=x} one would simply need to change the weights of the convolution kernel. Concerning the HOG, it should be as generic as HMIN - however, it suffers from a much greater memory footprint.
2. HMIN^R_{θ=x} refers to both HMIN^R_{θ=0} and HMIN^R_{θ=π/2}.
Finally, researchers have also proposed other optimization schemes for HMIN [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF][START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] - future research shall focus on comparing our work with the gains one can expect from their solutions, as well as on using a common evaluation scheme for the comparison of HMIN^R_{θ=0} with other pedestrian detection algorithms.
Conclusion
In this Chapter, we presented our contribution concerning the optimization of a feature extraction framework. The original framework is based on an algorithm called HMAX, which is a model of the early stages of image processing in the mammalian brain. It consists of 4 layers, called S1, C1, S2 and C2 - however, in the use case scenarios presented here the S2 and C2 layers do not provide much more precision, yet they are by far the most costly in terms of algorithmic complexity. We thus chose to keep only the S1 and C1 layers, respectively consisting of a convolution filter bank and max-pooling operations. We explored how the algorithm behaved when its complexity was reduced, by reducing the number and sizes of the linear and max-pooling filters and by estimating where the most relevant information is located.
We replaced the initial 64 filters of the S1 layer with a single one, the size of which is 9 × 9. It expects 32 × 32 grayscale images as inputs. The nature of the filter depends on the use case: for faces, we found that most saliencies lie in the eyes and mouth of the face, thus we chose a filter responding to horizontal features. As for the use case of human detection, we assume that pedestrians are standing up, which intuitively made us use a filter responding to vertical features. In both cases, we compared the results with standard algorithms having reasonable complexities. Optimizing the HMIN descriptor provoked a drop in accuracy of 5.73 points on the face detection task on the CMU dataset, and of 21.91 points on the pedestrian detection task, when keeping a false positive rate of 10%. However, that drop of performance is to be put in perspective with the gain in complexity: after optimizations, the descriptor is 429.12 times less complex to evaluate. In spite of that, the method does not provide results as good as other algorithms with comparable complexities, e.g. Viola-Jones for face detection - as for pedestrian detection, we need to perform complementary tests with common metrics to compare that system with the state of the art, but the results presented here tend to show that the algorithm is not well suited for this task. However, we claim that our algorithm provides a low memory footprint and is more generic than the other frameworks, which makes it implementable on hardware with fewer resources, and easy to adapt to new tasks: only the weights of the convolution kernel need to be changed.
This Chapter was dedicated to proposing optimizations for a descriptor. The next Chapter will present another type of optimization, based not on the architecture of the algorithm but on the encoding of the data, with an implementation on dedicated hardware. As we shall see, those optimizations are much more efficient and promising, and may easily be applied to other algorithms.
Chapter 4
Hardware implementation
This chapter addresses the second question stated in Chapter 2, about the optimization of the HMAX framework with the aim of implementing it on a dedicated hardware platform. We begin by exposing the optimizations that we used, coming both from our own work and from the literature. In particular, we show that the combination of all those optimizations does not bring a severe drop in accuracy. We then implement our optimized HMAX on an Artix-7 FPGA, as naively as possible, and we compare our results with those of the state-of-the-art implementation. While our implementation achieves a significantly lower throughput, we shall see that it uses far fewer hardware resources. Furthermore, our optimizations are fully compatible with those of the state of the art, and future implementations may profit from both contributions.
Algorithm-Architecture Matching for HMAX
In the case of embedded systems, having an implemented model in a high-level language such as Matlab is not enough. Even an implementation using the C language may not meet the particular constraints found in critical systems, in terms of power consumption, algorithmic complexity and memory footprint. This is particularly true in the case of HMAX, where the S2 layer alone may take several seconds to be computed on a CPU. Furthermore, GPU implementations are most of the time not an option, as GPUs often have a power consumption in the order of magnitude of 10 W. In the field of embedded systems, we look for systems consuming about 10 to 100 mW.
This may be achieved thanks to FPGAs, as was done in the past [91-96, 98, 99]. This Chapter proposes a detailed review of one of those implementations; the other ones are either based on architectures with multiple high-end FPGAs or focus on accelerating only a part of the framework, and thus are hardly comparable with what we aim to do here.
Orchard et al. proposed in [99] a complete hardware implementation of HMAX on a single Virtex-6 ML605 FPGA. To achieve this, the authors proposed optimizations of their own, which mostly concern the way the data is organized rather than the encoding and the precision degradation - indeed, the data coming out of S1 and carried throughout the processing layers is coded on 16 bits. We shall now review the main components of their implementation, i.e. the four modules implementing the behaviours of S1, C1, S2 and C2. The layers are pipelined, so they may process streamed data. As for the classification stage, it is not directly implemented on the FPGA and should be taken care of on a host computer. The results of that implementation are presented afterwards.
Description
S1
First of all, the authors showed how all filters in S1 may be decomposed as separable filters, or sums of separable filters. Indeed, if we consider the "vertical" Gabor filters in S1, i.e. θ = π/2, Equations 2.8 and 2.9 lead to [99]
G(x, y)|_{\theta=\pi/2} = \exp\left(-\frac{x^2 + \gamma^2 y^2}{2\sigma^2}\right) \cos\left(\frac{2\pi}{\lambda} x\right) \qquad (4.1)
= \exp\left(-\frac{x^2}{2\sigma^2}\right) \cos\left(\frac{2\pi}{\lambda} x\right) \times \exp\left(-\frac{\gamma^2 y^2}{2\sigma^2}\right) \qquad (4.2)
= H(x) V(y) \qquad (4.3)
with
H(x) = \exp\left(-\frac{x^2}{2\sigma^2}\right) \cos\left(\frac{2\pi}{\lambda} x\right) \qquad (4.4)
V(y) = \exp\left(-\frac{\gamma^2 y^2}{2\sigma^2}\right). \qquad (4.5)
Let's now focus on the filters having "diagonal" shapes. As shown in [99] and following the same principles as before, we may write
I * G|_{\theta=\pi/4} = I *_c H *_r H + I *_c U *_r U \qquad (4.9)
I * G|_{\theta=3\pi/4} = I *_c H *_r H - I *_c U *_r U \qquad (4.10)
with
U(x, y) = \exp\left(-\frac{x^2}{2\sigma^2}\right) \sin\left(\frac{2\pi}{\lambda} x\right). \qquad (4.11)
The benefits of using separable filters are twofold. First of all, the memory footprints of those filters are much smaller than their unoptimized counterparts. Indeed, storing an N × N filter in a naive way requires storing N^2 words, while the separated versions only require the storage of 2N words for G|_{θ=0} and G|_{θ=π/2}, and 3N words for G|_{θ=π/4} and G|_{θ=3π/4}. The other benefit is related to the algorithmic complexity. Indeed, performing the convolution of a W_I × H_I image by a W_K × H_K kernel has a complexity of O(W_I H_I W_K H_K), while for separable filters it goes down to O(W_I W_K + H_I H_K). According to [99], doing so reduces the complexity from 36,146 MAC operations to 2,816 MAC operations.
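The principle can be checked with the short sketch below: building the θ = π/2 kernel as the outer product of its two 1-D factors and convolving with those factors one after the other gives the same result as the full 2-D convolution. The parameter values (size, σ, λ, γ) are purely illustrative and not those of the actual filter bank.

import numpy as np
from scipy.signal import convolve2d

def gabor_factors(size=11, sigma=3.0, lam=6.0, gamma=0.5):
    x = np.arange(size) - size // 2
    h = np.exp(-x ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * x / lam)   # H(x), Eq. (4.4)
    v = np.exp(-(gamma ** 2) * x ** 2 / (2 * sigma ** 2))                  # V(y), Eq. (4.5)
    return h, v

def separable_conv(img, h, v):
    # one row pass with H then one column pass with V, instead of one 2-D pass with H(x)V(y)
    rows = convolve2d(img, h[np.newaxis, :], mode="same")
    return convolve2d(rows, v[:, np.newaxis], mode="same")

Convolving directly with np.outer(v, h) yields the same feature map, at a much higher cost for large kernels.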
In order to provide some invariance to luminosity, Orchard et al. also use a normalization scheme called l2. Mathematically, computing that norm consists in taking the square root of the sum of the squared pixel values. The Gabor filters were thus normalized so that their l2 norms equal 2^16 - 1, and so that their means are null.
C1
Let's consider a C1 unit with a 2∆ × 2∆ receptive field. The max-pooling operations are performed as follows: first, maxima are computed over ∆ × ∆ neighborhoods, producing an intermediate feature map M_t. Second, the outputs of the C1 units are obtained by pooling over 2 × 2 windows of M_t with an overlap of 1. This elegant method avoids storing values that would have been discarded anyway, as the data is processed here as it is provided by S1, in a pipelined manner.
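A functional sketch of that two-stage pooling is given below; it reproduces the result of 2∆ × 2∆ receptive fields with an overlap of ∆ while only ever storing the intermediate map M_t.

import numpy as np

def c1_two_stage_max(s1, delta):
    h, w = s1.shape
    trimmed = s1[:h - h % delta, :w - w % delta]
    # first stage: non-overlapping delta x delta maxima -> intermediate map M_t
    m_t = trimmed.reshape(h // delta, delta, w // delta, delta).max(axis=(1, 3))
    # second stage: 2x2 maxima over M_t with an overlap of 1
    return np.maximum.reduce([m_t[:-1, :-1], m_t[:-1, 1:], m_t[1:, :-1], m_t[1:, 1:]])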
S2
In the original model, it is recommended to use 1000 pre-learnt patches in S2. However, the authors themselves used 1280 of them - 320 per class - as it was the maximum
Results
That system fits entirely in the chosen Virtex-6 ML605 FPGA, including the temporary results and the pre-determined data that are stored in the device's BRAM. It was synthesized using the Xilinx ISE tools. It has a latency of 600k clock cycles, with a throughput of one image every 526k clock cycles. The system may operate at 100 MHz, which implies a 6 ms latency and a throughput of 190 images per second. The total resource utilization of the device is given in Table 4.1.
Finally the VHDL implementation was tested on binary classification tasks, using 5 classes of objects from Caltech101 and a background class. Accuracies for those tasks are given in Table 4.2. Results show that the accuracy on FPGAs is comparable to that of CPU implementations.
In this Section, we presented the work proposed by Orchard et al. [99] and the architecture of their implementation. The next Section is dedicated to our contribution, which mainly consists in reducing the precision of the data throughout the process.
Proposed simplification
In order to save hardware resources, we propose several optimizations to the original HMAX model. Our approach mainly consists in simplifying the encoding of the data and reducing the required number of bits. In order to determine the optimal encoding and algorithmic optimizations, we test each of our propositions on the widely used Caltech101
dataset. For fair comparison with other works, we use the same classes as in [99]:
"airplanes", "faces", "car rear", "motorbikes" and "leaves".
Optimizations are tested individually, starting from those intervening at the beginning of the feed-forward pass and continuing in processing order, to finish with optimizations applying to the later layers of the model. For optimizations having tunable parameters (e.g. the bit width), those tests shall be used to determine a working point, which is done for all optimizations that require it, in order to have a complete and usable optimization scheme.
Optimizations are performed at the following levels: the input data, the coefficients of the Gabor filters in S1, the data produced by S1, the number of filters in S2, and finally the computation of the distances in S2 during the pattern matching operations. We shall first present our own work, namely the reduction of the precision of the input pixels. We shall then see how that optimization behaves when combined with further optimizations taken from the literature.
Figure 4.2: The same image encoded with 8-, 3-, 2- and 1-bit pixels. Color maps are modified so that 0 corresponds to black and the highest possible value corresponds to white, with gray levels linearly interpolated in between. We can see that while the images are somewhat difficult to recognize with 1-bit pixels, they are easily recognizable with as few as 2 bits.
Input data
Our implementation of HMAX, along the lines of what is done in [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], processes grayscale images. The pixels of such images are typically coded as 8-bit unsigned integers, representing values ranging from 0 to 255, where 0 is "black" and 255 is "white".
We propose here to use fewer than 8 bits to encode those pixels, simply by keeping the Most Significant Bits (MSB). This is equivalent to a Euclidean division by a power of two: unwiring the N Least Significant Bits (LSB) amounts to performing a Euclidean division by 2^N. The effect of such precision degradation is shown in Figure 4.2.
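This precision reduction is a one-line operation; the sketch below keeps the `bits` most significant bits of 8-bit pixels, i.e. performs an integer division by 2^N with N = 8 - bits.

import numpy as np

def reduce_precision(img_u8, bits):
    n = 8 - bits                          # number of LSBs dropped
    return img_u8.astype(np.uint8) >> n   # values now lie in [0, 2**bits - 1]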
In order to find the optimal bit width presenting the best compromise between compression and performance, an experiment was conducted. It consisted of ten independent runs. In each run, the four classes are tested in binary independent classification tasks.
Each task consists in splitting the dataset in halves: one half is used as the training set, and the other half is used as the testing set. All images are resized so that their height For each bit width, ten independent tests were carried out, in which half of the data was learnt and the other half was kept for testing. We see that the pixel precision has little to no influence on the accuracy. is 164 pixel, and are then degraded w.r.t the tested bit width, i.e. all pixels are divided by 2 N where N is the number of removed LSB. The degraded data is then used to train first HMAX, and then the classifier -in this case, GentleBoost [START_REF] Friedman | Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors)[END_REF]. The images used as negative samples are taken from the Background Google class of Caltech101. All tests were performed in Matlab. Is should also be noted that we do not use RBFs in the S2 layer as described in [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF] and in Section 2.1.2.2.The global accuracy for each class is then given by the mean of the recognition rates for that class across all runs, and the uncertainty in the measure is given by the standard deviations of those accuracies.
Finally, the random seed used in the pseudo-random number generator was manually set to the same value for each run, thus ensuring that the conditions across all bit-widths are exactly the same and only the encoding changes.
The results of this experiment are shown in Figure 4.3. It appears that for all four classes the bit width has only a limited impact on performance: all accuracies lie above 0.9, except when the input image pixels are coded on a single bit, where the Airplanes class becomes more difficult to classify correctly. For that reason, we chose to set the input pixels' bit width to 2 bits, and all further simplifications shall be made taking that into account. The next step is to reduce the precision of the filters' coefficients, in a way that is somewhat similar to what is proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF].
S1 filters coefficients
The second simplification that we propose is somewhat similar to the one presented in Section 4.2.1, except that this time we operate on the coefficients of the Gabor filters used in S1. Mathematically, those coefficients are real numbers in the range [-1, 1]; thus the most naive implementation is to use the double-precision floating-point representation used by default in Matlab, and that encoding scheme shall be used as the baseline of our experiments. Our simplification consists in using signed integers with n-bit precision instead of floats, by transforming the coefficients so that their values lie within {-2^{n-1}, ..., 2^{n-1} - 1}, which is done by multiplying them by 2^{n-1} and rounding them to the nearest integer. Several values for n were tested, along the lines of the methodology described in Section 4.2.1: 16, and from 8 down to 1. However, using the standard signed coding scheme, the 1-bit encoding would lead to coefficients equal either to -1 or 0, which does not seem relevant in our case. Thus, we proposed to use a particular coding here, where the "0" binary value actually encodes -1 and "1" still encodes 1. The rationale is that this encoding is close to the Haar-like features used in Viola-Jones [START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF], as explained in Section 3.1.1.1, and this technique is also suggested in [START_REF] Courbariaux | BinaryConnect: Training Deep Neural Networks with binary weights during propagations[END_REF]. As explained in Section 4.2.1, the input pixel precision is 2 bits.
Recent works [START_REF] Trinh | Efficient Data Encoding for Convolutional Neural Network Application[END_REF] also propose much more sophisticated encoding schemes. While their efficiency has been proven, they seem more adapted to situations where the weights are learnt during the training process, and are thus unknown beforehand. In our case, all weights of the convolutions are predetermined, thus we have total control over the experiment and we preferred to use optimizations as simple as possible.
Results for that experiment are given in Figure 4.4. We see that the encoding of the Gabor filter coefficients has even less impact than the input pixel precision, even in the case of the 1-bit precision. This result is consistent with the fact that Haar-like features are used with success in other frameworks. Thus, we shall use that 1-bit precision encoding scheme for the Gabor filters, in combination with the 2-bit encoding for input pixels, in further simplifications.
In this Section, we validated that we could use only one bit to encode the Gabor filters' coefficients, using "0" to encode -1 and "1" to encode 1, in conjunction with input pixels coded on two bits only. In order to continue our simplification process, the next Section proposes optimizations concerning the output of S1.
S1 output encoding
It has been proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] to use Lloyd's algorithm [START_REF] Stuart | Least squares quantization in PCM[END_REF][START_REF] Roe | Quantizing for minimum distortion (corresp.)[END_REF], which provides a way to find an optimal encoding w.r.t. a subset S of the data to encode. The encoding strategy consists in defining two sets: a codebook C = {c_1, c_2, ..., c_K} and a partition P = {q_0, q_1, q_2, ..., q_{K-1}, q_K}. With those elements, mapping a code l(x) to any arbitrary value x ∈ R is done as follows:
\forall x \in \mathbb{R} \quad l(x) = \begin{cases} c_1 & x \le q_1, \\ c_2 & q_1 < x \le q_2, \\ \vdots \\ c_{K-1} & q_{K-2} < x \le q_{K-1}, \\ c_K & q_{K-1} < x. \end{cases} \qquad (4.12)
One can see here that q_0 and q_K are not used to encode data; however, those values need to be computed when determining the partition, as we shall now see.
Finding the partition consists in minimizing the Mean Square Error E (C, P ) between the real values in the subset and the values after quantization [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF][START_REF] Stuart | Least squares quantization in PCM[END_REF]:
E(C, P) = \sum_{i=1}^{K} \int_{q_{i-1}}^{q_i} |c_i - x|^2\, p(x)\, dx \qquad (4.13)
where p is the probability distribution of x. One can show [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF] that
\forall i \in \{1, \dots, K\} \quad c_i = \frac{\int_{q_{i-1}}^{q_i} x\, p(x)\, dx}{\int_{q_{i-1}}^{q_i} p(x)\, dx} \qquad (4.14)
\forall i \in \{1, \dots, K-1\} \quad q_i = \frac{c_{i-1} + c_i}{2} \qquad (4.15)
q_0 = \min S \qquad (4.16)
q_K = \max S \qquad (4.17)
We see that Equations 4.14 and 4.15 depend on each other, and there is no closed-form solution for them. The optimal values are thus determined with an iterative process:
starting from arbitrary values for the interior boundaries {q_1, q_2, ..., q_{K-1}}, e.g. by separating the range of values to encode into segments of the same size:
\forall k \in \{1, \dots, K-1\} \quad q_k = q_0 + k\, \frac{q_K - q_0}{K}, \qquad (4.18)
we compute C = {c_1, ..., c_K} with Equation 4.14. Once this is done, we use those values to compute a new partition with Equation 4.15, and so on until convergence.
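Since in practice the distribution p is only available through the subset S, the iteration may be sketched in its sample-based form, where Equation 4.14 becomes the mean of the samples falling in each interval. The number of iterations is an arbitrary choice here.

import numpy as np

def lloyd_quantizer(samples, levels=4, iters=50):
    s = np.sort(np.asarray(samples, dtype=float))
    q = np.linspace(s[0], s[-1], levels + 1)           # initial partition, Eq. (4.18)
    c = np.zeros(levels)
    for _ in range(iters):
        idx = np.searchsorted(q[1:-1], s)              # interval index of each sample
        for i in range(levels):
            sel = s[idx == i]
            c[i] = sel.mean() if sel.size else 0.5 * (q[i] + q[i + 1])   # Eq. (4.14), sample version
        q[1:-1] = 0.5 * (c[:-1] + c[1:])               # Eq. (4.15)
    return c, q                                        # codebook and partition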
Since the dynamics of the values vary greatly from scale to scale in C1, we computed one set C_i and P_i per C1 scale i. However, contrary to what is proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF], we did not separate the orientations. We thus produced 8 sets S_i of data to encode (i ∈ {1, ..., 8}), using the same 500 images selected at random among all of the five classes we use to test our simplifications. As suggested in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF], we used four quantization levels for all S_i. Each partition P_i and codebook C_i were computed using the lloyd function of Matlab's Communication System Toolbox. The results are given in Table 4.3.
While this simplification uses the values computed in C1, it could obviously be performed just as easily at the end of the S1 stage, simply by using a strictly increasing encoding function f. This is done by associating each value from C_i with a positive integer as follows:
\forall i \in \{1, \dots, 8\},\ \forall j \in \{1, \dots, 4\} \quad f(c_{ij}) = j \qquad (4.19)
and encoding f(c_{ij}) simply as unsigned integers on 2 bits. By doing so, performing the max-pooling operations in C1 after that encoding is equivalent to performing them before.
We must now make sure that this simplification, in addition to the other two presented earlier, does not have a significant negative impact on accuracy. Thus, we perform an experiment along the lines of what is described in Section 4.2.1.
Filter reduction in S2
As has been stated many times in the literature [91-96, 98, 99], the most demanding stage of HMAX is S2. Assuming there is the same number of pre-learnt patches of each size, the algorithmic complexity depends linearly on the number of filters N_{S2} and on their average number of elements K. It has been suggested in [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF] to simply reduce the number of pre-learnt patches in S2 by sorting them by relevance according to a criterion, and to keep only the N most relevant patches. The criterion used by the authors is simply the variance ν of the components inside a patch p = (p_1, . . . , p_M):

\nu(p) = \sum_{i=1}^{M} |p_i - \bar{p}|^2,

In order to ensure that all sizes are equally represented, we propose to first crop at random 250 patches of each of those sizes, so as to get the 1000 patches suggested by Serre et al. [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], and then to select 50 patches of each size according to the variance criterion, so that we keep a total of 200 patches, as proposed in [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF]. The rationale is that we aim to implement that process on a hardware device; we therefore need to know in advance the number of patches of each size and to keep it to a pre-determined value.
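The selection itself is straightforward; a possible Python sketch is given below, where the random crops merely stand in for patches cropped from real C1 maps and the function name is our own.

import numpy as np

def select_patches_by_variance(patches, keep=50):
    # Keep the `keep` patches with the highest component variance (criterion above).
    variances = [np.var(p) * p.size for p in patches]   # sum of squared deviations
    order = np.argsort(variances)[::-1]
    return [patches[i] for i in order[:keep]]

rng = np.random.default_rng(0)
selected = {}
for size in (4, 8, 12, 16):
    # 250 random crops of each size (stand-ins for crops of real C1 maps)
    crops = [rng.normal(size=(size, size, 4)) for _ in range(250)]
    selected[size] = select_patches_by_variance(crops, keep=50)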
Let us now test that simplification on our dataset. We followed the methodology established in Section 4.2.1, and we used the simplification proposed here along with all the other simplifications presented so far. Results are compiled with those of Section 4.2.3 and Section 4.2.5 in Table 4.4.
Manhattan distance in S2
In S2, pattern matching is supposed to be performed with a Gaussian function, the centers of which are the pre-learnt patches in S2, so that each S2 unit responds maximally when its input is close to its pre-learnt patch. As evaluating such Gaussians is costly in hardware, we replace them with the simpler Manhattan distance between the input vector v_1 and the pre-learnt patch v_2:
M(v_1, v_2) = \sum_{i=1}^{N_v} |v_{1i} - v_{2i}|.
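As an illustration, the sketch below contrasts the original Gaussian response with the Manhattan distance used here; the sigma value and the vector shapes are arbitrary placeholders of our own.

import numpy as np

def gaussian_response(x, patch, sigma=1.0):
    # Original S2 unit: radial basis function centred on the pre-learnt patch.
    return np.exp(-np.sum((x - patch) ** 2) / (2 * sigma ** 2))

def manhattan_distance(x, patch):
    # Simplified S2 unit: sum of absolute differences (equation above).
    return np.sum(np.abs(x - patch))

rng = np.random.default_rng(0)
x, patch = rng.normal(size=(2, 8, 8, 4))
# With a distance, the best-matching patch is the one minimising the value
# instead of maximising the response.
d = manhattan_distance(x, patch)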
In this Section, we proposed a series of optimizations, both of our own and from the literature. In the next Section, we show how that particular encoding may be put into practice on a dedicated hardware configuration.
FPGA implementation

4.3.1 Overview
We now propose our own implementation of the HMAX model, using both our contributions and the simplifications from the literature presented in Section 4.2. We purposely did not use the architectural optimization proposed in [99], in order to see how a "naive" implementation of the optimized HMAX model compares with that of Orchard et al.
This implementation of the HMAX model with our optimizations is intended to process fixed-size grayscale images of 164 × 164 pixels. The rationale behind those dimensions is that we actually want to process the 128 × 128 ROI located at the center of the image; however, the largest convolution kernel in S1 is 37 × 37, therefore in order to obtain 128 × 128 S1 feature maps we need input images padded with 18-pixel-wide stripes. That padding is assumed to be performed before the data is sent to the HMAX module.
The data is processed serially, i.e. pixels arrive one after the other, row by row. The pixels' precision is assumed to be already reduced to two bits per pixel, as suggested in Section 4.2. The module's input pins consist of a serial bus of two pins called din in which the pixels should be written, a reset pin rst allowing to initialize the module, an enable pin en din allowing to activate the computation and finally three clocks: a "pixel clock" pix clk for input data synchronization, a "process clock" proc clk synchronizing the data produced by the module's processes, and a "sub-process clock" subproc clk, as some processes need a high-frequency clock. Suggestions concerning the frequencies of those clocks are given in Section 4.4.
The output pins consist of an 8-pin serial bus for the descriptor itself, called dout, and a pin indicating when data is available, named en dout. The serialized data is sent to s2c2, which performs pattern matching between input data and pre-learnt patches with its s2 components, several in parallel, with multiplexing. The maximum responses of each S2 unit are then computed by c2. The data is then serialized by c2 to the output.
The HMAX module - illustrated in Figure 4.5 - itself mainly consists of two sub-modules, s1c1 and s2c2. As suggested by their names, the first one performs the computations required in the S1 and C1 layers, while the second one takes care of the computations for the S2 and C2 layers of the model. The rationale behind that separation is that it is suggested in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF] that in some cases one may use only the S1 and C1 layers, as we did in Chapter 3. The following two Sections describe those modules in detail.
s1c1
That module uses two components of its own, called s1 and c1, which perform the operations required by the layers of the same names in the model. It processes the input pixels with a multiplexing across orientations, meaning that all processes concerning the first orientation of the Gabor filters in S1 are performed in the same clock cycle, then all processes concerning the second orientation are performed on the same input data, and so on until all four orientations are processed.
The input pins of that module are directly connected to those of the top module. Its output pins consist of a dout bus of 4 pins where the C1 output data are written, an en dout pin indicating when new data is available and a dout ori serial bus that indicates which orientation the output data corresponds to. The s1 and c1 modules shall now be presented.

The dataflow is as follows: first of all, the pixels arrive in pix to stripe, which returns columns of 37 pixels. Those columns are then stored in shift registers, which hold a 37 × 37 patch (only 7 lines are represented in the figure for readability). Then, for each of the 16 scales in S1, there is an instance of the image cropper module that keeps only the data needed by its following conv module. The convolution kernels' coefficients come from the coeffs manager module, which reads them from the FPGA's ROM and retrieves those corresponding to the needed orientation, for all scales. Only 4 of the 16 convolution engines are shown in the figure. The computed data is written in dout, in parallel. Note that not all components of s1 are represented in the figure: pixmat, pixel manager, coeffs manager and conv crop are not displayed, to enhance readability and focus on the dataflow.
s1
That module consists of three sub-modules: pixel manager, which gets the pixels from the input pins and reorders them so that they may be used in convolutions; the coeffs manager module, which handles the coefficients used in the convolution kernels; and the convolution filter bank module conv filter bank, which takes care of the actual linear filtering operations. Shift registers are also used to synchronize the data produced by the different components when needed. The main modules are described below, and the dataflow in the module is summed up in Figure 4.6.
pixel manager As mentioned in Section 4.3.1, the data arrives in our module serially, pixel by pixel. It is impractical to perform 2D convolutions in those conditions, as we need the data corresponding to a sub-image of the original image. The convolution cannot be processed fully until all that data arrives, and the data not needed at a particular moment needs to be stored. This is taken care of by this component: it stores the temporary data and outputs it when ready, as a 37 × 37 pixel matrix as needed by the following conv filter bank, as explained below. That process is performed by two different sub-modules: pix to stripe, which reorder the pixels so that they may be processed column per column, and the pixmat that stores the data in a matrix of registers and provide them to the convolution filter bank module.
pix to stripe That modules consists in a BRAM, the output pins of which are rewired to its input pins in the way shown in Figure 4.6. It gets as inputs, apart from the usual clk , en din and rst pins, the 2 bit pixels got from the top-module. Its output pins consist in a 37 × 2 = 74 pins bus providing a column of the 37 pixels, as well as a en dout output port indicating when data is ready to be processed.
pixmat That module gets as inputs the outputs of the aforementioned pix to stripe module. It simply consists in a matrix of 37 × 37 pixels. At each pixel clock cycle, all registered data is shifted to the "right", and the registers on the left store the data gotten from pix to stripe. The pixmat module's output pins are directly wired to its outputs, and an output pin called en dout indicates when the data is ready. When that happens, the data stored in the matrix of registers may be used by the convolution engines.
In order to handle new lines, that module has an inner counter incremented every time new data arrives. When that counter reaches 164, i.e. when a full stripe of the image has gone through the module, the en dout signal is unset and the counter is reset to 0. The en dout signal is set again when the counter reaches 37, meaning that the matrix is filled.
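The following Python generator is a purely behavioural sketch of what pix to stripe and pixmat achieve together, namely turning a row-major serial pixel stream into 37 × 37 neighbourhoods; it models none of the hardware timing, and the function name is ours.

import numpy as np
from collections import deque

def serial_windows(image, k=37):
    # Behavioural model of pix_to_stripe + pixmat: pixels arrive row by row,
    # and a k x k window is emitted once k columns of k pixels are available.
    h, w = image.shape
    line_buffer = np.zeros((k, w), dtype=image.dtype)  # the k most recent rows
    window = deque(maxlen=k)                           # the k most recent columns
    for r in range(h):
        line_buffer = np.roll(line_buffer, -1, axis=0)
        line_buffer[-1] = image[r]
        if r < k - 1:
            continue                                   # not enough rows buffered yet
        window.clear()                                 # new line: the matrix refills
        for c in range(w):
            window.append(line_buffer[:, c])           # one k-pixel column per incoming pixel
            if len(window) == k:
                yield np.stack(window, axis=1)         # a full k x k neighbourhood

img = np.arange(164 * 164).reshape(164, 164)
n_windows = sum(1 for _ in serial_windows(img))        # 128 * 128 windows, as expected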
coeffs manager That module's purpose is to provide the required convolution kernels' coefficients, w.r.t. the required Gabor filter orientation. It gets as inputs the regular rst, clk and en signals, but also a bus of two pins called k idx indicating the desired orientation. The output pins consist of the customary en dout output port, indicating that the data is ready, and a large bus called cout that outputs all coefficients of all scales for the requested orientation. As explained in Section 4.2, we use a particular one-bit encoding, which is also close to the box filter approximation proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF].
Since our convolution kernels' sizes go from 7 × 7 to 37 × 37 by steps of 2 × 2, the total amount of input pins in the cout bus is given by the sum of the squared kernel sizes over the 16 scales. In order to simplify the process, all coefficients needed at a particular time are read all at once from several BRAMs, of which only two are represented in the figure for readability. The coefficients are then concatenated in a single vector directly connected to the cout output port.
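Assuming one bit per coefficient and the kernel sizes listed above, the width of the cout bus can be estimated with a one-line computation; the resulting figure is our own back-of-the-envelope estimate under that assumption, not a value quoted from the implementation.

# One bit per coefficient, 16 kernels of sizes 7, 9, ..., 37
cout_width = sum((7 + 2 * k) ** 2 for k in range(16))   # 9104 pins under this assumption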
conv filter bank That module gets, among others, the usual en din and rst pins, which serve their usual purposes. It also gets the orientation identifier thanks to an id in input bus - that identifier is not directly used for computation, but is passed with the output data for use in later modules. Finally, that module needs two clocks: the pixel clock, on which the input data is synchronized and acquired through the clk pin, and the process clock (acquired through clk proc) needed for multiplexing the filters per orientation, as suggested in Section 4.3.1.
Output pins consist in a dout bus in which the result of the convolutions at all scales are written, an id out bus simply indicating the orientation identifier got from the id in input bus and the usual en dout pin. In order to perform its operations, that module has one distinct instance of the conv crop component per scale (i.e, 16 instances in total). Each instance has parameters of its own depending on its scale.
conv crop That module's input and output ports are similar to those of its parent module conv filter bank. It gets the pixel and process clocks respectively from its clk and clk sum input ports, and it may be reset using the rst input port. Image data arrive through din, and the convolution coefficients got from coeffs manager are acquired through the coeffs input port. Data identifier is given by id in input port, and en din indicates when input data is valid and should be processed. Output ports encompass dout, which provide the results of the convolution, and id out which gives back the signal got from id in. Finally, en dout indicates when valid output data is available. dout signals from all instances of conv crop are then gathered in conf filter bank's dout bus. This module gets its name from its two main purposes: select the data required for the convolution, and perform the actual convolution.
The first stage is done asynchronously by a component called image cropper. As explained earlier, conv crop gets the data in the form of a 37 × 37 pixel matrix - however, all that data is only useful for the 16th scale convolution kernel, which is also of size 37 × 37; smaller kernels only need the sub-matrix located in the middle of the 37 × 37 matrix, as shown in Figure 4.6. The selected data is then processed by the conv component, which is detailed in the next section.
conv
That module carries out the actual convolution filter operations. It gets as inputs two clocks: clk which gets the process clock and clk sum which is used to synchronize sums in the convolution sub-process clock. It also has the usual rst pin for initialization, a bus called din through which the pixel matrix arrives, a bus called coeffs which gets the convolution kernel's coefficients, an id in bus allowing to identify the orientation that is being computed, and an en din pin warning that the input data is valid and that operations may be performed. Its outputs are a dout bus that provides the convolution results, another one called id out that indicates which orientation that data corresponds to and a en dout bus announcing valid output data.
In order to simplify the architecture and to limit the required frequency of the sub-process clock, the convolution is first performed row by row in parallel. The results of each row are then added to get the final result. That row-wise convolution is performed by a bank of convrow modules having one filter per row. The sum of the rows is performed by the sum acc module, and the result is encoded as suggested in Section 4.2 thanks to the s1degrader module; both modules shall now be presented.
convrow That module has almost the same inputs as conv, the only exception being that it only gets the input pixels and coefficients corresponding to the row it is expected to process. Its output pins are similar to those of conv. As explained in Section 4.2, our filter coefficients are either +1's or -1's, respectively coded as "1" and "0". Thus, each 1-bit coefficient does not actually code a value, but rather an instruction: if the coefficient is 1, the corresponding pixel value is added to the convolution's accumulated value, and it is subtracted if the coefficient is 0. That trick allows to perform the convolution without any products. In practice, a subtraction is performed by getting the opposite value of the input pixel by evaluating its two's complement and performing an addition. Sums involved at that stage are carried out by the sum acc module, which shall now be described.
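The add-or-subtract trick can be summarized by the following behavioural Python sketch, where the function name and the example values are ours.

import numpy as np

def convrow_accumulate(pixels, sign_bits):
    # Multiplier-less row "convolution": a coefficient bit of 1 adds the pixel,
    # a bit of 0 subtracts it (two's complement addition in hardware).
    acc = 0
    for p, bit in zip(pixels, sign_bits):
        acc += p if bit else -p
    return acc

row = np.array([3, 0, 2, 1, 3, 2, 0], dtype=np.int32)     # 2-bit input pixels
coeffs = np.array([1, 1, 0, 0, 0, 1, 1], dtype=np.uint8)  # 1-bit kernel row
assert convrow_accumulate(row, coeffs) == int(np.sum(row * np.where(coeffs, 1, -1)))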
In each convrow module, the "multiplications" are performed in parallel in rowmult between the data coming from the din and coeffs input buses - as mentioned in Section 4.2, those multiplications consist in fact in simple sign changes, depending on the 1-bit coefficients provided by the external module coeffs manager. The results are then accumulated thanks to convrow's cumsum component. Finally, the outputs of all convrow modules are accumulated thanks to another cumsum component. The result is afterwards degraded thanks to the s1degrader module, the output of which is written in dout.

sum acc That module sums serially the values arriving in parallel. The data arrives through its din parallel bus, and must be synchronized with the process clock arriving through the clk pin. That module uses a unique register to store its temporary data. At each process clock cycle, the MSB of the din bus, which corresponds to the first value of the sum, is written in the register. At each following sub-process clock cycle, an index is incremented, indicating which value should be added to the accumulated total. Timing requirements concerning the involved clocks are discussed later in Section 4.4.2. The result is written on the output pins synchronously with the process clock.
Once the data has been accumulated row by row, and the results coming out of all rows have been accumulated again, the result may be encoded on significantly shorter words as we explained in Section 4.2.3. That encoding is taken care of by the s1degrader module, which shall be described now.
s1degrader This module takes care of the precision degradation of the convolution's output. It is synchronized on the process clock, and as such has a clk input pin. The results written in dout simply depend on the position of the input value w.r.t. the partition boundaries on the natural integer line.
shift registers
That module allows to delay data. This is mostly useful to address synchronization problem, and thus it needs a clock clk. A rst input port allows to initialize it, and data is acquired through the din port while an en din input port allows to indicate valid input data. Delayed data may be read from the dout output port, and a flag called en dout is set when valid output data is available and unset otherwise.
The way that module works is straightforward. It simply consists of N registers r_i, each of them being connected to two neighbours, except for r_1 and r_N. At each clock cycle, both the data from din and en din are written in r_1, and each other register r_i gets the data from its neighbour r_{i-1}, as shown in Figure 4.9. The last register simply writes its data in the dout and en dout output ports.
c1
Once the convolutions are done and the data encoded on a shorter word, max-pooling operations must be performed. Following the lines of the theoretical model, this is done by the c1 module, which gets its inputs directly from s1's output pins. It is synchronized on the process clock, and therefore it has the mandatory clk and rst pins. It also has input buses called din, din ori and en din, which are respectively connected to s1's dout, ori and en dout. Its output pins are made up of buses named dout, dout ori and en dout, which respectively provide the result of the max-pooling operations, the associated orientation identifier and the flag indicating valid data.

The dataflow is the following: maximums are first computed across scales with the max 2by2 components. The data is then organized into stripes in the same fashion as in the pix to stripe component used in the s1 module. That stripe is organized by lines, and then scales, and needs to be organized by scales, and then lines, to be processed by the later modules - this reorganization is taken care of by reorg stripes. Orientations being multiplexed, we need to separate them so that each may be processed individually, which is done by the data demux module. Each orientation is then processed by one of the c1 orientation modules. Finally, data coming out of c1 orientation is multiplexed by data mux before being written in the output ports.
The process is carried out by the following components: c1 max 2by2, which computes the pixel-wise maximum across two S1 feature maps of consecutive scales and same orientation; c1 pix to stripe, which reorganizes the values in a way similar to that of the aforementioned pixel manager module; c1 reorg stripes, which routes the data to the following components in an appropriate manner; c1 orientation demux, which routes the data to the corresponding max-pooling engine depending on the orientation it corresponds to; and finally max filter, which is the actual max-pooling engine and performs the pooling for a particular orientation, hence the name. That flow is shown in Figure 4.10.
c1 max 2by2 Apart from the clk, rst and en din input pins, that module has an input bus called din that gets the data produce by all convolution engines and perform the max-pooling operations across consecutive scales. Since the immediate effect of that process is to divide the number of scales by two, that module's output bus dout has half the width of din. A signal going through the en dout output pin indicates that valid data is available via dout.
c1 pix to stripe That module is very similar to the pix to stripe module used in s1 (see Section 4.3.2.1), except that it operates on data of all of the 8 scales produced by c1 max 2by2 and produces stripes of 22 pixels in heights, as the maximum window used for the max-pooling operations in C1 is 22 × 22 as stated in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. Its input and output ports are the same as those of pix to stripe, with additional din ori and dout ori allowing to keep track of the orientation corresponding to the data.
c1 reorg stripes The data produced by c1 pix to stripe is ordered first by the position of the pixels in the stripe, and then per scale - i.e. the first pixels of all scales are next to each other, followed by the second pixels of all scales, and so on. This is impractical for the processing needed in the later modules, where we need the data to be grouped by scales. That module achieves it simply by rerouting the signals asynchronously.
c1 orientations demux During C1, each orientation is processed independently from the others. However, at this point they arrive multiplexed from the same bus: first the pixels from the first orientation, then the pixels at the same locations from the second orientation, followed by the third and the fourth - we then go back to the first orientation, then the second one, and so on. That module gets those pixels through its din bus, and routes the signal to the relevant pins of its dout bus depending on its orientation, which is given by the din ori input bus, which is wired to c1 pix to stripe's dout ori bus.
Each set of pins corresponding to a particular orientation then routes the signal to the correct instance of the c1 orientation module. In order to perform that demultiplexing operation, that module also has the compulsory clk, rst and en din pins.
c1 orientation The actual max-pooling operation is performed by the c1unit components contained in that module. Each c1 orientation instance has a bank of 8 c1unit instances, each having its own configuration so as to perform the max-pooling according to the parameters indicated in [START_REF] Serre | Robust object recognition with cortex-like mechanisms[END_REF]. The role of the c1 orientation module is to serve as an interface between the max-pooling unit bank and the rest of the hardware model. As inputs, is has the usual clk, rst and en din input pins as well as a din input bus. That bus gets the data of the corresponding orientation generated in the s1 module. Data of all scales arrive in parallel, as a result of the previous modules.
Data of each of the 8 scales is routed to a particular c1unit component, which shall be described soon. Output data is then written in the dout bus. An en dout output is set to "1" when data is ready, and the pins of an output bus called dout en scales are set depending on the scales at which the data is available, while the other pins are unset - e.g. if the output data corresponds to the 1st and 4th scales of the C1 layer of the model, dout en scales shall get the value "00001001". Figure 4.11a shows the two c1unit components and the control module c1unit ctrl - named ctrl there for readability. Data coming out of those components are multiplexed in the same output port dout. The four-bit data signal is shown with the thick line, and the control signals are shown in light lines. We see that dedicated control signals are sent to each maxfilt component, but also that both get the same data.
The control signals presented in Figure 4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap.
c1unit This is the core-module of the max-pooling operations -the purpose of all other modules in c1 is mostly to organize and route data, and manage the whole process.
Its inputs consist of the compulsory clk, rst and en din pins and the din bus. Data are written to the usual dout and en dout output ports. The max-pooling operations are performed by two instances of a component named maxfilt. The use of those two instances, later referred to as maxfilt a and maxfilt b, is made mandatory by the fact that there is 50% overlap between the receptive fields of two C1 units in the original model. The data is always sent to both components; however, setting and unsetting their respective en din pins at different times emulates the behaviour of the set of C1 units operating at the corresponding orientation and scale: at the beginning of a line, only one of the two modules is enabled, and the other one gets enabled only after an amount of pixels equal to half the size of the pooling window (i.e. the stride) has arrived. That behaviour is illustrated in Figure 4.11, and is made possible thanks to the c1unit ctrl module. In the next two paragraphs, we first describe how maxfilt works, and then how it is controlled by c1unit ctrl.
maxfilt This is where the maximum pooling operation actually takes place. That module operates synchronously with the process clock, and thus has the usual clk, rst and en din input ports -data is got in parallel via the din input port. The input data corresponds to a column of values generated by s1, with the organization performed by the above modules. There are also two additionnal control pins called din new and din last, allowing to indicate the module that the input data is either the first ones of the receptive field, the last ones, or intermediate data. The value determined by the filter is written in the dout port, and valid data is indicated with the en dout output port.
The module operates as follows. The module is enabled only when the en din port is set to "1". It has an inner register R that stores intermediate data. When din new is set, the maximum of all input data is computed and the result is stored in R. When both din new and din last are unset and en din is set, the maximum between all input values and the value stored in R is computed and stored back in R. Finally, when din last is set, the maximum value between the inputs and R is computed again, but this time it is also written in dout and en dout is set to "1". Figure 4.11b shows how those signals should act to make that module work properly.
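A behavioural Python sketch of that control scheme, with illustrative flag sequences of our own, could read as follows.

def maxfilt(columns, new_flags, last_flags):
    # Behavioural model of maxfilt: running maximum over a receptive field,
    # delimited by the din_new and din_last control flags.
    outputs, r = [], None
    for col, is_new, is_last in zip(columns, new_flags, last_flags):
        m = max(col)
        r = m if is_new else max(r, m)   # reset on din_new, accumulate otherwise
        if is_last:
            outputs.append(r)            # din_last: flush the pooled value
    return outputs

# A 4x4 receptive field arriving as four 4-pixel columns
cols = [[1, 5, 2, 0], [3, 3, 1, 1], [0, 2, 7, 2], [4, 1, 0, 3]]
assert maxfilt(cols, [1, 0, 0, 0], [0, 0, 0, 1]) == [7]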
c1unit ctrl That module's purpose is to enable and disable the two maxfilt components of c1unit when appropriate. It does so thanks to a process synchronized on the process clock, and thus has the customary clk, rst and en din input ports. It gets the data that is to be processed in its parent c1unit module through its din input bus, and re-writes it to the dout output bus along with flags wired to the two maxfilt components of its parent module, via four output ports: en new a and en last a, which are connected to maxfilt a, and en new b and en last b, which are connected to maxfilt b. maxfilt a and maxfilt b are the modules mentioned in the description of c1unit, presented earlier.
c1 to s2
That module's goal is to provide an interface between the output ports of c1 and the input ports of s2. It also allows to get the data directly from c1 and use it as a descriptor for the classification chain. It reads the data coming out of c1 in parallel, stores it, and serializes it in an output port when ready. That module needs three clocks: clk c1, clk s2 and clk proc. It also has the rst port, as any other module with synchronous processes. The input data is written in the c1 din input port, and its associated orientation is written in c1 ori. Data coming from different scales in C1 are written in parallel. en c1 is an input port of width 8 - one pin per scale in C1 - that indicates which scales from c1 din are valid. Finally, a retrieve input port indicates that the following module is ready to get new data. Output data is written serially in the dout output port, and a flag called en dout indicates when the data in dout is valid.
As shown in Figure 4.12, that module has four major components: two BRAM-based buffers that store the data and write it in din when ready, an instance of c1 handler which gets the input data and provides it along with the address where it should be written in the buffers, and finally a controller ctrl with two processes that takes care of the controlling signals. The reason why we need two buffers is that we use a double buffering: the data is first written into buffer A, then when all the required data has been written the next data is written into buffer B while we read that of buffer A, then buffer B is read while the data in buffer A is overwritten with new data, and so on. This allows to avoid problems related to concurrent accesses of the same resources.
When new data is available in c1 din - that is, when at least one of en c1's bits is set - the writing process is launched. This process, which is synchronized on the high-frequency clk proc clock, proceeds as follows: if en c1's LSB is set, the corresponding data is read from c1 din and sent to c1 handler along with an unsigned integer identifying its scale. Then the second LSB of en c1 is read, and the same process is repeated until all 8 bits of en c1 are checked.
In parallel, c1 handler returns its input data along with the address where it should be written in BRAM. Both are sent to the buffer available for writing, which takes care of writing the data in its inner BRAM. Once data is ready, i.e. when all C1 feature maps for an image are written in the buffer, that buffer becomes read-only, as new incoming data is written in the other buffer. Every time the retrieve input signal switches state, data is written into dout and en dout is set. When the data is written, it is always by batches of four values, one per orientation.
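The ping-pong scheme itself can be summarized by the small behavioural sketch below; class and method names are ours, and no BRAM timing or clock-domain crossing is modelled.

class DoubleBuffer:
    # Ping-pong buffering: one buffer is written while the other is read;
    # roles are swapped once the write buffer holds a full set of C1 maps.
    def __init__(self, size):
        self.buffers = [[None] * size, [None] * size]
        self.write_idx = 0            # buffer currently opened for writing

    def write(self, addr, value):
        self.buffers[self.write_idx][addr] = value

    def swap(self):                   # called when all C1 data for an image is stored
        self.write_idx ^= 1

    def read(self, addr):             # reads always target the non-written buffer
        return self.buffers[self.write_idx ^ 1][addr]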
c1 handler That module handles the pixels sent from c1 to s2 along with their corresponding scale, and simply rewrites them in its output ports with the address to which they should be written in c1 to s2's write buffer. Its input ports consist of clk, which gets the clock on which it should be synchronized, the rst port allowing to reset the component, the din port getting the C1 value to be handled, the scale of which is written in the scale input port, the rst cnts port that allows to reset all of this module's inner counters used to generate the address, and the en din input port indicating when valid data is available and should be processed. This module's output ports consist of dout, which is used to rewrite the input data, addr, which indicates the address where to write the data in BRAM, and en dout, indicating that output data is available.

Figure 4.12 shows the c1 to s2 module. The blue and red lines show the data flow in the two configurations of the double-buffering. The data goes through c1 handler, where the address to which it should be written is generated and written in waddr. The rea and reb signals control the enable mode of the BRAMs, while wea and web enable and disable the write modes of the BRAMs. When the upper BRAM is in write mode, wea and reb are set and web and rea are unset. When the upper buffer is full, those signals are toggled so that we read the full buffer and write in the other one. Those signals are controlled thanks to the ctrl component, which also generates the address from which the output data should be read from the BRAMs. Data read from both BRAMs are then multiplexed into the dout output port. Pins on the left of both BRAMs correspond to the same clock domain, and those on the right belong to another one, so that they are synchronized with the following modules.
That module works as follows. It has 8 independent counters, one per scale. Let c_s^n be the value held in the counter associated to scale s at instant n. When en din is set, and assuming the value read from scale corresponds to the scale s coded as an unsigned integer, the data read from din is simply written in dout and the value written in addr is simply c_s^n + o_s, again coded as an unsigned integer, where o_s is an offset value as given in Table 4.5. Those offsets are determined so that each scale has its own address range, contiguous with the others, under the conditions given in Section 4.3.1:
o_0 = 0, (4.24)

\forall s \in \mathbb{Z}^*: \quad o_s = \sum_{k=0}^{s-1} 4 S_k^2, (4.25)
where S k is the size of the C1 maps at scale k, also given in Table 4.5. Once all pixels have been handled by c1 handler, the module's counters must be reset by setting and then unsetting the rst cnts input signal.
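For illustration, the sketch below computes such offsets from Equations 4.24 and 4.25; the listed map side lengths are mostly placeholders of our own (only the 31 × 31 and 24 × 24 sizes are mentioned elsewhere in this Chapter), not the values of Table 4.5.

def scale_offsets(c1_sizes):
    # Equations 4.24-4.25: contiguous address ranges, 4 orientations per sample.
    offsets = [0]
    for s in c1_sizes[:-1]:
        offsets.append(offsets[-1] + 4 * s * s)
    return offsets

# Illustrative side lengths of the 8 C1 maps (placeholders, not Table 4.5 values)
sizes = [31, 27, 24, 20, 17, 14, 11, 8]
offsets = scale_offsets(sizes)
addr = offsets[2] + 5   # address of the 6th sample written for scale 2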
s2c2
That module gets as input the serialized data produced by c1 to s2. and performs the operation required in HMAX's S2 and C2 layers. It has two main components, s2 and c2, that respectively take care of the computations needed in HMAX's S2 and C2
layers. In order to save hardware resources, the pre-learnt S2 filters are multiplexed as is done in s1: every time new data arrives, pattern-matching operations are performed with some of the pre-learnt S2 patches in parallel, then the same operations are performed with other pre-learnt patches and the same input data, and so on until all pre-learnt patches have been used. We shall define here, for later use, a multiplexing factor that we shall denote M_{S2C2}, which corresponds to the amount of serial computations required to perform the computations on all S2 patches for a given input data. Its most useful output port is called rdy, and is connected to c1 to s2's retrieve input port, to warn it when it is ready to get new data.
s2
This module handles the data coming out of c1 to s2 as well as the pre-learnt patches, matches those patterns and returns the results. Its input pins firstly consist in clk and clk proc that each get a clock signal: the first one is the clock on which the input data is synchronized and the other one synchronizes the computations. It also has a rst input port allowing to reset it. The data should be written in the din port, and a port called en din indicates that the input data is valid. After performing the pattern matching operations, the data is written into the dout output port, along with an identifier into the id out output port. Finally, en new allows the other module to be warned that new data is available, en dout indicates precisely which parts of dout carry valid data and should be read and rdy indicates when the process is ready to read data from c1 to s2.
That modules has three major components, which shall be described in the next Sections:
s2 input manager, which handles and organizes input data; s2 coeffs manager, which handles and provides the coefficients of the pre-learnt filters; and s2 processors, which takes care of the actual pattern-matching operations.
Figure 4.13 shows the dataflow in that module. We shall now describe in more detail its sub-modules s2 input manager, s2 coeffs manager and s2 processors. The data arriving in the module is handled by s2 input manager, which makes it manageable for s2 processors. The latter also gets the pre-learnt filters needed for the pattern-matching operations from s2 coeffs manager in parallel, and performs the computations. Once this is over, the data is sent in parallel to the dout output port, which feeds the next processing module.
s2 input manager
This module's purpose is somewhat similar to that of s1's pixel manager module: managing the incoming data and reorganizing it in a way that makes it easier to process. It gets input data from c1 to s2 serially and provides an N × N × 4 map of C1 samples, where N is the side length of the available map. Its input ports gather a clk port getting the clock and a rst port allowing to reset the module, as well as a din port where the data should be written and an en din port that should be set when valid data is written into din. The output map may be read from the dout output port, and its corresponding scale in C1 space is coded as an unsigned integer and written into the dout scale output port. Finally, the matsize output port gives a binary string w.r.t. the value of the aforementioned N variable, according to Table 4.6. Individually, each bit of dout scale allows to enable and disable the s2bank modules, which take care of the actual pattern-matching operations and which are described in Section 4.3.3.6.
s2 input manager mainly consists of two components: s2 input handler, which gets C1 samples serially as input and returns vertical stripes of those samples; and an instance of the pixmat component described in Section 4.3.2.1. However, pixmat is not used here in exactly the same way as in s1. First of all, we consider here that a "sample" stored in pixmat does not actually correspond to a single sample of a C1 map at a given location, but to an ensemble of four C1 samples, one per orientation. Furthermore, contrary to s1, the feature maps produced by c1 do not all have the same size, as stated earlier. (Table 4.6 gives the corresponding dout scale values: 0000, 0001, 0011, 0111, 1111.) To address that issue, we chose to ignore pixmat's en dout port, and to use a state machine that keeps track of the data in a way similar to that of pixmat, although it better handles the cases where the feature maps are smaller than 31 × 31: the process is similar, but the line width depends on the scale to which the input data belongs. That scale is determined by an inner counter: knowing how many samples there are per scale in C1, it is easy to know the scale of the input data.
s2 input handler The reorganization of the data arriving sample by sample in stripes that can feed pixmat is performed by that module. As a synchronous module, it has the required clk and rst input ports, the data is read from its din input port and valid data is signaled with the usual en din input port. Output stripes are written in the dout output port, along with the identifier of their scales which are written in the dout scale output port. Finally, the en dout output port indicated that data from dout scale is valid.
Let's keep track of the organization of the data that arrives in that module. Pixels arrive serially, as a stream. The first pixels to arrive are those of the C1 maps of the smallest scale. Inside that scale, the data is organized by rows, then columns, and then orientation as shown in Figure 4.14a. The first thing that module does is to demultiplex the orientations, so that every word contains the pixels of all of the orientations, at the same locations and scale. Once this is done, this new stream may be processed as we explain now.
As presented in Figure 4.14b, that module has 8 instances of the s2 pix to stripe component - one for each size of C1 feature maps - that produce the vertical stripes given input samples and generic parameters such as the desired stripe's height and width. Only one of those instances is used at a time, depending on the scale (which is computed internally depending on the amount of acquired samples). Thus, at scale 0 the C1 feature maps are 31 × 31 and the only active module is the 31 × 31 one. When processing samples of scale 2, which means 24 × 24 feature maps, the only active module is the 24 × 24 one, and so on. Whatever its side, the generated stripe is written in dout and its corresponding scale in dout scale. Finally, en dout indicates which data from dout is valid - this is somewhat redundant with dout scale, but makes it easier to interface that module to the others.

s2 processors

The pattern matching between the input data and the pre-learnt patterns corresponding to S2 units of all sizes is performed here. Data are synchronized on the pixel clock, which is provided to this module via its clk input port. Operations, however, are synchronized on the S2 process clock of much higher frequency, given by the clk proc input port. The module also has the compulsory rst port allowing to initialize it. The data resulting from the processes of the previous layers is passed through the din input port, along with its codebook identifier via the cb din input port. The pre-learnt patterns to be used for the pattern matching operations are passed through the coeffs input bus, and all their corresponding codebook identifiers are given to the module via an input port called cbs coeffs. Finally, the id in input port gets an identifier that allows to keep track of the data in the later c2 module, and the en din port allows to enable or disable the module.
Regarding the output ports, they consist in the dout port which provide the results of all the pattern matching operations performed in parallel, the id dout port that simply gives back the identifier provided earlier via the id in port, a "rdy" output port that warns that that module is ready to get new data, and finally an en dout output bus that indicates which data made available by dout is valid; this is required due to the fact that, as we shall see, pattern matching operations are not performed at all positions of the input C1 maps, depending on the various sizes of the pre-learnt pattern. Thus, data are not always available at the same time, and we need to keep track of this.
For each size of the pre-learnt S2 patches, i.e 4×4×4, 8×8×4, 12×12×4, 16×16×4, this module implements two components: s2bank that performs the actual pattern matching operation, and corner cropper that makes sure that only valid data is routed to the s2bank instance. Data arriving from din corresponds to a matrix of 16 × 16 × 4 pixels: all of it is passed to the s2bank instance that match input data with 16×16×4 patterns.
The data fed to the s2bank instances performing computations for smaller pre-learnt patterns corresponds to a chunk of the matrix cropped from the "corner" of the pixel matrix. Each s2bank instance receives through its coeffs input port the pre-learnt vector used for the pattern matching operation, and the corresponding codebook is obtained via its cb coeffs input port.
s2unit
That module takes care of the computation of a single pattern matching operation in S2. Like its top module s2 processors, it has clk and clk proc input ports that respectively get the data and system clocks. It also has a rst input port for reset. The operands consist, on the one hand, of the data produced by the s1c1 module and selected by corner cropper and, on the other hand, of the pre-learnt pattern with which the Manhattan distance is to be computed. They are respectively given to that module via the din and coeffs input ports. The data arrive in parallel in the form of the optimized encoding described in Section 4.2, and as explained there this encoding requires a codebook. Since there is a codebook per C1 map, the identifiers of the codebooks required for the input data and the pre-learnt pattern are respectively given by the cb din and cb coeffs input ports. The identifier mentioned in s2 processors is passed by the id in input port, and the module can be enabled or disabled thanks to the en din input port.
The Manhattan distance computed between the passed vectors is written to the dout output port, along with the corresponding identifier which is written to the id out output ports. Finally, an output port called en dout indicates when valid data is available.
The Manhattan distance is computed here in a serial way, synchronized on the clk proc clock. This computation is performed by a component called cum diff, along with shift registers as described earlier, which are used to synchronize the data when needed. In this Section, we described the principles of the s2 module. The next Section does the same for the c2 module.
Timing
Our model works globally as a pipeline, where each module uses its own resources.
Therefore, the overall time performance of the whole chain is determined by the module that takes longest. In order to evaluate how fast our model is in terms of frames per second, we shall now study, for each stage, the timing constraints it requires. As for the C1 layer, it processes the data as soon as it arrives, and thus no bottleneck is involved there.
The S2 layer is the most demanding in terms of computations. Computations are performed only when all the required data is available, in order to save as much time as possible, as explained in Section 4.3.3.5. Considering we use a 25-to-1 multiplexer to process all S2 filters, the time T_{S2} required by that layer may be written as
T S2 = 25 (16 × 16 × 4M 16 + 12 × 12 × 4M 12 + 8 × 8 × 4M 8 + 4 × 4 × 4M 4 ) , (4.26)
where M_i is the number of valid X_i × X_i patches in the C1 feature maps in which patches bigger than X_i × X_i (with X_i = 4i) are not valid; it may be expressed as
M i = N i -N i+1 (4.27)
where N_i is the number of valid X_i × X_i patches in the C1 feature maps. Hence, we have

T_{S2} = 25 [...].

The strategy proposed in [99] is very different from what we proposed here. The huge computational gain they brought is largely due to the use of separable filters for S1, which allows to use very few resources, as explained in Section 4.1.1.1. The fact that, in their implementation of S1, filters are multiplexed across scales instead of across orientations as we did here, also allows them to begin computations in the S2 layer as soon as data is ready, while in our case we chose to wait for all C1 features to be ready before starting computations, using a double buffer to allow a pipelined process. In their case, the bottleneck is the S1 layer, which forces them to process a maximum of 190 images per second. However, that amount is 8.37 times bigger than the FPS we propose. This is due to the fact that, while reducing the data encoding seems to provide performances similar to those obtained with full double-precision floating-point values, it does not take full advantage of the symmetries underlined by Orchard et al. in [99].
As for the S2 layer, Orchard et al. claimed that they used 640 multipliers in order to make the computation as parallel as possible - however, it is not very clear in that paper how exactly those multipliers were split across filters, and the code is not available online - hence a direct comparison with our architecture is not feasible. However, with their implementation of S2 they claim being able to process 193 128 × 128 images per second, while our implementation gives 22.69 images per second, although it uses much fewer resources. Finally, we did reduce the precision of the data going from S1 to S2, but the computation in S2 is still performed with data coded as 24-bit integers - this is due to the fact that we did not test the model when degrading the precision at that stage. Future work shall address that issue, and we hope to reduce the precision to a single bit per word at that stage. Indeed, in that extreme scenario the computation of the Euclidean distance is equivalent to that of the Hamming distance, i.e. the number of different symbols between two words of the same length. That kind of distance is much easier to compute than the classical Euclidean or even Manhattan distance, be it on FPGA or CPU. The rationale behind that idea is that single-bit precisions were successfully used in other machine learning contexts [START_REF] Coussy | Fully-Binary Neural Network Model and Optimized Hardware Architectures for Associative Memories[END_REF][START_REF] Courbariaux | BinaryConnect: Training Deep Neural Networks with binary weights during propagations[END_REF], and such an implementation would be highly profitable for implementation on highly constrained devices.
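To give an idea of that extreme scenario, the sketch below shows how the distance computation would collapse to an XOR followed by a popcount with single-bit features; it is a plain illustration of our own, not part of the implementation described above.

import numpy as np

def hamming_distance(a_bits, b_bits):
    # With 1-bit features, the Manhattan (or squared Euclidean) distance between
    # two binary vectors reduces to counting differing bits: XOR then popcount.
    return int(np.count_nonzero(a_bits ^ b_bits))

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=1024, dtype=np.uint8)
b = rng.integers(0, 2, size=1024, dtype=np.uint8)
assert hamming_distance(a, b) == int(np.sum(np.abs(a.astype(int) - b.astype(int))))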
Resource
Conclusion
This Chapter was dedicated to the optimizations of the computations that take place in the HMAX model. The optimization strategy was to use simpler operations as well as coding the data on shorter words. After that study, a hardware implementation of the optimized model was proposed using the VHDL language, targeting an Artix 7 200T
platform. Implementation results in terms of resource utilization and timing were given, as well as comparisons with a work chosen as a baseline.
We showed that the precision of the data in the early stages of the model could be dramatically reduced while keeping acceptable accuracy: only the 2 most significant bits of the input image's pixels were kept, and the Gabor filters' coefficients were coded on a single bit, as was proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF]. We also used the coding strategy proposed in the same paper, in order to reduce the bit width of the stored coefficients and their transfer from module to module. We also instantiated fewer patches in S2, as proposed by Yu and Slotine [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF], and we proposed to use the Manhattan distance instead of the Euclidean distance as in the initial model [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF]. Those optimizations made the overall accuracy of the model lose XXX points in precision for an image classification task based on 5 classes of the popular Caltech101 dataset, while dividing the complexity of the S2 stage by 5 and greatly reducing the required precision of the data, hence diminishing the memory footprint and the needed bandwidth for inter-module communication.
A hardware implementation of that optimized model was then proposed. We aimed for that implementation to be as naive as possible, to see how those optimizations compare with the implementation strategy proposed by Orchard et al. [99]. Their implementation was made so as to fully use the resources of the target device, and thus they claimed a throughput much higher than ours. However, our implementation uses much fewer resources than theirs, and our optimizations and theirs are fully compatible. A system implementing both of them would be of high interest in the field of embedded systems for pattern recognition.
Future research shall aim to combine our optimizations with the implementation strategy proposed by Orchard et al, thus reducing even further the resource utilization of that algorithm. Furthermore, we shall continue our efforts towards that objective, by addressing the computation in the S2 layers: at the moment, they are implemented as
Manhattan distance -we aim to reduce the precision of the data during those pattern matching operation to a single bit. That way, Euclidean and Manhattan distances are reduced to the Hamming distance, much less complex to compute.
Chapter 5
Conclusion
In this thesis, we addressed the issue of optimizing a bio-inspired feature extraction framework for computer vision, with the aim of implementing it on a dedicate hardware architecture. Our goal is to propose an easily embeddable framework, generic enough to fit different applications. We chose to focus on efforts on HMAX, a computational model of the early stage of image processing in the mammal's cortex. Although that model may not be quite as popular as others, such as ConvNet for instance, it is interesting in that it is more generic and only requires little training, while frameworks such as ConvNet often require the design of a particular topology and a large amount of samples for training.
HMAX is composed of 4 main stages, each computing features that are progressively more invariant than the ones before to translation and small deformations: the S1 stage uses Gabor filters to extract low-level features from the input image, the C1 stage uses a max-pooling strategy to provide a first level of translation and scale invariance, the S2 stage matches pre-learnt patches with the feature maps produced by C1 and the C2 stage provides full invariance to translation and scale thanks to its bag-of-words approach, by keeping only the highest responses of S2. The only training that happens here is in S2, and it may be performed using simple training algorithms with few data.
First, we aimed to optimize HMIN, which is a version of HMAX with only the S1 and C1 layers, for two particular tasks: face detection, and pedestrian detection. Our optimization strategy consisted in removing the filters that we assumed were not necessary:
for instance, in the case of face detection, the most prominent features lie in the eyes and mouth, which respond best to horizontal Gabor filters. Hence, we proposed to keep only such features in S1. Furthermore, most of the useful information is redundant from scale to scale, thus we further reduced the complexity of our system by summing all the remaining convolution kernels in S1, and we reduced the result to a manageable size of 9 × 9, which allows it to process smaller images. Doing so helped us to greatly reduce the complexity of the framework, while keeping its accuracy at an acceptable level. We validated our approach on the two aforementioned tasks, and we compared the performance of our framework with state-of-the-art approaches, namely the Convolutional Face Finder and Viola-Jones' for the face detection task, and another implementation of ConvNet and the Histogram of Oriented Gradients for the pedestrian detection task.
For face detection applications, we concluded that, while the precision of our algorithm is significantly lower than that of state-of-the-art systems, our system still works decently in a real-life scenario, where images were extracted from a video. Furthermore, it presents the advantage of being generic: in order to adapt our model to another task, one would simply need to update the weights of the filter in S1 so as to extract relevant features, while state-of-the-art algorithms were either designed specifically for the considered task or would require a particular implementation for it.
However, our algorithm does not seem to perform to a sufficient level for the pedestrian detection task, and more efforts need to be made to that end. Indeed, while our simplifications allowed our system to be the most interesting in terms of complexity, they also brought a significant drop in terms of accuracy, although more tests need to be made for that use case as our results are not directly comparable to those of the state of the art.
We then went back to the full HMAX framework with all four layers, and we studied optimizations aiming to reduce the computation precision. Our main contribution is the use of as few as two bits to encode the input pixels, hence using only 4 gray levels instead of the usual 256. We also tested that optimization in combination with other optimizations from the literature: Gabor filters in S1 were reduced to simple additions and subtractions, the outputs of S1 were quantized using Lloyd's encoding method, which allows to find the optimal quantization given a dataset, we divided by 5 the number of pre-learnt patches in S2 and we replaced the complex computation of Gaussians in S2 with the much simpler Manhattan distance. We showed that all those approximations allow to keep an acceptable accuracy compared to the original model.
We then implemented our own version of HMAX on dedicated hardware, namely the Artix-7 200T FPGA from Xilinx, using the aforementioned optimizations. That implementation was purposely naive, in order to compare it with a state-of-the-art implementation. The precision reduction of the input pixels allows to greatly reduce the memory needed when handling the input pixels, and lets the computation of the S1 feature maps be done on narrower data. Furthermore, the replacement of the Gabor filter coefficients by simple additions and subtractions allowed us to encode that instruction on a single bit - "0" for subtraction and "1" for addition - instead of a full coefficient using, for instance, a fixed- or floating-point representation. The data coming out of S1 is then encoded using the codebooks and partitions determined thanks to Lloyd's method, hence allowing to pass only words of 2 bits to the C1 stage. As for the S2 layer, the influence of data precision on the performance had not yet been evaluated by the time this document was written, and hence all data processed there use full precision: input data are coded on 12 bits, and output data on 24 bits.
The main limit of our implementation is that it does not use the symmetries of the Gabor filters. That technique was successfully used in the literature to propose a full HMAX implementation on a single FPGA, along with different multiplexing schemes that allow a higher throughput. Indeed, our implementation - which is yet to be implemented and tested on a real device - may process 4.54 164 × 164 frames per second, while the authors of the state-of-the-art solution claimed that theirs may process up to 193 128 × 128 frames per second. It must be emphasized, however, that our implementation uses much fewer hardware resources, and that our optimizations and theirs are fully compatible. Hence, future development shall mainly consist in merging the optimizations they proposed with those that we used.
Let us now give answers to the questions we stated at the beginning of this document. The first one was: how may neuromorphic descriptors be chosen appropriately and how may their complexity be reduced? As we saw, a possible solution is to find empirically the most promising features, and to keep only the filters that respond best to them. Furthermore, it is possible to merge the convolution filters that are sensitive to similar features.
That approach led us to a generic architecture for visual pattern recognition, and one would theoretically need to change only its weights to adapt it to new problems.
The second question that we stated was: how may the data handled by those algorithms be efficiently coded so as to reduce hardware resources? We showed that full precision is not required to keep decent accuracy, and that we can obtain acceptable results using only a few bits to encode parameters and input data. We also showed that that technique may be successfully combined with other optimizations.
Given the fact that nowadays, the most widely used framework for visual pattern recognition is ConvNet, it may seem surprising that we chose to stick to HMAX. The main reason is that their most well known applications are meant to run on very powerful machines, while on the contrary we directed our research towards embedded systems.
We also found the bio-inspiration paradigm promising, and we chose to push as far as possible our study of frameworks falling in that categories, in order to use them to their full potential. While our contribution in deriving an algorithm optimized for a given task does not provide an accuracy as impressive as the state of the art, we claim that the architecture of that framework is generic enough to be easily implementable on hardware, and that only the parameters would need to change to adapt it to another task.
Furthermore, our implementation of the general-purpose HMAX algorithm on FPGA is the basis for a future, more optimized and faster hardware implementation, combining the optimizations presented here, which kept hardware resource utilization low, with those proposed in the literature, which take full advantage of the features of an FPGA. Combining those contributions may take several forms: one could use a full HMAX model with all four layers but with a greatly reduced number of filters in S1, leading to an FPGA implementation using even fewer resources; or one could directly implement the framework proposed for face detection, i.e. without the S2 and C2 layers, with the optimizations we proposed for the S1 and C1 layers. Doing so would produce a very tight framework, with a low memory footprint and low complexity.
However, one may argue that frameworks such as ConvNets are nevertheless more accurate than HMAX in most use cases, that frameworks such as Viola-Jones have strikingly low complexity, and that the genericity we claim to bring does not make up for it. Even so, we claim that the studies carried out in Chapters 3 and 4 may still apply to those frameworks. Indeed, as was done in the literature, if one trains a ConvNet with a topology similar to that of the CFF, where each feature map of the second convolution stage ultimately produces a single scalar, one may observe that the weight assigned to that scalar is close to zero, and hence the convolutions responsible for that feature map may simply be removed. Furthermore, for a given task it may be easy to identify the shape of a Gabor filter that captures the interesting features; one can then either use Gabor filters as the first stage of a ConvNet, as was done in the past, or use them to initialize the weights of some convolution kernels before training.
As for our hardware implementation of HMAX, most of the optimizations we proposed may be used for ConvNets as well. For instance, one could still choose to train a ConvNet on input images with pixels coded on fewer than 8 bits. Furthermore, after training one could replace all positive weights with 1 and all negative weights with -1, and remove weights close to 0, provided the dynamics of the weights are not too far from the [-1, 1] range. We also confirmed that those techniques can be combined with other techniques from the literature, such as Lloyd's algorithm for inter-layer communication, without dramatically altering the accuracy. Hence, our implementation example is applicable to other situations, and goes well beyond the sole scope of HMAX.
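As an illustration of that post-training step, the following sketch shows how a trained kernel could be reduced to {-1, 0, +1} values; the pruning threshold and the kernel values are assumptions, and whether the resulting network keeps its accuracy would of course have to be verified per task, as was done here for HMAX.

```python
import numpy as np

def binarize_weights(weights, prune_threshold=0.05):
    # Post-training simplification sketched above: weights close to zero are
    # removed, remaining positive weights become +1, negative ones become -1.
    # The 0.05 threshold is an arbitrary illustrative choice.
    w = np.asarray(weights, dtype=np.float32)
    out = np.sign(w)
    out[np.abs(w) < prune_threshold] = 0.0
    return out

# A made-up trained 3x3 kernel whose dynamics roughly fit [-1, 1]
kernel = np.array([[ 0.62, -0.03,  0.41],
                   [-0.55,  0.01, -0.48],
                   [ 0.70, -0.02,  0.37]], dtype=np.float32)
print(binarize_weights(kernel))
```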
To conclude, we support the position that bio-inspiration is often a good starting point and that it may open perspectives that would not have been explored otherwise, but that we should not be afraid to move away from it quickly. Indeed, humanity conquered the skies with machines only loosely related to birds, and the ocean depths with vessels that share almost nothing with fish. Computer vision boomed very recently thanks to frameworks that are indeed inspired by cognitive theories, but the implementations of those theories in industrial systems are far from mimicking the brain. Still, all those systems were at some point inspired by nature; and while it is not always the most fundamental aspect, going back to that viewpoint and rediscovering why it inspired a technology may shed new light on how to push its improvement further.
A.3 Output layer training
As for the final layer, it is trained using a simple least-squares approach. Denoting by W the weight matrix and by T the matrix of target vectors, it can be shown [START_REF] Bishop | Pattern recognition and machine learning[END_REF] that we have
$$W = \left(\Phi^{\top}\Phi\right)^{-1}\Phi^{\top}T \qquad \text{(A.4)}$$

with

$$\Phi = \begin{pmatrix}
\Phi_0(x_1) & \Phi_1(x_1) & \cdots & \Phi_{M-1}(x_1) \\
\Phi_0(x_2) & \Phi_1(x_2) & \cdots & \Phi_{M-1}(x_2) \\
\vdots & \vdots & & \vdots \\
\Phi_0(x_N) & \Phi_1(x_N) & \cdots & \Phi_{M-1}(x_N)
\end{pmatrix} \qquad \text{(A.5)}$$
where Φ_i is the function corresponding to the i-th kernel, and where each target vector in T has all components equal to -1, except for its i-th component, which is +1 when the vector it corresponds to belongs to category i.
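A minimal numerical sketch of this training step is given below; the kernel-activation matrix Φ is filled with random values, and the function and variable names are illustrative assumptions rather than the code used in this work.

```python
import numpy as np

def train_output_layer(Phi, labels, n_classes):
    # Least-squares solution of equation A.4 with -1/+1 target coding.
    # np.linalg.pinv is used instead of an explicit inverse for robustness.
    n_samples = Phi.shape[0]
    T = -np.ones((n_samples, n_classes))
    T[np.arange(n_samples), labels] = 1.0
    return np.linalg.pinv(Phi.T @ Phi) @ Phi.T @ T

# Hypothetical kernel-activation matrix: 6 samples, 4 kernels, 3 categories
rng = np.random.default_rng(1)
Phi = rng.random((6, 4))
labels = np.array([0, 1, 2, 0, 1, 2])
W = train_output_layer(Phi, labels, n_classes=3)
predicted = np.argmax(Phi @ W, axis=1)
```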
[Figure B.4: invariant scattering convolution network; input image, three layers of wavelet decompositions U_λ1(x), U_λ1,λ2(x) and scattering outputs S_0(x), S_λ1(x), S_λ1,λ2(x).]

The S1 filters are Gabor filters, i.e. a Gaussian modulated by a cosine, which can be formalized as follows:
$$G(x, y) = \exp\!\left(-\frac{x_0^2 + \gamma^2 y_0^2}{2\sigma^2}\right)\cos\!\left(\frac{2\pi}{\lambda}\,x_0\right), \qquad \text{(B.2)}$$
$$x_0 = x\cos\theta + y\sin\theta \quad\text{and}\quad y_0 = -x\sin\theta + y\cos\theta, \qquad \text{(B.3)}$$
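As a concrete reading of equations B.2 and B.3, the sketch below builds such a kernel. The size, sigma and lambda values are the first S1 scale listed in the S1/C1 parameter table of this document; the aspect ratio gamma = 0.3 is an assumption, and the snippet is illustrative rather than the implementation used here.

```python
import numpy as np

def gabor_kernel(size, wavelength, sigma, theta, gamma=0.3):
    # Builds one S1 Gabor kernel following equations B.2 and B.3.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x0 = x * np.cos(theta) + y * np.sin(theta)
    y0 = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x0 ** 2 + (gamma * y0) ** 2) / (2.0 * sigma ** 2)) \
        * np.cos(2.0 * np.pi * x0 / wavelength)

# First S1 scale of HMAX: 7x7 kernel, sigma = 2.8, lambda = 3.5, theta = pi/2
k = gabor_kernel(size=7, wavelength=3.5, sigma=2.8, theta=np.pi / 2)
```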
B.2.2 Hardware implementations
To address the constraints of embedded systems, numerous hardware implementations of classifiers, feature extractors and even convolutional neural networks have been proposed.

• How may bio-inspired features be chosen appropriately, and how may their algorithmic complexity be reduced?
• How may the data handled by these algorithms be coded efficiently so as to reduce the use of hardware resources?

This section was dedicated to a review of the state of the art related to our work. In the next section, we address the first question by describing our contribution on feature selection. In the section after that, we detail the optimizations applied to HMAX with a view to a hardware implementation.

Finally, the last section is devoted to the discussion and conclusions of this work.
B.3 Feature selection

In this section, we present our work on feature selection, aimed at optimizing an algorithm for two specific tasks: face detection and pedestrian detection.

B.3.1 Face detection

B.3.2 Pedestrian detection

B.3.2.3 Experiments
To test our algorithm, we evaluated its accuracy on a pedestrian detection task using the INRIA dataset [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF]. The results are presented in Figure B.17 and in the corresponding table.
(a) Google's self driving car 1 . (b) Production control. (c) Security. (d) Home automation.
Figure 1 . 1 :
11 Figure 1.1: Application examples.
Figure 1 . 2 :
12 Figure1.2: Perceptron applied to pattern recognition. Figure1.2a shows an hardware implementation, and Figure1.2b presents the principle: each cell of the retina captures a binary pixel and returns 0 when white, 1 when black. Those pixels are connected to so called input units, and are used to compute a weighted sum. If that sum is positive, then the net returns 1, otherwise it returns -1. Training a Perceptron consists in adjusting its weights. For a more formal and rigorous presentation, see page 9.
Figure 2 . 1 :
21 Figure 2.1: A feedforward architecture. In each layer, units get their inputs from neurons in the previous layer and feed their outputs to units in the next layer.
Figure 2 . 2 :
22 Figure 2.2: Perceptron.
Figure 2 . 4 :
24 Figure 2.4: MLP activation functions.
Figure 2 . 5 :
25 Figure 2.5: RBF neural network.
Figure 2 . 6 :
26 Figure 2.6: Support vectors determination. Green dots belong to a class, and red ones to the others. Dots marked with a × sign represent the selected support vectors. The unmarked dots have no influence over the determination of the decision boundary's parameters. The black dashed line represents the determined decision boundary, and the orange lines possible decision boundaries that would not be optimal.
Figure 2 . 9 :
29 Figure 2.9: Convolutional neural network [48].
Figure 3 . 1 :
31 Figure 3.1: Example of Haar-like features used in Viola-Jones for face detection.They can be seen as convolution kernels where the grey parts correspond to +1 coefficients, and the white ones -1. Such features can be computed efficiently using integral images[START_REF] Viola | Rapid object detection using a boosted cascade of simple features[END_REF][START_REF] Viola | Robust real-time face detection[END_REF]. Point coordinates are presented here for latter use in the equations characterizing feature computations.
Figure 3 .
3 Figure 3.5 shows the repartition of the complexity of that frameworks.
Figure 3 . 6 :
36 Figure 3.6: C1 feature maps for a face. One can see here that most of the features corresponding to an actual feature of a face, e.g the eyes or the mouth, is given by the filters with orientation θ = π/2.
Figure 3 . 8 :
38 Figure 3.8: Feature map obtained with the unique kernel in S1 presented in Figure 3.7. One can see that the eyes mouth and even nostrils are particularly salient.
Figure 3 . 10 :
310 Figure 3.10: Samples from the CMU Face Images dataset.
Figure 3 . 11 :
311 Figure 3.11: ROC curves obtained with HMIN Rθ=π/2 on CMU dataset. The chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop[START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset[START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. A face was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive.
Figure 3 . 12 :
312 Figure 3.12: Example of frame from the "Olivier" dataset.
Figure 3 .
3 Figure 3.13: ROC curves obtained with HMIN Rθ=π/2 on "Olivier" dataset. As in Figure3.11, the chosen classifier is an RBF, and was trained with the features extracted from 500 faces from LFW crop[START_REF] Huang | Robust face detection using Gabor filter features[END_REF] dataset and 500 non-faces images cropped from images of the "background" class of the Caltech101 dataset[START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. For each image, a pyramid was produced in order to detect faces of various scales, were the dimensions of the images are successively reduced by a factor 1.2. An image was considered correctly detected if at least one ROI encompassing its eyes, nose and mouth was classified as "face", and if that ROI is not 20% bigger than the face according to the ground truth. Each non-face ROI that was classified as "Face" was counted as a false positive.
Figure 3 . 14 :
314 Figure 3.14: HOG descriptor computation. Gradients are computed for each location of the R, G and B channels, and for each location only the gradient with the highest norm is kept. The kept gradients are separated into cells, shown in green, and histograms of their orientations are computed for each cell. This produces a histogram map, which is divided in overlapping blocks a shown on the right. Normalization are performed for each block, which produces one feature vector per block. Those feature vectors are finally concatenated so as to produce the feature vector used for training and classification.
Figure 3 . 15 :
315 Figure 3.15: Binning of the half-circle of unsigned angles with N b = 9. The regions in gray correspond to the same bin.
Figure 3 . 17 :
317 Figure3.17: ConvNet for pedestrian detection[START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. Input image is assumed to be represented in Y'UV space. The Y channel feed the C Y 1 convolution layer, the resulting feature maps of which are sub-sampled in S Y 1. In parallel, the UV channels are subsampled by the S U V 0 layer, and the results feed the C U V 1 convolution layer. The C U V 1 and S Y 1 feature maps are concatenated and feed the C2 convolution layer. The C2 feature maps are then subsampled by S2. Finally, all output features from C2 and C U V 1 are serialized and used as inputs of a fully-connected layer for classification.
Figure 3 . 18 :
318 Figure 3.18: ROC curves of the HMIN classifiers on the INRIA pedestrian dataset.The drop of performance is more important here than it was for faces, as shown on Figure3.9. However, the gain in complexity is as significant as in Section 3.1.2.
Thus, by denoting $*$ the convolution operator and $I$ the input image:
$$I * G|_{\theta=0} = I *_c V *_r H \qquad (4.6)$$
where $A *_r B$ denotes separated convolutions on the rows of 2-D data $A$ by the 1-D kernel $B$, and $A *_c B$ denotes column-wise convolutions of $A$ by $B$. Using the same notations:
$$G(x, y)|_{\theta=0} = G(y, x)|_{\theta=\pi/2} \qquad (4.7)$$
and then
$$I * G|_{\theta=\pi/2} = I *_c H *_r V \qquad (4.8)$$
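A sketch of that row/column factorization is given below; the 1-D factors are illustrative stand-ins rather than the exact kernels derived in the text, and SciPy is used only to keep the example short.

```python
import numpy as np
from scipy.signal import convolve2d

def separable_response(image, col_kernel, row_kernel):
    # Equation 4.6 style filtering: convolve the columns with one 1-D kernel,
    # then the rows with another, instead of a single 2-D convolution.
    tmp = convolve2d(image, col_kernel.reshape(-1, 1), mode='same')
    return convolve2d(tmp, row_kernel.reshape(1, -1), mode='same')

# Illustrative 1-D factors: a plain Gaussian and a cosine-modulated Gaussian,
# as a theta = 0 Gabor filter separates into.
t = np.linspace(-3, 3, 7)
v = np.exp(-t ** 2 / 2.0)
h = v * np.cos(2.0 * np.pi * t / 3.5)
image = np.random.default_rng(2).random((32, 32))
resp_theta0 = separable_response(image, v, h)    # I *_c V *_r H
resp_theta90 = separable_response(image, h, v)   # I *_c H *_r V (eq. 4.8)
```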
Table 4 . 2 :
42 Accuracies of Orchard's implementations on Caltech101 [99]. The "Original model" column shows the results obtained with the original HMAX code, while "CPU" shows the results obtained by Orchard et al.'s own CPU implementation, and "FPGA" show the results obtained with their FPGA implementation. (a) Car rears. (b) Airplanes. (c) Faces. (d) Leaves. (e) Motorbikes. (f) Background.
Figure 4 . 1 :
41 Figure 4.1: Samples of images of the used classes from Caltech101 dataset [142].
Figure 4 . 2 :
42 Figure 4.2: Precision degradation in input image for three types of objects: faces, cars and airplanes.Color maps are modified so that the 0 corresponds to black and the highest possible value corresponds to white, with gray level linearly interpolated in between. We can see that while the images are somewhat difficult to recognize with 1 bit pixels, they are easily recognizable with as few as 2 bits.
Figure 4 . 3 :
43 Figure 4.3: Recognition rates of HMAX on four categories of Caltech101 dataset w.r.t the input image pixel bit width.For each bit width, ten independent tests were carried out, in which half of the data was learnt and the other half was kept for testing. We see that the pixel precision has little to no influence on the accuracy.
Figure 4 . 4 :
44 Figure 4.4: Recognition rates on four categories of Caltech101 dataset w.r.t the coefficients of the Gabor filter coding scheme in S1 layer. Those tests were run with input pixels having 2 bits widths. The protocol is the same as developped for testing the input pixels, as done in Figure 4.3.
In their paper, Yu et al. proposed to keep the 200 most relevant patches, which, compared to the 1000 patches recommended in [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], would divide the complexity of this stage by 5. In [START_REF] Serre | A feedforward architecture accounts for rapid categorization[END_REF], it is suggested to use patches of 4 different sizes: 4 × 4 × 4, 8 × 8 × 4, 12 × 12 × 4 and 16 × 16 × 4.
The S2 radial basis function returns a value close to 1 when the patterns are close in terms of Euclidean distance, and close to 0 when they are far from each other. Computing a Euclidean distance implies squares and square roots, which may use a lot of hardware resources. Evaluating the exponential function raises similar issues, along with those already exposed in Section 4.2. Since we already removed the Gaussian function to simplify the training of S2, we propose to compare the performances obtained when replacing the Euclidean distance with the Manhattan distance, i.e. the sum of absolute differences between corresponding components.
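The comparison being proposed can be summarized by the small sketch below; the random 4 × 4 × 4 patches are placeholders for actual C1 outputs and pre-learnt prototypes.

```python
import numpy as np

def euclidean_distance(x, p):
    # Original S2 matching cost: squares and a square root.
    return float(np.sqrt(np.sum((x - p) ** 2)))

def manhattan_distance(x, p):
    # Hardware-friendlier alternative: only absolute differences and sums,
    # which maps directly onto an accumulate-absolute-difference datapath.
    return float(np.sum(np.abs(x - p)))

# Hypothetical 4x4x4 C1 patch compared against one pre-learnt prototype
rng = np.random.default_rng(3)
x = rng.integers(0, 4, size=(4, 4, 4)).astype(np.int64)
p = rng.integers(0, 4, size=(4, 4, 4)).astype(np.int64)
print(euclidean_distance(x, p), manhattan_distance(x, p))
```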
Figure 4 . 5 :
45 Figure 4.5: HMAX VHDL module. The main components are shown in colors, and the black lines represent the data flow.We see here that the data from the degraded 164 × 164 input image is first processed by S1 filters at all scales in parallel -only 8 out of the 16 filters in the bank are shown for readability. Orientations are processed serially and the outputs are multiplexed. The data is then processed by the c1 module, which produces half the feature maps produced in S1, before being serialized by c1 to s2. The serialized data is sent to s2c2, which perform pattern matching between input data and pre-learnt patches with its s2 components, several in parallel, with a multiplexing. The maximum responses of each S2 unit are then computed by c2. The data is then serialized by c2 to out.
Figure 4 . 6 :
46 Figure 4.6: Dataflow in s1. This Figure shows the major components of the s1 module.First of all the pixels arrive in the pix to stripe, which returns columns of 37 pixels. Those columns are then stored in shift registers, which store a 37 × 37 patchonly 7 lines are represented here for readability. Then for each of the 16 scales in S1, there exists an instance of the image cropper module that keeps only the data needed by its following conv module. The convolution kernels' coefficients are gotten from the coeffs manager module, which get them from the FPGA's ROM and retrieve those corresponding to the needed orientation, for all scales. Here only 4 of the 16 convolution engines are shown. The computed data is written in dout, in parallel. Note that not all components of s1 are repesented here: pixmat, pixel manager, coeffs manager and conv crop are not displayed to enhance readability and focus and the dataflow.
are stored in BRAM. The module fetches the needed ones depending on the value written in k idx, and route them to the cout module.
Figure 4 Figure 4 . 7 :
447 Figure 4.7: coeffs manager module. In order to simplify the process, all coefficients needed at a particular time are read all at once from several BRAM, of which only two are represented here for readability. The coefficients are then concatenated in a single vector directly connected to the cout output port.
Figure 4 . 8 : 7 ×
487 Figure 4.8: 7 × 7 convolution module. That module has one convrow module per row in the convolution kernel, each taking care of a line.In each of those modules, the "multiplications" are performed in parallel in rowmult between the data coming from din and coeffs input buses -as mentioned in Section 4.2, those multiplications consist in fact in simple changes of signs, depending on the 1 bit coefficients provided by the external module coeffs manager. The results are the accumulated thanks to convrow's cumsum component. Finally, the output of all conrow modules are accumulated thanks to another cumsum component. The result is afterward degraded thanks to the s1degrader module, the output of which is written in dout.
Figure 4 . 9 :
49 Figure 4.9: shift registers module with 4 registers. At each clock cycle, data is read from din and en din and written into the next register, the last of which writes its data into dout and en dout output ports.
Figure 4 . 10 :
410 Figure 4.10: c1 module. For more readability, only 4 of the 8 filters are represented here. Maximums are first computed accross scales with the max 2by2 components. The data is then organized into stripes in the same fashion as done in the pix to stripe component used in s1 module. That stripe is organized by lines, and then scales, and needs to be organized by scales, and then lines to be processed by the latter modulethis reorganization is taken care of by reorg stripes. Orientations being multiplexed, we needed to separate them so each may be processed individually, which is done by the data demux module. Each orientation is then processed by one of the c1 orientation module. Finally, data comming out of c1 orientation is multiplexed by data mux before being written in output ports.
Figure 4 .
4 Figure 4.11: c1unit.Figure 4.11a shows the principle components of the module architecture, and Figure 4.11b shows the control signals enabling and disabling the data.Figure4.11a shows the two c1unit components and the control module c1unit ctrlnamed ctrl here for readability. Data coming out of those components are multiplexed in the same output port dout. The four bits data signal is shown with the thick line, and the control signals ares shown in light line. We see that dedicated control signals are sent to each maxfilt components, but also that both get the same data. The control signals presented in Figure4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap.
Figure 4 .
4 Figure 4.11: c1unit.Figure 4.11a shows the principle components of the module architecture, and Figure 4.11b shows the control signals enabling and disabling the data.Figure4.11a shows the two c1unit components and the control module c1unit ctrlnamed ctrl here for readability. Data coming out of those components are multiplexed in the same output port dout. The four bits data signal is shown with the thick line, and the control signals ares shown in light line. We see that dedicated control signals are sent to each maxfilt components, but also that both get the same data. The control signals presented in Figure4.11b show how the control allow to shift the data between the two units, in order to produce the overlap between two C1 units. We assume here that we emulate C1 units with 4 × 4 receptive fields and 2 × 2 overlap.
Figure 4
4 Figure 4.12: c1 to s2 module. The blue and red lines show the data flow in the two configurations of the double-buffering. The data goes through c1 handler, where the address to which it should be written is generated and written in waddr. The rea and reb signals control the enable mode of the BRAMs, while the wea and web enable and disable the write modes of the BRAMs. When the upper BRAM is in write mode, wea and reb are set and web and rea are unset. When the upper buffer is full, those signals are toggled so that we read the full buffer and write in the other one. Those signals are controled thanks to the ctrl component, which also generates the address from which the output data should be read from the BRAMs. Data read from both BRAMs are then multiplexed into the dout output port. Pins on the left of both BRAMs correspond to the same clock domain, and those on the right belong to another one so that it is synchronized with following modules.
Figure 4 . 13 :
413 Figure 4.13: Dataflow in s2c2.The data arriving to the module is handled by s2 input manager, which make it manageable for the s2processors. The latter also gets the pre-learnt filter needed for the pattern-matching operations from s2 coeffs manager in parallel, and perform the computations. Once it is over, the data is sent in parallel to the dout output port, which feed the next processing module.
Organization of stream arriving in s2 input handler. Each color indicates the orientation of the C1 feature map the corresponding sample comes from. We assume here that those feature maps are 2 × 2. cX indicates that the samples are located in the X-th column in their feature maps, and rX indicates that the samples are located in the X-th row. s2 handler module. Orientations are first demultiplexed, and written in parallel into the relevant s2 pix to stripe, shown here in gray. There is one s2 pix to stripe per scale in C1 feature maps -i.e 8. The output of those compinents are then routed to the dout output port, using a multiplexer.
Figure 4 . 14 :
414 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed.
Figure 4 .
4 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed.
Figure 4 .
4 Figure 4.14: Data management in s2 handler.Figure 4.14a shows how the arriving stream of data is organized.Figure 4.14b shows how this stream is processed.
Figure 4 .
4 Figure 4.15 sums up the data flow in s2processors. We shall now describe the corner cropper and s2bank modules.
4. 3 . 3 .cropper 16 × 16 × 4 12 × 12 × 4 8 × 8 × 4 4 × 4 Figure 4 . 15 :
3316164124844415 Figure 4.15: Data flow in s2processors.Names in italic represent the components instantiated in that module, and plain names show input and output ports. Only din, dout and en dout are represented for readability. Each square in din represent one of the 1024 pixels read from din, and each set of four squares represents the pixels from C1 maps of the same scale and locations, and the four orientations. The corner cropper module makes sure only the relevant data is routed to the following s2bank components. Those components perform their computations in parallel. When the data produced by one or several of those instances is ready, it is written in the corresponding pins of the dout output ports and the relevant pins of the en dout output port are set.
3.2.3 are also used to synchronize data.cum diff As suggested by its name, this module computed the absolute difference between two unsigned integers, and accumulates the result with those of the previous operations. To that end, it needs the usual clk and rst input ports for respectively synchronization and resetting purposes. It also needs two operands, which are provided by the din1 and din2 input ports. An input port called new flag allows to reset the accumulation to 0 and start a fresh cumulative difference operation, and the en din flag allows to enable computation. That module has a single output port called dout, which provides the result of the accumulation as it is computed. It is not required to have an output pin stating when the output data is valid, for the reason that the data is always valid. Knowing when the data actually correspond to a full Manhattan distance is actually performed in s2unit.
Let us begin with the S1 layer. The convolution is computed at 128 × 128 locations of the input image. As detailed in Section 4.3.2.1, the sums implied by the convolution are performed row-wise in parallel, and the per-row results are then summed sequentially. Thus, for a k × k convolution kernel, k sums of k elements are performed in parallel, each taking 1 cycle per element, hence k cycles. That leaves k partial results, which are then summed with the same strategy, requiring another k cycles, for a total of 2k cycles. Since we use a 4-to-1 multiplexing strategy to compute the orientation outputs one after the other, all scales are processed in parallel, and the biggest convolution kernel is 37 × 37, the convolution takes 128 × 128 × 8 × 37 = 4.85 × 10^6 clock cycles to process a single 128 × 128 image.
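The arithmetic above can be reproduced with the short sketch below; the 100 MHz clock is the frequency assumed elsewhere in this chapter, and the resulting figure only bounds S1, not the full pipeline.

```python
# Back-of-the-envelope check of the S1 cycle count derived above.
locations = 128 * 128         # positions where the convolution is evaluated
orientations = 4              # computed one after the other (4-to-1 multiplexing)
k_max = 37                    # largest S1 kernel is 37 x 37
cycles_per_conv = 2 * k_max   # k cycles for the row sums + k to combine them

s1_cycles = locations * orientations * cycles_per_conv
print(s1_cycles)              # 4_849_664, i.e. about 4.85e6 cycles per image

clock_hz = 100e6              # system clock assumed in this chapter
print(clock_hz / s1_cycles)   # ~20.6 images/s if S1 alone set the pace
```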
Figure B. 1 :
1 Figure B.1: Exemples d'applications.
Figure B. 4 :Figure B. 5 :
45 Figure B.4: Invariant scattering convolution network[START_REF] Bruna | Invariant Scattering Convolution Networks[END_REF]. Chaque couche applique une décomposition en ondelette U λ à l'entrée, et envoie le résultat auxquels a été appliqué un filtre passe-bas et un sous-échantillonage à la couche suivante. Les scattering coefficients S λ (x) ainsi produits forment le vecteur caractéristique à classifier.
Figure B. 6 :
6 Figure B.6: Réseaux de neurones à convolutions [48].
(x 1 , y 1 )(x 2 , y 2 )Figure B. 7 :Figure B. 8 :Figure B. 9 :
1122789 Figure B.7: Examples de caractéristiques utilisés dans Viola-Jones [30, 136].
Figure B. 11 .Figure B. 10 :Figure B. 11 :
111011 Figure B.11. La sortie obtenue pour un visage après filtrage par ce noyau de convolution est donné en Figure B.12. Pour C1, la taille de la fenêtre du filtrage est ∆ k = 8. Cet extracteur de caractéristiques sera appelé HMIN θ=π/2 dans la suite du document. Nous proposons ensuite de réduire la taille de ce noyau de convolution, qui comporte à l'heure actuelle 37 × 37 éléments, en le réduisant à 9 × 9 en utilisant une interpolation bilinéaire, ce qui lui permet de traiter des images 4 fois plus petites. Cette version du descripteur sera appelée HMIN R θ=π/2 .
Figure B. 12 :Figure B. 13 :
1213 Figure B.12: Réponse du filtre unique dans S1 sur un visage.
Figure B. 14 :
14 Figure B.14: Courbe ROC obtenue avec HMIN R θ=π/2 sur la base CMU.
4 :
4 Complexité et précision de différentes méthodes de détections de visages. Les taux de faux positifs du CFF et de Viola-Jones ont été lus à partir des courbes ROC de leurs articles respectifs [50, 136], et sont donc approximatifs. Tous les taux de faux positifs correspondent à des taux de détections de 90%. La colonne Classification donne la complexité pour la classification d'une image dont la taille est donnée par la colonne Taille d'entrée. La colonne Scanning donne la complexité de l'algorithme lors d'un scan d'une image VGA complète de dimensions 640 × 480. Les complexités et empreintes mémoires ont été évaluées pour l'extraction de caractéristiques seulement, sans prendre en compte la classification. Il faut également noter qu'aucune pyramide d'images n'est utilisée ici, pour simplifier les calculs -dans le cas où on en utiliserait une, Viola-Jones demanderait bien moins de ressurces que le CFF et HMIN grâce à la représentation en image intégrale.
Figure B. 15 :
15 Figure B.15: HOG [36].
C Y 1 C U V 1 YSFigure B. 16 :Figure B. 17 :
111617 Figure B.16: ConvNet pour la détection de piétons [145]. Les couches C XX désignent des couches de convolutions, et les couches S XX désignent des couches de souséchantillonage.
B. 4 Figure B. 18 :Figure B. 19 :
41819 Figure B.18: Effet de la dégradation de précision sur l'image d'entrée.
B. 4 . 3 ConclusionB. 5 Conclusion
435 Dans cette Section, nous avons présenté une série d'optimisations pour HMAX visant à faciliter son implantation matérielle. Notre contribution consiste à diminuer la précision des pixels de l'image d'entrée, diminuer la précision des coefficients des filtres de Gabor et utiliser une distance de Manhattan dans la couche S2 lors des opérations de comparaisons de motifs. Nous utilisons également des méthodes proposées dans la littérature consistant à utiliser l'algorithme de Lloyd pour compresser la sortie de S1, et pour diminuer la complexité de S2. Nous avons montré que ces simplifications n'ont que peu d'impact sur la précision du modèle. Nous avons ensuite présenté les résultats de l'implantation matérielle, que nous avons voulu aussi naïve que possible en dehors des optimisations proposées ici, puis nous avons comparé le résultat avec la littérature. Il apparaît que notre implantation traite les images significativement moins rapidement que ce qui est proposé dans la littérature ; cependant notre implantation utilise moins de ressources matérielles et nos optimisations sont parfaitement compatibles avec l'implantation de référence. Les travaux futurs consisteront donc à proposer une implantation tirant parti des avantages des deux méthodes, afin de proposer une implantation la plus réduite et avec la plus grande bande passante possible. Dans cette thèse, nous avons proposé une solution à un problème d'optimisation d'un algorithme bio-inspiré pour la classification de motifs visuels, avec pour but de l'implanter sur une architecture matérielle dédiée. Notre but était de proposer une architecture facilement embarquable et suffisamment générique pour répondre à différents problèmes. Notre choix s'est porté sur HMAX, en raison de l'unicité de son architecture et de ses performances acceptable même avec un nombre réduit d'examples à apprendre, contrairement à ConvNet. Notre première contribution consistait à optimiser HMIN, qui est une version allégée de HMAX, pour deux tâches précises, la détection de visages et la détection de piétons, en se basant sur le fait que seules certaines caractéristiques sont utiles. Les performances que nous avons obtenus, pour chacune des deux tâches, sont significativements inférieures à celles proposées dans la littérature -cependant, nous estimons que notre algorithme à l'avantage d'être plus générique, et nous pensons qu'une implémentation matérielle nécessiterait extrêmement peu de ressources. Notre seconde contribution est de proposer une série d'optimisations pour l'algorithme HMAX complet, principalement basées sur un codage des données efficace. Nous avons montré qu'HMAX ne perdait pas de précisions de manière significative en réduisant la précision des pixels des images d'entrées à 2 bits, et celle des coefficients des filtres de Gabor à 1 seul bit. Bien que cette implantation, naïve en dehors des optimisations nommées ci-dessus, ne permettent pas de traiter une quantité d'images équivalentes à ce qu'il se fait dans la littérature, nos optimisations sont parfaitement utilisables en conjonctions avec celles de l'algorithme de référence, ce qui produirait une implantation particulièrement compact et rapide de cet algorithme -ce qui sera réalisé dans des recherches futures.
NeuroDSP architecture[START_REF] Paindavoine | NeuroDSP Accelerator for Face Detection Application[END_REF]. A NeuroDSP device is composed of 32 clusters, called P-Neuro, each constituted of 32 artificial neurons called PE, thus representing a total of 1024 neurons. The PEs may be multiplexed, so that they can perform several instruction sequentially and thus emulate bigger neural networks. When timing is critical, one may instead cascade several NeuroDSP processors and use them as if it was a single device.
Data In (audio, image. . . ) Cluster Cluster Cluster Decision
From previous 32 PE 32 PE 32 PE To next
NeuroDSP NeuroDSP
Figure 1.3:
3 http://goo.gl/Ax6CoF
There are 16 different scales and four different orientations, thus totaling 64 filters. During the S1 stage, each filter is applied independently on the input image and the filtered images are fed to the next layer.The C1 stage gives a first level of location invariance of the features extracted in S1. It does so with maximum pooling operators: each C1 unit pools over several neighboring S1 units with a 50% overlap and feed the S2 layer with the maximum value. The number of S1 units a C1 unit pools over depends on the scale of the considered S1 units.Furthermore, each C1 unit pools across two consecutive scales, with no overlap. This leads to a number of images divided by two, thus only 32 images are fed to the following layer. The parameters of the S1 and C1 layers are presented in Table2.8.
The filter bank has several filters, each having a specific wavelength, effective width, size and orientation. The wavelength, effective width and size define the filter's scale.
Table 2 .
2
1: Paramaters for HMAX S1 and C1 layers
Table 2 . 2 :
22 Comparison of descriptors.
Framework Accuracy Training Complexity
ISCN High None High
HMAX High Yes, requires few data points High
HOG Reasonnable None Low
SIFT Reasonnable None Low
SURF Reasonnable None very low
Figure 3.3: Complexity repartition of Viola and Jones' algorithm when processing a 640 × 480 with a 24 × 24 sliding window. From Equations 3.7 to 3.13, we see that the integral image computation requires 2W H additions, the feature extraction needs N op N f N w additions, and C VJ N needs W H multiplications and 2W H. Thus, we need a total of 4W H + N op N f N w .
[START_REF] Fausett | Fundamentals of Neural Networks: Architectures, Algorithms And Applications: United States Edition[END_REF]
Complexity repartition of the CFF algorithms, separated in three types of computations: MAC, hyperbolic tangents ("Tanh") and sums. We see here that the large majority of operations are MAC, toward which most effort should then be put for fine optimizations or hardware implementation.
Tanh (0.88%)
MAC (97.8%)
Sums (1.32%)
Figure 3.5:
which gives
$$C_{\mathrm{CFF}}^{T2} = 1.75WH + 7(W + H) + 16. \qquad (3.35)$$
Using those results in Equation 3.17, we finally get
$$C_{\mathrm{CFF}} = 168.75WH - 1038(W + H) + 5664. \qquad (3.36)$$
Now that we have this general formula, let us compute the complexity involved in the classification of a typical 36 × 32 patch: we get 129.5 kOP. Let us now assume that we must find and locate faces in a VGA 640 × 480 image.
Memory print Using the same method as for Viola-Jones' in Section 3.1.1.1 and the CFF in Section 3.1.1.2, let's evaluate the memory print of HMIN. Since the C1 layer may be processed in-place, the memory print of HMIN is the same as its S1 layer,
From Equations 3.37 to 3.39, we get
C HMIN = 36456W H. (3.40)
If we aim to extract feature from a typical 128 × 128 image for classification as suggested
in [31], it needs 597 MOP operation. When scanning a 640 × 480 image as done with the
CFF in Section 3.1.1.2, we get a total of 11.2 GOP. From Equations 3.38 and 3.39, we
also see that the convolutions operations take 99.89 % of the computation -thus, they
represent clearly the basis of our optimizations.
.39)
which produces 16 640×480 feature maps, coded on 32-bit single precision floating point numbers. Hence, its memory print is 19.66 MB.
[START_REF] Lecun | Gradient-based learning applied to document recognition[END_REF]
Table 3.1: Accuracies of the different versions of HMIN on the LFW crop dataset.
Descriptor: HMIN | HMIN θ=π/2 | HMIN R θ=π/2
Accuracy (%): 95.78 ± 0.97 | 90.81 ± 1.10 | 90.05 ± 0.98
dataset at random to build the training set, as in Section 3.1.3.1. Once again, we used it to train the
1
0.8
Recognition rate 0.4 0.6 -HMIN -HMIN| θ=π/2 -HMIN| R θ=π/2
0.2
0 0.2 0.4 0.6 0.8 1
False positive rate
Figure 3.9: ROC curves of the HMIN classifiers on LFW crop dataset. They show the recognition rate w.r.t the false positive rate: ideally, that curve would represent the function ∀x ∈ (0, 1] f : x → 0 when x = 0, 1 otherwise. One can see a significant drop of performance when using HMIN θ=π/2 compared to HMIN -however using HMIN R θ=π/2
Table 3 . 2 :
32 Complexity and accuracy of face detection frameworks. The false positive of the CFF and VJ frameworkw were drawn from the ROC curves of their respective papers
Finally the S2 layer produces 2040 102 × 76 feature maps, which using 32 bits floating point precision would require 63.26 MB.
Memory print Let's now evaluate the memory print of that framework when pro-
cessing a 640 × 480 input image. The C Y 1 layer produces 32 634 × 474 feature maps,
in which we assume the features are coded using 32-bits floating point precision, which
needs 38.47 MB. In order to simplify our study, we then assume that the subsampling
[START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF]
Let's evaluate this expression as a function of the width W and height h of the input image. In order to make it more tractable, we approximate it by neglecting the floor operators . Reusing Equation 3.81 to 3.
[START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF]
we have C ConvNet (W, H) ≈ 38.8 × 10 3 W H -1.12 × 10 6 (W + H) + 33.2 × 10 6 (3.92) It should be noted that we again neglected the classification stage. Considering input images are 78 × 126, we have C ConvNet ≈ 484.84 MOP. Applying Equation 3.92 to the case where we process a 640 × 480, we have 11 GOP. From the previous analysis, we see that lots of MAC are computed at almost all stage, including the average downsampling ones. This is largely due to the C2 layer, with its high amount of convolution filters. It is then clear that optimization efforts should be directed towards the computation of MACs. and normalization operations are performed in-place, and hence do not bring more need in memory. The S Y 1 layer produces 2 213 × 160 feature maps, hence needing 272.64 kB.
1.2.
Framework False positive rate (%) Complexity (OP) Scanning Classification Memory print Input size
HOG 0.02 [36] 12.96 M 344.7 k 4.37MB 64 × 128
ConvNet See caption 484.84 M 11 G 63.26MB 78 × 126
HMIN R θ=0 30% 13.05 M 41.45 k 1.2 MB 32 × 16
Table 4 . 1 :
41 Hardware resources utilized by Orchard's implementation[99].amount that could fit on their device. At each location, pattern-matching are multiplexed by size, i.e first all 4 × 4 × 4 in parallel, then 8 × 8 × 4, then 12 × 12 × 4 and finally 16 × 16 × 4. Responses are computed for two different orientations in parallel, this results in a total of 320 × 2 = 640 MAC operations to be performed in parallel at each clock cycle. Thus, this requires 640 multipliers, and 640 coefficients to be read at each clock cycle. As for the precision, each feature is coded on 16 bits to fit.
Resource Used Available Utilization(%)
DSP 717 768 93
BRAM 373 416 89
Flip-flops 66196 301440 21
Look-up tables 60872 150720 40
4.1.1.4 C2
Due to the simplicity of C2 in the original model, there is not much room for optimizations or implementation tricks here. Orchard et al.'s implementation simply gets the 320 results from S2 in parallel and use them to perform the maximum operations with the previous values, again in parallel.
Table 4 . 3 :
43 Code books and partitions by scales for features computed in C1. Values were computed with the simplification proposed in Sections 4.2.1 and 4.2.2 for S1, using Matlab's lloyds function.
i 1 2 3 4
C 1 14 27 37 50
Q 1 21 32 43 -
C 2 42 82 118 154
Q 2 62 100 136 -
C 3 37 65 94 141
Q 3 51 79 117 -
C 4 81 148 209 284
Q 4 114 178 246 -
C 5 122 208 278 380
Q 5 165 243 329 -
C 6 175 309 427 559
Q 6 242 368 494 -
C 7 296 521 707 905
Q 7 408 614 806 -
C 8 499 868 1182 1492
Q 8 633 1025 1337 -
2.1, with the exception that this time we add the simplification proposed here. Results are compiled with further optimizations in Table 4.4
Table 4 . 4 :
44 Accuracies of HMAX with several optimizations on five classes of the Caltech101 dataset[START_REF] Fei-Fei | Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories[END_REF]. That Table compiles the results of the experiment conducted in Sections 4.2.3, 4.2.4 and 4.2.5. The column on the left shows the result gotten from Section 4.2.2. Starting from the second column, each column show the accuracies obtained on the 5 classes in binary task classification, as described before, taking into account the corresponding simplification as well as those referred by the columns left to it.
Table 4 . 5 :
45 Offsets used to computed addresses in c1 to s2 modules.
scale s 1 2 3 4 5 6 7 8
C1 patch side 31 24 20 17 15 13 11 10
offset o s 0 3844 6148 7748 8904 9804 10480 10964
Table 4 .
4 [START_REF] Paindavoine | NeuroDSP Accelerator for Face Detection Application[END_REF]. Thus, we instantiated a pixmat component adapted to the maximum size of C1 feature maps, i.e 31 × 31. The problem is that pixmat's en dout signal is only set when the whole matrix is ready, which make it impractical for C1 feature maps smaller than 31 × 31.
N 0 4 8 12 16
Table 4 . 6 :
46 Mapping between N and dout scale.
Table 4 . 7 :
47 's final stage is performed in the c2 module. It is synchronized, and thus has a clk input port expecting to get a clock signal, as well as an rst input port allowing to reset the component. The data used to performe the computation is obtained thanks to the din input port, and it arrives in parallel. The id in input port allows to indicate which of the data from din are valid, and a new in input port allows to warn about the arrival of new data. After performing of the maximum operations, the results for all pre-learnt vectors in S2 are written in parallel into the dout output port, and the last output port, which is called new out, indicates that new data is available through dout. Resource utilization of HMAX implementation on XC7A200TFBG484-1 with the proposed simplifications. The proportion of used flips-flops is high enough to cause problems during implementation. However, the biggest issue comes from the fact that we use way too many blocks RAM for a single such target.
4.3.4 c2
As done in the c1 to s2 presented in Section 4.3.2.5, we use a double-buffering design
pattern to manage output data.
HMAX
16 × 16 × 4N 16 + 12 × 12 × 4 (N 12 -N 16 ) One of the most interesting contributions about HMAX hardware implementation is the work of Orchard et al., described in Section 4.1.1 -as mentioned in Section 2.2.2.1,there exists several implementations of either parts of the model or of the whole model on boards containing many FGPAs, but we shall focus here only on that work, as it is the only one to our knowledge aiming to implement the whole model on a single FPGA. In that work, they implemented their algorithm on a Virtex 6 XC6VLX240T FPGA, while we targeted an Artix-7 XC7A200TFBG484-1 device. Table4.8 sums up the resources of those two devices; we see that the Virtex-6 FPGA has slightly more resources than the Artix-7, however the two devices have roughly the same resources.
+ 8 × 8 × 4 (N 8 -N 12 )
+ 4 × 4 × 4 (N 4 -N 8 )] (4.28)
=2240N 16 + 1600N 12 + 960N 8 + 320N 4 . (4.29)
Let's now evaluate the N i . Considering that some the C1 feature maps are smaller than
some of the pre-learnt patches and that in such case, no computations are performed,
we may write
N i = 8 k=1 max 128 ∆ k -i + 1, 0 2 . (4.30)
with ∆ k defined in Table 2.1. Hence we have
N 16 = 435
N 12 = 821
N 8 = 1437
N 4 = 2309, (4.31)
which gives
T S2 = 2240 × 435 + 1600 × 821 + 960 × 1437 + 320 × 2309, (4.32)
and thus T S2 = 110.16 × 10 6 clock cycles.
Finally, C2 processes the data as soon as it arrives in a pipelined manner, as done in
C1. Hence, it doesn't bring any bottleneck.
We see from the above analysis that the stage that takes most time is S2, with 4.41 × 10 6
clock cycles per image. Assuming we have a system clock cycle of 100 MHz, we get
22.69 FPS.
4.5 Discussion
Table 4 . 8 :
48 Hardware resources comparison between the Virtex-6 FPGA used in[99], and the Artix-7 200T we chose.
XC6VLX240T Artix 7 200T
DSP 768 740
BRAM 416 365
Flip-flops 301440 269200
Look-up tables 150720 136400
7. if d opp > µR, where µ is a strictly positive constant, accept the merge and go back to 3 using C\ {c} instead of C; if d opp ≤ µR, reject the merge and go back to 3 selecting another cluster, 8. repeat steps 3 to 7 until all clusters from C were considered, which leads to a new set of clusters C 2 , 9. repeat steps 2 to 8 using C 2 instead of C 1 and c 2 1 ∈ C 2 instead of c 1 1 , and continue using C 3 , C 4 and so on until no further merge is possible.
de sélection de caractéristiques pour la classifications d'objets visuels. Nous présenterons ensuite une implantation optimisée d'un algorithme de classification d'images sur une plateforme matérielle reconfigurable. Finalement, la dernière Section présentera la conclusion de nos travaux. Architecture feedforward. de reconnaissances d'images. Nous nous intéresserons ici uniquement aux architectures dites feedforward, dans lesquelles les neurones sont organisées par couches et chaque unité transmet l'information à des neurones de la couche suivante -ainsi, l'information se propage dans un seul sens. Ce genre d'architecture est représenté en Figure B.3. Les connexions entre les unités sont appelés synapses, et à chacune d'entre elles est affecté un poid synaptique. Ainsi, la valeur d'entrée z d'un neurone de N entrée ayant des poids synaptiques w 1 , w 2 , . . . , w N est donnée par z = w 0 +
B.2 État de l'art Cette Section propose une brève revue de la littérature concernant les travaux présentés ici. Nous commencerons par les fondements théoriques de l'apprentissage automatique et de l'extraction de caractéristiques d'un signal. Nous verrons ensuite les implémentations matérielles existances pour ces méthodes. Finalement, nous proposerons une discussion au cours de laquelle nous établirons les problématiques auxquelles nous répondront dans ce documents. Figure B.3: N
w i x i , (B.1)
i=1
Entrée (son, image. . . ) Cluster Cluster Cluster Décision
Depuis NeuroDSP 32 PE 32 PE 32 PE Vers NeuroDSP
précédant Suivant
Figure B.2: Architecture NeuroDSP [5].
d'encombrement. Il est constitué de 32 blocs appelé P-Neuro, qui consistent chacun en
32 processeurs élémentaires (PE), pour un total de 1024 PE. Chacun de ces PE peut KNN [6], pour K-Nearest Neighbors, et présente l'avantange d'être extrêmement simple Une autre méthode de classification que nous utilisons dans ces travaux s'appelle le
être vu comme un neurone d'un réseau de neurones artificiel, tel que le Perceptron. Au à implanter. Cependant, lorsque le nombre d'exemples de la base d'apprentissage devient RBF, qui fait partie des méthode dites à noyaux. Elles consistent à évaluer un en-
sein d'un P-Neuro, tous les PE exécutent la même opération sur des données différentes, important ou que la taille des vecteurs devient trop grande, cette méthode devient trop semble de fonction à base radiale au point représenté par le vecteur à classifier, et le
constituant ainsi une architecture de type SIMD (Single Instruction Multiple Data), par-complexe et trop consommatrice en mémoire pour être efficace, en particulier dans un valeurs produites par ces fonctions forment un nouveau vecteur qui sera classifié par
faitement adaptée aux calculs parallèle tels que nécessités dans les réseaux de neurones contexte embarqué. un classificateur linéaire -e.g, un Perceptron. En revanche, dans ce cas la technique
artificiels. Cette architecture est présenté en Figure B.2. Les travaux présentés dans ce Il existe beaucoup d'autres méthode de classification de motifs, parmi lesquelles fig-d'apprentissage utilisée est simplement une recherche de moindres carrés.
documents ont été réalisés dans le cadre de ce projet. urent en particulier les réseaux de neurones (cf. le Perceptron en Section B.1), ou des
Dans ce résumé, nous ferons tout d'abord un état de l'art de la littérature concer-approches plus statistiques telles que les Machines à Vecteurs de Supports, ou SVM 2 . B.2.1.2 Méthodes d'extraction de caractéristiques
nant ce domaine -nous y verrons les principales méthodes d'apprentissage automatique, leurs implantations sur matériel, et nous poserons les problématiques auxquelles nous répondront dans la suite du document. Une Section sera ensuite consacrée à notre méthode B.2.1 Fondements théoriques B.2.1.1 Méthodes de classification Il existe de nombreuses approches permettant à une machine d'apprendre d'elle-même à classifier des motifs. Nous allons ici revoir les principales. Une approche extrêmement simple consiste à considérer l'intégralité des vecteurs dont nous disposons a priori, que l'on appelle base d'apprentissage. Lors de la classification d'un vecteur inconnu, on évalue une distance (par exemple, Euclidienne) avec tous les vecteurs de la base d'apprentissage, et on ne considère que les K plus proches. Chacun de ces vecteurs vote alors pour sa propre catégorie, et la catégorie ayant obtenue le plus de vote est retenue. On considère alors que le vecteur inconnue appartient à cette catégorie. Cette approche s'appelle Les réseaux de neurones sont récemment devenus extrêmement populaires, depuis leurs utilisations par les entreprises Facebook et Google notamment pour leurs applications avec x i les valeurs propagées par les unités de la couche précédente et w 0 un biais, nécessaire pour des raisons mathématiques. Une fonction non-linéaire, appelée fonction d'activation, est ensuite appliquée à z, et le résultat est propagé aux neurones de la couche suivante. Apprendre un réseau de neurones de ce type à éxecuter une tâche consiste à trouver les bons poids synaptiques, au moyen d'un algorithme d'apprentissage.
In the case of multi-layer feedforward neural networks, the most widely used algorithm, owing to its efficiency and low algorithmic complexity, is stochastic gradient descent; indeed, it can easily be carried out by means of a technique called error back-propagation, which allows the derivative of the cost function to be optimized to be evaluated quickly [START_REF] Rumelhart | Learning Internal Representations by Error Propagation[END_REF][START_REF] Rumelhart | Learning representations by back-propagating errors[END_REF].
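As a reminder of what the stochastic gradient descent step just mentioned computes (a generic textbook form, not a formula quoted from this document), each synaptic weight is moved against the gradient of the cost function E estimated on one example or mini-batch, with learning rate \eta:

w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}},

where the partial derivatives are obtained efficiently by error back-propagation.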
B.2.1.2 Feature extraction methods
To ease the task of the classifier, it is possible to resort to a feature extraction algorithm, the purpose of which is to transform the signal to be classified.
In the Gabor filters used by S1, γ is the aspect ratio, λ the wavelength of the cosine, θ the orientation of the filter and σ the standard deviation of the Gaussian. S1 comprises filters at 16 scales and 4 different orientations, for a total of 64 filters. The filter parameters are given in Table B.1. The C1 layer provides a first level of invariance to translation and scale thanks to a set of maximum filters, whose window size N_k and overlap ∆_k depend on the scale considered and are given in Table B.1. The third layer, S2, compares the outputs of the C1 layer with a set of pre-learned patterns by means of radial basis functions. Finally, the last layer, C2, keeps, for each of these pre-learned patterns, only the maximal response, thus forming the feature vector. This algorithm is shown in Figure B.5.
Other feature extraction or classification methods (or both), such as SIFT [START_REF] David | Distinctive Image Features from Scale-Invariant Keypoints[END_REF], SURF [START_REF] Bay | Speeded-Up Robust Features (SURF)[END_REF] or Viola-Jones [START_REF] Viola | Robust real-time face detection[END_REF], have also enjoyed a certain popularity. Finally, one cannot fail to mention convolutional neural networks [START_REF] Lecun | Convolutional networks and applications in vision[END_REF], which are the main contributors to the success that neural networks currently enjoy. Their approach is very simple: rather than separating feature extraction from classification, these methods consider the whole algorithmic chain and perform learning on all of it.
Table B.1: Parameters of the S1 and C1 layers of HMAX [31].
Scale band | Maximum-filter size (N_k × N_k) | Overlap ∆_k | S1 filter sizes | Gabor σ | Gabor λ
Band 1 | 8 × 8 | 4 | 7 × 7, 9 × 9 | 2.8, 3.6 | 3.5, 4.6
Band 2 | 10 × 10 | 5 | 11 × 11, 13 × 13 | 4.5, 5.4 | 5.6, 6.8
Band 3 | 12 × 12 | 6 | 15 × 15, 17 × 17 | 6.3, 7.3 | 7.9, 9.1
Band 4 | 14 × 14 | 7 | 19 × 19, 21 × 21 | 8.2, 9.2 | 10.3, 11.5
Band 5 | 16 × 16 | 8 | 23 × 23, 25 × 25 | 10.2, 11.3 | 12.7, 14.1
Band 6 | 18 × 18 | 9 | 27 × 27, 29 × 29 | 12.3, 13.4 | 15.4, 16.8
Band 7 | 20 × 20 | 10 | 31 × 31, 33 × 33 | 14.6, 15.8 | 18.2, 19.7
Band 8 | 22 × 22 | 11 | 35 × 35, 37 × 37 | 17.0, 18.2 | 21.2, 22.8
(Figure: convolutional network pipeline showing convolution, subsampling, convolution, subsampling, full connection, output.)
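The Gabor filter equation itself is not recoverable from this extract; for reference, HMAX-style S1 layers typically use a filter of the following form (the standard parameterisation from the HMAX literature, given here as an assumption about the missing equation):

G(x, y) = \exp\!\left(-\frac{X^2 + \gamma^2 Y^2}{2\sigma^2}\right)\cos\!\left(\frac{2\pi}{\lambda}X\right),
\quad X = x\cos\theta + y\sin\theta,\; Y = -x\sin\theta + y\cos\theta,

with γ the aspect ratio, λ the wavelength, θ the orientation and σ the standard deviation of the Gaussian envelope, matching the parameter names used in the surrounding text.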
There are also numerous software implementations, but we will not mention them in this summary. HMAX itself has been implemented many times on reconfigurable hardware (FPGA) [START_REF] Park | System-On-Chip for Biologically Inspired Vision Applications[END_REF][START_REF] Al Maashri | A hardware architecture for accelerating neuromorphic vision algorithms[END_REF][START_REF] Debole | FPGA-accelerator system for computing biologically inspired feature extraction models[END_REF][START_REF] Maashri | Accelerating neuromorphic vision algorithms for recognition[END_REF][START_REF] Park | Saliencydriven dynamic configuration of HMAX for energy-efficient multi-object recognition[END_REF][START_REF] Sun Park | An FPGAbased accelerator for cortical object classification[END_REF][START_REF] Park | A reconfigurable platform for the design and verification of domain-specific accelerators[END_REF][START_REF] Kestur | Emulating Mammalian Vision on Reconfigurable Hardware[END_REF]; recently, the most promising implementation of this model is the one proposed by [99]. Work in this direction has also been carried out for convolutional neural networks [START_REF] Farabet | Neu-Flow: A runtime reconfigurable dataflow processor for vision[END_REF][START_REF] Cavigelli | Origami: A Convolutional Network Accelerator[END_REF].
B.2.3 Discussion
Our goal is to propose an embeddable and generic pattern recognition system. To this end, we will choose a feature extractor that will serve as the basis for our future work. The classification problem will not be addressed here. Table B.2 presents a comparison of the main descriptors. In view of this comparison, we decided to focus our study on HMAX, which will moreover guarantee a certain genericity. Our goal is to adapt this algorithm to different tasks while keeping the architecture generic, and to optimize these algorithms, in particular in terms of coding, to ease their porting to hardware targets, which raises the following questions that we will endeavour to answer:
Table B.2: Comparison of the main feature extractors.
Method | Accuracy | Training required | Complexity
Scattering Transform | High | No | High
HMAX | High | Yes, requires little data | High
HOG | Reasonable | No | Low
SIFT | Reasonable | No | Low
SURF | Reasonable | No | Very low
Convolutional neural networks | Very high | Yes, requires a lot of data | High
Table B.3: Accuracy of the different versions of HMIN on the LFW crop database.
B.3.1.3 HMIN and optimizations
From the article by Serre et al. [31], we know that to detect and localize an object in a scene, it is preferable to use only the first two layers of HMAX, i.e. S1 and C1. In order to see which features are the most relevant, and therefore which features can be removed without impacting the accuracy of the system too much, we observed the responses of the different Gabor filters for faces. The results are shown in Figure B.10. We can see that the most relevant information appears to be that corresponding to the orientation θ = π/2. Moreover, the information is similar from one scale to the next. We therefore propose to keep only the filters of orientation θ = π/2 and to sum them, so as to be left with a single convolution. The appearance of this convolution kernel is given in
We will begin by describing the methods against which we will compare our approach. We chose to compare ourselves with the state of the art in the field, namely HOG and a particular implementation of a convolutional neural network, which we will call ConvNet [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. The only difference with respect to face detection is that this time we are interested in vertical objects, so we chose to keep the filters of orientation θ = 0. We will call the resulting algorithms HMIN_{θ=0} and HMIN^R_{θ=0}.
Method | False positive rate (%) | Complexity (OP): scanning | Complexity (OP): classification | Memory footprint | Input size
VJ | 5.32 × 10^-5 [136] | 20.7 M | 2.95 k | 1.48 MB | 24 × 24
CFF | 5 × 10^-5 [50] | 50.7 M | 129.5 k | 64.54 MB | 36 × 32
HMIN^R_{θ=π/2} | 4.5 | 26.1 M | 82.9 k | 1.2 MB | 32 × 32
B.3.3 Conclusion
In this Section, we presented our contribution to the optimization of a feature extraction method. The initial algorithm is based on HMAX, but uses only its first two layers, S1 and C1. The S1 layer consists of 64 Gabor filters, with 16 scales and 4 different orientations. By studying the feature maps produced by S1 for different specific tasks, we concluded that we could keep only one orientation, and sum the convolution kernels of the 16 remaining filters so as to be left with a single one, oriented horizontally in the case of face detection and vertically in the case of person detection. Our results show that our system has an acceptable complexity, but its accuracy is lower. However, the architecture is extremely simple, and can easily be implemented on a hardware target. Moreover, our architecture is generic: changing the application simply consists in changing the weights of the convolution kernel, whereas the other architectures presented would require deeper changes to the hardware architecture. Finally, the memory footprint of our method is very low, which allows its implementation on highly constrained systems.
The next Section is devoted to a proposed hardware implementation of the full HMAX algorithm. The last Section is devoted to the final discussions and the general conclusions of our work.
Table B.5: Complexity and accuracy of different person detection methods. The false positive rate of HOG was obtained from the DET curves presented in the original article [START_REF] Dalal | Histograms of oriented gradients for human detection[END_REF], and is therefore approximate. The false positive rates presented here correspond to detection rates of 90%. The results concerning ConvNet are not directly reported here, because the method used in the literature to evaluate its accuracy differs from what was done for HOG [START_REF] Sermanet | Pedestrian Detection with Unsupervised Multi-stage Feature Learning[END_REF]. However, the contributors evaluated the accuracy of HOG according to the same criterion, and it turns out that HOG produces three times more false positives on this same database than ConvNet. Because of these differences in methodology, it is difficult to compare our results directly with those of the literature; nevertheless, the results presented here suggest a clear disadvantage in using HMIN^R_{θ=0} for this task.
Method | False positive rate (%) | Complexity (OP): scanning | Complexity (OP): classification | Memory footprint | Input size
HOG | 0.02 [36] | 12.96 M | 344.7 k | 4.37 MB | 64 × 128
ConvNet | see caption | 484.84 M | 11 G | 63.26 MB | 78 × 126
HMIN^R_{θ=0} | 30 | 13.05 M | 41.45 k | 1.2 MB | 32 × 16
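The HMIN simplification summarized in the conclusion above (keep a single Gabor orientation and sum the 16 remaining kernels into one convolution) can be sketched as follows. The Gabor construction and the parameter values are illustrative assumptions that only roughly follow Table B.1; only the overall procedure follows the text.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, sigma, lam, theta, gamma=0.3):
    """Build one Gabor kernel (standard form; exact parameterisation is an assumption)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero-mean, as is usual for S1 filters

# Sizes / sigma / lambda roughly follow Table B.1; theta = pi/2 for face detection
sizes = range(7, 39, 2)                 # 7, 9, ..., 37 (16 scales)
sigmas = np.linspace(2.8, 18.2, 16)
lambdas = np.linspace(3.5, 22.8, 16)

# Sum all 16 kernels (zero-padded to the largest size) into a single HMIN kernel
K = max(sizes)
summed = np.zeros((K, K))
for s, sig, lam in zip(sizes, sigmas, lambdas):
    k = gabor_kernel(s, sig, lam, theta=np.pi / 2)
    pad = (K - s) // 2
    summed[pad:pad + s, pad:pad + s] += k

def hmin_response(image):
    """Single-convolution HMIN feature map."""
    return convolve2d(image, summed, mode="same", boundary="symm")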
corresponds to -1 and 1 to +1. The number of bits for the pixels of the input image is 2. This approach is similar to what was proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF].
B.4.1.3 Other optimizations
We applied a set of further optimizations. The output of S1 is compressed on only 2 bits thanks to Lloyd's method, as proposed in [START_REF] Chikkerur | Approximations in the HMAX Model[END_REF]. We also reduced the number of pre-learned vectors in S2 thanks to the method of Yu et al. [START_REF] Yu | FastWavelet-Based Visual Classification[END_REF]. Moreover, we used a Manhattan distance instead of a Euclidean distance in the pattern comparison operations of S2. By combining these optimizations with a precision of 2 bits for the pixels of the input image and of 1 bit for the Gabor filters, we obtain the results presented in Table B.6.
Table B.7 presents an estimate of the hardware resource utilization. Concerning timing, a theoretical study indicates that, on the basis of a 100 MHz system clock, our system can process 22.69 images per second, against 193 for the implementation presented in [99]. This is due to a very different organization of the resources, in particular regarding multiplexing. However, our implementation requires fewer hardware resources, and it is important to point out that our optimizations and those proposed by Orchard et al. [99] are perfectly compatible.
Table B.7: Hardware resource utilization of HMAX on an Artix7-200T.
Resource | Estimate | Available | Utilization (%)
Look-up tables | 58204 | 133800 | 43.50
Flip-flops | 158161 | 267600 | 59.10
Inputs/outputs | 33 | 285 | 11.58
Global buffers | 6 | 32 | 18.75
Block RAM | 254 | 365 | 69.59
Table B.6: Accuracy of HMAX using the different optimizations.
Class | Input and filter coefficients | Lloyd's method | Reduction of S2 patches | Manhattan distance
Airplanes | 95.49 ± 0.81 | 94.43 ± 0.88 | 92.07 ± 0.69 | 91.83 ± 0.63
Cars | 99.45 ± 0.41 | 99.35 ± 0.40 | 98.45 ± 0.54 | 98.16 ± 0.60
Faces | 92.97 ± 1.49 | 90.11 ± 1.05 | 82.71 ± 1.32 | 83.35 ± 1.40
Leaves | 96.83 ± 0.79 | 97.21 ± 0.89 | 94.61 ± 1.12 | 93.20 ± 1.42
Motorbikes | 95.54 ± 0.79 | 94.79 ± 0.62 | 88.83 ± 1.10 | 89.08 ± 1.31
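A small sketch of the two S2-related optimizations described above, i.e. replacing the Euclidean distance by a Manhattan distance in the pattern-matching step and quantizing values on 2 bits. The uniform quantizer shown here is only a stand-in for Lloyd's method, and all names and parameters are illustrative.

import numpy as np

def manhattan_response(c1_patch, prototype, beta=1.0):
    """S2-style radial basis response using a Manhattan (L1) distance
    instead of the Euclidean (L2) one; cheaper in hardware (no squaring)."""
    d = np.sum(np.abs(c1_patch - prototype))
    return np.exp(-beta * d)

def quantize_2bit(x, x_min, x_max):
    """Uniform 2-bit quantizer used as a stand-in for Lloyd's method:
    maps x to one of 4 levels coded on 2 bits."""
    levels = 4
    x = np.clip(x, x_min, x_max)
    codes = np.round((x - x_min) / (x_max - x_min) * (levels - 1))
    return codes.astype(np.uint8)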
By Michael Shick -Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php? curid=44405988.
By Arvin Calspan Advanced Technology Center; Hecht-Nielsen, R. Neurocomputing (Reading, Mass.: Addison-Wesley, 1990).
In the literature the definition of the activation function may be slightly different, with "≥" signs instead of ">" in Equation 2.4 and with θ > 0.
A saliency is a region that is likely to contain information in an image. Saliencies are typically determined with edge detection and the frequency of occurrences of a pattern in the image -the less frequent, the more unusual and thus the more salient that pattern shall be.
A multiplication between an input datum and a coefficient, the result of which is added to a value computed earlier by another MAC operation.
Naive Bayes classifiers are a class of classification frameworks, the principle of which is to assume that each component of the feature vector is independent of the others, hence the word naive.
We consider the case where the initial scale is 1 and ∆ = 1 -see[START_REF] Viola | Robust real-time face detection[END_REF] for more information.
Acknowledgements
and accepting to review it. Finally, I would like to thank the ANRT, i.e. the French National Research and Technology Agency, for giving me the opportunity to carry out this PhD under the CIFRE program.
c2 to out
This is the very final stage of our HMAX hardware implementation. It receives the data produced by the c2 module in parallel, and serializes it in a way very similar to that of the c1 to s2 module. Its input pins consist of the usual clk and rst, respectively for synchronization and reset purposes, as well as a port called din that receives the input data and new in, which indicates when new data is available. Serial output data is written to the dout output port, and the en dout output port indicates when the data on dout is valid.
The parallel data from din is simply read and written serially to the dout output port, while en dout is set. When this is done, en dout is unset again.
In this Section, we described the architecture of our VHDL model for the HMAX frameworks, taking into account our own optimizations along with other simplification from the literature. That implementation was purposely naive, in order to compare it with the state-of-the-art. Next Section focuses on the implementation results of that model on a hardware target.
Implementation results
In the previous Section, we described the architecture of our VHDL model. The next step is to synthesize and implement it for a particular device. We chose to target a Xilinx Artix-7 200T FPGA. Both synthesis and implementation were performed with Xilinx Vivado tools.
We first examine the utilization of hardware resources -in particular, we shall see that our model does not fit on a single device as is. We then study the timing constraint of our system, including the latency it induces.
Resource utilization
We synthesized and implemented our VHDL code using Xilinx's Vivado 2016.2, targeting a XC7A200TFBG484-1 platform. Results are shown in Table 4.7. One can see that there is still room for other processes on the FPGA, for instance a classifier. Now that we have studied the feasibility of implementing our model on hardware devices, let us study the throughput that it may achieve.
Appendix A RBF networks training A.1 Overview
Radial Basis Function neural networks (RBF) fall into the field of generative models. As their name suggests, after fitting a model to a training set, this type of model may be used to generate new data [START_REF] Bishop | Pattern recognition and machine learning[END_REF] similar to the real data. RBFs are also considered kernel models, in which the data is processed by so-called kernel functions before the actual classification; the goal is to represent the data in a new space, in which it is expected to be more easily linearly separable, particularly when that new space is of larger dimensionality than the space of the input data. Other well-known kernel-based models are, e.g., SVMs. Although these models may be used for both classification and regression tasks, we shall detail here their use for classification tasks only.
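The kernel functions themselves are not written out at this point in the text; the usual choice for RBF networks is a Gaussian centred on c_j with radius r_j, stated here as the common textbook form rather than as the exact definition used by the author:

\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{c}_j \rVert^2}{2 r_j^2}\right).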
A short presentation of such models is proposed in Section 2.
A.2 Clustering
This stage consists in reducing the training set to a more manageable size. The method we chose is based on the work of Musavi et al., though somewhat simpler, as we shall see. It consists in merging neighboring vectors of the same category into clusters, each represented in the network by a kernel function made up of a center, i.e. a representation in the same space of one or several data points from the training set, and a radius, indicating the generalization relevance of that center: the bigger the radius, the better the center represents the dataset. As we shall see, this method makes it possible to build highly non-linear boundaries between classes.
Let X^1 = {x^1_1, x^1_2, . . . , x^1_N} be the training set composed of the N vectors x^1_1, x^1_2, . . . , x^1_N, and T^1 = {t^1_1, t^1_2, . . . , t^1_N} be their respective labels. As for many training algorithms, it is important that the x^1_i are randomized, so that we avoid the case where all vectors of a category have neighboring indexes i. Let also d(a, b) denote the distance between the vectors a and b. Although any distance could be used, we focus here on the typical Euclidean distance, so that
$d(\mathbf{a}, \mathbf{b}) = \sqrt{\sum_{j=1}^{M} (a_j - b_j)^2},$
where a and b have M dimensions.
The clustering algorithm proceeds as follows [START_REF] Musavi | On the training of radial basis function classifiers[END_REF]:
1. map each element x^1_i of X^1 to a cluster c^1_i ∈ C^1, the radius r^1_i of which is set to 0,
2. select the first cluster c^1_1 from C^1,
3. select a cluster c at random from the ensemble C of the other clusters of the same class; let x be its assigned vector and r its radius,
4. merge the two clusters into a new one c^2_1, the vector x^2_1 of which is the centroid of c^1_1 and c:
5. compute the distance d_opp between c^2_1 and the closest cluster ĉ ∈ C^1 of another category,
6. compute the radius r^2_1 of the new cluster c^2_1, as the distance between the new center x^2_1 and the furthest point of the new cluster:
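The description above is cut off before the radius formula and the merge-acceptance test. The sketch below implements the stated steps 1-6 and assumes, in the spirit of Musavi et al.'s scheme, that a merge is kept only when the new radius stays smaller than the distance to the closest cluster of another class; everything past step 6 is therefore an assumption, and all names are illustrative.

import numpy as np

def cluster_training_set(X, T, rng=None, max_iters=1000):
    """Greedy cluster merging for RBF centres (sketch of steps 1-6 above).
    X: (N, M) training vectors, T: (N,) labels.
    Returns a list of (center, radius, label) tuples."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: one cluster per training vector, radius 0; keep the member lists
    centers = [np.asarray(x, dtype=float) for x in X]
    radii = [0.0] * len(centers)
    labels = [t for t in T]
    members = [[c] for c in centers]

    for _ in range(max_iters):
        i = 0                                                  # step 2: first cluster
        same = [j for j in range(1, len(centers)) if labels[j] == labels[i]]
        if not same:
            break
        j = int(rng.choice(same))                              # step 3: random same-class cluster
        merged = members[i] + members[j]
        center = np.mean(merged, axis=0)                       # step 4: centroid of the two clusters
        opp = [k for k in range(len(centers)) if labels[k] != labels[i]]
        d_opp = min(np.linalg.norm(center - centers[k]) for k in opp) if opp else np.inf  # step 5
        radius = max(np.linalg.norm(center - m) for m in merged)                          # step 6
        if radius >= d_opp:   # assumed acceptance test (the original text is cut off here)
            break
        lbl = labels[i]
        for idx in sorted((i, j), reverse=True):               # drop the two merged clusters
            del centers[idx], radii[idx], labels[idx], members[idx]
        centers.append(center); radii.append(radius); labels.append(lbl); members.append(merged)

    return list(zip(centers, radii, labels))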
"778151"
] | [
"227"
] |
01373668 | en | [
"phys",
"spi",
"info"
] | 2024/03/04 23:41:48 | 2018 | https://hal.science/lirmm-01373668/file/A%20novel%20EMG%20interface.pdf | W Tigra
email: wafa.tigra@inria.fr
B Navarro
A Cherubini
X Gorron
A Gelis
C Fattal
D Guiraud
C Azevedo Coste
A novel EMG interface for individuals with tetraplegia to pilot robot hand grasping
Keywords: Control, electromyographic (EMG), grip function, robot hand, tetraplegia
This article introduces a new human-machine interface for individuals with tetraplegia. We investigated the feasibility of piloting an assistive device by processing supra-lesional muscle responses online. The ability to voluntarily contract a set of selected muscles was assessed in five spinal cord-injured subjects through electromyographic (EMG) analysis. Two subjects were also asked to use the EMG interface to control palmar and lateral grasping of a robot hand. The use of different muscles and control modalities was also assessed. These preliminary results open the way to new interface solutions for high-level spinal cord-injured patients.
I. INTRODUCTION
Consequences of complete spinal cord injury (SCI) are often devastating for patients. This observation is particularly true for trauma at cervical levels (tetraplegia), since this impedes the use of the four limbs. Indeed, a complete SCI prevents any communication between the central nervous system and the sub-lesional peripheral nervous system, which receives no cervical commands. However, moving paralyzed limbs after such trauma is still possible, as for example when sufficient electric current is applied. Cells (neurons or myocytes), are then excited and generate the action potentials responsible for muscle contraction [START_REF] Keith | Implantable functional neuromuscular stimulation in the tetraplegic hand[END_REF], [START_REF] Billian | Upper extremity applications of functional neuromuscular stimulation[END_REF], [START_REF] Keith | Neuroprostheses for the upper extremity[END_REF], [START_REF] Hoshimiya | A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for multijoint upper extremity[END_REF]. Nevertheless, the interaction of the tetraplegic person with his/her electrical stimulation device, to control the artificial contractions and achieve a given task at the desired instant, is still problematic. The reason is that both the range of possible voluntary movements, and the media available to detect intention, are limited. Various interface types have therefore been tested in recent years. For lower limbs, these interfaces include push buttons on walker handles in assisted-gait [START_REF] Guiraud | An implantable neuroprosthesis for standing and walking in paraplegia: 5year patient follow-up[END_REF], accelerometers for movement detection in assisted-sit-to-stand [START_REF] Jovic | Coordinating Upper and Lower Body During FES-Assisted Transfers in Persons With Spinal Cord Injury in Order to Reduce Arm Support[END_REF], electromyography (EMG) [START_REF] Moss | A novel command signal for motor neuroprosthetic control[END_REF] and evoked-electromyography (eEMG) [START_REF] Zhang | Evoked electromyography-based closed-loop torque control in functional electrical stimulation[END_REF] and, most recently, brain computer interfaces (BCI) [START_REF] King | The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia[END_REF]. For upper limbs (restoring hand movement), researchers have proposed the use of breath control, joysticks, electromyography (EMG) [START_REF] Knutson | Simulated neuroprosthesis state activation and hand-position control using myoelectric signals from wrist muscles[END_REF], shoulder movements [START_REF] Hart | A comparison between control methods for implanted fes hand-grasp systems[END_REF], and voluntary wrist extension [START_REF] Bhadra | Implementation of an implantable joint-angle transducer[END_REF]. In this last work, a wrist osseointegrated Hall effect sensor implant provided the functional electrical stimulation (FES) of a hand neuroprosthesis. Keller et al. proposed using surface EMG from the deltoid muscle of the contralateral arm to stimulate the hand muscles [START_REF] Keller | Grasping in high lesioned tetraplegic subjects using the EMG controlled neuroprosthesis[END_REF]. 
In [START_REF] Thorsen | A noninvasive neuroprosthesis augments hand grasp force in individuals with cervical spinal cord injury: The functional and therapeutic effects[END_REF], the EMG signal from the ipsilateral wrist extensor muscles was used to pilot a hand neuroprosthesis. An implanted device [START_REF] Memberg | Implanted neuroprosthesis for restoring arm and hand function in people with high level tetraplegia[END_REF] took advantage of the shoulder and neck muscles to control the FES applied to the arm and hand muscles. EMG signals were also used to control an upper limb exoskeleton in [START_REF] Dicicco | Comparison of control strategies for an EMG controlled orthotic exoskeleton for the hand[END_REF]. Orthotics and FES can be effective in restoring hand movements, but the piloting modalities are often unrelated to the patient's level of injury and remaining motor functions, making the use of these devices somewhat limited. We believe that poor ergonomics and comfort issues related to the piloting modes also explain this low usage. In this paper, we therefore present a control modality closely linked to the patients remaining capacities in the context of tetraplegia. We propose here to evaluate the capacity and comfort of contracting supra-lesional muscles [START_REF] Tigra | Ergonomics of the control by a quadriplegic of hand functions[END_REF], and assess the feasibility of using EMG signals as an intuitive mode of controlling of functional assistive devices for upper limbs. In this preliminary study, we focus on the comfort and capacity for contracting four upper limb muscles (trapezius, deltoid, platysma and biceps) in individuals with tetraplegia. We then investigate the feasibility of using these contractions to control the motions of a robot hand. A robot hand was preferred to conventional grippers since it allows manipulators or humanoids to handle complex shaped parts or objects that were originally designed for humans, at the cost of more sophisticated mechanical designs and control strategies [START_REF] Cutkosky | On grasp choice, grasp models, and the design of hands for manufacturing tasks[END_REF], [START_REF] Bicchi | Hands for dexterous manipulation and robust grasping: a difficult road toward simplicity[END_REF]. Recently, robot hand usage has been extended to the design of prostheses for amputees, under the control of brain-computer interfaces [START_REF] Weisz | A user interface for assistive grasping[END_REF], or EMG signals [START_REF] Farry | Myoelectric teleoperation of a complex robotic hand[END_REF], [START_REF] Zollo | Biomechatronic design and control of an anthropomorphic artificial hand for prosthetic and robotic applications[END_REF], [START_REF] Cipriani | On the shared control of an EMG-controlled prosthetic hand: Analysis of user[END_REF], [START_REF] Kent | Electromyogram synergy control of a dexterous artificial hand to unscrew and screw objects[END_REF]. However, to our knowledge, surface EMG signals (in contrast to neural signals [START_REF] Hochberg | Reach and grasp by people with tetraplegia using a neurally controlled robotic arm[END_REF], [START_REF] Pfurtscheller | Thought control of functional electrical stimulation to restore hand grasp in a patient with tetraplegia[END_REF]) have never been used by tetraplegic individuals to pilot robot hands. CWRU [START_REF] Moss | A novel command signal for motor neuroprosthetic control[END_REF], for example, used EMG signals to pilot the patient's own hand through FES, whereas Dalley et al. 
[START_REF] Dalley | A method for the control of multigrasp myoelectric prosthetic hands[END_REF] used EMG within a finite state machine to control a robot hand, but with healthy subjects. Furthermore, in most of the cited works, a single motor was used to open or close a finger, a design constraint that impedes precise hand postures and grasps. Using a fully dexterous robot hand allowed us to further investigate the possibilities of an EMG interface to control different grasping modalities owing to the visual feedback provided by the robot hand. Furthermore, the dimensions and degrees of freedom are very close to those of the human hand, therefore providing the user an intuitive representation of the final movement that he/she can control with, for example, FESbased hand movement restoration. The goal of the study was two-fold: (i) to assess the ability of tetraplegic patients to pilot a robot hand device via muscle contractions even though the contractions are not functional. The EMG signals came from supra-lesional muscles that can be very weak and unable to produce any movement; and (ii) to compare different control modalities. In the following section, we present the protocol and experimental setup. We then present the results on the efficacy and comfort of the continuous or graded contraction of different muscles, along with details on the participants capacity to pilot the robot hand using these contractions.
II. MATERIAL AND METHODS
A. Subjects and selected muscles
The study was conducted during scheduled clinical assessments at the Propara Neurological Rehabilitation Center in Montpellier, France. Thus, the experiments had to be of limited duration. The subjects were informed verbally and in writing about the procedure and gave their signed informed consent according to the internal rules of the medical board of the Centre Mutualiste Neurologique Propara. The experiments were performed with five tetraplegic male subjects with lesional levels between C5 and C7 (see Table I). Subject 2 had undergone muscle-tendon transfer surgery at the time of inclusion. Surface BIOTRACE Electrodes (Controle graphique S.A, France) were used for EMG recordings. Pairs of surface-recording electrodes (1 cm distance) were positioned above the four muscles on each body side. Subjects did not receive any pre-training before these experiments. They were only instructed on the movements for contracting the various muscles. As the muscles selected to control hand grasp devices are likely to be used in a daily context by tetraplegic subjects, these muscles should be under voluntary control. The targeted tetraplegic patients had no muscle under voluntary control below the elbow. The use of facial muscles to pilot a hand grasp device has never been studied because social acceptability would probably be problematic. In addition, muscle synergies were sought (e.g., hand closing could be linked to elbow flexion, as performed via the biceps or deltoid muscle). For these reasons, we chose to study the EMG activity of four upper arm muscles (right and left): the middle deltoid, the superior trapezius, the biceps and the platysma. 1 The ASIA (American Spinal Injury Association) Impairment Scale (AIS) classifies the severity (i.e. completeness) of a spinal cord injury. The AIS is a multi-dimensional approach to categorize motor and sensory impairment in individuals with SCI. It identifies sensory and motor levels indicative of the most rostral spinal levels, from A (complete SCI) to E (normal sensory and motor function) [START_REF] Kirshblum | International standards for neurological classification of spinal cord injury (revised 2011)[END_REF]. Nevertheless,
there were slight differences in these eight muscles based on each subject's remaining ability. EMG signals were initially recorded on the ipsilateral and contralateral sides of the dominant upper limb. Yet, patients 1 and 3 showed signs of fatigue and they did not use the contralateral (left) limb. The superior trapezius, middle deltoid, biceps and platysma muscles of the ipsilateral side of the dominant (right) upper limb were thus studied for these subjects. For subjects 2 and 4, both (left and right) superior trapezii, middle deltoids, bicepses, and platysmas were considered. For patient 5, the deltoid was replaced by the middle trapezius, which has a similar motor schema, since strong electrocardiogram signals were observed on the deltoid EMG signal. To guarantee that the selected EMG would not impede available functionality, the patients' forearms were placed in an arm brace and EMG signals were recorded with quasi-isometric movements.
B. EMG processing
Surface EMG signals were recorded with an insulated National Instrument acquisition card NI USB 6218, 32 inputs, 16-bit (National Instruments Corp., Austin, TX, USA). BIOVISION EMG amplifiers (Wherheim, Germany) were used, with gain set to 1000. The acquisition card was connected to a battery-run laptop computer. The acquisition was made at 2.5 kHz. For the first three subjects, the data processing was offline: EMG data were filtered with a high-pass filter (20 Hz, fourth-order Butterworth filter, 0 phase). Then, a low-pass filter was applied to the absolute value of the EMG to obtain its envelope (2 Hz, fourth-order Butterworth filter). The data processing was online for the other two subjects in order to control the robot hand motion. We applied the same filtering except for the first filter, which had a non-zero phase. In all cases, the filtered EMG signal is denoted with s(t). A calibration phase was performed for each muscle's EMG. Subjects were asked to first relax the muscle and then to strongly contract it. The corresponding EMG signals were stored and post-processed to obtain the maximum envelope. The thresholds were then set as a proportion of the normalized value of the EMG signal (value for a maximal contraction = 1). The high and low thresholds were experimentally determined to s_L = 0.3 ± 0.1 and s_H = 0.44 ± 0.14 through the calibration process, in order to avoid false detection against noise, while maintaining them as low as possible, to require only a small effort from the patient. These thresholds, s_L and s_H (s_H > s_L > 0), were used to trigger the states of the robot hand finite state machine (FSM), as explained below. FSMs have been used in some myoelectric control studies, mostly on healthy or amputee subjects, but never with tetraplegic subjects [START_REF] Dalley | A method for the control of multigrasp myoelectric prosthetic hands[END_REF]. In our study, the goal was to determine whether the muscles in the immediate supra-lesional region could be used by tetraplegic patients to control a robot hand. We relied on myoelectric signals, even from very weak muscles that were unable to generate torque sufficient to pilot the hand. As we controlled only three hand states through event-triggered commands, an FSM was appropriate. On the contrary, EMG pattern recognition is mostly used to progressively pilot several hand movements from many sensors. Grasping is related to EMG amplitude (stronger EMG signal leads to tighter closure). When the muscle is relaxed, the hand opens.
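A minimal offline sketch of the envelope extraction and hysteresis detection just described, using SciPy. The filter orders, cut-off frequencies and sampling rate follow the text; everything else (array names, normalization of the envelope) is illustrative.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 2500.0  # sampling rate (2.5 kHz, as in the acquisition setup)

def emg_envelope(raw, fs=FS):
    """High-pass (20 Hz, 4th-order Butterworth), rectify, then low-pass (2 Hz)."""
    b_hp, a_hp = butter(4, 20.0 / (fs / 2.0), btype="high")
    b_lp, a_lp = butter(4, 2.0 / (fs / 2.0), btype="low")
    hp = filtfilt(b_hp, a_hp, raw)          # zero-phase, as in the offline processing
    return filtfilt(b_lp, a_lp, np.abs(hp))

def hysteresis_states(s, s_low=0.3, s_high=0.44):
    """Muscle contracted if s > s_high, relaxed if s < s_low, unchanged in between."""
    state = False
    states = np.zeros_like(s, dtype=bool)
    for k, v in enumerate(s):
        if v > s_high:
            state = True
        elif v < s_low:
            state = False
        states[k] = state
    return states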
Mode 4: Contracting (for 2 s) muscle 1 first causes a palmar pinch (palmar grasping); the hand can then be opened by contracting (for 2 s) muscle 2. Contracting muscle 2 first (for 2 s) instead causes a key-grip (lateral grasping), followed by hand opening if muscle 1 is contracted (for 2 s).
Mode 5: Contraction of muscle 1 causes a palmar pinch, whereas contraction of muscle 2 causes a key-grip. In both cases, to stop the closure, subjects must stop the muscle contraction (cf. Fig. 1).
Fig. 1. Finite state machine used to control the hand in Mode 5 (states: open hand, palmar pinch, key grip).
C. Robot hand control
We chose to use the robot hand since it gives patients much more realistic feedback on task achievement (via grasp of real objects) compared to a virtual equivalent (e.g., a simulator). With a real (yet robot) hand, patients can perform the task as if FES had been used on their hand. The Shadow Dexterous Hand (Shadow Robot Company, London, UK) closely reproduces the kinematics and dexterity of the human hand. The model used here is a right hand with 19 cable-driven joints (denoted by angle q i for each finger i = 1, . . . , 5): two in the wrist, four in the thumb and little finger, and three in the index, middle and ring fingers. Each fingertip is equipped with a BioTac tactile sensor (SynTouch, Los Angeles, CA, USA). These sensors mimic human fingertips by measuring pressure, vibrations and temperature. The hand is controlled through ROS 1 , with the control loop running at 200Hz. In this work, the hand could be controlled in five alternative modes, shown in Table II. Each mode corresponds to a different FSM, and the transitions between states are triggered by muscle contractions and relaxations. Three hand states were used: open hand, palmar pinch, and key-grip (see Fig. 2). Unlike the other modes, mode 3. is not an "all-or-nothing" closing, but allows progressive closing, according to the amplitude of the EMG signal. To begin grasping, contraction has to be above the first chosen threshold, and then the finger position is proportional to the EMG envelope amplitude. One muscle is monitored in modes 1 to 3, and two muscles in modes 4 and 5 (see Table II). Hysteresis was used: we considered a muscle contracted if s (t) > s H and relaxed if s (t) < s L . For s (t) ∈ [s L , s H ], the muscle (hence, hand) state is not changed.
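To illustrate how the event-triggered FSM of mode 5 (Fig. 1) can be driven by the two thresholded EMG signals, here is a small Python sketch. State names follow Fig. 1; the priority given to muscle 1 when both muscles happen to be above threshold is an assumption of this sketch, not a statement about the original implementation.

OPEN, PALMAR, KEY = "open hand", "palmar pinch", "key grip"

class Mode5FSM:
    """Mode 5: muscle 1 -> palmar pinch, muscle 2 -> key grip,
    relaxing the active muscle -> open hand."""
    def __init__(self):
        self.state = OPEN

    def update(self, m1_contracted, m2_contracted):
        # m1_contracted / m2_contracted come from the hysteresis detector
        if self.state == OPEN:
            if m1_contracted:
                self.state = PALMAR
            elif m2_contracted:
                self.state = KEY
        elif self.state == PALMAR and not m1_contracted:
            self.state = OPEN
        elif self.state == KEY and not m2_contracted:
            self.state = OPEN
        return self.state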
In modes 1-3, only one (predetermined) grasp (palmar) was used, whereas in modes 4 and 5, the user was able to change the grasp (palmar/lateral) type online via the EMG signal. Each state was characterized by the five finger target joint values, q_i^*. In all modes, except for mode 3, these were pre-tuned offline to constant values (corresponding to open and closed configurations). In mode 3, however, the desired finger position q_i^* was obtained by interpolating between open and closed positions (q_i^o and q_i^c):
$q_i^* = q_i^o\,(1 + e\,(q_i^c - q_i^o)),$  (1)
with e the contraction level, normalized between 0 (no contraction) and 1 (full contraction):
$e = \begin{cases} 1 & \text{if } s > s_H, \\ 0 & \text{if } s < s_L, \\ (s - s_L)/(s_H - s_L) & \text{otherwise.} \end{cases}$  (2)
We now outline how the target values q_i^* were attained. For the two grasping states, finger motion should stop as soon as contact with the grasped object occurs. To detect contact on each fingertip i, we use the pressure measurement P_i on the corresponding BioTac. At time t, the contact state (defined by the binary value C_i(t)) is detected by a hysteresis comparator over P_i:
$C_i(t) = \begin{cases} 1 & \text{if } P_i > P_H, \\ 0 & \text{if } P_i < P_L \text{ or } t = 0, \\ C_i(t - T) & \text{otherwise.} \end{cases}$  (3)
Here, P_H and P_L (P_H > P_L > 0) are the pre-tuned high and low thresholds at which C_i changes, and T is the sampling period. For the open hand state, we do not account for fingertip contact, and keep C_i(t) = 0. For all three states, an online trajectory generator (OTG) is used to generate the joint commands q_i, ensuring smooth motion of each finger to its target value q_i^*. The commands depend on the contact state:
$q_i(t) = \begin{cases} \mathrm{OTG}(q_i(t - T), q_i^*, \dot{q}_i^M) & \text{if } C_i(t) = 0, \\ q_i(t - T) & \text{otherwise,} \end{cases}$  (4)
with $\dot{q}_i^M$ the vector of (known) maximum motor velocities allowed for the joints of finger i. Each finger is controlled by a separate OTG, in order to stop only the ones in contact. As OTG, we used the Reflexxes Motion Library 2.
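The per-finger logic of Eqs. (3)-(4) can be summarized by the following sketch. The OTG itself is abstracted away (a real implementation would call the Reflexxes library through its own API), so otg_step is a stand-in, and the threshold values are placeholders.

def contact_state(prev_contact, pressure, p_low, p_high):
    """Hysteresis comparator of Eq. (3): latch contact when pressure > p_high,
    release it when pressure < p_low, keep the previous state otherwise."""
    if pressure > p_high:
        return True
    if pressure < p_low:
        return False
    return prev_contact

def finger_command(q_prev, q_target, in_contact, otg_step):
    """Eq. (4): freeze a finger that is in contact, otherwise let the
    online trajectory generator move it smoothly toward its target."""
    if in_contact:
        return q_prev
    return otg_step(q_prev, q_target)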
D. Experimental protocols
The experiments were performed through two successive protocols at two different times and with two different sets of patients to limit the duration of the session within their clinical assessment. The first time (protocol A, subjects 1,2 and 3, Fig. 3), we checked whether the patients could contract each muscle (assumed to be supra-lesional but not far from the lesion) with a sufficient level of EMG. The second time (protocol B, subjects 4 and 5, Fig. 3) we tested their ability to control the robot hand without previous practice so visual feedback (from observing the hand) was added to the proprioceptive feedback (subjects 4 and 5). Both protocols are described below.
1) Protocol A -EMG alone: This protocol evaluated the subjects' capacity to voluntarily control the different muscles and the comfort and ease of contraction (Fig. 3). Each task was performed only once since the objective was achieved at the first attempt, thereby confirming the easiness of command. Moreover, warm-up was not necessary, since the muscles were not used to output torque but only to generate usable EMG. For each muscle, the subjects performed two tasks:
1) maintain maximum contraction for 15 seconds, 2) successively maintain three levels of contraction (low, medium, high), each for 5 seconds.
2) Protocol B -EMG driving robot hand motion: For this second protocol, muscle contractions controlled the robot hand motion (see Fig. 4). Protocol B was thus composed of two consecutive parts: individual, and preferred muscle assessment.
a) Individual muscle assessment:
In the first part of protocol B, individual muscle contractions were assessed through three tasks: T1) calibrate: s_L and s_H are set; T2) maintain maximum contraction for 5 seconds; T3) maintain contraction as long as possible (with a minimum of 15 seconds). In tasks 2 and 3, the contraction level had to reach the empirically defined threshold s_H. After each muscle assessment, the subject was asked to assess the comfort, fatigue and ease of contraction efforts through a questionnaire. The questionnaire was inspired by the ISO 9241-9 standard on "Ergonomics of non-keyboard input devices." Once all eight muscles were tested, the subjects were asked to select the two preferred muscles. These two muscles were then taken into account to evaluate the different robot hand control modes in the second part of the protocol. b) Preferred muscle assessment: Two muscles were selected among the eight, based on subjective patient assessments. The choice of preferential muscles was up to the patient, with the constraint that these two muscles had to be on the same side. All five modes of robot hand control (shown in Table II) were tested and evaluated. For mode 5, the subject was instructed to select contraction muscle 1 or 2 (i.e., either palmar or lateral grasping), depending on the object randomly presented by the experimenter. Two objects were presented to the subject, one with a cylindrical or spherical shape requiring palmar grasping, the other with a triangular prism shape requiring lateral grasping. The subject had to trigger the correct closure of the robot hand through the contraction of the appropriate muscle to grasp the presented object. Each type of prehension was tested at least five times during the 11 randomized trials.
III. RESULTS
A. EMG Results
We analyzed EMG data from continuous (Fig. 5 (a) and Fig. 5 (b)) and graded (Fig. 5 (c) and Fig. 5 (d)) muscle contractions. Data on each subject's ability to contract the different muscles is presented in Table III. All subjects were able to individually contract the eight muscles on demand for at least 7 seconds, except subject 1 for biceps (no voluntary contraction was visible in the EMG signal). Interestingly, a contraction could be extracted from the EMG signals even for very weak muscles. This is illustrated in Fig. 5 (a) and Fig. 5 (b), where a voluntary sustained contraction of the subject's left biceps can be seen. He was able to maintain his contraction for more than 30 seconds. Although this subject presented a C5 lesion with non-functional biceps activity (no elbow flexion), this very weak EMG activity of the biceps could still be turned into a functional command to pilot a device. Among our five patients, there was only one case where a very weak muscle produced a functional EMG signal. This muscle had an MRC score of 1. For all other muscles with EMG activity, the MRC score was ≥ 3. For protocol A (subjects 1-3), we present in Table IV the ability to grade muscle contraction. The three subjects were able to achieve the three levels of contraction (low, medium and high).
The biceps of subject 1 was not tested here, as continuous voluntary contraction was not visible in the EMG signal. In Fig. 5 (c) and Fig. 5 (d), we present an example of a trial from subject 3. He was able to perform an isometric graded contraction of his superior trapezius muscle, but had difficulties holding the contraction for more than 5 seconds. The amplitude of contraction was increased by a factor of seven from 17.3 ±1.9 (rest level), 34.3±14.9 (low contraction), 43.7 ±36.1 (middle contraction) to 78.7±38.5 (high contraction). In protocol B, the subjects were able to maintain the contraction of each of the tested muscles.
B. Hand results
The tasks (e.g., holding the object in the robot hand for 5 s) were successfully achieved with each of the tested muscles. Among the tested modes, mode 2 was the favorite mode for subject 4. Mode 1 was the favorite mode for subject 5. Regarding the preferential muscles: subject 4 chose the left biceps as muscle 1 and the left superior trapezius as muscle 2, whereas subject 5 chose the left superior and left middle trapezius, respectively, as 1 and 2. Muscle 1 contraction resulted in palmar grasping, whereas a contraction of muscle 2 resulted in lateral grasping (mode 5). We randomly presented two distinct objects to subjects 4 and 5. They performed 11 hand grasping tests with the robot hand (Fig. 6). To grasp the objects, the subjects had to make either a palmar prehension via muscle 1 contraction, or a lateral prehension through muscle 2 contraction. Among the 11 trials, subject 4 had 100% success, while patient 5 managed to seize eight objects out of 11. The three failures occurred with the palmar grasp because of co-contraction: some degree of co-contraction was still present, and the hand movement was triggered by whichever muscle reached its threshold first. Patient 5 tended to push the shoulder back (this activated the middle trapezius) just before raising it (this activated the superior trapezius).
C. Comfort survey
For protocol B (subjects 4-5), we present in Table V the responses of the subjects to the questionnaire on comfort and fatigue, related to the contraction of the different muscles.
Each subject declared some muscles to be easier and more comfortable to contract (in terms of effort, fatigue, and concentration) than others.
IV. DISCUSSION
The control of a neuroprosthesis by the user -that is, the patient -is a key issue, especially when the objective is to restore movement. Control should be intuitive and thus easily linked to task finality [START_REF] Hoshimiya | A multichannel FES system for the restoration of motor functions in high spinal cord injury patients: a respiration-controlled system for multijoint upper extremity[END_REF], [START_REF] Keith | Neuroprostheses for the upper extremity[END_REF], [START_REF] Bhadra | Implementation of an implantable joint-angle transducer[END_REF]. Furthermore, interfaces are based on the observation (i.e., sensing) of voluntary actions (even mentally imagined, as with BCI interfaces [START_REF] King | The feasibility of a brain-computer interface functional electrical stimulation system for the restoration of overground walking after paraplegia[END_REF]). EMG is widely used to achieve this goal for amputees, but for patients with tetraplegia, the use of supra-lesional muscles to control infra-lesional muscles was a neat option. The second generation of the Freehand system was successfully developed and is the only implanted EMGcontrolled neuroprosthesis to date. As far as we know, robot hands for tetraplegics have not yet been controlled using EMG.
The feasibility of using supra-lesional muscle EMG was not straightforward. Indeed, the available muscles are few and most of them cannot be considered valid as they are underused and their motor schema is in some cases deeply impaired, with no functional output. This leads to highly fatigable and weak muscles, but also to the loss of synergy between the paralyzed muscles that are normally involved in upper limb movements. In some cases, even if the muscle is contractable, the produced contraction is not functional (does not induce any joint motion). Here, the goal was to understand whether the immediately supra-lesional muscles of tetraplegic patients could be used to control a robot hand. The targeted population -that is, tetraplegics with potentially weak supra-lesional musclesshould have a very simple interface for two reasons: (i) simple contraction schemes to control the hand limit cognitive fatigue, and (ii) short contractions limit physiological fatigue. These two constraints mean that the hand should be controlled with predefined postures and not in a proportional way. Thus, the output of our control framework was a limited set of hand states, while its input, except for one mode (mode 3), was a limited set of EMG levels. In this context, the FSM scheme should be preferred. In our study, we found in all five subjects a combination of muscles such that each was able to easily perform the tasks (protocol A) that is, to maintain a continuous contraction or a grade contraction, so that it could be quantified by an EMG signal. We were able to calibrate quite low thresholds, so that patients did not have to contract much and experience fatigue. Moreover, these experiments were conducted during the scheduled clinical assessment, so no training was offered, even during the session. The patients were merely asked to contract muscles and to try to hold objects with the robot hand. All were able to control it immediately. The calibration procedure is linked only to EMG signal scaling so that, as a whole, the system is very easy to use in a clinical context, compared with approaches like BCI, for instance. Interestingly, the lesion age had no influence on performance. Two subjects participated in the second session (protocol B), in which the EMG signals were used to control a robot hand. This was achieved without any prior learning or training. We show that both the used muscle, and the way the contraction controls the hand (control mode), have a drastic effect on performance. This robot hand approach may thus be a very good paradigm for rehabilitation or training, for future FES-based control of the patients' own hand. These two subjects did not have the same preferred mode of control, but clearly preferred one over the others. Mode 1 (continuous contraction to maintain robot hand closure) seems to be more intuitive, as the contraction is directly linked to the posture of the hand, but mode 2 (an impulsive contraction provokes robot hand closure/opening) induces less fatigue as it needs only short muscle contractions to toggle from open to closed hand. Depending on their remaining motor functions, patients feel more or less comfortable with a given mode. Also, the choice of the preferred control mode would probably be different after a training period. In our opinion, patients should select their preferred mode themselves. However, a larger study would give indications on how to classify patients preferred modes, based on the assessment of their muscle state. 
In any case, control cannot be defined through a single mode and should be adapted to each patient and probably to each task and fatigue state. For practical reasons, we decided that the two EMGs would be located on the same side without any knowledge beforehand as to which side to equip. The subject selected one preferred muscle and based on this choice, the second muscle was selected on the same side. A major issue with this decision is that the two muscles sometimes co-contract and in mode 5 (muscle 1 contraction causes palmar pinch and muscle 2 contraction causes key-grip) the robot hand grasping task selected by the system was not always the one the user intended to execute. In the future, patients will control their own hand by means of electrical stimulation instead of a distant robot hand, and the choice of which body side to equip with EMG will need to be made with respect to the task that the stimulated hand must achieve. For example, if muscle contraction is associated with arm motion, this might well disturb the grasping to be achieved. Furthermore, an analysis is needed to determine the effect of the dominant side on performance. For our patients, grasping would not be disturbed since shoulder movements do not induce forearm movements. The questionnaire at the end of each test allowed us to evaluate the ease of using EMG as a control method. Preferential muscles were chosen so as not to disturb the functionalities available to the subjects. Yet, one can also imagine a system that deactivates electrostimulation when the patient wishes to use his/her remaining functionality for other purposes. In this case, the subject would be able to contract his/her muscles without causing hand movements. Furthermore, one can imagine using forearm/arm muscle synergies or relevant motor schemas to facilitate the learning (e.g. hand closing when the elbow bends, hand opening during elbow extension, and so on). The interesting property of the proposed interface is that even a weak muscle can produce a proper EMG signal. As an example, subject 4 was able to control the robot hand with a weak muscle to produce functional movement. In other words, a non-functional muscle in the context of natural movements can be turned into a functional muscle in the context of assistive technology and one can even expect that motor performances will improve with training.
V. CONCLUSION
We have demonstrated the feasibility of extracting contraction recordings from supra-lesional muscles in individuals with tetraplegia that are sufficiently rich in information to pilot a robot hand. The choice of muscles and modes of control are patient-dependent. Any available contractable muscleand not just functional muscles -can be candidates and should be evaluated. The control principle could also be used for FES applied to the patient arm, or to control an external device such as a robot arm or electric wheelchair, or as a template of rehabilitation movements. The robot hand might help to select (via their residual control capacity), and possibly train, patients as potential candidates for an implanted neuroprosthetic device. A greater number of patients using the robot hand would provide a better picture of the range of performance. Therefore, the next step will be to extend the study to a wider group of patients, to provide a better picture of the range of performance. We also plan to use the robot hand as a part of a training protocol for future FES devices.
Mode 2: A first contraction of 2 s triggers grasping. The hand remains closed even when the muscle is relaxed. The next 2 s contraction triggers hand opening.
Mode 3: Progressive closing: the finger position is proportional to the EMG envelope amplitude (see Section II-C).
Fig. 2. Different states of the robot hand: (a) open hand, (b) palmar pinch (palmar grasping), (c) key-grip (lateral grasping).
Fig. 3. Top: Principle of EMG recording and analysis (protocol A). Bottom: Principle of robot hand control through EMG signals (protocol B).
Fig. 4. Protocol B: setup description and upper arm positioning during EMG recordings.
Fig. 5. Example of muscle contractions observed in SCI subjects: (a), (b) sustained biceps contraction (Subj. 4); (c), (d) graded contraction of the sup. trapezius muscle (Subj. 3). Raw signal (a and c), filtered signal (b and d).
Fig. 6. Example of robot hand trajectories generated from EMG recording in subject 5 for modes 1 and 3. Top: raw EMG; bottom: filtered EMG (blue) and hand trajectory (red). 0: hand is open, 1: hand is closed.
TABLE II: DESCRIPTION OF THE FIVE HAND CONTROL MODES
Mode 1: Continuous muscle contraction provokes grasping. When the muscle is relaxed, the hand opens.
TABLE III: MUSCLE CONTRACTION ABILITIES. D: MAXIMUM CONTRACTION DURATION. **FAVORITE MUSCLE, *WITH HELP OF ARM SUPPORT
Subject ID superior trapezius middle deltoid / biceps platysma
middle trapezius
Right (I) Left (C) Right (I) Left (C) Right (I) Left (C) Right (I) Left (C)
1 10s** NA >15s NA 0 NA >15s NA
2 >15s** >15s >15s >15s >15s >15s >15s >15s
3 >15s NA >15s NA >15s NA >15s >15s
4 >15s* >15s* >15s >15s* 7s >15s** >15s >15s
5 >15s 15s >15s >15s** >15s >15s** 14s >15s
TABLE IV
ABILITY TO GRADE THE CONTRACTION FOR THE 3 FIRST SUBJECTS, TIME FOR EACH CONTRACTION: 5 S (PROTOCOL A)
Level upper Trapezius middle Deltoid Biceps Platysma
of Average STD Normalised Average STD Normalised Average STD Normalised Average STD Normalised
contraction (mV) (mV) value (%) mV) (mV) value (%) (mV) (mV) value (%) mV) (mV) value (%)
1 75.33 18.87 0.32 72.93 9.51 0.6 NA NA NA 53.7 16.88 0.39
Subject 1 2 104 12.9 0.44 84.53 10.53 0.69 NA NA NA 59.83 14.3 0.44
3 237 59.9 1 122.13 12.87 1 NA NA NA 135.83 19.4 1
1 50.94 7.81 0.22 273.3 59.9 0.52 110.9 8.07 0.39 73.7 2 0.35
Subject 2 2 96.36 3.87 0.42 370 73.2 0.71 164.8 30.8 0.58 157.5 14.5 0.74
3 226.97 211.51 1 522 61.1 1 285.8 50.5 1 213 51.6 1
1 53.93 19.32 0.29 85.42 5 0.25 21.38 6.39 0.37 42.5 11.19 0.30
Subject 3 2 116.32 38.11 0.63 185 33.75 0.54 41.5 8.74 0.72 100 15.11 0.70
3 185 56.05 1 345 72.25 1 57.38 10.21 1 143.61 32.58 1
TABLE V: EVALUATION OF INDIVIDUAL MUSCLE CONTRACTION FOR SUBJECTS 4 AND 5 (PROTOCOL B). *1 = VERY HIGH EFFORTS AND FATIGUE, 7 = VERY LOW EFFORTS AND FATIGUE
Superior Middle deltoid / Biceps Platysma
trapezius Middle trapezius
Right Left Right Left Right Left Right Left
Comfort Fatigue Comfort Fatigue Comfort Fatigue Comfort Fatigue Comfort Fatigue Comfort Fatigue Comfort Fatigue Comfort Fatigue
Subject 4 3.8 2 3 2 4.5 2.5 3.3 5.3 4.3 3.7 5 5.7 2.25 2.5 3 2
Subject 5 7 4.3 2.5 1 6.8 6.3 4 3 2.5 5 3.5 6.3 2 1 3.5 3
http://www.ros.org
http://www.reflexxes.ws
The MRC (Medical Research Council) Scale assesses muscle power in patients with peripheral nerve lesions from 0 (no contraction) to 5 (normal power).
ACKNOWLEDGMENTS
The authors wish to thank the subjects who invested time in this research, as well as MXM-Axonic/ANRT for support with the PhD grant, CIFRE # 2013/0867. The work was also supported in part by the ANR (French National Research Agency) SISCob project ANR-14-CE27-0016. Last, the authors warmly thank Violaine Leynaert, occupational therapist at the Propara Center, for her invaluable help.
"982188",
"12978",
"6566",
"925900",
"838724",
"8582",
"8632"
] | [
"450088",
"303268",
"395113",
"98357",
"395113",
"395113",
"31275",
"234185",
"455505",
"450088",
"450088"
] |
01486186 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01486186/file/Han16-STF.pdf | Jing Han
Zixing Zhang
email: zixing.zhang@uni-passau.de
Nicholas Cummins
Fabien Ringeval
Björn Schuller
Strength Modelling for Real-World Automatic Continuous Affect Recognition from Audiovisual Signals
Introduction
Automatic affect recognition plays an essential role in smart conversational agent systems that aim to enable natural, intuitive, and friendly human-machine interaction. Early works in this field have focused on the recognition of prototypic expressions in terms of basic emotional states, and on the data collected in laboratory settings, where speakers either act or are induced with predefined emotional categories and content [START_REF] Gunes | Automatic temporal segment detection and affect recognition from face and body display[END_REF][START_REF] Schuller | Speaker independent speech emotion recognition by ensemble classification[END_REF][START_REF] Schuller | Hidden markov model-based speech emotion recognition[END_REF][START_REF] Zeng | A survey of affect recognition methods: Audio, visual, and spontaneous expressions[END_REF]. Recently, an increasing amount of research efforts have converged into dimensional approaches for rating naturalistic affective behaviours by continuous dimensions (e. g., arousal and valence) along the time continuum from audio, video, and music signals [START_REF] Gunes | Automatic, dimensional and continuous emotion recognition[END_REF][START_REF] Gunes | Categorical and dimensional affect analysis in continuous input: Current trends and future directions[END_REF][START_REF] Petridis | Prediction-based audiovisual fusion for classification of non-linguistic vocalisations[END_REF][START_REF] Weninger | Discriminatively trained recurrent neural networks for continuous dimensional emotion recognition from audio[END_REF][START_REF] Yang | A regression approach to music emotion recognition[END_REF][START_REF] Kumar | Affective feature design and predicting continuous affective dimensions from music[END_REF][START_REF] Soleymani | Emotional analysis of music: A comparison of methods[END_REF][START_REF] Soleymani | Analysis of EEG signals and facial expressions for continuous emotion detection[END_REF]. This trend is partially due to the benefits of being able to encode small difference in affect over time and distinguish the subtle and complex spontaneous affective states. Furthermore, the affective computing community is moving toward combining multiple modalities (e. g., audio and video) for the analysis and recognition of human emotion [START_REF] Mariooryad | Correcting time-continuous emotional labels by modeling the reaction lag of evaluators[END_REF][START_REF] Pantic | Toward an affect-sensitive multimodal human-computer interaction[END_REF][START_REF] Soleymani | Continuous emotion detection using EEG signals and facial expressions[END_REF][START_REF] Wöllmer | LSTM-Modeling of continuous emotions in an audiovisual affect recognition framework[END_REF][START_REF] Zhang | Enhanced semi-supervised learning for multimodal emotion recognition[END_REF], owing to (i) the easy access to various sensors like camera and microphone, and (ii) the complementary information that can be given from different modalities.
In this regard, this paper focuses on the realistic time-and value-continuous affect (emotion) recognition from audiovisual signals in the arousal and valence dimensional space. To handle this regression task, a variety of models have been investigated. For instance, Support Vector Machine for Regression (SVR) is arguably the most frequently employed approach owing to its mature theoretical foundation. Further, SVR is regarded as a baseline regression approach for many continuous affective computing tasks [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF]. More recently, memory-enhanced Recurrent Neural Networks (RNNs), namely Long Short-Term Memory RNNs (LSTM-RNNs) [START_REF] Hochreiter | Long short-term memory[END_REF], have started to receive greater attention in the sequential pattern recognition community [START_REF] Graves | Framewise phoneme classification with bidirectional LSTM and other neural network architectures[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF][START_REF] Zhang | Channel mapping using bidirectional long short-term memory for dereverberation in hand-free voice controlled devices[END_REF][START_REF] Zhang | Facing realism in spontaneous emotion recognition from speech: Feature enhancement by autoencoder with LSTM neural networks[END_REF]. A particular advantage offered by LSTM-RNNs is a powerful capability to learn longer-term contextual information through the implementation of three memory gates in the hidden neurons. Wöllmer et al. [START_REF] Wöllmer | Abandoning emotion classes-towards continuous emotion recognition with modelling of long-range dependencies[END_REF] was amongst the first to apply LSTM-RNN on acoustic features for continuous affect recognition. This technique has also been successfully employed for other modalities (e. g., video, and physiological signals) [START_REF] Chao | Long short term memory recurrent neural network based multimodal dimensional emotion recognition[END_REF][START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF].
Numerous studies have been performed to compare the advantages offered by a wide range of modelling techniques, including the aforementioned, for continuous affect recognition [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF][START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Tian | Emotion recognition in spontaneous and acted dialogues[END_REF]. However, no clear conclusion can be drawn as to the superiority of any one of them. For instance, the work in [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF] compared the performance of SVR and Bidirectional LSTM-RNNs (BLSTM-RNNs) on the Sensitive Artificial Listener database [START_REF] Mckeown | The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent[END_REF], and the results indicate that the latter performed better on a reduced set of 15 acoustic Low-Level Descriptors (LLDs). However, the opposite conclusion was drawn in [START_REF] Tian | Emotion recognition in spontaneous and acted dialogues[END_REF], where SVR was shown to be superior to LSTM-RNNs on the same database with functionals computed over a large ensemble of LLDs. Other results in the literature confirm this inconsistency between SVR and diverse neural networks like (B)LSTM-RNNs and Feed-forward Neural Networks (FNNs) [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]. A possible rationale behind this is the fact that each prediction model has its advantages and disadvantages. For example, SVRs cannot explicitly model contextual dependencies, whereas LSTM-RNNs are highly sensitive to overfitting.
The majority of previous studies have tended to explore the advantages (strength) of these models independently or in conventional early or late fusion strategies. However, recent results indicate that there may be significant benefits in fusing two, or more, models in a hierarchical or ordered manner [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF][START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF]. Motivated by these initial promising results, we propose a Strength Modelling approach, in which the strength of one model, as represented by its predictions, is concatenated with the original feature space, which is then used as the basis for regression analysis in a subsequent model.
The major contributions of this study include: (1) proposing the novel machine learning framework of Strength Modelling specifically designed to take advantage of the benefits offered by various regression models namely SVR and LSTM-RNNs; (2) investigating the effectiveness of Strength Modelling for value-and time-continuous emotion regression on two spontaneous multimodal affective databases (RECOLA and SEMAINE); and (3) comprehensively analysing the robustness of Strength Modelling by integrating the proposed framework into frequently used multimodal fusion techniques namely early and late fusion.
The remainder of the present article is organised as follows: Section 2 first discusses related works; Section 3 then presents Strength Modelling in details and briefly reviews both the SVR and memory-enhanced RNNs; Section 4 describes the selected spontaneous affective multimodal databases and corresponding audio and video feature sets; Section 5 offers an extensive set of experiments conducted to exemplify the effectiveness and the robustness of our proposed approach; finally, Section 6 concludes this work and discusses potential avenues for future work.
Related Work
In the literature on multimodal affect recognition, a number of fusion approaches have been proposed and studied [START_REF] Wu | Survey on audiovisual emotion recognition: databases, features, and data fusion strategies[END_REF], with the majority of them falling into early (aka feature-level) or late (aka decision-level) fusion. Early fusion is implemented by concatenating all the features from multiple modalities into one combined feature vector, which is then used as the input for a machine learning technique. The benefit of early fusion is that it allows a classifier to take advantage of the complementarity that exists between, for example, the audio and video feature spaces. The empirical experiments offered in [START_REF] Chao | Long short term memory recurrent neural network based multimodal dimensional emotion recognition[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF] have shown that the early fusion strategy can deliver better results than strategies without feature fusion.
Late fusion involves combining the predictions obtained from individual learners (models) to come up with a final prediction. It normally consists of two steps: 1) generating different learners; and 2) combining the predictions of the multiple learners. To generate different learners, there are two primary ways, based either on different modalities or on different models. Modality-based approaches combine the outputs of learners trained on different modalities. Examples of this type of learner generation in the literature include [START_REF] He | Multimodal affective dimension prediction using deep bidirectional long short-term memory recurrent neural networks[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF][START_REF] Wei | Multimodal continuous affect recognition based on LSTM and multiple kernel learning[END_REF], where multiple SVRs or LSTM-RNNs are trained separately for different modalities (e. g., audio, video, etc.). Model-based approaches, on the other hand, aim to exploit information gained from multiple learners trained on a single modality. For example, in [START_REF] Qiu | Ensemble deep learning for regression and time series forecasting[END_REF], predictions were obtained from 20 Deep Belief Networks (DBNs) with different topology structures and then combined. However, due to the similar characteristics of the different DBNs, the predictions provide little variation that could be mutually complementary and improve the system performance. To combine the predictions of multiple learners, a straightforward way is to apply a simple or weighted averaging (or voting) approach, such as Simple Linear Regression (SLR) [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF]. Another common approach is to perform stacking [START_REF] Wolpert | Stacked generalization[END_REF]: all the predictions from the different learners are stacked and used as inputs of a subsequent non-linear model (e. g., SVR, LSTM-RNN) trained to make the final decision [START_REF] Qiu | Ensemble deep learning for regression and time series forecasting[END_REF][START_REF] He | Multimodal affective dimension prediction using deep bidirectional long short-term memory recurrent neural networks[END_REF][START_REF] Wei | Multimodal continuous affect recognition based on LSTM and multiple kernel learning[END_REF].
Different from these fusion strategies, our proposed Strength Modelling paradigm operates on a single feature space. Using an initial model, it gains a set of predictions which are then fused with the original feature set for use as a new feature space in a subsequent model. This offers the framework a vitally important advantage, as single-modality settings are often encountered in affect recognition tasks, for example, when either face or voice samples are missing in a particular recording.
Indeed, Strength Modelling can be viewed as an intermediate fusion technology, lying between the early and late fusion stages. Strength Modelling can therefore not only work independently of, but also be simply integrated into, early and late fusion approaches. To the best of our knowledge, intermediate fusion techniques are not widely used in the machine learning community. Hermansky et al. [START_REF] Hermansky | Tandem connectionist feature extraction for conventional HMM systems[END_REF] introduced a tandem structure that combines the output of a discriminatively trained neural network with dynamic classifiers such as Hidden Markov Models (HMMs), and applied it efficiently to speech recognition. This structure was further extended into a BLSTM-HMM [START_REF] Wöllmer | Bidirectional LSTM networks for context-sensitive keyword detection in a cognitive virtual agent framework[END_REF][START_REF] Wöllmer | Robust in-car spelling recognition-a tandem BLSTM-HMM approach[END_REF]. In this approach the BLSTM network provides a discrete phoneme prediction feature, together with continuous Mel-Frequency Cepstral Coefficients (MFCCs), for the HMMs that recognise speech.
For multimodal affect recognition, a related approach, Parallel Interacting Multiview Learning (PIML), was proposed in [START_REF] Kursun | Parallel interacting multiview learning: An application to prediction of protein sub-nuclear location[END_REF] for the prediction of protein sub-nuclear locations. The approach exploits different modalities that are mutually learned in a parallel and hierarchical way to make a final decision. Reported results show that this approach is more suitable than early fusion (merging all features). Compared to our approach, which aims to exploit the advantages of different models within the same modality, the focus of PIML is rather on exploiting the benefit of different modalities. Further, similar to early fusion approaches, PIML operates under the assumption that multiple modalities are concurrently available.
Strength Modelling is similar to the Output Associative Relevance Vector Machine (OA-RVM) regression framework originally proposed in [START_REF] Nicolaou | Output-associative rvm regression for dimensional and continuous emotion prediction[END_REF]. The OA-RVM framework attempts to incorporate the contextual relationships that exist within and between different affective dimensions and various multimodal feature spaces, by training a secondary RVM with an initial set of multi-dimensional output predictions (learnt using any prediction scheme) concatenated with the original input feature spaces. Additionally, the OA-RVM framework attempts to capture temporal dynamics by employing a sliding-window framework that incorporates both past and future initial outputs into the new feature space. Results presented in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF] indicate that the OA-RVM framework is better suited to affect recognition problems than both conventional early and late fusion. Recently the OA-RVM model was extended in [START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF] to be multivariate, i. e., predicting multiple continuous output variables simultaneously.
Similar to Strength Modelling, OA-RVM systems take both input features and output predictions into consideration to train a subsequent regression model that performs the final affective predictions. However, the strength of the OA-RVM framework appears tied to the RVM itself: results in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF] indicate that the framework is not as successful when using either an SVR or an SLR as the secondary model. Further, the OA-RVM is non-causal and requires careful tuning to find suitable window lengths in which to combine the initial outputs; this can take considerable time and effort. The proposed Strength Modelling framework, however, is designed to work with any combination of learning paradigms. Furthermore, Strength Modelling is causal; it combines input features and predictions on a frame-by-frame basis. This is a strong advantage over the OA-RVM in terms of employment in real-time scenarios (beyond the scope of this paper).
Strength Modelling
Strength Modelling
The proposed Strength Modelling framework for affect prediction is depicted in Fig. 1. As can be seen, the first regression model (Model 1) generates the original estimate ŷt based on the feature vector x t. Then, ŷt is concatenated with x t, frame by frame, as the input of the second model (Model 2) to learn the expected prediction y t. For any suitable combination of individual models, Model 1 and Model 2 are trained sequentially; in other words, Model 2 takes the predictive ability of Model 1 into account during training. The procedure is given as follows:
-First, Model 1 is trained with x t to obtain the prediction ŷt .
-Then, Model 2 is trained with [x t , ŷt ] to learn the expected prediction y t .
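The two-step procedure above can be illustrated with a minimal Python sketch. It is only an assumption-laden stand-in for the actual pipeline of the paper: scikit-learn's LinearSVR replaces the LIBLINEAR SVR, MLPRegressor is a non-recurrent placeholder for the BLSTM-RNN (Model 2), and the hyperparameter values shown are illustrative only.

```python
# Minimal sketch of the two-step Strength Modelling procedure.
# Assumptions: scikit-learn stands in for the LIBLINEAR SVR and the
# BLSTM-RNN used in the paper; X_train / X_dev are frame-wise feature
# matrices and y_train the gold-standard arousal or valence trace.
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.neural_network import MLPRegressor

def strength_modelling_fit_predict(X_train, y_train, X_dev):
    # Step 1: train Model 1 and obtain its predictions (its "strength").
    model_1 = LinearSVR(C=0.001, epsilon=0.1)      # example C value
    model_1.fit(X_train, y_train)
    y_hat_train = model_1.predict(X_train)
    y_hat_dev = model_1.predict(X_dev)

    # Step 2: concatenate [x_t, y_hat_t] frame by frame and train Model 2.
    X_train_aug = np.column_stack([X_train, y_hat_train])
    X_dev_aug = np.column_stack([X_dev, y_hat_dev])
    model_2 = MLPRegressor(hidden_layer_sizes=(80, 80), max_iter=150)
    model_2.fit(X_train_aug, y_train)
    return model_2.predict(X_dev_aug)
```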
Whilst the framework should work with any arbitrary modelling technique, for our initial investigations we have selected two techniques commonly used in the context of affect recognition, namely the SVR and the BLSTM-RNN, which are briefly reviewed in the subsequent subsection.
Regression Models
SVR extends the Support Vector Machine (SVM) to regression problems. It was first introduced in [START_REF] Drucker | Support vector regression machines[END_REF] and is one of the most dominant methods in machine learning, particularly in emotion recognition [START_REF] Chang | Physiological emotion analysis using support vector regression[END_REF][START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF]. When applying SVR to a regression task, the target is to optimise the generalisation bounds for regression in a high-dimensional feature space by using an ε-insensitive loss function, which measures the cost of the prediction errors. A predefined hyperparameter C balances the emphasis on the training errors against the generalisation performance.
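For reference, the ε-insensitive loss mentioned above can be written in its standard textbook form (not quoted from a specific source) as

L_ε(y, f(x)) = max(0, |y − f(x)| − ε),

so that deviations smaller than ε incur no penalty, and the linear SVR solves

min_{w,b} (1/2)‖w‖² + C Σ_t L_ε(y_t, w·x_t + b),

where the hyperparameter C trades off the training errors against model flatness, i. e., generalisation.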
Normally, the high-dimensional feature space is mapped from the initial feature space with a non-linear kernel function. In our study, however, we use a linear kernel, as the features in our case (cf. Section 4.2) already perform well for affect prediction in the original feature space, similar to [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF].
One of the most important advantages of SVR is its convex optimisation problem, which guarantees that the global optimum can be obtained. Moreover, SVR is learned by minimising an upper bound on the expected risk, as opposed to neural networks, which are trained by minimising the error on the training data; this gives SVR a superior ability to generalise [START_REF] Gunn | Support vector machines for classification and regression[END_REF]. For a more in-depth explanation of the SVR paradigm the reader is referred to [START_REF] Drucker | Support vector regression machines[END_REF].
The other model utilised in our study is BLSTM-RNN which has been successfully applied to continuous emotion prediction [START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF] as well as for other regression tasks, such as speech dereverberation [START_REF] Zhang | Channel mapping using bidirectional long short-term memory for dereverberation in hand-free voice controlled devices[END_REF] and non-linguistic vocalisations classification [START_REF] Petridis | Prediction-based audiovisual fusion for classification of non-linguistic vocalisations[END_REF]. In general, it is composed of one input layer, one or multiple hidden layers, and one output layer [START_REF] Hochreiter | Long short-term memory[END_REF]. The bidirectional hidden layers separately process the input sequences in a forward and a backward order and connect to the same output layer which fuses them.
Compared with traditional RNNs, it introduces recurrently connected memory blocks that replace the neurons in the hidden layers. Each block consists of a self-connected memory cell and three gate units, namely the input, output, and forget gates. These three gates allow the network to learn when to write, read, or reset the value in the memory cell. Such a structure enables a BLSTM-RNN to learn past and future context over both short and long ranges. For a more in-depth explanation of BLSTM-RNNs the reader is referred to [START_REF] Hochreiter | Long short-term memory[END_REF].
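A minimal model definition of such a bidirectional recurrent regressor is sketched below. It is an assumption-based illustration only: PyTorch replaces the CURRENNT toolkit used later in the paper, and the layer sizes are example values rather than the configurations actually optimised in the experiments.

```python
# Minimal sketch of a BLSTM-RNN regressor for frame-wise affect prediction.
import torch
import torch.nn as nn

class BLSTMRegressor(nn.Module):
    def __init__(self, num_features, hidden_size=80):
        super().__init__()
        # Two stacked bidirectional LSTM layers with `hidden_size`
        # memory blocks per direction (example configuration).
        self.blstm = nn.LSTM(input_size=num_features,
                             hidden_size=hidden_size,
                             num_layers=2,
                             batch_first=True,
                             bidirectional=True)
        # Forward and backward hidden states are concatenated,
        # hence 2 * hidden_size inputs to the linear output layer.
        self.out = nn.Linear(2 * hidden_size, 1)

    def forward(self, x):                 # x: (batch, time, num_features)
        h, _ = self.blstm(x)              # h: (batch, time, 2 * hidden_size)
        return self.out(h).squeeze(-1)    # one regression output per frame
```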
It is worth noting that these paradigms bring distinct sets of advantages and disadvantages to the framework:
• The SVR model is more likely to achieve the global optimal solution, but it is not context-sensitive [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF]; • The BLSTM-RNN model is easily trapped in a local minimum which can be hardly avoided and has a risk of overfitting [START_REF] Graves | Framewise phoneme classification with bidirectional LSTM and other neural network architectures[END_REF], while it is good at capturing the correlation between the past and the future information [START_REF] Nicolaou | Continuous prediction of spontaneous affect from multiple cues and modalities in valencearousal space[END_REF].
In this paper, Model 1 and Model 2 in Fig. 1 can each be either an SVR model or a BLSTM-RNN model, resulting in four possible permutations, i. e., SVR-SVR (S-S), SVR-BLSTM (S-B), BLSTM-SVR (B-S), and BLSTM-BLSTM (B-B). It is worth noting that the B-B structure can be regarded as a variation of a deep neural network structure. The S-S structure, however, is not considered, because SVR training amounts to solving for a large-margin separator; it is therefore unlikely that concatenating a set of SVR predictions with its feature space brings any advantage for subsequent SVR-based regression analysis.
Strength Modelling with Early and Late Fusion Strategies
As previously discussed (Sec. 2), the Strength Modelling framework can be applied in both early and late fusion strategies. Traditional early fusion combines multiple feature spaces into one single set. When integrating Strength Modelling with early fusion, the initial predictions gained from models trained on the different feature sets are also concatenated to form a new feature vector. The new feature vector is then used as the basis for the final regression analysis via a subsequent model (Fig. 2).
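The construction of this early-fusion strength feature space can be sketched as follows. This is a hedged illustration of the scheme in Fig. 2, not the exact implementation: model_1a and model_1v are assumed to be already-trained unimodal regressors, and the audio and video feature matrices are assumed to be frame-aligned.

```python
# Sketch of the early-fusion variant of Strength Modelling (cf. Fig. 2).
import numpy as np

def early_fusion_strength_features(model_1a, model_1v, X_a, X_v):
    y_hat_a = model_1a.predict(X_a)      # strength of the audio model
    y_hat_v = model_1v.predict(X_v)      # strength of the video model
    # Concatenate both feature sets and both initial predictions to form
    # the input of the final model (Model 2 in Fig. 2).
    return np.column_stack([X_a, X_v, y_hat_a, y_hat_v])
```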
Strength Modelling can also be integrated with late fusion using three different approaches, i. e., (i) modality-based, (ii) model-based, and (iii) modality- and model-based (Fig. 3). Modality-based fusion combines the decisions from multiple independent modalities (i. e., audio and video in our case) obtained with the same regression model; the model-based approach fuses the decisions from multiple different models (i. e., SVR and BLSTM-RNN in our case) within the same modality; and the modality- and model-based approach is the combination of the above two, regardless of which modality or model is employed. For all three techniques the fusion weights are learnt using a linear regression model:
y_l = b + Σ_{i=1}^{N} γ_i · y_i ,    (1)
where y_i denotes the original prediction of model i out of the N available ones; b and γ_i are the bias and weights estimated on the development partition; and y_l is the final prediction.
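A minimal sketch of this weight estimation is given below. It assumes (beyond what the text states) that the per-model predictions are stacked into arrays of shape (frames, N) and that scikit-learn's ordinary least-squares LinearRegression is an acceptable stand-in for the linear fusion model.

```python
# Sketch of the late-fusion rule of Eq. (1): bias and weights are learnt
# on the development partition over the predictions of N individual
# (or strength) models, then applied to the test predictions.
from sklearn.linear_model import LinearRegression

def late_fusion(preds_dev, gold_dev, preds_test):
    lr = LinearRegression()              # learns the bias b and weights gamma_i
    lr.fit(preds_dev, gold_dev)
    return lr.predict(preds_test)        # fused prediction y_l for the test set
```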
Selected Databases and Features
For the transparency of experiments, we utilised the widely used multimodal continuously labelled affective databases -RECOLA [START_REF] Ringeval | Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions[END_REF] and SEMAINE [START_REF] Mckeown | The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent[END_REF], which have been adopted as standard databases for the AudioVisual Emotion Challenges (AVEC) in 2015/2016 [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] and in 2012 [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF], respectively. Both databases were designed to study socio-affective behaviours from multimodal data. To annotate the corpus, value-and time-continuous dimensional affect ratings in terms of arousal and valence were performed by six French-speaking raters (three males and three females) for the first five minutes of all recording sequences. The obtained labels were then resampled at a constant frame rate of 40 ms, and averaged over all raters by considering interevaluator agreement, to provide a 'gold standard' [START_REF] Ringeval | Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions[END_REF].
SEMAINE
The SEMAINE database was recorded in conversations between humans and artificially intelligent agents. In the recording scenario, a user was asked to talk with four emotionally stereotyped characters, which are even-tempered and sensible, happy and out-going, angry and confrontational, and sad and depressive, respectively.
For our experiments, the 24 recordings of the Solid-Sensitive Artificial Listener (Solid-SAL) part of the database were used, in which the characters were role-played. Each recording contains approximately four character conversation sessions. This Solid-SAL part was then equally split into three partitions: a training, development, and test partition, resulting in 8 recordings and 32 sessions per partition except for the training partition that contains 31 sessions. For more information on this database, the readers are referred to [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF].
All sessions were annotated in continuous time and continuous value in terms of arousal and valence by two to eight raters, with the majority annotated by six raters. Different from RECOLA, the simple mean over the obtained labels was then taken to provide a single label as 'gold standard' for each dimension.
Audiovisual Feature Sets
For the acoustic features, we used the openSMILE toolkit [START_REF] Eyben | openSMILE -the Munich versatile and fast open-source audio feature extractor[END_REF] to generate 13 LLDs, i. e., 1 log energy and 12 MFCCs, with a frame window size of 25 ms at a step size of 10 ms. Rather than the official acoustic features, MFCCs were chosen as the LLDs since preliminary testing (results not given) indicated that they were more effective in association with both RECOLA [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] and SEMAINE [START_REF] Schuller | AVEC 2012: the continuous audio/visual emotion challenge[END_REF]. The arithmetic mean and the coefficient of variance were then computed over the sequential LLDs with a window size of 8 s at a step size of 40 ms, resulting in 26 raw features for each functional window. Note that, for SEMAINE the window step size was set to 400 ms in order to reduce the computational workload in the machine learning process. Thus, the total numbers of the extracted segments of the training, development, and test partitions were 67.5 k, 67.5 k, 67.5 k for RECOLA, and were, respectively, 24.4 k, 21.8 k, and 19.4 k for SEMAINE.
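The window-level functional computation described above can be sketched as follows. The window arithmetic (an 8 s window over 10 ms LLD frames spans 800 frames, and a 40 ms hop spans 4 frames) and the exact definition of the coefficient of variation (standard deviation normalised by the mean) are assumptions of this sketch rather than statements from the original toolchain.

```python
# Sketch of the window-level audio features: arithmetic mean and
# coefficient of variation of the 13 LLDs over an 8 s window, 40 ms hop.
import numpy as np

def functionals(llds, win=800, hop=4):
    feats = []
    for start in range(0, len(llds) - win + 1, hop):
        seg = llds[start:start + win]
        mean = seg.mean(axis=0)
        # Coefficient of variation; the small constant guards against
        # division by zero for near-zero means.
        cov = seg.std(axis=0) / (np.abs(mean) + 1e-8)
        feats.append(np.concatenate([mean, cov]))   # 13 + 13 = 26 features
    return np.array(feats)
```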
For the visual features, we retained the official features for both RECOLA and SEMAINE. As to RECOLA, 49 facial landmarks were tracked firstly, as illustrated in Fig. 4. The detected face regions included left and right eyebrows (five points respectively), the nose (nine points), the left and right eyes (six points respectively), the outer mouth (12 points), and the inner mouth (six points). Then, the landmarks were aligned with a mean shape from stable points (located on the eye corners and on the nose region).
As features for each frame, 316 features were extracted: 196 features obtained by computing the differences between the coordinates of the aligned landmarks and those of the mean shape, and between the aligned landmark locations in the previous and the current frame; 71 features obtained by calculating the Euclidean distances (L2-norm) and the angles (in radians) between the points in three different groups; and another 49 features obtained by computing the Euclidean distance between the median of the stable landmarks and each aligned landmark in a video frame. For more details on the feature extraction process the reader is referred to [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF].
Again, the functionals (arithmetic mean and coefficient of variance) were computed over the sequential 316 features within a fixed-length window (8 s) shifted forward at a rate of 40 ms. As a result, 632 raw features were included in the geometric set for each functional window. Feature reduction was then conducted by applying a Principal Component Analysis (PCA), retaining 95% of the variance in the original data; the final dimensionality of the reduced video feature set is 49. It should be noted that a facial activity detector was used in conjunction with the video feature extraction: video features were not extracted for frames where no face was detected, resulting in somewhat fewer video segments than audio segments.
As to SEMAINE, 5 908 frame-level features were provided as the video baseline features. In this feature set, eight features describe the position and pose of the face and eyes, and the rest are dense local appearance descriptors. For the appearance descriptors, uniform Local Binary Patterns (LBP) were used. Specifically, the registered face region was divided into 10 × 10 blocks, and the LBP operator was applied to each block (59 features per block), followed by concatenating the features of all blocks, resulting in another 5 900 features.
Further, to generate window-level features, in this paper we used a max-pooling based method. Specifically, the maximum of each feature was calculated over a window of 8 s at a step size of 400 ms, to be consistent with the audio features. We applied PCA for feature reduction on these window-level representations and generated 112 features, retaining 95% of the variance in the original data. To keep in line with RECOLA, we selected the first 49 principal components as the final video features.
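This max-pooling and PCA step can be sketched as below. The use of scikit-learn's PCA, the framing of the window and hop sizes in frames, and fitting the PCA on the training windows only are assumptions of the sketch.

```python
# Sketch of the SEMAINE window-level video features: per-window
# max-pooling of the frame-level descriptors, then a PCA retaining 95%
# of the variance, of which the first 49 components are kept.
import numpy as np
from sklearn.decomposition import PCA

def window_max_pool(frame_feats, win, hop):
    return np.array([frame_feats[s:s + win].max(axis=0)
                     for s in range(0, len(frame_feats) - win + 1, hop)])

def reduce_video_features(windows_train, windows_eval, n_keep=49):
    pca = PCA(n_components=0.95)          # retain 95% of the variance
    pca.fit(windows_train)
    return (pca.transform(windows_train)[:, :n_keep],
            pca.transform(windows_eval)[:, :n_keep])
```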
Experiments and Results
This section empirically evaluates the proposed Strength Modelling by large-scale experiments. We first perform Strength Modelling for the continuous affect recognition in the unimodal settings (cf. Sec. 5.2), i. e., audio or video. We then incorporate it with the early (cf. Sec. 5.3) and late (cf. Sec. 5.4) fusion strategies so as to investigate its robustness in the bimodal settings.
Experimental Set-ups and Evaluation Metrics
Before the learning process, mean and variance standardisation was applied to the features of all partitions. Specifically, the global means and variances were calculated from the training set and then applied to the development and test sets for online standardisation. To demonstrate the effectiveness of strength learning, we first carried out baseline experiments, where SVR or BLSTM-RNN models were individually trained on the audio modality, the video modality, or their combination. Specifically, the SVR was implemented with the LIBLINEAR toolkit [START_REF] Fan | LIBLINEAR: A library for large linear classification[END_REF] with a linear kernel, and trained with the L2-regularised L2-loss dual solver. The tolerance value ε was set to 0.1, and the complexity (C) of the SVR was optimised for each modality and task according to the best performance on the development set over the grid [.00001, .00002, .00005, .0001, . . . , .2, .5, 1].
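A hedged sketch of this set-up follows. The intermediate values of the C grid (the text only gives its endpoints and an ellipsis) are filled in with a 1-2-5 pattern purely as an assumption, scikit-learn's LinearSVR stands in for the LIBLINEAR solver, and ccc() is a CCC scoring function such as the one sketched later in this section.

```python
# Sketch: train-set standardisation plus development-set selection of C.
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVR

def train_svr(X_train, y_train, X_dev, y_dev, ccc):
    scaler = StandardScaler().fit(X_train)   # statistics from training only
    X_train, X_dev = scaler.transform(X_train), scaler.transform(X_dev)
    # Assumed grid: the intermediate values are an illustrative guess.
    grid = [.00001, .00002, .00005, .0001, .0002, .0005,
            .001, .002, .005, .01, .02, .05, .1, .2, .5, 1]
    best = None
    for C in grid:
        svr = LinearSVR(C=C, epsilon=0.1).fit(X_train, y_train)
        score = ccc(y_dev, svr.predict(X_dev))
        if best is None or score > best[0]:
            best = (score, svr)
    return best[1], scaler
```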
For the BLSTM-RNNs, two bidirectional LSTM hidden layers were chosen, with each layer consisting of the same number of memory blocks (nodes). The number was likewise optimised on the development set for each modality and task, among a grid of candidate values including 60, 80, 100, and 120. During network training, gradient descent was implemented with a learning rate of 10^-5 and a momentum of 0.9. Zero-mean Gaussian noise with a standard deviation of 0.2 was added to the input activations during training so as to improve generalisation. All weights were randomly initialised in the range from -0.1 to 0.1. Finally, an early stopping strategy was used: training was terminated when no improvement of the mean square error on the validation set had been observed for 20 epochs, or when the predefined maximum number of training epochs (150 in our case) had been reached. Furthermore, to accelerate the training process, we updated the network weights after every mini-batch of 8 sequences, processed in parallel. The training procedure was performed with our CURRENNT toolkit [START_REF] Weninger | Introducing CUR-RENNT: The munich open-source cuda recurrent neural network toolkit[END_REF].
Herein we adopted the following naming conventions: the models trained with the baseline approaches are referred to as individual models, whereas the ones associated with the proposed approach are denoted as strength models. For the sake of a fair performance comparison, the optimised parameters of the individual models (i. e., SVR or BLSTM-RNN) were used in the corresponding strength models (i. e., S-B, B-S, or B-B models).
Annotation delay compensation was also performed to compensate for the temporal delay between the observable cues, as shown by the participants, and the corresponding emotion reported by the annotators [START_REF] Mariooryad | Correcting time-continuous emotional labels by modeling the reaction lag of evaluators[END_REF]. Similar to [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF], this delay was estimated in the preliminary experiments using SVR and by maximising the performance on the development partition, while shifting the gold standard annotations back in time. As in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF] we identified this delay to be four seconds which was duly compensated, by shifting the gold standard back in time with respect to the features, in all experiments presented. Note that all fusion experiments require concurrent initial predictions from audio and visual modalities. However, as discussed in (Sec. 4.2), visual prediction cannot occur where a face has not been detected. For all fusion experiments where this occurred we replicated the initial corresponding audio prediction to fill the missing video slot.
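The delay compensation can be sketched as below. The conversion of the 4 s delay into 100 frames assumes the 40 ms window-level hop used for RECOLA (the SEMAINE hop of 400 ms would correspond to 10 frames); the direction of the shift follows the description above, i. e., the gold standard is moved back in time with respect to the features.

```python
# Sketch of annotation delay compensation: pair feature frame t with the
# label that was originally given `delay_frames` later.
def compensate_delay(features, gold, delay_frames=100):
    # Drop the first `delay_frames` labels and the last `delay_frames`
    # feature frames so both sequences stay the same length.
    return features[:-delay_frames], gold[delay_frames:]
```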
Unless otherwise stated we report the accuracy of our systems in terms of the Concordance Correlation Coefficient (CCC) [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF] metric:
ρ_c = 2ρσ_xσ_y / (σ_x² + σ_y² + (µ_x − µ_y)²) ,    (2)
where ρ is the Pearson's Correlation Coefficient (PCC) between two time series (e. g., prediction and gold standard); µ_x and µ_y are the means of each time series; and σ_x² and σ_y² are the corresponding variances. In contrast to the PCC, CCC takes not only the linear correlation, but also the bias and variance between the two compared series into account. As a consequence, whereas PCC is insensitive to bias and scaling issues, CCC reflects those two variations. The value of CCC lies in the range [-1, 1], where +1 represents total concordance, -1 total discordance, and 0 no concordance at all. One may further note that it has also been successfully used as an objective function to train discriminative neural networks [START_REF] Weninger | Discriminatively trained recurrent neural networks for continuous dimensional emotion recognition from audio[END_REF], and that it has served as the official scoring metric in the last two editions of the AVEC. Fig. 5 intuitively compares PCC and CCC: the PCC of the two series (black and blue) is 1.000, while the CCC is only 0.467, as the latter takes the difference in mean and variance of the two series into account. For continuous emotion recognition, one is often interested not only in the variation trend but also in the absolute value/degree of the emotional state. Therefore, CCC is better suited to continuous emotion recognition than PCC.
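The CCC of Eq. (2) is straightforward to implement; a minimal NumPy version is sketched below (the population, i. e. biased, variance and covariance estimators are an implementation assumption).

```python
# Sketch of the Concordance Correlation Coefficient of Eq. (2).
import numpy as np

def ccc(gold, pred):
    gold, pred = np.asarray(gold, float), np.asarray(pred, float)
    mu_x, mu_y = gold.mean(), pred.mean()
    var_x, var_y = gold.var(), pred.var()
    cov = np.mean((gold - mu_x) * (pred - mu_y))   # equals rho * sigma_x * sigma_y
    return 2 * cov / (var_x + var_y + (mu_x - mu_y) ** 2)
```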
In addition to CCC, results are also given in all tables in terms of the Root Mean Square Error (RMSE), a popular metric for regression tasks. To further assess the significance level of a performance improvement, a statistical evaluation was carried out over the whole predictions, between the proposed and the baseline approaches, by means of Fisher's r-to-z transformation [START_REF] Cohen | Applied multiple regression/correlation analysis for the behavioral sciences[END_REF].
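For illustration, a common way to compare two correlation coefficients with Fisher's r-to-z transformation is sketched below. The assumption of (effectively) independent samples of sizes n1 and n2 and the two-tailed test are choices of this sketch, not details taken from the paper.

```python
# Sketch of a Fisher r-to-z comparison of two correlation coefficients.
import numpy as np
from scipy.stats import norm

def fisher_r_to_z_test(r1, n1, r2, n2):
    z1, z2 = np.arctanh(r1), np.arctanh(r2)           # Fisher transformation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))     # standard error of z1 - z2
    z = (z1 - z2) / se
    return 2 * norm.sf(abs(z))                        # two-tailed p value
```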
Affect Recognition with Strength Modelling
Table 1 displays the results (RMSE and CCC) obtained from the strength models and the individual models of SVR and BLSTM-RNN on the development and test partitions of RECOLA and SEMAINE databases from the audio. As can be seen, the three Strength Modelling set-ups either matched or outperformed their corresponding individual models in most cases. This observation implies that the advantages of each model (i. e., SVR and BLSTM-RNN) are enhanced via Strength Modelling. In particular the performance of the BLSTM model, for both arousal and valence, was significantly boosted by the inclusion of SVR predictions (S-B) on the development and test sets. We speculate this improvement could be due to the initial SVR predictions helping the subsequent RNN avoid local minima.
Similarly, the B-S combination brought an additional performance improvement for the SVR model (except for the valence case of SEMAINE), although not as pronounced as for the S-B model. Again, we speculate that the temporal information leveraged by the BLSTM-RNN is being exploited by the successive SVR model. The best results for both arousal and valence were achieved with the B-B framework for RECOLA, with relative gains of 6.5 % and 29.1 % for arousal and valence respectively on the test set when compared to the single BLSTM-RNN model (B). This indicates that there are potential benefits for audio-based affect recognition from the deep structure formed by combining two BLSTM-RNNs using the Strength Modelling framework. Additionally, one can observe that there is little performance improvement from applying Strength Modelling in the case of valence recognition on SEMAINE. This might be attributed to the poor performance of the baseline systems, whose predictions can be regarded as noise and are possibly unable to provide useful information to the other models.
The same set of experiments was also conducted on the video feature set (Table 2). As for valence, the highest CCC obtained on the test set is .477 using the S-B model for RECOLA and .158 using the B-B model for SEMAINE. As expected, we observe that the models (individual or strength) trained using only acoustic features are more effective for interpreting the arousal dimension than valence, whereas the opposite is observed for models trained only on the visual features. This finding is in agreement with similar results in the literature [START_REF] Gunes | Automatic, dimensional and continuous emotion recognition[END_REF][START_REF] Gunes | Categorical and dimensional affect analysis in continuous input: Current trends and future directions[END_REF][START_REF] Ringeval | Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data[END_REF].
Additionally, Strength Modelling achieved comparable or superior performance to other state-of-the-art methods applied on the RECOLA database. The OA-RVM model was used in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF][START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF], and the reported performance in terms of CCC, with audio features on the development set, was .689 for arousal [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], and .510 for valence using video features [START_REF] Manandhar | Multivariate output-associative RVM for multi-dimensional affect predictions[END_REF]. We achieved .755 with audio features for arousal, and .592 with video features for valence with the proposed Strength Modelling framework, showing the interest of our method.
To further highlight advantages of Strength Modelling, Fig. 6 illustrates the automatic predictions of arousal via audio signals (a) and valence via video signals (b) obtained with the best settings of the strength models and the individual models frame by frame for a single test subject from RECOLA. Note that, similar plots were observed for the other subjects in the test set. In general, the predictions generated by the proposed Strength Modelling approach are closer to the gold standard, which consequently contributes to better results in terms of CCC.
Strength Modelling Integrated with Early Fusion
Table 3 shows the performance of both the individual and strength models integrated with the early fusion strategy. In most cases, for both the RECOLA and SEMAINE datasets, the performance of the individual SVR or BLSTM-RNN models was significantly improved with the fused feature vector, for both arousal and valence, in comparison to the performance of the corresponding individual models trained only on the unimodal feature sets (Sec. 5.2).
For the strength model systems, the early fusion B-S model generally outperformed the equivalent SVR model, and the S-B structure outperformed the equivalent BLSTM model. However, the gain obtained by Strength Modelling with the early-fused features is not as pronounced as that obtained with the individual models. This might be due to the higher dimensionality of the fused feature sets, which possibly reduces the relative weight of the prediction features.
Strength Modelling Integrated with Late Fusion
This section aims to explore the feasibility of integrating Strength Modelling into three different late fusion strategies: modality-based, model-based, and the combination of the two (see Sec. 3.3). A comparison of the performance of the different fusion approaches, with or without Strength Modelling, is presented in Table 4. For the systems without Strength Modelling on RECOLA, one can observe that the best individual model test set performances, .625 and .394 for arousal and valence respectively (Sec. 5.2), were boosted to .671 and .405 with the modality-based late fusion approach, and to .651 and .497 with the model-based late fusion approach. These results were further improved to .664 and .549 when combining the modality- and model-based late fusion approaches. This result is in line with other results in the literature [START_REF] Ringeval | AV+EC 2015: The first affect recognition challenge bridging across audio, video, and physiological data[END_REF][START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], and again confirms the importance of multimodal fusion for affect recognition. However, a similar observation can only be seen on the development set for SEMAINE, which might be due to the large mismatch between the development and test partitions. Interestingly, when incorporating Strength Modelling into late fusion, we observe significant improvements over the corresponding non-strength set-ups. This finding confirms the effectiveness and the robustness of the proposed method for multimodal continuous affect recognition. In particular, the best test results on RECOLA, .685 and .554, were obtained by the strength models integrated with the modality- and model-based late fusion approach. This arousal result matches the performance of the AVEC 2016 affect recognition sub-challenge baseline system, .682, which was obtained using a late fusion strategy involving eight feature sets [START_REF] Valstar | AVEC 2016 -depression, mood, and emotion recognition workshop and challenge[END_REF].
As for SEMAINE, although an obvious performance improvement can be seen on the development set, a similar observation cannot be made on the test set. This finding is possibly attributable to the mismatch between the development and test sets: since all parameters of the training models were optimised on the development set, they do not transfer well to the test set.
Further, for a comparison with the OA-RVM system, we applied the same fusion system as used in [START_REF] Huang | An investigation of annotation delay compensation and outputassociative fusion for multimodal continuous emotion prediction[END_REF], with only audio and video features. The results are shown in Tables 4 and 5 for the RECOLA and SEMAINE databases, respectively. It can be seen that, for both databases, the proposed methods outperform the OA-RVM technique, which further confirms the efficiency of the proposed Strength Modelling method.
In general, to provide an overview of the contributions of Strength Modelling to continuous emotion recognition, we averaged the relative performance improvement of Strength Modelling over RECOLA and SEMAINE for arousal and valence recognition. The corresponding results for the four cases (i. e., audio only, video only, early fusion, and late fusion) are displayed in Fig. 7. From the figure, one can observe an obvious performance improvement gained by Strength Modelling, except for the late fusion framework. This particular case is largely attributable to the aforementioned mismatch between the development and test sets of SEMAINE, as all parameters of the training models were optimised on the development set. Employing state-of-the-art regularisation techniques such as dropout for training the neural networks might help to tackle this problem in the future.
Conclusion and Future Work
This paper proposed and investigated a novel framework, Strength Modelling, for continuous audiovisual affect recognition. Strength Modelling concatenates the strength of an initial model, as represented by its predictions, with the original features to form a new feature set which is then used as the basis for regression analysis in a subsequent model.
To demonstrate the suitability of the framework, we jointly explored the benefits of two state-of-the-art regression models, i. e., Support Vector Regression (SVR) and Bidirectional Long Short-Term Memory Recurrent Neural Networks (BLSTM-RNN), in three different Strength Modelling structures (SVR-BLSTM, BLSTM-SVR, BLSTM-BLSTM). Further, these three structures were evaluated both in unimodal settings, using either audio or video signals, and in bimodal settings where early and late fusion strategies were integrated. Results gained on the widely used RECOLA and SEMAINE databases indicate that Strength Modelling can match or outperform the corresponding conventional individual models when performing affect recognition. An interesting observation was that, among our three different Strength Modelling set-ups, no single case significantly outperformed the others. This demonstrates the flexibility of the proposed framework in terms of being able to work in conjunction with different combinations of models. A further advantage of Strength Modelling is that it can be implemented as a plug-in for use in both early and late fusion stages. Results gained from an exhaustive set of fusion experiments confirmed this advantage. The best Strength Modelling test set results on the RECOLA dataset, .685 and .554 for arousal and valence respectively, were obtained using Strength Modelling integrated into a modality- and model-based late fusion approach. These results are much higher than the ones obtained from other state-of-the-art systems. Moreover, competitive results were also obtained on the SEMAINE dataset.
There is a wide range of possible future research directions associated with Strength Modelling to build on this initial set of promising results. First, only two widely used regression models were investigated in the present article; much of our future effort will concentrate on assessing the suitability of other regression approaches (e. g., Partial Least Squares Regression) for use in the framework. Investigating a more general rule for which kinds of models can usefully be combined in the framework would help to expand its application, and it would also be interesting to extend the framework both in width and in depth. Second, motivated by the work in [START_REF] Kursun | Parallel interacting multiview learning: An application to prediction of protein sub-nuclear location[END_REF], we will also combine the original features with the predictions from different modalities (e. g., integrating the predictions based on audio features with the original video features for a final arousal or valence prediction), rather than from different models only. Furthermore, we plan to generalise the promising advantages offered by Strength Modelling by evaluating its performance on other behavioural regression tasks.
Figure 1: Overview of the Strength Modelling framework.
Figure 2: Strength Modelling with early fusion strategy.
Figure 3: Strength Modelling (SM) with late fusion strategy. Fused predictions are from multiple independent modalities with the same model (denoted by the red, green, or blue lines), multiple independent models within the same modality (denoted by the solid or dotted lines), or the combination.
Figure 4: Illustration of the facial landmark features extraction from the RECOLA database.
Figure 5: Comparison of PCC and CCC between two series. The black line is the gold standard from the RECOLA database test partition, and the blue line is generated by shifting and scaling the gold standard.
Figure 6: Automatic prediction of arousal via audio signals (a) and valence via video signals (b) obtained with the best settings of the strength-involved models and individual models for a subject from the test partition of the RECOLA database.
Figure 7: Averaged relative performance improvement (in terms of CCC) across RECOLA and SEMAINE for arousal and valence recognition. The performance of Strength Modelling was compared with the best individual systems in the cases of audio only, video only, early fusion, and late fusion.
Table 1: Results based on audio features only: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) on the development and test partitions of the RECOLA and SEMAINE databases. The best achieved CCC is highlighted. The symbol * indicates a significant performance improvement over the related individual systems.

(a) Development set

| Method | RECOLA arousal RMSE | RECOLA arousal CCC | RECOLA valence RMSE | RECOLA valence CCC | SEMAINE arousal RMSE | SEMAINE arousal CCC | SEMAINE valence RMSE | SEMAINE valence CCC |
|---|---|---|---|---|---|---|---|---|
| S | .126 | .714 | .149 | .331 | .218 | .399 | .262 | .172 |
| B | .142 | .692 | .117 | .286 | .209 | .387 | .261 | .117 |
| B-S | .127 | .713 | .144 | .348* | .206 | .417* | .255 | .179 |
| S-B | .122 | .753* | .113 | .413* | .210 | .434* | .262 | .172 |
| B-B | .122 | .755* | .112 | .476* | .206 | .417* | .255 | .178* |

(b) Test set

| Method | RECOLA arousal RMSE | RECOLA arousal CCC | RECOLA valence RMSE | RECOLA valence CCC | SEMAINE arousal RMSE | SEMAINE arousal CCC | SEMAINE valence RMSE | SEMAINE valence CCC |
|---|---|---|---|---|---|---|---|---|
| S | .133 | .605 | .165 | .248 | .216 | .397 | .263 | .017 |
| B | .155 | .625 | .119 | .282 | .202 | .317 | .256 | .008 |
| B-S | .133 | .606 | .160 | .264 | .205 | .332 | .258 | .006 |
| S-B | .133 | .665* | .117 | .319* | .203 | .423* | .262 | .017 |
| B-B | .133 | .666* | .123 | .364* | .205 | .332* | .258 | .006 |
Table 2: Results based on visual features only: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models of SVR (S) and BLSTM-RNN (B) on the development and test partitions of the RECOLA and SEMAINE databases. The best achieved CCC is highlighted. The symbol * indicates a significant performance improvement over the related individual systems.

(a) Development set

| Method | RECOLA arousal RMSE | RECOLA arousal CCC | RECOLA valence RMSE | RECOLA valence CCC | SEMAINE arousal RMSE | SEMAINE arousal CCC | SEMAINE valence RMSE | SEMAINE valence CCC |
|---|---|---|---|---|---|---|---|---|
| S | .197 | .120 | .139 | .456 | .249 | .241 | .253 | .393 |
| B | .184 | .287 | .110 | .478 | .224 | .232 | .247 | .332 |
| B-S | .183 | .292 | .110 | .592* | .222 | .250 | .252 | .354 |
| S-B | .186 | .350* | .118 | .510* | .231 | .291* | .242 | .405 |
| B-B | .185 | .344* | .113 | .501* | .222 | .249* | .256 | .301 |

(b) Test set

| Method | RECOLA arousal RMSE | RECOLA arousal CCC | RECOLA valence RMSE | RECOLA valence CCC | SEMAINE arousal RMSE | SEMAINE arousal CCC | SEMAINE valence RMSE | SEMAINE valence CCC |
|---|---|---|---|---|---|---|---|---|
| S | .186 | .193 | .156 | .381 | .279 | .112 | .278 | .115 |
| B | .183 | .193 | .122 | .394 | .240 | .112 | .275 | .063 |
| B-S | .176 | .265* | .130 | .464* | .235 | .072 | .285 | .043 |
| S-B | .186 | .196 | .121 | .477* | .249 | .125 | .284 | .068 |
| B-B | .197 | .184 | .120 | .459* | .235 | .072 | .255 | .158 |
* Unless stated otherwise, a p value less than .05 indicates significance.
A comparison of the performance of different fusion approaches, with or without Strength Modelling, is presented in Table 4. For the systems without Strength Modelling on RECOLA, one can observe that the best individual-model test set performances, .625 and .394 for arousal and valence respectively (Sec. 5.2), were boosted to .671 and .405 with the modality-based late fusion approach, and to .651 and .497 with the model-based late fusion approach. These results were further promoted to .664 and .549 when combining the modality- and model-based late fusion approaches. This result is in line with other results in the literature.
Table 3: Early fusion results on RECOLA and SEMAINE databases: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models, SVR (S) and BLSTM-RNN (B), with the early fusion strategy on the development and test partitions of the RECOLA and SEMAINE databases. The best achieved CCC is highlighted. The symbol * indicates a significant performance improvement over the related individual systems.
RECOLA SEMAINE
Early Fusion AROUSAL VALENCE AROUSAL VALENCE
method RMSE CCC RMSE CCC RMSE CCC RMSE CCC
a. on the development set
S .121 .728 .113 .544 .213 .392 .252 .436
B .132 .700 .109 .513 .217 .354 .257 .205
B-S .122 .727 .118 .549 .210 .374 .239 .363
S-B .127 .712 .096 .526 .208 .423 * .253 .397
B-B .126 .718 * .095 .542 * .210 .421 * .241 .361 *
b. on the test set
S .132 .610 .139 .463 .224 .304 .292 .057
B .148 .562 .114 .476 .204 .288 .244 .127
B-S .132 .610 .121 .520 * .204 .328 * .264 .063
S-B .144 .616 * .112 .473 .198 .408 * .275 .144 *
B-B .143 .618 * .114 .499 * .220 .307 * .265 .060
Table 4: Late fusion results on the RECOLA database: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models, SVR (S) and BLSTM-RNN (B), with late fusion strategies (i. e., modality-based, model-based, or the combination) on the development and test partitions of the RECOLA database. The best achieved CCC is highlighted. The symbol * indicates a significant performance improvement over the related individual systems.
Table 5: Late fusion results on the SEMAINE database: performance comparison in terms of RMSE and CCC between the strength-involved models and the individual models, SVR (S) and BLSTM-RNN (B), with late fusion strategies (i. e., modality-based, model-based, or the combination) on the development and test partitions of the SEMAINE database. The best achieved CCC is highlighted. The symbol * indicates a significant performance improvement over the related individual systems.
Acknowledgements
This work was supported by the EU's Horizon 2020 Programme through the Innovative Action No. 645094 (SEWA) and the EC's 7th Framework Programme through the ERC Starting Grant No. 338164 (iHEARu). We further thank the NVIDIA Corporation for their support of this research by Tesla K40-type GPU donation. | 59,664 | [
"13134"
] | [
"488795",
"488795",
"488795",
"488795",
"1041971",
"488795",
"50682"
] |
01486190 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01486190/file/Lezoray_ICASSP2017.pdf | Olivier Lézoray
3D COLORED MESH GRAPH SIGNALS MULTI-LAYER MORPHOLOGICAL ENHANCEMENT
Keywords: Graph signal, morphology, color, multilayer decomposition, detail enhancement, sharpness
We address the problem of sharpness enhancement of 3D colored meshes. The problem is modeled with graph signals and their morphological processing is considered. A hierarchical framework that decomposes the graph signal into several layers is introduced. It relies on morphological filtering of graph signal residuals at several scales. To have an efficient sharpness enhancement, the obtained layers are blended together with the use of a nonlinear sigmoid detail enhancement and tone manipulation, and of a structure mask.
INTRODUCTION
3D meshes are widely used in many fields and applications such as computer graphics and games. Recently, low-cost sensors have brought 3D scanning into the hands of consumers. As a consequence, a new market has emerged that proposes cheap software that, similarly to an ordinary video camera, makes it possible to generate 3D models by simply moving around an object or a person. With such software one can now easily produce 3D colored meshes with each vertex described by its position and color. However, the quality of the mesh is not always visually good. In such a situation, the sharpness of the 3D colored mesh needs to be enhanced. In this paper we propose an approach towards this problem. Existing techniques for sharpness enhancement of images use structure-preserving smoothing filters [START_REF] Zhang | Rolling guidance filter[END_REF][START_REF] Cho | Bilateral texture filtering[END_REF][START_REF] Gastal | Domain transform for edge-aware image and video processing[END_REF][START_REF] Xu | Image smoothing via L 0 gradient minimization[END_REF] within a hierarchical framework. They decompose the image into different layers from coarse to fine details, making it easier for subsequent detail enhancement. Some filters have been extended to 3D meshes, but most manipulate only mesh vertex positions [START_REF] Fleishman | Bilateral mesh denoising[END_REF][START_REF] Michael Kolomenkin | Prominent field for shape processing and analysis of archaeological artifacts[END_REF]. Some recent works have considered the color information [START_REF] Afrose | Mesh color sharpening[END_REF]. In this paper we present a robust sharpness enhancement technique based on morphological signal decomposition. The approach considers manifold-based morphological operators to construct a complete lattice of vectors. With this approach, a multi-layer decomposition of the 3D colored mesh, modeled as a graph signal, is proposed that progressively decomposes an input color mesh from coarse to fine scales. The layers are manipulated by non-linear s-curves and blended by a structure mask to produce an enhanced 3D color mesh. The paper is organized as follows. In Section 2, we introduce a learned ordering of the vectors of a graph signal. From this ordering, we derive a graph signal representation and define the associated morphological graph signal operators. Section 3 describes the proposed method for multi-layer morphological enhancement of graph signals. The last sections present results and a conclusion.
MATHEMATICAL MORPHOLOGY FOR 3D COLORED GRAPH SIGNALS
Notations
A graph $G = (V, E)$ consists of a set $V = \{v_1, \ldots, v_m\}$ of vertices and a set $E \subset V \times V$ of edges connecting vertices. A graph signal is a function that associates real-valued vectors to the vertices of the graph, $f : G \rightarrow T \subset \mathbb{R}^n$, where $T$ is a non-empty set of vectors. The set $T = \{v_1, \cdots, v_m\}$ represents all the vectors associated to all vertices of the graph (we will also use the notation $T[i] = v_i = f(v_i)$). In this paper, 3D colored graph signals are considered, where a color is assigned to each vertex of a triangulated mesh.
Manifold-based color ordering
Morphological processing of graph signals requires the definition of a complete lattice $(T, \le)$ [START_REF] Ronse | Why mathematical morphology needs complete lattices[END_REF], i.e., an ordering of all the vectors of $T$. Since there exists no admitted universal ordering of vectors, the framework of h-orderings [START_REF] Goutsias | Morphological operators for image sequences[END_REF] has been proposed as an alternative. This consists in constructing a bijective projection $h : T \rightarrow L$ where $L$ is a complete lattice equipped with the conditional total ordering [START_REF] Goutsias | Morphological operators for image sequences[END_REF]. We refer to $\le_h$ as the h-ordering given by $v_i \le_h v_j \Leftrightarrow h(v_i) \le h(v_j)$. As argued in our previous works [START_REF] Lézoray | Complete lattice learning for multivariate mathematical morphology[END_REF], the projection $h$ cannot be linear, since a distortion of the space topology is inevitable. Therefore, it is preferable to rely on a nonlinear mapping $h$. The latter will be constructed by learning the manifold of vectors from a given graph signal, and the complete lattice $(T, \le_h)$ will be deduced from it.
Complete lattice learning
Given a graph signal that provides a set $T$ of $m$ vectors in $\mathbb{R}^3$, a dictionary $D = \{x'_1, \cdots, x'_p\}$ of $p \ll m$ vectors is built by Vector Quantization [START_REF] Gersho | Vector Quantization and Signal Compression[END_REF]. A similarity matrix $K_D$ that contains the pairwise similarities between all the dictionary vectors $x'_i$ is then computed. The manifold of the dictionary vectors is modeled using nonlinear manifold learning by Laplacian Eigenmaps [START_REF] Belkin | Laplacian eigenmaps for dimensionality reduction and data representation[END_REF]. This is performed with the decomposition $L = \Phi_D \Pi_D \Phi_D^T$ of the normalized Laplacian matrix $L = I - D_D^{-\frac{1}{2}} K_D D_D^{-\frac{1}{2}}$, with $\Phi_D$ and $\Pi_D$ its eigenvectors and eigenvalues, and $D_D$ the degree diagonal matrix of $K_D$. The obtained representation being only valid for the dictionary $D$, it is extrapolated to all the vectors of $T$ by Nyström extrapolation [START_REF] Talwalkar | Large-scale SVD and manifold learning[END_REF], expressed by $\Phi = D_{DT}^{-\frac{1}{2}} K_{DT}^T D_D^{-\frac{1}{2}} \Phi_D (\mathrm{diag}[1] - \Pi_D)^{-1}$, where $K_{DT}$ is the similarity matrix between the sets $D$ and $T$, and $D_{DT}$ its associated diagonal degree matrix. Finally, the bijective projection $h : T \subset \mathbb{R}^3 \rightarrow L \subset \mathbb{R}^p$ on the manifold is defined as $h(x) = (\phi_1(x), \cdots, \phi_p(x))^T$, with $\phi_k$ the $k$-th eigenvector. The complete lattice $(T, \le_h)$ is obtained by using the conditional ordering after this projection.
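As an illustration of how such a learned ordering could be assembled in practice, the following Python sketch follows the steps above with NumPy and scikit-learn. The Gaussian similarity kernel, the dictionary size p, the k-means stand-in for vector quantization and the guard against near-singular eigenvalues are our own assumptions, not taken from the method itself.

import numpy as np
from sklearn.cluster import KMeans

def learn_order(colors, p=64, sigma=0.1):
    # colors: (m, 3) array of vertex colors; returns the learned vertex ordering
    D = KMeans(n_clusters=p, n_init=4).fit(colors).cluster_centers_
    def sim(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K_D = sim(D, D)
    d_D = np.diag(K_D.sum(1) ** -0.5)
    L = np.eye(p) - d_D @ K_D @ d_D                       # normalized Laplacian
    evals, evecs = np.linalg.eigh(L)                      # Pi_D, Phi_D
    K_DT = sim(D, colors)                                 # (p, m) similarities
    d_T = K_DT.sum(0) ** -0.5                             # degrees on the extension side
    Phi = (d_T[:, None] * (K_DT.T @ d_D @ evecs)) / (1.0 - evals + 1e-12)
    # conditional (lexicographic) ordering of the projected coordinates
    return np.lexsort(Phi.T[::-1])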
Graph signal representation
The complete lattice $(T, \le_h)$ being learned, a new graph signal representation can be defined. Let $P$ be a sorted permutation of the elements of $T$ according to the manifold-based ordering $\le_h$: one has $P = \{v'_1, \cdots, v'_m\}$ with $v'_i \le_h v'_{i+1}, \forall i \in [1, (m-1)]$. From this ordered set of vectors, an index graph signal can be defined. Let $I : G \rightarrow [1, m]$ denote this index graph signal. Its elements are defined as $I(v_i) = \{k \mid v'_k = f(v_i) = v_i\}$. Therefore, at each vertex $v_i$ of the index graph signal $I$, one obtains the rank of the original vector $f(v_i)$ in $P$, the set of sorted vectors, that we will call a palette. A new representation of the original graph signal $f$ is obtained and denoted in the form of the pair $f = (I, P)$. Figure 1 presents such a representation for a 3D colored graph signal. The original graph signal $f$ can be directly recovered since $f(v_i) = P[I(v_i)] = T[i] = v_i$.
Fig. 1. From left to right: a 3D colored graph signal $f : G \rightarrow \mathbb{R}^3$, and its representation in the form of an index graph signal $I : G \rightarrow [1, m]$ and associated sorted vectors (palette) $P$.
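A minimal sketch of how the pair (I, P) might be built once an ordering is available; index_palette is a hypothetical helper name, and the order array is assumed to come from a lattice-learning step such as the one sketched above.

import numpy as np

def index_palette(colors, order):
    # P: the palette, i.e. the colors sorted according to the learned ordering
    P = colors[order]
    # I: for each vertex, the rank of its color in P
    I = np.empty(len(colors), dtype=int)
    I[order] = np.arange(len(colors))
    return I, P

# reconstruction check: P[I] recovers the original signal exactly,
# mirroring f(v_i) = P[I(v_i)].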
Graph signal morphological processing
From this new representation of graph signals, morphological operators can now be expressed for the latter. The erosion of a graph signal $f$ at vertex $v_i \in G$ by a structuring element $B_k \subset G$ is defined as $\epsilon_{B_k}(f)(v_i) = \{P[\wedge I(v_j)],\ v_j \in B_k(v_i)\}$. The dilation $\delta_{B_k}(f)(v_i)$ can be defined similarly. A structuring element $B_k(v_i)$ of size $k$ defined at a vertex $v_i$ corresponds to the $k$-hop set of vertices that can be reached from $v_i$ in $k$ walks, plus the vertex $v_i$ itself. These graph signal morphological operators operate on the index graph signal $I$, and the processed graph signal is reconstructed through the sorted vectors $P$ of the learned complete lattice. From these basic operators, we can obtain other morphological filters for graph signals, such as openings $\gamma_{B_k}(f) = \delta_{B_k}(\epsilon_{B_k}(f))$ and closings $\phi_{B_k}(f) = \epsilon_{B_k}(\delta_{B_k}(f))$.
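A direct, if naive, realisation of these operators on the index signal is sketched below. The adjacency list neighbors (each entry containing the vertex itself plus its 1-hop neighbours) is an assumed input, and a size-k structuring element can be emulated by iterating the 1-hop operators k times.

import numpy as np

def erode(I, neighbors):
    return np.array([min(I[j] for j in nb) for nb in neighbors])

def dilate(I, neighbors):
    return np.array([max(I[j] for j in nb) for nb in neighbors])

def opening(I, neighbors):
    # gamma = dilation of the erosion
    return dilate(erode(I, neighbors), neighbors)

def closing(I, neighbors):
    # phi = erosion of the dilation
    return erode(dilate(I, neighbors), neighbors)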
MULTI-LAYER MORPHOLOGICAL ENHANCEMENT
Graph signal multi-layer decomposition
We adopt the strategy of [START_REF] Farbman | Edge-preserving decompositions for multi-scale tone and detail manipulation[END_REF] that consists in decomposing a signal into a base layer and several detail layers, each capturing a given scale of details. We propose the following multiscale morphological decomposition of a graph signal into $l$ layers, as shown in Algorithm 1.
Algorithm 1: Morphological decomposition of a graph signal
  $d_{-1} = f$, $i = 0$
  while $i < l$ do
    Compute the graph signal representation at level $i-1$: $d_{i-1} = (I_{i-1}, P_{i-1})$
    Morphological filtering of $d_{i-1}$: $f_i = MF_{B_{l-i}}(d_{i-1})$
    Compute the residual (detail layer): $d_i = d_{i-1} - f_i$
    Proceed to the next layer: $i = i + 1$
  end while
To extract the successive layers in a coherent manner, the layer $f_0$ has to be the coarsest version of the graph signal, while the residuals $d_i$ have to contain details that become finer across the decomposition levels. This means that the sequence of scales should be decreasing, and therefore the size of the structuring element in the morphological filtering (MF) should also decrease. In terms of graph signal decomposition, this means that as the process evolves, the successive decompositions extract more details from the original graph signal (similarly to [START_REF] Hidane | Graph signal decomposition for multi-scale detail manipulation[END_REF]). In Algorithm 1, this is expressed by $B_{l-i}$, a sequence of structuring elements of decreasing sizes with $i \in [0, l-1]$. Since each detail layer $d_i$ is composed of a set of vectors different from the previous layer $d_{i-1}$, the graph signal representation $(I_i, P_i)$ has to be computed for each successive layer to decompose. Finally, the graph signal can then be represented by $f = \sum_{i=0}^{l-2} f_i + d_{l-1}$. The $f_i$'s thus represent different layers of $f$ captured at different scales. The morphological filter we have considered for the decomposition is an Open Close Close Open (OCCO). The OCCO filter is a self-dual operator that has excellent signal decomposition abilities [START_REF] Peters | A new algorithm for image noise reduction using mathematical morphology[END_REF]: $OCCO_{B_k}(f) = \frac{\gamma_{B_k}(\phi_{B_k}(f)) + \phi_{B_k}(\gamma_{B_k}(f))}{2}$.
Fig. 2. From top to bottom, left to right: an original mesh $f$, and its decomposition into three layers $f_0$, $f_1$, and $d_1$.
In Figure 2, we show an example with three levels of decomposition ($l = 3$) to obtain a coarse base layer $f_0$, a medium detail layer $f_1$ and a fine detail layer $d_1$.
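A compact sketch of this decomposition loop, reusing the hypothetical helpers defined earlier (index_palette, erode, dilate and learn_order). The OCCO filter and the shrinking structuring-element sizes follow Algorithm 1; the data layout and the k-hop emulation by iterated 1-hop operators are our own assumptions.

import numpy as np

def iterate(op, I, neighbors, k):
    for _ in range(k):
        I = op(I, neighbors)
    return I

def occo(colors, order, neighbors, k):
    I, P = index_palette(colors, order)
    gamma = iterate(dilate, iterate(erode, I, neighbors, k), neighbors, k)    # opening
    phi = iterate(erode, iterate(dilate, I, neighbors, k), neighbors, k)      # closing
    close_of_open = iterate(erode, iterate(dilate, gamma, neighbors, k), neighbors, k)
    open_of_close = iterate(dilate, iterate(erode, phi, neighbors, k), neighbors, k)
    return 0.5 * (P[close_of_open] + P[open_of_close])    # average in color space

def decompose(colors, neighbors, l):
    layers, d = [], colors.astype(float)
    for i in range(l - 1):
        f_i = occo(d, learn_order(d), neighbors, l - i)   # B_{l-i}: decreasing size
        layers.append(f_i)
        d = d - f_i                                       # residual d_i
    layers.append(d)                                      # finest residual d_{l-1}
    return layers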
Graph signal enhancement
Proposed approach
Given a graph signal $f = (I, P)$, we first construct its multi-layer decomposition into $l$ levels. The graph signal can be enhanced by manipulating the different layers with specific coefficients and adding the modified layers together. This is achieved with the following proposed scheme:
$\hat{f}(v_k) = S_0(f_0(v_k)) + M(v_k) \cdot \sum_{i=1}^{l-1} S_i(f_i(v_k)), \quad (1)$
with $f_{l-1} = d_{l-1}$. Each layer is manipulated by a nonlinear function $S_i$ for detail enhancement and tone manipulation. The layers are combined with the use of a structure mask $M$ that prevents boosting noise and artifacts while enhancing the main structures of the original graph signal $f$. We now provide details on $S_i$ and $M$.
Nonlinear boosting curve
In classical image detail manipulation, the layers are manipulated in a linear way with specific layer coefficients (i.e., $S_i(x) = \alpha_i x$ [START_REF] Choudhury | Hierarchy of nonlocal means for preferred automatic sharpness enhancement and tone mapping[END_REF]). However, this can over-enhance some image details and requires hard clipping. Therefore, alternative nonlinear detail manipulation and tone manipulation have been proposed [START_REF] Farbman | Edge-preserving decompositions for multi-scale tone and detail manipulation[END_REF][START_REF] Paris | Local laplacian filters: edge-aware image processing with a laplacian pyramid[END_REF][START_REF] Talebi | Fast multi-layer laplacian enhancement[END_REF]. Similarly, we consider a nonlinear sigmoid function of the form $S_i(x) = \frac{1}{1+\exp(-\alpha_i x)}$, appropriately shifted and scaled. The parameter $\alpha_i$ of the sigmoid is automatically determined and decreases while $i$ increases, whereas its width increases from one level to the other (details not provided due to reduced space).
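One possible reading of this blending scheme in Python is sketched below; the exact shift and scale of the sigmoid and the per-layer alpha values are not given above, so the normalisation and the example values are our own.

import numpy as np

def sigmoid_boost(alpha):
    # odd, shifted/scaled sigmoid through the origin: small details are
    # amplified, large ones saturate instead of being hard-clipped
    return lambda d: 2.0 / (1.0 + np.exp(-alpha * d)) - 1.0

def enhance(layers, mask, S):
    # layers[0]: base layer f_0; layers[1:]: detail layers f_1 .. f_{l-1}
    # mask: per-vertex structure mask M in [1, 2]; S: one curve per layer (Eq. (1))
    detail = sum(S[i](layers[i]) for i in range(1, len(layers)))
    return S[0](layers[0]) + mask[:, None] * detail

# example: identity tone curve on the base, decreasing alpha for finer layers
# enhanced = enhance([f0, f1, d1], M, [lambda b: b, sigmoid_boost(8.0), sigmoid_boost(4.0)])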
Structure mask
As recently proposed in [START_REF] Talebi | Fast multi-layer laplacian enhancement[END_REF] for image enhancement, it is much preferable to boost strong signal structures and to keep the other areas unmodified. For graph signals, a vertex located on an edge or a textured area has a high spectral distance with respect to its neighbors as compared to a vertex within a constant area. Therefore, we propose to construct a structure mask that accounts for the structures present in the graph signal. A normalized sum of distances within a local neighborhood is a good indicator of the graph signal structure, and is defined as $\delta(v_i) = \frac{1}{|B_1(v_i)|} \sum_{v_j \in B_1(v_i)} d_{EMD}(H(v_j), H(v_i))$, with $d_{EMD}$ the Earth Mover's Distance [START_REF] Rubner | The earth mover's distance as a metric for image retrieval[END_REF] between two signatures that are compact representations of local distributions. To build $H(v_i)$, a histogram of size $N$ is constructed on the index graph signal $I$ as $H(v_i) = \{(w_k, m_k)\}_{k=1}^{N}$ within the set $B_1(v_i)$, where $m_k$ is the index of the $k$-th element and $w_k$ its appearance frequency. One has to note that $N \le |B_1(v_i)|$, since identical values can be found within the set $B_1(v_i)$, and two signatures can have different sizes. To compute the EMD, ground distances are computed in the CIELAB color space. Finally, we define the structure mask of a graph signal as $M(v_i) = 1 + \frac{\delta(v_i) - \wedge\delta}{\vee\delta - \wedge\delta}$. One can notice that $M(v_i) \in [1, 2]$ and will be close to 1 for constant areas and to 2 for ramp edges. Figure 3 presents examples of structure masks on two 3D colored graph signals. The structure mask is computed only once, on the original graph signal $(I, P) = f$.
EXPERIMENTAL RESULTS AND CONCLUSION
We illustrate our approach on graph signals in the form of 3D colored meshes that represent 3D scans of several person busts. Such scans have recently received much interest to generate 3D printed selfies, and their perceived sharpness is of huge importance for final consumers. We have used $l = 3$ levels of decomposition for computational efficiency. To assess objectively the benefit of our method, we measure the sharpness of the original signal $f$ and of the modified signal $\hat{f}$ with the TenenGrad criterion [START_REF] Xu | A comparison of contrast measurements in passive autofocus systems for low contrast images[END_REF][START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF], after adapting it to 3D colored meshes by using the morphological gradient (as in [START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF] for images): $TG(f) = \frac{1}{3|V|} \sum_{v_i \in V} \sum_{k=1}^{3} |\delta(f^k)(v_i) - \epsilon(f^k)(v_i)|$, where the morphological $\delta$ and $\epsilon$ are performed on each channel $f^k$ on a 1-hop neighborhood. It has been shown in [START_REF] Choudhury | Perceptually motivated automatic sharpness enhancement using hierarchy of non-local means[END_REF] that a higher value means a sharper signal and that this value is correlated with perceived sharpness. It can be seen that our approach enhances the local contrast without artifact magnification or detail loss.
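A sketch of this adapted TenenGrad measure (the neighbors adjacency list, including the vertex itself, is the same assumed input as before):

import numpy as np

def tenengrad(colors, neighbors):
    # channel-wise 1-hop morphological gradient, averaged over vertices and channels
    dil = np.array([colors[nb].max(axis=0) for nb in neighbors])
    ero = np.array([colors[nb].min(axis=0) for nb in neighbors])
    return np.abs(dil - ero).sum() / (3.0 * len(colors))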
CONCLUSION
We have introduced an approach for the enhancement of 3D colored meshes based on a morphological multi-layer decomposition of graph signals. The use of nonlinear detail manipulation together with a structure mask yields an automatic method that produces visually appealing results with enhanced sharpness.
Fig. 3. Graph signal structure masks used to modulate the importance of detail enhancement. The original graph signals can be seen in the next figures.
Fig. 4. Morphological colored mesh detail manipulation with cropped zoomed areas.
Fig. 5. Morphological colored mesh detail manipulation.
This work received funding from the Agence Nationale de la Recherche (ANR-14-CE27-0001 GRAPHSIP), and from the European Union FEDER/FSE 2014/2020 (GRAPHSIP project).
Models from Cyberware and ReconstructMe. | 16,965 | [
"230"
] | [
"406734"
] |
01486563 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2017 | https://ujm.hal.science/ujm-01486563/file/Visapp_Alex_CameraReady.pdf | Panagiotis-Alexandros Bokaris
email: panagiotis-alexandros.bokaris@limsi.fr
Damien Muselet
Alain Trémeau
email: alain.tremeau@univ-st-etienne.fr
3D reconstruction of indoor scenes using a single RGB-D image
Keywords: 3D reconstruction, Cuboid fitting, Kinect, RGB-D, RANSAC, Bounding box, Point cloud, Manhattan World
The three-dimensional reconstruction of a scene is essential for the interpretation of an environment. In this paper, a novel and robust method for the 3D reconstruction of an indoor scene using a single RGB-D image is proposed. First, the layout of the scene is identified and then, a new approach for isolating the objects in the scene is presented. Its fundamental idea is the segmentation of the whole image in planar surfaces and the merging of the ones that belong to the same object. Finally, a cuboid is fitted to each segmented object by a new RANSAC-based technique. The method is applied to various scenes and is able to provide a meaningful interpretation of these scenes even in cases with strong clutter and occlusion. In addition, a new ground truth dataset, on which the proposed method is further tested, was created. The results imply that the present work outperforms recent state-of-the-art approaches not only in accuracy but also in robustness and time complexity.
INTRODUCTION
3D reconstruction is an important task in computer vision since it provides a complete representation of a scene and can be useful in numerous applications (light estimation for white balance, augment synthetic objects in a real scene, design interiors, etc). Nowadays, with an easy and cheap access to RGB-D images, as a result of the commercial success of the Kinect sensor, there is an increasing demand in new methods that will benefit from such data.
A lot of attention has been drawn to 3D reconstruction using dense RGB-D data [START_REF] Izadi | Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera[END_REF][START_REF] Neumann | Real-time rgb-d mapping and 3-d modeling on the gpu using the random ball cover data structure[END_REF][START_REF] Dou | Exploring high-level plane primitives for indoor 3d reconstruction with a hand-held rgb-d camera[END_REF]. Such data are obtained by multiple acquisitions of the considered 3D scene under different viewpoints. The main drawback of these approaches is that they require a registration step between the different views. In order to make the 3D reconstruction of a scene feasible despite the absence of a huge amount of data, this paper focuses on reconstructing a scene using a single RGB-D image. This challenging problem has been less addressed in the literature [START_REF] Neverova | 2 1/2 d scene reconstruction of indoor scenes from single rgb-d images[END_REF]. The lack of information about the shape and position of the different objects in the scene due to the single viewpoint and occlusions makes the task significantly more difficult. Therefore, various assumptions have to be made in order to make the 3D reconstruction feasible (object nature, orientation). In this paper, starting from a single RGB-D image, a fully automatic method for the 3D reconstruction of an indoor scene without constraining the object orientations is proposed. In the first step, the layout of the room is identified by solving the parsing problem of an indoor scene. For this purpose, the work of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] is exploited and improved by better addressing the problem of the varying depth resolution of the Kinect sensor while fitting planes. Then, the objects of the scene are segmented by using a novel plane-merging approach and a cuboid is fitted to each of these objects. The reason behind the selection of such representation is that most of the objects in a common indoor scene, such as drawers, bookshelves, tables or beds have a cuboid shape. For the cuboid fitting step, a new "double RANSAC"-based [START_REF] Fischler | Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography[END_REF] approach is proposed. The output of the algorithm is a 3D reconstruction of the observed scene, as illustrated in Fig. 1. In order to assess the quality of the reconstruction, a new dataset of captured 3D scenes is created, in which the exact positions of the objects are measured by using a telemeter. In fact, by knowing the exact 3D positions of the objects, one can objectively assess the accuracy of all the 3D reconstruction algorithms. This ground truth dataset will be publicly available for future comparisons. Finally, the proposed method is tested on this new dataset as well as on the NYU Kinect dataset [START_REF] Silberman | Indoor segmentation and support inference from rgbd images[END_REF]. The obtained results indicate that the proposed algorithm outperforms the state-ofthe-art even in cases with strong occlusion and clutter.
RELATED WORK
Research related to the problem examined in this paper can be separated into two categories. The first category is the extraction of the main layout of the scene, while the second one is the 3D representation of the objects in the scene.
Various approaches have been followed in computer vision for recovering the spatial layout of a scene. Many of them are based on the Manhattan World assumption [START_REF] Coughlan | Manhattan world: Compass direction from a single image by bayesian inference[END_REF]. Some solutions only consider color images without exploiting depth information [START_REF] Mirzaei | Optimal estimation of vanishing points in a manhattan world[END_REF][START_REF] Bazin | Globally optimal line clustering and vanishing point estimation in manhattan world[END_REF][START_REF] Hedau | Recovering the spatial layout of cluttered rooms[END_REF][START_REF] Schwing | Efficient exact inference for 3d indoor scene understanding[END_REF][START_REF] Zhang | PanoContext: A Whole-Room 3D Context Model for Panoramic Scene Understanding[END_REF] and hence provide only coarse 3D layouts. With Kinect, depth information is available, which can be significantly beneficial in such applications. [START_REF] Zhang | Estimating the 3d layout of indoor scenes and its clutter from depth sensors[END_REF] expanded the work of [START_REF] Schwing | Efficient exact inference for 3d indoor scene understanding[END_REF]) and used the depth information in order to reduce the layout error and estimate the clutter in the scene. [START_REF] Taylor | Fast scene analysis using image and range data[END_REF] developed a method that parses the scene in salient surfaces using a single RGB-D image. Moreover, [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] presented a method for parsing the Manhattan structure of an indoor scene. Nonetheless, these works are based on assumptions about the content of the scene (minimum size of a wall, minimum ceiling height, etc.). Moreover, in order to address the problem of the depth accuracy in Kinect, they used the depth disparity differences, which is not the best solution as it is discussed in section 3.1.
Apart from estimating the layout of an indoor scene, a considerable amount of research has been done in estimating surfaces and objects from RGB-D images. [START_REF] Richtsfeld | Towards scene understanding -object segmentation using rgbd-images[END_REF] used RANSAC and NURBS [START_REF] Piegl | On nurbs: a survey[END_REF] for detecting unknown 3D objects in a single RGB-D image, requiring learning data from the user. [START_REF] Cupec | Fast 2.5d mesh segmentation to approximately convex surfaces[END_REF][START_REF] Jiang | Finding Approximate Convex Shapes in RGBD Images[END_REF] segment convex 3D shapes but their grouping to complete objects remains an open issue. To the best of our knowledge, [START_REF] Neverova | 2 1/2 d scene reconstruction of indoor scenes from single rgb-d images[END_REF] was the first method that proposed a 3D reconstruction starting from a single RGB-D image under the Manhattan World assumption. However, it has the significant limitation that it only reconstructs 3D objects which are parallel or perpendicular to the three main orientations of the Manhattan World. [START_REF] Lin | Holistic scene understanding for 3d object detection with rgbd cameras[END_REF] presented a holistic approach that takes into account 2D segmentation, 3D geometry and contextual relations between scenes and objects in order to detect and classify objects in a single RGB-D image. Despite the promising nature of such approach it is constrained by the assumption that the objects are parallel to the floor. In addition, the cuboid fitting to the objects is performed as the minimal bounding cube of the 3D points, which is not the optimal solution when working with Kinect data, as discussed by [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF]. Recently, an interesting method that introduced the "Manhattan Voxel" was developed by [START_REF] Ren | Three-dimensional object detection and layout prediction using clouds of oriented gradients[END_REF]. In their work the 3D layout of the room is estimated and detected objects are represented by 3D cuboids. Being a holistic approach that prunes candidates, there is no guarantee that a cuboid will be fitted to each object in the scene. Based on a single RGB image, [START_REF] Dwibedi | Deep cuboid detection: Beyond 2d bounding boxes[END_REF] developed a deeplearning method to extract all the cuboid-shaped objects in the scene. This novel technique differs from our perspective since the intention is not to fit a cuboid to a 3D object but to extract a present cuboid shape in an image.
The two methods [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF][START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF] are similar with our approach since their authors try to fit cuboids using RANSAC to objects of a 3D scene acquired by a single RGB-D image. [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF] followed a 3D reasoning approach and investigated different constraints that have to be applied to the cuboids, such as occlusion, stability and supporting relations. However, this method is applicable only to pre-labeled images. [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] coarsely segment the RGB-D image into roughly piecewise planar patches and for each pair of such patches fit a cuboid to the two planes. As a result, a large set of cuboid candidates is created. Finally, the best subset of cuboids is selected by optimizing an objective function, subject to various constraints. Hence, they require strong constraints (such as intersections between pairs of cuboids, number of cuboids, covered area on the image plane, occlusions among cuboids, etc.) during the global optimization process. This pioneer approach provides promising results in some cases but very coarse ones in others even for dramatically simple scenes (see Figs. 9 and 10 and images shown in [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]).
In this paper, in order to improve the quality of the reconstruction, we follow a different approach and propose an accurate segmentation step based on novel constraints. The objective is to isolate the objects from each other before fitting the cuboids, since the cuboid fitting step can be significantly more efficient and accurate when working with each object independently.
METHOD OVERVIEW
The method proposed in this paper can be separated into three different stages. The first stage is to define the layout of the scene. This implies extracting the floor, all the walls and their intersections. For this purpose, the input RGB-D image is segmented by fitting 3D planes to the point cloud. The second stage is to segment all the objects in the scene and to fit a cuboid to each one separately. Finally, in stage 3 the results of the two previous stages are combined in order to visualize the 3D model of the room. An overview of this method can be seen in Fig. 2.
Parsing the indoor scene
In order to parse the indoor scene and extract its complete layout, an approach based on the work of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] is used. According to this work, the image is separated into planar regions by fitting planes to the point cloud using RANSAC, as can be seen in Fig. 2b. Then the floor and the walls are detected by analyzing their surfaces, their angles with the vertical and the angles between them. This method provides the layout of the room in less than 6 seconds. The final result of the layout of the scene, visualized in the 3D Manhattan World, can be seen in the bottom of Fig. 2c.
While working with depth values provided by the Kinect sensor, it is well known that the depth accuracy is not the same for the whole range of depth [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF], i.e. the depth information is more accurate for points that are close to the sensor than for points that are farther away. This has to be taken into account in order to define a threshold according to which points are considered as inliers in a RANSAC method. Points whose distance to a plane lies within the range of the Kinect error should be treated as inliers of that plane. In order to address this problem, [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] proposed to fit planes in the disparity (inverse of depth) image instead of working directly with depth. This solution improves the accuracy, but we claim that the best solution is to use a threshold for the computation of the residual errors in RANSAC that increases with the distance from the sensor. This varying threshold is computed once, by fitting a second-degree polynomial function to the depth error values provided by [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF]. The difference between the varying threshold proposed by [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF] using disparity and the one proposed here can be seen in Fig. 3. As observed in the graph, our threshold follows the experimental data of [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF] significantly better than the threshold of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF]. The impact of the proposed threshold on the room layout reconstruction can be seen in the two characteristic examples in Fig. 4. As can be easily noticed, with the new threshold the corners of the walls are better defined and complete walls are now detected. This adaptive threshold is further used in the cuboid fitting step, and significant improvements are obtained for various objects, as discussed in section 3.3.
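Such a distance-adaptive threshold can be obtained with a few lines of Python; note that the (depth, error) calibration samples below are made-up placeholders standing in for the measurements of Andersen et al., not the actual reported values.

import numpy as np

# hypothetical (depth in mm, depth error in mm) calibration samples
samples = np.array([[500, 1.5], [1500, 5.0], [2500, 12.0], [3500, 25.0], [4500, 45.0]])
coeffs = np.polyfit(samples[:, 0], samples[:, 1], deg=2)   # second-degree polynomial

def ransac_threshold(depth_mm):
    # distance-adaptive inlier threshold for point/plane residuals
    return np.polyval(coeffs, depth_mm)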
Segmenting the objects in the scene
As an output of the previous step, the input image is segmented into planar regions (Fig. 2b). Moreover, it is already known which of these planar regions correspond to the walls and to the floor of the scene (bottom of Fig. 2c). By excluding them from the image, only planar regions that belong to the different objects in the image are left, as can be seen in the top of Fig. 2c. In order to segment the objects in the scene, the planar regions that belong to the same object have to be merged. For this purpose, the edges of the planar surfaces are extracted using a Canny edge detector and the common edge between neighboring surfaces is calculated. Then, we propose to merge two neighboring surfaces by analyzing i) the depth continuity across surface boundaries, ii) the angle between the surface normals and iii) the size of each surface.
For the first criterion, we consider that two neighboring planar surfaces that belong to the same object have similar depth values along their common edge, and different ones when they belong to different objects. The threshold on the mean depth difference is set to 60 mm in all of our experiments. The second criterion is necessary in order to prevent patches that do not belong to the same object from being merged. In fact, since this study is focused on cuboids, the planar surfaces that should be merged need to be either parallel or perpendicular to each other. The final criterion forces neighboring planar surfaces to be merged if both of their sizes are relatively small (less than 500 points). The aim is to regroup all small planar regions that constitute an object that does not have a cuboid shape (sphere, cylinder, etc.). This point is illustrated in Fig. 5, where a cylinder is extracted. The proposed algorithm checks each planar region against its neighboring regions (5-pixel area) in order to decide whether they have to be merged or not. This step is crucial for preparing the data before fitting cuboids in the next step.
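One possible combination of the three criteria, sketched in Python; the 60 mm and 500-point thresholds come from the text, whereas the angular tolerance and the exact way the criteria are combined are our own assumptions.

import numpy as np

def should_merge(mean_depth_gap_mm, normal_a, normal_b, n_points_a, n_points_b,
                 depth_tol=60.0, angle_tol_deg=10.0, small_size=500):
    if n_points_a < small_size and n_points_b < small_size:
        return True                                  # regroup small non-cuboid pieces
    if mean_depth_gap_mm > depth_tol:
        return False                                 # depth discontinuity along the edge
    c = abs(float(np.dot(normal_a, normal_b)))       # |cos| of the angle between normals
    parallel = c > np.cos(np.radians(angle_tol_deg))
    perpendicular = c < np.sin(np.radians(angle_tol_deg))
    return parallel or perpendicular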
Fitting a cuboid to each object
The aim of this section is to fit an oriented cuboid to each object. As discussed by [START_REF] Jia | 3dbased reasoning with blocks, support, and stability[END_REF], the optimal cuboid is the one with the minimum volume and the maximum number of points on its surface. Since the image has already been segmented, i.e. each object is isolated from the scene, the strong global constraints used by [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] can be relaxed and more attention can be paid to each cuboid. Therefore, we propose the following double-RANSAC process. Two perpendicular planar surfaces are sufficient to define a cuboid. Hence, in order to improve the robustness of the method, we propose to consider only the two biggest planar surfaces of each object. In fact, from a single viewpoint of a 3D scene, often only two surfaces of an object are visible. Thus, first, for each segmented object, the planar surface with the maximum number of inliers is extracted by fitting a plane to the corresponding point cloud using RANSAC (with our adaptive threshold described in section 3.1). The orientation of this plane provides the first axis of the cuboid. We consider that the second plane is perpendicular to the first one, but this information is not sufficient to define the second plane. Furthermore, in case of noise, or when the object is thin (few points in the other planes) or far from the acquisition sensor, the 3D orientation of the second plane might be poorly estimated. Hence, we propose a robust solution which projects all the remaining points of the point cloud on the first plane and then fits a line to the projected points using another RANSAC step. The orientation of this line provides the orientation of the second plane. This is visualized in Fig. 6. In the experiments section, it is shown that this double-RANSAC process provides very good results when fitting cuboids to small, thin or far objects.
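The double-RANSAC idea can be summarised by the following sketch, where ransac_plane and ransac_line are hypothetical helpers returning, respectively, a unit plane normal with its offset and inlier mask, and the dominant 2D line direction of a point set lying in the plane.

import numpy as np

def fit_cuboid_axes(points, ransac_plane, ransac_line):
    # Stage 1: dominant planar surface of the object -> first cuboid axis (its normal)
    normal, offset, inliers = ransac_plane(points)
    # Stage 2: project the remaining points onto that plane and fit a line to them;
    # the line direction gives the second, perpendicular axis
    rest = points[~inliers]
    projected = rest - np.outer(rest @ normal + offset, normal)
    axis2 = ransac_line(projected, plane_normal=normal)
    axis2 = axis2 / np.linalg.norm(axis2)
    axis3 = np.cross(normal, axis2)
    return normal, axis2, axis3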
Furthermore, as a second improvement of the RANSAC algorithm, we propose to analyze its quality criterion. In fact, RANSAC fits several cuboids to each object (10 cuboids in our implementation) and selects the one that optimizes a given quality criterion. Thus, the chosen quality criterion has a big impact on the results. As discussed before, in RGB-D data a well estimated cuboid should have a maximum of points on its surface. Given one cuboid returned by one RANSAC iteration, we denote $area_{f1}$ and $area_{f2}$ the areas of its two faces, and $area_{c1}$ and $area_{c2}$ the areas defined by the convex hull of the inlier points projected on these two faces, respectively. In order to evaluate the quality of the fitted cuboid, Jiang and Xiao proposed the measure defined as $\min(\frac{area_{c1}}{area_{f1}}, \frac{area_{c2}}{area_{f2}})$, which is equal to the maximum value of 1 when the fitting is perfect. This measure assimilates the quality of a cuboid to the quality of the worst plane among the two, without taking into account the quality of the best fitting plane. Nevertheless, the quality of the best fitting plane could help in deciding between two cuboids characterized by the same ratio. Furthermore, the relative sizes of the two planes are completely ignored in this criterion. Indeed, in the case of a cuboid composed of a very big plane and a very small one, this measure does not provide any information about which one is well fitted to the data, although this information is crucial to assess the quality of the cuboid fitting. Consequently, we propose to use a similar criterion which does not suffer from these drawbacks: $ratio = \frac{area_{c1} + area_{c2}}{area_{f1} + area_{f2}}$. Likewise, for an ideal fitting this measure is equal to 1. In order to illustrate the improvement due to the proposed adaptive threshold (of section 3.1) and the proposed ratio in the cuboid fitting step, 3 typical examples are shown in Fig. 7. There, it can be seen that the proposed method (right column) significantly increases the performance for far and thin objects. In the final step of the method, the fitted cuboids are projected in the Manhattan World of the scene, in order to obtain the 3D model of the scene, as illustrated in Fig. 2f. Additionally, the cuboids are projected on the input RGB image in order to demonstrate how well the fitting procedure performs (see Fig. 2e).
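The two quality measures discussed above are straightforward to express; a sketch is given below for comparison (the face and convex-hull areas are assumed to be computed beforehand).

def proposed_quality(area_c1, area_c2, area_f1, area_f2):
    # proposed criterion: joint coverage of the two visible faces; 1.0 for a perfect fit
    return (area_c1 + area_c2) / (area_f1 + area_f2)

def jiang_xiao_quality(area_c1, area_c2, area_f1, area_f2):
    # criterion of Jiang and Xiao (2013): quality of the worst-covered face only
    return min(area_c1 / area_f1, area_c2 / area_f2)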
NEW GROUND TRUTH DATASET
For an objective evaluation, a new dataset with measured ground truth 3D positions was built. This dataset is composed of 4 different scenes, and each scene is captured under 3 different viewpoints and 4 different illuminations. Thus, each scene consists of 12 images. For all these 4 scenes, the 3D positions of the vertices of the objects were measured using a telemeter. These coordinates constitute the ground truth. The intersection point of the three planes of the Manhattan World was considered as the reference point. It should be noted that the measurement of vertex positions in a 3D space with a telemeter is not perfectly accurate, and the experimental measurements show that the precision of these ground truth data is approximately ±3.85 mm. Some of the dataset images can be seen in the figures of the next section.
EXPERIMENTS
Qualitative evaluation
As a first demonstration of the proposed method, some reconstruction results are shown in Fig. 8. It can be seen that it performs well even in very demanding scenes with strong clutter. Moreover, it is able to handle small and thin objects with convex surfaces. Subsequently, our method is compared with the recent method proposed by [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF], since their method not only performs cuboid fitting on RGB-D data but also outperforms various other approaches. A first visual comparison can be performed on both our dataset and the well-known NYUv2 Kinect Dataset [START_REF] Silberman | Indoor segmentation and support inference from rgbd images[END_REF] in Figs. 9 and 10, respectively. It should be noted that all the thresholds in this paper were tuned to the provided numbers for both ours and the NYUv2 dataset. This point highlights the generality of our method, which was tested in a wide variety of scenes. [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] have further improved their code, and its last release (January 2014) was used for our comparisons. A random subset of 40 images that contain information about the layout of the room was selected from the NYUv2 Kinect dataset. The results imply that our method provides significantly better reconstructions than this state-of-the-art approach. Furthermore, in various cases in Fig. 9, it can be observed that the global cuboid fitting method of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] can result in cuboids that do not correspond to any object in the scene. The reason for this is the large set of candidate cuboids that they produce for each pair of planar surfaces in the image. The strong constraints that they apply afterwards, in order to eliminate the cuboids which do not correspond to an object, do not always guarantee an optimal solution. Another drawback of this approach is that the aforementioned constraints might eliminate a candidate cuboid that does belong to a salient object. In the next section, the improvement of our approach is quantified by an exhaustive test on our ground truth dataset.
Quantitative evaluation
In order to test how accurate the output of the proposed method is, and how robust it is against different viewpoints and illuminations, the following procedure was used. The 3D positions of the reconstructed vertices are compared to their ground truth positions by measuring their Euclidean distance. The mean value (µ) and the standard deviation (σ) of these Euclidean distances, as well as the mean running time of the algorithm over the 12 images of each scene, are presented in Table 1. The results using the code of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] are included in the table for comparison. It should be noted that, since this method does not provide the layout of the room, their estimated cuboids are rotated to the Manhattan World obtained by our method for each image.
During the experiments, it was noticed that the results of (Jiang and Xiao, 2013) were very unstable, and at various times their method could not provide a cuboid for each object in the scene. Moreover, since the RANSAC algorithm is non-deterministic, so are both our approach and the one of (Jiang and Xiao, 2013). In order to quantify this instability, each algorithm was run 10 times on the exact same image (randomly chosen) of each scene. The mean (µ) and standard deviation (σ) of the Euclidean distance between the ground truth and the reconstructed 3D positions were measured. The results are presented in Table 2. It should be noted that the resulting 3D positions of both algorithms are estimated according to the origin of the estimated layout of the room. Thus, the poor resolution of the Kinect sensor perturbs the estimation of both the layout and the 3D positions of the objects, and the errors accumulate. However, the values of the mean and standard deviation for our method are relatively low with respect to the depth resolution of the Kinect sensor at that distance, which is approximately 50 mm at 4 meters [START_REF] Andersen | Kinect depth sensor evaluation for computer vision applications[END_REF]. Furthermore, the standard deviations of Table 2 are considerably low and indicate a maximum deviation of the result of less than 4.5 mm.
Finally, as can be seen in Table 1, the computational cost of our method is dramatically lower than that of [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF]. It should be noted that within this running time our method estimates the complete 3D reconstruction of the scene. It requires around 9 seconds for a simple scene and less than 20 seconds for a demanding scene with strong clutter and occlusion, on a Dell Inspiron 3537, i7 1.8 GHz, 8 GB RAM. It is worth mentioning that no optimization was done in the implementation. Thus, the aforementioned running times could be considerably lower.
CONCLUSIONS
In this paper, a new method that provides accurate 3D reconstruction of an indoor scene using a single RGB-D image is proposed. First, the layout of the scene is extracted by exploiting and improving the method of [START_REF] Taylor | Parsing indoor scenes using rgb-d imagery[END_REF]. The latter is achieved by better addressing the problem of the non-linear relationship between depth resolution and distance from the sensor. For the 3D reconstruction of the scene, we propose to fit cuboids to the objects composing the scene since this shape is well adapted to most of the indoor objects. Unlike the state-of-theart method [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF] that runs a global optimization process over sets of cuboids with strong constraints, we propose to automatically segment the image, as a preliminary step, in order to focus on the local cuboid fitting on each extracted object. It is shown that our method is robust to viewpoint and object orientation variations. It is able to provide meaningful interpretations even in scenes with strong clutter and occlusion. More importantly, it outperforms the state-of-the-art approach not only in accuracy but also in robustness and time complexity. Finally, a ground truth dataset for which the exact 3D positions of the objects have been measured is provided. This dataset can be used for future comparisons.
Figure 1: (left) Color and Depth input images, (right) 3D reconstruction of the scene.
Figure 2: An overview of the proposed method.
Figure 3: Comparison of the varying threshold set in (Taylor and Cowley, 2012) and the one proposed in this paper.
Figure 4: Impact of the proposed threshold in the room layout reconstruction. (left column): Input image. (middle column): Threshold in (Taylor and Cowley, 2012). (right column): Threshold proposed here.
Figure 5: An example of merging objects that are not cuboids. (left): original input image. (middle): Before merging. (right): After merging.
Figure 6: Illustration of our cuboid fitting step. (left): The inliers of the first fitted 3D plane are marked in green. The remaining points and their projection on the plane is marked in red and blue, respectively. A 3D line is fitted to these points. (right): The fitted cuboid.
Figure 7: Impact of the selected threshold and ratio on the cuboid fitting. (left): Fixed global threshold and ratio proposed here. (middle): Varying threshold proposed here and ratio proposed in (Jiang and Xiao, 2013). (right): Threshold and ratio proposed here.
Figure 8: Various results of the proposed method on different real indoor scenes.
Figure 10: Random results of (Jiang and Xiao, 2013) (top 2 rows) and the corresponding ones of our method (bottom 2 rows) for the ground truth dataset.
Figure 9: Comparison of the results obtained by (Jiang and Xiao, 2013) (odd rows) and the method proposed in this paper (even rows) for the NYUv2 Kinect dataset.
Table 2: Mean value (µ) and standard deviation (σ) of the Euclidean distances between the ground truth and the reconstructed vertices over 10 iterations of the algorithm on the same image, reporting µ (mm) and σ (mm) for our method and for [START_REF] Jiang | A linear approach to matching cuboids in rgbd images[END_REF].
Table 1: Mean value (µ) and standard deviation (σ) of the Euclidean distances in mm between the ground truth and the reconstructed vertices over the 12 images of each scene, and mean running time (t) in seconds of each algorithm.
Our method (Jiang and Xiao, 2013)
µ σ t * µ σ t *
Scene 1 52.4 8.8 8.8 60.9 19.6 25.3
Scene 2 60.4 20.9 12.3 132.7 65.9 26.1
Scene 3 69.7 20.2 14.2 115.7 48.3 27.2
Scene 4 74.9 35.3 12.2 145.3 95.4 26.8
* Running on a Dell Inspiron 3537, i7 1.8 GHz, 8 GB RAM | 31,490 | [
"1003869",
"172493",
"859601"
] | [
"247329",
"17835",
"17835"
] |
01486575 | en | [
"info"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01486575/file/GRAPP_2017_29.pdf | Maxime Maria
email: maxime.maria@univ-poitiers.fr
Sébastien Horna
email: sebastien.horna@univ-poitiers.fr
Lilian Aveneau
email: lilian.aveneau@univ-poitiers.fr
Efficient Ray Traversal of Constrained Delaunay Tetrahedralization
Keywords: Ray Tracing, Acceleration Structure, Constrained Delaunay Tetrahedralization
INTRODUCTION
Ray tracing is a widely used method in computer graphics, known for its capacity to simulate complex lighting effects and render high-quality realistic images. However, it is also recognized as time-consuming due to its high computational cost.
To speed up the process, many acceleration structures have been proposed in the literature. They are often based on a partition of Euclidean space or object space, like kd-tree [START_REF] Bentley | Multidimensional Binary Search Trees Used for Associative Searching[END_REF], BSP-tree, BVH [START_REF] Rubin | A 3-dimensional representation for fast rendering of complex scenes[END_REF][START_REF] Kay | Ray Tracing Complex Scenes[END_REF] and regular grid [START_REF] Fujimoto | ARTS: Accelerated Ray-Tracing System[END_REF]. A survey comparing all these structures can be found in [START_REF] Havran | Heuristic Ray Shooting Algorithms[END_REF]. They can reach interactive rendering, e.g exploiting ray coherency [START_REF] Wald | Interactive Rendering with Coherent Ray Tracing[END_REF][START_REF] Reshetov | Multilevel Ray Tracing Algorithm[END_REF][START_REF] Mahovsky | Memory-Conserving Bounding Volume Hierarchies with Coherent Raytracing[END_REF] or GPU parallelization [START_REF] Purcell | Ray Tracing on Programmable Graphics Hardware[END_REF][START_REF] Foley | KD-tree Acceleration Structures for a GPU Raytracer[END_REF][START_REF] Günther | Realtime Ray Tracing on GPU with BVHbased Packet Traversal[END_REF][START_REF] Aveneau | Understanding the Efficiency of Ray Traversal on GPUs[END_REF][START_REF] Kalojanov | Two-Level Grids for Ray Tracing on GPUs[END_REF]. Nevertheless, actually a lot of factors impact on traversal efficiency (scene layout, rendering algorithm, etc.).
A different sort of acceleration structure is the constrained convex space partition (CCSP), little studied so far. A CCSP is a space partition into convex volumes respecting the scene geometry. [START_REF] Fortune | Topological Beam Tracing[END_REF] introduces this concept by proposing a topological beam tracing using an acyclic convex subdivision respecting the scene obstacles, but relying on a hand-made structure. Recently, [START_REF] Maria | Constrained Convex Space Partition for Ray Tracing in Architectural Environments[END_REF] present a CCSP dedicated to architectural environments, hence limiting its purpose. [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] propose to use a constrained Delaunay tetrahedralization (CDT), i.e. a CCSP made up only of tetrahedra. However, our experiments show that their CDT traversal methods cannot run on GPU, due to numerical errors.
Using a particular tetrahedron representation, this paper proposes an efficient CDT traversal, having the following advantages:
• It is robust, since it does not cause any error due to numerical instability, either on CPU or on GPU.
• It requires fewer arithmetic operations and is thus inherently faster than previous solutions.
• It is adapted to parallel programming since it does not add extra thread divergence.
This article is organized as follows: Section 2 recapitulates previous CDT works. Section 3 presents our new CDT traversal. Section 4 discusses our experiments. Finally, Section 5 concludes this paper.
PREVIOUS WORKS ON CDT
This section first describes CDT, then it presents its construction from a geometric model, before focusing on former ray traversal methods.
CDT description
A Delaunay tetrahedralization of a set of points $X \subset E^3$ is a set of tetrahedra occupying the whole space and respecting the Delaunay criterion (Delaunay, 1934): a tetrahedron $T$, defined by four vertices $V \subset X$, is a Delaunay tetrahedron if there exists a circumscribed sphere $S$ of $T$ such that no point of $X \setminus \{V\}$ is inside $S$. Figure 1 illustrates this concept in 2D. A Delaunay tetrahedralization is "constrained" if it respects the scene geometry. In other words, all the geometric primitives are necessarily merged with the faces of the tetrahedra making up the partition.
Three kinds of CDT exist: the usual constrained Delaunay tetrahedralization [START_REF] Chew | Constrained Delaunay triangulations[END_REF], the conforming Delaunay tetrahedralization [START_REF] Edelsbrunner | An upper bound for conforming delaunay triangulations[END_REF] and the quality Delaunay tetrahedralization [START_REF] Shewchuk | Tetrahedral Mesh Generation by Delaunay Refinement[END_REF]. In a ray tracing context, [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] proved that the quality Delaunay tetrahedralization is the most efficient to traverse.
CDT construction
CDT cannot be built from every geometric models. A necessary but sufficient condition is that the model is a piecewise linear complex (PLC) [START_REF] Miller | Control Volume Meshes using Sphere Packing: Generation, Refinement and Coarsening[END_REF]. In 3D, any non empty intersection between two faces of a PLC must correspond to either a shared edge or vertex. In other words, there is no self-intersection (Figure 2). In computer graphics, a scene is generally represented as an unstructured set of polygons. In such a case, some self-intersections may exist. Nevertheless, it is still possible to construct PLC using a mesh repair technique such as [START_REF] Zhou | Mesh Arrangements for Solid Geometry[END_REF].
CDT can be built from a given PLC using the Si's method [START_REF] Si | On Refinement of Constrained Delaunay Tetrahedralizations[END_REF]. It results in a tetrahedral mesh, containing two kinds of faces: occlusive faces, belonging to the scene geometry; and some nonocclusive faces, introduced to build the partition. Obviously, a given ray should traverse the latter, as nonocclusive faces do not belong to the input geometry.
CDT traversal
Finding the closest intersection between a ray and CDT geometry is done in two main steps. First, the tetrahedron containing the ray origin is located. Second, the ray goes through the tetrahedralization by traversing one tetrahedron at a time until hitting an occlusive face. This process is illustrated in Figure 3. Let us notice that there is no need to explicitly test intersections with the scene geometry, as usual acceleration structures do. This is done implicitly by searching the exit face from inside a tetrahedron.
Locating ray origin
Using pinhole camera model, all primary rays start from the same origin. For an interactive application locating this origin is needed only for the first frame, hence it is a negligible problem. Indeed, camera motion generally corresponds to a translation, for instance when the camera is shifted, or when ray origins are locally perturbed for depth-of-field effect. Using a maximal distance in the traversal algorithm efficiently solves this kind of move.
Locating the origin of non primary rays is avoided by exploiting implicit ray connectivity inside CDT: both starting point and volume correspond to the arrival of the previous ray.
Exit face search
Several methods have been proposed in order to find the exit face of a ray from inside a tetrahedron. [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF] present four different ones. The first uses four ray/plane intersections and is similar to [START_REF] Garrity | Raytracing Irregular Volume Data[END_REF]. The second is based on half space classification. The third finds the exit face using 6 permuted inner products (called side and noted ⊙) of Plücker coordinates [START_REF] Shoemake | Plücker coordinate tutorial[END_REF]. It is similar to [START_REF] Platis | Fast Ray-Tetrahedron Intersection Using Plucker Coordinates[END_REF] technique. Their fourth and fastest method uses 3 to 6 Scalar Triple Products (STP). It is remarkable that none of these four methods exploits the knowledge of the ray entry face.
For volume rendering, [START_REF] Marmitt | Fast Ray Traversal of Tetrahedral and Hexahedral Meshes for Direct Volume Rendering[END_REF] extend [START_REF] Platis | Fast Ray-Tetrahedron Intersection Using Plucker Coordinates[END_REF]. Their method (from now MS06) exploits neighborhood relations between tetrahedra to automatically discard the entry face. It finds the exit face using 2,67 side products on average. Since the number of products varies, MS06 exhibits some thread divergence in parallel environment. This drawback also appears with the fastest Lagae et al. method. All these methods are not directly usable on GPU, due to numerical instability. Indeed, the insufficient arithmetic precision with 32-bits floats causes some failures to traverse CDT, leading to infinite loops.
In this paper, we propose a new traversal algorithm, based on Plücker coordinates. Like MS06, it exploits the neighborhood relations between faces. The originality lies in our specific tetrahedron representation, allowing to use exactly 2 optimized side products.
NEW TRAVERSAL ALGORITHM
CDT traversal algorithm is a loop, searching for the exit face from inside a tetrahedron (Figure 3). We propose a new algorithm, both fast and robust. It uses Plücker coordinates, i.e. six coordinates corresponding to the line direction u and moment v. Such a line is oriented: it passes through a first point p, and then a second one q. Then, u = qp and v = p × q.
For two lines l = {u : v} and
l ′ = {u ′ : v ′ }, the sign of the side product l ⊙ l ′ = u • v ′ + v • u ′
indicates the relative orientation of the two lines: negative value means clockwise orientation, zero value indicates intersection, and positive value signifies counterclockwise orientation [START_REF] Shoemake | Plücker coordinate tutorial[END_REF].
Exit face search
Our algorithm assumes that the entry face is known, and that the ray stabs the current tetrahedron. For a given entry face, we use its complement in the tetrahedron, i.e. the part made of one vertex, three edges and three faces. We denote Λ 0 , Λ 1 and Λ 2 the complement edges, with counterclockwise orientation from inside the tetrahedron (Figure 4). We number complement faces with a local identifier from 0 to 2, such that: face 0 is bounded by Λ 0 and Λ 2 , face 1 is bounded by Λ 1 and Λ 0 , and face 2 is bounded by Λ 2 and Λ 1 . Using Plücker side product, the face stabbed by ray r is:
Λ 2 Λ 0 Λ 1 r r ⊙ Λ 1 r ⊙ Λ 2 2 1 < 0 ≥ 0 ≥ 0 r ⊙ Λ 0 < 0 < 0 ≥ 0 1 0 (b) (a)
• face 0, if and only if r turns counterclockwise around Λ 0 and clockwise around Λ 2 (r ⊙ Λ 0 ≥ 0 and r ⊙ Λ 2 < 0);
• face 1, if and only if r turns counterclockwise around Λ 1 and clockwise around Λ 0 (r ⊙ Λ 1 ≥ 0 and r ⊙ Λ 0 < 0);
• face 2, if and only if r turns counterclockwise around Λ 2 and clockwise around Λ 1 (r ⊙ Λ 2 ≥ 0 and r ⊙ Λ 1 < 0).
We compact these conditions into a decision tree (Figure 4(b)). Each leaf corresponds to an exit face, and each interior node represents a side product between r and a line Λ i . At the root, we check r ⊙ Λ 2 . If it is negative (clockwise), then r cannot stab face 2: in the left subtree, we only have to determine if r stabs face 0 or 1, using their shared edge Λ 0 . Otherwise, r turns counterclockwise around Λ 2 and so cannot stab face 0, and the right subtree we check if r stabs face 1 or 2 using their shared edge Λ 1 . With Figure 4(a) example, r turns clockwise around Λ 2 and then counterclockwise around Λ 0 ; so, r exits through face 0.
Require: F e = {Λ 0 , Λ 1 , Λ 2 }: entry face; Λ r : ray; Ensure: F s : exit face;
1: side ← Λ r ⊙ F e .Λ 2 ; 2: id ← (side ≥ 0); {id ∈ {0, 1}} 3: side ← Λ r ⊙ F e .Λ id ; 4: id ← id + (side < 0); {id ∈ {0, 1, 2}} 5: F s ← getFace(F e ,id); 6: return F s ;
Algorithm 1: Exit face search from inside a tetrahedron.
Exit
Entry face identifier
F 0 F 1 F 2 F 3 0 F 1 F 0 F 0 F 0 1 F 2 F 3 F 1 F 2 2 F 3 F 2 F 3 F 1
Table 1: Exit face according to the entry face and a local identifier in {0, 1, 2}, following a consistent face numbering (Figure 5(a)).
Since every decision tree branch has a fixed depth of 2, our new exit face search method answers using exactly two side products. Moreover, it is optimized to run efficiently without any conditional instruction (Algorithm 1). Notice that leave labels form two pairs from left to right: the first pair (0,1) is equal to the second (1,2), minus 1. Then, it uses that successful logical test returns 1 (and 0 in failure case) to decide which face to discard. So, the test r ⊙ Λ 2 ≥ 0 allows to decide if we have to consider the first or the second pair. Finally, the same method is used with either the line Λ 0 or Λ 1 .
This algorithm ends with getFace function call. This function returns the tetrahedron face number according to the entry face and to the exit face label. It answers using a lookup-table, defined using simple combinatorics (Table 1), assuming a consistent labeling of tetrahedron faces (Figure 5(a)).
Data structure
Algorithm 1 works for any entry face of any tetrahedron. It relies on two specific representations of the tetrahedron faces: a local identifier in {0, 1, 2}, and global face F i , i ∈ [0 . . . 3]. For a given face, it uses 3 Plücker lines Λ i . Since such lines contain 6 coordinates, a face needs 18 single precision floats for the lines (18 × 32 bits), plus brdf and neighborhood data (tetrahedron and face numbers).
To reduce data size and balance GPU computations and memory accesses, we dynamically calculate the Plücker lines knowing their extremities: each line starts from a face vertex and ends with the complement vertex. So, we need all the tetrahedron vertices. We arrange the faces such that their complement vertex have the same number, implicitly known. Vertices are stored into tetrahedra (for coalescent memory accesses), and vertex indices (in [0 . . . 3]) are stored into faces. This leads to the following data structure: To save memory and so bandwidth, we compact the structure Face. The neighboring face (the field face) is a number between 0 and 3; it can be encoded using two bits, and so packed with the field tetra, corresponding to the neighboring tetrahedron. Thus, tetrahedron identifiers are encoded on 30 bits, allowing a maximum of one billion tetrahedra. In a similar way, field idV needs only 2 bits per vertex. But, they are common to all the tetrahedra, and so are stored only once for all into 4 unsigned char. Hence, a face needs 8 bytes, and a full tetrahedron 80 bytes. Notice that, on GPU a vertex is represented by 4 floats to have aligned memory accesses. Then on GPU a full tetrahedron needs 96 bytes. Figure 5 proposes an example: for F 3 (made using the complement vertex V 3 and counterclockwise vertexes V 1 , V 0 and V 2 ), we can deduce that 2 gives the description of faces according to their vertices and edges, following face numbering presented in Figure 5(a).
V 3 V 2 V 1 V 0 F 1 F 0 F 2 F 3 (a) V 3 F 3 Λ 2 Λ 0 Λ 1 (
Λ 0 = V 1 V 3 , Λ 1 = V 0 V 3 and Λ 2 = V 2 V 3 .
Λ 0 = V 1 V 3 , Λ 1 = V 0 V 3 and Λ 2 = V 2 V 3 . Table
F Vertexes Λ 0 Λ 1 Λ 2 0 {3, 1, 2} V 3 V 0 V 1 V 0 V 2 V 0 1 {2, 0, 3} V 2 V 1 V 0 V 1 V 3 V 1 2 {3, 0, 1} V 3 V 2 V 0 V 2 V 1 V 2 3 {1, 0, 2} V 1 V 3 V 0 V 3 V 2 V 3
Table 2: Complement edges of entry face F are implicitly by the face complement vertex (identified by and its vertices in counterclockwise order.
Exiting the starting volume
1 assumes known the entry face. This condition is not fulfilled for the starting tetrahedron. Algorithm 1 must be adapted in that case. A simple solution lies in using a decision tree of depth 4, leading to three Plücker side products. One can settle this tree starting with any edge to discriminate between two faces, and so on with the children.
Nevertheless, a simpler but equivalent solution exists. Once the root fixed, we have only three possible exit faces. This corresponds to Algorithm 1, as if the discarded face was the entry one. So, we just choose one edge to discard a face and then we call Algorithm 1 with the discarded exit face as the fake entry one. This leads to Algorithm 2. We naturally choose edge V 2 V 3 shared by faces F 0 and F 1 (Figure 5(a)). If the side product is negative, then we cannot exit through F 1 . Else, with a positive or null value, we cannot exit through F 0 . Thus, the starting tetrahedron problem is solved using three and only three side products.
Require: T = {V i , F i } i∈[0...3] : Tetrahedron; Λ r : Ray; Ensure: F s : exit face; 1: side ← Λ r ⊙V 2 V 3 ; 2: f← side < 0; {f∈ {0, 1}} 3: return ExitTetra(F f , Λ r ); {Algorithm 1}
Algorithm 2: Exit face search from the starting tetrahedron.
Efficient side product
Both Algorithm 1 and 2 use Plücker side products. A naive approach results in 23 operations per side product: to calculate Plücker coordinates, we need 3 subtractions for its direction and 6 multiplications and 3 subtractions for its moment. Then, product needs multiplications and 5 additions. The two side products in Algorithm 1 result in 46 operations.
We propose a new method using less operations. It rests upon a coordinate system translation to the complement vertex V f of the entry face. In this local system, lines Λ i have a nil moment (since they contain the origin). So, side products are inner products of vectors having only 3 coordinates: each one needs 3 multiplications and 2 additions. Moreover, line directions are computed using 3 subtractions. Hence, such side products need only 8 operations.
Nevertheless, we also need to modify Plücker coordinates of the ray r to obtain valid side products. Let us recall how a Plücker line is made. We compute its direction u using two points p and q on the line, and its moment v with p × q = p × u. In the local coordinates system, the new line coordinates must be calculated using translated points. The direction is obviously the same, only v is modified:
v ′ = (p -V f ) × u = p × u -V f × u = v -V f × u.
So, v ′ is calculated using 12 operations: 3 subtractions, 6 multiplications and 3 subtractions. This ray transformation is done once per tetrahedron, the local coordinates system being shared for all the lines Λ i . As a conclusion, the number of arithmetic operations involved in Algorithm 1 can be decreased from 46 to 28, saving about 40% of computations.
EXPERIMENTS
This section discusses some experiments made using our new traversal algorithm.
Results
Performance is evaluated using three objects tetrahedralized using Tetgen [START_REF] Si | TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator[END_REF]. Table 3 sums up their main characteristics and measured performance. The simplest object is constructed from a banana model, with 25k occlusive faces. The other two correspond to well-known Stanford's objects: BUNNY and ARMADILLO. Their CDT respectively count 200k and 1.1M occlusive faces. We use quality CDT, introducing new vertices into object models, explaining the high number of faces our three objects have.
Performance is measured in millions of ray cast per second (Mrays/s) using ray casting, 1024 × 1024 pixels and no anti-aliasing. The used computer possesses an Intel R Core TM i7-4930K CPU @ 3.40Ghz, 32 Gb RAM and NVidia R GeForce R GTX 680. Algorithms are made parallel on CPU (OpenMP) and GPU (CUDA, with persistent threads [START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF]). On average, CPU ray casting reaches 9 Mrays/s, GPU version 280 Mrays/s.
Traversal
Closest ray/object intersection is found by traversing CDT one tetrahedron at a time until hitting an occlusive face. The ray traversal complexity is linear in the number of traversed tetrahedra. not strictly proportional, mainly due to memory accesses that become more important when more tetrahedra are traversed, leading to more memory cache defaults. False-colored image of point of view (B) reveals that rays going close to object boundary traverse more tetrahedra.
Numerical robustness
Using floating-point numbers can cause errors due to numerical instability. Tetgen uses geometric predicates (e.g. (Shewchuk, 1996) or [START_REF] Devillers | Efficient Exact Geometric Predicates for Delaunay Triangulations[END_REF]) to construct robust CDT. If this is common practice in algebraic geometry, it is not the case in rendering. Hence, it is too expensive to be used in CDT ray traversal.
We experimented three methods proposed in (Lagae and Dutré, 2008) (ray/plane intersection tests, Plücker coordinates and STP), plus the method proposed in [START_REF] Marmitt | Fast Ray Traversal of Tetrahedral and Hexahedral Meshes for Direct Volume Rendering[END_REF]) (MS06) (Section 2.3.2). We noticed they all suffer from numerical errors either on CPU or GPU. Indeed, calculation are not enough precise with rather flat tetrahedra. Thus, without extra treatment (like moving the vertices) these algorithms may return a wrong exit face or do not find any face at all (no test is valid). view series.
In contrast, we did not obtain wrong results using our method. It can be explained by the smaller number of performed arithmetic operations; less numerical errors accumulated, more accurate results. CPU results show that our method is much more efficient than former ones. This behavior is expected since our new method requires less arithmetic operations. STP is the fastest previous method, but is 83% slower than ours.
Exit face search comparison
On GPU, results are slightly different. For example, Plücker method is faster than STP. Indeed, even if it requires more operations, it does not add extra thread divergence. Hence, it is more adapted to GPU. Among the previous GPU methods, the most efficient is MS06, still 59% slower than ours.
State-of-the-art comparison
In [START_REF] Lagae | Accelerating Ray Tracing using Constrained Tetrahedralizations[END_REF], authors noticed that rendering using CDT as acceleration structure takes two to three more computation times than using kdtree. In this last section, we check if it is still the case using our new tetrahedron exit algorithm and on GPU. We compare our GPU ray-tracer with the state-of-the-art ray tracer [START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF], always using the same computer. Their acceleration structure is BVH, constructed using SAH (MacDonald and [START_REF] Booth | Heuristics for Ray Tracing Using Space Subdivision[END_REF] and split of large triangles [START_REF] Ernst | Early Split Clipping for Bounding Volume Hierarchies[END_REF].
To our knowledge, nowadays their implementation is the fastest GPU one. Table 6 sums up this comparison. Results show that CDT is still not a faster acceleration structure than classical ones (at least than BVH on GPU). First, the timings show larger amplitude using CDT than BVH. Moreover, while CDT is on average faster than BVH with BANANA and BUNNY models, it is no more true using ARMADILLO. This is directly linked to the traversal complexity of the two structures. BVH being built up following SAH, its performance is less impacted with the geometry input size, contrary to CDT where this size has a direct impact on performance. Clearly, a heuristics similar to SAH is missing for tetrahedralization.
CONCLUSION
This article proposes a new CDT ray traversal algorithm. It is based upon a specific tetrahedron representation, and fast Plücker side products. It uses less arithmetic operations than previous methods. Last but not least, it does not involve any conditional instructions, employing two and only two side products to exit a given tetrahedron.
This algorithm exhibits several advantages compared to the previous ones. Firstly it is inherently faster, requiring less arithmetic operations. Secondly it is more adapted to parallel computing, since having a fixed number of operations it does not involve extra thread divergence. Finally, it is robust and works with 32-bits floats either on CPU or GPU.
As future work, we plan to design a new construction heuristic, to obtain as fast to traverse as possible CDT. Indeed, CDT traversal speed highly depends on its construction. CDT traversal complexity is linear in the number of traversed tetrahedra: the less tra-versed tetrahedra, the more high performance. Before SAH introduction, the same problem existed with well-known acceleration structures like kd-tree and BVH, for which performance highly depends on the geometric model. Since CDT for ray-tracing is a recent method, we expect that similar heuristics exists.
Figure 1 :
1 Figure 1: Delaunay triangulation: no vertex is inside a circumscribed circle.
Examples of two non-PLC configurations: intersection between (a) two faces, (b) an edge and a face.
Figure 3 :
3 Figure 3: CDT traversal overview: the main key of any CDT traversal algorithm lies in the "exit face search" part.
Figure 4 :
4 Figure 4: Exit face search example: (a) ray r enters the tetrahedron through the back face; (b) r ⊙ Λ 2 < 0 and r ⊙ Λ 0 ≥ 0, so the exit face is identified by 0.
s t r u c t F a c e { i n t b r d f ; / / -1: Non-O c c l u s i v e i n t t e t r a ; / / n e i g h b o r i n t f a c e ; / / n e i g h b o r i n t idV [ 3 ] ; / / f a c e v e r t i c e s } ;
b) 5: Description of a tetrahedron: (a) vertices and faces numbering; (b) the complement vertex for F 3 is {V 3 }, and its edges are
s t r u c t T e t r a h e d r o n { f l o a t 3 V [ 4 ] ; / / v e r t i c e s F a c e F [ 4 ] ; / / f a c e s } ;
Figure 6 :
6 Figure 6: Rendering times on CPU in ms (T , red curve) and number of traversed tetrahedra in millions (Φ, gray bars) using 1,282 points of view and BUNNY; (A) T = 40.1 ms -Φ = 9.6; (B) T = 122 ms -Φ = 22.71.
Table 4 :
4 Table4reports for each object the number of rays per image concerned by this problem, averaged over points of Numerical errors impact on GPU: number of rays suffering from wrong results for 1024 × 1024 pixels, and averaged over about 1, 300 points of view.
BANANA BUNNY ARMADILLO
Ray/plane 33.27 40.85 74.85
Plücker 3.6 22.25 412.13
STP 63.07 204.89 456.65
MS06 0.0007 0.004 0.422
Ours 0 0 0
Table 5 :
5 This section compares performance of our exit face search algorithm with the same 4 previous methods: ray/plane intersection tests, Plücker coordinates, STP and MS06 (Section 2.3.2). Statistics are summed up in Table5. Times are measured for 16,384 random rays stabbing 10,000 random tetrahedra, both on CPU (using one thread) and GPU. Exit face search comparison: time (in ms) to determine the exit face for 10,000 tetrahedra and 16,384 random rays per tetrahedron; on CPU (single thread) and on GPU.
Method Time (ms) CPU GPU
Ray/plane 15,623 36
Plücker 10,101 28
STP 4,876 29
MS06 5,994 21
Ours 2,663 13
Table 6 :
6 Performance comparison with[START_REF] Aila | Understanding the efficiency of ray traversal on GPUs -Kepler and Fermi addendum[END_REF], in number of frames per second.
CDT BVH (Aila et al., 2012)
BANANA 315-947 200-260
BUNNY 130-1040 160-260
ARMADILLO 82-160 130-260 | 27,403 | [
"7905",
"7913",
"6203"
] | [
"444300",
"444300",
"444300"
] |
01486607 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2009 | https://hal.science/hal-01486607/file/doc00026658.pdf | ISOLATED VS COORDINATED RAMP METERING STRATEGIES: FIELD EVALUATION RESULTS IN FRANCE
INTRODUCTION
Severe traffic congestion is the daily lot of drivers using the motorway network, especially in and around major cities and built-up areas. On intercity motorways, this is due to heavy traffic during holiday weekends when many people leave the cities at the same time, or to accidents or exceptional weather conditions. In the cities themselves, congestion is a recurrent problem. The control measures which are produced in a coordinated way to improve traffic performance include signal control, ramp metering and route guidance. With respect to the ramp metering techniques, one successful approach, for example, is the ALINEA strategy [START_REF] Haj-Salem | ALINEA -A Local Feedback Control Law for on-ramp metering: A real life study[END_REF][START_REF] Haj-Salem | Ramp Metering Impact on Urban Corridor Traffic : Field Results[END_REF][START_REF] Papageorgiou | ALINEA: A Local Feedback Control Law for on-ramp metering[END_REF] which maintains locally the density on the carriage way around the critical value. Nevertheless, due to the synergetic effect of all metered on-ramps (they interact on each other at different time scale) the coordinated strategy could be more efficient than a local strategy. In this paper, some field trials, conducted in the southern part of Ile de France motorway in Paris are presented. Field trials have been design and executed over a period of several months in the aim of investigating the traffic impact of ramp metering measures. More specifically, the field trials, reported in this paper, include a comprehensive data collection from the considered network (A6W motorway) over several weeks with isolated and coordinated ramp metering strategies. The main objectives of the field trials were the development, the test and the evaluation of the traffic impact of new isolated and coordinated strategies. This paper is organized as follows: section 2 is dedicated to the test site description. Section 3 concerns the brief description of the candidate strategies. The last section 4 is focused on the description of the used criterion on one hand and the other hand the field results analysis.
FIELD TEST DESCRIPTION
The traffic management of "Ile de France" motorway network is under both main authorities: the Paris City "Ville de Paris" authority operates the Paris inner urban network and the ring way and the DIRIF "Direction interdépartementale de la Région d'Ile de France" authority operates the motorway network around Paris city (A1 to A13). The DIRIF motorway network covers around 700 km including A1 to A13 motorways. Since 1988, DIRIF has launched a project called "SIRIUS: Service d'Information pour un Réseau Intelligible aux USagers" aiming at optimising the traffic conditions on the overall "Ile de France" motorway network in terms of real-time traffic control strategies such as ramp metering, automatic incident detection, speed control, lane assignment, traffic user's information/guidance (travel time display) etc.).
The particular motorway network considered in this field evaluation study is in the southern part of the Ile de France motorway network (A6W, figure 1). The considered site is one among the most critical areas of the Ile de France motorway network. The total length covers around 20 km including several on/off ramps. Morning and evening peak congestions extend over several hours and several kilometres. A recurrent congestion in the morning peak period typically starts around the on ramp Chilly and it spreads subsequently over several kilometres on A6W motorway axis. The considered motorway axis is fully equipped with measurement stations. The field test covers around 20 km length and includes 33 measurements stations (loop detectors) available on the carriageway, located around 500 m from each other. Each measurement station provides traffic volume, occupancy and speed measurements. The on-ramps and off-ramps are fully equipped also. In particular at each on-ramp, tow measurement stations are installed: the first one is located at the nose of the ramp behind the signal light which used for the realised onramp volume measurements and the second at the top of the on-ramp which used for the activation of the override tactic when the control is applied.
CANDIDATE STRATEGY DESCRIPTIONS
The implemented strategies are the following:
1. No control 2. ALINEA 3. VC_ALINEA (Variable Cycle ALINEA) 4. Coordination (CORDIN)
ALINEA strategy
ALINEA is based on a feedback philosophy and the control law is the following: cycle k is found to be lower (higher) than the desired occupancy O * , the second term of the right hand side of the equation becomes positive (negative) and the ordered on-ramp volume r k is increased (decreased) as compared to its last value r k-1 . Clearly, the feedback law acts in the same way both for congested and for light traffic (no switchings are necessary).
r r K O O k k k = + - - 1 ( )
VC_ALINEA Strategy
The basic philosophy of Variable Cycle ALINEA (VC_ALINEA) is the computation of the split as control variable instead of the green duration. The main objective of VC_ALINEA is to apply different cycles with respect to the on-ramp traffic demand and the traffic conditions.
The split is defined as: α = G/C, where G is the green duration, C is the cycle duration. The VC_ALINEA control law is derived from ALINEA and has the following form:
α(k) = α(k-1) + K'[Ô-O out (k)]
Basically, the derivation of VC_ALINEA control law (see EURAMP Deliverable D3.1) consists to convert the computed ALINEA on-ramp volume r(k) in green (or flashing amber) duration. This conversion is based on the measurement of the maximum on-ramp flow (q sat ).
In case of ALINEA, the calculated green time is constrained by the minimum and the Maximum green. Similarly, the split variable as a control law (α) is constrained by two limits also: the maximum cycle C M duration and the minimum cycle duration C m . This means that α is varying between α min and α max where
α min = G m / C m α max = G M / C M
Where: G m and G M are the fixed minimum green and maximum green durations respectively. C m and C M are respectively the Minimum and Maximum cycle duration:
With sat k k q G r =
we have:
( )
out k sat R k k o ô q K G G 1 1 - - - + =
(1)
G k : Calculated Green duration. q sat : Maximum output flow on the ramp.
Dividing equation ( 1) by C k , we obtain the following VC_ALINEA control law:
( )
out k k sat R k k o ô C q K 1 1 - - - + = α α (2)
The range of control variable α is defined by: In a fluid condition:
( ) ( ) + + = + - = ⇔ = + + = ⇔ ≥ min min min min 1 R A G C R A G R R R A G G thr α α α α α
And, in a congested condition:
( ) = = ⇔ + - = = ⇔ < α α α α min min min min min G C G G G A G R G G thr
Coordinated strategy (CORDIN)
The main philosophy of CORDIN strategy is to use the storage capacities of the upstream onramps in case of apparition of downstream congestion of the controlled on-ramp. Under critical on-ramp queue constraint, an anticipation of the control is applied at the upstream onramps of the head of the congestion. This means that the level of the traffic improvement in case of the application of CORDIN strategy is much related to the geometry of each on-ramp and particularly to the storage capacity. CORDIN is a based rule coordinated strategy using ALINEA strategy first and anticipating the control action. It consists in the following steps:
1. Application of ALINEA to all controlled on-ramps -> control sets U al . 2. Find the location of the head of the congestion by testing if the first on-ramp (r i ) where ALINEA is active (O i > 0.9 Ô i , cr ) and the queue constraint not active. 3. For every upstream on-ramp r up = r i +1, .., Nb_Ramps: if the queue constraint of the onramp (r up ) is NOT active then correction of the ALINEA command according to U coor = α 1 U al if r up = r i +1 and U coor = α 2 U al for the other upstream ramps, where (α 1 ) and (α 2 ) are parameters to be calibrated; otherwise do nothing. 4. Application of the new coordinated control sets on the field 5. Wait the next cycle time 6. Go to step 1.
EVALUATION RESULTS
Available data
The different strategies have been applied in weekly alternation ALINEA, VC_ALINEA, CORDIN and no control respectively over the period from the middle of September 2006, until the end of January, 2007, and to perform subsequently, comparative assessments of the traffic impact. Full 140 days of collected data were stored in the SIRIUS database. Screening the collected data was firstly necessary in order to discard days which include major detector failures. Secondly, all days with atypical traffic patterns (essentially weekends and holidays) were discarded. Thirdly, in order to preserve the results comparability, all days including significant incidents or accidents (according to the incident files provided by the Police) were also left out. This screening procedure eventually delivered 11, 10, 11 and 9 days of data using No control, ALINEA, VC_ALINEA and CORDIN strategies respectively. In order to minimize the impact of demand variations on the comparative evaluation results, the selected days were averaged for each strategy.
Assessments criteria
The evaluation procedure was based on a computation of several criteria for assessing and comparing the efficiency of the ramp metering installation. These criteria were calculated for each simulation run. The horizon of the simulation is fixed to the overall period (5:00 -22:00), the morning peak period (6:00-12:00) and the evening period (17:00-21:00). The following quantitative criteria were considered for the evaluation of the control strategy: 1. The total time spent on the network (TTS) expressed in vh*h 2. The total number of run kilometres (TTD) expressed in vh*km 3. The mean speed (MS) expressed in Km/h 4. The travel time expressed in second from one origin to the main destination 5. Other environment criteria also were computed:
-Fuel consumption (litres) [START_REF] Jurvillier | Simulation de temps de parcours et modèle de consommation sur une autoroute urbaine[END_REF] - The evaluation results were reported in the Deliverable D6.3 of EURAMP Project. In summary, the results obtained can be summarized as follows: -The VC_ALINEA seems to provide better results than ALINEA in term of the TTS index (12%). However, we observe that the TTD is decrease by 5% whereas for ALINEA, the TTD is decreases by 2% compared with the No control case.
-The CORDIN strategy provides change of 12%, 0% and 11% for TTS, TTD and MS respectively compared with the No control case.
-Figure 4 reports the congestion mapping of A6W and visually confirm these conclusions.
-With respect to the Total Travel Time (TTT), figure 5 depicts the obtained results. The CORDIN strategy gives better results than the isolated strategies. As far as the travelled distance increases, the gain in term of travel times increase also. The maximum gain of 17 % is observed for CORDIN strategy. -The emission indices are decrease for all strategies. In particular, the gains of HC and CO indices are of -6%,-9% and -7% for ALINEA, VC_ALINEA and CORDIN respectively By considering the TTS and TTD costs hypothesis in France, the results of the cost benefit analysis, with regard to the investments and the maintenance of the ramp metering system, indicated a collective benefit per year (250 of working days) of 2.4M€, 2.44M€ and 3.5 M€ for ALINEA, VC_ALINEA and CORDIN respectively.
Figure 1 .
1 Figure 1. Field test site
*
where r k and r k-1 are on-ramp volumes at discrete time periods k and k-1 respectively, O k is the measured downstream occupancy at discrete time k, O * is a pre-set desired occupancy value (typically O * is set equal to the critical occupancy) and K is a regulation parameter. The feedback law suggests a fairly plausible control behaviour: If the measured occupancy O k at
Figure 3
3 Figure 3 depicts one example of the applied correction parameters (α 1, α 2 ) after a detection of the head of the congestion (MASTER on-ramp).
Figure 3 :
3 Figure 3: Example of CORDIN parameters
Pollutant emission of CO & Hydrocarbon (HC) expressed in kg (European project TR 1030, INRESPONSE, D91[START_REF] Ademe | Émission de Polluants et consommation liée à la circulation routière-Paramètres déterminant et méthodes de quantification, "connaître pour agir, guide et cahiers techniques[END_REF][START_REF] Ademe | Émission de Polluants et consommation liée à la circulation routière-Paramètres déterminant et méthodes de quantification, "connaître pour agir, guide et cahiers techniques[END_REF]
Figure 4 :
4 Figure 4: Congestion mapping of the 4 strategies
Figure 5 .
5 Figure 5. Gain = Fn(distance) of the candidate strategies
CONCLUSIONS
The obtained results of this field trial are leads the DIRIF authorities to generalize the implementation of the ramp metering technique to the overall motorway network. Renewal of ACCES_1 system is decided current 2007. The new system is called ACCES_2 and it is implemented in SIRIUS current 2008. The DIRIF authorities decided at the first step, to test and evaluated the ALINEA strategy on the East part of the Ile de France motorway network including 22 on-ramps. The second step consists to the extension of the generalization of ALINEA to 150 others existing on-ramps. The last step will concern the implementation of CORDIN strategy. | 13,620 | [
"1278852"
] | [
"81038",
"520615",
"81038"
] |
01486698 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://theses.hal.science/tel-01486698/file/ZUBIAGA_PENA_CARLOS_JORGE_2016.pdf | Carlos Jorge
Zubiaga Peña
Keywords: Appearance, shading, pre-filtered environment map, MatCap, Compositing Apparence, ombrage, cartes d'environnement pré-flitrées, MatCap, Compositing
Traditional artists paint directly on a canvas and create plausible appearances of real-world scenes. In contrast, Computer Graphics artists define objects on a virtual scene (3D meshes, materials and light sources), and use complex algorithms (rendering) to reproduce their appearance. On the one hand, painting techniques permit to freely define appearance. On the other hand, rendering techniques permit to modify separately and dynamically the different elements that compose the scene.
In this thesis we present a middle-ground approach to manipulate appearance. We offer 3D-like manipulation abilities while working on the 2D space. We first study the impact on shading of materials as band-pass filters of lighting. We present a small set of local statistical relationships between material/lighting and shading. These relationships are used to mimic modifications on material or lighting from an artist-created image of a sphere. Techniques known as LitSpheres/MatCaps use these kinds of images to transfer their appearance to arbitrary-shaped objects. Our technique proves the possibility to mimic 3D-like modifications of light and material from an input artwork in 2D. We present a different technique to modify the third element involved on the visual appearance of an object: its geometry. In this case we use as input rendered images alongside with 3D information of the scene output in so-called auxiliary buffers. We are able to recover geometry-independent shading for each object surface, assuming no spatial variations for each recovered surface. The recovered shading can be used to modify arbitrarily the local shape of the object interactively without the need to re-render the scene.
Chapter 1
Introduction
One of the main goals of image creation in Computer Graphics is to obtain a picture which conveys a specific appearance. We first introduce the general two approaches of image creation in the Section 1.1, either by directly painting the image in 2D or by rendering a 3D scene. We also present middle-ground approaches which work on 2D with images containing 3D information. It is important to note that our work will take place using this middleground approach. We define our goal in Section 1.2 as 'granting 3D-like control over image appearance in 2D space'. Our goal emerges from the limitations of existing techniques to manipulate 3D appearance in existing images in 2D. Painted images lack any kind of 3D information, while only partial geometric information can be output by rendering. In any case, the available information is not enough to fully control 3D appearance. Finally in Section 1.3 we present the contributions brought by the thesis.
Context
Image creation can be done using different techniques. They can be gathered into two main groups, depending if they work in the 2D image plane or in a 3D scene. On the one hand, traditional painting or the modern digital painting softwares work directly in 2D by assigning colors to a plane. On the other hand, artists create 3D scenes by defining and placing objects and light sources. Then the 3D scene is captured into an image by a rendering engine which simulates the process of taking a picture. There also exist techniques in between that use 3D information into 2D images to create or modify the colors of the image.
Painting
Traditional artists create images of observed or imagined real-world scenes by painting. These techniques are based on the deposition of colored paint onto a solid surface. Artists may use different kinds of pigments or paints, as well as different tools to apply them, from brushes to sprays or even body parts. Our perception of the depicted scene depends on intensity and color variations across the planar surface of the canvas. Generated images may be abstract or symbolic, but we are interested in the ones that can be considered as natural or realistic. Artists are capable to depict plausible appearances of the different elements that compose a scene. The complexity of reality is well captured by the design of object's shape and color. Artists achieve good impressions of a variety of materials under different lighting environment. This can be seen in Figure 1.1, where different object are shown ranging from organic nature to hand-crafted. Nowadays painting techniques have been integrated in computer system environments. Classical physical tools, like brushes or pigments, have been translated to digital ones (Figure 1.2). Moreover, digital systems provide a large set of useful techniques like the use of different layers, selections, simple shapes, etc. They also provide a set of image based operators that allow artists to manipulate color in a more complex way, like texturing, embossing or blurring. Despite the differences, both classical painting and modern digital systems share the idea of working directly in image space. Artists are able to depict appearances that look plausible, in a sense that they look real even if they would not be physically correct. Despite our perception of the painted objects as if they were or could be real, artist do not control physical processes. They just manipulate colors either by painting them or performing image based operations. They use variations of colors to represent objects made of different materials and how they would behave under a different illumination. The use of achromatic variations is called shading; it is used to convey volume or light source variations (Figure 1.3), as well as material effects. Shading may also correspond to variations of colors, so we can refer to shading in a colored or in a grey scale image. Carlos Jorge Zubiaga Peña In real life, perceived color variations of an object are the result of the interaction between lighting and object material properties and shape. Despite the difficult understanding of these interactions, artists are able to give good impressions of materials and how they would look like under certain illumination conditions. However, once a digital painting is created it cannot be modified afterwards: shape, material, or lighting cannot be manipulated.
Rendering
Contrary to 2D techniques, computer graphics provide an environment where artists define a scene based on physical 3D elements and their properties. Artists manipulate objects and light sources, they control object's shape (Fig. 1.4b) and material (Fig. 1.4c) and the type of light sources (Fig. 1.4a), as well as their positions. When an artist is satisfied with the scene definition, he selects a point of view to take a snapshot of the scene and gets an image as a result. The creation of 2D images from a 3D scene is called rendering. Rendering engines are software frameworks that use light transport processes to shade a scene. The theory of light transport defines how light is emitted from the light sources, how it interacts with the different objects of the scene and finally how it is captured in a 2d plane. In practice, light rays are traced from the point of view, per pixel in the image. When the rays reach an object surface, rays are either reflected, transmitted or absorbed, see Figure 1.5a. Rays continue their path until they reach a light source or they disappear by absorption, loss of energy or a limited number of reflections/refractions. At the same time, rays can also be traced from the light sources. Rendering engines usually mix both techniques by tracing rays from both directions, as shown in Figure 1.6. Figure 1.6: Rays may be both traced from the eye or image plane as well as from the light sources. When a ray reaches an object surface it is reflected, transmitted or absorbed.
Object geometry is defined by one or more 3D meshes composed of vertices, which form facets that describe the object surface. Vertices contain information about their 3D position, as well as other properties like their surface normal and tangent. The normal and tangent together describe a reference frame of the geometry at a local scale, which is commonly used in computer graphics to define how materials interact with lighting. This reference frame is used to define the interaction at a macroscopic level. In contrast, real-world interaction of light and a material at a microscopic level may turn out to be extremely complex. When a ray reaches a surface it can be scattered in any possible direction, rather than performing a perfect reflection. The way rays are scattered depends on the surface reflectance for opaque objects or the transmittance in the case of transparent or translucent objects. Materials are usually defined by analytical models with a few parameters; the control of those parameters allows artists to achieve a wide range of object appearances.
Manipulation of all the 3D properties of light, geometry and material allows artists to create images close to real-world appearances. Nevertheless, artists usually tweak images by manipulating shading in 2D until they reach the desired appearance. Those modifications are usually done for artistic reasons that require the avoidance of physically-based restrictions of the rendering engines, which make difficult to obtain a specific result. Artists usually start from the rendering engine output, from which they work to get their imagined desired image. Carlos Jorge Zubiaga Peña
Compositing
Shading can be separated into components depending on the effects of the material. Commonly we can treat independently shading coming from the diffuse or the specular reflections (see Figure 1.5b), as well as from the transparent/translucent or the emission effects. Therefore, rendering engines can outputs images of the different components of shading independently. In the post-processing stage, called compositing, those images are combined to obtain the final image, as shown in Figure 1.7. In parallel with shading images, rendering engines have the capacity to output auxiliary buffers containing 3D information. In general, one can output any kind of 3D information, by assigning them to the projected surface of objects in the image. Usually those buffers are used to test errors for debugging, but they can be used as well to perform shading modifications in post-process. They can be used to guide modifications of shading: for instance, positions or depth information are useful to add fog or create focusing effects like depth of fields. Auxiliary buffers may also be used to add shading locally. Having information about positions, normals and surface reflectance are enough to create new shading by adding local light sources. This is similar to a technique called Deferred Shading used in interactive rendering engines. It is based on a multi-pass pipeline, where the first pass produces the necessary auxiliary buffers and the other passes produce shading by adding the contribution of a discrete set of lights, as is shown in Figure 1.8. Instead of computing shading at each pass we can pre-compute it, if we only consider distant illumination. Distant illumination means that there is no spatial variation on the incoming lighting, therefore it only depends on the surface normal orientation. Thanks to this approximation we only need surface normals to shade an object (usually they are used projected in screen space). Typically, pre-computed shading values per hemisphere direction are given by filtering the values of the environment lighting using the material reflectance properties. These techniques are referred by the name pre-filtered environment maps or PEM (see Chapter 2, Section 2.3). Different material appearances are obtained by using different material filters, as seen in Figure 1.9a. Pre-computed values are stored in spherical structures that can be easily accessed, shading is obtained by fetching using normal buffers. Instead of filtering an environment map, pre-computed shading values may also be created by painting or obtained from images (photographs or artwork). A well known technique, call the LitSphere, defines how to fill colors on a sphere from a picture and then use this sphere to shade an object, similarly to pre-filtered environment map techniques. The idea of LitSphere it's been extensively used in sculpting software where it takes the name of MatCap (see Figure 1.9b), as shorthand of Material Capture. MatCaps depict plausible materials under an arbitrary environment lighting. In the thesis we decided to use MatCaps instead of LitSpheres to avoid misunderstanding with non photo-realistic shading, like cartoon shading. Despite the limitations of distant lighting (no visibility effects like shadows or inter-reflections), they create convincing shading appearances.
Summary
On the one hand, painting techniques permit direct manipulation of shading with no restrictions, allowing artists to achieve the specific appearance they desire. In contrast, artists cannot manipulate dynamically the elements represented (object shape and material) and how they are lit. On the other hand, global illumination rendering engines are based on a complete control of the 3D scene and a final costly render. Despite the complete set of tools provided to manipulate a scene before rendering, artists commonly modify the rendering output in post-processing using image-based techniques similar to digital painting. Postprocess modifications permit to avoid the physically based restrictions of the light transport algorithms.
As a middle-ground approach between the direct and static painting techniques and the dynamically controlled but physically-based restricted render engines, we find techniques which work in 2D and make use of 3D information in images or buffers. Those techniques may be used in post-process stage called compositing. Rendering engines can easily output image buffers with 3d properties like normal, positions or surface reflectance, which are usually called auxiliary buffers. Those buffers permit to generate or modify shading in ways different than digital painting, like the addition of local lighting or a few guided image operations (i.e. fog, re-texturing). Modifications of the original 3D properties (geometry, material or lighting) cannot be performed with a full modification on shading. A different way to employ auxiliary buffers is to use normal buffers alongside with pre-filtered environment maps or MatCaps/LitSpheres to shade objects. The geometry of the objects can be modified arbitrarily, but in contrast once pre-computed shading is defined, their depicted material and lighting cannot be modified.
Real-time 2D manipulation of plausible 3D appearance
Problem statement
Problem statement
Dynamic manipulation of appearance requires the control of three components: geometry, material and lighting. When we obtain an image independently of the way it has been created (painted or rendered) we lose the access to all components. Geometry is easily accessible, normal buffers may be generated by a rendering engine, but also may be obtained by scanners or estimated from images. Material are only accessible when we start from a 3D scene; the reflectance properties of the object can be projected to the image plane. Lighting in contrast is not accessible in any case. If we consider rendering engines, lighting is lost in the process of image creation. In the case of artwork, shading is created directly and information of lighting and materials is 'baked-in', therefore we do not have access to lighting or material separately.
Lighting structure is arbitrary complex and the incoming lighting per surface point varies in both the spatial and the angular domain, in other words, it varies per position and normal. The storage of the full lighting configuration is impractical, as we would need to store a complete environment lighting per pixel. Moreover, in the ideal case that we would have access to the whole lighting, the modification of the material, geometry or lighting will require a costly re-rendering process. In that case there will not be an advantage compared to rendering engine frameworks.
Our goal is to grant 3D-like control of image appearance in 2D space. We want to incorporate new tools to modify appearance in 2D using buffers containing 3D information. The objective is to be able to modify lighting, material and geometry in the image and obtain a plausible shading color. We develop our technique in 2 steps: first, we focus on the modification of light and material and then on the modification of geometry.
We base our work on the hypothesis that angular variations due to material and lighting can be mimicked by applying modifications directly on shading without having to decouple material and lighting explicitly. For that purpose we use structures similar to pre-filtered environment maps, where shading is stored independently of geometry.
In order to mimic material and lighting variations, we focus MatCaps. They are artistcreated images of spheres, which their shading depicts an unknown plausible material under an unknown environment lighting. We want to add dynamic control over lighting, like rotation, and also to the material, like modifications of reflectance color, increasing or decreasing of material roughness or controlling silhouette effects.
In order to mimic geometry modifications, we focus on the compositing stage of the image creation process. Perturbations of normals (e.g. Bump mapping) is a common tool in computer graphics, but it is restricted to the rendering stage. We want to grant similar solutions of the compositing stage. In this stage several shading buffers are output by a global illumination rendering process and at the same time several auxiliary buffers are made available. Our goal in this scenario is to obtain a plausible shading for the modified normals without having to re-render the scene. The avoidance of re-rendering will permit to alter normals interactively.
As described in the previous section, material reflectance, and as a consequence shading, can be considered as the addition of specular and diffuse components. Following this approach we may separate the manipulation of diffuse from specular reflections, which is similar to control differently low-frequency and high-frequency shading content. This approach can be considered in both cases, the MatCap and the compositing stage, see Figure 1.10. Meanwhile rendering engines can output both components separately, MatCaps will require a pre-process step to separate them. Carlos Jorge Zubiaga Peña
Contributions
The work is presented in three main chapters that capture the three contributions of the thesis. Chapter 3 present a local statistical analysis of the impact of lighting and material on shading. We introduce a statistical model to represent surface reflectance and we use it to derive statistical relationships between lighting/material and shading. At the end of the chapter we validate our study by analyzing measured materials using statistical measurements.
In Chapter 4 we develop a technique which makes use of the statistical relationships to manipulate material and lighting in a simple scene: an artistic image of a sphere (MatCap). We show how to estimate a few statistical properties of the depicted material on the MatCap, by making assumptions on lighting. Then those properties are used to modify shading by mimicking modifications on lighting or material, see Figure 1.11.
Contributions
Chapter 5 introduces a technique to manipulate local geometry (normals) at the compositing stage; we obtain plausible diffuse and specular shading results for the modified normals. To this end, we recover a single-view pre-filtered environment map per surface and per shading component. Then we show how to use these recovered pre-filetered environment maps to obtain plausible shading when modifications on normals are performed, see Figure1.12. Figure 1.12: Starting from shading and auxiliary buffers, our goal is to obtain a plausible shading color when modifying normals at compositing stage.
10
Carlos Jorge Zubiaga Peña
Chapter 2
Related Work
We are interested in the manipulation of shading in existing images. For that purpose we first describe the principles of rendering, in order to understand how virtual images are created as the interaction of geometry, material and lighting (Section 2.1). Given an input image a direct solution to modify its appearance is to recover the depicted geometry, material and lighting. These set of techniques are called inverse rendering (Section 2.2). Recovered components can be modified afterwards and a new rendering can be done. Inverse rendering is limited as it requires assumptions on lighting and materials which forbids its use in general cases. These techniques are restricted to physically-based rendering or photographs and they are not well defined to work with artworks. Moreover, a posterior rendering would limit the interactivity of the modification process. To avoid this tedious process, we found interesting to explore techniques that work with an intermediate representation of shading. Pre-filtered environment maps store the results of the interaction between material and lighting independently to geometry (Section 2.3). These techniques have been proven useful to shade objects in interactive applications, assuming distant lighting. Unfortunately there is no technique which permits to modify lighting or material once PEM are created.
Our work belongs to the domain of appearance manipulation. These techniques are based on the manipulation of shading without the restrictions of physically-based rendering (Section 2.4). However, the goal is to obtain images which appear plausible even if they are not physically correct. Therefore we also explore how the human visual system interprets shading (Section 2.5). We are interested into our capability to infer the former geometry, lighting and material form an image.
Shading and reflectance
We perceive objects by the light they reflect toward our eyes. The way objects reflect light depends on the material they are composed of. In the case of opaque objects it is described by their surface reflectance properties; incident light is considered either absorbed or reflected. Surface reflectance properties define how reflected light is distributed. In contrast, for transparent or translucent objects the light penetrates, scatters inside the object and eventually exists from a different point of the object surface. In computer graphics opaque object materials are defined by the Bidirectional Reflectance Distribution Functions (BRDF or f r ), introduced by Nicodemus [START_REF] Nicodemus | Directional reflectance and emissivity of an opaque surface[END_REF]. They are 4D functions of an incoming ω i and an outgoing direction ω o (e.g., light and view directions). The BRDF characterizes how much radiance is reflected in all lighting and viewing configurations, and may be considered as a black-box encapsulating light transport at a microscopic scale. Directions are classically parametrized by the spherical coordinates elevation θ and azimuth φ angles, according to the reference frame defined by the surface normal n and the tangent t as in Figure 2.1a. In order to guarantee a physically correct behavior a BRDF must follow the next three properties. It has to be positive f r (ω i , ω o ) ≥ 0. It must obey the Helmoth reciprocity: f r (ω i , ω o ) = f r (ω o , ω i ) (directions may be swapped without reflectance being changed). It must conserve energy ∀ω o , Ω f r (ω i , ω o ) cos θ i dω i ≤ 1, the reflected radiance must be equal to or less than the input radiance.
Shading and reflectance
n t ω o ω i φ i φ o θ o θ i (a) Classical parametrization n t h θ d θ h ω o φ d φ h ω i (b) Half-vector parametrization
Different materials can be represented using different BRDFs as shown in Figure 2.2, which shows renderings of spheres made of five different materials in two different environment illuminations in orthographic view. These images have been obtained by computing the reflected radiance L o for every visible surface point x toward a pixel in the image. Traditionally L o is computed using the reflected radiance equation, as first introduced by Kajiya [Kaj86] :
L o (x, ω o ) = Ω f r (x, ω o , ω i ) L i (x, ω i ) ω i • n dω i ,
(2.1) with L o and L i are the reflected and incoming radiance, x a surface point of interest, ω o and ω i the outgoing and ingoing directions, n the surface normal, f r the BRDF, and Ω the upper hemisphere. Thanks to the use of specialized devices (gonoireflectometers, imaging systems, etc.) we can measure real materials as the ratio of the reflected light from a discrete set of positions on the upper hemisphere. One of the most well-known databases of measured material is the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. This database holds 100 measured BRDFs and displays a wide diversity of material appearances. All BRDFs are isotropic, which means light and view directions may be rotated around the local surface normal with no incurring change in reflectance. When measuring materials we are limited by a certain choice of BRDFs among real-world materials. We are also limited by the resolution of the measurements: we only obtain a discretized number of samples, and the technology involved is subject to measurement errors. Lastly, measured BRDFs are difficult to modify as we do not have parameters to control them. The solution to those limitations has been the modeling of material reflectance properties using analytical functions.
Analytical models have the goal to capture the different effects that a material can produce. The ideal extreme cases are represented by mirror and matte materials. On the one hand, mirror materials reflect radiance only in the reflection direction ω r = 2 (ω • n) n -ω. On the other hand, matte or lambertian materials reflect light in a uniform way over the whole hemisphere Ω. However, Real-world material are much more complex, they exhibit a composition of different types of reflection. Reflections vary from diffuse to mirror and 12 Carlos Jorge Zubiaga Peña therefore materials exhibit different aspects in terms of roughness or glossiness. Materials define the mean direction of the light reflection, it can be aligned with the reflected vector or be shifted like off-specular reflections or even reflect in the same direction (retro-reflections). Materials can also reproduce Fresnel effects which characterize variations on reflectance depending on the viewing elevation angle, making objects look brighter at grazing angles.
Variations when varying the view around the surface normals are captured by anisotropic BRDFs. In contrast, isotropic BRDFs imply that reflections are invariant to variations of azimuthal angle of both ω o and ω i . BRDFs may be grouped by empirical models: they mimic reflections using simple formulation; or physically based models: they are based on physical theories. Commonly BRDFs are composed of several terms, we are interested in the main ones: a diffuse and a specular component. The diffuse term is usually characterized with a lambertian term, nevertheless there exist more complex models like Oren-Nayar [START_REF] Oren | Generalization of lambert's reflectance model[END_REF].
Regarding specular reflections, the first attempt to characterize them has been defined by Phong [START_REF] Bui Tuong | Illumination for computer generated pictures[END_REF]. It defines the BRDF as a cosine lobe function of the reflected view vector and the lighting direction, whose spread is controlled by a single parameter. It reproduces radially symmetric specular reflections and does not guarantee energy conservation. An extension of the Phong model has been done in the work of Lafortune et al. [START_REF] Eric Pf Lafortune | Non-linear approximation of reflectance functions[END_REF] which guarantees reciprocity and energy conservation. Moreover it is able to produce more effects like off-specular reflections, Fresnel effect or retro-reflection. Both models are based on the reflected view vector. Alternatively there is a better representation for BRDFs based on the half vector h = (ωo+ωi) ||ωo+ωi|| , and consequently the 'difference' vector, as the ingoing direction in a frame which the halfway vector is at the north pole, see Figure 2.1b. It has been formally described by Rusinkewicz [START_REF] Szymon | A new change of variables for efficient brdf representation[END_REF]. Specular or retro reflections are better defined in this parametrization as they are aligned to the transformed coordinate angles. Blinn-Phong [START_REF] James F Blinn | Models of light reflection for computer synthesized pictures[END_REF] redefined the Phong model by using the half vector instead of the reflected vector. The use of the half vector produces asymmetric reflections in contrast to the Phong model. Those model, Phong, Lafortune and Blinn-Phong are empirical based on cosine lobes. Another empirical model, which is commonly used, is the one defined by Ward [START_REF] Gregory | Measuring and modeling anisotropic reflection[END_REF]. This model uses the half vector and is based on Gaussian Lobes. It is designed to reproduce anisotropic reflections and to fit measured reflectance, as it was introduced alongside with a measuring device.
The most common physically-based models are the ones who follow the micro-facet Real-time 2D manipulation of plausible 3D appearance theory. This theory assumes that a BRDF defined for a macroscopic level is composed by a set of micro-facets. The Torrance-Sparrow model [START_REF] Kenneth | Theory for off-specular reflection from roughened surfaces[END_REF] uses this theory by defining the BRDF as:
f r (ω o , ω i ) = G(ω o , ω i , h)D(h)F (ω o , h) 4|ω o n||ω i n| , (2.2)
where D is the Normals distributions, G is the Geometric attenuation and F is the Fresnel factor. The normal distribution function D defines the orientation distribution of the microfacets. Normal distributions often use Gaussian-like terms as Beckmann [START_REF] Beckmann | The scattering of electromagnetic waves from rough surfaces[END_REF], or other distributions like GGX [START_REF] Walter | Microfacet models for refraction through rough surfaces[END_REF]. The geometric attenuation G accounts for shadowing or masking of the micro-facets with respect to the light or the view. It defines the portion of the micro-facets that are blocked by their neighbor micro-facets for both the light and view directions. The Fresnel factor F gives the fraction of light that is reflected by each micro-facet, and is usually approximated by the Shlick approximation [START_REF] Schlick | An inexpensive brdf model for physically-based rendering[END_REF].
To understand how well real-world materials are simulated by analytical BRDFs we can fit the parameters of the latter to approximate the former. Ngan et al. [START_REF] Ngan | Experimental analysis of brdf models[END_REF] have conducted such an empirical study, using as input measured materials coming from the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. It shows that a certain number of measured BRDFs can be well fitted, but we still can differentiate them visually when rendered (even on a simple sphere) when comparing to real-world materials.
The use of the reflected radiance equation alongside with the BRDF models tell us how to create virtual images using a forward pipeline. Instead we want to manipulate existing shading. Moreover we want those modifications to behave in a plausible way. The goal is to modify shading in image space as if we were modifying the components of the reflectance radiance equation: material, lighting or geometry. For that purpose we are interested in the impact of those components in shading.
Inverse rendering
An ideal solution to the manipulation of appearance from shading would be to apply inverserendering. It consists in the extraction of the rendering components: geometry, material reflectance and environment lighting, from an image. Once they are obtained they can be modified and then used to re-render the scene, until the desired appearance is reached. In our case we focus on the recovery of lighting and material reflectance assuming known geometry. Inverse rendering has been a long-standing goal in Computer Vision with no easy solution. This is because material reflectance and lighting properties are of high dimensionality, which makes their recovery from shading an under-constrained problem.
Different combinations of lighting and BRDFs may obtain similar shading. The reflection of a sharp light on a rough material would be similar to a blurry light source reflected by a shiny material. At specific cases it is possible to recover material and/or lighting as described in [START_REF] Ramamoorthi | A signal-processing framework for inverse rendering[END_REF]. In the same paper the authors show that interactions between lighting and material can be described as a 2D spherical convolution where the material acts a lowpass filter of the incoming radiance. This approach requires the next assumptions: Convex curved object of uniform isotropic material lit by distant lighting. These assumptions make radiance dependent only on the surface orientation, different points with the same normal sharing the same illumination and BRDF. Therefore the reflectance radiance Equation (2.1) may be rewritten using a change of domain, by obtaining the rotation which transform the surface normal to the z direction. This rotations permits to easily transforms directions in local space to global space, as shown in Figure 2.3 for the 2D and the 3D case. They rewrite 14 Carlos Jorge Zubiaga Peña
Related Work
Equation (2.1) as a convolution in the spherical domain:
L o (R, ω ′ o ) = Ω ′ fr (ω ′ i , ω ′ o ) L i (Rω ′ i )dω ′ i = Ω fr (R -1 ω i , ω ′ o ) L i (ω i )dω i = fr * L,
where R is the rotation matrix which transforms the surface normal to the z direction. Directions ω o , ω i and the domain Ω are primed for the local space and not primed on the global space. fr indicates the BRDF with the cosine term encapsulated. The equation is rewritten as a convolution, denoted by * . Ramamoorthi et al. used the convolution approximation to study the reflected radiance equation in the frequency domain. For that purpose they use Fourier basis functions in the spherical domain, which correspond to Spherical Harmonics. They are able to recover lighting and BRDF from an object with these assumptions using spherical harmonics. Nevertheless this approach restricts the BRDF to be radially symmetric like: lambertian, Phong or re-parametrized micro-facets BRDF to the reflected view vector.
Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF] manage to recover both reflectance and lighting, albeit with a degraded quality compared to ground truth for the latter. They assume real-world natural illumination for the input images which permits to use statistics of natural images with a prior on low entropy on the illumination. The low entropy is based on the action of the BRDF as a bandpass filter causing blurring: they show how histogram entropy increase for different BRDFs. They recovered isotropic directional statistics BRDFs [START_REF] Nishino | Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions[END_REF] which are defined by a set of hemispherical exponential power distributions. This kind of BRDF is made to represent the measured materials of the MERL database [START_REF] Matusik | A data-driven reflectance model[END_REF]. The reconstructed lighting environments exhibit artifacts (see Figure 2.4), but these are visible only when rerendering the object with a shinier material compared to the original one.
In their work Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF] compare to the previous work of Romeiro et al [START_REF] Romeiro | Blind reflectometry[END_REF]. The latter gets as input a rendered sphere under an unknown illumination and extracts a monochromatic BRDF. They do not extract the lighting environment which restricts its use to re-use the BRDF under a different environment and forbids the manipulation of the input image. Similar to the work of Lombardi they use priors on natural lighting, in this case they study statistics of a set of environment maps projected in the Haar wavelet
(1) (2) (3) (a) (b) (c) (d)
16
Carlos Jorge Zubiaga Peña basis, see Figure 2.5. Those statistics are used to find the most likely reflectance under the studied distribution of probable illumination environments. The type of recovered BRDF is defined in a previous work of the same authors [START_REF] Romeiro | Passive reflectometry[END_REF]. That work recovers BRDFs using rendered spheres under a known environment map. They restrict BRDFs to be isotropic and they add a further symmetry around the incident plane, which permits to rewrite the BRDF as a 2D function instead of the general 4D function.
Other methods that perform BRDF estimation always require a set of assumptions, such as single light sources or controlled lighting. The work from Jaroszkiewicz [START_REF] Jaroszkiewicz | Fast extraction of brdfs and material maps from images[END_REF] assumes a single point light. It extracts BRDFs from a digitally painted sphere using homomorphic factorization. Ghosh et al. [GCP + 09] uses controlled lighting based on spherical harmonics. This approach reconstructs spatially varying roughness and albedo of real objects. It employs 3D moments (in Cartesian space) up to order 2 to recover basic BRDF parameters from a few views. Aittala et al. [START_REF] Aittala | Practical svbrdf capture in the frequency domain[END_REF] employs planar Fourier lighting patterns projected using a consumer-level screen display. They recover Spatially Varying-BRDFs of planar objects.
As far as we know there is no algorithm that works in a general case and extracts a manipulable BRDF alongside with the environment lighting. Moreover, as we are interested in the manipulation of appearance in an interactive manner, re-rendering methods are not suitable. A re-rendering process uses costly global illumination algorithms once material and lighting are recovered. In contrast, we expect that manipulation of shading does not require to decouple the different terms involved in the rendering equation. Therefore, we rather apply approximate but efficient modifications directly to shading, mimicking modifications of the light sources or the material reflectance. Moreover, all these methods work on photographs; in contrast we also want to manipulate artwork images.
Pre-filtered lighting
Pre-filtered environment maps [KVHS00] take an environment lighting map and convolve it with a filter defined by a material reflectance. The resulting values are used to shade arbitrary geometries in an interactive process, giving a good approximation of reflections. Distant lighting is assumed, consequently reflected radiance is independent of position. In the general case a pre-filtered environment would be a 5 dimensional function, depending on the outgoing direction ω o , and on the reference frame defined by the normal n and the tangent t. Nevertheless some dependencies can be dropped. Isotropic materials are independent of the tangent space. Radially symmetric BRDFs around either the normal (e.g. lambertian) or the reflected view vector (e.g. Phong) are 2 dimensional functions. When the pre-filtered environment maps are reduced to a 2 dimensional function they can be stored in a spherical maps. A common choice is the dual paraboloid map [START_REF] Heidrich | View-independent environment maps[END_REF], which is composed of a front and a back image with the z value given by 1/2-(x 2 +y 2 ). This method is efficient in terms of sampling and the introduced distortion is small, see Figure 2.6.
Unfortunately effects dependent on the view direction, like Fresnel, cannot be captured in a single spherical representation as in the last mentioned technique. Nevertheless it can be added afterwards, and several pre-filtered environment maps can be combined with different Fresnel functions. A solution defined by Cabral et al. [START_REF] Cabral | Reflection space image based rendering[END_REF] constructs a spare set of viewdependent pre-filtered environment maps. Then, for a new viewpoint they dynamically create a view-dependent pre-filtered environment map by warping and interpolating precomputed environment maps.
A single view-dependent pre-filtered environment map is useful when we want to have a non expensive rendering for a fixed view direction. Sloan et al. [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF] introduce a technique which creates shaded images of spheres from paintings, which can be used as Real-time 2D manipulation of plausible 3D appearance [START_REF] Ramamoorthi | An efficient representation for irradiance environment maps[END_REF] corresponding to the lowestfrequency modes of the illumination. It is proven that the resulting values differ on average 1% of the ground truth. For that purpose they project the environment lighting in the first 2 orders of spherical harmonics, which is faster than applying a convolution with a diffuse-like filter. The diffuse color is obtained by evaluating a quadratic polynomial in Cartesian space using the surface normal.
Pre-filtered lighting maps store appearance independently of geometry for distant lighting. This permits to easily give the same material appearance to any geometry, for a fixed material and environment lighting. As a drawback, when we want to modify the material or the environment lighting they need to be reconstructed, which forbids interactive appearance manipulation. In the case of the artwork techniques, LitSpheres/MatCaps are created for a single view, which forbids rotations as shading is tied to the camera.
18
Carlos Jorge Zubiaga Peña
Appearance manipulation
The rendering process is designed to be physically realistic. Nevertheless, sometimes we want to create images with a plausible appearance without caring about their physically correctness. There exist some techniques which permit different manipulations of appearance using different kinds of input, ranging form 3D scenes to 2D images. Those techniques reproduce visual cues of the physically-based image creation techniques but without being restricted by them. At the same time they take advantage of the inaccuracy of human visual system to distinguish plausible from accurate.
Image-based material editing of Khan et al. [START_REF] Erum Arif Khan | Image-based material editing[END_REF] takes as input a single image in HDR of a single object and is able to change its material. They estimate both, the geometry of the object, and the environment lighting. Then estimated geometry and environment lighting are used alongside with a new BRDF to re-render the object. Geometry is recovered following the heuristic of darker is deeper. Environment lighting is reconstructed from the background. First the hole left by the object is filled with other pixels from the image, to preserve the image statistics. Then the image is extruded to form a hemisphere. The possible results range from modifications of glossiness, texturing of the object, replacement of the BRDF or even simulation of transparent or translucent objects, see Figure 2.8. The interactive reflection editing system of Ritschel et al. [START_REF] Ritschel | Interactive reflection editing[END_REF] makes use of a full 3D scene to directly displace reflections on top of object surfaces, see Figure 2.9. The method takes inspiration on paintings where it is common to see refections that would not be possible in real life, but we perceive them as plausible. To define the modified reflections the user define constraints consisting on the area where he wants the reflections and another area which defines the origin of the reflections. This technique allows users to move reflections, adapt their shape or modify refractions.
The Surface Flows method [VBFG12] warps images using depth and normal buffers to create 3D shape impressions like reflections or texture effects. In this work they performed a differential analysis of the reflectance radiance Equation (2.1) in image space. From that differentiation of the equation they identify two kind of variations: a first order term related to texturing (variations on material) and a second order variation related to reflection (variations on lighting). Furthermore they use those variations to define empirical equations to deform pixels of an image following the derivatives of a depth buffer in the first case and the derivatives of a normal buffer in the second case. As a result they introduce a set of tools: addition of decal textures or reflections and shading using gradients or images (Fig. 2.10).
The EnvyLight system [START_REF] Pellacini | envylight: an interface for editing natural illumination[END_REF] permits to make modifications on separable features of the environment lighting by selecting them from a 3D scene. Users make scribbles on rendered image of the scene to differentiate the parts that belong to a lighting feature from the ones that do not. The features can be diffuse reflections, highlights or shadows. The geometry of the zones containing the feature permits to divide the environment map on the features that affect those effect from the rest. The separation of the environment lighting permits to edit them separately as well as to make other modifications like: contrast, translation, blurring or sharpening, see Figure 2.11.
Appearance manipulation techniques are designed to help artists achieve a desired appearance. To this end they might need to evade from physical constraints in which computer graphics is based. Nevertheless, obtained appearance might still remain plausible for the human eye. As artists know intuitively that the human visual system is not aware of how light physically interacts with objects.
20
Carlos Jorge Zubiaga Peña
Visual perception
Created or manipulated images are evaluated with respect to a 'reference' image (e.g. photograph, ground truth simulation). Measurements of Visual Quality consist in computing the perceived fidelity and similarity or the perceived difference between an image and the 'reference'. Traditionally numerical techniques like MAE (mean absolute error), MSE (mean square error), or similar have been used to measure signal fidelity in images. They are used because of their simplicity and because of their clear physical meaning. However, those metrics are not good descriptors of human visual perception. In the vast majority of cases human beings are the final consumer of images and we judge them based on our perception. Visual perception is an open domain of research which presents many challenging problems.
In computer graphics perception is very useful when a certain appearance is desired, without relying completely on physical control. A survey of image quality metrics from traditional numeric to visual perception approaches is provided in [START_REF] Lin | Perceptual visual quality metrics: A survey[END_REF].
Real-time 2D manipulation of plausible 3D appearance 21
Ramanarayanan et al. [START_REF] Ganesh Ramanarayanan | Visual equivalence: towards a new standard for image fidelity[END_REF] have shown how the human visual system is not able to perceive certain image differences. They develop a new metric for measuring how we judge images as visually equivalent in terms of appearance. They prove that we are not mostly able to detect variations on environment lighting. Users judge the equivalence of two objects that can vary in terms of bumpiness or shininess, see Figure 2.12. Objects are rendered under transformations (blurring or warping) of the same environment lighting. The results prove that we judge images as equivalent, despite their visual difference. This limitation of the human visual system is used in computer graphics to design techniques of appearance manipulation, like shown in the previous section. Despite the tolerance of the human visual system to visual differences we are able to differentiate image properties of objects. To distinguish the material of an object we use visual cues like color, texture or glossiness. The latter is often defined as the achromatic component of the surface reflectance. In a BRDF, gloss is responsible for changes in the magnitude and spread of the specular highlight as well as the change in reflectance that occurs as light moves away from the normal toward grazing angles. Hunter [START_REF] Sewall | The measurement of appearance[END_REF] introduced six visual properties of gloss: specular gloss, sheen, luster, absence-of-bloom, distinctness-of-image and surface-uniformity. He suggests that, except for surface-uniformity, all of these visual properties may be connected to reflectance (i.e., BRDF) properties. There exists standard test methods for measuring some of these properties (such as ASTM D523, D430 or D4039).
The measurements of Hunter as well as the standard methods are based on optical measurements of reflections. However, perceived glossiness does not have a linear relationships with physical measurements. The work of Pellacini [START_REF] Pellacini | Toward a psychophysically-based light reflection model for image synthesis[END_REF] re-parametrized the Ward model [START_REF] Gregory | Measuring and modeling anisotropic reflection[END_REF] As we have seen the perception of gloss has been largely studied [START_REF] Chadwick | The perception of gloss: a review[END_REF]. However, we believe that explicit connections between physical and visual properties of materials (independently of any standard or observer) remain to be established.
Summary
Work on visual perception shows how humans are tolerant to inaccuracies in images. The human visual system may perceive as plausible images with certain deviations from physically correctness. Nevertheless we are able to distinguish material appearance under different illuminations, despite the fact that we are not able to judge physical variations linearly. Manipulation appearance techniques take advantage of these limitations to alter images by overcoming physical restrictions on rendering while keeping results plausible. We pursue a similar approach when using techniques like pre-filtered environment maps, where shading is pre-computed as the interaction of lighting and material. We aim to manipulate dynamically geometry-independent stored shading (similar to pre-filtered environment maps) and be able to mimic variations on lighting and material within it. The use of these structures seems a good intermediate alternative to perform appearance modification in comparison to the generation of images using a classical rendering pipeline.
Chapter 3
Statistical Analysis
The lightness and color (shading) of an object are the main characteristics of its appearance. Shading is the result of the interaction between the lighting and the surface reflectance properties of the object. In computer graphics lighting-material interaction is guided by the reflected radiance equation [START_REF] James | The rendering equation[END_REF], explained in Section 2.1:
L o (x, ω o ) = Ω f r (x, ω o , ω i ) L i (x, ω i ) ω i • n dω i ,
Models used in computer graphics that define reflectance properties of objects are not easily connected to their final appearance in the image. To get a better understanding we perform an analysis to identify and relate properties between shading on one side, and material reflectance and lighting on the other side.
The analysis only considers opaque materials which are are well defined by BRDFs [START_REF] Nicodemus | Directional reflectance and emissivity of an opaque surface[END_REF], leaving outside of our work transparent or translucent materials. We consider uniform materials, thus we only study variations induced by the viewing direction. When a viewing direction is selected the BRDF is evaluated as 2D function, that we call a BRDF slice. In that situation the material acts a filter of the incoming lighting. Our goal is to characterize the visible effect of BRDFs, and how their filtering behavior impacts shading. For that purpose we perform an analysis based on statistical properties of the local light-material interaction. Specifically, we use moments as quantitative measures of a BRDF slice shape.
Moments up to order can be used to obtain the classical mean and variance, and the energy as the zeroth moment. We use those statistical properties: energy, mean and variance to describe a BRDF slice model. In addition we make a few hypothesis on the BRDF slice shape to keep the model simple. Then, this model is used to develop a Fourier analysis where we find relationships on the energy, mean and variance between material/lighting and shading.
Finally we use our moment-based approach to analyze measured BRDFs. We show in plots how statistical properties evolve as functions of the view direction. We can extract common tendencies of different characteristics across all materials. The results verifies our previous hypothesis and show correlations among mean and variance. This work have been published in the SPIE Electronic Imaging conference with the collaboration of Laurent Belcour, Carles Bosch and Adolfo Muñoz [ZBB + 15]. Specifically, Laurent Belcour has helped with the Fourier Analysis, meanwhile Carles Bosch has made fittings on the measured BRDF analysis.
BRDF slices
When we fix the view direction ω o at a surface point p a 4D BRDF f r is restricted to a 2D BRDF slice. We define it as scalar functions on a 2D hemispherical domain, which we write f rω o (ω i ) : Ω → R, where the underscore view direction ω o indicates that it is fixed, and R denotes reflectance. Intuitively, such BRDF slices may be seen as a filter applied to the environment illumination. We suggest that some statistical properties of this filter may be directly observable in images, and thus may constitute building blocks for material appearance.
View-centered parametrization
Instead of using a classical parametrization in terms of elevation and azimuth angles for Ω, we introduce a new view-centered parametrization with poles orthogonal to the view direction ω o , see Fig. 3.1. This parametrization is inspired by the fact that most of the energy of a BRDF slice is concentrated around the scattering plane spanning the view direction ω o and the normal n, then it minimize distortions around this plane. It is also convenient to our analysis. First, it permits to define a separable BRDF slice model, which is useful to perform the Fourier analysis separately per coordinate, see Section 3.2. Second, it enables the computation of statistical properties by avoiding periodical domains, see Section 3.3. Formally, we specify it by a mapping m : [-π 2 , π 2 ] 2 → Ω, given by:
m(θ, φ) = (sin θ cos φ, sin φ, cos θ cos φ), (3.1)
where φ is the angle made with the scattering plane, and θ the angle made with the normal in the scattering plane.
θ i , φ i ) ∈ [-π 2 , π 2 ] 2 to a direction ω i ∈ Ω. (c) A 2D BRDF slice f rω o is
directly defined in our parametrization through this angular mapping.
The projection of a BRDF slice into our parametrization is then defined by:
f rω o (θ i , φ i ) := f r (m(θ o , φ o ), m(θ i , φ i )), (3.2)
where θ o , φ o and θ i , φ i are the coordinates of ω o and ω i respectively in our parametrization.
In the following we consider only isotropic BRDF which are invariant to azimuthal view angle. This choice is coherent with the analysis of measured BRDFs as the MERL database only contains isotropic BRDFs. Then, BRDF slices of isotropic materials are only dependent on the viewing elevation angle θ o ; we denote them as f r θo .
Statistical reflectance radiance model
We define a BRDF slice model in our parametrization using statistical properties. This model will be useful to perform a statistical analysis to study the impact of material on shading. Specifically it allows us to derive a Fourier analysis in 1D that yields statistical relationships between shading and material/lighting. Our BRDF slice model is based on Gaussian lobes, which is a common choice to work with BRDFs. Gaussian functions are described by their mean µ and variance σ 2 . The mean µ describes the expected or the central value of the Gaussian distribution. BRDF Gaussian lobes centered on the reflected view vector or on the normal are commonly used in Computer Graphics. The variance σ 2 describes the spread of the function. In the case of BRDF Slices, variance may be seen as a representation of material roughness. Wider lobes are representative of rough or diffuse material, meanwhile narrow lobes represent shiny or specular materials.
To define our BRDF slice model we have made a few assumptions by observing the measured materials from the MERL database. We have observed that BRDF slices of measured materials exhibits close to symmetric behavior around the scattering plane. Moreover, while BRDF slice lobes stay centered on this plane, the mean direction can vary, from the normal direction to the tangent plane direction, passing through the reflected view vector direction. These observations lead us to assume a perfect symmetry around the scattering plane. This assumption alongside with our view-centered parametrization allows us to define our model using a pair of 1D Gaussians. Therefore our BRDF slice model is defined as:
f r θo (θ i , φ i ) = α(θ o ) g σ θ (θo) (θ i -µ θ (θ o )) g σ φ (θo) (φ i ) , (3.3)
where g σ θ and g σ φ are normalized 1D Gaussians 1 of variance σ 2 θ for the θ axis and variance σ 2 φ for the φ axis of our parametrization. The Gaussian g σ θ is centered at µ θ , meanwhile the Gaussian g σ φ is centered at 0. The energy α is similar to the directional albedo (the ratio of the radiance which is reflected). But it differs in two ways: it does not take into account the cosine term of Equation (2.1), and is defined in our parametrization. As α is a ratio, it is bounded between 0 and 1, which guarantees energy conservation. A representation of the model is shown in Figure 3.2.
Energy, mean and variance can be defined using statistical quantities called moments, which characterize the shape of a distribution f :
µ k [f ] = ∞ -∞ x k f (x) dx, (3.5)
where k is the moment order. In our case we use moments up to order 2 to define the meaningful characteristics: energy, mean and and variance. The energy α is the 0th order moment, which describes the integral value of the function. The energy is required to be 1 to guarantee f to be a distribution, as moments are only defined for distribution functions. The mean µ is the 1st moment and the variance σ 2 is the 2nd central moments, where µ is used to center the distribution. We emphasize out that this model does not ensure reciprocity, and shall thus not be used outside of this statistical analysis.
1 Both Gaussians correspond to normal distributions that have been rescaled to guarantee energy conservation on our parametrization domain. The scaling term is given by:
A = π 2 -t 0 -π 2 -t 0 e -t 2 2σ 2 dt = πσ 2 2 erf π/2 -t 0 √ 2σ 2 -erf - π/2 -t 0 √ 2σ 2 , (3.4)
where we have restricted the domain to [-π 2 , π 2 ] and centered it on t 0 . This accommodates both Gaussians with t 0 = µ θ (θo) in one case, and t 0 = 0 in the other.
Fourier analysis
Fourier analysis
We conduct a local Fourier analysis that yields direct relationships between reflected radiance and BRDF/lighting of energy, mean and variance around a fixed view elevation.
Local Fourier analysis
Our analysis begins with a change of variable in Equation (2.1) using our parametrization. Analysis is performed in a local tangent frame for simplicity, with the domain of integration being the input space of m:
L o (θ o , φ o ) = π 2 -π 2 π 2 -π 2 f r θo (θ i , φ i ) L i m(θ i , φ i ) cos θ i cos 2 φ i dθ i dφ i , (3.6)
where the 3rd coordinate of ω i = m(θ i , φ i ) (given by cos θ i cos φ i according to Equation. (3.1)) stands for the cosine term in tangent space. Replacing f r θo with our BRDF slice model (Equation (3.3)) yields:
L o (θ o , φ o ) = α(θ o ) π 2 -π 2 g σ θ (θo) (θ i -µ θ (θ o )) π 2 -π 2 g σ φ (θo) (φ i )L i m(θ i , φ i ) cos θ i cos 2 φ i dφ i dθ i .
(3.7) Now, since our BRDF slice model is separable in θ i and φ i , we may pursue our study in either dimension independently. Let us focus on θ i . If we fold in the integral of terms over φ i and cosines and write:
L i φ (θ i ) = π 2 -π 2 g σ φ (θo) (φ i )L i m(θ i , φ i ) cos θ i cos 2 φ i dφ i ,
then Equation 3.7 turns into a 1D integral of the form:
L o (θ o , φ o ) = α(θ o ) π 2 -π 2 g σ θ (θo) (θ i -µ θ (θ o ))L i φ (θ i ) dθ i .
(3.8) 28 Carlos Jorge Zubiaga Peña Our next step is to approximate this 1D integral with a convolution. To this end, we make local approximations of our BRDF slice model in a 1D angular window around θ o . We assume the energy and variance to be locally constant:
α(θ o + t) ≈ α(θ o ) and σ 2 θ (θ o + t) ≈ σ 2 θ (θ o ).
For the mean, we rather make use of a first-order approximation:
µ θ (θ o + t) ≈ µ θ (θ o ) + dµ θ dt | θo t.
As a result, L o may be approximated by a 1D convolution of the form:
L o (θ o + t, φ o ) ≈ α g σ θ * L i φ (θ o + t), with t ∈ [-ǫ, +ǫ], (3.9)
where we have dropped the dependencies of both α and σ θ on θ o since they are assumed locally constant.
In Fourier space, this convolution turns into the following product:
F[L o ](ξ) ≈ α F[g σ θ ](ξ) F[L i φ ](ξ), (3.10)
where ξ is the Fourier variable corresponding to t. Note that Fourier shifts e iθo due to the centering on θ o cancel out since they appear on both sides of the equation. Equation (3.9) bears similarities with previous work [DHS + 05, RMB07], with the difference that our approach provides direct connections with moments thanks to our BRDF slice model.
Relationships between moments
An important property of moments is that they are directly related to the Fourier transform of a function f by [START_REF] Michael | Principles of statistics[END_REF]:
F[f ](ξ) = k (iξ) k k! µ k [f ] (3.11)
where µ k [f ] is the k-th moment of f . We thus re-write Equation (3.10) as a product of moment expansions:
F[L o ](ξ) = α k (iξ) k k! µ k [g σ θ ] l (iξ) l l! µ l [L i φ ] . (3.12)
To establish relationships between moments, we extract the moments from F[L o ] using its own moment expansion. This is done by differentiating
F[L o ] at ξ = 0 [Bul65]: µ 0 [L o ] = F[L o ](0) (3.13) µ 1 [L o ] = Im dF[L o ] dξ (0) (3.14) µ 2 [L o ] = -Re d 2 F[L o ] dξ 2 (0) . (3.15)
Next, we expand Equation 3.12 and its derivatives at ξ = 0 and plug them inside Equations (3.13) through (3.15):
µ 0 [L o ] = αµ 0 [g σ θ ]µ 0 [L i φ ], (3.16) µ 1 [L o ] = µ 1 [g σ θ ] + µ 1 [L i φ ], (3.17) µ 2 [L o ] = µ 2 [g σ θ ] + µ 2 [L i φ ] + 2µ 1 [g σ θ ]µ 1 [L i φ ]. (3.18) Since g σ θ is normalized, µ 0 [g σ θ ] = 1. However, µ 0 [L o ]
= 1 in the general case, and we must normalize moments of order 1 and 2 before going further. We write Lo
= L o /µ 0 [L o ], which yields µ k [ Lo ] = µ k [Lo] µ0[Lo]
. It can then be easily shown that Equations (3.17) and (3.18) remain valid after normalization.
Lastly, we write the variance of Lo in terms of moments: Var
[ Lo ] = µ 2 [ Lo ] -µ 2 1 [ Lo ]. After carrying out simplifications, we get: Var[ Lo ] = Var[ḡ σ θ ] + Var[ Li φ ].
Putting it all together, we obtain the following moment relationships for a given viewing elevation θ o :
µ 0 [L o ](θ o ) = α(θ o ) µ 0 [L i φ ](θ o ), (3.19) E[ Lo ](θ o ) = µ θ (θ o ) + E[ Li φ ](θ o ), (3.20) Var[ Lo ](θ o ) = σ 2 θ (θ o ) + Var[ Li φ ](θ o ), (3.21)
where we have used E
[ḡ σ θ ](θ o ) = µ θ (θ o ) and Var[ḡ σ θ ](θ o ) = σ 2 θ (θ o ).
The reasoning is similar when studying the integral along φ i , in which case we must define a L i θ term analogous to L i φ . We then obtain similar moment relationships, except in this case E
[ḡ σ φ ] = 0, Var[ḡ σ φ ] = σ 2
φ , and L i φ is replaced by L i θ .
Measured material analysis
We compute statistical moments of BRDF slices up to order 2 (energy, mean and variance) on a set of measured materials of the MERL database. Moments are computed as functions of viewing angle which we call moment profiles. We experimentally show that such moment profiles are well approximated by parametric forms: a Hermite spline for the energy, a linear function for the mean, and a constant for the variance. Parametric forms for these functions are obtained through fitting, and additionally we show that mean and variance statistics are correlated.
On the implementation side, we have made use of BRDF Explorer [START_REF] Burley | BRDF Explorer[END_REF], which we have extended to incorporate the computation of moments. Carles Bosch have performed fitting using Mathematica.
Moments of scalar functions
We analyze moments on BRDF slices of measured materials without making any hypothesis. Therefore we use general tensors to capture moments of a scalar distribution f :
µ k [f ] = X x ⊗ • • • ⊗ x k factors f (x) dx, (3.22)
where X is the domain of definition of f and ⊗ denotes a tensor product. As a result, a moment of order k is a tensor of dimension k + 1: a scalar at order 0, a vector at order 1, a matrix at order 2, etc. Similarly to our BRDF Slice model we analyze moments up to order 2 to study the energy, mean and variance of the BRDF slices. Despites that, the analysis is easily extensible to higher order moments: the 3rd and 4th order moments (skewness and kurtosis) are given in the AppendixA. Now, for a scalar distribution defined over a 2D domain, we write x = (x, y) and define:
µ n,m [f ] := E f [x n y m ] = X x n y m f (x,
Choice of domain
The classical parametrization in terms of elevation and azimuth angles is not adapted to the computation of moments Indeed, the periodicity of the azimuthal dimension is problematic because domains are anti-symmetric when the power involved in the computation of moments is odd, see Equation (3.23). This incompatibility is avoided when using our parametrization. The projected result of the BRDF slices using our view-centered parametrization is shown in Figure 3.3. A different solution to deal with the periodicity of the hemispherical domain would be to compute 3D moments using Cartesian coordinates as done by Ghosh et al. [GCP + 09]. However, this would not only make analysis harder (relying on 3D tensors), but it would also unnecessarily introduce distortions at grazing angles, where hemispherical and Euclidean distances differ markedly. An alternative would be to rely on statistics based on lighting elevation, as done by Havran et al [START_REF] Havran | Statistical characterization of surface reflectance[END_REF] for the purpose of material characterization. Unfortunately, this approach is not adapted to our purpose since it reduces a priori the complexity of BRDFs by using a 1D analysis. Instead, we compute moments using a planar 2D parametrization that introduces as little distortion as possible for isotropic BRDFs.
BRDF slice components
Moments are not good descriptors for multimodal functions. They are only well-defined for unimodal functions; computed statistics are not meaningful otherwise.
In contrast to our BRDF slice model, we have observed that many BRDFs from the MERL database display multi-modal components with a near constant (i.e. close to Lambertian) diffuse component. We rely on a simple heuristic method to separate that diffuse component, leaving the rest of the data as a specular component. Such a perfect diffuse component can be extracted using a simple thresholding scheme: we sample the BRDF at a viewing elevation of 45 degrees and retrieve the minimum reflectance value. We then remove this constant from the data in order to obtain its specular component. We will analyze only the remaining specular component, on which we ground our study.
However, even after removing a Lambertian component from BRDF data, some materials still show multi-modal behaviors. We simply remove these BRDFs from our set manually, Real-time 2D manipulation of plausible 3D appearance leaving a total of 40 unimodal BRDFs 2 . They still span a wide range of appearances, from rough to shiny materials.
Moment profiles
We compute 2D moments of the specular component of the projected BRDF slices (see Fig. 3.3) using a discretized version of the Equation (3.23). Using moments up to order 2 we have as a result a scalar value for the energy, a 2D vector for the mean and a 2 × 2 matrix for the variance. We show how those statistical properties vary as functions of the viewing elevation θ o , which we call moment profiles. In practice, we sample the θ o dimension uniformly in angles, each sample yielding a projected BRDF slice.We then use a Monte Carlo estimator to evaluate the 2D moments for each of these slices:
µ n,m [f r ](θ o ) ≈ π 2 N N i=1 θ n i φ m i f r θo (θ i , φ i ), (3.24)
where x i = (θ i , φ i ) is the ith randomly generated sample in the slice, and N is the number of samples.
In the following, we present moment profiles computed at increasing orders, as shown in Figs. 3.4 and 3.5. For the sake of clarity, we will omit the dependence on θ o both for BRDF slices and 2D moments.
Energy
As seen in these plots, the energy α stays below 1, which indicates that a portion of the light is reflected. They look mostly constant except near grazing angles where they tend to increase. We show α profiles for the red channel only of all our selected BRDFs.
Mean
For moments of order 1 and higher, we must normalize by the 0th order moment in order to guarantee that f r is a distribution, see Sec. 3.1.2. We thus write fr = fr α . The coefficients are now given by µ n,m [ fr ] = E fr [θ n i φ m i ] for n + m = 1. The profile for µ θ := µ 1,0 is shown in Fig. 3.4b: our selected BRDFs exhibit profiles that have different slopes, with deviations from a line occurring toward grazing angles. In contrast the profile of µ φ := µ 0,1 , as shown in Fig. 3.5a, remains close to zero for all values of θ o . This is coherent with the near-symmetry of the BRDF slice around the scattering plane.
2
• yellow-phenolic
• yellow-matte-plastic • white-paint • white-marble • white-acrylic • violet-acrylic • two-layer-gold • tungsten-carbide • ss440 • specular-violet-phenolic • specular-green-phenolic • specular-blue-phenolic • specular-black-phenolic • silver-paint • silicon-nitrade • red-metallic-paint • pvc • pure-rubber • pearl-paint • nickel • neoprene-rubber • hematite • green-metallic-paint2 • green-metallic-paint • gold-paint • gold-metallic-paint3 • gold-metallic-paint2 • color-changing-paint3 • color-changing-paint2 • color-changing-paint1 • chrome • chrome-steel • brass • blue-metallic-paint2 • blue-metallic-paint • black-phenolic • black-obsidian • aventurnine • aluminium • alum-bronze
Co-variance
It is defined as the centralized moment matrix Σ of order 2, which consists of moments of fr centered on its mean. In our case, since µ 0,1 ≈ 0, the coefficients of the co-variance matrix may be written using a slightly simpler formula:
Σ n,m [ fr ] = E fr [(θ i -µ θ ) n φ m i ]
for n + m = 2. This matrix characterizes how the BRDF slice is spread around its mean, with larger values in either dimension implying larger spread. The profiles for the diagonal coefficients σ 2 θ := Σ 2,0 and σ 2 φ := Σ 0,2 are shown in Fig. 3.4c: our selected BRDFs exhibit profiles of different variances, with slight deviations from the average occurring toward grazing viewing angles. The off-diagonal coefficient Σ 1,1 remains close to zero as shown in Fig. 3.5b, again due to the near-symmetry of the BRDF slice.
Real-time 2D manipulation of plausible 3D appearance
Interim discussion
Plots exhibit common behaviors along all the selected BRDFs where we can study their causes. First, both moments which correspond to anti-symmetric functions in the φ i dimension m = 1, µ 0,1 and Σ 1,1 , exhibits close to null profiles. This strongly suggests that they are due to the near-symmetry of most isotropic BRDFs about the scattering plane (i.e. along φ i ), as seen in Fig. 3.3. This affirms the symmetry hypothesis made for the definition of our BRDF slice model. Second, values at incident view θ o = 0 start at 0 for the mean and share the same value for σ 2 θ and σ 2 φ . The reason for this is that slices of isotropic BRDFs are near radially symmetric around the normal (the origin in our parametrization) at incidence.
Lastly, all materials tend to exhibit deviations with respect to a simple profile toward grazing viewing angles. This might be due to specific modes of reflectance such as asperity scattering [START_REF] Koenderink | The secret of velvety skin[END_REF] coming in to reshape the BRDF slice.
Those three common behaviors are coherent with the further results obtained with the study of the skewness and kurtosis as presented in the Appendix A.
Fitting & correlation
In order to have a better understanding of material behavior we fit analytical functions to moment profiles. We look for similarities for a same moment order, as well as correlations among different orders. Naturally, we will less focus on fitting accuracy than on concision: a minimal set of parameters is necessary if we wish to compare profiles across many measured materials.
Regarding color channels, we have tried fitting them separately, or fitting their average for each slice directly. We only report fits based on averages since for our selected materials we found differences across fits for different color channels to be negligible. It must be noted that for the energy, profiles for each color channel will obviously be different; however they are merely offseted with respect to each other. It is thus also reasonable to fit their average since we are mostly interested in the shape of profile functions.
Figure 3.6 shows the fitting results for the energy, mean and variance profiles, as detailed below. It shows computed profiles, fitted profiles, fitting errors and representative 'best', 'intermediate' and 'worse' fits. We introduce our choices of analytical function for each moment order in turn.
Energy
As seen in Fig. 3.6, the energy profiles α(θ o ) for our selection of BRDFs exhibit a constant behavior up to grazing angles, at which point they tend to increase progressively. We model this profile with α(θ o ) ≈ α(θ o ) = αb + αs (θ o ) where αb represents the constant base energy and αs is an Hermite spline that deals with the increase in energy. The Hermite spline itself is modeled with two knots defined by their angular position θ 0 , θ 1 , energy value α(θ 0 ), α(θ 1 ) and slope m 0 , m 1 (see Fig. 3.7a).
We use a non-linear optimization to fit these parameters to each energy profile, using θ 0 = 45 and θ 1 = 75 degrees as initial values, and constraining m 0 = 0 to reproduce the constant profile far from grazing angles. We first fit knot positions independently for each material, which yields an average θ 0 of 35.3 degrees with a standard deviation of 14.8, and an average θ 1 of 71.5 degrees with a standard deviation of 5.2. Combined with the observation that α 1 > α 0 in all our materials, this suggests that all our materials exhibit an energy boost confined to grazing viewing angles. We then fit the same knot positions for all our materials; this yields θ 0 = 38.7 degrees and θ 1 = 69.9 degrees, which confirms the grazing energy boost tendency.
34
Carlos Jorge Zubiaga Peña 3.6: We fitted the moment profiles (from top to bottom: energy, mean and average variance) from the selected materials list. We provide in each column the computed moment profiles (a), the fitted profiles (b), and the corresponding fitting errors (c). The error of our fits is computed using both the Mean Absolute Error (MAE in blue) and the Root Mean Square Error (RMSE in purple). The small inset profiles correspond to worse, intermediate and best fits.
Statistical Analysis
Mean
Concerning the mean profile µ θ (θ o ), the vast majority of cases show a linear tendency, with slopes proportional to the specularity of the material. Moreover, all profile functions go through the origin, as previously observed in Sec. 3.3.4. This suggests that a linear fit µ θ (θ o ) ≈ μθ o is appropriate for representing this behavior. We fit the single slope parameter μ using a least-squares optimization, which always leads to a negative value due to our choice of parametrization (e.g., the mirror direction is given by μ = -1). It is interesting to observe that materials exhibit mean slopes nearly spanning the entire range from -1 to 0.
Variance
We have observed in Sec. 3.3.4 that σ²_θ(0) ≈ σ²_φ(0), which is due to radial symmetry at viewing incidence. Our data also reveals that the deviations from a constant behavior observed around grazing angles tend to increase for σ²_θ and decrease for σ²_φ. We thus choose to study the average variance using a constant profile, hence σ̄² ≈ (σ²_θ(θ_o) + σ²_φ(θ_o))/2. The constant parameter is obtained using a least-squares fitting as before, with values ranging from 0 for a mirror to π²/12 for a Lambertian³. Once again, our materials exhibit a large range of average variances.
Correlation
Looking at Fig. 3.6b, one may observe an apparent correlation between the fitted mean slope μ̄ and the average variance σ̄²: the lower the variance, the steeper the mean. To investigate this potential correlation, we plot one parameter against the other in Fig. 3.7b, which indeed exhibits a correlation. We thus perform a least-squares fit with a quadratic function, using the parameters of a mirror material (μ̄ = -1 and σ̄² = 0) and of a Lambertian one (μ̄ = 0 and σ̄² = π²/12) as end-point constraints. We conjecture that this correlation is due to hemispherical clamping. Because of clamping, the distribution mean is slightly offset toward incidence compared to the distribution peak, and this effect is all the more pronounced for distributions of high variance: wide distributions are clamped earlier than narrow ones.
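With both end points fixed, only one coefficient of the quadratic remains free, and the constrained fit reduces to a scalar linear least-squares problem. The sketch below illustrates this; the function and variable names are ours, and the end-point values are those given in the text.

```python
# Sketch of the constrained quadratic fit mu(v) = c0 + c1*v + c2*v^2 with the
# mirror (mu = -1, v = 0) and Lambertian (mu = 0, v = pi^2/12) end points enforced.
import numpy as np

def fit_mu_of_variance(variance, mean_slope, v_lambert=np.pi**2 / 12):
    v = np.asarray(variance, float)
    mu = np.asarray(mean_slope, float)
    # mu(v) = -1 + v^2/V^2 + c1 * (v - v^2/V); solve for c1 in least squares.
    base = -1.0 + v**2 / v_lambert**2
    basis = v - v**2 / v_lambert
    c1 = float(np.dot(basis, mu - base) / np.dot(basis, basis))
    c2 = (1.0 - c1 * v_lambert) / v_lambert**2
    return -1.0, c1, c2  # coefficients of the constrained quadratic
```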
Discussion
The relationships defined in Sec. 3.2.2 provide insights into how materials influence shading locally, which is observable in pictures. Moreover, the conclusions of the analysis of measured BRDFs in Sec. 3.3.5 describe similar behaviors across different BRDFs.
As an example, let's consider a distant environment illumination reflected off a sphere: in this case, each (θ_o, φ_o) pair corresponds to a point on the sphere surface. First, the energy relationship of Equation (3.19) tells us that the material color has a multiplicative effect on shading. In addition, the energy fitting describes this multiplication as a constant effect (α_b) with an additional boost (α_s) toward the silhouette. Second, Equation (3.20) acts as a warping that increases steadily toward the silhouette. Third, a constant blur is applied locally, as defined by Equation (3.21); it has a constant average behavior over both dimensions (σ̄²). Both mean and variance are linked to material roughness, and their effects are correlated (Fig. 3.7b). It is significant that the only effect that depends on the φ direction is the blurring, which is consistent with the symmetry of BRDF slices around the scattering plane.

³ A Lambertian material corresponds to f_{r_L} = 1/π² in our angular parametrization, irrespective of the view direction. Since we are working on the closed space [-π/2, π/2]², the formulation of moments for a constant function is not the same as in an infinite domain: moments of a Lambertian BRDF are thus finite. We can simplify the expression of the average variance since the mean of a Lambertian is zero, μ_1[f_{r_L}] = 0, hence Cov[f_{r_L}] = μ_2[f_{r_L}]. Furthermore, the variance along θ_i and the variance along φ_i yield the same result due to the symmetry of the integration space and integrands. The average variance ν[f_{r_L}] is thus equal to the variance along θ_i, leading to the formula:

$$\nu[f_{r_L}] = \frac{1}{\pi^2} \int_{-\pi/2}^{\pi/2}\!\!\int_{-\pi/2}^{\pi/2} \theta_i^2 \, d\theta_i \, d\phi_i = \frac{1}{\pi} \left[ \frac{\theta_i^3}{3} \right]_{-\pi/2}^{\pi/2} = \frac{\pi^2}{12}.$$
Figure 3.8 provides an illustration of the effects described above. We start from an ideal mirror BRDF in Fig. 3.8a, which exhibits extreme sharpness and warping of the environment toward the silhouette, as expected. We then study the effect of two BRDFs, specular-black-phenolic and pearl-paint, with renderings shown in Figure 3.8b and Figure 3.8d. In both cases, we focus on three points increasingly closer to the silhouette (i.e., at increasingly grazing viewing elevations). Our analysis reveals that a BRDF directly acts as an image filter whose properties are governed by statistical moments. In Figure 3.8c and Figure 3.8e, we show the means and covariance ellipses (in blue) for both BRDFs at the three picked locations. The filters corresponding to the specular-black-phenolic BRDF remain close to the evaluation position (in red), and their spread is narrow, resulting in a small blur. In contrast, the filters corresponding to the pearl-paint BRDF exhibit a stronger blur, and are offset toward the center of the sphere for increasing viewing angles. As a result, the warping due to the BRDF is less pronounced in this case, a subtle effect of this BRDF which illustrates the impact of the correlation between mean and variance.
Our statistical analysis has shown that, using a simple BRDF slice model based on energy, mean and variance, we can derive relationships between lighting/material and shading. These relationships are observable in images: coloring induced by α, warping by μ and blurring by σ². Our BRDF slice model is consistent with isotropic BRDFs, as shown for the selected list of materials from the MERL database. As an outcome of this study, we obtain similar behaviors of the BRDF slices as functions of the viewing elevation angle, as well as a correlation between mean and variance.

Chapter 4
Dynamic Appearance Manipulation of MatCaps
Object appearance is the result of complex interactions between shape, lighting and material. Instead of defining these components and performing a rendering process afterwards, we intend to manipulate existing appearance by directly modifying the resulting shading colors. We have shown how two of these components, lighting and material, are related to the resulting shading. In this chapter we are interested in using the studied relationships between material/lighting and shading to mimic modifications of both material and lighting from existing shading.
We focus our work on artwork inputs, as we find the artist's perspective interesting: plausible appearances are created by direct painting, instead of the tedious trial-and-error process of rendering. We use artistic images of spheres, which are created without having to specify material or lighting properties. The appearance described in these images is easily transferable to an arbitrarily-shaped object with a simple lookup based on screen-space normals. This approach was first introduced as the LitSphere technique by Sloan et al. [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF]. It is also known under the name of 'MatCaps' for rendering individual objects; typical applications include scientific illustration (e.g., in MeshLab and volumetric rendering [START_REF] Bruckner | Style transfer functions for illustrative volume rendering[END_REF]) and 3D sculpting (e.g., in ZBrush, MudBox or Modo). In this work we use the term 'MatCap' to refer to LitSphere images that convey plausible material properties, and we leave non-photorealistic LitSphere approaches (e.g., [START_REF] Todo | Lit-sphere extension for artistic rendering[END_REF]) out of our work.
The main limitation of a MatCap is that it describes a static appearance: lighting and material are 'baked into' the image. For instance, lighting remains tied to the camera and cannot be rotated independently, and material properties cannot easily be modified. A full separation into physical material and lighting representations would not only be difficult, but also unnecessary, since a MatCap is unlikely to be physically realistic. Instead, our approach is to keep the simplicity of MatCaps while permitting dynamic appearance manipulation in real time. Hence we do not fully separate material and lighting, but rather decompose an input MatCap (Figure 4.1a) into a pair of spherical image-based representations (Figure 4.1b). Thanks to this decomposition, common appearance manipulations such as rotating the lighting or changing the material color and roughness are performed through simple image operators (Figures 4.1c, 4.1d and 4.1e).
Our approach makes the following contributions:
• We assume that the material acts as an image filter in a MatCap and we introduce a simple algorithm to estimate the parameters of this filter (Section 4.1);
• We next decompose a MatCap into high-and low-frequency components akin to diffuse and specular terms. Thanks to estimated filter parameters, each component is then unwarped into a spherical representation analogous to pre-filtered environment maps (Section 4.2);
• We perform appearance manipulation in real-time from our representation by means of image operations, which in effect re-filter the input MatCap (Section 4.3).
As shown in Section 4.4, our approach makes it possible to convey a plausible, spatially-varying appearance from one or more input MatCaps, without ever having to recover physically-based material or lighting representations.
Appearance model
We hypothesize that the material depicted in a MatCap image acts as a filter of constant size in the spherical domain (see Figure 4.2b). Our goal is then to estimate the parameters of this filter from image properties alone. We first consider that such a filter has a pair of diffuse and specular terms. The corresponding diffuse and specular MatCap components may either be given as input or approximated (see Section 4.2.1). The remainder of this section applies to either component, considered independently.
Definitions
We consider a MatCap component L_o to be the image of a sphere in orthographic projection. Each pixel is uniquely identified by its screen-space normal using a pair (θ, φ) of angular coordinates. The color at a point in L_o is assumed to be the result of filtering an unknown lighting environment L_i by a material filter F. We can therefore apply the approach described by Ramamoorthi et al. [START_REF] Ramamoorthi | A signal-processing framework for reflection[END_REF], which considers rendering as a 2D spherical convolution where the material acts as a low-pass filter of the incoming radiance; we may write L_o = F * L_i. We make the same assumptions that we introduced in Section 2.2: 'Convex curved object of uniform isotropic material lit by distant lighting'.
Even though MatCaps are artist-created images that are not directly related to radiance, they still convey material properties. We further restrict F to be radially symmetric on the sphere, which simplifies the estimation of these properties, as it allows us to study L_o in a single dimension. A natural choice of dimension is θ (see Figure 4.3a), since it also corresponds to viewing elevation in tangent space, along which most material variations occur. We thus re-write the 2D spherical convolution as a 1D angular convolution of the form:
$$L_o(\theta + t, \phi) = (f * L_{i_\phi})(\theta + t), \qquad t \in [-\epsilon, +\epsilon], \qquad (4.1)$$
where f is a 1D slice of F along the θ dimension, and L_{i_φ} corresponds to L_i integrated along the φ dimension. We used the same approach in our Fourier analysis of Section 3.2, where we showed that, starting from Equation (3.9) (similar to Equation (4.1)), we obtain simple formulas relating 1D image statistics to statistics of lighting and material. Moreover, we use the simplified relationships defined after studying the measured materials of the MERL database (Section 3.3.5). These formulas are trivially adapted to the angular parametrization based on screen-space normals (a simple change of sign in Equation (4.3)).
For a point (θ, φ), we have:
$$K[L_o] = K[L_{i_\phi}]\, \alpha(\theta), \qquad (4.2)$$
$$E[\hat{L}_o] = E[\hat{L}_{i_\phi}] - \mu\,\theta, \qquad (4.3)$$
$$\mathrm{Var}[\hat{L}_o] = \mathrm{Var}[\hat{L}_{i_\phi}] + \nu, \qquad (4.4)$$
where K denotes the energy of a function, hat functions are normalized by energy (e.g., L̂_o = L_o / K[L_o]), and E and Var stand for statistical mean and variance respectively. The filter parameters associated with each statistic are α, μ and ν. We make use of our fitting study (Section 3.3.5) to make a number of simplifying assumptions that ease their estimation. Equation (4.2) shows that the filter energy α(θ) acts as a multiplicative term. We define it as the sum of a constant α_0 and an optional Hermite function that accounts for silhouette effects (see Figure 4.2a). We assume that only α_0 varies per color channel, hence we call it the base color parameter. Equation (4.3) shows that the angular location of the filter is additive.
The assumption here is that it is a linear function of viewing elevation (i.e., the material warps the lighting environment linearly in θ); hence it is controlled by a slope parameter μ ∈ [0, 1]. Lastly, Equation (4.4) shows that the size of the filter ν acts as a simple additive term in variance. We assume that this size parameter is constant (i.e., the material blurs the lighting environment irrespective of viewing elevation). One may simply use μ = 0 for the diffuse component and μ = 1 for the specular component. However, we have shown evidence of a correlation between μ and ν, which is likely due to grazing-angle effects. We use the correlation function μ(ν) = 1 - 0.3ν - 1.1ν², in effect defining the slope as a function of filter size.
Putting it all together, we define our filter F as a 2D spherical Gaussian: its energy varies according to α(θ), it is shifted by µ θ and has constant variance ν. This is illustrated in Figure 4.2b, where we draw filter slices f for three different viewing elevations. In the following, we first show how to evaluate the filter energy α (Section 4.1.2), then its size ν (Section 4.1.3), from which we obtain its slope µ.
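As a summary of this model, the following sketch evaluates a 1D filter slice for a given viewing elevation from the three parameters α, μ and ν, with μ tied to ν by the correlation function above. It is an illustration of the model only; the sign of the mean shift depends on the chosen angular parametrization, and the function names are ours.

```python
# Illustrative sketch of the BRDF-slice filter: a 1D Gaussian in theta_i whose
# energy is alpha, whose mean is shifted proportionally to the viewing
# elevation theta, and whose variance nu is constant.
import numpy as np

def slope_from_variance(nu):
    # Correlation function from the text, mapping filter size to slope.
    return 1.0 - 0.3 * nu - 1.1 * nu**2

def filter_slice(theta_i, theta, alpha, nu):
    """Evaluate f(theta_i) for viewing elevation theta (radians)."""
    mu = slope_from_variance(nu)
    center = mu * theta   # shift of the filter (sign depends on parametrization)
    g = np.exp(-(np.asarray(theta_i) - center)**2 / (2.0 * nu))
    return alpha * g / np.sqrt(2.0 * np.pi * nu)
```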
Energy estimation
The filter energy is modeled as the sum of a constant base color and an optional silhouette effect function. However, silhouette effects are scarce in MatCaps, as they require the artist to consistently apply the same intensity boost along the silhouette. In our experience, the few MatCaps that exhibit such an effect (see inset) clearly show an additive combination, suggesting a rim lighting configuration rather than a multiplicative material boost. We thus only consider the base color for estimation in artist-created MatCaps. Nevertheless, we show in Section 4.3.2 how to incorporate silhouette effects in a proper multiplicative way.
The base color α 0 is a multiplicative factor that affects an entire MatCap component. If we assume that the brightest light source is pure white, then the corresponding point on the image is the one with maximum luminance. All MatCaps consist of low-dynamic range (LDR) images since they are captured from LDR images or painted in LDR. Hence at a point of maximum luminance, α 0 is directly read off the image since K[L i φ ] = 1 in Equation (4.2). This corresponds to white balancing using a grey-world assumption (see Figure 4.6c). This assumption may not always be correct, but it is important to understand that we do not seek an absolute color estimation. Indeed, user manipulations presented in Section 4.3 are only made relative to the input MatCap.
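A minimal sketch of this estimation step is given below, assuming a hypothetical `component` buffer storing one LDR MatCap component; the luminance weights are the standard Rec. 709 ones, which the text does not specify.

```python
# Minimal sketch: read alpha_0 at the pixel of maximum luminance of a MatCap
# component (grey-world-like assumption). 'component' is an HxWx3 float image.
import numpy as np

def estimate_base_color(component, mask=None):
    lum = component @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
    if mask is not None:
        lum = np.where(mask, lum, -np.inf)   # optionally restrict to the sphere
    y, x = np.unravel_index(np.argmax(lum), lum.shape)
    return component[y, x]   # alpha_0, one value per color channel
```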
Variance estimation
The filter size corresponds to material variance. It is related to image variance (see Eq. (4.4)).
Image variance
We begin by explaining how we compute image variance, the left hand side in Equation (4.4).
To this end we must define a 1D window with compact support around a point (θ, φ), and sample the MatCap along the θ dimension as shown in Figure 4.3b. In practice, we weight L_o by a function W_ε : [-ε, +ε] → [0, 1], yielding:
$$L_{o_\epsilon}(\theta + t, \phi) = L_o(\theta + t, \phi)\, W_\epsilon(t), \qquad (4.5)$$
where W_ε is a truncated Gaussian of standard deviation ε/3. Assuming L_o to be close to a Gaussian as well on [-ε, +ε], the variance of L_o is related to that of L_{o_ε} by [START_REF] Bromiley | Products and convolutions of gaussian distributions[END_REF]:
$$\mathrm{Var}[\hat{L}_o]_\epsilon \simeq \frac{\mathrm{Var}[\hat{L}_{o_\epsilon}] \cdot \mathrm{Var}[W_\epsilon]}{\mathrm{Var}[W_\epsilon] - \mathrm{Var}[\hat{L}_{o_\epsilon}]}. \qquad (4.6)$$
The image variance computed at a point (θ, φ) depends on the choice of window size. We find the most relevant window size (and the corresponding variance value) using a simple differential analysis in scale space, as shown in Figure 4.4.
Variance exhibits a typical signature: after an initial increase that we attribute to variations of W_ε, it settles down (possibly reaching a local minimum), then rises again as W_ε encompasses neighboring image features. We seek the window size ε* at which the window captures the variance best, which is where the signature settles. We first locate the second inflection point, which marks the end of the initial increase. Then ε* corresponds either to the location of the next minimum (Figure 4.4a) or to the location of the second inflection if no minimum is found (Figure 4.4b). If no second inflection occurs, we simply pick the variance at the largest window size ε* = π/2 (Figure 4.4c). The computation may become degenerate, yielding negative variances (Figure 4.4d). Such cases occur in regions of very low intensity that compromise the approximation of Equation (4.6); we discard the corresponding signatures.
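The sketch below illustrates the core of this computation: a windowed variance deconvolved by Equation (4.6) over a range of window sizes. The full signature analysis (inflection points, local minimum) is only hinted at here, and the input is assumed to be a 1D slice of the component along θ, centered on the evaluated pixel; all names are ours.

```python
# Hedged sketch of the image-variance estimation of Eq. (4.6): for a growing
# window W_eps (truncated Gaussian, std eps/3), compute the variance of the
# windowed signal and deconvolve the window variance.
import numpy as np

def windowed_variance(thetas, samples, eps):
    w = np.exp(-0.5 * (thetas / (eps / 3.0))**2) * (np.abs(thetas) <= eps)
    f = samples * w
    if f.sum() <= 0:
        return None
    m = np.sum(thetas * f) / np.sum(f)                 # mean of L_o_eps
    var_oe = np.sum((thetas - m)**2 * f) / np.sum(f)   # Var[L_o_eps]
    var_w = (eps / 3.0)**2                             # Var[W_eps] (approx.)
    denom = var_w - var_oe
    return var_oe * var_w / denom if denom > 1e-9 else None   # Eq. (4.6)

def variance_signature(thetas, samples, n_scales=64):
    # Scan window sizes; the signature is then analysed to pick eps*.
    return [(eps, windowed_variance(thetas, samples, eps))
            for eps in np.linspace(0.05, np.pi / 2, n_scales)]
```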
Material variance
The estimation of ν from Equation (4.4) requires assumptions on the variance of the integrated lighting L_{i_φ}. If we assume that the lighting environment contains sharp point or line light sources running across the θ direction, then at those points we have Var[L̂_{i_φ}] ≈ 0 and thus ν ≈ Var[L̂_o]. Moreover, observe that Equation (4.1) remains valid when replacing L_o and L_{i_φ} by their derivatives L′_o and L′_{i_φ} in the θ dimension. Consequently, Equation (4.4) may also be used to recover ν by relying on the θ-derivative of a MatCap component. In particular, if we assume that the lighting environment contains sharp edge light sources, then at those points we have Var[L̂′_{i_φ}] ≈ 0 and thus ν ≈ Var[L̂′_o]. In practice, we let users directly provide regions of interest (ROIs) around the sharpest features by selecting a few pixel regions in the image. We run our algorithm on each pixel inside a ROI, and pick the minimum variance over all pixels to estimate the material variance. The process is fast enough to provide interactive feedback, and it does not require accurate user inputs since variance is a centered statistic. An automatic procedure for finding ROIs would be interesting for batch conversion purposes, but is left to future work. Our approach is similar in spirit to that of Hu and de Haan [START_REF] Hu | Low cost robust blur estimator[END_REF], but is tailored to the signatures of Figure 4.4. Note that since MatCap images are LDR, regions where intensity is clamped to 1 produce large estimated material variances. This seems to be in accordance with the way material perception is altered in LDR images [START_REF] Phillips | Effects of image dynamic range on apparent surface gloss[END_REF].
Validation
We validate our estimation algorithm using analytical primitives of known image variance, as shown in Figure 4.5. To make the figure compact, we have put three primitives of different sizes and variances in each of the first two MatCaps. We compare ground-truth image variances to the estimates given by Var[L̂′_o] (for ROIs A and D) or Var[L̂_o] (all other ROIs), at three image resolutions. Our method provides accurate variance values compared to the ground truth, independently of image resolution, using L_o or L′_o. The slight errors observed in D, E and F are due to primitives lying close to each other, which affects the quality of our estimation. The small under-estimation in the case of I likely happens because the primitive is so large that part of it is hidden from view.
To compute material variance, our algorithm considers the location that exhibits minimum image variance. For instance, if we assume the first two MatCaps of Figure 4.5 to be made of homogeneous materials, then their material variances will be those of A and D respectively. This implicitly assumes that the larger variances of other ROIs are due to blurred lighting features, which is again in accordance with findings in material perception [START_REF] Roland W Fleming | Real-world illumination and the perception of surface reflectance properties[END_REF].
MatCap decomposition
We now make use of estimated filter parameters to turn a MatCap into a representation amenable to dynamic manipulation. Figure 4.6 shows a few example decompositions. Please note that all our MatCaps are artist-created, except for the comparisons in Figs. 4.8 and 4.12.
Low-/High-frequency separation
Up until now, we have assumed that a MatCap was readily separated into a pair of components akin to diffuse and specular effects. Such components may be provided directly by the artist during the capture or painting process, simply using a pair of layers. However, most MatCaps are given as a single image where both components are blended together.
Separating an image into diffuse and specular components without additional knowledge is inherently ambiguous. Existing solutions (e.g., [NVY+14]) focus specifically on specular highlights, while we need a full separation. Instead of relying on complex solutions, we provide a simple heuristic separation into low-frequency and high-frequency components, which we find sufficient for our purpose. Our solution is based on a gray-scale morphological opening directly inspired by the work of Sternberg [START_REF] Stanley R Sternberg | Grayscale morphology[END_REF]. It has the advantage of outputting positive components without requiring any parameter tuning, which we found in no other technique.
We use morphological opening to extract the low-frequency component of a MatCap. An opening is the composition of an erosion operator followed by a dilation operator. Each operator is applied once to all pixels in parallel, per color channel. For a given pixel p:
$$\mathrm{erode}(p) = \min_{q \in P} \left( \frac{v_q}{\mathbf{n}_p \cdot \mathbf{n}_q} \right); \qquad (4.7)$$
$$\mathrm{dilate}(p) = \max_{q \in P} \big( v_q \, (\mathbf{n}_p \cdot \mathbf{n}_q) \big), \qquad (4.8)$$
where P = {q | (n_p · n_q) > 0} is the set of valid neighbor pixels around p, and v_q and n_q are the color value and screen-space normal at a neighbor pixel q, respectively. The dot product between normals reproduces cosine weighting, which dominates in diffuse reflections. It is shown in the inset figure along with the boundary ∂P of neighbor pixels.
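A direct, unoptimized sketch of this opening is given below, processing one color channel over all pixels of the sphere image. It follows our reading of Equations (4.7) and (4.8), with a division in the erosion and a product in the dilation; this is an interpretation of the text, not the thesis implementation.

```python
# Sketch of the cosine-weighted grey-scale opening used to extract the
# low-frequency component (one erosion followed by one dilation).
import numpy as np

def opening(values, normals):
    """values: (N,) intensities of one colour channel; normals: (N,3) unit
    screen-space normals. Returns the low-frequency (opened) component."""
    eroded = np.empty_like(values)
    for p in range(len(values)):
        d = normals @ normals[p]
        mask = d > 0.0                                  # valid neighbours P
        eroded[p] = np.min(values[mask] / d[mask])      # Eq. (4.7), as we read it
    opened = np.empty_like(values)
    for p in range(len(values)):
        d = normals @ normals[p]
        mask = d > 0.0
        opened[p] = np.max(eroded[mask] * d[mask])      # Eq. (4.8)
    return opened   # high-frequency component = values - opened
```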
The morphological opening process is illustrated in Figure 4.7. The resulting low-frequency component is subtracted from the input to yield the high-frequency component. Figure 4.8 shows separation results on a rendered sphere compared to veridical diffuse and specular components. Differences are mostly due to the fact that some low-frequency details (due to smooth lighting regions) occur in the veridical specular component. As a result, the specular component looks brighter than our high-frequency component, while the diffuse component looks dimmer than our low-frequency component. Nevertheless, we found that this approach provides a sufficiently plausible separation when no veridical diffuse and specular components exist, as with artist-created MatCaps (see Figure 4.6b for more examples).
Spherical mapping & reconstruction
Given a pair of low- and high-frequency components along with estimated filter parameters, we next convert each component into a spherical representation. We denote a MatCap component by L_o, the process being identical in either case.
We first divide L_o by its base color parameter α_0. This yields a white-balanced image L*_o, as shown in Figure 4.6c. We then use the filter slope parameter μ to unwarp L*_o to a spherical representation, and we use a dual paraboloid map [START_REF] Heidrich | View-independent environment maps[END_REF] for storage purposes. In practice, we apply the inverse mapping to fill in the dual paraboloid map, as visualized in Figure 4.9. Each texel q in the paraboloid map corresponds to a direction ω_q. We rotate it back to obtain its corresponding normal n_q = rot_{u_q, -μθ}(ω_q), where u_q = (e_2 × ω_q)/‖e_2 × ω_q‖, θ = acos(e_2 · ω_q)/(1 + μ), and e_2 = (0, 0, 1) stands for the (fixed) view vector in screen space. Since for each texel q we end up with a different rotation angle, the resulting transformation is indeed an image warping. The color for q is finally looked up in L*_o using n_q. Inevitably, a disc-shaped region on the back side of the dual paraboloid map receives no color values. We call it the blind spot, and its size depends on μ: the smaller the slope parameter, the wider the blind spot. Since in our approach the slope is a decreasing function μ(ν) of filter size, a wide blind spot corresponds to a large filter, and hence to low-frequency content. It is thus reasonable to apply inpainting techniques without having to introduce new details in the back paraboloid map. In practice, we apply Poisson image editing [START_REF] Pérez | Poisson image editing[END_REF] with a radial guiding gradient that propagates boundary colors of the blind spot toward its center (Fig. 4.9b).
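The following sketch shows the per-texel inverse mapping used to fill the map: each texel direction is rotated back by -μθ around u_q to find the normal whose color it receives. The Rodrigues rotation and the handling of the degenerate texel aligned with the view are implementation choices of the sketch, not taken from the thesis.

```python
# Hedged sketch of the inverse mapping: texel direction omega -> normal n_q.
import numpy as np

E2 = np.array([0.0, 0.0, 1.0])   # fixed view vector in screen space

def rotate(v, axis, angle):
    # Rodrigues rotation of vector v around a unit axis.
    axis = axis / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    return v * c + np.cross(axis, v) * s + axis * np.dot(axis, v) * (1 - c)

def normal_for_texel(omega, mu):
    theta = np.arccos(np.clip(np.dot(E2, omega), -1.0, 1.0)) / (1.0 + mu)
    u = np.cross(E2, omega)
    if np.linalg.norm(u) < 1e-8:       # omega aligned with the view vector
        return omega
    return rotate(omega, u, -mu * theta)   # n_q = rot_{u_q, -mu*theta}(omega_q)
```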
This decomposition process results in a pair of white-balanced dual paraboloid maps, one for each component, as illustrated in Figure 4.6d. They are well suited to real-time rendering, as they are analogous to pre-filtered environment maps (e.g., [START_REF] Kautz | A unified approach to prefiltered environment maps[END_REF][START_REF] Ramamoorthi | Frequency space environment map rendering[END_REF]).
Appearance manipulation
Rendering using our decomposition is the inverse of the process of Section 4.2.2. The color at a point p on an arbitrary object is given as a function of its screen-space normal n_p. For each component, we first map n_p to a direction ω_p on the sphere: we apply a rotation ω_p = rot_{u_p, μθ}(n_p), with u_p = (e_2 × n_p)/‖e_2 × n_p‖ and θ = acos(e_2 · n_p). A shading color is then obtained by a lookup in the dual paraboloid map based on ω_p, which is then multiplied by the base color parameter α_0. The low- and high-frequency components are finally added together.
Lighting manipulation
Lighting may be edited by modifying our representation, given as a pair of dual paraboloid maps. We provide a painting tool to this end, as illustrated in Figure 4.10b. The user selects one of the components and paints on the object at a point p. The screen-space normal n_p and slope parameter μ are used to accumulate a brush footprint in the dual paraboloid map. To account for material roughness, the footprint is blurred according to ν. We use Gaussian- and Erf-based footprints to this end, since they allow us to perform such a blurring analytically. We also provide a light source tool, which is similar to the painting tool, and is shown in Figure 4.10e. It takes as input a bitmap image that is blurred based on ν. However, instead of being accumulated as in painting, it is simply moved around.
A major advantage of our decomposition is that it allows the whole lighting environment to be rotated. This applies to both low- and high-frequency components in synchronization. In practice, it simply consists in applying the inverse rotation to n_p prior to warping. As shown in Figure 4.10c,f, this produces convincing results that remain coherent even with additional reflections.
Material manipulation
Manipulating apparent material roughness requires modifying ν, but also μ since it depends on ν. This is trivial for light sources that have been added or painted, as one simply has to re-render them. However, the low- and high-frequency components obtained through separation of the input MatCap require additional filtering. For a rougher material look (Figure 4.11b), we decrease the magnitude of μ and blur the dual paraboloid map to increase ν. For a shinier material look (Figure 4.11c), we increase the magnitude of μ and manually add reflections with a lower ν to the dual paraboloid map. We have tried using simple sharpening operators, but avoided that solution as it tends to amplify noise in images.
For the manipulation of apparent material color, we take inspiration from color variation
Results and comparisons
Our material estimation algorithm (Section 4.1) is implemented on the CPU and runs in real-time on a single core of an Intel i7-2600K 3.4GHz, allowing users to quickly select appropriate ROIs. The decomposition process (Section 4.2) is implemented in Gratin (a GPU-tailored nodal software available at http://gratin.gforge.inria.fr/), using an Nvidia GeForce GTX 555. Performance is largely dominated by the low-/high-frequency separation algorithm, which takes from 2 seconds for a 400 × 400 MatCap, to 6 seconds for a 800 × 800 one. Rendering (Section 4.3) is implemented in Gratin as well and runs in real-time on the GPU, with a negligible overhead compared to rendering with a simple MatCap. We provide GLSL shaders for rendering with our representation in supplemental material. A benefit of our approach is the possibility to rotate lighting independently of the view. One may try to achieve a similar behavior with a mirrored MatCap to form an entire sphere. However, this is equivalent to a spherical mapping, in which case highlights do not move, stretch or compress in a plausible way.
In this paper, we have focused on artist-created MatCaps for which there is hardly any ground truth to compare to. Nevertheless, we believe MatCaps should behave similarly to rendered spheres when lighting is rotated. Figure 4.12 shows a lighting rotation applied to the rendering of a sphere, for which a ground truth exists. We also compare to a rotation obtained by the method of Lombardi et al. [START_REF] Lombardi | Reflectance and natural illumination from a single image[END_REF]. For the specific case of lighting rotation, our approach appears superior; in particular, it reproduces the original appearance exactly. However, the method of Lombardi et al. has an altogether different purpose, since it explicitly separates material and lighting. For instance, they can re-render a sphere with the same lighting but a different material, or with the same material but a different lighting.
Up to this point, we have exploited only a single MatCap in all our renderings. However, we may use low- and high-frequency components coming from different MatCaps, as shown in Figure 4.13. Different MatCaps may of course be used on different object parts, as seen in Figure 4.14. Our approach offers several benefits here: the input MatCaps may be aligned, their colors changed per component, and they remain aligned when rotated.
Our representation also brings interesting spatial interpolation abilities, since it provides material parameters that can be varied. Figure 4.15 shows how bitmap textures are used to vary the high- and low-frequency components separately. Figure 4.16 successively makes use of an ambient occlusion map, a diffuse color map, and silhouette effects to convey object shape. Our approach thus makes it possible to obtain spatial variations of appearance, which are preserved when changing input MatCaps.
Discussion
We have shown how to decompose a MatCap into a representation more amenable to dynamic appearance manipulation. In particular, our approach enables common shading operations such as lighting rotation and spatially-varying materials, while preserving the appeal of artist-created MatCaps. We are convinced that our work will quickly prove useful in software that already makes use of MatCaps (firstly 3D sculpting, but also CAD and scientific visualization), with a negligible overhead in terms of performance but greater flexibility in terms of appearance manipulation. We believe that this work, restricted to MatCaps, is easily transferred to other kinds of inputs, once a spherical representation of shading is obtained. We presume that the estimation of materials will remain valid regardless of whether the input comes from an artwork, a photograph or a rendering. However, the sharp-lighting assumption might not always be met, in which case material parameters will be over- or under-estimated. This will not prevent our approach from working, since it will be equivalent to having a slightly sharper or blurrier lighting. Interestingly, recent psycho-physical studies (e.g., [START_REF] Doerschner | Estimating the glossiness transfer function induced by illumination change and testing its transitivity[END_REF]) show that different material percepts may be elicited only by changing lighting content. This suggests that our approach could be in accordance with visual perception, an exciting topic for future investigation.
Our decomposition approach makes a number of assumptions that may not always be satisfied. We assume an additive blending of components, whereas artists may have painted a MatCap using other blending modes.
For further discussion of the limitations of our technique, such as the restriction to radially symmetric BRDFs or the needed improvements to the filling-in technique, we refer to Chapter 6.
Chapter 5
Local Shape Editing at the Compositing Stage
Images created by rendering engines are often modified in post-process, making use of independent, additive shading components such as diffuse or reflection shading or transparency (Section 1.1.3). Most modern off-line rendering engines output these shading components in separate image buffers without impeding rendering performance. In the same way, it is possible to output auxiliary components such as position or normal buffers. These auxiliary buffers permit additional modifications, for instance adding lights in post-process (using normals) or depth of field (using positions). Nevertheless, these modifications are limited: modifying auxiliary buffers holding 3D normals or positions has no effect on the shading buffers. Following our goal of modifying existing appearance, we want to allow geometry modifications. We specifically focus on ways to obtain a plausible shading color when modifying local shape (normals). The straightforward solution for these kinds of modifications would be to completely re-render the scene in 3D. This is a time-consuming process that we want to avoid, in order to explore modifications interactively. Such post-processing techniques are routinely used in product design applications (e.g., Colorway) or in movie production (e.g., Nuke or Flame) to quickly test alternative compositions. They are most often preferred to a full re-rendering of the 3D scene, which would require considerably larger assets, longer times and different artistic skills.
The main issue when modifying shape at the compositing stage is that lighting information is no longer available, as it is lost in the rendering process. Recovering the environment lighting would not be possible, as we lack much of the necessary 3D data. We instead strive for a plausible result, ensuring that the input diffuse and reflection shading buffers are recovered when reverting to the original normals. As in Chapter 4, we work with pre-filtered environment maps. While with MatCaps we used pre-filtered environment maps to mimic modifications of material or lighting, we now use them to allow geometry modifications. The key idea of our method is to reconstruct a pair of pre-filtered environments per object/material: one for the diffuse term, the other for the reflection term. Obtaining new shading colors from arbitrarily modified normals then amounts to performing a pair of lookups in the respective pre-filtered environment maps. Modifying local shape in real-time then becomes a matter of recompositing the reconstructed shading buffers.
Alternatively, we could export an environment lighting map, pre-filtered or not, per object/material during the rendering process, to be used afterwards to perform the desired modifications of local shape. This solution requires obtaining the environment maps using light-transport techniques in order to capture the effects of the interactions between different objects (i.e., shadows or reflections). That results in a tedious and costly approach, because each environment map would need a complete re-render of the whole scene. Moreover, retro-reflections could not be obtained, as each object needs to be replaced by a sphere to get its environment map.
Our approach is a first step toward the editing of surface shape (and more generally object appearance) at the compositing stage, which we believe is an important and challenging problem. The rendering and compositing stages are clearly separate in practice, involving different artistic skills and types of assets (i.e., 3D scenes vs render buffers). Providing compositing artists with more control over appearance will thus require specific solutions. Our paper makes the following contributions toward this goal (see Figure 5.1):
• Diffuse and reflection shading environments are automatically reconstructed in a preprocess for each object/material combination occurring in the reference rendering (Section 5.1);
• The reconstructed environments are used to create new shading buffers from arbitrarily modified normals, which are recomposited in real time with the reference shading buffers (Section 5.2).
Figure 5.1: Our method makes it possible to modify surface shape by making use of the shading and auxiliary buffers output by modern renderers. We first reconstruct shading environments for each object/material combination of the Truck scene, relying on normal and shading buffers. When normals are then modified by the compositing artist, the color image is recomposited in real-time, enabling interactive exploration. Our method reproduces inter-reflections between objects, as seen when comparing the reconstructed environments for the rear and front mudguards. This work has been published in the Eurographics Symposium on Rendering, in the 'Experimental Ideas & Implementations' track, in collaboration with Gael Guennebaud and Romain Vergne [START_REF] Jorge | Local Shape Editing at the Compositing Stage[END_REF]. Specifically, Gael Guennebaud developed the reconstruction of the diffuse component, and the hole-filling and regularization were developed by Gael Guennebaud and Romain Vergne.
Reconstruction
In this work, we focus on modifying the apparent shape of opaque objects at the compositing stage. We thus only consider the diffuse and reflection buffers, the latter resulting from reflections off either specular or glossy objects. These shading buffers exhibit different frequency content: diffuse shading is low-frequency, while reflection shading might contain arbitrarily high frequencies. As a result we use separate reconstruction techniques for each of them.
Both techniques take their reference shading buffer D 0 or R 0 as input, along with an auxiliary normal buffer n. Reconstruction is performed separately for each surface patch P belonging to a same object with a same material, and identified thanks to a surface ID buffer. We also take as input ambient and reflection occlusion buffers α D and α R , which identify diffuse and specular visibility respectively (see Appendix). The latter is used in the reconstruction of the reflection environment. We use Modo to export all necessary shading and auxiliary buffers, as well as camera data. The normals are transformed from world to screen-space prior to reconstruction; hence the diffuse and reflection environments output by our reconstruction techniques are also expressed in screen space.
Diffuse component
In this section our goal is to reconstruct a complete prefiltered diffuse environment D : S² → R³, parametrized by surface normals n. The map D should match the input diffuse buffer inside P: for each selected pixel x ∈ P, D(n(x)) should be as close as possible to D_0(x). However, as illustrated by the Gauss map in Figure 5.2, this problem is highly ill-posed without an additional prior. First, it is under-constrained in regions of the unit sphere not covered by the Gauss map of the imaged surface. This is due to sharp edges, occluded regions and highly curved features producing a very sparse sampling. Second, the problem might also be over-constrained in areas of the Gauss map covered by multiple sheets of the imaged surface, as they might exhibit different diffuse shading values due to global illumination (mainly occlusions).

Figure 5.2: The pixels of the input diffuse layer of the selected object are scattered inside the Gauss map. This shading information is then approximated by low-order spherical harmonics using either a least-squares fit (top) or our Quadratic Programming formulation (bottom), leading to the respective reconstructed environment maps. To evaluate the reconstruction quality, those environment maps are then applied to the original objects, and the signed residual is shown using a color code (shown at right). Observe how our QP reconstruction guarantees a negative residual. (Panels: input diffuse layer; comparison to least-squares reconstruction; our QP reconstruction.)
Since diffuse shading exhibits very low frequencies, we address the coverage issue by representing D with low-order Spherical Harmonics (SH) basis functions, which have the double advantage of being globally supported and of exhibiting very good extrapolation capabilities. We classically use order-2 SH, only requiring 9 coefficients per color channel [START_REF] Ramamoorthi | An efficient representation for irradiance environment maps[END_REF]. The reconstructed prefiltered diffuse environment is thus expressed by:
$$D(\mathbf{n}) = \sum_{l=0}^{2} \sum_{m=-l}^{l} c_{l,m}\, Y_{l,m}(\mathbf{n}). \qquad (5.1)$$
The multiple-sheets issue is addressed by reconstructing a prefiltered environment as if no occlusion were present. This choice facilitates shading editing, as the local residual D_0(x) - D(n(x)) is then directly correlated with the amount of local occlusion. Formally, it amounts to the following constrained quadratic minimization problem:
$$c^\star_{l,m} = \arg\min_{c_{l,m}} \sum_{x \in P} \left\| D_0(x) - D(\mathbf{n}(x)) \right\|^2, \quad \text{s.t.} \quad D(\mathbf{n}(x)) \geq D_0(x),$$
which essentially says that the reconstruction should be as close as possible to the input while enforcing negative residuals. This is a standard Quadratic Programming (QP) problem that we efficiently solve using a dual iterative method [START_REF] Goldfarb | A numerically stable dual method for solving strictly convex quadratic programs[END_REF]. Figure 5.2 compares our approach to a standard least squares (LS) reconstruction. As made clear in the two right-most columns, our QP method produces shading results more plausible than the LS method: residuals are negative by construction and essentially correspond to darkening by occlusion.
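As an illustration, the sketch below solves the same constrained fit with an off-the-shelf SLSQP solver instead of the dual iterative method cited above; the order-2 SH basis uses the standard real-valued constants, one color channel is fitted at a time, and all names are ours.

```python
# Sketch (not the thesis solver): order-2 SH fit of the diffuse buffer with
# the negative-residual constraint D(n(x)) >= D_0(x).
import numpy as np
from scipy.optimize import minimize

def sh_basis(n):
    """Real order-2 SH evaluated at unit normals n of shape (N,3): (N,9)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    c = np.ones_like(x)
    return np.stack([
        0.282095 * c,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z**2 - 1),
        1.092548 * x * z, 0.546274 * (x**2 - y**2)], axis=1)

def fit_diffuse_qp(normals, d0):
    A = sh_basis(normals)                                 # (N, 9)
    ls = lambda c: np.sum((A @ c - d0)**2)                # least-squares objective
    cons = {'type': 'ineq', 'fun': lambda c: A @ c - d0}  # D(n) >= D_0
    c0, _, _, _ = np.linalg.lstsq(A, d0, rcond=None)      # unconstrained warm start
    res = minimize(ls, c0, jac=lambda c: 2 * A.T @ (A @ c - d0),
                   constraints=[cons], method='SLSQP')
    return res.x                                          # 9 SH coefficients
```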
Validation
The left column of Figure 5.6 shows reconstructed diffuse shading for a pair of environment illuminations. We project the light probes onto the SH basis and use them to render a 3D teapot model, from which we reconstruct the diffuse environments. In this case, there is no occlusion, only direct lighting: our reconstruction yields very good results, as shown by the difference images with respect to the ground-truth environments.
Reflection component
At first glance, the problem of reconstructing a prefiltered reflection environment map is similar to the diffuse case. Our goal is to reconstruct a complete prefiltered reflection environment R : S 2 → R 3 parametrized by the reflected view vector r = reflect(v, n), where v and n are the local view and normal vectors respectively. As before, R should match the input reflection buffer: for each selected pixel x ∈ P, R(r(x)) should be as close as possible to R 0 (x).
On the other hand, the reflection buffer contains arbitrarily high frequencies, which prohibits the use of an SH representation. We thus propose to represent and store R in a high-resolution dual-paraboloid map [START_REF] Heidrich | View-independent environment maps[END_REF] that we fill in three steps:

1. Mapping from R_0 to R while dealing with overlapping sheets;
2. Hole-filling of R using harmonic interpolation on the sphere;
3. Smoothing of the remaining discontinuities.
Partial reconstruction
For the reconstruction of the diffuse map, interpreting each input pixel as a simple point sample proved to be sufficient. However, in order to reconstruct a sharp reflection map, it becomes crucial to fill the gaps between the samples with as much accuracy as possible. To this end, we partition the input set of pixels into smooth patches according to normal and depth discontinuities. Each smooth patch has thus a continuous (i.e., hole-free) image once mapped on S 2 through the reflected directions r. Due to the regular structure of the input pixel grid, the image of the patch is composed of adjacent spherical quads and triangles (at patch boundaries). This is depicted in Figure 5.3 for a block of 3 × 3 pixels.
Depending on object shape and camera settings, each patch may self-overlap, and the different patches can also overlap each other. In other words, a given reflection direction r might coincide with several polygons. We combine shading information coming from these different image locations using two types of weights. First, we take as input an auxiliary [...]

where N(r) is the number of unoccluded spherical polygons containing r, Q_k is the set of corner indices of the k-th polygon, and λ^k_j are barycentric coordinates enabling the interpolation of the shading colors inside the k-th polygon. For polygons embedded on a sphere, barycentric coordinates can be computed as described in Equation 8 of [START_REF] Langer | Spherical barycentric coordinates[END_REF]. We use normalized spherical coordinates, which amounts to favoring the partition-of-unity property over the linear-precision property on the sphere (i.e., Equation 2 instead of 3 in [START_REF] Langer | Spherical barycentric coordinates[END_REF]).
In order to quickly recover the set of spherical polygons containing r, we propose to first warp the set S² of reflected directions to a single hemisphere, so that the search space can be more easily indexed. To this end, we compute a rectified normal buffer n′ such that r = reflect(z, n′), where z = (0, 0, 1)^T, as shown in Figure 5.4. This is obtained by the bijection n′(r) = (r + z)/‖r + z‖. In a preprocess, we build a 2D grid upon an orthographic projection of the Gauss map of these rectified screen-space normals. For each polygon corresponding to four or three connected pixels, we add its index to the cells it intersects. The intersection is carried out conservatively by computing the 2D convex hull of the projected spherical polygon. Then, for each query reflection vector r, we compute the spherical barycentric coordinates λ^k_j of each polygon of index k in the cell containing n′(r), and pick the polygons having all λ^k_j positive. In our implementation, for the sake of simplicity and consistency with our 2D grid construction, we compute spherical barycentric coordinates with respect to the rectified normals n′, for both intersections and shading interpolation (Equation (5.2)).
Figure 5.4: A normal n is 'rectified' to n ′ prior to reconstruction. The reflection of the view vector v is then given by r = reflect(z, n ′ ).
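A small sketch of this bijection follows, together with the reflection operator it inverts; it is only meant to make the construction of Figure 5.4 explicit, and the function names are ours.

```python
# Rectified normal n' such that r = reflect(z, n'), with z = (0, 0, 1)^T.
import numpy as np

Z = np.array([0.0, 0.0, 1.0])

def rectified_normal(r):
    h = r + Z
    return h / np.linalg.norm(h)

def reflect(v, n):
    return 2.0 * np.dot(v, n) * n - v   # mirror v about the unit normal n

# sanity check: reflect(Z, rectified_normal(r)) recovers r (up to rounding)
```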
Hole-filling and regularization
Evaluating Equation (5.2) for each direction r in a dual-paraboloid map yields a partial reconstruction, with holes in regions not covered by a single polygon and discontinuities at the transition between different smooth parts.
For instance, the bright red region in the top row of Figure 5.5 corresponds to reflection directions where no shading information is available (i.e., N(r) = 0). It is necessary to fill these empty regions to guarantee that shading information is available for all possible surface orientations. In practice, we perform a harmonic interpolation directly on a densely tessellated 3D sphere, with vertices indexed by r matching exactly the pixels of the dual-paraboloid map. The tessellation is thus perfectly regular except at the junction between the front and back hemispheres. We use a standard finite-element discretization with linear basis functions over a triangular mesh to solve the Laplacian differential equation, while setting the shading values recovered by Equation (5.2) as Dirichlet boundary constraints [IGG+14].
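To illustrate the idea, the sketch below performs the same harmonic interpolation on the 2D grid of the paraboloid map, with known texels as Dirichlet constraints. It ignores the front/back seam and replaces the FEM discretization on the sphere used here with a simple 5-point Laplacian; it is an approximation for illustration only.

```python
# Simplified harmonic hole-filling: solve Laplace's equation on unknown texels
# with known texels as Dirichlet boundary values.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def fill_holes(image, known):
    """image: (H,W) partial map; known: (H,W) bool mask of valid texels."""
    H, W = image.shape
    idx = np.arange(H * W).reshape(H, W)
    unknown = ~known
    ui = idx[unknown]                        # unknown texels, row-major order
    if ui.size == 0:
        return image.copy()
    pos = {p: i for i, p in enumerate(ui)}
    row, col, val, rhs = [], [], [], np.zeros(len(ui))
    for i, p in enumerate(ui):
        y, x = divmod(int(p), W)
        nbrs = [(y-1, x), (y+1, x), (y, x-1), (y, x+1)]
        nbrs = [(a, b) for a, b in nbrs if 0 <= a < H and 0 <= b < W]
        row.append(i); col.append(i); val.append(float(len(nbrs)))
        for a, b in nbrs:
            if known[a, b]:
                rhs[i] += image[a, b]        # Dirichlet boundary value
            else:
                row.append(i); col.append(pos[idx[a, b]]); val.append(-1.0)
    A = sp.csr_matrix((val, (row, col)), shape=(len(ui), len(ui)))
    out = image.copy()
    out[unknown] = spla.spsolve(A, rhs)
    return out
```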
A result is shown in the middle row of Figure 5.5: holes are properly filled in, but some shading discontinuities remain. Those are caused by spatial discontinuities in Equation (5.2) occurring when spatially disconnected polygons are used for neighboring directions in the environment. We thus apply a last post-processing step where we slightly blur the environment along those discontinuities. We identify them by computing a second dual-paraboloid map storing the 2D image position of the polygon that contributed the respective shading color. This map is simply obtained by replacing the shading values R 0 (x j ) in Equation (5.2) by the 2D coordinates x j . We then compute the gradient of these maps and use its magnitude to drive a spatially-varying Gaussian blur. The effect is to smooth the radiance on discontinuities caused by two or more remote polygons projected next to one another. An example of regularized environment is shown in the bottom row of Figure 5.5.
Visualization
Dual paraboloid maps are only used for practical storage purposes in our approach. It should be noted that, once applied to a 3D object, most of the shading information in the back part of the map gets confined to the silhouette, as shown in the right column of Figure 5.5. In the following, we thus prefer to use shaded 3D spheres seen in orthographic projection (i.e., Lit Spheres [START_REF] Sloan | The lit sphere: A model for capturing npr shading from art[END_REF]) to visualize the reconstructed shading environments (both diffuse and reflection). In practice, perspective projection only lets a subset of the filled-in shading values appear, close to object contours.

Figure 5.5: A dual paraboloid map reconstructed with our approach is shown in the top row: it only partially covers the space of reflected view vectors. The missing information, shown in red, appears confined to the silhouette when visualized with a LitSphere at right. We focus on the back part of the paraboloid map in the middle row to show the result of the hole-filling procedure. Missing regions are completed, but some discontinuities remain; they are corrected by our regularization pass, as shown in the bottom row.
Validation
The right column of Figure 5.6 shows reconstruction results for a pair of known environment lightings. As before, a 3D teapot is rendered using each light probe, and then used for reconstructing the reflection environment maps. The difference between our reconstruction and the ground truth is small enough to use it for shape editing.
Recompositing
The outcome of the reconstruction process is a set of SH coefficients and dual paraboloid maps for all object/material combinations appearing in the image. Obtaining reconstructed shading buffers simply amounts to evaluating shading in the appropriate SH basis or environment map, using arbitrary screen-space normals. A benefit of this approach is that any normal manipulation may then be used in post-process; we give some practical examples in Section 5.3. However, we must also ensure that the independently reconstructed shading buffers are seamlessly recombined in the final image. In particular, when a normal is left untouched by the compositing artist, we must guarantee that we reproduce the reference diffuse and reflection shading colors exactly. This is the goal of the recompositing process: taking as input an arbitrarily modified normal buffer, it combines the reconstructed prefiltered environments with the rendered input buffers to produce a final color image where the apparent shape of objects has been altered. It works in parallel on all pixels; hence we drop the dependence on x for brevity.
Combined diffuse term
Given a modified normal ñ, we define the combined diffuse term D̃ by:
$$\tilde{D} = \alpha_D \Big\lfloor \underbrace{D_0 - D(\mathbf{n})}_{\text{residual}} + D(\tilde{\mathbf{n}}) \Big\rfloor + (1 - \alpha_D)\, D_0, \qquad (5.3)$$
where the ambient occlusion term α_D is used to linearly interpolate between the reference and reconstructed diffuse colors. The rationale is that highly occluded areas should be preserved, to prevent the introduction of unnatural shading variations. The D_0 - D(n) term is used to re-introduce residual differences between the reference and reconstructed buffers. It corresponds to a local darkening of diffuse shading that could not be captured by our global reconstruction. The ⌊•⌋ symbol denotes clamping to 0, which is necessary to avoid negative diffuse shading values. This is still preferable to a multiplicative residual term D_0/D(n), as it would raise numerical issues when D(n) ≈ 0. Observe that if n = ñ then D̃ = D_0: the reference diffuse shading is exactly recovered when the normal is not modified.
Combined reflection term
Contrary to the diffuse case, we cannot apply the residual approach between the reference and reconstructed reflection buffers, as it would create ghosting artifacts. This is because reflections are not bound to low frequencies as in the diffuse case. Instead, given a modified normal ñ and a corresponding modified reflection vector r̃ = reflect(v, ñ), we define the combined reflection term R̃ by:
$$\tilde{R} = \nu_{\mathbf{r},\tilde{\mathbf{r}}}\, \alpha_R\, R(\tilde{\mathbf{r}}) + (1 - \nu_{\mathbf{r},\tilde{\mathbf{r}}}\, \alpha_R)\, R_0, \qquad (5.4)$$
where ν_{r,r̃} = min(1, cos⁻¹(r · r̃)/ε) computes the misalignment between the original and modified reflection vectors (we use ε = 0.005π), and α_R is the reflection occlusion term. The latter serves the same purpose as the ambient occlusion term in the diffuse case. Equation (5.4) performs a linear interpolation between the reference and reconstructed reflection colors based on ν_{r,r̃} α_R. As a result, if n = ñ, then ν_{r,r̃} = 0 and R̃ = R_0: the reference reflection shading is exactly recovered when the normal is left unmodified.
Final composition
The final image intensity Ĩ is given by:
$$\tilde{I} = \alpha \left( k_D \tilde{D} + k_R \tilde{R} \right)^{\frac{1}{\gamma}} + (1 - \alpha)\, I, \qquad (5.5)$$
where the diffuse and reflection coefficients k_D and k_R are used to edit the corresponding shading term contributions (k_D = k_R = 1 is the default), γ is used for gamma correction (we use γ = 2.2 in all our results), and α identifies the pixels pertaining to the background (e.g., showing an environment map), which are already gamma-corrected in our input color image I. Equation (5.5) is carried out on all color channels separately. Figure 5.7 shows an example of the recompositing process on a simple scene holding a single object, where the input normals have been corrupted by a 2D Perlin noise restricted to a square region. The final colors using original and modified normals are shown in the leftmost column; the remaining columns show the different gamma-corrected shading terms. The top row focuses on the diffuse term (k_R = 0), while the bottom row focuses on the reflection term (k_D = 0). The importance of recombining reconstructed and reference diffuse shading, done in Equation (5.3), becomes apparent when comparing D(ñ) and D̃. In particular, it permits the seamless reproduction of D_0 outside of the square region (e.g., inside the ear). Similarly, using Equation (5.4) makes it possible to remove implausible bright reflections in the reflection shading (e.g., inside the ear or below the eyebrow).
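For reference, the following per-pixel sketch strings Equations (5.3), (5.4) and (5.5) together. The buffers and the environment lookups D and R are assumed to be given; names and conventions are ours, not the thesis implementation.

```python
# Per-pixel sketch of the recompositing equations (5.3)-(5.5).
import numpy as np

def recomposite(D0, R0, I_bg, alpha, alpha_D, alpha_R,
                D, R, n, n_mod, v, k_D=1.0, k_R=1.0,
                gamma=2.2, eps=0.005 * np.pi):
    # Combined diffuse term, Eq. (5.3): clamped residual, interpolated by alpha_D
    D_comb = alpha_D * np.maximum(D0 - D(n) + D(n_mod), 0.0) + (1 - alpha_D) * D0
    # Combined reflection term, Eq. (5.4)
    reflect = lambda v, n: 2.0 * np.dot(v, n) * n - v
    r, r_mod = reflect(v, n), reflect(v, n_mod)
    nu = min(1.0, np.arccos(np.clip(np.dot(r, r_mod), -1.0, 1.0)) / eps)
    R_comb = nu * alpha_R * R(r_mod) + (1 - nu * alpha_R) * R0
    # Final composition, Eq. (5.5): gamma-corrected shading over the background
    return alpha * (k_D * D_comb + k_R * R_comb) ** (1.0 / gamma) + (1 - alpha) * I_bg
```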
Experimental results
We have implemented the recompositing process of Section 5.2 in Gratin [START_REF] Vergne | Designing gratin, a gpu-tailored node-based system[END_REF], a programmable node-based system working on the GPU. It allows us to test various normal-modification algorithms in 2D by programming them directly in GLSL, while observing results in real-time, as demonstrated in the supplemental video. Alternatively, normal variations can be mapped onto 3D objects and rendered as additional auxiliary buffers at a negligible fraction of the total rendering time. This grants compositing artists the ability to test and combine different variants of local shape details in post-process. We demonstrate both the 2D and 3D normal editing techniques in a set of test scenes rendered with global illumination. We start with three simple 3D scenes, each showing a different object with the same material in the same environment illumination. The normal buffer and global illumination rendering for each of these scenes are shown in the first two columns of Figure 5.8. Diffuse and reflection environments are reconstructed from these input images and shown in the last two columns, using Lit Spheres. The reconstructed diffuse environments are nearly identical for all three objects. However, the quality of reconstruction for the reflection environment depends on object shape. The sphere object serves as a reference and only differs from the LitSphere view due to perspective projection. With increasing shape complexity, in particular when highly curved object features are present, the reflection environment becomes less sharp. However, this is usually not an issue when we apply the reconstructed environment to the same object, as shown in Figures 5.9 and 5.10.
We evaluate the visual quality of our approach on the head and vase objects in Figure 5.9. The alternative normal buffer is obtained by applying a Voronoi-based bump map on the object in 3D. We use the reconstructed environments of Figure 5.8 and our recompositing pipeline to modify shading buffers in the middle column. The result is visually similar to a re-rendering of the scene using the additional bump map, shown in the right column. A clear benefit of our approach is that it runs in real-time independently of the rendered scene complexity. In contrast, re-rendering takes from several seconds to several minutes depending on the scene complexity.
Figure 5.10 demonstrates three interactive local shape editing tools that act on a normal n = (n x , n y , n z ). Normal embossing is inspired from the LUMO system [START_REF] Scott F Johnston | Lumo: illumination for cel animation[END_REF]: it replaces the n z coordinate with β n z where β ∈ (0, 1] and renormalizes the result to make the surface appear to "bulge". Bump mapping perturbs the normal buffer with an arbitrary height map, here a fading 2D ripple pattern (the same manipulation is applied in Figure 5.7 with a 2D Perlin noise). Bilateral smoothing works on projected normals n = (n x , n y ) using an auxiliary depth buffer to preserve object contours.
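The first two tools can be prototyped in a few lines on a normal buffer; the sketch below is only an illustration of the operations described above (the parameters and the bump-map formulation are assumptions, not the exact GLSL code used in Gratin).

```python
import numpy as np

def emboss_normals(n, beta=0.5):
    """LUMO-style embossing: scale n_z by beta in (0, 1], then renormalize."""
    out = n.copy()
    out[..., 2] *= beta
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

def bump_normals(n, height, strength=1.0):
    """Perturb normals with the gradient of a height map (simple bump mapping)."""
    gy, gx = np.gradient(height)          # height map derivatives in image space
    out = n.copy()
    out[..., 0] -= strength * gx
    out[..., 1] -= strength * gy
    return out / np.linalg.norm(out, axis=-1, keepdims=True)
```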
In more complex scenes, images of objects may appear in the reflections of each other. This is what occurs in Figure 5.11, which shows a Table top scene with various objects: a cup made of porcelain with a metal spoon, a reddish coated kettle with an aluminum handle, and a vase made of a glossy material exhibiting blurry reflections. Despite the increased complexity, our method still produces a plausible result when normals are modified with a noise texture on the cup, an embossed symbol on the kettle body and a carved pattern on the vase. The results remain plausible even when the material properties are edited, as shown in the right column where we decrease the diffuse intensity and increase the specular intensity. The reconstructed diffuse and reflection environments are shown combined in Figure 5.12, before and after material editing has been performed. Observe in particular how the reflections of nearby objects have been properly reconstructed. The reflected window appears stretched in the cup environment. This is because it maps to the highly curved rim of the cup. However, when reapplied to the same object, the stretching goes unnoticed. The Truck scene of Figure 5.1 is more challenging: not only are object parts in contact, but each covers a relatively small subset of surface orientations. Nevertheless, as shown in the reconstructed shading environments, our method manages to capture a convincing appearance that reproduces inter-reflections between different parts. This makes it possible to generate a plausible result when normals are modified to apply a noise texture and an embossed emblem to the truck body, and corrugations to the front and rear mudguards.
Performance
Reconstruction timings for all edited objects in the paper are given in Table 5.1, using a single CPU core of an i7-4790k@4.00GHz. The reconstruction of the diffuse environment is negligible compared to that of the reflection environment. Our partial reconstruction could easily be optimized with a parallel implementation. The performance of the hole-filling process highly depends on the size of the hole; it could be greatly improved by using an adaptive sphere tessellation strategy. Nevertheless, reconstruction is not too performance-demanding as it is done only once, in a pre-process.

Figure 5.9 (columns: alternative normals, our approach, ground truth): A Voronoi-based bump texture is mapped onto the red head and vase models in 3D, yielding an alternative normal buffer. Our approach is used to modify shading at the compositing stage in real time, with a resulting appearance similar to re-rendering the object using the bump map (ground truth).
Figure 5.10 (columns: normal embossing, bump mapping, bilateral smoothing): Our technique permits applying arbitrary modifications to the normal buffer while still yielding plausible shading results. Normal embossing is applied to the eyebrow of the head and to the whole vase, resulting in an apparent bulging of the local shape. Any bump texture may be applied as a decal to modify normals: we use a 2D fading ripple pattern, affecting both diffuse and reflection shading. Local shape details may also be removed: we apply a cross bilateral filter on normals, using an auxiliary depth buffer to preserve occluding contours.
Figure 5.12 (rows: reconstructed, edited; columns: cup, kettle and vase environments): Combined diffuse and reflection environments reconstructed from the cup, kettle and vase objects of Figure 5.11. The bottom row shows edited materials where the diffuse term is divided by 4 and the reflection term multiplied by 4: the supporting table and other nearby objects appear more clearly in the reflections.
Discussion and future work
We have demonstrated a first solution for the edition of surface shape at the compositing stage, based on environment reconstruction and real-time re-compositing.
Our technique is limited in terms of the kinds of materials that we can work with and is quite dependent on the geometry of the input. We are restricted to homogeneous isotropic opaque materials. A first easy extension would be to treat spatially varying materials, while more complex materials would require more involved improvements. Geometry restricts the quality and the viability of our reconstruction. If the object shape is too simple, it will not provide enough shading information, which will require filling in wider regions. If the object shape is too detailed with respect to image resolution, it will tend to reduce the accuracy of the reconstructed shading, as seen in the right column of Figure 5.8 when object complexity is increased. We only modify normals, which mimics geometry modifications without the displacement of vertices that would be needed in an ideal case. Similarly, we are not able to reproduce other effects related to visibility, such as inter-reflections. For further explanation of these limitations and their possible solutions we refer to Chapter 6.
Besides the described limitations, as demonstrated in Section 5.2, our approach gives satisfying results in practical cases of interest, granting interactive exploration of local shape variations with real-time feedback. We believe it could already be useful to greatly shorten trial-and-error decisions in product design and movie production.

Chapter 6
Conclusions
We have introduced a middle-ground approach for the control of appearance; it works in between 2D image creation and 3D scene rendering. The use of auxiliary buffers (mostly normal buffers) places our techniques between painting in 2D and the control of a 3D scene for rendering. Our technique is developed to manipulate appearance for arbitrary shaded images. We can work with artwork (MatCaps) and renderings (compositing), and we expect it can be easily extended to photographs. In Chapter 4 I have shown how to modify shading from an input MatCap to mimic modifications of lighting and material without having to retrieve them separately. In Chapter 5 I have shown how to recover single-view pre-filtered environment maps at the compositing stage, and how these pre-filtered environment maps are used to obtain plausible shading when modifying local geometry.
In the following I will discuss the main limitations of our approach, as well as possible solutions. I will conclude by presenting a series of future work directions to extend our approach.
Discussion
In this section I enumerate the basic restrictions of our techniques. Some restrictions are inherited from the use of structures similar to pre-filtered environment maps. They limit the kind of materials that we can represent and restrict lighting to be distant. Nevertheless, we propose possible solutions to work with an extended set of materials (Section 6.1.1), as well as to reproduce effects related to local lighting (Section 6.1.4). Alongside these limitations, we present other problems related to the separation of shading and material components (Section 6.1.2) and the filling of missing shading information (Section 6.1.3).
Non-radially symmetric and anisotropic materials
We store geometry-independent shading values into a spherical representation (dual paraboloid maps) that we use to manipulate appearance. Our representation can be seen as a prefiltered environment map. Similarly to pre-filtered environment maps, we are restricted to work with opaque objects. These kinds of structures are not adapted to work with transparent or translucent objects, as they depend on complex light paths inside objects.
Moreover, we have restricted the manipulations of MatCaps to radially symmetric BRDFs. Input MatCaps define shading tied to the camera. In order to enable modifications of lighting from an input MatCap, specifically rotation, we have turned MatCaps into a spherical representation that behaves as a pre-filtered environment lighting for a radially symmetric BRDF. Radially symmetric BRDFs make it possible to create 2D view-independent pre-filtered environment maps, and therefore enable the rotation of lighting independently of the view. The restriction to radially symmetric BRDFs also eases the estimation of material properties from a MatCap. Symmetry is also used to compute the correlation function between mean and variance, as variance in measured materials is computed as the average along the θ i and φ i dimensions. The radial symmetry is finally used in the definition of filters that mimic rougher materials by blurring with radial/circular filters.
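As an illustration of the radial/circular filtering mentioned above, the sketch below blurs a spherical shading map (stored as an image) with an isotropic Gaussian whose width grows with the desired roughness increase; the image-space approximation and the roughness-to-radius mapping are assumptions of this sketch, not the filters derived in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_blur(shading_map, roughness_increase, k=20.0):
    """Mimic a rougher material by blurring a spherical shading map.

    shading_map        : (H, W, 3) MatCap / prefiltered map stored as an image
    roughness_increase : scalar in [0, 1]; 0 leaves the map unchanged
    k                  : placeholder scale from roughness to blur radius (pixels)
    """
    sigma = k * roughness_increase
    if sigma <= 0.0:
        return shading_map
    # isotropic (radially symmetric) Gaussian, applied per color channel
    return np.stack(
        [gaussian_filter(shading_map[..., c], sigma) for c in range(3)], axis=-1
    )
```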
To incorporate arbitrary isotropic materials in our work, we should start with a deeper study of variances in both the θ i and φ i directions. We expect that a better understanding of variance will help us to define non-radial filters, and to estimate material properties for the θ i and φ i directions independently. In contrast, if we assume that MatCaps depict arbitrary isotropic BRDFs instead of radially symmetric ones, rotations of lighting will not be straightforward. A solution would be to ask the artist to depict the same MatCap for different view directions, but this would turn the appealing simplicity of MatCaps into a tedious process.
In the compositing-stage technique we reconstruct pre-filtered environment maps for a fixed view direction. In this case, arbitrary isotropic BRDFs are possible since we are tied to the camera used for rendering. Nevertheless, this leaves out anisotropic materials since anisotropy is linked to tangent fields. Therefore we would need to make use of tangent buffers alongside normal buffers. Moreover, we would need a 3D structure instead of a 2D spherical representation. This increase in complexity will impact the reconstruction of geometry-independent shading. First, it increases the storage size while the input shading data stays the same, which makes the reconstruction sparser. Second, partial reconstruction and filling-in techniques are not straightforward to extend to the required 3D structure.
If we want to manipulate anisotropic BRDFs for arbitrary views, it would require 5-dimensional structures in the naive case, since all possible views can be parametrized with 2 additional dimensions: the shading at a surface point would then depend on the 2D view direction and the 3D reference frame (normal and tangent). This increases the complexity even further. Working with a 5D structure would not be straightforward, as opposed to the simplicity of a spherical representation.
Anisotropic materials are not considered in our statistical approach. Their inclusion would require studying BRDF slice statistics as a function of the full viewing direction, instead of just the viewing elevation angle. Moreover, it would require using a different database of measured materials because the MERL database only contains isotropic BRDFs. However, at the time of writing, existing databases of anisotropic BRDFs, like the UTIA BTF Database [START_REF] Filip | Template-based sampling of anisotropic brdfs[END_REF], are not sampled densely enough or do not contain enough materials.
A different approach to increase the dimensionality of our approach would be to use a similar 2D spherical structure with a set of filters that adapt it to local viewing and anisotropy configurations. In the case of non-radially symmetric materials we would apply non-radial filters, and for anisotropic materials we would need to introduce local rotations as well. Non-radial symmetry can be seen on the left of Figure 6.1, where the spread of the BRDF differs between dimensions. Anisotropic materials (Figure 6.1, three remaining images) have the effect of rotating the filter kernel in our parametrization. This approach seems promising, as our goal is to mimic modifications from a basic shading, while producing plausible results.
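A rotated elliptical Gaussian kernel of the kind suggested by Figure 6.1 could be prototyped as follows; this is only a sketch of the filter family we have in mind, with placeholder parameters, not a validated anisotropic model.

```python
import numpy as np

def rotated_elliptical_kernel(size, sigma_u, sigma_v, angle):
    """2D Gaussian kernel with distinct spreads along two rotated axes."""
    r = (size - 1) / 2.0
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    c, s = np.cos(angle), np.sin(angle)
    u = c * x + s * y        # filter-aligned coordinates
    v = -s * x + c * y
    k = np.exp(-0.5 * ((u / sigma_u) ** 2 + (v / sigma_v) ** 2))
    return k / k.sum()       # normalized so the filter preserves energy
```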
Shading components
We treat both shading and BRDF as the addition of diffuse and specular components. At the compositing stage we obtain a perfect separation of diffuse and specular components thanks to the capabilities of the rendering engines. In contrast, we need to separate those components for measured materials as well as in MatCaps. Because of our simple heuristic on measured BRDF decomposition, we were forced to consider a subset of the MERL database. It would thus be interesting to devise cleverer decomposition schemes so that each component could be studied using a separate moment analysis. We have shown that our MatCap separation into low- and high-frequency components is a good approximation of diffuse and specular components. Despite that, we attribute all low-frequency content to the diffuse component, which is not the case in real-world materials. For instance, hazy gloss (see Section 6.2.1) is a low-frequency effect that belongs to the specular component. Moreover, material reflectance is composed of more than just diffuse and specular components.
It would be interesting to treat those different components separately. We can relate them to different effects like grazing-angle effects (e.g. asperity scattering), retro-reflection or off-specular reflection. We would need to be able to separate them in our statistical analysis to better understand their effects. In the case of manipulation of shading from rendering engine outputs, we could again take advantage of their capability to render them separately.
As we have already discussed, techniques related to pre-filtered environment maps are not ready to work with translucent or transparent materials. Nevertheless, we are interested in trying to recover translucency shading. Depending on the material, it can look similar to a diffuse material. Moreover, it has already been shown by Khan et al. [START_REF] Erum Arif Khan | Image-based material editing[END_REF] that the human visual system would accept inaccurate variations of translucent materials as plausible.
Filling-in of missing shading
The construction of our geometry-independent shading structure, for both the MatCap and the compositing approach, requires the filling of some parts. The missing parts in MatCaps depend on the estimated roughness of the depicted material, which describes a circle in the back paraboloid map that we call the 'blind spot'. When retrieving shading information from renderings at the compositing stage we are restricted by the input geometry. The corresponding missing parts may be bigger than the ones defined by the 'blind spot' and their shapes are arbitrary, which makes the filling-in more complex. This is illustrated in Figure 6.2, which shows the reconstructed reflection environment for the spoon in the Table top scene of Figure 5.11 shown in Chapter 5.
One way to improve the filling-in would be to take into account structured shading, as shown in Figure 6.3. It is important to note that in the case of MatCaps this is not a blocking issue since users may correct in-painting results by hand, which is consistent with the technique as it is an artistic approach.
Figure 6.3 (panels: reconstructed environment, after hole-filling, after regularization): The structure of the horizontal line should be taken into account in order to prevent it from fading out.
Visibility and inter-reflections
Our approach does not offer any solution to control or mimic local light transport. We do not take into account visibility effects like shadows or inter-reflections. Working with pre-filtered environment maps requires assuming distant illumination. As a solution, when working with MatCaps we plan to use different MatCaps to shade illuminated and shadowed parts of an object. Similarly, for the case of compositing we plan to create separate PEMs for shadowed and unshadowed parts thanks to the information obtained in auxiliary buffers.
Geometry modifications would require displacing vertices in addition to modifying normals. A plausible solution would be to apply the displacement of vertices to update occlusion buffers. Another limitation is the control of inter-reflections when recovering shading at the compositing stage. It could be interesting to recover shading for those specific zones. The available information is sparse and would not be enough to recover a good-quality PEM. A solution would be to get more involved in the rendering process by recognizing these parts and outputting more shading samples that characterize the inter-reflection zones.
Future work
As long-term goals we would like to extend our analysis of BRDFs and their impact on shading (Section 6.2.1). A second major future goal is to be able to manipulate more complex 3D scenes. The most important challenge would be to deal with the spatial shading variations due to both material and lighting, not just angular variations (Section 6.2.2). Finally we explain how our technique could be useful to other applications in graphics or even perception (Section 6.2.3).
Extended statistical analysis
We have performed our analysis by considering simple shapes (spheres). Therefore we have studied the influence of material and lighting on shading without taking geometry into account. When considering more complex shapes, our observations may still hold when considering a surface patch on the object. However, surface curvatures will impose restrictions on the window sizes we use for establishing relationships between material/lighting and shading on this patch. In particular, high curvatures will lead to rapid changes of the view direction in surface tangent space. In such situations, our local approximation will be valid only in small 1D windows. This suggests that the effect of a BRDF will tend to be less noticeable on bumpy surfaces, which is consistent with existing perceptual experiments [START_REF] Vangorp | The influence of shape on the perception of material reflectance[END_REF].
We have considered orthographic or orthographic-corrected projections throughout the thesis. To get complete relationships between shading and its components, we should consider the effect of perspective projection on reflected radiance. This will of course depend on the type of virtual sensor used. We may anticipate that foreshortening will tend to compress radiance patterns at grazing angles. This suggests that some grazing-angle effects will get 'squeezed' into a thin image region around the silhouette.
We have focused on moments up to order 2, but as we show in Appendix A, the analysis can be extended to higher-order moments to study skewness and kurtosis. Skewness quantifies the asymmetry of a distribution, while kurtosis measures its peakedness. We have shown how the energy, mean and variance of a BRDF slice are perceived in the image, as coloring, warping and blurring. One question is whether similar perceptible effects could be related to skewness and kurtosis. To study these effects would require introducing both skewness and kurtosis into the statistical model and consequently into the Fourier analysis. This would increase the complexity of the statistical analysis; hence we have performed a perceptual study to first identify whether they have a perceptible effect [START_REF] Vangorp | Specular kurtosis and the perception of hazy gloss[END_REF].
We have focused on kurtosis with the idea to identify it as a cue to hazy gloss. We performed a series of experiments with a BRDF model made of a pair of Gaussian lobes. The difference between the lobe intensities and their spreads produces different kurtosis values and different haze effects, as shown in Figure 6.4a. Using these stimuli we have studied how human subjects perceive haziness of gloss. Our conclusion is that perceived haziness does not vary according to kurtosis, as shown in Figure 6.4b and Figure 6.4c. We suggest that haziness depends on the separation of the specular component into two sub-components, which are not directly the two Gaussians used to define the BRDF. Instead, haziness effects would be characterized by a central peak plus a wide component characteristic of the halo effect of haziness. If this hypothesis is correct, then maybe other sub-decompositions can be performed for other BRDF components.
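As a toy illustration of how such a two-lobe construction affects peakedness, the excess kurtosis of a zero-mean mixture of two 1D Gaussian lobes follows directly from its raw moments; the sketch below uses arbitrary parameter values and is not the stimulus generator used in the experiments.

```python
import numpy as np

def excess_kurtosis_two_lobes(w_narrow, s_narrow, w_wide, s_wide):
    """Excess kurtosis of a zero-mean mixture of two Gaussian lobes.

    w_* : lobe intensities (mixture weights, normalized internally)
    s_* : lobe standard deviations (spreads)
    """
    z = w_narrow + w_wide
    m2 = (w_narrow * s_narrow ** 2 + w_wide * s_wide ** 2) / z        # 2nd moment
    m4 = 3.0 * (w_narrow * s_narrow ** 4 + w_wide * s_wide ** 4) / z  # 4th moment
    return m4 / m2 ** 2 - 3.0

# a narrow, bright lobe on top of a wide halo yields positive excess kurtosis
print(excess_kurtosis_two_lobes(w_narrow=0.7, s_narrow=0.05, w_wide=0.3, s_wide=0.3))
```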
Spatially-varying shading
In this thesis we have only considered shading as variations in the angular domain. This approximation has led us to satisfactory results in simple scenes. However, for a good representation and manipulation of shading in complex scenes we should consider spatial variations as well. As we have shown, variations of shading depend on variations of material and lighting.
In the case of variations of materials our compositing approach would be easily extended to objects with spatially-varying reflectance, provided that diffuse (resp. specular) reflectance buffers are output separately by the renderer. Our method would then be used to reconstruct multiple diffuse (resp. specular) shading buffers, and the final shading would be obtained through multiplication by reflectance at the re-compositing stage.
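In practice this extension amounts to a per-pixel multiplication at the re-compositing stage; a sketch, assuming the renderer outputs separate diffuse and specular reflectance (albedo) buffers, is given below.

```python
import numpy as np

def recomposite_sv(D_shading, R_shading, kd_map, ks_map, I_bg, alpha, gamma=2.2):
    """Spatially-varying variant of Eq. (5.5): per-pixel reflectance maps
    replace the global coefficients k_D and k_R.

    D_shading, R_shading : (H, W, 3) reconstructed diffuse / reflection shading
    kd_map, ks_map       : (H, W, 3) diffuse / specular reflectance buffers
    I_bg                 : (H, W, 3) input color image (gamma corrected)
    alpha                : (H, W)    object mask (1 on object, 0 on background)
    """
    shading = np.clip(kd_map * D_shading + ks_map * R_shading, 0.0, None) ** (1.0 / gamma)
    a = alpha[..., None]
    return a * shading + (1.0 - a) * I_bg
```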
When considering variations of lighting, we may separate shading depending on the origin of the incoming radiance. We can distinguish and manipulate differently the shading due to local light sources or the reflection of close objects. This would help to deal with a problem that arises with extended objects: their incoming lighting may vary spatially and come from completely different sources.
Ideally we should store shading in a 4D representation for the variations in spatial and angular domains, which can be seen as a light field. This is equivalent to reconstructing a pre-filtered environment map per pixel instead of per surface. To deal with this issue, we would like to explore the reconstruction of multiple pre-filtered environment maps for extended surfaces and recombining them through warping depending on pixel locations.
We have considered out-of-the-box render buffers and have focused on algorithms working at the compositing stage. We would also like to work at the rendering stage to export more detailed shading information while retaining a negligible impact on rendering performance. For example, we would like to export information about light paths, in the sense that we could have more information about where the incoming radiance came from. Another useful solution would be a fast pre-analysis of the render output to know which parts will not provide enough information to recover shading, and to output more information for these parts.
Rendering engines are usually made to generate a set of images that will form an animation. We plan to extend our technique to animations, which will require a temporally consistent behavior.
New applications
We believe our approach could prove useful in a number of applications in Computer Graphics and Visual Perception.
Dynamic creation of MatCaps
We have shown how to use existing MatCaps and later apply our technique to enable modification of lighting and material. Instead, we could consider the creation of MatCaps directly on a spherical representation (i.e. dual paraboloid maps) using our tools. The rotation of the MatCap would avoid problems in the blind spot by allowing it to be filled during the creation process. When using paint brushes or light sources to create shading, the material roughness could be taken into account to blur the created shading accordingly.
Material estimation on photographs
We estimate a few material properties from MatCaps by making assumptions on lighting. We believe that this technique could be extended to other kinds of inputs, like renderings or, more interestingly, photographs. This will require some knowledge of lighting moments, either explicit or hypothesized. This technique should be complemented with a geometry estimation from images. In any case, for a correct behavior it will require the study of the impact of geometry on shading.
Editing of measured materials
Moments have proved to be a good method to analyze the BRDF effect on shading. We believe that they could be used as a way to edit measured BRDFs, by finding operators in BRDF space that preserve some moments while modifying others. Ideally, users could be granted control over a measured BRDF directly through its moment profiles. A sufficient accuracy of these edits would require a better decomposition of BRDFs.
Perceptual studies
Finally, we believe that BRDF moments may also be well adapted to the study of material perception. The end goal of such an approach is to explicitly connect perceptually-relevant gloss properties to measurable BRDF properties. Experiments should be conducted to evaluate the BRDF moments humans are most sensitive to, and check whether the statistical analysis can predict perceived material appearance.
Discussion
It would be interesting to study potential correlations between moments of different orders, as we have done for the mean and the variance. We have already observed interesting deviations from simple behaviors at grazing angles in skewness and kurtosis profiles. They may be related to known properties such as off-specular peaks, but could as well be due to hemispherical clamping once again. Moreover, we would like to extend our local Fourier analysis to include co-skewness and co-kurtosis tensors, in order to characterize their effects in the image.
Figure 1.1: Still-life is a work of art depicting inanimate subjects. Artists are able to achieve a convincing appearance from which we can infer the material of the different objects.
Figure 1.2: Computer systems offer a complete set of tools to create images directly in image space. They provide virtual versions of traditional painting tools, such as different kinds of brushes or pigments, as can be seen in the interface of ArtRage TM (a). On the right (b) we can see the interface of one of the most well-known image editing softwares, Photoshop TM . They also provide other tools that couldn't exist in traditional painting, like working on layers, different kind of selections or image modifications like scaling or rotations.
Figure 1.3: Shading refers to depicting depth perception in 3D models or illustrations by varying levels of darkness. It makes it possible to perceive volume and to infer lighting direction. Images are property of Kristy Kate http://kristykate.com/.
Figure 1.4: A 3D scene is composed of lights and objects, where lights may vary in type (a) from ambient, to point, directional, area, etc. Objects are defined by their geometry, given by (b) 3D meshes, and by their (c) materials.
Figure 1.5: In the general case, when a light ray reaches an object surface, it can be reflected, refracted or absorbed. When we focus on opaque objects, the reflection can vary from shiny (mirror) to matte (diffuse) by varying glossiness.
Figure 1.7: Rendering engines compute shading per component: diffuse, reflections, transparency, etc. They generate an image for each component. Those images are used in a post-process step called compositing. The final image is created as a combination of the different components. This figure shows an example from the software Modo TM.
Figure 1.8: Deferred shading computes in a first step a set of auxiliary buffers: positions, normals, diffuse color and surface reflectance. In a second pass those buffers are used to compute shading by adding the contribution of every single light source.
Figure 1.9: Both pre-filtered environment maps (a) and MatCaps (b) can be used to shade arbitrary objects. The shading color per pixel is assigned by fetching the color that corresponds to the same normal in the spherical representation.
Figure 1.10: Shading is usually composed of different components. The main components are diffuse and specular reflections. We can see how (a) a MatCap and (b) a rendering are composed as the addition of a diffuse and a specular component.
Figure 1.11: Starting from a stylized image of a sphere (a), our goal is to vary (b) the lighting orientation, (c) the material color and (d) the material roughness.
Figure 2.1: Directions ω o and ω i can be defined in the classical parametrization by elevation θ and azimuth φ angles (a), or by the half-vector (θ h , φ h ) and a difference vector (θ d , φ d ) (b). The vectors marked n and t are the surface normal and tangent, respectively.
Figure 2.2: Renderings of different BRDFs coming from the MERL database (from left to right: specular-black-phenolic, yellow-phenolic, color-changing-paint2, gold-paint and neoprene-rubber) under two different environment maps (upper row: galileo; lower row: uffizi). Each BRDF has a different effect on the reflected image of the environment.
Figure 2.3: Different orientations of the surface correspond to rotations of the upper hemisphere and BRDF, with global directions (not primed) corresponding to local directions (primed).
Figure 2.4: Results of the alum-bronze material under three lighting environments using the method of Lombardi et al. [LN12]. Column (a) shows the ground truth alum-bronze material rendered with one of the three lighting environments, column (b) shows a rendering of the estimated BRDF with the next ground truth lighting environment, column (c) shows the estimated illumination map and column (d) shows the ground truth illumination map. The lighting environments used were Kitchen (1), Eucalyptus Grove (2), and the Uffizi Gallery (3). The recovered illumination is missing high-frequency details lost at rendering.
Figure 2.6: Distortion of a circle when projected from a paraboloid map back to the sphere.
Figure 2.7 ((a) Sloan et al. [SMGG01], (b) Todo et al. [TAY13], (c) Bruckner [BG07]): Renderings using LitSphere: (a) the original LitSphere technique, (b) a non-photorealistic approach and (c) a technique focused on scientific illustration.
Figure 2.8: Given a high dynamic range image such as shown on the left, the Image Based Material Editing technique makes objects transparent and translucent (left vases in middle and right images), as well as applies arbitrary surface materials such as aluminium-bronze (middle) and nickel (right).
Figure 2.9: Users can modify reflections using constraints. (a) The mirror reflection is changed to reflect the dragon's head, instead of the tail. (b) Multiple constraints can be used in the same scene to make the sink reflected on the pot. (c) The ring reflects the face more clearly by using two constraints. (d) The reflection of the tree in the hood is modified and at the same time the highlight on the door is elongated.
Figure 2.10: Tools using Surface Flows. Deformed textures (a) or shading patterns (b) are applied at arbitrary sizes (red contours) and locations (blue dots). (c) Smooth shading patterns are created by deforming a linear gradient (red curve). Two anchor points (blue and red dots) control its extremities. (d) A refraction pattern is manipulated using anchor points. Color insets visualize weight functions attached to each anchor point.
Figure 2.11: Example edits performed with envyLight. For each row, we can see the environment map at the bottom, as top and bottom hemispheres, and the corresponding rendered image at the top. Designers mark lighting features (e.g. diffuse gradients, highlights, shadows) with two strokes: a stroke to indicate parts of the image that belong to the feature (shown in green), and another stroke to indicate parts of the image that do not (shown in red). envyLight splits the environment map into a foreground and background layer, such that edits to the foreground directly affect the marked feature and such that the sum of the two is the original map. Editing operations can be applied to the layers to alter the marked feature. (a) Increased contrast and saturation of a diffuse gradient. (b) Translation of a highlight. (c) Increased contrast and blur of a shadow.
Figure 2.12: (a) Objects varying in bumpiness from left to right, with the roughness of the material increasing vertically. (b) The environment lighting map is blurred in the upper image and warped in the bottom row.
to vary linearly in relation to human-perceived glossiness. Wills et al. [WAKB09] perform a study on the isotropic BRDF database of MERL, from which they create a 2D perceptual space of gloss. Moreover, perceived reflectance depends on the environment lighting around an object. The work of Doerschner et al. [DBM10] tries to quantify the effect of the environment lighting on the perception of reflectance. They look for a transfer function of glossiness between pairs of BRDF glossiness and environment lighting. Fleming et al. [FDA03] perform a series of experiments about the perception of surface reflectance under natural illumination. Their experiments evaluate how we perceive variations in the specular reflectance and roughness of a material under natural and synthetic illuminations; see Figure 2.13. Their results show that we estimate material properties better under natural environments. Moreover, they have tried to identify natural lighting characteristics that help us to identify material properties. Nevertheless, they show that our judgment of reflectance seems to be more related to certain lighting features than to global statistics of the natural light.
Figure 2.13 (natural or analytical environment lighting): (a) Rendered spheres are shown with increasing roughness from top to bottom, and with increasing specular reflectance from left to right. The scales of these parameters are re-scaled to fit visual perception as proposed by Pellacini et al. [PFG00]. All spheres are rendered under the same environment lighting, Grace. (b) Rendered spheres with the same intermediate values of roughness and specular reflectance are rendered under different environment maps. The first two columns use natural environment lighting, whereas the last column uses artificial analytical environment lighting.
Figure 3.1: (a) Our parametrization of the hemisphere has poles orthogonal to the view direction ω o , which minimizes distortions in the scattering plane (in red). (b) It maps a pair of angles (θ i , φ i ) ∈ [-π/2, π/2]² to a direction ω i ∈ Ω. (c) A 2D BRDF slice f_r^{ω_o} is directly defined in our parametrization through this angular mapping.
Figure 3.2: Different BRDF slices for the same viewing elevation angle of 45° are shown in our view-centered parametrization. Different values of α, μ_θ, σ²_θ and σ²_φ define different BRDF slices and therefore different material appearances. These values can vary independently in terms of the elevation angle θ_o.
Figure 3.3: Top row: 3D visualization of four slices of the gold-paint BRDF at increasing viewing angles. Bottom row: the same BRDF slices in our view-dependent parametrization.
Figure 3.4: Moment profiles computed from our selected BRDFs are shown at increasing moment orders. (a) Energy. (b) Mean in θ. (c) Co-variance in θθ and φφ.
Figure 3.7: (a) The energy profile α may exhibit a silhouette effect, which we model by a Hermite spline starting at θ 0 with m 0 = 0, and ending at θ 1 with a fitted m 1 . (b) We fit the correlation between the mean slope μ and the average variance σ2 using a quadratic function.
Figure3.8: (a) A sphere made of an ideal mirror material rendered using the StPeter environment map. The reflected environment is extremely sharp and warped toward silhouettes. (b) A rendering using the Specular-black-phenolic BRDF. The reflected environment is slightly blurred and highly warped. This is explained in (c) by the filtering characteristics (in blue) of the BRDF at 3 different locations: the filter is narrow and remains close to the evaluation point (in red). Note that at similar viewing elevations (dashed red arcs), the filters are rotated copies of each other. (d) A rendering using the Pearl-paint BRDF. The reflected environment is this time much more blurred and exhibits less warping. This is explained in (e) by the filtering characteristics of the BRDF: the filter is wide and offset toward the center of the sphere for locations closer to the silhouette. This confirms the mean/variance correlation that we have observed in our study.
Figure 4.1: Our approach decomposes a MatCap into a representation that permits dynamic appearance manipulation via image filters and transforms. (a) An input MatCap applied to a sculpted head model (with a lookup based on screen-space normals). (b) The low- & high-frequency (akin to diffuse & specular) components of our representation stored in dual paraboloid maps. (c) A rotation of our representation orients lighting toward the top-left direction. (d) Color changes applied to each component. (e) A rougher-looking material obtained by blurring, warping and decreasing the intensity of the high-frequency component.
Figure 4.2: (a) The filter energy α(θ) is the sum of a base color α_0 and a Hermite function for silhouette effects (with control parameters θ_0, m_0 = 0, m_1 and α_1). (b) Three slices of our material filter for θ = {0, θ_0, π/2} (red points). Observe how the filter (in blue) is shifted in angles by μ_θ (green arrows), with its energy increasing toward θ = π/2.
Figure 4.3: (a) A MatCap is sampled uniformly in the θ dimension, around three different locations (in red, green and blue). (b) Intensity plots for each 1D window.
Figure 4.4: Our algorithm automatically finds the relevant window size ε* around a ROI (red square on MatCaps). We analyze image variances for all samples in the ROI (colored curves) as a function of window size ε, which we call a signature. The variance estimate (red cross) is obtained by following signature inflexions (blue tangents), according to four cases: (a) Variance is taken at the first minimum after the second inflexion; (b) There is no minimum within reach, hence variance is taken at the second inflexion; (c) There is no second inflexion, hence the variance at the widest window size is selected; (d) The signatures are degenerated and the ROI is discarded. The signature with minimum variance (black curve) is selected for material variance.
Figure 4.5: We validate our estimation algorithm on analytic primitives of known image variance in MatCaps. This is done at three resolutions for nine ROIs marked A to I. Comparisons between known variances (in blue) and our estimates (with black intervals showing min/max variances in the ROI) reveal that our algorithm is both accurate and robust.
Figure 4.6: Each row illustrates the entire decomposition process: (a) An input MatCap is decomposed into (b) low- and high-frequency components; (c) white balancing separates shading from material colors; (d) components are unwarped to dual paraboloid maps using slope and size parameters.
Figure 4.7: (a) An input MatCap is (b) eroded then (c) dilated to extract its low-frequency component. The high-frequency component is obtained by (d) subtracting the low-frequency component from the input MatCap.
Figure 4.8: A rendered Matcap (a) is separated into veridical diffuse & specular components (b,c). Our low-/high-frequency separation (d,e) provides a reasonable approximation. Intensity differences are due to low-frequency details in the specular component (c) that are falsely attributed to the low-frequency component (d) in our approach. Note that (a) = (b) + (c) = (d) + (e) by construction.
Figure 4.9: We illustrate the reconstruction process, starting from a white-balanced MatCap component. (a) A dual paraboloid map is filled by warping each texel q to a normal n_q; the color is then obtained by a MatCap lookup. (b) This leaves an empty region in the back paraboloid map (the 'blind spot') that is filled with a radial inpainting technique.
Figure 4.10: Lighting manipulation. Top row: (a) Starting from a single reflection, (b) we modify the lighting by painting two additional reflections (at left and bottom right); (c) we then apply a rotation to orient the main light to the right. Bottom row: (d) We add a flame reflection to a dark glossy environment by (e) blurring and positioning the texture; (f) we then rotate the environment.
Figure 4.11: Material manipulation. Top row: (a) Starting from a glossy appearance, (b) we increase the filter size to get a rougher appearance, or (c) decrease it and add a few reflections to get a shinier appearance. Warping is altered in both cases since it is a function of filter size. Bottom row: (d) The greenish color appearance is turned into (e) a darker reddish color with increased contrast in both components; (f) a silhouette effect is added to the low-frequency component.
Figure 4.12: Comparison on lighting rotation. The top and bottom rows show initial and rotated results respectively. (b) Ground truth images are rendered with the gold paint material in the Eucalyptus Grove environment lighting. (a) The method of Lombardi et al. [LN12] makes the material appear rougher both before and after rotation. (c) Our approach reproduces exactly the input, and better preserves material properties after rotation.
Figure 4.13: Mixing components. (a,b) Two different MatCaps applied to the same head model. Thanks to our decomposition, components may be mixed together: (c) shows the low-frequency component of (a) added to the high-frequency component of (b); (d) shows the reverse combination.
Figure 4.14: Using material IDs (a), three MatCaps are assigned to a robot object (b). Our method permits to align their main highlight via individual rotations (c) and change their material properties (d). All three MatCaps are rotated together in (e).
Figure 4.15: Spatially-varying colors. (a) The MatCap of Figure 4.6 (2nd row) is applied to a cow toy model. A color texture is used to modulate (b) the low-frequency component, then (c) the high-frequency component. (d) A binary version of the texture is used to increase roughness outside of dark patches (e.g., on the cheek). In (e) we rotate lighting to orient it from behind.
Figure 4.16: Shape-enhancing variations. (a) A variant of the MatCap of Figure 4.10 (1st row) is applied to an ogre model. (b) An occlusion map is used to multiply the low- and high-frequency components. (c) A color texture is applied to the low-frequency component. (d) Different silhouette effects are added to each component. (e) Lighting is rotated so that it comes from below.
Figure 5.2: Reconstruction of the prefiltered diffuse environment map. From left to right: the pixels of the input diffuse layer of the selected object are scattered inside the Gauss map. This shading information is then approximated by low-order spherical harmonics using either a Least Squares fit (top) or our Quadratic Programming formulation (bottom), leading to the respective reconstructed environment maps. To evaluate the reconstruction quality, those environment maps are then applied to the original objects, and the signed residual is shown using a color code (shown at right). Observe how our QP reconstruction guarantees a negative residual.
Figure 5.3: A 3×3 pixel neighborhood (shown at left on a low-resolution image for clarity) is mapped to four contiguous spherical quads on the Gauss sphere of rectified normals (right). The color inside each spherical quad is computed by bilinear interpolation inside the input image using spherical barycentric coordinates.
Figure 5.6: Reconstruction results for diffuse (left column) and reflection (right column) shading, using the Galileo (top row) and rnl (bottom row) light probes. Each quadrant shows the reference prefiltered environment, along with the rendering of a 3D teapot assuming distant lighting and no shadowing or inter-reflections. The teapot image is then used to reconstruct the prefiltered environment using either the method of Section 5.1.1 or 5.1.2, and a boosted absolute color difference with the reference environment is shown.
Figure 5.7: Illustration of the different terms involved in recompositing (Equations (5.3)-(5.5)). The left column shows reference and modified images, where we have perturbed the normal buffer with a 2D perlin noise in a square region. This permits to show the behavior of our approach on both modified & unmodified portions of the image (i.e., inside & outside the square); in the latter case, Ĩ = I. The remaining columns present the diffuse (top) and reflection (bottom) terms. Starting with the reference shading at left, we then show reconstructed shading using prefiltered environments. Reference and reconstructed shading terms slightly differ in some unmodified regions (e.g., inside the ear). The combined shading terms correct these effects with the aid of ambient and reflection occlusion buffers, shown in the rightmost column.
Figure 6.1: BRDF slices in our parametrization for the anisotropic Ward BRDF. Slices vary in azimuthal angle, while remaining at the same elevation angle θ of 30°. From left to right, images correspond to azimuthal angles of [0°, 180°, 180°], [45°, 225°], [90°, 270°] and [135°, 315°]. To reproduce the effect of anisotropy, filters will have to be rotated.
Figure 6.2: The reconstructed reflection environment for the spoon object in Figure 5.11 does not contain enough shading information to grant editing, even after hole-filling and regularization.
Figure 6.4: In our perceptual experiments we use a two-lobe Gaussian BRDF model, controlled separately by lobe intensity and spread, to produce a hazy gloss appearance. The sum of intensities and the spread of the wider lobe are kept constant. (a) A set of stimuli is presented with increasing intensity of the narrow lobe from bottom to top and increasing difference in spread from left to right. (b) We measured kurtosis for the BRDF of our stimuli, and we can see how its variations differ from (c) how much subjects rated 'haziness' for each material.
Figure A.1: Skewness (a) and kurtosis (b) profiles computed from our selected BRDFs.
Figure 5.11 (columns: original/modified normals, before/after shape editing, with edited material): The Table top scene and a modified version (bottom) where a noise texture, an embossed symbol and a carved pattern have been respectively applied to the cup, kettle body and vase. In the middle column, we show the result of our approach (bottom) on the corresponding color image (top), using the reconstructed diffuse and reflection environments shown in Figure 5.12 (1st row). In the right column, we have edited the intensities of the diffuse and reflection components in both the original and modified scene. Reflections of nearby objects become more clearly apparent, as is also seen in Figure 5.12 (2nd row).
Skewness and Kurtosis Analysis of Measured BRDFs
We have shown how to compute 2D moments of arbitrary order. By now we have used moments up to order 2 to define the statistical properties of energy, mean and variance.
Here we extend our analysis of measured materials to moments of order 3 and 4, which allow us to define skewness and kurtosis. They seem to be important properties of material appearance, related to the asymmetry and peakedness of a BRDF slice respectively. Co-skewness and co-kurtosis are defined as the standardized moment tensors of order 3 and 4 respectively. Standardized moments are computed by both centering f̃_r on its mean and scaling it by the respective variances. Since in our case Σ_{1,1} ≈ 0, we may write standardized moments as γ_{n,m}[f̃_r] = μ_{n,m}[f̃_r] / (σ_θ^n σ_φ^m), where the moments μ_{n,m} are taken about the mean, σ_θ = √Σ_{2,0} and σ_φ = √Σ_{0,2}. The coefficients of the co-skewness and co-kurtosis tensors are then given by γ_{n,m}[f̃_r] for n + m = 3 and n + m = 4 respectively. It is common to use the modern definition of kurtosis, also called excess kurtosis, which is equal to 0 for a Normal distribution. In our case (i.e., with μ_{0,1} = 0 and Σ_{1,1} = 0), it can be shown that the excess kurtosis coefficients are given by γ_{4,0} - 3, γ_{3,1}, γ_{2,2} - 1, γ_{1,3} and γ_{0,4} - 3. For simplicity, we will make an abuse of notation and refer to the excess kurtosis coefficients as γ_{n,m}[f̃_r] for n + m = 4. The co-skewness tensor characterizes asymmetries of the BRDF slices in different dimensions. The profiles of two of its coefficients, γ_{3,0} and γ_{1,2}, are shown in Figure A.1.
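For concreteness, standardized moments of a tabulated 2D slice can be estimated as in the sketch below; it assumes the slice is sampled on a regular (θ_i, φ_i) grid and treated as an unnormalized density, which is a simplification rather than the exact estimator used in Chapter 3.

```python
import numpy as np

def standardized_moment(slice_vals, theta, phi, n, m):
    """gamma_{n,m} of a 2D BRDF slice sampled on a regular (theta, phi) grid."""
    T, P = np.meshgrid(theta, phi, indexing="ij")
    w = slice_vals / slice_vals.sum()                  # normalize to a density
    mu_t = np.sum(w * T)                               # mean along theta
    mu_p = np.sum(w * P)                               # ~0 by symmetry about the scattering plane
    var_t = np.sum(w * (T - mu_t) ** 2)
    var_p = np.sum(w * (P - mu_p) ** 2)
    cm = np.sum(w * (T - mu_t) ** n * (P - mu_p) ** m)  # centered moment of order (n, m)
    return cm / (var_t ** (n / 2.0) * var_p ** (m / 2.0))

# co-skewness along theta:      standardized_moment(f, th, ph, 3, 0)
# excess kurtosis along theta:  standardized_moment(f, th, ph, 4, 0) - 3.0
```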
Conclusions
Plots of co-skewness and co-kurtosis follow the same insights described in Section 3.3.4. First, moments where m is odd are close to null, similarly to μ_{0,1} and Σ_{1,1}. This enforces the symmetry about the scattering plane. The second symmetry, at incident view, is confirmed as well by the fact that co-skewness starts at 0 while co-kurtosis starts at the same value, similarly to the mean and variance respectively. In general, all deviations from a single profile occur toward grazing angles. This effect appears stronger for moments related to θ than for those along φ. We thus conjecture that such grazing-angle deviations are due in part to the clamping of directions by hemispherical boundaries. Indeed, such a clamping will have more influence at grazing angles in directions parallel to θ_i (see Fig. 3.3).
"781504"
] | [
"3102"
] |
01116414 | en | [
"math",
"qfin"
] | 2024/03/04 23:41:48 | 2017 | https://hal.science/hal-01116414v4/file/Huang_Nguyen_2017.pdf | Yu-Jui Huang
email: yujui.huang@colorado.edu
Adrien Nguyen-Huu
Time
Erhan Bayraktar
René Carmona
Ivar Samuel Cohen
Eke- Paolo Guasoni
Jan Ob
Traian Pirvu
Ronnie Sircar
Xunyu Zhou
Time-consistent stopping under decreasing impatience
Keywords: time inconsistency optimal stopping hyperbolic discounting decreasing impatience subgame-perfect Nash equilibrium JEL: C61, D81, D90, G02 2010 Mathematics Subject Classification: 60G40, 91B06
Introduction
Time inconsistency is known to exist in stopping decisions, such as casino gambling in [START_REF] Barberis | A model of casino gambling[END_REF] and [START_REF] Ebert | Until the bitter end: On prospect theory in a dynamic context[END_REF], optimal stock liquidation in [START_REF] Xu | Optimal stopping under probability distortion[END_REF], and real options valuation in [START_REF] Grenadier | Investment under uncertainty and time-inconsistent preferences[END_REF]. A general treatment, however, has not been proposed in continuous-time models. In this article, we develop a dynamic theory for time-inconsistent stopping problems in continuous time, under non-exponential discounting. In particular, we focus on log sub-additive discount functions (Assumption 3.1), which capture decreasing impatience, an acknowledged feature of empirical discounting in Behavioral Economics; see e.g. [START_REF] Thaler | Some empirical evidence on dynamic inconsistency[END_REF], [START_REF] Loewenstein | Anomalies: Intertemporal choice[END_REF], and [START_REF] Loewenstein | Anomalies in intertemporal choice: evidence and an interpretation[END_REF]. Hyperbolic and quasi-hyperbolic discount functions are special cases under our consideration.
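As a quick illustration (not a restatement of Assumption 3.1, which appears later), take the hyperbolic discount function δ(t) = 1/(1 + βt) with β > 0. For any s, t ≥ 0, δ(s)δ(t) = 1/((1 + βs)(1 + βt)) = 1/(1 + β(s + t) + β²st) ≤ 1/(1 + β(s + t)) = δ(s + t), so discounting over two separate delays is at least as heavy as discounting over their sum; this is the log sub-additive behavior associated with decreasing impatience.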
The seminal work Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF] identifies three types of agents under time inconsistency -the naive, the pre-committed, and the sophisticated. Among them, only the sophisticated agent takes the possible change of future preferences seriously, and works on consistent planning: she aims to find a strategy that once being enforced over time, none of her future selves would want to deviate from it. How to precisely formulate such a sophisticated strategy had been a challenge in continuous time. For stochastic control, Ekeland and Lazrak [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF] resolved this issue by defining sophisticated controls as subgame-perfect Nash equilibria in a continuous-time inter-temporal game of multiple selves. This has aroused vibrant research on time inconsistency in mathematical finance; see e.g. [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Ekeland | Time-consistent portfolio management[END_REF], [START_REF] Hu | Time-inconsistent stochastic linear-quadratic control[END_REF], [START_REF] Yong | Time-inconsistent optimal control problems and the equilibrium HJB equation[END_REF], [START_REF] Björk | Mean-variance portfolio optimization with state-dependent risk aversion[END_REF], [START_REF] Dong | Time-inconsistent portfolio investment problems[END_REF], [START_REF] Björk | A theory of Markovian time-inconsistent stochastic control in discrete time[END_REF], and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. There is, nonetheless, no equivalent development for stopping problems.
This paper contributes to the literature of time inconsistency in three ways. First, we provide a precise definition of sophisticated stopping policy (or, equilibrium stopping policy) in continuous time (Definition 3.2). Specifically, we introduce the operator Θ in (3.7), which describes the game-theoretic reasoning of a sophisticated agent. Sophisticated policies are formulated as fixed points of Θ, which connects to the concept of subgame-perfect Nash equilibrium invoked in [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF].
Second, we introduce a new, iterative approach for finding equilibrium strategies. For any initial stopping policy τ , we apply the operator Θ to τ repetitively until it converges to an equilibrium stopping policy. Under appropriate conditions, this fixed-point iteration indeed converges (Theorem 3.1), which is the main result of this paper. Recall that the standard approach for finding equilibrium strategies in continuous time is solving a system of non-linear equations, as proposed in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. Solving this system of equations is difficult; and even when it is solved (as in the special cases in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]), we only obtain one particular equilibrium, and it is unclear how other equilibrium strategies can be found. Our iterative approach can be useful here: we find different equilibria simply by starting the fixed-point iteration with different initial strategies τ . In some cases, we are able to find all equilibria; see Proposition 4.2.
Third, when an agent starts to do game-theoretic reasoning and look for equilibrium strategies, she is not satisfied with an arbitrary equilibrium. Instead, she works on improving her initial strategy to turn it into an equilibrium. This improving process is absent from [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF], and subsequent research, although well-known in Game Theory as the hierarchy of strategic reasoning in [START_REF] Stahl | Evolution of smart-n players[END_REF] and [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF]. Our iterative approach specifically represents this improving process: for any initial strategy τ , each application of Θ to τ corresponds to an additional level of strategic reasoning. As a result, the iterative approach complements the existing literature of time inconsistency in that it not only facilitates the search for equilibrium strategies, but provides "agent-specific" equilibria: it assigns one specific equilibrium to each agent according to her initial behavior.
Upon completion of our paper, we noticed the recent work Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF] on mean-variance optimal stopping. They introduced "dynamic optimality" to deal with time inconsistency. As explained in detail in [START_REF]Optimal mean-variance selling strategies[END_REF], this new concept is different from consistent planning in Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], and does not rely on game-theoretic modeling. Therefore, our equilibrium stopping policies are different from their dynamically optimal stopping times. That being said, a few connections between our paper and [START_REF]Optimal mean-variance selling strategies[END_REF] do exist, as pointed out in Remarks 2.2, 3.2, and 4.4.
The paper is organized as follows. In Section 2, we introduce the setup of our model, and demonstrate time inconsistency in stopping decisions through examples. In Section 3, we formulate the concept of equilibrium for stopping problems in continuous time, search for equilibrium strategies via fixed-point iterations, and establish the required convergence result. Section 4 illustrates our theory thoroughly in a real options model. Most of the proofs are delegated to appendices.
Preliminaries and Motivation
Consider the canonical space Ω := {ω ∈ C([0, ∞); R d ) : ω 0 = 0}. Let {W t } t≥0 be the coordinate mapping process W t (ω) = ω t , and F W = {F W s } s≥0 be the natural filtration generated by W . Let P be the Wiener measure on (Ω, F W ∞ ), where F W ∞ := s≥0 F W s . For each t ≥ 0, we introduce the filtration
F t,W = {F t,W s } s≥0 with F t,W s = σ(W u∨t -W t : 0 ≤ u ≤ s),
and let F t = {F t s } s≥0 be the P-augmentation of F t,W . We denote by T t the collection of all F t -stopping times τ with τ ≥ t a.s. For the case where t = 0, we simply write F 0 = {F 0 s } s≥0 as F s = {F s } s≥0 , and T 0 as T .
Remark 2.1. For any 0 ≤ s ≤ t, F t s is the σ-algebra generated by only the P-negligible sets. Moreover, for any s, t ≥ 0, F t s -measurable random variables are independent of F t ; see Bouchard and Touzi [8, Remark 2.1] for a similar set-up.
Consider the space X := [0, ∞) × R d , equipped with the Borel σ-algebra B(X). Let X be a continuous-time Markov process given by X s := f (s, W s ), s ≥ 0, for some measurable function f : X → R. Or, more generally, for any τ ∈ T and R d -valued F τ -measurable ξ, let X be the solution to the stochastic differential equation (2.1)
dX t = b(t, X t )dt + σ(t, X t )dW t for t ≥ τ, with X τ = ξ a.s.
We assume that b : X → R and σ : X → R satisfy Lipschitz and linear growth conditions in x ∈ R^d, uniformly in t ∈ [0, ∞). Then, for any τ ∈ T and R^d-valued F_τ-measurable ξ with E[|ξ|^2] < ∞, (2.1) admits a unique strong solution.
For any (t, x) ∈ X, we denote by X t,x the solution to (2.1) with X t = x, and by E t,x the expectation conditioned on X t = x.
Classical Optimal Stopping
Consider a payoff function g : R d → R, assumed to be nonnegative and continuous, and a discount function δ : R + → [0, 1], assumed to be continuous, decreasing, and satisfy δ(0) = 1. Moreover, we assume that
(2.2) E t,x sup t≤s≤∞ δ(s -t)g(X s ) < ∞, ∀(t, x) ∈ X,
where we interpret δ(∞ - t) g(X^{t,x}_∞) := lim sup_{s→∞} δ(s - t) g(X^{t,x}_s); this is in line with Karatzas and Shreve [START_REF] Karatzas | Methods of mathematical finance[END_REF]Appendix D]. Given (t, x) ∈ X, classical optimal stopping asks whether there is a τ ∈ T_t such that the expected discounted payoff
(2.3) J(t, x; τ ) := E t,x [δ(τ -t)g(X τ )]
can be maximized. The associated value function
(2.4) v(t, x) := sup τ ∈T t J(t, x; τ )
has been widely studied, and the existence of an optimal stopping time is affirmative. The following is a standard result taken from [START_REF] Karatzas | Methods of mathematical finance[END_REF]Appendix D] and [START_REF] Peskir | Optimal stopping and free-boundary problems[END_REF]Chapter I.2].
Proposition 2.1. For any (t, x) ∈ X, let {Z t,x s } s≥t be a right-continuous process with
(2.5) Z t,x s (ω) = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -t)g(X τ )] a.s. ∀s ≥ t,
and define τ t,x ∈ T t by
τ t,x := inf s ≥ t : δ(s -t)g(X t,x s ) = Z t,x s . (2.6)
Then, τ_{t,x} is an optimal stopping time of (2.4), i.e.
(2.7) J(t, x; τ_{t,x}) = sup_{τ ∈ T_t} J(t, x; τ).
Moreover, τ t,x is the smallest, if not unique, optimal stopping time.
Remark 2.2. The classical optimal stopping problem (2.4) is static in the sense that it involves only the preference of the agent at time t. Following the terminology of Definition 1 in Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF], τ t,x in (2.6) is "statically optimal".
Time Inconsistency
Following Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], a naive agent solves the classical problem (2.4) repeatedly at every moment as time passes by. That is, given initial (t, x) ∈ X, the agent solves sup τ ∈Ts J(s, X t,x s ; τ ) at every moment s ≥ t.
By Proposition 2.1, the agent at time s intends to employ the stopping time τ_{s, X^{t,x}_s} ∈ T_s, for all s ≥ t. This raises the question of whether the optimal stopping times obtained at different moments, τ_{t,x} and τ_{t', X^{t,x}_{t'}} with t' > t, are consistent with each other. Definition 2.1 (Time Consistency). The problem (2.4) is time-consistent if for any (t, x) ∈ X and s > t, τ_{t,x}(ω) = τ_{s, X^{t,x}_s(ω)}(ω) for a.e. ω ∈ {τ_{t,x} ≥ s}. We say the problem (2.4) is time-inconsistent if the above does not hold.
In the classical literature of Mathematical Finance, the discount function usually takes the form δ(s) = e -ρs for some ρ ≥ 0. This already guarantees time consistency of (2.4). To see this, first observe the identity
(2.8) δ(s)δ(t) = δ(s + t) ∀s, t ≥ 0.
Fix (t, x) ∈ X and pick t ′ > t such that P[ τ t,x ≥ t ′ ] > 0. For a.e. ω ∈ { τ t,x ≥ t ′ }, set y := X t,x t ′ (ω). We observe from (2.6), (2.5), and
X t,x s (ω) = X t ′ ,y s (ω) that τ t,x (ω) = inf s ≥ t ′ : δ(s -t)g(X t ′ ,y s (ω)) ≥ ess sup τ ∈Ts E s,X t ′ ,y s (ω) [δ(τ -t)g(X τ )] , τ t ′ ,y (ω) = inf s ≥ t ′ : δ(s -t ′ )g(X t ′ ,y s (ω)) ≥ ess sup τ ∈Ts E s,X t ′ ,y s (ω) [δ(τ -t ′ )g(X τ )] .
Then (2.8) guarantees τ_{t,x}(ω) = τ_{t',y}(ω), as δ(τ - t)/δ(s - t) = δ(τ - t')/δ(s - t') = δ(τ - s). For non-exponential discount functions, the identity (2.8) no longer holds, and the problem (2.4) is in general time-inconsistent.
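A two-line numerical illustration of this ratio argument (purely for intuition; the discount rate and dates below are arbitrary choices of ours):

import numpy as np
rho, s, tau = 0.2, 5.0, 8.0
for name, d in [("exponential", lambda u: np.exp(-rho * u)), ("hyperbolic", lambda u: 1.0 / (1.0 + u))]:
    print(name, [d(tau - t) / d(s - t) for t in (0.0, 3.0)])
# exponential: the ratio equals delta(tau - s) for both values of t; hyperbolic: it depends on t.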
Example 2.1 (Smoking Cessation). Suppose a smoker has a fixed lifetime T > 0. Consider a deterministic cost process X s := x 0 e 1 2 s , s ∈ [0, T ], for some x 0 > 0. Thus, we have X t,x s = xe 1 2 (s-t) for s ∈ [t, T ]. The smoker can (i) quit smoking at some time s < T (with cost X s ) and die peacefully at time T (with no cost), or (ii) never quit smoking (thus incurring no cost) but die painfully at time T (with cost X T ). With hyperbolic discount function δ(s) := 1 1+s for s ≥ 0, (2.4) becomes minimizing cost
inf s∈[t,T ] δ(s -t)X t,x s = inf s∈[t,T ] xe 1 2 (s-t) 1 + (s -t) .
By basic calculus, the optimal stopping time τ_{t,x} is given by
(2.9) τ_{t,x} = t + 1 if t < T - 1, and τ_{t,x} = T if t ≥ T - 1.
Time inconsistency can be easily observed, and it illustrates the procrastination behavior: the smoker never quits smoking. Example 2.2 (Real Options Model), whose statement appears below, can be viewed as a real options problem in which the management of a large non-profitable insurance company has the intention to liquidate or sell the company, and would like to decide when to do so; see the explanations under (4.2) for details.
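A quick numerical check of the minimization behind (2.9) in Example 2.1 (the lifetime T, cost level x and evaluation dates below are arbitrary):

import numpy as np
T, x = 10.0, 1.0
for t in (2.0, 3.0, 4.0):
    s = np.linspace(t, T, 200001)
    cost = x * np.exp(0.5 * (s - t)) / (1.0 + (s - t))
    print(t, s[np.argmin(cost)])   # the minimizer is always (approximately) t + 1: the planned quitting date keeps sliding forward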
By the argument in Pedersen and Peskir [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF], we prove in Proposition 4.1 below that the optimal stopping time τ x , defined in (2.6) with t = 0, has the formula
τ x = inf s ≥ 0 : X x s ≥ √ 1 + s .
If one solves the same problem at time t > 0 with X_t = x ∈ R_+, the optimal stopping time is τ_{t,x} = t + τ_x = inf{s ≥ t : X^{t,x}_s ≥ √(1 + (s - t))}. The free boundary s ↦ √(1 + (s - t)) is unusual in its dependence on the initial time t. From Figure 1, we clearly observe time inconsistency: τ_{t,x}(ω) and τ_{t', X^{t,x}_{t'}}(ω) do not agree in general, for any t' > t, as they correspond to different free boundaries. As proposed in Strotz [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], to deal with time inconsistency, we need a strategy that is either pre-committed or sophisticated. A pre-committed agent finds τ_{t,x} in (2.6) at time t, and forces her future selves to follow τ_{t,x} through a commitment mechanism (e.g. a contract). By contrast, a sophisticated agent works on "consistent planning": she anticipates the change of future preferences, and aims to find a stopping strategy that, once enforced over time, none of her future selves would want to deviate from. How to precisely formulate sophisticated stopping strategies has been a challenge in continuous time, and the next section focuses on resolving this.
Equilibrium Stopping Policies
Objective of a Sophisticated Agent
Since one may re-evaluate and change her choice of stopping times over time, her stopping strategy is not a single stopping time, but a stopping policy defined below. Definition 3.1. A Borel measurable function τ : X → {0, 1} is called a stopping policy. We denote by T (X) the set of all stopping policies.
Given current time and state (t, x) ∈ X, a policy τ ∈ T (X) governs when an agent stops: the agent stops at the first time τ (s, X t,x s ) yields the value 0, i.e. at the moment
Lτ (t, x) := inf s ≥ t : τ (s, X t,x s ) = 0 . (3.1)
To show that Lτ (t, x) is a well-defined stopping time, we introduce the set
(3.2) ker(τ ) := {(t, x) ∈ X : τ (t, x) = 0}.
It is called the kernel of τ , which is the collection of (t, x) at which the policy τ suggests immediate stopping. Then, Lτ (t, x) can be expressed as
(3.3) Lτ (t, x) = inf s ≥ t : (s, X t,x s ) ∈ ker(τ ) .
Lemma 3.1. For any τ ∈ T (X) and (t, x) ∈ X, ker(τ ) ∈ B(X) and Lτ (t, x) ∈ T t .
Proof. The Borel measurability of τ ∈ T (X) immediately implies ker(τ ) ∈ B(X). In view of (3.3), Lτ (t, x)(ω) = inf {s ≥ t : (s, ω) ∈ E}, where
E := {(r, ω) ∈ [t, ∞) × Ω : (r, X t,x r (ω)) ∈ ker(τ )}.
With ker(τ ) ∈ B(X) and the process X t,x being progressively measurable, E is a progressively measurable set. Since the filtration F t satisfies the usual conditions, [2, Theorem 2.1] asserts that Lτ (t, x) is an F t -stopping time.
Remark 3.1 (Naive Stopping Policy). Recall the optimal stopping time τ t,x defined in (2.6) for all (t, x) ∈ X. Define τ ∈ T (X) by
(3.4) τ (t, x) := 0, if τ t,x = t, 1, if τ t,x > t.
Note that τ : X → {0, 1} is indeed Borel measurable because τ t,x = t if and only if
(t, x) ∈ (t, x) ∈ X : g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ∈ B(X).
Following the standard terminology (see e.g. [START_REF] Strotz | Myopia and inconsistency in dynamic utility maximization[END_REF], [START_REF] Pollak | Consistent planning[END_REF]), we call τ the naive stopping policy as it describes the behavior of a naive agent, discussed in Subsection 2.2.
Remark 3.2. Despite its name, the naive stopping policy τ may readily satisfy a certain optimality criterion. For example, the "dynamic optimality" recently proposed in Pedersen and Peskir [START_REF]Optimal mean-variance selling strategies[END_REF] can be formulated in our case as follows: τ ∈ T(X) is dynamically optimal if there is no other π ∈ T(X) such that
P t,x J Lτ (t, x), X t,x Lτ (t,x) ; Lπ Lτ (t, x), X t,x Lτ (t,x) > g(X t,x Lτ (t,x) ) > 0
for some (t, x) ∈ X. By (3.4) and Proposition 2.1, τ is dynamically optimal as the above probability is always 0.
Example 3.1 (Real Options Model, Continued). Recall the setting of Example 2.2. A naive agent follows τ ∈ T (X), and the actual moment of stopping is
L τ (t, x) = inf{s ≥ t : τ (s, X t,x s ) = 0} = inf{s ≥ t : X t,x s ≥ 1},
which differs from the agent's original decision τ t,x in Example 2.2.
We can now introduce equilibrium policies. Suppose that a stopping policy τ ∈ T(X) is given to a sophisticated agent. At any (t, x) ∈ X, the agent carries out the game-theoretic reasoning: "assuming that all my future selves will follow τ ∈ T(X), what is the best stopping strategy at current time t in response to that?" Note that the agent at time t has only two possible actions: stopping and continuation. If she stops at time t, she gets g(x) immediately. If she continues at time t, then, given that all her future selves will follow τ, she will stop at the moment
(3.5) L*τ(t, x) := inf{s > t : τ(s, X^{t,x}_s) = 0} = inf{s > t : (s, X^{t,x}_s) ∈ ker(τ)},
leading to the payoff
J(t, x; L * τ (t, x)) = E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) .
By the same argument in Lemma 3.1, L * τ (t, x) is a well-defined stopping time in T t . Note the subtle difference between Lτ (t, x) and L * τ (t, x): with the latter, the agent at time t simply chooses to continue, with no regard to what τ ∈ T (X) suggests at time t. This is why we have "s > t" in (3.5), instead of "s ≥ t" in (3.1). Now, we separate the space X into three distinct regions
S τ := {(t, x) ∈ X : g(x) > J(t, x; L * τ (t, x))}, C τ := {(t, x) ∈ X : g(x) < J(t, x; L * τ (t, x))}, I τ := {(t, x) ∈ X : g(x) = J(t, x; L * τ (t, x))}. (3.6)
Some conclusions can be drawn:
1. If (t, x) ∈ S τ , the agent should stop immediately at time t.
2. If (t, x) ∈ C τ , the agent should continue at time t.
3. If (t, x) ∈ I τ , the agent is indifferent between stopping and continuation at current time; there is then no incentive for the agent to deviate from the originally assigned stopping strategy τ (t, x).
To summarize, for any (t, x) ∈ X, the best stopping strategy at current time (in response to future selves following τ ∈ T (X)) is
(3.7) Θτ(t, x) := 0 for (t, x) ∈ S_τ; 1 for (t, x) ∈ C_τ; τ(t, x) for (t, x) ∈ I_τ.
The next result shows that Θτ : X → {0, 1} is again a stopping policy. Lemma 3.2. For any τ ∈ T(X), S_τ, C_τ, and I_τ belong to B(X), and Θτ ∈ T(X).
Proof. Since L*τ(t, x) is the first hitting time of the Borel set ker(τ), the map (t, x) ↦ J(t, x; L*τ(t, x)) = E^{t,x}[δ(L*τ(t, x) - t) g(X_{L*τ(t,x)})] is Borel measurable, and thus S_τ, I_τ, and C_τ all belong to B(X). Now, by (3.7), ker(Θτ) = S_τ ∪ (I_τ ∩ ker(τ)) ∈ B(X), which implies that Θτ ∈ T(X).
By Lemma 3.2, Θ can be viewed as an operator acting on the space T(X). For any initial τ ∈ T(X), Θ : T(X) → T(X) generates a new policy Θτ ∈ T(X). The switch from τ to Θτ corresponds to an additional level of strategic reasoning in Game Theory, as discussed below Corollary 3.1.
Definition 3.2 (Equilibrium Stopping Policies). We say τ ∈ T(X) is an equilibrium stopping policy if Θτ(t, x) = τ(t, x) for all (t, x) ∈ X. We denote by E(X) the collection of all equilibrium stopping policies.
The term "equilibrium" is used as a connection to subgame-perfect Nash equilibria in an inter-temporal game among current self and future selves. This equilibrium idea was invoked in stochastic control under time inconsistency; see e.g. [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], [START_REF] Ekeland | Time-consistent portfolio management[END_REF], and [START_REF] Björk | A theory of Markovian time-inconsistent stochastic control in discrete time[END_REF]. A contrast with the stochastic control literature needs to be pointed out.
Remark 3.3 (Comparison with Stochastic Control).
In time-inconsistent stochastic control, local perturbation of strategies on small time intervals [t, t + ε] is the standard way to define equilibrium controls. In our case, local perturbation is carried out instantaneously at time t. This is because an instantaneously-modified stopping strategy may already change the expected discounted payoff significantly, whereas a control perturbed only at time t yields no effect.
The first question concerning Definition 3.2 is the existence of an equilibrium stopping policy. Finding at least one such policy turns out to be easy. Remark 3.4 (Trivial Equilibrium). Define τ ∈ T(X) by τ(t, x) := 0 for all (t, x) ∈ X. Then Lτ(t, x) = L*τ(t, x) = t, and thus J(t, x; L*τ(t, x)) = g(x) for all (t, x) ∈ X. This implies I_τ = X. We then conclude from (3.7) that Θτ(t, x) = τ(t, x) for all (t, x) ∈ X, which shows τ ∈ E(X). We call it the trivial equilibrium stopping policy.
Example 3.2 (Smoking Cessation, Continued). Recall the setting in Example 2.1. Observe from (2.9) and (3.4) that L * τ (t, x) = T for all (t, x) ∈ X. Then,
δ(L*τ(t, x) - t) X^{t,x}_{L*τ(t,x)} = X^{t,x}_T / (1 + T - t) = x e^{(T-t)/2} / (1 + T - t).
Since e^{s/2} = 1 + s has two solutions, s = 0 and s = s* ≈ 2.51286, and e^{s/2} > 1 + s iff s > s*, the above equation implies S_τ = {(t, x) : t < T - s*}, C_τ = {(t, x) : t ∈ (T - s*, T)}, and I_τ = {(t, x) : t = T - s* or T}. We therefore get Θτ(t, x) = 0 for t < T - s*, and Θτ(t, x) = 1 for t ≥ T - s*.
Whereas a naive smoker delays quitting smoking indefinitely (as in Example 2.1), the first level of strategic reasoning (i.e. applying Θ to τ once) recognizes this procrastination behavior and pushes the smoker to quit immediately, unless he is already too old (i.e. t ≥ T - s*). It can be checked that Θτ is already an equilibrium, i.e. Θ²τ(t, x) = Θτ(t, x) for all (t, x) ∈ X.
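The threshold s* quoted above can be checked numerically (a one-line root-finding exercise; the bracketing interval is chosen by hand):

import numpy as np
from scipy.optimize import brentq
s_star = brentq(lambda s: np.exp(0.5 * s) - (1.0 + s), 1.0, 5.0)
print(s_star)   # approximately 2.51286, the nonzero solution of exp(s/2) = 1 + s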
It is worth noting that in the classical case of exponential discounting, characterized by (2.8), the naive stopping policy τ in (3.4) is already an equilibrium.
Proposition 3.1. Under (2.8), τ ∈ T (X) defined in (3.4) belongs to E(X).
Proof. The proof is delegated to Appendix A.1.
The Main Result
In this subsection, we look for equilibrium policies through fixed-point iterations. For any τ ∈ T (X), we apply Θ to τ repetitively until we reach an equilibrium policy. In short, we define τ 0 by
(3.8) τ 0 (t, x) := lim n→∞ Θ n τ (t, x) ∀(t, x) ∈ X,
and take it as a candidate equilibrium policy. To make this argument rigorous, we need to show (i) the limit in (3.8) converges, so that τ 0 is well-defined; (ii) τ 0 is indeed an equilibrium policy, i.e. Θτ 0 = τ 0 . To this end, we impose the condition:
Assumption 3.1. The function δ satisfies δ(s)δ(t) ≤ δ(s + t) for all s, t ≥ 0.
Assumption 3.1 is closely related to decreasing impatience (DI) in Behavioral Economics. It is well-documented in empirical studies, e.g. [START_REF] Thaler | Some empirical evidence on dynamic inconsistency[END_REF], [START_REF] Loewenstein | Anomalies: Intertemporal choice[END_REF], [START_REF] Loewenstein | Anomalies in intertemporal choice: evidence and an interpretation[END_REF], that people exhibit DI: when choosing between two rewards, people are more willing to wait for the larger reward (more patient) when the two rewards are further away in time. For instance, in the two scenarios (i) getting $100 today or $110 tomorrow, and (ii) getting $100 in 100 days or $110 in 101 days, people tend to choose $100 in (i), but $110 in (ii).
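This preference reversal is reproduced by the hyperbolic discount function δ(t) = 1/(1 + t) used in our examples; measuring t in days here is our own, purely illustrative, convention:

delta = lambda t: 1.0 / (1.0 + t)
print(100 * delta(0), 110 * delta(1))      # 100.0 vs 55.0   -> take $100 today
print(100 * delta(100), 110 * delta(101))  # ~0.990 vs ~1.078 -> wait one more day for $110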
Following [28, Definition 1] and [START_REF] Noor | Decreasing impatience and the magnitude effect jointly contradict exponential discounting[END_REF], [START_REF]Hyperbolic discounting and the standard model: Eliciting discount functions[END_REF], DI can be formulated under current context as follows: the discount function δ induces DI if (3.9) for any s ≥ 0, t → δ(t + s) δ(t) is strictly increasing.
Observe that (3.9) readily implies Assumption 3.1, since then δ(t + s)/δ(t) ≥ δ(s)/δ(0) = δ(s) for all s, t ≥ 0. The main convergence result for (3.8), Proposition 3.2 below, states that under Assumption 3.1 and the condition (3.10) ker(τ) ⊆ ker(Θτ),
(3.11) ker(Θ^n τ) ⊆ ker(Θ^{n+1} τ), ∀n ∈ N.
Hence, τ_0 in (3.8) is a well-defined element of T(X), with ker(τ_0) = ⋃_{n∈N} ker(Θ^n τ).
Proof. The proof is delegated to Appendix A.2.
Condition (3.10) means that at any (t, x) ∈ X where the initial policy τ indicates immediate stopping, the new policy Θτ agrees with it; however, it is possible that at some (t, x) ∈ X where τ indicates continuation, Θτ suggests immediate stopping, based on the game-theoretic reasoning in Subsection 3.1. Note that (3.10) is not very restrictive, as it already covers all hitting times to subsets of X that are open (or more generally, half-open in [0, ∞) and open in R d ), as explained below.
Remark 3.5. Let E be a subset of X that is "open" in the sense that for any (t, x) ∈ E, there exists ε > 0 such that (t, x) ∈ [t, t + ε) × B_ε(x) ⊆ E, where B_ε(x) := {y ∈ R^d : |y - x| < ε}. Define τ ∈ T(X) by τ(t, x) = 0 if and only if (t, x) ∈ E. Since ker(τ) = E is "open", for any (t, x) ∈ ker(τ), we have L*τ(t, x) = t, which implies (t, x) ∈ I_τ, so that τ satisfies (3.10). The stopping policy τ corresponds to the stopping times T_{t,x} := inf{s ≥ t : (s, X^{t,x}_s) ∈ E} for all (t, x) ∈ X. In particular, if E = [0, ∞) × F, where F is an open set in R^d, the corresponding stopping times are T'_{t,x} := inf{s ≥ t : X^{t,x}_s ∈ F}, (t, x) ∈ X.
Moreover, the naive stopping policy τ also satisfies (3.10).
Proposition 3.3. τ ∈ T (X) defined in (3.4) satisfies (3.10).
Proof. The proof is delegated to Appendix A.3.
The next theorem, Theorem 3.1, is the main result of our paper. It shows that the fixed-point iteration in (3.8) indeed converges to an equilibrium policy. Proof. The proof is delegated to Appendix A.4.
The following result for the naive stopping policy τ , defined in (3.4), is a direct consequence of Proposition 3.3 and Theorem 3.1.
Corollary 3.1. Let Assumption 3.1 hold. The stopping policy τ 0 ∈ T (X) defined by
(3.12) τ 0 (t, x) := lim n→∞ Θ n τ (t, x) ∀(t, x) ∈ X
belongs to E(X).
Our iterative approach, as in (3.8), contributes to the literature of time inconsistency in two ways. First, the standard approach for finding equilibrium strategies in continuous time is solving a system of non-linear equations (the so-called extended HJB equation), as proposed in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]. Solving this system of equations is difficult; and even when it is solved (as in the special cases in [START_REF] Ekeland | Investment and consumption without commitment[END_REF] and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF]), we just obtain one particular equilibrium, and it is unclear how other equilibrium strategies can be found. Our iterative approach provides a potential remedy here. We can find different equilibria simply by starting the iteration (3.8) with different initial policies τ ∈ T (X). In some cases, we are able to find all equilibria, and obtain a complete characterization of E(X); see Proposition 4.2 below.
Second, while the continuous-time formulation of equilibrium strategies was initiated in [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], the "origin" of an equilibrium strategy has not been addressed. This question is important as people do not start with using equilibrium strategies. People have their own initial strategies, determined by a variety of factors such as classical optimal stopping theory, personal habits, and popular rules of thumb in the market. Once an agent starts to do game-theoretic reasoning and look for equilibrium strategies, she is not satisfied with an arbitrary equilibrium. Instead, she works on improving her initial strategy to turn it into an equilibrium. This improving process is absent from [START_REF] Ekeland | Being serious about non-commitment: subgame perfect equilibrium in continuous time[END_REF], [START_REF] Ekeland | Investment and consumption without commitment[END_REF], and [START_REF] Björk | A theory of Markovian timeinconsistent stochastic control in continuous time[END_REF], but it is in fact well-known in Game Theory as the hierarchy of strategic reasoning in [START_REF] Stahl | Evolution of smart-n players[END_REF] and [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF]. Our iterative approach embodies this framework: given an initial τ ∈ T (X), Θ n τ ∈ T (X) corresponds to level-n strategic reasoning in [START_REF] Stahl | Experimental evidence on players' models of other players[END_REF], and τ 0 := lim n→∞ Θ n τ reflects full rationality of "smart ∞ " players in [START_REF] Stahl | Evolution of smart-n players[END_REF]. Hence, our formulation complements the literature of time inconsistency in that it not only defines what an equilibrium is, but explains where an equilibrium is coming from. This in turn provides "agent-specific" results: it assigns one specific equilibrium to each agent according to her initial behavior.
In particular, Corollary 3.1 specifies the connection between the naive behavior and the sophisticated one. While these behaviors have been widely discussed in the literature, their relation has not been stated mathematically as precisely as in (3.12).
The Time-Homogeneous Case
Suppose the state process X is time-homogeneous, i.e. X_s = f(W_s) for some measurable f : R^d → R; or, the coefficients b and σ in (2.1) do not depend on t. The objective function (2.3) then reduces to J(x; τ) := E^x[δ(τ) g(X_τ)] for x ∈ R^d and τ ∈ T, where the superscript of E^x means X_0 = x. The decision to stop or to continue then depends on the current state x only. The formulation in Subsection 3.1 reduces to: Definition 3.3. When X is time-homogeneous, a Borel measurable τ : R^d → {0, 1} is called a stopping policy, and we denote by T(R^d) the set of all stopping policies. Given τ ∈ T(R^d) and x ∈ R^d, we define, similarly to (3.2), (3.1), and (3.5), ker(τ) := {x ∈ R^d : τ(x) = 0}, Lτ(x) := inf{t ≥ 0 : τ(X^x_t) = 0}, and L*τ(x) := inf{t > 0 : τ(X^x_t) = 0}. Furthermore, we say τ ∈ T(R^d) is an equilibrium stopping policy if Θτ(x) = τ(x) for all x ∈ R^d, where
(3.13) Θτ (x) := 0 if x ∈ S τ := {x : g(x) > E x [δ(L * τ (x))g(X L * τ (x) )]}, 1 if x ∈ C τ := {x : g(x) < E x [δ(L * τ (x))g(X L * τ (x) )]}, τ (x) if x ∈ I τ := {x : g(x) = E x [δ(L * τ (x))g(X L * τ (x) )]}.
Remark 3.6. When X is time-homogeneous, all the results in Subsection 3.2 hold, with T (X), E(X), ker(τ ), and Θ replaced by the corresponding ones in Definition 3.3. Proofs of these statements are similar to, and in fact easier than, those in Subsection 3.2, thanks to the homogeneity in time.
A Detailed Case Study: Stopping of BES(1)
In this section, we recall the setup of Example 2.2, with hyperbolic discount function
(4.1) δ(s) := 1 1 + βs ∀s ≥ 0,
where β > 0 is a fixed parameter. The state process X is a one-dimensional Bessel process, i.e. X t = |W t |, t ≥ 0, where W is a one-dimensional Brownian motion. With X being timehomogeneous, we will follow Definition 3.3 and Remark 3.6. Also, the classical optimal stopping problem (2.4) reduces to
(4.2) v(x) = sup τ ∈T E x X τ 1 + βτ for x ∈ R + .
This can be viewed as a real options problem, as explained below.
By [START_REF] Taksar | Optimal dynamic reinsurance policies for large insurance portfolios[END_REF] and the references therein, when the surplus (or reserve) of an insurance company is much larger than the size of each individual claim, the dynamics of the surplus process can be approximated by dR t = µdt + σdW t with µ = p -E[Z] and σ = E[Z 2 ]. Here, p > 0 is the premium rate, and Z is a random variable that represents the size of each claim. Suppose that an insurance company is non-profitable with µ = 0, i.e. it uses all the premiums collected to cover incoming claims. Also assume that the company is large enough to be considered "systemically important", so that when its surplus hits zero, the government will provide monetary support to bring it back to positivity, as in the recent financial crisis. The dynamics of R is then a Brownian motion reflected at the origin. Thus, (4.2) describes a real options problem in which the management of a large non-profitable insurance company has the intention to liquidate or sell the company, and would like to decide when to do so.
An unusual feature of (4.2) is that the discounted process {δ(s)v(X x s )} s≥0 may not be a supermartingale. This makes solving (4.2) for the optimal stopping time τ x , defined in (2.6) with t = 0, nontrivial. As shown in Appendix B.1, we need an auxiliary value function, and use the method of time-change in [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF]. Proposition 4.1. For any x ∈ R + , the optimal stopping time τ x of (4.2) (defined in (2.6) with t = 0) admits the explicit formula
(4.3) τ_x = inf{s ≥ 0 : X^x_s ≥ √(1/β + s)}.
Hence, the naive stopping policy τ ∈ T (R + ), defined in (3.4), is given by
(4.4) τ (x) := 1 [0, √ 1/β) (x) ∀x ∈ R + .
Proof. The proof is delegated to Appendix B.1.
Characterization of equilibrium policies
Lemma 4.1. For any τ ∈ T(R_+), consider τ' ∈ T(R_+) with ker(τ') := cl(ker(τ)), the closure of ker(τ). Then L*τ(x) = Lτ(x) = Lτ'(x) = L*τ'(x) for all x ∈ R_+. Hence, τ ∈ E(R_+) if and only if τ' ∈ E(R_+).
Proof. If x ∈ R_+ is in the interior of ker(τ), then L*τ(x) = Lτ(x) = 0 = Lτ'(x) = L*τ'(x). Since a one-dimensional Brownian motion is monotone in no interval, if x ∈ ker(τ') \ ker(τ), L*τ(x) = Lτ(x) = 0 = Lτ'(x) = L*τ'(x); if x ∉ ker(τ'), then L*τ(x) = Lτ(x) = inf{s ≥ 0 : |W^x_s| ∈ ker(τ)} = inf{s ≥ 0 : |W^x_s| ∈ cl(ker(τ))} = Lτ'(x) = L*τ'(x). Finally, we deduce from (3.13) and L*τ(x) = L*τ'(x) for all x ∈ R_+ that τ ∈ E(R_+) implies τ' ∈ E(R_+), and vice versa.
The next result shows that every equilibrium policy corresponds to the hitting time of a certain threshold. Recall that a set E ⊂ R_+ is called totally disconnected if the only nonempty connected subsets of E are singletons, i.e. E contains no interval. Lemma 4.2. For any τ ∈ E(R_+), define a := inf(ker(τ)) ≥ 0. Then, the Borel set E := {x ≥ a : x ∉ ker(τ)} is totally disconnected. Hence, cl(ker(τ)) = [a, ∞) and the stopping policy τ_a, defined by τ_a(x) := 1_{[0,a)}(x) for x ∈ R_+, belongs to E(R_+).
Proof. The proof is delegated to Appendix B.2.
The converse question is for which a ≥ 0 the policy τ_a ∈ T(R_+) is an equilibrium. To answer this, we need to find the sets S_{τ_a}, C_{τ_a}, and I_{τ_a} in (3.13). By Definition 3.3,
(4.5) Lτ_a(x) = T^x_a := inf{s ≥ 0 : X^x_s ≥ a}, L*τ_a(x) = inf{s > 0 : X^x_s ≥ a}.
Note that Lτ_a(x) = L*τ_a(x), by an argument similar to the proof of Lemma 4.1. As a result, for x ≥ a, we have J(x; L*τ_a(x)) = J(x; 0) = x, which implies (4.6) below. For x ∈ [0, a), we need Lemma 4.3 below, where η(x, a) := E^x[a/(1 + βT^x_a)]; its three parts read:
(i) For any a ≥ 0, x ↦ η(x, a) is strictly increasing and strictly convex on [0, a], and satisfies 0 < η(0, a) < a and η(a, a) = a.
(ii) For any x ≥ 0, η(x, a) → 0 as a → ∞.
(iii) There exists a unique a* ∈ (0, 1/√β) such that for any a > a*, there is a unique solution x*(a) ∈ (0, a*) of η(x, a) = x. Hence, η(x, a) > x for x < x*(a) and η(x, a) < x for x > x*(a). On the other hand, a ≤ a* implies that η(x, a) > x for all x ∈ (0, a).
In particular, for x ∈ [0, a), J(x; L*τ_a(x)) = η(x, a), which for a > a* is > x if x ∈ [0, x*(a)), = x if x = x*(a), and < x if x ∈ (x*(a), a).
By (4.6), (4.7), (4.8), and the definition of Θ in (3.13),
(4.9) if a ≤ a*, Θτ_a(x) = 1_{[0,a)}(x) + τ_a(x) 1_{[a,∞)}(x) ≡ τ_a(x); if a > a*, Θτ_a(x) = 1_{[0,x*(a))}(x) + τ_a(x) 1_{{x*(a)}∪[a,∞)}(x), which differs from τ_a(x).
For a > a*, although τ_a ∉ E(R_+) by Proposition 4.2, we may use the iteration in (3.8) to find a stopping policy in E(R_+). Here, the repetitive application of Θ to τ_a has a simple structure: to reach an equilibrium, we need only one iteration. Recall "static optimality" and "dynamic optimality" in Remarks 2.2 and 3.2. By Proposition 4.1, τ_x in (4.3) is statically optimal for each fixed x ∈ R_+, while τ in (4.4) is dynamically optimal. This is reminiscent of the situation in Theorem 3 of [START_REF]Optimal mean-variance selling strategies[END_REF]. Moreover, the policy defined by τ(x) := 1_{[0,b)}(x), x ∈ R_+, is dynamically optimal for all b ≥ √(1/β), thanks again to Proposition 4.1.
Proposition 4.2 below gives the full characterization:
(4.10) E(R_+) = {τ ∈ T(R_+) : ker(τ) = [a, ∞) for some a ∈ [0, a*]}.
Proof. The derivation of "τ_a ∈ E(R_+) ⟺ a ∈ [0, a*]" is
Further consideration on selecting equilibrium policies
In view of (4.10), it is natural to ask which equilibrium in E(R_+) one should employ. According to the standard Game Theory literature discussed below Corollary 3.1, a sophisticated agent should employ the specific equilibrium generated by her initial stopping policy τ, through the iteration (3.8). Now, imagine that an agent is "born" sophisticated: she does not have any previously determined initial stopping policy, and intends to apply an equilibrium policy straight away. A potential way to formulate her stopping problem is the following:
(4.11) sup_{τ ∈ E(R_+)} J(x; Lτ(x)) = sup_{a ∈ [0, a*]} J(x; Lτ_a(x)) = sup_{a ∈ [x, a* ∨ x]} E^x[ a / (1 + βT^x_a) ],
where the first equality follows from Proposition 4.2 and Lemma 4.1.
Proposition 4.3. τ a * ∈ E(R + ) solves (4.11) for all x ∈ R + .
Proof. Fix a ∈ [0, a * ). For any x ≤ a, we have T x a ≤ T x a * . Thus,
J(x; Lτ_{a*}(x)) = E^x[ a*/(1 + βT^x_{a*}) ] = E^x[ E^x[ a*/(1 + βT^x_{a*}) | F_{T^x_a} ] ] ≥ E^x[ (1/(1 + βT^x_a)) E^a[ a*/(1 + βT^a_{a*}) ] ] > E^x[ a/(1 + βT^x_a) ] = J(x; Lτ_a(x)),
where the last inequality follows from Lemma 4.3 (iii).
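Using the closed-form expression for η derived in Appendix B.3 (formula (B.10)), Proposition 4.3 can also be checked numerically; the values of β, x and the thresholds below are arbitrary choices for illustration:

import numpy as np
from scipy.integrate import quad
beta, x = 1.0, 0.4
def eta(x, a):   # eta(x, a) = E^x[ a / (1 + beta*T_a) ], via (B.10)
    integrand = lambda s: np.exp(-s) * np.cosh(x * np.sqrt(2 * beta * s)) / np.cosh(a * np.sqrt(2 * beta * s))
    return a * quad(integrand, 0, np.inf)[0]
for a in (0.5, 0.7, 0.9, 0.946475):
    print(a, eta(x, a))   # J(x; L tau_a(x)) increases in a up to a* ~ 0.946475, so tau_{a*} is best among the equilibria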
The conclusion is twofold. First, it is possible, at least under the current setting, to find one single equilibrium policy that solves (4.11) for all x ∈ R_+. Second, this "optimal" equilibrium policy τ_{a*} is different from τ'_{x*(â)}, the equilibrium generated by the naive policy τ (see Remark 4.3). This indicates that the map Θ* := lim_{n→∞} Θ^n : T(X) → E(X) is in general nonlinear: while the naive τ ∈ T(R_+) is constructed from optimal stopping times {τ_x}_{x∈R_+} (or is "dynamically optimal" as in Remark 4.4), Θ*(τ) = τ'_{x*(â)} ∈ E(X) is not optimal under (4.11). This is not that surprising once we realize τ_x > Lτ(x) > Lτ'_{x*(â)}(x) for some x ∈ R_+. The first inequality is essentially another way to describe time inconsistency, and the second inequality follows from ker(τ) ⊂ ker(Θτ) = ker(τ'_{x*(â)}). It follows that the optimality of τ_x for sup_{τ∈T} J(x; τ) does not necessarily translate into the optimality of τ'_{x*(â)} for sup_{τ∈E(R_+)} J(x; Lτ(x)).
A Proofs for Section 3
Throughout this appendix, we will constantly use the notation
(A.1) τ_n := Θ^n τ, n ∈ N, for any τ ∈ T(X).
A.1 Proof of Proposition 3.1
Fix (t, x) ∈ X. We deal with the two cases τ (t, x) = 0 and τ (t, x) = 1 separately. If τ (t, x) = 0, i.e. τ t,x = t, by (2.7)
g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ≥ E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) , which implies (t, x) ∈ S τ ∪ I τ . We then conclude from (3.7) that Θ τ (t, x) = 0 if (t, x) ∈ S τ τ (t, x) if (t, x) ∈ I τ = τ (t, x). If τ (t, x) = 1, then L * τ (t, x) = L τ (t, x) = inf{s ≥ t : τ (s, X t,x s ) = 0} = inf{s ≥ t : τ s,X t,x s = s}. By (2.6) and (2.5), τ s,X t,x s = s means g(X t,x s (ω)) = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -s)g(X τ )],
which is equivalent to
δ(s -t)g(X t,x s (ω)) = δ(s -t) ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -s)g(X τ )] = ess sup τ ∈Ts E s,X t,x s (ω) [δ(τ -t)g(X τ )] = Z t,x s (ω),
where the second equality follows from (2.8). We then conclude that
L * τ (t, x) = inf{s ≥ t : δ(s -t)g(X t,x s ) = Z t,x s } = τ t,x
. This, together with (2.7), shows that
E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) = E t,x δ( τ t,x -t)g(X τt,x ) ≥ g(x), which implies (t, x) ∈ I τ ∪ C τ . By (3.7), we have Θ τ (t, x) = τ (t, x) if (t, x) ∈ I τ 1 if (t, x) ∈ C τ = τ (t, x).
We therefore have Θτ(t, x) = τ(t, x) for all (t, x) ∈ X, i.e. τ ∈ E(X).
A.2 Derivation of Proposition 3.2
To prove the technical result Lemma A.1 below, we need to introduce shifted random variables as formulated in Nutz [START_REF] Nutz | Random G-expectations[END_REF]. For any t ≥ 0 and ω ∈ Ω, we define the concatenation of ω and ω ∈ Ω at time t by
(ω ⊗ t ω) s := ω s 1 [0,t) (s) + [ω s -(ω t -ω t )]1 [t,∞) (s), s ≥ 0.
For any F ∞ -measurable random variable ξ : Ω → R, we define the shifted random variable
[ξ] t,ω : Ω → R, which is F t ∞ -measurable, by [ξ] t,ω (ω) := ξ(ω ⊗ t ω), ∀ω ∈ Ω.
Given τ ∈ T, we write ω ⊗_{τ(ω)} ω as ω ⊗_τ ω, and [ξ]_{τ(ω),ω}(ω) as [ξ]_{τ,ω}(ω). A detailed analysis of shifted random variables can be found in [3, Appendix A]; Proposition A.1 therein implies that, given (t, x) ∈ X fixed, any θ ∈ T_t and
F t ∞ -measurable ξ with E t,x [|ξ|] < ∞ satisfy (A.2) E t,x [ξ | F t θ ](ω) = E t,x [[ξ] θ,ω ] for a.e. ω ∈ Ω.
Lemma A.1. For any τ ∈ T (X) and (t, x) ∈ X, define t 0 := L * τ 1 (t, x) ∈ T t and s 0 := L * τ (t, x) ∈ T t , with τ 1 as in (A.1). If t 0 ≤ s 0 , then for a.e. ω ∈ {t < t 0 },
g(X t,x t 0 (ω)) ≤ E t,x δ(s 0 -t 0 )g(X s 0 ) | F t t 0 (ω).
Proof. For a.e. ω ∈ {t < t 0 } ∈ F t , we deduce from t 0 (ω) = L * τ 1 (t, x)(ω) > t that for all s ∈ (t, t 0 (ω)) we have τ 1 (s, X t,x s (ω)) = 1 . By (A.1) and (3.7), this implies (s, X t,x s (ω)) / ∈ S τ for all s ∈ (t, t 0 (ω)). Thus,
g(X t,x s (ω)) ≤ E s,X t,x s (ω) δ(L * τ (s, X s ) -s)g X L * τ (s,X s ) ∀s ∈ (t, t 0 (ω)) . (A.3) For any s ∈ (t, t 0 (ω)), note that [t 0 ] s,ω (ω) = t 0 (ω ⊗ s ω) = L * τ 1 (t, x)(ω ⊗ s ω) = L * τ 1 (s, X t,x s (ω))(ω), ∀ ω ∈ Ω. Since t 0 ≤ s 0 , similar calculation gives [s 0 ] s,ω (ω) = L * τ (s, X t,x s (ω))(ω). We thus conclude from (A.3) that g(X t,x s (ω)) ≤ E s,X t,x s (ω) δ([s 0 ] s,ω -s)g [X s 0 ] s,ω ≤ E s,X t,x s (ω) δ([s 0 ] s,ω -[t 0 ] s,ω )g [X s 0 ] s,ω , ∀s ∈ (t, t 0 (ω)) , (A.4)
where the second line holds because δ is decreasing and also δ and g are both nonnegative. On the other hand, by (A.2), it holds a.s. that
E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t s ](ω) = E t,x δ([s 0 ] s,ω -[t 0 ] s,ω )g([X t,x s 0 ] s,ω ) ∀s ≥ t, s ∈ Q.
Note that we used the countability of Q to obtain the above almost-sure statement. This, together with (A.4), shows that it holds a.s. that
(A.5) g(X t,x s (ω)) 1 {(t,t 0 (ω))∩Q} (s) ≤ E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t s ](ω) 1 {(t,t 0 (ω))∩Q} (s).
Since our sample space Ω is the canonical space for Brownian motion with the right-continuous Brownian filtration F, the martingale representation theorem holds under current setting. This in particular implies that every martingale has a continuous version. Let {M s } s≥t be the continuous version of the martingale {E t,x [δ(s 0t 0 )g(X s 0 ) | F t s ]} s≥t . Then, (A.5) immediately implies that it holds a.s. that
(A.6) g(X t,x s (ω)) 1 {(t,t 0 (ω))∩Q} (s) ≤ M s (ω) 1 {(t,t 0 (ω))∩Q} (s).
Also, using the right-continuity of M and (A.2), one can show that for any τ ∈ T t , M τ = E t,x [δ(s 0t 0 )g(X s 0 ) | F t τ ] a.s. Now, we can take some Ω * ∈ F ∞ with P[Ω * ] = 1 such that for all ω ∈ Ω * , (A.6) holds true and M t 0 (ω) = E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ](ω). For any ω ∈ Ω * ∩{t < t 0 }, take {k n } ⊂ Q such that k n > t and k n ↑ t 0 (ω). Then, (A.6) implies g(X t,x kn (ω)) ≤ M kn (ω), ∀n ∈ N. As n → ∞, we obtain from the continuity of s → X s and z → g(z), and the left-continuity of s → M s that g(X t,x t 0 (ω))
≤ M t 0 (ω) = E t,x [δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ](ω).
Now, we are ready to prove Proposition 3.2.
Proof of Proposition 3.2. We will prove (3.11) by induction. We know that the result holds for n = 0 by (3.10). Now, assume that (3.11) holds for n = k ∈ N ∪ {0}, and we intend to show that (3.11) also holds for n = k + 1. Recall the notation in (A.1). Fix (t, x) ∈ ker(τ k+1 ), i.e. τ k+1 (t, x) = 0. If L * τ k+1 (t, x) = t, then (t, x) belongs to I τ k+1 . By (3.7), we get τ k+2 (t, x) = Θτ k+1 (t, x) = τ k+1 (t, x) = 0, and thus (t, x) ∈ ker(τ k+2 ), as desired. We therefore assume below that L * τ k+1 (t, x) > t. By (3.7), τ k+1 (t, x) = 0 implies
(A.7) g(x) ≥ E t,x [δ(L * τ k (t, x) -t)g(X L * τ k (t,x) )].
Let t 0 := L * τ k+1 (t, x) and s 0 := L * τ k (t, x). Under the induction hypothesis ker(τ k ) ⊆ ker(τ k+1 ), we have t 0 ≤ s 0 , as t 0 and s 0 are hitting times to ker(τ k+1 ) and ker(τ k ), respectively; see (3.5). Using (A.7), t 0 ≤ s 0 , Assumption 3.1, and g being nonnegative,
g(x) ≥ E t,x [δ(s 0 -t)g(X s 0 )] ≥ E t,x [δ(t 0 -t)δ(s 0 -t 0 )g(X s 0 )] = E t,x δ(t 0 -t)E t,x δ(s 0 -t 0 )g(X s 0 ) | F t t 0 ≥ E t,x δ(t 0 -t)g(X t 0 ) ,
where the second line follows from the tower property of conditional expectations, and the third line is due to Lemma A.1. This implies (t, x) / ∈ C τ k+1 , and thus
(A.8) τ_{k+2}(t, x) = 0 for (t, x) ∈ S_{τ_{k+1}}, and τ_{k+2}(t, x) = τ_{k+1}(t, x) for (t, x) ∈ I_{τ_{k+1}}; in either case the value is 0.
That is, (t, x) ∈ ker(τ k+2 ). Thus, we conclude that ker(τ k+1 ) ⊆ ker(τ k+2 ), as desired.
It remains to show that τ 0 defined in (3.8) is a stopping policy. Observe that for any (t, x) ∈ X, τ 0 (t, x) = 0 if and only if Θ n τ (t, x) = 0, i.e. (t, x) ∈ ker(Θ n τ ), for n large enough. This, together with (3.11), implies that
{(t, x) ∈ X : τ 0 (t, x) = 0} = n∈N ker(Θ n τ ) ∈ B(X).
Hence, τ 0 : X → {0, 1} is Borel measurable, and thus an element in T (X).
A.3 Proof of Proposition 3.3
Fix (t, x) ∈ ker( τ ). Since τ (t, x) = 0, i.e. τ t,x = t, (2.6), (2.5), and (2.7) imply
g(x) = sup τ ∈Tt E t,x [δ(τ -t)g(X τ )] ≥ E t,x δ(L * τ (t, x) -t)g(X L * τ (t,x) ) .
This shows that (t, x) ∈ S τ ∪ I τ . Thus, we have ker( τ ) ⊆ S τ ∪ I τ . It follows that
ker( τ ) = (ker( τ ) ∩ S τ ) ∪ (ker( τ ) ∩ I τ ) ⊆ S τ ∪ (ker( τ ) ∩ I τ ) = ker(Θ τ ),
where the last equality follows from (3.7).
A.4 Derivation of Theorem 3.1
Lemma A.2. Suppose Assumption 3.1 holds and τ ∈ T (X) satisfies (3.10). Then τ 0 defined in (3.8) satisfies L * τ 0 (t, x) = lim n→∞ L * Θ n τ (t, x), ∀(t, x) ∈ X.
Proof. We will use the notation in (A.1). Recall that ker(τ n ) ⊆ ker(τ n+1 ) for all n ∈ N and ker(τ 0 ) = n∈N ker(τ n ) from Proposition 3.2. By (3.5), this implies that {L * τ n (t, x)} n∈N is a nonincreasing sequence of stopping times, and
L * τ 0 (t, x) ≤ t 0 := lim n→∞ L * τ n (t, x).
It remains to show that L * τ 0 (t, x) ≥ t 0 . We deal with the following two cases. (i) On {ω ∈ Ω : L * τ 0 (t, x)(ω) = t}: By (3.5), there must exist a sequence {t m } m∈N in R + , depending on ω ∈ Ω, such that t m ↓ t and τ 0 (t m , X t,x tm (ω)) = 0 for all m ∈ N. For each m ∈ N, by the definition of τ 0 in (3.8), there exists n * ∈ N large enough such that τ n * (t m , X t,x tm (ω)) = 0, which implies L * τ n * (t, x)(ω) ≤ t m . Since {L * τ n (t, x)} n∈N is nonincreasing, we have t 0 (ω) ≤ L * τ n * (t, x)(ω) ≤ t m . With m → ∞, we get t 0 (ω) ≤ t = L * τ 0 (t, x)(ω).
(ii) On {ω ∈ Ω : L * τ 0 (t, x)(ω) > t}: Set s 0 := L * τ 0 (t, x). If τ 0 (s 0 (ω), X t,x s 0 (ω)) = 0, then by (3.8) there exists n * ∈ N large enough such that τ n * (s 0 (ω), X t,x s 0 (ω)) = 0. Since {L * τ n (t, x)} n∈N is nonincreasing, t 0 (ω) ≤ L * τ n * (t, x)(ω) ≤ s 0 (ω), as desired. If τ 0 (s 0 (ω), X t,x s 0 (ω)) = 1, then by (3.5) there exist a sequence {t m } m∈N in R + , depending on ω ∈ Ω, such that t m ↓ s 0 (ω) and τ 0 (t m , X t,x tm (ω)) = 0 for all m ∈ N. Then we can argue as in case (i) to show that t 0 (ω) ≤ s 0 (ω), as desired. Now, we are ready to prove Theorem 3.1.
Proof of Theorem 3.1. By Proposition 3.2, τ 0 ∈ T (X) is well-defined. For simplicity, we will use the notation in (A.1). Fix (t, x) ∈ X. If τ 0 (t, x) = 0, by (3.8) we have τ n (t, x) = 0 for n large enough. Since τ n (t, x) = Θτ n-1 (t, x), we deduce from "τ n (t, x) = 0 for n large enough" and (3.7) that (t, x) ∈ S τ n-1 ∪ I τ n-1 for n large enough. That is, g(x) ≥ E t,x δ(L * τ n-1 (t, x)t)g(X L * τ n-1 (t,x) ) for n large enough. With n → ∞, the dominated convergence theorem and Lemma A.2 yield g(x) ≥ E t,x δ(L * τ 0 (t, x)t)g(X L * τ 0 (t,x) ) , which shows that (t, x) ∈ S τ 0 ∪ I τ 0 . We then deduce from (3.7) and τ 0 (t, x) = 0 that Θτ 0 (t, x) = τ 0 (t, x). On the other hand, if τ 0 (t, x) = 1, by (3.8) we have τ n (t, x) = 1 for n large enough. Since τ n (t, x) = Θτ n-1 (t, x), we deduce from "τ n (t, x) = 1 for n large enough" and (3.7) that
(t, x) ∈ C τ n-1 ∪ I τ n-1 for n large enough. That is, g(x) ≤ E t,x δ(L * τ n-1 (t, x) -t)g(X L * τ n-1 (t,x) ) for n large enough.
With n → ∞, the dominated convergence theorem and Lemma A.2 yield g(x) ≤ E t,x δ(L * τ 0 (t, x)t)g(X L * τ 0 (t,x) ) , which shows that (t, x) ∈ C τ 0 ∪ I τ 0 . We then deduce from (3.7) and τ 0 (t, x) = 1 that Θτ 0 (t, x) = τ 0 (t, x). We therefore conclude that τ 0 ∈ E(X).
B Proofs for Section 4
B.1 Derivation of Proposition 4.1
In the classical case of exponential discounting, (2.8) ensures that for all s ≥ 0,
(B.1) δ(s)v(X x s ) = sup τ ∈T E X x s [δ(s + τ )g(X τ )] = sup τ ∈Ts E x [δ(τ )g(X τ ) | F s ] ,
which shows that {δ(s)v(X^x_s)}_{s≥0} is a supermartingale. Under hyperbolic discounting (4.1), since δ(r_1)δ(r_2) < δ(r_1 + r_2) for all r_1, r_2 ≥ 0, {δ(s)v(X^x_s)}_{s≥0} may no longer be a supermartingale, as the first equality in the above equation fails.
To overcome this, we introduce the auxiliary value function: for (s, x) ∈ R²_+,
(B.2) V(s, x) := sup_{τ∈T} E^x[δ(s + τ) g(X_τ)] = sup_{τ∈T} E^x[ X_τ / (1 + β(s + τ)) ].
By definition, V(0, x) = v(x), and {V(s, X^x_s)}_{s≥0} is a supermartingale, as V(s, X^x_s) is equal to the right-hand side of (B.1). Following [START_REF] Pedersen | Solving non-linear optimal stopping problems by the method of time-change[END_REF], we propose the ansatz w(s, y) = (1/√(1+βs)) h(y/√(1+βs)). Equation (B.4) then becomes a one-dimensional free boundary problem:
(B.5) -βzh ′ (z) + h ′′ (z) = βh(z), h(z) > |z|, for |z| < b(s) √ 1+βs ; h(z) = |z|, for |z| ≥ b(s) √ 1+βs .
Since the variable s does not appear in the above ODE, we take b(s) = α √ 1 + βs for some α ≥ 0. The general solution of the first line of (B.5) is
h(z) = e^{βz²/2} [ c_1 + c_2 √(2/β) ∫_0^{√(β/2) z} e^{-u²} du ], (c_1, c_2) ∈ R².
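One can verify symbolically that this family indeed solves the ODE in the first line of (B.5); the following short sympy check is our own addition, not part of the paper:

import sympy as sp
beta = sp.symbols('beta', positive=True)
z, c1, c2 = sp.symbols('z c1 c2', real=True)
F = sp.sqrt(sp.pi) / 2 * sp.erf(sp.sqrt(beta / 2) * z)        # equals the integral of exp(-u**2) from 0 to sqrt(beta/2)*z
h = sp.exp(beta * z**2 / 2) * (c1 + c2 * sp.sqrt(2 / beta) * F)
print(sp.simplify(-beta * z * sp.diff(h, z) + sp.diff(h, z, 2) - beta * h))   # expected output: 0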
The second line of (B.5) gives h(α) = α. We then have
w(s, y) = e βy 2 2(1+βs) √ 1+βs c 1 + c 2 2 β √ β/2y √ 1+βs 0 e -u 2 du , |y| < α √ 1 + βs; |y| 1+βs , |y| ≥ α √ 1 + βs.
To find the parameters c 1 , c 2 and α, we equate the partial derivatives of (s, y) → w(s, y) obtained on both sides of the free boundary. This yields the equations 1+βs -1 -y 1+βs and observing that h(0) > 0, h( 1/β + s) = 0, and h ′ (y) < 1 1+βs -1 1+βs = 0 for all y ∈ (0, 1/β + s), we conclude h(y) > 0 for all y ∈ [0, 1/β + s), or w(s, y) > |y| 1+βs for |y| < 1/β + s. Also note that w is C 1,1 on [0, +∞) × R, and C 1,2 on the domain {(s, y) ∈ [0, ∞) × R : |y| < 1/β + s}. Moreover, by (B.6), w s (s, y) + 1 2 w yy (s, y) < 0 for |y| > 1/β + s). We then conclude from the standard verification theorem (see e.g. [START_REF] Øksendal | Applied stochastic control of jump diffusions[END_REF]Theorem 3.2]) that V (s, y) = w(s, y) is a smooth solution of (B.4). This implies that { V (s, W y s )} s≥0 is a supermartingale, and { V (s ∧ τ * y , W y s∧τ * y )} s≥0 is a true martingale, with τ * y := inf{s ≥ 0 : |W y s | ≥ 1/β + s}. It then follows from standard arguments that τ * y is the smallest optimal stopping time of V (0, y), and thus τx := inf{s ≥ 0 : X x s ≥ 1/β + s} is the smallest optimal stopping time of (4.2). In view of Proposition 2.1, τ x = τx .
α = e β 2 α 2 c 1 + c 2 2 β √ β/2α 0 e -u
B.2 Proof of Lemma 4.2
First, we prove that E is totally disconnected. If ker(τ) = [a, ∞), then E = ∅ and there is nothing to prove. Assume that there exists x* > a such that x* ∉ ker(τ). Define ℓ := sup{b ∈ ker(τ) : b < x*} and u := inf{b ∈ ker(τ) : b > x*}. We claim that ℓ = u = x*. Assume to the contrary that ℓ < u. Then τ(x) = 1 for all x ∈ (ℓ, u). Thus, given y ∈ (ℓ, u), L*τ(y) = T_y := inf{s ≥ 0 : X^y_s ∉ (ℓ, u)} > 0, and
(B.7) J(y; L*τ(y)) = E^y[ X_{T_y} / (1 + βT_y) ] < E^y[X_{T_y}] = ℓ P[X_{T_y} = ℓ] + u P[X_{T_y} = u].
Since X s = |W s | for a one-dimensional Brownian motion W and 0 < ℓ < y < u, by the optional sampling theorem P[X T y = ℓ] = P[W y s hits ℓ before hitting u] = u-y u-ℓ and P[X T y = u] = P[W y s hits u before hitting ℓ] = y-ℓ u-ℓ . This, together with (B.7), gives J(y; L * τ (y)) < y. This implies y ∈ S τ , and thus Θτ (y) = 0 by (3.13). Then Θτ (y) = τ (y), a contradiction to τ ∈ E(R + ). This already implies that E is totally disconnected, and thus ker(τ ) = [a, ∞). The rest of the proof follows from Lemma 4.1.
B.3 Proof of Lemma 4.3
(i) Given a ≥ 0, it is obvious from the definition that η(0, a) ∈ (0, a) and η(a, a) = a. Fix x ∈ (0, a), and let f^x_a denote the density of T^x_a. We obtain
(B.8) E^x[ 1/(1 + βT^x_a) ] = ∫_0^∞ [1/(1 + βt)] f^x_a(t) dt = ∫_0^∞ ∫_0^∞ e^{-s} e^{-βst} f^x_a(t) dt ds = ∫_0^∞ e^{-s} E^x[e^{-βsT^x_a}] ds.
Since T^x_a is the first hitting time of a one-dimensional Bessel process, we compute its Laplace transform using Theorem 3.1 of [START_REF] Kent | Some probabilistic properties of Bessel functions[END_REF] (or Formula 2.0.1 on p. 361 of [START_REF] Borodin | Handbook of Brownian motion-facts and formulae, Probability and its Applications[END_REF]):
(B.9) E^x[ e^{-λ²T^x_a/2} ] = cosh(xλ) sech(aλ), for x ≤ a.
In the estimate of η_aa(a*, a*) displayed further below, the second line follows from tanh(x) ≤ 1 for x ≥ 0 and a* ∈ (0, 1/√β). Since η_a(a*, a*) = 0 and η_aa(a*, a*) < 0, we conclude that on the domain a ∈ [a*, ∞), the map a ↦ η(a*, a) decreases down to 0. Now, for any a > a*, since η(a*, a) < η(a*, a*) = a*, we must have x*(a) < a*.
Example 2.2 (Real Options Model). Suppose d = 1 and X_s := |W_s|, s ≥ 0. Consider the payoff function g(x) := x for x ∈ R_+ and the hyperbolic discount function δ(s) := 1/(1+s) for s ≥ 0. The problem (2.4) reduces to v(x) = sup_{τ∈T} E^x[ X_τ / (1 + τ) ].
Figure 1: The free boundary s ↦ √(1 + (s - t)) with different initial times t.
which implies (t, x) ∈ I τ . Thus, ker(τ ) ⊆ I τ . It follows that (3.10) holds, as ker(τ ) ⊆ S τ ∪ ker(τ ) = S τ ∪ (I τ ∩ ker(τ )) = ker(Θτ ), where the last equality is due to (3.7).
Theorem 3.1. Let Assumption 3.1 hold. If τ ∈ T(X) satisfies (3.10), then τ_0 defined in (3.8) belongs to E(X).
(4.6) [a, ∞) ⊆ I_{τ_a}. For x ∈ [0, a), we need the lemma below, whose proof is delegated to Appendix B.3.
Lemma 4.3. Recall T^x_a in (4.5). On the space {(x, a) ∈ R²_+ : a ≥ x}, define η(x, a) := E^x[ a / (1 + βT^x_a) ].
The figure below illustrates x ↦ η(x, a) under the two scenarios a ≤ a* and a > a*. We now separate the case x ∈ [0, a) into two sub-cases: 1. If a ≤ a*, Lemma 4.3 (iii) shows that J(x; L*τ_a(x)) = η(x, a) > x, and thus (4.7) [0, a) ⊆ C_{τ_a}. 2. If a > a*, then by Lemma 4.3 (iii), (4.8) J(x; L*τ_a(x)) = η(x, a), which is > x, = x, or < x according to whether x < x*(a), x = x*(a), or x > x*(a).
presented in the discussion above the proposition. By the proof of Lemma 4.3 in Appendix B.3, a * satisfies η a (a * , a * ) = 1, which leads to the characterization of a * . Now, for any τ ∈ T (R + ) with ker(τ ) = [a, ∞) and a ∈ [0, a * ], Lemma 4.1 implies τ ∈ E(R + ). For any τ ∈ E(R + ), set a := inf(ker(τ )). By Lemma 4.2, ker(τ ) = [a, ∞) and τ a ∈ E(R + ). The latter implies a ∈ [0, a * ] and thus completes the proof.
Remark 4.1 (Estimating a*). With β = 1, numerical computation gives a* ≈ 0.946475. It follows that for a general β > 0, a* ≈ 0.946475/√β.
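The constant a* can be reproduced directly from the characterization in Proposition 4.2; a short numerical check (β = 1, with a bracketing interval chosen by hand):

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
beta = 1.0
k = lambda a: a * quad(lambda s: np.exp(-s) * np.sqrt(2 * beta * s) * np.tanh(a * np.sqrt(2 * beta * s)), 0, np.inf)[0]
print(brentq(lambda a: k(a) - 1.0, 0.5, 1.0))   # approximately 0.946475, matching Remark 4.1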
Remark 4.2. Fix a > a*, and recall x*(a) ∈ (0, a*) in Lemma 4.3 (iii). By (4.9), Θτ_a(x) = τ'_{x*(a)}(x) := 1_{[0,x*(a)]}(x) for all x ∈ R_+. Equivalently, ker(Θτ_a) = ker(τ'_{x*(a)}) = (x*(a), ∞). Since cl(ker(τ'_{x*(a)})) = [x*(a), ∞) and x*(a) ∈ (0, a*), we conclude from (4.10) that τ'_{x*(a)} ∈ E(R_+).
Remark 4.3. Recall (3.12), which connects the naive and sophisticated behaviors. With the naive strategy τ ∈ T(R_+) given explicitly in (4.4), Proposition 4.2 and Remark 4.1 imply τ ∉ E(R_+). We may find the corresponding equilibrium as in Remark 4.2. Set â := 1/√β. By (4.4) and Remark 4.2, Θτ = Θτ_â = τ'_{x*(â)} ∈ E(R_+). In view of the proof of Lemma 4.3 in Appendix B.3, we can find x*(â) by solving η(x, â) = x, i.e. (1/√β) ∫_0^∞ e^{-s} cosh(x√(2βs)) sech(√(2s)) ds = x, for x. Numerical computation shows x*(â) ≈ 0.92195/√β, and thus x*(â) < a* by Remark 4.1. This verifies τ'_{x*(â)} ∈ E(R_+), thanks to (4.10).
Remark 4.4.
x s ) is equal to the right hand side of (B.1). Proof of Proposition 4.1. Recall that X s = |W s | for a one-dimensional Brownian motion W . Let y ∈ R be the initial value of W , and define V (s, y) := V (s, |y|). The associated variational inequality for V (s, y) is the following: for (s, y) ∈ [0, ∞) × R, (B.3) min w s (s, y) + 1 2 w yy (s, y), w(s, y) -|y| 1 + βs = 0. Taking s → b(s) as the free boundary to be determined, we can rewrite (B.3) as (B.4) w s (s, y) + 1 2 w yy (s, y) = 0, w(s, y) > |y| 1+βs , for |y| < b(s); w(s, y) = |y| 1+βs , for |y| ≥ b(s).
2 2 1+βs - 1 ,
221 du and sgn(x)c 2 = sgn(x)α 2 β.The last equation implies c 2 = 0. This, together with the first equation, shows that α = 1/ √ β and c 1 = αe -1/2 . Thus, we obtain(|y| < 1/β + s, |y| 1+βs , |y| ≥ 1/β + s.Note that w(s, y) > |y| 1+βs for |y| < 1/β + s. Indeed, by defining the function h(y)
ℓ := sup{b ∈ ker(τ) : b < x*} and u := inf{b ∈ ker(τ) : b > x*}.
∫_0^∞ ∫_0^∞ e^{-s} e^{-βst} f^x_a(t) dt ds = ∫_0^∞ e^{-s} E^x[e^{-βsT^x_a}] ds. (B.8)
(B.9) E^x[ e^{-λ²T^x_a/2} ] = cosh(xλ) sech(aλ), for x ≤ a.
Here, I_ν denotes the modified Bessel function of the first kind. Thanks to the above formula with λ = √(2βs), we obtain from (B.8) that
(B.10) η(x, a) = a ∫_0^∞ e^{-s} cosh(x√(2βs)) sech(a√(2βs)) ds.
It is then obvious that x ↦ η(x, a) is strictly increasing. Moreover, η_xx(x, a) = 2aβ ∫_0^∞ e^{-s} s cosh(x√(2βs)) sech(a√(2βs)) ds > 0 for x ∈ [0, a], which shows the strict convexity. (ii) This follows from (B.10) and the dominated convergence theorem. (iii) We will first prove the desired result with x*(a) ∈ (0, a), and then upgrade it to x*(a) ∈ (0, a*). Fix a ≥ 0. In view of the properties in (i), we observe that the two curves y = η(x, a) and y = x intersect at some x*(a) ∈ (0, a) if and only if η_x(a, a) > 1. Define k(a) := η_x(a, a). By (B.10), (B.11) k(a) = a ∫_0^∞ e^{-s} √(2βs) tanh(a√(2βs)) ds. Thus, we see that k(0) = 0 and k(a) is strictly increasing, since for any a > 0, k'(a) = ∫_0^∞ e^{-s} [ √(2βs) tanh(a√(2βs)) + 2aβs / cosh²(a√(2βs)) ] ds > 0. By numerical computation, k(1/√β) = ∫_0^∞ e^{-s} √(2s) tanh(√(2s)) ds ≈ 1.07461 > 1. It follows that there must exist a* ∈ (0, 1/√β) such that k(a*) = η_x(a*, a*) = 1. Monotonicity of k(a) then gives the desired result. Now, for any a > a*, we intend to upgrade the previous result to x*(a) ∈ (0, a*). Fix x ≥ 0. By the definition of η and (ii), on the domain a ∈ [x, ∞), the map a ↦ η(x, a) must either first increase and then decrease to 0, or directly decrease down to 0. From (B.10), we have η_a(x, x) = 1 - x ∫_0^∞ e^{-s} √(2βs) tanh(x√(2βs)) ds = 1 - k(x), with k as in (B.11). Recalling k(a*) = 1, we have η_a(a*, a*) = 0. Notice that η_aa(a*, a*) = -(2/a*) k(a*) - 2βa* + a* ∫_0^∞ 4βs e^{-s} tanh²(a*√(2βs)) ds ≤ -2/a* + 2βa* < 0,
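As a sanity check of (B.10), the closed form can be compared with a crude Monte Carlo simulation of the hitting time T^x_a (Euler discretization of |W|; the parameters are arbitrary and the time discretization introduces a small bias):

import numpy as np
from scipy.integrate import quad
rng = np.random.default_rng(0)
beta, x, a, dt, n_paths = 1.0, 0.5, 1.2, 1e-3, 4000
closed_form = a * quad(lambda s: np.exp(-s) * np.cosh(x * np.sqrt(2 * beta * s)) / np.cosh(a * np.sqrt(2 * beta * s)), 0, np.inf)[0]
w = np.full(n_paths, x); t = np.zeros(n_paths); alive = np.abs(w) < a
while alive.any():
    w[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    t[alive] += dt
    alive &= np.abs(w) < a
print(closed_form, np.mean(a / (1.0 + beta * t)))   # the two numbers should be close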
1, as δ(t + s)/δ(t) ≥ δ(s)/δ(0) = δ(s) for all s, t ≥ 0. That is, Assumption 3.1 is automatically true under DI. Note that Assumption 3.1 is more general than DI, as it obviously includes the classical case of exponential discounting, characterized by (2.8).
The main convergence result for (3.8) is the following: Proposition 3.2. Let Assumption 3.1 hold. If τ ∈ T (X) satisfies (3.10) ker(τ ) ⊆ ker(Θτ ), then
Proposition 4.2. τ_a defined in Lemma 4.2 belongs to E(R_+) if and only if a ∈ [0, a*], where a* > 0 is characterized by a* ∫_0^∞ e^{-s} √(2βs) tanh(a* √(2βs)) ds = 1. Moreover, (4.10) holds.
author's attention. Special gratitude also goes to Traian Pirvu for introducing the authors to know each other. Y.-J. Huang is partially supported by the University of Colorado (11003573). | 61,905 | [
"963972",
"2654"
] | [
"425705",
"2583"
] |
01487002 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2015 | https://hal.parisnanterre.fr/hal-01487002/file/Le%20droit%20fran%C3%A7ais%20des%20s%C3%BBret%C3%A9s%20personnelles%20%20-%20French%20report%20M.%20Bourassin.pdf | 2 Ce développement s'explique par l'essor du crédit aux particuliers et aux entreprises, et par les atouts qu'ont pu lui reconnaître les créanciers, notamment par comparaison aux sûretés réelles classiques : sa simplicité, sa souplesse, son faible coût de constitution, son efficacité en cas de mise en oeuvre, même dans le cadre d'une procédure d'insolvabilité ouverte au bénéfice du débiteur principal. 3 Proches du débiteur principal, personnes physiques ou morales intégrées dans l'entreprise débitrice, garants institutionnels. 4 Les sûretés personnelles sont des engagements juridiques pour autrui (le débiteur principal), consentis bien souvent sans réelle liberté (en raison des relations professionnelles ou personnelles unissant le garant au débiteur) et sans contrepartie. Elles risquent pourtant d'obérer gravement le patrimoine du garant, puisqu'il est "tenu de remplir son engagement sur tous ses biens mobiliers et immobiliers, présents et à venir" (C. civ., art. 2284), quand bien même les recours en remboursement contre le débiteur principal seraient voués à l'échec. 5 Le Code civil renferme, depuis 1804, diverses règles susceptibles de limiter, voire d'exclure le paiement des cautions. Les unes reposent sur le caractère accessoire du cautionnement et sa subsidiarité, les autres sur les règles applicables à tous les nouvelles protections des cautions ont vu le jour, pour l'essentiel en dehors du Code civil 6 . Il importe à cet égard de préciser que l'ordonnance n° 2006-346 du 23 mars 2006 relative aux sûretés n'a nullement réformé en profondeur le droit du cautionnement. Seule la numérotation des articles du Code civil le concernant a été modifiée 7 . Depuis une trentaine d'années, le droit commun du cautionnement cohabite ainsi avec de multiples règles spéciales 8 qui, à tous les stades de la vie de la sûreté, visent à en réduire, voire à en supprimer les risques pour les garants les plus exposés. Ces règles, qu'elles soient légales ou jurisprudentielles, protègent la volonté et le patrimoine de certaines cautions, en spécifiant la source des dettes couvertes (crédit de consommation, bail d'habitation), la qualité du débiteur principal (consommateur, société, entrepreneur individuel, particulier surendetté, entreprise en difficulté), les caractéristiques du cautionnement (sa nature, sa forme, son étendue, ses modalités), la qualité du créancier (personne physique ou morale, professionnel ou non) et/ou celle de la caution (personne physique ou morale, avertie ou profane, engagée pour les besoins de sa profession ou non). Ce mouvement de spécialisation concerne également les deux autres sûretés personnelles que le Code civil reconnaît depuis l'ordonnance du 23 mars 2006, à savoir la garantie autonome et la lettre d'intention 9 . En effet, des règles spéciales interdisent la couverture de certaines dettes par une garantie autonome. Par ailleurs, il existe en droit des sociétés, en droit des entreprises en difficulté ou encore en droit patrimonial de la famille, de nombreux textes relatifs aux garanties, aux sûretés ou aux sûretés personnelles, qui concernent certains garants seulement. Cette protection sélective caractérise la spécialisation du droit des sûretés personnelles 10 . 
En s'attachant aux intérêts que le législateur et les juges cherchent précisément à protéger, il est possible de ranger les multiples règles légales et jurisprudentielles qui se sont développées en marge du droit commun dans trois catégories. Certaines, d'abord, visent à sécuriser et à dynamiser la vie des affaires en général et celle des entreprises en particulier, afin de soutenir la croissance économique (A). D'autres, ensuite, ont pour but de protéger les consommateurs contre des engagements irréfléchis et ruineux, risquant de les conduire au surendettement et à l'exclusion sociale (B). Enfin, les deux finalités précédentes sous-tendent les règles bénéficiant aux garants personnes physiques (C).
A/ Sécuriser et dynamiser la vie des affaires 3. Les règles spéciales intéressant la vie des affaires, c'est-à-dire celles relatives aux sûretés personnelles données par ou pour des entreprises, sont ambivalentes. Les unes expriment une sollicitude à l'égard des sociétés garantes (1) ou des garants d'entreprises (2). Les autres, au contraire, font preuve de rigueur à l'encontre des garants intégrés dans les entreprises débitrices (3). Ces deux dynamiques antagonistes révèlent la complexité du soutien aux entreprises : les sociétés et leurs membres, les entrepreneurs individuels et leurs proches doivent être protégés des dangers des sûretés, dont l'ampleur est souvent accrue en présence de dettes professionnelles. Mais les créanciers doivent
Le droit français des sûretés personnelles
Manuella Bourassin, Agrégée des Facultés de droit, Professeur à l'Université Paris Ouest Nanterre La Défense, Directrice du Centre de droit civil des affaires et du contentieux économique (EA 3457)
Résumé
Depuis les années 1980, le droit français des sûretés personnelles a profondément évolué. En marge du droit commun inscrit dans le Code civil, se sont développées des règles propres aux sûretés personnelles données par ou pour des entreprises, des règles protectrices des garants s'apparentant à des consommateurs et encore des règles spécifiques aux cautions personnes physiques contractant avec des créanciers professionnels. Cette spécialisation du droit des sûretés personnelles a généré une réelle insécurité juridique et économique, car les nouvelles règles légales et jurisprudentielles manquent d'accessibilité, d'intelligibilité et de stabilité et parce qu'elles sont davantage tournées vers la protection des garants que vers celle des créanciers. Une réforme en profondeur de la matière s'impose pour restaurer l'efficacité des sûretés personnelles et conforter par là même le crédit aux entreprises et aux particuliers. Cette reconstruction devrait reposer sur l'édiction de règles communes à l'ensemble des sûretés personnelles (régime primaire) et sur une révision des critères et du contenu des règles spéciales (à côté des règles applicables à tous les garants personnes physiques, des règles particulières devraient dépendre de la cause, professionnelle ou non, de l'engagement du garant).
1. Les sûretés personnelles traversent une crise 1 . Alors que les textes et la jurisprudence devraient favoriser leur efficacité pour qu'elles confortent le crédit aux entreprises et aux particuliers, le droit positif les fragilise. Depuis une trentaine d'années, effectivement, l'insécurité juridique règne en la matière, sous toutes ses formes (inaccessibilité, illisibilité et instabilité des règles en vigueur), non seulement parce que des réformes ponctuelles ont morcelé le droit du cautionnement et l'ont rendu plus complexe, moins souple et cohérent, mais aussi en raison d'une jurisprudence pléthorique et fluctuante. La sécurité économique recherchée par les créanciers est quant à elle compromise par les multiples et diverses protections accordées par le législateur et par les juges à certains garants. La spécialisation des règles légales et jurisprudentielles (I) est largement responsable des imperfections que présente aujourd'hui le droit français des sûretés personnelles (II). Pour le rendre plus sûr et attractif, une reconstruction mérite d'être proposée (III).
I. La spécialisation du droit des sûretés personnelles
2. A la fin du XXe siècle, le droit commun du cautionnement, c'est-à-dire les règles inscrites dans le Code civil depuis 1804, est apparu insuffisant pour répondre, tant au développement du cautionnement 2 et à la diversification des cautions 3 , qu'à la préoccupation de limiter les dangers de cette sûreté 4 . A partir des années 1980, en vue de remédier aux insuffisances du droit commun 5 , de contrats (les exigences probatoires, la sanction des vices du consentement et encore l'obligation de bonne foi). Le droit commun du cautionnement a pu toutefois sembler insuffisant pour sauvegarder les intérêts des garants et ce, pour plusieurs raisons : le consentement des cautions n'y est protégé qu'a posteriori, c'est-à-dire lors de l'appel en paiement, et non dès la conclusion du contrat ; la solvabilité des cautions y est largement ignorée, alors que les risques patrimoniaux de l'engagement sont bien souvent considérables ; le Code civil appréhende les cautions et les créanciers de manière abstraite, dans leur qualité générale de parties, sans tenir compte des caractéristiques des dettes garanties, alors que les dangers du cautionnement n'ont certainement pas la même intensité pour toutes les cautions, ni dans toutes les opérations de garantie. 6 Trois règles nouvelles seulement y ont été ajoutées depuis 1804, toutes protectrices des cautions : le caractère d'ordre public de l'exception de défaut de subrogation (art. 2314, al. 2, issu de la loi n° 84-148 du 1er mars 1984), le bénéfice d'un "reste à vivre" et une information annuelle sur l'évolution du montant de la créance garantie (art. 2301 et 2293, issus de la loi n° 98-657 du 29 juillet 1998). 7 C. civ., nouv. art. 2288 à 2320. 8 Sur ce mouvement de spécialisation, v. not. Ch. Albiges, "L'influence du droit de la consommation sur l'engagement de la caution", Liber amicorum J. Calais-Auloy, Dalloz, Paris, 2004, p. 1 ; L. Aynès, "La réforme du cautionnement par la loi Dutreil", Dr. et patr. 11/2003, p. 28 ; Ph. Delebecque, "Le cautionnement et le Code civil : existe-t-il encore un droit du cautionnement ?", RJ com. 2004, p. 226 ; J. Devèze, "Petites grandeurs et grandes misères de la sollicitude à l'égard du dirigeant caution personne physique", Mélanges Ph. Merle, Dalloz, Paris, 2013, p. 165 ; D. Houtcieff, "Le droit des sûretés hors le Code civil", LPA 22 juin 2005, p. 7 ; D. Legeais, "Le Code de la consommation siège d'un nouveau droit commun du cautionnement", JCP éd. E 2003, 1433 ; Ph. Simler, "Prévention et dispositif de protection de la caution", LPA 10 avr. 2003, p. 20 ; Ph. Simler, "Les principes fondamentaux du cautionnement : entre accessoire et autonomie", BICC 15 oct. 2013. 9 Les articles 2321 et 2322 du Code civil les définissent, sans les réglementer précisément. 10 Seuls les principaux textes et arrêts qui illustrent cette évolution seront ici exposés. Pour de plus amples références, v. M. Bourassin, V. Brémond, M.-N. Jobard-Bachellier, Droit des sûretés, Sirey, Paris, 5e éd., 2015. aussi être rassurés pour que les entreprises reçoivent les crédits nécessaires à leur création, leur développement et leur maintien.
Les protections propres aux sociétés garantes
4. Il est fréquent qu'une société garantisse les dettes d'une autre société appartenant au même groupe ou les dettes d'une personne physique ou morale avec laquelle elle entretient des relations d'affaires. Cette garantie est dangereuse pour la société elle-même, pour ses associés et pour ses créanciers, puisqu'elle déplace le patrimoine social au service d'autrui, le plus souvent sans aucune contrepartie, au risque qu'en cas de défaut de remboursement par le débiteur principal, la pérennité de la société et les emplois qu'elle génère se trouvent menacés. Pour limiter ces risques, le droit des sociétés -droit commun et dispositions propres à certaines formes sociales -encadre les pouvoirs dont doivent disposer les représentants de la société pour l'engager en qualité de garant. D'abord, en vertu du principe de spécialité, la garantie doit être conforme à l'objet social. Ensuite, elle doit respecter l'intérêt social 11 . Enfin, dans les sociétés par actions, les "cautionnements, avals et garanties" doivent être autorisés par le conseil d'administration ou de surveillance 12 , à peine d'inopposabilité à la société 13 . 5. Pour éviter un autre risque, celui que les organes de direction ou les associés ne vampirisent le patrimoine social à leur seul profit, interdiction leur est faite, à peine de nullité du contrat, "de faire cautionner ou avaliser (par la société par actions ou à risque limité) leurs engagements envers les tiers" 14 .
Les protections accordées aux garants d'entreprises
6. Il existe de nombreuses règles spéciales dont le principal critère d'application réside dans la qualité d'entreprise, sous forme sociale ou individuelle, du débiteur principal. Il est vrai que les sûretés personnelles garantissant les dettes d'une entreprise présentent des dangers accrus par rapport à celles couvrant des dettes non professionnelles : leur compréhension est rendue plus ardue par la diversité et le caractère futur, donc indéterminé, des dettes qu'elles peuvent embrasser ; les risques patrimoniaux sont plus importants dès lors que les créanciers requièrent habituellement une couverture, en montant et en durée, plus large ; en cas d'ouverture d'une procédure d'insolvabilité au bénéfice de l'entreprise, les risques de paiement par le garant et d'absence de remboursement par celle-ci sont très importants. De nombreuses règles spéciales s'attachent à limiter, voire à supprimer ces différents risques en protégeant les garants d'entreprises, qu'ils soient ou non intégrés dans celles-ci. Les entreprises, in bonis (a) ou en difficulté (b), en sont les bénéficiaires par ricochet. Toutes les protections ici envisagées sont en effet susceptibles d'encourager la constitution de sûretés et, par là même, l'octroi des crédits indispensables à la création et à la pérennité des entreprises. Celles qu'énonce le droit des entreprises en difficulté sont en outre de nature à inciter les dirigeants-garants à demander le plus tôt possible l'ouverture d'une procédure et à favoriser de la sorte le redressement de leur entreprise.
a. Entreprises in bonis
7. En dehors du droit des procédures collectives professionnelles, les sources et les modes de protection des garants d'entreprises sont extrêmement diversifiés. Il est néanmoins possible de distinguer quatre types de mesures. 8. En premier lieu, détourner les parties des sûretés les plus dangereuses. La loi n° 94-126 du 11 février 1994 relative à l'initiative et à l'entreprise individuelle comporte deux dispositions en ce sens. D'une part, elle cherche à dissuader les entrepreneurs individuels de faire garantir leurs dettes professionnelles par des proches en imposant aux établissements de crédit de les informer par écrit de 11 En présence de sociétés à risque illimité, la jurisprudence annule les cautionnements qui contredisent l'intérêt social, même s'ils entrent dans leur objet statutaire ou ont été approuvés par tous les associés ou couverts par une communauté d'intérêts entre la société caution et le débiteur (v. not. Com. 23 sept. 2014, Bull. civ. IV, n° 142). 12 C. com., art. L. 225-35, al. 4, et L. 225-68, al. 2. 13 Cette sanction est retenue par la Cour de cassation depuis 1980 (Com. 29 janv. 1980, Bull. civ. IV, n o 47). 14 C. com.,. L'interdiction vaut également pour les proches des dirigeants (conjoints, ascendants ou descendants) et, plus généralement, pour "toute personne interposée". Dans la SARL (et non les sociétés par actions), l'interdiction vise en outre les associés. la possibilité de proposer une garantie sur les biens nécessaires à l'exploitation de l'entreprise ou par un garant institutionnel, plutôt qu'une "sûreté personnelle consentie par une personne physique" 15 . D'autre part, la loi de 1994 interdit aux personnes physiques cautionnant les dettes professionnelles d'un entrepreneur individuel de s'engager à la fois solidairement et indéfiniment. Sont effectivement réputées non écrites les stipulations de solidarité et de renonciation au bénéfice de discussion si leur cautionnement n'est pas limité en montant 16 . 9. En deuxième lieu, délivrer aux cautions d'entreprises des informations au cours de la période de garantie. Chaque année, les créanciers doivent leur préciser le montant de la dette principale au 31 décembre de l'année précédente, ainsi que le terme du cautionnement ou la faculté de le résilier s'il est à durée indéterminée. D'abord imposée dans les cautionnements des concours financiers accordés aux entreprises par des établissements de crédit 17 , y compris ceux fournis par les dirigeants-cautions 18 , cette information annuelle a ensuite été accordée aux personnes physiques cautionnant les dettes professionnelles d'un entrepreneur individuel, pour une durée indéterminée 19 . Si l'information n'est pas délivrée, la caution n'est plus tenue des "intérêts échus depuis la précédente information jusqu'à la date de communication de la nouvelle information". Il existe, par ailleurs, une information sur "le premier incident de paiement (du débiteur) non régularisé dans le mois d'exigibilité du paiement", sous peine de déchéance des "pénalités ou intérêts de retard échus entre la date de ce premier incident et celle à laquelle (la caution) en a été informée" 20 . Cette protection profite aux cautions personnes physiques garantissant les dettes professionnelles d'un entrepreneur individuel ou d'une société. 10. En troisième lieu, transférer la sûreté au conjoint divorcé entrepreneur. 
Pour éviter que l'époux, qui s'est porté garant de l'activité professionnelle de son conjoint entrepreneur individuel ou membre de la société dont les dettes sont garanties, ne se trouve, après le divorce, écrasé par le poids de la sûreté, la loi n° 2005-882 du 2 août 2005 relative aux petites et moyennes entreprises a prévu le transfert, sur décision du tribunal de grande instance, des "dettes ou sûretés consenties par les époux, solidairement ou séparément, dans le cadre de la gestion d'une entreprise", au conjoint divorcé entrepreneur 21 . 11. En quatrième et dernier lieu, appliquer le droit du surendettement aux cautions des entreprises. La situation de surendettement étant définie par l'impossibilité manifeste de faire face à l'ensemble des dettes non professionnelles exigibles et à échoir 22 , la Cour de cassation a initialement refusé le bénéfice des procédures de surendettement aux cautions retirant un intérêt patrimonial personnel de la dette professionnelle cautionnée, au premier rang desquelles se trouvent les dirigeants des sociétés garanties 23 . Mais, depuis la loi n° 2008-776 du 4 août 2008 de modernisation de l'économie, toutes les cautions surendettées, même celles garantissant des entreprises et dont l'engagement présente une nature professionnelle 24 , peuvent profiter des mesures protectrices du droit du surendettement, en particulier l'effacement total des dettes lors de la clôture de la procédure de rétablissement personnel pour insuffisance d'actif 25 .
b. Entreprises en difficulté
15 C. mon. fin., art. L. 313-21. Le défaut d'information interdit au créancier de se prévaloir de la sûreté constituée "dans ses relations avec l'entrepreneur individuel", et non de demander paiement au garant. 16 Loi du 11 février 1994, art. 47, II, al. 1er. 17 C. mon. fin., art. L. 313-22, issu de la loi n° 84-148 du 1 er mars 1984 relative à la prévention et au règlement amiable des difficultés des entreprises. 18 Com. 25 mai 1993, Bull. civ. IV, n o 203. 19 Loi du 11 février 1994, art. 47, II, al. 2. 20 Loi du 11 février 1994, art. 47, II, al. 3, modifié par la loi n° 98-657 du 29 juillet 1998. 21 C. civ., art. 1387-1. La portée de la décharge de l'époux-caution est incertaine, car ce texte ne précise pas si elle est opposable au créancier ou si elle affecte uniquement les rapports intra conjugaux. Les juridictions du fond ont jusqu'à présent privilégié cette seconde interprétation, qui préserve le droit de poursuite du créancier et confine la décharge dans les opérations de liquidation du régime matrimonial. 22 C. consom., art L. 330-1. 23 Civ. 1 re , 31 mars 1992, Bull. civ. I, n o 107 ; Civ. 1 re , 7 nov. 2000, Bull. civ. I, n o 285. 24 A condition toutefois de ne pas être éligibles aux procédures collectives professionnelles (C. consom., art. L. 333-3). 25 C. consom., art. L. 332-5 et 332-9. 12. Lorsque l'entreprise garantie fait l'objet d'une procédure d'insolvabilité, des protections de quatre types sont accordées aux garants, qu'ils aient "consenti une sûreté personnelle" ou "affecté ou cédé un bien en garantie" 26 . Il s'agit d'abord de réduire le montant de la garantie, dans la procédure de conciliation, en permettant à tous les garants de se prévaloir des dispositions de l'accord constaté ou homologué 27 et, dans la procédure de sauvegarde, en autorisant les garants personnes physiques à opposer au créancier l'arrêt du cours des intérêts, ainsi que les remises inscrites dans le plan 28 . Il s'agit ensuite de retarder la mise en oeuvre de la sûreté, non seulement en faisant profiter tous les garants (dans la procédure de conciliation) ou les garants personnes physiques (dans la procédure de sauvegarde) des délais de paiement octroyés à l'entreprise 29 , mais également en suspendant les poursuites contre les garants personnes physiques pendant la période d'observation de la procédure de sauvegarde ou de redressement 30 . Il s'agit encore d'interdire toute poursuite contre les garants personnes physiques, pendant l'exécution du plan de sauvegarde, si la créance garantie n'a pas été déclarée 31 . Enfin, il s'agit, dans la procédure de rétablissement professionnel, de déroger au principe d'effacement des dettes du débiteur personne physique à l'égard des dettes de remboursement des cautions, personnes physiques ou morales 32 . 13. Même si tous les garants personnes physiques, voire tous les garants sans distinction, sont visés par ces dispositions, le législateur s'est surtout soucié des dirigeants et de leurs proches, afin d'inciter les premiers à anticiper le traitement des difficultés de l'entreprise, en demandant l'ouverture d'une procédure le plus tôt possible, c'est-à-dire avant la cessation des paiements. C'est pourquoi un sort nettement plus favorable leur est réservé dans les procédures de conciliation et de sauvegarde que dans les procédures de redressement ou de liquidation judiciaire 33 . Mais alors, la protection des garants n'est pas une fin en soi. 
C'est plutôt un moyen de soutenir les entreprises, de conforter les emplois, et de favoriser in fine la croissance économique 34 .
Les protections refusées aux garants intégrés dans l'entreprise débitrice
14. Les garants intégrés dans l'entreprise débitrice sont les personnes physiques ou morales qui disposent d'un pouvoir de direction et/ou de contrôle à son égard. Pour l'essentiel, ce sont ses dirigeants ou associés et les sociétés-mères. Diverses protections leur sont refusées, que l'entreprise garantie soit in bonis (a) ou qu'elle fasse l'objet d'une procédure d'insolvabilité (b).
a. Entreprises in bonis
15. En s'attachant à la cause professionnelle de l'engagement, la jurisprudence fait montre de rigueur à l'encontre des garants intégrés dans l'entreprise débitrice. Ainsi, parce qu'ils ont un "intérêt personnel et patrimonial" dans le crédit garanti, la Cour de cassation décide-t-elle que la sûreté présente un caractère commercial 35 . Cette commercialité rend le cautionnement solidaire et prive le garant, même s'il n'est pas commerçant 36 , des bénéfices de discussion et de division. 26 L'ensemble des sûretés personnelles, ainsi que les sûretés réelles pour autrui, font l'objet de ce traitement uniforme depuis l'ordonnance n° 2008-1345 du 18 décembre 2008 portant réforme du droit des entreprises en difficulté. 27 C. com., art. L. 611-10-2, al. 1er. 28 C. com., art. L. 622-28, al. 1er., et L. 626-11. 29 C. com., art. L. 611-10-2, al. 1er, et L. 626-11. 30 C. com., art. L. 622-28, al. 2 et L. 631-14, qui ajoutent que "le tribunal peut ensuite leur accorder des délais ou un différé de paiement dans la limite de deux ans". 31 C. com., art. L. 622-26, al. 2. 32 C. com., art. L. 645-11, issu de l'ordonnance n° 2014-326 du 12 mars 2014. 33 Sur la constitutionnalité de cette différence de traitement, v. Com., QPC, 8 oct. 2012, n° 12-40060. 34 Le droit des sûretés est ainsi mis au service des finalités qui innervent le droit des entreprises en difficulté. L'article 2287 du Code civil, issu de l'ordonnance du 23 mars 2006, consacre cette primauté des droits de l'insolvabilité en précisant que "les dispositions du présent livre (livre IV : "Des sûretés") ne font pas obstacle à l'application des règles prévues en cas d'ouverture d'une procédure de sauvegarde, de redressement judiciaire ou de liquidation judiciaire ou encore en cas d'ouverture d'une procédure de traitement des situations de surendettement des particuliers". 35 Com. 7 juill. 1969, Bull. n° 269. 36 Tel est le cas des dirigeants de sociétés anonymes ou à responsabilité limitée.
La Haute juridiction rejette par ailleurs la libération des garants intégrés lorsqu'ils cessent leurs fonctions au sein de l'entreprise garantie37 , car la cause de l'obligation de garantir réside dans "la considération du crédit accordé par le créancier au débiteur principal"38 , et non dans les relations que le garant entretient avec ce dernier, et que son existence s'apprécie exclusivement lors de la conclusion du contrat. 16. D'autres protections sont refusées aux garants intégrés parce qu'ils sont censés connaître et comprendre la nature, le montant et la durée des garanties souscrites. Tel est le cas de certaines formalités ayant pour finalité d'attirer l'attention des contractants sur la nature et la portée de leurs obligations. En application de l'article 1326 du Code civil39 , la Chambre commerciale de la Cour de cassation considère ainsi qu'en présence d'une mention équivoque ou incomplète ou en l'absence de toute mention en chiffres et en lettres du montant de l'engagement, la seule qualité de dirigeant du garant constitue un complément de preuve suffisant 40 . Dans le même sens, l'article 1108-2 du Code civil 41 admet le remplacement des mentions manuscrites exigées à peine de nullité par des mentions électroniques, réputées moins éclairantes, si la sûreté personnelle est passée "par une personne pour les besoins de sa profession". 17. En outre, parce qu'elles connaissent leur propre solvabilité et qu'elles comprennent en principe les risques financiers liés à la mise en oeuvre des sûretés, les cautions intégrées se voient refuser par la Cour de cassation deux types de protections. D'une part, l'exigence de proportionnalité du cautionnement aux biens et revenus de la caution, fondée sur la bonne foi contractuelle 42 . Alors qu'il avait été initialement consacré au bénéfice d'un dirigeantcaution 43 , ce moyen de défense a par la suite été paralysé en présence de garants intégrés dans l'entreprise 44 . D'autre part, ceux-ci profitent rarement du devoir de mise en garde sur les risques de l'opération projetée et sur la disproportion de l'engagement à souscrire, que la Cour de cassation impose aux établissements de crédit depuis 2007. En effet, ce devoir, lui aussi fondé sur la loyauté contractuelle, ne peut être invoqué que par les cautions "non averties" 45 . Les connaissances des garants intégrés sur leurs capacités financières et sur les risques d'endettement liés à la sûreté évincent le plus souvent cette qualification et la protection qu'elle conditionne 46 . 18. Enfin, depuis une vingtaine d'années, d'autres moyens de défense fondés sur le droit commun des contrats sont rendus inefficaces en raison des connaissances des garants intégrés sur la situation financière de l'entreprise débitrice. Tel est le cas de la réticence dolosive commise par le créancier au sujet de la situation financière de l'entreprise 47 , ainsi que de la responsabilité des banques pour octroi abusif de crédit 48 .
b. Entreprises en difficulté 19. Diverses dispositions protectrices des entreprises soumises aux procédures du Livre VI du Code de commerce ne profitent pas aux garants, qui se trouvent dès lors traités plus strictement que les débiteurs garantis. Il en va ainsi de la suspension des poursuites individuelles contre l'entreprise 49 . Dans les procédures de redressement et de liquidation judiciaire, les garants ne peuvent pas opposer non plus le défaut de déclaration des créances pour paralyser les poursuites du créancier 50 . Dans la procédure de redressement encore, aucun garant ne peut bénéficier des remises et délais prévus dans le plan 51 , ni de l'arrêt du cours des intérêts 52 . Enfin, la clôture de la procédure de liquidation judiciaire pour insuffisance d'actif n'empêche nullement les créanciers de poursuivre en paiement les garants 53 . 20. Cette rigueur à l'encontre de tous les garants d'entreprises reçoit plusieurs explications. D'abord, même si les règles concernées n'opèrent aucune distinction entre les garants (personnes physiques ou morales, intégrées ou non dans l'entreprise en difficulté), il est permis d'y voir, à l'encontre de ceux qui se trouvent aux commandes de l'entreprise en difficulté, une sanction pour avoir laissé la situation de celle-ci se dégrader jusqu'à la cessation des paiements. Ensuite, comme la rigueur se manifeste pour l'essentiel dans le cadre des procédures de redressement et de liquidation judiciaire, elle révèle que le législateur n'entend pas protéger les intérêts des garants lorsque le sauvetage de l'entreprise est compromis, voire impossible. Le contraste existant avec les procédures de conciliation et de sauvegarde est censé inciter les dirigeants-garants à se tourner vers les procédures préventives. Il est donc manifeste qu'en droit des entreprises en difficulté, les protections sont accordées ou refusées aux garants, non pas au regard des caractéristiques de leur engagement, et donc de leur propre besoin de protection, mais en fonction des chances de préserver l'activité économique de l'entreprise. Enfin, la rigueur à l'encontre des garants a pour corollaire une meilleure protection des créanciers. Mais il existe là aussi une instrumentalisation de cette protection au service de l'entreprise, puisque l'efficacité des sûretés personnelles dans les procédures de redressement et de liquidation judiciaire s'explique par la volonté d'asseoir la confiance des créanciers et de stimuler par là même l'octroi de crédit aux entreprises 54 . 21. Dynamiser et sécuriser la vie des affaires est donc bien une finalité partagée par de nombreuses et diverses règles spéciales. La spécialisation du droit des sûretés personnelles n'est pas uniquement sous-tendue par cette logique économique. Des impératifs sociaux ont conduit à l'adoption d'autres règles spécifiques, tournées vers la protection des consommateurs.
B/ Protéger les consommateurs
22. En matière de sûretés personnelles, aucune règle ne vise les garants ou cautions consommateurs. Cette qualité est toutefois implicite chaque fois que la loi ou les juges réservent un traitement particulier aux personnes physiques s'engageant à des fins non professionnelles 55 . Les critères de leur protection méritent d'être détaillés (1), avant que n'en soient exposées les principales modalités (2).
Critères de protection
23. Les protections qui bénéficient aux garants personnes physiques s'engageant dans un cadre non professionnel reposent sur des critères distincts en législation (a) et en jurisprudence (b).
a. En législation : la nature de la dette principale
24. Les premiers textes ayant protégé les personnes physiques qui souscrivent une sûreté personnelle en dehors de leur activité commerciale, industrielle, artisanale ou libérale n'ont pas détaillé de la sorte la qualité du garant. Ils les ont implicitement visées en spécifiant la nature de la dette principale. En effet, ont été spécialement réglementés les deux types de dettes non professionnelles le plus souvent garanties par des proches du débiteur personne physique, à savoir, d'une part, les crédits mobiliers ou immobiliers de consommation et, d'autre part, les dettes naissant d'un bail d'habitation.
b. En jurisprudence : les caractéristiques de l'engagement de garantie 51 C. com.,al. 6. 53 C. com.,II. 54 Le principe d'irresponsabilité des dispensateurs de crédit lorsque l'entreprise fait l'objet d'une procédure d'insolvabilité (C. com., art. L. 650-1) relève de la même logique. 55 L'article préliminaire du Code de la consommation, issu de la loi n° 2014-344 du 17 mars 2014 relative à la consommation, définit le consommateur comme "toute personne physique qui agit à des fins qui n'entrent pas dans le cadre de son activité commerciale, industrielle, artisanale ou libérale".
25.
Depuis une vingtaine d'années, la Cour de cassation réserve le bénéfice de certaines règles du droit commun des contrats aux cautions qui n'ont pas d'intérêt pécuniaire dans l'opération garantie, qui ne sont pas rompues aux affaires, qui ne disposent d'aucun pouvoir juridique à l'égard du débiteur principal et qui ne maîtrisent nullement la situation financière de ce dernier. Sur le fondement de l'absence d'"interêt personnel et patrimonial" de la caution dans l'obtention du crédit garanti, les protections liées au caractère civil du cautionnement sont ainsi applicables 56 . C'est par ailleurs au profit des cautions "non averties", que la Haute juridiction découvre des obligations de loyauté particulières à la charge des créanciers, comme l'obligation de ne pas faire souscrire un cautionnement manifestement disproportionné aux biens et revenus de ces cautions et le devoir de les mettre en garde sur les risques patrimoniaux de l'opération. 26. La Cour de cassation n'a jamais défini la notion de caution "non avertie". Elle en contrôle en revanche les critères, liés principalement aux compétences et aux expériences professionnelles de la caution, lui permettant ou non de comprendre la nature et la portée des obligations principales et de son propre engagement, ainsi qu'aux relations -personnelles ou professionnelles -qu'elle entretient avec le débiteur garanti, lui permettant ou non de connaître et d'influencer l'endettement de celui-ci. Fréquemment, les proches du débiteur principal ou d'un membre de la société garantie sont qualifiés de cautions "non averties" et profitent dès lors des protections que la jurisprudence subordonne à cette qualité.
Modes de protection
27.
Les règles spéciales qui ont été consacrées depuis la fin du XXe siècle au bénéfice des garants personnes physiques s'engageant pour des raisons et à des fins non professionnelles expriment nettement l'emprise du droit de la consommation sur le droit des sûretés personnelles, en ce qu'elles déploient les techniques consuméristes classiques de protection du consentement et du patrimoine de la partie réputée faible, c'est-à-dire des interdictions (a), des informations (b) et des limitations (c).
a. Interdictions
caution personne physique solvable, ni rémunérer une caution professionnelle, risquaient de ne pouvoir se loger. Un autre type d'interdiction concerne la mise en oeuvre des cautionnements manifestement disproportionnés ab initio aux biens et revenus de la caution personne physique garantissant un crédit mobilier ou immobilier de consommation 63 . Effectivement, sous réserve d'un retour à meilleure fortune de la caution, l'établissement de crédit "ne peut se prévaloir" de la sûreté. Cette déchéance totale constitue une mesure de prévention du surendettement des cautions engagées pour des raisons et à des fins non professionnelles.
b. Informations 29. Sous l'influence du droit de la consommation, qui organise précisément l'information des consommateurs au stade de la formation des contrats, plusieurs dispositions visant à éclairer le consentement des cautions sur la nature et la portée de leur engagement, dès la souscription de celuici, ont institué un formalisme informatif conditionnant la validité même des cautionnements visés, à savoir ceux garantissant des crédits de consommation ou des dettes provenant d'un bail d'habitation. 30. Certaines informations doivent être délivrées avant même la signature du contrat de cautionnement, pour que la décision de s'engager soit la plus libre et éclairée possible. Ainsi, celui qui envisage de cautionner un crédit à la consommation ou un crédit immobilier doit-il se voir remettre, comme l'emprunteur-consommateur lui-même, un exemplaire de l'offre de crédit 64 . Une autre mesure préventive est prévue dans le cautionnement par une personne physique d'un crédit immobilier. Il s'agit d'un délai de réflexion de dix jours suivant la réception de l'offre de crédit 65 . 31. Pour éclairer la caution sur les principales caractéristiques du contrat garanti et de son propre engagement, le formalisme informatif ad validitatem revêt deux autres modalités lors de la conclusion du cautionnement : en matière de bail d'habitation, la remise d'un exemplaire du contrat de location 66 ; en ce domaine et également lorsque la caution personne physique garantit un crédit accordé à un consommateur, des mentions manuscrites portant principalement sur le montant, la durée et, le cas échéant, le caractère solidaire de l'engagement 67 . Ces mentions n'ont pas à être respectées si le cautionnement est notarié ou contresigné par un avocat 68 , compte tenu des obligations d'information et de conseil pesant sur ces professionnels du droit. Elles conditionnent en revanche la validité des cautionnements conclus par actes sous seing privé, dans lesquels elles ne sauraient être apposées sous forme électronique par les cautions ne s'engageant pas pour les besoins de leur profession 69 . 32. Les personnes physiques garantissant un crédit à la consommation ou immobilier doivent par ailleurs être informées par l'établissement de crédit de la défaillance de l'emprunteur-consommateur 70 . Le non-respect de cette obligation est sanctionné par une déchéance partielle des droits du créancier 71 . Si le débiteur fait l'objet d'une procédure de surendettement, la caution doit en être informée par la commission de surendettement 72 . Cela peut lui permettre d'invoquer les protections spécifiques que renferme le droit du surendettement au profit de l'ensemble des cautions 73 71 Déchéance des "pénalités ou intérêts de retard échus entre la date de ce premier incident et celle à laquelle elle en a été informée" 72 C. consom., art. L. 331-3, qui ne prévoit aucune sanction en cas de défaut d'information. 73 Comme l'extinction du cautionnement par voie accessoire en cas de défaut de déclaration de la créance garantie dans la procédure de rétablissement personnel (C. consom., art. L. 332-7). 74 V. infra n° 51.
c. Limitations
33.
En vue de réduire les risques patrimoniaux inhérents au contrat de cautionnement, des limites à l'étendue de l'obligation de garantir, ainsi qu'au droit de poursuite du créancier, se sont développées au bénéfice des cautions personnes physiques s'engageant pour des raisons et à des fins personnelles. 34. La première limitation concerne l'étendue de leur engagement et joue a priori. Elle consiste à imposer, à peine de nullité du cautionnement, une mention précisant le montant et la durée de la garantie. Les personnes physiques qui s'engagent sous seing privé à cautionner des crédits de consommation doivent ainsi écrire la mention imposée par l'article L. 313-7 du Code de la consommation 75 . 35. D'autres limitations jouent a posteriori. Elles procèdent des sanctions prononcées à l'encontre du créancier sur le fondement de textes spéciaux 76 ou du droit commun de la responsabilité civile. Ainsi, lorsqu'un créancier professionnel se montre déloyal vis-à-vis d'une caution profane, en lui faisant souscrire un engagement manifestement disproportionné et/ou en ne la mettant pas en garde sur les risques de l'opération, cette caution "non avertie" peut-elle obtenir des dommages et intérêts, qui ont vocation à se compenser avec sa propre dette et à diminuer celle-ci à due concurrence. Sans être totalement remise en cause, l'obligation de garantir se trouve alors ramenée à un montant raisonnable. 36. Un dernier type de limitation porte sur la durée pendant laquelle des poursuites peuvent être exercées par le créancier à l'encontre des cautions garantissant un emprunteur-consommateur. Depuis 1989, les textes relatifs au crédit à la consommation étant applicables à son cautionnement 77 , les actions du prêteur doivent être exercées dans les deux ans du premier incident de paiement non régularisé 78 , tant à l'encontre de l'emprunteur, que de sa caution, à peine de forclusion. 37. Bien que la qualité de garant-consommateur ne soit pas expressément consacrée en droit français, il existe donc, depuis la fin des années 1980, de nombreuses règles légales et jurisprudentielles qui, sur le fondement de la nature de la dette principale ou des caractéristiques de l'engagement de garantie, et sous l'influence du droit de la consommation, protègent les garants personnes physiques s'engageant pour des raisons et à des fins n'entrant pas dans le cadre de leur activité professionnelle. Alors qu'initialement ces règles étaient clairement distinctes de celles relatives aux sûretés personnelles constituées pour des entreprises, des rapprochements ont par la suite été opérés entre le monde des affaires et celui des consommateurs. Tel est l'objet des règles spéciales protégeant les garants personnes physiques.
C/ Protéger les personnes physiques 38. Les règles spéciales encadrant la vie des affaires et celles protégeant les consommateurs sont habituellement distinctes d'un point de vue formel et opposées d'un point de vue substantiel : inscrites dans des textes ou des codes séparés, les premières sont inspirées par des objectifs micro ou macro économiques et promeuvent bien souvent la liberté, la rapidité, la sécurité ou encore la confiance mutuelle, tandis que les secondes, sous-tendues par des impératifs sociaux, veillent à densifier la volonté de la partie faible, à rééquilibrer des relations réputées inégales et à pourchasser le surendettement. En matière de sûretés personnelles, ce classique clivage a d'abord été respecté. Nous avons vu que, jusqu'au milieu des années 1990, des règles spéciales différentes ont été adoptées, soit pour dynamiser et sécuriser l'activité des entreprises, soit pour protéger les garants n'agissant pas pour les besoins de leur profession. La frontière entre le monde des affaires et celui des consommateurs a ensuite été largement dépassée. Les règles édictées depuis une vingtaine d'années ont en effet privilégié deux nouveaux critères d'application, à savoir deux qualités cumulatives, celles de caution personne physique et de créancier professionnel (1), ou bien la seule qualité de garant personne physique (2). Ces deux critères ont pour point commun d'englober les garants intégrés dans l'entreprise débitrice et les garants agissant pour des raisons et à des fins non professionnelles. 75
Les protections des cautions personnes physiques engagées envers un créancier professionnel
39. Sur le fondement de la double prise en compte de la qualité de la caution -personne physique -et de celle du créancier -professionnel -, un corps de règles spéciales a été créé, au sein du Code de la consommation, par la loi du 29 juillet 1998 relative à la lutte contre les exclusions et par celle du 1er août 2003 pour l'initiative économique 79 . Ces règles présentent une réelle singularité : tout en étant profondément liées au droit de la consommation (a), elles opèrent une alliance avec le monde des affaires (b).
a. Parenté avec le droit de la consommation 40. Les protections instaurées au bénéfice des cautions personnes physiques engagées envers des créanciers professionnels entretiennent des liens très étroits avec le droit de la consommation. Outre l'inscription dans le Code du même nom, la parenté repose sur trois éléments. 41. D'abord, les modes de protection. Les lois de 1998 et de 2003 ont étendu le champ de la plupart des protections empruntées au droit de la consommation, qui étaient précédemment accordées aux cautions personnes physiques engagées envers un établissement financier pour garantir un crédit mobilier ou immobilier de consommation. Désormais, ce sont plus généralement les cautions personnes physiques garantissant des créanciers professionnels qui profitent du formalisme informatif ad validitatem (les mentions manuscrites portant sur les principales caractéristiques de leur engagement 80 ) et de la limitation qui en résulte du montant et de la durée de l'obligation de garantir. Ont pareillement été étendues l'information sur la défaillance du débiteur 81 et la décharge totale en cas de disproportion manifeste de l'engagement 82 . 42. La parenté avec le droit de la consommation se reconnaît ensuite aux critères de protection retenus, qui évoquent la prise en compte de la qualité des deux parties -un consommateur et un professionnelet le déséquilibre réputé exister entre elles, sur lesquels ce droit s'est historiquement construit. En effet, l'application des articles L. 341-1 à L. 341-6 du Code de la consommation ne dépend plus de la nature de la dette principale 83 , mais seulement de la qualité des parties : une caution personne physique, qui fait figure de partie faible, et un créancier professionnel, censé être en position de force. 43. C'est enfin la définition de ce créancier professionnel qui rapproche nettement les règles spéciales du cautionnement du droit de la consommation. Effectivement, les articles L. 341-1 à L. 341-6 concernent, non pas les seuls prestataires de services bancaires 84 , mais plus généralement tout créancier "dont la créance est née dans l'exercice de sa profession ou se trouve en rapport direct avec l'une de ses activités professionnelles" 85 . Or, depuis une vingtaine d'années, le critère du "rapport direct" avec l'activité professionnelle est précisément celui qui préside à l'interprétation des textes du droit de la consommation relatifs, notamment, à la lutte contre les clauses abusives ou à la vente par démarchage. 44. Compte tenu de ces divers liens avec le droit de la consommation, il est certain que les règles spéciales édictées en 1998 et 2003 ont vocation à protéger la volonté et le patrimoine de toutes les cautions qui s'apparentent à des consommateurs, c'est-à-dire les personnes physiques qui agissent à des 79 C. consom., art. L. 341-1, issu de la loi n° 98-657, et L. 341-2 à L. 341-6, issus de la loi n° 2003-721. 80 C. consom., art. L. 341-2 et L. 341-3, dont la rédaction est identique à celle des articles L. 313-7 et L. 313-8. 81 C. consom., art. L. 341-1, qui étend l'information imposée par l'article L. 313-9. 82 C. consom., art. L. 341-4, qui reprend les conditions et la sanction de l'article L. 313-10. 83 Les dettes garanties par les cautionnements soumis aux articles L. 341-2 et L. 341-3 du Code de la consommation peuvent naître, non seulement d'un crédit accordé sous la forme d'un prêt ou d'une autorisation de découvert en compte courant ou même de délais de paiement (Com. 10 janv. 2012, Bull. civ. 
IV, n° 2), mais également d'un contrat de bail commercial (Com. 13 mars 2012, inédit, n° 10-27814) ou encore d'un contrat de fournitures (Paris, 11 avr. 2012, JurisData n° 2012-014098). 84 Au contraire, les articles L. 313-7 à L. 313-10 régissant les cautionnements des crédits de consommation ne sont applicables qu'en présence d'un établissement de crédit, une société de financement, un établissement de monnaie électronique, un établissement de paiement ou encore un organisme mentionné au 5° de l'article L. 511-6 du Code monétaire et financier. 85 Par exemple, un garagiste ou un vendeur de matériaux de construction, qui accorderait des délais de paiement à ses clients moyennant la conclusion d'un cautionnement par une personne physique. V. not. Civ. 1 re , 25 juin 2009, Bull. civ. I, n° 138 ; Civ. 1 re , 9 juill. 2009, Bull. civ. I, n° 173 ; Com. 10 janv. 2012, Bull. civ. IV, n° 2 ; Civ. 1re, 10 sept. 2014, inédit, n o 13-19426.
fins n'entrant pas dans le cadre de leur activité professionnelle, dès lors qu'elles contractent avec un créancier professionnel. Sont beaucoup moins évidentes au premier abord, mais néanmoins réelles, les relations existant entre ces mêmes règles et le monde des affaires.
b. Alliance avec le monde des affaires 45. Les règles protectrices des cautions personnes physiques engagées envers un créancier professionnel associent le monde des affaires et celui des consommateurs, non seulement parce que les principaux acteurs de l'un et de l'autre n'y sont plus différenciés, mais aussi parce que les objectifs économiques qui gouvernent habituellement la vie des affaires et les impératifs sociaux qui président à la protection des consommateurs y sont étroitement mêlés. 46. S'agissant du rapprochement entre les acteurs, il s'est opéré de deux manières symétriques. D'une part, la loi du 1er août 2003 a étendu aux cautionnements conclus entre une caution personne physique et un créancier professionnel deux règles qui avaient vu le jour dans les cautionnements de la vie des affaires, à savoir la nullité des stipulations de solidarité et de renonciation au bénéfice de discussion dès lors que le cautionnement n'est pas limité à un montant global86 , ainsi que l'obligation d'information annuelle sur l'encours de la dette principale et le terme du cautionnement [START_REF] Consom | art. L. 341-6, qui se trouve dans le prolongement des articles 48 de la loi du 1er mars 1984[END_REF] . D'autre part, le décloisonnement est le fruit d'une interprétation large de la notion de caution personne physique. Depuis 2010, la Cour de cassation accorde aux dirigeants-cautions le bénéfice de l'article L. 341-4 du Code de la consommation, c'est-à-dire le droit d'être intégralement déchargés si le cautionnement était manifestement disproportionné ab initio à leurs biens et revenus, en décidant que "le caractère averti de la caution est indifférent pour l'application de ce texte" 88 . A partir de 2012, les articles L. 341-2 et L. 341-3 du Code de la consommation relatifs au formalisme informatif ont également été déclarés applicables à "toute personne physique, qu'elle soit ou non avertie" 89 . 47. Les objectifs poursuivis par les auteurs des lois du 29 juillet 1998 et du 1er août 2003 ont été à la fois sociaux et économiques, comme en attestent l'intitulé de la première, "loi relative à la lutte contre les exclusions", et celui de la seconde, "loi pour l'initiative économique". Il s'est essentiellement agi de prévenir le surendettement de toutes les cautions personnes physiques 90 et d'étendre les protections jusque là réservées aux cautions n'agissant pas pour les besoins de leur profession à celles exerçant un pouvoir de direction ou de contrôle au sein de l'entreprise garantie et ce, en vue d'encourager l'esprit d'entreprendre et la souscription de garanties, nécessaires au financement des entreprises à tous les stades de leur existence.
Les protections fondées sur la seule qualité de garant personne physique
48. L'endettement génère des risques spécifiques pour les personnes physiques : risque d'exclusion sociale et d'atteinte à la dignité, s'il se transforme en surendettement ; risque de propagation aux membres de la famille tenus de répondre des dettes du débiteur. En matière de sûretés personnelles, ces dangers pèsent sur les garants personnes physiques avec une acuité particulière étant donné que l'endettement a lieu pour autrui. Il n'est dès lors pas surprenant que plusieurs protections aient été accordées à toutes les cautions, voire à tous les garants, personnes physiques, quelles que soient la nature de la dette principale et la qualité du créancier, soit pour préserver leur famille (a), soit pour lutter contre le surendettement (b). La protection des garants personnes physiques, sur le fondement de cette seule qualité, est alors une fin en soi, et non un moyen au service d'autres intérêts [START_REF]et 13 les règles qui protègent tous les garants personnes physiques dans les procédures collectives professionnelles, en vue de favoriser le maintien de l'activité des entreprises garanties. 107 Com. 27 mars[END_REF] .
a. Protections de la famille du garant 49. Le droit des régimes matrimoniaux et le droit des successions protègent la famille du garant en imposant une limitation de l'assiette du droit de poursuite du créancier. Si une sûreté personnelle est souscrite par un époux commun en biens 92 , seul, le créancier ne peut en principe saisir que les biens propres et les revenus de cet époux. Les biens communs ne font partie du gage du créancier que si la garantie a été contractée "avec le consentement exprès de l'autre conjoint, qui, dans ce cas, n'engage pas ses biens propres" 93 . En cas de décès du garant, ses engagements sont transmis à ses héritiers 94 , qui, s'ils acceptent la succession purement et simplement, sont en principe tenus d'exécuter les obligations du défunt sur leur patrimoine personnel, même s'ils ignorent l'existence de la sûreté au moment d'exercer leur option successorale 95 . La réforme du droit des successions du 23 juin 2006 a tempéré la rigueur de ces solutions en prévoyant que l'héritier acceptant purement et simplement la succession "peut demander à être déchargé en tout ou partie de son obligation à une dette successorale qu'il avait des motifs légitimes d'ignorer au moment de l'acceptation" 96 . Dans la mesure où cette décharge judiciaire est subordonnée à la preuve que "l'acquittement de cette dette aurait pour effet d'obérer gravement son patrimoine personnel", elle ne libère sans doute pas l'héritier de la dette elle-même, mais uniquement de l'obligation de l'acquitter sur son propre patrimoine en cas d'insuffisance de l'actif successoral. La protection de l'héritier repose donc bien, elle aussi, sur une réduction de l'assiette du droit de poursuite du créancier.
b. Lutte contre le surendettement du garant 50. Pour prévenir le surendettement, la loi du 29 juillet 1998 s'est attachée à réduire l'engagement des cautions personnes physiques, en inscrivant dans le Code civil deux règles indifférentes au type de dettes couvertes, à la cause de la garantie, professionnelle ou non, et encore à la qualité du créancier. La première impose une information annuelle sur "l'évolution du montant de la créance garantie et de ces accessoires" au bénéfice des personnes physiques ayant souscrit un "cautionnement indéfini" 97 . Dès lors que celui-ci ne comporte pas de limite propre et que sa durée est indéterminée si celle de la dette principale l'est elle-même, l'information peut favoriser sa résiliation 98 et, par conséquent, dans le cautionnement de dettes futures, la non-couverture de celles naissant postérieurement. La seconde limitation prévue par la loi de 1998 porte sur l'assiette des poursuites : "en toute hypothèse, le montant des dettes résultant du cautionnement ne peut avoir pour effet de priver la personne physique qui s'est portée caution d'un minimum de ressources" 99 , correspondant au montant du revenu de solidarité activité. Ce "reste à vivre" profite à toutes les cautions personnes physiques, que leur engagement soit simple ou solidaire, qu'il ait été consenti pour des raisons personnelles ou professionnelles 100 , car il procède de l'impératif de lutte contre l'exclusion des particuliers. 92 Régime légal de communauté réduite aux acquêts ou régime conventionnel de communauté universelle (Civ. 1 re , 3 mai 2000, Bull. civ. I, n o 125). 93 C. civ., art. 1415, issu de la loi n° 85-1372 du 23 décembre 1985 relative à l'égalité des époux dans les régimes matrimoniaux. La Cour de cassation décide que ce texte "est applicable à la garantie à première demande qui, comme le cautionnement, est une sûreté personnelle, (…) et est donc de nature à appauvrir le patrimoine de la communauté" (Civ. 1 re , 20 juin 2006, Bull. civ. I, n o 313). 94 En présence d'un cautionnement, la Cour de cassation limite cette transmission, rappelée par l'article 2294 du Code civil. En effet, lorsque des dettes futures sont garanties, le décès de la caution constitue un terme extinctif implicite de son obligation de couverture, de sorte que seules les dettes nées avant le décès sont transmises aux héritiers (Com. 29 juin 1982, Bull. civ 98 Contrairement aux autres textes régissant l'information annuelle des cautions, l'article 2293 du Code civil n'impose malheureusement pas au créancier de rappeler cette faculté de résiliation lorsque le cautionnement est à durée indéterminée. En revanche, si l'information n'est pas délivrée, il conduit à une réduction plus importante de l'obligation de garantir, puisque la caution se trouve déchargée "de tous les accessoires de la dette, frais et pénalités" et non seulement de ceux échus au cours de la période de non-information. 99 C. civ., art. 2301, qui renvoie à l'article L. 331-2 du Code de la consommation dans lequel se trouvent détaillées les sommes devant être obligatoirement laissées aux particuliers surendettés. 100 Com. 31 janv. 2012, Bull. civ. IV, n° 13. L'objectif de prévention du surendettement des cautions personnes physiques a inspiré d'autres protections dans le droit du surendettement lui-même. 
En effet, depuis 2003, les dettes payées en lieu et place d'un débiteur surendetté par une caution ou un coobligé, personne physique, ne sauraient être effacées partiellement dans le cadre de la procédure se déroulant devant la commission de surendettement, ni totalement effacées en cas de clôture de la procédure de rétablissement personnel pour insuffisance d'actif101 . L'existence même des recours en remboursement contre le débiteur surendetté se trouve ainsi préservée par la loi. La Cour de cassation conforte en outre leur efficacité, en décidant que le débiteur ne peut opposer à la caution les remises et délais dont il a profités102 . 51. En cas d'échec des diverses mesures visant à prévenir le surendettement103 , les garants104 personnes physiques se trouvant dans cette situation ont accès aux mesures de traitement régies par le Code de la consommation 105 , qui conduiront à retarder le paiement du créancier, à le réduire, voire à l'empêcher purement et simplement, autrement dit à limiter, voire à ruiner, l'efficacité de la sûreté.
52. L'inefficacité des sûretés personnelles ne résulte pas uniquement de ces règles ayant pour finalité de lutter contre le surendettement des garants. En réalité, presque toutes les règles spéciales adoptées depuis les années 1980 en matière de sûretés personnelles, qu'elles aient pour objet de sécuriser et dynamiser la vie des affaires, de protéger les consommateurs ou plus largement les garants personnes physiques, portent des atteintes plus ou moins profondes aux droits des créanciers. C'est probablement la principale critique que l'on puisse adresser à la spécialisation du droit des sûretés personnelles. Mais c'est loin d'être la seule.
II. Les imperfections du droit des sûretés personnelles
53. L'évolution que le droit français des sûretés personnelles a connue depuis une trentaine d'années repose, nous l'avons vu, sur des objectifs parfaitement légitimes, si ce n'est impérieux : soutenir les entreprises, protéger les parties faibles, préserver les familles, lutter contre l'exclusion financière et sociale des particuliers. Les bons sentiments ne suffisent cependant pas à faire de bonnes règles. Celles que les réformes ponctuelles des sûretés personnelles et la jurisprudence ont forgées en marge du droit commun en sont l'illustration. Les règles spéciales en cette matière présentent effectivement de graves imperfections, tant formelles (A), que substantielles (B).
A/ Imperfections formelles 54. D'un point de vue formel, le droit des sûretés personnelles est source d'insécurité juridique en raison de l'inaccessibilité des règles spéciales. Alors que celles-ci renferment le droit ordinaire, si ce n'est le nouveau droit commun, vu qu'elles portent sur les sûretés les plus fréquemment constituées dans et en dehors de la vie des affaires, il est malaisé d'y accéder matériellement. Elles sont dispersées dans plusieurs codes et textes non codifiés, ainsi qu'une jurisprudence pléthorique. En outre, leur emplacement ne reflète pas toujours leur champ d'application. Il en va ainsi des articles L. 341-1 à L. 341-6 du Code de la consommation, qui s'appliquent non seulement aux cautions n'agissant pas dans un cadre professionnel, mais également à celles intégrées dans l'entreprise garantie106 . Les règles spéciales sont également inintelligibles et ce, pour différentes raisons. 55. D'abord, elles reposent sur une multitude de critères de différenciation. Les développements précédents ont souligné qu'ils concernent :
le garant : sa qualité de personne physique ou morale ; ses connaissances en matière de crédit et sur la situation du débiteur ; les besoins, professionnels ou non, auxquels répond son engagement ; le créancier : personne physique ou morale ; institutionnel, professionnel ou non professionnel ; le débiteur principal : consommateur ; société ou entrepreneur individuel, in bonis ou en difficulté ; particulier surendetté ; la nature de la dette principale : concours à une entreprise ; crédit de consommation ; bail d'habitation ; la forme de la sûreté : acte sous seing privé ; acte notarié ; acte sous seing privé contresigné par un avocat ; l'étendue de la garantie : définie ou indéfinie ; déterminée ou non en montant et en durée ; les modalités de la garantie : simple ou solidaire. 56. L'inintelligibilité procède ensuite de l'obscurité de certains de ces critères d'application. Il en va ainsi des qualités de cautions "averties" ou "non averties". La Cour de cassation ne les ayant jamais définies et n'ayant admis aucune présomption à égard, la qualification est incertaine, alors qu'en dépendent plusieurs moyens de défense fondés sur la bonne foi contractuelle, dont la responsabilité en cas de disproportion du cautionnement ou de défaut de mise en garde. Ainsi, les dirigeants ou associés de la société débitrice ne sont-ils pas nécessairement considérés comme des "cautions averties". Ils le sont seulement si le créancier prouve leur implication effective dans la gestion de la société cautionnée et leur connaissance de la situation financière de celle-ci 107 , ou au moins de son domaine d'activité, grâce à des expériences professionnelles passées ou concomitantes 108 . La qualification de "caution avertie" peut être écartée, a contrario, si le dirigeant était, lors de la conclusion du cautionnement, inexpérimenté et/ou de paille 109 . Vis-à-vis des proches du débiteur principal, la qualification de "caution non avertie" n'est guère plus prévisible. Un conjoint, un parent ou un ami du débiteur peut être considéré comme "averti", si la preuve est rapportée par le créancier, soit de la compréhension des engagements 110 , soit de l'intérêt financier qu'en retire la caution, fût-ce seulement par le biais du régime matrimonial de communauté 111 . 57. L'inintelligibilité du droit des sûretés personnelles est par ailleurs imputable à l'absence de coordination entre les réformes successives instaurant des obligations identiques ou voisines. Les obligations d'information des cautions en sont l'exemple caricatural, puisque les critères d'application, les contours de l'information et les sanctions ne sont pas les mêmes dans les quatre textes régissant l'information annuelle 112 , non plus que dans les trois articles imposant l'information sur la défaillance du débiteur 113 . 58. L'insécurité juridique résulte encore du manque d'articulation entre les règles spéciales et le droit commun, notamment entre les sanctions spéciales, comme la déchéance des accessoires en cas de défaut d'information, et la responsabilité civile de droit commun 114 . Pose également difficulté la coexistence de l'exigence légale de proportionnalité du cautionnement aux biens et revenus de la caution et du devoir de mise en garde sur les risques de l'opération et la disproportion de l'engagement, créé par la jurisprudence sur le fondement de l'article 1134, alinéa 3, du Code civil. 59. Enfin, l'inintelligibilité du droit des sûretés personnelles est entravée par les incohérences entre certaines de ses dispositions. 
Par exemple, l'article L. 341-5 du Code de la consommation répute non écrites les stipulations de solidarité ou de renonciation au bénéfice de discussion "si l'engagement de la caution n'est pas limité à un montant global", et l'article L. 341-6 prévoit le rappel, chaque année, de la faculté de révocation "si l'engagement est à durée indéterminée", alors que l'article L. 341-2 impose de limiter le montant comme la durée du cautionnement sous seing privé souscrit par les mêmes parties, c'est-à-dire une caution personne physique et un créancier professionnel 115 . 60. Tous ces défauts formels entravent la connaissance, la compréhension et la prévisibilité du droit en vigueur et compromettent la réalisation des attentes des parties, singulièrement la sécurité recherchée par les créanciers garantis. B/ Imperfections substantielles 61. Sur le fond, le droit des sûretés personnelles présente d'autres d'imperfections qui entravent également l'efficacité de ces garanties. Les premières imperfections substantielles résident dans la méconnaissance de la fonction des sûretés et dans l'altération de leurs principaux caractères (1). Les secondes tiennent à l'inadéquation entre les objectifs poursuivis et les techniques déployées pour les atteindre (2). Ces différentes imperfections menacent directement la sécurité des créanciers. Elles produisent également des effets pervers à l'encontre de ceux-là mêmes qu'elles cherchent à protéger.
Altération de la fonction et des caractères des sûretés
62. Depuis les années 1980, la protection des créanciers ne semble plus être la priorité, ni du législateur, ni des juges, les objectifs poursuivis étant essentiellement tournés vers les garants personnes physiques et, le cas échéant, vers les entreprises garanties. Dès lors que se trouve ainsi occultée la fonction des sûretés personnelles, qui consiste à augmenter les chances de paiement du créancier, il n'est pas étonnant que leur efficacité soit menacée 116 . 63. Techniquement, les règles spéciales entravent la protection des créanciers en remettant en cause les caractères de la sûreté qui leur étaient traditionnellement favorables. Quatre altérations de ce type peuvent être citées, la première relative aux sûretés non accessoires, les trois autres au cautionnement. Le caractère indépendant ou indemnitaire de la sûreté est méconnu par les règles communes aux sûretés pour autrui énoncées par le droit des entreprises en difficulté, précisément par celles rendant opposables par tous les garants les remises ou délais accordés au débiteur (dans la procédure de conciliation) ou par les seuls garants personnes physiques (dans la procédure de sauvegarde) 117 . Concernant le cautionnement, c'est d'abord son caractère consensuel 118 qui se trouve profondément entamé par les textes imposant, à peine de nullité, des mentions manuscrites 119 . La souplesse du cautionnement au stade de sa constitution s'en trouve diminuée. La sécurité que sont censés procurer, tant la sûreté, que le formalisme, est également compromise par le contentieux très abondant que suscitent ces mentions 120 . C'est ensuite le caractère unilatéral du cautionnement, donc sa simplicité pour les créanciers, qui reçoit de sérieux tempéraments par le biais des obligations diverses qu'ils supportent à tous les stades de la vie du contrat : remise de documents, vérification du patrimoine du garant, mise en garde avant la signature du contrat, informations pendant la période de couverture et lors de la défaillance du débiteur 121 . C'est enfin le caractère supplétif du régime du cautionnement qui est fortement battu en brèche. Dans une large mesure, les créanciers n'ont plus la liberté de modeler le contenu du contrat au plus proche de leurs besoins et intérêts, non seulement parce qu'une limitation du montant et de la durée de la garantie leur est souvent imposée, à peine de nullité du contrat 122 , mais aussi parce que des clauses qui pourraient favoriser leur paiement sont interdites. Il en va ainsi des stipulations de solidarité ou de renonciation au bénéfice de discussion, lorsque le montant de l'engagement n'est pas limité 123 . La jurisprudence paralyse aussi la clause, au sein d'un cautionnement de dettes futures, qui mettrait à la charge des héritiers de la caution les dettes nées après son décès 124 .
Inadéquation entre les finalités recherchées et les règles adoptées
64. Bien qu'elles contredisent la fonction même des sûretés et ceux de leurs caractères qui sécurisent les intérêts des créanciers, les protections des garants ne sont pas ipso facto illégitimes. Des intérêts supérieurs à ceux des créanciers méritent d'être défendus. A cet égard, les principaux objectifs qui sous-tendent les protections des garants, qu'ils soient d'ordre économique ou social (soutenir les entreprises et maintenir les emplois ; protéger les contractants en situation de faiblesse ; lutter contre l'exclusion des particuliers ; préserver les familles du risque de propagation de l'endettement), sont suffisamment sérieux et légitimes, voire impérieux, pour autoriser des atteintes aux droits des créanciers. Si les protections des garants sont donc justifiées, dans leur principe même, elles prêtent en revanche le flanc à la critique chaque fois que leurs modalités ne sont pas en adéquation avec leurs finalités. Il en va ainsi lorsque leur périmètre est mal défini 125 ou que les sanctions sont mal calibrées, car les protections des garants sont alors insuffisantes pour atteindre les objectifs poursuivis ou excessives par rapport à ce que requièrent ceux-ci. 65. Deux exemples d'inadéquation entre les finalités recherchées et les sanctions prévues par les règles spéciales peuvent être fournis. Le premier concerne la nullité du cautionnement en cas de non-respect du formalisme informatif. Dès lors que la protection du consentement est au coeur des solennités instituées, il est logique que la nullité en question soit relative et que les cautions puissent y renoncer a posteriori par une confirmation non équivoque 126 . Il est en revanche critiquable d'admettre la nullité de l'acte "sans qu'il soit nécessaire d'établir l'existence d'un grief" 127 ou, a fortiori, lorsque la preuve est rapportée de la parfaite connaissance par la caution de l'étendue de son engagement 128 . La sanction excède alors le but poursuivi, elle donne une prime à la mauvaise foi du garant et encourage inutilement le contentieux. L'interdiction faite au créancier professionnel de se prévaloir d'un cautionnement manifestement disproportionné ab initio aux biens et revenus de la caution, constitue un autre exemple de sanction excessive. En effet, comme cette déchéance "ne s'apprécie pas à la mesure de la disproportion" 129 , l'engagement disproportionné est rendu totalement inefficace 130 , alors que, pour satisfaire l'objectif de prévention du surendettement de la caution, une réduction eût été suffisante 131 . 66. Les diverses imperfections formelles et substantielles que présentent les règles spéciales du droit des sûretés personnelles affectent directement les droits des créanciers et, par ricochet, ceux des autres protagonistes de l'opération de garantie. Il est bien connu, en effet, que la perte de confiance des créanciers dans les sûretés produit deux types d'effets pervers. D'une part, à l'encontre des garants, car les créanciers cherchent à compenser le déficit d'efficacité de la sûreté en imposant des garanties 122 Le montant et la durée du cautionnement peuvent demeurer indéterminés dans trois hypothèses seulement : s'il est conclu par acte notarié ou contresigné par avocat ; s'il est souscrit sous seing privé par une caution personne morale ; s'il est conclu sous seing privé entre une caution personne physique et un créancier non professionnel. 123 Loi du 11 février 1994, art. 47-II, al. 1er ;C. consom., art. L. 341-5. 124 Com. 13 janv. 1987, Bull. civ. IV, n o 9. 125 V. infra n° 78 à 84. 126 Com. 5 févr. 
2013, Bull. civ. IV, n° 20, au motif que le formalisme (en l'espèce, la mention manuscrite de l'article L. 341-2 du Code de la consommation) a "pour finalité la protection des intérêts de la caution". 127 Civ. 3 e , 8 mars 2006, Bull. civ. III, n o 59 ; Civ. 3 e , 14 sept. 2010, inédit, n° 09-14001. 128 Civ. 1 re , 16 mai 2012, inédit, n° 11-17411 ; Civ. 1re, 9 juill. 2015, n° 14-24287, à paraître au Bulletin. 129 Com. 22 juin 2010, Bull. civ. IV, n° 112, relatif à l'article L. 341-4 du Code de la consommation. 130 La Cour de cassation a récemment décidé que la décharge intégrale de la caution ayant souscrit un engagement manifestement disproportionné joue erga omnes, c'est-à-dire "à l'égard tant du créancier que des cofidéjusseurs", de sorte que cette caution n'a pas à rembourser le cofidéjusseur ayant désintéressé le créancier (Ch. mixte 27 févr. 2015, n° 13-13709, à paraître au Bulletin). 131 Sur le fondement du droit commun de la responsabilité, la sanction de la disproportion est ainsi plus mesurée.
supplémentaires132 et/ou moins encadrées133 , préservant davantage leur propre sécurité. D'autre part, sur le crédit, et donc sur le système économique dans son ensemble, puisque la perte d'efficacité des sûretés peut se traduire par un ralentissement et une augmentation du coût des crédits aux particuliers et aux entreprises. Il apparaît en définitive que l'inefficacité des sûretés personnelles, que génèrent les règles spéciales en la matière, est de nature à compromettre la protection des consommateurs (débiteurs principaux et garants), celle plus généralement des personnes physiques, ainsi que le soutien aux entreprises, autrement dit la réalisation des principaux objectifs qui sous-tendent ces règles spéciales. Pour restaurer à la fois l'efficacité des sûretés personnelles et celle du droit des sûretés personnelles lui-même, une réforme en profondeur s'impose.
III. La reconstruction du droit des garanties personnelles
67. La reconstruction globale du droit des sûretés n'a pas été réalisée par l'ordonnance du 23 mars 2006. Si les sûretés réelles conventionnelles de droit commun ont été réformées en profondeur, les sûretés personnelles ne l'ont pas été. A leur égard, aucune refonte n'a été opérée : le cautionnement n'a nullement été modifié sur le fond, seule la numérotation des articles du Code civil le concernant a été modifiée ; la garantie autonome et la lettre d'intention ont certes été reconnues, mais seulement dans deux articles du Code civil, qui en donnent la définition, sans détailler leur régime juridique134 . Compte tenu des imperfections formelles et substantielles que présente le droit des sûretés personnelles135 , il est cependant regrettable qu'une réforme n'ait pas eu lieu depuis 2006. 68. La doctrine et les praticiens appellent de concert une reconstruction136 et se rejoignent sur les finalités qui devraient l'inspirer. Il est essentiel, d'abord, de renforcer l'accessibilité, l'intelligibilité et la prévisibilité du droit des sûretés personnelles pour rendre effectifs les droits de tous les protagonistes de l'opération de garantie, et pour favoriser le rayonnement du droit français dans l'ordre international. Ensuite, il est indispensable de restaurer l'efficacité des sûretés personnelles137 en augmentant les chances de paiement des créanciers, qui ont été compromises par les multiples causes de décharge partielle ou totale des garants consacrées par les lois récentes et la jurisprudence. Remettre la sécurité des créanciers au coeur du droit des sûretés personnelles favoriserait, par contrecoup, l'accès au logement des particuliers et surtout l'octroi de crédit à ceux-ci ainsi qu'aux entreprises, l'un et l'autre limités par la crise économique.
La troisième finalité de la réforme du droit des sûretés personnelles a trait à la sauvegarde des intérêts légitimes des garants. L'impératif de justice contractuelle commande en effet de les mettre à l'abri d'un endettement excessif, source d'exclusion économique et sociale. Le principe de bonne foi contractuelle exige quant à lui de sanctionner les déloyautés des créanciers préjudiciables aux garants. La protection des garants qui en résulte est un moyen de stimuler le soutien qu'ils apportent aux particuliers et aux entreprises, autrement dit un instrument au service d'intérêts socio-économiques généraux. 69. Pour satisfaire ces trois objectifs, la réforme du droit des sûretés personnelles devrait modifier le contenu de bon nombre de règles en vigueur et en créer de nouvelles 138 . Dans le cadre limité de cet article, nous ne saurions détailler toutes les améliorations techniques qui mériteraient d'être apportées aux droits et obligations existants, ni les choix politiques qui devraient être opérés, particulièrement au sujet de l'articulation entre le droit des sûretés personnelles et ceux de l'insolvabilité -droit des entreprises en difficulté et droit du surendettement. Nous allons en revanche formuler des propositions intéressant le périmètre des règles gouvernant les sûretés personnelles. Pour renforcer la sécurité juridique en la matière et pour augmenter les chances de paiement des créanciers, tout en sauvegardant les intérêts légitimes des débiteurs et garants, le champ des règles en vigueur devrait être réformé de deux façons complémentaires. Il conviendrait, d'une part, d'étendre le champ des règles applicables à toutes les sûretés personnelles (A) et, d'autre part, de réviser le champ des règles spéciales du cautionnement (B). Autrement dit, un régime primaire, fondé sur les caractéristiques communes des sûretés personnelles, devrait être complété par des corps de règles spéciales, fondées sur leurs caractéristiques distinctives. Cette structure rationnelle et stratifiée, que l'ordonnance du 23 mars 2006 a consacrée en matière de sûretés réelles 139 , nous semble conditionner le succès de la réforme du droit des sûretés personnelles.
A/ Extension du champ des règles communes aux sûretés personnelles 70. La reconstruction du droit des sûretés personnelles devrait reposer sur un régime primaire, c'est-àdire sur des règles communes à l'ensemble de ces garanties. Cette proposition mérite d'être justifiée (1), puis illustrée (2).
1. Justifications de l'édiction d'un régime primaire 71. Le Titre I du Livre IV du Code civil consacré aux sûretés personnelles ne comporte actuellement aucune règle générale applicable à la fois au cautionnement, à la garantie autonome et à la lettre d'intention. Des règles communes à plusieurs sûretés personnelles, voire à l'ensemble des garanties pour autrui, existent cependant déjà. Certaines ont une origine jurisprudentielle. Elles procèdent de l'application par analogie à d'autres sûretés personnelles que le cautionnement de dispositions qui ne visent que celui-ci, comme l'article 1415 du Code civil 140 . D'autres règles communes ont une origine légale. Le droit des sociétés 141 , le droit des entreprises en difficulté 142 , le droit des incapacités 143 ou encore le droit du bail d'habitation 144 encadrent effectivement les sûretés ou les garanties consenties 138 Sur ces mesures, il n'existe pas encore de consensus. Entre les reconstructions d'ensemble déjà proposées en doctrine, les principales divergences concernent :
les mécanismes à réformer : les seules sûretés personnelles ou, plus largement, les garanties personnelles ; la structure de la réforme : uniquement des règles propres à chaque sûreté ou, en outre, des règles communes ; les arbitrages à réaliser entre les intérêts des différents acteurs de l'opération de garantie, qui conduisent à définir différemment le champ des règles, spécialement au regard de la qualité des parties, à sanctionner plus ou moins rigoureusement le non-respect des obligations imposées aux créanciers et encore à réserver un sort différent aux sûretés dans le cadre des procédures d'insolvabilité. 139 Le droit des sûretés réelles, tel qu'il résulte de cette ordonnance, est articulé entre des "dispositions générales" et des "règles particulières", notamment en matière de gage de meubles corporels, d'hypothèques et de privilèges immobiliers. 140 Sur son extension, a pari, à la garantie autonome, v. Civ. 1 re , 20 juin 2006, Bull. civ. I, n o 313. 141 C. com., art. L. 225-35 et L. 225-68 imposant l'autorisation des "cautions, avals et garanties" par le conseil d'administration ou de surveillance de la société anonyme constituante. V. supra n° 4. 142 C. com., art. L. 611-10-2, L. 622-26, L. 622-28, L. 626-11, L. 631-14, L. 631-20 et L. 643-11. Depuis les ordonnances du 18 décembre 2008 et du 12 mars 2014, ces textes visent les coobligés et les personnes "ayant consenti une sûreté personnelle ou ayant affecté ou cédé un bien en garantie". V. supra n° 12 et 19. 143 C. civ., art. 509 relatif aux actes interdits aux tuteurs des mineurs ou majeurs sous tutelle, qui vise "la constitution d'une sûreté pour garantir la dette d'un tiers". 144 Loi du 6 juillet 1989, art. 22-1. V. supra n° 28. pour autrui. Ce droit commun en filigrane n'est guère accessible ; il manque de cohérence, de prévisibilité et n'est pas suffisamment développé. 72. C'est au sein du Titre I du Livre IV du Code civil que devraient être énoncées des règles générales, applicables à l'ensemble des sûretés personnelles, qu'elles soient accessoires ou indépendantes, quelles que soient également les caractéristiques de la dette principale ou la situation spécifique des parties. En s'inspirant du droit des régimes matrimoniaux, il s'agirait d'instaurer un régime primaire des sûretés personnelles venant s'ajouter aux règles propres à chacune d'elles. Il permettrait de satisfaire les trois objectifs qui devraient guider la reconstruction de la matière. D'abord, le renforcement de la sécurité juridique, dans toutes ses composantes. La cohérence et donc l'intelligibilité de la loi seraient améliorées si les règles du régime primaire étaient édictées dans le respect du principe de logique formelle selon lequel à une identité de nature doit correspondre une identité de régime. L'accessibilité matérielle serait favorisée par l'inscription du régime primaire dans le Code civil, en tête du Titre dédié aux sûretés personnelles. La prévisibilité et la stabilité du droit des sûretés personnelles seraient quant à elles renforcées, car le régime primaire orienterait l'interprétation des règles spéciales et la mise en oeuvre des mécanismes innomés. 
Ensuite, le régime primaire des sûretés personnelles respecterait l'objectif de protection des créanciers, d'une part, parce qu'il est parfaitement compatible avec la diversité actuelle des mécanismes de garantie personnelle et la liberté de choisir celle la mieux à même de procurer la sécurité recherchée145 , d'autre part, parce qu'un régime primaire pourrait diminuer le risque que les attentes des créanciers ne soient déjouées par une requalification de la garantie ou une application a pari des règles propres à une autre sûreté. Enfin, l'instauration d'un régime primaire répondrait à l'objectif de sauvegarde des intérêts des garants. Elle pourrait en effet limiter le déficit de protection auquel conduisent les stratégies de contournement du cautionnement.
Illustration du régime primaire des sûretés personnelles
73. Le régime primaire devrait commencer par définir les sûretés personnelles, sur la base des caractéristiques qu'elles partagent toutes. Trois paraissent essentielles. En premier lieu, le caractère accessoire commun à toutes les garanties, et non celui qui se trouve renforcé dans certaines sûretés, particulièrement le cautionnement. Ce caractère accessoire général se reconnaît à l'adjonction de la garantie à un rapport d'obligation principal et à l'extinction de celui-ci par la réalisation de la garantie. La deuxième caractéristique des sûretés personnelles réside dans l'obligation de garantir, plus précisément dans les deux obligations distinctes, mais complémentaires, qui la composent, à savoir l'obligation de couverture naissant dès la conclusion de la sûreté et ayant pour objet d'"assurer l'aléa du non-paiement", et l'obligation de règlement, conditionnée par la défaillance du débiteur principal146 . Les sûretés personnelles se caractérisent, en troisième lieu, par un paiement pour le compte d'autrui, qui ne doit pas peser définitivement sur le garant. 74. Afin d'éclairer la définition de la sûreté personnelle fondée sur ces trois caractéristiques, une liste de mécanismes mériterait de figurer dans le régime primaire. Il serait opportun d'étendre celle de l'actuel article 2287-1 du Code civil, en présentant le cautionnement, la garantie autonome et la lettre d'intention comme des exemples ou en citant expressément d'autres garanties personnelles147 .
Justifications de la révision du champ des règles spéciales
79. L'efficacité que les créanciers attendent du cautionnement et la protection des cautions que recherche le législateur sont compromises chaque fois que le champ des règles spéciales n'est pas en adéquation avec les finalités poursuivies. Cette incohérence est flagrante au sein des articles L. 341-1 à L. 341-6 du Code de la consommation, qui protègent de différentes manières le patrimoine et le consentement des cautions personnes physiques engagées envers un créancier professionnel. En effet, lorsqu'il s'agit de protéger les personnes physiques et leur famille des risques patrimoniaux les plus graves liés à la garantie, deux critères d'application semblent surabondants, à savoir celui de la nature de la sûreté et celui de la qualité du créancier. Dit autrement, les protections inspirées par l'impératif de justice distributive ou celui, à valeur constitutionnelle, de protection de la dignité humaine ne devraient pas être réservées aux cautions et encore moins à celles qui s'engagent envers un créancier professionnel, car s'attacher ainsi à la nature de la garantie et aux activités du créancier prive injustement de protection certains garants. Le périmètre des règles légales ayant pour objet de protéger la volonté des garants, que ce soit au stade de la formation du contrat 155 ou au cours de la vie de la sûreté 156 , paraît lui aussi inadapté. Le double critère retenu -caution personne physique et créancier professionnel -conduit à traiter toutes les cautions personnes physiques comme des parties faibles et tous les créanciers dont les créances sont en rapport direct avec leur activité professionnelle comme des parties fortes, alors qu'il n'existe pas nécessairement une asymétrie d'informations. En effet, les connaissances ou l'ignorance du garant relativement à la nature et à la portée de son engagement, ainsi qu'à la situation financière du débiteur principal, ne dépendent pas essentiellement de sa qualité de personne physique, mais bien plutôt de la cause non professionnelle de son engagement. Ainsi, les règles à finalité informative ne devraient-elles protéger que les cautions personnes physiques ayant un lien affectif avec le débiteur principal et les personnes morales dont l'activité est étrangère à l'engagement de garantie. Les cautions qui s'engagent pour des raisons et à des fins professionnelles, telles les cautions personnes physiques dirigeants ou associés de l'entreprise débitrice 157 , ne devraient pas, au contraire, en bénéficier, car elles disposent en principe de compétences, de connaissances et de pouvoirs juridiques vis-à-vis du débiteur, qui rendent superfétatoires les informations sur leur propre engagement et/ou sur la dette principale. En ignorant la cause de l'engagement de la caution, les règles spéciales du cautionnement protègent donc excessivement certaines cautions et portent atteinte inutilement à l'efficacité du cautionnement.
Illustration des règles propres aux garants personnes physiques
80. Deux types de règles pourraient dépendre de la seule qualité de personne physique du garant. Il s'agit, d'une part, de celles qui ont trait aux spécificités attachées à la personnalité physique. Nous songeons aux règles relatives à la capacité du garant 158 , aux droits de la personnalité 159 et encore à la transmission de la sûreté en conséquence du décès du garant 160 . 155 Par la remise de documents, un délai d'acceptation et encore des mentions manuscrites. 156 Par l'information annuelle sur l'encours de la dette principale et sur la durée de la garantie, ainsi que par l'information sur la défaillance du débiteur. 157 Ces cautions intégrées dans les affaires de l'entreprise débitrice ne devraient pas être assimilées à des consommateurs. Telle est la position de la Cour de justice de l'Union européenne, qui a jugé qu'un avaliste, gérant et associé majoritaire de la société garantie, ne saurait être qualifié de consommateur au sens de l'article 15, § 1 er , du Règlement n° 44/2001 sur les contrats conclus par les consommateurs : "seuls les contrats conclus en dehors et indépendamment de toute activité ou finalité d'ordre professionnel, dans l'unique but de satisfaire aux propres besoins de consommation privée d'un individu, relèvent du régime particulier prévu en matière de protection du consommateur, (…) une telle protection ne se justifie pas en cas de contrat ayant comme but une activité professionnelle" (CJUE 14 mars 2013, aff. C-419/11, pt 34). 158 Règles protectrices des mineurs et majeurs sous tutelle, à l'image de l'article 509, 1° du Code civil. 159 Protections du droit au respect de la vie privée des garants personnes physiques, notamment par l'interdiction de la collecte et du traitement des données personnelles à d'autres fins que l'appréciation de leur situation financière et de leurs facultés de remboursement. 160 De lege ferenda, le principe de transmission à cause de mort de l'obligation de garantir devrait être rappelé au sein du corps de règles propres aux garants personnes physiques. Le nouveau texte devrait préciser si les successeurs recueillent uniquement l'obligation de régler les dettes déjà nées au moment du décès du garant (v. supra n° 49) ou également l'obligation de couvrir les dettes postérieures.
Ce sont, d'autre part, les règles ayant pour finalité de protéger le garant lui-même et sa famille contre un endettement excessif, qui devraient profiter à tous les garants personnes physiques, quelles que soient la nature de la sûreté et de la dette principale, la cause de l'engagement de garantir et la qualité du créancier. Plusieurs règles bénéficiant actuellement aux seules cautions mériteraient ainsi d'être étendues à tous les garants personnes physiques. Tel est le cas de l'article 1415 du Code civil 161 , de la règle dite du "reste à vivre" 162 , de toutes les mesures de protection énoncées par le droit du surendettement 163 et de l'exigence de proportionnalité entre le montant de la garantie et le patrimoine du garant 164 , si la proposition d'inscrire cette règle dans le régime primaire des sûretés personnelles n'était pas retenue 165 . En outre, afin de prévenir le surendettement des particuliers, que peut engendrer un cumul de garanties ruineux, il est souhaitable qu'un fichier d'endettement de type positif voie enfin le jour 166 et qu'il tienne compte des sûretés personnelles souscrites par les personnes physiques 167 . Toutes les règles propres aux garants personnes physiques, dont nous venons de donner des exemples, devraient être indifférentes à la cause de l'engagement de garantir. Le champ d'autres règles spéciales devrait à l'inverse être circonscrit sur le fondement de la cause de cet engagement.
Illustration des règles propres aux cautions ne s'engageant pas à des fins professionnelles
81. De lege lata, un seul texte, au sein du droit commun des contrats et non des règles spéciales du cautionnement, s'attache à la cause de l'engagement du garant. Il s'agit de l'article 1108-2 du Code civil 168 , qui écarte la forme électronique à l'égard des mentions requises à peine de nullité, si l'acte sous seing privé relatif à la sûreté personnelle n'est pas passé pour les besoins de la profession du garant. 82. De lege ferenda, même si la notion de cause devait ne plus figurer dans le droit commun des contrats 169 , les raisons et les buts des engagements devraient continuer d'être pris en compte, tant pour définir la qualité de certains contractants 170 , que pour délimiter le champ d'application de certains mécanismes 171 . C'est la raison pour laquelle il nous semble que toutes les règles visant la protection du consentement lors de la conclusion de la sûreté, ainsi que toutes celles ayant pour objectif d'informer 161 V. supra n° 49. D'autres règles protectrices de la famille du garant couvrent déjà l'ensemble des sûretés personnelles. Il s'agit de la règle de subsidiarité de l'article L. 313-21 du Code monétaire et financier (v. supra n° 8), de la décharge de l'exconjoint d'un entrepreneur (C. civ., art. 1387-1 ; v. supra n° 10) et de la décharge des héritiers prévue par l'article 786 du Code civil (v. supra n° 49). 162 C. civ., art. 2301, al. 2. V. supra n° 50. 163 V. supra n° 50 et 51. 164 Il s'agirait de modifier le champ de la règle figurant dans l'article L. 341-4 du Code de la consommation et de condamner la jurisprudence qui, en dehors de ce texte, refuse de sanctionner les créanciers non professionnels ayant fait souscrire un engagement excessif (Com. 13 nov. 2007, Bull. civ. IV, n o 236). 165 Sur cette proposition, v. supra n° 76. 166 La création d'un registre national des crédits aux particuliers a été censurée par le Conseil constitutionnel, au motif que ce fichier portait une atteinte au droit au respect de la vie privée qui ne pouvait être regardée comme proportionnée au but poursuivi, en l'occurrence la lutte contre le surendettement (Cons. const., 13 mars 2014, n° 2014-690 DC). 167 La publicité des sûretés personnelles souscrites par des personnes physiques présenterait des avantages, aussi bien pour les garants (elle limiterait le risque d'endettement excessif en évitant des cumuls de garanties ruineux), que pour les créanciers (la consultation du fichier d'endettement augmenterait leurs chances de paiement, car les garanties seraient certainement plus adaptées aux capacités patrimoniales du garant, ce qui faciliterait l'exécution de l'obligation de règlement et limiterait les risques d'extinction totale ou partielle de la sûreté pour cause de disproportion). De sérieux inconvénients lui sont cependant opposés : la rigidité et l'augmentation des coûts de constitution de la sûreté personnelle ; l'inefficacité procédant de la sanction du défaut de publicité ; le caractère illusoire des bénéfices attendus de la publicité des sûretés personnelles, insusceptible de refléter l'endettement réel des garants. 168 Issu de la loi n° 2004-575 du 21 juin 2004 pour la confiance dans l'économie numérique. 
169 A l'heure où nous écrivons ces lignes, la suppression de la cause, en tant que condition de validité des contrats, n'est pas encore certaine, puisque la réforme du droit des obligations est attendue pour le mois de février 2016 (en vertu de la loi d'habilitation n° 2015-177 du 16 février 2015 relative à la modernisation et à la simplification du droit et des procédures dans les domaines de la justice et des affaires intérieures). La disparition de la cause est toutefois fort probable au vu du projet d'ordonnance en date du 25 février 2015. 170 V. en ce sens l'article inscrit en tête du Code de la consommation : "Au sens du présent code, est considérée comme un consommateur toute personne physique qui agit à des fins qui n'entrent pas dans le cadre de son activité commerciale, industrielle, artisanale ou libérale". 171 En ce sens, v. C. civ., art. 2422, al. 1er, issu de la loi n° 2014-1545 du 20 décembre 2014 sur la simplification de la vie des entreprises : "L'hypothèque constituée à des fins professionnelles par une personne physique ou morale peut être ultérieurement affectée à la garantie de créances professionnelles autres que celles mentionnées dans l'acte constitutif pourvu que celui-ci le prévoie expressément". la caution sur son engagement et sur la dette principale au cours de la vie de la sûreté, devraient être réservées aux cautions qui ne s'engagent pas à des fins professionnelles. Ainsi, dans l'optique de supprimer le risque de méconnaissance des spécificités des sûretés personnelles indépendantes, en particulier l'inopposabilité des exceptions, la réforme pourrait-elle interdire leur souscription à des fins non professionnelles 172 . En vue de limiter le risque d'ignorance de l'étendue du cautionnement et de l'ampleur des dettes couvertes, les règles en vigueur à finalité informative devraient voir leur champ limité aux cautionnements conclus à des fins non professionnelles. Nous envisageons ici le formalisme informatif lors de la conclusion du contrat, par le biais des mentions manuscrites portant sur le montant, la durée et, le cas échéant, le caractère solidaire du cautionnement 173 . Nous songeons également à l'information annuelle sur l'encours de la dette principale et la durée du cautionnement 174 et à l'information sur la défaillance du débiteur principal 175 . Chacune de ces règles devrait être énoncée par un texte unique se substituant aux multiples dispositions qui se superposent aujourd'hui. La sécurité juridique s'en trouverait renforcée. 82. L'accessibilité du droit du cautionnement serait également améliorée si les nouvelles règles propres aux cautions ne s'engageant pas à des fins professionnelles étaient inscrites dans le Code civil. Bien que ces cautions s'apparentent à des consommateurs, les règles particulières les concernant ne devraient pas figurer dans le Code de la consommation, mais bien dans le Code civil, et ce, pour deux raisons essentielles. D'une part, le champ des règles particulières que nous proposons de fonder sur la cause de l'engagement de garantie ne correspond pas exactement à celui du Code de la consommation. Celui-ci limite en effet la qualité de consommateur aux personnes physiques, alors que des personnes morales pourraient être qualifiées de cautions n'agissant pas à des fins professionnelles (telles des sociétés civiles de moyens, des associations ou encore des communes). 
De plus, le Code de la consommation s'intéresse le plus souvent au binôme consommateur/professionnel, alors que la qualité du créancier nous paraît indifférente lorsqu'il s'agit de protéger ces cautions. D'autre part, le Code civil semble le creuset idéal des règles propres aux cautions s'engageant à des fins non professionnelles 176 , non seulement parce que l'engagement de ces cautions constitue le prolongement du cautionnement "service d'ami", qui fait figure de principe depuis le Code Napoléon, mais surtout parce que le Code civil doit redevenir le siège des règles de droit commun pour que l'accessibilité matérielle et l'intelligibilité du droit du cautionnement soient restaurées. Dans le chapitre du Code civil consacré au cautionnement, il serait donc opportun de regrouper les règles particulières aux cautions ne s'engageant pas à des fins professionnelles dans une nouvelle section. 83. Celle-ci s'achèverait par un article déclarant les règles énoncées en son sein inapplicables, en principe, aux cautions s'engageant à des fins professionnelles. Mais, si les cautions personnes physiques dirigeants ou associés ou les cautions personnes morales appartenant au même groupe que le débiteur principal parvenaient à faire la preuve de circonstances exceptionnelles les ayant empêchées de connaître la situation financière du débiteur et/ou les spécificités de leur engagement 177 , elles pourraient rechercher la responsabilité du créancier ne les ayant pas informées, sur le fondement de la bonne foi contractuelle.
84.
Ces dernières propositions, comme toutes celles présentées plus haut intéressant les règles spéciales du cautionnement ou le régime primaire des sûretés personnelles, montrent que le renforcement de la sécurité juridique, la restauration de l'efficacité de ces sûretés, dans le respect des 172 Cette prohibition remplacerait celles concernant aujourd'hui la garantie autonome en matière de crédit à la consommation ou immobilier et de bail d'habitation (C. consom., art. L. 313-10-1 ; Loi du 6 juillet 1989, art. 22-1-1). V. supra n° 28. 173 C. consom., art. L. 313-7, L. 313-8, L. 341-2 et L. 341-3. V. supra n° 31 et 41 . 174 C. mon. fin., art. L. 313-22 ;Loi du 11 février 1994, art. 47-II, al. 2 ;C. civ., art. 2293 ;C. consom., art. L. 341-6. V. supra n° 9, 46 et 50. 175 C. consom., art. L. 313-9 ;Loi du 11 février 1994, art. 47-II, al. 3 ;C. consom., art. L. 341-1. V. supra n° 9, 32 et 41. 176 En revanche, les règles spéciales principalement fondées sur la nature de la dette principale devraient rester en dehors du Code civil. Par exemple, la remise aux cautions personnes physiques des offres de crédit à la consommation ou immobilier, ainsi que le délai de réflexion précédant la conclusion de ce dernier, devraient demeurer dans le Code de la consommation. 177 La jurisprudence rendue en matière de preuve, de réticence dolosive ou d'octroi abusif de crédit fournit des exemples de circonstances particulières dans lesquelles les dirigeants cautions sont exceptionnellement autorisés à se prévaloir de ces moyens de défense : nouveau dirigeant encore inexpérimenté, caution âgée et malade dont les fonctions directoriales sont purement théoriques, dirigeant de complaisance (v. not. Com. 6 déc. 1994, Bull. civ. IV, n o 364).
intérêts légitimes des garants, nécessitent une réforme en profondeur du droit français des sûretés personnelles.
ou des seules cautions personnes physiques 74 .
c. Limitations
63 C. consom., art. L. 313-10, issu de la loi n° 89-1010 du 31 décembre 1989.
64 C. consom., art. L. 311-11, al. 1er, et L. 312-7.
65 C. consom., art. L. 312-10.
66 Loi du 6 juillet 1989, art. 22-1.
67 Les termes mêmes de la mention ne sont pas imposés par l'article 22-1 de loi du 6 juillet 1989. Ils le sont, au contraire,
par le Code de la consommation (art. L. 313-7 et L. 313-8), qui admet "uniquement" les mentions qu'il édicte.
68 C. civ., art. 1317-1 et Loi du 31 décembre 1971, art. 66-3-3, issus de la loi n° 2011-331 du 28 mars 2011 de modernisation
des professions judiciaires et juridiques.
69 C. civ., art. 1108-1 et 1108-2.
70 C. consom., art. L. 313-9, qui vise "le premier incident de paiement caractérisé susceptible d'inscription au fichier institué
à l'article L. 333-4".
Com. 17 juill. 1978, Bull. civ. IV, n° 200 ; Com. 6 déc. 1988, Bull. civ. IV, n° 334.
Com. 8 nov. 1972, Bull. civ. IV, n° 278 (en matière de cautionnement) ; Com. 19 avr. 2005, Bull. civ. IV, n° 91 et Com. 3 juin 2014, inédit, n° 13-17643 (en matière de garantie autonome).
Selon ce texte, celui qui s'engage unilatéralement à payer une somme d'argent doit en indiquer le montant, en chiffres et en lettres, pour que la preuve de cet engagement soit parfaite.
L'ordonnance du 23 mars 2006 a interdit la couverture par une garantie autonome des crédits mobiliers et immobiliers de consommation57 , ainsi que des loyers d'un bail d'habitation58 . Bien que la prohibition soit formulée en termes généraux, elle vise à protéger spécialement les personnes physiques s'engageant dans un cadre non professionnel contre les dangers inhérents à l'indépendance de la garantie autonome et ceux liés à l'absence de réglementation détaillée de cette sûreté. En matière de bail d'habitation, d'autres interdictions concernent le cautionnement. En effet, d'une part, le bailleur, quelle que soit sa qualité, ne saurait le cumuler avec une assurance couvrant les obligations locatives, ni avec toute autre forme de garantie souscrite par le bailleur (dépôt de garantie mis à part), "sauf en cas de logement loué à un étudiant ou un apprenti"[START_REF]de mobilisation pour le logement et la lutte contre l'exclusion, puis par la loi n° 2009-1437 du 24[END_REF] . La violation de cette règle de non-cumul est sanctionnée par la nullité du cautionnement 60 , l'assurance demeurant au contraire valable. D'autre part, si le bailleur est une personne morale[START_REF]Publique comme privée, à la seule exception d'une "société civile constituée exclusivement entre parents et alliés jusqu'au quatrième degré inclus[END_REF] , le cautionnement ne peut être conclu qu'avec des "organismes dont la liste est fixée par décret en Conseil d'État" 62 , sauf si le locataire est "un étudiant ne bénéficiant pas d'une bourse de l'enseignement supérieur". Il convient de souligner que ces restrictions ont moins été inspirées par la volonté de protéger les cautions, proches des locataires, que par l'impératif de lutte contre l'exclusion des personnes qui, ne pouvant proposer une56 En particulier, le formalisme probatoire de l'article 1326 du Code civil (la qualité de caution non intéressée dans l'opération principale ne saurait alors suffire à compléter la mention défaillante), et les bénéfices de discussion et de division, sauf clause expresse de renonciation ou de solidarité.
C. consom., art. L. 341-5, qui reprend les termes de l'article 47-II, al. 1er, de la loi du 11 février 1994 relative à l'initiative et à l'entreprise individuelle.
C. consom., art. L. 331-7-1, 2°, L. 332-5 et L. 332-9.
Civ. 1 re , 15 juill. 1999, Bull. civ. I, n o 248 ; Civ. 1 re , 28 mars 2000, Bull. civ. I, n o 107.
Non seulement celles qui viennent d'être décrites, mais aussi, le cas échéant, celles qui profitent plus spécialement aux cautions personnes physiques engagées envers un créancier professionnel (v. supra n° 41).
Compte tenu des impératifs sociaux qui gouvernent les procédures de surendettement, tous les garants personnes physiques devraient y avoir accès, bien que l'article L. 330-1 du Code de la consommation envisage le seul cautionnement et que l'hypothèse d'un garant surendetté autre qu'une caution soit certainement rare en pratique (en raison des prohibitions dont fait l'objet la garantie autonome -v. supra n° 28 -et de la rareté des lettres d'intention émises par des personnes physiques).
Il s'agit, pour l'essentiel, de l'interdiction des procédures d'exécution, de l'interdiction du paiement des dettes antérieures, de l'aménagement du montant et de la durée des dettes et de l'effacement total des dettes en cas de rétablissement personnel avec ou sans liquidation judiciaire.
V. supra n° 46.
Comme la pluralité de cautionnements garantissant la même dette ou un cumul de sûretés personnelles et réelles.
Cautionnements fournis par des organismes habilités à cette fin (cautionnements mutuels, bancaires) ; sûretés personnelles dont le régime est plus souple que celui du cautionnement (garantie autonome et lettre d'intention) ; garanties personnelles fondées sur des mécanismes du droit des obligations (telles la solidarité sans intéressement à la dette, la délégation imparfaite et la promesse de porte fort) ; assurances.
Pourtant, dans le premier projet de loi d'habilitation en date du 14 avril 2005 (projet de loi n° 2249 pour la confiance et la modernisation de l'économie), étaient inscrites la "refonte" du cautionnement, la modification des dispositions du droit des obligations relatives à des mécanismes pouvant servir de garanties personnelles et encore l'introduction dans le Code civil de règles sur la garantie autonome et la lettre d'intention. Les parlementaires ont finalement écarté une réforme d'une telle ampleur, car ils ont considéré inopportun, d'un point de vue démocratique, de recourir à la technique de l'ordonnance à l'égard de contrats jouant un rôle important dans la vie quotidienne des particuliers et susceptibles de provoquer leur surendettement (avis n° 2333 déposé à l'Assemblée nationale le 12 mai 2005 au nom de la commission des lois).
V. supra n° 53 à 66.
Au niveau national, plusieurs propositions de réforme ont été développées depuis 2005. V. not. le rapport du groupe de travail relatif à la réforme du droit des sûretés en date du 31 mars 2005 (http://www.justice.gouv.fr/publications-10047/rapports-thematiques-10049/reforme-du-droit-des-suretes-11940.html) ; M. Bourassin, L'efficacité des garanties personnelles, LGDJ, Paris, 2006 ; J.-D. Pellier, Essai d'une théorie des sûretés personnelles à la lumière de la notion d'obligation, LGDJ, Paris, 2012 ; F. Buy, "Recodifier le droit du cautionnement (à propos du Rapport sur la réforme du droit des sûretés)", RLDC juillet-août 2005, n°18, p. 27 ; M. Grimaldi, "Orientations générales de la réforme", Dr. et patr. 2005, n° 140, p. 50 ; D. Legeais, "Une symphonie inachevée", RDBF mai-juin 2005, p. 67 ; Ph. Simler, "Codifier ou recodifier le droit des sûretés personnelles ?", Livre duBicentenaire, Litec, Paris, 2004, p. 382 ; Ph. Simler, "Les sûretés personnelles", Dr. et patr. 2005, n° 140, p. 55. Il existe également des réflexions doctrinales en ce sens au niveau européen, dans le cadre du Projet de cadre commun de référence(Sellier, Munich, 2009). Selon l'un de ses auteurs (U. Drobnig, "Traits fondamentaux d'un régime européen des sûretés personnelles", Mélanges Ph. Simler,Dalloz-Litec, Paris, 2006, p. 315), l'objectif a été de présenter une sorte de dénominateur commun, à l'image des Restatements of the Law élaborés aux États-Unis.
137 Sur cette notion et son étude de lege lata et de lege ferenda, v. notre thèse : L'efficacité des garanties personnelles,LGDJ, Paris, 2006.
Dans la reconstruction suggérée, aucune sûreté personnelle n'est rendue obligatoire ou n'est interdite de manière générale. Les créanciers resteraient libres de choisir la garantie qui leur semble la plus appropriée pour protéger leurs intérêts. Ils pourraient notamment toujours opter en faveur d'une sûreté indépendante, à condition de la faire souscrire par un garant professionnel ou intégré dans l'entreprise débitrice. Ils pourraient bénéficier d'un cautionnement non limité en montant et en durée, soit en s'adressant à des cautions qui s'engagent pour des raisons professionnelles, soit en le faisant souscrire par une caution agissant à des fins non professionnelles, mais en recourant alors à un notaire pour établir l'acte ou à un avocat pour le contresigner.
Sur cette structure duale de l'obligation de garantir, en matière de cautionnement de dettes futures, v. Ch. Mouly, Les causes d'extinction du cautionnement, Litec,Paris, 1979.
La distinction entre les sûretés personnelles et les garanties personnelles, reposant sur le caractère exclusif ou non de la fonction de garantie, est discutable dans l'optique d'une réforme, car elle contredit les principaux objectifs qui devraient animer celle-ci. D'une part, la sécurité juridique et la satisfaction des attentes des créanciers, puisque la qualification et le régime des garanties demeureraient incertains et sources de contentieux si seules les sûretés personnelles étaient visées, alors même qu'il importe peu aux créanciers d'être couverts par un mécanisme ayant une autre fonction que d'améliorer leurs chances de paiement. D'autre part, la sauvegarde des intérêts des garants, car les garanties personnelles peuvent se révéler
75. S'agissant des règles applicables à toutes les sûretés personnelles, ainsi définies et illustrées, elles devraient être dictées par ce qu'elles ont en commun et être indifférentes, à l'inverse, à ce qui est contingent dans chacune d'elles (à savoir, les caractéristiques de la dette principale, la nature accessoire ou indépendante de la garantie, la qualité des protagonistes et encore la cause de l'engagement du garant). Sur le fondement du caractère accessoire général des garanties, deux règles pourraient être consacrées. D'une part, le principe de transmission des accessoires avec la créance principale, énoncé par l'article 1692 du Code civil, pourrait être précisé à l'égard des sûretés personnelles, au sein du régime primaire. D'autre part, pourrait être mise à la charge des créanciers une obligation de restituer l'enrichissement procuré par la mise en oeuvre de la sûreté, c'est-à-dire les sommes excédant le montant des créances que la sûreté a pour fonction d'éteindre. Sur le fondement de l'obligation de couverture naissant dès la conclusion du contrat, pourraient être imposés, ad probationem 148 , l'établissement de celui-ci en deux exemplaires et la remise de l'un d'eux au garant 149 . En conséquence du paiement pour le compte d'autrui, des recours devraient être reconnus à tous les garants. Il s'agirait d'étendre ceux bénéficiant aujourd'hui aux cautions, c'est-à-dire un recours avant paiement et des recours en remboursement, personnel et subrogatoire. 76. D'autres dispositions du régime primaire devraient reposer sur le principe de bonne foi contractuelle. Sur ce fondement, deux règles du droit du cautionnement pourraient être étendues. D'abord, le bénéfice dit de subrogation de l'actuel article 2314 du Code civil 150 , puisque l'égoïsme du créancier qui fait perdre au garant des chances d'être remboursé par le débiteur constitue une déloyauté 151 , qui devrait être sanctionnée dans toutes les sûretés personnelles ouvrant au garant un recours subrogatoire. Ensuite, comme le principe de bonne foi commande à tous les contractants de faire preuve de tempérance 152 , l'exigence de proportionnalité entre le montant du cautionnement et les facultés financières de la caution personne physique contractant avec un créancier professionnel, inscrite dans l'article L. 341-4 du Code de la consommation, pourrait être généralisée par rapport aux garanties et aux parties 153 . Elle couvrirait alors l'ensemble des sûretés personnelles et s'appliquerait quelles que soient la qualité et les activités du créancier 154 et du garant. 77. Dans le régime primaire proposé, toutes les règles communes aux sûretés personnelles devraient être indifférentes aux spécificités relatives aux parties. En dehors du régime primaire, des règles particulières devraient toujours prendre en compte ces spécificités. Mais, à l'occasion de la réforme du droit des sûretés personnelles, le champ des règles spéciales devrait lui aussi être rationalisé.
B/ Révision du champ des règles spéciales du cautionnement
78. Une fois justifiée cette révision (1), seront illustrées les règles particulières qui, de lege ferenda, pourraient être réservées aux garants personnes physiques (2) ou aux cautions ne s'engageant pas à des fins professionnelles (3). plus dangereuses que les sûretés personnelles (une comparaison entre la délégation imparfaite ou la promesse de porte fort et le cautionnement permet de s'en convaincre). 148 Le contrat de sûreté personnelle établi en un seul exemplaire conservé par le créancier serait privé de force probante, sauf commencement d'exécution ou défaut de contestation de son existence par le garant. Ces tempéraments sont déjà admis par la jurisprudence statuant en application de l'article 1325 du Code civil. 149 Cela éviterait que le garant n'oublie son engagement et ne s'abstienne dès lors de prendre des précautions pour l'honorer. Cela limiterait également le risque que les héritiers du garant n'ignorent l'obligation de leur auteur et ne soient déchargés sur le fondement de l'article 786 du Code civil. 150 "La caution est déchargée, lorsque la subrogation aux droits, hypothèques et privilèges du créancier, ne peut plus, par le fait de ce créancier, s'opérer en faveur de la caution. Toute clause contraire est réputée non écrite". 151 Com. 14 janv. 2014, inédit, n° 12-21389. 152 V. la jurisprudence relative aux cautionnements disproportionnés ne relevant pas des articles L. 313-10 ou L. 341-4 du Code de la consommation, qui sanctionne la faute commise par les créanciers "dans des circonstances exclusives de toute bonne foi" et notamment l'arrêt fondateur : Com. 17 juin 1997, Macron, Bull. civ. IV, n° 188. V. supra n° 17. 153 Une autre exigence de proportionnalité, celle imposée par l'article L. 650-1 du Code de commerce entre le montant de la garantie et le montant des concours consentis au débiteur principal, a déjà un champ d'application aussi général. 154 Aujourd'hui, seuls les créanciers professionnels sont visés par les articles L. 313-10 et L. 341-4 du Code de la consommation et, lorsque ces textes ne sont pas applicables, la Cour de cassation considère que les créanciers non professionnels ne commettent pas de faute en faisant souscrire à une caution un engagement prétendument excessif (Com. 13 nov. 2007, Bull. civ. IV, n o 236). | 116,250 | [
"750998"
] | [
"461303"
] |
00148718 | en | [
"phys"
] | 2024/03/04 23:41:48 | 2006 | https://hal.science/hal-00148718/file/COCIS_Oberdisse_sept2006_revised.pdf | Julian Oberdisse
email: oberdisse@lcvn.univ-montp2.fr
Adsorption and grafting on colloidal interfaces studied by scattering techniques -REVISED MANUSCRIPT
Keywords: Dynamic Light Scattering, Small Angle Neutron Scattering, Small Angle X-ray Scattering, Adsorption Isotherm, Polymer, Layer Profile, Surfactant Layer, PEO
Figures: 4
Adsorption and grafting on colloidal interfaces studied by scattering techniques
Introduction
Adsorption and grafting of polymers and surfactants from solution onto colloidal structures have a wide range of applications, from steric stabilisation to the design of nanostructured functional interfaces, many of which are used in industry (e.g., detergence). There are several techniques for the characterization of decorated interfaces. Scattering is without doubt among the most powerful methods, as it allows for a precise determination of the amount and structure of the adsorbed molecules without perturbing the sample. This review focuses on structure determination of adsorbed layers on colloidal interfaces by scattering techniques, namely Dynamic Light Scattering (DLS), Small Angle Neutron and X-ray Scattering (SANS and SAXS, respectively). The important field of neutron and X-ray reflectivity is excluded, because it is covered by a review on adsorption of biomolecules on flat interfaces [START_REF] Lu | Current Opinion in Colloid and Interface Science[END_REF].
The colloidal domain in aqueous solutions includes particles and nanoparticles, (micro-) emulsions, and self-assembled structures like surfactant membranes, all typically in the one to one hundred nanometer range. Onto these objects, different molecules may adsorb and build layers, possibly with internal structure. We start with a review of studies concerning a model polymer, poly(ethylene oxide) (PEO), the adsorption profile normal to the surface Φ(z) of which has attracted much attention. We then extend the review to other biopolymers, polyelectrolytes, and polymer complexes, as well as to surfactant and self-assembled layers.
Adsorption isotherm measurements are the natural starting point of all studies, and whenever they are feasible, they yield independent information to be compared to the scattering results.
Apart from the detailed shape of the isotherm, they give the height and position of the adsorption plateau, and thus also how much material is unadsorbed. The last point may be important for the data analysis as these molecules also contribute to the scattering.
Analysis of scattering from decorated interfaces
Adsorbed (or grafted) layers on colloidal surfaces can be characterized quite directly by small-angle scattering. The equation describing small-angle scattering from isolated objects (for simplicity called 'particles') with adsorbed layers reads:
$$I(q) = \frac{N}{V}\left(A_p(q)+A_l(q)\right)^2 + I_{inc} = \frac{N}{V}\left(\Delta\rho_p \int_{V_p} e^{i\vec{q}\cdot\vec{r}}\,d^3r \;+\; \Delta\rho_l \int_{V_l} e^{i\vec{q}\cdot\vec{r}}\,d^3r\right)^2 + I_{inc} \qquad (1)$$
where N/V is the number density of particles, and Δρ_p and Δρ_l are the contrasts of the particle and the layer in the solvent, respectively [*2,*3,*4]. The first integral over the volume of the particles gives the scattering amplitude A_p of the particles, and their intensity can be measured independently. The second integral over the volume of the layer gives the layer contribution A_l. The last term, I_inc, denotes the incoherent scattering background (particularly high in neutron scattering with proton-rich samples), which must be subtracted because it can dominate the layer contribution. In eq. (1), finally, the structure factor describing particle-particle interactions is set to one, and it needs to be reintroduced for studies of concentrated colloidal suspensions [*5,*6,7,8].
Small-angle scattering with neutrons or x-rays corresponds to different contrast conditions, which makes scattering powerful and versatile, applicable to all kinds of particle-layer combinations. The great strength of SANS is that isotopic substitution gives easy access to a wide range of contrast conditions. Eq. (1) illustrates the three possible cases. If Δρ_p = 0 ("on-contrast" or "layer contrast"), only the layer scattering is probed. Secondly, if Δρ_l = 0 ("particle contrast"), only the bare particle is seen, which is potentially useful to check that 'particles' (including droplets) are not modified by the adsorption process. Only in the last situation, where Δρ_p ≠ 0 and Δρ_l ≠ 0 ("off-contrast"), both terms in eq. (1) contribute. This is important for polymer layers.
Before going into modelling, one may wish to know the quantity of adsorbed matter. For small enough particles, the limiting value I(q→0) in small-angle scattering gives direct access to this information. For homogeneous particles of volume V_p in particle contrast, we obtain I(q→0) = Δρ_p² V_p Φ_p, and equivalently for layer contrast I(q→0) = Δρ_l² V_l Φ_l, where we have introduced the volume fraction of the particles Φ_p = (N/V) V_p (Φ_l = (N/V) V_l for the layer). Note that it is not important if the layer contains solvent: V_l is the "dry" volume of adsorbed material, if we set Δρ_l to its "dry" contrast. By measuring different contrast conditions and dividing the limiting zero-angle intensities, the adsorbed quantities can be determined regardless of structure factor influence and instrument calibration, as such contributions cancel in intensity ratios [*9,*10].
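A minimal numerical sketch of this zero-angle ratio method is given below. All input values are hypothetical, chosen only to illustrate the order of magnitude of the operations; they are not taken from any of the cited studies.

```python
import math

# Hypothetical extrapolated zero-angle intensities (same units, same calibration)
I0_layer = 0.12        # measured in "layer contrast" (particle matched by the solvent)
I0_particle = 35.0     # measured in "particle contrast" (layer matched by the solvent)

# "Dry" contrasts versus the solvent, in cm^-2 (illustrative values)
drho_layer = 0.6e10
drho_particle = 3.5e10

R_nm = 10.0                                    # bare particle radius in nm
V_p = 4.0 / 3.0 * math.pi * (R_nm * 1e-7)**3   # particle volume in cm^3

# I_l(0)/I_p(0) = (drho_l^2 V_l) / (drho_p^2 V_p): the N/V, structure-factor and
# calibration prefactors are common to both measurements and cancel in the ratio
V_l = V_p * (I0_layer / I0_particle) * (drho_particle / drho_layer)**2  # dry adsorbed volume per particle

area = 4.0 * math.pi * (R_nm * 1e-7)**2        # particle surface in cm^2
rho_dry = 1.1                                  # assumed dry polymer density, g/cm^3
gamma = rho_dry * V_l / area * 1e7             # adsorbed amount in mg/m^2

print(f"dry adsorbed volume per particle: {V_l:.2e} cm^3, adsorbed amount: {gamma:.2f} mg/m^2")
```

With these illustrative numbers the adsorbed amount comes out around 0.4 mg/m², a typical order of magnitude for adsorbed polymer layers.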
The spatial extent of an adsorbed layer can be determined via the hydrodynamic radius of particles by DLS with and without the adsorbed layer, the difference being the hydrodynamic layer thickness. However, DLS does not give any information on the amount of adsorbed matter. Alternatively, with SANS or SAXS, one can determine the particle radius, with and without adsorbed layer, as well as the adsorbed amount. If the contrasts of the particle and the adsorbed material are similar, the increase in particle radius can be directly translated into the layer thickness. If the contrasts are too different, the weighting (eq.( 1)) of the two contributions needs to be taken into account, e.g. with core-shell models. The simplest ones are a special case of eq.( 1), with constant contrast functions ∆ρ(r). For spherically symmetric particles and adsorbed layers, the model has only four parameters (radius and contrast of particle and layer), besides the particle concentration. The particle parameters can be determined independently, whereas the other two affect I(q) differently: An increase in layer thickness, e.g., shifts the scattering function to smaller q, whereas an increase in adsorbed amount (at fixed thickness) increases the intensity. Note that the average contrast of the layer and its thickness are convenient starting points for modelling (identification of monolayers or incomplete layers), while more elaborate core-shell models use decaying shell concentrations
[*5,*11,*12].
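The simplest such description can be written down in a few lines. The sketch below is only an illustration, not a fit to any data; the radii, thickness and contrast values are arbitrary. It computes the intensity of a spherical core-shell object, i.e. eq. (1) with constant contrasts, and shows that the "layer contrast" and "off-contrast" cases differ only through the core contrast.

```python
import numpy as np

def sphere_amp(q, R):
    """Normalized amplitude of a homogeneous sphere: 3[sin(qR) - qR cos(qR)] / (qR)^3."""
    x = q * R
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def core_shell_intensity(q, R_core, t_shell, drho_core, drho_shell, scale=1.0, bkg=0.0):
    """I(q) of a spherical particle (contrast drho_core) carrying a homogeneous
    adsorbed layer of thickness t_shell (contrast drho_shell); structure factor set to one."""
    R_out = R_core + t_shell
    V_core = 4.0 / 3.0 * np.pi * R_core**3
    V_out = 4.0 / 3.0 * np.pi * R_out**3
    # decompose into a full sphere of shell contrast plus a core sphere of (core - shell) contrast
    A = (drho_shell * V_out * sphere_amp(q, R_out)
         + (drho_core - drho_shell) * V_core * sphere_amp(q, R_core))
    return scale * A**2 + bkg

q = np.logspace(-2, 0, 200)   # scattering vector, e.g. in nm^-1

# layer contrast: particle matched by the solvent, only the adsorbed layer scatters
I_layer = core_shell_intensity(q, R_core=10.0, t_shell=3.0, drho_core=0.0, drho_shell=1.0)
# off-contrast: particle and layer both contribute, including the cross term
I_off = core_shell_intensity(q, R_core=10.0, t_shell=3.0, drho_core=4.0, drho_shell=1.0)
```

Increasing t_shell shifts the oscillations of I_layer to smaller q, while increasing drho_shell raises the intensity, which is the separation of thickness and adsorbed amount described above.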
The determination of the profile Φ(z) of adsorbed polymer chains, with the z-axis normal to the surface, needs a more involved data analysis [*2,*4]. There are two routes to Φ(z). The first one is based on a measurement in "layer contrast" (Δρ_p = 0). According to eq. (1), with Δρ_l ∝ Φ(z), this intensity is related to the square of the Fourier transform of Φ(z). One can then either test different profiles, or try to invert the relationship, which causes the usual problems related to data inversion (limited q-range, phase loss and limiting conditions ...) [*2,*3,*4]. This route also gives a (usually small) second term in the layer scattering, called the fluctuation term [13], which stems from deviations from the average profile. The second route is based on additional off-contrast measurements. Carrying out the square of the sum in eq. (1) gives three terms, A_p² + A_l² + 2 A_p A_l. Subtracting the bare particle and pure layer terms yields the cross-term with the layer contribution A_l, this time without the square, which is easier to treat because the phase factor is not lost.
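As an illustration of the first route, the sketch below computes the mean-profile layer scattering of a spherically symmetric adsorbed layer around a matched particle by direct numerical integration. The fluctuation term is neglected, and the particle radius, profile amplitude and decay length are arbitrary illustrative values.

```python
import numpy as np

def layer_intensity_on_contrast(q, R, phi_of_z, z_max, n=2000):
    """Per-particle layer scattering in layer contrast (particle matched), mean profile only:
    A_l(q) = 4*pi * integral_R^{R+z_max} phi(r - R) * r^2 * sin(qr)/(qr) dr,  I_l(q) ~ A_l(q)^2."""
    r = np.linspace(R, R + z_max, n)
    phi = phi_of_z(r - R)
    qc = np.atleast_1d(q)[:, None]
    integrand = phi * r**2 * np.sinc(qc * r / np.pi)   # np.sinc(x) = sin(pi*x)/(pi*x), i.e. sin(qr)/(qr)
    A = 4.0 * np.pi * np.trapz(integrand, r, axis=1)
    return A**2

# Exponentially decaying profile, of the type often reported for adsorbed PEO
phi0, decay = 0.3, 2.5                 # hypothetical contact volume fraction and decay length (nm)
profile = lambda z: phi0 * np.exp(-z / decay)

q = np.logspace(-2, 0.5, 150)          # nm^-1
I_exp = layer_intensity_on_contrast(q, R=30.0, phi_of_z=profile, z_max=25.0)
# Replacing 'profile' by a different trial function (e.g. a parabolic profile) and comparing
# with the measured layer intensity is the essence of the "test different profiles" option.
```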
Review of grafting and adsorption studies by small-angle scattering and DLS
Structure of PEO-layers
Many studies focussing on fundamental aspects deal with the model polymer PEO, as homopolymer or part of a block copolymer [14-30]. Especially SANS has led to a very detailed description of the structure of PEO-layers, deepening our understanding of polymer brushes, and their interaction, e.g. in colloidal stabilization. Hone et al have measured the properties of an adsorbed layer of PEO on poly(styrene)-latex (PS) [*14]. They have performed on- and off-contrast SANS experiments in order to determine the (exponential) profile Φ(z) and the weak fluctuation term, the determination of which requires a proper treatment of smearing and polydispersity. They have revisited the calculation by Auvray and de Gennes [13], and propose a one-parameter description of the fluctuation term. Marshall et al have extended the adsorbed layer study of PEO on PS-latex to different molecular weights, and compare exponential, scaling-theory based and Scheutjens-Fleer self-consistent mean field theory, including the fluctuation term [**15]. Recently, the effect of electrolytes on PEO layers on silica was also investigated by DLS [16]. Concerning copolymers, Seelenmeyer and Ballauff investigate the adsorption of non-ionic C18E112 onto PS latex particles by SAXS [*17]. They used exponential and parabolic density profiles for PEO to fit the data. The adsorption of a similar non-ionic surfactant (C12E24) onto hydrophobized silica in water was studied by SANS, employing a two-layer model describing the hydrophobic and hydrophilic layers [18]. On hydrocarbon and fluorocarbon emulsion droplets, layers of two triblock copolymers (pluronics F68 and F127) and a star-like molecule (Poloxamine 908) have been adsorbed by King et al [*19]. They found surprisingly similar exponentially decaying profiles in all cases, cf. Fig. 1, which also serves as illustration for two ways to determine Φ(z) in "layer contrast", inversion and fitting, as discussed in section 2. The SANS study of Washington et al deals with small diblock copolymers adsorbed on perfluorocarbon emulsion droplets [20]. A clear temperature dependence of Φ(z) was found, but the best-fitting profile type depends on the (low) molecular weight. Diblock copolymer layers adsorbed onto water droplets have been characterized by DLS by Omarjee et al [*21]. Frielinghaus et al determine the partial structure factors of diblock copolymers [*22,23] used for boosting of microemulsions [24]. Adsorption on carbon black has been studied by comparison of DLS and contrast-variation SANS [*25,26,27]. The adsorbed layer of both F127 and a rake-type siloxane-PPO-PEO copolymer was found to be a monolayer at low coverage, and adsorbed micelles at high coverage. On magnetic particles, Moeser et al have followed the water decrease in a PPO-PEO shell by SANS and theory as the PPO content increases [*11]. Concerning the adsorption of PEO and tri-block copolymers on non-spherical particles, Nelson and Cosgrove have performed SANS and DLS studies with anisotropic clay particles [*28,29,30]. Unusually thin layers are found for PEO, and a stronger adsorption of the pluronics. Studies of adsorbed PEO-layers at higher colloid concentrations have been published by several groups [*5,7,8]. Zackrisson et al have studied PEO-layers grafted to PS-particles (used for studies of glassy dynamics) by SANS with contrast variation, using a stretched-chain polymer profile [*5]. In Fig. 2, they compare their form factor measurements to a model prediction, at different solvent compositions, and nice fits are obtained. Along a very different approach, Qiu et al match the PEO layer but follow its influence via the interparticle structure factor [7,8].

Structure of adsorbed and grafted layers, from polyelectrolytes to surfactants
Adsorption of polyelectrolytes, biomacromolecules, and polymer complexes.
Adsorbed layers of many different macromolecules have been characterized by scattering [START_REF] Marshall | Small-angle neutron scattering of gelatin/sodium dodecyl sulfate complexes at the polystyrene/water interface[END_REF][START_REF] Dreiss | Formation of a supramolecular gel between α α α α-cyclodextrin and free and adsorbed PEO on the surface of colloidal silica: Effect of temperature, solvent, and particle size[END_REF][START_REF] Cárdenas | SANS study of the interactions among DNA, a cationic surfactant, and polystyrene latex particles[END_REF][START_REF] Lauten R A, Kjøniksen | Adsorption and desorption of unmodified and hydrophobically modified ethyl(hydroxyethyl)cellulose on polystyrene latex particles in the presence of ionic surfactants using dynamic light scattering[END_REF][START_REF] Rosenfeldt | The adsorption of Bovine Serum Albumin (BSA) and Bovine Pancreastic Ribonuclease A (RNase A) on strong and weak polyelectrolytes grafted onto latex particles is measured by SAXS. The scattered intensity is modelled by a geometrical model of beads on the surface of the latex[END_REF][START_REF] Borget | Interactions of hairy latex particles with cationic copolymers[END_REF][START_REF] Estrela-Lopis | SANS studies of polyelectrolyte multilayers on colloidal templates[END_REF][START_REF] Rusu | Adsorption of novel thermosensitive graft-copolymers: Core-shell particles prepared by polyelectrolyte multiplayer selfassembly[END_REF][START_REF] Okubo | Alternate multi-layered adsorption of macro-cations and -anions on the colloidal spheres. Influence of the deionisation of the complexation mixtures with coexistence of the ion-exchange resins[END_REF]. In these studies, the focus shifts from the more 'conceptual' interest in PEO-layers to specific substrate-molecule interactions. The profile of gelatin layers adsorbed on contrast matched PS-particles was shown to be well-described by an exponential by Marshall el al [START_REF] Marshall | Small-angle neutron scattering of gelatin/sodium dodecyl sulfate complexes at the polystyrene/water interface[END_REF]. Addition of equally contrast matched ionic surfactant (SDS) induces layer swelling, and finally gelatin desorption. Dreiss et al have shown that the α-cyclodextrin threads on adsorbed PEO chains (pseudopolyrotaxanes), modifying their configuration [START_REF] Dreiss | Formation of a supramolecular gel between α α α α-cyclodextrin and free and adsorbed PEO on the surface of colloidal silica: Effect of temperature, solvent, and particle size[END_REF]. Cárdenas et al have characterized DNA-coated contrast-matched PS-particles by SANS, and present evidence for layer compaction upon addition of cationic surfactant [START_REF] Cárdenas | SANS study of the interactions among DNA, a cationic surfactant, and polystyrene latex particles[END_REF]. Addition of ionic surfactants has been shown to lead to the desorption of ethyl(hydroxyethyl)cellulose from PS-latex, which can be followed by DLS [START_REF] Lauten R A, Kjøniksen | Adsorption and desorption of unmodified and hydrophobically modified ethyl(hydroxyethyl)cellulose on polystyrene latex particles in the presence of ionic surfactants using dynamic light scattering[END_REF].
Scattering studies have been crucial for polyelectrolyte layers. The adsorption of small proteins (BSA) onto spherical polyelectrolyte brushes was measured by SAXS by Rosenfeldt et al [**35]. Using DLS, the thickness of adsorbed cationic copolymer on latex particles was studied by Borget et al [START_REF] Borget | Interactions of hairy latex particles with cationic copolymers[END_REF]. Finally, polyelectrolyte multilayers have been characterized on contrast-matched PS using core-shell models, and by DLS on silica [**37, [START_REF] Rusu | Adsorption of novel thermosensitive graft-copolymers: Core-shell particles prepared by polyelectrolyte multiplayer selfassembly[END_REF][START_REF] Okubo | Alternate multi-layered adsorption of macro-cations and -anions on the colloidal spheres. Influence of the deionisation of the complexation mixtures with coexistence of the ion-exchange resins[END_REF]. In all of these studies, unperturbed structural characterizations in the solvent were made possible by scattering.
There is a great amount of literature by synthesis groups on grafting of polymer chains onto or from colloidal surfaces. These groups often use DLS to characterize layer extensions [START_REF] Inoubli | Graft from' polymerization on colloidal silica particles: elaboration of alkoxyamine grafted surface in situ trapping of carbon radicals[END_REF][START_REF] Qi | Preparation of acrylate polymer/silica nanocomposite particles with high silica encapsulation efficiency via miniemulsion polymerisation[END_REF], with convincing plots of the growing hydrodynamic thickness during polymerisation [**42], or as a function of external stimuli [*12, 43-47]. In static scattering, El Harrak et al use SANS [*48, 49, 50], and Yang et al DLS and static light scattering with a core-shell model [*12]. Concerning the structure of grafted layers, the Pedersen model must be mentioned [**51]. Shah et al use polarized and depolarised light scattering to investigate PMMA layers grafted onto Montmorillonite clay [52]. In a concentration study, Kohlbrecher et al fit contrast-variation SANS intensities of coated silica spheres in toluene with a core-shell model and an adhesive polydisperse structure factor model [*6].
Adsorption of surfactant layers and supramolecular aggregates.
The adsorption of ionic and non-ionic surfactants to hydrocarbon emulsion droplets is of evident industrial importance. In this case, the scattered intensity can be described by a coreshell model [*10], which was also used by Bumajdad et al to study the partitioning of C12Ej (j=3 to 8) in DDAB layers in water-in-oil emulsion droplets [START_REF] Bumajdad | Compositions of mixed surfactant layers in microemulsions determined by small-angle neutron scattering[END_REF]. On colloids, the thickness of an adsorbed layer of C 12 E 5 on laponite has been measured by Grillo et al by SANS using a core-shell model, and evidence for incomplete layer formation was found [START_REF] Grillo | structural determination of a nonionic surfactant layer adsorbed on clay particles[END_REF]. On silica particles, a contrast-variation study of adsorbed non-ionic surfactant has been performed, and the scattering data modelled with micelle-decorated silica [*9,55], a structure already seen by Cummins et al [START_REF] Cummins | Temperature-dependence of the adsorption of hexaethylene glycol monododecyl ether on silica sols[END_REF].
Pores offer the possibility to study adsorption at interfaces with curvatures comparable but opposite in sign to colloids. Porous solids are not colloidal, but adsorption inside pores can be analysed using small angle scattering. Vangeyte et al [*57] study adsorption of poly(ethylene oxide)-b-poly(ε-caprolactone) copolymers at the silica-water interface. They succeed in explaining their SANS-data with an elaborate model for adsorbed micelles similar to bulk micelles, cf. Fig. 3, and the result in q 2 I representation is shown in Fig. 4. In the more complex system with added SDS the peak disappears and a core-shell model becomes more appropriate, indicating de-aggregation [START_REF] Vangeyte | Concomitant adsorption of poly(ethylene oxide)-b-poly(ε ε ε ε-caprolactone) copolymers and sodium dodecyl sulfate at the silica-water interface[END_REF].
Conclusion
Recent advances in the study of adsorption on colloidal interfaces have been reviewed. On the one hand, DLS is routinely used to characterize layer thickness, with a noticeable sensitivity to long tails due to their influence on hydrodynamics. On the other hand, SANS and SAXS give information on mass, and mass distribution, with a higher sensitivity to the denser regions. Small-angle scattering being a 'mature' discipline, it appears that major progress has been made by using it to resolve fundamental questions, namely concerning the layer profile of model polymers. In parallel, a very vivid community of researchers makes intensive use of DLS and static scattering to characterize and follow the growth of layers of increasing complexity.
contrast variation and concentration dependence measurements, J Chem Phys 2006, 125: 044715
A contrast-variation study of the scattering of silica spheres with a hydrophobic layer in an organic solvent is presented. The intensity is described by a core-shell model combined with a structure factor for adhesive particles, which fits all contrast situations simultaneously.
[7] Qiu D, Cosgrove T, Howe A: Small-angle neutron scattering study of concentrated colloidal dispersions: The electrostatic/steric composite interactions between colloidal particles, Langmuir 2006, 22: 6060-6067
[8] Qiu D, Dreiss CA, Cosgrove T, Howe A: Small-angle neutron scattering study of concentrated colloidal dispersions: The interparticle interactions between sterically stabilized particles, Langmuir 2005, 21: 9964-9969
[*9] Despert G, Oberdisse J: Formation of micelle-decorated colloidal silica by adsorption of nonionic surfactant, Langmuir 2003, 19: 7604-7610
The adsorption of a non-ionic surfactant (TX-100) on colloidal silica is studied by SANS, using solvent contrast variation. The adsorbed layer is described by a model of impenetrable micelles attached to the silica bead.
[*10] Staples E, Penfold J, Tucker I: Adsorption of mixed surfactants at the oil/water interface, J Phys Chem B 2000, 104: 606-614
The adsorption of mixtures of SDS and C12E6 onto hexadecane droplets in water is studied by SANS. A core-shell model is used to describe the form factor of the emulsion droplets, and coexisting micelles are modelled as interacting core-shell particles. A model-independent analysis using I(q→0) is used to extract information on layer composition. The results are shown to disagree with straightforward regular solution theory.
[*11] Moeser GD, Green WH, Laibinis PE, Linse P, Hatton TA: Structure of polymer-stabilized magnetic fluids: small-angle neutron scattering and mean-field lattice modelling, Langmuir 2004, 20: 5223-5234
The layer of PAA with grafted PPO and PEO blocks bound to magnetic nanoparticles is studied by SANS. Core-shell modelling including the magnetic scattering of the core is used to determine the layer density and thickness, for different PPO/PEO ratios.
[*12] Yang C, Kizhakkedathu JN, Brooks DE, Jin F, Wu C: Laser-light scattering study of internal motions of polymer chains grafted on spherical latex particles, J Phys Chem B 2004, 108: 18479-18484
Temperature-dependent poly(NIPAM) chains grown from relatively big poly(styrene) latex ('grafting from') are studied by static and dynamic light scattering. Near the theta-temperature, the hairy latex is described by a core-shell model with a r^-1 polymer density in the brush. The time correlation function reveals interesting dynamics at small scales, presumably due to internal motions.
[13] Auvray G, de Gennes PG: Neutron scattering by adsorbed polymer layers, Europhys Lett 1986, 2: 647-650
[*14] Hone JHE, Cosgrove T, Saphiannikova M, Obey TM, Marshall JC, Crowley TL: Structure of physically adsorbed polymer layers measured by small-angle neutron scattering using contrast variation methods, Langmuir 2002, 18: 855-864
Combined on- and off-contrast SANS experiments in order to determine the (exponential) profile Φ(z) and the weak fluctuation term of PEO-layers on polystyrene latex. The fluctuation term is obtained by subtraction of layer intensities obtained via the two routes discussed in the text.
[**15] Marshall JC, Cosgrove T, Leermakers F, Obey TM, Dreiss CA: Detailed modelling of the volume fraction profile of adsorbed layers using small-angle neutron scattering, Langmuir 2004, 20: 4480-4488
The structure of the adsorbed layers of PEO (10k to 634k) on polystyrene is studied by on- and off-contrast SANS. Different theoretical profiles are reviewed and used to describe the layer scattering. This includes the weak fluctuation term, which is proportional to q^(-4/3) and decays more slowly than the average layer contribution q^(-2). It is therefore more important (but nonetheless very small) at high q.
[16] Flood C, Cosgrove T, Howell I, Revell P: Effect of electrolyte on adsorbed polymer layers: poly(ethylene oxide) - silica system, Langmuir 2006, 22: 6923-6930
[*17] Seelenmeyer S, Ballauff M: Analysis of surfactants adsorbed onto the surface of latex particles by small-angle x-ray scattering, Langmuir 2000, 16: 4094-4099
The layer structure of hydrophilic PEO attached onto latex by hydrophobic stickers is studied. PS-latex is virtually matched by the solvent, and the intensity curves nicely show the side maxima, which shift to smaller q and rise in intensity as the layer scattering increases. Moments of the density profile are used to characterize the layer, and both an exponential and a parabolic density profile fit the data.
[18] Dale PJ, Vincent B, Cosgrove T, Kijlstra J: Small-angle neutron scattering studies of an adsorbed non-ionic surfactant (C12E24) on hydrophobised silica particles in water, Langmuir 2005, 21: 12244-12249
[*19] King S, Washington C, Heenan R: Polyoxyalkylene block copolymers adsorbed in hydrocarbon and fluorocarbon oil-in-water emulsions, Phys Chem Chem Phys 2005, 7: 143-149
The volume profiles of three copolymers (F68, F127, tetronic/poloxamine 908) adsorbed onto emulsion droplets are determined by SANS, using two methods, inversion and fitting, to Φ(z). Considerable similarity in the adsorbed layer structure is found for hydrocarbon and fluorocarbon emulsions.
Figure Captions:
Figure 1: Fig. 1 of ref. [*19].
Figure 2: Form factors measured for deuterated latex with grafted PEO-layers in 0.4M Na2CO3 at three different contrasts corresponding to 100:0, 91:9, and 85:15 (w/w) D2O/H2O. Lines are "simultaneous" fits (cf. [*5]) in which only the solvent scattering length density varies. Shown in the inset are accompanying scattering length density profiles. Reprinted with permission from ref. [*5], copyright 2005, American Chemical Society. (Fig. 7 of ref. [*5])
Figure 3: Figure 3a is Fig. 9 of ref. [*57]; Figure 3b is the upper graph of Fig. 10 in ref. [*57].
Figure 4: Fit of the micellar form factor for the core-rigid rods model to the SANS intensity for the PEO114-b-PCL19 copolymer at surface saturation in porous silica, see ref. [*57] for details. The representation of q²I(q) vs q enhances the layer scattering. Reprinted with permission from ref. [*57], copyright 2005, American Chemical Society.
Acknowledgements: Critical rereading and fruitful discussions with François Boué and Grégoire Porte are gratefully acknowledged. | 27,032 | [
"995273"
] | [
"737"
] |
01487239 | en | [
"spi",
"shs"
] | 2024/03/04 23:41:48 | 2015 | https://minesparis-psl.hal.science/hal-01487239/file/IMHRC%202015.pdf | Benoit Montreuil
Eric Ballot
William Tremblay
Modular Design of Physical Internet Transport, Handling and Packaging Containers
Keywords: Physical Internet, Container, Encapsulation, Material Handling, Interconnected Logistics, Packaging, Transportation, Modularity
This paper proposes a three-tier characterization of Physical Internet containers into transport, handling and packaging containers. It first provides an overview of goods encapsulation in the Physical Internet and of the generic characteristics of Physical Internet containers. Then it proceeds with an analysis of the current goods encapsulation practices. This leads to the introduction of the three tiers, with explicit description and analysis of containers of each tier. The paper provides a synthesis of the proposed transformation of goods encapsulation and highlights key research and innovation opportunities and challenges for both industry and academia.
Introduction
The Physical Internet has been introduced as a means to address the grand challenge of enabling an order-of-magnitude improvement in the efficiency and sustainability of logistics systems in their wide sense, encompassing the way physical objects are moved, stored, realized, supplied and used all around the world [START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Ballot | The Physical Internet : The Network of Logistics Networks[END_REF]. The Physical Internet (PI, π) has been formally defined as an open global logistics network, founded on physical, digital, and operational interconnectivity, through encapsulation, interfaces, and protocols (Montreuil et al. 2013a).
Recent studies have assessed PI's huge potential over a wide industry and territory spectrum. Estimations permit to expect economic gains at least on the order of 30%, environmental gains on the order of 30 to 60 % in greenhouse gas emission, and social gains expressed notably through a reduction of trucker turnover rate on the order of 75% for road based transportation, coupled to lower prices and faster supply chains (Meller et al. 2012[START_REF] Sarraj | Interconnected logistics networks and protocols : simulation-based efficiency assessment[END_REF]). It has recently been highlighted in the US Material Handling and Logistics Roadmap as a key contribution towards shaping the future of logistics and material handling (Gue et al. 2013).
This paper focuses on one of the key pillars of the Physical Internet: goods encapsulation in smart, world-standard, modular and designed-for-logistics containers (in short, π-containers). Previous research has introduced generic dimensional and functional specifications for the π-containers and made clear the need for them to come in various structural grades (Montreuil, 2009-2013[START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF]. The purpose of this paper is to address the need for further specifying the modular design of π-containers. Specifically, it proposes to generically characterize π-containers according to three modular tiers: transport containers, handling containers and packaging containers.
The paper is structured as follows. It starts in section 2 with a brief review of the Physical Internet and its focus on containerized goods encapsulation. Then it proceeds with a review of the essence of current goods encapsulation, containers and unit loads in section 3. The paper introduces the proposed three-tier structural characterization of π-containers in section 4. Finally, conclusive remarks are offered in section 5.
The Physical Internet and goods encapsulation
The Digital Internet deals only with standard data packets. For example, an email to be sent must first have its content chunked into small data components that are each encapsulated into a set of data packets according to a universal format and protocol. These data packets are then routed across the digital networks to end up at their final destination where they are reconsolidated into a readable complete email. The Physical Internet intends to do it similarly with goods having to flow through it. Indeed the Physical Internet strictly deals with goods encapsulated in standard modular π-containers that are to be the material-equivalent to data packets. This extends the classical single-organization centric unit load standardization concepts [START_REF] Tompkins | Facilities planning[END_REF], the shipping container (ISO 1161(ISO -1984) ) and the wider encompassing modular transportation concepts introduced nearly twenty-five years ago [START_REF] Montreuil | Modular Transportation[END_REF] and investigated in projects such as Cargo2000 [START_REF] Hülsmann | Automatische Umschlag-anlagen für den kombinierten Ladungsverkehr[END_REF], extending and generalizing them to encompass all goods encapsulation in the Physical Internet.
The uniquely identified π-containers intend to offer a private space in an openly interconnected logistics web, protecting and making anonymous, as needed, the encapsulated goods. Indeed, πcontainers from a multitude of shippers are to be transported by numerous certified transportation and logistics service providers across multiple modes. They are also to be handled and stored in numerous certified open logistics facilities, notably for consolidated transshipment and distributed deployment across territories. They are to be used from factories and fields all the way to retail stores and homes. Their exploitation getting momentum and eventually universal acceptance requires on one side for them to be well designed, engineered and realized, and on the other side for industry to ever better design, engineer and realize their products for easing their standardized modular encapsulation.
Figure 1. Generic characteristics of Physical Internet containers
From a dimensional perspective, π-containers are to come in modular cubic dimensions from that of current large cargo containers down to pallet sizes, cases and tinier boxes. Illustrative sets of dimensions include {12; 6; 4,8; 3,6; 2,4; 1,2} meters on the larger spectrum and {0,8; 0,6; 0,4; 0,3; 0,2; 0,1} or {0,64; 0,48; 0,36; 0,24; 0,12} meters on the smaller spectrum. The specific final set of dimensions have been left to be determined based on further research and experiments in industry, so that this set becomes a unique world standard acknowledged by the key stakeholders and embraced by industry.
From a functional perspective, the fundamental intent is for π-containers to be designed and engineered so as to ease interconnected logistics operations, standardizing key functionalities while opening vast avenues for innovation [START_REF] Montreuil | Towards a Physical Internet: Meeting the Global Logistics Sustainability Grand Challenge[END_REF][START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF].
Their most fundamental capability is to be able to protect their encapsulated objects, so they need to be robust and reliable in that regard. They must be easy to snap to equipment and structures, to interlock with each other, using standardized interfacing devices. They should be easy to load and unload fast and efficiently as needed.
Their design must also facilitate their sealing and unsealing for security purposes, contamination avoidance purposes as well as, when needed, damp and leak proof capability purposes; their conditioning (e.g. temperature-controlled) as required; and their cleaning between usages as pertinent.
As illustrated in Figure 2, they must allow composition into composite π-containers and decomposition back into sets of smaller π-containers. A composite container exists as a single entity in the Physical Internet and is handled, stored and transported as such until it is decomposed. Composition capabilities are subject to structural constraints. Figure 2 illustrates how such composition/decomposition can be achieved by exploiting the modularity of π-containers and standardized interlocking property.
Even though not technically necessary, π-containers should be easy to panel with publicity and information supports for business marketing and transaction easing purposes as well as for user efficiency and safety purposes.
Designed for interconnected logistics, π-containers are to be efficiently processed in automated as well as manual environments, without requiring pallets. From an intelligence perspective, they are to take advantage of being smart, localized and connected, and should be getting better at it as technology evolves. As a fundamental basis, they must be uniquely identifiable. They should exploit Internet-of-Things standards and technologies whenever accessible (e.g. Atzori et al. 2010). Using their identification and communications capabilities, π-containers are to be capable of signaling their position for traceability purposes and problematic conditions relative to their content or state (breakage, locking integrity, etc.), notably for security and safety purposes.
The π-containers should also have state memory capabilities, notably for traceability and integrity insurance purposes. As technological innovations make it economically feasible, they should have autonomous reasoning capabilities. Thus, they are to be notably capable of interacting with devices, carriers, other π-containers, and virtual agents for routing purposes [START_REF] Montreuil | An Open Logistics Interconnection Model for the Physical Internet[END_REF].
From an eco-friendliness perspective, π-containers are to be as light and thin as possible to minimize their weight and volume burden on space usage and on energy consumption when handled and transported. They are to be efficiently reusable and/or recyclable; to have minimal offservice footprint, and to come in distinct structural grades well adapted to their range of purposes.
The current state of goods encapsulation and unit load design
In order to better comprehend the subsequently introduced characterization of π-containers, it is important to revise the current state of goods "encapsulation". In order to achieve this in a compact manner, this section exploits a multi-tier characterization of goods encapsulation that is depicted in Figure 3. At the first encapsulation tier, goods are packaged in boxes, bottles and bags as illustrated in Figure 3 for consumer goods. The packaging may be done in a single layer or several layers. When exploited, the package is usually the basic selling unit of goods to consumers and businesses.
Packaging is subject to product design, mostly related to its dimensions, weight and fragility. Indeed the package must protect its contained product. This involves many compromises between the size of the packaging, regulations, its materials, as well as with the inclusion of protective filling materials and fixations. It often ends up with the actual product using a fraction of the package space. Packaging is also subject to marketing needs. This is hugely important in the retail industry as the package is often what the consumer sees and touches when deciding whether to purchase the product or not in retail stores. Hence packages get all kinds of prints, colors and images. Packages have become differentiating agents affecting sales. This is less the case in industrial and e-commerce settings. In industrial B2B contexts, the purchasing decision is mostly subject to pricing, functional, technical and delivery time specifications and assessments. With e-commerce, the purchasing decision is done facing a smartphone, tablet or computer, mostly based on images, videos, descriptions, expert rankings, word-of-mouth, peer-to-peer comments, promised delivery time as well as total price including taxes and delivery fees. The consumer sees the packaging only upon receiving the product at home or a e-drive, when he has already committed to buy it.
Logistics considerations such as ergonomic manual and/or automated handling usually have very limited impact on package design for specific goods. This is a world currently dominated by packaging design and engineering, product design and engineering, and marketing. As asserted by Meller et al. (2012), this leads to situations where a consumer packaged goods manufacturer making and selling 1000 distinct products may well end up with 800 distinct package sizes.
Encapsulation tier 2: basic handling unit loads
At the second encapsulation tier, packages encapsulating goods are grouped into basic handling units such as cardboard cases, totes and containers. Figures 4 and 5 provide typical examples. In some settings, goods are directly unitized, bypassing packaging encapsulation.
Cases are often single-use while totes and containers are mostly reusable and returnable. The former are usually much cheaper than the latter. From a logistics perspective, the cubic format of cardboard cases makes them easier to handle than odd-shaped loads. Their low price and recyclability often leads users to adopt a throw-after-usage operation, avoiding the need for reverse logistics of cases. They are most often designed to fit the unitizing needs of a specific product or family of products, leading businesses to use often hundreds of distinct cases with specific dimensions. Cases often lack good handles to ease their handling, so they either have to be clamped or held from the bottom for manual or automated handling purposes. For example, their conveyance forces the use of roller or belt conveyors to support them.
For storage purposes, their structural weakness and their lack of snapping devices force one to lay them on a smooth strong surface (such as racks and pallets).
In the parcel logistics industry, in order to help streamlining their logistics network and offering competitive pricing, the service providers prefer using their specific formats. Shippers who want to use their own formats are usually charged stiffer prices. Also, in order to avoid excessive pricing, shippers have to certify that their cases meet shock-resisting specifications, which often force shippers to double box their goods, the outer case protecting the inner case containing the goods: this increases significantly the material and operational costs of load unitizing in cases.
Generally, returnable handling totes and containers are designed for logistics differently than cases, often for the specific context they are used in. Often times, they have handles, are easy to open and close multiple times, and are structurally stronger, allowing higher stacking capability. As shown in Figure 6, many are foldable or stackable when empty to limit the reverse logistics induced by the need for redeploying them. As they offer limited security and are designed for specific purposes and users, totes and plastic containers are mostly used in limited ecosystems, such as within a facility, a company, a client-supplier dyad, a collaborative supply chain or a specific industry, such as for fresh produce in a specific territory.

Pallets have been characterized as one of the most important innovations ever in the material handling, logistics and supply chain domains, having a huge impact on productivity by easing the movement of multiple goods, cases, etc., as a single entity, using functionally standardized fork equipment (e.g. [START_REF] Vanderbilt | The Single Most Important Object in the Global Economy: The Pallet[END_REF]. Figure 8 illustrates several pallet-handling contexts. There are companies that specialize in providing pools of pallets shared by their clients, insuring their quality, making pallets available when and where their clients need them, involving relocating pallets and tactically positioning them based on client usage expectations.
Encapsulation tier 4: Shipping containers
At the fourth encapsulation tier lies the shipping container that contains some combination of goods themselves, in their unitary packaging or in basic handling unit loads such as cases, themselves either stacked directly on its floor, and/or loaded on pallets in pallet-wide swap boxes or containers.
Illustrated in Figure 9, shipping containers are rugged, capable of heavy-duty work in tough environmental conditions such as rain, ice, snowstorms, sandstorms and rough waters in high sea. They come roughly in a 2,4 by 2,4 meter section, with lengths of 6 or 12 meters (20 or 40 feet). There are numerous variants around these gross dimensions, notably outside of the maritime usage. Maritime containers have strong structural capabilities enabling their stacking, often up to three full and five empty high in port terminals, and even higher in large ships. Figure 10 depicts the wide exploitation of their stacking capabilities in a temporary storage zone of a port. In complement to their quite standard dimensions, they have standardized handling devices to ease their manipulation. As illustrated in Figure 11, this has led to the development and exploitation of highly specialized handling technologies for loading them in ships and unloading them from ships, to move them around in port terminals and to perform stacking operations. As emphasized in Figure 3, carriers may encapsulate goods directly such as in the examples from the lumber and car industries provided in Figure 12. Yet in most cases, they transport goods already encapsulated at a previous tier. Figure 13 illustrates semi-trailers encapsulating pallets of cases, with much better filling ratio in the left side example than in the right side example. Indeed the right side represents a typical case where the pallets and cases are such that pallets cannot be stacked on top of each other in the semi-trailer, leading to filling ratios on the order of 60% in weight and volume.
Shipping containers are ever more used in multimodal contexts, as illustrated in Figure 14, where they are encapsulated on a semi-trailer, on railcars, and on a specialized container ship.
Figure 14. Shipping containers carried on semi-trailer, train and ship Sources: www.tamiya.com, www.kvtransport.gr and www.greenship.com
Proposed three-tier modular design of Physical Internet containers
The Physical Internet concept proposes to replace by standard and modular π-containers all various packages, cases, totes and pallets currently exploited in the encapsulation tiers one to four of Figure 3. Yet clearly, these must come in various structural grades so as to cover smartly the vast scope of intended usage.
In a nutshell, it is proposed as depicted in Figure 15 that three types of π-containers be designed, engineered and exploited: transport containers, handling containers and packaging containers. The transport containers are an evolution of the current shipping containers exploited in encapsulation tier 4. The handling containers replace the basic handling unit loads and pallets exploited in encapsulation tiers 2 and 3. The packaging containers transform the current packages of encapsulation tier 1. These are respectively short-named T-containers, H-containers and P-containers in this paper.
Transport containers
Transport containers are functionally at the same level as current shipping containers, yet with the upgrading generic specifications of π-containers. T-containers are thus to be world-standard, modular, smart, eco-friendly and designed for easing interconnected logistics.
T-containers are to be structurally capable of sustaining tough external conditions such as heavy rain, snowstorms and tough seas. They are to be stackable at least as many levels as current shipping containers.
From a dimensional modularity perspective, their external height and width are to be 1,2m or 2,4m while their external length is to be 12m, 6m, 4,8m, 3,6m, 2,4m or 1,2m. These dimensions are indicative only and subject to further investigations leading to worldwide satisfactory approval. The thickness of T-containers is also to be standard so as to offer a standard set of internal dimensions available for embedded H-containers. In order to generalize the identification and external dimensions of T-containers, it is proposed to define them according to their basic dimension, specified above as 1,2m. This basic dimension is corresponding to a single T unit. As detailed in Table 1, a T-container whose length, width and height are 1,2m, as shown in Figure 15, is to be identified formally as a T.1.1.1 container. Similarly, a 6m long, 1,2m wide, 1,2m high T-container can be identified as a T.5.1.1 container.
As can be seen in Table 1, the majority of T-container volumes are unique, with two being the maximum number of distinct T-container dimensions having the same volume. This has lead to a way to shorten T-container identification, indeed the short name in the second column of Table 1.
According to this naming, the formally named T.1.1.1 container of Figure 15 is short named a T-1 container, due to its unitary volume, and the short name for a T.5.1.1 container is T-5. The short name for the T.5.2.2 container of Figure 17 is T-20S to distinguish it from the T-20L short name for the T.10.2.1 container that is the only other T-container with a volume of 20 T units. The suffixes L and S respectively refer to long and short.
Table 1.
Identification and external dimensions of T-containers
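The content of Table 1 can be reproduced programmatically from the naming rules described above. The sketch below is only an illustration of those rules; it assumes, as the examples T.5.1.1, T.5.2.2 and T.10.2.1 suggest, that the length is the longest side and that width and height are restricted to factors 1 and 2.

```python
from collections import Counter

BASE_T = 1.2                            # base dimension b^T in metres (one T unit)
LENGTH_FACTORS = [1, 2, 3, 4, 5, 10]    # external lengths of 1,2 m up to 12 m
SECTIONS = [(1, 1), (2, 1), (2, 2)]     # (width, height) factors, i.e. 1,2 m or 2,4 m sections

containers = [(l, w, h, l * w * h)
              for l in LENGTH_FACTORS
              for (w, h) in SECTIONS
              if l >= w]                # keep the length as the longest side

volumes = Counter(v for *_, v in containers)
for l, w, h, v in sorted(containers, key=lambda c: (c[3], -c[0])):
    if volumes[v] == 1:
        suffix = ""                     # unique volume: short name is simply T-<volume>
    else:
        longest = max(c[0] for c in containers if c[3] == v)
        suffix = "L" if l == longest else "S"   # e.g. T-20L = T.10.2.1, T-20S = T.5.2.2
    print(f"T.{l}.{w}.{h} -> T-{v}{suffix}: "
          f"{l * BASE_T:.1f} x {w * BASE_T:.1f} x {h * BASE_T:.1f} m")

print(len(containers), "T-container formats under these assumptions")
```

Under these assumptions the enumeration yields 16 T-container formats, with at most two distinct shapes sharing the same volume, consistent with the L/S suffix convention described above.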
Formally, the modular dimensions of a T-container can be expressed as:

$$d_f^{Te} = f\,b^T \quad \forall f \in F^T \qquad (1)$$
$$d_f^{Ti} = d_f^{Te} - 2t^T \quad \forall f \in F^T \qquad (2)$$

Where
d_f^{Te}: External dimension of a T-container side of factor f
d_f^{Ti}: Internal dimension of a T-container side of factor f
b^T: Base dimension of a T-container, here 1,2m
t^T: Standard thickness of a T-container
F^T: Set of modular dimension factors f for T-containers, here exemplified as {1; 2; 3; 4; 5; 10}.

Each side is shown to have an internal surface protecting visually and materially the content. Here the surface is drawn in dark red. As all subsequent renderings, the conceptual rendering of Figure 16 is not to be interpreted as a specific specification but rather simply as a way to illustrate the concept in a vivid way. Much further investigation and engineering work are required prior to freezing such specifications.

Figures 22 and 23 illustrate handling solutions adapted to the Physical Internet in general and to dealing with T-containers in particular. Figure 22 shows a semi-trailer containing several T-containers that is first backed up to a docking station so as to unload one T-container and load another T-container. The T-container to be unloaded is side-shifted onto a π-conveyor and moved inside the logistic facility. Then the T-container to be loaded is side-shifted from a π-conveyor on the other side onto the semi-trailer where it is interlocked with the adjacent T-container on the semi-trailer for secure travel. Figure 23 shows how a π-adapted stacker similar to those in current port terminals can be used to load a T-container on a semi-trailer.

Transport containers have been described above in much detail so as to make vivid the distinctions and similarities with current containers. In the next sections, the handling and packaging containers are described in a more compact fashion, emphasizing only the key attributes as the essence of the proposed changes is similar to transport containers.
Handling containers
Handling containers are functionally at the same level as current basic handling unit loads such as cases, boxes and totes, yet with the upgrading generic specifications of π-containers. Handling containers are conceptually similar to transport containers, as they are both Physical Internet containers. They could be designed to look like the T-containers shown in Figures 17 and 18. In order to contrast them with T-containers, in this paper H-containers are displayed as in Figure 2. Note that H-containers are also nicknamed π-boxes.
A key difference between transport and handling containers lies in the fact that H-containers are smaller, designed to modularly fit within T-containers and dry-bed trailers and railcars. Figure 26 illustrates a T-container filled with a large number of H-containers, depicting the exploitation of their modularity to maximize space utilization.
Figure 26. H-containers encapsulated in a T-container (sliced to show its content)
A second key difference is that they only have to be able to withstand rough handling conditions, mostly within facilities, carriers and T-containers. So they are structurally lighter, being less rugged than T-containers. They have to be stackable, at least up to the interior height of a T-container (see Figure 24), higher in storage facilities, yet less high than T-containers in ports. H-containers, given their inherent interlocking capabilities and their robust structure, are designed to support and protect their content without requiring pallets for their consolidated transport, handling and storage. The Modulushca project, under the leadership of Technical University of Graz, has designed and produced a first-generation prototype of H-containers. The prototype is depicted in Figure 27 and is thoroughly described in [START_REF] Modulushca | Modulushca Work Package 3 Final Report[END_REF]. It has the capability of interlocking with others located above and below it through an elaborate locking mechanism, yet does not allow sideway interlocking. Even though it currently does not have all the desired characteristics for H-containers, it is indeed a first step along an innovation journey toward ever better π-boxes. The Modulushca project is currently working on a second-generation prototype of H-containers.
From a dimensional modularity perspective, their dimensions are roughly to be on the order of series such as 1,2m, 0,6m, 0,48m, 0,36m, 0,24m and 0,12m or 1,2m, 0,8m, 0,6m, 0,4m, 0,3m, 0,2m and 0,1m. These dimensions are indicative only and subject to further investigations leading to worldwide satisfactory approval. Here a 1,2m dimension is meant to signify that it fits within a T-1 container as described in Table 1, taking into consideration the thickness of T-containers. So, based on the series above, one can generically use 1-2-3-4-5-10 and 1-2-3-4-6-8-12 series to describe H-containers. So, assuming a basis at 0,1m using the second series, a H.2.4.6 container refers to an approximately 0,2m * 0,4m * 0,6m box.
Formally, the modular dimensions of a H-container can be expressed as follows, assuming that the largest-size H-container has to fit perfectly in the smallest-size T-container:
$$b^H = \left(d_1^{Ti} - 2s^H\right)/f^H \qquad (3)$$
$$d_f^{He} = f\,b^H \quad \forall f \in F^H \qquad (4)$$
$$d_f^{Hi} = d_f^{He} - 2t^H \quad \forall f \in F^H \qquad (5)$$

Where
d_f^{He}: External dimension of a H-container side of factor f
d_f^{Hi}: Internal dimension of a H-container side of factor f
b^H: Base dimension of a H-container
t^H: Standard thickness of a H-container
s^H: Standard minimal maneuvering slack between T-container interior side and encapsulated H-containers
f^H: Maximum modular dimensional factor for a H-container
F^H: Set of modular dimension factors f for H-containers, here exemplified as {1; 2; 3; 4; 5; 10} or {1; 2; 3; 4; 6; 8; 12}.
As contrasted with the huge number of customized sizes of current cases, boxes and totes, the modular dimension factor sets limit strongly the number of potential H-container sizes. For example, Table 2 demonstrates that exploiting the set {1; 2; 3; 4; 6; 8; 12} leads to a set of 84 potential H-container sizes, each composed of six modular sides from a set of 28 potential modular-size sides. It is not the goal of this paper to advocate using all these modular sizes in industry or to rather trim the number of modular sizes to a much lower H-container set. This is to be the subject of further research and of negotiations among industry stakeholders. Basically, a larger set enables a better fit of goods in H-containers yet induces more complexity in manufacturing, deploying, flowing and maintaining π-boxes. The exploitation of modular sides attenuates this complexity hurdle. Meller et al. (2012) and [START_REF] Meller | A decomposition-based approach for the selection of standardized modular containers[END_REF] provided optimization-based empirical insights relative to the compromises involved in setting the portfolio of allowed handling container sizes (a short enumeration sketch of these modular sizes is given at the end of this subsection).

H-containers can also be fitted with functional accessories such as snap-on wheels and snap-on handles. Furthermore, if the wheels are motorized and smart, then when snapped to the H-container, the set becomes an autonomous vehicle. Specific company-standardized handling containers are already in use in industry, with significant impact. Figure 31 provides an example in the appliance industry. It depicts appliances encapsulated in modular handling containers. It allows moving several of them concurrently with a lift truck by simply clamping them from the sides. It also allows storing them in the distribution center without relying on storage shelves, indeed by simply stacking them. The Physical Internet aims to generalize and extend such practices through world standard H-containers designed for interconnected logistics.
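The enumeration sketch announced above is given here. It simply counts the multisets of three modular factors (container sizes) and of two modular factors (side formats) for the {1; 2; 3; 4; 6; 8; 12} series, recovering the 84 sizes and 28 sides of Table 2; the 0,1 m base is the nominal value used in the text and ignores the slack and thickness corrections of equations (3)-(5).

```python
from itertools import combinations_with_replacement

BASE_H = 0.1                        # nominal base dimension in metres (slack and wall thickness ignored)
FACTORS_H = [1, 2, 3, 4, 6, 8, 12]  # modular dimension factor set F^H

sizes = list(combinations_with_replacement(FACTORS_H, 3))  # unordered (length, width, height) triplets
sides = list(combinations_with_replacement(FACTORS_H, 2))  # unordered modular side formats

print(len(sizes), "potential H-container sizes")     # 84, as in Table 2
print(len(sides), "potential modular side formats")  # 28

# Example from the text: the H.2.4.6 container
f1, f2, f3 = 2, 4, 6
print(f"H.{f1}.{f2}.{f3} nominal external dimensions: "
      f"{f1 * BASE_H:.1f} x {f2 * BASE_H:.1f} x {f3 * BASE_H:.1f} m")
```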
Packaging containers
Packaging containers, short named P-containers or π-packs, are functionally at the same level as current goods packages embedding unit items for sales, as shown in Figure 3, the kind seen displayed in retail stores worldwide.
P-containers are Physical Internet containers as T-containers and H-containers, with the same generic characteristics. Yet there are three key characteristics that distinguish them.
1. The need for privacy is generally minimal as, to the contrary, goods owners want to expose the product, publicity and instructions to potential buyers. 2. The need for robust protection of their embedded goods is lowest as the H-containers and Tcontainers take on the bulk of this responsibility; so they are to be lightest and thinnest amongst Physical Internet containers. 3. The need for handling and sorting speed, accuracy and efficiency is maximal as they encapsulate individual product units.
Figure 32 illustrates the concept of π-packs as applied to cereals, toothpaste and facial tissues. The π-packs are here composed of display sides, reinforced standard tiny edges and corners acting as interfaces with handling devices, and they have modular dimensions. Figure 33 purposefully exhibits a toothpaste dispenser being loaded into a P-container, looking at first glance just like current toothpaste boxes on the market. Yet the P-container characteristics described above simplify very significantly the efforts and technologies necessary to move, pick, sort, and group them at high speed. For example, they enable improved A-frame technologies, cheaper and more efficient, or innovative alternative technologies. As illustrated in Figure 34, the dimensional modularity of π-packs enables their space-efficient encapsulation in H-containers for being flowed through the multiple distribution channels, all the way to retail stores, e-drives or households.
Figure 34. Multiple modular P-containers efficiently encapsulated in a H-container

From a dimensional perspective, P-containers are in the same realm as H-containers, yet are not generally expected to go as large as the largest 1,2m*1,2m*1,2m H-containers. So, given that their bases are in the same order, P-containers are to have dimensional factors of series such as 1-2-3-4-5 or 1-2-3-4-6-8, in line with yet shorter than those of H-containers.
Formally, the modular dimensions of a P-container can be expressed as follows, assuming that the smallest-size P-container has to fit perfectly in the smallest-size H-container:
$$b^P = d_1^{Hi} - 2s^P \qquad (6)$$
$$d_f^{Pe} = f\,b^P \quad \forall f \in F^P \qquad (7)$$

Where
d_f^{Pe}: External dimension of a P-container side of factor f
b^P: Base dimension of a P-container
s^P: Standard minimal maneuvering slack between H-container interior side and encapsulated P-containers
f^P: Maximum modular dimensional factor for a P-container
F^P: Set of modular dimension factors f for P-containers, here exemplified as {1; 2; 3; 4; 5} or {1; 2; 3; 4; 6; 8}.
Note that the need for standardizing the thickness and internal dimensions of P-containers is debatable, explaining why it is omitted in the above formalization. For the Physical Internet itself, standardization is functionally not necessary. It is necessary for T-containers as H-containers must modularly fit within them, and for H-containers as P-containers must similarly fit modularly within them. Only goods are to be encapsulated in P-containers. Variability in thickness may allow adjusting it to provide adequate protection to the encapsulated goods. On the other hand, standardizing the thickness of P-containers provides a strong advantage in guiding and aligning product designers worldwide with a fixed set of usable space dimensions within the P-containers in which their products are to be encapsulated.
Conclusion
The three-tier characterization of transport, handling and packaging containers proposed for the Physical Internet enables generalizing and standardizing unit load design worldwide, moving away from the single-organization-centric unit load design enshrined in textbooks such as [START_REF] Tompkins | Facilities planning[END_REF].
It offers a simple and intuitive framework that professionals from all realms and disciplines can readily grasp. It simplifies unit load creation and consolidation. It is a bearer of innovations that are to make transshipment, crossdocking, sorting, order picking and so on much more efficient. This is true within a type of container as well as across types, notably enabling significant improvements in the space-time utilization of transportation, handling and storage means.
The proposed three-tier characterization also catalyzes a shift from the current paradigm of dimensioning the packaging to fit individual products, which leads to countless package dimensions, towards a new paradigm where product dimensioning and packaging dimensioning and functionality are adapted to modular logistics standards.
There are strong challenges to the appropriation by industry of the modular transport, handling and packaging containers. These challenges span technical, competitive, legacy and behavioral issues. For example, there must be consensus on the base dimensions and factor series for each type of container. There must also be consensus on the standardized thickness of T-containers and H-containers. Container thickness, weight and cost must be controlled in order to minimize wasted space and lost loading capacity, and to make the containers profitably usable in industry. The same goes for the handling connectors (allowing snapping and interlocking), with respect to their cost, size, ease of use and position on the containers of each type.
Beyond the containers themselves, there must be engagement by the material handling industry to create the technologies and solutions capitalizing on the modular three-tier containers. Similarly, the vehicle and carrier (semi-trailer, railcar, etc.) industry must also become engaged. New types of logistics facilities are to be designed, prototyped, implemented and operationalized to enable seamless, fast, cheap, safe, reliable, distributed, multimodal transport and deployment of the three interconnected types of π-containers across the Physical Internet. Indeed, the proposed characterization opens a wealth of research and innovation opportunities and challenges to both academia and industry.
Figure 2. Conceptual design illustrating the modularity and the composition functionality of π-containers (Source: original design by Benoit Montreuil and Marie-Anne Côté, 2012)
Figure 3. Current encapsulation practice characterization
Figure 3. Illustrative consumer goods packaging (Source: www.bestnewproductawards.biz, 2012)
Figure 4. Cardboard cases used as handling unit loads (Source: www.ukpackaging.com)
Figure 6. Illustrating the collapsible and stackable capabilities of some returnable plastic containers (Source: www.pac-king.net and www.ssi-schaefer.us)
Figure 7. Two extreme examples of cases grouped as a unit load on a pallet (Source: www.123rf.com and www.rajapack.co.uk)
Figure 8. Pallets handled by forklift, walkie rider and AS/RS system (Source: www.us.mt.com, www.chetwilley.com and www.directindustry.fr)
Figure 9. A shipping container
Maritime containers have strong structural capabilities enabling their stacking, often up to three full and five empty high in port terminals, and even higher in large ships. Figure 10 depicts the wide exploitation of their stacking capabilities in a temporary storage zone of a port.
Figure 10. Stacked shipping containers
Figure 11. Shipping-container adapted handling equipment in port operations
Figure 12. Semi-trailers carrying logs and cars directly without further encapsulation (Sources: www.commercialmotor.com/big-lorry-blog/logging-trucks-in-new-zealand and en.wikipedia.org/wiki/Semi-trailer_truck)
Figure 15. Proposed Physical Internet encapsulation characterization
Figure 16 depicts a conceptual rendering of a T-1 container. It shows its sides to be identical. Each side is represented as having a frame composed of an internal X-frame coupled to an external edge-frame. Each side is shown to have five standard handling interfaces represented as black circles. Each side is shown to have an internal surface protecting the content visually and materially; here the surface is drawn in dark red. Like all subsequent renderings, the conceptual rendering of Figure 16 is not to be interpreted as a definitive specification, but rather simply as a way to illustrate the concept in a vivid way. Much further investigation and engineering work are required prior to freezing such specifications.
Figure 16. Illustrating a 1,2m long, wide and high transport container: T.1.1.1 or T-1 container
Figure 17. Illustrating a 6m-long, 1,2m-wide, 1,2m-high transport container: T.5.1.1 or T-5 container
Figure 19. Modular spectrum of T-container sizes from T.1.1.1 (T-1) to T.10.2.2 (T-40)
Figure 21. T-containers carried on π-adapted flatbed trucks and semi-trailers
Figure 22. Conveyor-based unloading and loading of T-containers from a semi-trailer
Figure 24. Modular T-containers loaded on adapted flatbed π-railcars
Figure 27. H-container prototyped in 2014 in the Modulushca project (Source: www.modulshca.eu)
Figure 28. Composite H-container moved (1) snapped to a forkless lift truck and (2) using snapped wheels, manually or autonomously if they are motorized and smart (Source: Montreuil et al., 2010)
Figure 30. H-containers stacked and snapped to a modular storage grid (Source: Montreuil et al., 2010)
Figure 32. Illustrative consumer-focused P-containers
Table 2. Set of 84 H-container sizes using the {1; 2; 3; 4; 6; 8; 12} modular factor set and a set of 28 modular side sizes

Figures 28 to 30, sourced from [START_REF] Montreuil | Towards a Physical Internet: the impact on logistics facilities and material handling systems design and innovation[END_REF], highlight the potential for innovative handling technologies exploiting the characteristics of H-containers. Figure 28 shows that π-boxes do not require pallets to be moved, even a composite π-box, as the handling vehicle can have devices enabling it to snap, lift and carry the H-container. It also shows that wheels can be easily snapped underneath a π-box so that a human handler or a mobile robot can readily carry it, potentially using
Identification of H-container sides: side dimensions and number of each one
H-Container   X   Y   Z
1 1 1 1
2 1 1 2
3 1 1 3
4 1 1 4
5 1 1 6
6 1 1 8
7 1 1 12
8 1 2 2
9 1 2 3
10 1 2 4
11 1 2 6
12 1 2 8
13 1 2 12
14 1 3 3
15 1 3 4
16 1 3 6
17 1 3 8
18 1 3 12
19 1 4 4
20 1 4 6
21 1 4 8
22 1 4 12
23 1 6 6
24 1 6 8
25 1 6 12
26 1 8 8
27 1 8 12
28 1 12 12
29 2 2 2
30 2 2 3
31 2 2 4
32 2 2 6
33 2 2 8
34 2 2 12
35 2 3 3
36 2 3 4
37 2 3 6
38 2 3 8
39 2 3 12
40 2 4 4
41 2 4 6
42 2 4 8
43 2 4 12
44 2 6 6
45 2 6 8
46
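The counts announced in the caption of Table 2 follow from the combinatorics of the modular factor set: the distinct H-container sizes correspond to the unordered (X, Y, Z) factor triples and the distinct side sizes to the unordered factor pairs. The minimal Python sketch below reproduces these counts; reading the table rows as unordered triples drawn from {1; 2; 3; 4; 6; 8; 12} is an assumption, though it matches the listing above.

```python
# Sketch of the combinatorics behind Table 2, assuming the H-container factor
# set {1, 2, 3, 4, 6, 8, 12}: container sizes are unordered (X, Y, Z) factor
# triples, side sizes are unordered (X, Y) factor pairs.
from itertools import combinations_with_replacement

factors = (1, 2, 3, 4, 6, 8, 12)

container_sizes = list(combinations_with_replacement(factors, 3))
side_sizes = list(combinations_with_replacement(factors, 2))

print(len(container_sizes))   # 84 modular H-container sizes
print(len(side_sizes))        # 28 modular side sizes
for index, (x, y, z) in enumerate(container_sizes[:8], start=1):
    print(index, x, y, z)     # 1 1 1 1, 2 1 1 2, ... matching the first table rows
```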
Acknowledgements
The authors thank the Québec Strategic Grant Program, through the LIBCHIP project, and the European FP7 Program, through the Modulushca project, for their support. | 39,705 | [
"766191",
"10955"
] | [
"94189",
"39111",
"97391"
] |
01487298 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2015 | https://shs.hal.science/halshs-01487298/file/Chapron%20The%20%C2%AB%C2%A0supplement%20to%20all%20archives%C2%A0%C2%BB.pdf | Eds B Delmas
D Margairaz
D Ogilvie
Mutations de l'État, avatars des archives
national' libraries, at a time when these were being institutionalised as central repositories and became the 'natural' place for the conservation of collections whose high political or intellectual value required that they be preserved 2. Hence, from the final decades of the seventeenth century onwards, the Bibliothèque Royale de Paris incorporated several dozen private libraries belonging to scholars and senior government officials, which were rich in ancient manuscripts, transcribed texts and extracts from archival repositories: in such a way, as lawyer Armand-Gaston Camus, archivist of the Assemblée Nationale in revolutionary France, later said, it came to form « the supplement to all archives and charter repositories » 3.

« Archival turn » is the term now commonly used to describe the new interest that historians are showing in past modalities of selecting, classifying and transmitting the documents that they use in archives 4. Resulting from a partnership between historians and archivists, the history of the 'making of archives' will help achieve a better understanding of how history was written -and still is 5. Historians' increasing reflexivity with regard to their own documentary practices, however, has had comparatively less impact on the history of libraries. The first reason for such neglect undoubtedly lies in a certain conception of the historian's profession, which is essentially conceived as work on archives and in archives. In France, historians today are still heirs to the 'big divide' which resulted from the debates, during the second half of the nineteenth century, between the former Royal, later Imperial, Library and the National Archives. At a time when the historian's profession was being redefined around the use of authentic, unpublished sources, archivists educated in the recently created École des Chartes (1821) availed themselves of the founding law on archives of Messidor Year II to claim that archives were the true repository of all « true sources of national history » 6. Notwithstanding the archivists' claim, the Imperial Library did not give up all its treasures and maintained its status as repository of the historical

2 F. Barbier, "Représentation, contrôle, identité: les pouvoirs politiques et les bibliothèques centrales en Europe, XV e -XIX e siècles", Francia, 26, (1999): 1-22.
3 A. G. Camus, "Mémoire sur les dépôts de chartes, titres, registres, documents et autres papiers… et sur leur état au 1 er nivôse de l'an VI", cited in F. Ravaisson, Rapport adressé à S. Exc. le ministre d'État au
English abstract
In the early modern period, libraries were probably the most important place of work for historians. They were used as a kind of archive, where historians could find all sorts of records, be they original documents or copies. Based on the case of the Royal Library in eighteenth-century Paris, this study aims to investigate the chain of documentary acts which gave it a para-archivistical function -which it retains to this day. First of all, I will discuss the constitution of scholars' and bureaucrats' private collections and their incorporation in the Royal Library from the final decades of the seventeenth century onwards; then the various operations of classifying, cataloguing and filing that blurred the initial rationales of the 'archive avatars' developed by previous owners; finally the uses of this peculiar material, be they documentary (by scholars or royal officials) or pragmatic (by families wishing to clarify their genealogy or private individuals involved in court cases).
In the early modern period, libraries were probably the most important place of work for historians.
They were places where scholars went to look not just for the printed works and handwritten narrative sources they needed, but also for all sorts of other records produced during the everyday activities of secular and ecclesiastic institutions 1. Mediaeval charters, ambassadors' reports, ministerial correspondence and judicial records can still all be found in Ancien Régime libraries, be they original documents or copies. The fact that they are kept there has nothing to do with the ordinary life of archives belonging to a given administration; it is the result of two successive operations. First, these items were generally part of large individual or family collections, compiled by royal officers or scholars for use in their everyday activities. In turn, a certain number of these collections were later donated to central

1 Historiography has put into proper perspective the opposition between scholarly history which turned to the source, and eloquent history which was considered to be less in compliance with the rules of scholarship. C. Grell, L'histoire entre érudition et philosophie. Étude sur la connaissance historique à l'âge des Lumières (Paris: PUF, 1993). I would like to thank Maria-Pia Donato, Filippo de Vivo and Anne Saada for their time and precious comments.
records it had acquired in the Ancien Régime and during the Revolution 7 . Yet, its function changed as emphasis shifted toward providing access to printed books for a wider public. As a result, the mass of documents it contained that might easily be described as archival items fell into a sort of forgotten area in the mental landscape of contemporary historians.
The way in which we nowadays write the history of libraries provides a second reason. In this field, primacy is given to ancient manuscripts and printed books; this means that library and information experts tend to forget the diversity of written and non-written material contained in libraries -exotic objects, collections of antiques, scientific instruments -and the various uses they had 8. Whilst monographs devoted to early modern national libraries mention private collections that have been bequeathed or purchased, they offer little insight into the political or intellectual rationale behind these acquisitions. Yet, as I shall claim in this article, the way these collections were sorted, classified, inventoried and even -at some point -returned to public archives helps throw light on the slow and mutually dependent emergence of archival and library institutions in early modern States 9. The concentration of the history of reading on private contexts, and the lack of sources on reading in public libraries, add to our poor understanding of how libraries were used as archives. In this article I wish to redress this problem, taking France as a case study.
I. « ALL OF THE STATE'S SECRETS »
1782 saw the publication of an Essai historique sur la bibliothèque du Roi et sur chacun des dépôts qui la composent, avec la description des bâtiments, et des objets les plus curieux à voir dans ces différents dépôts 10. Its author, Nicolas Thomas Le Prince, who was employed in the Bibliothèque royale as caretaker of the legal deposit, devised his book as a visitor's guide for the curious and for those travellers who came to admire the library. He walks the reader through the rooms, describes the paintings, and discusses the various 'sections' of the library's administrative organisation (printed books, manuscripts, prints and engravings, deeds and genealogies, medals and antiquities); last but not least, he provides details on the private collections which have been added over time to the huge manuscript holdings. To describe these collections, Le Prince consistently used the word 'fonds', which is still used in modern archival jargon to indicate the entire body of records originating from an office or a family. This is accurate because the collections originated precisely from families and, as we shall see, it reflects the closeness of archives and libraries at the time. For eighteen of these collections (followed by a dozen smaller collections, more rapidly presented), Le Prince provides all available information on the identity of the former owner, the history of the collection, the conditions under which it became part of the Bibliothèque royale and the material description of the volumes (binding, ex libris). He pays little attention to the literary part of these collections, although most were rich in literary, scientific and theological treasures. For the most part, he focuses on listing the resources that each offered prospective historians, on the nature and number of original documents and charters, and on the quality of copies.

7 Article 12 of the law dated Messidor Year II specifies the documents to be deposited within the National Library: « charters and manuscripts belonging to history, to the sciences and to the arts, or which may be used for instruction ». The 1860s debates on the respective perimeters of the two institutions led to just a few ad hoc exchanges.
8 Even if some of these components have been properly examined. T. Sarmant, Le Cabinet des médailles de la Bibliothèque nationale (Paris: École des chartes, 1994).
9 See the procedures adopted in the Grand Duchy of Tuscany: E. Chapron, Ad utilità pubblica. Politique des bibliothèques et pratiques du livre à Florence au XVIII e siècle (Geneva: Droz, 2009), 224-261.
10 N. T. Le Prince, Essai historique sur la Bibliothèque du Roi, et sur chacun des dépôts qui la composent, avec la description des bâtimens, et des objets les plus curieux à voir dans ces différens dépôts [Historical essay on the King's library and on each of the repositories of which it is comprised, with the description of the buildings and of the most curious objects to be seen in these various repositories] (Paris, Bibliothèque du roi, 1782).
In other words, Le Prince's presentation invited the reader to consider the Bibliothèque royale primarily in its role as a 'public repository', in the sense of a place designed to preserve authentic archives, deeds and legal instruments which may be needed as evidence for legal purposes 11. The very use of the term 'repository' (dépôt) to designate the library's departments, whilst not unusual, is sufficiently systematic in his book to be meaningful. Like the word 'fonds', which he used for the single collections, the term derives from the field of archives, usually called 'public repository' at this time 12. Alongside the minutely detailed enumeration of all authentic items it preserved, Le Prince repeatedly underlines the Bibliothèque Royale's role as a repository for 'reserve copies'. The copies made in Languedoc at Colbert's orders and brought together in the Doat collection acquired in 1732, for instance, made it possible « to find an infinite number of deeds which might have been mislaid, lost or burned », especially as « these copies made and collated by virtue of letters patent can, if so required, replace the very acts from which the copies were made » 13. In his notes on the collection of Mégret de Sérilly, the Franche-Comté intendant who sold it to the king in 1748, Le Prince points out that, because of a fire at the Palais de Justice in 1737, « the original documents used to make these copies [the registers of the Cour des Aides up until 1717] were partly burned or seriously damaged, to the extent that the copies now replace the originals, and by virtue of this disastrous accident have now become priceless » 14. At the same time, the library description stresses the existence of numerous old notarial instruments, fallen into abeyance and henceforth devoid of legal value, but now of documentary interest and for this reason made available to scholars. Le Prince goes as far as compiling a somewhat imperfect yet innovative work tool for future historians: a long « list of the charters, cartularies etc. of French churches and other documents from the various collections in the manuscript department » 15. Finally, he further signals valuable materials to historians, as in the Duchesne collection that allegedly contained « an infinite number of records which have not yet been used and which might usefully serve those working on the history of France and on that of the kingdom's churches » 16.

11 F. Hildesheimer, "Échec aux archives: la difficile affirmation d'une administration", Bibliothèque de l'École des chartes, 156, (1998), 91-106.
12 "Dépôt public", Encyclopédie, ou Dictionnaire raisonné des sciences, des arts et des métiers, 35 vols. (Paris: Briasson, 1751-1780), 4: 865.
13 Le Prince, Essai historique, 267.
14 Le Prince, Essai historique, 214.
Hence, due to this mix of 'living' and 'dead' archives, of authentic instruments and artefacts, the Bibliothèque royale held a singular place in the monarchy's documentary landscape. It differed from the 'political' repositories which came into being in the second half of the seventeenth century, initially in the care of Louis XIV's senior officials and later, at the turn of the century, in a more established form in large ministerial departments (Maison du Roi, Foreign Affairs, War, Navy, General Control of Finances) the main scope of which was to gather documentation that might assist political action 17 . Around the same period, the Bibliothèque royale increased considerably by incorporating numerous private collections.
The latter originated in the intense activity of production, copy and collection of political documentation carried out in scholarly and parliamentary milieus between the seventeenth and eighteenth centuries.
Senior State officials collected documents for their own benefit, that is, not just papers relating to their own work (as was the custom through to at least the end of the seventeenth century), but also any kind of documentation likely to inform their activities in the service of the monarchy. The poor state or even total abandonment in which the archives of certain institutions were left explains the considerable facility with which old records and registers could find their way into private collections 18 . The major part of the collections was nevertheless made up of copies or extracts 19 . Powerful and erudite aristocrats and officials such as Louis-François Morel de Thoisy or Gaspard de Fontanieu hired small groups of clerks who were tasked with copying -« de belle main » and on quality paper -all the documents they deemed to be of 15 I. Vérité, "Les entreprises françaises de recensement des cartulaires", Les Cartulaires, eds. O. Guyotjeannin, L. Morelle and M. Parisse (Paris: École des chartes, 1993), 179-213. 16 Le Prince, Essai historique, 333. 17 Hildesheimer, "Échec aux archives". 18 For example, M. Nortier, "Le sort des archives dispersées de la Chambre des comptes de Paris", Bibliothèque de l'École des chartes, 123, (1965), 460-537. 19 This is also the case with scholarly collections, such as that of Étienne Baluze. In the bundles relating to the history of the city of Tulle (now Bibliothèque nationale de France [BnF], Baluze 249-253), Patricia Gillet calculated that 42% of the documents were copies made by or for Baluze, 21% were originals, 10% were authentic old copies, the rest being printed documents, leaflets, work papers and letters addressed to Baluze (P. Gillet, Étienne Baluze et l'histoire du Limousin. Desseins et pratiques d'un érudit du XVII e siècle (Genève: Droz, 2008), 141). potential use 20 . They also copied original documents of their collections, when they were ancient and badly legible, so as to produce properly ordered and clean copies, bound in volumes, whilst the original documents were kept in bundles. These collections were not repositories of curiosities; they were strategic resources for personal political survival and for the defence of the State's interests, at a time when the king's Trésor des Chartes had definitively fossilised in a collection of ancient charters and acts, and the monarchy had no central premises in which to store its modern administrative papers 21 . In the 1660s, when Hippolyte de Béthune gifted to the king the collection created by his father Philippe, a diplomat in the service of Henri III and Henri IV, Louis XIV's letters of acceptance underlined the fact that the collection contained « in addition to the two thousand original documents, all of the State's secrets and political secrets for the last four hundred years » 22 . Similarly, the collection compiled during the first half of the seventeenth century by Antoine and Henri-Auguste de Loménie, Secretaries of State of the Maison du Roi (the King's Household), constituted a veritable administrative record of the reigns of Henri IV and Louis XIII. To facilitate their work at the head of this tentacular office, father and son collected a vast quantity of documents relating to the provinces of the kingdom, to the functioning of royal institutions and to the sovereign's domestic services since the Middle Ages, in addition to the documents drawn up during the course of their duties 23 . 
Despite the establishment of administrative repositories at the turn of the eighteenth century, this type of collection continued to be assembled until the French Revolution. Thus, Guillaume-François Joly de Fleury, attorney-general at the Paris parliament (1717-1756), installed the Parquet archives and a large collection of handwritten and printed material in his mansion 24 .
Scholarly collections had close links with these 'political' collections. A certain number of scholars, often with a legal background, moved in royal and administrative circles, participated in the creation of 'professional' collections, and gathered significant amounts of documents for themselves. Caroline R.
Sherman has shown how, since the Renaissance, erudition became a family business for the Godefroys, the Dupuys or the Sainte-Marthes. The creation of a library made it possible to transmit 'scholarly capital' 20 Morel de Thoisy, counsellor to the king, treasurer and wage-payer at the Cour des Monnaies, gave his library to the king in 1725. On the clerks he employed, BnF, Clairambault 1056, fol. 128-156. The Mémoire sur la bibliothèque de M. de Fontanieu (sold in 1765 by its owner, maître des requêtes and intendant of Dauphiné) mentions « the work of four clerks he constantly employed over a period of fourteen or fifteen years » (published in H. Omont, Inventaire sommaire des portefeuilles de Fontanieu (Paris: Bouillon, 1898), 8-11). 21 O. Guyotjeannin and Y. Potin, "La fabrique de la perpétuité. Le trésor des chartes et les archives du royaume (XIII e -XIX e siècle)", Revue de synthèse, 125, (2004), 15-44. from one generation to the next: sons were trained by copying documents, filing bundles, making tables and inventories, and compiling extracts [START_REF] Sherman | The Ancestral Library as an Immortal Educator[END_REF] . Their collections were by no means disconnected from political stakes, as these scholars were involved simultaneously in compiling collections, caring for extant repositories, and defending royal interests. During the first half of the seventeenth century, brothers Pierre and Jacques Dupuy were employed to inventory the Trésor des Chartes and to reorganise Loménie de Brienne's collection, from which they were given the original documents as a reward for their work [START_REF] Solente | Les manuscrits des Dupuy à la Bibliothèque nationale[END_REF] .
II. PRIVATE COLLECTIONS IN THE KING'S LIBRARY.
The rationale behind the integration of a certain number of collections into the Bibliothèque royale from the 1660s onwards was obviously connected to these collections' political nature. Following the chronology established by Le Prince, the first wave of acquisitions coincided with the period in which Jean-Baptiste Colbert extended his control over the Bibliothèque royale. As Jacob Soll has pointed out, Colbert based his political action on the creation of an information system extending from the collection of data 'in the field', through their compilation and organisation, and onto their exploitation [START_REF] Soll | The information master: Jean-Baptiste Colbert's secret state intelligence system[END_REF] . His own library was an efficient work tool, a vast 'database' enhanced by documents collected in the provinces or copied by his librarians. The Bibliothèque royale was part of this information system. As early as 1656, while in Cardinal Mazarin's service, Colbert placed his protégés and friends there, including his brother Nicolas whom he placed in charge as head librarian. In 1666, the newly designated Controller-General of Finances ordered the books to be moved into two houses neighbouring his own in rue Vivienne. It was during the period between these two dates that the first collections of manuscript documents came into the library [START_REF] Dupuy Bequest | [END_REF] : the Béthune collection, gifted in 1662 by Hippolyte de Béthune, who considered that « it belonged to the king alone », and the Loménie de Brienne collection, which Jean-Baptiste Colbert recovered for the Bibliothèque royale after Mazarin's death [START_REF]The collection was transferred to Richelieu by Henri-Auguste de Loménie, and then passed on to Mazarin's library[END_REF] . In the library Jean-Baptiste Colbert employed scholars, such as the historiographer Varillas, to collate his own copy of the Brienne collection, based on a comparison with the originals in the king's Library, to explore the resources of the royal developed in this direction. After Gaignières' collection in 1715, Louvois' (1718), de La Mare's (1718), Baluze's (1719), and then Mesmes' (1731), Colbert's (1732), Lancelot's (1732) and Cangé's (1733) were either presented to or purchased by the Library. This acceleration ran parallel to the increasing authority of the royal establishment, which had come to coordinate the activities of the Collège royal, the royal academies, the Journal des Savants and the royal printing house und thus became somewhat of a « point of convergence for research » 31 . The idea that the purpose of the Bibliothèque royale was to conserve documentary corpuses relating to State interests would appear to have been widely shared by the intellectual and political elites. The initiatives of attorney-general Guillaume-François Joly de Fleury are indicative of this mind-set. Vigilant as he was on the repositories under his responsibility (Trésor des Chartes, archives of the Paris Parliament), he also took care to bring rich collections of historical material to the Bibliothèque royale 32 . In 1720, he made a considerable and personal financial effort to buy the collection of the Dupuy brothers, put up for sale by Charron de Ménars' daughters, because the Royal Treasury did not have enough funds to make the purchase. 
Yet, although « these manuscripts contain an abundance of important documents that the King's attorney-general cannot do without in the defence of the domain and rights of His Majesty's Crown », Joly de Fleury always saw them « less as his heritage, more as property which can only belong to the king, and considering himself lucky to have been able to conserve them, he always believed that His Majesty's library was the only place where they could be kept » 33 networks to attract donations and to find interesting manuscripts on the market. Between 1729 and 1731, among other historical items, they bought a collection of remonstrances of the Paris Parliament addressed to the king between 1539 and 1630 (almost certainly a copy, bought for 30 livres tournois), a collection of Philippe de Béthune's negotiations in Rome in the 1600s (bought from a bookseller for 20 livres tournois), and forty volumes of accounts of inspections to forests in the 1680s (bought for 1,000 livres tournois from a private individual) 35 . This representation of the Royal Library as repository of statesensitive papers was to be found in scholarly circles, although it did not elicit unanimity in a community which fostered the Republic of Letters' ideal of the free communication of knowledge. Upon the death of Antoine Lancelot (1675-1740), former secretary to the dukes and peers of France, secretary-counsellor to the king and inspector of the Collège royal, Abbott Terrasson wrote to Jamet, secretary to the Lorraine intendant, regretting that Lancelot had bequeathed his collections « to the abyss (to the king's library) », and added that « It would have been better shared among his curious friends[, but] you know as well as I his quirky habit of donating to the king's library, which he felt to be the unique repository in which all curiosities should lie » 36 .
The Bibliothèque royale thus acted as a vast collector of papers relating to the monarchy, authentic documents or artefacts, which should not be allowed to fall into foreign hands 37 . This acquisition policy raised three questions. The first concerned remuneration for the documents produced in the State's service. This was a lively subject of debate in negotiations relating to Colbert's library at the end of the 1720s. Abbot de Targny and academician Falconet, experts appointed by the king, refused to give an estimate for « modern manuscripts », i.e. « State manuscripts ». According to Bignon, experts would argue on the basis of « natural law, by virtue of which I believe that ministers' papers belong to the king and not to their heirs » 38 . His argument is consistent with the measures taken as from the 1670s to seize the papers of deceased senior government officials. But this position was by no means self-evident when faced with an heir who wished to make the most out of his capital 39 . Indeed, in Colbert's case as in others, the transfer of papers was eventually not the result of an actual purchase, but of a gift in return for which the king either offered a reward of a substantial sum of money or granted an office. When he sold his 35 BnF, Arch. adm. AR 65, 101, 137. 36 G. Peignot, Choix de testaments anciens et modernes (Paris: Renouard, 1829), 418. 37 When there were rumours that the de Thou library was to be put up for sale, the library's caretaker, Abbot de Targny, told Abbott Bignon how « important it was to take steps to ensure that [the manuscripts] are not lost to the king's Library ». A few days later he let it be understood that he did not believe « they should be seen to be too eager to acquire [them]. It is enough that we be persuaded that the King will not suffer should they pass into foreign hands » (BnF, Arch. Adm. AR 61, 89, 21 September 1719, and 92). 38 BnF, ms. lat. 9365, 188. Bignon to Targny, 10 October 1731. 39 Before the experts were appointed, he demanded 150,000 pounds for the government papers and 300,000 for the ancient manuscripts. BnF, ms. lat. 9365, 312. library in 1765, Gaspard de Fontanieu made a point of stating that the papers relating to his intendancies of Dauphiné and to the Italian army should not be considered as having been sold, but as gifted, « seeing his personal productions as the fruit of the honours bestowed upon him by His Majesty through the different employments with which he had been entrusted; and for this reason being persuaded that the said productions belonged to His Majesty » 40 .
Secondly, the confluence of old administrative documentation in the Bibliothèque royale raises the question of its relations with the existing archives of ministries. The competition was not as evident as it might seem, in so far as the function of those archives was not so much to preserve the memory of past policies, as to make available the documentation required for present and future political action. As for the old Trésor des Chartes, it was reduced to purchasing sets of documents linked directly to the king, and such opportunities were very infrequent 41 . Only the creation of the foreign affairs repository in the Louvre (1710) would appear to have had an effect on the management of documentary acquisitions. From this moment on, a portion of the acquisitions were destined or brought back to this repository, for instance from the Gaignières bequest (1715) or from the Trésor des Chartes de Lorraine (1739) 42 . Errors in destination were evidence of the uncertainty surrounding the exact boundaries of both institutions: in 1729, papers relating to the history of Burgundy collected by Maurist Dom Aubrée were taken to the Louvre repository, but in 1743 they were found to be « still on the floor, having not been touched since; these manuscripts are in no way of a nature to be kept in this repository where they will never be used, and their proper place is in the king's Library » 43 . Finally, distribution was not retroactive: in 1728, Foreign Affairs officials were sent to the Bibliothèque royale « to make copies of what was missing at the repository, in order to have a full set of documents from previous ministries » 44 . Interestingly, rather than requesting the originals preserved in the library, they made copies: their aim was to have complete sets of records in the ministry of Foreign Affairs, rather than originals, which they were happy to leave in the library.
Tensions sometimes arose in relation to the 'youngest' collections, filled with documents that were still sensitive. In 1786, Delaune, secretary to the Peers, asked to borrow Antoine Lancelot's portfolios which contained items relating to the Peerage. He wanted to compare them with the boxes at the Peerage 40 Fontanieu's library bill of sale published by Omont, Inventaire sommaire, 4. 41 Guyotjeannin, Potin, "La fabrique de la perpétuité". No study has yet been done on these eighteenth century entries. 42 repository and copy the items which were missing 45 . When consulted in relation to this request, Abbot Desaulnais (custodian of the printed books department in the Bibliothèque royale) very coldly responded that the portfolios assembled loose sheets for which there was no accurate inventory. This would not however be a reason to refuse the loan, were it not for the fact that among the documents were « essential items which are missing from the Peerage repository and which M. de Laulne [Delaune] claims were removed from this repository by the late M. Lancelot ». The secretary had already examined these files and had made notes, but « I did not want him to make copies, in order to allow you the pleasure, in different circumstances and where so required, of obliging the peers, whilst conserving at the repository whatever may be unique » 46 . This reluctance to allow people to make copies of documents kept in the Bibliothèque royale draws attention to a third aspect of this documentary economy: the idea that copying reduced the value of the original, because the copy was itself not without value 47 . The price of the copy was calculated according to the cost of producing it (in paper and personnel), to the nature of the items copied, and, above all, to the use to which it could be put. During negotiations relating to Colbert's library, one memorandum pointed out: how important it is for the King and for the State to prevent the said manuscripts from being removed, even if they are only copies. The copy made from the records of the Trésor des Chartes is extremely important, both because it may be unique, and because it contains secrets that must not be made [known] to foreigners, who are eager to acquire these sorts of items so that they might one day use them against us 48 .
III. THE « TASTE FOR ARCHIVES » IN LIBRARIES 49
The representation of the Bibliothèque royale as a public repository corresponds to the perception and documentary practices of its contemporaries. This is especially true of the Cabinet des Titres et Généalogies, which was a real resource centre for families, and of the manuscript section of the Library, which was frequented by all sorts of people. The registers of books loans and requests for documentation submitted to the secretary of State of the Maison du Roi (who formally supervised the Library) throw 45 BnF, Arch. adm., AR 56, 296. 46 BnF, Arch. adm., AR 56, 296. 47 This aspect also appears in Le Prince's Essai historique, in relation to the Brienne manuscripts « that we rightly regard as being very precious[, but which] would be of another value entirely, if copies did not exist elsewhere, which are themselves merely ordered copies of Dupuy's manuscripts ». 48 BnF, ms. lat. 9365, 313, "Mémoire sur la bibliothèque de M. Colbert", 15 December 1727. 49 A. Farge, Le goût de l'archive (Paris: Seuil, 1989). considerable light on these practices which reveal three types of relationship with the library: instrumental, evidentiary and scholarly 50 .
First and foremost, the Bibliothèque royale constituted an immense documentation centre for agents of the monarchy. Requests concerned the preparation of reference works, such as the Traité de la police published between 1705 and 1738 by Commissaire de La Mare -in 1721 he asked for the Châtelet registers « which were in the cabinet of the late Abbot Louvois and are now in the King's library » -, the publication of the Ordonnances des rois de France prepared by lawyer Secousse on the orders of chancellor d'Aguesseau, and the diplomatic memoirs of Le Dran, senior official at the foreign affairs archive in 1730, who consulted ambassadors' reports 51 . The library's resources were also mobilised in relation to more pressing affairs: in 1768, it was Ripert de Monclar, attorney-general at the parliament of Provence, commissioned to establish the king's rights over the city of Avignon and the Comtat Venaissin, who asked for relevant documents to be searched 52 . In a certain number of cases, recourse to clean and well-organised copies of major collections undoubtedly sufficed, or at least meant that the work was more rapidly completed.
The 'public repository' function likely to produce evidentiary documents is more evident in requests from families wishing to clarify their genealogy, or from individuals wanting to produce decisive evidence in a court case. The Bibliothèque royale was called upon in handwriting verification procedures and forgery pleas, for which parties had no hesitation in asking to borrow ancient documents 53 . In 1726, Mr. de Varneuille, cavalry officer in Rouen, asked to borrow the deeds signed in 1427 and 1440 by Jean de Dunois, the 'Bastard of Orleans', to help one of his friends « to support a plea of forgery which he made against the deeds used as evidence against him in his trial » 54 . In 1736, the count of Belle-Isle, in proceedings against Camusat, auditor of accounts, demanded that Philippe-Auguste's original cartulary be produced 55 . This conception of the library was so self-evident that some people asked for 'legalised' copies. To such requests, Amelot, secretary of State of the Maison du Roi at the end of the eighteenth 50 BnF, Arch. adm., AR 56, AR 123 (lending register, 1737-1759) and 124 (1775-1789). 51 BnF, Arch. adm., AR 56, 4, 25, 27, 29. N. de La Mare, Traité de la police, où l'on trouvera l'histoire de son établissement, les fonctions et les prérogatives de ses magistrats, toutes les loix et tous les règlemens qui la concernent, 4 vols. (Paris: Cot, Brunet, Hérissant, 1705-38). Ordonnances des rois de France de la 3e race, recueillies par ordre chronologique, 22 vols. (Paris: Imprimerie royale, 1723-1849). The numerous memoirs of Le Dran remained unpublished and are now kept in the Foreign Affairs Archive (Paris). 52 BnF, Arch. adm., AR 56, 56, 57. 53 On these procedures, A. Béroujon, "Comment la science vient aux experts. L'expertise d'écriture au XVII e siècle à Lyon", Genèses, 70, (2008), 4-25. 54 BnF, Arch. adm., AR 56, 299. 55 BnF, Arch. adm., AR 56, 35. century, replied that « the custodians of this library having sworn no legally binding oath, they may not issue authenticated copies », and that the librarian could only certify extracts 56 .
Obviously, the collections were widely used by scholars. Latin, Greek and French manuscripts from the royal collections were the most frequently borrowed documents, but State papers from Louvois, Colbert, Béthune and de Brienne, along with the charters collected by Baluze and Lancelot, were also used for historical or legal works. The registers of book-loans give us an idea of scholars' work methods.
Borrowing from the imagery used by Mark Hayer, who compares ways of reading to the way animals nourish themselves, in the registers we can easily identify historians as hunters, grazers, and gatherers 57 . Abbot de Mury, doctor from the Sorbonne and previously tutor to the Cardinal of Soubise, was a grazer; as from May 1757, he borrowed the entire set of registers of the Paris parliament (probably in the copy of the Sérilly collection incorporated into the library in 1756), at an average rate of one volume per month.
Abbot de Beauchesne was a hunter, borrowing on the 9th February 1753 the inventory of the Brienne manuscripts and returning four days later to borrow four sets of documents from this collection 58 . The count of Caylus might be a gatherer, as he borrowed successively one volume taken from a Baluze carton (March 1740), a treatise about the mummies from the Colbert manuscripts (January 1750) and two greek inscriptions from the Fourmont collection (December 1751) for his Recueil d'antiquités égyptiennes, étrusques, grecques et romaines published between 1752 and 1767 59 .
To what extent did the fact of putting documents into the library lead to new ways of thinking about, and using, these collections? We must first consider the fact that it was already relatively easy to access these documents within close circles of scholars and bureaucrats. During the period he owned the Dupuy collection, between 1720 and 1754, attorney-general Joly de Fleury never refused to lend volumes to his Parisian colleagues such as the lawyer Le Roy who was preparing a work on French public law (never published), or Durey de Meinières, first president of parliament, who borrowed a large part of the collection at a rate of three to four manuscripts every fortnight, to complete his own collection of parliamentary registers or ambassadors' reports 60 . The repository at the king's Library did not really 56 BnF, Arch. adm., AR 56, 93, 29 April 1781, in response to a request from the count of Apremont, asking for a legalised copy of the Treaty of Münster. 57 Quoted in Les défis de la publication sur le Web: hyperlectures, cybertextes et méta-éditions, eds. J. M. Salaün and C. Vandendorpe (Paris: Presses de l'ENSSIB, 2002). 58 BnF, Arch. adm., AR 123, 45, 48 and following. None of these scholars seems to have ever published any historical work. 59 BnF, Arch. adm., AR 123, 7, 42, 44. A.-C.-P. de Caylus, Recueil d'antiquités égyptiennes, étrusques, grecques et romaines, 7 vols. (Paris: Desaint et Saillant, 1752-1767). 60 The correspondence with Durey de Meinières is a good observatory for copying practices which are a constitutive element of collections. BnF, Joly de Fleury, 2491, letter dated 29 August 1746: « I spent part of the afternoon verifying the three volumes that you were kind enough to lend me this morning. I have most of them in my mss from Mr. Talon and the rest in my parliamentary facilitate consultation and copying because permissions were still left to the librarians' discretion, particularly as the possibility of borrowing manuscripts was called into question on several occasions during the course of the century. The rapidity with which scholars got hold of the manuscripts acquired by the library may be as much a sign of newly announced availability as of any continuity of use. Hardly three months went by between the purchase of the Chronique de Guillaume de Tyr, a late thirteenth century manuscript from the Noailles collection, and its loan to Dom Maur Dantine on 14 January 1741 61 .
Had the monk had the opportunity to look at it before, in Maréchal de Noailles' library, or was its acquisition by the Library a boon, at the time when he was taking part in the great Benedictine undertaking of Recueil des historiens des Gaules et de la France, the first volume of which appeared in 1738 ?
The opposition between public library and less public repositories should thus not be exaggerated.
The Mazarine gallery where the manuscripts were kept was not open to the public, even though amateurs were allowed in to admire the splendour of the decor and contents. Le Prince's Essai historique also stated that the custodians « do not indiscriminately release every kind of manuscript » 62 . In some cases, putting documents into the library was meant to protect them from prying eyes, as was the case with four handwritten memoirs on the Regency, deposited in the library in 1749 « to thereby ensure that none of them become public » 63 . The uses to which documents were to be put were also carefully controlled. In 1726, minister Maurepas was against the idea of lending three volumes from the Brienne collection in order to provide evidence of the king's sovereignty over Trois Évêchés in a trial: « I feel that their intended use is a delicate one, it being a question of the king's sovereignty over a province, evidence of which is said to be contained in the manuscripts ». He feared exposing them to the risk of contradiction by the opposing party 64 .
IV. LIBRARY PLACEMENT
Putting documents into a library implies not only that their consultation was to be ruled by that institution, but also that they were integrated into the library's intellectual organisation. Le Prince's presentation suggests that collections maintained an independent existence in the Library's manuscripts registers »; 11 May 1748: « I believe I have the last five which are letters and negotiations from Mr de Harlay de Beaumont in England in 1602, 1603, 1604 and 1605. It is so that I can be sure that I beg you… to be kind enough to lend them to me ». 61 department: « these collections or repositories are divided by fonds and bear the names of those who left them or sold them to the king » 65 . The existence of numerous small rooms around the Mazarine gallery made it possible to keep entire collections in their own separate spaces, such as Colbert's State papers which were placed in two rooms, or the 725 portfolios transferred from Lorraine, which were kept in two others 66 . The collections were not actually merged into the Bibliothèque royale; they were progressively absorbed during various operations of inventorying, classifying, cataloguing and filing, which blurred the initial rationales of the work tools and quasi-archives -or 'archive avatars' -developed by their previous owners 67 .
Two trends coexisted throughout the eighteenth century: the first consisted in integrating new acquisitions into a continuous series of 'royal' numbers; the second in preserving the identity of the collections acquired. The permanent hesitation between, on the one hand, a complete cancellation of the previous order into an new and integrated classification, and on the other, what was to become the principle of the archival integrity (which holds that the records coming from the same source should be kept together) is a core problem of archival management, which had its echoes also in the Bibliothèque royale. The major undertakings of cataloguing and verification tendentially favoured unification of the various collections. The Béthune manuscripts, received in 1664 and still noted as separate in 1666, were renumbered in the catalogue compiled in 1682 by Nicolas Clément, who broke up the unity of the original collection 68 . After this date, only a small portion of the acquired manuscripts were added to the catalogue.
In 1719, at the time of the verification of holdings following the appointment of Abbot Bignon, the manuscript repository appeared to be made up of an 'old fonds', organised by language (Greek, Latin, etc.), and of a 'new fonds' which juxtaposed part of the private collections which had been acquired over the previous forty years, from Mézeray (1683) to Baluze (1719) 69 . Incorporations into Clément's framework continued, but in the mid-1730s the rapid growth in the number of collections, combined with the confusion caused by repeated insertions of new items within the catalogue, led to a reform of the principle guiding incorporation. From this date onwards, « collections composed of a fairly considerable number of volumes were kept intact and formed separate fonds », whereas those acquired singly or in 65 small groups were put into the New Acquisitions series 70 . Among these new acquisitions was the Sautereau collection 71 , added in 1743 and made up of the thirty-five volumes of the inventory of acts of the Grenoble Chambre des Comptes. This general principle did not mean that the collections had been incorporated in one fell swoop by the royal institution. Of course, the most prestigious collections maintained their integrity. The 363 volumes of the Brienne collection, magnificently bound in red Morocco leather, were never absorbed into the royal series. Colbert's State papers, acquired in 1732, were for the most part organised into homogeneous collections which preserved their individuality: the Doat collection (258 volumes), made up of copies of deeds which president Jean de Doat had ordered done in Languedoc; the collection of copies brought together by Denis Godefroy in Flanders (182 volumes) ; and a collection of more than five hundred volumes of Colbert's work files (the Cinq cents de Colbert) 72 . To arrange Étienne Baluze's « literature papers », the librarians requisitioned, from his heir, the seven armoires in which Baluze had kept his papers and correspondence. After an unsuccessful attempt by Jean Boivin to reclassify this material in a totally new order, Abbot de Targny went back to a system close to the original 73 . New collections were more often than not reorganised according to the overall rationale of the Bibliothèque royale and of its departments. I will give just one example, that of the library of Gaspard de Fontanieu, former intendant of Dauphiné, sold to the king in 1765. In addition to printed documents and manuscripts, it contains a large collection of fugitive pieces which included both manuscripts and printed documents (366 volumes), and a series of portfolios of original deeds, copies from the Bibliothèque royale and diverse repositories, work notes and printed items relating to historical documents (881 volumes). As one contemporary memoir explains, these three collections (manuscripts, collected works and portfolios) « have between them a link which is intimate enough for them not to be separated » 74 . In particular, the to-ing and fro-ing between the two constitutes a « sort of mechanism [which] offers a prodigious facility for research ». However, integration into the library led to a dual alteration to the collection's intellectual economy. First of all, as had already been the case with Morel de Thoisy's and Lancelot's libraries, sets of fugitive items were sent to the printed books department, whilst the portfolio 70 L. Delisle, Inventaire des manuscrits latins (Paris: Durand et Pedone-Lauriel, 1863), 3. 
The oriental, Greek and Latin manuscripts were allocated new numbers as independent series, retroactively including private fonds (BnF, NAF 5412, 5413-5414). 71 BnF, NAF 5427, 1. 72 In addition there were volumes and bundles which were combined and bound in the middle of the nineteenth century, thus becoming what is known as the Mélanges Colbert collection. The six thousand manuscripts were divided among the existing linguistic series. 73 Without managing to locate all items after the mess caused by the move, whence the existence of an « Uncertain armoires » category. For more on these operations, see Lucien Auvray, "La Collection Baluze à la Bibliothèque nationale", Bibliothèque de l'École des chartes, 81, (1920), 93-174. On the armoires, BnF, Arch. adm., AR 61, 92. 74 Cited in Omont, Inventaire sommaire, 8-11.
series remained in the manuscripts section 75 . Secondly, the portfolios were partly split, and the printed documents were removed and transferred to the printed books, something that twenty years later Le Prince was to describe as « unfortunate » 76 .
The ultimate librarians' device to blur the continuity between the old collections and the new was the renumbering of manuscripts. Yet, whilst this operation gave the librarians absolute discretionary power, Le Prince hints on several occasions at the existence of concordance tables which made it possible to navigate from old numbers to new, even if there is no evidence to show that these tables were available to scholars 77 . Also very striking were the vestiges of the old 'book order' in the new library configuration. It could be seen in the way volumes were designated, for example in the registers of loans. Even when they were allocated a new number in the royal series, manuscripts frequently appeared under their original numbers. The collection of copies of the Louvois dispatches appeared under n° 24 in the Louvois fonds entered in the Bibliothèque royale in 1718. It became ms. 9350 (A.B) in the royal series but was listed as « vol. 24 n° 9350 A.B. Louvois manuscripts » when borrowed by Mr. Coquet in 1737 78 . The old book order thus retained a practical significance for both scholars and librarians. It had even more meaning in that the instruments used for orientation within the royal collections often dated back to before the acquisition, since they had been redacted by scholars for their private use: to identify manuscripts from the Brienne collection which would be useful for their research, in 1753 Abbot de Beauchesne and Abbot Quesnel borrowed the two volumes of the inventory kept in the Louvois fonds, whilst Cardinal de Soubise used the catalogue from the Lancelot fonds 79 . In some ways, these collections continued to constitute a library sub-system within the royal establishment.
Paris' Royal Library is probably a borderline case. The power of the institution and the absence of any central royal archives combined to make it a para-archival entity recognised as such by its contemporaries.
Further research is needed to assess the peculiarity of the French case in comparison to other similar institutions. Nonetheless, this case study serves as a reminder that in Paris and elsewhere, in the early modern era the 'taste for archives' was significantly developed in libraries -as it still is today. The
75 The collections were then distributed by theme within the printed document fonds: see the Catalogue des livres imprimés de la bibliothèque du roi, 6 vols. (Paris: Imprimerie royale, 1739-53). The Lancelot portfolios were transferred back to the manuscript department in the 19th century (BnF, NAF 9632-9826).
76 In the nineteenth century, Champollion-Figeac once again separated the original items from the copies, bound in six volumes following on from the collection. 77
Emmanuelle Chapron, « The « Supplement to All Archives » : the Bibliothèque Royale of Paris in the Eighteenth-Century », Storia della storiografia, 68/2, 2015, p. 53-68.
22 L. Delisle, Le Cabinet des manuscrits de la Bibliothèque impériale (Paris: Imprimerie impériale, 1868), 268. 23 C. Figliuzzi, "Antoine et Henri-Auguste de Loménie, secrétaires d'État de la Maison du Roi sous Henri IV et Louis XIII: carrière politique et ascension sociale" (École des chartes, thesis, 2012). 24 See D. Feutry's exemplary study, Un magistrat entre service du roi et stratégies familiales. Guillaume-François Joly de Fleury (1675-1756) (Paris: École des chartes, 2011), 15-35. The Joly de Fleury collection became part of the Library in 1836.
. In 1743, he informed Chancellor d'Aguesseau of the importance of Charles Du Cange's collection, which included copies of the Chambre des Comptes memorials that had been destroyed in the Palais de Justice fire in 1737 34 . The function of conserving the monarchy's ancient archives can be seen on an almost daily basis in the purchases made by the royal librarians. Abbott Bignon and the library's custodians made use of their 30 On Varillas' work, S. Uomini, Cultures historiques dans la France du XVII e siècle (Paris: L'Harmattan, 1998), 368-375. 31 F. Bléchet, "L'abbé Bignon, bibliothécaire du roi et les milieux savants en France au début du XVIII e siècle", Buch und Sammler. Private und öffentliche Bibliotheken im 18. Jahrhundert (Heidelberg: C. Winter, 1979), 53-66. 32 D. Feutry, "Mémoire du roi, mémoire du droit. Le procureur général Guillaume-François Joly de Fleury et le transport des registres du Parlement de Paris, 1729-1733", Histoire et archives, 20, (2006), 19-40. 33 BnF, Archives administratives, Ancien Régime [now Arch. adm., AR] 59, 270. The collection was finally bought by the Library in 1754. 34 P.-M. Bondois, "Le procureur général Joly de Fleury et les papiers de Du Cange (1743)", Bibliothèque de l'École des chartes, 89, (1928), 81-88. The memorials are registers containing transcriptions of the letters patent relating to the administration of the finances and of the Domain. On the death of Du Cange (1688) the collection was dispersed; it was later reconstituted by his grandnephew Dufresne d'Aubigny. It became part of the Bibliothèque royale in 1756.
BnF, Arch. adm., AR 123. 62 Le Prince, Essai historique, 151. 63 BnF, Arch. adm. AR 65, 304. They contain a regency project written by first president Mr. de Harlay, a memorandum by chancellor Voisin on the Regency and a chronicle of the Regency. 64 BnF, Arch. adm. AR 65, 67, Maurepas to Bignon, 17 August 1726.
Le Prince, Essai historique, 156.66 Respectively BnF, NAF 5427, 118 and Peignot, Collection, 415. Map in J. F. Blondel, Architecture française, 4 vols. (Paris: Jombert, 1752-1756), 3: 67-80.67 The formula, coined in B. Delmas, D. Margairaz and D. Ogilvie, eds. De l'Ancien Régime à l'Empire, is particularly apt for a period when archival theory and vocabulary were still fluid.68 Pierre de Carcavy, "Mémoire de la quantité des livres tant manuscrits qu'imprimez, qui estoyent dans la Bibliothèque du Roy avant que Monseigneur en ayt pris le soing [1666]", published in Jean Porcher, "La bibliothèque du roi rue Vivienne", Fédération des sociétés historiques et archéologiques de Paris et de l'Ile-de-France.Mémoires, 1, (1949), 237-246. BnF, NAF 5402, Catalogus librorum manuscriptorum… [1682]. 69 BnF, Arch. adm., AR 65, 7bis. Only the Brienne collection remains intact within the old fonds.
Le Prince, Essai historique, 155-156. 78 BnF, Arch. adm., AR 123. There are numerous examples. See also « ms de Gagnière n° 131 et nouveau n° du Roy 1245 » borrowed by Dom Duval in 1741. 79 BnF, Arch. adm., AR 123, 45-46. The first is the Le Tellier-Louvois 101 and 102 manuscript, now ms. fr. 4259-4260.
historicisation of scholarly practices, will help to renew questions relating to the history of libraries, just as they have done for the history of archives. | 55,116 | [
"13055"
] | [
"56663",
"199918",
"198056"
] |
01487333 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2006 | https://hal.science/hal-01487333/file/TIME2006.pdf | Jean-Franc ¸ois Condotta
Gérard Ligozat
email: ligozat@limsi.fr
Mahmoud Saade
email: saade@cril.univ-artois.fr
A Generic Toolkit for n-ary Qualitative Temporal and Spatial Calculi
Keywords:
Temporal and spatial reasoning is a central task for numerous applications in many areas of Artificial Intelligence. For this task, numerous formalisms using the qualitative approach have been proposed. Clearly, these formalisms share a common algebraic structure. In this paper we propose and study a general definition of such formalisms by considering calculi based on basic relations of an arbitrary arity. We also describe the QAT (the Qualitative Algebra Toolkit), a JAVA constraint programming library allowing to handle constraint networks based on those qualitative calculi.
Introduction
Numerous qualitative constraint calculi have been developed in the past in order to represent and reason about temporal and spatial configurations. Representing and reasoning about spatial and temporal information is an important task in many applications, such as computer vision, geographic information systems, natural language understanding, robot navigation, temporal and spatial planning, diagnosis and genetics. Qualitative spatial and temporal reasoning aims to describe non-numerical relationships between spatial or temporal entities. Typically a qualitative calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF][START_REF] Randell | A spatial logic based on regions and connection[END_REF][START_REF] Ligozat | Reasoning about cardinal directions[END_REF][START_REF] Pujari | INDU: An interval and duration network[END_REF][START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] uses some particular kind of spatial or temporal objects (e.g. subsets in a topological space, points on the rational line, intervals on the rational line) to represent the spatial or temporal entities of the system, and focuses on a limited range of relations between these objects (such as topological relations between regions or precedence between time points). Each of these relations refers to a particular temporal or spatial configuration. For instance, in the field of qualitative reasoning about temporal data, consider the well known formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Thirteen basic relations between these intervals are used to represent the qualitative situation between temporal entities. An interval can be before the other one, can follow the other one, can end the other one, and so on. The thirteen basic relations are JEPD (jointly exhaustive and pairwise disjoint), which means that each pair of intervals satisfies exactly one basic relation. Constraint networks called qualitative constraint networks (QCNs) are usually used to represent the temporal or spatial information about the configuration of a specific set of entities. Each constraint of a QCN represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. The consistency problem for QCNs consists in deciding whether a given network has instantiations satisfying the constraints. In order to solve it, methods based on local constraint propagation algorithms have been defined, in particular methods based on various versions of the path consistency algorithm [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF]. In the literature most qualitative calculi are based on basic binary relations. These basic relations are always JEPD. Moreover, the operators of intersection, of composition and of inverse used for reasoning with these relations are always defined in a similar way. Hence we can assert that these qualitative calculi share the same structure. Recently, non binary qualitative calculi have been proposed. 
The difference between binary calculi and non binary calculi resides in the fact that new operators are necessary for the non binary case, namely the operator of permutation and the operator of rotation.
In this paper we propose and study a very general definition of a qualitative calculus. This definition subsumes all qualitative calculi used in the literature. Moreover, to our knowledge, implementations and software tools have only been developed for individual calculi. The QAT (Qualitative Algebra Toolkit) has been conceived as a remedy to this situation. Specifically, the QAT is a JAVA constraint programming library developed at CRIL-CNRS at the University of Artois. It aims to provide open and generic tools for defining and manipulating qualitative algebras and qualitative networks based on these algebras. This paper is organized as follows. In Section 2, we propose a formal definition of a qualitative calculus. This definition is very general and it covers formalisms based on basic relations of an arbitrary arity. Section 3 is devoted to qualitative constraint networks. After introducing the QAT library in Section 4, we conclude in Section 5.
A general definition of Qualitative Calculi
Relations and fundamental operations
A qualitative calculus of arity n (with n > 1) is based on a finite set B = {B 1 , . . . , B k } of k relations of arity n defined on a domain D. These relations are called basic relations. Generally, k is a small integer and the set D is an infinite set, such as the set N of the natural numbers, the set Q of the rational numbers, the set of real numbers, or, in the case of Allen's calculus, the set of all intervals on one of these sets. We will denote by U the set of n-tuples on D, that is, elements of D n . Moreover, given an element x belonging to U and an integer i ∈ {1, . . . , n}, x i will denote the element of D corresponding to the i th component of x. The basic relations of B are complete and jointly exclusive, in other words, the set B must be a partition of U = D n , hence we have:
Property 1 Bi ∩ Bj = ∅ for all i, j ∈ {1, . . . , k} such that i ≠ j, and U = ⋃i∈{1,...,k} Bi.
Given a set B of basic relations, we define the set A as the set of all unions of the basic relations. Formally, the set A is defined by A = { B : B ⊆ B}. In the binary case, the various qualitative calculi considered in the literature consider a particular basic relation corresponding to the identity relation on D. We generalise this by assuming that a qualitative calculus of arity n satisfies the following property:
Property 2 For all i, j ∈ {1, . . . , n} such that i ≠ j, ∆ij ∈ A, with ∆ij = {x ∈ U : xi = xj}.
Note that the relations ∆ij are called diagonal elements in the context of cylindric algebras [START_REF] Hirsch | Relation Algebras by Games[END_REF]. Given a non-empty set E ⊆ {1, . . . , n} × {1, . . . , n} such that for all (i, j) ∈ E we have i ≠ j, ∆E will denote the relation ⋂{∆ij : (i, j) ∈ E}. We note that from Property 1 and Property 2 we can deduce that ∆E ∈ A. Hence, the relation of identity on U, denoted by Idn, which corresponds to ∆{(i,i+1) : 1 ≤ i ≤ n−1}, belongs to A.
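The diagonal relations are straightforward to test at the tuple level. The following sketch (illustrative Java only; the class and method names are ours, not part of QAT) checks membership in ∆ij and in the identity relation Idn.

// Illustrative: membership tests for the diagonal relations of Property 2.
public class DiagonalRelations {
    // x belongs to ∆ij iff its i-th and j-th components are equal (1-based i, j).
    static boolean inDelta(int[] x, int i, int j) {
        return x[i - 1] == x[j - 1];
    }
    // x belongs to Id_n iff all its components are equal, i.e. x is in every
    // ∆(i, i+1) for 1 <= i <= n-1.
    static boolean inIdentity(int[] x) {
        for (int i = 1; i < x.length; i++)
            if (!inDelta(x, i, i + 1)) return false;
        return true;
    }
    public static void main(String[] args) {
        System.out.println(inDelta(new int[]{5, 7, 5}, 1, 3));   // true
        System.out.println(inIdentity(new int[]{5, 7, 5}));      // false
        System.out.println(inIdentity(new int[]{4, 4, 4}));      // true
    }
}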
In the sequel we will see how to use the elements of A to define particular constraint networks called qualitative constraint networks. Several fundamental operations on A are necessary for reasoning with these constraint networks, in particular, the operation of permutation, the operation of rotation and the operation of qualitative composition also simply (and wrongly) called composition or weak composition [START_REF] Balbiani | On the consistency problem for the INDU calculus[END_REF][START_REF] Ligozat | What is a qualitative calculus? a general framework[END_REF]. In the context of qualitative calculi, the operations of permutation and rotation have been introduced by Isli and Cohn [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] for a formalism using ternary relations on cyclic orderings. These operations are unary operations which associate to each element of A a relation belonging to U. They can be formally defined in the following way:
Definition 1. Let R ∈ A.
The permutation and the rotation of R, denoted by R and R respectively, are defined as follows:
- the permutation of R is {(x1, . . . , xn−2, xn, xn−1) : (x1, . . . , xn) ∈ R} (Permutation),
- the rotation of R is {(x2, . . . , xn, x1) : (x1, . . . , xn) ∈ R} (Rotation).
In the binary case, these operations coincide and correspond to the operation of converse. To our knowledge, all binary qualitative calculi satisfy the property that the converse relation of any basic relation is a basic relation. A similar property is required in the general case:
Property 3 For each relation B i ∈ B we have B i ∈ B and B i ∈ B.
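Definition 1 is easy to make concrete at the tuple level. The following sketch (illustrative Java, not part of the QAT library; all names are ours) applies the two operations to integer tuples and lifts them to relations represented as sets of tuples.

import java.util.*;

// Illustrative only: permutation and rotation of n-ary tuples as in Definition 1 (n >= 2).
public class TupleOps {
    // (x1,...,xn) -> (x1,...,x(n-2), xn, x(n-1))
    static int[] permute(int[] t) {
        int n = t.length;
        int[] p = Arrays.copyOf(t, n);
        p[n - 2] = t[n - 1];
        p[n - 1] = t[n - 2];
        return p;
    }
    // (x1,...,xn) -> (x2,...,xn,x1)
    static int[] rotate(int[] t) {
        int[] r = new int[t.length];
        for (int i = 0; i < t.length - 1; i++) r[i] = t[i + 1];
        r[t.length - 1] = t[0];
        return r;
    }
    // Lifting an operation from tuples to a whole relation (a set of tuples).
    static Set<List<Integer>> lift(Set<List<Integer>> rel, boolean rotation) {
        Set<List<Integer>> out = new HashSet<>();
        for (List<Integer> t : rel) {
            int[] a = t.stream().mapToInt(Integer::intValue).toArray();
            int[] b = rotation ? rotate(a) : permute(a);
            List<Integer> image = new ArrayList<>();
            for (int v : b) image.add(v);
            out.add(image);
        }
        return out;
    }
    public static void main(String[] args) {
        System.out.println(Arrays.toString(permute(new int[]{1, 2, 3})));  // [1, 3, 2]
        System.out.println(Arrays.toString(rotate(new int[]{1, 2, 3})));   // [2, 3, 1]
    }
}

Note that the permutation is an involution, while the rotation only returns a tuple to itself after n applications.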
These operations satisfy the properties stated in Proposition 1. For binary relations, the operation of composition is a binary operation which associates to two relations R 1 and R 2 the relation
•(R 1 , R 2 ) = {(x 1 , x 2 ) : ∃u ∈ D with (x 1 , u) ∈ R 1 and (u, x 2 ) ∈ R 2 }.
For several qualitative calculi of arity n = 2 the composition of two relations R 1 , R 2 ∈ A is not necessarily a relation of A (consider for example the interval algebra on the intervals defined on the integers). A weaker notion of composition is used. This operation, denoted in the sequel by ⋄, and called qualitative composition, is by definition the smallest relation (w.r.t. inclusion) of A containing all the elements of the bona fide composition :
⋄(R 1 , R 2 ) = {R ∈ A : •(R 1 , R 2 ) ⊆ R}.
For an arbitrary arity n, composition and qualitative composition can be defined in the following way:
Definition 2. Let R 1 , . . . , R n ∈ A. -•( R 1 , . . . , R n ) = {( x 1 , . . . , x n ) : ∃u ∈ D, ( x 1 , . . . , x n-1 , u) ∈ R 1 , (x 1 , . . . , x n-2 , u, x n ) ∈ R 2 , . . . , (u, x 2 , . . . , x n ) ∈ R n }, -⋄(R 1 , . . . , R n ) = {R ∈ A : •(R 1 , . . . , R n ) ⊆ R}.
Note that we use the usual definition of the polyadic composition for the operation •. Both operations are characterized by their restrictions to the basic relations of B. Indeed, we have the following properties:
Proposition 2. Let R 1 , . . . , R n ∈ A. -•(R 1 , . . . , R n ) = ∪{•(A 1 , . . . , A n ) : A 1 ∈ B, . . . , A n ∈ B and A 1 ⊆ R 1 , . . . , A n ⊆ R n }; -⋄(R 1 , . . . , R n ) = ∪{⋄(A 1 , . . . , A n ) : A 1 ∈ B, . . . , A n ∈ B and A 1 ⊆ R 1 , . . . , A n ⊆ R n }.
Another way to define the qualitative composition is given by the following proposition:
Proposition 3. Let R 1 , . . . , R n ∈ A. ⋄(R 1 , . . . , R n ) = {A ∈ B : ∃x 1 , . . . , x n , u ∈ D, ∃A 1 , . . . , A n ∈ B with (x 1 , . . . , x n ) ∈ A, (x 1 , . . . , x n-1 , u) ∈ A 1 , (x 1 , . . . , x n-2 , u, x n ) ∈ A 2 , . . . , (u, x 2 , . . . , x n ) ∈ A n , A 1 ⊆ R 1 , . . . , A n ⊆ R n }.
Hence, tables giving the qualitative composition, the rotation and the permutation of basic relations can be used for computing efficiently these operations for arbitrary relations of A. Finally, we have the following properties, which generalize the usual relationship of composition with respect to converse in the binary case:
Proposition 4. Let R 1 , . . . , R n ∈ A and OP ∈ {•, ⋄}. -OP(∅, R 2 , . . . , R n ) = ∅ ; -OP(R 1 , . . . , R n ) = OP(R n , R 1 , R 2 , . . . , R n-1 ) ; -OP(R 1 , . . . , R n ) = OP(R 2 , R 1 , , R 3 . . . , , R n ).
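Proposition 2 is what makes table-driven reasoning practical: the qualitative composition of composite relations is the union of the table entries of the basic relations they contain. The following binary (n = 2) sketch illustrates this with the point algebra whose basic relations are <, = and >; it is illustrative Java only, and the names are ours, not the QAT API.

import java.util.*;

// Weak composition of composite relations from a basic-relation composition table.
// Toy calculus: the point algebra B = {<, =, >}.
public class WeakComposition {
    static final Map<String, Set<String>> TABLE = new HashMap<>();
    static void put(String a, String b, String... out) {
        TABLE.put(a + "," + b, new HashSet<>(Arrays.asList(out)));
    }
    static {
        put("<", "<", "<");            put("<", "=", "<");  put("<", ">", "<", "=", ">");
        put("=", "<", "<");            put("=", "=", "=");  put("=", ">", ">");
        put(">", "<", "<", "=", ">");  put(">", "=", ">");  put(">", ">", ">");
    }
    // R1 and R2 are unions of basic relations, given as sets of symbols.
    static Set<String> compose(Set<String> r1, Set<String> r2) {
        Set<String> result = new HashSet<>();
        for (String a : r1)
            for (String b : r2)
                result.addAll(TABLE.get(a + "," + b));
        return result;
    }
    public static void main(String[] args) {
        Set<String> r1 = new HashSet<>(Arrays.asList("<", "="));
        Set<String> r2 = new HashSet<>(Collections.singletonList(">"));
        // Union of the entries for (<,>) and (=,>): prints the three relations <, = and >.
        System.out.println(compose(r1, r2));
    }
}

Richer tables (Allen's thirteen relations, or the ternary tables of the cyclic point algebra below) plug into the same scheme.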
An example of a qualitative calculus of arity 3: the Cyclic Point Algebra
This subsection is devoted to a qualitative calculus of arity 3 known as the Cyclic Point Algebra [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF][START_REF] Balbiani | Reasoning about cyclic space: Axiomatic and computational aspects[END_REF]. The entities considered by this calculus are the points on an oriented circle C. We will call these points cyclic points. Each cyclic point can be characterised by a rational number belonging to the interval [0, 360[. This number corresponds to the angle between the horizontal line going through the centre of C. Hence, for this calculus, D is the set of the rational numbers {q ∈ Q : 0 ≤ q < 360}. In the sequel we assimilate a cyclic point to the rational number representing it. Given two cyclic points x, y ∈ D, [[x, y]] will denote the set of values of D corresponding to the cyclic points met between x and y when travelling on the circle counter-clockwise. The basic relations of the Cyclic Point Algebra is the set of the 6 relations {B abc , B acb , B aab , B baa , B aba , B aaa } defined in the following way:
Babc = {(x, y, z) ∈ D3 : x ≠ y, x ≠ z, y ≠ z and y ∈ [[x, z]]},
Bacb = {(x, y, z) ∈ D3 : x ≠ y, x ≠ z, y ≠ z and z ∈ [[x, y]]},
Baab = {(x, x, y) ∈ D3 : x ≠ y},
Bbaa = {(y, x, x) ∈ D3 : x ≠ y},
Baba = {(x, y, x) ∈ D3 : x ≠ y},
Baaa = {(x, x, x) ∈ D3}.
These 6 relations are shown in Figure 1. Based on these basic relations, we get a set A containing 64 relations. Note that for these basic relations the operation of composition and the operation of qualitative composition are the same operation. Table 1 gives the qualitative composition of a subset of the basic relations. Using Proposition 2, we can compute other qualitative compositions which are not given in this table. For example, ⋄(Baab, Bacb, Babc) = ⋄(Baab, Babc, Bacb) = {Baab}. Actually, the table provides a way of computing any composition of basic relations, since all qualitative compositions which cannot be deduced from it in that way yield the empty relation. This is the case for example of the qualitative composition of Baaa with Babc, which is the empty relation.
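As an illustration, the following sketch (Java, illustrative only; all names are ours) decides which of the six basic relations holds between three cyclic points given as angles in [0, 360), using the counter-clockwise reading of [[x, z]].

// Classify a triple of cyclic points into one of the six basic relations above.
public class CyclicPointRelation {
    // Counter-clockwise angular distance from a to b, in [0, 360).
    static double ccw(double a, double b) {
        double d = b - a;
        return d < 0 ? d + 360.0 : d;
    }
    static String basicRelation(double x, double y, double z) {
        if (x == y && y == z) return "Baaa";
        if (x == y) return "Baab";   // first two coincide, third distinct
        if (x == z) return "Baba";   // first and third coincide
        if (y == z) return "Bbaa";   // last two coincide
        // all distinct: is y met when travelling counter-clockwise from x to z?
        return ccw(x, y) <= ccw(x, z) ? "Babc" : "Bacb";
    }
    public static void main(String[] args) {
        System.out.println(basicRelation(10, 50, 200));  // Babc
        System.out.println(basicRelation(10, 200, 50));  // Bacb
        System.out.println(basicRelation(10, 10, 50));   // Baab
    }
}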
Qualitative Constraint Networks
Basic notions
Typically, qualitative constraint networks (QCNs in short) are used to express information on a spatial or temporal situation. Each constraint of a constraint network represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. Formally, a QCN is defined in the following way:
Definition 3. A QCN is a pair N = (V, C) where: -V is a finite set of l variables {v ′ 0 , . . . , v ′ l-1 } (where l is a positive integer); -C is a map which to each tuple (v 0 , . . . , v n-1 ) of V n associates a subset C(v 0 , . . . , v n-1 ) of the set of basic relations: C(v 0 , . . . , v n-1 ) ⊆ B. C(v 0 , . . . , v n-1
) are the set of those basic relations allowed between the variables v 0 ,. . . ,v n-1 . Hence, C(v 0 , . . . , v n-1 ) represents the relation of A corresponding to the union of the basic relations belonging to it.
We use the following definitions in the sequel.
Definition 4. Let N = (V, C) be a QCN with V = {v′0, . . . , v′l−1}.
- A partial instantiation of N on V′ ⊆ V is a map α from V′ to D. Such a partial instantiation is consistent if and only if (α(v0), . . . , α(vn−1)) ∈ C(v0, . . . , vn−1) for all v0, . . . , vn−1 ∈ V′.
- A solution of N is a consistent partial instantiation on V. N is consistent if and only if it admits a solution.
- A QCN N′ = (V′, C′) is equivalent to N if and only if V = V′ and both networks N and N′ have the same solutions.
- A sub-QCN of a QCN N = (V, C) is a QCN N′ = (V, C′) where C′(v0, . . . , vn−1) ⊆ C(v0, . . . , vn−1) for all v0, . . . , vn−1 ∈ V.
- A scenario on a set of variables V′ is an atomic QCN (a QCN each constraint of which is a single basic relation) whose set of variables is V′. A consistent scenario of N is a scenario that admits a solution of N as a solution.
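A bare-bones data structure in the spirit of Definition 3 can be sketched as follows; this is illustrative Java only, and the classes of the actual QAT library are organised differently.

import java.util.*;

// A constraint maps an ordered tuple of variable indices to a set of basic-relation
// symbols; unconstrained tuples implicitly carry the whole set B.
public class SimpleQCN {
    final int size;                       // number of variables
    final Set<String> allBasicRelations;  // the set B
    final Map<List<Integer>, Set<String>> constraints = new HashMap<>();

    SimpleQCN(int size, Set<String> allBasicRelations) {
        this.size = size;
        this.allBasicRelations = allBasicRelations;
    }
    void setConstraint(List<Integer> vars, Set<String> allowed) {
        constraints.put(new ArrayList<>(vars), new HashSet<>(allowed));
    }
    Set<String> getConstraint(List<Integer> vars) {
        return constraints.getOrDefault(vars, allBasicRelations);
    }
    // A constraint becomes inconsistent when no basic relation remains allowed.
    boolean hasEmptyConstraint() {
        return constraints.values().stream().anyMatch(Set::isEmpty);
    }
    public static void main(String[] args) {
        Set<String> b = new HashSet<>(Arrays.asList("Babc", "Bacb", "Baab", "Bbaa", "Baba", "Baaa"));
        SimpleQCN qcn = new SimpleQCN(4, b);
        qcn.setConstraint(Arrays.asList(0, 1, 2), new HashSet<>(Arrays.asList("Babc", "Baab")));
        System.out.println(qcn.getConstraint(Arrays.asList(0, 1, 2)));  // [Babc, Baab]
        System.out.println(qcn.getConstraint(Arrays.asList(1, 2, 3)));  // the whole set B
    }
}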
Moreover we introduce the definition of normalized QCNs which intuitively correspond to QCNs containing compatible constraints w.r.t. the fundamental operations of rotation and permutation.
Definition 5. Let N be a QCN. Then N is normalized iff:
- C(v2, . . . , vn, v1) is the rotation of C(v1, . . . , vn),
- C(v1, . . . , vn−2, vn, vn−1) is the permutation of C(v1, . . . , vn),
- C(v1, . . . , vi, . . . , vj, . . . , vn) ⊆ ∆ij, for all i, j ∈ {1, . . . , n} such that i ≠ j and vi = vj.
Given any QCN, it is easy to transform it into an equivalent QCN which is normalized. Hence we will assume that all QCNs considered in the sequel are normalized. Given a QCN N , the problems usually considered are the following: determining whether N is consistent, finding a solution, or all solutions, of N , and computing the smallest QCN equivalent to N . These problems are generally NP-complete problems. In order to solve them, various methods based on local constraint propagation algorithms have been defined, in particular the method which is based on the algorithms of path consistency [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] which we will refer to as the ⋄-closure method.
The ⋄-closure method
This subsection is devoted to the topic of ⋄-closed QCNs. These QCNs are defined in the following way:
Definition 6. Let N = (V, C) be a QCN. Then N is ⋄-closed iff C(v 1 , . . . , v n ) ⊆ ⋄(C(v 1 , . . . , v n-1 , v n+1 ), C(v 1 , . . . , v n-2 , v n+1 , v n ), . . . , C(v 1 , v n+1 , v 3 , . . . , v n ), C(v n+1 , v 2 , . . . , v n )), ∀v 1 , . . . , v n , v n+1 ∈ V .
For qualitative calculi of arity two this property is sometimes called the path-consistency property or the 3-consistency property, wrongly so, since qualitative composition is in general weaker than composition (see [START_REF] Renz | Weak composition for qualitative spatial and temporal reasoning[END_REF] for a discussion of this subject). In the binary case, the usual local constraint propagation algorithms PC1 and PC2 [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] have been adapted to the qualitative case for computing a sub-QCN which is ⋄-closed and equivalent to a given QCN. As an extension of PC1 to the n-ary case we define the algorithm PC1n (see Algorithm 1). In brief, this algorithm iterates an operation (lines 7-8) which removes impossible basic relations from the constraints using weak composition and intersection. This operation is repeated until a fixpoint is reached. It can easily be checked that the QCN output by PC1n is ⋄-closed and equivalent to the initial QCN used as input. In the binary case, a ⋄-closed QCN is not always 3-consistent but it is (0, 3)-consistent, which means, respectively, that we cannot always extend a partial solution on two variables to three variables, but that we know that all sub-QCNs on three variables are consistent. This last property can be extended to the n-ary case:
Proposition 6. Let N = (V, C) be a QCN. If N is ⋄-closed then it is (0, n)-consistent.
Note that in the same manner, we can extend PC2 to the n-ary case and prove similar results.
Associating a binary qualitative calculus to a qualitative calculus of arity n
Consider a qualitative calculus of arity n. There is actually a standard procedure for associating a binary calculus to it. Moreover, if a QCN is defined on the n-ary calculus, it can be represented by a QCN in the associated binary calculus. We now proceed to sketch this procedure. Consider a qualitative calculus with a set of basic relations B = {B1, . . . , Bk} of arity n defined on D. We associate to it a qualitative formalism with a set of basic relations B′ = {B′1, . . . , B′k′} of arity 2 defined on a domain D′ in the following way:
- D′ is the set Dn = U. Hence, each relation of B′ is a subset of U′ = D′ × D′ = Dn × Dn = U × U.
- For each relation Bi ∈ B, with 1 ≤ i ≤ k, a basic relation B′i is introduced in B′. B′i is defined by the relation {((x1, . . . , xn), (x1, . . . , xn)) : (x1, . . . , xn) ∈ Bi}. Note that the set of relations B′P = {B′1, . . . , B′k} forms a partition of the relation of identity of D′, which we will denote by ∆′12.
- For all i, j ∈ {1, . . . , n} we define the relation Eij by Eij = {((x1, . . . , xn), (x′1, . . . , x′n)) ∈ U′ : xi = x′j} \ ∆′12. Let E0 = {Eij : i, j ∈ {1, . . . , n}}, and let Em with m > 0 be inductively defined by Em = {R1 ∩ R2, R1 \ (R1 ∩ R2), R2 \ (R1 ∩ R2) : R1, R2 ∈ Em−1}. Let m′ be the smallest integer such that Em′ = Em′+1, and let B′E = {R ∈ Em′ : R ≠ ∅ and there is no nonempty R′ ∈ Em′ with R′ ⊂ R}. The relations of B′E are added to the set B′.
- Let F be the binary relation on D′ defined as the complement in U′ of the union of all the relations Eij and of all the relations of B′P. We add F to B′.
Hence the final set of basic relations is the set B′ = B′P ∪ B′E ∪ {F}.
The reader can check that B ′ satisfies properties 1, 2 and 3 and hence defines a qualitative calculus of arity 2.
Now, consider a
QCN N = (V, C) defined on B. Let us define an equivalent QCN N ′ = (V ′ , C ′ ) on B ′ : -To define V ′ , for each n-tuples of n variables (v 1 , . . . , v n ) of V we introduce a variable v ′ {v 1 ,...,vn} in B ′ . -Given a variable v ′ = v ′ {v 1 ,...,vn} belonging to V ′ we define C ′ (v ′ , v ′ ) by the relation {B ′ i : B i ∈ C(v 1 , . . . , v n )}. -Given two distinct variables v ′ i = v ′ {v i 1 ,...,v i n } and v ′ j = v ′ {v j 1 ,...,v j n } belonging to V ′ , C ′ (v ′ i , v ′ j )
is the relation E defined in the following way: let γ be the set of pairs of integers {(k, l) ∈ N × N : vik = vjl}, i.e. the pairs (k, l) such that the k-th variable of the tuple defining v′i coincides with the l-th variable of the tuple defining v′j. E is the set of basic relations of B′ (more precisely of B′E) included in the relation ⋂(k,l)∈γ Ekl.
The reader can check that N is a consistent QCN iff N′ is a consistent QCN. This construction is inspired by the technique called dual encoding [START_REF] Bacchus | On the conversion between non-binary and binary constraint satisfaction problems[END_REF] used in the domain of discrete CSPs to convert n-ary constraints into binary constraints. (Figure 2 shows, on the left, the ternary constraint {Baab, Babc} on vi, vj, vk converted into the constraint {B′aab, B′abc} on the variable v′ijk and, on the right, a structural constraint E12 between v′ijk and v′lim.)
4 The Qualitative Algebra Toolkit (QAT)
Clearly, all existing qualitative calculi share the same structure, but, to our knowledge, implementations and software tools have only been developed for individual calculi. The QAT (Qualitative Algebra Toolkit) has been conceived as a remedy to this situation. Specifically, the QAT is a JAVA constraint programming library developed at CRIL-CNRS at the University of Artois. It aims to provide open and generic tools for defining and manipulating qualitative algebras and qualitative networks based on these algebras. The core of QAT contains three main packages. In the sequel of this section we are going to present each of those packages. The Algebra package is devoted to the algebraic aspects of the qualitative calculi. While programs proposed in the literature for using qualitative formalisms are ad hoc implementations for a specific algebra and for specific solving methods, the QAT allows the user to define arbitrary qualitative algebras (including non-binary algebras) using a simple XML file. This XML file, which respects a specific DTD, contains the definitions of the different elements forming the algebraic structure of the qualitative calculus: the set of basic relations, the diagonal elements, the table of rotation, the table of permutation and the table of qualitative composition. We defined this XML file for many qualitative calculi of the literature: the interval algebra [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF], the point algebra [START_REF] Vilain | Constraint Propagation Algorithms for Temporal Reasoning[END_REF], the cyclic point algebra [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF], the cyclic interval algebra [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], the rectangle algebra [START_REF] Balbiani | A new tractable subclass of the rectangle algebra[END_REF], the INDU algebra [START_REF] Pujari | INDU: An interval and duration network[END_REF], the multidimensional algebra [START_REF] Balbiani | Spatial reasoning about points in a multidimensional setting[END_REF], the RCC-5 algebra [START_REF] Randell | A spatial logic based on regions and connection[END_REF], the RCC-8 algebra [START_REF] Randell | A spatial logic based on regions and connection[END_REF], the cardinal direction algebra [START_REF] Ligozat | Reasoning about cardinal directions[END_REF]). Tools allowing to define a qualitative algebra as the Cartesian Product of other qualitative algebras are also available.
The QCN package contains tools for defining and manipulating qualitative constraint networks on any qualitative algebra. As for the algebraic structure, a specific DTD allows the use of XML files for specifying QCNs. The XML file lists the variables and relations defining the qualitative constraints. Functionalities are provided for accessing and modifying the variables of a QCN, its constraints and the basic relations they contain. Part of the QCN package is devoted to the generation of random instances of QCNs. A large amount of the research about qualitative calculi consists in the elaboration of new algorithms to solve QCNs. The efficiency of these algorithms must be validated by experimentations on instances of QCNs. Unfortunately, in the general case there does not exist instances provided by real world problems. Hence, the generation of random instances is a necessary task [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The QCN package of the QAT provides generic models allowing to generate random instances of QCNs for any qualitative calculus.
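A generator in the spirit of the random models mentioned here can be sketched as follows. The parameter names follow the description of random QCNs given by the authors (number of variables, non-trivial density, cardinality density); everything else, including the class name and the binary restriction, is ours and illustrative only.

import java.util.*;

// Each pair of variables is non-trivially constrained with probability nonTrivialDensity,
// and each basic relation is kept in a non-trivial constraint with probability
// cardinalityDensity (at least one relation is always kept to avoid empty constraints).
public class RandomQcnGenerator {
    public static Map<String, Set<String>> generate(int n, List<String> basics,
            double nonTrivialDensity, double cardinalityDensity, long seed) {
        Random rnd = new Random(seed);
        Map<String, Set<String>> constraints = new HashMap<>();
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (rnd.nextDouble() >= nonTrivialDensity) continue;  // trivial: whole set B
                Set<String> allowed = new HashSet<>();
                for (String b : basics)
                    if (rnd.nextDouble() < cardinalityDensity) allowed.add(b);
                if (allowed.isEmpty())
                    allowed.add(basics.get(rnd.nextInt(basics.size())));
                constraints.put(i + "," + j, allowed);
            }
        }
        return constraints;
    }
    public static void main(String[] args) {
        List<String> pointAlgebra = Arrays.asList("<", "=", ">");
        System.out.println(generate(5, pointAlgebra, 0.5, 0.5, 42L));
    }
}

Forcing consistency, as in the flag mentioned below for benchmark generation, can then be obtained by first drawing a consistent scenario and adding its basic relations to the drawn constraints.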
The Solver package contains numerous methods to solve the main problems of interest when dealing with qualitative constraint networks, namely the consistency problem, the problem of finding one or all solutions, and the minimal network problem. All these methods are generic and can be applied to QCNs based on arbitrary qualitative calculi. They make use of the algebraic aspect of the calculus without considering the semantics of the basic relations. In other words, they make abstraction of the definitions of the basic relations and only uniquely manipulate the symbols corresponding to these relations. Nevertheless, by using the object-oriented concept, it is very easy to particularize a solving method to a specific qualitative algebra or a particular kind of relations. We implemented most of the usual solving methods, such as the standard generate and test methods, search methods based on backtrack and forward checking, and constraint local propagation methods. The user can configure these different methods by choosing among a range of heuristics. These heuristics are related to the choice of the variables or the constraints to be scanned, of the basic relations in a constraint during a search. The order in which the constraints are selected and the order in which the basic relations of the selected constraint are examined can greatly affect the performance of a backtracking algorithm [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The idea behind constraint ordering heuristics is to first instantiate the more restrictive constraints first. The idea behind value ordering basic relations is to order the basic relations of the constraints so that the value that most likely leads to a solution is the first one to be selected. The QAT allows the user to implement new heuristics based on existing heuristics. As for local constraint propagation methods, whereas in discrete CSPs arc consistency is widely used [START_REF] Apt | Principles of Constraint Programming[END_REF], path consistency is the most efficient and most frequently used kind of local consistency in the domain of the qualitative constraints. More exactly, the methods used are based on local constraint propagation based on qualitative composition, in the manner of the PC1 n algorithm described in the previous section. In addition to PC1 n , we have extended and implemented algorithms based on PC2 [START_REF] Bessière | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF].
Conclusions
We propose and study a general formal definition of qualitative calculi based on basic relations of an arbitrary arity. This unifying definition allows us to capture the algebraic structure of all qualitative calculi in the literature. The main elements of the algebraic structure are diagonal elements, and the operations of permutation, rotation and qualitative composition. We give a transformation allowing to build a qualitative calculus based on binary basic relations from a qualitative calculus based on arbitrary basic relations. The expressive powers of both calculi are similar. Moreover we generalize the constraint propagation method P C1 to the general case, i.e. for relations of any arity. In a second part we describe the QAT1 (Qualitative Algebra Toolkit), a JAVA constraint programming library allowing to handle constraint networks defined on arbitrary n-ary qualitative calculi. This toolkit provides algorithms for solving the consistency problem and related problems, as well as most of the heuristics used in the domain. QAT is implemented using the object oriented technology. Hence, it is an open platform, and its functionalities are easily extendable. New heuristics (resp. methods) can be defined and tested. Among the tools it provides are classes allowing to generate and to use benchmarks of qualitative networks. Hence new heuristics or new solving algorithms can be conveniently evaluated.
Proposition 1. Let R ∈ A. Then the permutation of R is the union of the permutations of the basic relations it contains, and the rotation of R is the union of the rotations of the basic relations it contains: permutation(R) = ∪{permutation(B) : B ∈ B and B ⊆ R}, and rotation(R) = ∪{rotation(B) : B ∈ B and B ⊆ R}.
Proposition 5. The time complexity of Algorithm PC1n is O(|V|^(n+1)), where |V| is the number of variables of the QCN and n the arity of the calculus. Moreover, applying the algorithm PC1n to a normalized QCN N yields a QCN which is normalized, ⋄-closed, and equivalent to N.
Algorithm 1 PC1n: compute the closure of a QCN N = (V, C)
1: Do
2:   N′ := N
3:   For each vn+1 ∈ V Do
4:     For each v1 ∈ V Do
5:       . . .
6:       For each vn ∈ V Do
7:         C(v1, . . . , vn) := C(v1, . . . , vn) ∩
8:           ⋄(C(v1, . . . , vn−1, vn+1), C(v1, . . . , vn−2, vn+1, vn), . . . , C(vn+1, v2, . . . , vn))
9: Until (N == N′)
10: return N
Fig. 1. The 6 basic relations of the Cyclic Point Algebra: Babc(x,y,z), Bacb(x,y,z), Baab(x,y,z), Bbaa(x,y,z), Baba(x,y,z) and Baaa(x,y,z) (six circular diagrams, not reproduced here).
Fig. 2. Converting a ternary constraint Cijk of the cyclic point algebra into a binary constraint (left). Expressing a structural constraint between v′ijk and v′lim for distinct integers i, j, k, l, m (right).
Table 1. The qualitative composition of the Cyclic Point Algebra (a subset of the basic relations):
R1:           Baaa   Baaa   Baab   Baab   Baab   Baab   Baba   Babc
R2:           Baaa   Baab   Baba   Babc   Bbaa   Bacb   Baab   Babc
R3:           Baaa   Baab   Bbaa   Bacb   Baba   Babc   Babc   Bacb
⋄(R1,R2,R3):  {Baaa} {Baab} {Baaa} {Baab} {Baab} {Baab} {Babc} {Babc}
Table 2. The permutation and the rotation operation of the Cyclic Point Algebra:
a:                 Baaa   Baab   Baba   Bbaa   Babc   Bacb
permutation of a:  Baaa   Baba   Baab   Bbaa   Bacb   Babc
rotation of a:     Baaa   Baba   Bbaa   Baab   Babc   Bacb
The documentation and the source of the QAT library can be found at http://www.cril.univ-artois.fr/˜saade/QAT. | 31,866 | [
"1142762",
"998171"
] | [
"56711",
"247329",
"56711"
] |
01487411 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2006 | https://hal.science/hal-01487411/file/wecai06.pdf | Jean-François Condotta
Gérard Ligozat
email: ligozat@limsi.fr
Mahmoud Saade
email: saade@cril.univ-artois.fr
Empirical study of algorithms for qualitative temporal or spatial constraint networks
Representing and reasoning about spatial and temporal information is an important task in many applications of Artificial Intelligence. In the past two decades numerous formalisms using qualitative constraint networks have been proposed for representing information about time and space. Most of the methods used to reason with these constraint networks are based on the weak composition closure method. The goal of this paper is to study some implementations of these methods, including three well known and very used implementations, and two new ones.
Introduction
Representing and reasoning about spatial and temporal information is an important task in many applications, such as geographic information systems (GIS), natural language understanding, robot navigation, temporal and spatial planning. Qualitative spatial and temporal reasoning aims to describe non-numerical relationships between spatial or temporal entities. Typically a qualitative calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF][START_REF] Randell | A spatial logic based on regions and connection[END_REF][START_REF] Ligozat | Reasoning about cardinal directions[END_REF][START_REF] Arun | Indu: An interval and duration network[END_REF][START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] uses some particular kind of spatial or temporal objects (subsets in a topological space, points on the rational line, intervals on the rational line,...) to represent the spatial or temporal entities of the system, and focuses on a limited range of relations between these objects (such as topological relations between regions or precedence between time points). Each of these relations refers to a particular temporal or spatial configuration. For instance, consider the well-known temporal qualitative formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Thirteen basic relations between these intervals are used to represent the qualitative situation between temporal entities (see Figure 1). For example, the basic relation overlaps can be used to represent the situation where a first temporal activity starts before a second activity and terminates while the latter is still active. Now the temporal or spatial information about the configuration of a specific set of entities can be represented using a particular kind of constraint networks called qualitative constraint networks (QCNs). Each constraint of a QCN represents a set of acceptable qualitative configurations between some temporal or spatial entities and is defined by a set of basic relations. Given a QCN N , the main problems to be considered are the following ones: decide whether there exists a solution of N (the consistency problem), find one or several solutions of N ; find one or several consistent scenarios of N ; determine the minimal QCN of N . 
In order to solve these problems, methods based on local constraint propagation algorithms have been defined, in particular algorithms based on the •-closure method (called also the path consistency method) [START_REF] Allen | Maintaining Knowledge about Temporal Intervals[END_REF][START_REF] Peter Van Beek | Approximation algorithms for temporal reasoning[END_REF][START_REF] Ladkin | Effective solution of qualitative interval constraint problems[END_REF][START_REF] Ladkin | A symbolic approach to interval constraint problems[END_REF][START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF][START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF][START_REF] Renz | Efficient methods for qualitative spatial reasoning[END_REF][START_REF] Nebel | Solving hard qualitative temporal reasoning problems: Evaluating the efficienty of using the ord-horn class[END_REF] which is the qualitative version of the path consistency method [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF][START_REF] Mackworth | The Complexity of Some Polynomial Network Consistency Algorithms for Constraint Satisfaction Problem[END_REF] used in the domain of classical CSPs. Roughly speaking the •-closure method is a constraint propagation method which consists in iteratively performing an operation called the triangulation operation which removes for each constraint defined between two variables the basic relations not allowed w.r.t. a third variable. In following the line of reasoning of van Beek and Manchak [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF] and Bessière [START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF], in this paper we compare different possible versions of the •-closure method. The algorithms studied are adapted from the algorithms PC1 [START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF] or PC2 [START_REF] Mackworth | Consistency in networks of relations[END_REF]. Concerning the algorithms issued of PC2 we use different heuristics, in particular heuristics defined in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF] and we use structures saving pairs of constraints or structures saving triples of constraints. Moreover we introduce two algorithms mixing the algorithm PC1 and the algorithm PC2. This paper is organized as follows. In Section 2, we give some general definitions concerning the qualitative calculi. Section 3 is devoted to the different •-closure algorithms studied in this paper. After discussing the realized experimentations in Section 4 we conclude in Section 5.
2 Background on Qualitative Calculi
Relations
In this paper, we focus on binary qualitative calculi and use very general definitions. A qualitative calculus considers a finite set B of k binary relations defined on a domain D. These relations are called basic relations. The elements of D are the possible values to represent the temporal or spatial entities. The basic relations of B correspond to all possible configurations between two temporal or spatial entities. The relations of B are jointly exhaustive and pairwise disjoint, which means that any pair of elements of D belongs to exactly one basic relation in B. Moreover, for each basic relation B ∈ B there exists a basic relation of B, denoted by B ∼ , corresponding to the converse of B. The set A is defined as the set of relations corresponding to all unions of the basic relations:
A = {⋃B : B ⊆ B}.
It is customary to represent an element B1 ∪ . . . ∪ Bm (with 0 ≤ m ≤ k and Bi ∈ B for each i such that 1 ≤ i ≤ m) of A by the set {B1, . . . , Bm} belonging to 2 B . Hence we make no distinction between A and 2 B in the sequel. There exists an element of A which corresponds to the identity relation on D, we denote this element by Id. Note that this element can be composed of several basic relations. Now we give some well known examples of calculi to illustrate this definition. The Allen's calculus. As a first example, consider the well known temporal qualitative formalism called Allen's calculus [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. It uses intervals of the rational line for representing temporal entities. Hence D is the set
{(x -, x + ) ∈ Q × Q : x -< x + }.
The set of basic relations consists in a set of thirteen binary relations corresponding to all possible configurations of two intervals. These basic relations are depicted in Figure 1.
Here we have B = {eq, b, bi, m, mi, o, oi, s, si, d, di, f, f i}. Each basic relation can be formally defined in terms of the endpoints of the intervals involved; for instance, m = {((x -, x + ), (y -, y + )) ∈ D×D : The Meiri's calculus. Meiri [START_REF] Meiri | Combining qualitative and quantitative constraints in temporal reasoning[END_REF] considers temporal qualitative constraints on both intervals and points. These constraints can correspond to the relations of a qualitative formalism defined in the following way. D is the set of pairs of rational numbers: {(x, y) : x ≤ y}. The pairs (x, y) with x < y correspond to intervals and the pairs (x, y) with x = y correspond to points. Hence, we define to particular basic relations on D : eq i = {((x, y), (x, y)) : x < y} and eq p = {((x, y), (x, y)) : x = y} composing Id. These basic relations allow to constraint an object to be an interval or a point. In addition of these basic relations, the basic relations of the Allen's calculus and those ones of the point algebra are added to B. To close the definition of B we must include the ten basic relations corresponding to the possible configurations between a point and an interval, see 2 for an illustration of these basic relations.
x + = y -}.
Fundamental operations
As a set of subsets, A is equipped with the usual set-theoretic operations including intersection (∩) and union (∪). As a set of binary relations, it is also equipped with the operation of converse (∼) and an operation of composition (
Qualitative Constraint Networks
A qualitative constraint network (QCN) is a pair composed of a set of variables and a set of constraints. The set of variables represents spatial or temporal entities of the system. A constraint consists of a set of acceptable basic relations (the possible configurations) between some variables. Formally, a QCN is defined in the following way:
Definition 1 A QCN is a pair N = (V, C) where: • V = {v1, . . . , vn} is a finite set of n variables where n is a positive integer; • C is a map which to each pair (vi, vj) of V × V associates a subset C(vi, vj ) of the set of basic relations: C(vi, vj ) ∈ 2 B .
In the sequel C(vi, vj ) will be also denoted by Cij . C is such that Cii ⊆ Id and Cij = C ∼ ji for all vi, vj ∈ V .
With regard to a QCN N = (V, C) we have the following definitions:
A solution of N is a map σ from V to D such that (σ(vi), σ(vj)) satisfies Cij for all vi, vj ∈ V . N is consistent iff it admits a solution. A QCN N ′ = (V ′ , C ′ ) is a sub-QCN of N if and only if V = V ′ and C ′ ij ⊆ Cij for all vi, vj ∈ V . A QCN N ′ = (V ′ , C ′ ) is equivalent to N if and only if V = V ′ and both networks N and N ′ have the same solutions. The minimal QCN of N is the smallest (for ⊆) sub-QCN of N equivalent to N . An atomic QCN is a QCN such that each Cij contains a basic relation. A consistent scenario of N is a consistent atomic sub-QCN of N .
Given a QCN N , the main problems to be considered are the following problems: decide whether there exists a solution of N ; find one or several solutions of N ; find one or several consistent scenarios of N ; determine the minimal QCN of N . Most of the algorithms used for solving these problems are based on a method which we call the •-closure method. The next section is devoted to this method.
3 The •-closure method and associated algorithms
Generalities on the •-closure method
In this section we introduce the path •-closure property and give the different implementations of this method studied in the sequel. Roughly speaking the •-closure method is a constraint propagation method which consists in iteratively performing the following operation (the triangulation operation):
Cij := Cij ∩ (C ik • C kj )
, for all variables vi, vj , v k of V , until a fixed point is reached. Just no satisfiable basic relations are removed from these constraints with this method. In the case where the QCN obtained in this way contains the empty relation as a constraint, we can assert that the initial QCN is not consistent. However, if it does not, we cannot in the general case infer the consistency of the network. Hence the QCN obtained in this way is a sub-QCN of N which is equivalent to it. Moreover, the obtained QCN is •-closed, more precisely it satisfies the following property: Cij ⊆ C ik • C kj for all variables vi, vj , v k of V . Note that this property implies the (0, 3)consistency of the resulting QCN (each restriction on 3 variables is consistent). For several calculi, in particular for the Allen's calculus defined on the rational intervals, the (0, 3)consistency implies the 3 consistency or path consistency [START_REF] Mackworth | Consistency in networks of relations[END_REF].
This is why the •-closure property and the path-consistency property are sometimes confused.
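The triangulation loop just described can be sketched, for the binary case, as follows. This is illustrative Java only, using the point algebra {<, =, >} as a toy calculus; the names are ours and this is not one of the implementations evaluated in this paper.

import java.util.*;

// Iterate Cij := Cij ∩ (Cik o Ckj) over all triples until a fixed point is reached.
public class ClosureSketch {
    static final Map<String, String[]> COMP = new HashMap<>();
    static {
        COMP.put("<,<", new String[]{"<"});            COMP.put("<,=", new String[]{"<"});
        COMP.put("<,>", new String[]{"<", "=", ">"});  COMP.put("=,<", new String[]{"<"});
        COMP.put("=,=", new String[]{"="});            COMP.put("=,>", new String[]{">"});
        COMP.put(">,<", new String[]{"<", "=", ">"});  COMP.put(">,=", new String[]{">"});
        COMP.put(">,>", new String[]{">"});
    }
    static Set<String> compose(Set<String> r1, Set<String> r2) {
        Set<String> out = new HashSet<>();
        for (String a : r1) for (String b : r2) out.addAll(Arrays.asList(COMP.get(a + "," + b)));
        return out;
    }
    // Returns false as soon as some constraint becomes empty (inconsistency detected).
    static boolean closure(Set<String>[][] c) {
        int n = c.length;
        boolean change = true;
        while (change) {
            change = false;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    for (int k = 0; k < n; k++) {
                        if (i == j || i == k || j == k) continue;
                        Set<String> refined = new HashSet<>(c[i][j]);
                        refined.retainAll(compose(c[i][k], c[k][j]));  // triangulation
                        if (refined.size() < c[i][j].size()) {
                            c[i][j] = refined;
                            change = true;
                            if (refined.isEmpty()) return false;
                        }
                    }
        }
        return true;
    }
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        int n = 3;
        Set<String>[][] c = new Set[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                c[i][j] = new HashSet<>(Arrays.asList("<", "=", ">"));
        c[0][1] = new HashSet<>(Collections.singletonList("<"));
        c[1][0] = new HashSet<>(Collections.singletonList(">"));
        c[1][2] = new HashSet<>(Collections.singletonList("<"));
        c[2][1] = new HashSet<>(Collections.singletonList(">"));
        System.out.println(closure(c));  // true
        System.out.println(c[0][2]);     // [<]  (transitivity recovered)
    }
}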
Studied Algorithms
There are two well known algorithms in the literature for enforcing the path-consistency of discrete CSPs [START_REF] Mackworth | Consistency in networks of relations[END_REF][START_REF] Montanari | Networks of constraints: Fundamental properties and application to picture processing[END_REF], namely the PC1 and the PC2 algorithms. These algorithms have been adapted on several occasions to the binary qualitative case in order to enforce •-closure [START_REF] Allen | Maintaining Knowledge about Temporal Intervals[END_REF][START_REF] Vilain | Constraint Propagation Algorithms for Temporal Reasoning[END_REF][START_REF] Ladkin | Effective solution of qualitative interval constraint problems[END_REF][START_REF] Van Beek | Reasoning About Qualitative Temporal Information[END_REF][START_REF] Bessire | A Simple Way to Improve Path Consistency Processing in Interval Algebra Networks[END_REF].
A possible adaptation of PC1 to the qualitative case is the function WCC1 defined in Algorithm 1. WCC1 checks all triples of variables of the network in a main loop. It starts again this main loop until no changes occur. For each triple of variables the operation of triangulation is made by the function revise. Note that in this function the call of updateConstraints(Cij, R) allows to set the constraint Cij with the new relation R and to set the constraint Cij with R ∼ . For particular situations, the treatment corresponding to lines 7-9 can be avoided. For example, for the QCNs defined from relations of the Allen's calculus this treatment is an useless work in the following cases : for i ← 1 to n do 4:
C ik = B, C kj = B, i = k, k = j or i = j.
for j ← i to n do 5:
for k ← 1 to n do 6:
if not skippingCondition(C ik , C kj ) then 7:
if revise(i, k, j) then 8:
if Cij == ∅ then return false 9:
else change ← true 10: until not change 11: return true Function revise(i, k, j).
1: R ← Cij ∩ (C ik • C kj ) 2: if Cij ⊆ R then return false 3: updateConstraints(Cij , R) 4: return true
The functions WCC2 P and WCC2 T, defined in Algorithm 2 and Algorithm 3 respectively, are inspired by PC2. WCC2 P handles a list containing pairs of variables corresponding to the modified constraints which must be propagated, whereas WCC2 T handles a list containing triples of variables corresponding to the triangulation operations to be performed. Using triples instead of pairs makes it possible to circumscribe more precisely the useful triangulation operations. In the previous algorithms proposed in the literature, the exact nature of the manipulated list is not very clear: this list could be a set, a queue or a stack. In WCC2 P and WCC2 T the nature of the list is determined by the heuristic object which is in charge of handling it. The main task of heuristic consists in the insertion of a pair or a triple of variables in the list: it must compute a location in the list and place the element there. If the pair or the triple is already in the list it can insert it or do nothing. The method next always consists in removing and returning the first element of the list. In the sequel we describe the heuristics used in more detail. The predicate skippingCondition, like in WCC1, depends on the qualitative calculus used. For the Allen's calculus and for most of the calculi skippingCondition(Cij) can be defined by the following instruction: return (Cij == B). The time complexity of WCC2 P and WCC2 T is O(|B| * n^3), whereas the spatial complexity of WCC2 P is O(n^2) and that of WCC2 T is O(n^3).
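The pending-list-plus-heuristic machinery just described can be sketched as follows; this is illustrative Java, and the class, method and policy names are ours, not the QAT API.

import java.util.*;

// append() decides where a pair is inserted, next() always removes the head,
// and a membership table with two entries avoids scheduling duplicates.
public class PendingPairs {
    public enum Policy { HEAD, TAIL }          // stack-like vs queue-like insertion
    private final Deque<int[]> list = new ArrayDeque<>();
    private final boolean[][] present;
    private final Policy policy;

    public PendingPairs(int nbVariables, Policy policy) {
        this.present = new boolean[nbVariables][nbVariables];
        this.policy = policy;
    }
    public void append(int i, int j) {
        if (present[i][j]) return;             // already scheduled: do nothing here
        present[i][j] = true;
        if (policy == Policy.HEAD) list.addFirst(new int[]{i, j});
        else list.addLast(new int[]{i, j});
    }
    public boolean isEmpty() { return list.isEmpty(); }
    public int[] next() {                      // always remove and return the first element
        int[] p = list.removeFirst();
        present[p[0]][p[1]] = false;
        return p;
    }
    public static void main(String[] args) {
        PendingPairs q = new PendingPairs(4, Policy.HEAD);
        q.append(0, 1); q.append(1, 2); q.append(0, 1);   // the duplicate is ignored
        while (!q.isEmpty()) System.out.println(Arrays.toString(q.next()));
        // With the HEAD policy this prints [1, 2] then [0, 1]: most recently added first.
    }
}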
Algorithm 2 Function WCC2 P(N , heuristic), with N = (V, C).
1: Q ← ∅ 2: initP (N , Q, heuristic) 3: while Q = ∅ do 4: (i, j) ← heuristic.next(Q) 5: for k ← 1 to n do 6:
if revise(i, j, k) then
7: if C ik == ∅ then return false 8:
else addRelatedP athsP (i, k, Q, heuristic) for j ← i to n do 3:
if not skippingCondition(Cij ) then addRelatedP athsP (i, j, Q, heuristic) Function addRelatedPathsP(i, j, Q, heuristic). 1: heuristic.append(Q, (i, j))
Algorithm 3 Function WCC2 T(N ), with N = (V, C). 1: Q ← ∅ 2: initT (N , Q, heuristic) 3: while Q = ∅ do 4: (i, k, j) ← heuristic.next(Q) 5: if revise(i, k, j) then 6: if Cij == ∅ then return false 7:
else addRelatedP athsT (i, j, Q, heuristic) 8: end while 9: return true Function initT(N, Q, heuristic).
1: for i ← 1 to n do 2:
for j ← i to n do 3:
if not skippingCondition(Cij ) then addRelatedP athsT (i, j, Q, heuristic) Function relatedPathsT(i, j, Q, heuristic).
1: for k ← 1 to n do 2:
if not skippingCondition(C jk ) then heuristic.append(Q, (i, j, k))
4:
if not skippingCondition(C ki ) then 5:
heuristic.append(Q, (k, i, j)) 6: done Despite these different complexities, WCC2 P and WCC2 T can perform worse than WCC1. This is mainly due to the fact that WCC2 P and especially WCC2 T must make an expensive initialization of the list Q (line 2). This step can take more time than the subsequent processing of the elements of the list, in particular for no consistent QCNs. This is why we introduce the functions WCCMixed P and WCCMixed T (see Algorithm 4 and Algorithm 5) to remedy this drawback. Roughly, these functions realize a first step corresponding to a first loop of WCC1 and then continues in the manner of WCC2 P and WCC2 T.
Algorithm 4 Function
WCCMixed P(N ), with N = (V, C). 1: Q ← ∅ 2: initMixedPair(N , Q, heuristic) 3: while Q = ∅ do 4: (i, j) ← heuristic.next(Q) 5: for k ← 1 to n do 6:
if revise(i, j, k) then
7: if C ik == ∅ then return false 8:
else addRelatedP athsP air((i, k), Q, heuristic) if revise(i, k, j)then
Generated instances
To evaluate the performances of the proposed algorithms we randomly generate instances of qualitative constraint networks. A randomly generated QCN will be characterized by five parameters:
• an integer n which corresponds to the number of variables of the network; • a qualitative calculus algebra which is the used qualitative calculus; • a real nonT rivialDensity which corresponds to the probality of a constraint to be a non trivial constraint (to be different of B); • a real cardinalityDensity which is the probality of a basic relation to belong to a non trivial given constraint; • a flag type which indicates if the generated network must be forced to be consistent by adding a consistent scenario. (i, k, j) ← heuristic.next(Q)
5:
if revise(i, k, j) then 6:
if Cij == ∅ then return false 7:
else addRelatedP athsT (i, j, Q, heuristic) 8: end while 9: return true Function initMixedT(N , Q, heuristic). if revise(i, k, j)then if (change) addRelatedP athsT (i, j, Q, heuristic) 11: done
The different algorithms have been implemented with the help of the JAVA library QAT 1 . We have conducted an extensive experimentation on a PC Pentium IV 2,4GHz 512mo under Linux. The experiences reported in this paper concern QCNs of the Allen's calculus generated with a nonT rivialDensity equals to 0.5 . Performances are measured in terms of the number of revise operations (numberOfRevises), in terms of cpu time (time), in terms of the number of maximum elements in the list (max).
Heuristics
Most of the algorithms proposed in the previous section use a list which contains the elements (pairs or triples) to be propagated. To improve the efficiency of the algorithms we have to reduce the number of these elements. When the constraint between i and j changes, we must add all the elements which can be affected by this modification. The order in which these elements are processed is very important and can dramatically reduce the number of triangulation operations. The set of heuristics we experimented with contains the different heuristics proposed in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The main task of a heuristic is the insertion of a pair or a triple of variables into the list after computing its location. If the pair or the triple is already in the list, the heuristic can re-insert it or do nothing, depending on its policy. All the heuristics we experimented with remove and return the first element of the list. In general, the more a heuristic reduces the number of triangulation operations, the higher its time and space costs.
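As an illustration of such a policy, a cardinality heuristic for pairs can be realized by an ordered insertion keyed by the number of basic relations in C_ij; the sketch below uses a plain java.util.LinkedList for readability, whereas the implementation discussed later relies on doubly-linked lists and presence tables. Names and encodings are ours.

import java.util.LinkedList;
import java.util.ListIterator;

// Illustrative cardinality heuristic: the list is kept sorted by the cardinality of
// C_ij (ascending); among equal cardinalities the most recently added pair comes first.
class CardinalityPairHeuristic {
    private final long[][] C;                 // constraints encoded as bitsets over basic relations
    CardinalityPairHeuristic(long[][] constraints) { this.C = constraints; }

    private int cardinality(int[] p) { return Long.bitCount(C[p[0]][p[1]]); }

    void append(LinkedList<int[]> list, int[] pair) {
        list.removeIf(q -> q[0] == pair[0] && q[1] == pair[1]);   // "moving" policy: re-insert
        ListIterator<int[]> it = list.listIterator();
        while (it.hasNext()) {
            if (cardinality(it.next()) >= cardinality(pair)) { it.previous(); break; }
        }
        it.add(pair);
    }

    int[] next(LinkedList<int[]> list) { return list.poll(); }    // always the first element
}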
Experimental results
Stack or Queue. The list used to store the pairs/triples can be handled as a stack or as a queue, i.e., after the modification of a constraint the corresponding pair or triples can be added at the head of the list or at its tail (recall that the first element of the list is always treated first). After the initialisation of the list, the addition of a pair/triple is due to the restriction of a constraint. Intuitively, the later this constraint is added, the smaller its cardinality and the more restrictive it will be for a triangulation operation. This is why using the list as a stack (a LIFO structure) should outperform using it as a queue (a FIFO structure). This is confirmed by our experiments; for example, consider Figure 3, in which we use WCC2_P and WCC2_T on forced consistent networks with the heuristic Basic. Note that for WCCMixed_P and WCCMixed_T the difference is not so important. Actually Basic is not really a heuristic: it just adds an element to the list if it is not present and removes the first element of the list. In the sequel, among the elements which can be returned from the list, the heuristics always choose the most recently added (LIFO handling).
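With a double-ended queue the two policies differ in a single call, as the following small Java sketch shows (illustrative only):

import java.util.ArrayDeque;

// Stack-like (LIFO) versus queue-like (FIFO) handling of the propagation list.
class ListPolicyDemo {
    public static void main(String[] args) {
        ArrayDeque<String> list = new ArrayDeque<>();
        list.add("(1,2)");                    // an element enqueued during initialisation
        boolean lifo = true;                  // the experimentally better choice
        if (lifo) list.addFirst("(3,4)");     // newly restricted, hence more constraining, pair first
        else      list.addLast("(3,4)");      // FIFO: oldest pair is treated first
        System.out.println(list.pollFirst()); // next always removes the first element
    }
}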
Add or not add a pair/a triple. The main task of the heuristic is to add a pair or a triple when a modification arises. If the pair/triple is already in the list, depending on the policy used, the heuristic may or may not add it again. Adding the element can have a prohibitive cost since one must remove the element from the list before adding it at its new location. This cost depends on the heuristic used and on the structure used to implement the list. In our case, roughly speaking, we use doubly-linked lists, or tables of doubly-linked lists for the more sophisticated heuristics. Moreover, we use tables with 2/3 entries to check the presence of a pair/triple in the list. Actually, the experiments show that removing and re-adding the pair/triple when it is already present avoids enough revise operations to be more competitive than doing nothing. See for example Figure 4, which shows the behaviour of the cardinality heuristic with these two possible policies (cardinalityMoving for the systematic re-insertion and cardinalityNoMoving for insertion only when the pair/triple is not present). For the methods WCC2_P and WCC2_T, cardinalityMoving outperforms cardinalityNoMoving, in particular before the phase transition (cardinalityDensity between 0.3 and 0.55). Concerning the not forced consistent instances, since the numbers of revises are very close, cardinalityMoving is only slightly better in terms of time. For the mixed methods, cardinalityMoving and cardinalityNoMoving are very close in terms of time and number of revises. In the sequel we always use the policy which consists in systematically moving the pair or triple when it is present.

The best heuristics. We compared all the heuristics on the different algorithms. Concerning the algorithms manipulating pairs, we compare the heuristics Basic and Cardinality presented previously. Moreover, we used the Weight heuristic, which processes the pairs (i, j) following the weight of the constraint C_ij in ascending order. The weight of a constraint is the sum of the weights of the basic relations composing it. Intuitively, to obtain the weight of a basic relation B we sum the number of basic relations present in the composition table in the line and the column corresponding to the entry B, and then scale the obtained numbers so as to give the value 1 to the basic relations with the smallest numbers, then 2, etc. This method is slightly different from the one proposed by van Beek and Manchak [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF], but it is easy to implement for all qualitative calculi. For the basic relations of Allen's calculus we obtain the weight 1 for eq, 2 for m, mi, s, si, f, fi, 3 for d, di, b, bi and 4 for o, oi. In addition to these heuristics, we define heuristics corresponding to combinations of Cardinality and Weight: the SumCardinalityWeight heuristic, which arranges the pairs (i, j) following the sum of the cardinality and the weight of the constraint C_ij; the CardinalityWeight heuristic, which arranges the pairs (i, j) following the cardinality of C_ij and then following the weight of C_ij; and WeightCardinality, which arranges the pairs (i, j) following the weight of C_ij and then following the cardinality of C_ij. These heuristics are also defined for WCC2_T and WCCMixed_T, which use triples instead of pairs.
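The weight computation described above can be derived mechanically from the composition table; the following Java sketch (illustrative, with the table encoded as bitsets over the basic relations) counts the basic relations occurring in the row and the column of each entry and then scales the counts to 1, 2, 3, and so on.

import java.util.Arrays;

// Illustrative computation of basic-relation weights from a (weak) composition table.
// table[a][b] is the composition of basic relations a and b, as a bitset over basic relations.
class WeightComputation {
    static int[] weights(long[][] table) {
        int m = table.length;
        int[] count = new int[m];
        for (int a = 0; a < m; a++) {
            for (int b = 0; b < m; b++) count[a] += Long.bitCount(table[a][b]);             // row of a
            for (int b = 0; b < m; b++) if (b != a) count[a] += Long.bitCount(table[b][a]); // column of a
        }
        int[] sorted = count.clone();
        Arrays.sort(sorted);
        int[] weight = new int[m];
        for (int a = 0; a < m; a++) {
            int w = 1;                                   // smallest count gets weight 1
            for (int i = 1; i < m; i++)
                if (sorted[i] != sorted[i - 1] && sorted[i] <= count[a]) w++;
            weight[a] = w;                               // rank of count[a] among distinct counts
        }
        return weight;
    }
}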
By examining Figure 5, we observe that the numbers of revises are very close for all these heuristics (except for the Basic heuristic). In terms of CPU time, the heuristics Cardinality, SumCardinalityWeight and Weight are very close and are the best-performing heuristics. Thanks to the use of triples we can define finer heuristics. For example, from the cardinality heuristic we obtain three different heuristics: the cardinalityI heuristic, which considers the cardinality of C_ij for the triples (i, j, k) and (k, i, j) (similarly to the previous cardinality heuristic); the cardinalityII heuristic, which takes into account the sum of the cardinality of C_ij and the cardinality of C_jk for the triple (i, j, k), and the sum of the cardinality of C_ij and the cardinality of C_ki for the triple (k, i, j); and the cardinalityIII heuristic, which takes into account the sum of the cardinalities of C_ij, C_jk and C_ik for the triple (i, j, k), and the sum of the cardinalities of C_ij, C_ki and C_kj for the triple (k, i, j). Following the same line of reasoning we split the Weight and SumCardinalityWeight heuristics into six heuristics. By considering the different versions of the Cardinality heuristic (the same holds for the Weight and SumCardinalityWeight heuristics) we can see that the cardinalityII heuristic makes the smallest number of revises. Outside the phase transition it outperforms the other triple cardinality heuristics in terms of time. In the phase transition the cardinalityIII heuristic outperforms the cardinalityI and cardinalityII heuristics. In terms of CPU time, the handling with pairs outperforms the handling with triples. WCC1/WCC2_P/WCCMixed_P/WCC2_T/WCCMixed_T. Now we compare all the algorithms with the most competitive heuristics. We can see that WCC1 is in general the least competitive algorithm, see Figure 7. The most favorable case for WCC1 is the case where the instances are inconsistent QCNs. Generally, in particular for the forced consistent instances, the algorithms based on triples make fewer revise operations than the algorithms based on pairs. Despite this, the latter are faster than the former. The reason is that handling triples is more costly in terms of time than handling pairs. Moreover, the number of elements which must be stored is much larger for triples than for pairs (see the last plots of Figure 7). For the forced consistent instances we can see that the mixed versions of the algorithms perform slightly worse than the non-mixed versions; note that the difference is not very important. Concerning the not forced consistent instances we have the same result for cardinality densities between 0.5 and 0.6. For densities strictly greater than 0.6 there is an inversion and the mixed versions are more competitive. By examining the maximum number of elements in the list, we can see that the mixed versions dramatically reduce this number for the triples.
Conclusions
In this paper we study empirically several algorithms enforcing the •-closure on qualitative constraint networks. The algorithms studied are adapted from the algorithms PC1 and PC2. Concerning the algorithms derived from PC2, we use different heuristics, in particular heuristics defined in [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF], and we use structures storing pairs of constraints or structures storing triples of constraints. We showed that using triples dramatically reduces the number of revises compared with handling pairs. Despite this, the versions using pairs are more competitive in terms of time. We introduced two algorithms mixing the algorithm PC1 and the algorithm PC2. These algorithms seem to be a good compromise between a PC1 version which consumes a lot of time and a PC2 version which consumes a lot of space. Currently, we are continuing our experiments on QCNs with a larger number of variables and on other qualitative calculi (in particular on INDU, which is based on 25 basic relations, and the cyclic point algebra, which is a ternary calculus).
Figure 1. The basic relations of Allen's calculus.
Figure 2. The basic relations of Meiri's calculus concerning a point X and an interval Y.
(•), sometimes called weak composition or qualitative composition. The converse of a relation R in A is the relation of A corresponding to the transpose of R; it is the union of the converses of the basic relations contained in R. The composition A • B of two basic relations A and B is the relation R = {C : ∃x, y, z ∈ D, x A y, y B z and x C z}. The composition R • S of R, S ∈ A is the relation T = ⋃_{A∈R, B∈S} A • B. Computing the results of these various operations for relations of 2^B can be done efficiently by using tables giving the results of these operations for the basic relations of B. For instance, consider the relations R = {eq, b, o, si} and S = {d, f, s} of Allen's calculus; we have R∼ = {eq, bi, oi, s}. The relation R • S is {d, f, s, b, o, m, eq, si, oi}. Consider now the relations R = {b*, s*} and S = {b} of Meiri's calculus; we have R • S = {b*} whereas S • R = {}.
Figure 3. Average time for WCC2_P and WCC2_T using the heuristic Basic on consistent QCNs (200 instances per data point, with n = 50).
Figure 4. Average number of revises and average time for WCC2_P and WCC2_T using cardinalityNoMoving and cardinalityMoving on consistent (top) and not forced consistent (bottom) QCNs (200 instances per data point, with n = 50).
Figure 5. Average number of revises and average time for the heuristics used with WCC2_P on consistent (top) and not forced consistent (bottom) QCNs (200 instances per data point, with n = 50).
Figure 6. Average number of revises and average time for the different cardinality heuristics used with WCC2_T and WCCMixed_T on forced consistent QCNs (200 instances per data point, with n = 50).
Figure 7. Average number of revises, average time and average maximum number of elements in the list for all algorithms with a competitive heuristic on consistent (left) and not forced consistent (right) QCNs (200 instances per data point, with n = 50).
This is respectively due to the facts that B • R = R • B = B for every non-empty relation R ∈ A, that Id consists of a basic relation (eq), and that Id ⊆ R • R∼ for every non-empty relation R. Note that these properties do not always hold for other calculi, see Meiri's calculus for example. This is why we introduce a conditional statement at line 6 that avoids fruitless work by defining a suitable predicate skippingCondition specific to the qualitative calculus used. For example, in the case of Allen's calculus, skippingCondition could be defined by the following instruction: return (C_ik == B or C_kj == B or i == k or k == j or i == j). For this calculus the skipping condition can be more elaborate, see [START_REF] Van Beek | The design and experimental analysis of algorithms for temporal reasoning[END_REF]. The time complexity of WCC1 is O(|B| · n^5) whereas its spatial complexity is O(|B| · n^3).
Algorithm 1 Function WCC1(N), with N = (V, C).
1: repeat
2: change ← false
"1142762",
"998171"
] | [
"56711",
"247329",
"56711"
] |
01487493 | en | [
"info",
"scco"
] | 2024/03/04 23:41:48 | 2005 | https://hal.science/hal-01487493/file/renz-ligozat-cp05.pdf | Jochen Renz
email: jochen.renz@nicta.com.au
Gérard Ligozat
email: ligozat@limsi.fr
Weak Composition for Qualitative Spatial and Temporal Reasoning
It has now been clear for some time that for many qualitative spatial or temporal calculi, for instance the well-known RCC8 calculus, the operation of composition of relations which is used is actually only weak composition, which is defined as the strongest relation in the calculus that contains the real composition. An immediate consequence for qualitative calculi where weak composition is not equivalent to composition is that the well-known concept of pathconsistency is not applicable anymore. In these cases we can only use algebraic closure which corresponds to applying the path-consistency algorithm with weak composition instead of composition. In this paper we analyse the effects of having weak compositions. Starting with atomic CSPs, we show under which conditions algebraic closure can be used to decide consistency in a qualitative calculus, how weak consistency affects different important techniques for analysing qualitative calculi and under which conditions these techniques can be applied. For our analysis we introduce a new concept for qualitative relations, the "closure under constraints". It turns out that the most important property of a qualitative calculus is not whether weak composition is equivalent to composition, but whether the relations are closed under constraints. All our results are general and can be applied to all existing and future qualitative spatial and temporal calculi. We close our paper with a road map of how qualitative calculi should be analysed. As a side effect it turns out that some results in the literature have to be reconsidered.
Introduction
The domain of qualitative temporal reasoning underwent a major change when Allen [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF] proposed a new calculus which up to a degree resulted in embedding it in the general paradigm of constraint satisfaction problems (CSPs). CSPs have their well-established sets of questions and methods, and qualitative temporal reasoning, and more recently qualitative spatial reasoning, has profited significantly from developing tools and methods analogous to those of classical constraint satisfaction. In particular, a central question for classical constraint networks is the consistency problem: is the set of constraints specified by the constraint network consistent, that is, can the variables be instantiated with values from the domains in such a way that all constraints are satisfied?
Part of the apparatus for solving the problem consists of filtering algorithms which are able to restrict the domains of the variables without changing the problem, while remaining reasonably efficient from a computational point of view. Various algorithms such as arc consistency, path consistency, and various notions of k-consistency have been extensively studied in that direction. Reasoning about temporal or spatial qualitative constraint networks on the same line as CSPs has proved a fruitful approach. Both domains indeed share a general paradigm. However, there is a fundamental difference between the two situations:
-Relations in classical CSPs are finite relations, so they can be explicitly manipulated as sets of tuples of elements of a finite domain. In other terms, relations are given and processed in an extensional way. -By contrast, relations in (most) qualitative temporal and spatial reasoning formalisms are provided in intentional terms -or, to use a more down-to-earth expression, they are infinite relations, which means that there is no feasible way of dealing with them extensionally.
But is that such an important point? We think it is, although this was not apparent for Allen's calculus. The differences began to appear when it became obvious that new formalisms, such as for instance the RCC8 calculus [START_REF] Randell | A spatial logic based on regions and connection[END_REF], could behave in a significantly different way than Allen's calculus. The differences have to do with changes in the notion of composition, with the modified meaning of the the classical concept of pathconsistency and its relationship to consistency, and with the inapplicability of familiar techniques for analysing qualitative calculi.
Composition
Constraint propagation mainly uses the operation of composition of two binary relations. In the finite case, there is only a finite number of binary relations. In Allen's case, although the domains are infinite, the compositions of the thirteen atomic relations are themselves unions of atomic relations. But this is not the case in general, where insisting on genuine composition could lead to considering an infinite number of relations, whereas the basic idea of qualitative reasoning is to deal with a finite number of relations. The way around the difficulty consists in using weak composition, which only approximates true composition.
Path consistency and other qualitative techniques
When only weak composition is used then some algorithms and techniques which require true composition can only use weak composition instead. This might lead to the inapplicability of their outcomes. Path-consistency, for example, relies on the fact that a constraint between two variables must be at least as restrictive as every path in the constraint network between the same two variables. The influence of the paths depends on composition of relations on the path. If we use algebraic closure instead of path-consistency, which is essentially path-consistency with weak composition, then we might not detect restrictions imposed by composition and therefore the filtering effect of algebraic closure is weaker than that of path-consistency. As a consequence it might not be possible to use algebraic closure as a decision procedure for certain calculi. Likewise, commonly used reduction techniques lose their strength when using only weak composition and might not lead to valid reductions.
The main goal of this paper is to thoroughly analyse how the use of weak composition instead of composition affects the applicability of the common filtering algorithms and reduction techniques and to determine under which conditions their outcomes match that of their composition-based counterparts.
Related Work
The concepts of weak composition and algebraic closure are not new. Although there has not always been a unified terminology to describe these concepts, many authors have pointed out that composition tables do not necessarily correspond to the formal definition of composition [START_REF] Bennett | Some observations and puzzles about composing spatial and temporal relations[END_REF][START_REF] Bennett | When does a composition table provide a complete and tractable proof procedure for a relational constraint language[END_REF][START_REF] Grigni | Topological inference[END_REF][START_REF] Ligozat | When tables tell it all: Qualitative spatial and temporal reasoning based on linear orderings[END_REF]. Consequently, many researchers have been interested in finding criteria for (refutation) completeness of compositional reasoning, and Bennett et al. ( [START_REF] Bennett | Some observations and puzzles about composing spatial and temporal relations[END_REF][START_REF] Bennett | When does a composition table provide a complete and tractable proof procedure for a relational constraint language[END_REF]) posed this as a challenge and conjectured a possible solution. Later work focused on dealing with this problem for RCC8 [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF][START_REF] Li | Region connection calculus: Its models and composition table[END_REF]. In particular Li and Ying ( [START_REF] Li | Region connection calculus: Its models and composition table[END_REF]) showed that no RCC8 model can be interpreted extensionally, i.e., for RCC8 composition is always only a weak composition, which gives a negative answer to Bennett et al.'s conjecture. Our paper is the first to give a general account on the effects of having weak composition and a general and clear criterion for the relationship between algebraic closure and consistency. Therefore, the results of this paper are important for establishing the foundations of qualitative spatial and temporal reasoning and are a useful tool for investigating and developing qualitative calculi.
The structure of the paper is as follows: Section 2 introduces the main notions and terminology about constraint networks, various notions of consistency and discusses weak composition and algebraic closure. Section 3 provides a characterisation of those calculi for which algebraic closure decides consistency for atomic networks. Section 4 examines the conditions under which general techniques of reduction can be applied to a qualitative calculus. Finally, Section 5 draws general conclusions in terms of how qualitative calculi should be analysed, and shows that some existing results have to be revisited in consequence.
Background
Constraint networks
Knowledge between different entities can be represented by using constraints. A binary relation R over a domain D is a set of pairs of elements of D, i.e., R ⊆ D × D. A binary constraint xRy between two variables x and y restricts the possible instantiations of x and y to the pairs contained in the relation R. A constraint satisfaction problem (CSP) consists of a finite set of variables V, a domain D with possible instantiations for each variable v_i ∈ V and a finite set C of constraints between the variables of V. A solution of a CSP is an instantiation of each variable v_i ∈ V with a value d_i ∈ D such that all constraints of C are satisfied, i.e., for each constraint v_i R v_j ∈ C we have (d_i, d_j) ∈ R. If a CSP has a solution, it is called consistent or satisfiable. Several algebraic operations are defined on relations that carry over to constraints, the most important ones being union (∪), intersection (∩), and complement of a relation, defined as the usual set-theoretic operators, as well as converse (·⁻¹), defined as R⁻¹ = {(a, b) | (b, a) ∈ R}, and composition (•) of two relations R and S, which is the relation R • S = {(a, b) | ∃c : (a, c) ∈ R and (c, b) ∈ S}.
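For a finite domain these operations can be implemented directly on explicit sets of pairs; the small Java sketch below (purely illustrative, with names of our choosing) mirrors the definitions of converse and composition just given.

import java.util.AbstractMap.SimpleEntry;
import java.util.HashSet;
import java.util.Set;

// Extensional binary relations over a finite domain, represented as sets of pairs.
class FiniteRelationOps {
    // R^-1 = {(a, b) | (b, a) in R}
    static Set<SimpleEntry<Integer, Integer>> converse(Set<SimpleEntry<Integer, Integer>> r) {
        Set<SimpleEntry<Integer, Integer>> result = new HashSet<>();
        for (SimpleEntry<Integer, Integer> p : r)
            result.add(new SimpleEntry<>(p.getValue(), p.getKey()));
        return result;
    }
    // R . S = {(a, b) | there exists c with (a, c) in R and (c, b) in S}
    static Set<SimpleEntry<Integer, Integer>> compose(Set<SimpleEntry<Integer, Integer>> r,
                                                      Set<SimpleEntry<Integer, Integer>> s) {
        Set<SimpleEntry<Integer, Integer>> result = new HashSet<>();
        for (SimpleEntry<Integer, Integer> p : r)
            for (SimpleEntry<Integer, Integer> q : s)
                if (p.getValue().equals(q.getKey()))
                    result.add(new SimpleEntry<>(p.getKey(), q.getValue()));
        return result;
    }
}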
Path-consistency
Because of the high complexity of deciding consistency, different forms of local consistency and algorithms for achieving local consistency were introduced. Local consistency is used to prune the search space by eliminating local inconsistencies. In some cases local consistency is even enough for deciding consistency. Montanari [START_REF] Montanari | Networks of constraints: Fundamental properties and applications to picture processing[END_REF] developed a form of local consistency which Mackworth [START_REF] Mackworth | Consistency in networks of relations[END_REF] later called path-consistency. Montanari's notion of path-consistency considers all paths between two variables. Mackworth showed that it is equivalent to consider only paths of length two, so path-consistency can be defined as follows: a CSP is path-consistent, if for every instantiation of two
variables v i , v j ∈ V that satisfies v i R ij v j ∈ C there exists an instantiation of every third variable v k ∈ V such that v i R ik v k ∈ C and v k R kj v j ∈ C are also satisfied. For- mally, for every triple of variables v i , v j , v k ∈ V: ∀d i , d j : [(d i , d j ) ∈ R ij → ∃d k : ((d i , d k ) ∈ R ik ∧(d k , d j ) ∈ R kj )].
Montanari also developed an algorithm that makes a CSP path-consistent, which was later simplified and called path-consistency algorithm or enforcing path-consistency. A path-consistency algorithm eliminates locally inconsistent tuples from the relations between the variables by successively applying the following operation to all triples of variables v i , v j , v k ∈ V until a fixpoint is reached:
R ij := R ij ∩ (R ik • R kj ).
If the empty relation occurs, then the CSP is inconsistent. Otherwise the resulting CSP is path-consistent.
Varieties of k-consistency
Freuder [START_REF] Freuder | Synthesizing constraint expressions[END_REF] generalised path-consistency and the weaker notion of arc-consistency to k-consistency: A CSP is k-consistent, if for every subset V k ⊂ V of k variables the following holds: for every instantiation of k -1 variables of V k that satisfies all constraints of C that involve only these k -1 variables, there is an instantiation of the remaining variable of V k such that all constraints involving only variables of V k are satisfied. So if a CSP is k-consistent, we know that each consistent instantiation of k -1 variables can be extended to any k-th variable. A CSP is strongly k-consistent, if it is i-consistent for every i ≤ k. If a CSP with n variables is strongly n-consistent (also called globally consistent) then a solution can be constructed incrementally without backtracking. 3-consistency is equivalent to path-consistency, 2-consistency is equivalent to arc-consistency.
Qualitative Spatial and Temporal Relations
The main difference of spatial or temporal CSPs to normal CSPs is that the domains of the spatial and temporal variables are usually infinite. For instance, there are infinitely many time points or temporal intervals on the time line and infinitely many regions in a two or three dimensional space. Hence it is not feasible to represent relations as sets of tuples, nor is it feasible to apply algorithms that enumerate values of the domains. Instead, relations can be used as symbols and reasoning has to be done by manipulating symbols. This implies that the calculus, which deals with extensional relations in the finite case, becomes intensional in the sense that it manipulates symbols which stand for infinite relations. The usual way of dealing with relations in qualitative spatial and temporal reasoning is to have a finite (usually small) set A of jointly exhaustive and pairwise disjoint (JEPD) relations, i.e., each possible tuple (a, b) ∈ D × D is contained in exactly one relation R ∈ A. The relations of a JEPD set A are called atomic relations. The full set of available relations is then the powerset R = 2 A which enables us to represent indefinite knowledge, e.g., the constraint x{R i , R j , R k }y specifies that the relation between x and y is one of R i , R j or R k , where R i , R j , R k are atomic relations.
Composition and weak composition
Using these relations we can now represent qualitative spatial or temporal knowledge using CSPs and use constraint-based methods for deciding whether such a CSP is consistent, i.e., whether it has a solution. Since we are not dealing with explicit tuples anymore, we have to compute the algebraic operators for the relations. These operators are the only connection of the relation symbols to the tuples contained in the relations and they have to be computed depending on the tuples contained in the relations. Union, complement, converse, and intersection of relations are again the usual set-theoretic operators while composition is not as straightforward. Composition has to be computed only for pairs of atomic relations since composition of non-atomic relations is the union of the composition of the involved atomic relations. Nevertheless, according to the definition of composition, we would have to look at an infinite number of tuples in order to compute composition of atomic relations, which is clearly not feasible. Fortunately, many domains such as points or intervals on a time line are ordered or otherwise well-structured domains and composition can be computed using the formal definitions of the relations. However, for domains such as arbitrary spatial regions that are not well structured and where there is no common representation for the entities we consider, computing the true composition is not feasible and composition has to be approximated by using weak composition [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF]. Weak composition (⋄) of two relations S and T is defined as the strongest relation R ∈ 2^A which contains S • T, or formally,
S ⋄ T = {R_i ∈ A | R_i ∩ (S • T) ≠ ∅}.
The advantage of weak composition is that we stay within the given set of relations R = 2 A while applying the algebraic operators, as R is by definition closed under weak composition, union, intersection, and converse.
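With the atomic relations encoded as bits and a table giving the weak composition of atomic pairs, weak composition of arbitrary relations can be computed as a union of table entries and therefore stays within 2^A by construction; a minimal Java sketch follows (illustrative encoding and names, not tied to any particular library).

// Weak composition of two relations S and T, encoded as bitsets over the atomic relations.
// table[a][b] is the weak composition of the atomic relations a and b.
class WeakCompositionOp {
    static long weakCompose(long s, long t, long[][] table) {
        long result = 0L;
        for (int a = 0; a < table.length; a++) {
            if ((s & (1L << a)) == 0L) continue;
            for (int b = 0; b < table.length; b++) {
                if ((t & (1L << b)) == 0L) continue;
                result |= table[a][b];                 // union of the relevant table entries
            }
        }
        return result;                                 // again a union of atomic relations
    }
}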
In cases where composition cannot be formally computed (e.g. RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF]), it is often very difficult to determine whether weak composition is equivalent to composition or not. Usually only non-equality can be shown by giving a counterexample, while it is very difficult to prove equality. However, weak composition has also been used in cases where composition could have been computed because the domain is well-structured and consists of pairs of ordered points, but where the authors did not seem to be aware that R is not closed under composition (e.g. INDU, PDN, or PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]) Example 1 (Region Connection Calculus RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF]). RCC8 is a topological constraint language based on eight atomic relations between extended regions of a topological space. Regions are regular subsets of a topological space, they can have holes and can consist of multiple disconnected pieces. The eight atomic relations DC (disconnected), EC (externally connected), P O (partial overlap), EQ (equal), T P P (tangential proper part), N T P P (non-tangential proper part) and their converse relations T P P i, N T P P i were originally defined in first-order logic. It was shown by Düntsch [START_REF] Düntsch | A relation -algebraic approach to the region connection calculus[END_REF], that the composition of RCC8 is actually only a weak composition. Consider the consistent RCC8 constraints B{T P P }A, B{EC}C, C{T P P }A. If A is instantiated as a region with two disconnected pieces and B completely fills one piece, then C cannot be instantiated. So T P P is not a subset of EC • T P P [START_REF] Li | Region connection calculus: Its models and composition table[END_REF] and consequently RCC8 is not closed under composition.
Algebraic closure
When weak composition differs from composition, we cannot apply the path-consistency algorithm as it requires composition and not just weak composition. We can, however, replace the composition operator in the path-consistency algorithm with the weak composition operator. The resulting algorithm is called the algebraic closure algorithm [START_REF] Ligozat | Qualitative calculi: a general framework[END_REF] which makes a network algebraically closed or a-closed.
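A naive version of the resulting algorithm, in the style of PC1, can be sketched as follows (illustrative Java with the same bitset encoding as above; the weak composition operation is assumed to be supplied by the calculus, and the matrix is assumed to store both C_ij and its converse C_ji).

// Naive algebraic closure: apply C_ij := C_ij ∩ (C_ik ⋄ C_kj) until a fixpoint is reached.
class AlgebraicClosure {
    interface Calculus {
        long weakCompose(long r, long s);   // weak composition, e.g. via a composition table
    }
    static boolean aClosure(long[][] C, Calculus calc) {
        int n = C.length;
        boolean change = true;
        while (change) {
            change = false;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    for (int k = 0; k < n; k++) {
                        long refined = C[i][j] & calc.weakCompose(C[i][k], C[k][j]);
                        if (refined != C[i][j]) {
                            C[i][j] = refined;
                            if (refined == 0L) return false;   // empty relation: inconsistent
                            change = true;
                        }
                    }
        }
        return true;                                           // the network is now a-closed
    }
}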
If weak composition is equal to composition, then the two algorithms are also equivalent. But whenever we have only weak composition, an a-closed network is not necessarily path-consistent as there are relations S and T such that S • T ⊂ S ⋄ T. So there are tuples (u, v) ∈ S ⋄ T for which there is no w with (u, w) ∈ S and (w, v) ∈ T, i.e., for which (u, v) ∉ S • T. This contradicts the path-consistency requirements given above.
Path-consistency has always been an important property when analysing qualitative calculi, in particular as a method for identifying tractability. When this method is not available, it is not clear what the consequences of this will be. Will it still be possible to find calculi for which a-closure decides consistency even if weak composition differs from composition? What effect does it have on techniques used for analysing qualitative calculi which require composition and not just weak composition? And what is very important, does it mean that some results in the literature have to be revised or is it enough to reformulate them? These and related questions will be answered in the remainder of the paper. As an immediate consequence, unless we have proven otherwise, we should for all qualitative spatial and temporal calculi always assume that we are dealing with weak composition and that it is not equivalent to composition.
Weak composition and algebraic closure
For analysing the effects of weak composition, we will mainly focus on its effects on the most commonly studied reasoning problem, the consistency problem, i.e., whether a given set Θ of spatial or temporal constraints has a solution. Recall that consistency means that there is at least one instantiation for each variable of Θ with a value from its domain which satisfies all constraints. This is different from global consistency which requires strong k-consistency for all k. Global consistency cannot be obtained when we have only weak composition as we have no method for even determining 3-consistency. For the mere purpose of deciding consistency it actually seems overly strong to require any form of k-consistency as we are not interested in whether any consistent instantiation of k variables can be extended to k + 1 variables, but only whether there exists at least one consistent instantiation. Therefore it might not be too weak for deciding consistency to have only algebraic closure instead of path-consistency.

                               | a-closure sufficient | a-closure not sufficient
weak composition = composition | Interval Algebra [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF], rectangle algebra [START_REF] Guesgen | Spatial reasoning based on Allen's temporal logic[END_REF], block algebra [START_REF] Balbiani | A tractable subclass of the block algebra: constraint propagation and preconvex relations[END_REF] | STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF], containment algebra [START_REF] Ladkin | On binary constraint problems[END_REF], cyclic algebra [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF]
weak composition ≠ composition | RCC8 [START_REF] Randell | A spatial logic based on regions and connection[END_REF], discrete IA | INDU [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF], PDN [START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF], PIDN [START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]
Table 1. Does a-closure decide atomic CSPs depending on whether weak composition differs from composition?
In the following we restrict ourselves to atomic CSPs, i.e., CSPs where all constraints are restricted to be atomic relations. If a-closure does not even decide atomic CSPs, it will not decide more general CSPs. We will later see how the results for atomic CSPs can be extended to less restricted CSPs. Let us first analyse for some existing calculi how the two properties whether a-closure decides atomic CSPs and whether weak composition differs from composition relate. We listed the results in Table 1 and they are significant: Proposition 1. Let R be a finite set of qualitative relations. Whether a-closure decides consistency for atomic CSPs over R is independent of whether weak composition differs from composition for relations in R.
This observation shows us that whether or not a-closure decides atomic CSPs does not depend on whether weak composition is equivalent to composition or not. Instead we will have to find another criterion for when a-closure decides atomic CSPs. In order to find such a criterion we will look at some examples where a-closure does not decide atomic CSPs and see if we can derive some commonalities.
Example 2 (STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF]). Directions between two-dimensional points are distinguished by specifying an arbitrary number of angles which separate direction sectors. The atomic relations are the sectors as well as the lines that separate the sectors (see Figure 1 left). The domain is ordered so it is possible to compute composition. The relations are closed under composition. If more than two angles are given, then by using constraint configurations involving four or more variables, it is possible to refine the atomic relations that correspond to sectors to different particular angles (see Figure 1 right). By combining configurations that refine the same atomic relation to different angles, inconsistencies can be constructed that cannot be detected by a-closure. In this example we can see that even true composition can be too weak. Although we know the composition and all relations are closed under composition, it is possible to refine atomic relations using networks with more than three nodes.

Example 3 (INDU calculus [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF]). Allen's 13 interval relations are combined with relative duration of intervals given in the form of a point algebra, i.e., INDU relations are of the form R = I δ where I is an interval relation (precedes p, meets m, during d, starts s, overlaps o, finishes f, equal =, and the converse relations fi, oi, si, di, mi, pi) and δ a duration relation (<, >, =). This leads to only 25 atomic relations as some combinations are impossible, e.g., a{s}b enforces that the duration of a must be less than that of b. Only weak composition is used, as for example the triple a{s<}b, a{m<}c, c{f<}b enforces that a < 0.5 * b and c > 0.5 * b. So an instantiation where a = 0.5 * b cannot be extended to a consistent instantiation of c. In the same way it is possible to generate any metric duration constraint of the form duration(x) R α * duration(b) where R ∈ {<, >, =} and α is a rational number. Consequently, it is possible to construct inconsistent atomic CSPs which are a-closed.
In both examples it is possible to refine atomic relations to subatomic relations that have no tuples in common, i.e., which do not overlap. This can be used to construct inconsistent examples which are still a-closed. Note that in the case of the interval algebra over integers it is possible to refine atomic relations to subatomic relations, e.g., a{p}b, b{p}c leads to a{p + 2}c, where p + 2 indicates that a must precede c by at least 2 more integers than is required by the precedes relation. But since these new subatomic relations always overlap, it is not possible to construct inconsistencies which are a-closed. Let us formally define these terms.
Definition 1 (refinement to a subatomic relation).
Let Θ be a consistent atomic CSP over a set A and xRy ∈ Θ a constraint. Let R′ be the union of all tuples (u, v) ∈ R that can be instantiated to x and y as part of a solution of Θ. If R′ ⊂ R, then Θ refines R to the subatomic relation R′.
Definition 2 (closure under constraints). Let A be a set of atomic relations. A is closed under constraints if no relation R ∈ A can be refined to non-overlapping subatomic relations, i.e., if for each R ∈ A all subatomic relations R′ ⊂ R to which R can be refined have a nonempty intersection.
In the following theorem we show that the observation made in these examples holds in general and we can prove in which cases a-closure decides atomic CSPs, which is independent of whether weak composition differs from composition and only depends on whether the atomic relations are closed under constraints. Therefore, the new concept of closure under constraints turns out to be a very important property of qualitative reasoning.
Theorem 1.
Let A be a finite set of atomic relations. Then a-closure decides consistency of CSPs over A if and only if A is closed under constraints.
Proof Sketch. ⇒: Given a set of atomic relations A = {R 1 , . . . , R n }. We have to prove that if A is not closed under constraints, then a-closure does not decide consistency over A. A is not closed under constraints means that there is an atomic relation R k ∈ A which can be refined to non-overlapping subatomic relations using atomic sets of constraints over A. We will prove this by constructing an a-closed but inconsistent set of constraints over A for those cases where A is not closed under constraints. We assume without loss of generality that if A is not closed under constraints, there are at least two non-overlapping subatomic relations R 1 k , R 2 k of R k which can be obtained using the atomic sets of constraints Θ 1 , Θ 2 (both are a-closed and consistent and contain the constraint xR k y). We combine all tuples of
R_k not contained in R_k^1 or R_k^2 to R_k^m and have that R_k^1 ∪ R_k^2 ∪ R_k^m = R_k and that R_k^1, R_k^2, R_k^m are pairwise disjoint. We can now form a new set of atomic relations A′ where R_k is replaced with R_k^1, R_k^2, R_k^m (analogously for R_k^-1).
All the other relations are the same as in A. The weak composition table of A′ differs from that of A for the entries that contain R_k or R_k^-1. Since R_k^1 and R_k^2 can be obtained by atomic sets of constraints over A, the entries in the weak composition table of A′ cannot be the same for R_k^1 and for R_k^2. Therefore, there must be a relation R_l ∈ A for which the entries of R_l ⋄ R_k^1 and of R_l ⋄ R_k^2 differ. We assume that R_l ⋄ R_k = S and that R_l ⋄ R_k^1 = S \ S_1 and R_l ⋄ R_k^2 = S \ S_2, with S, S_1, S_2 ∈ 2^A and S_1 ≠ S_2. We choose a non-empty one, say S_1, and can now obtain an inconsistent triple x R_k^1 y, z R_l x, z S_1 y for which the corresponding triple x R_k y, z R_l x, z S_1 y is consistent. Note that we use A′ only for identifying R_l and S_1.
If we now consider the set of constraints Θ = Θ_1 ∪ {z R_l x, z S_1 y} (where z is a fresh variable not contained in Θ_1), then Θ is clearly inconsistent since Θ_1 refines x R_k y to x R_k^1 y and since R_l ⋄ R_k^1 = S \ S_1. However, applying the a-closure algorithm to Θ (resulting in Θ′) using the weak composition table of A does not result in an inconsistency, since a-closure does not see the implicit refinement of x R_k y to x R_k^1 y.
⇐: Proof by induction over the size n of Θ. Induction hypothesis: P(n) = {For sets Θ of atomic constraints of size n, if it is not possible to refine atomic relations to non-overlapping subatomic relations, then a-closure decides consistency for Θ.} This is clear for n ≤ 3. Now take an a-closed atomic CSP Θ of size n+1 over A and assume that P(n) is true. For every variable x ∈ Θ let Θ_x be the atomic CSP that results from Θ by removing all constraints that involve x. Because of P(n), Θ_x is consistent for all x ∈ Θ. Let R_x be the subatomic relation to which R is refined in Θ_x and let R′ be the intersection of R_x for all x ∈ Θ. If R′ is non-empty for every R ∈ A, i.e., if it is not possible to refine R to non-overlapping subatomic relations, then we can choose a consistent instantiation of Θ_x which contains for every relation R only tuples of R′. Since no relation R of Θ_x can be refined beyond R′ by adding constraints of Θ that involve x, it is clear that we can then also find a consistent instantiation for x, and thereby obtain a consistent instantiation of Θ.
This theorem is not constructive in the sense that it does not help us to prove that a-closure decides consistency for a particular calculus. But such a general constructive theorem would not be possible as it depends on the semantics of the relations and on the domains whether a-closure decides consistency. This has to be formally proven in a different way for each new calculus and for each new domain. What our theorem gives us, however, is a simple explanation why a-closure is independent of whether weak composition differs from composition: It makes no difference whatsoever whether non-overlapping subatomic relations are obtained via triples of constraints or via larger constellations (as in Example 2). In both cases a-closure cannot detect all inconsistencies. Our theorem also gives us both, a simple method for determining when a-closure does not decide consistency, and a very good heuristic for approximating when it does. Consider the following heuristic: Does the considered domain enable more distinctions than those made by the atomic relations, and if so, can these distinctions be enforced by a set of constraints over existing relations? This works for the three examples we already mentioned. It also works for any other calculus that we looked at. Take for instance the containment algebra which is basically the interval algebra without distinguishing directions [START_REF] Ladkin | On binary constraint problems[END_REF]. So having directions would be a natural distinction and it is easy to show that we can distinguish relative directions by giving constraints: If a is disjoint from b and c touches b but is disjoint from a, then c must be on the same side of a as b. This can be used to construct a-closed inconsistent configurations. For RCC8, the domain offers plenty of other distinctions, but none of them can be enforced by giving a set of RCC8 constraints. This gives a good indication that a-closure decides consistency (which has been proven in [START_REF] Renz | On the complexity of qualitative spatial reasoning: A maximal tractable fragment of the Region Connection Calculus[END_REF]). If we restrict the domain of RCC8, e.g., to two-dimensional discs of the same size, then we can find distinctions which can be enforced by giving constraints.
When defining a new qualitative calculus by defining a set of atomic relations, it is desirable that algebraic closure decides consistency of atomic CSPs. Therefore, we recommend to test the above given heuristic when defining a new qualitative calculus and to make sure that the new atomic relations are closed under constraints. In section 5 we discuss the consequences of having a set of relations which is not closed under constraints.
Effects on qualitative reduction techniques
In the analysis of qualitative calculi it is usually tried to transfer properties such as tractability or applicability of the a-closure algorithm for deciding consistency to larger sets of relations and ideally find the maximal sets that have these properties. Such general techniques involve composition of relations in one way or another and it is not clear whether they can still be applied if only weak composition is known and if they have been properly applied in the literature. It might be that replacing composition with weak composition and path-consistency with a-closure is sufficient, but it might also be that existing results turn out to be wrong or not applicable. In this section we look at two important general techniques for extending properties to larger sets of relations. The first technique is very widely used and is based on the fact that a set of relations S ⊆ 2^A and the closure Ŝ of S under composition, intersection, and converse have the same complexity. This results from a proof that the consistency problem for Ŝ (written as CSPSAT(Ŝ)) can be polynomially reduced to CSPSAT(S) by inductively replacing each constraint xRy over a relation R ∈ Ŝ \ S by either xSy ∧ xTy or by xSz • zTy for S, T ∈ S [START_REF] Renz | On the complexity of qualitative spatial reasoning: A maximal tractable fragment of the Region Connection Calculus[END_REF]. If we have only weak composition, then we have two problems. First, we can only look at the closure of S under intersection, converse, and weak composition (we will denote this weak closure by Ŝ_w). And, second, we can replace a constraint xRy over a relation R ∈ Ŝ_w \ S only by xSy ∧ xTy or by xSz ⋄ zTy for S, T ∈ S. For xSz ⋄ zTy we know that it might not be a consistent replacement for xRy. In Figure 2 we give an example of a consistent set of INDU constraints which becomes inconsistent if we replace a non-atomic constraint by an intersection of two weak compositions of other INDU relations.
So it is clear that this widely used technique does not apply in all cases where we have only weak composition. In the following theorem we show when it can still be applied.
Theorem 2. Let R be a finite set of qualitative relations and S ⊆ R a set of relations. Then CSPSAT(Ŝ_w) can be polynomially reduced to CSPSAT(S) if a-closure decides consistency for atomic CSPs over R.
Proof Sketch. Consider an a-closed set Θ of constraints over Ŝ_w. When inductively replacing constraints over Ŝ_w with constraints over S, i.e., when replacing xRy, where R ∈ Ŝ_w, with xSz and zTy where S ⋄ T = R and S, T ∈ S and z is a fresh variable, then potential solutions are lost. However, all these triples of relations (R, S, T) are minimal, i.e., every atomic relation of R can be part of a solution of the triple. No solutions are lost when replacing constraints with the intersection of two other constraints or by a converse constraint. Let Θ′ be the set obtained from Θ after inductively replacing all constraints over Ŝ_w with constraints over S. Since potential solutions are lost in the transformation, the only problematic case is where Θ is consistent but Θ′ is inconsistent. If Θ is consistent, then there must be a refinement of Θ to a consistent atomic CSP Θ_a. For each constraint xRy of Θ which is replaced, all the resulting triples are minimal and are not related to any other variable in Θ. Note that due to the inductive replacement, some constraints will be replaced by stacks of minimal triples. Therefore, each R can be replaced with any of its atomic relations without making the resulting stacks inconsistent. Intersecting Θ′ with Θ_a followed by computing a-closure will always result in an a-closed set. Since the stacks contain only minimal triples, it is clear that they can be subsequently refined to atomic relations. The relations between the fresh variables and the variables of Θ can also be refined to atomic relations as they were unrelated before applying a-closure. The resulting atomic CSP will always be a-closed, so Θ′ must be consistent if a-closure decides atomic CSPs. This covers all the cases in the middle column of Table 1 such as RCC8, but does not cover those cases in the bottom right cell.
The second general technique which is very useful for analysing computational properties and identifying large tractable subsets is the refinement method [START_REF] Renz | Maximal tractable fragments of the region connection calculus: A complete analysis[END_REF]. It gives a simple algorithm for showing if a set S ⊆ 2 A can be refined to a set T ⊆ 2 A in the sense that for every path-consistent set of constraints Θ over S and every relation S ∈ S we can always refine S to a subrelation T ⊆ S with T ∈ T . If path-consistency decides consistency for T then it must also decide consistency for S. Theorem 3. Let R be a finite set of qualitative relations for which a-closure decides atomic CSPs. The refinement method also works for weak composition by using the a-closure algorithm instead of the path-consistency algorithm.
Proof Sketch. Any a-closed triple of variables is minimal. So if a relation S can be refined to T in any a-closed triple that contains S, then the refinement can be made in any a-closed network without making the resulting network not a-closed. If a-closure decides the resulting network, then it also decides the original network.
Note that the refinement method only makes sense if a-closure decides atomic CSPs as the whole purpose of the refinement method is to transfer applicability of a-closure for deciding consistency from one subset of R to another.
A road map for analysing qualitative calculi
Using the results of our paper we can now analyse new and revisit existing qualitative spatial and temporal calculi. When defining a new set of atomic relations and the domain is not ordered, we have to assume that we have only weak composition unless we can prove the contrary. The most important step is to prove whether a-closure decides atomic CSPs for our new calculus. It is possible to use the heuristic given in the previous section, but if a-closure decides atomic CSPs, then this has to be proven using the semantics of the relations. If it turns out that a-closure decides atomic CSPs then we can proceed by applying the techniques we discussed in the previous section, i.e., we can identify larger tractable subsets by using the refinement method and by computing the closure of known tractable subsets under intersection, converse and (weak) composition. But what if it does not?
When a-closure does not decide atomic CSPs
This is the case for many calculi in the literature (see e.g. Table 1) and will probably be the case for many future calculi. As shown in Theorem 1 this means that it is possible to enforce non-overlapping subatomic relations. If we only get finitely many non-overlapping subatomic relations, as, e.g., for the containment algebra, then it is best to study the calculus obtained by the finitely many new atomic relations and treat the original calculus as a subcalculus of the new calculus. If we do get infinitely many nonoverlapping subatomic relations, however, then we suggest to proceed in one of two different ways. Let us first reflect what it means to have infinitely many non-overlapping subatomic relations: An important property of a qualitative calculus is to have only finitely many distinctions. So if we have to make infinitely many distinctions, then we do not have a qualitative calculus anymore! Therefore we cannot expect that qualitative methods and techniques that are only based on (weak) compositions help us in any way. This is also the reason why we analysed the techniques in the previous section only for cases where a-closure decides atomic CSPs, i.e., where we do have qualitative calculi. 3One way of dealing with these calculi is to acknowledge that we do not have a qualitative calculus anymore and to use algorithms that deal with quantitative calculi instead. It might be that consistency can still be decided in polynomial time using these algorithms. Another way is to find the source that makes the calculus quantitative and to eliminate this source in such a way that it has no effect anymore, e.g., by combining atomic relations to form coarser atomic relations. Both of these ways were considered for the STAR calculus [START_REF] Renz | Qualitative direction calculi with arbitrary granularity[END_REF]. A third way, which is sometimes chosen, but which we discourage everyone from taking, is to look at 4-consistency.
Problems with using 4-consistency
We sometimes see results in the literature of the form "4-consistency decides consistency for a set of relations S ⊆ 2^A and therefore S is tractable." What we have not seen so far is a proper 4-consistency algorithm. For infinite domains where we only manipulate relation symbols, a 4-consistency algorithm must be based on composition of real ternary relations. The question then is how we can show that the composition of the ternary relations is not just a weak composition. Just like computing composition for binary relations, we might have to check an infinite number of domain values.
Consequently, there could be no 4-consistent configurations at all, or it could be NP-hard to show whether a configuration is 4-consistent. This makes these results rather useless from a practical point of view and certainly does not allow the conclusion that these sets are tractable. We illustrate this using an example from the literature where 4-consistency was wrongly used for proving that certain subsets of INDU or PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF] are tractable:
1. 4-consistency decides consistency for S ⊆ 2^A.
2. Deciding consistency is NP-hard for T ⊆ S.
The first result was proven for some subsets of INDU and PIDN [START_REF] Pujari | INDU: An Interval and Duration Network[END_REF][START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]. We obtained the second result by a straightforward reduction of the NP-hard consistency problem of PDN [START_REF] Navarrete | On point-duration networks for temporal reasoning[END_REF] to INDU and PIDN. It is clear from this example that 4-consistency results cannot be used for proving tractability. Validity and applicability of similar results in the literature should be reconsidered as well.
Conclusions
We started with the well-known observation that in many cases in qualitative spatial and temporal reasoning only weak composition can be determined. This requires us to use a-closure instead of path-consistency. We thoroughly analysed the consequences of this fact and showed that the main difficulty is not whether weak composition differs from composition, but whether it is possible to generate non-overlapping subatomic relations, a property which we prove to be equivalent to whether a-closure decides atomic CSPs. Since this occurs also in cases where weak composition is equal to composition, our analysis does not only affect cases where only weak composition is known (which are most cases where the domains are not ordered) but qualitative spatial and temporal calculi in general. We also showed under which conditions some important techniques for analysing qualitative calculi can be applied and finally gave a roadmap for how qualitative calculi should be developed and analysed. As a side effect of our analysis we found that some results in the literature have to be reconsidered.
The converse of a relation R is the relation R^-1 = {(a, b) | (b, a) ∈ R}, and the composition (•) of two relations R and S is the relation R • S = {(a, b) | ∃c : (a, c) ∈ R and (c, b) ∈ S}.
Fig. 1. A STAR calculus with 3 angles resulting in 13 atomic relations (left). The right picture shows an atomic CSP whose constraints enforce that D must be 45 degrees to the left of B, i.e., the constraint B{11}D is refined by the other constraints to the line orthogonal to relation 2. Therefore, the atomic relation 11 can be refined to a subatomic relation using the given constraints.
Fig. 2. (1) A consistent INDU network which becomes inconsistent when replacing b{s<, d<}a with (2). From (1) we get b > 0.5 * a and from (2) we get b < 0.5 * a.
It is unlikely to find a version of Theorem 2 for cases where a-closure does not decide atomic CSPs. As a heuristic, the following property could be considered: xRy can only be replaced with xSz, zTy if for all weak compositions of Ri and Rj that contain R, the intersection of all real compositions Ri • Rj is nonempty.
National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
| 47,724 | [ "1003925", "997069" ] | [ "74661", "247329" ] |
01487498 | en | [ "info", "scco" ] | 2024/03/04 23:41:48 | 2004 | https://hal.science/hal-01487498/file/KR04CondottaJF.pdf | Jean-François Condotta
Gérard Ligozat
email: ligozat@limsi.fr
Axiomatizing the Cyclic Interval Calculus
Keywords: qualitative temporal reasoning, cyclic interval calculus, cyclic orderings, completeness, ℵ0-categorical theories
This paper is concerned with the cyclic interval calculus introduced by Balbiani and Osmani. In this formalism, the basic entities are intervals on a circle, and using considerations similar to Allen's calculus, sixteen basic relations are obtained, which form a jointly exhaustive and pairwise disjoint (JEPD) set of relations. The purpose of this paper is to give an axiomatic description of the calculus, based on the properties of the meets relation, from which all other fifteen relations can be deduced. We show how the corresponding theory is related to cyclic orderings, and use the results to prove that any countable model of this theory is isomorphic to the cyclic interval structure based on the rational numbers. Our approach is similar to Ladkin's axiomatization of Allen's calculus, although the cyclic structures introduce specific difficulties.
Introduction
In the domain of qualitative temporal reasoning, a great deal of attention has been devoted to the study of temporal formalisms based on a dense and unbounded linear model of time. Most prominently, this is the case of Allen's calculus, where the basic entities are intervals of the real time line, and the 13 basic relations (Allen's relations) correspond to the possible configurations of the endpoints of two intervals [START_REF] Allen | An interval-based representation of temporal knowledge[END_REF]. Other calculi such as the cardinal direction calculus (Ligozat 1998a;1998b), the n-point calculus (Balbiani & Condotta 2002), the rectangle calculus [START_REF] Balbiani | A new tractable subclass of the rectangle algebra[END_REF], the n-block calculus [START_REF] Balbiani | Tractability results in the block algebra[END_REF] are also based on products of the real line equipped with its usual ordering relation, hence on products of dense and unbounded linear orderings.
However, many situations call for considering orderings which are cyclic rather than linear. In particular, the set of directions around a given point of reference has such a cyclic structure. This fact has motivated several formalisms in this direction: Isli and Cohn [START_REF] Isli | A new approach to cyclic ordering of 2D orientations using ternary relation algebras[END_REF] and Balbiani et al. [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF] consider a calculus about points on a circle, based on qualitative ternary relations between the points. Schlieder's work on the concepts of orientation and panorama [START_REF] Schlieder | Representing visible locations for qualitative navigation[END_REF][START_REF] Schlieder | Reasoning about ordering[END_REF] is also concerned with cyclic situations. Our work is more closely related to Balbiani and Osmani's proposal [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] which we will refer to as the cyclic interval calculus. This calculus is similar in spirit to Allen's calculus: in the same way as the latter, which views intervals on the line as ordered pairs of points (the starting and ending point of the interval), the cyclic interval calculus considers intervals on a circle as pairs of distinct points: two points on a circle define the interval obtained when starting at the first, going (say counterclockwise) around the circle until the second point is reached. The consideration of all possible configurations between the endpoints of two intervals defined in that way leads to sixteen basic relations, each one of which is characterized by a particular qualitative configuration. For instance, the relation meets corresponds to the case where the last point of the first interval coincides with the first point of the other, and the two intervals have no other point in common. Another interesting relation, which has no analog in the linear case, is the mmi relation1 , where the last point of each interval is the first point of the other (as is the case with two serpents, head to tail, each one of them devouring the other).
This paper is concerned with giving suitable axioms for the meets relation in the cyclic case. This single relation can be used to define all other 15 relations of the formalism (there is a similar fact about the meets relation in Allen's calculus). We give a detailed description of the way in which the axiomatization of cyclic orderings (using a ternary relation described in [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]) relates to the axiomatization of cyclic intervals based on the binary relation meets. Our approach is very similar to the approach followed by Ladkin in his PhD thesis [START_REF] Ladkin | The Logic of Time Representation[END_REF], where he shows how the axiomatization of dense and unbounded linear orderings relates to the axiomatization proposed by Allen and Hayes for the interval calculus, in terms of the relation meets.
The core of the paper, apart from the choice of an appropriate set of axioms, rests on two constructions:
• Starting from a cyclic ordering, that is a set of points equipped with a ternary order structure satisfying suitable axioms , the first construction defines a set of cyclic intervals equipped with a binary meets relation; and conversely.
• Starting from a set of cyclic intervals equipped with a meets relation, the second construction yields a set of points (the intuition is that two intervals which meet define a point, their meeting point) together with a ternary relation which has precisely the properties necessary to define a cyclic ordering.
The next step involves studying how the two constructions interact. In the linear case, a result of Ladkin's can be expressed in the language of category theory by saying that the two constructions define an equivalence of categories. Using Cantor's theorem, this implies that the corresponding theories are ℵ 0 categorical. In the cyclic case, we prove an analogous result: here again, the two constructions define an equivalence of categories. On the other hand, as shown in [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF], all countable cyclic orderings are isomorphic. As a consequence, the same fact is true of the cyclic interval structures which satisfy the axioms we give for the relation meets. This is the main result of the paper. We further examine the connections of these results to the domain of constraint-based reasoning in the context of the cyclic interval calculus, and we conclude by pointing to possible extensions of this work.
Building cyclic interval structures from cyclic orderings
This section is devoted to a construction of the cyclic interval structures we will consider in this paper, starting from cyclic orderings. In the next section, we will propose a set of axioms for these structures. Intuitively, each model can be visualized in terms of a set of oriented arcs (intervals) on a circle (an interval is identified by a starting point and an ending point on the circle), together with a binary meets relation on the set of intervals. Specifically, two cyclic intervals (m, n) and (m′, n′) are such that (m, n) meets (m′, n′) if n = m′ and n′ is not between m and n, see Figure 1 (as a consequence, n = m′ is the only point that the two intervals have in common). In order to build interval structures, we start from cyclic orderings [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]. Intuitively, the cyclic ordering on a circle is similar to the usual ordering on the real line. In formal terms, a cyclic ordering is a pair (P, ≺) where P is a nonempty set of points, and ≺ is a ternary relation on P such that the following conditions are met, for all x, y, z, t ∈ P:
P1. ¬≺(x, y, y);
P2. ≺(x, y, z) ∧ ≺(x, z, t) → ≺(x, y, t);
P3. x ≠ y ∧ x ≠ z → y = z ∨ ≺(x, y, z) ∨ ≺(x, z, y);
P4. ≺(x, y, z) ↔ ≺(y, z, x) ↔ ≺(z, x, y);
P5. x ≠ y → (∃z ≺(x, z, y)) ∧ (∃z ≺(x, y, z));
P6. ∃x, y x ≠ y.
Definition 1 ( The cyclic interval structure associated to a cyclic ordering) Let (P, ≺) be a cyclic ordering.
The cyclic interval structure CycInt((P, ≺)) associated to (P, ≺ ) is the pair (I, meets) where:
• I = {(x, y) ∈ P × P : ∃z ∈ P with ≺ (x, y, z)}. The elements of I are called (cyclic) intervals.
• meets is the binary relation defined by meets = {((x, y), (x′, y′)) : y = x′ and ≺(x, y, y′)}.
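For concreteness, the construction of Definition 1 can be sketched in a few lines of Python. This is only our illustration: it uses rational points on a circle of circumference 1 rather than [0, 2π[ (which changes nothing structurally), and the names prec, is_interval and meets are ours.

from fractions import Fraction

def prec(x, y, z):
    # Cyclic ordering obtained from the linear order <: x, y, z occur in this
    # order when travelling counterclockwise around the circle.
    return (x < y < z) or (y < z < x) or (z < x < y)

def is_interval(u):
    # On the dense rational circle, (x, y) is a cyclic interval iff x != y.
    x, y = u
    return x != y

def meets(u, v):
    # (x, y) meets (x2, y2) iff y = x2 and prec(x, y, y2).
    return u[1] == v[0] and prec(u[0], u[1], v[1])

# Mirroring the example below, with angles rescaled from [0, 2*pi) to [0, 1):
a = (Fraction(3, 4), Fraction(1, 4))   # plays the role of (3*pi/2, pi/2)
b = (Fraction(1, 4), Fraction(1, 2))   # plays the role of (pi/2, pi)
c = (Fraction(1, 4), Fraction(5, 6))   # plays the role of (pi/2, 5*pi/3)
assert meets(a, b) and not meets(a, c)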
As an example, consider the set C of all rational numbers contained in the interval [0, 2π[, together with the cyclic ordering ≺ defined by ≺(x, y, z) iff x < y < z or y < z < x or z < x < y. Then CycInt((C, ≺)) is a cyclic interval structure (I, meets). Each element u = (x, y) of I can be viewed as the oriented arc containing all points between the points represented by x and y (we will refer to these two points as the endpoints of the cyclic interval u and denote by u− and u+, respectively, the points associated to x and y). For instance, the cyclic intervals (0, π/2), (π/2, 0) and (3π/2, π/2) are shown in Figure 2. Notice that no cyclic interval contains only one point (there are no punctual intervals), and that no interval covers the whole circle. Intuitively, two cyclic intervals are in the relation meets if and only if the ending point of the first coincides with the starting point of the other, and the intervals have no other point in common. For instance, ((3π/2, π/2), (π/2, π)) ∈ meets, while ((3π/2, π/2), (π/2, 5π/3)) ∉ meets.
Let (I, meets) be a cyclic interval structure. We now show how the other fifteen basic relations of the cyclic interval calculus defined by [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] can be defined using the meets relation. The 16 relations are denoted by the set of symbols {m, mi, ppi, mmi, d, di, f, fi, o, oi, s, si, ooi, moi, mio, eq} (where m is the meets relation). Figure 3 shows examples of these relations. More formally, the relations other than meets are defined as follows:
• u ppi v def≡ ∃w, x u m w m v m x m u,
• u mmi v def≡ ∃w, x, y, z w m x m y m z m w ∧ z m u m y ∧ x m v m w,
• u d v def≡ ∃w, x, y w m x m u m y m w ∧ v mmi w,
• u f v def≡ ∃w, x w m x m u m w ∧ v mmi w,
• u o v def≡ ∃w, x, y, z u m v m x m u ∧ v m x m y m v ∧ y m z m w,
• u s v def≡ ∃w, x, y w m x m v m w ∧ x m u m y m w,
• u ooi v def≡ ∃w, x w f u ∧ w s v ∧ x s u ∧ x f v,
• u moi v def≡ ∃w, x, y w m x m y m w ∧ y ppi u ∧ x ppi v,
• u mio v def≡ ∃w, x, y w m x m y m w ∧ x ppi u ∧ y ppi v,
• u eq v def≡ ∃w, x w m u m x ∧ w m v m x.
The relations mi, di, fi, oi, si are the converse relations of m, d, f, o, s, respectively.
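On the dense rational circle of the example above, the existential witnesses occurring in some of these definitions are completely determined by the endpoints of u and v, so the defining formulas can be checked directly. The following is only our illustrative sketch, continuing the code above; in an arbitrary model one would instead have to search for the witnesses.

def ppi(u, v):
    # u ppi v iff there exist w, x with u m w m v m x m u; the chain forces
    # w = (u+, v-) and x = (v+, u-), so testing these two candidates suffices.
    w, x = (u[1], v[0]), (v[1], u[0])
    return (is_interval(w) and is_interval(x) and
            meets(u, w) and meets(w, v) and meets(v, x) and meets(x, u))

def eq(u, v):
    # u eq v iff there exist w, x with w m u m x and w m v m x; on this model
    # this amounts to identity of the endpoint pairs (compare Axiom A7 below).
    return u == v

assert ppi((Fraction(0), Fraction(1, 4)), (Fraction(1, 2), Fraction(3, 4)))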
Axioms for cyclic interval structures: The CycInt theory
In this section, we give a set of axioms allowing us to characterise the relation meets of cyclic intervals. Several axioms are motivated by intuitive properties of models of cyclic intervals. Other axioms are axioms of the relation meets of the intervals of the line [START_REF] Ladkin | The Logic of Time Representation[END_REF][START_REF] Allen | A commonsense theory of time[END_REF] adapted to the cyclic case.
In the sequel u, v, w, . . . will denote variables representing cyclic intervals. The symbol | corresponds to the relation meets.
The expression v1|v2| . . . |vn, with v1, v2, . . . , vn n variables (n > 2), is an abbreviation for the conjunction ⋀i=1..n−1 vi|vi+1. Note that the expression v1|v2| . . . |vn|v1 is equivalent to v2| . . . |vn|v1|v2.
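A tiny helper makes this abbreviation directly usable in the sketches of this paper (our code; meets is the concrete relation defined earlier or any other implementation of |):

def chain(meets, *vs):
    # chain(meets, v1, v2, ..., vn) expresses v1|v2|...|vn.
    return all(meets(a, b) for a, b in zip(vs, vs[1:]))

# For instance, chain(meets, u, v, w, u) expresses u|v|w|u: three intervals
# covering the circle in its entirety.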
Another abbreviation used in the sequel is X(u, v, w, x). It is defined by the expression u|v ∧ w|x ∧ (u|x ∨ w|v). Intuitively, the satisfaction of X(u, v, w, x) expresses the fact that the cyclic interval u meets (is in relation meets with) the cyclic interval v, the cyclic interval w meets (is in relation meets with) the cyclic interval x, and the two meeting points are the same point. Figure 4 represents the three possible cases for which X(u, v, w, x) is satisfied by cyclic intervals on an oriented circle:
(a) u|v, w|x, u|x, w|v are satisfied; (b) u|v, w|x, w|v are satisfied and u|x is not satisfied; (c) u|v, w|x, u|x are satisfied and w|v is not satisfied.
Figure 4: Satisfaction of X(u, v, w, x).
We can now give the CycInt axioms intended to axiomatize the relation meets of cyclic interval models. Each axiom is followed by an intuitive idea of what it expresses.
Definition 2 (The CycInt axioms)
A1. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, z, w, x) → X(u, v, y, z)
Given three pairs of meeting cyclic intervals, if the meeting point defined by the first pair is the same as the one defined by the second pair and the meeting point defined by the second pair is the same as the one defined by the third pair, then the first pair and the third pair of meeting cyclic intervals define the same meeting point.
A2. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, u, x, z) → ¬u|x ∧ ¬x|u
Two cyclic intervals with the same endpoints do not satisfy the relation meets.
A3. ∀u, v, w, x, y, z u|v ∧ w|x ∧ y|z ∧ ¬u|x ∧ ¬w|v∧ ¬u|z ∧ ¬y|v ∧ ¬w|z ∧ ¬y|x → ∃r, s, t r|s|t|r ∧ X(u, v, r, s) ∧ (X(w, x, s, t) ∧ X(y, z, t, r)) ∨ (X(w, x, t, r) ∧ X(y, z, s, t))
Three distinct meeting points can be defined by three cyclic intervals satisfying the relation meets so that these three meeting cyclic intervals cover the circle in its entirety.
A4. ∀u, v, w, x, u|v ∧ w|x ∧ ¬u|x ∧ ¬w|v → (∃y, z, t, y|z|t|y ∧ X(y, z, w, x) ∧ X(t, y, u, v))∧ (∃y, z, t, y|z|t|y ∧ X(y, z, u, v) ∧ X(t, y, w, x))
Two meeting points are the endpoints of two cyclic intervals. Each one can be defined by two other cyclic intervals.
A5. ∀u, v (∃w, x u|w|x|v|u) → (∃y u|y|v|u)
Two meeting cyclic intervals define another cyclic interval corresponding to the union of these cyclic intervals.
A6. ∃u u = u and ∀u∃v, w u|v|w|u
There exists a cyclic interval, and for every cyclic interval there exist two other cyclic intervals such that the three satisfy the relation meets in a cyclic manner (they satisfy the relation meets so that they cover the circle in its entirety).
A7. ∀u, v (∃w, x w|u|x ∧ w|v|x) ↔ u = v
There do not exist two distinct cyclic intervals with the same endpoints.
A8. ∀u, v, w u|v|w → ¬u|w
Two cyclic intervals separated by a third one cannot satisfy the relation meets.
From these axioms we can deduce several theorems which will be used in the sequel.
Proposition 1 Every structure (I, |) satisfying the CycInt axioms satisfies the following formulas:
B1. ∀u, v u|v → ¬v|u
B2. ∀u, v, w, x, y, z X(u, v, w, x) ∧ X(y, u, x, z) → w|v ∧ y|z
B3. ∀u, v (∃w u|w|v|u) → (∃x, y u|x|y|v|u)
Proof
• (B1) Let u, v be two cyclic intervals satisfying u|v. Suppose that v|u is satisfied. It follows that X(u, v, u, v) and X(v, u, v, u) are satisfied. From Axiom A2 follows that u|v and v|u cannot be satisfied. There is a contradiction.
• (B2) Let u,v,w,x,y,z be cyclic intervals satisfying X(u, v, w, x) and X(y, u, x, z). From Axiom A2 we can deduce that u|x and x|u are not satisfied. As X(u, v, w, x) and X(y, u, x, z) are satisfied, we can assert that y|z and w|v are satisfied.
• (B3) Let u, v, w be cyclic intervals satisfying u|w|v|u.
We have u|w, w|v and v|u which are satisfied. Moreover, since v|u is satisfied, from B1 we can deduce that u|v and w|w cannot be satisfied. From Axiom A4 follows that there exists cyclic intervals x, y, z satisfying x|y|z|x, X(x, y, u, w) and X(z, x, w, v). From Axiom A2 we can assert that x|w and w|x are not satisfied. From it and the satisfaction of X(x, y, u, w) ∧ X(z, x, w, v), we can assert that u|y and z|v are satisfied. We can conclude that u, v, y, z satisfy u|y|z|v|u.
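Statements such as Axiom A8 or Theorem B1 can be exercised mechanically on a small finite sample of the rational circle. Such a sample is not a model of CycInt (by Theorem 3 below, every countable model is isomorphic to the structure over the dense rational circle, so no finite structure satisfies all the axioms), but brute-force checks of this kind are a cheap sanity test when formalizing the axioms. The sketch below is our own code, reusing prec and meets from the earlier sketch.

from itertools import product

n = 8
pts = [Fraction(i, n) for i in range(n)]
# Finite restriction of Definition 1: keep the pairs admitting a third sample point.
finite_I = [(x, y) for x, y in product(pts, repeat=2)
            if x != y and any(prec(x, y, z) for z in pts)]

def X(u, v, w, x):
    # Same meeting point for u|v and w|x: u|v and w|x and (u|x or w|v).
    return meets(u, v) and meets(w, x) and (meets(u, x) or meets(w, v))

# Theorem B1: the relation | is asymmetric.
assert all(not (meets(u, v) and meets(v, u))
           for u, v in product(finite_I, repeat=2))

# Axiom A8: u|v|w implies that u|w does not hold.
assert all(not meets(u, w)
           for u, v, w in product(finite_I, repeat=3)
           if meets(u, v) and meets(v, w))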
From cyclic interval structures back to cyclic orderings
In this section, we show how to define a cyclic ordering ≺ on a set of points from a set of cyclic intervals and a relation meets on these cyclic intervals satisfying the CycInt axioms. The line of reasoning used is similar to the one used by Ladkin [START_REF] Ladkin | The Logic of Time Representation[END_REF] in the linear case. Intuitively, a set of pairs of meeting cyclic intervals satisfying the relation meets at the same place will represent a cyclic point. Hence, a cyclic point will correspond to a meeting place. Three cyclic points l, m, n defined in this way will be in relation ≺ if, and only if, there exist three cyclic intervals satisfying the relation meets in a cyclic manner (so that they cover the circle in its entirety) such that their meeting points are successively l, m and n. Let us now give the formal definition of this cyclic ordering.
Let (I, |) be a structure of cyclic intervals satisfying the CycInt axioms. Two pairs (u, v) and (w, x) of intervals with u|v and w|x are identified, written (u, v) ≡ (w, x), exactly when X(u, v, w, x) holds; P is the set of the resulting classes (the meeting points), the class of a pair (u, v) being written uv, and ≺ is the ternary relation on P defined by: ≺(uv, wx, yz) holds iff there exist m, n, o ∈ I such that m|n|o|m, mn = uv, no = wx and om = yz. The structure (P, ≺) obtained from (I, |) will be denoted by CycPoint((I, |)) in the sequel.
Theorem 1 The structure (P, ≺) is a cyclic ordering.
Proof We give the proof for Axioms P1 and P2 only. The proof for the other axioms is in the annex.
• ∀uv, wx ∈ P, ¬≺(uv, wx, wx) (P1)
Let uv, wx ∈ P. Suppose that ≺(uv, wx, wx) is satisfied. From the definition of ≺, there exist y, z, t ∈ I satisfying y|z|t|y and such that (y, z) ≡ (u, v), (z, t) ≡ (w, x), (t, y) ≡ (w, x). The relation ≡ is transitive and symmetric; in consequence, we can assert that (z, t) ≡ (t, y). From this and from the definition of ≡, we have z|y or t|t which is satisfied. As | is an irreflexive relation, we can assert that z|y is satisfied. Moreover, y|z is also satisfied. There is a contradiction since the relation | is an asymmetric relation.
• ∀uv, wx, yz, st ∈ P, ≺ (uv, wx, yz) ∧ ≺ (uv, yz, st) → ≺ (uv, wx, st) (P 2)
Let uv, wx, yz, st ∈ P which satisfy ≺(uv, wx, yz) and ≺(uv, yz, st). From the definition of ≺ we can deduce that there exist m, n, o ∈ I satisfying m|n|o|m, mn = uv, no = wx, om = yz. On the other hand, we can assert that there exist p, q, r ∈ I satisfying p|q|r|p, pq = uv, qr = yz and rp = st. From the transitivity of the relation ≡ and the equalities mn = uv, pq = uv, om = yz, qr = yz, we obtain the equalities mn = pq and om = qr. Hence, from the definition of ≡, we can assert that X(m, n, p, q) and X(o, m, q, r) are satisfied. From Theorem B2, it follows that p|n and o|r are also satisfied. From all this, we can deduce that n|o|r|p|n is satisfied. From Axiom A5, we can assert that there exists l satisfying n|l|p|n. By rotation, we deduce that p|n|l|p is satisfied. n|l and n|o are satisfied, in consequence, we have nl = no. From this equality, the transitivity of the relation ≡ and the equality no = wx, we can assert that nl = wx. As l|p and r|p are satisfied, we have the equality lp = rp. From this equality, the transitivity of the relation ≡ and the equality rp = st, we can deduce that lp = st. Consequently, p|n|l|p, pn = uv, nl = wx and lp = st are satisfied. Hence, from the definition of ≺, we can conclude that ≺(uv, wx, st) is satisfied.
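The construction of this section can be prototyped on the finite sample used above: meeting points are classes of pairs of meeting intervals identified by X, and the ternary relation between points is witnessed by triples r|s|t|r. Again, this is only our illustrative sketch.

meeting_pairs = [(u, v) for u in finite_I for v in finite_I if meets(u, v)]

# Group the pairs defining the same meeting point (on this concrete structure,
# X behaves as an equivalence relation between meeting pairs, as guaranteed by A1).
classes = []
for p in meeting_pairs:
    for c in classes:
        if X(p[0], p[1], c[0][0], c[0][1]):
            c.append(p)
            break
    else:
        classes.append([p])

def point(u, v):
    # Index of the meeting point defined by the meeting pair (u, v).
    return next(i for i, c in enumerate(classes) if X(u, v, c[0][0], c[0][1]))

def prec_P(a, b, c):
    # The ternary relation on points: witnessed by a triple m|n|o|m whose three
    # meeting points are a, b and c in this cyclic order.
    return any(meets(m, n) and meets(n, o) and meets(o, m) and
               point(m, n) == a and point(n, o) == b and point(o, m) == c
               for m, n, o in product(finite_I, repeat=3))

u1 = (Fraction(0), Fraction(1, 4))
u2 = (Fraction(1, 4), Fraction(1, 2))
u3 = (Fraction(1, 2), Fraction(0))
assert chain(meets, u1, u2, u3, u1)
assert prec_P(point(u1, u2), point(u2, u3), point(u3, u1))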
Cyclic orderings yield models of CycInt
In this section, we prove that every structure of cyclic intervals defined from a cyclic ordering is a model of CycInt.
Theorem 2 Let (P, ≺) be a cyclic ordering. (I, |) = CycInt((P, ≺)) is a model of the CycInt axioms. Proof In the sequel, given an element u = (m, n) ∈ I, u - (resp. u + ) will correspond to m (resp. to n). Let us prove that the axioms of CycInt are satisfied by (I, |).
• (A1) Let u, v, w, x, y, z ∈ I satisfying X(u, v, w, x) and X(y, z, w, x). From the definition of X we can assert that u|v and y|z are satisfied. Hence the equalities u + = v -, w + = x -and y + = z -. Moreover, from the definition of X, it follows that u|x or w|v and y|x or w|z are satisfied. Let us consider all the possible situations exhaustively:
-u|x and y|x are satisfied. It follows that u + = x -and y + = x -are satisfied. Hence, we have u + = v -= w + = x -= y + = z -. -u|x and w|z are satisfied. It follows that u + = x -and w + = z -are satisfied. Consequently, u + = v -= w + = x -= y + = z -is satisfied. -w|v and y|x are satisfied. It follows that w + = v -and y + = x -are satisfied. Therefore, u + = v -= w + = x -= y + = z -is satisfied. -w|v and w|z are satisfied. It follows that w + = v -and w + = z -are satisfied. Hence, u + = v -= w + = x -= y + = z -is satisfied. Let us denote by l the identical points u + , v -, w + , x -, y + , z -.
Suppose that X(u, v, y, z) is falsified. By using the fact that u|v and y|z are satisfied, we deduce that u|z and y|v are not satisfied. Since u + = z -and y + = v -, ≺ (u -, l, z + ) and ≺ (y -, l, v + ) are not satisfied. From P5, we get the satisfaction of ≺ (u -, z + , l) and the one of ≺ (y -, v + , l). As u|v and y|z are satisfied, ≺ (u -, l, v + ) and ≺ (y -, l, z + ) are also satisfied. Hence, by using P4, we can assert that ≺ (l, y -, v + ) and ≺ (l, v + , u -) are satisfied. From P2, it follows that ≺ (l, y -, u -) is also satisfied. From the satisfaction of ≺ (u -, z + , l) and the one of P4, it follows that ≺ (l, u -, z + ) is satisfied. By using P2, it results that ≺ (l, y -, z + ) is satisfied. Recall that ≺ (y -, l, z + ) is satisfied. From P4 and P2, it results that ≺ (y -, z + , z + ) is satisfied. From P1, a contradiction follows. Consequently, we can conclude that X(u, v, y, z) is satisfied.
• (A2) Let u, v, w, x, y, z ∈ I satisfy X(u, v, w, x) and X(y, u, x, z). The following equalities are satisfied: u+ = x-and x + = u -. By using P4 and P1, we can assert that ≺ (u -, u + , x + ) and ≺ (x -, x + , u + ) cannot be satisfied. Hence, u|x and x|u are not satisfied.
• (A3) Let us prove the satisfaction of Axiom A3. Let u, v, w, x, y, z ∈ I satisfying u|v, w|x, y|z, ¬u|x, ¬w|v, ¬u|z, ¬y|v, ¬w|z, ¬y|x. From the satisfaction of u|v (resp. w|x and y|z), it follows that u + = v -(resp. w + = x -and y + = z -). Let l (resp. m and n) the point defined by
l = u + = v -(resp. m = w + = x - and n = y + = z -). Suppose that l = m. the equal- ity u + = v -= w + = x -is satisfied.
Since w|x is true, we can deduce that ≺ (w -, x -, x + ) is also satisfied. Consequently, w -and x + are distinct points. Let us consider the three points u -, w -, x + . From P3, we can assert that only four cases are possible:
u -= w -is satis- fied, u -= x + is satisfied, ≺ (w -, x + , u -) is satisfied, or ≺ (w -, u -, x +
) is satisfied. By using, P2, P3 and P4, we obtain for every case a contradiction:
-u -= w -is satisfied. As w|x is satisified, ≺ (u -, x -, x + ) is also satisfied. Recall that u + = x -.
It follows that u|x is satisfied. There is a contradiction. u -= x + is satisfied. As u|v and w|x are satisfied, we can assert that ≺ (u + , v -, v + ) and ≺ (w -, x -, x + ) are satisfied. Hence, ≺ (x + , v -, v + ) and ≺ (w -, v -, x + ) are also satisfied. By using P4, we can deduce that ≺ (v -, v + , x + ) and ≺ (v -, x + , w -) are satisfied. From P2 it follows that ≺ (v -, v + , w -) is also satisfied. From P4 follows the satisfaction of ≺ (w -, v -, v + ). Moreover, we have the equality w + = v -. Consequently, w|v is satisfied. There is a contradiction. -≺ (w -, x + , u -) is satisfied. From P4, we obtain the satisfaction of ≺ (x + , u -, w -). As w|x is satisfied, we deduce that ≺ (w -, x -, x + ) is satisfied. Hence, ≺ (x + , w -, x -) is also satisfied (P4). From P2, we can assert that ≺ (x + , u -, x -) is satisfied. From P4, ≺ (u -, x -, x + ) is satisfied. As x -= u + is satisfied, we can assert that u|x is satisfied. There is a contradiction. -≺ (w -, u -, x + ) is satisfied. Hence, u -and x + are distinct points. Moreover, we know that u + and x + are distinct points from the fact that x -and u + are equal. From P3, ≺ (u -, x + , x -) or ≺ (u -, x -, x + ) is satisfied. Suppose that ≺ (u -, x -, x + ) is satisfied. Since we have the equality u + = x -, u|x is satisfied. There is a contradiction. It results that ≺ (u -, x + , x -) must be satisfied. From the satisfaction of ≺ (w -, u -, x + ) and P4, we deduce that ≺ (u -, x + , w -) is satisfied.
From the satisfaction of w|x and from P4, we can assert that
≺ (x -, x + , w -) is satisfied. ≺ (u -, x + , x -) is satisfied, hence, from P4 we can deduce that ≺ (u -, x + , x -) is satisfied. From P4, we obtain the sat- isfaction of ≺ (x -, u -, x + ). From P2, it results that ≺ (x -, u -, w -) is satisfied. Hence, ≺ (u + , u -, w -) is satisfied.
From the satisfaction of u|v and from P4 it follows that ≺ (u + , v + , u -) is satisfied. From P2 we can assert that ≺ (u + , v + , w -) is satisfied. In consequence, ≺ (w + , v + , w -) is satisfied. Hence, from P4, ≺ (w -, w + , v + ) is satisfied. It results that w|v is satisfied. There is a contradiction. Consequently, we can assert that l = m. In a similar way, we can prove that l = n and m = n. Now, we know that l, m, n are distinct points. From P3, we can just examine two cases: -≺ (l, m, n) is satisfied. Let r = (n, l), s = (l, m) and t = (m, n). We have r|s|t|r which is satisfied. Suppose that u|s is falsified. It follows that ≺ (u -, l, m) is also falsified. As l is different from u -and m, we have
u -= m or ≺ (u -, m, l) which is satisfied. * Suppose that u -= m is satisfied. Since u|v is satis- fied, it follows that ≺ (u -, u + , v + ) is satisfied. Con- sequently, ≺ (m, l, v + ) is true. From P4, it follows that ≺ (l, v + , m) is satisfied.
From all this, the satisfaction of ≺ (l, m, n) and P2, we can assert that
≺ (l, v + , n) is satisfied. From P4, we deduce that ≺ (n, l, v + ) is satisfied. As l = v -, r|v is satisfied. * Suppose that ≺ (u -, m, l) is satisfied. From P4, it
follows that ≺ (l, u -, m) is satisfied. From all this, the satisfaction of ≺ (l, m, n) and P2, we can assert that ≺ (l, u -, n) is satisfied. As u|v is satisfied, we can deduce that ≺ (u -, u + , v + ) is satisfied. Consequently, ≺ (u -, l, v + ) is also satisfied. From P4, it results that ≺ (l, v + , u -) is satisfied. From all this and the satisfaction of ≺ (l, u -, n), we can deduce that ≺ (l, v + , n) is satisfied. By using P4, we obtain the satisfaction of ≺ (n, l, v + ). As l = v -, we deduce that r|v is satisfied. It results that u|s or r|v is satisfied. Hence, X(u, v, r, s) is satisfied. With a similar line of reasoning, we can prove that X(w, x, s, t) and X(y, z, t, r) are satisfied.
-≺ (l, n, m) is satisfied. Let r = (m, l), s = (l, n) and t = (n, m). We have r|s|t|r which is satisfied. In a similar way, we can prove that X(u, v, r, s), X(y, z, s, t) and X(w, x, t, r) are satisfied.
• For Axioms A4-A5-A6-A7-A8, the proofs can be found in the annex.
Categoricity of CycInt
In this section, we establish the fact that the countable models satisfying the CycInt axioms are isomorphic. In order to prove this property, let us show that for every cyclic interval there exist two unique "endpoints".
Proposition 3 Let M = (I, |) a model of CycInt. Let (P, ≺) be the structure CycPoint(M). For every u ∈ I there exist L u , U u ∈ P such that :
1. ∃v ∈ I such that (v, u) ∈ Lu,
2. ∃w ∈ I such that (u, w) ∈ Uu,
3. Lu (resp. Uu) is the unique element of P satisfying (1.) (resp. (2.)),
4. Lu ≠ Uu.
Proof From Axiom A6, we can assert that there exist v, w ∈ I such that u|w|v|u is satisfied. Consequently, u|w and v|u are satisfied. By defining Lu by Lu = vu and Uu by Uu = uw, the properties (1) and (2) are satisfied. Now, let us prove that the property (3) is satisfied. Suppose that there exists L′u such that there exists x ∈ I with (x, u) ∈ L′u. We have (v, u) ≡ (x, u). It follows that L′u = Lu. Now, suppose that there exists U′u such that there exists y ∈ I with (u, y) ∈ U′u. We have (u, w) ≡ (u, y). It follows that U′u = Uu. Hence, we can assert that property (3) is true. Now, suppose that Lu = Uu. It follows that (v, u) ≡ (u, w). As a result, v|w or u|u is satisfied. We know that | is an irreflexive relation. Moreover, from Axiom A8 we can assert that v|w cannot be satisfied. It results that there is a contradiction. Hence, Lu and Uu are distinct elements.
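On the finite sketch above, Proposition 3 and the map f of the next proposition translate directly (our code; endpoints picks arbitrary neighbours v|u and u|w, which is legitimate precisely because of the uniqueness properties just proved):

def endpoints(u):
    # (Lu, Uu): the classes of (v, u) and (u, w) for some v with v|u and w with u|w.
    v = next(v for v in finite_I if meets(v, u))
    w = next(w for w in finite_I if meets(u, w))
    return point(v, u), point(u, w)

def f(u):
    # The map of Proposition 4: an interval is sent to the pair of its endpoints.
    return endpoints(u)

# Item 4 of Proposition 3: the two endpoints of an interval are distinct.
assert all(endpoints(u)[0] != endpoints(u)[1] for u in finite_I)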
From an initial model of CycInt, we have seen that we can define a cyclic ordering. Moreover, from this cyclic ordering we can generate a cyclic interval model. We are going to show that this generated cyclic interval model is isomorphic to the initial cyclic interval model.
Proposition 4 Let M = (I, |) be a model of the CycInt axioms. M is isomorphic to (I′, |′) = CycInt(CycPoint(M)).
Proof Let f be the mapping from I onto I′ defined by f(u) = (Lu, Uu), i.e. f(u) = (vu, uw) for any v, w ∈ I satisfying v|u and u|w. Let us show that f is a one-to-one mapping. Let (uv, wx) ∈ I′. We have u|v and w|x which are satisfied and u|x and w|v which are falsified (in the contrary case we would have uv = wx). From A4, it follows that there exist y, z, t satisfying y|z|t|y, X(y, z, w, x) and X(t, y, u, v). Note that Ly = ty = uv and Uy = yz = wx. Consequently, there exists y ∈ I such that f(y) = (uv, wx). Now, suppose that there exist u, v ∈ I such that f(u) = f(v). Suppose that f(u) = (wu, ux) and f(v) = (yv, vz). We have wu = yv and ux = vz. It follows that (w, u) ≡ (y, v) and (u, x) ≡ (v, z). From all this, we have w|u, y|v, u|x and v|z which are satisfied. Four possible situations must be considered:
• w|v and u|z are satisfied. It follows that w|v|z and w|u|z are satisfied.
• w|v and v|x are satisfied. It follows that w|v|x and w|u|x are satisfied.
• y|u and u|z are satisfied. It follows that y|v|z and y|u|z are satisfied.
• y|u and v|x are satisfied. It follows that y|v|z and y|u|z are satisfied.
For each case, by using A7, we can deduce the equality u = v. Consequently, f is a one-to-one mapping. Now, let us show that u|v if, and only if, f(u)|′f(v). We will denote f(u) by (wu, ux) and f(v) by (yv, vz). Suppose that u|v is satisfied. It follows that (u, x) ≡ (y, v), hence, ux = yv. For this reason, ≺(wu, ux, vz) and ux = yv are satisfied. Hence, f(u)|′f(v) is satisfied. Now, suppose that f(u)|′f(v) is satisfied. It follows that ≺(wu, ux, vz) and ux = yv are satisfied. Hence, there exist r, s, t ∈ I such that r|s|t|r, rs = wu, st = ux and tr = vz are satisfied.
From the equalities rs = wu and st = ux, we can assert that u|x, s|t, r|s and w|u are satisfied. Moreover, one of the following cases is satisfied:
• r|u and u|t are satisfied. It follows that r|u|t and r|s|t are satisfied.
• r|u and s|x are satisfied. It follows that r|s|x and r|u|x are satisfied.
• w|s and u|t are satisfied. It follows that w|u|t and w|s|t are satisfied.
• w|s and s|x are satisfied. It follows that w|s|x and w|u|x are satisfied.
For each case, from A7, we can deduce the equality u = s.
From the equalities st = yv and tr = vz, we can deduce that s|t, y|v, t|r and v|z are satisfied. Moreover, one of the following cases is satisfied:
• s|v and t|z are satisfied. It follows that s|t|z and s|v|z are satisfied.
• s|v and v|r are satisfied. It follows that s|v|r and s|t|r are satisfied.
• y|t and t|z are satisfied. It follows that y|t|z and y|v|z are satisfied.
• y|t and v|r are satisfied. It follows that y|t|r and y|v|r are satisfied.
For each case, from Axiom A7, we can deduce that v = t. Hence, we have the equalities u = s and v = t. We can conclude that u|v is satisfied. Now, let us show that two cyclic interval models generated by two countable cyclic orderings are isomorphic.
Proposition 5 Let (P, ≺) and (P , ≺ ) be two cyclic orderings with P and P two countable sets of points. CycInt((P, ≺)) and CycInt((P , ≺ )) are isomorphic. Proof Let (I, |) and (I , | ) be defined by CycInt((P, ≺))
and CycInt((P , ≺ )). We know that (P, ≺) and (P , ≺ ) are isomorphic [START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF].
Let g be an isomorphism from (P, ≺) to (P , ≺ ).
Let h be the mapping from I onto I defined by h((l, m)) = (g(l), g(m)).
First, let us show that (g(l), g(m)) ∈ I . As (l, m) ∈ I, there exists n ∈ P satisfying ≺ (l, m, n). It follows that ≺ (g(l), g(m), g(n)) is satisfied. It results that (g(l), g(m)) ∈ I . Now, let us show that for every (l, m) ∈ I , there exists (n, o) ∈ I such that h((n, o)) = (l, m). We can define n and o by n = g -1 (l) and o = g -1 (m). Indeed,
h(g -1 (l), g -1 (m)) = (g(g -1 (l)), g(g -1 (m))) = (l, m). Now, let (l, m), (n, o) ∈ I such that h((l, m)) = h((n, o)).
It follows that g(l) = g(n) and g(m) = g(o). Therefore, we have l = n and m = o. Hence, we obtain the equality (l, m) = (n, o). Finally, let us show that for all (l, m),
(n, o) ∈ I, (l, m)|(n, o) is satisfied iff h((l, m))| h((n, o)) is satisfied. (l, m)|(n, o) is satisfied iff ≺ (l, m, o) and m = n are satisfied. Hence, (l, m)|(n, o) is satisfied iff ≺ (g(l), g(m), g(o)
) and g(m) = g(n) are satisfied. For these reasons, we can assert that (l, m)|(n, o) is satisfied iff h((l, m))| h((n, o)) is satisfied. We can conclude that h is an isomorphism.
In the sequel, (Q, ≺) will correspond to the cyclic ordering on the set of rational numbers Q, defined by ≺(x, y, z) iff x < y < z or y < z < x or z < x < y, with x, y, z ∈ Q and < the usual linear order on Q. It is time to establish the main result of this section.
Theorem 3 The theory axiomatized by CycInt is ℵ0-categorical. Moreover, its countable models are isomorphic to CycInt((Q, ≺)).
Proof Let M be a countable model of CycInt. M is isomorphic to CycInt(CycPoint(M)). CycInt(CycPoint(M)) is isomorphic to CycInt((Q, ≺)). By composing the isomorphisms, we obtain that M is isomorphic to CycInt((Q, ≺)).
As a direct consequence of this theorem we have that the set of the theorems of CycInt is syntactically complete and decidable.
Application to constraint networks
Balbiani and Osmani [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF] use constraint networks to represent the qualitative information about cyclic intervals. A network is defined as a pair (V, C), where V is a set of variables representing cyclic intervals and C is a map which, to each pair of variables (V i , V j) associates a subset C ij of the set of all sixteen basic relations.
The main problem in this context is the consistency problem, which consists in determining whether the network has a so-called solution: a solution is a map m from the set of variables V i to the set of cyclic intervals in C such that all constraints are satisfied. The constraint C ij is satisfied if and only if, denoting by m i and m j the images of V i and V j respectively, the cyclic interval m i is in one of the relations in the set C ij with respect to m j (the set C ij is consequently given a disjunctive interpretation in terms of constraints).
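Such a network has an immediate computational representation: a map from ordered pairs of variables to sets of basic relation symbols, against which a candidate assignment of concrete cyclic intervals can be checked. In the sketch below (our own code) REL is assumed to map each of the sixteen symbols to a binary predicate on concrete cyclic intervals, for instance the meets, ppi and eq predicates sketched earlier; the remaining predicates would be obtained from their defining formulas in the same way. The example constraints are those of the network of Figure 7.

# Constraints of the network of Figure 7: C maps pairs of variables to sets of
# basic relation symbols, read disjunctively.
C = {(1, 2): {"ppi", "mi"},
     (1, 3): {"m", "mi"},
     (2, 3): {"o"}}

def satisfies(assignment, network, REL):
    # assignment maps each variable to a concrete cyclic interval; a constraint
    # is satisfied if at least one of its basic relations holds.
    return all(any(REL[r](assignment[i], assignment[j]) for r in rels)
               for (i, j), rels in network.items())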
A first interesting point is the fact that the axiomatization we have obtained allows us to check the consistency of a constraint network on cyclic intervals by using a theorem prover. Indeed, the procedure goes as follows: First, translate the network (V, C) into an equivalent logical formula Φ. Then, test the validity of the formula (or its validity in a specific model) by using the CycInt axiomatization.
As an example, consider the constraint network in Figure 7.
The corresponding formula is
Φ = (∃v 1 , v 2 , v 3 ) ((v 1 ppi v 2 ∨ v 1 mi v 2 ) ∧ (v 1 m v 3 ∨ v 1 mi v 3 ) ∧ (v 2 o v 3 )).
In order to show that this network is consistent, we would have to prove that this formula is valid with respect to CycInt, or satisfiable for a model such as C. In order to show inconsistency, we have to consider the negation of Φ.
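The translation of a network into such a formula is purely mechanical; a sketch producing a plain string from the network C above (our code) is:

def network_to_formula(network, variables):
    vars_part = ", ".join("v%d" % i for i in variables)
    conjuncts = [" ∨ ".join("v%d %s v%d" % (i, r, j) for r in sorted(rels))
                 for (i, j), rels in sorted(network.items())]
    return "(∃" + vars_part + ") (" + " ∧ ".join("(%s)" % c for c in conjuncts) + ")"

print(network_to_formula(C, [1, 2, 3]))
# (∃v1, v2, v3) ((v1 mi v2 ∨ v1 ppi v2) ∧ (v1 m v3 ∨ v1 mi v3) ∧ (v2 o v3))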
Usually a local constraint propagation method, called the path-consistency method, is used to solve this kind of constraint network. The method 4 consists in removing from each constraint C ij all relations which are not compatible with the constraints in C ik and C kj , for all 3-tuples i, j, k. This is accomplished by using the composition table of the cyclic interval calculus which, for each pair (a, b) of basic relations, gives the composition of a with b, that is the set of all basic relations c such that there exists a configuration of three cyclic intervals u, v, w with u a v, v b w and u c w. For instance, the composition of m with d consists in the relation ppi.The composition table of the cyclic interval calculus can be automatically computed by using our axiomatization. Indeed, in order to decide whether c belongs to the composition of a with b, it suffices to prove that the formula (∃u, v, w) (u a v ∧ v b w ∧ u c w) is valid. In order to prove that, conversely, c does not belong to this composition, one has to consider the negated formula ¬(∃u, v, w) (u a v ∧ v b w ∧ u c w).
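The propagation step itself is independent of the particular calculus: given the composition table and the converse operation for the sixteen basic relations, one repeatedly replaces Cij by its intersection with the composition of Cik and Ckj. The generic sketch below is our own code; compose and converse are assumed to be given as tables (for instance, a composition table obtained as just described), and, as noted in the footnote, reaching a fixed point does not by itself guarantee consistency for this calculus.

def a_closure(network, variables, compose, converse, all_relations):
    # Generic path-consistency / algebraic closure on a constraint network.
    # Returns the refined constraints, or None if a constraint becomes empty.
    R = {}
    for i in variables:
        for j in variables:
            if i != j:
                given = network.get((i, j))
                if given is None:
                    given = {converse[a] for a in network.get((j, i), all_relations)}
                R[(i, j)] = set(given)
    changed = True
    while changed:
        changed = False
        for i in variables:
            for j in variables:
                for k in variables:
                    if len({i, j, k}) < 3:
                        continue
                    comp = {c for a in R[(i, k)] for b in R[(k, j)]
                            for c in compose[(a, b)]}
                    refined = R[(i, j)] & comp
                    if refined != R[(i, j)]:
                        if not refined:
                            return None
                        R[(i, j)] = refined
                        R[(j, i)] = {converse[a] for a in refined}
                        changed = True
    return R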
Figure 1: Two cyclic intervals (m, n) and (m′, n′) satisfying the meets relation.
Figure 2: Three cyclic intervals.
Figure 3: The 16 basic relations of the cyclic interval calculus.
Figure 5: Satisfaction of ≺ (uv, wx, yz).
Figure 7: A constraint network on cyclic intervals.
Figure 6: Every countable model of CycInt (I, |) is isomorphic to CycInt((Q, ≺)).
The notation is mnemonic for meets and meets inverse.
Actually, we use what are called "standard cyclic orderings" in[START_REF] Balbiani | Reasoning about cyclic space: axiomatic and computational aspects[END_REF]. We use the shorter term "cyclic ordering" in this paper.
Here we use the notation v1 m v2 m . . . m vn where v1, v2, . . . , vn are n variables (n > 2) as a shorthand for the conjunction ⋀i=1..n−1 vi m vi+1.
In the case of cyclic interval networks, the path-consistency method is not complete even for atomic networks: path-consistency does not insure consistency.
Conclusions and further work
We have shown in this paper how the theory of cyclic orderings, on the one hand, and the theory of cyclic intervals, on the other hand, can be related. We proposed a set of axioms for cyclic intervals and showed that each countable model is isomorphic to the model based on cyclic intervals on the rational circle. Determining whether the first order theory of the meets relation between cyclic orderings admits the elimination of quantifiers is to our knowledge an open problem we are currently examining. Another question is whether the axioms of the CycInt theory are independent. Still another interesting direction of research is the study of finite models of cyclic intervals. To this end, we will have to consider discrete cyclic orderings (which consequently do not satisfy axiom P5). This could lead to efficient methods for solving the consistency problem for cyclic interval networks: Since these involve only a finite number of variables, they should prove accessible to the use of finite models.
Annex
Proof (End of proof of Theorem 1)
• ∀uv, wx, yz ∈ P, uv ≠ wx ∧ wx ≠ yz ∧ uv ≠ yz → ≺(uv, wx, yz) ∨ ≺(uv, yz, wx) (P3)
Let uv, wx, yz ∈ P satisfying uv ≠ wx, wx ≠ yz and uv ≠ yz. From the definitions of P and ≡ we can assert that u|v, w|x, y|z, ¬u|x, ¬w|v, ¬u|z, ¬y|v, ¬w|z, ¬y|x are satisfied. From Axiom A3 we can deduce that there exist r, s, t satisfying r|s|t|r and such that X(u, v, r, s), X(w, x, s, t), X(y, z, t, r) or X(u, v, r, s), X(w, x, t, r), X(y, z, s, t) are satisfied. From all this, we can conclude that ≺(uv, wx, yz) ∨ ≺(uv, yz, wx) is satisfied.
• ∀uv, wx, yz ∈ P, ≺(uv, wx, yz) ↔ ≺(wx, yz, uv) ↔ ≺(yz, uv, wx) (P4)
Let uv, wx, yz ∈ P satisfying ≺ (uv, wx, yz). From the definition of ≺, we have u|v, w|x and y|z which are satisfied and there exist r, s, t satisfying r|s|t|r, rs = uv, st = wx and tr = yz. By rotation, we can assert that s|t|r|s is also satisfied. From this, we can deduce that ≺ (wx, yz, uv) is satisfied. In a similar way, we can prove that ≺ (wx, yz, uv) →≺ (yz, uv, wx) and ≺ (yz, uv, wx) →≺ (uv, wx, yz) are satisfied.
• ∀uv, wx ∈ P, uv ≠ wx → ((∃yz ∈ P, ≺(uv, wx, yz)) ∧ (∃rs ∈ P, ≺(uv, rs, wx))) (P5)
Let uv, wx ∈ P such that uv = wx. From the definition of P and the one of the relation we can assert that u|v, w|x, ¬u|x and ¬w|v are satisfied. From Axiom A4 we deduce that there exist y, z, t such that y|z|t|y ∧ X(y, z, w, x) ∧ X(t, y, u, v)) is satisfied and that there exist q, r, s such that q|r|s|q ∧ X(q, r, u, v) ∧ X(s, q, w, x)) is satisfied. Consequently, there exists y, z, t such that ≺ (yz, zt, ty), yz = wx, ty = uv are satisfied and there exist q, r, s such that ≺ (qr, rs, sq), qr = uv, sq = wx are satisfied. Hence, there exists zt ∈ P such that ≺ (wx, zt, uv) is satisfied , and there exists rs ∈ P such that ≺ (uv, rs, wx) is satisfied. From C3 we can conclude that there exists zt ∈ P satisfying ≺ (uv, wx, zt), and that there exists rs ∈ P satisfying ≺ (uv, rs, wx).
• ∃uv, wx ∈ P, uv ≠ wx. (P6)
From Axiom A6 we can assert that there exist u, v, w satisfying u|v|w|u. Hence, there exist uv, vw, wu ∈ P such that ≺ (uv, vw, wu) is satisfied. From P1 we deduce that uv and vw are distinct classes.
Proof (End of proof of Theorem 2)
• (A4) Let u, v, w, x ∈ I satisfying u|v, w|x, ¬u|x, and ¬w|v. ≺ (u -, u + , v + ), ≺ (w -, w + , x + ) with u + = v - and w + = x -are satisified. Let l and m defined by l = u + = v -and m = w + = x -. Suppose that l = m.
As ≺ (u -, u + , v + ) and ≺ (w -, w + , x + ) are satisfied, we have ≺ (u -, l, v + ) and ≺ (w -, l, x + ) which are also satisfied. Hence, we have u -= l and x + = l. From P3, we can just consider three cases:
P2 and P4, we can deduce a contradiction for every case. We can assert that l = m. From P5, we can deduce there exist n, o ∈ P satisfying ≺ (l, m, n) and ≺ (l, o, n).
Let us define three cyclic intervals y, z, t by y = (l, m), z = (m, n) and t = (n, l). From the satisfaction of ≺ (l, m, n) and P4, we can deduce that y|z|t|y is satisfied. Let us suppose that y|x is not satisfied. As y + = x -, it follows that ≺ (y -, y + , x + ) is not satisfied. We have y -= y + and y + = x + . From P3, it follows that y -= x + or ≺ (y -, x + , y + ) is satisfied. Let us examine these two possible cases.
y -= x + is satisfied. It follows that x + = l = u + = v -. From the satisfaction of w|x, we have ≺ (w -, w + , x + ) which is satisfied, with w + = x -. Since ≺ (l, m, n) is satisfied, ≺ (x + , w + , n) is also satisfied. From P4, we can deduce that ≺ (w + , x + , w -) and ≺ (w + , n, x + ) are satisfied. From P2 follows that ≺ (w + , n, w -) is satisfied. Hence, from P4, we obtain the satisfaction of ≺ (w -, w + , n). As w + = m, w|z is satisfied. -≺ (y -, x + , y + ) is satisfied. Hence, ≺ (l, x + , w + ) is satisfied. As ≺ (l, m, n) is satisfied, ≺ (l, w + , n) is also satisfied. From P4, it follows that ≺ (w + , n, l) and ≺ (w + , l, x + ) are satisfied. From P2, we can deduce that ≺ (w
From P4, we have ≺ (w + , x + , w -) which is satisfied.
From P2, we deduce that ≺ (w + , n, w -) is satisfied. From P4, it follows that ≺ (w -, w + , n) is satisfied.
We have w + = m. It results that w|z is satisfied.
Hence, X(y, z, w, x) is satisfied. In a similar way, we can prove that X(t, y, u, v) is satisfied. By defining y, z, t by y = (m, l), z = (l, o) and t = (o, m), we can also prove that X(y, z, u, v) and X(t, y, w, x) are satisfied.
• (A5) Let u, v, w, x ∈ I satisfying u|w|x|v|u. We have the following equalities:
Let us define l 1 (resp. l 2 , l 3 and l 4 ) by
Consider the pair y = (l 1 , l 3 ). As w|x is satisfied, we can deduce the satisfaction of ≺ (l 1 , l 2 , l 3 ). Hence, we can assert that l 1 = l 3 . From P5, it follows that there exists l satisfying ≺ (l 1 , l 3 , l). It results that y = (l 1 , l 3 ) belongs to I. Suppose that u|y is not satisfied. Since u + = l 1 , ≺ (u -, l 1 , l 3 ) is not satisfied. u -and l 1 are distinct points and, l 1 and l 3 are also distinct points. From the satisfaction of v|u, we can deduce that ≺ (l 3 , u -, u + ) is satisfied. It follows that l 3 = u -. Consequently, Axiom P3 and the non satisfaction of u|y allow us to assert that ≺ (u -, l 3 , l 1 ) is satisfied. As v|u is satisfied, ≺ (l 3 , u -, l 1 ) is also satisfied. From P4 and from P2, it follows that ≺ (l 3 , u -, u -) is satisfied. From Axiom P1, it results a contradiction. In consequence, u|y is satisfied. With a similar line of reasoning, by supposing that y|v is not satisfied, we obtain a contradiction. Hence, u|y|v|u is satisfied.
• (A6) From P6, we can deduce that there exist l, m ∈ P such that l = m. From P5, it follows that there exists n satisfying ≺ (l, m, n). Let u = (l, m), we have u ∈ I and u = u. Now, let us prove the second part of the axiom. Let u = (l, m) ∈ I. By definition of I, there exists n ∈ P such that ≺ (l, m, n). Let v = (m, n) and w = (n, l). From P4, ≺ (m, n, l) and ≺ (n, l, m) are satisfied. From all this, we deduce that u|v, v|w and w|u are satisfied.
• (A7) Let u, v, w, x ∈ I satisfying w|u|x and w|v|x. The following equalities are satisfied: w+ = u−, u+ = x−, w+ = v−, v+ = x−. It follows that (u−, u+) = (v−, v+). Consequently, we can assert that u = v. Let u, v ∈ I such that u = v. We know that u− ≠ u+. From P5, it follows that there exists l ∈ P satisfying ≺(u−, u+, l). Let w = (l, u−) and x = (u+, l). From P4, we deduce that ≺(l, u−, u+) is satisfied. From all this, we can assert that w, x ∈ I and that w|u and u|x are satisfied. Since (u−, u+) = (v−, v+), we can assert that w|v|x is satisfied.
• (A8) Let u, v, w ∈ I satisfying u|v|w. It follows that u+ = v− and v+ = w−. Moreover, as ≺(u−, v−, v+) is satisfied, we have v− ≠ v+. In consequence, u+ ≠ w−. Hence, we can assert that u|w is not satisfied.
| 45,855 | [ "1142762", "997069" ] | [ "56711", "247329" ] |
01487502 | en | [ "info", "scco" ] | 2024/03/04 23:41:48 | 2004 | https://hal.science/hal-01487502/file/ligozat-renz-pricai04.pdf | Gérard Ligozat
Jochen Renz
What is a Qualitative Calculus? A General Framework
What is a qualitative calculus? Many qualitative spatial and temporal calculi arise from a set of JEPD (jointly exhaustive and pairwise disjoint) relations: a stock example is Allen's calculus, which is based on thirteen basic relations between intervals on the time line. This paper examines the construction of such a formalism from a general point of view, in order to make apparent the formal algebraic properties of all formalisms of that type. We show that the natural algebraic object governing this kind of calculus is a non-associative algebra (in the sense of Maddux), and that the notion of weak representation is the right notion for describing most basic properties. We discuss the ubiquity of weak representations in various guises, and argue that the fundamental notion of consistency itself can best be understood in terms of consistency of one weak representation with respect to another.
Introduction
What is a qualitative temporal or spatial calculus? And: why should we care? An obvious, if not quite satisfactory way of answering the first question would consist in listing some examples of fairly well-known examples: on the temporal side, Allen's interval calculus [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF] is the most famous candidate; others are the point calculus [START_REF] Vilain | Constraint propagation algorithms for temporal reasoning[END_REF], the pointand-interval calculus [START_REF] Dechter | Temporal Constraint Networks[END_REF], generalized interval calculi [START_REF] Ligozat | On generalized interval calculi[END_REF], or the INDU calculus [START_REF] Pujari | A new framework for reasoning about points, intervals and durations[END_REF]; on the spatial side, there are Allen-like calculi, such as the directed interval calculus [START_REF] Renz | A spatial Odyssey of the interval algebra: 1. Directed intervals[END_REF], the cardinal direction calculus [START_REF] Ligozat | Reasoning about cardinal directions[END_REF], which is a particular case of the n-point calculi [START_REF] Balbiani | Spatial reasoning about points in a multidimensional setting[END_REF], the rectangle calculus [START_REF] Balbiani | A model for reasoning about bidimensional temporal relations[END_REF], and more generally the n-block calculi [START_REF] Balbiani | A tractable subclass of the block algebra: constraint propagation and preconvex relations[END_REF], as well as calculi stemming from the RCC-like axiomatics, such as the RCC-5 and RCC-8 calculi [START_REF] Randell | A spatial logic based on regions and connection[END_REF], and various kinds of calculi, such as the cyclic interval calculus [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], the star calculi [START_REF] Mitra | Qualitative Reasoning with Arbitrary Angular Directions[END_REF], or the preference calculi [START_REF] Duentsch | Tangent circle algebras[END_REF].
Why should we care? A first reason is that, as soon becomes apparent after considering some of the examples, many calculi share common properties, and are used in analogous ways: Take for instance Allen's calculus. It makes use of a set of basic relations, and reasoning uses disjunctions of the basic relations (representing incomplete knowledge), also called (disjunctive) relations. A relation has a converse relation, and relations can be composed, giving rise to an algebraic structure called Allen's algebra (which is a relation algebra, in Tarski's sense [START_REF] Tarski | On the calculus of relations[END_REF]). In applications, the knowledge is represented by temporal networks, which are oriented graphs whose nodes stand for intervals and whose arcs are labelled with relations. In this context, a basic problem is determining whether a given network is consistent (the problem is known to be NP-complete, [START_REF] Vilain | Constraint propagation algorithms for temporal reasoning[END_REF]). Finally, when a network is consistent, finding a qualitative instantiation of it amounts to refining the network to an atomic sub-network which is still consistent: and this can be checked at the algebraic level.
Thus, it makes sense to ask the question: to what extent do those properties extend to the other calculi we mentioned above? As first discussed in [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], it soon appears that some properties of Allen's calculus do not extend in general. Some disturbing facts:
- As remarked by [START_REF] Egenhofer | Relation Algebras over Containers and Surfaces: An Ontological Study of a Room Space[END_REF][START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], the algebras of some calculi are not relation algebras in the sense of Tarski, but more general algebras called non-associative algebras by Maddux (relation algebras being the particular case of associative non-associative algebras). In fact, the INDU algebra is only a semi-associative algebra.
- The natural or intended models of the calculus may not be models in the strong sense or, in algebraic terms, representations of the algebra. This is no new realization: Allen's composition, for instance, expresses necessary and sufficient conditions only if the intervals are in a dense and unbounded linear ordering. But what is less known, apart from the fact that it may be interesting to reason in weaker structures, e.g., about intervals in a discrete linear ordering, is the fact that all such models correspond to weak representations of the algebra, in the sense of [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF].
- For some calculi, such as the containment algebra [START_REF] Ladkin | On Binary Constraint Problems[END_REF] or the cyclic interval calculus [START_REF] Balbiani | A model for reasoning about topologic relations between cyclic intervals[END_REF], it has been observed that some finite atomic constraint networks which are algebraically closed are not consistent. Again, this phenomenon is best expressed, if not explained, in terms of weak relations.
- For Allen's calculus, any consistent atomic network is in fact k-consistent, for all k < n, if it has n nodes. Again, the analogous result is false for many calculi, and considering the various weak representations helps to explain why it may be so.
If we can answer this last question, we have some hope of developing general methods which could be used for whole classes of calculi, instead of specific ones which have to be reinvented for each particular calculus. Although we do not consider this particular aspect in this paper, an example of a general concept which is valid for a whole class of calculi is the notion of pre-convexity [START_REF] Ligozat | Tractable relations in temporal reasoning: pre-convex relations[END_REF] which has been shown as providing a successful way of searching for tractable classes, at least for formalisms based on linear orderings such as Allen's calculus.
The purpose of this paper is to give a precise technical answer to the first question: what is a qualitative calculus? The answer involves a modest amount of -actually, two -algebraic notions, which both extend standard definitions in universal algebra: the notion of a non-associative algebra (which generalizes that of a relation algebra), and the notion of a weak representation, (which generalizes that of a representation).
This paper provides a context for discussing these various points. In section 2, the general construction of JEPD relations is presented in terms of partition schemes. The main operation in that context is weak composition, whose basic properties are discussed. Section 3 describes some typical examples of the construction. It is shown in Section 4 that all partition schemes give rise to non-associative algebras, and in Section 5 that the original partition schemes are in fact weak representations of the corresponding algebra. A proposal for a very general definition of a qualitative calculus is presented in Section 6 as well as a description of the various guises into which weak representations appear: both as particular kind of network and as natural universes of interpretation. Section 7 is concerned with the basic notion of consistency, which appears as a particular case of a more general notion of consistency of one weak representation with respect to another.
Developing a new calculus
Although there seems to be almost no end to defining qualitative spatial or temporal calculi, most constructions are ultimately based on the use of a set of JEPD (jointly exhaustive and pairwise disjoint4 ) relations. This will be our starting point for defining a generic qualitative calculus, in a very general setting.
Partition schemes
We start with a non-empty universe U , and consider a partition of U × U into a family of non-empty binary relations (R i ) i∈I :
U × U = ∪ i∈I R i (1)
The relations R i are called basic relations. Usually, calculi defined in this way use a partition into a finite number of relations. In order to keep things simple, we assume I to be a finite set. In concrete situations, U is a set of temporal, spatial, or spatio-temporal entities (time points, intervals, regions, etc.). Among all possible binary relations, the partition selects a finite subset of "qualitative" relations which will be a basis for talking about particular situations. For instance, in Allen's calculus, U is the set of all intervals in the rational line, and any configuration is described in terms of the 13 basic relations. We make some rather weak assumptions about this setup. First, we assume that the diagonal (the identity relation) is one of the R i s, say R 0 :
R 0 = ∆ = {(u, v) ∈ U × U | u = v} (2)
Finally, we choose the partition in such a way that it is globally invariant under conversion. Recall that, for any binary relation R, R ⌣ is defined by:
R ⌣ = {(u, v) ∈ U × U | (v, u) ∈ R} (3)
We assume that the following holds:
(∀i ∈ I)(∃j ∈ I) R ⌣ i = R j (4)
Definition 1. A partition scheme is a pair (U, (R i ) i∈I ), where U is a non-empty set and (R i ) i∈I a partition of U × U satisfying conditions (2) and (4).
Describing configurations
Once we have decided on a partition scheme, we have a way of describing configurations in the universe U . Intuitively, a configuration is a (usually finite) subset V ⊆ U of objects of U . By definition, given such a subset, each pair (u, v) ∈ V ×V belongs to exactly one R i for a well-defined i. Later, we will think of V as a set of nodes of a graph, and of the map ν : V × V → I as a labeling of the set of arcs of the graph. Clearly, ν(u, u) is the identity relation R 0 , and ν(v, u) is the transpose of ν(u, v). The resulting graphs are called constraint networks in the literature. More generally, we can express constraints using Boolean expressions using the R i s. In particular, constraint networks using disjunctive labels are interpreted as conjunctions of disjunctive constraints represented by unions of basic relations on the labels.
Weak composition
Up to now, we did not consider how constraints can be propagated. This is what we do now by defining the weak composition of two relations. Recall first the definition of the composition R • S of two binary relations R and S:
(R • S) = {(u, v) ∈ U × U | (∃w ∈ U ) (u, w) ∈ R & (w, v) ∈ S} (5)
Weak composition, denoted by R i ⋄R j , of two relations R i and R j is defined as follows:
(R i ⋄ R j ) = ∪ k∈J R k where k ∈ J if and only if (R i • R j ) ∩ R k ≠ ∅ (6)
Intuitively, weak composition is the best approximation we can get to the actual composition if we have to restrict ourselves to the language provided by the partition scheme. Notice that weak composition is only defined with respect to the partition, and not in an absolute sense, as is the case for the "real" composition.
At this level of generality, some unpleasant facts might be true. For instance, although all relations R i are non-empty by assumption, we have no guarantee that R i ⋄R j , or R i • R j for that matter, are non-empty. A first remark is that weak composition is in a natural sense an upper approximation to composition:
Lemma 1. For any i, j ∈ I: R i ⋄ R j ⊇ R i • R j
Proof. Any (u, v) ∈ R i • R j is in some (unique) R k for a well-defined k. Since this R k has an element in common with R i • R j , R k must belong to R i ⋄ R j . ✷
Lemma 2. For any i, j, k ∈ I: (R i ⋄ R j ) ∩ R k = ∅ if and only if (R i • R j ) ∩ R k = ∅
Proof. Because of Lemma 1, one direction is obvious. Conversely, if (R i ⋄ R j ) ∩ R k is not empty, then, since (R i ⋄ R j ) is a union of R l s, R k is contained in it. Now, by definition of weak composition, this means that R k intersects R i • R j .
✷ The interaction of weak composition with conversion is an easy consequence of the corresponding result for composition:
Lemma 3. For all i, j ∈ I: (R i ⋄ R j ) ⌣ = R ⌣ j ⋄ R ⌣ i
Weak composition and seriality
In many cases, the relations in the partition are serial relations. Recall that a relation R is serial if the following condition holds:
(∀u ∈ U )(∃v ∈ U ) such that (u, v) ∈ R (7)
Lemma 4. If the relations R and S are serial, then R • S is serial, (hence it is nonempty).
Proof. If R and S are serial, then, for an arbitrary u, choose first w such that (u, w) ∈ R, then v such that (w, v) ∈ S. Then (u, v) ∈ (R • S). ✷
As a consequence, since all basic relations are non-empty, the weak composition of two basic relations is itself non-empty.
Lemma 5. If the basic relations are serial, then ∀i ∈ I: ∪ j∈I (R i ⋄ R j ) = U × U
Proof.
We have to show that, for any given i, and any pair (u, v), there is a j such that (u, v) is in R i ⋄ R j . We know that (u, v) ∈ R k , for some well-defined k. Because R i and R k are serial, for all t there are x and y such that
(t, x) ∈ R i and (t, y) ∈ R k . Therefore (x, y) ∈ R ⌣ i • R k , so R ⌣ i • R k is non-empty. Moreover, there is one well- defined j such that (x, y) ∈ R j . Hence (t, y) is both in R k and in R i • R j . Therefore, R k ⊆ (R i ⋄ R j ), hence (u, v) ∈ (R i ⋄ R j ). ✷
Examples of partition schemes
Example 1 (The linear ordering with two elements).
Let U = {a, b} a set with two elements. Let R 0 = {(a, a), (b, b)}, R 1 = {(a, b)}, R 2 = {(b, a)}. The two-element set U , in other words, is linearly ordered by R 1 (or by R 2 ). Then R 1 • R 1 = R 2 • R 2 = ∅, R 1 • R 2 = {(a, a)}, and R 2 • R 1 = {(b, b)}. Hence R 1 ⋄ R 1 = ∅, R 2 ⋄ R 2 = ∅, R 1 ⋄ R 2 = R 0 , and R 2 ⋄ R 1 = R 0 .
Example 2 (The linear ordering with three elements). Let U = {a, b, c} be a set with three elements. Let R 0 = {(a, a), (b, b), (c, c)}, R 1 = {(a, b), (b, c), (a, c)}, R 2 = {(b, a), (c, b), (c, a)}. Here, the three-element set U is linearly ordered by R 1 (or by R 2 ). Then R 1 • R 1 = {(a, c)}, R 2 • R 2 = {(c, a)}, R 1 • R 2 = R 2 • R 1 = {(a, a), (b, b), (a, b), (b, a)}. Consequently, R 1 ⋄ R 1 = R 1 , R 2 ⋄ R 2 = R 2 , R 1 ⋄ R 2 = R 2 ⋄ R 1 = U × U .
Example 3 (The point algebra). The standard example is the point algebra, where U is the set Q of rational numbers, and R 1 is the usual ordering on Q, denoted by <. R 2 is the converse of R 1 . Because this ordering is dense and unbounded both on the left and on the right, we have
R 1 • R 1 = R 1 , R 2 • R 2 = R 2 , R 2 • R 1 = R 1 • R 2 = U × U .
Example 4 (Allen's algebra). Here U is the set of "intervals" in Q, i.e., of ordered pairs (q 1 , q 2 ) ∈ Q×Q such that q 1 < q 2 . Basic relations are defined in the usual way [START_REF] Allen | Maintaining knowledge about temporal intervals[END_REF]. Since Q is dense and unbounded, weak composition coincides with composition [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF].
Example 5 (Allen's calculus on integers). U is the set of intervals in Z, that is, of pairs
(n 1 , n 2 ) ∈ Z × Z such that n 1 < n 2 .
Weak composition differs from composition in this case: e.g., we still have p ⋄ p = p, but the pair
([0, 1], [2, 3]) is in p, but not in p • p.
4 The algebras of qualitative calculi
Algebras derived from partition schemes
Now we take an abstract algebraic point of view. For each i ∈ I, we introduce a symbol r i (which refers to R i ) and consider the set B = {r i | i ∈ I}. Let A be the Boolean algebra of all subsets of B. The top element of this algebra is denoted by 1, and the bottom element (the empty set) by 0. Union, intersection and complementation are denoted by +, • and -, respectively. Let 1 ′ denote {r 0 }. We still denote by r ⌣ i the operation of conversion. On this Boolean algebra, the weak composition function defines an operation which is usually denoted by ;. When tabulated, the corresponding table is called the weak composition table of the calculus. The operation of composition on basic symbols is extended to all subsets as follows:
For a, b ∈ A, (a ; b) = Σ i,j (r i ; r j ), where r i ∈ a and r j ∈ b (8)
Since the algebraic setup reflects facts about actual binary relations, the algebra we get in this way would be a relation algebra in Tarski's sense, if we considered actual composition. In the general case, however, what we are considering is only weak composition, an approximation to actual composition. What happens is that we get a weaker kind of algebra, namely, a non-associative algebra [START_REF] Maddux | Some varieties containing relation algebras[END_REF][START_REF] Hirsch | Relation Algebras by Games[END_REF]:
Definition 2. A non-associative algebra A is a tuple A = (A, +, -, 0, 1, ; , ⌣, 1 ′ ) s.t.:
1. (A, +, -, 0, 1) is a Boolean algebra. 2. 1 ′ is a constant, ⌣ a unary and ; a binary operation s. t., for any a, b, c ∈ A:
(a) (a ⌣ ) ⌣ = a
(b) 1 ′ ; a = a ; 1 ′ = a
(c) a ; (b + c) = a ; b + a ; c
(d) (a + b) ⌣ = a ⌣ + b ⌣
(e) (a - b) ⌣ = a ⌣ - b ⌣
(f) (a ; b) ⌣ = b ⌣ ; a ⌣
(g) (a ; b) • c ⌣ = 0 if and only if (b ; c) • a ⌣ = 0
A non-associative algebra is a relation algebra if it is associative.
Maddux [START_REF] Maddux | Some varieties containing relation algebras[END_REF] also introduced intermediate classes of non-associative algebras between relation algebras (RA) and general non-associative algebras (NA), namely weakly associative (WA) and semi-associative (SA) algebras. These classes form a hierarchy:
NA ⊇ WA ⊇ SA ⊇ RA (9)
In particular, semi-associative algebras are those non-associative algebras which satisfy the following condition: For all a, (a ; 1) ; 1 = a ; 1.
Proposition 1. The algebraic structure associated to a partition scheme is a non-associative algebra. If the basic relations are serial, it is a semi-associative algebra.
Proof. We have to check points (2(a-g)) of Def.2 (checking the validity on basic relations is enough). The first six points are easily checked. The last axiom, the triangle axiom, holds because of lemma 2. If all basic relations are serial, the condition for semi-associativity holds, because, by lemma 5, (a ; 1) = 1 for all basic relations a. ✷
What about associativity?
The non associative algebras we get are not in general associative. E.g., the algebra of Example 1 is not associative: ((r 1 ; r 2 ) ; r 2 ) = (1 ′ ; r 2 ) = r 2 , whereas (r 1 ; (r 2 ; r 2 )) = (r 1 ; 0) = 0. Although it satisfies the axiom of weak associativity [START_REF] Maddux | Some varieties containing relation algebras[END_REF], it is not semiassociative, since for instance (r 1 ; 1) ; 1 = 1 whereas r 1 ; (1 ; 1) = r 1 + 1 ′ .
If weak composition coincides with composition, then the family (R i ) i∈I is a proper relation algebra, hence in particular it is associative. However, this sufficient condition is not necessary, as Example 2 shows: although the structure on the linear ordering on three elements has a weak composition which is not composition, it defines the point algebra, which is a relation algebra, hence associative. An example of an algebra which is semi-associative but not associative is the INDU calculus [START_REF] Balbiani | On the Consistency Problem for the INDU Calculus[END_REF]. The semi-associativity of INDU is a consequence of the fact that all basic relations are serial.
Weak representations
In the previous section, we showed how a qualitative calculus can be defined, starting from a partition scheme. The algebraic structure we get in this way is a non-associative algebra, i.e., an algebra that satisfies all axioms of a relation algebra, except possibly associativity.
Conversely, what is the nature of a partition scheme with respect to the algebra? The answer is that it is a weak representation of that algebra. The notion of a weak representation we use here was first introduced in [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF] for relation algebras. It extends in a natural way to non-associative algebras.
Definition 3. Let A be a non-associative algebra. A weak representation of A is a pair (U, ϕ) where U is a non-empty set, and ϕ is a map of A into P(U × U ), such that:
1. ϕ is a homomorphism of Boolean algebras.
2. ϕ(1 ′ ) = ∆ = {(x, y) ∈ U × U | x = y}.
3. ϕ(a ⌣ ) is the transpose of ϕ(a).
4. ϕ(a ; b) ⊇ ϕ(a) • ϕ(b).
Example 6. Take a set U = {u 1 , u 2 , u 3 } with three elements. Let ϕ be defined by: ϕ(o) = {(u 1 , u 2 )}, ϕ(o ⌣ ) = {(u 2 , u 1 )}, ϕ(m) = {(u 1 , u 3 )}, ϕ(m ⌣ ) = {(u 3 , u 1 )}, ϕ(d) = {(u 3 , u 2 )}, ϕ(d ⌣ ) = {(u 2 , u 3 )}, ϕ(eq) = {(u 1 , u 1 ), (u 2 , u 2 ), (u 3 , u 3 )}, and ϕ(a) = ∅ for any other basic relation a in Allen's algebra. Then (U, ϕ) is a weak representation of Allen's algebra which can be visualized as shown in Fig. 1(a).
Example 7 (The point algebra). A weak representation of this algebra is a pair (U, ≺), where U is a set and ≺ is a linear ordering on U . It is a representation iff ≺ is dense and unbounded. Fig. 1(b) shows a weak representation with three points v 1 , v 2 , v 3 .
Partition schemes and weak representations
Now we come back to the original situation where we have a universe U and a partition of U × U constituting a partition scheme. Consider the pair (U, ϕ), where ϕ : A → P(U × U ) is defined on the basic symbols by:
ϕ(r i ) = R i (11)
and is extended to the Boolean algebra in the natural way:
For a ∈ A let ϕ(a) = ∪ r i ∈a ϕ(r i ) (12)
Proposition 2. Given a partition scheme on U , define ϕ as above. Then the pair (U, ϕ) is a weak representation of A.
Proof. The only point needing a proof is concerned with axiom 4. For basic symbols, ϕ(r i ; r j ) = R i ⋄ R j , by definition, while ϕ(r i ) • ϕ(r j ) = R i • R j . By lemma 1, the former relation contains the latter. The result extends to unions of relations. ✷
From this proposition we can assert the (obvious) corollary:
Corollary 1. The weak representation associated to a partition scheme is a representation if and only if weak composition coincides with composition.
6 What is a qualitative calculus?
We now have a general answer to our initial question: what is a qualitative calculus?
Definition 4. A qualitative calculus is a triple (A, U, ϕ) where:
1. A is a non-associative algebra.
2. (U, ϕ) is a weak representation of A.
The ubiquity of weak representations
Summing up, we started with a partition scheme and derived an algebra from it. This algebra, in all cases, is a non-associative algebra. It may or may not be a relation algebra.
If the partition scheme is serial, it is a semi-associative algebra. In all cases, anyway, the original partition scheme defines a weak representation of the algebra.
In the following sections, we show that weak representations appear both as constraint networks (a-closed, normalized atomic networks) and as universes of interpretation. Consequently, many notions of consistency are related to morphisms between weak representations.
Weak representations as constraint networks
Recall that a (finite) constraint network on A is a pair N = (N, ν), where N is a (finite) set of nodes (or variables) and ν a map ν : N × N → A. For each pair (i, j) of nodes, ν(i, j) is the constraint on the arc (i, j). A network is atomic if ν is in fact a map into the set of basic relations (or atoms) of
A. It is normalized if ∀i, j ∈ N ν(i, j) = 1 ′ if i = j, and ∀i, j ∈ N ν(j, i) = ν(i, j) ⌣ . A network N ′ = (N, ν ′ ) is a refinement of N if ∀i, j ∈ N we have ν ′ (i, j) ⊆ ν(i, j). Finally, a network is algebraically closed, or a-closed, if ∀i, j, k ∈ N ν(i, j) ⊆ ν(i, k) ; ν(k, j).
Let (N, ν) be a network, and consider for each atom a ∈ A the set ρ(a) = {(i, j) ∈ N × N | ν(i, j) = a}. This defines a map from the set of atoms of A to the set of subsets of N × N , which is interpreted as providing the set of arcs in the network which are labeled by a given atom. If the network is atomic, any arc is labeled by exactly one atom, i.e., the set of non-empty ρ(a) is a partition of N × N labeled by atoms of A. If it is normalized, this partition satisfies the conditions (2) and (4) characterizing a partition scheme. If the network is a-closed, then (N, ρ), where ρ is extended to A in the natural way, i.e., as ρ(b) = ∪ a∈b ρ(a), is a weak representation of A.
Conversely, for any weak representation (U, ϕ), we can interpret U as a set of nodes, and ϕ(r i ) as the set of arcs labeled by r i . Hence each arc is labeled by a basic relation, in such a way that (v, u) is labeled by r ⌣ i if (u, v) is labeled by r i , and that for all u, v, w the composition of the label on (u, w) with that on (w, v) contains the label on (u, v). Hence a weak representation is an a-closed, normalized atomic network.
Considering a weak representation in terms of a constraint network amounts to seeing it as an intensional entity: it expresses constraints on some instantiation of the variables of the network. Now, weak representations are at the same time extensional entities: as already apparent in the discussion of partition schemes, they also appear as universes of interpretation.
Weak representations as interpretations
Many standard interpretations of qualitative calculi are particular kinds of weak representations of the algebra, namely, representations. Allen's calculus, e.g., is usually interpreted in terms of the representation provided by "intervals", in the sense of strictly increasing pairs in the rational or real line. It has been pointed out less often in the literature that in many cases weak representations, rather than representations, are what the calculi are actually about.
As already discussed in [START_REF] Ligozat | Weak Representations of Interval Algebras[END_REF], a finite weak representation of Allen's algebra can be visualized in terms of finite sets of intervals on a finite linear ordering. More generally, restricting the calculus to some sub-universe amounts to considering weak representations of Allen's algebra: for instance, considering intervals on the integers (Example 5) yields a weak representation. It also makes sense to consider the problem of determining whether constraint networks are consistent with respect to this restrictive interpretation.
Fig. 2. A general notion of consistency (a commutative diagram relating A, P(N × N ) and P(U × U ) through ρ, ϕ and (h × h)*)
Encountering the notion of seriality is not surprising. Recall that a constraint network is k-consistent if any instantiation of k -1 variables extends to k-variables. In particular, a network is 2-consistent if any instantiation of one variable extends to two variables. Hence a partition scheme is serial if and only if the (possibly infinite) "network" U (or weak representation) is 2-consistent. Many natural calculi have consistent networks which are not 2-consistent, e.g., Allen's calculus on integers. Although the 2-element network with constraint d is consistent, it is not 2-consistent: if an interval x has length one, there is no interval y such that ydx.
What is consistency?
The preceding discussion shows that a weak representation can be considered alternatively as a particular kind of constraint network (an atomic, normalized and a-closed one), or as a universe of interpretation. Now, a fundamental question about a network is whether it is consistent with respect to a given domain of interpretation.
Intuitively, a network N = (N, ν) is consistent (with respect to a calculus (A, U, ϕ)) if it has an atomic refinement N ′ = (N, ν ′ ) which is itself consistent, that is, the variables N of N can be interpreted in terms of elements of U in such a way that the relations prescribed by ν ′ hold in U . More specifically, if (N, ν ′ ) is a-closed, normalized, and atomic, consider the associated weak representation (N, ρ). Then the consistency of the network with respect to the weak representation (U, ϕ) means that there exists an instantiation h : N → U such that, for each atom a ∈ A, (i, j) ∈ ρ(a) implies (h(i), h(j)) ∈ ϕ(a). Hence consistency of such a network appears as a particular case of compatibility between two weak representations. This means that in fact consistency is a property involving two weak representations:
Definition 5. Let N = (N, ρ) and U = (U, ϕ) be two weak representations of A. Then N is consistent with respect to U if there exists a map h : N → U such that the diagram in Fig. 2 commutes, that is, for each a ∈ A, (i, j) ∈ ρ(a) implies (h(i), h(j)) ∈ ϕ(a).
This generalization of the notion of consistency emphasizes the fact that it is a notion between two weak representations, where one is interpreted in intensional terms, while the other is used in an extensional way, as a universe of interpretation.
Example 8 (The point algebra).
A weak representation in that case is a linearly ordered set. Consider two such weak representations (N, ≺ N ) and (U, ≺ U ). Then (N, ≺ N ) is consistent with respect to (U, ≺ U ) iff there is a strictly increasing map h : N → U .
Inconsistent weak representations
In that light, what is the meaning of the existence of inconsistent weak representations? Examples of finite atomic a-closed networks which are not consistent exist e.g. for the cyclic interval calculus or the INDU calculus [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF]. In such cases, the universe of interpretation of the calculus (such as intervals on a rational circle, or intervals with duration) has too much additional structure, and too many constraints on its relations, for the network to take them into account. Characterizing the cases where this can happen seems to be an open problem in general.
Conclusions
This paper proposes to introduce a shift of perspective in the way qualitative calculi are considered. Since Allen's calculus has been considered as a paradigmatic instance of a qualitative calculus for more than two decades, it has been assumed that the algebraic structures governing them are relation algebras, and that the domains of interpretation of the calculi should in general be extensional or, in algebraic terms, representations of these algebras. These assumptions, however, have been challenged by a series of facts: some calculi, as first shown in [START_REF] Egenhofer | Relation Algebras over Containers and Surfaces: An Ontological Study of a Room Space[END_REF], then by [START_REF] Ligozat | Spatial and Temporal Reasoning: Beyond Allen's Calculus[END_REF], involve non-associative algebras. Also, for many calculi, the domains of interpretation may vary, and do not necessarily constitute representations.
We argued in this paper that a qualitative calculus should be defined abstractly as a triple consisting of a non-associative algebra and a weak representation of that algebra. This abstract definition makes apparent the fact that particular kinds of networks on the one side, and representations of the algebras on the other side, are ultimately of a common nature, namely, both are particular kinds of weak representations. This last fact has of course been known before: for instance, the work described in [START_REF] Hirsch | Relation Algebras by Games[END_REF] is about trying to construct representations of a given relation algebra by incrementally enriching a-closed networks using games à la Ehrenfeucht-Fraissé. However, we think that putting qualitative calculi in this setting provides a clear way of considering new calculi, as well as an agenda for questions to be asked first: what are the properties of the algebra involved? What are weak representations? Are the intended interpretations representations of the algebra? When are weak representations consistent with respect to which weak representations?
A further benefit of the framework is that it makes clearly apparent what consistency really means: consistency of a network (a network is a purely algebraic notion) with respect to the calculus is a particular case of consistency between two weak representations: it can be defined as the possibility of refining the network into a weak representation which is consistent wrt. the one which is part of the calculus considered.
Obviously, defining a general framework is only an initial step for studying the new problems which arise for calculi which are less well-behaved than Allen's calculus. A first direction of investigation we are currently exploring consists in trying to get a better understanding of the relationship between consistency and the expressiveness of constraint networks.
Fig. 1. A weak representation of Allen's algebra (a) and of the point algebra (b)
We use the term algebraically closed, or a-closed, to refer to the notion which is often (in some cases incorrectly) referred to as path-consistency: for any 3-tuple (i, j, k) of nodes, composing the labels on (i, k) and (k, j) yields a result which contains the label on (i, j).
Contrary to one of the authors' initial assumption, the JEPD acronym does not seem to be related in any way to the JEPD hypothesis in biblical exegesis, where J, E, P, D stand for the Jehovist, Elohist, Priestly and Deuteronomist sources, respectively!
This notion is not to be confused with weak representability as used by Jónsson, see[START_REF] Jónsson | Representation of modular lattices and relation algebras[END_REF][START_REF] Hirsch | Relation Algebras by Games[END_REF].
⋆⋆ National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
Pierre GERVAIS, University Paris 8 / UMR 8533 IDHE On June 1, 1755, an anonymous clerk in Bordeaux merchant Abraham Gradis's shop took up a large, leather-bound volume, containing 267 sheets of good paper, each page thinly ruled in red. The first page was headed by the legally compulsory stamp of approval of an official, in this case provided by Pierre Agard, third consul of the Chambre de commerce of Bordeaux. Agard had signed the volume on May 13, just a week after having taken up his position, and had thus made it fit for use as a merchant record. 1 Right under this certification, our clerk wrote in large cursive across the page 'Laus Deo Bordeaux ce Prem r Juin 1755,' and proceeded to copy the first entry of what was a new account book: a list of all 'Bills receivable,' that is, debts owed his master. This inaugural act, however, went unnoticed, and one would find no mention of it either in Gradis's letters or in the later accounts of historians.
Opening an account book for anybody connected with the market place was a humdrum affair at the end of the eighteenth century, so much so that to this day we tend to take for granted the meaning of such a gesture. What would be more natural than wanting to record one's transactions, customers, and the ubiquitous mutual credit which was a necessary part of commercial life? Merchant practice has given rise to a very vast body of historiography; from the solid baseline provided by the classic studies by Paul Butel, Charles Carrière or André Lespagnol as well as Bernard Bailyn, David Hancock or Cathy Matson on the British side, it has developed into one of the major topics of historical conversation in the past twenty years with the rapid development of the Atlantic paradigm, which gave 'le doux commerce' center stage as the key force behind European expansion, and possibly the organizing principle of what has come to be called the 'Atlantic world'. 2 But account books have remained on the margins of this conversation, perhaps because the very activity they embodied was rather pedestrian in the 1700s. Accounting history has traced the rise of double-entry accounting, a sophisticated method of financial control which was the primary -though not exclusivesource for accounting as we know it today and as far back as the Renaissance.
By the end of the seventeenth century, double-entry was well-known among elite traders, but most accounting was still done in a simpler single-entry system. In this respect, the eighteenth century is better known as the era in which accounting for costs started to develop along with the new, large productive ventures of the early agricultural and industrial revolutions, while, in sharp contrast with ironworks and noble estates, merchant accounting remained primarily concerned with financial transactions. Moreover, outside of a select group of large companies and international traders, which had to innovate because of the complexities of their multinational operations, most eighteenth-century merchants held to practices already well-established in the two preceding centuries. As a result, merchant accounting is largely seen as a rather uneventful branch of commercial life until the rapid spread of new, more elaborate management techniques associated with the Industrial revolution in the nineteenth century. 3 There is more to Abraham Gradis's account book, however, than this straightforward and whiggish tale of order and progress, for the figures it contains express the underlying mechanics of European mercantile expansion, and thus raise the question of its nature. That European expansion was trade-based is hardly a new idea, but why exactly trade expandedwhy merchants filled more books with more accounts-is not as simple a question as it sounds. The standard economic approach posits the quasi natural expression of an urge to expand among economic agents, particularly merchants, once the proper institutional environment had been created; secure property rights through limitation of royal power, for instance, or more open institutions through which people would be empowered to escape extended family or tribal constraints, generated new incentives for those willing to shed old routines. The discussion thus revolves around the presence of these incentives, while the acquisitive impulse itself is considered a given. Indeed, this view of economic expansion as a natural consequence of new incentives dovetails smoothly with descriptions of the First Industrial Revolution as a set of innovations in the productive sector also enabled through a propitious social-cultural environment. The 'Industrial Enlightenment' generated a new level of incentives and opportunities, freeing yet more energies and imagination on the road to industrial capitalism, but the basic mechanism was the same: homo oeconomicus -usually male in such accounts-saw a new field of opportunities once barriers had been removed.
For a trade which promoted market unification and competitive intensification, account books first and foremost represented a recording tool which helped rationalize the decision process, and there is no reason to assume that their analysis would pose any particular challenge. This construction also fits well with a regional and even nationalized view of Atlantic trade, since a generalized economic pattern of growth would take varied local forms, depending on local political circumstances. [START_REF] Acemoglu | The Rise of Europe: Atlantic Trade, Institutional Change, and Economic Growth[END_REF] But in the past fifteen years or so, scholars specializing in the French Old Regime have started offering a different model, in which they stress the time-and space-specific dimensions of economic activity. In these accounts, European economic expansion followed rules and paths which were peculiar to the Early Modern era. Price-setting mechanisms, for instance, are described as primarily operating in periodic fashion and within regulatory and social limits, with profit itself a result of both experience and anticipation.
In this universe, prices could not play the informational and distributive role assigned by modern economists. While progress was possible, it was not a straightforward result of competition and reward, in what were effectively 'priceless markets'. Expansion ocurred not so much by changing prices or productivity than by playing on segmented, spatially segregated niche markets, through product innovation more often than not. Profit distribution was dependent on a complex intertwining and hierarchization of activities, embodied in far-reaching sub-contracting networks and cartels. Last, but not least, the transition to the First Industrial Revolution represented a break with past practices, rather than being a gradual evolution of techniques. In this transition, we see trade as a separate sphere with its own rules and own separate developmental process. "Progress" does not necessarily occur, or at least its presence is open to question, because there is no assumption that the system will work to maximize the efficiency of resource allocation. Similarly, the question of whether these processes were heterogeneous, depending on regional variables, or on state, national or local institutions, or whether they were generalized and uniform, is left wide open. [START_REF] Grenier | L'économie d'Ancien Régime[END_REF] This debate has direct bearing on the analysis of French activity in the Caribbean trade, because depending on which side one picks, it leads to two very different ways of analyzing geographic space and its precise role in the period of 'first globalization.' [START_REF]A term used[END_REF] How was the fact of a French colonial Empire articulated in a more general movement in which trade expanded throughout the colonial sphere? If we accept the idea that the expansion of trade was a direct reflection of a timeless acquisitive impulse on the part of merchants, then the identification of the Caribbean as a separate sphere was primarily a political phenomenon, before being an economic one or even a social one. The creation of French Caribbean markets was consequently a function of royal policies, within a broader frame of market development.
Conversely, the segmented and monopolistic character of Early Modern markets leads to the possibility that the French Caribbean islands were a separate market, or even a set of separate markets, deliberately constructed within a broader course of imperial economic development.
Royal policy was an acknowledgement of a state of affairs on the ground as much as a contribution to it. The present paper questions the very notion of a 'French' Atlantic. The concept of a French Atlantic makes sense in a world structured by the colonial policies of European States. It makes less sense if that world turns out to have been a mosaic of places held together by the bonds of exchange relationships as much as by the bonds of Empire. This article focuses on the sources of the commercial activity which made the French Caribbean, in particular, Saint-Domingue, Martinique, and Guadaloupe, the pride of the eighteenth-century French Empire. I concentrate on the counting houses of Bordeaux, Nantes and Saint-Malo. In these places, the slave trade was organized and financed, planters were bankrolled, and sugar and other colonial products were brought back to be redispatched throughout Europe. Complex business webs were built to deal with what was arguably the most wide-ranging endeavour devised by economic agents anywhere, but these webs were not designed within a regionalized framework, at least not in the national or imperial sense.
Account books left no space for place or borders. Instead, they structured relationships around very different notions of interpersonal credit and risk. This does not mean that the French Caribbean -or more broadly the French Empire -was irrelevant to the way merchants operated. To understand how place and politics interacted with trade, however, we have to understand according to which principles trade was organized in the first place.
Account books provide us with a perfect tool to grasp this organization. Accounting was primarily a way of listing debts and loans, and as such was the most direct expression of the key relation of power in the Early Modern era, that of credit. [START_REF] Fontaine | L'économie morale: pauvreté, crédit et confiance dans l'Europe pré-industrielle[END_REF] It gave this relation its grammar, and underpinned all its manifestations. Each account was a narrative summarizing the interaction between two very specific partners within this very specific universe of credit.
The fact that accounting was peripherally concerned with profit calculations or strategic decisions points to the hierarchy of priorities which an actor on Early Modern markets had to adopt in order to be successful; providing credit came first, bottom-line profit was a distant second, and cost issues were even farther behind. Within these constraints, merchant activity necessarily transcended both imperial and regional boundaries; it was articulated around the personal, not the political or regional, a fact which is clearly apparent in account books.
Of course, borders did offer avenues for comparative advantage and provided ways to thwart competition from "foreign" merchant networks, just as much as regional, ethnic, or religious kinship ties could be used to reinforceme these networks. The importance of these possibilities was abundantly underscored by the exclusif and by other non-tariff barriers, as well as by the role of kinship in international trade. But regardless of the context in which it was deployed, credit still had to stand for a whole complex of interpersonal links, well below -or beyond -the national or regional level, and only partly quantifiable. As will be demonstrated in the case of the House of Gradis, there is good reason to believe that these links, more than any other ingredient of merchant life, were the defining force behind merchant strategies, and unified the merchant world in ways beyond the reach of all other centrifugal forces. All merchants operated in the same way, assuredly in a very segmented universe, but with a full consciousness of the underlying unity of commercial life. Regions and empires did exist, and did play a role, but their roles were strictly constrained by the rules of merchant exchanges.
*
Recording transactions in books, in a written manner, was a legal necessity in 'law merchant,' i. e. in the body of judicial decisions which provided precedent and guidance to jurisdictions having to adjudicate conflicting claims among traders. A written record of a transaction, appearing seamlessly as one item among a chronologically arranged series, provided solid proof that a transaction had taken place. This usage, already nearly universally enforced by customs in Western Europe, became a legal obligation in France with the Ordonnance de 1673. Knowledge of accounting came first in the list of skills one had to have in order to bear the title of Master merchant (Title 1, Article IV); a compulsory balancing of accounts had to take place at least every year between parties to a contract (Title 1, Articles VII and VIII); and a whole, though rather short, chapter was devoted to the issue of books (Title 3, 'Des livres et registres des négocians, marchands et banquier'). As with merchant law, however, recording daily transactions in order was enough; what was required was a 'journal' or 'daybook', containing 'all their trade, their bills of exchange, the debts they owe and that are owed to them, and the money they used for their house expenses,' and written 'in continuity, ordered by date, with no white space left' [between two transactions]. [START_REF] Sallé | a lawyer in the Paris parliament[END_REF] Any calculation above and beyond this simple act of recording was unnecessary.
Admittedly, well-kept accounts were useful whenever they had to be balanced, in the case of a death, a bankruptcy or the dissolution of a partnership. But 'useful' did not mean 'necessary,' and it is hard to believe that generations of merchants would have filled endless volumes with tiny figures simply to spare some work to their creditors or executors. In truth, with a well-kept journal, the work of ventilating operations between accounts to calculate the balance on each of them could be postponed until it became necessary. And it was not necessary as often as one would think: the Ordonnance de 1673 quoted above prescribed compulsory settlements of accounts every six months or year, depending on the kind of goods traded, which implies that left to their own devices, traders would not necessarily have bothered to settle accounts every year -and indeed the book with which we started, Gradis's June 1755 journal, shows no trace of balancing the accounts all the way through to 1759. [START_REF]Ordonnance du commerce de 1673[END_REF] Even the most elaborate form of accounting, namely, double-entry accounting, was seldom used for calculating profits and helping managerial decision.
In the absence of standardized production and enforceable norms of quality, each transaction was largely an act of faith on the part of the buyer. After all, the buyer was almost never enough of an expert on a given product to be able to detect hidden faults and blemishes in quality. A trader was thus at the mercy of his suppliers for the quality of his goods. The problem was identical for sales, since large merchants had to sell at least part of their goods through commission merchants living in far-away markets. These agents alone could gauge the state of a local market and maximize the returns of a sale; their principal could simply hope that his trust was not misplaced. Moreover, the slow flow of imperfect information meant that markets for any product could fluctuate wildly, suddenly and unexpectedly, so that even a venture with the best suppliers and the most committed selling agents could come to grief. In the last analysis, forecasts were at best guesswork, past experience was not a useful tool for short-term predictions, and the valuation of each good was the result of an ad hoc negotiation entailing both an informed decision on the value of each particular good -this piece of cloth, that barrel of port-and a bet on the market prices at the future moment of the eventual resale. [START_REF] Yamey | The "particular gain or loss upon each article we deal in": an aspect of mercantile accounting, 1300-1800[END_REF] If a simple record of transactions was enough to fulfill legal obligations, and more complex records were not necessarily very useful for profit calculations or to help the decision-making process given the shifts in markets and the uncertainties in supplies, why then were such records kept at all? To answer this, one has to understand what was recorded -and here we turn back to our Bordeaux merchant, Abraham Gradis, and his account book. [START_REF]'the account book of the David Gradis & Fils partnership[END_REF] Parts of this book, and elements of the preceding one, for 1751-1754, were analyzed for the purpose of this paper, in order to get a quantitative grasp on what kind of operation was recorded. [START_REF][END_REF] Gradis was an international trader, active in the French colonies. Much of his activity consisted in sending supplies to the French Caribbean and Canada, and importing colonial goods in return. As a Jewish trader, he could not at first own plantations directly, but maintained an extensive network of correspondents in the colonies. In Gradis's books, 217 persons or families owned an account active between October 1754 and September 1755. Out of these 217 people or kin groups, 54 can be geographically located through a specific reference in the books. Ten of them lived in Martinique, Guadeloupe and Saint-Domingue, including the Révérends Pères Jacobinss and individuals from well-known planter families.
There were another seven correspondents in in Quebec, including François Bigot, intendant of Nouvelle-France -an official connection which would land Gradis in the middle of the Affaire du Canada after 1760, when Bigot would be accused of corruption. Of course, with 75 percent of the accounts not identified, one can easily assume that Gradis's network included significantly more correspondents in the French colonies than the few I could identify. [START_REF]The list of accounts, was derived from both Gradis's journals[END_REF] A list of business relations does not make an account book, however; what is really significant is what Gradis did with it. Personal accounts were the most numerous by far. To the 217 clearly personal accounts (individuals, families, institutions like the Jacobins or semiofficial accounts for "The King," "Intendant Bigot" or "Baron de Rochechouart") must be added seven opened for unspecified partnerships ("Merchandize for the Company") or for commission agents ("Wine on account with X"), and an extra 15 covering ship ventures ("Outfitting of ship X ", "Merchandizes in ship Y"), which were at least in part also partnership accounts. Overall, a least 225 of the 266 accounts which appeared between October 1754 and September 1755 can be classified with absolute certainty as personal accounts. Each of these personal accounts created a relationship between Gradis and the individuals concerned very much like that of a bank with its customers, except that the credit which was extended was apparently mostly free of charge and interest.
For instance, one such posting essentially meant that the account 'Eaux de vie', i.e. Gradis himself, had sent 2,176 Livres tournois worth of spirits for the benefit of Mr. La Roque, and that no payment had been made by the latter. Indeed, no payment was made for the rest of the summer, and probably no payment would be made until the spirits were sold. The net result was that Gradis had loaned this amount of money to La Roque for several months, with no apparent charge or interest.
Conversely, when Gradis's clerk wrote the following:
Caisse Dt à Dupin £ 2712.3 pour du Sel qu'il à Livré en 1754 p le navire L'Angelique et pour le Cochon envoyé a la Rochelle p le n.re L'entreprenant dont nous debitons La Caisse, en ayant été Creditée 15 he was recording that a Mr. Dupin had generously loaned almost 3,000 Livres tournois worth of salt to Gradis for anywhere between six months and a year and a half; this sum had been received in the 'Caisse,' that is, paid over in cash to Gradis, for salt which had been given by Dupin -but Gradis had not paid his debt to the latter, and, again, no interest or charge was listed. Last but not least, two credits could cancel each other: Mr Darche Dt. à Mlle de Beuvron £ 2446.5 pour une année de la rente qu'il doit a lad.e Dlle 16 meant that the corresponding sum was transferred to Mademoiselle de Beuvron on Gradis's books, to be offset by sums she owed him, while he would take charge of recovering what Darche owed in the course of his business transactions with him. This book credit was the dominant form of payment in Gradis's accounts, as it may well have been the case for all traders everywhere. Payment could be made with metal currency, or with commercial paper, promissory notes and bills of hands ranging from the time-honored international letter of exchange to the more modern note of hand, a simple I.O.U from one individual to another. If currency was used, then the 'Caisse,' or cash box, would be listed as receiving or disbursing the corresponding sum; though sometimes, for obscure reasons, commercial paper found its way into the 'Caisse,' which undermined its very purpose. 17 If commercial paper was used, it would be listed as 'Lettres et billets à recevoir' or 'Bills receivable,' that is, paper debts from others to be cashed at some point, or 'Lettres et billets à payer,' or 'Bills payable,' I.O.Us manifesting that some money had been borrowed and would have to be reimbursed at some point. Complex rules governed interest rates and the period of validity of such paper debts, but the important point here is that these debts were always listed separately, in the relevant accounts. This makes possible a quantitative analysis of the use of currency, commercial paper and book credit over the period studied.
We focused on 89 personal accounts from June-August 1755, having discarded accounts with ambiguous titles, or belonging to Crown officials, to Gradis family members, or to partnerships to which Gradis probably belonged, such as 'Les Intéressés au Navire L'Angélique.' [START_REF]Thus we excluded 'Le chevalier de Beaufremont' and 'Mr de Beaufremont', who could be one and the same, or father and son[END_REF] This gave us a set of individual or partnerships to whom 'normal' credit, not influenced by personal proximity or official status, would be extended. The results, as shown in Figure 1, are very clear: even for a major international trader like Gradis, in a European commercial port flush with metal currency, book credit was the main tool of business. Even if we count as 'Cash' transactions all exchanges of commercial paper for cash, the share of credit instruments compared to hard currency in transactions around the Gradis firm reached a hefty 72 percent. 19 The volumes involved in these transactions were impressive. Because the 'Bills receivable' and 'Bills payable,' that is, the formalized credit and debt accounts, were balanced at the beginning of June 1755, we know that at that date Gradis held 671,117 Livres tournois in IOUs from various people, and owed to dozens of creditors an equally impressive 430,072
Livres tournois, also in formalized paper IOUs.
(Note to Figure 1: Book credit = all other transactions, eliminating double postings, all profit and loss postings (equivalent to the total or partial closing of an account), and subtracting any payment from or to the same account within two weeks (equivalent to a payment on the spot, recorded with some delay); quick payments of that type represented barely over 40,000 Livres tournois, less than 10 percent of all payments.)
The figures above only refer to the business Gradis was doing with the individuals and groups who held accounts with his firm. A complete analysis, including purchases and sales listed directly in Gradis's own accounts, which were all commercial paper-related operations, as well as credit extended to Crown officials, gives a different set of figures: Cash transactions represented over a third of all transactions in value. Over half of these cash movements went to the purchase and sale of commercial paper; straight cash purchases or sales represented 16 percent of all transactions in value. Even including all movements between cash and commercial paper accounts, book compensations still made up over 40 percent of all the volume of trade in the Gradis firm, proof that personal credit flows were allimportant. Indeed, commercial paper itself was nothing else than formalized credit, and if we exclude the complex category of transactions mixing commercial paper and cash, we end up with over 80 percent of all transactions being made on credit.
Last but not least, cash, commercial paper and book credit were not equivalent means of payment. Leaving aside transactions involving both cash and commercial paper, it is possible to observe how each type of payment was used in terms of the value of the transactions involved:
Source: 181 AQ T*, loc. cit. The transactions included are the same as in Figure 2, q. v.
These graphs show again that book accounts were used as much as cash, and in much the same way. Obviously these book credits were not as liquid (that is, easily convertible without loss of value) as cash, since they could circulate only within the circle of individuals and groups having themselves accounts with Gradis. But on the other hand the volume they represented was actually much higher than the volume of cash used by the firm, which is another way of saying that transactions took place mostly within this circle of known partners.
Moreover, book accounts could also, in special cases, comprise much larger amounts, otherwise usually dealt with through commercial paper.
Thus our opening caveat: accounts and account books were not straightforward tools of analysis. Opening an account was tantamount to creating a special bond of partnership, which explains why whole aspects of a traders' activity were lumped together in nondescript general accounts, while apparently small transactions could give rise to specific accounting efforts.
Because the core distinction was between direct partners and all others, issues of costs and profit were only dealt with peripherally, if at all; at the very least, they were clearly subordinate to this higher, primary boundary between the inner circle and the rest of the world. The same held true for national loyalty, regional proximity and all other parameters of interpersonal relationships. While they could play a role in specific cases, there is no indication that account holders could be neatly dumped into one of these categories. What made a business acquaintance into an account holder was the logic of the credit network, a network at the centre of Gradis' strategy.
* Almost all the non-personal accounts were also credit-centered, in much the same way as personal accounts. Gradis, our Bordeaux merchant, practiced a sophisticated system of double-entry accounting, which allowed him to develop two types of accounts on top of personal accounts. Parts of his inventory, a handful of goods which he traded in most frequently, were granted specific accounts: sugar, wine, indigo, spirits, and flour, thus appeared as separate accounts, debited with incoming merchandise, and credited when the goods were sold. To these, should be added accounts such as Cash and Bills payable and receivable, which also contained assets. Then, there was what accountants would call today 'nominal' accounts such as Profit and Loss and expense accounts. These accounts were supposed to summarize in the final balancing of all accounts (which almost never took place) the status of the capital expended, which had been neither invested in inventory, nor loaned out as a book debt. In practice however, most real accounts, and even some apparently nominal accounts, were much closer to personal accounts than one would expect, since they, too, were meant to encapsulate a certain credit relationship with a certain group of partners.
Let us start with an example: an account called 'Indigos p. Compte de Divers' included all indigo traded on commission. This made sense only if the discriminating principle was a combination of both the type of principal/agent relationship established by Gradis as a commission merchant with the people who commissioned him, and of the specific product concerned. Gradis's accounts could not readily provide an analysis of the benefits made on indigo in general, since there was a separate 'Indigo' account. Indeed, another account of the same type was 'Sucres et caffés p Cpte de Divers,' which mixed two vastly different products, proof if need be that the issue was not the products themselves. [START_REF] Actually | one of the transactions recorded in this account included a set of 'Dens d'éleffans,' which means that account titles were not[END_REF] There was no way either to calculate the commissions Gradis received from each principal he was commission merchant for, nor on each merchandise, since all commissions were dumped into one account. What such accounts implied, rather, was that the principals who commissioned him constituted a coherent group, a network of sorts which deserved separate analysis. Thus the indigo sold on commission came from 'Benech L'aîné' and 'Benech de L'Epinay,' while the 'Sugar and Coffee,' was sent by David Lopes and Torrail & La Chapelle, two firms from Martinique;
each account was based on a specific contractual relationship with an identifiable group, not even necessarily specialized in one product, but clearly identifiable within the merchant network erected by Gradis.
The same analysis holds true for ship-related accounts, except that the relationship was non-permanent and linked explicitly to a certain venture. A ship account constituted the perfect illustration of such a venture-based account, since it distinguished a separate group of people, from the captain to the co-investors who helped bankroll the outfitting and the lading, and only for the duration of the venture. Anything pertaining to a ship was thus gathered into one account, or distributed among several accounts if particular subgroups of investors were concerned within the larger framework of the general venture. This explains why in an extreme case three separate accounts existed side by side in the same three months of June-August 1755 for the ship Le David, one for the outfitting ('Armement du navire Le David' or 'Navire Le David'), one for its freight ('Cargaison dans le N.re L David'), and one for the goods on board which directly belonged to Gradis and nobody else ('Cargaison Pour n/C dans Le Navire Le David'). Again, the issue was not only the contractual link (investments held in partnership or not), nor the type of activity (shipping), much less the goods concerned (not even listed in this case), but a mix of all elements, which made of each venture a separate, particular item.
Even when Gradis himself sold his goods through commission merchants, this act did not systematically lead to a separate account: whether the principal/agent relationship deserved to be individualized depended on a series of parameters, most of which probably elude us. Thus there were four specific accounts for wine sold on commission. But silverware sold through Almain de Quebec was credited to 'Marchandises générales,' with no separate record kept. The Benechs were dealt with through a common account, as were the two Martinique firms who used Gradis as commission merchant. The relationship was the same in all these cases, but no general rule was applied beyond Gradis's own view of the importance and separate character of the relationship giving rise to a given account. In some cases this relationship was so obvious to our Bordeaux merchant that the name he picked for an account was remarkably poor in information. The case occurred both for personal accounts ('La société compte courant,' 'La société compte de dettes à recevoir à la Martinique' -without Gradis feeling bound to explain which 'société' was concerned exactly) and to venture-based accounts (what exactly was 'Cargaison n° 7' in 'Marchandises pour la Cargaison n° 7'? Were certain unspecified ship accounts, such as 'Le Navire Le Président Le Berton,' outfitting accounts, lading accounts, or ownership accounts?).
The systematic dividing of accounts according to the specific venture, and within it according to complex combinations of contracts and ownership, proves conclusively that accounts were not organized by region. This holds true as well for merchandise accounts. There were eleven such accounts, with one of them, 'Marchandises générales,' including (over three months) silverware, unspecified 'divers de Hollande,' 'quincaille,' paper, cinnabar, salt, beef, 'Coity'
[coati?], feathers, walnut oil, lentils, and even 'goods from Cork.' But even with more specialized accounts, such as 'Farine' or 'Eaux de vie,' there was no effort to trace a certain batch of goods from the origin through to the sale, which means that buying and selling prices of specific goods could not be compared. Moreover, the costs entailed in trading certain goods were not necessarily recorded in relation to them, as in the following example: Here packing and freight costs were credited to Cash, and debited to the personal account of the customer, rather than being listed in the 'Vins' accounts, so that the actual cost of delivering this wine could not be included in the calculation of the profit derived from selling this particular good, nor was it listed separately elsewhere.
The lone cost account identifiable as such, called 'Primes d'assurance,' gathered all insurance premiums paid by Gradis for his shipping; but apparently he decided that separating this particular cost was not worth the trouble, and closed this account into the general Bills payable account on July 21st, 1755, only to reopen it the following day, listing a new insurance premium due for an indigo shipment. 24 Consequently, most insurance premiums found themselves jumbled together with the rest of Gradis's formal debts, while a few others stayed in the corresponding account. Another cost account, 'Fret à recevoir de divers,' listed freight paid by Gradis as a commissioner for others during the year 1754, but it had been closed by the summer of 1755, reappearing briefly because a mistake had been made in settling it. 25 Another account, 'Bien de Tallance,' was basically manorial; it individualized merchant relationships, with some of them set apart because of the specific personal relationship through which they appeared, as with the Benechs for indigo.
As shown by the following table, the account book was largely dominated by the personal credit relationships Gradis had built with the people he dealt with. Very little space was left for other issues. Accounting was, first and foremost, credit accounting, and mostly personal credit accounting. Each account was a narrative of a certain relationship, a tool for quantitative or strategic analysis maybe, but on a strictly ad hoc basis: what counted in most cases was the people, or the group of people, who underpinned the activity thus accounted for.
The identification of each element worth a separate account (assets specific to Gradis alone, or people being partners with Gradis, or people simply dealing with Gradis, or in a few cases all people entering into a certain kind of credit relationship) was neither a mere matter of legal contract, nor a straightforward result of regional or product specialization, but a complex combination of all these elements, and possibly more. No two accounts were the same, either; each had its own past, its own potential, and possibly its own constraints, so that generalization was largely impossible. What was reflected here was the highly segmented and uneven nature of early modern markets, and the fact that group control of one or other corner of this market, however small, was the best road to success. Each trading effort was thus very much an ad hoc affair, with a specific good or set of goods, in a specific region, along specific routes, all these specificities being summarized and expressed by the set of business associates which would take charge of the trade from its beginning to its end. Each sum had its own history, and its own assessment of credit: loaning to the King had its risks and rewards, which were not the same as partnering with a fellow Bordeaux merchant, or humoring a friendly colonial official (who actually partnered with Gradis in supplying his own territory). In at least two cases out of four, Bigot and Veuve La Roche, the rewards were indirect; friendly officials could provide huge comparative advantages, being accommodating with a partner's widow earned one points within one's community, and there would be monetary windfalls eventually. Still, there was in all this a common grammar, a set of rules above and beyond the direct accounting rules, which would enable Gradis, and all other merchants, to compare and contrast their multiple ventures. Each of Gradis's decisions could be assessed - not measured, but judged qualitatively - in terms of enhanced credit, and each credit enhancement could be translated, again in unquantifiable but very concrete ways, in terms of control. Clienteles bred networks, which made access easier, and could turn into a decisive comparative advantage over less connected competitors, as in the case of Bigot.
There is a last dimension to Gradis's activity which must be underlined. Counting the sum of all his operations for the 12 months between October 1754 and September 1755 amounts to nearly seven million Livres tournois. The total number of accounts active during the same period was well above 200. An obvious advantage of such a thick and diverse network was risk diversification; Gradis was too big to fail, not because of the size of his operations, but because of their variety. A few accounts could turn out to be lost investments, but there were many others from which these losses could be compensated. With hundreds of potential credit sources, a credit crunch was highly unlikely. One bad batch of goods could lead our Bordeaux trader to lose face on one specific market in one town, but he could point to dozens of other markets elsewhere on which he had been a trustful supplier, and his reputation would merely suffer a passing dent.
Power such as Gradis's has implications for the analysis of the wider early modern economy. Certainly nobody would suggest that markets under the Old Regime were open and transparent. Network-based comparative advantages were turned into bases for monopolization of a market segment, a monopoly sometimes sanctioned by law, as in the case of the various India Companies. Collectively, then, the merchants who held the keys to the various segmented parts of the economy in Europe, the Americas and parts of Asia and Africa were truly a transnational ruling class, with an unassailable position as long as their solidarity held firm, as long as they successfully fended off any drift towards freedom of entry into these multiple niche markets where they made their fortunes. In this way we get back to a regional motif, but under a very different angle; regions existed insofar as they were controlled by a defined subgroup of this international ruling class. It may well be that access to the French Caribbean were dominated by a coherent group of French merchants, but this is unclear. At a higher level, the recent general trend towards describing the Atlantic as several more or less nationalized Atlantics may be read as an implicit recognition that nation-based groups of merchants had built exclusive trading spaces which they by and large controlled.
But how these groups interacted with institutional realities and other constraints to create more or less exclusive trading spaces, and how rules of interpersonal, account-based behavior were modified under local conditions, are questions to be explored.
On that score, Gradis' example provides only limited support to the idea of a French Atlantic. He operated mostly within the French colonial empire, but was also invested in the Spanish empire, a fact which seems to underscore the relevance of Empire-based analyses.
On the other hand, his Caribbean ventures were only one facet of a broad and diverse network, which encompassed France and several other European countries. His was a specifically French operation, both as a royal supplier and as a Bordeaux trader focusing on Quebec and the Caribbean. Notwithstanding these specializations, his accounts stressed personal credit, not national or regional networks. This makes sense since the ubiquity of credit meant that the key to merchant success was a sizeable and trustworthy network of partners, which in the case of Gradis extended well beyond the limits of any one region of the French sphere, and indeed well beyond that sphere. A French trader could favor connections to French planters, French officials and the French Crown; but no trader in his right mind would ever forget that a successful operation depended on cooperation with other traders regardless of nationality, location, religion or ethnic origin. "Frenchifying" or "Atlanticizing" one's operation was always a possibility -but only within limits, and never so far as to structure the way accounts were kept. In the end, the King in Versailles was treated the same way as Jonathan Morgan from Cork, or as the la Pagerie from Martinique, as pieces in a wider puzzle, the shape of which included regional considerations, but was never limited by them.
formal debt, over 2 to 1, generated in the next three months, book debt in toto may have amounted to well over two million Livres tournois... The larger accounts may have borne interest; one Jacob Mendes had his account balanced, and the clerk recorded the following: Jacob Mendes cte: Vx: a luy même ct N.au £ 67167.15.5 pour Solde Compte Regle ce Jour en double, dont les Interets Sont Compris Jusques au 1er Courant 20But four other personal accounts were balanced between June and August 1755, with no mention of interest. There is no such mention either in the numerous instances where errors were discovered, and accounts rectified, sometimes months after the error was made.[START_REF]Farines Dt; a Marieu & Comp & La Roche £ 200 pour Erreur Sur leur facture du 25 may, ou ils ont debité pour 300[END_REF]
Fig. 1: Value of transactions by means of payment used for personal accounts in the Gradis firm, June-Aug. 1755 (in percent of the total value of transactions for each type, in livres tournois; Crown officials and ambiguous accounts excepted).
Fig. 2: Value of transactions by means of payment in the Gradis firm, June-Aug. 1755 (in percent of the total value of each type of transaction, in livres tournois).
Fig. 3: Proportion of transactions by means of payment and value of transaction in the Gradis firm, June-Aug. 1755 (in percent of number of transactions for each category; n = 128 cash, 41 commercial paper, 107 compensation between accounts).
What counted most, and what was most counted, was with whom who did what; what was being done was only part of the equation.
Mr La Roque à Versailles Dt, a Divers £ 2176.16.6 pour 60 demy Barriq. Eaux de vie envoyées p Son Compte à Quebec par le N.re Le st Fremery de st Valery Suivant Le Livre de factures a f° 136 Savoir
  à Eaux de Vie pr 19 p. Cont. en 6 Bq 970 V. £ 1864.11.6
  à Caisse pour tous fraix deduit le montant des pieces vuides 232.5
  à Primes d'assurance p £ 2000 à 4 p Ct 80
Le Comte de Raymond Dt; à Divers £ 395 Pour Le Vin Suivant a luy envoyé par la voye de Horutener & Comp de Rouen pour faire passer debout a Valogne, à Son adresse Savoir
  à Vins de talance pr 2 Bques en double futaille £ 175
  à Vins achetés pour 1/3 à 70W le thoneau 35
  à Caisse pr 50 Bouteilles vin muscat rouge à 30s £ 75
    p 50 Bouteilles dt blanc a 30s 75
    pr Rabatage des 2 Bques et double futaille 18
    pr droits de Sortie arimage & fraix 17
    185
Table 1: The account structure of Gradis's journal, October 1754 - September 1755. 1.A) Individual credit relationships: 1.A.a) Personal accounts; 1.A.b) Assets in partnership or sent on commission. 2) Shipping ventures: 2.a) In partnership (e.g. 'Cargaison dans le N.re Le David'). 3) Other real assets (e.g. 'Marchandises' accounts).
When Gradis was trading with Martinique, the key to success was a good network of planters who would supply him with quality colonial goods, official backing for his trading activities both in Martinique and in the colonial administration in France, and the physical means to bring his goods across the ocean. Trading with Amsterdam meant dealing with a very different group of people. In Amsterdam, Gradis needed Dutch commissioners who would sell his wine at the best possible price, access to Aquitaine winegrowers whose wine would be of good quality, and the monetary means to extend generous credit terms on both sides. Obviously, with each interlocutor the strategies, incentives, and even vocabulary used would be different. Hence the crucial role of the accounts. Each reflected a privileged relationship, a building block to be used in organizing a profitable access to a certain market. Each was to be treated within the specific context of the relationship for which it had been created. Mere figures were only part of a larger equation, other parts of which were simply not quantifiable. Each debt, however, needs to be treated on its own terms. Perrens had been loaned 80,000 Livres by Gradis to buy large amounts of flour, lard, salt and brine, which were then delivered to the king, and the entire sum was at once transferred to the king's account. Perrens was merely Gradis's agent in building up Canada supplies, and the large sums loaned were actually loaned to the king. The account from Marchand Fils was wholly different, since he was a partner with Gradis in the outfitting and lading of the ship Le Sagittaire, and the 23,000 Livres he had received were two IOUs from Gradis acknowledging that Marchand Fils, who was the main outfitter, had paid that much in excess and in Gradis's stead, with the latter eventually refunding his partner's loan. In Bigot's case, money belonging to the intendant du Canada was to be deposited in Gradis's account at Chabert & Banquet, his Parisian bankers, but Bigot had already used it up by drawing on the Parisians, and Gradis was simply acknowledging that Bigot was not a creditor anymore, contrary to what had been assumed at first in his accounts. Of course, in dealing with an intendant, no sensible merchant would have dreamt of pointing out that by drawing on funds which had not yet been deposited, Bigot was in effect borrowing from Gradis, and for free. As for poor Madame La Roche, widow of a business partner of Gradis, she was presumably trying to settle her deceased husband's affairs. She claimed 842 Livres from Gradis, who did not quite agree with her statement of affairs, but who decided to give the sum to her nonetheless, crediting a doubtful debts account ('Parties en suspend') in case the matter were eventually settled, which would probably never be the case.
Actually, the figures could take on different meanings in context, a point which is made very clear if we try to compare the story lines of different accounts. By the summer of 1755, one Lyon merchant, Perrens, owed almost 80,000 Livres tournois to Gradis. Another merchant, Marchand fils, was debtor for 25,000 Livres in the same summer; Bigot, the intendant of Canada, was found to owe 43,000 Livres; and Veuve La Roche, a widow from Girac, owed 842 Livres.
Archives Nationales Paris (CARAN), Fonds Gradis, 181 AQ 7* Journal, June 1, 1755 to October 26, 1759. According to the Ordonnance of 1673, the registers had to be certified, and Gradis obtained Agard's certification on May 13, 1755. The list of the consuls is available online thanks to AD Gironde, at Inventaire de la série C. Archives Civiles: Tome 3, articles C 4250 à C439.
See John R. Edwards, A History of Financial Accounting (London, 1989). For the idea that large-scale, multinational operation brought about a new momentum for innovation as early
181 AQ 7*, ff. 8 verso,[START_REF] Actually | one of the transactions recorded in this account included a set of 'Dens d'éleffans,' which means that account titles were not[END_REF], for freight costs one Leris should have paid for two bales of cotton and a barrel of sugar 'qu'il a reçu l'année 1754,' thus at least 8 months earlier. The entry clearly implies that the non-payment comes from an oversight, and there is no other mention of the corresponding account for the whole year 1755.
See supra n. 16.
The 'Bien de Tallance' account covered Gradis's own wine-producing property, with a corresponding account called 'Vins de Tallance' probably identifying the returns of this product. This listing proves that Gradis was indeed cost conscious as a producer, and that his choice not to record his trading costs consistently was not due to ignorance. Costs were worth recording as a producer, because they represented a stable quantity, with direct and easily measurable consequences on profits; costs of specific ventures or relationships, however, varied widely, both quantitatively and in their relationship to profits.
In the end, there were few elements Gradis thought worth recording separately in real or nominal accounts, besides the few goods he traded more particularly, already mentioned, and his wine-producing venture in Talance. A general Profit and Loss account received indiscriminately all profits and all losses from all personal and venture-based accounts, in such a way as to make strategic calculations almost impossible. Cash was listed separately, though as we have seen some commercial paper found its way, for reasons unclear, into the 'Cash' box. 26 Commercial paper was recorded in the classic Bills payable and Bills receivable accounts, but some of it was included separately in a 'Lettres à négocier' account, of which we know next to nothing; it may have concerned dubious paper which Gradis had identified as such, and was trying to unload. The same can be said of 'Parties en suspend,' which was probably made up of clearly desperate debts. Two accounts, 'Contrats de cession' and 'Contrats d'obligation,' recorded formal purchases and sales materialized by notarized agreements; again, the shape of the relationship created by a given means of payment turned out to be more important than the kind of activity or goods concerned. Only one account could have been said to identify a specific activity and provide a basis to assess its returns, the 'Grosses avantures' account, which listed bottomry loans Gradis had consented to, except that we find another account named 'Grosses avantures données a Cadis par la Voye de Joseph Masson & C.e.' In other words, bottomry loans were treated somewhat like commission
accounts.
*
What does Gradis's accounting tell us about his Caribbean operations, and generally about the world he operated in? First, it was a world dominated by interpersonal relationships, but not in the classic Gemeinschaft sense. Making a profit was still the ultimate goal: merchant relationships cannot be reduced to a form of moral economy. The best descriptive tool would be that of the cartel: a group of people bound together by a common economic goal of domination and profit, but among whom solidarity is both the key to success and a fragile construction at best. In some ways, each one of Gradis's accounts was an attempt at cartelization, at building a privileged, protected market access which would bring in profit. In this universe, there was no point in trying to compare two ventures, since each had its own defining characters, from the group of people involved to the institutional environment to use and the physical means of access to control. | 52,780 | [
"3926"
] | [
"176",
"110860"
] |
01408043 | en | [
"info"
] | 2024/03/04 23:41:48 | 2016 | https://hal.science/hal-01408043/file/CC-pn16.pdf | Thomas Chatain
email: chatain@lsv.ens-cachan.fr
Josep Carmona
email: jcarmona@cs.upc.edu
Anti-Alignments in Conformance Checking -The Dark Side of Process Models
Conformance checking techniques assess the suitability of a process model in representing an underlying process, observed through a collection of real executions. These techniques suffer from the well-known state space explosion problem, hence handling process models exhibiting large or even infinite state spaces remains a challenge. One important metric in conformance checking is to assess the precision of the model with respect to the observed executions, i.e., to characterize the ability of the model to produce behavior unrelated to the one observed. By avoiding the computation of the full state space of a model, current techniques only provide estimations of the precision metric, which in some situations tend to be very optimistic, thus hiding real problems a process model may have. In this paper we present the notion of anti-alignment as a concept to help unveil traces in the model that may deviate significantly from the observed behavior. Using anti-alignments, current estimations can be improved, e.g., in precision checking. We show how to express the problem of finding anti-alignments as the satisfiability of a Boolean formula, and provide a tool which can deal with large models efficiently.
Introduction
The use of process models has increased in the last decade due to the advent of the process mining field. Process mining techniques aim at discovering, analyzing and enhancing formal representations of the real processes executed in any digital environment [START_REF] Van Der Aalst | Process Mining -Discovery, Conformance and Enhancement of Business Processes[END_REF]. These processes can only be observed by the footprints of their executions, stored in form of event logs. An event log is a collection of traces and is the input of process mining techniques. The derivation of an accurate formalization of an underlying process opens the door to the continuous improvement and analysis of the processes within an information system.
Among the important challenges in process mining, conformance checking is a crucial one: to assess the quality of a model (automatically discovered or manually designed) in describing the observed behavior, i.e., the event log. Conformance checking techniques aim at characterizing four quality dimensions: fitness, precision, generalization and simplicity [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF]. For the first three dimensions, the alignment between the process model and the event log is of paramount importance, since it allows relating modeled and observed behavior [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF].
Given a process model and a trace in the event log, an alignment provides the run in the model which mostly resembles the observed trace. When alignments are computed, the quality dimensions can be defined on top [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF][START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF]. In a way, alignments are optimistic: although observed behavior may deviate significantly from modeled behavior, it is always assumed that the least deviations are the best explanation (from the model's perspective) for the observed behavior.
In this paper we present a somewhat symmetric notion to alignments, denoted as anti-alignments. Given a process model and a log, an anti-alignment is a run of the model that mostly deviates from any of the traces observed in the log. The motivation for anti-alignments is precisely to compensate the optimistic view provided by alignments, so that the model is queried to return highly deviating behavior that has not been seen in the log. In contexts where the process model should adhere to a certain behavior and not leave much exotic possibilities (e.g., banking, healthcare), the absence of highly deviating anti-alignments may be a desired property to have in the process model.
We cast the problem of computing anti-alignments as the satisfiability of a Boolean formula, and provide high-level techniques which can for instance compute the most deviating anti-alignment for a certain run length, or the shortest anti-alignment for a given number of deviations.
Using anti-alignments one cannot only catch deviating behavior, but also use it to improve some of the current quality metrics considered in conformance checking. For instance, a highly-deviating anti-alignment may be a sign of a loss in precision, which can be missed by current metrics as they bound considerably the exploration of model state space for the sake of efficiency [START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF].
Anti-alignments are related to the completeness of the log; a log is complete if it contains all the behavior of the underlying process [START_REF] Van Der Aalst | Process Mining -Discovery, Conformance and Enhancement of Business Processes[END_REF]. For incomplete logs, the alternatives for computing anti-alignments grow, making it difficult to tell the difference between behavior not observed but meant to be part of the process, and behavior not observed which is not meant to be part of the process. Since there already exist metrics to evaluate the completeness of an event log (e.g., [START_REF] Yang | Estimating completeness of event logs[END_REF]), we assume event logs have a high level of completeness before they are used for computing anti-alignments.
To summarize, the contributions of the paper are now enumerated.
-We propose the notion of anti-alignment as an effective way to explore process deviations with respect to observed behavior. -We present an encoding of the problem of computing anti-alignments into SAT, and have implemented it in the tool DarkSider. -We show how anti-alignments can be used to provide an estimation of precision that uses a different perspective from the current ones.
The remainder of the paper is organized as follows: in the next section, a simple example is used to emphasize the importance of computing anti-alignments.
Then in Section 3 the basic theory needed for the understanding of the paper is introduced. Section 4 provides the formal definition of anti-alignments, whilst Section 5 formalizes the encoding into SAT of the problem of computing anti-alignments and Section 6 presents some adaptions of the notion of antialignments. In Section 7, we define a new metric, based on anti-alignments, for estimating precision of process models. Experiments are reported in Section 8, and related work in Section 9. Section 10 concludes the paper and gives some hints for future research directions.
A Motivating Example
Let us use the example shown in Figure 1 for illustrating the notion of antialignment. The example was originally presented in [START_REF] Vanden Broucke | Event-based real-time decomposed conformance analysis[END_REF]. The modeled process describes a realistic transaction process within a banking context. The process contains all sort of monetary checks, authority notifications, and logging mechanisms. The process is structured as follows (Figure 1 (top) shows a high-level overview of the complete process): it is initiated when a new transaction is requested, opening a new instance in the system and registering all the components involved. The second step is to run a check on the person (or entity) origin of the monetary transaction. Then, the actual payment is processed differently, depending of the payment modality chosen by the sender (cash, cheque and payment). Later, the receiver is checked and the money is transferred. Finally, the process ends registering the information, notifying it to the required actors and authorities, and emitting the corresponding receipt. The detailed model, formalized as a Petri net, is described in the bottom part of the figure.
Assume that a log which contains different transactions covering all the possibilities with respect of the model in Figure 1 is given. For this pair of model and log, no highly deviating anti-alignment will be obtained since the model is a precise representation of the observed behavior. Now assume that we modify a bit the model, adding a loop around the alternative stages for the payment. Intuitively, this (malicious) modification in the process model may allow to pay several times although only one transfer will be done. The modified high-level overview is shown in Figure 2. Current metrics for precision (e.g., [START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF]) will not consider this modification as a severe one: the precision of the model with respect to the log will be very similar before or after the modification.
Clearly, this modification in the process models comes with a new highly deviating anti-alignment denoting a run of the model that contains more than one iteration of the payment. This may be considered as a certification of the existence of a problematic behavior allowed by the model.
Preliminaries
Definition 1 ((Labeled) Petri net). A (labeled) Petri net [START_REF] Murata | Petri nets: Properties, analysis and applications[END_REF] is a tuple N = ⟨P, T, F, m_0, Σ, λ⟩, where P is the set of places, T is the set of transitions (with P ∩ T = ∅), F : (P × T) ∪ (T × P) → {0, 1} is the flow relation, m_0 is the initial marking, Σ is an alphabet of actions, and λ : T → Σ labels every transition by an action.
A marking is an assignment of a non-negative integer to each place. If k is assigned to place p by marking m (denoted m(p) = k), we say that p is marked with k tokens. Given a node x ∈ P ∪ T, we define its pre-set •x := {y ∈ P ∪ T | (y, x) ∈ F} and its post-set x• := {y ∈ P ∪ T | (x, y) ∈ F}. A transition t is enabled in a marking m when all places in •t are marked. When a transition t is enabled, it can fire by removing a token from each place in •t and putting a token in each place in t•.

Quality Dimensions. Process mining techniques aim at extracting from a log L a process model N (e.g., a Petri net) with the goal to elicit the process underlying a system S. By relating the behaviors of L, L(N) and S, particular concepts can be defined [START_REF] Buijs | Quality dimensions in process discovery: The importance of fitness, precision, generalization and simplicity[END_REF]. A log is incomplete if S\L ≠ ∅. A model N fits log L if L ⊆ L(N). A model is precise in describing a log L if L(N)\L is small. A model N represents a generalization of log L with respect to system S if some behavior in S\L exists in L(N). Finally, a model N is simple when it has the minimal complexity in representing L(N), i.e., the well-known Occam's razor principle.
Anti-Alignments
The idea of anti-alignments is to seek, in the language of a model N, runs which differ a lot from all the observed traces. For this we first need a definition of distance between two traces (typically a model trace, i.e. a run of the model, and an observed log trace). Relevant definitions about alignments can be found in [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]. Let us start here with a simple definition; we will discuss other definitions in Section 6. Definition 4 (stated further below) handles traces of different length by truncation or padding; notice that its two cases coincide when p = n and give σ|1...n := σ.
Definition 3 (Hamming distance dist). For two traces γ = γ_1 . . . γ_n and σ = σ_1 . . . σ_n, of same length n, define dist(γ, σ) := |{i ∈ {1 . . . n} | γ_i ≠ σ_i}|.
In the sequel, we write dist(γ, σ) for dist(γ, σ|1...|γ|). Notice that, in this definition, only σ is truncated or padded. In particular this means that γ is compared to the prefixes of the observed traces. The idea is that a run γ which is close to a prefix of an observed trace is good, while a run γ which is much longer than an observed trace σ cannot be considered close to σ, even if its prefix γ|1...|σ| is close to σ. With this distance at hand, an (n, m)-anti-alignment of N w.r.t. L (Definition 5) is a run γ ∈ L(N) of length n such that dist(γ, σ) ≥ m for every σ ∈ L.
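As an illustration, the following minimal Python sketch (ours, not the authors' code; the function and symbol names are assumptions of the example) computes this distance, with the log trace truncated or padded with the special symbol w to the length of the model run, as in Definitions 3 and 4.

W = "w"  # padding symbol; assumed not to be a real activity label

def project(sigma, n, pad=W):
    # sigma truncated or padded to length n (Definition 4)
    return list(sigma[:n]) + [pad] * max(0, n - len(sigma))

def dist(gamma, sigma):
    # dist(gamma, sigma|1..|gamma|): number of positions where the traces differ
    s = project(sigma, len(gamma))
    return sum(1 for g, x in zip(gamma, s) if g != x)

# Example 1 below: gamma = <a,b,c,f,i,k> vs. sigma = <a,b,c,f,g,h,k> gives 2 mismatches
assert dist(list("abcfik"), list("abcfghk")) == 2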
Example 1. For instance, for the Petri net shown in Figure 3 and the log L = {⟨a, b, c, f, g, h, k⟩, ⟨a, c, b, f, g, h, k⟩, ⟨a, c, b, f, h, g, k⟩, ⟨a, b, c, f, h, g, k⟩, ⟨a, e, f, i, k⟩, ⟨a, d, f, g, h, k⟩, ⟨a, e, f, h, g, k⟩}, the run ⟨a, b, c, f, i, k⟩ is a (6, 2)-anti-alignment. Notice that for m ≥ 3 there are no anti-alignments for this example.
Lemma 1. If the model has no deadlock, then for every n ∈ N, for every m ∈ N, if there exists a (n, m)-anti-alignment γ, then there exists a (n + 1, m)-antialignment. Moreover, for n ≥ max σ∈L |σ|, there exists a (n + 1, m + 1)-antialignment.
Proof. It suffices to fire one transition t enabled in the marking reached after γ;
γ • t is a (n + 1, m)-anti-alignment since for every σ ∈ L, dist(γ • t, σ) ≥ dist(γ, σ). When n ≥ max_{σ∈L} |σ|, we have more: dist(γ • t, σ) ≥ 1 + dist(γ, σ) (because t is compared to the padding symbol w), which makes γ • t a (n + 1, m + 1)-anti-alignment.
Corollary 1. If the model has no deadlock, (and assuming that the log L is a finite multiset of finite traces), then for every m ∈ N, there is a least n for which a (n, m)-anti-alignment exists. This n is less than or equal to m + max σ∈L |σ|.
Lemma 2. The problem of finding a (n, m)-anti-alignment is NP-complete.
(Since n and m are typically smaller than the length of the traces in the log, we assume that they are represented in unary.)
Proof. The problem is clearly in NP: checking that a run γ is a (n, m)-antialignment for a net N and a log L takes polynomial time.
For NP-hardness, we propose a reduction from the problem of reachability of a marking M in a 1-safe acyclic Petri net N, known to be NP-complete [START_REF] Stewart | Reachability in some classes of acyclic Petri nets[END_REF][START_REF] Cheng | Complexity results for 1-safe nets[END_REF]. The reduction is as follows: equip the 1-safe acyclic Petri net N with complementary places: a place p̄ for each p ∈ P, with p̄ initially marked iff p is not, p̄ ∈ •t iff p ∈ t• \ •t, and p̄ ∈ t• iff p ∈ •t \ t•. Now M is reachable in the original net iff M ∪ {p̄ | p ∈ P \ M} is reachable in the net equipped with complementary places.
Computation of Anti-Alignments
In order to compute a (n, m)-anti-alignment of a net N w.r.t. a log L, our tool DarkSider constructs a SAT formula Φ^n_m(N, L) and calls a SAT solver (currently minisat [START_REF] Eén | An extensible sat-solver[END_REF]) to solve it. Every solution to the formula is interpreted as a run of N of length n which has at least m mismatches with every trace in L.
The formula Φ n m (N, L) characterizes a (n, m)-anti-alignment γ:
γ = λ(t 1 ) . . . λ(t n ) ∈ L(N ), and for every σ ∈ L, dist(γ, σ) ≥ m.
Coding Φ n m (N, L) Using Boolean Variables
The formula Φ n m (N, L) is coded using the following Boolean variables:
- τ_{i,t}, for i = 1 . . . n and t ∈ T, means that transition t_i = t (recall that w is the special symbol used to pad the log traces, see Definition 4);
- m_{i,p}, for i = 0 . . . n and p ∈ P, means that place p is marked in marking M_i (recall that we consider only safe nets, therefore the m_{i,p} are Boolean variables);
- δ_{i,σ,k}, for i = 1 . . . n, σ ∈ L and k = 1, . . . , m, means that the k-th mismatch with the observed trace σ is at position i.
The total number of variables is n × (|T| + |P| + |L| × m).
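The sketch below (ours, not DarkSider's code; the class and function names are assumptions) shows one way to map these Boolean variables to the positive integers required by the DIMACS input format of SAT solvers.

class VarMap:
    # Assigns a fresh positive integer to every distinct key.
    def __init__(self):
        self._ids = {}
    def var(self, key):
        return self._ids.setdefault(key, len(self._ids) + 1)

def allocate_variables(vm, n, transitions, places, log, m):
    tau   = {(i, t): vm.var(("tau", i, t)) for i in range(1, n + 1) for t in transitions}
    mark  = {(i, p): vm.var(("m", i, p)) for i in range(0, n + 1) for p in places}
    delta = {(i, j, k): vm.var(("delta", i, j, k))
             for i in range(1, n + 1) for j in range(len(log)) for k in range(1, m + 1)}
    return tau, mark, delta   # n*|T| + (n+1)*|P| + n*|L|*m variables in total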
Let us decompose the formula Φ^n_m(N, L).

- The fact that γ = λ(t_1) . . . λ(t_n) ∈ L(N) is coded by the conjunction of the following formulas:
  • Initial marking:
      ⋀_{p ∈ M_0} m_{0,p}  ∧  ⋀_{p ∈ P\M_0} ¬m_{0,p}
  • One and only one t_i for each i:
      ⋀_{i=1..n} ⋁_{t ∈ T} (τ_{i,t} ∧ ⋀_{t' ∈ T, t' ≠ t} ¬τ_{i,t'})
  • The transitions are enabled when they fire:
      ⋀_{i=1..n} ⋀_{t ∈ T} (τ_{i,t} ⟹ ⋀_{p ∈ •t} m_{i-1,p})
  • Token game (for safe Petri nets):
      ⋀_{i=1..n} ⋀_{t ∈ T} ⋀_{p ∈ t•} (τ_{i,t} ⟹ m_{i,p})
      ⋀_{i=1..n} ⋀_{t ∈ T} ⋀_{p ∈ •t \ t•} (τ_{i,t} ⟹ ¬m_{i,p})
      ⋀_{i=1..n} ⋀_{t ∈ T} ⋀_{p ∈ P, p ∉ •t, p ∉ t•} (τ_{i,t} ⟹ (m_{i,p} ⟺ m_{i-1,p}))
- Now, the constraint that γ deviates from the observed traces (for every σ ∈ L, dist(γ, σ) ≥ m) is coded as:
      ⋀_{σ ∈ L} ⋀_{k=1..m} ⋁_{i=1..n} δ_{i,σ,k}
  with the δ_{i,σ,k} correctly affected w.r.t. λ(t_i) and σ_i:
      ⋀_{σ ∈ L} ⋀_{i=1..n} ( ⋁_{k=1..m} δ_{i,σ,k} ⟺ ⋁_{t ∈ T, λ(t) ≠ σ_i} τ_{i,t} )
  and with the requirement that, for k ≠ k', the k-th and k'-th mismatches correspond to different i's (i.e. a given mismatch cannot serve twice):
      ⋀_{σ ∈ L} ⋀_{i=1..n} ⋀_{k=1..m-1} ⋀_{k'=k+1..m} ¬(δ_{i,σ,k} ∧ δ_{i,σ,k'})
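To make the encoding concrete, here is a sketch (again ours, with an assumed set/dict representation of the net, not the DarkSider implementation) that emits some of these constraint families as DIMACS-style clauses, i.e. lists of non-zero integers where a negative integer denotes a negated variable; the bi-implication linking the δ variables to the τ variables is handled analogously via the Tseytin transformation discussed next.

def structural_clauses(n, places, transitions, pre, post, m0, tau, mark):
    # places, transitions: sets; pre[t], post[t]: sets of places; m0: set of
    # initially marked places; tau, mark: the variable maps sketched above.
    clauses = []
    # Initial marking (unit clauses).
    clauses += [[mark[(0, p)]] if p in m0 else [-mark[(0, p)]] for p in places]
    for i in range(1, n + 1):
        # One and only one transition fires at step i.
        clauses.append([tau[(i, t)] for t in transitions])
        clauses += [[-tau[(i, t)], -tau[(i, u)]]
                    for t in transitions for u in transitions if str(t) < str(u)]
        for t in transitions:
            # Enabledness: tau_{i,t} implies m_{i-1,p} for every p in pre(t).
            clauses += [[-tau[(i, t)], mark[(i - 1, p)]] for p in pre[t]]
            # Token game for safe nets.
            clauses += [[-tau[(i, t)], mark[(i, p)]] for p in post[t]]
            clauses += [[-tau[(i, t)], -mark[(i, p)]] for p in pre[t] - post[t]]
            for p in places - pre[t] - post[t]:   # untouched places keep their value
                clauses += [[-tau[(i, t)], -mark[(i - 1, p)], mark[(i, p)]],
                            [-tau[(i, t)], mark[(i - 1, p)], -mark[(i, p)]]]
    return clauses

def deviation_clauses(n, m, log, delta):
    clauses = []
    for j, sigma in enumerate(log):
        # At least m mismatch witnesses per observed trace.
        clauses += [[delta[(i, j, k)] for i in range(1, n + 1)] for k in range(1, m + 1)]
        # A given position cannot witness two different mismatches.
        clauses += [[-delta[(i, j, k)], -delta[(i, j, kk)]]
                    for i in range(1, n + 1)
                    for k in range(1, m) for kk in range(k + 1, m + 1)]
    return clauses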
Size of the Formula
In the end, the first part of the formula (γ = λ(t_1) . . . λ(t_n) ∈ L(N)) is linear in n, |T| and |N|. The total size for the coding of the formula Φ^n_m(N, L) is O(n × |T| × |N| + m² × |L|).
Solving the Formula in Practice
In practice, our tool DarkSider builds the coding of the formula Φ n m (N, L) using the Boolean variables τ i,t , m i,p and δ i,σ,k .
Then we need to transform the formula in conjunctive normal form (CNF) in order to pass it to the SAT solver minisat. We use Tseytin's transformation [START_REF] Tseytin | On the complexity of derivation in propositional calculus[END_REF] to get a formula in conjunctive normal form (CNF) whose size is linear in the size of the original formula. The idea of this transformation is to replace recursively the disjunctions φ 1 ∨ • • • ∨ φ n (where the φ i are not atoms) by the following equivalent formula:
∃x_1, . . . , x_n  (x_1 ∨ · · · ∨ x_n) ∧ (x_1 ⟹ φ_1) ∧ . . . ∧ (x_n ⟹ φ_n)
where x 1 , . . . , x n are fresh variables.
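A sketch of this step, for the special case that actually arises in Φ^n_m(N, L) (a disjunction whose disjuncts are conjunctions of literals, e.g. the 'one and only one t_i' formula); the helper name and interface are assumptions of the example.

def tseitin_or_of_cubes(cubes, next_var):
    # cubes: list of lists of DIMACS literals, each list read as a conjunction.
    # Returns (clauses, new_next_var); the clauses are equisatisfiable with the
    # disjunction of the cubes, using one fresh selector variable per cube.
    selectors = list(range(next_var, next_var + len(cubes)))
    clauses = [selectors[:]]                      # x_1 v ... v x_n
    for x, cube in zip(selectors, cubes):
        clauses += [[-x, lit] for lit in cube]    # x_i => every literal of phi_i
    return clauses, next_var + len(cubes)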
In the end, the result of the call to minisat tells us if there exists a run γ = λ(t 1 ) . . . λ(t n ) ∈ L(N ) which has at least m misalignments with every observed trace σ ∈ L. If a solution is found, we extract the run γ using the values assigned by minisat to the Boolean variables τ i,t .
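The decoding step can be sketched as follows (assumed interfaces: model is the solver's satisfying assignment as a list of signed DIMACS integers, tau the variable map of the earlier sketch, and labels the labelling function λ).

def decode_run(model, tau, labels, n):
    true_vars = {v for v in model if v > 0}
    run = []
    for i in range(1, n + 1):
        fired = [t for (j, t), var in tau.items() if j == i and var in true_vars]
        assert len(fired) == 1      # guaranteed by the one-and-only-one clauses
        run.append(labels[fired[0]])
    return run                      # the anti-alignment lambda(t_1)...lambda(t_n)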
Finding the Largest m for n
It follows directly from Definition 5 that, for a model N and a log L, every (n, m + 1)-anti-alignment is also a (n, m)-anti-alignment.
Notice also that, by Definition 5, there cannot exist any (n, n + 1)-antialignment and that, assuming that the model N has a run γ of length n, this run is a (n, 0)-anti-alignment (otherwise there is no (n, m)-anti-alignment for any m).
Under the latter assumption, we are interested in finding, for a fixed n, the largest m for which there exists a (n, m)-anti-alignment, i.e. the run of length n of the model which deviates most from all the observed traces. Our tool DarkSider computes it by dichotomy over the search interval for m, namely [0, n].
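A sketch of this dichotomy (is_sat(n, m) stands for a call to the SAT solver on the encoded formula Φ^n_m(N, L); the oracle is an assumption of the sketch, not DarkSider's interface):

def largest_m(n, is_sat):
    if not is_sat(n, 0):              # the model has no run of length n
        return None
    lo, hi = 0, n                     # no (n, n+1)-anti-alignment can exist
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_sat(n, mid):
            lo = mid                  # invariant: is_sat(n, lo) holds
        else:
            hi = mid - 1              # invariant: is_sat(n, hi + 1) fails
    return lo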
Finding the Least n for m
If the model N has no deadlock, then by Corollary 1, for every m ∈ N, there is a least n for which a (n, m)-anti-alignment exists.
Then it is relevant to find, for a fixed m, the least n for which there exists a (n, m)-anti-alignment, i.e. (the length of) the shortest run of N which has at least m mismatches with any observed trace.
Corollary 1 tells us that the least n belongs to the interval [m, m + max_{σ∈L} |σ|]. Then it can be found simply by dichotomy over this interval. However, in practice, when max_{σ∈L} |σ| is much larger than m, the dichotomy would require checking the satisfiability of Φ^n_m(N, L) for large values of n, which is costly.
Therefore our tool DarkSider proceeds as follows: it checks the satisfiability of the formulas Φ^m_m(N, L), then Φ^{2m}_m(N, L), then Φ^{4m}_m(N, L), . . . until it finds a p such that Φ^{2^p m}_m(N, L) is satisfiable. Then it starts the dichotomy over the interval [m, 2^p m].
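The corresponding search can be sketched as follows (same assumed is_sat oracle as above); the exponential phase avoids probing the large lengths that a direct dichotomy over [m, m + max_{σ∈L} |σ|] would test.

def least_n(m, is_sat):
    assert m >= 1
    hi = m                            # no (n, m)-anti-alignment exists for n < m
    while not is_sat(hi, m):          # exponential phase: m, 2m, 4m, ...
        hi *= 2
    if hi == m:
        return m
    lo = hi // 2 + 1                  # unsat at hi // 2, sat at hi
    while lo < hi:                    # dichotomy; existence is monotone in n (Lemma 1)
        mid = (lo + hi) // 2
        if is_sat(mid, m):
            hi = mid
        else:
            lo = mid + 1
    return lo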
6 Relaxations of Anti-Alignments
Limiting the Use of Loops
A delicate issue with anti-alignments is to deal with loops in the model N: inserting loops in a model is a relevant way of coding the fact that similar traces were observed with varying numbers of iterations of a pattern. Typically, if the log contains traces ac, abc, abbc, . . . , abbbbbbbc, it is fair to propose a model whose language is ab*c.
However, a model with loops necessarily generates (n, m)-anti-alignments even for large m: it suffices to take the loops sufficiently many more times than what was observed in the log. Intuitively, these anti-alignments are artificial and one does not want to blame the model for generating them, i.e., the model correctly generalizes the behavior observed in the event log. Instead, it is interesting to give priority to the anti-alignments which do not use the loops too often.
Our technique can easily be adapted so that it limits the use of loops when finding anti-alignments. The simplest idea is to add a new input place (call it bound_t) to every transition t; the number of tokens present in bound_t in the initial marking determines how many times t is allowed to fire. The drawback of this trick is that the model does not remain 1-safe, and our tool currently deals only with 1-safe nets.
An alternative is to duplicate every transition t into copies t', t'', . . . (all labeled λ(t)) and to allow only one firing per copy (using input places bound_t', bound_t'', . . . like before, but now we need only one token per place); a sketch of this construction is given below. Finally, another way to limit the use of loops is to introduce appropriate constraints directly in the formula Φ^n_m(N, L).
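A sketch of the second construction (the set/dict-based net encoding and the naming scheme are assumptions of the example): every transition gets `bound` equally-labeled copies, each with a private, initially marked input place, so that the net stays 1-safe while each copy can fire at most once.

def limit_firings(places, transitions, pre, post, m0, labels, bound):
    # pre[t], post[t]: sets of places; m0: set of initially marked places
    new_places, new_m0 = set(places), set(m0)
    new_transitions, new_pre, new_post, new_labels = set(), {}, {}, {}
    for t in transitions:
        for j in range(bound):
            tj, pj = f"{t}#{j}", f"bound_{t}#{j}"
            new_places.add(pj)
            new_m0.add(pj)                      # one token = one allowed firing
            new_transitions.add(tj)
            new_pre[tj] = set(pre[t]) | {pj}
            new_post[tj] = set(post[t])
            new_labels[tj] = labels[t]
    return new_places, new_transitions, new_pre, new_post, new_m0, new_labels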
Improving the Notion of Distance
A limitation of our technique as presented above, concerning the application to process mining, is that it relies on a notion of distance between γ and σ which is too rigid: indeed, every symbol γ_i is compared only to the exact corresponding symbol σ_i. This puts for instance the word ababababab at distance 10 from bababababa. In process mining, other distances are usually preferred (see for instance [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF]), typically Levenshtein's distance (or edit distance), which counts how many deletions and insertions of symbols are needed to obtain σ starting from γ.
We propose here an intermediate definition where every γ i is compared to all the σ j for j sufficiently close to i.
Definition 6 (dist_d). Let d ∈ N. For two traces γ = γ_1 . . . γ_n and σ = σ_1 . . . σ_n, of same length n, we define dist_d(γ, σ) := |{i ∈ {1 . . . n} | ∀j, i − d ≤ j ≤ i + d : γ_i ≠ σ_j}|.
Notice that dist_0 corresponds to the Hamming distance.
This definition is sufficiently permissive for many applications, and we can easily adapt our technique to it, simply by adapting the constraints relating the δ i,σ,k with the λ(t i ) in the definition of Φ n m (N, L).
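For reference, a sketch of dist_d in Python (positions of σ outside 1 . . . n are simply ignored, which is our reading of the definition; the padding convention is the same assumption as in the earlier sketch).

def dist_d(gamma, sigma, d, pad="w"):
    n = len(gamma)
    s = list(sigma[:n]) + [pad] * max(0, n - len(sigma))
    mismatches = 0
    for i in range(n):                            # 0-based indices
        window = range(max(0, i - d), min(n, i + d + 1))
        if all(gamma[i] != s[j] for j in window):
            mismatches += 1
    return mismatches

# d = 0 is the Hamming distance: ababababab vs. bababababa gives 10, but 0 for d = 1
assert dist_d(list("ababababab"), list("bababababa"), 0) == 10
assert dist_d(list("ababababab"), list("bababababa"), 1) == 0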
Anti-Alignments Between Two Nets
Our notion of anti-alignments can be generalized as follows:
Definition 7. Given n, m ∈ N and two labeled Petri nets N and N' sharing the same alphabet of labels Σ, we call (n, m)-anti-alignment of N w.r.t. N' a run of N of length n which is at least at distance m from every run of N'.
Our problem of anti-alignment for a model N and a log L corresponds precisely to the problem of anti-alignment of N w.r.t. the net N_L representing all the traces in L as disjoint sequences, all starting at a common initial place and ending with a loop labeled w, like in Figure 4.
We show below that the problem of finding anti-alignments between two nets can be reduced to solving a 2QBF formula, i.e. a Boolean formula with an alternation of quantifiers, of the form ∃ . . . ∀ . . . φ.
Solving 2QBF formulas is intrinsically more complex than solving SAT formulas (Σ^p_2-complete [START_REF] Kleine Büning | Theory of quantified boolean formulas[END_REF] instead of NP-complete), and 2QBF solvers are usually far from being as efficient as SAT solvers.
Anyway, the notion of anti-alignments between two nets allows us to modify the net N_L in order to code a better notion of distance, for instance by inserting optional wait loops at desired places in the logs. Possibly also, one can replace N_L by another net which represents a large set of runs very concisely.
2QBF solvers are usually far from being as efficient as SAT solvers. As a matter of fact, we first did a few experiments with the 2QBF encoding, but for efficiency reasons we moved to the SAT encoding. Anyway we plan to retry the 2QBF encoding in a near future, with a more efficient 2QBF solver and some optimizations, in order to benefit from the flexibility offered by the generalization of the anti-alignment problem.
2QBF Coding. Finding a (n, m)-anti-alignment of a net N w.r.t. a net N' corresponds to finding a run γ ∈ L(N) such that |γ| = n and for every σ ∈ L(N'), dist(γ, σ) ≥ m. This is encoded by the following 2QBF formula:
∃ (τ_{i,t})_{i=1..n, t∈T}, (m_{i,p})_{i=0..n, p∈P}  ∀ (τ'_{i,t'})_{i=1..n, t'∈T'}, (m'_{i,p'})_{i=0..n, p'∈P'}, (δ_{i,k})_{i=1..n, k=1..m} :
    λ(t_1) . . . λ(t_n) ∈ L(N)  ∧  ( (λ'(t'_1) . . . λ'(t'_n) ∈ L(N') ∧ Δ) ⟹ ⋀_{k=1..m} ⋁_{i=1..n} δ_{i,k} )
where:
- the variables τ_{i,t} and m_{i,p} encode the execution of N, like for the coding into SAT (see Section 5.1);
- the variables τ'_{i,t'} and m'_{i,p'} represent the execution of N';
- δ_{i,k} means that the k-th mismatch between the two executions is at position i;
- the constraints that λ(t_1) . . . λ(t_n) ∈ L(N) and λ'(t'_1) . . . λ'(t'_n) ∈ L(N') are coded like in Section 5;
- Δ is a formula which says that the variables δ_{i,k} are correctly affected w.r.t. the values of the τ_{i,t} and τ'_{i,t'}. Δ is the conjunction of:
• there is a mismatch at the i-th position iff λ(t_i) ≠ λ'(t'_i):
    ⋀_{i=1..n} ( (⋁_{k=1..m} δ_{i,k}) ⟺ ⋁_{t∈T, t'∈T', λ(t)≠λ'(t')} (τ_{i,t} ∧ τ'_{i,t'}) )
• a mismatch cannot serve twice:
    ⋀_{i=1..n} ⋀_{k=1..m-1} ⋀_{k'=k+1..m} ¬(δ_{i,k} ∧ δ_{i,k'})
7 Using Anti-Alignments to Estimate Precision
In this section we will provide two ways of using anti-alignments to estimate precision of process models. First, a simple metric will be presented that is based only on the information provided by anti-alignments. Second, a well-known metric for precision is introduced and it is shown how the two metrics can be combined to provide a better estimation for precision.
A New Metric for Estimating Precision
There are different ways of incorporating the information provided by anti-alignments that can help in providing a metric for precision. One possibility is to focus on the number of mismatches for a given maximal length n, i.e., find the anti-alignment with bounded length that maximizes the number of mismatches, using the search techniques introduced in the previous section. Formally, let n be the maximal length for a trace in the log, and let max_n(N, L) be the maximal number of mismatches for any anti-alignment of length n for model N and log L. In practice, the length n will be set to the maximal length for a trace in the log, i.e., only searching anti-alignments that are similar in length to the traces observed in the log. We can now define a simple estimation metric for precision:
a_n(N, L) = 1 − max_n(N, L) / n. Clearly, max_n(N, L) ∈ [0 . . . n], which implies a_n(N, L) ∈ [0 . . . 1].
For instance, let the model be the one in Figure 5 (top-left), and the log L = [σ_1, σ_2, σ_3, σ_4, σ_5] also shown in the figure. Since the maximal length n for L is 6, max_6(N, L) = 3, corresponding to the run ⟨a, c, b, i, b, i⟩. Hence, a_n = 1 − 3/6 = 0.5.
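The metric itself is then a one-liner on top of the anti-alignment search (a sketch; max_mismatches would come from a search such as largest_m above, and the names are ours):

def precision_an(n, max_mismatches):
    assert 0 <= max_mismatches <= n
    return 1.0 - max_mismatches / n

# Example above: n = 6 and a maximal (6, 3)-anti-alignment give a_n = 0.5
assert precision_an(6, 3) == 0.5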
Lemma 3 (Monotonicity of the Metric a n ). Observing a new trace which happens to be already a run of the model, can only increase the precision measure. Formally: for every N, L and for every σ ∈ L(N ), a n (N, L ∪ {σ}) ≥ a n (N, L).
Proof. Clearly, every (n, m)-anti-alignment for (N, L ∪ {σ}) is also a (n, m)anti-alignment for (N, L). Consequently max n (N, L ∪ {σ}) ≤ max n (N, L) and a n (N, L ∪ {σ}) ≥ a n (N, L).
The Metric a p
In [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF][START_REF] Adriansyah | Measuring precision of modeled behavior[END_REF] the metric align precision (a p ) was presented to estimate the precision a process model N (a Petri net) has in characterizing observed behavior, described by an event log L. Informally the computation of a p is as follows: for each trace σ from the event log, a run γ of the model which has minimal number of deviations with respect to σ is computed (denoted by γ ∈ Γ (N, σ)), by using the techniques from [START_REF] Adriansyah | Aligning observed and modeled behavior[END_REF] be the set of model traces optimally aligned with traces in the log. An automaton A Γ (N,L) can be constructed from this set, denoting the model's representation of the behavior observed in L. Figure 5 describes an example of this procedure. Notice that each state in the automaton has a number denoting the weight, directly related to the frequency of the corresponding prefix, e.g., in the automaton of Figure 5, w(ab) = 2 and w(acb) = 1. For each state s in A Γ (N,L) , let a v (s) be the set of available actions, i.e., possible direct successor activities according to the model, and e x (s) be the set of executed actions, i.e., activities really executed in the log. Note that, by construction e x (s) ⊆ a v (s), i.e., the set of executed actions of a given state is always a subset of all available actions according to the model. By comparing these two sets in each state the metric a p can be computed:
a_p(A_Γ(N,L)) = ( Σ_{s∈Q} ω(s) · |e_x(s)| ) / ( Σ_{s∈Q} ω(s) · |a_v(s)| )
where Q is the set of states in A Γ (N,L) . This metric evaluates to 0.780 for the automaton of Figure 5.
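A sketch of the computation (the triple-based encoding of the automaton states is an assumption of the example, not the representation used in the cited works):

def precision_ap(states):
    # states: iterable of (weight, executed_actions, available_actions),
    # with executed_actions a subset of available_actions for every state.
    num = sum(w * len(ex) for w, ex, av in states)
    den = sum(w * len(av) for w, ex, av in states)
    return num / den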
Drawbacks of the Metric a_p. A main drawback of metric a_p lies in the fact that it is "short-sighted", i.e., only one step ahead of the log behavior is considered in order to estimate the precision of a model. Graphically, this is illustrated in the automaton of Figure 5 by the red states being successors of white states.
A second drawback is the lack of monotonicity, a feature that metric a n has: observing a new trace which happens to be described by the model may unveil a model trace which has a large number of escaping arcs, thus lowering the precision value computed by a p .
For instance, imagine that in the example of Figure 5, the model has another long branch starting as a successor of place p 0 and allowing a large piece of behaviour. Imagine that this happens to represent a possible behaviour of the real system; simply, it has not been observed yet. This branch starting at p 0 generates a new escaping arc from the initial state of A Γ (N,L) , but the metric a p does not blame a lot for this: only one more escaping point. Now, when a trace σ corresponding to the new behaviour is observed (proving somehow that the model was right!): after this observation, the construction A Γ (N,L∪{σ}) changes dramatically because it integrates the new observed trace. In consequence, if the corresponding branch in the model enables other transitions, then the model is going to be blamed for many new escaping points while, before observing σ, only one escaping point was counted.
Combining the two Metrics
In spite of the aforementioned problems, metric a_p has proven to be a reasonable metric for precision in practice. Therefore the combination of the two metrics can lead to a better estimation of precision: whilst a_p focuses on counting the number of escaping points from the log behavior, a_n focuses on searching globally for the maximal deviation one of those escaping points can lead to.
a_p^n(N, L) = α · a_p(A_Γ(N,L)) − β · a_n(N, L), with α, β ∈ R_{≥0} and α + β = 1.
Let us revisit the example introduced at the beginning of this section, which is a transformation of the model in Figure 5 but contains an arbitrary number of operations before the Post-chemo activity. If β = 0.2, then a_p^n evaluates to 0.508, a mid-range value that exposes the precision problem revealed by the computed anti-alignment.
8 Experiments

We have implemented a prototype tool called DarkSider which implements the techniques described in this paper. Given a Petri net N and a log L, the tool is guided towards the computation of anti-alignments in different settings:
- Finding an anti-alignment of length n with at least m mismatches (Φ^n_m(N, L)).
- Finding the shortest anti-alignment necessary for having at least m mismatches (Φ_m(N, L)).
- Finding the anti-alignment of length n with maximal mismatches (Φ^n(N, L)).
Results are provided in Table 1. We have selected two considerably large models, initially proposed in [START_REF] Vanden Broucke | Event-based real-time decomposed conformance analysis[END_REF][START_REF] Munoz-Gama | Single-entry single-exit decomposed conformance checking[END_REF]. The table shows the size of the models (number of places and transitions), the number of traces in the log and the size of the alphabet of the log. The column labeled n establishes the length imposed for the derived anti-alignment. In this column, values always start with the maximal length of a trace in the corresponding log, e.g., for the first log of the prAm6 benchmark the length of any trace is less than or equal to 41. The column m determines the minimal number of mismatches the computed anti-alignment should have. Finally, the results of computing the three formulas described above for these parameters are provided. For Φ^n_m(N, L), we report whether the formula holds. For Φ_m(N, L), we provide the length of the shortest anti-alignment found for the given number of mismatches (m). Finally, for Φ^n(N, L) we provide the number of mismatches computed for the given length (n).
For each benchmark, two different logs were used: one containing most of the behavior in the model, and the same log where the cases describing some important branch of the process model have been removed. The results clearly show that, using anti-alignments, highly deviating behavior can be captured: e.g., for the benchmark prAm6, a very deviating anti-alignment (39 mismatches out of 41) is computed when the log does not contain that behavior of the model, whereas less deviating anti-alignments are found for the full log (19 mismatches out of 41).
Related Work
The seminal work in [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF] was the first one relating observed behavior (in the form of a set of traces) and a process model. In order to assess how far the model can deviate from the log, the follows and precedes relations for both model and log are computed, storing for each relation whether it always holds or only sometimes. In case of the former, it means that there is more variability. Then, log and model follows/precedes matrices are compared, and those matrix cells where the model has a sometimes relation whilst the log has an always relation indicate that the model allows for more behavior, i.e., a lack of precision. This technique has important drawbacks: first, it is not general since in the presence of loops in the model the characterization of the relations is not accurate [START_REF] Rozinat | Conformance checking of processes based on monitoring real behavior[END_REF]. Second, the method requires a full state-space exploration of the model in order to compute the relations, a stringent limitation for models with large or even infinite state spaces.
In order to overcome the limitations of the aforementioned technique, a different approach was proposed in [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF]. The idea is to find escaping arcs, denoting those situations where the model starts to deviate from the log behavior, i.e., events allowed by the model but not observed in the corresponding trace in the log. The exploration of escaping arcs is restricted to the log behavior, and hence the complexity of the method is always bounded. By counting how many escaping arcs a pair (model, log) has, one can estimate the precision of the model. Although this yields a sound estimation of precision, it may hide the problems we are considering in this paper, i.e., models containing escaping arcs that lead to large amounts of unobserved behavior.
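A minimal sketch of the escaping-arc idea, under the assumption of a model object exposing an enabled_after(prefix) method that returns the set of activities the model allows after a given prefix (this interface is our simplification, not the implementation of the cited approach): for every prefix observed in the log, an escaping arc is an activity allowed by the model but never observed next in the log.

```python
def escaping_arcs(model, log):
    """Count activities enabled by the model after an observed prefix
    but never observed in the log right after that prefix."""
    observed_next = {}                              # prefix -> observed next activities
    for trace in log:
        for i in range(len(trace)):
            prefix = tuple(trace[:i])
            observed_next.setdefault(prefix, set()).add(trace[i])
    escaping = 0
    for prefix, seen in observed_next.items():
        allowed = model.enabled_after(prefix)       # assumed to return a set of labels
        escaping += len(allowed - seen)
    return escaping
```

A precision estimate in that spirit then compares, prefix by prefix, the number of escaping activities with the number of allowed ones.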
Less related is the work in [START_REF] Vanden Broucke | Determining process model precision and generalization with weighted artificial negative events[END_REF], where the introduction of weighted artificial negative events derived from a log is proposed. Given a log L, an artificial negative event is a trace σ′ = σ · a such that σ ∈ L but σ′ ∉ L. Algorithms are proposed to weight the confidence of an artificial negative event, and these weights can be used to estimate the precision and generalization of a process model [START_REF] Vanden Broucke | Determining process model precision and generalization with weighted artificial negative events[END_REF]. As in [START_REF] Munoz-Gama | Conformance Checking and Diagnosis in Process Mining[END_REF], by only considering one step ahead of the log/model behavior, this technique may not catch serious precision/generalization problems.
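Taking that definition literally, candidate artificial negative events can be enumerated by extending each log trace with one extra activity and keeping the extensions that do not occur in the log themselves; the weighting scheme of the cited work is deliberately not reproduced in this sketch.

```python
def artificial_negative_events(log, alphabet=None):
    """One-step extensions sigma' = sigma . a with sigma in the log but sigma' not in it."""
    traces = {tuple(t) for t in log}
    if alphabet is None:
        alphabet = {a for t in traces for a in t}
    negatives = set()
    for sigma in traces:
        for a in alphabet:
            candidate = sigma + (a,)
            if candidate not in traces:
                negatives.add(candidate)
    return negatives
```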
Conclusions and Future Work
In this paper the new concept of anti-alignments is introduced as a way to catch deviations a process model may have with respect to observed behavior. We show how the problem of computing anti-alignments can be cast as the satisfiability of a Boolean formula, and we have implemented a tool which automates this encoding. Experimental results on large models show the usefulness of the approach, which is able to compute deviations when they exist.
This work starts a research direction based on anti-alignments, and several extensions deserve further attention. First, it would be interesting to place anti-alignments more firmly in the context of process mining; this may require models to have a clearly defined final state, with anti-alignments defined accordingly. Also, the distance metric may be adapted to incorporate log frequencies and to be less strict with respect to trace deviations concerning individual positions, loops, etc. Alternatives for the computation of anti-alignments will also be investigated. Finally, the use of anti-alignments for estimating the generalization of process models will be explored.
Fig. 1. Running example (adapted from [START_REF] Vanden Broucke | Event-based real-time decomposed conformance analysis[END_REF]). Overall structure (top), process model (bottom).
Fig. 2. Model containing a highly deviating anti-alignment for the log considered.
Definition 4. In order to deal with traces of different length, we define for every trace σ = σ1 . . . σp and n ∈ N the trace σ|1...n as: σ|1...n := σ1 . . . σn, i.e. the trace σ truncated to length n, if |σ| ≥ n; and σ|1...n := σ1 . . . σp · w^(n-p), i.e. the trace σ padded to length n with the special symbol w ∉ Σ (w for 'wait'), if |σ| ≤ n.
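A direct transcription of this definition, with 'w' standing for the wait symbol (the function name is ours):

```python
def adjust(trace, n, wait="w"):
    """Truncate a trace to length n, or pad it to length n with the wait symbol."""
    if len(trace) >= n:
        return tuple(trace[:n])
    return tuple(trace) + (wait,) * (n - len(trace))
```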
Fig. 3. The process model (taken from [10]) has the anti-alignment ⟨a, b, c, f, i, k⟩ for the log L = {⟨a, b, c, f, g, h, k⟩, ⟨a, c, b, f, g, h, k⟩, ⟨a, c, b, f, h, g, k⟩, ⟨a, b, c, f, h, g, k⟩, ⟨a, e, f, i, k⟩, ⟨a, d, f, g, h, k⟩, ⟨a, e, f, h, g, k⟩}.
coded by a Boolean formula of size O(n × |T| × |N|), with |N| := |T| + |P|. The second part of the formula (for every σ ∈ L, dist(γ, σ) ≥ m) is coded by a Boolean formula of size O(n × m² × |L| × |T|).
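Outside the SAT encoding, the property expressed by this second part, namely that the candidate run differs from every log trace in at least m positions once traces are adjusted to length n, can be checked directly, for instance as below (reusing the adjust and hamming helpers sketched earlier):

```python
def is_anti_alignment(run, log, n, m):
    """Check that a length-n run has at least m mismatches with every log trace."""
    run = adjust(run, n)
    return all(hamming(run, adjust(trace, n)) >= m for trace in log)
```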
Fig. 4. The net N_L for L = {⟨a, b, c, f⟩, ⟨a, c, b, f, g⟩, ⟨a, c, b, f, h⟩}.
Fig. 5. Example taken from [5]. Initial process model N (top-left), optimal alignments for the event log L = [σ1, σ2, σ3, σ4, σ5] (top-right), automaton A_Γ(N,L) (bottom).
Let Γ(N, L) := ⋃_{σ∈L} Γ(N, σ).
A marking m′ is reachable from m if there is a sequence of firings t1 t2 . . . tn that transforms m into m′, denoted by m[t1 t2 . . . tn⟩m′. A sequence of actions a1 a2 . . . an is a feasible sequence (or run, or model trace) if there exists a sequence of transitions t1 t2 . . . tn firable from m0 and such that for i = 1 . . . n, ai = λ(ti). Let L(N) be the set of feasible sequences of Petri net N. A deadlock is a reachable marking for which no transition is enabled. The set of reachable markings from m0 is denoted by [m0⟩, and forms a graph called the reachability graph. A Petri net is k-bounded if no marking in [m0⟩ assigns more than k tokens to any place. A Petri net is safe if it is 1-bounded. In this paper we assume safe Petri nets.

Definition 2 (Event Log). An event log L (over an alphabet of actions Σ) is a multiset of traces σ ∈ Σ*.
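A toy rendering of these notions for safe nets without silent transitions, with class and method names of our own choosing; feasibility of a label sequence is checked by simulating all markings reachable under it:

```python
class SafePetriNet:
    """Minimal safe Petri net: each transition maps to (preset, postset, label)."""

    def __init__(self, transitions, initial_marking):
        self.transitions = transitions        # name -> (set of places, set of places, label)
        self.m0 = frozenset(initial_marking)

    def fire(self, marking, t):
        pre, post, _ = self.transitions[t]
        return frozenset((marking - pre) | post)

    def is_feasible(self, labels):
        """Does some firing sequence from m0 produce exactly this label sequence?"""
        frontier = {self.m0}
        for a in labels:
            next_frontier = set()
            for marking in frontier:
                for t, (pre, post, label) in self.transitions.items():
                    if label == a and pre <= marking:   # enabled and matching label
                        next_frontier.add(self.fire(marking, t))
            if not next_frontier:
                return False
            frontier = next_frontier
        return True
```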
[Transition labels from the running-example figure: process cash payment; open and register transaction; check sender; process cheque payment; check receiver; transfer money; notify and close transaction; process electronic payment.]
An event log is a collection of traces, where a trace may appear more than once. Formally:
reachable in the complemented net (and with the same firing sequence). Notice that, since N is acyclic, each transition can fire only once; hence, the length of the firing sequences of N is bounded by the number of transitions |T|. Add now a new transition tf with •tf = tf• = M ∪ {p̄ | p ∈ P \ M}. Transition tf is firable if and only if M is reachable in the original net, and in this case, tf may fire forever. As a consequence the new net (call it Nf) has a firing sequence of length |T| + 1 iff M is reachable in N. It remains to observe that a firing sequence of length |T| + 1 is nothing but a (|T| + 1, 0)-anti-alignment for Nf and the empty log. Then M is reachable in N iff such an anti-alignment exists.
Table 1. Experiments for different models and logs.

benchmark      |P|   |T|   |L|    |Σ_L|   n    m    Φ^n_m(N,L)   Φ_m(N,L)   Φ^n(N,L)
prAm6          347   363   761    272     41   1    yes          3          39
                                          41   5    yes          7
                                          21   1    yes          3          19
                                          21   5    yes          7
                            1200   363    41   1    yes          4          19
                                          41   5    yes          8
                                          21   1    yes          4          15
                                          21   5    yes          8
BankTransfer   121   114   989    101     51   1    yes          8          32
                                          51   10   yes          17
                                          21   1    yes          8          14
                                          21   10   yes          17
                            2000   113    51   1    yes          15         16
                                          51   10   yes          37
                                          21   1    yes          15         5
                                          21   10   no           37
a Petri net is acyclic if the transitive closure F + of its flow relation is irreflexive.
In general the net does not remain acyclic with the complementary places.
Note that more than one run of the model may correspond to an optimal alignment with log trace σ, i.e., |Γ(N, σ)| ≥ 1. For instance, in Figure 5 five optimal alignments exist for trace ⟨a⟩. For the ease of explanation, we assume that |Γ(N, σ)| = 1.
The tool is available at http://www.lsv.ens-cachan.fr/ ~chatain/darksider.
Since in the current implementation we do not incorporate techniques for dealing with the improved distance as explained in Section 5, we still get a considerably deviating anti-alignment for the original log.
Acknowledgments. We thank Boudewijn van Dongen for interesting discussions related to this work. This work has been partially supported by funds from the Spanish Ministry for Economy and Competitiveness (MINECO), the European Union (FEDER funds) under grant COMMAS (ref. TIN2013-46181-C2-1-R). | 42,337 | [
"745648",
"963422"
] | [
"157663",
"2571",
"85878"
] |
01487676 | en | [
"shs"
] | 2024/03/04 23:41:48 | 2008 | https://univ-sorbonne-nouvelle.hal.science/hal-01487676/file/2008GervaisNeitherImperial.pdf | Neither imperial, nor Atlantic: A merchant perspective on international trade in the eighteenth century
Pierre GERVAIS 1

In the most literal sense, the "Atlantic world" is a misnomer: in the XVIIIth century, the period for which the term is most commonly employed, the Atlantic Ocean was a forbidding expanse of salt water, mostly empty save for a few islands, and could hardly constitute a world. Even today, supertankers and cruise ships notwithstanding, not much is taking place on the Atlantic proper. What counts, of course, is the land, including the aforementioned islands. But the geographical fact that these lands border the Atlantic or are surrounded by it does not tell us much about what an Atlantic world resembles, either. As a number of authors have pointed out more or less forcefully, the so-called Atlantic community was never strictly Atlantic, and contained many very different communities. What justifies the term for its advocates is that it eventually came to encompass a thick web of relationships, linking a number of people on each side of the Atlantic Ocean, so many in fact that, in some respect at least, it produced what could be called a shared Atlantic world. This world was not a numerical accumulation of empires, defined by national boundaries, or national loyalties ; on the contrary, if it had one defining characteristic, it was precisely its web-like structure, created by the free circulation of goods, people and ideas, across national boundaries, such as they were. Whether this process of circulation was oppressive, as with the slave trade, or liberating, as with Enlightenment ideals, is beside the point. The most determinant factor in the success of Atlantic exchanges was the international movement throughout interconnected parts, and the deeper historical evolution associated with it. 2

Was this movement in any sense truly "Atlantic," however? The present paper aims at presenting a brief and narrow view of it, but from a crucial point of view, that of the merchant. Commerce, everybody will agree, was at the heart of the Atlantic process. It looms large in every account of the XVIIIth Century, and even larger when one realizes that in many ways commerce was the reason why the European "Atlantic" empires were built -the imperial viewpoint being the other major competitor in the race to offer an analytical framework for Eighteenth-century development in Europe and the Americas, at least. 3 Colonial goods and the colonial trade prompted the great confrontation between England and France; and some economists even credit them with a key role in fueling economic growth in the mother countries, regardless of their relatively marginal volume in the overall trade of these countries. 4 Merchants themselves were supposedly the quintessential Atlanticists, both at the personal and the professional level. If the concept makes sense at all, then, it should make sense particularly for the activities of these traders, whose breadth of horizon, manic activity, and constant personal intercourse underpinned almost everything significant which took place on the Atlantic Ocean outside of strictly military ventures, and largely provided the stakes and the motives for the latter.
Even the one major "Atlantic" phenomenon which could be said to escape the merchant sphere, the multifaceted cross-cultural intercourse generated by constant flows of migrants over and around the ocean, was still technically channelled through merchant-made networks, and merchant-conceived crossing procedures. * * *
To some extent, the minutiae of merchant practice have only recently become a topic of historical enquiry. Earlier works were often mainly concerned with aggregate data, the general movement of ships and goods, and changes in economic trends in the Labroussean structuralist tradition, or with collective political, cultural and social portraits of merchants groups in which account books were only peripherally used. The merchant mind was best read through correspondence and political lobbying, cultural attitudes and social differentiation. None of these areas of research, however, are illuminating for our purposes. To quote Ian Steele, nobody ever fought, prayed and died in the name of an Atlantic community, so that its existence is usually proved through reference to practice -to the circulation of ideas, people, and goods. 5 Hence the interest of analyzing the forms this circulation took, and here we can rely on a very strong body of recent prosopographies. In this respect, a series of works have reshaped our views on merchant activities, particularly in the last twenty years. We now know that merchants were combining multiple activities, integrating all the areas of the Atlantic world, and thereby holding together the many strands which made or unmade the central "adventure:" a shipping expedition. We know that they were impressively flexible, managing a multiplicity of endeavours at once through complex institutional forms, and that they suceeded in carrying on shipping activities in the face of imperial prohibition, and even in the face of Napoleon's Continental blockade. We also know that the same networks which underpinned their trade gave rise to complex "conversations" through which scales of qualities were set, goods defined within these scales, prices debated, and production and transportation processes refined and improved. [START_REF] Hancock | Commerce and Conversation in the Eighteenth Century Atlantic; the Invention of Madeira Wine[END_REF] The merchant world was thus a networked world, which, on the face of it, would fit perfectly into the model of a transnational community. However, both the motives and the implications of this networked approach to trade may not have received all the historical attention they deserve. For networks played a series of roles, some of which were characteristic of the era, and also had concrete consequences on the way merchants would view their world. First of all, the impact of information was particularly decisive to any society in which goods were far from standardized, and where official standards imposed by state institutions were constantly undermined through widespread imitation and fraud. In a remarkable article, Pierre Jeannin points out that merchants faced vast difficulties in gauging the quality of the wide range of goods they were supposed to sell. [START_REF] Jeannin | Distinction des compétences et niveaux de qualification : les savoirs négociants dans l'Europe moderne[END_REF] Who could say for sure that a given piece of textile had really been made according to the quality standards of the manufacturing area it purported to come from, that a barrel of flour contained the grade of flour it was sold for, that a jewel from India was what it seemed to be? While any merchant could acquire a competency in any given field, no buyer could hope to master the bewildering range of qualities and nomenclatures characteristic of the eighteenth-century. [START_REF]On the issue of quality scales, cf. 
"Networks in the Trade of Alcohol," a special section introduced by Paul Duguid[END_REF] Hence the vital role of networks. No merchant could be an expert on everything; but a good merchant would be able to rely on a network of peer experts, who would do the job for him. Indeed, this went beyond product quality, which was merely the visible part of the commercial iceberg. Each level of quality entailed a different marketing strategy, a different clientele, and ultimately different markets at each end of the process. Even (relatively) specialized traders dealt in a whole series of products, with no written and institutionalized nomenclature to help them. But the typical experience was that of unspecialized traders, such as grocer Thomas Allen of New London, Connecticut whose 1758 account book listed beef, corn, shingles, clapboard, and other local products along with coffee, sugar, raisin, rum, cotton, "stript" (striped cloth), "Oznabrigue" (Osnabrück cloth), and other colonial and European products. 9 There were thousands of retailers such as Allen, who left hundreds of such account books, each of which testified to a specific set of suppliers, or more accurately to a specific set of goods gathered through one correspondent from many suppliers. When Joshua Green, at age twentyone, started a business as a grocer in Boston, Massachusetts, he used his father's supplier, one Thomas Lane of London, and bought every year an assortment of Far Eastern spices and shipping products ; his first order brought cinnamon, cloves, nutmeg, mace, pepper, tea, starch, "Florence oil," raisins, and Cheshire cheese. Green's first introductory letter to Lane, dated 1752, started with these words : "The Satisfaction you have given in the Business you have done for Mess(rs) Green + Walker (with whom I serv'd my apprenticeship) has induc'd me to apply to you ;" past experience had taught Green that Lane could be trusted to be his expert buyer in London. [START_REF]Green Family of Boston, Mass[END_REF] Large-scale merchants did not operate differently ; according to Silvia Marzagalli, the ship Isaac Roget of New York sent in 1805 for Guadeloupe was filled with goods from five different suppliers, including various silk, linen and other textile products, as well as manufactured goods and wine. [START_REF] Marzagalli | Establishing Transatlantic Networks in Time of War : Bordeaux and the united States, 1793-1815[END_REF] All these shippers were general merchants, but specialization did not bring about significantly different approaches; ordering a shipment of British textile goods in Boston in 1813 (in the hopes that the War of 1812 would be over soon), merchant Nathan Appleton wrote to his brother in London : I should like however to have some good merchandize for me should they be reasonably low. Say to am(t) of £ 5000 -if you have not already purchased any for you M. Stone is an excellent judge of goods + I should like to have you get him to purchase them if you do not wish to do it yourself -I have about £ 2500 I suppose in Lodges + [Prother?] hands -but [ill.] they will be glad to accept drafts to a greater am(t) -whilst the goods are in their hands. It is [also?] necessary that I should give a particular order as I wish the goods to be of the most staple kinds say Cambrics Calicoes shirtings ginghams +c. to am(t) of £ 3000 or 4000 -+ 1 or £2000 in staple woolens as in my former letter [pr?] 
I + T Haigh for goods in their line -I leave it however to your judgement from the state of the market + the prospect of peace or a continuance of the war to purchase or not at all. 12 Thus Appleton relied on one M. Stone, and on his brother as a controlling element, when it came to order goods abroad, even though he was specialized in the type of merchandize he was buying. He had some ideas of his own, but was fully ready to defer to those who would actually buy, since they alone would be in a position to judge if the cloth they had in hand was suitable to the Boston market, and whether the price / quality ratio was adequate. In other words, even a specialized trader had to rely on others, not only to get the best possible quality for the price, but also, and possibly first and foremost, in order to pick the right type of goods and the proper variety. In any case, the time lag between orders and sales was usually such as to prevent instructions from being too specific. Commissioners everywhere had to exercise their judgements, and merchant correspondence is replete with complaints that an agent had bought too late, or too early, and at the wrong price.
Choosing the right correspondent was thus essential, and even more so if one includes the second major dimension of merchant activity, that of credit. At any one time, little cash changed hands; most of the settlements took place through compensations. Green, for instance, almost never sent any cash to Lane, but "remitted" his debts by sending "bills," i. e. formal I.O.U.s, drawn on London houses. There is no indication on how these bills came into his hands, but in almost every one of his letters in 1752-1754, he apologizes for not sending Lane enough of them to balance his account. The fact that his was a paper debt, based on theoretically open credit, may mean that no interest was paid. Whatever the case in practice, the point is that Green needed Lane's forbearance. Thus a network was also a source of credit, which in turn was assuredly bound up with the personal relationships between creditor and debtor. Of course personal reputation was a decisive element, and it included non-economic ties -Lane had been the supplier of Green's father, after all. Kinship, religion, or any other potential link could become a motive for a creditor to be more tolerant of delays, or to offer better terms of payment, such as lower discounts on exotic commercial paper, for instance. The reverse was true as well, since Lane depended on payments from his customers, Green among them, to pay his suppliers on time.
Much has been written on the delicate timing required by long-distance trade, but timing was always flexible, and dependent in part of the relationship between the actors of the exchange. The same could be said of interest rates and exchange rates, never rigidly fixed, and dependent in part on the relationship existing between the two parties. Networks, in a way, were credit, since they underpinned the ability to draw both capital and information on others. The result was in truth a joint venture between individuals who had to trust each other, a venture in which profit was distributed along complex channels of differential participation, again with close attention paid to interpersonal relationships. 13 In their concrete, day-to-day operations, networks were therefore carefully chosen and nurtured. A merchant's point of view tended to encompass first and foremost a discrete set of correspondents, usually picked among groups with which there were certain affinities. Religious or ethnic networks, or the universal tendency to pick close kin as partners, were simply rational business decisions, aimed at minimizing the risks of network 13 Laurence Fontaine, "Antonio and Shylock: Credit, and Trust in France, c. 1680-c. 1780," Economic History Review 54 (1, February 2001): 39-57; or William T. Baxter, "Observations on Money, Barter and Bookkeeping," Accounting Historians Journal 31 (1, June 2004): 129-139; as well as Cathy Matson's discussion of risk, with trust as an underlying thread, in "Introduction: The Ambiguities of Risk in the Early Republic," "Special Forum: Reputation and Uncertainty in Early America," Business History Review 78 (4, Winter 2004): 595-606). On merchant subcontracting, cf. Pierre Gervais, Les Origines de la révolution industrielle aux Etats-Unis, Paris: 2004.
failure, without suppressing them completely of course. 14 What counted was on whom one could call for credit and information, and the links one relied on delineated a geography which was never universal, nor even Atlantic, but made up of the major nodes in which one's correspondents acted. Over twelve years, from 1763 to 1775, the Bordeaux firm of Schröder and Schyler, one of the few merchant firms for which we have solid information, dealt with only 17 foreign firms on a regular basis. And while it had over 250 other foreign clients from time to time, 47.8% of all consignments made from Bordeau went to these seventeen firms. 15 It is thus impossible to overstate the importance for a trader of these bilateral relationships (in this case, the term network is slightly misleading; these were chains of correspondents, or at most small groups linearly linked, rather than actual networks). And it is easy to show that they often gave rise to gate-keeping processes. In his first letter to Lane, already quoted, Green felt necessary to explain that "As Mes(s) G + W dont trade in those articles I purposed to write for [I] shall have the Advantage of supplying some of the best of their Customers on a short Credit or for the Cash." Green wanted to establish his credit with Lane, of course, but he was also careful to point out in passing that he was not going to compete with his father's firm; within a given set of trading links, competition was strictly limited. Insiders had preferential treatment, while outsiders could scarcely hope for such special treatment. This is assuredly one of the more misleading aspects of the current studies on the economic processes commonly associated with the Atlantic area in the XVIIIth century. No merchant operated with utter freedom nor could he easily change his commercial affiliations. Every account demonstrates that any new endeavour, any extensions of earlier channels, or much more rarely any attempt at redirecting these channels, entailed the careful building of new and strong bonds with key players in the desired market. As a rule, no redirection of trade traffic was complete, no business ruptures could be permanent ; all changes were incremental, because it had to be accomplished through the existing channels, and only thanks to them. Even bankruptcies could not shake these constraints, since the practice of settlement with creditors is universally attested in the archives.
Conversely, finding new trading partners was difficult, time-consuming, and possible only to the extent that sound intermediate contacts could be found. In his same first introductory letter to Lane, Green junior was careful to point out that he had been his father's apprentice, and sent a note worth £ 50, the biggest sum he would ever send during his recorded first years of dealing with Lane. Green senior's standing was thus not automatically transferred to his son's new firm, and had to be reasserted. Establishing credit was no easy matter. On the other hand, no trader could operate without the help of other traders, and indeed in many areas of the world, especially in the Far East, but at one time or another in many European countries as well, having local correspondents was not only necessary, but compulsory. 16 14 David Hancock, "The Trouble with Networks : Managing the Scots' Early-Modern Madeira Trade," Business History Review 79 (3, Autumn 2005): 467-491. 15 Pierre Jeannin, "La cientèle étrangère de la maison Schröder et Schyler de la guerre de Sept Ans à la guerre d'indépendance américaine," in Marchands d'Europe. Pratiques et savoir à l'époque moderne, Jacques Bottin et Marie-Louise Pelus-Kaplan ed, Paris: 2002, 125-178. 16 Cf. for instance the Calcutta intermediaries and their relationship with foreign merchants as described in the contemporary letters of Patrick T. Jackson, Far Eastern trader in the 1800s ; cf. Kenneth W. Porter, The
The net result of all these pressures is that the proper unit of analysis for the merchant world was the universe of discrete chains of trading links that structured mercantile commerce. This process had nothing to do with either the Atlantic Ocean or the relationship between "Old" and "New" worlds, since it can be observed in any setting where European-style merchant capitalism was a significant reality. Family solidarities, gate-keeping practices, credit-based dealings were merchant, not Atlantic, characteristics, and they created order in merchant life most everywhere. Commercial connections, far from being set up everywhere and at will, followed lines of least resistance created in constructing this merchant order. Mercxhant linkages were structured by existing routes and contacts, and were influenced by differential risk. This is where the imperial factor also intervened in international trade, especially during times of war. War was in itself a rejoinder to the very idea of an Atlantic community, which it "vetoed," so to speak, regular intervals. Losses in times of war are an ubiquitous story in the XVIIIth century, and no merchant, however experienced, could trust that he would be protected from international conflict of all kinds. Even such a vaunted meticulous planner as British merchant John Leigh saw his first foray in slave-trading end in near-disaster at the hands of a French privateer off the Coast of Guyana. [START_REF] Steele | Markets, Transaction Cycles, and Profits : Merchand Decision Making in the British Slave Trade[END_REF] Of course, proponents of the "Atlantic" framework insist that the barriers created by war were regularly finessed and crossed in various ways, which is quite true. It has been shown again and again that war did not completely cut off communications between enemies, and that trade was not easily enclosed within imperial models. But merchants did not freely redirect their energies anywhere they wanted in the great Atlantic web either, a point which is much less made. Even more than in peacetime, networks in wartime turned out to be highly incapable of adapting or changing to meet circumstances. They engendered dependencies on strict trading pathways, the importance of which can hardly be exaggerated, and which seems to resurface in many historical example.
Thus the illegal Caribbean trade around 1780 underlines the persistent links of the New York merchant community with the Dutch West Indies over a century after Stuyvesant's surrender. A quarter of a century later, the Herculean efforts of Bordeaux merchants to maintain their colonial commerce after 1803 in the face of seemingly universal opposition reveals their inability to develop new trade channels on the continent in spite of their exceptionally famous wine-growing hinterland. Even the growth of neutral U. S. shipping during the same Napoleonic wars was insufficient to prompt Bordeaux retailers to call into question their traditional London-based financial networks even though their confiscated goods occasionally ended up in the warehouses of enemy continental firms. The much vaunted ability of traders to pursue trade in times of war thus may be read also, to a certain extent, as an inability to redirect this same trade along more secure lines, simply because the cost of this redirection was too high, hence the persistent attempt to derive profit from existing networks in spite of adverse conditions. At the very least, the assumption that such contraband trade was preferred because it was more profitable, and developed regardless of the political context, should be challenged, since in practice it turned out to be so often dependent on prior links. Total freedom of choice should have resulted in many more creative endeavours, launched well outside the beaten paths. 18 The complex dialectic between prior relationships and new business opportunities in times of war is well illustrated by the case of John Amory, a Boston merchant, in partnership with his brother Thomas. The two men were from a well-established merchant family, but their father, a Loyalist, had fled to England in 1775. In May 1779, John Jr. arrived in London, but apparently not for political reasons. For the next four years, he would travel ceaselessly between London, Brussels and Amsterdam, organizing a flow of shipments for the benefit of the firm John & Thomas Amory. 19 Most of the shipments for which shippers are specified were made from Amsterdam through a certain John Hodshon, who was, as it turns out, a correspondent of John Amory's father. Indeed, Hodshon was given the same wide latitude as Appleton's agents in London 30 years later, Amory having written him at one point to send "brother Jonathan" "1 Chest of good bohea tea [...] or Same Value in Spice as you may judge -if in spice 1/2 the value in Nutmegs 1/4 in Cinnamon 1/4 in Cloves and Mace." Goods came from both London and Brussels, and the use of a neutral port to ship to the United States was logical, as well as the various precautions which were taken to disguise the true status of the cargo : in the same letter in which Hodshon was left free to choose whether he would buy tea or spices, Amory wrote of "inclosing my letter to Brother Payne to be given Cpt Hayden, desiring the Cpt if taken to destroy it." 20 Actually, Amory's venture was probably not a journey to an entirely new territory. His correspondent firm in London was Dowling & Brett, and his first recorded transaction after his arrival in Brussels on July 1st, 1780 was to present a bill on them to the Brussels firm of Danoost & Co., for a grand total of £ 30. This sum in itself was relatively small ; according to the preceding entry Amory had reached Brussels with £ 400 in cash. The most important result of the transaction, however, was to establish Amory's credit by having Danoost & Co. 
draw on Dowling and Brett, a London firm which may well have been already known in Brussels anyway. In other words, Amory was most probably travelling along a chain of correspondents such as the ones we have described above. The war would slightly modify the order of the links in the 19 The complex web of family relationships and business partnerships between the various Amorys of Boston is described, if not entirely elucidated, in the Massachusetts Historical Society Guide to the Amory Family Papers, and available online at http://www.masshist.org/findingaids/doc.cfm?fa=fa0292. According to MHS records, the "John Amory" whose travels in Europe between 1778 and 1783 are used here must be John Amory Jr. (1759-1823), since John Amory Sr. was already in Europe in 1775. However, the accounts and letterbook from this Brussels trip, which come from the J. and J. Amory Collection (hereafter Amory Collection), Mss: 766, Baker Library, Harvard Business School, vol. 2 ("Journal, John Amory accounts in Europe, 1 Feb. 1778 -27 Feb. 1783"), and vol. 46 ("Copies of letters sent 1781-1789"), quote several times a "brother Jonathan," which should be either an uncle or a cousin, John Jr. having no brother Jonathan. Since William Payne, a cousin, is also called "brother Payne," we have assumed that the word "brother" here had a religious (Quaker?) connotation, and should not be taken as meaning a sibling, but this may well be a mistaken interpretation on our part. 20 For Hodshon's letters to J. & J. Amory, Amory Sr.'s firm, cf. Amory Collection, vol. 52, Folder 2 "Letters received from Miscellaneous, 1780-1785." Amory Jr's letter is in "Copies of letters sent 1781-1789," loc. cit., entry for May 5, 1781.
chain, with Flemish and Dutch merchants inserted as a buffer between London and Boston, but the points of departure and arrival were the same, and even these new intermediaries were part of the original networks. Even more interestingly, the new status of France, allied to the United States, was not enough to prompt new financial networks. On February 8, 1781, Amory credited his Bills of Exchange account with two bills on London houses, for a total of £ 588, "The above bills being the net proceeds of four bills sent me by J. A. for 13998 livres tournois on paris, + w(ch) were rec(d) By MSs Vaden Yver Freres + C(o) on whom I gave my draft in favour of MSs Danoot + C(o) + who paid [ill.] 13944.9 livres." The two British bills were duly deposited in Amory's account at Dowling & Brett's, as the next entry shows. In other words, French commercial paper probably received in the United States by Jonathan Amory was changed into London paper through French and Flemish correspondents. There was apparently no attempt to reduce the discounts and losses entailed by this long chain of intermediaries, through importing directly from France.
There are only two explanations for such a continued reliance on London-based houses in the middle of the War of Independence. Either John Amory, as the son of a Loyalist, gave precedence to his political leanings over his Atlantic impulses, and stuck with his original London friends for political reasons. Or, much more plausibly, he considered that the war was no sufficient reason to reorient his trade links, because the costs of such reorientation would be too high in comparison with the expected profits. When one considers how risky it was to use new, unknown suppliers who could easily take advantage of a newcomer with no previous connections, and also how difficult it was to gain the acceptance of fellow merchants for whom one was an unknown quantity of dubious credit, it becomes obvious that entering new business territory unbidden was very costly indeed. By far the most practical solution was to find some respected guarantor who would ensure his fellow traders that their new acquaintance was in good standing. The better known the guarantor, the more trusted one would be, and credit would flow accordingly; bills would be endorsed, orders filled with quality goods, since doing otherwise would be offending the fellow trader who had pledged his word. The upshot of this basic Greifian mechanism was a strong built-in tendency for merchant networks to reproduce themselves regardless of changes in political conditions, and to spread only slowly and cautiously. This could be taken as a proof of the resilient character of these networks, and of the irrelevance of imperial orders to their exercise, in a word of their truly "Atlantic" character. But such a reading glosses over the fact that Amory's links to London were in and of themselves the result of empirebuilding, not a free association generated in the course of free merchant exchange. Moreover, his lack of interest in any direct contact with France, which anticipated the subsequent failure of the Franco-American trade alliance after 1783, points to the same reality: networks themselves, far from being conceived in a vacuum, were in large part the results of empire-building processes in the first place. This is not to say that no merchant community ever took advantage of changed circumstances, of course. The Dutch in the XVIth century, the British in the XVIIth and XVIIIth centuries did seize opportunities from time to time. But even these takeovers may have had an element of concurrent business contacts in them. According to recent research, the Dutch at least gained entry into the Mediterranean at the end of the XVIth century in part through their (politically determined) alliance with Antwerp networks, already well established in Italy also for political reasons. 22 On the whole, though, Amory's cautious approach may have been more representative of standard merchant procedures than the brazen attempts of the Dutch in the Baltic, or of the British in Spanish America. In this case, we should picture an "Atlantic" world as not only partly non-Atlantic, but also markedly less "new" and innovative than assumed in current historiography. Certainly Nathan Appleton, the already quoted Boston merchant and soon-to-be textile magnate, took a similar position during the War of 1812. On November 14, 1813, he wrote to his brother Samuel in London that « if the war should continue I should think a great many articles [ill.] 
of English produce or manufacture, might be shipped here to great advantage in neutral ships via Lisbon or Gottenburg -by our treaty with Sweden + Spain -English property on board their vessels are secured against our privateers -as we have in them recognized the principle that free ships make free goods. » Again, traditional London links were not easily forsaken. 23 Like Amory, incidentally, Appleton had no qualms about trading with the enemy. One could see this as an expression of the often cited Anglophilia of Boston and New England in general, which, in a traditional political narrative, would eventually lead to the ill-fated Hartford Convention and the demise of the Federalist Party. I believe, however, that Appleton's flippancy in a time of war cannot simply be explained in terms of a rejection of Federal policy. There is no reference to politics in the statement above, which is couched in strictly commercial terms. It is an observation of fact, not an affirmation of dissidence. If contraband had been seen as a political activity, not an economic one, it should show somewhere in Appleton's statement. Our Boston magnate did end up having dealings with Great Britain, as shown by this excerpt from a letter dated September 2, 1813 ; « Capt Prince has given us his bill for the balance of this a/c say £ 110.14 which I send to Mess(r) Lodges + [B]ooth by this conveyance for your acc(t) as the 3(rd) of £ 1650. -1 + 2(d) forwarded via Halifax one half on your acc(t) other half on my own -viz: Leon Jacoby and Francis Jacoby on Sam [Balkiny?] + Sons £ 1100. Jos. + [Jon(a)?] Hemphill on Tho(s) Dory + Isaiah Robert 550 -» 24 Three notes of hand, totalling the hefty sum of £ 1760 s 14, were sent, apparently by three different ships, from the United States through Halifax, that is through enemy (British Canadian) territory, onto London, and into enemy hands.
Of course, correspondence and remittances were generally accepted in time of war, and in fact even private citizens could, under certain circumstances, travel through enemy territory. Only the movement of goods 22 Pierre Jeannin, "Entreprises hanséates et commerce méditerranéen à la fin du XVIe siècle," in Marchand du Nord. Espaces et trafics à l'époque moderne, Philippe Braunstein and Jochen Hoock ed., Paris: 1996, 311-322. 23 Appleton Papers, Box 2, Folder 25, "1813," Nathan Appleton to Samuel Appleton, November 14, 1813 24 Ibid., Nathan Appleton to Samuel Appleton, September 2, 1813.
was restricted, and in ways which were open to debate. 25 Even on this latter point governmental policy itself was often haphazard and vacillating, as exemplified by the recently analyzed case of the British smugglers invited by Napoleon in Gravelines, or the secret instructions sent by London to open the British West Indies to the Spanish American trade. 26 All in all, Appleton, like Amory, seems to have faced little moral pressure when choosing wartime strategies, and actually Amory makes one cryptic reference to a letter to John Jay, which seems to imply at least that he was in contact with the rebels besides or beyond his commercial ventures. 27 That both men chose to stick with the known approaches is all the more striking. Precisely because, as many historians have argued, enforcement of imperial policies was so haphazard, merchant relationships should have mutated much more freely and frequently than reflected by the historical record.
Appleton did end up entering the French market, but after the end of the war only, in 1815, and in a way which in itself confirms how much merchants relied on preset chains of known correspondents. On March 11, 1815, He wrote his brother that :
In revolving in my mind what course to take to avoid the necessity of laborious personal attention to business for which I am becoming too [ill.] and the other extreme of having no regular established business -I have finally concluded a partnership concern with the two M(r) Ward -B C + W. [...] M(r) W(m) Ward goes to England in the Milo with the intention of proceeding immediately to Paris for the purpose of purchasing French goods -+ being well acquainted with this market I think he will be able to select such as will pay a profit -I have agreed to put a £5000 sty to be the same on 60 day bills drawn [ill.] -and I wish you to see this arrangement completed by placing the amount to credit of the new firm Benj C. Ward + C(o) with yourself if you have established yourself as you propose in your last letter to me as a commission merchant -if not with Lodges + [Booth?] or some house in London" 28 One needed an entry into the French market, and that entry would be the young Ward. Appleton himself had no intention to go to France, but sought to obtain a surrogate more competent than himself. It is worth pointing out, moreover, that the transfer of funds from London to Paris was left to Ward's initiative. The choice of the merchant house that would serve as Ward's correspondent in Paris was up to Ward, quite logically, as this was the most crucial choice the young associate would have to make in order to crack open the French market -and he was the expert, after all.
* * *
The most striking element in both Amory and Appleton's stories, and in countless other merchants' tales, is that they took place in a mercantile world which does not fit well into such categories as "Atlantic or "imperial". Because the concerns of these two men were structured by a flow of goods which never came close to imitating the free, unfettered market Adam Smith's utopian work made famous, they never thought on an Atlantic scale. Their view was economically both narrower and wider, encompassing a patchwork of fellow traders from whom they derived the goods they would send hither and thither, or the accesses to the customers who would buy these goods. But these networks were highly dependent on professional strategies, and narrowly constrained by the necessities entailed by the maintenance of these strategies. Thus Bordeaux traders would view their world as a set of correspondents, some in the Americas (the Caribbeans, some ports on the North American seaboard, South America sometimes), many in Europe, from their own region of Bordelais to London, Amsterdam and the Baltic sea, and maybe others in Asia and Africa, Calcutta, the Gold coast, or the Ile Royale. A Saint-Malo trader would have its own world as well, but it would be significantly different, with more focus on Newfoundland, on Normandy, on the Spanish empire. Boston would be a different story again, with London and the Caribbean looming large, but also the inner valleys of the North American continent, whence furs came, and the households of the Eastern seaboard, with their farmers and retailers. Even London at the height of its power, after the end of the Seven Years War, would have its own provincial outlook, and its own particular networks, or rather chains of relationships, centered on the British Caribbean Islands, the Yorkshire, the Bordeaux wine region, the slave-producing areas of Africa, the Indian dominion. And these are merely statistical orientations, dominant specializations which a few mavericks would always belie, since each trader had his own mix. From a merchant's eye view, the world was both wider and smaller than the Atlantic Ocean, but it never really corresponded to the Atlantic Ocean.
The issue here is not merely a question of geographic precision. It has often been pointed out that no trade was ever specifically Atlantic. First of all, most commercial activity took place within the land masses of Europe. In volume, and possibly in economic import as well, short-distance carting of grains may have been more crucial than gold, silver, or even the slave trade, in determining the economic health of an area. 29 Only a minority of European trade routes were prolonged across the Atlantic, and all of them were part of longer sets which reached well beyond the ocean. In Isaac Roget's already quoted cargo to Guadeloupe, part of the textile came from Central Europe, and there was silk which may well have been Chinese, or at least from Lyons; potential return cargoes could include the usual colonial goods, sugar, coffee or tobacco, but also more complex routes involving intra-Caribbean trade, a shipment of slaves to the Southern United States, the loading in North American ports of wheat, timber or flaxseed to bring back to Europe, or of fur as part of a venture toward the Far East. Even in the biggest Atlantic seaports, coastal shipping and liaisons with the hinterland, as well as longrange contacts to the Far East, were as much part of the business equation as the Atlantic crossings. But what should be underlined here is not only that merchant activity was spatially complex ; a much more important point is that it was a single process, regardless of where it took place. 30 For what united British and French and American and other merchants was their common socio-economic practice, not some potential attachment to a peculiarly trans-Atlantic enterprise which, as such, was very far from their mind. Admittedly, the individuals through whom these networks came into being never formed some general, transactional, transnational community. Market segmentation brought division and competition, and these were forces at least as powerful as political-ideological convergences or polite sociability. Geographical choices were shaped by possible business relationships, which themselves were heavily determined by kin, religion, and national loyalties. In particular, the core activities of most trading groups would develop within imperial boundaries and alliances, if only because it was easiest and most cost-effective ; inter-imperial exchange would take place of course, and necessarily so, but making them one's focus was unwise, as Bordeaux traders eventually found out the hard way. No merchant could be unmindful of such constraints, and trade flows were directed accordingly, even though inter-imperial borders were crossed all the time, including in times of war. Imperial strictures were thus only one parameter in a much wider set, and it would be equally misleading to grant them the status of monocausal explanation as it would be to ignore them entirely. But the variegated nature of the resulting trade relations should not hide their underlying identity. Each particular merchant relationship, be it local, regional, worldwide, or transatlantic, was the expression of the basic merchant act of forging a link in a commercial chain which would eventually make possible the opening of a conduit between two separate, segmented markets and the transportation of one or more goods from one to the other. In other words, the sets of relationships each merchant created were geographically diverse, but identical in nature and function wherever they came into being.
What, then, should be made of the "Atlantic" label? By focussing descriptively on a geographical area, rather than on any specified historical social development, the historiographical move toward "Atlantic" studies has unwittingly shifted the attention away from the causes of this development. Somehow the "Atlantic world" happened, along with empire-and / or community-building, but for no particular reason except maybe as the 30 The point that Atlantic history is a mere part of a wider history, and should not be separated from it, is repeatedly made in the various papers by Alison Games, Philippe J. Stearn, Peter Coclanis, and gathered in the forum section "Beyond the Atlantic÷ English Globetrotters and Transoceanic Connections," William and Mary Quarterly 63 (4, October 2006). But focalizing on the whole world does not tell us why this world became unified, any more than Jorge Cañizares-Esguerra's proposal to focus on the Americas as an area of "entangled" histories tells us why these histories became entangled in the first place. ("Entangled Histories. Borderland Historiographies in New Clothes?," the concluding paper in the already quoted forum in American Historical Review 112 (3, June 2007): 787-799. On this specific point, Bernard Bailyn's insistence on entirely rejecting Braudel's structural approach (Op. cit. 61) in favor of a purely narrative approach is intellectually coherent in its uncompromising empiricism; whether Atlantic of worldwide, unification happened because it happened. On the difficult issue of causality vs. description in Anglo-American historiography, cf. Pierre Gervais, "L'histoire sociale, ou heurs et malheurs de l'empirisme prudent," Chantiers d'histoire américaine, Jean Heffer and François Weil dir., Paris : 1994, 237-271.
serendipitous subproduct of a host of impersonal economic and social forces. And precisely because it happened in the most neutral space one could imagine, far from any specific shore, it tended to lose its European, elite, merchant and imperial administrator overtones. This is a misleading presentation, at best. From a merchant's point of view at least, and maybe from a variety of other vantage points too, the XVIIIth-century world was unified by the powerful tool of trade, backed by state power. These forces in turn defined a worldwide sphere of European expansion and market intensification of varying intensity, but with socio-economic consequences common to all the geographical places in which they were manifested. The increasingly dominant economic role of merchants, the expansion of a market economy, and the political tensions these phenomena generated, was what the "Atlantic," (and the Pacific, and Central Europe, and the Western Hemisphere, and large swaths of Africa and Asia) was all about. What was at work was a general social process, much more than a technical tendency to cross boundaries and oceans. Moreover, these evolutions, on the Atlantic Ocean and elsewhere, were brought about through the deliberate efforts of a very specific, and quite narrow human subgroup, with definite economic, social and political goals.
When we shift the focus toward these efforts and their nature, Atlantic history becomes again what Fernand Braudel argued it was all along, part of the wider history of the development of a specific social organization, European merchant capitalism, a model with a definite expansionist streak, which in turn elicited a wide range of complex reactions, from unyielding resistance to enthusiastic adoption, from the individuals and groups which had to face its encroachments or carried them out, until the eventual collapse of this model in the XIXth century with the advent of industrial capitalism. This was hardly an "Atlantic" story, since it can be traced just as well in the plains of Eastern Germany, in the Rocky Mountains years before the first French coureur des bois ever appeared, in African kingdoms which did not even have access to the sea, or in remote villages of India for which Europe was still barely a distant rumor. This was not world history, either ; this socalled first globalization was widely uneven, and left a good deal of the world population untouched, including in many regions of Europe. Neither was it purely European, though, accusations of Eurocentrism notwithstanding, the power relationships it entailed were clearly centered in Europe. There were centers and peripheries, mother countries and colonies, imperial capitals and client States or plantation economies. The space of European expansion was not homogenous, a fact which "Atlantic" history has never denied, but is hard put to explain with consistency beyond some general statements on unspecified profit motives or inherited prejudices. A history of market expansion, because such an expansion is of necessity a direct attack on other forms of social organizations, would naturally include the stories of its promoters, its opponents, their multi-faceted battles, and their winners and losers.
And last but not least, this history would not be one reserved for sea captains, pioneer migrants and cosmopolitan-minded traders; tavern-keepers, transporters with their oxen-carts, village retailers, and ordinary farmers apparently mired in their routine were also a part of it. 31 Giving up on the sea as a peculiarly significant focus is the only way to restore these latter groups to their proper status as key players in eighteenth-century economic growth, the only way to free ourselves at last from the gemeinschaft / gesellschaft dichotomy. Rather than opposing the modern, roving denizens of Atlantic History to both their hapless victims in Africa and the New World and to the traditional, not to say backward people who stayed put in the Old World, we can see all of these groups as fighting -not always bloodily -over the shape and form European market-driven expansion would take. Demonstrating that this same market expansion weakened rural society and pushed impoverished inhabitants to leave the European countryside, wrought havoc with traditional inter-tribal relationships in Africa, and brought about a massive reorientation of production toward exports in parts of Asia and in the Americas, would be the best way to ensure that all migrants, and non-migrants as well, would truly become part of the same story. Moreover, placing merchants and market forces at the center of our narrative enables us to starkly differentiate the eighteenth-century Atlantic world from our own. For we live in an age of producers, not of merchants ; the ideals and practices of merchant communities, on the Atlantic or elsewhere, were developed for a very different world. This world, structured as it was by long chains of interpersonal relationships, has long since been lost, a fact which we should keep in mind a little more when assessing the relevance these ideals and practices may still have for us.
32-33).For recent analyses of trade and its importance in the Atlantic, cf. "Trade in the Atlantic World," a special section introduced by John J. McCusker, Business History Review 79 (4, Winter 2005), and "The Atlantic Economy in an Era of Revolutions," a special section coordinated by Cathy Matson, William and Mary Quarterly 62 (3, July 2005). For narratives stressing imperial structures, while dealing in various ways with an "Atlantic" framework, cf. John H. Elliott, Empires of the Atlantic World: Britain and Spain in America, 1492-1830, New Haven : 2006; the forum on "Entangled Histories" in American Historical Review 112 (3, June 2007); and William A. pettigrew, "Free to Enslave : Politics and the Escalation of Britain's Transatlantic Slave Trade, 1688-1714," William and Mary Quarterly 64 (1, January 2007): 3-38. 4 See the idea of a planter / merchant connection in both Paul Cheney's analysis of the underpinnings of French failure, then success in the Caribbean, "A False Dawn for Enlightenment Cosmopolitanism? Franco-American Trade during the American War of Independence," William and Mary Quarterly 63 (3, July 2006): 463-488, especially 465; and William Pettigrew's article on the slave trade quoted above. On the potential role of colonial profit as an engine for growth, cf. Guillaume Daudin, Commerce et prospérité: La France au XVIIIe siècle, Paris: 2005. 5 Quote by Ian K.
18 Thomas M. Truxes, "Transnational Trade in the Wartime North Atlantic: the Voyage of the Snow Recovery," Business History Review 79 (4, Winter 2005): 751-779; Silvia Marzagalli, "Establishing Transatlantic Trade Networks in Time of War: Bordeaux and the United States, 1793-1815," Business History Review 79 (4, Winter 2005): 811-844; François Crouzet, "Itinéraires atlantiques d'un capitaine marchand américain pendant les guerres « napoléoniennes »," in Guerre et économie dans l'espace atlantique, op. cit., 27-41.
21 On the dismal trade record between the two erstwhile allies in spite of the so-called "Treaty of Amity and Commerce" of 1778, cf. Paul Cheney, "A False Dawn for Enlightenment Cosmopolitanism? Franco-American Trade during the American War of Independence," William and Mary Quarterly 63 (3, July 2006): 463-488, and Allan Potofsky, "The Political Economy of the French-American Debt Debate: The Ideological Uses of Atlantic Commerce, 1787 to 1800," William and Mary Quarterly 63 (3, July 2006): 489-516.
25 There is very little secondary material on civilian movements in time of war during the 1700s. Numerous examples of safe passages can be found in various accounts of the time: see e.g. G. R. de Beer, "The Relations between Fellows of the Royal Society and French Men of Science When France and Britain were at War," Notes and Records of the Royal Society of London 9 (2, May 1952): 244-299; also Garland Cannon, "Sir William Jones and Anglo-American Relations during the American Revolution," Modern Philology 76 (1, August 1978): 29-45. On the other hand, civilians seem to have been routinely captured and jailed, cf. e.g. Betsy Knight, "Prisoner Exchange and Parole in the American Revolution," William and Mary Quarterly 48 (2, April 1991): 201-222.
26 Gavin Daly, "Napoleon and the 'City of Smugglers', 1810-1814," Historical Journal 50 (2, June 2007): 333-352; John J. McCusker, "Introduction," special section on "Trade in the Atlantic World," Business History Review 79 (4, Winter 2005): 697-713.
27 Amory Collection, vol. 46 ("Copies of letters sent 1781-1789"), Letter dated January 1, 1781.
28 Appleton Papers, Box 3 "General Correspondence, etc. 1815-1825", Folder 1, "1815, Jan-June," Nathan Appleton to Eben Appleton, March 11, 1815.
Steele, "Bernard Bailyn's American Atlantic," History and Theory 46 (1, February 2007): 48. Aggregate studies, a specialty of the French historical school, are best exemplified by Paul Butel, La croissance
commerciale bordelaise dans la seconde moitié du XVIIIe siècle, Lille: 1973; less well-known, Charles Carrière's Négociants marseillais au XVIIIe siècle: contribution à l'étude des économies maritimes, Marseille: 1973, is actually more detailed, and bridges the gap with more recent studies. Famous collective regional studies of merchant groups include Bernard Bailyn, The New England Merchants in the Seventeenth Century, Cambridge: 1955; Thomas Doerflinger, A Vigorous Spirit of Enterprise: Merchants and Economic Development in Revolutionary Philadelphia, Chapel Hill: 1986; and Cathy Matson, Merchants and Empire: Trading in Colonial New York, Baltimore: 1998.
Cf. Michel Morineau, Incroyables gazettes et fabuleux métaux. Les retours des trésors américains d'après les gazettes hollandaises, Paris: 1984. Even for export industries, the impact of Atlantic markets could be highly variable, cf. Claude Cailly, "Guerre et conjoncture textile dans le Perche," in Silvia Marzagalli and Bruno Marnot, dir., Guerre et économie dans l'espace atlantique du XVIe au XXe siècle, Bordeaux: 2006, 116-138.
Alison Games cogently raises this issue, along with many others, in her already quoted "Atlantic History" paper.
The research for this paper was funded in part by a DRI CNRS grant, as well as by UMR 8168. I want to thank Allan Potofsky for his | 54,075 | [
"3926"
] | [
"176",
"110860"
] |
01339246 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://hal.science/hal-01339246/file/Liris-6268.pdf | Sabina Surdu
email: sabina.surdu@insa-lyon.fr
Yann Gripay
email: yann.gripay@insa-lyon.fr
Vasile-Marian Scuturici
email: vasile-marian.scuturici@insa-lyon.fr
Jean-Marc Petit
email: jean-marc.petit@insa-lyon.fr
P-Bench: Benchmarking in Data-Centric Pervasive Application Development
Keywords: pervasive environments, data-centric pervasive applications, heterogeneous data, continuous queries, benchmarking
Developing complex data-centric applications, which manage intricate interactions between distributed and heterogeneous entities from pervasive environments, is a tedious task. In this paper we pursue the difficult objective of assessing the "easiness" of data-centric development in pervasive environments, which turns out to be much more challenging than simply measuring execution times in performance analyses and requires highly qualified programmers. We introduce P-Bench, a benchmark that comparatively evaluates the easiness of development using three types of systems: (1) the Microsoft StreamInsight unmodified Data Stream Management System, LINQ and C#, (2) the StreamInsight++ ad hoc framework, an enriched version of StreamInsight, that meets pervasive application requirements, and (3) our SoCQ system, designed for managing data, streams and services in a unified manner. We define five tasks that we implement in the analyzed systems, based on core needs for pervasive application development. To evaluate the tasks' implementations, we introduce a set of metrics and provide the experimental results. Our study allows differentiating between the proposed types of systems based on their strengths and weaknesses when building pervasive applications.
Introduction
Nowadays we are witnessing the commencement of a new information era. The Internet as we know it today is rapidly advancing towards a worldwide Internet of Things [START_REF]The Internet of Things[END_REF], a planetary web that interconnects not only data and people, but also inanimate devices. Due to technological advances, we can activate the world of things surrounding us by enabling distributed devices to talk to one another, to signal their presence to users and to provide them with various data and functionalities.
In [START_REF] Weiser | The Computer for the 21st Century[END_REF], Mark Weiser envisioned a world where computers vanish in the background, fitting smoothly into the environment and gracefully providing information and services to users, rather than forcing them to adapt to the intricate ambiance from the computing realm. Computing environments that arise in this context are generally referred to as pervasive environments, and applications developed for these environments are called pervasive applications. To achieve easy to use pervasive applications in a productive way, we must accomplish the realization of easy to develop applications.
Developing complex data-centric applications, which manage intricate interactions between distributed and heterogeneous entities from pervasive environments, is a tedious task, which often requires technical areas of expertise spanning multiple fields. Current implementations, which use DBMSs, Data Stream Management Systems (DSMSs) or just ad hoc programming (e.g., using Java, C#, .NET, JMX, UPnP, etc), cannot easily manage pervasive environments. Recently emerged systems, like Aorta [START_REF] Xue | Action-Oriented Query Processing for Pervasive Computing[END_REF], Active XML [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF] or SoCQ [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], aim at easing the development of data-centric applications for pervasive environments. We call such systems Pervasive Environment Management Systems (PEMSs).
In this paper we pursue the difficult objective of assessing the "easiness" of data-centric development in pervasive environments, which turns out to be much more challenging than simply measuring execution times in performance analyses and requires highly qualified programmers. The main challenge lies in how to measure the easiness of pervasive application development and what metrics to choose for this purpose. We introduce Pervasive-Bench (P-Bench), a benchmark that comparatively evaluates the easiness of development using three types of systems: (1) the Microsoft StreamInsight unmodified DSMS [START_REF] Kazemitabar | Geospatial Stream Query Processing using Microsoft SQL Server StreamInsight[END_REF], LINQ and C#, (2) the StreamInsight++ ad hoc framework, an enriched version of StreamInsight, which meets pervasive application requirements, and (3) our SoCQ PEMS [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], designed for data-centric pervasive application development. We define five tasks that we implement in the analyzed systems, based on core needs for pervasive application development. At this stage, we focus our study on applications built by a single developer. To evaluate the tasks' implementations and define the notion of easiness, we introduce a set of metrics. P-Bench allows differentiating between the proposed types of systems based on their strengths and weaknesses when building pervasive applications.
P-Bench is driven by our experience in building pervasive applications with the SoCQ system [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. It also substantially expands our efforts to develop the ColisTrack testbed for SoCQ, which materialized in [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF]. Nevertheless, the benchmark can evaluate systems other than SoCQ, being in no way limited by this PEMS.
We present a motivating scenario, in which we monitor medical containers transporting fragile biological content between hospitals, laboratories and other places of interest. A pervasive application developed for this scenario handles slower-changing data, similar to those found in classical databases, and distributed entities, represented as data services, that provide access to potentially unending dynamic data streams and to functionalities. Under reasonable assumptions drawn from these types of scenarios, where we monitor data services that provide streams and functionalities, P-Bench has been devised to be a comprehensive benchmark.
To the best of our knowledge, this is the first study in the database community that addresses the problem of evaluating easiness in data-centric pervasive application development. Related benchmarks, like TPC variants [START_REF]Transaction Processing Performance Council[END_REF] or Linear Road [START_REF] Arasu | Linear Road: A Stream Data Management Benchmark[END_REF], focus on performance and scalability. While also examining these aspects, P-Bench primarily focuses on evaluating how easy it is to code an application, including deployment and evolution as well. This is clearly a daunting process, much more challenging than classical performance evaluation. We currently focus on pervasive applications that don't handle big data, in the order of petabytes or exabytes, e.g., home monitoring applications in intelligent buildings or container tracking applications. We believe the scope of such applications is broad enough to allow us to focus on them, independently of scalability issues. We strive to fulfill Jim Gray's criteria [START_REF] Gray | Benchmark Handbook: For Database and Transaction Processing Systems[END_REF] that must be met by a domain-specific benchmark, i.e., relevance, portability, and simplicity. Another innovative feature of P-Bench is the inclusion of services, as dynamic, distributed and queryable data sources, which dynamically produce data, accessed through stream subscriptions and method invocations. In P-Bench, services become first-class citizens. We are not aware of similar works in this field. Another contribution of this paper is the integration of a commercial DSMS with service discovery and querying capabilities in a framework that can manage a pervasive environment.
This paper is organized as follows. Section 2 provides an insight into the requirements of data-centric pervasive application development, highlighting exigencies met by DSMSs, ad hoc programming and PEMSs. In Section 3 we describe the motivating scenario and we define the tasks and metrics from the benchmark. Section 4 presents the systems we assess in the benchmark, focusing on specific functionalities. In Section 5 we provide the results of our experimental study. Section 6 discusses the experimental results, highlighting the benefits and limitations of our implementations. Section 7 concludes this paper and presents future research directions.
Overview of Data-Centric Pervasive Applications
Pervasive applications handle data and dynamic data services distributed over networks of various sizes. Services provide various resources, like streams and functionalities, and possibly static data as well. The main difficulties are to seamlessly integrate heterogeneous, distributed entities in a unified model and to homogeneously express the continuous interactions between them via declarative queries. Such requirements are met, to different extents, by pervasive applications, depending on the implementation.
DSMSs. DSMSs usually provide a homogeneous way to view and query relational data and streams, e.g., STREAM [START_REF] Arasu | STREAM: The Stanford Stream Data Manager[END_REF]. Some of them provide the ability to handle large-scale data, like large-scale RSS feeds in the case of RoSeS [START_REF] Creus Tomàs | RoSeS: A Continuous Query Processor for Large-Scale RSS Filtering and Aggregation[END_REF], or the ability to write SQL-like continuous queries. Nevertheless, developing pervasive applications using only DSMSs introduces significant limitations, highlighted by P-Bench.
Ad hoc programming using DSMSs. Ad hoc solutions, which combine imperative languages, declarative query languages and network protocols, aim at handling complex interactions between distributed services. Although they lead to the desired result, they are not long-term solutions, as P-Bench will show.
PEMSs. These systems aim at reconciling heterogeneous resources, like slower-changing data, streams and functionalities exposed by services in a unified representation in the query engine. PEMSs can be realized with many systems or approaches, such as Aorta [START_REF] Xue | Action-Oriented Query Processing for Pervasive Computing[END_REF], Active XML [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF], SoCQ [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF] or HYPATIA [START_REF] Cuevas-Vicenttín | Evaluating Hybrid Queries through Service Coordination in HYPATIA (demo)[END_REF], to mention a few.
P-Bench
The P-Bench benchmark aims at providing an evaluation of different approaches to building data-centric pervasive applications. The common objective of benchmarks is to provide some way to evaluate which system yields better performance indicators when implementing an application, so that a "better" system can be chosen for the implementation [START_REF] Pugh | Technical Perspective: A Methodology for Evaluating Computer System Performance[END_REF]. Although we also consider performance in P-Bench, our focus is set on evaluating the easiness of data-centric pervasive application development with different types of systems: a DSMS, ad hoc programming and a PEMS.
To highlight the advantages of declarative programming, we ask that the evaluated systems implement tasks based on declarative queries. Some implementations will also require imperative code, others will not. We argue that one dimension of investigation when assessing the easiness of building pervasive applications is imperative versus declarative programming. A pervasive application seen as a declarative query over a set of services from the environment provides a logical view over those services, abstracting physical access issues. It also enables optimization techniques and doesn't require code compilation. When imperative code is included, restarting the system to change a query, i.e., recompiling the code, is considered as an impediment for the application developer.
Scenario and Framework
In our scenario, fragile biological matter is transported in sensor-enhanced medical containers, by different transporters. During the containers' transportation, temperature, acceleration, GPS location and time must be observed. Corresponding sensors are embedded in the container: a temperature sensor to verify temperature variations, an accelerometer to detect high acceleration or deceleration, a timer to control the deadline beyond which the transportation is unnecessary and a GPS to know the container position at any time.
A supervisor determines thresholds for the different quality criteria a container must meet, e.g., some organic cells cannot be exposed to more than 37 °C. When a threshold is exceeded, the container sends a text message (e.g., via SMS) to its supervisor.
In our scenario, only part of these data are static and can be stored in classical databases: the medical containers' descriptions, the different thresholds. All the other data are dynamically produced by distributed services and accessed through method invocations (e.g., get current location) or stream subscriptions (e.g., temperature notifications). Moreover, services can provide additional functionalities that change the environment, like sending some messages (e.g., by SMS) when an alert is triggered; they can also provide access to data stored in relations, if necessary. Therefore, our scenario is representative for pervasive environments, where services provide static data, dynamic data streams and methods that can be invoked [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. It is also compatible with existing scenarios for DSMSs (like the one from Linear Road), but services are promoted as first-class citizens.
A data service that models a device in the environment has a URL and accepts a set of operations via HTTP. P-Bench contains car, medical container and alert services, which expose streams of car locations, of medical container temperature notifications, the ability to send alert messages when exceptional situations occur, etc.
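Since the exact REST vocabulary of these services is not spelled out here, the following C# fragment is only a hedged illustration of what talking to a car service over HTTP may look like; the service URL, the /location resource path and the textual response format are assumptions, not the actual UbiWare protocol.

using System;
using System.Net;

class CarServiceClientSketch
{
    static void Main()
    {
        // Hypothetical URL of a car service available on the network.
        string serviceUrl = "http://192.168.1.42:8080/carService";

        using (var http = new WebClient())
        {
            // One-shot read of the current location (assumed resource name).
            string location = http.DownloadString(serviceUrl + "/location");
            Console.WriteLine(location);
        }

        // A stream subscription would instead keep an HTTP connection open
        // (or poll periodically) and push every received location event to
        // the consuming query engine.
    }
}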
We developed a framework to implement this scenario (Figure 1). Since we use a REST/HTTP-based protocol to communicate with services, they can be integrated independently of the operating system and programming language. Moreover, assessed systems can be equipped with modules for dynamic service discovery.
The World Simulator Engine is a C# application that runs on a Windows 2008 Server machine and simulates (i.e., generates) services in the environment. The simulator accepts different options, like the number of cars, the places they visit, the generation of medical containers, etc. Services' data rate is also parameterizable (e.g., how often a car emits its location). The engine uses the Google Maps Directions API Web Service to compute real routes of cars.
The Control & Visualization Interface allows visualizing services in the World, and writing and sending declarative queries to a query engine. The Visualization Interface runs on an Apache Web server; the server side is developed in PHP. On the client side, the web user interface is based on the Google Maps API to visualize the simulated world on a map, and uses Ajax XML HTTP Request to load the simulated state from the server side. Several remote clients can connect simultaneously to the same simulated environment, by using their Web browser. This user interface is not mandatory for our benchmark, but it does provide a nice way of visualizing services and the data they supply. Declarative queries can be written using an interface implemented as an ASP.NET Web application that runs on the Internet Information Services server. We thoroughly describe this scenario and framework in our previous paper on ColisTrack [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF].
In our experiments we eliminate the overhead introduced by our web interface. In the StreamInsight and StreamInsight++ implementations we use an in-process server and send queries from the C# application that interacts with the server. In the case of SoCQ, we write and send queries from SoCQ's interface.
Benchmark Tasks
We define five benchmark tasks to evaluate the implementation of our scenario with the assessed systems. The main challenge in pervasive applications is to homogeneously express interactions between resources provided by dynamically discovered services, e.g., data streams, methods and static data. Therefore, we wrap tasks' definitions around functionalities dictated by these necessities. Each task is built around a main functionality that has to be implemented by a system in order to fulfill the task's objective. The parameters of the tasks are services specific to our scenario. These parameters can easily be changed, so that a task can be reformulated on any pervasive environment-based scenario, whilst maintaining its specified objective. The difficulty of the tasks is incremental. We start with a task that queries a single data stream from a given service, and we end with a task that combines heterogeneous resources from dynamically discovered services of different types.
Since P-Bench is concerned with assessing development in pervasive environments, our tasks are defined in the scope of pervasive applications. Other types of applications like data analysis applications are not in the focus of our current study.
Task 0: Startup. The objective of this task is to prepare the assessed systems for the implementation of the scenario. It includes the system-specific description of the scenario, i.e., data schema, additional application code, etc.
Task 1: Localized, single stream supervision. The objective of this task is to monitor a data stream provided by a service that had been localized in advance, i.e., dynamic service discovery is not required. Task 1 tracks a single moving car and uses a car service URL. The user is provided, at any given time instant, with the last reported location of the monitored car.
Task 2: Multiple streams supervision. The objective of this task is to monitor multiple data streams provided by dynamically discovered services. Task 2 tracks all the moving cars. The user is provided, at any given time instant, with the last reported location of each car.
Task 3: Method invocation. The objective of this task is to invoke a method provided by a dynamically discovered service. Task 3 provides the user with the current location of a medical container, given its identifier.
Task 4: Composite data supervision. The objective of this task is to combine static data, and method invocations and data streams provided by dynamically discovered services, in a monitoring activity. Task 4 monitors the temperatures of medical containers and sends alert messages when the supervised medical containers exceed established temperature thresholds.
Benchmark Metrics
Similarly to the approach from [START_REF] Fenton | Software Metrics: A Rigorous and Practical Approach[END_REF], we identify a set of pervasive application quality assurance goals: easy development, easy deployment and easy evolution. Since easiness alone cannot be a sole criterion for choosing a system, we also introduce the performance goal to assess the efficiency of a system under realistic workloads. Based on these objectives we define a set of metrics that we think fits best for evaluating the process of building pervasive applications.
We define the life cycle of a task as the set of four stages that must be covered for its accomplishment. Each stage is assessed through related metrics and corresponds to one of the quality assurance goals:
- development: metrics from this stage assess the easiness of task development;
- deployment: metrics from this stage evaluate the easiness of task deployment;
- performance: in this stage we assess system performance, under realistic workloads;
- evolution: metrics from this stage estimate the impact of the task evolution, i.e., how easy it is to change the current implementation of the task, so that it adapts to new requirements. The task's objective remains unmodified.
By defining the life cycle of a task in this manner, we adhere to the goal of agility [START_REF] Rys | Scalable SQL[END_REF] in P-Bench. Agility spans three life cycle stages: development, deployment and evolution. Since we are not concerned with big data, we don't focus on scale agility.
We now define a set of metrics for each of the four stages: Development. We separate task development on two levels: imperative code (written in an imperative programming language, e.g., C#) and declarative code (written in a declarative query language, e.g., Transact-SQL). We measure the easiness and speed in the development of a task through the following metrics:
- LinesOfImperativeCode outputs the number of lines of imperative code required to implement the task (e.g., code written in Java, C#). The tool used to assess this metric is SLOCCount [START_REF] Wheeler | Counting Source Lines of Code (SLOC)[END_REF]. We evaluate the middleware used to communicate with services in the environment, but we exclude predefined class libraries from our assessment (e.g., classes from the .NET Base Class Library);
- NoOfDeclarativeElements provides the number of declarative elements in the implementation of the task. We normalize a query written in a declarative language in the following manner. We consider a set of language-specific declarative keywords describing query clauses, for each of the evaluated systems. The number of declarative elements in a query is given by the number of keywords it contains (e.g., a SELECT FROM WHERE query in Transact-SQL contains three declarative elements);
- NoOfQueries outputs the number of declarative queries required for the implementation of the task;
- NoOfLanguages gives the number of imperative and declarative languages that are used in the implementation of the task;
- DevelopmentTime roughly estimates the number of hours spent to implement the task, including developer training time and task testing, but excluding the time required to implement the query engine or the middleware used by the systems to interact with services.
Deployment. The deployment stage includes metrics:
- NoOfServers gives the number of servers required for the task (e.g., the StreamInsight Server);
- NoOfSystemDependencies outputs the number of system-specific dependencies that must be installed for the task;
- IsOSIndependent indicates whether the task can be deployed on any operating system (e.g., Windows, Linux, etc.).
Performance. Once we implemented and deployed the task, we can measure the performance of this implementation. We need now to rigorously define accuracy and latency requirements.
The accuracy requirement states that queries must output correct results. Our work for an accuracy checking framework in a pervasive environment setting is ongoing. Using this framework we will compute the correct results for queries in a given task, we will calculate the results obtained when implementing the task with an assessed system, and finally, we will characterize the accuracy of the latter results using Precision and Recall metrics. We will consider both the results of queries and the effects that query executions have on the environment.
We place an average latency requirement of 5 seconds on continuous queries, i.e., on average, up to 5 seconds can pass between the moment an item (i.e., a tuple or an event) is fed into a query and the moment the query outputs a result based on this item. We set a query execution time of 60 seconds. When assessing performance for systems that implement dynamic service discovery, a query starts only after all the required services have been discovered, but during query execution both StreamInsight++ and SoCQ continue to process messages from services that appear and disappear on and from the network.
To evaluate performance, we consider the average latency and accuracy requirements described above and define a set of metrics for continuous queries. In the current implementation, the metrics are evaluated by taking into account the average latency requirement, but our accuracy checking framework will allow us to evaluate them with respect to the accuracy constraints as well. The performance stage metrics are:
- MaxNoDataSources gives the maximum number of data sources (i.e., services) that can feed one continuous query, whilst meeting accuracy and latency requirements. We assign a constant data rate of 10 events/minute for each data source;
- MaxDataRate outputs the maximum data rate for the data sources that feed a continuous query, under specified accuracy and latency requirements. All the sources are supposed to have the same constant data rate. This metric is expressed as a number of events per second. We are not interested in extremely high data rates for incoming data, so we will evaluate the task up to a data rate of 10.000 events/second. Unless specified otherwise in the task, this metric is evaluated for 10 data sources;
- NoOfEvents is the number of processed events during query execution when assessing the MaxDataRate metric. This metric describes the limitations of our implementations and hardware settings, more than system performance;
- AvgLatency outputs the average latency for a continuous query, given a constant data rate of 10 events/second for the data sources that feed the query. AvgLatency is expressed in milliseconds and is computed across all the data sources (10 by default) that feed a continuous query, under specified accuracy requirements (a formula is given after this list).
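Expressed as a formula, and consistent with the measurement procedure detailed in the performance assessment below, AvgLatency over the N events of a run is simply the mean of the per-event delays between the instant an event is fed into the query and the instant the corresponding result is produced:

AvgLatency = \frac{1}{N} \sum_{i=1}^{N} \left( t_i^{\mathrm{out}} - t_i^{\mathrm{in}} \right)

where t_i^{in} is the instant event i enters the query engine and t_i^{out} the instant its result leaves it.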
Evolution. The evolution stage encompasses metrics that quantify the impact that new requirements or changes have on the whole task. The evolution of a task does not suffer radical changes (i.e., we don't modify a task that subscribes to a stream, to invoke a method in its updated version). A task's parameters, e.g., the services, may change, but the specified objective for a task is maintained. This stage contains the following metrics:
- ChangedImperativeCode outputs the number of lines of imperative code that need to be changed (added, modified or removed), when the task evolves, in order to accomplish newly specified requirements. Lines of imperative code are counted like in the case of the LinesOfImperativeCode metric;
- ChangedDeclarativeElements provides the number of declarative elements that need to be changed in any way (added, modified or removed), in order to update the task. Counting declarative elements is performed like in the case of the NoOfDeclarativeElements metric.
Metrics in this stage provide a description of the reusability dimension when developing pervasive applications. We are assessing the energy and effort devoted to the process of task evolution.
Assessed Systems
The DSMS we use in P-Bench is StreamInsight. To accomplish the tasks in an ad hoc manner, we enrich StreamInsight with dynamic service discovery features, obtaining a new framework: StreamInsight++. As a PEMS, we use SoCQ.
To communicate with services in the environment, we use UbiWare, the middleware we developed in [START_REF] Scuturici | UbiWare: Web-Based Dynamic Data & Service Management Platform for AmI[END_REF] to facilitate application development for ambient intelligence.
StreamInsight was chosen based on the high familiarity with the Microsoft .NET-based technologies. We chose SoCQ because of the expertise our team has with this system and the ColisTrack testbed. We don't aim at conducting a comprehensive study of DSMSs or PEMSs, but P-Bench can as well be implemented in other DSMSs like [START_REF] Streambase | [END_REF], [START_REF] Creus Tomàs | RoSeS: A Continuous Query Processor for Large-Scale RSS Filtering and Aggregation[END_REF], [START_REF] Arasu | STREAM: The Stanford Stream Data Manager[END_REF], or PEMSs like [START_REF] Abiteboul | A Framework for Distributed XML Data Management[END_REF] or [START_REF] Cuevas-Vicenttín | Evaluating Hybrid Queries through Service Coordination in HYPATIA (demo)[END_REF].
Microsoft StreamInsight
Microsoft StreamInsight [START_REF] Kazemitabar | Geospatial Stream Query Processing using Microsoft SQL Server StreamInsight[END_REF] is a platform for the development and deployment of Complex Event Processing (CEP) applications. It enables data stream processing using the .NET Framework. For pervasive application development, additional work has to be done in crucial areas, like service discovery and querying. To execute queries on the StreamInsight Server, one requires a C# application to communicate with the server. We enrich this application with a Service Manager module, which handles the interaction with the services in the environment and which is based on UbiWare.
As described in the technical documentation [19], StreamInsight processes event streams coming in from multiple sources, by executing continuous queries on them. Continuous queries are written in Language-Integrated Query (LINQ) [START_REF] Meijer | The World According to LINQ[END_REF]. StreamInsight's run-time component is the StreamInsight server, with its core engine and the adapter framework. Input adapters read data from event sources and deliver them to continuous queries on the server, in a push manner. Queries output results which flow, using pull mechanisms, through output adapters, in order to reach data consumers.
Figure 2 shows the architecture of an application implemented with StreamInsight (similar to [19]). Events flow from network sources in the pervasive environment through input adapters into the StreamInsight engine. Here they are processed by continuous queries, called standing queries. For simplicity, we depict data streaming in from one car service and feeding one continuous query on the server. The results are streamed through an output adapter to a consumer application. Static reference data (e.g., in-memory stored collections or SQL Server data) can be included in the LINQ standing queries specification.
StreamInsight++
StreamInsight contains a closed source temporal query engine that cannot be changed. Instead, we enrich the Service Manager with dynamic service discovery capabilities, using ad hoc programming, thus obtaining the StreamInsight++.
The enriched Service Manager allows the user of StreamInsight++ to write queries against dynamically discovered services. It can be thought of as the middleware between the system and the services in the environment, or the service wrapper that allows both service discovery and querying. The service access mechanism uses the REST/HTTP-based protocol mentioned in Section 3. The Service Manager delivers data from discovered services to input adapters.
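Schematically, the discovery side of the enriched Service Manager amounts to the bookkeeping sketched below. This is a hedged illustration only; the real implementation additionally deals with the REST protocol, threading and the wiring to the StreamInsight input adapters, and all names here are ours.

using System;
using System.Collections.Generic;

class ServiceManagerSketch
{
    // Stream subscriptions of the currently discovered services, keyed by service URL.
    private readonly Dictionary<string, IDisposable> subscriptions =
        new Dictionary<string, IDisposable>();

    // Called when a discovery message announces a new service on the network.
    public void OnServiceAppeared(string serviceUrl)
    {
        if (subscriptions.ContainsKey(serviceUrl)) return;
        subscriptions[serviceUrl] = SubscribeToStream(serviceUrl);
    }

    // Called when a previously discovered service becomes unavailable.
    public void OnServiceDisappeared(string serviceUrl)
    {
        IDisposable subscription;
        if (subscriptions.TryGetValue(serviceUrl, out subscription))
        {
            subscription.Dispose();            // stop feeding the input adapter
            subscriptions.Remove(serviceUrl);
        }
    }

    // Placeholder: in the real Service Manager this opens the HTTP stream
    // subscription and pushes each incoming event to an input adapter.
    private IDisposable SubscribeToStream(string serviceUrl)
    {
        return new DummySubscription();
    }

    private class DummySubscription : IDisposable { public void Dispose() { } }
}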
SoCQ
We designed and implemented the Service-oriented Continuous Query (SoCQ) engine [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF], a PEMS that enables the development of complex applications for pervasive environments using declarative service-oriented continuous queries. These SQL-like queries combine conventional and non-conventional data, namely slower-changing data, dynamic streams and functionalities, provided by services.
Within our data-oriented approach, we built a complete data model, namely the SoCQ data model, which enables services to be modeled in a unified manner. It also provides a declarative query language to homogeneously handle data, streams and functionalities: Serena SQL. In a similar way to databases, we defined the notion of relational pervasive environment, composed of several eXtended Dynamic Relations, or XD-Relations. The schema of an XD-Relation is composed of real and/or virtual attributes [START_REF] Gripay | A Simple (yet Powerful) Algebra for Pervasive Environments[END_REF]. Virtual attributes represent parameters of various methods, streams, etc, and may receive values through query operators. The schema of an XD-Relation is further associated with binding patterns, representing method invocations or stream subscriptions.
SoCQ includes service discovery capabilities in the query engine. The service discovery operator builds XD-Relations that represent sets of available services providing required data. For example, an XD-Relation car could be the result of such an operator, and be continuously updated when new car services become available and when previously discovered services become unavailable.
Benchmark Experiments
In this section we present the comparative evaluation of the chosen systems. For each task, we will describe its life cycle on StreamInsight, StreamInsight++ and SoCQ. We start with the development stage, continue with deployment and performance and end with task evolution. We rigorously assess each task through the set of metrics we previously defined. At the end of each subsection dedicated to a task we provide a table with metrics results and a short discussion. The experiments were conducted on a Windows Server 2008 machine, with a 2.67GHz Intel Xeon X5650 CPU (4 processors) and 16 GB RAM.
Assessing Performance
We present our system-specific evaluation approach for the performance stage:
StreamInsight and StreamInsight++. In this case we use an in-process server. We connect to one or more service streams and deliver incoming data to an input adapter. We assess the time right before the input adapter enqueues an event on the server and the time right after the output adapter dequeues the event from the server. The time interval delimited by the enqueue and dequeue moments represents the event's latency. Average latency is computed incrementally based on individual event latencies. We also enqueue CTI events on the server, i.e., special events specific to StreamInsight, used to advance application time, but we compute average latency by taking into account only events received from environment services.
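The incremental average latency computation itself is elementary; the fragment below is a hedged sketch of the bookkeeping performed around the enqueue and dequeue instants, not the actual benchmark tooling.

using System;

class LatencyAverager
{
    private long count;
    private double meanMs;   // running mean of per-event latencies, in milliseconds

    // Called once per payload event, with its enqueue and dequeue instants.
    public void Record(DateTime enqueuedAt, DateTime dequeuedAt)
    {
        double latencyMs = (dequeuedAt - enqueuedAt).TotalMilliseconds;
        count++;
        meanMs += (latencyMs - meanMs) / count;   // incremental mean update
    }

    public double AverageLatencyMs { get { return meanMs; } }
}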
By evaluating latency in this manner, we assess the performance of the StreamInsight engine together with the adapter framework and middleware that we implemented, and not the pure performance of the StreamInsight engine.
SoCQ. The average latency is computed by comparing events from streams of data services, to events from the query output stream. An event from a service is uniquely identified by the service URL and the service-generated event timestamp. A unique corresponding event is then expected from the query output stream. A latency measurement tool has been developed to support the latency computation, based on UbiWare: it launches the task query in the query engine, connects to the query result output stream, connects to a number of services, and then matches expected events from services and query output events from the query engine. The difference between the arrival time of corresponding events at the measurement tool provides a latency for each expected event.
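The matching step performed by this measurement tool boils down to a lookup on the (service URL, service timestamp) pair; the fragment below is an illustrative sketch of that idea, with assumed names, rather than the actual tool.

using System;
using System.Collections.Generic;

class LatencyMatcherSketch
{
    // Arrival time of each event observed on a service stream,
    // keyed by (service URL, service-generated timestamp).
    private readonly Dictionary<Tuple<string, DateTime>, DateTime> expected =
        new Dictionary<Tuple<string, DateTime>, DateTime>();

    public void OnServiceEvent(string serviceUrl, DateTime serviceTimestamp)
    {
        expected[Tuple.Create(serviceUrl, serviceTimestamp)] = DateTime.UtcNow;
    }

    // Returns the per-event latency in milliseconds, or null if no
    // corresponding service event was recorded.
    public double? OnQueryOutputEvent(string serviceUrl, DateTime serviceTimestamp)
    {
        var key = Tuple.Create(serviceUrl, serviceTimestamp);
        DateTime serviceEventArrivedAt;
        if (!expected.TryGetValue(key, out serviceEventArrivedAt)) return null;
        expected.Remove(key);
        return (DateTime.UtcNow - serviceEventArrivedAt).TotalMilliseconds;
    }
}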
Task 0: Startup
The objective of this task is to prepare the evaluated systems for the implementation of Tasks 1 to 4. The latter can be implemented independently from one another, but they all require the prior accomplishment of the Startup task. We describe the schema of our scenario in system-specific terms. We also present any additional modules that need to be implemented. Task 0 uses the UbiWare middleware [START_REF] Scuturici | UbiWare: Web-Based Dynamic Data & Service Management Platform for AmI[END_REF] previously mentioned, to interact with services in the environment. UbiWare uses a REST/HTTP-based protocol for this purpose.
The developer who implemented the Startup task in StreamInsight and StreamInsight++ has a confident level of C#, .NET and LINQ, but has never developed applications for StreamInsight before. We don't embark on an incremental development task, evolving from StreamInsight to StreamInsight++. We consider them to be independent, separate systems, hence any common features are measured in the corresponding metrics, for each system.
The same developer also accomplished the Startup task in SoCQ, without having any prior knowledge about the system and the SQL-like language it provides.
Development. StreamInsight and StreamInsight++. We implement C# solutions that handle the interaction with the StreamInsight server. They contain entities specific to StreamInsight (input and output configuration classes and adapters, etc) and entities that model data provided by services in P-Bench (car location, temperature notification classes, etc). To interact with environment services, these implementations also use and enrich the Service Manager specific to StreamInsight or StreamInsight++.
StreamInsight. Task 1 is the only task that can be fully implemented with StreamInsight, as it doesn't require service discovery (the URL of the car service that represents the car to be monitored is provided). Therefore, we implement a C# solution, which handles the interaction with the StreamInsight server, to prepare the system for Task 1. The solution contains the following entities:
- a car location class, that models location data provided by a car service (latitude, longitude, timestamp and car id);
- a car data source module, that is part of the Service Manager, and delivers incoming car locations (from the given service URL) to an input adapter (a minimal sketch of this push pattern is given after this list);
- input and output configuration classes, to specify particulars of data sources and consumers;
- input and output adapter factory classes, responsible for creating input and output adapters;
- a typed input adapter, which receives a specific car location event from the car data source in a push manner and enqueues this event, using push mechanisms, into the StreamInsight server;
- an output adapter, which dequeues results from the query on the StreamInsight server;
- an additional benchmark tools class, that manages application state, computes latency, etc.
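The push chain between the car data source and the input adapter mentioned in this list can be reduced to the callback pattern below; this is a deliberately simplified stand-in with illustrative names, not the actual adapter classes.

using System;

// Minimal push-based data source: the Service Manager calls Publish() for every
// item received from the remote service, and the input adapter registers a
// callback that enqueues the item into the StreamInsight server.
class PushDataSource<T>
{
    public event Action<T> ItemArrived;

    public void Publish(T item)
    {
        var handler = ItemArrived;
        if (handler != null) handler(item);
    }
}

class DataSourceSketch
{
    static void Main()
    {
        var carSource = new PushDataSource<string>();
        // Input adapter side: enqueue each pushed item into the engine.
        carSource.ItemArrived += item => Console.WriteLine("enqueue: " + item);
        // Service Manager side: push an item received from the car service.
        carSource.Publish("car-1;45.76;4.84;2013-05-01T10:00:00Z");
    }
}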
StreamInsight++. StreamInsight++ can implement all the tasks. The C# solution we built to enrich StreamInsight and to communicate with the StreamInsight server is much more complex than the one used with raw StreamInsight, but there are some common features. The solution contains the following entities:
- apart from the car location class, we developed C# classes that model medical containers and their temperature notifications, i.e., medical container and temperature notification;
- we enriched the car data source module to encompass dynamic discovery, so as to deliver car locations from dynamically discovered cars;
- we added medical containers data source modules specific to Tasks 3 and 4, respectively, i.e., medical container data source and temperature notification data source;
- additional input adapter factory classes were developed, for the newly added input adapters (for medical containers and medical containers temperature notifications);
- classes that contain the input configuration, output configuration, output adapter factory and output adapter were maintained (we chose to implement an untyped output adapter);
- extra input adapters were developed, to handle the diversity of input events from the pervasive environment, i.e., medical containers dynamic discovery messages and medical containers temperature notifications;
- the benchmark tools class was extended to encompass methods specific to Tasks 3 and 4.
SoCQ. All the tasks can be implemented with SoCQ. SoCQ already contains the middleware required for the services in P-Bench, but to provide a fair comparison with the other systems, we will assess the code in SoCQ's middleware as well.
We provide a SoCQ schema of our scenario, written in Serena SQL. Listing 1 depicts the set of XD-Relations, which abstract the distributed entities in the pervasive environment. This is the only price the application developer needs to pay to easily develop data-centric pervasive applications with SoCQ: gain an understanding of SoCQ and Serena SQL and model the pervasive environment as a set of XD-Relations, yielding a relational pervasive environment. Car, MedicalContainer and SupervisorMobile are finite XD-Relations, extended with virtual attributes and binding patterns in order to provide access to stream subscriptions and method invocations. Supervise is a simple dynamic relation, with no binding patterns, yet all four relations are specified in a consistent, unified model, in the Serena SQL. On top of the relational pervasive environment, the developer can subsequently write applications as continuous queries, which reference data services from the distributed environment and produce data.
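Listing 1 itself is not reproduced in this text. To give a feel for what it contains, the fragment below is a pseudo-Serena declaration of the Car XD-Relation; it is purely illustrative, and the keywords as well as the attribute and binding pattern names are our assumptions rather than the actual Serena DDL.

-- Pseudo-Serena, for illustration only: the real Serena syntax differs.
-- One tuple per discovered car service; latitude, longitude and
-- locationTimestamp are virtual attributes valued through the binding pattern.
CREATE XD-RELATION Car (
    carService        SERVICE,
    carId             STRING,
    latitude          STRING VIRTUAL,
    longitude         STRING VIRTUAL,
    locationTimestamp STRING VIRTUAL
) WITH BINDING PATTERNS (
    locationNotification[carService]() :
        (latitude, longitude, locationTimestamp) STREAMING
);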
Deployment. StreamInsight and StreamInsight++. To attain this task with StreamInsight and StreamInsight++, one requires .NET, a C# compiler, SQL Server Compact Edition, a Windows operating system and the StreamInsight server. This minimum setting is necessary for Tasks 1 to 4, with some additional task-dependent prerequisites.
SoCQ. The deployment machine must have the SoCQ Server and a Java Virtual Machine. Any operating system can support this task.
Evolution. The Startup task prepares the system to handle a pervasive environment, based on entities from the scenario we proposed. If we change the scenario, the task has to be adapted accordingly. StreamInsight and StreamInsight++. We must redevelop the C# solutions if the service wrappers change. If the service access mechanisms don't change, the middleware can remain unmodified, i.e., ChangedImperativeCode won't consider the ∼3700 lines of code that compose the middleware implemented for StreamInsight and StreamInsight++. Other classes might be kept if some data provided by services from the initial environment are preserved.
SoCQ. In SoCQ, we need to build a different schema, in Serena SQL, for the new pervasive environment. If the service wrappers change, then the imperative code for the middleware must be reimplemented. If the middleware remains unmodified, no line of imperative code is impacted in the evolution stage, i.e., ChangedImperativeCode will be 0.
Task discussion. The time and effort devoted to self-training and implementing Task 0 are considerably higher in the StreamInsight-based implementations than in SoCQ (see Table 1). The former can only be deployed on Windows machines. SoCQ needs a smaller number of system dependencies and can be deployed on any operating system. If we switch to a different scenario, Task 0 needs to be reimplemented, which translates to a significant amount of changed lines of imperative code in all the systems, if the service access mechanisms change. If the middleware doesn't change, the exact amount of changed code depends on the preservation of some services from the initial environment; in StreamInsight and StreamInsight++ we need to modify imperative code, whereas SoCQ requires changing only declarative elements. Table 1 shows figures for the worst-case situation, where all services and their access mechanisms are changed.
Task 1: Localized, Single Stream Supervision
Task 1 tracks one moving car. Its input is a car service URL and a stream of locations from the monitored car. The output of the task is a stream that contains the LocationTimestamp, Latitude, Longitude and CarId of the car, i.e., the user is provided with the car's stream of reported locations. The task's objective is to monitor a data stream provided by a car service that had been localized in advance, i.e., dynamic service discovery is not required. Development. StreamInsight and StreamInsight++. We require one LINQ query in order to track a given car (Listing 2a). We need additional C# code to create a query template that represents the business logic executed on the server, instantiate adapters, bind a data source and a data consumer to the query, register the query on the server and start and stop the continuous query. Dynamic discovery is not required for this task.
SoCQ. In SoCQ, the developer writes a car tracking query in Serena SQL (Listing 2b). It subscribes to a stream of location data from the Car XD-Relation, based on a car service URL. No imperative code is needed.
Performance. StreamInsight, StreamInsight++ and SoCQ. For this task we assess metrics MaxDataRate, NoOfEvents and AvgLatency, since we track one car.
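Listing 2b, the Serena car tracking query mentioned in the development stage above, is likewise not reproduced here. Modeled on the syntax of Listing 3 shown later, a hedged guess at its shape is given below; the binding pattern name, the attribute names and the exact clauses are assumptions, not the actual Serena query.

-- Pseudo-Serena sketch of the Task 1 car tracking query (illustrative only).
SELECT locationTimestamp, latitude, longitude, carId
FROM Car
WHERE carService = "http://192.168.1.42:8080/carService"   -- URL of the localized car
USING locationNotification;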
Evolution. StreamInsight, StreamInsight++ and SoCQ. The user may want to track a different car, which means changing the car service URL. In our StreamInsight-based approaches, this requires changing and recompiling the imperative code, to provide the new URL. The LINQ query remains unchanged. In SoCQ, we supply a different car service URL in the declarative query code.
Task discussion. The effort required to develop and update the task is more intense in the StreamInsight-based implementations, which can be deployed only on Windows machines and require 2 languages, LINQ and C#, and more dependencies (see Table 2). The SoCQ implementation uses 1 language (Serena SQL), needs 1 dependency and no imperative code, and can be deployed on any operating system, but it yields a higher average latency. All the systems achieved a MaxDataRate of 10.000 events/second under specified latency requirements.
Task 2: Multiple Streams Supervision
Task 2 tracks all the moving cars. The input of this task is represented by notification messages sent by services in the environment when they appear or disappear and by streams of interest emitted by services monitored in the task, i.e., car location streams from monitored cars. The output of this task is a stream that provides the LocationTimestamp, Latitude, Longitude and CarId of the monitored cars. The user is hence provided with the reported locations of each car. Task 2's objective is to monitor multiple data streams provided by dynamically discovered car services.
Development. StreamInsight++. This implementation is similar to the one from Task 1, but the car data source receives events from all the streams the application subscribed to. It delivers them in a push manner to the input adapter. Hence, the LINQ query for this task is identical to the one described in Listing 2a. SoCQ. The SoCQ implementation is similar to the one described for Task 1. The only requirement is to write the car tracking query. The query for this task is identical with the one depicted in Listing 2b, except it doesn't encompass a filter condition, since we are tracking all the cars.
Deployment. StreamInsight++ and SoCQ. The prerequisites for deployment are identical with those mentioned in Task 0.
Performance. StreamInsight++ and SoCQ. We evaluate all the metrics from the performance stage. We compute MaxDataRate, NoOfEvents and AvgLatency across events coming in from all data sources, for a constant number of 10 data sources.
Evolution. StreamInsight++ and SoCQ. A new requirement for this task can be to track a subgroup of moving cars. In StreamInsight++ we need to change the imperative code, to check the URL of the data source discovered by the system. In SoCQ, we need to add a filter predicate in the continuous query.
Task discussion. SoCQ provides a convenient approach to development, deployment and evolution, without imperative code, obtaining better results for the metrics NoOfLanguages, NoOfSystemDependencies and IsOSIndependent (Table 3). StreamInsight++ achieves superior performance when assessing MaxNoDataSources and MaxDataRate. We believe this implementation could do better, but in our hardware setting we noticed a limit of 18.000 events received by the StreamInsight engine each second; hence this is not a limitation imposed by StreamInsight. For our scenario, the performance values obtained by SoCQ are very good as well. We have multiple threads in our StreamInsight++ application to subscribe to multiple streams, so the thread corresponding to the StreamInsight++ output adapter is competing with existing in-process threads. Therefore, the average latency we observe from the adapters is higher than the StreamInsight engine's pure latency and than the average latency measured for SoCQ. This task cannot be implemented in StreamInsight, due to lack of dynamic service discovery.
Task 3: Method Invocation
Development. SoCQ. We write a simple Serena one-shot query that uses the MedicalContainer XD-Relation, defined in the SoCQ schema (Listing 3). We manually submit this query using SoCQ's interface.
Deployment. StreamInsight++. Apart from the prerequisites described in Task 0, to implement Task 3 we also need an installed instance of SQL Server.
SELECT latitude, longitude, locDate FROM MedicalContainer WHERE mcID="12345" USING getLocation;
Listing 3: Locating medical container query in SoCQ
SoCQ. Task 0 prerequisites hold for this task implemented in SoCQ.
Performance. StreamInsight++ and SoCQ. We don't assess performance metrics for this task, as it encompasses a one-shot query. Assessing service discovery performance is out of the scope of this evaluation.
Evolution. StreamInsight++ and SoCQ. The user may want to locate a different medical container. In StreamInsight++ we need to supply a different container identifier in the imperative application. In SoCQ we supply a different medical container identifier in the Serena query.
Task discussion. For this task as well, development time and effort are minimal in the SoCQ implementation, which doesn't need imperative code (see Table 4). In StreamInsight++ we also need an additional instance of SQL Server. While SoCQ requires only Serena SQL, StreamInsight++ requires C#, LINQ and Transact-SQL (to interact with SQL Server). Metrics NoOfSystemDependencies and IsOSIndependent yield better values for SoCQ. This task cannot be implemented in StreamInsight, because it requires dynamic service discovery.
Task 4: Composite Data Supervision
Development. StreamInsight++. This implementation integrates the StreamInsight Server, as well as SQL Server, LINQ and C#. We need SQL Server to hold supervision-related data (which supervisors monitor which medical containers) and dynamically discovered alert services. For each incoming medical container temperature notification, if the temperature of a medical container is greater than its temperature threshold, we look up the corresponding supervisor and the alert service he or she uses in the SQL Server database. We issue a call, from imperative code, to the sendSMS method of the alert service. The implementation comprises an entire application. The LINQ continuous query selects temperature notifications from medical containers that exceed temperature thresholds and calls the sendSMS method of the alert service of the corresponding supervisor. One insert and one delete Transact-SQL query are used to update the SQL Server table holding dynamically discovered alert services. A cache is used to speed up the retrieval of temperature thresholds and container supervisors.
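The per-notification logic on the StreamInsight++ side can be summarized by the hedged C# sketch below; the dictionaries stand in for the SQL Server lookups and cache, and the alert service's resource path and parameter name are assumptions about the REST protocol rather than the real interface.

using System;
using System.Collections.Generic;
using System.Net;

class TemperatureSupervisionSketch
{
    // Stand-ins for the cached static data: threshold and supervisor's alert
    // service URL per medical container.
    private readonly Dictionary<string, double> thresholdByContainer =
        new Dictionary<string, double>();
    private readonly Dictionary<string, string> alertServiceByContainer =
        new Dictionary<string, string>();

    // Called for each incoming temperature notification.
    public void OnTemperatureNotification(string containerId, double temperature)
    {
        double threshold;
        string alertServiceUrl;
        if (thresholdByContainer.TryGetValue(containerId, out threshold)
            && temperature > threshold
            && alertServiceByContainer.TryGetValue(containerId, out alertServiceUrl))
        {
            // Invoke the sendSMS functionality of the supervisor's alert service;
            // the resource path and parameter name are assumptions.
            using (var http = new WebClient())
            {
                http.DownloadString(alertServiceUrl + "/sendSMS?text="
                    + Uri.EscapeDataString("Container " + containerId
                        + " exceeded its temperature threshold (" + threshold + ")"));
            }
        }
    }
}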
SoCQ. The development of this task in SoCQ contains one Serena query (Listing 4) that combines static data (temperature thresholds), method invocations (the sendSMS method from SupervisorMobile) and data streams (temperatureNotification streams from supervised medical containers).
Listing 4: Temperature supervision query in SoCQ
Deployment. StreamInsight++. This task requires the prerequisites from Task 0, as well as an instance of SQL Server.
SoCQ. Only the prerequisites from Task 0 are required.
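The body of Listing 4, referenced in the development stage above, is not reproduced in this text either. The pseudo-Serena sketch below conveys its intent, i.e., joining the temperature notification stream with the supervision data and invoking sendSMS when a threshold is exceeded; the keywords, join syntax, attribute names and binding pattern names are our assumptions.

-- Pseudo-Serena sketch of the Task 4 supervision query (illustrative only).
SELECT m.mcID, m.temperature,
       sm.sendSMS("Container " + m.mcID + " exceeded its temperature threshold")
FROM MedicalContainer m, Supervise s, SupervisorMobile sm
WHERE s.mcID = m.mcID
  AND s.supervisorID = sm.supervisorID
  AND m.temperature > s.temperatureThreshold
USING temperatureNotification, sendSMS;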
Performance. StreamInsight++ and SoCQ. We evaluate all the metrics from the performance stage.
Evolution. StreamInsight++ and SoCQ. The user may ask to send notifications for a subgroup of the supervised medical containers. In both approaches, filters need to be added, to the imperative application, for StreamInsight++ or the Serena SQL query, for SoCQ.
Task discussion. StreamInsight++ outperforms SoCQ on the AvgLatency and MaxNoDataSources performance metrics (Table 5), which is not surprising, since the former is an ad hoc framework based on a commercial product, whereas SoCQ is a research prototype. As the service data rate increases, SoCQ outperforms our StreamInsight++ implementation when assessing MaxDataRate, due to the high number of alert service calls per second the query has to perform, for which SoCQ has a built-in asynchronous call mechanism. Development, deployment and evolution are easier with SoCQ, which requires no imperative code, decreased development time and a smaller number of servers and dependencies. Unlike SoCQ, StreamInsight++ does not offer an operating system independent solution. This task cannot be implemented with StreamInsight because it needs dynamic service discovery capabilities. We described the SoCQ queries from Listings 1, 2b, 3 and 4, with some modifications, in the ColisTrack paper as well [START_REF] Gripay | ColisTrack: Testbed for a Pervasive Environment Management System (demo)[END_REF].
Discussion
The StreamInsight approach revealed the shortcomings encountered when developing pervasive applications with a DSMS. Such systems don't consider services as first-class citizens, nor provide dynamic service discovery. External functions can be developed to emulate this integration in DSMSs, requiring ad hoc programming and sometimes intricate interactions with the query optimizer. With StreamInsight we were able to fully implement only Task 0 and Task 1.
StreamInsight++ was our proposed ad hoc solution for pervasive application development. The integration of different programming paradigms (imperative, declarative and network protocols) was tedious. Developing pervasive applications turned out to be a difficult and time-consuming process, which required either expert developers with more than one core area of expertise or using teams of developers. Either way, the development costs increase. Ad hoc programming led to StreamInsight++, which could be considered as a PEMS, since it handles data and services providing streams and functionalities in a pervasive environment. However, apart from the cost issues, this system carries another problem: it is specific to the pervasive environment it was designed for. A replacement of this environment automatically triggers severe changes in the implementation of the system. Moreover, although there are DSMSs which offer ways of homogeneously interacting with classical data relations and streams, in StreamInsight++ we needed a separate repository to hold static data, i.e., an instance of SQL Server.
The SoCQ PEMS solved the complex interactions between various data sources by providing an integrated management of distributed services and a declarative definition of continuous interactions. In SoCQ we wrote declarative queries against dynamically discovered, distributed data services; the system is able to handle pervasive environments without modifications in its implementation, as long as the service access mechanisms do not change. The price to pay is the training time dedicated to the SoCQ system and the Serena SQL-like language (almost negligible for SQL developers), the description of a scenario-specific schema in Serena, and the development of the service wrappers. Once Task 0 was accomplished, application development became straightforward. Writing SoCQ SQL-like queries was easy for someone who knew how to write SQL queries in a classical context. By comparison, the time required to study the StreamInsight platform, even if the developer had a confident level of C# and LINQ, was considerably higher. SoCQ led to concise code for Tasks 1-4, outperforming StreamInsight and StreamInsight++ in this respect.
The StreamInsight-based systems generally yielded better scalability and performance than SoCQ when evaluating average latency, the maximum number of data sources, or the maximum data rate. One case where SoCQ did better than the StreamInsight++ ad hoc framework was Task 4, when the engine had to call external services' methods at a high data rate. When assessing performance for StreamInsight and StreamInsight++, we considered the StreamInsight engine together with the adapter framework and middleware we implemented, and not the pure performance of the StreamInsight engine.
SoCQ required only one SQL-like language to write complex continuous queries over data, streams and functionalities provided by services. In the StreamInsight and StreamInsight++ implementations, an application was developed in imperative code to execute continuous queries on the server. The only host language allowed in the release we used (StreamInsight V1.2) is C#. SoCQ did not burden the developer with such requirements. One SoCQ server and a Java Virtual Machine were required in the SoCQ implementation, and the solution could be deployed on any operating system. The StreamInsight++ solution also required more system-specific dependencies, and it could only be deployed on Windows machines.
Task evolution was straightforward with SoCQ. Entities of type XD-Relation could be created to represent new service types in the pervasive environment and changes to continuous or one-shot queries had a minimal impact on the declarative code. With StreamInsight and StreamInsight++, task evolution became cumbersome, impacting imperative code. For the StreamInsight-based implementations, task evolution had an associated redeployment cost, since the code had to be recompiled.
SoCQ allows the developer to write code that appears to be more concise and somewhat elegant than the code written using the two other systems. Developers can fully implement Tasks 1 -4 using only declarative queries. The StreamInsight and StreamInsight++ systems require imperative code as well for the same tasks, which need to be coded using an editor like Visual Studio. The imperative paradigm also adds an extra compilation step.
Conclusion and Future Directions
In this paper we have tackled the difficult problem of evaluating the easiness of data-centric pervasive application development. We introduced P-Bench, a benchmark that assesses easiness in the development, deployment and evolution process, and also examines performance aspects. To the best of our knowledge, this is the first study of its kind. We assessed the following approaches to building data-centric pervasive applications: (1) the StreamInsight platform, as a DSMS, (2) ad hoc programming, using StreamInsight++, an enriched version of StreamInsight, and (3) SoCQ, a PEMS. We defined a set of five benchmark tasks, oriented towards commonly encountered requirements in data-centric pervasive applications. The scenario we chose can easily be changed, and the tasks' objectives are defined in a generic, scenario-independent manner.
We evaluated how hard it is to code a pervasive application using a thoroughly defined set of metrics. As expected, our experiments showed that pervasive applications are easier to develop, deploy and update with a PEMS. On the other hand, the DSMS- and ad hoc-based approaches exhibited superior performance for most of the tasks and metrics. However, for pervasive applications like the ones in our scenario, the PEMS implementation of the benchmark tasks achieved very good performance indicators as well. This is noteworthy, as the SoCQ PEMS is a research prototype developed in a lab, whereas StreamInsight is a giant company's product.
Future research directions include finalizing our accuracy checking framework, considering error management and resilience, data coherency, and including additional metrics like application design effort, software modularity and collaborative development.
Fig. 1. Scenario framework architecture
Fig. 2. StreamInsight application architecture
FROM Car IN CarSupervision
SELECT Car.CarID, Car.Latitude, Car.Longitude, Car.LocationTimestamp;
(a) Car supervision query in LINQ

CREATE VIEW STREAM carSupervision
  (carID STRING, locDate DATE, locLatitude STRING, locLongitude STRING)
AS SELECT c.carID, c.locDate, c.latitude, c.longitude
STREAMING UPON insertion
FROM Car c
WHERE c.carService = "http://127.0.0.1:21000/Car"
USING c.locationNotification [1];
(b) Car supervision query in SoCQ's Serena SQL

Listing 2: Car supervision queries

Deployment. StreamInsight, StreamInsight++ and SoCQ. For this task, the same prerequisites as for Task 0 are required for all the implementations.
Table 1. Task 0 metrics
Stage Metric SI SI++ SoCQ
Development LinesOfImperativeCode 4323 5186 26500 5
NoOfDeclarativeElements 0 0 13
NoOfQueries 0 0 4
NoOfLanguages 1 1 2
DevelopmentTime 120 160 16
Deployment NoOfServers 1 1 1
NoOfSystemDependencies 3 3 1
IsOSIndependent No No Yes
Evolution ChangedImperativeCode ∼4323 ∼5186 ∼11000
ChangedDeclarativeElements 0 0 ∼13
Table 2. Task 1 metrics
Stage Metric SI SI++ SoCQ
Development LinesOfImperativeCode 33 33 0
NoOfDeclarativeElements 2 2 6
NoOfQueries 1 1 1
NoOfLanguages 2 2 1
DevelopmentTime 4 4 1
Deployment NoOfServers 1 1 1
NoOfSystemDependencies 3 3 1
IsOSIndependent No No Yes
Performance MaxDataRate 10000 10000 10000
NoOfEvents 350652 350652 360261
AvgLatency 0.5 0.5 1.34
Evolution ChangedImperativeCode 1 1 0
ChangedDeclarativeElements 0 0 1
Table 3. Task 2 metrics

Task 3 provides the location of a medical container. The input of this task is represented by a medical container identifier and notification messages sent by services in the environment when they appear or disappear. Its output is the current location of the container, i.e., the LocationTimestamp, Latitude and Longitude. The objective of this task is to invoke a method provided by a dynamically discovered medical container service.

Development. StreamInsight++. We create a SQL Server database and dynamically update a table in the database with available medical container services. An input adapter delivers medical container services discovered by Service Manager to a simple LINQ continuous query, whose results are used to update the medical container services table in SQL Server. Based on the input container identifier (an mcID field), the application looks up the medical container URL in the SQL Server table. From imperative code, it calls the getLocation method exposed by the medical container service, which outputs the current location of the container.
Stage Metric SI SI++ SoCQ
Development LinesOfImperativeCode NA 31 0
NoOfDeclarativeElements NA 2 5
NoOfQueries NA 1 1
NoOfLanguages NA 2 1
DevelopmentTime NA 4 1
Deployment NoOfServers NA 1 1
NoOfSystemDependencies NA 3 1
IsOSIndependent NA No Yes
Performance MaxNoDataSources NA 5000 2500
MaxDataRate NA 1700 750
NoOfEvents NA 976404 443391
AvgLatency NA 13.53 0.79
Evolution ChangedImperativeCode NA 1 0
ChangedDeclarativeElements NA 0 1
5.5 Task 3: Method Invocation
Table 4. Task 3 metrics
Stage Metric SI SI++ SoCQ
Development LinesOfImperativeCode NA 102 0
NoOfDeclarativeElements NA 11 4
NoOfQueries NA 4 1
NoOfLanguages NA 3 1
DevelopmentTime NA 8 1
Deployment NoOfServers NA 2 1
NoOfSystemDependencies NA 3 1
IsOSIndependent NA No Yes
Evolution ChangedImperativeCode NA 1 0
ChangedDeclarativeElements NA 0 1
5.6 Task 4: Composite Data Supervision
Task 4 monitors the temperatures of medical containers and sends alert messages when the supervised medical containers exceed established temperature thresholds. The input of this task is represented by notification messages sent
Table 5. Task 4 metrics
Stage Metric SI SI++ SoCQ
Development LinesOfImperativeCode NA 175 0
NoOfDeclarativeElements NA 13 7
NoOfQueries NA 4 1
NoOfLanguages NA 3 1
DevelopmentTime NA 10 3
Deployment NoOfServers NA 2 1
NoOfSystemDependencies NA 3 1
IsOSIndependent NA No Yes
Performance MaxNoDataSources NA 3000 2500
MaxDataRate NA 275 400
NoOfEvents NA 13170 23812
AvgLatency NA 6.25 34.37
Evolution ChangedImperativeCode NA 1 0
ChangedDeclarativeElements NA 0 1
We will refer to a data service as a service or data source in the rest of the paper.
The SoCQ engine source code contains about 26500 lines of Java code. It encompasses the UbiWare generic implementation (client-side and server-side, about 11000 lines of code), the core of the SoCQ engine (data management and query processing, about 13200 lines), and some interfaces to control and access the SoCQ engine (2 Swing GUI and a DataService Interface, about 2300 lines). For StreamInsight and StreamInsight++, LinesOfImperativeCode assesses only the task application and Service Manager code (we don't have access to StreamInsight's engine implementation).
We will replace Transact-SQL with LINQ to SQL. | 65,058 | [
"4133",
"3040",
"4224"
] | [
"217744",
"401125",
"401125",
"401125",
"401125"
] |
01351708 | en | [
"info"
] | 2024/03/04 23:41:48 | 2013 | https://hal.science/hal-01351708/file/Liris-6533.pdf | Student Member, IEEE Huibin Li
email: huibin.li@ec-lyon.fr.
Member, IEEE Wei Zeng
email: wzeng@cs.fiu.edu
Jean Marie Morvan
email: morvan@math.univ-lyon1.fr.
Member, IEEE Liming Chen
email: liming.chen@ec-lyon.fr.
Xianfeng David Gu
X David Gu
email: gu@cs.sunysb.edu
Surface Meshing with Curvature Convergence
Keywords: Meshing, Delaunay refinement, conformal parameterization, normal cycle, curvature measures, convergence
Surface meshing plays a fundamental role in graphics and visualization. Many geometric processing tasks involve solving geometric PDEs on meshes. The numerical stability, convergence rates and approximation errors are largely determined by the mesh qualities. In practice, Delaunay refinement algorithms offer satisfactory solutions to high quality mesh generations. The theoretical proofs for volume based and surface based Delaunay refinement algorithms have been established, but those for conformal parameterization based ones remain wide open. This work focuses on the curvature measure convergence for the conformal parameterization based Delaunay refinement algorithms. Given a metric surface, the proposed approach triangulates its conformal uniformization domain by the planar Delaunay refinement algorithms, and produces a high quality mesh. We give explicit estimates for the Hausdorff distance, the normal deviation, and the differences in curvature measures between the surface and the mesh. In contrast to the conventional results based on volumetric Delaunay refinement, our stronger estimates are independent of the mesh structure and directly guarantee the convergence of curvature measures. Meanwhile, our result on Gaussian curvature measure is intrinsic to the Riemannian metric and independent of the embedding. In practice, our meshing algorithm is much easier to implement and much more efficient. The experimental results verified our theoretical results and demonstrated the efficiency of the meshing algorithm.
INTRODUCTION
Surface meshing and remeshing play fundamental roles in many engineering fields, including computer graphics, geometric modeling, visualization and medical imaging. Typically, surface meshing finds a set of sample points on the surface with a curved triangulation, then approximates each face by an Euclidean triangle in R 3 , thereby approximating the underlying smooth surface by a polyhedral triangular surface, which is called a triangle mesh.
Many geometric processing tasks are equivalent to solving geometric partial differential equations (PDEs) on surfaces. The following are some direct examples: for shape analysis, the heat kernel signature (HKS) [START_REF] Sun | A Concise and Provably Informative Multi-Scale Signature Based on Heat Diffusion[END_REF] is mostly utilized, which entails solving a heat equation and computing the eigenvalues and eigenfunctions of the Laplace-Beltrami operator on the surfaces; for shape registration, the surface harmonic map [START_REF] Wang | High Resolution Tracking of Non-Rigid Motion of Densely Sampled 3D Data Using Harmonic Maps[END_REF] is widely used, which essentially means solving elliptic PDEs on the surfaces; for surface parameterization, the discrete Ricci flow [START_REF] Jin | Discrete Surface Ricci Flow[END_REF] is often computed, which amounts to solving a nonlinear parabolic equation on the surfaces.
Most geometric PDEs are discretized on triangle meshes, and solved using numerical methods, such as Finite Element Methods (FEM). The numerical stability, the convergence rates, and the approximation bounds of the discrete solutions are largely determined by the quality of the underlying triangle mesh, which is measured mainly by the size and the shape of triangles on the mesh. Therefore, the generation of high quality meshes has fundamental importance.
Most existing meshing and remeshing approaches are based on the Delaunay refinement algorithms. They can be classified in three main categories:
1) The sampling is computed in R 3 , and triangulated using the volumetric Delaunay triangulation algorithms, such as [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF] [5] [6] [START_REF] Cheng | Sampling and Meshing a Surface with Guaranteed Topology and Geometry[END_REF] [8] [START_REF] Dey | Delaunay Meshing of Isosurfaces[END_REF]. 2) The sampling and triangulation are directly computed on curved surfaces, such as [10] [11].
3) The sampling is computed in a conformal parameter domain, and triangulated using the planar Delaunay triangulation algorithms, such as [START_REF] Alliez | Isotropic Surface Remeshing[END_REF] [13] [START_REF] Marchandise | High-Quality Surface Remeshing using Harmonic MapsPart II: Surfaces with High Genus and of Large Aspect Ratio[END_REF] [15] [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. The convergence theories of curvature measures for the approaches in the first two categories has been thoroughly established in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [19] [START_REF] Morvan | Approximation of the Normal Vector Field and the Area of a Smooth Surface[END_REF]. However, so far, there is no theory to show the convergence of curvature measures for the approaches in the third category.
Existing Theoretical Results
Based on the classic results of Federer [START_REF] Federer | Geometric Measure Theory[END_REF] and Fu [START_REF] Fu | Monge-Ampre Functions 1[END_REF], among others, the authors in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [19] defined a general and unified framework of curvature measures for both smooth and discrete submanifolds of R N based on the normal cycle theory. Furthermore, they proved the convergence and approximation theorems of curvature measures for the general geometric subset of R N .
In particular, suppose M is a smooth surface embedded in R 3 , M ε is an ε-sample of M, namely, for each point p ∈ M, the ball B(p, εlfs(p)) contains at least one sample point in M ε , where lfs(p) denotes the local feature size of M at point p. Let T be the triangle mesh induced by the volumetric Delaunay triangulation of M ε restricted to M. If ε is small enough, each point of the mesh has a unique closest point on the smooth surface. This leads to the introduction of the closest point projection π : T → M. This map has the following properties:
1) Normal deviation: for all p ∈ T, |n(p) - n ∘ π(p)| = O(ε), by Amenta et al. [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF], and Boissonnat et al. [START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF].
2) Hausdorff distance: |p - π(p)| = O(ε²), by Boissonnat et al. [START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF].
3) Homeomorphism: π is a global homeomorphism, by Amenta et al. [START_REF] Amenta | Surface Reconstruction by Voronoi Filtering[END_REF] and Boissonnat et al. [START_REF] Boissonnat | Provably Good Sampling and Meshing of Surfaces[END_REF].
4) Curvature measures: Let B be a Borel subset of R³; then the differences between the curvature measures on M and those on T are bounded by Kε, where K depends on the triangulation T [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF] [START_REF] Morvan | Generalized Curvatures[END_REF].
In the first category, the authors show that, unfortunately, the convergence of curvature measures can not be guaranteed. Depending on the triangulation, when ε goes to 0, K may go to infinity, (see [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] for a counterexample). To ensure the convergence of the curvature measures, in [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] [START_REF] Morvan | Generalized Curvatures[END_REF], the authors suggest adding a stronger assumption to the sampling condition, namely, κ-light ε-sample, which is an ε-sample with the additional constraint that each ball B(p, εlfs(p)) contains at most κ sample points.
In the second category, the curvature convergence for meshes obtained by Chew's second algorithm [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] has been proved in [START_REF] Morvan | Approximation of the Normal Vector Field and the Area of a Smooth Surface[END_REF]. The normal and area convergence for meshes based on the geodesic Delaunay refinement algorithm has been proved in [START_REF] Dai | Geometric Accuracy Analysis for Discrete Surface Approximation[END_REF]. However, the computation of the geodesic Delaunay triangulation is prohibitively expensive in practice [START_REF] Xin | Isotropic Mesh Simplification by Evolving the Geodesic Delaunay Triangulation[END_REF].
Our Theoretical Results
This paper will deal with triangulations of the third category, showing stronger estimates. Using conformal parameterization, we obtain meshes satisfying the first two properties as before, 1) Normal deviation: O(ε), Lemma 4.8 and Lemma 4.9.
2) Hausdorff distance: O(ε 2 ), Lemma 4.8 and Lemma 4.9.
Moreover, we improve the other two properties as follows:
3) Homeomorphism: In addition to the closest point projection π, we also define a novel mapping, the natural projection η, induced by the conformal parameterization. Both projections are global homeomorphisms, see section 4.4. In addition, the coding and computational complexities are much lower than those in the second category.
Similarities
Following the work in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF], our proof is mainly based on the normal cycle theory. Both methods estimate both the Hausdorff distance and the normal deviation at the corresponding points. Then both methods construct a homeomorphism from the triangle mesh to the surface, which induces a homotopy from the normal cycle of the mesh to the normal cycle of the surface. Then, the volume swept by the homotopy and the area of its boundary are estimated. This gives a bound on the difference between the curvature measures.
Differences
However our work can be clearly differentiated from theirs, in terms of both theoretical and algorithmic aspects:
• In theory, as pointed out previously, without the stronger sampling condition, the volumetric Delaunay refinement algorithms cannot guarantee the convergence of curvature measures. In contrast, our results can ensure the convergence without extra assumptions. • In theory, the volumetric Delaunay refinement methods require the embedding of the surface. Our method is intrinsic, which only requires the Riemannian metric. In many real-life applications, e.g. the general relativity simulation in theoretical physics, the surface metric is given without any embedding space. In such cases, the volumetric Delaunay refinement methods are invalid, but our method can still apply. • In theory, to prove the main theorem, the closest point mapping was constructed in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. In contrast, we supply two proofs: one is based on the closest point mapping, whereas the other uses a completely different mapping based on conformal parameterization. Conceptually, besides its novelty, the latter is also simpler. • In practice, the planar Delaunay refinement methods are much easier to implement, the data structure for planar triangulation is much simpler than that of the tetrahedral mesh, and the planar algorithms are much more efficient. Remark The current meshing algorithm aims to achieve a good triangulation, and requires a conformal parameterization, which in turn requires a triangulation. Consequently, this looks like a chicken-and-egg problem.
In fact, conformal parameterization can be carried out using an initial triangulation of low quality, and this algorithm will produce a new triangulation with much better quality. Many geometric processing tasks cannot be computed on the initial mesh. For example, the error bound for a discrete solution to the Poisson equation is O(ε 2 ) on good quality meshes. If the mesh has too many obtuse angles, then the discrete results will not converge to the smooth solution.
In reality, surfaces are acquired by 3D scanning devices, such as the laser scanner or the structured light scanner. Usually, the raw point clouds are very dense, thus the initial triangulation can be induced by the pixel or voxel grid structures. In the geometric modeling field, the input surfaces may be spline surfaces, and the initial triangulation can be chosen as the regular grids on the parameter domain. Then, the conformal parameterizations can be computed using the dense samples with the initial triangulation. Finally, we can perform the remeshing using the current conformal parametric Delaunay refinement algorithm to improve the mesh quality or compress the geometric data.
PREVIOUS WORKS
Meshing/Remeshing
Delaunay Refinement
The Delaunay refinement algorithms were originally designed for meshing planar domains, and were later generalized for meshing surfaces and volumes. Chew's first algorithm [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF] splits any triangle whose circumradius is greater than the prescribed shortest edge length parameter ε and hence generates triangulation of uniform density and with no angle smaller than 30 • . But the number of triangles produced is not optimal. Chew's second algorithm [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] splits any triangle whose circumradius-to-shortest-edge ratio is greater than one, and hence in practice produces grade mesh. Similar split criterion was used in Ruppert's algorithm [START_REF] Ruppert | A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation[END_REF], which has the theoretical guarantee of the minimal angle of no less than 20.7 • . Shewchuk's algorithm [START_REF] Shewchuk | Delaunay Refinement Algorithms for Triangular Mesh Generation[END_REF] can create meshes with most angles of 30 • or greater. Dey et al. developed a series of algorithms for surface meshing and remeshing based on volumetric Delaunay refinement [START_REF] Cheng | Sampling and Meshing a Surface with Guaranteed Topology and Geometry[END_REF] [8] [START_REF] Dey | Delaunay Meshing of Isosurfaces[END_REF], which belong to the approaches in the first category. We refer readers to [START_REF] Cheng | Delaunay Mesh Generation[END_REF] for full details.
Centroidal Voronoi Tessellation
The concept of centroidal Voronoi tessellations (CVT) was first proposed by Du et al. [START_REF] Du | Centroidal Voronoi Tessellations: Applications and Algorithms[END_REF], and then was generalized to constrained centroidal Voronoi tessellations (CCVT) [START_REF] Du | Constrained Centroidal Voronoi Tessellations for Surfaces[END_REF]. Recently, CVT has been widely used for surface meshing/remeshing to produce high quality triangulations. It can be carried out in the ambient space, e.g. Yan et al. [START_REF] Yan | Isotropic Remeshing with Fast and Exact Computation of Restricted Voronoi Diagram[END_REF], or the conformal parameter domain, e.g. Alliez et al. [12] [31], or even high embedding space, e.g. Lévy et al. [START_REF] Lévy | Variational Anisotropic Surface Meshing with Voronoi Parallel Linear Enumeration[END_REF]. A complete survey of the recent advancements on CVT based remeshing can be found in [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. Although visually pleasing and uniform, all the existing CVT based remeshing methods for the generation of high quality triangulation have no theoretical bound of the minimal angle [START_REF] Alliez | Recent Advances in Remeshing of Surfaces[END_REF]. Therefore, the convergence of curvature measures cannot be guaranteed.
Conformal Surface Parameterization
Over the last two decades, surface parameterization has gradually become a very popular tool for various mesh processing processes [START_REF] Sheffer | Mesh parameterization Methods and their Applications[END_REF] [START_REF] Floater | Surface Parameterization: a Tutorial and Survey[END_REF]. In this work, we consider only conformal parameterizations. There are many approaches used for this purpose, including the harmonic energy minimization [START_REF] Desbrun | Intrinsic Parameterizations of Surface Meshes[END_REF] [36] [START_REF] Wang | Surface Parameterization using Riemann Surface Structure[END_REF], the Cauchy-Riemann equation approximation [START_REF] Lévy | Least Squares Conformal Maps for Automatic Texture Atlas Generation[END_REF], Laplacian operator linearization [START_REF] Haker | Conformal Surface Parameterization for Texture Mapping[END_REF], circle packing [START_REF] Hurdal | Coordinate Systems for Conformal Cerebellar Flat Maps[END_REF], angle-based flattening [START_REF] Sheffer | Parameterization of Faceted Surfaces for Meshing using Angle-Based Flattening[END_REF], holomorphic differentials [START_REF] Gu | Global Conformal Surface Parameterization[END_REF], Ricci curvature flow [START_REF] Jin | Discrete Surface Ricci Flow[END_REF] [43], Yamabe flow [START_REF] Lui | Detection of Shape Deformities Using Yamabe Flow and Beltrami Coefficients[END_REF], conformal equivalence class [START_REF] Springborn | Conformal Equivalence of Triangle Meshes[END_REF], most isometric parameterizations (MIP-S) [START_REF] Hormann | Hierarchical Parametrization of Triangulated Surfaces[END_REF], etc..
STATEMENT OF THE MAIN THEOREM
Curvature Measures
First, let M be a C²-smooth surface embedded in R³; its curvature measures are obtained by integrating the curvatures over Borel sets: for a Borel set B ⊂ R³,
$$\phi^G_M(B) = \int_{B\cap M} K\, dA, \qquad \phi^H_M(B) = \int_{B\cap M} (\kappa_1+\kappa_2)\, dA,$$
where K is the Gaussian curvature and κ₁, κ₂ are the principal curvatures (the convention matched by the tube formula of Section 4.3).

Now, let V be a polyhedron of R³ and its polyhedral boundary M be a triangular mesh surface. We use v_i to denote a vertex, [v_i, v_j] an edge, and [v_i, v_j, v_k] a face of M. We define the discrete Gaussian curvature of M at each vertex as the angle deficit
$$G(v_i) = 2\pi - \sum_{jk} \theta^{jk}_i,$$
where θ^{jk}_i is the corner angle on the face [v_i, v_j, v_k] at the vertex v_i. Similarly, the discrete mean curvature at each edge is defined as
$$H(e_{ij}) = |v_i - v_j|\,\beta(e_{ij}),$$
where β(e_{ij}) is the angle between the normals to the faces incident to e_{ij}. The sign of β(e_{ij}) is chosen to be positive if e_{ij} is convex and negative if it is concave.

Definition 3.2: The discrete Gaussian curvature measure of M, φ^G_M, is the function that associates with each Borel set B ⊂ R³ the value
$$\phi^G_M(B) = \sum_{v \in B\cap M} G(v). \qquad (1)$$
The discrete mean curvature measure φ^H_M is
$$\phi^H_M(B) = \sum_{e \in B\cap M} H(e). \qquad (2)$$
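To make these discrete quantities concrete, the following C++ sketch (our own illustration, not the authors' code; it assumes an indexed triangle mesh with consistently oriented, outward-pointing faces) computes the per-vertex angle deficit G(v_i) and the per-edge quantity H(e_ij). Summing them over the vertices and edges contained in a region B gives the discrete curvature measures of Definition 3.2.

#include <algorithm>
#include <array>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Corner angle at p0 in the triangle (p0, p1, p2).
static double cornerAngle(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 u = sub(p1, p0), v = sub(p2, p0);
    return std::acos(std::max(-1.0, std::min(1.0, dot(u, v) / (norm(u) * norm(v)))));
}

// Discrete Gaussian curvature (angle deficit) G(v) = 2*pi - sum of incident corner angles.
std::vector<double> angleDeficit(const std::vector<Vec3>& V,
                                 const std::vector<std::array<int, 3>>& F) {
    const double PI = std::acos(-1.0);
    std::vector<double> G(V.size(), 2.0 * PI);
    for (const auto& f : F)
        for (int k = 0; k < 3; ++k)
            G[f[k]] -= cornerAngle(V[f[k]], V[f[(k + 1) % 3]], V[f[(k + 2) % 3]]);
    return G;
}

// Discrete mean curvature H(e) = |v_i - v_j| * beta(e), where beta is the angle
// between the normals of the two incident faces, signed by the convexity of the edge.
std::map<std::pair<int, int>, double> edgeMeanCurvature(
        const std::vector<Vec3>& V, const std::vector<std::array<int, 3>>& F) {
    // For each undirected edge, store the unit normal of each incident face and the
    // vertex of that face opposite to the edge (used for the convexity test).
    std::map<std::pair<int, int>, std::vector<std::pair<Vec3, int>>> edgeFaces;
    for (const auto& f : F) {
        Vec3 n = cross(sub(V[f[1]], V[f[0]]), sub(V[f[2]], V[f[0]]));
        double l = norm(n);
        Vec3 un = {n[0] / l, n[1] / l, n[2] / l};
        for (int k = 0; k < 3; ++k) {
            int i = f[k], j = f[(k + 1) % 3], opposite = f[(k + 2) % 3];
            edgeFaces[{std::min(i, j), std::max(i, j)}].push_back({un, opposite});
        }
    }
    std::map<std::pair<int, int>, double> H;
    for (const auto& [e, inc] : edgeFaces) {
        if (inc.size() != 2) continue;  // boundary edge: no dihedral angle
        double c = std::max(-1.0, std::min(1.0, dot(inc[0].first, inc[1].first)));
        double beta = std::acos(c);     // unsigned angle between the two face normals
        // Convex edge: the opposite vertex of the second face lies below the first face's plane.
        double side = dot(inc[0].first, sub(V[inc[1].second], V[e.first]));
        double sign = (side <= 0.0) ? 1.0 : -1.0;
        H[e] = norm(sub(V[e.first], V[e.second])) * sign * beta;
    }
    return H;
}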
The curvature measures on both smooth surfaces and polyhedral surfaces can be unified by the normal cycle theory, which will be explained in section 4.3.
Main Results
It is well known that any Riemannian metric defined on a smooth (compact with or without boundary) surface M can be conformally deformed into a metric of constant curvature c ∈ {-1, 0, 1}, depending on the topology of M, the so-called uniformization metric (cf. Fig. 1). Now if M is endowed with a Riemannian metric with constant curvature, the Delaunay refinement algorithms can be used to generate a triangulation on M with good quality.
The most common Delaunay refinement algorithms include Chew's [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF], [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF] and Ruppert's [START_REF] Ruppert | A Delaunay Refinement Algorithm for Quality 2-Dimensional Mesh Generation[END_REF]. Let ε be a user defined upper bound of the circumradius of the final triangulation. Given an initial set of samples on surface M, such that the distance between any pair of samples is greater than ε. If M has boundaries, then the boundaries are sampled and approximated by piecewise geodesics, such that each geodesic segment is greater than ε. The Delaunay refinement method on the uniformization space starts with an initial Delaunay triangulation of the initial samples, then updates the samples by inserting circumcenters of the bad triangles, and meanwhile, updates the triangulation by maintaining the Delaunay property. A bad triangle can be either bad-sized or bad-shaped. A triangle is bad-sized, if its circumradius is greater than ε. A triangle is bad-shaped, if its circumradiusto-shortest-edge ratio is greater than one. In this work, we will show the following meshing algorithm using the packing argument.
Theorem 3.3 (Delaunay Refinement):
Let M be a compact Riemannian surface with constant curvature. Suppose that the boundary of M is empty or is a union of geodesic circles. For any given small enough ε > 0, the Delaunay refinement algorithm terminates. Moreover, in the resultant triangulation, all triangles are well-sized and well-shaped, that is 1) The circumradius of each triangle is not greater than ε.
2) The shortest edge length is greater than ε.
Suppose M is also embedded in E 3 with the induced Euclidean metric. Then M can also be conformally mapped to a surface with uniformization metric, such that all boundaries (if there are any) are mapped to geodesic circles. By running the Delaunay refinement on the uniformization space, we can get a triangulation of M, which induces a polyhedral surface T , whose vertices are on the surface, and all faces of which are Euclidean triangles. Furthermore, all triangles are wellsized and well-shaped under the original induced Euclidean metric. Based on the induced triangulation T , we will show the following main theorem.
Theorem 3.4 (Main Theorem):
Let M be a compact Riemannian surface embedded in E 3 with the induced Euclidean metric, T the triangulation generated by Delaunay refinement on conformal uniformization domain, with a small enough circumradius bound ε. If B is the relative interior of a union of triangles of T , then:
$$|\phi^G_T(B) - \phi^G_M(\pi(B))| \le K\varepsilon, \qquad (3)$$
$$|\phi^H_T(B) - \phi^H_M(\pi(B))| \le K\varepsilon, \qquad (4)$$
$$|\phi^G_T(B) - \phi^G_M(\eta(B))| \le K\varepsilon, \qquad (5)$$
$$|\phi^H_T(B) - \phi^H_M(\eta(B))| \le K\varepsilon, \qquad (6)$$
where, for fixed M,
$$K = O\Big(\sum_{t\in T,\ t\subset B} r(t)^2\Big) + O\Big(\sum_{t\in T,\ t\subset B,\ t\cap\partial B\neq\emptyset} r(t)\Big),$$
r(t) being the circumradius of triangle t. Moreover, K can be further replaced by
$$K = O(\mathrm{area}(B)) + O(\mathrm{length}(\partial B)).$$
Furthermore, if M is an abstract compact Riemannian surface (only with a Riemannian metric, but not an embedding), inequalities (3) and (5) still hold.
Here π denotes the closest point projection on M, and η denotes the natural projection on M, which is induced by the conformal parameterization, see Definitions 4.6 and 4.7.
THEORETICAL PROOFS
Surface Uniformization
Let (M₁, g₁) and (M₂, g₂) be smooth surfaces with Riemannian metrics, and let φ : M₁ → M₂ be a diffeomorphism. φ is conformal if and only if
$$\varphi^* g_2 = e^{2\lambda} g_1,$$
where φ*g₂ is the pullback metric on M₁ and λ : M₁ → R is a scalar function defined on M₁. Conformal mappings preserve angles and distort area elements. The conformal factor function e^{2λ} indicates the area distortion.
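For a piecewise linear approximation of such a map (as used later in Section 5), the area distortion, and hence λ, can be estimated per triangle from the ratio of image area to domain area. The following small C++ sketch is our own illustration of this estimate, not part of the authors' implementation:

#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Area of the triangle (a, b, c) in R^3.
static double triangleArea(const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 u = {b[0]-a[0], b[1]-a[1], b[2]-a[2]};
    Vec3 v = {c[0]-a[0], c[1]-a[1], c[2]-a[2]};
    Vec3 n = {u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]};
    return 0.5 * std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
}

// Since phi* g2 = exp(2*lambda) g1, area elements scale by exp(2*lambda), so a
// per-triangle estimate of the conformal factor is 0.5 * log(area ratio).
double estimateLambda(const Vec3 domain[3], const Vec3 image[3]) {
    return 0.5 * std::log(triangleArea(image[0], image[1], image[2]) /
                          triangleArea(domain[0], domain[1], domain[2]));
}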
According to the classical surface uniformization theorem, every metric surface (M, g) can deform to one of three canonical shapes, a sphere, a Euclidean plane or a hyperbolic plane. Namely, there exists a unique conformal factor function λ : M → R, such that the uniformization Riemannian metric e 2λ g induces constant Gaussian curvature, the constant being one of {+1, 0, -1} according to the topology of the surface. If surfaces have boundaries, then the boundaries are mapped to circles on the uniformization space. Figures 1 and2 show the uniformizations for closed surfaces and surfaces with boundaries, respectively. The left-hand columns show the genus zero surfaces, which can conformally deform to the unit sphere with +1 curvatures. The middle columns demonstrate genus one surfaces, whose universal covering space is conformally mapped to the Euclidean plane, and the boundaries become circles. The columns on the right illustrate high genus surfaces, whose universal covering space is flattened to the hyperbolic plane, and whose boundaries are mapped to circles.
Surface uniformization can be carried out using the discrete Ricci flow algorithms [START_REF] Jin | Discrete Surface Ricci Flow[END_REF]. Then we can compute the triangulation of the surface by performing the planar Delaunay refinement algorithms on the canonical uniformization domain.
Delaunay Refinement
The Delaunay refinement algorithm for mesh generation operates by maintaining a Delaunay triangulation, which is refined by inserting circumcenters of triangles, until the mesh meets constraints on element quality and size.
Geodesic Delaunay Triangulation
By the uniformization theorem, all oriented metric surfaces can be conformally deformed to one of three canonical shapes, the unit sphere S 2 , the flat torus E 2 /Γ and the hyperbolic surface H 2 /Γ, where E 2 is the Euclidean plane, H 2 the hyperbolic plane, and Γ is the Deck transformation group, a subgroup of isometries of E 2 or H 2 , respectively. The unit sphere S 2 can be conformally mapped to the complex plane by stereographic projection, with the Riemannian metric
$$\mathbb{C} \cup \{\infty\}, \qquad g = \frac{4\, dz\, d\bar z}{(1 + z\bar z)^2}.$$
Similarly, the hyperbolic plane H² is represented by Poincaré's disk model with the Riemannian metric
$$\{\, |z| < 1 \mid z \in \mathbb{C} \,\}, \qquad g = \frac{4\, dz\, d\bar z}{(1 - z\bar z)^2}.$$
The concepts of Euclidean triangles and Euclidean circles can be generalized to geodesic triangles and geodesic circles on S 2 and H 2 . Therefore, Delaunay triangulation can be directly defined on these canonical constant curvature surfaces. A triangulation is Delaunay if it satisfies the empty circle property, namely the geodesic circumcircle of each geodesic triangle does not include any other point. Spherical circles on S 2 are mapped to Euclidean circles or straight lines on the plane by stereographic projection. Similarly, hyperbolic circles are mapped to the Euclidean circles on the Poincaré disk. Therefore, geodesic Delaunay triangulations on S 2 or H 2 are mapped to the Euclidean Delaunay triangulations on the plane. As a result, geodesic Delaunay triangulations can be carried out using the conventional Euclidean Delaunay triangulation.
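As a small illustration of this reduction in the spherical case (our own sketch, not the authors' code): points of S² are sent to the plane by stereographic projection from the north pole, a standard planar Delaunay code is run on the projected points, and the resulting connectivity is pulled back to the sphere by the inverse projection.

#include <array>

using Vec3 = std::array<double, 3>;
using Vec2 = std::array<double, 2>;

// Stereographic projection from the north pole (0,0,1): a point of the unit
// sphere (other than the pole itself) is mapped to the plane z = 0.
Vec2 sphereToPlane(const Vec3& p) {
    return { p[0] / (1.0 - p[2]), p[1] / (1.0 - p[2]) };
}

// Inverse projection: a plane point (u,v) is mapped back to the unit sphere.
Vec3 planeToSphere(const Vec2& q) {
    double u = q[0], v = q[1], s = u * u + v * v;
    return { 2.0 * u / (s + 1.0), 2.0 * v / (s + 1.0), (s - 1.0) / (s + 1.0) };
}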
Delaunay Refinement on Constant Curvature Surfaces
The Delaunay refinement algorithm on constant curvature surfaces with empty boundary is introduced as follows. Take a flat torus E²/Γ as an example. The user chooses a parameter ε, which is the upper bound of the circumradius (a sketch of the size and shape tests used in steps 2 and 3 follows the list).
1) An initial set of samples is generated on the surface, such that the shortest distance between any pair of samples is greater than ε. An initial Delaunay triangulation is constructed.
2) Select bad-sized triangles, whose circumradii are greater than ε, insert their circumcenters, and maintain the Delaunay triangulation.
3) Select bad-shaped triangles, whose ratio between circumradius and shortest edge length is greater than one, insert their circumcenters, and maintain the Delaunay triangulation.
4) Repeat steps 2 and 3 until the algorithm terminates.
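A minimal, self-contained C++ sketch of the two split predicates is given below (our own illustration; the Delaunay insertion and re-triangulation machinery, and the identification of the flat torus with a planar domain, are omitted). Note that a circumradius-to-shortest-edge ratio of at most one corresponds to a minimal angle of at least 30°, since an edge of length ℓ opposite an angle θ satisfies ℓ = 2r sin θ.

#include <algorithm>
#include <cmath>

struct Pt { double x, y; };

// Circumcenter of the planar triangle (a, b, c); assumes a non-degenerate triangle.
Pt circumcenter(Pt a, Pt b, Pt c) {
    double bx = b.x - a.x, by = b.y - a.y;
    double cx = c.x - a.x, cy = c.y - a.y;
    double d  = 2.0 * (bx * cy - by * cx);
    double b2 = bx * bx + by * by, c2 = cx * cx + cy * cy;
    return { a.x + (cy * b2 - by * c2) / d, a.y + (bx * c2 - cx * b2) / d };
}

double dist(Pt p, Pt q) { return std::hypot(p.x - q.x, p.y - q.y); }

// Step 2 criterion: bad-sized if the circumradius exceeds the bound eps.
bool badSized(Pt a, Pt b, Pt c, double eps) {
    return dist(circumcenter(a, b, c), a) > eps;
}

// Step 3 criterion: bad-shaped if circumradius / shortest edge length > 1.
bool badShaped(Pt a, Pt b, Pt c) {
    double r = dist(circumcenter(a, b, c), a);
    double shortest = std::min({dist(a, b), dist(b, c), dist(c, a)});
    return r / shortest > 1.0;
}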
The proof of theorem 3.3 is based on the conventional packing argument [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF].
Proof: In the initial setting, all the edge lengths are greater than ε. In step 2, after inserting the circumcenter of a bad size triangle, all the newly generated edges are connected to the center, their lengths are no less than the circumradius, which is greater than ε. In step 3, the circumradius of the bad shape triangle is greater than the shortest edge of the bad triangle, which is greater than ε. All the newly generated edges connecting to the center are longer than the radius ε. Therefore, during the refinement process, the shortest edge is always greater then ε.
Suppose p and q are the closest pair of vertices, then the line segment connecting them must be an edge of the final Delaunay triangulation, which is longer than ε. Therefore, the distance between any pair of vertices is greater than ε. Centered at the each vertex of the triangulation, a disk with radius ε/2 can be drawn. All these disks are disjoint. Because the total surface area is finite, the number of vertices is finite. Therefore, the whole algorithm will terminate.
When the algorithm terminates, all triangles are well-sized and well-shaped. Namely, the circumradius of each triangle is smaller than ε, and the shortest edge length is greater than ε.
For the flat torus case, the minimal angle is greater than 30 • . By the uniformization theorem, if a surface has a boundary, it can be conformally mapped to the constant curvature surfaces with circular holes. Then the boundaries can be approximated by the planar straight line graphs (PSLG), such that the angles between two adjacent segments are greater than 60 • . Using a proof similar to the one given by Chew in [START_REF] Chew | Guaranteed-Quality Triangular Meshes[END_REF] and [START_REF] Chew | Guaranteed-Quality Mesh Generation for Curved Surfaces[END_REF], we can show the theorem still holds.
Delaunay Refinement on General Surfaces
For general surfaces, we need to add grading to the Delaunay triangulation. The grading function is the conformal factor e^{2λ}, which controls the size of the triangles. Step 2 in the above algorithm needs to be modified as follows: select a bad-sized triangle with circumcenter p and circumradius greater than ε e^{-λ(p)}. The same proof can be applied to show the termination of the algorithm. In the resultant triangulation the grading is controlled by the conformal factor: the circumradius is less than ε e^{-λ}, the shortest edge is greater than ε e^{-λ}, so the triangles are still well-shaped. On the original surface, the edge length is greater than ε and the circumradius is less than ε. The minimal angle is bounded.
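A sketch of the graded size test (our own illustration; how λ is evaluated at the circumcenter, e.g. by barycentric interpolation of per-vertex conformal factors, is left abstract here):

#include <cmath>
#include <functional>

struct Point2 { double x, y; };

// Graded bad-size test on the uniformization domain: the triangle is split when its
// circumradius exceeds eps * exp(-lambda(center)), so that triangles shrink where the
// conformal factor (area distortion) is large.
bool badSizedGraded(double circumradius, Point2 center, double eps,
                    const std::function<double(Point2)>& lambda) {
    return circumradius > eps * std::exp(-lambda(center));
}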
According to [START_REF] Funke | Smooth-Surface Reconstruction in Near Linear Time[END_REF], such a sampling is locally uniform, and thus is also a κ-light ε-sample.
Lemma 4.1: Suppose the triangulation is T, t ∈ T is a triangle with circumradius r(t), and B ⊂ T is a union of triangles of T; then
$$\sum_{t\subset B} r(t)^2 = O(\mathrm{area}(B)), \qquad \sum_{t\subset B,\ t\cap\partial B\neq\emptyset} r(t) = O(\mathrm{length}(\partial B)). \qquad (7)$$
Normal Cycle Theory
In order to be complete, we briefly introduce the normal cycle theory, which closely follows the work in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF]. For a more in-depth treatment, we refer readers to [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF].
Intuitively, the normal cycle of a surface is its offset surface embedded in a higher dimensional Euclidean space. If the surface is not convex or smooth, its offset surface in R 3 may have self-intersections. By embedding it in a higher dimensional space, it can be fully unwrapped.
Offset Surface
Suppose V is a volumetric domain in R³, whose boundary M = ∂V is a compact C²-smooth surface. Let ρ be the distance between M and the medial axis of the complement of V.

Fig. 3: Offset surface and tube formula.

The ε-offset of V minus V is
$$V_\varepsilon = \{\, p \mid p \notin V,\ d(p, V) < \varepsilon \,\} \subset \mathbb{R}^3.$$
The tube formula can be written as
$$\mathrm{Vol}(V_\varepsilon) = \mathrm{area}(M)\,\varepsilon + \phi^H_V(M)\,\frac{\varepsilon^2}{2} + \phi^G_V(M)\,\frac{\varepsilon^3}{3} \qquad \text{for } \varepsilon < \rho.$$
The localized version of the tube formula is as follows. Let B ⊂ M be a Borel set, the ε-offset of B is V ε (B), then we have
$$\mathrm{Vol}(V_\varepsilon(B)) = \mathrm{area}(B)\,\varepsilon + \phi^H_V(B)\,\frac{\varepsilon^2}{2} + \phi^G_V(B)\,\frac{\varepsilon^3}{3}.$$
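As a quick sanity check of these conventions (our own worked example, not in the original text): for the unit sphere, area(M) = 4π, φ^H_V(M) = ∫_M (κ₁+κ₂) dA = 8π and φ^G_V(M) = ∫_M K dA = 4π, so the tube formula gives
$$\mathrm{Vol}(V_\varepsilon) = 4\pi\varepsilon + 4\pi\varepsilon^2 + \frac{4\pi}{3}\varepsilon^3 = \frac{4\pi}{3}\big((1+\varepsilon)^3 - 1\big),$$
which is exactly the volume of the spherical shell of thickness ε.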
The volume of the ε-offset V_ε(B) is always a polynomial in ε, and its coefficients are multiples of the curvature measures of B. Even if the boundary of V is not smooth, but ρ > 0, the volume of V_ε(B) is still a polynomial in ε for ε < ρ. Therefore the coefficients of this polynomial generalize the curvature measures from smooth surfaces to polyhedral surfaces. This approach does not generalize to non-convex polyhedral surfaces, where ρ may be equal to 0, so the normal cycle theory has been developed. Intuitively, normal cycles provide a way of unfolding offsets in a higher dimensional space: the normal cycle N(M) of a smooth surface M is the current associated with the set of unit normal vectors {(p, n(p)) | p ∈ M} ⊂ R³ × R³, endowed with the orientation induced by that of M, where a current is the generalization of an oriented surface patch, with integral coefficients. When no confusion is possible, we use the same notation N(M) to denote both the current and its associated set.
Normal Cycles
The normal cycle of V is the same as that of M, namely, N(V) = N(M). The diffeomorphic mapping from M to its normal cycle N(M) is denoted by
$$i : M \to N(M), \qquad p \mapsto (p, n(p)).$$
Suppose now that V is a convex body whose boundary M is a (possibly non-smooth) closed surface; at a non-smooth point p the unit normal is replaced by the normal cone NC_V(p) of V at p. The crucial property of the normal cycle is its additivity, as shown in Fig. 4. Suppose V₁ and V₂ are two convex bodies in R³ such that V₁ ∪ V₂ is convex; then
$$N(V_1 \cap V_2) + N(V_1 \cup V_2) = N(V_1) + N(V_2).$$
By the additivity property, we can define the normal cycle of a polyhedron. Given a triangulation of the polyhedron V into tetrahedra t_i, i = 1, 2, ..., n, the normal cycle of V is defined by inclusion-exclusion as
$$N(V) = \sum_{k=1}^{n} (-1)^{k+1} \sum_{1 \le i_1 < \cdots < i_k \le n} N\Big(\bigcap_{j=1}^{k} t_{i_j}\Big).$$
It is proved that the normal cycle N(V) is independent of the triangulation. Similar to the smooth surface case, one can define a set-valued mapping from M to its normal cycle N(M),
$$i : M \to N(M), \qquad p \mapsto (p, n(p)),\ n \in NC_V(p).$$
Invariant Differential 2-Forms
Normal cycles are embedded in the space R 3 × R 3 , denoted as E p × E n , where E p is called point space, and E n is called normal space. Let g be a rigid motion of R 3 , g(p) = Rp + d, where R is a rotation matrix, d is a translation vector. g can be extended to E p × E n as ĝ(p, n) = (R(p) + d, R(n)). We say that a differential 2-form ω is invariant under rigid motions, if ĝ * ω = ω.
The following invariant 2-forms play fundamental roles in the normal cycle theory.
Definition 4.5: Let the coordinates of E_p × E_n be (x₁, x₂, x₃, y₁, y₂, y₃); then
$$\omega_A = y_1\, dx_2 \wedge dx_3 + y_2\, dx_3 \wedge dx_1 + y_3\, dx_1 \wedge dx_2,$$
$$\omega_G = y_1\, dy_2 \wedge dy_3 + y_2\, dy_3 \wedge dy_1 + y_3\, dy_1 \wedge dy_2,$$
$$\omega_H = y_1 (dx_2 \wedge dy_3 + dy_2 \wedge dx_3) + y_2 (dx_3 \wedge dy_1 + dy_3 \wedge dx_1) + y_3 (dx_1 \wedge dy_2 + dy_1 \wedge dx_2).$$
Curvature measures of a surface can be recovered by integrating specific differential forms on its normal cycle. The following formulas unify the curvature measures on both smooth surfaces and polyhedral surfaces: for a Borel set B ⊂ R³,
$$\int_{N(M)} \omega_G\big|_{i(B\cap M)} = \phi^G_M(B), \qquad \int_{N(M)} \omega_H\big|_{i(B\cap M)} = \phi^H_M(B), \qquad \int_{N(M)} \omega_A\big|_{i(B\cap M)} = \mathrm{area}(B),$$
where $\omega\big|_{i(B\cap M)}$ denotes the restriction of ω to i(B ∩ M).
Estimation
In this section, we explicitly estimate the Hausdorff distance, normal deviation, and the differences in curvature measures from the discrete triangular mesh to the smooth surface.
Configuration
Let (M, g) be a C² metric surface and let D be the unit disk in the uv-plane. A conformal parameterization is given by ϕ : D → M, such that g(u, v) = e^{2λ(u,v)}(du² + dv²). Suppose p ∈ D is a point on the parameter domain; then ϕ(p) is a point on the surface. The derivative map dϕ|_p : T_pD → T_{ϕ(p)}M is the linear map
$$d\varphi|_p = e^{\lambda(p)} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$
Definition 4.6 (Natural projection): Let τ : D → T be the piecewise linear map that agrees with ϕ at the vertices of the triangulation of D and is linear on each triangle, so that T is the triangle mesh inscribed in M. Then η = ϕ ∘ τ⁻¹ : T → M is called the natural projection.
Another map from the mesh to the surface is the closest point projection.
Definition 4.7 (Closest point projection): Suppose T has no intersection with the medial axis of M. Let q ∈ T and let π(q) be its closest point on the surface M, π(q) = argmin_{r∈M} |r - q|. We call the mapping from q to its closest point π(q) the closest point projection. We will show that the closest point projection is also a homeomorphism.
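For intuition, a brute-force C++ sketch of the closest point projection onto a triangle mesh is given below (our own illustration, not the authors' code; a practical implementation would use a spatial search structure). The closest point on a single triangle is the orthogonal projection onto its plane when that projection lies inside the triangle, and otherwise the best of the three edge projections.

#include <algorithm>
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 add(const Vec3& a, const Vec3& b) { return {a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
static Vec3 mul(const Vec3& a, double s)      { return {a[0]*s, a[1]*s, a[2]*s}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static double dist2(const Vec3& a, const Vec3& b) { Vec3 d = sub(a, b); return dot(d, d); }

// Closest point to q on the segment [a, b].
static Vec3 closestOnSegment(const Vec3& q, const Vec3& a, const Vec3& b) {
    Vec3 ab = sub(b, a);
    double t = std::clamp(dot(sub(q, a), ab) / dot(ab, ab), 0.0, 1.0);
    return add(a, mul(ab, t));
}

// Closest point to q on the triangle (a, b, c).
static Vec3 closestOnTriangle(const Vec3& q, const Vec3& a, const Vec3& b, const Vec3& c) {
    Vec3 ab = sub(b, a), ac = sub(c, a), aq = sub(q, a);
    double d00 = dot(ab, ab), d01 = dot(ab, ac), d11 = dot(ac, ac);
    double d20 = dot(aq, ab), d21 = dot(aq, ac);
    double den = d00 * d11 - d01 * d01;
    double v = (d11 * d20 - d01 * d21) / den;   // barycentric coordinates of the
    double w = (d00 * d21 - d01 * d20) / den;   // in-plane projection of q
    if (v >= 0.0 && w >= 0.0 && v + w <= 1.0)
        return add(a, add(mul(ab, v), mul(ac, w)));
    Vec3 best = closestOnSegment(q, a, b);
    for (const Vec3& cand : { closestOnSegment(q, b, c), closestOnSegment(q, c, a) })
        if (dist2(q, cand) < dist2(q, best)) best = cand;
    return best;
}

// pi(q): the closest point on the mesh (vertices V, triangles F), by scanning all faces.
Vec3 closestPointProjection(const Vec3& q, const std::vector<Vec3>& V,
                            const std::vector<std::array<int, 3>>& F) {
    Vec3 best = V[F[0][0]];
    for (const auto& f : F) {
        Vec3 cand = closestOnTriangle(q, V[f[0]], V[f[1]], V[f[2]]);
        if (dist2(q, cand) < dist2(q, best)) best = cand;
    }
    return best;
}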
Hausdorff Distance and Normal Deviation
In the following discussion, we assume the triangulation is generated by the Delaunay Refinement in Theorem 3.3. Our goal is to estimate the Hausdorff distance and the normal deviation, in terms of both the natural projection and the closest point projection.
Lemma 4.8 (Natural projection): Suppose q ∈ T; then
$$|q - \eta(q)| = O(\varepsilon^2), \qquad (8)$$
$$|n(q) - n(\eta(q))| = O(\varepsilon). \qquad (9)$$
Proof: As shown in Fig. 5, suppose p ∈ D and τ(p) = q. The point p lies inside a triangle t = [p₀, p₁, p₂],
$$p = \sum_{k=0}^{2} \alpha_k p_k, \qquad 0 \le \alpha_k \le 1,$$
where the α_k's are barycentric coordinates. All the edge lengths are Θ(ε), the angles are bounded, and the area is Θ(ε²).
Equation 8: By the linearity of τ and dϕ, τ(p_k) = ϕ(p_k) and |ϕ(p_k) - dϕ(p_k)| = O(ε²), so we obtain
$$|\tau(p) - d\varphi(p)| = \Big|\sum_k \alpha_k \big(\tau(p_k) - d\varphi(p_k)\big)\Big| \le \sum_k \alpha_k\, |\varphi(p_k) - d\varphi(p_k)| = O(\varepsilon^2).$$
Therefore
|τ(p) -ϕ(p)| ≤ |τ(p) -dϕ(p)| + |dϕ(p) -ϕ(p)| = O(ε 2 ),
where q = τ(p) and η(q) = ϕ ∘ τ⁻¹(q) = ϕ(p); this gives Eqn. 8.
Equation 9: Construct local coordinates on the tangent plane T_{ϕ(p₀)}M, such that ϕ(p₀) is at the origin and dϕ(p₁) is along the x-axis. Then τ(p₁) is (Θ(ε), 0, O(ε²)) and τ(p₂) is (Θ(ε) cos β, Θ(ε) sin β, O(ε²)), where β is the angle at p₀. By direct computation, the normal to the face τ(t) is (O(ε), O(ε), Θ(1)). Therefore |n ∘ τ(p) - n ∘ ϕ(p₀)| = O(ε). Furthermore,
$$|n \circ \varphi(p) - n \circ \varphi(p_0)| = |W(\varphi(p) - \varphi(p_0))| \le \|W\|\, |\varphi(p) - \varphi(p_0)| = O(\varepsilon),$$
where W is the Weingarten map. Since M is compact, ‖W‖ is bounded, and |ϕ(p) - ϕ(p₀)| is O(ε). Hence
$$|n \circ \tau(p) - n \circ \varphi(p)| \le |n \circ \varphi(p) - n \circ \varphi(p_0)| + |n \circ \tau(p) - n \circ \varphi(p_0)| = O(\varepsilon).$$
This gives Eqn. 9.
Lemma 4.9 (Closest point projection): Suppose q ∈ T; then
$$|q - \pi(q)| = O(\varepsilon^2), \qquad (10)$$
$$|n(q) - n(\pi(q))| = O(\varepsilon). \qquad (11)$$
Proof: Equation 10: From Eqn. 8 and the definition of closest point, we obtain
|q -π(q)| ≤ |q -η(q)| = O(ε 2 ).
Equation 11: From Eqn. 8 and Eqn. 10, we get
|η(q) -π(q)| ≤ |η(q) -q| + |q -π(q)| = O(ε 2 ), therefore |n • η(q) -n • π(q)| ≤ W |η(q) -π(q)| = O(ε 2 ).
Then from Eqn. 9 and the above equation,
|n(q) -n(π(q))| ≤ |n(q) -n • η(q)| + |n • η(q) -n • π(q)| = O(ε) + O(ε 2 ).
Remark
The proofs for the Hausdorff distances in Eqn. 8 and Eqn. 10 do not require the triangulation to be well-shaped, but only well-sized. The proofs for the normal deviation
Fig. 6: Small triangles inscribed in latitudinal circles of a cylinder do not guarantee normal convergence.
estimation in Eqn. 9 and Eqn. 11 require the triangulation to be both well-sized and well-shaped. In the proofs we use the facts that the triangulation on parameter domain has bounded angles, and the mapping ϕ is conformal. Figure 6 shows a counterexample: a triangle is inscribed in a latitudinal circle of a cylinder, no matter how small it is, its normal is always orthogonal to the surface normals.
Global Homeomorphism
Both the natural projection and the closest point projection are homeomorphisms. While it is trivial for natural projection, in the following we give detailed proof to show that the closest point projection is a piecewise diffeomorphism, and we estimate its Jacobian. Lemma 4.10: The closest point projection π : T → M is a homeomorphism.
Proof: First we show that π restricted to the one-ring neighborhood of each vertex of T is a local homeomorphism. Suppose p ∈ T is a vertex, therefore p ∈ M as well. U(p) is the union of all faces adjacent to p. We demonstrate that π :
U(p) → M is bijective. Assume q ∈ U(p); then |p - q| = O(ε) and |π(q) - p| ≤ |π(q) - q| + |q - p| = O(ε²) + O(ε). Therefore
$$|n(\pi(q)) - n(p)| = O(\varepsilon). \qquad (12)$$
Assume there is another point r ∈ U(p), such that π(q) = π(r).
Let the unit vector of the line segment connecting them be
$$d = \frac{r - q}{|r - q|};$$
then, because r, q ∈ U(p), d is almost orthogonal to n(p),
$$\langle d, n(p) \rangle = O(\varepsilon). \qquad (13)$$
On the other hand, d is along the normal direction at π(q), n(π(q)) = ±d; assuming d is along n(π(q)), from Eqn. 12 we obtain
$$|d - n(p)| = O(\varepsilon). \qquad (14)$$
Eqn. 13 and Eqn. 14 contradict each other. Therefore π |U(p) is bijective.
Then we show that π restricted on each face is a diffeomorphism. Let r(u, v), n(u, v) be position and normals of M respectively, where (u, v) are local parameters along the principal directions. t ∈ T is a planar face. The inverse closest point projection map is π -1 : r(u, v) → q(u, v), where q(u, v) is the intersection between the ray through r(u, v) along n(u, v) and the face t,
$$q(u, v) = r(u, v) + s(u, v)\, n(u, v);$$
a direct computation shows
$$\langle q_u \times q_v, n \rangle = (1 + 2Hs + Ks^2)\, \langle r_u \times r_v, n \rangle, \qquad (15)$$
where s = O(ε²). When ε is small enough, the factor (1 + 2Hs + Ks²) is close to 1, which means π|_{U(p)} is a piecewise diffeomorphism.
Secondly, we show that π is a global homeomorphism. We have shown that π is a covering map. At each vertex of T , the closest point equals itself, therefore the degree of π is 1. So π is a global homeomorphism.
Note that, the estimation of the Jacobian of the closest point projection in Eqn. 15 can be applied to show the following. Suppose B ⊂ R 3 is a Borel set, then
|area(B ∩ T ) -area(π(B) ∩ M)| = Kε 2 .
Proof of the Main Theorem
The proof of the main Theorem 3.4. associated with the closest point projection π is a simple corollary of the following main theorem in [START_REF] Cohen-Steiner | Restricted Delaunay Triangulations and Normal Cycle[END_REF].
Theorem 4.11: Suppose T is a bounded aspect ratio triangulation projecting homeomorphically on M, if B is a relative interior of a union of triangles of T , then
$$|\phi^G_T(B) - \phi^G_M(\pi(B))| \le K\varepsilon, \qquad (16)$$
$$|\phi^H_T(B) - \phi^H_M(\pi(B))| \le K\varepsilon, \qquad (17)$$
where, for fixed M,
$$K = O\Big(\sum_{t\in T,\ t\subset B} r(t)^2\Big) + O\Big(\sum_{t\in T,\ t\subset B,\ t\cap\partial B\neq\emptyset} r(t)\Big),$$
r(t) is the circumradius of triangle t. Proof (Closest point projection): By Lemma 4.10, the closest point projection is a homeomorphism. By Theorem 3.3, the triangulation T has a bounded aspect ratio, therefore the conditions of Theorem 4.11 are satisfied, and consequently, Eqns. 16 and 17 hold. According to Eqn. 7 in Lemma 4.1, therefore the main theorem holds.
The proof of the main Theorem 3.4. associated with the natural projection η is more direct and more adapted to our framework.
Proof (Natural projection): The natural projection η : T → M can be lifted to a mapping between the two normal cycles f : N(T ) → N(M), such that the following diagram commutes:
$$\begin{array}{ccc} N(M) & \xleftarrow{\ \ f\ \ } & N(T) \\ {\scriptstyle i}\,\big\uparrow & & \big\downarrow\,{\scriptstyle p_1} \\ M & \xleftarrow{\ \ \eta\ \ } & T, \end{array}$$
where p 1 is the projection from E p × E n to E p , and i(q) = (q, n(q)) for all q ∈ M. Namely, given a point q ∈ T , and n(q) in its normal cone, (q, n(q)) ∈ N(T ),
f : (q, n(q)) → (η(q), n • η(q)) ∈ N(M).
By Lemma 4.8,
$$|(q, n(q)) - f(q, n(q))| = O(\varepsilon). \qquad (18)$$
It is obvious that f is continuous. Let B ⊂ E p , we denote the current N(T ) ∩ (B × E n ) by D, and the current N(M) ∩ (η(B) × E n ) by E, as shown in Fig. 7. Consider the affine homotopy h between f and the identity,
Fig. 7: Homotopy between the normal cycles N(T) and N(M).
h(x, •) = (1 -x)id(•) + x f (•), x ∈ [0, 1].
We define the volume swept by the homotopy as
C = h # ([0, 1] × D),
whose boundary is
∂C = E -D -h # ([0, 1] × ∂ D).
Intuitively, C is a prism, the ceiling is E, the floor is D, and the walls are
h_#([0, 1] × ∂D). Hence
$$\phi^G_M(\eta(B)) - \phi^G_T(B) = \int_{E-D} \omega_G = \int_{\partial C} \omega_G + \int_{h_\#([0,1]\times\partial D)} \omega_G.$$
By Stokes' Theorem,
$$\int_{\partial C} \omega_G = \int_C d\omega_G.$$
Both ω G and its exterior derivative dω G are bounded, therefore, we need to estimate the volume of block C and the area of the wall h # ([0, 1] × ∂ D). We use M(•) to denote the flat norm (volume, area, length). The volume of the prism C is bounded by the height and the section area. The height is bounded by sup| f -id|. The section area is bounded by the product of the bottom area M(D) and the square of the norm
$$\|Dh(x, \cdot)\|^2 = \|x\,Df + (1 - x)\,\mathrm{id}\|^2 \le \big(x \sup\|Df\| + (1 - x)\big)^2.$$
In the later discussion we will see that sup‖Df‖ ≥ 1, therefore ‖Dh(x, ·)‖ ≤ sup‖Df‖. We obtain
$$M(C) \le M(D)\, \sup|f - \mathrm{id}|\, \sup\|Df\|^2, \qquad M\big(h_\#([0, 1] \times \partial D)\big) \le M(\partial D)\, \sup|f - \mathrm{id}|\, \sup\|Df\|.$$
Now we estimate each term one by one. 1) Eqn. [START_REF] Cohen-Steiner | Second Fundamental Measure of Geometric Sets and Local Approximation of Curvatures[END_REF] shows
sup| f -id| = O(ε).
2) Since the triangulation has a bounded ratio of circumradius to edge length, we obtain
$$M(D) = O\Big(\sum_{t\in T,\ t\subset B} r(t)^2\Big), \qquad M(\partial D) = O\Big(\sum_{t\in T,\ t\subset B,\ t\cap\partial B\neq\emptyset} r(t)\Big).$$
Let K be the summation of the two terms above. According to Lemma 4.1, K is bounded by the area of B and the length of ∂B.
3) For the estimation of D f , we observe that on each triangle t ∈ D, the mapping τ converges to dϕ, so D f on each triangle converges to
$$(r_u, 0)\,du + (r_v, 0)\,dv \;\longrightarrow\; (r_u, n_u)\,du + (r_v, n_v)\,dv,$$
whose Gram matrix is
$$\begin{pmatrix} \langle (r_u, n_u), (r_u, n_u) \rangle & \langle (r_u, n_u), (r_v, n_v) \rangle \\ \langle (r_v, n_v), (r_u, n_u) \rangle & \langle (r_v, n_v), (r_v, n_v) \rangle \end{pmatrix} = e^{2\lambda}\, \mathrm{id} + III, \qquad (19)$$
where the third fundamental form is
$$III = \begin{pmatrix} \langle n_u, n_u \rangle & \langle n_u, n_v \rangle \\ \langle n_v, n_u \rangle & \langle n_v, n_v \rangle \end{pmatrix}.$$
The proof for the mean curvature measure is exactly the same.
Remark 1. In our proofs, perfect conformality is unnecessary. All the proofs are based on one requirement: the maximal circumradius of the triangles of the tessellations converges to zero. This only requires the parameterization to be K-quasiconformal, where K is a positive constant, less than ∞.
2. It is well known that the Gauss curvature is defined on any (abstract) Riemannian surface. By the Nash theorem [START_REF] Nash | C1 Isometric Imbeddings[END_REF] [49], any (abstract) Riemannian surface can be isometrically embedded in a high-dimensional Euclidean space. Using the theory of normal cycle for large codimension submanifolds of Euclidean space, the inequalities (3) and (5) in Theorem 3.4 can be extended to any abstract Riemannian surface, the approximation depending on the chosen embedding.
COMPUTATIONAL ALGORITHM
We verified our theoretical results by meshing spline surfaces and comparing the Gaussian and mean curvature measures.
Each spline patch M is represented as a parametric smooth surface defined on a planar rectangle, γ : R → R^3, where R is the planar rectangle parameter domain; the position vector γ is C^2 continuous, therefore the classical curvatures are well defined. Let ϕ : D → M be the conformal mapping from the unit disk D to the spline surface M. As shown in the left-hand diagram of Diagram (20), the mapping f goes from D to R and makes the diagram commute, therefore f = γ^{-1} ∘ ϕ.
As shown in Fig. 8, in our experiments each planar domain or surface S (S ∈ {D, R, M}) is approximated by two triangle meshes T^k_S, k = 0, 1, where T^0_S is induced by the regular grid on the rectangle and T^1_S is induced by the Delaunay triangulation on the unit disk. Both the conformal parameterization ϕ and the parameter domain mapping f are approximated by piecewise linear (PL) mappings, φ and f respectively, which are computed on the meshes.
Algorithm Pipeline
Conformal Parametrization
In the first stage, the conformal parameterization is computed as follows:
f^{-1} : T^0_R --γ--> T^0_M --φ^{-1}--> T^0_D
T^0_R is a triangulation induced by the regular grid structure on the rectangle R. Each vertex of T^0_R is mapped to the spline surface M by γ and each face is mapped to a Euclidean triangle; this gives the mesh T^0_M. If the grid tessellation is dense, the quality of the mesh T^0_M is good enough for performing the Ricci flow, and we obtain the PL mapping φ^{-1}, which maps T^0_M to a triangulation T^0_D of the disk. The composition of φ and γ^{-1} gives the PL mapping f = γ^{-1} ∘ φ : T^0_D → T^0_R.
Resampling and Remeshing
The process in the second stage is described in the following diagram:
φ : T^1_D --f--> T^1_R --γ--> T^1_M
First, we apply Ruppert's Delaunay refinement method to generate a good-quality triangulation T^1_D on the unit disk. The triangulation T^1_D on the disk is mapped to a triangulation T^1_R on the rectangle by the PL mapping f : T^0_D → T^0_R. The connectivity of T^1_R is the same as that of T^1_D, and the vertices of T^1_R are the images of the vertices of T^1_D under the PL mapping f, which are calculated as follows. Suppose q is a Delaunay vertex of T^1_D on the disk, covered by a triangle [p_0, p_1, p_2] ∈ T^0_D. Assume the barycentric coordinates of q are (α_0, α_1, α_2), i.e. q = ∑_k α_k p_k; then f(q) = ∑_k α_k f(p_k). The triangulation T^1_R induces a triangle mesh T^1_M, whose connectivity is that of T^1_R and whose vertices are the images of those of T^1_R under the spline mapping γ. The discrete PL conformal mapping is given by φ = γ ∘ f : T^1_D → T^1_M.
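As an illustration of this resampling step, the short Python sketch below (not the authors' implementation; the triangle data and names are hypothetical) locates a 2-D point in a triangle through its barycentric coordinates and transports it through the piecewise-linear map.

import numpy as np

def barycentric_coords(q, p0, p1, p2):
    # barycentric coordinates of the 2-D point q in triangle (p0, p1, p2)
    T = np.column_stack((p1 - p0, p2 - p0))   # 2x2 matrix of edge vectors
    a1, a2 = np.linalg.solve(T, q - p0)       # local coordinates
    return np.array([1.0 - a1 - a2, a1, a2])

def map_vertex(q, tri_D, tri_R):
    # image f(q) under the piecewise-linear map sending tri_D to tri_R
    alpha = barycentric_coords(q, *tri_D)
    return sum(a * p for a, p in zip(alpha, tri_R))

# toy usage with hypothetical triangles on the disk and on the rectangle
tri_D = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
tri_R = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 3.0])]
print(map_vertex(np.array([0.25, 0.25]), tri_D, tri_R))   # -> [0.5, 0.75]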
The triangle mesh generated by the Delaunay refinement based on conformal parameterization is T 1 M . Fig. 9 shows the meshing results using the proposed method for a car model. In this experiment, the conformal parameter domain D is also a rectangle. Frame (a) shows a B-spline surface patch M; Frame (b) shows the initial triangle mesh T 0 M ; Frame (c) shows the triangulations on the conformal parameter domain, T 0 D on the top and T 1 D at the bottom; Frames (d), (e) and (f) illustrate the triangle meshes generated by the Delaunay refinement on a conformal parameter domain with a different number of samples, 1K, 2K, and 4K, respectively.
EXPERIMENTAL RESULTS
The meshing algorithms are implemented in generic C++ on a Windows platform; all the experiments are conducted on a PC with an Intel Core 2 CPU at 2.66 GHz and 3.49 GB of RAM.
Triangulation Quality
The patch on the Utah teapot (see Fig. 8) is meshed with different sampling densities; the meshes are denoted {T_n}_{n=1}^{11} as in Tab. 2. The statistics of the meshing quality are reported in Fig. 10. Frame (a) shows the maximal circumradius of all the triangles of each mesh and Frame (b) the average circumradius. Because the sampling is uniform, we expect the circumradius ε_n and the number of vertices s_n to satisfy the relation
ε_n ∼ 1/√s_n.
The curve in Frame (b) perfectly meets this expectation. Frames (c) and (d) show the minimal angles on all meshes. According to the theory of Ruppert's Delaunay refinement, the minimal angle should be no less than 20.7°. Frame (c) shows the minimal angles; in our experiments they are no less than 20.9°. Frame (d) illustrates the means of the minimal angles, which exceed 46.5°.
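The per-triangle quantities behind these statistics are elementary; the following Python helpers (an assumed post-processing step, not part of the published pipeline) compute the circumradius and the minimal angle of a triangle given its three 3-D vertices.

import numpy as np

def circumradius(a, b, c):
    # R = |bc| |ca| |ab| / (4 * area) for the triangle (a, b, c) in R^3
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))
    return la * lb * lc / (4.0 * area)

def min_angle_deg(a, b, c):
    # smallest interior angle of the triangle, in degrees
    pts = [a, b, c]
    angles = []
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return min(angles)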
Curvature Measure Comparisons
For each triangle mesh T_k produced by our method and each vertex q ∈ T_k, we define a small ball B(q, r) in R^3 centered at q with radius r. We then calculate the curvature measures φ^G_{T_k}(B(q, r)) and φ^H_{T_k}(B(q, r)) using the formulae of Eqn. 1 and Eqn. 2, respectively.
We also compute the curvature measures on the smooth surface M, φ^G_M(B(q, r)) and φ^H_M(B(q, r)), using the following method:
φ^G_M(B(q, r)) := ∫_{γ(u,v)∈B(q,r)} G(u, v) g(u, v) du dv,
where γ(u, v) is the point on the spline surface, G(u, v) is the Gaussian curvature at γ(u, v), and g(u, v) is the determinant of the metric tensor. Because the spline surface is C^2 continuous, all the differential geometric quantities can be computed directly with the classical formulas. Note that, because M and T_k are very close, we use B(q, r) ∩ T_k to replace π(B(q, r)) ∩ M in practice. In all our experiments, we set r to 0.05·area(M)^{1/2} and 0.08·area(M)^{1/2} for the Gaussian and mean curvature measures, respectively.
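Eqn. 1 and Eqn. 2 are not reproduced in this excerpt. As a hedged illustration only, the sketch below computes a discrete Gaussian curvature measure of a ball B(q, r) as the sum of vertex angle defects inside the ball, which is the standard normal-cycle formula and is assumed here to be what Eqn. 1 amounts to; the mean curvature measure would analogously sum signed dihedral angles weighted by edge lengths.

import numpy as np

def gaussian_measure(vertices, faces, q, r):
    # vertices: (N, 3) array, faces: list of index triples, q: center, r: radius
    total = 0.0
    for vi, v in enumerate(vertices):
        if np.linalg.norm(v - q) > r:
            continue
        angle_sum = 0.0
        for f in faces:
            if vi not in f:
                continue
            j, k = [idx for idx in f if idx != vi]
            u = vertices[j] - v
            w = vertices[k] - v
            cosang = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
            angle_sum += np.arccos(np.clip(cosang, -1.0, 1.0))
        total += 2.0 * np.pi - angle_sum   # angle defect at the vertex
    return total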
We define the average errors between curvature measures as
e^G_n = (1/|V_n|) ∑_{v∈V_n} |φ^G_M(B(v, r)) - φ^G_{T_n}(B(v, r))|,   and   e^H_n = (1/|V_n|) ∑_{v∈V_n} |φ^H_M(B(v, r)) - φ^H_{T_n}(B(v, r))|,
where V_n is the vertex set of T_n.
Figure 11 shows the errors between curvature measures with respect to sampling densities, or equivalently, the number of samples and the average circumradius. Frames (a) and (b) show that the curvature measure errors are approximately proportional to the inverse of the square root of the number of sample points; Frames (c) and (d) show the curvature measure errors are approximately linear with respect to the circumradius. This again matches our main Theorem 3.4.
Figure 12 visualizes the curvature distributions on the smooth patch M (left column) and on the triangle mesh T_11 (right column). The histograms show the distributions of the relative curvature errors at the vertices of the mesh. From the two left-hand columns, we can see that the curvatures of M look very similar to their counterparts on T_11. Moreover, from the right-hand column, we find that the overwhelming majority of vertices have relative curvature errors very close to zero. In particular, for the Gaussian curvature measure more than 97% of the vertices fall into the relative error range (-0.05, 0.05), and for the mean curvature measure more than 95% of the vertices fall into the range (-0.05, 0.05). This demonstrates the accuracy of the proposed method.
CONCLUSION
This work analyzes the surface meshing algorithm based on the conformal parameterization and the Delaunay refinement method. By using the normal cycle theory and the conformal geometry theory, we rigorously prove the convergence of curvature measures, and estimate the Hausdorff distance and the normal deviation. According to [START_REF] Hildebrandt | On the Convergence of Metric and Geometric Properties of Polyhedral Surfaces[END_REF], these theoretical results also imply the convergence of the Riemannian metric and the Laplace-Beltrami operator.
The method can be generalized to prove the curvature convergence of other meshing algorithms, such as the centroidal Voronoi tessellation method. The normal cycle theory applies in arbitrary dimension, and we will generalize the theoretical results of this work to higher-dimensional discretizations, such as volumetric shapes. We will explore these directions in the future.
Curvature measures: we show that the Delaunay refinement method on the conformal parameter domain generates a κ-light ε-sample, which guarantees the convergence of the curvature measures. Moreover, we show that the bounds on the curvature measure errors are Kε, where K is O(area(B)) + O(length(∂B)) and is independent of the triangulation; see Theorem 3.4 and Section 4.4.4.
Definition 3.1: The Gaussian curvature measure of M, φ^G_M, is the function associating with each Borel set B ⊂ R^3 the value φ^G_M(B) = ∫_{B∩M} G(p) dp, where G(p) is the Gaussian curvature of M at the point p. Similarly, the mean curvature measure φ^H_M is given by φ^H_M(B) = ∫_{B∩M} H(p) dp, where H(p) denotes the mean curvature of M at p.
Fig. 1: Uniformization for closed surfaces.
Fig. 2: Uniformization for surfaces with boundaries.
Lemma 4.1: The following estimate holds: ∑_{t⊂B} r(t)^2 + ∑_{t⊂B, t∩∂B ≠ ∅} r(t) = O(area(B)) + O(length(∂B)).
Definition 4.2: The normal cycle N(M) of a C^2-smooth surface M is the current associated with the set N(M) := {(p, n(p)) | p ∈ M}.
Fig. 4: Additivity of the normal cycle.
For a polyhedral surface, we use normal cones to replace normal vectors.
Definition 4.3: The normal cone NC_V(p) of a point p ∈ V is the set of unit vectors v such that ⟨qp, v⟩ ≤ 0 for all q ∈ V.
Definition 4.4: The normal cycle of M is the current associated with the set {(p, n(p)) | p ∈ M, n ∈ NC_V(p)}, endowed with the orientation induced by that of M. As in Fig. 4, normal cycles are graphically represented by their image under the map sending (p, n(p)) to p + n(p). The crucial property of the normal cycle is its additivity, as shown in Fig. 4: if V_1 and V_2 are two convex bodies in R^3 such that V_1 ∪ V_2 is convex, then N(V_1 ∪ V_2) + N(V_1 ∩ V_2) = N(V_1) + N(V_2).
Fig. 5: Configuration.
Fig. 8: Pipeline for meshing a Bézier patch of the Utah teapot.
Fig. 9: Remeshing of the Car spline surface model.
Fig. 10: The maximal and average circumradii {ε_n} (a-b), and the minimal and average minimal angles of {T_n} (c-d).
Fig. 11: Curvature errors e^G_n and e^H_n of {T_n} converge to zero as the number of sample points goes to infinity (a-b), and as the average circumradius {ε_n} goes to zero (c-d).
Fig. 12: Illustration of the curvature values on the Utah teapot spline surface patch M (a, d) and on its approximate mesh T_11 (b, e). Their relative curvature error distribution histograms are shown in (c) and (f).
Table 2: The numbers of vertices and triangles of the sequence of meshes {T_n} at different resolutions.
ACKNOWLEDGMENTS
This work was supported under the grants ANR 2010 INTB 0301 01, NSF DMS-1221339, NSF Nets-1016829, NSF CCF-1081424 and NSF CCF-0830550.
Huibin Li received a BSc degree in mathematics from Shaanxi Normal University, Xi'an, China, in 2006, and a Master's degree in applied mathematics from Xi'an Jiaotong University, Xi'an, China, in 2009. He is currently a PhD candidate in mathematics and computer science at Ecole Central de Lyon, France. His research interests include discrete curvature estimation, 3D face analysis and recognition.
Wei | 57,820 | [ "7562" ] | [ "403930", "303540", "193738", "403930", "361557" ] |
01487863 | en | [ "phys" ] | 2024/03/04 23:41:48 | 2017 | https://theses.hal.science/tel-01487863/file/72536_LI_2017_archivage.pdf | M. Hao
M. Wim Desmet, Professor
M. Alain Le Bot
M. Antonio Huerta, Professor
Professor Emeritus
M. Hervé Riou
On wave based computational approaches for heterogeneous media
Keywords: structures, materials, evanescent wave
Doctoral thesis of Université Paris-Saclay, prepared at École Normale Supérieure de Cachan (École normale supérieure Paris-Saclay)
Introduction
Nowadays, numerical simulation has become indispensable for analysing and optimising problems in every part of the engineering process. By avoiding real prototypes, virtual testing drastically reduces costs and at the same time greatly speeds up the design process. In the automotive industry, for example, in order to comply with pollution standards, companies aim to produce lighter vehicles with improved passenger comfort. However, decreasing the weight of a vehicle often makes it more susceptible to vibrations, which are mainly generated by acoustic effects, and designers must take all these factors into account in the design of the automotive structure. Another example is the aerospace industry: given a limited budget, designers endeavour to minimise the total mass of a launcher while at the same time mitigating the resulting increase in vibrations. A last example is the construction of harbors, which are agitated by ocean waves: to accommodate the maximum number of vessels and to limit the water agitation, designers look for an optimised harbor geometry.
Depending on its frequency response function, a vibration problem in mechanics can be classified into one of three frequency ranges, as shown in Figure 1.
The low-frequency range is characterized by the local response: the resonance peaks are distinct from one another, and the vibration behaviour can be represented by a combination of a few normal modes. The Finite Element Method (FEM) [START_REF] Zienkiewicz | The finite element method[END_REF] is the most commonly used tool to analyse low-frequency vibration problems. Using polynomial shape functions to approximate the vibration field, the FEM is efficient and robust; mature commercial software implementing it is widely used in industry, and, as numerical models become increasingly complex, many researchers continue to develop the method through intensive and parallel computation techniques.
In the high-frequency range the dimensions of the object are much larger than the wavelength, there are many small overlapping resonance peaks, and the system is extremely sensitive to uncertainties. In this context, Statistical Energy Analysis (SEA) [Lyon et Maidanik, 1962] was developed to solve vibration problems in this range. The SEA neglects the local response; instead, it studies the global energy by taking averages and variances of the dynamic field over large sub-systems. These features make the SEA perform well in the high-frequency range, but they also restrict its use to this range: the SEA cannot handle low-frequency and mid-frequency problems.
Figure 1: A typical frequency response function divided in low-mid-and high-frequency zones [Ohayon et Soize, 1998].
In the mid-frequency range, the problem is characterised by intense modal densification, and thus combines characteristics of both low-frequency and high-frequency problems. It presents many high and partially overlapping resonance peaks. For this reason, the local response cannot be neglected as it is in the high-frequency range. In addition, the system is very sensitive to uncertainties. Because of these features, the methods designed for the low-frequency or high-frequency ranges, such as the FEM and the SEA, cannot be applied to mid-frequency problems: the high-frequency methods fail because they neglect the local response, while the low-frequency methods fail because of the prohibitive mesh refinement required by the pollution effect [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF].
Facing to mid-frequency problem, one category of approaches could be classified into the extensions of the standard FEM, such as the Stabilized Finite Element Methods including the Galerkin Least-Squares FEM [Harari et Hughes, 1992] the Galerkin Gradient Least-Squares FEM (G∇LS-FEM) [Harari, 1997], the Variational Multiscale FEM [Hughes, 1995], The Residual Free Bubbles method (RFB) [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF], the Adaptive Finite Element method [Stewart et Hughes, 1997b]. There also exists the category of energy based methods, such as the Hybrid Finite Element and Statistical Energy Analysis (Hybrid FEM-SEA) [De Rosa et Franco, 2008, De Rosa et Franco, 2010], the Statistical modal Energy distribution Analysis [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF], the Wave Intensity Analysis [Langley, 1992], the Energy Flow Analysis [START_REF] Belov | Propagation of vibrational energy in absorbing structures[END_REF][START_REF] Buvailo | [END_REF], the Ray Tracing Method [START_REF] Krokstad | Calculating the acoustical room response by the use of a ray tracing technique[END_REF], Chae et Ih, 2001], the Wave Enveloppe Method [Chadwick et Bettess, 1997].
Other approaches have been developed in order to solve mid-frequency problem, namely the Trefftz approaches [Trefftz, 1926]. They are based on the use of exact ap-proximations of the governing equation. Such methods are, for example, the partition of unity method (PUM) [Strouboulis et Hidajat, 2006], the ultra weak variational method (UWVF) [Cessenat et Despres, 1998a, Huttunen et al., 2008], the least square method [Monk et Wang, 1999, Gabard et al., 2011], the plane wave discontinuous Galerkin methods [START_REF] Gittelson | Plane wave discontinuous Galerkin methods: analysis of the h-version[END_REF], the method of fundamental solutions [Fairweather et Karageorghis, 1998, Barnett et Betcke, 2008] the discontinuous enrichment method (DEM) [START_REF] Farhat | The discontinuous enrichment method[END_REF], Farhat et al., 2009], the element free Galerkin method [Bouillard et Suleaub, 1998], the wave boundary element method [START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF], Bériot et al., 2010] and the wave based method [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF], Van Genechten et al., 2012].
The Variational Theory of Complex Rays (VTCR), first introduced in [Ladevèze, 1996], belongs to this category of numerical strategies which use waves in order to get some approximations for vibration problems. It has been developed for 3-D plate assemblies in [Rouch et Ladevèze, 2003], for plates with heterogeneities in [START_REF] Ladevèze | A multiscale computational method for medium-frequency vibrations of assemblies of heterogeneous plates[END_REF], for shells in [START_REF] Riou | Extension of the Variational Theory of Complex Rays to shells for medium-frequency vibrations[END_REF], and for transient dynamics in [START_REF] Chevreuil | Transient analysis including the low-and the medium-frequency ranges of engineering structures[END_REF]. Its extensions to acoustics problems can be seen in [START_REF] Riou | The multiscale VTCR approach applied to acoustics problems[END_REF], Ladevèze et al., 2012, Kovalevsky et al., 2013]. In [START_REF] Barbarulo | Proper generalized decomposition applied to linear acoustic: a new tool for broad band calculation[END_REF] the broad band calculation problem in linear acoustic has been studied. In opposition to FEM, the VTCR has good performances for medium frequency applications, but is less efficient for very low frequency problems.
Recently, a new approach called the Weak Trefftz Discontinuous Galerkin (WTDG) method was introduced in [Ladevèze et Riou, 2014]. It differs from the pure Trefftz methods because the requirement to use exact solutions of the governing equation can be weakened. This method allows the hybrid use of FEM (polynomial) and VTCR (wave) approximations at the same time in different adjacent subdomains of a problem. Therefore, for a global system containing both sub-structures dominated by low-frequency vibrations and sub-structures dominated by mid-frequency vibrations, the WTDG outperforms the standard FEM and the standard VTCR.
Numerous methods for solving mid-frequency problems have been presented above, and among them those derived from the Trefftz method seem the most efficient. However, most of them are limited to the constant wave number Helmholtz problem; in other words, the system is considered as a piecewise homogeneous medium. The reason lies in the fact that it is easy to find free-space solutions of the Helmholtz equation with a constant wave number, which is not necessarily the case when the wave number varies in space. Yet a spatially varying wave number is encountered in several applications of the Helmholtz equation, such as wave propagation in geophysics or electromagnetics and underwater acoustics in large domains. Therefore these mid-frequency methods make the numerical result deviate from the real engineering problem. To alleviate this, the UWVF proposes special solutions in the case of a layered material in [START_REF] Luostari | Improvements for the ultra weak variational formulation[END_REF]. Its study of the smoothly varying wave number problem in one dimension, using exponentials of polynomials to approximate the solution, can be seen in [START_REF] Després | Generalized plane wave numerical methods for magnetic plasma[END_REF]. The DEM also suggests special solutions in the case of a layered material in [START_REF] Tezaur | A discontinuous enrichment method for capturing evanescent waves in multiscale fluid and fluid/solid problems[END_REF], and its extension to the smoothly varying wave number problem can be seen in [START_REF] Tezaur | The discontinuous enrichment method for medium-frequency Helmholtz problems with a spatially variable wavenumber[END_REF]. For a smoothly varying wave number, the DEM introduces special forms of wave functions to enrich the solution.
The objective of this dissertation is to deal with the heterogeneous Helmholtz problem. First, one considers media in which the square of the wave number varies linearly; this case is solved by extending the VTCR. Then a general way to handle heterogeneous media with the WTDG method is proposed; in this case there is no a priori restriction on the wave number, and the WTDG solves the problem by approximately satisfying the governing equation in each subdomain.
In the extended VTCR, one solves the governing equation by separation of variables and obtains the general solution in terms of Airy functions. However, the direct use of Airy functions as shape functions suffers from numerical problems. The Airy wave functions are therefore introduced as combinations of Airy functions, built so that they tend asymptotically towards plane wave functions when the wave number varies slowly. Academic studies illustrate the convergence properties of this method. In engineering, the heterogeneous Helmholtz problem typically arises in harbor agitation problems [START_REF] Modesto | Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation[END_REF]. A harbor agitation problem solved by the extended VTCR therefore gives further insight into its performance in an engineering application [Li et al., 2016a].
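For intuition, the following Python sketch evaluates an Airy-based wave function for a 1-D model problem u'' + (k0^2 + g x) u = 0, in which the square of the wave number varies linearly; the combination Bi + i·Ai oscillates like a travelling wave where k^2(x) > 0. The actual Airy wave functions of the extended VTCR are two-dimensional and may use a different combination, so this is only an assumed, simplified analogue.

import numpy as np
from scipy.special import airy

def airy_wave_1d(x, k0, g):
    # u'' + (k0^2 + g*x) u = 0 reduces to the Airy equation with
    # xi = -(k0^2 + g*x) / g^(2/3) (g > 0 assumed)
    xi = -(k0**2 + g * x) / np.cbrt(g) ** 2
    Ai, _, Bi, _ = airy(xi)
    # Bi(xi) + i*Ai(xi) behaves like a propagating wave for large negative xi
    return Bi + 1j * Ai

# toy usage: oscillatory where k^2(x) = k0^2 + g*x stays positive
x = np.linspace(0.0, 5.0, 6)
print(airy_wave_1d(x, k0=5.0, g=2.0))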
In the WTDG method, one locally develops general approximate solutions of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined; these functions satisfy the local governing equation only in an average sense. In this dissertation, the corresponding approaches are denoted the Zero Order WTDG and the First Order WTDG. Academic studies are presented to show the convergence properties of the WTDG. The harbor agitation problem is solved again by the WTDG method and a comparison with the extended VTCR is made [START_REF] Li | On weak Trefftz discontinuous Galerkin approach for medium-frequency heterogeneous Helmholtz problem[END_REF].
Lastly, the WTDG is extended to mix polynomial and wave approximations in the same subdomains at the same time; in this dissertation it is named the FEM/WAVE WTDG method. Through numerical studies, it will be shown that such a mixed approach performs better than a pure FEM approach (which uses only a polynomial description) or a pure VTCR approach (which uses only a wave description). In other words, the hybrid FEM/WAVE WTDG method can solve vibration problems over both the low-frequency and the mid-frequency ranges [START_REF] Li | Hybrid Finite Element method and Variational Theory of Complex Rays for Helmholtz problems[END_REF]. This dissertation is divided into five chapters. Chapter 1 describes the reference problem and analyses the relevant literature. Chapter 2 recalls the VTCR for the constant wave number acoustic Helmholtz problem and the main results of previous work on the VTCR. Chapter 3 addresses the extended VTCR for the heterogeneous Helmholtz problem with a slowly varying wave number. Chapter 4 illustrates the Zero Order and the First Order WTDG for the heterogeneous Helmholtz problem. Chapter 5 presents the FEM/WAVE WTDG method for the constant wave number low-frequency and mid-frequency Helmholtz problem. The last chapter draws the final remarks and conclusions.
Chapter 1 Bibliographie
The purpose of this chapter is to briefly introduce the principal computational methods developed for structural vibrations and acoustics. Numerous methods exist today: some are commonly adopted by industry, others are still in the research phase. Depending on the frequency range of the problem, these methods can be broadly classified into three categories, the polynomial methods, the energetic methods and the wave-based methods, developed respectively for low-frequency, high-frequency and mid-frequency problems. This chapter cannot cover all the details of each method, but the essential ideas and features are illustrated in the context of Helmholtz-related problems. The finite element method (FEM) is a predictive technique applied to a rewriting of the reference problem into a weak formulation, which is equivalent to the reference problem.
The domain is then discretized into a finite number of elements. In each element, the vibrational field (the acoustic pressure of the fluid or the displacement of the structure) is approximated by polynomial functions. These functions are not exact solutions of the governing equation, so a fine discretization is required to obtain an accurate solution.
Generally the weak formulation can be written as a(u, v) = l(v), where a(•, •) is a bilinear form and l(•) is a linear form. This formulation can be obtained from the virtual work principle or by minimisation of the energy of the system. Note that the working space of u is U = {u | u ∈ H^1, u = u_d on ∂Ω_{u_d}}, with v ∈ H^1_0, where ∂Ω_{u_d} represents the part of the boundary ∂Ω on which a Dirichlet boundary condition is imposed. In other words, the functions of the working space must satisfy the imposed displacement on the boundary. The problem is then solved in a finite dimensional basis of the working space. The domain Ω is discretized into small elements Ω_E such that
Ω ≃ ∪_{E=1}^{n_E} Ω_E   and   Ω_E ∩ Ω_{E'} = ∅, ∀E ≠ E'.
This discretization allows one to approximate the Helmholtz problem by a piecewise polynomial basis whose support is locally defined by Ω_E:
u(x) ≃ u_h(x) = ∑_{e=1}^{N_E} u^E_e φ^E_e(x),   x ∈ Ω_E   (1.1)
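As a concrete and purely illustrative instance of the discretization (1.1), the following Python sketch assembles the standard 1-D FEM system (K - k^2 M) U = F for the Helmholtz equation u'' + k^2 u = 0 with linear elements, a prescribed value at x = 0 and a free right end; these modelling choices are assumptions of the example, not taken from the thesis.

import numpy as np

def helmholtz_fem_1d(k, L, n_el):
    h = L / n_el
    n = n_el + 1
    K = np.zeros((n, n), dtype=complex)   # stiffness
    M = np.zeros((n, n), dtype=complex)   # mass
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_el):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke
        M[np.ix_(idx, idx)] += me
    A = K - k**2 * M
    F = np.zeros(n, dtype=complex)
    # impose u(0) = 1 by elimination of the first unknown
    F -= A[:, 0] * 1.0
    U_free = np.linalg.solve(A[1:, 1:], F[1:])
    return np.concatenate(([1.0], U_free))

print(helmholtz_fem_1d(k=10.0, L=1.0, n_el=100)[:5])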
When the vibration becomes highly oscillatory, a large number of piecewise polynomial shape functions must be used. It has been proved in [Ihlenburg et Babuška, 1995, Bouillard et Ihlenburg, 1999] that the error is bounded by:
ε ≤ C_1 (kh/p)^p + C_2 kL (kh/p)^{2p}   (1.2)
where C_1 and C_2 are constants, k is the wave number of the problem, h is the maximum element size and p is the degree of the polynomial shape functions. This bound contains two terms. The first term represents the interpolation error caused by the fact that an oscillatory phenomenon is approximated by polynomial functions; it is the predominant term for low-frequency problems and can be kept small by keeping the product kh constant [Thompson et Pinsky, 1994]. The second term represents the pollution error due to numerical dispersion [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF] and becomes preponderant when the wave number increases. Unlike the first term, this second term can only be kept small when the element size h is reduced drastically, which leads to a prohibitively expensive computational cost. This drawback prevents the FEM from solving mid-frequency problems.
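A quick numerical reading of the bound (1.2), with hypothetical constants C_1 = C_2 = 1, p = 1 and L = 1, shows why keeping kh constant is not enough: the pollution term grows with k.

C1, C2, p, L = 1.0, 1.0, 1, 1.0   # assumed values, for illustration only
kh = 0.5                          # fixed number of elements per wavelength
for k in (10.0, 50.0, 100.0, 500.0):
    interp = C1 * (kh / p) ** p
    pollution = C2 * k * L * (kh / p) ** (2 * p)
    print(f"k = {k:6.0f}  interpolation ~ {interp:.2f}  pollution ~ {pollution:.1f}")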
The extension of FEM
The adaptive FEM
To counteract the interpolation error and the pollution effect, reducing the element size h and increasing the order p of the polynomials are both possible solutions, respectively called h-refinement and p-refinement. For a given problem, a global refinement of the mesh creates a very large number of degrees of freedom; it is wiser to refine the mesh only in regions with strong oscillations or sharp gradients and to use a coarse mesh elsewhere. To this end, a posteriori error indicators have been proposed: the idea is to perform a first rough analysis, evaluate the local error with the error indicator, and then refine specific regions according to this local error. This kind of technique can be seen in [Ladevèze et Pelle, 1983, Ladevèze et Pelle, 1989] for structures, in [Bouillard et Ihlenburg, 1999, Stewart et Hughes, 1996[START_REF] Irimie | [END_REF] for acoustics and in [START_REF] Bouillard | A waveoriented meshless formulation for acoustical and vibro-acoustical applications[END_REF] for coupled vibro-acoustics. Depending on the way the refinement is achieved, the corresponding techniques are classified into p-refinement, h-refinement and hp-refinement. p-refinement introduces higher order polynomial shape functions in the local region without changing the mesh [Komatitsch et Vilotte, 1998, Zienkiewicz et Taylor, 2005]. Conversely, h-refinement only refines the mesh without changing the shape functions [Stewart et Hughes, 1997a, Tie et al., 2003].
Of course hp-refinement is the combination of the two former methods [START_REF] Demkowicz | Toward a universal hp adaptive finite element strategy, part 1. constrained approximation and data structure[END_REF], Oden et al., 1989, Rachowicz et al., 1989].
Although the adaptive FEM outperforms the standard FEM and considerably reduces the unnecessary cost of computer resource, it still suffers from the pollution effect and expensive computational cost in mid-frequency problem.
The stabilized FEM
As the wave number increases, a numerical dispersion problem arises from the bilinear form: the quadratic form associated with the bilinear form risks losing its positivity [START_REF] Deraemaeker | Dispersion and pollution of the FEM solution for the Helmholtz equation in one, two and three dimensions[END_REF]. To alleviate this problem, several methods propose to modify the bilinear form in order to stabilize it.
The Galerkin Least-Squares FEM (GLS-FEM) proposes to modify the bilinear form by adding a term to minimize the equilibrium residue [Harari et Hughes, 1992]. It is fully illustrated in [Harari et Hughes, 1992], the pollution effect is completely counteracted in 1D acoustic problem. However in the coming work [Thompson et Pinsky, 1994] it shows that facing to higher dimension problems, this method is not as successful as in 1D problem. It could only eliminate the dispersion error along some specific directions.
The Galerkin Gradient Least-Squares FEM (G∇LS-FEM) is similar to the GLS-FEM method. The only difference is that the G∇LS-FEM adds a term to minimize the gradient of the equilibrium residue [Harari, 1997]. It shows that its performance depends on the problems. It deteriorates the solution quality in acoustic problem. In the mean time, however, it well performs in the elastic vibration problems. Conversely to the GLS-FEM, the G∇LS-FEM offsets the dispersion error in all directions on the 2D problem.
The Quasi Stabilized FEM (QS-FEM) paves a way to modify the matrix rather than the bilinear form. The objective is to suppress the dispersion pollution in every direction. It is proved that this method could eliminate totally the dispersion error on 1D problem. For the 2D problem, it is valid under the condition that regular mesh is used [START_REF] Babuška | A generalized finite element method for solving the Helmholtz equation in two dimensions with minimal pollution[END_REF].
The Multiscale FEM
The Variational Multiscale (VMS) is first introduced in [Hughes, 1995]. Based on the hypothesis that the solution could be decomposed into u = u p + u e where u p ∈ U p is the solution associated with the coarse scale and u e ∈ U e is the solution associated with the fine scale. The coarse solution u p could be calculated with the standard FEM method. Compared to the characteristic length of coarse scale, the mesh size h of the FEM is small. But on the other hand, h is rather big, compared to the fine scale. Therefore u e needs to be calculated analytically.
The solution is split into two scales, which leads to two variational problems: find u_p + u_e ∈ U_p ⊕ U_e such that
a(u_p, v_p) + a(u_e, v_p) = b(v_p)  ∀v_p ∈ U_p,   a(u_p, v_e) + a(u_e, v_e) = b(v_e)  ∀v_e ∈ U_e.   (1.3)
The fine-scale functions u_e have zero trace on the boundary of each element. Integrating by parts, one can write
a(u_e, v_p) = (u_e, L*v_p)  ∀v_p ∈ U_p,   a(u_p + u_e, v_e) = (L(u_p + u_e), v_e)  ∀v_e ∈ U_e,   (1.4)
where L* is the adjoint operator of L. In addition, the linear form b(v) only contains the source terms:
b(v) = ∫_Ω f v dV   (1.5)
where f represents the source. Denoting (f, v)_Ω = ∫_Ω f v dV, (1.3) can be rewritten in the form of
a(u p ,v p ) + (u e ,L * v p ) = b(v p ) ∀v p ∈ U p (Lu e ,v e ) Ω = -(Lu p -f ,v e ) Ω ∀v e ∈ U e
(1.6)
It could be seen that the second equation describes the fine scale and the solution u e strongly depends on the residue of equilibrium Lu pf . Therefore the second equation of (1.6) is solvable and u e could be expressed as
u e = M(Lu p -f ) (1.7)
where M is a linear operator. Replacing (1.7) into the first equation of (1.6), one could obtain the variational formulation only comprises u p in the form of
a(u p ,v p ) + (M(Lu p -f ),L * v p ) Ω = b(v p ), ∀v p ∈ U p (1.8)
Since u e has the zero trace on the boundary of each element, the expression (1.8) could be decomposed into each element without coupling terms. In [START_REF] Baiocchi | Virtual bubbles and Galerkin-least-squares type methods (Ga. LS)[END_REF], Franca et Farhat, 1995], the problem is solved in each element
u_e(x) = - ∫_{Ω_E} g(x_E, x) (Lu_p - f)(x_E) dΩ_E   (1.9)
where g(x E ,x) is the Green function's kernel of the dual problem of fine scale
L * g(x E ,x) = δ(x) on Ω E g(x E ,x) = 0 on ∂Ω E (1.10)
The kernel g(x_E, x) is approximated by polynomial functions in [Oberai et Pinsky, 1998]. This technique gives an exact solution for 1D problems; however, in 2D the error depends on the orientation of the waves.
The Residual-Free Bubbles method (RFB) introduced in [START_REF] Franca | Residual-free bubbles for the Helmholtz equation[END_REF] is very similar to the VMS method. They base on the same hypothesis, which nearly leads to the same variation formulation as (1.8). The RFB modifies the linear operator M and has the variational formulation as follow:
a(u p ,v p ) + (M RFB (Lu p -f ),L * v p ) Ω = b(v p ), ∀v p ∈ U p (1.11)
The approximation space of the fine scale u h e is U p,RFB = ∪ n E E=1 U p,RFB,E . The spaces U p,RFB,E are generated by m + 1 bubble functions defined in each element
U p,RFB,E = Vect b 1 , b 2 , • • • , b m , b f (1.12)
where ϕ_e denotes the shape functions associated with the coarse scale. The function b_f is the solution of
Lb f = f on Ω E b f = 0 on ∂Ω E (1.14)
Resolution of these equations in each element could be very expansive, especially on 2D and on 3D. In [Cipolla, 1999], infinity of bubble functions are added into the standard FEM space and the performance of this method is improved.
Domain Decomposition Methods
The Domain Decomposition Methods (DDM) resolves a giant problem by dividing it into several sub-problems. Even though the stabilized FEM could eliminate the numerical dispersion effect, it still resolve the problem in entirety. Facing to mid-frequency problem it still requires a well refined mesh. This phenomenon will give rise to expensive computational cost. The DDM provides a sub-problem affordable by a single computer. Moreover, the DDM is endowed with great efficiency when paralleling calculation is used.
The Component Mode Synthesis (CMS) is a technique of sub-structuring dynamic. It is first introduced in [Hurty, 1965]. The entire structure is divided into several substructures, which are connected by the interfaces. Then the modal analysis is applied on each sub-structure. After obtaining the preliminary proper mode of each sub-structure, the global solution could be projected on this orthogonal base. Furthermore, by condensing the inside modes on the interfaces, the CMS highly reduces the numerical cost. Then considerable methods are developed from the CMS. These methods use different ways to handle the interfaces. Such as fixed interfaces [Hurty, 1965, Craig Jr, 1968], free interfaces [MacNeal, 1971], or the mix of fixed and free interfaces [Craig Jr et Chang, 1977].
The Automated Multi-Level Substructuring (AMLS) divides the substructures into several levels in the sense of numerical model of FEM. In this case the substructure is no longer a physical structure and the lowest level are elements of FEM. Then, by assembling the substructures of lower level, one could obtain a substructure of higher level. In work [Kropp et Heiserer, 2003], this method is proposed to study the vibro-acoustic problem inside the vehicle. The Guyan's decomposition introduced in [START_REF] Sandberg | Domain decomposition in acoustic and structure-acoustic analysis[END_REF] uses the condensed Degrees of Freedoms (DoFs). In fact some of the DoFs could be classified into slave nodes and master nodes. The idea of this method is to solve a system only described by the master nodes, which contains the information of its slave notes.
The Finite Element Tearing and Interconnecting (FETI) is a domain decomposition method based on the FEM and it is first introduced in [Farhat et Roux, 1991]. The formulation of displacement problem is decomposed into substructures, which are arranged into a functional minimization under constraints. These constraints are the continuity conditions of the displacement along the interfaces between substructures and could be taken into account by using the Lagrange multipliers. In [START_REF] Farhat | Two-level domain decomposition methods with Lagrange multipliers for the fast iterative solution of acoustic scattering problems[END_REF], Magoules et al., 2000] it is applied to acoustic problems. In [Mandel, 2002] it is applied to vibro-acoustic problems.
The boundary element method
The boundary element method (BEM) based on a integral formulation on the boundary of focusing domain. This method comprises two integral equations. The first one is an integral equation. Its unknowns are only on the boundary. The second integral equation describes the connection between the field inside the domain and the quantity on the boundary. Therefore for the BEM, the first step is to figure out the solution on the boundary field through the first integral equation. Then knowing the distribution of the solution on the boundary, one could use another integral equation to approximate solutions at any point inside the domain [Banerjee et Butterfield, 1981, Ciskowski et Brebbia, 1991].
Considering an acoustic problem where u(x) satisfy the Helmholtz equation
∆u(x) + k 2 u(x) = 0 (1.15)
The two integral equations can be written as follows:
u(x)/2 = G(x_0, x) - ∫_{∂Ω} [ G(y, x) ∂u/∂n(y) - u(y) ∂G(y, x)/∂n(y) ] dS(y),   x ∈ ∂Ω   (1.16)
u(x) = G(x_0, x) - ∫_{∂Ω} [ G(y, x) ∂u/∂n(y) - u(y) ∂G(y, x)/∂n(y) ] dS(y),   x ∈ Ω   (1.17)
where in (1.16) x and y are points on the boundary ∂Ω, while in (1.17) x is a point inside the domain Ω and y a point on the boundary ∂Ω; x_0 represents the location of the acoustic source and G(x_0, x) is the Green function to be determined. As presented before, u(x) on ∂Ω can be determined by inserting the prescribed boundary conditions into (1.16). To this end, the BEM divides the boundary ∂Ω into N non-overlapping small pieces, called boundary elements and denoted ∂Ω_1, ∂Ω_2, ..., ∂Ω_N. By interpolation on these elements, one can solve (1.16) and obtain an approximation of u(x) on ∂Ω.
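To make the two-step procedure concrete, the hedged Python sketch below gives the 2-D free-space Green's function and a midpoint-rule evaluation of the interior representation (1.17), assuming the boundary data u and ∂u/∂n have already been obtained from (1.16); the discretization and names are illustrative only.

import numpy as np
from scipy.special import hankel1

def G(k, x, y):
    # 2-D Helmholtz free-space Green's function (i/4) H0^(1)(k |x - y|)
    return 0.25j * hankel1(0, k * np.linalg.norm(x - y))

def dG_dn(k, x, y, n_y):
    # normal derivative of G with respect to y, in the direction n_y
    r = np.linalg.norm(x - y)
    drdn = np.dot((y - x) / r, n_y)
    return -0.25j * k * hankel1(1, k * r) * drdn

def interior_value(k, x, x0, nodes, normals, ds, u_b, dudn_b):
    # approximate u(x) from boundary data via equation (1.17), midpoint rule
    val = G(k, x0, x)
    for y, n_y, w, u, q in zip(nodes, normals, ds, u_b, dudn_b):
        val -= w * (G(k, y, x) * q - u * dG_dn(k, y, x, n_y))
    return val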
It should be noticed that these integral equations could be obtained by direct boundary integral equation formulation or by indirect boundary integral equation formulation. The difference is that the direct one is derived from Green's theorem and the indirect one is derived from the potential of the fluid.
Compared with FEM method, the BEM has the following advantages: (1) Instead of discretizing the volume and doing the integration on volume, the BEM only undertakes the similar work on the boundary. This drastically reduces the computational cost. (2) Facing to the unbounded problem, the integral equations (1.16) and (1.17) are still valid in the BEM method. The solution u(x) satisfies the Sommerfeld radiation conditions. The drawback of the BEM is to solve a linear system where the matrix needed to be inversed is fully populated. Conversely the matrix of FEM to inverse is quite sparse. This means for the FEM, it is easier to store and solve the matrix. Despite of its efficiency, facing to midfrequency problem the BEM still possesses the drawback of polynomial interpolation.
The energetic methods
The Statistical Energy Analysis
The Statistical Energy Analysis (SEA) is a method to study high-frequency problems [Lyon et Maidanik, 1962]. This method divides the global system into substructures. Then it describes the average vibrational response by studying the energy flow in each substructure. For each substructure i, the power balance is hold
P^i_in = P^i_diss + ∑_j P^{ij}_coup   (1.18)
where P^i_in and P^i_diss represent the power injected and dissipated in substructure i, and P^{ij}_coup denotes the power transmitted from substructure i to its adjacent substructure j. For a hysteretic damping model, the dissipated power is related to the total energy of substructure i by
P^i_diss = ω η_i E_i   (1.19)
where η_i is the hysteretic damping factor and E_i is the total energy. The coupling between the substructures is then expressed as
P^{ij}_coup = ω η_{ij} n_i ( E_i/n_i - E_j/n_j )   (1.20)
where n i and n j are the modal densities of the substructure i and j respectively. η i j is the coupling loss factor. This equation illustrates the fact that the energy flow between the substructures i and j is proportional to the modal energy difference. The SEA lies on some strong assumptions that are generally true only at high frequency:
• the energy is transmitted only to adjacent subdomains.
• the energy field is diffuse in every sub-system.
It should be mentioned that at very high frequency the energy field is not diffuse. [Mace, 2003] provides an excellent SEA review.
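Once the injected powers, loss factors and modal densities are known, the power balance (1.18)-(1.20) reduces to a small linear system in the subsystem energies E_i. The Python sketch below assembles and solves it for hypothetical two-subsystem data; it is a schematic illustration, not an SEA implementation.

import numpy as np

def sea_energies(omega, eta, eta_coup, n_modal, P_in):
    # eta: damping loss factors, eta_coup[i][j]: coupling loss factors,
    # n_modal: modal densities, P_in: injected powers
    N = len(eta)
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] += omega * eta[i]
        for j in range(N):
            if j == i:
                continue
            A[i, i] += omega * eta_coup[i][j]
            A[i, j] -= omega * eta_coup[i][j] * n_modal[i] / n_modal[j]
    return np.linalg.solve(A, np.asarray(P_in, dtype=float))

# hypothetical data: power injected in the first of two coupled subsystems
E = sea_energies(omega=2 * np.pi * 1000.0,
                 eta=[0.01, 0.02],
                 eta_coup=[[0.0, 1e-3], [8e-4, 0.0]],
                 n_modal=[0.1, 0.05],
                 P_in=[1.0, 0.0])
print(E)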
The Hybrid FEM-SEA
The Hybrid FEM-SEA method splits the system into two systems, namely the master and the slave systems [Shorter et Langley, 2005]. The standard FEM is used to treat the master system, which represents a deterministic response. On the other hand, the slave system is solved by the SEA method because it will show a randomized response. This hybrid use of the FEM and the SEA possesses both of their advantages. In fact, the uncertainty fields are directly described by the SEA without any information on stochastic parameters. The counterpart which does not require any Montecarlo simulation seems quite appropriate for the application of the FEM method .
Wave Intensity Analysis
The prediction of the SEA is valid under the diffuse field hypothesis. The calculation of the coupling loss factors are based on this hypothesis. The Wave Intensity Analysis (WIA) [Langley, 1992] proposes the hypothesis that the vibrational field diffuses and could be mainly represented by some preliminary directions, which are in the form of
u(x) = 2π 0 A(θ)e ik(θ)•x dθ (1.21)
where k(θ) represents the wave vector which propagates in the direction θ. Supposing the waves are totally uncorrelated
2π 0 2π 0 A(θ 1 )A * (θ 2 )e ik(θ 1 -θ 2 )•x dθ 1 dθ 2 = g(θ 1 )δ(θ 1 -θ 2 ) (1.22)
where g(θ 1 ) is the measure of the energy in the direction θ 1 and δ represents the Dirac function. The energy could be expressed by the relation
E(x) = 2π 0 e(x,θ)dθ (1.23)
The energy e(x,θ) is then homogenised in space and developed by the Fourier series e(x,θ) = +∞ ∑ p=0 e p N p (θ) (1.24)
The power balance therefore provides the amplitude e p . This method gives a better result than the SEA method on plate assemblies [START_REF] Langley | Statistical energy analysis of periodically stiffened damped plate structures[END_REF]. However, the local response is not addressed and the coupling coefficients are hard to determine.
The Energy Flow Analysis
The Energy Flow Analysis was first introduced in [Belov et Rybak, 1975, Belov et al., 1977]. This method studies the local response by a continue description of the energy value which characterizes the vibrational phenomenon of the mechanical system. The effective energy density, which is denoted by e, is the unknown. The energy flow is related to this energy by
I = - c 2 g ηω ∇e (1.25)
where c g is the group velocity. Then the work balance divI = P in j -P diss could lead to ωηec 2 g ηω ∆e = -P in j (1.26)
Because the quantity e varies slowly with the space variable, the simplicity of this equation makes it easily be treated with an existant FEM code. This method well performs in 1D problem in [START_REF] Lase | Energy flow analysis of bars and beams: theoretical formulations[END_REF], Ichchou et al., 1997], however it is difficult to be applied in 2D coupling problem [Langley, 1995]. In addition, using the equation (1.26) creates numerous difficulties [Carcaterra et Adamo, 1999]. For example, the 2D field radiated by the source decays as 1/ √ r. Yet in the analytic theory it decays as 1/r. In the stationary case, this model only correctly represents the evaluation of energy while the waves are uncorrelated [Bouthier et Bernhard, 1995].
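As an illustration of how simply equation (1.26) can be handled numerically, the sketch below solves its 1-D version, ωη e - (c_g^2/(ηω)) e'' = P_inj, by finite differences with e = 0 assumed at both ends; the boundary treatment and data are assumptions of the example.

import numpy as np

def efa_1d(c_g, eta, omega, length, n, p_inj):
    # interior finite-difference solve of -D e'' + eta*omega*e = p_inj, D = c_g^2/(eta*omega)
    h = length / (n + 1)
    D = c_g**2 / (eta * omega)
    main = 2.0 * D / h**2 + eta * omega
    off = -D / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    return np.linalg.solve(A, p_inj * np.ones(n))

print(efa_1d(c_g=340.0, eta=0.05, omega=2 * np.pi * 2000.0,
             length=2.0, n=50, p_inj=1.0)[:5])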
Ray Tracing Method
The Ray Tracing Method (RTM) is derived from the linear optic theory and it was first introduced in [START_REF] Krokstad | Calculating the acoustical room response by the use of a ray tracing technique[END_REF] to predict acoustic performances in rooms. The vibrational response is calculated following a set of propagative waves until fully damped. Transmissions and reflections are computed using the classical Snell formula. If frequency and damping are enough elevated, the RTM is cheap and accurate. Otherwise, computational costs could be unduly expensive. Moreover, complex geometries are difficult to study due to their high scattering behaviour. This technique is applied to acoustic [START_REF] Allen | [END_REF], Yang et al., 1998, Chappell et al., 2011] and to plates assemblies in [Chae et Ih, 2001, Chappell et al., 2014].
The wave-based methods
Ultra Weak Variational Formulation
The Ultra Weak Variational Formulation (UWVF) discretizes the domain into elements. It introduces a variable on each interface and this variable satisfies a weak formulation on the boundary of all the elements. The vibrational field is approximated by a combination of the plane wave functions. Then the Galerkin method leads this approach to solve a matrix system and the solution is the boundary variables. The continuity between the elements verified by a dual variable. Once the interface variables are calculated, one could build the solution inside each element. However the matrix is generally ill-conditioned. In [Cessenat et Despres, 1998b] a uniform distribution of wave directions is proposed to maximize the matrix determinant. Of course, the idea of pre-conditioner is also introduced to alleviate this problem.
A comparison of the UWVF and the PUM on a 2D Helmholtz problem with irregular meshes is done in [START_REF] Huttunen | Comparison of two wave element methods for the Helmholtz problem[END_REF]. It presents that both of the methods could lead to a precise result with coarse mesh. Moreover, the UWVF outperforms the PUM at mid-frequency and PUM outperforms UWVF at low-frequency. As to the conditioning numbers, PUM is always better that the UWVF at mid-frequency. It is proved in [START_REF] Gittelson | Plane wave discontinuous Galerkin methods: analysis of the h-version[END_REF] that the UVWF is a special case of the Discontinuous Galerkin methods using plane waves. In [START_REF] Luostari | Improvements for the ultra weak variational formulation[END_REF], it is proposed to use special solutions in the case of a layered material.
Wave Based Method
The Wave Based Method (WBM) makes use of evanescent wave functions and plane wave functions to approximate the solution [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF].
p_E = ∑_{m=0}^{+∞} a_{jm} cos(mπx/L_{jx}) e^{±i√(k^2 - (mπ/L_{jx})^2) y} + ∑_{n=0}^{+∞} a_{jn} cos(nπy/L_{jy}) e^{±i√(k^2 - (nπ/L_{jy})^2) x}   (1.27)
where L_{jx} and L_{jy} represent the dimensions of the smallest rectangle enclosing the subdomain Ω_j. In order to implement this approach, the series in (1.27) must be truncated. The criterion used to choose the number of shape functions is
n_{jx}/L_{jx} ≈ n_{jy}/L_{jy} ≈ T k/π   (1.28)
where T is a truncation parameter to be chosen. It is proposed in [Desmet, 1998] to take T = 2, which makes sure that the wave length λ min of the shape function is smaller than the half of the characteristic wave length of problem. The boundary conditions and the continuity conditions between subdomains is satisfied by a residues weighted variational technique. Moreover, since the test functions in the formulation are taken from the dual space of the working space, this method could not be categorized into the Galerkin method. The final unknown vector to be solved by the matrix system is the complex amplitude of waves. The study of the normal impedance on the interface is addressed in [START_REF] Pluymers | Trefftz-based methods for time-harmonic acoustics[END_REF] to improve the stability of this method. Introducing the damping in the model could achieve this objective. For the WBM method, p-convergence performs a much more efficient way than the h-convergence. Similar to other Trefftz methods, the matrix of the WBM suffers from the ill-condition. In [START_REF] Desmet | An indirect Trefftz method for the steady-state dynamic analysis of coupled vibro-acoustic systems[END_REF], Van Hal et al., 2005] the WBM is applied to 2D and 3D acoustics. Its application to plate assemblies in [START_REF] Vanmaele | An efficient wave based prediction technique for plate bending vibrations[END_REF], to the unbounded problem in [Van Genechten et al., 2010].
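The truncation rule (1.28) translates directly into the number of wave functions retained in each direction; a minimal sketch (with an assumed ceiling rounding) is given below.

import math

def wbm_truncation(k, L_x, L_y, T=2.0):
    # numbers of cosine/exponential wave functions per direction for a
    # subdomain whose bounding box is L_x by L_y, following (1.28)
    n_x = math.ceil(T * k * L_x / math.pi)
    n_y = math.ceil(T * k * L_y / math.pi)
    return n_x, n_y

print(wbm_truncation(k=20.0, L_x=1.0, L_y=0.5))   # e.g. (13, 7)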
Wave Boundary Element Method
The Wave Boundary Element Method (WBEM) is an extension of the standard BEM presented in Section 1.1.3. It is proposed in [START_REF] Perrey-Debain | Plane wave interpolation in direct collocation boundary element method for radiation and wave scattering: numerical aspects and applications[END_REF][START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF] that the WBEM enriches the the base of the standard BEM by multiplying the propagative plane waves with the polynomial functions on the boundary. The number of the wave directions is free to choose. Generally a uniform distribution of wave directions is used. In [START_REF] Perrey-Debain | Wave boundary elements: a theoretical overview presenting applications in scattering of short waves[END_REF] it also proposes the idea that if the propagations of waves of problem are known a priori, one could use a non-uniform distribution of wave directions. Again this method could not escape from the ill-conditioning of the matrix due to the plane wave functions. Of course, compared to the standard BEM, the gain of this method largely reduces the cost. The mesh used in WBEM is much coarser than the standard BEM.
Discontinuous Enrichment Method
The Discontinuous Enrichment Method (DEM) was first introduced in [START_REF] Farhat | The discontinuous enrichment method[END_REF]. This method is similar to the multi-scale FEM. However the enrichment functions of the DEM are not zero-trace on the boundaries. In the DEM, the exact solutions of governing equations are taken as enrich functions for the fine scale solution u e . These functions neither satisfy the continuity condition between elements nor satisfy the boundary conditions. Therefore the Lagrange multipliers are introduced to meet these conditions. In order to have a good stability, the number of the Lagrange multipliers on each boundary is directly related to the number of plane waves used in each element. This inf-sup condition is presented in [Brezzi et Fortin, 1991]. Therefore the elements built by this method is specially noted such as R -4 -1: R denotes rectangle element, 4 the wave numbers in the element and 1 means the number of the Lagrange multiplier on the boundary of element. This method is applied to 2D problem in [START_REF] Farhat | The discontinuous enrichment method for multiscale analysis[END_REF][START_REF] Farhat | A discontinuous Galerkin method with plane waves and Lagrange multipliers for the solution of short wave exterior Helmholtz problems on unstructured meshes[END_REF] and to 3D problem in [Tezaur et Farhat, 2006]. It is also proved in [Farhat et al., 2004a] that the coarse solution calculated by the FEM does not contribute to the accuracy of the solution in Helmholtz problem. In this case the polynomial functions could be cut out and correspondingly the method is named the Discontinuous Galerkin method (DGM). As the WBEM, the DEM requires a much coarser mesh. Application of this method to acoustics is presented in [Gabard, 2007], to plate assemblies in [START_REF] Massimi | A discontinuous enrichment method for the efficient solution of plate vibration problems in the mediumfrequency regime[END_REF], Zhang et al., 2006], to high Péclet advection-diffusion problems in [START_REF] Kalashnikova | A discontinuous enrichment method for the finite element solution of high Péclet advection-diffusion problems[END_REF]. Recently, facing to the varying wave number Helmholtz problem, the DEM uses Airy functions as shape functions. In [START_REF] Tezaur | The discontinuous enrichment method for medium-frequency Helmholtz problems with a spatially variable wavenumber[END_REF] these new enrich functions are used to resolve a 2D under water scattering problem.
Conclusion
This chapter mainly presented the principal computational methods in vibrations and in acoustic, which could be classified into low-, mid-and high-frequency problems. Considerable approaches have been specifically developed depending on the frequency of the problem. In the low frequency range, the principal methods are the FEM and the BEM.
Both of these methods require mesh refinement. The difference is that for the BEM only the boundary has to be discretized, whereas for the FEM the mesh covers the whole volume. These two methods are reliable and robust for low-frequency problems. For mid-frequency problems, however, the FEM suffers from the numerical dispersion effect, and to alleviate it the FEM mesh must be greatly refined.
Consequently, the FEM becomes extremely expensive. Even though the BEM manipulates a much smaller numerical model, its numerical integrations are expensive and, since the BEM interpolates polynomial functions on the boundary, a refined mesh is also necessary. Neither the FEM nor the BEM is therefore suited to solving mid-frequency problems.
Contrary to low-frequency problems, high-frequency problems cannot be analysed through the local response of modes; the energetic approaches are more practical and efficient there. However, these methods neglect the local response and, in addition, some of their parameters have to be determined by experience or by very intensive calculations.
Lastly, it mainly resorts to the waves based method to solve the mid-frequency problems. These methods commonly adopt the exact solutions of the governing equation as shape functions or enrichment functions. The fundamental difference is the way they deal with the boundary conditions and continuity conditions between the subdomains.
The VTCR is categorized into these waves based method. Especially, the VTCR possesses an original variational formulation which naturally incorporates all conditions on the boundary and on the interface between subdomains. Moreover there is a priori independence of the approximations among each subdomains. This feature enables one freely to choose the approximations which locally satisfy the governing equation in each subdomain. In the Helmholtz problem of constant wave number, the plane wave functions are taken as shape functions.
However, most of the existent mid-frequency methods are confined to solve the Helmholtz problem of piecewise constant wave number. In the extended VTCR, Airy wave functions are used as shape functions. The extended VTCR could well solve the Helmholtz problem when the square of wave number varies linearly. Then the WTDG method is applied to solve the heterogeneous Helmholtz problem in more generous cases. In this dissertation, two WTDG approaches are proposed, namely the Zero Order and the First Order WTDG .
Moreover, the survey mentioned above shows that there lacks a efficient method to solve the problem with bandwidth ranging from the low-frequency to the mid-frequency.
Even there it is one such as DEM, supplementary multipliers are necessarily needed, which complicates the numerical model. The FEM/WAVE WTDG method could achieve this goal by making a hybrid use of polynomial approximations and plane wave approximations.
Chapter 2
The Variational Theory of Complex Rays in Helmholtz problem of constant wave number
The objective of this chapter is to illustrate the basic features of the standard Variational Theory of Complex Rays. The problem background lies in acoustics. A rewriting of the reference problem into variational formulation is introduced.
The equivalence of formulation, the existence and the uniqueness of the solution are demonstrated. This specific variational formulation naturally comprises all the boundary conditions and the continuity conditions on the interface between subdomains. Since the shape functions are required to satisfy the governing equation, the variational formulation has no need to incorporate the governing equation. These shape functions contain two scales. The slow scale is chosen to be discretized and calculated numerically. It corresponds to the amplitude of vibration. Meanwhile the fast scale represents the oscillatory effect and is treated analytically. Furthermore, three kinds of classical VTCR approximations are discussed. They are correspondingly the sector approximation, the ray approximation and the Fourier approximation. The numerical implementation of the VTCR is introduced, including ray distribution and iterative solvers.
Then an error estimator and convergence properties of the VTCR is presented. At last, an adaptive version of the VTCR is introduced.
Ω u d pressure prescribed over ∂ 1 Ω Ω E subdomain of Ω Γ EE ′ interface between subdomains Ω E and Ω E ′ {u} EE ′ (u E + u E ′ ) |Γ EE ′ [u] EE ′ (u E -u E ′ ) |Γ EE ′ q u (1 -iη)gradu ζ (1 -iη) -1/2
2.1 Reference problem and notations To illustrate the methods in this dissertation, a 2-D Helmholtz problem is taken as reference problem (see Figure 2.1). Acoustics or underwater wave propagation problem could be all abstracted into this model. Let Ω be the computational domain and ∂Ω = ∂ 1 Ω ∪ ∂ 2 Ω be the boundary. Without losing generality, Dirichlet and Neumann conditions are prescribed on ∂ 1 Ω, ∂ 2 Ω in this dissertation. Treatment of other different boundary conditions can be seen in [Ladevèze et Riou, 2014]. The following problem is considered:
Ω Ω E Γ EE ′ r d Ω u d ∂ 1 Ω ∂ 2 Ω g d Ω E ′
find u ∈ H 1 (Ω) such that (1 -iη)∆u + k 2 u + r d = 0 over Ω u = u d over ∂ 1 Ω (1 -iη)∂ n u = g d over ∂ 2 Ω (2.1)
where ∂ n u = gradu • n and n is the outward normal. u is the physical variable studied such as the pressure in acoustics. η is the damping coefficient, which is positive or equals to zero. The real number k is the wave number and i is the imaginary unit. u d and g d are the prescribed Dirichlet and Neumann data.
Rewrite of the reference problem
The reference problem (2.1) can be reformulated by the weak formulation. Both the reformulation and demonstration of equivalence are introduced in [Ladevèze et Riou, 2014].
Variational formulation
As Figure 2.1 shows, let Ω be partitioned into N non overlapping subdomains
Ω = ∪ N E=1 Ω E . Denoting ∂Ω E the boundary of Ω E , we define Γ EE = ∂Ω E ∩ ∂Ω and Γ EE ′ = ∂Ω E ∩ Ω E ′ .
The VTCR approach consists in searching solution u in functional space U such that
U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k 2 u E + r d = 0} (2.2)
The variational formulation of (2.1) can be written as: find u ∈ U such that
Re ik ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS = 0 ∀v ∈ U 0 (2.3)
where ˜ represents the conjugation of . The U E,0 and U 0 denote the vector space associated with U E and U when r d = 0.
Properties of the variational formulation
First, let us note that Formulation (2.3) can be written:
find u ∈ U such that b(u,v) = l(v) ∀v ∈ U 0 (2.4)
Let us introduce
u 2 U = ∑ E∈E Ω E gradu.grad ũdΩ (2.5) Property 1. u U is a norm over U 0 .
Proof. The only condition which is not straightforward is
u U = 0 for u ∈ U 0 ⇒ u = 0 over Ω. Assuming that u ∈ U 0 such that u U = 0, it follows that q u = 0 over Ω. Hence, from divq u + k 2 u = 0 over Ω E with E ∈ E where E = {1,2, • • • , N}, we have u = 0 over Ω E and, consequently, over Ω. Property 2. For u ∈ U 0 , b(u,u) kη u 2 U , which means that if η is positive the formulation is coercive. Proof. For u ∈ U 0 , we have b(u,u) = Re ik ∑ E∈E ∂Ω E q u .n ũdS (2.6) Consequently, b(u,u) = Re ik ∑ E∈E Ω E -k 2 u ũ + (1 -iη)gradu.grad ũ dΩ (2.7) Finally, b(u,u) = kη ∑ E∈E Ω E gradu.grad ũdΩ (2.8) Then, b(u,u) kη u 2 U .
Property 1 implies that if η is positive the solution of (2.3) is unique. Since the exact solution of Problem (2.1) verifies (2.3), Formulation (2.3) is equivalent to the reference problem (2.1). Besides, it can be observed that for a perturbation ∆l ∈ U ′ 0 of the excitation the perturbation ∆w of the solution verifies
∆w U 1 kη |∆l| U ′ 0 (2.9)
28The Variational Theory of Complex Rays in Helmholtz problem of constant wave number
Approximation and discretization of the problem
To solve the variational problem (2.3), it is necessary to build the approximations u h E and the test functions v h E for each subdomain Ω E . Such u h E and v h E belongs to the subdomain
U h E ⊂ U E .
The projection of solutions into the finite dimensional subdomain U h E makes the implementation of the VTCR method be feasible.
Re
ik ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u h • n} EE ′ ṽh EE ′ - 1 2 [ q v h • n] EE ′ u h EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω q v h • n u h -u d dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u h • n -g d ) ṽh dS = 0 ∀v h ∈ U h 0
(2.10) The solution could be locally expressed as the superposition of finite number of local modes namely complex rays. These rays are represented by the complex function:
u E (x) = u (E) n (x, k)e ik•x
(2.11)
where u
(E)
n is a polynomial of degree n of the spatial variable x. The complex ray with the polynomial of order n is called ray of order n. k is a wave vector. The functions belonging to U h 0 satisfy the Helmholtz equation ( 2 The evanescent rays only exist on the boundary and do not appear in the pure acoustic problem. However it is necessary to introduce these rays in some problems. For example in the vibro-acoustic where the nature of waves in the structure and that in the fluid are quite different, there exist the evanescent rays. The wave vector of these rays is in the form of k = ζk[±cosh(θ), -isinh(θ)] T with θ ∈ [0, 2π[. In this dissertation, these evanescent rays will not be used in the problem.
For the ray of order 0, the polynomial u
(E)
n becomes a constant, and at the same time the solution of the Helmholtz problem could be written in the form
u E (x) = C E A E (k)e ik•x dC E (2.12)
where A E is the distribution of the amplitudes of the complex rays and C E is the curve described by the wave vector when it propagates to all the directions of the plane. In the linear acoustic C E is a circle. The expression (2.12) describes two scales. One is the slow scale, which is the distribution of amplitudes A E (k). It slowly varies with the wave vector k. The other one is the fast scale, which corresponds to e ik•x . It depicts the vibrational effect. This scale fast varies with wave vector k and the spatial variable x.
Sectors approximation: To achieve the approximation in finite dimension, in the VTCR, the fast scale is taken into account analytically and the slow scale is discretized into finite dimension. That is to say the unknown distribution of amplitudes A E needs to be discretized. Without a priori knowing of the propagation direction of the solution, the VTCR proposes an integral representation of waves in all directions. In this way A E is considered as piecewise constant and the approximation could be expressed as
u E (x) = C E A E (k)e ik•x dC E = J ∑ j=1 A jE C jE e ik•x dC jE (2.13)
where C jE is the angular discretization of the circle C E and A jE is the piecewise constant approximation of A E (k) on the angular section C jE . The shape functions of (2.13) are called sectors of vibration and they could be rewritten on function of the variable θ
ϕ jE (x) = θ j+ 1 2 θ j-1 2 e ik(θ)•x dθ (2.14)
Therefore, the working space of shape functions could be generated as
U h E = Vect ϕ jE (x), j = 1, 2, • • • , J (2.15)
Rays approximation: Denoting ∆θ as the angular support, it should be noticed that when ∆θ → 0 the sectors become rays. In this case, the expression of approximation becomes:
u E (x) = J ∑ j=1 A jE e ik•x
(2.16)
ϕ jE (x) = e ik(θ j )•x (2.17)
where A jE becomes the amplitude associated with the complex ray which propagates in direction θ j .
Fourier approximation: Both the sectors and the rays are engaged to discretize the slow scale of (2.12), whose fast scale is treated analytically. In previous work of ( [START_REF] Kovalevsky | The Fourier version of the Variational Theory of Complex Rays for medium-frequency acoustics[END_REF]) it proposes an new idea to discretize the slow scale. The corresponding method is to take advantage of the Fourier series to achieve this discretization.
On the 2D dimension, this approximation could be written into
u E (x) = 2π 0 A E (k)e ik•x dθ = J ∑ j=-J A jE 2π 0 e i jθ e ik•x dθ (2.18)
The shape functions of this discretization is in the form of
ϕ jE (x) = 2π 0 e i jθ e ik(θ)•x dθ (2.19)
It is proved that the Fourier approximation outperforms the sectors approximation and the rays approximation. Compared to the other two approximations, this approximation alleviates ill conditioning of matrix.
In this dissertation, for the simplicity of implementation, among the three types of VTCR approximations presented above, discrete complex rays are chosen to be used. By this way, the approximation could be expressed as
u E (x) = A T E • ϕ E (x) (2.20)
where
ϕ E = [ϕ 1E , ϕ 2E , • • • , ϕ JE ]
is the vector of the shape functions ϕ jE of (2.17) and A T E is the vector of the associated amplitudes A jE . By this way, the formulation (2.3) could be written into a matrix problem
KA = F (2.21)
K corresponds to the discretization of the bilinear form of weak formulation. Inside K there are N 2 partitioning of blocks K EE ′ , whose dimension are J ×J. When Γ EE ′ = / 0, the blocks corresponding to K EE ′ are non zero fully populated. Otherwise K EE ′ are zero blocks. The vector
A = [A 1 , • • • , A E , • • •A N ]
corresponds to the total amplitudes , which is the degree of freedom in the VTCR. F is the linear form of weak formulation and corresponds to the loading.
Ray distribution and matrix recycling
For rays approximation, one has to discretize the propagative wave direction in [0, 2π[. In works [Ladevèze et Riou, 2005, Riou et al., 2004, Riou et al., 2008, Kovalevsky et al., 2014], a symmetric ray distribution was adopted. The idea is to evenly distribute the wave directions over the unit circle. There are two advantages of the symmetric ray distribution. First, it is easy to calculate the wave direction. Second, the distribution always keeps symmetric. However, this distribution requires a complete matrix recomputation as the number of rays changes. In the VTCR, matrix construction is a relevant (predominant in some cases) operation in terms of computational costs. Therefore the symmetric ray distribution is not ideal to save computational costs. In work [Cattabiani, 2016], a quasisymmetric ray distribution method is proposed. In this algorithm previous rays are fixed as new ones are added. The first ray can be placed in any direction. After that, new rays are inserted in gaps among previous rays in the most possible symmetric way. The distribution enables one to recycle matrices. But the drawback is that, for a given ray number, its distribution could be asymmetric. Compared to the symmetric distribution, the asymmetric distribution has a less efficient convergence rate. This phenomenon only exists when insufficient number of rays are used. When the ray number increases, their difference will decrease. In practice, when convergence is reached, the difference between these two distributions is already negligible. To save computational cost, the asymmetric ray distribution is used in this dissertation.
Iterative solver
The VTCR suffers from ill-conditioning. Typically, the VTCR suddenly converges when ill-conditioning appears. However, there is not a deterioration of the error. To offer a numerical example, as Figure 2.3 shows, a domain Ω with square geometry [0 m,1 m]× [0 m,1 m] and η = 1 -0.01i is considered. The wave number is k = 40m -1 over the domain. The boundary conditions are
u d = 4 ∑ n=1 A i e ikζcosθ i x+ikζsinθ i y with A 1 = 1, A 2 = 1.5, A 3 = 2, A 4 = 4, θ 1 = 6 • , θ 2 = 33 • , θ 3 = 102 • , θ 4 = 219 • .
In order to figuratively illustrate the fact that the VTCR suffers from ill-conditioning, only one subdomain is used in the calculation and the number of rays is gradually increased to make the result converge. The relative error and the condition number along with the increasing of number of rays could be seen in Figure 2.4. Since the exact solution is known over the domain, the real error is defined as following:
ε ex = u h -u re f L 2 (Ω) u re f L 2 (Ω) (2.22)
Result shows that when the VTCR converges, the condition number drastically increases.
In this situation, it means that the matrix is quasi-singular. In order to have a precise resolution, the proper iterative solver is required. Four iterative solvers considered are:
Ω u d = u ex u d = u ex u d = u ex u d = u ex
• backslash. It is the standard direct MATLAB solver. It is considered for reference.
• pinv. This algorithm returns the Moore-Penrose pseudoinverse of matrix. It is suggested for ill-conditioning since it normalizes to one the smallest singular values.
The result is a relatively well-conditioned pseudoinverse [Courrieu, 2008].
• gmres. It uses the Arnoldi's method to compute an orthonormal basis of the Krylov subspace. The method restarts if stagnation occurs [Saad et Schultz, 1986].
• lsqr. It is based on the Lanczos tridiagonalization [START_REF] Paige | [END_REF].
The numerical example defined in Figure 2.3 is reused here to compare the four solvers. Since the real error could be calculated, the performance of the four solvers are shown in Figure 2.5. It shows that the pinv possesses the best accuracy. The lsqr and the gmres perform similarly but with less accuracy. The backslash explodes immediately when the condition number gets worse. Therefore pinv is chosen to be the iterative solver in this dissertation.
Convergence of the VTCR
Convergence criteria
In [START_REF] Kovalevsky | On the use of the Variational Theory of Complex Rays for the analysis of 2-D exterior Helmholtz problem in an unbounded domain[END_REF], it is proposed that the geometrical heuristic criterion of convergence for the VTCR with plane waves in 2D follows the relation that where N e is the number of directions of waves, τ a parameter to be chosen, k the wave number and R e is the characteristic radius of domain. In the VTCR, one generally chooses τ = 2.
Error indicator
In general, the exact solution is unknown. Therefore one needs to define an error estimator. It is not easy because there may be some subdomains Ω E which do not touch the boundary ∂Ω. The only way to evaluate the accuracy of the approximated solution in such a subdomain is to verify the continuity in terms of displacement and velocity with all the other subdomains in the vicinity of Ω E . But this verification is difficult because the solutions in the surrounding subcavities are only approximated solutions.
In work [START_REF] Ladevèze | The Variational Theory of Complex Rays. MID-FREQUENCY-CAE Methodologies for Mid-Frequency Analysis in Vibration and Acoustics[END_REF], a local error estimator is defined as:
ε h E = E d,Ω E (u h E -u pv E )/mes(Ω E ) ∑ E E d,Ω E (u pv E )/mes(Ω) (2.24)
where E d,Ω E (u) is the dissipated energy, mes(Ω) and mes(Ω E ) denote respectively the measures of Ω and Ω E , and u pv E corresponds to the solution of the problem in Ω E when the pressure and normal gradient of pressure are prescribed at the boundaries of Ω E in such way that they correspond to the pressure and normal gradient of pressure in all the Ω E ′ adjacent to Ω E . Particularly, when the boundary of Ω E coincides with the boundary of the domain, the prescribed quantities are introduced. It should be noticed that this error measures the relative difference between u h E and u pv E in terms of dissipated energy. The dissipated energy is interesting in the medium-frequency range because at these frequencies it is a relevant quantity. In the similar way, one could define a global error indicator as:
ε = max E {ε h E } (2.25)
In [START_REF] Ladevèze | The Variational Theory of Complex Rays. MID-FREQUENCY-CAE Methodologies for Mid-Frequency Analysis in Vibration and Acoustics[END_REF] a comparison among the true local error, the H 1 relative error and the local error estimator (2.24) was made. The work proves that error estimator (2.24) comes very close to the classical H 1 error, and is a relevant error measure for assessing the quality of the calculated solution.
h-and p-convergence of VTCR
This subsection paves quick scope to the convergence properties of the standard VTCR.
There exists two methods leading the VTCR to the convergent result. The first one is
u d = 1 u d = 1 u d = 1 u d = 1 Ω Figure 2
.6: The definition of numerical example in Section 2.4.3. named h-method, which is to fix the number of rays and to decrease the size of the subdomains. The second one is named p-method, which is to fix the size of sub-domains and to increase the number of rays. Here, a simple numerical example will show the performance of the VTCR. A domain Ω with square geometry [0 m,1 m]×[0 m,1 m] and η = 1 -0.01i is considered as Figure 2.6 shows. The wave number is k = 40m -1 over the domain. The boundaries conditions imposed are u d = 1 along all the boundaries. In order to capture the error, one uses the error indicator defined in Section 2.4.2.
The conclusion drawn from the result is that in the VTCR the p-convergent method is far more efficient than the h-convergent method. To obtain the same level precision, the p-convergent method only uses much fewer degrees of freedom. This numerical test is consistent with the results proved in [Melenk, 1995] that the p-convergence is exponential while the h-convergence is much slower. By taking advantage of this feature, the VTCR could lead to a precise solution with a relatively small numerical model.
Adaptive VTCR
An adaptive version of the VTCR is presented in [START_REF] Ladevèze | The Variational Theory of Complex Rays. MID-FREQUENCY-CAE Methodologies for Mid-Frequency Analysis in Vibration and Acoustics[END_REF]. For the VTCR, it needs a proper angular discretization in each subdomain. If the amplitudes of waves are sparsely distributed, a coarse angular discretization is enough for the VTCR. Otherwise when the amplitudes of waves are densely distributed, a refined angular discretization is required. Beginning with a coarse angular discretization, the adaptive version VTCR will adopt a refined angular discretization when it is needed. Thus, the process is completely analogous to that used in the adaptive FEM [Stewart et Hughes, 1997b] and consists of three steps:
• In the first step, a global analysis of the problem is carried out using a uniform, low-density angular wave distribution based angular grid ν M .
• The objective of the second step is to calculate the proper angular discretization.
The quality of the approximation from the first step is quantified using an error indicator I Ω E which indicates whether a new angular discretization is locally nec- essary. If it is, a refined angular grid ν m locally replaces the coarse angular grid ν M .
• The third step is a new full calculation using angular grid ν m .
If the last calculation is not sufficiently accurate, the procedure can be repeated until the desired level of accuracy is attained.
In the second step, the error estimator defined in (2.24) serves as the error indicator
I Ω E . It can be useful to set two limit levels m 0 and m 1 : if I Ω E < m 0 , the quality of the solution is considered to be sufficient and no angular rediscretization of subdomain Ω E is necessary. If m 0 < I Ω E m 1 , the error is moderate, but too high and a new refined angular discretization is necessary. If m 1 < I Ω E , the solution is seriously flawed and the boundary conditions of Ω E must be recalculated more accurately, which requires a new first step. In practice, one often chooses m 0 = 10% and m 1 = 40%. As explained before, a large I Ω E indicates a poor solution in Ω E due to too coarse an angular discretization of the wave amplitudes. A new and better angular discretization is required. Then, the number of rays used for the coarse and refined discretizations are defined as:
N M e = τ M kR e /(2π) N m e = τ m kR e /(2π) τ M = τ m + ∆τ (2.26)
where τ M , τ m and ∆τ are positive real numbers. ∆τ is a parameter for angular discretization refinement. In practice, one chooses τ M = 0.2 and ∆τ = 0.2.The angles of rays added for the refinement are determined by the quasi-symmetric ray distribution presented in Section 2.2.4.
Conclusion
This chapter has presented the standard VTCR applied in the Helmholtz problem of constant wave number. The VTCR uses the general solutions of the governing equation as shape functions. The solution of problem is approximated by a combination of these shape functions. Generally, the general solutions are plane wave functions, evanescent wave functions. The approximations in different subdomains are a priori independent. Since the governing equation is satisfied, only the boundary conditions and the continuity conditions on the interfaces should be taken into account. The VTCR naturally introduces these conditions in a variational formulation. The unknowns are the amplitudes associated with waves in all subdomains. To achieve the numerical implementation in finite dimension, an angular discretization should be done. An asymmetric ray distribution is used for recycling the matrix. In the VTCR, the condition number increases when result begins to converge. Even though this phenomenon will not deteriorate the error, a proper iterative solver should be chosen for solution. By comparison, an iterative solver namely pinv is chosen for the VTCR. Since the VTCR uses the wave functions to approximate solutions, it requires only a very small number of degrees of freedom to obtain a precise result. Therefore the VTCR outperforms the FEM when h-convergence is used. Furthermore it shows that the p-convergence is much more efficient than the h-convergence. Finally, a geometrical heuristic criterion of convergence, an error estimator of VTCR and an adaptive version VTCR are presented.
Chapter 3
The Extended VTCR for Helmholtz problem of slowly varying wave number This chapter is dedicated to extend the VTCR in Helmholtz problem of a slowly varying wave number. Based on the governing equation, the exact solutions, named Airy wave functions, are developed thoroughly. Construction of the finite dimensional approximation comes into discretizing the unknown distribution of the amplitudes of Airy wave functions. Then, in the first numerical example, its convergence properties will be studied. It will show that the convergence properties of this extended VTCR quite resemble the standard VTCR. It could well solve the mid-frequency problem with a small amount of degrees of freedom. Of course as a heritage of standard VTCR, the performances of p-convergence are also remarkable in the extended VTCR.
The second numerical study concerns a complicated semi-unbounded harbor agitation problem, on which the extended VTCR is applied to get the solution. The result further proves the advantages and efficiency of the extended VTCR method. In this chapter the wave number k in (2.1) is no longer a constant. Instead it is supposed to be in the form that k 2 = αx + βy + γ, where α, β, γ are constant parameters.
For simplicity, in this section we denote that
k 2 † = k 2 /(1 -iη) = α † x + β † y + γ † , where α † = α/(1 -iη), β † = β/(1 -iη), γ † = γ/(1 -iη) respectively. Presented in Chapter 2,
for the VTCR method, the exact solutions need to be known a priori to serve as shape functions. Therefore exact solutions of heterogeneous Helmholtz equation in (2.1) are required to be found. In order to solve the equation, the technique of separation of variable is considered here. On 2D, by introducing u(x) = F(x)G(y) into (2.1), it can be obtained that:
F ′′ F + α † x + γ † = - G ′′ G + β † y ≡ δ (3.1)
where δ is a free constant parameter. The analytic solutions of (3.1) are:
F(x) = C 1 Ai -α † x -γ † + δ α 2/3 † +C 2 Bi -α † x -γ † + δ α 2/3 † if |α † | = 0 C 1 cos γ † -δx +C 2 sin γ † -δx if |α † | = 0 (3.2) G(y) = D 1 Ai -β † y -δ β 2/3 † + D 2 Bi -β † y -δ β 2/3 † if |β † | = 0 D 1 cos √ δy + D 2 sin √ δy if |β † | = 0 (3.3)
where Ai and Bi are Airy functions [Zaitsev et Polyanin, 2002]. C 1 , C 2 , D 1 , D 2 are constant coefficients. When a variable named z → +∞, function Ai(z) tends towards 0 and function Bi(z) tends towards infinity (see Figure 3.1). Moreover when -z → -∞, the asymptotic expression of function Ai and Bi are:
Bi(-z) ∼ cos( 2 3 z 3 2 + π 4 ) √ πz 1 4 |arg(z)| < 2π/3 Ai(-z) ∼ sin( 2 3 z 3 2 + π 4 ) √ πz 1 4 |arg(z)| < 2π/3 (3.4)
Since when z → +∞, Bi(z) goes to infinity and it has no physical meaning. To avoid of using Airy functions in this interval, the idea is to create functions in combination of Airy where k 2 m represents the minimum value of k 2 on Ω and (x m , y m ) is the coordinate which enables k 2 to take its minimum value k 2 m over the domain. Denoting P = [P 1 ,P 2 ] = [cos(θ), sin(θ)], where θ represents an angle parameter ranging from 0 to 2π, k 2 can be expressed in form that:
k 2 = k 2 m + α(x -x m ) + β(y -y m ) = k 2 m P 2 1 + k 2 m P 2 2 + α(x -x m ) + β(y -y m ) (3.6)
As the similar procedure to get (3.2) and (3.3), functions F and G can be composed by:
F( x) = Bi(-x) + i * Ai(-x) (3.7) G( ỹ) = Bi(-ỹ) + i * Ai(-ỹ) (3.8)
where x and ỹ are defined as follows:
x = k 2 m * P 2 1 + α(x -x m ) α 2/3 (1 -iη) 1/3 = k 2 1 α 2/3 (1 -iη) 1/3 (3.9) ỹ = k 2 m * P 2 2 + β(y -y m ) β 2/3 (1 -iη) 1/3 = k 2 2 β 2/3 (1 -iη) 1/3 (3.10)
By such a way, -x andỹ always locate in [-∞,0] on the domain Ω. The new wave function ψ(x,P) is built as:
ψ(x,P) = F( x) * G( ỹ) (3.11)
Asymptotically, when α tends to 0
F( x) → cos(ζk 1 • x) + i * sin(ζk 1 • x) (3.12)
Asymptotically, when β tends to 0
G( ỹ) → cos(ζk 2 • y) + i * sin(ζk 2 • y) (3.13)
It can be observed that ψ(x,P) function is the general solution of Helmholtz equation in (2.1). Especially when α = 0 and β = 0, ψ(x,P) function becomes plane wave function.
The angle parameter θ in P describes the propagation direction of plane wave. Analogous to plane wave case, when α = 0 and β = 0, ψ function still represents a wave propagates on the 2D plane. P decides its propagation direction. In order to be distinct from plane wave, this wave is named Airy wave. An example of Airy wave and plane wave can be seen in Figure 3.2.
Variational Formulation
To solve this heterogeneous Helmholtz problem, again, the VTCR approach consists in searching solution u in functional space U such that
U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k 2 u E + r d = 0} (3.14)
The variational formulation of (2.1) can be written as: find u ∈ U such that
Re ik ∑ E,E ′ ∈E Γ EE ′ 1 2 {q u • n} EE ′ { ṽ} EE ′ - 1 2 [ qv • n] EE ′ [u] EE ′ dS -∑ E∈E Γ EE ∩∂ 1 Ω qv • n (u -u d ) dS + ∑ E∈E Γ EE ∩∂ 2 Ω (q u • n -g d ) ṽdS = 0 ∀v ∈ U 0 (3.15)
The U E,0 and U 0 are the vector space associated with U E and U when r d = 0. It could be noticed that the variational formulation (3.15) is exactly the same as (2.3). Therefore, to prove the equivalence of this weak formulation with the reference problem, one could refer to the demonstration in Section 2.2.2. The only difference between (3.15) and (2.3) is the definition of their working space. In (2.3), the working space is composed by the plane wave functions. In (3.15), instead, the working space is composed by the Airy wave functions.
Approximations and discretization of the problem
The ψ(x,P) function defined in (3.11) only represents the fast oscillatory scale of the wave propagating in heterogeneous field. Meanwhile the amplitude associated with the Airy wave function corresponds to the slow scale. Similarly, here only the slow scale is discretized and the fast scale is obtained analytically. The amplitude, which is a function that depends on the propagation direction θ, could be discretized by the similar way as the disretization of plane wave functions in Chapter 2. The general solution of heterogeneous Helmholtz equation could be locally written as
u E (x) = C E A E (k,P)ψ(x,P)dC E (3.16)
where A E is the distribution of the amplitudes of the complex rays and C E curve is described by the wave vector when it propagates to all the directions of the plane. In the linear acoustic C E is a circle. The expression (3.16) describes two scales.
In order to further discretize the general solution to achieve the finite dimensional implementation, instead of the circular integration, the general solution could be approximately composed by complex rays of several directions. With the rays approximation, (3.16) could be rewritten into
u E (x) = J ∑ j=1 A jE ψ(x,P j )
(3.17)
ϕ jE (x) = ψ(x,P j ) (3.18)
where A jE becomes the amplitude of the Airy wave which propagates in direction θ associated with P j .
Here, it is no need to repeat the procedure to generate the matrix system. It is exactly the same with the procedure presented in Chapter 2. One could refer to it for all the details and the properties.
Numerical implementation
Numerical integration
Since Airy wave function behaves in a quick oscillatory way, the general Gauss quadrature is no longer fit for the numerical integration. Due to the complexity of the Airy wave function, analytic solution of integration is difficult to be explicitly expressed. One must resort to other powerful numerical integration techniques. The integration methods considered are:
• trapz. It performs numerical integration via the trapezoidal method. This method approximates the integration over an interval by breaking the area down into trapezoids with more easily computable areas. For an integration with N+1 evenly spaced points, the approximation is:
b a f (x)dx ≈ b -a 2N N ∑ n=1 ( f (x n ) + f (x n+1 )) = b -a 2N [ f (x 1 ) + 2 f (x 2 ) + • • • + 2 f (x N ) + f (x N+1 )] (3.19)
where the spacing between each point is equal to the scalar value ba N . If the spacing between the points is not constant, then the formula generalizes to
b a f (x)dx ≈ 1 2 N ∑ n=1 (x n+1 -x n ) [ f (x n ) + f (x n+1 )] (3.20)
where (x n+1x n ) is the spacing between each consecutive pair of points.
• quad. It adopts the adaptive Simpson quadrature rule for the numerical integration. One derivation replaces the integrand f (x) by the quadratic polynomial P(x) which takes the same values as f (x) at the end points a and b and the midpoint m = (a + b) 2 . One can use Lagrange polynomial interpolation to find an expression for this polynomial.
P(x) = f (a) (x -m)(x -b) (a -m)(a -b) + f (m) (x -a)(x -b) (m -a)(m -b) + f (b) (x -a)(x -m) (b -a)(b -m) (3.21)
An easy integration by substitution shows that
b a P(x) = b -a 6 f (a) + 4 f ( a + b 2 ) + f (b) (3.22)
Consequently, the numerical integration could be expressed as:
b a f (x)dx ≈ b -a 6 f (a) + 4 f ( a + b 2 ) + f (b) (3.23)
The quad function may be most efficient for low accuracies with nonsmooth integrands.
• quadl. It adopts the Gauss-Lobatto rules. It is similar to Gaussian quadrature with mainly differences. First, the integration points include the end points of the integration interval. Second, it is accurate for polynomials up to degree 2n -3,
where n is the number of integration points. Lobatto quadrature of function f (x) on interval [-1, 1]:
1 -1 f (x)dx ≈ 2 n(n -1) [ f (1) + f (-1)] + n-1 ∑ i=2 w i f (x i ) (3.24)
where the abscissas x i is the (i -1)st zero of P ′ n-1 (x) and the weights w i could be expressed as:
w i = 2 n(n -1)[P n-1 (x i ) 2 ] , x i = ±1 (3.25)
The quadl function might be more efficient than quad at higher accuracies with smooth integrands.
• quadgk. Gauss-Kronrod quadrature is a variant of Gaussian quadrature, in which the evaluation points are chosen so that an accurate approximation can be computed by reusing the information produced by the computation of a less accurate approximation. Such integrals can be approximated, for example, by n-point Gaussian quadrature:
b a f (x)dx ≈ n ∑ i=1 w i f (x i ) (3.26)
where w i and x i are the weights and points at which to evaluate the function f (x).
If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the midpoint for odd numbers of evaluation points), and thus the integrand must be evaluated at every point. Gauss-Kronrod formulas are extensions of the Gauss quadrature formulas generated by adding n + 1 points to an n-point rule in such a way that the resulting rule is of order 2n + 1. These extra points are the zeros of Stieltjes polynomials. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate.
The quadgk function might be most efficient for high accuracy and oscillatory integrands. It supports infinite intervals and can handle moderate singularities at the endpoints. It also supports contour integration along piecewise linear paths.
A quick numerical example is done to test the performance of these four different numerical integrations on a simple square domain of [0 m,1 m]×[0 m, 1 m] and the origin being the left upper vertex( see Figure 3.3). The integrations are defined as
∂L u A • u i with i = 1, 2, • • • , 32
, where u A = ψ(x,P) is an Airy wave with θ = 0 • and its amplitude is 1. u i = ψ(x,P i ) are Airy waves with the angle shown in Table 3.1 and their amplitudes are all chosen to be 1. In this way, one constructs respectively thirty two integrations along the boundary on the bottom, which is denoted by ∂L. The other parameters are η = 0.01, α = 0 m -3 , β = -800 m -3 , γ = 1500 m -2 . This example is typical because to do the numerical implementation by the VTCR (3.14) one will always encounter the integrations resembling the integrations in our test. The symbol integration with MATLAB is used to yield the reference result as Table 3.2 shows. The symbol integration in MATLAB is the most accurate method but with extreme low efficiency. This is the reason why one choose the numerical integrations instead of symbol integrations in MATLAB. The differences of results between the reference results with the four numerical integration methods are made in Table 3.3, Table 3.4, Table 3.5, Table 3.6 correspondingly. It could be seen from Table 3.2 that the reference results are of order 10 0 . The differences between the reference results with the results calculated by trapz, guad, guadl and guadgk are of order 10 0 , 10 0 , 10 -10 and 10 -14 respectively. By comparison, one could draw the conclusion that quadgk could yield the most accurate results.
The VTCR suffers from ill-conditioning when it converges. In this situation, it is possible that even a disturbance of small value in the system may generate totally different solution. Therefore the accuracy is the crucial point for us to choose the numerical integration method. One could draw the conclusion from the results that the quadgk is most accurate and suitable since in the VTCR there are many quick oscillatory integrands. Table 3.1: The angle θ of Airy wave functions for the numerical test ×10 0 1.0344 + 0.0000i 0.0122 -0.0045i 0.0007 + 0.0000i -0.0013 -0.0078i 0.7028 -0.3261i 0.0067 -0.0039i 0.0001 -0.0077i 0.0852 -0.7668i -0.0177 -0.0862i -0.0008 + 0.0001i 0.0006 + 0.0006i -0.0540 + 0.0663i -0.0396 + 0.0080i -0.0023 -0.0017i 0.0022 + 0.0015i 0.0361 -0.0082i -0.0228 -0.0347i -0.0047 -0.0044i 0.0053 + 0.0012i 0.0311 + 0.0155i 0.0189 -0.0224i 0.0046 -0.0065i 0.0023 -0.0055i 0.0100 -0.0195i 0.0080 + 0.0150i 0.0050 + 0.0048i -0.0047 -0.0004i -0.0106 -0.0044i 0.0000 + 0.0046i -0.0029 + 0.0001i -0.0012 + 0.0014i 0.0020 + 0.0020i Table 3.2: Reference integral values ×10 -14 0.0000 + 0.0000i -0.0029 -0.0048i 0.0052 -0.0147i 0.0032 -0.0019i 0.1776 -0.2776i -0.0108 + 0.0082i -0.0048 + 0.0094i -0.1360 -0.3109i -0.1676 -0.0444i -0.0011 -0.0103i 0.0072 + 0.0074i 0.0604 + 0.1485i 0.0847 + 0.0808i 0.0133 -0.0024i -0.0050 + 0.0019i -0.0749 -0.0753i -0.0385 -0.0097i 0.0109 -0.0096i 0.0092 + 0.0096i 0.0343 -0.0049i -0.0073 -0.0021i -0.0143 -0.0031i -0.0169 -0.0013i -0.0168 + 0.0014i -0.0014 + 0.0024i 0.0044 + 0.0054i -0.0032 -0.0027i 0.0014 + 0.0006i 0.0061 -0.0047i -0.0032 + 0.0031i -0.0054 + 0.0031i 0.0021 -0.0046i Table 3.3: Difference between the quadgk integral values and the reference integral values
Iterative solver
Similar to the VTCR method, the extended VTCR also suffers from ill-conditioning. The research of the iterative solvers for the VTCR have been thoroughly studied in Section 2.3.
Convergence of the Extended VTCR
Convergence criteria
The geometrical heuristic criterion of convergence for the VTCR in the Helmholtz problem of constant wave number is shown in (2.23). Since k is not constant here, its maximum value k max on the domain is used in the heuristic criterion (2.23), which leads to
N e = τk max R e /(2π) (3.27)
where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of domain. τ = 2 is chosen in this dissertation.
×10 -10 0.0001 + 0.0000i 0.0005 + 0.0004i 0.0038 + 0.0000i 0.0003 -0.0003i -0.0715 -0.1134i -0.2077 + 0.1253i -0.0006 + 0.2415i -0.1334 + 0.0024i 0.0016 -0.0030i 0.7817 + 0.0680i -0.4297 -0.6301i -0.0033 + 0.0006i 0.2674 + 0.5459i 0.0976 + 0.0593i -0.0907 -0.0523i -0.2568 -0.4949i 0.0419 -0.0236i -0.0186 -0.0222i 0.0228 + 0.0083i -0.0204 + 0.0347i -0.0047 + 0.0069i 0.0891 -0.1254i 0.0438 -0.1065i -0.0022 + 0.0058i -0.0482 -0.0246i 0.0238 + 0.0190i -0.0204 + 0.0003i 0.0462 + 0.0006i -0.0221 + 0.0470i 0.0097 -0.0089i -0.2798 + 0.5013i -0.0101 + 0.0274i Table 3.4: Difference between the quadl integral values and the reference integral values ×10 0 0.0000 + 0.0000i -0.0006 + 0.0002i -0.0002 -0.0000i 0.0001 + 0.0004i -0.0000 + 0.0000i -0.0014 + 0.0008i -0.0000 + 0.0016i 0.0000 + 0.0000i 0.0000 + 0.0000i 0.0002 -0.0000i -0.0001 -0.0001i 0.0000 -0.0000i 0.0001 -0.0000i 0.0004 + 0.0003i -0.0004 -0.0003i -0.0001 + 0.0000i 0.0001 + 0.0001i 0.0007 + 0.0007i -0.0008 -0.0002i -0.0001 -0.0001i -0.0002 + 0.0002i -0.0006 + 0.0008i -0.0003 + 0.0007i -0.0001 + 0.0002i -0.0002 -0.0003i -0.0005 -0.0005i 0.0005 + 0.0000i 0.0002 + 0.0001i -0.0000 -0.0002i 0.0002 -0.0000i 0.0001 -0.0001i -0.0001 -0.0001i
Table 3.5: Difference between the trapz integral values and the reference integral values
Error indicator
The extended VTCR possesses the same feature as VTCR for error estimation. In each subdomain, its shape functions satisfy the governing equation. Meanwhile the boundary conditions are not satisfied automatically. Consequently, for the extended VTCR, the way to evaluate the accuracy of the approximated solution in subdomain is still to verify the continuity in terms of displacement and velocity with all the other subdomains in the vicinity of Ω E . Therefore, the definition of error indicator for the extended VTCR is the same as (2.24).
Numerical examples
Academic study of the extended VTCR on medium frequency heterogeneous Helmholtz problem
A simple geometry of square [0 m; 1 m]×[0 m; 1 m] is considered for domain Ω. In this domain, η = 0.01, α = 150 m -3 , β = 150 m -3 , γ = 1000 m -2 . Boundary conditions ×10 0 0.0000 + 0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 -0.0000i 0.0000 -0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i 0.0000 -0.0000i -0.0000 -0.0000i 0.0000 + 0.0000i -0.0000 + 0.0000i 0.0000 + 0.0000i 0.0000 + 0.0000i -0.0000 -0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 + 0.0000i -0.0000 -0.0000i -0.0000 + 0.0000i -0.0000 -0.0000i -0.1461 -0.1707i 0.1487 + 0.0262i 0.0000 + 0.0000i -0.0000 + 0.0000i 0.1906 -0.1578i 0.0126 -0.1533i -0.0000 + 0.0000i
= 10 • , θ 2 = 55 • , θ 3 = 70 •
correspond to propagation angle in P 1 , P 2 , P 3 respectively. The definition of the problem and the discretization strategy can be seen on Figure 3.4. This choice of geometry and boundary conditions allow one to calculate the real relative error of the extended VTCR method with exact solution. Therefore, the real relative error is defined as following:
ε ex = u -u ex L 2 (Ω) u ex L 2 (Ω) (3.28)
The result could be seen in Figure 3.5. The convergence curves of this extended VTCR method in heterogeneous problem behaves in the similar way as the convergence curves of the VTCR in the Helmholtz problem of constant wave number. Merely a small amount of degrees of freedom is sufficient to attain the convergence of numerical result, which is under a small relative error.
It can be seen that to obtain the result with same precision, refinement of subdomains results in the need of more degrees of freedom. This phenomena could be explained by the convergence properties of the VTCR. As presented in Chapter 2, for the standard VTCR both the p-convergence and the h-convergence will lead to convergent results but p-convergence performs in a far more efficient way. This special feature is inherited from the standard VTCR to this extended VTCR. Correspondingly, in Figure 3.5, the extended VTCR with only one computational domain converges the fastest. The one with nine subdomains is the slowest and the one with four subdomains locates in the middle.
Study of the extended VTCR on semi-unbounded harbor agitation problem
This example corresponds to a study of water agitation of a harbor. The movement of waves is dominated by Helmholtz equation. Incoming wave from far away field gives rise to reflected wave inside the harbor. The water wave length is much smaller than the geometry size of harbor. It is a medium frequency Helmholtz problem since there exists many periods of wave in the harbor.
Ω 1m 1m Ω Ω 1 Ω 2 Ω 3 Ω 4 Ω 1 Ω 2 Ω 3 Ω 4 Ω 5 Ω 6 Ω 7 Ω 8 Ω 9
The work in [START_REF] Modesto | Proper generalized decomposition for parameterized Helmholtz problems in heterogeneous and unbounded domains: application to harbor agitation[END_REF] solves the agitation of a real harbor with multi input data in an heterogeneous media and with an unbounded domain. There are mainly three difficulties in this problem. The first one is the pollution errors. The problem requires a large amount of degrees of freedom of FEM since there are large numbers of waves over the computational domain. The second difficulty is to solve the influence of small geometric features to the solution. The proper generalized decomposition (PGD) model reduction approach was used to obtain a separable representation of the solution at any point and for any incoming wave direction and frequency. By this approach, the calculation cost is drastically reduced. The third difficulty is to solve the unbounded problem. Facing to this task, the perfectly matched layers (PMLs) [Berenger, 1994, Modesto et al., 2015] was proposed to satisfy the Sommerfeld radiation condition. A special artificial layer is created around the studied domain to absorb the non-physical waves. The work [START_REF] Giorgiani | High-order continuous and discontinuous Galerkin methods for wave problems[END_REF] compares three Galerkin methods-continuous Galerkin, Compact Discontinuous Galerkin, and hybridizable discontinuous Galerkin in terms of performance and computational efficiency in 2-D scattering problems for low and high-order polynomial approximations. It shows the superior performance of high-order elements. It also presents the similar capabilities for continuous Galerkin and hybridizable discontinuous Galerkin, when high-order elements are adopted, both of them outperforming compact discontinuous Galerkin. Model of problem: Definition of the harbor is shown in Figure 3.6. The agitation of harbor depends on incoming wave. In later part of this section, one can see different numerical results calculated with different parameters including the angle of incoming wave and the frequency of incoming wave. Without losing generality, all boundaries of the harbor are supposed to be totally reflecting boundaries, which is denoted by Γ R :
(1iη)∂ n u = 0 over Γ R (3.29) u + 0 represents incoming wave from far away onto the harbor. It can be expressed as
u + 0 = A + 0 e ik + 0 ζ(cosθ + 0 x+sinθ + 0 y)
, where A + 0 is the amplitude of wave and θ + 0 is the angle of wave propagation direction. The origin of coordinate is O, located in the middle point of the harbor entrance. As Figure 3.7 shows, the sea bottom of the region outside the harbor varies slowly and the depth of water is considered as constant there. The depth of water inside the harbor decreases when it is closer to the land. Consequently, the length of wave varies inside the harbor. An assumption is proposed in this example that the depth of water h complies with the following relation:
h = 1 a + by (3.30)
where a, b are constant parameters. This relation could describe the variation of the water depth with respect to y. The relation between wave frequency ω and water depth h follows the non linear dispersion relation:
ω 2 = kgtanh(kh) (3.31)
where g = 9.81 m/s 2 is the gravitational acceleration and k is the wave number. In the case h ≪ λ, when the depth of water is far more less than the length of wave, there is the following shallow water approximation:
tanh(kh) ≈ kh (3.32)
This approximation is valid in the underwater field near seashore. The numerical result of this section will further validate of this approximation. Thus it can be obtained that:
k 2 = g -1 ω 2 (a + by) (3.33)
Incoming waves cause two kinds of reflection, which include the wave reflected by the boundary inside the harbor and the wave reflected by the boundary locating outside the harbor. Part of these reflected waves propagate from the harbor to far away field. This phenomenon leads to a semi-unbounded problem. In physics these waves need to satisfy Sommerfeld radiation condition. In our 2D model it is represented by:
lim r→+∞ √ r ∂u(r) r -iku(r) = 0 (3.34)
where r is the radial direction in polar coordinate.
Unbounded problem: Many methods have been proposed to solve unbounded problem such as perfectly matched layer (PMLs) [Berenger, 1994, Modesto et al., 2015], Nonreflecting artificial boundary conditions (NRBC) [Givoli, 2004], Bayliss, Gunzburger and Turkel Local non-reflecting boundary conditions (BGT-like ABC) [Bayliss et Turkel, 1980, Antoine et al., 1999] and Dirichlet to Neumann non-local operators [Givoli, 1999].
PMLs creates an artificial boundary and a layer outside the region of interest in order to absorb the outgoing waves. NRBC, ABC and Dirichlet to Neumann non-local operators introduce a far away artificial boundary which leads to minimize spurious reflections. VTCR method can combine these artificial boundary techniques to solve the semi-unbounded harbor problem without difficulty. But here analytic solution is taken into account to solve the problem. This choice allows us to take great advantage of VTCR method. Since analytic solution verifies Helmholtz equation and Sommerfeld radiation condition, it can be used as shape functions in VTCR. Compared with artificial boundary techniques, this approach leads to a simpler strategy of calculation.
The idea of seeking for analytic solution on the unbounded domain outside the harbor can be illustrated by two steps. As Figure 3.8 shows, in the first step a relatively simple problem is considered. Without the region inside the harbor, incoming wave u + 0 agitates on a straight boundary which is infinitely long. The boundary condition here is same as (3.29). The reflected wave is denoted by u a . It is evident that for such a problem, when
u + 0 = A + 0 e ik + 0 ζ(cosθ + 0 x+sinθ + 0 y) , it can be obtained that u a = A + 0 e ik + 0 ζ(cosθ a x+sinθ a y) ,
where θ a = 2π -θ + 0 . For the second step as Figure 3.9 shows, it is exactly the original harbor agitation problem in this Section. If u a of the first step is taken as exact solution here, it will create the residual value because the governing equation inside the harbor and boundary conditions are not satisfied. It is logical to add a complementary solution outside the harbor to offset the residual value. In this point of view, the origin O is chosen to develop the expansion of this complementary solution, which is denoted by u b . Here u b is required to satisfy governing equation outside the harbor, where the wave number is constant. Furthermore u b is required to satisfy the boundary condition on Γ O and Sommerfeld radiation condition. In previous work of VTCR [START_REF] Kovalevsky | The Fourier version of the Variational Theory of Complex Rays for medium-frequency acoustics[END_REF], it is shown that for 2D acoustic domain exterior to a circular boundary surface, the analytic solution of reflected wave U s of scattering problem in polar coordinate is in form of [Herrera, 1984]:
U s = ∞ ∑ n=0 (A n sin(nθ) + B n cos(nθ)) H (1) n (ζkr) (3.35)
where H It can be verified that (3.36) satisfies boundary conditions on Γ O . Therefore u b is found. Except on the origin point, the analytic solution on the domain outside the harbor equals to the sum of u + 0 , u a and u b .
Computational strategy: As mentioned before, our computational strategies are shown in Figure 3.11 and Figure 3.12. The domain outside the harbor is divided into two computational subdomains Ω 1 and Ω 2 . The subdomain Ω 2 is a semicircular domain, whose center locates at the origin point. The subdomain Ω 1 ranges from the boundary of Ω 2 to infinity. On this domain the analytic solution presented before is used. Computational domain Ω 2 is created to separate origin point from Ω 1 . Since k is considered as constant value of the region outside the harbor, plane wave function is used as shape function on subdomain Ω 2 .
Inside the harbor two different strategies of discretization are chosen in Figure 3.11 and Figure 3.12. The first strategy is that the domain inside the harbor is divided into one computational subdomain (See Figure 3.11). The second strategy is that the domain inside the harbor is divided into four computational subdomains (See Figure 3.12). By When the subdivision of computational domain is done, one needs to choose shape functions used on each subdomain. As mentioned before, u on domain Ω 1 contains u + 0 , u a and u b . This relation can be represented by u|
Ω 1 = u + 0 + u a + u b .
The unknown value u b can be expanded in the series written as (3.36). To achieve a discrete version of the VTCR, finite-dimensional space is required. Thus (3.36) needs to be truncated into finite series. The working space of u b denoted by U b Ω 1 is defined as:
U b Ω 1 = u b ∈ L 2 (Ω 1 ) : u b (x,y) = N 1 ∑ n=0 A 1n cosnθH (1) n (ζkr), A 1n ∈ C, n = 0, • • • ,N 1 (3.37)
where A 1n is the unknown degree of freedom. N 1 is the number of degree of freedom on Ω 1 . Working space of Ω 2 is defined as follows: U
Ω 2 = u ∈ L 2 (Ω 2 ) : u(x,y) = N 2 ∑ n=0 A 2n e ikζ(cosθ n x+sinθ n y) , A 2n ∈ C, n = 1, • • • ,N 2 (3.38)
where A 2n is the unknown amplitude of plane wave. N 2 is the number of degree of freedom on Ω 2 . On the computational domain of inside harbor, the working space is constituted by the ψ(x,P) functions and it is in the form of
U Ω m = u ∈ L 2 (Ω m ) : u(x,y) = N m ∑ n=0 A mn ψ(x,P n ), A mn ∈ C, n = 1, • • • ,N m (3.39)
where A mn is the unknown amplitude of the Airy waves on subdomain Ω m with m 3. N m is the number of degrees of freedom on Ω m .
Numerical result:
Here ω = 0.5 rad/s, a = 4.8 • 10 -2 m -1 , b = 4.8 • 10 -5 m -2 , η = 0.03 are the chosen as parameters. Therefore the depth of water ranges from -20.83 m to -8.33 m, which corresponds to slow variation of water depth near the seashore. The relation between k 2 and y follows (3.33). Taking into account the parameters, it can be derived that: andλ ∈ [104.72 m, 181.38 m]. The shallow water approximation (3.32) is approved to be valid since λ ≫ h.
k 2 = 1.2 • 10 -3 -1.2 • 10 -6 y (3.40) Inside the harbor k 2 ∈ [1.2 • 10 -3 m -2 , 3.0 • 10 -3 m -2 ]
Let the amplitude of incoming wave corresponds to A + 0 = 2 m and the angle of incoming wave corresponds to θ + 0 = 45 • . Following the computational strategies mentioned above, numerical results are shown in Figure 3.13. In this example the exact solution is unknown, therefore one adopts the error indicator (2.24). For the first strategy, one chooses N 1 = 20, N 2 = 100, N 3 = 160. The result error is 6.21 • 10 -3 . For the second strategy, one chooses
N 1 = 20, N 2 = 100, N 3 = 100, N 4 = 160, N 5 = 160, N 6 = 160. The result error is 1.52 • 10 -2 .
The results could be seen in Figure 3.13 and Figure 3.14. Figure 3.13 presents the global results over all subdomains. Since Ω 1 is the semi-unbounded domain, here the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate. Figure 3.14 shows the results inside the harbor calculated by the first strategy and the second strategy. One can see from the results that the two different computational strategies of the extended VTCR lead to the same result. It should be noticed that the performance of the first strategy is slightly better than the second strategy and it uses less degrees of freedom. Again, this phenomenon can be explained by the fact that the p-convergence always outperforms the h-convergence in the VTCR. It should also be noticed that only 280 degrees of freedom in all are sufficient to solve this medium frequency heterogeneous Helmholtz problem. The coarse domain disretization and small amounts of degrees of freedom used by the VTCR typify the advantage of this method. It also can be seen from Figure 3.13 that the numerical solution has a good continuity between adjacent subdomains. With the same parameters and with the first computational strategy mentioned before, two other results are calculated by changing the angle of incoming wave to θ + 0 = 35 • and θ + 0 = 65 • (see Figure 3.15). Again, results show that the continuity of displacement and velocity between subdomains are well verified.
Conclusion
This chapter proposes an extended VTCR method, which is able to solve the heterogeneous Helmholtz problem. In this extended VTCR, new shape functions are created. In the context of the Trefftz Discontinuous Galerkin method, these new shape functions satisfy the governing equation a priori. Therefore the extended VTCR is only required to meet the continuity conditions between subdomains and the boundary conditions. All these conditions are included in the variational formulation, which is equivalent to the reference problem.
The academic studies show the convergence properties of the extended VTCR: this approach converges in the same way as the VTCR presented in Chapter 2. Then a harbor agitation problem is studied. Compared with the previous examples, the harbor has a more complex geometry. By applying the extended VTCR, the problem is solved with a simple domain discretization and a small number of rays. To satisfy the Sommerfeld radiation condition, analytic solutions of the unbounded subdomain are developed and then used as shape functions by the VTCR on that subdomain. Inside the harbor, where the square of the wave number varies linearly due to the variation of the water depth, the Airy wave functions are used as shape functions. In the calculation, two different strategies are adopted: the first has only one subdomain inside the harbor, while the second has four. The results show that, with a good angular discretization, the two strategies lead the calculation to converge to the same result. This successfully illustrates that the VTCR has a significant potential to solve real engineering problems in an efficient and flexible way.
Chapter 4
The Zero Order and the First Order WTDG for heterogeneous Helmholtz problem
This chapter presents a wave-based Weak Trefftz Discontinuous Galerkin method for the heterogeneous Helmholtz problem. One locally develops general approximated solutions of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined. These functions only satisfy the local governing equation in the average sense. The Zero Order WTDG adopts the plane wave functions as shape functions, while the First Order WTDG adopts the Airy wave functions. Academic studies will show the features of the Zero Order and of the First Order WTDG for the heterogeneous Helmholtz problem. Lastly, the harbor agitation example is restudied with the Zero Order WTDG method and its results are compared with those obtained by the extended VTCR in Chapter 3. The WTDG was first introduced in [Ladevèze, 2011, Ladevèze et Riou, 2014]. In this method, the domain is divided into several subdomains and the shape functions are independent from one subdomain to another. The continuity of the solution between two adjacent subdomains is verified weakly through the variational formulation of the reference problem.
The reference problem considered is a heterogeneous Helmholtz problem over a domain Ω. Let Ω be partitioned into N non-overlapping subdomains Ω = ∪_{E=1}^{N} Ω_E. Denoting ∂Ω_E the boundary of Ω_E, we define Γ_{EE} = ∂Ω_E ∩ ∂Ω and Γ_{EE'} = ∂Ω_E ∩ Ω_{E'}.
The proposed approach consists in searching for the solution u in the functional space U such that
U = \{ u \;|\; u_{|\Omega_E} \in U_E \}, \qquad U_E = \{ u_E \;|\; u_E \in V_E \subset H^1(\Omega_E),\ (1-i\eta)\Delta u_E + \tilde{k}_E^2 u_E + r_d = 0 \} \quad (4.1)
where \tilde{k}_E is an approximation of k in subdomain Ω_E. Although it may be close to k, \tilde{k}_E is still an approximation, and the shape functions defined in (4.1) will not satisfy a priori the governing equation of (2.1). This is the reason why this method is called a weak Trefftz method instead of a Trefftz method. In Section 4.1.3 the concrete form of \tilde{k}_E will be further discussed. When r_d = 0, the vector spaces associated with U and U_E are denoted U_0 and U_{E,0}. The variational formulation can be written as: find u ∈ U such that
\mathrm{Re}\left\{ ik(x)\left[ \sum_{E,E'\in\mathbf{E}} \int_{\Gamma_{EE'}} \left( \frac{1}{2}\{q_u\cdot n\}_{EE'}\{\tilde{v}\}_{EE'} - \frac{1}{2}[\tilde{q}_v\cdot n]_{EE'}[u]_{EE'} \right) dS - \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} \tilde{q}_v\cdot n\,(u-u_d)\, dS + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_2\Omega} (q_u\cdot n - g_d)\,\tilde{v}\, dS - \sum_{E\in\mathbf{E}} \int_{\Omega_E} \left( \mathrm{div}\, q_u + k^2 u + r_d \right)\tilde{v}\, d\Omega \right] \right\} = 0 \quad \forall v \in U_0 \quad (4.2)
where \tilde{\square} denotes the complex conjugate of \square. It should be mentioned that the term of the formulation which contains the governing equation, \sum_{E\in\mathbf{E}} \int_{\Omega_E} (\mathrm{div}\, q_u + k^2 u + r_d)\,\tilde{v}\, d\Omega, could also be replaced by \sum_{E\in\mathbf{E}} \int_{\Omega_E} \left( \frac{1}{2}(\mathrm{div}\, q_u + k^2 u + r_d)\,\tilde{v} + \frac{1}{2}(\mathrm{div}\, q_v + k^2 v)\,\tilde{u} \right) d\Omega, and the demonstrations of Section 4.1.2 would remain unchanged.
Equivalence of the reference problem
Let us note that the WTDG formulation (4.2) can be written as: find u ∈ U such that
b(u,v) = l(v) \quad \forall v \in U_0 \quad (4.3)
where b has the property that b(u,u) is real.
Property 1. By defining \|u\|^2_U = \sum_{E\in\mathbf{E}} \int_{\Omega_E} \mathrm{grad}\,\tilde{u}\cdot\mathrm{grad}\,u\, d\Omega, \|u\|_U is a norm over U_0.
Proof. When \|u\|_U = 0, one finds grad u = 0. Then u could be a non-zero constant or zero. From the definition of U_0, it follows that (1 - i\eta)\Delta u + \tilde{k}_E^2 u = 0 over Ω_E with \tilde{k}_E^2 > 0. It can be deduced that u = 0 over Ω. Therefore \|u\|_U is a norm over U_0.
Property 2. When η is positive, the WTDG formulation is coercive.
Proof. In the weak Trefftz case, one has:
b(u,u) = \mathrm{Re}\left\{ ik\left[ \sum_{E\in\mathbf{E}} \int_{\partial\Omega_E} (q_u\cdot n)\,\tilde{u}\, dS - \sum_{E\in\mathbf{E}} \int_{\Omega_E} \mathrm{div}\,q_u\,\tilde{u}\, d\Omega \right]\right\} \quad \forall u \in U_0 \quad (4.4)
Consequently,
b(u,u) = \sum_{E\in\mathbf{E}} k\eta \int_{\Omega_E} \mathrm{grad}\,\tilde{u}\cdot\mathrm{grad}\,u\, d\Omega \quad (4.5)
Let us denote by cl Ω a bounded closed set which contains Ω and ∂Ω. Since k is a continuous function, k has a minimum value on cl Ω. Denoting k_{inf} = \inf\{k(x)\,|\, x \in \mathrm{cl}\,\Omega\}, it is evident that when η is positive, for u ∈ U_0, b(u,u) ≥ k_{inf}\, \eta\, \|u\|^2_U.
Property 3. The WTDG formulation (4.2) is equivalent to the reference problem (2.1).
And it has a unique solution.
Proof. If u is a solution of (2.1), it is also a solution of (4.2); therefore the existence of the solution is proved. From Property 1 and Property 2, it can be directly deduced that the solution u is unique.
The shape functions of the Zero Order WTDG and the First Order WTDG
Defined in (4.1), the shape functions used in each subdomain need to satisfy a Helmholtz equation in which \tilde{k}_E is an approximation of k on Ω_E. Defining x_e ∈ Ω_E, one has the Taylor series expansion of k² at the point x_e:
k^2 = k^2(x_e) + \xi\,\nabla(k^2)|_{x=x_e}\cdot(x-x_e) + o\!\left(\|x-x_e\|^{1+\xi}\right), \qquad \xi = 0 \text{ or } 1 \quad (4.6)
Taking the Zero Order approximation of (4.6) (ξ = 0) and replacing it in (2.1), it can be obtained that:
(1-i\eta)\Delta u + k^2(x_e)\, u = 0 \quad (4.7)
In this case \tilde{k}_E^2 = k^2(x_e) and it is known that the shape functions which satisfy (4.7) are the plane wave functions.
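A short symbolic check of this statement is given below; the damping factor ζ = (1 − iη)^{-1/2} attached to the wave vector is the same assumption as in the exponents of (3.38).

```python
# SymPy check that a plane wave with modulus k(x_e)*zeta satisfies (4.7).
import sympy as sp

x, y = sp.symbols('x y', real=True)
k0, eta, t = sp.symbols('k0 eta theta', positive=True)
zeta = 1 / sp.sqrt(1 - sp.I * eta)
u = sp.exp(sp.I * k0 * zeta * (sp.cos(t) * x + sp.sin(t) * y))
residual = (1 - sp.I * eta) * (sp.diff(u, x, 2) + sp.diff(u, y, 2)) + k0**2 * u
print(sp.simplify(residual))   # expected: 0
```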
Taking the First Order approximation of (4.6) (ξ = 1) and replacing it in (2.1), it can be obtained that:
(1-i\eta)\Delta u + \left[ k^2(x_e) + \nabla k^2|_{x=x_e}\cdot(x-x_e) \right] u = 0 \quad (4.8)
In this case the shape functions which satisfy (4.8) are the Airy wave functions ψ(x,P) introduced in Chapter 3.
Approximations and discretization of the problem
To implement the WTDG method, it is required to take a finite-dimensional subspace U^h_0 of U_0. In Section 4.1.3, two kinds of shape functions were generated by approximating the wave number k on each subdomain. Both the plane wave functions and the Airy wave functions represent waves propagating in the 2D plane; thus, by using an angular discretization, one can build the functional space U^h_0.
For the plane wave functions, U^h_0 is defined as:
U^h_0 = \left\{ u \in L^2(\Omega) : u(x)_{|\Omega_E} = \sum_{m_E=1}^{M_E} A_{m_E}\, e^{i\mathbf{k}\cdot\mathbf{x}},\ A_{m_E}\in\mathbb{C},\ E = 1,\cdots,N \right\} \quad (4.10)
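As an illustration of the angular discretization behind (4.10), a minimal sketch with regularly spaced directions on one subdomain is given below; the wave number, the number of waves and the evaluation points are arbitrary.

```python
# Evaluate the plane-wave basis of one subdomain at a set of points.
import numpy as np

def plane_wave_basis(points, k_E, M_E):
    """(n_points, M_E) matrix of exp(i k_E (cos(t_m) x + sin(t_m) y))."""
    thetas = 2.0 * np.pi * np.arange(M_E) / M_E      # regular angular distribution
    x, y = points[:, :1], points[:, 1:2]
    return np.exp(1j * k_E * (np.cos(thetas) * x + np.sin(thetas) * y))

pts = np.random.rand(50, 2)                          # sample points in Omega_E
B = plane_wave_basis(pts, k_E=40.0, M_E=16)
u_h = B @ np.ones(16, dtype=complex)                 # u|Omega_E = sum_m A_m e^{i k.x}
```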
For the Airy wave functions, U^h_0 is defined as:
U^h_0 = \left\{ u \in L^2(\Omega) : u(x)_{|\Omega_E} = \sum_{m_E=1}^{M_E} A_{m_E}\,\psi(x,P_{m_E}),\ A_{m_E}\in\mathbb{C},\ E = 1,\cdots,N \right\} \quad (4.11)
where M_E is the number of waves and A_{m_E} is the amplitude of the wave.
Numerical implementation
Integration of the WTDG
To implement the WTDG method, numerical integrations need to be done over the domain and along the boundary. Since the Zero Order WTDG and the First Order WTDG both use rapidly oscillatory shape functions, standard integration methods such as Gauss quadrature are not suitable for this kind of problem. Due to the complexity of the Airy wave function, one needs to resort to the numerical integration presented in Section 3.3.
Benefiting from the features of the plane wave functions, the numerical integration of the Zero Order WTDG can be achieved entirely by semi-analytic integration. There are mainly two reasons for this. First, as the plane wave functions are always exponential functions, the product of two shape functions is still an exponential function: instead of a direct multiplication, one can add the exponents of the two exponential functions to obtain the exponent of the result and multiply the two coefficients to obtain the final coefficient. Second, the integration of an exponential function can be calculated analytically once its exponent and coefficient are given. As the subdomains of the WTDG are rectangular here, all of their boundaries are straight lines.
The weak formulation of the WTDG contains the governing equation, the continuity of displacement and velocity on the interfaces, and the boundary conditions. Consequently, there are three kinds of integrations: integration over the domain, integration along the interfaces between subdomains, and integration along the boundary.
The first kind of integration can be done analytically without difficulty. In fact, any integration over the domain for the Zero Order WTDG can be decomposed into the following basic integration problem. Supposing (k_1\cos\theta_1 + k_2\cos\theta_2) \neq 0 and (k_1\sin\theta_1 + k_2\sin\theta_2) \neq 0, the analytic integration can be calculated in the following way:
\int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} C_1 e^{ik_1\cos\theta_1 x + ik_1\sin\theta_1 y}\, C_2 e^{ik_2\cos\theta_2 x + ik_2\sin\theta_2 y}\, dx\, dy = \int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} C_1 C_2\, e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x + i(k_1\sin\theta_1+k_2\sin\theta_2)y}\, dx\, dy = -\frac{C_1 C_2}{(k_1\cos\theta_1+k_2\cos\theta_2)(k_1\sin\theta_1+k_2\sin\theta_2)} \left[\left[ e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x + i(k_1\sin\theta_1+k_2\sin\theta_2)y} \right]_{x_1}^{x_2}\right]_{y_1}^{y_2} \quad (4.12)
If (k_1\cos\theta_1 + k_2\cos\theta_2) = 0 and (k_1\sin\theta_1 + k_2\sin\theta_2) \neq 0, the integration becomes:
\int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} C_1 C_2\, e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x + i(k_1\sin\theta_1+k_2\sin\theta_2)y}\, dx\, dy = \frac{C_1 C_2\,(x_2-x_1)}{i(k_1\sin\theta_1+k_2\sin\theta_2)} \left[ e^{i(k_1\sin\theta_1+k_2\sin\theta_2)y} \right]_{y_1}^{y_2} \quad (4.13)
If (k_1\sin\theta_1 + k_2\sin\theta_2) = 0 and (k_1\cos\theta_1 + k_2\cos\theta_2) \neq 0, the integration becomes:
\int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} C_1 C_2\, e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x + i(k_1\sin\theta_1+k_2\sin\theta_2)y}\, dx\, dy = \frac{C_1 C_2\,(y_2-y_1)}{i(k_1\cos\theta_1+k_2\cos\theta_2)} \left[ e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x} \right]_{x_1}^{x_2} \quad (4.14)
If (k 1 sinθ 1 + k 2 sinθ 2 ) = 0 and (k 1 cosθ 1 + k 2 cosθ 2 ) = 0, the integration becomes:
\int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} C_1 C_2\, e^{i(k_1\cos\theta_1+k_2\cos\theta_2)x + i(k_1\sin\theta_1+k_2\sin\theta_2)y}\, dx\, dy = C_1 C_2\,(x_2-x_1)(y_2-y_1) \quad (4.15)
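A minimal sketch of this semi-analytic integration is given below: the closed form of the rectangle integral (general case (4.12), with the degenerate cases obtained as the corresponding limits) is compared with a brute-force quadrature. All the numerical values are arbitrary illustrative choices.

```python
# Closed-form integral over [x1,x2]x[y1,y2] of the product of two plane waves,
# checked against a dense trapezoidal quadrature.
import numpy as np

def rectangle_integral(C1, k1, t1, C2, k2, t2, x1, x2, y1, y2, tol=1e-12):
    a = k1 * np.cos(t1) + k2 * np.cos(t2)          # combined x-wavenumber
    b = k1 * np.sin(t1) + k2 * np.sin(t2)          # combined y-wavenumber
    Ix = (x2 - x1) if abs(a) < tol else (np.exp(1j*a*x2) - np.exp(1j*a*x1)) / (1j*a)
    Iy = (y2 - y1) if abs(b) < tol else (np.exp(1j*b*y2) - np.exp(1j*b*y1)) / (1j*b)
    return C1 * C2 * Ix * Iy

def brute_force(C1, k1, t1, C2, k2, t2, x1, x2, y1, y2, n=2000):
    x, y = np.linspace(x1, x2, n), np.linspace(y1, y2, n)
    X, Y = np.meshgrid(x, y, indexing='ij')
    f = C1 * np.exp(1j*k1*(np.cos(t1)*X + np.sin(t1)*Y)) \
      * C2 * np.exp(1j*k2*(np.cos(t2)*X + np.sin(t2)*Y))
    return np.trapz(np.trapz(f, y, axis=1), x)

args = (1.0 + 0.5j, 7.0, 0.3, 2.0, 11.0, 2.1, 0.0, 1.0, -0.5, 0.5)
print(rectangle_integral(*args), brute_force(*args))   # agree to quadrature accuracy
```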
The second kind of integration, along the interfaces, can also be calculated analytically. The analytic method is similar to the integration over the domain and is not repeated here; one can refer to (4.12), (4.13), (4.14) and (4.15).
For the third kind of integration, when the boundary condition can be decomposed by Fourier expansion into exponential functions, the calculation can be done analytically; in this situation the integration is similar to the case along the interfaces.
However, when the boundary of the domain is irregular, the integration needs to be implemented numerically. The numerical integration methods are proposed in Section 3.3.1.
Iterative solver of the WTDG
Since in the WTDG the shape functions are wave functions in the form of ray approximations, the matrix suffers from ill-conditioning when the number of shape functions becomes too large. A similar feature was observed for the VTCR in Chapter 2. Therefore, the pinv iterative solver is chosen again for both the Zero Order WTDG and the First Order WTDG. More details can be found in Section 2.3.
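A minimal stand-in for this solver choice is sketched below using NumPy's truncated pseudo-inverse; the actual iterative solver of Section 2.3 is not restated here, and the matrix is a random placeholder made nearly singular on purpose.

```python
# Solve an ill-conditioned system K A = F with a truncated-SVD pseudo-inverse.
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((200, 200)) + 1j * rng.standard_normal((200, 200))
K[:, -1] = K[:, 0] + 1e-12 * rng.standard_normal(200)   # nearly dependent columns
F = K @ rng.standard_normal(200)                         # consistent right-hand side

A = np.linalg.pinv(K, rcond=1e-8) @ F    # drop singular values below rcond * s_max
print(np.linalg.norm(K @ A - F) / np.linalg.norm(F))     # small residual
```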
Convergence of the Zero Order and the First Order WTDG
Convergence criteria
The common point of the VTCR and the WTDG is that they both take wave functions as shape functions. As mentioned before, the shape functions of the VTCR satisfy the governing equation a priori, so the residue only appears on the boundary of each subdomain; based on the convergence criteria (2.23) and (3.27), a sufficiently large number of rays makes the results of the standard VTCR and of the extended VTCR converge with the desired precision. Unlike the VTCR, the WTDG incurs residues not only on the boundary but also inside the domain, because the governing equation is not satisfied by the shape functions. In this case, a sufficiently large number of rays alone can make the result converge while a large residue may remain inside the domain. For the WTDG, a sufficient number of subdomains and a sufficient number of rays are both essential conditions to obtain an accurate solution. The technique to choose a sufficient number of subdomains will be illustrated in Section 4.4.2. Here, the criterion for the number of rays is proposed.
For the Zero Order WTDG the criterion is defined as:
N e = τk e,0 R e /(2π) (4.16)
where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of the domain. k e,0 is a constant average value of the wave number on the domain. τ = 2 is chosen in this dissertation.
For the First Order WTDG the criterion is defined as:
N e = τk e,max R e /(2π) (4.17)
where N e is the number of rays, τ a parameter to be chosen and R e is the characteristic radius of domain. k e,max is the maximum value of the linearisation approximation of the wave number on the domain. τ = 2 is chosen in this dissertation.
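For concreteness, a small sketch applying the criteria (4.16) and (4.17) is given below; the characteristic radius and the wave numbers are illustrative values only.

```python
# Number of rays per subdomain from the criteria (4.16) and (4.17), with tau = 2.
import numpy as np

def n_rays(tau, k_char, R_e):
    return int(np.ceil(tau * k_char * R_e / (2.0 * np.pi)))

R_e, tau = 0.5, 2.0
print(n_rays(tau, k_char=40.0, R_e=R_e))   # Zero Order: k_e,0 = average wave number
print(n_rays(tau, k_char=50.0, R_e=R_e))   # First Order: k_e,max of the linearised wave number
```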
Error indicator and convergence strategy
Unlike the VTCR, in each subdomain the shape functions of the WTDG neither satisfy the governing equation nor satisfy the boundary conditions. In this case, the definition of (2.24) is not a valid error estimator because the error inside the subdomain is not taken into account.
Defining an error estimator for the WTDG remains an open question. In this dissertation, since the numerical examples are academic, it is practicable to take a precalculated WTDG solution as a reference solution. As (4.18) shows, the error estimator for the WTDG method is only based on WTDG results; the reference solution is calculated with an overestimated number of subdomains and an overestimated number of rays.
\varepsilon_{WTDG} = \frac{\|u_h - u_{ref}\|_{L^2(\Omega)}}{\|u_{ref}\|_{L^2(\Omega)}} \quad (4.18)
where u_ref is the overestimated solution of the WTDG. The criteria (4.16) and (4.17) are used to overestimate the solution; τ = 4 is chosen in this dissertation for the overestimation.
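In discrete form, the indicator (4.18) is simply a weighted relative L² distance between the current solution and the overestimated reference sampled at the same points; the arrays below are placeholders.

```python
# Discrete version of the error indicator (4.18).
import numpy as np

def epsilon_wtdg(u_h, u_ref, w):
    """Relative L2 error with quadrature weights w over Omega."""
    return np.sqrt(np.sum(w * np.abs(u_h - u_ref)**2) / np.sum(w * np.abs(u_ref)**2))

w = np.full(1000, 1.0e-4)                                 # quadrature weights
u_ref = np.random.rand(1000) + 1j * np.random.rand(1000)  # overestimated solution
u_h = u_ref + 0.01 * np.random.rand(1000)                 # current solution
print(epsilon_wtdg(u_h, u_ref, w))        # small, consistent with the 1% perturbation
```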
Convergence strategy: since the WTDG requires a sufficient number of subdomains to decrease the residues inside the domain, the following convergence strategy is proposed:
• 1. Start the calculation with several subdomains and a quasi-sufficient number of rays; calculate the error and assign its value to ε_0. If ε_0 ≤ m_stop, go to step 4; if ε_0 > m_stop, go to step 2.
• 2. Increase the number of rays; calculate the error and assign its value to ε_1. If ε_1 ≤ m_stop, go to step 4; if m_stop < ε_1 < ε_0, assign the value of ε_1 to ε_0 and repeat step 2; if m_stop < ε_1 and ε_1 ≥ ε_0, go to step 3.
• 3. Increase the number of subdomains and set a quasi-sufficient number of rays; calculate the error and assign its value to ε_0, then go to step 2.
• 4. Obtain the result with the desired precision and finish the calculation.
Here m_stop is the desired precision. "Quasi-sufficient rays" means that the angular discretization meets the criterion (4.16) if plane wave functions are used and (4.17) if Airy wave functions are used; τ = 2 is used to determine this quasi-sufficient number of rays (a sketch of this loop is given below).
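A minimal Python sketch of this adaptive loop follows; solve_wtdg and estimate_error are hypothetical stand-ins for a WTDG solve and for the indicator (4.18), and the refinement increments are arbitrary choices.

```python
# Sketch of the convergence strategy (steps 1 to 4) described above.
import numpy as np

def adaptive_wtdg(solve_wtdg, estimate_error, n_sub, k_avg, R_e, m_stop, tau=2.0):
    quasi = lambda: int(np.ceil(tau * k_avg * R_e / (2.0 * np.pi)))   # criterion (4.16)
    n_rays = quasi()
    u = solve_wtdg(n_sub, n_rays); eps0 = estimate_error(u)           # step 1
    while eps0 > m_stop:
        n_rays += 4                                                   # step 2: more rays
        u = solve_wtdg(n_sub, n_rays); eps1 = estimate_error(u)
        if eps1 >= eps0:                                              # no improvement
            n_sub *= 2                                                # step 3: refine subdomains
            n_rays = quasi()
            u = solve_wtdg(n_sub, n_rays); eps1 = estimate_error(u)
        eps0 = eps1
    return u, eps0                                                    # step 4
```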
Academic study of the Zero Order WTDG in the heterogeneous Helmholtz problem of slowly varying wave number
The reference problem is the square-domain problem with linearly varying k² already studied by the extended VTCR in Chapter 3; its Dirichlet boundary conditions are built from Airy wave functions whose parameters P_1, P_2 and P_3 correspond to three propagation angles. This choice of geometry and boundary conditions allows one to calculate the relative error of the Zero Order WTDG method with respect to the exact solution. The real relative error is defined as:
\varepsilon_{ex} = \frac{\|u - u_{ex}\|_{L^2(\Omega)}}{\|u_{ex}\|_{L^2(\Omega)}} \quad (4.19)
The definition of the problem and the discretization strategy can be seen in Figure 4.1. It is evident that in this case the Airy wave function is the exact solution of the governing equation. This example has already been studied by the extended VTCR method in Chapter 3. Here it serves as a quick example to show the capacity of the Zero Order WTDG to deal with a medium-frequency Helmholtz problem of slowly varying wave number. Figure 4.1 shows five strategies to discretize the subdomains of the Zero Order WTDG. For each strategy, the number of subdomains is fixed and the number of waves is gradually increased to draw the convergence curve, which corresponds to a p-convergence study; moreover, for each strategy, the number of waves is kept the same in every subdomain. Figure 4.2 shows the convergence curves. First, for each strategy the convergence curve remains nearly unchanged beyond a certain number of degrees of freedom. This can be explained by the fact that a sufficient number of rays is then used in each subdomain but, since the number of subdomains is fixed, the residue inside the subdomains cannot be decreased any further. Second, it can be observed that the performance of the convergence curve depends on the number of subdomains. The reason is that the WTDG takes an approximate value of the wave number in each subdomain: when more subdomains are used, the residue caused by the approximation of the governing equation decreases correspondingly. This phenomenon is consistent with the convergence study presented in Section 4.4.1. It reflects the significant feature of the WTDG that it can smoothly approximate the exact solution of a heterogeneous Helmholtz problem through the refinement of the subdomains. It can be seen that, with this method, one can obtain a result with the desired precision.
Figure 4.1: the computational domain Ω (1 m × 1 m) and the five subdomain discretization strategies of the Zero Order WTDG (1, 4, 9, 16 and 25 subdomains).
Academic study of the First Order WTDG in the heterogeneous Helmholtz problem of sharply varying wave number
In this numerical example, the geometry of the reference problem is the same as in Section 4.5.1: a square computational domain with η = 0.01, but here k = 5x + 5y + 40, so that k varies by 25% over Ω. The boundary condition on ∂Ω is of Dirichlet type with u_d = 1. Since the general solution of the governing equation is unknown for this problem, the VTCR cannot be used to solve it; however, by smoothly approximating the governing equation, the WTDG can treat it. Both the Zero Order WTDG and the First Order WTDG are employed to show the performance of the WTDG approach. In this example, the error estimator defined by (4.18) is adopted to capture the error. For the Zero Order WTDG, the overestimated calculation uses 225 subdomains and 40 plane waves in each subdomain to obtain u_ref; for the First Order WTDG, it uses 25 subdomains and 80 Airy waves in each subdomain. A visual illustration of some results calculated by the First Order WTDG is presented in Figure 4.6, together with a result calculated by the FEM with 625 elements of a quadrilateral mesh of order 3, which serves as a comparison with the results of the Zero Order and First Order WTDG. It can be seen that both the Zero Order WTDG and the First Order WTDG are capable of solving this problem well.
Three points should be mentioned here. First, the Zero Order WTDG needs a fine discretization into subdomains. This is because the wave number k varies greatly over Ω: compared with the medium-frequency Helmholtz problem of slowly varying wave number of Section 4.5.1, the problem becomes one of fast varying wave number. In this situation it is necessary to use more subdomains for the Zero Order WTDG; otherwise a large residue remains inside the domain. Second, the First Order WTDG requires far fewer subdomains to obtain an accurate result. The reason is that it makes a higher order approximation of the governing equation: the Zero Order WTDG takes the average value of the wave number on the subdomain, while the First Order takes into account not only the average value of the wave number but also its linear variation. Consequently, compared with the Zero Order WTDG, the First Order WTDG uses far fewer subdomains. Third, when more subdomains are used in the WTDG, fewer waves are needed in each subdomain. This can be explained by the convergence criteria (4.16) and (4.17), which determine the number of plane waves and of Airy waves required for convergence. Again, it should be noticed that in the WTDG a sufficient number of rays is only a necessary, and not a sufficient, condition for accuracy: the residue is also influenced by the way the wave number is approximated. Therefore a sufficient number of subdomains is essential in the WTDG to reach an accurate result; otherwise, as seen in the example of Section 4.5.1, increasing the number of waves will not further improve the accuracy of the WTDG.
Study of the Zero Order WTDG on the semi-unbounded harbor agitation problem
The harbor agitation problem was studied by the extended VTCR in Chapter 3. In this section, the WTDG is used to solve it. For the region outside the harbor, the discretization into subdomains and the choice of their working spaces remain unchanged; inside the harbor, however, the Zero Order WTDG is adopted. As mentioned above, the working space of u_b, denoted U^b_{Ω_1}, is defined as:
U^b_{\Omega_1} = \left\{ u_b \in L^2(\Omega_1) : u_b(x,y) = \sum_{n=0}^{N_1} A_{1n}\cos(n\theta)\, H^{(1)}_n(\zeta k r),\ A_{1n}\in\mathbb{C},\ n = 0,\cdots,N_1 \right\} \quad (4.20)
where A_{1n} is the unknown degree of freedom and N_1 is the number of degrees of freedom on Ω_1. The working space of Ω_2 is defined as follows:
U_{\Omega_2} = \left\{ u \in L^2(\Omega_2) : u(x,y) = \sum_{n=0}^{N_2} A_{2n}\, e^{ik\zeta(\cos\theta_n x + \sin\theta_n y)},\ A_{2n}\in\mathbb{C},\ n = 1,\cdots,N_2 \right\} \quad (4.21)
where A 2n is the unknown amplitude of plane wave. N 2 is the number of degrees of freedom on Ω 2 .
The working space U_{Ω_j} of a subdomain Ω_j inside the harbor, with j ≥ 3, is expressed as follows:
U_{\Omega_j} = \left\{ u \in L^2(\Omega_j) : u(x,y) = \sum_{n=0}^{N_j} A_{jn}\, e^{ik_j\zeta(\cos\theta_n x + \sin\theta_n y)},\ A_{jn}\in\mathbb{C},\ n = 1,\cdots,N_j \right\} \quad (4.22)
where k j = k(x j ) and x j is the coordinate of center point on Ω j . A jn is the unknown amplitude of plane wave. N j is the number of degrees of freedom on Ω j .
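The choice k_j = k(x_j) amounts to a one-line evaluation of (4.23) at the centre of each subdomain; the centre ordinates below are placeholders for the actual decomposition of the harbor.

```python
# One constant wave number per subdomain inside the harbor, from (4.23).
import numpy as np

def k_of_y(y):
    return np.sqrt(1.2e-3 - 1.2e-6 * y)                     # k in 1/m

centres_y = np.array([-150.0, -450.0, -750.0, -1050.0, -1350.0])   # illustrative centres
k_j = k_of_y(centres_y)        # used in the plane-wave basis of each subdomain Omega_j
print(k_j)
```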
To implement the numerical calculation, the parameters of the model remain the same as those adopted in Chapter 3. The amplitude of the incoming wave corresponds to A^+_0 = 2 m and its angle to θ^+_0 = 45°. The other parameters are chosen as: ω = 0.5 rad/s, a = 4.8·10⁻² m⁻¹, b = 4.8·10⁻⁵ m⁻², η = 0.03. By replacing these parameters in (3.33), it can be derived that:
k^2 = 1.2\cdot 10^{-3} - 1.2\cdot 10^{-6}\, y \quad (4.23)
The error indicator ε_WTDG defined in (4.18) is used here to drive the calculation to convergence. Following the computational strategies mentioned above, the overestimated solution u_ref takes 20 subdomains and 200 waves in each subdomain inside the harbor. The calculation is carried out with three different discretization strategies. The first strategy takes five subdomains and 120 waves in each subdomain; the resulting error is 5.32·10⁻³ and 720 degrees of freedom are used in total. The second strategy takes ten subdomains and 120 waves in each subdomain; the resulting error is 1.27·10⁻³ and 1320 degrees of freedom are used in total. The third strategy takes fifteen subdomains and 120 waves in each subdomain; the resulting error is 3.06·10⁻⁴ and 1920 degrees of freedom are used in total.
The results are shown in Figure 4.8 and in Figure 4.9. Figure 4.8 shows the global results which contain the region inside the harbor and outside the harbor. Since Ω 1 is the semi-unbounded domain, the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinates. Figure 4.9 shows the detailed results inside the harbor. Besides, the result calculated by the VTCR in Chapter 3 is also shown here to give a visual comparison with the WTDG method.
It can be inferred from the results that this medium-frequency heterogeneous Helmholtz problem is well solved by the WTDG. One can also see that increasing the number of subdomains inside the harbor reduces the residue; however, the improvement can no longer be judged visually since the error is already small, which is why the different strategies in Figure 4.8 and Figure 4.9 appear to lead to the same result. This phenomenon reflects the stability of the WTDG and is consistent with its performance in the academic study of Section 4.5.1. Another point worth mentioning is that, although it uses a considerable number of subdomains, the WTDG guarantees a good continuity between adjacent subdomains. The fact that the WTDG can smoothly approximate a Helmholtz problem with varying wave number is thus demonstrated again on this harbor agitation problem.
Mid/high frequency model: lastly, a quick calculation is carried out with a modified wave number: the wave number of the model (4.23) is increased by a factor of four. The results can be seen in Figure 4.10 and Figure 4.11. Again, since Ω_1 is the semi-unbounded domain, the numerical result only shows a truncated part with r ∈ [1000 m, 2000 m] in polar coordinates. In this case, there are nearly 90 periods of waves inside the computational domain and the calculation uses 3940 degrees of freedom. Such a calculation would pose a great numerical challenge to the FEM, while the WTDG solves it without difficulty.
Conclusion
To address the heterogeneous Helmholtz problem, this chapter proposed two wave-based WTDG approaches. In the WTDG, the shape functions are not required to satisfy the governing equation a priori. In this chapter, wave functions are proposed as shape functions. Approximating the wave number of the governing equation by its zero order Taylor series, one obtains an approximated equation whose exact solutions are plane wave functions; approximating it by its first order Taylor series, one obtains an approximated equation whose exact solutions are Airy wave functions. These wave functions only satisfy the governing equation approximately: the governing equation is only enforced in the average sense through the variational formulation, so a residue is created inside each subdomain. The finer the subdomain discretization, the smaller the residue inside the subdomains, because when the size of a subdomain decreases, the approximated wave number gets closer to the real wave number of the problem. In short, the WTDG can smoothly approximate the solution of the reference problem.
Academic studies have been carried out in this chapter to show the convergence properties of the WTDG method. Both the Zero Order WTDG and the First Order WTDG lead to convergent and accurate numerical results. In addition, the WTDG is used to study the harbor agitation problem, which has an engineering application background and was studied by the extended VTCR in Chapter 3. The results show that the wave-based WTDG performs well on this problem.
Chapter 5
FEM/WAVE WTDG approach for frequency bandwidth including LF and MF
This chapter focuses on the hybrid use of the FEM approximation and the wave approximation for the constant wave number Helmholtz problem, over a range going from low frequency to mid frequency. Benefiting from the FEM approximation, the FEM/WAVE WTDG method solves the low-frequency problem well; benefiting from the wave approximation, it solves the mid-frequency problem as efficiently as the VTCR does. The feasibility of this hybrid method is ensured by the weak Trefftz discontinuous Galerkin method, which introduces a variational formulation of the reference problem whose shape functions can be chosen under fewer restrictions than in the VTCR: they are not required to satisfy the governing equation a priori.
The equivalence of the formulation is proved and discretization strategies are proposed in this chapter. Numerical studies then illustrate the performance of the FEM/WAVE WTDG approach.
Rewriting of the reference problem
The WTDG was first introduced in [Ladevèze, 2011]. In [Ladevèze et Riou, 2014], a coupling between the FEM approximation and the wave approximation was developed with the WTDG, in such a way that the FEM approximation and the wave approximation are used separately in each subdomain. In this chapter, the WTDG is extended to mix them in the same subdomains, at the same time.
Variational Formulation
In this chapter the reference problem is defined by (2.1), where the wave number is constant and lies either in the low-frequency or in the mid-frequency range. In order to get an equivalent variational formulation of (2.1), the domain is divided into subdomains Ω_E with E ∈ E. Γ_{EE'} denotes the interface between two subdomains E and E', and Γ_{EE} denotes the interface between subdomain Ω_E and the boundary ∂Ω. The proposed approach consists in using the working space U ⊂ H^1(Ω):
U = \{ u \;|\; u_{|\Omega_E} \in U_E \}, \qquad U_E = \{ u_E \;|\; u_E \in V_E \subset H^1(\Omega_E) \} \quad (5.1)
The vector spaces associated with U and U E where r d = 0 are denoted by U 0 and U E,0 .
Then the WTDG formulation can be written as: find u ∈ U such that
\mathrm{Re}\left\{ ik\left[ \sum_{E,E'\in\mathbf{E}} \int_{\Gamma_{EE'}} \left( \frac{1}{2}\{q_u\cdot n\}_{EE'}\{\tilde{v}\}_{EE'} - \frac{1}{2}[\tilde{q}_v\cdot n]_{EE'}[u]_{EE'} \right) dS - \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} \tilde{q}_v\cdot n\,(u-u_d)\, dS - \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} \alpha\, i\,\tilde{v}\,(u-u_d)\, dS + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_2\Omega} (q_u\cdot n - g_d)\,\tilde{v}\, dS - \sum_{E\in\mathbf{E}} \int_{\Omega_E} \left( \mathrm{div}\, q_u + k^2 u + r_d \right)\tilde{v}\, d\Omega \right]\right\} = 0 \quad \forall v \in U_0 \quad (5.2)
where α is a strictly positive parameter used to enforce the Dirichlet boundary condition. As one can see, there is no a priori constraint on the choice of the spaces U and U_0; consequently, one can select a polynomial approximation, as in the FEM, or a wave approximation, as in the VTCR, or even both.
Equivalence of the reference problem
Let us note that (5.2) can be written as: find u ∈ U such that
b(u,v) = l(v) \quad \forall v \in U_0 \quad (5.3)
where b has the property that b(u,u) is real.
Property 1. For u ∈ U_0, we have
b(u,u) = \sum_{E\in\mathbf{E}} k\eta \int_{\Omega_E} \mathrm{grad}\,\tilde{u}\cdot\mathrm{grad}\,u\, d\Omega + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} k\alpha\, u\tilde{u}\, dS \;\geq\; 0 \quad (5.4)
Proof.
b(u,u) = \mathrm{Re}\left\{ ik\left[ \sum_{E\in\mathbf{E}} \int_{\partial\Omega_E} (q_u\cdot n)\,\tilde{u}\, dS - \sum_{E\in\mathbf{E}} \int_{\Omega_E} \mathrm{div}\,q_u\,\tilde{u}\, d\Omega + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} \alpha i\, u\tilde{u}\, dS \right]\right\} \quad (5.5)
Consequently,
b(u,u) = \sum_{E\in\mathbf{E}} k\eta \int_{\Omega_E} \mathrm{grad}\,\tilde{u}\cdot\mathrm{grad}\,u\, d\Omega + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} k\alpha\, u\tilde{u}\, dS \quad (5.6)
From Property 1 it can be deduced that if b(u,u) = 0, then u is equal to zero over ∂Ω E ∩ ∂ 1 Ω. It is a piecewise constant within subdomains Ω E , E ∈ E. To keep the uniqueness of the solution, condition (P) is introduced to be satisfied by the shape functions which belong to U 0 .
Referring to [Ladevèze et Riou, 2014], one obtains Condition (P), which is crucial for the demonstration. Its definition is as follows:
Condition (P). Let a_E ∈ U_E be a piecewise constant function within the subdomains E ∈ E. a_E satisfies condition (P) if
\forall v \in U_0,\ \forall E \in \mathbf{E}, \quad \mathrm{Re}\left\{ ik \sum_{E,E'\in\mathbf{E}} \int_{\partial\Omega_E} (q_v\cdot n)\,\tilde{a}_{E'}\, dS \right\} = 0 \;\Rightarrow\; a_E = \pm a \quad (5.7)
where E' is a subdomain sharing a common boundary with E, and with the convention a_{E'} = -a_E over ∂Ω_E ∩ ∂Ω.
Property 2. If U 0 satisfies condition (P) and if η is positive, the WTDG formulation (5.2) has a unique solution.
Proof. In finite dimension, the existence of the solution is ensured if its uniqueness can be proved. Let us suppose that (5.2) has two solutions u_1 and u_2. Then v = u_1 - u_2 ∈ U_0 and
b(v,v) = \sum_{E\in\mathbf{E}} k\eta \int_{\Omega_E} \mathrm{grad}\,\tilde{v}\cdot\mathrm{grad}\,v\, d\Omega + \sum_{E\in\mathbf{E}} \int_{\Gamma_{EE}\cap\partial_1\Omega} k\alpha\, v\tilde{v}\, dS = 0 \quad (5.8)
It can be observed that v E = a E with E ∈ E, where a E is piecewise constant within the subdomains and a E = 0 in the subdomains sharing a common boundary with ∂ 1 Ω. Backsubstituting this result into (5.2), one also finds b(v,v * ) = 0 ∀v * ∈ U 0 , which leads to
\forall v^* \in U_0, \quad \mathrm{Re}\left\{ ik \sum_{E,E'\in\mathbf{E}} \int_{\partial\Omega_E} (q_v\cdot n)\,\tilde{a}_{E'}\, dS \right\} = 0 \quad (5.9)
(5.9) corresponds to the condition (P), where E ′ represents a subdomain sharing a common boundary with E, with the convention a E ′ = -a E over ∂Ω E ∩ ∂Ω. Consequently, a E = ±a ∀E ∈ E. Moreover, given that a E = 0 over ∂ 1 Ω, we have a = 0.
Referring to [Ladevèze et Riou, 2014], one obtains Property 3, which is crucial for the demonstration. Its definition and demonstration are as follows:
Property 3. If U_E is the combination of the solution spaces of the FEM and of the VTCR, then condition (P) is satisfied, and
\|u\|^2_{U_0} = b(u,u) + \gamma^2(u) \quad (5.10)
is a norm over U_0. We define
U_{E,0} = \{ u \;|\; u \in V_E,\ u = C_E\cdot X_E \} \quad (5.11)
where C_E is a constant vector over Ω_E and X_E is the position vector relative to the center of inertia of element E. U_0 denotes the space defined over Ω associated with U_{E,0}. For u ∈ U_0 the quantity γ is defined as
\gamma(u) = \sup_{v\in U_0} b(u,v)/\|C_v\|_{L^2(\Omega)} \quad (5.12)
where C_v corresponds to the vector C_E of v according to (5.11).
Proof. Let z_E = β_E + a_E; z_E is continuous because z_{E|\Gamma_{EE'}} = a_{E'} + a_E = z_{E'|\Gamma_{EE'}}. It follows that z is constant over Ω. Since z is zero over ∂Ω, z = 0 over Ω and β_E = -a_E. Consequently, a_E can only take the values +a or -a, a being a constant over Ω.
To demonstrate that \|u\|^2_{U_0} is a norm over U_0, let us consider \|u\|^2_{U_0} = b(u,u) + \gamma^2(u) = 0. It follows that b(u,u) = 0 and \gamma(u) = 0. From (5.6) it can be obtained that u_{|\Omega_E} = a_E is constant over Ω_E and that u = 0 over ∂_1Ω. Then γ(u) is equal to
\gamma(u) = \sup_{v\in U_0} \frac{1}{\|C_v\|_{L^2(\Omega)}}\, \mathrm{Re}\left\{ ik \sum_{E,E'\in\mathbf{E}} \int_{\partial\Omega_E} (q_v\cdot n)\,\tilde{a}_{E'}\, dS \right\} = 0 \quad (5.13)
Since condition (P) is satisfied, it can be derived that u E = ±a, a being a constant over Ω.
Finally from u = 0 on ∂ 1 Ω, one gets u = 0 over Ω.
Approximations and discretization of the problem
Defined by (5.1), the working space U could be split into two subspaces U w and U p , which represent the subspace generated by the plane wave functions and the subspace generated by the polynomial functions.
U = U_w \oplus U_p \quad (5.14)
For the numerical implementation, U_w and U_p are then truncated into finite dimensional subspaces, denoted U^h_w and U^h_p respectively.
Plane wave approximation: the approximate solution in the subspace U^h_w can be expressed as
u_w(x) = \sum_{n=0}^{N_w} A_n\, e^{i\mathbf{k}\cdot\mathbf{x}} \quad (5.15)
where A_n is the unknown amplitude of the plane wave and N_w is the number of plane waves.
Polynomial approximation: the approximate solution in the subspace U^h_p can be expressed as
u_p(x) = \sum_{n=0}^{N_p} U_n\, \varphi_n(x) \quad (5.16)
where the U_n are the unknown degrees of freedom of the polynomial interpolation and the \varphi_n(x) are the standard interpolation functions of the polynomial approximation. The mesh of the polynomial approximation can be built in the same way as in the standard FEM; without loss of generality, the meshes used in this dissertation are regular square ones.
However, it should be noticed that, unlike in the standard FEM, the approximate solution u_p(x) is not required to satisfy a priori the Dirichlet condition imposed on the boundary. Instead, it is the sum of u_w(x) and u_p(x) that should satisfy this condition, which is weakly enforced in the variational formulation (5.2).
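A minimal sketch of the resulting mixed approximation on one square subdomain is given below: a few plane waves of (5.15) added to a bilinear (degree 1) polynomial interpolation of (5.16). All nodal values, amplitudes and the element geometry are illustrative placeholders.

```python
# Mixed wave/polynomial field u = u_w + u_p evaluated at a point.
import numpy as np

def u_wave(x, y, k, amps, thetas):
    return sum(A * np.exp(1j * k * (np.cos(t) * x + np.sin(t) * y))
               for A, t in zip(amps, thetas))

def u_poly_q1(x, y, nodal_values, x0=0.0, y0=0.0, h=1.0):
    """Bilinear interpolation on the square [x0, x0+h] x [y0, y0+h]."""
    s, t = (x - x0) / h, (y - y0) / h
    N = np.array([(1-s)*(1-t), s*(1-t), s*t, (1-s)*t])      # standard Q1 shape functions
    return N @ nodal_values

x, y = 0.3, 0.7
u = u_wave(x, y, k=25.0, amps=[1.0, 0.5j], thetas=[0.0, np.pi/3]) \
    + u_poly_q1(x, y, np.array([0.1, 0.2, 0.0, -0.1], dtype=complex))
```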
Numerical implementation
Since the shape functions contain both polynomial and wave approximations, the terms to integrate in the matrix are composed of polynomial-polynomial terms, wave-wave terms and polynomial-wave terms. The polynomial-polynomial terms are products of two polynomials, and Gauss quadrature is capable of treating this type of integration. The wave-wave terms, products of two plane wave functions, can be integrated analytically; details have been given in Section 4.3.1. As for the terms made of products of polynomial and plane wave functions, one can still calculate the integrations analytically by integration by parts. The following illustration is typical, since each integration of a polynomial-wave term can be decomposed into the following integration unit:
\int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} x^m y^n\, e^{ik\cos\theta x + ik\sin\theta y}\, dx\, dy = \int_{x_1}^{x_2} x^m e^{ik\cos\theta x}\, dx \cdot \int_{y_1}^{y_2} y^n e^{ik\sin\theta y}\, dy = \left( \left[ \frac{x^m}{ik\cos\theta} e^{ik\cos\theta x}\right]_{x_1}^{x_2} - \int_{x_1}^{x_2} \frac{m\,x^{m-1}}{ik\cos\theta} e^{ik\cos\theta x}\, dx \right)\cdot\left( \left[ \frac{y^n}{ik\sin\theta} e^{ik\sin\theta y}\right]_{y_1}^{y_2} - \int_{y_1}^{y_2} \frac{n\,y^{n-1}}{ik\sin\theta} e^{ik\sin\theta y}\, dy \right) = \cdots = \left[ \left( \sum_{p=1}^{m} \frac{m!\,(-1)^{p+1}\, x^{m-p+1}}{(m-p+1)!\,(ik\cos\theta)^p} + \frac{m!\,(-1)^{m+2}}{(ik\cos\theta)^{m+1}} \right) e^{ik\cos\theta x} \right]_{x_1}^{x_2} \times \left[ \left( \sum_{q=1}^{n} \frac{n!\,(-1)^{q+1}\, y^{n-q+1}}{(n-q+1)!\,(ik\sin\theta)^q} + \frac{n!\,(-1)^{n+2}}{(ik\sin\theta)^{n+1}} \right) e^{ik\sin\theta y} \right]_{y_1}^{y_2} \quad (5.17)
where ik\cos\theta \neq 0 and ik\sin\theta \neq 0. This integration is the most complicated form that can appear in the integration of polynomial-wave terms: it accounts for a high order polynomial approximation and for the integration over the domain. The other cases of polynomial-wave terms can be simplified and derived from it.
This example allows one to study how the FEM/WAVE WTDG method works in the low-frequency and in the mid-frequency ranges for the same model of problem. The definition of the problem and the discretization strategy can be seen in Figure 5.1. In the FEM/WAVE WTDG formulation (5.2), α = 0.0001. For the FEM approximation in the WTDG, a regular square mesh of degree 1 is used, with 10 elements per wavelength. For the wave approximation in the WTDG, one uses only one subdomain and a regular angular distribution of the waves from 0 to 2π; the choice of the angular distribution is determined by the geometrical heuristic criterion (2.23). Since the exact solution is given, the convergence of the FEM/WAVE WTDG strategy is assessed by computing the real relative error defined as:
\varepsilon_{ex} = \frac{\|u - u_{ex}\|_{L^2(\Omega)}}{\|u_{ex}\|_{L^2(\Omega)}} \quad (5.18)
A comparison of the pure FEM approach (which uses only a polynomial description), the pure VTCR approach (which uses only a wave description) and the FEM/WAVE WTDG approach (which uses the polynomial and the wave descriptions at the same time) is made. For each wave number k, the pure FEM uses the same discretization strategy as the FEM approximation in the WTDG, and the pure VTCR uses the same discretization strategy as the wave approximation in the WTDG. The convergence curves are represented in Figure 5.2.
As one can see, the FEM/WAVE WTDG presents a better behaviour than the pure FEM or the pure VTCR. The pure FEM suffers from a lack of accuracy when the frequency becomes too high, and the pure VTCR is not so efficient in the low-frequency domain. This shows the benefit of using the WTDG method to find the solution of low- and mid-frequency problems with the same descriptions, at the same time.
The convergence of the FEM/WAVE WTDG method relies on both the FEM approximation and the wave approximation. A study is made to see how the wave approximation affects the performance of the FEM/WAVE WTDG method. With the same computational domain and the same boundary condition defined in this section, we take k = 25 m⁻¹. Seven different wave approximations have been used to draw the convergence curves of the FEM/WAVE WTDG method, as Figure 5.3 shows. For each wave approximation strategy, only one subdomain and a fixed number of rays are used, while the mesh of the FEM approximation in the FEM/WAVE WTDG is gradually refined until the result converges. As one can see, for a fixed number of waves, the results converge as the number of degrees of freedom of the FEM approximation increases. It can also be seen that, for the same number of degrees of freedom of the FEM approximation, a refinement of the angular discretization of the wave approximation in the FEM/WAVE WTDG leads to a more precise result. An interesting phenomenon is that the FEM/WAVE WTDG with 32 waves always gives a precise solution: according to the criterion (2.23), 32 waves are sufficient to make the result converge.
Non-homogeneous Helmholtz problem with two scales in the solution
The problem considered is a Helmholtz problem defined on Ω = [-0.5 m; 0.5 m] × [-0.5 m; 0.5 m], with k = 100 m⁻¹, θ_1 = 30°, θ_2 = 82°, r_d = (k² - k_e²) e^{i k_e ζ(\cosθ_2·x + \sinθ_2·y)}, k_e = 10 m⁻¹ and η = 0.0001. The boundary condition is u_d = e^{i k ζ(\cosθ_1·x + \sinθ_1·y)} + e^{i k_e ζ(\cosθ_2·x + \sinθ_2·y)}. This boundary condition enables one to know the exact solution of the problem, u_ex = u_d; therefore, the real relative error can be measured by (5.18). This example is again interesting, because it corresponds to a non-homogeneous Helmholtz problem with two scales in the solution (a slowly varying scale with k_e and a fast varying scale with k). The exact solution u_ex can be seen in Figure 5.4.
The FEM/WAVE WTDG is used to solve this problem. In the variational formulation, one has α = 0.0001. The objective is to use the wave approximation to capture the fast varying scale of the solution and the FEM approximation for the slowly varying scale. Correspondingly, a regular square mesh of degree 2 is used for the FEM approximation, with the criterion of 10 elements per wavelength. The wave approximation uses 2 waves, propagating in the two directions 30° and 210°. It should be noticed that the exact solution of the fast varying scale is taken directly as a shape function in the wave approximation; consequently, there is no need to add more shape functions to represent the fast varying scale. With such a choice, the solutions given by the wave approximation, denoted u_VTCR, and by the polynomial approximation, denoted u_FEM, are depicted in Figure 5.4. The comparison between the exact solution and the FEM/WAVE WTDG solution is shown in Figure 5.5. The real relative error is 4.48×10⁻⁵. As one can see, the FEM/WAVE WTDG gives a good approximation. This example shows the advantage of the FEM/WAVE WTDG: since the fast varying scale is in the mid-frequency range, the FEM alone would require a considerable number of degrees of freedom to solve this problem. With the FEM/WAVE WTDG, however, the different scales of the solution are handled by different approximations: the FEM approximation only needs a small number of degrees of freedom to obtain the slowly varying scale, while only two additional degrees of freedom of the wave approximation recover the fast varying scale.
The FEM/WAVE WTDG method applied with different types of approximations
The problem considered has the computational domain defined in Figure 5.6. This L-shaped domain is filled with a fluid with k = 30 m⁻¹ and η = 0.0001. The boundary condition is u_d = e^{ikζ(\cosθ·x + \sinθ·y)} + e^{ikζ(\sinθ·x + \cosθ·y)} with θ = 60°. This choice of boundary condition enables one to know the exact solution of the problem, u_ex = u_d. The performance of the approach can then be evaluated by the real relative error.
In this example, three kinds of approximations are used: either a pure FEM approximation, or a pure VTCR approximation, or a mix of the polynomial approximation and the wave approximation (see Figure 5.6). The variational formulation of the WTDG allows this possibility. In order to have a good approximation, one needs to select the discretization criterion of each approximation. For the FEM, the choice is to use 20 elements of degree 1 per wavelength; for the VTCR, the choice is τ = 14; the FEM/WAVE WTDG uses τ = 17 for the wave approximation and 6 elements of degree 1 in the subdomain for the FEM approximation. It should be noticed that these criteria are highly overestimated for the FEM, for the VTCR and for the FEM/WAVE WTDG, in order to obtain a convergent result. The reason for this overestimation lies in the fact that the convergence criterion for this mixed use of approximations is unknown: even though the criterion for each individual approximation is known, there is no previous study of this mixed situation, and when the approximations are coupled they interact with each other. In [Ladevèze et Riou, 2014], a coupling between the FEM approximation and the wave approximation has been developed by the WTDG in such a way that the two approximations are used separately in each subdomain; those results show that, compared to their individual application, this coupled use requires more degrees of freedom for both the FEM approximation and the wave approximation. Consequently, the criteria for each individual approximation can only serve as a reference for our choice, and the true criterion for this mixed use is still an open question. Here, the objective of the example is only to show the practicability of a mixed use of the FEM/WAVE approximation with the FEM and the VTCR. Again, in the variational formulation, one has α = 0.0001. The exact solution and the FEM/WAVE WTDG solution are depicted in Figure 5.7. As one can see, the solutions are very close. This is because the variational formulation of the WTDG allows the coupled use of the FEM, the VTCR and the FEM/WAVE approximation. According to the definition of the error in (5.18), the error is here 2.187×10⁻².
It can be deduced from this example that all combinations of methods, such as pure FEM, pure VTCR, or a hybrid of FEM and VTCR, can be integrated together in one complex-geometry problem. In each subdomain the concrete method can be chosen depending on the specific requirements of the engineering problem.
Conclusion
This chapter proposes a hybrid use of the FEM approximation and the wave approximation thanks to the Weak Trefftz Discontinuous Galerkin method, illustrated on the Helmholtz problem. The FEM/WAVE WTDG method allows one to use a combination of FEM and wave approximations. It is based on a variational formulation which is equivalent to the reference problem: all the conditions, such as the governing equation, the transmission continuity and the boundary conditions, are included in the formulation. No a priori constraint is needed for the definition of the shape functions; as a consequence, any shape function can be used without difficulty. This gives the FEM/WAVE WTDG method a great flexibility, as one can very easily select polynomial or wave shape functions (or a combination of them) in the working space, with no restriction. The method is successfully illustrated on several examples of different complexity, ranging from low-frequency to mid-frequency, homogeneous or not, sometimes with two scales in the solution.
Conclusion
Along with the development of computer science, numerical techniques have become a fundamental tool for solving engineering problems. Vibration problems governed by the Helmholtz equation are widespread in the aerospace and automotive industries. The finite element method is the most commonly used method in industry; however, the nature of its approximation limits its application to low-frequency problems. Beyond the low-frequency range, numerical dispersion and the pollution effect arise, and consequently large numbers of degrees of freedom are required to solve the problem. On the other hand, the existing methods for high-frequency problems, such as the Statistical Energy Analysis method, only study the global energy of the system and neglect the local response. The mid-frequency vibration problem contains features of both the low-frequency and the high-frequency ranges: the local response is still required, and the system is more sensitive to uncertainties than at low frequency. Therefore it is essential to develop a specific numerical technique for mid-frequency problems.
The Variational Theory of Complex Rays is designed to treat piecewise homogeneous mid-frequency vibro-acoustic problems. This method has two main hallmarks:
• It rewrites the reference problem into a new formulation. This formulation allows one to use the approximations independently in each subdomain; the continuity conditions between subdomains and the boundary conditions are incorporated directly into the formulation.
• It uses shape functions that satisfy the governing equation in each subdomain. These shape functions take the form of linear combinations of propagative waves, with two scales of approximation: the fast variation scale corresponds to the wave functions, while the amplitude of the waves is the slow variation scale. The VTCR computes the fast variation scale analytically; only the slow variation scale is discretized.
The VTCR was first introduced in [Ladevèze, 1996]. It has been developed for 3-D plate assemblies in [Rouch et Ladevèze, 2003], for plates with heterogeneities in [Ladevèze et al.], for shells in [Riou et al.], and for transient dynamics in [Chevreuil et al.]. Its extensions to acoustic problems can be seen in [Riou et al., Ladevèze et al., 2012, Kovalevsky et al., 2013]. In [Barbarulo et al.] the broad band calculation problem in linear acoustics has been studied. Nevertheless, all these developments are limited to the Helmholtz problem with piecewise constant wave number.
The originality of this dissertation is to solve the heterogeneous Helmholtz problem. Two numerical approaches are developed. The first approach is presented in Chapter 3; it is an extension of the VTCR in which new shape functions, namely Airy wave functions, are developed. These Airy wave functions satisfy the Helmholtz equation when the square of the wave number varies linearly. Academic studies illustrate the convergence properties of this method: the convergence of the VTCR is quickly achieved with a small number of degrees of freedom, and p-convergence is more efficient than h-convergence. The extended VTCR is then applied to solve an unbounded harbor agitation problem. This example is studied by adopting different domain discretization strategies and by modifying the direction of the incoming wave. The result is evaluated by an error estimator and it proves the practicability of the extended VTCR for engineering problems. The second approach, the Weak Trefftz Discontinuous Galerkin method, is presented in Chapter 4. One locally develops general approximated solutions of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined. These functions only satisfy the local governing equation in the average sense. Consequently, a residue exists in each subdomain and a refined domain discretization strategy is necessary to decrease it. The academic studies present the convergence properties of the WTDG. The harbor agitation problem is solved again by the WTDG and a comparison with the extended VTCR is made. Finally, a modified harbor problem, with the wave number raised to the mid-/high-frequency range, is solved. In Chapter 5, the WTDG is extended to mix the polynomial and the wave approximations in the same subdomains, at the same time. Numerical studies illustrate that such a mixed approach performs better than a pure FEM approach or a pure VTCR approach on problems whose frequency bandwidth includes both low and mid frequencies.
In parallel with the theoretical developments, a software package was created: HeterHelm (HETERogeneous HELMholtz), programmed in the MATLAB environment during the thesis. All the numerical results in this dissertation were obtained with this software.
Following this thesis, there are two main prospects for further developments. The first is to extend the extended VTCR and the WTDG to structural vibration in heterogeneous media. Since that problem is different from the acoustic one, it is not easy to carry the extended VTCR over to this extension; on the other hand, since the WTDG places no restriction on the governing equation, its extension could be achieved without difficulty. The second prospect is the extension of the WTDG to transient nonlinear problems. In [Cattabiani, 2016], the VTCR was shown to be able to solve transient problems in piecewise homogeneous media. The extension to nonlinear phenomena such as viscoplasticity and damage requires working with heterogeneous media; the work on the WTDG can then be seen as a first step toward this goal.
French resume
This thesis is concerned with the development of numerical methods to solve mid-frequency Helmholtz problems in heterogeneous media. Helmholtz problems play a major role in industry. This is the case, for example, in the automotive industry, where market constraints and compliance with anti-pollution standards have led manufacturers to produce ever lighter vehicles, which are therefore much more prone to vibrations. The acoustic comfort of passengers in an aircraft or of the occupants of a building is another example: it requires mastering the vibro-acoustic behaviour of the structure, which must be taken into account from the design stage. A final example is the naval industry, where the vibratory behaviour is integrated very early in the design of large ships. Today, with the development of computational tools, such problems can be treated by numerical methods. This is the approach proposed in this work.
In this thesis, we mainly consider the vibration problem arising from the heterogeneous Helmholtz equation. This is the equation that can be used, for example, to model wave agitation in a harbor, where the depth varies as the shore gets closer. In the work of [Modesto et al.], this problem is treated by the finite element method with the Perfectly Matched Layer technique, and the Proper Generalized Decomposition, a model reduction technique, is used to study the influence of the different parameters on the result. Here, we propose to do it with a Trefftz method. It is customary to define the frequency ranges according to the relative size of the components of a system with respect to a wavelength (see Figure 8, after [Ohayon et Soize, 1998]). When the size of a component is smaller than the wavelength of its response, one speaks of low frequency (LF), which is essentially characterized by a modal behaviour of the system, with resonance peaks that are clearly distinct from one another. Problems in this frequency range are not sensitive to uncertainty. The most widely used computational methods for LF are based on the finite element method (FEM). When the size of a component is much larger than the wavelength, its response generally involves a large number of local modes. One then speaks of the high-frequency (HF) domain. In this domain, the local aspect of the response of the system disappears: the vibration field contains so many oscillations that the local response of the system loses its meaning. The approaches dedicated to this domain therefore rely on statistical considerations applied to global energy quantities, such as Statistical Energy Analysis (SEA) [Lyon et Maidanik, 1962], FEM-SEA [De Rosa et Franco, 2008, De Rosa et Franco, 2010], Wave Intensity Analysis [Langley, 1992], Energy Flow Analysis [Belov et al., Buvailo et al.], or the ray tracing method [Krokstad et al., Chae et Ih, 2001]. The intermediate range is the mid-frequency (MF) domain. This domain is characterized by a significant modal densification and a hypersensitivity of the vibration field with respect to the boundary conditions. These characteristics make it impossible to extend the LF and HF methods to this frequency range. This is one of the reasons which led to the appearance of wave-based approaches, built on the work of Trefftz [Trefftz, 1926], which use the general solutions of the equilibrium equations as shape functions.
Parmi ces méthodes, celle qui a été retenue pour ce travail est la Théorie Variationnelle des Rayons Complexes (TVRC). Elle a été introduite pour la première fois dans [Ladevèze, 1996], et depuis l'activité de recherche sur cette approche a porté sur nombreux aspects. Tout d'abord, la TVRC a montré son efficacité dans le traitement des vibrations des assemblages complexes de structures planes [Rouch et Ladevèze, 2003] et de type coques [START_REF] Riou | Extension of the Variational Theory of Complex Rays to shells for medium-frequency vibrations[END_REF]. Ensuite des travaux ont porté sur l'utilisation de la méthode dans le cadre d'une approche fréquentielle pour la résolution de problème de dynamique transitoire incluant le domaine des MF [START_REF] Chevreuil | Transient analysis including the low-and the medium-frequency ranges of engineering structures[END_REF]. La TVRC a ensuite été étendue au traitement des vibrations acoustiques [START_REF] Riou | The multiscale VTCR approach applied to acoustics problems[END_REF], Ladevèze et al., 2012, Kovalevsky et al., 2013]. Avec le PGD, elle a été appliquée aux problèmes sur des bandes de fréquence [START_REF] Barbarulo | Proper generalized decomposition applied to linear acoustic: a new tool for broad band calculation[END_REF]. Plus récemment, des travaux ont été également effectués sur la réponse du choc [Cattabiani, 2016]. Néanmoins, la TVRC et la plupart des autres méthodes ondulatoires se limitent aux milieux homogènes par morceau. Pour les problèmes de Helmholtz hétérogènes, le Ultra Week Variational Formulation (UWVF) exploite l'exponentiel du polynôme pour approximer la solution. Le Discontinuous Enrichment Method (DEM) utilise les fonctions d'Airy pour résoudre le problème. Les travaux de cette thèse sont principalement liés à l'extension de la TVRC et la weak Trefftz discontinuous Galerkin méthode (WTDG) (voir [Ladevèze et Riou, 2014]) pour résoudre le problème de Helmholtz hétérogène. La WTDG n'utilise pas la solution exacte de l'équation d'équilibre comme fonction de forme. Par conséquent l'équation d'équilibre est pas vérifiée à priori et elle est introduite dans la formulation variationnelle pour être approchée. Cette approche est capable d'intégrer des fonctions de forme polynomiales dans sa formulation, et donc de coupler les éléments finis avec la TVRC dans les différentes sous-domaines d'un système. • p-convergence permet d'obtenir des niveaux de précision très grands avec peu de degré de libertés (ddls).
Ω Ω E Γ EE ′ r d Ω u d ∂ 1 Ω ∂ 2 Ω g d Ω E ′
• Moins de sous-domaines sont utilisés, plus vite le résultat converge. Un autre exemple plus compliqué pour illustrer la capacité de cette extension de la TVRC est celui de l'agitation du port. Les vagues viennent de loin, et agissent sur le port. Dans le modèle du port, le profondeur d'eau varie linéairement le long de l'axe y à l'intérieur du port. Étant donné la vitesse de l'eau v et la pulsation w, l'expression du nombre d'onde k est connue explicitement. Les conditions aux limites sur les bords sont de type réflection totale. Comme le problème défini est non borné, la solution doit vérifier la condition de Sommerfeld. Le domaine du problème est globalement divisé dans trois sous-domaines. Les fonctions de forme sont les fonctions Hankel modifiée en Ω 1 , les ondes planes en Ω 2 et les fonctions d'onde d'Airy en Ω 3 . Avec seulement 20 ddls en Ω 1 , 100 ddls en Ω 2 , 160 ddls en Ω 3 . Un résultat est obtenu avec une erreur relative de 6.21 • 10 -3 (voir Figure 11). Si on divise l'intérieure du port en 4 sous-domaines et avec 160 ddls en chaque de ces sous-domaines, le résultat est obtenu avec une erreur relative de 1.52 • 10 -2 (voir Figure 12).
U = {u | u |Ω E ∈ U E } U E = {u E | u E ∈ V E ⊂ H 1 (Ω E )|(1 -iη)∆u E + k2 E u E + r d = 0} (21)
où kE est un valeur approximé par l'expansion de Taylor de k(x). En utilisant l'ordre 0 de l'équation, la solution générale de l'équation d'équilibre peut être approximée par la fonction d'onde plane. Avec l'approximation d'ordre 1, on peut utiliser la fonction d'onde d'Airy. Ces deux approximations sont définies comme le zéro ordre WTDG et le premier ordre WTDG. Dans un des exemples numériques, on considère un domaine Ω en Le chapitre 5 s'intéresse au couplage entre une approximation de type onde et une approximation de type FEM dans le cadre de la formulation variationnelle de WTDG. C'est par la réalisation d'un tel couplage que les problèmes de bande passante qui contient la BF et la MF sont bien résolus. La capacité de cette méthode de pouvoir traiter des problèmes multi-échelles ayant des sous-systèmes à MF couplés à des sous-systèmes BF est aussi illustrée.
Ce travail de thèse développe des stratégies de calcul pour résoudre les problèmes de Helmholtz, en moyennes fréquences, dans les milieux hétérogènes. Il s'appuie sur l'utilisation de la TVRC, et enrichit l'espace des fonctions qu'elle utilise par des fonctions d'Airy, quand le carré de la longueur d'onde du milieu varie linéairement. Il généralise aussi la prédiction de la solution par la WTDG pour des milieux dont la longueur d'onde varie d'une quelconque autre manière. Pour cela, des approximations à l'ordre zéro et à l'ordre un sont définies, et vérifient localement les équations d'équilibre selon une certaine moyenne sur les sous domaines de calcul. Plusieurs démonstrations théoriques des performances de l'extension de la TVRC et de la WTDG sont menées, et plusieurs exemples numériques illustrent les résultats. La complexité retenue pour ces exemples montrent que les approches retenues permettent de prédire le comportement vibratoire de problèmes complexes, tel que le régime oscillatoire des vagues dans un port maritime. Ils montrent également qu'il est tout à fait envisageable de mixer les stratégies de calcul développées avec celles classiquement utilisées, telle que la méthode des éléments finis, pour construire des stratégies de calcul utilisables pour les basses et les moyennes fréquences, en même temps.
3. 1
1 Behaviors of Airy functions. . . . . . . . . . . . . . . . . . . . . . . . . 44 3.2 Example of Airy wave and plane wave. Left: Airy wave with η = 0.001, α = 300 m -3 , β = 300 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. Right: plane wave with η = 0.001, α = 0 m -3 , β = 0 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. . . . . . . . . . . . . . . . . . . . . . 46 3.3 Geometry definition for the test of numerical integration performance in Section 3.3.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 3.4 From left to right: First: definition of domain. Second: 1 subdomain discretisation. Third: 4 subdomains discretisation. Fourth: 9 subdomains discretisation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 3.5 The three convergence curves of extended VTCR calculated with the discretization strategies shown in Figure 3.4. . . . . . . . . . . . . . . . . . 54 3.6 Top view of Harbor in Section 3.5.2. θ + 0 represents the direction of incident wave. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 3.7 Side view of Harbor in Section 3.5.2. Variable h represents depth of water from sea surface to the bottom. The depth h increases when it points from harbor inside to harbor outside. . . . . . . . . . . . . . . . . . . . . . . . 56 3.8 First step for seeking analytic solution ouside the harbor . . . . . . . . . 58 3.9 Second step for seeking analytic solution ouside the harbor . . . . . . . . 58 3.10 Half plane problem with boundary Γ O . . . . . . . . . . . . . . . . . . . . 59 3.11 The first strategy in Section 3.5.2: domain inside the harbor divided into one computational subdomain. . . . . . . . . . . . . . . . . . . . . . . . 60 3.12 The second strategy in Section 3.5.2: domain inside harbor divided into four computational subdomains. . . . . . . . . . . . . . . . . . . . . . . 3.13 Up: numerical result calculated by the first strategy of Figure 3.11 with θ + 0 = 45 • . Down: numerical result calculated by the second strategy of Figure 3.12 with θ + 0 = 45 • . Results of semi-unbounded domain Ω 1 are shown in a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate. . 3.14 Up: numerical result inside the harbor calculated by the first strategy with θ + 0 = 45 • . Down: numerical result inside the harbor calculated by the second strategy with θ + 0 = 45 • . . . . . . . . . . . . . . . . . . . . . . . . 3.15 Up: numerical result calculated by the first strategy with θ + 0 = 35 • . Down: numerical result calculated by the first strategy with θ + 0 = 65 • . Results of semi-unbounded domain Ω 1 are shown in a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate. . . . . . . . . . . . . . . . . . . . 4.1 From left to right: First: definition of domain, Second: 1 subdomain discretisation, Third: 4 subdomains discretisation, Fourth: 9 subdomains discretisation, Fifth: 16 subdomains discretisation, Sixth: 25 subdomains discretisation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 The convergence curves for the example of Section 4.5.1. The five convergence curves of the Zero Order WTDG calculated with the strategies showed in Figure 4.1. . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 The convergence curves of the Zero Order WTDG in Section 4.5.2. . . . 4.4 The convergence curves of the First Order WTDG in Section 4.5.2. . . . 4.5 From left to right: First: the Zero Order WTDG with 4 subdomains and 100 waves per subdomain. 
Second: the Zero Order WTDG with 25 subdomains and 80 waves per subdomain. Third: the Zero Order WTDG with 100 subdomains and 40 waves per subdomain. . . . . . . . . . . . . 4.6 From left to right: First: the First Order WTDG with 1 subdomain and 160 waves per subdomain. Second: the First Order WTDG with 4 subdomains and 120 waves per subdomain. Third: Solution calculated by the FEM with 625 elements of quadric mesh of order 3. . . . . . . . . . . . . . . . 4.7 Left: computational strategy of the VTCR. Right: computational strategy of the Zero Order WTDG. . . . . . . . . . . . . . . . . . . . . . . . . . 4.8 The direction of incoming wave being θ + 0 = 45 • . Up left: reference numerical result calculated by the VTCR in Chapter 3. Up Right: numerical result calculated by the Zero Order WTDG with five subdomains. Down left: numerical result calculated by the Zero Order WTDG with ten subdomains. Down Right: numerical result calculated by the Zero Order WTDG with fifteen subdomains. . . . . . . . . . . . . . . . . . . . . . . 4.9 The direction of incoming wave being θ + 0 = 45 • . Up left: reference numerical result calculated by the VTCR. Up Right: numerical result calculated by the Zero Order WTDG with five subdomains. Down left: numerical result calculated by the Zero Order WTDG with ten subdomains. Down Right: numerical result calculated by the Zero Order WTDG with fifteen subdomains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.10 Global result considered in Section 4.5.3 with incoming wave direction θ + 0 = 45 • and the wave numbers increased to four times. . . . . . . . . . 4.11 Result of harbor inside considered in Section 4.5.3 with incoming wave direction θ + 0 = 45 • and the wave numbers increased to four times. . . . . 5.1 Left: definition of domain. Middle: VTCR wave directions discretisation. Right: FEM mesh refinement. . . . . . . . . . . . . . . . . . . . . . . . 5.2 The convergence curves for the example of Section 5.4.1. The FEM curve corresponds to the solution obtained with a pure FEM discretization explained in Section. The VTCR curve corresponds to the solution obtained with a pure VTCR discretization explained in Section 5.4.1. The WTDG curve corresponds to the solution obtained with an enrichment of the FEM shape functions with waves, according to the FEM/WAVE WTDG approach. 5.3 The convergence curves for the example of Section 5.4.1. For each convergence curve, a fixed number of wave directions of VTCR part is chosen in FEM/WAVE WTDG strategy. The degrees of freedom of FEM part is varied in order to attain the convergence. . . . . . . . . . . . . . . . . . . 5.4 Up left: definition of the computational domain. Up right: exact solution u ex . Down left: representation of the fast varying scale result simulated by VTCR part u V TCR . Down right: representation of the slow varying scale result simulated by FEM part u FEM . . . . . . . . . . . . . . . . . . . . . 5.5 Up: WTDG solution u W T DG . Down: exact solution u ex . . . . . . . . . . . 5.6 Left: computational domain Ω. Right: selected discretizations in the subdomains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Up: FEM/WAVE WTDG solution. Down: exact solution. . . . . . . . . . 8 Fonction de réponse en fréquence d'une structure complexe [Ohayon et Soize, 1998]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Problème de référence et discrétisation du domaine . . . . . . . . . . . . 10 Exemple d'onde d'Airy et d'onde plane. 
À gauche: Airy wave with η = 0.001, α = 300 m -3 , β = 300 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. À droite: plane wave with η = 0.001, α = 0 m -3 , β = 0 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. . . . . . . . . . . . . . . . . . . . . . 11 À gauche: la première stratégie. À droite: résultat de la première stratégie avec θ + 0 = 45 • . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 À gauche: la deuxième stratégie. À droite: résultat de la deuxième stratégie avec θ + 0 = 45 • . . . . . . . . . . . . . . . . . . . . . . . . . . . En haut: résultat du Zéro Ordre WTDG en utilisant 4, 25, 100 sousdomaines. En bas: courbes de convergence du Zéro Order WTDG . . . . 113 En haut: résultat du Premier Ordre WTDG en utilisant 1, 4 sous-domaines et résultat de FEM. En bas: courbes de convergence du Premier Order WTDG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 L'angle d'onde incidente est θ + 0 = 45 • . En haut à gauche: résultat de référence calculé par l'extension de la TVRC. En haut à droite: résultat du zéro ordre WTDG avec cinq sous-domaines. En bas à gauche: résultat du zéro ordre WTDG avec dix sous-domaines. En bas à droite: résultat du zéro ordre WTDG avec quinze sous-domaines. . . . . . . . . . . . . . 115 List of Tables 3.1 The angle θ of Airy wave functions for the numerical test . . . . . . . . . 3.2 Reference integral values . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Difference between the quadgk integral values and the reference integral values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Difference between the quadl integral values and the reference integral values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Difference between the trapz integral values and the reference integral values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6 Difference between the quad integral values and the reference integral values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
bubble functions b e for e ∈ 1,2, • • • , m are the solutions of following problem Lb e = -Lϕ e on Ω E b e = 0 on ∂Ω E (1.13)
Figure 2 . 1 :
21 Figure 2.1: Left: reference problem. Right: discretization of computational domain.
.1) with r d = 0. By replacing (2.11) back in to the Helmholtz equation, one could find two classes of waves, namely propagative wave and evanescent wave. Examples of propagative wave and evanescent wave could be seen in Figure 2.2.
Figure 2 . 2 :
22 Figure 2.2: Left: propagative wave. Right: evanescent wave.
Figure 2 . 3 :
23 Figure 2.3: The definition of numerical example in Section 2.3.
Figure 2 . 4 :Figure 2 . 5 :
2425 Figure 2.4: The evaluation of condition number along with the convergence of result in Section 2.3.
Figure 2.7: The comparison of h-convergence and p-convergence in Section 2.4.3.
Figure 3 . 1 :
31 Figure 3.1: Behaviors of Airy functions.
Figure 3 . 2 :
32 Figure 3.2: Example of Airy wave and plane wave. Left: Airy wave with η = 0.001, α = 300 m -3 , β = 300 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. Right: plane wave with η = 0.001, α = 0 m -3 , β = 0 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)].
Figure 3.3: Geometry definition for the test of numerical integration performance in Section 3.3.1.
Figure 3 . 4 :Figure 3 . 5 :
3435 Figure 3.4: From left to right: First: definition of domain. Second: 1 subdomain discretisation. Third: 4 subdomains discretisation. Fourth: 9 subdomains discretisation.
Figure 3 . 6 :
36 Figure 3.6: Top view of Harbor in Section 3.5.2. θ + 0 represents the direction of incident wave.
Figure 3 . 7 :
37 Figure 3.7: Side view of Harbor in Section 3.5.2. Variable h represents depth of water from sea surface to the bottom. The depth h increases when it points from harbor inside to harbor outside.
Figure 3 . 8 :
38 Figure 3.8: First step for seeking analytic solution ouside the harbor
Figure 3 . 10 :
310 Figure 3.10: Half plane problem with boundary Γ O .
Figure 3 .
3 Figure 3.11: The first strategy in Section 3.5.2: domain inside the harbor divided into one computational subdomain.
Figure 3 .
3 Figure 3.12: The second strategy in Section 3.5.2: domain inside harbor divided into four computational subdomains.
Figure 3 .
3 Figure 3.13: Up: numerical result calculated by the first strategy of Figure 3.11 with θ + 0 = 45 • . Down: numerical result calculated by the second strategy of Figure 3.12 with θ + 0 = 45 • . Results of semi-unbounded domain Ω 1 are shown in a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate.
Figure 3 .
3 Figure 3.14: Up: numerical result inside the harbor calculated by the first strategy with θ + 0 = 45 • . Down: numerical result inside the harbor calculated by the second strategy with θ + 0 = 45 • .
Figure 3 .
3 Figure 3.15: Up: numerical result calculated by the first strategy with θ + 0 = 35 • . Down: numerical result calculated by the first strategy with θ + 0 = 65 • . Results of semi-unbounded domain Ω 1 are shown in a truncated part with r ∈ [1000 m, 2000 m] in polar coordinate.
Contents 4 . 1
41 Rewriting of the reference problem . . . . . . . . . . . . . . . . . . . . 69 4.1 Rewriting of the reference problem 4.1.1 Variational Formulation
4. 5
5 Numerical examples 4.5.1 Academic study of the Zero Order WTDG in the heterogeneous Helmholtz problem of slowly varying wave number A simple geometry of a square [0 m; 1 m]×[0 m; 1 m] is considered for the domain Ω. In this domain, k 2 = 150x + 150y + 1000, η = 0.01. k varies 14.02% on Ω. Boundary conditions on ∂Ω are Dirichlet type such that u d = 3 ∑ j=1 ψ(x,P j ), where ψ(x,P j ) is the Airy wave solution of heterogeneous Helmholtz equation in domain Ω. θ 1 = 10 • , θ 2 = 55 • , θ 3 = 70
Figure 4 . 1 :
41 Figure 4.1: From left to right: First: definition of domain, Second: 1 subdomain discretisation, Third: 4 subdomains discretisation, Fourth: 9 subdomains discretisation, Fifth: 16 subdomains discretisation, Sixth: 25 subdomains discretisation.
Figure 4 . 2 :
42 Figure 4.2: The convergence curves for the example of Section 4.5.1. The five convergence curves of the Zero Order WTDG calculated with the strategies showed in Figure 4.1.
Figure 4 . 3 :
43 Figure 4.3: The convergence curves of the Zero Order WTDG in Section 4.5.2.
Figure 4 . 4 :
44 Figure 4.4: The convergence curves of the First Order WTDG in Section 4.5.2.
Figure 4 . 5 :
45 Figure 4.5: From left to right: First: the Zero Order WTDG with 4 subdomains and 100 waves per subdomain. Second: the Zero Order WTDG with 25 subdomains and 80 waves per subdomain. Third: the Zero Order WTDG with 100 subdomains and 40 waves per subdomain.
Figure 4 . 6 :
46 Figure 4.6: From left to right: First: the First Order WTDG with 1 subdomain and 160 waves per subdomain. Second: the First Order WTDG with 4 subdomains and 120 waves per subdomain. Third: Solution calculated by the FEM with 625 elements of quadric mesh of order 3.
Figure 4 . 7 :
47 Figure 4.7: Left: computational strategy of the VTCR. Right: computational strategy of the Zero Order WTDG.
Figure 4 . 8 :
48 Figure 4.8: The direction of incoming wave being θ + 0 = 45 • . Up left: reference numerical result calculated by the VTCR in Chapter 3. Up Right: numerical result calculated by the Zero Order WTDG with five subdomains. Down left: numerical result calculated by the Zero Order WTDG with ten subdomains. Down Right: numerical result calculated by the Zero Order WTDG with fifteen subdomains.
Figure 4 . 9 :
49 Figure 4.9: The direction of incoming wave being θ + 0 = 45 • . Up left: reference numerical result calculated by the VTCR. Up Right: numerical result calculated by the Zero Order WTDG with five subdomains. Down left: numerical result calculated by the Zero Order WTDG with ten subdomains. Down Right: numerical result calculated by the Zero Order WTDG with fifteen subdomains.
Figure 4 .
4 Figure 4.10: Global result considered in Section 4.5.3 with incoming wave direction θ + 0 = 45 • and the wave numbers increased to four times.
Figure 4 .
4 Figure 4.11: Result of harbor inside considered in Section 4.5.3 with incoming wave direction θ + 0 = 45 • and the wave numbers increased to four times.
5. 4 5 ∑
45 Numerical examples 5.4.1 Homogeneous Helmholtz problem of frequency bandwidth including LF and MF The domain being considered is the square Ω = [0 m; 0.5 m]×[0 m; 0.5 m]. The prescribed boundary conditions are u d = j=1 e ikζ(cosθ j •x+sinθ j •y) with θ 1 = 5.6 • , θ 2 = 12.8 • , θ 3 = 18 • , θ 4 = 33.5 • , θ 5 = 41.2 • and η = 0.0001. The bandwidth of the wave number k ranges from 5 m -1 to 72 m -1 . This example is interesting, because it covers different scales (from slow varying scale with k = 5 m -1 to fast varying scale with k = 72 m -1
Figure 5 . 1 :
51 Figure 5.1: Left: definition of domain. Middle: VTCR wave directions discretisation. Right: FEM mesh refinement.
Figure 5 . 2 :
52 Figure 5.2: The convergence curves for the example of Section 5.4.1. The FEM curve corresponds to the solution obtained with a pure FEM discretization explained in Section.The VTCR curve corresponds to the solution obtained with a pure VTCR discretization explained in Section 5.4.1. The WTDG curve corresponds to the solution obtained with an enrichment of the FEM shape functions with waves, according to the FEM/WAVE WTDG approach.
Figure 5 . 3 :
53 Figure 5.3: The convergence curves for the example of Section 5.4.1. For each convergence curve, a fixed number of wave directions of VTCR part is chosen in FEM/WAVE WTDG strategy. The degrees of freedom of FEM part is varied in order to attain the convergence.
Figure 5 . 4 :
54 Figure 5.4: Up left: definition of the computational domain. Up right: exact solution u ex . Down left: representation of the fast varying scale result simulated by VTCR part u V TCR . Down right: representation of the slow varying scale result simulated by FEM part u FEM .
Figure 5 . 5 :
55 Figure 5.5: Up: WTDG solution u W T DG . Down: exact solution u ex .
Figure 5 . 6 :
56 Figure 5.6: Left: computational domain Ω. Right: selected discretizations in the subdomains
Figure 5 . 7 :
57 Figure 5.7: Up: FEM/WAVE WTDG solution. Down: exact solution.
Figure 8 :
8 Figure8: Fonction de réponse en fréquence d'une structure complexe[Ohayon et Soize, 1998].
Figure 9 :
9 Figure 9: Problème de référence et discrétisation du domaine
Figure 10 :
10 Figure 10: Exemple d'onde d'Airy et d'onde plane. À gauche: Airy wave with η = 0.001, α = 300 m -3 , β = 300 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)]. À droite: plane wave with η = 0.001, α = 0 m -3 , β = 0 m -3 , γ = 600 m -2 , P = [cos(π/6),sin(π/6)].
Figure 11 :
11 Figure 11: À gauche: la première stratégie. À droite: résultat de la première stratégie avec θ + 0 = 45 • .Le chapitre 4 est consacré au développement de la WTDG basé sur les ondes, pour les les problèmes de Helmholtz hétérogènes. L'équation d'équilibre n'est pas vérifiée a priori. L'espace admissible de la WTDG est composée par les solutions u qui vérifient l'équation d'équilibre approximée:
Figure 12 :
12 Figure 12: À gauche: la deuxième stratégie. À droite: résultat de la deuxième stratégie avec θ + 0 = 45 • .
Figure 13 :Figure 14 :
1314 Figure 13: En haut: résultat du Zéro Ordre WTDG en utilisant 4, 25, 100 sous-domaines. En bas: courbes de convergence du Zéro Order WTDG
The energetic methods . . . . . . . . . . . . . . . . . . . . . . . . . . . The wave-based methods . . . . . . . . . . . . . . . . . . . . . . . . . . Wave Boundary Element Method . . . . . . . . . . . . . . . . . 1.3.4 Discontinuous Enrichment Method . . . . . . . . . . . . . . . . 1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 The Variational Theory of Complex Rays in Helmholtz problem of constant wave number 2.1 Reference problem and notations . . . . . . . . . . . . . . . . . . . . . . 2.2 Rewrite of the reference problem . . . . . . . . . . . . . . . . . . . . . 2.2.1 Variational formulation . . . . . . . . . . . . . . . . . . . . . . . 2.2.2 Properties of the variational formulation . . . . . . . . . . . . . .
Introduction 1 Bibliographie 1.1 The polynomial methods . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1.1 The standard finite element method . . . . . . . . . . . . . . . . 1.1.2 The extension of FEM . . . . . . . . . . . . . . . . . . . . . . . 1.1.3 The boundary element method . . . . . . . . . . . . . . . . . . . 1.2 1.2.1 The Statistical Energy Analysis . . . . . . . . . . . . . . . . . . 1.2.2 The Hybrid FEM-SEA . . . . . . . . . . . . . . . . . . . . . . . 1.2.3 Wave Intensity Analysis . . . . . . . . . . . . . . . . . . . . . . 1.2.4 The Energy Flow Analysis . . . . . . . . . . . . . . . . . . . . . 1.2.5 Ray Tracing Method . . . . . . . . . . . . . . . . . . . . . . . . 1.3 1.3.1 Ultra Weak Variational Formulation . . . . . . . . . . . . . . . . 1.3.2 Wave Based Method . . . . . . . . . . . . . . . . . . . . . . . . 1.3.3 2.2.3 Approximation and discretization of the problem . . . . . . . . . 2.2.4 Ray distribution and matrix recycling . . . . . . . . . . . . . . .
The Hybrid FEM-SEA . . . . . . . . . . . . . . . . . . . . . . . . 1.2.3 Wave Intensity Analysis . . . . . . . . . . . . . . . . . . . . . . . 1.2.4 The Energy Flow Analysis . . . . . . . . . . . . . . . . . . . . . . 1.2.5 Ray Tracing Method . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 The wave-based methods . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.1 Ultra Weak Variational Formulation . . . . . . . . . . . . . . . . . 1.3.2 Wave Based Method . . . . . . . . . . . . . . . . . . . . . . . . . 1.3.3 Wave Boundary Element Method . . . . . . . . . . . . . . . . . . 1.3.4 Discontinuous Enrichment Method . . . . . . . . . . . . . . . . . 1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.1 The polynomial methods
1.1.1 The standard finite element method
Contents 1.1 The polynomial methods . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.1.1 The standard finite element method . . . . . . . . . . . . . . . . . 9 1.1.2 The extension of FEM . . . . . . . . . . . . . . . . . . . . . . . . 10 1.1.3 The boundary element method . . . . . . . . . . . . . . . . . . . . 13 1.2 The energetic methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 1.2.1 The Statistical Energy Analysis . . . . . . . . . . . . . . . . . . . 15 1.2.2
Contents 2.1 Reference problem and notations . . . . . . . . . . . . . . . . . . . . . . 2.2 Rewrite of the reference problem . . . . . . . . . . . . . . . . . . . . . 2.2.1 Variational formulation . . . . . . . . . . . . . . . . . . . . . . . . 2.2.2 Properties of the variational formulation . . . . . . . . . . . . . . . 2.2.3 Approximation and discretization of the problem . . . . . . . . . . 2.2.4 Ray distribution and matrix recycling . . . . . . . . . . . . . . . . 2.3 Iterative solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Nomenclature
Ω domain
∂Ω boundary of Ω
u or v pressure or displacement
k wave number
η damping coefficient
h constant related to the impedance
r d g d source prescribed over Ω source prescribed over ∂ 2
2.4 Convergence of the VTCR . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.1 Convergence criteria . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.2 Error indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4.3 h-and p-convergence of VTCR . . . . . . . . . . . . . . . . . . . 2.4.4 Adaptive VTCR . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Contents 3.1 VTCR with Airy wave functions . . . . . . . . . . . . . . . . . . . . . . 3.1.1 Airy wave functions . . . . . . . . . . . . . . . . . . . . . . . . . 3.1.2 Variational Formulation . . . . . . . . . . . . . . . . . . . . . . . 3.2 Approximations and discretization of the problem . . . . . . . . . . . . 3.3 Numerical implementation . . . . . . . . . . . . . . . . . . . . . . . . . 3.3.1 Numerical integration . . . . . . . . . . . . . . . . . . . . . . . . 3.3.2 Iterative solver . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Convergence of the Extended VTCR . . . . . . . . . . . . . . . . . . . . 3.4.1 Convergence criteria . . . . . . . . . . . . . . . . . . . . . . . . . 3.4.2 Error indicator . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Numerical examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5.1 Academic study of the extended VTCR on medium frequency heterogeneous Helmholtz problem . . . . . . . . . . . . . . . . . . .
3.1 VTCR with Airy wave functions
3.1.1 Airy wave functions
3.5.2 Study of the extended VTCR on semi-unbounded harbor agitation problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Table 3 .
3 6: Difference between the quad integral values and the reference integral values on ∂Ω are Dirichlet type such that u d =
3
∑ j=1 ψ(x,y,P j ), where ψ(x,y,P j ) is the Airy wave solution of heterogeneous Helmholtz equation in domain Ω. θ 1
To demonstrate that condition (P) is satisfied, let us take a E ∈ U E , E ∈ E a piecewise constant. Since U E is the combination of FEM and VTCR, u could be any linear combination of polynoms and wave functions. Therefore when u = a E a piecewise constant, u could only be the polynomial function of order 0. Let us note that ∀E ∈ E a E ′ = β E (β E constant over Ω E ) for any subdomain E ′ sharing a common boundary with E. Let us introduce z E
Titre : Sur des stratégies de calcul ondulatoires pour les milieux hétérogènes Mots-clés : hétérogène, moyennes fréqences, TVRC, WTDG Résumé : Ce travail de thèse s'intéresse au développement de stratégies de calcul pour résoudre les problèmes de Helmholtz, en moyennes fréquences, dans les milieux hétérogènes. Il s'appuie sur l'utilisation de la Théorie Variationnelle des Rayons Complexes (TVRC), et enrichit l'espace des fonctions qu'elle utilise par des fonctions d'Airy, quand le carré de la longueur d'onde du milieu varie linéairement. Il s'intéresse aussi à une généralisation de la prédiction de la solution pour des milieux dont la longueur d'onde varie d'une quelconque autre manière. Pour cela, des approximations à l'ordre zéro et à l'ordre un sont définies, et vérifient localement les équations d'équilibre selon une certaine moyenne sur les sous domaines de calcul.
Plusieurs démonstrations théoriques des performances de la méthodes sont menées, et plusieurs exemples numériques illustrent les résultats. La complexité retenue pour ces exemples montrent que l'approche retenue permet de prédire le comportement vibratoire de problèmes complexes, tel que le régime oscillatoire des vagues dans un port maritime. Ils montrent également qu'il est tout à fait envisageable de mixer les stratégies de calcul développées avec celles classiquement utilisées, telle que la méthode des éléments finis, pour construire des stratégies de calcul utilisables pour les basses et les moyennes fréquences, en même temps.
Title : On wave based computational approaches for heterogeneous media
Keywords : heterogeneous, mid-frequency, VTCR, WTDG Abstract : This thesis develops numerical approaches to solve mid-frequency heterogeneous Helmholtz problem. When the square of wave number varies linearly in the media, one considers an extended Variational Theory of Complex Rays(VTCR) with shape functions namely Airy wave functions, which satisfy the governing equation. Then a general way to handle heterogeneous media by the Weak Trefftz Discontinuous Galerkin (WTDG) is proposed. There is no a priori restriction for the wave number. One locally develops general approximated solution of the governing equation, the gradient of the wave number being the small parameter. In this way, zero order and first order approximations are defined, namely Zero Order WTDG and First Order WTDG. Their shape functions only satisfy the local governing equation in average sense.
Theoretical demonstration and academic examples of approaches are addressed. Then the extended VTCR and the WTDG are both applied to solve a harbor agitation problem. Finally, a FEM/WAVE WTDG is further developed to achieve a mix use of the Finite Element method(FEM) approximation and the wave approximation in the same subdomains, at the same time for frequency bandwidth including LF and MF.
Université Paris-Saclay Espace Technologique / Immeuble Discovery Route de l'Orme aux Merisiers RD 128 / 91190 Saint-Aubin, France | 194,053 | [
"763133"
] | [
"247321"
] |
01488061 | en | [
"sdv"
] | 2024/03/04 23:41:48 | 2017 | https://inserm.hal.science/inserm-01488061/file/Infant%20PrEP%20BMJ%202017.pdf | Philippe Van De Perre
email: van_de_perre@chu-montpellier.fr
Chipepo Kankasa
Nicolas Nagot
Nicolas Meda
James K Tumwine
Anna Coutsoudis
Thorkild Tylleskär
Hoosen M Coovadia
Pre-exposure prophylaxis for infants exposed to HIV through breast feeding
published or not. The documents may come
The AIDS 2016 conference, held in July in Durban, South Africa, lauded pre-exposure prophylaxis (PrEP) as the way forward for substantially reducing the rate of new HIV infections worldwide. PrEP is defined as the continuous or intermittent use of an antiretroviral drug or drug combination to prevent HIV infection in people exposed to the virus. The underlying pathophysiological rationale is that impregnating uninfected cells and tissues with an antiviral drug could prevent infection by both cell-free and cell-associated HIV (cell-to-cell transfer). PrEP's tolerance and efficacy have been demonstrated in well designed clinical trials in men who have sex with men (MSM). 1 2 In the Ipergay trial, 86% of HIV infections were averted in highly exposed men. [START_REF] Molina | Study Group. On-demand preexposure prophylaxis in men at high risk for HIV-1 infection[END_REF] PrEP has also been evaluated in other highly exposed groups such as transgender women, injecting drug users, serodiscordant heterosexual couples, and commercial sex workers. [START_REF] Who | WHO technical update on pre exposure prophylaxis (PreP)[END_REF]
HIV exposed children: lost in translation
Uninfected pregnant or breastfeeding women in high incidence areas have also been suggested as a potential target population for PrEP, but infants exposed to HIV through breast feeding have not been mentioned. [START_REF] Price | Cost-effectiveness of pre-exposure HIV prophylaxis during pregnancy and breastfeeding in Sub-Saharan Africa[END_REF] Numerous public declarations and petitions have produced a strong advocacy for extension of the PrEP principle to all high risk populations exposed to HIV, considering access to PrEP as part of human rights. Recently, the World Health Organization recommended offering PrEP to any population in which the expected incidence of HIV infection is above 3 per 100 person-years. 3 5 So why are breastfed infants born to HIV infected women, a population that often has an overall HIV acquisition rate above 3/100 person-years, not receiving this clearly beneficial preventive health measure?
Current strategy not good enough
Since June 2013, the WHO has recommended universal lifelong antiretroviral therapy (ART)-known as "option B+"-for all pregnant and breastfeeding women infected with HIV-1, with the objective of eliminating mother-to-child transmission (defined by WHO as an overall rate of transmission lower than 5%). [START_REF] Who | Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection[END_REF] The B+ strategy also recommends that their babies receive nevirapine for six weeks to mitigate the risks of transmission during delivery. But infants who are breast fed continue to be exposed to a substantial risk of infection beyond the six week prophylaxis period.
The B+ strategy has been rolled out in most programmes to prevent mother-to-child transmission worldwide without any additional protection for breastfed infants. Although these programmes have been shown to increase the number of pregnant and breastfeeding women who receive ART, their success in prevention of infection in infants is less clear. According to UNAIDS estimates, improvement in services to prevent mother-to-child HIV transmission since 2010 has reduced the annual number of new infections among children globally by 56%. [START_REF]2016 prevention gap report[END_REF] However, the few available programmatic data on long term residual HIV transmission rates suggest that this is mainly accounted for by reduced in utero and intrapartum HIV transmission rather than in postnatal transmission through breast feeding. Also, there is considerable variation across countries and continents, with many countries, mainly in Africa
Analysis
ANALYSIS
and Asia, seeing no change in HIV incidence among children. An update of the UNAIDS 2015 report suggests that in 2015 the average mother-to-child transmission rate was 8.9% among 21 Global Plan African countries, and only five of these countries-Namibia, Uganda, Swaziland, Botswana and South Africa-have reached the target transmission rate of below 5%. 7
Reasons for continued transmission
Most of the residual transmission is attributable to exposure through breast feeding. A recent study assessed community viral load in Kenyan, Malawian, and South African households, including more than 11 000 women of child bearing age, of whom 3296 were pregnant or breast feeding. A total of 608 pregnant or breastfeeding women had HIV infection, with the proportion with plasma RNA above 1000 copies/ml varying from 27% in Malawi to 73% in Kenya. [START_REF] Maman | Most breastfeeding women with high viral load are still undiagnosed in sub Saharan Africa[END_REF] Some of the women who had detectable viral load were unaware of their infection because they had not been tested or had become infected after antenatal screening; others had not started ART or were not taking it as recommended.
In 2015, about 150 000 children were infected with HIV worldwide. The vertical transmission rate from mother-to-infant at six weeks was 5% but rose to 8.9% by the conclusion of breast feeding. [START_REF] Mofenson | State of the art in prevention of mother-to-child transmission. 2nd workshop on prevention trials in infants born to HIV-positive mothers[END_REF] In Africa, the reasons for this high residual burden of child infections are multiple. The main reason is operational, with challenges in all phases of the care cascade (test, treat, and retain in care), including consistent testing of HIV exposed infants, starting infants on treatment, and retaining infants in care.
Primary obstacles to linkage and retention include the distance and resources required to travel to a health facility, cultural or stigma related challenges, logistic hurdles that exist in antenatal care centres, and resources and efficacy of linkage to definitive HIV care. Observational studies in different African settings report less than optimal adherence, with only 50-70% viral suppression in women one year after starting ART. In Malawi, where the B+ strategy was rolled out in 2011, one fifth of women identified never started ART during the early phases of the programme. [START_REF] Tenthani | Ministry of Health in Malawi and IeDEA Southern Africa. Retention in care under universal antiretroviral therapy for HIV-infected pregnant and breastfeeding women ('Option B+') in Malawi[END_REF] In the early phases of the Swaziland programme, postnatal retention in care for HIV infected women was only 37% overall and 50% for those who started ART during pregnancy. [START_REF] Abrams | Impact of Option B+ on ART uptake and retention in Swaziland: a stepped-wedge trial[END_REF] A study in Malawi found that women who started ART to prevent transmission to their child were five times more likely to default than women who started treatment for their own health. [START_REF] Tenthani | Ministry of Health in Malawi and IeDEA Southern Africa. Retention in care under universal antiretroviral therapy for HIV-infected pregnant and breastfeeding women ('Option B+') in Malawi[END_REF] Maternal discontinuation of ART while breast feeding considerably increases risk of HIV transmission to the infant because of viral rebound, as observed after interrupting maternal zidovudine prophylaxis in the DITRAME study. [START_REF] Manigart | Diminution de la Transmission Mere-Enfant Study Group. Effect of perinatal zidovudine prophylaxis on the evolution of cell-free HIV-1 RNA in breast milk and on postnatal transmission[END_REF] Furthermore, cell-to-cell transfer of HIV is not inhibited in mothers taking ART in many cases. [START_REF] Van De Perre | HIV-1 reservoirs in breast milk and translational challenges to elimination of breast-feeding transmission of HIV-1[END_REF] The residual postnatal transmission rate from a mother with an ART suppressed viral load has been estimated at 0.2% per month of breastfeeding. This corresponds to an expected residual rate of 2.4% at 12 months. [START_REF] Rollins | Estimates of peripartum and postnatal mother-to-child transmission probabilities of HIV for use in Spectrum and other population-based models[END_REF] Since the latest WHO/Unicef guidelines for HIV and infant feeding recommend 24 months of breast feeding rather than 12, 14 the duration of infant HIV exposure will be much extended, increasing the risk of additional HIV infections.
Infant prophylaxis has been used before
Administration of a daily antiviral drug to an uninfected but exposed breastfed infant meets the definition of PrEP: the prophylaxis is administered before exposure (ideally from birth) to an uninfected infant whose exposure to HIV is intermittent (during breast feeding) and persistent. Ironically, infants born to HIV infected women were the first to participate in ARV prophylaxis trials. They have probably contributed the highest number of participants in such studies worldwide. Indeed, prophylaxis with oral zidovudine was integrated in the first prophylactic protocol (ACTG 076) reported in 1994. [START_REF] Connor | Reduction of maternal-infant transmission of human immunodeficiency virus type 1 with zidovudine treatment. Pediatric AIDS Clinical Trials Group Protocol 076 Study Group[END_REF] Thereafter, numerous trials have included infant PrEP to prevent mother-to-child transmission, in combination or even as a sole preventive regimen. [START_REF] Kilewo | Prevention of mother-to-child transmission of HIV-1 through breast-feeding by treating infants prophylactically with lamivudine in Dar es Salaam, Tanzania: the Mitra Study[END_REF][START_REF] Kumwenda | Extended antiretroviral prophylaxis to reduce breast-milk HIV-1 transmission[END_REF][START_REF] Coovadia | HPTN 046 protocol team. Efficacy and safety of an extended nevirapine regimen in infant children of breastfeeding mothers with HIV-1 infection for prevention of postnatal HIV-1 transmission (HPTN 046): a randomised, double-blind, placebo-controlled trial[END_REF][START_REF] Nagot | ANRS 12174 Trial Group. Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] The most recent of these, the ANRS 12174 trial, showed that infant prophylaxis with either lamivudine (3TC) or boosted lopinavir (LPV/r) daily throughout breastfeeding for up to 12 months among infants of HIV infected women who did not qualify for ART for their own health was well tolerated and reduced the risk of postnatal transmission at 1 year of age to 0.5% (per protocol) or 1.4% (intention to treat). [START_REF] Nagot | ANRS 12174 Trial Group. Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] Adherence to infant PrEP in the trial was particularly high (over 90%). [START_REF] Nagot | ANRS 12174 Trial Group. Extended pre-exposure prophylaxis with lopinavir-ritonavir versus lamivudine to prevent HIV-1 transmission through breastfeeding up to 50 weeks in infants in Africa (ANRS 12174): a randomised controlled trial[END_REF] Pharmacological data suggest that plasma drug levels lower than the therapeutic threshold are sufficient to protect infants. [START_REF] Foissac | ANRS 12174 Trial Group. Are prophylactic and therapeutic target concentrations different? The case of lopinavir/ritonavir or lamivudine administered to infants for the prevention of mother-to-child HIV-1 transmission during breastfeeding[END_REF] In addition, pharmacokinetic studies in infants breastfed by mothers taking ART show that their antiretroviral drug plasma levels are largely below 5% of the therapeutic level. 
[START_REF] Shapiro | Therapeutic levels of lopinavir in late pregnancy and abacavir passage into breast milk in the Mma Bana Study, Botswana[END_REF] This suggests that infant PrEP could be combined with maternal ART without a risk of overdosing or cumulative adverse effects.
In the near future, injectable long acting antiretroviral drugs such as rilpivirine or cabotegravir may become available. This would enable PrEP to be started from birth with only a few additional administrations to cover the duration of breastfeeding.
The estimated cost of daily administration lamivudine paediatric suspension in a breastfed infant is less than $15 (£12; €14) a year. Cost effectiveness studies of infant PrEP have not been done, but the low cost of the infant PrEP regimen suggests that the expected benefit would justify the expense of adding it to maternal ART. Indeed, even if only one HIV infection was averted out of 100 exposed infants, the cost per averted infection would be minimal ($1500).
When should infant PrEP be recommended?
Infant PrEP should certainly be advised when the mother's HIV infection is untreated or if she has a detectable viral load despite ART. Such situations can occur when the mother does not want or is unable to take ART or is at high risk of poor drug adherence. The determinants of maternal adherence to ART probably differ from those for adherence to infant PrEP. Unpublished data collected during the ANRS 12174 trial suggest that most pregnant or lactating mothers prefer to administer a prophylactic antiretroviral drug to their exposed infant than to adhere to their own ART. However, this targeted approach may be seen as complex and hampered by programmatic problems in some settings. A simpler alternative would be to protect all HIV exposed infants with PrEP during the breastfeeding period, on the basis that the PrEP drugs are safe and that optimal maternal adherence to ART in the perinatal period cannot be assumed. Of course, treatment of the mother should remain a priority.
Conclusion
Mother-to-child HIV transmission among breastfed infants is not unlike HIV transmission associated with discordant couples, with the mother and child having frequent contact that exposes the infant to HIV, even if the mother is provided with a For personal use only: See rights and reprints http://www.bmj.com/permissions Subscribe: http://www.bmj.com/subscribe
ANALYSIS
suppressive ART regimen. Given the evidence that infant PrEP is effective, there is a moral imperative to correct the policy inequity that exists between HIV exposed adults and children. Scaling up existing interventions and extended access to PrEP to those most in need are the most cost effective ways to stem new HIV infections. [START_REF] Smith | Maximising HIV prevention by balancing the opportunities of today with the promises of tomorrow: a modelling study[END_REF] Expanding global prevention guidelines to include infant PrEP for infants exposed to HIV by breast feeding could be a major breakthrough as a public health approach to eliminate mother-to-child transmission.
Contributors and sources: This article is based on recent publications and conference presentations on PMTCT. All authors conceptualised this article during meetings on mother and child health. PV wrote the first draft of the manuscript and coordinated the revised versions. All authors reviewed and approved the final version and are responsible for the final content of the manuscript.
Provenance and peer review: Not commissioned; externally peer reviewed.
Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.
ANALYSIS
Key messages
WHO recommends pre-exposure prophylaxis for any group with an expected incidence of HIV infection above 3/100 person-years Current strategies to prevent mother-to-child transmission of HIV cover only the first six weeks Many infections of breastfed infants occur after this period Adding infant PrEP to maternal ART is cheap and does not expose infants to unsafe doses Routine infant PrEP has the potential to be a breakthrough in elimination of mother-to-child transmission For personal use only: See rights and reprints http://www.bmj.com/permissions Subscribe: http://www.bmj.com/subscribe | 16,546 | [
"932073",
"889428",
"889426",
"932080"
] | [
"488753",
"139833",
"488755",
"488756",
"488757",
"488759",
"488760",
"12196"
] |
01488172 | en | [
"math"
] | 2024/03/04 23:41:48 | 2018 | https://inria.hal.science/hal-01488172/file/CiDS17_HAL.pdf | Patrick Ciarlet
email: patrick.ciarlet@ensta-paristech.fr
Charles F Dunkl
Stefan A Sauter
T ⊂ ∂k}
V ∈ ∂k}
E ⊂ ∂k}
A Family of Crouzeix-Raviart Finite Elements in 3D
Keywords: AMS-Classification: 33C45, 33C50, 65N12, 65N30, secondary 33C80 finite element, non-conforming, Crouzeix-Raviart, orthogonal polynomials on triangles, symmetric orthogonal polynomials
In this paper we will develop a family of non-conforming "Crouzeix-Raviart" type finite elements in three dimensions. They consist of local polynomials of maximal degree p ∈ N on simplicial finite element meshes while certain jump conditions are imposed across adjacent simplices. We will prove optimal a priori estimates for these finite elements.
The characterization of this space via jump conditions is implicit and the derivation of a local basis requires some deeper theoretical tools from orthogonal polynomials on triangles and their representation. We will derive these tools for this purpose. These results allow us to give explicit representations of the local basis functions. Finally we will analyze the linear independence of these sets of functions and discuss the question whether they span the whole non-conforming space.
Introduction
For the numerical solution of partial differential equations, Galerkin finite element methods are among the most popular discretization methods. In the last decades, non-conforming Galerkin discretizations have become very attractive where the test and trial spaces are not subspaces of the natural energy spaces and/or the variational formulation is modified on the discrete level. These methods have nice properties, e.g. in different parts of the domain different discretizations can be easily used and glued together or, for certain classes of problems (Stokes problems, highly indefinite Helmholtz and Maxwell problems, problems with "locking", etc.), the non-conforming discretization enjoys a better stability behavior compared to the conforming one. One of the first non-conforming finite element space was the Crouzeix-Raviart element ( [START_REF] Crouzeix | Conforming and nonconforming finite element methods for solving the stationary Stokes equations[END_REF], see [START_REF] Brenner | Forty years of the Crouzeix-Raviart element[END_REF] for a survey). It is piecewise affine with respect to a triangulation of the domain while interelement continuity is required only at the barycenters of the edges/facets (2D/3D).
In [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF], a family of high order non-conforming (intrinsic) finite elements have been introduced which corresponds to a family of high-order Crouzeix-Raviart elements in two dimensions. For Poisson's equation, this family includes the non-conforming Crouzeix-Raviart element [START_REF] Crouzeix | Conforming and nonconforming finite element methods for solving the stationary Stokes equations[END_REF], the Fortin-Soulie element [START_REF] Fortin | A nonconforming quadratic finite element on triangles[END_REF], the Crouzeix-Falk element [START_REF] Crouzeix | Nonconforming finite elements for Stokes problems[END_REF], and the Gauss-Legendre elements [START_REF] Baran | Gauss-Legendre elements: a stable, higher order non-conforming finite element family[END_REF], [START_REF] Stoyan | Crouzeix-Velte decompositions for higher-order finite elements[END_REF] as well as the standard conforming hp-finite elements.
In our paper we will characterize a family of high-order Crouzeix-Raviart type finite elements in three dimensions, first implicitly by imposing certain jump conditions at the interelement facets. Then we derive a local basis for these finite elements. These new finite element spaces are non-conforming but the (broken version of the) continuous bilinear form can still be used. Thus, our results also give insights on how far one can go in the non-conforming direction while keeping the original forms.
The explicit construction of a basis for these new finite element spaces require some deeper theoretical tools in the field of orthogonal polynomials on triangles and their representations which we develop here for this purpose.
As a simple model problem for the introduction of our method, we consider Poisson's equation but emphasize that this method is applicable also for much more general (systems of) elliptic equations.
There is a vast literature on various conforming and non-conforming, primal, dual, mixed formulations of elliptic differential equations and conforming as well as non-conforming discretization. Our main focus is the characterization and construction of non-conforming Crouzeix-Raviart type finite elements from theoretical principles. For this reason, we do not provide an extensive list of references on the analysis of specific families of finite elements spaces but refer to the classical monographs [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF], [START_REF] Schwab | p-and hp-finite element methods[END_REF], and [START_REF] Boffi | Mixed finite element methods and applications[END_REF] and the references therein.
The paper is organized as follows.
In Section 2 we introduce our model problem, Poisson's equation, the relevant function spaces and standard conditions on its well-posedness.
In Section 3 we briefly recall classical, conforming hp-finite element spaces and their Lagrange basis.
The new non-conforming finite element spaces are introduced in Section 4. We introduce an appropriate compatibility condition at the interfaces between elements of the mesh so that the non-conforming perturbation of the original bilinear form is consistent with the local error estimates. We will see that this compatibility condition can be inferred from the proof of the second Strang lemma applied to our setting. The weak compatibility condition allows to characterize the non-conforming family of high-order Crouzeix-Raviart type elements in an implicit way. In this section, we will also present explicit representations of non-conforming basis functions of general degree p while their derivation and analysis is the topic of the following sections.
Section 5 is devoted to the explicit construction of a basis for these new non-conforming finite elements. It requires deeper theoretical tools from orthogonal polynomials on triangles and their representation which we will derive for this purpose in this section.
It is by no means obvious whether the constructed set of functions is linearly independent and spans the non-conforming space which was defined implicitly in Section 4. These questions will be treated in Section 6.
Finally, in Section 7 we summarize the main results and give some comparison with the two-dimensional case which was developed in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF].
Model Problem
As a model problem we consider the Poisson equation in a bounded Lipschitz domain Ω ⊂ R d with boundary Γ := ∂Ω. First, we introduce some spaces and sets of functions for the coefficient functions and solution spaces.
The Euclidean scalar product in R^d is denoted for a, b ∈ R^d by a · b. For s ≥ 0, 1 ≤ p ≤ ∞, let W^{s,p}(Ω) denote the classical (real-valued) Sobolev spaces with norm ∥·∥_{W^{s,p}(Ω)}. The space W^{s,p}_0(Ω) is the closure, with respect to ∥·∥_{W^{s,p}(Ω)}, of all C^∞(Ω) functions with compact support. As usual we write L^p(Ω) short for W^{0,p}(Ω). The scalar product and norm in L²(Ω) are denoted by (u, v) := ∫_Ω u v and ∥·∥ := (·, ·)^{1/2}. For p = 2, we use H^s(Ω), H^s_0(Ω) as shorthands for W^{s,2}(Ω), W^{s,2}_0(Ω). The dual space of H^s_0(Ω) is denoted by H^{−s}(Ω). We recall that, for positive integers s, the seminorm |·|_{H^s(Ω)} in H^s(Ω) which contains only the derivatives of order s is a norm on H^s_0(Ω). We consider the Poisson problem in weak form:
Given f ∈ L²(Ω), find u ∈ H^1_0(Ω) such that

a(u, v) := (A∇u, ∇v) = (f, v)   ∀v ∈ H^1_0(Ω).   (1)
Throughout the paper we assume that the diffusion matrix A ∈ L^∞(Ω, R^{d×d}_sym) is symmetric and satisfies

0 < a_min := ess inf_{x∈Ω} inf_{v∈R^d\{0}} ((A(x)v) · v)/(v · v) ≤ ess sup_{x∈Ω} sup_{v∈R^d\{0}} ((A(x)v) · v)/(v · v) =: a_max < ∞   (2)
and that there exists a partition P := (Ω j ) J j=1 of Ω into J (possibly curved) polygons (polyhedra for d = 3) such that, for some appropriate r ∈ N, it holds
∥A∥_{P,W^{r,∞}(Ω)} := max_{1≤j≤J} ∥A|_{Ω_j}∥_{W^{r,∞}(Ω_j)} < ∞.   (3)
Assumption (2) implies the well-posedness of problem (1) via the Lax-Milgram lemma.
Conforming hp-Finite Element Galerkin Discretization
In this paper we restrict our studies to bounded, polygonal (d = 2) or polyhedral (d = 3) Lipschitz domains Ω ⊂ R^d and regular finite element meshes G (in the sense of [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF]) consisting of (closed) simplices K, where hanging nodes are not allowed. The local and global mesh width is denoted by h_K := diam K and h := max_{K∈G} h_K. The boundary of a simplex K can be split into (d − 1)-dimensional simplices (facets for d = 3 and triangle edges for d = 2) which are denoted by T. The set of all facets in G is called F; the set of facets lying on ∂Ω is denoted by F_∂Ω and defines a triangulation of the surface ∂Ω. The set of facets in Ω is denoted by F_Ω. As a convention we assume that simplices and facets are closed sets. The interior of a simplex K is denoted by K̊, and we write T̊ to denote the (relative) interior of a facet T. The set of all simplex vertices in the mesh G is denoted by V, those lying on ∂Ω by V_∂Ω, and those lying in Ω by V_Ω. Similarly, the set of simplex edges in G is denoted by E, those lying on ∂Ω by E_∂Ω, and those lying in Ω by E_Ω.
We recall the definition of conforming hp-finite element spaces (see, e.g., [START_REF] Schwab | p-and hp-finite element methods[END_REF]). For p ∈ N_0 := {0, 1, . . .}, let P^d_p denote the space of d-variate polynomials of total degree ≤ p. For a connected subset ω ⊂ Ω, we write P^d_p(ω) for polynomials of degree ≤ p defined on ω. For a connected m-dimensional manifold ω ⊂ R^d, for which there exist a subset ω̂ ⊂ R^m and an affine bijection χ_ω : ω̂ → ω, we set

P^m_p(ω) := { v ∘ χ_ω^{−1} : v ∈ P^m_p(ω̂) }.
If the dimension m is clear from the context, we write P p (ω) short for P m p (ω). The conforming hp-finite element space is given by
S^p_{G,c} := { u ∈ C^0(Ω̄) | ∀K ∈ G : u|_K ∈ P_p(K) } ∩ H^1_0(Ω).   (4)
A Lagrange basis for S p G,c can be defined as follows. Let
N̂_p := { i/p : i ∈ N^d_0 with i_1 + · · · + i_d ≤ p }   (5)
denote the equispaced unisolvent set of nodal points on the d-dimensional unit simplex
K̂ := { x ∈ R^d_{≥0} | x_1 + · · · + x_d ≤ 1 }.   (6)
For a simplex K ∈ G, let χ_K : K̂ → K denote an affine mapping. The set of nodal points is given by

N_p := { χ_K(N̂) | N̂ ∈ N̂_p, K ∈ G },   N^Ω_p := N_p ∩ Ω,   N^∂Ω_p := N_p ∩ ∂Ω.   (7)
The Lagrange basis for S p G,c can be indexed by the nodal points N ∈ N p Ω and is characterized by
B^G_{p,N} ∈ S^p_{G,c}   and   B^G_{p,N}(N') = δ_{N,N'}   ∀N' ∈ N^Ω_p,   (8)
where δ N,N ′ is the Kronecker delta.
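The nodal set (5) is easy to generate explicitly. The following minimal Python sketch (not part of the original development; the helper name nodal_points is ad hoc) lists the equispaced nodal points on the unit simplex and checks that their number equals dim P^d_p = binom(p + d, d).

from itertools import product
from math import comb

def nodal_points(p, d):
    # all multi-indices i in N_0^d with i_1 + ... + i_d <= p, scaled by 1/p, cf. (5)
    return [tuple(i_k / p for i_k in i)
            for i in product(range(p + 1), repeat=d)
            if sum(i) <= p]

for d in (2, 3):
    for p in (1, 2, 3, 4):
        assert len(nodal_points(p, d)) == comb(p + d, d)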
Definition 1 For all K ∈ G, T ∈ F Ω , E ∈ E Ω , V ∈ V Ω , the conforming spaces S p K,c , S p T,c , S p E,c , S p V,c
are given as the spans of the following basis functions
S^p_{K,c} := span{ B^G_{p,N} | N ∈ K̊ ∩ N^Ω_p },   S^p_{T,c} := span{ B^G_{p,N} | N ∈ T̊ ∩ N^Ω_p },   S^p_{E,c} := span{ B^G_{p,N} | N ∈ E̊ ∩ N^Ω_p },   S^p_{V,c} := span{ B^G_{p,V} }.
The following proposition shows that these spaces give rise to a direct sum decomposition and that these spaces are locally defined. To be more specific we first have to introduce some notation.

In this section, we will characterize a class of non-conforming finite element spaces implicitly by a weak compatibility condition across the facets. For each facet T ∈ F, we fix a unit vector n_T which is orthogonal to T. The orientation for the inner facets is arbitrary but fixed while the orientation for the boundary facets is such that n_T points toward the exterior of Ω. Our non-conforming finite element spaces will be a subspace of
C^0_G(Ω) := { u ∈ L^∞(Ω) | ∀K ∈ G : u|_{K̊} ∈ C^0(K̊) }

and we consider the skeleton ∪_{T∈F} T as a set of measure zero.
For K ∈ G, we define the restriction operator γ_K : C^0_G(Ω) → C^0(K) by

(γ_K w)(x) = w(x)   ∀x ∈ K̊

and on the boundary ∂K by continuous extension. For the inner facets T ∈ F, let K^1_T, K^2_T be the two simplices which share T as a common facet, with the convention that n_T points into K^2_T. We set ω_T := K^1_T ∪ K^2_T. The jump [·]_T : C^0_G(Ω) → C^0(T) across T is defined by

[w]_T = (γ_{K^2_T} w)|_T − (γ_{K^1_T} w)|_T.   (11)
For vector-valued functions, the jump is defined component-wise. The definition of the non-conforming finite elements involves orthogonal polynomials on triangles which we introduce first. Let T̂ denote the (closed) unit simplex in R^{d−1}, with vertices 0, (1, 0, . . . , 0)^⊺, (0, 1, 0, . . . , 0)^⊺, (0, . . . , 0, 1)^⊺. For n ∈ N_0, the set of orthogonal polynomials on T̂ is given by

P^⊥_{n,n−1}(T̂) := P_0(T̂) for n = 0,   P^⊥_{n,n−1}(T̂) := { u ∈ P_n(T̂) | ∫_{T̂} u v = 0 ∀v ∈ P_{n−1}(T̂) } for n ≥ 1.   (12)
We lift this space to a facet T ∈ F by employing an affine transform χ_T : T̂ → T:

P^⊥_{n,n−1}(T) := { v ∘ χ_T^{−1} : v ∈ P^⊥_{n,n−1}(T̂) }.

The orthogonal polynomials on triangles allow us to formulate the weak compatibility condition which is employed for the definition of non-conforming finite element spaces:
[u] T ∈ P ⊥ p,p-1 (T ) , ∀T ∈ F Ω and u| T ∈ P ⊥ p,p-1 (T ) , ∀T ∈ F ∂Ω . (13)
We have collected all ingredients for the (implicit) characterization of the non-conforming Crouzeix-Raviart finite element space.
Definition 3
The non-conforming finite element space S p G with weak compatibility conditions across facets is given by
S p G := {u ∈ L ∞ (Ω) | ∀K ∈ G γ K u ∈ P p (K) and u satisfies (13)} . (14)
The non-conforming Galerkin discretization of (1) for a given finite element space S which satisfies S^p_{G,nc} ⊂ S ⊂ S^p_G reads: Given f ∈ L²(Ω), find u_S ∈ S such that

a_G(u_S, v) := (A∇_G u_S, ∇_G v) = (f, v)   ∀v ∈ S,   (15)

where

∇_G u(x) := ∇u(x)   ∀x ∈ Ω \ ∪_{T∈F} ∂T.
Non-Conforming Finite Elements of Crouzeix-Raviart Type in 3D
The definition of the non-conforming space S^p_G in (14) is implicit via the weak compatibility condition. In this section, we will present explicit representations of non-conforming basis functions of Crouzeix-Raviart type for general polynomial order p. These functions together with the conforming basis functions span a space S^p_{G,nc} which satisfies the inclusions S^p_{G,c} ⊊ S^p_{G,nc} ⊆ S^p_G (cf. Theorem 10). The derivation of the formulae and their algebraic properties will be the topic of the following sections.
We will introduce two types of non-conforming basis functions: those whose support is one tetrahedron and those whose support consists of two adjacent tetrahedrons, that is tetrahedrons which have a common facet. For details and their derivation we refer to Section 5 while here we focus on the representation formulae.
Non-Conforming Basis Functions Supported on One Tetrahedron
The construction starts by defining symmetric orthogonal polynomials b sym p,k , 0 ≤ k ≤ d triv (p) -1 on the reference triangle T with vertices (0, 0) ⊺ , (1, 0) ⊺ , (0, 1) ⊺ , where
d_triv(p) := ⌊p/2⌋ − ⌊(p − 1)/3⌋.   (16)
We define the coefficients
M^{(p)}_{i,j} = (−1)^p 4F3(−j, j + 1, −i, i + 1; −p, p + 2, 1; 1) · (2i + 1)/(p + 1),   0 ≤ i, j ≤ p,

where pFq denotes the generalized hypergeometric function (cf. [9, Chap. 16]). The 4F3-sum is understood to terminate at i to avoid the 0/0 ambiguities in the formal 4F3-series. These coefficients allow us to define the polynomials

r_{p,2k}(x_1, x_2) := 2 Σ_{0≤j≤⌊p/2⌋} M^{(p)}_{2j,2k} b_{p,2j} + b_{p,2k},   0 ≤ k ≤ ⌊p/2⌋,

where b_{p,k}, 0 ≤ k ≤ p, form the basis for the orthogonal polynomials of degree p on T̂ defined afterwards in (35). Then, a basis for the symmetric orthogonal polynomials is given by

b^sym_{p,k} := r_{p,p−2k} if p is even,   b^sym_{p,k} := r_{p,p−1−2k} if p is odd,   k = 0, 1, . . . , d_triv(p) − 1.   (17)
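The coefficients M^{(p)}_{i,j} are rational numbers and can be tabulated directly from the terminating 4F3-sum. The following Python sketch (not part of the original development; the helper names poch and M_entry are ad hoc) computes them in exact arithmetic and checks that the resulting matrix squares to the identity, as it must since it represents the involution M of Proposition 15 in the basis {b_{p,k}}.

from fractions import Fraction

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a + 1) ... (a + k - 1), with (a)_0 = 1
    r = Fraction(1)
    for m in range(k):
        r *= a + m
    return r

def M_entry(p, i, j):
    # M^(p)_{i,j}; the 4F3-sum terminates at k = i, so no 0/0 ambiguity arises
    s = Fraction(0)
    for k in range(i + 1):
        num = poch(-j, k) * poch(j + 1, k) * poch(-i, k) * poch(i + 1, k)
        den = poch(-p, k) * poch(p + 2, k) * poch(1, k) * poch(1, k)
        s += num / den
    return (-1) ** p * s * Fraction(2 * i + 1, p + 1)

# the matrix of an involution must square to the identity
for p in range(1, 7):
    M = [[M_entry(p, i, j) for j in range(p + 1)] for i in range(p + 1)]
    for i in range(p + 1):
        for j in range(p + 1):
            assert sum(M[i][k] * M[k][j] for k in range(p + 1)) == (1 if i == j else 0)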
The non-conforming Crouzeix-Raviart basis function B^{K̂,nc}_{p,k} ∈ P_p(K̂) on the unit tetrahedron K̂ is characterized by its values at the nodal points in N̂_p (cf. (5)). For a facet T ⊂ ∂K̂, let χ_T : T̂ → T denote an affine pullback of the reference triangle. Then B^{K̂,nc}_{p,k} ∈ P_p(K̂) is uniquely defined by

B^{K̂,nc}_{p,k}(N) := b^sym_{p,k}(χ_T^{−1}(N))   ∀N ∈ N̂_p s.t. N ∈ T for some facet T ⊂ ∂K̂,
B^{K̂,nc}_{p,k}(N) := 0   ∀N ∈ N̂_p \ ∂K̂,
k = 0, 1, . . . , d_triv(p) − 1.   (18)

Remark 4 In Sec. 5.3, we will prove that the polynomials b^sym_{p,k} are totally symmetric, i.e., invariant under affine bijections χ : K → K. Thus, any of these functions can be lifted to the facets of a tetrahedron via affine pullbacks and the resulting function on the surface is continuous. As a consequence, the value B^{K̂,nc}_{p,k}(N) in definition (18) is independent of the choice of T also for nodal points N which belong to different facets.
It will turn out that the value 0 at the inner nodes could be replaced by other values without changing the arising non-conforming space. Other choices could be preferable in the context of inverse inequalities and the condition number of the stiffness matrix. However, we recommend to choose these values such that the symmetries of B K,nc p,k are preserved.
Definition 5
The non-conforming tetrahedron-supported basis functions on the reference element are given by

B^{K̂,nc}_{p,k} = Σ_{N ∈ N̂_p ∩ ∂K̂} B^{K̂,nc}_{p,k}(N) B^G_{p,N},   k = 0, 1, . . . , d_triv(p) − 1,   (19)

with values B^{K̂,nc}_{p,k}(N) as in (18). For a simplex K ∈ G the corresponding non-conforming basis functions B^{K,nc}_{p,k} are given by lifting B^{K̂,nc}_{p,k} via an affine pullback χ_K from K̂ to K ∈ G:

B^{K,nc}_{p,k}|_{K'} := B^{K̂,nc}_{p,k} ∘ χ_K^{−1} if K' = K,   B^{K,nc}_{p,k}|_{K'} := 0 if K' ≠ K,

and they span the space

S^p_{K,nc} := span{ B^{K,nc}_{p,k} : k = 0, 1, . . . , d_triv(p) − 1 }.   (20)
Example 6 The lowest order of p such that d triv (p) ≥ 1 is p = 2. In this case, we get d triv (p) = 1. In Figure 1 the function b sym p,k and corresponding basis functions B K,nc p,k are depicted for (p, k) ∈ {(2, 0) , (3, 0) , (6, 0) , (6, 1)}.
Non-Conforming Basis Functions Supported on Two Adjacent Tetrahedrons
The starting point is to define orthogonal polynomials b^refl_{p,k} on the reference triangle T̂ which are mirror symmetric¹ with respect to the angular bisector in T̂ through 0 and linearly independent from the fully symmetric functions b^sym_{p,k}. We set

b^refl_{p,k} := (1/3) ( 2 b_{p,2k}(x_1, x_2) − b_{p,2k}(x_2, 1 − x_1 − x_2) − b_{p,2k}(1 − x_1 − x_2, x_1) ),   0 ≤ k ≤ d_refl(p) − 1,   (21)

where

d_refl(p) := ⌊(p + 2)/3⌋.   (22)
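The mirror symmetry of b^refl_{p,k} can be spot-checked numerically. The following Python sketch (not part of the original development; it evaluates b_{p,k} through the representation (35) given in Section 5 via scipy, and the tolerance is ad hoc) verifies the invariance under (x_1, x_2) ↦ (x_2, x_1) at random points of the reference triangle.

import numpy as np
from scipy.special import eval_jacobi

def b(p, k, x1, x2):
    # the orthogonal polynomials b_{p,k} of (35)
    s = (x1 - x2) / (x1 + x2) if x1 + x2 > 0 else 0.0
    return (x1 + x2)**k * eval_jacobi(p - k, 0, 2*k + 1, 2*(x1 + x2) - 1) * eval_jacobi(k, 0, 0, s)

def b_refl(p, k, x1, x2):
    # definition (21)
    return (2*b(p, 2*k, x1, x2) - b(p, 2*k, x2, 1 - x1 - x2) - b(p, 2*k, 1 - x1 - x2, x1)) / 3

for p in range(1, 7):
    for k in range((p + 2) // 3):            # k = 0, ..., d_refl(p) - 1
        for x1, x2 in np.random.rand(10, 2) * 0.5:
            assert abs(b_refl(p, k, x1, x2) - b_refl(p, k, x2, x1)) < 1e-10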
Let K_1, K_2 denote two tetrahedrons which share a common facet, say T. The vertex of K_i which is opposite to T is denoted by V_i. The procedure of lifting the nodal values to the facets of ω_T := K_1 ∪ K_2 is analogous to the one for the basis functions B^{K,nc}_{p,k}. However, it is necessary to choose the pullback χ_{i,T'} : T̂ → T' of a facet T' ⊂ ∂K_i \ T̊ such that the origin is mapped to V_i. We set

B^{T,nc}_{p,k}(N) := b^refl_{p,k}(χ_{i,T'}^{−1}(N))   ∀N ∈ N_p s.t. N ∈ T' for some facet T' ⊂ ∂K_i \ T̊,
B^{T,nc}_{p,k}(N) := 0   ∀N ∈ N_p ∩ ω̊_T,
k = 0, 1, . . . , d_refl(p) − 1.   (23)

Again, the value 0 at the inner nodes of ω_T could be replaced by other values without changing the arising non-conforming space.
Definition 7
The non-conforming facet-oriented basis functions are given by
B^{T,nc}_{p,k} = Σ_{N ∈ N_p ∩ ∂ω_T} B^{T,nc}_{p,k}(N) B^G_{p,N}|_{ω_T},   ∀T ∈ F_Ω,  k = 0, 1, . . . , d_refl(p) − 1,   (24)

with values B^{T,nc}_{p,k}(N) as in (23), and they span the space

S^p_{T,nc} := span{ B^{T,nc}_{p,k} : k = 0, 1, . . . , d_refl(p) − 1 }.   (25)

The non-conforming finite element space of Crouzeix-Raviart type is given by

S^p_{G,nc} := ⊕_{E∈E_Ω} S^p_{E,c} ⊕ ⊕_{T∈F_Ω} S^p_{T,c} ⊕ ⊕_{K∈G} S^p_{K,c} ⊕ ⊕_{K∈G} S^p_{K,nc} ⊕ ⊕_{T∈F_Ω} span{ B^{T,nc}_{p,0} }.   (26)
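Assuming the sum in (26) is direct (cf. Theorem 33), the dimension of S^p_{G,nc} can be tallied from the mesh quantities entering (26). The following Python sketch (not part of the original development; the function and its arguments are hypothetical mesh counts) adds up the contributions of the individual summands.

from math import comb

def d_triv(p):
    return p // 2 - (p - 1) // 3

def dim_S_G_nc(p, n_inner_edges, n_inner_facets, n_cells):
    dim_edge = p - 1                       # S^p_{E,c}: nodes in the interior of an edge
    dim_facet = (p - 1) * (p - 2) // 2     # S^p_{T,c}: nodes in the interior of a facet
    dim_cell = comb(p - 1, 3)              # S^p_{K,c}: nodes in the interior of a tetrahedron
    return (n_inner_edges * dim_edge
            + n_inner_facets * (dim_facet + 1)    # "+1" for span{B^{T,nc}_{p,0}}
            + n_cells * (dim_cell + d_triv(p)))   # conforming cell bubbles plus S^p_{K,nc}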
Remark 8 In Sec. 5.3.3, we will show that the polynomials b refl p,k are mirror symmetric with respect to the angular bisector in T through 0. Thus, any of these functions can be lifted to the outer facets of two adjacent tetrahedrons via (oriented) affine pullbacks as employed in (23) and the resulting function on the surface is continuous. As a consequence, the value B T,nc p,k (N) in definition ( 23) is independent of the choice of T also for nodal points N which belong to different facets.
In Theorem 33, we will prove that (26), in fact, is a direct sum and a basis is given by the functions
B^G_{p,N}  ∀N ∈ N^Ω_p \ V,    B^{K,nc}_{p,k}  ∀K ∈ G, 0 ≤ k ≤ d_triv(p) − 1,    B^{T,nc}_{p,0}  ∀T ∈ F_Ω.

Also we will prove that S^p_{G,c} ⊊ S^p_{G,nc} ⊆ S^p_G. This condition implies that the convergence estimates as in Theorem 10 are valid for this space. We restricted the reflection-type non-conforming basis functions to the lowest order k = 0 in order to keep the functions linearly independent.
Error Analysis
In this subsection we present the error analysis for the Galerkin discretization (15) with the non-conforming finite element space S^p_G and subspaces thereof. The analysis is based on the second Strang lemma and has been presented for an intrinsic version of S^p_G in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF]. For any inner facet T ∈ F and any v ∈ S^p_G, condition (13) implies ∫_T [v]_T = 0; hence, the jump [v]_T is always zero-mean valued. Let h_T denote the diameter of T. The combination of a Poincaré inequality with a trace inequality then yields

∥[u]_T∥_{L²(T)} ≤ C h_T |[u]_T|_{H¹(T)} ≤ C h_T^{1/2} |u|_{H¹_pw(ω_T)},   (27)

where

|u|_{H^p_pw(ω_T)} := ( Σ_{K⊂ω_T} |u|²_{H^p(K)} )^{1/2}.

In a similar fashion we obtain for all boundary facets T ∈ F_∂Ω and all u ∈ S^p_G the estimate

∥u∥_{L²(T)} ≤ C h_T^{1/2} |u|_{H¹_pw(ω_T)}.   (28)
We say that the exact solution u ∈ H 1 0 (Ω) is piecewise smooth over the partition P = (Ω j ) J j=1 , if there exists some positive integer s such that u |Ωj ∈ H 1+s (Ω j ) for j = 1, 2, . . . , J.
We write u ∈P H 1+s (Ω) and refer for further properties and generalizations to non-integer values of s, e.g., to [START_REF] Sauter | Boundary Element Methods[END_REF]Sec. 4.1.9].
For the approximation results, the finite element meshes G are assumed to be compatible with the partition P in the following sense: for all K ∈ G, there exists a single index j such that K̊ ∩ Ω_j ≠ ∅. The proof that |·|_{H¹_pw(Ω)} is a norm on S^p_G is similar to the one in [4, Sect. 10.3]: for w ∈ H^1_0(Ω) this follows from |w|_{H¹_pw(Ω)} = ∥∇w∥ and a Friedrichs inequality; for w ∈ S^p_G the condition ∇_G w = 0 implies that w|_K is constant on all simplices K ∈ G. The combination with ∫_T w = 0 for all T ∈ F_∂Ω leads to w|_K = 0 for the outermost simplex layer via a Poincaré inequality, i.e., w|_K = 0 for all K ∈ G having at least one facet on ∂Ω. This argument can be iterated step by step over simplex layers towards the interior of Ω to finally obtain w = 0.
Theorem 10 Let Ω ⊂ R^d be a bounded, polygonal (d = 2) or polyhedral (d = 3) Lipschitz domain and let G be a regular simplicial finite element mesh for Ω. Let the diffusion matrix A ∈ L^∞(Ω, R^{d×d}_sym) satisfy assumption (2) and let f ∈ L²(Ω). As an additional assumption on the regularity, we require that the exact solution of (1) satisfies u ∈ P H^{1+s}(Ω) for some positive integer s and that ∥A∥_{P,W^{r,∞}(Ω)} < ∞ holds with r := min{p, s}. Let the continuous problem (1) be discretized by the non-conforming Galerkin method (15) with a finite dimensional space S which satisfies S^p_{G,c} ⊂ S ⊂ S^p_G on a compatible mesh G. Then, (15) has a unique solution which satisfies |u − u_S|_{H¹_pw(Ω)} ≤ C h^r ∥u∥_{P,H^{1+r}(Ω)}. The constant C only depends on a_min, a_max, ∥A∥_{P,W^{r,∞}(Ω)}, p, r, and the shape regularity of the mesh.
Proof. The second Strang lemma (cf. [START_REF] Ciarlet | The Finite Element Method for Elliptic Problems[END_REF], Theo. 4.2.2) applied to the non-conforming Galerkin discretization (15) implies the existence of a unique solution which satisfies the error estimate

|u − u_S|_{H¹_pw(Ω)} ≤ (1 + a_max/a_min) inf_{v∈S} |u − v|_{H¹_pw(Ω)} + (1/a_min) sup_{v∈S} |L_u(v)| / |v|_{H¹_pw(Ω)},

where L_u(v) := a_G(u, v) − (f, v).
The approximation properties of S are inherited from the approximation properties of S p G,c in the first infimum because of the inclusion S p G,c ⊂ S. For the second term we obtain
L_u(v) = (A∇u, ∇_G v) − (f, v).   (29)
Note that f ∈ L 2 (Ω) implies that div (A∇u) ∈ L 2 (Ω) and, in turn, that the normal jump [A∇u • n T ] T equals zero and the restriction (A∇u • n T )| T is well defined for all T ∈ F. We may apply simplexwise integration by parts to (29) to obtain
L u (v) = - T ∈FΩ T (A∇u • n T ) [v] T + T ∈F∂Ω T (A∇u • n T ) v. Let K T be one simplex in ω T . For 1 ≤ i ≤ d, let q i ∈ P p-1 d (K T )
denote the best approximation of
w i := d j=1 A i,j ∂ j u K T with respect to the H 1 (K T ) norm. Then, q i | T n T,i ∈ P p-1 d-1 (T ) for 1 ≤ i ≤ d, and the inclusion S ⊂ S p G implies |L u (v)| ≤ - T ∈FΩ T d i=1 (w i -q i ) • n T,i [v] T (30)
+ T ∈F ∂Ω T d i=1 (w i -q i ) • n T,i v ≤ T ∈F Ω [v] T L 2 (T ) d i=1 w i -q i L 2 (T ) + T ∈F ∂Ω v L 2 (T ) d i=1 w i -q i L 2 (T ) .
Standard trace estimates and approximation properties lead to

∥w_i − q_i∥_{L²(T)} ≤ C ( h_T^{−1/2} ∥w_i − q_i∥_{L²(K_T)} + h_T^{1/2} |w_i − q_i|_{H¹(K_T)} )   (31)
               ≤ C h_T^{r−1/2} |w_i|_{H^r(K_T)} ≤ C h_T^{r−1/2} ∥u∥_{H^{1+r}(K_T)},
where C depends only on p, r, ∥A∥_{W^{r,∞}(K_T)}, and the shape regularity of the mesh. The combination of (30), (31) and (27), (28) along with the shape regularity of the mesh leads to the consistency estimate

|L_u(v)| ≤ C ( Σ_{T∈F_Ω} h_T^r ∥u∥_{H^{1+r}(K_T)} |v|_{H¹_pw(ω_T)} + Σ_{T∈F_∂Ω} h_T^r ∥u∥_{H^{1+r}(K_T)} |v|_{H¹_pw(ω_T)} ) ≤ C h^r ∥u∥_{P,H^{1+r}(Ω)} |v|_{H¹_pw(Ω)},

which completes the proof.
Remark 11 If one chooses in (13) a degree p' < p for the orthogonality relations in (12), then the order of convergence behaves like h^{r'} ∥u∥_{P,H^{1+r'}(Ω)} with r' := min{p', s}, because the best approximations q_i then belong to P^{d−1}_{p'−1}(T).
Let P^{(α,β)}_n denote the n-th Jacobi polynomial; it satisfies the orthogonality relation

∫_{−1}^{1} P^{(α,β)}_n(x) q(x) (1 − x)^α (1 + x)^β dx = 0

for all polynomials q of degree less than n, and (cf. [9, Table 18.6.1])

P^{(α,β)}_n(1) = (α + 1)_n / n!,   P^{(α,β)}_n(−1) = (−1)^n (β + 1)_n / n!.   (32)
Here the shifted factorial is defined by (a)_n := a (a + 1) · · · (a + n − 1) for n > 0 and (a)_0 := 1. The Jacobi polynomial has an explicit expression in terms of a terminating Gauss hypergeometric series (cf. [9, 18.5.7])

₂F₁(−n, b; c; z) := Σ_{k=0}^{n} [(−n)_k (b)_k / ((c)_k k!)] z^k   (33)

as follows:

P^{(α,β)}_n(x) = ((α + 1)_n / n!) ₂F₁(−n, n + α + β + 1; α + 1; (1 − x)/2).   (34)
Orthogonal Polynomials on Triangles
Recall that T̂ is the (closed) unit triangle in R² with vertices A_0 = (0, 0)^⊺, A_1 = (1, 0)^⊺, and A_2 = (0, 1)^⊺. An orthogonal basis for the space P^⊥_{n,n−1}(T̂) was introduced in [START_REF] Proriol | Sur une famille de polynomes à deux variables orthogonaux dans un triangle[END_REF] and is given by the functions b_{n,k}, 0 ≤ k ≤ n,

b_{n,k}(x) := (x_1 + x_2)^k P^{(0,2k+1)}_{n−k}(2(x_1 + x_2) − 1) P^{(0,0)}_k((x_1 − x_2)/(x_1 + x_2)),   (35)

where P^{(0,0)}_k are the Legendre polynomials (see [9, 18.7.9])². From (36) (footnote) it follows that these polynomials satisfy the symmetry relation

b_{n,k}(x_1, x_2) = (−1)^k b_{n,k}(x_2, x_1)   ∀n ≥ 0, ∀(x_1, x_2).   (37)

By combining (33)–(35), an elementary calculation leads to³ b_{n,0}(0, 0) = (−1)^n (n + 1).
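The representation (35) is easy to evaluate with standard Jacobi-polynomial routines. The following Python sketch (not part of the original development; it relies on scipy, and the tolerances are ad hoc) spot-checks the symmetry relation (37) and the orthogonality (12) of b_{4,k} against all monomials of lower total degree.

import numpy as np
from scipy.special import eval_jacobi
from scipy.integrate import dblquad

def b(n, k, x1, x2):
    s = (x1 - x2) / (x1 + x2) if x1 + x2 > 0 else 0.0
    return (x1 + x2)**k * eval_jacobi(n - k, 0, 2*k + 1, 2*(x1 + x2) - 1) * eval_jacobi(k, 0, 0, s)

n = 4
for k in range(n + 1):
    # symmetry relation (37) at random points of the triangle
    for x1, x2 in np.random.rand(5, 2) * 0.5:
        assert abs(b(n, k, x1, x2) - (-1)**k * b(n, k, x2, x1)) < 1e-12
    # orthogonality against the monomials x1^a x2^c of total degree < n
    for a in range(n):
        for c in range(n - a):
            val, _ = dblquad(lambda x2, x1: b(n, k, x1, x2) * x1**a * x2**c,
                             0, 1, 0, lambda x1: 1 - x1)
            assert abs(val) < 1e-8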
Let E I := A 0 A 1 , E II := A 0 A 2 , and E III := A 1 A 2 (38)
denote the edges of T . For Z ∈ {I, II, III}, we introduce the linear restriction operator for the edge E Z by
γ Z : C 0 T → C 0 ([0, 1]) by γ I u := u (•, 0) , γ II u := u (0, •) , γ III u = u (1 -•, •) (39)
which allows to define
b I n,k := γ I b n,k , b II n,k := γ II b n,k , b III n,k := γ III b n,k , for k = 0, 1, . . . , n.
2 The Legendre polynomials with normalization P^{(0,0)}_k(1) = 1 for all k = 0, 1, . . . can be defined [9, Table 18.9.1] via the three-term recursion

P^{(0,0)}_0(x) = 1;   P^{(0,0)}_1(x) = x;   (k + 1) P^{(0,0)}_{k+1}(x) = (2k + 1) x P^{(0,0)}_k(x) − k P^{(0,0)}_{k−1}(x)  for k = 1, 2, . . . ,   (36)

from which the well-known relation P^{(0,0)}_k(−x) = (−1)^k P^{(0,0)}_k(x) for all k ∈ N_0 follows.

3 Further special values are b_{n,0}(0, 0) = P^{(0,1)}_n(−1) = (−1)^n (2)_n / n! = (−1)^n (n + 1); b_{n,k}(0, 0) = 0 for 1 ≤ k ≤ n; b_{n,k}(1, 0) = P^{(0,2k+1)}_{n−k}(1) P^{(0,0)}_k(1) = 1 for 0 ≤ k ≤ n; b_{n,k}(0, 1) = P^{(0,2k+1)}_{n−k}(1) P^{(0,0)}_k(−1) = (−1)^k for 0 ≤ k ≤ n.

Lemma 12 For each Z ∈ {I, II, III}, the set {b^Z_{n,k} : 0 ≤ k ≤ n} is a basis of P_n([0, 1]).

Proof. First note that {x^j (x − 1)^{n−j} : 0 ≤ j ≤ n} is a basis for P_n([0, 1]); this follows from expanding the right-hand side of x^m = x^m (x − (x − 1))^{n−m}. Specialize the formula [9, 18.5.8]

P^{(α,β)}_m(s) = ((α + 1)_m / m!) ((1 + s)/2)^m ₂F₁(−m, −m − β; α + 1; (s − 1)/(s + 1))

to m = n − k, α = 0, β = 2k + 1, s = 2x − 1 to obtain

b^I_{n,k}(x) = x^n ₂F₁(k − n, −n − k − 1; 1; (x − 1)/x)   (40)
           = Σ_{i=0}^{n−k} [(k − n)_i (−n − k − 1)_i / (i! i!)] x^{n−i} (x − 1)^i,   (41)

where the last equality uses (33).
The highest index i of x^{n−i}(x − 1)^i in b^I_{n,k}(x) is n − k, with coefficient (2k + 2)_{n−k}/(n − k)! ≠ 0. Thus the matrix expressing (b^I_{n,0}, . . . , b^I_{n,n}) in terms of ((x − 1)^n, x(x − 1)^{n−1}, . . . , x^n) is triangular and nonsingular; hence {b^I_{n,k} : 0 ≤ k ≤ n} is a basis of P_n([0, 1]). The symmetry relation b^II_{n,k} = (−1)^k b^I_{n,k} for 0 ≤ k ≤ n (cf. (37)) shows that {b^II_{n,k} : 0 ≤ k ≤ n} is also a basis of P_n([0, 1]). Finally, substituting x_1 = 1 − x, x_2 = x in b_{n,k} results in

b^III_{n,k}(x) = P^{(0,2k+1)}_{n−k}(1) P^{(0,0)}_k(1 − 2x),   (42)

and P^{(0,2k+1)}_{n−k}(1) = 1 (from (32)). Clearly {P^{(0,0)}_k(1 − 2x) : 0 ≤ k ≤ n} is a basis for P_n([0, 1]).

Lemma 13 Let v ∈ P_n([0, 1]).
Then, there exist unique orthogonal polynomials u Z ∈ P ⊥ n,n-1 T , Z ∈ {I, II, III} with v = γ Z u Z . Thus, the linear extension operator E Z : P n ([0, 1]) → P ⊥ n,n-1 T is well defined by
E Z v := u Z .
Proof. From Lemma 12 we conclude that γ Z is surjective. Since the polynomial spaces are finite dimensional the assertion follows from
dim P n ([0, 1]) = n + 1 = dim P ⊥ n,n-1 T .
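The extension operator E_I is directly computable: expand v in the restricted basis {b^I_{n,k}} and reuse the coefficients for {b_{n,k}}. The following Python sketch (not part of the original development; the interpolation points and the sample polynomial are arbitrary choices) does this for n = 3 and checks that the restriction to the edge x_2 = 0 reproduces v.

import numpy as np
from scipy.special import eval_jacobi

def b(n, k, x1, x2):
    s = (x1 - x2) / (x1 + x2) if x1 + x2 > 0 else 0.0
    return (x1 + x2)**k * eval_jacobi(n - k, 0, 2*k + 1, 2*(x1 + x2) - 1) * eval_jacobi(k, 0, 0, s)

n = 3
x = np.linspace(0.05, 0.95, n + 1)                # n+1 interpolation points on the edge E_I
A = np.array([[b(n, k, xi, 0.0) for k in range(n + 1)] for xi in x])
v = lambda t: t**3 - 2*t + 1                      # an arbitrary element of P_n([0,1])
alpha = np.linalg.solve(A, v(x))                  # coefficients of v in the basis {b^I_{n,k}}
E_I_v = lambda x1, x2: sum(alpha[k] * b(n, k, x1, x2) for k in range(n + 1))
for t in np.linspace(0.1, 0.9, 5):
    assert abs(E_I_v(t, 0.0) - v(t)) < 1e-9       # gamma_I (E_I v) = v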
The orthogonal polynomials can be lifted to a general triangle T .
Definition 14 Let T denote a triangle and χ_T : T̂ → T an affine map from the reference triangle T̂. Then, the space of orthogonal polynomials of degree n on T is

P^⊥_{n,n−1}(T) := { v ∘ χ_T^{−1} : v ∈ P^⊥_{n,n−1}(T̂) }.

From the transformation rule for integrals one concludes that, for any u = v ∘ χ_T^{−1} ∈ P^⊥_{n,n−1}(T) and all q ∈ P_{n−1}(T), it holds

∫_T u q = ∫_T (v ∘ χ_T^{−1}) q = 2|T| ∫_{T̂} v (q ∘ χ_T) = 0   (43)

since q ∘ χ_T ∈ P_{n−1}(T̂). Here |T| denotes the area of the triangle T.
Totally Symmetric Orthogonal Polynomials
In this section, we will decompose the space of orthogonal polynomials P^⊥_{n,n−1}(T̂) into three irreducible modules (see §5.3.1) and thus obtain a direct sum decomposition P^⊥_{n,n−1}(T̂) = P^{⊥,sym}_{n,n−1}(T̂) ⊕ P^{⊥,refl}_{n,n−1}(T̂) ⊕ P^{⊥,sign}_{n,n−1}(T̂). We will derive an explicit representation for a basis of the space of totally symmetric polynomials P^{⊥,sym}_{n,n−1}(T̂) in §5.3.2 and of the space of reflection symmetric polynomials P^{⊥,refl}_{n,n−1}(T̂) in §5.3.3. We start by introducing, for functions on triangles, the notion of total symmetry. For an arbitrary triangle T with vertices A_0, A_1, A_2, we introduce the set of permutations Π = {(i, j, k) : i, j, k ∈ {0, 1, 2} pairwise distinct}. For π = (i, j, k) ∈ Π, define the affine mapping χ_π : T → T by
χ π (x) = A i + x 1 (A j -A i ) + x 2 (A k -A i ) . ( 44
)
We say a function u, defined on T , has total symmetry if
u = u • χ π ∀π ∈ Π.
The space of totally symmetric orthogonal polynomials is
P ⊥,sym n,n-1 T := u ∈ P ⊥ n,n-1 T : u has total symmetry . ( 45
)
The construction of a basis of P ⊥,sym n,n-1 T requires some algebraic tools which we develop in the following.
The decomposition of P^⊥_{n,n−1}(T̂) or P_n([0, 1]) into irreducible S_3 modules
We use the operator γ_I (cf. (39)) to set up an action of the symmetric group S_3 on P_n([0, 1]) by transferring its action on P^⊥_{n,n−1}(T̂) on the basis {b_{n,k}}. It suffices to work with two generating reflections. On the triangle, χ_{(0,2,1)}(x_1, x_2) = (x_2, x_1) and thus b_{n,k} ∘ χ_{(0,2,1)} = (−1)^k b_{n,k} (this follows from (37)). The action of χ_{(0,2,1)} is mapped to

Σ_{k=0}^{n} α_k b^I_{n,k} ↦ Σ_{k=0}^{n} (−1)^k α_k b^I_{n,k}

and denoted by R. For the other generator we use χ_{(1,0,2)}(x_1, x_2) = (1 − x_1 − x_2, x_2). Under γ_I this corresponds to the map

Σ_{k=0}^{n} α_k b^I_{n,k}(x) ↦ Σ_{k=0}^{n} α_k b^I_{n,k}(1 − x),

which is denoted by M. We will return later to transformation formulae expressing

b_{n,k} ∘ χ_{(1,0,2)}(x_1, x_2) = (1 − x_1)^k P^{(0,2k+1)}_{n−k}(1 − 2x_1) P^{(0,0)}_k((1 − x_1 − 2x_2)/(1 − x_1))

in the {b_{n,k}}-basis. Observe that (MR)³ = I because χ_{(1,0,2)} ∘ χ_{(0,2,1)}(x_1, x_2) = (1 − x_1 − x_2, x_1) and this mapping is of period 3. It follows that each of {M, R} and {χ_{(1,0,2)}, χ_{(0,2,1)}} generates (an isomorphic copy of) S_3. It is a basic fact that the relations M² = I, R² = I and (MR)³ = I define S_3. The representation theory of S_3 informs us that there are three nonisomorphic irreducible representations:

τ_triv : χ_{(0,2,1)} ↦ 1, χ_{(1,0,2)} ↦ 1;   τ_sign : χ_{(0,2,1)} ↦ −1, χ_{(1,0,2)} ↦ −1;   τ_refl : χ_{(0,2,1)} ↦ σ_1 := ( −1 0 ; 0 1 ), χ_{(1,0,2)} ↦ σ_2 := ( 1/2 1 ; 3/4 −1/2 ).
(The subscript "refl" designates the reflection representation.) Then the eigenvectors of σ_1, σ_2 with −1 as eigenvalue are (−1, 0)^⊺ and (2, −3)^⊺, respectively; these two vectors are a basis for R². Similarly the eigenvectors of σ_1 and σ_2 with eigenvalue +1, namely (0, 1)^⊺, (2, 1)^⊺, form a basis. Form a direct sum

P^⊥_{n,n−1}(T̂) := ⊕_{j≥0} E^{(triv)}_j ⊕ ⊕_{j≥0} E^{(sign)}_j ⊕ ⊕_{j≥0} E^{(refl)}_j,

where the E^{(triv)}_j, E^{(sign)}_j, E^{(refl)}_j are irreducible S_3-modules of type τ_triv, τ_sign, τ_refl, respectively, and their numbers are denoted by d_triv(n), d_sign(n), d_refl(n). If n = 2m is even, then the eigenvalues +1 and −1 of R have multiplicities m + 1 and m, respectively, which leads to the equations d_refl(n) + d_triv(n) = m + 1 and d_refl(n) + d_sign(n) = m. If n = 2m + 1 is odd then the eigenvector multiplicities are m + 1 for both eigenvalues +1, −1. By similar arguments we obtain the equations

d_refl(n) + d_sign(n) = m + 1,   d_refl(n) + d_triv(n) = m + 1.
It remains to find one last relation for both, even and odd cases.
To finish the determination of the multiplicities d triv (n) , d sign (n) , d refl (n) it suffices to find d triv (n). This is the dimension of the space of polynomials in P ⊥ n,n-1 T which are invariant under both χ {0,2,1} and χ {1,0,2} . Since these two group elements generate S 3 this is equivalent to being invariant under each element of S 3 .This property is called totally symmetric. Under the action of γ I this corresponds to the space of polynomials in P n ([0, 1]) which are invariant under both R and M. We appeal to the classical theory of symmetric polynomials: suppose S 3 acts on polynomials in (y 1 , y 2 , y 3 ) by permutation of coordinates then the space of symmetric (invariant under the group) polynomials is exactly the space of polynomials in {e 1 , e 2 , e 3 } the elementary symmetric polynomials, namely e 1 = y 1 + y 2 + y 3 , e 2 = y 1 y 2 + y 1 y 3 + y 2 y 3 , e 3 = y 1 y 2 y 3 . To apply this we set up an affine map from T to the triangle in R 3 with vertices (2, -1, -1), (-1, 2, -1), (-1, -1, 2). The formula for the map is
y (x) = (2 -3x 1 -3x 2 , 3x 1 -1, 3x 2 -1) .
The map takes (0, 0), (1, 0), (0, 1) to the three vertices, respectively. The result is

e_1(y(x)) = 0,   e_2(y(x)) = −9 (x_1² + x_1 x_2 + x_2² − x_1 − x_2) − 3,   e_3(y(x)) = (3x_1 − 1)(3x_2 − 1)(2 − 3x_1 − 3x_2).

Hence every totally symmetric polynomial is a polynomial in e_2(y(x)) and e_3(y(x)), and d_triv(n) equals the number of pairs (a, b) ∈ N_0² with 2a + 3b = n. This number is the coefficient of t^n in the power series expansion of

1/((1 − t²)(1 − t³)) = (1 + t² + t³ + t⁴ + t⁵ + t⁷)(1 + 2t⁶ + 3t¹² + · · ·).
From d triv (n) = card ({0, 2, 4, . . .} ∩ {n, n -3, n -6, . . .}) we deduce the formula (cf. ( 16))
d_triv(n) = ⌊n/2⌋ − ⌊(n − 1)/3⌋.
As a consequence:
if n = 2m then d sign (n) = d triv (n) -1 and d refl (n) = m + 1 -d triv (n); if n = 2m + 1 then d sign (n) = d triv (n) and d refl (n) = m + 1 -d triv (n).
From this the following can be derived:
d_sign(n) = ⌊(n − 1)/2⌋ − ⌊(n − 1)/3⌋   and   d_refl(n) = ⌊(n + 2)/3⌋.
Here is a table of values in terms of n mod 6:
n        d_triv(n)   d_sign(n)   d_refl(n)
6m       m + 1       m           2m
6m + 1   m           m           2m + 1
6m + 2   m + 1       m           2m + 1
6m + 3   m + 1       m + 1       2m + 1
6m + 4   m + 1       m           2m + 2
6m + 5   m + 1       m + 1       2m + 2
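The closed-form expressions and the table above are easy to cross-check mechanically. The following Python sketch (not part of the original development) verifies them for a range of n, together with the count d_triv(n) + d_sign(n) + 2 d_refl(n) = n + 1, which reflects that the reflection representation is two-dimensional.

def d_triv(n): return n // 2 - (n - 1) // 3
def d_sign(n): return (n - 1) // 2 - (n - 1) // 3
def d_refl(n): return (n + 2) // 3

# offsets (d_triv, d_sign, d_refl) relative to (m, m, 2m) for each residue n mod 6
table = {0: (1, 0, 0), 1: (0, 0, 1), 2: (1, 0, 1), 3: (1, 1, 1), 4: (1, 0, 2), 5: (1, 1, 2)}
for n in range(1, 200):
    m, r = divmod(n, 6)
    a, bb, c = table[r]
    assert (d_triv(n), d_sign(n), d_refl(n)) == (m + a, m + bb, 2 * m + c)
    assert d_triv(n) + d_sign(n) + 2 * d_refl(n) == n + 1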
Construction of totally symmetric polynomials
Let M and R denote the linear maps Mp(x_1, x_2) := p(1 − x_1 − x_2, x_2) and Rp(x_1, x_2) := p(x_2, x_1), respectively. Both are automorphisms of P^⊥_{n,n−1}(T̂). Note Mp = p ∘ χ_{(1,0,2)} and Rp = p ∘ χ_{(0,2,1)} (cf. Section 5.3.1).
Proposition 15 Suppose 0 ≤ k ≤ n. Then

Rb_{n,k} = (−1)^k b_{n,k};   (46)

Mb_{n,k} = (−1)^n Σ_{j=0}^{n} 4F3(−j, j + 1, −k, k + 1; −n, n + 2, 1; 1) · (2j + 1)/(n + 1) b_{n,j}.   (47)
Proof. The 4 F 3 -sum is understood to terminate at k to avoid the 0/0 ambiguities in the formal 4 F 3 -series.
The first formula was shown in Section 5.3.1. The second formula is a specialization of transformations in [10, Theorem 1.7(iii)]: this paper used the shifted Jacobi polynomial R^{(α,β)}_m(s) = (m!/(α + 1)_m) P^{(α,β)}_m(1 − 2s). Setting α = β = γ = 0 in the formulas in [10, Theorem 1.7(iii)] results in

b_{n,k} = (−1)^k θ_{n,k} / (k! (n − k)!)   and   Mb_{n,k} = φ_{n,k} / (k! (n − k)!),

where θ_{n,k}, φ_{n,k} are the polynomials introduced in [10, p. 690]. More precisely, the arguments v_1, v_2, v_3 in θ_{n,k} and φ_{n,k} are specialized to v_1 = x_1, v_2 = x_2 and v_3 = 1 − x_1 − x_2.
Proposition 16 The range of I + RM + MR is exactly the subspace { p ∈ P^⊥_{n,n−1}(T̂) : RMp = p }.

Proof. By direct computation (MR)³ = I (cf. Section 5.3.1). This implies (RM)² = MR. If p satisfies RMp = p then Mp = Rp and p = MRp. Now suppose RMp = p; then (1/3)(I + RM + MR) p = p; hence p is in the range of I + RM + MR. Conversely suppose p = (I + RM + MR) p' for some polynomial p'; then

RM (I + RM + MR) p' = (RM + (RM)² + I) p' = p.

Let M^{(n)}_{i,j}, R^{(n)}_{i,j} denote the matrix entries of M, R with respect to the basis {b_{n,k} : 0 ≤ k ≤ n}, respectively (that is, Mb_{n,k} = Σ_{j=0}^{n} b_{n,j} M^{(n)}_{j,k}). Let S^{(n)}_{i,j} denote the matrix entries of MR + RM + I. Then

R^{(n)}_{i,j} = (−1)^i δ_{i,j};   M^{(n)}_{i,j} = (−1)^n 4F3(−i, i + 1, −j, j + 1; −n, n + 2, 1; 1) · (2i + 1)/(n + 1);   S^{(n)}_{i,j} = ((−1)^j + (−1)^i) M^{(n)}_{i,j} + δ_{i,j}.
Thus S^{(n)}_{i,j} = 2M^{(n)}_{i,j} + δ_{i,j} if both i, j are even, S^{(n)}_{i,j} = −2M^{(n)}_{i,j} + δ_{i,j} if both i, j are odd, and S^{(n)}_{i,j} = 0 if i − j ≡ 1 mod 2.

Theorem 18 The polynomials b^sym_{n,k}, k = 0, 1, . . . , d_triv(n) − 1, defined in (17) form a basis of the space of totally symmetric orthogonal polynomials.

Proof. We use the homogeneous form of the b_{n,m} as in [START_REF] Dunkl | Orthogonal polynomials with symmetry of order three[END_REF], that is, set
b'_{n,2m}(v) = (v_1 + v_2 + v_3)^n b_{n,2m}( v_1/(v_1 + v_2 + v_3), v_2/(v_1 + v_2 + v_3) )
           = (v_1 + v_2 + v_3)^{n−2m} P^{(0,4m+1)}_{n−2m}( (v_1 + v_2 − v_3)/(v_1 + v_2 + v_3) ) (v_1 + v_2)^{2m} P^{(0,0)}_{2m}( (v_1 − v_2)/(v_1 + v_2) ).

Formally b'_{n,j}(v) = (−1)^j (j! (n − j)!)^{−1} θ_{n,j}(v) with θ_{n,j} as in [10, p. 690]. The expansion of such a polynomial is a sum of monomials v_1^{n_1} v_2^{n_2} v_3^{n_3} with Σ_{i=1}^{3} n_i = n. Symmetrizing the monomial results in the sum of v_1^{m_1} v_2^{m_2} v_3^{m_3} where (m_1, m_2, m_3) ranges over all permutations of (n_1, n_2, n_3). The argument is based on the occurrence of certain indices in b_{n,m}. For a more straightforward approach to the coefficients we use the following expansions (with ℓ = n − 2k, β = 2k + 1):

(v_1 + v_2 + v_3)^ℓ P^{(0,β)}_ℓ( (v_1 + v_2 − v_3)/(v_1 + v_2 + v_3) ) = (−1)^ℓ (v_1 + v_2 + v_3)^ℓ P^{(β,0)}_ℓ( (−v_1 − v_2 + v_3)/(v_1 + v_2 + v_3) )   (48)
   = (−1)^ℓ ((β + 1)_ℓ / ℓ!) Σ_{i=0}^{ℓ} [(−ℓ)_i (ℓ + β + 1)_i / (i! (β + 1)_i)] (v_1 + v_2)^i (v_1 + v_2 + v_3)^{ℓ−i};

and

(v_1 + v_2)^{2k} P^{(0,0)}_{2k}( (v_1 − v_2)/(v_1 + v_2) ) = (1/(2k)!) Σ_{j=0}^{2k} [(−2k)_j (−2k)_j (−2k)_{2k−j} / j!] v_2^j v_1^{2k−j}.
First let n = 2m. The highest power of v 3 that can occur in b ′ 2m,2m-2k is 2k, with corresponding coefficient
(4m-4k+1) 2k (2k)! 2m-2k j=0 c j v j 2 v 2m-j 1
for certain coefficients {c j }. Recall that d triv (n) is the number of solutions (i, j) of the equation 3j + 2i = 2m (with i, j = 0, 1, 2, . . .). The solutions can be listed as (m, 0) , (m -3, 2) , (m -6, 4) . . . (m -3ℓ, 2ℓ) where ℓ = d triv (n) -1. By hypothesis (m -3k, 2k) occurs in the list and thus m -3k ≥ 0 and mk ≥ 2k. There is only one possible permutation of v m-k
1 v m-k 2 v 2k
3 that occurs in b ′ 2m,2m-2k and its coefficient is
(2k-2m) 3 m-k (2m-2k)! = 0.
Hence there is a triangular pattern for the occurrence of
v m 1 v m 2 , v m-1 1 v m-1 2 v 2 3 , v m-2 1 v m-2 2 v 4 3 , . . .in the symmetrizations of b ′ 2m,2m , b ′ 2m,2m-2 .
. . with nonzero numbers on the diagonal and this proves the basis property when n = 2m. Now let n = 2m + 1. The highest power of v 3 that can occur in b ′ 2m+1,2m-2k is 2k + 1, with coefficient
(4m-4k+1) 2k+1 (2k+1)! 2m-2k j=0 c j v j 2 v 2m-j 1
for certain coefficients {c j }. The solutions of 3j + 2i = 2m + 1 can be listed as (m -1, 1) , (m -4, 3) , (m -7, 5) . . . (m -1 -3ℓ, 2ℓ + 1) where ℓ = d triv (n) -1. By hypothesis (m -1 -3k, 2k + 1) occurs in this list, thus mk ≥ 2k + 1. There is only one possible permutation of
v m-k 1 v m-k 2 v 2k+1 3 that occurs in b ′
2m+1,2m-2k and its coefficient is
(2k-2m) 3 m-k (2m-2k)! = 0.
As above, there is a triangular pattern for the occurrence of
v m 1 v m 2 v 3 , v m-1 1 v m-1 2 v 3 3 , v m-2 1 v m-2 2 v 5 3 , . . . in the symmetrizations of b ′ 2m+1,2m , b ′
2m+1,2m-2 , . . . with nonzero numbers on the diagonal and this proves the basis property when n = 2m + 1.
The totally symmetric orthogonal polynomials can be lifted to a general triangle T .
Definition 19 Let T denote a triangle. The space of totally symmetric, orthogonal polynomials of degree n is
P ⊥,sym n,n-1 (T ) := u ∈ P ⊥ n,n-1 (T ) : u has total symmetry (49) = span b T,sym n,m : 0 ≤ m ≤ d triv (n) -1 , (50)
where the lifted symmetric basis functions are given by b T,sym n,m
:= b sym n,m • χ -1
T for b sym n,m as in Theorem 18 and an affine pullback χ T : T → T .
A Basis for the
τ refl component of P ⊥ n,n-1 (T )
As explained in Section 5.3.1 the space P ⊥ n,n-1 T can be decomposed into the τ triv -, the τ sign -and the τ refl -component. A basis for the τ triv component are the fully symmetric basis functions (cf. Section 5.3.2).
Next, we will construct a basis for all of P ⊥ n,n-1 T by extending the totally symmetric one. It is straightforward to adjoin the d sign (n) basis, using the same technique as for the fully symmetric ones: the monomials which appear in p with Rp = -p = M p must be permutations of
v n1 1 v n2 2 v n3 3 with n 1 > n 2 > n 3 . As in Theorem 18 for n = 2m argue on monomials v m-k 1 v m-1-k 2 v 2k+1 3 and the polynomials b ′ 2m,2m-2k-1 with 0 ≤ k ≤ d sign (n) -1 = d triv (n) -2, and for n = 2m + 1 use the monomials v m+1-k 1 v m-k 2 v 2k 3 and b 2m+1,2m-2k with 0 ≤ k ≤ d triv (n) -1 = d sign (n) -1.
As we will see when constructing a basis for the non-conforming finite element space, the τ sign component of P ⊥ n,n-1 T is not relevant, in contrast to the τ refl component. In this section, we will construct a basis for the τ refl polynomials in P ⊥ n,n-1 T . Each such polynomial is an eigenvector of RM + MR with eigenvalue -1. We will show that the polynomials
b^refl_{n,k} = (1/3) (2I − RM − MR) b_{n,2k},   0 ≤ k ≤ ⌊(n − 1)/3⌋,   (51)

are linearly independent (and the same as introduced in (21)) and, subsequently, that the set

{ RMb^refl_{n,k}, MRb^refl_{n,k} : 0 ≤ k ≤ ⌊(n − 1)/3⌋ }   (52)

is a basis for the τ_refl subspace of P^⊥_{n,n−1}(T̂). (The upper limit of k in (52) is d_refl(n) − 1, cf. (22).) Note that

RMb^refl_{n,k} = (1/3) (2RM − MR − I) b_{n,2k},   MRb^refl_{n,k} = (1/3) (2MR − I − RM) b_{n,2k},   (53)

because (RM)² = MR. Thus the calculation of these polynomials follows directly from the formulae for [M_{ij}] and [R_{ij}].
The method of proof relies on complex coordinates for the triangle.
Lemma 20 For k = 0, 1, 2, . . .
P^{(0,0)}_{2k}(s) = (−1)^k ((k + 1/2)_k / k!) Σ_{j=0}^{k} [((−k)_j)² / (j! (1/2 − 2k)_j)] (1 − s²)^{k−j},

(v_1 + v_2)^{2k} P^{(0,0)}_{2k}((v_1 − v_2)/(v_1 + v_2)) = (−1)^k ((k + 1/2)_k / k!) Σ_{j=0}^{k} [((−k)_j)² / (j! (1/2 − 2k)_j)] 4^{k−j} (v_1 v_2)^{k−j} (v_1 + v_2)^{2j}.
Proof. Start with the formula (specialized from a formula for Gegenbauer polynomials [9, 18.5.10])
P^{(0,0)}_{2k}(s) = (2s)^{2k} ((1/2)_{2k} / (2k)!) ₂F₁(−k, 1/2 − k; 1/2 − 2k; 1/s²).

Apply the transformation (cf. [9, 15.8.1])

₂F₁(−k, b; c; t) = (1 − t)^k ₂F₁(−k, c − b; c; t/(t − 1))

with t = 1/s²; then t/(t − 1) = 1/(1 − s²) and s^{2k}(1 − 1/s²)^k = (−1)^k (1 − s²)^k. Also 2^{2k} (1/2)_{2k}/(2k)! = (1/2)_{2k}/(k! (1/2)_k) = (k + 1/2)_k/k!. This proves the first formula. Set s = (v_1 − v_2)/(v_1 + v_2); then 1 − s² = 4v_1v_2/(v_1 + v_2)², which yields the second one. Introduce complex homogeneous coordinates:

z = ωv_1 + ω²v_2 + v_3,   z̄ = ω²v_1 + ωv_2 + v_3,   t = v_1 + v_2 + v_3.

Recall ω = e^{2πi/3} = −1/2 + (i/2)√3 and ω² = ω̄. The inverse relations are

v_1 = (1/3)(−(ω + 1) z + ω z̄ + t),   v_2 = (1/3)(ω z − (ω + 1) z̄ + t),   v_3 = (1/3)(z + z̄ + t).
Suppose f (z, z, t) is a polynomial in z and z then Rf (z, z, t) = f (z, z, t) and M f (z, z, t) = f ωz, ω 2 z, t . Thus RM f (z, z, t) = f ω 2 z, ωz, t and M Rf (z, z, t) = f ωz, ω 2 z, t . The idea is to write b n,2k in terms of z, z, t and apply the projection Π := 1 3 (2I -M R -RM ). To determine linear independence it suffices to consider the terms of highest degree in z, z thus we set t = v 1 + v 2 + v 3 = 0 in the formula for b n,2k (previously denoted b ′ n,2k using the homogeneous coordinates, see proof of Theorem 18). From formula (48) and Lemma 20
b ′ n,2k (v 1 , v 2 , 0) = (n -2k + 2) n-2k (v 1 + v 2 ) n-2k (-1) k k + 1 2 k k! × k j=0 (-k) 2 j j! 1 2 -2k j 4 k-j (v 1 v 2 ) k-j (v 1 + v 2 ) 2j .
The coefficient of (v
1 v 2 ) k (v 1 + v 2 ) n-2k in b ′ n,2k (v 1 , v 2 , 0
) is nonzero, and this is the term with highest power
of v 1 v 2 . Thus b ′ n,2k (v 1 , v 2 , 0) : 0 ≤ k ≤ n-2 3 is a basis for span (v 1 v 2 ) k (v 1 + v 2 ) n-2k : 0 ≤ k ≤ n-2 3
. The next step is to show that the projection Π has trivial kernel. In the complex coordinates
v 1 + v 2 = -1 3 (z + z -t) = -1 3 (z + z) and v 1 v 2 = 1 9 z 2 -zz + z 2 (discarding terms of lower order in z, z, that is, set t = 0). Proposition 21 If Π ⌊(n-1)/3⌋ k=0 c k (z + z) n-2k z 2 -zz + z 2 k = 0 then c k = 0 for all k.
Proof. For any polynomial f (z, z) we have Πf (z, z) = 1 3 2f (z, z)f ω 2 z, ωzf ωz, ω 2 z . In particular
Π (z + z) n-2k z 2 -zz + z 2 k = Π (z + z) n-3k z 3 + z 3 k = 1 3 2 (z + z) n-3k -ω 2 z + ωz n-3k -ωz + ω 2 z n-3k z 3 + z 3 k .
By hypothesis n -3k ≥ 1. Evaluate the expression at z = e πi /6 + ε where ε is real and near 0. Note
e πi /6 = 1 2 √ 3 + i . Then z + z = √ 3 + 2ε, ω 2 z + ωz = -ε, ωz + ω 2 z = - √ 3 -ε, z 3 + z 3 = 3ε + 3 √ 3ε 2 + 2ε 3 , and
1 3 2 (z + z) n-3k -ω 2 z + ωz n-3k -ωz + ω 2 z n-3k z 3 + z 3 k = 1 3 2 -(-1) n-3k × 3 (n-3k)/2 -(-ε) n-3k + Cε + O ε 2 ε k 3 + 3 √ 3ε + 2ε 2 k ,
where C = 3 (n--3k-1)/2 (n -3k) 4 -2 (-1) n-3k (binomial theorem). The dominant term in the right-hand
side is 2 -(-1) n-3k 3 (n-k)/2-1 ε k . Now suppose Π ⌊(n-1)/3⌋ k=0 c k (z + z) n-2k z 2 -zz + z 2 k = 0.
Evaluate the polynomial at z = e πi /6 + ε. Let ε → 0 implying c 0 = 0. Indeed write the expression as
⌊(n-1)/3⌋ k=0 c k 2 -(-1) n-3k 3 (n-k)/2-1 ε k (1 + O (ε)) = 0.
Since 2 -(-1) n-3k ≥ 1 this shows c k = 0 for all k.
We have shown:
Proposition 22 Suppose Π ⌊(n-1)/3⌋
k=0 c k b n,2k = 0 then c k = 0 for all k; the cardinality of the set (52) is
d refl (n).
Π (z + z) n-3k z 3 + z 3 k = n-3k j=0 n-2j≡1,2 mod 3 n -3k j z n-3k-j z j z 3 + z 3 k . Then RM w k (z, z) = n-3k j=0,n-2j≡1,2 mod 3 n -3k j ω 2j-n z n-3k-j z j z 3 + z 3 k , MRw k (z, z) = n-3k j=0,n-2j≡1,2 mod 3 n -3k j ω n-2j z n-3k-j z j z 3 + z 3 k .
Firstly we show that {RM w k , M Rw k } is linearly independent for 0 ≤ k ≤ n-1 3 . For each value of n mod 3 we select the highest degree terms from RM w k and MRw k : (i) n = 3m + 1, ω 2 z 3m+1 + ωz 3m+1 and ωz 3m+1 + ω 2 z 3m+1 , (ii) n = 3m+2, ωz 3m+2 +ω 2 z 3m+2 and ω 2 z 3m+2 +ωz 3m+2 , (iii) n = 3m, (n -3k) ω 2 z 3m z + ωzz 3m and (n -3k) ωz 3m z + ω 2 zz 3m (by hypothesis n-3k ≥ 1). In each case the two terms are linearly independent (the determinant of the coefficients is ± ωω 2 = ∓i √ 3). Secondly the same argument as in the previous theorem shows that
⌊(n-1)/3⌋ k=0 {c k RMw k + d k M Rw k } = 0 implies c k RM w k + d k M Rw k = 0 for all k.
By the first part it follows that c k = 0 = d k . This completes the proof.
Remark 24 The basis b n,k for P ⊥ n,n-1 T in (35) is mirror symmetric with respect to the angular bisector in T through the origin for even k and is mirror skew-symmetric for odd k. This fact makes the point 0 in T special compared to the other vertices. As a consequence the functions defined in Theorem 23.a reflects the special role of 0. Part b shows that it is possible to define a basis with functions which are either symmetric with respect to the angle bisector in T through (1, 0) ⊺ or through (0, 1) ⊺ by "rotating" the functions Πb n,2k to these vertices:
RM (Πb n,2k ) (x 1 , x 2 ) = (Πb n,2k ) (x 2 , 1 -x 1 -x 2 ) and M R (Πb n,2k ) (x 1 , x 2 ) = (Πb n,2k ) (1 -x 1 -x 2 , x 1 ) . Since the dimension of E (refl) is 2d refl (n) = 2 * n+2 3 +
is not (always) a multiple of 3, it is, in general, not possible to define a basis where all three vertices of the triangle are treated in a symmetric way.
Definition 25 Let P ⊥,refl n,n-1 T := span RM Πb n,2k , M RΠb n,2k : 0 ≤ k ≤ n -1 3 . ( 54
)
This space is lifted to a general triangle T by fixing a vertex P of T and setting
P ⊥,refl n,n-1 (T ) := u • χ -1 P,T : u ∈ P ⊥,refl n,n-1 T , (55)
where the lifting χ P,T is an affine pullback χ P,T : T → T which maps 0 to P.
The basis b refl n,k to describe the restrictions of facet-oriented, non-conforming finite element functions to the facets is related to a reduced space and defined as in (51) with lifted versions
b P,T n,k := b refl n,k • χ -1 P,T , 0 ≤ k ≤ n -1 3 . ( 56
)
Remark 26 The construction of the spaces P ⊥,sym p,p-1 (T ) and P ⊥,refl p,p-1 (T ) (cf. Definitions 19 and 25) implies the direct sum decomposition
span b p,2k • χ -1 P,T : 0 ≤ k ≤ ⌊p/2⌋ = P ⊥,sym p,p-1 (T ) ⊕ P ⊥,refl p,p-1 (T ) . (57)
It is easy to verify that the basis functions b P,T p,k are mirror symmetric with respect to the angle bisector in T through P. However, the space P ⊥,refl n,n-1 (T ) is independent of the choice of the vertex P. In Appendix A we will define further sets of basis functions for the τ refl component of P ⊥ n,n-1 T -different choices might be preferable for different kinds of applications.
Simplex-Supported and Facet-Oriented Non-Conforming Basis Functions
In this section, we will define non-conforming Crouzeix-Raviart type functions which are supported either on one single tetrahedron or on two tetrahedrons which share a common facet. As a prerequisite, we study in §5.4.1 piecewise orthogonal polynomials on triangle stars, i.e., on a collection of triangles which share a common vertex and cover a neighborhood of this vertex (see Notation 27). We will derive conditions such that these functions are continuous across common edges and determine the dimension of the resulting space. This allows us to determine the non-conforming Courzeix-Raviart basis functions which are either supported on a single tetrahedron (see §5.4.2) or on two adjacent tetrahedrons (see §5.4.3) by "closing" triangle stars either by a single triangle or another triangle star.
Orthogonal Polynomials on Triangle Stars
The construction of the functions B K,nc p,k and B T,nc p,k as in ( 20) and ( 24) requires some results of continuous, piecewise orthogonal polynomials on triangle stars which we provide in this section.
Notation 27 A subset C ⊂ Ω is a triangle star if C is the union of some, say m C ≥ 3, triangles T ∈ F C ⊂ F, i.e., C = T ∈F C
T and there exists some vertex
V C ∈ V such that V C is a vertex of T ∀T ∈ F C , ∃ a continuous, piecewise affine mapping χ : D mC → C such that χ (0) = V C . (58)
Here, D k denotes the regular closed k-gon (in R 2 ).
For a triangle star C, we define
P ⊥ p,p-1 (C) := u ∈ C 0 (C) | ∀T ∈ F C : u| T ∈ P ⊥ p,p-1 (T ) .
In the next step, we will explicitly characterize the space P ⊥ p,p-1 (C) by a set of basis functions. Set A := V C (cf. (58)) and pick an outer vertex in F C , denote it by A 1 , and number the remaining vertices A 2 , . . . , A mC in F C counterclockwise. We use the cyclic numbering convention A mC +1 := A 1 and also for similar quantities.
For 1 ≤ ℓ ≤ m C , let e ℓ := [A, A ℓ ] be the straight line (convex hull) between and including A, A ℓ . Let T ℓ ∈ F C be the triangle with vertices A, A ℓ , A ℓ+1 . Then we choose the affine pullbacks to the reference element T by
χ ℓ (x 1 , x 2 ) := A + x 1 (A ℓ -A) + x 2 (A ℓ+1 -A) if ℓ is odd, A + x 1 (A ℓ+1 -A) + x 2 (A ℓ -A) if ℓ is even.
In this way, the common edges e ℓ are parametrized by χ ℓ-1 (t, 0) = χ ℓ (t, 0) if 3 ≤ ℓ ≤ m C is odd and by χ ℓ-1 (0, t) = χ ℓ (0, t) if 2 ≤ ℓ ≤ m C is even. The final edge e 1 is parametrized by χ 1 (t, 0) = χ m C (t, 0) if m C is even and by χ 1 (t, 0) = χ mC (0, t) (with interchanged arguments!) otherwise. We introduce the set
R p,C := {0, . . . , p} if m C is even, 2ℓ : 0 ≤ ℓ ≤ * p 2 + if m C is odd
and define the functions (cf. ( 49), ( 55), (57))
b C p,k T ℓ := b p,k • χ -1 ℓ , ∀k ∈ R p,C . (59)
Lemma 28 For a triangle star C, a basis for P ⊥ p,p-1 (C) is given by b From Lemma 12 we conclude that the continuity across such edges is equivalent to
C p,k , k ∈ R p,C . Further dim P ⊥ p,p-1 (C) = p + 1 if m C is even, * p 2 + + 1 if m C is odd. ( 60
α (ℓ-1) p,k = α (ℓ) p,k ∀0 ≤ k ≤ p. ( 61
)
Continuity across e ℓ for even 2 ≤ ℓ ≤ m C . Note that χ 2 (0, t) = χ 3 (0, t). Taking into account (49), ( 55), (57) we see that the continuity across e ℓ is equivalent to
p k=0 α (2) p,k b II p,k = p k=0 α (3)
p,k b II p,k .
From Lemma 12 we conclude that the continuity across e ℓ for even 2 ≤ ℓ ≤ m C is again equivalent to
α (ℓ-1) p,k = α (ℓ) p,k ∀0 ≤ k ≤ p. (62)
Continuity across e 1 For even m C the previous argument also applies for the edge e 1 and the functions b C p,k , 0 ≤ k ≤ p, are continuous across e 1 . For odd m C , note that χ 1 (t, 0) = χ mC (0, t). Taking into account (49), (55), (57) we see that the continuity across e 1 is equivalent to Using the symmetry relation (37) we conclude that this is equivalent to
p k=0 α (1) p,k b I p,k = p k=0 α (mC ) p,k (-1) k b I p,k .
From Lemma 12 we conclude that this, in turn, is equivalent to
α (1) p,k = α (m C ) p,k k is even, α (1) p,k = -α (m C ) p,k k is odd. ( 63
)
From the above reasoning, the continuity of b C p,k across e 1 follows if α In this section, we will prove that S p K,nc (cf. (20)) satisfies
S p K,nc ⊕ S p K,c = S p K := u ∈ S p G : supp u ⊂ K ,
where S p G is defined in (4) and, moreover, that the functions B K,nc p,k , k = 0, 1, . . . , d triv (p) -1, as in ( 18), (20) form a basis of S p K,nc .
in S p K1,nc ⊕ S p K2,nc . In view of the direct sum in (67) we may thus assume that the functions in Sp T,nc are continuous in ω T .
To finally arrive at a direct decomposition of the space in the right-hand side of (67) we have to split the spaces P ⊥ p,p-1 (C i ) into a direct sum of the spaces of totally symmetric orthogonal polynomials and the spaces introduced in Definition 25 and glue them together in a continuous way. We introduce the functions for the definition of S p T,nc . The resulting non-conforming facet-oriented space S p T,nc was introduced in Definition 7 and Sp T,nc can be chosen to be S p T,nc .
Proposition 30 For any u ∈ S p T,nc , the following implication holds
u| T ∈ S p T,nc T ∩ P ⊥ p,p-1 (T ) =⇒ u = 0.
Proof. Assume there exists u ∈ S p T,nc with u| T ∈ S p T,nc T ∩P ⊥ p,p-1 (T ). Let K be a simplex adjacent to T . Then
u K = u| K satisfies u K | T ′ ∈ P ⊥ p,p-1 (T ′
) for all T ′ ⊂ ∂K and, thus, u K ∈ S p K,nc . Since S p K,nc T ′ ∩ S p T,nc T ′ = {0} for T ′ ∈ ∂K\
refl p,k (x 1 , 1 -x 1 ) is invariant under x 1 → 1 -x 1 .
For four non-coplanar points A 0 , A 1 , A 2 , A 3 let K denote the tetrahedron with these vertices. For any k such that 0 ≤ k ≤ p-1
3 define a piecewise polynomial on the faces of K as follows: choose a local (x 1 , x 2 )-coordinate system for A 0 A 1 A 2 so that the respective coordinates are (0, 0) , (1, 0) , (0, 1), and define Q
k is continuous at the edges A 0 A 1 , A 0 A 2 , and A 0 A 3 . The values at the boundary of the triangle star equal b refl p,k (x 1 , 1x 1 ); note the symmetry and thus the orientation of the coordinates on the edges
A 1 A 2 , A 2 A 3 , A 3 A 1 is immaterial. The value of Q (0)
k on the triangle A 1 A 2 A 3 is taken to be a degree p polynomial, totally symmetric, with values agreeing with b refl p,k (x 1 , 1x 1 ) on each edge. Similarly Q
(1) k , Q (2) k , Q (3)
k are defined by taking A 1 , A 2 , A 3 as the center of the construction, respectively.
Theorem 31 a) The functions Q
(i) k , 0 ≤ k ≤ d refl (p) -1, i = 0,
1, 2, 3 are linearly independent. b) Property (71) holds.
A Basis for Non-Conforming Crouzeix-Raviart Finite Elements
We have defined conforming and non-conforming sets of functions which are spanned by functions with local support. In this section, we will investigate the linear independence of these functions. We introduce the following spaces S p sym,nc :=
is not direct. The sum Sp G,c ⊕ S p sym,nc ⊕ S p,0 refl,nc (74)
is direct.
Proof. Part 1. We prove that the sum S p sym,nc ⊕ S p refl,nc is direct. From Proposition 30 we know that the sum S p T,nc T , 0 ≤ k ≤ d refl (p) -1, are linearly independent and belong to P p-1 (T ). We define the functionals
⊕ P ⊥ p,p-1 (T ) is direct. Let Π T : L 2 (T ) → P p-1 (
J T p,k (w) := T wq T p,k 0 ≤ k ≤ d refl (p) -1.
Next we consider a general linear combination and show that the condition
K⊂G dtriv(p)-1 i=0 α K i B K,nc p,i + K⊂G T ′ ⊂∂K d refl (p)-1 j=0 β T ′ j B T ′ ,nc p,j ! = 0 (76)
implies that all coefficients are zero. We apply the functionals J T p,k to (76) and use the orthogonality between P ⊥ p,p-1 (T ) and q T p,k to obtain
K⊂G T ′ ⊂∂K d refl (p)-1 j=0 β T ′ j J T p,k B T ′ ,nc p,j ! = 0. ( 77
) For T ′ = T it holds J T p,k B T ′ ,nc p,i = 0 since B T ′ ,nc p,i K T
is an orthogonal polynomial. Thus, equation (77) is equivalent to
drefl(p)-1 j=0 β T j J T p,k B T,nc p,j ! = 0. (78)
The matrix J T p,k B T,nc p,j
d refl (p)-1 k,j=0
is regular because
J T p,k B T,nc p,j = T B T,nc p,j q T p,k = T B T,nc p,j Π T B T,nc p,k T = T B T,nc p,j B T,nc p,k
and B T,nc p,k T k are linearly independent. Hence we conclude from (78) that all coefficients β T j are zero and the condition (76) reduces to
K⊂G d triv (p)-1 i=0 α K i B K,nc p,i ! = 0.
The left-hand side is a piecewise continuous function so that the condition is equivalent to dtriv(p)-1 i=0
α K i B K,nc p,i !
where χ i : T → T i are affine pullbacks to the reference triangle such that χ i (0) = A 0 . This implies that the functions u i at A 0 have the same value (say w 0 ) and, from the condition u refl (A 0 ) = 3w 0 = 0, we conclude that u i (A 0 ) = 0. The values of u i at the vertex A i of K (which is opposite to T i ) also coincide and we denote this value by v 0 . Since u refl | T = 0 it holds u refl (A i ) = 2w 0 + v 0 = 0. From w 0 = 0 we conclude that also v 0 = 0. Let χ i,T0 : T → T 0 denote an affine pullback with the property χ i,T0 (0) = A i . Hence,
u i := u i | T 0 • χ -1 i,T0 ∈ span b refl p,0 (80)
with values zero at the vertices of T . Note that b p,0 (0, 0) = (-1) p (p + 1) and b p,0 (1, 0) = b p,0 (0, 1) = 1.
The vertex properties (81) along the definition of b refl p,k (cf. ( 51)) imply that
b refl p,0 (1, 0) = b refl p,0 (0, 1) = 1 3 (1 -(-1) p (p + 1)) = c p , (82)
b refl p,0 (0, 0) = -2b refl p,0 (1, 0) . Since c p = 0 for p ≥ 1 we conclude that u i = 0 holds. Relation (80) implies u i | T0 = 0 and thus u i = 0. From
u refl | T = 3 i=1 u i | T we deduce that u refl | K = 0.
The Cases b.1-.3 allow to proceed with the same induction argument as for Case a and u refl = 0 follows by induction.
Part 3. An inspection of Part 2 shows that, for the proof of Case a, it was never used that the vertexoriented basis functions have been removed from S p G,c and Case a holds verbatim for S p G,c . This implies that the first sum in (73) is direct.
Part 4. The fact that the sum S p G,c + S p refl,nc is not direct is postponed to Proposition 34.
Proposition 34 For any vertex
V ∈V Ω it holds B G p,V ∈ S p sym,nc ⊕ S p,0 refl,nc ⊕ Sp G,c .
Proof. We will show the stronger statement B G p,V ∈ S p,0 refl,nc ⊕ Sp G,c . It suffices to construct a continuous function u V ∈ S p refl,nc which coincides with B G p,V at all vertices V ′ ∈ V and vanishes at ∂Ω; then,
B G p,V -u V ∈ Sp G,c
and the assertion follows. Recall the known values of b refl p,0 at the vertices of the reference triangle and the definition of c p as in (82). Let K ∈ G be a tetrahedron with V as a vertex. The facets of K are denoted by T i , 0 ≤ i ≤ 3, and the vertex which is opposite to T i is denoted by A i . As a convention we assume that A 0 = V. For every T i , 1 ≤ i ≤ 3, we define the function u Ti ∈ S p Ti,nc by setting (cf. (56))
u Ti | T0 = b refl p,0 • χ -1 Ai,T0 ,
where χ Ai,T0 : T → T 0 is an affine pullback which satisfies χ Ai,T0 (0) = A i . (It is easy to see that the definition of u T i is independent of the side of T i , where the tetrahedron K is located.) From ( 51) and (53) we conclude that 3 i=1 u Ti T0 = 0 holds. We proceed in the same way for all tetrahedrons K ∈ G V (cf. ( 9)). This implies that ũV :=
T ∈FΩ V∈T u T (83)
vanishes at Ω\
• ω V (cf. ( 9)). By construction the function ũV is continuous. At V, the function u T i has the value (cf. (82))
u Ti (V) = c p so that ũV (V) = Cc p , where C is the number of terms in the sum (83). Since c p > 0 for all p ≥ 1, the function u V := 1 Ccp ũV is well defined and has the desired properties.
Remark 35
We have seen that the extension of the basis functions of S^p_{G,c} by the basis functions of S^p_refl,nc leads to linearly dependent functions. On the other hand, if the basis functions of the subspace S^{p,0}_refl,nc are added and the vertex-oriented basis functions in S^p_{G,c} are simply removed, one arrives at a set of linearly independent functions which spans a larger space than S^p_{G,c}. Note that S^{p,0}_refl,nc = S^p_refl,nc for p = 1, 2, 3. One could add more basis functions from S^p_refl,nc but then has to remove further basis functions from S̃^p_{G,c}
or formulate side constraints in order to obtain a set of linearly independent functions.
We finish this section by an example which shows that there exist meshes with fairly special topology, where the inclusion S p G,c + S p sym,nc + S p refl,nc ⊂ S p G (84) is strict. We emphasize that the left-hand side in (84), for p ≥ 4, defines a larger space than the space in (75) since it contains all non-conforming functions of reflection type.
Example 36 Let us consider the octahedron Ω with vertices A ± := (0, 0, ±1) ⊺ and A 1 := (1, 0, 0) ⊺ , A 2 := (0, 1, 0) ⊺ , A 3 := (-1, 0, 0) ⊺ , A 4 := (0, -1, 0) ⊺ . Ω is subdivided into a mesh G := {K i : 1 ≤ i ≤ 8} consisting of eight congruent tetrahedrons sharing the origin 0 as a common vertex. The six vertices at ∂Ω have the special topological property that each one belongs to exactly four surface facets.
Note that the space defined by the left-hand side of (84) does not contain functions whose restriction to a surface facet, say T, belongs to the τ_sign component of P^⊥_{n,n−1}(T). Hence, the inclusion in (84) is strict if we identify a function in S^p_G whose restriction to some surface facet is an orthogonal polynomial of "sign type". Let q ≠ 0 be a polynomial which belongs to the τ_sign component of P^⊥_{n,n−1}(T̂) on the reference element. Denote the (eight) facets on ∂Ω with the vertices A_±, A_i, A_{i+1} by T^±_i for 1 ≤ i ≤ 4 (with cyclic numbering convention) and choose affine pullbacks χ_{±,i} : T̂ → T^±_i as χ_{±,i}(x) := A_± + x_1 (A_i − A_±) + x_2 (A_{i+1} − A_±). Then, it is easy to verify (use Lemma 28 with even m_C) that the function q : ∂Ω → R, defined by q|_{T^±_i} := q ∘ χ^{−1}_{±,i}, is continuous on ∂Ω. Hence the "finite element extension" to the interior of Ω via
Q := N∈N p ∩∂Ω q (N) B G p,N
defines a function in S p G which is not in the space defined by the left-hand side of (84). We state in passing that the space S p G does not contain any function whose restriction to a boundary facet, say T , belongs to the τ sign component of P ⊥ p,p-1 (T ) if there exists at least one surface vertex which belongs to an odd number of surface facets. In this sense, the topological situation considered in this example is fairly special.
Conclusion
In this article we developed explicit representation of a local basis for non-conforming finite elements of the Crouzeix-Raviart type. As a model problem we have considered Poisson-type equations in three-dimensional domains; however, this approach is by no means limited to this model problem. Using theoretical conditions in the spirit of the second Strang lemma, we have derived conforming and non-conforming finite element spaces of arbitrary order. For these spaces, we also derived sets of local basis functions. To the best of our knowledge, such explicit representation for general polynomial order p are not available in the existing literature. The derivation requires some deeper tools from orthogonal polynomials of triangles, in particular, the splitting of these polynomials into three irreducible irreducible S 3 modules.
Based on these orthogonal polynomials, simplex-and facet-oriented non-conforming basis functions are defined. There are two types of non-conforming basis functions: those whose supports consist of one tetrahedron and those whose supports consist of two adjacent tetrahedrons. The first type can be simply added to the conforming hp basis functions. It is important to note that the span of the functions of the second type contains also conforming functions and one has to remove some conforming functions in order to obtain a linearly independent set of functions. We have proposed a non-conforming space which consists of a) all basis functions of the first type and b) a reduced set of basis functions of the second type and c) of the conforming basis functions without the vertex-oriented ones. This leads to a set of linearly independent functions and is in analogy to the well known lowest order Crouzeix-Raviart element.
It is interesting to compare these results with high-order Crouzeix-Raviart finite elements for the twodimensional case which have been presented in [START_REF] Ciarlet | Intrinsic finite element methods for the computation of fluxes for Poisson's equation[END_REF]. Facets T of tetrahedrons in 3D correspond to edges E of triangles in 2D. As a consequence the dimension of the space of orthogonal polynomials P ⊥ p,p-1 (E) equals one. For even degree p, one has only non-conforming basis functions of "symmetric" type (which are supported on a single triangle) and for odd degree p, one has only non-conforming basis functions of "reflection" type (which are supported on two adjacent triangles). It turns out that adding the non conforming symmetric basis function to the conforming hp finite element space leads to a set of linearly independent functions which is the analogue of the first sum in (73). If the non-conforming basis functions of reflection type are added, the
Figure 1 :
1 Figure 1: Symmetric orthogonal polynomials on the reference triangle and corresponding tetrahedronsupported non-conforming basis functions.
Example 9
9 The lowest order of p such that d refl (p) ≥ 1 is p = 1. In this case, we get d refl (p) = 1. In Figure2the function b refl p,k and corresponding basis functions B T,nc p,k are depicted for (p, k) ∈ {(1, 0) , (2, 0) , (4, 0) , (4, 1)}.
Figure 2 :
2 Figure 2: Orthogonal polynomials of reflection type and corresponding non-conforming basis functions which are supported on two adjacent tetrahedrons. The common facet is horizontal and the two tetrahedrons are on top of each other.
Lemma 12
12 For any Z ∈ {I, II, III}, each of the systems b Z n,k n k=0 , form a basis of P n ([0, 1]).
refl) j are S 3 -
3 irreducible and realizations of the representations τ triv , τ sign , τ refl respectively. Let d triv (n) , d sign (n) , d refl (n) denote the respective multiplicities, so that d triv (n) + d sign (n) + 2d refl (n) = n + 1. The case n even or odd are handled separately. If n = 2m is even then the number of eigenvectors of R having -1 as eigenvalue equals m (the cardinality of {1, 3, 5, . . . , 2m -1}). The same property holds for M since the eigenvectors of M in the basis x 2m (x -1)2m-j are explicitly given by x 2m-2ℓ (x -1) 2ℓx 2ℓ (x -1) 2m-2ℓ : 0 ≤ ℓ ≤ m . Each E (refl) j contains one (-1)-eigenvector of χ {1,0,2}and one of χ {0,2,1} and each E (sign) j consists of one (-1)-eigenvector of χ {0,2,1} . This gives the equationd refl (n) + d sign (n) = m. Each E (refl) jcontains one (+1)-eigenvector of χ {1,0,2} and one of χ {0,2,1} and each E (triv) j consists of one (+1)-eigenvector of χ {0,2,1} . There are m + 1 eigenvectors with eigenvalue 1 of each of χ {1,0,2} and χ {0,2,1} thus d refl (n) + d triv (n) = m + 1.
Thus any totally symmetric polynomial on T is a linear combination of e a 2 e b 3 with uniquely determined coefficients. The number of linearly independent totally symmetric polynomials in 0 T equals the number of solutions of 0 ≤ 2a + 3b ≤ n with a, b = 0, 1, 2, . . .. As a consequence d triv (n) = card {(a, b) : 2a + 3b = n}.
Corollary 17 2 0≤j≤n/ 2 M 2 0≤j≤
17222 For 0 ≤ k ≤ n 2 each polynomial r n,2k := b n,2j + b n,2k is totally symmetric and for 0 ≤ k ≤ n-1 2 each polynomial r n,2k+1 =b n,2j+1 + b n,2k+1 satisfies Mp = -p = Rp (the sign representation). Proof. The pattern of zeroes in " M (n) i,j # shows that r n,2k = (M R + RM + I) b n,2k ∈ span {b n,2j } and thus satisfies Rr n,2k = r n,2k ; combined with RM r n,2k = r n,2k this shows r n,2k is totally symmetric. A similar argument applies to (M R + RM + I) b n,2k+1 . Theorem 18 The functions b sym n,k , 0 ≤ k ≤ d triv (n) -1, as in (17) form a basis for the totally symmetric polynomials in P ⊥ n,n-1 T .
1 3 1 3
11 polynomials Πb n,2k : 0 ≤ k ≤ n-are linearly independent. b. The set RM Πb n,2k , M RΠb n,2k : 0 ≤ k ≤ n-is linearly independent and defines a basis for the τ refl component of P ⊥ n,n-1 T . Proof. In general Πz a z b = z a z b if a-b ≡ 1, 2 mod 3 and Πz a z b = 0 if a-b ≡ 0 mod 3. Expand the polynomials w k (z, z) := Π (z + z) n-3k z 3 + z 3 k by the binomial theorem to obtain
)
Proof. We show that b C p,k k∈R p,C is a basis of P ⊥ p,p-1 (C) and the dimension formula. Continuity across e ℓ for odd 3 ≤ ℓ ≤ m C . The definition of the lifted orthogonal polynomials (see (49), (55), (57)) implies that the continuity across e ℓ for odd 3 ≤ ℓ ≤ m C is equivalent to
= 0 for odd k and all 1 ≤ ℓ ≤ m C . The proof of the dimension formula (60) is trivial. 5.4.2 A Basis for the Symmetric Non-Conforming Space S p K,nc
, 0 , i = 1 , 2 ,
012 ≤ k ≤ d triv (p) -1, with b ∂Ki,sym p,k as in (65) and define b Ci,refl p,k , 0 ≤ k ≤ d refl (p) -1, piecewise by b Ci,refl p,k T ′ := b Ai,T ′ p,k for T ′ ⊂ C i with b Ai,T ′ p,k as in (56). The mirror symmetry of b Ai,T ′ p,k with respect to the angular bisector in T ′ through A i implies the continuity of b Ci,refl p,k . Hence, P ⊥ p,p-1 (C i ) = span b Ci,sym p,k Ci : 0 ≤ k ≤ d triv (p) -1 ⊕ span b Ci,refl p,k : 0 ≤ k ≤ d refl (p) -1 . (70) Since the traces of b Ci,sym p,k and b Ci,refl p,k at ∂T are continuous and are, from both sides, the same linear combinations of edge-wise Legendre polynomials of even degree, the gluing b ∂ωT ,defines continuous functions on ∂ω T . Since the space S p T,nc must satisfy a direct sum decomposition (cf. (67)), it suffices to consider the functions b ∂ω T ,refl p,k
•T 1 3
1 we conclude that u K = 0. Note that Definition 7 and Proposition 30 neither imply a priori that the functions B T,nc p,k K , ∀T ⊂ ∂K, k = 0, . . . , d refl (p) -1 are linearly independent nor that ∀T ⊂ ∂K it holds T ′ ⊂C B T ′ ,nc p,m T = P ⊥,refl p,p-1 (T ) for the triangle star C = ∂K\ • T (71) holds. These properties will be proved next. Recall the projection Π = 1 3 (2I -M R -RM) from Proposition 21. We showed (Theorem 23.a) that b refl p,k : 0 ≤ k ≤ p-is linearly independent, where b refl p,k := Πb p,2k . Additionally Rb refl p,k = b refl p,k which implies b refl p,k (0, x 1 ) = b refl p,k (x 1 , 0), and the restriction x 1 -→ b
k
on the facet equal to b refl p,k . Similarly define Q (0) k on A 0 A 2 A 3 and A 0 A 3 A 1 (with analogously chosen local (x 1 , x 2 )-coordinate systems), by the property b refl p,k (0, x 1 ) = b refl p,k (x 1 , 0). Q
T ) denote the L 2 (T ) orthogonal projection. Since P p-1 (T ) is the orthogonal complement of P ⊥ p,p-1 (T ) in P p (T ) and since P ⊥ p,p-1 (T ) ∩ S p T,nc T = {0}, the restricted mapping Π T : S p T,nc T → P p-1 (T ) is injective and the functions q T p,k := Π T B T,nc p,k T
The superscript "refl" is a shorthand for "reflection" and explained in Section 5.3.1.
where P E 2k is the Legendre polynomial of even degree 2k scaled to the edge E with endpoint values +1 and symmetry with respect to the midpoint of E. Hence, we are looking for orthogonal polynomials P to the total simplex K by polynomial extension (cf. ( 18), ( 19))
These functions are the same as those introduced in Definition 5. The above reasoning leads to the following Proposition.
Proposition 29 For a simplex K, the space of non-conforming, simplex-supported Crouzeix-Raviart finite elements can be chosen as in (20) and the functions B K,nc p,k , 0 ≤ k ≤ d triv (p) -1 are linearly independent.
A Basis for S p T,nc
Let T ∈ F Ω be an inner facet and
) with the convention that the unit normal n T points into K 2 . In this section, we will prove that a space Sp T,nc which satisfies
can be chosen as Sp T,nc := S p T,nc (cf. (25)) and, moreover, that the functions B T,nc p,k , k = 0, 1, . . . , d refl (p) -1, as in (24) form a basis of S p T,nc .
denote the triangle star (cf. Notation 27) formed by the three remaining triangles of ∂K i . We conclude from Lemma 28 that a basis for
Since any function in S p T is continuous on C i , we conclude from Lemma 28 (with
with b ∂T p,2k as in (64). To identify a space Sp T,nc which satisfies (67) we consider the jump condition in (68) restricted to the boundary ∂T . The symmetry of the functions b ∂T p,2k implies that [u] T ∈ P ⊥,sym p,p-1 (T ), i.e., there is a function q 1 ∈ S p K1,nc (see (20)) such that [u] T = q 1 | T and ũ, defined by ũ| K1 = u 1 + q 1 and ũ| K2 = u 2 , is continuous across T . On the other hand, all functions u ∈ S p T whose restrictions u| ωT are discontinuous can be found
The proof involves a series of steps. The argument will depend on the values of the functions on the three rays A 0 A 1 , A 0 A 2 , A 0 A 3 , each one of them is given coordinates t so that t = 0 at A 0 and t = 1 at the other end-point. For a fixed k let q
Lemma 32 Suppose 0 ≤ k ≤ p-1 3 and 0 ≤ t ≤ 1 then q (t) + q (t) + , q (t) = 0.
Proof. The actions of RM and MR on polynomials
k to the values on the ray
is constructed taking the origin at A 1 and because of the reverse orientation of the ray we see that the value of
k is given by q. The value of
k on the ray A 0 A 2 is , q (by the symmetry of , q the orientation of the ray does not matter). The other functions are handled similarly, and the contributions to the three rays are given in this table:
We use q k , , q k , q k to denote the polynomials corresponding to b refl p,k . Suppose that the linear combination
Evaluate the sum on the three rays to obtain the equations:
We used Lemma 32 to eliminate q k from the equations. In Theorem 23.b we showed the linear independence of
, and in Lemma 12 that the restriction map f → f (x 1 , 0) is an isomorphism from the orthogonal polynomials P ⊥ p,p-1 to P p ([0, 1]). Thus the projection of the set is also linearly independent, that is, ,
3 is a linearly independent set of polynomials on 0 ≤ t ≤ 1. This implies all the coefficients in the above equations vanish: the q k terms show c k,0 = c k,1 = c k,2 = c k,3 and then the ,
To prove (71) it suffices to transfer the statement to the reference element T . The pullbacks of the restrictions
Properties of Non-Conforming Crouzeix-Raviart Finite Elements
The and u refl ∈ S p refl,nc . We prove by contradiction that u sym ∈ C 0 (Ω). Assume that u sym / ∈ C 0 (Ω). Then, there exists a facet T ⊂ F Ω such that [u sym ] T = 0. Then, [u refl ] T = -[u sym ] T is a necessary condition for the continuity of u. However, [u sym ] T ∈ P ⊥,sym p,p-1 (T ) while [u refl ] T ∈ P ⊥,refl p,p-1 (T ) and there is a contradiction because P ⊥,sym p,p-1 (T ) ∩ P ⊥,refl p,p-1 (T ) = {0}. Hence, u sym ∈ C 0 (Ω) and, in turn, u refl ∈ C 0 (Ω). Since u = 0, at least, one of the functions u sym and u refl must be different from the zero function. Case a. We show u sym = 0 by contradiction: Assume u sym = 0. Then, u sym | T = 0 for all facets T ∈ F. (Proof by contradiction: If u sym | T = 0 for some T ∈ F, we pick some K ∈ F which has T as a facet. Since
we have u sym | T ′ = 0 for all facets T ′ of K and u sym | K = 0. Since u sym is continuous in Ω, the restriction u sym | K ′ is zero for any K ′ ∈ G which shares a facet with K. This argument can be applied inductively to show that u sym = 0 in Ω. This is a contradiction.) We pick a boundary facet T ∈ F ∂Ω . The condition u ∈ Sp G,c implies u = 0 on ∂Ω and, in particular, u| T = u sym | T + u refl | T = 0. We use again the argument P ⊥,sym p,p-1 (T ) ∩ P ⊥,refl p,p-1 (T ) = {0} which implies u sym = 0 and this is a contradiction to the assumption u sym = 0.
Case b. From Case a we know that u sym = 0, i.e., u refl = u, and it remains to show u refl = 0. The condition u refl ∈ Sp G,c implies u refl | ∂Ω = 0 and u refl (V) = 0 for all vertices V ∈ V. The proof of Case b is similar than the proof of Case a and we start by showing for a tetrahedron, say K, with a facet on the boundary that u refl | K = 0 and employ an induction over adjacent tetrahedrons to prove that u refl = 0 on every tetrahedron in G.
We consider a boundary facet T 0 ∈ F ∂Ω with adjacent tetrahedron K ⊂ G. We denote the three other facets of K by T i , 1 ≤ i ≤ 3, and for 0 ≤ i ≤ 3, the vertex of K which is opposite to T i by A i .
Case b.1. First we consider the case that there is one and only one other facet, say, T 1 which lies in ∂Ω.
The case that there are exactly two other facets which are lying in ∂Ω can be treated in a similar way.
Case b.3. Next, we consider the case that
Ti,nc . On T we choose a local (x 1 , x 2 )-coordinate system such that A 1 = 0, A 2 = (1, 0) ⊺ , A 3 = (0, 1) ⊺ . From (51) and (53) we conclude that
) and, in turn, that the restrictions u E i of u i to the edge E i = T i ∩ T 0 , 1 ≤ i ≤ 3, are the "same", more precisely, the affine pullbacks of u E i to the interval [0, 1] are the same. From Lemma 13, we obtain that
set of vertex-oriented conforming basis functions have to be removed from the conforming space. This is in analogy to the properties (74) and ( 75). Future research is devoted on numerical experiments and the application of these functions to system of equations as, e.g., Stokes equation and the Lamé system.
Acknowledgement This work was supported in part by ENSTA, Paris, through a visit of S.A. Sauter during his sabbatical. This support is gratefully acknowledged.
A Alternative Sets of "Reflection-type" Basis Functions
In this Appendix we define further sets of basis functions for the τ refl component of P ⊥ n,n-1 T -different choices might be preferable for different kinds of applications. All these sets have in common that two vertices of T are special -any basis function is symmetric/skew symmetric with respect to the angular bisector of one of these two vertices.
Remark 37 The functions b n,2k can be characterized as the range of I + R. We project these functions onto τ refl , that is, the space E (refl) := {p : RMp + MRp = -p}. Let
The range of both is E (refl) . We will show that {T 1 b n,2k , T 2 b n,2k , 0 ≤ k ≤ (n -2) /3} is a basis for E (refl) . Previously we showed {RMq k , M Rq k } is a basis, where
holds, so the basis is made up out of linear combinations of {T 1 b n,2k , T 2 b n,2k , 0 ≤ k ≤ (n -1) /3}. These can be written as elements of the range of T 1 (I + R) and T 2 (I + R). Different linear combinations will behave differently under the reflections R, M, RM R (that is (x, y) → (y, x), (1xy, y), (x, 1xy) respectively). After some computations we find
Any two of these types can be used in producing bases from the b n,2k . Also each pair (first two, second two, third two) are orthogonal to each other. Note R fixes (0, 0) and reflects in the line x = y, M fixes (0, 1), reflects in 2x + y = 1, and RMR fixes (1, 0), reflects in x + 2y = 1.
If we allow for a complex valued basis, the three vertices of T can be treated more equally as can be seen from the following remark.
Remark 38 The basis functions can be complexified: set ω = e 2π i /3 ; any polynomial in E (refl) can be expressed as p = p 1 + p 2 such that MRp = ωp 1 + ω 2 p 2 (consequently RM p = ω 2 p 1 + ωp 2 ), then This is a basis which behaves similarly at each vertex. | 81,089 | [
"4372"
] | [
"3316",
"127972",
"217898"
] |
00148826 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2007 | https://hal.science/hal-00148826/file/IAVSD_06_global_chassis.pdf | Péter Gáspár
email: gaspar@sztaki.hu
Z Szabó
J Bokor
C Poussot-Vassal
O Sename ⋆⋆
L ⋆⋆
⋆⋆ Dugard
Global chassis control using braking and suspension systems
Motivation
In the current design practice several individual active control mechanisms are applied in road vehicles to solve different control tasks, see e.g. [START_REF] Alleyne | Improved vehicle performance using combined suspension and braking forces[END_REF][START_REF] Hedrick | Brake system modelling, control and integrated brake/throttle switching[END_REF][START_REF] Odenthal | Nonlinear steering and braking control for vehicle rollover avoidance[END_REF][START_REF] Trächtler | Integrated vehicle dynamics control using active brake, steering and suspension systems[END_REF]. As an example, the suspension system is the main tool to achieve comfort and road holding for a vehicle whilst the braking system is the main tool applied in emergency situations. Since there is a certain set of dynamical parameters influenced by both systems, due to the different control goals, the demands for a common set of dynamical parameters might be in conflict if the controllers of these systems are designed independently. This fact might cause a suboptimal actuation, especially in emergencies such as an imminent rollover. For example, the suspension system is usually designed to merely improve passenger comfort and road holding although its action could be used to improve safety [START_REF] Gáspár | The design of an integrated control system in heavy vehicles based on an LPV method[END_REF]. The aim of the global chassis design is to use the influence of the systems in an optimal way, see [START_REF] Gáspár | Active suspension design using the mixed µ synthesis[END_REF][START_REF] Zin | An LP V /H∞ active suspension control for global chassis technology: Design and performance analysis[END_REF].
The goal is to design a controller that uses active suspensions all the time to improve passenger comfort and road holding and it activates the braking system only when the vehicle comes close to rolling over. In extreme situations, such as imminent rollover, the safety requirement overwrites the passenger comfort demand by executing a functional reconfiguration of the control goals by generating a stabilizing moment to balance an overturning moment. This reconfiguration can be achieved by a sufficient balance between the performance requirements imposed on the suspension system. In the presentation an integration of the control of braking and suspension systems is proposed.
LPV modeling for control design
The model for control design is constructed in a Linear Parameter Varying (LPV) structure that allows us to take into consideration the nonlinear effects in the state space description, thus the model structure is nonlinear in the parameter functions, but linear in the states. In the control design the performance specifications for rollover and suspension problems, and the model uncertainties are taken into consideration.
In normal operation suspension control is designed based on a full-car model describing the vertical dynamics and concentrating on passenger comfort and road holding. The state vector includes the the vertical displacement, the pitch angle and the roll angle of the sprung mass, the front and rear displacements of the unsprung masses on both sides and their derivatives. The measured signals are the relative displacements at the front and rear on both sides. Since the spring coefficient is a nonlinear function of the relative displacement and the damping coefficient also depends nonlinearly on the relative velocities these parameters are used as the scheduling variables of our LPV model. The performance outputs are the heave acceleration, pitch and roll angle accelerations to achieve passenger comfort and the suspension deflections and tire deflections for road holding.
The design for emergency is based on a full-car model describing the yaw and roll dynamics and contains as actuators both the braking and the suspension systems. The state components are the side slip angle of the sprung mass, the yaw rate, the roll angle, the roll rate and the roll angle of the unsprung mass at the front and rear axles. The measured signals are the lateral acceleration, the yaw rate and the roll rate. The forward velocity has a great impact on the evaluation of the dynamics, thus this parameter is chosen as a scheduling variable in our LPV model. The performance demands for control design are the minimization of the lateral acceleration and the lateral load transfers at the front and the rear.
In order to monitor emergencies the so-called normalized lateral load transfers R, which are the ratio of lateral load transfers and the mass of the vehicle at the front and rear axles, are introduced. An adaptive observer-based method is proposed to estimate these signals [START_REF] Gáspár | Continuous-time parameter identification using adaptive observers[END_REF].
Integrated control design based on the LPV method
The control design is performed in an H ∞ setting where performance requirements are reflected by suitable choices of weighting functions. In an emergency one of the critical performance outputs is the lateral acceleration. A weighting function W a (R), which depends on the parameter R is selected for the lateral acceleration. It is selected to be small when the vehicle is not in an emergency, indicating that the control should not focus on minimizing acceleration. However, W a (R) is selected to be large when R is approaching a critical value, indicating that the control should focus on preventing the rollover. As a result of the weighting strategy, the LPV model of the augmented plant contains additional scheduling variables such as the parameter R. The weighting function W z (R) for the heave displacement and heave acceleration must be selected in a trade-off with the selection of W a (R).
The H ∞ controller synthesis extended to LPV systems using a parameter dependent Lyapunov function is based on the algorithm of Wu et al. [START_REF] Wu | Induced L 2 -norm control for LPV systems with bounded parameter variation rates[END_REF]. The control design of the rollover problem results in the stabilizing roll moments at the front and the rear generated by active suspensions and the difference between the braking forces between the left and right-hand sides of the vehicle. A sharing logic is required to distribute the brake forces for wheels to minimize the wear of the tires. The control design of the suspension problem is to generate suspension forces which are modified by the demand of the stabilizing moment during an imminent rollover. The full version of the paper contains all the details concerning the analysis of this design.
An illustrative simulation example
The operation of the integrated control is illustrated through a double lane changing maneuver based on a model of a real vehicle. The time responses of the steering angle, the normalized load transfer at the front and the rear and their maximum, the lateral acceleration, the roll moments at the front and the rear and the difference of the braking forces are presented in the figure. When a rollover is imminent the values R increase and reach a lower critical limit (R 1 crit ) and suspension forces are generated to create a moment at the front and the rear to increase the stabilization of the vehicle. When this dangerous situation persists and R reaches the second critical limit (R 2 crit ) the active brake system generates unilateral brake forces in order to reduce the risk of the rollover. The detailed analysis of the example is included in the full paper. | 7,740 | [
"756640",
"834135",
"1618",
"5833"
] | [
"15818",
"15818",
"15818",
"388748",
"388748",
"388748"
] |
00148830 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2007 | https://hal.science/hal-00148830/file/AAC_06_global_chassis.pdf | Péter Gáspár
email: gaspar@sztaki.hu
Z Szabó
J Bokor
C Poussot-Vassal
O Sename
L Dugard
TOWARDS GLOBAL CHASSIS CONTROL BY INTEGRATING THE BRAKE AND SUSPENSION SYSTEMS
Keywords: LPV modeling and control, performance specifications, uncertainty, safety operation, passenger comfort, automotive
A control structure that integrates active suspensions and an active brake is proposed to improve the safety of vehicles. The design is based on an H ∞ control synthesis extended to LPV systems and uses a parameter dependent Lyapunov function. In an emergency, such as an imminent rollover, the safety requirement overwrites the passenger comfort demand by tuning the performance weighting functions associated with the suspension systems. If the emergency persists active braking is applied to reduce the effects of the lateral load transfers and thus the rollover risk. The solution is facilitated by using the actual values of the so-called normalized lateral load transfer as a scheduling variable of the integrated control design. The applicability of the method is demonstrated through a complex simulation example containing vehicle maneuvers.
INTRODUCTION
These days road vehicles contain several individual active control mechanisms that solve a large number of required control tasks. These control systems contain a lot of hardware components, such as sensors, actuators, communication links, power electronics, switches and micro-processors. In traditional control systems the vehicle functions to be controlled are designed and implemented separately. This means that control hardware is grouped into disjoint subsets with sensor information and control demands handled in parallel processes. However, these approaches can lead to unnecessary hardware redundancy. Al-though in the design of the individual control components only a subset of the full vehicle dynamics is considered these components influence the entire vehicle. Thus in the operation of these autonomous control systems interactions and conflicts may occur that might overwrite the intentions of the designers concerning the individual performance requirements.
The aim of the integrated control methodologies is to combine and supervise all controllable subsystems affecting the vehicle dynamic responses in order to ensure the management of resources. The flexibility of the control systems must be improved by using plug-and-play extensibility, see e.g. [START_REF] Gordon | Integrated control methodologies for road vehicles[END_REF]. The central purpose of vehicle control is not only to improve functionality, but also simplify the electric architecture of the vehicle. Complex and overloaded networks are the bottle-neck of functional improvements and high complexity can also cause difficulties in reliability and quality. The solution might be the integration of the high level control logic of subsystems. It enables designers to reduce the number of networks and create a clear-structured vehicle control strategy. Several schemes concerned with the possible active intervention into vehicle dynamics to solve different control tasks have been proposed. These approaches employ active antiroll bars, active steering, active suspensions or active braking, see e.g. [START_REF] Alleyne | Nonlinear adaptive control of active suspensions[END_REF][START_REF] Fialho | Design of nonlinear controllers for active vehicle suspensions using parameter-varying control synthesis[END_REF][START_REF] Hedrick | Brake system modelling, control and integrated brake/throttle switching[END_REF][START_REF] Kim | Investigation of robust roll motion control considering varying speed and actuator dynamics[END_REF][START_REF] Nagai | Integrated robust control of active rear wheel steering and direct yaw moment control[END_REF][START_REF] Odenthal | Nonlinear steering and braking control for vehicle rollover avoidance[END_REF][START_REF] Sampson | Active roll control of single unit heavy road vehicles[END_REF][START_REF] Shibahata | Progress and future direction of chassis control technology[END_REF][START_REF] Trächtler | Integrated vehicle dynamics control using active brake, steering and suspension systems[END_REF].
In this paper a control structure that integrates active suspensions and an active brake is proposed to improve the safety of vehicles. The active suspension system is primarily designed to improve passenger comfort, i.e. to reduce the effects of harmful vibrations on the vehicle and passengers. However, the active suspension system is able to generate a stabilizing moment to balance an overturning moment during vehicle maneuvers in order to reduce the rollover risk, (Gáspár and Bokor, 2005). Although the role of the brake is to decelerate the vehicle, if the emergency persists, the effects of the lateral tire forces can be reduced directly by applying unilateral braking and thus reducing the rollover risk (Gáspár et al., 2005;[START_REF] Palkovics | Roll-over prevention system for commercial vehicles[END_REF]. This paper is an extension of the principle of the global chassis control, which has been proposed in [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF].
The controller uses the actual values of the socalled normalized lateral load transfer R as a scheduling variable of the integrated control design. When a rollover is imminent the values of R increase and reach a lower critical limit, and then suspension forces must be generated to create a moment at the front and the rear to enhance the stability of the vehicle. When this dangerous situation persists and R reaches the upper critical limit the active brake system must generate unilateral brake forces in order to reduce the risk of the rollover. The goal of the control system is to use the active suspension system all the time to improve passenger comfort and road holding and activate the braking system only when the vehicle comes close to rolling over. In an emergency the safety requirement overwrites the passenger comfort demand by tuning the performance weighting functions associated with the suspension systems. Then a functional reconfiguration of the suspension system is carried out in order to generate stabilizing moments to balance an overturning moment during vehicle maneuvers.
In this paper the control-oriented model design has been carried out in a Linear Parameter Varying (LPV) framework that allows us to take into consideration the nonlinear effects in the state space description. Thus the model structure is nonlinear in the parameter functions, but it remains linear in the states. In the control design the performance specifications for rollover and suspension problems, and also the model uncertainties are taken into consideration. The design is based on an H ∞ control synthesis extended to LPV systems that use parameter dependent Lyapunov functions, [START_REF] Balas | Theory and application of linear parameter varying control techniques[END_REF][START_REF] Wu | Induced l 2 -norm control for LPV systems with bounded parameter variation rates[END_REF].
The structure of the paper is as follows. After a short introduction in Section 2 the control oriented modeling for rollover prevention and suspension problems is presented. In Section 3 the weighting strategy applied for the parameterdependent LPV control is presented. In Section 4 the operation of the integrated control system is demonstrated through a simulation example. Finally, Section 5 contains some concluding remarks.
AN LPV MODELING FOR THE CONTROL DESIGN
The combined yaw-roll dynamics of the vehicle is modeled by a three-body system, where m s is the sprung mass, m f and m r are the unsprung masses at the front and at the rear including the wheels and axles and m is the total vehicle mass. , respectively. The front and rear displacements at both sides of the sprung and the unsprung masses are denoted by x 1f l , x 1f r , x 1rl , x 1rr and x 2f l , x 2f r , x 2rl , x 2rr , respectively. In the model, the disturbances w f l , w f r , w rl , w rr are caused by road irregularities.
k tf k tr k tr k tf m f r m rr m rl m f l f f r f rr f rl f f l T ' x, φ z, ψ y, θ j E E ' t f E ' t r b a l r b a l f {m s , I x , I y , I z } T c h CG w rr w rl w f l w f r
The yaw and roll dynamics of the vehicle is shown in Figure 2. The roll moment of the inertia of the sprung mass and of the yaw-roll product is denoted by I xx and I xz while I yy is the the pitch moment of inertia and I zz is the yaw moment of inertia. The total axle loads are F zl and F zr .
The lateral tire forces in the direction of the wheel-ground contact are denoted by F yf and F yr . h is the height of CG of the sprung mass and h uf , h ur are the heights of CG of the unsprung masses, ℓ w is the half of the vehicle width and r is the height of the roll axis from the ground. β denotes the side slip angle of the sprung mass, ψ is the heading angle, φ is the roll angle, ψ denotes the yaw rate and θ the pitch angle. The roll angle of the unsprung mass at the front and at the rear axle are denoted by φ t,f and φ t,r , respectively. δ f is the front wheel steering angle, a y denotes the lateral acceleration and z s is the heave displacement while v stands for the forward velocity. First the modeling for suspension purposes is formalized. The vehicle dynamical model, i.e. the heave, pitch and roll dynamics of the sprung mass and the front and rear dynamics of the unsprung masses at both sides of the front and rear, is as follows:
ms zs = k f (∆ f l + ∆ f r ) + kr(∆ rl + ∆rr) + b f ( ∆fl + ∆fr ) + br( ∆rl + ∆rr) -f f l -f f r -f rl -frr Iyy θ = k f l f (∆ f l + ∆ f r ) + krlr(∆ rl + ∆rr) + b f l f ( ∆fl + ∆fr ) -brlr( ∆rl + ∆rr) -(f f l + f f r )l f + (f rl + frr)lr Ixx φ = k f ℓw(∆ f l -∆ f r ) + krℓw(∆ rl -∆rr) + b f ℓw( ∆fl -∆fr ) + brℓw( ∆rl -∆rr) -(f f l -f f r )ℓw -(f rl -frr)ℓw m f ẍ2fl = -k f ∆ f l + k tf ∆ wf l + b f ∆fl -f f l m f ẍ2fr = -k f ∆ f r + k tf ∆ wf r + b f ∆fr -f f r mr ẍ2rl = -kr∆ rl + ktr∆ wrl + br ∆rl -f rl mr ẍ2rr = -kr∆rr + ktr∆wrr + br ∆rr -frr with the following notations: with ∆ f l = -x 1f l + x 2f l , ∆ f r = -x 1f r + x 2f r , ∆ rl = -x 1rl + x 2rl , ∆rr = -x 1rr + x 2rr , ∆ wf l = x 2f l -w f l , ∆ wf r = x 2f r -w f r , ∆ wrl =
x 2rl -w rl and ∆wrr = x 2rr -wrr.
The state space representation of the suspension system is the following:
ẋs = A s x s + B 1s d s + B 2s u s , (1)
with the state vector x s = x 1 ẋ1 T , where
x 1 = z s φ θ x 2f l x 2f r x 2rl x 2rr T . The input signals is u s = f f l f f r f rl f rr T and d s = w f l w f r w rl w rr T is the disturbance.
Second, the modeling for the rollover problem is formalized. This structure includes two control mechanisms which generate control inputs: the roll moments between the sprung and unsprung masses, generated by the active suspensions u af , u ar , and the difference in brake forces between the left and right-hand sides of the vehicle ∆F b . The differential equations of the yaw-roll dynamics are formalized:
mv( β + ψ) -msh φ = F yf + Fyr -Ixz φ + Izz ψ = F yf l f -Fyrlr + lw∆F b (Ixx+msh 2 ) φ -Ixz ψ = msghφ + msvh( β + ψ) -k f (φ -φ tf ) -b f ( φ -φtf ) -kr(φ -φtr) -br( φ -φtr) + ℓwu af + ℓwuar -rF yf = m f v(r -h uf )( β + ψ) + m uf gh uf φ tf -k tf φ tf + k f (φ -φ tf ) + b f ( φ -φtf ) + ℓwu af -rFyr = mrv(r -hur)( β + ψ) -murghurφtr -ktrφtr + kr(φ -φtr) + br( φ -φtr) + ℓwuar.
The lateral tire forces F yf and F yr are approximated linearly to the tire slide slip angles α f and α r , respectively:
F yf = µC f α f and F yr = µC r α r ,
where µ is the side force coefficient and C f and C r are tire side slip constants. At stable driving conditions, the tire side slip angles α f and α r can be approximated as
α f = -β + δ f - l f • ψ v and α r = -β + lr• ψ v .
The differential equations depend on the forward velocity v of the vehicle nonlinearly. Choosing the forward velocity as a scheduling parameter ρ r = v, an LPV model is constructed. Note, that the side force coefficient is another parameter which varies nonlinearly during operational time. In [START_REF] Gáspár | Side force coefficient estimation for the design of active brake control[END_REF] a method has been proposed for the estimation of this parameter. Hence, it can be considered as a scheduling variable of the LPV model, too. In this paper, for the sake of simplicity, the variation of the side force coefficient is ignored.
The equations can be expressed in the state space representation form as:
ẋr = A r (ρ r )x r +B 1rv (ρ r )d r + B 2rv (ρ r )u r , (2) where x r = β ψ φ φ φ tf φ tr T is the state vec- tor, u r = ∆F b is the control input while d r = δ f is considered as a disturbance.
In this approach of the rollover problem the active suspensions generate two stabilizing moments at the front and the rear, which can be considered as the effects of the suspension forces u af = (f f lf f r )ℓ w and u ar = (f rl -f rr )ℓ w . The control input provided by the brake system generates a yaw moment, which affects the lateral tire forces directly. The difference between the brake forces ∆F b provided by the compensator is applied to the vehicle:
∆F b = (F brl + d 2 F bf l ) -(F brr + d 1 F bf r ),
where d 1 and d 2 are distances, which depend on the steering angle. In the implementation of the controller means that the control action be distributed at the front and the rear wheels at either of the two sides. The reason for distributing the control force between the front and rear wheels is to minimize the wear of the tires. In this case a sharing logic is required which calculates the brake forces for the wheels.
INTEGRATED CONTROL DESIGN BASED ON THE LPV METHOD
Predicting emergencies by monitoring R
Roll stability is achieved by limiting the lateral load transfers for both axles, ∆F zl and ∆F zr , below the level for wheel lift-off. The lateral load transfers are given by ∆F zi = ktiφti lw , where i denotes the front and rear axles. The tire contact force is guaranteed if mg 2 ± ∆F z > 0 for both sides of the vehicle. This requirement leads to the definition of the normalized load transfer, which is the ratio of the lateral load transfers at the front and rear axles: r i = ∆Fzi mig , where m i is the mass of the vehicle in the front and the rear. The scheduling parameter in the LPV model is the maximum value of the normalized load transfer R = max(|r i |).
The limit of the cornering condition is reached when the load on the inside wheels has dropped to zero and all the load has been transferred onto the outside wheels. Thus, if the normalized load transfer R takes on the value ±1 then the inner wheels in the bend lift off. This event does not necessary result in the rolling over of the vehicle. However, the aim of the control design is to prevent the rollover in all cases and thus the lift-off of the wheels must also be prevented. Thus, the normalized load transfer is also critical when the vehicle is stable but the tendency of the dynamics is unfavorable in terms of a rollover. An observer design method has been proposed for the estimation of the normalized load transfers, see (Gáspár et al., 2005).
In this paper the detection of an imminent rollover is based on the monitoring of the normalized lateral load transfers for both axles. In the control design the actual value of the normalized load transfer is used. In order to make an estimation of the lateral load transfers the roll angles of the unsprung masses φ t,i must be estimated. For this purpose a Luenberger type observer
η = (A(ρ) + K(ρ)C)η + B(ρ)u -K(ρ)y (3)
is used. The observer is based on the measured signals, a y , ψ and φ, where a y is the lateral acceleration.
In order to obtain a quadratically stable observer the LMI (A(ρ)+K(ρ)C) T P +P (A(ρ)+K(ρ)C) < 0 must hold for suitable K(ρ) and P = P T > 0 for all the corner points of the parameter space, see [START_REF] Apkarian | A convex characterization of gain-scheduled H ∞ controllers[END_REF][START_REF] Wu | Induced l 2 -norm control for LPV systems with bounded parameter variation rates[END_REF]. By introducing the auxiliary variable G(ρ) = P K(ρ), the following set of LMIs on the corner points of the parameter space must be solved:
A(ρ) T P + P A(ρ) + C T G(ρ) T + G(ρ)C < 0.
Weighting strategy for the control design
Based on the model of the suspension system a control is designed considering the suspension deflections at the suspension components as measured output signals and u s as the control inputs.
The performance outputs for control design are the passenger comfort (i.e. heave displacement and acceleration z a and z d ), the suspension deflections z si = z sf l z sf r z srl z srr and the tire deflection z ti = z tf l z tf r z trl z trr .
In an earlier paper of this project the design of a global chassis system is proposed, see [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. Here the suspension forces on the left and right hand sides at the front and rear are designed in the following form:
u a = u -b 0 ( żs -żus ) , (4)
where b 0 is a damping coefficient and u is the active force. When the value b 0 is selected small the suspension system focuses on passenger comfort, while the system focuses on road holding when value b 0 is selected large. In this paper this experience is exploited when a parameter dependent weighting strategy is applied in the design of the suspension system.
Figure 3 shows the structure of the active suspension system incorporated into the integrated control. The inputs of the controller are the measured relative displacements and their numerical differentiations. The controller uses the normalized lateral load transfer R and the so-called normalized moment χ = φ az Mact Mmax as scheduling variables. Here
φ az = 1 if |R| < R s 1 - |R| -R s R c -R s if R s ≤ |R| ≤ R c 0 if |R| > R c
, where R s is a warning level, while R c is a critical value of the admissible normalized lateral load transfer.
The value of the damping b 0 is scheduled by the normalized lateral load transfer R. Its value must be selected in such a way that it improves passenger comfort in normal cruising, however, it enhances road holding in an emergency. With this selection the active suspension system focuses on passenger comfort and road holding due to the value of the normalized load transfer. The LPV controller C is designed to meet the same criteria but its scheduling variable also reflects the presence of the moment demand. This is achieved by using a look-up table that encodes the function φ az .
- the sprung mass acceleration, the sprung mass displacement, the displacement of the unsprung mass, and the relative displacement between the sprung and unsprung masses. This parameter represents the balance between road holding and passenger comfort. The active suspension of the closed-loop model presents better performances than the passive model. When a small value of the tuning parameter is selected a better ride comfort without the deterioration of road holding or the suspension deflection is achieved. On the other hand, when the value of the tuning parameter increases, passenger comfort deteriorates, while road holding improves. This emphasizes the tradeoff between comfort and road holding and the significance of using b 0 as a varying coefficient. The weighting functions applied in the active suspension design are the following:
W zs (χ) = 3 s/(2πf 1 ) + 1 χ W θ (χ) = 2 s/(2πf 2 ) + 1 χ W φ (χ) = 2 s/(2πf 3 ) + 1 (1 -χ) W u = 10 -2 W zr = 7.10 -2 W dx = 10 5 W dy = 5.10 4 W n = 10 -3
where W zs is shaped in order to reduce bounce amplification of the suspended mass (z s ) between [0, 8]Hz (f 1 = 8Hz), W θ attenuate amplification in low frequency and the frequency peak at 9Hz (f 2 = 2Hz) and W φ reduces the rolling moment especially in low frequency (f 3 = 2Hz). Then W zr , W dx , W dy and W n model ground, roll, pitch disturbances (z r , M dx and M dy ) and measurement noise (n) respectively, and W u is used to limit the control signal. Note, that although the suspension model is a linear time invariant (LTI), the model of the augmented plant is LPV because of the weighting strategy. Thus, the control design is performed in an LPV setting.
The control of braking forces are designed in terms of the rollover problem. The measured outputs are the lateral acceleration of the sprung mass, the yaw rate and the roll rate of the sprung mass while u r are the control inputs. The performance outputs for the control design are the lateral acceleration a y , the lateral load transfers at the front and the rear ∆F zf and ∆F zr . The lateral acceleration is given by a
y = v β + v Ψ -h Φ.
The weighting function for the lateral acceleration is selected in such a way that in the low frequency domain the lateral accelerations of the body must be penalized by a factor of φ ay .
W p,ay = φ ay s 2000 + 1
s 12 + 1
,
where φ ay = 0 if |R| < R s |R| -R s R c -R s if R s ≤ |R| ≤ R c 1 if |R| > R c
, R c defines the critical status when the vehicle is in an emergency and the braking system must be activated. The gain φ ay in the weighting functions is selected as a function of parameter |R| in the following way. In the lower range of |R| the gain φ ay must be small, and in the upper range of |R| the gains must be large. Consequently, the weighting functions must be selected in such a way that they minimize the lateral load transfers in emergencies. In normal cruising the brake is not activated since the weight is small. The weighting function for the lateral loads and the braking forces are the following:
W p,F z = diag( 1 7 , 1 5 )
W p,∆F b = 10 -3 φ ay
The control design is performed based on an augmented LPV model of the yaw-roll dynamics where two parameters are selected as scheduling variables: the forward velocity and the maximum value of the normalized lateral load transfer either at the rear side or at the front ρ r = v R T .
In the design of rollover problem the difference in the braking forces is designed. Based on this fictitious control input the actual control forces at the front and rear on both sides generated in the braking system are calculated. Certainly, different optimization procedures, which distribute the fictitious force between the braking forces can be implemented. However, this problem is not within the scope of the paper.
selected R = [0, R s , R c , 1].
A SIMULATION EXAMPLE
In the simulation example, a double lane change maneuver is performed. In this maneuver passenger comfort and road holding are guaranteed by the suspension actuators and the rollover is prevented by modifying the operation of the suspension actuators and using an active brake. When a rollover is imminent the values R increase and reach a lower critical limit (R s ) and suspension forces are generated to create a moment at the front and the rear. When this dangerous situation persists and R reaches the second critical limit (R c ) the active brake system generates unilateral brake forces. The velocity of the vehicle is 90 km/h. The maneuver starts at the 1 st second and at the 2.5 th and the 7 th seconds 6-cm-high bumps on the front wheels disturbs the motion of the vehicle. The steering angle is generated with a ramp signal with 3.5 degrees maximum value and 4 rad/s filtering, which represents the finite bandwidth of the driver. The time responses of the steering angle, the road disturbance, the yaw rate, the roll rate, the lateral acceleration, the heave acceleration on the front-left side, the normalized load transfer at the rear and their maximum, the vehicle velocity, the roll moments at the front and the rear and the braking forces at the front and the rear are presented in Figure 5... Figure 7. The effect of a 6-cm-high bump disturbs heave acceleration at the 2.5 th second. The effect of this disturbance should be reduced by the suspension system, since it improves the passenger comfort and road holding. During the maneuver the lateral acceleration and the roll angles of the unsprung masses increase, thus the normalized load transfer also increases and reaches the critical value R s . Control forces (0.5 kN and 0.5 kN at the front and at the rear, respectively) should also be generated by the suspension forces so that the controller can prevent the rollover of the vehicle. Thus, during the maneuver the suspension system focuses on both the passenger comfort and the roll stability of the vehicle. The control moments are not sufficient to prevent rollovers, since the normalized lateral load transfers have achieved the critical value R c . Thus the brake is also activated and unilateral braking forces (approximately 0.9 kN and 1 kN on the left and the right hand sides in the rear) are generated. As a result the velocity of the vehicle decreases and the normalized lateral load transfers stay below the critical value 1. After the double lane maneuver another 6-cm-high bump disturbs the motion. In this case a large suspension force generated by the suspension actuators is needed to reduce both the magnitude and the duration of the oscillation. In the future it is possible to exploit the balance between the brake and suspension systems to enhance braking. During braking the real path might be significantly different from the desired path due to the brake moment which affects the yaw motion. Thus, the braking maneuver usually requires the drivers intervention. Applying the integrated control, the suspension system is able to focus on the emergency, consequently safety is improved.
CONCLUSION
In this paper an integrated control structure that uses active suspensions and an active brake is proposed to improve the safety of vehicles. In normal operation the suspension system focuses on passenger comfort and road holding, however, in an emergency the safety requirement overwrites the passenger comfort demand. When the emergency persists, the brake is also activated to reduce the rollover risk. The solution is based on a weighting strategy in which the normalized lateral load transfer is selected as a scheduling variable.
The design is based on an H ∞ control synthesis extended to LPV systems that uses a parameter dependent Lyapunov function. This control mechanism guarantees the balance between rollover prevention and passenger comfort. The applicability of the method is demonstrated through a complex simulation example containing vehicle maneuvers.
Fig. 1 .
1 Fig. 1. Vertical dynamics of the full-car model. The suspension system, which is shown in Figure1, contains springs, dampers and actuators between the body and the axle on both sides at the front and rear. The suspension stiffnesses, the tire stiffnesses and the suspension dampers at the front and rear are denoted by k f , k r , k tf , k tr , b f , b r , respectively. The front and rear displacements at both sides of the sprung and the unsprung masses are denoted by x 1f l , x 1f r , x 1rl , x 1rr and x 2f l , x 2f r , x 2rl , x 2rr , respectively. In the model,
Fig. 2 .
2 Fig.2. Yaw and roll dynamics of the full-car model
Fig. 3 .
3 Fig. 3. Logical structure of the suspension controllerFigure4illustrates the effects of the tuning parameters b 0 and χ through the frequency responses of the closed loop system to the disturbances, i.e. the sprung mass acceleration, the sprung mass displacement, the displacement of the unsprung mass, and the relative displacement between the sprung and unsprung masses. This parameter represents the balance between road holding and passenger comfort. The active suspension of the closed-loop model presents better performances than the passive model. When a small value of the tuning parameter is selected a better ride comfort without the deterioration of road holding or the suspension deflection is achieved. On the other hand, when the value of the tuning parameter increases, passenger comfort deteriorates, while road holding improves. This emphasizes the tradeoff between comfort and road holding and the significance of using b 0 as a varying coefficient.
Fig. 4 .
4 Fig. 4. Frequency responses of the suspension system
Fig. 5 .
5 Fig. 5. Time responses in the double lane change maneuver
Fig. 6 .
6 Fig. 6. Output signals in the double lane change maneuver
Fig. 7 .
7 Fig. 7. Control signals in the double lane change maneuver
The solution of an LPV problem is governed by the set of infinite dimensional LMIs being satisfied for all ρ ∈ F P , thus it is a convex problem. In practice, this problem is set up by gridding the parameter space and solving the set of LMIs that hold on the subset of F P . If this problem does not have a solution, neither does the original infinite dimension problem. Even if a solution is found, it does not guarantee that the solution satisfies the original constraints for all ρ. However, it is expected since the matrix functions are continuous with respect to ρ. The number of grid points depends on the nonlinearity and the operation range of the system. For the interconnection structure, H ∞ controllers are synthesized for 7 values of velocity in a range v = [20km/h, 140km/h]. The normalized lateral load transfer parameter space is
Acknowledgement: This work was supported by the Hungarian National Office for Research and Technology through the project "Advanced Vehicles and Vehicle Control Knowledge Center" (OMFB-01418/2004) and the Hungarian National Science Foundation (OTKA) under the grant T -048482 which are gratefully acknowledged. Dr Gáspár and Dr Szabó were supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. | 29,905 | [
"756640",
"834135",
"1618",
"5833"
] | [
"15818",
"15818",
"15818",
"388748",
"388748",
"388748"
] |
00148831 | en | [
"spi"
] | 2024/03/04 23:41:48 | 2007 | https://hal.science/hal-00148831/file/SSSC07_2.pdf | C Poussot-Vassal
O Sename
L Dugard
P Gáspár
Z Szabó
J Bokor
A LPV BASED SEMI-ACTIVE SUSPENSION CONTROL STRATEGY
Keywords: Semi-active suspension, Linear Parameter Varying (LPV), H ∞ Control, Linear Matrix Inequality (LMI)
In this paper we consider the design and analysis of a semi-active suspension controller. In the recent years different kinds of semi-active control strategies, like two-state Skyhook, LQ-clipped or model-predictive, have already been developed in the literature. In this paper we introduce a new semi-active suspension control strategy that achieves a priori limitations of a semi-active suspension actuator (dissipative constraint and force bounds) through the Linear Parameter Varying (LPV) theory. This new approach exhibits some interesting advantages compared to already existing methods (implementation, performance flexibility, robustness etc.). Both industrial criterion evaluation and simulations on nonlinear quarter vehicle model are performed to show the efficiency of the method and to validate the theoretical approach.
1. INTRODUCTION Suspension system's aim is to isolate passenger from road irregularities keeping a good road holding behavior. Industrial and scientist research is very active in the automotive field and suspension control and design is an important aspect for comfort and security achievements. In the last decade, many different active suspension system control approaches were developed: Linear Quadratic (e.g. [START_REF] Hrovat | Survey of advanced suspension developments and related optimal control application[END_REF], Skyhook (e.g. [START_REF] Poussot-Vassal | Optimal skyhook control for semi-active suspensions[END_REF], that suits well to improve comfort. Robust Linear Time Invariant (LTI) H ∞ (e.g. [START_REF] Rossi | H ∞ control of automotive semi-active suspensions[END_REF]) can achieve better results improving both comfort and road holding but which is limited to fixed performances (due to fixed weights), Mixed LTI H ∞ /H 2 (see [START_REF] Gáspár | Iterative model-based mixed H 2 /H ∞ control design[END_REF][START_REF] Lu | Multiobjective optimal suspension control to achieve integrated ride and handling performance[END_REF][START_REF] Takahashi | A multiobjective approach for H 2 and H ∞ active suspension control[END_REF] can improve H ∞ control reducing signals energy. Recently, Linear Parameter Varying (LPV) (e.g. [START_REF] Fialho | Road adaptive active suspension design using linear parameter varying gain scheduling[END_REF]Balas 2002, Gáspár et al. 2004), that can either adapt the performances according to measured signals (road, deflection, etc.) or improve robustness, taking care of the nonlinearities (see [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]. Most of these controllers are designed and validated assuming that the actuator of the suspension is active. Unfortunately such active actuators are not yet used on a wide range of vehicles because of their inherent cost (e.g. energy, weight, volume, price, etc.) and low performance (e.g. time response); hence, in the industry, semi-active actuators (e.g. controlled dampers) are often preferred. The twostate skyhook control is an on/off strategy that switches between high and low damping coefficient in order to achieve body comfort specifications. Clipped approaches leads to unpredictable behaviors and reduce the achievable performances. In Giorgetti et al.'s (2006) article, authors compare different semi-active strategies based on optimal control and introduce an hybrid model predictive optimal controller. The resulting control law is implemented by an hybrid controller that switches between a large number (function of the prediction horizon) of controllers and requires a full state measurement. In Canale et al.'s (2006) paper, another model-predictive semi-active suspension is proposed and results in good performances compared to the Skyhook and LQ-clipped approaches but requires an on-line "fast" optimization procedure. As it involves optimal control, full state measurement and a good knowledge of the model parameters are necessary.
The contribution of this paper is a new methodology to design a semi-active suspension controller through the LPV technique. The main interest of this approach is that it a priori fulfills the dissipative actuator constraint and allows the designer to build a controller in the robust framework (H∞, H2, mixed, etc.). Since the new method does not involve any on-line optimization process and only requires a single sensor, it is an interesting algorithm from the application point of view.
The paper is organized as follows: in Section 2 we introduce both the linear and nonlinear quarter-car models used for synthesis and validation. In Section 3, the considered semi-active suspension actuator (based on real experimental data) is described. In Section 4, the proposed semi-active LPV/H∞ control design and its scheduling strategy are presented. In Section 5, both an industrial performance criterion and simulations on a nonlinear quarter-vehicle model show the efficiency of the proposed method. Conclusions and perspectives are discussed in Section 6.
2. QUARTER CAR MODEL
The simplified quarter-vehicle model considered here involves the sprung mass (m_s) and the unsprung mass (m_us) and only captures the vertical motions (z_s, z_us). As the damping coefficient of the tire is negligible, the tire is simply modeled by a spring linked to the road (z_r), where a point contact is assumed. The passive suspension, located between m_s and m_us, is modeled by a damper and a spring as in Figure 1 (left).
The nonlinear "Renault Mégane Coupé" based passive model, that will be later used as our reference model (for performance evaluation and comparison with the controlled one), is given by:
Fig. 1. Passive (left) and controlled (right) quarter-car model.

$$
\begin{cases}
m_s \ddot{z}_s = -F_k(z_{def}) - F_c(\dot{z}_{def})\\
m_{us} \ddot{z}_{us} = F_k(z_{def}) + F_c(\dot{z}_{def}) - k_t (z_{us} - z_r)
\end{cases}
\qquad z_{def} \in [\underline{z}_{def}, \overline{z}_{def}]
\tag{1}
$$
where F_k(z_def) and F_c(ż_def) are the nonlinear forces provided by the spring and the damper respectively (see dashed curves in Figure 2). In the controlled suspension framework, one considers the model given in Figure 1 (right), described by:
$$
\begin{cases}
m_s \ddot{z}_s = -F_k(z_{def}) + u\\
m_{us} \ddot{z}_{us} = F_k(z_{def}) - u - k_t (z_{us} - z_r)
\end{cases}
\qquad z_{def} \in [\underline{z}_{def}, \overline{z}_{def}]
\tag{2}
$$
where u is the control input of the system, provided by the considered actuator. Note that in this formulation, the passive damper that appears in equation ( 1) is replaced by an actuator or a controlled damper.
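As an illustration of models (1)–(2), the Python sketch below integrates the controlled quarter-car dynamics under the step road input used later for validation; the mass, stiffness and damping values are assumed placeholders (not the actual Renault Mégane Coupé data), the spring force is linearized, and the control law is the nominal one with u_H∞ = 0.

```python
# Minimal simulation sketch of the controlled quarter-car model (2), assuming
# a linearized spring force and illustrative parameter values.
from scipy.integrate import solve_ivp

ms, mus = 315.0, 37.5                 # sprung / unsprung masses [kg] (assumed)
k, kt, c = 29500.0, 210000.0, 1500.0  # spring, tire stiffness, nominal damping (assumed)

def z_r(t):
    """Road profile: 3 cm step at t = 0.5 s, as in the time-domain validation test."""
    return 0.03 if t >= 0.5 else 0.0

def control(z_def, zdot_def):
    """Control law u = -c*zdot_def + u_Hinf; here u_Hinf = 0 (nominal damping only)."""
    return -c * zdot_def

def dynamics(t, x):
    zs, zs_d, zus, zus_d = x
    z_def, zdot_def = zs - zus, zs_d - zus_d
    Fk = k * z_def                        # linearized spring force F_k(z_def)
    u = control(z_def, zdot_def)          # actuator force
    zs_dd = (-Fk + u) / ms
    zus_dd = (Fk - u - kt * (zus - z_r(t))) / mus
    return [zs_d, zs_dd, zus_d, zus_dd]

sol = solve_ivp(dynamics, (0.0, 3.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
```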
3. SEMI-ACTIVE SUSPENSION ACTUATOR
In the previous section, the control input u was introduced to control the quarter-car model. Since we focus here on semi-active suspension control, emphasis is put in the sequel on the static performances and structural limitations of the considered semi-active actuator.
Active vs. Semi-active suspension systems
Active suspension systems can store, dissipate or deliver energy to the masses (m_s and m_us). When semi-active suspension actuators are considered, only energy dissipation is allowed. This difference is usually represented using the force–deflection speed space representation given in Figure 3. Hence, a semi-active controller can only deliver forces within the two semi-active quadrants. Note that when a fully active actuator is considered, all four quadrants can be used.
The Magneto-rheological damper
The actuators considered here are semi-active Continuously Controlled Dampers (CCD).
For this kind of controlled damper, it is assumed that all forces within the allowed semi-active quadrants can be achieved (Figure 3). In our application, we consider a magneto-rheological (M-R) damper, which is more and more studied and used in industry because of its good performance (see [START_REF] Du | Semiactive H ∞ control with magneto-rheological dampers[END_REF]). Through a change of the current input, the M-R damper viscosity (i.e. the damping coefficient) can be adjusted. The main advantages of such an actuator are that its weight and volume are similar to those of classic passive dampers and that the range of achievable damping coefficients is nearly infinite within the bounded area. In the meantime, its time response is very fast (about 10 ms) compared to an active hydraulic actuator.
For this purpose, we consider a Delphi M-R damper available at the Tecnológico de Monterrey (see [START_REF] Nino | Applying black box model in the identification of mr damper[END_REF]). To evaluate the upper and lower capacities of this actuator, a sinusoidal disturbance of frequency 0.25 Hz is applied at the extremity of the suspension (equivalent to a deflection disturbance) for different current magnitudes, in order to measure the achievable forces of this damper. Figure 4 shows the results for two different current values.
Note that, due to the hysteresis behavior of such actuators (see [START_REF] Du | Semiactive H ∞ control with magneto-rheological dampers[END_REF]), some measured points lie in the active quadrants.

Semi-active suspension static model

The area of achievable forces measured in this way defines the static model of the damper and will be denoted D. Then, for a given deflection speed (ż_def), if the controller computes a force F* outside the achievable damper range, the force provided to the system will be F⊥, the projection of F* onto the admissible force area (see Figure 5).

Fig. 5. Projection principle of the semi-active controlled damper model (F*_1 and F*_2 are outside the allowed area, F*_3 is inside).
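As a rough illustration of this projection principle, the sketch below clips a requested force into a dissipative region bounded by two damping rates and a force saturation; these bounds are assumed stand-ins for the measured Delphi damper envelope D, with sign conventions as in the force–deflection speed plane of Figure 5.

```python
# Sketch of the projection F* -> F_perp onto an assumed admissible damper region D,
# modeled here as the set {c*zdot_def : c_min <= c <= c_max}, bounded by |F| <= F_max.
def project_force(F_star, zdot_def, c_min=100.0, c_max=4000.0, F_max=4000.0):
    if zdot_def == 0.0:
        return 0.0                              # only zero force is dissipative at zero speed
    lo, hi = sorted((c_min * zdot_def, c_max * zdot_def))
    F_perp = min(max(F_star, lo), hi)           # clip into the dissipative quadrant
    return min(max(F_perp, -F_max), F_max)      # respect the force saturation

print(project_force(1500.0, 0.3))               # example: request 1500 N at 0.3 m/s
```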
4. LPV BASED ROBUST SEMI-ACTIVE SUSPENSION CONTROL DESIGN
For controller synthesis purposes, we consider the model described in (1), where F_k(z_def) and F_c(ż_def) are linear functions (see solid curves in Figure 2). The control law, applied to model (2), is then given by u = -c·ż_def + u_H∞, where c is the nominal linearized damping coefficient of the M-R damper and u_H∞ the additional force provided by the controller. To account for the actuator limitations described in Section 3, we propose a new method based on the polytopic LPV theory using the H∞ synthesis approach.
Frequency based industrial performance criterion
In the sequel, we introduce four performance objectives derived from industrial specifications (see [START_REF] Sammier | Skyhook and H ∞ control of active vehicle suspensions: some practical aspects[END_REF]):

- Comfort at high frequencies: vibration isolation between 4 and 30 Hz, evaluated by z̈_s/z_r (sprung mass acceleration).
- Comfort at low frequencies: vibration isolation between 0 and 5 Hz, evaluated by z_s/z_r.
- Road holding: evaluated by z_us/z_r between 0 and 20 Hz.
- Suspension constraint: suspension deflection, evaluated by z_def/z_r between 0 and 20 Hz.
In each case, the controlled suspension should perform better than the passive one. Therefore, to evaluate the control approach presented below with respect to the passive suspension, we introduce the power spectral density (PSD) measure of each of these signals over the frequency and amplitude range of interest, using the following formula:
$$
I_{\{f_1,a_1\}\rightarrow\{f_2,a_2\}}(x) = \int_{f_1}^{f_2}\!\int_{a_1}^{a_2} x^2(f,a)\, da\, df
\tag{3}
$$
where f_1 and f_2 (resp. a_1 and a_2) are the lower and upper frequency (resp. amplitude) bounds and x is the signal of interest. The frequency response x(f, a) of the nonlinear system is evaluated assuming a sinusoidal input z_r of varying magnitude (1–8 cm) and varying frequency, applied for 10 periods. A discrete Fourier transform is then performed to evaluate the system gain.
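A possible numerical implementation of criterion (3) is sketched below; the frequency/amplitude grid and the gain matrix are hypothetical placeholders to be filled with the responses obtained from the procedure just described.

```python
# Numerical sketch of criterion (3): integrate x^2(f, a) over amplitude, then
# over frequency, using the trapezoidal rule on a grid of measured gains.
import numpy as np
from scipy.integrate import trapezoid

def psd_criterion(freqs, amps, X):
    """freqs: (Nf,), amps: (Na,), X: (Nf, Na) gains x(f, a) of the signal of interest."""
    inner = trapezoid(X**2, amps, axis=1)   # integral over amplitude a
    return trapezoid(inner, freqs)          # integral over frequency f

# Hypothetical grid: 0-20 Hz frequencies, 1-8 cm sinusoidal road amplitudes.
freqs = np.linspace(0.1, 20.0, 50)
amps = np.linspace(0.01, 0.08, 8)
X = np.ones((freqs.size, amps.size))        # replace with the gains from the DFT
print(psd_criterion(freqs, amps, X))
```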
Proposed semi-active approach
To ensure the semi-activeness of the controller output, the static damper model D introduced in Section 3 is used in the LPV controller: the control force u computed by the controller is compared with the achievable force v (Figure 6). The controller is scheduled according to this difference (as anti-windup schemes do with the integral action), as:
|u − v| = 0 ⇒ semi-active control (u_H∞ ≠ 0)
|u − v| > ε ⇒ nominal control (u_H∞ = 0)
where ε is chosen sufficiently small (≃ 10⁻⁴) to ensure the semi-active behavior of the control. |u − v| > ε means that the required force is outside the allowed range; in that case the nominal "passive" control is chosen (u_H∞ = 0 ⇔ u = -c·ż_def). To incorporate this strategy in the framework of an LPV design, we introduce a parameter ρ with the following choice:
|u − v| = 0 ⇒ ρ = ρ_low
|u − v| > ε ⇒ ρ = ρ_high
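A minimal sketch of this scheduling rule is given below, assuming the bounds ρ_low = 0.1 and ρ_high = 10 used for W_u(ρ) later in this section.

```python
# Sketch of the scheduling rule: compare the requested force u with the
# achievable force v returned by the static damper model D.
RHO_LOW, RHO_HIGH = 0.1, 10.0   # bounds of W_u(rho), as chosen in the design below
EPS = 1e-4

def schedule_rho(u, v):
    """rho low -> performance control kept; rho high -> control penalized (passive-like)."""
    return RHO_LOW if abs(u - v) <= EPS else RHO_HIGH
```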
With this strategy, we can find a controller S(ρ) that either satisfies the performance objectives or behaves passively (when no control force can be applied because of the actuator limitations). The generalized block scheme incorporating the weighting functions is given in Figure 6, where ρ is the scheduling parameter used to satisfy the dissipative damper constraint.
Fig. 6. General block diagram.
LPV design & Scheduling strategy
As shown in Figure 6, the parameter ρ enters the weighting function W_u(ρ). Through the LPV design, W_u(ρ) varies between a lower and an upper bound. Recall that in the H∞ framework this weight indicates how strongly the control signal is penalized. Choosing a high W_u(ρ) = ρ forces the control signal to be low, and conversely. Hence, when ρ is large, the control signal is so heavily penalized that it is practically zero, and the closed-loop behavior is the same as that of the passive quarter-vehicle model. Conversely, when ρ is small, the control signal is no longer penalized, so the controller acts as an active controller and can achieve the performance objectives. Consider the generalized plant description,
$$
\begin{bmatrix} \dot{x} \\ z_\infty \\ y \end{bmatrix}
=
\begin{bmatrix}
A(\rho) & B_\infty(\rho) & B \\
C_\infty(\rho) & D_{\infty w}(\rho) & D_{\infty u} \\
C & 0 & 0
\end{bmatrix}
\begin{bmatrix} x \\ w_\infty \\ u \end{bmatrix}
\tag{4}
$$
where x = [x_quarter  x_weights]ᵀ gathers the states of the linearized quarter-vehicle model (obtained from equation (1)) and the states of the weighting functions, z_∞ = [W_zs·z_s  W_zus·z_us  W_u·u]ᵀ the performance signals, w_∞ = [W_zr⁻¹·z_r  W_n⁻¹·n]ᵀ the weighted input signals, y = z_def the measurement and ρ ∈ [ρ_low, ρ_high] the varying parameter.
The weighting functions are given by W_zs = 2·(2πf₁)/(s + 2πf₁), W_zus = (2πf₂)/(s + 2πf₂), W_zr = 7·10⁻², W_n = 10⁻⁴ and W_u(ρ) = ρ ∈ [0.1, 10]. W_zs (resp. W_zus) is shaped according to the performance specifications, W_zr and W_n model the ground disturbance (z_r) and the measurement noise (n) respectively, and W_u(ρ) is used to limit the control signal and achieve the semi-active constraint. Here f₁ = 3 Hz and f₂ = 5 Hz.
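For illustration, these weights can be written down as transfer functions, for instance with the python-control package (assumed available here); the code simply mirrors the definitions above.

```python
# Sketch of the performance weights as LTI transfer functions.
import numpy as np
import control as ct

f1, f2 = 3.0, 5.0
w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
W_zs  = ct.tf([2 * w1], [1, w1])    # 2*(2*pi*f1) / (s + 2*pi*f1)
W_zus = ct.tf([w2], [1, w2])        # (2*pi*f2) / (s + 2*pi*f2)
W_zr, W_n = 7e-2, 1e-4              # static weights on road input and noise

def W_u(rho):
    """Parameter-varying control weight, rho in [0.1, 10]."""
    return rho
```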
To find the LPV/H∞ controller, we solve, at each vertex of the polytope co{ρ_low, ρ_high}, the bounded real lemma (using a common, parameter-independent Lyapunov function):
$$
\begin{bmatrix}
A(\rho)^T K + K A(\rho) & K B(\rho) & C(\rho)^T \\
B(\rho)^T K & -\gamma_\infty^2 I & D(\rho)^T \\
C(\rho) & D(\rho) & -I
\end{bmatrix} < 0
\tag{5}
$$
Because of the ρ parameter, (5) is an infinite set of Bilinear Matrix Inequalities (BMIs); hence a nonconvex problem has to be solved. Via the change of basis expressed in [START_REF] Scherer | Multiobjective output-feedback control via LMI optimization[END_REF], extended to polytopic systems, we obtain a nonconservative LMI formulation of the same problem that is tractable by Semi-Definite Programming (SDP). As the parameter dependency enters linearly in the system definition, the polytopic approach is used (see e.g. [START_REF] Zin | An LPV/H ∞ active suspension control for global chassis technology: Design and performance analysis[END_REF]).
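As a small numerical illustration (with toy matrices, not the actual closed-loop quarter-car data), the sketch below assembles the block matrix of (5) for given vertex matrices and a candidate Lyapunov matrix K and checks its negative definiteness; in practice, K and the controller are of course obtained by solving the corresponding LMIs with an SDP solver.

```python
# Sketch of the bounded real lemma test (5) at one polytope vertex: given the
# closed-loop matrices (A, B, C, D), a candidate Lyapunov matrix K and a level
# gamma, check negative definiteness of the BRL block matrix numerically.
import numpy as np

def brl_matrix(A, B, C, D, K, gamma):
    m, p = B.shape[1], C.shape[0]
    return np.block([
        [A.T @ K + K @ A, K @ B,                  C.T],
        [B.T @ K,         -gamma**2 * np.eye(m),  D.T],
        [C,               D,                      -np.eye(p)],
    ])

# Toy vertex data (illustrative only).
A = np.array([[0.0, 1.0], [-4.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[3.0, 0.5], [0.5, 1.0]])       # candidate common Lyapunov matrix
M = brl_matrix(A, B, C, D, K, gamma=2.0)
print(np.all(np.linalg.eigvalsh(M) < 0))     # True -> H-infinity level gamma certified
```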
This leads to two controllers, S(ρ_low) and S(ρ_high), hence two closed loops, CL(ρ_low) and CL(ρ_high). The applied control law is then a convex combination of these two controllers. Hence, the controller S(ρ) and the closed loop CL(ρ) can be expressed as the convex hulls co{S(ρ_low), S(ρ_high)} ⇔ co{S₁, S₀} and co{CL(ρ_low), CL(ρ_high)} ⇔ co{CL₁, CL₀}. Note that a major interest of the LPV design is that it ensures the internal stability of the closed-loop system for all ρ ∈ [ρ_low, ρ_high]. Note also that the passive reference model is a "Renault Mégane Coupé", which is known to be a good road-holding car; nevertheless, the proposed semi-active control is shown to improve comfort without deteriorating road holding.
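The on-line convex combination of the two vertex controllers can be sketched as follows (state-space matrices as NumPy arrays; the matrix values and naming are illustrative, not the synthesized controllers).

```python
# Sketch of the polytopic scheduling: the applied controller is the convex
# combination of the two vertex controllers synthesized at rho_low and rho_high.
import numpy as np

def blend_vertex_controllers(rho, rho_low, rho_high, S_low, S_high):
    """S_low, S_high: (A, B, C, D) tuples of the vertex controllers (NumPy arrays).
    Returns the interpolated controller matrices for the current value of rho."""
    alpha = (rho_high - rho) / (rho_high - rho_low)   # weight on the rho_low vertex
    return tuple(alpha * Ml + (1.0 - alpha) * Mh for Ml, Mh in zip(S_low, S_high))

# Illustrative 1-state vertex controllers (placeholders).
S_low  = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]]))
S_high = (np.array([[-5.0]]), np.array([[0.5]]), np.array([[0.1]]), np.array([[0.0]]))
A_k, B_k, C_k, D_k = blend_vertex_controllers(0.1, 0.1, 10.0, S_low, S_high)
```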
5. SIMULATION & VALIDATION
Performance evaluation & Frequency behavior

In Figures 7 and 8 we plot the frequency responses z_s/z_r and z_us/z_r of the passive and the controlled quarter car. Both the frequency responses and the PSD measures show the improvement brought by the proposed approach. Applying the PSD criterion (3) to both the passive and the controlled nonlinear quarter-car models leads to the results summarized in Table 1, where the improvement is evaluated as (Passive PSD − Controlled PSD)/Passive PSD.

Time simulation results
To validate the approach and check whether the semi-active constraint is fulfilled, a step road disturbance (z_r = 3 cm) is applied to both the passive and the controlled system. This leads to the force–deflection speed trajectory and chassis displacement given in Figures 9 and 10. With this representation it is clear that the proposed LPV controller provides a force that fulfills the inherent dissipative constraint of the controlled damper while keeping good chassis behavior. It also appears that this strategy satisfies not only the semi-active constraint but also the actuator limitations.
6. CONCLUSION AND FUTURE WORKS
In this article, we have introduced a new strategy that ensures the dissipative constraint of a semi-active suspension while keeping the advantages of the H∞ control design. The interests of this approach compared to existing ones are:

- Flexible design: possibility to apply H∞, H2, pole placement, mixed criteria, etc.
- Measurement: only the suspension deflection sensor is required.
- Computation: the synthesis leads to two LTI controllers and a simple scheduling strategy (no on-line optimization process involved).
- Robustness: internal stability and robustness are guaranteed.

Hence the new semi-active strategy exhibits significant improvements in the achieved performances. Moreover, the implementation of such a controller results in a cheap solution. In future work we aim to implement this algorithm on a real suspension.
Fig. 2. Nonlinear (dashed) and linear (solid) spring (left) and damper (right) forces.
Fig. 3. Active vs. semi-active quadrants.
Fig. 4. Delphi force–deflection speed diagrams for different currents (0 A: crosses, 3 A: dots).
Fig. 9. LPV/H∞ semi-active controller (dotted), nominal damping & saturation force (solid).
Fig. 10. Chassis displacement for the passive (dashed) and the LPV/H∞ semi-active (solid) suspension.
Fig. 7. Frequency response of z_s/z_r for the passive (left) and controlled (right) nonlinear quarter vehicle.
Table 1. Passive vs. controlled PSD.

Signal    | Passive PSD | Controlled PSD | Gain [%]
z̈_s/z_r   | 280         | 206            | 25.4
z_s/z_r   | 2.4         | 2.1            | 12.1
z_us/z_r  | 1.3         | 1.2            | 7
z_def/z_r | 1.5         | 1.4            | 8.02
"834135",
"1618",
"5833",
"756640"
] | [
"388748",
"388748",
"388748",
"15818",
"15818",
"15818"
] |