# Ada Programming/Attributes
When variables and subprograms are declared, certain properties are normally left
to the compiler to specify (like the size or the address of a variable,
or the calling convention of a subprogram). Properties which may be queried
are called *Attributes*; those which may be specified are called
Aspects. Aspects and attributes
are defined in Annex K of the Ada Reference Manual.
## Language summary attributes
The concept of **attributes** is fairly unique to
Ada. Attributes allow you to get, and
sometimes set, information about objects or other language entities
such as types. A good example is the
`'Size` attribute. It describes the size of
an object or a type in bits.

```ada
A : Natural := Integer'Size;
```

However, unlike the `sizeof` operator
from C/C++, the `'Size`
attribute can also be set:

```ada
type Byte is range -128 .. 127;

for Byte'Size use 8;
```
Of course not all attributes can be set. An attribute starts with a tick
\' and is followed by its name. The compiler determines by context whether
the tick is the beginning of an attribute, a character literal or a
qualified expression.

```ada
A : Character := Character'Val (32);
B : Character := ' ';
S : String := Character'(')')'Image;
```
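As a further illustration, here is a minimal, compilable sketch (the type
and object names are invented for this example) that queries a few common
attributes:

```ada
with Ada.Text_IO; use Ada.Text_IO;

procedure Attribute_Demo is
   type Day is (Mon, Tue, Wed, Thu, Fri, Sat, Sun);
begin
   Put_Line (Integer'Image (Integer'Size));   --  size of Integer in bits
   Put_Line (Day'Image (Day'First));          --  first enumeration value: MON
   Put_Line (Integer'Image (Day'Pos (Wed)));  --  position number of Wed: 2
end Attribute_Demo;
```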
## List of language defined attributes
Ada 2005 : This is a new Ada 2005 attribute.\
Ada 2012 : This is a new Ada 2012 attribute.\
Obsolescent : This is a deprecated attribute and should not be used in new code.
### A -- B
- \'Access
- \'Address
- \'Adjacent
- \'Aft
- \'Alignment
- \'Base
- \'Bit_Order
- \'Body_Version
### C
- \'Callable
- \'Caller
- \'Ceiling
- \'Class
- \'Component_Size
- \'Compose
- \'Constrained
- \'Copy_Sign
- \'Count
### D -- F
- \'Definite
- \'Delta
- \'Denorm
- \'Digits
- \'Emax (Obsolescent)
- \'Epsilon (Obsolescent)
- \'Exponent
- \'External_Tag
- \'First
- \'First_Bit
- \'Floor
- \'Fore
- \'Fraction
### G -- L
- \'Has_Same_Storage (Ada 2012)
- \'Identity
- \'Image
- \'Input
- \'Large (Obsolescent)
- \'Last
- \'Last_Bit
- \'Leading_Part
- \'Length
### M
- \'Machine
- \'Machine_Emax
- \'Machine_Emin
- \'Machine_Mantissa
- \'Machine_Overflows
- \'Machine_Radix
- \'Machine_Rounding (Ada 2005)
- \'Machine_Rounds
- \'Mantissa (Obsolescent)
- \'Max
- \'Max_Alignment_For_Allocation (Ada 2012)
- \'Max_Size_In_Storage_Elements
- \'Min
- \'Mod (Ada 2005)
- \'Model
- \'Model_Emin
- \'Model_Epsilon
- \'Model_Mantissa
- \'Model_Small
- \'Modulus
### O -- R
- \'Old (Ada 2012)
- \'Output
- \'Overlaps_Storage (Ada 2012)
- \'Partition_ID
- \'Pos
- \'Position
- \'Pred
- \'Priority (Ada 2005)
- \'Range
- \'Read
- \'Remainder
- \'Result (Ada 2012)
- \'Round
- \'Rounding
### S
- \'Safe_Emax (Obsolescent)
- \'Safe_First
- \'Safe_Large (Obsolescent)
- \'Safe_Last
- \'Safe_Small (Obsolescent)
- \'Scale
- \'Scaling
- \'Signed_Zeros
- \'Size
- \'Small
- \'Storage_Pool
- \'Storage_Size
- \'Stream_Size (Ada 2005)
- \'Succ
### T -- V
- \'Tag
- \'Terminated
- \'Truncation
- \'Unbiased_Rounding
- \'Unchecked_Access
- \'Val
- \'Valid
- \'Value
- \'Version
### W -- Z
- \'Wide_Image
- \'Wide_Value
- \'Wide_Wide_Image (Ada 2005)
- \'Wide_Wide_Value (Ada 2005)
- \'Wide_Wide_Width (Ada 2005)
- \'Wide_Width
- \'Width
- \'Write
## List of implementation defined attributes
The following attributes are not available in all Ada compilers, only in
those that have implemented them.
Currently, only the implementation-defined attributes of a few compilers
are listed. You can help Wikibooks by adding the
specific attributes of other compilers:
GNAT : Implementation-defined attribute of the GNAT compiler from AdaCore/FSF.\
HP Ada : Implementation-defined attribute of the HP Ada compiler (formerly known as \"DEC Ada\").\
ICC : Implementation-defined attribute[^1] of the Irvine ICC compiler.\
PowerAda : Implementation-defined attribute of OC Systems\' PowerAda.\
SPARCompiler : Implementation-defined attribute of Sun\'s SPARCompiler Ada.
### A -- D
- \'Abort_Signal
(GNAT)
- \'Address_Size
(GNAT)
- \'Architecture
(ICC)
- \'Asm_Input
(GNAT)
- \'Asm_Output
(GNAT)
- \'AST_Entry
(GNAT, HP Ada)
- \'Bit (GNAT, HP Ada)
- \'Bit_Position
(GNAT)
- \'CG_Mode (ICC)
- \'Code_Address
(GNAT)
- \'Compiler_Key
(SPARCompiler)
- \'Compiler_Version
(SPARCompiler)
- \'Declared (ICC)
- \'Default_Bit_Order
(GNAT)
- \'Dope_Address
(SPARCompiler)
- \'Dope_Size
(SPARCompiler)
### E -- H
- \'Elaborated
(GNAT)
- \'Elab_Body
(GNAT)
- \'Elab_Spec
(GNAT)
- \'Emax (GNAT)
- \'Enabled (GNAT)
- \'Entry_Number
(SPARCompiler)
- \'Enum_Rep (GNAT)
- \'Enum_Val (GNAT)
- \'Epsilon (GNAT)
- \'Exception_Address
(ICC)
- \'Extended_Aft
(PowerAda)
- \'Extended_Base
(PowerAda)
- \'Extended_Digits
(PowerAda)
- \'Extended_Fore
(PowerAda)
- \'Extended_Image
(PowerAda)
- \'Extended_Value
(PowerAda)
- \'Extended_Width
(PowerAda)
- \'Extended_Wide_Image
(PowerAda)
- \'Extended_Wide_Value
(PowerAda)
- \'Extended_Wide_Width
(PowerAda)
- \'Fixed_Value
(GNAT)
- \'Has_Access_Values
(GNAT)
- \'Has_Discriminants
(GNAT)
- \'High_Word
(ICC)
- \'Homogeneous
(SPARCompiler)
### I -- N
- \'Img (GNAT)
- \'Integer_Value
(GNAT)
- \'Invalid_Value
(GNAT)
- \'Linear_Address
(ICC)
- \'Low_Word (ICC)
- \'Machine_Size
(GNAT, HP Ada)
- \'Max_Interrupt_Priority
(GNAT)
- \'Max_Priority
(GNAT)
- \'Maximum_Alignment
(GNAT)
- \'Mechanism_Code
(GNAT)
- \'Null_Parameter
(GNAT, HP Ada)
### O -- T
- \'Object_Size
(GNAT)
- \'Old (GNAT)
- \'Passed_By_Reference
(GNAT)
- \'Pool_Address
(GNAT)
- \'Range_Length
(GNAT)
- \'Ref (SPARCompiler)
- \'Storage_Unit
(GNAT)
- \'Stub_Type
(GNAT)
- \'Target (ICC)
- \'Target_Name
(GNAT)
- \'Task_ID
(SPARCompiler)
- \'Tick (GNAT)
- \'To_Address
(GNAT)
- \'Type_Class
(GNAT, HP Ada)
- \'Type_Key
(SPARCompiler)
### U -- Z
- \'UET_Address
(GNAT)
- \'Unconstrained_Array
(GNAT)
- \'Universal_Literal_String
(GNAT)
- \'Unrestricted_Access
(GNAT, ICC)
- \'VADS_Size
(GNAT)
- \'Value_Size
(GNAT)
- \'Wchar_T\_Size
(GNAT)
- \'Word_Size
(GNAT)
## See also
### Wikibook
- Ada Programming
- Ada Programming/Aspects
- Ada Programming/Pragmas
- Ada Programming/Keywords
### Ada Reference Manual
#### Ada 83
-
-
#### Ada 95
-
-
#### Ada 2005
-
-
#### Ada 2012
-
-
## References
[^1]: \"4.2 ICC-Defined Attributes\", *ICC Ada Implementation Reference
--- ICC Ada Version 8.2.5 for i960MC Targets*, document version
2.11.4 1
# Ada Programming/Aspects
When variables and subprograms are declared, certain properties are normally left
to the compiler to specify (like the size or the address of a variable,
or the calling convention of a subprogram). Properties which may be queried
are called Attributes; those
which may be specified are called *Aspects*. Some aspects correspond
to attributes, which then have the same name. Aspects and attributes
are defined in Annex K of the Ada Reference Manual,
pragmas in Annex L.
## Description
*Aspects* are certain properties of an entity that may be specified,
depending on the kind of entity, by an aspect specification as part of
its declaration or by a separate attribute definition clause or pragma
declaration.
```
Aspect_Specification ::=
   with Aspect_Name [ => Aspect_Definition ] {,
        Aspect_Name [ => Aspect_Definition ] } ;

Attribute_Definition_Clause ::=
     for entity_name'attribute_designator use expression;
   | for entity_name'attribute_designator use name;

pragma Name (Parameter_List);
```
If an aspect is not specified, it depends on the aspect itself whether
its value is left to the compiler or prescribed in the Ada RM.
The specification of a `Boolean`-valued aspect may omit the aspect
definition, which then has the value `True`.
Examples of such properties are the size of a type, i.e. the number of
bits a stand-alone object of that type will use; or that a subprogram
will not return from its call: aspect No_Return. This latter one is an
example of an aspect that has a `Boolean` value.
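As a hedged sketch of these two forms (the names `Sketch`, `Byte` and
`Fail` are invented for illustration, not taken from the text; a body for
`Fail` would still be needed elsewhere):

```ada
package Sketch is

   type Byte is range -128 .. 127
     with Size => 8;            --  aspect specification (Ada 2012)

   --  The same property via the older attribute definition clause:
   --  for Byte'Size use 8;

   procedure Fail
     with No_Return;            --  Boolean aspect; "=> True" is implied

end Sketch;
```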
## List of language defined aspects
If not marked otherwise, an aspect is specified by an
*Aspect_Specification*.
An aspect marked *Ada 2012* is an
Ada 2012 language functionality
not available in previous Ada generations. Aspects not so marked were
previously defined via pragmas or attribute definition clauses. This is
still possible, but deprecated.
### A -- D
- Address (Attribute_Definition_Clause)
- Alignment (Attribute_Definition_Clause)
- (Pragma)
-
-
-
-
- Bit_Order (Attribute_Definition_Clause)
- Coding (Enumeration_Representation_Clause)
- Component_Size (Attribute_Definition_Clause)
- (Ada 2012)
-
- (Ada 2012)
- (Ada 2012)
- (Ada 2012)
- (Ada 2012; Pragma)
- (Ada 2012)
- (Ada 2012)
- (Ada 2012; Aspect_Specification, Pragma)
- (Ada 2012)
### E -- O
- (Pragma)
- (Ada 2012)
-
-
- (Attribute_Definition_Clause)
- (Ada 2012)
-
- (Ada 2012)
- (Ada 2012)
-
- (Attribute_Definition_Clause)
- (Ada 2012; Attribute_Definition_Clause)
-
-
- (Ada 2012)
- Layout (Record_Representation_Clause)
-
- Machine_Radix (Attribute_Definition_Clause)
-
- (Attribute_Definition_Clause)
- (Ada 2012; Attribute_Definition_Clause)
### P -- Z
-
- (Ada 2012)
- (Ada 2012)
- (Ada 2012)
- (Ada 2012)
- (Ada 2012)
- (Pragma)
-
- (Pragma)
- (Attribute_Definition_Clause)
- (Ada 2012; Attribute_Definition_Clause)
-
- (Pragma)
- (Pragma)
- (Pragma)
- Size (Attribute_Definition_Clause)
- (Attribute_Definition_Clause)
- (Ada 2012)
- (Attribute_Definition_Clause)
- (Attribute_Definition_Clause)
- (Attribute_Definition_Clause)
- (Ada 2012)
- (Ada 2012)
- (Ada 2012)
-
- (Ada 2012)
-
-
- (Attribute_Definition_Clause)
- (Ada 2012; Attribute_Definition_Clause)
## List of implementation defined aspects
The following aspects are not available in all Ada compilers, only in
those that have implemented them.
Currently, only the implementation-defined aspects of a few compilers
are listed. You can help Wikibooks by adding the
specific aspects of other compilers:
GNAT : Implementation defined aspect of the GNAT compiler from AdaCore and FSF.
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
- (GNAT)
## See also
### Wikibook
- Ada Programming
- Ada Programming/Attributes
- Ada Programming/Keywords
- Ada Programming/Pragmas
### Ada Reference Manual
#### Ada 2012
-
-
-
# Ada Programming/Pragmas
## Description
Pragmas control the compiler, i.e.
they are compiler directives. They
have the standard form of
`pragma `*`Name`*` (`*`Parameter_List`*`);`
where the parameter list is optional.
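For instance (a minimal sketch, with illustrative choices of pragmas):

```ada
procedure Pragma_Demo is
   pragma Optimize (Time);                       --  pragma with a parameter list
   X : constant Integer := 1;
begin
   pragma Assert (X > 0, "X must be positive");  --  Ada 2005 Assert pragma
   null;
end Pragma_Demo;
```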
## List of language defined pragmas
Some pragmas are specially marked:
Ada 2005 : This is a new Ada 2005 pragma.\
Ada 2012 : This is a new Ada 2012 pragma.\
Obsolescent : This is a deprecated pragma and it should not be used in new code.
### A -- H
- All_Calls_Remote
- Assert (Ada 2005)
- Assertion_Policy (Ada 2005)
- Asynchronous (Obsolescent since Ada 2012)
- Atomic (Obsolescent since Ada 2012)
- Atomic_Components (Obsolescent since Ada 2012)
- Attach_Handler (Obsolescent since Ada 2012)
- Controlled (Removed from Ada 2012)
- Convention (Obsolescent since Ada 2012)
- CPU (Ada 2012)
- Default_Storage_Pool (Ada 2012)
- Detect_Blocking (Ada 2005)
- Discard_Names
- Dispatching_Domain (Ada 2012)
- Elaborate
- Elaborate_All
- Elaborate_Body
- Export (Obsolescent since Ada 2012)
### I -- O
- Import (Obsolescent since Ada 2012)
- Independent (Ada 2012)
- Independent_Components (Ada 2012)
- Inline (Obsolescent since Ada 2012)
- Inspection_Point
- Interface (Obsolescent)
- Interrupt_Handler (Obsolescent since Ada 2012)
- Interrupt_Priority (Obsolescent since Ada 2012)
- Linker_Options
- List
- Locking_Policy
- Memory_Size (Obsolescent)
- No_Return (Ada 2005; Obsolescent since Ada 2012)
- Normalize_Scalars
- Optimize
### P -- R
- Pack (Obsolescent since Ada 2012)
- Page
- Partition_Elaboration_Policy (Ada 2005)
- Preelaborable_Initialization (Ada 2005)
- Preelaborate
- Priority (Obsolescent since Ada 2012)
- Priority_Specific_Dispatching (Ada 2005)
- Profile (Ada 2005)
- Pure
- Queueing_Policy
- Relative_Deadline (Ada 2005)
- Remote_Call_Interface
- Remote_Types
- Restrictions
- Reviewable
### S -- Z
- Shared (Obsolescent)
- Shared_Passive
- Storage_Size
- Storage_Unit (Obsolescent)
- Suppress
- System_Name (Obsolescent)
- Task_Dispatching_Policy
- Unchecked_Union (Ada 2005)
- Unsuppress (Ada 2005)
- Volatile
- Volatile_Components
## List of implementation defined pragmas
The following pragmas are not available in all Ada compilers, only in
those that have implemented them.
Currently, only the implementation-defined pragmas of a few compilers
are listed. You can help Wikibooks by adding the
specific pragmas of other compilers:
GNAT : Implementation-defined pragma of the GNAT compiler from AdaCore and FSF.\
HP Ada : Implementation-defined pragma of the HP Ada compiler (formerly known as \"DEC Ada\").\
ICC : Implementation-defined pragma[^1] of the Irvine ICC compiler.\
PowerAda : Implementation-defined pragma of OC Systems\' PowerAda.\
SPARCompiler : Implementation-defined pragma of Sun\'s SPARCompiler Ada.
### A -- C
- Abort_Defer (GNAT)
- Ada_83 (GNAT)
- Ada_95 (GNAT)
- Ada_05 (GNAT)
- Ada_2005 (GNAT)
- Ada_12 (GNAT)
- Ada_2012 (GNAT)
- Annotate (GNAT)
- Assume_No_Invalid_Values (GNAT)
- Ast_Entry (GNAT, HP Ada)
- Bit_Pack (SPARCompiler)
- Built_In (SPARCompiler)
- Byte_Pack (SPARCompiler)
- C_Pass_By_Copy (GNAT)
- Call_Mechanism (ICC)
- Canonical_Streams (GNAT)
- Check (GNAT)
- Check_Name (GNAT)
- Check_Policy (GNAT)
- CM_Info (PowerAda)
- Comment (GNAT)
- Common_Object (GNAT, HP Ada)
- Compatible_Calls (ICC)
- Compile_Time_Error (GNAT)
- Compile_Time_Warning (GNAT)
- Complete_Representation (GNAT)
- Complex_Representation (GNAT)
- Component_Alignment (GNAT, HP Ada)
- Compress (ICC)
- Constrain_Private (ICC)
- Convention_Identifier (GNAT)
- CPP_Class (GNAT)
- CPP_Constructor (GNAT)
- CPP_Virtual (GNAT)
- CPP_Vtable (GNAT)
### D -- H
- Data_Mechanism (ICC)
- Debug (GNAT)
- Debug_Policy (GNAT)
- Delete_Subprogram_Entry (ICC)
- Elaboration_Checks (GNAT)
- Eliminate (GNAT)
- Error (SPARCompiler)
- Export_Exception (GNAT, HP Ada)
- Export_Function (GNAT, HP Ada, SPARCompiler)
- Export_Mechanism (ICC)
- Export_Object (GNAT, HP Ada, SPARCompiler)
- Export_Procedure (GNAT, HP Ada, SPARCompiler)
- Export_Value (GNAT)
- Export_Valued_Procedure (GNAT, HP Ada)
- Extend_System (GNAT)
- Extensions_Allowed (GNAT)
- External (GNAT, SPARCompiler)
- External_Name (ICC, SPARCompiler)
- External_Name_Casing (GNAT)
- Fast_Math (GNAT)
- Favor_Top_Level (GNAT)
- Finalize_Storage_Only (GNAT)
- Float_Representation (GNAT, HP Ada)
- Foreign (ICC)
- Generic_Mechanism (ICC)
- Generic_Policy (SPARCompiler)
### I -- L
- i960_Intrinsic (ICC)
- Ident (GNAT, HP Ada)
- Images (PowerAda)
- Implemented, previously named \'Implemented_By_Entry\' (GNAT)
- Implicit_Code (SPARCompiler)
- Implicit_Packing (GNAT)
- Import_Exception (GNAT, HP Ada)
- Import_Function (GNAT, HP Ada, SPARCompiler)
- Import_Mechanism (ICC)
- Import_Object (GNAT, HP Ada, SPARCompiler)
- Import_Procedure (GNAT, HP Ada, SPARCompiler)
- Import_Valued_Procedure (GNAT, HP Ada)
- Include (SPARCompiler)
- Initialize (SPARCompiler)
- Initialize_Scalars (GNAT)
- Inline_Always (GNAT)
- Inline_Generic (GNAT, HP Ada)
- Inline_Only (SPARCompiler)
- Instance_Policy (SPARCompiler)
- Interface_Constant (ICC)
- Interface_Information (PowerAda)
- Interface_Mechanism (ICC)
- Interface_Name (GNAT, HP Ada, ICC, SPARCompiler)
- Interrupt_State (GNAT)
- Invariant (GNAT)
- Keep_Names (GNAT)
- Label (ICC)
- License (GNAT)
- Link_With (GNAT, ICC, SPARCompiler)
- Linker_Alias (GNAT)
- Linker_Constructor (GNAT)
- Linker_Destructor (GNAT)
- Linker_Section (GNAT)
- Long_Float (GNAT: OpenVMS, HP Ada)
### M -- P
- Machine_Attribute (GNAT)
- Main (GNAT)
- Main_Storage (GNAT, HP Ada)
- No_Body (GNAT)
- No_Image (SPARCompiler)
- No_Strict_Aliasing (GNAT)
- No_Suppress (PowerAda)
- No_Reorder (ICC)
- No_Zero (ICC)
- Noinline (ICC)
- Non_Reentrant (SPARCompiler)
- Not_Elaborated (SPARCompiler)
- Not_Null (ICC)
- Obsolescent (GNAT)
- Optimize_Alignment (GNAT)
- Optimize_Code (SPARCompiler)
- Optimize_Options (ICC)
- Ordered (GNAT)
- Parameter_Mechanism (ICC)
- Passive (GNAT, HP Ada, SPARCompiler)
- Persistent_BSS (GNAT)
- Physical_Address (ICC)
- Polling (GNAT)
- Postcondition (GNAT)
- Precondition (GNAT)
- Preserve_Layout (PowerAda)
- Profile_Warnings (GNAT)
- Propagate_Exceptions (GNAT)
- Protect_Registers (ICC)
- Protected_Call (ICC)
- Protected_Return (ICC)
- Psect_Object (GNAT, HP Ada)
- Pure_Function (GNAT)
- Put (ICC)
- Put_Line (ICC)
### R -- S
- Reserve_Registers (ICC)
- Restriction_Warnings (GNAT)
- RTS_Interface (SPARCompiler)
- SCCS_ID (PowerAda)
- Share_Body (SPARCompiler)
- Share_Code (SPARCompiler)
- Share_Generic (GNAT, HP Ada)
- Shareable (ICC)
- Short_Circuit_And_Or (GNAT)
- Short_Descriptors (GNAT)
- Simple_Storage_Pool_Type (GNAT)
- Simple_Task (ICC)
- Source_File_Name (GNAT)
- Source_File_Name_Project (GNAT)
- Source_Reference (GNAT)
- Stack_Size (ICC)
- Static_Elaboration (ICC)
- Static_Elaboration_Desired (GNAT)
- Stream_Convert (GNAT)
- Style_Checks (GNAT)
- Subtitle (GNAT)
- Suppress_All (GNAT, HP Ada, PowerAda, SPARCompiler)
- Suppress_Elaboration_Checks (SPARCompiler)
- Suppress_Exception_Locations (GNAT)
- Suppress_Initialization (GNAT)
- System_Table (ICC)
### T -- Z
- Task_Attributes (SPARCompiler)
- Task_Info (GNAT)
- Task_Name (GNAT)
- Task_Storage (GNAT, HP Ada)
- Test_Case (GNAT)
- Thread_Body (GNAT)
- Thread_Local_Storage (GNAT)
- Time_Slice (GNAT, HP Ada, ICC)
- Time_Slice_Attributes (ICC)
- Title (GNAT, HP Ada)
- Unimplemented_Unit (GNAT)
- Universal_Aliasing (GNAT)
- Universal_Data (GNAT)
- Unmodified (GNAT)
- Unreferenced (GNAT)
- Unreferenced_Objects (GNAT)
- Unreserve_All_Interrupts (GNAT)
- Unsigned_Literal (ICC)
- Use_VADS_Size (GNAT)
- Validity_Checks (GNAT)
- Warning (SPARCompiler)
- Warnings (GNAT, SPARCompiler)
- Weak_External (GNAT)
- Wide_Character_Encoding (GNAT)
## See also
### Wikibook
- Ada Programming
- Ada Programming/Aspects
- Ada Programming/Attributes
- Ada Programming/Keywords
### Ada Reference Manual
#### Ada 83
-
-
#### Ada 95
-
-
#### Ada 2005
-
-
#### Ada 2012
-
-
## References
[^1]: \"2.2 ICC-Defined Pragmas\", *ICC Ada Implementation Reference ---
ICC Ada Version 8.2.5 for i960MC Targets*, document version
2.11.4.1
# Ada Programming/Libraries
## Predefined Language Libraries
Ada\'s built-in library is provided by three root library units: Ada,
Interfaces, and System; other library units are children of these. The
library is quite extensive and well-structured. These chapters, too, are
more reference-like. Most specifications included in them have been
obtained from the reznikmm/adalib
repository.
The package Standard contains all predefined identifiers in the
language.
- Standard
- Ada
- Interfaces
- System
Ada 83 had a much smaller library
and did not yet have this library structure. These root libraries were
introduced in Ada 95 to inhibit
*name pollution*. To preserve compatibility, there exist renamings of
all Ada 83 library units `XXX` as `Ada.XXX` or `System.XXX`,
respectively; see RM J.1.
Contrary to the names in the root hierarchies, the original Ada 83 names
`XXX` are not protected -- they may be reused for user-defined library
units.
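For example, both of the following context clauses name the same
predefined unit (a small sketch to show the renaming at work):

```ada
with Text_IO;       --  Ada 83 name, kept as a renaming of Ada.Text_IO
with Ada.Text_IO;   --  Ada 95 name of the same unit

procedure Hello is
begin
   Ada.Text_IO.Put_Line ("Hello from the predefined library");
end Hello;
```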
## Implementation-Defined Language Libraries
Every Ada implementation has an extension of the predefined Ada
library. One example is the library provided by the
GNAT implementation.
- GNAT
## Other Language Libraries
Other libraries are not part of the standard but are freely available.
- Multi Purpose
- Container
Libraries
- GUI Libraries
- Distributed
Objects
- Database
- Web Programming
- Input/Output
## See also
### Wikibook
- Ada Programming
### Ada Reference Manual
-
### Resources
- A collection of Tools and
Libraries
maintained by the Ada Resource Association.
- The collection of crates of
Alire, a package manager for Ada
libraries and applications in source form.
# Ada Programming/Portals
## Forges of open-source projects
SourceForge : Currently, there are more than 200 Ada projects hosted at SourceForge --- including the example programs for the Ada Programming wikibook.
GitHub : A source code repository based on Git with many recent developments.\
Codeberg : Codeberg is a democratic community-driven, non-profit software development platform. It contains a few Ada projects.
Ada-centric forges : There are some Ada-centric forges/catalogues hosted by Ada associations and individuals:

- <http://codelabs.ch>
- <http://www.adaforge.org>
## Directories of freely available tools and libraries
Ada Information Clearinghouse --- Free Tools and Libraries
Open Hub (language summary, ada tag, language search): Open Hub is a directory of Open Source projects. Its main features are source code analysis of public repositories and public reviews of projects.
## Collections of Ada source code
The Public Ada Library (PAL) : The PAL is a library of Ada and VHDL software, information, and courseware that contains over 1 BILLION bytes of material (mainly in compressed form). All items in the PAL have been released to the public with unlimited distribution, and, in most cases (the exceptions are shareware), the items are freeware.
Ada and Software Engineering Library Version 2 (ASE2) : *The ASE2 Library contains over 1.1GB of material on Ada and Software Engineering assembled through a collaboration with over 60 organizations*. Walnut Creek CD-ROM once sold copies of this library. Nowadays, it is no longer maintained, but is still hosted in the archive.adaic.com server. It may contain useful resources, but it is highly redundant with other libraries.
AdaPower : A directory and collection of Ada tools and resources.
## See also
### Wikibook
- Ada Programming
- Ada Programming/Tutorials
- Ada Programming/Web 2.0
# Ada Programming/Tutorials
This page contains a list of other Ada tutorials on the Net.
1. Ada Programming, available on
Wikibooks, is based on the Ada
2005 standard and currently
being updated to Ada 2012.
2. Introduction to Ada at
learn.adacore.com
is an interactive web tutorial based on Ada 2012. It has editable
and compilable examples.
3. Lovelace is a free (no-charge),
self-directed Ada 95 tutorial available on the World Wide Web (WWW).
Lovelace assumes that the user already knows another algorithmic
programming language, such as C, C++, or Pascal. Lovelace is
interactive and contains many short sections, most of which end with
a question to help ensure that users understand the material.
Lovelace can be used directly from the WWW, downloaded, or run from
CD-ROM. Lovelace was developed by David A. Wheeler.
4. AdaTutor is
an interactive Ada 95 tutorial that was distributed as a
public-domain Ada
program
and has been converted to a web tutorial.
5. The Ada-95: A guide for C and C++
programmers is a short
hypertext tutorial for programmers who have a C or C++ style
programming language background. It was written by Simon Johnston,
with some additional text by Tucker Taft. PDF
edition.
6. Dale Stanbrough\'s
Introduction is a
set of notes that provide a simple introduction to Ada. This
material has been used for a few years as a simple introduction to
the language.
7. Coronado Enterprises Ada 95 Tutorial: shareware
edition.
# Ada Programming/Web 2.0
Here is a list of Web 2.0 resources
about Ada:
## News & Blogs
- reddit.com --- Ada
[RSS],
social news website on which users can post links to content on the
web
- Stack Overflow --- Ada
questions
- Ada Gems, programming tips and
articles about specific language features
- Ada Planet, news aggregator.
- Ada Programming blog
[RSS],
by Martin Krischik and other authors
- Java 2 Ada
[RSS]
## Social Networks
- \@AdaProgrammers at Twitter
- Linked In --- Ada developers
group (free register
needed)
- Gitter chat room
## General Info
- Ada Resource Association
- Awesome Ada, a curated
list of awesome resources related to the Ada and SPARK programming
language.
- SlideShare, presentations about Ada
programming, Ada
95, Ada
2005, Ada
2012 tag pages.
- Open Hub, a directory of Open
Source projects. Its main features are source code
analysis of public repositories
and public reviews of projects
- Ada@Krischik, Ada homepage of Martin
Krischik
- WikiCFP --- Calls For Papers on
Ada
[RSS]
- AdaCore channel on
youtube.com, Ada related
videos.
## Wikimedia projects
- **Wikipedia articles** (Ada category):
- Ada
- Jean Ichbiah
- Beaujolais effect
- ISO 8652
- Ada Semantic Interface
Specification
- \...
- **Wiktionary entries**:
- ACATS
- Ada
- ASIS
- **Wikisource documents**:
- Steelman language
requirements
- Stoneman requirements
- **Wikibooks tutorials**:
- *Programación en Ada*, in
Spanish
- *Programmation Ada*, in
French
- *Ada*, in Italian
- **Wikiquote**:
- Programming languages ---
Ada
- **Wikiversity**:
- Ada course
## Source code
- Examples *Ada Programming*
wikibook
- Rosetta Code --- Ada
Category, programming examples
in multiple languages
## Projects
- AdaCL
- The Ada 95 Booch
Components
- The GNU Ada Compiler
- ASIS
- GLADE
- Florist
- GNAT --- GCC Wiki
- RTEMSAda
- AVR-Ada - Ada compiler for Atmel
microcontrollers (Arduinos)
- Ada Bare Bones - Tutorial to write a small Multiboot kernel written
in Ada
# Engineering Acoustics/Simple Oscillation
## The Position Equation
This section shows how to form the equation describing the position of a
mass on a spring.
For a simple oscillator consisting of a mass *m* attached to one end of
a spring with a spring constant *s*, the restoring force, *f*, can be
expressed by the equation
```{=html}
<center>
```
$f = -sx\,$
```{=html}
</center>
```
where *x* is the displacement of the mass from its rest position.
Substituting the expression for *f* into the linear momentum equation,
```{=html}
<center>
```
$f = ma = m{d^2x \over dt^2}\,$
```{=html}
</center>
```
where *a* is the acceleration of the mass, we can get
```{=html}
<center>
```
$m\frac{d^2 x}{d t^2 }= -sx$
```{=html}
</center>
```
or,
```{=html}
<center>
```
$\frac{d^2 x}{d t^2} + \frac{s}{m}x = 0$
```{=html}
</center>
```
Note that the frequency of oscillation $\omega_0$ is given by
```{=html}
<center>
```
$\omega_0^2 = {s \over m}\,$
```{=html}
</center>
```
To solve the equation, we can assume
```{=html}
<center>
```
$x(t)=A e^{\lambda t} \,$
```{=html}
</center>
```
The force equation then becomes
```{=html}
<center>
```
$(\lambda^2+\omega_0^2)A e^{\lambda t} = 0,$
```{=html}
</center>
```
Giving the equation
```{=html}
<center>
```
$\lambda^2+\omega_0^2 = 0,$
```{=html}
</center>
```
Solving for $\lambda$
```{=html}
<center>
```
$\lambda = \pm j\omega_0\,$
```{=html}
</center>
```
This gives the equation of *x* to be
```{=html}
<center>
```
$x = C_1e^{j\omega_0 t}+C_2e^{-j\omega_0 t}\,$
```{=html}
</center>
```
Note that
```{=html}
<center>
```
$j = (-1)^{1/2}\,$
```{=html}
</center>
```
and that *C~1~* and *C~2~* are constants given by the initial conditions
of the system.
If the position of the mass at *t* = 0 is denoted as *x~0~*, then
```{=html}
<center>
```
$C_1 + C_2 = x_0\,$
```{=html}
</center>
```
and if the velocity of the mass at *t* = 0 is denoted as *u~0~*, then
```{=html}
<center>
```
$-j(u_0/\omega_0) = C_1 - C_2\,$
```{=html}
</center>
```
Solving the two boundary condition equations gives
```{=html}
<center>
```
$C_1 = \frac{1}{2}( x_0 - j( u_0 / \omega_0 ))$
```{=html}
</center>
```
\
```{=html}
<center>
```
$C_2 = \frac{1}{2}( x_0 + j( u_0 / \omega_0 ))$
```{=html}
</center>
```
\
The position is then given by
```{=html}
<center>
```
**$x(t) = x_0 cos(\omega_0 t) + (u_0 /\omega_0 )sin(\omega_0 t)\,$**
```{=html}
</center>
```
\
This equation can also be found by assuming that *x* is of the form
```{=html}
<center>
```
$x(t)=A_1 cos(\omega_0 t) + A_2 sin(\omega_0 t)\,$
```{=html}
</center>
```
And by applying the same initial conditions,
```{=html}
<center>
```
$A_1 = x_0\,$
```{=html}
</center>
```
\
```{=html}
<center>
```
$A_2 = \frac{u_0}{\omega_0}\,$
```{=html}
</center>
```
\
This gives rise to the same position equation
```{=html}
<center>
```
$x(t) = x_0 cos(\omega_0 t) + (u_0 /\omega_0 )sin(\omega_0 t)\,$
```{=html}
</center>
```
Back to Main page
## Alternate Position Equation Forms
If *A~1~* and *A~2~* are of the form
```{=html}
<center>
```
$A_1 = A cos( \phi)\,$
```{=html}
</center>
```
```{=html}
<center>
```
$A_2 = A sin( \phi)\,$
```{=html}
</center>
```
\
Then the position equation can be written
```{=html}
<center>
```
**$x(t) = Acos( \omega_0 t - \phi )\,$**
```{=html}
</center>
```
\
By applying the initial conditions (*x(0)=x~0~, u(0)=u~0~*) it is found
that
```{=html}
<center>
```
$x_0 = A cos(\phi)\,$
```{=html}
</center>
```
\
```{=html}
<center>
```
$\frac{u_0}{\omega_0} = A sin(\phi)\,$
```{=html}
</center>
```
\
If these two equations are squared and summed, then it is found that
```{=html}
<center>
```
**$A = \sqrt{x_0^2 + (\frac{u_0}{\omega_0})^2}\,$**
```{=html}
</center>
```
\
And if the ratio of the same two equations is taken, the result is
that
```{=html}
<center>
```
**$\phi = tan^{-1}(\frac{u_0}{x_0 \omega_0})\,$**
```{=html}
</center>
```
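For instance, with illustrative initial conditions $x_0 = 3$ cm and
$u_0/\omega_0 = 4$ cm (numbers chosen only to make the arithmetic plain):
```{=html}
<center>
```
$A = \sqrt{3^2 + 4^2} = 5 \text{ cm}, \qquad \phi = tan^{-1}(4/3) \approx 0.93 \text{ rad}$
```{=html}
</center>
```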
The position equation can also be written as the real part of the
complex position equation
```{=html}
<center>
```
$\mathbf{Re} [x(t)] = x(t) = A cos(\omega_0 t - \phi)\,$
```{=html}
</center>
```
\
Due to Euler\'s rule ($e^{j\phi} = \cos\phi + j\sin\phi$), **x**(t) is of the form
```{=html}
<center>
```
$x(t) = A e^{j(\omega_0 t - \phi)}\,$
```{=html}
</center>
```
The following worked examples give the natural frequency for several
spring-mass configurations (figures not fully recovered):

1. (figure not recovered)
   $\omega_{0} = \sqrt{\frac{s_{TOTAL}}{m_{TOTAL}}} = \sqrt{\frac{2s}{M}}$
   $\mathbf{f_0} = \frac{\omega_{0}}{2\pi} = \mathbf{\frac{1}{2\pi}\sqrt{\frac{2s}{M}}}$
2. (figure: Simple Oscillator 1.2.1.b)
   $\omega_{0} = \sqrt{\frac{s_{TOTAL}}{m_{TOTAL}}} = \sqrt{\frac{s}{2M}}$
   $\mathbf{f_0} = \frac{\omega_{0}}{2\pi} = \mathbf{\frac{1}{2\pi}\sqrt{\frac{s}{2M}}}$
3. (figures: Simple Oscillator 1.2.1.c and its solution)
   $\mathbf{1.}\text{ } s(x_1-x_2) = sx_2$
   $\mathbf{2.}\text{ } -s(x_1-x_2) = m \frac{d^2x_1}{dt^2}$
   $\frac{d^2x_1}{dt^2} + \frac{s}{2m}x_1 = 0$
   $\omega_0 = \sqrt{\frac{s}{2m}}$
   $\mathbf{f_0 = \frac{1}{2\pi}\sqrt{\frac{s}{2m}}}$
4. (figure: Simple Oscillator 1.2.1.d)
   $\omega_0=\sqrt{\frac{2s}{m}}$
   $\mathbf{f_0 = \frac{1}{2\pi}\sqrt{\frac{2s}{m}}}$
Back to Main page
# Engineering Acoustics/Mechanical Resistance
## Mechanical Resistance
For most systems, a simple oscillator is not a very accurate model.
While a simple oscillator involves a continuous transfer of energy
between kinetic and potential form, with the sum of the two remaining
constant, real systems involve a loss, or dissipation, of some of this
energy, which is never recovered as kinetic or potential energy. The
mechanisms that cause this dissipation are varied and depend on many
factors. Some of these mechanisms include drag on bodies moving through
the air, thermal losses, and friction, but there are many others. Often,
these mechanisms are either difficult or impossible to model, and most
are non-linear. However, a simple, linear model that attempts to account
for all of these losses in a system has been developed.
## Dashpots
The most common way of representing mechanical resistance in a damped
system is through the use of a dashpot. A dashpot acts like a shock
absorber in a car. It produces resistance to the system\'s motion that
is proportional to the system\'s velocity. The faster the motion of the
system, the more mechanical resistance is produced.
As seen in the graph above, a linear relationship is assumed between the
force of the dashpot and the velocity at which it is moving. The
constant that relates these two quantities is $R_M$, the mechanical
resistance of the dashpot. This relationship, known as the viscous
damping law, can be written as:
```{=html}
<center>
```
$F = R_M\cdot u$
```{=html}
</center>
```
Also note that the force produced by the dashpot is always in phase with
the velocity.
The power dissipated by the dashpot can be derived by looking at the
work done as the dashpot resists the motion of the system:
```{=html}
<center>
```
$P_D = \frac{1}{2}\Re\left[\hat{F}\cdot\hat{u^*}\right]= \frac{|\hat{F}|^{2}}{2R_{M}}$
```{=html}
</center>
```
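Writing out the intermediate step (using the viscous damping law
$\hat{F} = R_M\hat{u}$, so that force and velocity are in phase):
```{=html}
<center>
```
$P_D = \frac{1}{2}\Re\left[\hat{F}\cdot\hat{u^*}\right] = \frac{1}{2}\Re\left[R_M\hat{u}\cdot\hat{u^*}\right] = \frac{1}{2}R_M|\hat{u}|^2 = \frac{|\hat{F}|^{2}}{2R_{M}}$
```{=html}
</center>
```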
## Modeling the Damped Oscillator
In order to incorporate the mechanical resistance (or damping) into the
forced oscillator model, a dashpot is placed next to the spring. It is
connected to the mass ($M_M$) on one end and attached to the ground on
the other end. A new equation describing the forces must be developed:
```{=html}
<center>
```
$F - S_Mx - R_Mu = M_Ma \rightarrow F = S_Mx + R_M\dot{x} + M_M\ddot{x}$
```{=html}
</center>
```
Its phasor form is given by the following:
```{=html}
<center>
```
$\hat{F}e^{j\omega t} = \hat{x}e^{j\omega t}\left[S_M + j\omega R_M + \left(-\omega ^2\right)M_M\right]$
```{=html}
</center>
```
## Mechanical Impedance for Damped Oscillator
Previously, the impedance for a simple oscillator was defined as
$\mathbf{\frac{F}{u}}$. Using the above equations, the impedance of a
damped oscillator can be calculated:
```{=html}
<center>
```
$\hat{Z_M} = \frac{\hat{F}}{\hat{u}} = R_M + j\left(\omega M_M - \frac{S_M}{\omega}\right) = |\hat{Z_M}|e^{j\Phi_Z}$
```{=html}
</center>
```
For very low frequencies, the spring term dominates because of the
$\frac{1}{\omega}$ relationship. Thus, the phase of the impedance
approaches $\frac{-\pi}{2}$ for very low frequencies. This phase causes
the velocity to \"lag\" the force for low frequencies. As the frequency
increases, the phase difference increases toward zero. At resonance, the
imaginary part of the impedance vanishes, and the phase is zero. The
impedance is purely resistive at this point. For very high frequencies,
the mass term dominates. Thus, the phase of the impedance approaches
$\frac{\pi}{2}$ and the velocity \"leads\" the force for high
frequencies.
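These limiting cases can be summarized directly from the impedance
expression:
```{=html}
<center>
```
$\hat{Z_M} \approx -j\frac{S_M}{\omega} \ (\omega \to 0), \qquad \hat{Z_M} = R_M \ \left(\omega = \omega_0 = \sqrt{S_M/M_M}\right), \qquad \hat{Z_M} \approx j\omega M_M \ (\omega \to \infty)$
```{=html}
</center>
```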
\
Based on the previous equations for dissipated power, we can see that
the real part of the impedance is indeed $R_M$. The real part of the
impedance can also be defined as the cosine of the phase times its
magnitude. Thus, the following equations for the power can be obtained.
```{=html}
<center>
```
$W_R = \frac{1}{2}\Re\left[\hat{F}\hat{u^{*}}\right] = \frac{1}{2}R_M|\hat{u}|^2 = \frac{1}{2}\frac{|\hat{F}|^2}{|\hat{Z_M}|^2}R_M = \frac{1}{2}\frac{|\hat{F}|^2}{|\hat{Z_M}|}cos(\Phi_Z)$
```{=html}
</center>
```
Back to Main page
# Engineering Acoustics/Characterizing Damped Mechanical Systems
## Characterizing Damped Mechanical Systems
The response of a damped mechanical oscillating system can
be characterized by two parameters: the resonance frequency
$\omega_0$ and the damping of the system, expressed either as
$Q$ (quality factor) or $B$ (temporal absorption). In practice,
finding these parameters allows quantification of unknown
systems and derivation of the other parameters within the system.
------------------------------------------------------------------------
Using the mechanical impedance in the following equation, notice that
the imaginary part will equal zero at resonance:
$Z_m = F/u = R_m + j(\omega M_m - s/\omega)$
Resonance case: $\omega M_m = s/\omega$
## Calculating the Mechanical Resistance
The decay time of the system is related to $1/B$, where $B$ is the temporal
absorption. $B$ is related to the mechanical resistance and to the mass of
the system by the following equation:
$B = \frac{R_m}{2 M_m}$
The mechanical resistance can be derived from this equation by knowing
the mass and the temporal absorption.
## Critical Damping
The system is said to be critically damped when:
$R_c = 2 M_m \sqrt{s/M_m} = 2\sqrt{s M_m} = 2 M_m \omega_n$
A critically damped system is one in which an entire cycle is never
completed. The absorption coefficient in this type of system equals the
natural frequency. The system will begin to oscillate; however, the
amplitude will decay exponentially to zero within the first oscillation.
## Damping Ratio
$\text{Damping Ratio} = R_m/R_c$
The damping ratio is a comparison of the mechanical resistance of a
system to the resistance value required for critical damping. Rc is the
value of Rm for which the absorption coefficient equals the natural
frequency (critical damping). A damping ratio equal to 1 therefore is
critically damped, because the mechanical resistance value Rm is equal
to the value required for critical damping Rc. A damping ratio greater
than 1 will be overdamped, and a ratio less than 1 will be underdamped.
## Quality Factor
The Quality Factor (Q) is way to quickly characterize the shape of the
peak in the response. It gives a quantitative representation of power
dissipation in an oscillation.
$Q = \frac{\omega_{0}}{\omega_u - \omega_l}$
$\omega_u$ and $\omega_l$ are called the half-power points. When looking at the response
of a system, the two points on either side of the peak where the power
equals half the peak power define $\omega_u$ and $\omega_l$. The distance
between the two is called the half-power bandwidth. So, the resonance
frequency divided by the half-power bandwidth gives the quality
factor. Mathematically, it takes $Q/\pi$ oscillations for the vibration to
decay to a factor of $1/e$ of its original amplitude.
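As an illustrative example (numbers invented for this sketch): a system
with resonance at 100 Hz whose half-power points are 5 Hz apart has
$Q = \frac{\omega_{0}}{\omega_u - \omega_l} = \frac{2\pi \cdot 100}{2\pi \cdot 5} = 20$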
Back to Main page
# Engineering Acoustics/Electro-Mechanical Analogies
## Why Circuit Analogs?
Acoustic devices are often combinations of mechanical and electrical
elements. A common example of this would be a loudspeaker connected to a
power source. It is useful in engineering applications to model the
entire system with one method. This is the reason for using a circuit
analogy in a vibrating mechanical system. The same analytic method can
be applied to Electro-Acoustic
Analogies.
## How Electro-Mechanical Analogies Work
An electrical circuit is described in terms of its potential (voltage)
and flux (current). To construct a circuit analog of a mechanical system
we define flux and potential for the system. This leads to two separate
analog systems. The Impedance Analog denotes the force acting on an
element as the potential and the velocity of the element as the flux.
The Mobility Analog equates flux with the force and velocity with
potential.
|                      | Mechanical | Electrical Equivalent |
|----------------------|------------|-----------------------|
| **Impedance Analog** |            |                       |
| Potential:           | Force      | Voltage               |
| Flux:                | Velocity   | Current               |
| **Mobility Analog**  |            |                       |
| Potential:           | Velocity   | Voltage               |
| Flux:                | Force      | Current               |
For many, the mobility analog is considered easier for a mechanical
system. It is more intuitive for force to flow as a current and for
objects oscillating at the same frequency to be wired in parallel. However,
either method will yield equivalent results and can also be translated
using the dual (dot) method.
## The Basic Elements of an Oscillating Mechanical System
**The Mechanical Spring:**
: ![](resistor11.jpg "fig:resistor11.jpg")
The ideal spring is considered to be operating within its elastic limit,
so the behavior can be modeled with Hooke\'s
Law.
It is also assumed to be massless and have no damping effects.
$$F=-cx = -c\int u\, dt$$
**The Mechanical Mass**
In a vibrating system, a mass element opposes acceleration. From
Newton\'s Second Law:
$$F=mx^{\prime\prime}=ma=m\frac{du}{dt}$$
**The Mechanical Resistance**
: ![](Dashpot.png "fig:Dashpot.png")
The dashpot is an ideal viscous damper which opposes velocity.
$$F=R u\displaystyle$$
**Ideal Generators**
The two ideal generators which can drive any system are an ideal
velocity and ideal force generator. The ideal velocity generator can be
denoted by a drawing of a crank or simply by declaring $u(t)=f(t)$, and
the ideal force generator can be drawn with an arrow or by declaring
$F(t)=f(t)$
**Simple Damped Mechanical Oscillators**
: ![](Forced_Oscillator.PNG "fig:Forced_Oscillator.PNG")
In the following sections we will consider this simple mechanical system
as a mobility and impedance analog. It can be driven either by an ideal
force or an ideal velocity generator, and we will consider simple
harmonic motion. The m in the subscript denotes a mechanical system,
which is currently redundant, but can be useful when combining
mechanical and acoustic systems.
## The Impedance Analog
**The Mechanical Spring**
In a spring, force is related to the displacement from equilibrium. By
Hooke\'s Law,
$$F(t)=c_m \Delta x = c_m \int_{0}^{t} u( \tau )d \tau$$
The equivalent behaviour in a circuit is a capacitor:
$$V(t)=\frac{1}{C}\int_{0}^{t} \,i(\tau) d\tau$$
**The Mechanical Mass**
The force on a mass is related to the acceleration (change in velocity).
The behaviour, by Newton\'s Second Law, is:
$$F(t)=m_ma=m_m\frac{d}{dt}u(t)$$
The equivalent behaviour in a circuit is an inductor:
$$V(t)=L\frac{d}{dt}i(t)$$
**The Mechanical Resistance**
For a viscous damper, the force is directly related to the velocity
$$F=R_m u\displaystyle$$
The equivalent is a simple resistor of value $R_m\displaystyle$
$$V=R i\displaystyle$$
**Example:**
Thus the simple mechanical oscillator in the previous section becomes a
series RLC Circuit:
![](RLC_series_circuit_v1.svg "RLC_series_circuit_v1.svg")
The current through all three elements is equal (they are at the same
velocity) and that the sum of the potential drops across each element
will equal the potential at the generator (the driving force). The ideal
voltage generator depicted here would be equivalent to an ideal force
generator.
**IMPORTANT NOTE**: The velocity measured for the spring and dashpot is
the relative velocity (velocity of one end minus the velocity of the
other end). The velocity of the mass, however, is the absolute velocity.
**Impedances:**
| Element | Analog    | Impedance                                       |
|---------|-----------|-------------------------------------------------|
| Spring  | Capacitor | $Z_c = \frac{V_c}{I_c} = \frac{c_m}{j \omega }$ |
| Mass    | Inductor  | $Z_m = \frac{V_m}{I_m} = j \omega m_m$          |
| Dashpot | Resistor  | $Z_d = \frac{V_m}{I_m} = R_m$                   |
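In steady state, summing the impedance drops around the series loop
reproduces the damped-oscillator force equation term by term:

$$\hat{F} = \left(R_m + j\omega m_m + \frac{c_m}{j\omega}\right)\hat{u} \quad\leftrightarrow\quad \hat{V} = \left(R + j\omega L + \frac{1}{j\omega C}\right)\hat{I}$$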
## The Mobility Analog
Like the Impedance Analog above, the equivalent elements can be found by
comparing their fundamental equations with the equations of circuit
elements. However, since circuit equations usually define voltage in
terms of current, in this case the analogy would be an expression of
velocity in terms of force, which is the opposite of convention.
This can be resolved with simple algebraic manipulation.
**The Mechanical Spring**
$$F(t)= c_m \int u(t)d t$$
The equivalent behavior for this circuit is the behavior of an inductor.
$$\int V dt=\int L \frac{d}{dt} i(t) dt$$
$$i=\frac{1}{L}\int\,V dt$$
**The Mechanical Mass**
$$F=m_ma=m_m\frac{d}{dt}u(t)$$
Similar to the spring element, if we take the general equation for a
capacitor and differentiate,
$$\frac{d}{dt}V(t)=\frac{d}{dt}\frac{1}{C}\int \,i(t) dt$$
$$i(t)=C\frac{d}{dt}V(t)$$
**The Mechanical Resistance**
Since the relation between force and velocity is proportionate, the only
difference is that the mechanical resistance becomes inverted:
$$F=\frac{1}{r_m} u=R_m u$$
$$i=\frac{1}{R}V$$
**Example:**
The simple mechanical oscillator drawn above would become a parallel RLC
Circuit. The potential across each element is the same because they are
each operating at the same velocity. This is often the more intuitive of
the two analogy methods to use, because you can visualize force
\"flowing\" like a flux through your system. The ideal voltage generator
in this drawing would correspond to an ideal velocity generator.
![](RLC_parallel_circuit.png "RLC_parallel_circuit.png")
**IMPORTANT NOTE:** Since the measure of the velocity of a mass is
absolute, a capacitor in this analogy must always have one terminal
grounded. A capacitor with both terminals at a potential other than
ground may be realized physically as an inverter, which completes all
elements of this analogy.
**Impedances:**
Element Impedance
--------- ----------- --------------------------------------------------
Spring Inductor $Z_c = \frac{V_m}{I_m} = \frac{j \omega}{c_m}$
Mass Capacitor $Z_m = \frac{V_c}{I_c} = \frac{1}{j \omega m_m}$
Dashpot Resistor $Z_d = \frac{V_m}{I_m} = r_m = \frac{1}{R_m}$
Back to Main page
# Engineering Acoustics/Solution Methods: Electro-Mechanical Analogies
After drawing the electro-mechanical analogy of a mechanical system, it
is always safe to check the circuit. There are two methods to accomplish
this:
## Review of Circuit Solving Methods
**Kirchhoff\'s Voltage Law**
\"The sum of the potential drops around a loop must equal zero.\"
![](KVL.png "KVL.png")
$v_1 + v_2 + v_3 + v_4 = 0 \displaystyle$
**Kirchhoff\'s Current Law**
\"The sum of the currents at a node (junction of more than two elements)
must be zero.\"
![](KCL.png "KCL.png")
$-i_1+i_2+i_3-i_4 = 0 \displaystyle$
**Hints for solving circuits:**
Remember that certain elements can be combined to simplify the circuit
(the combination of like elements in series and parallel).
If solving a circuit that involves steady-state sources, use impedances.
Any circuit can eventually be combined into a single impedance using the
following identities:
Impedances in series: $Z_\mathrm{eq} = Z_1 + Z_2 + \,\cdots\, + Z_n.$
Impedances in parallel:
$\frac{1}{Z_\mathrm{eq}} = \frac{1}{Z_1} + \frac{1}{Z_2} + \,\cdots\, + \frac{1}{Z_n} .$
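For example, combining the three elements of the series
(impedance-analog) circuit from the previous chapter into one equivalent
impedance:
$Z_\mathrm{eq} = R_m + j\omega m_m + \frac{c_m}{j\omega} = R_m + j\left(\omega m_m - \frac{c_m}{\omega}\right)$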
## Dot Method: (Valid only for planar network)
This method helps obtain the dual analog (one analog is the dual of the
other). The steps for the dot method are as follows:
1) Place one dot within each loop and one outside all the loops.
2) Connect the dots. Make sure that there is only one line through each
element and that no line crosses more than one element.
3) Draw in each line that crosses an element its dual element, including
the source.
4) The circuit obtained should have an equivalent behavior as the dual
analog of the original electro-mechanical circuit.
**Example:**
![](Dotmethod.jpg "Dotmethod.jpg")
The parallel RLC Circuit above is equivalent to a series RLC driven by
an ideal current source.
## Low-Frequency Limits
This method looks at the behavior of the system for very large or very
small values of the parameters and compares them with the expected
behavior of the mechanical system. For instance, you can compare the
mobility circuit behavior of a near-infinite inductance with the
mechanical system behavior of a near-infinite stiffness spring.
|               | Very High Value | Very Low Value |
|---------------|-----------------|----------------|
| **Capacitor** | Short Circuit   | Open Circuit   |
| **Inductor**  | Open Circuit    | Closed Circuit |
| **Resistor**  | Open Circuit    | Short Circuit  |
Back to Main page
# Engineering Acoustics/Primary variables of interest
## Basic Assumptions
Consider a piston moving in a tube. The piston starts moving at time t=0
with a velocity $u_p$. The piston fits inside the tube smoothly, without
any friction or gap. The motion of the piston creates a planar sound
wave, or acoustic disturbance, traveling down the tube at a constant
speed $c \gg u_p$. When the tube is very short, one can neglect the time
it takes the acoustic disturbance to travel from the piston to the end
of the tube. Hence, one can assume that the acoustic disturbance is
uniform throughout the tube domain.
```{=html}
<center>
```
![](Acousticplanewave1.gif "Acousticplanewave1.gif")
```{=html}
</center>
```
### Assumptions
1\. Although sound can exist in solids or fluids, we will first consider
the medium to be a fluid at rest. The ambient, undisturbed state of the
fluid will be designated using subscript zero. Recall that a fluid is a
substance that deforms continuously under the application of any shear
(tangential) stress.

2\. The disturbance is compressional (as opposed to transverse).

3\. The fluid is a continuum: an infinitely divisible substance, with
each fluid property assumed to have a definite value at each point.
4\. The disturbance created by the motion of the piston travels at a
constant speed. It is a function of the properties of the ambient fluid.
Since those properties are assumed uniform (the same at every location
in the tube), the speed of the disturbance is constant. The speed of the
disturbance is the speed of sound, denoted by $c_0$, with subscript zero
denoting an ambient property.

5\. The piston is perfectly flat, and there is no leakage flow between
the piston and the tube inner wall. Both the piston and the tube walls
are perfectly rigid. The tube is infinitely long and has a constant
cross-sectional area, A.
6\. The disturbance is uniform. All deviations in fluid properties are
the same across the tube for any location x. Therefore the instantaneous
fluid properties are functions only of the Cartesian coordinate x (see
sketch). Deviations from the ambient will be denoted by primed
variables.
## Variables of interest
### Pressure (force / unit area)
Pressure is defined as the normal force per unit area acting on any
control surface within the fluid.
```{=html}
<center>
```
![](Acousticcontrolsurface.gif "Acousticcontrolsurface.gif")
```{=html}
</center>
```
$p = \frac {\tilde{F} \cdot \tilde{n}}{dS}$
For the present case, inside a tube filled with a working fluid,
pressure is the ratio of the surface force acting on the fluid in the
control region to the tube area. The pressure is decomposed into two
components: a constant equilibrium component, $p_0$, superimposed with a
varying disturbance $p'(x)$. The deviation $p'$ is also called the
acoustic pressure. Note that $p'$ can be positive or negative. Unit:
$kg/(m\,s^2)$, i.e. Pa. Acoustic pressure can be measured using a
microphone.
```{=html}
<center>
```
![](Acousticpressure1.gif "Acousticpressure1.gif")
```{=html}
</center>
```
### Density

Density is the mass of fluid per unit volume. The density, $\rho$, is
also decomposed into the sum of an ambient value (usually around
$\rho_0 = 1.15\ kg/m^3$) and a disturbance $\rho'(x)$. As for the
pressure, the disturbance can be positive or negative. Unit: $kg/m^3$
### Acoustic volume velocity

The acoustic volume velocity is the rate at which fluid volume crosses a
surface; it is the familiar flow rate of fluid mechanics.
```{=html}
<center>
```
$U=\int_{S}\tilde{u} \cdot \tilde{n}\, dS$
```{=html}
</center>
```
In most cases, the velocity is assumed constant over the entire cross
section (plug flow), which gives the acoustic volume velocity as the
product of the fluid velocity $\tilde{u}$ and the cross section S.
```{=html}
<center>
```
$U=\tilde{u}\,S$
```{=html}
</center>
```
# Engineering Acoustics/Electro-acoustic analogies
## Electro-acoustical Analogies
### Acoustical Mass
Consider a rigid tube-piston system, as shown in the following figure.
```{=html}
<center>
```
![](acousticalmass.gif "acousticalmass.gif")
```{=html}
</center>
```
The piston moves back and forth sinusoidally with frequency **f**.
Assuming $f \ll \frac{c}{l\ \mathrm{or}\ \sqrt{S}}$ (where **c** is the
speed of sound, $c=\sqrt{\gamma R T_0}$), the volume of fluid in the
tube is,
```{=html}
<center>
```
$\Pi_v=S\ l$
```{=html}
</center>
```
The mass (mechanical mass) of the fluid in the tube is then given as,
```{=html}
<center>
```
$M_M= \Pi_v \rho_0 = \rho_0 S\ l$
```{=html}
</center>
```
For sinusoidal motion of the piston, the fluid moves as a rigid body at
the same velocity as the piston; every point in the tube moves with the
same velocity.

Applying Newton\'s second law to the following free body diagram,
```{=html}
<center>
```
![](FBD.gif "FBD.gif")
```{=html}
</center>
```
```{=html}
<center>
```
$SP'=(\rho_0Sl)\frac{du}{dt}$
```{=html}
</center>
```
```{=html}
<center>
```
$\hat{P}=\rho_0l(j\omega)\hat{u}=j\omega(\frac{\rho_0l}{S})\hat{U}$
```{=html}
</center>
```
Where, plug flow assumption is used.
`"Plug flow" assumption:`\
`Frequently in acoustics, the velocity distribution along the normal surface of`\
`fluid flow is assumed uniform. Under this assumption, the acoustic volume velocity U is`\
`simply the product of the velocity and the entire surface. `$U=Su$
#### Acoustical Impedance
Recalling mechanical impedance,
```{=html}
<center>
```
$\hat{Z}_M=\frac{\hat{F}}{\hat{u}}=j\omega(\rho_0Sl)$
```{=html}
</center>
```
the acoustical impedance (whose unit is often termed an **acoustic
ohm**) is defined as,
```{=html}
<center>
```
$\hat{Z}_A=\frac{\hat{P}}{\hat{U}}=\frac{Z_M}{S^2}=j\omega(\frac{\rho_0l}{S})\quad \left[\frac{N s}{m^5}\right]$
```{=html}
</center>
```
where the acoustical mass is defined as,
```{=html}
<center>
```
$M_A=\frac{\rho_0l}{S}$
```{=html}
</center>
```
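As an illustration, the acoustical mass and impedance of the fluid in a
tube can be evaluated directly from these expressions. The tube
dimensions and frequency below are assumed example values, chosen so
that $f \ll c/l$.

```python
# A short sketch evaluating M_A = rho0*l/S and Z_A = j*w*M_A for an
# example tube; dimensions and frequency are assumed values.
import math

rho0 = 1.18          # ambient air density, kg/m^3
l = 0.05             # tube length, m (assumed)
S = 1e-4             # tube cross-sectional area, m^2 (assumed)
f = 200.0            # frequency, Hz (assumed), well below c/l

M_A = rho0 * l / S                   # acoustical mass, kg/m^4
Z_A = 1j * 2 * math.pi * f * M_A     # acoustical impedance, N*s/m^5
print(M_A, abs(Z_A))
```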
#### Acoustical Mobility
Acoustical mobility is defined as,
```{=html}
<center>
```
$\hat{\xi}_A=\frac{1}{\hat{Z}_A}=\frac{\hat{U}}{\hat{P}}$
```{=html}
</center>
```
#### Impedance Analog vs. Mobility Analog
```{=html}
<center>
```
![](Imp-mov.gif "Imp-mov.gif")
```{=html}
</center>
```
#### Acoustical Resistance
Acoustical resistance models loss due to viscous effects (friction) and
flow resistance (represented by a screen).
```{=html}
<center>
```
![](Ra_analogs.png "Ra_analogs.png")
***r~A~*** is the reciprocal of ***R~A~*** and is referred to as the
*responsiveness*.
```{=html}
</center>
```
### Acoustical Generators
The acoustical generator components are pressure, **P**, and volume
velocity, **U**, which are analogous to force, **F**, and velocity,
**u**, of the electro-mechanical analogy, respectively. Namely, for the
impedance analog, pressure is analogous to voltage and volume velocity
is analogous to current, and vice versa for the mobility analog. These
are arranged in the following table.
```{=html}
<center>
```
![](e-a_analogy.gif "e-a_analogy.gif")
```{=html}
</center>
```
Impedance and Mobility analogs for acoustical generators of constant
pressure and constant volume velocity are as follows:
```{=html}
<center>
```
![](acoustic_gen.png "acoustic_gen.png")
```{=html}
</center>
```
### Acoustical Compliance
Consider a piston in an enclosure.
```{=html}
<center>
```
![](Enclosed_Piston.png "Enclosed_Piston.png")
```{=html}
</center>
```
When the piston moves, it displaces the fluid inside the enclosure.
Acoustic compliance is a measure of how \"easy\" it is to displace the
fluid.

Here the volume of the enclosure should be assumed small enough that the
fluid pressure remains uniform.

Assume there is no heat exchange (the compression is adiabatic) and that
the gas is compressed uniformly, so the disturbance pressure $p'$ is the
same everywhere in the cavity.
From the thermodynamic relation ![](Equ1.jpg "Equ1.jpg") it is
straightforward to obtain the relation between the disturbing pressure
and the displacement of the piston, ![](Equ3.gif "Equ3.gif") where U is
the volume velocity and P is the pressure. From the definitions of
impedance and mobility, we get ![](Equ4.gif "Equ4.gif")

**Mobility Analog vs. Impedance Analog**
```{=html}
<center>
```
![](Comp.gif "Comp.gif")
```{=html}
</center>
```
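For a numerical feel, the lumped compliance of a small cavity can be
evaluated with the standard result $C_A = V/(\rho_0 c^2)$, which follows
from the adiabatic relation above. The cavity volume and frequency below
are assumed example values.

```python
# A minimal sketch of the cavity (compliance) element described above,
# using the standard lumped result C_A = V / (rho0 * c^2).
import math

rho0, c = 1.18, 344.0      # ambient density and sound speed
V = 1e-3                   # cavity volume, m^3 (assumed)
f = 100.0                  # frequency, Hz (assumed)

C_A = V / (rho0 * c**2)                   # acoustical compliance, m^5/N
Z_A = 1.0 / (1j * 2 * math.pi * f * C_A)  # impedance of the compliance
print(C_A, abs(Z_A))
```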
### Examples of Electro-Acoustical Analogies
Example 1: Helmholtz Resonator
```{=html}
<center>
```
![](Example1holm.JPG "Example1holm.JPG")
```{=html}
</center>
```
Assumptions: (1) the cavity is completely sealed, with no leaks; (2) the
cavity acts like a rigid body, inducing no vibrations.
Solution:
```{=html}
<center>
```
\- Impedance Analog -
```{=html}
</center>
```
```{=html}
<center>
```
![](Example2holm1sol.JPG "Example2holm1sol.JPG")
```{=html}
</center>
```
Example 2: Combination of Side-Branch Cavities
```{=html}
<center>
```
![](Exam2prob.JPG "Exam2prob.JPG")
```{=html}
</center>
```
Solution:
```{=html}
<center>
```
\- Impedance Analog -
```{=html}
</center>
```
```{=html}
<center>
```
![](Exam2sol.JPG "Exam2sol.JPG")
```{=html}
</center>
```
# Engineering Acoustics/Transducers - Loudspeaker

## Acoustic transducer
The purpose of an acoustic transducer is to convert electrical energy
into acoustic energy. Many variations of acoustic transducers exist,
such as electrostatic, balanced armature and moving-coil loudspeakers.
This article focuses on moving-coil loudspeakers since they are the most
commonly used type of acoustic transducer. First, the physical
construction and principle of a typical moving coil transducer are
discussed briefly. Second, electro-mechano-acoustical modeling of each
element composing the loudspeaker is presented in a tutorial way to
reinforce and supplement the theory on electro-mechanical
analogies
and electro-acoustic
analogies
previously seen in other sections. Third, the equivalent circuit is
analyzed to introduce the theory behind Thiele-Small parameters, which
are very useful when designing loudspeaker enclosures. A method to
experimentally determine Thiele-Small parameters is also included.
## Moving-coil loudspeaker construction and principle
The classic moving-coil loudspeaker driver can be divided into three key
components:
1\) The magnet motor drive system, comprising the permanent magnet, the
center pole and the voice coil, acting together to produce a mechanical
force on the diaphragm from an electrical current;
2\) The loudspeaker cone system, comprising the diaphragm and dust cap,
permitting mechanical force to be translated into acoustic pressure;
3\) The loudspeaker suspension, comprising the spider and surround,
preventing the diaphragm from breaking due to over excursion, allowing
only translational movement and tending to bring the diaphragm back to
its rest position.
The following illustration shows a cut-away view of a typical moving
coil-permanent magnet loudspeaker. A coil is mechanically coupled to a
diaphragm, also called cone, and rests in a fixed magnetic field
produced by a magnet. When an electrical current flows through the coil,
a corresponding magnetic field is emitted, interacting with the fixed
field of the magnet and thus applying a force to the coil, pushing it
away or towards the magnet. Since the cone is mechanically coupled to
the coil, it will push or pull the air it is facing, causing pressure
changes and emitting a sound wave.
```{=html}
<center>
```
![](MovingCoilLoudspeaker.png "MovingCoilLoudspeaker.png"){width="800"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 1: A cross-sectional view of a typical moving-coil loudspeaker
```{=html}
</center>
```
An equivalent circuit can be obtained to model the loudspeaker as a
lumped system. This circuit can be used to drive the design of a
complete loudspeaker system, including an enclosure and sometimes even
an amplifier that is matched to the properties of the driver. The
following section shows how such an equivalent circuit can be obtained.
## Electro-mechano-acoustical equivalent circuit
Electro-mechano-acoustical systems such as loudspeakers can be modeled
as equivalent electrical circuits as long as each element moves as a
whole. This is usually the case at low frequencies or at frequencies
where the dimensions of the system are small compared to the wavelength
of interest. To obtain a complete model of the loudspeaker, the
interactions and properties of electrical, mechanical, and acoustical
subsystems composing the loudspeaker driver must each be modeled. The
following sections detail how the circuit may be obtained starting with
the amplifier and ending with the acoustical load presented by air. A
similar development can be found in \[1\] or \[2\].
### Electrical subsystem
The electrical part of the system is composed of a driving amplifier and
a voice coil. Most amplifiers can be approximated as a perfect voltage
source in series with the amplifier output impedance. The voice coil
exhibits an inductance and a resistance that may be directly modeled as
a circuit.
```{=html}
<center>
```
![](ElectricalSubsystemLoudspeaker.png "ElectricalSubsystemLoudspeaker.png"){width="400"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 2: The amplifier and loudspeaker electrical elements modeled as a
circuit
```{=html}
</center>
```
### Electrical to mechanical subsystem
When the loudspeaker is fed an electrical signal, the voice coil and
magnet convert current to force. Similarly, voltage is related to the
velocity. This relationship between the electrical side and the
mechanical side can be modeled by a transformer.
$\tilde{f_c} = Bl \tilde{i}$; $\tilde{u_c} = \dfrac{\tilde{e}}{Bl}$
```{=html}
<center>
```
![](_ElectricalToMechanicalLoudspeaker.png "_ElectricalToMechanicalLoudspeaker.png")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 3: A transformer modeling transduction from the electrical
impedance to mechanical mobility analogy
```{=html}
</center>
```
### Mechanical subsystem
In a first approximation, a moving coil loudspeaker may be thought of as
a mass-spring system where the diaphragm and the voice coil constitute
the mass and the spider and surround constitute the spring element.
Losses in the suspension can be modeled as a resistor.
```{=html}
<center>
```
![](MechanicalSubsystemModelingLoudspeaker.png "MechanicalSubsystemModelingLoudspeaker.png"){width="1000"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 4: Mass spring system and associated circuit analogies of the
impedance and mobility type.
```{=html}
</center>
```
The equation of motion gives us:
```{=html}
<center>
```
$\tilde{f_c} = R_m \tilde{u_c} + \dfrac{\tilde{u_c}}{ j \omega C_{MS}} + j \omega M_{MD} \tilde{u_c}$
```{=html}
</center>
```
```{=html}
<center>
```
$\dfrac{\tilde{f_c} }{\tilde{u_c}}= R_m+\dfrac{1}{ j \omega C_{MS}}+ j\omega M_{MD}$
```{=html}
</center>
```
This yields the mechanical impedance-type analogy in the form of a
series RLC circuit. A parallel RLC circuit may also be obtained for the
mobility analog after some mathematical manipulation:
```{=html}
<center>
```
$\dfrac{\tilde{u_c} }{\tilde{f_c}}= \dfrac{1}{R_m+\dfrac{1}{ j \omega C_{MS}}+ j\omega M_{MD} }$
```{=html}
</center>
```
```{=html}
<center>
```
$\dfrac{\tilde{u_c} }{\tilde{f_c}}= \dfrac{1}{\dfrac{1}{G_m}+\dfrac{1}{ j \omega C_{MS}} + \dfrac{1}{\dfrac{1}{j \omega M_{MD}}}}$
```{=html}
</center>
```
This expresses the mechanical mobility-type analogy in the form of a
parallel RLC circuit, where the denominator terms correspond
respectively to a conductance, a compliance, and a mass in parallel.
### Mechanical to acoustical subsystem
A loudspeaker's diaphragm may be thought of as a piston that pushes and
pulls on the air facing it, converting mechanical force and velocity
into acoustic pressure and volume velocity. The equations are as follows:
$\tilde{P_d} = \dfrac{\tilde{f_c}}{S_D}$;
$\tilde{U_c} = \tilde{u_c}\,S_D$
These equations can be modeled by a transformer.
```{=html}
<center>
```
![](MechanicalToAcousticalLoudspeaker.png "MechanicalToAcousticalLoudspeaker.png")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 5: A transformer modeling the transduction from mechanical
mobility to acoustical mobility analogy performed by a loudspeaker\'s
diaphragm
```{=html}
</center>
```
### Acoustical subsystem
The impedance presented by the air load on the loudspeaker\'s diaphragm
is both resistive due to sound radiation and reactive due to the air
mass that is being pushed radially but does not contribute to sound
radiation to the far field. The air load on the diaphragm can be modeled
as an impedance or an admittance. Specific values and approximations can
be found in \[1\], \[2\] or \[3\]. Note that the air load depends on the
mounting conditions of the loudspeaker. If the loudspeaker is mounted in
a baffle, the air load will be the same on each side of the diaphragm.
Then, if the air load on one side is $Y_{AR}$ in the admittance analogy,
then the total air load is $Y_{AR}/2$ as both loads are in parallel.
### Complete electro-mechano-acoustical equivalent circuit
Using electrical impedance, mechanical mobility and acoustical
admittance yields the following equivalent circuit, which models the
entire loudspeaker drive unit.
```{=html}
<center>
```
![](_LoudspeakerEquivalentCircuit.png "_LoudspeakerEquivalentCircuit.png"){width="1000"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 6: A complete electro-mechano-acoustical equivalent circuit of a
loudspeaker drive unit
```{=html}
</center>
```
This circuit can be reduced by substituting each transformer and its
connected load by an equivalent load that presents the same impedance as
the loaded transformer. An example of this is shown in figure 7, where
the acoustical and electrical loads and sources have been \"brought
over\" to the mechanical side.
```{=html}
<center>
```
![](LoudspeakerEquivalentCircuitMechanical.png "LoudspeakerEquivalentCircuitMechanical.png"){width="900"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 7: Mechanical equivalent circuit modeling of a loudspeaker drive
unit
```{=html}
</center>
```
The advantage of such manipulations is that we can then directly relate
electrical measurements to elements in the circuit. This will later
allow us to obtain values for the different components of the model and
to match the model to real loudspeaker drivers. We can further simplify
this circuit by using Norton\'s theorem to convert the series electrical
components and voltage source into an equivalent current source with
parallel electrical components. Then, using the dot method, presented in
the section Solution Methods: Electro-Mechanical Analogies, we can
obtain a single-loop series circuit, which is the dual of the parallel
circuit previously obtained with Norton\'s theorem.

If we are mainly interested in the low-frequency behavior of the
loudspeaker, as should be the case when using lumped element modeling,
we can neglect the effect of the voice coil inductance, which has an
effect only at high frequencies. Furthermore, the air load impedance at
low frequencies is mass-like and can be modeled by a simple inductance
$M_{M1}$. This results in the simplified low-frequency equivalent
circuit shown in figure 8, which is easier to manipulate than the
circuit of figure 7. Note that the analogy used for this circuit is of
the impedance type.
```{=html}
<center>
```
![](LowFrequencyLoudspeakerEquivalentCircuitMechanical.png "LowFrequencyLoudspeakerEquivalentCircuitMechanical.png"){width="800"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 8: Low frequency approximation mechanical equivalent circuit of a
loudspeaker drive unit
```{=html}
</center>
```
Where $M_{M1} = 2.67 a^3 \rho$, with $a$ the radius of the loudspeaker
and $\rho$ the density of air. Mass elements, in this case the mass of
the diaphragm and voice coil $M_{MD}$ and the air mass loading the
diaphragm $2M_{M1}$, can be regrouped into a single element:
```{=html}
<center>
```
$M_{MS} = M_{MD}+2M_{M1}$
```{=html}
</center>
```
## Thiele-Small Parameters
### Theory
The complete low frequency behavior of a loudspeaker drive unit can be
modeled with just six parameters, called Thiele-Small parameters. Most
of these parameters result from algebraic manipulation of the equations
of the circuit of figure 8. Loudspeaker driver manufacturers seldom
provide electro-mechano-acoustical parameters directly and rather
provide Thiele-Small parameters in datasheets, but conversion from one
to the other is quite simple. The Thiele-Small parameters are as follows:
1\. $R_e$, the voice coil DC resistance;
2\. $Q_{ES}$, the electrical Q factor;
3\. $Q_{MS}$, the mechanical Q factor;
4\. $f_s$, the loudspeaker resonance frequency;
5\. $S_D$, the effective surface area of the diaphragm;
6\. $V_{AS}$, the equivalent suspension volume: the volume of air that
has the same acoustic compliance as the suspension of the loudspeaker
driver.
These parameters can be derived directly from the low-frequency
approximation circuit of figure 8, with $R_e$ and $S_D$ appearing
explicitly.
$Q_{MS} = \dfrac{1}{R_{MS}} \sqrt{\dfrac{M_{MS}}{C_{MS}}}$;
$Q_{ES} = \dfrac{R_g + R_e }{(Bl)^2} \sqrt{\dfrac{M_{MS}}{C_{MS}}}$ ;
$f_s= \dfrac{1}{2\pi\sqrt{M_{MS}C_{MS}}}$; $V_{AS}= C_{MS}S_D^2\rho c^2$
Where $\rho c^2$ is the bulk modulus of air. It follows that, if given
Thiele-Small parameters, one can extract the values of each component of
the circuit of figure 8 using the following equations:
$C_{MS} = \dfrac{V_{AS}}{S_D^2 \rho c^2}$;
$M_{MS}= \dfrac{1}{(2 \pi f_s)^2 C_{MS}}$;
$R_{MS} = \dfrac{1}{Q_{MS}}\sqrt{\dfrac{M_{MS}}{C_{MS}}}$;
$Bl = \sqrt{\dfrac{R_e}{2 \pi f_s Q_{ES} C_{MS} } }$;
$M_{MD}= M_{MS}-2M_{M1}.$
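A short sketch of this conversion is given below, implementing the
equations above. The input values are the nominal figures that appear in
the numerical example later in this section, together with the air
properties used there ($\rho = 1.18\ kg/m^3$, $c = 344\ m/s$).

```python
# A sketch converting Thiele-Small parameters into the circuit elements
# of figure 8, directly implementing the equations above.
import math

rho, c = 1.18, 344.0            # air properties used in this text
Re, Qes, Qms = 6.6, 0.35, 3.1   # from the numerical example below
fs, a = 33.0, 0.0655            # resonance (Hz) and radius (m)
Sd = math.pi * a**2             # effective diaphragm area, m^2
Vas = 0.0272                    # equivalent suspension volume, m^3

Cms = Vas / (Sd**2 * rho * c**2)            # suspension compliance
Mms = 1.0 / ((2 * math.pi * fs)**2 * Cms)   # total moving mass
Rms = math.sqrt(Mms / Cms) / Qms            # suspension losses
Bl = math.sqrt(Re / (2 * math.pi * fs * Qes * Cms))  # force factor
Mm1 = 2.67 * a**3 * rho                     # air-load mass on one side
Mmd = Mms - 2 * Mm1                         # diaphragm + coil mass alone
print(Cms, Mms, Rms, Bl, Mmd)
```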
### Measurement
Many methods can be used to measure Thiele-Small parameters of drivers.
Measurement of Thiele-Small parameters is sometimes necessary if a
manufacturer does not provide them. Also, the actual Thiele-Small
parameters of a given loudspeaker can differ from nominal values
significantly. The method described in this section comes from \[2\].
Note that for this method, the loudspeaker is considered to be mounted
in an infinite baffle. In practice, a baffle with a diameter of four
times that of the loudspeaker is sufficient. Measurements without a
baffle are also possible: the air mass loading will simply be halved and
can easily be accounted for. The setup for this method includes an FFT
analyzer or some other means of obtaining an impedance curve. A signal
generator of variable frequency and an AC meter can also be used.
```{=html}
<center>
```
![](Setup_to_measure_impedance_of_loudspeakers.png "Setup_to_measure_impedance_of_loudspeakers.png"){width="300"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 9: Simple experimental setup to measure the impedance of a
loudspeaker drive unit
```{=html}
</center>
```

With this setup, the loudspeaker impedance follows from the measured
voltages and the known series resistance R:
```{=html}
<center>
```
$Z_{spk}= R\dfrac{V_{spk}}{V_s \left( 1-\dfrac{V_{spk}}{V_s} \right)}$
```{=html}
</center>
```
```{=html}
<center>
```
![](ImpedanceCurveLoudspeaker.png "ImpedanceCurveLoudspeaker.png"){width="600"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 10: A typical loudspeaker drive unit impedance curve
```{=html}
</center>
```
Once the impedance curve of the loudspeaker is measured, $R_e$ and $f_s$
can be directly identified from the low-frequency asymptote of the
impedance magnitude and the center frequency of the resonance peak. If
$R_c$ denotes the impedance at resonance and the frequencies where
$Z_{spk}=\sqrt{R_e R_c}$ are identified as $f_l$ and $f_h$, the Q
factors can be calculated.
```{=html}
<center>
```
$Q_{MS}= \dfrac{f_s}{f_h-f_l} \sqrt{\dfrac{R_c}{R_e}}$
```{=html}
</center>
```
```{=html}
<center>
```
$Q_{ES} = \dfrac{Q_{MS}}{\dfrac{R_c}{R_e}-1}$
```{=html}
</center>
```
$S_D$ can simply be approximated by $\pi a^2$, where $a$ is the radius
of the loudspeaker driver. The last remaining Thiele-Small parameter,
$V_{AS}$, is slightly trickier to measure. The idea is to either
increase the mass or reduce the compliance of the loudspeaker drive unit
and note the shift in resonance frequency. If a known mass $M_x$ is
added to the loudspeaker diaphragm, the new resonance frequency will be:
```{=html}
<center>
```
$f'_s= \dfrac{1}{2 \pi \sqrt{(M_{MS} + M_x) C_{MS} } }$
```{=html}
</center>
```
And the equivalent suspension volume may be obtained with:
```{=html}
<center>
```
$V_{AS} = \left( 1- \dfrac{{f'_s}^2}{f_s^2} \right) \dfrac{S_D^2 \rho c^2}{(2 \pi f'_s)^2 M_x}$
```{=html}
</center>
```
Hence, all Thiele-Small parameters modeling the low frequency behavior
of the loudspeaker drive unit can be obtained from a fairly simple
setup. These parameters are of tremendous help in loudspeaker enclosure
design.
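The extraction steps described above can be collected into a small
script. The sketch below implements the Q-factor and added-mass
equations; the example readings are hypothetical but match the numerical
example that follows.

```python
# A hedged sketch of the parameter-extraction method described above.
import math

def q_factors(fs, fl, fh, Re, Rc):
    """Q factors from the resonance peak: fl, fh are the frequencies
    where Z = sqrt(Re*Rc), and Rc is the impedance at resonance."""
    Qms = fs / (fh - fl) * math.sqrt(Rc / Re)
    Qes = Qms / (Rc / Re - 1.0)
    return Qms, Qes

def vas_added_mass(fs, fs_new, Sd, Mx, rho=1.18, c=344.0):
    """Equivalent suspension volume from the resonance shift caused by a
    known mass Mx fixed to the diaphragm (equation above)."""
    return (1.0 - fs_new**2 / fs**2) * Sd**2 * rho * c**2 / (
        (2 * math.pi * fs_new)**2 * Mx)

# Hypothetical readings, matching the numerical example below:
print(q_factors(33.0, 19.5, 52.5, 6.6, 66.0))                 # ~ (3.16, 0.35)
print(vas_added_mass(33.0, 27.5, math.pi * 0.0655**2, 0.010)) # ~ 0.026 m^3
```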
### Numerical example
This section presents a numerical example of obtaining Thiele-Small
parameters from impedance curves. The impedance curves presented in this
section have been obtained from simulations using nominal Thiele-Small
parameters of a real woofer loudspeaker. Firsy, these Thiele-Small
parameters have been transformed into an electro-mechano-acoustical
circuit using the equation presented before. Second, the circuit was
treated as a black box and the method to extract Thiele-Small parameters
was used. The purpose of this simulation is to present the method, step
by step, using realistic values so that the reader can get more familiar
with the process, the magnitude of the values and with what to expect
when performing such measurements.
For this simulation, a loudspeaker of radius $a=6.55\ cm$ is mounted on a
baffle sufficiently large to act as an infinite baffle. Its impedance is
obtained and plotted in figure 11, where important cursors have already
been placed.
```{=html}
<center>
```
![](SimulatedImpedanceWoofer.png "SimulatedImpedanceWoofer.png"){width="1000"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 11: Simulated measurement of an impedance curve for a woofer
loudspeaker
```{=html}
</center>
```
The low-frequency asymptote is immediately identified as
$R_e=6.6\ \Omega$. The resonance is clear and centered at $f_s=33\ Hz$.
The value of the impedance at this frequency is about $66\ \Omega$. This
yields $\sqrt{R_e R_c}=20.8\ \Omega$, which occurs at $f_l=19.5\ Hz$ and
$f_h= 52.5\ Hz$. With this information, we can compute some of the
Thiele-Small parameters.
```{=html}
<center>
```
$Q_{MS}= \dfrac{f_s}{f_h-f_l}\sqrt{\dfrac{R_c}{R_e}}=\dfrac{33}{52.5-19.5}\cdot\sqrt{\dfrac{66}{6.6}}= 3.1$
```{=html}
</center>
```
```{=html}
<center>
```
$Q_{ES} = \dfrac{Q_{MS}}{\dfrac{R_c}{R_e}-1} = \dfrac{3.1}{\dfrac{66}{6.6}-1} = 0.35$
```{=html}
</center>
```
As a next step, a mass of $M_x=10\ g$ is fixed to the loudspeaker
diaphragm. This shifts the resonance frequency, yielding the new
impedance curve shown in figure 12.
```{=html}
<center>
```
![](SimulatedImpedanceWooferAddedMass.png "SimulatedImpedanceWooferAddedMass.png"){width="1000"}
```{=html}
</center>
```
```{=html}
<center>
```
Figure 12: Simulated impedance curve of the same woofer with a 10 g mass
added to the diaphragm; the resonance shifts down to $f'_s=27.5\ Hz$
```{=html}
</center>
```
```{=html}
<center>
```
$S_{D} = \pi a^2 = 0.0135\ m^2$
```{=html}
</center>
```
```{=html}
<center>
```
$V_{AS} = \left( 1- \dfrac{27.5^2}{33^2} \right) \dfrac{0.0135^2 \cdot 1.18 \cdot 344^2}{(2 \pi \cdot 27.5)^2 \cdot 0.01}= 0.0272\ m^3$
```{=html}
</center>
```
Once all six Thiele-Small parameters have been obtained, it is possible
to calculate values for the electro-mechano-acoustical circuit elements
of figure 6 or 7. From there, the design of an enclosure can start. This
is discussed in the application sections Sealed box subwoofer design and
Bass reflex enclosure design.
# References
\[1\] Kleiner, Mendel. Electroacoustics. CRC Press, 2013.
\[2\] Beranek, Leo L., and Tim Mellow. Acoustics: sound fields and
transducers. Academic Press, 2012.
\[3\] Kinsler, Lawrence E., et al. Fundamentals of Acoustics, 4th
Edition. Wiley-VCH, 1999.
\[4\] Small, Richard H. \"Direct radiator loudspeaker system analysis.\"
Journal of the Audio Engineering Society 20.5 (1972): 383-395.
# Engineering Acoustics/Moving Resonators
## Moving Resonators
Consider the situation shown in the figure below. We have a typical
Helmholtz resonator driven by a massless piston which generates a
sinusoidal pressure $P_G$, however the cavity is not fixed in this case.
Rather, it is supported above the ground by a spring with compliance
$C_M$. Assume the cavity has a mass $M_M$.
Recall the Helmholtz resonator (see Module
#9).
The difference in this case is that the pressure in the cavity exerts a
force on the bottom of the cavity, which is no longer fixed as in the
original Helmholtz resonator. If the surface area of the cavity bottom
is $S_C$, then Newton\'s laws applied to the cavity bottom give
```{=html}
<center>
```
$\sum{F} = p_CS_C - \frac{x}{C_M} = M_M\ddot{x} \Rightarrow p_CS_C = \left[\frac{1}{j\omega C_M} + j\omega M_M\right]u$
```{=html}
</center>
```
In order to develop the equivalent circuit, we observe that we simply
need to use the pressure (potential across $C_A$) in the cavity to
generate a force in the mechanical circuit. The above equation shows
that the mass of the cavity and the spring compliance should be placed
in series in the mechanical circuit. In order to convert the pressure to
a force, the transformer is used with a ratio of $1:S_C$.
## Example
A practical example of a moving resonator is a marimba. A marimba is
similar to a xylophone but has larger resonators that produce deeper and
richer tones. The resonators (seen in the picture as long, hollow pipes)
are mounted under an array of wooden bars, which are struck to create
tones. Since these resonators are not fixed but are connected to the
ground through a stiffness (the stand), the system can be modeled as a
moving resonator. Marimbas are not tunable instruments like flutes or
even pianos, so it would be interesting to see how the tone of the
marimba changes as a result of changing the stiffness of the mount.
For more information about the acoustics of marimbas see
<http://www.mostlymarimba.com/techno1.html>
# Engineering Acoustics/Speed of sound
When sound waves propagate in a medium, they cause fluctuations in the
pressure, density, temperature and particle velocity in the medium. The
total pressure, $P$, in the medium can be expressed as:
$P=P_o+p'$
where $P_o$ is the hydrostatic or ambient pressure and $p'$ is the
acoustic pressure or pressure fluctuation.
The hydrostatic pressure can be thought as the mean pressure while the
acoustic pressure represents fluctuations around the mean pressure.
Similarly, the density, temperature, and particle velocity are separated
into mean and fluctuating components.
$\rho=\rho_o+\varrho '$
$T=T_o + T'$
$\vec{U} =\vec{u}_o+\vec{u}'$
where $\rho_o$ is the ambient density; $\varrho '$, the density
fluctuation; $T_o$, the ambient temperature; $T'$, the temperature
fluctuation; $\vec{u}_o$, the mean velocity; and $\vec{u}'$, the
particle velocity. Notice that pressure, density, and temperature are
scalar quantities and the particle velocity is a vectorial quantity.
*Figure: Planar wave traveling inside a tube.*
Let\'s consider a planar wave traveling in the x-direction inside a tube
filled with a fluid at rest ($\vec{u}_o=0$) with constant pressure,
density and temperature ($P_o$, $\rho_o$, and $T_o$, respectively). As
the wave moves through the fluid at a speed, $c_o$, it creates
infinitesimally small fluctuations in the initially stagnant fluid in
front. All four quantities describing the fluid vary around their mean
value, increasing or decreasing depending on whether the fluid is being
compressed or expanded. To obtain a relation for the propagation speed,
$c_o$, an inertial frame of reference is used and a control volume is
drawn around the wave.

*Figure: Control volume around a traveling planar wave.*
Applying continuity,
$\rho_o c_o A = (\rho_o+\varrho')(c_o-u') A$
and neglecting higher order terms,
$\rho_o u'= c_o \varrho'$
Applying conservation of momentum,
$P_o+ \rho_o c_o^2 = P_o + p' + (\rho_o+\varrho')(c_o-u')^2$
neglecting higher order terms,
$\rho_o c^2_o = p' + (\rho_o+\varrho')(c_o^2-2 c_o u')$
$0 = p' - 2c_o \rho_o u' +\varrho' c_o^2,$
and using the continuity equation,
$c_o^2=\frac{p'}{\varrho'}$
The speed of sound is related to the ratio of pressure fluctuation
(acoustic pressure) to density fluctuation. Given that the speed of
sound is always a positive quantity, an increase in the fluid pressure
implies an increase in the fluid density and vice versa. The total
pressure is expressed as a Taylor series expansion about the ambient
density to relate its infinitesimally small fluctuations to the total
pressure and density.
$P(\rho_o+\varrho')= P_o + p' = P(\rho_o)+ \frac{\partial P(\rho_o)}{\partial \rho} \varrho' + \frac{1}{2} \frac{\partial^2 P(\rho_o)}{\partial \rho^2} \varrho'^2 + ...$
Neglecting second order terms and higher, the speed of sound can be
related to the total pressure and density.
$c_o^2=\frac{p'}{\varrho'}= \frac{\partial P(\rho_o)}{\partial \rho}$
As a sound wave moves through a fluid, the fluid is usually assumed to
follow an adiabatic and reversible thermodynamic path: the heat transfer
between fluid particles is negligible, and the changes caused by the
sound wave can be reversed without changing the entropy of the system.
For an isentropic process,
the total pressure and density are related by the thermodynamic
relation,
$P=C \rho^\gamma$.
Taking the partial derivative and using the ideal gas law,
$P=\rho R_g T$,
$\frac{\partial P}{\partial \rho} = C \gamma \rho ^{\gamma -1 } = \frac{\gamma P}{\rho} = \gamma R_g T$.
The speed of sound can be expressed in terms of the ambient pressure,
density and temperature using,
$c_o^2= \frac{\partial P(\rho_o)}{\partial \rho} = \frac{\gamma P_o}{\rho_o} = \gamma R_g T_o$.
Using the definition of the Bulk modulus,
$B_o=\rho_o \frac{\partial P(\rho_o)}{\partial \rho}= \gamma P_o$
the speed of sound can be written as,
$c_o = \sqrt{\frac{\gamma P_o}{\rho_o}} = \sqrt{\frac{B_o}{\rho_o}} = \sqrt{ \gamma R_g T_o}$.
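As a quick check of the last expression, the sketch below evaluates
$c_0 = \sqrt{\gamma R_g T_0}$ for air at two temperatures, using
standard textbook gas properties (these values are not given in the text
above).

```python
# A small sketch evaluating c0 = sqrt(gamma * Rg * T0) for air.
import math

gamma = 1.4       # ratio of specific heats for air (standard value)
Rg = 287.0        # specific gas constant for air, J/(kg K)
for T0 in (273.15, 293.15):          # 0 C and 20 C
    c0 = math.sqrt(gamma * Rg * T0)
    print(T0, c0)                    # ~331 m/s and ~343 m/s
```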
## See also

Acoustics Wikibooks: Sound Speed

Speed of sound in water
# Engineering Acoustics/Acoustic wave equation
To derive the wave equation, three relations are combined: the equation
of state, ideal gas law; continuity equation, conservation of mass; and
Newton\'s law, conservation of momentum. From the speed of sound, a
relation between the acoustic pressure and the bulk modulus can be
derived,
$c_o^2=\frac{\partial P}{\partial \rho} = \frac{p'}{\varrho'}= \frac{p'}{\rho-\rho_o}.$
Using the definition of the condensation, relative change of density in
a fluid, denoted by $s$,
$\frac{p'}{\rho_o}= c_o^2 \Big( \frac{\rho-\rho_o}{\rho_o} \Big) = c_o^2s,$
and introducing the bulk modulus which makes implicit use of the ideal
gas law,
$p'= \rho_o c_o^2 s = \gamma P_o s = B_o s.$
The general form of the continuity equation for a control volume from
fluid dynamics is simplified to its one-dimensional form in Cartesian
coordinates.
$\frac{\partial \rho}{\partial t} + \vec{\nabla} (\rho \vec{U}) =0$
$\frac{\partial \rho}{\partial t} + \rho_o \frac{\partial U}{\partial x} =0$
$\frac{\partial U}{\partial x} = \frac{-1}{\rho_o} \frac{\partial \rho}{\partial t} = \frac{-1}{\rho_o c_o^2} \frac{\partial P}{\partial t}$
*Figure: Pressure on a fluid element inside a tube.*
Using Newton\'s law on the fluid element, the net force acting on the
fluid boundaries causes an acceleration on the fluid proportional to its
mass,
$A (P(x)-P(x+dx))= (A dx \rho_o) \frac{dU}{dt}.$
Noticing that
$\frac{P(x)-P(x+dx)}{dx} = -\frac{\partial P}{\partial x}$
as $dx$ approaches zero, evaluating the derivative, and neglecting small
terms,
$-\frac{\partial P}{\partial x} dx= (\rho_o dx) \Big( \frac{\partial U}{\partial t} + \frac{\partial U}{\partial x} \frac{dx}{dt}\Big)$
$\frac{\partial U}{\partial t}= \frac{-1}{\rho_o}\frac{\partial P}{\partial x}.$
To obtain the wave equation, the partial derivative with respect to time
is taken for the continuity equation and the partial derivative with
respect to space for the conservation of momentum equation. By equating
the two results and eliminating the density term on both sides,
$\frac{\partial^2 U}{\partial t \partial x}= \frac{-1}{\rho_o c_o^2} \frac{\partial^2 P}{\partial t^2}= \frac{-1}{\rho_o}\frac{\partial^2 P}{\partial x^2}$
the acoustic wave equation is recovered
$\frac{\partial^2 P}{\partial x^2}= \frac{1}{c_o^2} \frac{\partial^2 P}{\partial t^2}$
$\frac{\partial^2 P}{\partial x^2}- \frac{1}{c_o^2} \frac{\partial^2 P}{\partial t^2}=0.$
The equation above is the acoustic wave equation in its one-dimensional
form. It can be generalized to 3-D Cartesian coordinates,

$\frac{\partial^2 P}{\partial x^2}+\frac{\partial^2 P}{\partial y^2}+\frac{\partial^2 P}{\partial z^2}- \frac{1}{c_o^2} \frac{\partial^2 P}{\partial t^2}=0.$
Using the Laplace operator, it can be generalized to other coordinate
systems
$\nabla^2 P- \frac{1}{c_o^2} \frac{\partial^2 P}{\partial t^2}=0$
### One-dimensional wave solution in Cartesian coordinates
The one-dimensional acoustic wave equation is described by the second
order partial differential equation,
$\frac{\partial^2 P}{\partial x^2}= \frac{1}{c_o^2} \frac{\partial^2 P}{\partial t^2}.$
It can be solved using separation of variables. Suppose the pressure is
the product of one function only dependent on space and another function
only dependent on time,
$P(x,t)=X(x)T(t).$
Substituting back into the wave equation,
$X''T=\frac{1}{c_o^2}XT''$
$\frac{X''}{X}=\frac{1}{c_o^2}\frac{T''}{T}=-\lambda^2.$
This substitution leads to two homogeneous second order ordinary
differential equations, one in time and one in space.
$T'' + c_o^2 \lambda^2 T=0$
$X'' + \lambda^2 X=0$
The time function is expected to be dependent on the angular frequency
of the wave
$T=Ce^{j \omega t}.$
Substituting and solving for the constant, which is defined as the wave
number, $K$,
$-\omega^2 + c_o^2 \lambda^2=0$
$K^2=\lambda^2=\frac{\omega^2}{c_o^2}.$
The wave number relates the angular frequency of the wave to its
propagation speed in the medium. It can be expressed in different forms,

$K=\frac{\omega}{c_o}=\frac{2 \pi f}{c_o}=\frac{2 \pi}{\lambda},$

where $f$ is the frequency in hertz and $\lambda$ is the wavelength. The
second differential equation can be solved using the wave number. The
spatial function is given a general form,
$X=Ce^{jrx}.$
Substituting and solving for $r$,
$-r^2+K^2=0$
$r=\pm K.$
The solution of the 1-D acoustic wave equation is obtained,
$P(x,t)=(C_1e^{j K x} + C_2e^{-j K x})e^{j \omega t}.$
The real and imaginary parts of the solution are also solutions to the
1-D wave equation.
$P(x,t)=C_1\cos(\omega t + Kx) + C_2\cos(\omega t - Kx)$

$P(x,t)=C_1\sin(\omega t + Kx) + C_2\sin(\omega t - Kx)$
Using phasor notation, the solution is written in more compact form,
$\mathbf{P(x,t)}=\mathbf{P}e^{j(\omega t \pm Kx)}.$
The actual solution is recovered by taking the real part of the above
complex form. The values of the constants above are determined by applying
initial and boundary conditions. In general, any function of the
following form is a solution for periodic waves,
$P(x,t)=f_1(\omega t + Kx)+f_2(\omega t - Kx),$
and similarly, for progressive waves,
$P(x,t)=f(ct + x)+g(ct - x)$
$P(x,t)=f(\xi)+g(\eta),$
where $f$ and $g$ are arbitrary functions that represent two waves
traveling in opposing directions. These are known as the d\'Alembert
solutions. The form of these two functions can be found by applying
initial and boundary conditions.
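The solution can also be verified numerically. The sketch below checks,
with central finite differences, that a traveling harmonic wave
$P = \cos(\omega t - Kx)$ satisfies the one-dimensional wave equation;
the frequency and sound speed are assumed example values.

```python
# A quick numerical check (illustrative only) that P = cos(w t - K x)
# satisfies P_xx - P_tt / c0^2 = 0, using central finite differences.
import math

c0 = 343.0                    # sound speed, m/s (assumed)
f = 100.0                     # frequency, Hz (assumed)
w = 2 * math.pi * f
K = w / c0                    # wave number from the relation above
P = lambda x, t: math.cos(w * t - K * x)

x, t, h = 1.0, 0.01, 1e-5     # sample point and step size
Pxx = (P(x + h, t) - 2 * P(x, t) + P(x - h, t)) / h**2
Ptt = (P(x, t + h) - 2 * P(x, t) + P(x, t - h)) / h**2
print(Pxx - Ptt / c0**2)      # ~ 0: the wave equation is satisfied
```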
# Engineering Acoustics/Reflection and transmission of planar waves
## Specific acoustic impedance
Before discussing the reflection and transmission of planar waves, the
relation between particle velocity and acoustic pressure is
investigated.
$\frac{\partial u}{\partial t}= \frac{-1}{\rho_o}\frac{\partial p}{\partial x}$
The acoustic pressure and particle velocity can be described in complex
form.
$\mathbf{p}=\mathbf{P}e^{j(\omega t -Kx)}$
$\mathbf{u}=\mathbf{u_o}e^{j(\omega t -Kx)}$
Differentiating and substituting,
$-j \omega \mathbf{u}=\frac{-j K \mathbf{p}}{\rho_o}=\frac{-j \omega \mathbf{p}}{\rho_o c_o}$
$\mathbf{u}=\frac{\mathbf{p}}{\rho_o c_o}$
The specific acoustic impedance for planar waves is defined.
$\mathbf{z}=\frac{\mathbf{p}}{\mathbf{u}}=\rho_o c_o=r_o$
## Planar wave: Normal incidence
Consider an incident planar wave traveling in an infinite medium with
specific impedance $r_1=\rho_1 c_1$ which encounters the boundary
between medium 1 and medium 2. Part of the wave is reflected back into
medium 1 and the remaining part is transmitted to medium 2 with specific
impedance $r_2=\rho_2 c_2$. The pressure field in medium 1 is described
by the sum of the incident and reflected components of the wave.
$\mathbf{p_1}=\mathbf{p_i}+\mathbf{p_r}=\mathbf{P_i}e^{j(\omega t -K_1x)}+\mathbf{P_r}e^{j(\omega t +K_1x)}$
The pressure field in medium 2 is composed only of the transmitted
component of the wave.
$\mathbf{p_2}=\mathbf{p_t}=\mathbf{P_t}e^{j(\omega t -K_2x)}$
*Figure: Reflection and transmission of a normally incident planar wave.*
Notice that the frequency of the wave remains constant across the
boundary, however the specific acoustic impedance changes across the
boundary. The propagation speed in each medium is different, so the wave
number of each medium is also different. There are two boundary
conditions to be satisfied:
1. The acoustic pressure must be continuous at the boundary
2. The particle velocity must be continuous at the boundary
Imposition of the first boundary condition yields
$\mathbf{p_1}(x=0)=\mathbf{p_2}(x=0),$
$\mathbf{P_i} + \mathbf{P_r} = \mathbf{P_t}.$
Imposition of the second boundary condition yields
$\mathbf{u_1}(x=0)=\mathbf{u_2}(x=0),$
$\mathbf{u_i}(x=0)+ \mathbf{u_r}(x=0)= \mathbf{u_t}(x=0),$
and using the definition of specific impedance, the equations are
expressed in terms of pressure
$\frac{\mathbf{P_i}}{r_1} - \frac{\mathbf{P_r}}{r_1} = \frac{\mathbf{P_t}}{r_2}.$
The pressure reflection coefficient is the ratio of the reflected
acoustic pressure over the incident acoustic pressure,
$\mathbf{R}=\frac{\mathbf{P_r}}{\mathbf{P_i}}$. The pressure
transmission coefficient is the ratio of the transmitted acoustic
pressure over the incident acoustic pressure,
$\mathbf{T}=\frac{\mathbf{P_t}}{\mathbf{P_i}}$. The specific acoustic
impedance ratio is also defined as: $\zeta=\frac{r_2}{r_1}$. Applying
the above definitions, the boundary conditions can be rewritten as:
$1+\mathbf{R}=\mathbf{T}$
$1-\mathbf{R}=\frac{\mathbf{T}}{\zeta}.$
Solving for the pressure reflection coefficient yields:
$\mathbf{R}=\mathbf{T}-1=\frac{\zeta-1}{\zeta+1}=\frac{r_2-r_1}{r_2+r_1}.$
Solving for the pressure transmission coefficient yields:
$\mathbf{T}=\mathbf{R}+1=\frac{2 \zeta}{\zeta +1}=\frac{2r_2}{r_2+r_1}.$
Solving for the specific acoustic impedance ratio yields:
$\zeta = \frac{1+\mathbf{R}}{1-\mathbf{R}} = \frac{\mathbf{T}}{2-\mathbf{T}}.$
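As a worked example, the sketch below applies these coefficients to a
wave passing from air into water at normal incidence. The specific
impedances are standard approximate values, not taken from this text;
the result anticipates the rigid-boundary case discussed next.

```python
# A hedged example of the pressure reflection and transmission
# coefficients above for normal incidence from air into water.
r1 = 415.0       # specific impedance of air, N*s/m^3 (approx.)
r2 = 1.48e6      # specific impedance of water, N*s/m^3 (approx.)

zeta = r2 / r1                # specific acoustic impedance ratio
R = (zeta - 1) / (zeta + 1)   # pressure reflection coefficient
T = 2 * zeta / (zeta + 1)     # pressure transmission coefficient
print(R, T)                   # R ~ 0.9994, T ~ 1.9994: nearly rigid
```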
## Case 1: Rigid boundary
Consider an incident planar wave that encounters a rigid boundary. This
is the case if the specific impedance of medium 2 is significantly
larger than the specific impedance of medium 1. Thus, the specific
acoustic impedance ratio is very large, the reflection coefficient
approaches 1 and the transmission coefficient approaches 2.
$\mathbf{R}=1=\frac{\mathbf{P_r}}{\mathbf{P_i}} \Rightarrow \mathbf{P_r} = \mathbf{P_i} \Rightarrow \mathbf{u}(x=0)=0$
$\mathbf{T}=2=\frac{\mathbf{P_t}}{\mathbf{P_i}} \Rightarrow \mathbf{P_t} = 2 \mathbf{P_i} \Rightarrow \mathbf{p}(x=0)= 2 \mathbf{P_i}$
The amplitudes of the incident and reflected waves are equal. The
reflected wave is in phase with the incident wave. The particle velocity
at the boundary is zero. The acoustic pressure amplitude at the boundary
is equal to twice the pressure amplitude of the incident wave and it is
maximum.
## Case 2: Resilient boundary
Consider an incident planar wave that encounters a resilient boundary.
This is the case if the specific impedance of medium 2 is significantly
smaller than the specific impedance of medium 1. Thus, the specific
acoustic impedance ratio approaches zero, the reflection coefficient
approaches -1 and the transmission coefficient approaches zero.
$\mathbf{R}=-1=\frac{\mathbf{P_r}}{\mathbf{P_i}} \Rightarrow \mathbf{P_r} = - \mathbf{P_i} \Rightarrow \mathbf{u}(x=0)= \frac{\mathbf{2P_i}}{r_1}$
$\mathbf{T}=0=\frac{\mathbf{P_t}}{\mathbf{P_i}} \Rightarrow \mathbf{P_t} = 0 \Rightarrow \mathbf{p}(x=0)= 0$
The amplitudes of the incident and reflected waves are equal. The
reflected wave is $180^\circ$ out of phase with the incident wave. The
particle velocity at the boundary is a maximum. The acoustic pressure at
the boundary is zero.
## Case 3: Equal impedance in both media
Consider two media with the same specific acoustic impedance so that the
specific acoustic impedance ratio is unity, the reflection coefficient
is zero and the transmission coefficient is unity. Therefore, the wave
is not reflected, only transmitted. It behaves as if there was no
boundary.
# Engineering Acoustics/Transverse vibrations of strings
## Introduction
This section deals with the wave nature of vibrations constrained to one
dimension. Examples of this type of wave motion are found in objects
such as pipes and tubes with a small diameter (no transverse motion of
fluid) or in a string stretched on a musical instrument.

Stretched strings can be used to produce sound (e.g., in musical
instruments like guitars). The stretched string constitutes a mechanical
system that will be studied in this chapter. Later, the characteristics
of this system will be used to understand acoustical systems by analogy.
## What is a wave equation?
There are various types of waves (e.g., electromagnetic, mechanical)
acting all around us. Wave equations describe the time-space behavior of
the variables of interest in such waves; they combine the fundamental
equations of motion in a way that eliminates all variables but one.
Waves can propagate longitudinally (parallel to the direction of
propagation) or transversely (perpendicular to it). To visualize the
motion of such waves, click here (acoustics animations provided by Dr.
Dan Russell, Kettering University).
## One dimensional Case
Assumptions:

\- the string is uniform in size and density

\- the stiffness of the string is negligible for small deformations

\- the effects of gravity are neglected

\- there are no dissipative forces like friction

\- the string deforms in a plane

\- the motion of the string can be described using a single spatial
coordinate
Spatial representation of the string in vibration:
![](1Dwave_graph1.png "1Dwave_graph1.png"){width="300"}
The following is the free-body diagram of a string in motion in a
spatial coordinate system:
![](string_dwg.jpg "string_dwg.jpg")
From the diagram above, it can be observed that the tensions on each
side of the string element are related as follows:

![](equations1.jpg "equations1.jpg")

Expanding with a Taylor series, we obtain:

![](equations2.jpg "equations2.jpg")
## Characterization of the mechanical system
A one dimensional wave can be described by the following equation
(called the wave equation):
$\left( \frac{\partial^2 y}{\partial x^2} \right)=\left( \frac{1}{c^2} \right)\left( \frac{\partial^2 y}{\partial t^2} \right)$
where,
$y(x,t)= f(\xi)+g(\eta)\,$ is a solution,

with $\xi=ct-x\,$ and $\eta=ct+x\,$.

This is the d\'Alembert solution.
Another way to solve this equation is the method of separation of
variables, which is useful for modal analysis. This assumes the solution
is of the form:

$y(x,t)= f(x)g(t)\,$

The result is the same as above, but in a form that is more convenient
for modal analysis. For more information on this approach see: Eric W.
Weisstein et al. \"Separation of Variables.\" From MathWorld---A Wolfram
Web Resource.
Please see Wave
Properties
for information on variable c, along with other important properties.
For more information on wave equations see: Eric W. Weisstein. \"Wave
Equation.\" From MathWorld---A Wolfram Web Resource.
Example with the function $f(\xi)$: a Java string simulation shows a
simple simulation of a plucked string with fixed ends.
# Engineering Acoustics/Time-Domain Solutions
## d\'Alembert Solutions
In 1747, Jean Le Rond
d\'Alembert
published a solution to the one-dimensional wave equation.
The general solution, now known as the d\'Alembert method, can be found
by introducing two new variables:
$\xi=ct-x\,$ and $\eta=ct+x\,$
and then applying the chain rule to the general form of the wave
equation.
From this, the solution can be written in the form:
$y(\xi,\eta)= f(\xi)+g(\eta)\,=f(ct-x)+g(ct+x)$
where f and g are arbitrary functions that represent two waves traveling
in opposing directions.
A more detailed look into the proof of the d\'Alembert solution can be
found here.
## Example of Time Domain Solution
If $f(ct-x)$ is plotted vs. $x$ for two instants in time, the two waves
have the same shape, but the second is displaced by a distance
$c(t_2-t_1)$ to the right.
The two arbitrary functions could be determined from initial conditions
or boundary values.
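This displacement property is easy to demonstrate numerically. In the
sketch below, the pulse shape and phase speed are assumed examples; the
wave sampled at $t_2$ reproduces the $t_1$ value at a point shifted
right by $c(t_2-t_1)$.

```python
# An illustrative sketch of the statement above: the waveform f(ct - x)
# at t2 is the t1 waveform shifted right by c*(t2 - t1).
import math

c = 10.0                                   # phase speed, m/s (assumed)
f = lambda s: math.exp(-s**2)              # arbitrary pulse shape (assumed)
y = lambda x, t: f(c * t - x)              # right-traveling wave

t1, t2 = 0.0, 0.3
shift = c * (t2 - t1)                      # expected displacement: 3 m
print(y(1.0, t1), y(1.0 + shift, t2))      # identical values
```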
# Engineering Acoustics/Boundary Conditions and Forced Vibrations
## Boundary Conditions
The functions representing the solutions to the wave equation previously
discussed,
i.e. $y(x,t)= f(\xi)+g(\eta)\,$ with $\xi=ct-x\,$ and $\eta=ct+x\,$
are dependent upon the boundary and initial conditions. If it is assumed
that the wave is propagating through a string, the initial conditions
are related to the specific disturbance in the string at t=0. These
specific disturbances are determined by location and type of contact and
can be anything from simple oscillations to violent impulses. The
effects of boundary conditions are less subtle.
The most simple boundary conditions are the Fixed Support and Free End.
In practice, the Free End boundary condition is rarely encountered since
it is assumed there are no transverse forces holding the string (e.g.
the string is simply floating).
### For a Fixed Support
The overall displacement of the waves travelling in the string, at the
support, must be zero. Denoting x=0 at the support, this requires:
$y(0,t)= f(ct-0)+g(ct+0) = 0\,$
Therefore, the total transverse displacement at x=0 is zero.
The sequence of wave reflection for incident, reflected and combined
waves is illustrated below. Please note that the wave is initially
traveling to the left (negative **x** direction). The reflected wave is,
of course, traveling to the right (positive **x** direction).
**t=0** ![](Fixed_t0.gif "Fixed_t0.gif")
**t=t1** ![](Fixed_t1.gif "Fixed_t1.gif")
**t=t2** ![](Fixed_t2.gif "Fixed_t2.gif")
**t=t3** ![](Fixed_t3.gif "Fixed_t3.gif")
### For a Free Support
Unlike the Fixed Support boundary condition, the transverse displacement
at the support does not need to be zero, but must require the sum of
transverse forces to cancel. If it is assumed that the angle of
displacement is small,
$\sin(\theta)\approx\theta=\left( \frac{\partial y}{\partial x} \right)\,$
and so,
$\sum F_{y} = T\sin(\theta)\approx T\left( \frac{\partial y}{\partial x} \right)=0\,$
But of course, the tension in the string, or T, will not be zero and
this requires the slope at x=0 to be zero:
i.e. $\left( \frac{\partial y}{\partial x} \right)=0\,$
Again, for the free boundary, the sequence of wave reflection for
incident, reflected and combined waves is illustrated below:
**t=0** ![](Fixed_t0.gif "Fixed_t0.gif")
**t=t1** ![](Free_t1.gif "Free_t1.gif")
**t=t2** ![](Free_t2.gif "Free_t2.gif")
**t=t3** ![](Free_t3.gif "Free_t3.gif")
### Other Boundary Conditions
There are many other types of boundary conditions that do not fall into
our simplified categories. As one would expect though, it isn\'t
difficult to relate the characteristics of numerous \"complex\" systems
to the basic boundary conditions. Typical or realistic boundary
conditions include mass-loaded, resistance-loaded, damping-loaded, and
impedance-loaded strings. For further information, see Kinsler,
Fundamentals of Acoustics, pp 54--58.
Here is a website with nice movies of wave reflection at different
BC\'s: Wave
Reflection
## Wave Properties
To begin with, a few definitions of useful variables will be discussed.
These include; the wave number, phase speed, and wavelength
characteristics of wave travelling through a string.
The speed that a wave propagates through a string is given in terms of
the phase speed, typically in m/s, given by:
$c = \sqrt{T/\rho_{L}}\,$ where $\rho_{L}\,$ is the mass per unit length
of the string.
The wave number is used to reduce the transverse displacement equation
to a simpler form and for simple harmonic motion, is multiplied by the
lateral position. It is given by:
$k=\left( \frac{\omega}{c} \right)\,$ where $\omega=2\pi f\,$
Lastly, the wavelength is defined as:
$\lambda=\left( \frac{2\pi}{k} \right)=\left( \frac{c}{f} \right)\,$
and is defined as the distance between two points, usually peaks, of a
periodic waveform.
These \"wave properties\" are of practical importance when calculating
the solution of the wave equation for a number of different cases. As
will be seen later, the wave number is used extensively to describe wave
phenomenon graphically and quantitatively.
For further information: Wave
Properties
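For a concrete feel, the sketch below evaluates these wave properties
for an example string; the tension, linear density and frequency are
assumed values.

```python
# A short sketch evaluating the wave properties defined above.
import math

T = 100.0        # string tension, N (assumed)
rhoL = 0.01      # mass per unit length, kg/m (assumed)
f = 220.0        # frequency, Hz (assumed)

c = math.sqrt(T / rhoL)          # phase speed, m/s
w = 2 * math.pi * f              # angular frequency, rad/s
k = w / c                        # wave number, rad/m
lam = 2 * math.pi / k            # wavelength, m (equals c/f)
print(c, k, lam)
```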
## Forced Vibrations

**1. Forced vibration of an infinite string.** Suppose there is a very
long string with a transverse force exerted on it at x=0,

$F(t)=F\cos(\omega t)=\operatorname{Re}\{Fe^{j\omega t}\}$

Using the boundary condition at x=0 and neglecting the reflected wave,
the wave form is easily obtained, where $\omega$ is the angular
frequency and $k$ is the wave number.

According to the impedance definition, the ratio of driving force to
transverse velocity represents the characteristic impedance of the
string, $Z = \rho_L c = \sqrt{T\rho_L}$. It is purely resistive, like
the resistance in a mechanical system.
------------------------------------------------------------------------
For this purely resistive load, the time-averaged dissipated power is
$\langle P \rangle = \frac{F^2}{2Z}$, the standard result for a force of
amplitude $F$ driving a resistance $Z$.

Note: along the string, all the variables propagate at the same speed.
------------------------------------------------------------------------
Some interesting animations of the wave at different boundary
conditions:

1\. hard boundary (which is like a fixed end)

2\. soft boundary (which is like a free end)

3\. from a low-density to a high-density string

4\. from a high-density to a low-density string
# Engineering Acoustics/Boundary Conditions and Wave Properties
## Boundary Conditions
The functions representing the solutions to the wave equation previously
discussed,
i.e. $y(x,t)= f(\xi)+g(\eta)\,$ with $\xi=ct-x\,$ and $\eta=ct+x\,$
are dependent upon the boundary and initial conditions. If it is assumed
that the wave is propagating through a string, the initial conditions
are related to the specific disturbance in the string at t=0. These
specific disturbances are determined by location and type of contact and
can be anything from simple oscillations to violent impulses. The
effects of boundary conditions are less subtle.
The most simple boundary conditions are the Fixed Support and Free End.
In practice, the Free End boundary condition is rarely encountered since
it is assumed there are no transverse forces holding the string (e.g.
the string is simply floating).
`- For a Fixed Support:`
The overall displacement of the waves travelling in the string, at the
support, must be zero. Denoting x=0 at the support, This requires:
$y(0,t)= f(ct-0)+g(ct+0) = 0\,$
Therefore, the total transverse displacement at x=0 is zero.
`- For a Free Support:`
Unlike the Fixed Support boundary condition, the transverse displacement
at the support does not need to be zero, but must require the sum of
transverse forces to cancel. If it is assumed that the angle of
displacement is small,
$\sin(\theta)\approx\theta=\left( \frac{\partial y}{\partial x} \right)\,$
and so,
$\sum F_{y} = T\sin(\theta)\approx T\left( \frac{\partial y}{\partial x} \right)=0\,$
But of course, the tension in the string, or T, will not be zero and
this requires the slope at x=0 to be zero:
i.e. $\left( \frac{\partial y}{\partial x} \right)=0\,$
`- Other Boundary Conditions:`
There are many other types of boundary conditions that do not fall into
our simplified categories. As one would expect though, it isn\'t
difficult to relate the characteristics of numerous \"complex\" systems
to the basic boundary conditions. Typical or realistic boundary
conditions include mass-loaded, resistance-loaded, damping-loaded, and
impedance-loaded strings. For further information, see Kinsler,
Fundamentals of Acoustics, pp 54--58.
## Wave Properties
To begin with, a few definitions of useful variables will be discussed.
These include; the wave number, phase speed, and wavelength
characteristics of wave travelling through a string.
The speed that a wave propagates through a string is given in terms of
the phase speed, typically in m/s, given by:
$c = \sqrt{T/\rho_{L}}\,$ where $\rho_{L}\,$ is the density per unit
length of the string.
The wave number reduces the transverse displacement equation to a
simpler form; for simple harmonic motion, it multiplies the lateral
position. It is given by:
$k=\left( \frac{\omega}{c} \right)\,$ where $\omega=2\pi f\,$
Lastly, the wavelength is defined as:
$\lambda=\left( \frac{2\pi}{k} \right)=\left( \frac{c}{f} \right)\,$
the distance between two corresponding points, usually peaks, of a
periodic waveform.
These \"wave properties\" are of practical importance when calculating
the solution of the wave equation for a number of different cases. As
will be seen later, the wave number is used extensively to describe wave
phenomenon graphically and quantitatively.
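As a quick numerical illustration, the minimal Python sketch below
evaluates the phase speed, wave number, and wavelength from the
definitions above; the tension, linear density, and frequency are
assumed illustrative values, not taken from the text.

```python
import math

def string_wave_properties(T, rho_L, f):
    """Phase speed, wave number, and wavelength of a transverse wave on a
    string with tension T [N] and linear density rho_L [kg/m] at f [Hz]."""
    c = math.sqrt(T / rho_L)        # phase speed, c = sqrt(T/rho_L)
    omega = 2.0 * math.pi * f       # angular frequency
    k = omega / c                   # wave number, k = omega/c
    lam = 2.0 * math.pi / k         # wavelength, lambda = 2*pi/k = c/f
    return c, k, lam

# Hypothetical guitar-like string: T = 70 N, rho_L = 5 g/m, f = 440 Hz.
c, k, lam = string_wave_properties(70.0, 0.005, 440.0)
print(f"c = {c:.1f} m/s, k = {k:.2f} rad/m, lambda = {lam:.3f} m")
```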
------------------------------------------------------------------------
Edited by: Mychal Spencer
# Engineering Acoustics/Attenuation of Sound Waves
## Introduction
When sound travels through a medium, its intensity diminishes with
distance. This weakening in the energy of the wave results from two
basic causes, scattering and absorption. The combined effect of
scattering and absorption is called attenuation. For small distances or
short times the effects of attenuation in sound waves can usually be
ignored. Yet, for practical purposes it should be considered. So far in
our discussions, sound has only been diminished by the spreading of the
wave, as with spherical and cylindrical waves. In those cases, however,
the reduction is due to geometric effects associated with energy being
spread over an increasing area, and not to any actual loss of total
energy.
## Types of Attenuation
As mentioned above, attenuation is caused by both absorption and
scattering. Absorption is generally caused by the media. This can be due
to energy loss by both viscosity and heat conduction. Attenuation due to
absorption is important when the volume of the material is large.
Scattering, the second cause of attenuation, is important when the
volume is small or in cases of thin ducts and porous materials.
### Viscosity and Heat Conduction
Whenever there is relative motion between particles in a medium, such
as in wave propagation, energy conversion occurs. This is due to stress
from viscous forces between particles of the medium, and the energy lost
is converted to heat. Because of this, the intensity of a sound wave
decreases more rapidly than the inverse square of distance. Viscosity in
gases depends mostly on temperature, so increasing the temperature
increases the viscous forces.
### Boundary Layer Losses
A special type of absorption occurs when a sound wave travels over a
boundary, such as a fluid flowing over a solid surface. In such a
situation, the fluid in immediate contact with the surface must be at
rest. Subsequent layers of fluid have a velocity that increases with
distance from the solid surface.
The velocity gradient causes an internal stress, associated with
viscosity, that leads to a loss of momentum. This loss of momentum leads
to a decrease in the amplitude of a wave close to the surface. The
region over which the velocity of the fluid decreases from its nominal
velocity to that of zero is called the acoustic boundary layer. The
thickness of the acoustic boundary layer due to viscosity can be
expressed as
$\delta_{visc}=\sqrt{\frac{2\mu}{\omega\rho_o}}$
where $\mu \,$ is the shear viscosity. An ideal fluid would have no
boundary layer thickness, since $\mu=0 \,$.
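A minimal sketch of this formula, assuming nominal properties for air
at room temperature (the shear viscosity and density below are assumed
values):

```python
import math

def viscous_boundary_layer(mu, rho0, f):
    """Acoustic boundary layer thickness, delta = sqrt(2*mu/(omega*rho0))."""
    omega = 2.0 * math.pi * f
    return math.sqrt(2.0 * mu / (omega * rho0))

# Air at roughly 20 C (assumed): mu ~ 1.81e-5 Pa*s, rho0 ~ 1.21 kg/m^3.
for f in (100.0, 1000.0, 10000.0):
    d = viscous_boundary_layer(1.81e-5, 1.21, f)
    print(f"f = {f:7.0f} Hz -> delta ~ {d*1e6:6.1f} micrometres")
```

Note how the boundary layer thins as frequency increases, since
$\delta_{visc}\propto 1/\sqrt{\omega}$.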
### Relaxation
Attenuation can also occur by a process called relaxation. One of the
basic assumptions prior to this discussion on attenuation was that the
pressure and density of a fluid depend only on the instantaneous values
of density and temperature, and not on the rate of change of these
variables. However, whenever a change occurs,
equilibrium is upset and the media adjusts until a new local equilibrium
is achieved. This does not occur instantaneously, and pressure and
density will vary in the media. The time it takes to achieve this new
equilibrium is called the relaxation time, $\theta \,$ . As a
consequence the speed of sound will increase from an initial value to
that of a maximum as frequency increases. Again the losses associated
with relaxation are due to mechanical energy being transformed into
heat.
## Modeling of losses
The following is done for a plane wave. Losses can be introduced by the
addition of a complex expression for the wave number
$k= \beta-j\alpha$
which, when substituted into the time solution, yields
$\ p = A e^{-\alpha x}e^{j(\omega t- \beta x)}$
with the new term $\ e^{-\alpha x}$ resulting from the use of a complex
wave number. Note the negative sign preceding $\alpha$, denoting an
exponential decay in amplitude with increasing values of $x$.
$\ \alpha$ is known as the absorption coefficient with units of nepers
per unit distance and $\ \beta$ is related to the phase speed. The
absorption coefficient is frequency dependent and is generally
proportional to the square of sound frequency. However, its relationship
does vary when considering the different absorption mechanisms as shown
below.
The velocity of the particles can be expressed as
$\ u=\frac{k}{\omega\rho_o}p= \frac{1}{\rho_o c}\left(1-j\frac{\alpha}{k}\right)p$
The impedance for this traveling wave would be given by
$\ z= \rho_o c\frac{1}{1-j\frac{\alpha}{k}}$
From this we can see that the rate of decrease in intensity of an
attenuated wave, expressed in decibels per unit distance, is
$\ a=8.7\alpha$, with $\alpha$ in nepers per unit distance.
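The decay law can be illustrated numerically. In this minimal sketch
the absorption coefficient is an assumed value; the printed level
change reproduces the $a = 8.7\alpha$ rule (8.686 = 20*log10(e),
rounded to 8.7 in the text).

```python
import numpy as np

# Lossy plane wave p = A exp(-alpha*x) exp(j(omega*t - beta*x)): the
# amplitude decays as exp(-alpha*x), so the level drops by about
# 8.686*alpha decibels per unit distance.
alpha = 0.05          # absorption coefficient, Np/m (assumed value)
A = 1.0               # source amplitude, Pa
x = np.linspace(0.0, 50.0, 6)

amplitude = A * np.exp(-alpha * x)
level_drop_dB = 20.0 * np.log10(amplitude / A)   # equals -8.686*alpha*x
for xi, Li in zip(x, level_drop_dB):
    print(f"x = {xi:5.1f} m : level change = {Li:7.2f} dB")
```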
## References
Wood, A. A Textbook of Sound. London: Bell, 1957.
Blackstock, David. Fundamentals of Physical Acoustics. New York: Wiley,
2000.
# Engineering Acoustics/Sound Propagation in a Cylindrical Duct With Compliant Wall
There are many applications that deal with sound propagation in a duct
with a compliant wall. These range from hydraulic applications (e.g.
water hammer) to biomechanical applications (e.g. the pressure pulse in
an artery). In working with sound
propagation in a circular duct, the duct wall is often assumed to be
rigid so that any pressure disturbance in the fluid has no effect on the
wall. However, if the wall is assumed to be compliant, i.e. wall
deformation is possible when a pressure disturbance is encountered,
then, this will change the speed of the sound propagation. In reality,
the rigid wall assumption will be valid if the pressure disturbance in
the fluid, which is a function of the fluid density, is very small so
that the deformation of the wall is insignificant. However, if the duct
wall is assumed to be thin, i.e. ~1/20 of the radius or smaller, or if
the wall is made of a plastic-type material with low Young's modulus
and density, or if the contained fluid is "heavy", the rigid-wall
approximation is no longer valid. In this case, the wall is assumed to
be compliant.
In the book by Morse & Ingard \[1\], the wall stiffness is defined as
K~w~: the ratio between the pressure disturbance, p, and the fractional
change in cross-sectional area of the duct produced by p. Of course this
pressure disturbance p is not static, and the inertia of the wall has to
be considered. Because the deformation of the wall is due to the
pressure disturbances in the fluid, this is a typical fluid-structure
interaction problem: the pressure disturbances in the fluid cause the
structural deformation, which in turn modifies the pressure
disturbances.
Unlike sound propagation in a duct with a rigid wall, where the sound
pressure travels down the tube purely axially, here part of the pressure
is used to stretch the tube radially. Clearly, because of the inclusion
of the tube wall displacement, this becomes a fluid-structure
interaction problem.
## Analysis
In this analysis, it is expected that the speed of propagation will
depend on the material properties of the tube wall, i.e. its Young's
modulus and density. Also, as the analysis unfolds, it will become
apparent that the speed of propagation varies with excitation frequency,
unlike wave propagation in a rigid-wall tube. Keep in mind that the
analysis presented here is a very simple one; depending on the fluid and
structure models, one can reach more accurate results. The goal here is
to provide a basic physical understanding of sound propagation in a
cylindrical duct with a compliant wall.
Assumptions:

1. One-dimensional analysis
2. All viscous and thermal dissipation is neglected
3. The analysis is per unit length
4. No mean flow velocity is considered
### Fluid
Two simplified fluid equations will be considered here:
$-\frac{\partial u}{\partial x}=\kappa \frac{\partial p}{\partial t}\dots \dots .Eq(1)$
and
$-\frac{\partial p}{\partial x}={\rho }_f\frac{\partial u}{\partial t}\dots \dots .Eq(2)$
where $\kappa =\frac{1}{{\rho }_fc^2}$, $u$ is the fluid particle
velocity and $p$ is the fluid pressure.
The first equation is the continuity equation, where the density term is
replaced by the pressure term by applying the ideal gas law $p=\rho RT$
and the isentropic gas law $c^2=\gamma RT$.
Because of the compliant wall, the fluid experiences an additional
compressibility effect; according to Morse & Ingard \[1\], this
additional compressibility is derived from the wall stiffness (K).
The wall stiffness is defined as the ratio between the pressure and the
fractional change in cross-sectional area. Introducing K into Eq. 1
yields
$-\frac{\partial u}{\partial x}=\left(\kappa +\frac{1}{K}\right)\frac{\partial p}{\partial t}\dots \dots Eq\left(3\right)$
If the mass of the tube is considered, then an additional mass
parameter, M~w~, must be included.
The total stiffness impedance of the wall is then:
$Z_w=j\omega M_w+\frac{K_w}{j\omega }=j\frac{M_w}{\omega }\left({\omega }^2-{\omega }^2_n\right)$
Treating this wall impedance as a compliance term (i.e.
$K=j\omega Z_w$) and substituting back into Eq. 3 yields
$-\frac{\partial u}{\partial x}=\left(\kappa +\frac{1}{M_w({{\omega }^2_n-\omega }^2)}\right)\frac{\partial p}{\partial t}$
Here, ${\omega }_n=\sqrt{\frac{K_w}{M_w}}$.
Furthermore, by factoring out ${\kappa }$, the expression becomes
$-\frac{\partial u}{\partial x}=\kappa \left(\frac{M_w\left({{\omega }^2_n-\omega }^2\right)+{\rho }_fc^2}{M_w({{\omega }^2_n-\omega }^2)}\right)\frac{\partial p}{\partial t}$
After some manipulation, it yields
$-\frac{\partial u}{\partial x}=\kappa \left(\frac{{{\omega }^2_1-\omega }^2}{{{\omega }^2_n-\omega }^2}\right)\frac{\partial p}{\partial t}$
where ${\omega }^2_1={\omega }^2_n+\frac{{\rho }_fc^2}{M_w}$
For a rigid wall, as $K_w\to \infty$, ${\omega }^2_n\to \infty$ and
${\omega }^2_1\to \infty$; hence
$-\frac{\partial u}{\partial x}=\kappa \frac{\partial p}{\partial t}$,
which leads back to Equation 1.
If the impedance analogy is used, i.e. pressure is the voltage and the
velocity the current, then
$C=\ \kappa \left(\frac{{{\omega }^2_1-\omega }^2}{{{\omega }^2_n-\omega }^2}\right)$
where C is the compliance of the wall per unit length.
The speed of sound is then determined by $\sqrt{\frac{1}{LC}}$ (with the
inertance $L={\rho }_f$ following from Eq. 2); hence
$c_p=c\sqrt{\left(\frac{{{\omega }^2_n-\omega }^2}{{{\omega }^2_1-\omega }^2}\right)}$
Here, $c_p$ is the phase velocity. Because it depends on the excitation
frequency, ${\omega }$, the acoustic wave is dispersive. When the
excitation frequency is below the natural frequency, ${\omega }_n$, the
phase speed is lower than the free wave speed in the fluid.
The next step is to identify K~w~ and M~w~, which are determined
through the structural response.
### Structure
Assume the undeformed tube has a diameter of D and after deformation, it
becomes $D+\triangle D$. The area change in this case is
$\frac{\pi }{4}\left(2D\triangle D+{\left(\triangle D\right)}^2\right)$.
The ratio between the two is
$\frac{\triangle A}{A}=\frac{2\triangle D}{D}+{\left(\frac{\triangle D}{D}\right)}^2$.
Hence, neglecting the second-order term, the wall stiffness is
$K_w=\frac{\triangle pD}{2\triangle D}$.
The inverse of this is known as the compliance, or distensibility (a
biomedical term).
To determine the term $\frac{\triangle D}{D}$, it is necessary to look
at the structural response using Newton's law and Hooke's law.
Consider a cross-section of a half tube with diameter D, thickness h and
tension T,
*(Figure: a cut-away cylindrical tube.)*
The hoop stress in a cylindrical tube is given by
$\sigma =p\frac{D}{2h}$.
Applying Hooke's law, the strain, $\varepsilon$, is
$\varepsilon =\frac{\sigma}{E}$.
With some substitution,
$\varepsilon =\frac{pD}{2hE}$.
For small strain,
$\Delta \varepsilon =\frac{\Delta pD}{2hE}=\frac{\Delta D}{D}$.
Hence,
$K_w=E\frac{h}{D}$
This is the wall stiffness, a function only of the tube's elastic
properties and geometry. If the mass of the tube per unit length is
considered, then
$M_w={\rho}_s\pi h(D+h)$.
Finally, it is possible to plot the phase speed and the wall impedance
versus the excitation frequency.
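The sketch below evaluates these expressions in Python. The tube
geometry is an assumption (the text states only $\frac{h}{D} = 0.1$), so
the resulting $f_n$ and $f_1$ need not match the plotted values; with
air inside a steel tube the fluid loading is weak, so $\omega_1$ sits
only slightly above $\omega_n$.

```python
import numpy as np

E, rho_s = 2e11, 7800.0     # steel wall: Young's modulus [Pa], density [kg/m^3]
D, h = 1.0, 0.1             # assumed tube diameter and wall thickness [m]
rho_f, c = 1.21, 343.0      # air inside the tube

K_w = E * h / D                                # wall stiffness, K_w = E h / D
M_w = rho_s * np.pi * h * (D + h)              # wall mass per unit length
w_n = np.sqrt(K_w / M_w)                       # empty-structure natural frequency
w_1 = np.sqrt(w_n**2 + rho_f * c**2 / M_w)     # fluid-loaded natural frequency
print(f"f_n = {w_n/(2*np.pi):.1f} Hz, f_1 = {w_1/(2*np.pi):.1f} Hz")

for f in (100.0, 435.0, 2000.0):
    w = 2.0 * np.pi * f
    cp2 = c**2 * (w_n**2 - w**2) / (w_1**2 - w**2)   # c_p^2 from the text
    tag = f"{np.sqrt(cp2):.1f} m/s" if cp2 > 0 else "evanescent"
    print(f"f = {f:6.0f} Hz : c_p = {tag}")
```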
## Discussions
In the simulation, the thickness-to-diameter ratio $\frac{h}{D}$ is 0.1;
the material is steel with $\rho = 7800 \frac{kg}{m^3}$ and
$E = 2\times 10^{11}\ Pa$. The fluid contained inside is assumed to be
air with $\rho = 1.21 \frac{kg}{m^3}$ and a free wave speed of
$c = 343 \frac{m}{s}$.
![](Tws1.jpg "Tws1.jpg"){width="600"}
In this diagram, the 'o' denotes the real part of the phase speed and
the '+' denotes the imaginary part. The straight line shows the sound
speed in air, with a numerical value of $c = 343 \frac{m}{s}$.
Propagation of the wave is possible only if the phase speed is real.
There are two important frequencies that deserve close attention: the
natural frequency of the empty structure, ${\omega }_n$, and the natural
frequency of the fluid-loaded structure, ${\omega }_1$. In this plot,
${\omega }_n = 450\ Hz$ while ${\omega }_1 = 550\ Hz$.
Unlike 1-D wave propagation in a rigid duct, where the propagation speed
is a constant, the phase speed here depends on the excitation frequency.
It decreases as the excitation frequency approaches ${\omega }_n$; as
the frequency is increased toward ${\omega }_n$, part of the fluid
energy is used to excite the tube. Between ${\omega }_n$ and
${\omega }_1$, the phase speed is imaginary, which means no wave can
propagate between these two frequencies. As soon as the frequency
increases past ${\omega }_1$, the phase speed is greater than the free
wave speed of 343 m/s, and as the excitation frequency increases
further, the phase speed approaches the free wave speed.
For a very rigid tube, e.g. $E = 2\times 10^{20}\ Pa$, the phase speed
is exactly the free wave speed in air, which is a constant. This agrees
with what has been discussed before for 1-D wave propagation in a rigid
duct.
![](Tws2.jpg "Tws2.jpg"){width="600"}
When the stiffness is reduced to 1/100 that of steel, there are numerous
differences from the steel tube. First, at low frequency the phase speed
is slower: the lower the wall stiffness, the more the wall can be
stretched, and hence the more energy it can absorb. The system also has
a much lower ${\omega }_n$ and ${\omega }_1$.
![](Tws3.jpg "Tws3.jpg"){width="600"}
From the above analysis, it is possible to conclude the following:

1\. The stiffness, the wall thickness and the density of the tube
affect the phase speed dramatically.

2\. A reduction in stiffness reduces the propagation speed at low
frequencies, and the wave becomes evanescent at a much lower frequency,
because the natural frequency is reduced. As the stiffness increases,
the propagation speed approaches the free wave speed regardless of the
frequency region; this is the rigid-wall case.

3\. The propagation of a wave in a duct with a compliant wall is
dispersive, as the speed depends greatly on frequency. The phase speed
differs significantly from that in a rigid-wall duct.

4\. The non-propagating zone is known as the stop band. The wall
properties can be modified to create a larger stop band; hence, a duct
with a compliant wall can be considered a type of band filter.
## References
\[1\]. Morse & Ingard (1968), *Theoretical Acoustics*, Princeton
University Press, Princeton, New Jersey.
# Engineering Acoustics/Reflection, transmission and refraction of planar waves
## Two-dimensional planar waves
Two-dimensional planar pressure waves can be described in Cartesian
coordinates by decomposing the wave number into x and y components,
$\mathbf{p}(x,y,t)=\mathbf{P}e^{j(\omega t -K_x x-K_y y)}.$
Substituting into the general wave equation yields:
$\nabla^2 \mathbf{p}- \frac{1}{c_o^2} \frac{\partial^2 \mathbf{p}}{\partial t^2}=0,$
$\mathbf{P}(-K_x^2-K_y^2)+\frac{\omega^2}{c_o^2}\mathbf{P}=0,$
$K=\frac{\omega}{c_o}=\sqrt{K_x^2+K_y^2}.$
The wave number becomes a vector quantity and may be expressed using the
directional cosines,
$\vec{K}=K_x \boldsymbol{\hat{\imath}} + K_y \boldsymbol{\hat{\jmath}} = K \cos(\alpha) \boldsymbol{\hat{\imath}} + K \cos(\beta) \boldsymbol{\hat{\jmath}}.$
## Obliquely incident planar waves
Consider an obliquely incident planar wave in medium 1 which approaches
the boundary at an angle $\theta_i$ with respect to the normal. Part of
the wave is reflected back into medium 1 at an angle $\theta_r$ and the
remaining part is transmitted to medium 2 at an angle $\theta_t$.
$\mathbf{p_1}=\mathbf{P_i}e^{j(\omega t - \cos\theta_i K_1 x - \sin\theta_i K_1 y)} + \mathbf{P_r}e^{j(\omega t + \cos\theta_r K_1 x - \sin\theta_r K_1 y)}$
$\mathbf{p_2}=\mathbf{P_t}e^{j(\omega t - \cos\theta_t K_2 x - \sin\theta_t K_2 y)}$
*(Figure: reflection and transmission of an obliquely incident planar
wave.)*
Notice that the wave frequency does not change across the boundary;
however, the specific acoustic impedance does change from medium 1 to
medium 2. The propagation speed is different in each medium, so the
wave number changes across the boundary. There are two boundary
conditions to be satisfied.
1. The acoustic pressure must be continuous at the boundary.
2. The particle velocity component normal to the boundary must be
continuous at the boundary.
Imposition of the first boundary condition yields
$\mathbf{p_1}(x=0)=\mathbf{p_2}(x=0),$
$\mathbf{P_i}e^{-j \sin\theta_i K_1 y} + \mathbf{P_r} e^{-j \sin\theta_r K_1 y}= \mathbf{P_t} e^{-j \sin\theta_t K_2 y}.$
For continuity to hold, the exponents must all be equal to each other:
$K_1 \sin\theta_i =K_1 \sin\theta_r=K_2 \sin\theta_t.$
This has two implications. First, the angle of incident waves is equal
to the angle of reflected waves,
$\sin\theta_i = \sin\theta_r$
and second, Snell's law is recovered,
$\frac{\sin\theta_i}{c_1}=\frac{\sin\theta_t}{c_2}.$
The first boundary condition can be expressed using the pressure
reflection and transmission coefficients
$1+\mathbf{R}=\mathbf{T}.$
Imposition of the second boundary condition yields
$\mathbf{u_{1x}}(x=0)=\mathbf{u_{2x}}(x=0),$
$\mathbf{u_i}\cos\theta_i+ \mathbf{u_r}\cos\theta_r= \mathbf{u_t}\cos\theta_t.$
Using the specific acoustic impedance definition yields
$\frac{\mathbf{P_i}}{r_1}\cos\theta_i- \frac{\mathbf{P_r}}{r_1}\cos\theta_r= \frac{\mathbf{P_t}}{r_2}\cos\theta_t.$
Using the reflection coefficient, the transmission coefficient and the
acoustic impedance ratio leads to
$1- \mathbf{R}= \frac{\cos\theta_t}{\cos\theta_i}\frac{\mathbf{T}}{\zeta}.$
Solving for the pressure reflection coefficient yields:
$\mathbf{R}=\mathbf{T}-1=\frac{\frac{\cos\theta_i}{\cos\theta_t}\zeta-1}{\frac{\cos\theta_i}{\cos\theta_t}\zeta+1}=\frac{\frac{r_2}{\cos\theta_t}-\frac{r_1}{\cos\theta_i}}{\frac{r_2}{\cos\theta_t}+\frac{r_1}{\cos\theta_i}}.$
Solving for the pressure transmission coefficient yields:
$\mathbf{T}=\mathbf{R}+1=\frac{2 \frac{\cos\theta_i}{\cos\theta_t}\zeta}{\frac{\cos\theta_i}{\cos\theta_t}\zeta +1}=\frac{2\frac{r_2}{\cos\theta_t}}{\frac{r_2}{\cos\theta_t}+\frac{r_1}{\cos\theta_i}}.$
Solving for the specific acoustic impedance ratio yields
$\zeta = \frac{\cos\theta_t}{\cos\theta_i}\Big(\frac{1+\mathbf{R}}{1-\mathbf{R}}\Big) = \frac{\cos\theta_t}{\cos\theta_i}\Big(\frac{\mathbf{T}}{2-\mathbf{T}}\Big) .$
## Rayleigh reflection coefficient
The Rayleigh reflection coefficient relates, via Snell's law, the angle
of incidence to the angle of transmission in the equations for
$\mathbf{R}$, $\mathbf{T}$ and $\zeta$. From the trigonometric identity,
$\cos^2\theta_t+\sin^2\theta_t=1$
and using Snell\'s law,
$\cos\theta_t=\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}.$
Notice that for the angle of transmission to be real,
$c_2<\frac{c_1}{\sin\theta_i}$
must be met. Thus, there is a critical angle of incidence such that
$\sin{\theta_c}=\frac{c_1}{c_2}.$
This expression for $\cos\theta_t$ is substituted back into the
equations for $\mathbf{R}$, $\mathbf{T}$ and $\zeta$ to obtain
expressions only in terms of impedance and the angle of incidence.
$\mathbf{R}=\frac{\cos\theta_i\zeta-\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}}{\cos\theta_i\zeta+\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}}$
$\mathbf{T}=\frac{2 \cos\theta_i\zeta}{\cos\theta_i\zeta +\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}}$
$\zeta = \frac{\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}}{\cos\theta_i}\Big(\frac{1+\mathbf{R}}{1-\mathbf{R}}\Big) = \frac{\sqrt{1-\Big( \frac{c_2}{c_1}\sin\theta_i \Big)^2}}{\cos\theta_i}\Big(\frac{\mathbf{T}}{2-\mathbf{T}}\Big) .$
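A minimal sketch evaluating $\mathbf{R}$ and $\mathbf{T}$ for two
fluids; the media properties and the angle of incidence below are
assumed illustrative values.

```python
import numpy as np

def oblique_coefficients(rho1, c1, rho2, c2, theta_i):
    """Pressure reflection and transmission coefficients for a planar wave
    obliquely incident (angle theta_i, radians) on a fluid-fluid interface."""
    r1, r2 = rho1 * c1, rho2 * c2           # characteristic impedances
    s = (c2 / c1) * np.sin(theta_i)         # Snell's law: sin(theta_t)
    if s >= 1.0:
        raise ValueError("beyond the critical angle: theta_t is not real")
    cos_t = np.sqrt(1.0 - s**2)
    cos_i = np.cos(theta_i)
    R = (r2 / cos_t - r1 / cos_i) / (r2 / cos_t + r1 / cos_i)
    T = 1.0 + R                             # from continuity of pressure
    return R, T

# Water into a hypothetical denser, faster fluid (assumed properties).
R, T = oblique_coefficients(998.0, 1481.0, 1260.0, 1900.0, np.radians(20.0))
print(f"R = {R:.3f}, T = {T:.3f}")
```

Since $c_2 > c_1$ here, incidence angles beyond
$\theta_c = \arcsin(c_1/c_2)$ make the transmitted angle complex, which
the sketch reports as an error.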
# Engineering Acoustics/Wave Motion in Elastic Solids
## Wave types
In an infinite medium, two different basic wave types, dilatational and
distortional, can propagate in different propagation velocities.
Dilatational waves cause a change in the volume of the medium in which
it is propagating but no rotation; while distortional waves involve
rotation but no volume changes. Having displacement field, strain and
stress fields can be determined as consequences.
Figure 1: Dilatational wave

Figure 2: Distortional wave
## Elasticity equations
Elasticity equations for homogeneous isotropic elastic solids which are
used to derive wave equations in Cartesian tensor notation are
**Conservation of momentum**
$$\tau_{ij,j} + \rho f_i = \rho{ \ddot{u_i}} ,\ (1)$$
**Conservation of moment of momentum**
$$\tau_{ij} = \tau_{ji} ,\ (2)$$
**Constitutive equations (which relate states of deformation with states
of traction)**
$$\tau_{ij} = \lambda \epsilon_{kk}\delta_{ij} + 2 \mu \epsilon_{ij} ,\ (3)$$
**Strain-displacement relations**
$$\epsilon_{ij} = {1 \over 2}(u_{i,j}+u_{j,i}) ,\ (4a)$$
$$\omega_{ij} = {1 \over 2}(u_{i,j}-u_{j,i}) ,\ (4b)$$
in which $\scriptstyle\tau$ is the stress tensor, $\scriptstyle\rho$ is
the solid material density, and $\scriptstyle\mathbf{u}$ is the vector
displacement. $\scriptstyle f$ is body force, $\scriptstyle\lambda$ and
$\scriptstyle\mu$ are Lame constants. $\scriptstyle\epsilon$ and
$\scriptstyle\omega$ are strain and rotation tensors.
## Wave equations in infinite media
Substituting Eq. (4) in Eq. (3), and the result into Eq. (1) gives
Navier's equation (governing equations in terms of displacement) for the
media
$$( \lambda + 2\mu )u_{j,ji} + \mu u_{i,jj}+ \rho f_i = \rho{ \ddot{u_i}} .\ (5)$$
The displacement equation of motion for a homogeneous isotropic solid in
the absence of body forces may be expressed as
$$( \lambda + 2\mu )\nabla(\nabla \cdot \mathbf{u}) + \mu\nabla^2\mathbf{u} = \rho{ \ddot{\mathbf{u}}} .\ (6)$$
Displacement can advantageously be expressed as sum of the gradient of a
scalar potential and the curl of a vector potential
$$\mathbf{u} = \nabla \phi\ + \nabla \times\psi ,\ (7)$$
with the condition $\nabla \cdot \psi =0$. The above equation is called
Helmholtz (decomposition) theorem in which $\scriptstyle\phi$ and
$\scriptstyle\psi$ are called scalar and vector displacement potentials.
Substituting Eq. (7) in Eq. (6) yields
$$[( \lambda + 2\mu )\nabla^2 \phi\ - \rho\frac{\partial^2 \phi}{\partial t^2}]+\nabla \times\ [\mu\nabla^2\psi - \rho\frac{\partial^2 \psi}{\partial t^2}] =0 .\ (8)$$
Equation (8) is satisfied if
$$c_p^2 \nabla^2 \phi =\frac{\partial^2 \phi}{\partial t^2}$$ where
$c_p^2 =\frac{( \lambda + 2\mu )}{\rho} ,\ (9a)$
$$c_s^2 \nabla^2 \psi =\frac{\partial^2 \psi}{\partial t^2}$$ where
$c_s^2 =\frac{\mu }{\rho} .\ (9b)$
Equation (9a) is a dilatational wave equation with the propagation
velocity of $\scriptstyle c_p$. It means that dilatational disturbance,
or a change in volume propagates at the velocity $\scriptstyle c_p$. And
Eq. (9b) is a distortional wave equation; so distortional waves
propagate with a velocity $\scriptstyle c_s$ in the medium. Distortional
waves are also known as rotational, shear or transverse waves.
It is seen that these wave equations are simpler than the general
equation of motion. Therefore, potentials can be found from Eq. (9) and
the boundary and initial conditions, and then the solution for
displacement will be concluded from Eq. (7).
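As a numerical illustration, the sketch below evaluates $c_p$ and $c_s$
from Eq. (9) using nominal Lamé constants for steel; the values are
assumptions, not taken from the text.

```python
import math

def elastic_wave_speeds(lam, mu, rho):
    """Dilatational and distortional wave speeds from the Lame constants."""
    c_p = math.sqrt((lam + 2.0 * mu) / rho)   # Eq. (9a)
    c_s = math.sqrt(mu / rho)                 # Eq. (9b)
    return c_p, c_s

# Nominal steel (assumed): lambda ~ 110 GPa, mu ~ 79 GPa, rho = 7800 kg/m^3.
c_p, c_s = elastic_wave_speeds(110e9, 79e9, 7800.0)
print(f"c_p ~ {c_p:.0f} m/s, c_s ~ {c_s:.0f} m/s")
```

As expected, the dilatational speed exceeds the shear speed, since
$\lambda + 2\mu > \mu$.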
## References
\[1\] Wave Motion in Elastic Solids; Karl F. Graff, Ohio State
University Press, 1975.
\[2\] The Diffraction of Elastic Waves and Dynamic Stress Concentration;
Chao-chow Mow, Yih-Hsing Pao, 1971.
# Engineering Acoustics/Qualitative Description of Shocks
## Defining a Shock-Wave
In the general case of mechanical wave propagation it is assumed that
the intensive properties of the medium can be described by continuous
functions of space and time. In the limit where pressure amplitudes
become very large, the wave propagation can evolve such that the wave
front becomes discontinuous and must be described in terms of a jump
from the undisturbed thermodynamic states in front of the wave to the
final thermodynamic states behind the wave.
A propagating disturbance of this type, which generates a discontinuous
change in pressure, temperature, enthalpy, internal energy, density and
particle velocity is referred to as a shock wave. A shock-wave is
depicted schematically in the following figure:
```{=html}
<center>
```
![](Basic_Shock_Sketch.png "Basic_Shock_Sketch.png"){width="350"}
```{=html}
</center>
```
A shock-wave can ultimately be interpreted as a longitudinal mechanical
wave with an undefinable pulse wavelength that discontinuously changes
the state of the medium at a propagation velocity greatly exceeding the
sound speed of the medium.
## Shock Formation and Attenuation
### \"Shocking Up\"
It is most intuitive to consider how a shock wave is formed by
considering the process in an elastic solid as the behaviour can then be
extended in principle to fluids. For a linear-elastic material,
behaviour under compressive loading can be broadly described by two
regimes. In the elastic regime the deformation (strain) is directly
proportional to the stress applied to it. Above a certain critical
stress level (termed the yield stress) the strain is no longer directly
proportional to the stress and the material begins to behave
nonlinearly - this is the plastic regime.
If we define the sound speed in the material as:
```{=html}
<center>
```
$C^2=\frac{\partial P}{\partial \rho}$
```{=html}
</center>
```
It is clear that in the elastic regime pressure and density are linearly
related, and thus the speed of propagation of a wave is constant as long
as its pressure amplitude is below the yield stress of the material.
However, consider a wave whose amplitude lies in the pressure regime
beyond the yield strength, as depicted in the following figure:
```{=html}
<center>
```
![](stressstrainVSwave.png "stressstrainVSwave.png"){width="550"}
```{=html}
</center>
```
Since the pressure amplitudes are past the regime of linear
proportionality between stress and strain, the wave speed is no longer
constant. By consulting the stress-strain curve it is apparent that wave
velocity increases with increasing pressure beyond the elastic limit.
Consequently, point C of the waveform will have the lowest local wave
speed while points B and A will have consecutively increasing wave
speeds. As a result the highest pressure parts of the waveform travel
faster than the lower parts and must eventually overtake them. A time
lapse of this process is depicted as follows:
```{=html}
<center>
```
![](ShockingUp.png "ShockingUp.png"){width="550"}
```{=html}
</center>
```
As the smooth wave pulse propagates through the material the
instantaneously faster parts of the wave overtake the slower ones and
the pulse itself becomes increasingly steep until it adopts the
familiar, discontinuous profile associated with a shock-wave.
Consequently, any wave with pressure amplitudes greater than the yield
strength of the material will eventually "shock up" and become
discontinuous, due to the non-linear increase in wave speed with
increasing pressure.
It is tempting to assume that if we were to play the evolution of the
shock-wave even further in time, the top of the vertical line would
continue to outpace the bottom and the shock wave front would become
sloped. This does not happen in reality due to a competing wave process
that serves to attenuate the shock.
### Shock Rarefaction
Once a shock-wave has established itself in a material it cannot
propagate indefinitely unless it is either driven mechanically via a
piston or self supported via coupled chemical reaction in a detonation
wave. It will be shown that the attenuation and eventual dissipation of
a shock wave is also the natural result of the non-linear relation
between pressure, density and wave speed above the elastic limit.
Consider the following square pulse shock wave:
```{=html}
<center>
```
![](ShockPulse.png "ShockPulse.png"){width="350"}
```{=html}
</center>
```
Examine point A': it is moving into un-shocked material with wave speed
$C_{0}$ and associated particle speed $\nu_{0}$. In contrast, point A is
moving into already-shocked material at significantly higher pressure
and density, and thus with a higher particle velocity $\nu_{A}$ and wave
speed $C_{A}$. Consequently, point A will be moving significantly faster
than point A' and will soon overtake the front. Now examine point C: it
has been relieved down to ambient conditions and thus has a low
associated wave speed. It will therefore lag progressively further and
further behind point A. As the shock wave propagates, the line A-C will
stretch out and thus angle down. This can be seen as an averaging out of
the shock pulse amplitude over a larger and larger front thickness. This
averaging serves to attenuate the pulse until the pressure decays below
the elastic limit and the shock-wave devolves into an acoustic wave. The
line A-C is in fact a wave process with a propagation velocity faster
than that of the shock front. Such a wave is known as a rarefaction, and
it is a fundamental characteristic of shock-wave processes.
## Shock Description via the Method of Characteristics
While the previous discussion is extremely intuitive in understanding
shock behaviour, all of these results can be obtained directly via
mathematical solution of the non-linear wave equations through the
method of characteristics and
Riemann Invariants. The method of characteristics is a technique for
solving partial differential
equations by reducing the
PDE to a set of ordinary differential equations through the
parametrization of the existing coordinate system into a new system
where properties of the PDE remain constant over curves in the new
system. The contours revealed during this method are called
characteristics.
Consider the basic set of nonlinear-elastic wave equations:
$$\ \epsilon = \frac{\rho_0}{\rho}$$ strain-density relation
$$\ \sigma = \sigma (\epsilon)$$ constitutive equation
$$\ \frac{\partial \rho}{\partial t} + \nu \frac{\partial \rho}{\partial x} +\rho \frac{\partial \nu}{\partial x} = 0$$
conservation of mass
$$\ \rho \left(\frac{\partial \nu}{\partial t} + \nu \frac{\partial \nu}{\partial x}\right) = \frac{\partial \sigma}{\partial x}$$
conservation of momentum
And characteristic coordinates:
$$\ \xi = \xi(x,t)$$
$$\ \zeta = \zeta(x,t)$$
Subject to the constraints where $\ x = x(X,t)$:
$$\ \frac{\partial \zeta}{\partial t} = C\frac{\partial \zeta}{\partial X}$$
$$\ \frac{\partial \xi}{\partial t} = -C\frac{\partial \xi}{\partial X}$$
Where we define:
$$\ C = \left( \frac{1}{\rho_{0}} \frac {d \sigma}{d \epsilon} \right)^{ \frac{1}{2}}$$
Note that here we have effectively employed the expression for wave
speed as a function of pressure and density!
Yielding the constraint equations:
$$\ \frac{\partial \zeta}{\partial t} = (C \epsilon - \nu)\frac{\partial \zeta}{\partial x}$$
$$\ \frac{\partial \xi}{\partial t} = -(C\epsilon + \nu)\frac{\partial \xi}{\partial x}$$
Combining the constraint equations with the differentials
$d \zeta = \frac{\partial \zeta}{\partial x}dx + \frac{\partial \zeta} {\partial t}d t$
and
$d \xi = \frac{\partial \xi}{\partial x}dx + \frac{\partial \xi}{\partial t} d t$
yields :
$$d \zeta = \frac{\partial \zeta}{\partial x} [dx - (\nu - C \epsilon)dt]$$
$$d \xi = \frac{\partial \xi}{\partial x} [dx - (\nu + C \epsilon)dt]$$
These equations then yield the slopes of the contours of the new
coordinate system:
$$\ \frac{d x}{d t} = \nu + C\epsilon$$ when $\ d \xi = 0$
$$\ \frac{d x}{d t} = \nu - C\epsilon$$ when $\ d \zeta = 0$
We must now apply these relations to transform the expression of the
wave equations from $\ (x , t)$ space into $\ (\zeta , \xi)$ space. Take
the time derivative of constitutive equation and substitute the slope of
the characteristic lines to obtain:
$$\ \frac{\partial \sigma}{\partial x} = - (C \epsilon)^{2}\frac{\partial \rho}{\partial x}$$
$$\ \frac{\partial \sigma}{\partial t} = - (C \epsilon)^{2}\frac{\partial \rho}{\partial t}$$
For simplicity make the substitution $\ c = C \epsilon$
Substitution into the conservation of mass and momentum equations
yields:
$$\ \frac{\partial \sigma}{\partial t} +\nu \frac{\partial \sigma}{\partial x} = \rho c^{2} \frac{\partial \nu}{\partial x}$$
$$\frac{\partial \nu}{\partial t} + \nu \frac{\partial \nu}{\partial x} = \frac{1}{\rho} \frac{\partial \sigma}{\partial x}$$
These can be combined and solved via partial derivative chain rule
expansion to yield:
$$\ \frac{\partial \nu '}{\partial \zeta} = \frac{1}{\rho c}\frac{\partial \sigma '}{\partial \zeta '}$$
$$\ \frac{\partial \nu '}{\partial \xi} = -\frac{1}{\rho c}\frac{\partial \sigma '}{\partial \xi '}$$
These are the characteristic equations in $\zeta$ and $\xi$ space.
In order to solve for the characteristic contours we must integrate from
some reference to a final state with reference to the invariants, thus:
$$\ J_{+}(\xi) = \nu - \nu^{0} - \int_1^\epsilon{C(\epsilon)\,d\epsilon}$$
$$\ J_{-}(\zeta)= \nu - \nu^{0} + \int_1^\epsilon{C(\epsilon)\,d\epsilon}$$
These two equations represent the Riemann invariants for the wave
system. They can be combined to yield the characteristic equations for
which the combination of stress and particle velocity does not change as
follows:
$$\ \nu - \nu^{0} = \frac{1}{2}(J_{+}(\xi) +J_{-}(\zeta))$$
$$\ \int_1^\epsilon{C(\epsilon)\,d\epsilon} = \frac{1}{2}(-J_{+}(\xi) + J_{-}(\zeta))$$
### Simple Wave Solution
A simple wave is a solution to the wave equations in characteristic
space for which one of the invariants is constant. Consider a non-linear
wave where we set:
$$\ J_{-}(\zeta) = 0$$
This yields stress, strain, particle velocity, and sound speeds that are
solely a function of $\xi$. From the slopes of the coordinate contours
derived previously we obtain:
$$\ \frac{d X}{dt} = \nu(\xi^{i})+c(\xi^{i})$$
Integration of these contours directly yields :
$$\ \xi = t - \frac{x}{\nu(\xi)+c(\xi)}$$
Consequently, the simple wave solution of the non-linear equations in
characteristic space can be described as a transformation of a specified
transversely moving pulse into a collection of straight-line
characteristics with differing slopes. Each line can be interpreted as
the x-t history of one specific point on the pulse.
If we compute the Riemann integrals:
$$\ \nu - \nu^{0} = \frac{1}{2}J_{+}\left[ t-\frac{x}{\nu+c}\right]$$
$$\ \int_1^\epsilon{C(\epsilon)\,d\epsilon} = -\frac{1}{2}J_{+}\left[ t-\frac{x}{\nu+c}\right]$$
We obtain the important result:
$$\ \frac{d(v+c)}{d\epsilon} \le 0$$
This inequality mathematically substantiates our previous statement that
wave speed increases with increasing pressure in the non-linear
(plastic) regime and illustrates an important concept: the wave speed is
equal to the sum of the particle velocity and the sound speed.
The relation between simple wave solutions in characteristic space can
be linked to the formation of shock waves through the concept that
characteristics with varying slopes must eventually intersect at some
point in time.
### Shocks From Waves
It can be shown mathematically that the point of intersection of the
contours of a non-linear wave is a mathematical discontinuity, thereby
recovering our concept that shock-waves are state discontinuities.
For simplicity's sake, consider a wave that is a ramp of particle
velocity with respect to position; this is analogous to our previous
qualitative example from the first section, but with particle velocity
plotted instead of pressure (the two are related). The equation
describing this ramp is:
$\ \nu = -mx$
Applying our solution for the characteristics of a simple wave we
obtain:
$\ \nu = m[(\nu+c)t -x]$
In order to make this solution tractable we must employ an equation of
state to mathematically link the wave velocity and particle speeds.
Since we have not specified the material in question it is expedient to
simply assume that the relationship is linear, thus:
$$\ c = \lambda \nu + c_{0}$$
Substituting this into our relationship we obtain for velocity:
$$\nu = \frac{m(c_{0}t-x)}{1-\lambda m t}$$
Clearly, when $t = \frac{1}{\lambda m}$, the particle velocity is
undefined: the simple wave solution breaks down and a shock-wave has
formed. However, when $m$ is negative, the value of
$\nu$ never tends to infinity and we have a rarefaction wave. If we plot
the position history of one point at the bottom of the ramp
(corresponding to the point C in our previous discussion) and one point
at the top of our ramp (corresponding to point A) on an x-t diagram we
can visualize how these characteristics behave:
```{=html}
<center>
```
![](XTShockvsRare.png "XTShockvsRare.png"){width="700"}
```{=html}
</center>
```
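The shock-formation time and the characteristic slopes can be evaluated
directly. In this minimal sketch, $c_0$, $\lambda$ and the ramp
steepness $m$ are assumed illustrative values, and the linear relation
is read as giving the characteristic speed $\nu + c$ directly, so that
the text's $t^{*} = 1/(\lambda m)$ follows.

```python
import numpy as np

# Characteristics of the velocity ramp nu = -m*x, with the characteristic
# speed taken as nu + c = lambda*nu + c0. Neighbouring characteristics then
# intersect at t* = 1/(lambda*m) for m > 0 (a shock); for m < 0 they
# diverge (a rarefaction).
c0, lam, m = 343.0, 2.0, 5.0           # assumed illustrative values
print(f"shock forms at t* = {1.0/(lam*m):.3f} s")

for x0 in np.linspace(-1.0, 0.0, 5):
    nu0 = -m * x0                      # initial particle velocity on the ramp
    speed = lam * nu0 + c0             # characteristic slope dx/dt = nu + c
    print(f"x0 = {x0:5.2f} m : dx/dt = {speed:7.1f} m/s")
```

Points higher on the ramp (more negative $x_0$) carry steeper slopes,
which is exactly the convergence that produces the discontinuity.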
It is important to note that our mathematical and conceptual discussions
have ultimately yielded analogous descriptions of shock-wave formation
and behaviour. In the mathematical discussion we can see that each
characteristic corresponds to trajectory of a specific point on the
waveform depicted in either $(P,x)$ or $(P,t)$ space. The history of
these characteristics tracks how specific portions of the waveform
overtake or lag behind the others. In the case of a set of converging
characteristics the point of intersection corresponds to a mathematical
singularity and the formation of a shockwave. In the case of diverging
characteristics we can see that the waveform points begin smearing out
in space - this is clearly analogous to the description of rarefaction
waves.
### Strong and Weak Shocks
In the context of the method of characteristics, a shock-wave is any
discontinuity produced by the convergence of characteristic lines. A
distinction is made between two types of shock solutions depending on
how they affect the locus of thermodynamic states. A weak shock is
defined as the case in which the change between the final and reference
states is nearly identical to that of the equivalent simple,
characteristically convergent wave. In this case the process by which
the change of state is effected is isentropic, and the path through
which the material is loaded from the reference state to the final state
is described by the isentrope.

Conversely, a strong shock is the discontinuous solution for which the
locus of all possible states does not coincide with the isentrope but
rather with a different loading path.
This discussion of state loci will become more clear with the
introduction of the Hugoniot.
## References
1. Introduction to Wave Propagation in Nonlinear Fluids and Solids;
D.S. Drumheller; 1998
2. Explosives Engineering; Paul W. Cooper; 1996
3. Shock Waves and Explosions; P.L. Sachdev; 2004
# Engineering Acoustics/The Rankine-Hugoniot Jump Equations
## Conservation Equations and Derivation
For the purpose of performing engineering calculations, equations
linking the pre- and post- shock states are required. One of the most
fundamental expressions relating states is the $(P,\rho)$ Hugoniot which
relates pressure and density as:
$\ P = f(\rho)$
This expression can be derived via simplification of the canonical
conservation equations:
Conservation of mass:
$$\rho_1 \nu_1=\rho_o \nu_o\,$$
Conservation of momentum:
$$p_1+\rho_1\nu_1^2=p_o+\rho_o \nu_o^2$$
Conservation of energy:
$$e_1+\frac{p_1}{\rho_1}+\frac{1}{2}\nu_{1}^2=e_o+\frac{p_o}{\rho_o}+\frac{1}{2}\nu_{o}^2$$
The parameters of a shock required to completely solve for the jump
conditions are pressure, particle velocity, specific internal energy,
density and shock speed. With these unknowns and only three conservation
equations, an additional equation is required to relate the states and
make the problem tractable. This equation is referred to as an equation
of state (EOS), of which many exist for a variety of applications. The
most common EOS is the ideal gas law, which can be used to reduce the
system of equations to the familiar Hugoniot expression for fluids with
constant specific heats in steady flow:
$$\frac{p_1}{p_2}=
\frac{(\gamma+1)-(\gamma-1)\frac{\rho_2}{\rho_1}}
{(\gamma+1)\frac{\rho_2}{\rho_1}-(\gamma-1)}$$
For general, non-linear elastic materials there exists no equation of
state that can be derived from first principles. However, a huge
database of experimental data has revealed that virtually all materials
display a linear relationship between particle velocity and shock speed
(the veracity of the linear assumption in the method-of-characteristics
example is now even more clear!):
$$\ U = C_{o} + s\nu$$
This equation is also known as the shock Hugoniot in the $U-\nu$ plane.
Combination of this linear relation with the momentum and mass equations
yields the desired expression for the Hugoniot in the $P-\rho$ plane for
virtually all solid materials:
$\ P = C_{o}^2\frac{\frac{1}{\rho_{0}}-\frac{1}{\rho}}{\left[\frac{1}{\rho_{0}}-s\left(\frac{1}{\rho_{0}}-\frac{1}{\rho}\right)\right]^2}$
## Paths and Jump Conditions
The Hugoniot describes the locus of all possible thermodynamic states a
material can exist in behind a shock, projected onto a two dimensional
state-state plane. It is therefore a set of equilibrium states and does
not specifically represent the path through which a material undergoes
transformation.
Consider again our discussion of strong and weak shocks. It was said
that weak shocks are isentropic and that the isentrope represents the
path through which the material is loaded from the initial to final
states by an equivalent wave with converging characteristics (termed a
compression wave). In the case of weak shocks, the Hugoniot will
therefore fall directly on the isentrope and can be used directly as the
equivalent path.
In the case of a strong shock we can no longer make that simplification
directly; however, for engineering calculations the isentrope is deemed
close enough to the Hugoniot that the same assumption can be made.
If the Hugoniot is approximately the loading path between states for an
\"equivalent\" compression wave, then the jump conditions for the shock
loading path can be determined by drawing a straight line between the
initial and final states. This line is called the Rayleigh line and has
the following equation:
$\ P_1 - P_0 = U^2\left(\rho_0 - \frac{\rho_0^2}{\rho_1}\right)$
```{=html}
<center>
```
![](HugoniotRaleigh.png "HugoniotRaleigh.png"){width="350"}
```{=html}
</center>
```
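A minimal numerical sketch of the Hugoniot and the consistent
Rayleigh-line shock speed, using nominal $U$-$\nu$ parameters for
aluminium (assumed values, with $P_0 \approx 0$):

```python
import numpy as np

# Hugoniot P(rho) from the linear relation U = C0 + s*v, written with
# eta = 1 - rho0/rho: P = rho0*C0^2*eta/(1 - s*eta)^2.
rho0, C0, s = 2700.0, 5350.0, 1.34     # nominal aluminium constants (assumed)

def hugoniot(rho):
    eta = 1.0 - rho0 / rho
    return rho0 * C0**2 * eta / (1.0 - s * eta)**2

rho1 = 3100.0                          # a compressed final state [kg/m^3]
P1 = hugoniot(rho1)
# Shock speed from the Rayleigh line P1 - P0 = U^2*(rho0 - rho0^2/rho1):
U = np.sqrt(P1 / (rho0 - rho0**2 / rho1))
print(f"P1 ~ {P1/1e9:.1f} GPa, U ~ {U:.0f} m/s")
```

Consistency can be checked by back-substituting: the particle velocity
$\nu = U(1-\rho_0/\rho_1)$ recovers $U = C_0 + s\nu$.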
# Engineering Acoustics/Detonation
## Detonation
A detonation wave is a combustion wave propagating at supersonic speeds.
It is composed of a leading shock which adiabatically compresses the
reactants, followed by a reaction zone which converts the reactants into
products. During this process, a significant amount of heat is released,
increasing the temperature and decreasing the pressure and density. The
products are expanded in the reaction zone, giving the detonation a
forward thrust.
In contrast, a deflagration wave, which can be thought of as a
propagating flame, is a combustion wave which propagates at subsonic
speeds. A deflagration consists of a precursor shock followed by a
reaction zone, and propagates via heat and mass diffusion from the
reaction zone to ignite the reactants ahead of it. Being a subsonic
wave, information downstream can travel upstream and change the initial
thermodynamic state of the reactants.
Qualitative differences between these two combustion modes are
tabulated below.

|                               | Detonation | Deflagration |
|-------------------------------|------------|--------------|
| Mach number $u_0/c_0$         | 5-10       | 0.00001-0.03 |
| Velocity ratio $u_1/u_0$      | 0.4-0.7    | 4-6          |
| Pressure ratio $p_1/p_0$      | 13-55      | 0.98         |
| Temperature ratio $T_1/T_0$   | 8-21       | 4-16         |
| Density ratio $\rho_1/\rho_0$ | 1.7-2.6    | 0.06-0.25    |
According to this table, the burned products (with respect to a
stationary combustion wave) experience a deceleration across a
detonation wave, and an acceleration in a deflagration wave. The
pressure and density rise across the detonation which is why a
detonation wave is known as a compression wave. In contrast, the
pressure decreases slightly across a deflagration, hence it is
considered an expansion wave.
## Rayleigh line and Hugoniot curve
Let's first treat the detonation as a black box that brings reactants of
state 0 to products of state 1.
![Figure 1: 1-D steady flow across a combustion wave](1d steady flow across combustion wave.jpg){width="400"}
The basic conservation equations are applied to relate state 0 to
state 1. Considering a one-dimensional steady flow across the combustion
wave, the basic equations are:
Conservation of mass:
$\rho_0 u_0=\rho_1 u_1$
Conservation of momentum:
$p_0+\rho_0 u_0^2=p_1+\rho_1 u_1^2$
Conservation of energy:
$h_0+q+\frac{u_0^2}{2}=h_1+\frac{u_1^2}{2}$
where ρ, p, u, h and q are the density, pressure, velocity, enthalpy,
and the difference between the enthalpies of formation of the reactants
and the products, respectively.
Combining the conservation of mass and momentum, the following equation
is obtained
$(p_1 - p_0)/(\nu_0 - \nu_1)=\rho_0^2u_0^2=\rho_1^2u_1^2= \dot{m}^2$,
where $\nu$ is the specific volume and $\dot{m}=\rho u$ is the mass flux
per unit area. The mass flux can also be written as
$\dot{m} = \sqrt{(p_1 - p_0)/(\nu_0 - \nu_1)}$.
Remember that the mass flux must be a real number. Therefore, if the
numerator is positive so must the denominator and vice versa.
If we define $y= p_1 / p_0$ and $x= \nu_1 / \nu_0$, then we can obtain
the nondimensional relation
$\dot{m}^2\,\nu_0/p_0 = (y - 1)/(1 - x)$
Substituting the expression for the speed of sound of the reactants,
$c_0= \sqrt{\gamma_0p_0/\rho_0}$, and the Mach number of the combustion
wave, $M_0= u_0/c_0$, the above equation can be recast as
$y= (1+\gamma_0M_0^2)- (\gamma_0M_0^2)x$
The above equation, defining the thermodynamic path linking state 0 to
state 1, is also known as the Rayleigh line. Isolating $u_0^2$ or
$u_1^2$ from
$(p_1 - p_0)/(\nu_0 - \nu_1)=\rho_0^2u_0^2=\rho_1^2u_1^2= \dot{m}^2$
we can eliminate the velocity terms from the energy equation and obtain
the equation of the Hugoniot
curve,
$h_1-(h_0+q)=\frac{1}{2} (p_1-p_0)(\nu_0-\nu_1)$.
The Hugoniot curve represents the locus of all possible downstream
states, given an upstream state. It is also possible to express the
Hugoniot curve with respect to the variables x and y, similar to the
Rayleigh line, as
$y= \frac{\frac{\gamma_0+1}{\gamma_0-1} -x+\frac{2q}{p_0\nu_o}}{\frac{\gamma_0+1}{\gamma_0-1}x -1}$.
Note that the case q=0 corresponds to a non-reacting shock wave, for
which the Hugoniot curve passes through the point (1,1) in the x-y
plane. When the Rayleigh line is tangent to the Hugoniot curve, the
tangent points are called Chapman-Jouguet (CJ) points. The upper CJ
point corresponds to the CJ detonation solution, whereas the lower CJ
point is referred to as the CJ deflagration solution.
![Figure 2: CJ detonation and deflagration solutions](CJ detonation and deflagration points.jpg){width="400"}
Note that the CJ theory does not take into account the detailed
structure of the detonation wave. It simply links upstream conditions to
downstream conditions via steady one dimensional conservation laws.
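The CJ state can be found numerically from this tangency condition. In
the sketch below, eliminating y between the Rayleigh line and the
Hugoniot gives a quadratic in x whose discriminant vanishes at tangency;
$\gamma$ and the nondimensional heat release $Q = 2q/(p_0\nu_0)$ are
assumed illustrative values.

```python
gamma, Q = 1.2, 50.0
G = (gamma + 1.0) / (gamma - 1.0)

def discriminant(M):
    # Rayleigh: y = B - A*x with A = gamma*M^2, B = 1 + A.
    # Hugoniot: y = (G - x + Q)/(G*x - 1). Eliminating y gives
    # A*G*x^2 - (B*G + A + 1)*x + (B + G + Q) = 0; tangency occurs when
    # this quadratic's discriminant is zero.
    A = gamma * M**2
    B = 1.0 + A
    return (B * G + A + 1.0)**2 - 4.0 * A * G * (B + G + Q)

# Bisection for the upper (detonation) tangency: D < 0 below M_CJ, > 0 above.
lo, hi = 1.01, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if discriminant(mid) < 0.0:
        lo = mid
    else:
        hi = mid
M_cj = 0.5 * (lo + hi)
A = gamma * M_cj**2
x_cj = (A + (1.0 + A) * G + 1.0) / (2.0 * A * G)   # double root of the quadratic
y_cj = (1.0 + A) - A * x_cj                        # CJ pressure ratio
print(f"M_CJ ~ {M_cj:.2f}, x_CJ ~ {x_cj:.3f}, y_CJ ~ {y_cj:.1f}")
```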
## Detonation wave structure
### ZND model
Assuming a one dimensional steady flow, the Zel'dovich, von Neumann and
Döring (ZND) model is an idealized representation of the detonation
wave. The model essentially describes the detonation wave as a leading
shock followed by chemical reactions. The leading shock adiabatically
compresses the reactants, increasing the temperature, pressure and
density across the shock. An induction zone follows, in which the
reactants dissociate into radicals and free radicals are generated.
The induction zone is thermally neutral in the sense that the
thermodynamic properties remain relatively constant. When enough active
free radicals have been produced, a cascade of reactions occurs to
convert the reactants into products. Chemical energy is released,
resulting in a rise in temperature and a drop in pressure and density.
The pressure in the reaction zone is further decreased by expansion
waves, creating a forward thrust that supports the leading shock front.
In other words, the proposed propagation mechanism of a detonation wave
is autoignition by the leading shock, which is in turn supported by the
thrust provided by the expansion of the products.
The variation of thermodynamic properties is illustrated in the
following sketch.
![Figure 3: Variation of properties across a ZND detonation wave](Variation of properties across ZND structure.jpg){width="400"}
Although the ZND model provides a description of the structure of the
detonation wave, it does not consider any boundary conditions. In
reality, both the initial conditions (thermodynamic states, mixture
composition) and the boundary conditions (geometry, degree of
confinement, nature of the walls) affect the detonation velocity. Under
certain conditions, initial and boundary conditions can even make the
propagation of detonations impossible. To date, no quantitative
theoretical model can accurately predict the limits of detonations.
### Experimental observations
Although the ZND model describes the detonation in a one-dimensional
frame, experimental observations indicate that the detonation front is
actually three-dimensional and unstable. Instabilities are manifested in
both the longitudinal (pulsating detonation) and transverse directions.
The front consists of many curved shocks composed of Mach stems and
incident shocks. At the intersections between these curved shocks,
reflected shocks, also referred to as transverse waves, extend into the
reacting mixture. The intersection of these three shocks is referred to
as the triple point. The transverse waves move back and forth, sweeping
across the entire front. The trajectories of the triple points can be
recorded on a soot-covered surface as the detonation passes by. The wave
spacing can be measured from such smoked foils and is referred to as the
cell size.
A sketch of a simplified cellular detonation structure is shown below. λ
represents the cell size.
![Figure 4: Detonation front structure](Det_front_structure.jpg){width="400"}
## How to initiate a detonation
There are a few ways to initiate a detonation. Here are some examples:
- Deflagration-to-detonation transition (DDT): assuming that a
deflagration has already been formed, it needs to accelerate to a
certain velocity via turbulence. When conditions permit, the
deflagration abruptly transitions into a detonation. A few key processes
in the development of the detonation wave are summarized as follows:
- Generation of compression waves ahead of the propagating flame
- Coalescence of the compression wave to form a shock wave.
- Generation of turbulence
- Creation of blast wave from a local explosion in the reaction
zone resulting into a detonation "bubble". A detonation bubble
catches up to the precursor shock and an overdriven detonation
is formed. Transverse pressure waves are generated.
- Direct initiation: Method of initiating a detonation bypassing the
deflagration phase. A detonation may be formed directly through the
use of a powerful ignition source.
- Diffraction of a planar detonation into a larger volume to form a
spherical detonation.
## Limits of detonation
Detonation limits refer to the critical set of conditions outside of
which a self-sustained detonation can no longer propagate. The
detonation velocity is actually affected by initial conditions of the
explosive mixture (thermodynamic states, composition, dilution, etc.)
and the boundary conditions (degree of confinement, geometry and
dimensions of confinement, type of wall surface, etc.). For example,
given a set of thermodynamic states and a particular experimental
apparatus, we change the mixture composition rich or lean. At a
particular fuel concentration the detonation will cease to propagate.
This type of limit approach reveals the composition limit of an
explosive mixture. For given initial conditions, we can also vary the
dimensions of the experimental apparatus: for a detonation propagating
inside a tube, below a critical tube diameter no detonation can
propagate. This yields the critical tube diameter. It is worth noting
that it becomes
more difficult to initiate a detonation as the detonation limit is
approached. The critical energy required for initiation increases
exponentially. The limit is not a characteristic property of the
explosive mixture since it is affected by both initial and boundary
conditions.
Since the steady propagation velocity of the detonation depends on
initial and boundary conditions, a common observation as the detonation
limit is approached is a velocity deficit. Studies in round tubes reveal
a velocity deficit of about 15% of the CJ velocity before failure of the
detonation wave. Near the limit, longitudinal fluctuations of the
velocity have also been observed. Depending on the magnitude of these
fluctuations and their duration, different unstable behaviours, such as
stuttering and galloping, can be manifested.
Another characteristic indicating the approach of the limit is the
detonation cell size compared to the tube dimension. Away from the
limit, the detonation cell size is small compared to the dimensions of
the detonation tube. As the limit is approached, there are fewer
transverse waves and the wave spacing increases, until a single
transverse wave propagates around the perimeter of the tube, indicative
of a spinning detonation. The magnitude of the transverse pressure wave
oscillations becomes larger and larger as the limit is approached.
## Prevention
Since it takes less energy to initiate a deflagration, this is the mode
of combustion most likely to occur in industrial accidents. Although the
key factors required for the formation of a detonation or a deflagration
(explosive mixture and ignition source) may not be eliminated in a
chemical plant, some prevention mechanisms have been developed to stop
the propagation of a deflagration and prevent a detonation from forming.
Here are a few examples:
- Inhibition of flames: once a flame is detected, a flame suppressant
is injected. The suppressant combines with active radicals; by taking
away the radicals necessary for the chemical reactions to take place,
the flame ceases to spread.
- Venting: to avoid the formation of a detonation, any pressure build-up
is released. However, by actually releasing the pressure, the
turbulence created can accelerate the flame.
- Quenching: it is possible to quench (or suppress) the flame, even
near detonation velocities.
## References
- Lee, J.H.S., *The Detonation Phenomenon*, Cambridge University Press,
  2008.
- Kuo, K.K., *Principles of Combustion*, 2nd ed., John Wiley and Sons,
  Inc., 2005.
- Fickett, W. and Davis, W.C., *Detonation*, University of California
  Press, 1979.
## Other links
A detonation database containing a collection of experimental data
related to detonations is available online.
# Engineering Acoustics/The Acoustic Parameter of Nonlinearity
## The Acoustic Parameter of Nonlinearity
For many applications of nonlinear acoustic phenomena the magnitude of
an acoustic medium\'s nonlinearity can be quantified using a single
value known as the parameter of nonlinearity. This convention is often
attributed to Robert T. Beyer, stemming from his influential 1960 paper
titled *Parameter of Nonlinearity in Fluids* [^1] and his subsequent
texts on the subject of nonlinear acoustics.[^2] It is worthwhile to note
that in Hamilton's text on nonlinear acoustics,[^3] Beyer attributes the
concept to an earlier work by Fox and Wallace (1954).[^4]
The mathematical grounds for the parameter of nonlinearity stem from the
Taylor series relating the perturbed pressure, *p* \', to the perturbed
density, *ρ* \'. Physically, these small perturbation values are
referenced to an ambient state defined by a density value, *ρ~o~*, and a
constant entropy, *s* = *s*~o~.
$$p' = A\left(\frac{\rho'}{\rho_o}\right)
+ \frac{B}{2!}\left(\frac{\rho'}{\rho_o}\right)^2
+ \frac{C}{3!}\left(\frac{\rho'}{\rho_o}\right)^3
+ ...$$
where the coefficients *A*, *B*, *C*, give the magnitude for each term
in the Taylor expansion. As the coefficients *B*, *C*, apply to squared
and cubic terms they represent a nonlinearity in the relation between
pressure, *p* \', and density *ρ* \'. Values for *A*, *B*, and *C* can
be determined experimentally using several techniques.[^5][^6][^7] It
is also possible to use the Taylor series definition of *A*, *B*, and
*C* to calculate values when a constitutive relation between pressure
and density is known. Using the ideal gas or Tait equation of state for
this purpose is discussed in a subsequent section. The Taylor series
coefficient definitions are:
\
$$A = \rho_o\left(\frac{\partial P}{\partial \rho}\right)_{\rho_o,s_o} = \rho_o c_o^2$$
$$B = \rho_o^2\left(\frac{\partial^2 P}{\partial \rho^2}\right)_{\rho_o,s_o}$$
$$C = \rho_o^3\left(\frac{\partial^3 P}{\partial \rho^3}\right)_{\rho_o,s_o}$$
where the definition of the ambient sound speed,
$(\partial p/\partial \rho)_{\rho_o,s_o} = c_o^2$, has been applied to
show that *A* = *ρ*~o~ *c*~o~^2^. For a majority of problems in
nonlinear acoustics, the first two terms of this expansion are
sufficient to represent the range of density perturbations encountered.
In this case the series reduces to:
\
$$p' = A\left(\frac{\rho'}{\rho_o}\right) + \frac{B}{2}\left(\frac{\rho'}{\rho_o}\right)^2$$
This truncation leads to what is commonly referred to as the parameter
of nonlinearity in a fluid, or *B/A*. By factoring *A* = *ρ*~o~
*c*~o~^2^ from both terms in the truncated series, the physical
relevance of *B/A* becomes more apparent:
\
$$p' = \rho' c_o^2\left[1+\frac{1}{2}\frac{B}{A}\left(\frac{\rho'}{\rho_o}\right)\right]$$
This expression shows that the ratio *B/A* quantifies the influence of
nonlinearity on the local pressure perturbation for a given state, *ρ*\'
/ *ρ*~o~. Similarly, it can also be shown that the parameter *B/A* quantifies
the variation of local sound speed as a function of perturbed density
according to:[^8]
\
$$c = c_o\left[1+\frac{1}{2}\frac{B}{A}\left(\frac{\rho'}{\rho_o}\right)\right]$$
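To make the use of these two relations concrete, here is a minimal
Python sketch. It evaluates the truncated pressure series and the local
sound speed; the values of *ρ*~o~ and *c*~o~ are assumed nominal ones
for fresh water, and *B/A* = 5.0 is taken from Table 1 below.

```python
# A minimal sketch: evaluate the truncated pressure-density relation and
# the local sound speed for a given B/A. rho_o and c_o below are assumed
# nominal fresh-water values; B/A = 5.0 is taken from Table 1.
B_over_A = 5.0
rho_o = 998.0    # ambient density, kg/m^3 (assumed)
c_o = 1481.0     # ambient sound speed, m/s (assumed)

def perturbed_pressure(rho_prime):
    """p' = rho' * c_o^2 * [1 + (B/2A)(rho'/rho_o)] (two-term series)."""
    return rho_prime * c_o**2 * (1.0 + 0.5 * B_over_A * rho_prime / rho_o)

def local_sound_speed(rho_prime):
    """c = c_o * [1 + (B/2A)(rho'/rho_o)]."""
    return c_o * (1.0 + 0.5 * B_over_A * rho_prime / rho_o)

for rho_prime in (0.1, 1.0, 10.0):   # small density perturbations, kg/m^3
    print(rho_prime, perturbed_pressure(rho_prime), local_sound_speed(rho_prime))
```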
## B/A In Relation to Power Law Equations of State
For power law equations of state (EOS), such as the Tait-Kirkwood EOS
for liquids,[^9] or the isentropic compression of an ideal gas, the
*B/A* parameter can be related to known power-law coefficients. To
demonstrate this relation, the partial derivatives of pressure with
respect to density, *∂p*/*∂ρ*, are calculated for the Tait EOS and are
then applied in the Taylor series for perturbed pressure, *p\'*. The end
result when using the ideal gas EOS instead of Tait is identical to that
shown.
\
$$\frac{P+D}{P_o+D} = \left(\frac{\rho}{\rho_o} \right )^{\gamma}$$
$$\frac{\partial P}{\partial \rho} = \left(P_o+D \right )\frac{\gamma}{\rho_o}\left(\frac{\rho}{\rho_o} \right )^{\gamma-1}$$
Evaluating the first derivative at the ambient state (*ρ* = *ρ~o~*)
gives a useful equation for the linear sound speed: *c~o~^2^* =
*γ*(*P~o~*+*D*) / *ρ~o~*. Including this expression to simplify
*∂p*/*∂ρ*, and continuing on to calculate *∂^2^p* / *∂ρ^2^* gives:
\
$$\frac{\partial P}{\partial \rho} = c_o^2\left(\frac{\rho}{\rho_o} \right )^{\gamma-1}$$
$$\frac{\partial^2P}{\partial \rho^2} = c_o^2 \frac{\left(\gamma-1 \right )}{\rho_o} \left(\frac{\rho}{\rho_o} \right )^{\gamma-2}$$
Incorporating these first and second derivatives into a Taylor series of
*p\'* in *ρ\'* gives equations derived from the power-law EOS that can be
compared to the previous series containing the B/A parameter.
\
$$p'=c_o^2 \rho' + \frac{c_o^2\left(\gamma-1 \right )}{2\rho_o}\rho'^2+...$$
$$p'=c_o^2 \rho'\left[1 + \frac{\left(\gamma-1 \right )}{2}\left(\frac{\rho'}{\rho_o} \right ) \right ] + ...$$
This final equation shows that the two Taylor series for *p\'* are
identical, with (*γ*-1) in place of *B/A*. Thus, for liquids obeying the
Tait-Kirkwood EOS and ideal gases under isentropic conditions with
known adiabatic index:
\
$$\frac{B}{A} = \gamma - 1$$
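As a quick consistency check, dry air behaves essentially as a diatomic
ideal gas with *γ* ≈ 1.4, so *B/A* = *γ* − 1 ≈ 0.4, which matches the
value listed for diatomic gases in Table 1 below.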
## Sample Values for B/A
Table 1 provides a sample of values for the B/A parameter in various
gases, liquids, and biological materials. Reference temperatures are
included along with each sample as the value of *B/A* for a particular
material will vary with temperature. Several organic materials are
included in Table 2 as nonlinear acoustic effects are particularly
prominent in biomedical ultrasound applications.[^10]
\
| Material | *B/A* | Ref. Temp. (°C) | Reference |
|---|---|---|---|
| Diatomic Gases (Air) | 0.4 | 20 | [^11] |
| Distilled Water | 5.0 | 20 | [^12] |
| Distilled Water | 5.4 | 40 | [^13] |
| Salt Water | 5.3 | 20 | [^14] |
| Ethanol | 10.5 | 20 | [^15] |

: Table 1: Sample *B/A* values for fluids.

| Material | *B/A* | Ref. Temp. (°C) | Reference |
|---|---|---|---|
| Glycerol | 9.1 | 30 | [^16] |
| Hemoglobin (50%) | 7.6 | 30 | [^17] |
| Liver | 6.5 | 30 | [^18] |
| Fat | 9.9 | 30 | [^19] |
| Collagen | 4.3 | 25 | [^20] |

: Table 2: Sample *B/A* values for organic materials.
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]:
[^7]:
[^8]:
[^9]:
[^10]:
[^11]:
[^12]:
[^13]:
[^14]:
[^15]:
[^16]:
[^17]:
[^18]:
[^19]:
[^20]:
# Engineering Acoustics/Harmonic Generation
## Nonlinear Generation of Harmonics
As described in the entry for the qualitative description of
shocks,
finite amplitude waves in any medium will undergo a steepening
phenomenon culminating in the formation of a shock wave. For strong flows and wave
conditions where fluid velocity is similar in magnitude to sound speed,
*u*/*c~o~* ≥ *O*(1), where *u* is the particle velocity and *c~o~* is
the ambient sound speed, the transition to a shockwave occurs rapidly
and can be termed a local effect. For weaker wave conditions where
*u*/*c~o~* \<\< 1, but nonlinear effects are still observable, wave
steepening occurs over many wavelengths and can be termed a cumulative
effect. In this regime of wave strengths an important result of wave
deformation is the accumulation of harmonic content in the propagating
waveform.
\
## Progressive Wave Deformation

To describe the harmonic content of the deformed wave profile, some
analysis can be carried out for the case of a plane wave propagating in
the x^+^ direction, driven by a boundary piston with velocity
*u* = *u~o~* sin(*ωt*), where *ω* is the driving frequency and
*t* is the time variable. In this case the resulting sound field depends
only on the *x*^+^ wave, thus falls under the simple wave
assumption
and can be defined using a reduced equation for progressive waves in an
inviscid fluid:[^1][^2]
\
$$\frac{\partial u}{\partial t}+\left(c + u \right )\frac{\partial u}{\partial x} = 0$$
$$\frac{\partial u}{\partial t}+\left(c_o + \beta u \right )\frac{\partial u}{\partial x} = 0$$
$$\beta = 1 + \frac{1}{2}\frac{B}{A}$$
The relation between *β* and *B/A* is given to highlight the relation to
the acoustic parameter of nonlinearity. The equations given describe a
propagating planar wave, in which the propagation velocity for any
particular point is given by local value of (*c~o~* + *βu*), as opposed
to simply *c~o~* in the case of an assumed linear wave. For an initially
sinusoidal wave profile, the wave apexes propagate with the greatest
velocity, while equilibrium points propagate only at the ambient sound
speed. This progression is qualitatively depicted in Figure 1, where the
trajectory (wave velocity) of the wave maximum, minimum, and equilibrium
points are plotted. As the trajectories of different points on the wave
are non-parallel, the wave will deform as it propagates. The rate of
deformation depends on the magnitude of the differences between the
various trajectories in the wave profile, and those depend on both the
induced particle velocity and the fluid\'s value of *B/A*. As a result
of this dependence, a fluid with a higher *B/A* value will exhibit more
rapid wave deformation than a fluid with a lower *B/A* value if the same
boundary velocity is applied to both.
\
*Figure 1: Progressive deformation of an initially sinusoidal wave
profile. Local trajectories are plotted on the x-t axes for the wave
maximum (c + v), the local equilibrium (c~o~), and the wave minimum.*
According to the progressive wave equation given for an inviscid fluid -
and the process depicted in Figure 1 - the wave maximum will eventually
catch up and overtake the wave front to form a discontinuous shock. In
real fluids this is not necessarily the only possible outcome, as all
sound waves in real fluids will attenuate as they propagate to some
extent. For many dissipative processes, the effect on the wave is
proportional to *ω*^2^,[^3] thus the generated higher harmonics are
dissipated more severely than the fundamental frequency. In this regard
the effects of dissipation hamper wave steepening, and for some wave
amplitudes a quasi-steady waveform can be reached in which the nonlinear
steepening effects are perfectly balanced by dissipative effects.
Provided sufficient amplitude, or a nearly inviscid fluid, shock waves
are formed in a progressive wave. It is for this reason that cumulative
deformation is more likely to result in shocks in water than air, as air
can be highly dissipative at the required wave amplitudes. Although not
discussed in the sections to follow, the generation of new harmonic
content continues after shock formation. The analytical description of
this process is not fundamentally different from the analysis given, as
the weak shock assumption is employed. For more details on the
post-shock regime refer to the seminal paper by Blackstock [^4] or to a
variety of reference texts on nonlinear acoustics.[^5][^6][^7][^8]
## Frequency Analysis of Solutions Obtained Using the Method of Characteristics
The most intuitive analytical solution to the deformed wave profile,
*u*(*x*,*t*), is obtained using an approach based on the method of
characteristics.
When applied to planar progressive waves this approach is described in
the entry for the qualitative description of shock
waves.
Discussion focusing on the intermediate wave profile in addition to the
shock formation properties is given in many reference texts on
nonlinear acoustics, including those by Hamilton and Blackstock,[^9]
Enflo,[^10] Beyer,[^11] or Pierce.[^12]
As depicted in Figure 1, the essence of this approach is to project
known values, given by ordinary differential equations, along
characteristic paths defined by *dx*/*dt* = (*c* + *u*). For the case of
a single propagating wave driven from a sinusoidal boundary,
*u* = *u~o~* sin(*ωt*), the solution is particularly simple as each
location in the domain corresponds to only one characteristic path
carrying constant values of *u*, *c*, etc. At any position throughout
the domain the characteristic trajectories can be identified and the
corresponding value of *u* is used to construct the solution. In the
following example, the implicit equations described in Enflo [^13] were
used to calculate the wave profile:
\
$$u = u_o \sin\left(\omega\left(\tau + \frac{\beta u}{c_o^2}x \right)\right),$$
$$\tau = t - \frac{x}{c_o}$$
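This implicit relation can be evaluated numerically. The following is a
minimal Python sketch (not code from the referenced texts): in
non-dimensional variables *U* = *u*/*u~o~*, *θ* = *ωτ* and
*σ* = *x*/*x*~shock~, with the standard shock formation distance
*x*~shock~ = *c~o~*^2^/(*βωu~o~*), the relation becomes
*U* = sin(*θ* + *σU*), which is solved here by fixed-point iteration
(a contraction for *σ* < 1); the harmonics are then read off an FFT.

```python
import numpy as np

def wave_profile(sigma, n=2048, iters=300):
    """Solve U = sin(theta + sigma*U) on one period by fixed-point iteration.

    Converges for sigma < 1 (pre-shock), since the iteration map is a
    contraction with factor sigma.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    U = np.sin(theta)              # initial guess: undistorted sinusoid
    for _ in range(iters):
        U = np.sin(theta + sigma * U)
    return theta, U

# Harmonic content via a discrete Fourier transform of the periodic profile.
theta, U = wave_profile(sigma=0.9)
amps = np.abs(np.fft.rfft(U)) * 2.0 / U.size   # normalized harmonic amplitudes
for h in range(1, 6):
    print(f"harmonic {h}: amplitude {amps[h]:.4f}")
```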
For this particular example the wave amplitude was set to achieve shock
formation after five wavelengths. For generality all magnitudes of the
solution are given in non-dimensional form. For the fluid, the parameter
of nonlinearity was set to B/A = 5.0, which corresponds to fresh water
at 20^o^C.[^14] The upper portion of Figure 2 gives the spatial wave
profile in which the wave steepening is readily apparent. In the lower
three panels of Figure 2, the frequency spectrum of the velocity wave is
shown as the wave passes through the regions indicated by red bands. The
frequency spectrum plots were obtained using a discrete Fourier transform
(DFT) of the calculated time-domain signal at each indicated location.
\
*Figure 2: Harmonic content of a deforming progressive wave. β = 3.5,
corresponding to B/A = 5.0 for distilled water at 20^o^C.*
In the frequency spectra, where *f~o~* = *ω*/2*π*, the progressive wave
contains only the fundamental frequency at the driven boundary, x = 0.
Importantly, it can also be seen the propagated and deformed wave
profile contains only integer harmonics of the initial frequency, and
the magnitude of these harmonics increases with propagation distance.
## Direct Analytical Solution to Harmonic Profile
While the method of characteristics provides spatial and time-domain
wave profiles, from which frequency content can be analyzed, a more
direct approach is available to solve for the harmonic content
of the deformed progressive wave. Again consider the example of a planar
wave driven by a sinusoidal boundary condition, *u* = *u~o~* sin(*ωt*).
One common approach for obtaining a general solution when the system
input is periodic is to assume a periodic system response expressed as a
Fourier series. As shown
by Pierce,[^15] applying this approach to the progressive wave problem
gives the series solution as:
\
$$u\left(\omega \tau, \sigma \right ) = \sum_{n=1}^{\infty} B_n\left(\sigma \right ) \sin\left(n \omega \tau \right )$$
$$\sigma = \frac{x}{x_\text{shock}}$$
$$\tau = t - \frac{x}{c_o}$$
$$B_n\left(\sigma \right ) = \frac{2}{\pi} \int_{0}^{\pi}u\left(\omega \tau, \sigma \right ) \sin \left(n \omega \tau \right ) d\left(\omega \tau \right )$$
From this form of the solution the coefficient magnitudes directly yield
the harmonic amplitudes, while the complete Fourier series yields the
solution in the time and space domains. The challenge in this approach
is the evaluation of the Fourier coefficients. In the context of
nonlinear acoustics this solution is first attributed to Fubini in
1935,[^16] where manipulation of the integral terms was used to achieve
the integral form of Bessel's function, thus the harmonic components can
be directly defined according to:
\
$$B_n(\sigma) = \frac{2u_o}{n \sigma} J_n(n\sigma)$$
where *J~n~* is the Bessel function of the first kind. Historically, the
subsequent work by Blackstock in 1966 [^17] clarified the physical
implications of this solution and brought it to a wider audience in
nonlinear acoustics. For additional details on this derivation refer to
the texts by Pierce,[^18] Enflo,[^19] or Hamilton.[^20] For
completeness, Figure 3 plots the
continuous harmonic profile as a function of normalized propagation
distance. The wave conditions are the same as those used in calculations
for Figure 2. In this particular example the shock formation occurs at
five wavelengths; however, the plotted harmonic profile is general to
any value of x~shock~.
*Figure 3: Continuous harmonic profile of a deforming progressive wave.
β = 3.5, corresponding to B/A = 5.0 for distilled water at 20^o^C.*
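The curves in Figure 3 follow directly from the closed-form Fubini
coefficients; a minimal Python sketch of that evaluation (not the code
used to produce the figure) is:

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind, J_n

# Evaluate B_n(sigma) = (2*u_o/(n*sigma)) * J_n(n*sigma), normalized by u_o,
# over the pre-shock range 0 < sigma <= 1.
sigma = np.linspace(1e-6, 1.0, 500)   # avoid sigma = 0 (limit: B_1 -> u_o)
for n in range(1, 4):
    B_n = 2.0 / (n * sigma) * jv(n, n * sigma)   # B_n / u_o
    print(f"n={n}: B_n/u_o at shock formation (sigma=1) = {B_n[-1]:.4f}")
```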
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]:
[^7]:
[^8]:
[^9]:
[^10]:
[^11]:
[^12]:
[^13]:
[^14]:
[^15]:
[^16]:
[^17]:
[^18]:
[^19]:
[^20]:
# Engineering Acoustics/Thunder acoustics
**Thunder** is defined as the sound signature associated with the shock
wave produced by a lightning discharge. Thunder has intrigued and
frightened humans for millennia. Early explanations of this phenomenon
included fights between Zeus and his subordinates in Greek mythology, or
the collision of clouds in the sky by Aristotle. It was only in the late
1800s that the true physical cause was identified by the scientific
community: the heating of a narrow channel of air to \~24000 K. The air
molecules within the channel ionize and generate a powerful shock wave
that can be heard over distances of up to 25 km, depending on the wind.
## Lightning fundamentals
*Figure 2. The 4 groups of cloud-to-ground (CG) discharges (adapted from
Lightning: Physics and Effects (2003)).*
There are two main types of lightning discharges: cloud-to-ground (CG)
and intra-cloud (IC), the latter accounting for about 3/4 of all
discharges (there are however other types of discharges that are less
commonly encountered such as ball
lightning).
The CGs can be categorized into 4 groups:[^1]
: \(a\) downward negative (90% of all CGs are of this group),
: \(b\) upward positive,
: \(c\) downward positive, and
: \(d\) upward negative.
The charges have three main modes in which they can be sent to the
ground from the cloud (see diagram):
1. Dart-leader-return stroke sequences.
2. Continuing currents (that last for hundreds of milliseconds) which
are long quasi-stationary arcs.
3. M-components (named after D.F. Malan who first studied these
processes in the 1930s) which are transient processes during a
continuing current (2nd mode).
*Figure 3. The three main modes of charge transfer to ground (adapted
from Lightning: Physics and Effects (2003)).*
## Thunder generation
\"Thunder can be defined as the acoustic emission associated with a
lightning discharge\".[^2]
### Types
All processes in CG and IC discharges produce thunder which can be
divided into 2 categories:
- audible thunder (frequencies greater than 20 Hz), which comes from a
  series of decaying shock waves produced by the expansion of various
  portions of the nearly instantaneously heated lightning channel, which
  is filled with ionized air molecules (plasma). Of this category there
  are: a) rumbling thunder, a long low series of growling-like sounds,
  and b) clapping thunder, which is usually loud and quick.
- non-audible or infrasonic thunder (frequencies lower than 20 Hz)
which is thought to originate from ICs where large volumes of air
are displaced by the rapid removal of electrons or protons from the
cloud itself. This category of thunder has only recently received
the attention of the scientific community.
### Maximum amplitude frequency
*Figure 4. The frequency spectrum of thunder (and unfiltered rain
sounds) obtained using Audacity 1.3 (Beta).*
It has been empirically found that the loudest frequency in thunder is:
$$f_{\rm max} = c_0 \left( \frac{p_0}{E_0} \right)^\frac{1}{2}$$ where
$c_0$ is the speed of sound, $p_0$ is the ambient pressure, and $E_0$ is
the energy per unit length of lightning channel which is defined as:
$E_0 = \frac{1}{\pi {R_0}^2} \int\limits_{0}^{t_{\rm disch}}\rho I^2\,dt$
where $R_0$ is the initial channel radius, $\rho$ is the resistivity of
the plasma and $t_{\rm disch}$ is the discharge duration. The values of
$E_0$ have been found to vary around 50 kJ/m.
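Plugging in nominal sea-level values (assumed here: $c_0$ = 343 m/s,
$p_0$ = 101.325 kPa) together with the typical channel energy quoted
above places the loudest frequency at a few hundred hertz; a one-line
sketch of the evaluation:

```python
import math

c_0 = 343.0        # speed of sound, m/s (assumed sea-level value)
p_0 = 101325.0     # ambient pressure, Pa (assumed sea-level value)
E_0 = 50e3         # energy per unit channel length, J/m (typical value from the text)

# f_max = c_0 * sqrt(p_0 / E_0)
print(f"f_max ~ {c_0 * math.sqrt(p_0 / E_0):.0f} Hz")   # on the order of a few hundred Hz
```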
### A.A. Few\'s Model of thunder generation
It is widely accepted that audible thunder is generated by the lightning
channel and the subsequent shock wave that travels extremely rapidly
(\~3000 m/s).[^3] A.A. Few provides an experimentally supported thunder
generation mechanism.
#### Assuming perfectly cylindrical/spherical expansion
The shock wave time history can be divided into three intervals: the
first consists of a strong shock with an extremely high pressure ratio
across the wave front. The second is a weak shock that travels at a
relatively slower pace. The third is the acoustic wave that propagates
at 343 m/s, i.e. the speed of sound at 293 K.
The distance traveled by the strong shock wave before it turns into a
weak shock can be found by performing a work-energy balance on the fluid
that has been compressed by the strong shock (i.e. work is done on the
fluid by volume and pressure changes). A so-called relaxation radius,
$R_s$ (for spherical shock waves or $R_c$ for cylindrical shock waves)
can thus be defined to account for the distance traveled by the strong
shock:
Spherical relaxation radius:
$$R_s=\left(\frac{3E_t}{4 \pi p_0}\right)^\frac{1}{3}$$
Cylindrical relaxation radius:
$$R_c=\left(\frac{E_0}{\pi p_0}\right)^\frac{1}{2}$$
where $E_t$ is the total energy released by the spherical shock wave. It
is in this last section of the shock wave that the thunder is heard.
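For the typical channel energy quoted earlier, the cylindrical
relaxation radius can be evaluated directly ($p_0$ is again an assumed
sea-level value):

```python
import math

p_0 = 101325.0     # ambient pressure, Pa (assumed sea-level value)
E_0 = 50e3         # energy per unit channel length, J/m (typical value from the text)

# R_c = sqrt(E_0 / (pi * p_0)): extent of the strong-shock region
R_c = math.sqrt(E_0 / (math.pi * p_0))
print(f"R_c ~ {R_c:.2f} m")   # the strong shock extends only ~0.4 m from the channel
```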
Most studies of thunder could not analyze a naturally produced sound
from close range, because it is impossible to predict exactly where
lightning will strike and thus to place a microphone near it. The most
common way of artificially generating lightning is with a rocket
connected to a steel wire and fired into a thundercloud to create a
short circuit near the ground. This \"forces\" an electrical current
from the electrically charged thundercloud to the ground via the
attached wire.
This type of discharge is commonly called artificially triggered
lightning, and was used, among others, by Depasse in 1986, 1990 and 1991
at Saint-Privat D\'Allier, France, where the pressure profile behind a
lightning-generated shock wave was matched to the theoretical profile
obtained from the cylindrical shock wave theory developed by Few in
1969.[^4]
*Figure 5. The different sizes of tortuous segments in a lightning
channel.*
#### Effect of tortuosity on rumbling or clapping thunder
Lightning channels are not straight channels with perfectly circular
gas-dynamic expansion, and hence their tortuosity must be accounted
for. It was for this reason that A.A. Few distinguished three different
levels of tortuosity in lightning channels. Macro-, meso- and
micro-tortuous segments can be qualitatively observed in any CG
discharge. It was found that macro- and meso-tortuous segments are very
important in organizing the pulses and acoustic behaviour of a CG
discharge.[^5] Through computational studies, it was found that 80% of
the acoustic energy is released within 30 degrees of the plane
perpendicular to the main axis of a macro-tortuous discharge. An
observer in this region will hear clapping thunder, whereas one placed
outside this region will hear rumbling thunder.
## Thunder propagation
*Thunder propagation schematics.*
### Attenuation
There are two types of attenuation effects. The first is due to the
finite amplitude of the propagating acoustic waves, which causes a
non-negligible amount of stretching of the wave. There is a so-called
\"eroding\" effect which tries to break down the sudden pressure jump at
the wave front into a more rounded profile. This is a form of
dissipation due to visco-thermal losses that affects the higher
frequencies and so explains why only lower frequencies are heard when
lightning strikes 1 or more kilometres away. The second form of
attenuation is due to the scattering and aerosol effect left by the rain
drops and thunderclouds (filled with water vapour) commonly found in
most lightning conditions. These micro particles also attenuate the
higher frequencies of a thunder clap or rumble. See
shockwave and
detonation wikis for more
information about the decay of a strong shock.
### Environment
Recalling that the speed of sound, c, depends on the properties of the
medium, the thunder will travel with a velocity, pitch, frequency band
and duration that depend on the conditions surrounding the lightning
channel, such as the air composition and atmospheric pressure, as well
as on the characteristics of the channel itself. Indeed, as shown in the
study by Blanco et al. (2009),[^6] the geometry plays a vital role in
the perceived resulting sound. Furthermore, there is a level of
attenuation that must be accounted for as the sound travels through the
atmosphere and past ground obstacles (such as trees, buildings, bridges,
and terrain).
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]:
# Engineering Acoustics/Basic Concepts of Underwater Acoustics
The study of underwater acoustics has always been of great importance
for those who depend on the sea, particularly when it comes to
navigation instruments. Methods to find a position on Earth from the
stars have been available for a long time, but the first apparatus to
track what is underwater is relatively recent. One of these instruments,
which improved the safety of navigation, is the fathometer. It relies on
the simple concept of measuring how much time a sound wave generated at
the ship takes to reach the bottom and return as a reflected wave. If
one knows the speed of sound in the medium, the depth can easily be
determined. Another mechanism consists of underwater bells on lightships
or lighthouses and hydrophones on ships, used to find the distance
between them. These could be considered the forerunners of SONAR (SOund
Navigation And Ranging). Many animals also take advantage of underwater
sound propagation to communicate.
## Speed of Sound
In 1841, Jean-Daniel Colladon was able to measure the speed of sound
underwater for the first time. He conducted experiments in Lake Geneva,
where he was able to transmit sound waves from Nyon to Montreux (50 km).
The idea of the experiment was to propagate a sound wave using a hammer
and an anvil to generate the wave, and a parabolic antenna to capture
the sound at a distance. A flash of light was emitted at the same time
that the hammer hit the anvil, and the delay between the light and the
sound was used to determine the speed of sound.
The variation of the speed of sound with depth is much more significant
than its variation along the surface.
The equation for the speed of sound (m/s) in water developed by Del
Grosso,[^1] applicable in Neptunian[^2] waters, depends on the
temperature *T* in Celsius, the salinity *S* in ppt (parts per thousand)
and the gauge pressure *P* in atmospheres:

$c(T,S,P) = 1449.08 + 4.57Te^{-(T/86.9+(T/360)^2)} + 1.33(S-35)e^{-T/120} + 0.1522Pe^{T/1200+(S-35)/400} + 1.46\times10^{-5}P^{2}e^{-T/20+(S-35)/10}$

where the pressure is a function of the depth Z \[km\] and the latitude
$\phi$, given by:

$P=99.5(1-0.00263\cos 2\phi)Z + 0.239Z^{2}$
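The two fits above can be evaluated directly; a minimal Python sketch
(the formulas are transcribed from the text, and the example inputs are
assumed, purely illustrative values):

```python
import math

def pressure_atm(Z_km, latitude_deg):
    """Gauge pressure in atmospheres at depth Z (km) and a given latitude."""
    phi = math.radians(latitude_deg)
    return 99.5 * (1.0 - 0.00263 * math.cos(2.0 * phi)) * Z_km + 0.239 * Z_km**2

def sound_speed(T, S, P):
    """Speed of sound (m/s): T in Celsius, S in ppt, P in atmospheres (gauge)."""
    return (1449.08
            + 4.57 * T * math.exp(-(T / 86.9 + (T / 360.0) ** 2))
            + 1.33 * (S - 35.0) * math.exp(-T / 120.0)
            + 0.1522 * P * math.exp(T / 1200.0 + (S - 35.0) / 400.0)
            + 1.46e-5 * P**2 * math.exp(-T / 20.0 + (S - 35.0) / 10.0))

# Example (assumed inputs): 10 C, 35 ppt, at 1 km depth and 30 deg latitude.
P = pressure_atm(1.0, 30.0)
print(f"P = {P:.1f} atm, c = {sound_speed(10.0, 35.0, P):.1f} m/s")
```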
*Figure 1: Speed of sound profile at low latitudes. The salinity
gradient was not taken into account.*
The axis passing through the region where the speed of sound is a
minimum is known as the deep sound channel axis.
The speed of sound is very sensitive to temperature, which changes
considerably across the thermocline. Beyond 1000 m in depth, the
pressure term governs the equation, slowly increasing the speed with
depth. Salinity has very little effect on the equation except in very
specific situations, such as heavy rain or the encounter between a river
and the sea. The shape of the curve may change drastically from one
place to another; e.g., the curve would be closer to a linear function
of depth in very cold places.
## Refraction
The gradient of the speed of sound within the water causes a phenomenon
similar to a mirage, in which rays of light are bent. If we divide the
water into multiple layers parallel to the surface, we get various media
with different speeds of sound, i.e., different specific characteristic
impedances. Considering a source of sound pressure underwater and making
use of Snell\'s law, we can trace the path the wave will follow.
Snell\'s law tells us that the sound bends towards the layer with the
lower sound speed. If the sound wave's angle with the horizontal is too
high (higher than $\theta_{max}$), the wave will eventually hit the
bottom or the surface; otherwise it will bend continuously towards the
horizontal until it passes the critical angle ($\theta_c$) and is then
completely reflected back.
$\theta_{max}=(2\Delta c/c_{max})^{1/2}$

($c_{max}$ is the maximum speed found in the SOFAR channel.)

$\sin\theta_c = c_1/c_2$
This process happens over and over again, causing the sound to be
trapped in a certain depth range known as the SOFAR (SOund Fixing And
Ranging) channel. As the sound can reach neither the bottom nor the
surface, the losses are small and no sound is transmitted to the air or
the seabed, helping sound propagate over large distances. Signals have
been detected at ranges that exceed 3000 km.

This channel can be used successfully for communication by some species
of *cetacea*.

We can see that the sound concentrates at some depths and is much less
present at others, causing some regions to be noisier than others.
*Sound trapping in the SOFAR channel.*
Note that if the surface temperature is very low, this phenomenon may no
longer occur: the wave would bounce on the surface and be reflected
back, as shown in the graph for the 15.19° angle. The same effect occurs
in the *mixed layer*, the layer agitated by waves, in which the speed of
sound depends only on pressure. This effect may cause shadow zones.
If you have a source that is between the deep sound channel axis and the
surface, only the rays making an angle less than $\theta_o$ with the
horizontal would be trapped.
$\theta_o=\theta_{max}(z_s/D_s)^{1/2}$
where $z_s$ is the depth of the source and $D_s$ is the depth of the
deep sound channel axis.
## Reflection

Reflection also occurs when the sound wave hits another body, such as
the seabed, the surface, animals, ships and submarines.

$R = (r_2/r_1 - \cos\theta_t/\cos\theta_i)/(r_2/r_1 + \cos\theta_t/\cos\theta_i)$
where $r_1$ is the characteristic acoustic impedance of water and $r_2$
is the characteristic acoustic impedance of the other body, $\theta_i$
is the incident angle and $\theta_t$ is the angle of the transmitted
wave, which can be obtained via Snell\'s law. The formula is for the
oblique-incidence case, but we can recover the normal-incidence case by
setting $\theta_i=\theta_t=0$.
If we can measure the reflected wave, we can determine the reflection
coefficient, and from it the characteristic acoustic impedance of the
body that the wave hit, which gives an idea of what the body might be.
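A minimal Python sketch of this reflection coefficient, with the
transmitted angle obtained from Snell's law (the impedance and sound
speed values in the example are assumed nominal values, not from the
text):

```python
import math

def reflection_coefficient(r1, r2, c1, c2, theta_i):
    """R for a wave in medium 1 (impedance r1, speed c1) hitting medium 2."""
    sin_t = (c2 / c1) * math.sin(theta_i)   # Snell's law: sin(theta_i)/c1 = sin(theta_t)/c2
    if abs(sin_t) >= 1.0:
        return 1.0                          # beyond the critical angle: total reflection
    theta_t = math.asin(sin_t)
    ratio = math.cos(theta_t) / math.cos(theta_i)
    return (r2 / r1 - ratio) / (r2 / r1 + ratio)

# Example: water (r1 ~ 1.48e6 rayl, c1 ~ 1500 m/s) against a sandy bottom
# (r2 ~ 2.7e6 rayl, c2 ~ 1650 m/s are assumed nominal values).
print(reflection_coefficient(1.48e6, 2.7e6, 1500.0, 1650.0, math.radians(20.0)))
```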
## Transmission Loss

The transmission loss is defined as

$TL=10\log[I(1)/I(r)]$

where $I(r)$ is the intensity of sound measured at a distance $r$ and
$I(1)$ is the intensity at the 1 m reference distance.

Sometimes it is useful to separate $TL$ into a loss due to geometrical
spreading and a loss due to absorption:

$TL=TL(\text{geom})+TL(\text{losses})$

If the sound is trapped between two perfectly reflecting surfaces,

$TL=10\log r + ar$

where $a$ is the absorption coefficient in dB/m.
## Sonar Equations

A passive sonar measures the incoming waves and, with more than one
device, is able to determine the position of the target by
triangulation. Its equation states that the *sound level* coming from
the source, reduced by the *transmission loss*, has to be higher than
the background noise (generated by waves, wind, animals, ships and
others) in order to get any measurement.

Passive sonar equation:

$SL - TL \geq NL - DI + DT_N$

where *SL* is the sound level emitted by the target, *NL* is the noise
level, *DI* is the directivity index, $DT_N$ is the detection threshold
for noise-limited performance and *TL* is the transmission loss.

An active sonar emits a wave and measures the reflected sound waves.
Since the wave propagates over double the distance, the transmission
loss term is multiplied by two. The equation states the conditions for
obtaining valid measurements (higher than the background noise).

Active sonar equation:

$SL - 2TL + TS \geq NL - DI + DT_N$

where *SL* is the sound level emitted by the source, *NL* is the noise
level, *DI* is the directivity index, $DT_N$ is the detection threshold
for noise-limited performance, *TL* is the transmission loss and *TS* is
the target strength, a measure of how good an acoustic reflector the
target is.
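A minimal sketch of both detection criteria as "signal excess" checks;
all dB values in the example are assumed, purely illustrative:

```python
# Signal excess in dB; detection is expected when the excess is >= 0.
def passive_signal_excess(SL, TL, NL, DI, DT_N):
    return (SL - TL) - (NL - DI + DT_N)

def active_signal_excess(SL, TL, TS, NL, DI, DT_N):
    return (SL - 2.0 * TL + TS) - (NL - DI + DT_N)

# Example values (assumed): a 140 dB target, 60 dB one-way transmission loss,
# 65 dB noise level, 10 dB directivity index, 10 dB detection threshold.
print(passive_signal_excess(SL=140, TL=60, NL=65, DI=10, DT_N=10))        # 15 dB
print(active_signal_excess(SL=220, TL=80, TS=15, NL=65, DI=20, DT_N=10))  # 20 dB
```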
## References

- Kinsler, L.E., Frey, A.R., Coppens, A.B. and Sanders, J.V.,
  *Fundamentals of Acoustics*, 4th ed., Wiley, 2000.
[^1]:
[^2]:
# Engineering Acoustics/Analogies in aeroacoustics
## Acoustic Analogies
*Test facility for assessing the noise emissions of a supersonic jet
engine at the NASA Langley Research Center.*
A direct way to predict aerodynamic noise would be to solve the
Navier-Stokes equations in a general three-dimensional, unsteady,
compressible numerical simulation. Unfortunately, this is hardly
achievable except for very simple academic configurations within a
delimited region. The idea of an acoustic analogy is to restate the full
equations of gas dynamics as an equivalent wave equation in a
homogeneous medium in uniform motion, from the point of view of a
distant observer. This leads to the simplifications of usual linear
acoustics problems \[1\]. The most systematically used formalisms are
Lighthill\'s analogy and the extensions made by Curle and by Ffowcs
Williams & Hawkings, because they offer a wide range of applicability.
It must be stated clearly that the aim of an analogy is not essentially
to deduce exact results or numerical coefficients, but to infer general
laws from the standard procedures associated with the classical wave
equation. A preliminary knowledge of the main flow features, coming from
either experiments, CFD or analytical methods, is needed to apply these
analogies. Moreover, the degree of accuracy of the flow variables from
which acoustic results are extracted is crucial to ensure the relevance
of the prediction method.
## Governing gas dynamics equations to Lighthill\'s equation
The development below follows references \[2\] and \[3\] and originates
from the Aeroacoustics wikipage.

$$\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\mathbf{v}\right)=\frac{D\rho}{D t} + \rho\nabla\cdot\mathbf{v}= 0,$$ *conservation of mass equation (E1)*
where $\rho$ and $\mathbf{v}$ represent the density and velocity of the
fluid, which depend on space and time, and $D/Dt$ is the substantial
derivative.
Next is the conservation of momentum equation, which is given by
$${\rho}\frac{\partial \mathbf{v}}{\partial t}+{\rho(\mathbf{v}\cdot\nabla)\mathbf{v}} = -\nabla p+\nabla\cdot\sigma,$$ *conservation of momentum equation (E2)*
where $p$ is the thermodynamic pressure, and $\sigma$ is the viscous (or
traceless) part of the Cauchy stress tensor from the Navier--Stokes
equations.
_Step 1:_ Multiplying (E1) by $\mathbf{v}$ and
adding it to (E2) yields
$$\frac{\partial}{\partial t}\left(\rho\mathbf{v}\right) + \nabla\cdot(\rho\mathbf{v}\otimes\mathbf{v}) = -\nabla p + \nabla\cdot\sigma.$$
_Step 2:_ Differentiating (E1) with respect to
time, taking the divergence of (E2) and subtracting the latter from the
former, we get
$$\frac{\partial^2\rho}{\partial t^2} - \nabla^2 p + \nabla\cdot\nabla\cdot\sigma = \nabla\cdot\nabla\cdot(\rho\mathbf{v}\otimes\mathbf{v}).$$
_Step 3:_ Subtracting $c_0^2\nabla^2\rho$, where
$c_0$ is the speed of sound in the medium in its equilibrium (or
quiescent) state, from both sides of the last equation and rearranging
it results in:
$$\frac{\partial^2\rho}{\partial t^2}-c^2_0\nabla^2\rho = \nabla\cdot\left[\nabla\cdot(\rho\mathbf{v}\otimes\mathbf{v})-\nabla\cdot\sigma +\nabla p-c^2_0\nabla\rho\right],$$
which is equivalent to
$$\frac{\partial^2\rho}{\partial t^2}-c^2_0\nabla^2\rho=(\nabla\otimes\nabla) :\left[\rho\mathbf{v}\otimes\mathbf{v} - \sigma + (p-c^2_0\rho)\mathbb{I}\right],$$
where $\mathbb{I}$ is the identity tensor, and $:$ denotes the (double)
tensor contraction operator.
Using *Einstein notation*, Lighthill's equation can be written as
$$\frac{\partial^2\rho}{\partial t^2}-c^2_0\nabla^2\rho=\frac{\partial^2\sigma_{ij}}{\partial x_i \partial x_j},\quad$$
where $\sigma_{ij}$ is the so-called Lighthill stress tensor,
$\sigma_{ij} = \rho V_i V_j + (P-c_0^2 \rho)\delta_{ij} - \tau_{ij}$;
further details are provided in the next section. Here:

- $\rho V_i V_j$ is the inertial (Reynolds) stress tensor;
- $\tau_{ij}$ is the viscous stress tensor;
- $(P-c_0^2 \rho)\delta_{ij}$ represents all effects due to entropy
  non-homogeneities (important for hot jets with high temperature
  gradients).
## Lighthill\'s Acoustic Analogy
Aeroacoustic engineers need to predict the noise arising from
turboengines, i.e. a localized unsteady flow in a propagating medium, at
a first level of approximation. The basic idea of Lighthill (1952) is to
reformulate the general equations of gas dynamics in order to derive a
wave equation. The featured variable is the density fluctuation,
naturally preferred because acoustic waves in a gas are due to
compressibility. No special assumption is made and no linearisation is
introduced.
Combining the conservation of mass and momentum exactly, as in the steps
above, gives

$$\ \frac{\partial^2 \rho}{\partial t^2} - \frac{\partial^2 (\rho V_i V_j)}{\partial x_i \partial x_j} = \frac{\partial^2}{\partial x_i \partial x_j}\left(P\delta_{ij} - \tau_{ij}\right)$$

with

$$\ \sigma_{ij} = \rho V_i V_j + (P-c_0^2 \rho)\delta_{ij} -\tau_{ij}$$

called *Lighthill\'s tensor*.

This equation remains true if the same quantity is added to both sides.
Subtract $\ c_0^2 \frac{\partial^2 \rho}{\partial x_j^2}$ from both
sides, with $\ c_0$ being the characteristic speed of sound of the
undisturbed gas (precisely, the speed of sound in the medium surrounding
the flow region in the applications; this is different from the local
speed of sound $\ c$ in the flow). Then, forming a wave operator on the
left-hand side and moving all other terms to the right-hand side leads
to:

$$\ \frac{\partial^2 \rho}{\partial t^2} - c_0^2 \frac{\partial^2 \rho}{\partial x_j^2} = \frac{\partial^2}{\partial x_i \partial x_j} [\rho V_i V_j + (P-c_0^2 \rho)\delta_{ij} -\tau_{ij} ]$$ *Lighthill\'s equation*
This result is the well-known Lighthill\'s equation. When applied to a
true problem of acoustics, it reduces to the homogeneous wave equation
at large distances from the flow, since all terms in the right-hand side
can be considered negligible (according to the reasonable assumptions
related to the propagation of acoustic waves, as a small-amplitude,
isentropic motion\[4\]). An alternative form of Lighthill\'s equation
can be written with the pressure instead of the density, as follows:
$$\ \frac{1}{c_0^2} \frac{\partial^2 P}{\partial t^2} - \frac{\partial^2 P}{\partial x_j^2} = \frac{\partial^2}{\partial x_i \partial x_j} \left(\rho V_i V_j - \tau_{ij}\right) + \frac{1}{c_0^2} \frac{\partial^2}{\partial t^2}\left(P-c_0^2 \rho\right)$$ *Lighthill\'s equation for the fluctuating pressure*
This form is perhaps less common because density fluctuations are
directly related to compressibility effects, whereas pressure
fluctuations can also exist to compensate inertial accelerations in the
fluid. When temperature
non-homogeneities are involved, however, the fluctuating pressure is
well suited.
Explanation of each term of Lighthill\'s tensor: aeroacoustic sources
can be separated into three distinct categories.

*Monopole acoustic source*

- Monopole sources: $\frac{\partial^2}{\partial t^2}(P)$. Spherical
  sources, or discrete sources delivering a flow rate $Q(t)$ fluctuating
  over time. They can appear only if solid surfaces are encountered.

*Dipole acoustic source*

- Dipole sources: $-\frac{\partial^2}{\partial t^2}(c_0^2 \rho)$.
  Similar to two monopoles placed side by side in phase opposition,
  $+Q(t)$ and $-Q(t)$. Dipoles are associated with the force $F(t)$
  along the axis created by the joined monopoles. Like monopoles, they
  only appear when solid surfaces are involved in the domain.

*Quadrupole acoustic source*

- Quadrupole sources:
  $\frac{\partial^2}{\partial x_i \partial x_j} (\rho V_i V_j)$.
  Constituted of two dipoles side by side in phase opposition, these
  sources come from turbulent vortices and are usually neglected in
  low-velocity flows. They originate from the shear terms in the
  Navier-Stokes equations.
As a consequence of the general equations of gas dynamics, Lighthill\'s
equation is exact. All aeroacoustic processes, including the generation
of sound by flow non-homogeneities, sound propagation through the flow
and sound dissipation by viscosity or heat conduction, are accounted
for. Hence, this equation is not tractable as a pure wave equation of
linear acoustics, since the right-hand side contains the acoustic field
to be determined and cannot be considered a true source term. We
therefore need to approximate this term independently of the acoustic
variables, which corresponds to neglecting some of the mechanisms. To
remove this fundamental difficulty, Lighthill proposed some
simplifications motivated by the idea that **sound generation by the
mixing of fluid is the dominant mechanism**, especially at the high
Reynolds numbers of interest in aeronautics. This is equivalent to
privileging the mechanical effects related to fluid inertia and
discarding thermodynamic effects as secondary.
## Lighthill\'s Approximation
Lighthill\'s equation is well posed because it assumes that the sources
are contained within the flow and not outside of it. It reduces to the
homogeneous wave equation in the propagation region. But in order to
solve it near the sources, approximations based on comparisons between
phenomena have to be made.

Lighthill\'s analogy is often used to calculate jet noise; in this type
of application we have specific conditions leading to simplifications
\[5\]:
*Figure: classical approximations made using Lighthill\'s analogy.*

Practical approximation used in an industrial context:

*Figure: practical approximations made using Lighthill\'s analogy.*
The approximation makes the equation explicit in the sense of the wave
equation in linear acoustics, to be solved formally by the Green\'s
function technique. When numerical means are used to described the flow,
some assumptions can be removed for a more accurate evaluation and the
equation used to post-process the flow data.
*Figure: the pseudo-sound phenomenon and its influence on the near-field
and far-field assumptions used to calculate the sound pressure.*
If entropic non-homogeneities dominate in a disturbed flow, the sources
appear as equivalent monopoles. Using Ribner\'s splitting and
incompressible-flow relations:

$\ \Delta p'_t = -\rho_0 \frac{\partial^2(U_i U_j)} {\partial x_i \partial x_j} = -\rho_0 \frac{\partial U_i} {\partial x_j} \frac{\partial U_j} {\partial x_i}$

Leading to the final approximate Lighthill equation:

$\ \Delta p'_a - \frac{1}{c_0^2} \frac{\partial^2p'_a}{\partial t^2} = \frac{1}{c_0^2} \frac{\partial^2p'_t}{\partial t^2}$ *approximate Lighthill equation*
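As an illustration of how the analogy is used as a post-processing step
on flow data, here is a minimal numpy sketch (an illustration under
stated assumptions, not a method from the references): assuming the
quadrupole term dominates, it evaluates the double divergence of
$\rho V_i V_j$ on a sampled 2-D velocity field by finite differences.

```python
import numpy as np

def lighthill_source(rho, u, v, dx, dy):
    """Double divergence of rho*V_i*V_j on a uniform 2-D grid (quadrupole term)."""
    Txx, Txy, Tyy = rho * u * u, rho * u * v, rho * v * v
    d2Txx = np.gradient(np.gradient(Txx, dx, axis=0), dx, axis=0)   # d^2 Txx / dx^2
    d2Tyy = np.gradient(np.gradient(Tyy, dy, axis=1), dy, axis=1)   # d^2 Tyy / dy^2
    dTxy = np.gradient(np.gradient(Txy, dx, axis=0), dy, axis=1)    # d^2 Txy / dx dy
    return d2Txx + 2.0 * dTxy + d2Tyy

# Toy example: a smooth vortex-like velocity field on a 64x64 grid (assumed data,
# standing in for CFD output).
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
u = -y * np.exp(-(x**2 + y**2))
v = x * np.exp(-(x**2 + y**2))
src = lighthill_source(rho=1.2, u=u, v=v, dx=2 / 63, dy=2 / 63)
print(src.shape, float(np.abs(src).max()))
```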
## Ffowcs Williams & Hawkings formulation
### History
In 1969, Ffowcs Williams & Hawkings were the first scientists to express
a fundamental equation for predicting the noise generated by blades in a
flow \[6\].
### Ffowcs Williams & Hawkings general equation
(There are several formulations of the FWH analogy; here is one adapted
to the description of blade noise phenomena.)

Deriving from Lighthill\'s equation, it can be shown that the
fluctuating pressure generating acoustic sources in a rotor is the
solution of a specific inhomogeneous wave equation:
$\frac{\partial^2 p(z,t_z)}{\partial z_i^2} - \frac{1}{c_0^2} \frac{\partial^2 p(z,t_z)}{\partial t_z^2} = S(z,t_z)$
where, $z$ the vector coordinates from a source point and $t_z$ the time
in sources domain.
The source term $S(z,t_z)$ can be written as the following sum:
$S(z,t_z) = Q(z,t_z) + \frac{\partial F_i(z,t_z)}{\partial z_i} + \frac{\partial^2 T_{ij}(z,t_z)}{\partial z_i \partial z_j}$
- The first term, $Q(z,t_z)$, represents *thickness noise* generated by
  the volume displacement of fluid. A fan blade has thickness and
  volume. As the rotor rotates, the volume of each blade displaces a
  fluid volume, which in turn fluctuates the near-field pressure, and
  noise is generated. This noise is tonal at the running frequency and
  generally very weak for cooling fans, because their RPM is relatively
  low. Therefore, the thickness of fan blades hardly affects electronic
  cooling fan noise. (This kind of noise can become severe for high
  speed turbomachines like helicopter rotors.)
- The second term is called *loading noise* and comes from the
  fluctuation of the force field $F_i(z,t_z)$ on moving surfaces. In a
  rotor, it originates from the nonstationary aerodynamic forces between
  the fluid and the blades. In computational models, this term is
  represented by surface-distributed dipoles. (Dominant for fans.)
- The final term is *shear noise*, which is composed of quadrupoles on
  the surface of the blades \[7\].
The Ffowcs Williams & Hawkings theory allows this equation to be solved,
once the source terms are known, using Green\'s functions.
### Ffowcs Williams & Hawkings extended analogy using permeable control surface
In all applications where the quadrupole term is significant and must be
calculated, which preferentially occurs at high speeds, the computations
can become cumbersome because the sources are distributed inside a
volume whose boundaries are not precisely defined. In contrast, the
surface source terms are much simpler to compute and clearly delimited.
If CFD must be used in a limited domain surrounding the surfaces, and if
the computations are able to reproduce the acoustic near-field, a more
convenient way of solving the acoustic problem can be proposed by taking
the information not on the physical surfaces but on a delocalised
control surface that can be user-defined \[8\]. A double-layer potential
description can be applied to solve the Helmholtz equation using a
Bessel-function assumption. This generalised form of Ffowcs Williams &
Hawkings\' analogy is widely used in recent Computational Aero-Acoustics
(CAA).
*Figure: aeroacoustic computational simulation using the FWH analogy
with a permeable control surface around an aerodynamic profile.*
N.B.: the CFD domain inside the control surface must extend until the
generated turbulent structures are fully developed (using, for example,
a k-$\epsilon$ criterion).
The formal advantage of an analogy is to state a problem of
aeroacoustics as a usual problem of linear acoustics, by defining
equivalent sources that would produce, in a uniform medium, the same
sound as is heard at the observer\'s location from the flow-and-surface
configuration. The difficulty of the initial gas-dynamics equations is
transposed to the description of the source terms. The formal solution
is derived using the theoretical background of linear acoustics, but it
may be useless if the equivalent source terms are not determined
elsewhere. Using the Lighthill and FWH analogies allows engineers to
calculate, for example, aircraft engine noise at a lower computational
cost.

Intensive research at NASA and CERFACS facilities aims to develop more
efficient calculation schemes, providing improved design tools for
engine designers to gain confidence in noise prediction and to develop
aero-mechanical-acoustic design for future generations of products.

Making approximations means discarding phenomena that are expected to be
negligible and retaining the dominant features. This is just proposing
an interpretation.
## References
\[1\]Goldstein, M. E. (1976). Aeroacoustics. New York, McGraw-Hill
International Book Co., 1976. 305 p., 1.
\[2\]Tam, C. K. (1995). Computational aeroacoustics-Issues and methods.
AIAA journal, 33(10), 1788-1796.
\[3\]Wang, M., Freund, J. B., & Lele, S. K. (2006). Computational
prediction of flow-generated sound. Annu. Rev. Fluid Mech., 38, 483-512.
\[4\]Colonius, T., Lele, S. K., & Moin, P. (1993). Boundary conditions
for direct computation of aerodynamic sound generation. AIAA journal,
31(9), 1574-1582.
\[5\] Ffowcs Williams, J. E. (1969). Hydrodynamic noise. Annual Review
of Fluid Mechanics, 1(1), 197-222.

\[6\] Ffowcs Williams, J. E., & Hawkings, D. L. (1969). Sound generation
by turbulence and surfaces in arbitrary motion. Philosophical
Transactions of the Royal Society of London. Series A, Mathematical and
Physical Sciences, 264(1151), 321-342.
\[7\] Ianniello, S. (1999). Quadrupole noise predictions through the
Ffowcs Williams-Hawkings equation. AIAA journal, 37(9), 1048-1054.
\[8\]Di Francescantonio, P. (1997). A new boundary integral formulation
for the prediction of sound radiation. Journal of Sound and Vibration,
202(4), 491-509.
# Engineering Acoustics/Noise in Hydraulic Systems
## Noise in Hydraulic Systems
Hydraulic systems are the preferred means of power transmission in most
industrial and mobile equipment due to their advantages in power
density, compactness, flexibility, fast response and efficiency. The
field of hydraulics and pneumatics is also known as \'Fluid Power
Technology\'. Fluid power systems have a wide range of applications,
including industrial machinery, off-road vehicles, automotive systems
and aircraft. In spite of these advantages, there are also some
disadvantages. One of the main drawbacks of hydraulic fluid power
systems is the vibration and noise they generate. The health and safety
issues relating to noise, vibration and harshness (NVH) have been
recognized for many years, and legislation is now placing clear demands
on manufacturers to reduce noise levels \[1\]. Hence, a lot of attention
has been paid to reducing the noise of hydraulic fluid power systems by
both industrial and academic researchers. A good understanding of noise
generation, transmission and propagation is very important in order to
improve the NVH performance of hydraulic fluid power systems.
## Sound in fluids
The speed of sound in fluids can be determined using the following
relation.
$c = \sqrt {\frac{K}{\rho}}$ where *K* is the fluid bulk modulus,
$\rho$ is the fluid density, and *c* is the speed of sound.

Typical values of the bulk modulus range from **2×10^9^ to 2.5×10^9^
N/m^2^**. For a particular oil with a density of **889 kg/m^3^**,

speed of sound $c = \sqrt {\frac{2\times10^9}{889}}= 1499.9\ \text{m/s}$
## Source of Noise
The main source of noise in hydraulic systems is the pump, which
supplies the flow. Most of the pumps used are positive displacement
pumps. Of these, the axial piston swash plate type is most often
preferred due to its controllability and efficiency.

The noise generation in an axial piston pump can be classified under two
categories:

\(i\) fluidborne noise (FBN) and

\(ii\) structureborne noise (SBN)
### Fluidborne Noise (FBN)
Among positive displacement pumps, the highest levels of FBN are
generated by axial piston pumps and the lowest levels by screw pumps; in
between these lie the external gear pump and the vane pump \[1\]. The
discussion on this page is mainly focused on **axial piston swash plate
type pumps**. An axial piston pump has a fixed number of displacement
chambers arranged in a circular pattern, separated from each other by an
angular pitch equal to $\phi = \frac {360}{n}$, where n is the number of
displacement chambers. As each chamber discharges a specific volume of
fluid, the discharge at the pump outlet is the sum of the discharges
from the individual chambers. The discontinuity in flow between adjacent
chambers results in a kinematic flow ripple, whose amplitude can be
determined theoretically given the size of the pump and the number of
displacement chambers. The kinematic ripple is the main cause of the
fluidborne noise, but it is a theoretical value: the actual **flow
ripple** at the pump outlet is much larger because the **kinematic
ripple** is combined with a **compressibility component** due to the
fluid compressibility. These ripples (also referred to as flow
pulsations) generated at the pump are transmitted through the pipe or
flexible hose connected to the pump and travel to all parts of the
hydraulic circuit.
The pump is considered an ideal flow source. The pressure in the system
is decided by the resistance to the flow, otherwise known as the system
load. The flow pulsations result in pressure pulsations, which are
superimposed on the mean system pressure. Both the **flow and pressure
pulsations** easily travel to all parts of the circuit and affect the
performance of components such as control valves and actuators, making
them vibrate, sometimes even resonate. This vibration of system
components adds to the noise generated by the flow pulsations. The
transmission of FBN in the circuit is discussed under Transmission
below.

A typical axial piston pump with 9 pistons running at 1000 rpm can
produce a sound pressure level of more than 70 dB.
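The fundamental frequency of the flow ripple follows directly from the
chamber count and shaft speed; a one-line sketch for the pump quoted
above:

```python
# Fundamental flow-ripple frequency of a positive displacement pump:
# f = n * N / 60, with n displacement chambers at N rpm.
def ripple_frequency(n_chambers, rpm, harmonic=1):
    """Pumping frequency (and its harmonics) in Hz."""
    return harmonic * n_chambers * rpm / 60.0

# The 9-piston pump at 1000 rpm quoted above pulsates at 150 Hz,
# with harmonics at 300 Hz, 450 Hz, ...
print([ripple_frequency(9, 1000, h) for h in (1, 2, 3)])
```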
### Structureborne Noise (SBN)
In swash plate type pumps, the main sources of structureborne noise are
the fluctuating forces and moments on the swash plate. These fluctuating
forces arise as a result of the varying pressure inside the displacement
chamber. As the displacing elements move from the suction stroke to the
discharge stroke, the pressure varies accordingly from a few bars to a
few hundred bars. These pressure changes are reflected on the
displacement elements (in this case, pistons) as forces, and these
forces are exerted on the swash plate, causing it to vibrate. This
vibration of the swash plate is the main cause of **structureborne
noise**. There are other components in the system which also vibrate and
lead to structureborne noise, but the swash plate is the major
contributor.
![](Pump_noise.png "Pump_noise.png")

**Fig. 1 shows an exploded view of an axial piston pump. The flow
pulsations and the oscillating forces on the swash plate, which cause
FBN and SBN respectively, are shown for one revolution of the pump.**
## Transmission
### FBN
The transmission of FBN is a complex phenomenon. Over the past few
decades, a considerable amount of research has gone into the
mathematical modeling of pressure and flow transients in the circuit.
This involves the solution of wave equations, with the piping treated as
a distributed parameter system known as a transmission line \[1\],
\[3\].

Let us consider a simple pump-pipe-loading valve circuit, as shown in
Fig. 2. The pressure and flow ripple at any location in the pipe can be
described by the relations:
$\frac {}{} P = Ae^{-k x} + Be^{k x}$ \...(1)

$Q = \frac {1}{Z_{0}}(Ae^{-k x} - Be^{k x})$ \...(2)
where $\frac {}{} A$ and $\frac {}{} B$ are frequency-dependent complex
coefficients which are directly proportional to the pump (source) flow
ripple, but are also functions of the source impedance
$\frac {}{} Z_{s}$, the characteristic impedance of the pipe
$\frac {}{} Z_{0}$ and the termination impedance $\frac {}{} Z_{t}$.
These impedances, which usually vary as the system operating pressure
and flow rate change, can be determined experimentally.
**Fig. 2 Schematic of a pump connected to a hydraulic line**
For complex systems with several components, the pressure and flow
ripples are estimated using the transformation matrix approach. For
this, the system components can be treated as lumped impedances (a
throttle valve or an accumulator) or distributed impedances (a flexible
hose or a silencer). Various software packages are available today to
predict the pressure pulsations.
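A minimal Python sketch evaluating eqs. (1)-(2) along a pipe for one
harmonic; the coefficients *A*, *B*, *Z~0~* and the complex propagation
constant *k* below are assumed illustrative values, since in practice
they follow from the source and termination impedances and must be
measured or modeled:

```python
import numpy as np

A = 1.0e5 + 0.0j            # forward-wave coefficient, Pa (assumed)
B = 0.4e5 * np.exp(0.5j)    # backward-wave coefficient, Pa (assumed)
Z0 = 1.0e9 + 0.0j           # characteristic impedance of the pipe (assumed)
k = 0.01 + 1.0j             # complex propagation constant, 1/m (assumed)

x = np.linspace(0.0, 3.0, 7)                        # positions along the pipe, m
P = A * np.exp(-k * x) + B * np.exp(k * x)          # eq. (1): pressure ripple
Q = (A * np.exp(-k * x) - B * np.exp(k * x)) / Z0   # eq. (2): flow ripple
print(np.abs(P))   # standing-wave pattern of the pressure ripple magnitude
```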
### SBN
The transmission of SBN follows the classic source-path-noise model. The
vibrations of the swash plate, the main cause of SBN, is transferred to
the pump casing which encloses all the rotating group in the pump
including displacement chambers (also known as cylinder block), pistons
and the swash plate. The pump case, apart from vibrating itself,
transfers the vibration down to the mount on which the pump is mounted.
The mount then passes the vibrations down to the main mounted structure
or the vehicle. Thus the SBN is transferred from the swash plate to the
main strucuture or vehicle via pumpcasing and mount.
Some of the machine structures along the path of transmission are good
at transmitting this vibrational energy and may even resonate and
reinforce it. By converting only a fraction of 1% of the pump
structureborne noise into sound, a member in the transmission path could
radiate more ABN than the pump itself \[4\].
## Airborne noise (ABN)
Both FBN and SBN impart high fatigue loads on the system components
and make them vibrate. All of these vibrations are radiated as
**airborne noise** and can be heard by a human operator. The flow
and pressure pulsations can also cause system components such as control
valves to resonate, and the vibration of such a component again
radiates airborne noise.
## Noise reduction
The reduction of the noise radiated from the hydraulic system can be
approached in two ways.
\(i\) **Reduction at Source** - the reduction of noise at the
pump. A large amount of open literature is available on reduction
techniques, with some techniques focusing on reducing FBN at the source and
others focusing on SBN. Reduction of FBN and SBN at the source has a
large influence on the ABN that is radiated. Even though a lot of
progress has been made in reducing FBN and SBN separately, the
problem of noise in hydraulic systems is not fully solved, and much
remains to be done. The reason is that FBN and SBN are interrelated, in the
sense that an attempt to reduce the FBN at the pump tends to
affect the SBN characteristics. Currently, one of the main research directions in
pump noise reduction is a systematic approach to understanding the
coupling between FBN and SBN and targeting them simultaneously instead
of treating them as two separate sources. Such a unified approach
demands not only well trained researchers but also sophisticated
computer based mathematical models of the pump which can accurately
output the necessary results for optimization of the pump design. The
amplitude of fluid pulsations can also be reduced at the source with the
use of a hydraulic attenuator \[5\].
\(ii\) **Reduction at Component level** - focuses on the reduction
of noise from individual components such as hoses, control valves, pump mounts
and fixtures. This can be accomplished by a suitable design modification
of the component so that it radiates the least possible amount of noise. Optimization
using computer based models can be one of the ways.
## Hydraulic System noise
```{=html}
<center>
```
![](Noise.png "Noise.png")
```{=html}
</center>
```
```{=html}
<center>
```
**Fig.3 Domain of hydraulic system noise generation and transmission
(Figure recreated from \[1\])**
```{=html}
</center>
```
## References
1\. *Designing Quieter Hydraulic Systems - Some Recent Developments and
Contributions*, Kevin Edge, 1999, Fluid Power: Fourth JHPS International
Symposium.
2\. *Fundamentals of Acoustics*, L.E. Kinsler, A.R. Frey, A.B. Coppens,
J.V. Sanders. Fourth Edition. John Wiley & Sons Inc.
3\. *Reduction of Axial Piston Pump Pressure Ripple*, A.M. Harrison. PhD
thesis, University of Bath, 1997.
4\. *Noise Control of Hydraulic Machinery*, Stan Skaistis, 1988. Marcel
Dekker, Inc.
5\. *Hydraulic Power System Analysis*, A. Akers, M. Gassman, & R. Smith,
Taylor & Francis, New York, 2006.
6\. *Experimental studies of the vibro-acoustic characteristics of an
axial piston pump under run-up and steady-state operating conditions*,
Shaogan Ye et al., 2018, Measurement, 133.
7\. *Sound quality evaluation and prediction for the emitted noise of
axial piston pumps*, Junhui Zhang, Shiqi Xia, Shaogan Ye et al., 2018,
Applied Acoustics 145:27-40.
# Engineering Acoustics/Specific application-automobile muffler
**General information about automobile mufflers**
## Introduction
A muffler is a part of the exhaust system on an automobile that plays a
vital role. It needs to have modes that are located away from the
frequencies at which the engine operates, whether the engine is idling or
running at its maximum revolutions per second. A muffler that
affects an automobile in a negative way is one that causes noise or
discomfort while the engine is running. Inside a muffler, you\'ll
find a deceptively simple set of tubes with some holes in them. These
tubes and chambers are actually as finely tuned as a musical instrument.
They are designed to reflect the sound waves produced by the engine in
such a way that they partially cancel themselves out (cited from
www.howstuffworks.com).
It is very important to have a muffler on the automobile. The legal limit for
exhaust noise in the state of California is 95 dB(A) (CA. V.C. 27151).
Without a muffler the typical car exhaust noise would exceed 110 dB. A
conventional car muffler is capable of limiting noise to about 90 dB.
An active noise-canceling muffler enables cancellation of exhaust noise
over a wide range of frequencies.
## The Configuration of an automobile muffler
![](ProRacer.jpg "ProRacer.jpg")
## How does an automobile muffler function?
### General Concept
The main principle in designing an automobile muffler is to use a
low-pass filter. The muffler typically makes use of changes in
cross-sectional area, arranged as chambers, to filter or reduce the
sound waves produced by the engine.
### Low-Pass Filter
A low-pass filter is a circuit that passes low frequency signals but
stops high frequency signals. Once the low-pass filter is set by the
user at a specific cutoff frequency, all frequencies lower than that
will be passed through the filter, while higher frequencies will be
attenuated in amplitude. This circuit is made up of passive components
(resistors, capacitors and inductors) capable of accomplishing this
objective.
![](inductive_law_pass_filter.jpg "inductive_law_pass_filter.jpg")
The formula to be used is the cutoff frequency of the filter:
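For the first-order inductive (RL) low-pass filter shown above (assumed from the figure; the original does not reproduce the expression), the standard cutoff frequency and magnitude response are:

$f_c = \frac{R}{2\pi L}, \qquad \left|\frac{V_{out}}{V_{in}}\right| = \frac{1}{\sqrt{1 + (f/f_c)^2}}$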
### How the human ear perceives sound
When these pressure pulses reach your ear, the eardrum vibrates back and
forth. Your brain interprets this motion as sound. Two main
characteristics of the wave determine how we perceive the sound:
1\. the sound wave frequency;
2\. the air pressure wave amplitude.
It turns out that it is possible to add two or more sound waves together
and get less sound.
### Description of the muffler to cancel the noise
The key thing about sound waves is that the result at your ear is the
sum of all the sound waves hitting your ear at that time. If you are
listening to a band, even though you may hear several distinct sources
of sound, the pressure waves hitting your ear drum all add together, so
your ear drum only feels one pressure at any given moment. Now comes the
cool part: It is possible to produce a sound wave that is exactly the
opposite of another wave. This is the basis for those noise-canceling
headphones you may have seen. Take a look at the figure below. The wave
on top and the second wave are both pure tones. If the two waves are in
phase, they add up to a wave with the same frequency but twice the
amplitude. This is called constructive interference. But, if they are
exactly out of phase, they add up to zero. This is called destructive
interference. At the time when the first wave is at its maximum
pressure, the second wave is at its minimum. If both of these waves hit
your ear drum at the same time, you would not hear anything because the
two waves always add up to zero.
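In symbols: two pure tones of equal amplitude, one shifted by half a period, sum to zero at all times:

$p_1(t) = A\sin(\omega t), \qquad p_2(t) = A\sin(\omega t + \pi) = -A\sin(\omega t), \qquad p_1(t) + p_2(t) = 0$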
### Benefits of an Active Noise-Canceling Muffler
1\. By using an active muffler the exhaust noise can be easily tuned,
amplified, or nearly eliminated.
2\. The backpressure of a conventional muffler can be essentially
eliminated, thus increasing engine performance and efficiency.
3\. By increasing engine efficiency and performance, less fuel will be
used and the emissions will be reduced.
## Absorptive muffler
![](Open-twister.gif "Open-twister.gif")
### Lined ducts
This can be regarded as the simplest form of absorptive muffler:
absorptive material is attached to the bare walls of the duct (in a car,
that is the exhaust pipe). The attenuation performance improves with the
thickness of the absorptive material.
The attenuation curve is shaped like a skewed bell. Increasing the
thickness of the lining lowers the frequency of maximum attenuation. For
higher frequencies, thinner absorbent layers are effective, but a large
gap allows noise to pass directly along the duct. Thin layers and narrow
passages are therefore more effective at high frequencies. For good
absorption over the widest frequency range, thick absorbent layers and
narrow passages are best.
### Parallel and block-line-of-sight baffles
The duct is divided into several channels, or the flow channels are turned
so that there is no direct line of sight through the baffles. The channels
are frequently lined with absorptive material. Attenuation improves with
the thickness of the absorptive material and the length of the baffle.
Lined bends can be used to provide greater attenuation; they attenuate
best at high frequency. At low frequency, attenuation can be increased by
adding thicker lining.
### Plenum chambers
These are relatively large volume chambers, usually fabricated from sheet
metal, which interconnect two ducts. The interior of the chamber is
lined with absorbing material to attenuate noise in the duct. Protective
facing material may also be necessary if the temperature and velocity
conditions of the gas stream are too severe.
The performance of a plenum chamber can be improved by:
1\. increasing the thickness of the absorbing lining,
2\. blocking the direct line of sight from the chamber inlet to the outlet,
3\. increasing the cross-sectional area of the chamber.
# Engineering Acoustics/Flow-induced oscillations of a Helmholtz resonator and applications
## Introduction
The importance of flow excited acoustic resonance lies in the large
number of applications in which it occurs. Sound production in organ
pipes, compressors, transonic wind tunnels, and open sunroofs are only a
few examples of the many applications in which flow excited resonance of
Helmholtz resonators can be found.\[4\] An instability of the fluid
motion coupled with an acoustic resonance of the cavity produce large
pressure fluctuations that are felt as increased sound pressure levels.
Passengers of road vehicles with open sunroofs often experience
discomfort, fatigue, and dizziness from self-sustained oscillations
inside the car cabin. This phenomenon is caused by the coupling of
acoustic and hydrodynamic flow inside a cavity which creates strong
pressure oscillations in the passenger compartment in the 10 to 50 Hz
frequency range. Some effects experienced by vehicles with open sunroofs
when buffeting include: dizziness, temporary hearing reduction,
discomfort, driver fatigue, and in extreme cases nausea. The importance
of reducing interior noise levels inside the car cabin lies primarily
in reducing driver fatigue and improving sound transmission from
entertainment and communication devices. This Wikibook page aims to
theoretically and graphically explain the mechanisms involved in the
flow-excited acoustic resonance of Helmholtz resonators. The interaction
between fluid motion and acoustic resonance will be explained to provide
a thorough explanation of the behavior of self-oscillatory Helmholtz
resonator systems. As an application example, a description of the
mechanisms involved in sunroof buffeting phenomena will be developed at
the end of the page.
# Feedback loop analysis
As mentioned before, the self-sustained oscillations of a Helmholtz
resonator are in many cases a continuous interaction of hydrodynamic and
acoustic mechanisms. In the frequency domain, the flow excitation and
the acoustic behavior can be represented as transfer functions. The flow
can be decomposed into two volume velocities:
$q_r$: flow associated with the acoustic response of the cavity
$q_o$: flow associated with the excitation
Figure 1 shows the feedback loop of these two volume velocities.
```{=html}
<center>
```
***Figure 1***
```{=html}
</center>
```
# Acoustical characteristics of the resonator
## Lumped parameter model
The lumped parameter model of a Helmholtz resonator consists of a
rigid-walled volume open to the environment through a small opening at
one end. The dimensions of the resonator in this model are much less
than the acoustic wavelength, allowing us to model the system as a
lumped system. In what follows, $r_e$ denotes the equivalent radius of
the orifice.
Figure 2 shows a sketch of a Helmholtz resonator on the left, the
mechanical analog on the middle section, and the electric-circuit analog
on the right hand side. As shown in the Helmholtz resonator drawing, the
air mass flowing through an inflow of volume velocity includes the mass
inside the neck (Mo) and an end-correction mass (Mend). Viscous losses
at the edges of the neck length are included as well as the radiation
resistance of the tube. The electric-circuit analog shows the resonator
modeled as a forced harmonic oscillator. \[1\] \[2\]\[3\]
```{=html}
<center>
```
***Figure 2***
```{=html}
</center>
```
V: cavity volume
$\rho$: ambient density
c: speed of sound
S: cross-section area of orifice
K: stiffness
$M_a$: acoustic mass
$C_a$: acoustic compliance
The equivalent stiffness K is related to the potential energy of the
flow compressed inside the cavity. For a rigid wall cavity it is
approximately:
```{=html}
<center>
```
$K = \left(\frac{\rho c^2}{V}\right)S^2$
```{=html}
</center>
```
The equation that describes the Helmholtz resonator is the following:
```{=html}
<center>
```
$S \hat{P}_e =\frac{\hat{q}_e}{j\omega S}(-\omega ^2 M + j\omega R + K)$
```{=html}
</center>
```
$\hat{P}_e$: excitation pressure
M: total mass (mass inside neck Mo plus end correction, Mend)
R: total resistance (radiation loss plus viscous loss)
From the electrical-circuit we know the following:
```{=html}
<center>
```
$M_a = \frac{\rho L'}{S}$
```{=html}
</center>
```
```{=html}
<center>
```
$C_a = \frac{V}{\rho c^2}$
```{=html}
</center>
```
```{=html}
<center>
```
$L' = L + 1.7\,r_e$
```{=html}
</center>
```
The main cavity resonance parameters are the resonance frequency and the
quality factor, which can be estimated using the parameters explained above
(assuming free-field radiation, no viscous losses or leaks, and
negligible wall compliance effects):
```{=html}
<center>
```
$\omega_r^2 = \frac{1}{M_a C_a}$
```{=html}
</center>
```
```{=html}
<center>
```
$f_r = \frac{c}{2 \pi} \sqrt{\frac{S}{L' V}}$
```{=html}
</center>
```
The sharpness of the resonance peak is measured by the quality factor Q
of the Helmholtz resonator as follows:
```{=html}
<center>
```
$Q = 2 \pi \sqrt{V \left(\frac{L'} {S}\right)^3}$
```{=html}
</center>
```
$f_r$: resonance frequency in Hz
$\omega_r$: resonance frequency in radians per second
L: length of neck
L\': corrected length of neck
From the equations above, the following can be deduced:
-The greater the volume of the resonator, the lower the resonance
frequencies.
-If the length of the neck is increased, the resonance frequency
decreases.
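As a numeric illustration of the resonance-frequency formula above, the sketch below uses cavity dimensions of the order of a car cabin with an open sunroof; all values are hypothetical, chosen only to show the calculation.

```python
import math

c = 343.0        # speed of sound [m/s]
V = 3.0          # cavity (cabin) volume [m^3] -- hypothetical
S = 0.35         # sunroof opening area [m^2] -- hypothetical
L = 0.05         # neck (roof) thickness [m] -- hypothetical

r_e = math.sqrt(S / math.pi)      # equivalent radius of the opening
L_eff = L + 1.7 * r_e             # end-corrected neck length, L' = L + 1.7 re

f_r = (c / (2 * math.pi)) * math.sqrt(S / (L_eff * V))
print(f"Helmholtz resonance: {f_r:.1f} Hz")
```

With these placeholder dimensions the result falls in the 10 to 50 Hz range quoted earlier for sunroof buffeting.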
## Production of self-sustained oscillations
The acoustic field interacts with the unstable hydrodynamic flow above
the open section of the cavity, where the grazing flow is continuous.
The flow in this section separates from the wall at a point where the
acoustic and hydrodynamic flows are strongly coupled. \[5\]
The separation of the boundary layer at the leading edge of the cavity
(front part of opening from incoming flow) produces strong vortices in
the main stream. As observed in Figure 3, a shear layer crosses the
cavity orifice and vortices start to form due to instabilities in the
layer at the leading edge.
```{=html}
<center>
```
***Figure 3***
```{=html}
</center>
```
From Figure 3, L is the length of the inner cavity region, d denotes the
diameter or streamwise length of the cavity opening, D represents the
height of the cavity, and $\delta$ describes the gradient length in the
grazing velocity profile (boundary layer thickness).
The velocity in this region is characterized to be unsteady and the
perturbations in this region will lead to self-sustained oscillations
inside the cavity. Vortices will continually form in the opening region
due to the instability of the shear layer at the leading edge of the
opening.
# Applications to Sunroof Buffeting
## How are vortices formed during buffeting?
In order to understand the generation and convection of vortices from
the shear layer along the sunroof opening, the animation below has been
developed. At a certain range of flow velocities, self-sustained
oscillations inside the open cavity (sunroof) will be predominant.
During this period, vortices are shed at the leading edge of
the opening and are convected along the length of the cavity
opening as the pressure inside the cabin decreases and increases. Flow
visualization experimentation is one method that helps obtain a
qualitative understanding of vortex formation and convection.
The animation below shows, in the middle, a side view of a car cabin
with the sunroof open. As the air starts to flow at a certain mean
velocity Uo, air mass will enter and leave the cabin as the pressure
decreases and increases again. At the right hand side of the animation,
a legend shows a range of colors to determine the pressure magnitude
inside the car cabin. At the top of the animation, a plot of circulation
and acoustic cavity pressure versus time for one period of oscillation
is shown. The symbol x moving along the acoustic cavity pressure plot is
synchronized with pressure fluctuations inside the car cabin and with
the legend on the right. For example, whenever the x symbol is located
at the point where t=0 (when the acoustic cavity pressure is minimum)
the color of the car cabin will match that of the minimum pressure in
the legend (blue).
```{=html}
<center>
```
![](theplot.gif "theplot.gif")
```{=html}
</center>
```
The perturbations in the shear layer propagate with a velocity of the
order of $\tfrac{1}{2}U_o$, half the mean inflow velocity \[5\]. After the
pressure inside the cavity reaches a minimum (blue color) the air mass
position in the neck of the cavity reaches its maximum outward position.
At this point, a vortex is shed at the leading edge of the sunroof
opening (front part of sunroof in the direction of inflow velocity). As
the pressure inside the cavity increases (progressively to red color)
and the air mass at the cavity entrance is moved inwards, the vortex is
displaced into the neck of the cavity. The maximum downward displacement
of the vortex is achieved when the pressure inside the cabin is also
maximum and the air mass in the neck of the Helmholtz resonator (sunroof
opening) reaches its maximum downward displacement. For the rest of the
remaining half cycle, the pressure cavity falls and the air below the
neck of the resonator is moved upwards. The vortex continues displacing
towards the downstream edge of the sunroof where it is convected upwards
and outside the neck of the resonator. At this point the air below the
neck reaches its maximum upward displacement \[4\], and the process
starts once again.
## How to identify buffeting
Flow induced tests performed over a range of flow velocities are helpful
to determine the change in sound pressure levels (SPL) inside the car
cabin as inflow velocity is increased. The following animation shows
typical auto spectra results from a car cabin with the sunroof open at
various inflow velocities. At the top right hand corner of the
animation, it is possible to see the inflow velocity and resonance
frequency corresponding to the plot shown at that instant of time.
```{=html}
<center>
```
![](curve.gif "curve.gif")
```{=html}
</center>
```
It is observed in the animation that the SPL increases gradually with
increasing inflow velocity. Initially, the levels are below 80 dB and no
major peaks are observed. As velocity is increased, the SPL increases
throughout the frequency range until a definite peak is observed around
100 Hz with an amplitude of 120 dB. This is the resonance frequency of the
cavity at which buffeting occurs. As observed in the animation, as
velocity is further increased, the peak decreases and disappears. In
this way, plots of sound pressure level versus frequency are helpful in
identifying increased sound pressure levels inside the car cabin in order
to find ways to minimize them. Some of the methods used to minimize the
increased SPL caused by buffeting include: notched deflectors,
mass injection, and spoilers.
# Useful Websites
This link: 1 takes you to the website of EXA
Corporation, a developer of PowerFlow for Computational Fluid Dynamics
(CFD) analysis.
This link:
2 is a
small news article about the current use of (CFD) software to model
sunroof buffeting.
This link:
3
is a small industry brochure that shows the current use of CFD for
sunroof buffeting.
# References
\[1\] *Acoustics: An Introduction to its Physical Principles and
Applications*; Pierce, Allan D., Acoustical Society of America, 1989.
\[2\] *Prediction and Control of the Interior Pressure Fluctuations in a
Flow-excited Helmholtz resonator*; Mongeau, Luc, and Hyungseok Kook.,
Ray W. Herrick Laboratories, Purdue University, 1997.
\[3\] *Influence of leakage on the flow-induced response of vehicles with
open sunroofs*; Mongeau, Luc, and Jin-Seok Hong., Ray W. Herrick
Laboratories, Purdue University.
\[4\] *Fluid dynamics of a flow excited resonance, part I: Experiment*;
P.A. Nelson, Halliwell and Doak.; 1991.
\[5\] *An Introduction to Acoustics*; Rienstra, S.W., A. Hirschberg.,
Report IWDE 99-02, Eindhoven University of Technology, 1999.
# Engineering Acoustics/Car Mufflers
## Introduction
A car muffler is a component of the exhaust system of a car. The exhaust
system has mainly 3 functions:
1\) Carrying the hot and noxious gas from the engine away from the
vehicle
2\) Reducing exhaust emissions
3\) Attenuating the noise output from the engine
The last function is the role of the car muffler. It is
necessary because the gas coming from the combustion in the pistons of
the engine would generate an extremely loud noise if it were sent
directly into the ambient surroundings through the exhaust valves. There
are mainly 2 techniques used to dampen the noise: absorption and
reflection. Each technique has its advantages and drawbacks.
*(Image: muffler type \"Cherry bomb\")*
## The absorber muffler
The muffler is composed of a tube covered by sound absorbing material.
The tube is perforated so that part of the sound wave passes through
the perforations to the absorbing material. The absorbing material is
usually made of fiberglass or steel wool. The damping material is
protected from the surroundings by a supplementary coat made of a bent
metal sheet.
The advantages of this method are a low back pressure and a relatively
simple design. The drawback is low sound
damping compared to the other techniques, especially at low frequency.
Mufflers using the absorption technique are usually installed on sports
vehicles to increase the performance of the engine because of their low
back pressure. A trick to improve their
muffling ability consists of lining up several \"straight\" mufflers.
## The reflector muffler
Principle: sound wave reflection is used to create a maximum amount of
destructive interference.
*(Image: destructive interference)*
### Definition of destructive interferences
Let\'s consider the noise a person hears when a car drives past.
This sound physically corresponds to the pressure variation of the
air, which makes the eardrum vibrate. The curve A1 of graph 1
could represent this sound: the pressure amplitude as a function of
time at a certain fixed place. If another sound wave A2 is produced at
the same time, the pressures of the two waves add. If the amplitude
of A1 is exactly the opposite of the amplitude of A2, then the sum will be
zero, which corresponds physically to the atmospheric pressure. The
listener would thus hear nothing although there are two radiating sound
sources. This is called destructive interference.
*(Image: wave reflection)*
### Definition of the reflection
Sound is a travelling wave, i.e. its position changes as a function of
time. As long as the wave travels in the same medium, there is no
change of speed or amplitude. When the wave reaches a boundary between
two media which have different impedances, the speed and the pressure
amplitude change (and so does the angle, if the wave does not propagate
perpendicularly to the boundary). Figure 1 shows two media, A and B,
and the three waves: incident, transmitted and reflected.
### Example
If plane sound waves propagate along a tube and the cross section of
the tube changes at a point x, the impedance of the tube will change. A
part of the incident wave will thus be transmitted into the part of the
tube with the new section value, and the other part of the incident wave
will be reflected.
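Quantitatively, for plane waves at a lossless junction where the cross section changes from $S_1$ to $S_2$ (with tube impedances $Z_i = \rho c / S_i$, a standard result assumed here), the pressure reflection and transmission coefficients are:

$R = \frac{Z_2 - Z_1}{Z_2 + Z_1} = \frac{S_1 - S_2}{S_1 + S_2}, \qquad T = 1 + R = \frac{2 S_1}{S_1 + S_2}$

A large area change thus reflects most of the incident wave, which is exactly what the chambers of a reflection muffler exploit.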
Mufflers using the reflection technique are most commonly used because
they damp the noise much better than the absorber muffler. However, they
often create a higher back pressure, which can
lower the performance of the engine at higher rpm. While some engines
develop maximum horsepower at lower rpm (say, under 2800 rpm), most
do not and would thus yield a greater net horsepower (at the higher
rpm) with no muffler at all.
*(Image: schema of a typical car muffler)*
The image above represents a typical car muffler architecture. It
is composed of 3 tubes, with 3 areas separated by plates; the parts
of the tubes located in the middle area are perforated. A small quantity
of pressure \"escapes\" from the tubes through the perforations, and the
waves cancel one another.
Some high-end mufflers use the reflection principle together with a
cavity (shown in red below) known as a Helmholtz
resonator
to further dampen the noise.
![](Muffler_resonator.png "Muffler_resonator.png"){width="300"}
## Back pressure
Car engines are 4-stroke cycle engines. Of these 4 strokes, only one
produces power: the stroke in which the explosion occurs and pushes the
pistons back. The other 3 strokes are a necessary evil that do not produce
energy; on the contrary, they consume it. During the exhaust stroke,
the remaining gas from the explosion is expelled from the cylinder. The
higher the pressure behind the exhaust valves (i.e. the back pressure),
the higher the effort necessary to expel the gas out of the cylinder. So, a
low back pressure is preferable in order to have a higher engine
horsepower.
## Muffler Modeling by Transfer Matrix Method
This method is easy to use on a computer to obtain theoretical values for
the transmission loss of a muffler. The transmission loss gives a value
in dB that corresponds to the ability of the muffler to dampen the noise.
### Example
*(Image: muffler working with wave reflections)*
P stands for pressure \[Pa\] and U stands for volumetric flow rate \[m³/s\]
$\begin{bmatrix} P1 \\ U1 \end{bmatrix}$=$\begin{bmatrix} T1 \end{bmatrix}
\begin{bmatrix} P2 \\ U2 \end{bmatrix}$ and
$\begin{bmatrix} P2 \\ U2 \end{bmatrix}$=$\begin{bmatrix} T2 \end{bmatrix}
\begin{bmatrix} P3 \\ U3 \end{bmatrix}$ and
$\begin{bmatrix} P3 \\ U3 \end{bmatrix}$=$\begin{bmatrix} T3 \end{bmatrix}
\begin{bmatrix} P4 \\ U4 \end{bmatrix}$
So, finally: $\begin{bmatrix} P1 \\ U1 \end{bmatrix}$=
$\begin{bmatrix} T1 \end{bmatrix}
\begin{bmatrix} T2 \end{bmatrix}
\begin{bmatrix} T3 \end{bmatrix}
\begin{bmatrix} P4 \\ U4 \end{bmatrix}$
with
$\begin{bmatrix} T_i \end{bmatrix}$=$\begin{bmatrix} \cos (k L_i) & j \sin (k L_i) \frac{\rho c}{S_i} \\ j \sin (k L_i) \frac{S_i}{\rho c} & \cos (k L_i) \end{bmatrix}$
$S_i$ stands for the cross-sectional area
k is the wavenumber ($k = \omega / c$)
$\ \rho$ is the medium density
c is the speed of sound in the medium
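A short computational sketch of this chained-matrix calculation for three pipe elements follows; the lengths and cross sections are hypothetical, and the four-pole transmission-loss formula $TL = 20\log_{10}\left|\tfrac{1}{2}(T_{11} + T_{12}/Z + T_{21}Z + T_{22})\right|$ with $Z = \rho c / S$ (valid for equal inlet and outlet sections) is a textbook result assumed here, not stated in the original.

```python
import numpy as np

rho, c = 1.2, 343.0                    # air density [kg/m^3], speed of sound [m/s]
L = [0.1, 0.3, 0.1]                    # element lengths [m] -- hypothetical
S = [2e-3, 2e-2, 2e-3]                 # element cross sections [m^2] -- hypothetical

def element_matrix(k, Li, Si):
    """Transfer matrix of a straight pipe element of length Li and section Si."""
    Zi = rho * c / Si
    return np.array([[np.cos(k * Li), 1j * Zi * np.sin(k * Li)],
                     [1j * np.sin(k * Li) / Zi, np.cos(k * Li)]])

f = np.linspace(50.0, 3000.0, 500)     # typical engine noise frequency range [Hz]
TL = []
for fi in f:
    k = 2 * np.pi * fi / c             # wavenumber at this frequency
    T = np.eye(2, dtype=complex)
    for Li, Si in zip(L, S):           # chain the element matrices: T = T1 T2 T3
        T = T @ element_matrix(k, Li, Si)
    Z = rho * c / S[0]                 # inlet/outlet impedance (equal sections)
    TL.append(20 * np.log10(abs(T[0, 0] + T[0, 1] / Z + T[1, 0] * Z + T[1, 1]) / 2))
```

Plotting TL against f reproduces the qualitative behavior described in the comments below: peaks of high attenuation separated by resonance dips where the transmission loss falls to zero.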
### Results
*(Image: transmission loss of the muffler versus frequency)*
Matlab code of the graph above:
<https://commons.wikimedia.org/wiki/File:Transmission_loss.png#Source_code>
### Comments
The higher the value of the transmission loss, the better the
muffler.
The transmission loss depends on the frequency. The sound frequency of a
car engine is approximately between 50 and 3000 Hz. At resonance
frequencies, the transmission loss is zero; these frequencies correspond
to the dips on the graph.
The transmission loss is independent of the applied pressure or velocity
at the input.
The temperature (about 600 degrees Fahrenheit) has an impact on the air
properties: the speed of sound is higher and the mass density is lower.
The elementary transfer matrix depends on the element being
modelled. For instance, the transfer matrix of a Helmholtz resonator is
$\begin{bmatrix} 1 & 0 \\ \frac{1}{Z} & 1 \end{bmatrix}$ with
$\ Z = j \rho ( \frac{\omega L_i}{S_i} - \frac{c^2}{\omega V})$
## Links
More information about the transfer matrix method:
www.scielo.br/pdf/jbsmse/v27n2/25381.pdf
General information about filters: Filter Design &
Implementation
General information about car mufflers:
<http://auto.howstuffworks.com/muffler.htm>
Example of car exhaust manufacturer
<http://www.performancepeddler.com/manufacturer.asp?CatName=Magnaflow>
# Engineering Acoustics/Sound Absorbing Structures and Materials
## Introduction
Noise can be defined as unwanted sound. There are many cases and
applications in which reducing the noise level is of great importance. Loss of
hearing is only one of the effects of continuous exposure to excessive
noise levels. Noise can interfere with sleep and speech, and cause
discomfort and other non-auditory effects. Moreover, high level noise
and vibration lead to structural failures as well as a reduction in the
life span of much industrial equipment. As an example, in control valves the
vibration caused by flow instability occasionally disrupts the feedback
to the control system, resulting in extreme oscillations. The
importance of the noise issue can be appreciated by looking at the
regulations that have been passed by governments to restrict noise
production in society. Industrial machinery, air/surface transportation
and construction activities are assumed to be the main contributors to
noise production, or so-called \"noise pollution\".
## Noise Control Mechanisms
- **Active Noise control**
Modifying and canceling a sound field by electro-acoustical approaches is
called active noise control. There are two methods for active control:
first, utilizing actuators as an acoustic source to produce
completely out-of-phase signals to eliminate the disturbances; second,
using flexible and vibro-elastic materials to radiate a sound
field interfering with the disturbances and minimizing the overall
intensity. The latter method is called active structural acoustic
control (ASAC).
- **Passive Noise Control**
Passive noise control refers to those methods that aim to suppress the
sound by modifying the environment close to the source. Since no input
power is required in such methods, passive noise control is often
cheaper than active control; however, its performance is limited to mid
and high frequencies. Active control works well for low frequencies;
hence, a combination of the two methods may be utilized for broadband
noise reduction.
```{=html}
<center>
```
![](Absorbing_Mechanisms.jpg "Absorbing_Mechanisms.jpg")
Figure 1: Noise Control Mechanisms
```{=html}
</center>
```
## Sound Absorption
Sound waves striking an arbitrary surface are either reflected,
transmitted or absorbed; the amount of energy going into reflection,
transmission or absorption depends on the acoustic properties of the
surface. The reflected sound may be almost completely redirected by
large flat surfaces or scattered by a diffusing surface. When a
considerable amount of the reflected sound is spatially and temporally
scattered, this is called a diffuse reflection, and the surface
involved is often termed a diffuser. The absorbed sound may either be
transmitted or dissipated. A simple schematic of surface-wave
interactions is shown in figure 2.
```{=html}
<center>
```
![](Sound_treatment.jpg "Sound_treatment.jpg")
Figure 2: surface-sound interaction - absorption (left), reflection
(middle) and diffusion (right)
```{=html}
</center>
```
Sound energy is dissipated by the simultaneous action of viscous and
thermal mechanisms. Sound absorbers are used to dissipate sound energy
and to minimize its reflection.[^1] The absorption coefficient $\alpha$
is a common quantity used for measuring the sound absorption of a
material and is known to be a function of the frequency of the
incident wave. It is defined as the ratio of energy absorbed by a
material to the energy incident upon its surface.
### Sound Absorption Coefficient
The absorption coefficient can be mathematically presented as follows:
$\alpha=1-\frac{I_R}{I_I}$
*where α, $I_R$, and $I_I$ are the sound absorption coefficient,
one-sided intensity of the reflected sound and the one-sided intensity
of the incident sound, respectively.*
From the above equation, it can be observed that the absorption
coefficient of materials varies from 0 to 1. There are several standard
methods to measure the sound absorption coefficient. In one of the common
approaches, a plane-wave impedance tube equipped with two
microphones is utilized. The experimental setup and dimensions are
according to ASTM E1050/ISO 10534-2.[^2] The method works by
evaluating the transfer function $\hat{h}(f)$ between two microphones
spaced ***s*** apart, at a distance ***l*** from the sample, to obtain the
absorption coefficient using the following equations:
$\hat{h}=\frac{\hat{p_1}}{\hat{p_2}}$
$\hat{r}=\frac{\hat{h}-e^{-jks}}{e^{jks}+\hat{h}}$
$\alpha=1-|\hat{r}|^2$
*where $\hat{p_1},\hat{p_2}$ are the complex pressure amplitudes measured by
Mic. 1 and Mic. 2 respectively, **k** is the wave number, **s** is the
microphone spacing and **$\alpha$** is the absorption coefficient.*
According to the standard technique,[^3] the frequency range is limited by
the microphone spacing as well as the tube diameter. It is also recommended
that **$0.05\frac{c}{s} < f < 0.45\frac{c}{s}$** to guarantee plane wave
propagation. The coefficient of commercial absorbing materials is
specified in terms of a noise reduction coefficient (NRC), which refers
to the average of the absorption coefficients at **250 Hz, 500 Hz, 1,000 Hz,
and 2,000 Hz**. Average values for some acoustic insulating materials
used in buildings are tabulated in table 1. Based on their
construction and material structure, sound absorbers are categorized as
**non-porous** and **porous** absorbers.
```{=html}
<center>
```
![](Imp_tube.jpg "Imp_tube.jpg")
Figure 3: Two-microphone method to obtain sound absorption coefficient
```{=html}
</center>
```
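As a sketch of the data reduction in this method, the equations above can be applied directly to a pair of complex pressure amplitudes; the spacing, frequency and pressure values below are hypothetical placeholders, not measurements.

```python
import numpy as np

c = 343.0      # speed of sound [m/s]
s = 0.05       # microphone spacing [m] -- assumed
f = 1000.0     # frequency of interest [Hz] -- assumed
k = 2 * np.pi * f / c          # wave number

# Hypothetical complex pressure amplitudes at Mic. 1 and Mic. 2 (placeholders)
p1 = 1.00 + 0.00j
p2 = 0.80 - 0.35j

h = p1 / p2                                               # transfer function
r = (h - np.exp(-1j * k * s)) / (np.exp(1j * k * s) + h)  # reflection coefficient
alpha = 1 - abs(r) ** 2                                   # absorption coefficient
print(f"alpha = {alpha:.3f}")
```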
Material Sound absorption coefficient
------------------------------- ------------------------------
6 mm cork sheet 0.1-0.2
6 mm porous rubber sheet 0.1-0.2
12 mm fiberboard on battens 0.3-0.4
50 mm slag wool or glass silk 0.8-0.9
Hardwood 0.3
100 mm mineral wool 0.65
: **Table 1 - Sound absorbing coefficient of common absorbents** [^4]
### Non-Porous Absorbers ( Absorbing Resonators )
There are two types of non-porous absorbers that are common in
industrial applications: **panel (membrane) resonators** and **Helmholtz
resonators**. Panel absorbers are light, thin and non-porous sheets or
membranes that are tuned to absorb sound waves over a specific frequency
range. The structural resistance of the panel to rapid deformation leads
to sound absorption. Panel absorbers are defined by their geometry and
structural vibration properties. Helmholtz resonators or cavity
absorbers are perforated structures containing very small pores; one
example is the acoustic liners used inside aircraft engine
frames to suppress the noise emission from the compression and combustion
stages. Similar structures are applied in fans and ventilators used in
ventilation and air-conditioning systems. The size of the opening, the
length of the neck, and the volume of the cavity govern the resonant
frequency of the resonator and hence the absorption performance.
### Porous Absorbers
Porous sound absorbers correspond to materials in which sound propagation
takes place in a network of interconnected pores, such that viscous and
thermal interactions cause acoustic energy to be dissipated and converted
to heat. Absorptive treatments such as mineral wool, glass fiber or
high-porosity foams reduce reflected sound. Porous absorbers are in
fact thermal materials and usually not effective sound barriers. The
need for significant thickness compared to the operating sound wavelength
makes porous absorbers dramatically inefficient and impractical at low
frequencies.
```{=html}
<center>
```
![](Typic_absorption.jpg "Typic_absorption.jpg")
Figure 4: Typical variation of sound absorbing coefficient for different
absorbers
```{=html}
</center>
```
## Physical Characteristic Properties of Porous Absorbers
The propagation of sound in a porous material is a phenomenon that is
governed by the physical characteristics of the porous medium, namely
porosity ($\phi$), tortuosity (*q*), flow resistivity ($\sigma$), viscous
characteristic length ($\Lambda$) and thermal characteristic
length ($\Lambda'$).
- **Porosity**
Defined as the ratio of the interconnected void volume (air volume in open
pores) to the total volume. Most commercial absorbers have high
porosity (greater than 0.95). The higher the porosity, the easier the
interaction between the solid and fluid phases, which leads to more sound
attenuation.
$\phi = \frac{V_0}{V_T}$
$V_0$ = volume of the void space.
$V_T$ = total volume of the porous material.
- **Tortuosity** [^5]
This physical characteristic corresponds to the "non-straightness" of
the pore network inside the porous material. It shows how well the
porous material prevents direct flow through the porous medium. The more
complex the path, the more time a wave is in contact with the absorbent
and hence the more energy dissipation and absorbing capability. If
the porous absorber is not conductive, one method of measurement is to
saturate the absorbent with an electrically conducting fluid, measure
the electrical resistivity of the saturated sample, $R_s$, and compare it
to the resistivity of the fluid itself, $R_f$; the tortuosity can then be
expressed as follows:
$q = \phi\frac{R_s}{R_f}$
- **Flow resistivity**
The pressure drop required to drive a unit flow through the material can
be related to the viscous losses of the propagating sound waves inside
the porous absorber and is denoted the flow resistivity. For a wide range
of porous materials, the flow resistivity is the major factor in sound
absorption. The unit of flow resistivity is **$N.s/m^4$ or $Rayls/m$**,
and it is defined as the ratio of the static pressure drop **$\Delta P$**
to a volume flow **(*U*)** for a small sample thickness **(*d*)**.
$\sigma = \frac{\Delta P}{Ud}$
- **Characteristic lengths** [^6]
Two more important microstructural properties are the characteristic
viscous length $\Lambda$ and the characteristic thermal length $\Lambda'$,
which govern viscous and thermal dissipation. The former is related
to the smaller pores and the latter to the larger pores of the porous
aggregate. The thermal length $\Lambda '$ is twice the ratio of volume
to surface area in the connected pores. This is geometric and can be
measured directly. The viscous length, $\Lambda$, is nearly the same,
but each integral is weighted by the square of the fluid velocity
***v*** inside the pores and hence cannot be measured directly.
$\Lambda ' = 2\frac{\int dV}{\int dS}$
$\Lambda = 2\frac{\int v^2_{fluid}dV}{\int v^2_{fluid} dS}$
# Acoustic modeling of Porous Absorbents
## Wave equation in rigid porous absorbents
the plane wave equation derived from the linearized equations of
conservation of mass and momentum should be modified to account for the
effects of porosity, tortuosity and flow resistance. The modified wave
equation[^7] that governs the sound propagation in compressible-gas
filled in rigid porous materials is given by:
$\frac{\partial^2 P}{\partial x^2} -(\frac{q\rho_0}{k_{eff} })*\frac{\partial^2 P}{\partial t^2}-(\frac{\sigma\phi}{k_{eff}})*\frac{\partial p}{\partial t} = 0$
where, p = sound pressure within the pores of material
$\rho_0$ = density of compressible gas
$k_{eff}$ = effective bulk modulus of the gas
*q* = tortuosity
$\phi$ = porosity
$\sigma$ = flow resistivity
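Substituting a time-harmonic plane wave $p = \hat{P}e^{j(\omega t - k'x)}$ into this equation makes the complex wavenumber explicit (a standard intermediate step, added here for completeness):

$-k'^2 + \frac{q\rho_0}{k_{eff}}\,\omega^2 - j\,\frac{\sigma\phi}{k_{eff}}\,\omega = 0 \quad\Rightarrow\quad k' = \omega\sqrt{\frac{q\rho_0}{k_{eff}}\left(1 - j\,\frac{\sigma\phi}{q\rho_0\,\omega}\right)}$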
The acoustical behavior of an absorptive porous layer can also be
investigated from its basic acoustic quantities: the complex wave number
and the characteristic impedance. These quantities are obtained as part of
the solution of the modified plane wave equation and can be used to
determine the absorption coefficient and surface impedance. The most
practical and common values for the complex wave number and the
characteristic impedance are based on semi-empirical methods and
correlated using regression analysis. One important correlation was
suggested by Delany and Bazley: [^8]
$k' = \alpha+j\beta = \frac{\omega}{c} [1+0.0978(\frac{\rho_0 f}{\sigma})^{-0.700}-j0.189(\frac{\rho_0 f}{\sigma})^{-0.595}]$
$z' = R+jX = \rho_0 c[1+0.0571(\frac{\rho_0 f}{\sigma})^{-0.754}-j0.087(\frac{\rho_0 f}{\sigma})^{-0.732}]$
where *f* = frequency and *σ* = flow resistivity.
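The correlations above can be turned into an absorption estimate for a layer mounted on a rigid wall. The sketch below does this for a hypothetical thickness and flow resistivity; the surface-impedance step $z_s = -j z' \cot(k' d)$ is a standard rigid-backing result assumed here, not stated in the original.

```python
import numpy as np

rho0, c = 1.2, 343.0   # air density [kg/m^3], speed of sound [m/s]
sigma = 20000.0        # flow resistivity [N.s/m^4] -- hypothetical
d = 0.05               # layer thickness [m] -- hypothetical

f = np.array([250.0, 500.0, 1000.0, 2000.0])   # the NRC frequencies [Hz]
X = rho0 * f / sigma
# Delany and Bazley correlations for complex wave number and impedance
k = (2 * np.pi * f / c) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
z = rho0 * c * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)

zs = -1j * z / np.tan(k * d)            # surface impedance, rigid backing
r = (zs - rho0 * c) / (zs + rho0 * c)   # normal-incidence reflection coefficient
alpha = 1 - np.abs(r)**2                # absorption coefficient
for fi, ai in zip(f, alpha):
    print(f"{fi:6.0f} Hz : alpha = {ai:.2f}")
```

Averaging the four printed values gives the NRC of this hypothetical layer.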
## Effective Density
By assuming a rigid-frame pore network in the absorbent, the solid phase
is completely motionless and the frame bulk modulus is
considerably greater than that of the compressible gas; hence the absorbent
can be modeled as an effective fluid using the wave equation for a fluid
with a complex effective density and a complex effective bulk modulus. In
this situation the dynamic density accounts for the viscous losses and
the dynamic bulk modulus for the thermal losses. The effective density
relation as a function of dynamic tortuosity was proposed by Johnson *et
al.* [^9]
$\rho_{eff}=q(1+\frac{\sigma\phi}{j\omega\rho_0 q} G(\omega))\rho_0$
$G(\omega)=\sqrt{1+j\frac{4q^2\mu\rho_0\omega}{\sigma^2\Lambda^2\phi^2}}$
where,
*μ* = gas viscosity
*ω* = 2*πf*
## Effective Bulk Modulus
Another factor that affects sound propagation in the absorbent is
the thermal interaction in the material due to the heat exchange between
the acoustic wave front traveling in the compressible fluid and the solid
phase. Champoux and Allard [^10] introduced a function
$G'(\omega)$ to evaluate the effective bulk modulus of the gas. As
observed in the following formula, this is a function of the thermal
characteristic length ($\Lambda '$).
$k_{eff}=\frac{\gamma p_0}{\gamma-(\gamma-1)(1-j\frac{8\mu}{\Lambda '^2 Pr^2\omega\rho_0} G'(\omega))\rho_0}$
$G'(\omega)=\sqrt{1+j\frac{\Lambda '^2 Pr^2\omega\rho_0}{16\mu}}$
where,
*γ* = gas specific heat ratio (for air \~ 1.4)
*Pr* = fluid Prandtl number
## References
[^1]: Cox, T. J. and P. D\'antonio, Acoustic Absorbers and Diffusers,
SponPress,(2004)
[^2]: ASTM E1050 - 08 Standard Test Method for Impedance and Absorption
of Acoustical Materials Using A Tube, Two Microphones and A Digital
Frequency Analysis System
[^3]:
[^4]: Link common absorbing
materials,
absorbing coefficients.
[^5]:
[^6]:
[^7]: Fahy, F., Foundations of Engineering Acoustics, Academic Press
London, (2001).
[^8]: Delany, M.E., and Bazley, E.N., \"Acoustical properties of fibrous
absorbent materials\" Applied Acoustics, vol. 3, 1970, pp. 105-116.
[^9]: Johnson, D.L., Koplik, J., and Dashen, R.,\"Theory of dynamic
permeability and tortuosity in fluid-saturated porous media,\"
Journal of Fluid Mechanics, vol. 176, 1987, pp. 379-402
[^10]: Champoux, Y., and Allard, J.F., \"Dynamic tortuosity and bulk
modulus in air saturated porous media,\" Journal of Applied Physics,
vol. 70, no. 4, 1991, pp. 1975-1979.
# Engineering Acoustics/Noise from cooling fans
## Proposal
As electric/electronic devices get smaller and more functional, the noise
of cooling devices becomes more important. This page will explain the
origins of noise generation in the small axial cooling fans used in
electronic goods like desktop/laptop computers. The sources of fan noise
include aerodynamic noise as well as the operating sound of the fan
itself. This page is focused on the aerodynamic noise generation
mechanisms.
## Introduction
Inside a desktop computer, there may be three (or more) fans. Usually
there is a fan on the heat sink of the CPU, in the rear of the power
supply unit, on the case ventilation hole, and maybe on the graphics
card, plus one on the motherboard chipset if it is a very recent one.
The noise from a computer that annoys people is mostly due to cooling
fans if the hard drive(s) is fairly quiet. When Intel Pentium processors
were first introduced, there was no need to have a fan on the CPU at
all, but most modern CPUs cannot function even for several seconds
without a cooling fan, and some CPUs (such as Intel\'s
Prescott core) have extreme cooling requirements,
which often cause more and more noise. The type of fan used in a
desktop computer is almost always an axial
fan, while centrifugal fans are commonly
used in laptop computers. Several fan types are shown here (pdf
format). Different
fan types have different noise generation and performance
characteristics. The axial flow fan is the main type considered in this
page.
## Noise Generation Mechanisms
The figure below shows a typical noise spectrum of a 120 **mm** diameter
electronic device cooling fan. One microphone is placed 1
**m** from the upstream side of the fan. The fan has 7 blades, 4
struts for motor mounting, and operates at 13 V under a certain amount of
load. The blue plot is the background noise of the anechoic chamber, and
the green one is the sound spectrum when the fan is running.
```{=html}
<center>
```
![](Noisespectrum.gif "Noisespectrum.gif")
```{=html}
</center>
```
(\*BPF = Blade Passing Frequency) Each noise element shown in this
figure is caused by one or more of the following generation mechanisms.
### Blade Thickness Noise - Monopole (but very weak)
Blade thickness noise is generated by the volume displacement of fluid.
Fan blades have thickness and volume. As the rotor rotates, the volume of
each blade displaces fluid, which produces pressure fluctuations in the
near field, and noise is generated. This noise is tonal at
the running frequency and generally very weak for cooling fans, because
their RPM is relatively low. Therefore, the thickness of the fan blades
hardly affects electronic cooling fan noise.
(This kind of noise can become severe for high speed turbomachines like
helicopter rotors.)
### Tonal Noise by Aerodynamic Forces - Dipole
#### Uniform Inlet Flow (Negligible)
The sound generated by uniform and steady aerodynamic forces has
very similar characteristics to the blade thickness noise. It is very
weak for low speed fans, and depends on fan RPM. Since steady
blade forces are necessary for a fan to do its duty, this kind of noise
is impossible to avoid, even in an ideal condition. It is
known that this noise can be reduced by increasing the number of blades.
#### Non-uniform Inlet Flow
Non-uniform (but still steady) inlet flow causes non-uniform aerodynamic
forces on the blades as their angular positions change. This generates
noise at the blade passing frequency and its harmonics. It is one of the
major noise sources of electronic cooling fans.
#### Rotor-Casing interaction
If the fan blades are very close to a structure which is not symmetric,
unsteady interaction forces on the blades are generated. The fan then
experiences a running condition similar to sitting in a non-uniform flow
field.
#### Impulsive Noise (Negligible)
This noise is caused by the interaction between a blade and the
blade-tip vortex of the preceding blade, and is not severe for cooling
fans.
#### Rotating Stall
The definition and an aerodynamic description of **stall** can be found
elsewhere in this book.
The noise due to stall is a complex phenomenon that occurs at low flow
rates. If the flow is locally disturbed for some reason, it can cause stall
on one of the blades. As a result, the upstream passage of this blade is
partially blocked, and the mean flow is diverted away from this
passage. This increases the angle of attack on the closest
blade on the upstream side of the originally stalled blade, so the flow
stalls there as well. On the other hand, the other side of the first
blade is un-stalled because of the reduction in flow angle.
```{=html}
<center>
```
![](Stall.gif "Stall.gif")
```{=html}
</center>
```
As this repeats, the stall cell travels around the blades at about 30\~50%
of the running frequency, in the direction opposite to the blades. This
series of phenomena causes unsteady blade forces, and consequently
generates noise and vibration.
#### Non-uniform Rotor Geometry
Asymmetry of the rotor causes noise at the rotation frequency and its
harmonics (not the blade passing frequency), even when the inlet
flow is uniform and steady.
#### Unsteady Flow Field
Unsteady flow causes random forces on the blades. It spreads the
discrete spectral peaks and makes them continuous. In the case of
low-frequency variation, the spread continuous spectral noise is around
the rotating frequency, and narrowband noise is generated. The stochastic
velocity fluctuations of the inlet flow generate a broadband noise
spectrum. The generation of random noise components is covered in the
following sections.
### Random Noise by Unsteady Aerodynamic Forces
#### Turbulent Boundary Layer
Even in steady and uniform inlet flow, there are random force
fluctuations on the blades arising from the turbulent blade boundary
layer. Some noise is generated for this reason, but the dominant noise is
produced by the boundary layer passing the blade trailing edge. The blade
trailing edges scatter the non-propagating near-field pressure into a
propagating sound field.
#### Incident Turbulence
Velocity fluctuations of the intake flow with a stochastic time history
generate random forces on the blades, and a broadband noise spectrum.
#### Vortex Shedding
A vortex can separate from a blade, changing the
circulating flow around the blade. This causes
non-uniform forces on the blades, and noise. A classical example of this
phenomenon is the \'Karman vortex
street\'.
The vortex shedding mechanism can occur in the laminar boundary layer of a
low speed fan and also in the turbulent boundary layer of a high speed
fan.
#### Flow Separation
Flow separation causes the stall explained above. This phenomenon can
cause random noise, which spreads all the discrete spectral peaks and
turns the noise into broadband noise.
#### Tip Vortex
Since cooling fans are ducted axial flow machines, the annular gap
between the blade tips and the casing is an important parameter for noise
generation. While the fan rotates, there is an additional flow through the
annular gap due to the pressure difference between the upstream and
downstream sides of the fan. Because of this flow, a tip vortex is
generated through the gap, and broadband noise increases as the annular
gap gets bigger.
## Installation Effects
Once a fan is installed, an unexpected noise problem can come up even
though the fan is well designed acoustically. These are called
installation effects, and two types are applicable to cooling fans.
### Effect of Inlet Flow Conditions
A structure that affects the inlet flow of a fan causes installation
effects. For example, Hoppe & Neise \[3\] showed that the presence or
absence of a bellmouth nozzle at the inlet flange of a 500 **mm** fan can
change the noise power by 50 **dB** (though this application concerns a
much larger and noisier fan).
### Acoustic Loading Effect
This effect appears in duct system applications. Some high performance
graphics cards use a duct system for direct exhaust.
The sound power generated by a fan is not only a function of its
impeller speed and operating condition, but also depends on the acoustic
impedances of the duct systems connected to its inlet and outlet.
Therefore, the fan and duct system should be matched not only for
aerodynamic reasons but also because of acoustic considerations.
## Closing Comment
Noise reduction of cooling fans has some restrictions:
1. Active noise control is not economically effective. 80 mm cooling
    fans cost only 5\~10 US dollars, so it is only applicable to high-end
    electronic products.
2. Restricting certain aerodynamic phenomena for noise reduction can
    cause serious performance reduction of the fan. Increasing the RPM of
    the fan is, of course, a much more dominant factor for noise.
Different aspects of fan noise are introduced at some of the linked
sites below, like active RPM control or noise comparison of various
bearings used in fans.
## Links to Interesting Sites about Fan Noise
Some practical issue of PC noise are presented at the following sites:
- Cooling Fan Noise Comparison - Sleeve Bearing vs. Ball Bearing (pdf
format)
- Brief explanation of fan noise origins and noise reduction
suggestions
- Effect of sweep angle
comparison
- Comparisons of noise from various 80mm
fans
- Noise reduction of a specific desktop
case
- Noise reduction of another specific desktop
case
- Informal study for noise from CPU cooling
fan
- Informal study for noise from PC case
fans
- Active fan speed optimizers for minimum noise from desktop
    computers
- Some general fan noise reduction
    techniques
- Various applications and training in - Brüel &
Kjær
## References
\[1\] Neise, W., and Michel, U., \"Aerodynamic Noise of Turbomachines\"\
\[2\] Anderson, J., \"Fundamentals of Aerodynamics\", 3rd edition, 2001,
McGrawHill\
\[3\] Hoppe, G., and Neise, W., \"Vergleich verschiedener
Gerauschmessverfahren fur Ventilatoren. Forschungsbericht FLT 3/1/31/87,
Forschungsvereinigung fur Luft- und Trocknungstechnik e. V.,
Frankfurt/Main, Germany
# Engineering Acoustics/Noise from turbine blades
## Introduction
Sound in fluid flows is hard to predict because of the
non-linearity of the governing equations. Sound production occurs at
high Reynolds numbers, where the nonlinear inertial terms are much larger
than the viscous terms. The sound carries only a very small portion of the
energy in the fluid flow, especially in open space for subsonic flows.
Aeroacoustics provides approximations of such flows, and the difference
between the actual flow and the reference flow is identified as the
source of sound. The sound field is obtained through the Green\'s
function, where the Green\'s function is the linear response of the
fluid flow to an impulsive point source, expressed in delta functions of
space and time. The Green\'s function satisfies:
```{=html}
<center>
```
$\frac{1}{c_0^2}\frac{\partial^2 G}{\partial t^2} - \nabla^2 G = \delta(\mathbf{x}-\mathbf{y})\,\delta(t-\tau)$
```{=html}
</center>
```
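In free space this equation has the familiar retarded-impulse solution (a standard result, quoted here for completeness):

$G(\mathbf{x},t\,|\,\mathbf{y},\tau) = \frac{\delta\left(t - \tau - |\mathbf{x}-\mathbf{y}|/c_0\right)}{4\pi\,|\mathbf{x}-\mathbf{y}|}$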
Aeroacoustics is a field of study that focuses on sound from fluid flow,
and is often used to predict sound in turbine flows.
```{=html}
<center>
```
![](Blade11.png "Blade11.png"){width="188" height="188"}
```{=html}
</center>
```
## Blade displacement noise(monopole)
Blade displacement noise is a monopole source of sound, and can be
severe for turbomachinery and helicopter blades. The simplest model of a
monopole is a radially expanding sphere. In an infinite homogeneous
medium, a pulsating sphere will produce a spherical wave of the form:
```{=html}
<center>
```
$p(r,t) = (A/r)e^{j(\omega t-kr)}$
```{=html}
</center>
```
where A is determined by an appropriate boundary condition. For a sphere
of average radius *a*, vibrating radially with complex speed
$U_0 e^{j\omega t}$, the specific acoustic impedance for the spherical
wave is
```{=html}
<center>
```
$z(a) = \rho_0 c \,\cos\theta_a\, e^{j\theta_a}$
```{=html}
</center>
```
where $\cot\theta_a = ka$. The pressure at the surface is then
```{=html}
<center>
```
$p(a,t) = \rho_0 c U_0 \cos\theta_a\, e^{j(\omega t-ka+\theta_a)}$
```{=html}
</center>
```
Then A becomes
```{=html}
<center>
```
$A = \rho_0 c U_0\, a \cos\theta_a\, e^{j(ka+\theta_a)}$
```{=html}
</center>
```
So, the pressure at any distance r\>a is
```{=html}
<center>
```
$p(r,t) = \rho_0 c U_0 \frac{a}{r} \cos\theta_a\, e^{j(\omega t-k(r-a)+\theta_a)}$
```{=html}
</center>
```
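A numeric sketch of the monopole formulas above for a small pulsating sphere; the radius, surface speed, frequency and observer distance are hypothetical placeholders.

```python
import numpy as np

rho0, c = 1.2, 343.0   # air density [kg/m^3], speed of sound [m/s]
a = 0.05               # sphere radius [m] -- hypothetical
U0 = 0.1               # surface velocity amplitude [m/s] -- hypothetical
f = 500.0              # frequency [Hz] -- hypothetical
r = 1.0                # observer distance [m]

k = 2 * np.pi * f / c
theta_a = np.arctan(1.0 / (k * a))                  # from cot(theta_a) = k a
p_amp = rho0 * c * U0 * (a / r) * np.cos(theta_a)   # pressure amplitude at r
SPL = 20 * np.log10((p_amp / np.sqrt(2)) / 20e-6)   # rms level re 20 uPa
print(f"|p| = {p_amp:.3f} Pa, SPL = {SPL:.1f} dB")
```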
## Tonal noise(Dipole)
Tonal noise at the blade passing frequency (BPF) is an example of a dipole
source. While volume displacement is a monopole source, fluctuating
pressure is a dipole source, and unsteady Reynolds stress or transport
of momentum is a quadrupole source. The fluctuating blade
pressures (dipoles) are always an important source of sound for rotating
machinery. Steady and unsteady rotating forces both
classify as dipole blade forces; examples of their causes are uniform
stationary inflow, non-uniform stationary inflow, non-uniform
unstationary inflow, vortex shedding and secondary flows. If two
monopole sources of equal strength but opposite phase are close
enough, they resemble a dipole. A rigid sphere whose center
oscillates back and forth is another example of a dipole. The net force
exerted on the fluid by the sphere, in accordance with Newton's third law,
is the surface integral of $p(a,\theta,t)e_r$. Symmetry requires that this
force have only a *z* component, so the force is
```{=html}
<center>
```
$F(t) = F_z(t)e_z = e_z{a^2}\int_{0}^{2\pi}\int_{0}^{\pi}p(a,\theta,t)\cos{\theta}\sin{\theta}\,{d\theta}\,d\phi$
```{=html}
</center>
```
## Noise from wind turbine blades (flutter)
Flutter has been a problem traditionally associated with compressor and
fan blades. Over the years, fan blades have decreased in blade and disc
thickness and increased in aspect ratio, in an effort to increase the
lift coefficient. This reduces the stiffness and natural frequencies of
the bladed disc assembly and, as a result, can lead to flutter motion.
Flutter boundaries are very sensitive to mode shapes, while reduced
frequencies play a secondary role. Flutter brings pressure fluctuations
and is therefore a source of dipole sound.
```{=html}
<center>
```
![](Wind_Turbine_blade.png "Wind_Turbine_blade.png"){width="500"
height="500"}
```{=html}
</center>
```
## Noise from gas turbines
In a gas turbine there are three main sources of noise: the intake, the
exhaust, and the casing. Intake noise is created by the interaction of the
axial air compressor rotor and stator, and is a function of blade
number, tip speed, and pressure rise. Intake noise is lower than
exhaust noise overall, but its high-frequency content is much
stronger than that of the exhaust noise. Exhaust noise has higher
amplitude and lower frequency because of the combustion process. Typically,
the inlet and exhaust sound power levels range from 120 dB to over 155
dB. Casing noise is generated by high-speed misaligned mechanical
components in the turbine housing radiating to the outer casing. In
principle, gas turbine noise comes from aerodynamic sources. High
aerodynamic turbulence and combustion are present in the operation of a
gas turbine. Combustion is a monopole source of sound, along with
rotating shock waves. Dipole sources of sound come mainly from
fluctuating forces on blades and guide vanes, and free jets act as a
quadrupole noise source.
```{=html}
<center>
```
![](Gas_turbine_internal_cooling_model_.png "Gas_turbine_internal_cooling_model_.png"){width="500"
height="500"}
```{=html}
</center>
```
## External References
1. Pierce, A. D., & Beyer, R. T. (1990). Acoustics: An Introduction to
   Its Physical Principles and Applications. 1989 Edition.
2. Kinsler, L. E., Frey, A. R., Coppens, A. B., Sanders, J. V., &
   Saunders, H. (1983). Fundamentals of Acoustics.
3. <http://www.sandia.gov/>
4. <http://www.sonobex.com/gas-turbines/>
# Engineering Acoustics/International Space Station Acoustics Challenges
The International Space Station
(ISS) is a
research laboratory made up of several different modules in
low Earth orbit. This facility represents a union of several
space station projects from various nations. The long-term
goals of the ISS are to develop the technology necessary for human-based
space and planetary exploration and colonization (including life support
systems, safety precautions, environmental monitoring in space), new
ways to treat diseases, more efficient methods of producing materials,
more accurate measurements than would be possible to achieve on Earth,
and a more complete understanding of the Universe.
In order to achieve these objectives, human space flight centers train
highly qualified astronauts to best perform different tasks on
board the ISS. Astronauts spend months at a training facility and in a
variety of conditions which simulate the environment of space before
going to the ISS. Some astronauts are cross-trained to perform a number
of tasks. A pilot, for example, might also be trained to carry out
scientific experiments, or to work on equipment repairs. Astronauts'
schedules are usually very dense on the ground, and this becomes even more
critical on board the ISS. Therefore, care needs to be taken in order to
provide astronauts a safe and habitable working environment once they
are on orbit.
One of the major problems on board the ISS is the presence of white
noise. Each module of the ISS has equipment such as fans, pumps,
compressors, avionics, and other hardware or systems that serve ISS
functionality and the astronauts' life support needs. This equipment
presents a significant acoustics challenge to the ISS because of
difficulties with controlling the noise produced by various elements
provided by international partners. The excessive noise levels from
machinery or equipment on the ISS have been shown to affect crews' hearing,
habitability, safety, productivity, annoyance, and sleep interference.
Crew performance concerns include inability to effectively communicate
and understand what is being said or what is going on around them (e.g.
intelligibility,
speech interference, inability to hear alarms or other important
auditory cues such as an equipment item malfunctioning, inability to
concentrate, strain in vocal cords).
## Normal Hearing Response and Hearing Threshold
The sensitivity of hearing mechanism is highly dependent upon the
frequency content of the received sound. Human ears can detect sound
between frequencies of 20 to 20000 Hz. Intensity of sound is measured in
decibels (dB). The decibel is a logarithmic unit used to describe a
ratio (10 log10 (p2/p1)2). The ratio may be power, sound pressure,
voltage or intensity or several other things. Sound power level (in dB)
of typical sources could be viewed
here.
There are several different units to quantify the sound perceived by
human ears. We will introduce some of them. The phon is a unit that is
related to dB by the psycho-physically measured frequency response of
the ear. At 1 kHz, readings in phons and dB are, by definition, the
same. For all other frequencies, the phon scale is determined by the
results of experiments in which healthy volunteers are asked to adjust
the loudness of a signal at a given frequency until they judge its
loudness is equal to that of a 1 kHz signal
exercise. To convert from
dB to phons, a graph of such
results is needed. Such a graph depends on sound level: it becomes
flatter at high sound levels. Hearing threshold defines the level at
which the ear barely perceives the sound. The threshold is
frequency-dependent and corresponds to the 0-phon curve. Damage to the
hearing mechanism has the effect of raising the hearing threshold, which
indicates that higher sound levels are required in order to be heard.
The degree of shift in the hearing threshold is used as an index of the
amount of hearing impairment incurred; criteria for hearing damage are
usually based on shifts in this threshold.
## dBA
The human ear does not respond equally to all frequencies: we are much
more sensitive to sounds in the frequency range about 1 kHz to 4 kHz
than to very low or high frequency sounds. For this reason, sound meters
are usually fitted with a filter whose response to frequency is a bit
like that of the human ear. If the sound pressure level is given in
units of dB(A) or dBA, the \"A weighting filter\" is used. dBA roughly
corresponds to the inverse of the 40 dB (at 1 kHz) equal-loudness curve.
Sound pressure level on the dBA scale is easy to measure
link and is therefore widely
used. It is still different from loudness, however, because the filter
does not respond in quite the same way as the ear.
## Noise Exposures
![](Table_1_Permissible_noise_exposures.jpg "Table_1_Permissible_noise_exposures.jpg"){width="50"}
Exposure to excessive and prolonged noise is one major cause of hearing
disorders worldwide. Hearing damage results in an increase in the hearing
threshold. Table 1 shows the duration of exposure to higher sound
intensities, set by the Occupational Safety and Health Administration
(OSHA), which will result in no more damage to hearing than that
produced by 8 h at 90 dBA. In addition, people must not be exposed to
steady sound levels above 115 dBA, regardless of the duration.
When the daily noise exposure is composed of two or more periods of
noise exposure at different levels, their combined effect should be
considered, rather than the individual effect of each. When people are
exposed to different sound levels during the day, the mixed dose (D)
must be calculated using the following formula:
$D = C_1/T_1 + C_2/T_2 + \cdots + C_n/T_n$
where Cn is the total exposure time at a given noise level and Tn is the
total exposure time permitted at that level. If the sum of the fractions
equals or exceeds 1, then the mixed exposure is considered to exceed the
limit value.
source
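A minimal sketch of this dose calculation is given below; the exposure pairs (time actually spent at a level, time permitted at that level) are illustrative assumptions, not measured data.

``` python
# Sketch of the mixed noise dose D = C1/T1 + ... + Cn/Tn.
# Each pair is (hours actually spent, hours permitted at that level);
# the example exposures below are assumptions for illustration.
exposures = [
    (2.0, 8.0),   # e.g. 2 h at a level permitted for 8 h (90 dBA)
    (1.0, 4.0),   # e.g. 1 h at a level permitted for 4 h (95 dBA)
    (0.5, 2.0),   # e.g. 0.5 h at a level permitted for 2 h (100 dBA)
]

D = sum(C / T for C, T in exposures)
print(f"Mixed dose D = {D:.2f} ->", "exceeds limit" if D >= 1 else "within limit")
```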
## Intelligibility
The listeners' speech comprehension is diminished by the ambient noise
and the distortion of the system. One could ensure that a message is
clear and intelligible in all situations by measuring the
"intelligibility" of the system. Speech intelligibility is the degree to
which speech can be understood by a listener in a noisy environment. For
satisfactory communication the average speech level should exceed that
of the noise by 6 dB, but lower S/N ratios can be acceptable (Moore,
1997). The most common method to rate the speech interference effect of
noise is called the Preferred Speech Interference Level (PSIL), shown in
the figure below.
![](PSIL.jpg "PSIL.jpg")
## Noise detection and protection on board the ISS
In order to determine the total exposure to noise during a given period,
astronauts wear an audio dosimeter. The astronauts of the ISS are
exposed to an average noise level of 72 dBA for the entire duration of
their stay on the ISS, which can last up to six months. One of the
medical flight rules sets noise exposure limits based on a 24-hour
exposure level. If the 24-hour noise exposure levels measured by the
audio dosimeters exceed 65 dBA, then the crewmembers are directed to
wear approved hearing protection devices. Design specifications of 60
dBA for work areas and 50 dBA for sleep areas have been agreed upon as
"safety limits" for ISS operations. The use of hearing protection
devices (HPDs) is suggested if noise exposure levels exceed 60 dBA, or
if the crewmembers are exposed to high intermittent noise periods (e.g.
use of exercise devices such as the treadmill, airlock repress, or other
short term high noises). The ISS specifications take into account the
impact of noise on crew hearing (both temporary and permanent threshold
shifts), as well as habitability and performance (disrupted
communication, irritability, impaired sleep, etc.). Use of HPDs during
sleep provides additional lowering of the noise input to the inner ear
and aids recovery from acoustic trauma sustained during the day.
Recovery is more robust in quieter environments such as in an adequately
quieted crew sleep station or with the use of hearing protection during
high noise exposure periods. Although hearing protection headsets are
available, astronauts do not use them all the time, as they are
uncomfortable to wear continuously and make communication with other
crewmembers difficult. Since hearing needs to be tested in a quiet
environment, researchers\' efforts to record in-flight changes in
hearing have not been successful because of the continuous noise on the
ISS. Some of the astronauts\' reports could be found in the following
website: SpaceRef
## Further Research
- Acoustical Testing Laboratory
- Leuven Measurement Systems
- Test on
Humans
# Engineering Acoustics/Rotor Stator Interactions
An important issue for the aeronautical industry is the reduction of
aircraft noise. The characteristics of the turbomachinery noise are to
be studied. The rotor/stator interaction is a predominant part of the
noise emission. We will present an introduction to the theory of this
interaction, whose applications are numerous. For example, the design of
air-conditioning ventilators requires a full understanding of this
interaction.
## Noise emission of a Rotor-Stator mechanism
A Rotor wake induces on the downstream Stator blades a fluctuating vane
loading, which is directly linked to the noise emission.
We consider a rotor with B blades (at a rotation speed of $\Omega$) and a
stator with V blades, in a single rotor/stator configuration. The source
frequencies are multiples of $B \Omega$, that is to say $mB \Omega$. For
the moment we do not have access to the source levels $F_{m}$. The noise
frequencies are also $mB \Omega$, independent of the number of blades
of the stator. Nevertheless, this number V has a predominant role in the
noise levels ($P_{m}$) and directivity, as will be discussed later.
*Example*
*For an airplane air-conditioning ventilator, reasonable data are :*
*$B=13$ and $\Omega = 12000$ rev/min*
*The blade passing frequency is 2600 Hz, so we only have to include the
first two multiples (2600 Hz and 5200 Hz), because the sensitivity of the
human ear falls off at higher frequencies. We have to study the
frequencies m=1 and m=2.*
## Optimization of the number of blades
As the source levels can\'t be easily modified, we must focus on the
interaction between those levels and the noise levels.
The transfer function ${{F_m } \over {P_m }}$ contains the following
part :
```{=html}
<center>
```
$\sum\limits_{s = - \infty }^{s = + \infty } {e^{ - {{i(mB - sV)\pi } \over 2}} J_{mB - sV} } (mBM)$
```{=html}
</center>
```
where M is the Mach number and $J_{mB - sV}$ is the Bessel function of
order mB-sV. In order to minimize the influence of the transfer
function, the goal is to reduce the value of this Bessel function. To do
so, the argument must be smaller than the order of the Bessel function.
*Back to the example :*
*For m=1, with a Mach number M=0.3, the argument of the Bessel function
is about 4. We have to avoid having mB-sV smaller than 4. If V=10, we
have 13-1x10=3, so there will be a noisy mode. If V=19, the minimum of
mB-sV is 6, and the noise emission will be limited.*
*Remark :*
*The case that must be strictly avoided is when mB-sV can be zero, which
causes the order of the Bessel function to be 0. As a consequence, we
have to take care that B and V are coprime (share no common factor).*
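The blade-count selection can be sketched numerically. The snippet below (a sketch assuming SciPy is available) scans stator blade counts V for the B = 13, M = 0.3 example above and reports the largest magnitude of $J_{mB-sV}(mBM)$ over the first two harmonics and a few interaction orders s; smaller maxima suggest quieter combinations.

``` python
from scipy.special import jv  # Bessel function of the first kind, J_v(x)

# Sketch: scan stator blade counts V for a B-blade rotor at Mach number M,
# flagging combinations where |mB - sV| gets small (large Bessel values).
B, M = 13, 0.3

for V in range(10, 21):
    worst = 0.0
    for m in (1, 2):              # first two harmonics of the BPF
        arg = m * B * M           # argument mBM of the Bessel function
        for s in range(-3, 4):    # a few interaction orders s
            worst = max(worst, abs(jv(m * B - s * V, arg)))
    print(f"V = {V:2d} : max |J_(mB-sV)(mBM)| = {worst:.3f}")
```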
## Determination of source levels
The minimization of the transfer function ${{F_m } \over {P_m }}$ is a
great step in the process of reducing the noise emission. Nevertheless,
to be highly efficient, we also have to predict the source levels
$F_{m}$. This will lead us to minimize the Bessel functions
for the most significant values of m. For example, if the source level
for m=1 is much higher than for m=2, we will not consider the Bessel
functions of order 2B-sV. The determination of the source levels is
given by the Sears theory, which will not be detailed here.
## Directivity
All of this study was made for a preferred direction: the axis of the
rotor/stator. The results are applicable when the noise reduction is
sought in this direction. In the case where the noise to be reduced is
perpendicular to the axis, the results are very different, as the
following figures show:
For B=13 and V=13, which is the worst case, we see that the sound level
is very high on the axis (for $\theta = 0$)
```{=html}
<center>
```
![](Acoustics_1313.JPG "Acoustics_1313.JPG")
```{=html}
</center>
```
For B=13 and V=19, the sound level is very low on the axis but high
perpendicular to the axis (for $\theta = \pi/2$)
```{=html}
<center>
```
![](Acoustics_1319.jpg "Acoustics_1319.jpg")
```{=html}
</center>
```
# Engineering Acoustics/Noise control with self-tuning Helmholtz resonators
## Introduction
Many engineering systems create unwanted acoustic noise. Noise may be
reduced using engineering noise control methods. One noise control
method popular in mufflers is the Helmholtz resonator, see
here. It consists
of a cavity connected to the system of interest through one or several
short narrow tubes. The classical examples are in automobile exhaust
systems. By adding a tuned Helmholtz resonator, sound is reflected back
to the source.
Helmholtz resonators have been exploited to enhance or attenuate sound
fields at least since ancient Greek times where they were used in
ancient amphitheaters to reduce reverberation. Since this time,
Helmholtz resonators have found widespread use in reverberant spaces
such as churches and as mufflers in ducts and pipes. The Helmholtz
resonator effect underlies the phenomena of sunroof buffeting seen
here.
One advantage of the Helmholtz resonator is its simplicity. However, the
frequency range over which Helmholtz resonators are effective is
relatively narrow. Consequently these devices need to be precisely tuned
to the noise source to achieve significant attenuation.
## Noise and vibration control
There are four general categories for noise and vibration control:[^1]
1. **Active systems:** load or unload the unwanted noise by using
   actuators such as loudspeakers (see Engineering Acoustics/Active
   Control)
2. **Passive systems:** achieve sound attenuation by using
   : 2.1. reactive devices such as Helmholtz resonators and expansion
   chambers.
   : 2.2. resistive materials such as acoustic linings and porous
   membranes
3. **Hybrid systems:** use both active and passive elements to achieve
   sound reduction
4. **Adaptive-passive systems:** use passive devices whose parameters
   can be varied in order to achieve optimal noise attenuation over a
   band of operating frequencies.
## Lumped element model of the Helmholtz resonator
The Helmholtz resonator is an acoustic filter element. If the dimensions
of the Helmholtz resonator are smaller than the acoustic wavelength, then
the dynamic behavior of the Helmholtz resonator can be modelled as a
lumped system.
It is effectively a mass on a spring and can be treated so
mathematically. The large volume of air is the spring and the air in the
neck is the oscillating mass. Damping appears in the form of radiation
losses at the neck ends, and viscous losses due to friction of the
oscillating air in the neck. Figure 1 shows this analogy between
Helmholtz resonator and a vibration absorber.
\[\[Image: HR and vibration absorber sys.JPG\|frame\|center\|
```{=html}
<center>
```
Figure 1. Helmholtz resonator and vibration absorber
```{=html}
</center>
```
\]\]
## Parameters definition
| Parameter | Definition | Parameter | Definition |
|-----------|------------|-----------|------------|
| $M_a$ | Acoustic mass of the resonator | $C_a$ | Acoustic compliance |
| $\rho$ | Density of the fluid | P | Pressure at the neck entrance |
| ω | Excitation frequency | S | Cross-section area of the neck |
| a | Radius of the neck | L | Actual neck length |
| y | Displacement in the direction pointing into the neck | V | Cavity volume of the Helmholtz resonator |
| F | Force applied in the y direction at the resonator neck entrance | L~eff~ | Effective neck length (the mass inside the neck plus the mass near the neck edges) |
## Theoretical analysis of Helmholtz resonators
For a neck which is flanged at both ends, L~eff~ is approximately:
$L_\text{eff} = \ L + \ 1.7 \ a$
The acoustic mass of a Helmholtz resonator is given by
$M_a = L_\text{eff}\ \rho\ S$
The stiffness of the resonator is defined as the reciprocal of the
compliance, and is given by
$k_r = \frac{dF}{dy}$
where
$F = P\ S$
For an adiabatic system with air as an ideal gas, the thermodynamic
process equation for the resonator is
$PV^{\gamma} = \text{constant} \,$
Differentiating this equation gives
$V^\gamma dP + P \gamma V^{\gamma -1} dV = 0 \,$
The change in the cavity volume is
$dV = -S\,dy$
Substituting these into the differential equation, it can be recast as
$V^\gamma \frac{dF}{S} - P \gamma V^{\gamma -1} S dy = 0 \Rightarrow \frac{dF}{dy} = \frac{P \gamma S^2}{V} = k_r$
or considering $P = \rho R\ T$ and $c=\sqrt{\gamma R T}$, resonator
stiffness is then:
$k_r =\frac{\rho c^2\ S^2}{V}$
where c is the speed of sound, and $\rho$ is the density of the medium.
Two sources of damping in the Helmholtz resonator can be considered:
sound radiation from the neck and viscous losses in the neck; the latter
can in many cases be neglected compared to radiation losses.
1\. **Sound radiation from the neck:** Sound radiation resistance is a
function of the outside neck geometry. For a flanged pipe, the radiation
resistance is approximately[^2]
$R_r =\frac{\rho c k^2\ S^2}{2\pi}$
where k is the wave number, $k =\frac{w}{c}$
2\. **Viscous losses in the neck:** The mechanical resistance due to
viscous losses can be considered as[^3]
$R_v =2R_s S\frac{ \ (L+a)}{\rho c a}$
where R~s~ for a sufficiently large neck diameter is
$R_s =0.00083\sqrt{\frac{w}{2\pi}}$, where ω is the excitation
frequency.
The mechanical impedance of a mechanical system is defined as the
ratio of the driving force to the velocity of the system at the driving
point. The mechanical impedance of a driven mass-spring--damper system is
$\hat{Z}_m=\frac{\hat{F}}{\hat{u}}= R_m+j(\omega m - \frac{k_r}{w})$
According to the analogy between the Helmholtz resonator and the
mass-spring--damper system (vibration absorber), the mechanical
impedance of a Helmholtz resonator is obtained by substituting the mass,
stiffness, and damping of the Helmholtz resonator into the above equation:
$\hat{Z}_{mres}= (R_v+\frac{\rho c k^2\ S^2}{2\pi})+j(\omega \rho L_{eff} S - \frac{\rho c^2\ S^2}{wV})$
The natural frequency of a Helmholtz resonator, $w_0$, is the frequency
for which the reactance is zero:
$w_0 =c\sqrt{\frac{S}{L_\text{eff}V}}$,
and the acoustic impedance of the Helmholtz resonator is
$\hat{Z}_{res}= \frac{\hat{Z}_{mres}}{\ S^2}$
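As a minimal numerical sketch of these lumped-element relations, the snippet below computes the acoustic mass, stiffness, and natural frequency for an assumed geometry; the neck radius, neck length, and cavity volume are illustrative values, not data from the text.

``` python
import numpy as np

# Sketch: lumped-element properties of a Helmholtz resonator in air.
# Geometry values (a, L, V) are assumed for illustration.
rho, c = 1.21, 343.0      # air density [kg/m^3], speed of sound [m/s]
a = 0.01                  # neck radius [m] (assumed)
L = 0.05                  # actual neck length [m] (assumed)
V = 1.0e-3                # cavity volume [m^3] (assumed)

S = np.pi * a**2          # neck cross-section area
L_eff = L + 1.7 * a       # effective neck length, flanged at both ends

M_a = rho * L_eff * S                 # acoustic mass (as defined above)
k_r = rho * c**2 * S**2 / V           # resonator stiffness

w0 = c * np.sqrt(S / (L_eff * V))     # natural angular frequency [rad/s]
print(f"f0 = {w0 / (2 * np.pi):.1f} Hz, M_a = {M_a:.3e}, k_r = {k_r:.3e}")
```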
Resonance occurs when the natural frequency of the resonator is equal to
the excitation frequency. Helmholtz resonators are typically used to
attenuate sound pressure when the system is originally at resonance. A
simple open-ended duct system with a side branch Helmholtz resonator and
the analogous electrical circuit of the system is shown below. For an
undamped resonator, the impedance at resonance is zero, and therefore,
according to the electrical analogy in Fig. 2, the Helmholtz resonator
becomes a short circuit; no current flows in the elements to the right.
In other words, an undamped Helmholtz resonator at resonance
reflects all acoustic waves back to the source, while in a
damped resonator some current will flow through the branch to the right
of the Helmholtz resonator and reduce the magnitude of attenuation.
\[\[Image: HR analogy.JPG\|frame\|center\|300px\|
```{=html}
<center>
```
Figure 2. open duct system with a side branch Helmholtz resonator with
electrical circuit analogy near junction point A
```{=html}
</center>
```
\]\]
**1- Effect of Resonator Volume on sound attenuation**
Figure 3 shows the frequency response of the above duct system without
Helmholtz resonator, and with two different volume Helmholtz resonators
with the same natural frequency. The excitation frequency axis is
normalized with respect to the fundamental frequency of the straight
pipe system, which was also chosen as the natural frequency of the
resonator. The maximum attenuation of sound pressure for duct systems
with side branch Helmholtz resonators occurs when the natural frequency
of the resonator is equal to the excitation frequency. By comparing two
curves with different colors, blue and gray, it can be seen that to
increase the effective bandwidth of attenuation of a Helmholtz
resonator, the device should be made as large as possible. It should be
mentioned that, in order to minimize the effects of standing waves within
the device, the dimensions should not exceed a quarter wavelength at the
resonator's natural frequency.
\[\[Image: volume var HR.png\|frame\|center\|300px\|
```{=html}
<center>
```
Figure 3. Effect of Resonator Volume on sound attenuation
```{=html}
</center>
```
\]\]
**2- Effect of Resonator Damping on sound attenuation**
The effect of Helmholtz resonator damping (resulting from radiation
resistance and viscous losses in the neck) on the frequency response of
the duct system is shown in Figure 4. The lightly damped Helmholtz
resonator is not robust with respect to changes in the excitation
frequency, since the sound pressure in the duct system can be amplified
if the noise frequency shifts to the vicinity of either of the two
system resonances. To increase the robustness of the Helmholtz resonator
with respect to changes in the excitation frequency, it is useful
to add damping to the resonator to decrease the magnitude of the resonant
peaks. Such an increase in robustness decreases performance, since the
maximum attenuation is significantly less for heavily damped Helmholtz
resonators. The motivation for creating a tunable Helmholtz resonator
stems from this trade-off between robustness and performance. A tunable
Helmholtz resonator, capable of adjusting its natural frequency to match
the excitation frequency, would be able to guarantee the high
performance of a lightly damped Helmholtz resonator and track changes in
frequency.
\[\[Image: Damping HR.png\|frame\|center\|300px\|
```{=html}
<center>
```
Figure 4. Effect of Resonator Damping on sound attenuation
```{=html}
</center>
```
\]\]
## Adaptive Helmholtz resonator
The tunable Helmholtz resonator is a variable volume resonator, which
allows the natural frequency to be adjusted. As shown in Figure 5, a
variable volume Helmholtz resonator can be achieved by rotating an
internal radial wall inside the resonator cavity with respect to an
internal fixed wall. The movable wall is fixed to the bottom end plate,
which is attached to a DC motor that provides the motion to change the
volume.
\[\[Image: movable HR4.JPG\|frame\|center\|100px\|
```{=html}
<center>
```
Figure 5. Variable volume Helmholtz resonator
```{=html}
</center>
```
\]\]
To determine the sound pressure and volume velocity at any position
along the duct such as the microphone position in Figure 2, we should
first determine the pressure and velocity at the speaker.
\[\[Image: speaker analo.JPG\|frame\|center\|100px\|
```{=html}
<center>
```
Figure 6. Equivalent circuit for the duct system in Figure 2
```{=html}
</center>
```
\]\]
\[\[Image: equivalent circuit.JPG\|frame\|center\|100px\|
```{=html}
<center>
```
Figure 7. Simplified equivalent circuit
```{=html}
</center>
```
\]\]
The acoustic impedance of the system termination, which is an unflanged
open pipe, is approximately
$Z_1 =\frac{\rho c}{S_p}\left[\frac{1}{4}(k a_p)^2+j\,0.6\,k a_p\right]$
where S~p~ is the cross sectional area of the pipe, and a~p~ is the
radius of the pipe. The impedance at point 2 is
$\frac{Z_2}{\frac{\rho c} {S_p}}=\frac{\frac{Z_1}{\frac{\rho c} {S_p}}+j tan(kL_1)}{1+j\frac{Z_1}{\frac{\rho c} {S_p}}tan(kL_1)}$
where L~1~ is the length of the pipe separating the termination from the
resonator. The resonator acoustic impedance is the same as what is shown
above. The acoustic impedance at point 3 is given by
$Z_3 =\frac{Z_2 Z_{res}}{Z_2+ Z_{res}}$
The impedance at point 4 can be determined by
$\frac{Z_4}{\frac{\rho c} {S_p}}=\frac{\frac{Z_3}{\frac{\rho c} {S_p}}+j tan(kL_2)}{1+j\frac{Z_3}{\frac{\rho c} {S_p}}tan(kL_2)}$
finally the impedance at the speaker is given by
$\frac{Z_{sys}}{\frac{\rho c} {S_{enc}}}=\frac{\frac{Z_4}{\frac{\rho c} {S_{enc}}}+j tan(kL_{enc})}{1+j\frac{Z_4}{\frac{\rho c} {S_{enc}}}tan(kL_{enc})}$
where S~enc~ is the cross section area of the speaker enclosure, and
L~enc~ is the length of the enclosure aperture from the speaker.
From Figure 7, with the system impedance $Z_{sys}$, the pressure and
velocity at the speaker can be determined. Using transfer
matrices, the pressure and velocity at any location in the duct system
may be computed from the pressure and velocity at the speaker. The first
transfer matrix may be used to relate the pressure and velocity at a
point downstream in a straight pipe to the pressure and velocity at the
origin of the pipe.
$\begin{bmatrix} P_d(l) \\ U_d(l) \end{bmatrix}$=$\begin{bmatrix} \cos (k L) & -j \sin (k L) \frac{\rho c}{S_p} \\ -j \frac{\sin (k L)}{\frac{\rho c}{S_p}} & \cos (k L) \end{bmatrix}
\begin{bmatrix} P_d(0) \\ U_d(0) \end{bmatrix}$
The second matrix relates the pressure and velocity immediately
downstream of the side branch to the pressure and velocity immediately
before the side branch.
$\begin{bmatrix} P_d(3) \\ U_d(3) \end{bmatrix}$=$\begin{bmatrix} 1 & 0 \\ -\frac {1}{Z_{res}} & 1 \end{bmatrix}
\begin{bmatrix} P_d(2) \\ U_d(2) \end{bmatrix}$
The correct combination of these transfer matrices may be used to
determine the pressure occurring in the system at the location of the
microphone in Figure 2.
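A compact NumPy sketch of this matrix chaining is shown below; the frequency, pipe radius, branch impedance, and starting state at the speaker are all assumed, illustrative values (Appendix A gives a fuller Matlab treatment of the straight-pipe case).

``` python
import numpy as np

# Sketch: propagate [pressure; volume velocity] along the duct by chaining
# the two transfer matrices above. All numerical values are assumptions.
rho, c = 1.21, 343.0
Sp = np.pi * 0.0254**2          # pipe cross section [m^2], assumed radius
Z0 = rho * c / Sp               # characteristic impedance of the pipe
k = 2 * np.pi * 100.0 / c       # wave number at an assumed 100 Hz

def straight_pipe(L):
    """Transfer matrix of a straight pipe of length L."""
    return np.array([[np.cos(k * L), -1j * Z0 * np.sin(k * L)],
                     [-1j * np.sin(k * L) / Z0, np.cos(k * L)]])

def side_branch(Z_res):
    """Transfer matrix across a side branch of impedance Z_res."""
    return np.array([[1, 0], [-1 / Z_res, 1]])

state = np.array([1.0, 1.0 / Z0])            # [P, U] at the speaker, assumed
state = side_branch(5e3 + 1e2j) @ straight_pipe(0.62) @ state
print("P, U downstream:", state)
```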
## Appendix A: Matlab Code for straight pipe with Helmholtz resonator
``` {.matlab .numberLines startFrom="1"}
%This Matlab code is used for calculating pressure at the place of
%microphone for the pipe without Helmholtz resonator
clear all
clc
V0=1;
for f=10:1:200;
omega=2*pi*f;
freq(f-9)=f;
c=343; % speed of sound
rho=1.21; % density of the medium (air)
ap=0.0254; % radius of the pipe
Sp=pi*(ap^2); % cross sectional area of the pipe
k=omega/c; % wave number
Lenc=0.10; %distance separating the enclosure aperture and the speaker face
L1=0.34+(0.6*ap); % length of the pipe separating the termination from the resonator
L2=0.62+(0.85*ap); % length of the pipe between the resonator and speaker enclosure
Lx=0.254; % length of the pipe from HR to microphone
Lm=Lx+L2; %distance from speaker to microphone
Ld=L1+L2;
Rm=1; % loudspeaker coil resistance
Bl=7.5;
Cm=1/2000; %Compliance of the speaker
OmegaN=345; %Natural frequency of the speaker(55HZ)
a=0.0425; %effective radius of the diaphragm
Senc=pi*(a^2); % cross sectional area of the speaker enclosure (Transformer ratio)
Mm=0.01; %Air load mass on both side of the driver
Gamma=1.4; %Specific heat ratio for air
P0=10^5;
b=0.38;
Pref=20e-6; %Reference pressure for sound pressure level
%calculate the system impedance
Z1=(rho*c*Sp)*(((1/4)*((k*ap)^2)+i*(0.6*k*ap))/(Sp^2)); %open ended unflanged pipe
% impedance at point 4( after speaker)
Z4=(rho*c/Sp)*(((Z1/(rho*c/Sp))+i*tan(k*Ld))/(1+i*(Z1/(rho*c/Sp))*tan(k*Ld)));
Zsys=(rho*c/Senc)*(((Z4/(rho*c/Senc))+i*tan(k*(Lenc)))/(1+i*(Z4/(rho*c/Senc))*tan(k*(Lenc))));
%calculating the impedance of the loud speaker
Induct=(Mm/(Senc^2));
Resist=((Bl^2)/((Senc^2)*(Rm)));
Drivimp=Resist+(i*omega*Induct)+(1/(i*omega*Cm)); % impedance model for speaker
Impedance=Zsys+ Drivimp;
Voltage=V0*Bl/((Rm)*Senc);
Velocity=Voltage/Impedance;
Pressure=Velocity*Zsys;
Pressure=Pressure*sqrt(2);
Velocity=Velocity*sqrt(2);
TR1=[Pressure;Velocity];
TR15=[cos(k*(Lenc)) -i*(rho*c/Senc)*sin(k*(Lenc)); -i*(sin(k*(Lenc)))/(rho*c/Senc) cos(k*(Lenc))];
TR3=[cos(k*Lm) -i*(rho*c/Sp)*sin(k*Lm); -i*(sin(k*Lm))/(rho*c/Sp) cos(k*Lm)];
Tf=TR3*TR15*TR1;
Pmike=Tf(1)/(sqrt(2)); % pressure at microphone
Vmike=Tf(2); % velocity at microphone
MagP(f-9)=20*log10(Pmike/Pref);
end;
nondim=freq/(OmegaN/2/pi);
plot(nondim,MagP,'k-.');
title('Frequency response of system without Helmholtz resonator');
xlabel('Normalized Excitation Frequency (Hz)');
ylabel('Sound pressure level (dB)');
```
## References
[^1]: Robert J. Bernhard, Henry R. Hall, and James D. Jones.
Adaptive-passive noise control. Proceeding of Inter-Noise 92, pages
427-430, 1992.
[^2]: L. Kinsler, A. Frey, A. Coppens, and J. Sanders. Fundamentals of
    Acoustics. John Wiley and Sons, New York, NY, Third edition, 1982.
[^3]: U. Ingard. On the theory and design of acoustic resonators. The
    Journal of the Acoustical Society of America, 25(6):1037-1061, 1953.
# Engineering Acoustics/Outdoor Sound Propagation
Back to main page
## Introduction
Outdoor sound propagation or atmospheric sound propagation is of special
interest in environmental acoustics which is concerned with the control
of sound and vibrations in an outdoor environment. Outdoor sound
propagation is affected by spreading, absorption, ground configuration,
terrain profile, obstacles, pressure, wind, turbulence, temperature,
humidity, etc. The subjects covered in this page are speed of sound in
air, decibel scales, spreading losses, attenuation by atmospheric
absorption, attenuation over the ground, refraction, diffraction and
sound reduction examples.
## Speed of sound in air
The speed of sound in air varies with pressure, density, temperature,
humidity, wind speed, etc. The expression for the speed of sound $c$ in
a fluid is given in terms of its thermodynamic properties by
```{=html}
<center>
```
$c^2 =\ \left( \frac{ \partial P}{ \partial \rho} \right) _{adiabat}$
```{=html}
</center>
```
where $\rho$ is the fluid density and $P$ is the fluid pressure.
This equation can be simplified for an ideal gas leading to
```{=html}
<center>
```
$c^2 = \gamma \frac{P_0}{\rho_0}$
```{=html}
</center>
```
where $\gamma$ is the ratio of heat capacities.
For air at 0 °C and 1 atm, the speed of sound is
```{=html}
<center>
```
$c_0 = 331.5 \ \mbox{m/s}$
```{=html}
</center>
```
For air at 20 °C and 1 atm, the speed of sound is
```{=html}
<center>
```
$c_0 = 343 \ \mbox{m/s}$
```{=html}
</center>
```
An equivalent expression for the speed of sound in terms of the
temperature in Kelvin $T_K$ is
```{=html}
<center>
```
$c^2 = \ \gamma r T_{K}$
```{=html}
</center>
```
where $r$ is the specific gas constant.
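As a quick check, the sketch below evaluates this expression for dry air, assuming $\gamma = 1.4$ and $r = 287$ J/(kg·K); it closely reproduces the 331.5 m/s and 343 m/s values quoted above.

``` python
import math

# Sketch: speed of sound from c^2 = gamma * r * T_K for dry air,
# assuming gamma = 1.4 and r = 287 J/(kg K).
gamma, r = 1.4, 287.0

for t_c in (0.0, 20.0):
    c = math.sqrt(gamma * r * (t_c + 273.15))
    print(f"{t_c:4.1f} °C : c = {c:.1f} m/s")
```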
## Decibel scale
The decibel dB or dB SPL (Sound pressure level) in acoustics is used to
quantify sound pressure levels and intensities relative to a reference
on a logarithmic scale.
The intensity level $IL$ of a sound intensity $I$ is defined by
```{=html}
<center>
```
$IL = 10 \log_{10} \bigg(\frac{I}{I_{ref}}\bigg) \mbox{ dB} \,$
```{=html}
</center>
```
where $I_{ref}$ is a reference intensity.
Since the intensity carried by a traveling wave is proportional to the
square of the pressure amplitude, the intensity level can be expressed
as the sound pressure level
```{=html}
<center>
```
$SPL = 10 \log_{10} \bigg(\frac{{P_e}^2}{{P_{ref}}^2}\bigg) = 20 \log_{10} \bigg(\frac{P_e}{P_{ref}}\bigg) \mbox{ dB} \,$
```{=html}
</center>
```
where $P_e$ is the measured effective pressure amplitude of the sound
wave and $P_{ref}$ is the reference effective pressure amplitude. The
effective sound pressure is the root mean square of the instantaneous
sound pressure over a given interval of time. $SPL$ is also called sound
level $L_p$.
For air, the pressure reference is taken to be
```{=html}
<center>
```
$P_{ref} = 20 \cdot 10^{-6} \mbox{ Pa} \$,
```{=html}
</center>
```
the lowest sound pressure a human can hear, also called the threshold of
hearing.
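A one-function sketch of this conversion, using the 20 μPa reference above:

``` python
import math

P_REF = 20e-6  # reference effective pressure for air [Pa]

def spl(p_rms):
    """Sound pressure level in dB for an rms pressure in Pa."""
    return 20 * math.log10(p_rms / P_REF)

print(spl(20e-6))  # 0 dB: the threshold of hearing
print(spl(1.0))    # about 94 dB for 1 Pa rms
```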
![](Plot_of_decibel_and_inverse.png "Plot_of_decibel_and_inverse.png")
```{=html}
<center>
```
**Figure 1 - Plot of decibel and inverse**
```{=html}
</center>
```
## Sound attenuation
Study of outdoor sound activity requires the definition of a source and
a receiver in order to explain the different phenomenon involved in the
process. The sound attenuation due to its propagation in the atmosphere
can be described in terms of its total attenuation $A_{T}$ in dB between
the source and the receiver. The total attenuation $A_{T}$ can be
expressed as
```{=html}
<center>
```
$A_{T} = L_{ps} - L_{pr} = 10 \log_{10} \bigg(\frac{I_{ps}}{I_{pr}}\bigg) = 20 \log_{10} \bigg( \frac{P_{s}}{P_{r}} \bigg) \mbox{ dB} \,$
```{=html}
</center>
```
where $L_{ps}$ is the sound pressure level of the root-mean-square (rms)
sound pressure $P_s$ at a distance $s$ near the source and $L_{pr}$ is
the corresponding sound pressure level with an rms sound pressure $P_r$
measured at a distance r from the source.
The total attenuation is defined as the sum of the attenuation due to
geometric spreading $A_s$, the attenuation due to atmospheric absorption
$A_a$, and the excess attenuation due to all other effects $A_e$, namely
```{=html}
<center>
```
$A_{T} = A_s + A_a + A_e \mbox{ dB} \,$
```{=html}
</center>
```
The excess attenuation $A_e$ can include attenuation from the ground in
a homogeneous atmosphere $A_g$, refraction by a non-homogeneous
atmosphere, attenuation by diffraction and reflection by a barrier or
obstacle, and scattering or diffraction effects due to turbulence. The
values of attenuation are normally positive.
## Spreading losses
The geometric spreading loss $A_s$, in dB, between two points at a
distance $r_1$ and $r_2$ from a source can be expressed as
```{=html}
<center>
```
$A_{s} = 20 g \log_{10} \bigg( \frac{r_{2}}{r_{1}} \bigg) \quad \mbox{dB}$
```{=html}
</center>
```
where $g$ is a constant given by the geometry of the problem. $g = 0$
for plane wave propagation (uniform pipe), $g = 0.5$ for cylindrical
propagation from a line source, and $g = 1$ for spherical wave
propagation from a point source. It is noticed that for spherical wave
propagation from a point source, doubling the distance from the source
($r_2 = 2 r_1$) corresponds to a loss of $6 \mbox{dB}$.
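The sketch below evaluates the spreading loss per doubling of distance for the three geometries, confirming the 6 dB figure for spherical spreading.

``` python
import math

# Sketch: geometric spreading loss A_s = 20 g log10(r2 / r1).
def spreading_loss(r1, r2, g):
    return 20 * g * math.log10(r2 / r1)

for g, name in ((0.0, "plane"), (0.5, "cylindrical"), (1.0, "spherical")):
    print(f"{name:12s}: {spreading_loss(1.0, 2.0, g):.1f} dB per doubling")
```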
## Attenuation by atmospheric absorption
Absorption of sound through the atmosphere is due to shear viscosity,
thermal conductivity or heat dissipation, and molecular relaxation due
to oxygen, nitrogen, and water vapor vibrational, rotational, and
translational energy. The attenuation $A_a$, in dB due to atmospheric
absorption can be expressed as
```{=html}
<center>
```
$A_{a} = -20 \log_{10} \left[ \frac{P(r)}{P(0)} \right] = -20 \log_{10} [ \exp (- \alpha r) ] = a r \quad \mbox{dB}$
```{=html}
</center>
```
where $r$, in meters, is the path length of the traveling wave, $P(r)$
is the sound pressure after traveling the distance $r$, $P(0)$ is the
initial sound pressure at $r = 0$, $\alpha$ is the attenuation
coefficient in Nepers per meter, and $a$ is the attenuation coefficient
in dB per meter.
Atmospheric absorption depends on the pressure, relative humidity and
frequency for air in a still atmosphere. The attenuation coefficient $a$
for pure tone frequencies can be expressed as
```{=html}
<center>
```
$\frac{a}{p_s} = \frac{20}{\ln 10}\ \frac{F^2}{p_{s0}} \left \{ 1.84 \times 10^{-11} \left( \frac{T}{T_0} \right)^{1/2} + \left( \frac{T}{T_0} \right)^{-5/2} \left[ 0.01275 \frac{e^{-2239.1/T}}{F_{r,O}+F^2/F_{r,O}} + 0.1068\frac{e^{-3352/T}}{F_{r,N}+F^2/F_{r,N}} \right] \right \} \quad \frac{\mathrm{dB}}{\mathrm{m} \cdot \mathrm{atm}} \,$
```{=html}
</center>
```
with $F = f/p_s$, $F_{r,O} = f_{r,O}/p_s$, and $F_{r,N} = f_{r,N}/p_s$,
and where $f$ is the acoustic frequency in Hz, $p_s$ is the atmospheric
pressure, $p_{s0}$ is the reference atmospheric pressure (1 atm), $T$ is
the atmospheric temperature in K, $T_0$ is the reference temperature
(293.15 K), $f_{r,O}$ is the relaxation frequency of molecular oxygen
and $f_{r,N}$ is the relaxation frequency of molecular nitrogen. Scaled
relaxation frequencies for oxygen and nitrogen formulas from
experimental measurements are given by
```{=html}
<center>
```
$F_{r,O} = \frac{1}{p_{s0}} \left( 24 + 4.04 \times 10^4 h \frac{0.02 + h}{0.391 + h} \right) \quad \frac{\mathrm{Hz}}{\mathrm{atm}}$
```{=html}
</center>
```
and
```{=html}
<center>
```
$F_{r,N} = \frac{1}{p_{s0}} \left( \frac{T_0}{T} \right)^{1/2} \left( 9 + 280 h \times \exp \left\{ -4.17 \left[ \left( \frac{T_0}{T} \right)^{1/3} - 1 \right] \right\} \right) \quad \frac{\mathrm{Hz}}{\mathrm{atm}}$
```{=html}
</center>
```
where $h$ is the molar concentration of water vapor (absolute humidity)
in percent. $h$ is calculated from the relative humidity $h_r$ as
follows
```{=html}
<center>
```
$h = p_{s0} \left( \frac{h_r}{p_s} \right) \left( \frac{p_{sat}}{p_{s0}} \right) \quad \%$
```{=html}
</center>
```
where the saturated vapor pressure $p_{sat}$ is given by
```{=html}
<center>
```
$p_{sat} = p_{s0} \times 10^{ -6.8346 (T_{01}/T)^{1.261} + 4.6151 } \quad \mbox{atm}$
```{=html}
</center>
```
with $T_{01} = 273.16 \mbox{K}$.
The formulas are valid for pressures under 2 atm, temperatures under
330 K (57 °C, or 134 °F), and altitudes up to 3 km. One can see from
the graph and formulas that the absorption coefficient is higher for a
higher frequency and/or a higher pressure.
The attenuation coefficient $\alpha$ for pure tone frequencies is shown
in Figure 2 for air at 20 °C as a function of frequency per atmosphere
and relative humidity per atmosphere. The matlab script used to produce
the graph is shown in Appendix A.
```{=html}
<center>
```
![](Atmospheric_sound_absorption_coefficient_2.svg "Atmospheric_sound_absorption_coefficient_2.svg"){width="650"}
```{=html}
</center>
```
```{=html}
<center>
```
**Figure 2 - Attenuation coefficient for atmospheric absorption per
atmosphere as a function of frequency and relative humidity, for air at
20 °C.
**
```{=html}
</center>
```
Values of attenuation can also be obtained from Table 1 for different
temperatures, relative humidities and pure tone frequencies at 1
atmosphere.
```{=html}
<center>
```
**Table 1 - Atmospheric attenuation coefficient $a$ (dB/km) at selected
frequencies at 1 atm**
Temperature Relative humidity (%) 62.5 Hz 125 Hz 250 Hz 500 Hz 1000 Hz 2000 Hz 4000 Hz 8000 Hz
------------- ----------------------- --------- -------- -------- -------- --------- --------- --------- ---------
30 °C 10 0.362 0.958 1.82 3.40 8.67 28.5 96.0 260
20 0.212 0.725 1.87 3.41 6.00 14.5 47.1 165
30 0.147 0.543 1.68 3.67 6.15 11.8 32.7 113
50 0.091 0.351 1.25 3.57 7.03 11.7 24.5 73.1
70 0.065 0.256 0.963 3.14 7.41 12.7 23.1 59.3
90 0.051 0.202 0.775 2.71 7.32 13.8 23.5 53.5
20 °C 10 0.370 0.775 1.58 4.25 14.1 45.3 109 175
20 0.260 0.712 1.39 2.60 6.53 21.5 74.1 215
30 0.192 0.615 1.42 2.52 5.01 14.1 48.5 166
50 0.123 0.445 1.32 2.73 4.66 9.86 29.4 104
70 0.090 0.339 1.13 2.80 4.98 9.02 22.9 76.6
90 0.071 0.272 0.966 2.71 5.30 9.06 20.2 62.6
10 °C 10 0.342 0.788 2.29 7.52 21.6 42.3 57.3 69.4
20 0.271 0.579 1.20 3.27 11.0 36.2 91.5 154
30 0.225 0.551 1.05 2.28 6.77 23.5 76.6 187
50 0.160 0.486 1.05 1.90 4.26 13.2 46.7 155
70 0.122 0.411 1.04 1.93 3.66 9.66 32.8 117
90 0.097 0.348 0.996 2.00 3.54 8.14 25.7 92.4
0 °C 10 0.424 1.30 4.00 9.25 14.0 16.6 19.0 26.4
20 0.256 0.614 1.85 6.16 17.7 34.6 47.0 58.1
30 0.219 0.469 1.17 3.73 12.7 36.0 69.0 95.2
50 0.181 0.411 0.821 2.08 6.83 23.8 71.0 147
70 0.151 0.390 0.763 1.61 4.64 16.1 55.5 153
90 0.127 0.367 0.760 1.45 3.66 12.1 43.2 138
```{=html}
</center>
```
The effective atmospheric attenuation of a constant-percentage band of a
broadband noise is normally less than for pure-tone sound due to the
finite bandwidth and slope of the filter
skirts. Some atmospheric
attenuation also occurs in fog and
precipitation,
in dust in
air,
and at frequencies below 10 Hz due to electromagnetic radiation of moist
air
molecules.
## Reflection from the surface of a solid
When a wavefront comes in to contact with a solid surface, it is
reflected away from that surface. The angle of reflection of the sound
wave is equal to the angle of incidence of the wave. Reflected waves can
interfere with incident waves causing constructive and destructive
interference. This can cause a standing wave pattern and resonance
because the incident and reflected waves travel in opposite directions.
Near the surface of the solid, the sound pressure is enhanced
because the pressure of the reflected wave adds to the pressure of
the incident wave.
![](Sound_Angle_of_Incidence.png "Sound_Angle_of_Incidence.png")
```{=html}
<center>
```
**Figure 3 - Angle of incidence for a reflected sound wave**
```{=html}
</center>
```
## Attenuation over the ground
Sound propagation near the ground is affected by absorption and
reflection of the sound waves by the ground. Sound can either leave a
source and follow a straight path to a receiver or be reflected and/or
absorbed by the ground. How the sound wave reacts with the ground is
influenced by the ground impedance, which relates pressure and particle velocity.
## Refraction
Refraction of sound is normally defined as the deviation of a sound wave
leaving a fluid and entering another one with different speed of sound.
For outdoor sound propagation refraction of sound waves causes the waves
to bend due to a change in the speed of sound. This change in the speed
of sound is caused by strong wind speeds and temperature gradients. An
upward refraction reduces sound levels near the ground, while a downward
refraction helps sound travel over obstacles such as noise barriers. A
downward refraction will occur if the air above the earth is warmer than
the air at the surface. The warmer air above the earth has a faster
speed of sound, causing the wave to bend back towards the earth. The
same phenomenon will occur for hot air over cold ground.
![](Outdoor_Sound_Refraction.png "Outdoor_Sound_Refraction.png")
```{=html}
<center>
```
**Figure 4 - Outdoor sound refraction caused by temperature gradients**
```{=html}
</center>
```
## Diffraction
Diffraction is the mechanism by which sound can propagate and spread out
beyond an opening and around obstacles. The traveling waves tend to bend
around obstacles. Diffraction is related to the wavelength of the sound
produced by the source. It is stronger for lower frequencies. High
frequencies propagate in a more directional manner. This is why low
frequencies can be heard better from behind obstacles and in shadow
zones. If a wavefront travels towards a small opening, diffraction will
cause the waves to spread out past the opening in a spherical manner.
When a wavefront passes an obstacle that is small compared to the
wavelength, diffraction will cause the sound waves to bend around it and
the wavefront will reconstruct past the obstacle. This means that one
could not identify the presence of that small obstacle from sound
measurements far from the obstacle and the source.
![](Sound_diffraction_paths.png "Sound_diffraction_paths.png")
```{=html}
<center>
```
**Figure 5 - Different paths of sound propagation behind a barrier**
```{=html}
</center>
```
![](Sound_Diffraction_from_a_Hole.png "Sound_Diffraction_from_a_Hole.png")
```{=html}
<center>
```
**Figure 6 - Outdoor sound diffraction behind a small opening**
```{=html}
</center>
```
## Sound reduction
Several mechanisms can be used in order to reduce noise in outdoor
environment. Different noise barriers used to reduce highway noise are
shown in Figure 7. Another way of reducing noise is to use vegetation.
Figure 8 shows pictures of vertical vegetation walls.
```{=html}
<center>
```
![](Geluidswal_123.jpg "Geluidswal_123.jpg"){width="300"} ![](TullamarineFwy.jpg "TullamarineFwy.jpg"){width="300"} ![](Slunečný_vršek,_K_Horkám_-_Bratislavská.jpg "Slunečný_vršek,_K_Horkám_-_Bratislavská.jpg"){width="300"}
----------------------------------------------------------- ----------------------------------------------------------- -------------------------------------------------------------------------------------------------------------
```{=html}
</center>
```
```{=html}
<center>
```
**Figure 7 - Different types of noise barriers**
```{=html}
</center>
```
```{=html}
<center>
```
![](Jardin_vertical_de_Plaza_del_Pericón,_Málaga..JPG "Jardin_vertical_de_Plaza_del_Pericón,_Málaga..JPG"){width="300"} ![](Mur_vegetal_avignon_nuit1.jpg "Mur_vegetal_avignon_nuit1.jpg"){width="300"} ![](CaixaForumMadridyJardinVertical.jpg "CaixaForumMadridyJardinVertical.jpg"){width="300"}
------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------
```{=html}
</center>
```
```{=html}
<center>
```
**Figure 8 - Vertical vegetation walls**
```{=html}
</center>
```
## Useful Websites
- HyperPhysics,
Section on the basics of sound propagation
- Acoustical Porous Material Recipes
## References and further reading
References are presented in order of date of publication.
**General**
1. Piercy, Embleton, Sutherland, *Review of noise propagation in the
atmosphere*, J. Acoust. Soc.
Am. Volume 61, Issue 6, pp. 1403--1418, June 1977
2. Delany,
*Sound propagation in the atmosphere - A historical
review*,
Acustica. Vol. 38, pp. 201--223, October 1977
3. Piercy, Embleton, *Review of sound propagation in the atmosphere
(A)*, J. Acoust. Soc. Am.
Volume 69, Issue S1, pp. S99-S99, May 1981
4. Crocker, *Handbook
of
Acoustics*,
John Wiley & Sons, February 1998
5. Kinsler, Frey, Coppens, Sanders,
*Fundamentals of
Acoustics*,
4th Ed., John Wiley & Sons, New York, 2000
**Speed of sound**
1. Wong, *Speed of sound in standard
air*, J. Acoust. Soc. Am.
Volume 79, Issue 5, pp. 1359--1366, May 1986
**Absorption of sound in the atmosphere**
1. Calvert, Coffman, Querfeld,
*Radiative
Absorption of Sound by Water Vapor in the
Atmosphere*, J. Acoust. Soc.
Am. Volume 39, Issue 3, pp. 532--536, March 1966
2. Henley and Hoidale,
*Attenuation and dispersion of acoustic energy by atmospheric
dust*, J. Acoust. Soc. Am.
Volume 54, Issue 2, pp. 437--445, August 1973
3. Sutherland, Piercy, Bass, Evans, *Method for calculating the
absorption of sound by the
atmosphere*, J. Acoust. Soc.
Am. Volume 56, Issue S1, pp. S1-S1, November 1974
4. Bass, Sutherland, Piercy, Evans,
[*Absorption of sound by the atmosphere*](http://adsabs.harvard.edu/abs/1984papm...17..145B),
Physical acoustics: Principles and
methods. Volume 17 (A85-28596 12-71). Orlando, FL, Academic Press,
Inc., p. 145-232, 1984
5. Bass, Sutherland, Zuckerwar,
*Atmospheric absorption of sound:
Update*, J. Acoust. Soc. Am.
Volume 88, Issue 4, pp. 2019--2021, October 1990
6. Bass, Sutherland, Zuckerwar, Blackstock, Hester,
*Atmospheric absorption of sound: Further
developments*, J. Acoust. Soc.
Am. Volume 97, Issue 1, pp. 680--683, January 1995
**Attenuation over the ground**
1. Embleton, Piercy, and Olson, *Outdoor sound propagation over ground
of finite impedance*, J.
Acoust. Soc. Am. Volume 59, Issue 2, pp. 267--277, February 1976
2. Bolen and Bass, *Effects of ground cover on the propagation of
sound through the
atmosphere*, J. Acoust. Soc.
Am. Volume 69, Issue 4, pp. 950--954, April 1981
3. Embleton, Piercy, and Daigle, *Effective flow resistivity of ground
surfaces determined by acoustical
measurements*, J. Acoust. Soc.
Am. Volume 74, Issue 4, pp. 1239--1244, October 1983
4. Rasmussen, *On the effect of terrain profile on sound propagation
outdoors*,
Journal of Sound and Vibration, Volume 98, Issue 1, Pages 35--44,
January 1985
5. Attenborough, *Review of ground effects on outdoor sound
propagation from continuous broadband
sources*,
Applied Acoustics, Volume 24, Issue 4, Pages 289-319, 1988
6. Hess, Attenborough, and Heap, *Ground characterization by
short-range propagation
measurements*, J. Acoust. Soc.
Am. Volume 87, Issue 5, pp. 1975--1986, May 1990
7. Attenborough, Taherzadeh, Bass, Di, and others *Benchmark cases for
outdoor sound propagation
models*, J. Acoust. Soc. Am.
Volume 97, Issue 1, pp. 173--191, January 1995
**Vegetation**
1. Aylor, *Noise Reduction by Vegetation and
Ground*, J. Acoust. Soc. Am.
Volume 51, Issue 1B, pp. 197--205, January 1972
2. Bullen, Fricke, *Sound propagation through
vegetation*,
Journal of Sound and Vibration, Volume 80, Issue 1, 8 January 1982,
Pages 11--23, May 1981
3. Price, Attenborough, Heap, *Sound attenuation through trees:
Measurements and models*, J.
Acoust. Soc. Am. Volume 84, Issue 5, pp. 1836--1844, November 1988
**Barriers and screens**
1. Maekawa, *Noise reduction by
screens*,
Applied Acoustics, Volume 1, Issue 3, Pages 157-173, July 1968
2. Jonasson, *Sound reduction by barriers on the
ground*,
Journal of Sound and Vibration, Volume 22, Issue 1, Pages 113-126,
May 1972
3. Kurze, *Noise reduction by
barriers*, J. Acoust. Soc. Am.
Volume 55, Issue 3, pp. 504--518, March 1974
4. Isei, Embleton, Piercy, *Noise reduction by barriers on finite
impedance ground*, J. Acoust.
Soc. Am. Volume 67, Issue 1, pp. 46--58, January 1980
5. Li, Law, Kwok, *Absorbent parallel noise barriers in urban
environments*,
Journal of Sound and Vibration Volume 315, Issues 1-2, Pages
239-257, August 2008
**Absorbent materials**
1. Delany and Bazley, *Acoustical properties of fibrous absorbent
materials*,
Applied Acoustics, Volume 3, Issue 2, April 1970, Pages 105-116
2. Attenborough, *Acoustical characteristics of porous
materials*,
Physics Reports Volume 82, Issue 3, Pages 179-227, February 1982
3. Lauriks, Cops, Verhaegen, *Acoustical properties of elastic
porous
materials*,
Journal of Sound and Vibration Volume 131, Issue 1, Pages 143-156,
22 May 1989
## Appendices
### Appendix A - Matlab program for the plot of the attenuation coefficient
``` matlab
clear all;
clc ;
close all;
T_0 = 293.15;
T_01 = 273.16 ;
T = 20 + 273.15;
p_s0 = 1;
F = logspace(1,6);
ler=length(F);
hrar=[0 10 20 40 60 80 100];
a_ps_ar=zeros(7,ler);
for k=1:7
hr=hrar(k);
psat = p_s0*10^(-6.8346*(T_01/T)^1.261 + 4.6151);
h = p_s0*(hr)*(psat/p_s0);
F_rO = 1/p_s0*(24 + 4.04*10^4*h*(0.02+h)/(0.391+h));
F_rN = 1/p_s0*(T_0/T)^(1/2)*( 9 + 280*h*exp(-4.17*((T_0/T)^(1/3)-1)) );
alpha_ps= 100*F.^2./p_s0.*( 1.84*10^(-11)*(T/T_0)^(1/2)...
+ (T/T_0)^(-5/2)*(0.01275*exp(-2239.1/T)./(F_rO + F.^2/F_rO)...
+ 0.1068*exp(-3352/T)./(F_rN + F.^2/F_rN) ) );
a_ps_ar(k,:) = alpha_ps*20/log(10);
end
psvg = figure (1);
loglog(F,a_ps_ar(1,:), F,a_ps_ar(2,:), F,a_ps_ar(3,:), F,a_ps_ar(4,:),...
F,a_ps_ar(5,:), F,a_ps_ar(6,:), F,a_ps_ar(7,:),'LineWidth',1);
xlabel({'f / p_s [Hz/atm]';'Frequency/pressure'},'FontSize',10,...
'FontWeight','normal','FontName','Times New Roman');
ylabel('Absorption coefficient/pressure a / p_s [dB/100 m atm]',...
'FontSize',10,'FontWeight','normal','FontName','Times New Roman');
title({'Sound absorption coefficient per atmosphere for air at 20°C ';...
'according to relative humidity per atmosphere'},...
'FontSize',10,'FontWeight','bold','FontName','Times New Roman')
hleg = legend(' 0',' 10',...
' 20',' 40',' 60',...
' 80',' 100');
v = get(hleg,'title');
set(v,'string',{'h_r / p_s [%/atm]'},'FontName','Times New Roman','FontSize',10,...
'BackgroundColor', 'white','EdgeColor','white','HorizontalAlignment','center');
set(hleg,'Location','SouthEast','EdgeColor','black')
axis([1e1 1e6 1e-3 1e4]);
grid on;
set(gca,'gridlinestyle','-');
set(gca,'MinorGridLineStyle','-')
%plot2svg('Absorption_coefficient.svg',psvg);
```
### Appendix B - Python program for the plot of the attenuation coefficient
``` python
#!/usr/bin/python3
import math
import numpy as np
import matplotlib.pyplot as plt

## 1 atm in Pa
ps0 = 1.01325e5

def absorption(f, t=20, rh=60, ps=ps0):
    """ In dB/m
    f: frequency in Hz
    t: temperature in °C
    rh: relative humidity in %
    ps: atmospheric pressure in Pa
    From http://en.wikibooks.org/wiki/Engineering_Acoustics/Outdoor_Sound_Propagation
    See __main__ for actual curves.
    """
    T = t + 273.15
    T0 = 293.15
    T01 = 273.16
    Csat = -6.8346 * math.pow(T01 / T, 1.261) + 4.6151
    rhosat = math.pow(10, Csat)
    H = rhosat * rh * ps0 / ps
    frn = (ps / ps0) * math.pow(T0 / T, 0.5) * (
        9 + 280 * H * math.exp(-4.17 * (math.pow(T0 / T, 1/3.) - 1)))
    fro = (ps / ps0) * (24.0 + 4.04e4 * H * (0.02 + H) / (0.391 + H))
    alpha = f * f * (
        1.84e-11 / (math.pow(T0 / T, 0.5) * ps / ps0)
        + math.pow(T / T0, -2.5)
        * (
            0.10680 * math.exp(-3352 / T) * frn / (f * f + frn * frn)
            + 0.01278 * math.exp(-2239.1 / T) * fro / (f * f + fro * fro)
        )
    )
    return 20 * alpha / math.log(10)

def plot():
    ## Figure in http://en.wikibooks.org/wiki/Engineering_Acoustics/Outdoor_Sound_Propagation
    ax = plt.subplot(111)
    fs = np.logspace(1, 6, num=100, endpoint=True, base=10)
    ys = np.zeros(fs.shape)
    rh = (0, 10, 20, 40, 60, 80, 100)
    for r in rh:
        for i in np.arange(fs.shape[0]):
            ys[i] = absorption(fs[i], rh=r)
        ax.loglog(fs, 100 * ys, label='rh:%d' % r)
    ax.grid(True)
    ax.set_xlabel('Frequency/pressure [Hz/atm]')
    ax.set_ylabel('Absorption coefficient/pressure [dB/100m.atm]')
    ax.legend(loc='lower right')
    plt.show()

def table():
    p = ps0
    for t in [30, 20, 10, 0]:
        for rh in [10, 20, 30, 50, 70, 90]:
            print("T=%2d RH=%2d " % (t, rh), end='')
            for f in [62.5, 125, 250, 500, 1000, 2000, 4000, 8000]:
                a = absorption(f, t, rh, p)
                print("%7.3f " % (a * 1000), end='')
            print()

if __name__ == '__main__':
    table()
    plot()
```
# Engineering Acoustics/Anechoic Tiling
## Introduction
*Image: HMS Triumph (S93) with anechoic tiling.*
One important application of acoustics research is the testing, design,
and construction of anechoic tiles for underwater acoustic stealth.
Anechoic tiling, first used in the Second World War by German U-boats
(codenamed "Alberich" [^1]), is used to reduce the acoustic signature
of naval vessels. The tiles both reduce the reflection of active SONAR
off the pressure hull and reduce the internal noise transmitted through
the pressure hull that can be picked up by passive SONAR. In modern
times, anechoic tiles are present on nearly all submarines. The
frequency range of SONAR, and consequently the range of interest for
anechoic tiling to minimize detection, is around 1-30 kHz [^2].
The main sound attenuation mechanism in the tiles comes from resonant
scattering of the sound waves by air cavities in the rubber. The use of
air bubbles to attenuate sound was first published by German
acousticians Erwin Meyer and Eugen Skudrzyk in a report written in the
British occupied zone of Germany in 1946, translated to English in 1950
for an unclassified release by the U.S. Navy [^3]. The air bubbles
attenuate sound by acting as resonant oscillators, dissipating acoustic
energy through thermal losses, frictional losses, and other
processes [^4].
Tank measurements of acoustic parameters with the panel surrounded on
both sides by water, or "free-field" measurements, are theoretically
simple and can be conducted in an interior laboratory setting [^5];
they are described in later sections. Audoly [^6] presents a method to
transfer free-field acoustic properties obtained from tank measurements
to the acoustic properties of panels with arbitrary backing, such as
the rigid backing of a submarine hull.
## Planar Waves
The transmission and reflection coefficients $\hat{T}$ and $\hat{R}$ are
presented as ratios of the incident, reflected, and transmitted acoustic
pressure magnitudes:

$$\hat{T} = \frac{\hat{P}_{transmitted}}{\hat{P}_{incident}},\quad \quad \hat{R} = \frac{\hat{P}_{reflected}}{\hat{P}_{incident}}$$
The conservation of acoustic power from transmitted waves through a
panel is as follows: $|\hat{T}|^2+|\hat{R}|^2+|\hat{A}|^2=1$
where $\hat{A}$ is the acoustic absorption coefficient. From the
conservation-of-energy equation, it is evident that to minimize
$|\hat{T}|$ and $|\hat{R}|$, the acoustic energy dissipated,
$|\hat{A}|$, should be increased. In materials such as metals, in the
frequency ranges pertinent to this study, $|\hat{A}| \approx 0$. In
rubber materials, and especially those with air voids [^7] [^8],
$|\hat{A}|$ can no longer be assumed negligible [^9].
The terms \"insertion loss\" ($IL$) or \"echo reduction\" ($ER$) are
used. The insertion loss is the reduction (in decibels) of the acoustic
power of the insertion of a panel, related to the transmission
coefficient: $IL = -10\log|\hat{T}|^2$, and the echo reduction is the
reduction (in decibels) of the acoustic power after a reflection:
$ER = -10\log|\hat{R}|^2$.
### Three-layered media: No panel absorption
For a three-layered medium with two semi-infinite fluid layers, each
with impedance $r_{1/3}=\rho_{1/3}c_{1/3}$, on either side of a sample
of thickness $L$ and impedance $r_{2}=\rho_{2}c_{2}$, with
$k=\omega/c_2$, the reflection coefficient $\hat{R}$ at normal
incidence is given by the standard three-media result [^10]:

$$\hat{R} = \frac{\left(1 - \frac{r_1}{r_3}\right)\cos kL + j\left(\frac{r_2}{r_3} - \frac{r_1}{r_2}\right)\sin kL}{\left(1 + \frac{r_1}{r_3}\right)\cos kL + j\left(\frac{r_2}{r_3} + \frac{r_1}{r_2}\right)\sin kL}$$

For the symmetrical case of identical first and third fluids
($r_3=r_1$), this reduces to the following simplified
formulations [^11]:

$$|\hat{T}|^2 = \frac{1}{1 + \frac{1}{4}\left(m - \frac{1}{m}\right)^2\sin^2 kL}, \quad \quad |\hat{R}|^2 = 1 - |\hat{T}|^2$$
where $m=\frac{r_{1}}{r_2}$. By inspection, minima of $\hat{R}$ and
maxima of $\hat{T}$ occur at $kL = n\pi$. These lossless $IL$ and $ER$
are plotted over $kL$ in the figures of the next section (for aluminum
panels suspended in water) as the black lines.
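As a numerical illustration of the lossless case, the following sketch
(in the style of the Python appendix earlier in this book) evaluates the
symmetric-case formulas over $kL$; the water and aluminum impedance
values are nominal textbook numbers assumed here, not measured data.

``` python
# Sketch: lossless three-layer |T|^2 and |R|^2 versus kL for an
# aluminum panel in water (nominal impedances, not measured data).
import numpy as np

r1 = 1000.0 * 1480.0   # water:    rho*c [rayl] (assumed nominal values)
r2 = 2700.0 * 6320.0   # aluminum: rho*c [rayl] (assumed nominal values)
m = r1 / r2            # impedance ratio m = r1/r2

kL = np.linspace(0.01, 2 * np.pi, 1000)
T2 = 1.0 / (1.0 + 0.25 * (m - 1.0 / m) ** 2 * np.sin(kL) ** 2)
R2 = 1.0 - T2          # lossless: |R|^2 = 1 - |T|^2

IL = -10 * np.log10(T2)  # insertion loss [dB]; minima at kL = n*pi
ER = -10 * np.log10(R2)  # echo reduction [dB]
print("IL at kL=pi/2: %.1f dB" % IL[np.argmin(np.abs(kL - np.pi / 2))])
```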
### Three-layered media: With panel absorption
For panel media with acoustic attenuation, modified formulas for the
insertion loss and echo reduction apply, where $\alpha$ is the
attenuation constant in $dB/m$ and $r=\frac{\alpha}{\omega c_2}$ [^12].
For the case of no acoustic attenuation, $\alpha=0$, the lossless
equations above are recovered.
A sketch of the effect of absorption in panel materials on $IL$ and $ER$
is shown in the following figures:
For a general formulation of n-layer solid panels with absorption,
methods are described in references [^13] [^14].
### Determination of *α(ω)* from experimental data
A second-order approximation for $\alpha$ is used, as $\alpha$ is not
constant over frequency [^15]; this is shown for the case of aluminum
and nitrile rubber in the figure below.
It is possible to experimentally determine $\alpha (\omega)$ from tank
acoustic tests. First, the magnitude of the absorption coefficient is
determined from conservation of energy and the measured values of
$|\hat{T}|$ and $|\hat{R}|$ [^16]:

$$|\hat{A}| = \sqrt{1 - |\hat{T}|^2 - |\hat{R}|^2}$$

The coefficient $a$ in the second-order approximation is estimated
first, and $b$ is then fit to the data [^17].
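A minimal sketch of the first step, assuming the measured coefficient
magnitudes are available as arrays over frequency:

``` python
# Sketch: |A| over frequency from measured |T| and |R|, using the
# conservation-of-energy relation |T|^2 + |R|^2 + |A|^2 = 1.
import numpy as np

def absorption_magnitude(T_mag, R_mag):
    """|A| = sqrt(1 - |T|^2 - |R|^2) for arrays of magnitudes."""
    A2 = 1.0 - np.asarray(T_mag) ** 2 - np.asarray(R_mag) ** 2
    return np.sqrt(np.clip(A2, 0.0, None))  # clip guards measurement noise

print(absorption_magnitude([0.3], [0.5]))  # -> [0.81240384]
```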
## Experimental Investigations
To characterize $|\hat{T}|$ and $|\hat{R}|$ of panel materials,
free-field acoustic measurements are performed: in a water-filled tank,
a parametric array source $(a)$ produces a highly directional discrete
acoustic wave [^18] with the far-field directionality function
$D(\theta)$ [^19]. For demonstration, the far-field directionality of an
underwater array source is included in the insertion-loss sample
measurement figure below. The shape of the discrete wave is shown in
[^20].
### Insertion Loss
Using a hydrophone $(c)$, one recording is made with the sample $(b)$ in
place to record transmitted pressure $P_t$ and one measurement without
the sample in place to record incident pressure $P_i$. The measurement
configurations are shown in the following figures.
### Echo Reduction
For reflection experiments, the reflected pressure $P_r$ off the sample
is measured, and for the incident pressure $P_i$ measurement a foam
reflector is used. A foam reflector has a high acoustic impedance
mismatch with the water and reflects sound efficiently.
### Pressure Measurements
Shown in the following figures, the measured pressure signals over time
are recorded and processed with a Fourier transform to determine
pressure over frequency. The coefficients $|\hat{R}|$ and $|\hat{T}|$
over frequency are simply the ratios of the resultant pressure spectra.
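A sketch of this processing step, assuming two time records sampled at
rate `fs` (a hypothetical helper, not code from the cited experiments):

``` python
# Sketch: |T|(f) as the ratio of transmitted and incident pressure
# spectra from two hydrophone time records.
import numpy as np

def transmission_spectrum(p_incident, p_transmitted, fs):
    """Return frequencies [Hz] and |T|(f) = |P_t(f)| / |P_i(f)|.

    p_incident, p_transmitted: equal-length time records; fs: sample rate.
    The ratio is only meaningful in bands where the source has energy.
    """
    Pi = np.fft.rfft(np.asarray(p_incident))
    Pt = np.fft.rfft(np.asarray(p_transmitted))
    f = np.fft.rfftfreq(len(p_incident), d=1.0 / fs)
    return f, np.abs(Pt) / np.abs(Pi)
```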
### Results
Insertion loss and echo reduction plots for aluminum test samples, such
as those measured in [^21], are shown in the following figures. Aluminum
samples are used because aluminum is a material with well-known acoustic
properties and negligibly low absorption over the frequencies studied.
The experimental setup described shows good agreement with theory [^22]
[^23].
### Other Considerations
For applications of anechoic tiling on submarines, the conditions
surrounding the submarine change drastically with submerged depth.
Varying pressure, salinity, and temperature all affect the acoustic
properties of the rubber, the surrounding water, and the anechoic tile
in general. Environmental tanks such as the ones described in reference
[^24] can be used to simulate ocean conditions. The wavelength of the
sound produced by the parametric array is limited by the physical size
of the tank; achieving lower-frequency measurements necessitates the use
of larger tanks, or semi-anechoic siding on the tank walls [^25].
## References
------------------------------------------------------------------------
By: Geoffrey Chase
[^1]: James L. Lastinger and Gerald A. Sabin. Underwater sound
    absorbers: A review of published research with an annotated
    bibliography. Technical report, Naval Research Lab, Orlando FL,
    Underwater Sound Reference Div, 1970.
[^2]: N. Friedman and United States Naval Institute. The Naval
Institute guide to world naval weapons
systems. The Naval
Institute Guide To\... Series. Naval Institute Press, 1989.
[^3]: Walter Kuhl. Sound absorption and sound absorbers in water
(Dynamic properties of rubber and rubberlike substances in the
acoustic frequency
region). Dept. of
the Navy, Bureau of Ships, 1950.
[^4]:
[^5]: V.F. Humphrey. The measurement of acoustic properties of limited
    size panels by use of a parametric source. Journal of Sound and
    Vibration, 98(1):67--81, 1985.
[^6]: Christian Audoly. Determination of efficiency of anechoic or
decoupling hull coatings using water tank acoustic
measurements. In
Societe Francaise d'Acoustique, editor, Acoustics 2012, Nantes,
France, April 2012.
[^7]:
[^8]:
[^9]: E. Eugene Mikeska and John A. Behrens. Evaluation of transducer
window materials. The Journal of
the Acoustical Society of America, 59(6):1294--1298, 1976.
[^10]: Lawrence E. Kinsler, Austin R. Frey, Alan B. Coppens, and James
    V. Sanders. Fundamentals of Acoustics. 4th edition, p. 560.
    ISBN 0-471-84789-5. Wiley, 1999.
[^11]: Robert J. Bobber. Underwater electroacoustic measurements.
    Technical report, Naval Research Lab, Orlando FL, Underwater Sound
    Reference Div, 1970.
[^12]:
[^13]: A. K. Mal, C.-C. Yin, and Y. Bar-Cohen. The Influence of
Material Dissipation and Imperfect Bonding on Acoustic Wave
Reflection from Layered
Solids,
pages 927--934. Springer US, Boston, MA, 1988.
[^14]: Bernard Hosten and Michel Castaings. Transfer matrix of
multilayered absorbing and anisotropic media. Measurements and
simulations of ultrasonic wave propagation through composite
materials. The Journal of the
Acoustical Society of America, 94(3):1488--1495, 1993.
[^15]:
[^16]:
[^17]:
[^18]:
[^19]: P. D. Thorne. A broad-band acoustic source for underwater
laboratory applications.
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency
Control, 34(5):515--523, Sept 1987.
[^20]:
[^21]:
[^22]:
[^23]: Victor F Humphrey, Stephen P Robinson, John D Smith, Michael J
Martin, Graham A Beamiss, Gary Hayman, and Nicholas L Carroll.
Acoustic characterization of panel materials under simulated ocean
conditions using a parametric array
source. The Journal of the
Acoustical Society of America, 124(2):803--814, 2008.
[^24]:
[^25]:
# Engineering Acoustics/Phonograph Sound Reproduction
## Phonograph Sound Reproduction
The content of this article is intended as an electro-mechanical
analysis of phonograph sound reproduction. For a general history and
overview of phonograph technology refer to Wikipedia entries on
Phonograph and Magnetic
Cartridges.
## A Simplified Phono Model for Phono (Magnetic) Cartridges
The basic principle of phonograph sound reproduction is that a
small-diameter diamond needle follows a groove cut into the surface of a
record. The resulting needle velocity is mechanically coupled to one
element of an electrical coil transducer to produce an electrical
current.
Two main variants of cartridge design exist. Moving Magnet (MM) designs
couple a permanent magnet to the needle, causing the magnet to move near
an electrical coil solenoid. Moving Coil (MC) cartridges couple an
electrical coil to the needle, causing the coil to move in a fixed
permanent magnetic field. In both cartridge designs the relative motion
of the magnetic flux field induces current flow in the electrical coil.
Figure 1 demonstrates this process with a simplified MM cartridge
schematic. In this configuration the position of the magnet alters the
magnetic domains of the surrounding
ferromagnetic transducer
core. Similarly, the velocity of the magnet induces a change in the
magnetic flux of the
transducer core, and according to the principle of electromagnetic
induction, a
current in the electrical coil is produced.
*Figure 1: Top view of a simplified (MM) phono cartridge schematic.*
## Electro-Mechanical Analogy of a Phono Cartridge
Figure 2 gives an electrical analogue model for the simplified MM
cartridge shown in Figure 1. This circuit representation of the system
was obtained according to the Mobility Analogue for
Mechanical-Acoustical
Systems.
The following assumptions are included in this model:
- Motion is limited to the horizontal plane.
- Angular velocities are proportional to linear velocities according
to the small angle assumption.
- The stylus cantilever and tonearm are perfectly rigid acting only as
mechanical transformers.
- All compliant and damping elements are represented by ideal
linearized elements.
- The MM transducer element is represented by an ideal transformer
with an aggregate coefficient *μBl*.
*Figure 2: Mechanical mobility circuit analogy for MM phonograph system.*
An estimate of the phonograph system frequency response can be obtained
by calculating the complex input impedance, $Z_o$. An analytical
expression for $Z_o$ is more easily obtained by neglecting the stylus
mass *M~s~* and the influence of the electrical system. These
assumptions are consistent with a low-frequency approximation of the
system, shown in Figure 3. The resulting system input impedance is given
by the equation for *Z~o~* below.
*Figure 3: Simplified low frequency circuit analogy for MM phonograph system.*
```{=html}
<center>
```
$\begin{matrix}
\hat{Z}_o = &
\left[\frac{
R_pL_t^2R_t + \omega^2R_p(M_c+L_t^2M_t)^2 + \frac{R_t}{\omega^2C_p^2}
}{
\left( R_p L_t^2 R_t + \frac{M_c+L_t^2M_t}{C_p} \right)^2 +
\left(\omega (M_c+L_t^2M_t) R_p - \frac{L_t^2 R_t}{\omega C_p} \right)^2
}\right]
\\
&
\\
&
+j \left[\frac{
\omega(M_c+L_t^2M_t)\left(\frac{M_c+L_t^2M_t}{C_p} - R_p^2\right)
- \frac{1}{\omega C_p}\left(\frac{M_c+L_t^2M_t}{C_p} - L_t^4 R_t^2\right)
}{
\left( R_p L_t^2 R_t + \frac{M_c+L_t^2M_t}{C_p} \right)^2 +
\left(\omega (M_c+L_t^2M_t) R_p - \frac{L_t^2 R_t}{\omega C_p} \right)^2
}\right]
\end{matrix}$
```{=html}
</center>
```
## References
The technique of applying a lumped-element system analysis was a
standard method used in the development of phonograph cartridges. In
addition to the low-frequency analysis shown, it is also possible to
conduct a simplified high-frequency analysis, for which the properties
of the stylus mass and vinyl surface compliance dominate the response.
Interestingly, poor performance at the low-frequency extreme of a
phonograph cartridge response can have substantial and detrimental
effects on the high-frequency response capability. For further reading
on this topic, a list of relevant reference material is given below.
1. Hunt, F. V. (1962). \"The Rational Design of Phonograph
Pickups.\" J. Audio Eng. Soc 10(4): 274-289.
2. Bauer, B. B. (1963). \"On the Damping of Phonograph Arms.\" J. Audio
Eng. Soc 11(3): 207-211.
3. Walton, J. (1963). \"Stylus Mass and Reproduction Distortion.\" J.
Audio Eng. Soc 11(2): 104-109.
4. Bauer, B. B. (1964). \"On the Damping of Phonograph Styli.\" J.
Audio Eng. Soc 12(3): 210-213.
5. Anderson, C. R. K., J. H.; Samson, R. S., (1965). Optimizing the
Dynamic Characteristics of a Phonograph Pickup. Audio Engineering
Society Convention 17.
6. Anderson, C. R. K., James H.; Samson, Robert S., (1966).
\"Optimizing the Dynamic Characteristics of a Phonograph
Pickup.\" J. Audio Eng. Soc 14(2): 145-152.
7. White, J. V. (1972). \"Mechanical Playback Losses and the Design of
Wideband Phonograph Pickups.\" J. Audio Eng. Soc 20(4): 265-270.
8. Nakai, G. T. (1973). \"Dynamic Damping of Stylus Compliance/Tone-Arm
Resonance.\" J. Audio Eng. Soc 21(7): 555-562.
9. Kates, J. M. (1976). \"Low-Frequency Tracking Behavior of Pickup
Arm-Cartridge Systems.\" J. Audio Eng. Soc 24(4): 258-262.
10. Bauer, B. B. (1977). \"The High-Fidelity Phonograph Transducer.\" J.
Audio Eng. Soc 25(10/11): 729-748.
11. Kogen, J. H. (1977). \"Record Changers, Turntables, and Tone Arms-A
Brief Technical History.\" J. Audio Eng. Soc 25(10/11): 749-758.
12. Barlow Donald A.; Garside, G. R. (1978). \"Groove Deformation and
Distortion in Recordings.\" J. Audio Eng. Soc 26(7/8): 498-510.
13. Lipshitz, S. P. (1978). \"Impulse Response of the Pickup
Arm-Cartridge System.\" J. Audio Eng. Soc 26(1/2): 20-35.
14. Takahashi, S. T., Sadao; Kaneko, Nobuyuki; Fujimoto, Yasuhiro,
(1979). \"The Optimum Pivot Position on a Tone Arm.\" J. Audio Eng.
Soc 27(9): 648-656.
15. Happ, L. R. (1979). \"Dynamic Modeling and Analysis of a Phonograph
Stylus.\" J. Audio Eng. Soc 27(1/2): 3-12.
16. Pardee, R. P. (1981). \"Determination of Sliding Friction Between
Stylus and Record Groove.\" J. Audio Eng. Soc 29(12): 890-894.
# Engineering Acoustics/Bass Reflex Enclosure Design
## Introduction
```{=html}
<div style="float:right;margin:0 0 1em 1em;">
```
![](Bassreflex-Gehäuse_(enclosure).png "Bassreflex-Gehäuse_(enclosure).png")
```{=html}
</div>
```
Bass-reflex enclosures improve the low-frequency response of loudspeaker
systems. Bass-reflex enclosures are also called \"vented-box design\" or
\"ported-cabinet design\". A bass-reflex enclosure includes a vent or
port between the cabinet and the ambient environment. This type of
design, as one may observe by looking at contemporary loudspeaker
products, is still widely used today. Although the construction of
bass-reflex enclosures is fairly simple, their design is not simple, and
requires proper tuning. This reference focuses on the technical details
of bass-reflex design. General loudspeaker information can be found
here.
## Effects of the Port on the Enclosure Response
Before discussing the bass-reflex enclosure, it is important to be
familiar with the simpler sealed enclosure system performance. As the
name suggests, the sealed enclosure system attaches the loudspeaker to a
sealed enclosure (except for a small air leak included to equalize the
ambient pressure inside). Ideally, the enclosure would act as an
acoustical compliance element, as the air inside the enclosure is
compressed and rarefied. Often, however, an acoustic material is added
inside the box to reduce standing waves, dissipate heat, and for other
reasons. This adds a resistive element to the acoustical lumped-element
model. A non-ideal model of the effect of the enclosure actually adds an
acoustical mass element to complete a series lumped-element circuit
given in Figure 1. For more on sealed enclosure design, see the Sealed
Box Subwoofer
Design
page.
```{=html}
<center>
```
*Figure 1. Sealed enclosure acoustic circuit.*
```{=html}
</center>
```
In the case of a bass-reflex enclosure, a port is added to the
construction. Typically, the port is cylindrical and is flanged on the
end pointing outside the enclosure. In a bass-reflex enclosure, the
amount of acoustic material used is usually much less than in the sealed
enclosure case, often none at all. This allows air to flow freely
through the port. Instead, the larger losses come from the air leakage
in the enclosure. With this setup, a lumped-element acoustical circuit
has the following form.
```{=html}
<center>
```
![](Vented_box_ckt.gif "Vented_box_ckt.gif")\
*Figure 2. Bass-reflex enclosure acoustic circuit.*
```{=html}
</center>
```
In this figure, $Z_{RAD}$ represents the radiation impedance of the
outside environment on the loudspeaker diaphragm. The loading on the
rear of the diaphragm has changed when compared to the sealed enclosure
case. If one visualizes the movement of air within the enclosure, some
of the air is compressed and rarified by the compliance of the
enclosure, some leaks out of the enclosure, and some flows out of the
port. This explains the parallel combination of $M_{AP}$, $C_{AB}$, and
$R_{AL}$. A truly realistic model would incorporate a radiation
impedance of the port in series with $M_{AP}$, but for now it is
ignored. Finally, $M_{AB}$, the acoustical mass of the enclosure, is
included as discussed in the sealed enclosure case. The formulas which
calculate the enclosure parameters are listed in Appendix
B.
It is important to note the parallel combination of $M_{AP}$ and
$C_{AB}$. This forms a Helmholtz resonator (click here for more
information).
Physically, the port functions as the "neck" of the resonator and the
enclosure functions as the "cavity." In this case, the resonator is
driven from the piston directly on the cavity instead of the typical
Helmholtz case where it is driven at the "neck." However, the same
resonant behavior still occurs at the enclosure resonance frequency,
$f_{B}$. At this frequency, the impedance seen by the loudspeaker
diaphragm is large (see Figure 3 below). Thus, the load on the
loudspeaker reduces the velocity flowing through its mechanical
parameters, causing an anti-resonance condition where the displacement
of the diaphragm is a minimum. Instead, the majority of the volume
velocity is emitted by the port itself rather than by the loudspeaker.
When this impedance is reflected to the electrical circuit, it is
proportional to $1/Z$, so the impedance seen by the voice coil exhibits
a minimum at $f_B$. Figure 3 shows a plot of the impedance seen at the
terminals of the loudspeaker. In this example, $f_B$ was found to be
about 40 Hz, which corresponds to the null in the voice-coil impedance.
```{=html}
<center>
```
![](_Za0_Zvc_plots.gif "_Za0_Zvc_plots.gif")\
*Figure 3. Impedances seen by the loudspeaker diaphragm and voice coil.*
```{=html}
</center>
```
## Quantitative Analysis of Port on Enclosure
The performance of the loudspeaker is first measured by its velocity
response, which can be found directly from the equivalent circuit of the
system. As the goal of most loudspeaker designs is to improve the bass
response (leaving high-frequency production to a tweeter), low frequency
approximations will be made as much as possible to simplify the
analysis. First, the inductance of the voice coil, $\it{L_E}$, can be
ignored as long as $\omega \ll R_E/L_E$. In a typical loudspeaker,
$\it{L_E}$ is of the order of 1 mH, while $\it{R_E}$ is typically
8$\Omega$, thus an upper frequency limit is approximately 1 kHz for this
approximation, which is certainly high enough for the frequency range of
interest.
Another approximation involves the radiation impedance, $\it{Z_{RAD}}$.
It can be shown \[1\] that this value is given by the following equation
(in acoustical ohms):
```{=html}
<center>
```
$Z_{RAD} = \frac{\rho_0c}{\pi a^2}\left[\left(1 - \frac{J_1(2ka)}{ka}\right) + j\frac{H_1(2ka)}{ka}\right]$
```{=html}
</center>
```
Where $J_1(x)$ and $H_1(x)$ are types of Bessel functions. For small
values of *ka*,
```{=html}
<table align=center width=50% cellpadding=10>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$J_1(2ka) \approx ka$
```{=html}
</td>
```
```{=html}
<td>
```
and
```{=html}
</td>
```
```{=html}
<td>
```
$H_1(2ka) \approx \frac{8(ka)^2}{3\pi}$
```{=html}
</td>
```
```{=html}
<td>
```
$\Rightarrow Z_{RAD} \approx j\frac{8\rho_0\omega}{3\pi^2a} = jM_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Hence, the low-frequency impedance on the loudspeaker is represented
with an acoustic mass $M_{A1}$ \[1\]. For a simple analysis, $R_E$,
$M_{MD}$, $C_{MS}$, and $R_{MS}$ (the transducer parameters, or
*Thiele-Small* parameters) are converted to their acoustical
equivalents. All conversions for all parameters are given in Appendix
A.
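As a quick numerical check of the small-*ka* approximation above, the
sketch below compares the exact piston radiation impedance with
$j\omega M_{A1}$, assuming nominal air properties and a 10 cm piston
radius (illustrative values only).

``` python
# Sketch: compare the exact circular-piston radiation impedance with the
# low-frequency mass approximation Z_RAD ~ j*omega*M_A1 quoted above.
import numpy as np
from scipy.special import j1, struve

rho0, c, a = 1.21, 343.0, 0.1   # air density, sound speed, piston radius (assumed)
f = np.logspace(1, 4, 400)
ka = 2 * np.pi * f / c * a

Z_exact = (rho0 * c / (np.pi * a**2)) * ((1 - j1(2 * ka) / ka)
                                         + 1j * struve(1, 2 * ka) / ka)
M_A1 = 8 * rho0 / (3 * np.pi**2 * a)   # low-frequency radiation mass
Z_approx = 1j * 2 * np.pi * f * M_A1

# The approximation tracks the exact imaginary part while ka << 1.
lowka = ka < 0.3
err = np.max(np.abs(Z_approx.imag[lowka] - Z_exact.imag[lowka])
             / np.abs(Z_exact.imag[lowka]))
print("max relative error for ka < 0.3: %.2f%%" % (100 * err))
```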
Then, the series masses, $M_{AD}$, $M_{A1}$, and $M_{AB}$, are lumped
together to create $M_{AC}$. This new circuit is shown below.
```{=html}
<center>
```
![](VB_LF_ckt.gif "VB_LF_ckt.gif")\
*Figure 4. Low-Frequency Equivalent Acoustic Circuit*
```{=html}
</center>
```
Unlike sealed enclosure analysis, there are multiple sources of volume
velocity that radiate to the outside environment. Hence, the diaphragm
volume velocity, $U_D$, is not analyzed but rather
$U_0 = U_D + U_P + U_L$. This essentially draws a "bubble" around the
enclosure and treats the system as a source with volume velocity $U_0$.
This "lumped" approach will only be valid for low frequencies, but
previous approximations have already limited the analysis to such
frequencies anyway. It can be seen from the circuit that the volume
velocity flowing *into* the enclosure, $U_B = -U_0$, compresses the air
inside the enclosure. Thus, the circuit model of Figure 3 is valid and
the relationship relating input voltage, $V_{IN}$ to $U_0$ may be
computed.
In order to make the equations easier to understand, several parameters
are combined to form other parameter names. First, $\omega_B$ and
$\omega_S$, the enclosure and loudspeaker resonance frequencies,
respectively, are:
```{=html}
<table align=center width=40%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_B = \frac{1}{\sqrt{M_{AP}C_{AB}}}$
```{=html}
</td>
```
```{=html}
<td>
```
$\omega_S = \frac{1}{\sqrt{M_{AC}C_{AS}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Based on the nature of the derivation, it is convenient to define the
parameters $\omega_0$ and *h*, the Helmholtz tuning ratio:
```{=html}
<table align=center width=25%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_0 = \sqrt{\omega_B\omega_S}$
```{=html}
</td>
```
```{=html}
<td>
```
$h = \frac{\omega_B}{\omega_S}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
A parameter known as the *compliance ratio* or *volume ratio*, $\alpha$,
is given by:
```{=html}
<center>
```
$\alpha = \frac{C_{AS}}{C_{AB}} = \frac{V_{AS}}{V_{AB}}$
```{=html}
</center>
```
Other parameters are combined to form what are known as *quality
factors*:
```{=html}
<table align=center width=45%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$Q_L = R_{AL}\sqrt{\frac{C_{AB}}{M_{AP}}}$
```{=html}
</td>
```
```{=html}
<td>
```
$Q_{TS} = \frac{1}{R_{AE}+R_{AS}}\sqrt{\frac{M_{AC}}{C_{AS}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
This notation allows for a simpler expression for the resulting transfer
function \[1\]:
```{=html}
<center>
```
$\frac{U_0}{V_{IN}} = G(s) = \frac{(s^3/\omega_0^4)}{(s/\omega_0)^4+a_3(s/\omega_0)^3+a_2(s/\omega_0)^2+a_1(s/\omega_0)+1}$
```{=html}
</center>
```
where
```{=html}
<table align=center width=70%>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = \frac{1}{Q_L\sqrt{h}}+\frac{\sqrt{h}}{Q_{TS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = \frac{\alpha+1}{h}+h+\frac{1}{Q_L Q_{TS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \frac{1}{Q_{TS}\sqrt{h}}+\frac{\sqrt{h}}{Q_L}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
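A short sketch, assuming illustrative values for $Q_L$, $Q_{TS}$, *h*,
and $\alpha$, shows how the normalized response shape is evaluated from
these definitions (overall scale factors are ignored, as discussed later
in the text):

``` python
# Sketch: evaluate the normalized vented-box response G(s) from the
# alignment parameters defined above (illustrative values assumed).
import numpy as np

QL, QTS = 7.0, 0.4       # leakage and total quality factors (assumed)
h, alpha = 1.0, 3.0      # tuning ratio and compliance ratio (assumed)

a1 = 1.0 / (QL * np.sqrt(h)) + np.sqrt(h) / QTS
a2 = (alpha + 1.0) / h + h + 1.0 / (QL * QTS)
a3 = 1.0 / (QTS * np.sqrt(h)) + np.sqrt(h) / QL

W = np.logspace(-1, 1, 500)   # Omega = omega / omega0
s = 1j * W                    # s normalized by omega0
G = s**3 / (s**4 + a3 * s**3 + a2 * s**2 + a1 * s + 1)
H = s * G                     # pressure-response shape
mag_db = 20 * np.log10(np.abs(H))  # frequency-response shape in dB
```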
## Development of Low-Frequency Pressure Response
It can be shown \[2\] that for $ka < 1/2$, a loudspeaker behaves as a
spherical source. Here, *a* represents the radius of the loudspeaker.
For a 15" diameter loudspeaker in air, this low frequency limit is about
150 Hz. For smaller loudspeakers, this limit increases. This limit
dominates the limit which ignores $L_E$, and is consistent with the
limit that models $Z_{RAD}$ by $M_{A1}$.
Within this limit, the loudspeaker emits a volume velocity $U_0$, as
determined in the previous section. For a simple spherical source with
volume velocity $U_0$, the far-field pressure is given by \[1\]:
```{=html}
<center>
```
$p(r) \simeq j\omega\rho_0 U_0 \frac{e^{-jkr}}{4\pi r}$
```{=html}
</center>
```
It is possible to simply let $r = 1$ for this analysis without loss of
generality because distance is only a function of the surroundings, not
the loudspeaker. Also, because the transfer function magnitude is of
primary interest, the exponential term, which has a unity magnitude, is
omitted. Hence, the pressure response of the system is given by \[1\]:
```{=html}
<center>
```
$\frac{p}{V_{IN}} = \frac{\rho_0s}{4\pi}\frac{U_0}{V_{IN}} = \frac{\rho_0Bl}{4\pi S_DR_EM_{AC}}H(s)$
```{=html}
</center>
```
Where $H(s) = sG(s)$. In the following sections, design methods will
focus on $|H(s)|^2$ rather than $H(s)$, which is given by:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$|H(s)|^2 = \frac{\Omega^8}{\Omega^8 + \left(a^2_3 - 2a_2\right)\Omega^6 + \left(a^2_2 + 2 - 2a_1a_3\right)\Omega^4 + \left(a^2_1 - 2a_2\right)\Omega^2 + 1}$
```{=html}
</td>
```
```{=html}
<td>
```
$\Omega = \frac{\omega}{\omega_0}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
This also implicitly ignores the constants in front of $|H(s)|$ since
they simply scale the response and do not affect the shape of the
frequency response curve.
## Alignments
A popular way to determine the ideal parameters has been through the use
of alignments. The concept of alignments is based upon well investigated
electrical filter theory. Filter development is a method of selecting
the poles (and possibly zeros) of a transfer function to meet a
particular design criterion. The criteria are the desired properties of
a magnitude-squared transfer function, which in this case is $|H(s)|^2$.
From any of the design criteria, the poles (and possibly zeros) of
$|H(s)|^2$ are found, which can then be used to calculate the numerator
and denominator. This is the "optimal" transfer function, which has
coefficients that are matched to the parameters of $|H(s)|^2$ to compute
the appropriate values that will yield a design that meets the criteria.
There are many different types of filter designs, each of which has
trade-offs associated with them. However, this design approach is
limited because of the structure of $|H(s)|^2$. In particular, it has
the structure of a fourth-order high-pass filter with all zeros at *s* =
0. Therefore, only those filter design methods which produce a low-pass
filter with only poles will be acceptable methods to use. From the
traditional set of algorithms, only Butterworth and Chebyshev low-pass
filters have only poles. In addition, another type of filter called a
quasi-Butterworth filter can also be used, which has similar properties
to a Butterworth filter. These three algorithms are fairly simple, thus
they are the most popular. When these low-pass filters are converted to
high-pass filters, the $s \rightarrow 1/s$ transformation produces $s^8$
in the numerator.
More details regarding filter theory and these relationships can be
found in numerous resources, including \[5\].
## Butterworth Alignment
The Butterworth algorithm is designed to have a *maximally flat* pass
band. Since the slope of a function corresponds to its derivatives, a
flat function will have derivatives equal to zero. Since a pass band
that is as flat as possible is optimal, the ideal function will have as
many derivatives equal to zero as possible at *s* = 0. Of course, if all
derivatives were equal to zero, then the function would be a constant,
which performs no filtering.
Often, it is better to examine what is called the *loss function*. Loss
is the reciprocal of gain, thus
```{=html}
<center>
```
$|\hat{H}(s)|^2 = \frac{1}{|H(s)|^2}$
```{=html}
</center>
```
The loss function can be used to achieve the desired properties, then
the desired gain function is recovered from the loss function.
Now, applying the desired Butterworth property of maximal pass-band
flatness, the loss function is simply a polynomial with derivatives
equal to zero at *s* = 0. At the same time, the original polynomial must
be of degree eight (yielding a fourth-order function). However,
derivatives one through seven can be equal to zero if \[3\]
```{=html}
<center>
```
$|\hat{H}(\Omega)|^2 = 1 + \Omega^8 \Rightarrow |H(\Omega)|^2 = \frac{1}{1 + \Omega^8}$
```{=html}
</center>
```
With the high-pass transformation $\Omega \rightarrow 1/\Omega$,
```{=html}
<center>
```
$|H(\Omega)|^2 = \frac{\Omega^8}{\Omega^8 + 1}$
```{=html}
</center>
```
It is convenient to define $\Omega = \omega/\omega_{3dB}$, since
$\Omega = 1 \Rightarrow |H(s)|^2 = 0.5$ or -3 dB. This definition allows
the matching of coefficients for the $|H(s)|^2$ describing the
loudspeaker response when $\omega_{3dB} = \omega_0$. From this matching,
the following design equations are obtained \[1\]:
```{=html}
<table align=center cellspacing=20>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = a_3 = \sqrt{4+2\sqrt{2}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = 2+\sqrt{2}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
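As a quick check, substituting these values into the general $|H(s)|^2$
above makes the $\Omega^6$, $\Omega^4$, and $\Omega^2$ coefficients
vanish, recovering the Butterworth form $\Omega^8/(\Omega^8+1)$:

``` python
# Sketch: verify that the Butterworth design values zero out the
# intermediate terms of the general |H|^2 expression.
import numpy as np

a1 = a3 = np.sqrt(4 + 2 * np.sqrt(2))
a2 = 2 + np.sqrt(2)

print(a3**2 - 2 * a2)           # Omega^6 coefficient -> ~0
print(a2**2 + 2 - 2 * a1 * a3)  # Omega^4 coefficient -> ~0
print(a1**2 - 2 * a2)           # Omega^2 coefficient -> ~0
```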
## Quasi-Butterworth Alignment
The quasi-Butterworth alignments do not have as well-defined an
algorithm as the Butterworth alignment. The name
"quasi-Butterworth" comes from the fact that the transfer functions for
these responses appear similar to the Butterworth ones, with (in
general) the addition of terms in the denominator. This will be
illustrated below. While there are many types of quasi-Butterworth
alignments, the simplest and most popular is the 3rd order alignment
(QB3). The comparison of the QB3 magnitude-squared response against the
4th order Butterworth is shown below.
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\left|H_{QB3}(\omega)\right|^2 = \frac{(\omega/\omega_{3dB})^8}{(\omega/\omega_{3dB})^8 + B^2(\omega/\omega_{3dB})^2 + 1}$
```{=html}
</td>
```
```{=html}
<td>
```
$\left|H_{B4}(\omega)\right|^2 = \frac{(\omega/\omega_{3dB})^8}{(\omega/\omega_{3dB})^8 + 1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Notice that the case $B = 0$ is the Butterworth alignment. The reason
that this QB alignment is called 3rd order is due to the fact that as
*B* increases, the slope approaches 3 dec/dec instead of 4 dec/dec, as
in 4th order Butterworth. This phenomenon can be seen in Figure 5.
```{=html}
<center>
```
![](QB3_gradient.GIF "QB3_gradient.GIF")\
*Figure 5: 3rd-Order Quasi-Butterworth Response for*$0.1 \leq B \leq 3$
```{=html}
</center>
```
Equating the system response $|H(s)|^2$ with $|H_{QB3}(s)|^2$, the
equations guiding the design can be found \[1\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$B^2 = a^2_1 - 2a_2$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2^2 + 2 = 2a_1a_3$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \sqrt{2a_2}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 > 2 + \sqrt{2}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
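A sketch of the resulting design chain, with an illustrative choice of
$a_2$ (any value above $2+\sqrt{2}$ works):

``` python
# Sketch: QB3 design chain implied by the equations above -- pick
# a2 > 2 + sqrt(2), then a3, a1, and the ripple parameter B follow.
import numpy as np

a2 = 4.0                      # free choice, must exceed 2 + sqrt(2) (assumed)
a3 = np.sqrt(2 * a2)          # from a3 = sqrt(2*a2)
a1 = (a2**2 + 2) / (2 * a3)   # from a2^2 + 2 = 2*a1*a3
B = np.sqrt(a1**2 - 2 * a2)   # from B^2 = a1^2 - 2*a2
print(a1, a3, B)
```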
## Chebyshev Alignment
The Chebyshev algorithm is an alternative to the Butterworth algorithm.
For the Chebyshev response, the maximally-flat passband restriction is
abandoned. Now, a *ripple*, or fluctuation is allowed in the pass band.
This allows a steeper transition or roll-off to occur. In this type of
application, the low-frequency response of the loudspeaker can be
extended beyond what can be achieved by Butterworth-type filters. An
example plot of a Chebyshev high-pass response with 0.5 dB of ripple
against a Butterworth high-pass response for the same $\omega_{3dB}$ is
shown below.
```{=html}
<center>
```
![](Butt_vs_Cheb_HP.gif "Butt_vs_Cheb_HP.gif")\
*Figure 6: Chebyshev vs. Butterworth High-Pass Response.*
```{=html}
</center>
```
The Chebyshev response is defined by \[4\]:
```{=html}
<center>
```
$|\hat{H}(j\Omega)|^2 = 1 + \epsilon^2C^2_n(\Omega)$\
```{=html}
</center>
```
$C_n(\Omega)$ is called the *Chebyshev polynomial* and is defined by
\[4\]:
```{=html}
<table align=center>
```
```{=html}
<tr>
```
```{=html}
<td valign=center rowspan=2>
```
$C_n(\Omega) = \big\lbrace$
```{=html}
</td>
```
```{=html}
<td>
```
$\rm{cos}[\it{n}\rm{cos}^{-1}(\Omega)]$
```{=html}
</td>
```
```{=html}
<td>
```
$|\Omega| < 1$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\rm{cosh}[\it{n}\rm{cosh}^{-1}(\Omega)]$
```{=html}
</td>
```
```{=html}
<td>
```
$|\Omega| > 1$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Fortunately, Chebyshev polynomials satisfy a simple recursion formula
\[4\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<td>
```
$C_0(x) = 1$
```{=html}
</td>
```
```{=html}
<td>
```
$C_1(x) = x$
```{=html}
</td>
```
```{=html}
<td>
```
$C_n(x) = 2xC_{n-1} - C_{n-2}$
```{=html}
</td>
```
```{=html}
</table>
```
For more information on Chebyshev polynomials, see the Wolfram
Mathworld: Chebyshev
Polynomials
page.
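A minimal sketch of the recursion:

``` python
# Sketch: Chebyshev polynomial C_n(x) via the recursion above.
def chebyshev(n, x):
    """Evaluate C_n(x) using C_0 = 1, C_1 = x, C_n = 2*x*C_{n-1} - C_{n-2}."""
    c_prev, c = 1.0, x
    if n == 0:
        return c_prev
    for _ in range(n - 1):
        c_prev, c = c, 2 * x * c - c_prev
    return c

print(chebyshev(4, 0.5))   # cos(4*acos(0.5)) = -0.5
```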
When applying the high-pass transformation to the 4th order form of
$|\hat{H}(j\Omega)|^2$, the desired response has the form \[1\]:
```{=html}
<center>
```
$|H(j\Omega)|^2 = \frac{1+\epsilon^2}{1+\epsilon^2C^2_4(1/\Omega)}$
```{=html}
</center>
```
The parameter $\epsilon$ determines the ripple. In particular, the
magnitude of the ripple is $10\rm{log}[1+\epsilon^2]$ dB and can be
chosen by the designer, similar to *B* in the quasi-Butterworth case.
Using the recursion formula for $C_n(x)$,
```{=html}
<center>
```
$C_4\left(\frac{1}{\Omega}\right) = 8\left(\frac{1}{\Omega}\right)^4 - 8\left(\frac{1}{\Omega}\right)^2 + 1$\
```{=html}
</center>
```
Applying this equation to $|H(j\Omega)|^2$ \[1\],
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td colspan=2>
```
$\Rightarrow |H(\Omega)|^2 = \frac{\frac{1 + \epsilon^2}{64\epsilon^2}\Omega^8}{\frac{1 + \epsilon^2}{64\epsilon^2}\Omega^8 + \frac{1}{4}\Omega^6 + \frac{5}{4}\Omega^4 - 2\Omega^2 + 1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\Omega = \frac{\omega}{\omega_n}$
```{=html}
</td>
```
```{=html}
<td>
```
$\omega_n = \frac{\omega_{3dB}}{2}\sqrt{2 + \sqrt{2 + 2\sqrt{2+\frac{1}{\epsilon^2}}}}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
Thus, the design equations become \[1\]:
```{=html}
<table align=center cellpadding=15>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$\omega_0 = \omega_n\sqrt[8]{\frac{64\epsilon^2}{1+\epsilon^2}}$
```{=html}
</td>
```
```{=html}
<td>
```
$k = \rm{tanh}\left[\frac{1}{4}\rm{sinh}^{-1}\left(\frac{1}{\epsilon}\right)\right]$
```{=html}
</td>
<td>
```
$D = \frac{k^4 + 6k^2 + 1}{8}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr>
```
```{=html}
<td>
```
$a_1 = \frac{k\sqrt{4 + 2\sqrt{2}}}{\sqrt[4]{D}},$
```{=html}
</td>
```
```{=html}
<td>
```
$a_2 = \frac{1 + k^2(1+\sqrt{2})}{\sqrt{D}}$
```{=html}
</td>
```
```{=html}
<td>
```
$a_3 = \frac{a_1}{\sqrt{D}}\left[1 - \frac{1 - k^2}{2\sqrt{2}}\right]$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
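A sketch of the design chain for an assumed ripple of 0.5 dB:

``` python
# Sketch: Chebyshev alignment coefficients from a chosen pass-band
# ripple, per the design equations above (0.5 dB ripple assumed).
import numpy as np

ripple_db = 0.5
eps = np.sqrt(10 ** (ripple_db / 10) - 1)   # ripple = 10*log10(1 + eps^2)
k = np.tanh(0.25 * np.arcsinh(1.0 / eps))
D = (k**4 + 6 * k**2 + 1) / 8

a1 = k * np.sqrt(4 + 2 * np.sqrt(2)) / D**0.25
a2 = (1 + k**2 * (1 + np.sqrt(2))) / np.sqrt(D)
a3 = a1 / np.sqrt(D) * (1 - (1 - k**2) / (2 * np.sqrt(2)))
print(a1, a2, a3)
```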
## Choosing the Correct Alignment
With all the equations that have already been presented, the question
naturally arises, "Which one should I choose?" Notice that the
coefficients $a_1$, $a_2$, and $a_3$ are not simply related to the
parameters of the system response. Certain combinations of parameters
may indeed invalidate one or more of the alignments because they cannot
realize the necessary coefficients. With this in mind, general
guidelines have been developed to guide the selection of the appropriate
alignment. This is very useful if one is designing an enclosure to suit
a particular transducer that cannot be changed.
The general guideline for the Butterworth alignment focuses on $Q_L$ and
$Q_{TS}$. Since the three coefficients $a_1$, $a_2$, and $a_3$ are a
function of $Q_L$, $Q_{TS}$, *h*, and $\alpha$, fixing one of these
parameters yields three equations that uniquely determine the other
three. In the case where a particular transducer is already given,
$Q_{TS}$ is essentially fixed. If the desired parameters of the
enclosure are already known, then $Q_L$ is a better starting point.
In the case that the rigid requirements of the Butterworth alignment
cannot be satisfied, the quasi-Butterworth alignment is often applied
when $Q_{TS}$ is not large enough. The addition of another parameter,
*B*, allows more flexibility in the design.
For $Q_{TS}$ values that are too large for the Butterworth alignment,
the Chebyshev alignment is typically chosen. However, the steep
transition of the Chebyshev alignment may also be utilized to attempt to
extend the bass response of the loudspeaker in the case where the
transducer properties can be changed.
In addition to these three popular alignments, research continues on new
algorithms that can manipulate the low-frequency response of the
bass-reflex enclosure. For example, a 5th-order quasi-Butterworth
alignment has been developed \[6\]; its advantages include improved
low-frequency extension and much-reduced driver excursion at low
frequencies, while its disadvantages include somewhat difficult
mathematics and electronic complication (bi-amping or tri-amping with
electronic crossovers is typically required). Another example \[7\]
applies root-locus techniques to achieve results. In the modern age of
high-powered computing, other researchers have focused their efforts on
computerized optimization algorithms that can be modified to achieve a
flatter response with sharp roll-off or to introduce quasi-ripples which
provide a boost in sub-bass frequencies \[8\].
## References
\[1\] Leach, W. Marshall, Jr. *Introduction to Electroacoustics and
Audio Amplifier Design*. 2nd ed. Kendall/Hunt, Dubuque, IA. 2001.
\[2\] Beranek, L. L. *Acoustics*. 2nd ed. Acoustical Society of America,
Woodbridge, NY. 1993.
\[3\] DeCarlo, Raymond A. "The Butterworth Approximation." Notes from
ECE 445. Purdue University. 2004.
\[4\] DeCarlo, Raymond A. "The Chebyshev Approximation." Notes from ECE
445. Purdue University. 2004.
\[5\] VanValkenburg, M. E. *Analog Filter Design*. Holt, Rinehart and
Winston, Inc. Chicago, IL. 1982.
\[6\] Kreutz, Joseph and Panzer, Joerg. \"Derivation of the
Quasi-Butterworth 5 Alignments.\" *Journal of the Audio Engineering
Society*. Vol. 42, No. 5, May 1994.
\[7\] Rutt, Thomas E. \"Root-Locus Technique for Vented-Box Loudspeaker
Design.\" *Journal of the Audio Engineering Society*. Vol. 33, No. 9,
September 1985.
\[8\] Simeonov, Lubomir B. and Shopova-Simeonova, Elena.
\"Passive-Radiator Loudspeaker System Design Software Including
Optimization Algorithm.\" *Journal of the Audio Engineering Society*.
Vol. 47, No. 4, April 1999.
## Appendix A: Equivalent Circuit Parameters
```{=html}
<table align=center border=2>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Name
```{=html}
</th>
```
```{=html}
<th>
```
Electrical Equivalent
```{=html}
</th>
```
```{=html}
<th>
```
Mechanical Equivalent
```{=html}
</th>
```
```{=html}
<th>
```
Acoustical Equivalent
```{=html}
</th>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Voice-Coil Resistance
```{=html}
</th>
```
```{=html}
<td>
```
$R_E$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{ME} = \frac{(Bl)^2}{R_E}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AE} = \frac{(Bl)^2}{R_ES^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Mass
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MD}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AD} = \frac{M_{MD}}{S^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Suspension Compliance
```{=html}
</th>
```
```{=html}
<td>
```
$L_{CES} = (Bl)^2C_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{AS} = S^2_DC_{MS}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Driver (Speaker) Suspension Resistance
```{=html}
</th>
```
```{=html}
<td>
```
$R_{ES} = \frac{(Bl)^2}{R_{MS}}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{MS}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AS} = \frac{R_{MS}}{S^2_D}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Compliance
```{=html}
</th>
```
```{=html}
<td>
```
$L_{CEB} = \frac{(Bl)^2C_{AB}}{S^2_D}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{MB} = \frac{C_{AB}}{S^2_D}$
```{=html}
</td>
```
```{=html}
<td>
```
$C_{AB}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Air-Leak Losses
```{=html}
</th>
```
```{=html}
<td>
```
$R_{EL} = \frac{(Bl)^2}{S^2_DR_{AL}}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{ML} = S^2_DR_{AL}$
```{=html}
</td>
```
```{=html}
<td>
```
$R_{AL}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Acoustic Mass of Port
```{=html}
</th>
```
```{=html}
<td>
```
$C_{MEP} = \frac{S^2_DM_{AP}}{(Bl)^2}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MP} = S^2_DM_{AP}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AP}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Enclosure Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
See $M_{MC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AB}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Low-Frequency Radiation Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
See $C_{MEC}$
```{=html}
</td>
```
```{=html}
<td>
```
See $M_{MC}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<th>
```
Combination Mass Load
```{=html}
</th>
```
```{=html}
<td>
```
$C_{MEC} = \frac{S^2_DM_{AC}}{(Bl)^2}$\
$= \frac{S^2_D(M_{AB} + M_{A1}) + M_{MD}}{(Bl)^2}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{MC} = S^2_D(M_{AB} + M_{A1}) + M_{MD}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AC} = M_{AD} + M_{AB} + M_{A1}$\
$= \frac{M_{MD}}{S^2_D} + M_{AB} + M_{A1}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
## Appendix B: Enclosure Parameter Formulas
```{=html}
<center>
```
![](_Vented_enclosure.gif "_Vented_enclosure.gif")\
*Figure 7: Important dimensions of bass-reflex enclosure.*
```{=html}
</center>
```
Based on these dimensions \[1\],
```{=html}
<table align=center cellpadding=5>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$C_{AB} = \frac{V_{AB}}{\rho_0c^2_0}$
```{=html}
</td>
```
```{=html}
<td>
```
$M_{AB} = \frac{B\rho_{eff}}{\pi a}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$B = \frac{d}{3}\left(\frac{S_D}{S_B}\right)^2\sqrt{\frac{\pi}{S_D}} + \frac{8}{3\pi}\left[1 - \frac{S_D}{S_B}\right]$
```{=html}
</td>
```
```{=html}
<td>
```
$\rho_0 \leq \rho_{eff} \leq \rho_0\left(1 - \frac{V_{fill}}{V_B}\right) + \rho_{fill}\frac{V_{fill}}{V_B}$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td colspan=2>
```
$V_{AB} = V_B\left[1-\frac{V_{fill}}{V_B}\right]\left[1 + \frac{\gamma - 1}{1 + \gamma\left(\frac{V_B}{V_{fill}} - 1\right)\frac{\rho_0c_{air}}{\rho_{fill}c_{fill}}}\right]$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$V_B= hwd$ (inside enclosure gross volume)
```{=html}
</td>
```
```{=html}
<td>
```
$S_B = wh$ (baffle area of the side the speaker is mounted on)
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$c_{air} =$ specific heat of air at constant volume (about
$0.718 \frac{\rm kJ}{\rm kg.K}$ at 300 K)
```{=html}
</td>
```
```{=html}
<td>
```
$c_{fill} =$ specific heat of filling at constant volume ($V_{filling}$)
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$\rho_0 =$ mean density of air (about $1.3 \frac{\rm kg}{\rm m^3}$ at
300 K)
```{=html}
</td>
<td>
```
$\rho_{fill} =$ density of filling
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td>
```
$\gamma =$ ratio of specific heats (Isobaric/Isovolumetric processes)
for air (about 1.4 at 300 K)
```{=html}
</td>
```
```{=html}
<td>
```
$c_0 =$ speed of sound in air (about 344 m/s)
```{=html}
</td>
</tr>
```
```{=html}
<tr align=center>
```
```{=html}
<td colspan=2>
```
$\rho_{eff}$ = effective density of enclosure. If little or no filling
(acceptable assumption in a bass-reflex system but not for sealed
enclosures), $\rho_{eff} \approx \rho_0$
```{=html}
</td>
```
```{=html}
</tr>
```
```{=html}
</table>
```
# Engineering Acoustics/source-filter theory
The source-filter theory (Fant 1960) hypothesizes that an acoustic
speech signal can be seen as a source signal filtered by the resonances
in the cavities of the vocal tract downstream from the glottis or the
constriction. This simple model for speech synthesis is based on the
assumption that the dynamics of the system are linear and separable into
three independent blocks: a glottal energy source, the vocal tract
(filter), and the effect of sound radiation (as shown in the figure on
the right).
The glottal source roughly corresponds to the subglottal system, while
the vocal tract (VT) corresponds to the supraglottal system. The
radiation block can be considered a converter, which converts volume
velocity into acoustic pressure. In general, the radiation
characteristic R(f) and the spectrum envelope of the source function
S(f) for the glottal source are smooth and monotonic functions of
frequency. The transfer function T(f), however, is usually characterized
by several peaks corresponding to resonances of the acoustic cavities
that form the vocal tract. Manipulating the shape of these cavities
changes the positions and amplitudes of the peaks. The figure on the
left qualitatively shows the configuration of the vocal tract
corresponding to a vowel. The forms of the source spectrum S(f), the
transfer function T(f), the radiation characteristic R(f), and the sound
pressure pr(f) are shown in each case.
The transfer function T(f) is determined by applying the theory of sound
propagation in tubes of arbitrary shape. For frequencies up to 5000 Hz,
the cross dimensions of the vocal tract are less than a wavelength of
the sound. Therefore, the sound propagation can be considered plane
waves parallel to the axis of the tube, and the vocal tract can be
viewed as an acoustic tube of varying diameter.
## Vocal tract transfer function
The vocal tract is approximated as an acoustic tube of a given length
composed of a number of sections with different cross-sectional areas.
This is equivalent to modelling the sampled vocal tract transfer
function H(s) as a superposition of a given number of spectral poles and
zeros, which in the spectral domain can be represented by

$H(s)= K\begin{matrix} \prod_{i=1}^N \frac{s-s_{ai}}{s-s_i}\end{matrix}$

where $K$ is a constant, $s_{a1}, s_{a2}, \ldots$ are the zeros of H(s),
and $s_1, s_2, \ldots$ are the poles. In this equation, the poles and
zeros mostly occur in complex conjugate pairs, and the real parts of
these complex frequencies are much less than the imaginary parts, which
means that the energy lost in one cycle is much less than the energy
stored in one cycle of the oscillation. Therefore, the poles of H(s) can
be expressed as below:
$H(s) = \frac{1}{K_p}\begin{matrix} \prod_{i=1}^N \frac{s_i{s_i}^*}{(s-s_i)(s-{s_i}^*)}\end{matrix}$

where $K_p$ is a constant and the stars indicate complex conjugates.
Natural frequencies of the vocal tract are represented by the poles: the
imaginary parts indicate the formant frequencies, the frequencies at
which oscillations occur in the absence of excitation, and the real
parts give the rates of decay of these oscillations. In other words,
depending on the shape of the acoustic tube (mainly influenced by tongue
position), a sound wave travelling through it will be reflected in a
certain way, so that interference generates resonances at certain
frequencies. These resonances are called
formants. Their location
largely determines the speech sound that is heard.
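As an illustration of the pole-pair product form, the sketch below
evaluates $|H(f)|$ for assumed formant frequencies and bandwidths (the
values are illustrative, not measured); each pole is
$s_i = -\pi B_i + j 2\pi F_i$, so the imaginary part sets the formant
frequency and the real part its decay.

``` python
# Sketch: |H(f)| of an all-pole vocal tract model built from formant
# frequencies F_i and bandwidths B_i (illustrative values for a vowel).
import numpy as np

formants = [(500, 60), (1500, 90), (2500, 120)]  # (F_i [Hz], B_i [Hz]), assumed
f = np.linspace(10, 4000, 2000)
s = 1j * 2 * np.pi * f

H = np.ones_like(s)
for F, B in formants:
    si = -np.pi * B + 1j * 2 * np.pi * F   # pole; paired with its conjugate
    H *= (si * np.conj(si)) / ((s - si) * (s - np.conj(si)))

mag_db = 20 * np.log10(np.abs(H))          # peaks appear near each F_i
```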
## Acoustic interpretation of the transfer function

*Figure: Vocal tract modeled as tubes with varying cross section.*
According to the acoustics of tubes, the pressure and volume velocity at
the end of a tube (x=L) can be related to the variables at the beginning
of the tube (x=0). The following transfer matrix expresses the
acoustical relationship between the two ends of a tube in the frequency
domain:
$$\begin{bmatrix}
P_0 \\
U_0\\
\end{bmatrix}
=T(\omega) \begin{bmatrix}
P_L \\
U_L \\
\end{bmatrix}, \quad T(\omega)=
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}\\
\end{bmatrix}$$

$$a_{11}=\cos(KL), \quad a_{12}=j\rho c\sin(KL), \quad a_{21}=\frac{j\sin(KL)}{\rho c}, \quad a_{22}=\cos(KL)$$
where K is the wave number and L is the length of the tube. The above
relation can be used to calculate the state of the wave field at one
location, given the state of the field at another location.

Since the vocal tract can be considered as n tubes with different cross
sections (see figure on the right), the transfer matrices can be chained
to relate the states between the glottis and the radiated sound:
and the overall equation for vocal tract becomes:
$$\begin{bmatrix}
P_g \\
U_g\\
\end{bmatrix}
=T(\omega)* \begin{bmatrix}
P_r \\
U_r \\
\end{bmatrix}, where P_r=U_r*Z_{rad}$$
In this equation, Z~rad~ is the radiation impedance. Up to a frequency
of about 6000 Hz, the acoustic radiation impedance can be written
approximately as:

$$Z_{rad}=\frac{\rho c}{A_m}\left(\frac{\pi f^2}{c^2}A_m\right)K_s(f)+j\,2\pi f\,\rho\,\frac{0.8a}{A_m}$$

where $A_m$ is the area of the mouth opening, $a$ is the effective
radius, and $K_s(f)$ is a dimensionless frequency-dependent factor that
accounts for the baffling effect of the head.
The transfer function of the system can be calculated as follows:

$$H(\omega)=U_r/U_g$$

Therefore the equations result in:

$$P_r(\omega)=U_g(\omega)H(\omega)Z_{rad}(\omega)$$

As can be seen, the equation above expresses the pressure in front of
the mouth in terms of a source, a filter, and the radiation
characteristic of the mouth. This equation embodies the source-filter
theory described in the first section.
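A minimal sketch of this chaining, assuming lossless cylindrical
sections and plane-wave propagation (here the section area enters
through the characteristic impedance $\rho c / A$):

``` python
# Sketch: chain the 2x2 tube matrices above to relate glottis and lip
# states for a tract made of n cylindrical sections.
import numpy as np

rho, c = 1.14, 350.0            # warm moist air, assumed nominal values

def tube_matrix(L, A, omega):
    K = omega / c
    Zc = rho * c / A            # characteristic impedance of the section
    return np.array([[np.cos(K * L),           1j * Zc * np.sin(K * L)],
                     [1j * np.sin(K * L) / Zc, np.cos(K * L)]])

def tract_matrix(sections, omega):
    """sections: list of (length [m], area [m^2]) from glottis to lips."""
    T = np.eye(2, dtype=complex)
    for L, A in sections:
        T = T @ tube_matrix(L, A, omega)   # T(w) = T1*T2*...*Tn
    return T
```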
## Effects of vocal tract wall and other losses
In the previous section the vocal tract was modeled as a lossless
system, apart from the termination impedance term. However, there are
other second-order effects that are necessary for precise modelling,
such as wall vibration, heat conduction, viscosity, and the glottal
opening. These losses can change the bandwidths of the resonances and
can also shift the resonance frequencies.
## Resonant frequencies of air in a tube
The relationship between vocal tract shape and transfer function is
complex, so we will consider the simple case of a uniform tube. The
vocal tract in a vowel can be approximated by a tube which is closed at
one end (the glottis) and open at the other (the lips). For a relatively
unconstricted vocal tract, the resonances of a 17 cm vocal tract occur
at the following frequencies:

`f = n * c / (4 * L)   for n = 1, 3, 5, ...`

where f is the formant frequency in Hz, c is the speed of sound
(34,000 cm/s), and L is the length of the vocal tract in cm.

So the lowest formant frequency in a 17 cm vocal tract is:

`f = c / (4 * L) = 34,000 / (4 * 17) = 500 Hz`

And the spacing between formants is:

`Δf = 2 * c / (4 * L) = c / (2 * L) = 1000 Hz (always twice the lowest formant)`
Therefore, the formant frequencies are F~1~=500 Hz, F~2~=1500 Hz,
F~3~=2500 Hz, F~4~=3500 Hz.
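The same numbers in a short sketch:

``` python
# Sketch: odd quarter-wavelength resonances of a uniform 17 cm tube,
# closed at the glottis and open at the lips.
c = 34000.0  # speed of sound [cm/s]
L = 17.0     # vocal tract length [cm]
print([n * c / (4 * L) for n in (1, 3, 5, 7)])  # [500.0, 1500.0, 2500.0, 3500.0]
```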
## Two Tube Vocal Tract Models of Vowels
*Figure: Two-tube model for vowel /a/.*

*Figure: Two-tube model for vowel /i/.*
Two resonators or uniform tubes of different cross-sectional areas can
be connected to approximate some vowels or consonants. In this case, the
natural frequencies of the whole system are not simply the frequencies
of each tube, because of acoustic coupling. The figures show different
configurations of tubes used to simulate the vowels /a/ and /i/.

Typical values (for an adult male vocal tract for vowel /a/) are
l~1~ = 8 cm, l~2~ = 9 cm with A~1~ = 5 cm², A~2~ = 0.5 cm². Acoustic
theory predicts that there will be resonances at 944 Hz, 1063 Hz, and
2833 Hz. The narrow and wide tubes can be considered as separate tubes
with resonance frequencies given by the relation stated in the previous
section. However, the acoustic impedance at the boundary between the two
tubes is not zero, and thus affects the natural frequencies of the
tubes. The natural frequencies of the combined system are the
frequencies for which the sum of the reactances at the junction is zero,
that is:

$$-\frac{\rho c}{A_1}\cot(KL_1)+\frac{\rho c}{A_2}\tan(KL_2)=0$$
It should be noted that when the natural frequencies of the tubes are
remote from one another, the influence of coupling is small.

Typical values for vowel /i/ in the human vocal tract are l~1~ = 9 cm,
l~2~ = 8 cm, A~1~ = 5 cm², A~2~ = 0.5 cm². Thus, in theory, F1 = 202 Hz,
F2 = 1890 Hz, F3 = 2125 Hz.
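The junction-reactance equation can be solved numerically; the sketch
below scans for sign changes and refines each bracket (the computed
roots depend on the assumed speed of sound, so they will only
approximate the values quoted above):

``` python
# Sketch: natural frequencies of the two-tube model as zeros of the
# reactance sum -cot(K*L1)/A1 + tan(K*L2)/A2 (rho*c cancels out).
import numpy as np
from scipy.optimize import brentq

c = 35000.0                          # speed of sound [cm/s], assumed
L1, L2, A1, A2 = 8.0, 9.0, 5.0, 0.5  # /a/ configuration from the text

def reactance_sum(f):
    K = 2 * np.pi * f / c
    return -1.0 / (A1 * np.tan(K * L1)) + np.tan(K * L2) / A2

# Scan for sign changes away from the tan/cot poles, then refine.
fs = np.linspace(50, 3000, 6000)
vals = [reactance_sum(f) for f in fs]
roots = []
for i in range(len(fs) - 1):
    if vals[i] * vals[i + 1] < 0 and abs(vals[i]) < 50 and abs(vals[i + 1]) < 50:
        roots.append(brentq(reactance_sum, fs[i], fs[i + 1]))
print(roots)   # should fall near the 944, 1063, 2833 Hz quoted above
```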
## Four Tube Vocal Tract Models of Vowels
*Figure: Four-tube model.*
Four-tube models of vowels provide a much better estimate of formant
frequencies for a wider range of vowels than do two-tube models, and so
are a more popular method of modeling vowels. Such models consist of a
lip tube (tube 1), a tongue-constriction tube (tube 3), and
unconstricted tubes on either side of the constriction tube. This model
is controlled by three parameters: i) the position of the centre of
tube 3, ii) the cross-sectional area of tube 3, and iii) the ratio of
the length to the cross-sectional area at the lip section. For extreme
back constrictions tube 4 disappears, whilst for extreme front
constrictions tube 2 disappears.
Calculations of resonance frequencies using the 4 tube model are quite
complex and so Fant (1960) supplied a (fairly complex) graphical
representation of the relationship between the three parameters and the
resultant formant frequencies. These graphical representations are
called nomograms. The original
versions of these nomograms supply, for a continuous range of
constriction positions (i.e. distance from the centre of the tongue
constriction to the glottis), a continuous range of resultant F1 to F5
values. The original nomograms do this for 5 values of lip area (A1) and
for two values of tongue constriction cross-sectional area (A3). For
different vocal tract lengths, different nomograms need to be computed.
The four tube, three parameter, model provides a sufficiently accurate
prediction of most vowel sounds, but cannot model nasalisation of
vowels.
## References
1. Kenneth N. Stevens, 2000, *Acoustic Phonetics*, The MIT Press.
2. Kinsler *et al.*, 2000, *Fundamentals of Acoustics*, John Wiley & Sons.
3. Titze, I. R., 1994, *Principles of Voice Production*, Prentice Hall (currently published by NCVS.org).
4. James L. Flanagan and Lawrence R. Rabiner, 1973, *Speech Synthesis*.
# Engineering Acoustics/Moving Coil Loudspeaker
# Moving Coil Transducer
The purpose of the acoustic transducer is to convert electrical energy
into acoustic energy. Many variations of acoustic transducers exist,
although the most common is the moving coil-permanent magnet transducer.
The classic loudspeaker is of the moving coil-permanent magnet type.
The classic electrodynamic loudspeaker driver can be divided into three
key components:
1\) The Magnet Motor Drive System
2\) The Loudspeaker Cone System
3\) The Loudspeaker Suspension
```{=html}
<center>
```
![](loud_speaker.gif "loud_speaker.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 1 Cut-away of a moving coil-permanent magnet loudspeaker
```{=html}
</center>
```
## The Magnet Motor Drive System
The main purpose of the Magnet Motor Drive System is to establish a
symmetrical magnetic field in which the voice coil will operate. The
Magnet Motor Drive System is comprised of a front focusing plate,
permanent magnet, back plate, and a pole piece. In figure 2, the
assembled drive system is illustrated. In most cases, the back plate and
the pole piece are built into one piece called the yoke. The yoke and
the front focusing plate are normally made of a very soft cast iron.
Iron is used in magnetic structures
because it is easily magnetized and concentrates the flux of the permanent magnet.
Notice in figure 2, that an air gap was intentionally left between the
front focusing plate and the yoke. The magnetic field is coupled through
the air gap. The magnetic field strength (B) of the air gap is typically
optimized for uniformity across the gap. \[1\]
```{=html}
<center>
```
Figure 2 Permanent Magnet Structure
```{=html}
</center>
```
When a coil of wire with a current flowing through it is placed inside the permanent
magnetic field, a force is produced, where B is the magnetic field strength, l
is the length of the wire in the coil, and i is the current flowing through the
coil:
$F = Bli$
```{=html}
<center>
```
![](Magnet2.gif "Magnet2.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 3 Voice Coil Mounted in Permanent Magnetic Structure
```{=html}
</center>
```
The coil is excited with the AC signal that is intended for sound
reproduction, when the changing magnetic field of the coil interacts
with the permanent magnetic field then the coil moves back and forth in
order to reproduce the input signal. The coil of a loudspeaker is known
as the voice coil.
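The force relation above is simple to evaluate; the values in the sketch below are illustrative assumptions rather than data for any particular driver:

```python
# Minimal sketch of the voice-coil force F = B*l*i.
B = 1.0    # magnetic flux density in the gap, T (assumed)
l = 10.0   # length of voice-coil wire immersed in the field, m (assumed)
i = 0.5    # instantaneous coil current, A (assumed)

print(f"F = {B * l * i:.1f} N")  # -> F = 5.0 N
```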
```{=html}
<center>
```
Figure 4 Photograph - Voice Coil
```{=html}
</center>
```
## The Loudspeaker Cone System
On a typical loudspeaker, the cone serves the purpose of creating a
larger radiating area allowing more air to be moved when excited by the
voice coil. The cone acts as a piston that is excited by the voice coil.
The cone then displaces air creating a sound wave. In an ideal
environment, the cone should be infinitely rigid and have zero mass, but
in reality neither is true. Cone materials vary from carbon fiber,
paper, bamboo, and just about any other material that can be shaped into
a stiff conical shape. The loudspeaker cone is a very critical part of
the loudspeaker. Since the cone is not infinitely rigid, it tends to
have different types of resonance modes form at different frequencies,
which in turn alters and colors the reproduction of the sound waves. The
shape of the cone directly influences the directivity and frequency
response of the loudspeaker. When the cone is attached to the voice
coil, a large gap above the voice coil is left exposed. This could be a
problem if foreign particles make their way into the air gap of the
voice coil and the permanent magnet structure. The solution to this
problem is to place what is known as a dust cap on the cone to cover the
air gap. A figure of the cone and dust cap is shown below.
```{=html}
<center>
```
![](loud_cone.gif "loud_cone.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 6 Cone and Dust Cap attached to Voice Coil
```{=html}
</center>
```
## The Loudspeaker Suspension
Most moving coil loudspeakers have a two piece suspension system, also
known as a flexure system. The combination of the two flexures allows
the voice coil to maintain linear travel as the voice coil is energized
and provide a restoring force for the voice coil system. The two piece
system consists of a large flexible membrane surrounding the outside edge
of the cone, called the surround, and an additional flexure connected
directly to the voice coil, called the spider. The surround has another
purpose and that is to seal the loudspeaker when mounted in an
enclosure. Commonly, the surround is made of a variety of different
materials, such as, folded paper, cloth, rubber, and foam. Construction
of the spider consists of different woven cloth or synthetic materials
that are compressed to form a flexible membrane. The following two
figures illustrate where the suspension components are located on
the loudspeaker and how they move as the loudspeaker operates.
```{=html}
<center>
```
![](loud_suspension.gif "loud_suspension.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 7 Loudspeaker Suspension System
```{=html}
</center>
```
```{=html}
<center>
```
![](loudspk.gif "loudspk.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 8 Moving Loudspeaker
```{=html}
</center>
```
## Modeling the Loudspeaker as a Lumped System
Before implementing a loudspeaker into a specific application, a series
of parameters characterizing the loudspeaker must be extracted. The
equivalent circuit of the loudspeaker is key when developing enclosures.
The circuit models all aspects of the loudspeaker through an equivalent
electrical, mechanical, and acoustical circuit. Figure 9 shows how the
three equivalent circuits are connected. The electrical circuit is
comprised of the DC resistance of the voice coil, Re, the imaginary part
of the voice coil inductance, Le, and the real part of the voice coil
inductance, Revc. The mechanical system has electrical components that
model different physical parameters of the loudspeaker. In the
mechanical circuit, Mm is the electrical capacitance due to the moving
mass, Cm is the electrical inductance due to the compliance of the
suspension, and Rm is the electrical resistance due to the suspension
system. In the acoustical equivalent circuit, Ma models the air mass and
Ra models the radiation impedance\[2\]. This equivalent circuit allows
insight into what parameters change the characteristics of the
loudspeaker. Figure 10 shows the electrical input impedance as a
function of frequency developed using the equivalent circuit of the
loudspeaker.
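As a rough illustration of how the impedance curve of Figure 10 arises, the sketch below evaluates a simplified form of this model in which the mechanical branch is reflected into the electrical side as (Bl)²/Z_mech; the lossy-inductance split (Revc) and the acoustic branch are omitted, and every parameter value is an assumption chosen only for illustration:

```python
# Simplified electrical input impedance of a moving-coil driver:
#   Ze(w) = Re + j*w*Le + (Bl)^2 / (Rm + j*w*Mm + 1/(j*w*Cm))
# Omits the lossy-inductance split and the acoustic branch; values assumed.
import numpy as np

Re, Le = 6.0, 0.5e-3              # voice-coil resistance (ohm), inductance (H)
Bl = 10.0                         # force factor (T*m)
Mm, Cm, Rm = 0.020, 0.7e-3, 1.5   # mass (kg), compliance (m/N), loss (N*s/m)

f = np.logspace(1, 4, 500)        # 10 Hz .. 10 kHz
w = 2 * np.pi * f
Zmech = Rm + 1j * w * Mm + 1.0 / (1j * w * Cm)   # mechanical impedance
Ze = Re + 1j * w * Le + Bl**2 / Zmech            # reflected into the circuit

low = f < 1000.0                                 # look below 1 kHz
f_low = f[low]
fres = f_low[np.argmax(np.abs(Ze[low]))]
print(f"impedance peak near {fres:.0f} Hz")      # the mechanical resonance
```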
```{=html}
<center>
```
![](Eq_circuit.gif "Eq_circuit.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 9 Loudspeaker Analogous Circuit
```{=html}
</center>
```
```{=html}
<center>
```
![](Freq_resp.gif "Freq_resp.gif")
```{=html}
</center>
```
```{=html}
<center>
```
Figure 10 Electrical Input Impedance
```{=html}
</center>
```
## References
\[1\] Dickason, Vance. *The Loudspeaker Design Cookbook*, 5th Edition. Audio Amateur Press, 1997.

\[2\] Beranek, L. L. *Acoustics*, 2nd ed. Acoustical Society of America, Woodbridge, NY, 1993.
# Engineering Acoustics/Microphone Design and Operation
![](Mic_Title.jpg "Mic_Title.jpg")
## Introduction
Microphones are devices which convert pressure fluctuations into
electrical signals. Two main methods of achieving this are used in the
mainstream entertainment industry today - **dynamic microphones** and
**condenser microphones**. Piezoelectric
transducers
can also be used as microphones but they are not commonly used in the
entertainment industry.
## Dynamic microphones
Dynamic microphones utilise \'Faraday's Law\'. The principle states that
when an electrical conductor is moved through a magnetic field, an
electrical current is induced within the conductor. In these microphones
the magnetic field comes from permanent magnets. There are two common
arrangements for the conductor.
![](Moving_coil.JPG "Moving_coil.JPG"){width="300"}
Figure 1: Sectional View of Moving-Coil Dynamic Microphone
The first conductor arrangement has a moving coil of wire. The wire is
typically copper and is attached to a circular membrane or piston
usually made from lightweight plastic or occasionally aluminum. The
impinging pressure fluctuation on the piston causes it to move in the
magnetic field and thus creates the desired electrical current.
![](Ribbon.jpg){width="300"}
Figure 2: Dynamic Ribbon Microphone
The second conductor arrangement is a ribbon of metallic foil suspended
between magnets. The metallic ribbon moves in response to a pressure
fluctuation and an electrical current is produced. In both
configurations, dynamic microphones follow the same principles as
acoustical
transducers.
## Condenser Microphones
Condenser microphones convert pressure fluctuations into electrical
potentials by changes in electrical capacitance, hence they are also
known as capacitor microphones. An electrical capacitor consists of two
charged electrical conductors placed at some relatively small distance
to each other. The basic relation that describes capacitors is:
: **$Q=C\times V$**
where Q is the electrical charge of the capacitor's conductors, C is the
capacitance, and V is the electric potential between the capacitor's
conductors. If the electrical charge of the conductors is held at a
constant value, then the voltage between the conductors will be
inversely proportional to the capacitance and, since capacitance falls as the
plates separate, directly proportional to the distance between the conductors.
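This behaviour is easy to verify numerically. The minimal sketch below holds the charge Q fixed and recomputes V = Q/C as the gap changes; the diaphragm area, nominal gap, and polarization voltage are assumed, illustrative values:

```python
# Constant-charge capacitor: with C = eps0*A/d, V = Q*d/(eps0*A),
# so the voltage tracks the gap distance d linearly.
eps0 = 8.854e-12           # permittivity of free space, F/m
A = 1.0e-4                 # diaphragm area, m^2 (assumed)
d0, V0 = 25e-6, 60.0       # nominal gap (m) and polarization voltage (assumed)

Q = V0 * eps0 * A / d0     # charge held constant on the capsule
for d in (24e-6, 25e-6, 26e-6):        # diaphragm moves by +/- 1 um
    print(f"gap {d * 1e6:4.0f} um -> {Q * d / (eps0 * A):5.1f} V")
```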
![](Condenser_schema.jpg "Condenser_schema.jpg"){width="600"}
Figure 3: Sectional View of Condenser Microphone
The capacitor in a condenser microphone consists of two parts: the
diaphragm and the backplate. The diaphragm moves due to impinging
pressure fluctuations and the backplate is held in a stationary
position. When the diaphragm moves closer to the backplate, the
capacitance increases and a change in electric potential is produced.
The diaphragm is typically made of metallic coated Mylar. The assembly
that houses both the backplate and the diaphragm is commonly referred to
as a capsule.
To keep the diaphragm and backplate at a constant charge, an electric
potential must be presented to the capsule. There are various ways this
can be achieved. The first uses a battery to supply the needed DC
potential to the capsule (figure 4). The resistor across the leads of
the capsule is very high, in the range of 10 megaohms, to keep the
charge on the capsule close to constant.
![](Voltage.JPG "Voltage.JPG"){width="500"}
Figure 4: Internal Battery Powered Condenser Microphone
An alternative technique for providing a constant charge on the
capacitor is to supply a DC electric potential through the microphone
cable that carries the microphones output signal. Standard microphone
cable is known as XLR cable and is terminated by three pin connectors.
Pin one connects to the shield around the cable. The microphone signal
is transmitted between pins two and three.
![](XLR.jpg "XLR.jpg"){width="700"}
Figure 5: Dynamic Microphone Connected to a Mixing Console via XLR Cable
### 48V Phantom Powering
The most popular method of providing a DC potential through a microphone
cable is to supply +48 V to both of the microphone output leads, pins 2
and 3, and use the shield of the cable, pin 1, as the ground to the
circuit. Because pins 2 and 3 see the same potential, any fluctuation of
the microphone powering potential will not affect the microphone signal
seen by the attached audio equipment. The +48 V will be stepped down at
the microphone using a transformer and provide the potential to the
backplate and diaphragm in a similar fashion as the battery solution.
![](Powering.jpg "Powering.jpg"){width="600"}
Figure 6: Condenser Microphone Powering Techniques
### 12V T-Powering
A less popular method of running the potential through the cable is to
supply 12 V between pins 2 and 3. This method is referred to as
T-powering. The main problem with T-powering is that potential
fluctuation in the powering of the capsule will be transmitted into an
audio signal because the audio equipment analyzing the microphone signal
will not see a difference between a potential change across pins 2 and 3
due to a pressure fluctuation and one due to the power source electric
potential fluctuation.
### Electret Condenser Microphones
Finally, the diaphragm and backplate can be manufactured from a material
that maintains a fixed charge, known as \'electret\' (from
electric+magnet, because these materials can be seen as the electric
equivalents of permanent magnets). As a result, these microphones are
termed electret condenser microphones (ECM). In early electret designs,
the charge on the material tended to become unstable over time. Advances
in science and manufacturing have effectively eliminated this problem in
present designs.
## Conclusion
Two types of microphones are used in the entertainment industry.
- Dynamic microphones, which are found in the moving-coil and ribbon
configurations. The movement of the conductor in dynamic microphones
induces an electric current which is then transformed into the
reproduction of sound.
- Condenser microphones which utilize the properties of capacitors.
The charge on the capsule of condenser microphones can be
accomplished by battery, by phantom powering, by T-powering, and by
using \'electrets\' - materials with a fixed charge.
## References
- Woram, John M. *Sound Recording Handbook*. 1989.
- Eargle, John. *Handbook of Recording Engineering*, Fourth Edition. 2003.
## Microphone Manufacturer Links

- AEA
- AKG
- Audio Technica
- Audix
- Bruel & Kjaer
- Neumann
- Rode
- Shure
- sE Electronics
# Engineering Acoustics/Piezoelectric Transducers
# Introduction
Piezoelectricity, from the Greek word \"piezo\" (to press), literally means
pressure electricity. Certain crystalline substances generate electric charges
under mechanical stress and conversely experience a mechanical strain in
the presence of an electric field. The piezoelectric effect describes a
situation where the transducing material senses input mechanical
vibrations and produces a charge at the frequency of the vibration. An
AC voltage causes the piezoelectric material to vibrate in an
oscillatory fashion at the same frequency as the input current.
Quartz is the best known single crystal material with piezoelectric
properties. Strong piezoelectric effects can be induced in materials
with an ABO3, Perovskite crystalline structure. \'A\' denotes a large
divalent metal ion such as lead and \'B\' denotes a smaller tetravalent
ion such as titanium or zirconium.
For any crystal to exhibit the piezoelectric effect, its structure must
have no center of symmetry. Either a tensile or compressive stress
applied to the crystal alters the separation between positive and
negative charge sites in the cell, causing a net polarization at the
surface of the crystal. The polarization varies directly with the
applied stress and is direction dependent so that compressive and
tensile stresses will result in electric fields of opposite voltages.
# Vibrations & Displacements
Piezoelectric ceramics have non-centrosymmetric unit cells below the
Curie temperature and centrosymmetric unit cells above the Curie
temperature. Non-centrosymmetric structures provide a net electric
dipole moment. The dipoles are randomly oriented until a strong DC
electric field is applied causing permanent polarization and thus
piezoelectric properties.
A polarized ceramic may be subjected to stress causing the crystal
lattice to distort changing the total dipole moment of the ceramic. The
change in dipole moment due to an applied stress causes a net electric
field which varies linearly with stress.
# Dynamic Performance
The dynamic performance of a piezoelectric material relates to how it
behaves under alternating stresses near the mechanical resonance. The
parallel combination of C2 with L1, C1, and R1 in the equivalent circuit
below controls the transducer's reactance, which is a function of
frequency.
## Equivalent Electric Circuit
![](eqcct.gif "eqcct.gif")
## Frequency Response
The graph below shows the impedance of a piezoelectric transducer as a
function of frequency. The minimum value at fm corresponds to the
resonance while the maximum value at fn corresponds to anti-resonance.
![](response.gif "response.gif")
# Resonant Devices
Non-resonant devices may be modeled by a capacitor representing the
capacitance of the piezoelectric with an impedance modeling the
mechanically vibrating system as a shunt in the circuit. The impedance
may be modeled as a capacitor in the non-resonant case which allows the
circuit to reduce to a single capacitor replacing the parallel
combination.
For resonant devices the impedance becomes a resistance or static
capacitance at resonance. This is an undesirable effect. In mechanically
driven systems this effect acts as a load on the transducer and
decreases the electrical output. In electrically driven systems this
effect shunts the driver requiring a larger input current. The adverse
effect of the static capacitance experienced at resonant operation may
be counteracted by using a shunt or series inductor resonating with the
static capacitance at the operating frequency.
![](resonant_device.gif "resonant_device.gif")
# Applications
## Mechanical Measurement
Because of the dielectric leakage current of piezoelectrics they are
poorly suited for applications where force or pressure have a slow rate
of change. They are, however, very well suited for highly dynamic
measurements that might be needed in blast gauges and accelerometers.
## Ultrasonic
High intensity ultrasound applications utilize half wavelength
transducers with resonant frequencies between 18 kHz and 45 kHz. Large
blocks of transducer material are needed to generate high intensities,
which makes manufacturing difficult and economically impractical.
Also, since half wavelength transducers have the highest stress
amplitude in the center, the end sections act as inert masses. The end
sections are often replaced with metal plates possessing a much higher
mechanical quality factor; giving the composite transducer a higher
mechanical quality factor than a single-piece transducer.
The overall electro-acoustic efficiency is:
` Qm0 = unloaded mechanical quality factor`\
` QE = electric quality factor`\
` QL = quality factor due to the acoustic load alone`
The second term on the right hand side is the dielectric loss and the
third term is the mechanical loss.
Efficiency is maximized when:
then:
The maximum ultrasonic efficiency is described by:
Applications of ultrasonic transducers include:
` Welding of plastics`\
` Atomization of liquids`\
` Ultrasonic drilling`\
` Ultrasonic cleaning`\
` Ultrasound`\
` Non destructive testing`\
` etc.`
# More Information and Source of Information
MorganElectroCeramics\
Resources for Piezoelectric Transducers
# Engineering Acoustics/Piezoelectric Acoustic Sensor
## Introduction
Piezoelectric Acoustic Wave technologies have been used for over 60
years. They have many applications for pressure, chemical concentration,
temperature or mass sensors. Their detection mechanism is based on
acoustic wave propagation. An acoustic wave is excited and propagates
through or on the surface of the material. Changes to the
characteristics of the propagation path affect the velocity and/or
amplitude of the wave. Changes in velocity/amplitude can be monitored by
measuring the natural frequency or phase characteristics of the sensor,
which can then be correlated to the corresponding physical or chemical
quantity being measured. [^1] Acoustic waves sensors use piezoelectric
materials to generate and detect acoustic waves. Piezoelectric materials
provide the transduction between electrical and mechanical response
conversion of electrical signal into mechanical acoustic waves and vice
versa. Conventional piezoelectric materials includes quartz, LiNbO3, AlN
and LiTaO3.
## Acoustic Wave Propagation Modes
Piezoelectric acoustic wave devices are described by the mode of wave
propagation through or on a piezoelectric substrate. If the wave
propagates on the surface of the substrate, it is known as a surface
wave; if it propagates through the substrate, it is called a bulk
wave.
### Mechanical waves in sensor devices
Mechanical waves for sensor applications are of two different types:
shear waves and compressional waves. Shear waves (also called S waves)
have particle displacements that are normal to the direction of wave
propagation, as for surface water waves. Compressional waves (also
called P waves) have particle displacements along the same direction as
the propagation direction of the wave[^2].
*Figure: Shear wave*

*Figure: Compressional wave*
## Acoustic Wave Technology
Surface acoustic wave (SAW) and bulk acoustic wave (BAW) are the two most
commonly used technologies in sensor applications.
### Surface Acoustic Wave
The operation frequency of the SAW device ranges from the MHz to GHz
range, mainly depending on the interdigital transducer's design and
piezoelectric material[^3]:
```{=html}
<center>
```
$f_{res} = \frac{V_{R}}{\lambda}$
```{=html}
</center>
```
where $V_{R}$ is Rayleigh wave velocity determined by material
properties and λ is the wavelength defined as the periodicity of the
IDT. The figure below shows a SAW delay-line configuration, which consists of
two IDTs: one acts as the transmitter to generate acoustic
waves, and the other as a receiver; the path between the IDTs is known
as the delay line. When an electric signal is applied to the
interdigitated electrodes (IDT) with alternating polarity, as shown in the
figure, alternating regions of tensile and compressive strain form between
adjacent fingers of the electrodes due to the piezoelectric effect of the material. A
mechanical wave is generated at the surface. The mechanical wave
propagates in both directions from the input IDT, so only half of the
energy of the mechanical wave propagates across the delay line in the
direction of the output IDT. [^4] The delay line is the sensing area;
usually, a sensing material is deposited on the delay line in chemical
sensors to absorb the target analytes.
*Figure: Surface acoustic wave sensor interdigitated transducer diagram*

*Figure: Surface acoustic wave resulting from opposing polarity of the electrodes of the IDT*
The animation below is a time-domain simulation of the 2D structure of a
SAW device using COMSOL. The x- and y-axes represent position in the model.
The small rectangles on top are electrodes.

*Figure: FEM model with SAW*
#### Sensor Response
The surface acoustic wave is sensitive to changes in the surface
properties of the medium in the delay line; these changes modulate the
velocity and amplitude of the wave.
The surface wave velocity can be perturbed by various factors, each of
which represents a possible sensor response[^5]
```{=html}
<center>
```
$\frac{\delta v}{v_{0}}= \frac{1}{v_{0}}(\frac{\delta v}{\delta m}\Delta m + \frac{\delta v}{\delta c}\Delta c + \frac{\delta v}{\delta T}\Delta T + ...)$
```{=html}
</center>
```
- where $v_{0}$ is the unperturbed wave velocity, $m$ is mass, $T$ is
  temperature, and $c$ is stiffness.
Therefore, this kind of devices can be used in mass, pressure and
temperature sensing applications.
##### Mass sensor
One of the most common uses of surface acoustic wave (SAW) sensors is
mass sensing.

Examples of applications: gas sensors, biosensors.
The sensor material is deposited along the propagation path between the
two IDTs. After exposure to a target analyte (e.g. a target gas), the
active sensing material adsorbs the analyte molecules only, which causes
the mass of the sensing material to increase and the surface acoustic
wave speed to decrease along the propagation path due to mass loading.
This causes a change in the delay time[^6],
```{=html}
<center>
```
$\tau = \frac{L_{path}}{V_{R}}$
```{=html}
</center>
```
where $L_{path}$ is the length of propagation path. By tracking the
delay time change at the receiver IDT, one can infer the concentration
of the target analyte.
```{=html}
<center>
```
$\Delta \tau = \frac{L_{path}}{V_{R}} - \frac{L_{path}}{V_{R}'} \propto \text{concentration}$
```{=html}
</center>
```
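A quick numerical illustration of this delay-shift relation is sketched below; the path length and the small velocity perturbation are assumed values, not data for a particular device:

```python
# Delay-time change in a SAW delay line caused by a mass-loading slowdown.
L_path = 2.0e-3        # delay-line length, m (assumed)
v_R  = 3490.0          # unperturbed Rayleigh velocity, m/s (assumed)
v_Rp = 3489.0          # perturbed velocity after adsorption, m/s (assumed)

delta_tau = L_path / v_Rp - L_path / v_R
print(f"delay shift = {delta_tau * 1e12:.0f} ps")  # ~164 ps for this example
```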
##### Equivalent circuit
Mason's crossed-field model was used to develop the equivalent
electrical circuit for one period of IDT fingers[^7]. Frequency-dependent
resistance blocks were used. The resistance is minimum at the
center frequency of the SAW device, and very high for the remaining
frequencies. Thus, the input energy propagates only at frequencies
near the resonant frequency. The equivalent circuit below is
implemented using ADS.
### Bulk Acoustic Wave
A bulk acoustic wave is a wave that travels through a piezoelectric
material, as in a quartz delay line. It is also known as a volume
acoustic wave. In some materials, the wave velocity is greater for bulk
acoustic waves than for surface acoustic waves: a SAW is composed of a
longitudinal and a shear component, and its velocity is lower than that
of either, whereas bulk acoustic waves contain either longitudinal or
shear waves only, and thus propagate faster.
#### Quartz Crystal Microbalance (QCM) technology
QCM is the oldest and simplest acoustic wave device for mass sensors. It
consists of a thin disk of AT-cut quartz with parallel circular
electrodes patterned on both sides. The application of a voltage between
these electrodes results in a shear deformation of the crystal[^8].
*Figure: Quartz resonators with front and back electrodes*
The working principle is based on mass loading, similar to the SAW
sensor. Bulk adsorption of the target analyte onto the coated crystal causes
an increase in effective mass, which reduces the resonant frequency of
the crystal in direct proportion to the concentration of the target
analyte. For an ideal sensing material, this sorption process is fully
reversible with no long-term drift, giving a highly reliable and
repeatable measurement[^9].
The relation between the frequency shift and the mass-loading can be
obtained from a model developed by Prof. Dr. Günter Sauerbrey from
Tiefenort, Germany, in 1959:
$\Delta f = -\frac{2f_{0}^2}{A \sqrt{\rho_{q}\mu_{q}}}\Delta m$[^10]
- $f_{0}$ - resonant frequency depends on the wave velocity (v) and
the piezoelectric material thickness, $f_{0} = \frac{v}{2d}$
- $\Delta f$ - frequency change
- $\Delta m$ - mass change
- $A$ - active area
- $\rho_{q}$ - density of piezoelectric material
- $\mu_{q}$ -shear modulus of piezoelectric material
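A minimal numerical sketch of the Sauerbrey relation follows; the quartz constants are standard textbook values, while the resonant frequency and electrode area are assumed device values:

```python
# Sauerbrey frequency shift for an AT-cut quartz crystal microbalance.
import math

f0 = 5.0e6        # resonant frequency, Hz (assumed 5 MHz crystal)
A = 1.0e-4        # active electrode area, m^2 (assumed, 1 cm^2)
rho_q = 2648.0    # density of quartz, kg/m^3
mu_q = 2.947e10   # shear modulus of AT-cut quartz, Pa

def freq_shift(delta_m):
    """Frequency shift (Hz) for added mass delta_m (kg)."""
    return -2.0 * f0**2 * delta_m / (A * math.sqrt(rho_q * mu_q))

print(f"{freq_shift(1e-9):.1f} Hz")  # ~ -56.6 Hz for 1 ug on 1 cm^2
```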
#### Thin-film Bulk Acoustic Resonator (FBAR) technology
FBAR is a special case of QCM with piezoelectric film thicknesses ranging
from only several micrometers down to tenths of a micrometer, fabricated
using MEMS technology. FBARs resonate at frequencies up to 10 GHz, and
their mass sensitivity is proportional to their resonance frequency; an
FBAR can achieve roughly 3X the mass sensitivity of a QCM.

*Figure: Thin-film bulk acoustic resonator (FBAR)*
## References
[^1]: Hoang T 2009 Design and realization of SAW pressure sensor using
aluminium nitride Dissertation University Joseph Fourier, France
[^2]: hagbardceline (n.d.). Mechanical waves and shear wave induction in
    soft tissue. Retrieved April 13, 2018, from
    <https://steemit.com/ultrasonography/@hagbardceline/mechanical-waves-and-shear-wave-induction-in-soft-tissue>
[^3]: H. Wohltjen, "Mechanism of operation and design considerations for
surface acoustic wave device vapour sensors," Sensors and Actuators,
vol. 5, no. 4, pp. 307 -- 325, 1984.
[^4]: Kirschner J 2010 Surface acoustic wave sensors (SAWS): design for
application (www.jaredkirschner.com/
uploads/9/6/1/0/9610588/saws.pdf)
[^5]: Ricco, A.j., et al. "Surface acoustic wave gas sensor based on
film conductivity changes." Sensors and Actuators, vol. 8, no. 4,
1985, pp. 319--333., <doi:10.1016/0250-6874(85)80031-7>.
[^6]: H. Wohltjen, "Mechanism of operation and design considerations for
surface acoustic wave device vapour sensors," Sensors and Actuators,
vol. 5, no. 4, pp. 307 -- 325, 1984.
[^7]: Trang Hoang. Design and realization of SAW pressure sensor using
Aluminum Nitride. Acoustics \[physics.class-ph\]. Université
Joseph-Fourier - Grenoble I, 2009. English. `<tel-00540305>`{=html}
[^8]: Hoang T 2009 Design and realization of SAW pressure sensor using
aluminium nitride Dissertation University Joseph Fourier, France
[^9]: <http://www.michell.com/us/technology/quartz-crystal-microbalance.htm>
[^10]: Sauerbrey, G. (1959). Verwendung von Schwingquarzen zur Wägung
    dünner Schichten und zur Mikrowägung. Zeitschrift für Physik, 155(2),
    206--222.
# Engineering Acoustics/Microphone Technique
## General Technique
1. A microphone should be used whose frequency response will suit the
frequency range of the voice or instrument being recorded.
2. Vary microphone positions and distances until you achieve the
monitored sound that you desire.
3. In the case of poor room acoustics, place the microphone very close
to the loudest part of the instrument being recorded or isolate the
instrument.
4. Personal taste is the most important component of microphone
technique. Whatever sounds right to you, *is* right.
## Working Distance
### Close Miking
Miking at a distance of 1 inch to about 3 feet from the sound source is
considered close miking. This technique generally provides
a tight, present sound quality and does an effective job of isolating
the signal and excluding other sounds in the acoustic environment.
#### Leakage
Leakage occurs when the signal is not properly isolated and the
microphone picks up another nearby instrument. This can make the mixdown
process difficult if there are multiple voices on one track. Use the
following methods to prevent leakage:
- Place the microphones closer to the instruments.
- Move the instruments farther apart.
- Put some sort of acoustic barrier between the instruments.
- Use directional microphones.
#### 3 to 1 Rule
The 3:1 distance rule is a general rule of thumb for close miking. To
prevent phase anomalies and leakage, the microphones should be placed at
least three times as far from each other as the distance between the
instrument and the microphone.
*Figure: The 3:1 rule*
### Distant Miking
Distant miking refers to the placement of microphones at a distance of 3
feet or more from the sound source. This technique allows the full range
and balance of the instrument to develop and it captures the room sound.
This tends to add a live, open feeling to the recorded sound, but
careful consideration needs to be given to the acoustic
environment.
### Accent Miking
Accent miking is a technique used for solo passages when miking an
ensemble. A soloist needs to stand out from an ensemble, but placing a
microphone too close will sound unnaturally present compared to the distant
miking technique used with the rest of the ensemble. Therefore, the
microphone should be placed just close enough to the soloist that the
signal can be mixed effectively without sounding completely excluded
from the ensemble.
### Ambient Miking
Ambient miking is placing the microphones at such a distance that the
room sound is more prominent than the direct signal. This technique is
used to capture audience sound or the natural reverberation of a room
or concert
hall.
## Stereo and Surround Technique
### Stereo
Stereo miking is simply using two microphones to obtain a stereo
left-right image of the sound. A simple method is the use of a spaced
pair, which is placing two identical microphones several feet apart and
using the difference in time and amplitude to create the image. Great
care should be taken in the method as phase anomalies can occur due to
the signal delay. This risk of phase anomaly can be reduced by using the
X/Y method, where the two microphones are placed with the grills as
close together as possible without touching. There should be an angle of
90 to 135 degrees between the mics. This technique uses only amplitude,
not time, to create the image, so the chance of phase discrepancies is
unlikely.
*Figure: Spaced pair*

*Figure: X/Y method*
### Surround
To take advantage of 5.1 sound or some other surround setup, microphones
may be placed to capture the surround sound of a room. This technique
essentially stems from stereo technique with the addition of more
microphones. Because every acoustic environment is different, it is
difficult to define a general rule for surround miking, so placement
becomes dependent on experimentation. Careful attention must be paid to
the distance between microphones and potential phase anomalies.
## Placement for Varying Instruments
### Amplifiers
When miking an amplifier, such as for electric guitars, the mic should
be placed 2 to 12 inches from the speaker. Exact placement becomes more
critical at a distance of less than 4 inches. A brighter sound is
achieved when the mic faces directly into the center of the speaker cone
and a more mellow sound is produced when placed slightly off-center.
Placing off-center also reduces amplifier noise.
### Brass Instruments
High sound-pressure levels are produced by brass instruments due to the
directional characteristics of mid to mid-high frequencies. Therefore,
for brass instruments such as trumpets, trombones, and tubas,
microphones should face slightly off of the bell\'s center at a distance
of one foot or more to prevent overloading from windblasts.
### Guitars
Technique for acoustic guitars is dependent on the desired sound.
Placing a microphone close to the sound hole will achieve the highest
output possible, but the sound may be bottom-heavy because of how the
sound hole resonates at low frequencies. Placing the mic slightly
off-center at 6 to 12 inches from the hole will provide a more balanced
pickup. Placing the mic closer to the bridge with the same working
distance will ensure that the full range of the instrument is captured.
Some people prefer to use a contact microphone, attached (usually) by a
fairly weak temporary adhesive, however this will give a rather
different sound to a conventional microphone. The primary advantage is
that the contact microphone performance is unchanged as the guitar is
moved around during a performance, whereas with a conventional
microphone on a stand, the distance between microphone and guitar would
be subject to continual variation. Placement of a contact microphone can
be adjusted by trial and error to get a variety of sounds. The same
technique works quite well on other stringed instruments such as
violins.
### Pianos
Ideally, microphones would be placed 4 to 6 feet from the piano to allow
the full range of the instrument to develop before it is captured. This
isn\'t always possible due to room noise, so the next best option is to
place the microphone just inside the open lid. This applies to both
grand and upright pianos.
### Percussion
One overhead microphone can be used for a drum set, although two are
preferable. If possible, each component of the drum set should be miked
individually at a distance of 1 to 2 inches as if they were their own
instrument. This also applies to other drums such as congas and bongos.
For large, tuned instruments such as xylophones, multiple mics can be
used as long as they are spaced according to the 3:1 rule.
### Voice
Standard technique is to put the microphone directly in front of the
vocalist\'s mouth, although placing slightly off-center can alleviate
harsh consonant sounds (such as \"p\") and prevent overloading due to
excessive dynamic range. Several sources also recommend placing the
microphone slightly above the mouth.
### Woodwinds
A general rule for woodwinds is to place the microphone around the
middle of the instrument at a distance of 6 inches to 2 feet. The
microphone should be tilted slightly towards the bell or sound hole, but
not directly in front of it.
## Sound Propagation
It is important to understand how sound propagates due to the nature of
the acoustic environment so that microphone technique can be adjusted
accordingly. There are four basic ways that this occurs:
### Reflection
Sound waves are reflected by surfaces if the object is as large as the
wavelength of the sound. It is the cause of echo (simple delay),
reverberation (many reflections cause the sound to continue after the
source has stopped), and standing waves (the distance between two
parallel walls is such that the original and reflected waves in phase
reinforce one another).
### Absorption
Sound waves are absorbed by materials rather than reflected. This can
have both positive and negative effects depending on whether you desire
to reduce reverberation or retain a live sound.
### Diffraction
Objects that may be between sound sources and microphones must be
considered due to diffraction. Sound will be stopped by obstacles that
are larger than its wavelength. Therefore, higher frequencies will be
blocked more easily than lower frequencies.
### Refraction
Sound waves bend as they pass through mediums with varying density. Wind
or temperature changes can cause sound to seem like it is literally
moving in a different direction than expected.
## Sources
- Huber, Dave Miles, and Robert E. Runstein. *Modern Recording
Techniques*. Sixth Edition. Burlington: Elsevier, Inc., 2005.
- Shure, Inc. (2003). *Shure Product Literature.* Retrieved November
28, 2005, from
<http://www.shure.com/scripts/literature/literature.aspx>.
# Engineering Acoustics/Sealed Box Subwoofer Design
## Introduction
A sealed or closed box baffle is the most basic but often the cleanest
sounding subwoofer box design. The subwoofer box in its most simple
form, serves to isolate the back of the speaker from the front, much
like the theoretical infinite baffle. The sealed box provides simple
construction and controlled response for most subwoofer applications.
The slow low-end roll-off provides a clean transition into the extreme
low-frequency range. Unlike ported boxes, the cone excursion is reduced
below the resonant frequency of the box and driver due to the added
stiffness provided by the sealed box baffle.
Closed baffle boxes are typically constructed of a very rigid material
such as MDF (medium density fiberboard) or plywood 0.75 to 1 inch thick.
Depending on the size of the box and material used, internal bracing may
be necessary to maintain a rigid box. A rigid box is important to design
in order to prevent unwanted box resonance.
As with any acoustics application, the box must be matched to the
loudspeaker driver for maximum performance. The following will outline
the procedure to tune the box or maximize the output of the subwoofer
box and driver combination.
## Closed Baffle Circuit
The sealed box enclosure for subwoofers can be modeled as a lumped
element system if the dimensions of the box are significantly shorter
than the shortest wavelength reproduced by the subwoofer. Most subwoofer
applications are crossed over around 80 to 100 Hz. A 100 Hz wave in air
has a wavelength of about 11 feet. Subwoofers typically have all
dimensions much shorter than this wavelength, thus the lumped element
system analysis is accurate. Using this analysis, the following circuit
represents a subwoofer enclosure system.
```{=html}
<center>
```
![](Circuit_schema.jpg "Circuit_schema.jpg")\
```{=html}
</center>
```
where all of the following parameters are in the mechanical mobility
analog
V~e~ - voltage supply
R~e~ - electrical resistance
M~m~ - driver mass
C~m~ - driver compliance
R~m~ - resistance
R~Af~ - front cone radiation resistance into the air
X~Af~ - front cone radiation reactance into the air
R~Br~ - rear cone radiation resistance into the box
X~Br~ - rear cone radiation reactance into the box
## Driver Parameters
In order to tune a sealed box to a driver, the driver parameters must be
known. Some of the parameters are provided by the manufacturer, some are
found experimentally, and some are found from general tables. For ease
of calculations, all parameters will be represented in the SI units
meter/kilogram/second. The parameters that must be known to determine
the size of the box are as follows:
f~0~ - driver free-air resonance
C~MS~ - mechanical compliance of the driver
S~D~ - effective area of the driver
#### Resonance of the Driver
The resonance of the driver is either provided by the manufacturer or
must be found experimentally. It is a good idea to measure the resonance
frequency even if it is provided by the manufacturer to account for
inconsistent manufacturing processes.
The following diagram shows the setup for finding resonance:
```{=html}
<center>
```
\
```{=html}
</center>
```
Where voltage V1 is held constant and the frequency of the variable source is
varied
until V2 is a maximum. The frequency where V2 is a maximum is the
resonance frequency for the driver.
#### Mechanical Compliance
By definition compliance is the inverse of stiffness or what is commonly
referred to as the spring constant. The compliance of a driver can be
found by measuring the displacement of the cone when known masses are
placed on the cone while the driver is facing up. The compliance would
then be the displacement of the cone in meters divided by the added
weight in newtons.
#### Effective Area of the Driver
The physical diameter of the driver does not lead to the effective area
of the driver. The effective diameter can be found using the following
diagram:
```{=html}
<center>
```
![](Effective_area.jpg "Effective_area.jpg")\
```{=html}
</center>
```
From this diameter, the area is found from the basic area of a circle
equation.
## Acoustic Compliance
From the known mechanical compliance of the cone, the acoustic
compliance can be found from the following equation:
C~AS~ = C~MS~S~D~^2^
From the driver acoustic compliance, the box acoustic compliance is
found. This is where the final application of the subwoofer is
considered. The acoustic compliance of the box will determine the
percent shift upwards of the resonant frequency. If a large shift is
desire for high SPL applications, then a large ratio of driver to box
acoustic compliance would be required. If a more flattened response is
desire for high fidelity applications, then a lower ratio of driver to
box acoustic compliance would be required. Specifically, the ratios can
be found in the following figure using line (b) as reference.
C~AS~ = C~AB~\*r
r - driver to box acoustic compliance ratio
```{=html}
<center>
```
![](Compliance.jpg "Compliance.jpg")\
```{=html}
</center>
```
## Sealed Box Design
#### Volume of Box
The volume of the sealed box can now be found from the box acoustic
compliance. The following equation is used to calculate the box volume
V~B~ = C~AB~γP~0~

where γ is the ratio of specific heats (1.4 for air) and P~0~ is the
atmospheric pressure.
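Putting the compliance chain together, the short sketch below computes a box volume from C~MS~, S~D~, and a chosen compliance ratio r; the driver numbers are assumed examples, not those of a specific product:

```python
# Sealed-box volume from the chain:
#   C_AS = C_MS * S_D^2,  C_AB = C_AS / r,  V_B = C_AB * gamma * P0.
gamma, P0 = 1.4, 101_325.0   # ratio of specific heats; atmospheric pressure, Pa

C_MS = 5.0e-4                # mechanical compliance, m/N (assumed)
S_D = 0.050                  # effective cone area, m^2 (assumed)
r = 3.0                      # driver-to-box compliance ratio (chosen from chart)

C_AS = C_MS * S_D**2         # driver acoustic compliance, m^5/N
C_AB = C_AS / r              # required box acoustic compliance
V_B = C_AB * gamma * P0      # box volume, m^3
print(f"V_B = {V_B * 1000:.0f} litres")  # ~59 litres for these values
```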
#### Box Dimensions
From the calculated box volume, the dimensions of the box can then be
designed. There is no set formula for finding the dimensions of the box,
but there are general guidelines to be followed. The face of the box
which the driver is set in should not be a square. If the driver were
mounted in the center of a square face, the waves generated by the cone
would reach the edges of the box at the same time, thus when combined
would create a strong diffracted wave in the listening space. In order
to best prevent this, the driver should either be mounted offset on a
square face, or the face should be rectangular, with the driver closer
to one edge.
The ratios between internal height, width, and depth should never be
integers (2:1, 3:1, etc.), as this would encourage the formation of
standing waves inside the box. Some have suggested the use of the
Golden ratio and others the
third root of 2, both of
which are close to each other and close to the
IEC-recommended
ratios for room dimensions (which conform to the same acoustical
requirements). In practice most manufacturers formulate their boxes
based on aesthetic and cost considerations, while ensuring, through
testing, that no major box resonances appear. In high-quality units this
entails the extensive use of rigid in-box bracing, sound absorption
material, sophisticated alloys or polymers, complex geometrical shapes,
including curves, etc.
# Engineering Acoustics/New Acoustic Filter For Ultrasonics Media
## Introduction
Acoustic filters are used in many devices such as mufflers, noise
control materials (absorptive and reactive), and loudspeaker systems to
name a few. Although the waves in simple (single-medium) acoustic
filters usually travel in gases such as air and carbon-monoxide (in the
case of automobile mufflers) or in materials such as fiberglass,
polyvinylidene fluoride (PVDF) film, or polyethylene (Saran Wrap), there
are also filters that couple two or three distinct media together to
achieve a desired acoustic response. General information about basic
acoustic filter design can be perused at the following wikibook page
Acoustic Filter Design &
Implementation.
The focus of this article will be on acoustic filters that use
multilayer air/polymer film-coupled media as its acoustic medium for
sound waves to propagate through; concluding with an example of how
these filters can be used to detect and extrapolate audio frequency
information in high-frequency \"carrier\" waves that carry an audio
signal. However, before getting into these specific type of acoustic
filters, we need to briefly discuss how sound waves interact with the
medium(media) in which it travels and how these factors can play a role
when designing acoustic filters.
## Changes in Media Properties Due to Sound Wave Characteristics
As with any system being designed, the filter response characteristics
of an acoustic filter are tailored based on the frequency spectrum of
the input signal and the desired output. The input signal may be
infrasonic (frequencies below human hearing), sonic (frequencies within
human hearing range), or ultrasonic (frequencies above human hearing
range). In addition to the frequency content of the input signal, the
density, and, thus, the characteristic impedance of the medium (media)
being used in the acoustic filter must also be taken into account. In
general, the characteristic impedance $Z_0 \,$ for a particular medium
is expressed as\...
```{=html}
<center>
```
` `$Z_0 = \pm \rho_0 c \,$` `$(Pa \cdot s/m)$` `
```{=html}
</center>
```
where
```{=html}
<center>
```
` `$\rho_0 \,$` = (equilibrium) density of medium `$(kg/m^3)\,$\
` `$c \,$` = speed of sound in medium `$(m/s) \,$` `\
` `
```{=html}
</center>
```
The characteristic impedance is important because this value
simultaneously gives an idea of how fast or slow particles will travel
as well as how much mass is \"weighting down\" the particles in the
medium (per unit area or volume) when they are excited by a sound
source. The speed in which sound travels in the medium needs to be taken
into consideration because this factor can ultimately affect the time
response of the filter (i.e. the output of the filter may not radiate or
attenuate sound fast or slow enough if not designed properly). The
intensity $I_A \,$ of a sound wave is expressed as\...
```{=html}
<center>
```
` `$I_A = \frac{1}{T}\int_{0}^{T} pu\quad dt = \pm \frac{P^2}{2\rho_0c} \,$` `$(W/m^2) \,$`. `
```{=html}
</center>
```
$I_A \,$ is interpreted as the (time-averaged) rate of energy
transmission of a sound wave through a unit area normal to the direction
of propagation, and this parameter is also an important factor in
acoustic filter design because the characteristic properties of the
given medium can change relative to intensity of the sound wave
traveling through it. In other words, the reaction of the particles
(atoms or molecules) that make up the medium will respond differently
when the intensity of the sound wave is very high or very small relative
to the size of the control area (i.e. dimensions of the filter, in this
case). Other properties such as the elasticity and mean propagation
velocity (of a sound wave) can change in the acoustic medium as well,
but focusing on frequency, impedance, and/or intensity in the design
process usually takes care of these other parameters because most of
them will inevitably be dependent on the aforementioned properties of
the medium.
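For concreteness, both quantities are evaluated below for two common media, using textbook densities and sound speeds and an assumed 1 Pa pressure amplitude:

```python
# Characteristic impedance Z0 = rho0*c and plane-wave intensity
# I = P^2 / (2*rho0*c) for two everyday media.
media = {
    "air (20 C)": (1.21, 343.0),      # rho0 (kg/m^3), c (m/s)
    "water":      (998.0, 1481.0),
}
P = 1.0                                # pressure amplitude, Pa (assumed)
for name, (rho0, c) in media.items():
    Z0 = rho0 * c
    I = P**2 / (2.0 * rho0 * c)
    print(f"{name:10s}: Z0 = {Z0:9.0f} Pa*s/m, I = {I:.2e} W/m^2")
```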
## Why Coupled Acoustic Media in Acoustic Filters?
Media coupling is employed in acoustic
transducers to either increase or decrease the impedance of the
transducer, and, thus, control the intensity and speed of the signal
acting on the transducer while converting the incident wave, or initial
excitation sound wave, from one form of energy to another (e.g.
converting acoustic energy to electrical energy). Specifically, the
impedance of the transducer is augmented by inserting a solid structure
(not necessarily rigid) between the transducer and the initial
propagation medium (e.g. air). The reflective properties of the inserted
medium is exploited to either increase or decrease the intensity and
propagation speed of the incident sound wave. It is the ability to
alter, and to some extent, control, the impedance of a propagation
medium by (periodically) inserting (a) solid structure(s) such as thin,
flexible films in the original medium (air) and its ability to
concomitantly alter the frequency response of the original medium that
makes use of multilayer media in acoustic filters attractive. The
reflection factor and transmission factor $\hat{R} \,$ and $\hat{T} \,$,
respectively, between two media, expressed as\...
```{=html}
<center>
```
$\hat{R} = \frac{pressure\ of\ reflected\ portion\ of\ incident\ wave}{pressure\ of\ incident\ wave} = \frac{\rho c - Z_{in}}{\rho c + Z_{in}} \,$
```{=html}
</center>
```
and
```{=html}
<center>
```
$\hat{T} = \frac{pressure\ of\ transmitted\ portion\ of\ incident\ wave}{pressure\ of\ incident\ wave} = 1 + \hat{R} \,$,
```{=html}
</center>
```
are the tangible values that tell how much of the incident wave is being
reflected from and transmitted through the junction where the media
meet. Note that $Z_{in} \,$ is the (total) input impedance seen by the
incident sound wave upon just entering an air-solid acoustic media
layer. In the case of multiple air-columns as shown in Fig. 2,
$Z_{in} \,$ is the aggregate impedance of each air-column layer seen by
the incident wave at the input. Below in Fig. 1, a simple illustration
explains what happens when an incident sound wave propagating in medium
(1) comes in contact with medium (2) at the junction of the two
media (x=0), where the sound waves are represented by vectors.
As mentioned above, an example of three such successive air-solid
acoustic media layers is shown in Fig. 2 and the electroacoustic
equivalent circuit for Fig. 2 is shown in Fig. 3 where
$L = \rho_s h_s \,$ = (density of solid material)(thickness of solid
material) = unit-area (or volume) mass, $Z = \rho c = \,$ characteristic
acoustic impedance of medium, and $\beta = k = \omega/c = \,$
wavenumber. Note that in the case of a multilayer, coupled acoustic
medium in an acoustic filter, the impedance of each air-solid section is
calculated by using the following general purpose impedance ratio
equation (also referred to as transfer matrices)\...
```{=html}
<center>
```
$\frac{Z_a}{Z_0} = \frac{\left( \frac{Z_b}{Z_0} \right) + j\ \tan(kd)}{1 + j\ \left( \frac{Z_b}{Z_0} \right) \tan(kd)} \,$
```{=html}
</center>
```
where $Z_b \,$ is the (known) impedance at the edge of the solid of an
air-solid layer (on the right) and $Z_a \,$ is the (unknown) impedance
at the edge of the air column of an air-solid layer.
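This impedance-ratio relation can be iterated layer by layer to estimate the input impedance of a small air/film stack, and from it the reflection factor defined earlier. The sketch below is a simplification under stated assumptions: PVDF-like film density and thickness, arbitrary 2 mm air columns, an unbounded-air termination, and each film treated purely as the series unit-area mass jωL from the equivalent circuit of Fig. 3:

```python
# Input impedance of an n-layer air/film stack via the impedance-ratio
# equation, then the reflection factor R = (rho*c - Zin)/(rho*c + Zin).
import numpy as np

rho, c = 1.21, 343.0            # air density (kg/m^3) and sound speed (m/s)
Z0 = rho * c                    # characteristic impedance of air
rho_s, h_s = 1780.0, 28e-6      # PVDF-like film density, thickness (assumed)
L = rho_s * h_s                 # unit-area mass of one film, kg/m^2
d = 2e-3                        # depth of each air column, m (assumed)

def input_impedance(f, n_layers=3):
    """Impedance seen on entering n air/film layers ending in open air."""
    w = 2.0 * np.pi * f
    k = w / c
    Zin = Z0                                   # unbounded-air termination
    for _ in range(n_layers):
        t = np.tan(k * d)                      # step across one air column
        Zin = Z0 * (Zin / Z0 + 1j * t) / (1.0 + 1j * (Zin / Z0) * t)
        Zin = Zin + 1j * w * L                 # add the series film mass
    return Zin

for f in (100.0, 40_000.0):                    # audio vs. ultrasonic
    Zin = input_impedance(f)
    R = (rho * c - Zin) / (rho * c + Zin)
    print(f"{f:7.0f} Hz: |R| = {abs(R):.3f}")
```

Evaluated this way, the stack passes the low (audio) frequency almost untouched while strongly reflecting the ultrasonic one, which is the low-pass behaviour exploited in the measurement application described below.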
## Effects of High-Intensity, Ultrasonic Waves in Acoustic Media in Audio Frequency Spectrum
When an ultrasonic wave is used as a carrier to transmit audio
frequencies, three audio effects are associated with extrapolating the
audio frequency information from the carrier wave: (a) beating effects,
(b) parametric array effects, and (c) radiation pressure.
Beating occurs when two ultrasonic waves with distinct frequencies
$f_1 \,$ and $f_2 \,$ propagate in the same direction, resulting in
amplitude variations which consequently make the audio signal
information go in and out of phase, or "beat", at a frequency of
$f_1 - f_2 \,$.
Parametric array effects occur when the intensity of an ultrasonic wave
is so high in a particular medium that the high displacements of
particles (atoms) per wave cycle changes properties of that medium so
that it influences parameters like elasticity, density, propagation
velocity, etc. in a non-linear fashion. The results of parametric array
effects on modulated, high-intensity, ultrasonic waves in a particular
medium (or coupled media) is the generation and propagation of audio
frequency waves (not necessarily present in the original audio
information) that are generated in a manner similar to the nonlinear
process of amplitude demodulation commonly inherent in diode circuits
(when diodes are forward biased).
Another audio effect that arises from high-intensity ultrasonic beams of
sound is a static (DC) pressure called radiation pressure. Radiation
pressure is similar to parametric array effects in that amplitude
variations in the signal give rise to audible frequencies via amplitude
demodulation. However, unlike parametric array effects, radiation
pressure fluctuations that generate audible signals from amplitude
demodulation can occur due to any low-frequency modulation and not just
from pressure fluctuations occurring at the modulation frequency
$\omega_M \,$ or beating frequency $f_1 - f_2 \,$.
## An Application of Coupled Media in Acoustic Filters
Figs. 1 - 3 were all from a research paper entitled New Type of
Acoustic Filter Using Periodic Polymer Layers for Measuring Audio
Signal Components Excited by Amplitude-Modulated High-Intensity
Ultrasonic
Waves
submitted to the Audio Engineering Society (AES) by Minoru Todo, Primary
Innovator at Measurement Specialties, Inc., in the October 2005 edition
of the AES Journal. Figs. 4 and 5 below, also from this paper, are
illustrations of test setups referred to in this paper. Specifically,
Fig. 4 is a test setup used to measure the transmission (of an incident
ultrasonic sound wave) through the acoustic filter described by Figs. 1
and 2. Fig. 5 is a block diagram of the test setup used for measuring
radiation pressure, one of the audio effects mentioned in the previous
section. It turns out that out of all of the audio effects mentioned in
the previous section that are caused by high-intensity ultrasonic waves
propagating in a medium, sound waves produced from radiated pressure are
the hardest to detect when microphones and preamplifiers are used in the
detection/receiver system. Although nonlinear noise artifacts occur due
to overloading of the preamplifier present in the detection/receiver
system, the bulk of the nonlinear noise comes from the inherent
nonlinear noise properties of microphones. This is true because all
microphones, even specialized measurement microphones designed for audio
spectrum measurements that have sensitivity well beyond the threshold of
hearing, have nonlinearity artifacts that increase in
magnitude at ultrasonic frequencies. These
nonlinearities essentially mask the radiation pressure generated because
the magnitude of these nonlinearities are orders of magnitude greater
than the radiation pressure. The acoustic (low-pass) filter referred to
in this paper was designed in order to filter out the \"detrimental\"
ultrasonic wave that was inducing high nonlinear noise artifacts in the
measurement microphones. The high-intensity, ultrasonic wave was
producing radiation pressure (which is audible) within the initial
acoustic medium (i.e. air). By filtering out the ultrasonic wave, the
measurement microphone would only detect the audible radiation pressure
that the ultrasonic wave was producing in air. Acoustic filters like
these could possibly be used to detect/receive any high-intensity
ultrasonic signal that may carry audio information which needs to be
recovered with an acceptable level of fidelity.
## References
\[1\] Minoru Todo, \"New Type of Acoustic Filter Using Periodic Polymer
Layers for Measuring Audio Signal Components Excited by
Amplitude-Modulated High-Intensity Ultrasonic Waves,\" Journal of the
Audio Engineering Society, vol. 53, pp. 930-941 (October 2005)
\[2\] Fundamentals of Acoustics; Kinsler *et al.*, John Wiley & Sons,
2000
\[3\] ME 513 Course Notes, Dr. Luc Mongeau, Purdue University
\[4\]
<http://www.ieee-uffc.org/archive/uffc/trans/Toc/abs/02/t0270972.htm>
# Engineering Acoustics/Acoustic Micro Pumps
## Application to Micro Scale Pipes
Acoustic Streaming is ideal for microfluidic systems because it arises
from viscous forces which are the dominant forces in low Reynolds flows
and which usually hamper microfluidic systems. Also, the streaming force
scales favorably as the size of the channel conveying the fluid (through
which the acoustic wave propagates) decreases.[^1] Because of acoustic
attenuation via viscous losses, a gradient in the Reynolds stresses is
manifest as a body force that drives acoustic streaming as well as
streaming from Lagrangian components of the flow.[^2] For more
information on the basic theory of acoustic streaming please see
Engineering Acoustics/Acoustic
streaming. When
applied to microchannels, the principles of acoustic streaming must
include bulk viscous effects (dominant far from the boundary layer,
though driven by boundary layer streaming), investigated in the classic
solution developed extensively by Nyborg[^3] in 1953 as well as
streaming inside the boundary layer. In a micromachined channel, the
dimensions of the channels are on the order of boundary layer thickness,
so both the inner and outer boundary layer streaming must be evaluated
to have a precise prediction for flow rates in acoustic streaming
micropumps.
The derivation that follows is for a circular channel of constant cross
section assuming that the incident acoustic wave is planar and bound
within the channel filled with a viscous fluid.[^4] The acoustic wave
has a known amplitude and fills the entire cross-section and there are
no reflections of the acoustic wave. The walls of the channel are also
assumed to be rigid. This is important, because rigid boundary
interaction results in boundary layer streaming that dominates the flow
profile for channels on the order of or smaller than the boundary layer
associated with viscous flow in a pipe. This derivation follows from the
streaming equations developed by Nyborg who starts with the compressible
continuity equation for a Newtonian fluid and the Navier-Stokes and
dynamic equations to get an expression for the net force per unit
volume. Eckart[^5] uses the method of successive approximations with the
pressure, velocity, and density expressed as the sum of first and second
order terms. Since the first order terms account for the oscillating
portion of the variables, the time average is zero. The second order
terms arise from streaming and are time independent contributions to
velocity, density, and pressure. These non-linear effects due to viscous
attenuation of the acoustic radiation in the fluid are responsible for a
constant streaming velocity\[1\].
Then the expansions (through the method of approximations) of the
variables are substituted into the standard force balance equations
describing a fluid, resulting in two equations \[5\]:
$(1)\ -F=-\nabla p_2 + \left(\beta_\mu + \frac{4}{3}\mu \right)\nabla \left(\nabla \cdot u_2\right) - \mu \nabla\times\nabla\times u_2$
$(2)\ -F\equiv \rho_0 |\left(u_1\cdot\nabla u_1\right) + u_1\left(\nabla\cdot u_1\right)|$
where the signifier $|expression|$ denotes time average, $F$ is the body
force density, $\beta_\mu$ is the bulk viscosity, $p_2$ is the second
order pressure, $\mu$ is the dynamic viscosity, $\rho_0$ is the density,
$u_2$ is the streaming velocity, and $u_1$ is the acoustic velocity. The
acoustic velocity, represented two dimensionally in axial and radial
directions respectively, is described by:
$(3)\ u_{1x}=V_ae^{-(\alpha+ik)x} \left(1-e^{-(1+i)\zeta z}\right) e^{i\omega t}$
$(4)\ u_{1z}=\frac{-V_ae^{-(\alpha+ik)x} \left(\alpha+ik\right) \left(1-e^{-(1+i)\zeta z}\right) e^{i\omega t}}{\left(1+i\right)\zeta}$
where
$\zeta=\sqrt{\frac{\omega\rho_0}{2\mu}}$
where $V_a$ is the acoustic velocity at the source,
$k=\frac{\omega}{c_0}$ is the wave number, $c_0$ is the velocity of sound
in the fluid, and $\alpha$ is the acoustic absorption coefficient. The
$\zeta$ term is the reciprocal of the viscous penetration depth, i.e., it
sets how thick the boundary layer is. The components of the acoustic velocity given in
Equation (3) and (4) can be substituted into Equation (2) to solve for
the first-order body force. This gives the one-dimensional body force
per unit volume in axial and radial components, respectively:
$(5)\ F_{xv}=\alpha\rho_0 V_a^2 e^{-2\alpha x}$
$(6)\ F_{xb}=\frac{1}{2}\rho_0 V_a^2 e^{-2\alpha x}\left[ke^{-\zeta z}\left(\cos(\zeta z)+\sin(\zeta z)-e^{-\zeta z}\right)+\alpha e^{-\zeta z}\left(e^{-\zeta z}-3(\cos(\zeta z)+\sin(\zeta z))\right)\right].$
$F_{xv}$ and $F_{xb}$ are expressions for the body force due to viscous
losses and due to the acoustic wave touching the rigid boundary\[5\].
With no-slip boundary conditions imposed on Equation (1), with Equations
(5) and (6) inserted, the streaming velocity $u_2$ can be found. The
differential pressure is assumed to be zero and static head can be
derived by evaluating Equation (1) with a boundary condition of zero net
flow through any fluid element. The solution of Equation(1) for the
streaming velocity profile in two terms relating to the viscous effects
(outer boundary layer streaming) and the boundary layer effects (inner
boundary layer streaming), respectively, results in:
$(7)\ u_{2v}=\frac{\alpha\rho_0 V_a^2 h^2}{2\mu}\left(\frac{z}{h}\right)\left(1-\frac{z}{h}\right)$
$(8)\ u_{2b}=\frac{V_a^2}{4c_0}\left[1+2e^{-\zeta z} \left(\sin(\zeta z)-\cos(\zeta z)\right)+e^{-2\zeta z} \right]$
These two expressions are summed when calculating the velocity profile
across the diameter of the pipe. With no-slip conditions, the outer
boundary layer streaming contribution to the acoustic streaming velocity
decreases as the diameter decreases, with a profile similar to
Hele-Shaw flow in infinitely wide rectangular channels \[7\]. Figure 1
shows this diameter scaling effect in water with an acoustic velocity
$A=0.1\,$m/s and a driving frequency of 2 MHz.

Figure 1: Scaling of the velocity profile with different channel
dimensions in microns.
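A minimal numerical sketch of Equations (7) and (8) for a water-filled channel (the absorption coefficient and the other parameter values below are assumed for illustration; they are not taken from the references):

```python
import numpy as np

# Assumed water-like parameters (illustrative only):
rho0 = 1000.0        # density, kg/m^3
mu = 1.0e-3          # dynamic viscosity, Pa.s
c0 = 1483.0          # sound speed, m/s
f = 2.0e6            # driving frequency, Hz
Va = 0.1             # acoustic velocity amplitude, m/s
h = 10e-6            # channel height, m
alpha = 0.1          # assumed acoustic absorption coefficient, Np/m

omega = 2 * np.pi * f
zeta = np.sqrt(omega * rho0 / (2 * mu))   # reciprocal viscous penetration depth
print("viscous penetration depth:", 1 / zeta, "m")   # ~0.4 um at 2 MHz

z = np.linspace(0.0, h, 201)   # distance from the wall across the channel

# Eq. (7): outer (viscous) streaming, parabolic across the channel
u2v = alpha * rho0 * Va**2 * h**2 / (2 * mu) * (z / h) * (1 - z / h)

# Eq. (8): inner (boundary-layer) streaming, confined near the wall
u2b = Va**2 / (4 * c0) * (
    1
    + 2 * np.exp(-zeta * z) * (np.sin(zeta * z) - np.cos(zeta * z))
    + np.exp(-2 * zeta * z)
)

u_total = u2v + u2b
print("peak total streaming velocity:", u_total.max(), "m/s")
```

For these assumed numbers the boundary-layer term dominates the viscous term by roughly two orders of magnitude in a 10 micron channel, which is the scaling argument made in the next paragraph.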
Many groups, such as Rife *et al.* \[7\], underestimate the
possibilities that acoustic streaming has to offer in channels smaller
than $10\,\mu m$ because the inner boundary layer streaming velocity is
ignored. The boundary layer effects are present regardless of diameter.
In water, the acoustic boundary layer is about 1 micron; therefore, for
pipes with diameters on the order of ten microns or less, there is a marked
increase in the streaming velocity.[^6] From the velocity profile of the
inner boundary layer streaming in Figure 2, the contribution of the
boundary layer factors in favorably as the diameter of the channel
decreases. Note that the magnitude of the inner boundary layer streaming
is not affected by the diameter and that the percentage of the channel
experiencing the boundary streaming decreases as channel diameter
increases.

Figure 2: Percentage of channel diameter influenced by inner boundary
layer streaming for different channel dimensions.
The total flow velocity profile, combining both the viscous and boundary
layer effects as in Figure 3, becomes more plug-like as the diameter of
the channel decreases.

Figure 3: Normalized total acoustic streaming velocity profile showing
the effect of channel diameter.
Driving frequency does have an effect on the velocity profile for a
channel of constant diameter that experiences a sizable contribution
from the boundary layer. The frequency dependence on the inner boundary
layer contribution is evident for a $10\,\mu m$ channel with typical
parameters for water and an acoustic velocity $A=0.1\,$m/s in Figure 4.
Note that the viscous contribution to acoustic streaming is also shown,
but does not exhibit a frequency dependence. For small channels (less
than 10 microns), the inner boundary layer streaming affects a more
sizable portion of the channel at lower frequencies.

Figure 4: The inner boundary layer streaming takes up a larger
percentage of the channel dimensions as the driving frequency decreases,
while the viscous contribution is unaffected.
The total acoustic streaming flow profile is then given in Figure 5.
From this plot, matching the driving frequency to channel geometry is
important to achieve the maximum flow velocity for micro-nano fluidic
devices.

Figure 5: Total acoustic streaming velocity profile with the effect of
driving frequency. The match of 2 MHz and a 10 micron channel yields the
greatest volume flow.
## Actuation in Microfluidics
In microfluidic systems, a piezoelectric actuator can be used to impart
an acoustic wave field in a liquid. This effect is even imparted through
the walls of the device. The advantage is that the actuator does not
need to be in contact with the working fluid.[^7] Since the streaming
effect occurs normal to the resonator, there may be difficulties in
coupling an actuator with typical micromachining techniques which
generally yield 2-D layouts of microfluidic networks. The solutions
developed for acoustic streaming assume that the acoustic wave is planar
with respect to the channel axis. Therefore a configuration that results
in the most predictable flow is one in which the acoustic wave source
(piezoelectric bulk acoustic resonator) is placed such that the channel
is axially oriented to the normal of the actuator surface.[^8] Figure 6
shows such a configuration from a view looking down onto the device.
![Figure 6: Micro pump utilizing acoustic streaming based on one
developed by Rife *et al.* \[7\]. The BAW piezo actuators are in black
and the arrows indicate the direction of
flow.](Acoustic_micro_pump1.JPG "Figure 6: Micro pump utilizing acoustic streaming based on one developed by Rife et al. [7]. The BAW piezo actuators are in black and the arrows indicate the direction of flow."){width="380"}
The piezo actuators are in black. This cartoon of a micromachined device
is based on one created by Rife *et al.* \[7\]. The dimensions of their
device were on the order of 1.6 mm square (much greater than the size
of the boundary layer), which makes their predictions using classical
solutions by Nyborg that do not include inner boundary layer streaming
valid, as can be seen in Figure 3 where channels much larger than the
boundary layer size are relatively uninfluenced by that part of the
acoustic streaming. However, employing this configuration in the context
of microfabrication techniques is difficult for very small channels.
Rife *et al.* \[7\] managed to manually place piezoelectric actuators
oriented perpendicular to the open ends of channels milled into a block
of PMMA, although their channel dimensions are much larger than those at
which the boundary layer effects dominate or contribute significantly.
For smaller channels, the only option is to put the actuators on the
underside or top of a micro machined fluidic circuit \[8\]. This
configuration, shown in Figure 7, results in acoustic wave reflections.
Reflections or standing waves will complicate the streaming analysis.
![Figure 7: Planar micromachined micro pump based off one created by
Hashimoto *et al.* \[8\]. Every other BAW is actuated to drive flow in
one direction. The black rectangles represent activated BAW piezo
devices. By shutting them off and actuating the brown ones, the flow can
be
reversed.](Acoustic_micro_pump_2.jpg "Figure 7: Planar micromachined micro pump based off one created by Hashimoto et al. [8]. Every other BAW is actuated to drive flow in one direction. The black rectangles represent activated BAW piezo devices. By shutting them off and actuating the brown ones, the flow can be reversed."){width="380"}
Another option for instigating acoustic streaming results from the
attenuation of surface acoustic waves (SAW) in contact with a fluid
medium \[6\]. In this case, the transverse component of a Rayleigh wave
(or a Lamb wave) propagating along a surface in contact with a fluid is
effectively transferred into a compression wave in the fluid. The energy
in the SAW is dissipated by the fluid and little disturbance is felt in
the substrate far from the SAW source (interdigital piezo actuators).
Figure 8 is a cartoon of the principle. ![Figure 8: Conversion of SAW,
in a solid substrate in contact with a liquid, to planar acoustic
pressure waves
\[6\].](SAW_interaction_with_liquid.jpg "Figure 8: Conversion of SAW, in a solid substrate in contact with a liquid, to planar acoustic pressure waves [6]."){width="380"}
This is the case so long as the velocity of the SAW is greater than the
acoustic velocity of the liquid. The compression wave radiating from the
surface leaves at the Rayleigh angle given by:
$(9)\ \phi_R = \arcsin\left(\frac{V_a}{V_R}\right)$
where $V_R$ is the velocity of the surface acoustic wave and $V_a$ is the acoustic velocity of the liquid. Given that
angle then, theoretically, two actuators producing SAWs could be placed
opposite each other to produce a standing wave field in the fluid across
the channel and a traveling planar wave parallel to the channel axis.
Figure 9 shows how this could be done.

Figure 9: Parallel SAW acoustic streaming micro-pump.
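As a quick numerical check of Equation (9), a sketch with assumed material values (the sound speed of water and a typical SAW velocity for a lithium niobate substrate):

```python
import math

V_a = 1485.0   # sound speed in water, m/s (assumed value)
V_R = 3990.0   # SAW velocity on a LiNbO3 substrate, m/s (assumed value)

phi_R = math.degrees(math.asin(V_a / V_R))   # Eq. (9)
print(f"Rayleigh angle: {phi_R:.1f} degrees")  # roughly 22 degrees
```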
Finally, a very interesting pump that uses acoustic standing waves and a
diffuser nozzle is shown in Figure 10, which has been developed by
Nabavi and Mongeau.[^9]

Figure 10: Valveless acoustic standing wave micro pump.
While this pump does not use the same acoustic streaming principles, it
is included because it uses acoustic waves to generate a flow. The
standing wave, induced by relatively small movements of the piston, has
a maximum pressure at the anti-node and a minimum at the node. Positioning
the inlet and the outlet at these two locations allows fluid to enter
the chamber immediately after the part of the cycle where the pressure
at the anti-node has overcome the discharge pressure at the diffuser
nozzle. Most importantly, the diffuser nozzle outlet has an asymmetric
resistance. After the fluid is ejected and the pressure in the chamber
is temporarily lower than the ambient pressure, the fluid does not flow
right back in through the outlet but instead enters at the pressure
node, where the pressure is lower. This clever design allows for a
valveless pumping apparatus.
The forward and back flow resistance of the diffuser nozzle are not the
same, so a net mass flow out of the resonance chamber is observed.
Another interesting pump for precise positioning of fluid droplets in
microchannels that uses similar acoustic standing waves in a resonance
chamber has been developed by Langelier *et al.*.[^10] Instead of a
piezoelectric membrane generating the acoustic standing waves in the
resonance chamber, the resonance chamber is filled with air and
connected to a larger container that has a speaker at one end. Multiple
quarter wavelength resonance chambers are tuned to specific frequencies,
each with a different length and width. Different pipes connected to
each resonance chamber can then be activated with one source, each one
independently depending on which frequencies the speaker is emitting.
Just like the acoustic standing wave pump of Nabavi and Mongeau, an
outlet is located at the point of peak pressure amplitude, which in this
case is at the end of the resonance chamber. With a rectification
structure, an oscillating flux of fluid out of the resonance chamber is
converted into a pulsed flow in the microfluidic channel.
## References
[^1]: K. D. Frampton, et al., \"The scaling of acoustic streaming for
application in micro-fluidic devices,\" Applied Acoustics, vol. 64,
pp. 681-692, 2003.
[^2]: J. Lighthill, \"Acoustic streaming,\" Journal of Sound and
Vibration, vol. 61, pp. 391-418, 1978.
[^3]: W. L. Nyborg, \"Acoustic streaming due to attenuated plane
waves,\" Journal of the Acoustical Society of America, vol. 25, pp.
68-75, 1953.
[^4]: K. D. Frampton, et al., \"Acoustic streaming in micro-scale
cylindrical channels,\" Applied Acoustics, vol. 65, pp. 1121-1129,
Nov 2004.
[^5]: C. Eckart, \"Vortices and streams caused by sound waves,\"
Physical Review, vol. 73, pp. 68-76, 1948.
[^6]: G. Lindner, \"Sensors and actuators based on surface acoustic
waves propagating along solid-liquid interfaces,\" Journal of
Physics D-Applied Physics, vol. 41, 2008.
[^7]: J. C. Rife, et al., \"Miniature valveless ultrasonic pumps and
mixers,\" Sensors and Actuators a-Physical, vol. 86, pp. 135-140,
Oct 2000.
[^8]: K. Hashimoto, et al., \"Micro-actuators employing acoustic
streaming caused by high-frequency ultrasonic waves,\" Transducers
97 - 1997 International Conference on Solid-State Sensors and
Actuators, Digest of Technical Papers, Vols 1 and 2, pp. 805-808,
1997.
[^9]: Nabavi, M. and L. Mongeau (2009). \"Numerical analysis of high
frequency pulsating flows through a diffuser-nozzle element in
valveless acoustic micropumps.\" Microfluidics and Nanofluidics
7(5): 669-681.
[^10]: S. M. Langelier, et al., \"Acoustically driven programmable
liquid motion using resonance cavities,\" Proceedings of the
National Academy of Sciences of the United States of America, vol.
106, pp. 12617-12622, 2009.
# Engineering Acoustics/Sonic Supercharging of 2 Stroke Engines
## Sonic Supercharging of 2 Stroke Engines
This page of the Engineering Acoustics Wikibook discusses the merits and
design of Tuned Pipes for 2 Stroke Engines. For introductory material on
2 Stroke Engines please see the following links:
Wikipedia 2 Stroke
Engines
HowStuffWorks 2 Stroke
Engines
## Introduction
For a 2 stroke engine the tuned pipe is the section of the exhaust
system that begins at the exhaust port and ends at the end of the
converging section. A tuned pipe is made of between 3 and 4
characteristic sections depending on the desired effect. The figure
below depicts cross sections for 3 typical configurations of tuned pipes
as well as a straight pipe:
Figure: Typical pipes.
The purpose of straight and tuned pipes is to utilize the pressure waves
originating from the exhaust port to assist the breathing of the engine.
This is achieved by designing the pipe in such a way that positive and
negative reflected waves arrive back at the exhaust port at an instant
when a low or high pressure is desired. This is beneficial for two
stroke engines because unlike four stroke engines, they do not have
dedicated intake and exhaust strokes and valves.
The following picture labels the various elements of a two-stroke engine
which are referred to in this Wikibook page.
Figure: Two-stroke engine elements.
Furthermore, these abbreviations will help as well:
- In order of ascending crank angle
- TDC - Top Dead Center, 0 deg
- EPO - Exhaust Port Open
- TPO - Transfer Port Open
- BDC - Bottom Dead Center, 180 deg
- TPC - Transfer Port Close
- EPC - Exhaust Port Close
## Straight Pipe
The goal of a tuned straight pipe in this application is to use the
reflected negative pressure waves from the open end of the pipe to help
draw out the exhaust gases. By selecting the appropriate length of the
pipe, the reflected rarefaction wave arrives at the exhaust port just as
the transfer port opens thus assisting the flow of fresh mixture into
the cylinder, and exhaust gases out of the cylinder. The figure below
illustrates this action. In the figure, even though the piston has
reached bottom dead center (BDC), fresh mixture continues to enter the
cylinder because the rarefaction wave causes P2 to be smaller than P1. A
key point to note is that the velocity with which the pressure and
rarefaction waves travel down and up the exhaust pipe is for the most
part independent of the engine operating frequency (RPM). Due to this
fact, the conclusion must be made that for a given pipe length there is
an optimal RPM for which the waves will arrive producing the greatest
benefit for the breathing of the engine. At this optimal RPM, the engine
breathes significantly better and hence produces a noticeable increase
in output power. This effect is quantified by calculating the ratio of
fresh mixture to exhaust gases within the cylinder as the compression
stage begins at EPC. If the rarefaction wave is very large, it is
possible that fresh mixture is pulled into the exhaust pipe while both
transfer and exhaust ports are open. This phenomenon is known as short
circuiting the engine and produces undesired effects such as a decrease
in fuel economy and an increase in release of volatile organic
compounds. These negative effects can be mitigated by designing the
exhaust system such that either no fresh mixture is pulled into the
exhaust pipe (i.e. perfectly tuned straight pipe) or further utilizing
the exhaust pressure wave to inhibit short circuiting. For performance
two-stroke engines, the second solution is most often employed by means
of a tuned exhaust pipe known as a tune pipe.
Figure: Schematic of two-stroke engine with straight pipe.
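A back-of-the-envelope sketch of this tuning condition (all numbers below are assumed for illustration, not design values): the reflected rarefaction wave travels down the pipe and back, a distance of $2L$, in the time the crank takes to turn from EPO to TPO.

```python
# Illustrative tuned-length estimate for a straight pipe (assumed values).
c_exhaust = 500.0    # sound speed in hot exhaust gas, m/s (assumed)
rpm = 8000.0         # target engine speed, rev/min (assumed)
epo_to_tpo = 15.0    # crank angle between EPO and TPO, degrees (assumed)

deg_per_second = rpm * 360.0 / 60.0          # crank degrees swept per second
travel_time = epo_to_tpo / deg_per_second    # round-trip time for the wave, s
L = c_exhaust * travel_time / 2.0            # tuned pipe length, m

print(f"tuned pipe length: {L * 100:.1f} cm")   # ~7.8 cm for these numbers
```

The inverse dependence on RPM makes the point in the text explicit: since the wave speed is fixed, a given pipe length is optimal for only one engine speed.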
## Tune Pipe
With a converging-belly section-diverging type tune pipe the goal is to
have the diverging section create a returning rarefaction wave and the
converging section create a returning pressure wave. The belly section
acts as an appropriate time delay between the returning waves such that
the pressure wave arrives at the exhaust port after the transfer port
has closed. This pressure wave pushes the excess fresh mixture in the
exhaust pipe from a short circuit, back into the cylinder. Here the
short circuited fresh mixture is actually desired since this allows the
returning pressure wave to \"super charge\" the cylinder giving it more
fresh mixture than if the cylinder were filled at ambient pressure. This
is a similar result to turbo-charging or super-charging a four-stroke
engine. If the mixture contained within the cylinder before combustion
occurs were allowed to expand to ambient pressure, its volume would be
larger than the displacement of the engine. This phenomenon is
quantified as volumetric efficiency; it is calculated as the ratio of
the ambient pressure volume of the fresh charge, divided by the
displacement volume of the engine. The operation of a two-stroke engine
equipped with a properly tuned pipe is shown in the animation below; for
a step-by-step description of the process, please follow the link below
the animation.
Some exhaust manufacturers now mate tuned pipes to tuned (ported)
engines to get the best possible \'supercharging effect\' at a given
RPM. In the past a tuned pipe would have been tested on a stock engine,
but the length and shape of the pipe will differ on a \'tuned\' engine
because of its ability to rev higher.
Two-stroke engine
operation
## Tune Pipe Design Geometry
The most basic form of a tune pipe is shown in the figure below with
corresponding wave equations.
Figure: Geometry and equations for the simple pipe.
This pipe consists of an expansion chamber which serves to create both
the returning rarefaction and pressure waves. From reference \[1\], we
know that wave speed in the pipe is effectively independent of engine
RPM and largely dependent on temperature of the gases in the pipe. This
means that a tune pipe with basic geometry operates optimally for only
one specific RPM, as the engine RPM deviates from this optimal value the
timing of the arrival of the returning waves is less optimal for the
volumetric efficiency. The relation between the volumetric efficiency
and the engine RPM is characterized qualitatively by the following
graph:
Figure: Qualitative relation of volumetric efficiency to RPM.
Although the basic tune pipe performs the desired task of increasing the
volumetric efficiency, the narrow RPM band width for which increased
power is available reduces the practicality of the basic pipe since
engines are typically required to operate within a wide range of RPM.
One way to broaden the effective RPM band width of a pipe is to taper
the pipe at sections of increasing and decreasing cross section. To
understand how this works, we can represent a tapered section as many
small step increases/decreases in cross section. Each step will produce
transmitted and reflected waves in the same way as the basic geometry;
however, the overall effect is weaker waves with longer wave lengths
arriving back at the exhaust port. Although the waves have smaller peak
amplitudes, the effect on volumetric efficiency is greater due to the
longer interaction times of the waves with the cylinder and crankcase.
If the number of steps increased to n diverging steps and m converging
steps, the equations shown represent the plane waves as well as the
transmission and reflection factors for each change in cross section.
Figure: Geometry and equations for the stepped pipe.

Figure: Reflection and transmission coefficients for a step.
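To make the stepped-cone picture concrete, the sketch below applies the standard plane-wave pressure reflection and transmission factors for an abrupt area change from $S_1$ to $S_2$ in the same gas, $R = (S_1 - S_2)/(S_1 + S_2)$ and $T = 2S_1/(S_1 + S_2)$ (as given in Kinsler and Frey \[7\]); the diameters and step count are assumed for illustration:

```python
import math

# Diverging section approximated by n abrupt area steps (assumed geometry).
n_steps = 8
d_inlet, d_outlet = 0.030, 0.090   # diameters in metres (assumed)

diameters = [d_inlet + (d_outlet - d_inlet) * i / n_steps
             for i in range(n_steps + 1)]
areas = [math.pi / 4 * d**2 for d in diameters]

for S1, S2 in zip(areas, areas[1:]):
    R = (S1 - S2) / (S1 + S2)   # pressure reflection factor (negative here,
                                # i.e. each step reflects a rarefaction)
    T = 2 * S1 / (S1 + S2)      # pressure transmission factor
    print(f"R = {R:+.3f}, T = {T:.3f}")
```

Each small step reflects a weak rarefaction, so the returning wave is weaker but longer than the single reflection of the basic geometry, as described above.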
The graph below shows qualitatively how pressure at the exhaust port
varies with crank angle for both basic and tapered pipe geometry.
Figure: Qualitative difference in exhaust port pressure (EPP) between
basic and tapered pipes.
The important differences to notice in the graph are the relative
magnitudes and durations of the positive and negative pressure waves
arriving back at the exhaust port at TPO and EPC. In this graph, if we
pin down the waves with respect to time along the horizontal axis and
then we increase or decrease the RPM, the effect will be that the
positions of the port timing will no longer match up with the same
positions of the waves. This is due to the fact that, as mentioned
above, the wave speed is independent of RPM. In more detail, increasing
the RPM has the effect of shrinking the port timing scale while keeping
EPO in the same position; decreasing the RPM expands the port timing
scale with EPO remaining in the same position.
Looking at things the other way around, if we change some aspects of the
pipe geometry we can see how they change wave propagation in the pipe
and hence operation of the pipe with respect to the engine.
- Length of the pipe between the exhaust port and the diverging
section (L1) - this length is set by the difference in crank angle
between EPO and TPO and the desired effective RPM range of the pipe.
Making the section longer would fit a lower RPM range or a greater
difference between the crank angle of EPO and TPO.
- Length of the belly section - this length is set by the difference
in crank angle between TPO and EPC and the desired effective RPM
range of the engine. This length and L1 are interdependent since the
crank angles are also interdependent (e.g., EPC = 360 deg - EPO).
- The angle of the diverging/converging sections - changing this angle
from a steep cone (interior angle \> 90 degrees) to a shallow cone
(interior angle \< 90 degrees) has the effect of broadening out the
wavelength. This increases the effective RPM band width of the pipe
since there is greater flexibility in the crank angle for which an
appropriate pressure will be present at the exhaust port. This also
has the effect of decreasing the maximum attainable volumetric
efficiency of the pipe, since the peak pressure amplitude is
diminished by spreading the wave's energy over a longer
wavelength. Note that if the diameter of section L1 and the belly
section are kept constant, changing the angle and changing the
length of the diverging/converging section is geometrically the
same.
- The ratio of the cross section of section L1 and the belly section -
the ratio is largely dependent on the desired angle and length of
the diverging/converging sections and the minimum diameter desired
to avoid impeding the flow of the exhaust gases.
## Further Investigation
For further investigation of the operation of two-stroke engines with
tuned exhaust pipes it is most appropriate to analyze actual test data.
For this we can go to the TFX website, where they demonstrate their
testing and data analysis software, or read the paper referenced below
titled \"Exhaust Gas Flow Behavior in a Two-Stroke Engine\".
## References
1. Exhaust Gas Flow Behavior in a Two-Stroke Engine; Y. Ikeda T.
Takahashi, T. Ito, T. Nakajima; Kobe Univ.; SAE 1993
2. Power Tuning - Two Stroke
Engines
3. 2 STROKE WIZARD - Tuned Pipe Design
Software
4. Ian Williams Tuning MOTA Tune Pipe Design
Software
5. Les phénomènes d\'ondes dans les moteurs; M. Borel; Edition TECHNIP;
2000
6. Engineering Acoustics Course Notes; McGill University MECH 500; L.
Mongeau; 2008
7. Engineering Acoustics; L. Kinsler, A. Frey; 4th Edition; 2000
# Engineering Acoustics/Thermoacoustics
One ordinarily thinks of a sound wave as consisting only of coupled
pressure and position oscillations. In fact, temperature oscillations
accompany the pressure oscillations and when there are spatial gradients
in the temperature oscillations, oscillating heat flow occurs. The
combination of these oscillations produces a rich variety of
"thermoacoustic" effects. In everyday life, the thermal effects of sound
are too small to be easily noticed; for example, the amplitude of the
temperature oscillation in conversational levels of sound is only about
0.0001 °C. However, in an extremely intense sound wave in a pressurized
gas, these thermoacoustic effects can be harnessed to create powerful
heat engines and refrigerators. Whereas typical engines and
refrigerators rely on crankshaft-coupled pistons or rotating turbines,
thermoacoustic engines and refrigerators have no moving parts (or at
most only flexing parts without the need for sliding seals). This
simplicity, coupled with reliability and relatively low cost, has
highlighted the potential of thermoacoustic devices for practical use.
As a result, thermoacoustics is maturing quickly from a topic of basic
scientific research through the stages of applied research and on to
important practical applications\[1\]. Recently, thermoacoustic
phenomena have been employed in the medical field for imaging of
tissues.
## History
The history of thermoacoustic engines is long but sparsely populated. A
review of Putnam and Dennis\[2\] describes experiments of Byron
Higgins\[3\] in 1777 in which acoustic oscillations in a large pipe were
excited by suitable placement of a hydrogen flame inside. The Rijke
tube\[4\], an early extension of Higgins\' work, is well known to modern
acousticians. Higgins\' research eventually evolved into the modern
science of pulse combustion\[5\], whose applications have included the
German V-1 rocket (the \"buzz bomb\") used in World War II and the
residential pulse combustion furnace introduced by Lennox, Inc., in
1982. The Sondhauss tube is the earliest thermoacoustic engine that is a
direct antecedent of the thermoacoustic prime movers. Over 100 years
ago, glass blowers noticed that when a hot glass bulb was attached to a
cool glass tubular stem, the stem tip sometimes emitted sound, and
Sondhauss quantitatively investigated the relation between the pitch of
the sound and the dimensions of the apparatus.
The English physicist Lord Rayleigh explained the Sondhauss tube
qualitatively in 1896: \"In almost all cases where heat is communicated
to a body, expansion ensues and this expansion may be made to do
mechanical work. If the phases of the forces thus operative be
favorable, a vibration may be maintained. For the sake of simplicity, a
tube, hot at the closed end and getting gradually cooler towards the
open end, may be considered. At a quarter of a period before the phase
of greatest condensation, the air is moving inwards, i.e., towards the
closed end, and therefore is passing from colder to hotter parts of the
tube; but in fact the adjustment of temperature takes time, and thus the
temperature of the air deviates from that of the neighboring parts of
the tube, inclining towards the temperature of that part of the tube
from which the air has just come. From this it follows that at the phase
of greatest condensation heat is received by the air, and at the phase
of greatest rarefaction heat is given up from it, and thus there is a
tendency to maintain the vibrations.\"
The history of imposing acoustic oscillations on a gas to cause heat
pumping and refrigeration effects is even briefer and more recent than
the history of thermoacoustic prime movers. In a device called a
pulse-tube refrigerator, Gifford and Longsworth produced significant
refrigeration by applying a very low-frequency high-amplitude pressure
oscillation to the gas in a tube. As they explained the phenomenon, \"If
any closed chamber is pressurized and depressurized by delivery and
exhaustion of gas from one point on its surface and the flow is
essentially smooth, heat pumping will occur away from the point on its
surface,\" because of the temperature changes that accompany the
pressure changes in the gas, and their time-phasing relative to the
oscillatory gas flow.
## Principle of operation
When a sound wave is sent down a half-wavelength tube with a vibrating
diaphragm or a loudspeaker, the pressure pulsations make the gas inside
slosh back and forth. This forms regions where compression and heating
take place, plus other areas characterized by gas expansion and cooling.
Figure: Working principle.
A thermoacoustic refrigerator is a resonator cavity that contains a
stack of thermal storage elements (connected to hot and cold heat
exchangers) positioned so the back-and-forth gas motion occurs within
the stack. The oscillating gas parcels pick up heat from the stack and
deposit it to the stack at a different location. The device \"acts like
a bucket brigade\" to remove heat from the cold heat exchanger and
deposit it at the hot heat exchanger, thus forming the basis of a
refrigeration unit.
The governing mathematical equations of the thermoacoustic phenomenon
are given in the figure below.

Figure: Working principle (governing equations).
### Standing-wave systems
The thermoacoustic engine (TAE) is a device that converts heat energy
into work in the form of acoustic energy. A thermoacoustic engine
operates using the effects that arise from the resonance of a standing
wave in a gas. A standing-wave thermoacoustic engine typically has a
thermoacoustic element called the "stack". A stack is a solid component
with pores that allow the operating gas to oscillate while in contact
with the solid walls. The oscillation of the gas is accompanied by a
change of its temperature. Due to the introduction of solid walls into
the oscillating gas, the plate modifies the original, unperturbed
temperature oscillations in both magnitude and phase for the gas about a
thermal penetration depth $\delta=\sqrt{2k/\omega}$ away from the plate,
where $k$ is the thermal diffusivity of the gas and $\omega=2\pi f$ is
the angular frequency of the wave. The thermal penetration depth is
defined as the distance that heat can diffuse through the gas during a
time $1/\omega$.
In air oscillating at 1000 Hz, the thermal penetration depth is about
0.1 mm. Standing-wave TAE must be supplied with the necessary heat to
maintain the temperature gradient on the stack. This is done by two heat
exchangers on both sides of the stack.[^1]
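A quick check of that quoted figure, using $\delta=\sqrt{2k/\omega}$ (the thermal diffusivity of air below is an assumed textbook value):

```python
import math

k = 2.2e-5               # thermal diffusivity of air, m^2/s (assumed value)
f = 1000.0               # frequency, Hz
omega = 2 * math.pi * f  # angular frequency, rad/s

delta = math.sqrt(2 * k / omega)       # thermal penetration depth
print(f"delta = {delta * 1e3:.3f} mm")  # about 0.08 mm, i.e. ~0.1 mm
```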
If we put a thin horizontal plate in the sound field the thermal
interaction between the oscillating gas and the plate leads to
thermoacoustic effects. If the thermal conductivity of the plate
material would be zero the temperature in the plate would exactly match
the temperature profiles as in Fig. 1b. Consider the blue line in Fig.
1b as the temperature profile of a plate at that position. The
temperature gradient in the plate would be equal to the so-called
critical temperature gradient. If we fix the temperature at the
left side of the plate at ambient temperature *T*~a~ (e.g. using a heat
exchanger) then the temperature at the right would be below *T*~a~. In
other words: we have produced a cooler. This is the basis of
thermoacoustic cooling as shown in Fig. 2b which represents a
thermoacoustic refrigerator. It has a loudspeaker at the left. The
system corresponds to the left half of Fig. 1b, with the stack in the
position of the blue line. Cooling is produced at temperature *T*~L~.
It is also possible to fix the temperature of the right side of the
plate at *T*~a~ and heat up the left side so that the temperature
gradient in the plate would be larger than the critical temperature
gradient. In that case we have made an engine (prime mover) which can
e.g. produce sound as in Fig. 2a. This is a so-called thermoacoustic
prime mover. Stacks can be made of stainless steel plates but the device
also works very well with loosely packed stainless steel wool or
screens. It is heated at the left, e.g., by a propane flame and heat is
released to ambient temperature by a heat exchanger. If the temperature
at the left side is high enough, the system starts to produce a loud
sound.
Thermoacoustic engines still suffer from some limitations, including
that:
- The device usually has a low power-to-volume ratio.
- Very high densities of operating fluids are required to obtain high
power densities
- The commercially-available linear alternators used to convert
acoustic energy into electricity currently have low efficiencies
compared to rotary electric generators
- Only expensive specially-made alternators can give satisfactory
performance.
- TAE uses gases at high pressures to provide reasonable power
densities which imposes sealing challenges particularly if the
mixture has light gases like helium.
- The heat exchanging process in TAE is critical to maintain the power
conversion process. The hot heat exchanger has to transfer heat to
the stack and the cold heat exchanger has to sustain the temperature
gradient across the stack. Yet, the available space for it is
constrained with the small size and the blockage it adds to the path
of the wave. The heat exchange process in oscillating media is still
under extensive research.
- The acoustic waves inside thermoacoustic engines operated at large
pressure ratios suffer from many kinds of non-linearities, such as
turbulence, which dissipates energy due to viscous effects, and
harmonic generation, which carries acoustic power at frequencies
other than the fundamental frequency.
The performance of thermoacoustic engines usually is characterized
through several indicators as follows:[^2]
- The first and second law efficiencies.
- The onset temperature difference, defined as the minimum temperature
difference across the sides of the stack at which the dynamic
pressure is generated.
- The frequency of the resultant pressure wave, since this frequency
should match the resonance frequency required by the load device,
either a thermoacoustic refrigerator/heat pump or a linear
alternator.
- The degree of harmonic distortion, indicating the ratio of higher
harmonics to the fundamental mode in the resulting dynamic pressure
wave.
- The variation of the resultant wave frequency with the TAE operating
temperature
### Travelling-wave systems
Figure 3 is a schematic drawing of a travelling-wave thermoacoustic
engine. It consists of a resonator tube and a loop which contains a
regenerator, three heat exchangers, and a bypass loop. A regenerator is
a porous medium with a high heat capacity. As the gas flows back and
forth through the regenerator it periodically stores and takes up heat
from the regenerator material. In contrast to the stack, the pores in
the regenerator are much smaller than the thermal penetration depth, so
the thermal contact between gas and material is very good. Ideally the
energy flow in the regenerator is zero, so the main energy flow in the
loop is from the hot heat exchanger via the pulse tube and the bypass
loop to the heat exchanger at the other side of the regenerator (main
heat exchanger). The energy in the loop is transported via a travelling
wave as in Fig. 1c, hence the name travelling-wave systems. The ratio of
the volume flows at the ends of the regenerator is *T*~H~/*T*~a~, so the
regenerator acts as a volume-flow amplifier. Just like in the case of
the standing-wave system the machine "spontaneously" produces sound if
the temperature *T*~H~ is high enough. The resulting pressure
oscillations can be used in a variety of ways such as in producing
electricity, cooling, and heat pumping.
## Thermoacoustic machines
There are two basic kinds of thermoacoustic machines:
1. Thermoacoustic prime movers
2. Thermoacoustic refrigerator
Figure: A standing wave demonstration refrigerator.
## Basic components of a thermoacoustic system
A thermoacoustic machine generally consists of:
1. Acoustic driver
2. Stack or regenerator
3. Heat exchanger
4. Resonator
## Acoustic driver
Electrodynamic drivers are used in a class of electrically driven
thermoacoustic refrigeration systems. The mechanical and electrical
characteristics of the driver, in conjunction with the acoustic load
impedance at the driver piston, determine the electroacoustic efficiency
of the actuator. The electroacoustic efficiency is, of course, a key
factor in the overall efficiency of the cooling system. For this reason,
it is useful to develop models that allow the efficiency of any such
driver to be predicted for varying operating conditions and loads. A
detailed description of linear models of loudspeakers using equivalent
electrical circuits is readily available.\[2\]

Figure: Equivalent electrical circuit of an electroacoustic driver.
Several methods based on such linear models have been proposed in order
to determine the model parameters experimentally.
## Stack
In the thermoacoustic refrigerator the stack is the main component where
the thermoacoustic phenomenon takes place. Shown below are two stacks of
different materials used in a standing wave thermoacoustic refrigerator.
Figure: A ceramic stack.

Figure: A parallel plate stack.
## Heat exchanger
The heat exchangers employed in a thermoacoustic refrigerator influence
the acoustic field created in the resonator. There are many design
constraints such as porosity of the heat exchanger and high heat
transfer coefficient for efficiency. Due to these constraints, special
kinds of heat exchangers are used. One typical micro-channel aluminum
heat exchanger is shown below.
Figure: Heat exchanger.
## Resonator
This is the part of the refrigerator whose only role is to maintain the
acoustic wave. Because it is a dead volume that causes heat loss and
adds bulk, quarter-wavelength resonators are preferred over
half-wavelength ones.
## References
- Greg Swift *et al.*,\"Thermoacoustics for Liquefaction of Natural
Gas\",LNG Technology.
- L. L. Beranek, *Acoustics*, McGraw--Hill, New York.
- R. W. Wakeland, \"Use of electrodynamic drivers in thermoacoustic
refrigerators,\" J. Acoust. Soc. Am. 107, 827--832, 2000.
[^1]: M. Emam, Experimental Investigations on a Standing-Wave
Thermoacoustic Engine, M.Sc. Thesis, Cairo University, Egypt
(2013).
[^2]: G. W. Swift, Thermoacoustics: A unifying perspective for some
engines and refrigerators, Acoustical Society of America, Melville, 2002.
# Engineering Acoustics/Acoustic streaming
## Definition
Streaming is a term to describe a steady time-averaged mass-flux density
or velocity induced by oscillating acoustic waves in a fluid.[^1] Large
amplitude sound propagation in a fluid may result in a steady motion of
the fluid. This nonlinear phenomenon is known as acoustic streaming and
may be theoretically described by quadratic convective terms in the
governing equations of the fluid flow.
## Applications
Acoustic streaming may be effective in enhancement of convective heat
transfer, ultrasonic cleaning of contaminated surfaces, localized
micro-mixing, development of micro-actuators such as micro-manipulators
for small particles, and micro-pumps for liquids (see Engineering
Acoustics/Acoustic Micro Pumps).
## Different types of acoustic streaming
There exist several ways to classify streaming.
*A first classification of streaming is based on the mechanism by which
streaming is generated.*[^2]
1. **Boundary-layer driven streaming:** Flow driven by viscous stresses
on boundaries and caused by boundary layer effects between a solid
and a fluid. Boundary-layer driven streaming consists of two types
of streaming which occur always together: outer and inner
boundary-layer streaming.
1. **Outer boundary-layer streaming:** Rayleigh analysed acoustic
streaming when a standing wave is present between parallel
plates and explained that the air motion is caused by a
nonlinear second order effect. Rayleigh focussed his
investigations on mean flows outside the boundary layer and his
approach has since become the analytical tool for the study of
acoustic streaming.[^3]
2. **Inner boundary-layer streaming:** The study of inner boundary
layer streaming was developed by Schlichting, who investigated
an incompressible oscillating flow over a flat plate and
calculated the two-dimensional streaming field inside the
boundary layer. Figure 1 shows the inner and outer streaming in
a channel. The length of such a cell is a quarter of the wavelength.

Figure 1: Schematic diagram of inner and outer streaming cells.
2. **Jet driven streaming:** Periodic suction and ejection of a viscous
fluid through an orifice or a change in cross section. The mechanism
relies on the fact that a viscous fluid behaves quite differently
during the suction and ejection periods. During the suction period
the flow comes from all kind of directions, whilst during the
ejection period a jet is produced. In Figure 2, the outflow and
inflow patterns at the transition between a small tube and open
space are shown. These two outflows can be regarded as the
superposition of a broadly distributed oscillating flow and a
time-averaged toroidal circulation.

Figure 2: Outflow and inflow patterns.
3. **Gedeon streaming:** Associated with a travelling wave, as opposed to
a standing wave as for the previous example. In boundary layer and
jet driven streaming, there is no net mass transport. In travelling
wave streaming, a non-zero net mass transport occurs due to the
phase between the acoustic velocity and density. Travelling wave
streaming in Stirling thermoacoustic engines and refrigerators is
called Gedeon streaming or DC flow.
4. **Eckart streaming:** Eckart streaming or \'quartz wind\' is
generated by the dissipation of acoustic energy in the fluid.
Although Eckart was not the first to observe \"quartz wind\", he was
the first to give a mathematical analysis of it, in 1948.[^4] A
recent paper gives orders of magnitude of the velocities observed in
this type of acoustic streaming.[^5]
Figure 3 shows the schematic of Gedeon, Rayleigh and Jet-driven
streaming.
Figure 3: Gedeon, Rayleigh and jet-driven streaming.
*The second classification is based on the relative magnitude of the
acoustic wavelength to the characteristic length of induced vortical
structures.*
1. **Fine scale:** For the inner boundary-layer streaming (Schlichting
streaming), the boundary-layer thickness is equal to the width of
the vortices. So, it is part of the fine scale classification.
2. **Comparable scale:** For outer boundary-layer streaming (Rayleigh
streaming) and jet driven streaming, the wavelength and the vortex
size are comparable.
3. **Large scale:** Eckart streaming belongs to the large scale
classification because the vortex length scale exceeds the acoustic
wavelength.
*The third classification is based on the magnitude of the streaming
velocity.*
1. **Slow streaming:** Slow streaming is when the streaming velocity is
considerably smaller than the magnitude of the fluid velocity. In
fact, streaming can be characterized by an appropriate Reynolds
number, Re~NL~, which compares inertia and viscosity and determines
the degree to which the streaming velocity field is distorted. The
case Re~NL~\<\<1 corresponds to slow streaming.[^6]
2. **Fast streaming:** Fast streaming is when the streaming velocity
and fluid velocity are of the same magnitude. The case Re~NL~\>\>1
is referred to as fast streaming or nonlinear streaming. Most types of
acoustic streaming are slow; only jet driven streaming is considered
fast.
## Axial and transverse components of the streaming velocity
Here, the outer boundary-layer streaming velocity characteristics
obtained from the analytical solution of the linear wave equation are
presented. The amplitude of the axial component of the acoustic velocity
field in the linear case is given as,
$(1)\ u=u_{max}\sin(2\pi x/\lambda)$
where *u~max~=P~0~/ρ~0~c~0~*. The axial component *(u~st~)* and the
transverse component *(v~st~)* of the streaming velocity field are,
$(2)\ u_{st}=\frac{3}{8}\frac{u_{max}^2}{c}\left(1-\frac{2y^2}{(H/2)^2}\right)\sin(\pi x/l)$

$(3)\ v_{st}=-\frac{3}{8}\frac{u_{max}^2}{c}\frac{2\pi y}{\lambda}\left(1-\frac{2y^2}{(H/2)^2}\right)\cos(\pi x/l)$

where $-H/2<y<H/2$, *H* is the height of the tube, and $l=\lambda/4$.[^7]
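A minimal sketch evaluating Equations (1) and (2) at the middle of a streaming cell (the standing-wave amplitude and air properties below are assumed for illustration):

```python
import numpy as np

# Assumed parameters for a standing wave in air (illustrative only):
rho0, c = 1.2, 343.0    # density, kg/m^3 and sound speed, m/s
P0 = 1000.0             # pressure amplitude, Pa (roughly 154 dB)
H = 0.02                # tube height, m
f = 1000.0              # frequency, Hz

lam = c / f             # wavelength
l = lam / 4             # streaming cell length, l = lambda/4
u_max = P0 / (rho0 * c) # acoustic velocity amplitude

x = l / 2                             # middle of a streaming cell
y = np.linspace(-H / 2, H / 2, 101)   # transverse positions

# Eq. (2): axial component of the outer streaming velocity
u_st = (3 / 8) * u_max**2 / c * (1 - 2 * y**2 / (H / 2)**2) * np.sin(np.pi * x / l)

print("u_max =", u_max, "m/s")                     # ~2.4 m/s
print("centerline streaming:", u_st.max(), "m/s")  # ~6e-3 m/s
```

Note how the streaming velocity scales with $u_{max}^2/c$: it is a weak second-order effect, three orders of magnitude below the acoustic velocity for these numbers.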
## References
[^1]: see video on <http://media.efluids.com/galleries/all?medium=749>
[^2]: Greg Swift,\"Thermoacoustics: A unifying perspective for some
engines and refrigerators\", Condensed Matter and thermal Physics
Group,Los Alamos National Laboratory, Forth edition, 1999.
[^3]: S. Boluriaan, P. J. Morris,\"Acoustic streaming: from Rayleigh to
today\", International Journal of Aeroacoustics, 2 (3-4): 255-292,
2003.
[^4]: O. V. Rudenko, S. I. Soluyan,\"Theoretical foundations of
nonlinear acoustics\", Consultants Bureau, New York and London,
1977.
[^5]:
.
[^6]: S. Moreau, H. Bailliet, J. Valiere,\"Measurements of inner and
outer streaming vortices in a standing waveguide using laser Doppler
velocimetry\", Journal of Acoustical Society of America, 123 (2):
640-647, 2008.
[^7]: M. W. Thompson, A. A. Atchley,\"Simultaneous measurement of
acoustic and streaming velocities in a standing wave using laser
Doppler anemometry\", Journal of the Acoustical Society of America,
117:1828-1838, 2005.
# Engineering Acoustics/Acoustic Levitation
## Definition
Acoustic levitation employs sound radiation to lift objects. It is
mostly a non-linear phenomenon, since the resulting force on the object
is due to non-linear properties of wave motion.
## Motivation behind developing an acoustic reactor
The force generated due to acoustic radiation pressure is generally much
larger than the force of electromagnetic radiation pressure, which makes the
study of these forces interesting and noteworthy.
Secondly, this phenomenon will allow successful containerless
experiments. The importance of such studies is illustrated by the
following:
Kinetic studies can be classified into two categories:
1. The first includes material fixed to the walls.
2.  The second includes the flow of particles into and from an apparatus.
The drawback of existing methods is that only one type of particle can
be used. Consequently, the behavior reported isn\'t accurate (since the
walls in the first case and the surrounding particles in the second case
can have an effect on the behavior under study).
This elimination of walls can provide further insight by discarding
supports in addition to reducing the interactions with other particles
(e.g., by handling a single bubble).
One way to achieve this airborne application is by employing a
fascinating application of acoustics, namely acoustic levitation which
involves levitating objects using sound radiation.
Applications of this phenomenon and the corresponding technology can
include material processing in space without using any containers. This
may be particularly useful in the study of materials that are extremely
corrosive.
Moreover, sonoluminescence and acoustic
cavitation encounter this acoustic force.
Other applications can include measuring densities and analyzing fluid
dynamics in which surface tension plays an important role. Lastly,
acoustic positioning is another potential application.
- Discovery
News
lists an interesting application of acoustic levitation.
- Acoustic Levitation on
Mars illustrates an
adventurous application of this technology.
## Components of an Acoustic Reactor
Figure 1: Schematic of the set-up.
A simple acoustic reactor requires:
- A transducer to generate the desired sound waves. These transducers
usually generate intense sounds, with sound pressure levels greater
than 150 dB.
- A reflector
In order to focus the sound, transducers and reflectors in general have
concave surfaces. The reflection of longitudinal sound waves off the
reflector leads to interference between the compressions and
rarefactions. Perfect interference will result in a standing acoustic
wave, i.e., a wave that will appear to have the same position at any
time.
With this simple arrangement of transducer and reflector, one can
achieve stable levitation but cannot steer the sample. To do so, Weber,
Rey, Neuefeind and Benmore have described an arrangement in their paper
that uses two transducers. These transducers adjust the
location by altering the acoustic phases (which is carried out
electronically).
## Single Bubble Sonoluminescence
This phenomenon occurs when a single bubble undergoes non-linear
dynamics, namely a rapid compression preceded by a slow expansion. When
the bubble is compressed rapidly, it can get so hot that it emits a
flash of light.
Figure 2: Sonoluminescence mechanism.
## Theory
(source: Theory of long wavelength acoustic radiation pressure by
Löfstedt and Putterman)
Starting with the integral form of conservation of momentum,
$$\frac{\partial \rho v_i}{\partial t}+ \frac{\partial \Pi_{ij}}{\partial r_j}=0$$
$$\int_{V}\frac{\partial \rho v_i}{\partial t}\, dr + \int_{S_o}\Pi_{ij}\, dS_j+\int_{S_R}\Pi_{ij}\, dS_j=0$$
where $\Pi_{ij}$ is the stress tensor, $\rho, v$ are the local fluid
density and velocity, $S_o$ the surface of the object (at time t),
$S_R$ a surface far from the object, and $V$ the volume bounded by
these surfaces.
Using the relation,
$$\int_v \frac{\partial \rho v_i}{\partial t} dr + \frac{d}{dt}\int\rho v_i dr - \int_{S_o} \rho v_i v_j dS_j = 0$$
$$\frac{d}{dt} \int \rho v_i dr + \int_{S_o} (\Pi_{ij} - \rho v_i v_j) dS_j + \int_{S_R} \Pi_{ij} dS_j = 0$$
Time average of this equation gives an expression for the force on a
moving sphere
$$\langle F_i \rangle = \left\langle \int_{S_o} (\Pi_{ij} - \rho v_i v_j)\, dS_j \right\rangle = - \int_{S_R} \langle \Pi_{ij} \rangle\, dS_j$$
Assuming an ideal fluid, the Galilean invariant contribution to the
stress tensor is

$$\Pi_{ij}-\rho v_i v_j = p\,\delta_{ij}$$
and
$$p = - \rho_{eq} \frac{\partial \phi}{\partial t} - \rho_{eq} v^2/2 + \frac{\rho_{eq}}{2c^2} \left({\frac{\partial \phi}{\partial t}}\right)^2$$
here $v=\nabla \phi$, $\rho_{eq}$ represents the equilibrium density,
and $c$ denotes the speed of sound.
Acoustic radiation force on an object in an ideal fluid is
$$\langle F_i \rangle = - \int_{S_R} \left[\left(- \frac{\rho \langle v^2 \rangle}{2} + \frac{\rho}{2c^2} \left\langle \left(\frac{\partial \phi}{\partial t}\right)^2 \right\rangle\right)\delta_{ij} + \rho \langle v_i v_j \rangle\right] dS_j$$
### Multipole Expansion of acoustic radiation force
Consider the linear wave equation,
$$\frac{\partial^2 \phi}{\partial t^2} - c^2 \nabla^2 \phi = 0$$
where $\phi = \phi_i + \phi_s$; here $\phi_i$ is given by the
transducer and $\phi_s$ by the corresponding boundary condition at the
object, where the subscript s stands for \'scattered\':
$\phi_s = \mathrm{Re} \sum_{n=0}^\infty B_n h_n (k r)P_n (\cos\theta)\, e^{-i\omega t}$
Here $h_n$ are the outgoing spherical Hankel functions, $P_n$ are the
Legendre polynomials, $\omega = k c$, and $k$ is the wave number of the
sound field imposed on the object.
As r approaches infinity,
$$\lim_{r \to \infty} \phi_s \to \mathrm{Re}\, \frac{e^{i (kr - \omega t)}}{kr} \sum_{n} (-i)^{n+1} B_n P_n (\cos\theta)$$
For standing waves,
$$\phi_i = \mathrm{Re}\, A \sin(kz)\, e^{-i\omega t}$$
Thus by computing $\phi$ and using the result in the expression for the
radiation force we get
$$F_z = 2\pi\rho A\left[\cos(kz_o)\,\mathrm{Im}(B_o) - \sin(kz_o)\,\mathrm{Im}(B_1)\right] \qquad (1)$$
### Radiation Force on a spherical object
Consider a spherical body with radius $R_o$ and density $\rho_o$.
The wave equation inside the sphere is given by:
$$\frac{\partial^2 \Phi_o}{\partial t^2}- c_o^2\nabla^2 \Phi_o - \zeta_o \rho_o \frac{\partial \nabla^2 \Phi_o}{\partial t} = 0$$
where
`<big>`{=html}$\Phi_o$`</big>`{=html} is the velocity potential
`<big>`{=html}$c_o$`</big>`{=html} is the speed of sound in the sphere
`<big>`{=html}$\zeta_o$`</big>`{=html} characterizes the damping in the
sphere
(Effects due to thermal conductivity and shear viscosity are neglected)
The solution to this equation is given by:
$$\Phi_o = \mathrm{Re} \sum_{n=0}^\infty C_n j_n (k_o r)\,P_n (\cos\theta)\, e^{-i\omega t}$$
where $j_n$ are spherical Bessel functions and $k_o$ is complex
(because of dissipation):
$$k_o = k_o' + i \alpha_o$$
where $k_o' = \omega/c_o$ and $\alpha_o = k_o'^2 \zeta_o / 2 c_o \rho_o$.
Here, $\alpha_o$ is the attenuation coefficient of sound in the sphere.
The boundary conditions at $r = R_o$ are
$$\rho \Phi (R_o) = \rho_o \Phi_o (R_o)$$
$$\frac{\partial \Phi}{\partial r}\bigg|_{r=R_o} = \frac{\partial \Phi_o}{\partial r}\bigg|_{r=R_o}$$
To satisfy these conditions, the incident wave is expanded using
spherical harmonics
$$\sin(kz) = \sin(k z_o) \sum_{l=0}^\infty a_{2l} (kr)\, P_{2l} (\cos\theta) + \cos(k z_o) \sum_{l=0}^\infty a_{2l+1} (kr)\, P_{2l+1} (\cos\theta)$$
where
$$a_{2l} = \frac{4l+1}{2} \int_{-1}^1 \cos (krx)\, P_{2l} (x)\, dx$$
$$a_{2l + 1} = \frac{4l+3}{2} \int_{-1}^1 \sin (krx)\, P_{2l+1} (x)\, dx$$
Using the above relations, one can compute $a_o$:
$$a_o = \frac{\sin kr}{kr}$$
When $kR_o \ll 1$,
$$\sin kz = \sin kz_o \left[ 1 - \tfrac{1}{2} k^2 r^2 \cos^2 \theta + \dots \right] + \cos kz_o \left[ kr \cos \theta - \dots \right]$$
The boundary condition for the case of a standing wave can be deduced as
follows:
For monopole term,
$$\rho A \sin k z_o - \rho B_o \frac{i\, e^{ikR_o}}{kR_o} = \rho_o C_o j_o (x_o),$$
and
$$- A \frac{k^2 R_o}{3} \sin k z_o + B_o e^{ikR_o}\left(\frac{1}{R_o} + \frac{i}{k R_o^2}\right) = C_o k_o j^{'}_o (x_o)$$
For the dipole term,
$$\rho A (k R_o) \cos k z_o - \rho B_1 e^{ikR_o} \left(\frac{1}{kR_o} + \frac{i}{(kR_o)^2}\right) = \rho_o C_1 j_1 (x_o),$$
and
$$A k \cos k z_o + B_1 e^{ikR_o} \left(\frac{-i}{R_o} + \frac{2}{kR_o^2}+ \frac{2i}{k^2 R_o^3}\right) = C_1 k_o j^{'}_1 (x_o),$$
where $x_o = k_o R_o$. From these equations, $B_o$ and $B_1$ can be
obtained as functions of $A$. Using these relations in the radiation
force expression, the radiation force on a sphere is found to be
$$F_{z} = - \pi k^3 R_o^3 A^2 \sin (2kz_o)\, \mathrm{Re}\, [f_o + f_1]$$
where
$f_o = \frac{(1/3)(\rho_o/\rho) k^2 R_o^2 b_o (x_o) + 1}{k^2 R_o^2 [1 + (\rho_o/\rho) b_o (x_o)+ ikR_o]}$
$f_1 = \frac{(\rho_o/\rho) b_1 (x_o) - 1}{2(\rho_o/\rho) b_1 (x_o)+ 1}$
with
$b_o (x_o) = \frac{j_o (x_o)}{x_o j^'_o (x_o)}$
$b_1 (x_o) = \frac{j_1 (x_o)}{x_o j^'_1 (x_o)}$
If we neglect damping, assume $x_o << 1$ and assume the sphere is
incompressible (i.e. $c_o$ approaches infinity), then the radiation
force simplifies to:
$$F_{z} = - \pi k^3 R_o^3 A^2 \sin (2kz_o)\, \rho_o\, \frac{5 \rho_o-2 \rho}{3(2\rho_o + \rho)}$$
This expression for the radiation force in a standing wave field was
first derived by King. Note that the radiation force is proportional to
the cube of the sphere radius and to the square of the velocity
potential amplitude $A$.
### Assumptions
1. kR \<\< 1, i.e., the wavelength of the sound field is much larger
than the dimension of the sphere.
2. incompressible object (used by King, although Gorkov has derived
results that permit finite compressibility of the sphere)
The sphere is suspended when the sum of the forces acting on it equals
zero, i.e., when the force due to gravity balances the upward
levitation force. As a result, the object is attracted to regions of
minimum potential energy (pressure nodes); antinodes are regions of
high pressure.
To ensure the generation of a standing wave, the transducer must be
placed at a specific distance from the reflector, and a particular
frequency must be used to get satisfactory results. This distance
should be a multiple of half the wavelength of the sound produced, so
that the nodes and antinodes are stable.
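As a quick illustration of this spacing rule, the sketch below (with an assumed 40 kHz transducer in room-temperature air) lists the first few reflector positions that support a stable standing wave.

```python
c = 343.0   # speed of sound in air, m/s (assumed room conditions)
f = 40e3    # transducer frequency, Hz (a common levitator value)

wavelength = c / f
for n in range(1, 6):
    # Stable standing wave when the gap is an integer number of half-wavelengths
    print(f"n = {n}: transducer-reflector gap = {n * wavelength / 2 * 1e3:.2f} mm")
```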
Secondly, the direction of the force exerted by the radiated pressure
due to the sound waves must be parallel to the direction of gravity.
Since the stable areas should be large enough to support the object to
be levitated, the object's dimensions should lie between one third and
one half of the wavelength. Note that the higher the frequency, the
smaller the dimensions of the object that can be levitated, since
wavelength and frequency are inversely proportional to each other.
The material of the object is important too: its density, together with
its dimensions, determines its mass, and hence the gravitational force
that the upward force produced by the pressure radiation must balance.
Another material property, important when dealing with drops of fluid,
is the **Bond number**. It characterizes the surface tension and size
of the drop relative to the fluid surrounding it. The lower the Bond
number, the greater the chance that the drop will burst.
Finally, to achieve such high pressures (that can cancel the
gravitational force), linear waves are insufficient; non-linear waves
therefore play an important role in acoustic levitation. This is one of
the reasons why the study of acoustic levitation is challenging, as
nonlinear acoustics deals with physical phenomena that are difficult to
model. Based on experimental observations, heavy spheres tend toward
the velocity antinodes, while light particles settle closer to the
nodes.
## Other effects on levitation force
Temperature, pressure, and fluid medium characteristics (density,
particle velocity) affect the levitation force. It is important to
remember that the medium changes as conditions change: the fluid medium
consists of reactants and products that change with reaction rate, and
consequently the levitation force is affected. To compensate for medium
changes, a resonance tracking system can be employed, which helps
maintain stable levitation of the particle under study.
## Design considerations
The sphere or particle under study should experience a lateral force
which acts as a positioning force, along with the more obvious vertical
levitating force. Rotation of the sphere about its axis will ensure
uniform heating and stability.
## Using non-spherical particles
When levitating non-spherical particles, the largest cross section of
the object will end up aligning itself perpendicular to the axis of the
standing wave.
## Traveling vs Standing waves
King discovered that the radiation pressure exerted by a standing wave
is much larger than that exerted by a traveling wave of the same
amplitude. This is because the pressure exerted by a standing wave is
due to the interference between the incident and scattered waves,
whereas the pressure exerted by a traveling wave is due to
contributions from the scattered field only.
## References
1. Theory of long wavelength acoustic radiation pressure by Löfstedt
and Putterman
2. Development of an acoustic levitation reactor by Cao Zhuyoua, Liu
Shuqina, Li Zhimina, Gong Minglia, Ma Yulongb and Wang Chenghaob
3. HowStuffWorks
## Useful Sites
1. Spherical Hankel
Functions
2. Legendre
Polynomials
3. Multipole expansion
# Engineering Acoustics/Biomedical Ultrasound
## Biomedical Ultrasound
This chapter of the Engineering Acoustics Wikibook provides a brief
overview of biomedical ultrasound applications along with some
introductory acoustical analysis for ultrasound beams. As a whole, the
field of Biomedical Ultrasound is one that provides a wealth of topics
for study involving many base disciplines. As such, this limited entry
does not cover all aspects of Biomedical Ultrasound, but instead chooses
to focus on providing readers with an introductory understanding, from
which additional study of the topic is possible. For readers interested
in a more thorough reference on Biomedical Ultrasound the 2007 text by
Cobbold [^1] is suggested.
## Diagnostic Applications
The best-known application of Biomedical Ultrasound is in medical
imaging, also known as ultrasonography. For a list of specific
applications of ultrasonography refer to the corresponding Wikipedia
entry. The following section
provides a qualitative description of the acoustical process used to
generate and capture sound signals used in producing ultrasound images.
An ultrasound transducer emits a short pulse of high frequency sound.
Depending on application the wave frequency ranges between 1 MHz and
15 MHz.[^2] As the emitted sound waves propagate they will be partially
reflected or scattered by any variation in acoustic impedance, *ρc*,
that is encountered. In the context of biomedical imaging, this
corresponds to anywhere there are density changes in the body: e.g. the
connection of bone to muscle, blood cells in blood plasma, small
structures in organs, etc.[^3]
The behavior of the reflected wave depends largely on the size of the
reflective feature and the wavelength of the emitted sound wave. When
the wavelength is short relative to the reflective structure,
reflections are governed by the principles of acoustic transmission and
reflection at normal or oblique interfaces.[^4] When the wavelength is
long relative to the structure, the principles of acoustic scattering
[^5] are applicable. The latter condition, which occurs for
small reflection sources, sets the requirement for frequencies used in
ultrasound imaging. As discussed by Cobbold,[^6] analysis for a planar
wave incident on a spherical reflection source of effective radius *a*,
shows the acoustic intensity of the scattered wave, *I~s~*, varies
according to:
\
$$I_s \propto \frac{a^6}{\lambda ^4}$$
This relation shows that when the wavelength is long relative to a
scattering source's effective radius, the scattered energy becomes very
small, and thus negligible amounts of the incident wave will be
reflected back to the transducer. To reliably capture a feature in an
ultrasound image the emitted wavelength must be smaller than the
features of interest. Other considerations for wavelength also apply:
due to attenuation of the propagating wave, lower frequencies offer
greater imaging depth, while higher frequencies (with smaller
wavelength) offer increased ability for lateral focusing of the emitted
beam (small beam width at focus, see below).[^7] Table 1 gives the
correlation between frequency and wavelength in water for several
frequencies used in ultrasound imaging (*λ* = *c*/*f*).
\
Table 1: Medical ultrasound frequencies and corresponding wavelengths.

| Frequency (MHz)  | 1    | 2    | 5    | 8    | 10   | 12   | 15   |
|------------------|------|------|------|------|------|------|------|
| Wavelength (mm)  | 1.50 | 0.75 | 0.30 | 0.19 | 0.15 | 0.13 | 0.10 |
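The wavelengths in Table 1 follow directly from *λ* = *c*/*f*; a quick check in Python, assuming c = 1500 m/s for water:

```python
c = 1500.0  # speed of sound in water, m/s (assumed)
for f_MHz in (1, 2, 5, 8, 10, 12, 15):
    # Wavelength in millimetres for each imaging frequency
    print(f"{f_MHz:>2} MHz -> {c / (f_MHz * 1e6) * 1e3:.2f} mm")
```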
After the wave burst is transmitted the ultrasound transducer can act as
a receiver, much like a microphone or hydrophone. Waves reflected off
structures and density gradients are returned to the transducer and
recorded. The delay time between the transmitted and received signal is
correlated to the distance of the reflection source, while the intensity
of the received signal is correlated to the reflection source's acoustic
impedance and size.[^8] In instances where Doppler ultrasonography is
utilized, the frequency shift between the transmitted and received
signals can be correlated to the velocity of the reflection source.
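A minimal sketch of the pulse-echo range calculation implied by this delay-time correlation: the reflector depth is half the round-trip distance. The sound speed and delay below are assumed example values (1540 m/s is a common average for soft tissue).

```python
c = 1540.0       # assumed average sound speed in soft tissue, m/s
delay_s = 65e-6  # assumed delay between emitted pulse and received echo, s

# Divide by two because the pulse travels to the reflector and back.
depth_mm = c * delay_s / 2 * 1e3
print(f"Reflection source depth: {depth_mm:.1f} mm")   # ~50 mm
```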
Modern ultrasonography instruments use arrays of small transducers,
each of which is individually electronically controlled to achieve an
effect known as beamforming. When using this technique,
over the emitted beam\'s direction and focal depth.[^9] To produce a
two-dimensional ultrasound image, the ultrasound beam focal position is
swept through a region, and the recorded reflected waves are correlated
to the particular focal locations. The exact process by which this
general concept is accomplished varies with each ultrasonography
instrument. Figure 1 provides a sample 2D image produced by the sweeping
of the focal location through a 2D plane.
\
!Figure 1: 2D Obstetric ultrasound
image.{width="420"}
## Clinical and Therapeutic Applications
A number of important clinical and therapeutic applications make use of
high intensity, focused ultrasound beams. In many of these applications,
the therapeutic effect is achieved due to the heat generation associated
with dissipation of the high intensity acoustic beam. In some
applications, such as
lithotripsy, the
therapeutic effect is obtained from acoustic non-linearity, causing
wave deformation
and shock wave
formation.
This effect is discussed in more detail in a section to
follow.
Provided below is a partial list of Therapeutic applications of
ultrasound:
- Ultrasound is sometimes used to clean teeth in dental hygiene.
- Focused ultrasound may be used to generate highly localized heating
to treat cysts and tumors (benign or malignant). This is known as
Focused Ultrasound Surgery (FUS) or High Intensity Focused
Ultrasound (HIFU). These procedures generally
use lower frequencies than medical diagnostic ultrasound (from
250 kHz to 2000 kHz), but significantly higher energies.
- Focused ultrasound may be used to break up kidney stones by
lithotripsy.
- Ultrasound may be used for cataract treatment by
Phacoemulsification.
- Additional physiological effects of low-intensity
ultrasound have
recently been discovered, such as the ability to stimulate
bone-growth and the potential to disrupt the blood-brain
barrier for drug delivery.
## Acoustic Properties of Ultrasound Beams
As a first approximation, an ultrasound beam can be considered as
resulting from a flat circular piston oscillating on an infinite baffle.
In practice, such a system would lead to relatively high diffusion of
the sound beam, severe side lobes, and an inability to choose a focal
length of the acoustic energy. In current biomedical applications the
use of phased arrays is a common approach stemming from the more general
field known as beamforming. Despite the limitations of planar
transducers, their relatively simple analysis serves well to illustrate
the basic properties of any formed beam and the challenges of designing
more advanced systems.
The analytical approach utilized for the simple cylindrical transducer
appears in many acoustics reference texts, such as those by Pierce,[^10]
Kinsler et al.[^11] and Cheeke.[^12] The sound field solution is
obtained by first considering the sound emitted by the harmonic motion
of a single point source (small sphere) vibrating in free space. The
resulting sound pressure field from this point source is:
\
$$\boldsymbol{P} = i\left(\frac{\rho_o c_o U_o k a^2}{r} \right )e^{-ikr}$$
$$I(r) = \frac{1}{2}\rho_o c_o U_o^2 \left(\frac{a}{r} \right )^2 \left(ka \right )^2$$
Where ***P***(*r*) is the harmonic pressure amplitude at the radial
distance *r*, *ρ~o~* is the fluid density, *c~o~* is the fluid sound
speed, *U~o~* is the maximum velocity of the spherical source, *a* is
the sphere radius, and *k* = 2*πf*/*c~o~* is the wave number. In the
preceding equations, *i* = (-1)^1/2^, which incorporates both amplitude
and phase into the harmonic pressure variable.
To apply this result to the ultrasound transducer as a cylindrical
radiator, each differential section of the cylinder surface can be
considered as a separate spherical source. The resulting sound field
from this approximation is the integral sum of each spherical source. In
general the resulting equation cannot be analytically integrated;
however, when considering regions of the field for r \>\> a, where a is
now the cylinder radius, a simple result is found. Forgoing the full
derivation (for reference see Kinsler [^13] or Cheeke [^14]), the
equations for the produced sound field and acoustic intensity are:
\
$$\boldsymbol{P}(r, \theta) = i \left( \tfrac{1}{2}\rho_o c_o U_o \frac{a}{r}ka \right ) H(\theta)e^{-ikr},$$
$$H(\theta) = \frac{2J_1(ka\sin\theta)}{ka \sin \theta},$$
$$I(r,\theta) = \frac{|\boldsymbol{P}(r)|^2}{2\rho_oc_o},$$
where *H*(*θ*) is the directivity function, *J~1~* is the Bessel
Function of the first
kind, and *I*(*r*) is the
acoustic intensity in W/m^2^. Physically the directivity function
represents the pressure amplitude for beam angles not parallel to the
cylinder axis. It is worthwhile to note that roots of the Bessel
function produce certain beam angles with zero amplitude; the regions
between these angles are known as side lobes, with the on axis component
known as the main lobe. Physically, lobes result from the phase
interaction of waves originating from different parts of the cylindrical
transducer, and are in some ways analogous to pressure nodes in simple
harmonic waves.
To illustrate the phenomena of side lobes in ultrasound beams, the
resulting directivity function and acoustic intensity are calculated for
a 1 MHz beam transmitted into water using a 1 cm radius transducer.
Figure 2 plots the Directivity function, while Figure 3 plots the
acoustic intensity relative to the intensity at the transducer surface.
\
! Figure 2: Beam function for a 1 cm radius cylinder radiating 1 MHz
ultrasound into
water.{width="520"}
! Figure 3: Normalized acoustic intensity field for a 1 cm radius
cylinder radiating 1 MHz ultrasound into
water.{width="520"}
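A short Python sketch of the calculation behind Figure 2 follows, evaluating *H*(*θ*) for the stated 1 MHz beam and 1 cm transducer radius; c = 1500 m/s for water is an assumption.

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, order 1

c, f, a = 1500.0, 1e6, 0.01    # assumed water sound speed; 1 MHz; 1 cm radius
k = 2 * np.pi * f / c

theta = np.linspace(1e-6, np.pi / 2, 2000)  # skip theta = 0 (limit of H is 1)
x = k * a * np.sin(theta)
H = 2 * j1(x) / x                           # directivity function H(theta)

# Nulls of the beam sit at the roots of J1; the first root (x ~ 3.8317)
# marks the edge of the main lobe.
theta_null = np.degrees(np.arcsin(3.8317 / (k * a)))
side = np.abs(H[x > 3.8317])
print(f"Main lobe edge at ~{theta_null:.1f} degrees")
print(f"Strongest side lobe: {-20 * np.log10(side.max()):.1f} dB below the on-axis level")
```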
For the purposes of diagnostic and therapeutic ultrasound the presence
of side lobes is an undesirable effect. In diagnostic imaging wave
reflection originating from the side lobes can be misinterpreted as
reflections from the main beam, and act to reduce the resulting image
quality. In therapeutic applications, side lobes represent energy
dissipation in regions not intended to be affected. To reduce the
effects of side lobes, ultrasound devices use transducer designs based
on beamforming theory,
making the analysis substantially more complicated than the simple
cylindrical transducer discussed. One technique to reduce side lobes is
the use of a phased array to focus the main lobe at a particular depth, thus
reducing the relative magnitude of side lobes. Another technique known
as acoustic shadowing reduces side lobes by emitting lower amplitude
waves near the edge of the transducer. As will be discussed in a
subsequent section, an emerging technique to enhance focusing and reduce
side lobes is the purposeful consideration of nonlinear acoustic effects
in ultrasound beams.[^15][^16]
## Nonlinear Acoustics in Biomedical Ultrasound
In many fields related to application of acoustic theory, the assumption
of linear wave propagation is sufficient. In Biomedical ultrasound
however, the propagation of sound waves is often accompanied by
progressive wave distortion resulting from nonlinear, finite amplitude,
effects. The nonlinear effect of most interest in many diagnostic
applications is the generation of harmonics in the ultrasound beam. As
a primer to this section, a review of the acoustic parameter of
nonlinearity,
and harmonic
generation is
suggested.
Nonlinearities relevant to Biomedical Ultrasound are relatively weak,
making their effects on the propagating acoustic wave cumulative with
distance. For appreciable harmonic generation to occur four conditions
should be met:
- Sufficient pressure and velocity amplitude. The waves emitted for
almost all applications of biomedical ultrasound meet this
requirement.[^17]
- Sufficient propagation distance with near planar wave conditions.
    For directional beams, such as those used in ultrasonography, this
    condition is approximately met within the Rayleigh distance, *x* =
    1/2 *ka^2^*, on the main lobe.[^18] Furthermore, harmonic generation
    is proportional to the number of wavelengths propagated and not the
    absolute distance. The ultrasonic frequencies utilized have very
    short wavelengths; for example, a 10 MHz wave must propagate over
    500 wavelengths for a focal depth of 10 cm (see the sketch after
    this list).
- Sufficient value of the parameter of
nonlinearity,
B/A. For the same acoustic intensity, a material with a higher value
of B/A will produce harmonics more quickly. The value of B/A in
water is ten times that in air, and B/A for some biological tissues
can be double that of water.
- Low acoustic absorption. In many tissues with high values for B/A
there are also high values for acoustic absorption. As the extent of
wave dissipation increases with frequency, generated harmonics are
absorbed more readily than the fundamental frequency. This effect
reduces the influence of B/A in biological tissues relative to that
in low-loss fluids.[^19]
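The sketch referenced in the list above checks the planar-wave condition numerically: it computes the Rayleigh distance and the number of wavelengths propagated to a given focal depth, under assumed values for the transducer radius and a water-like sound speed.

```python
import numpy as np

c = 1500.0   # assumed sound speed (water-like tissue), m/s
f = 10e6     # 10 MHz, as in the example above
a = 0.01     # assumed transducer radius, m

k = 2 * np.pi * f / c
rayleigh = 0.5 * k * a**2            # x = (1/2) k a^2
n_wavelengths = 0.10 / (c / f)       # 10 cm focal depth in wavelengths

print(f"Rayleigh distance: {rayleigh:.2f} m")
print(f"Wavelengths over 10 cm: {n_wavelengths:.0f}")   # well over 500
```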
Reviewing these conditions it can be seen that in many circumstances,
harmonic generation will be appreciable in biomedical ultrasound. Two
developing applications that make use of this harmonic generation are:
- Use of harmonic content in recorded ultrasonography signals. As
acoustic intensity and propagation distance are highest on the main
lobe, harmonic generation occurs most significantly on the main
lobe, and is smaller on side lobes. As a result, the beam pattern
produced by 2nd harmonics is more directional than the beam produced
by the fundamental frequency. This allows for potential improvement
in the resulting image.[^20]
- The analysis of harmonic profiles for tissue characterization using
the B/A parameter. Referring to The Acoustic Parameter of
Nonlinearity,
values of B/A vary for tissues that have otherwise similar acoustic
impedance. As a result, harmonic content in ultrasound waves has the
potential to produce images correlated to the B/A parameter of
tissues. Practical realization of this concept is an area in
development, as current imaging methods are unable to utilize this
potential.[^21]
## External links
- Medical Ultrasonic
Transducers
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]:
[^7]:
[^8]:
[^9]:
[^10]:
[^11]:
[^12]:
[^13]:
[^14]:
[^15]:
[^16]:
[^17]:
[^18]:
[^19]:
[^20]:
[^21]:
# Engineering Acoustics/Human Voice Production
## Physiology of Vocal Fold
The human vocal folds are a pair of lip-like tissues located inside the
larynx, and are the source of sound for humans and many animals. The
larynx is located at the top of the trachea. It is mainly composed of
cartilages and muscles; the largest cartilage, the thyroid, is well
known as the "Adam's apple."
The organ has two main functions: to act as the last protector of the
airway, and to act as a sound source for voice. This page focuses on
the latter function. In the following image, the cross section of the
vocal folds is shown. This three-dimensional geometry was made using CT
scan data.
!*Vocal fold cross
section*{width="400"}
Links on Physiology: Discover The
Larynx
## Voice Production
Although the science behind sound production for a vocal fold is
complex, it can be thought of as similar to a brass player\'s lips, or a
whistle made out of grass. Basically, the vocal folds (or lips, or a
pair of grass blades) constrict the airflow, and as the air is forced
through the narrow opening, the vocal folds oscillate. This causes a
periodic change in the air pressure, which is perceived as sound.
Vocal Folds Video
When the airflow is introduced to the vocal folds, it forces open the
two folds, which are nearly closed initially. Due to their stiffness,
the folds then try to close the opening again, after which the airflow
forces them open once more, and so on. This creates an oscillation of
the vocal folds, which in turn, as stated above, creates sound.
However, this is a damped oscillation: left alone, it would eventually
reach an equilibrium position and stop oscillating. So how are we able
to "sustain" sound?
As it will be shown later, the answer seems to be in the changing shape
of vocal folds. In the opening and the closing stages of the
oscillation, the vocal folds have different shapes. This affects the
pressure in the opening, and creates the extra pressure needed to push
the vocal folds open and sustain oscillation. This part is explained in
more detail in the \"Model\" section.
This flow-induced oscillation, as with many fluid mechanics problems, is
not an easy problem to model. Numerous attempts to model the oscillation
of vocal folds have been made, ranging from a single mass-spring-damper
system to finite element models. In this page I would like to use my
single-mass model to explain the basic physics behind the oscillation of
a vocal fold.
Information on vocal fold models: National Center for Voice and
Speech
## Models
### Single mass model
!Figure 1: Vocal fold motion model schematic (single mass-spring-damper
system){width="400"}
The simplest way of simulating the motion of the vocal folds is to use
a single mass-spring-damper system as shown above. The mass represents
one vocal fold; the second vocal fold is assumed to be its mirror image
about the axis of symmetry. Position 3 represents a location
immediately past the exit (end of the mass), and position 2 represents
the glottis (the region between the two vocal folds).
#### The Pressure Force
The major driving force behind the oscillation of the vocal folds is
the pressure in the glottis. Bernoulli's equation from fluid mechanics
states that:
$P_1 + \frac{1}{2}\rho U^2 + \rho gh = \text{constant}$ \-\-\-\--EQN 1
Neglecting the potential term and applying EQN 1 to positions 2 and 3
of Figure 1,
$P_2 + \frac{1}{2}\rho U_2^2 = P_3 + \frac{1}{2}\rho U_3^2$ \-\-\-\--EQN 2
Note that the pressure and the velocity at position 3 cannot change;
this makes the right-hand side of EQN 2 constant. Observation of EQN 2
reveals that in order to have an oscillating pressure at 2, we must
have an oscillating velocity at 2. The flow velocity inside the glottis
can be studied through the theory of orifice flow.
The constriction of airflow at the vocal folds is much like an orifice
flow with one major difference: with vocal folds, the orifice profile is
continuously changing. The orifice profile for the vocal folds can open
or close, as well as change the shape of the opening. In Figure 1, the
profile is converging, but in another stage of oscillation it takes a
diverging shape.
The orifice flow is described by Blevins as:
$U = C\sqrt{\frac{2(P_1 - P_3)}{\rho}}$ \-\-\-\--EQN 3
Where the constant C is the orifice coefficient, governed by the shape
and the opening size of the orifice. This number is determined
experimentally, and it changes throughout the different stages of
oscillation.
Solving equations 2 and 3, the pressure force throughout the glottal
region can be determined.
#### The Collision Force
As the video of the vocal folds shows, vocal folds can completely close
during oscillation. When this happens, the Bernoulli equation fails.
Instead, the collision force becomes the dominating force. For this
analysis, the Hertz collision model was applied.
$F_H = k_H \delta^{3/2} (1 + b_H \delta')$ \-\-\-\--EQN 4
where
$k_H = \frac{4}{3} \frac{E}{1 - \mu_H^2} \sqrt{r}$
Here $\delta$ is the penetration distance of the vocal fold past the
line of symmetry.
#### Simulation of the Model
The pressure and the collision forces were inserted into the equation of
motion, and the result was simulated.
!Figure 2: Area opening and volumetric flow rate{width="400"}
Figure 2 shows that an oscillating volumetric flow rate was achieved by
passing a constant airflow through the vocal folds. When simulating the
oscillation, it was found that the collision force limits the amplitude
of oscillation rather than driving it, which tells us that the pressure
force is what allows the sustained oscillation to occur.
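As an illustration of the mechanism described above, here is a minimal sketch (not the model used for Figure 2) of a single mass-spring-damper fold in which the driving pressure is larger during opening than during closing; all parameter values are assumptions chosen only to give voice-like numbers.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 1e-4, 0.02, 40.0   # assumed mass (kg), damping (N s/m), stiffness (N/m)
area = 1e-4                  # assumed surface on which the pressure acts, m^2
P_sub = 600.0                # assumed subglottal pressure, Pa

def rhs(t, y):
    x, v = y
    # Key idea from the text: the glottal profile (hence the orifice
    # coefficient) differs between opening and closing, so the pressure
    # force does net positive work on the fold over each cycle.
    C = 0.8 if v > 0 else 0.5
    return [v, (C * P_sub * area - b * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-4)
late = sol.y[0][sol.t > 0.4]                 # settled part of the motion
print(f"Sustained oscillation amplitude ~ {(late.max() - late.min()) / 2:.1e} m")
# With a constant C, the same system simply decays to equilibrium.
```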
#### The Acoustic Output
This model showed that the changing profile of glottal opening causes an
oscillating volumetric flow rate through the vocal folds. This will in
turn cause an oscillating pressure past the vocal folds. This method of
producing sound is unusual, because in most other means of sound
production, air is compressed periodically by a solid such as a speaker
cone.
Past the vocal folds, the produced sound enters the vocal tract.
Basically this is the cavity in the mouth as well as the nasal cavity.
These cavities act as acoustic filters, modifying the character of the
sound. The acoustics of vocal tract have traditionally been described on
the basis of a source-filter theory.
Whereas the glottis produces a sound of many frequencies, the vocal
tract selects a subset of these frequencies for radiation from the
mouth. These are the characters that define the unique voice each person
produces.
### Two Mass Model
The basic two mass model is shown in Figure 3, and the smoothed two
mass model of the vocal fold is shown in Figure 4.
!Figure 3: Two mass model{width="500"}
!Figure 4: Smoothed two mass model{width="350"}
### Three Mass Model
In Figure 5, the three mass model of the vocal fold is shown.
!Figure 5: Three mass model{width="400"}
### Rotating Plate Model[^1]
!Rotatingplate{width="400"}
The motion of the vocal fold can be described with two degrees of
freedom: the rotation $\theta$ of the cover mass **M2**, and the
displacement **r**. The equations of motion are:
$$I_c\ddot{\theta}+ B \dot{\theta} +k \theta= T$$
$$m\ddot{r}+b(\dot{r}-\dot{r_b})+k_2(r-r_b)=F$$
where, in these equations: *T* is the applied aerodynamic torque
$I_c$ is the moment of inertia of the rotating cover
$B$ is the rotational damping
$k$ is the rotational stiffness
$k_2$ is the translational stiffness
$b$ is the translational damping
$F$ is the force\
$M_2$ is the cover mass\
$r_b$ is the displacement of the body
$r$ is the displacement of the cover
The equation of motion for the body mass can be written as:
$$M_1\ddot{r_b}+b(\dot{r_b}-\dot{r})+k_2(r_b-r)+K_1r_b+B\dot{r_b}=0$$
where:
$K_1$ is the body stiffness
$M_1$ is the body mass
$B$ is the body damping
### Finite Element Models
## Lumped-element flow circuit for the vocal tract
In the image below, the lumped element flow circuit for the vocal tract
airway is shown. The input impedance to the vocal tracts can be shown by
resistive and inertive lumped elements.[^2] According to the shown
circuit we have:
!lumped element circuit for vocal
tract{width="600"}
$$P_L-R_s u-I_s\dot{u} -\frac{1}{2} \rho u^2/a_g^2-R_eu-I_e \dot{u}=0$$
where
$P_L$ is a steady lung pressure
$R_s$ is the subglottal resistance
$I_s$ is the subglottal inertance
$R_e$ is the supraglottal (epilaryngeal) input resistance\
$I_e$ is the supraglottal (epilaryngeal) input inertance
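For a steady flow the time derivatives vanish and the inertances drop out, leaving a quadratic in $u$. A small sketch with assumed element values:

```python
import numpy as np

P_L = 800.0            # assumed steady lung pressure, Pa
R_s, R_e = 10.0, 20.0  # assumed sub/supraglottal resistances (SI acoustic ohms)
rho = 1.2              # air density, kg/m^3
a_g = 1e-5             # assumed glottal area, m^2 (~0.1 cm^2)

# Steady-state balance: (rho / (2 a_g^2)) u^2 + (R_s + R_e) u - P_L = 0
A = rho / (2 * a_g**2)
B = R_s + R_e
u = (-B + np.sqrt(B**2 + 4 * A * P_L)) / (2 * A)   # positive root
print(f"Steady glottal flow: {u * 1e6:.0f} cm^3/s")  # a few hundred cm^3/s
```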
## References
\[1\] Fundamentals of Acoustics; Kinsler *et al.*, John Wiley & Sons,
2000
\[2\] Acoustics: An introduction to its Physical Principles and
Applications ; Pierce, Allan D., Acoustical Society of America, 1989.
\[3\] Blevins, R.D. (1984). Applied Fluid Dynamics Handbook. Van
Nostrand Reinhold Co. 81--82.
\[4\] Horacek, J., Sidlof, P., Svec, J.G. Self-Oscillations of Human
Vocal Folds. Institute of Thermomechanics, Academy of Sciences of the
Czech Republic
\[5\] Lucero, J.C., Koenig, L.L. (2005). Simulations of temporal
patterns of oral airflow in men and women using a two-mass model of the
vocal folds under dynamic control, Journal of the Acoustical Society of
America 117, 1362--1372.
\[6\] Titze, I.R. (1988). The physics of small-amplitude oscillation of
the vocal folds. Journal of the Acoustical Society of America 83,
1536--1552
------------------------------------------------------------------------
[^1]: Ingo R. Titze, \"The myoelastic aerodynami theory of phonation\"
2006
[^2]:
# Engineering Acoustics/Vocal Folds
## Introduction
The mechanism of sound production in speech and singing is the result
of airflow through the human respiratory system, which is also
connected to the digestive system. The diaphragm action of the lungs
pushes air through the vocal folds, producing a periodic train of air
pulses. This pulse train is shaped by the resonances of the vocal
tract, giving the voice its various frequencies and loudness. Vocal
formants are the basic resonances, and can be changed by the movements
of the articulators to produce different vowel sounds. The vocal tract
can be considered a cavity resonator: the soft palate position, the
area of opening, the tongue position and shape, and the position of the
jaw establish the shape of this cavity. More specifically, voice
articulation is the movement of the tongue, pharynx, palate, jaw or
lips that changes the volume of the cavity, the area of opening, and
the port length, which determine the frequency of the cavity resonance.
The voice mechanism can be modeled as the lungs and diaphragm acting as
the power source, together with the larynx, pharynx, mouth and nose. At
the end of the tubular larynx rest the vocal folds, also known as vocal
cords. During speech and singing, the larynx is connected to the
pharynx; during swallowing, it is covered by the epiglottis. The vocal
tract acts as a resonator.
![](Voice_production_organs.png "Voice_production_organs.png")
## The vocal folds
Vocal folds are twin infoldings of mucous membrane that act as a
vibrator during phonation. Phonation is the process by which the energy
from the lungs, in the form of air pressure, is converted into
vibration that is perceivable to the human ear. There are two methods
of phonation. One is the air pressure setting the elastic vocal folds
into vibration, which is called voicing. The other is air passing
through the larynx to the vocal tract, where the airstream is modified
to produce transient or aperiodic sound waves.

In aperiodic phonation, the transient or aperiodic sound waves
generate: a plosive sound, /t/, where sound is produced by blocking the
airstream and suddenly releasing the built-up air pressure; a fricative
sound, /sh/, where a continuous noise-like sound is made by forcing air
through a constricted space; an affricate sound, /ch/, which is a
combination of a plosive and a fricative sound; and a voiced consonant,
/d/, which is a plosive sound followed by a voiced sound.

While the vocal folds are open for breathing, they are closed by the
pivoting of the arytenoid cartilages for speech or singing. Positive
air pressure from the lungs forces the vocal folds open, but the high
velocity air produces a lowered pressure, per the Bernoulli equation,
which brings them back together. In an adult male the vocal folds are
17-23 mm long, while in an adult female they are around 12.5-17 mm. Due
to the action of muscles in the larynx, the vocal folds can be
stretched 3 to 4 mm.

The frequency of the adult male speaking voice is typically 125 Hz,
while the frequency of the adult female voice is about 210 Hz.
Children's voices are around 300 Hz. In terms of a piano keyboard, a
man's voice is about one octave below a woman's, and a child's voice
about one octave above an adult woman's. The front end of the vocal
folds is attached to the thyroid cartilage; the back end is attached to
the arytenoid cartilages, which separate the folds for breathing.

![](Vocalfolds1.png "Vocalfolds1.png")
## Electrical circuit representation of vocal folds
The vibration model of the vocal folds and the acoustic impedance model
of the vocal tract can both be represented as electrical circuits. A
gyrator can be used to couple the two models, so that the velocity (as
a potential, from the mobility analogy used for the vibration model) is
transferred to the velocity (as a current) in the acoustic impedance
model, where pressure is force divided by the surface area of the vocal
fold tube.
## References
1. <http://hyperphysics.phy-astr.gsu.edu/hbase/Music/voice.html>
# Engineering Acoustics/Resonating feathers of Manakins
# Introduction
During lekking, male Club-winged Manakins, *Machaeropterus deliciosus*
(Aves: Pipridae), alter their secondary feathers by hypertrophy. The
oscillation of these secondary feathers causes them to collide and
vibrate, producing sustained harmonic tones with a fundamental
frequency of 1500 Hz. The male manakin produces a totally unique sound
in order to attract the attention of the female: instead of the
conventional way of using the voice for sound production, he uses his
wings as a musical instrument.

!Figure 1: Male Golden-headed Manakin{width="500"}
# Mechanism of Sonation
!Figure 2: Phasing of sonation{width="350"}
The male manakin modifies the
sixth and seventh secondary feathers, which act as a pair of coupled
resonators. The five other secondary feathers oscillate in phase with
them to result in sonation, which sounds like a ringing *Tick-Tick-Ting*
\[1\]. The series of motions involves two brief mechanical *ticks*, when
the wings are flicked and a sustained mechanical *ting*, when they are
flipped above the back \[2\]. These two sounds are not acoustically
different except for the longer duration of the *ting*. Each of them is
composed of a fundamental frequency of 1.49 kHz and its higher frequency
harmonics.
Tick-Ting video and audio
demonstration
!Figure 3: Modified secondary feathers{width="350"}

Unlike typical secondaries, secondary feathers 1--5 exhibit
increasingly wider rachises and an increasingly pronounced transition
from a continuous taper to an abrupt taper, at around the distal
two-thirds, three-fourths, and four-fifths points for the third, fourth
and fifth secondaries, respectively.
Beyond the sudden taper on the fifth secondary, the rachis bends
medially. This 'kink' in the rachis causes it to overlap and contact
the rachis of the adjacent sixth secondary feather while at rest [3].
The sixth and seventh secondaries exhibit the following distinct
modification: the rachises are thick at the base and, at approximately
one-half of their length, their width doubles and they twist along
their long axis so that the dorsal feather surface is oriented
medially. The sixth secondary feather has ridges, while the fifth
feather has a curved tip [4]. The innermost pair of the modified
feathers (the seventh secondaries) collide across the back. Immediately
following this collision, the wings shiver laterally and medially,
pulling them just millimeters apart. Approximately 8 ms later, they are
adducted to produce another collision. The sonation tone is produced
continuously throughout this process, and the feathers generate
vibrations at just the right frequency [3].
# Acoustic analysis
The male manakin leans forward and flicks his wings together at a
frequency which peaks at 1500 Hz, with unusually high Q-values; this is
faster than the rate at which a hummingbird beats its wings. The
quality factor Q is a measure of how sharply a resonator responds
around its natural frequency. Using the spectral method, Q can be
determined as
$$Q=\frac{f_0}{BW_{-3\,\mathrm{dB\,SPL}}}$$
where $f_0$ is the natural frequency of the system and
$BW_{-3\,\mathrm{dB\,SPL}}$ is the bandwidth at 3 dB SPL below the
peak. From experiments, the Q factor was found to be above 10 for all
the feathers and was as high as 27, which implies that the structures
can be good biological resonators [3].
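A one-line check of the spectral Q estimate, with an assumed bandwidth of the order that would reproduce the reported values:

```python
f0 = 1500.0   # resonance frequency, Hz
bw = 75.0     # assumed -3 dB bandwidth, Hz
print(f"Q = {f0 / bw:.0f}")   # 20, within the measured range of 10-27
```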
# Wing modification
The extra support required for such a high speed wing movement comes
from the super-sized wing bones. The ulna of the male manakin is
modified such that it has bumps and grooves to support the wings and its
width is fourfold the usual width. Another surprising modification is
that the manakin has solid wing bones, as opposed to most birds, whose
hollow bones allow them to fly. The wings thus modified are able
to create the perfect pitch.

!Figure 4: Female Blue Manakin{width="350"}
# References
\[1\] K. S. Bostwick and R. O. Prum, \"Courting Bird Sings with
Stridulating Wing Feathers,\" Science, vol. 309, p. 736, July 29, 2005.
\[2\] K. S. Bostwick, \"Display Behaviors, Mechanical Sounds, and
Evolutionary Relationships of the Club-Winged Manakin (Machaeropterus
deliciosus),\" The Auk, vol. 117, pp. 465--478, 2000.
\[3\] K. S. Bostwick, et al., \"Resonating feathers produce courtship
song,\" Proceedings of the Royal Society B: Biological Sciences, vol.
277, pp. 835--841, March 22, 2010.
\[4\]
<http://www.pbs.org/wnet/nature/episodes/what-males-will-do/photo-gallery-manakin-anatomy/955/>
# Engineering Acoustics/Echolocation in Bats and Dolphins
Echolocation is a form of acoustics that uses active sonar to locate
objects. Many animals, such as bats and dolphins, use this method to
hunt, to avoid predators, and to navigate by emitting sounds and then
analyzing the reflected waves. Animals with the ability of echolocation
rely on multiple receivers to allow a better perception of the objects'
distance and direction. By noting a difference in sound level and the
delay in arrival time of the reflected sound, the animal determines the
location of the object, as well as its size, its density, and other
features. Humans with visual disabilities are also capable of applying
biosonar to facilitate their navigation. This page will focus mainly on
how echolocation works in bats and dolphins.
## Sound Reflection
!Figure 1: Reflection and Refraction of
Waves{width="300"}
When a wave hits an obstacle, it does not simply stop there; rather, it
gets reflected, diffracted, and refracted. Snell's law, combined with
the law of reflection, states that:
$$\frac{sin \theta_i}{c_1}=\frac{sin \theta_t}{c_2}=\frac{sin \theta_r}{c_1}$$
where the nomenclature is defined in Figure 1. The law of reflection
states the angle of incidence is equal to the angle of reflection
($\theta_i=\theta_r$) which is clearly shown in the previous equation.
In order to determine the reflection coefficient $R$, which gives the
proportion of the wave that is reflected, the acoustic impedance is
needed. It is defined as
$$Z=\rho c$$
where $c$ is the speed of sound and $\rho$ is the density of the medium.
For fluids only, the sound reflection coefficient is defined in terms of
the incidence angle and the characteristic impedance of the two media as
\[3\]:
$$R=\frac{\frac{Z_2}{Z_1}-\sqrt{1-[n-1]\tan^2 \theta_1}}{\frac{Z_2}{Z_1}+\sqrt{1-[n-1]\tan^2 \theta_1}}\qquad$$
where $n=\left ( \frac{c_2}{c_1} \right )^2$
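At normal incidence ($\theta_1 = 0$) the expression above reduces to $R=(Z_2-Z_1)/(Z_2+Z_1)$. A short sketch with rough, assumed impedance values shows why the air-water boundary is nearly a perfect reflector while water-like tissues transmit most of the sound, a point that matters for dolphin hearing later in this chapter:

```python
def R_normal(Z1, Z2):
    """Normal-incidence pressure reflection coefficient between two media."""
    return (Z2 - Z1) / (Z2 + Z1)

# Approximate characteristic impedances (rayl); assumed textbook-order values.
Z_air, Z_water, Z_tissue = 415.0, 1.48e6, 1.63e6

print(f"air -> water   : R = {R_normal(Z_air, Z_water):.4f}")   # ~1, near-total reflection
print(f"water -> tissue: R = {R_normal(Z_water, Z_tissue):.3f}")  # small, mostly transmitted
```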
As for the case where medium 2 is a solid, the sound reflection
coefficient becomes \[9\]:
$$R=\frac{(r_n-\frac{r_1}{cos \theta_i})+jx_n}{(r_n+\frac{r_1}{cos \theta_i})+jx_n}\qquad$$
where $Z_n=r_n+jx_n$ is the normal specific acoustic impedance.
The law of conservation of energy states that the total amount of
energy in a system is constant; therefore, whatever portion of the wave
is not reflected is either diffracted or transmitted into the second
medium, where it may be refracted due to a difference in refractive
index.

!Figure 2: Interaural time difference and interaural intensity
level{width="300"}
## Sound Localization
Sound localization denotes the ability to determine the direction and
distance of an object, or "target," based on the detected sound and
where it originates. Auditory systems of humans and animals alike
use the following different cues for sound location: interaural time
differences and interaural level differences between both ears, spectral
information, and pattern matching \[8\].
To locate sound from the lateral plane (left, right, and front), the
binaural signals required are:
- Interaural time differences: for frequencies below 800 Hz
- Interaural level differences: for frequencies above 1600 Hz
- Both: for frequencies between 800 and 1600 Hz
### Interaural Time Differences
!Figure 3: Shadow cast by the
head{width="300"}
Humans and many animals use both ears to help identify the location of
a sound; this is called binaural hearing. Depending on where the sound
comes from, it will reach either the right or the left ear first,
allowing the auditory system to evaluate the arrival times of the sound
at the two reception points. This phase delay is the interaural time
difference. The relationship between the difference in the length of
the sound paths to the two ears, $\Delta d$, and the source's angular
position, $\theta$, may be calculated using the equation [1]:
$$\Delta d=r(\theta + \sin \theta)$$
where $r$ is the half distance between the ears. This is mainly used as
a cue for the azimuthal location; thus, if the object is directly in
front of the listener, there is no interaural time difference. This cue
is used at low frequencies, where the size of the head is less than
half the wavelength of the sound, which allows phase delays between the
two ears to be detected. However, when the frequencies are below 80 Hz,
the phase difference becomes so small that locating the direction of
the sound source becomes extremely difficult.
### Interaural level difference
As the frequency increases above 1600 Hz, the dimension of the head
becomes greater than the wavelength of the sound wave, and phase delays
no longer suffice to locate the sound source; the difference in sound
intensity between the two ears is used instead. Sound level decreases
with source-receiver distance, given that the closer you are to the
emitting source, the higher the sound intensity. This cue is also
influenced greatly by the acoustic shadow cast by the head. As depicted
in Figure 3, the head blocks sound, decreasing the sound intensity
coming from the source [4].
## Active Sonar
Active sonar systems supply their own source signal and then listen for
echoes of the target's reflected waves. Bats and dolphins use active
sonar for echolocation. The system begins with a signal produced at the
transmitter with a source level (SL). This acoustic wave has an
intensity $I(r)$, where $r$ is the distance from the source. The source
signal travels to the target while accumulating a transmission loss
(TL). On arriving at the target, a fraction of the incident signal,
characterized by the target strength (TS), is reflected toward the
receiver; on the way to the receiver, another transmission loss (TL')
is experienced. For a monostatic case, where the source and the
receiver are at the exact same position, TL is equal to TL'; thus, the
echo level (EL) is written as [9]:
$$EL=10\log \frac{I(r)\, \sigma}{4 \pi\, I_{ref}}-TL'$$
The equation for target strength is \[9\]:
$$TS=10\log\frac{\sigma}{4\pi}$$
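The same budget is often written in logarithmic form as EL = SL − 2TL + TS. A minimal sketch, assuming spherical spreading (TL = 20 log10 r) with no absorption, and dolphin-click-order values for SL and TS:

```python
import numpy as np

SL = 220.0   # assumed source level, dB re 1 uPa at 1 m
TS = -25.0   # assumed target strength of a small fish, dB

for r in (5.0, 50.0, 200.0):
    TL = 20 * np.log10(r)       # one-way spherical spreading loss
    EL = SL - 2 * TL + TS       # monostatic echo level
    print(f"r = {r:>5.0f} m: EL = {EL:.1f} dB")
```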
### Reverberation
As sound is emitted, other objects in the environment can scatter the
signal, creating echoes in addition to the echo produced by the target
itself. Underwater, for instance, reverberation can result from
bubbles, fish, the sea surface and bottom, or plankton. These
background signals mask the echo from the target of interest, so it is
necessary to find the reverberation level (RL) in order to distinguish
it from the echo level. RL takes the same form as the echo level, with
the target strength replaced by that of the reverberating region.
$TS_R$ represents the target strength for the reverberating region and
is defined by:
$$TS_R=S_v+10\log V=S_A+10\log A$$
where "$V$ (or $A$) is the volume (or surface area) at the range of the
target from which scattered sound can arrive at the receiver during the
same time as the echo from the desired target" [9] and $S_v$ (or $S_A$)
is the scattering strength for a unit volume (or a unit surface area).
## Echolocation in Bats
!Figure 5: Echolocation in
bats{width="300"}
Bats produce sounds through their larynx, emitting them from the mouth
or, in some species, via the nose. Their calls consist of various
types: broadband components (varying in frequency), pure tone signals
(constant frequency), or a mix of both. The duration of these sounds
varies between 0.3 and 100 ms, over a frequency range of 14 to 100 kHz
[7]. Each species' calls vary, having been adapted to its particular
lifestyle and hunting habits.
The broadband components are used for hunting in closed environments
with background noise. The short calls yield precision in locating the
target; short rapid calls also prevent overlapping of waves, thus
allowing the use of the interaural time difference. Pure tone signals
are used while hunting in open environments without much background
noise. These calls are longer in duration, allowing bats to locate prey
at greater distances. When searching for prey, bats emit sounds 10 to
20 times per second. As they approach the target, the emission rate can
reach up to 200 times per second. The usual range for echolocation is
around 17 m.
For other mammals, the interaural time difference and the interaural
intensity level are cues used only for lateral detection. Bats,
however, can also use the interaural intensity level to locate objects
in the vertical direction, provided the signals received are broadband.
Another difference is that bats' ears are capable of moving, allowing
them to switch between different acoustic cues.
The sound conducting apparatus in bats is similar to that of most
mammals; however, over years of evolution it has been adapted to suit
their needs. One special characteristic is the large pinnae, which
serve as acoustic antennae and mechanical amplifiers. Movement of the
pinnae permits focusing of the incoming sound wave, amplifying or
weakening it.
## Echolocation in Dolphins
!Figure 6: Echolocation in
Dolphins{width="350"}
The basic idea of echolocation is comparable between bats and dolphins,
however, since both animals live in such different environment, there
are specific characteristics that differ amongst them.
Dolphins use the nasal-pharyngeal area to produce various types of
sounds (clicks, burst pulses, and whistles) in order to achieve two
main functions: echolocation and communication. Clicks, slow-rate
pulses that last 70-250 μs at frequencies of 40 to 150 kHz, and bursts,
pulses produced at rapid rates, are primarily used for echolocation
[1]. After the clicks have been produced, the associated sound waves
travel through the melon, the rounded area of the dolphin's forehead
composed of fatty tissue. Its function is to act as an acoustic lens,
focusing the produced waves into a beam and sending it ahead. At such
high frequencies the wave does not travel very far in water; hence,
echolocation is most effective at a distance of 5 to 200 m for prey
that are 5 to 15 cm in length [6].
When waves are reflected after hitting an object, dolphins, unlike bats
with their pinnae to direct the waves to the inner ear, receive the
signal via the fat-filled cavities of the lower jaw bones (Figure 6).
Water has a high acoustic impedance, and these soft body tissues have a
similar impedance, allowing sound waves to travel to the inner ear
without being reflected. Sound is then conducted to the middle ear and
inner ear, from which the signal is transferred to the brain.
## References
\[1\] Au, W. W. L., Popper, A. N., & Fay, R. R. (2000). Hearing by
whales and dolphins. Springer handbook of auditory research, v. 12. New
York: Springer.\
\[2\] Thomas, J. A., Moss, C., & Vater, M. (2004). Echolocation in bats
and dolphins. Chicago: The University of Chicago Press\
\[3\]Wave Reflection. (n.d.). Retrieved October 17, 2010, from
<http://www.sal2000.com/ds/ds3/Acoustics/Wave%20Reflection.htm>\
\[4\] Diracdelta.co.uk, science and engineering encyclopedia. (n.d.).
Retrieved November 6, 2010, from Interaural Level Difference:
<http://www.diracdelta.co.uk/science/source/i/n/interaural%20level%20difference/source.html>\
\[5\] Murat Aytekin, J. Z. (2007, November 27). 154th ASA Meeting, New
Orleans, LA. Retrieved November 10, 2010, from Sound localization by
echolocating bats: Are auditory signals enough?:
<http://www.acoustics.org/press/154th/aytekin.html>\
\[6\] Seaworld. (2002). Retrieved September 20, 2010, from Bottlenose
Dolphins Communication and Echolocation:
<http://www.seaworld.org/infobooks/bottlenose/echodol.html>\
\[7\] Wikipedia. (2010, February 4). Retrieved September 19, 2010, from
Animal Echolocation: <http://en.wikipedia.org/wiki/Animal_echolocation>\
\[8\] Wikipedia. (2009, March 14). Retrieved September 19, 2010, from
Sound Localization: <http://en.wikipedia.org/wiki/Sound_localization>\
\[9\] Kinsler, L. E. (1982). Fundamentals of acoustics. New York:
Wiley.\
# Engineering Acoustics/The Human Ear and Sound Perception
**Summary:** This page will briefly overview the human auditory system,
transducer analogies, and some non linear effects pertaining to specific
characteristics of the auditory system. Results from the discipline of
Psycho-Acoustics will be
presented.
## Introduction
The human ear is a small physical device with disproportionately large
capabilities. On one hand, it can withstand sounds with acoustic
pressure levels close to 1 kPa, which are pretty much the loudest
encountered in nature; on the other hand, it can detect pressure levels
that correspond to displacements of the eardrum of about one tenth the
diameter of the hydrogen atom.[^1] When including the information
processing done in the brain and the physiological response that it
elicits, one can see why the Human Auditory System has been giving
researchers a hard time since the turn of the twentieth century.\
: Some researchers approached the auditory system as a very
complicated, active transducer; one that transmits the wave information
first acoustically, then mechanically, then hydrodynamically, and
finally electrodynamically to the brain.[^2] Others, like the legendary
Georg von Bekesy, maintained that the continuously regenerative nature
of the living organism should be taken into account when considering
the behavior of the auditory system.[^3]\
: Humans, however, are no strangers to complicated problems. After all,
we have been to the moon, so what is going on?
## The Problem with humans
In order to explore the behavior of any physical system one needs a set
of variables describing the system. These variables should be well
defined and arise naturally from the physical principles governing the
behavior of the system. The same physical principles also provide any
researcher with well established means of assessing what constitutes a
valid measurement.\
Furthermore, in any well behaved physical system the experimenter has
control over the variables, to such an extent that he or she can hold
most of the variables constant and individually vary a few of them to
evaluate the relationship between them and quantify their dependence.\
Additionally, in any linear system the principle of superposition
holds, so that the overall effect of varying several variables at the
same time equates to a linear combination of the individual
contributions observed from varying each variable independently while
keeping everything else constant.\
: The conditions above usually make for what can be described as a very
happy researcher. However, problems arise when one sets out to evaluate
the human auditory system, because hearing is a sensation, and just
like every other sensation it is an esoteric process. To resolve this
problem one has to venture into the realm of psycho-physics and the
principles of psychological measurements. It is known that one cannot
directly measure sensation, but one can measure the response that the
sensation elicits.[^4]\
: With the above approach one can measure quantities such as just
noticeable differences, perceptible excitation, increased nervous
activity, etc. However, the validity or relevance of those measurements
cannot readily be confirmed by first principles.[^5] The nature of the
human auditory system is such that one is not able to decouple and
independently vary any of the variables of interest (however those
might be defined), and even if one could, the principle of
superposition, in general, does not apply.
## Non-Linearity \| Part 1
After acknowledging the difficulties involved in quantifying the
behavior of the auditory system and developing models of hearing, one
should take a look into specific sources of non-linearity and the
mechanisms through which such behavior is imposed upon the auditory
system. There is probably no better example of such behavior than what
is called the acoustic, auditory or intra-aural reflex.
### The Acoustic Reflex
The acoustic reflex in man refers to the tendency of the middle ear
muscles controlling the behavior of the ossicles (the little bones in
the middle ear) to tense under an intense acoustic stimulus, thereby
making the ossicular chain stiffer and in that way limiting the motion
of the stapes (the last bone in the chain). This reduction in the motion
of the stapes equates to a real rather than a perceived reduction in the
amplitude of the vibrations transmitted through the middle ear to the
inner ear. The reflex serves to protect the sensitive inner ear from
damage during exposure to loud sounds.
Unfortunately, although fast, the auditory reflex is not an
instantaneous reaction. For low frequencies, the response takes from 20
to 40 ms to be elicited and therefore offers no protection against loud
impulsive sounds like gunshots and explosions.[^6] With the onset of the
auditory reflex the entire ear exhibits a marked change in acoustic
impedance, which was observed in 1934 by Geffcken and measured by Bekesy
and other researchers in subsequent years. It is argued, however, that
the onset of the auditory reflex happens for sound of very high
intensity and therefore its effect on perception is limited.[^7] On the
other hand, the same reflex can be voluntarily elicited by, for example,
vocalizing. According to Lawrence E. Kinsler, it seems that the
mechanical characteristics of the ear are mainly responsible for the
response elicited by the auditory system and hence sound perception.[^8]
Whatever the exact nature of the auditory reflex may be, and whatever
the precise range over which it has the most effect, is beyond the scope
of this article.
### Perceived loudness of pure tones
The intensity and loudness of sound are two highly interdependent
quantities. Loudness belongs to the psychological attributes of sound
and intensity is a precisely defined and measurable physical quantity.
Because of their strong similarity, the two quantities were once thought
to be one and the same, since if one increases the intensity of a
particular sound, the sound becomes louder.[^9] In the simplest and
clearest terms: intensity is measured sound level and loudness is
perceived sound level.
The measured sound level is expressed in terms of **intensity** and
**intensity level**, while the perceived sound level is expressed in
terms of **loudness** and **loudness level**.
Sound Intensity is defined as the acoustic power per unit area and it is
measured in Watts per square meter\
$$I=\frac{Power}{Unit Area}\quad\left[\frac{W}{m^2}\right]$$
However, the human ear is capable of detecting sound intensity ranging
from 1x10^−12^ Wm^−2^ to 1x10^2^ Wm^−2^ (above which permanent damage to
the ear will occur). This gives a scale in which the maximum value is
100 000 000 000 000 times larger than the smallest one.[^10]
In order to provide more insight and to get around the cumbersome
numbers, we use the **Intensity level** **I~L~**, which is defined as
the intensity relative to 1x10^−12^ Wm^−2^ on a logarithmic scale, and
has units of decibels:
$$I_L=10\log\left(\frac{I}{I_{ref}}\right)\quad[dB]$$
For plane waves in air at standard temperature and pressure, acoustic
pressure and intensity are linked by the following relation:
$$I=\frac{P^2}{\rho c}$$
Where ρ is the air density and c the speed of sound in air. By doing the
following:\
$$I_L=10\log\frac{\left(\frac{P^2}{\rho c}\right)}{\left(\frac{P_{ref}^2}{\rho c}\right)}=10\log{\left(\frac{P}{P_{ref}}\right)}^2=20\log\frac{P}{P_{ref}}=SPL$$
The expression on the right is deemed the **Sound Pressure Level** and
it is identical to the Intensity Level, but in terms of acoustic
pressure. The reference pressure used is 20 μPa. It is very close to the
average minimum audible acoustic pressure in air in the absence of any
noise.[^11] It is important to note that the minimum audible pressure is
averaged over multiple subjects; therefore, for a given percentage of
the population, negative sound pressure levels are perceptible, i.e.
they can perceive sound pressures smaller than the reference pressure.
The chosen reference pressure corresponds to the reference intensity
through the aforementioned relationship, in such a way that SPL and
I~L~ are identical.
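
To make these definitions concrete, here is a minimal Python sketch (not
part of the original text) that evaluates the level formulas above with
the standard reference values of 1x10^−12^ Wm^−2^ and 20 μPa:

```python
import math

I_REF = 1e-12   # reference intensity, W/m^2
P_REF = 20e-6   # reference pressure, Pa (20 micropascals)

def intensity_level(intensity):
    """Intensity level in dB re 1e-12 W/m^2."""
    return 10 * math.log10(intensity / I_REF)

def spl(pressure_rms):
    """Sound pressure level in dB re 20 uPa."""
    return 20 * math.log10(pressure_rms / P_REF)

# The threshold of hearing and the threshold of damage span 14 orders of
# magnitude in intensity, but only 0 to 140 on the decibel scale.
print(intensity_level(1e-12))  # 0.0 dB
print(intensity_level(1e2))    # 140.0 dB
print(spl(1.0))                # ~94 dB SPL for a 1 Pa rms tone
```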
The qualitative expressions of loud, not very loud, extremely loud, are
used to describe loudness. Although these expressions are adequate in
describing the sensation in any specific individual, they do a very poor
job in quantifying the result. The above qualitative expressions have
been made quantitative for pure tones, i.e. sinusoidal waves, with the
use of **Loudness Level** and **Loudness**.
The **loudness level** of a particular test tone is an indirect measure
of loudness and it is defined as the **Sound Pressure Level(SPL)** of a
1000 Hz pure tone that sounds as loud as the test tone.[^12] The 1000 Hz
tone was chosen arbitrarily and retained as the standard. The Loudness
level is measured in phons. The Loudness Level of the just audible
1000 Hz tone is defined as 3 phons because the minimum perceptible SPL
of a 1 kHz tone is 3 dB. Increments in phons are logarithmic because the
SPL is measured in decibels.
The **loudness level** is very useful in quantifying the sensation,
however it fails to provide information on the relation between sounds
of different loudness levels. In other words, it fails to provide
insight on how much louder a sound of, e.g., 50 phons is than a sound of
20 phons. To get around this problem, we use **Loudness**, which has
units of sones. **Loudness** is based on the 40 dB, 1000 Hz pure tone,
which is defined to have a loudness of 1 sone. The Loudness scale is
derived by increasing or decreasing the SPL of the 1 kHz tone until it
\"sounds twice as loud as before\" or \"half as loud\" etc. Successive
halving of the loudness creates the rest of the scale. The **Loudness**
for the remaining tones is determined by the same equal loudness
judgment that provides the **Loudness Levels**.[^13]
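
A commonly used engineering approximation of this scale (often
attributed to Stevens, and reliable mainly above 40 phons) is that the
loudness in sones doubles for every increase of 10 phons. A short
sketch, assuming that rule:

```python
import math

def phons_to_sones(loudness_level_phons):
    # Rule of thumb: every +10 phons doubles the perceived loudness.
    return 2 ** ((loudness_level_phons - 40) / 10)

def sones_to_phons(loudness_sones):
    return 40 + 10 * math.log2(loudness_sones)

print(phons_to_sones(40))   # 1.0 sone, by definition
print(phons_to_sones(50))   # 2.0 sones: "twice as loud"
print(sones_to_phons(4.0))  # 60.0 phons: "four times as loud" as 40 phons
```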
Loudness and Loudness Level are best illustrated, and are most useful,
when plotted against the SPL of pure tones in what are called equal
loudness contours, or Fletcher & Munson curves, named after the earliest
researchers. However, the way loudness is measured has been
significantly altered and standardized since the time when such
measurements were first made.\
![Equal loudness contours](Lindos1.svg)
## A little bit of bio
### The cochlea
The **cochlea**, or inner ear, constitutes the hydrodynamic part of the
ear. It is a small, hollow, snail-shaped member formed from bone and
filled with colorless liquid. It has an uncoiled length of about 35 mm
and a cross-sectional area of about 4 mm^2^ at the end closest to the
middle ear, tapering off to about 1 mm^2^ at the far end.[^14]\
: It is filled with **two** different fluids separated in **three**
different channels that run together from the base of the stapes to the
apex of the cochlea; however, two of the channels are separated by
Reissner\'s membrane, which is thin and flexible enough to be neglected
from a hydromechanical point of view.[^15] The vibrations are
transmitted directly from the base-plate of the stapes, the last of the
three ossicles, to the contained fluid. The cochlea is divided down the
middle by the **basilar membrane**, which is a partly bony and partly
gelatinous membrane. It is on this membrane that the organ of Corti and
the infamous **hair cells** reside.
![The auditory system and cochlea](Anatomy of Human Ear with Cochlear Frequency Mapping.svg)
![The three fluid-filled cavities](Cochlea.png)
### The basilar membrane
As previously mentioned, the basilar membrane is a flexible gelatinous
membrane that divides the cochlea longitudinally. It is the flexible
part of the cochlear partition (the other being rather bony) and it
contains about 25 000 nerve endings attached to numerous hair cells
arranged on the surface of the membrane. It extends from the base to
just before the apex of the cochlea, at which point it terminates at the
helicotrema. This creates two hydromechanically distinct channels, with
the baseplate of the stapes attached to the entrance of the upper
channel at the **oval window**, and a highly flexible membrane called
the **round window** sealing the lower channel. The two channels connect
at the apex through the **helicotrema**, which is basically a gap
through the cochlear partition.\
![Diagrammatic longitudinal section of the cochlea showing the location of the basilar membrane](Gray928.png)
![Two views of cochlear mechanics](Two Views of Cochlear Mechanics.svg)
The vibrations transmitted to the stapes set up acoustic waves in the
fluid that travel down the upper channel, through the helicotrema and
back up through the lower channel. Since the walls of the cochlea are
relatively rigid and the contained fluids relatively incompressible,
this causes the basilar membrane to flex. In order to equalize the
pressure in the cochlea, the round window \"bulges out\" and in this way
provides pressure release.
The basilar membrane starts out narrow, with a width of about 0.04 mm
near the oval window, and then widens to about 0.5 mm near the
helicotrema. This non-uniformity in width, along with the pressure
release provided at the round window, causes the basilar membrane to
exhibit maxima of vibration at different locations (distances from the
oval window) along the membrane, depending on the frequency of
vibration. This makes the basilar membrane behave as an acoustic filter
that separates the constituent frequencies of an incoming sound signal
depending on the location of the maxima.\
: Basilar Membrane Animation
![Uncoiled cochlea with basilar membrane](Uncoiled cochlea with basilar membrane.png)
### What's up with all the hair?
The hair cells that populate the top surface of the basilar membrane are
the last part in the chain of transformation of the mechanical energy of
the acoustic wave into electrical impulses. These cells are arranged in
an inner row and an outer row in the organ of Corti (which runs along
the basilar membrane) and they are surrounded by electrically charged
cells at different potentials (synapses).[^16][^17]

![Section exposing the hair and hair cells](Gray932.png)
![Cochlea cross-section with the hair cells visible](Cochlea-crosssection.svg)
As was already mentioned, the basilar membrane exhibits various
vibration maxima at different locations when excited by a sound input.
As a result of these excitations, a relative motion of the fluid
parallel to the membrane is effected. This motion produces a shear force
on myriads of minuscule hairs protruding from these cells. The
disruption produces an electrochemical cascade in the surrounding
electrically active cells, which results in a signal being sent to the
brain.\
: What is really important to note is that these hair cells are not
evenly distributed over the surface of the basilar membrane, but rather
they are concentrated in discrete patches. Since different frequencies
make different parts of the membrane vibrate more than others, this
means that there are ranges of frequencies that we can perceive better
than others, depending on the number density of the hair cells
surrounding the corresponding region on the basilar membrane. This
introduces discreteness and gives a sort of minimum resolution to our
sense of hearing, thus causing some interesting non-linear effects to be
discussed soon.
![Arrangement of the hair cells on the cochlea. Left = healthy; right = pattern defects](Hair Cell Patterning Defects in the Cochlea.png)
Due to the similarity between the behavior of the inner ear and the
behavior of a band-pass filter, the above groups of frequencies have
been named **critical bandwidths**.[^18]
## Non-Linearity \| Part 2
Now that a little more has been presented about the workings of the
inner ear, more peculiarities of the idiosyncratic auditory system can
be illustrated, starting with a non-linear effect that is fairly common
and very noticeable when it occurs: the phenomenon of beating.
### Beating Phenomena
Beating phenomena are characteristic of multiple degree of freedom
systems in which the various degrees of freedom are coupled to some
extent and which receive two harmonic excitations at slightly different
frequencies. The excitations can be summed as follows:[^19]
: $$x=A_1e^{j(\omega_1 t+\Phi_1)}+A_2e^{j(\omega_2 t+\Phi_2)}$$
The resulting vibration is no longer simple harmonic.
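
A quick numerical sketch of this superposition (an illustration, using
two assumed tones 4 Hz apart) shows the slowly varying envelope that the
ear hears as beating:

```python
import numpy as np

fs = 8000                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)  # one second of time samples

f1, f2 = 440.0, 444.0          # two tones 4 Hz apart (within one critical band)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# The sum is equivalent to a tone at the mean frequency (442 Hz) whose
# amplitude envelope varies at the difference frequency (4 Hz):
#   x = 2 cos(pi (f2 - f1) t) sin(pi (f1 + f2) t)
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))
print("beat frequency:", f2 - f1, "Hz")
print("max envelope:", envelope.max())  # 2.0 at moments of constructive interference
```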
The inner ear is a continuous system, with the basilar membrane serving
as a complicated bandpass filter to separate frequencies. When one or
both ears are exposed to sound that consists of two tones with a slight
difference in their frequencies, the non-uniform distribution and strong
localization of the hair cells on the surface of the basilar membrane
result in the same group (or critical bandwidth) of hair cells being
excited by both tonal components of the incident sound.\
![Beating](Beating Frequency.svg)
As a result, the listener perceives the combination sound as that of a
single frequency tone but with periodically varying intensity. This is
known as beating.\
: The tones remain indistinguishable until the frequency separation
between them is greater than the critical bandwidth. It is really
interesting to note that if the two tones are presented to each ear
separately, then no beating occurs and the ear is able to resolve the
difference.[^20]
## References
## Links and Resources
- Dancing Hair Cell
- Acoustical Society of America
- Table of sound levels
- Basilar Membrane Animation
[^1]: Acoustics, Leo L. Beranek 1993, Copyright: Acoustical Society of America, Ch. 13: Hearing, Speech Intelligibility and Psychoacoustic Criteria
[^2]: Overall Loudness of Steady Sounds According to Theory and
Experiment, Walton L. Howes, Nasa reference Edition 1001
[^3]: The Journal of the Acoustical Society of America, Volume 23, Number 5, September 1951: DC Potentials and Energy Balance of the Cochlear Partition, Georg v. Bekesy, Psycho-Acoustic Laboratory, Harvard University, Cambridge, Massachusetts (received May 5, 1951)
[^4]: The Measurement of Hearing,Ira J. Hirsh, McGraw Hill Book Company,
Inc, First Edition,1952
[^5]:
[^6]: Fundamentals of Acoustics, Lawrence E. Kinsler, Alan B. Coppens, 4th Edition
[^7]: Acoustic Reflex in Man,Aage R. Moller, J. Acoust. Soc. Am. 34,
1524 (1962), <DOI:10.1121/1.1918384>
[^8]:
[^9]:
[^10]: <http://www.engineeringtoolbox.com/sound-intensity-d_712.html>
[^11]:
[^12]:
[^13]:
[^14]:
[^15]:
[^16]:
[^17]:
[^18]:
[^19]:
[^20]:
# Engineering Acoustics/Conductive Mechanisms of Hearing
*Figure: Anatomy of the Human Ear*
In this section we will discuss the pathways for conduction of acoustic
sound waves to the inner ear. Two methods of conduction will be covered;
first we will cover conduction to the inner ear through the outer and
middle ear then conduction to the inner ear through bone conduction.
Without these conductive pathways the acoustic waves that transmit sound
would not be able to reach the inner ear, and we would therefore be
unable to hear.
## Outer Ear
The first area where sound enters the ear is in the outer ear which can
be broken down into two parts, the Pinna (visible portion of the ear)
and the ear canal.
### Pinna
It was once thought that the Pinna funneled sound into the ear canal,
but the primary function of the Pinna is to aid in sound source
localization. The various ridges in the Pinna filter the signal at
frequencies over 4000 Hz according to the direction of the sound. The
resulting spectral variations allow our brain to determine the elevation
of a sound and localize sounds in reference to our position.[^1]
### Ear Canal
The ear canal can be modeled as a tube approximately 2.8 cm long that is
closed at one end by the tympanic membrane. Unlike a rigid tube, which
has a sharp spike in amplitude at the resonant frequency, the ear canal
has a wide resonance peak from approximately 2--5 kHz.[^2] It is
important to remember that the ear canal is neither rigid nor a straight
tube, and therefore damping is introduced. While this damping alters the
resonant frequency, the important thing is that the ear canal can raise
the Sound Pressure Level
(SPL) in the ear by up to 15 dB,
which amplifies the incoming acoustic signal.[^3]
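
As a rough check of that resonance band, one can treat the canal as an
ideal quarter-wave tube; a minimal sketch (not from the original text)
using the 2.8 cm length quoted above:

```python
c = 343.0   # speed of sound in air, m/s
L = 0.028   # ear canal length, m (2.8 cm, from the text)

# A tube closed at one end resonates when its length is a quarter wavelength.
f_resonance = c / (4 * L)
print(f"{f_resonance:.0f} Hz")  # ~3060 Hz, inside the observed 2-5 kHz peak
```

The damping and irregular geometry of the real canal broaden this single
sharp resonance into the wide 2--5 kHz peak described above.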
## Middle Ear
The primary purpose of the middle ear is to act as an impedance-matching
transformer that will allow for acoustic energy to be efficiently
transferred from the air filled outer ear to the liquid filled cochlea.
Knowing the impedance ratio, r, of the liquid in the cochlea to the air
is 4000:1 we can use the following equation to determine what the energy
transmission coefficient, T, would be without the middle ear
```{=html}
<center>
```
$T = \frac{4r}{ \left( r+1 \right) ^2}$
```{=html}
</center>
```
which gives us a transmission of 0.001, or 0.1%. This transmission value
is equivalent to a level drop of about 30 dB.[^4]
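
A two-line check of this figure (an added sketch), using the 4000:1
impedance ratio given above:

```python
import math

r = 4000.0                # impedance ratio, cochlear fluid to air
T = 4 * r / (r + 1) ** 2  # power transmission coefficient
print(f"T = {T:.4f}")                          # ~0.0010, i.e. 0.1% of the energy
print(f"drop = {10 * math.log10(T):.1f} dB")   # ~ -30 dB
```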
To overcome this impedance mismatch, the middle ear employs three
mechanical amplification systems: the areal ratio between the Tympanic
Membrane and the
Stapes, the areal ratio of the
stapes to the Oval Window,
and the lever effect of the Ossicular
Chain.
### Areal Ratio
While the ossicular chain helps to increase the pressure of the incoming
acoustic signal, the majority of gain in the middle ear is due to the
areal ratio of the tympanic membrane and the stapes. The average area of
the tympanic membrane is 66 $mm^2$ while that of the stapes is 3.2
$mm^2$. The areal ratio between the two areas should therefore be
approximately 20:1, but as discussed below, the effective area of the
tympanic membrane is only 65%, so 66 $mm^2$ x 65% = 42.9 $mm^2$. This
results in an areal ratio of 13.4.
### Tympanic membrane
The tympanic membrane plays an important role in the conduction of
sound. Serving as a
Transducer, the membrane
converts acoustic pressure waves into mechanical motion. Experiments
have shown that the tympanic membrane is not uniform, but has a
spatially variable stiffness. Due to these variations, the effective
area of the membrane is only 65%.[^5] It is important to mention that
the stiffness of the tympanic membrane is also very important to the
efficient transfer of energy as being too stiff would reflect a large
amount of energy back through the ear canal, while being too flacid
would cause the membrane to absorb too much energy.[^6]
Another important contribution of the tympanic membrane is the gain
contributed by the curved membrane principle. This model represents the
membrane in two sections with the manubrium at the center. Due to the
curved membrane principle, the force on the manubrium is greater than on
the membrane, and thus the strength of the incoming signal is increased
by a factor of 2.[^7] Including these effects, we find the total gain of
the areal ratio and tympanic membrane will be 26.8. Using this pressure
ratio we can calculate the associated gain in decibels
```{=html}
<center>
```
$\text{dB Gain} = 20\log(26.8/1) = 28.6\ \text{dB}$
```{=html}
</center>
```
Similar to the areal ratio between the tympanic membrane and the stapes,
the areal ratio between the stapes and the oval window of the
Cochlea is 20:1:
```{=html}
<center>
```
$\text{dB Gain} = 20\log(20/1) = 26\ \text{dB}$
```{=html}
</center>
```
This yields a pressure gain of 26 dB.[^8]
### Ossicular Chain
The ossicular chain functions like a basic lever. The manubrium of the
Malleus is 1.3 times longer than
the long process of the Incus, and
the two ossicles are connected with ligaments so that they can move
together. Their attachment in the middle ear evenly distributes their
mass about an axis of rotation, which allows them to be easily set into
motion by incoming acoustic signals. In addition, the ossicles are
damped enough that when the incoming signal is stopped, the malleus and
incus stop as well, which is desirable, as continued motion of the
ossicles would lead to echoing. The difference in lengths between the
two ossicles also corresponds to a pressure gain at the incus equivalent
to the length ratio of the two ossicles. This can be best shown by
modeling the two bones as a simple lever:
*Figure: Anatomy of the Human Ear*
Knowing this pressure ratio we are able to compute the equivalent gain
in decibels:[^9]
```{=html}
<center>
```
$\text{dB Gain} = 20\log(1.3) = 2.3\ \text{dB}$
```{=html}
</center>
```
While the pressure gain is small, it still contributes to the conduction
of sound to the inner ear.
### Overall Middle Ear Gain/ Ideal Transformer Prediction
Using the pressure gain from the ossicular chain, the areal ratio of the
tympanic membrane to the stapes, and the areal ratio of the stapes to
the oval window we find the total gain from the middle ear is
```{=html}
<center>
```
$\text{Middle Ear dB Gain} = 2.3\,dB+ 28.6\,dB\ + 26\,dB = 56.9\,dB$
```{=html}
</center>
```
This gain in the middle ear is called the \"Ideal Transformer
Prediction\... \[which\] compensate\[s\] for the air-to-cochlea
impedance mismatch\"[^10] and has been confirmed in tests performed on
cadavers with the middle ears removed.
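
The whole gain chain of this section can be reproduced in a few lines;
the sketch below (an added illustration) simply chains the numbers
quoted above (66 mm², 65% effective area, 3.2 mm² stapes, the factor of
2, the 20:1 ratio, and the 1.3 lever arm):

```python
import math

def db(pressure_ratio):
    return 20 * math.log10(pressure_ratio)

# Areal ratio: effective tympanic membrane area over stapes footplate
# area, times the factor of 2 from the curved-membrane principle.
a_tm_effective = 66 * 0.65          # mm^2
areal_ratio = a_tm_effective / 3.2  # ~13.4
tm_gain = db(areal_ratio * 2)       # ~28.6 dB

ow_gain = db(20)                    # stapes-to-oval-window areal ratio, ~26 dB
lever_gain = db(1.3)                # ossicular lever, ~2.3 dB

print(f"total middle ear gain: {tm_gain + ow_gain + lever_gain:.1f} dB")  # ~56.9
```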
## Bone Conduction
Bone conduction is the transfer of sound to the cochlea through the
bones of the skull. In order for conduction to occur in the skull, the
air-conduction threshold of sound must be exceeded by 50-60 dB, which
most often occurs through direct vibration of the skull, primarily the
temporal bone. When this threshold is exceeded, the traveling waves
produced in the cochlea resemble those induced by the stapes.[^11] Due
to the impedance mismatch, bone conduction does not play a significant
role in hearing, but it is often used in hearing measurements to
determine whether there is damage to the middle ear, or to test the
viability of the inner ear.[^12]
One use for bone conduction is for people with a damaged middle ear but
a normally functioning inner ear. For these people a traditional hearing
aid cannot be used, but thanks to the healthy inner ear, bone conduction
hearing aids can be. For these hearing aids a titanium screw is
implanted into the skull, and an external device housing the microphone
and receiver conducts acoustic waves to the inner ear through the
temporal bone. While hearing through bone conduction is not perfect, it
does allow for clear and understandable sound recognition.[^13]
## References
[^1]: Gelfand, Stanley A. Hearing, an Introduction to Psychological and
Physiological Acoustics. New York: M. Dekker, 1981. Print. p.85-102
[^2]:
[^3]: Kinsler, Lawrence E. Fundamentals of Acoustics. New York: Wiley,
2000. Print. p.312-15
[^4]:
[^5]: Durrant, John D, and Jean H. Lovrinic. Bases of Hearing Science.
Baltimore: Williams & Wilkins, 1977. Print.
[^6]: Auditory Function. Emanuel,Maroonroge,Letowski.
<http://www.usaarl.army.mil/new/publications/HMD_Book09/files/Section%2016%20-%20Chapter%209%20Auditory%20Function.pdf>
[^7]:
[^8]:
[^9]: Hamill, Teri, and Lloyd L. Price. The Hearing Sciences. San Diego:
Plural Pub, 2008. Print. p.166-69
[^10]:
[^11]:
[^12]: Yost, William A. Fundamentals of Hearing: An Introduction. San
Diego: Academic Press, 2000. Print. p.72
[^13]:
# Engineering Acoustics/Hearing Protection
## Introduction
The human ear is constantly exposed to noise. In some situations, the
intensity of this noise can be infuriating, like on a subway, a train,
or a plane. One may want to put on headphones and crank up the music
volume to overcome the maddening turbojet engine noise or the roar of
the city. In the cabin of an aircraft, the intensity of the noise during
cruise is about 85 dB, reaching over 100 dB during take-off and landing.
A solution to reduce the exposure to high noise levels is the use of
noise-canceling headphones.
## Noise Control Mechanisms
There are two types of noise-canceling headphones. One using passive
elements, the other one making use of active elements.
### Passive Noise Control
Passive noise control elements do not require any source of energy. The
noise reduction comes from the material and the shape of the hearing
device. An example of headphones using only a passive element to block
sound is the earmuffs and ear plugs worn by workers on construction
sites. This type of headphone can reduce the noise level by about 15 to
25 decibels [^1].

The hearing protection device acts as a sound barrier. Sound barriers
are more effective with high frequency noise. Large wavelength noise, or
low frequency sound, can diffract around the device easily, while high
frequency sound is blocked and reflected. An important factor in how
well the headphones block outside noise is how good a seal is created.
For the same pair of headphones or earphones, the attenuation can be
very good or bad, depending on how they fit on the user.
The phase speed of the device is described by the bulk speed[^2],
$c^2=\frac{(\mathcal{B}+\frac{4}{3}\mathcal{S})}{\rho_0}$
where $\mathcal{B}$ and $\mathcal{S}$ are the bulk and shear moduli of
the solid. $\rho_0$ is its density. The reflection coefficient in a
solid can be described as
$\mathbf{R}=\frac{\frac{r_n-r_1}{\cos{\theta_i}}+jx_n}{\frac{r_n+r_1}{\cos{\theta_i}}+jx_n}$
where $r$ and $x$ denote the specific acoustic resistance and reactance,
respectively. The specific acoustic resistance is
$r=\rho c$
The subscripts $n$ and $1$ denote the property in the normal direction
and of the first medium, respectively. With the reflection coefficient,
the level of attenuation and level of transmission can be found. The
power transmission coefficient is
$T_\pi(\theta)=\frac{1}{1+[(\omega \rho_s / 2r_1)\cos{\theta}]^2}$
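As an illustration of why barriers favor high frequencies, here is a
sketch (not from the original text) that evaluates the transmission
coefficient above at normal incidence for an assumed limp panel of
1 kg/m² surface density, a made-up value chosen purely for illustration:

```python
import math

rho_air = 1.21        # density of air, kg/m^3
c_air = 343.0         # speed of sound in air, m/s
r1 = rho_air * c_air  # specific acoustic resistance of air, ~415 rayl

def transmission_loss(surface_density, frequency, theta=0.0):
    """Mass-law transmission loss in dB for a limp panel (kg/m^2)."""
    omega = 2 * math.pi * frequency
    T = 1 / (1 + (omega * surface_density / (2 * r1) * math.cos(theta)) ** 2)
    return -10 * math.log10(T)

# Attenuation rises by roughly 6 dB per doubling of frequency once the
# mass term dominates: barriers work better against high-frequency noise.
for f in (125, 250, 500, 1000, 2000):
    print(f, "Hz:", round(transmission_loss(1.0, f), 1), "dB")
```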
### Active Noise Control
What makes noise-canceling headphones such an interesting device is the
use of the active element to eliminate undesirable noise. It is
important to note that a typical active noise-canceling headphone will
also have a passive element that will serve as a first barrier to the
unwanted, high frequency sound waves. Behind the noise barrier, a
combination of microphone, electrical circuit, and speaker is set in
place to destroy some of the noise that made it through the passive
element. The steps to cancel the undesirable noise are fairly
straightforward. First, the microphone detects the noise coming from
outside. Then the electrical circuit reads the information coming from
the microphone and creates a noise signal that has the same frequency
and amplitude as the outside noise, but with a phase of 180° to the
outside source. This signal is sent to the loudspeaker, which will put
out the desired sound[^3]. What happens then is the outside sound waves
get canceled -- or destroyed -- by the sound waves generated by the
speaker. The difficult part is the implementation of the system. The
limiting factor in the efficiency of noise cancelling devices is the
reaction time of the system. \"Coming within 25 degrees of the needed
180-degree phase shift can cut noise by 20 decibels. Headphones that
react more slowly provide less cancellation.\"[^4] claims Mark
Fischetti. Active noise control is effective at low frequencies. For
higher frequency noise, the required response time of the system is too
small for the device to be able to destroy the incoming sound waves.
Such a system is feasible but requires complex electronics that are
difficult to implement in a relatively small device.
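
A toy calculation, assuming the anti-noise has a perfectly matched
amplitude and differs only by a phase error, shows how quickly
cancellation degrades as the system falls behind; this is a
simplification for illustration, not the model Fischetti used:

```python
import math

def residual_level_db(phase_error_deg):
    """Residual level (dB re the original noise) when the anti-noise
    misses the ideal 180-degree shift by phase_error_deg."""
    phi = math.radians(phase_error_deg)
    residual_amplitude = 2 * math.sin(phi / 2)  # |1 - e^{j phi}|
    return 20 * math.log10(residual_amplitude)

for err in (1, 5, 10, 25):
    print(f"{err:2d} deg error -> {residual_level_db(err):6.1f} dB")
# 1 deg -> -35.2 dB, 5 deg -> -21.2 dB, 10 deg -> -15.2 dB, 25 deg -> -7.3 dB
```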
In their studies on different brands of noise-cancelling headphones, L.
Y. L. Ang et al. showed that active noise control is \"reliable in
stationary noise environments as opposed to those in environment that
are highly transient.\"[^5] In situations with highly transient sound,
like in an airport or inside the cabin while the aircraft is taking off,
the active noise-control system can sometimes become ineffective, or
even increase the sound pressure level. They found that a certain pair
of headphones increased the sound pressure level by 20.4 dB at a
frequency of around 600 Hz. In transient noise environments, the
headphones can show fluctuations in the level of attenuation of sound
pressure, especially in the range of 100 Hz to 1000 Hz. The efficiency
of the noise-cancelling headphones can be calculated by finding the
insertion loss. The insertion loss is measured by finding the difference
in sound pressure level in the ear when no headphones are worn and when
the active noise-cancelling device is activated and worn over the ears.
noise-cancelling headphones can be calculated by finding the insertion
loss. The insertion loss is measured by finding the difference in sound
pressure level in the ear when no headphones are worn and when the
active noise-cancelling device is activated and worn over the ears.
$IL_{T}=L_{0}-L_{C-ON}$
In this equation, $IL_{T}$ is the total insertion loss, $L_{0}$ is the
sound pressure level without the headphones, and $L_{C-ON}$ represents
the sound pressure level when the ears are covered and the active
element is turned on. The active noise-cancelling headphones have the
highest insertion loss at frequencies below 230 Hz. This makes them well
suited for use while commuting. In an aircraft cabin or a bus cabin, the
sound pressure level is maximum at around 110 Hz. By having the maximum
attenuation at those frequencies, one can protect one's ears from the
possibly dangerous noise levels found in different modes of
transportation.
On top of canceling low frequency noise, noise-cancelling headphones can
improve speech intelligibility. By attenuating low frequency sound
waves, the frequencies located between 500 Hz and 1500 Hz can be
isolated, leading to more intelligible speech. In their 2001 report,
Mongeau, Bernhard and Feist concluded that noise-cancelling headphones
could be a solution to improve communication at noisy toll booths, where
car and truck noise can lead to difficult conversations[^6]. The report
states that a good communication system could raise the speech
intelligibility index (SII) up to 0.75. In regular conditions in a toll
booth, the SII can be as low as 0 when using a regular or raised vocal
effort, or 0.06 when using a loud voice. In perfect conditions, the SII
is equal to 1. Although noise-cancelling devices improve speech
intelligibility, the main issue was the reception of the idea. Having
someone wearing a pair of over-the-ear headphones while serving a client
does not give a good impression as people tend to think the attendant is
also focusing on something else.
Since it is an active system, energy must be added to the system, so the
headphones require an energy source such as a typical battery.
## Types of headphones
There are three different designs of headphones. First, the one
providing the least amount of noise attenuation is the supra-aural
headphone. This type of headphone simply sits on the ears. While it is
not efficient in blocking noise, it provides a more natural sound. The
fact that it is more open to the outside environment allows the
headphones to sound more like a speaker.
The next type is the circumaural headphone. This type of headphones fits
around the user's ears, completely isolating them. They provide good
sound attenuation because they can block sound coming from all
directions. On the other hand, they are usually heavier and not as
comfortable. Due to the size of the circumaural headphones, it is
possible to add an active element to even further decrease outside
noise, as discussed in the Active Noise Control section above.
Finally, the last type is the in-ear headphone. They are worn directly
in the ear. There are two styles of in-ear headphones. The ear buds are
worn in the opening of the ear, while the canal earphones are worn in
the ear canal. This type provides the best passive noise attenuation as
they can efficiently create a seal and block the outside noise. They
also provide a high-quality sound.
## Conclusion
Noise-canceling headphones are a good example of how theoretical science
can be applied to solve an everyday problem. With the speaker-generated
sound wave having the same amplitude and frequency as the outside sound
waves, but with a phase difference of 180°, the two sound waves cancel
each other and the result is a quieter commute, safer for the ears and
the mind.
## References
[^1]: M. Fischetti, \"Noise-canceling headphones. Reducing a roar,\"
*Scientific American,* vol. 292, no. 2, pp. 92-3, 2005.
[^2]: L. E. Kinsler, A. R. Frey, A. B. Coppens and J. V. Sanders,
Fundamentals of Acoustics, United States of America: John Wiley &
Sons, Inc., 2000.
[^3]: W. Harris, \"How Noise-canceling Headphones Work,\" 15 February
2007. \[Online\]. Available:
https://electronics.howstuffworks.com/gadgets/audio-music/noise-canceling-headphone.htm.
\[Accessed 25 February 2018\].
[^4]:
[^5]: L. Y. L. Ang, Y. K. Koh and H. P. Lee, \"The performance of active
noise-canceling headphones in different noise,\" *Applied
Acoustics,* no. 122, pp. 16-22, 2017.
[^6]: L. Mongeau, R. J. Bernhard and J. P. Feist, \"Noise Control and
Speech Intelligibility Improvement of a Toll Plaza,\" The Institute
for Safe, Quiet and Durable Highways, West Lafayette, Indiana, 2001.
# Engineering Acoustics/Basic Acoustics of the Marimba
## Introduction
*Figure: Marimba Band \"La Gloria Antigueña\", Antigua Guatemala, 1979*
Like a xylophone, a marimba has octaves of wooden bars that are struck
with mallets to produce tones. Unlike the harsh sound of a xylophone, a
marimba produces a deep, rich tone. Marimbas are not uncommon and are
played in most high school bands.
Now, while all the trumpet and flute and clarinet players are busy
tuning up their instruments, the marimba player is back in the
percussion section with her feet up just relaxing. This is a bit
surprising, however, since the marimba is a melodic instrument that
needs to be in tune to sound good. So what gives? Why is the marimba
never tuned? How would you even go about tuning a marimba? To answer
these questions, the acoustics behind (or within) a marimba must be
understood.
## Components of Sound
What gives the marimba its unique sound? It can be boiled down to two
components: the bars and the resonators. Typically, the bars are made of
rosewood (or some synthetic version of wood). They are cut to size
depending on what note is desired, then the tuning is refined by shaving
wood from the underside of the bar.
### Example: Rosewood bar, middle C, 1 cm thick
The equation that relates the length of the bar with the desired
frequency comes from the theory of modeling a bar that is free at both
ends. This theory yields the following equation:

$Length = \sqrt{\frac{3.011^2\cdot \pi \cdot t \cdot v}{8 \cdot \sqrt{12}\cdot f}}$

where t is the thickness of the bar, v is the speed of sound in the bar,
and f is the frequency of the note. For rosewood, v = 5217 m/s. For
middle C, f = 262 Hz. Therefore, to make a middle C key for a rosewood
marimba, cut the bar to be:

$Length = \sqrt{\frac{3.011^2\cdot \pi \cdot 0.01 \cdot 5217}{8 \cdot \sqrt{12}\cdot 262}} = 0.45\ m = 45\ cm$
The resonators are made from metal (usually aluminum) and their lengths
also differ depending on the desired note. It is important to know that
each resonator is open at the top but closed by a stopper at the bottom
end.
### Example: Aluminum resonator, middle C
The equation that relates the length of the resonator with the desired
frequency comes from modeling the resonator as a pipe that is driven at
one end and closed at the other end. A \"driven\" pipe is one that has a
source of excitation (in this case, the vibrating key) at one end. This
model yields the following:

$Length = \frac {c}{4\cdot f}$

where c is the speed of sound in air and f is the frequency of the note.
For air, c = 343 m/s. For middle C, f = 262 Hz. Therefore, to make a
resonator for the middle C key, the resonator length should be:

$Length = \frac {343}{4 \cdot 262} = 0.327\ m = 32.7\ cm$
### Resonator Shape
The shape of the resonator is an important factor in determining the
quality of sound that can be produced. The ideal shape is a sphere. This
is modeled by the Helmholtz resonator. (For more see Helmholtz
Resonator
page)
However, mounting big, round, beach ball-like resonators under the keys
is typically impractical. The worst choices for resonators are square or
oval tubes. These shapes amplify the non-harmonic pitches sometimes
referred to as "junk pitches". The round tube is typically chosen
because it does the best job (aside from the sphere) at amplifying the
desired harmonic and not much else.
As mentioned in the second example above, the resonator on a marimba can
be modeled by a closed pipe. This model can be used to predict what type
of sound (full and rich vs dull) the marimba will produce. Each pipe is
a \"quarter wave resonator\" that amplifies the sound waves produced by
the bar. This means that in order to produce a full, rich sound, the
length of the resonator must exactly match one-quarter of the
wavelength. If the length is off, the marimba will produce a dull or
off-key sound for that note.
## Why would the marimba need tuning?
In the theoretical world where it is always 72 degrees with low
humidity, a marimba would not need tuning. But, since weather can be a
factor (especially for the marching band) marimbas do not always perform
the same way. Hot and cold weather can wreak havoc on all kinds of
percussion instruments, and the marimba is no exception. On hot days,
the marimba tends to be sharp and for cold days it tends to be flat.
This is the exact opposite of what happens to string instruments. Why?
The tone of a string instrument depends mainly on the tension in the
string, which decreases as the string expands with heat. The decrease in
tension leads to a flat note. Marimbas, on the other hand, produce sound
by moving air through the resonators. The speed at which this air moves
is the speed of sound, which increases with temperature. So, as the
temperature increases, so does the speed of sound. From the equation
given in example 2 above, you can see that an increase in the speed of
sound (c) means a longer pipe is needed to resonate the same note. If
the length of the resonator is not increased, the note will sound sharp.
Now, the heat can also cause the wooden bars to expand, but the effect
of this expansion is insignificant compared to the effect of the change
in the speed of sound.
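
A short sketch of this effect, using the common linear approximation
c ≈ 331.4 + 0.6T m/s for the speed of sound in air (an assumption that
is adequate near room temperature):

```python
def speed_of_sound(temp_celsius):
    # Linear approximation, valid near room temperature
    return 331.4 + 0.6 * temp_celsius

L = 0.327  # middle C resonator, cut for ~343 m/s (about 20 C / 68 F)

for temp in (0, 20, 40):
    f = speed_of_sound(temp) / (4 * L)
    print(f"{temp} C: resonance at {f:.1f} Hz")
# 0 C:  ~253 Hz (flat of 262 Hz); 40 C: ~272 Hz (sharp), as the text predicts
```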
## Tuning Myths
It is a common myth among percussionists that the marimba can be tuned
by simply moving the resonators up or down (while the bars remain in the
same position.) The thought behind this is that by moving the resonators
down, for example, you are in effect lengthening them. While this may
sound like sound reasoning, it actually does not hold true in practice.
Judging by how the marimba is constructed (cutting bars and resonators
to specific lengths), it seems that there are really two options to
consider when looking to tune a marimba: shave some wood off the
underside of the bars, or change the length of the resonator. For
obvious reasons, shaving wood off the keys every time the weather
changes is not a practical solution. Therefore, the only option left is
to change the length of the resonator.
As mentioned above, each resonator is plugged by a stopper at the bottom end. So, by simply shoving the stopper farther up the pipe, you can shorten the resonator and sharpen the note. Conversely, pushing the stopper down the pipe flattens the note. Most marimbas do not come with tunable resonators, so this process can be a little challenging. (Broomsticks and hammers are common tools of the trade.)
### Example: Middle C Resonator lengthened by 1 cm
For ideal conditions, the length of the middle C (262 Hz) resonator
should be 32.7 cm, as shown in example 2. The change in frequency for
this resonator due to a change in length is given by:

$\Delta Frequency = 262\ Hz - \frac {c}{4\cdot (0.327 + \Delta L)}$

If the length is increased by 1 cm, the change in frequency will be:

$\Delta Frequency = 262\ Hz - \frac {343}{4\cdot (0.327 + 0.01)} = 7.5\ Hz$
The acoustics behind the tuning a marimba go back to the design that
each resonator is to be ¼ of the total wavelength of the desired note.
When marimbas get out of tune, this length is no longer exactly equal to
¼ the wavelength due to the lengthening or shortening of the resonator
as described above. Because the length has changed, resonance is no
longer achieved, and the tone can become muffled or off-key.
## Conclusions
Some marimba builders are now changing their designs to include tunable
resonators. There are in fact several marimba companies that have had
tunable resonators for decades. However, only a few offer full range
tuning. Since any leak in the end-seal will cause major loss of volume
and richness of the tone, this is proving to be a very difficult task.
At least now, though, armed with the acoustic background of their
instruments, percussionists everywhere will now have something to do
when the conductor says, "tune up!"
## Links and References
1. <http://www.gppercussion.com/html/resonators.html>
2. <http://www.mostlymarimba.com/>
3. <http://www.craftymusicteachers.com/bassmarimba/>
# Engineering Acoustics/How an Acoustic Guitar works
## Introduction
There are three main parts of the guitar that contribute to sound
production.
First of all, there are strings. Any string that is under tension will
vibrate at a certain frequency. The tension and gauge in the string
determine the frequency at which it vibrates. The guitar controls the
length and tension of six differently weighted strings to cover a very
wide range of frequencies.
Second of all, there is the body of the guitar. The guitar body is
connected directly to one end of each of the strings. The body receives
the vibrations of the strings and transmits them to the air around the
body. It is the body's large surface area that allows it to "push" a lot
more air than a string.
Finally, there is the air inside the body. This is very important for
the lower frequencies of the guitar. The air mass just inside the
soundhole oscillates, compressing and decompressing the compliant air
inside the body. In practice, this concept is called a Helmholtz
resonator. Without this, it would be difficult to produce the wonderful
timbre of the guitar.
![](zehme_guitar.jpg "zehme_guitar.jpg")
## The Strings
The strings of the guitar vary in linear density, length, and tension.
This gives the guitar a wide range of attainable frequencies. The larger
the linear density is, the slower the string vibrates. The same goes for
the length; the longer the string is the slower it vibrates. This causes
a low frequency. Inversely, if the strings are less dense and/or shorter
they create a higher frequency. The lowest resonance frequencies of each
string can be calculated by
$f_1 = \frac{1}{2 L} \sqrt{\frac{T}{\rho_1}}$ where $T$= string tension,
$\rho_1$=linear density, $L$ = string length
The string length, L, in the equation is what changes when a player
presses on a string at a certain fret. This shortens the string, which
in turn increases the frequency it produces when plucked. The spacing of
these frets is important. The length from the nut to the bridge
determines how much space goes between each fret. If the length is 25
inches, then the first fret should be located (25/17.817) inches from
the nut. The second fret should then be located
(25 − (25/17.817))/17.817 inches from the first fret. Repeating this
rule gives the distance from the nut to fret n:

$d_n = L\left(1 - \left(1 - \tfrac{1}{17.817}\right)^n\right)$

![](zehme_frets.jpg "zehme_frets.jpg")
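
The rule of 17.817 is easy to put into code; this sketch (an added
illustration, not from the original text) lays out the frets for the
25-inch scale length used above:

```python
def fret_positions(scale_length, n_frets=12):
    """Distance from the nut to each fret using the rule of 17.817:
    each fret sits 1/17.817 of the remaining string length past the last."""
    positions = []
    remaining = scale_length
    distance = 0.0
    for _ in range(n_frets):
        step = remaining / 17.817
        distance += step
        remaining -= step
        positions.append(distance)
    return positions

pos = fret_positions(25.0)   # 25-inch scale, as in the text
print(round(pos[0], 3))      # first fret: 25/17.817 ~ 1.403 in
print(round(pos[11], 3))     # twelfth fret: ~12.5 in, half the scale (one octave)
```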
When a string is plucked, a disturbance is formed and travels in both
directions away from the point where the string was plucked. These
\"waves\" travel at a speed that is related to the tension and linear
density and can be calculated by

$c = \sqrt{\frac{T}{\rho_1}}$
The waves travel until they reach the boundaries on each end where they
are reflected back. The link below displays how the waves propagate in a
string.
Plucked String @
www.phys.unsw.edu
The strings themselves do not produce very much sound because they are
so thin. This is why they are connected to the top plate of the guitar
body. They need to transfer the frequencies they are producing to a
large surface area which can create more intense pressure disturbances.
## The Body
The body of the guitar transfers the vibrations of the bridge to the air
that surrounds it. The top plate contributes to most of the pressure
disturbances, because the player dampens the back plate and the sides
are relatively stiff. This is why it is important to make the top plate
out of a light springy wood, like spruce. The more the top plate can
vibrate, the louder the sound it produces will be. It is also important
to keep the top plate flat, so a series of braces are located on the
inside to strengthen it. Without these braces the top plate would bend
and crack under the large stress created by the tension in the strings.
This would also affect the magnitude of the sound being transmitted. The
warped plate would not be able to \"push\" air very efficiently. A good
experiment to try, in order to see how important this part of the guitar
is in the amplification process, is as follows:
1\. Start with an ordinary rubber band, a large bowl, adhesive tape, and
plastic wrap.
2\. Stretch the rubber band and pluck it a few times to get a good sense
for how loud it is.
3\. Stretch the plastic wrap over the bowl to form a sort of drum.
4\. Tape down one end of the rubber band to the plastic wrap.
5\. Stretch the rubber band and pluck it a few times.
6\. The sound should be much louder than before.
## The Air
The final part of the guitar is the air inside the body. This is very
important for the lower range of the instrument. The air just inside the
sound hole oscillates, compressing and expanding the air inside the
body. This is just like blowing across the top of a bottle and listening
to the tone it produces. This forms what is called a Helmholtz
resonator. For more information on Helmholtz resonators go to Helmholtz
Resonance. This link
also shows the correlation to acoustic guitars in great detail. Acoustic
guitar makers often tune these resonators to have a resonance frequency
between F#2 and A2 (92.5 to 110.0 Hz).
Having such a low resonance frequency is what aids the amplification of
the lower frequency strings. To demonstrate the importance of the air in
the cavity, simply play an open A on the guitar (the second string).
Now, as the string is vibrating, place a piece of cardboard over the
sound hole. The sound level is reduced dramatically. This is because
you\'ve stopped the vibration of the air mass just inside the sound
hole, causing only the top plate to vibrate. Although the top plate
still vibrates and transmits sound, it isn\'t as effective at
transmitting lower frequency waves, thus the need for the Helmholtz
resonator.
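
For a rough idea of the numbers involved, the classic rigid-wall
Helmholtz formula can be evaluated with assumed, plausible guitar
dimensions (the hole size, body volume, and end correction below are
guesses for illustration, not measurements from the text):

```python
import math

def helmholtz_frequency(c, area, volume, neck_length):
    """f = (c / 2 pi) * sqrt(A / (V * L_eff)), rigid-wall resonator."""
    return (c / (2 * math.pi)) * math.sqrt(area / (volume * neck_length))

c = 343.0              # speed of sound in air, m/s
A = math.pi * 0.05**2  # assumed 10 cm diameter sound hole, area in m^2
V = 0.012              # assumed ~12 litres of body volume
L_eff = 1.7 * 0.05     # effective neck length ~ 1.7 * hole radius (end correction)

print(f"{helmholtz_frequency(c, A, V, L_eff):.0f} Hz")  # ~150 Hz for rigid walls
# The compliant top plate of a real guitar lowers this resonance toward
# the 92.5-110 Hz range quoted above.
```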
# Engineering Acoustics/Bessel Functions and the Kettledrum
# Abstract
In class, we have begun to discuss the solutions of multidimensional
wave equations. A particularly interesting aspect of these
multidimensional solutions are those of Bessel functions for circular
boundary conditions. The practical application of these solutions is the
kettledrum. This page will explore in qualitative and quantitative terms
how the of the kettledrum works. More specifically, the kettledrum will
be introduced as a circular membrane and its solution will be discussed
in visual (e.g. visualization of Bessel functions, video of kettledrums
and audio forms (wav files of kettledrums playing. In addition, links to
more information about this material, including references, will be
included.
# What is a kettledrum
A kettledrum is a percussion instrument with a circular drumhead mounted
on a \"kettle-like\" enclosure. When one strikes the drumhead with a
mallet, it vibrates, which produces its sound. The pitch of this sound
is determined by the tension of the drumhead, which is precisely tuned
before playing. The sound of the kettledrum (called the timpani in
classical music) is present in many forms of music from many different
places of the world.
![](myKettledrum.jpeg "myKettledrum.jpeg")
# The math behind the kettledrum: the brief version
When one looks at how a kettledrum produces sound, one should look no
farther than the drumhead. The vibration of this circular membrane (and
the air in the drum enclosure) is what produces the sound in this
instrument. The mathematics behind this vibrating drum are relatively
simple. If one looks at a small element of the drum head, it looks
exactly like the situation for the vibrating string. The only difference
is that there are two dimensions where there are forces on the element,
the two dimensions that are planar to the drum. As this is the same
situation, we have the same equation, except with another spatial term
for the other planar dimension. This allows us to model the drumhead
using a Helmholtz equation. The next step (solved in detail below) is to
assume that the displacement of the drumhead (in polar coordinates) is a
product of two separate functions of $\theta$ and $r$. This allows us to
turn the PDE into two ODEs which are readily solved and applied to the
situation of the kettledrum head. For more info, see below.
# The math behind the kettledrum: the derivation
So starting with the trusty general Helmholtz equation:
$\nabla^2\Psi+k^2\Psi=0$
Where k is the wave number, the frequency of the forced oscillations
divided by the speed of sound in the membrane.
Since we are dealing with a circular object, it make sense to work in
polar coordinates (in terms of radius and angle) instead of rectangular
coordinates. For polar coordinates the Laplacian term of the Helmholtz
relation ($\nabla^2$) becomes
$$\frac{\partial^2 \Psi}{\partial r^2} + \frac{1}{r}\frac{\partial \Psi}{\partial r} + \frac{1}{r^2}\frac{\partial^2 \Psi}{\partial \theta^2}$$
Now let's assume that $$\Psi (r,\theta) = R(r) \Theta(\theta)$$
This assumption follows the method of separation of variables. (see
Reference 3 for more info) Substituting this result back into our trusty
Helmholtz equation gives the following:
$r^2 / R (d^2 R/dr^2 + 1/r dR/dr) + k^2 r^2 = -1/\Theta d^2 \Theta /d\theta^2$
Since we separated the variables of the solution into two
one-dimensional functions, the partial derivatives become ordinary
derivatives. Both sides of this result must equal the same constant. For
simplicity, I will use $\lambda^2$ as this constant. This results in the
following two equations:
$d^2 \Theta / d\theta^2 = -\lambda^2 \Theta$
$d^2 R / dr^2 + 1/r dR/dr + (k^2 - \lambda^2 / r^2) R = 0$
The first of these equations is readily seen as the standard second
order ordinary differential equation, which has a harmonic solution of
sines and cosines with frequency based on $\lambda$. The second equation
is what is known as Bessel\'s Equation. The solutions to this equation
are cryptically called Bessel functions of order $\lambda$ of the first
and second kind. These functions, while sounding very intimidating, are
simply oscillatory functions of the radius times the wave number; the
function of the second kind is unbounded as kr approaches zero, and both
diminish as kr gets larger. (For more information on what these
functions look like see References 1, 2, and 3)
Now that we have the general solution to this equation, we can model an
infinite-radius kettledrum head. However, since I have yet to see an
infinite kettledrum, we need to constrain this solution of a vibrating
membrane to a finite radius. We can do this by applying what we know
about our circular membrane: along the edges of the kettledrum, the drum
head is attached to the drum. This means that there can be no
displacement of the membrane at the termination at the radius of the
kettledrum. This boundary condition can be mathematically described as
follows:
$R(a) = 0$
Where a is the arbitrary radius of the kettledrum. In addition to this
boundary condition, the displacement of the drum head at the center must
be finite. This second boundary condition removes the Bessel function of
the second kind from the solution. This reduces the R part of our
solution to:

$R(r) = AJ_{\lambda}(kr)$

Where $J_{\lambda}$ is a Bessel function of the first kind of order
$\lambda$. Applying our other boundary condition at the radius of the
drum requires that the wave number k take discrete values,
$k = j_{\lambda n}/a$, where $j_{\lambda n}$ is the n-th zero of
$J_{\lambda}$ (these values can be looked up in tables). Combining all
of these gives us our solution for how a drumhead behaves (which is the
real part of the following):
$y_{\lambda n}(r,\theta,t) = A_{\lambda n} J_{\lambda}(k_{\lambda n} r)e^{j \lambda \theta+j \omega_{\lambda n} t}$
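
The allowed frequencies follow directly from the zeros $j_{\lambda n}$;
here is a small sketch using SciPy's tabulated Bessel zeros, with an
assumed drum radius and membrane wave speed (both made up for
illustration):

```python
import numpy as np
from scipy.special import jn_zeros

a = 0.33   # assumed drumhead radius, m (roughly a 26-inch timpani)
c = 100.0  # assumed wave speed in the membrane, m/s (sqrt of tension/density)

# k_mn = j_mn / a, where j_mn is the n-th zero of the Bessel function J_m,
# so the mode frequency is f_mn = j_mn * c / (2 * pi * a).
for m in range(3):              # angular order (the lambda above)
    zeros = jn_zeros(m, 2)      # first two radial zeros for this order
    freqs = zeros * c / (2 * np.pi * a)
    print(m, np.round(freqs, 1))

# Note that the ratios j_mn / j_01 are not integers: an ideal membrane by
# itself is inharmonic; the kettle's air loading (next section) shifts the
# preferred modes toward more nearly harmonic ratios.
```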
# The math behind the kettledrum: the entire drum
The above derivation is just for the drum head. An actual kettledrum has
one side of this circular membrane surrounded by an enclosed cavity.
This means that air is compressed in the cavity when the membrane is
vibrating, adding more complications to the solution. In mathematical
terms, this makes the partial differential equation non-homogeneous or,
in simpler terms, the right side of the Helmholtz equation does not
equal zero. This result requires significantly more derivation and will
not be done here. If the reader cares to know more, these results are
discussed in the two books under references 6 and 7.
# Sites of interest
As one can see from the derivation above, the kettledrum is very
interesting mathematically. However, it also has a rich historical music
tradition in various places of the world. As this page\'s emphasis is on
math, a few links are provided below that reference this rich history.
A discussion of Persian kettledrums: Kettle drums of Iran and other
countries
A discussion of kettledrums in classical music: Kettle drum
Lit.
A massive resource for kettledrum history, construction and technique\"
Vienna Symphonic
Library
Wikibooks\' sister site, references under Timpani: Wikipedia
reference
# References
1.Eric W. Weisstein. \"Bessel Function of the First Kind.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselFunctionoftheFirstKind.html>
2.Eric W. Weisstein. \"Bessel Function of the Second Kind.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselFunctionoftheSecondKind.html>
3.Eric W. Weisstein. \"Bessel Function.\" From MathWorld---A Wolfram Web
Resource. <http://mathworld.wolfram.com/BesselFunction.html>
4.Eric W. Weisstein et al. \"Separation of Variables.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/SeparationofVariables.html>
5.Eric W. Weisstein. \"Bessel Differential Equation.\" From
MathWorld---A Wolfram Web Resource.
<http://mathworld.wolfram.com/BesselDifferentialEquation.html>
6\. Kinsler and Frey, \"Fundamentals of Acoustics\", fourth edition,
Wiley & Sons
7\. Haberman, \"Applied Partial Differential Equations\", fourth
edition, Prentice Hall Press
# Engineering Acoustics/Acoustics in Violins
# Acoustics of the Violin
For the detailed anatomy of the violin, please refer to Atelierla
Bussiere.
![](Violin_front_view.jpg "Violin_front_view.jpg")![](backview.jpg "backview.jpg")
## How Does A Violin Make Sound?
### General Concept
When a violinist bows a string, vibrations with abundant harmonics are
produced. The vibrations of the strings are structurally transmitted to
the body of the instrument through the bridge. The bridge transmits the
vibrational energy produced by the strings to the body through its feet,
triggering the vibration of the body. The vibration of the body, along
with the resonance of the cavity, determines the sound radiation and
sound quality.
![](Acoustics_in_violins_procedure.jpg "Acoustics_in_violins_procedure.jpg")
### String
The vibration pattern of the strings can easily be observed. To the
naked eye, the string appears to move back and forth in a parabolic
shape (see figure), which resembles the first mode of free vibration of
a stretched string. The vibration of strings was first investigated by
Hermann von Helmholtz, the famous 19th-century mathematician and
physicist. Surprisingly, he discovered that the string actually moves in
an inverted "V" shape rather than a parabola (see figure); what we see
is just the envelope of the motion of the string. To honor his findings,
the motion of bowed strings has been called "Helmholtz motion."
![](String.jpg "String.jpg")
![](Helmholtzmotion.jpg "Helmholtzmotion.jpg")
### Bridge
The primary role of the bridge is to transform the motion of the
vibrating strings into periodic driving forces, applied by its feet to
the top plate of the violin body. The configuration of the bridge is
shown in the figure. The bridge stands on the belly between the f holes,
which have two primary functions: one is to connect the air inside the
body with the outside air, and the other is to let the belly between the
f holes move more easily than other parts of the body. The fundamental
frequency of a violin bridge was found to be around 3000 Hz when it is
on a rigid support, and it is an effective medium for transmitting
energy from the string to the body at frequencies from 1 kHz to 4 kHz,
the range of keenest sensitivity of human hearing. If a violinist
desires a darker sound from the violin, he or she may attach a mute to
the top of the bridge. The mute is an additional mass which reduces the
fundamental frequency of the bridge. As a result, the sound at higher
frequencies is diminished, since the force transferred to the body has
decreased. Conversely, the fundamental frequency of the bridge can be
raised by adding stiffness in the form of tiny wedges, and the sound at
higher frequencies will be amplified accordingly.
The sound post connects the flexible belly to the much stiffer back
plate. The sound post prevents the collapse of the belly due to the high
tension force in the strings and, at the same time, couples the
vibrations of the two plates. The bass bar under the belly extends
beyond the f holes and transmits the force of the bridge to a larger
area of the belly. As can be seen in the figure, the motion of the
treble foot is restricted by the sound post, while the foot over the
bass bar can move up and down more easily. As a result, the bridge tends
to move up and down, pivoting about the treble foot. The forces
appearing at the two feet remain equal and opposite up to 1 kHz. At
higher frequencies the forces become uneven: the force on the sound post
foot predominates at some frequencies, while the bass bar foot dominates
at others.
![](crossview.jpg "crossview.jpg")
### Body
The body includes the top plate, the back plate, the sides, and the air
inside, all of which serve to transmit the vibration of the bridge into
the vibration of the air surrounding the violin. For this reason, the
violin needs a relatively large surface area to push a sufficient amount
of air back and forth, and the top and back plates thus play important
roles in the mechanism. Violin makers have traditionally paid much
attention to the vibration of the top and back plates of the violin by
listening to the tap tones, or, more recently, by observing the
vibration mode shapes of the body plates. The vibration modes of an
assembled violin are, however, much more complicated.
The vibration modes of the top and back plates can easily be observed
using a technique first performed by Ernst Florens Friedrich Chladni
(1756--1827), who is often respectfully referred to as "the father of
acoustics." First, fine sand is sprinkled uniformly on the plate. Then
the plate is made to resonate, either by a powerful sound wave tuned to
the desired frequency, by being bowed with a violin bow, or by being
excited mechanically or electromechanically at the desired frequency.
Consequently, the sand is scattered by the vibration of the plate: some
of it falls off the plate, while the rest collects along the nodal
regions, where the plate moves relatively little. Hence, the mode shapes
of the plate can be visualized in this manner; see the figures at the
reference site, Violin
Acoustics. The first
seven modes of the top and back plates of a violin are presented there,
with nodal lines depicted by the black sand.
The air inside the body is also important, especially in the range of
lower frequencies. It behaves like the air inside a bottle when you blow
across the neck, a phenomenon known as Helmholtz
resonance,
and has its own modes of vibration. The air inside the body can
communicate with the outside air through the f holes, and the outside
air serves as the medium carrying waves from the violin.
See www.violinbridges.co.uk for more articles on bridges and acoustics.
### Sound Radiation
A complete description of the sound radiation of a violin should include
information about radiation intensity as a function of both frequency
and location. The sound radiation can be measured by a microphone
connected to a pressure level meter, rotatably supported on a stand arm
around the violin, while the violin is fastened at the neck by a clip.
The force is introduced into the violin by a miniature impact hammer at
the upper edge of the bridge, in the direction of bowing. For details,
refer to Martin Schleske, master studio for
violinmaking.
The radiation intensity of different frequencies at different locations
can be represented by directional characteristics, or acoustic maps. The
directional characteristics of a violin are shown in the figure on the
website of Martin
Schleske, where
the radial distance from the center point represents the absolute value
of the sound level (re 1 Pa/N) in dB, and the angular coordinate of the
full circle indicates the measurement point around the instrument.
According to the directional characteristics of violins, the principal
radiation directions for the violin in the horizontal plane can be
established. For more detail about the principal radiation direction for
violins at different frequencies, please refer to reference (Meyer
1972).
## References And Other Links
- Violin Acoustics
- Paul Galluzzo\'s
Homepage
- Martin Schleske, master studio for
violinmaking
- Atelierla Bussiere
- Fletcher, N. H., and Rossing, T. D., *The Physics of Musical
  Instruments*, Springer-Verlag, 1991
- Meyer, J., "Directivity of bowed stringed instruments and its
  effect on orchestral sound in concert halls", J. Acoust. Soc.
  Am., 51, 1972, pp. 1994--2009
# Engineering Acoustics/Clarinet Acoustics
The clarinet is a member of the woodwind family that is widely played in
orchestras and jazz bands. There are different types of clarinets that
differ in size and pitch: B flat, E flat, bass, contrabass, etc. The
acoustic pressure inside a clarinet typically reaches about 3 kPa, or
roughly 3% of one atmosphere.
A clarinet consists of several acoustical components:
- a mouthpiece-reed system: acting as the energy source, it produces the
  oscillating air flow and pressure components that drive the
  instrument.
- a cylindrical bore: a resonator that contains the air column and
  supports the standing waves.
- a bell (at the open end of the cylindrical bore) and open tone
  hole(s): these act as radiators.
![](Clarinet_sy.png "Clarinet_sy.png"){width="650"}
From the energy point of view, most of the energy injected by the player
compensates for the thermal and viscous losses at the walls of the
cylindrical bore, while only a small fraction of the energy is radiated
via the bell and open holes and heard by listeners.
### Mouthpiece-Reed System
The reed serves as a spring-like oscillator. It converts a steady input
air flow (DC) into an acoustically vibrating air flow (AC). However, it
is more than a one-way converter, because it also interacts with the
resonance of the air column in the instrument, i.e.:
- initially, increasing the blowing pressure results in more air
  flowing into the clarinet bore.
- but too large a difference between the blowing pressure and the
  mouthpiece pressure will close the aperture between the reed and the
  mouthpiece and finally result in zero air flow.
This behavior is roughly depicted in Figure 2:
![Figure 2: Mouthpiece-Reed system diagram](mouthpiece.png)
The lumped reed model is described by:[^1]
```{=html}
<center>
```
$m\frac{d^2 y}{dt^2}+ \mu \frac{dy}{dt}+k(\Delta p)y = \Delta p ,$
```{=html}
</center>
```
where $y$ is the reed displacement, $m$ is the mass, $\mu$ is the
damping coefficient, $k$ is the stiffness and is treated as a function
of $\Delta p$.
Let us look in a bit more detail at the relation between the input air
flow, the air pressure in the player's mouth, and the pressure in the
mouthpiece chamber.
![Figure 3: Input air flow vs. pressure difference ($\Delta p = p_{mouth} - p_{chamber}$)](ReedResistance.png)
Figure 3 is roughly divided into two parts. The left part shows a
resistance-like feature, i.e., the air flow increases with increasing
difference between the mouth pressure and the mouthpiece pressure. The
right part shows a negative resistance, i.e., the air flow decreases
with increasing pressure difference. AC oscillation only occurs within
the right part, so the player must play with a mouth pressure that falls
within a certain range. Specifically, the pressure difference must be
larger than the minimum pressure corresponding to the beginning of the
right part, but no more than the maximum pressure that will shut the
reed off.
The relation between the volume flow $U$ and the pressure difference
$\Delta p$ across the reed channel is described mathematically by
Bernoulli's equation:
```{=html}
<center>
```
$U = hw \sqrt{\frac{2|\Delta p|}{\rho}}sgn(\Delta p) ,$
```{=html}
</center>
```
where $h$ is the reed opening, $w$ is the channel's width and $\rho$ is
the fluid density. The reed opening $h$ is related to the pressure
difference $\Delta p$: approximately, increasing $\Delta p$ results in
decreasing $h$, until the mouthpiece channel is closed and no air flows
in.
The non-linear behavior of the mouthpiece-reed system is complicated and
is beyond the scope of linear acoustics. Andrey da Silva (2008) at
McGill University simulated the fully coupled fluid-structure
interaction in single-reed mouthpieces using a 2-D lattice Boltzmann
model, where the velocity fields for different instants are visualized
in his PhD
thesis.[^2]
### Cylindrical Bore
If all tone holes are closed, the main bore of a clarinet is
approximately cylindrical, and the mouthpiece end can be regarded as a
closed end. Therefore, the primary acoustical behavior of the main bore is
similar to a cylindrical closed-open pipe (a pipe with one closed end
and one open end). Also, to further simplify the problem, here we assume
the walls of the pipe are rigid, perfectly smooth and thermally
insulated.
Sound propagation in the bore can be expressed as a sum of numerous
normal modes. These modes arise from wave motion along all three axes of
a circular cylindrical coordinate system: wave motion along transverse
concentric circles, wave motion in the transverse radial plane, and
plane wave motion along the principal axis of the pipe. However, since
transverse modes are only weakly excited in real instruments, we will
not discuss them here but only focus on the longitudinal plane wave.
#### Fundamental frequency
The natural vibrations of the air column confined by the main bore are
supported by a series of standing waves. Even without any mathematical
analysis, by inspecting boundary conditions we can intuitively learn
some important physical features of these standing waves. There must be
a pressure node at the open end, because the total pressure near the
open end is almost the same as the ambient pressure, which means zero
acoustic pressure there. At the closed end (actually, the end connecting
with the mouthpiece chamber is not completely closed; there is always an
opening to let the air in, but we "pretend" the end is completely closed
to simplify the analysis for the moment), the volume velocity of the air
flow is almost zero, so the pressure is at its maximum. The lowest
frequency of these standing waves corresponds to the wave with the
longest wavelength, which is four times the length of the instrument
bore. Why? Because if we plot one quarter cycle of a sinusoid and fit it
into the closed-open pipe, with the peak amplitude at the closed end and
zero amplitude at the open end, we get a perfect representation of a
pressure standing wave inside such a pipe. Figure 4 depicts the pressure
wave and the velocity wave corresponding to the lowest pitch (the 1st
resonance frequency) in an ideal closed-open cylindrical pipe.
![Figure 4: Pressure and velocity distribution of f0 in a lossless cylindrical closed-open pipe](c_o_pipe.png)
Figure 5 shows the normalized pressure and velocity distributions of the
1st, 3rd and 5th resonance frequencies of a closed-open pipe of length
14.8 cm. To simplify matters, the reflectance of the open end is taken
as -1 and viscous losses are not accounted for.
![Figure 5: Pressure and velocity distribution of f1, f3 and f5 in a lossless cylindrical closed-open pipe](c_o_pipe_pu_f135.png)
#### Harmonic series
In the main bore, standing waves of other higher frequencies are also
possible, but their frequencies must be the odd harmonics of the
fundamental frequency due to the closed-open restriction. This is also
an important factor shaping the unique timbre of clarinets.
Specifically, the series of resonance frequencies of a closed-open pipe
of length $L$ is given by:[^3]
$$f_n = \frac{(2n-1)c}{4L}$$, where $n=1,2,...$
For example, for a bore of length 14.8 cm, the first 5 resonances are
0.581, 1.7432, 2.9054, 4.0676 and 5.2297 kHz, respectively. This
calculation is based on an ideal cylindrical pipe. For a real clarinet,
however, the resonance frequencies are determined not only by the length
of the bore, but also by the shape of the bore (which is not a perfect
cylinder) and by the fingering of the tone holes. Also, due to the end
correction effects caused
by the radiation impedance at the open end, the effective length of an
unflanged open pipe is $L_{eff}=L+0.6a$, where $a$ is the pipe
radius;[^4] hence the fundamental frequency and the harmonic series are
lowered a bit.
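The harmonic series and the effect of the end correction are easy to check numerically; the following sketch simply evaluates the two formulas above for the 14.8 cm bore (the radius value is taken from the impedance example later in this chapter).

```python
import numpy as np

c = 344.0        # speed of sound [m/s]
L = 0.148        # bore length [m]
a = 0.00775      # bore radius [m]

n = np.arange(1, 6)
f_ideal = (2 * n - 1) * c / (4 * L)        # ideal closed-open pipe
L_eff = L + 0.6 * a                        # unflanged-end correction
f_corr = (2 * n - 1) * c / (4 * L_eff)     # slightly lowered series

for ni, fi, fc in zip(n, f_ideal, f_corr):
    print(f"resonance {ni} (harmonic {2*ni-1}): "
          f"ideal {fi:7.1f} Hz, corrected {fc:7.1f} Hz")
```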
### Tone Holes
The role of tone holes of clarinets can be viewed from two aspects.
Firstly, the open tone holes change the effective length of the main
bore and hence the resonance frequencies of the enclosed air column.
Each discrete note produced by a clarinet is determined by a specific
fingering, i.e., a particular configuration of open and closed tone
holes. By using advanced playing techniques, a player can perform pitch
bending (a continuous variation of pitch from one note to the next).
These techniques include partially covering a tone hole (for limited
pitch bending of notes from G3/175 Hz to G4/349 Hz and above D5/523 Hz)
and using the vocal tract (for substantial pitch bending above
D5/523 Hz).[^5] The beginning bars of Gershwin's *Rhapsody in Blue*[^6]
provide a famous example of a large pitch bend over a range of up to
2.5 octaves.
Secondly, sound radiates from both the open holes and the bell end. This
gives clarinets (and other woodwind instruments) directivity patterns
different from those of another family of wind instruments, the brass
instruments, which have a similar open bell end but no side holes.
We will see later how to calculate the acoustic impedance as modified by
the tone holes.
### Bell
The flaring bell of a clarinet is less important than that of a brass
instrument, because open tone holes contribute to sound radiation in
addition to the bell end. The main function of the bell is *to form a smooth
impedance transition from the interior of the bore to the surrounding
air*.[^7] A clarinet is still functional for most notes even without a
bell.
### Register Holes
The main purpose of a register hole is to disrupt the fundamental while
preserving the higher harmonics as much as possible, so that the
frequency of the note is tripled when the register hole is opened.
## Wave Propagation in the Bore
### Wave Equation
The propagation of sound waves inside the main bore of a clarinet is
described by the one-dimensional wave equation:
```{=html}
<center>
```
$\frac{1}{c^2}\frac{\partial^2 P(x,t)}{\partial t^2}=\frac{\partial^2 P(x,t)}{\partial x^2},$
```{=html}
</center>
```
where $x$ is the axis along the propagation direction.
The complex solution for the sound pressure wave $P(x,t)$ is:
```{=html}
<center>
```
$P(x,t)=(Ae^{-jkx}+Be^{jkx})e^{j\omega t},$
```{=html}
</center>
```
where $k=\omega /c$ is the wave number, $\omega=2\pi f$, and A and B are
the complex amplitudes of the right- and left-going traveling pressure
waves, respectively.
Another interesting physical parameter is the volume velocity $U(x,t)$,
defined as particle velocity $V(x,t)$ times cross-sectional area $s$.
The complex solution for the volume velocity $U(x,t)$ is given by:
```{=html}
<center>
```
$U(x,t)=\frac{s}{\rho c}(Ae^{-jkx}-Be^{jkx})e^{j\omega t},$
```{=html}
</center>
```
### Acoustic Impedance
![Figure 6: The input impedance of an ideal lossless cylindrical pipe](InputImpedanceLoseless.png)
The acoustic input impedance $Z_{in}(j\omega)$ provides very useful
information about the acoustic behavior of a clarinet in the frequency
domain. The intonation and response can be inferred from the input
impedance, e.g., sharper and stronger peaks indicate frequencies that
are easiest to play.
The input impedance (in the frequency domain) is defined as the ratio of
pressure to volume flow at the input end (x=0) of the pipe:[^8]
```{=html}
<center>
```
$Z_{in}(j\omega) = \frac{P(j\omega)|_{x=0}}{U(j\omega)|_{x=0}} = Z_c\frac{Z_L cos(kL) + jZ_c sin(kL)}{jZ_L sin(kL) + Z_c cos(kL)},$
```{=html}
</center>
```
where $Z_L$ is the load impedance at the open end of the clarinet\'s
bore, and $Z_c=\rho c/s$ is the characteristic impedance.
At this point, if we want a "quick glance" at the input impedance of a
clarinet bore, we may neglect the radiation losses and assume zero load
impedance at the open end of the main bore to simplify the problem. We
may also neglect the sound absorption due to wall losses. With these
simplifications, we can calculate the theoretical input impedance of a
cylindrical pipe of length $L=0.148$ meters and radius $r = 0.00775$
meters in Matlab, which is shown in Figure 6.
### Radiation Impedance
![Figure 7a: Reflectance magnitude and length correction for open end pipes](Reflectance.png)
![Figure 7b: Normalized amplitude of radiation impedance for open end pipes](Radiation_impedance_chart.png)
The load impedance at the open end of the cylindrical bore is
represented by the radiation impedance $Z_r$. We assumed $Z_r=0$
previously when we discussed the input impedance of an ideal cylindrical
pipe. Although it is very small, the radiation impedance of a real
clarinet is obviously not zero; and not only the open end of the main
bore but each open tone hole features its own radiation impedance as
well.
It is not easy to measure the radiation impedance directly. However, we
can obtain the radiation impedance of a pipe from its input impedance
$Z_{in}$ by:[^9] $Z_r = jZ_c \tan[\arctan(Z_{in}/(jZ_c))-kL]$, where $L$
is the length of the pipe and $a$ is its radius.
Alternatively, we can also calculate the radiation impedance from the
reflection coefficient $R$ at the open end using the relation:
```{=html}
<center>
```
$Z_r = \frac{\rho c}{S} \frac{1+R}{1-R}$
```{=html}
</center>
```
where $\rho$ is the air density, $c$ is the sound speed and $S$ is the
cross-sectional area of the pipe.
Levine and Schwinger[^10] give the theoretical value of $R$ for a tube
with a finite wall thickness, where $R$ is expressed in terms of its
modulus $|R|$ and the length correction $l(\omega)$ as
$R = -|R|e^{-2jkl}$. The original equations derived by Levine and
Schwinger are rather complicated. To make life easier, as shown in
Figure 7a, $|R|$ and the length correction can be approximated by the
rational equations given by Norris and Sheng (1989).[^11] The resulting
radiation impedance is shown in Figure 7b.
```{=html}
<center>
```
$|R|=\frac{1+0.2ka-0.084(ka)^2}{1+0.2ka+0.416(ka)^2}$
```{=html}
</center>
```
```{=html}
<center>
```
$l/a=\frac{0.6133+0.027(ka)^2}{1+0.19(ka)^2}$
```{=html}
</center>
```
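A small sketch of these two fits and the resulting radiation impedance (using the same illustrative bore radius as above) might look like this:

```python
import numpy as np

c, rho = 344.0, 1.2
a = 0.00775              # pipe radius [m]
S = np.pi * a**2

def radiation_impedance(f):
    """Z_r of an unflanged open pipe end via the Norris-Sheng (1989) fits."""
    k = 2 * np.pi * f / c
    ka = k * a
    Rmag = (1 + 0.2*ka - 0.084*ka**2) / (1 + 0.2*ka + 0.416*ka**2)
    l = a * (0.6133 + 0.027*ka**2) / (1 + 0.19*ka**2)   # length correction
    R = -Rmag * np.exp(-2j * k * l)                     # reflection coefficient
    return (rho * c / S) * (1 + R) / (1 - R)

for f in (250.0, 1000.0, 4000.0):
    Zr = radiation_impedance(f)
    print(f"f = {f:6.0f} Hz: Z_r = {Zr.real:.3e} + {Zr.imag:.3e}j [Pa*s/m^3]")
```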
### Transmission Matrices
#### Bore section
Since the acoustic impedance is so important for the quality and
character of a clarinet, we are interested in knowing the acoustic
impedance at any point along the main bore. This problem can be solved
by the transmission matrix method. We will see that the effects of tone
holes can also be incorporated into the acoustic impedance network of
the instrument by introducing extra series and shunt impedances.
The entire bore can be seen as a cascade of cylindrical sections in
series, each section with an input end and an output end, as shown in
the figure below:
![Figure 8: A bore section](BoreTM.svg)
The pressure and volume velocity at the input end and that at the output
end are related by the associated transmission matrix:
```{=html}
<center>
```
$\begin{bmatrix} P_1 \\ U_1 \end{bmatrix}$=$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$\begin{bmatrix} P_2 \\ U_2 \end{bmatrix}$
where
$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$=$\begin{bmatrix} cos (k L) & j Z_c sin (k L) \\ \frac{j}{Z_c} sin(kL) & cos(kL) \end{bmatrix}$
```{=html}
</center>
```
Thus, $Z_1$ is related to $Z_2$ by: $Z_1=\frac{b+aZ_2}{d+cZ_2}$. Given
the input impedance or the load impedance of a cylindrical pipe, we can
calculate the acoustic impedance at any position of the pipe along the
propagation axis.
#### Tonehole section
Now we deal with the tone holes. The influence of an open or closed tone
hole can be represented by a network of shunt and series impedances, as
shown in Figure 9.
![Figure 9: Shunt and series impedances of the tone hole](Toneholenetwork.svg)
![Figure 10: Combining the tone hole network with the bore network](Tonehole2.svg)
The shunt impedance $Z_s$ and series impedance $Z_a$ of a tone hole of
radius $b$ in a main bore of radius $a$ are given by:[^12]
```{=html}
<center>
```
$Z_{sc}=(\rho c/\pi a^2) (a/b)^2 (-jcotkt),$
```{=html}
</center>
```
```{=html}
<center>
```
$Z_{so}=(\rho c/\pi a^2) (a/b)^2 (jkt_e),$
```{=html}
</center>
```
```{=html}
<center>
```
$Z_{a}=(\rho c/\pi a^2) (a/b)^2 (-jkt_a),$
```{=html}
</center>
```
where $Z_{sc}$ is the shunt impedance of a closed tone hole, $Z_{so}$ is
the shunt impedance of an open tone hole, $Z_{a}$ is the series
impedance of either a closed or an open tone hole, and the values of
$t_e$ and $t_a$ are related to the geometric chimney height of the hole.
The network of a tone hole can be inserted into the bore section as a
zero-length section, as shown in Figure 10, where $Z_{r}$ and $Z_{rt}$
are the radiation impedances of the bore and of the open tone hole,
respectively. In the low-frequency approximation, a tone hole can be
viewed as a short cylindrical bore, and its radiation impedance can be
calculated in a similar way. The input acoustic impedance of the
combination $Z_{in}=P_{in}/U_{in}$ can be calculated from the entire
network.
#### Wall losses
We assumed a perfectly rigid, smooth and thermally insulated wall in the
previous discussions. The bore of a real clarinet is of course not that
ideal, so the losses due to viscous drag and thermal exchange must be
taken into account. The full physical detail of thermal-viscous losses
is complex and tedious and is beyond the scope of this article.
Fortunately, we don't have to go into that detail if we are only
concerned with the final effects: we simply replace the wave number
($k=\omega /c$) in the transmission matrix coefficients with its complex
counterpart, and our new transmission matrices take care of the wall
loss effects automatically. This complex version of the wave number is
given by:[^13]
```{=html}
<center>
```
$K=\omega /\nu - j\alpha.$
```{=html}
</center>
```
We notice two interesting differences here. First, the sound speed $c$
is replaced by the "phase velocity" $\nu$, which is not a constant but a
function of frequency and of the pipe radius. Second, there is an
imaginary term $j\alpha$, where $\alpha$ is the attenuation coefficient
per unit length of path, also a function of frequency and pipe radius.
Both the phase velocity and the attenuation coefficient depend on
environmental parameters such as temperature, air viscosity, and the
specific heat and thermal conductivity of air.
The fact that the phase velocity and the attenuation coefficient are
frequency dependent suggests that not only the amplitude, but also the
phase of the acoustic impedance is affected by wall losses. In other
words, not only the loudness but also the tonality of the instrument is
affected by the wall losses. This implies that if, when designing a
clarinet, we calculate the input impedance assuming an ideal cylindrical
pipe with perfectly rigid and smooth walls, the tone of the instrument
will be problematic! Using a complex wave number related to the physical
properties of the materials will improve the design, or at least narrow
the gap between the theoretical prediction and the real result.
The fact that the complex wave number is also influenced by
environmental parameters suggests that the tonality of a woodwind
instrument may change with environmental factors, say, the room
temperature.
It would be interesting to compare the dissipated power due to the
thermal-viscous losses with the radiated power over the clarinet bore.
The comparison over the range from 0 Hz to 2000 Hz was simulated in
Matlab, where the length of the pipe is 0.148 m and the radius is
0.00775 m, with the properties of air taken at a temperature of 20
degrees Celsius. We found that the dissipated power is much larger than
the radiated power at most frequencies, except in small regions around
the resonance frequencies.
![Figure 11: Dissipated power vs. radiated power](Prad.png)
## Other Useful Links
- Clarinet Acoustics by
UNSW: almost
everything about clarinet acoustics and general musical acoustics.
This excellent online knowledge base is maintained by The University
of New South Wales.
- NICTA-UNSW Clarinet
Robot: online
video shows a robot that can play clarinet and help people to better
understand clarinet playing.
- Clarinet Acoustics: An
Introduction.
- Basic Clarinet
Acoustics: Yet another
online article about basic clarinet acoustics.
- What is Acoustic Impedance and why is it
important? Explains acoustic impedance in an intuitive way.
- Physical modeling of woodwind
instruments.
The physical behavior of a clarinet can be modeled by digital
waveguide - a highly efficient time-domain modeling technique.
Digital waveguide is used in Yamaha\'s virtual instruments, such as
VL70m.
See J.O.Smith\'s online book Physical Audio Signal
Processing for more detail
about physical modeling technique based on digital waveguide.
- Physical modeling clarinet: an electronic implementation of another
  physical modeling clarinet.
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]: George Gershwin: Rhapsody in Blue - Fantasia 2000
[^7]:
[^8]:
[^9]:
[^10]:
[^11]:
[^12]:
[^13]:
# Engineering Acoustics/Acoustic Guitars
I plan to discuss the workings of an acoustic guitar and how the topics
that we have studied apply. This will largely involve vibrations of
strings and vibrations of cavities.
## Introduction
The acoustic guitar is one of the most well known musical instruments.
Although precise dates are not known, the acoustic guitar is generally
thought to have originated sometime during the Renaissance in the form
of a lute, a smaller fretless form of what is known today. After
evolving over the course of about 500 years, the guitar today consists
of a few major components: the strings and neck, the bridge, soundboard,
head, and internal cavity.
## Strings, Neck, and Head
The strings are what actually create vibration on the guitar. On a
standard acoustic, there are six strings, each with a different constant
linear density. Strings run along the length of the neck, and are wound
around adjustable tuning pegs located on the head. These tuning pegs can
be turned to adjust the tension in the string. This allows a
modification of the wave speed, governed by the equation
$c^2=T/\rho$
where $c$ is the wave speed [m/s], $T$ is the tension [N], and $\rho$ is
the linear density [kg/m]. The string is assumed to be fixed at the head
(x=0) and mass loaded at the bridge (x=L).
To determine the vibrating frequency of an open string, a general
harmonic solution (GHS) is assumed,
$y(x,t)=Ae^{j(\omega t-kx)}+Be^{j(\omega t+kx)}$
To solve for the coefficients A and B, boundary conditions at x=0 and
x=L are evaluated. At x=0, the string displacement must be zero at all
times because that end is fixed. Applying this condition to the GHS
produces
$y(x,t)=-2jA\sin(kx)e^{j\omega t}$
Alternatively, at the bridge (i.e., the mass load at x=L), the bridge
and soundboard (along with any other piece that may vibrate) are assumed
to form a lumped element of mass m. The overall goal with this boundary
condition is to determine the velocity of the mass. From Newton's second
law (F=ma), the only force involved is the tension force in the string.
The y-component of this force divided by the mass m equals the
acceleration. Knowing that acceleration equals velocity times $j\omega$
($a=j\omega u$),
$u(L,t)=-\frac{T}{j\omega m}\frac{\partial y}{\partial x}$
evaluated at x=L. Combining the two boundary equations and simplifying,
a final equation can be obtained:
$\cot(kL)=\frac{m}{m_s}kL$
where $k$ is the wavenumber ($\omega/c$), $L$ is the string length, $m$
is the lumped mass of the guitar body, $m_s$ is the total mass of the
string (linear density times length), $\omega$ is the frequency, and $c$
is the wave speed. If the ratio $m/m_s$ is large (which in a guitar's
case it is), the resonances are given approximately by $kL=n\pi$.
Simplified, the fundamental frequency is
$f=\frac{\sqrt{T/\rho}}{2L}$
Therefore, to adjust the resonance frequency of the string, either change
the tension (turn the tuning knob), change the linear density (play a
different string), or adjust the length (use the fretboard).
To determine the locations of the frets, musical notes must be
considered. In the musical world, it is common practice to use a
tempered scale. In this scale, an A note is set at 440 Hz. To get the
next note in the scale, multiply that frequency by the 12th root of 2
(approximately 1.059), and an A-sharp is produced. Multiply by the same
factor for the next note, and so on. With this in mind, to increase f by
a factor of 1.059, the vibrating length must be shortened by the
corresponding factor: each fret is placed 1/17.817 of the remaining
string length from the previous fret (or from the nut, for the first
fret). For example, consider an open A string vibrating at 440 Hz. For a
26 inch string, the position of the first fret is 26/17.817 = 1.459
inches from the head. The second fret will be (26 - 1.459)/17.817 inches
from the first, and so on. A sketch of this calculation follows.
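Here is a minimal sketch of both calculations, the string fundamental and the fret spacing; the tension, linear density, and scale length are illustrative assumptions, not measured values.

```python
import math

T = 70.0       # string tension [N] (assumed)
rho = 5e-3     # linear density [kg/m] (assumed)
L = 26.0       # scale length [in]

f0 = math.sqrt(T / rho) / (2 * (L * 0.0254))   # fundamental, L converted to m
print(f"open-string fundamental: {f0:.1f} Hz")

# Each fret takes 1/17.817 of the remaining length; pitch rises by 2^(1/12)
remaining = L
position = 0.0
for n in range(1, 6):
    step = remaining / 17.817
    position += step
    remaining -= step
    print(f"fret {n}: {position:6.3f} in from the nut, "
          f"frequency = {f0 * L / remaining:6.1f} Hz")
```

Note that the remaining length equals L * 2^(-n/12) after n frets, so the two descriptions (the 1/17.817 rule and the 12th root of 2) agree.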
## Bridge
The bridge is the connection point between the strings and the
soundboard. The vibration of the string moves the assumed mass load of
the bridge, which vibrates the soundboard, described next.
## Soundboard
The soundboard increases the surface area of vibration, increasing the
initial intensity of the note, and is assisted by the internal cavity.
## Internal Cavity
The internal cavity acts as a Helmholtz resonator and helps to amplify
the sound. As the soundboard vibrates, the sound wave resonates inside
the cavity.
# Engineering Acoustics/Room Acoustics and Concert Halls
# Room Acoustics and Concert Halls
## Introduction
From performing in many different rooms and on stages all over the
United States, I thought it would be nice to have a better understanding
of, and a source about, room acoustics. This Wikibook page is intended
to help the user with basic technical questions and answers about room
acoustics. The main topics covered are what really makes a room sound
*good* or *bad*, *alive* or *dead*. This leads into absorption and
transmission coefficients, the decay of sound in the room, and
reverberation. The use of different materials in rooms will also be
mentioned. There is no intention of taking work from another; this page
is a switchboard source to help the user find information about room
acoustics.
## Sound Fields
Two types of sound fields are involved in room acoustics: Direct Sound
and Reverberant Sound.
### Direct Sound
The component of the sound field in a room that involves only a direct
path between the source and the receiver, before any reflections off
walls and other surfaces.
### Reverberant Sound
The component of the sound field in a room that involves the direct path
and the paths after reflection off walls or any other surfaces. How the
waves reflect off these surfaces depends on the absorption and
transmission coefficients.
Good example pictures are shown at Crutchfield
Advisor,
a Physics Site from MTSU,
and Voiceteacher.com
## Room Coefficients
In a perfect world, if a sound is shot straight at a wall, it should
come right back. But because sound hits walls made of different types of
materials, the reflection is not perfect. From [1], these effects are
explained as follows:
### Absorption & Transmission Coefficients
The best way to explain how sound reacts to different mediums is through
acoustical energy. When sound impacts on a wall, acoustical energy will
be reflected, absorbed, or transmitted through the wall.
![](Ad_Rev.jpg "Ad_Rev.jpg")
------------------------------------------------------------------------
Absorption Coefficient:
![](Absorption_Coefficient.jpg "Absorption_Coefficient.jpg") NB: this
chemical structure is unrelated to the acoustics being discussed.
------------------------------------------------------------------------
Transmission Coefficient:
![](Transmission_Coefficient.jpg "Transmission_Coefficient.jpg")
------------------------------------------------------------------------
If all of the acoustic energy hits the wall and none is reflected, alpha
equals 1: the energy is entirely absorbed or transmitted, with zero
reflection. This is an example of a *dead* or *soft* wall, because it
takes in everything and doesn't reflect anything back. Rooms like this
are called Anechoic Rooms, which look like this example from
Axiomaudio.
If all of the acoustic energy hits the wall and all of it reflects back,
alpha equals 0. This is an example of a *live* or *hard* wall, because
the sound bounces right back and does not go through the wall. Rooms
like this are called Reverberant Rooms, like this
McIntosh room. Note how the
walls have nothing attached to them, leaving more surface for the sound
waves to bounce off.
### Room Averaged Sound Absorption Coefficient
Not all rooms have the same walls on all sides. The room averaged sound
absorption coefficient allows different types of materials and different
wall areas to be averaged together.
------------------------------------------------------------------------
RASAC: ![](Ralpha.jpg "Ralpha.jpg")
![](Sequals.jpg "Sequals.jpg")
------------------------------------------------------------------------
### Absorption Coefficients for Specific Materials
Basic sound absorption coefficients are shown at Acoustical
Surfaces.
Brick, unglazed, painted: alpha ~ 0.01 - 0.03 -> sound reflects back
An open door: alpha = 1 -> sound goes through
Total absorption (surface area times alpha) is measured in
Sabins.
## Sound Decay and Reverberation Time
In a large reverberant room, sound can still propagate after the sound
source has been turned off. The time it takes for the sound intensity
level to decay by 60 dB is called the reverberation time of the room.
------------------------------------------------------------------------
![](Revtime.jpg "Revtime.jpg") ![](Reverberation_Time.jpg "Reverberation_Time.jpg")
------------------------------------------------------------------------
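The decay relation in the figures above is commonly evaluated with the classic Sabine formula, $T_{60} = 0.161\,V/(S\bar{\alpha})$; here is a small sketch under that assumption (the room dimensions and coefficients are hypothetical, chosen from the ranges mentioned on this page).

```python
def sabine_t60(volume, surfaces):
    """Sabine estimate: T60 = 0.161 * V / A, where A = sum(S_i * alpha_i)
    is the total absorption in metric sabins. volume in m^3, areas in m^2."""
    A = sum(S * alpha for S, alpha in surfaces)
    return 0.161 * volume / A

# Hypothetical 10 m x 8 m x 4 m room
room = [
    (2 * (10 + 8) * 4, 0.02),   # walls: unglazed painted brick, alpha ~ 0.02
    (10 * 8,           0.30),   # floor: carpet (assumed alpha)
    (10 * 8,           0.60),   # ceiling: acoustic tile (assumed alpha)
]
print(f"T60 = {sabine_t60(10 * 8 * 4, room):.2f} s")
```

A small total absorption (hard, *live* walls) gives a long reverberation time; adding absorptive material shortens it.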
Great Reverberation
Source
## Great Halls in the World
Foellinger Great
Hall
Japan
Budapest
Carnegie Hall in New
York
Carnegie
Hall
Pick Staiger at Northwestern
U
------------------------------------------------------------------------
Concert Hall
Acoustics
------------------------------------------------------------------------
## References
\[1\] Lord, Gatley, Evensen; *Noise Control for Engineers*, Krieger
Publishing, 435 pgs
Created by Kevin Baldwin
# Engineering Acoustics/Basic Room Acoustic Treatments
## Introduction
Many people use one or two rooms in their living space as "theatrical"
rooms where theater or music room activities commence. It is a common
misconception that adding speakers to the room will enhance the quality
of the room acoustics. There are other simple things that can be done to
improve the acoustics of the room and produce sound similar to "theater"
sound. This page will take you through some simple background knowledge
on acoustics and then explain some solutions that will help improve the
sound quality in a room.
## Room Sound Combinations
The sound you hear in a room is a combination of direct sound and
indirect sound. Direct sound will come directly from your speakers while
the other sound you hear is reflected off of various objects in the
room.
![](sound_lady.jpg "sound_lady.jpg")
The direct sound comes straight from the TV to the listener, as shown by
the heavy black arrow. All of the other sound is reflected off surfaces
before it reaches the listener.
## Good and Bad Reflected Sound
Have you ever listened to speakers outside? You might have noticed that
the sound is thin and dull. This is because outdoors there is little to
reflect the sound: when sound is reflected, it is fuller and louder than
it would be in an open space. So reflected sound can add fullness, or
spaciousness. The bad side of reflected sound occurs when the
reflections amplify some notes while cancelling out others, changing the
character of the sound. It can also affect tonal quality and create an
echo-like effect. There are three ways a surface can treat sound: pure
reflection, absorption, and diffusion. Each is important in creating a
"theater"-type acoustic room.
![](sound.jpg "sound.jpg")
### Reflected Sound
Reflected sound waves, good and bad, affect the sound you hear, where it
comes from, and the quality of the sound when it gets to you. The bad
news when it comes to reflected sound is standing waves. These waves are
created when sound is reflected back and forth between any two parallel
surfaces in your room, ceiling and floor or wall to wall. Standing waves
can distort sounds at about 300 Hz and below, which includes the lower
mid-frequency and bass ranges. Standing waves tend to collect near the
walls and in the corners of a room; these collected standing waves are
called room resonance modes.
#### Finding your room resonance modes
First, specify room dimensions (length, width, and height). **Then
follow this example:**
![](equationandexample.jpg "equationandexample.jpg")![](Resmodepic.jpg "Resmodepic.jpg")![](exampletable.jpg "exampletable.jpg")
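If the example image is unavailable, the resonance mode frequencies of a rectangular room follow the standard relation $f = \frac{c}{2}\sqrt{(n_x/L)^2+(n_y/W)^2+(n_z/H)^2}$; the sketch below evaluates it for an assumed room size.

```python
import itertools
import math

c = 344.0                    # speed of sound [m/s]
Lx, Ly, Lz = 6.0, 4.5, 2.8   # room length, width, height [m] (assumed)

modes = []
for nx, ny, nz in itertools.product(range(4), repeat=3):
    if (nx, ny, nz) == (0, 0, 0):
        continue             # skip the trivial (0,0,0) combination
    f = (c / 2) * math.sqrt((nx/Lx)**2 + (ny/Ly)**2 + (nz/Lz)**2)
    modes.append((f, (nx, ny, nz)))

for f, n in sorted(modes)[:8]:       # the lowest eight resonance modes
    print(f"{f:6.1f} Hz  mode {n}")
```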
#### Working with room resonance modes to increase sound quality
##### 1. There are some room dimensions that produce the largest amount of standing waves.
a\. Cube
b\. Room with 2 out of the three dimensions equal
c\. Rooms with dimensions that are multiples of each other
##### 2. Move the chair or sofa away from the walls or corners to reduce standing wave effects
### Absorbed Sound
The sound that humans hear is actually a form of acoustic energy.
Different materials absorb different amounts of this energy at different
frequencies. When considering room acoustics, there should be a good mix
of high frequency absorbing materials and low frequency absorbing
materials. A table including information on how different common
household surfaces absorb sound can be found on the website
<http://www.crutchfieldadvisor.com/learningcenter/home/speakers_roomacoustics.html?page=2#materials_table>
### Diffused Sound
Using devices that diffuse sound is a fairly new way of increasing the
acoustic performance of a room. It is a means of creating sound that
appears to be "live". Diffusors can replace echo-like reflections
without absorbing too much sound.
Some ways of determining where diffusive items should be placed were
found on
<http://www.crutchfieldadvisor.com/S-hpU9sw2hgbG/learningcenter/home/speakers_roomacoustics.html?page=4>:
1.) If you have carpet or drapes already in your room, use diffusion to
control side wall reflections.
2.) A bookcase filled with odd-sized books makes an effective diffusor.
3.) Use absorptive material on room surfaces between your listening
position and your front speakers, and treat the back wall with diffusive
material to re-distribute the reflections.
## How to Find Overall Trouble Spots In a Room
Every surface in a room does not have to be treated in order to have
good room acoustics. Here is a simple method of finding trouble spots in
a room.
1.) Grab a friend to hold a mirror along the wall near a certain speaker
at speaker height.
2.) The listener sits in a spot of normal viewing.
3.) The friend then moves slowly toward the listening position (stay
along the wall).
4.) Mark each spot on the wall where the listener can see any of the
room speakers in the mirror.
5.) Congratulations! These are the trouble spots in the room that need
an absorptive material in place. Don\'t forget that diffusive material
can also be placed in those positions.
## References
<http://www.ecoustics.com/Home/Accessories/Acoustic_Room_Treatments/Acoustic_Room_Treatment_Articles/>
<http://www.audioholics.com/techtips/roomacoustics/roomacoustictreatments.php>
<http://www.diynetwork.com/diy/hi_family_room/article/0,2037,DIY_13912_3471072,00.html>
<http://www.crutchfieldadvisor.com/S-hpU9sw2hgbG/learningcenter/home/speakers_roomacoustics.html?page=1>
# Engineering Acoustics/Electro Acoustic Installations in Room
As the development of new and sophisticated technology is advancing at
an exponential rate, it has become quite evident that the study of room
acoustics would be rather incomplete without the analysis of the effect
of electroacoustic devices. Whether the Canadian Prime Minister is
addressing the members of parliament, a university professor is giving a
lecture in a large auditorium, the CEO of one of the largest tech
companies holds a press conference to unveil their company's technology,
or thousands of heavy metal fans gather to see Metallica perform live,
loudspeakers and microphones are frequently used as a means of speech
amplification. This addition to the Engineering Acoustics Wikibook will
aim to aid the reader in strategic placement of electroacoustic devices
in any room. It will also serve as an extension to the previous Wikibook
entry, entitled Room Acoustics and Concert Halls.
# Loudspeaker Directivity
Although a common sound amplification setup consists of a microphone, an
amplifier and a loudspeaker, the loudspeaker is perhaps the most crucial
component of the arrangement. Loudspeakers must be designed to withstand
high power while not causing any distortion in the radiated sound.
[Kuttruff]
First, a description of the radiation characteristics of a loudspeaker
is of interest.
(Figure: motion of a piston creating sound waves at an angle theta from the horizontal.)
When waves having high frequencies begin propagating due to the motion
of the piston, the radius of the piston can no longer be considered
small relative to the wavelength of the radiated sound. Thus, a
significant phase difference occurs between the radiated wave motion and
the motion of the piston. This leads to various degrees of interference
between both elements, which may lead to total cancellation of all
sound. Thus, it is useful to define a directivity function as follows:
$\delta(\theta)=\frac{2J_1(kasin(\theta))}{kasin(\theta)}$
`J1 is the first order Bessel Function`\
`θ is the angle of radiation`\
`k is the wave number, which is the ratio of angular frequency to the speed of sound `\
`a is the radius of the piston`\
`ka is referred to as the Helmholtz number`
The Helmholtz number should be kept low, as this leads to a uniform
sound directivity. A large Helmholtz number results in very directive
sound radiation, as shown in the figures below.
Next, let us consider the horn loudspeaker, which provides a more
practical model of the loudspeaker. The main advantage of the horn model
is that it provides a broader directivity than the narrow scope provided
by the piston loudspeaker. It is often desirable to combine several horn
loudspeakers. The horn loudspeaker increases radiation resistance. Since
the directional characteristics of horn loudspeakers depend on both the
size and shape of the opening, as well as on the entire shape of the
horn, an expression for the directivity becomes complex.
Multiple loudspeakers will often be found in a room. Each loudspeaker
may be modeled as a point source. A directivity function for the
ensemble may be defined similarly to that of a single piston loudspeaker.
$\delta_a (\theta)=\frac{sin(0.5*Nkdsin(\theta))}{Nsin(0.5kdsin(\theta))}$
Here, the angle of radiation is measured from the normal direction of
the array, which contains N point sources spaced along a straight line
at an equal distance, d, from each other.
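Both directivity functions are straightforward to evaluate; the sketch below does so at a few angles, with illustrative values for the piston radius, the array spacing, and the frequencies.

```python
import numpy as np
from scipy.special import j1

c = 344.0   # speed of sound [m/s]

def piston_directivity(theta, f, a):
    """delta(theta) = 2*J1(ka*sin(theta)) / (ka*sin(theta)) for a piston."""
    x = (2 * np.pi * f / c) * a * np.sin(theta)
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)   # avoid 0/0 on axis
    return 2 * j1(x) / x

def array_directivity(theta, f, N, d):
    """delta_a = sin(0.5*N*k*d*sin(theta)) / (N*sin(0.5*k*d*sin(theta)))."""
    x = 0.5 * (2 * np.pi * f / c) * d * np.sin(theta)
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)
    return np.sin(N * x) / (N * np.sin(x))

theta = np.radians([0.0, 15.0, 30.0, 45.0, 60.0])
print(np.round(piston_directivity(theta, 2000.0, 0.1), 3))   # ka ~ 3.7
print(np.round(array_directivity(theta, 1000.0, 4, 0.3), 3))
```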
# Acoustical Feedback
If both the loudspeakers and the microphone of an electroacoustic setup
are in the same room, the microphone will pick up sound propagating from
the loudspeakers as well as from the original source. This occurrence
leads to a phenomenon known as acoustical feedback [1]. This phenomenon
often leads to a disruption of the entire electroacoustic setup, and one
may hear loud howling or whistling sounds. As already discussed,
loudspeaker positioning plays a vital role in the minimization of
acoustic feedback. In order to effectively analyze the effects of
acoustical feedback in a room, it is wise to represent the propagation
of sound waves from a source such as a loudspeaker by means of a block
diagram.
(Figure: block diagram representation of acoustical feedback in a room, for sound radiating from a source.)
$\frac{Y(\omega)}{S(\omega)}=O(\omega)=\frac{KG'(\omega)}{1-KG(\omega)}$
$Y(\omega)$` is the complex amplitude spectrum of the signal at the listener's seat`\
$S(\omega)$` is the sound input (into a microphone)`\
$K$` is the amplifier gain`\
$G'(\omega)$` is a complex transfer function representing the path by which the sound will reach the listener `\
$G(\omega)$` is the transmission function representing the path taken by sound radiating from the loudspeaker back into the microphone`
The closed-loop transfer function above shows how acoustical feedback is
caused by the original source signal passing continuously through the
loop, ending up back at the microphone. The term $KG(\omega)$ in the
denominator of the transfer function represents the open-loop gain of
the system. The magnitude of this quantity greatly impacts the amplitude
of the signal delivered to the listener, $Y(\omega)$. This output
represents the sound waves that actually propagate toward the listener,
while the rest enter the feedback loop, where they make their way back
to the microphone.
(Figure: open-loop block diagram of sound travelling through a room; this diagram idealizes the actual closed-loop system by neglecting the presence of any acoustical feedback.)
By observing the closed-loop transfer function $O(\omega)$, one can see
that the poles of the function, for which the denominator vanishes, are
of immediate interest. If the open-loop gain equals unity, the
closed-loop transfer function tends to infinity and the entire system
becomes unstable. Similarly, even if the open-loop gain is less than
unity, the value of the closed-loop transfer function becomes
increasingly large as $KG(\omega)$ tends to one. At such large values of
the transfer function, the sound heard by the listener will be
distorted. This effect is most evident when an impulsive signal is
applied at the source, in which case ringing effects are heard as the
sound waves reach the listener. It is of the highest importance to allow
the system to operate within an appropriate range of amplifier gain
values. Through experimentation (see reference [1]), the amplifier gain
must be set in such a way that the following equation is satisfied:
$20\log_{10}(\frac{K}{K_o})<-12\ \mathrm{dB}$, where $K_o$ is the
critical amplifier gain value for which $O(\omega)$ grows without bound.
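That rule of thumb is easy to apply in code; this sketch simply checks the 12 dB gain margin for a few hypothetical gain settings.

```python
import math

def gain_margin_ok(K, K_critical, margin_db=-12.0):
    """Check the rule of thumb 20*log10(K/K_o) < -12 dB."""
    return 20 * math.log10(K / K_critical) < margin_db

K_o = 100.0                       # critical gain for instability (assumed)
for K in (10.0, 25.0, 50.0):
    print(f"K = {K:5.1f}: margin ok? {gain_margin_ok(K, K_o)}")
```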
Naturally, it is not in one's best interest to reduce the amplifier gain
to infinitesimally small values. This would simply render the amplifier
useless, and the radiated sound signal would be weak. A wiser strategy
for reducing the effects of acoustical feedback involves attempting to
reduce G(ω) while increasing G'(ω). Although this may prove to be
difficult, the meticulous selection of the loudspeaker directivity, with
the main lobe pointing towards the listeners while the microphone is
positioned away from the main lobes, in a position of weak radiation,
will greatly reduce undesirable feedback effects. Unidirectional
microphones such as cardioid microphones are commonly used, as they
effectively reject sounds propagating from directions other than that of
the source (i.e., the speaker). The stability of closed-loop feedback
systems may also be analyzed via such tools as root locus plots or the
Routh-Hurwitz criterion. In traditional control theory, lead or lag
compensators are frequently used to improve the stability of the system
[2].
# Loudspeaker Positioning
When deciding on an appropriate location for each loudspeaker in a room,
it is necessary to take into account certain factors. As seen in the
previous section, loudspeaker directivity can be manipulated depending
on the number of loudspeakers present and whether they are arranged
linearly in a group. Regardless, loudspeakers should be placed in such a
manner that uniform sound energy is supplied to all listeners in the
room. Also, it is necessary to establish adequate speech
intelligibility, which is measured in accordance with the speech
intelligibility index [3].
Loudspeaker locations must be chosen in such a way that the audience is
supplied with direct sound that is as uniform as possible. One way of
achieving this is by mounting the loudspeaker at a higher altitude than
the sound source (i.e., the microphone), which also limits feedback.
This also ensures that the direct sound will arrive at the listener with
a relatively uniform directivity in the horizontal plane. Usually, the
human ear is not sensitive to sound variations in the vertical plane,
thereby making the elevation of loudspeakers a wise choice.
It is also of interest to derive a quantity known as the radius of
reverberation, R~o~. Consider sound leaving the loudspeaker with
intensity I~o~. This intensity makes its way through the closed loop
system, arriving back at the microphone input. It is then re-amplified
by a gain A to a value of AI~o~ as it passes through the amplifier and
makes its way back to the source. The intensity near the loudspeaker
will then decay as a function of the radial distance, R, from the
loudspeaker according to 1/R^2^. The intensity will decay until a
distance of R~o~ is reached. This radius of reverberation is then
defined as:
$R_O=0.057\sqrt{\frac{V}{T}}$ where
`V is the volume of the room in m`^`3`^\
`T is the reverberation time in seconds`
## Loudspeaker Positioning for the Improvement of the Reverberation of a Room Using Direct Feedback
Since modern amphitheatres and stadiums serve as multipurpose venues,
controlling the reverberation time of the room to suit a specific event
is of importance. For example, Montreal, Quebec\'s Bell Centre serves as
the home of the Montreal Canadiens hockey team while also hosting a
multitude of live concerts and shows every year. The reverberation time
of such multipurpose venues can be modified with the aid of rotating or
removable walls or ceilings, or even thick curtains. Such movable items
can play a role in altering the room's absorption and reflection
properties. However, the implementation of such devices may prove to be
quite costly. Thus, one\'s attention should be turned to the strategic
placement of loudspeakers in order to increase the reverberation time of
a room. A popular method has been discussed in a 1971 paper by Guelke
and Broadhurst.
In order to increase the reverberation time in a room by means of direct
feedback, it should be noted that the total sound intensity in a room is
the sum of the intensity of the source along with contributions from the
pressure from each of the room modes. Both contributions will decay
exponentially according to the relation:
$I_\mathrm{Tot}=I_\mathrm{Room}e^\mathrm{-kt}+I_\mathrm{sys}e^\mathrm{-Lt}$
$I_\mathrm{Tot}$` is the total sound intensity radiated in the room`\
$I_\mathrm{sys}$` is the sound intensity prom the acoustical system comprising of a microphone, amplifier and loudspeaker`\
$I_\mathrm{Room}$` is the sound intensity due to the pressure modes of the room`\
`k is the room damping coefficient`\
`L is the system damping coefficient`\
$T_\mathrm{60}$` is the reverberation time or more specifically, `\
`the time it takes for the energy density or sound pressure level in the room to decay by 60 dB, once the source is abruptly shut off.`
Furthermore, the system damping coefficient as well as the system sound
intensity may be related to the room\'s damping coefficient and radiated
sound intensity by constants n and m respectively, such that:
$L=nk$
$I_\mathrm{sys}=mI_\mathrm{Room}$
Under initial conditions at time, t=0, the total sound intensity in the
room is:
$I_\mathrm{Tot}=I_\mathrm{room}+mI_\mathrm{room}$
Next, the reverberation time is defined to be the point where the sound
intensity reaches 10^−6^ of its original value. Thus, the above equation
becomes:
$10^\mathrm{-6}(1+m)=e^\mathrm{-kt}+me^\mathrm{-nkt}$
Solving for the reverberation time,$T_\mathrm{60}$, yields:
$T_\mathrm{60}=\frac{-1}{nk}log_{e}(\frac{10^\mathrm{-6}(1+m)}{m})$
If the reverberation time of an isolated room is defined as:
$\frac{log_{e}10^6}{k}=\frac{13.8}{k}$
then the total reverberation time, t, may be written as follows:
$t=\frac{t_\mathrm{room}f(m)}{n}$, where
$f(m)=\frac{1}{13.8}\log_{e}\!\left(\frac{m}{10^{-6}(1+m)}\right)$
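As a quick illustration of this relation, the sketch below evaluates the total reverberation time for assumed values of m and n; a system that decays more slowly than the room (n < 1) lengthens the overall reverberation.

```python
import math

def t_with_feedback(t_room, m, n):
    """t = t_room * f(m) / n, with f(m) = ln(m / (1e-6*(1+m))) / 13.8,
    following the Guelke and Broadhurst relation above."""
    f_m = math.log(m / (1e-6 * (1 + m))) / 13.8
    return t_room * f_m / n

# Assumed: room alone decays in 1.2 s; system intensity is half the room's
print(f"t = {t_with_feedback(1.2, m=0.5, n=0.5):.2f} s")
```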
At the microphone, the magnitude of the sound intensity I~o~ becomes
$\frac{AI_o}{R_o^2}$, so that A = R~o~^2^. Consequently, the larger the
radial distance, the larger the average sound intensity in the room. The
directivity factor of the loudspeakers may help to create a more diffuse
sound field; hence, the radial distance, R, is increased by an increase
in the directivity factor. It has also been shown that delay times
greater than 75 ms produce undesirable whining noises. It is unwise to
increase the reverberation time, T, of such a simple feedback system
beyond 1.5 s, as a system very near the point of instability has at
least one frequency with a very long reverberation time.
By taking a value of 75 ms for $\tau$ in the following equation, the
reverberation time, t, in open space is defined by the relation:
$t=\frac{60\tau}{20\log_{10}(A)}$, where $\tau = \frac{d}{c}$, c is the
speed of sound in air (344 m/s), and d is the distance between the
microphone and the loudspeaker.
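The short sketch below evaluates this relation for a 75 ms delay.
Treating A as the attenuation factor accumulated per loop delay is our
assumption for illustration:

```python
import math

# Evaluate t = 60*tau / (20*log10(A)) with tau = d / c.
# Treating A as the attenuation factor per loop delay is an
# illustrative assumption; d is chosen so that tau = 75 ms.

c = 344.0              # speed of sound in air, m/s
d = 25.8               # microphone-loudspeaker distance, metres
tau = d / c            # delay time, seconds (= 0.075 s)

A = 10 ** (3.0 / 20)   # factor corresponding to 3 dB loss per trip (assumed)
t = 60.0 * tau / (20.0 * math.log10(A))
print(f"tau = {tau*1000:.0f} ms, t = {t:.2f} s")
# -> t = 1.50 s, right at the stability limit quoted in the text.
```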
It is noteworthy to mention the work of M.R. Schroeder presented in his
1964 paper on \"Acoustic-Feedback Stability by Frequency Shifting\"
\[3\]. In essence, in order to avoid any ringing or howling noises in
the feedback loop, Schroeder suggested introducing a frequency shift of
4 Hz into the feedback loop. This was shown to greatly improve the
stability of the system. However, the method produces undesirable
sidebands. Because of this phenomenon, the method is unsuitable for
concert halls. To improve on Schroeder\'s method, Guelke and Broadhurst
suggest the introduction of phase modulation into the system in order to
achieve longer reverberation times and increased system stability.
By definition, if a linear, time-invariant (LTI) system, such as the one
described by the closed feedback loop of a microphone, amplifier and
loudspeaker, is unstable, radiated sound will increase exponentially and
without bound at certain frequencies. By the implementation of a phase
reversal switch, sounds at the initially unstable frequency will
stabilize. However, a new set of frequencies that were once stable will
begin to grow without bound. Nonetheless, such a consequence of the
utilization of phase reversal is minor as it may be accounted for by
ensuring that the switch is activated a couple of times a second. This
method allows for the reverberation time to be held constant while
ensuring that all strange ringing or howling noises vanish.
Such rapid triggering of the switch does induce another concern in the
continued presence of transient effects. Luckily, such effects become
negligible if the phase is varied sinusoidally.
Ultimately, the modulating frequency must be chosen such that:
`1. The change in phase should be enough to stabilize the system `\
`2. The sidebands produced are so small that they can be neglected`
The first criterion may be satisfied if the modulation frequency is set
to a value less than the reciprocal of the delay time, τ. The second
criterion may be satisfied by ensuring that the modulation frequency is
below one hertz. A modulation frequency of 1 Hz is so low that it will
not be picked up by the human ear. Briefly, based on the definition of
τ, increasing the distance between the microphone and loudspeaker
permits a minimal modulation frequency, which will stabilize the system
while remaining practically imperceptible to the human ear.
For further reading on ensuring that the phase modulation is sinusoidal,
it is advised to consult reference \[2\] or a textbook on control
systems. A sinusoidal phase modulator takes the form:
$\phi(t)=k\sin(\omega_m t)$
A modulator that is purely sinusoidal in nature sidesteps the usually
non-linear behaviour of a real modulator, which greatly simplifies the
problem at hand.
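As a minimal sketch, the following applies such a modulator to a single
tone circulating in the loop; the sample rate, tone frequency, and
modulation index are illustrative assumptions, and the modulation
frequency is chosen to satisfy both criteria above:

```python
import numpy as np

# Apply the sinusoidal phase modulator phi(t) = k*sin(w_m*t) to a tone.
# All parameter values are illustrative assumptions.

fs = 8000.0                       # sample rate, Hz (assumed)
t = np.arange(0.0, 4.0, 1.0 / fs)

tau = 0.075                       # loop delay, s (75 ms, as in the text)
f_m = 0.8 * min(1.0 / tau, 1.0)   # below both 1/tau and 1 Hz -> 0.8 Hz
k = 0.5                           # modulation index, radians (assumed)

f0 = 440.0                        # tone in the feedback loop, Hz (assumed)
phi = k * np.sin(2.0 * np.pi * f_m * t)   # slowly varying phase
s = np.sin(2.0 * np.pi * f0 * t + phi)    # phase-modulated tone

# Sidebands appear at f0 +/- f_m; with f_m below 1 Hz and a small k,
# they sit too close to the carrier to be heard as separate tones.
print(f"modulation frequency f_m = {f_m:.2f} Hz")
```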
# References
\[1\] Kuttruff, Heinrich. *Room Acoustics, Fourth Edition*. Spon Press,
342 pp.
\[2\] Guelke, R. W. and A. D. Broadhurst (1971). \"Reverberation Time
Control by Direct Feedback.\" Acta Acustica united with Acustica 24(1):
33-41.
\[3\] Schroeder, M. R. (1964). \"Improvement of Acoustic-Feedback
Stability by Frequency Shifting.\" The Journal of the Acoustical Society
of America 36(9): 1718-1724.
|
# Communication Theory/Nonverbal Communication
Scholars in this field usually use a strict sense of the term
\"verbal\", meaning \"of or concerned with words,\" and do not use
\"verbal communication\" as a synonym for oral or spoken communication.
Thus, sign languages and
writing are generally understood as forms of
verbal communication, as both make use of words --- although like
speech, both may contain paralinguistic elements and often occur
alongside nonverbal messages. Nonverbal communication can occur through
any sensory channel --- sight, sound, smell, touch or taste. Nonverbal
communication is also
distinguished from unconscious
communication, which may be
verbal or non-verbal.
## Voluntary
This is less commonly discussed because it seems unproblematic. It
refers to movements, gestures and poses intentionally made by a person:
smiling, moving hands, imitating actions, and generally making movements
with full or partial intention of making them and a realisation of what
they communicate. It can apply to many types of soundless communication,
for example, formalized gestures.
## Involuntary
This applies to involuntary movements that may give observers cues about
what one is really thinking or feeling. The ability to interpret such
movements may itself be unconscious, at least for untrained observers.
Many elements of involuntary body language can easily be understood, and
tested, simply by knowing about them. For example, the tendency for
people to raise their eyebrows as one approaches them face-to-face is
usually indicative of esteem. If you walk down the street and encounter
someone you do not know, then the chances are that neither of you will
raise your eyebrows. If you recognize each other, however, even if you
do not greet one another, then eyebrows will likely raise and lower. Of
particular interest here in a work context is that if one is not rated
highly by the other person, then that person will not raise their
eyebrows, even though one is recognised.
It is widely believed that involuntary body language is the most
accurate way into a person\'s subconscious. In principle, if people do
not realize what they are doing or why they are doing it, it should be
possible for a trained observer to understand more of what they are
thinking or feeling than they intend - or even more than they realize
themselves. Interrogators, customs examiners, and others who have to
seek information that people do not necessarily want to give have always
relied on explicit or implicit hypotheses about body language. However,
this is a field that is fraught with risk of error, and it has also been
plagued with plausible but superficial or just plain erroneous popular
psychology: just because someone has
their legs crossed toward you, it does not mean that they want to have
sex with you; it could just mean that they are comfortable with you, but
it could also be how they always sit regardless of where you are.
Furthermore, it is not possible to tell reliably whether body language
has been emitted voluntarily or involuntarily, so to rely too heavily on
it is to run the risk of being bluffed.
Research conducted by Paul Ekman at the end
of the 20th Century resolved an old debate about how facial
expressions vary between cultures. He
was interested in whether, for instance, smiling was a universal
phenomenon, or whether there are cultures in which its
expression varies. Ekman found that there were several fundamental sets
of involuntary facial muscle movements relating to the experience of a
corresponding set of emotions: grief, anger, fear, enjoyment and
disgust. He also indicates that, whilst the furrowing of the eyebrows
when experiencing grief is difficult to perform
voluntarily, such expressions can be learnt through practice. Ekman\'s
ideas are described and photographically illustrated in his book
*Emotions
Revealed*.
The use of video recording has led to important discoveries in the
interpretation of micro-expressions,
facial movements which last a few milliseconds. In particular, it is
claimed that one can detect whether a person is lying by interpreting
micro-expressions correctly. Oliver Sacks,
in his paper *The President\'s Speech*, indicated how people who are
unable to understand speech because of brain damage are nevertheless
able to assess sincerity accurately. He even suggests that such
abilities in interpreting human behavior may be shared by animals such
as domestic dogs.
A recent empirical study of people\'s ability to detect whether another
was lying established that some people can detect dishonesty with
consistent reliability. This study showed that certain convicts, American
secret service agents and a Buddhist monk were better at detecting lying
in others than most people, and it is postulated that this ability is
learned by becoming observant of particular facial micro-expressions.
Body language is a product of both genetic and environmental influences.
Blind children will smile and laugh even though
they have never seen a smile. The ethologist
Irenäus Eibl-Eibesfeldt claimed
that a number of basic elements of body language were universal across
cultures and must therefore be fixed action
patterns under
instinctive control. Some forms of human body
language show continuities with communicative gestures of other
apes, though often with changes in meaning - the
human smile, for example, seems to be related to the open-mouthed threat
response seen in most other primates. More
refined gestures, which vary between cultures (for example the gestures
to indicate \"yes\" and \"no\"), must obviously be learned or modified
through learning, usually by unconscious observation of the environment.
## Reference
Argyle, M. (1975). Bodily communication.
New York: International Universities Press.
|
# This Quantum World/Cover
## About this book
Since 1999 I have taught an introductory course of contemporary physics
to high-school students (grades 11-12) and undergraduates at the Sri
Aurobindo International Centre of
Education
(SAICE) in Pondicherry,
India. The SAICE is not your typical high school or college. While the
students enjoy an exceptional freedom to choose their projects and
courses, the teachers are free to offer subjects of their own choosing
and are encouraged to explore new methods of teaching. My course is
therefore optional. It is suitable for anyone with an interest in what
contemporary physics is trying to tell us about the \"nature of
Nature.\" To students of physics it offers a perspective that is
complementary to those of many excellent textbooks.
Every year at the beginning of the new term I revise and try to improve
the material I hand out to my students. These revisions are to a
considerable extent based on student feedback. I am presently preparing
the handouts for the next term and intend to simultaneously make them
available in these pages, hoping for additional valuable feedback from
the Wikibooks community.
Postscript: I have not been able to add chapters and sections as fast as
I would have liked, but it\'s going to happen. I haven\'t given up on
this project!\--Koantum 03:05, 27 September
2007 (UTC)
|
# This Quantum World/Atoms
# Atoms
## What does an atom look like?
### Like this?
Image:Barium\_(Elektronenbesetzung).png Image:Atom.png Image:Stylised
atom with three Bohr model orbits and stylised nucleus.png
Image:Rutherford_atom.svg
### Or like this?
Image:Orbitals1.png\|$\rho_{2p0}$ Image:Orbitals2.png\|$\rho_{3p0}$
Image:Orbitals3.png\|$\rho_{3d0}$ Image:Orbitals4.png\|$\rho_{4p0}$
Image:Orbitals5.png\|$\rho_{4d0}$ Image:Orbitals6.png\|$\rho_{4f0}$
Image:Orbitals7.png\|$\rho_{5d0}$ Image:Orbitals8.png\|$\rho_{5f0}$
None of these images depicts an atom *as it is*. This is because it is
impossible to even visualize an atom *as it is*. Whereas the best you
can do with the images in the first row is to erase them from your
memory---they represent a way of viewing the atom that is too simplified
for the way we want to start thinking about it---the eight fuzzy images
in the next row deserve scrutiny. Each represents an aspect of a
stationary state of atomic hydrogen. You see neither the nucleus (a
proton) nor the electron. What you see is a fuzzy position. To be
precise, what you see are cloud-like blurs, which are symmetrical about
the vertical and horizontal axes, and which represent the atom\'s
internal relative position---the position of the electron relative to
the proton *or* the position of the proton relative to the electron.
- What is the *state* of an atom?
- What is a *stationary* state?
- What exactly is a *fuzzy* position?
- How does such a blur represent the atom\'s internal relative
position?
- Why can we not describe the atom\'s internal relative position *as
it is*?
## Quantum states
In quantum mechanics, **states** are
probability algorithms. We use them to calculate the probabilities of
the possible outcomes of
measurements on the
basis of actual measurement outcomes. A quantum state takes as its input
- one or several measurement outcomes,
- a measurement M,
- the time of M,
and it yields as its output the probabilities of the possible outcomes
of M.
A quantum state is called **stationary** if the probabilities it assigns
are independent of the time of the measurement.
From the mathematical point of view, each blur represents a density
function
$\rho(\boldsymbol{r})$. Imagine a small region $R$ like the little box
inside the first blur. And suppose that this is a region of the
(mathematical) space of positions relative to the proton. If you
integrate $\rho(\boldsymbol{r})$ over $R,$ you obtain the probability
$p\,(R)$ of finding the electron in $R,$ *provided* that the appropriate
measurement is made:
$$p\,(R)=\int_R\rho(\boldsymbol{r})\,d^3\boldsymbol{r}.$$
\"Appropriate\" here means capable of ascertaining the truth value of
the proposition \"the electron is in $R$\", the possible truth values
being \"true\" or \"false\". What we see in each of the following images
is a surface of constant probability density.
\
Image:Orbitals1a.png\|$\rho_{2p0}$ Image:Orbitals2a.png\|$\rho_{3p0}$
Image:Orbitals3a.png\|$\rho_{3d0}$ Image:Orbitals4a.png\|$\rho_{4p0}$
Image:Orbitals5a.png\|$\rho_{4d0}$ Image:Orbitals6a.png\|$\rho_{4f0}$
Image:Orbitals7a.png\|$\rho_{5d0}$ Image:Orbitals8a.png\|$\rho_{5f0}$
\
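To make the integral
$p\,(R)=\int_R\rho(\boldsymbol{r})\,d^3\boldsymbol{r}$ concrete, here is
a minimal numerical sketch. It uses the hydrogen *ground-state* density
$\rho_{1s}(r)=e^{-2r/a_0}/(\pi a_0^3)$ --- a standard textbook result,
not one of the states pictured above --- and an arbitrarily chosen
cubical region $R$:

```python
import numpy as np

# Monte Carlo estimate of p(R) = integral of rho(r) over R for the
# hydrogen ground state, rho(r) = exp(-2 r / a0) / (pi * a0^3).
# The region R is an arbitrary illustrative cube.

a0 = 1.0                                 # Bohr radius, atomic units
rng = np.random.default_rng(0)

# R: a cube of side a0 centred one Bohr radius from the proton (assumed).
lo = np.array([0.5, -0.5, -0.5]) * a0
hi = np.array([1.5,  0.5,  0.5]) * a0

N = 1_000_000
pts = rng.uniform(lo, hi, size=(N, 3))   # uniform samples inside R
r = np.linalg.norm(pts, axis=1)
rho = np.exp(-2.0 * r / a0) / (np.pi * a0**3)

# Mean density times the volume of R estimates the probability of
# finding the electron in R, given the appropriate measurement.
p_R = rho.mean() * np.prod(hi - lo)
print(f"p(R) = {p_R:.4f}")
```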
Now imagine that the appropriate measurement is made. *Before* the
measurement, the electron is neither inside $R$ nor outside $R$. If it
were inside, the probability of finding it outside would be zero, and if
it were outside, the probability of finding it inside would be zero.
*After* the measurement, on the other hand, the electron is either
inside or outside $R.$
Conclusions:
- Before the measurement, the proposition \"the electron is in $R$\"
is neither true nor false; it lacks a (definite) truth
value.
- A measurement generally changes the state of the system on which it
is performed.
As mentioned before, probabilities are assigned not only *to*
measurement outcomes but also *on the basis of* measurement outcomes.
Each density function $\rho_{nlm}$ serves to assign probabilities to the
possible outcomes of a measurement of the electron\'s position relative
to the proton. And in each case the assignment is based on the outcomes
of a simultaneous measurement of three observables: the atom\'s energy
(specified by the value of the principal quantum number $n$), its total
angular momentum
$l$ (specified by a letter, here *p*, *d*, or *f*), and the vertical
component of its angular momentum $m$.
## Fuzzy observables
We say that an observable $Q$ with a finite or countable number of
possible values $q_k$ is **fuzzy** (or that it has a fuzzy value) if and
only if at least one of the propositions \"The value of $Q$ is $q_k$\"
lacks a truth value. This is equivalent to the following necessary and
sufficient condition: the probability assigned to at least one of the
values $q_k$ is neither 0 nor 1.
What about observables that are generally described as continuous, like
a position?
The description of an observable as \"continuous\" is potentially
misleading. For one thing, we cannot separate an observable and its
possible values from a measurement and its possible outcomes, and a
measurement with an uncountable set of possible outcomes is not even in
principle possible. For another, there is not a single observable called
\"position\". Different partitions of space define different position
measurements with different sets of possible outcomes.
- Corollary: The possible outcomes of a position measurement (or the
possible values of a position observable) are defined by a partition
of space. They make up a finite or countable set of *regions* of
space. An exact position is therefore neither a possible measurement
outcome nor a possible value of a position observable.
So how do those cloud-like blurs represent the electron\'s fuzzy
position relative to the proton? Strictly speaking, they graphically
represent probability densities in the mathematical space of exact
relative positions, rather than fuzzy positions. It is these probability
densities that represent fuzzy positions by allowing us to calculate the
probability of every possible value of every position observable.
It should now be clear why we cannot describe the atom\'s internal
relative position *as it is*. To describe a fuzzy observable is to
assign probabilities to the possible outcomes of a measurement. But a
description that rests on the assumption that a measurement is made,
does not describe an observable *as it is* (by itself, *regardless of
measurements*).
```{=html}
<div class="noprint">
```
**NEXT \>**
```{=html}
</div>
```
|
# This Quantum World/Serious illnesses
```{=html}
<div class="noprint">
```
**\< PREVIOUS**
```{=html}
</div>
```
## Planck
Quantum mechanics began as a desperate measure to get around some
spectacular failures of what subsequently came to be known as classical
physics.
In 1900 Max Planck discovered a law that
perfectly describes the spectrum of a glowing hot object. Planck\'s
radiation formula
turned out to be irreconcilable with the physics of his time. (If
classical physics were right, you would be blinded by ultraviolet light
if you looked at the burner of a stove, aka the UV
catastrophe.) At first, it was just a fit
to the data, \"a fortuitous guess at an interpolation formula\" as
Planck himself called it. Only weeks later did it turn out to imply the
quantization of energy for the emission of electromagnetic
radiation: the energy $~E~$ of
a quantum of radiation is proportional to the frequency $\nu$ of the
radiation, the constant of proportionality being Planck\'s
constant $~h~$
$$E = h\nu.$$ We can of course use the angular
frequency $\omega=2\pi\nu$ instead of
$\nu$. Introducing the reduced Planck constant $\hbar=h/2\pi$, we then
have
$$E = \hbar\omega.$$ Planck\'s law holds at all temperatures and
accounts for the radiation emitted by black bodies.
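As a quick worked example of $E=h\nu$ (the 500 nm wavelength is an
arbitrary illustrative choice):

```python
# Worked example of E = h*nu for one quantum of visible radiation.
# The 500 nm wavelength is an arbitrary illustrative choice.

h = 6.62607015e-34       # Planck's constant, J*s
c = 2.99792458e8         # speed of light, m/s

wavelength = 500e-9      # metres (assumed)
nu = c / wavelength      # frequency of the radiation
E = h * nu               # energy of one quantum

print(f"nu = {nu:.3e} Hz, E = {E:.3e} J = {E / 1.602176634e-19:.2f} eV")
# -> nu of about 6.0e14 Hz, E of about 4.0e-19 J (2.48 eV)
```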
## Rutherford
In 1911 Ernest Rutherford proposed a
model of the atom based on experiments
by Geiger and Marsden. Geiger
and Marsden had directed a beam of alpha
particles at a thin gold foil. Most of the
particles passed the foil more or less as expected, but about one in
8000 bounced back as if it had encountered a much heavier object. In
Rutherford\'s own words this was as incredible as if you fired a 15 inch
cannon ball at a piece of tissue paper and it came back and hit you.
After analysing the data collected by Geiger and Marsden, Rutherford
concluded that the diameter of the atomic nucleus (which contains over
99.9% of the atom\'s mass) was less than 0.01% of the diameter of the
entire atom. He suggested that the atom is spherical in shape and the
atomic electrons orbit the nucleus much like planets orbit a star. He
calculated the mass of the electron to be about 1/7000th of the mass of
an alpha particle. Rutherford\'s atomic model is also called the nuclear
model.
The problem of having electrons orbit the nucleus the same way that a
planet orbits a star is that classical electromagnetic theory demands
that an orbiting electron will radiate away its energy and spiral into
the nucleus in about 0.5×10^-10^ seconds. This was the worst
quantitative failure in the history of physics, under-predicting the
lifetime of hydrogen by at least forty orders of magnitude! (This figure
is based on the experimentally established lower bound on the proton\'s
lifetime.)
```{=html}
<div class="noprint">
```
**NEXT \>**
```{=html}
</div>
```
|
# This Quantum World/Feynman route
# The Feynman route to Schrödinger
The probabilities of the possible outcomes of measurements performed at
a time $t_2$ are determined by the Schrödinger wave function
$\psi(\mathbf{r},t_2)$. The wave function $\psi(\mathbf{r},t_2)$ is
determined via the Schrödinger
equation by
$\psi(\mathbf{r},t_1).$ What determines $\psi(\mathbf{r},t_1)$? Why,
the outcome of a measurement performed at $t_1$ --- what else? Actual
measurement outcomes determine the probabilities of possible measurement
outcomes.
## Two rules
In this chapter we develop the quantum-mechanical probability algorithm
from two fundamental rules. To begin with, two definitions:
- **Alternatives** are possible sequences of measurement outcomes.
- With each alternative is associated a complex number called its
    **amplitude**.
Suppose that you want to calculate the probability of a possible outcome
of a measurement given the actual outcome of an earlier measurement.
Here is what you have to do:
- Choose any sequence of measurements that may be made in the
meantime.
- Assign an amplitude to each alternative.
- Apply either of the following rules:
: \
**Rule A**: If the intermediate measurements are made (or if it is
possible to infer from other measurements what their outcomes would
have been if they had been made), first square the absolute values
of the amplitudes of the alternatives and then add the results.
: **Rule B**: If the intermediate measurements are not made (and if it
is not possible to infer from other measurements what their outcomes
would have been), first add the amplitudes of the alternatives and
then square the absolute value of the result.
\
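To get a feel for the difference between the two rules, here is a minimal numerical sketch (in Python, with arbitrarily chosen amplitudes for two alternatives; think of the two paths through a double slit):

```python
# Rule A vs. Rule B for two alternatives with hypothetical amplitudes.
import numpy as np

# Complex amplitudes of the two alternatives, chosen so that
# |a1|^2 + |a2|^2 = 1; the relative phase of 2.1 rad is arbitrary.
a1 = 0.6 * np.exp(1j * 0.0)
a2 = 0.8 * np.exp(1j * 2.1)

# Rule A: the intermediate outcomes are known (or inferable):
# square the absolute values first, then add.
p_rule_a = abs(a1) ** 2 + abs(a2) ** 2

# Rule B: the intermediate outcomes are neither known nor inferable:
# add the amplitudes first, then square the absolute value.
p_rule_b = abs(a1 + a2) ** 2

print(f"Rule A: {p_rule_a:.3f}")  # 1.000
print(f"Rule B: {p_rule_b:.3f}")  # differs by the interference term 2*Re(a1*conj(a2))
```

The two results differ by the interference term, which is precisely what Rule B adds to ordinary probability theory.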
In subsequent sections we will explore the consequences of these rules
for a variety of setups, and we will think about their origin --- their
*raison d\'être*. Here we shall use Rule B to determine the
interpretation of $\overline{\psi}(k)$ given Born\'s probabilistic
interpretation of $\psi(x)$.
In the so-called \"continuum normalization\", the unphysical limit of a
particle with a sharp momentum $\hbar k'$ is associated with the wave
function
$$\psi_{k'}(x,t)=\frac1{\sqrt{2\pi}}\int\delta(k-k')\,e^{i[kx-\omega(k)t]}dk=
\frac1{\sqrt{2\pi}}\,e^{i[k'x-\omega(k')t]}.$$ Hence we may write
$\psi(x,t) = \int\overline{\psi}(k)\,\psi_{k}(x,t)\,dk.$
$\overline{\psi}(k)$ is the amplitude for the outcome $\hbar k$ of an
infinitely precise momentum measurement. $\psi_{k}(x,t)$ is the
amplitude for the outcome $x$ of an infinitely precise position
measurement performed (at time t) subsequent to an infinitely precise
momentum measurement with outcome $\hbar k.$ And $\psi(x,t)$ is the
amplitude for obtaining $x$ by an infinitely precise position
measurement performed at time $t.$
The preceding equation therefore tells us that the *amplitude* for
finding $x$ at $t$ is the product of
1. the *amplitude* for the outcome $\hbar k$ and
2. the *amplitude* for the outcome $x$ (at time $t$) subsequent to a
momentum measurement with outcome $\hbar k,$
summed over all values of $k.$
Under the conditions stipulated by Rule A, we would have instead that
the *probability* for finding $x$ at $t$ is the product of
1. the *probability* for the outcome $\hbar k$ and
2. the *probability* for the outcome $x$ (at time $t$) subsequent to a
momentum measurement with outcome $\hbar k,$
summed over all values of $k.$
The latter is what we expect on the basis of standard probability
theory. But if this holds under the conditions stipulated by Rule A,
then the same holds with \"amplitude\" substituted for \"probability\"
under the conditions stipulated by Rule B. Hence, given that
$\psi_{k}(x,t)$ and $\psi(x,t)$ are amplitudes for obtaining the outcome
$x$ in an infinitely precise position measurement, $\overline{\psi}(k)$
is the amplitude for obtaining the outcome $\hbar k$ in an infinitely
precise momentum measurement.
Notes:
1. Since Rule B stipulates that the momentum measurement is not
actually made, we need not worry about the impossibility of making
an infinitely precise momentum measurement.
2. If we refer to $|\psi(x)|^2$ as \"the probability of obtaining the
outcome $x,$\" what we mean is that $|\psi(x)|^2$ *integrated* over
any interval or subset of the real
line is the probability of finding
our particle in this interval or subset.
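As a closing illustration, the decomposition $\psi(x,t) = \int\overline{\psi}(k)\,\psi_{k}(x,t)\,dk$ can be checked numerically. The sketch below assumes a Gaussian momentum amplitude $\overline{\psi}(k)$ and the free-particle dispersion relation $\omega(k)=\hbar k^2/2m$, in units with $\hbar=m=1$; both choices are illustrative assumptions, not part of the argument above.

```python
# Build psi(x,t) = ∫ psibar(k) psi_k(x,t) dk for a free particle,
# in units with hbar = m = 1 (illustrative assumptions).
import numpy as np

k = np.linspace(-10.0, 10.0, 4001)
dk = k[1] - k[0]

# A Gaussian momentum amplitude centred on k0, normalized so that
# ∫ |psibar(k)|^2 dk = 1.
k0, sigma = 2.0, 0.5
psibar = np.exp(-(k - k0) ** 2 / (4 * sigma ** 2))
psibar /= np.sqrt(np.sum(np.abs(psibar) ** 2) * dk)

def psi(x, t):
    """Position amplitude as an integral over the momentum alternatives."""
    omega = k ** 2 / 2.0                      # free-particle dispersion
    plane_waves = np.exp(1j * (k * x - omega * t)) / np.sqrt(2 * np.pi)
    return np.sum(psibar * plane_waves) * dk

x = np.linspace(-20.0, 40.0, 601)
dx = x[1] - x[0]
for t in (0.0, 5.0):
    prob = np.array([abs(psi(xi, t)) ** 2 for xi in x])
    print(t, np.sum(prob) * dx)   # ≈ 1 at both times
```

The packet moves with group velocity $k_0$ and spreads, but $\int|\psi(x,t)|^2\,dx$ remains equal to 1, as Born's interpretation requires.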
# This Quantum World/Implications and applications
# The Schrödinger equation: implications and applications
In this chapter we take a look at some of the implications of the
Schrödinger equation
$$i\hbar\,\frac{\partial\psi}{\partial t} = \frac{1}{2m}\left(\frac{\hbar}{i}\,\frac{\partial}{\partial\mathbf{r}} - \mathbf{A}\right)^2\psi + V\psi.$$
# This Quantum World/Bell
## Bell\'s theorem: the simplest version
Quantum mechanics permits us to create the following scenario.
- Pairs of particles are launched in opposite directions.
- Each particle is subjected to one of three possible measurements
(**1**, **2**, or **3**).
- Each time the two measurements are chosen at random.
- Each measurement has two possible results, indicated by a red or
green light.
Here is what we find:
- If both particles are subjected to the same measurement, identical
results are never obtained.
- The two sequences of recorded outcomes are completely random. In
particular, half of the time both lights are the same color.
If this doesn\'t bother you, then please explain how it is that the
colors differ whenever identical measurements are performed!
The obvious explanation would be that each particle arrives with an
\"instruction set\" --- some property that pre-determines the outcome of
every possible measurement. Let\'s see what this entails.
Each particle arrives with one of the following 2^3^ = 8 instruction
sets:
:
: **RRR**,**RRG**,**RGR**,**GRR**,**RGG**,**GRG**,**GGR**, or
**GGG**.
(If a particle arrives with, say, **RGG**, then the apparatus flashes
red if it is set to **1** and green if it is set to **2** or **3**.) In
order to explain why the outcomes differ whenever both particles are
subjected to the same measurement, we have to assume that particles
launched together arrive with opposite instruction sets. If one carries
the instruction (or arrives with the property denoted by) **RRG**, then
the other carries the instruction **GGR**.
Suppose that the instruction sets are **RRG** and **GGR**. In this case
we observe different colors with the following five of the 3^2^ = 9
possible combinations of apparatus settings:
:
: **1---1**,**2---2**,**3---3**,**1---2**, and **2---1**,
and we observe equal colors with the following four:
:
: **1---3**,**2---3**,**3---1**, and **3---2**.
Because the settings are chosen at random, this particular pair of
instruction sets thus results in different colors 5/9 of the time. The
same is true for the other pairs of instruction sets *except* the pair
**RRR**, **GGG**. If the two particles carry these respective
instruction sets, we see different colors *every* time. It follows that
we see different colors *at least* 5/9 of the time.
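This counting argument is easily verified by brute force. Here is a short sketch (the string encoding of instruction sets is, of course, arbitrary) that enumerates all pairs of opposite instruction sets and counts the setting combinations giving different colors:

```python
# Verify that opposite instruction sets give different colors
# in at least 5 of the 9 equally likely setting combinations.
from itertools import product

instruction_sets = ["".join(s) for s in product("RG", repeat=3)]  # RRR ... GGG

def opposite(s):
    return "".join("G" if c == "R" else "R" for c in s)

for left in instruction_sets:
    right = opposite(left)          # the two particles carry opposite sets
    differ = sum(left[i] != right[j]
                 for i, j in product(range(3), repeat=2))
    print(f"{left}/{right}: different colors {differ}/9 of the time")
# Output: 5/9 for every pair except RRR/GGG and GGG/RRR, which give 9/9.
```

Since every pair gives different colors at least 5/9 of the time, no assignment of instruction sets can reproduce an observed probability of 1/2.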
But in reality different colors are observed only half of the time: the
probability of observing different colors is 1/2. Conclusion: the
statistical predictions of quantum mechanics cannot be explained with
the help of instruction sets. In other words, these measurements do not
reveal *pre-existent* properties. They *create* the properties the
possession of which they indicate.
Then how is it that the colors differ whenever identical measurements
are made? How does one apparatus \"know\" which measurement is performed
and which outcome is obtained *by the other apparatus*?
Whenever the joint probability **p(A,B)** of the respective outcomes
**A** and **B** of two measurements does not equal the product
**p(A) p(B)** of the individual probabilities, the outcomes --- or their
probabilities --- are said to be *correlated*. With equal apparatus
settings we have **p(R,R) = p(G,G) = 0**, and this obviously differs
from the products **p(R) p(R)** and **p(G) p(G)**, which equal
$\textstyle\frac12\times\frac12=\frac14.$ What kind of mechanism is
responsible for the correlations between the measurement outcomes?
: *You understand this as much as anybody else!*
The conclusion that we see different colors at least 5/9 of the time is
*Bell\'s theorem* (or *Bell\'s inequality*) for this particular setup.
The fact that nature violates Bell\'s inequality is
evidence that particles do not carry instruction sets embedded within
them and instead have instantaneous knowledge of other particles at a
great distance. Here is a comment by a distinguished Princeton physicist
as quoted by David Mermin[^1]
:
: Anybody who\'s not bothered by Bell\'s theorem has to have rocks
in his head.
And here is why Einstein wasn\'t happy with quantum mechanics:
:
: I cannot seriously believe in it because it cannot be reconciled
with the idea that physics should represent a reality in time
and space, free from spooky actions at a distance.[^2]
Sadly, Einstein (1879-1955) did not live to see Bell\'s theorem of 1964. We
know now that
:
: there must be a mechanism whereby the setting of one measurement
device can influence the reading of another instrument, however
remote.[^3]
: *Spooky actions at a distance are here to stay!*
------------------------------------------------------------------------
[^1]: N. David Mermin, \"Is the Moon there when nobody looks? Reality
and the quantum theory,\" *Physics Today*, April 1985. The version
of Bell\'s theorem discussed in this section first appeared in this
article.
[^2]: Albert Einstein, *The Born-Einstein Letters*, with comments by Max
Born (New York: Walker, 1971).
[^3]: John S. Bell, \"On the Einstein Podolsky Rosen paradox,\"
*Physics* 1, pp. 195-200, 1964.
# This Quantum World/Game
## A quantum game
Here are the rules:[^1]
- Two teams play against each other: Andy, Bob, and Charles (the
\"players\") versus the \"interrogators\".
- Each player is asked either \"What is the value of **X**?\" or
\"What is the value of **Y**?\"
- Only two answers are allowed: +1 or −1.
- Either each player is asked the **X** question, or one player is
asked the **X** question and the two other players are asked the
**Y** question.
- The players win if the product of their answers is −1 in case only
  **X** questions are asked, and if the product of their answers is +1
  in case one **X** and two **Y** questions are asked. Otherwise they
  lose.
- The players are not allowed to communicate with each other once the
questions are asked. Before that, they are permitted to work out a
strategy.
Is there a failsafe strategy? Can they make sure that they will win?
Stop to ponder the question.
Let us try pre-agreed answers, which we will call **X~A~**, **X~B~**,
**X~C~** and **Y~A~**, **Y~B~**, **Y~C~**. The winning combinations
satisfy the following equations:
$$X_AY_BY_C=1,\quad Y_AX_BY_C=1,\quad Y_AY_BX_C=1,\quad X_AX_BX_C=-1.$$
Consider the first three equations. The product of their right-hand
sides equals +1. The product of their left-hand sides equals
**X~A~X~B~X~C~**, implying that **X~A~X~B~X~C~ = 1**. (Remember that the
possible values are ±1.) But if **X~A~X~B~X~C~ = 1**, then the fourth
equation **X~A~X~B~X~C~ = −1** obviously cannot be satisfied.
: The bottom line: There is no failsafe strategy with pre-agreed
answers.
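The parity argument can also be confirmed by brute force. A short sketch enumerating all $2^6 = 64$ assignments of pre-agreed answers shows that none satisfies all four winning conditions:

```python
# Count the assignments of pre-agreed answers (each ±1) that would
# win in all four possible combinations of questions.
from itertools import product

winners = [
    (XA, XB, XC, YA, YB, YC)
    for XA, XB, XC, YA, YB, YC in product((1, -1), repeat=6)
    if XA * YB * YC == 1 and YA * XB * YC == 1
    and YA * YB * XC == 1 and XA * XB * XC == -1
]
print(len(winners))   # 0: no failsafe pre-agreed strategy exists
```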
------------------------------------------------------------------------
[^1]: Lev Vaidman, \"Variations on the theme of the
Greenberger-Horne-Zeilinger proof,\" *Foundations of Physics* 29,
pp. 615-30, 1999.
# This Quantum World/GHZ
## The experiment of Greenberger, Horne, and Zeilinger
And yet there is a failsafe strategy.[^1]
Here goes:
- Andy, Bob, and Charles prepare three particles (for instance,
electrons) in a particular way. As a result, they are able to
predict the probabilities of the possible outcomes of any spin
measurement to which the three particles may subsequently be
subjected. In principle these probabilities do not depend on how far
the particles are apart.
- Each player takes one particle with him.
- Whoever is asked the **X** question measures the *x* component of
the spin of his particle and answers with his outcome, and whoever
is asked the **Y** question measures the *y* component of the spin
of his particle and answers likewise. (All you need to know at this
point about the spin of a particle is that its component with
respect to any one axis can be measured, and that for the type of
particle used by the players there are two possible outcomes, namely
+1 and −1.)
Proceeding in this way, the team of players is sure to win every time.
Is it possible for the *x* and *y* components of the spins of the three
particles to be in possession of values before their values are actually
measured?
Suppose that the *y* components of the three spins have been measured.
The three equations
$$X_AY_BY_C=1,\quad Y_AX_BY_C=1,\quad Y_AY_BX_C=1$$
of the previous section tell us
what we would have found if the *x* component of any one of the three
particles had been measured instead of the *y* component. If we assume
that the *x* components are in possession of values even though they are
*not* measured, then their values can be inferred from the measured
values of the three *y* components.
Try to fill in the following table in such a way that
- each cell contains either +1 or −1,
- the product of the three X values equals −1, and
- the product of every pair of Y values equals the remaining X value.
Can it be done?
|     | A   | B   | C   |
|-----|-----|-----|-----|
| X   |     |     |     |
| Y   |     |     |     |
The answer is negative, for the same reason that the four equations
$$X_AY_BY_C=1,\quad Y_AX_BY_C=1,\quad Y_AY_BX_C=1,\quad X_AX_BX_C=-1$$
cannot all be satisfied. Just as there can be no strategy with
pre-agreed answers, there can be no pre-existent values. We seem to have
no choice but to conclude that these spin components are in possession
of values *only if* (and only when) they are actually measured.
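A quick exhaustive search (a sketch along the same lines as in the previous section) confirms that the table cannot be filled:

```python
# Try every way of filling the six cells with ±1, subject to the
# two conditions: the X values multiply to -1, and each pair of
# Y values multiplies to the remaining X value.
from itertools import product

fillings = [
    (XA, XB, XC, YA, YB, YC)
    for XA, XB, XC, YA, YB, YC in product((1, -1), repeat=6)
    if XA * XB * XC == -1
    and YB * YC == XA and YA * YC == XB and YA * YB == XC
]
print(len(fillings))   # 0: the table cannot be filled
```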
Any two outcomes suffice to predict a third outcome. If two
*x* components are measured, the third *x* component can be predicted;
if two *y* components are measured, the *x* component of the third spin
can be predicted; and if one *x* and one *y* component are measured,
the *y* component of the third spin can be predicted. How can we
understand this given that
- the values of the spin components are created as and when they are
measured,
- the relative times of the measurements are irrelevant,
- in principle the three particles can be millions of miles apart.
How does the third spin \"know\" which components of the other spins are
measured and which outcomes are obtained? What mechanism correlates the
outcomes?
: *You understand this as much as anybody else!*
------------------------------------------------------------------------
[^1]: D. M. Greenberger, M. A. Horne, and A. Zeilinger, \"Going beyond
Bell\'s theorem,\" in *Bell\'s theorem, Quantum Theory, and
Conception of the Universe*, edited by M. Kafatos (Dordrecht: Kluwer
Academic, 1989), pp. 69-72.
# Managing Groups and Teams/Introduction
## Foreword
It is often remarked that groups are everywhere, whether in our social
lives, our work lives, or even our families. In each of these
situations, sets of individuals decide to work collectively to achieve
particular goals.
However, although groups are everywhere and we participate in them
constantly, we do not understand them very well. Many of us can tell
stories of groups that seemed perfect for a given task, but which
failed. And we all have reasons (or excuses) that explain such failures.
But our understanding of groups suffers precisely because we are so
familiar with them. The study of groups as a phenomenon that is unique
and different from other social phenomena remains very active,
reflecting both the importance of groups and how much we still do not
know about them.
## About this Book
In this book, we take a challenge-based approach to dealing with groups.
Many other books provide conceptual and descriptive treatments of groups
and teams. Here we will take a prescriptive perspective, one that
focuses on the \"how to\" of managing a group or a team. This
prescriptive perspective, however, will be rooted in social science.
## About Wikibooks and Wikimedia
- Wikibooks, a
Wikipedia article about Wikibooks and its
history.
- Wikimedia Foundation, a Wikipedia article
about the non-profit parent organization of Wikibooks.
- Frequently asked questions about
Wikibooks.
# Managing Groups and Teams/Creating and Maintaining Team Cohesion
## Team Cohesion Defined
One definition of cohesion is "a group property with individual
manifestations of feelings of belongingness or attraction to the group"
(Lieberman et al., 1973: 337). It is generally accepted that group
cohesion and performance are associated. "However, the issue of a
cause/effect relationship between group cohesion and performance is not
completely resolved. Generally, there tend to be more studies supporting
a positive relationship between group cohesion and performance."
With that in
mind, this chapter is an effort to enhance group/team cohesion and, as
a result, to help improve group/team performance.
## The Question
What is team cohesiveness and why does it matter to an organization to
have cohesiveness within its teams?
## Team Composition
### How to promote team cohesion when selecting and identifying diversity within teams
In their journal article Beyond Relational Demography: Time and the
Effects of Surface- and Deep-Level Diversity on Work Group Cohesion,
David A. Harrison, Kenneth H. Price, and Myrtle P. Bell discuss the
composition of teams and its effect on cohesiveness. They describe two
different categories of diversity, namely surface level and deeper
level.
### Surface-Level Diversity:
Surface level attributes are "immutable \[and\] almost immediately
observable." Such
attributes include age, sex, and race/ethnicity. In general, the
findings have been fairly inconsistent within and across studies as to
how diversity in these areas affects team cohesion.
### Deep-Level Diversity:
Deep-level diversity includes differences among members' attitudes,
beliefs, and values. These attributes are less apparent than
surface-level differences and are "learned through extended,
individualized interaction and information gathering."
They are communicated
differences which are shared through both verbal and nonverbal behavior.
There has been less research done in this area with regards to teams in
workplace settings, though a number of social psychological studies have
been conducted. The findings consistently suggest that "attitudinal
similarity \[is\] associated with higher group cohesiveness."
Diversity also
improves communication, reduces personal conflict, attracts friendships,
and gives more satisfaction to group members.
### Summary
Overall, the school of thought that is most widely accepted, in regards
to team cohesion, is that "surface-level differences are less important
and deep-level differences are more important for groups that had
interacted more often"
. Harrison, Price, and
Bell's study concluded that while homogeneous groups interacted and
performed more effectively than heterogeneous groups in the beginning,
with time and information, the diverse groups' performance and processes
improved more rapidly and "had grown more effective in identifying
problems and generating solutions"
. Overall cohesiveness
was strengthened in such cases. Hence, for optimum results, teams ought
to include deep-level diversity as part of the process for achieving
cohesiveness.
## Internal Environment Factors Needed in Team Cohesion
Internally there are several factors that must be present for cohesion
to exist within a team. First, good and appropriate communication is
essential to creating and maintaining cohesion. Communication leads to
the second factor, unity of purpose. For a team to work cohesively, its
members must share a common goal and collectively work towards that
goal. Finally, the team must have a high level of commitment,
understanding that what they do together as a team is better than what
they do on their own.
### Communication
In the article "Building Team Cohesion: Becoming 'We' Instead of 'Me',"
the authors stress the importance of not losing the "human moment," which
they define as "not to lose the powerful impact of face-to-face,
immediate interaction in real time and space." Furthermore, the authors
add the following:
: "It is communication in the "human moment" that most powerfully
creates team synergy -- the energy that truly makes "the whole
greater than the sum of its parts." It is communication in the
"human moment" that also most powerfully creates team cohesion -- a
strong sense of loyalty and commitment to the team vision as one's
own."
: "Providing communication opportunities in real time and space for
forensics team members is necessary to build team cohesion. Whether
a room or lounge where team members can congregate between classes
and the end of the day, practice space for formal and informal
coaching sessions, travel time in cars and vans, or social time to
enjoy pizza and a movie, both quantity and quality of communication
are necessary to build a cohesive team climate of openness and
trust... According to Bormann (1990), highly cohesive groups interact
in an open climate where individuals are free to ask questions and
disagree with one another; even the ability to work through
inevitable team conflict in such a constructive climate will only
serve to strengthen team cohesion."
In order to build cohesion within any team whether it be a sports team
or work team communication is an essential ingredient. Providing
opportunities for the team members to interact socially is necessary to
help build trust. In addition, a safe environment in which the team can
deal with conflict is critical to team cohesion.
### Unity of Purpose or a Common Goal
A critical factor that must be present for groups or teams to experience
cohesion is to have a common goal. In *Self-Managing Work Teams: An
Empirical Study of Group Cohesiveness in "Natural Work Groups" at a
Harley-Davidson Motor Company Plant*, the authors state: "that highly
cohesive groups tend to perform better because they have high commitment
to attaining group goals (e.g., Stogdill, 1972), and because the members
are more sensitive to others in the group, they are more willing to
assist each other (e.g., Schachter, Ellertson, McBride,&Gregory, 1951)."
Additional support for the importance of a common goal in building and
maintaining cohesion is found in "Building Team Cohesion: Becoming
'We' Instead of 'Me'," where the author relates the following:
: "Since cohesion is believed to be one of the distinguishing
characteristics of a high-performance team, what is this powerful
team quality and how is it created? According to Bollen and Hoyle
(1979), cohesion is the degree of attraction members feel toward one
another and the team; \"it is a feeling of deep loyalty, of esprit
de corps, the degree to which each individual has made the team\'s
goal his or her own, a sense of belonging, and a feeling of morale\"
(as cited in Beebe & Masterson, 2000, p. 122). Though cohesion is
rooted in the feelings team members have for one another as well as
a common goal, creating, shaping, and strengthening those feelings
relies on the use of effective communication. Communication scholars
have long agreed that group or team cohesion is as much about the
relationships created as the task at hand, and success in both
fosters the development of team cohesion (Bormann, 1990)."
Without a purpose or a common goal a team will eventually splinter into
separate individuals working towards their own personal agendas and not
together toward a team goal. It is important for team members to see
themselves as a part of the group working towards a goal for
cohesiveness to exist.
### Commitment
Teams that are not committed to each other or a common goal do not
experience cohesion and are much more likely to leave the team or even
the organization. In the article \"Commitment and the Control of
Organizational Behavior and Belief\" the author states the following:
: \"Commitment also derives from the relation of an employee\'s job to
those of others in the organization. Some jobs are rather isolated
and can be done independently of other jobs in the organization. It
has been found that jobs which are not integrated with the work
activities of others tend to be associated with less favorable
attitudes (Sheperd, 1973). Gow, Clark, and Dossett (1974), for
instance, find that telephone operators who quit tend to be those who
are not integrated into the work group. Work integration can affect
commitment by the fact that integrated jobs are likely to be
associated with salient demands from others in the organization. If
a person has a job which affects the work of others in the
organization, it is likely that those others will communicate their
expectations for performance of that job. Such expectations can be
committing in that the other people implicitly or explicitly hold
the person accountable for what he does. Earlier we mentioned that
when individuals did not know what was expected of them they tended
to be less committed to the organization. One reason an individual
will not know what is expected is because no one is telling him. In
general, we would expect that anything which contributes to creating
definite expectations for a person\'s behavior would enhance his
felt responsibility, and hence commitment.\"
We learn from the above author that for commitment to exist, employees
need to know what is expected of them and that they will be held
accountable, either by a manager or by other co-workers. Once commitment is
present team members are more likely to stay and work towards the team
goal.
## Role of Management in Team Cohesion
The roles that management has in a team that they oversee are extremely
important. But it is also important for the management to understand the
boundaries of what their roles and responsibilities are and what the
roles and responsibilities of the team itself are. The manager is often
placed in the management position because of their people and technical
skills and experience. A team often benefits from the manager's
abilities, skills, attitudes, insights and ideas. But neither the
management nor the team should ever forget that it is the team's
responsibility to perform the actual work. So what role should
management play in a team that they oversee? How best can they serve the
team to ensure they are successful? A critical role that management can
and should have is to facilitate and encourage team cohesion.
### Establish the Team Vision/Goal
The first step in creating team cohesion and where management should be
involved is in the establishment of the team vision and/or goal.
Management must set a clear vision to which the team can jointly work
towards together. As Tommy Lasorda, former manager of the LA Dodgers,
stated, "My responsibility is to get my 25 guys playing for the name on
the front of their shirt and not the one on the
back." Management must "establish a common
goal for \[the\] team -- an underlying target that will bind \[them\]
together..." The goal must be as clear as
possible for each member of the team. "Goal clarity is critical for team
members to have confidence in their direction and to be committed to
make it happen." A clearly defined goal
articulated to the team in such a way that they all understand will
inspire the team and commit them to the cause.
Once the goal has been clearly defined and clearly articulated,
management must keep the vision and goal alive. Obstacles, tension, and
crises may arise that can distract or discourage away from the common
goal. The management must "continually reinforce and renew the team
goal."
Since management's "primary responsibility is to ensure that the
team reaches its goal," management must
also facilitate a working environment, set clear expectations and
responsibilities, and lastly, let the team do their job.
### Facilitate a Working Environment
Once the team vision and goal has been established, the most important
contribution management can make "is to ensure a climate that enables
team members to speak up and address the real issues preventing the goal
from being achieved." Such a climate
includes creating an environment of trust, communication and openness
with each other. As Frank LaFasto describes in his book, openness and
supportiveness are "the ability to raise and resolve the real issues
standing the way of a team accomplishing its goal. And to do so in a way
that brings out the best thinking and attitude of everyone involved.
It's too hard for team members to contribute, much less explore the
possibilities, when it is not safe for them to say what's on their
minds. They must be able to speak honestly. They must be able to deal
openly with real obstacles, problems, and opportunities in a way that
promotes listening, understanding of differing perspectives, and
constructively working towards a solution."
The environment and climate in which the team works and operates must be
facilitated by the management to ensure that trust is established,
collective collaboration is demanded, and openness is welcome.
### Set Clear Expectations and Responsibilities
Management responsibility is also to set clear expectations and
responsibilities of the team and individual team members. Patrick
Lencioni describes in his book "The Five Dysfunctions of a Team" that a
team where there is ambiguity about the direction and priorities fails
to commit. When the expectations, direction, and priorities are clear,
by contrast, the team is more likely to commit to the cause and to each
other. Management must establish clear
expectations so there is no ambiguity or question of what is expected of
the team, whether it is the timeline, product, requirements, etc.
Also, management must set clear responsibilities. "There are few
behaviors that build confidence as well as personalized expression of
belief in an individual. One of the most direct signals of such belief
is trusting someone with important and meaningful
responsibility." Clear and meaningful
responsibility that allows the team members to stretch enhances their
trust and confidence. And, as Jack Welch, the CEO of General Electric,
put it, "giving people self-confidence is by far the most important
thing I can do. Because then they will
act."
### Training and Staffing
According to Chansler, Swamidass, and Cammann, to get a task completed, "a
work team must have the resources to do the job. Specifically, the team
needs trained, competent team members. Training is a planned effort by a
firm to help employees learn job-related competencies (Noe, 1999).
Training is used by companies to gain a competitive advantage over
rivals in their respective industries. A company must provide adequate
resources to an empowered team to staff and train its members
adequately." It is the responsibility of management to provide such
training. Chansler, Swamidass, and Cammann also suggest management should
provide its workers with both "hard" and "soft" skills. "Hard-skills
training helps them do their jobs properly so that the plant can produce
a quality product cost-effectively. Soft-skills training, on the other
hand, teaches the workers to get along better as part of a functioning
team; this type of skills training improves interpersonal dynamics and
relationships. To effectively and efficiently manufacture quality
product, both types of training are needed."
It is
therefore the responsibility of management to make sure that group/team
members have the hard and soft skills to perform tasks and maintain
cohesion.
### Get Out of Their Way
And lastly, the manager's role is to get out of the team's way. Once the
team knows what they are working towards, tasks have been clearly
defined and delegated, expectations are clearly set and they have the
means to build relationships of trust and have open communication, the
manager needs to step back and let the team work. The last thing the
team needs, not only to reach their goal, but also to build strong
cohesion is, as Dr. Travis Bradberry described, a seagull manager; one
that swoops in when problems arise "squawking and dumping advice, only
to take off and let others clean up the
mess." Management needs to let the
members in the team be smart and informed about key issues and facts
related to their tasks and goal. Then management must trust team members
by providing sufficient autonomy, which will in turn build confidence.
### Summary
Ultimately, the goal and role of management should be to add value to
the team's effort. This can be done by defining a clear vision and goal,
facilitate a working environment, set clear expectations and
responsibilities, and provide the team enough autonomy where they can
work and do their jobs with full commitment and confidence.
## Examples of Team Cohesion: The Good
A good example of team Cohesion is that of the Harley Davidson Motor
Company (HDMC) and its group structure. The well-known turnaround of
HDMC occurred in the 1980s, when it changed from a "command-and-control"
culture to one of self-managing work teams (SMWTs). This change allowed
assembly employees to make important decisions in their work teams.
With group work as the foundation of HDMC's manufacturing, cohesion
among group members was
essential.
At its Kansas City plant, HDMC organized natural work groups (NWGs)
to make decisions (and build motorcycles). The plant's employees are
local union members. "This partnership allows the shifting of
the decision-making and financial responsibilities for the operation of
the plant to the assembly floor employees."
The structure of the plant divides workers into NWGs. Each NWG is either
assigned to one of four process operations groups (POG) (the Assembly
POG, the Fabrication POG, the Paint POG, or a POG dedicated to future
programs) or provides "computer, human resources, materials, and so
forth, support for the operations NWGs (denoted as RG or Resource
Groups). Each of the NWGs is represented by NWG-elected (on a rotating
basis) members. The highest level of the circular organization is the
lone plant leadership group (PLG), which is co-chaired by the plant
manager and two local union presidents."
Within this group structure HDMC provides for widespread access to
information. "All financial and operations information is available to
all team members, which allows them to monitor budgets and production
quotas" . This
access to information facilitates open communication which in turn leads
to greater team cohesion. Cohesion is also furthered by the autonomy of
workers within the group. "Each NWG is empowered to make decisions with
regard to any aspect of the assembly process as long as it does not
cross over its boundary and impede another NWG." With freedom to
make any necessary decisions and freedom from continuous managerial
intervention NWGs are free to bend and move as needed in response to any
given situation.
Interestingly in this structure there are no formal team leaders. "NWGs
are collectively led by the members of the group. Traditional leadership
duties such as scheduling, safety monitoring, budget balancing, and so
forth, are rotated among the NWG members on a regular basis (usually
monthly). The NWG controls its own budget, sick pay, overtime, and
consumable production materials. Individual performance measures are not
maintained. The NWG performance is measured on achievement of plant
goals and on the goals that they set for themselves." This sharing of
responsibilities fosters cohesion by aligning the goals of the group,
goals each member is included in creating.
## Examples of Team Cohesion: The Bad
The 2010 film "The Social Network" is based on the events and
circumstances that led to the creation and founding of the social
networking website "Facebook." Founder Mark Zuckerberg and his friend,
co-founder Eduardo Saverin agree to launch the site and split up
ownership of the new company equitably. In the process of developing the
company, other individuals and interests come into play that are
detrimental to the team cohesion developed by Mark and Eduardo
eventually leading to multi-million dollar lawsuits and the end of the
original founding team.
Several factors led to the failure of team cohesion:
- Team members were unable to work together cooperatively
- Team goals were not shared by everyone on the team
- Team members felt that they were not recognized for individual
  contributions to accomplish team goals
- Selfish interests were able to infiltrate the team cohesion
The fact that team members were unable to work cooperatively together is
likely the single biggest factor in the failure of the original
"Facebook" leadership team. In the movie, to help advance the growth of
the company, Mark brought in a third partner, Sean Parker, the
co-founder of the famous music sharing site "Napster." Mark was
instantly drawn to Sean's charismatic personality and vision for
"Facebook." At the same time, Eduardo was highly skeptical of Sean and
his business history. Immediately Mark began to lean toward the ideas
that Sean had developed for "Facebook" and eventually gave Sean a small
ownership stake in the company as well as a management position. Upon
learning this, Eduardo was very upset that Mark would go ahead and make
the decision to include Sean without consulting him first.
Mark and Eduardo both had visions of keeping this site exclusive for the
elite college institutions around the country and gradually introducing
it to other colleges. When Sean was brought into the company he
presented Mark with a business plan to expand "Facebook" beyond the
college scene and introduce it to the general public. At the same time
he was trying to convince Mark that he needed to relocate the business
to Palo Alto, CA from Boston, MA. Eduardo was never consulted on these
propositions that were made to Mark. Eduardo felt like Sean was trying
to push him out of the company and influence many of the decisions made
by Mark. As the company grew and others were able to influence decision
making, the team goals had clearly changed and not everyone shared the
same vision.
When "Facebook" was originally started Eduardo was designated as the CFO
of the company. In this responsibility he put up the initial seed money
to get it off the ground. He was in charge of all finances and bank
accounts for the company. While Mark was moving the company headquarters
to Palo Alto, Eduardo was spending time in New York working on securing
advertising contracts with prominent advertising firms. When Eduardo
goes to visit the team in Palo Alto he begins to tell Mark all about the
progress he has made with the advertisers but instead he is told all
about the work that Sean and Mark had accomplished and is essentially
told that his time and work in New York will not be needed. Eduardo felt
like his contributions to the company and goals were not being
recognized. This drives Eduardo further and further from the team.
Throughout the life of the original leadership team there were many
occasions where selfish interests were able to infiltrate team cohesion.
Sean was the worst offender of this. Sean was one of the founders of
"Napster." "Napster" was eventually forced to shut down and was facing
many lawsuits from the record industry. Sean saw an opportunity to work
with Mark and Eduardo on "Facebook." Sean could see the potential that
this venture had and also that he could influence the socially
introverted Mark by filling him with visions of big pay days and a life
style full of privilege. At times he appeared to try and relive his days
of "Napster" and treated "Facebook" like it was his own company and he
was trying to accomplish the goals there that weren't achievable at
"Napster." After a party to celebrate the 1 millionth member of
"Facebook," Sean was arrested with several other "Facebook" interns for
possession of cocaine and was eventually dismissed from the company.
Through these actions, Sean clearly was acting in his own self interest
and did not take into account what the effects would be on the group or
company. In many ways the selfish actions of Sean drove a wedge between
Mark and Eduardo that eventually led to lawsuits and the end of the
original leadership team.
## Conclusion
### Ways to Increase Team Cohesion
Each group environment is different and will present different
challenges. In order to create a cohesive team unit it is important for
team members to be aware of this and work towards it. In Joseph Powell
Stokes's research, he found that "risk taking that occurs in a group,
attraction to individual members of the group, and the instrumental
value of a group are all related to the cohesion of the group". He
proposes that "increasing risk taking, intermember attraction, and the
instrumental value of a personal change group might lead to increased
cohesion, which in turn might lead to increased benefits for group
participants."
As such, groups should attempt to foster an "atmosphere of tolerance and
acceptance" so they can assure openness and honesty and hence, increase
their risk taking and intermember attraction. They can "\[reward\]
members who make risky self-disclosures or give honest feedback to other
group members". They should make sure group members know that they are
expected to "like each other" and can help members "differentiate
between not liking other members' behaviors and not liking the other
members themselves". Group leaders ought to act as examples and make
sure that the group composition and expectations of the group members
are in line with risk-taking and intermember attraction. "Leaders can
maximize the instrumental value of a group for its members by having the
group focus explicitly on its goals and by helping redirect the group
when members' needs are not being met".
### Potential problems
One caveat of cohesion is that, when there is too much of it,
groups become prone to groupthink. "Groupthink is a tendency by groups to
engage in a concurrence seeking manner. Groupthink occurs when group
members give priority to sustaining concordance and internal harmony
above critical examination of the issues under consideration".
It is important for all group members
to be conscious of this pitfall and to take precautions to prevent such
behavior. See Ways to Prevent
Groupthink.
## References
Chansler, P. A., Swamidass, P. M., & Cammann, C. (2003). Self-Managing
Work Teams: An Empirical Study of Group Cohesiveness in "Natural Work
Groups" at a Harley-Davidson Motor Company Plant. *Small Group
Research*, 34(1), 101-120. Retrieved November 25, 2010, from Sage
Journals Online: <http://sgr.sagepub.com/content/34/1/101>
Harrison, David A.; Price, Kenneth H.; Bell, Myrtle P. "Beyond
Relational Demography: Time and the Effects of Surface- and Deep-Level
Diversity on Work Group Cohesion", The Academy of Management Journal,
Vol. 41, No. 1 (Feb., 1998), pp. 96-107
Milliken, F. J., & Martins, L. L. 1996. Searching for common threads:
Understanding the multiple effects of diversity in organizational
groups. Academy of Management Review, 21: 402-433
Terborg, J. R., Castore, C., & DeNinno, J. A. 1976. A longitudinal field
investigation of the impact of group composition on group performance
and cohesion. Journal of Personality and Social Psychology, 34: 782-790.
Friedley, Sheryl A. and Bruce B. Manchester. 2005. Building Team
Cohesion: Becoming "We" Instead of "Me". George Mason University.
Salancik, Gerald R. Organizational Socialization and Commitment:
Commitment and the Control of Organizational Behavior and Belief. pp.
284-290
LaFasto, F., & Larson, C. (2001). *When Teams Work Best.* Thousand Oaks:
Sage Publications
Lencioni, P. (2002). *The Five Dysfunctions of a Team.* San Francisco:
Jossey-Bass.
Bradberry, T. (2008). *Squawk!* New York: HarperCollins Publishers.
*The Social Network.* (2010, 11 21). Retrieved 11 21, 2010, from
Wikipedia: <http://en.wikipedia.org/wiki/The_Social_Network>
Stokes, Joseph Powell. Components of Group Cohesion : Intermember
Attraction, Instrumental Value, and Risk Taking. Small Group Research
1983 14: 163
*Managing Groups and Teams/Groupthink.* (2010, March 23). Retrieved 11
15, 2010, from Wikibooks:
<http://en.wikibooks.org/wiki/Managing_Groups_and_Teams/Groupthink>
# Managing Groups and Teams/Which attributes are fundamental to team cohesion?
## Introduction
Much has been written about the most effective ways to form team
cohesion. The purpose of this chapter is to offer concrete ideas for
team leaders on how they can develop team cohesion amongst group members
in an organizational setting.
Some of the ideas are inspired by researchers such as Patrick Lencioni,
however, many of the ideas within this chapter have been compiled from
the collective experiences of its authors and other research.
```{=html}
<div style="float:right;margin:0 0 1em 1em;">
```
![](Team_Cohesion_Star.jpg "Team_Cohesion_Star.jpg")
```{=html}
</div>
```
Five concrete ideas for building team cohesion will be presented
including: Appreciation, Incentive, Relevance, Performance Measurement
and Interpersonal Relationships.
A model has been created to help the reader recall these points. The
model is presented as a star with each point representing one of the
five attributes fundamental to building team cohesion.
## Appreciation
It has been said that there are only two types of people in the world
who benefit from gratitude and sincere appreciation -- men and women[^1]
Indeed, all humans have a basic need to feel appreciated, respected and
valued. In Maslow's hierarchy of needs, esteem is recognized as a
fundamental human desire which must be fulfilled in order to achieve
self-actualization.[^2] The effective team
leader can utilize appreciation as an important tool to fill this esteem
need for individual team members and work to create and maintain a
culture of appreciation within the team to insure that a team is
cohesive.
In a 2003 study by the US Department of Labor on employment, the number
one reason given for why people decided to leave their jobs was a lack
of appreciation. Just as an underappreciated employee is more likely to
leave a job, an underappreciated team member is more likely to leave a
team. Without feeling valued as contributors within the context of the
team and its objectives, individual team members are much more apt to
feel disconnected and isolated from the team. To maintain a cohesive
team, all members need to feel some degree of appreciation for their
efforts. On a group level, a team which is not esteemed and recognized
for its contributions has little chance of remaining cohesive,
functional and successful.
To view it in a more positive way, a team is much more likely to be
unified, collaborative and ultimately successful in a culture that is
built on appreciation and recognition of the contributions of each
individual, as well as the unique contributions of the team.
Demonstrations of gratitude, acknowledgements of effort, words of
congratulations and other actions of appreciation function as the glue
which binds a successful and cohesive team together. As the French
philosopher Voltaire put it, "Appreciation is a wonderful thing: It
makes what is excellent in others belong to us as well." For a team
leader to succeed, he must take the unique contributions of each team
member and forge them into a combined team identity, and appreciation is
fundamental to this endeavor.
### Creating a Culture of Appreciation
Creating and maintaining a culture of appreciation within a team
requires a concerted focus by the team leader (and team members) on
certain behaviors and characteristics. Below we have identified 6
guidelines related to the principle of appreciation which team leaders
can use as they work on building a strong and cohesive team. Of course
this is not an exhaustive list, but it provides a strong foundation on
which a culture of appreciation can be built.
#### 1. Praise Individuals and Teams
: Appreciation is a principle which applies to individuals within a
team and to the team itself. For a team to be cohesive, the leader
must concentrate on both areas. Without individual recognition, a
team member may feel his/her individual contributions are
irrelevant, unimportant and invisible. Without team recognition,
cohesion is much more difficult to maintain, as the focus is
directed away from the team's accomplishments. Finding a balance
between team and individual appreciation is not an exact science,
and may vary with the unique dynamics of a particular team. However,
both individual and team praise must be present to maximize the
potential for team cohesion.
#### 2. Praise in Private and in Public
: Acknowledging and recognizing accomplishments should be done both in
the private and the public spheres. Expressions of appreciation are
reinforced when shown in multiple venues. A team's willingness to
put in significant extra time to complete a project by the deadline
could warrant a personal thank you to each team member by the team
leader, as well as a team celebration lunch. Going one step further,
the team leader could send an e-mail to select upper management
detailing the extra effort put in by the team (make sure to Cc the
team members). Of course the specific actions are limitless. The
point here is to reinforce the appreciation message and, through
public and private usage, cause it to permeate throughout the
different strata of the individual, team and organizational culture.
#### 3. Be Specific
: Make sure you really get to know each team member so that you can
tailor your appreciation message to each individual as needed. Take
time to learn about a team member's family, hobbies and interests,
preferences, and values. This will allow you to give praise,
recognition and appreciation that will be personally valuable to
each team member. In recognizing or praising a team member, do not
speak in generalities. Instead of, "great job, Amir," the team
leader could say, "Amir, your contribution to the finance meeting
this morning was excellent. I was particularly impressed with your
grasp of the division's key drivers." The latter statement
acknowledges specific attributes and provides Amir with specific
feedback on what his unique contribution was. Similarly,
appreciation at the team level should be specific and should
demonstrate the team leader's interest in and knowledge of the
team's purpose and accomplishments.
#### 4. Be Sincere
: As the cliché goes, flattery will get you nowhere. When people get
the sense that words of praise or acts of acknowledgement are not
coming from the heart, trust is quickly eroded. A team leader should
never follow up a compliment with a "but" or bring up a mistake that
was made by the team or team member. There is a time and place for
that, and it is not during an expression of appreciation.
#### 5. Frequency
: When it comes to appreciation, more is more. As long as the
appreciation is sincere and specific, it is nearly impossible to
show appreciation too much. Whether it is a personal thank you note,
a short recognition speech in front of the team or an e-mail to top
management on a team's accomplishment, appreciation must become a
cultural norm. Consistency is the key. Just one of many ways in
which a team leader can think about frequency is to create three
divisions: day-to-day, informal and formal.[^3] Appreciation must be
demonstrated often for it to become a part of a team's psyche.
#### 6. Develop an Appreciation Plan
: With the tools above, a team leader should put pen to paper, so to
speak, and develop a written appreciation plan. This will function
as a blueprint to creating and maintaining a culture of
appreciation. Ideas for the structure of this plan are many and
varied. A few examples of appreciation/recognition plans from the
corporate world can be found in "Rewarding Teams: Lessons from the
Trenches" by Glenn Parker, Jerry McAdams and David Zielinski.[^4]
"The Carrot Principle", by Chester Elton and Adrian Gostick also
provides insights into creating a detailed recognition plan[^5] and
Whether your plan includes weekly awards, team highlights in the
company newsletter, taking a team member out to lunch, all or none
of the above, the plan should integrate the principles previously
described: namely individual and team praise, private and public
praise, being specific, being sincere and a high degree of
frequency. A written appreciation plan will provide the impetus to
take the above concepts and make them a reality.
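To make the idea concrete, the sketch below shows, in Python, one hypothetical way a leader might outline a written plan around the three frequency divisions mentioned earlier (day-to-day, informal and formal). The plan entries and the helper function are illustrative assumptions, not a template drawn from the books cited above.

```python
# Hypothetical sketch of a written appreciation plan. The three
# frequency divisions (day-to-day, informal, formal) come from the
# text above; every entry listed under them is an illustrative
# assumption, not a prescription from the cited sources.

appreciation_plan = {
    "day_to_day": [
        "Thank a team member in person for a specific contribution",
        "Send a short praise e-mail and Cc the rest of the team",
    ],
    "informal": [
        "Team celebration lunch after a completed milestone",
        "Brief recognition speech at the weekly team meeting",
    ],
    "formal": [
        "Quarterly award presented in front of the organization",
        "E-mail to upper management detailing the team's extra effort",
    ],
}

def check_plan(plan):
    """Flag any frequency division that has no planned action."""
    missing = [division for division, actions in plan.items() if not actions]
    if missing:
        print("Plan is missing actions for:", ", ".join(missing))
    else:
        print("Every division has at least one planned action.")

check_plan(appreciation_plan)  # Every division has at least one planned action.
```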
### Remarkable Results
In a scene from the movie Remember the Titans, based on the true story
of a high school football team which overcame many potentially divisive
challenges and obstacles to come together and win the 1971 Virginia
State Football Championship, Coach Herman Boone takes his team to the
site of the United States Civil War battle of Gettysburg. There, with a
morning mist hanging over the once bloody battlefield, he delivers an
impassioned plea for team unity and cohesion.
"You listen...take a lesson from the dead," directed Coach Boone. "If we
don't come together, right now on this hallowed ground, we too will be
destroyed...just like they were."
"And I don't care if you like each other or not. But you will respect
each other."[^6]
Coach Boone then went about creating a culture of appreciation, by
demanding it from his staff and his team, and living it himself. The
results were remarkable.
Each team leader faces unique challenges and obstacles in maintaining
a cohesive team. Among these challenges is creating a culture where the
basic human need of appreciation is met for each team member and the
team as a whole. When a team feels appreciated and recognized, and each
individual within that team feels likewise, the team is free to come
together as a cohesive unit and to create, collaborate and succeed.
## Incentive
As we continue to explore the contributing factors to team cohesion,
recognizing the importance of incentives is fundamental to team success.
While developing organizational success through incentives is a topic
of much attention, and some misunderstanding, among business leaders,
interpreting the various factors that encourage people to perform at a
high level individually offers insight into how they might perform in a
team setting. As good leaders constantly seek to improve the processes
and outcomes of their projects through the use of teams, the proper
implementation of incentives and rewards should be emphasized and
understood.
"In economics and sociology, an incentive is any factor (financial or
non-financial) that enables or motivates a particular course of
action."[^7] Types of incentives include cash bonuses, merit increases,
promotion, leadership opportunities, and recognition. It is safe to say
that everyone likes to be rewarded for their efforts; as Frank LaFasto
states, "good effort needs to be recognized."[^8] Whether individuals
see these driving factors as incentives or as rewards, channeling them
in a manner that increases productivity and success can be very
beneficial to the team environment.
Robert Henneman notes the importance of group recognition by stating
"group incentives have a stronger influence on productivity than
individual incentives yielding increases of 13%, but more importantly
they reinforce the concept of teamwork."[^9]
In order to ensure that incentive tools are used in the most beneficial
and influential manner for the team as a whole, the team leader needs to
properly identify and integrate the team's objectives. As the team leader
"ensures that rewards and incentives are aligned with achieving the
team's goals", team members feel a sense of accomplishment through their
association with the group process.[^10] In doing so, individual team
members will achieve a higher sense of team worth through their
collaboration and contribution. By keeping rewards "fair", LaFasto
states, the team leader lets individual members internalize the reward's
"burden of proof."[^11] Such a method of reward permits individual
members to identify with the team's objective and to feel a sense of
camaraderie through its achievement. This in turn carries tremendous
value that can create strong bonds of trust and commitment to the team.
As managers devise methods of assuring proper channels of recognition
and incentive, it is important to note that financial and non-financial
methods can be equally impactful. As Robert Henneman explains,
"Recognition can exert a powerful impact on an employee's performance
and may influence an organization's effectiveness as much as financial
incentives do."[^12] This is a strong point for managers to remember as
they strive to build their team members' morale individually as well as
at the team level. Even though incentive and reward have a financial
connotation, it is important to note the impact that simple recognition
can have on a team.
One of the driving factors that contribute to the success of incentives
and rewards toward developing team cohesion is the continual impact this
has on the individual. For example, if a team is given the opportunity
to choose future projects and make certain decisions based on their
ability to achieve success and elevated results as a team, more emphasis
and importance will be placed on team interaction. Team members who are
task oriented respond readily to opportunities or incentives that give
them more initiative or influence over future decisions.
Because of the tremendous impact that meaningful incentives have on team
projects and goals, long-term behaviors begin to form within the team.
Eventually, teams are able to connect their individual "personal,
financial, and psychological rewards to their group goals."[^13] Simple
incentive programs that are relevant to the team's overall goals and
objectives create a more focused approach to how the team members
interact and respect each of their individual duties. Whether or not
each team member has an equitable position in the physical process at
hand, each feels a sense of place in, and a share of, the team when
recognition is given to the team as a whole.
One of the greatest effects of team incentives, and evidence of
long-term team cohesion forming, is the development of organizational
standards. As more incentives are implemented that are in line with
individual teams' objectives, reward and visibility increase within the
organization as a whole. This creates an atmosphere of creativity and
success which over time develops into what are understood as
organizational or team standards. As LaFasto states, "a reward should be
a celebration of standards."[^14] Over long periods of time, these
recognized successes, which have become standards, are what ultimately
shape an organization's or team's culture. For example, if a team is
highly motivated to make its organization the industry leader for a
certain product, achieves that result, and is recognized for it, a
lasting expectation of notable success is eventually adopted. This is
one reason why incentives are effective not only for short-term projects
but also for the long-term well-being of the organization or team.
Through proper alignment of team objectives and motivations, team
leaders have a tremendous opportunity to build team cohesion through the
proper and strategic use of incentives. Individual team members, as well
as the team as a whole, require recognition and reward when high levels
of success are achieved and lasting standards are set. Whether it is a
team bonus for closing a merger deal or a trophy presented to each of
the winning team members in front of the entire organization, "excellent
results must be recognized", and such results are fostered by
strategically aligned incentives that drive success.[^15]
## Relevance
Perhaps one of the most challenging aspects of leading a team of
individuals is making each member of the team feel relevant. What
exactly does this mean? Consider the following excerpt from the 1949
World War II movie 12 O'Clock High. The script picks up after General
Savage asks Lt. Jesse Bishop how he feels about the bombing run the team
had just accomplished.
:   **Lt. Bishop**: Well sir, that's hard. I don't know how I feel.
    That's kind of the trouble.

:   **General Savage**: What is?

:   **Lt. Bishop**: The whole thing sir, everything. I can't see what
    good we're doing with our bombing. All the boys getting killed.
    Just a handful of us, it's like we're some kind of guinea pigs,
    only we're not proving anything. You've got to have confidence in
    something, then when you find something you've got confidence in,
    then everything changes. It just doesn't make any sense. I just
    want out.

:   **General Savage**: Do you think it will be any better in another
    group?

:   **Lt. Bishop**: It isn't a question of that sir; I don't want to
    fly anymore. I want to transfer to another branch.

:   **General Savage**: Doesn't it mean anything to you that we hit the
    target today with no losses?

:   **Lt. Bishop**: Yes sir... I suppose so... in a way, but I just
    want out.

:   **General Savage**: Well, that's a pretty tough request from a
    Medal of Honor man. Sure we're guinea pigs Jesse, but there's a
    reason. If we can hang on here now, one day soon somebody's gonna
    look up and see a solid overcast of American bombers on their way to
    Germany to hit the Third Reich where it lives. Maybe we won't be
    the ones to see it, I can't promise you that, but I can promise you
    that they'll be there if only we can manage to make the grade now.

:   **Lt. Bishop**: I'd like to believe you sir. I just don't have
    confidence in anything anymore.[^16]
This exchange between General Savage and Lt. Bishop illustrates what
happens with many members of groups or teams. Often, members of a team
feel much like Lt. Bishop, who, even after achieving the desired result,
can't see what good they are doing. In other words, they feel that the
work they are doing is irrelevant to the overall goal and objective of
the team or organization.
In his book The Three Signs of a Miserable Job, Patrick Lencioni cites
this idea of irrelevance as one of the main signs of a miserable job.
For our purposes, we can also say that irrelevance is a main sign of a
team member feeling miserable within the group. Lencioni states that
"everyone needs to know that their job matters, to someone. Anyone.
Without seeing a connection between the work and the satisfaction of
another person or group of people, an employee simply will not find
lasting fulfillment. Even the most cynical employees need to know that
their work matters to someone, even if it's just the boss."[^17]
Now that we've fleshed out this idea of irrelevance, how do team leaders
ensure that each team member feels fulfillment in the role that they are
playing within the group? This chapter has cited other topics that help
a team member feel relevant. Ideas such as incentives and measurement
can assist in making group members feel fulfilled; however, there are a
few important points that team leaders must touch on to avoid feelings
of irrelevance amongst team members.
### Focus on the Goal
Kevin Eikenberry, Chief Potential Officer of the Kevin Eikenberry Group,
explains, "Teams don't have to be aligned with the goals of the
organization. Teams can work on what they believe to be the right
things. They can work diligently on creating the results they think
matter. They can be completely committed to success from their
perspective."[^18]
Often, teams operate in this manner: although the team is working
towards goals it believes are important, those goals may run contrary to
the strategic goals of the company. In the end, although the team may
have accomplished much, team members fail to see how their work mattered
in the grand scheme of the organization's goals.
When a team does not align its goals with what the organization needs,
it ends up operating in a vacuum. Eikenberry goes on to say, "sometimes
the vacuum is caused by a far more pervasive problem -
no clear organizational goals, objectives or strategies exist to align
to. Leaders must create clear strategies and they must create a clear
line of sight throughout the organization, so people and teams can
connect their work to the important strategies of the organization."[^19]
One would think that business leaders might have an easy time assisting
teams in aligning their goals with the overall goals of the
organization; however, business leaders often neglect to clarify goals
to the team or simply forget to share the goals with them. In the end,
it is the responsibility of the business leaders to align the team's
goals. The question becomes: how do leaders accomplish this task? The
following steps, offered by Eikenberry, are ways to connect the team's
work to organizational goals:
**Start At The Beginning:** Do not communicate the goals of the
organization until those goals are set. Changing the goal midway through
the group's work may derail the team. Once the goals are set,
communicate these goals clearly in the early stages of the team's
development. Lastly, make sure that you clearly connect the work the
team is doing to the organization goal(s).
**Generate Conversation:** The delivery method of the goal to the team
is extremely important. Do not deliver the organization's goals in an
email or a packet, but rather present the goal verbally to the team and
ask for feedback from the team and how their work will fit into these
goals.
**Get The Team's Help:** Get the team's input and give them a chance to
come up with team goals and objectives that align with the
organizational goals. This will create ownership and allow for a higher
level of agreement between team members.
**Provide a Connection:** Teams need someone from outside the team to
act as a liaison between the team and organization. This usually comes
in the way of a team sponsor who doesn't necessarily sit on the team but
provides support and keeps the team from feeling alone.
**Make Them Accountable:** Once team goals properly align with
organization goals, then it is easier to have accountability within the
team. Not only does this improve the team's results, but it can also
improve overall team dynamics.[^20]
Ensuring that the team's goals align with organizational goals allows
team members to easily connect their work back to a larger objective. By
clearly providing this connection, team members will feel that the work
they do, both as a team and as individuals, makes a difference far
beyond just the immediate members of the team.
### Focus on Personalities
There really isn't just one way to resolve team members' feelings of
irrelevance, or the sense that the work they do goes unnoticed. One way
to approach the problem is to properly understand the many personalities
that comprise the team. Countless personality tests can be taken on the
internet, of which the most popular is the Myers-Briggs.
How does understanding the personalities of team members assist team
leaders in making members of the team feel relevant? Many of us easily
see personality traits in one another, and it is fairly easy for us to
talk about them. We may hear other people tell us that we are energetic
or that we are sensitive; however, what do these types of personality
traits mean within the context of the team? More importantly, by
understanding team members' personalities, team leaders can use
techniques that help make each team member's work feel relevant and
valued.
As mentioned, the Myers-Briggs is the most popular personality test;
however, one personality profile that is useful in our context is the
SELF Quiz, which can be found at
<http://www.nationalseminarstraining.com/selfquiz/indexHP.cfm>. By
answering and scoring a series of questions, team members can find out
which of the four interaction styles their personality mirrors. The test
reveals whether you are a Social, an Efficient, a Loyal, or a Factual,
and a complete analysis of what each style means is provided. The test
is free and takes only minutes to complete.[^21]
Perhaps the most interesting information we can pull out of this is the
style definition for each of the four categories. For example, someone
who falls into the Social category, according to the style definition,
is motivated by opportunities and friendship. People who fall into the
Efficient category are motivated by success, control and recognition.
These kinds of insights are important for team leaders and can help make
each team member feel that their work is important and
relevant to the team and the organization. A team leader dealing with
someone belonging to the Efficient category may find that giving that
person control over certain aspects of the work may make that team
member feel all the more relevant. Likewise, the team leader may try to
ensure that someone belonging to the Social category has enough
opportunity to connect with people on the team and in the organization
in order to go beyond professional relationships and form friendly
relationships as well.
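As a concrete illustration, a team leader could keep the style-to-motivator pairings from the quiz results in a simple lookup table and consult it when planning recognition. The Python sketch below is hypothetical: only the two styles whose motivators are described above are filled in, and the helper function is our own assumption, not part of the SELF Quiz.

```python
# Hypothetical sketch: a lookup from SELF interaction style to the
# motivators its style definition lists. Only the two styles whose
# motivators appear in the text above are filled in; the remaining
# styles would be completed from the full quiz results.

STYLE_MOTIVATORS = {
    "Social": ["opportunities", "friendship"],
    "Efficient": ["success", "control", "recognition"],
    # "Loyal" and "Factual" omitted: their motivators are not
    # described in the text above.
}

def suggest_recognition(style):
    """Suggest what to build tailored recognition around (assumed helper)."""
    motivators = STYLE_MOTIVATORS.get(style)
    if motivators is None:
        return f"No motivator profile recorded for style '{style}'."
    return "Tailor recognition around: " + ", ".join(motivators) + "."

print(suggest_recognition("Efficient"))
# Tailor recognition around: success, control, recognition.
```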
The idea of analyzing team members' personalities may be somewhat of an
unconventional method of making members feel relevant within the team.
However, understanding personalities goes far beyond just understanding
how individual personalities may affect team dynamics. Personalities
should be leveraged appropriately to ensure that the needs of each
member are met to the best of the team leader's capabilities.
### Focus on Individual Responsibilities
Many times team leaders don't have the option of choosing which members
will make up the team. Regardless, each member will have individual
responsibilities within the team. Often, team leaders may give certain
responsibilities to team members according to their expertise or
strengths in particular areas. Other times, the team leader may ask a
team member to be responsible for something outside of that person's
expertise in order to gain new perspectives, ideas, or insights.
At any rate, it is important that the responsibilities given to team
members allow them to feel fulfilled. For example, a team member may not
feel comfortable performing high level quantitative analysis but will
still accept the responsibility. Days later that team member may begin
to feel frustrated and may feel that the work he/she is doing is subpar
and therefore somewhat worthless to the team.
The point here is that no two team dynamics are the same. Team leaders
have the responsibility to ensure that the individual responsibilities
assigned to team members allow each individual the success needed to
avoid feelings of irrelevance within the team.
Finally, there are many ways to ensure that team members feel relevant
within the team. Aligning team members' work to organization goals,
understanding how personalities play a role, and ensuring that
individual responsibilities allow for team members to feel fulfilled are
just three ways that will help team leaders create feelings of relevance
among team members.
## Performance Measurement
One element key to maintaining morale within a group is the measurement
of performance. It is not sufficient to simply measure whether or not
the main task was accomplished. There will be times that in spite of
superior effort by the group and the individuals within it, they may not
achieve their goal. If a group is marked as a failure in these cases,
team members are demoralized and are not inspired to participate in the
future. In addition, it is important to measure the performance of
individuals within the group to motivate each group member to put forth
their best effort. Individual performance measurement will also benefit
the organization as a whole in that new strengths (and weaknesses) may
be discovered as the group works together.
### Determining what to measure
To maintain morale, the factors being measured must have relevancy not
only to the individuals within the group, but also to the organization
to which the group belongs. These factors will vary from group to group
and from organization to organization. The factors may be specific to
the task at hand, e.g. an engineering group may be assigned to design a
child safety seat which meets or exceeds government standards. In some
cases the organization may also have larger goals in mind, e.g.
individuals from engineering and marketing may be assigned to accomplish
a task and one of the goals the organization has, in addition to
designing a safety seat, is also to improve communication and
cooperation between the departments within the organization.
In The Journal for Quality and Participation, Jack Zigon notes, "It is
not always obvious what results should be measured. Most teams will use
the obvious measures without asking what results they should be
producing and how they will know they've done a good job. Even if you
know what to measure, it is often not clear how the measurement should
be done. Not everything can be easily measured with numbers, thus teams
give up when faced with measuring something like 'creativity' or
'user-friendliness.'"[^22]
### Performances that can be measured
- Achievement of objective -- While this is obviously important and
still needs to be measured, unfortunately in many cases it is the
only element measured.
- Achievement of milestones -- Since many projects are complex and may
take years to complete, measuring milestones not only provides
occasional motivation for members of a group but also helps track
the group's progress.
- Effort of the group as a whole -- Inevitably there will be some
tasks assigned to groups which are not completed, such as finding a
cure for a particular disease. To encourage future participation in
other efforts or even in the same effort, management needs to, when
appropriate, reward the efforts of a group. In addition, monitoring
and measuring the group's effort will help determine the wisdom of
utilizing certain individuals or the group as a whole for future
projects.
- Individual's team contribution -- Participation in team meetings,
volunteering for projects, the number of ideas contributed and
whether other team members believe them to be a valuable part of the
team are all areas that can be measured.[^23]
- Individual behavior - How well the individual works with other team
members, communicates in a constructive way, cooperates with other
team members and participates in group discussions and decision
making are important behaviors to measure.[^24]
- Group behavior -- This includes running effective meetings,
communicating well with each other, allowing opinions to be shared
and coming to a consensus on decisions.[^25]
### Understanding from the beginning
It is important that from the beginning the team members understand the
following:
1. Their performance will be measured
2. Why the measurements are important
3. Who will be measuring their performance
4. How the measurements will be taken
5. Which performances will be measured
6. The potential rewards or consequences of the measurements, if any
7. Whether any of the performances have priority over others
Having this understanding from the beginning will allow team members to
recognize what, besides accomplishing the task, is important to the
organization; in addition these measurements can help guide the team as
they work on the task.
### Periodic check-ups
If the task is going to take a considerable amount of time to
accomplish, periodic measurements should be taken and feedback shared so
that the team can have a clear understanding of their performance along
the way and can make adjustments as necessary. It will also give team
members a formal method to bring up concerns. An additional benefit is
that team members can be encouraged throughout the process and will be
reminded that how they are performing their task is a matter of on-going
interest to the organization.
### Feedback system
Jack Zigon also emphasized the importance of creating a feedback system:
the documents and procedures used to collect and summarize the data. He
suggests the following steps to design the feedback system (a minimal
sketch of how these decisions might be recorded follows the list):
1. Decide what data to collect for each performance standard. The data
should be relevant to the standard and specific enough to allow the
team to know what was right and wrong compared with the standard.
2. Decide which source the feedback should come from. Possibilities
include the job itself, a team member, the team leader, or other
people who receive the team's work.
3. Decide whether all data or just a sample should be collected.
Collect all the data if the measure is very critical and needs to be
tracked each time it occurs, or if the accomplishment is performed
infrequently. Sample the performance if the accomplishment is
performed so frequently that it is not practical to collect all
data.
4. Determine when to collect the data. When possible, collect it
immediately after completing the work.
5. Determine who should collect the data. When possible the team should
collect the data unless gathering the data disrupts the work flow
and takes too much time or the completed work is seen only by
another person.
6. Determine who, other than the team, needs to receive the data.
7. Review existing reports for possible use as feedback reports. They
can be used if the information is relevant to the standard, is
specific enough, is frequent enough to be of value, and is not
cluttered with useless information.
8. If possible, modify existing reports to meet the criteria.
9. Create your own feedback tables or graphs where necessary.
10. Decide whether it would be of value to summarize the data. If the
    data covers a short period of time (daily, for example), summarizing
    is probably appropriate.
11. Create the forms.[^26]
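As noted above, here is a minimal sketch, in Python, of how the design decisions in steps 1 through 6 might be recorded for a single performance standard. The record type, its field names and the example values are illustrative assumptions, not part of Zigon's published method.

```python
# Hypothetical sketch: one record per performance standard, capturing
# the feedback-system design decisions from steps 1-6 above.

from dataclasses import dataclass, field

@dataclass
class FeedbackSpec:
    standard: str          # the performance standard the data serves
    data_to_collect: str   # step 1: relevant, specific data
    source: str            # step 2: the job, a team member, the leader, ...
    sample_only: bool      # step 3: sample, or collect every occurrence
    collect_when: str      # step 4: ideally right after the work is done
    collected_by: str      # step 5: the team itself, when practical
    also_sent_to: list = field(default_factory=list)  # step 6

spec = FeedbackSpec(
    standard="On-time milestone completion",
    data_to_collect="Planned vs. actual completion date per milestone",
    source="team leader",
    sample_only=False,  # a critical measure, so track every occurrence
    collect_when="immediately after each milestone",
    collected_by="the team",
    also_sent_to=["project sponsor"],
)
print(spec)
```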
When group members have a clear vision of not only what they are to
accomplish but also the importance of their individual contribution and
how that will be measured, it will help unify and motivate them
throughout the experience. It also helps management recognize the
importance of the individual within the organization. A common
understanding among team members and by management of what is expected
helps facilitate the fulfillment of those expectations and aids in the
accomplishment of their goals.
## Interpersonal Relationships
Groups can be a very effective tool for solving problems and
accomplishing tasks. The combined intellects, efforts and creativity
of a group of individuals provide a better outcome than could come from
one individual, or even from the same individuals working solo on the
same tasks. The combination leads to more satisfying outcomes and more
efficient work.
However, groups and teams have their drawbacks. Effectively aligning the
differing views of the members of a group can lead to disagreements
and carries the potential for conflict. It is therefore important to
encourage the positive aspects of teams in the workplace while
eliminating the negative. This section looks at how to increase the
positive while mitigating the negative aspects of interpersonal
relationships.
### What are Interpersonal Relationships?
A team, in the business context, is a group of people who have been
placed together to complete a task or solve a problem. They share a
common purpose, and each depends on the other members of the team. This
interdependence among the members of the team creates interpersonal
relationships. An interpersonal relationship is an association between
two or more people.
### How are Interpersonal Relationships Beneficial?
One of the greatest assets that a group or team can have is good
interpersonal relationships with each other. A team that feels
comfortable working together can have an energy that creates a positive
environment and work ethic that can lift a team, making it more
effective. This positive environment can make team members work harder,
more efficiently and more productively. Teams that work well together
have been shown to be more effective. In fact, how well teams achieve
goals is directly related to how effective the team is at working
together. "Healthy team relationships are characteristic of unusually
successful teams."[^27] Conversely, interpersonal conflict is the most
destructive force to a team's success. A team that cannot work
effectively together will not work effectively at all.
There have been arguments that a team does not have to have a good
interpersonal relationship in order to be effective. But consider the
fact that a team in which the members rely on and trust each other is
not putting forth additional effort to manage conflict, hurt, and bias
or trying to guess what the "opposition" is doing. They are able to
focus their efforts more effectively. Time and energy are directed at
the common goal and not at resolving conflicts within the group dynamic.
They are all able to put forth 100% of their efforts to the project at
hand, confident in the fact that each member is doing the same, not
worrying about what the other participants may be thinking or doing.
Just imagine two different working scenarios. In the first, those
involved in the project are energized and excited about coming in to
work; they enjoy the association and the sense of accomplishment that
come from working within their group. In the second, the group members
despise getting up in the morning knowing they have to come in to work;
they hate what they do and those they work with.
It's pretty obvious which group is going to be more effective at
completing its project.
### Interpersonal Conflict
Whenever you put a group of individuals together there is the potential
for conflict. These interpersonal conflicts tend to arise due to
personal differences between individuals. This conflict is damaging to
the team and their environment. It degrades the effectiveness of
achieving the highest outcomes. A team is made up of individuals, and
each individual has their own thoughts, ideas, and personality.
Consequently, these don't always align, nor should they. A team made up
of clones would defeat the whole purpose of putting a group of
individuals together in the first place. A team is not formed solely
because "many hands make light work," but because each member has skills
and talents specific to the individual. Therefore, each member adds to
the group in a different way.
As organizations change their structures to be flatter, the team has
become the predominant entity within the company. Teams are used to
complete projects, solve problems and many other functions within the
organization. However, the team structure creates an environment in which
managers rely on peer relationships to accomplish tasks. In groups where
team members have a similar objective and where there is a friendly
atmosphere, especially if they have been working in conjunction for a
while, the team will work well together and there will be fewer
conflicts. "Inevitably, though, no matter how harmonious the group or
how structured the organization, conflicts are bound to occur. Some
conflicts may feel unproductive, even destructive."[^28]
You can take a group of skilled, intelligent and competent individuals
and put them together and their success in a group will come down to
whether or not they can work together. The amount of brain power, skill
and potential to solve problems will matter little if they cannot work
in harmony. If contention, feelings of spite, betrayal, or disloyalty
arise, they will undermine the work of each individual to the point
that nothing can be accomplished. For this very reason, one of the most
important tools a team can have is a good working relationship.
Going into a group environment and believing that conflict will never
occur is naive. Team leaders and team members should be mindful that
conflict *does* and *will* arise. The goal is to effectively resolve the
conflict and get the team back to working effectively.
Team members should never ignore conflict within the group when it
surfaces. They should not just hope that it will go away or resolve
itself. They must not assume that if they don't bring the problem up
that the group can still function without resolving the issues at hand.
Problems that are ignored and continue to fester will just cause
ineffectiveness in the team. A problem that is ignored may also explode
at a most inconvenient time and destroy the group altogether. Conflicts,
especially interpersonal conflicts, need to be dealt with as soon as
they are discovered, in an appropriate manner, so that the team can get
back to work and effectively meet its goals.
There are generally considered to be two types of conflict that arise in
a group: task conflict and relationship conflict. Task conflict is a
disagreement on how to proceed, or on what tasks should be performed to
meet the end goal. Relationship conflicts are personal issues that arise
between members. They revolve around personal disagreements or dislikes
between individuals in a team and rarely have much to do with the actual
project.
Task conflicts are generally beneficial. These conflicts may be
discussions on "how to improve a process, make the product better,
provide a better service, or improve client relationships."[^29] These
types of conflicts are considered productive and discussions on such
topics generally lead to the most constructive outcomes. "What usually
results from productive conflicts are better, new, or different
solutions to the concerns and issues the conflicting parties are
having." [^30]
If task conflicts are not monitored and mediated by a team leader, these
conflicts can become relationship conflicts. Care must be taken to
ensure that task conflicts remain about the issues and don't become
personal. Relationship conflicts arise from a disagreement or conflict
involving "personality, work style, or differences in beliefs or
values." [^31]
John Crawley described business conflict within a team as "a condition
between or among workers whose jobs are interdependent, who feel angry,
who perceive the other(s) as being at fault, and who act in ways that
cause a business problem."[^32] He also describes a workplace conflict
as containing each of the following elements:
1. They are interdependent
2. They blame each other
3. They are angry
4. Their behavior is causing a business problem[^33]
Gary Topchik, in his First-Time Manager's Guide to Team Building, states
that when team members become "critical of another team member's
actions, behaviors, or appearance, this is unproductive conflict that
must be resolved very quickly. If not, the team will become
self-destructive."[^34]
### What can you do?
It is improbable that anyone will go through their life without having
to deal with conflict in one form or another. The same can be said of a
team; eventually every member of a team will face a disagreement,
differences or conflict. So the question is what to do. "The productive
resolution of conflict usually strengthens relationships, whereas
destructive confrontation, e.g., blaming, name calling, usually destroys
relationships, or at the very least, detracts from their satisfaction
and usefulness. Thus it is very important how you confront the conflict
once you have decided to do so."[^35]
In When Teams Work Best, the authors describe first the questions and
then the answers to "building and sustaining collaborative team
relationships." They begin by describing the "four underlying
characteristics of good relations" as:
1. They are constructive
2. They are productive
3. They embrace mutual understanding
4. They are constructively self-correcting[^36]
Next they offer four questions that "form the basis for assessing the
degree to which an interaction contributes to building a good
relationship."
1. Did we have a constructive conversation?
2. Was the conversation productive enough to make a difference?
3. Did we understand and appreciate each other's perspective?
4. Did we both commit to making improvements?[^37]
This leads to what they call the CONNECT model:\
**C** Commit to the relationship\
**O** Optimize Safety\
**N** Narrow the discussion to one issue\
**N** Neutralize defensiveness\
**E** Explain and echo each perspective\
**C** Change one behavior each\
**T** Track it![^38]\
![](CONNECT_Model.jpg "CONNECT_Model.jpg"){width="1000"}[^39]
## References
[^1]:
[^2]:
[^3]:
[^4]:
[^5]:
[^6]:
[^7]:
[^8]:
[^9]:
[^10]:
[^11]:
[^12]:
[^13]:
[^14]:
[^15]:
[^16]: 12 O'Clock High. 1949. Twentieth Century-Fox Film Corporation.
[^17]:
[^18]:
[^19]:
[^20]:
[^21]:
[^22]:
[^23]:
[^24]:
[^25]:
[^26]:
[^27]:
[^28]:
[^29]:
[^30]:
[^31]:
[^32]:
[^33]:
[^34]:
[^35]:
[^36]:
[^37]:
[^38]:
[^39]: