Engineering Physics
1.1 INTRODUCTION:
The nineteenth century was a very eventful period as far as physics is concerned. The pioneering work on dynamics by Newton and on electromagnetic theory by Maxwell, along with the laws of thermodynamics and kinetic theory, was successful in explaining a wide variety of phenomena. Even though a majority of experimental evidence agreed with classical physics, a few experiments gave results that could not be explained satisfactorily. These few experiments led to the development of modern physics. Modern physics refers to the development of the theory of relativity and the quantum theory. The inability of classical concepts to explain certain experimental observations, especially those involving subatomic particles, led to the formulation and development of modern physics in the early twentieth century. The pioneering work of Einstein, Planck, Compton, Roentgen, Born and others formed the basis of modern physics. The dual nature of matter proposed by de Broglie was confirmed by experiments. Wave mechanics and quantum mechanics were later shown to be identical in their mathematical formulation. The validity of classical concepts was explained as the result of an extrapolation of modern theories to classical situations. In the present chapter, experimental observations of three important phenomena – black body radiation, the photoelectric effect and the Compton effect – considered as the beginning of modern physics, are briefly described.
1.2 BLACK BODY RADIATION:
When radiation is incident on material objects, it is either absorbed, reflected or transmitted. These processes are dependent on the radiation and the object involved. An object that is capable of absorbing all radiation incident on it is called a black body. Practically, we cannot have a perfect black body but can have objects that are only close to a black body.
For example, a black body can be approximated by a hollow object with a very small hole leading to the inside of the object. Any radiation that enters the object through the hole gets trapped inside and is reflected by the walls of the cavity till it is absorbed. Objects that absorb a particular wavelength of radiation are also found to be good emitters of radiation of that particular wavelength. Hence, a black body is also a good emitter of all the radiation it has absorbed.
Emissions from objects depend on the temperature of the object. It has been observed that the energy emitted from objects increases as the temperature of the object is increased. Laws of radiation have been formulated to explain the emission of energy by objects maintained at specific temperatures.
1.2.1 Experimental observation of black body radiation:
Experiments have been carried out to study the distribution of energy emitted by a practical black body as a function of wavelength and temperature.
Fig. 1.1 Distribution of emitted energy as a function of wavelength and temperature for a black body.
Figure 1.1 shows the distribution curves in which the energy density Eλ is plotted as a function of wavelength at different temperatures of the black body. Energy density is defined as the energy emitted by the black body per unit area of the surface. The important features of these distribution curves may be summarized as follows:
(i) The energy vs wavelength curve at a given temperature shows a peak indicating that the emitted intensity is maximum at a particular wavelength and decreases as we move away from the peak.
(ii) An increase in temperature results in an increase in the total energy emitted and also the energy emitted at all wavelengths.
(iii) As the temperature increases, the peak shifts to lower wavelengths. In other words, at higher temperatures, maximum energy is emitted at lower wavelengths.
1.2.2 Laws of black body radiation:
The initial attempts to explain black body radiation were based on classical theories and were found to be limited in application. They could not explain the entire spectrum of the radiation satisfactorily.
1.2.3 Stefan Boltzmann radiation law:
It states that the total energy density E₀ of radiation emitted from a black body is directly proportional to the fourth power of its absolute temperature T:

E₀ = σT⁴

Energy density E₀ is defined as the total of all the energy emitted at all wavelengths per unit area of the emitter surface, and σ is a constant called Stefan's constant, with a numerical value equal to 5.67 × 10⁻⁸ W m⁻² K⁻⁴. This law was suggested empirically by Stefan and later derived by Boltzmann from thermodynamic considerations. The law agrees well with the experimental results.
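As a quick numerical check of the T⁴ dependence, the following minimal Python sketch evaluates the law (the temperature values are arbitrary illustrations, not taken from the text):

```python
# Stefan-Boltzmann law: E0 = sigma * T**4
SIGMA = 5.67e-8  # Stefan's constant, W m^-2 K^-4

def total_energy_density(T):
    """Total energy emitted per unit area (W/m^2) by a black body at temperature T (K)."""
    return SIGMA * T**4

# Doubling the temperature increases the emission 16-fold.
print(total_energy_density(300.0))  # ~459 W/m^2
print(total_energy_density(600.0))  # ~7348 W/m^2
```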
1.2.4 Wien’s Laws:
Wien's displacement law states that the wavelength λm corresponding to the maximum emissive energy decreases with increasing temperature:

λm T = b

where b is called Wien's constant and is equal to 2.898 × 10⁻³ m K. Wien also deduced a relation for the energy density of the form

Eλ = c₁ λ⁻⁵ e^(−c₂/λT)

where c₁ and c₂ are constants. This is known as Wien's distribution law. This law holds good for smaller values of λ but does not fit the experimental curves for higher values of λ.
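The displacement law lends itself to a one-line computation; the sketch below (temperatures chosen purely for illustration) locates the emission peak:

```python
# Wien's displacement law: lambda_max * T = b
B_WIEN = 2.898e-3  # Wien's constant, m K

def peak_wavelength(T):
    """Wavelength (m) of maximum emission for a black body at temperature T (K)."""
    return B_WIEN / T

# The Sun's surface (~5800 K) peaks in the visible; a 300 K object peaks in the infrared.
print(peak_wavelength(5800.0))  # ~5.0e-7 m = 500 nm
print(peak_wavelength(300.0))   # ~9.7e-6 m = 9.7 um
```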

1.2.5 Rayleigh-Jeans law:
Rayleigh and Jeans applied classical thermodynamics and statistics to the radiation in a cavity and obtained, for the energy density,

Eλ = 8πkT/λ⁴

where k is the Boltzmann constant. This equation does not show any peak in the energy value; the energy goes on increasing with decrease in wavelength. The total energy emitted is infinite for all temperatures above 0 K.

This is not at all in agreement with the experimental observations. The law holds good only for large values of wavelength (Fig. 1.3). At lower wavelengths, the energy density increases and becomes very large for wavelengths in the ultraviolet region. Such a large increase in the energy emitted at low wavelengths does not occur experimentally. This discrepancy is known as the "ultraviolet catastrophe" of classical physics.
All the above laws are based on classical thermodynamics and statistics. They are insufficient to explain the black body radiation satisfactorily.

1.2.6 Planck’s radiation law:
This law is based on quantum theory. Max Planck proposed that atoms or molecules absorb or emit radiation in quanta or small energy packets called photons. The energy of each photon can be expressed as

E = hν

where ν is the frequency of the radiation corresponding to the energy E and h is a constant called Planck's constant, equal to 6.63 × 10⁻³⁴ J s. Light quanta are indistinguishable from each other and there is no restriction on the number of quanta having the same energy. In other words, Pauli's exclusion principle is not applicable to them. The quantum statistics applicable to photons is Bose-Einstein statistics. Considering all the energy emitted by the black body to be in the form of photons of different energy, Planck applied Bose-Einstein statistics to obtain the energy distribution of photons,

Eλ = (8πhc/λ⁵) · 1/(e^(hc/λkT) − 1)

This distribution agrees well with the experimental observation of black body radiation and is valid for all wavelengths. Further, it reduces to Wien's distribution law for small values of λ and to the Rayleigh-Jeans law for large values of λ.

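Planck's distribution can be explored numerically; the following sketch (constants rounded, 1 nm grid resolution) verifies that its peak reproduces Wien's displacement law:

```python
import math

H = 6.63e-34   # Planck's constant, J s
C = 3.0e8      # speed of light, m/s
K = 1.38e-23   # Boltzmann constant, J/K

def planck_energy_density(lam, T):
    """Planck energy density per unit wavelength at wavelength lam (m) and temperature T (K)."""
    return (8 * math.pi * H * C / lam**5) / math.expm1(H * C / (lam * K * T))

# The numerically located peak agrees with Wien's displacement law.
T = 5000.0
wavelengths = [n * 1e-9 for n in range(100, 3000)]
peak = max(wavelengths, key=lambda lam: planck_energy_density(lam, T))
print(peak)          # ~5.8e-7 m (about 580 nm)
print(2.898e-3 / T)  # Wien's law gives the same ~5.8e-7 m
```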
1.3 PHOTOELECTRIC EFFECT:
The characteristic curves for photoelectric emission are shown in Fig. 1.5.
Fig. 1.5 Current-voltage characteristics of a photocell. The intensity of illumination increases from L1 to L3.
The important properties of the emission are as follows:
(i) There is no time interval between the incidence of
light and the emission of photoelectrons.


(ii) There is a minimum frequency for the incident light below which no photoelectron emission occurs. This minimum frequency, called the threshold frequency, depends on the material of the emitter surface. The energy corresponding to this threshold frequency is the minimum energy required to release an electron from the emitter surface. This energy is characteristic of the emitter material and is called its work function.

(iii) For a given constant frequency of incident light, the number of photoelectrons emitted or the photo current is directly proportional to the intensity of incident light.


(iv) The photoelectron emission can be stopped by applying a reverse voltage to the phototube, i.e. by making the emitter electrode positive and the collector negative. This reverse voltage is independent of the intensity of incident radiation but increases with increase in the frequency of incident light. The negative collector potential required to stop the photo electron emission is called the stopping potential.


These characteristics of photoelectron emission cannot be explained on the basis of the classical theory of light but can be explained using the quantum theory of light. According to this theory, emission of electrons from the metal surface occurs when the energy of the incident photon is used to liberate the electrons from their bound state. The threshold frequency corresponds to the minimum energy required for the emission. This minimum energy is called the work function of the metal. When the incident photon carries an energy in excess of the work function, the extra energy appears as the kinetic energy of the emitted electron. When the intensity of light increases, the number of photoelectrons emitted also increases but their kinetic energy remains unaltered. The reverse potential required to stop the photoelectron emission, i.e. the stopping potential, depends on the energy of the incident photon and is numerically equivalent to the maximum kinetic energy of the photoelectrons.
When a photon of frequency ν is incident on a metal surface of work function φ, then

hν = φ + (½mv²)max

where (½mv²)max is the maximum kinetic energy of the emitted photoelectrons. This is known as Einstein's photoelectric equation. Since φ = hν₀, where ν₀ is the threshold frequency, it can also be written as

hν = hν₀ + (½mv²)max

If V₀ is the stopping potential corresponding to the incident photon frequency ν, then

eV₀ = (½mv²)max = hν − hν₀

Then, by experimental determination of V₀, it is possible to find out the work function of the metal.
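The equation inverts directly for the work function; a small sketch (the frequency and stopping potential are made-up illustrative values, not from the text):

```python
H = 6.63e-34   # Planck's constant, J s
E = 1.602e-19  # electronic charge, C

def work_function_eV(frequency, stopping_potential):
    """Work function (eV) from Einstein's equation: e*V0 = h*nu - phi."""
    phi_joule = H * frequency - E * stopping_potential
    return phi_joule / E

# Illustrative numbers: light of 6.0e14 Hz, measured stopping potential 0.24 V.
print(work_function_eV(6.0e14, 0.24))  # ~2.24 eV
```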
The experimental observation of the photoelectric effect leads to the conclusion that the energy in light is not spread out over wavefronts but is concentrated in small packets called photons. All photons of a particular frequency have the same energy. A change in the intensity of the incident light will change the number of photoelectrons emitted but not their energies. The higher the frequency of the incident light, the higher will be the kinetic energy of the photoelectrons. These observations confirm the particle properties of light waves.
1.4 COMPTON EFFECT:
When x-rays are scattered by a solid medium, the scattered x-rays will normally have the same frequency or energy. This is a case of elastic or coherent scattering. However, Compton observed that in addition to the scattered x-rays of the same frequency, there existed some scattered x-rays of a slightly longer wavelength (i.e., lower frequency or lower energy). This phenomenon, in which the wavelength of x-rays shows an increase after scattering, is called the Compton effect.
Compton explained the effect on the basis of the quantum theory of radiation. Considering radiation to be made up of photons, he applied the laws of conservation of energy and momentum to the interaction of a photon with an electron. Consider an x-ray photon of energy hν incident on an electron at rest (Fig. 1.6). After the interaction, the x-ray photon gets scattered at an angle θ with its energy changed to a value hν′, and the electron which was initially at rest recoils at an angle φ. Applying the conservation laws leads to an expression for the change in wavelength,

λ′ − λ = (h/m₀c)(1 − cos θ)    (1.11)

where m₀ is the rest mass of the electron. Experimental measurement of the wavelength of the scattered x-rays is indeed in agreement with equation (1.11), thus providing further confirmation of the photon model.
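Equation (1.11) gives shifts of the order of picometres, which is why the effect is noticeable only for x-rays; a quick evaluation:

```python
import math

H = 6.63e-34   # Planck's constant, J s
M0 = 9.11e-31  # electron rest mass, kg
C = 3.0e8      # speed of light, m/s

def compton_shift(theta_deg):
    """Increase in x-ray wavelength (m) after scattering through theta degrees."""
    return (H / (M0 * C)) * (1 - math.cos(math.radians(theta_deg)))

# The shift grows with scattering angle and is maximum for back-scattering.
print(compton_shift(90.0))   # ~2.43e-12 m
print(compton_shift(180.0))  # ~4.85e-12 m
```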
Thus, Planck's theory of radiation, the photoelectric effect and the Compton effect are experimental evidence in favour of the quantum theory of radiation.
1.5 MATTER WAVES AND DE BROGLIE’S HYPOTHESIS
Quantum theory and the theory of relativity are the two important concepts that led to the development of modern physics. The quantum theory was first proposed by Planck to explain and overcome the inadequacies of classical theories of black body radiation. The consequences were very spectacular. Louis de Broglie made the suggestion that particles of matter, like electrons, might possess wave properties and hence exhibit dual nature. His hypothesis was based on the following arguments:
Planck's theory of radiation suggests that energy is quantized and is given by

E = hν    (1.12)

where ν is the frequency associated with the radiation. Einstein's mass-energy relation states that

E = mc²    (1.13)

Combining the two equations, it can be written as

E = hν = mc²

The momentum associated with the photon is then p = mc = hν/c = h/λ. De Broglie suggested that the same relation should hold for particles of matter, so that a particle of mass m moving with velocity v is associated with a wavelength

λ = h/mv = h/p

called the de Broglie wavelength. The concept of matter waves aroused great interest and several physicists launched experiments designed to test the hypothesis. Heisenberg and Schrodinger proceeded to develop mathematical theories whereas Davisson and Germer, G.P. Thomson and Kikuchi attempted experimental verification.
1.5.1 Davisson-Germer experiment
The hypothesis of de Broglie was verified by the electron diffraction experiment conducted by Davisson and Germer in the United States. The experimental set up used by them is shown in Fig. 1.7.
Fig. 1.7 Experimental arrangement for the Davisson-Germer experiment.
The apparatus consists of a filament heated with a small a.c. power supply to produce thermionic emission of electrons. These electrons are attracted towards an anode in the form of a cylinder with a small aperture, maintained at a finite positive potential with respect to the filament. They pass through the narrow aperture, forming a fine beam of accelerated electrons. This electron beam was made incident on a single crystalline sample of nickel. The electrons scattered at different angles were counted using an ionization counter as a detector. The experiment was repeated by recording the scattered electron intensities at various positions of the detector for different accelerating potentials (Fig. 1.8).
Fig. 1.8 Scattered electron intensity maps at different accelerating potentials. The vertical axis represents the direction of the incident electron beam and φ is the scattering angle. The radial distance from the origin at any angle represents the intensity of scattered electrons.

1.5.2 G.P. Thomson's experiment
G.P. Thomson studied the diffraction of accelerated electrons transmitted through a thin foil of aluminium. The experimental arrangement is shown in Fig. 1.9.
Fig. 1.9 Experimental arrangement of G.P. Thomson's experiment.
He allowed a beam of accelerated electrons to fall on the aluminium foil and observed a diffraction pattern consisting of a series of concentric rings around the direction of the incident beam. This pattern was similar to the Debye-Scherrer pattern obtained for aluminium using x-ray diffraction. Using the data available on aluminium, he calculated the wavelength of the electrons using Bragg's equation,

nλ = 2d sin θ

He also calculated the wavelength expected from de Broglie's relation for the accelerating potential used. The value of wavelength calculated from the two equations matched well, thereby experimentally proving de Broglie's relation.
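The de Broglie side of the comparison is easy to reproduce; the sketch below uses the standard non-relativistic relation λ = h/√(2meV) (the 54 V value is the well-known Davisson-Germer setting, quoted here only for illustration):

```python
import math

H = 6.63e-34   # Planck's constant, J s
M = 9.11e-31   # electron mass, kg
E = 1.602e-19  # electronic charge, C

def de_broglie_wavelength(V):
    """de Broglie wavelength (m) of an electron accelerated through V volts."""
    return H / math.sqrt(2 * M * E * V)

# At 54 V the wavelength is comparable to atomic spacings in a nickel crystal.
print(de_broglie_wavelength(54.0))  # ~1.67e-10 m
```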
A similar experiment was conducted by Kikuchi in Japan in which he obtained electron diffraction pattern by passing an electron beam through a thin foil of mica to confirm the validity of de Broglie’s relation.
The wave nature of particles is not restricted to electrons. Any particle with a momentum p has a de Broglie wavelength equal to (h/p). Neutrons produced in nuclear reactors possess energies corresponding to wavelengths of the order of 0.1 nm. These particles should also be suitable for diffraction by crystals. Neutrons from a nuclear reactor are slowed down to thermal energies of the order of kT and used for diffraction and interference experiments. The results agree well with the de Broglie relation. Since neutrons are uncharged particles, they are particularly useful in certain situations for diffraction studies. Neutron beams have also been used as probes to investigate the magnetic properties of nuclei.
1.5.3 Wave packet and de Broglie waves
We have seen that moving particles may be represented by de Broglie waves. The amplitude of these de Broglie waves does not represent any parameter directly describing the particle but is related to the probability of finding the particle at a particular place at a particular time. Hence, we cannot describe de Broglie waves with a simple wave equation of the type

y = A sin(ωt − kx)
Instead, we have to use an equation representing a group of waves. In other words, a wave packet consisting of waves of slightly differing wavelengths may represent the moving particle. Superposition of these waves constituting the wave packet results in the net amplitude being modified, thereby defining the shape of the wave group. The phase velocity of individual waves depends on the wavelength. Since the wave group consists of waves with different wavelengths, all the waves do not proceed together and the wave group has a velocity different from the phase velocities of the individual waves. Hence, de Broglie waves may be associated with group velocity rather than the phase velocity.
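The claim that the group, rather than the phase, velocity follows the particle can be checked numerically. The sketch below assumes the standard free-particle dispersion ω = ħk²/2m (a textbook relation, not derived in this extract):

```python
HBAR = 1.055e-34  # reduced Planck's constant, J s
M = 9.11e-31      # electron mass, kg

def omega(k):
    """Free-particle dispersion: omega = hbar*k^2/(2m)."""
    return HBAR * k**2 / (2 * M)

def phase_velocity(k):
    return omega(k) / k

def group_velocity(k, dk=1e3):
    # Numerical derivative d(omega)/dk
    return (omega(k + dk) - omega(k - dk)) / (2 * dk)

k = 1e10  # wave number (1/m), typical of an electron
print(phase_velocity(k))  # ~5.8e5 m/s
print(group_velocity(k))  # ~1.16e6 m/s = hbar*k/m, the particle velocity (twice the phase velocity)
```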
1.5.4 Characteristics of matter waves
1. Matter waves are associated with a moving body.
2. The wavelength of matter waves is inversely proportional to the velocity with which the body is moving. Hence, a body at rest has an infinite wavelength and one traveling with a high velocity has a shorter wavelength.
3. The wavelength of matter waves also depends on the mass of the body and decreases with increase in mass. For this reason, the wave-like behaviour of heavier bodies is not very evident, whereas the wave nature of subatomic particles can be observed experimentally.
4. A wave is normally associated with some quantity that varies periodically with the frequency of the wave. For example, in a water wave, it is the height of the water surface; in a sound wave it is the pressure and in an electromagnetic wave, it is the electric and magnetic fields that vary periodically. But in matter waves, there is no physical quantity that varies periodically. We use a wave function to define matter waves and this wave function is related to the probability of finding the particle at any place at any instant, which varies periodically.
5. Matter waves are represented by a wave packet made up of a group of waves of slightly differing wavelengths. Hence, we talk of group velocity of matter waves rather than the phase velocity. The group velocity can be shown to be equal to the particle velocity.
6. Matter waves show properties similar to other waves. For example, a beam of accelerated electrons produces interference and diffraction effects similar to an electromagnetic wave of same wavelength.

1.7 HEISENBERG’S UNCERTAINTY PRINCIPLE:
1.7.1 Origin and nature of the Principle:
When we assign wave properties to particles, there is a limitation to the accuracy with which we can measure properties like position and momentum.
Fig. 1.10 A wave packet with an extension Δx.
Consider a wave packet as shown in Fig. 1.10. The particle to which this wave packet corresponds may be located anywhere within the wave packet at any instant. The probability density suggests that it is most likely to be found in the middle of the wave packet. However, there is a finite probability of finding the particle anywhere within the wave packet.

If the wave packet is smaller in extension, the position of the particle can be specified more precisely. But the wavelength of the waves will not be well defined in a narrow wave packet. Since wavelength is related to momentum through de Broglie's relation, the momentum is not precisely known. On the other hand, a wave packet with a large extension can have a more clearly defined wavelength, and hence momentum, at the cost of the knowledge about the position. This leads to the conclusion that it is impossible to know both the position and the momentum of an object precisely at the same time. This is known as the uncertainty principle.
For a wave packet of extension Δx, assuming the uncertainties to be the standard deviations in the respective quantities, it may be shown that the minimum value of the product of such deviations is given by

Δx · Δp = ħ/2

This minimum value of the product of uncertainties holds for the case of a Gaussian distribution of the wave functions. Since wave packets in general do not have Gaussian forms, the uncertainty relation becomes

Δx · Δp ≥ ħ/2

This equation states that the product of the uncertainties in the position and momentum of a particle can never be less than ħ/2. It may be mentioned that these uncertainties are not due to the limitations of the precision of the measuring methods or measuring instruments, but due to the nature of the quantities involved.
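A short numerical sketch shows why the relation matters only at small scales (the masses and confinement lengths below are assumed illustrative values):

```python
HBAR = 1.055e-34  # reduced Planck's constant, J s

def min_velocity_uncertainty(mass, dx):
    """Minimum velocity uncertainty (m/s) for a particle confined to a region of size dx (m)."""
    return HBAR / (2 * mass * dx)

# An electron confined to atomic dimensions has a huge velocity uncertainty;
# a 1 g bead confined to 1 um has an utterly negligible one.
print(min_velocity_uncertainty(9.11e-31, 1e-10))  # ~5.8e5 m/s (electron in an atom)
print(min_velocity_uncertainty(1e-3, 1e-6))       # ~5.3e-26 m/s (macroscopic bead)
```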

1.7.2 Illustration of the principle:
Consider the following 'thought experiment' to illustrate the uncertainty principle. Imagine an electron being observed using a microscope (Fig. 1.11).
Fig. 1.11 Schematic diagram of the experimental set up to study the uncertainty principle.
The process of observation involves a photon of wavelength λ incident on the electron and getting scattered into the microscope. The event may be considered as a two-body problem in which a photon interacts with an electron. The change in the velocity of the photon during the interaction may be anything between zero (for grazing incidence) and 2c (for head-on collision and reflection). The average change in the momentum of the photon may be written as equal to (hν/c) or (h/λ). This difference in momentum is carried by the recoiling electron, which was initially at rest. The change or uncertainty in the momentum of the electron may thus be written as (h/λ). At the same time, the position of the electron can be determined to an accuracy limited by the resolving power of the microscope, which is of the order of λ. Hence, the product of the uncertainties in position and momentum is of the order of h. This argument appears to imply that the uncertainties are associated with the measuring process. However, the illustration only estimates the accuracy of measurement; the uncertainty is inherent in the nature of the moving particles involved.
1.7.3 Physical significance of uncertainty principle:
The uncertainty principle is a consequence of wave-particle duality. It states that it is impossible to know both the position and the momentum of an object exactly at the same time.
We can try to estimate the product of the uncertainties with the help of illustrations as the one mentioned above. The principle is based on the assumption that a moving particle is associated with a wave packet, the extension of which in space accounts for the uncertainty in the position of the particle. The uncertainty in the momentum arises due to the indeterminacy of the wavelength because of the finite size of the wave packet. Thus, the uncertainty principle is not due to the limited accuracy of measurement but due to the inherent uncertainties in determining the quantities involved. But we can still define the position where the probability of finding the particle is maximum and also the most probable momentum of the particle.
1.7.4 Applications of uncertainty principle:
The uncertainty principle has far reaching implications. In fact, it has been very useful in explaining many observations which cannot be explained otherwise. A few of the applications of the uncertainty principle are worth mentioning.
(a) Diffraction of a beam of electrons: Diffraction of a beam of electrons at a slit is the effect of uncertainty principle. As the slit is made narrower, thereby reducing the uncertainty in the position of the electrons in the beam, the beam spreads even more indicating a larger uncertainty in its velocity or momentum.
Fig. 1.12 Diffraction at a single slit.
Figure 1.12 shows the diffraction of an electron beam by a narrow slit of width Δx. The beam traveling along OX is diffracted along OY through an angle θ. Due to the wave nature of the electron, we observe Fraunhofer diffraction on the screen placed along XY. The accuracy with which the position of the electron is known is Δx, since it is uncertain from which place in the slit the electron passes. According to the theory of diffraction, the first minimum of the pattern satisfies

Δx sin θ = λ

After diffraction, the electron acquires a momentum component along XY of up to p sin θ, which represents the uncertainty Δp in its momentum. Hence Δx · Δp ≈ (λ/sin θ)(p sin θ) = λp = h, consistent with the uncertainty principle.
1.8 WAVE MECHANICS:
Quantum theory is based on the quantization of energy. It deals with the particle nature of radiation. It implies that addition or liberation of energy takes place between discrete energy levels. It assigns particle status to a packet of energy by calling it a 'quantum of energy' or 'photon' and treats the interaction of radiation with matter as a two-body problem. On the other hand, de Broglie's hypothesis and the concept of matter waves led to the development of a different formulation called 'wave mechanics', which deals with the wave properties of material particles. It was later shown that quantum mechanics and wave mechanics are mathematically identical and lead to the same conclusions.
1.8.1 Characteristics of wave function:
Waves in general are associated with quantities that vary periodically. For example, water waves involve the periodic variation of the height of the water surface at a point. Similarly, sound waves are associated with periodic variations of the pressure.

In the case of matter waves, the quantity that varies periodically is called the 'wave function'. The wave function, represented by ψ, associated with matter waves has no direct physical significance. It is not an observable quantity. But the value of the wave function is related to the probability of finding the body at a given place at a given time. The square of the absolute magnitude of the wave function of a body, evaluated at a particular place at a particular time, is proportional to the probability of finding the body at that place at that instant.
The wave functions are usually complex. The probability in such a case is taken as |ψ|² = ψ*ψ, i.e. the product of the wave function with its complex conjugate. Since the probability of finding the body somewhere is finite, the total probability over all space must be equal to certainty:

∫ |ψ|² dV = 1    (1.39)

Equation (1.39) is called the normalization condition and a wave function that obeys it is said to be normalized. Further, ψ must be single valued since the probability can have only one value at a particular place and time. Since the probability can have any value between zero and one, the wave function must be continuous. Momentum being related to the space derivative of the wave function, the derivative of ψ must also be continuous.
1.8.2 Physical significance of wave function:
We have already seen that the wave function has no direct physical significance. However, it contains information about the system it represents, and this can be extracted by appropriate methods. Even though the wave function itself is not directly an observable quantity, the square of its absolute value is intimately related to the moving body and is known as the probability density. The probability density is the quantum mechanical measure of the likelihood of finding the body at a particular position at a particular time. The wave function carries information about the particle's wave-like behavior. It also provides information about the momentum and energy of the particle at any instant of time.

1.8.4 Eigen values and eigen functions:
These terms come from German and mean proper or characteristic values or functions respectively. The values of energy for which Schrodinger's equation can be solved are called 'eigen values' and the corresponding wave functions are called 'eigen functions'. The eigen functions possess all the characteristic properties of wave functions in general (see section 1.8.1).
1.9 APPLICATIONS OF SCHRODINGER’S EQUATION:
1.9.1 Case of a free particle:
A free particle is defined as one which is not acted upon by any external force that modifies its motion. Hence, the potential energy U in Schrodinger's equation is a constant and may be assumed to be zero. The equation then admits plane wave solutions of the form

ψ = A e^(ikx) + B e^(−ikx)

Solving for the constants A and B poses some difficulties because we cannot apply any boundary conditions on the wave function, as it represents a single wave which is not localized and is not normalizable. Since the solution has not imposed any restriction on the value of k, the free particle is permitted to have any value of energy given by the equation

E = ħ²k²/2m

Since the total energy is purely kinetic, the momentum of the particle would be p = ħk = h/λ. This is just what we would expect, since we have constructed the Schrodinger equation to yield the solution for the free particle corresponding to a de Broglie wave.
1.9.2 Particle in a box:
The simplest problem for which Schrodinger’s time independent equation can be applied and solved is the case of a particle trapped in a box with impenetrable walls.
Consider a particle of mass m and energy E travelling along x-axis inside a box of width L. The particle is thus restricted to move inside the box by reflections at x=0 and x=L (Fig. 1.13).
Fig. 1.13 Schematic for a particle in a box. The height of the wall extends to infinity.
The particle does not lose any energy when it collides with the walls and hence the total energy of the particle remains constant. The potential energy of the particle is considered to be zero inside the box and infinite outside. Since the total energy of the particle cannot be infinite, it is restricted to move within the box. The example is an oversimplified case of an electron acted upon by the electrostatic potential of the ion cores in a crystal lattice.
Since the particle cannot exist outside the box, the wave function ψ = 0 outside the box, and we have to evaluate the wave function inside it. Schrodinger's equation (1.48) becomes

d²ψ/dx² + (2mE/ħ²) ψ = 0

with the general solution

ψ = A sin kx + B cos kx    (1.52)

where A and B are constants and k² = 2mE/ħ².
Applying the boundary condition that ψ = 0 at x = 0, equation (1.52) gives B = 0, so that ψ = A sin kx. Applying the second boundary condition, ψ = 0 at x = L, gives sin kL = 0, i.e. kL = nπ with n = 1, 2, 3, … The energy of the particle is therefore quantized:

En = n²π²ħ²/2mL² = n²h²/8mL²
Fig. 1.14 shows the variation of the wave function inside the box for different values of n, and Fig. 1.15 shows the probability densities of finding the particle at different places inside the box for different values of n.
Fig. 1.15 Probability function as a function of position.
Thus, wave mechanics suggests that the probability of finding the particle in the lowest energy level is maximum at the centre of the box, which is in agreement with the classical picture. However, the probability of finding the particle in higher energy states shows maxima and minima at different positions inside the box, in contrast with the classical prediction of a uniform probability.
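The quantized energies are easy to tabulate; a sketch for an electron in a box of atomic dimensions (the 0.1 nm width is an assumed illustrative value):

```python
H = 6.63e-34   # Planck's constant, J s
M = 9.11e-31   # electron mass, kg
E = 1.602e-19  # electronic charge, C (for J -> eV conversion)

def box_energy_eV(n, L):
    """Energy (eV) of level n for an electron in a box of width L (m): En = n^2 h^2 / (8 m L^2)."""
    return n**2 * H**2 / (8 * M * L**2) / E

# Energies grow as n^2.
for n in (1, 2, 3):
    print(n, box_energy_eV(n, 1e-10))  # ~37.6, 150.6, 338.8 eV
```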

The above equation represents the dependence of tunneling probability on the width of the barrier and the energy of the particle.
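The equation referred to above is not reproduced in this extract; a commonly quoted approximate form, assumed here, is T ≈ exp(−2αL) with α = √(2m(U−E))/ħ, where U is the barrier height and L its width. A minimal sketch of the exponential width dependence:

```python
import math

HBAR = 1.055e-34  # reduced Planck's constant, J s
M = 9.11e-31      # electron mass, kg
E = 1.602e-19     # electronic charge, C

def tunneling_probability(energy_eV, barrier_eV, width_m):
    """Approximate transmission T ~ exp(-2*alpha*L), alpha = sqrt(2m(U-E))/hbar (assumed form)."""
    alpha = math.sqrt(2 * M * (barrier_eV - energy_eV) * E) / HBAR
    return math.exp(-2 * alpha * width_m)

# The probability falls off exponentially with barrier width.
print(tunneling_probability(1.0, 10.0, 1e-10))  # ~0.046
print(tunneling_probability(1.0, 10.0, 5e-10))  # ~2e-7
```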
1.9.5 Examples of tunneling across a finite barrier:
There are a few examples in nature of tunneling across a thin finite potential barrier. These observations are, in fact, evidence in favour of the theory of quantum mechanical tunneling. Let us consider a few of them.
(a) Alpha decay: Alpha particles are made up of two protons and two neutrons. In radioactive decay, the alpha particle must free itself from the attractive nuclear force and penetrate through a barrier of repulsive coulombic potential to be emitted out of the nucleus (Fig. 1.20). A calculation of the energy of the particle inside the nucleus and the measurement of the energy of the emitted alpha particle indicate that the particle cannot have surmounted the coulombic potential barrier but must have penetrated through it.

(b) Ammonia inversion: In a molecule of ammonia, the three hydrogen atoms form a plane with the nitrogen atom placed symmetrically at a finite distance from the plane. It has been observed experimentally that the nitrogen atom oscillates between two positions on either side of the plane (Fig. 1.21).
Classical calculations show that the nitrogen atom cannot perform such an oscillation, since the hydrogen atoms form a barrier that prevents it from moving through the plane they define. However, the nitrogen atom oscillates across the plane with a frequency higher than 10¹⁰ per second. This can be explained only on the basis of the tunneling process.

(c) Zener and tunnel diodes: These are diodes made of heavily doped semiconductors with special characteristics. The current-voltage characteristics of these diodes can be explained only on the basis of the quantum mechanical tunneling process. The high speed of operation of these devices can also be attributed to tunneling, since the movement of charge carriers would otherwise be by diffusion, which is a very slow process. The scanning tunneling microscope is another device operating on the principle of tunneling.
(d) Frustrated total internal reflection: Figure 1.22 shows a beam of light totally reflected from the surface of a glass prism. If a second glass prism is brought close to the first, the beam emerges through the second prism, indicating tunneling of light through the glass surfaces which were otherwise acting as barriers.
Fig. 1.22 Demonstration of frustrated total internal reflection.
1.9.6 Harmonic oscillator:
When a body vibrates about an equilibrium position, the body is said to be executing harmonic motion. We come across many examples of such motion, like the vibration of a spring that is stretched and released and the vibrations of atoms in a crystal lattice. Whenever a system is disturbed from its equilibrium position, it can come back to its original position only under the influence of a restoring force. Hence, the presence of a restoring force is the essential condition for harmonic oscillations.
1.9.7 Practical applications of Schrodinger’s wave equation:
Real life situations are much different from the ones considered while deriving Schrodinger's wave equation. This is especially true when one is analyzing the motion of a particle like an electron traveling at velocities comparable to that of light. Relativistic modification of Schrodinger's equation and its solution are complex. Further, the boundary condition of an infinitely high potential barrier is never encountered. In the case of metals, conduction electrons move in the crystal lattice under the influence of the finite potentials of the ion cores. The potential energy due to external forces acting on the particle may also be a function of its position and of time. Incorporation of these factors while formulating and solving Schrodinger's wave equation has led to accurate prediction of the behaviour of subatomic, atomic, molecular and other microscopic systems.

8.6 ATTENUATION:

A fiber with lower attenuation will allow more power to reach its receiver than a fiber with higher attenuation. If Pin is the input power and Pout is the output power after passing through a fiber of length L, the mean attenuation constant α of the fiber, in units of dB/km, is given by
α = (10/L) log10(Pin/Pout)    (8.18)
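Equation (8.18) can be applied directly; in the sketch below the launched and received powers are invented illustrative values:

```python
import math

def attenuation_dB_per_km(p_in_mW, p_out_mW, length_km):
    """Mean attenuation constant (dB/km) from equation (8.18)."""
    return (10.0 / length_km) * math.log10(p_in_mW / p_out_mW)

# Illustrative numbers: 1 mW launched, 0.5 mW received over 10 km.
print(attenuation_dB_per_km(1.0, 0.5, 10.0))  # ~0.30 dB/km
```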
Attenuation can be caused by several factors, but is generally placed in one of two categories: intrinsic or extrinsic.
Intrinsic attenuation occurs due to inherent characteristics of the fiber. It is caused by impurities incorporated into the glass during the manufacturing process. When a light signal hits an impurity atom in the fiber, one of two things will occur: it will get scattered or it will be absorbed. Rayleigh scattering accounts for the majority (about 96%) of attenuation in optical fiber. If the scattered light is incident on the core-cladding interface at an angle greater than the critical angle for total internal reflection, the light will be totally internally reflected and no attenuation occurs. If the angle of incidence is less than the critical angle, the light will be diverted out of the core and attenuation results. Intrinsic attenuation can also occur due to absorption of the signal in the fiber. The radiation may be absorbed by the impurity atoms and re-emitted by spontaneous emission or converted into some other form of energy.
Extrinsic attenuation can be caused by two mechanisms, namely macro-bending and micro-bending of the fiber. All optical fibers have a minimum bend radius specification that should not be exceeded. This is a restriction on how much bend a fiber can withstand before experiencing problems in optical performance or mechanical reliability. A bend in the fiber may result in the modification of the angle of incidence on the core-cladding interface and hence may lead to signal loss. A bend may also induce strain in the fiber, which affects the refractive index locally. Micro-bending is a small scale distortion of the core-cladding interface in a localized region. This may be related to pressure, temperature, tensile stress or crushing force to which the fiber might have been subjected.
8.7 APPLICATIONS OF OPTICAL FIBERS:
Optical fibers find extensive applications in communication of optical signals. Transmission using optical frequencies has the advantage of higher speed of transmission and hence greater volume of information carried. They find application in all experimental arrangements where light is transmitted from one point to another without loss. A few important applications of fiber optics are mentioned here.
8.7.1 Fiber optic communication:
A conventional communication system consists of a transmitter which is used to transmit a carrier wave modulated by the information signal, a transmission medium and a receiver. In a fiber optic communication system, the carrier is a light wave and the optical fiber acts as the transmission medium. Such a system has the following advantages:
(i) Since the optical carrier frequencies are much higher than radio or microwave frequencies, information can be transmitted at a higher rate than in the case of radio or microwave communication. In other words, a greater volume of information can be carried over the fiber optic system.
(ii) Optical fibers, because of their flexibility and light weight, can be handled more easily.
(iii) Optical fibers are usually fabricated from electrically insulating materials and hence are safe.
(iv) In conventional communication systems, electrical interference leads to signal from a parallel line being picked up. Such cross talk is absent in optical fibers even when they are bunched together in large numbers.
(v) Optical fibers being insulators are immune to interference as there is no induction of electromagnetic noise.
(vi) Attenuation of signal or transmission loss is very low and optical fiber cables with losses as low as 0.01 dB km-1 have been fabricated.
(vii) The life span of optical fibers is considerably high at 25-30 years as compared to around 15 years of life of copper cables.
(viii) Optical fiber links are more economical compared to copper cables because of the longer life span, higher resistance to temperature and corrosion and ease of maintenance of optical fibers.
8.7.2 Applications in medicine and industry:
Optical fibers are also useful for medical applications for visualization of internal portions of the human body. They can also be used for the examination of visually inaccessible regions for engineering applications. A typical example of a flexible fiberscope (endoscope) is shown in Fig.8.18.
Fig. 8.18 Fiber optic endoscope.
The use of lasers in combination with optical fibers is being exploited not only for the observation of internal portions of the body but also in the treatment of malignant tissues. Similar equipment is also useful for examining parts of machinery which are otherwise inaccessible to observation.
Optical fibers also find application in the fabrication of sensors which are devices used to measure and monitor physical quantities such as displacement, pressure, temperature, flow rate etc.

EXERCISE:
8.1 What are the advantages of optical communication over the other conventional types of communication? (March 99).
8.2 Explain the terms spontaneous, stimulated emissions and population inversion. Briefly explain the construction and working of a ruby laser with energy level diagram. (March 99).
8.3 With a neat diagram, explain numerical aperture, and ray propagation in an optical fiber. Describe the types of optical fibers and modes of transmission. (March 99).
8.4 Describe the principle on which optical fiber works. Mention their applications. (August 99).
8.5 What is lasing action? Describe the working of He-Ne laser with the help of energy level diagram. Mention industrial applications of lasers. (August 99).
8.6 Distinguish between luminescence, fluorescence and phosphorescence. How are these phenomena explained on the basis of energy band picture? (March 2000).
8.7 Explain the terms spontaneous emission, stimulated emission and population inversion. With energy level diagram explain the working mechanism of a He-Ne gas laser. (August 2000).

8.8 Explain types of optical fibers. Mention advantages and limitations of optical communication system. (August 2000).
8.9 Explain how transmission of light takes place in optical fibers. Discuss different types of optical fibers. (March 01).
8.10 Explain how lasing action takes place in He-Ne laser. (August 01).
8.11 Explain the principle of optical fibers. Derive an expression for the numerical aperture. Calculate the numerical aperture of a given optical fiber if the refractive indices of core and cladding are 1.623 and 1.522 respectively. (August 01).
8.12 Explain the conditions required for laser action. Describe the construction and working of a He-Ne laser with necessary energy level diagram.
(March 02).
8.13 Explain the terms modes of propagation, cone of acceptance and numerical aperture. (March 02).
8.14 Calculate the numerical aperture and angle of acceptance for an optical fiber having refractive indices of 1.565 and 1.498 for core and cladding respectively. (March 02).
8.15 Discuss the point to point optical fiber communication system. Mention the advantages of optical fiber communication over the conventional communication system. (Feb 2003).

8.16 Obtain an expression for energy density of photons in terms of Einstein’s coefficients. (Aug 2003).
8.17 Explain the construction and working of He-Ne gas laser. (Aug 2003).
8.18 Write a note on holography. (Aug 2003).
8.19 An optical glass fiber of refractive index 1.450 is to be clad with another glass to ensure total internal reflection that will contain light travelling within 5o of the fiber axis. What maximum index of refraction is allowed for the cladding? (Feb 2004).
8.20 Discuss the applications of laser. (Aug 2004).
8.21 Find the ratio of the populations of two energy levels, of which one corresponds to a metastable state, if the wavelength of light emitted at 330 K is 632.8 nm. (Aug 2004). [Note: It is not possible to solve this example if we assume a level to be metastable, since we cannot then use the Boltzmann relation.]
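Several of the numerical exercises above (8.11, 8.14, 8.19) follow from the standard step-index relations NA = √(n₁² − n₂²) and sin θₐ = NA; a sketch that evaluates them:

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA = sqrt(n1^2 - n2^2) for a step-index fiber."""
    return math.sqrt(n_core**2 - n_clad**2)

def acceptance_angle_deg(n_core, n_clad):
    """Half-angle of the acceptance cone (degrees) for a fiber in air: sin(theta_a) = NA."""
    return math.degrees(math.asin(numerical_aperture(n_core, n_clad)))

# Exercise 8.11: n1 = 1.623, n2 = 1.522
print(numerical_aperture(1.623, 1.522))    # ~0.56
# Exercise 8.14: n1 = 1.565, n2 = 1.498
print(numerical_aperture(1.565, 1.498))    # ~0.45
print(acceptance_angle_deg(1.565, 1.498))  # ~26.9 degrees
# Exercise 8.19: rays within 5 deg of the axis strike the interface at 85 deg;
# total internal reflection requires n2 <= n1 * sin(85 deg).
print(1.450 * math.sin(math.radians(85.0)))  # ~1.444, the maximum cladding index
```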

Materials play a very important role in every sphere of life. The naming of the stages of human civilization as the stone age, bronze age, iron age, etc., indicates the importance of materials. The ever increasing demand for materials with specific characteristics for diverse applications, and the constant search for better and more efficient alternatives, has made the study of materials important. Apart from the conventional materials that have been in use for ages, new materials are being designed and synthesized with attractive properties. This chapter gives a glimpse of the current trends in modern engineering materials and also a few modern techniques of analysis.
9.1 CERAMICS
Ceramics are compounds between metallic and non-metallic elements. The term ceramics comes from the Greek word "keramikos", which means burnt stuff. These materials are usually bad conductors of heat and electricity. They are resistant to high temperatures and corrosive environments, but are usually brittle. Chemically, they are usually oxides, nitrides or carbides.
Depending on the application for which they are used, ceramics are classified as glasses, clay products, refractories and cements. The properties and applications of the various categories of ceramics are discussed below.
9.1.1 Glasses:
Glasses are non-crystalline silicates. Oxides like Na2O, CaO, Al2O3, B2O3, etc. are added to modify their properties to suit various applications. They are optically transparent and easy to fabricate. Glasses are produced by heating the raw materials to a temperature above the melting point of all the constituents. In order to ensure transparency, the melt should be mixed homogeneously and made free from bubbles. Upon cooling, the melt becomes more and more viscous, passes through a state of supercooled liquid and then solidifies into glass.
The process of formation introduces internal stresses, called thermal stresses. These defects are eliminated by heat treatments like annealing and tempering. Annealing is a process in which the glass is heated to a temperature called the annealing point. At this temperature, atomic diffusion is rapid and any residual stress may be removed. The glass is then slowly cooled to room temperature. The temperature at which the transition occurs from supercooled melt into glass is called the glass transition temperature. In tempering, the glass is heated to a temperature above the glass transition temperature but below the softening point. It is then cooled to room temperature in a jet of air or in an oil bath. Tempered glasses are used for applications where high strength is needed, including doors, automobile windshields, eye lenses, etc.
9.1.2 Clay products:
Clay is one of the most widely used raw materials. Clay products include building bricks, tiles, porcelain tableware and sanitary ware. Clays are alumino-silicates containing chemically bound water. For applications, they are mixed with minerals like flint and quartz and a flux like feldspar. For example, porcelain used for tableware may have a composition of 50% clay, 25% quartz and 25% feldspar.
The formation of clay products includes grinding the raw materials into a fine powder and mixing with water to a desirable consistency. They are drawn into the required shape by the extrusion technique or moulded using a porous mould through which water evaporates to leave the solid material. The product so obtained may still contain some water and needs to be dried. The process of drying is assisted by a controlled air flow at a suitable temperature. The product may also be subjected to firing, by heating it at a temperature in the range 900-1400 °C, to enhance its mechanical strength. During this heat treatment, the formation of liquid glass may lead to filling up of pores with an increase in the density of the material. This process is known as vitrification.
9.1.3 Refractories and abrasives:
Refractories are materials capable of withstanding high temperatures without melting or decomposing. They also remain inert to the atmosphere. They are useful in the manufacture of bricks and as lining material for furnaces. Chemically, they are mixtures of oxides like SiO2, Al2O3, MgO, etc. For commercial applications, the raw material with the required composition is taken in the form of a powder. Upon firing, the particles undergo bonding, thereby increasing the strength. Porosity is an important parameter and must be controlled to obtain suitable material properties.
Abrasive ceramics are materials which are known for their hardness or wear resistance. They possess a high degree of toughness. Diamond is the best known abrasive with a high value for its hardness, but is relatively expensive. Other common ceramic abrasives include silicon carbide, tungsten carbide, aluminum oxide, silica, etc. They are used in several forms depending on the application as a powder, as a coating bonded to grinding wheels, etc. Grinding, lapping and polishing wheels make use of abrasive powders in a liquid medium.
9.1.4 Cements:
There are some inorganic ceramic materials grouped as cements. They have a characteristic property of forming a paste when mixed with water, which subsequently hardens. Examples are Portland cement, plaster of paris, lime, etc. They are useful in solid and rigid structures as construction material.
9.1.5 Cermets:
A cermet is a composite material composed of ceramic (cer) and metallic (met) materials. A cermet is ideally designed to have the optimal properties of both a ceramic and a metal. Ceramics contribute basic physical properties such as a high melting point, chemical stability and, especially, oxidation resistance. The basic physical properties contributed by metals include ductility, high strength and high thermal conductivity. The metal is used as a binder for an oxide (such as alumina), boride or carbide. Generally, the metallic elements used are nickel, molybdenum and cobalt. Depending on the physical structure of the material, cermets can also be metal matrix composites, but cermets are usually less than 20% metal by volume.
Cermets are used in the manufacture of resistors (especially potentiometers), capacitors and other electronic components which may experience high temperatures. They are used for the high-temperature sections of jet engines as well as high temperature turbine blades. The spark plug, an electrical device used in some internal combustion engines to ignite the fuel by means of an electric spark, is another example of a cermet. Ceramic parts have been used in conjunction with metal parts as friction materials for brakes and clutches. Bioceramics are materials that can be used in the human body. They can be in the form of thin layers on metallic implants, composites with a polymer component, or even just porous networks, and they work well within the human body. Cermets are also used in dentistry as a material for fillings and prostheses.
9.2 COMPOSITE MATERIALS
A composite may be defined as a combination of two or more dissimilar materials, the properties of which are superior to those of the individual components. A majority of composites consist of two phases: the matrix, which is the continuous medium, and a second phase, namely the reinforcement, which is uniformly distributed in it. A familiar example is fiber glass, which consists of glass fibers in a resin matrix. Resin is light, durable and easy to mould but does not have sufficient strength and stiffness. On the other hand, glass fibers possess high strength and stiffness. By combining these two materials, it is possible to produce a new material with the useful properties of both components. Composites are also referred to as engineered materials since they are designed and produced as per requirement. Based on the type of matrix material used in the composite, they are classified into three categories:
a) Polymer matrix composites
b) Metal matrix composites
c) Ceramic matrix composites.
Based on the nature and shape of the reinforcement used, they are further classified as fiber reinforced composites, particle reinforced composites and flake reinforced composites.
The resultant properties of a composite depend on the properties of the matrix as well as the reinforcement material. The matrix performs the important function of binding and holding the reinforcement. It also protects the reinforcement from mechanical damage. The matrix material bears the load and transfers it to the reinforcement. The reinforcement phase should be distributed uniformly throughout the matrix. It should not react chemically with the matrix material even at the elevated temperatures of use. Generally, the reinforcement material should have a higher modulus of elasticity than the matrix material, but both should have similar coefficients of thermal expansion.
Fiber reinforced composites consist of very thin fibers of reinforcement material, usually 5 to 20 microns thick, distributed in a suitable matrix material. The fibers may be continuous, extending through the entire length of the sample, or discontinuous, with suitable lengths for specific requirements. Depending on the manner in which the fibers are packed within the matrix, they are further classified as oriented fiber composites and random fiber composites. The randomness of distribution of the fibers may be in two dimensions (planar randomness) or in three dimensions (volume randomness). These composites are often anisotropic, depending on the nature of alignment of the fiber phase in the material.
In particle-reinforced composites, the reinforcement is in the form of fine particles, usually smaller than 50 microns in size, constituting 10 to 50 percent of the total volume of the composite. The properties are strongly dependent on the size of the particulate matter and the inter-particle spacing. The load is carried by both the matrix and the reinforcement materials, more effectively so in the case of finer particles. Examples are refractory oxides (silica or alumina) in metallic matrices, and metals in polymers and ceramics.
Flake reinforced composites contain reinforcement material in the form of thin flakes or platelets, for example layer materials like mica and graphite distributed in a suitable matrix. They are very limited in applications.
Modern composite materials have made a significant impact on the present technological development of mankind. It is now possible to design materials with specific properties as per the requirements of the application. Composites may be designed to possess high strength and load bearing capacity. They can be lightweight materials with considerable resistance to corrosion and wear. They are therefore widely used in civil construction, automobiles and aeronautical engineering applications. However, they are relatively costly because of the technology needed in their synthesis. Further work is currently in progress to evolve procedures for their synthesis and also to collect design data on the materials.
9.3 SMART MATERIALS:
Smart materials have one or more properties that can be dramatically altered by varying the condition to which they are subjected. Most everyday materials have physical properties which cannot be significantly altered. A variety of smart materials already exist and are being studied extensively. These include piezoelectric materials, magneto-rheostatic materials, electro-rheostatic materials and shape memory alloys.
Each individual type of smart material has a different property which can be significantly altered, such as viscosity, volume, and conductivity. The property that can be altered influences what types of applications the smart material can be used for.
Piezoelectric materials have a unique property which makes them useful as a smart material. When a piezoelectric material is deformed, it produces a small but measurable electrical charge on the sample. Conversely, when an electrical signal is applied to a piezoelectric material it experiences a significant change in size. Hence, piezoelectric materials are most widely used as sensors.
Electro-rheostatic (ER) and magneto-rheostatic (MR) materials are fluids, which can experience a dramatic change in their viscosity. These fluids can change from liquid state to a solid substance when subjected to an electric field or magnetic field. The effect is completely reversible. They regain their original viscosity when the field is removed. MR fluids experience a viscosity change when exposed to a magnetic field, while ER fluids experience similar changes in an electric field. The composition of each type of smart fluid varies widely. The most common example of a magneto-rheostatic material is an oil consisting of suspended tiny iron particles. MR fluids are being developed for use in car shocks, damping washing machine vibration, prosthetic limbs, exercise equipment, and surface polishing of machine parts. ER fluids have mainly been developed for use in clutches and valves, as well as engine mounts designed to reduce noise and vibration in vehicles.
9.4 SHAPE MEMORY ALLOYS
Shape memory alloys are metals which exhibit two very unique properties, namely pseudo-elasticity and the shape memory effect. Arne Olander first observed these unusual properties in 1938, but no serious research work was done in the field of shape memory alloys until the 1960's. The most effective and widely used shape memory alloys include Nitinol (NiTi), CuZnAl and CuAlNi.
The two unique properties mentioned above are made possible through a solid state phase change which occurs in the shape memory alloy. A solid state phase change is one in which a molecular rearrangement occurs while the material remains a solid. In most shape memory alloys, a temperature change of only about 10 °C is sufficient to initiate the phase change. The two phases which occur in shape memory alloys are Martensite and Austenite.
Martensite is the low temperature phase of the material. It is relatively soft and may be easily deformed. The molecular structure in this phase is twinned. Austenite has a cubic structure and occurs at higher temperatures. The un-deformed Martensite phase is the same size and shape as the cubic Austenite phase on a macroscopic scale, so that no change in size or shape is visible in shape memory alloys until the Martensite is deformed.
Fig. 9.1 The two phases of shape memory alloys.
The shape memory effect is observed when a piece of shape memory alloy is cooled below a critical temperature (Fig. 9.1). At this stage the alloy is composed of Martensite in its twinned form. This sample can be easily deformed by applying a suitable load. The deformed Martensite is transformed to the cubic Austenite phase by heating it above a critical temperature. By cooling the alloy, the original twinned Martensite structure may be regained.
Pseudo-elasticity occurs in shape memory alloys, without a change in temperature, when the alloy is completely composed of Austenite. If the load on the shape memory alloy is increased, the Austenite gets transformed into Martensite simply due to the loading. When the loading is decreased, the Martensite begins to transform back to Austenite and the sample springs back to its original shape.
Some of the main advantages of shape memory alloys are their attractive mechanical strength and corrosion resistance. However, these alloys are expensive compared to other materials such as steel and aluminum, and they have poor fatigue properties; a steel component may be a hundred times more durable than an SMA element.
Shape memory alloys are being used for a variety of applications. They have been used for military, medical and robotics applications. Nitinol couplers are used in F-14 fighter planes to join hydraulic lines tightly. Shape memory alloys are also used in robotic actuators and manipulators to simulate human muscle motion. Some examples of applications in which pseudo-elasticity is used are eyeglass frames, medical tools, cellular phone antennae, orthodontic arch wires, etc.
9.5 MICROELECTROMECHANICAL SYSTEMS:
Microelectromechanical systems (MEMS) are systems or devices constructed out of materials that have a strong inter-relation between their electrical and mechanical behavior. For example, application of an electric signal leading to a mechanical change, or a mechanical change leading to the development of an electrical signal, as in the case of piezoelectric materials, is referred to as an electro-mechanical effect. When such devices are constructed on a micro-scale by processes similar to those employed for fabrication of integrated circuits, the resulting devices are called MEMS. Depending on the parameter that causes the change (stimulus) and the parameter that changes (response), MEMS are classified as sensors and actuators.
9.5.1 Sensors:
A sensor is a device in which a stimulus results in an electrical response. For example, piezoelectric materials have the property of developing electrical charges on their surfaces when subjected to a mechanical pressure. Hence, a piezoelectric material can be used in the construction of a sensor.
Sensors can be classified based on the nature of the stimulus used in the device. Thermal sensors are devices that sense a change in temperature and respond with a change in their electrical behavior. A thermo-resistive sensor has a sensing element whose electrical resistance is a strong function of temperature. A change in temperature results in a corresponding change in its electrical resistance. Hence, measurement of temperature is possible through a measurement of the electrical resistance. Similarly, a thermocouple consisting of a junction of two dissimilar metals generates an e.m.f. which depends on the temperature of the junction. Thus, a thermocouple can be used as a thermal sensor.
Mechanical sensors respond to the application of a force leading to changes in their shape or size, resulting in a corresponding change in their electrical behavior. Piezoelectric materials are examples of mechanical sensors (Fig.9.2).
Fig.9.2 Working of a piezoelectric sensor crystal sample (a) in the absence of applied force and (b) in the presence of applied force.
In these materials, application of a stress, compressive or tensile, results in the development of electric charges on the surface leading to a potential difference. The electric field so generated is a measure of the stress to which the sample is subjected. Strain gauges are also examples of mechanical sensors (Fig.9.3).
Fig.9.3 Schematic of a strain gauge (a) and the experimental set up (b).
A strain gauge consists of a very thin, long metal wire, arranged in the form of a meandering pattern and bonded to the surface under study. It can also be in the form of a thin film deposited on the surface. Any mechanical deformation of the surface results in a change in the dimensions of the metal wire, leading to a corresponding change in its electrical resistance. The strain gauge may form one of the arms of a Wheatstone's network to determine the change in resistance accurately. The change in resistance will be proportional to the stress to which the surface is subjected. Semiconductor materials, when used as Hall probes, work as magnetic sensors. There are a few materials which show a large magneto-resistance, a phenomenon in which application of a magnetic field results in changes in the electrical resistance of the material. Such materials can also be used as magnetic sensors.
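As a rough illustration of the strain gauge described above, the fractional change in gauge resistance is usually related to the strain through a gauge factor, and the out-of-balance voltage of a quarter bridge follows from the resistance ratio. The following Python sketch is only indicative: the gauge factor and all numerical values are assumed for the example and do not come from this text.

# Illustrative strain gauge response in a quarter Wheatstone bridge.
# All numerical values are assumed for this example only.
gauge_factor = 2.0   # typical of metal-foil gauges (assumed)
strain = 500e-6      # 500 microstrain applied to the surface (assumed)
R = 350.0            # nominal gauge resistance, ohms (assumed)
V_supply = 5.0       # bridge excitation voltage, volts (assumed)

dR = gauge_factor * strain * R              # resistance change of the gauge
V_out = V_supply * dR / (4 * R + 2 * dR)    # off-balance bridge output
print(f"dR = {dR:.3f} ohm, V_out = {V_out * 1e3:.3f} mV")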
Radiation sensors are very common and are used in various instruments as detectors of radiation. They are referred to as photo detectors and can be made sensitive to radiations in the range from infrared to x-rays. Photo conducting materials and photodiodes are used in these sensors.
9.5.2 Actuators:
Actuators are devices which produce mechanical effects as a response to an applied electrical signal. Reverse piezoelectric effect, i.e., deformation of a solid when subjected to an electric field, is an example.
A bi-metallic strip is an example of a thermal actuator. It consists of two metals having widely different values of the coefficient of thermal expansion. The metals are taken in the form of strips and are bonded to each other. When this bi-metallic strip is heated, because of the unequal expansion of the two metals, it bends into an arc. The device can be used to make or break circuits.
In piezoelectric actuators, application of an electric field to the opposite faces of a single crystalline piezoelectric crystal like quartz results in a change in the dimensions of the crystal. The crystal regains its original shape and size when the applied field is removed. Further, the change in size may be positive or negative (increase or decrease) depending on the direction of the applied electric field. This is an example of a mechanical actuator.
9.6 NANO MATERIALS
Nano technology deals with structures of matter having dimensions of the order of a nanometre. Even though the term nano technology is relatively new, structures and devices of nanometre dimensions have been known for a long time. Roman glass makers are known to have used nano sized metal particles to produce coloured glasses. Photographic films contain silver halide which decomposes when exposed to light, producing nano particles of silver. Richard Feynman, in a 1959 lecture at a meeting of the American Physical Society, predicted the potential applications of nano materials.
Nano materials exhibit properties strikingly different from those of bulk materials. This is because every property of a material has a characteristic length associated with it. For example, the electrical resistance of a material is due to the scattering of conduction electrons away from the direction of flow by collision with lattice atoms, impurities, etc. The effect of scattering on the resistance depends on how many such collisions take place per unit distance of travel, or equivalently on the average distance travelled by an electron before getting scattered. This characteristic length is called the mean free path of the conduction electron. When the dimension of the solid sample becomes comparable to this characteristic length, the fundamental property of electrical resistance changes and becomes a function of the dimension of the sample. This is called the quantum size effect and is responsible for the characteristic properties exhibited by the nano phase of materials.
Nano materials are classified as quantum wells, quantum wires and quantum dots. In a three dimensional structure, if one dimension, say the thickness, is of nano size, then the structure is called a quantum well. If two dimensions are of nano size, then it is called a quantum wire, and if all three dimensions are of nano size, then it is called a quantum dot. The word quantum is associated with these structures because the properties exhibited by them are described by quantum mechanics.
In a bulk metal, for example, the conduction electrons are free to move throughout the entire conducting medium. In other words, the electrons are completely delocalized. When one or more dimensions of the sample become small, say of the order of a few atomic spacings, the delocalization of the electrons is restricted. The electrons experience confinement as their movement is restricted in those dimensions which are small. The number of electrons with a particular energy becomes a function of the size of the sample. The density of states for a bulk metal, which is a measure of the number of allowed energy states available for occupation at various energy values, is shown in Fig.9.4. In the case of nano structures, the dependence of density of states on energy will also get modified depending on the degree of confinement.
The dependence of density of states on energy for different quantum structures is shown in Fig.9.4.
Fig. 9.4 Density of states g(E) versus Energy E in the case of (a) bulk material, (b) a quantum well, (c) a quantum wire, (d) a quantum dot and (e) an individual atom.
In the case of a bulk sample, since the number of electrons interacting is very large, the density of states as a function of energy is a continuum of available states at various energies. As the size of the sample reduces in one or more dimensions, there will be restrictions on the number of available states at different energies. As we go from bulk to quantum well to quantum wire to quantum dot, the density of states approaches that for an individual atom. This results in drastic modifications in the electrical, thermal and other properties of nano structures.
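For reference, the qualitative curves in Fig.9.4 correspond to standard free-electron results. Since the text gives only the qualitative picture, the expressions below should be read as a hedged summary of well-known forms rather than as derivations:
g(E) ∝ √E for a bulk (3D) solid,
g(E) ∝ Σn Θ(E − En) (a staircase) for a quantum well,
g(E) ∝ Σn (E − En)^(−1/2) for a quantum wire, and
g(E) ∝ Σn δ(E − En) (discrete lines) for a quantum dot,
where En are the confinement subband energies, Θ is the unit step function and δ is the Dirac delta function.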
9.6.1 Synthesis of nano materials
There are two main approaches to nano technology. One is a top-down approach where nano objects are constructed from larger samples without any need for control at atomic level. The other is the bottom-up approach where materials and devices are built from atomic or molecular components. This approach of designing and manufacturing nano systems with the necessary control at the molecular level is called molecular manufacturing.
Gas condensation was the first technique used to synthesize nanocrystalline metals and alloys. In this technique, a material is vaporized using thermal evaporation sources or electron beam heating at reduced pressure. The formation of ultra fine particles is achieved by collision of evaporated atoms with residual gas molecules. Various kinds of vaporization methods like resistive heating, heating with high energy electron beams, laser heating and induction heating may be used. The evaporated atoms lose kinetic energy by colliding with the gas molecules and condense in the form of small crystallites. Sputtering may also be used instead of thermal evaporation. Sputtering is a non-thermal process in which surface atoms are physically ejected from the surface by momentum transfer from an energetic bombarding ion beam in a glow discharge. Sputtering has been used in a low pressure environment to produce nanophase materials including silver, iron and silicon.
In the vacuum deposition process, elements, alloys or compounds are vaporized and deposited in a vacuum. The source vaporizes the material by thermal processes. The process is carried out at pressures much lower than those used in gas condensation. The substrate may also be heated to a temperature ranging from ambient to 500°C. These deposits have particles or grains in the range of 1 to 100 nm in size. The advantages associated with the vacuum deposition process are high deposition rates and economy. However, the deposition of many compounds is difficult.
Chemical Vapour Deposition (CVD) is a well known process in which a solid is deposited on a heated surface via a chemical reaction from the vapour or gas phase. The energy necessary for the chemical reaction to take place can be provided by several methods. In thermal CVD the reaction is activated by a high temperature, above 900°C. A typical apparatus comprises a gas supply system, a deposition chamber and an exhaust system. The reaction may instead be activated by a plasma at elevated temperatures. The chemical reaction may also be induced by laser radiation which has sufficient photon energy to break the chemical bonds in the reactant molecules. Typical nanocrystalline materials like SiC, Si3N4, Al2O3, TiO2, SiO2 and ZrO2 with average particle sizes of a few nanometres have been synthesized by this technique.
Sol-gel processing is a wet chemical synthesis approach that can be used to generate nanoparticles. It involves the formation of a colloidal suspension (sol) and its conversion into a network in a continuous liquid phase (gel). A catalyst is used to start the reaction and control the pH. Sol-gel formation occurs in four stages, namely, hydrolysis, condensation, growth and agglomeration of particles. By controlling the growth parameters, it is possible to vary the structure and properties of sol-gel derived inorganic networks. The significant potential of nanomaterial synthesis and its applications is yet to be fully explored. A better understanding of the synthesis would help in designing better materials.
Unlike many of the methods mentioned above, mechanical attrition produces its nanostructures not by cluster assembly but by the structural decomposition of coarser grained structures (top down approach). The ball milling and rod milling techniques belong to the mechanical alloying process which has
received much attention as a powerful tool for the fabrication of several advanced materials. Mechanical alloying is a unique process, which can be carried out at room temperature.
A ball mill, a type of grinder, is a cylindrical device used in grinding (or mixing) materials like ores, chemicals, ceramic raw materials and paints. Ball mills rotate around a horizontal axis, partially filled with the material to be ground plus the grinding medium. Different materials are used as media, including ceramic balls, flint pebbles and stainless steel balls. An internal cascading effect reduces the material to a fine powder. Industrial ball mills can operate continuously, fed at one end and discharged at the other end. High-quality ball mills are potentially expensive and can grind mixture particles to as small as 5 nm, enormously increasing surface area and reaction rates. The grinding works on the principle of critical speed. The critical speed is the speed beyond which the steel balls (which are responsible for the grinding of particles) start rotating along with the cylindrical device, thus causing no further grinding.
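For a rough sense of scale, the critical speed follows from equating the centripetal acceleration of a ball at the drum wall to g. This standard result is not derived in the text; the drum and ball radii in the Python sketch below are assumed example values.

import math

# Critical speed of a ball mill: the rotation rate at which a ball at the
# wall stays pinned to it, i.e. omega**2 * (R - r) = g.
g = 9.81      # m s^-2
R = 0.50      # drum radius, m (assumed)
r = 0.025     # ball radius, m (assumed)

n_c = math.sqrt(g / (R - r)) / (2.0 * math.pi)   # revolutions per second
print(f"critical speed is about {n_c * 60:.0f} rpm; mills run below this")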
9.6.2 Applications of nano materials:
Although there has been much hype about the potential applications of nanotechnology, the current applications are limited to those developed in earlier years. These include nanoparticles of titanium dioxide, silver, zinc oxide, etc., in sunscreens, cosmetics, food packaging, clothing, disinfectants, household appliances, paints and outdoor furniture varnishes. One proposed application is the development of so-called smart materials. This term refers to any sort of material designed and engineered at the nanometre scale to perform a specific task, and encompasses a wide variety of possible commercial applications. A nanosensor would be a smart material, involving a small component within a larger machine that would react to its environment and change in some fundamental, intentional way.
A promising field for the applications of nanotechnology is in the practice of nanomedicine. This involves the creation of nanoscale devices for improved therapy and diagnostics. Such nanoscale devices are known as nanorobots or nanobots. These nanobots have the potential to serve as vehicles for delivery of medicine to repair metabolic or genetic defects. Similar to the conventional or macroscopic robots, nanobots would be programmed to perform specific functions and be remotely controlled, but possess a much smaller size, so that they can travel and perform desired functions inside the human body.
Band gap engineering involves the tailoring of band gaps with the intent to create unusual electronic transport and optical effects, and novel devices. Most of the devices based on semiconductor nanostructures are band gap engineered quantum devices. Lasers fabricated using single or multiple quantum wells as the active region have been extensively studied over the last two decades. Quantum well lasers offer improved performance, with lower threshold current and narrower spectral width as compared to regular double heterostructure lasers. Quantum dots have been used in lasers and detectors. Nanostructures have been used for making photo-electrochemical cells for high efficiency conversion of light to electrical power because of their large surface area at which photo-electrochemical processes occur.
Applications of molecular nanotechnology to mechanical engineering will be aimed at realizing some mechanical systems on the nano scale. The simplest mechanical system we can think of is a mechanical bearing. A conventional bearing consists of a shaft and a sleeve, with the relative motion of these components being facilitated by a suitable lubricant. The efficiency of such a mechanical bearing lies in ensuring minimum wear and tear of the components along with minimum friction between them. A design of a nano mechanical bearing is shown in Fig.9.5.
Fig.9.5 Components of a nano mechanical bearing showing (a) shaft and (b) sleeve as seen along their axis. Each circle represents a group of atoms.
The components, namely the shaft and the sleeve, are polycyclic ring structures consisting predominantly of atoms of carbon, hydrogen and nitrogen. The shaft has a six fold symmetry about its axis and the sleeve has a fourteen fold symmetry. The dimensions are designed such that the essential requirements for a satisfactory functioning of the bearing are met. Theoretical calculations show that the design yields low energy barrier to the rotation of the shaft within the sleeve. The advantages of the design include low static friction between the moving parts. The repulsive interactions resist the movement of the shaft away from its axial alignment and displacement along the axis. These characteristics suggest the possibility of extending nano technology to practical mechanical systems.
The existing and well understood conventional mechanical manufacturing techniques are useful in the top-down approach. However, molecular manufacturing involves chemical synthesis with precise placement of atoms and molecules which requires development of new tools and techniques. It is predicted that with the developments in molecular manufacturing, various existing devices and their capabilities will improve by several orders of magnitude. A switch over from micro-devices to nano-devices is expected to reduce device sizes drastically and improve their speed. In the field of mechanical engineering, the conventional machinery may be replaced by molecular machinery.
The future applications of nano materials will include next-generation computer chips, better insulation materials, phosphors for high-definition TV, low-cost flat-panel displays, tougher and harder cutting tools, high energy density batteries, high-power magnets, high-sensitivity sensors, automobiles with greater fuel efficiency, aerospace components with enhanced performance characteristics, longer-lasting satellites, longer-lasting medical implants, ductile, machinable ceramics, large electrochromic display devices, etc.
Due to the far-ranging claims that have been made about potential applications of nanotechnology, a number of concerns have been raised about what effects
these will have on our society. Immediate issues include the effects of nanomaterials on human health and the environment. There is scientific evidence which demonstrates the potential dangers posed by some toxic nanomaterials to humans or the environment. The extremely small size of nanomaterials means that they are much more readily taken up by the human body than larger sized particles. Nanomaterials are able to cross biological membranes and access cells, tissues and organs that larger-sized particles normally cannot. Size is therefore a key factor in determining the potential toxicity of a particle. However it is not the only important factor. Other properties of nanomaterials that influence toxicity include: chemical composition, shape, surface structure, surface charge, aggregation and solubility.
9.6.3 Scaling laws:
The magnitudes of most physical parameters, when expressed on the nano scale, differ to a great extent from their macro scale values. The magnitudes for nano scale systems can be computed by applying scaling laws to the values for macro systems. However, the validity of scaling laws needs to be examined carefully. This is because macro scale systems are more or less defined by classical models, and a transition to the nano scale using scaling laws involves assumptions about the validity of these classical models. Nano scale systems are atomic size structures where mean free path effects and quantum effects are important. These effects may contribute differently to different physical properties. It is convenient to study nano systems separately under mechanical systems, electrical systems, thermal systems, etc., by applying the classical continuum model.
Nano mechanical systems are useful for many applications. If we assume that the mechanical strength of the material is a constant for a given material irrespective of its dimensions, the total strength will be proportional to the area of the sample.
i.e., total strength ∝ L²        (9.1)
where L represents a linear dimension. Expressed on the nano scale, a stress of 10¹⁰ N m⁻² will be equal to (10¹⁰/10¹⁸) N nm⁻², i.e., 10 nN nm⁻². Similarly, shear stiffness, which increases with sample area but decreases with increasing length, is proportional to L. A shear stiffness of 10¹² N m⁻¹ can be expressed as 10³ N nm⁻¹. Hence, deformation can be written as
deformation ∝ force/stiffness ∝ L.        (9.2)
Assuming the density to be constant,
mass ∝ volume ∝ L³.        (9.3)
The mass of a cubic nanometre block of a material of density 5 × 10³ kg m⁻³ will be 5 × 10⁻²⁴ kg. Hence,
acceleration ∝ force/mass ∝ L²/L³ ∝ L⁻¹        (9.4)
≈ (10⁻⁸ N)/(5 × 10⁻²⁴ kg)
≈ 2 × 10¹⁵ m s⁻².        (9.5)
Thus, a cubic nano metre sample will experience large acceleration as compared to macroscopic systems. Also, the effect of gravitational acceleration will be negligible on nano mechanical systems.
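The numerical estimates in equations (9.1) to (9.5) can be checked directly; the following Python sketch simply reproduces the unit conversions used above.

# Checking the scaling estimates of equations (9.1) to (9.5).
stress = 1e10                    # N m^-2, value used in the text
force = stress * 1e-18           # force on a 1 nm^2 face (1 nm^2 = 1e-18 m^2)
print(force)                     # 1e-08 N, i.e. 10 nN

mass = 5e3 * 1e-27               # density 5e3 kg m^-3 times 1 nm^3 in m^3
print(mass)                      # 5e-24 kg

print(force / mass)              # 2e+15 m s^-2, as in equation (9.5)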
These calculations are based on the assumption that scaling laws are applicable even when we consider nano systems. It is to be realized that a transition from macro to micro and nano scales is associated with major changes in the conditions of construction and operation. In nano systems, for example, the surface becomes more important than the volume. However, the influence of the surfaces on the properties is neglected in the above calculations.
In electromagnetic systems, classical scaling laws have to be applied more carefully because quantum effects become dominant at small dimensions. If we assume the electrical resistivity as a material constant,
Resistance ∝ length/area ∝ L⁻¹        (9.6)
Assuming an electrical resistivity of 1.5 × 10⁻⁷ Ω m,
the resistance of a cubic nanometre of copper would be 150 ohms. This result has to be examined carefully for its validity since the calculation has ignored the effect of size on the electrical resistivity of the material.
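The 150 ohm figure follows from the same naive scaling; a one-line check in Python, using the resistivity value assumed in the text:

rho = 1.5e-7                      # ohm m, the resistivity assumed in the text
R = rho * 1e-9 / (1e-9 * 1e-9)    # R = rho L / A for a 1 nm cube
print(R)                          # 150.0 ohms, as quoted above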
Similarly, the models for thermal conductivity in solids also break down for nano systems, since the thermal phonons are associated with a mean free path much larger than the dimensions of the structure itself.
It may be concluded that the scaling laws based on classical models are not suitable for describing the behaviour of nano systems.
9.6.4 Carbon nano clusters:
Carbon has two stable allotropic forms, namely diamond and graphite. In diamond, the angle between the carbon-carbon bonds is 109° and in graphite, it is 120°. It was generally believed till recent times that no other carbon bond angles are possible. In 1964, Phil Eaton synthesized a cube-shaped carbon molecule, C8H8, called cubane. In 1983, L. Paquette synthesized C20H20, a molecule having a dodecahedral structure. Carbon nano clusters were discovered in 1985 by researchers at the University of Sussex and Rice University. Among these nano clusters containing various numbers of carbon atoms, the cluster containing 60 carbon atoms is the most prominent and is widely studied. This cluster is made up of 20 hexagonal and 12 pentagonal faces symmetrically arranged to form a molecular ball of carbon atoms (Fig.9.6). The cluster is named 'Fullerene' or 'Buckyball', after the noted architect Richard Buckminster Fuller, who popularized the geodesic dome. Carbon nano clusters may be obtained by laser evaporation of carbon. The diameter of C60 is about one nanometre. Chemically, these clusters are quite stable and require very high temperatures to break them into atoms. However, they sublime at lower temperatures. This property is used in growing crystals and thin films of fullerenes.
Fig.9.6 Molecular structure of fullerene(C60)
C60 is highly electronegative and readily forms compounds with atoms that will donate electrons to it. It is electrically a non conductor but becomes conducting when doped with electropositive alkali metals.
C60 is a yellow powder which turns pink when dissolved in certain solvents such as toluene. When exposed to strong ultraviolet light, it polymerizes forming bonds between adjacent balls. In the polymerized state, the C60 no longer dissolves in toluene. This property makes it useful as a photoresist in photolithographic processes. Other physical and chemical properties are being studied to evaluate the material for further applications.
9.6.5 Carbon nano tubes:
Carbon nano tubes are cylindrical fullerenes. They are tubes of carbon with diameters of a few nanometres and lengths of several millimetres. They may be obtained as single walled or multi-walled tubes with open or closed ends (Fig.9.7).
Fig.9.7 A single walled carbon nano tube
By virtue of their unique molecular structure, they are characterized by high tensile strength, high ductility, high resistance to heat and chemical inertness. They are metallic or semiconducting in nature depending on the diameter of the tube. Nano tubes with large diameters have conductivities higher than that of copper. Smaller tubes are semiconducting, with the band gap increasing with decreasing diameter. Typically, a tube of diameter 2 nm has an energy gap of about 1 eV. Carbon nano tubes show negative magneto-resistance. There is great interest in using these nano tubes for constructing electronic devices. There are several areas where carbon nano tubes are currently being used. Some of the applications include flat-panel displays, scanning probe microscopes, etc. They are also used to store lithium or hydrogen for fuel cells. They are useful as catalysts in chemical reactions and are also used as chemical sensors.
9.7 LIQUID CRYSTALS
The intermediate phase between the solid and the liquid is called the liquid crystal phase. Liquid crystal materials generally have some unique characteristics. They usually have a rod-like molecular structure, with the molecules showing a tendency to point along a common axis. This common
axis is called the director. This is in contrast to molecules in the liquid phase, which have no intrinsic order. In the solid state, molecules are highly ordered and have little translational freedom (Fig.9.8). The characteristic ordering in the liquid crystal state is between the traditional solid and liquid phases. Substances that are not as ordered as a solid, but do have some degree of alignment, are called liquid crystals.
Fig.9.8 Arrangement of molecules in the three phases, (a) solid, (b) liquid crystal and (c) liquid.
To quantify the order present in a liquid crystalline material, an order parameter S is defined as
S = (1/2) ⟨3 cos²θ − 1⟩        (9.7)
where θ is the angle between the director and the axis of each individual molecule. In an isotropic liquid, the order parameter will be zero and for a perfect
solid, it will be equal to 1. The typical value of the order parameter of a liquid crystal depends on temperature and may lie anywhere between 0.3 and 0.9.
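Equation (9.7) can be evaluated numerically for a set of molecular orientations. In the Python sketch below, the Gaussian spread of angles is an assumed toy distribution, used only to show how S falls between the isotropic and perfectly ordered limits.

import math
import random

def order_parameter(angles):
    """Order parameter S = (1/2) <3 cos^2(theta) - 1> of equation (9.7)."""
    return sum(0.5 * (3 * math.cos(t) ** 2 - 1) for t in angles) / len(angles)

random.seed(0)
# Toy ensemble: molecules scattered about the director with a ~20 degree
# spread (an assumed distribution, for illustration only).
aligned = [random.gauss(0.0, math.radians(20)) for _ in range(100_000)]
print(f"well aligned: S = {order_parameter(aligned):.2f}")    # close to 1

# Isotropic ensemble: cos(theta) uniform on [-1, 1] gives S close to 0.
isotropic = [math.acos(random.uniform(-1, 1)) for _ in range(100_000)]
print(f"isotropic:    S = {order_parameter(isotropic):.2f}")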
9.7.1 Classification of liquid crystals
There are many types of liquid crystal states, depending upon the nature and amount of order in the material. The prominent ones are:
(1) Nematic phase
(2) Smectic phase
(3) Cholesteric phase.
The nematic liquid crystal phase is characterized by molecules that have orientational order but no positional order (Fig. 9.9). The molecules can flow and their position is randomly distributed in the liquid. But, they have a tendency to point towards a particular direction leading to a finite orientational order. As a consequence, nematic liquid crystals are strongly anisotropic.
Fig.9.9 Orientational order in nematic liquid crystal. The arrow represents the director.
The smectic phase is another distinct phase of liquid crystal substances. Molecules in this phase show some degree of translational order in addition to the orientational order (Fig.9.10). In the smectic state, the molecules maintain the general orientational order of nematics, but also tend to align themselves in layers or planes. The motion is restricted to within these planes, but the layers themselves can move past each other. The increased order means that the smectic state is more "solid-like" than the nematic. There are many types of smectic phases possible, with different kinds of ordering. In the smectic-A mesophase, the director is perpendicular to the smectic plane, and there is no positional order within the layer. In the smectic-B phase, the smectic plane is also perpendicular to the director, but the molecules are arranged in a network of hexagons within the layer. In the smectic-C phase, molecules are arranged as in the smectic-A phase, but the director is not perpendicular to the smectic plane.
Fig.9.10 Three major types of smectic liquid crystals. (a) Smectic A phase, (b) Smectic B phase as seen along the director axis and (c) Smectic C phase.
The cholesteric liquid crystal phase is composed of molecules containing a chiral center which produces intermolecular forces that favor alignment between molecules at a slight angle to one another. This leads to the formation of a structure which can be visualized as a stack of very thin 2-D nematic-like layers with the director in each layer twisted with respect to those above and below (Fig.9.11).
Fig.9.11 Cholesteric liquid crystal showing the rotation of the director in a distance equal to half the pitch.
An important characteristic of the cholesteric phase is the pitch. The pitch, p, is defined as the distance over which the director rotates through one full turn in the helix. This gives cholesteric liquids the special property of selectively reflecting light of wavelength equal to the pitch length, so that a particular color will be reflected when the pitch is equal to the corresponding wavelength of light in the visible spectrum. The wavelength of the reflected light can be controlled by adjusting the chemical composition of the cholesteric phase.
9.7.2 Applications of Liquid Crystals
Liquid crystals have made contributions in many areas of science and engineering, as well as device technology. The most common application is in liquid crystal displays (LCDs). These work on the effect of an electric field on the optical properties of the liquid crystal. A typical device is shown in Fig.9.12. It consists of a liquid crystal layer sandwiched between two polarizers that are crossed with each other.
Fig.9.12 Schematic of a liquid crystal display unit consisting of (1) polarizer, (2) electrode, (3) liquid crystal layer, (4) back electrode, (5) second polarizer crossed with the first and (6) reflector.
The liquid crystal selected is a twisted one, so that the light passing through the first polarizer is reoriented to pass through the second polarizer also. When an electric field is applied between the two transparent indium-tin oxide electrodes, all the molecules in the liquid crystal align parallel to the electric field. Since the two polarizers are crossed, the light does not get transmitted through the second polarizer. A reflector is placed after the second polarizer to reflect any light incident on it. Thus, an electric field can be used to switch a pixel on or off.
Liquid crystals have many other uses. They can be used as thermal sensors since many of their properties are very sensitive to temperature. They are now being explored for optical imaging and recording applications. They are used for nondestructive mechanical testing of materials under stress. As new properties and types of liquid crystals are investigated, these materials are gaining increasing importance in industrial and scientific applications.
9.8 NON DESTRUCTIVE TESTING OF MATERIALS
Non destructive testing (NDT) is the technique of determining the quality of a product without in any way affecting its properties. It is emerging as an important inter-disciplinary technique useful for
meeting the requirements of reliability and safety. Based on the principle used for testing, the technique may be classified as follows:
(a) Radiographic methods
(b) Ultrasonic methods
(c) Magnetic methods
(d) Electrical methods
(e) Optical methods
(f) Thermal methods
The basic principle involved in non destructive testing is to allow energy in some form to pass through the test sample and to measure effects like changes in amplitude, intensity, velocity, frequency, phase, etc., of the transmitted signal.
9.8.1 Radiographic methods:
In radiographic methods of NDT, radiations like x-rays or gamma rays are used as the probe. Recently, electron and neutron beams have also been used as probes for certain applications. When radiation of suitable energy and intensity is made incident on the specimen under test, it penetrates through the material. The intensity of the transmitted radiation will be modified by the presence of defects. The
defects may be in the form of deviations from the periodic arrangement of atoms in the lattice or may be in the form of inclusion of foreign materials in the specimen. Such defects absorb or scatter the radiation differently from the other regions of the sample and hence result in a modification of the transmitted intensity. These changes in the intensity of transmitted radiation may be recorded and analyzed using photography or other methods.
The radiation for a specific NDT application is selected on the basis of its suitability and efficiency. X-rays are often used for the analysis of defects in crystalline solids. Since the wavelength of x-rays can be of the order of the inter-atomic distances in crystalline solids, they produce significant diffraction contrast, leading to easy detection of crystal defects. X-rays are also used in the analysis of flaws, inclusions, pinholes, cracks, etc., generated in welding or casting processes. Gamma rays are more energetic and can be used for thicker samples. In the case of materials of high atomic number, absorption poses a serious limitation to NDT techniques using x-rays or gamma rays. Alternatively, electron or neutron radiography may be suitable for such applications.
9.8.2 Ultrasonic methods:
Ultrasonic waves are mechanical waves, like sound waves, with a frequency above the audible range. Similar to sound waves, they produce local changes in the density of the medium in which they are travelling and propagate with a velocity determined by the medium. Ultrasonic waves are generated by making use of the piezoelectric effect or magnetostriction. Piezoelectric oscillators are widely used since they give a wide range of frequencies, from 20 kHz to 10 GHz, whereas magnetostriction oscillators can normally be used only up to about 100 kHz. Piezoelectric transducers are made from materials showing the piezoelectric effect, like quartz, tourmaline, lithium sulphate, etc. A typical experimental set up is shown in Fig.9.14.
Fig.9.14 Set up for piezoelectric oscillator method
A properly cut quartz crystal is placed between two metal plates forming a parallel plate capacitor with crystal as the dielectric medium. The plates are
connected to the primary of a transformer which is coupled to an oscillator circuit. When the frequency of the circuit matches with the natural frequency of the crystal, resonance will occur and the crystal is set into mechanical vibrations, producing ultrasonic waves.
The principle of magnetostriction can also be used to generate ultrasonic waves. A bar of ferromagnetic material like iron or nickel changes its length when subjected to strong magnetic field. If the applied magnetic field is alternating, the rod will alternately expand and contract with twice the frequency of the applied field. This results in the generation of ultrasonic waves in the medium surrounding the rod.
Fig.9.15 Set up for the magnetostriction method
The experimental set up (Fig.9.15) consists of a specimen rod placed inside a solenoid through which a high frequency current is passed. Resonance occurs when the natural frequency of the rod matches the applied frequency, resulting in the generation of ultrasonic waves.
If f is the resonance frequency and L is the length of the sample, then the velocity v of the ultrasonic waves generated is given by
v = 2fL
The velocity of ultrasonic waves in a solid may also be determined by the pulse echo method. In this method, a short electrical pulse of about 1 to 2 μs duration is produced by a pulse generator (Fig.9.16). This pulse is applied to a piezoelectric crystal which is attached to the solid under test. The electric pulse produces an ultrasonic signal which travels into the sample. The pulse gets reflected from the other end of the solid sample. When the echo reaches the crystal, it generates an electrical signal in the piezoelectric crystal. The initial electrical pulse (A) applied to the crystal and the electrical pulse generated by the echo (B) are recorded on a cathode ray oscilloscope screen. Knowing the length of the sample and the time gap between the two pulses, the velocity of ultrasonic waves in the given sample can be calculated.
Fig.9.16 A schematic diagram of a pulse echo detection system
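Since the pulse crosses the sample twice before the echo returns, the velocity follows from v = 2L/Δt. A minimal Python sketch with assumed numbers:

# Pulse echo estimate: the pulse crosses the sample twice, so v = 2 L / dt.
# Both input values are assumed for illustration.
L = 0.05       # sample length, m (assumed)
dt = 17e-6     # time between pulses A and B on the CRO, s (assumed)

v = 2 * L / dt
print(f"v = {v:.0f} m/s")   # about 5900 m/s, typical of steel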
The velocity of ultrasonic waves in a solid medium depends on the density of the material and also on the elastic constants. In the case of a thin rod like sample whose diameter is much smaller than the wavelength of ultrasonic waves, the velocity is given by
v = (E/ρ)^(1/2)        (9.8)
where E is the Young's modulus and ρ is the density of the solid sample.
If the sample under consideration is not piezoelectric, then, the velocity of ultrasonic waves in that sample may be determined as follows:
A quartz crystal in the form of a rod of cross sectional area A and length Lq is used in the experimental set up shown in Fig.9.17. The resonance frequency fq corresponding to the maximum amplitude of the well-defined wave pattern observed on the CRO is noted.
Fig.9.17 Experimental set up to find the velocity of ultrasonic waves through a solid sample.
The given sample is also taken in rod form with the same area of cross section A and length Ls. The sample is attached to the quartz crystal using a glue, and the resonance frequency fc of the composite (quartz + sample) is determined as before. The natural frequency fs of vibration of the sample is calculated as
fs = fc + (mq/ms)(fc − fq)
where fq, fs and fc are the resonance frequencies of the quartz crystal, the sample under study and the composite, and mq and ms are the masses of the quartz crystal and the sample. The velocity of ultrasonic waves in the sample can be calculated as
vs = 2 fs Ls
and the Young’s modulus of the sample can be calculated as
E = vs² ρ
where ρ is the density of the sample.
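The composite oscillator relations above lend themselves to a direct calculation. The Python sketch below simply chains the three formulas; all input values are assumed for illustration.

# Composite oscillator method, chaining the three formulas above.
# All input values are assumed for illustration.
f_q = 100e3     # resonance frequency of the quartz rod, Hz (assumed)
f_c = 95e3      # resonance frequency of the composite, Hz (assumed)
m_q = 8.0e-3    # mass of the quartz rod, kg (assumed)
m_s = 6.0e-3    # mass of the sample rod, kg (assumed)
L_s = 0.025     # length of the sample rod, m (assumed)
rho = 2.7e3     # density of the sample, kg m^-3 (assumed)

f_s = f_c + (m_q / m_s) * (f_c - f_q)    # natural frequency of the sample
v_s = 2 * f_s * L_s                      # velocity of ultrasonic waves
E = v_s ** 2 * rho                       # Young's modulus
print(f"f_s = {f_s / 1e3:.1f} kHz, v_s = {v_s:.0f} m/s, E = {E:.2e} Pa")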
The pulse echo method can also be used to detect flaws in a given solid sample. This is based on the fact that ultrasonic waves are reflected from a crack or other defect which produces abrupt changes in the elastic properties of the material. An echo will be produced when a defect or a flaw interrupts a beam of ultrasonic waves. In Fig.9.18, peak A represents the transmitted pulse, B is the pulse reflected from the other end of the sample and C is the pulse reflected from the defect. The horizontal distance AC across the screen indicates the position of the defect.
Fig.9.18 CRO display for a sample with a flaw.
The height of the peak C, which is determined by the intensity of the ultrasonic wave reflected from the defect, indicates to some extent the size of the defect. The advantages of the technique are that it is easy, reliable and fast. The method is particularly useful when the samples are large and the x-ray technique cannot be used because of the limited penetration of x-rays in solids.
9.8.3 Magnetic methods:
These methods are particularly useful for magnetic materials. The principle involved is to study the effect of defects on the magnetic behavior of the material. When the given specimen is magnetized, the magnetic field will be distorted at the location of defects. The magnetic field distortion may be measured using a suitable method. For example, the sample may be scanned with a search coil to detect changes in the
induced voltage at the defect sites. Alternatively, magnetic particles such as iron oxide in the form of fine powder or a colloidal solution are spread over the sample surface. The magnetic particles arrange differently at the defects thereby revealing their presence.
9.8.4 Electrical methods:
It is possible to gain insight into the quality of a sample by a study of different electrical properties of the material. However, for the analysis of a localized defect, some special techniques are used. For example, eddy current testing makes use of a test coil carrying a high frequency alternating current. When the coil is brought close to a conductor, eddy currents are induced in the conductor, which in turn change the impedance of the coil. The presence of a defect in the conductor modifies the way in which the eddy currents are induced and hence results in corresponding changes in the impedance of the test coil. The method is suitable for the detection of surface cracks and of variations in the thickness and conductivity of coatings.
9.8.5 Optical methods:
Optical microscopy is a widely used technique for surface examination and evaluation. Surface morphology and microstructures can be effectively studied by
optical microscopy. Modifications of conventional microscopy, such as interference microscopy and phase contrast microscopy, which are based on the optical phenomena of interference and polarization respectively, yield valuable information on the sample surface. The holographic interferometry technique using a laser helps in the study of strained surfaces and surfaces subjected to vibration. Optical examination of transparent samples in transmission helps in the analysis of the internal features of the sample.
9.8.6 Thermal methods:
The principle involved in thermal methods is to supply heat to a specimen and observe the resulting temperature distribution. The temperature at the location of defects will be different from that at other regions. The temperature distribution may be measured using sensitive thermal detectors like thermocouples, bolometers or photo conducting materials. Thermography is a related technique in which the thermal radiation emitted by the test sample is analyzed. By virtue of the temperature at which the sample is maintained, there will be emission of infrared radiation from the sample, which can be detected and measured using an infrared detector. At defect sites, the temperature, and hence the emission of infrared radiation, will vary in intensity and wavelength. An image may be constructed with the data to show the
defects in the sample.
9.9 QUANTUM COMPUTATION:
Quantum computation is the study of information processing using quantum mechanical systems. Quantum mechanics was developed when classical physics could not explain many observed phenomena. Quantum mechanics has been indispensable since then and has been applied in various fields with success. Any bulk sample may be considered to be made up of a large number of quantum systems, and the behaviour of the bulk sample may be predicted by consolidating the behaviour of the individual quantum systems. Efforts are therefore being made to control individual quantum systems. Quantum computation is a result of the study of quantum systems and the application of that study to information processing.
9.9.1 Properties of quantum bits:
In classical computation, information is addressed and processed using 'bits'. A bit is defined as a unit of information. On a similar concept, quantum information is built using 'quantum bits' or 'qubits'. A classical bit can have two states, 0 and 1. Similarly, a qubit has two basis states, written |0⟩ and |1⟩; this notation is referred to as 'Dirac notation'. A classical bit can exist in any one of the two states, whereas a qubit can also exist in states that are a linear combination, or superposition, of the two states. This may be represented as
|ψ⟩ = α0 |0⟩ + α1 |1⟩        (9.10)
where α0 and α1 are complex numbers. In other words, a qubit represents a vector in a two dimensional space. The states |0⟩ and |1⟩ are called basis states and are orthogonal to each other in this vector space.
A classical bit can be examined and its state determined to be either 0 or 1. In the case of a qubit, we can only find the probability that a measurement yields 0 or 1. The probability of obtaining 0 is |α0|² and the probability of obtaining 1 is |α1|². Since the total probability has to be 1, we have
|α0|² + |α1|² = 1        (9.11)
This is called the normalization condition for the qubit. It is emphasized that a measurement always finds the qubit in either the 0 state or the 1 state, but with finite probabilities. For example, if we represent a qubit by
|ψ⟩ = (1/√2)|0⟩ + (1/√2)|1⟩        (9.12)
we only mean that a measurement gives the value 0 for the qubit in 50% [|1/√2|² = 1/2] of the trials and the value 1 in the remaining 50% of the trials.
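The probability statement attached to equation (9.12) is easy to verify numerically; a minimal Python sketch, using the amplitude convention introduced above:

import math

# The qubit of equation (9.12): |psi> = (1/sqrt(2))|0> + (1/sqrt(2))|1>.
alpha0 = 1 / math.sqrt(2)
alpha1 = 1 / math.sqrt(2)

p0 = abs(alpha0) ** 2    # probability of measuring 0
p1 = abs(alpha1) ** 2    # probability of measuring 1
print(p0, p1, p0 + p1)   # 0.5 0.5 1.0, so condition (9.11) is satisfied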
We can find an example of a quantum mechanical system in an electron. The electron can exist in either the ground state or an excited state. We can call these states |0⟩ and |1⟩ respectively. During the excitation process in the presence of a radiation field, the electron may be found in the |0⟩ state or the |1⟩ state. The qubit represented by a linear combination of these states indicates the probability of the two states being occupied.
Figure 9.19 shows a geometrical representation of a qubit. Equation (9.10) may be rewritten as
Fig.9.19 Bloch sphere representation of a qubit.
|ψ⟩ = cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩        (9.13)
where θ and φ are real numbers representing the angular coordinates on a three dimensional sphere of unit radius. This sphere is called the Bloch sphere and is used to visualize the state of a single qubit.
9.9.2 Quantum gates:
Classical computer circuits make use of logic gates. In an analogous way, quantum computation is
carried out using quantum gates. Let us consider an example of a single bit gate. A classical single bit logic gate is the NOT gate. The operation of this gate is defined by the truth table in which the 0 and 1 states are interchanged during an operation; i.e., if the input to the NOT gate is 0, the output will be 1 and vice versa. A single qubit quantum gate should not only interchange the states |0⟩ and |1⟩ but also perform a similar operation on a qubit state which is a superposition of the two states. In other words, assuming the qubit NOT gate to act linearly, it should take the state α0|0⟩ + α1|1⟩ to the state α0|1⟩ + α1|0⟩. The quantum NOT gate is represented in matrix form as
X = [ 0  1
      1  0 ]        (9.14)
The quantum state 0 0 + 1 1 is written in vector notation as 0
1
Here, the top entry corresponds to the amplitude of 0 and the bottom entry to the amplitude of 1. Then, the quantum NOT gate operation is represented by
X 0 = 1 (9.15)
1 0
The normalization condition for the quantum state α0|0⟩ + α1|1⟩ requires that
|α0|² + |α1|² = 1        (9.16)
This must also be true for the quantum state after the action of the gate. It can be shown that
X†X = I        (9.17)
where X† is the adjoint of X and I is the two by two identity matrix.
While we have only one non-trivial single bit classical gate, there are many non-trivial single qubit gates. Two important qubit gates are the Z-gate and the H-gate. The Z-gate is represented as
Z = [ 1   0
      0  −1 ]        (9.18)
in which the operation changes the sign of the amplitude of |1⟩ from α1 to −α1. The H-gate is represented as
H = (1/√2) [ 1   1
             1  −1 ]        (9.19)
in which the operation changes |0⟩ into (1/√2)(|0⟩ + |1⟩) and |1⟩ into (1/√2)(|0⟩ − |1⟩).
Thus, the operations of the three single qubit quantum gates are represented as follows:
α0|0⟩ + α1|1⟩  --X-->  α1|0⟩ + α0|1⟩        (9.20)
α0|0⟩ + α1|1⟩  --Z-->  α0|0⟩ − α1|1⟩        (9.21)
α0|0⟩ + α1|1⟩  --H-->  (1/√2) α0(|0⟩ + |1⟩) + (1/√2) α1(|0⟩ − |1⟩)        (9.22)
Since there are an infinite number of two by two unitary matrices, we can have an infinite number of single qubit gates. But the properties of the complete set can be understood from the properties of a smaller set. In other words, quantum computation can be generated on any number of qubits using a finite universal set of gates. Such a universal set requires the use of quantum gates involving multiple qubits.
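The matrix forms (9.14), (9.18) and (9.19) can be exercised in a few lines of Python with NumPy; a sketch, using the column vector convention introduced above:

import numpy as np

# The single qubit gates of equations (9.14), (9.18) and (9.19).
X = np.array([[0, 1], [1, 0]], dtype=complex)                  # NOT gate
Z = np.array([[1, 0], [0, -1]], dtype=complex)                 # Z-gate
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # H-gate

psi = np.array([0.6, 0.8j])    # an arbitrary normalized qubit (a0, a1)
for name, G in (("X", X), ("Z", Z), ("H", H)):
    out = G @ psi
    # Unitarity keeps the state normalized after the gate acts.
    print(name, out, round(float(np.linalg.norm(out)), 6))

# Equation (9.17): the adjoint of X times X is the identity matrix.
print(np.allclose(X.conj().T @ X, np.eye(2)))    # True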
9.9.3 Multiple qubits:
If we have two classical bits, then there would be four possible states, namely 00, 01, 10 and 11. Similarly, a two qubit system has four basis states, denoted by |00⟩, |01⟩, |10⟩ and |11⟩. Since the pair of qubits can also exist in superposition states, the state vector describing the two qubit system may be represented as
|ψ⟩ = α00|00⟩ + α01|01⟩ + α10|10⟩ + α11|11⟩        (9.23)
where α00, α01, α10 and α11 are the complex coefficients representing the amplitudes of the respective states. As in the case of a single qubit, the square of the modulus of an amplitude represents the probability of finding the system in that particular quantum state.
An important two qubit state is called the 'Bell state' or 'EPR pair', named after the scientists Einstein, Podolsky and Rosen. It is represented by
|ψ⟩ = (1/√2)(|00⟩ + |11⟩)        (9.24)
This state has the property that a measurement on the first qubit gives the result 0 with probability (1/2) and the result 1 with probability (1/2). A measurement on the second qubit always gives the same result as the measurement on the first. This indicates that the results of the measurements are correlated. Such correlations have been a subject of intense study. It has been observed that the measurement correlations in the Bell state are much stronger than any observed in classical systems.
In general, if we consider a system of n qubits, the quantum state of the system will be specified by 2ⁿ amplitudes. This indicates the enormous potential of quantum computation.
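The measurement correlation of the Bell state (9.24) can be seen by sampling; a Python sketch with NumPy, where the basis ordering (|00⟩, |01⟩, |10⟩, |11⟩) follows equation (9.23):

import numpy as np

rng = np.random.default_rng(0)

# Bell state (9.24) in the basis ordering (|00>, |01>, |10>, |11>) of (9.23).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2     # [0.5, 0, 0, 0.5]

samples = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)
counts = {s: int(np.sum(samples == s)) for s in ("00", "01", "10", "11")}
print(counts)   # only 00 and 11 occur, about 5000 each: the two qubits agree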
EXERCISE
1. Write a note on nano-scale systems (Jan 2003).
2. What are composite materials and give a brief account of classification of composite materials (Jan 2003).
3. Give a brief account of smart materials. (Jan 2003).
4. What is a bit and a quantum bit? (Aug 2003)
5. Describe different methods of fabrication of MEMS (Aug 2003).
6. List the advantages and disadvantages of composite materials (Aug 2003).
7. What are smart materials? Explain the functional properties of smart materials. (Aug 2003,July 2006).
8. Explain the basic principles of quantum computation (Feb 2004,July 2006).
9. Discuss the nanoscale systems (Feb 2004).
10.Explain MEMS. Discuss its application. (Feb 2004).
11.Explain density of states for various quantum structures (Aug 2004).
12.Explain nano-tubes and its applications by giving
their physical properties (Aug 2004).
13.Explain smart materials with two examples (Aug 2004).
14. What are composite materials? Give their classifications. (Aug 2004).
15. Discuss nanoscale systems giving at least one application in detail. (Feb 2005).
16. What are the advantages and disadvantages of composite materials? (Feb 2005).
17. Explain the term MEMS. Give a brief account of smart materials. (Feb 2005).
18. Explain the term MEMS. Discuss different materials used for MEMS. (Aug 2005).
19. Explain the advantages and disadvantages of composite materials. (Aug 2005, July 2006).
20. Discuss the different types of nano-scale systems. (Aug 2005).
21. What are composites? Discuss their merits in the context of modern applications. (Jan 2006).
22. Write a note on nanotechnology and its importance. (Jan 2006).
23. What is a quantum bit? Explain. (Jan 2006).
CHAPTER 10 : SPECIAL THEORY OF RELATIVITY
10.1 Introduction
10.1.1 Frames of Reference
10.1.2 Galilean transformation
10.1.3 Michelson-Morley experiment:
10.2 Postulates of Special Theory of Relativity
10.3 Time Dilation
10.4 Length Contraction
10.5 Twin Paradox
10.6 Relativity Of Mass
10.7 Massless Particles
Numerical examples
10.1 Introduction
When such quantities as length, time interval, and mass are considered in elementary physics, no special point is made about how they are measured. Since a standard unit exists for each quantity, who makes a certain determination would not seem to matter – everybody ought to get the same result. For instance, there is no difficulty in finding the length of a rocket when it is stationary and on earth. But what if the rocket is in flight and we are on the ground? We have standard methods to determine the length of a distant object with knowledge of trigonometry. However, when we measure the length of a moving rocket from the ground, we find it to be shorter than it is to somebody in the rocket itself. In order to understand how this unexpected difference arises we must analyze the process of measurement when motion is involved.
10.1.1 Frames of Reference
When we say that something is moving, what we mean is that its position relative to something else is changing. A passenger moves relative to an airplane; the airplane moves relative to the earth; the earth
moves relative to the sun; the sun moves relative to the galaxy of stars (the Milky Way) of which it is a member ; and so on. In each case a frame of reference is part of the description of the motion. To say that something is moving always implies a specific frame of reference. A frame of reference is usually a Cartesian coordinate system and the position of any object is defined with respect to the frame. The choice of the frame of reference is determined by our own convenience.
An inertial frame of reference is one in which Newton’s first law of motion holds. In such a frame, an object at rest remains at rest and an object in motion continues to move at constant velocity (constant speed and direction) if no force acts on it. Any frame of reference that moves at constant velocity relative to an inertial frame is itself an inertial frame.
All inertial frames are equally valid. Suppose we see something changing its position with respect to us at constant velocity. Is it moving or are we moving? Suppose we are in a closed laboratory in which Newton's first law holds. Is the laboratory moving or is it at rest? These questions are meaningless because all constant-velocity motion is relative. There is no universal frame of reference that can be used everywhere, no such thing as "absolute motion".
The theory of relativity deals with the consequences of the lack of a universal frame of reference. Special relativity, which Einstein published in 1905, treats problems that involve inertial frames of reference. General relativity, published by Einstein a decade later, treats problems that involve frames of reference accelerated with respect to one another. An observer in an isolated laboratory can detect accelerations, as anybody who has been in an elevator or on a merry-go-round knows. The special theory has had an enormous impact on much of physics, and we shall concentrate on it here.
10.1.2 Galilean transformation
The transformation from one inertial frame of reference to another is called a Galilean transformation. Let an event occur in an inertial frame of reference S at the location P(x,y,z) at any instant of time t. Consider another frame of reference S’ which moves along positive x direction of reference frame S with a velocity v. Let the origins O and O’ of the two frames of reference coincide at t = 0 and the point P be at rest with respect to the frame of reference S. With respect to the frame of reference S’, it moves with a velocity v and its coordinates change with time.
At any instant of time t = t’, we have
x’ = x – vt
y’ = y
z’ = z
and t’ = t.
This is the Galilean transformation, and it provides the space–time relation of an event in different inertial frames. In this transformation, we have assumed that the time of an event for an observer in S is the same as the time for the same event in S’. This assumption holds good for all classical cases, where the velocity v is much smaller than the velocity of light c. For example, consider the case of a person in a train moving with a speed v. If he throws a ball with a speed u in the direction of motion of the train, the speed of the ball to a stationary observer outside the train will be (v + u). What happens if we replace the ball with a flash of light? Will the stationary observer find the speed of light to be (c + v)? Experiments were carried out by Michelson and Morley to verify this addition rule for the velocities of light and of the earth. The speed of the earth in its orbit around the sun is about 30 km/s, which is about (1/10,000) of the velocity of light. With a precise experimental setup, they tried to measure the difference in the velocity of light along and perpendicular to the direction of the earth’s motion.
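For readers who like to experiment numerically, here is a minimal Python sketch (the function names are our own, not part of the derivation) of the Galilean transformation and its velocity-addition rule; it illustrates why light was classically expected to show a speed of c + v:

```python
# Galilean transformation between frames S and S' (S' moves at v along +x)
def galilean(x, t, v):
    """Return (x', t') for an event (x, t) as seen from S'."""
    return x - v * t, t  # t' = t: time is assumed absolute

def galilean_add(u, v):
    """Classical velocity addition: speed in S of an object moving at u in S'."""
    return u + v

c = 3.0e8            # speed of light, m/s
v_train = 30.0       # train speed, m/s
print(galilean(100.0, 2.0, v_train))   # event at x = 100 m, t = 2 s, seen from the train
print(galilean_add(20.0, v_train))     # ball thrown at 20 m/s -> 50 m/s, as expected
print(galilean_add(c, v_train))        # light -> c + 30 m/s, contradicting experiment
```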
10.1.3 Michelson-Morley experiment:
Albert A. Michelson (1852-1931) was born in Germany but came to the United States at a very young age with his parents, who settled in Nevada. He attended the U.S. Naval Academy at Annapolis, where he became a science instructor. To improve his knowledge of optics, in which he wanted to specialize, Michelson went to Europe and studied in Berlin and Paris. Then he left the Navy to work first at the Case School of Applied Science in Ohio, then at Clark University in Massachusetts, and finally at the University of Chicago, where he headed the physics department from 1892 to 1929. Michelson’s specialty was high-precision measurement, and for many decades his successive figures for the speed of light were the best available. He redefined the meter in terms of wavelengths of a particular spectral line and devised an interferometer that could determine the diameter of a star (stars appear as points of light in even the most powerful telescopes).
Michelson’s most significant achievement, carried out in 1887 in collaboration with Edward Morley, was an experiment to measure the motion of the earth through
the “ether”, a hypothetical medium pervading the universe in which light waves were supposed to occur. The notion of the ether was a hangover from the days before light waves were recognized as electromagnetic, but nobody at the time seemed willing to discard the idea that light propagates relative to some sort of universal frame of reference.
A schematic diagram of the Michelson-Morley experiment is shown in Fig. 10.1. A beam of light from a source S is split into two parts by a semi-silvered glass plate P. A part of the beam travels to the mirror M1, gets reflected to the plate P on the silvered side and is again reflected into a telescope. The other part of the beam travels to mirror M2, gets reflected to the plate
P and transmitted into the telescope. A compensating glass plate CP is used to compensate for the difference in the optical paths travelled by the two beams before they interfere. If the transit time for the two parts of the beam is the same, they arrive at the telescope to produce constructive interference. If one of the beams travels along the direction of the earth’s motion, there should be a change in the transit time for that path, and this should lead to a change in the interference condition.
Although the experiment was sensitive enough to detect the expected ether drift, to everyone’s surprise none was found. The negative result had two consequences. First, it showed that the ether does not exist and so there is no such thing as “absolute motion” relative to the ether: all motion is relative to a specified frame of reference, not to a universal one. Second, the result showed that the speed of light is the same for all observers, which is not true of waves that need a material medium to travel (such as sound and water waves).
The Michelson–Morley experiment set the stage for Einstein’s 1905 special theory of relativity, a theory that Michelson himself was reluctant to accept. Indeed, not long before the concepts of relativity and quantum theory revolutionized physics, Michelson announced that “physical discoveries in the future are a matter of the sixth decimal place”. This was a common opinion of the time. Michelson received the Nobel Prize in 1907, the first American to do so.
10.2 POSTULATES OF SPECIAL THEORY OF RELATIVITY
Based on all the theoretical and experimental data available, Einstein put forward his Special Theory of Relativity. Two postulates underlie the special theory of relativity:
The laws of physics are the same in all inertial frames of reference.
This postulate follows from the absence of a universal frame of reference. If the laws of physics were different for different observers in relative motion, the observers could find from these differences which of them were “stationary” in space and which were “moving”. But such a distinction does not exist, and the principle of relativity expresses this fact.
The second postulate is based on the results of many experiments:
The speed of light in free space has the same value in all inertial frames of reference.
The speed of light is c = 2.998 × 10⁸ m/s to four significant figures. This means that the velocity of light has the same value for all observers and is independent of their motion or of the motion of the light source.
To appreciate how remarkable these postulates are, let us look at a hypothetical experiment, basically no different from actual ones that have been carried out in a number of ways. Suppose person A turns on a searchlight just as person B takes off in a spacecraft at a speed of 2 × 10⁸ m/s. Both of them measure the speed of light waves from the searchlight using identical instruments. From the ground, person A finds their speed to be 3 × 10⁸ m/s as usual. “Common sense” tells us that person B ought to find a speed of (3 – 2) × 10⁸ m/s, or only 1 × 10⁸ m/s, for the same light waves. But B also finds the speed to be 3 × 10⁸ m/s, even though to person A, person B seems to be moving parallel to the waves at 2 × 10⁸ m/s.
There is only one way to account for these results without violating the principle of relativity. It must be true that measurements of space and time are not absolute but depend on the relative motion between an observer and what is being observed. If person A were to measure from the ground the rate at which person B’s clock ticks and the length of his meter stick, person A would find that the clock ticks more slowly than it did at rest on the ground and that the meter stick is shorter in the direction of motion of the spacecraft. To person B, his clock and meter stick are the same as they were on the ground before he took off. To person A they are different because of the relative motion, different in such a way that the speed of light person B measures is the same 3 × 10⁸ m/s as person A measures. Time intervals and lengths are relative quantities, but the speed of light in free space is the same to all observers.
Thus, the Galilean transformation equations relating the space and time coordinates in one frame of reference to those in another frame of reference are not valid for cases where the velocity v approaches the velocity of light. Transformation equations that apply to all speeds and also incorporate the constancy of the velocity of light were derived by the Dutch physicist H. A. Lorentz. These equations, known as the Lorentz transformation equations, for the case considered earlier of two inertial frames of reference moving relative to one another with a velocity v along the x direction, are as follows:
x’ = γ(x – vt)
y’ = y
z’ = z
and t’ = γ(t – vx/c²)
where γ = 1/√(1 – v²/c²).
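A minimal numerical sketch of these equations (the function name is our own) shows that the Lorentz transformation reduces to the Galilean one when v is much smaller than c:

```python
import math

def lorentz(x, t, v, c=3.0e8):
    """Transform event (x, t) in S to (x', t') in S' moving at v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

# For v << c, gamma ~ 1 and vx/c^2 ~ 0, recovering x' = x - vt and t' = t
print(lorentz(1000.0, 1.0, 30.0))    # essentially the Galilean result
print(lorentz(1000.0, 1.0, 2.4e8))   # at 0.8c the difference is dramatic
```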
Before Einstein’s work, a conflict had existed between the principles of mechanics, which were then based on Newton’s laws of motion, and those of electricity and magnetism, which had been developed into a unified theory by Maxwell. Newtonian mechanics had worked well for over two centuries. Maxwell’s theory not only covered all that was then known about electric and magnetic phenomena but had also predicted that electromagnetic waves exist and identified light as an example of them. However, the equations of Newtonian mechanics and those of electromagnetism differ in the way they relate measurements made in one inertial frame with those made in a different inertial frame.
Einstein showed that Maxwell’s theory is consistent with special relativity whereas Newtonian mechanics is not, and his modification of mechanics brought these branches of physics into accord. As we will find, relativistic and Newtonian mechanics agree for relative speeds much lower than the speed of light, which is why Newtonian mechanics seemed correct for so long. At higher speeds Newtonian mechanics fails and must be replaced by the relativistic version.
10.3 TIME DILATION
Measurements of time intervals are affected by relative motion between an observer and what is observed. As a
result, a clock that moves with respect to an observer ticks more slowly than it does without such motion and all processes (including those of life) occur more slowly to an observer when they take place in a different inertial frame.
A time interval t0 between two events that occur at the same place in an observer’s frame of reference is called the proper time of the interval between the events; for instance, someone in a spacecraft measures the proper time between two events that occur at the same place in the spacecraft. When witnessed from the ground, the events that mark the beginning and end of the time interval occur at different places, and in consequence the duration of the interval appears longer than the proper time. This effect is called time dilation (to dilate is to become larger).
To see how time dilation comes about, let us consider the following example. A pulse of light is reflected back and forth between two mirrors a distance L0 apart (Fig. 10.2). Let the two mirrors and the clock be at rest. The total time taken by the light pulse for the return journey is t0, the proper time.
The proper time t0 is given by

t0 = 2L0/c   (10.1)
Now, let us consider the case of the two mirrors and the clock in motion with a velocity v in a direction perpendicular to the direction of motion of the light pulse (Fig. 10.3). The time taken by the pulse for the return journey is t. Because the clock is moving, the light pulse, as seen from the ground, follows a zigzag path. On its way from the lower mirror to the upper one
in the time t/2, the pulse travels a horizontal distance of v(t/2) and a total distance of c(t/2). Since L0 is the vertical distance between the mirrors,
(ct/2)² = L0² + (vt/2)², so that t = (2L0/c)/√(1 – v²/c²)   (10.2)
But 2L0/c is the time interval t0 between ticks on the clock on the ground, as in Eq. (10.1), and so

t = t0/√(1 – v²/c²)   (10.3)
Here is a reminder of what the symbols in Eq. (10.3) represent:
t0 = time interval on clock at rest relative to an observer = proper time
t = time interval on clock in motion relative to an observer
v = speed of relative motion
c = speed of light
Because the quantity √(1 – v²/c²) is always smaller than 1 for a moving object, t is always greater than t0. The moving clock in the spacecraft appears to tick at a slower rate than the stationary one on the ground, as seen by an observer on the ground.
Exactly the same analysis holds for measurements of the clock on the ground by the pilot of the spacecraft. To him, the light pulse of the ground clock follows a zigzag path that requires a total time t per round trip. His own clock, at rest in the spacecraft, ticks at intervals of t0. He too finds that t = t0/√(1 – v²/c²).
So the effect is reciprocal: every observer finds that clocks in motion relative to him tick more slowly than clocks at rest relative to him.
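Eq. (10.3) is easy to evaluate numerically; the following short sketch (our own illustration) computes the dilated interval t for a range of speeds:

```python
import math

def dilated(t0, v, c=3.0e8):
    """Time interval t measured for a clock moving at speed v, Eq. (10.3)."""
    return t0 / math.sqrt(1.0 - v**2 / c**2)

for beta in (0.1, 0.5, 0.9, 0.998):
    t = dilated(1.0, beta * 3.0e8)
    print(f"v = {beta}c: 1 s of proper time appears as {t:.3f} s")
```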
Our discussion has been based on a somewhat unusual clock. Do the same conclusions apply to
ordinary clocks that use machinery – spring-controlled escapements, tuning forks, vibrating quartz crystals, or whatever – to produce ticks at constant time intervals? The answer must be yes, since if a mirror clock and a conventional clock in the spacecraft agree with each other on the ground but not when in flight, the disagreement between them could be used to find the speed of the spacecraft independently of any outside frame of reference – which contradicts the principle that all motion is relative.
10.4 LENGTH CONTRACTION
Measurements of lengths as well as of time intervals are affected by relative motion. The length L of an object in motion with respect to an observer always appears to the observer to be shorter than its length L0 when it is at rest with respect to him. This contraction occurs only in the direction of the relative motion. The length L0 of an object in its rest frame is called its proper length. (We note that in Fig. 10.3 the clock is moving perpendicular to v, hence L = L0 there.)
The length contraction can be derived in a number of ways. Perhaps the simplest is based on time dilation
and the principle of relativity. Let us consider what happens to unstable particles called muons that are created at high altitudes by fast cosmic-ray particles (largely protons) from space when they collide with atomic nuclei in the earth’s atmosphere. A muon has a mass 207 times that of the electron and a charge of either +e or –e; it decays into an electron or a positron after an average lifetime of 2.2 μs (2.2 × 10⁻⁶ s).
Cosmic-ray muons have speeds of about 2.994 × 10⁸ m/s (0.998c) and reach sea level in profusion – one of them passes through each square centimeter of the earth’s surface, on the average, slightly more than once a minute. But in t0 = 2.2 μs, their average lifetime, muons can travel a distance of only
v t0 = (2.994 × 10⁸ m/s)(2.2 × 10⁻⁶ s) = 6.6 × 10² m = 0.66 km
before decaying, whereas they are actually created at altitudes of 6 km or more.
To resolve the paradox, we note that the muon lifetime of t0 = 2.2 μs is what an observer at rest with respect to a muon would find. Because the muons are hurtling towards us at the considerable speed of 0.998c, their lifetimes are extended in our frame of reference by time dilation to

t = t0/√(1 – v²/c²) = 2.2 μs/√(1 – (0.998)²) = 34.8 μs

The moving muons have lifetimes almost 16 times longer than those at rest. In a time interval of 34.8 μs, a muon whose speed is 0.998c can cover the distance
vt = (2.994 × 10⁸ m/s)(34.8 × 10⁻⁶ s) = 1.04 × 10⁴ m = 10.4 km
Although its lifetime is only t0 = 2.2 μs in its own frame of reference, a muon can reach the ground from altitudes of as much as 10.4 km because in the frame in which these altitudes are measured, the muon lifetime is t = 34.8 μs.
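The muon numbers quoted above can be checked directly; this sketch reproduces the 0.66 km and 10.4 km figures:

```python
import math

c = 3.0e8
v = 0.998 * c              # muon speed, m/s
t0 = 2.2e-6                # proper lifetime, s

gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)
t = gamma * t0             # dilated lifetime in the earth frame

print(f"range without dilation: {v * t0 / 1e3:.2f} km")   # ~0.66 km
print(f"dilated lifetime: {t * 1e6:.1f} us")               # ~34.8 us
print(f"range with dilation: {v * t / 1e3:.1f} km")        # ~10.4 km
```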
What if somebody were to accompany a muon in its descent at v = 0.998c, so that to him or her the muon is at rest? The observer and the muon are now in the same frame of reference, and in this frame the muon’s lifetime is only 2.2μs. To the observer, the muon can travel only 0.66km before decaying. The only way to account for the arrival of the muon at ground level is if the distance it travels, from the point of view of an observer in the moving frame, is shortened by virtue
of its motion. The principle of relativity tells us that the shortening must be by the same factor of √(1 – v²/c²) by which the muon lifetime is extended from the point of view of a stationary observer.
We therefore conclude that an altitude we on the ground find to be h0 must appear in the muon’s frame of reference as the lower altitude h = h0√(1 – v²/c²). In our frame of reference the muon can travel h0 = 10.4 km because of time dilation. In the muon’s frame of reference, where there is no time dilation, this distance is shortened to

h = (10.4 km)√(1 – (0.998)²) = 0.66 km

As we know, a muon travelling at 0.998c goes this far in 2.2 μs.
The relativistic shortening of distances is an example of the general contraction of lengths in the direction of motion:
L = L0√(1 – v²/c²)   (10.4)
Clearly the length contraction is most significant at speeds near that of light. A speed of 1000km/s seems fast to us, but it only results in a shortening in the direction of motion to 99.9994 percent of the proper length of an object moving at this speed. On the other hand, something traveling at nine-tenths the speed of light is shortened to 44 percent of its proper length, a significant change.
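Eq. (10.4) can be evaluated for the speeds just quoted; this short sketch reproduces the 99.9994 percent and 44 percent figures:

```python
import math

def contracted_fraction(v, c=3.0e8):
    """L / L0 from Eq. (10.4)."""
    return math.sqrt(1.0 - v**2 / c**2)

print(f"{contracted_fraction(1.0e6) * 100:.4f} %")         # 1000 km/s -> 99.9994 %
print(f"{contracted_fraction(0.9 * 3.0e8) * 100:.0f} %")   # 0.9c -> 44 %
```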
Like time dilation, the length contraction is a reciprocal effect. To a person in a spacecraft, objects on the earth appear shorter than they did when he or she was on the ground by the same factor of √(1 – v²/c²) by which the spacecraft appears shortened to somebody at rest. The proper length L0 found in the rest frame is the maximum length any observer will measure. As mentioned earlier, only lengths in the direction of motion undergo contraction. Thus to an outside observer a spacecraft is shorter in flight than on the ground, but it is not narrower.
10.5 TWIN PARADOX
We are now in a position to understand the famous relativistic effect known as the twin paradox. This paradox involves twins, one of whom remains on the earth while the other goes on a voyage into space at a speed v and returns. Dick is 20 y old when he takes off on a space voyage at a speed of 0.80c to a star 20 light years away. To Jane, who stays behind, the pace of Dick’s life is slower than hers by a factor of

√(1 – v²/c²) = √(1 – (0.80)²) = 0.60 = 3/5

To Jane, Dick’s heart beats only 3 times for every 5 beats of her heart; Dick takes only 3 breaths for every 5 of hers; Dick thinks only 3 thoughts for every 5 of hers. Finally Dick returns after 50 years have gone by according to Jane’s calendar, but to Dick the trip has taken only 30 y. Dick is therefore 50 y old whereas Jane, the twin who stayed home, is 70 y old.
To look at Dick’s voyage from his perspective, we must take into account that the distance L he covers is shortened to

L = L0√(1 – v²/c²) = (20 light years)√(1 – (0.80)²) = 12 light years
To Dick, time goes by at the usual rate, but his voyage to the star has taken L/v = 12/0.80 = 15 y and his return voyage another 15 y, for a total of 30 y. Of course, Dick’s lifespan has not been extended to him, because regardless of Jane’s 50-y wait, he has spent only 30 y on the round trip.
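The bookkeeping of the twin paradox is summarized in this small sketch, which reproduces Jane’s 50 y and Dick’s 30 y (speeds are expressed as fractions of c, distances in light years):

```python
import math

v = 0.80          # Dick's speed as a fraction of c
d = 20.0          # distance to the star, light years

t_jane = 2 * d / v                     # round-trip time on Jane's calendar, years
factor = math.sqrt(1.0 - v**2)         # time-dilation factor, 3/5 here
t_dick = t_jane * factor               # proper time on Dick's clock, years
# Equivalently: Dick sees the distance contracted to d * factor = 12 light years
print(f"Jane waits {t_jane:.0f} y; Dick ages {t_dick:.0f} y (factor {factor:.2f})")
```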
The twin paradox has been verified by experiments in which accurate clocks were taken on an airplane trip around the world and then compared with identical clocks that had been left behind. An observer who departs from an inertial system and then returns after moving relative to that system will always find his or her clocks slow compared with clocks that stayed in the system.
10.6 RELATIVITY OF MASS
When a force is applied to an object free to move, the force does work on the object that increases its kinetic energy, and the object goes faster and faster as a result. Because the speed of light is the speed limit of the universe, however, the object’s speed cannot increase without limit as more work is done on it. But conservation of energy is still valid in the world of relativity: as the object’s speed increases, so does its mass, so that the work done continues to become kinetic energy even though v never exceeds c.
To investigate what happens to the mass of an object as its speed increases, let us consider an elastic collision (that is, a collision in which kinetic energy is conserved) between two particles A
and B, as witnessed by observers in the reference frames S and S', which are in uniform relative motion, with S’ moving in the +x direction with respect to S at the velocity v. The properties of A and B are identical when determined in reference frames in which they are at rest. Before the collision, particle A had been at rest in frame S and particle B in frame S'. Then, at the same instant, A was thrown in the +y direction at the speed VA while B was thrown in the –y' direction at the speed VB, where
VA = VB (10.5)
Hence the behavior of A as seen from S is exactly the same as the behavior of B as seen from S'.
When the two particles collide, A rebounds in the –y direction at the speed VA, while B rebounds in the +y' direction at the speed VB.
If linear momentum is conserved in the S frame, it must be true that

mA VA = mB VB   (10.6)

where VA and VB are here the speeds of the two particles as measured in S. If Y is the transverse distance each particle covers, A makes its trip in the time T0 = Y/VA as measured in S; B takes the same time T0 as measured in its own frame S', but in S this time is dilated to T0/√(1 – v²/c²), so that B’s speed measured in S is VB = Y√(1 – v²/c²)/T0. Inserting these expressions for VA and VB in Eq. (10.6), we see that momentum is conserved provided that

mA = mB√(1 – v²/c²)   (10.7)
Because A and B are identical when at rest with respect to an observer, the difference between mA and mB means that measurements of mass, like those of space and time, depend upon the relative speed between an observer and whatever he or she is observing.
In the example above both A and B are moving in S. In order to obtain a formula that gives the mass m of a body measured while in motion in terms of its mass m0 when measured at rest, we need only consider a similar example in which VA and VB are very small compared with v. In this case an observer in S will see B approach A with the velocity v, make a glancing collision (since VB << v), and then continue on. In S, mA = m0 and mB = m, and so

m = m0/√(1 – v²/c²)   (10.8)
The mass of a body moving at the speed v relative to an observer is larger than its mass when at rest relative to the observer by the factor 1/√(1 – v²/c²). This mass increase is reciprocal; to an observer in S', mA = m and mB = m0. Measured from the earth, a spacecraft in flight is shorter than its twin still on the ground and its mass is greater. To somebody on the spacecraft in flight the ship on the ground also appears to be shorter and to have a greater mass.
Relativistic mass increases are significant only at speeds approaching that of light. At a speed one-tenth that of light the mass increase amounts to only 0.5 percent, but this increase is over 100 percent at a speed nine-tenths that of light. Only atomic particles such as electrons, protons, mesons, and so on have sufficiently high speeds for relativistic effects to be measurable, and in dealing with these particles the “ordinary” laws of physics cannot be used. Historically, the first confirmation of Eq. (10.8) was the discovery by Bucherer in 1908 that the ratio e/m of the electron’s charge to its mass is smaller for fast electrons than for slow ones. This equation, like the others of special relativity, has been verified by so many experiments that it is now recognized as one of the basic formulas of physics.
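The size of the mass increase in Eq. (10.8) is easy to tabulate; this sketch reproduces the 0.5 percent and over-100 percent figures just quoted:

```python
import math

def mass_ratio(v, c=3.0e8):
    """m / m0 from Eq. (10.8)."""
    return 1.0 / math.sqrt(1.0 - v**2 / c**2)

for beta in (0.1, 0.9):
    increase = (mass_ratio(beta * 3.0e8) - 1.0) * 100
    print(f"v = {beta}c: mass increase = {increase:.1f} %")  # ~0.5 % and ~129 %
```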
As v approaches c, the √(1 – v²/c²) in Eq. (10.8) approaches 0, and the mass m approaches infinity. If v = c, m would be infinite, from which we conclude that v can never equal c: no material object can travel as fast as light. But what if a spacecraft moving at v1 = 0.5c relative to the earth fires a projectile at v2 = 0.5c in the same direction? We on earth might expect to observe the projectile’s speed as v1 + v2 = c. Actually, velocity addition in relativity is not so simple a process: the relativistic rule V = (v1 + v2)/(1 + v1v2/c²) gives V = c/(1 + 0.25) = 0.8c in such a case.
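The relativistic velocity-addition rule can be checked numerically; note in this sketch how it never yields a result above c:

```python
def add_velocities(v1, v2, c=3.0e8):
    """Relativistic addition of collinear velocities."""
    return (v1 + v2) / (1.0 + v1 * v2 / c**2)

c = 3.0e8
print(add_velocities(0.5 * c, 0.5 * c) / c)  # 0.8, not 1.0
print(add_velocities(0.9 * c, 0.9 * c) / c)  # ~0.994, still below c
```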
10.7 MASSLESS PARTICLES
Can a massless particle exist? To be more precise, can a particle exist which has no rest mass but which nevertheless exhibits such particle like properties as energy and momentum? In classical mechanics, a particle must have rest mass in order to have energy and momentum, but in relativistic mechanics this requirement does not hold.
Let us see what we can learn from the relativistic formulas for total energy and linear momentum:
Total energy,

E = mc² = m0c²/√(1 – v²/c²)   (10.9)

Relativistic momentum,

p = mv = m0v/√(1 – v²/c²)   (10.10)

When m0 = 0 and v < c, it is clear that E = p = 0. A massless particle with a speed less than that of light can have neither energy nor momentum. However, when m0 = 0 and v = c, E = 0/0 and p = 0/0, which are indeterminate: E and p can have any values. Thus Eqs. (10.9) and (10.10) are consistent with the existence of massless particles that possess energy and momentum provided that they travel with the speed of light.
There is another restriction on massless particles. From Eq. (10.9),

E² = m0²c⁴/(1 – v²/c²)

and from Eq. (10.10),

p²c² = m0²v²c²/(1 – v²/c²)

Subtracting p²c² from E² yields

E² – p²c² = m0²c⁴(1 – v²/c²)/(1 – v²/c²) = m0²c⁴

and so

E = √(m0²c⁴ + p²c²)   (10.11)

According to this formula, if a particle exists with m0 = 0, the relationship between its energy and momentum must be given by

E = pc   (massless particles)   (10.12)
This does not mean that massless particles necessarily occur, only that the laws of mechanics do not exclude the possibility, provided that v = c and E = pc for them. In fact, massless particles of two different kinds – the photon and the neutrino – have indeed been discovered, and their behavior is as expected.
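Eq. (10.11) can be read as saying that E² – (pc)² is an invariant fixed by the rest mass; this sketch (illustrative values) evaluates it for a massive and a massless case:

```python
import math

def invariant_mass_energy(E, p, c=1.0):
    """Rest energy m0*c^2 recovered from the invariant E^2 - (pc)^2, Eq. (10.11)."""
    return math.sqrt(E**2 - (p * c)**2)

# A massive particle: E > pc, so the invariant is positive
print(invariant_mass_energy(2.064, 2.0))  # ~0.51, the electron rest energy in MeV
# A massless particle: E = pc, so the invariant vanishes
print(invariant_mass_energy(2.0, 2.0))    # 0.0
```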
Numerical examples
1. A spacecraft is moving relative to the earth. An observer on earth finds that, according to his clock, 3601 s elapse while the clock on the spacecraft records one hour (3600 s). What is the spacecraft’s speed relative to the earth?

Here, t0 = 3600 s and t = 3601 s. From Eq. (10.3), t = t0/√(1 – v²/c²), so

v = c√(1 – (t0/t)²) = (3 × 10⁸ m/s)√(1 – (3600/3601)²) = 7.0696 × 10⁶ m/s
2. Solar energy reaches the earth at the rate of about 1.4 kW m⁻² of surface perpendicular to the direction of the sun. By how much does the mass of the sun decrease per second due to this energy loss? The mean radius of the earth’s orbit round the sun is 1.5 × 10¹¹ m.

Energy lost per second, E0 = (1.4 × 10³ W m⁻²)(4π)(1.5 × 10¹¹ m)² ≈ 4 × 10²⁶ J

Mass lost per second, m0 = E0/c² = (4 × 10²⁶ J)/(3 × 10⁸ m/s)² ≈ 4.4 × 10⁹ kg
3. An electron and a photon each have a momentum of 2 MeV/c. Find the total energy of each.

Electron energy, Ee = √((m0c²)² + (pc)²) = √((0.511 MeV)² + (2 MeV)²) = 2.064 MeV

Photon energy, Ep = pc = 2 MeV
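As a cross-check, the first two numerical examples can be reproduced in a few lines (a sketch, using the constants listed below):

```python
import math

c = 3.0e8

# Example 1: t0 = 3600 s of proper time appears as t = 3601 s from earth
t0, t = 3600.0, 3601.0
v = c * math.sqrt(1.0 - (t0 / t)**2)
print(f"spacecraft speed: {v:.4e} m/s")            # ~7.07e6 m/s

# Example 2: solar power through a sphere of radius equal to the earth's orbit
P = 1.4e3 * 4 * math.pi * (1.5e11)**2              # W, about 4e26
print(f"mass lost per second: {P / c**2:.1e} kg")  # ~4.4e9 kg
```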
Some useful physical constants

Avogadro number, NA = 6.022 × 10²³ per gram mole
Boltzmann constant, k = 1.38 × 10⁻²³ J K⁻¹
Electron charge, e = 1.60 × 10⁻¹⁹ coulomb
Electron mass, m = 9.11 × 10⁻³¹ kg
Permeability of free space, μ0 = 4π × 10⁻⁷ (1.257 × 10⁻⁶) H m⁻¹
Permittivity of free space, ϵ0 = 8.85 × 10⁻¹² F m⁻¹
Planck’s constant, h = 6.62 × 10⁻³⁴ J s
Proton rest mass, mp = 1.67 × 10⁻²⁷ kg
Velocity of light, c = 3 × 10⁸ m s⁻¹