Physics
Physics is the science of matter and energy, their interactions, and their
motion through space and time. It is the science of change, including
physical laws, physical properties and phenomena.
Laws -
Forces -
States -
Chemistry -
Observations -
Scopes -
Atoms -
Quantum Physics -
Particles -
Waves -
Fields
A Physicist is a
scientist who has
specialized knowledge in the
field of physics, which encompasses the interactions of matter and energy
at all length and
time scales in the physical universe. Physicists
generally are interested in the
root or
ultimate causes of phenomena, and usually frame their understanding in
mathematical terms. Physicists work across a
wide
range of research fields, spanning all length
scales: from sub-atomic and
particle physics, to molecular length scales of
chemical and
biological interest, to
cosmological length scales encompassing the
Universe
as a whole. The field generally includes two types of physicists:
experimental physicists who specialize in the observation of physical
phenomena and the analysis of experiments, and
theoretical physicists who specialize in mathematical modeling of
physical systems to rationalize, explain and predict natural phenomena.
Physicists can apply their knowledge towards
solving practical problems or
developing new technologies
(also known as applied physics or
engineering
physics). "Physicists do a lot of research to learn more and
understand more about our world, and in the process they make many new
discoveries. Then
engineers will build and
develop new tools and products using this newly discovered knowledge that
physicists provided from years of research".
Classical Physics refers to theories of physics that predate modern,
more complete, or more widely applicable theories. If a currently accepted
theory is considered to be "modern," and its introduction represented a
major
paradigm shift, then the previous theories, or new theories based on
the older paradigm, will often be referred to as belonging to the realm of
"classical" physics.
Physics has learned a lot about how matter and
energy work and behave, but we are still only scratching the
surface of what can be known. We live on a speck of dust in a universe so
large that we cannot see where it ends, and we cannot yet probe how small
things can get, so we still have a
lot more to learn. So
shut up and calculate,
or shut up and
contemplate.
Physical Law is a
theoretical statement "inferred from particular
facts, applicable to a
defined group or class of
phenomena, and expressible by the statement that
a particular phenomenon always occurs if certain conditions be present."
Physical laws are typically conclusions based on
repeated scientific experiments and
observations over many years and which have become
accepted universally within the scientific community. The production of a
summary description of our environment in the form of such laws is a
fundamental aim of science. These terms are not used the same way by all
authors.
Fundamental Physics Formulas (wiki) -
Forces of
Nature (constants).
Constructor Theory is a proposal for a new mode of explanation in
fundamental physics that states physical laws in terms of which
transformations are possible, which are impossible, and why, rather than
in terms of trajectories and initial conditions.
Ergodic Theory is a branch of mathematics that studies
statistical properties of deterministic
dynamical systems; it is the study of ergodicity, which expresses the idea
that a point of a moving system, either a dynamical system or a stochastic
process, will eventually visit all parts of the space that the
system moves in, in a
uniform and random sense. This implies that the average behavior of the
system can be deduced from the
trajectory of a "typical" point. Equivalently, a sufficiently large
collection of random samples from a process can represent the average
statistical properties of the entire process.
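The idea that a single trajectory's time average matches the whole-space average can be sketched numerically. This is a minimal illustration, not a proof: the irrational rotation map, the observable, and the step count below are all illustrative assumptions.

```python
import math

# Irrational rotation on the unit interval: x -> (x + phi) mod 1.
# For irrational phi this map is ergodic, so the time average of an
# observable along one orbit converges to its space (integral) average.
def time_average(f, x0=0.0, phi=(math.sqrt(5) - 1) / 2, steps=100_000):
    x, total = x0, 0.0
    for _ in range(steps):
        total += f(x)
        x = (x + phi) % 1.0
    return total / steps

# Observable f(x) = sin^2(2*pi*x); its space average over [0, 1) is 1/2.
f = lambda x: math.sin(2 * math.pi * x) ** 2
avg = time_average(f)
print(avg)  # close to 0.5, the space average
```

The time average computed from one "typical" starting point lands very near 0.5, the average over the whole space, which is exactly what ergodicity asserts.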
Ergodicity is a property of the system; it is a statement that the
system cannot be reduced or factored into smaller components. Ergodic
theory is the study of systems possessing ergodicity.
Phenomenon is any thing which manifests itself. Phenomena are
often, but not always, understood as "things that appear" or "experiences"
for a sentient being, or in principle may be so. (From the Greek for to
show, shine, appear, to be manifest or manifest itself; plural: phenomena.)
Particle Physics is the branch of physics that studies the
nature of the
particles that constitute
matter (particles with mass) and
radiation (massless particles).
Waves -
Particle Accelerator.
Atomic Physics
is the field of physics that studies
atoms as an
isolated system of electrons and an atomic nucleus. It is primarily
concerned with the arrangement of
electrons around
the nucleus and the processes by which these arrangements change. This
comprises ions as well as neutral atoms; unless otherwise stated, the term
atom can be assumed to include ions.
Atomic fingerprint is a term for the unique line
spectrum that is characteristic of
a given element and can be used for identification.
Balmer Formula is a mathematical equation for calculating the emitted
wavelength of light emitted from an excited hydrogen atom
when an electron drops to the second energy level. When an electron
“drops” to a lower energy level, that electron ends up with less energy
than it had originally. That “lost” energy doesn’t just disappear,
it’s not destroyed, it has to
go somewhere. Instead, that energy is transformed into another
type of energy. In the case of these atoms, it’s transformed into
electromagnetic energy; sometimes visible light, sometimes X-rays,
sometimes other.
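As a sketch, the Balmer wavelengths described above can be computed from the Rydberg formula, 1/λ = R_H(1/2² − 1/n²), using the standard Rydberg constant for hydrogen:

```python
# Balmer series: wavelength emitted when a hydrogen electron drops
# from level n (n > 2) to the second energy level.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def balmer_wavelength_nm(n):
    inv_wavelength = R_H * (1 / 2**2 - 1 / n**2)  # in 1/m
    return 1e9 / inv_wavelength  # convert meters to nanometers

for n in (3, 4, 5, 6):
    print(n, round(balmer_wavelength_nm(n), 1))
# n=3 gives ~656.5 nm, the familiar red H-alpha line of the visible spectrum
```

Larger drops (higher n) release more energy and so give shorter wavelengths, which is why the series marches from red toward violet.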
Nuclear Physics is the field of physics that studies atomic
nuclei and their constituents and
interactions. The most commonly known
application of nuclear physics is
nuclear power generation, but the
research has led to applications in many fields, including nuclear
medicine and magnetic resonance imaging,
nuclear weapons, ion implantation in materials engineering, and
radiocarbon dating in geology and archaeology.
E = mc² -
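As a quick worked example of mass-energy equivalence (with rounded constants), the rest energy locked in a single gram of matter:

```python
# Mass-energy equivalence E = m * c^2.
c = 2.998e8  # speed of light, m/s

def rest_energy_joules(mass_kg):
    return mass_kg * c**2

E = rest_energy_joules(0.001)  # 1 gram of matter
print(E)  # ~9.0e13 J, roughly the energy of a 21-kiloton explosion
```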
Action Physics is
the study of
Motion.
Digital Physics is a collection of theoretical perspectives
based on the premise that the universe is, at heart, describable by
information. Therefore, according to this theory, the universe can be
conceived of as either the output of a
deterministic or
probabilistic
computer program, a vast,
digital computation device, or mathematically
isomorphic to such a device.
Digital
Philosophy is a modern re-interpretation that all
information must have a
digital means of its
representation. An informational process transforms the
digital representation
of the state of the
system into its future state. The world can be
resolved into
digital bits, with each bit made of smaller bits. These bits
form a
fractal pattern in fact-space. The pattern behaves like a
cellular
automaton. The pattern is inconceivably large in size and dimensions.
Although the world started
simply, its
computation is irreducibly complex.
Theoretical Physics employs mathematical models and
abstractions of physical objects and systems to rationalize, explain and
predict natural phenomena. This is in contrast to experimental physics,
which uses experimental tools to probe these phenomena.
Experimental
Physics is the category of disciplines and sub-disciplines in the
field of physics that are concerned with the
observation of physical phenomena and
experiments. Methods vary from
discipline to discipline, from simple experiments and observations, such
as the Cavendish experiment, to more complicated ones, such as the
Large Hadron Collider.
Mathematical Physics refers to development of
mathematical methods for application
to problems in physics.
Science Research
Plasma Physics:
Plasma can be created by heating a gas or subjecting it to a strong
electromagnetic field, applied with a
laser
or microwave generator at temperatures above 5000 °C. This decreases or
increases the number of electrons in the atoms or molecules, creating
positive or negative charged particles called ions, and is accompanied by
the dissociation of molecular bonds, if present.
Plasma is the most
abundant form of ordinary matter in the universe.
Fusion (cold fusion)
Astrophysics is the branch of
astronomy that employs the principles
of physics and chemistry "to ascertain the nature of the heavenly bodies,
rather than their positions or motions in space."
Metaphysics (philosophy)
Biophysics is an interdisciplinary science that applies the
approaches and methods of physics to study
biological systems. Biophysics covers all scales of biological
organization, from
molecular to organismic and populations. Biophysical
research shares significant overlap with
biochemistry, physical chemistry,
nanotechnology, bioengineering, computational biology, biomechanics and
systems biology.
Electromagnetic
Radiation -
Mitochondria.
If one wants to summarize our knowledge of physics in the briefest
possible terms, there are three really fundamental observations: (i)
Space-time is a
pseudo-Riemannian manifold M, endowed with a
metric tensor
and governed by geometrical laws. (ii) Over M is a vector bundle X with a
nonabelian gauge group G. (iii)
Fermions are sections of
(Ŝ₊ ⊗ V_R) ⊕ (Ŝ₋ ⊗ V_R̃). R and R̃ are not isomorphic; their failure to be
isomorphic explains why the light fermions are light and presumably has
its origins in a representation difference Δ in some underlying theory.
All of this must be supplemented with the understanding that the
geometrical laws obeyed by the metric tensor, the gauge
fields, and the
fermions are to be interpreted in
quantum mechanical terms.
Einstein
Field Equations relate the geometry of space-time with the
distribution of
matter within it.
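In standard notation the field equations are commonly written:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

where G_{μν} is the Einstein tensor describing space-time curvature, Λ is the cosmological constant, g_{μν} is the metric tensor, and T_{μν} is the stress-energy tensor describing the distribution of matter and energy.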
Dirac Equation
is a relativistic
wave equation that describes all
spin-1/2 massive
particles such as electrons and quarks for which
parity is a symmetry. It is consistent with both the principles of quantum
mechanics and the theory of special relativity, and was the first theory
to account fully for special relativity in the context of quantum
mechanics. It was validated by accounting for the fine details of the
hydrogen spectrum in a completely rigorous way.
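One standard form of the equation, for a free particle of mass m, is:

```latex
\left(i\hbar\,\gamma^{\mu}\partial_{\mu} - m c\right)\psi = 0
```

where the γ^μ are the 4×4 Dirac gamma matrices and ψ is a four-component spinor wave function.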
Yang–Mills Theory is a
gauge
theory based on a special unitary group SU(N), or more generally any
compact, reductive Lie algebra. Yang–Mills theory seeks to describe the
behavior of elementary particles using these non-abelian Lie groups and is
at the core of the unification of the electromagnetic force and weak
forces (i.e. U(1) × SU(2)) as well as quantum chromodynamics, the theory
of the strong force (based on SU(3)). Thus it forms the basis of our
understanding of the Standard Model of particle physics.
Forces of Nature - Laws
1:
Weak Force
is responsible for
radioactive
decay, which plays an essential role in
nuclear fission. Weak Force is also known
as Weak Interaction or Weak Nuclear
Force.
Electroweak Interaction is the unified description of two of the four
known fundamental
interactions of
nature, which are
electromagnetism and the weak
interaction.
Physical Law -
Scientific Law -
Action Physics
-
Waves -
Conservation of Energy and Mass -
Physical Constant
2:
Strong Force is the
mechanism responsible for
holding atoms together. Strong force is also called
strong interaction, the strong force, nuclear strong force or
strong
nuclear force. At the range
of a
femtometer, it is the strongest force, being approximately 137 times
stronger than
electromagnetism, a million times stronger than the weak
interaction and 10^38 times stronger than
gravitation. The strong nuclear
force ensures the
stability of ordinary
matter, confining
quarks into
hadron particles, such as the proton and
neutron, and the further
binding
of neutrons and protons into atomic nuclei. Most of the mass-energy of a
common proton or neutron is in the form of the strong force field
energy;
the individual
quarks provide only about 1% of the mass-energy of a
proton.
3: Electromagnetic Force
is a
type of physical interaction that occurs between
electrically charged particles.
4: Gravity
is the
force of attraction between all
masses in the universe; especially the attraction of the earth's
mass for
bodies near its surface. All things with
energy are brought toward or
gravitate toward one another, including
stars,
planets,
galaxies and even
light and sub-atomic
particles. The average
gravitational acceleration
at the Earth's surface is 9.8 meters per second squared (m/s²).
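As a sketch, the 9.8 m/s² figure can be recovered from Newton's law of universal gravitation, F = G·m₁·m₂/r², using standard values for Earth's mass and radius:

```python
# Surface gravity from Newton's law of universal gravitation.
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24  # Earth's mass, kg
R_EARTH = 6.371e6   # Earth's mean radius, m

def surface_gravity(mass_kg, radius_m):
    # Acceleration felt at distance radius_m from a body's center, m/s^2
    return G * mass_kg / radius_m**2

g = surface_gravity(M_EARTH, R_EARTH)
print(round(g, 2))  # ~9.82 m/s^2, matching the familiar 9.8 figure
```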
The strongest force is the
nuclear force.
The second strongest force is the
electromagnetic force. The third strongest force is the
weak
nuclear force, and the fourth strongest force is
gravity. The nuclear force is much
stronger than the electromagnetic force because it operates at a much
shorter range. Since protons have charge +1 e, they experience an electric
force that tends to push them apart, but at short range, the attractive
nuclear force is strong enough to overcome the electromagnetic force. The
electromagnetic force is a long-range force that acts between electrically
charged particles, while the nuclear force acts only between
protons and
neutrons, which
are found within the atomic nucleus. The residual nuclear force between
nucleons is short-ranged because it is effectively mediated by the
exchange of massive particles called mesons, whereas the photons that
mediate the long-range electromagnetic force are massless.
Electromagnetic forces derive from the forces between charges and
magnetism; the strong nuclear force derives from gluons and nucleons. If
you take two protons and hold them very close together, they will exert
several forces on each other.
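To get a feel for the scales involved, the electric repulsion between two protons held one femtometer apart (the range quoted above for the strong force) can be computed with Coulomb's law:

```python
# Coulomb repulsion between two protons at 1 femtometer separation,
# the range at which the attractive strong nuclear force takes over.
K = 8.988e9    # Coulomb constant, N m^2 / C^2
E_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1, q2, r):
    # Magnitude of the electric force between charges q1, q2 at distance r
    return K * q1 * q2 / r**2

F = coulomb_force(E_CHARGE, E_CHARGE, 1e-15)
print(round(F, 1))  # ~230.7 N of repulsion that the strong force overcomes
```

Over 200 newtons acting on a single subatomic particle is enormous, which makes vivid how strong the short-range nuclear attraction must be to bind nuclei at all.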
Fifth Force?
- A
Description is not an Explanation -
Core Theory of
Physics (image of equation)
Theory of Everything is a theoretical framework of physics that links
together all the physical aspects of the universe into a single coherent
explanation.
Fusion.
Standard Model is a theory concerning the electromagnetic, weak, and
strong nuclear interactions, as well as classifying all the subatomic
particles known.
Penguin
Diagrams are a class of Feynman diagrams which are important
for understanding CP violating processes in the standard model. They refer
to one-loop processes in which a quark temporarily changes flavor (via a W
or Z loop), and the flavor-changed quark engages in some tree interaction,
typically a strong one. For the interactions where some quark flavors
(e.g. very heavy ones) have much higher interaction amplitudes than
others, such as CP-violating or Higgs interactions, these penguin
processes may have amplitudes comparable to or even greater than those of
the direct tree processes. A similar diagram can be drawn for leptonic
decays.
Fundamental Interactions
are the
interactions that do not
appear to be reducible to more basic interactions. There are four
fundamental interactions known to exist: the gravitational and
electromagnetic interactions, which produce significant long-range forces
whose effects can be seen directly in everyday life, and the strong and
weak interactions, which produce forces at minuscule, subatomic distances
and govern nuclear interactions. Some scientists hypothesize that a fifth
force might exist, but these hypotheses remain speculative. (also known as
fundamental forces).
Uniformitarianism is the assumption that the same
natural laws and
processes that operate in the universe now have
always operated in the universe in the past and apply everywhere in the
universe.
Violating the Laws of Physics. The
laws of physics don't define what's possible; what's possible defines the
laws of physics. So it's possible to violate the known laws of physics.
This has happened repeatedly throughout history, and has led to both
modification of the known laws and greater understanding of our physical
universe.
“The laws of nature are but the mathematical thoughts of
God.” ~
Euclid.
“There is geometry in the humming of the strings, there
is music in the spacing of the spheres.” ~
Pythagoras.
“The physical world is not the real world; instead,
ultimate reality exists beyond our sensory experiences.” ~
Plato.
Laws Man Made -
Law of Divine Oneness
-
Law of Correspondence
-
Law of Rhythm -
Law of Attraction -
Law of Inspired Action -
Law of Gender -
Law of Perpetual Transmutation of Energy
-
Law of Cause and
Effect -
Law of
Compensation -
Law of Relativity -
Law of Polarity.
States of Matter - Phases
1:
Gas may be made up of individual
atoms
like with a
noble gas, or elemental
molecules made from one type of
atom like with
oxygen, or compound molecules made from a variety of atoms
like
carbon dioxide. A gas mixture would contain a variety of pure gases
much like the
air. What
distinguishes a gas from
liquids and solids is the vast separation of the
individual gas particles. This separation usually makes a
colorless gas
invisible to the human observer. The interaction of gas particles in the
presence of electric and gravitational fields is considered negligible.
One type of
commonly known gas is
steam.
Evanescent is something tending to vanish
like vapor.
Evaporation -
Effervescent -
Conservation of Mass.
2:
Solid is characterized by
structural rigidity and resistance to changes of
shape or volume. Unlike a liquid, a
solid object does not flow to take on
the shape of its container, nor does it expand to fill the entire volume
available to it like a gas does. The atoms in a solid are tightly bound to each other, either in a regular geometric lattice (
crystalline solids,
which include
metals and ordinary
ice) or irregularly (an amorphous solid
such as common window glass).
Rocks -
Minerals -
Particles -
Mass.
3:
Liquid is a nearly
incompressible
fluid that conforms to the shape of its container but
retains a nearly constant volume independent of
pressure. As such, it is
one of the four fundamental states of matter (the others being solid, gas,
and plasma), and is the only state with a definite volume but no fixed
shape. A liquid is made up of tiny vibrating particles of matter, such as
atoms, held together by intermolecular bonds.
Water is, by far, the most
common liquid on Earth. Like a gas, a liquid is able to flow and take
the shape of a container. Most liquids resist compression, although others
can be compressed. Unlike a gas, a liquid does not disperse to fill every
space of a container, and maintains a fairly constant density. A
distinctive property of the liquid state is
surface tension, leading to
wetting phenomena.
Lava is a liquid,
but when it cools, it becomes
rock or a solid.
4:
Plasma
is a
gas that becomes
heated until the
atoms lose all
their
electrons, leaving a highly electrified
collection of nuclei and
free electrons. A plasma has properties unlike those of the other
three states.
A plasma can be created by
heating a
gas or subjecting it to a strong
electromagnetic field, applied with a
Laser or
microwave generator. This
decreases or increases the number of
electrons, creating positive or
negative charged particles called
ions, and is accompanied by the
dissociation of molecular bonds, if present.
Plasma
Oscillations are rapid oscillations of the electron density in
conducting media such as plasmas or metals. The oscillations can be
described as an instability in the
dielectric function of a free electron
gas. The frequency only depends weakly on the
wavelength of the
oscillation. The quasiparticle resulting from the quantization of these
oscillations is the
plasmon.
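The frequency of these oscillations follows the standard plasma-frequency relation ω_p = √(n·e²/(ε₀·m_e)); a minimal sketch, assuming an illustrative electron density:

```python
import math

# Plasma (Langmuir) oscillation frequency from the electron density n.
E_CHARGE = 1.602e-19  # elementary charge, C
M_ELECTRON = 9.109e-31  # electron mass, kg
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plasma_frequency_hz(n_per_m3):
    omega_p = math.sqrt(n_per_m3 * E_CHARGE**2 / (EPS0 * M_ELECTRON))
    return omega_p / (2 * math.pi)  # convert rad/s to Hz

# Assumed density of 1e18 electrons per cubic meter, a laboratory-scale plasma
f_p = plasma_frequency_hz(1e18)
print(f_p)  # ~9 GHz
```

Because the frequency scales with the square root of the density, denser plasmas ring faster; electromagnetic waves below a plasma's frequency are reflected by it, which is how the ionosphere bounces radio signals.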
Plasma is the most common form of
ordinary matter, as most of
the visible universe is composed of
stars.
Nonthermal Plasma is a plasma which is not in
thermodynamic equilibrium, because the
electron temperature is much hotter than the temperature of heavy species
(ions and neutrals). As only electrons are thermalized, their
Maxwell-Boltzmann velocity distribution is very different from the ion
velocity distribution. When one of the velocities of a species does not
follow a Maxwell-Boltzmann distribution, the plasma is said to be non-Maxwellian.
A kind of common nonthermal plasma is the mercury-vapor gas within a
fluorescent lamp, where the "electron gas" reaches a temperature of 20,000
K (19,700 °C; 35,500 °F) while the rest of the gas,
ions and neutral
atoms, stays barely above room temperature, so the bulb can even be
touched with hands while operating.
Where does laser energy go after being fired into plasma? -
Plasma Physics -
Plasma
Universe -
Elements
Lightning Strikes create plasma via a
very strong jolt of
electricity.
Certain regions of Earth's
atmosphere
or ionosphere contain some plasma created primarily by
ultraviolet radiation from the
sun.
State of
Matter is one of the distinct forms in which matter can exist.
Four
States of Matter are Observable in Everyday Life: solid, liquid, gas, and
plasma.
Many other states are known to exist, such as glass or liquid
crystal, and some only exist under extreme conditions, such as
Bose–Einstein condensates, neutron-degenerate matter, and quark-gluon
plasma, which only occur, respectively, in situations of extreme cold,
extreme density, and extremely high energy. Some other states are believed
to be possible but remain theoretical for now. For a complete list of all
exotic states of matter, see the
List of States of Matter (wiki).
Phase Transition is when a
substance changes from a solid,
liquid, or gas state to a different state. Every element and substance can
transition from one phase to another at a specific combination of
temperature and pressure.
Phase transitions, in
chemistry,
thermodynamics, and many other related
fields, also called phase changes, are the physical processes of
transition between the basic states of matter: solid, liquid, and gas, as
well as plasma in rare cases. A phase of a thermodynamic system and the
states of matter have uniform physical properties. During a phase
transition of a given medium, certain properties of the medium change,
often discontinuously, as a result of the change of external conditions,
such as temperature, pressure, or others. For example, a liquid may become
gas upon heating to the boiling point, resulting in an abrupt change in
volume. The measurement of the external conditions at which the
transformation occurs is termed the phase transition point. Phase transitions
commonly occur in nature and are used today in many technologies.
Enhanced "super-resolution" machine learning techniques to study phase
transitions.
Surface
Science is the study of physical and
chemical
phenomena that occur at the interface of two phases, including
solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces,
and liquid–gas interfaces. It includes the fields of surface chemistry and
surface physics, which is the study of physical interactions that
occur at
interfaces. Some of the things investigated by
surface physics include
friction,
surface states,
surface diffusion, surface reconstruction, surface
phonons and plasmons, epitaxy and surface enhanced Raman scattering, the
emission and tunneling of
electrons, spintronics, and the
self-assembly of
nanostructures on surfaces. In a confined liquid, defined by geometric
constraints on a nanoscopic scale, most molecules sense some surface
effects, which can result in physical properties grossly deviating from
those of the bulk liquid.
Surface Chemistry -
Cell
Surface -
Surface
Engineering -
Refrigeration
Absorption in chemistry is a physical or chemical phenomenon or a
process in which atoms, molecules or ions enter some bulk phase – liquid
or solid material. This is a different process from adsorption, since
molecules undergoing absorption are taken up by the volume, not by the
surface (as in the case for adsorption). A more general term is sorption,
which covers absorption, adsorption, and ion exchange. Absorption is a
condition in which something takes in another substance.
New State of Physical Matter in which atoms can exist as both Solid and
Liquid simultaneously. Applying high pressures and temperatures to
potassium -- a simple metal -- creates a state in which most of the
element's atoms form a solid lattice structure, the findings show.
However, the structure also contains a second set of potassium atoms that
are in a fluid arrangement. Under the right conditions, over half a dozen
elements -- including sodium and bismuth -- are thought to be capable of
existing in the newly discovered state, researchers say.
Glass
Transition is the gradual and
reversible transition in
amorphous materials (or in amorphous regions within
semicrystalline materials) from a hard
and relatively brittle "glassy" state into a
viscous or
rubbery state as the
temperature is increased. An amorphous solid that exhibits a glass
transition is called a
glass.
The reverse transition, achieved by supercooling a viscous liquid into the
glass state, is called
vitrification.
New phase of matter in 2D which defies normal statistical mechanics.
Physicists have created the first two-dimensional version of the
Bose glass, a novel phase of matter that challenges statistical
mechanics. The Bose glass has some glassy properties and within it all
particles are localized. This means that each particle in the system
sticks to itself,
not mixing
with its neighbors. If coffee were localized, then when stirring milk
into the coffee, the intricate pattern of black and white stripes would
remain forever, instead of washing out to an average. To create this new
phase of matter, the group overlapped several laser beams to create a
quasiperiodic pattern, a pattern that is long-range ordered like a
conventional crystal, but not periodic, meaning that, like a Penrose
tiling, it never repeats. When filling the resulting structure with
ultracold atoms cooled to nanokelvin temperatures -- close to absolute
zero, the atoms formed the Bose glass.
Physicists have identified a new state of matter whose
structural order
operates by rules more aligned with
quantum mechanics
than standard
thermodynamic theory. In
a classical material called artificial spin ice, which in certain phases
appears disordered, the material is actually ordered in a
"topological" form.
Electron family creates previously unknown state of matter.
Researchers have demonstrated a completely novel state of matter in a
metal. It is created by the combination of four
electrons -- until now, only electron pairs were known. This discovery
could lead to a new type of
superconductivity,
an entirely new research direction, and revolutionary technologies such as
quantum sensors.
Supersolid is a spatially ordered material with superfluid properties.
A supersolid is a special
quantum state of matter
where particles form a rigid, spatially ordered structure, but also flow
with
zero viscosity. This is in
contradiction to the intuition that flow, and in particular superfluid
flow with zero viscosity, is a property exclusive to the fluid state,
e.g., superconducting electron and neutron fluids, gases with
Bose–Einstein condensates, or unconventional liquids such as
helium-4
or
helium-3 at sufficiently low temperature. For more than 50 years it
was thus unclear whether the supersolid state can exist.
New phase of matter called the chiral Bose-liquid state. Under
everyday conditions, matter can be a solid, liquid or gas. But once you
venture beyond the everyday -- into temperatures approaching absolute
zero, things smaller than a fraction of an atom or which have extremely
low states of energy -- the world looks very different. You find quantum
states of matter way out on these fringes, and they are much wilder than
the three classical states we encounter in our everyday lives. If the
number of electrons in the top layer and holes in the bottom layer were
equal, then you would expect to see the particles acting in a correlated
manner. But when you design the bottom layer so that there is a local
imbalance between the number of electrons and holes in the bottom layer,
it becomes like a game of musical chairs designed to frustrate the
electrons. Instead of each electron having one chair to go to, they must
now scramble and have many possibilities in where they sit.
Information has been proposed as the fifth form of matter or the fifth
element. On this hypothesis, information has mass, and all elementary
particles, the smallest known building blocks of the universe, store
information about themselves, similar to the way humans have DNA.
Photon-Phonon Breakthrough. New research has uncovered a novel way to
combine two different states of matter. For
one of the first times, topological photons -- light -- have been combined
with lattice vibrations, also known as phonons, to manipulate their
propagation in a robust and controllable way. The study utilized
topological photonics, an emergent direction in photonics which leverages
fundamental ideas of the mathematical field of topology about conserved
quantities -- topological invariants -- that remain constant when altering
parts of a geometric object under continuous deformations.
Phonon
is a collective excitation in a periodic, elastic arrangement of atoms or
molecules in condensed matter, specifically in solids and some liquids.
Often referred to as a quasiparticle, it is an excited state in the
quantum mechanical quantization of the modes of vibrations for elastic
structures of interacting particles. Phonons can be thought of as
quantized sound waves, similar to photons as quantized light waves. The
study of phonons is an important part of condensed matter physics. They
play a major role in many of the physical properties of condensed matter
systems, such as thermal conductivity and electrical conductivity, as well
as play a fundamental role in models of neutron scattering and related
effects.
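The quantization of lattice vibrations can be made concrete with the textbook dispersion relation for a one-dimensional monatomic chain, ω(k) = 2√(K/m)·|sin(ka/2)|. The spring constant, atomic mass, and lattice spacing below are illustrative assumptions, not values for a real material:

```python
import math

# Phonon dispersion of a 1D monatomic chain: atoms of mass m connected
# by springs of constant K, spaced a apart along the chain.
K = 10.0   # N/m   (assumed spring constant)
m = 1e-26  # kg    (assumed atomic mass)
a = 3e-10  # m     (assumed lattice spacing)

def phonon_omega(k):
    # Angular frequency of the lattice mode with wavenumber k, rad/s
    return 2 * math.sqrt(K / m) * abs(math.sin(k * a / 2))

# Long wavelengths (k -> 0) give ordinary sound with vanishing frequency;
# the frequency is maximal at the Brillouin-zone boundary k = pi/a.
print(phonon_omega(0.0))          # 0.0
print(phonon_omega(math.pi / a))  # maximum, equal to 2*sqrt(K/m)
```

The linear slope near k = 0 is the speed of sound in the chain, while the flattening at the zone boundary is a purely discrete-lattice effect with no analogue for light in vacuum.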
Spin Ice
is a
magnetic substance that does not
have a single minimal-energy state. It has magnetic moments (i.e. "
spin")
as elementary degrees of freedom which are subject to frustrated
interactions. By their nature, these interactions prevent the moments from
exhibiting a periodic pattern in their orientation down to a temperature
much below the energy scale set by the said interactions. Spin ices show
low-temperature properties, residual entropy in particular, closely
related to those of common
crystalline water
ice. (Shakti spin ice).
Triple Point of a substance is the temperature and pressure at which
the three phases (gas, liquid, and solid) of that substance coexist in
thermodynamic equilibrium. It is that
temperature and
pressure at which the sublimation curve, fusion curve and the
vaporisation curve meet. For example, the triple point of mercury occurs
at a temperature of −38.83440 °C (−37.90192 °F) and a pressure of 0.165
mPa. In addition to the triple point for solid, liquid, and gas phases, a
triple point may involve more than one solid phase, for substances with
multiple polymorphs. Helium-4 is a special case that presents a triple
point involving two different fluid phases (lambda point). The triple
point of water was used to define the kelvin, the base unit of
thermodynamic temperature in the International System of Units (SI). The
value of the triple point of water was fixed by definition, rather than
measured, but that changed with the 2019 redefinition of SI base units.
The triple points of several substances are used to define points in the
ITS-90 international temperature scale, ranging from the triple point of
hydrogen (13.8033 K) to the triple point of water (273.16 K, 0.01 °C, or 32.018 °F).
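The kelvin values quoted above convert to Celsius with T(°C) = T(K) − 273.15; a minimal sketch using the two ITS-90 endpoints mentioned:

```python
# ITS-90 defining triple points quoted in the text, converted from
# kelvin to degrees Celsius.
TRIPLE_POINTS_K = {
    "hydrogen": 13.8033,
    "water": 273.16,
}

def kelvin_to_celsius(t_kelvin):
    return t_kelvin - 273.15

for substance, t_k in TRIPLE_POINTS_K.items():
    print(substance, round(kelvin_to_celsius(t_k), 4))
# water: 0.01 C, hydrogen: -259.3467 C
```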
Matter
Matter is something which has
mass and occupies
space.
Matter includes
atoms and
molecules and anything made
up of these, but not other
energy phenomena or
waves such as light or
sound.
Elements -
Particles
Substance is the
real physical matter of which a
person or thing consists. The choicest
or most essential or most vital part of some idea or experience. A
particular kind or species of matter with
uniform properties.
The property of holding together and retaining its shape.
Mineral.
Material is the
tangible substance that
goes into the makeup of a physical object. Something derived from or composed of
matter and having physical form or substance. Something physical as
distinct from intellectual or
psychological well-being. Something that
is real rather than
spiritual or
abstract, though
information, data or
ideas and observations can be used or reworked into a finished form.
Material Science -
Immaterial.
Physical is something having
substance or material existence and
perceptible to the
senses. Something
characterized by energetic bodily activity, matter and
energy.
Physical Object or material object or body, is a contiguous collection
of matter, within a defined boundary or surface, that exists in space and
time. Usually contrasted with
abstract objects and
mental objects.
Physical System is a
collection
of physical objects under study. The collection differs from a set:
all the objects must
coexist
and have some
physical relationship. In
other words, it is a portion of the physical universe chosen for analysis.
Everything outside the system is known as the
environment, which is ignored
except for its effects on the system.
State of Matter
is one of the distinct forms that matter takes on.
Four states of matter
are observable in
everyday life:
solid, liquid, gas, and plasma.
Pyramid of
Complexity -
Dark Matter
Two electrons can't occupy the same space at the
same time. Two
electrons always repel each
other, so they cannot by themselves combine. Two colliding electrons would
just bounce away from each other. They would produce some
photons from bremsstrahlung radiation, but there wouldn't be any
nontrivial interaction. Electrons repel each other, because they all have
the same electric charge. There are other reasons. They are energetic, and
the uncertainty principle keeps them from staying close or sticking.
However, an electron can combine with its
anti-particle, called a
“positron.” Positrons
are just like electrons but with positive, instead of negative, charge.
Impenetrability in metaphysics is the name given to that quality of
matter whereby two bodies cannot occupy the same space at the same time.
Phase in matter is a region of space (a thermodynamic system),
throughout which all physical properties of a material are essentially
uniform. Examples of
physical properties include density, index of
refraction, magnetization and chemical composition. A simple description
is that a phase is a region of material that is chemically uniform,
physically distinct, and (often) mechanically separable. In a system
consisting of ice and water in a glass jar, the ice cubes are one phase,
the water is a second phase, and the humid air over the water is a third
phase. The glass of the jar is another separate phase.
Phase in physical chemistry is a distinct
state of matter in a system; matter that is identical in chemical
composition and physical state and separated from other material by the
phase boundary. Phase in astronomy is the particular appearance of a
body's state of illumination (especially one of the recurring shapes of
the part of Earth's moon that is illuminated by the sun).
Phase Stages.
Existence of New Form of Electronic Matter Quadrupole Topological
Insulators
Baryonic Matter is nearly all matter that may be encountered or
experienced in everyday life, which includes
atoms of
any sort, and provides those with the property of
mass.
Non-baryonic matter is any sort of matter that is not composed
primarily of
baryons. This might include neutrinos and
free electrons,
dark matter, such as
supersymmetric particles, axions, and black holes.
Baryon is a composite subatomic
particle made up of three
quarks.
Baryon Asymmetry problem in physics refers to the imbalance in
baryonic matter (the type of matter experienced in everyday life) and
anti-baryonic matter in the observable universe. Neither the standard model
of particle physics, nor the theory of general relativity provides an
obvious explanation for why this should be so, and it is a natural
assumption that the universe be neutral with all conserved charges. The
Big Bang should have produced equal amounts of matter and antimatter.
Since this does not seem to have been the case, it is likely some physical
laws must have acted differently or did not exist for matter and
antimatter. Several competing hypotheses exist to explain the imbalance of
matter and antimatter that resulted in baryogenesis. However, there is as
yet no consensus theory to explain the phenomenon. As remarked in a
2012 research paper, "The origin of matter remains one of the great
mysteries in physics."
Condensed Matter Physics is a branch of physics that deals with the
physical properties of condensed phases of matter, where particles adhere
to each other. Condensed matter physicists seek to understand the behavior
of these phases by using physical laws. In particular, they include the
laws of quantum mechanics, electromagnetism and statistical mechanics.
Programmable Matter is matter which has the
ability to change its physical
properties (shape, density, moduli, conductivity, optical properties,
etc.) in a programmable fashion, based upon user input or autonomous
sensing. Programmable matter is thus linked to the concept of a material
which inherently has the ability to perform information processing.
Does Matter Die? Stars
create new
elements in their cores by squeezing elements together in a
process called nuclear
fusion. But if
mass can neither be created nor
destroyed, then how does the
Sun create
different
Atoms?
Matter Creation. Scientists do not have a unique definition of what
matter is. In the currently known particle physics, summarized by the
standard model of elementary particles and interactions, it is possible to
distinguish in an absolute sense particles of matter and particles of
antimatter. This is particularly easy for those particles that carry
electric charge, such as electrons, protons or quarks, while the
distinction is more subtle in the case of neutrinos, fundamental
elementary particles that do not carry electric charge. In the standard
model, it is not possible to create a net amount of matter particles—or
more precisely, it is not possible to change the net number of leptons or
of quarks in any perturbative reaction among particles. This remark is
consistent with all existing observations.
Dark Matter -
Time Crystals -
Anti-Gravity -
Dualism
Antimatter
is a material composed of
anti-particles, which have the same mass as
particles of ordinary
matter, but
opposite charges, lepton numbers, and
baryon numbers.
Antiparticle.
Positron or
anti-electron is the
particle with an electric charge of +1e, a spin of
1/2 (the same as the electron), and the same mass as an
electron. It is the antiparticle (antimatter
counterpart) of the electron. When a positron collides with an electron,
annihilation occurs. If this collision occurs at low energies, it results
in the production of two or more photons.
In 1928, a
physicist named
Paul
Dirac found something strange in
his equations. And he predicted,
based purely on mathematical insight, that there ought to be a second kind
of matter, the
opposite to normal matter, that literally annihilates when
it comes in contact.
Antimatter.
Annihilation is
the process that occurs when a subatomic particle collides with its
respective
Antiparticle to produce other
particles, such as an electron colliding with a positron to produce two
photons. The total energy and momentum of the initial pair are conserved
in the process and distributed among a set of other particles in the final
state. Antiparticles have exactly opposite additive quantum numbers from
particles, so the sums of all quantum numbers of such an original pair are
zero. Hence, any set of particles may be produced whose total quantum
numbers are also zero as long as
conservation of energy and conservation of momentum are obeyed.
During a low-energy annihilation, photon production is favored, since
these particles have no mass. However,
high-energy
particle colliders produce annihilations where a wide variety of
exotic heavy particles are created. The word annihilation is also used
informally for the interaction of two particles that are not mutual
antiparticles - that is, not charge conjugates. Some quantum numbers may then not
sum to zero in the initial state, but are conserved with the same totals in the
final state. An example is the "annihilation" of a high-energy electron
antineutrino with an electron to produce a W-. If the annihilating
particles are composite, such as mesons or baryons, then several different
particles are typically produced in the final state.
Hollow Atoms are short-lived multiply excited neutral atoms which
carry a large part of their Z electrons (Z being the projectile nuclear charge)
in high-n levels while inner shells remain (transiently) empty. This
population inversion arises for typically
100 femtoseconds during the interaction of a slow highly charged ion (HCI)
with a solid surface. Despite this limited lifetime, the formation and
decay of a hollow atom can be conveniently studied from ejected electrons
and soft X-rays, and the trajectories, energy loss and final charge state
distribution of surface-scattered projectiles. For impact on insulator
surfaces, the potential energy carried by the hollow atom may also cause the
release of target atoms and
ions via potential
sputtering and the formation of nanostructures on a surface.
Possible explanation for the dominance of matter over antimatter in the
Universe. Neutrinos and antineutrinos, sometimes called ghost
particles because they are difficult to detect, can
transform from one type to another. The international T2K Collaboration
announces a first indication that the dominance of matter over antimatter
may originate from the fact that
Neutrinos and
antineutrinos behave differently during those oscillations. Neutrinos are
elementary particles which travel through matter almost without
interaction. They appear in three different types: electron-, muon-, and
tau-neutrinos and their respective antiparticles (antineutrinos).
Exotic Matter has several proposed types: Hypothetical particles and
states of matter that have "exotic" physical properties that would violate
known laws of physics, such as a particle having a negative mass.
Hypothetical particles and states of matter that have not yet been
encountered, but whose properties would be within the realm of mainstream
physics if found to exist. Several particles whose existence has been
experimentally confirmed that are conjectured to be exotic hadrons and
within the Standard Model. States of matter that are not commonly
encountered, such as Bose–Einstein condensates, fermionic condensates,
quantum spin liquid, string-net liquid, supercritical fluid, color-glass
condensate, quark–gluon plasma, Rydberg matter, Rydberg polaron and
photonic matter but whose properties are entirely within the realm of
mainstream physics. Forms of matter that are poorly understood, such as
dark matter and mirror matter. Ordinary matter placed under high pressure,
which may result in dramatic changes in its physical or chemical
properties. Degenerate matter. Exotic atoms.
Mass
Mass is both a property
of a
physical body and a measure of its resistance to
acceleration (a change in
its state of motion) when a net force is applied. An object's mass also
determines the strength of its
gravitational attraction to other bodies.
The basic SI unit of mass is the kilogram (kg). In physics, mass is not
the same as weight, even though mass is often determined by measuring the
object's weight using a spring scale, rather than balance scale comparing
it directly with known masses. An object on the Moon would weigh less than
it does on Earth because of the lower gravity, but it would still have the
same mass. This is because weight is a force, while mass is the property
that (along with gravity) determines the strength of this
force. Mass is the
property of a body that causes it to have
weight in a gravitational field.
Join together into a mass or collect or form a mass.
Negative Mass.
Physical Object is a collection of matter within a defined contiguous
boundary in three-dimensional space. The boundary must be defined and
identified by the properties of the material. The boundary may change over
time. The boundary is usually the visible or tangible surface of the
object. The matter in the object is constrained (to a greater or lesser
degree) to move as one object. The boundary may move in space relative to
other objects that it is not attached to (through translation and
rotation). An object's boundary may also deform and change over time in
other ways.
Physical Property is any property that is measurable, whose value
describes a state of a physical system. The changes in the physical
properties of a system can be used to describe its
transformations or evolutions
between its momentary states. Physical properties are often referred to as
observables. They are not modal properties. Quantifiable physical property
is called physical quantity.
Environment.
Mass in
Special Relativity. The word mass has two meanings in special
relativity:
rest mass or invariant mass is an invariant quantity which is
the same for all observers in all reference frames, while
relativistic
mass is dependent on the velocity of the observer. According to the
concept of mass–energy equivalence, the rest mass and relativistic mass
are equivalent to the rest energy and total energy of the body,
respectively. The term relativistic mass tends not to be used in particle
and nuclear physics and is often avoided by writers on special relativity,
in favor of using the body's total energy. In contrast, rest mass is
usually preferred over rest energy. The measurable inertia and
gravitational attraction of a body in a given frame of reference is
determined by its relativistic mass, not merely its rest mass. For
example, light has zero rest mass but contributes to the inertia (and
weight in a gravitational field) of any system containing it. For a
discussion of mass in general relativity, see mass in general relativity.
For a general discussion including mass in Newtonian mechanics, see the
article on mass.
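The distinction between rest mass and relativistic (total) energy can be made concrete numerically. A sketch for an electron; the 0.9c speed is an arbitrary example, not from the text:

```python
import math

c = 299_792_458.0        # speed of light, m/s
m_e = 9.1093837e-31      # electron rest mass, kg

def lorentz_factor(v):
    # gamma = 1 / sqrt(1 - v^2/c^2): relates rest energy to total energy
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.9 * c
gamma = lorentz_factor(v)            # ~2.294 at 90% of light speed
rest_energy = m_e * c ** 2           # invariant: the same in every frame
total_energy = gamma * rest_energy   # frame-dependent total (relativistic) energy
```

The rest energy is the observer-independent quantity; the total energy grows with the observer's relative speed through the factor gamma.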
E = mc²
Energy Equals Mass Times The Speed Of Light Squared. The equation says
that
energy and
mass or matter are interchangeable; they are different
forms of the same thing. Under the right conditions, energy can become
mass, and vice versa.
Theory of Relativity.
Mass Energy Equivalence states that anything having mass has
an equivalent amount of energy and vice versa, with these fundamental
quantities directly relating to one another by Albert Einstein's famous
formula: E=mc² -
Simple Version. This formula states that the equivalent
energy (
E) can be
calculated as the
mass (
m) multiplied by
the
speed of light (
c
= about 3×10^8 m/s) squared. Similarly, anything having energy exhibits a
corresponding mass m given by its energy E divided by the speed of light
squared c². Because the speed of light is a very large number in everyday
units, the formula implies that even an everyday object at rest with a
modest amount of mass has a very large amount of energy intrinsically.
Chemical, nuclear, and other energy transformations may cause a system to
lose some of its energy content (and thus some corresponding mass),
releasing it as light (radiant) or thermal energy for example.
Mass and
Energy
are manifestations of the same thing? The
mass of a body is a measure of its energy content. Mass becomes simply a
physical manifestation of that energy, rather than the other way around.
As we work our way inward—matter into atoms, atoms into sub-atomic
particles, sub-atomic particles into quantum fields and forces—we lose
sight of matter completely. Matter loses its tangibility. It loses its
primacy as mass becomes a secondary quality, the result of
interactions
between intangible quantum fields. What we recognize as mass is a behavior
of these quantum fields; it is not a property that belongs or is
necessarily intrinsic to them. A body's mass overwhelmingly arises from the protons
and neutrons it contains, and the answer is now clear and decisive: the
inertia of that body, to 95 percent accuracy, is its energy content.
E = ±√(m²c⁴ + p²c²), that is, E = +√(m²c⁴ + p²c²) and E = −√(m²c⁴ + p²c²).
Energy-Momentum Relation is the relativistic equation relating total
energy (which is also called relativistic energy) to invariant mass (which
is also called rest mass) and
momentum. It is
the extension of mass–energy equivalence for bodies or systems with
non-zero momentum. It can be written as E² = (mc²)² + (pc)².
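The energy-momentum relation can be checked numerically in its two limiting cases; a minimal sketch (the momentum value for the photon is an arbitrary example):

```python
import math

c = 299_792_458.0      # speed of light, m/s
m_e = 9.1093837e-31    # electron rest mass, kg

def total_energy(m, p):
    # E from the energy-momentum relation: E^2 = (m c^2)^2 + (p c)^2
    return math.sqrt((m * c ** 2) ** 2 + (p * c) ** 2)

e_rest = total_energy(m_e, 0.0)      # at rest the relation reduces to E = mc^2
e_photon = total_energy(0.0, 1e-27)  # for a massless particle it reduces to E = pc
```

At zero momentum the formula collapses to mass-energy equivalence, and at zero mass it gives the photon relation E = pc.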
E = mc² didn't lead to the atomic bomb; the discovery of fission
did. Since a teeny, tiny amount of mass is lost when an atom fissions, E =
mc² predicts the amount of energy released. E = mc² can be used to
calculate the amount of energy liberated in Nuclear Reactions. The
equation is fundamental in understanding nuclear reactions and the energy
released in processes such as nuclear fission and fusion.
1 gram of
water — if its whole mass were converted into pure energy via E=mc² —
contains energy equivalent to 20,000 tons (18,143 metric tons) of TNT
exploding. That's why such a small amount of uranium or plutonium can
produce such a massive atomic explosion.
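The TNT figure above is straightforward E = mc² arithmetic; a sketch using the conventional 4.184×10^9 joules per ton of TNT:

```python
c = 299_792_458.0        # speed of light, m/s
mass_kg = 0.001          # 1 gram, e.g. of water
energy_j = mass_kg * c ** 2      # ~9.0e13 joules if fully converted
TON_TNT_J = 4.184e9              # joules per ton of TNT (standard convention)
tons_tnt = energy_j / TON_TNT_J  # ~21,500 tons, the order of the figure above
```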
What is E = mc²? (youtube) -
The Real Meaning of E=mc² (youtube)
Mass is only one form of energy among many,
such as electrical, thermal, or chemical energy, and therefore energy can
be transformed from any of these forms into mass, and vice versa.
Converting mass into energy is the most
energy-efficient process in the Universe. 100% is the greatest
energy gain you could ever hope for out of a reaction. Mass can be
converted into energy and back again, and underlies everything from
nuclear power to particle accelerators to atoms to the Solar System.
Mass is not conserved. If you
take a block of iron and chop it up into a bunch of iron atoms, you fully
expect that the whole equals the sum of its parts. That assumption seems
obviously true, but only if mass is conserved. In the real world,
though, according to Einstein, mass is not conserved at all. If you were
to take an iron atom, containing 26 protons, 30 neutrons, and 26
electrons, and were to place it on a scale, you'd find some disturbing
facts. An iron atom with all of its electrons weighs slightly less than an
iron nucleus and its electrons do separately. An iron nucleus weighs
significantly less than 26 protons and 30 neutrons do separately. And if
you try to fuse an iron nucleus into a heavier one, it will require you
to input more energy than you get out. Mass is just another form of
energy. When you create something that's more energetically stable than
the raw ingredients that it's made from, the process of creation must
release enough energy to conserve the total amount of energy in the
system. When you bind an electron to an atom or molecule, or allow those
electrons to transition to the lowest-energy state, those binding
transitions must give off energy, and that energy must come from
somewhere: the mass of the combined ingredients. This is even more severe
for nuclear transitions than it is for atomic ones, with the former class
typically being about 1000 times more energetic than the latter class.
Energy is conserved, but only if you account for changing masses. When you
have any attractive force that binds two objects together — whether that's
the electric force holding an electron in orbit around a nucleus, the
nuclear force holding protons and neutrons together, or the gravitational
force holding a planet to a star — the whole is less massive than the
individual parts. And the more tightly you bind these objects together,
the more energy the binding process emits, and the lower the rest mass of
the end product. For every 1 kilogram of mass that you convert, you get a
whopping 9 × 10^16 joules of energy out: the equivalent of 21 Megatons of
TNT. Whenever we experience a radioactive decay, a fission or fusion
reaction, or an annihilation event between matter and antimatter, the mass
of the reactants is larger than the mass of the products; the difference
is how much energy is released. In all cases, the energy that comes out —
in all its combined forms — is exactly equal to the energy equivalent of
the mass loss between products and reactants. The ultimate example is the
case of matter-antimatter annihilation, where a particle and its
antiparticle meet and produce two photons of the exact rest energy of the
two particles. Take an electron and a positron and let them annihilate,
and you'll always get two photons of exactly 511 keV of energy out. It's
no coincidence that the rest masses of electrons and positrons are each 511
keV/c²: the same value, just accounting for the conversion of mass into
energy by a factor of c². Einstein's most famous equation teaches us that
any particle-antiparticle annihilation has the potential to be the
ultimate energy source: a method to convert the entirety of the mass of
your fuel into pure, useful energy.
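The 511 keV photon energy follows directly from the electron mass via E = mc²; a minimal check:

```python
m_e = 9.1093837e-31      # electron (and positron) rest mass, kg
c = 299_792_458.0        # speed of light, m/s
EV_J = 1.602176634e-19   # joules per electronvolt

# Rest energy of one electron, expressed in keV
rest_energy_keV = m_e * c ** 2 / EV_J / 1e3   # ~511.0 keV

# Annihilation at rest: each of the two photons carries one particle's rest energy
photon_energy_keV = rest_energy_keV
```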
Explosion is a rapid increase in volume and release of energy in an
extreme manner, usually with the generation of high temperatures and the
release of gases. Supersonic explosions created by high explosives are
known as detonations and travel via supersonic shock waves. Subsonic
explosions are created by low explosives through a slower burning process
known as deflagration.
Combustion -
Rockets -
Chemical Reactions
Coulomb Explosion is a mechanism for transforming energy in intense
electromagnetic fields into atomic motion and are thus useful for
controlled destruction of relatively robust molecules. The explosions are
a prominent technique in laser-based machining, and appear naturally in
certain high-energy
reactions.
Implosion as a mechanical process is when objects are destroyed by
collapsing or being
squeezed in on themselves. The opposite of
explosion, implosion concentrates matter and energy. True implosion
usually involves a difference between internal (lower) and external
(higher) pressure, or inward and outward forces, that is so large that the
structure collapses inward into itself, or into the space it occupied if
it is not a completely solid object. Examples of implosion include a
submarine being crushed from the outside by the hydrostatic pressure of
the surrounding water, and the collapse of a massive star under its own
gravitational pressure. An implosion can fling material outward (for
example due to the force of inward falling material rebounding, or
peripheral material being ejected as the inner parts collapse), but this
is not an essential component of an implosion and not all kinds of
implosion will do so. If the object was previously solid, then implosion
usually requires it to take on a more dense form - in effect to be more
concentrated, compressed, denser, or converted into a new material that is
denser than the original.
Cavitation -
Contraction -
Impeller
Exothermic
Process describes a process or
reaction that releases energy from the
system to its surroundings, usually in the form of
heat, but also in a
form of light (e.g. a spark,
flame, or flash), electricity (e.g. a
battery), or sound (e.g. explosion heard when burning hydrogen). Its
etymology stems from the Greek prefix έξω (exō, which means "outwards")
and the Greek word θερμικός (thermikόs, which means "thermal"). The term
exothermic was first coined by Marcellin Berthelot. The opposite of an
exothermic process is an endothermic process, one that absorbs energy in
the form of heat.
Combustion.
Mass
versus Weight. The mass of an object is often referred to as
its weight, though these are in fact different concepts and quantities. In
scientific contexts, mass refers loosely to the amount of "matter" in an
object (though "matter" may be difficult to define), whereas weight refers
to the force exerted on an object by gravity. In other words, an object
with a mass of 1.0 kilogram will weigh approximately 9.81 newtons on the
surface of the Earth (its mass multiplied by the gravitational field
strength). (The newton is a unit of force, while the kilogram is a unit of
mass.)
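The mass-versus-weight distinction is easy to state numerically via W = mg; a sketch using standard surface gravity values:

```python
g_earth = 9.81   # gravitational field strength at Earth's surface, m/s^2 (N/kg)
g_moon = 1.62    # at the Moon's surface, roughly one-sixth of Earth's
mass = 1.0       # kg: mass is the same everywhere

weight_earth = mass * g_earth   # ~9.81 N
weight_moon = mass * g_moon     # ~1.62 N: same mass, much smaller weight
```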
Energy Types -
Light
Invariant Mass
is a characteristic of the total
energy and
momentum of an
object or a system of objects that is the same in all frames of reference
related by Lorentz transformations. If a center of momentum frame exists
for the system, then the invariant mass of a system is simply the total
energy divided by the
speed of light squared.
In other reference frames, the energy of the system increases, but system
momentum is subtracted from this, so that the invariant mass remains
unchanged.
Negative Mass is a hypothetical concept of matter whose mass is of
opposite sign to the mass of normal matter, e.g. −2 kg. Such matter would
violate one or more energy conditions and show some strange properties,
stemming from the ambiguity as to whether attraction should refer to force
or the oppositely oriented acceleration for negative mass. It is used in
certain speculative theories, such as on the construction of wormholes.
The closest known real representative of such exotic matter is a region of
pseudo-negative pressure density produced by the
Casimir effect, a physical force arising from a
quantized field. Field quantization is the transition from a classical
understanding of physical phenomena to a newer understanding known as
quantum mechanics; it is a procedure for constructing a quantum field
theory starting from a classical field theory.
Negative Energy -
Massless Particle -
Dark Energy -
Antimatter -
Anti-Gravity -
Quasiparticle -
Quasi Crystal
Inertial Mass is a mass parameter giving
the inertial resistance to acceleration of the body when responding to all
types of force. Gravitational mass is determined by the strength of the
gravitational force experienced by the body when in the gravitational
field.
Casimir Effect
- the typical example is two uncharged conductive plates in a vacuum,
placed a few nanometers apart. In a classical description, the lack of an
external field means that there is no field between the plates, and no
force would be measured between them. When this field is instead studied
using the
quantum
electrodynamic vacuum, it is seen that the plates do affect the
virtual photons which constitute the field, and generate a net force –
either an attraction or a repulsion depending on the specific arrangement
of the two plates. Although the Casimir effect can be expressed in terms
of
virtual particles interacting with the objects,
it is best described and more easily calculated in terms of the
Zero
Point Energy of a quantized field in the
intervening space between the objects. This force has been measured and is
a striking example of an effect captured formally by second quantization.
The treatment of boundary conditions in these calculations has led to some
controversy. In fact, "Casimir's original goal was to compute the van der
Waals force between
polarizable molecules" of the conductive plates. Thus
it can be interpreted without any reference to the zero-point energy (
vacuum
energy) of quantum fields. Because the strength of the force falls off
rapidly with distance, it is measurable only when the distance between the
objects is extremely small. On a submicron scale, this force becomes so
strong that it becomes the dominant force between
uncharged conductors.
In fact, at separations of 10 nm – about 100 times the typical size of an
atom – the Casimir effect produces the equivalent of about 1 atmosphere of
pressure (the precise value depending on surface geometry and other
factors).
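The 10 nm figure can be reproduced from the ideal parallel-plate Casimir pressure formula, P = π²ħc / (240 d⁴); a sketch (ideal geometry, no material corrections):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J·s
c = 299_792_458.0        # speed of light, m/s

def casimir_pressure(d):
    # Attractive pressure between ideal parallel plates separated by d metres
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

pressure_pa = casimir_pressure(10e-9)   # ~1.3e5 Pa at a 10 nm gap
atmospheres = pressure_pa / 101_325     # ~1.3 atm, matching the figure above
```

Because of the 1/d⁴ dependence, halving the separation increases the pressure sixteenfold.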
Zero
Point Energy -
Quantized Energy -
Anti-Gravity -
Magnets
Biefeld–Brown Effect is an electrical effect that produces an ionic
wind that transfers its momentum to surrounding neutral particles.
Pauli Exclusion Principle is the quantum mechanical principle which
states that two or more identical
fermions (particles with half-integer
spin) cannot occupy the same quantum state within a quantum system
simultaneously. In the case of electrons in atoms, it can be stated as
follows: it is impossible for two electrons of a poly-electron atom to
have the same values of the four quantum numbers.
Cavitation is the formation of
vapour cavities in a
liquid,
small liquid-free zones ("bubbles" or "voids"), that are the consequence
of forces acting upon the liquid. It usually occurs when a liquid is
subjected to rapid changes of pressure that cause the formation of
cavities in the liquid where the pressure is relatively low. When
subjected to higher pressure, the voids implode and can generate an
intense shock wave.
Elements -
Minerals
-
Mind over Matter
‘Negative mass’ created at Washington State University
Negative Energy is a concept used in physics to explain the nature of
certain fields, including the
gravitational
field and a number of quantum field effects. In more speculative
theories, negative energy is involved in wormholes which allow time travel
and warp drives for faster-than-light space travel.
Negative Mass.
Atoms - Matter - Tiny Particles
Atom is the
smallest constituent
unit of ordinary
matter that has the properties of a
chemical element. Every
solid,
liquid,
gas, and
plasma is composed of neutral or
ionized atoms.
Atoms with
electrons are 99.9% empty space.
An electron is a negatively charged
subatomic particle
that can be either
bound to an atom or
free or not bound. An atom stripped of all its electrons is no
longer a neutral atom; it is a bare nucleus. An electron that is bound to an atom is one of the three
primary types of
particles within the atom, the
other two are
protons and
neutrons. A proton can be thought of as a
Hydrogen atom without an
electron and an
alpha particle
can be thought of as a Helium atom without electrons. If we remove an
electron from a stable atom, the atom becomes electrically
incomplete/unbalanced. More protons in the nucleus means it has a
positive charge, more
electrons means a
negative charge.
With an electron removed, the atom possesses a plus one charge, therefore
it is a positive
ion. When the number of electrons in
an atom doesn't equal the number of protons, the atom is said to have a
net charge. Charges add just like positive
and negative numbers, so a charge of +1 exactly cancels a charge of -1.
Atomic Number -
Atoms (youtube)
-
Atoms (youtube) -
Information -
Neutrons -
Protons -
Particles -
Electrons.
Atoms are
very small: around one ten-billionth of a meter,
or roughly 1
angstrom
across, which is
one ten-millionth of a millimeter (1,000
mm in 1 meter). One
nanometer is the
size of about two atoms; a nanometer is one millionth of a millimeter,
about a millionth the size of a small grain of sand. A hydrogen atom is about 0.1 nanometers
across, or about 0.0000001 of a
millimeter in diameter. However,
atoms do not have well-defined boundaries,
and there are different ways to define their size that give different
but close values. The nucleus accounts for 99.9% of an
atom's mass. So what would be the size difference of an
atom if you only measure the size of the protons and neutrons and did not
measure the
space or the electrons?
How many atoms in single drop
of water?
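A rough answer via Avogadro's number, assuming a typical drop of about 0.05 mL (0.05 g) of water:

```python
AVOGADRO = 6.02214076e23   # molecules per mole
drop_mass_g = 0.05         # assumed drop size: ~0.05 mL of water
molar_mass_g = 18.015      # grams per mole of H2O

molecules = drop_mass_g / molar_mass_g * AVOGADRO   # ~1.7e21 molecules
atoms = molecules * 3                               # 2 H + 1 O each: ~5e21 atoms
```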
If a single atom were the size of
a
football stadium,
the
nucleus of the
atom would be the size of your
eyeball, and the
electrons
circling the stadium would be invisible to you. The atom nucleus is
10,000 times smaller than the radius of the circling electrons. If an atom's
outer electron layer
were the size of a
basketball, the nucleus
of the atom would be so small that you could not see it with your own eyes. An atom is 99.9% empty
space.
The world gets really weird at that scale. If the atom were the size of a
human eyeball, with a proton spinning in the palm of your hand, you would
be just a little bit bigger than an atom, and the next proton would be so
far away that you could not see it. Life at that
scale would look like
empty space to a human.
Size
Scales (nano) -
Ions -
Relativity -
Weighing Atoms with Electrons
All Atoms have at least
one proton in their core, and
the number of protons determines which kind of
element an atom is. All atoms have
electrons,
negatively charged
particles that move around in
the space surrounding the positively-charged nuclear core.
Hydrogen has one
proton, one electron and no
neutrons. An atom has a positively charged
core. The core is surrounded by negatively charged electrons. The
electrons
spin around the core of the atom. This turns the atom into a
tiny magnet. Each atom in an object
creates a small magnetic force. In most materials, the atoms align in ways
where the magnetic forces of the atoms point in many, random directions.
The forces cancel each other out. There are some special materials,
though, where the atoms align in a way where the
magnetic forces of most of the atoms
are pointed in the same direction. The forces of the atoms combine and the
object behaves as a magnet.
Perpetual Motion.
Carbon Atom -
Nitrogen -
Oxygen -
Photons (light)
Atomism
was an early assumption that the physical universe was composed of
fundamental indivisible components known as atoms, and that all matter was
composed of tiny discrete finite indivisible indestructible particles,
which was wrong. Atomism in psychology was an early theory that reduces
all mental phenomena to
simple elements
such as sensations and feelings that form complex ideas by association,
which was also wrong.
Holistic
Thinking -
DNA -
Life.
People used to think that everything that you see in the world, including
yourself, is made of just three particles of matter, protons, neutrons
and electrons, that are interacting through a handful of
forces, such as gravity,
electromagnetism and the nuclear forces.
Superatom is any
cluster of atoms that seems to exhibit some of the
properties of elemental atoms. Sodium atoms, when cooled from vapor,
naturally condense into clusters, preferentially containing a
magic number of atoms (2, 8, 20, 40, 58,
etc.). The first two of these can be recognized as the numbers of
electrons needed to fill the first and second shells, respectively. The
superatom suggestion is that free electrons in the cluster occupy a new
set of
orbitals that are defined by the entire group of atoms, i.e.
cluster, rather than each individual atom separately (non-spherical or
doped clusters show deviations in the number of electrons that form a
closed shell as the potential is defined by the shape of the positive
nuclei.) Superatoms tend to behave chemically in a way that will allow
them to have a closed shell of electrons, in this new counting scheme.
Therefore, a superatom with one more electron than a full shell should
give up that electron very easily, similar to an alkali metal, and a
cluster with one electron short of full shell should have a large electron
affinity, such as a halogen.
Waves -
Resonance -
Oscillation -
Orbitals
Atomic Theory is a scientific theory of the nature of
matter, which states that matter is composed of discrete units called
atoms.
Artificial Atom.
Sound of an Atom
-
D-Note -
587.33 Hz (youtube)
An
Atom is so much smaller
than the
wavelength of
visible light that the two don’t really
interact.
An Atom is invisible to light itself. Even the most powerful
light-focusing microscopes can’t visualize single atoms.
Atoms in your
body are 99.9% empty space and
none of them are the ones that you were born
with. So why do I feel
solid? Elementary
particles have mass and
the space between elementary particles is filled with the
binding energy that also has the properties of
mass.
Atomic Nuclei - Binding Energy
Nuclear
Binding Energy is the
energy that would be
required to
disassemble the nucleus of an atom into its component parts.
These component parts are neutrons and
protons, which are collectively
called nucleons. The binding energy of nuclei is due to the
attractive forces
that hold these nucleons together, and it is always a positive number,
since all nuclei would require the expenditure of energy to
separate them into individual protons and neutrons. The
mass of an atomic nucleus is less than the sum of the individual masses of
the free constituent protons and neutrons (according to Einstein's
equation
E=mc²) and this 'missing mass' is known as the mass defect, and
represents the energy that was released when the nucleus was formed.
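The mass defect can be turned into a binding energy with E=mc². A sketch for helium-4, using standard particle masses in atomic mass units (1 u is equivalent to about 931.494 MeV):

```python
# Nuclear binding energy of helium-4 from its mass defect (E = mc^2).
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV
M_PROTON = 1.007276       # u
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u

mass_of_parts = 2 * M_PROTON + 2 * M_NEUTRON
mass_defect = mass_of_parts - M_HE4_NUCLEUS  # the 'missing mass'
binding_energy = mass_defect * U_TO_MEV
print(f"{binding_energy:.1f} MeV")  # about 28.3 MeV for helium-4
```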
Binding
Energy is the energy required to disassemble a whole system into
separate parts. A bound system typically has a lower
potential energy than
the sum of its constituent parts; this is what keeps the system together.
Often this means that
energy is released upon
the creation of a
bound state. This definition corresponds to a positive
binding energy.
Coulomb's
Law (static) -
Chemical Bonds
Nuclear
Force is the
force between protons and neutrons, subatomic
particles that are collectively called
nucleons. The nuclear force is
responsible for binding protons and neutrons into atomic nuclei. Neutrons
and protons are affected by the nuclear force almost identically. Since
protons have charge +1 e, they experience a strong electric field
repulsion (following Coulomb's law) that tends to push them apart, but at
short range the attractive nuclear force overcomes the repulsive
electromagnetic force. The mass of a nucleus is less than the sum total of
the individual masses of the protons and neutrons which form it. The
difference in mass between bound and unbound nucleons is known as the mass
defect. Energy is released when some large nuclei break apart, and it is
this
energy that is used in
nuclear power and
nuclear weapons.
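To get a feel for the repulsion that the nuclear force must overcome, Coulomb's law F = k·q1·q2/r² gives the electric force between two protons held about a femtometre apart (a back-of-the-envelope sketch):

```python
# Coulomb repulsion between two protons at a nuclear distance of ~1 fm.
K = 8.988e9           # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602e-19  # elementary charge, C
R = 1.0e-15           # separation, m (about one femtometre)

force = K * E_CHARGE**2 / R**2
print(f"{force:.0f} N")  # roughly 230 N pushing the two protons apart
```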
(m=138 MeV)
Excited State.
Force of Attraction is defined as a
force that
causes two or
more objects to
come together, even if they are not near to or touching one other. It
is a
force that attracts the bodies closer together.
Anti-Gravity.
Atomic Nucleus is the small, dense region consisting of
protons and
neutrons at the
center of an atom. An atom is composed of a
positively-charged nucleus, with a cloud of
negatively-charged electrons
surrounding it, bound together by
electrostatic force. Almost all of the
mass of an atom is located in the nucleus, with a very small contribution
from the electron cloud. Protons and neutrons are
bound together to form a
nucleus by the nuclear force. The diameter of the nucleus is in the range
of 1.75 fm (1.75×10⁻¹⁵ m) for hydrogen (the diameter of a single
proton) to about 15 fm for the heaviest atoms, such as uranium. These
dimensions are much smaller than the diameter of the atom itself (nucleus
+ electron cloud), by a factor of about 23,000 (uranium) to about 145,000
(hydrogen). Atom Nucleus is
spherical,
oblate and prolate simultaneously. Prolate is an elongated
spheroid,
shaped like an American football or rugby ball.
Nucleon is either a
proton or a
neutron,
considered in its role as a component of an atomic nucleus. The number of
nucleons in a nucleus defines an isotope's mass number (nucleon number).
Nucleosynthesis is the process that creates new atomic nuclei from
pre-existing nucleons (protons and neutrons) and nuclei. According to
current theories, the first nuclei were formed a few minutes after the Big
Bang, through nuclear reactions in a process called
Big Bang
Nucleosynthesis (wiki).
Neutron Capture is a nuclear reaction in which an atomic nucleus and
one or more neutrons collide and merge to form a heavier nucleus. Since
neutrons have no electric charge, they can enter a nucleus more easily
than positively charged protons, which are repelled electrostatically.
Neutron capture plays an important role in the cosmic nucleosynthesis of
heavy elements. In stars it can proceed in two ways: as a rapid
(r-process) or a slow process (s-process). Nuclei of masses greater than
56 cannot be formed by thermonuclear reactions (i.e. by nuclear fusion),
but can be formed by neutron capture. Neutron capture on protons yields a
line at 2.223 MeV predicted and commonly observed in solar flares.
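That 2.223 MeV line can be checked against rest masses: when a neutron and a proton merge into a deuteron, the emitted photon carries away the deuteron's binding energy. A quick sketch using standard rest energies in MeV:

```python
# Energy of the gamma ray from neutron capture on a proton: n + p -> d + gamma.
M_PROTON = 938.272     # MeV/c^2
M_NEUTRON = 939.565    # MeV/c^2
M_DEUTERON = 1875.613  # MeV/c^2

gamma_energy = M_PROTON + M_NEUTRON - M_DEUTERON
print(f"{gamma_energy:.3f} MeV")  # ~2.224 MeV, matching the observed line
```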
R-Process or
rapid neutron-capture process, is a set of nuclear
reactions that is responsible for the creation of approximately half of
the atomic nuclei heavier than iron; the "
heavy
elements", with the other
half produced by the
p-process and
s-process. The r-process usually
synthesizes the most neutron-rich stable isotopes of each heavy element.
The r-process can typically synthesize the heaviest four isotopes of every
heavy element, and the two heaviest isotopes, which are referred to as
r-only nuclei, can be created via the r-process only. Abundance peaks for
the r-process occur near mass numbers A = 82 (elements Se, Br, and Kr), A
= 130 (elements Te, I, and Xe) and A = 196 (elements Os, Ir, and Pt).
Why Neutrons and Protons are modified inside Nuclei. The structure of
a neutron or a proton is modified when the
particle is bound in an atomic
nucleus. Experimental data suggest an explanation for this phenomenon that
could have broad implications for nuclear physics.
Modified structure of protons and neutrons in correlated pairs. The
atomic nucleus is made of protons and neutrons (nucleons), which are
themselves composed of
quarks and gluons.
Understanding how the quark–gluon structure of a nucleon bound in an
atomic nucleus is modified by the surrounding nucleons is an outstanding
challenge.
A careful re-analysis of previously taken data has revealed a possible
link between correlated protons and neutrons in the nucleus and a
35-year-old mystery. The data have led to the extraction of a universal
function that describes the EMC Effect, the once-shocking discovery that
quarks inside nuclei have lower average momenta than predicted, and
supports an explanation for the effect.
EMC
Effect is the surprising observation that the cross section for deep
inelastic scattering from an atomic nucleus is different from that of the
same number of free protons and neutrons (collectively referred to as
nucleons). From this observation, it can be inferred that the quark
momentum distributions in nucleons bound inside nuclei are different from
those of free nucleons. This effect was first observed in 1983 at CERN by
the European Muon Collaboration, hence the name "EMC effect". It was
unexpected, since the average binding energy of protons and neutrons
inside nuclei is insignificant when compared to the energy transferred in
deep inelastic scattering reactions that probe quark distributions. While
over 1000 scientific papers have been written on the topic and numerous
hypotheses have been proposed, no definitive explanation for the cause of
the effect has been confirmed. Determining the origin of the EMC effect is
one of the major unsolved problems in the field of nuclear physics.
Atomic Orbitals -
Orbital Hybridisation -
Atomic Bonding
Stable Atom is an atom that has enough
binding energy to hold the nucleus together permanently. An unstable atom
does not have enough binding energy to hold the nucleus together
permanently and is called a
radioactive
atom.
Island of Stability is a predicted set of isotopes of
superheavy elements that may have considerably longer
half-lives than
known isotopes of these elements. It is predicted to appear as an "island"
in the chart of nuclides, separated from known stable and long-lived
primordial radionuclides. Its theoretical existence is attributed to
stabilizing effects of predicted "magic numbers" of protons and neutrons
in the superheavy mass region.
Brownian Motion is the random motion of particles suspended
in a fluid (a liquid or a gas) resulting from their collision with the
fast-moving atoms or molecules in the gas or liquid.
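Brownian motion is often modeled as a random walk, with each step standing in for one molecular collision. A minimal 1-D sketch showing that the spread (mean squared displacement) grows in proportion to the number of steps while the average displacement stays near zero:

```python
# Brownian motion as a 1-D random walk: each step is a random kick.
import random

random.seed(42)  # fixed seed so the run is repeatable

def random_walk(steps: int) -> float:
    position = 0.0
    for _ in range(steps):
        position += random.choice((-1.0, 1.0))
    return position

walks = [random_walk(1000) for _ in range(500)]
msd = sum(x * x for x in walks) / len(walks)
print(f"mean squared displacement ~ {msd:.0f}")  # close to the step count, 1000
```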
Elements.
Breakthrough in
Nuclear Physics. High-precision measurements of the
strong interaction between stable and unstable
particles. The positively charged protons in atomic nuclei should actually
repel each other, and yet even heavy nuclei with many protons and neutrons
stick together. The so-called strong interaction is responsible for this.
Scientists have now developed a method to precisely measure the strong
interaction utilizing particle collisions in the ALICE experiment at CERN
in Geneva. The strong interaction is one of the four fundamental forces in
physics. It is essentially responsible for the existence of atomic nuclei
that consist of several protons and neutrons. Protons and neutrons are
made up of smaller particles, the so-called quarks. And they too are held
together by the strong interaction. ALICE stands for A Large Ion Collider
Experiment.
Proton
Proton is a
subatomic
particle, symbol p or p+, with a
positive electric charge of +1 e, one
elementary charge, and
mass slightly less than that of a neutron. Protons
and neutrons, each with masses of approximately one
atomic mass unit, are
collectively referred to as "
nucleons".
One or more protons are present in
the nucleus of every atom. They are a necessary part of the nucleus. The
number of protons in the nucleus is the defining property of an
element,
and is referred to as the
atomic number (represented by the symbol Z).
Antiproton is the
antiparticle of the proton. Antiprotons are
stable, but they are typically short-lived, since any collision with a
proton will cause both particles to be annihilated in a burst of energy.
Protons are composite particles composed of three valence
quarks: two up
quarks of charge +2/3 e and one down quark of charge -1/3 e. In vacuum, when
free electrons are present, a sufficiently slow
proton may pick up a single free electron, becoming a
neutral hydrogen atom, which is
chemically a
free
radical. Unlike
neutrons, protons are stable.
Free protons that are not connected to neutrons in the nucleus do not
break down, or decay, on their own. This is different from neutrons, which
are also made up of smaller
particles but break
down due to
radioactive
decay.
Charge Radius
is a measure of the size of an
atomic nucleus, particularly of a proton or
a
deuteron. It can be measured by the scattering of electrons by the
nucleus and also inferred from the effects of finite nuclear size on
electron energy levels as measured in atomic spectra.
Proton Radius Puzzle was an unanswered problem in physics relating to
the size of the proton. Historically the proton radius was measured via
two independent methods, which converged to a value of about 0.877
femtometres (1 fm = 10⁻¹⁵ m).
This value was challenged by a 2010 experiment utilizing a third method,
which produced a radius about 5% smaller than this, or 0.842 femtometres.
The discrepancy was resolved when research conducted by Hessel et al.
confirmed the same radius for 'electronic' hydrogen as well as its 'muonic'
variant (0.833 femtometres).
Interaction of Free Protons with ordinary Matter. Although protons
have affinity for oppositely charged
electrons,
this is a relatively low-energy interaction and so free protons must lose
sufficient velocity (and kinetic energy) in order to become closely
associated and bound to electrons. High energy protons, in traversing
ordinary matter, lose energy by collisions with atomic nuclei, and by
ionization of atoms or
removing
electrons until they are slowed sufficiently to be captured by the
electron cloud in a normal atom. However, in such an association with an
electron, the character of the bound proton is not changed, and it remains
a proton. The attraction of low-energy free protons to any electrons
present in normal matter (such as the electrons in normal atoms) causes
free protons to stop and to form a new chemical bond with an atom. Such a
bond happens at any sufficiently "cold" temperature (i.e., comparable to
temperatures at the surface of the Sun) and with any type of atom. Thus,
in interaction with any type of normal (non-plasma) matter, low-velocity
free protons are attracted to electrons in any atom or molecule with which
they come in contact, causing the proton and molecule to combine. Such
molecules are then said to be "protonated", and chemically they often, as
a result, become so-called Brønsted acids.
Grotthuss Mechanism or
proton jumping
is the process by which an 'excess' proton or proton defect diffuses
through the
hydrogen bond network of water
molecules or other hydrogen-
bonded liquids through
the formation and concomitant cleavage of covalent bonds involving
neighboring molecules.
Proton Tunneling.
Proton
Pump is an integral membrane
protein
pump that builds up a proton gradient across a biological
membrane.
Transport of the positively charged proton is typically
electrogenic, i.e.
it generates an
electrical field across the
membrane also called the
membrane potential. Proton transport becomes electrogenic if not
neutralized electrically by transport of either a corresponding negative
charge in the same direction or a corresponding positive charge in the
opposite direction. An example of a proton pump that is not electrogenic,
is the proton/potassium pump of the gastric mucosa which catalyzes a
balanced exchange of protons and
potassium ions. The combined transmembrane
gradient of protons and charges created by proton pumps is called an
electrochemical gradient. An electrochemical gradient represents a store
of energy or
potential energy that can be used to drive a multitude
of biological processes such as
ATP synthesis, nutrient uptake and action
potential formation. In cell respiration, the proton pump uses energy to
transport protons from the matrix of the mitochondrion to the
inter-membrane space. It is an active pump that generates a proton
concentration gradient across the inner mitochondrial membrane because
there are more protons outside the
matrix than inside. The difference in
pH and electric charge (ignoring differences in buffer capacity) creates
an electrochemical potential difference that works similar to that of a
battery or energy storing unit for the cell. The process could also be
seen as analogous to cycling uphill or charging a battery for later use,
as it produces potential energy. The proton pump does not create energy,
but forms a gradient that stores energy for later use.
Proton Tunneling.
Solar Panels for Cells:
Light-Activated Proton Pumps Generate Cellular Energy, Extend Life.
New research in the
journal Nature Aging takes a page from the field of renewable energy
and shows that genetically engineered
mitochondria can
convert light energy
into chemical energy that cells can use, ultimately extending the life of
the roundworm C. elegans. While the prospect of
sunlight-charged cells in humans is more science fiction than science,
the findings shed light on important mechanisms in the aging process.
Neutron
Neutron is a
subatomic particle, symbol n or n0, with
no net electric
charge. The amounts of positive and negative charge in the neutron are
equal, so an electrically neutral object still contains charges. The
neutron's mass is slightly larger than that of a proton. Protons and
neutrons, each with a mass of approximately one atomic mass unit, constitute
the nucleus of an atom, and they are collectively referred to as
nucleons.
Their properties and interactions are described by
nuclear physics.
The
chemical properties of an atom are
mostly determined by the configuration of electrons
that orbit the atom's heavy nucleus. The electron configuration is
determined by the charge of the nucleus, which is determined by the
number of protons, or atomic number.
The number of neutrons is the
neutron number. Neutrons do not affect the electron configuration, but
the sum of atomic and neutron numbers is the mass number of the nucleus. Atoms of
a chemical element that differ only in neutron number are called
isotopes.
For example,
carbon, with atomic number 6, has an abundant isotope
carbon-12 with 6 neutrons and a rare isotope carbon-13 with 7 neutrons.
Some elements occur in nature with only one stable isotope, such as
fluorine. Other elements occur with many stable isotopes, such as tin with
ten stable isotopes. The properties of an atomic nucleus depend on both
atomic and neutron numbers. With their positive charge, the protons within
the nucleus are repelled by the long-range electromagnetic force, but the
much stronger, but short-range, nuclear force binds the nucleons closely
together. Neutrons are required for the stability of nuclei, with the
exception of the single-proton hydrogen nucleus. Neutrons are produced
copiously in
nuclear fission and
fusion. They are a primary contributor to
the
nucleosynthesis of chemical elements within stars through fission,
fusion, and neutron capture processes. The neutron is essential to the
production of
nuclear power. In the
decade after the neutron was discovered by James Chadwick in 1932,
neutrons were used to induce many different types of nuclear
transmutations. With the discovery of nuclear fission in 1938, it was
quickly realized that, if a fission event produced neutrons, each of these
neutrons might cause further fission events, in a cascade known as a
nuclear chain reaction. These events and findings led to the first
self-sustaining nuclear reactor (Chicago Pile-1, 1942) and the first
nuclear weapon (Trinity, 1945). Free neutrons, while not directly ionizing
atoms, cause ionizing radiation. So they can be a biological hazard,
depending on dose. A small natural "neutron background" flux of free
neutrons exists on Earth, caused by cosmic ray showers, and by the natural
radioactivity of spontaneously fissionable elements in the Earth's crust. Dedicated neutron sources like
neutron generators,
research reactors and spallation sources produce free neutrons for use in
irradiation and in neutron scattering experiments. Neutrons decay into
protons.
Isotopes are variants
of a particular chemical element which differ in
neutron number. All
isotopes of a given element have the
same number of protons in each atom.
The number of protons within the atom's nucleus is called atomic number
and is equal to the number of
electrons in the neutral (
non-ionized)
atom. Each atomic number identifies a specific element, but not the
isotope; an atom of a given element may have a wide range in its number of
neutrons. The number of
nucleons (both protons and neutrons) in the
nucleus is the atom's mass number, and each isotope of a given element has
a different mass number. For example, carbon-12, carbon-13 and carbon-14
are three isotopes of the element
Carbon with mass numbers 12, 13 and 14
respectively. The atomic number of carbon is 6, which means that every
carbon atom has 6 protons, so that the neutron numbers of these isotopes are 6, 7 and 8 respectively.
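The bookkeeping above, neutron number N = mass number A minus atomic number Z, can be sketched directly:

```python
# Neutron number from mass number (A) and atomic number (Z): N = A - Z.
def neutron_number(mass_number: int, atomic_number: int) -> int:
    return mass_number - atomic_number

CARBON_Z = 6
for a in (12, 13, 14):
    print(f"carbon-{a}: {neutron_number(a, CARBON_Z)} neutrons")
# carbon-12: 6 neutrons; carbon-13: 7; carbon-14: 8
```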
Neutron Star is the collapsed core of a massive supergiant star, which
had a total mass of between 10 and 25 solar masses, possibly more if the
star was especially metal-rich. Except for black holes, neutron stars are
the smallest and densest known class of stellar objects. Neutron stars
have a radius on the order of 10 kilometers (6 mi) and a mass of about 1.4
solar masses. They result from the
supernova
explosion of a massive star, combined with gravitational collapse,
that compresses the core past white dwarf star density to that of atomic
nuclei.
Fusion -
Nucleosynthesis.
Neutrons reveal key to extraordinary heat transport. Insights into
supersonic
phasons may improve accuracy of
simulations. Warming a crystal of the mineral fresnoite, scientists
discovered that excitations called phasons carried heat three times
farther and faster than phonons, the excitations that usually carry heat
through a material. In most crystals, atomic vibrations propagate excited
waves through the lattice as phonons. However, in certain crystals, atomic
rearrangements also propagate excited waves as phasons. Because phasons
can move faster than sound, physicists anticipated they would excel at
moving heat.
Neutron lifetime problem -- and its possible solution. How long do
free neutrons live until they
decay?
This has been a hotly debated topic, because different measurement
techniques lead to different results. A possible new solution has now been
proposed: All the results can be explained, assuming there are different
neutron states with different lifetimes.
Electron
Electron is a
subatomic particle,
symbol e− or β−, with a
negative
electric
charge with an
electric field. An electron
moving in orbit creates a magnetic field, acting like a tiny magnet
without requiring a positive charge. Electrons belong to the first generation of the lepton
particle
family, and are generally thought to be
elementary particles because they
have no known components or substructure. The electron has a mass that is
approximately 1/1836 that of the proton. An electron can
spin up or spin down.
Quantum mechanical properties of
the electron include an intrinsic
angular momentum
or
spin of a
half-integer value, expressed in units of the reduced Planck constant, ħ.
As it is a
fermion, no two electrons can occupy the same quantum state, in
accordance with the
Pauli exclusion principle. Like all matter, electrons
have
properties of both particles and waves: they can collide with other
particles and can be
diffracted like light. The wave properties of
electrons are easier to observe with experiments than those of other
particles like neutrons and protons because electrons have a lower mass
and hence a larger de Broglie wavelength for a given energy.
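That mass dependence can be made concrete with the non-relativistic de Broglie relation λ = h / √(2mE). At the same kinetic energy (1 eV here, chosen only for illustration), the electron's wavelength comes out about √1836 ≈ 43 times longer than the proton's:

```python
# De Broglie wavelength lambda = h / sqrt(2 * m * E) at fixed kinetic energy.
from math import sqrt

H = 6.626e-34           # Planck constant, J*s
M_ELECTRON = 9.109e-31  # kg
M_PROTON = 1.673e-27    # kg
E = 1.602e-19           # 1 eV of kinetic energy, in joules

def de_broglie(mass: float, energy: float) -> float:
    return H / sqrt(2 * mass * energy)

lam_e = de_broglie(M_ELECTRON, E)  # ~1.2e-9 m (about 1.2 nm)
lam_p = de_broglie(M_PROTON, E)    # ~2.9e-11 m
print(f"{lam_e / lam_p:.0f}")      # ~43, roughly sqrt(1836)
```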
Poly-electron is an atom containing more
than one electron.
Hydrogen is the only
atom in the periodic table whose
ground state has just a single electron.
Free Electrons -
Entanglement
-
Virtual Particles -
Q-Bit
Electron Shell is the area that electrons orbit in around an atom's
nucleus.
Each shell can contain only a fixed number of
electrons: The
first shell can hold up to two
electrons, the second shell can hold up to eight (2 + 6) electrons, the
third shell can hold up to 18 (2 + 6 + 10) and so on. Shell 1 can hold up
to 2 electrons, Shell 2 can hold up to 8 electrons, Shell 3 can hold up to
18 electrons, Shell 4 can hold up to 32 electrons, Shell 5 can hold up to
50 electrons. The general formula is that
the nth shell can in principle hold up to 2n² electrons. Since electrons
are electrically attracted to the nucleus, an atom's electrons will
generally occupy outer shells only if the more inner shells have already
been completely filled by other electrons. However, this is not a strict
requirement: atoms may have two or even three incomplete outer shells. The
valence shell is the
outermost shell of an atom in its uncombined
state, which
contains the electrons most likely to account for the nature
of any reactions involving the atom
and of the bonding interactions it has
with other atoms. Care must be taken to note that the
outermost shell of
an ion is not commonly termed valence shell. Electrons in the valence
shell are referred to as valence electrons.
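The 2n² capacity rule quoted above is easy to tabulate:

```python
# Maximum number of electrons the nth shell can hold: 2 * n**2.
def shell_capacity(n: int) -> int:
    return 2 * n**2

capacities = [shell_capacity(n) for n in range(1, 6)]
print(capacities)  # [2, 8, 18, 32, 50]
```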
Energy Level: a particle that is bound or confined spatially can
only take on certain discrete values of energy. If the potential energy is set to zero at infinite
distance from the atomic nucleus or molecule, the usual convention, then
bound electron states have negative potential energy. If an atom,
ion, or
molecule is at the lowest possible energy level, it and its electrons are
said to be in the ground state. If it is at a higher energy level, it is
said to be excited, or any electrons that have higher energy than the
ground state are excited. If more than one quantum mechanical state is at
the same energy, the energy levels are "degenerate". They are then called
degenerate energy levels. (electrons also travel through protons).
Sublevels
of Electrons are known by the letters s, p, d, and f. The s sublevel has
just one orbital, so can contain 2 electrons max. The p sublevel has 3
orbitals, so can contain 6 electrons max. The d sublevel has 5 orbitals,
so can contain 10 electrons max. And the f sublevel has 7 orbitals, so can
contain 14 electrons max. The superscript shows the number of electrons in
each sublevel. Hydrogen: 1s¹. Carbon: 1s² 2s² 2p². Chlorine: 1s² 2s² 2p⁶
3s² 3p⁵. Argon: 1s² 2s² 2p⁶ 3s² 3p⁶.
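These capacities follow from each sublevel with quantum number l having 2l + 1 orbitals of at most 2 electrons each, which a short sketch can confirm:

```python
# Sublevel capacity: sublevel l has (2l + 1) orbitals, 2 electrons each.
SUBLEVELS = {"s": 0, "p": 1, "d": 2, "f": 3}

def sublevel_capacity(letter: str) -> int:
    l = SUBLEVELS[letter]
    return 2 * (2 * l + 1)

for letter in "spdf":
    print(letter, sublevel_capacity(letter))
# s 2, p 6, d 10, f 14
```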
Transition Metal.
Atomic Orbital is a mathematical function that describes the
wave-like
behavior of either one electron or a pair of electrons in an atom. This
function can be used to
calculate the probability of finding any electron
of an atom in any specific region around the atom's nucleus. The term
atomic orbital may also refer to the physical region or space where the
electron can be calculated to be present, as defined by the particular
mathematical form of the
orbital.
Electron Cloud represents the area around
an atom's nucleus where electrons are most likely to be found. It is a
sphere that surrounds the microscopic nucleus, although it is often
rendered as a ring in two-dimensional pictures. The model is a way to help
visualize the most probable position of electrons in an atom. The electron
cloud model is currently the accepted model of an atom.
Electron
Spin is a quantum property of electrons. It is a form of
angular
momentum. The magnitude of this angular momentum is permanent. Like charge
and rest mass, spin is a fundamental, unvarying property of the electron.
If the electron spins clockwise on its axis, it is described as
spin-up; counterclockwise is
spin-down. This is a convenient
explanation, if not fully justifiable mathematically. The spin angular
momentum associated with electron spin is independent of orbital angular
momentum, which is associated with the electron's journey around the
nucleus. Electron spin is not used to define electron shells, subshells,
or
orbitals, unlike the quantum numbers n, l, and ml. Since these two
electrons are in the same orbital, they occupy the same region of space
within the atom. As a result, their spin quantum numbers cannot be the
same, so two electrons sharing an orbital must have opposite spins.
Spintronics is the study of the
intrinsic spin of the electron and its
associated magnetic moment, in addition to its fundamental electronic
charge, in
solid-state
devices. Spintronics fundamentally differs from traditional
electronics in that, in addition to charge state, electron spins are
exploited as a further degree of freedom, with implications in the
efficiency of data storage and transfer. Spintronic systems are most often
realised in dilute magnetic semiconductors (DMS) and Heusler alloys and
are of particular interest in the field of
quantum computing.
Valence Electron is an electron that is associated with an atom, and
that can participate in the formation of a
chemical bond; in a single
covalent bond, both atoms in the bond contribute one valence electron in
order to form a shared pair. The presence of valence electrons can
determine the element's chemical properties and whether it may bond with
other elements: For a main group element, a valence electron can exist
only in the outermost electron shell. In a transition metal, a valence
electron can also be in an inner shell.
Redox.
The electron can
gain the energy it needs by absorbing light. If the
electron jumps from the second energy level down to the first energy
level, it must
give off some energy by emitting light. The atom absorbs or
emits light in discrete packets called
photons, and each photon has a
definite energy.
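In the simplified Bohr model of hydrogen, the levels sit at E_n = -13.6 eV / n², so the emitted or absorbed photon carries exactly the difference between two levels. A sketch (Bohr-model approximation only):

```python
# Photon energy for a hydrogen electron jumping between energy levels,
# using the Bohr-model formula E_n = -13.6 eV / n**2.
RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, eV

def level_energy(n: int) -> float:
    return -RYDBERG_EV / n**2

def photon_energy(n_high: int, n_low: int) -> float:
    # Energy released when the electron drops from n_high to n_low.
    return level_energy(n_high) - level_energy(n_low)

print(f"{photon_energy(2, 1):.1f} eV")  # 10.2 eV: the Lyman-alpha photon
```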
Kinetic and potential energy of atoms result from the
motion of electrons. When electrons are excited they move to a higher
energy orbital farther away from the atom. The further the orbital is from
the nucleus, the higher the
potential energy of an electron at that energy
level.
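The discrete-photon picture above can be made concrete with the textbook Bohr model of hydrogen, where the levels are quantized as E_n = -13.6 eV / n². This worked example (standard values, not from the source) computes the photon emitted in the n=2 to n=1 jump.

```python
# Bohr-model hydrogen: the photon emitted when an electron drops from
# n=2 to n=1 carries exactly the energy difference of the two levels.

RYDBERG_EV = 13.6        # hydrogen ground-state binding energy, eV
HC_EV_NM = 1239.84       # h*c in eV*nm, converts photon energy to wavelength

def level_energy(n):
    return -RYDBERG_EV / n**2

def photon(n_hi, n_lo):
    """Energy (eV) and wavelength (nm) of the photon emitted in a drop."""
    e = level_energy(n_hi) - level_energy(n_lo)
    return e, HC_EV_NM / e

e, lam = photon(2, 1)
print(f"{e:.1f} eV, {lam:.0f} nm")   # 10.2 eV, ~122 nm (ultraviolet)
```

Absorbing a photon of the same definite energy drives the reverse jump, which is why atoms absorb and emit only at discrete wavelengths.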
A new window into electron behavior. Quantum mechanical tunneling
is a process by which electrons can traverse energetic barriers by simply
appearing on the other side. An electron is a wave of probability.
Electrons are weird: you can't
know an electron's position and speed at the same time, it can be
both a wave and particle at the same time, it
can have an up spin or down spin and have both at the same time until
measured, it can sometimes teleport through matter, there is an
anti-electron or positron with the same properties but with an opposite
charge.
World's fastest microscope that can see electrons in motion.
Freeze-frame: A team of researchers has developed the first transmission
electron microscope which operates at the temporal
resolution of a single attosecond, allowing for the first still-image of
an electron in motion. Imagine owning a camera so powerful it can take
freeze-frame photographs of a moving
electron -- an object traveling so fast it could circle the Earth many
times in a matter of a second. Researchers at the University of Arizona
have developed the world's fastest electron microscope that can do just
that.
New evidence for electron's dual nature found in a quantum spin liquid.
New experiments provide evidence for a decades-old theory that, in the
quantum regime, an electron behaves as if it is made of two particles: one
particle that carries its negative charge and the other that gives it a
magnet-like property called
spin. The team
detected evidence for this theory in materials called
quantum spin liquids, a phase of matter that can be formed by
interacting quantum spins in certain magnetic materials. Quantum spin
liquids are generally characterized by their long-range quantum
entanglement, fractionalized excitations, and absence of ordinary magnetic
order.
Physicists see electron whirlpools. Theorists have long predicted
electrons should exhibit this hallmark of fluid flow; the findings could
inform the design of more efficient electronics. Though they are discrete
particles, water molecules flow collectively as liquids, producing
streams, waves, whirlpools, and other classic fluid phenomena. To
visualize electron vortices, the team looked to
tungsten ditelluride (WTe2), an ultraclean metallic compound that has
been found to exhibit exotic electronic properties when isolated in
single-atom-thin, two-dimensional form. The researchers synthesized pure
single crystals of tungsten ditelluride, and exfoliated thin flakes of the
material. They then used e-beam lithography and plasma etching techniques
to pattern each flake into a center channel connected to a circular
chamber on either side. They etched the same pattern into thin flakes of
gold -- a standard metal with ordinary, classical electronic properties.
They then ran a current through each patterned sample at ultralow
temperatures of 4.5 kelvins (about -450 degrees Fahrenheit) and measured
the current flow at specific points throughout each sample, using a
nanoscale scanning
superconducting quantum interference device (SQUID) on a tip. This
device was developed in Zeldov's lab and measures magnetic fields with
extremely high precision. Using the device to scan each sample, the team
was able to observe in detail how electrons flowed through the patterned
channels in each material. The researchers observed that electrons flowing
through patterned channels in gold flakes did so without reversing
direction, even when some of the current passed through each side chamber
before joining back up with the main current. In contrast, electrons
flowing through tungsten ditelluride flowed through the channel and
swirled into each side chamber, much as water would do when emptying into
a bowl. The electrons created small whirlpools in each chamber before
flowing back out into the main channel. We observed a change in the flow
direction in the chambers, where the flow reversed direction compared
to that in the central strip. The group's observations are the
first direct visualization of swirling vortices in an electric current.
The findings represent an experimental confirmation of a fundamental
property in electron behavior. They may also offer clues to how engineers
might design low-power devices that conduct electricity in a more fluid,
less resistive manner.
Cathode Rays
are streams of electrons observed in vacuum tubes that were first observed
in 1869. If an evacuated glass tube is equipped with two electrodes and a
voltage is applied, glass behind the positive electrode is observed to
glow, due to electrons emitted from and traveling away from the cathode
(the electrode connected to the negative terminal of the voltage supply).
Cathode Ray Tubes (CRTs) use a focused beam of electrons deflected by
electric or magnetic fields to create the image on a television screen.
Particle Accelerator.
Excited State of
a system (such as an atom, molecule or nucleus) is any
quantum state of the system that has a higher energy than the ground
state (that is, more energy than the absolute minimum). Excitation is an
elevation in energy level above an arbitrary baseline energy state. In
physics there is a specific technical definition for energy level which is
often associated with an atom being raised to an excited state. The
temperature of a group of particles is indicative of the level of
excitation (with the notable exception of systems that exhibit negative
temperature). The lifetime of a system in an excited state is usually
short: spontaneous or induced emission of a quantum of energy (such as a
photon or a phonon) usually occurs shortly after the system is promoted to
the excited state, returning the system to a state with lower energy (a
less excited state or the ground state). This return to a lower energy
level is often loosely described as decay and is the inverse of
excitation. Long-lived excited states are often called metastable.
Long-lived nuclear isomers and singlet oxygen are two examples of this.
Hot Electrons are electrons that
have gained very high levels of
kinetic
energy after being accelerated by a strong electric field in areas of
high field intensities within a semiconductor. (a type of 'hot carriers').
Hot-Carrier Injection is a phenomenon in solid-state electronic
devices where an electron or a “hole” gains sufficient kinetic energy to
overcome a potential barrier necessary to break an interface state. The
term "hot" refers to the effective temperature used to model carrier
density, not to the overall temperature of the device. Since the charge
carriers can become trapped in the gate dielectric of a MOS transistor,
the switching characteristics of the transistor can be permanently
changed. Hot-carrier injection is one of the mechanisms that adversely
affects the reliability of semiconductors of solid-state devices.
When
electrons slowly vanish during cooling. Researchers observe an effect
in the
quantum world that does not exist in the
macrocosm. Many substances change their properties when they are cooled
below a certain critical temperature. Such a
phase
transition occurs, for example, when water freezes. However, in
certain metals there are phase transitions that do not exist in the
macrocosm. They arise because of the special laws of quantum mechanics
that apply in the realm of nature's smallest building blocks. It is
thought that the concept of electrons as carriers of quantized electric
charge no longer applies near these exotic phase transitions. Researchers
have now found a way to prove this directly. Their findings allow new
insights into the exotic world of quantum physics.
Valence and Conduction Bands. The band of energy occupied by the
valence electrons is called the valence band. The valence band is the
highest occupied band. Conduction Band: The conduction band is normally
empty and may be defined as the lowest unfilled energy band. In the
conduction band, electrons can move freely and are generally called
conduction electrons.
Electron Paramagnetic Resonance is a method for studying materials
with unpaired electrons. The basic concepts of EPR are analogous to those
of nuclear magnetic resonance (NMR), but it is electron spins that are
excited instead of the spins of atomic nuclei.
The geometry of an electron determined for the first time.
Positron
is the
antiparticle or the
antimatter counterpart of
the electron. The positron has an electric charge of +1 e, a spin of 1/2
(same as electron), and has the same mass as an electron. When a positron
collides with an electron, annihilation occurs. If this collision occurs
at low energies, it results in the production of two or more gamma ray
photons. Positrons may be generated by positron emission radioactive decay
(through weak interactions), or by pair production from a sufficiently
energetic photon which is interacting with an atom in a material.
Valleytronics
refers to the technology of control over the valley degree of freedom (a
local maximum/minimum on the valence/conduction band) of certain
semiconductors that
present multiple valleys inside the first Brillouin zone—known as
multivalley semiconductors. The term was coined in analogy to the blooming
field of spintronics. While in spintronics the internal degree of freedom
of spin is harnessed to store, manipulate and read out bits of
information, the proposal for valleytronics is to perform similar tasks
using the multiple extrema of the band structure, so that the information
of 0s and 1s would be stored as different discrete values of the crystal
momentum.
Mechanical Vibration generated by Electron Spins. A new way to deliver
a force to drive
micro mechanics.
A multiband approach to Coulomb drag and indirect excitons in which
coupled charged particles moved in exactly the opposite direction to that
predicted. This apparently contradictory phenomenon is associated with the
bandgap in dual-layer graphene structures, a bandgap which is very much
smaller than in conventional semiconductors.
Transition Metal is an
element
whose atom has a partially filled d sub-shell, or which can give rise to
cations with an incomplete d sub-shell". Or any element in the d-block of
the periodic table, which includes groups 3 to 12 on the periodic table.
In actual practice, the f-block lanthanide and actinide series are also
considered transition metals and are called "inner transition metals".
Semimetal is a material with a very small overlap between the bottom
of the conduction band and the top of the valence band. According to
electronic band theory, solids can be classified as insulators,
semiconductors,
semimetals, or metals. In insulators and semiconductors the filled valence
band is separated from an empty conduction band by a band gap.
Electron Configuration
is the distribution of electrons of an atom or
molecule (or other physical
structure) in atomic or molecular
orbitals.
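The distribution can be generated mechanically with the aufbau (Madelung) filling order: subshells fill by increasing n + l, ties broken by smaller n. A minimal sketch (it reproduces most ground states but not transition-metal exceptions such as Cr or Cu):

```python
# Aufbau / Madelung-rule electron configuration generator: fill subshells
# in order of n + l, breaking ties with the smaller n, each subshell
# holding at most 2(2l + 1) electrons.

SUBSHELL_LETTERS = "spdf"

def electron_configuration(z):
    # candidate subshells (n, l), Madelung-sorted
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(min(n, 4))),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    config, remaining = [], z
    for n, l in subshells:
        if remaining == 0:
            break
        fill = min(2 * (2 * l + 1), remaining)   # subshell capacity 2(2l+1)
        config.append(f"{n}{SUBSHELL_LETTERS[l]}{fill}")
        remaining -= fill
    return " ".join(config)

print(electron_configuration(8))    # oxygen: 1s2 2s2 2p4
print(electron_configuration(11))   # sodium: 1s2 2s2 2p6 3s1
```

The last, partially filled outer subshell is where the valence electrons discussed earlier in this section live.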
Physicists discover topological behavior of electrons in 3D magnetic
material. An international team of researchers led by scientists at
Princeton University has found that a magnetic material at room
temperature enables electrons to behave counterintuitively, acting
collectively rather than as individuals. Their collective behavior mimics
massless particles and anti-particles that coexist in an unexpected way
and together form an exotic loop-like structure. Researchers explored a
type of material in which the electrons behave according to the
mathematical rules of topology. They found topological behaviors of
electrons in a three-dimensional magnetic material at room temperature,
opening new avenues of future study. The key to this behavior is topology
-- a branch of mathematics that is already known to play a powerful role
in dictating the behavior of electrons in crystals. Topological materials
can contain massless particles in the form of light, or photons. In a
topological crystal, the electrons often behave like slowed-down light
yet, unlike light, carry electrical charge. Topology has seldom been
observed in magnetic materials, and the finding of a magnetic topological
material at room temperature is a step forward that could unlock new
approaches to harnessing topological materials for future technological
applications. The exotic magnetic crystal consists of cobalt, manganese
and gallium, arranged in an orderly, repeating three-dimensional pattern.
To explore the material's topological state, the researchers used a
technique called angle-resolved photoemission spectroscopy. In this
experiment, high-intensity light shines on the sample, forcing electrons
to emit from the surface. These emitted electrons can then be measured,
providing information about the way the electrons behaved when they were inside the crystal.
Electron Spin Polarization in strong-field ionization of xenon atoms.
As a fundamental property of the electron, the spin plays a decisive role
in the electronic structure of matter, from solids to molecules and atoms,
for example, by causing magnetism. Yet, despite its importance, the spin
dynamics of the electrons released during the interaction of atoms with
strong ultrashort laser pulses has remained experimentally unexplored.
Here, we report the experimental detection of electron spin polarization
by the strong-field ionization of xenon atoms and support our results with
theoretical analysis. We found up to 30% spin polarization changing its
sign with electron energy. This work opens the new dimension of spin to
strong-field physics. It paves the way to the production of
sub-femtosecond spin-polarized electron pulses with applications ranging
from probing the magnetic properties of matter at ultrafast timescales to
testing chiral molecular systems with sub-femtosecond temporal and sub-ångström
spatial resolutions.
Electron Sharing - Free Electrons
Electron Transfer occurs when an
electron
relocates from an atom or
molecule to another such chemical entity. ET is a mechanistic description
of a
redox reaction, wherein the
oxidation state of reactant and product
changes.
Mitochondria.
Electron Donor is a chemical entity that
donates electrons to another
compound. It is a reducing agent that, by virtue of its donating
electrons, is itself
oxidized in the process.
pH.
Electron Acceptor is a chemical entity that accepts electrons transferred
to it from another compound. The transfer may be a complete and
irreversible transfer of one or more electrons, or the electrons may not
be completely transferred but instead result in an electron
resonance between the donor and acceptor. The microbes aren’t eating naked
electrons. Electrons traverse the entire distance of the membrane
unescorted.
Free Electrons are
electrons that are
not
bound to an atom and are able to move freely. They have a negative
charge and move in the opposite direction of an
electric
field.
Reduction Potential -
Oxidation -
Unpaired Electron -
Grounding -
Heat Energy
Electron Pair consists of two electrons that occupy the same molecular
orbital but have opposite spins.
Unpaired Electron is an electron that
occupies an orbital of an atom
singly, rather than as part of an
electron pair. Each atomic orbital of an atom (specified by the three
quantum numbers n, l and m) has a capacity to contain two electrons
(electron pair) with opposite spins. As the formation of electron pairs is
often energetically favourable, either in the form of a chemical bond or
as a lone pair, unpaired electrons are relatively uncommon in chemistry,
because an entity that carries an unpaired electron is usually rather
reactive. In organic chemistry they typically only occur briefly during a
reaction on an entity called a radical; however, they play an important
role in explaining reaction pathways. Radicals are uncommon in s- and
p-block chemistry, since the unpaired electron occupies a valence p
orbital or an sp, sp2 or sp3 hybrid orbital. These orbitals are strongly
directional and therefore overlap to form strong covalent bonds, favouring
dimerisation of radicals. Radicals can be stable if dimerisation would
result in a weak bond or the unpaired electrons are stabilised by
delocalisation. In contrast, radicals in d- and f-block chemistry are very
common. The less directional, more diffuse d and f orbitals, in which
unpaired electrons reside, overlap less effectively, form weaker bonds and
thus dimerisation is generally disfavoured. These d and f orbitals also
have comparatively smaller radial extension, disfavouring overlap to form
dimers. Relatively more stable entities with unpaired electrons do exist,
e.g. the
nitric oxide molecule has one. According to Hund's rule, the
spins of unpaired electrons are aligned parallel and this gives these
molecules paramagnetic properties. The most stable examples of unpaired
electrons are found on the atoms and ions of lanthanides and actinides.
The incomplete f-shell of these entities does not interact very strongly
with the environment they are in and this prevents them from being paired.
The ions with the largest number of unpaired electrons are Gd3+ and Cm3+
with seven unpaired electrons. An unpaired electron has a magnetic dipole
moment, while an electron pair has no dipole moment because the two
electrons have opposite spins so their magnetic dipole fields are in
opposite directions and cancel. Thus an atom with unpaired electrons acts
as a magnetic dipole and interacts with a magnetic field. Only elements
with unpaired electrons exhibit paramagnetism, ferromagnetism, and
antiferromagnetism.
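The link between unpaired electrons and magnetism can be quantified with the standard spin-only formula, μ = √(n(n+2)) in Bohr magnetons, where n is the number of unpaired electrons. A short illustrative check (textbook formula, not from the source):

```python
import math

# Spin-only magnetic moment in Bohr magnetons: mu = sqrt(n(n+2)).
# A fully paired species (n = 0) has no net moment, because the two spins
# of each pair point opposite ways and their dipole fields cancel.

def spin_only_moment(n_unpaired):
    return math.sqrt(n_unpaired * (n_unpaired + 2))

print(round(spin_only_moment(0), 2))   # 0.0  -> diamagnetic, fields cancel
print(round(spin_only_moment(1), 2))   # 1.73 -> e.g. a simple radical
print(round(spin_only_moment(7), 2))   # 7.94 -> Gd3+ / Cm3+, seven unpaired
```

The jump from zero to a finite moment as soon as one electron is unpaired is why only species with unpaired electrons show paramagnetism.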
Lone
Pair refers to a pair of valence electrons that are not shared with
another atom in a
covalent bond and is
sometimes called an unshared pair or non-bonding pair. Lone pairs are
found in the outermost electron shell of atoms. They can be identified by
using a Lewis structure. Electron pairs are therefore considered lone
pairs if two electrons are paired but are not used in chemical bonding.
Thus, the number of lone pair electrons plus the number of bonding
electrons equals the total number of valence electrons around an atom.
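That bookkeeping rule is easy to check numerically: every lone pair contributes two electrons and every single bond contributes two shared electrons around the atom. A tiny sketch using the oxygen atom in water as the example:

```python
# Lewis-structure electron bookkeeping: electrons around an atom split
# into lone-pair electrons and bonding electrons, two per pair and two
# per single bond.

def electrons_around(lone_pairs, bonds):
    """Total electrons shown around an atom in a Lewis structure."""
    return 2 * lone_pairs + 2 * bonds

print(electrons_around(lone_pairs=2, bonds=2))  # oxygen in H2O: 8 (an octet)
print(electrons_around(lone_pairs=3, bonds=1))  # fluorine in HF: 8 as well
```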
Exciton
is a bound state of an electron and an electron hole which are attracted
to each other by the
electrostatic Coulomb force. It is an electrically
neutral quasiparticle that exists in insulators, semiconductors and in
some liquids. The exciton is regarded as an elementary excitation of
condensed matter that can
transport energy
without transporting net electric charge. An exciton can form when a
photon is absorbed by a semiconductor. This excites an electron from the
valence band into the conduction band. In turn, this leaves behind a
positively charged electron hole (an abstraction for the location from
which an electron was moved). The electron in the conduction band is then
effectively attracted to this localized hole by the repulsive Coulomb
forces from large numbers of electrons surrounding the hole and excited
electron. This attraction provides a stabilizing energy balance.
Consequently, the exciton has slightly less energy than the unbound
electron and hole. The wavefunction of the bound state is said to be
hydrogenic, an exotic atom state akin to that of a hydrogen atom. However,
the binding energy is much smaller and the particle's size much larger
than a hydrogen atom. This is because of both the screening of the Coulomb
force by other electrons in the semiconductor (i.e., its dielectric
constant), and the small effective masses of the excited electron and
hole. The recombination of the electron and hole, i.e. the decay of the
exciton, is limited by resonance stabilization due to the overlap of the
electron and hole wave functions, resulting in an extended lifetime for
the exciton.
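The hydrogenic picture above gives a quick estimate of exciton binding: scale the hydrogen Rydberg (13.6 eV) by the reduced effective mass and divide by the dielectric constant squared. The GaAs numbers below are illustrative textbook values, not from the source.

```python
# Wannier (hydrogen-like) exciton binding energy estimate:
#   E_b = 13.6 eV * (mu / m_e) / epsilon^2
# Screening (large epsilon) and small effective masses make E_b thousands
# of times weaker than hydrogen's 13.6 eV, as the text explains.

RYDBERG_EV = 13.6

def exciton_binding_ev(reduced_mass_ratio, dielectric_const):
    return RYDBERG_EV * reduced_mass_ratio / dielectric_const**2

# Illustrative GaAs values: mu/m_e ~ 0.058, epsilon ~ 12.9
eb = exciton_binding_ev(0.058, 12.9)
print(f"{eb * 1000:.1f} meV")   # a few meV, vs 13 600 meV for hydrogen
```

A binding energy of only a few meV also explains why many excitons survive only at low temperature: room-temperature thermal energy (~25 meV) is enough to rip the pair apart.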
Quantized
Energy states that electrons are necessary
for atoms to exist. But
where does an electron get
its energy from? How do electrons circulate around the
nucleus forever in
perpetual motion? Is it
electromagnetic radiation or God? Is it true that atoms do not have to
get
energy from somewhere, because they are energy? Einstein proposed that
mass and energy are two sides of the same coin. Mass can convert into
energy and vice-versa. In reality, all matter we see is a manifestation of
energy. Matter is nothing but hypercondensed energy, and this
energy can vibrate at different
frequencies, giving rise
to fundamental forces based on the vibrational patterns. This is what
string theory describes as a "string". Thus we see that matter itself is
energy. However, in chemical reactions, an atom may use its own internal
energy, or absorb energy from its surroundings in the form of heat, light, etc.
What came first, the chicken or
the egg?
Quantum Mechanics tells us that
electrons have both
wave and particle-like properties.
Tunneling is a
quantum mechanical effect. A tunneling current occurs
when
electrons move through a barrier that they classically shouldn't be
able to move through. In classical terms, if you don't have enough energy
to move "over" a barrier, you won't. However, in the quantum mechanical
world, electrons have wavelike properties. These waves don’t end abruptly
at a wall or barrier, but taper off quickly. If the barrier is thin
enough, the probability function may extend into the next region, through
the barrier! Because of the small probability of an electron being on the
other side of the barrier, given enough electrons, some will indeed move
through and appear on the other side. When an electron moves through the
barrier in this fashion, it is called tunneling.
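The exponential taper described above can be put into numbers with the textbook thin-barrier approximation, T ≈ exp(-2κL) with κ = √(2m(V−E))/ħ. The barrier heights and widths below are illustrative, not from the source.

```python
import math

# Rectangular-barrier tunneling estimate: the electron's wavefunction
# decays as exp(-kappa * x) inside the barrier, so the transmission
# probability falls off exponentially with barrier width.

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
M_E = 9.1093837015e-31     # electron mass, kg
EV = 1.602176634e-19       # joules per electron-volt

def transmission(barrier_ev, energy_ev, width_nm):
    kappa = math.sqrt(2 * M_E * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_nm * 1e-9)

# Electron 1 eV below the barrier top: a 0.5 nm barrier leaks noticeably,
# doubling the width suppresses the current by orders of magnitude.
print(transmission(5.0, 4.0, 0.5))   # roughly 0.6% get through
print(transmission(5.0, 4.0, 1.0))   # orders of magnitude smaller
```

This extreme width sensitivity is exactly what scanning tunneling microscopes exploit: sub-ångström changes in tip height change the tunneling current measurably.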
Doping in semiconductors is the intentional introduction of impurities
into an intrinsic semiconductor for the purpose of modulating its
electrical, optical and structural properties. The doped material is
referred to as an extrinsic semiconductor. A semiconductor doped to such
high levels that it acts more like a conductor than a semiconductor is
referred to as a degenerate semiconductor. In the context of phosphors and
scintillators, doping is better known as activation. Doping is also used
to control the color in some pigments. Doping a semiconductor in a good
crystal introduces allowed energy states within the band gap, but very
close to the energy band that corresponds to the dopant type. In other
words,
electron donor impurities create
states near the conduction band while electron acceptor impurities create
states near the valence band. The gap between these energy states and the
nearest energy band is usually referred to as dopant-site bonding energy
or EB and is relatively small.
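Because EB is small, dopants are essentially fully ionized at room temperature, and the carrier concentrations then follow the standard mass-action law n·p = n_i². A short sketch with illustrative textbook values for silicon (not from the source):

```python
# n-type doping arithmetic under full donor ionization: electrons track
# the donor density, and the mass-action law n * p = n_i^2 suppresses
# the minority holes.

N_I_SILICON = 1.0e10       # illustrative intrinsic concentration at 300 K, cm^-3

def carriers_n_type(donor_density):
    n = donor_density              # majority electrons ~ donor density
    p = N_I_SILICON**2 / n         # minority holes from mass action
    return n, p

n, p = carriers_n_type(1.0e16)
print(f"n = {n:.1e} cm^-3, p = {p:.1e} cm^-3")  # n = 1.0e16, p = 1.0e4
```

One donor per million silicon atoms is enough to multiply the electron concentration a millionfold, which is why doping dominates semiconductor conductivity.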
Anti-Hydrogen
is the
antimatter counterpart of
Hydrogen. Whereas the common
Hydrogen
atom is composed of an electron and proton, the antihydrogen atom is made
up of a positron and antiproton. Scientists hope studying antihydrogen may
shed light on the question of why there is more matter than antimatter in
the universe, known as the baryon asymmetry problem. Antihydrogen is
produced artificially in particle accelerators. In 1999, NASA gave a cost
estimate of $62.5 trillion per gram of antihydrogen (equivalent to $90
trillion today), making it the most expensive material to produce. This is
due to the extremely low yield per experiment, and high opportunity cost
of using a particle accelerator.
Machine learning reveals how strongly interacting electrons behave at atomic level.
Electron Transport Chain is a series of complexes that transfer
electrons from electron donors to electron acceptors via
redox (both reduction and oxidation occurring simultaneously)
reactions, and couples this electron transfer with the transfer of protons
(H+ ions) across a membrane. The electron transport chain is built up of
peptides, enzymes, and other molecules. A series of electron transporters
embedded in the inner mitochondrial membrane that shuttles electrons from
NADH and FADH2 to molecular oxygen. In the process, protons are pumped
from the mitochondrial matrix to the intermembrane space, and oxygen is
reduced to form water.
Photoelectric Effect is the
emission of electrons
when
electromagnetic radiation, such as
light, hits a material. Electrons emitted
in this manner are called photoelectrons. The phenomenon is studied in
condensed matter physics, and solid state and quantum chemistry to draw
inferences about the properties of
atoms, molecules
and solids. The effect has found use in electronic devices specialized for
light detection and precisely timed electron emission. In classical
electromagnetic theory, the photoelectric effect would be attributed to
the transfer of energy from the continuous
light waves to an electron. An
alteration in the intensity of light would change the
kinetic energy of the emitted
electrons, and sufficiently dim light would result in the emission delayed
by the time it would take the electrons to accumulate enough energy to
leave the material. The experimental results, however, disagree with both
predictions. Instead, they show that
electrons are
dislodged only when the light exceeds a threshold frequency. Below that
threshold, no electrons are emitted from the material, regardless of the
light intensity or the length of time of exposure to the light. Because a
low-frequency beam at a high intensity could not build up the energy
required to produce photoelectrons like it would have if light's energy
was coming from a continuous wave, Einstein proposed that a beam of light
is not a wave propagating through space, but a collection of discrete wave
packets—
photons. Emission of conduction
electrons from typical metals requires a few electron-volt (eV) light
quanta, corresponding to short-wavelength visible or ultraviolet light. In
extreme cases, emissions are induced with photons approaching zero energy,
like in systems with negative electron affinity and the emission from
excited states, or a few hundred keV photons for core electrons in
elements with a high atomic number. Study of the photoelectric effect led
to important steps in understanding the quantum nature of light and
electrons and influenced the formation of the concept of wave–particle
duality. Other phenomena where light affects the movement of electric
charges include the photoconductive effect, the photovoltaic effect, and
the photoelectrochemical effect.
Ions - Ionization
Ion is an
atom or a
molecule in which the total number of
electrons is
not equal to the total
number of
protons, giving the atom or molecule a net
positive or
negative electrical
charge. Positive ions are
molecules that have lost one or more electrons whereas negative ions are
atoms with
extra-negatively-charged electrons. An ion is
an atom or a group of atoms where the number of electrons is not equal to
the number of protons. An ion can also be an atom without any electrons.
Lasers.
Negative ions can improve mood and are believed to produce biochemical
reactions that increase levels of the mood chemical
serotonin. Waterfalls are a
great source of negative ions, so go outside. Avoid ion air filters that
create
ozone.
Grounding -
Being in Nature
Benefits.
Cation is a
positively charged ion. Ions can be created by either
chemical or physical
means, via ionization.
Anion
is a negatively charged ion. Atoms or radicals (groups of atoms) that have
gained
electrons, now have more electrons than
protons, thus anions have a negative charge.
Ion Implantation is a low-temperature process by which ions of one
element are accelerated into a solid target, thereby changing the
physical, chemical, or electrical properties of the target. Ion
implantation is used in semiconductor device fabrication and in metal
finishing, as well as in materials science research. The ions can alter
the elemental composition of the target (if the ions differ in composition
from the target) if they stop and remain in the target. Ion implantation
also causes chemical and physical changes when the ions impinge on the
target at high energy. The crystal structure of the target can be damaged
or even destroyed by the energetic collision cascades, and ions of
sufficiently high energy (tens of MeV) can cause nuclear transmutation.
Inside
smartphones are chips that are made by implanting single ions into
silicon, in a process called ion implantation. And that uses a particle
accelerator.
Polyatomic Ion
is a charged chemical species (ion) composed of two or more atoms
covalently bonded or of a metal complex that can be considered to be
acting as a single unit. The prefix poly- means "many" in Greek, but even
ions of two atoms are commonly referred to as polyatomic. In older
literature, a polyatomic ion is also referred to as a radical, and less
commonly, as a radical group. In contemporary usage, the term radical
refers to free radicals that are (not necessarily charged) species with an
unpaired electron.
Ionization is the process by which an atom or a molecule
acquires a negative or positive charge by gaining or losing
electrons to
form ions, often in conjunction with other chemical changes. Ionization
can result from the
loss of an electron after collisions with subatomic
particles, collisions with other
atoms,
molecules and ions, or through the
interaction with light. Heterolytic bond cleavage and heterolytic
substitution reactions can result in the formation of ion pairs.
Ionization can occur through
radioactive decay by the internal conversion
process, in which an excited nucleus transfers its
energy to one of the
inner-shell electrons causing it to be ejected.
Ionizing Radiation is radiation that carries enough energy
to free electrons from atoms or molecules, thereby ionizing them. Ionizing
radiation is made up of energetic subatomic particles, ions or atoms
moving at high speeds (usually greater than 1% of the speed of light), and
electromagnetic waves on the
high-energy end of the
electromagnetic
spectrum.
Three ways to remove an electron
from an atom: 1) Annihilate the electron by hitting the target
(atom, ion, molecule, etc.) with a positron. 2) Hit the target with a very
fast electron or high energy x-ray, shooting the electron off into space.
3) Transfer the electron to another atom, ion, molecule etc. (reduction
oxidation chemistry). The electron leaves the atom when its
kinetic energy is higher than the
attraction between it and the nucleus.
Binding Energy
-
Hydrogen.
Self-Ionization of Water is an ionization reaction in pure water or in
an aqueous solution, in which a water molecule, H2O, deprotonates (loses
the nucleus of one of its hydrogen atoms) to become a hydroxide ion,
OH−. (also autoionization of water, and autodissociation of water).
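This equilibrium is governed by the ion product Kw = [H+][OH−] = 1.0e-14 at 25 °C, which fixes the pH of pure water at 7. A short check (standard values, not from the source):

```python
import math

# Self-ionization of water: in pure water [H+] = [OH-], so each equals
# sqrt(Kw), and pH = -log10([H+]).

KW_25C = 1.0e-14

def ph_pure_water(kw=KW_25C):
    h_conc = math.sqrt(kw)          # [H+] = [OH-] = sqrt(Kw)
    return -math.log10(h_conc)

print(ph_pure_water())                    # 7.0 at 25 C
# Kw grows with temperature, so hot pure water has pH below 7 while
# remaining perfectly neutral ([H+] still equals [OH-]):
print(round(ph_pure_water(5.5e-14), 2))   # ~6.63
```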
Ionizing Radiation is any type of particle
or electromagnetic wave that carries enough energy to ionize or
remove electrons from an atom. There are two types of
electromagnetic waves that can ionize atoms: X-rays and gamma-rays, and
sometimes they have the same energy.
Radioactive Decay -
Extremophile.
Non-Ionizing Radiation refers to any type of electromagnetic radiation
that does not carry enough energy per quantum (photon energy) to ionize
atoms or molecules—that is, to completely remove an electron from an atom
or molecule. Instead of producing charged ions when passing through
matter, non-ionizing electromagnetic radiation has sufficient energy only
for excitation, the movement of an electron to a higher energy state.
Ionizing radiation, which has a higher frequency and shorter wavelength
than nonionizing radiation, has many uses but can be a health hazard;
exposure to it can cause burns, radiation sickness, cancer, and
genetic damage. Using ionizing radiation requires elaborate radiological
protection measures which in general are not required with nonionizing
radiation.
Phosphate ions in water have a curious habit of spontaneously
alternating between their commonly encountered hydrated state and a
mysterious, previously unreported ‘dark’ state.
Redox
Reduction Potential is a measure of the tendency of a chemical species
to
acquire electrons and thereby be reduced.
Reduction potential is measured in volts (V), or millivolts (mV). Each
species has its own intrinsic reduction potential; the more positive the
potential, the greater the species' affinity for electrons and tendency to
be reduced. Oxidation-reduction potential (ORP) is a common measurement for
water quality. (also known as redox potential).
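The "more positive potential is reduced" rule can be illustrated with a classic galvanic cell. The values below are standard textbook reduction potentials (vs. the standard hydrogen electrode); the dictionary keys and function name are this example's own naming:

```python
# Sketch: the species with the more positive standard reduction potential
# is reduced (cathode); the cell voltage is the difference of potentials.

E_STANDARD = {          # half-reaction: standard reduction potential, volts
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
    "Ag+/Ag":  +0.80,
}

def cell_potential(cathode, anode):
    """E_cell = E(cathode) - E(anode); positive means spontaneous."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

# Daniell cell: copper is reduced (cathode), zinc is oxidized (anode).
print(round(cell_potential("Cu2+/Cu", "Zn2+/Zn"), 2))  # 1.1 (volts)
```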
Redox is a
chemical reaction in which the
oxidation states of atoms are changed. Any such reaction involves both a
reduction process and a complementary oxidation process, two key concepts
involved with
electron transfer processes. Redox reactions include all
chemical reactions in which atoms have their oxidation state changed; in
general, redox reactions involve the transfer of electrons between
chemical species. The chemical species from which the electron is stripped
is said to have been oxidized, while the chemical species to which the
electron is added is said to have been reduced. It can be explained in
simple terms:
Oxidation is the loss of electrons or an increase in
oxidation state by a molecule, atom, or ion.
Reduction is the gain of
electrons or a decrease in oxidation state by a molecule, atom, or ion.
As an example, during the combustion of wood, oxygen from the air is
reduced, gaining electrons from carbon which is oxidized. Although
oxidation reactions are commonly associated with the formation of oxides
from oxygen molecules, oxygen is not necessarily included in such
reactions, as other chemical species can serve the same function. The
reaction can occur relatively slowly, as with the formation of rust, or
more quickly, in the case of fire. There are simple redox processes, such
as the oxidation of carbon to yield carbon dioxide (CO2) or the reduction
of carbon by hydrogen to yield methane (CH4), and more complex processes
such as the oxidation of glucose (C6H12O6) in the human body.
Redox is
a contraction of the name for a chemical reduction–oxidation reaction.
Antioxidant.
Oxidation is any
chemical reaction that involves the
transfer of
electrons. Specifically, the substance that gives away electrons
is oxidized. When iron reacts with
oxygen it forms a chemical called rust,
because the iron has been oxidized (it has lost some electrons) and the
oxygen has been reduced (it has gained some electrons). Oxidation
is the opposite of reduction. A reduction reaction always comes together
with an oxidation reaction. Oxidation and reduction together are called
redox (reduction and oxidation).
Oxygen does not have to be present in a
reaction, for it to be a redox-reaction.
Oxidation is the loss of electrons.
Fire.
Oxidizing Agent is a substance that has the ability to oxidize other
substances, in other words to
cause them to lose
electrons. Common oxidizing agents are
oxygen, hydrogen peroxide and the halogens.
When an avocado is cut, an
enzyme in the flesh reacts with
oxygen, turning the layer of guacamole that's in contact with oxygen an
unpleasant brown color. This is called oxidation. A similar reaction
occurs in apples when you cut them. This chemical reaction is not a sign
of a spoiled avocado. Compounds in the flesh are reacting with oxygen,
with the help of enzymes, to produce brown pigments called melanin. The
brown part of an avocado might look unappetizing and can taste bitter, but
it's still safe to eat. In the fridge, the guacamole lasts five to seven
days. In the freezer it should last for three to four months. To slow down
the oxidation, pat the guacamole down to give it a flat surface. Pour a
thin but visible layer of water or lemon/lime juice over the guacamole to
form a barrier with the air. Cover the dish with plastic wrap, pushing the
wrap, so it is flush with the guacamole to prevent air pockets, or put the
guacamole in a sealed tub.
Ozone is
a powerful oxidant, far more so than dioxygen, and has many industrial and
consumer applications related to oxidation. This same high oxidizing
potential, however, causes ozone to damage mucous and respiratory tissues
in animals, and also tissues in plants, above concentrations of about 0.1
ppm. While this makes ozone a potent respiratory hazard and pollutant near
ground level, a higher concentration in the ozone layer (from two to eight
ppm) is beneficial, preventing damaging UV light from reaching the Earth's
surface. Ozone, or trioxygen, is an inorganic molecule with the chemical
formula O3. It is a pale blue gas with a distinctively pungent smell. It
is an allotrope of oxygen that is much less stable than the diatomic
allotrope
O2, breaking down
in the lower atmosphere to O2 (dioxygen). Ozone is formed from
dioxygen by the action of ultraviolet light (UV) and electrical
discharges within the Earth's atmosphere. It is present in very low
concentrations throughout the latter, with its highest concentration high
in the ozone layer of the stratosphere, which absorbs most of the Sun's
ultraviolet (UV) radiation.
Electronegativity is a chemical property that describes the tendency
of an atom to
attract a shared pair of electrons
(or electron density) towards itself. An atom's electronegativity is
affected by both its atomic number and the distance at which its valence
electrons reside from the charged nucleus. The higher the associated
electronegativity number, the more an element or compound attracts
electrons towards it. On the most basic level, electronegativity is
determined by factors like the nuclear charge (the more protons an atom
has, the more "pull" it will have on electrons) and the number/location of
other electrons present in the atomic shells (the more electrons an atom
has, the farther from the nucleus the
valence
electrons will be, and as a result the less positive charge they will
experience—both because of their increased distance from the nucleus, and
because the other electrons in the lower energy core orbitals will act to
shield the valence electrons from the positively charged nucleus). The
opposite of electronegativity is
electropositivity,
a measure of an element's
ability to donate electrons.
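The difference in electronegativity between two bonded atoms is often used as a rough guide to bond character. The sketch below uses standard Pauling values and the common (approximate) cutoffs of 0.5 and 1.7; both the cutoffs and the function name are illustrative conventions, not sharp physical laws:

```python
# Sketch: classify a bond by the electronegativity difference of its atoms.
# Pauling-scale values; the 0.5 / 1.7 cutoffs are textbook rules of thumb.

PAULING = {"H": 2.20, "C": 2.55, "O": 3.44, "Na": 0.93, "Cl": 3.16}

def bond_character(a, b):
    delta = abs(PAULING[a] - PAULING[b])
    if delta < 0.5:
        return "nonpolar covalent"
    if delta < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_character("H", "O"))    # polar covalent (water's O-H bond)
print(bond_character("Na", "Cl"))  # ionic (table salt)
print(bond_character("C", "H"))    # nonpolar covalent
```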
"
We have just worked out what atoms are,
and we’ve realized that they are marvelously complex
structures that can undergo amazing changes, many of which occur
naturally. And by studying atoms this way, we’ve been able to
improve our technologies, harness the energy of nuclear
reactions and better understand the natural world around us.
We’ve also been able to better protect ourselves from radiation
and discover how materials change when placed under extreme conditions."
Particles
Particle is a tiny piece of
anything having
finite mass and internal structure but
negligible
dimensions.
Particle
is a
minute fragment or quantity of
matter. In the physical sciences, a
particle is a small localized object to which can be ascribed several
physical or
chemical properties such as volume or mass.
They vary greatly
in size, from subatomic particles like the
electron, to
microscopic
particles like
atoms and
molecules, to macroscopic particles like powders
and other granular materials. Particles can also be used to create
scientific models of even larger objects, such as humans moving in a crowd
or celestial bodies in
motion.
Wave -
Quantum -
Binding Energy -
Particle Detectors -
Hadron
Massless Particle is an elementary particle whose invariant
mass is zero. The two known massless
particles are both gauge bosons: the
photon
(carrier of electromagnetism) and the gluon (carrier of the strong force).
However, gluons are never observed as free particles, since they are
confined within hadrons.
Neutrinos were originally thought to be massless.
However, because neutrinos change flavor as they travel, at least two of
the types of neutrinos must have mass.
Negative Mass
-
Neutrino Oscillation.
List of Particles (wiki) -
Photons (light) -
Electrons
Physicists confirm 67-year-old prediction of massless, neutral composite
particle. In 1956, theoretical physicists predicted that electrons in
a solid can do something strange. While they normally have a
mass and an electric charge, David Pines asserted that they
can combine to form a composite particle that is
massless, neutral, and does not interact with light. He called this
particle a 'demon.' Since then, it has been speculated to play an
important role in the behaviors of a wide variety of metals.
Unfortunately, the same properties that make it interesting have allowed
it to elude detection since its prediction.
Particles of the Standard Model (image)
Particle
Adventure -
Particle Smashing
(particle accelerator)
Massive Particle
refers to particles which have real non-zero rest
mass.
According to special relativity, their velocity is always lower than the
speed of light. The synonyms bradyon (from Greek: βραδύς, bradys, “slow”),
tardyon or ittyon are sometimes used to contrast with luxon (which moves
at light speed) and hypothetical tachyon (which moves faster than light).
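The statement that a massive particle's velocity is always below the speed of light follows from the relativistic energy relation E = γmc²: as kinetic energy grows, v/c approaches but never reaches 1. The function name and sample energies below are illustrative; the 0.511 MeV electron rest energy is a standard value:

```python
import math

def beta(kinetic_energy_mev, rest_mass_mev):
    """v/c for a massive particle, from E_total = gamma * m * c^2.
    Always strictly below 1 for any finite kinetic energy."""
    total = kinetic_energy_mev + rest_mass_mev
    return math.sqrt(1.0 - (rest_mass_mev / total) ** 2)

# Electron (rest energy 0.511 MeV) given 1 MeV of kinetic energy:
print(round(beta(1.0, 0.511), 3))  # 0.941 - fast, but still below c
# Even at 1000 MeV the speed only creeps toward, never reaches, c:
print(beta(1000.0, 0.511) < 1.0)   # True
```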
Charged Particle
is a particle with an
electric
charge. It may be an
ion, such as a molecule or
atom
with a surplus or deficit of electrons relative to
protons. It can be the
electrons and protons themselves, as well as other elementary particles,
like positrons. It may also be an atomic nucleus devoid of
electrons, such
as an alpha particle, a helium nucleus.
Neutrons have no charge. Plasmas
are a collection of charged particles, atomic nuclei and separated
electrons, but can also be a gas containing a significant proportion of
charged particles.
Plasma is called the fourth state of matter because its
properties are quite different from solids, liquids and gases.
Aurora.
Chameleon Particle is a
hypothetical scalar particle that couples to matter
more weakly than
gravity, postulated as a
dark energy candidate. Due to a
non-linear self-interaction, it has a variable effective mass which is an
increasing function of the ambient energy density—as a result, the range
of the force mediated by the particle is predicted to be very small in
regions of high density (for example on Earth, where it is less than 1 mm)
but much larger in low-density intergalactic regions: out in the cosmos
chameleon models permit a range of up to several thousand parsecs. As a
result of this variable mass, the hypothetical fifth force mediated by the
chameleon is able to evade current constraints on equivalence principle
violation derived from terrestrial experiments even if it couples to
matter with a strength equal or greater than that of gravity. Although
this property would allow the chameleon to drive the currently observed
acceleration of the universe's expansion, it also makes it very difficult
to test for experimentally.
Dark Energy
-
Ions -
Advanced
Propulsion
Virtual Particle
is a
transient fluctuation that exhibits many of the characteristics of an
ordinary particle, but that
exists for only a very short time, its existence
limited by the
uncertainty principle. These virtual
particles
form and disappear in a vacuum and often
appear in pairs that
near-instantaneously cancel themselves out. The strong force is carried by
a field of virtual particles called
gluons, randomly
popping into existence and disappearing.
Electrons
have also been caught disappearing and reappearing between atomic layers. The concept of
virtual particles arises in perturbation theory of
quantum field theory
where interactions between ordinary particles are described in terms of
exchanges of
virtual particles. Any process involving virtual particles
admits a schematic representation known as a
Feynman diagram, in which
virtual particles are represented by internal lines.
Quantum Fluctuation.
Quasiparticles are emergent phenomena that occur when a microscopically
complicated system such as a solid behaves as if it contained different
weakly interacting particles in free space. For example, as an electron
travels through a semiconductor, its motion is disturbed in a complex way
by its interactions with all of the other electrons and nuclei; however it
approximately behaves like an electron with a different mass (effective
mass) traveling unperturbed through free space. This "electron with a
different mass" is called an "electron quasiparticle". In another example,
the aggregate motion of electrons in the valence band of a semiconductor
or a hole band in a metal is the same as if the material instead contained
positively charged quasiparticles called electron holes. Other
quasiparticles or collective excitations include phonons (particles
derived from the vibrations of atoms in a solid), plasmons (particles
derived from plasma oscillations), and many others. These particles are
typically called "quasiparticles" if they are related to fermions, and
called "collective excitations" if they are related to bosons, although
the precise distinction is not universally agreed upon. Thus, electrons
and electron holes are typically called "quasiparticles", while phonons
and plasmons are typically called "collective excitations". The
quasiparticle concept is most important in condensed matter physics since
it is one of the few known ways of simplifying the quantum mechanical
many-body problem.
Quasi Crystal.
Phason
is a quasiparticle existing in quasicrystals due to their specific,
quasiperiodic lattice structure. Similar to phonon, phason is associated
with atomic motion. However, whereas phonons are related to translation of
atoms, phasons are associated with atomic rearrangements. As a result of
these rearrangements, waves, describing the position of atoms in crystal,
change phase, thus the term "phason". A phason can move faster than the
speed of sound in the material and so gives a greater heat transfer rate
(conduction) through the material than one in which the transfer of heat
is carried out only by phonons.
Magnon
is a quasiparticle, a collective excitation of the electrons' spin
structure in a crystal lattice. In the equivalent wave picture of quantum
mechanics, a magnon can be viewed as a quantized spin wave. Magnons carry
a fixed amount of energy and lattice momentum, and are spin-1, indicating
they obey boson behavior.
Unparticle
is a speculative theory that conjectures a form of matter that cannot be
explained in terms of particles using the Standard Model of particle
physics, because its components are scale invariant.
Anti-Particle. Every type of particle has an associated
antiparticle
with the same mass but with opposite physical charges (such as electric
charge). For example, the antiparticle of the
electron is the
positron (antielectron), which has positive charge and is produced
naturally in certain types of
radioactive decay. The opposite is also
true: the antiparticle of the positron is the electron. Some particles,
such as the photon, are their own antiparticle. Otherwise, for each pair
of antiparticle partners, one is designated as normal matter (the kind we
are made of), and the other (usually given the prefix "anti-") as in
antimatter. Particle–antiparticle pairs can annihilate each other,
producing photons; since the charges of the particle and antiparticle are
opposite, total charge is conserved. For example, the positrons produced
in natural radioactive decay quickly annihilate themselves with electrons,
producing pairs of gamma rays, a process exploited in positron emission
tomography. The laws of nature are very nearly symmetrical with respect to
particles and antiparticles. For example, an antiproton and a positron can
form an
antihydrogen atom, which is believed to have the same properties
as a hydrogen atom. This leads to the question of why the formation of
matter after the Big Bang resulted in a universe consisting almost
entirely of matter, rather than being a half-and-half mixture of matter
and antimatter. The discovery of Charge Parity violation helped to shed
light on this problem by showing that this symmetry, originally thought to
be perfect, was only approximate. Antiparticles are produced naturally in
beta decay, and in the interaction of cosmic rays in the Earth's
atmosphere. Because charge is conserved, it is not possible to create an
antiparticle without either destroying a particle of the same charge (as
in β+ decay, when a proton (positive charge) is destroyed, a neutron
created and a positron (positive charge, antiparticle) is also created and
emitted) or by creating a particle of the opposite charge. The latter is
seen in many processes in which both a particle and its antiparticle are
created simultaneously, as in
particle accelerators. This is the inverse
of the particle–antiparticle annihilation process. Although particles and
their antiparticles have opposite charges, electrically neutral particles
need not be identical to their antiparticles. The neutron, for example, is
made out of quarks, the antineutron from antiquarks, and they are
distinguishable from one another because neutrons and antineutrons
annihilate each other upon contact. However, other neutral particles are
their own antiparticles, such as photons, Z0 bosons, π0 mesons, and
hypothetical gravitons and some hypothetical WIMPs.
Antimatter -
Quantum Fluctuation -
Black Holes -
Information Paradox
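The annihilation arithmetic behind positron emission tomography can be sketched directly: an electron-positron pair at rest converts its combined rest energy (E = mc²) into two equal photons. The 511 keV electron rest energy is a standard value; the function name is this example's own:

```python
# Sketch: a particle-antiparticle pair at rest annihilates into two equal
# back-to-back photons; each carries one rest energy's worth (E = mc^2).
# For electron + positron that is the 511 keV gamma pair PET scanners detect.

ELECTRON_REST_KEV = 511.0  # electron (and positron) rest energy

def annihilation_photon_kev(rest_energy_kev):
    """Energy of each photon from a pair-at-rest annihilation."""
    total = 2 * rest_energy_kev  # both rest masses are converted to light
    return total / 2             # shared equally by the two photons

print(annihilation_photon_kev(ELECTRON_REST_KEV))  # 511.0
```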
Subatomic
Particles are
particles much smaller than atoms. There are two types of
subatomic particles: elementary particles, which according to current
theories are not made of other particles; and composite particles.
Particle physics and nuclear physics study these particles and how they
interact.
Electrons.
Elementary Particle is a particle whose substructure is unknown; thus, it is
unknown whether it is composed of other particles. Known elementary
particles include the fundamental fermions (quarks, leptons, antiquarks,
and antileptons), which generally are "matter particles" and "antimatter
particles", as well as the fundamental bosons (gauge bosons and the Higgs
boson), which generally are "force particles" that mediate interactions
among fermions. A particle containing two or more elementary particles is
a composite particle. (Fundamental Particle).
Beta
Particle is a high-energy, high-speed electron or positron emitted in
the
radioactive decay of an atomic
nucleus, such as a potassium-40 nucleus, in the process of beta decay.
Causality in physics is the relationship between
cause and effects.
It is considered to be fundamental to all natural science, especially
physics. Causality is also a topic studied from the perspectives of
philosophy and statistics. Causality means that an effect cannot occur
from a cause which is not in the back (past) light cone of that event.
Similarly, a cause cannot have an effect outside its front (future) light
cone. Cause and
effect is not an element of fundamental particle physics, because
particles don't care about the flow of time. For cause and effect
to happen, things have to occur in a linear fashion forward, "A to B."
However, the underlying laws of physics don't care about the direction of
time; instead they follow the predictable behaviors of a pattern. The
current state of a particle doesn't dictate its next state. Cause and
effect as we think of it only exists for us in the context of time, which
only moves forward.
Do Cause and Effect
Really Exist? (Big Picture Ep. 2/5) (youtube) -
Patterns
between
Events.
A system of lifeless particles can become "life-like" by collectively
switching back and forth
between
crystalline and
fluid states -- even when the
environment remains stable.
Baryon
is a composite subatomic particle made up of three quarks (a triquark, as
distinct from mesons, which are composed of one quark and one antiquark).
List of Baryons
(wiki) -
Matter.
Lambda Baryon are a family of subatomic hadron particles containing
one up quark, one down quark, and a third quark from a higher flavour
generation, in a combination where the
wave function changes sign upon the
flavour of any two quarks being swapped (thus differing from a Sigma
baryon). They are thus baryons, with total isospin of 0, and which are
either neutral or have the elementary charge +1.
Strange Matter. In a unique analysis of experimental data, nuclear
physicists have made observations of how
lambda particles, so-called 'strange matter,' are produced by a
specific process called
semi-inclusive deep inelastic scattering or SIDIS. What's more, these
data hint that the building blocks of protons, quarks and gluons, are
capable of marching through the atomic nucleus in pairs called diquarks,
at least part of the time. Unlike protons and neutrons, which only contain
a mixture of up and down quarks, lambdas contain one up quark, one down
quark and one strange quark. Physicists have dubbed matter that contains
strange quarks "strange matter."
Quark is an elementary particle and a fundamental
constituent of matter. Quarks combine to form composite particles called
hadrons, the most stable of which are
protons and neutrons, the components
of atomic nuclei. Due to a phenomenon known as color confinement, quarks
are never directly observed or found in isolation; they can be found only
within hadrons, such as baryons (of which protons and neutrons are
examples) and mesons. For this reason, much of what is known about quarks
has been drawn from observations of the hadrons themselves. Quarks have
various intrinsic properties, including electric charge, mass, color
charge, and spin. There are six types, known as flavors, of quarks: up,
down, strange, charm, bottom, and top.
Waves -
Quantum.
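The quark charges (+2/3 for up-type, -1/3 for down-type, in units of the elementary charge) make hadron charges simple bookkeeping, which is easy to check exactly with Python's `fractions` module. The function name is this example's own:

```python
from fractions import Fraction

# Standard electric charges of the six quark flavors, in units of e.
QUARK_CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Total electric charge of a hadron from its constituent quarks."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(hadron_charge("uud"))  # 1  (proton)
print(hadron_charge("udd"))  # 0  (neutron)
print(hadron_charge("uds"))  # 0  (lambda baryon)
```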
Nucleon is one of the particles that make up the
atomic
nucleus. Each atomic nucleus consists of one or more nucleons, and
each
atom in turn consists of a cluster of nucleons surrounded by one or more
electrons. There are two known kinds of nucleon: the neutron and the
proton. The mass number of a given atomic isotope is identical to its
number of nucleons. Thus the term nucleon number may be used in place of
the more common terms mass number or atomic mass number.
Neutrino is a fermion
(an elementary
particle with half-integer spin) that interacts only via
the weak subatomic force and gravity. The
mass of the neutrino is
much
smaller than that of the other known elementary particles.
Neutrinos typically
pass through normal matter
unimpeded and undetected. For each neutrino, there also exists a
corresponding antiparticle, called an
antineutrino, which also has half-integer spin and no electric charge.
They are distinguished from the neutrinos by having opposite signs of
lepton number and chirality. To conserve total lepton number, in nuclear
beta decay, electron neutrinos appear together with only positrons
(anti-electrons) or electron-antineutrinos, and electron antineutrinos
with electrons or electron neutrinos. Neutrinos are created by various
radioactive decays, including in beta
decay of atomic nuclei or hadrons, nuclear reactions such as those that
take place in the core of a star or artificially in nuclear reactors,
nuclear bombs or particle accelerators, during a supernova, in the
spin-down of a neutron star, or when accelerated particle beams or cosmic
rays strike atoms. The majority of neutrinos in the vicinity of the Earth
are from nuclear reactions in the Sun. In the vicinity of the Earth, about
65 billion (6.5×10^10) solar neutrinos per second pass through every square
centimeter perpendicular to the direction of the Sun.
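The flux figure quoted above makes it easy to estimate how many solar neutrinos cross any surface per second: multiply the flux by the area. The 1000 cm² cross-section below is a deliberately rough illustrative assumption, not a measurement:

```python
# Sketch: order-of-magnitude count of solar neutrinos crossing a surface,
# using the ~6.5e10 per cm^2 per second flux near Earth quoted above.

SOLAR_NEUTRINO_FLUX = 6.5e10   # neutrinos / cm^2 / s, facing the Sun

def neutrinos_per_second(area_cm2):
    return SOLAR_NEUTRINO_FLUX * area_cm2

# A rough ~1000 cm^2 cross-section (illustrative) already sees tens of
# trillions of neutrinos per second:
print(f"{neutrinos_per_second(1000):.1e}")  # 6.5e+13
```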
Neutrino Detector is a physics apparatus which is designed to study
neutrinos. Because neutrinos only weakly interact with other particles of
matter, neutrino detectors must be very large to detect a significant
number of neutrinos. Neutrino detectors are often built underground, to
isolate the detector from cosmic rays and other background radiation.
Electron neutrinos are produced in the Sun as a product of
nuclear fusion. Solar neutrinos
constitute by far the largest flux of neutrinos from natural sources
observed on Earth, as compared with e.g. atmospheric neutrinos or the
diffuse supernova neutrino background.
Neutrino Oscillation is a quantum mechanical phenomenon whereby a
neutrino created with a specific lepton family number ("lepton flavor":
electron, muon, or tau) can later be measured to have a different lepton
family number. The probability of measuring a particular flavor for a
neutrino varies between 3 known states, as it propagates through space.
Neutralino
(wiki).
IceCube Neutrino Observatory -
Wiki -
Antarctic Muon And Neutrino Detector Array (wiki)
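The flavor-change probability described under Neutrino Oscillation is commonly written, in the two-flavor approximation, as P = sin²(2θ)·sin²(1.27·Δm²·L/E) with Δm² in eV², L in km, and E in GeV. The sketch below uses illustrative atmospheric-sector parameter values, not fitted data:

```python
import math

def oscillation_probability(theta_rad, dm2_ev2, length_km, energy_gev):
    """Two-flavor appearance probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])."""
    return (math.sin(2 * theta_rad) ** 2
            * math.sin(1.27 * dm2_ev2 * length_km / energy_gev) ** 2)

# Illustrative numbers roughly in the atmospheric sector
# (theta ~ 45 degrees, dm2 ~ 2.5e-3 eV^2, 500 km baseline, 1 GeV):
p = oscillation_probability(math.pi / 4, 2.5e-3, 500.0, 1.0)
print(0.0 <= p <= 1.0)  # True: the result is always a probability
```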
Anti-Neutrinos are produced in
nuclear beta decay together with a beta particle, in which, e.g., a
neutron decays into a proton, electron, and antineutrino.
Differences in the behavior of Neutrinos and Antineutrinos. Neutrinos
are fundamental particles but do not interact with normal matter very
strongly, such that around 50 trillion neutrinos from the Sun pass through
your body every second.
Antimatter.
Geoneutrino is an electron antineutrino emitted in β−decay
of a radionuclide naturally occurring in the Earth. Neutrinos, the
lightest of the known subatomic particles, lack measurable electromagnetic
properties and interact only via the weak nuclear force when ignoring
gravity.
Beta Decay is a type of radioactive decay in which an atomic nucleus
emits a beta particle (fast energetic electron or positron), transforming
into an isobar of that nuclide. For example, beta decay of a neutron
transforms it into a proton by the emission of an electron accompanied by
an antineutrino; or, conversely a proton is converted into a neutron by
the emission of a positron with a neutrino in so-called positron emission.
Neither the beta particle nor its associated (anti-)neutrino exist within
the nucleus prior to beta decay, but are created in the decay process. By
this process, unstable atoms obtain a more stable ratio of protons to
neutrons. The probability of a nuclide decaying due to beta and other
forms of decay is determined by its nuclear binding energy. The binding
energies of all existing nuclides form what is called the nuclear band or
valley of stability. For either electron or positron emission to be
energetically possible, the energy release (see below) or Q value must be
positive.
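The Q-value condition can be checked with simple arithmetic on rest energies. For free-neutron beta decay, Q = (m_n − m_p − m_e)c² using the standard rest energies in MeV; the antineutrino's mass is negligible. The function name is this example's own:

```python
# Sketch: beta-minus decay (n -> p + e- + antineutrino) is energetically
# allowed only when the Q value is positive. Rest energies in MeV are
# standard values; nuclear (not atomic) masses are used here.

M_NEUTRON = 939.565
M_PROTON = 938.272
M_ELECTRON = 0.511

def beta_minus_q(parent_mev, daughter_mev):
    """Q = (m_parent - m_daughter - m_e) c^2; antineutrino mass neglected."""
    return parent_mev - daughter_mev - M_ELECTRON

q = beta_minus_q(M_NEUTRON, M_PROTON)
print(round(q, 3), q > 0)  # 0.782 True - free-neutron decay is allowed
```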
Fermion
is a particle that follows Fermi–Dirac statistics. These particles obey
the
Pauli exclusion principle. Fermions include all
quarks and leptons, as well as all composite particles made of an odd
number of these, such as all baryons and many atoms and nuclei. Fermions
differ from
bosons, which obey Bose–Einstein statistics.
Majorana Fermion is a fermion that is its own antiparticle.
Lepton is an elementary, half-integer spin (spin 1⁄2)
particle that does not undergo strong interactions. Two main classes of
leptons exist: charged leptons (also known as the electron-like leptons),
and neutral leptons (better known as neutrinos). Charged leptons can
combine with other particles to form various composite particles such as
atoms and positronium, while neutrinos rarely interact with anything, and
are consequently rarely observed. The best known of all leptons is the
electron.
Gluino
is the hypothetical supersymmetric partner of a gluon. Should they exist,
gluinos are expected by supersymmetry theorists to be pair produced in
particle accelerators such as the Large Hadron Collider.
Mutating quantum particles set in motion. In the world of fundamental
particles, you are either a fermion or a
boson. But
a new study shows that one can behave as the other as they move from one
place to another.
Quark
Gluon Plasma is a state of matter in quantum chromodynamics
(QCD) which exists at extremely high temperature and/or density. This
state is thought to consist of asymptotically free quarks and gluons,
which are several of the basic building blocks of matter. It is believed
that up to a few milliseconds after the Big Bang, known as the Quark
epoch, the Universe was in a quark–gluon plasma state. In June 2015, an
international team of physicists produced quark-gluon plasma at the Large
Hadron Collider by colliding protons with lead nuclei at high energy
inside the supercollider’s Compact Muon Solenoid detector. They also
discovered that this newly produced state of matter behaves like a fluid.
Gluons
are elementary particles that act as the exchange particles (or gauge
bosons) for the strong force between quarks, analogous to the exchange of
photons in the electromagnetic force between two charged particles. In lay
terms, they "glue" quarks together, forming protons and neutrons.
Muon
is an elementary particle
similar to the electron, with an electric charge
of −1 e and a spin of 1/2, but with a much greater mass. It is classified
as a lepton. As is the case with other leptons, the muon is not believed
to have any sub-structure—that is, it is not thought to be composed of any
simpler particles.
Dibaryons are a large family of hypothetical particles, each
particle consisting of six quarks or antiquarks of any flavours. Six
constituent quarks in any of several combinations could yield a colour
charge of zero; for example a hexaquark might contain either six quarks,
resembling two baryons bound together (a dibaryon), or three quarks and
three antiquarks. Once formed, dibaryons are predicted to be fairly stable
by the standards of particle physics. In 1977 Robert Jaffe proposed that a
possibly stable H dibaryon with the quark composition udsuds could
notionally result from the combination of two uds hyperons.
Phonon
is a collective excitation in a periodic, elastic arrangement of atoms or
molecules in condensed matter, specifically in solids and some liquids.
Often referred to as a
quasiparticle, it is
an excited state in the quantum mechanical quantization of the
modes of vibrations for elastic structures
of interacting particles. Phonons can be thought of as quantized
sound waves, similar to
photons as quantized
light waves.
Flavour refers to a species of an elementary particle. The
Standard Model counts six flavours of quarks and six flavours of leptons.
They are conventionally parameterized with flavour quantum numbers that
are assigned to all subatomic particles, including composite ones. For
hadrons, these quantum numbers depend on the numbers of constituent quarks
of each particular flavour.
Strangelet is a hypothetical particle consisting of a bound
state of roughly equal numbers of up, down, and strange quarks. An
equivalent description is that a strangelet is a small fragment of strange
matter, small enough to be considered a particle. The size of an object
composed of strange matter could, theoretically, range from a few
femtometers across (with the mass of a light nucleus) to arbitrarily
large. Once the size becomes macroscopic (on the order of metres across),
such an object is usually called a strange star. The term "strangelet"
originates with Edward Farhi and R. L. Jaffe. Strangelets have been
suggested as a dark matter candidate.
Photino
is a hypothetical subatomic particle, the fermion WIMP superpartner of the
photon predicted by supersymmetry. It is an example of a gaugino. Even
though no photino has ever been observed so far, it is expected to be the
lightest stable particle in the universe. It is proposed that photinos are
produced by sources of ultra-high-energy cosmic rays.
String
Theory is a theoretical framework in which the point-like
particles of particle physics are replaced by one-dimensional objects
called strings. It describes how these strings propagate through space and
interact with each other. On distance scales larger than the string scale,
a string looks just like an ordinary particle, with its mass, charge, and
other properties determined by the vibrational state of the string. In
string theory, one of the many vibrational states of the string
corresponds to the graviton, a quantum mechanical particle that carries
gravitational force. Thus string theory is a theory of quantum gravity.
Superstring theory is an attempt to explain all of the
particles and fundamental forces of nature in one theory by modelling them
as vibrations of tiny supersymmetric strings.
Axion is a hypothetical
elementary
particle postulated by the Peccei–Quinn
theory in 1977 to resolve the strong CP problem in quantum chromodynamics
(QCD). If axions exist and have low mass within a specific range, they are
of interest as a possible component of
cold dark matter.
M-theory
is a theory in physics that unifies all consistent versions of superstring
theory. The existence of such a theory was first conjectured by Edward
Witten at a string theory conference at the University of Southern
California in the spring of 1995. Witten's announcement initiated a flurry
of research activity known as the second superstring revolution.
Introduction to M-theory
presents an idea about the basic substance of the universe. So far no
experimental evidence exists showing that M-theory is a description of the
real world. Interest in this theory is mainly driven by mathematical
elegance.
Symmetry (math) -
Balance -
Quantum
Dimensions -
Shapes -
Geometry
Strong CP Problem is a puzzling question in particle physics: Why does
quantum chromodynamics (QCD) seem to preserve CP-symmetry? In particle
physics, CP stands for Charge+Parity or Charge-conjugation Parity
symmetry: the combination of charge conjugation symmetry (C) and parity
symmetry (P). According to the current mathematical formulation of quantum
chromodynamics, a violation of CP-symmetry in strong interactions could
occur. However, no violation of the CP-symmetry has ever been seen in any
experiment involving only the strong interaction. As there is no known
reason in QCD for it to necessarily be conserved, this is a "fine tuning"
problem known as the strong CP problem. The strong CP problem is sometimes
regarded as an unsolved problem in physics, and has been referred to as
"the most underrated puzzle in all of physics." There are several proposed
solutions to solve the strong CP problem. The most well-known is Peccei–Quinn
theory, involving new pseudoscalar particles called axions.
Charge
Parity is a multiplicative quantum number of some
particles that describes their
behavior under the symmetry operation of charge conjugation. Charge
conjugation changes the sign of all quantum charges (that is, additive
quantum numbers), including the electrical charge, baryon number and
lepton number, and the flavor charges strangeness, charm, bottomness,
topness and Isospin (I3). In contrast, it doesn't affect the mass, linear
momentum or spin of a particle.
C-Symmetry is a transformation that switches all particles with their
corresponding
antiparticles, and thus
changes the sign of all charges: not only electric charge but also the
charges relevant to other forces. In physics, C-symmetry means the
symmetry of physical laws under a charge-conjugation transformation.
Electromagnetism, gravity and the strong interaction all obey C-symmetry,
but weak interactions violate C-symmetry.
CPT Symmetry is a fundamental symmetry of physical laws
under the simultaneous transformations of charge conjugation (C), parity
transformation (P), and
time reversal (T). CPT is the only combination of
C, P, and T that is observed to be an exact symmetry of nature at the
fundamental level.
Spatial Intelligence
-
Holography
(virtual reality)
Cosmological Constant is the value of the energy density of
the vacuum of space.
Theory of Everything
is a hypothetical single, all-encompassing, coherent theoretical
framework of physics that fully explains and links together all physical
aspects of the universe. Finding a
ToE is one of the major unsolved
problems in physics.
Relativity Info-Graph (image)
Alternatives to General Relativity
are physical theories that attempt to describe the phenomena of
gravitation in competition to Einstein's theory of general relativity.
Electromagnetism -
Fields -
Antiparticle
Graviton is
a hypothetical elementary particle that mediates the
force of gravitation
in the framework of quantum field theory.
Interference (wave propagation) is a phenomenon in which two
waves superpose to form a resultant wave of greater, lower, or the same
amplitude. Interference usually refers to the interaction of waves that
are correlated or coherent with each other, either because they come from
the same source or because they have the same or nearly the same
frequency. Interference effects can be observed with all types of waves,
for example, light, radio, acoustic, surface water waves or matter waves.
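For two coherent waves of equal amplitude, the resultant amplitude follows directly from adding the two sinusoids: it is 2A·cos(Δφ/2), giving constructive interference when the waves are in phase and destructive interference when they are half a cycle apart. A minimal numeric sketch:

```python
import math

def resultant_amplitude(a, dphi):
    """Amplitude of the superposition of two equal-amplitude coherent waves
    with phase difference dphi: |2*A*cos(dphi/2)|."""
    return abs(2 * a * math.cos(dphi / 2))

print(resultant_amplitude(1.0, 0.0))      # in phase: constructive, amplitude 2
print(resultant_amplitude(1.0, math.pi))  # out of phase: destructive, amplitude ~0
```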
Coherence: two wave sources are perfectly coherent if they
have a constant phase difference and the same frequency. It is an ideal
property of waves that enables stationary (i.e. temporally and spatially
constant) interference. It contains several distinct concepts, which are
limiting cases that never quite occur in reality but allow an
understanding of the physics of waves, and has become a very important
concept in quantum physics. More generally, coherence describes all
properties of the correlation between physical quantities of a single
wave, or between several waves or wave packets.
Dirac Equation is a relativistic wave equation derived by
British physicist Paul Dirac in 1928. In its free form, or including
electromagnetic interactions, it describes all spin-1/2 massive particles
such as electrons and quarks for which parity is a symmetry. It is
consistent with both the principles of quantum mechanics and the theory of
special relativity, and was the first theory to account fully for special
relativity in the context of quantum mechanics. It was validated by
accounting for the fine details of the hydrogen spectrum in a completely
rigorous way.
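In modern compact notation, the free Dirac equation for a spin-1/2 particle of mass m can be written (one standard form, using the gamma matrices):

```latex
(i\hbar\,\gamma^{\mu}\partial_{\mu} - mc)\,\psi = 0
```

Here ψ is a four-component spinor, which is what allows the equation to describe both spin states and, famously, to predict antiparticles.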
Frequencies
-
Hz -
Energy -
Outline of Energy
Negative Energy
is a concept used in physics to explain the nature of certain fields,
including the gravitational field and a number of quantum field effects.
In more speculative theories, negative energy is involved in wormholes
which allow time travel and warp drives for faster-than-light space
travel.
Dark Energy -
Negative Mass.
Cosmic Rays are high-energy
radiation, mainly originating
outside the Solar System. Upon impact with the Earth's atmosphere, cosmic
rays can produce showers of secondary particles that sometimes reach the
surface. Composed primarily of high-energy protons and atomic nuclei, they
are of mysterious origin. Data from the Fermi space telescope (2013) have
been interpreted as evidence that a significant fraction of primary cosmic
rays originate from the supernovae explosions of stars. Active galactic
nuclei probably also produce cosmic rays.
Ultra-High-Energy Cosmic Ray is a cosmic ray particle with a kinetic energy greater than 1×10¹⁸ eV, far beyond both the rest mass and energies typical of other cosmic ray particles. An extreme-energy cosmic ray (EECR) is a UHECR with energy exceeding 5×10¹⁹ eV (about 8 joules),
the so-called Greisen–Zatsepin–Kuzmin limit (GZK limit). This limit should
be the maximum energy of cosmic ray particles that have traveled long
distances (about 160 million light years), since higher-energy ray
particles would have lost energy over that distance due to scattering from
photons in the cosmic microwave background. It follows that EECR could not
be survivors from the early universe but are cosmologically "young",
emitted somewhere in the Local Supercluster by some unknown physical
process. These particles are extremely rare; between 2004 and 2007, the
initial runs of the Pierre Auger Observatory detected 27 events with
estimated arrival energies above 5.7×10¹⁹ eV, i.e., about one such event every four weeks in the 3,000 km² area surveyed by the observatory. There
is evidence that these highest-energy cosmic rays might be iron nuclei,
rather than the protons that make up most cosmic rays.
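The "about 8 joules" figure for the GZK limit quoted above is just the electronvolt-to-joule conversion applied to 5×10¹⁹ eV; a one-line check:

```python
# One electronvolt in joules (exact SI value of the elementary charge):
EV = 1.602176634e-19  # J

gzk_limit_eV = 5e19          # the GZK limit quoted above
print(gzk_limit_eV * EV)     # ~8 J: a macroscopic amount of energy in one particle
```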
Electronvolt is a unit of energy equal to approximately 160 zeptojoules (10⁻²¹ joules, symbol zJ) or 1.6×10⁻¹⁹ joules (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the elementary charge (e, or 1.6021766208(98)×10⁻¹⁹ C). Therefore, one electronvolt is equal to 1.6021766208(98)×10⁻¹⁹ J. Historically, the
electronvolt was devised as a standard unit of measure through its
usefulness in electrostatic particle accelerator sciences because a
particle with charge q has an energy E = qV after passing through the
potential V; if q is quoted in integer units of the elementary charge and
the terminal bias in volts, one gets an energy in eV.
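The E = qV rule above is easy to sketch numerically. This uses the exact SI value of the elementary charge fixed in 2019 (slightly different in its trailing digits from the older CODATA value quoted in the definition):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact since the 2019 SI)

def energy_joules(charge_units, volts):
    """Energy E = qV gained by a charge of q = charge_units * e crossing
    a potential difference of `volts`."""
    return charge_units * E_CHARGE * volts

# A single electron across 1 V gains exactly 1 eV:
print(energy_joules(1, 1.0))              # 1.602176634e-19 J
# An alpha particle (q = 2e) across 1 MV gains 2 MeV:
print(energy_joules(2, 1e6) / E_CHARGE)   # 2,000,000 eV
```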
Steric Effects arise from the fact that each atom within a
molecule occupies a certain amount of space. If atoms are brought too
close together, there is an associated cost in energy due to overlapping
electron clouds (Pauli or Exchange interaction, or Born repulsion), and
this may affect the molecule's preferred shape (conformation) and
reactivity.
Planck Length is a unit of length, equal to 1.616229(38)×10⁻³⁵ metres. It is a base unit in the system of Planck
units, developed by physicist Max Planck. The Planck length can be defined
from three fundamental physical constants: the speed of light in a vacuum,
the Planck constant, and the gravitational constant.
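The definition from the three constants is l_P = √(ħG/c³); plugging in CODATA values reproduces the quoted figure:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
C    = 2.99792458e8     # speed of light in vacuum, m/s (exact)

# Planck length: l_P = sqrt(hbar * G / c^3)
planck_length = math.sqrt(HBAR * G / C**3)
print(planck_length)    # ~1.616e-35 m
```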
Nano Technology.
Molecules -
Space -
Singularity
ALPHA is an international collaboration based at CERN that works with trapped antihydrogen atoms, the antimatter counterpart of the simplest atom, hydrogen. By making precise comparisons of hydrogen and antihydrogen, the experiment hopes to study fundamental symmetries between matter and antimatter.
Particle Accelerator - Particle Smashing
Particle Accelerator is a machine that uses electromagnetic fields to
propel charged
particles to very high speeds and
energies, and to contain them in well-defined beams. Large accelerators
are used for basic research in particle physics. The largest accelerator
currently operating is the Large Hadron Collider (LHC) near Geneva,
Switzerland, operated by CERN. It is a collider accelerator, which can
accelerate two beams of protons to an energy of 6.5 TeV and cause them to
collide head-on, creating center-of-mass energies of 13 TeV. Other
powerful accelerators are SuperKEKB at KEK in Japan, RHIC at Brookhaven
National Laboratory in New York and, formerly, the Tevatron at Fermilab,
Batavia, Illinois. Accelerators are also used as synchrotron light sources
for the study of condensed matter physics. Smaller particle accelerators
are used in a wide variety of applications, including particle therapy for
oncological purposes, radioisotope production for medical diagnostics, ion
implanters for manufacture of semiconductors, and accelerator mass
spectrometers for measurements of rare isotopes such as radiocarbon. There
are currently more than 30,000 accelerators in operation around the world.
There are two basic classes of accelerators: electrostatic and
electrodynamic (or electromagnetic) accelerators. Electrostatic
accelerators use static electric fields to accelerate particles. The most
common types are the Cockcroft–Walton generator and the Van de Graaff
generator. A small-scale example of this class is the cathode ray tube in
an ordinary old television set. The achievable kinetic energy for
particles in these devices is determined by the accelerating voltage,
which is limited by electrical breakdown. Electrodynamic or
electromagnetic accelerators, on the other hand, use changing
electromagnetic fields (either magnetic induction or oscillating radio
frequency fields) to accelerate particles. Since in these types the
particles can pass through the same accelerating field multiple times, the
output energy is not limited by the strength of the accelerating field.
This class, which was first developed in the 1920s, is the basis for most
modern large-scale accelerators. Rolf Widerøe, Gustav Ising, Leó Szilárd,
Max Steenbeck, and Ernest Lawrence are considered pioneers of this field,
conceiving and building the first operational linear particle
accelerator, the betatron, and the cyclotron. Because the target of the
particle beams of early accelerators was usually the atoms of a piece of
matter, with the goal being to create collisions with their nuclei in
order to investigate nuclear structure, accelerators were commonly
referred to as atom smashers in the 20th century. The term persists
despite the fact that many modern accelerators create collisions between
two subatomic particles, rather than a particle and an atomic nucleus.
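The claim that electrostatic accelerators are limited by their voltage follows from E = qV: the particle's final energy is set entirely by the potential it falls through. A sketch, using an illustrative (assumed) 25 kV anode voltage typical of an old cathode ray tube:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E      = 9.1093837015e-31  # electron mass, kg
M_E_C2   = None              # (derived below)
C        = 2.99792458e8      # speed of light, m/s

volts = 25_000               # illustrative CRT anode voltage
energy = E_CHARGE * volts    # kinetic energy gained: E = qV, in joules

# Exact relativistic speed from E = (gamma - 1) * m * c^2:
gamma = 1 + energy / (M_E * C**2)
speed = C * math.sqrt(1 - 1 / gamma**2)
print(speed / C)             # ~0.30 c: even a TV tube is mildly relativistic
```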
Cosmic Rays -
Trillions of Frames Per Second
Large Hadron Collider is the world's largest and most powerful
particle collider, most complex experimental facility ever built, and the
largest single machine in the world. It was built by the European
Organization for Nuclear Research (CERN) between 1998 and 2008 in
collaboration with over 10,000 scientists and engineers from over 100
countries, as well as hundreds of universities and laboratories. It lies
in a tunnel 27 kilometres (17 mi) in circumference, as deep as 175 metres
(574 ft) beneath the France–Switzerland border near Geneva, Switzerland.
Its first research run took place from 30 March 2010 to 13 February 2013
at an initial energy of 3.5 teraelectronvolts (TeV) per beam (7 TeV
total), almost 4 times more than the previous world record for a collider,
rising to 4 TeV per beam (8 TeV total) from 2012. On 13 February 2013 the
LHC's first run officially ended, and it was shut down for planned
upgrades. 'Test' collisions restarted in the upgraded collider on 5 April
2015, reaching 6.5 TeV per beam on 20 May 2015 (13 TeV total, the current
world record). Its second research run commenced on schedule, on 3 June
2015. The aim of the LHC is to allow physicists to test the predictions of
different theories of particle physics, including measuring the properties
of the Higgs boson and searching for the large family of new particles
predicted by supersymmetric theories, as well as other unsolved questions
of physics. The collider has four crossing points, around which are
positioned seven detectors, each designed for certain kinds of research.
The LHC primarily collides proton beams, but it can also use beams of lead
nuclei. Proton–lead collisions were performed for short periods in 2013
and 2016, and lead–lead collisions took place in 2010, 2011, 2013, and
2015. The LHC's computing grid is a world record holder. Data from
collisions was produced at an unprecedented rate for the time of first
collisions, tens of petabytes per year, a major challenge at the time, to
be analysed by a grid-based computer network infrastructure connecting 140
computing centres in 35 countries – by 2012 the Worldwide LHC Computing
Grid was also the world's largest distributed computing grid, comprising
over 170 computing facilities in a worldwide network across 36 countries.
(A Sub Atomic Smash Up Derby).
Collider is a
type of
particle accelerator involving directed beams of
particles. Colliders may either be ring accelerators or linear
accelerators, and may collide a single beam of particles against a
stationary target or two beams head-on. Colliders are used as a research
tool in particle physics by accelerating particles to very high kinetic
energy and letting them impact other particles. Analysis of the byproducts
of these collisions gives scientists good evidence of the structure of the
subatomic world and the laws of nature governing it. These may become
apparent only at high energies and for tiny periods of time, and therefore
may be hard or impossible to study in other ways.
Ray
Tubes.
Step
inside the Large Hadron Collider (360 video) - BBC News
(youtube)
New type of entanglement lets scientists 'see' inside nuclei. Nuclear
physicists have found a new way to use the
Relativistic Heavy Ion Collider or RHIC to see the shape and details
inside atomic nuclei. The method relies on particles of light that
surround gold ions as they speed around the collider and a new type of
quantum entanglement that's never been seen before. Through a series of
quantum fluctuations, the particles of light (a.k.a. photons) interact
with gluons -- gluelike particles that hold quarks together within the
protons and neutrons of nuclei. Those interactions produce an intermediate
particle that quickly decays into two differently charged "pions" (π). By
measuring the velocity and angles at which these π+ and π- particles
strike RHIC's STAR detector, the scientists can backtrack to get crucial
information about the photon -- and use that to map out the arrangement of
gluons within the nucleus with higher precision than ever before. This
technique is similar to the way doctors use positron emission tomography
(PET scans) to see what's happening inside the brain and other body parts.
Synchrotron is a particular type of cyclic particle accelerator,
descended from the cyclotron, in which the accelerating particle beam
travels around a fixed closed-loop path. The magnetic field which bends
the particle beam into its closed path increases with time during the
accelerating process, being synchronized to the increasing kinetic energy
of the particles. The synchrotron is one of the first accelerator concepts
to enable the construction of large-scale facilities, since bending, beam
focusing and acceleration can be separated into different components.
Cyclotron is a
type of particle accelerator that accelerates charged particles outwards
from the center of a flat cylindrical vacuum chamber along a spiral path.
The particles are held to a spiral trajectory by a static magnetic field
and accelerated by a rapidly varying electric field.
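What makes the cyclotron work is that (non-relativistically) the orbit frequency f = qB/(2πm) does not depend on the particle's speed, so a fixed-frequency accelerating field stays in step as the spiral grows. A quick estimate for protons in an illustrative 1.5 T field:

```python
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
M_P      = 1.67262192369e-27  # proton mass, kg

def cyclotron_frequency(q, m, b_field):
    """Orbit frequency f = qB / (2*pi*m): speed-independent
    (non-relativistically), which is what keeps a fixed-frequency
    electric field synchronized with the spiraling particle."""
    return q * b_field / (2 * math.pi * m)

freq = cyclotron_frequency(E_CHARGE, M_P, 1.5)  # protons, 1.5 T (assumed value)
print(freq / 1e6)                               # ~22.9 MHz
```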
Boson is
a
particle that follows
Bose–Einstein statistics. Bosons make up one of
the two classes of
particles, the other being
fermions. The name boson was coined by Paul Dirac to commemorate the
contribution of the Indian physicist Satyendra Nath Bose in developing,
with Einstein, Bose–Einstein statistics—which theorizes the
characteristics of elementary particles. Examples of bosons include
fundamental particles such as photons, gluons, and W and Z bosons (the
four force-carrying gauge bosons of the Standard Model), the recently
discovered Higgs boson, and the hypothetical graviton of quantum gravity;
composite particles (e.g. mesons and stable nuclei of even mass number
such as deuterium (with one proton and one neutron, mass number = 2),
helium-4, or lead-208); and some quasiparticles (e.g. Cooper pairs,
plasmons, and phonons).
Higgs Boson
is an elementary particle in the Standard Model of particle
physics. It is the quantum excitation of the Higgs field, a fundamental
field of crucial importance to particle physics theory first suspected to
exist in the 1960s. Unlike other known fields such as the electromagnetic
field, it has a non-zero constant value in vacuum. The question of the
Higgs field's existence has been the last unverified part of the Standard
Model of particle physics and, according to some, "the central problem in
particle physics."
Bose–Einstein Condensate (BEC) is a state of matter of a dilute gas of
bosons cooled to temperatures very close to absolute zero. Under such
conditions, a large fraction of bosons occupy the lowest quantum state, at
which point microscopic quantum phenomena, particularly wavefunction
interference, become apparent. A BEC is formed by cooling a gas of
extremely low density, about one-hundred-thousandth the density of
normal air, to ultra-low temperatures. This state was first predicted,
generally, in 1924–25 by Satyendra Nath Bose and Albert Einstein.
Tevatron was a circular
particle accelerator (now inactive,
since 2011) in the United States, at the Fermi National Accelerator
Laboratory (also known as Fermilab), just east of Batavia, Illinois. It held the title of the second highest-energy particle collider in the world, after the Large Hadron Collider (LHC) of the European Organization for Nuclear Research (CERN) near Geneva, Switzerland. The Tevatron was a
synchrotron that accelerated protons and antiprotons in a 6.86 km, or 4.26
mi, ring to energies of up to 1 TeV, hence its name. The Tevatron was
completed in 1983 at a cost of $120 million and significant upgrade
investments were made in 1983–2011.
Fusion -
Biology -
Telescopes -
Science Kits -
Science Tools -
Computers -
Electricity
Synchrotron designs (described above) are the basis of the most powerful modern particle accelerators. The largest synchrotron-type accelerator is the 27-kilometre-circumference (17 mi) Large Hadron Collider (LHC) near Geneva, Switzerland, built in 2008 by the European Organization for Nuclear Research (CERN).
Particles of light may create fluid flow, data-theory comparison
suggests. A new computational analysis supports the idea that photons
(a.k.a. particles of light) colliding with heavy ions can create a fluid
of 'strongly interacting' particles. In a new paper, they show that
calculations describing such a system match up with data collected by the
ATLAS detector at Europe's Large Hadron Collider (LHC).
Quantum Mechanics - Quantum Physics
Quantum Mechanics is a fundamental branch of physics concerned with
processes involving, for example,
atoms and
photons.
Systems such as these
which obey quantum mechanics can be in a quantum
superposition of
different states, unlike in classical physics. Quantum mechanics is also
known as quantum physics or quantum theory or
quantum field theory.
List of Equations in Quantum Mechanics (wiki) -
Action Physics.
Quantum is the
minimum
amount of any physical entity involved in an
interaction. The fundamental
notion that a physical property may be "quantized" is referred to as "the
hypothesis of
quantization". This means that the magnitude of the physical property
can take on only certain discrete values. More loosely, a quantum is a discrete amount of something, by analogy with the quantities of quantum theory.
Particles -
Nano Technologies -
Quantum Computing -
Hadron Collider
Quanta (singular: quantum) are the smallest discrete quantities of some physical property that a system can possess (according to quantum theory).
Planck Constant is a fundamental physical constant denoted h, and is
of fundamental importance in quantum mechanics. A photon's energy is equal
to its frequency multiplied by the Planck constant. Due to mass–energy
equivalence, the Planck constant also relates mass to frequency. In
metrology it is used, together with other constants, to define the
kilogram, an SI unit. The SI units are defined in such a way that, when
the Planck constant is expressed in SI units, it has the exact value h =
6.62607015×10⁻³⁴ J⋅Hz⁻¹.
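The relation "a photon's energy is its frequency multiplied by the Planck constant" is just E = hf; for example, green light at roughly 5.45×10¹⁴ Hz (≈550 nm) carries about 2.25 eV per photon:

```python
H = 6.62607015e-34   # Planck constant, J*s (exact in the 2019 SI)

def photon_energy(frequency_hz):
    """E = h*f: energy of a single photon of the given frequency, in joules."""
    return H * frequency_hz

e_joules = photon_energy(5.45e14)    # green light, ~550 nm
print(e_joules)                      # ~3.6e-19 J
print(e_joules / 1.602176634e-19)    # ~2.25 eV
```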
Up Quark is the
lightest of all quarks, a type of elementary particle, and a major
constituent of matter. It, along with the down quark, forms the neutrons
(one up quark, two down quarks) and protons (two up quarks, one down
quark) of atomic nuclei.
Down Quark is the
second-lightest of all quarks, a type of elementary particle, and a major
constituent of matter. Together with the up quark, it forms the neutrons
(one up quark, two down quarks) and protons (two up quarks, one down
quark) of atomic nuclei.
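The quark contents above account for the familiar nucleon charges: up quarks carry +2/3 e and down quarks −1/3 e, so uud sums to +1 and udd to 0. Exact fractions make this a one-liner:

```python
from fractions import Fraction

UP   = Fraction(2, 3)    # electric charge of the up quark, in units of e
DOWN = Fraction(-1, 3)   # electric charge of the down quark, in units of e

proton  = 2 * UP + DOWN  # uud: two up quarks, one down quark
neutron = UP + 2 * DOWN  # udd: one up quark, two down quarks
print(proton, neutron)   # 1 0
```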
Quantum
Superposition states that, much like
waves in
classical physics, any two (or more)
quantum states
can be added together ("superposed") and the result will be another valid
quantum state; and conversely, that every quantum state can be represented
as a sum of two or more other distinct states. Mathematically, it refers
to a property of solutions to the Schrödinger equation; since the
Schrödinger equation is linear, any linear
combination of solutions will also be a solution. An example of a
physically observable manifestation of the wave nature of quantum systems
is the interference peaks from an electron beam in a
double-slit experiment. The pattern is very similar to the one
obtained by diffraction of classical waves. Another example is a quantum
logical qubit state, as used in
quantum
information processing, which is a quantum superposition of the "basis
states".
Quantum Superposition is a fundamental principle of quantum mechanics.
Superposition Principle states that, for all linear systems, the net
response at a given place and time caused by two or more stimuli is the
sum of the responses that would have been caused by each stimulus
individually. So that if input A produces response X and input B produces
response Y then input (A + B) produces response (X + Y).
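The "response(A + B) = response(X + Y)" statement can be checked directly on any linear system; here a simple moving-average filter stands in as a toy linear system:

```python
# For any linear system, response(A + B) == response(A) + response(B).
# Toy linear system: a two-point moving-average filter.
def response(signal):
    return [(signal[i] + signal[i + 1]) / 2 for i in range(len(signal) - 1)]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 2.5, 0.0]
summed = [x + y for x, y in zip(a, b)]

lhs = response(summed)                                   # respond to A + B
rhs = [x + y for x, y in zip(response(a), response(b))]  # add the responses
print(lhs == rhs)   # True: superposition holds
```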
Quantum Computers.
Quantum Zeno Effect is a situation in which an unstable
particle, if observed continuously, will never decay. One can "freeze" the
evolution of the system
by measuring it frequently enough in its known
initial state.
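The "freezing" can be seen in an idealized toy model (an assumption for illustration, not a full quantum simulation): a two-level state that would fully decay after a rotation of π/2 has survival probability cos²(θ/n) per interval when measured n times, and the product cos²(θ/n)ⁿ approaches 1 as measurements become more frequent:

```python
import math

def survival(theta, n):
    """Survival probability of a two-level state rotated by `theta` in total,
    measured (and thereby reset) n times along the way: cos(theta/n)^(2n)."""
    return math.cos(theta / n) ** (2 * n)

theta = math.pi / 2          # unwatched (n = 1), the state decays completely
for n in (1, 10, 100, 1000):
    print(n, survival(theta, n))   # survival climbs toward 1 as n grows
```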
Quantum Entanglement is a physical phenomenon that occurs
when pairs or groups of particles are generated or interact in ways such
that the quantum state of each particle cannot be described independently
of the others, even when the particles are separated by a large
distance—instead, a quantum state must be described for the system as a
whole.
Switching 'spin' on and off (and up and down) in quantum materials at
room temperature. Researchers have found a way to control the interaction
of light and quantum 'spin' in organic semiconductors, that works even at
room temperature.
Quantum Mechanics of Time Travel -
Time is Relative
There are six types or flavors of quarks: up, down, strange, charm, bottom, and top. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state.
Quantum
Entanglement -
Quantum Dots -
Tunneling
Quantum breakthrough when light makes materials magnetic. The
potential of quantum technology is huge but is today largely limited to
the extremely cold environments of laboratories. Now, researchers have
succeeded in demonstrating for the very first time how
laser light can induce quantum behavior
at room temperature -- and make non-magnetic materials magnetic. The
breakthrough is expected to pave the way for faster and more
energy-efficient computers, information transfer and data storage.
Pauli
Exclusion Principle is the quantum mechanical principle which states
that two or more identical
fermions (particles with half-integer spin)
cannot occupy the same quantum state within a quantum system
simultaneously. In the case of
electrons in atoms,
it can be stated as follows: it is impossible for two electrons of a
poly-electron atom to have the same values of the four quantum
numbers, e.g. (n=1, ℓ=0, mₗ=0, mₛ=−½) and (n=1, ℓ=0, mₗ=0, mₛ=+½).
Hellmann-Feynman Theorem in quantum mechanics, relates the derivative
of the total energy with respect to a parameter, to the expectation value
of the derivative of the Hamiltonian with respect to that same parameter.
According to the theorem, once the spatial distribution of the electrons
has been determined by solving the
Schrödinger equation,
all the forces in the system can be calculated using classical
electrostatics.
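Stated as a formula, the theorem says that for a Hamiltonian depending on a parameter λ, with normalized eigenstate ψ_λ and energy E_λ:

```latex
\frac{\mathrm{d}E_{\lambda}}{\mathrm{d}\lambda}
  = \left\langle \psi_{\lambda} \left| \frac{\mathrm{d}\hat{H}_{\lambda}}{\mathrm{d}\lambda} \right| \psi_{\lambda} \right\rangle
```

Taking λ to be a nuclear coordinate is what reduces the force calculation to classical electrostatics over the electron density.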
Analogous is similar or
equivalent in some respects though
otherwise dissimilar. Corresponding in function but not in evolutionary
origin.
Involving is to
connect closely and
often incriminatingly. Engage as a participant. Contain as a part. Have as
a necessary feature. Make complex or intricate or complicated.
Quantum Error Correction is used in quantum computing to protect
quantum information from errors due to decoherence and other quantum
noise. Quantum error correction is essential if one is to achieve
fault-tolerant quantum computation that can deal not only with noise on
stored quantum information, but also with faulty quantum gates, faulty
quantum preparation, and faulty measurements.
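The redundancy-plus-correction idea behind QEC can be caricatured classically (a deliberate simplification: real QEC measures error syndromes without reading the encoded state). The three-bit repetition code below mirrors the quantum bit-flip code's majority vote, turning an error rate p into roughly 3p²:

```python
import random

def encode(bit):
    """Copy one logical bit into three physical bits."""
    return [bit, bit, bit]

def noisy_channel(bits, flip_prob, rng):
    """Flip each bit independently with probability flip_prob."""
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(bits):
    """Majority vote corrects any single bit-flip."""
    return 1 if sum(bits) >= 2 else 0

rng = random.Random(0)
trials, p = 10_000, 0.1
raw_errors = sum(noisy_channel([0], p, rng)[0] for _ in range(trials))
coded_errors = sum(decode(noisy_channel(encode(0), p, rng)) for _ in range(trials))
print(raw_errors / trials, coded_errors / trials)  # coded rate ~3p^2, well below p
```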
Spontaneous Quantum Error Correction Demonstrated. New research
tackles a central challenge of powerful quantum computing. Physicists take
a step toward building a fault-tolerant quantum computer. They have
realized a novel type of QEC where the quantum errors are spontaneously
corrected.
Interpretations of quantum mechanics is a set of statements which
attempt to explain how quantum mechanics informs our understanding of
nature.
Quantum Theory
is a physical theory that certain properties occur only in discrete
amounts. Constituting a separate entity or part.
Easy Explanation of
Quantum Theory - Documentary (youtube).
Bell's Theorem proves that quantum physics is incompatible with local
hidden variable theories. It was introduced by physicist John Stewart Bell
in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox", referring
to a 1935 thought experiment that Albert Einstein, Boris Podolsky and
Nathan Rosen used to argue that quantum physics is an "incomplete" theory.
By 1935, it was already recognized that the predictions of quantum physics
are probabilistic. Einstein, Podolsky and Rosen presented a scenario that,
in their view, indicated that quantum particles, like electrons and
photons, must carry physical properties or attributes not included in
quantum theory, and the uncertainties in quantum theory's predictions are
due to ignorance of these properties, later termed "hidden variables".
Their scenario involves a pair of widely separated physical objects,
prepared in such a way that the quantum state of the pair is
entangled.
Renormalization is a collection of techniques in
quantum
field theory, the statistical mechanics of fields, and the theory of
self-similar
geometric structures, that
are used to treat infinities arising in calculated quantities by altering
values of quantities to compensate for effects of their self-interactions.
Quantum Levitation (youtube) -
Quantum Gravity
Quantum Tunneling refers to the quantum mechanical
phenomenon where a
particle tunnels through a barrier that it classically
could not surmount.
Fusion.
Proton Tunneling
is a type of quantum tunneling involving the instantaneous disappearance
of a
proton in one site and the appearance of the same proton at an
adjacent site separated by a potential barrier. The two available sites
are bounded by a double well potential of which its shape, width and
height are determined by a set of boundary conditions. According to the
WKB approximation, the probability for a particle to tunnel decreases exponentially with the width of the potential barrier and with the square root of the particle's mass.
Electron Tunneling is well-known. A
proton is
about
2000 times more massive than an
electron, so it has a much lower
probability of tunneling; nevertheless, proton tunneling still occurs
especially at low temperatures and high pressures where the width of the
potential barrier is decreased.
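The electron-versus-proton contrast can be made quantitative with the crude WKB estimate for a rectangular barrier, P ≈ exp(−2w√(2mV)/ħ). The 1 eV, 0.1 nm barrier below is an illustrative assumption chosen only to show the mass dependence:

```python
import math

HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_E  = 9.1093837015e-31   # electron mass, kg
M_P  = 1.67262192369e-27  # proton mass, kg

def wkb_tunneling(mass, barrier_height_j, width_m):
    """Crude WKB estimate for a rectangular barrier:
    P ~ exp(-2 * w * sqrt(2*m*V) / hbar)."""
    return math.exp(-2 * width_m * math.sqrt(2 * mass * barrier_height_j) / HBAR)

# Illustrative barrier: 1 eV high, 0.1 nm wide.
V = 1.602176634e-19   # 1 eV in joules
W = 1e-10             # 0.1 nm in metres
p_electron = wkb_tunneling(M_E, V, W)
p_proton   = wkb_tunneling(M_P, V, W)
print(p_electron)     # appreciable (~0.36)
print(p_proton)       # vanishingly small (~1e-19)
```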
Observation Flaws.
Grotthuss
Mechanism or
Proton Jumping.
Quantum Pumping in molecular junctions. Researchers have developed a
new theoretical modeling technique that could potentially be used in the
development of switches or amplifiers in molecular electronics.
Quantum Decoherence is the loss of
quantum coherence.
Quantum Biology refers to applications of quantum mechanics
and theoretical chemistry to biological objects and problems.
Quantum Electrodynamics is the relativistic quantum field
theory of electrodynamics. In essence, it describes how light and matter
interact and is the first theory where full agreement between quantum
mechanics and special relativity is achieved. QED mathematically describes
all phenomena involving electrically charged particles interacting by
means of exchange of photons and represents the quantum counterpart of
classical electromagnetism giving a complete account of matter and light
interaction.
Quantum Chromodynamics is the theory of the
strong interaction between
quarks and
gluons, the fundamental particles that make up composite
hadrons such as the proton, neutron and pion.
QCD is a type of quantum
field theory called a non-abelian gauge theory, with symmetry group SU(3).
The QCD analog of electric charge is a property called
color. Gluons are the force carrier
of the theory, like photons are for the electromagnetic force in quantum
electrodynamics. The theory is an important part of the Standard Model of
particle physics. A large body of experimental evidence for QCD has been gathered over the years.
QCD: The Strongest
Force in the Universe Visualized: Quantum Chromodynamics (youtube).
Color Charge
is a property of quarks and gluons that is related to the particles'
strong interactions in the theory of quantum chromodynamics.
Evidence for a New Property of Quantum Matter Revealed. Electrical
dipole activity detected in a quantum material unlike any other tested.
The material, first synthesized 20 years ago, is called
k-(BEDT-TTF)2Hg(SCN)2Br. It is derived from organic compounds, but behaves like a metal.
Layered Double Hydroxides are a class of ionic solids characterized by
a layered structure with the generic layer sequence [AcBZAcB]n, where c
represents layers of metal
cations, A and B are layers of hydroxide (HO−) anions, and Z are
layers of other anions and neutral molecules (such as water). Lateral
offsets between the layers may result in longer repeating periods. The
intercalated anions (Z) are weakly bound, often
exchangeable; their
intercalation properties have scientific and commercial interest. LDHs
occur in nature as minerals, as byproducts of metabolism of certain
bacteria, and also unintentionally in man-made contexts, such as the
products of corrosion of metal objects.
Statistical Mechanics describes how macroscopic observations (such as
temperature and pressure) are related to microscopic parameters that
fluctuate around an average. It connects
thermodynamic quantities (such as heat capacity) to microscopic
behavior, whereas, in classical thermodynamics, the only available option
would be to measure and tabulate such quantities for various materials.
Statistical mechanics is necessary for the fundamental study of any
physical system that has many degrees of freedom. The approach is based on
statistical methods, probability theory and the microscopic physical laws.
It can be used to explain the thermodynamic behaviour of large systems.
This branch of statistical mechanics, which treats and extends classical
thermodynamics, is known as statistical thermodynamics or equilibrium
statistical mechanics. Statistical mechanics can also be used to study
systems that are out of equilibrium. An important sub-branch known as
non-equilibrium statistical mechanics (sometimes called statistical
dynamics) deals with the issue of microscopically modelling the speed of
irreversible processes that are driven by imbalances. Examples of such
processes include chemical reactions or flows of particles and heat. The
fluctuation–dissipation theorem is the basic knowledge obtained from
applying non-equilibrium statistical mechanics to study the simplest
non-equilibrium situation of a steady state current flow in a system of
many particles.
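A minimal illustration of connecting a microscopic description to a macroscopic average: the mean energy of a two-level system follows directly from the Boltzmann distribution. The level spacing below is an arbitrary illustrative value:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_energy(delta_j, temp_k):
    """Average energy per particle of a two-level system with levels
    0 and delta, weighted by the Boltzmann distribution:
    <E> = delta / (exp(delta / kT) + 1)."""
    return delta_j / (math.exp(delta_j / (K_B * temp_k)) + 1.0)

DELTA = 1e-21  # level spacing in joules (illustrative)

# <E> rises with temperature and saturates at DELTA / 2,
# when both levels become equally populated
energies = [mean_energy(DELTA, t) for t in (50, 300, 3000, 3e6)]
```

Differentiating this average with respect to temperature would give the heat capacity, the kind of thermodynamic quantity the text says statistical mechanics derives rather than merely tabulates.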
Relativistic Mechanics refers to
mechanics compatible with
special relativity (SR) and general relativity (GR). It provides a
non-quantum mechanical description of a system of particles, or of a
fluid, in cases where the velocities of moving objects are comparable to
the speed of light c. As a result, classical mechanics is extended
correctly to particles traveling at high velocities and energies, and
provides a consistent inclusion of
electromagnetism with the mechanics of
particles.
Classical
Mechanics is a physical theory describing the
motion of
macroscopic objects, from projectiles to parts of
machinery, and
astronomical objects,
such as
spacecraft, planets, stars
and galaxies. For objects governed by classical mechanics, if the present
state is known, it is possible to predict how it will move in the future
(determinism) and how it has moved in the past (reversibility).
Classical Mechanics is one of the two major sub-fields of
mechanics, along with
quantum mechanics. Classical mechanics is concerned
with the set of physical laws describing the
motion of bodies under the
influence of a system of forces. The study of the motion of bodies is an
ancient one, making classical mechanics one of the oldest and largest
subjects in science, engineering and technology. It is also widely known
as Newtonian mechanics.
Mechanics is the
area of physics concerned with the
motions of physical
objects. Forces applied to objects result in displacements, or changes
of an object's position relative to its environment.
Hamiltonian in quantum mechanics is the operator corresponding to
the total energy of a
system, including both
kinetic energy
and
potential energy.
Its spectrum, the system's energy spectrum or its set of energy
eigenvalues, is the set of possible outcomes obtainable from a
measurement of the system's total energy. Due to its close relation to the
energy spectrum and time-evolution of a system, it is of fundamental
importance in most formulations of quantum theory.
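As a sketch of "the spectrum is the set of possible measurement outcomes," the energy spectrum of a simple two-level system can be found by diagonalizing its Hamiltonian matrix; the coefficients a and b below are arbitrary illustrative values:

```python
import numpy as np

# Pauli matrices
SX = np.array([[0.0, 1.0], [1.0, 0.0]])
SZ = np.array([[1.0, 0.0], [0.0, -1.0]])

# hypothetical two-level Hamiltonian H = a*SZ + b*SX (arbitrary energy units)
a, b = 1.0, 0.5
H = a * SZ + b * SX

# the eigenvalues are the possible outcomes of an energy measurement
energies = np.linalg.eigvalsh(H)   # returned in ascending order
```

For this H the two eigenvalues are ±√(a² + b²), so any single energy measurement yields one of exactly those two numbers.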
Hamiltonian Mechanics is a mathematically sophisticated formulation of
classical mechanics.
Correspondence Principle states that the behavior of systems
described by the theory of quantum mechanics (or by the old quantum
theory) reproduces classical physics in the limit of large quantum
numbers. In other words, it says that for large orbits and for large
energies, quantum calculations must agree with classical calculations.
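This convergence can be checked numerically in the Bohr model of hydrogen: for large n, the photon frequency of the n → n−1 transition approaches the classical orbital frequency 2Rc/n³, a standard textbook result sketched here:

```python
RYD = 1.0973731568e7   # Rydberg constant, 1/m
C = 2.99792458e8       # speed of light, m/s

def photon_freq(n):
    """Bohr-model frequency of the photon emitted in the n -> n-1 transition."""
    return RYD * C * (1.0 / (n - 1) ** 2 - 1.0 / n ** 2)

def orbit_freq(n):
    """Classical orbital frequency of the electron in the nth Bohr orbit."""
    return 2.0 * RYD * C / n ** 3

# for large quantum numbers the two frequencies converge (ratio -> 1)
ratio = photon_freq(100) / orbit_freq(100)
```

At n = 100 the two frequencies already agree to within about two percent, and the agreement improves as n grows, which is exactly what the correspondence principle demands.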
Copenhagen Interpretation
Quantum Dots are tiny semiconductor particles a few nanometres in size,
having optical and electronic properties that differ from larger particles
due to quantum mechanics. They are a central topic in
nanotechnology. When the quantum
dots are illuminated by UV light, an electron in the quantum dot can be
excited to a state of higher energy. In the case of a semiconducting
quantum dot, this process corresponds to the transition of an electron
from the valence band to the conduction band. The excited electron can
drop back into the valence band, releasing its energy by the emission of
light (photoluminescence). The color of that light depends
on the energy difference between the conduction band and the valence band.
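A crude way to see why smaller dots emit bluer light is the particle-in-a-box model, where the confinement energy scales as 1/L². The dot sizes below are illustrative, and a real quantum dot calculation (e.g. the Brus equation) involves additional terms:

```python
H_PLANCK = 6.62607015e-34    # Planck constant, J*s
M_E = 9.1093837015e-31       # electron mass, kg
EV = 1.602176634e-19         # joules per electronvolt

def confinement_energy_ev(width_m, mass_kg=M_E):
    """Ground-state energy of a particle in a 1D infinite well,
    E1 = h^2 / (8 m L^2): a crude stand-in for quantum confinement."""
    return H_PLANCK ** 2 / (8.0 * mass_kg * width_m ** 2) / EV

# halving the dot size quadruples the confinement energy (bluer emission)
e_small = confinement_energy_ev(3e-9)   # a 3 nm "dot"
e_large = confinement_energy_ev(6e-9)   # a 6 nm "dot"
```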
Modified Quantum Dots capture more energy from light and lose less to heat.
Los Alamos National Laboratory scientists have synthesized
magnetically-doped quantum dots that capture the kinetic energy of
electrons created by ultraviolet light before it's wasted as heat.
Office of Science.
Grid of quantum islands could reveal secrets for powerful technologies.
Researchers have created grids of tiny clumps of atoms known as quantum
dots and studied what happens when electrons dive into these archipelagos
of atomic islands. Measuring the behavior of electrons in these relatively
simple setups promises deep insights into how electrons behave in complex
real-world materials and could help researchers engineer devices that make
possible powerful quantum computers and other innovative technologies.
New quantum computing architecture could be used to connect large-scale
devices. Researchers have demonstrated directional photon emission,
the first step toward extensible quantum interconnects. Researchers have
demonstrated an architecture that can enable high fidelity and scalable
communication between superconducting quantum processors. Their technique
can generate and route photons, which carry quantum information, in a
user-specified direction. This method could be used to develop a
large-scale network of quantum processors that could efficiently
communicate with one another.
Artificial Atoms create stable qubits for quantum computing. Quantum
engineers from
UNSW Sydney have created artificial atoms in silicon chips that offer
improved stability for quantum computing. UNSW quantum computing
researchers describe how they created artificial atoms in a silicon
'quantum dot', a tiny space in a quantum circuit where electrons are used
as qubits (or quantum bits), the basic units of quantum information.
Scientia Professor Andrew Dzurak explains that unlike a real atom, an
artificial atom has no nucleus, but it still has shells of electrons
whizzing around the centre of the device, rather than around the atom's
nucleus.
Artificial Relativistic Molecules.
The spin state story: Observation of the quantum spin liquid state in
novel material. New insight into the spin behavior in an exotic state
of matter puts us closer to next-generation spintronic devices. The
quantum spin liquid (QSL) state is an exotic state of matter where the
spin of electrons, which generally exhibits order at low temperatures,
remains disordered. Now, scientists have developed a new material where a
two-dimensional QSL state can be experimentally observed, advancing our
knowledge of spin behavior, and getting us closer to next-generation
'spintronic' devices.
Physicists bring human-scale object to near standstill, reaching a quantum
state. The results open possibilities for studying gravity's effects
on relatively large objects in quantum states. To the human eye, most
stationary objects appear to be just that -- still, and completely at
rest. Yet if we were handed a quantum lens, allowing us to see objects at
the scale of individual atoms, what was an apple sitting idly on our desk
would appear as a teeming collection of vibrating particles, very much in
motion. In the last few decades, physicists have found ways to super-cool
objects so that their atoms are at a near standstill, wrestling small
objects such as clouds of millions of atoms, or nanogram-scale objects,
into such pure quantum states. Now scientists have cooled a large,
human-scale object to close to its motional ground state. The object isn't
tangible in the sense of being situated at one location, but is the
combined motion of four separate objects. The 'object' that the
researchers cooled has an estimated mass of about 10 kilograms, and
comprises nearly 1 octillion atoms.
Quantum algorithms save time in the calculation of electron dynamics.
Quantum computers promise significantly shorter computing times for
complex problems. But there are still only a few quantum computers
worldwide with a limited number of so-called qubits. However, quantum
computer algorithms can already run on conventional servers that simulate
a quantum computer. A team has succeeded in calculating the electron
orbitals and their dynamic development using an example of a small
molecule after a laser pulse excitation. In principle, the method is also
suitable for investigating larger molecules that cannot be calculated
using conventional methods. These quantum computer algorithms were
originally developed in a completely different context. We used them here
for the first time to calculate electron densities of molecules, in
particular also their dynamic evolution after excitation by a light pulse.
We developed an algorithm for a fictitious, completely error-free quantum
computer and ran it on a classical server simulating a quantum computer of
ten qubits. The scientists limited their study to smaller molecules in
order to be able to perform the calculations without a real quantum
computer and to compare them with conventional calculations. The study
thus shows a new way to calculate electron densities and their "response"
to excitations with light in advance with very high spatial and temporal
resolution. This makes it possible, for example, to simulate and
understand ultrafast decay processes, which are also crucial in quantum
computers made of so-called quantum dots. Predictions about the
physical or chemical behaviour of molecules are also possible, for example
during the absorption of light and the subsequent transfer of electrical
charges. This could facilitate the development of photocatalysts for the
production of green hydrogen with sunlight or help to understand processes
in the light-sensitive receptor molecules in the eye.
Quantum laser turns energy loss into gain? Scientists have fabricated
a laser system that generates highly interactive quantum particles at room
temperature. Their findings could lead to a single microcavity laser
system that requires lower threshold energy as its energy loss increases.
The key is the design and materials. The hexagonal microcavity divides
light particles into two different modes: one that passes through the
upward-facing triangle of the hexagon and another that passes through its
downward-facing triangle. Both modes of light particles have the same
energy and path but don't interact with each other. However, the light
particles do interact with other particles called excitons, provided by
the
hexagonal microcavity, which is made of semiconductors. This
interaction leads to the generation of new quantum particles called
polaritons that then interact with each other to generate the polariton
laser. By controlling the degree of loss between the
microcavity and the
semiconductor substrate, an intriguing phenomenon arises, with the
threshold energy becoming smaller as energy loss increases.
A team of physics educators is focusing on a new approach to teaching
quantum physics in schools. Researchers focus on two-state systems:
Promising approach for classroom teaching.
Fields
Quantum Field
Theory is the theoretical framework for constructing
quantum
mechanical models of
subatomic particles in particle physics and
quasiparticles in condensed
matter physics. QFT treats
particles as
excited states of the underlying physical field, so these are called field
quanta. Quantum Field Theory is a theoretical framework that combines
classical field theory,
special relativity, and
quantum mechanics (but
notably not general relativity's description of gravity) and is used to
construct physical models of subatomic particles (in particle physics) and
quasiparticles (in condensed matter physics). QFT treats particles as
excited states (also called quanta) of their underlying fields, which are
more fundamental than the particles. Interactions between particles are
described by interaction terms in the Lagrangian involving their
corresponding fields. Each interaction can be visually represented by
Feynman diagrams, which are formal computational tools, in the process of
relativistic perturbation theory.
Magnetics -
Waves -
Gravity
-
Orbits -
Electrons
Classical
Field Theory is a physical theory that predicts how
one or more
physical fields interact with matter through field equations. The term
'classical field theory' is commonly reserved for describing those
physical theories that describe
electromagnetism and gravitation, two of
the fundamental forces of nature. Theories that incorporate quantum
mechanics are called quantum field theories.
Unified Field Theory is a type of field theory that allows
all that is usually thought of as fundamental forces and elementary
particles to be written in terms of a single field.
Field Equation is a partial differential equation which
determines the dynamics of a physical field, specifically the time
evolution and spatial distribution of the field. The solutions to the
equation are mathematical functions which correspond directly to the
field, as functions of
time and space. Since the field equation is a
partial differential equation, there are families of solutions which
represent a variety of physical possibilities. Usually, there is not just
a single equation, but a set of coupled equations which must be solved
simultaneously. Field equations are not ordinary differential equations
since a field depends on
space and time, which requires at least two
variables.
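A minimal example of solving a field equation numerically: the 1D wave equation u_tt = c²·u_xx, stepped with explicit finite differences. The grid sizes and time step are illustrative; the scheme is stable because the Courant number c·dt/dx is below 1:

```python
import numpy as np

# 1D wave equation u_tt = c^2 * u_xx on a fixed-end string,
# stepped with an explicit finite-difference scheme
NX, NT = 200, 300
DX, DT, C = 1.0, 0.4, 1.0            # Courant number C*DT/DX = 0.4 < 1
x = np.arange(NX)
u_prev = np.exp(-0.01 * (x - NX / 2) ** 2)   # initial Gaussian pulse
u = u_prev.copy()                    # zero initial velocity

for _ in range(NT):
    u_next = np.zeros_like(u)        # ends stay pinned at zero
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (C * DT / DX) ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# the pulse splits into two half-amplitude waves travelling apart
```

Note the point made in the text: the field u depends on both space and time, so the governing equation is a partial differential equation rather than an ordinary one.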
Field is a physical quantity, represented by a number or tensor, that
has a value for each point in
space-time. For example, on a weather map,
the surface temperature is described by assigning a real number to each
point on a map; the temperature can be considered at a fixed point in time
or over some time interval, to study the dynamics of temperature change. A
surface wind map, assigning a vector to each point on a map that describes
the wind velocity at that point, would be an example of a rank-1
tensor field, i.e. a vector field. Field theories, mathematical
descriptions of how field values change in space and time, are ubiquitous
in physics. For instance, the electric field is another rank-1 tensor
field, and the full description of electrodynamics can be formulated in
terms of two interacting vector fields at each point in space-time, or as
a single rank-2 tensor field theory.
Vector Field is an assignment of a vector to each point in a subset of
space. A vector field in the plane (for instance), can be visualised as a
collection of arrows with a given magnitude and direction, each attached
to a point in the plane. Vector fields are often used to model, for
example, the speed and direction of a moving fluid throughout space, or
the strength and direction of some force, such as the magnetic or
gravitational force, as it changes from one point to another point.
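A small sketch of a vector field: sampling a vortex flow v(x, y) = (−y, x) on a grid, which attaches one velocity arrow to every point of the plane:

```python
import numpy as np

# sample the vortex field v(x, y) = (-y, x) on a 5x5 grid:
# one velocity arrow attached to every grid point
xs, ys = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
vx, vy = -ys, xs

speed = np.hypot(vx, vy)   # magnitude of each arrow
# the flow is stationary at the origin and fastest at the corners
```

The same grid-of-arrows picture is what a surface wind map shows, and feeding `xs, ys, vx, vy` to a quiver plot would draw exactly that.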
Electric Field surrounds an
electric charge,
and exerts force on other charges in the field, attracting or repelling
them. Electric field is sometimes abbreviated as E-field. The electric
field is defined mathematically as a vector field that associates to each
point in space the (
electrostatic
or
Coulomb) force per unit of charge exerted on an infinitesimal positive
test charge at rest at that point. The SI unit for electric field strength
is volt per meter (V/m).
Newtons per coulomb (N/C) is also used as a unit
of electric field strength. Electric fields are created by electric
charges, or by time-varying
magnetic fields. Electric fields are important
in many areas of physics, and are exploited practically in electrical
technology. On an atomic scale, the electric field is responsible for the
attractive force between the atomic nucleus and
electrons that holds atoms together, and the
forces between atoms that cause
chemical bonding. Electric
fields and magnetic fields are both manifestations of the
electromagnetic force, one of the four
fundamental forces (or interactions) of nature.
Electric Motors.
Electric fields are more reliable for information. Neurons are fickle.
A new study suggests that electric fields may represent information held
in working memory, allowing the brain to overcome 'representational
drift,' or the inconsistent participation of individual neurons.
Observation of new electric field signals strong potential for assorted
devices. A new vortex electric field with the potential to enhance
future electronic, magnetic and optical devices has been observed by
researchers.
Magnetic Field is a vector field that describes the
magnetic influence of electric charges
in
relative motion and magnetized
materials. The effects of magnetic fields are commonly seen in permanent
magnets, which pull on magnetic materials (such as iron) and attract or
repel other magnets. Magnetic fields surround and are created by
magnetized material and by moving electric charges (electric currents)
such as those used in electromagnets. They exert forces on nearby moving
electrical charges and torques on nearby magnets. In addition, a magnetic
field that varies with location exerts a force on magnetic materials. Both
the strength and direction of a magnetic field vary with location. As
such, it is described mathematically as a vector field.
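The force a magnetic field exerts on a moving charge is the Lorentz force F = q(E + v × B), which can be sketched directly (the speed and field strength below are illustrative):

```python
import numpy as np

def lorentz_force(q, v, e_field, b_field):
    """F = q * (E + v x B): force on a charge q moving with velocity v."""
    return q * (np.asarray(e_field, float) + np.cross(v, b_field))

Q_E = -1.602176634e-19   # electron charge, C

# an electron moving along +x through a 1 mT field along +z
# feels a force along +y, perpendicular to both v and B
f = lorentz_force(Q_E, [1e5, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1e-3])
```

Because the magnetic part of the force is always perpendicular to the velocity, it bends the charge's path without changing its speed.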
Hamiltonian Field Theory is the field-theoretic analogue to classical
Hamiltonian mechanics. It is a formalism in classical field theory
alongside
Lagrangian field theory. It also has applications in quantum field
theory.
Waves.
Quantum
Entanglement (connectedness) -
Quantum Gravity
(space) -
Quantum Computing (super
computers) -
Time
-
Thermodynamics.
An electric field exerts a force on charged particles. The direction
of the electric field is the direction of that
force on a positive
charge. The actual force on a particle with charge q is given by
F = qE. It points in the opposite direction
of the electric field E for a negative charge. A field is a way of
explaining action at a distance. Massive particles attract each other. How
do they do this, if they are not in contact with each other? We say that
massive particles produce gravitational fields. A field is a condition in
space that can be probed. Massive particles are also the probes that detect
a gravitational field. A gravitational field exerts a force on massive
particles. The magnitude of the gravitational field produced by a massive
object at a point P is the gravitational force per unit mass it exerts on
another massive object located at that point. The direction of the
gravitational field is the direction of that force. We calculate the
gravitation field produced by a mass distribution using Newton's law of
gravitation. The magnitude of the gravitational force near the surface of
Earth is
F = mg, the gravitational field
has magnitude F/m = g. Its direction is downward. Charged particles
attract or repel each other, even when not in contact with each other. We
say that charged particles produce electric fields. Charged particles are
also the probes that detect an electric field. An electric field exerts a
force on charged particles. The magnitude of the electric field E produced
by a charged particle at a point P is the electric force per unit positive
charge it exerts on another charged particle located at that point. We calculate the
electric field produced by a charge distribution using a set of
equations called Maxwell's equations. In the presence of many other
charges, a charge q is acted on by a net force F, which is the vector sum
of the forces due to all the other charges. The electric field due to all
the other charges at the position of the charge q is
E = F/q, i.e. it is the vector sum of the
electric fields produced by all the other charges. To measure the electric
field E at a point P due to a collection of charges, we can bring a small
positive charge q to the point P and measure the force on this test
charge. The test charge must be small, because it interacts with the other
charges, and we want this interaction to be small. We divide the force on
the test charge by the magnitude of the test charge to obtain the field.
Field lines were introduced by Michael
Faraday to help visualize the direction and magnitude of the electric
field. The direction of the field at any point is given by the direction
of a line tangent to the field line, while the magnitude of the field is
given qualitatively by the density of field lines. The field lines
converge at the position of a point charge. Near a point charge their
density becomes very large. The magnitude of the field and the density of
the field lines scale as the inverse of the distance squared.
Rules for drawing field lines: Electric
field lines begin on positive charges and end on negative charges, or at
infinity. Lines are drawn symmetrically leaving or entering a charge. The
number of lines entering or leaving a charge is proportional to the
magnitude of the charge. The density of lines at any point (the number of
lines per unit length perpendicular to the lines themselves) is
proportional to the field magnitude at that point. At large distances from
a system of charges, the field lines are equally spaced and radial as if
they came from a single point charge equal in magnitude to the net charge
on the system (presuming there is a net charge). No two field lines can
cross, since the field magnitude and direction must be unique. Consider
the field lines of an electric dipole, i.e. a positive and a negative
charge of equal magnitude separated by a distance d. The dipole's electric
field decreases with distance as 1/(distance)^3, much faster than the
1/(distance)^2 field of a point
charge. Polar molecules do not have a net charge, but the centers of the
positive and negative charge do not coincide. Such molecules produce a
dipole field and interact via the electrostatic force with their
neighbors.
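The superposition and dipole-falloff claims above can be checked numerically: summing the Coulomb fields of a +q/−q pair and comparing the on-axis field strength at two distances. The charge and spacing below are illustrative:

```python
import numpy as np

K = 8.9875517923e9   # Coulomb constant, N*m^2/C^2

def net_field(charges, positions, point):
    """Net electric field at `point` by superposition: sum of k*q*r/|r|^3."""
    total = np.zeros(3)
    p = np.asarray(point, float)
    for q, pos in zip(charges, positions):
        r = p - np.asarray(pos, float)
        total += K * q * r / np.linalg.norm(r) ** 3
    return total

# a dipole: +q and -q separated by d = 0.02 m along the x axis
q, d = 1e-9, 0.02
charges = [q, -q]
positions = [[d / 2, 0, 0], [-d / 2, 0, 0]]

# far away on the axis, |E| ~ 1/r^3, so doubling r divides |E| by about 8
e1 = np.linalg.norm(net_field(charges, positions, [1.0, 0, 0]))
e2 = np.linalg.norm(net_field(charges, positions, [2.0, 0, 0]))
ratio = e1 / e2
```

For a single point charge the same doubling of distance would only divide the field by 4, which is the contrast the text draws between dipole and point-charge fields.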
Electromagnetic fields in the vicinity of nanostructures.
Quantum steering for more precise measurements. Quantum systems
consisting of several particles can be used to measure magnetic or
electric fields more precisely. A young physicist has now proposed a new
scheme for such measurements that uses a particular kind of correlation
between quantum particles.
Higgs Field is a field of energy that is thought to exist in every
region of the universe. The field is accompanied by a
fundamental particle known as the
Higgs boson,
which is used by the field to continuously interact with other particles,
such as the electron. Particles that interact with the field are "given"
mass and, in a similar fashion to an object passing through treacle (or
molasses), will become slower as they pass through it. The result of a
particle "gaining" mass from the field is the prevention of its ability to
travel at the speed of light. Higgs Field is a field that gives mass to
other fundamental particles such as electrons and quarks. A particle's
mass determines how much it resists changing its speed or position when it
encounters a force. Gravitational fields have spin 2 and are described as
part of space and time; they interact with all particles and fields in
nature. The Higgs field, which has spin 0, only interacts directly with
elementary particles and fields that also participate in the
electromagnetic and weak nuclear forces. A
vacuum Higgs field is responsible for the
spontaneous breaking of the gauge symmetries of fundamental
interactions and provides the Higgs mechanism of generating mass of
elementary particles.
Field in mathematics is a set on which addition, subtraction,
multiplication, and division are defined and behave as the corresponding
operations on rational and real numbers do. A field is thus a fundamental
algebraic structure, which is widely used in algebra, number theory and
many other areas of mathematics.
Moduli or moduli fields is sometimes used to refer to scalar fields
whose potential energy function has continuous families of global minima.
Such potential functions frequently occur in
super-symmetric systems. The term "modulus" is borrowed from
mathematics, where it is used synonymously with "parameter". The word
moduli (Moduln in German) first appeared in 1857 in Bernhard Riemann's
celebrated paper "Theorie der Abel'schen Functionen".
Scalar Field associates a scalar value to
every point in a space –
possibly physical space. The scalar may either be a (dimensionless)
mathematical number or a physical quantity. In a physical context, scalar
fields are required to be independent of the choice of reference frame,
meaning that any two observers using the same units will agree on the
value of the scalar field at the same absolute point in space (or
spacetime) regardless of their respective points of origin. Examples used
in physics include the temperature distribution throughout space, the
pressure distribution in a fluid, and spin-zero quantum fields, such as
the
Higgs field. These fields are the subject of
scalar field theory.
Scalar in mathematics is an element of a field which is used to define
a vector space. A quantity described by multiple scalars, such as having
both direction and magnitude, is called a vector.
Force Field is a barrier made of
energy,
plasma, or
particles. It
protects a person, area, or object from attacks or intrusions. This
fictional technology is created as a field of energy without mass that
acts as a wall, so that objects affected by the particular force relating
to the field are unable to pass through the field and reach the other
side. This concept has become a staple of many science-fiction works, so
much that authors frequently do not even bother to explain or justify them
to their readers, treating them almost as established fact and attributing
whatever capabilities the plot requires. Force Field is sometimes known as
an energy shield, force shield, force bubble, defence shield or deflector
shield.
Observations - Looking Carefully
Observation is taking a patient and
careful look at something in order to
record
it and
measure it and also
discover or
determine
the
existence or the presence of some
fact or detail. Observe is the
act of
noticing something or paying
attention
to something with
careful consideration.
Observation is the
active
acquisition of
information
and facts that are
learned from a primary source. In living beings,
observation employs the
senses. In science, observation can also involve the
recording of data
via the use of
instruments. The
term may also refer to any
data collected during the
scientific activity.
Observations can be qualitative, that is, only the absence or presence of
a property is noted, or quantitative if a numerical value is attached to
the observed phenomenon by
counting or measuring.
Observation is the act of making and
recording a
measurement. Taking a patient
look at something. A remark expressing careful consideration about
something. Facts that are learned by
observing and
witnessing. The act of noticing or paying
close attention.
Observation Flaws -
Biases
-
Point of View
- Seeing isn't always believing -
Observable Universe
Watching is the act of observing with
attention
or taking a patient look at something. To observe or determine by
looking. To find out, learn, or determine with
certainty, usually by making an inquiry or other effort.
Investigate -
People Watching -
Knowing when you're being Watched -
Staring
-
Relative
Look is the act of
directing the eyes toward something and
perceiving it
visually. To perceive with attention. To direct one's gaze towards
something. To
analyze carefully.
To study to find a solution. To spend time trying to find something. To be
oriented in a certain
direction,
often with respect to another reference point. Look can also mean the
feelings expressed on a person's face. To give a certain impression or
have a certain outward aspect.
See
is to observe, check out, and look over carefully or
inspect. To perceive by sight or have the
power to perceive by
sight. To observe as if with
an eye. To find out,
learn, or determine
with certainty, usually by making an
inquiry or other
effort. To deem something to be or to
make certain of something. To
get to
know something or become
aware of something. To make sense of something or to
assign a meaning to something. To
perceive an
idea or situation
mentally. To
imagine or conceive of
something and
see in one's mind. To perceive
or think about something in a particular way. To be careful about something or
to be certain to do something.
Noticing is to
discover,
perceive or
determine
the
existence, presence, or
fact of something. Express
recognition of the presence or existence of
something.
Detect is to
discover or to
identify the presence of
something or the existence of something. To
determine the existence,
presence, or fact of something. The work of a detective doing an
investigation. Detection is the
perception that something has occurred or some state exists. The
detection that a signal is being received.
Catching sight of something.
Determine
is to establish something after a calculation, investigation, experiment,
survey, or study. To reach, make, or come to a decision about something.
To give a value to something.
Recognize is to
detect something with the
senses and be
fully
aware or
cognizant of
something.
Recognition is coming to
understand something clearly
and distinctly. The state or quality of something being
acknowledged. An acceptance, as
of a claim, that something is true and valid. The process of
recognizing
something or someone by
remembering.
Regard is to look at something
attentively with a
long fixed look.
Paying particular notice.
Identify is to establish the
identity of
someone or something by its distinct
characteristics that are easy
to perceive and can be
clearly outlined.
Distinct is something not alike and
different in nature or quality.
Something that is easy to perceive and clearly outlined and constituting a
separate entity or part.
Distinguished is something detected with
the senses and marked as different.
Reputation.
Scrutiny is the act of
examining something closely
as for mistakes with a prolonged intense look.
Evidence -
Questioning -
Analyzing -
Inspection
-
Label -
Symbol
Contemplate is to look at something
thoughtfully and observe deep in thought. To think intently and at length
and consider the possibilities. To think deeply about a subject or
question over a period of time.
Artifact is any
error
in the
perception or
representation of any
information, introduced by the involved equipment or techniques that were
used in the observation.
Calibration.
Visual Artifacts are
anomalies apparent during visual representation as in digital graphics
and other forms of imagery, particularly microscopy. Visual artifacts can
be the result of digital image processing that produces
noise or
distortion errors.
Glimpse is a quick look at something or a
brief or incomplete view.
Slow Seeing
is taking your time to look at something more carefully, either to
enjoy it more or to
study it more in
detail.
Closer Inspection means to have a more
thorough examination in order to reveal something.
Inspect
-
Investigate.
Up Close and Personal is to be physically
close to someone or something. Providing detailed information or firsthand
knowledge.
Trace is a
mark, object, or other indication of the existence of something or the
passing of something. To find or discover something by
investigation. A very small
quantity, especially one
too small
to be accurately measured. Trace can also mean to
copy a drawing, map, or design by drawing
over its lines on a superimposed piece of transparent paper.
Naturalistic Observation (pdf)
-
Experiencing
-
Processing
Discovery as an observation is the act of
detecting something new, or
something "old" that had been unrecognized as meaningful. With reference
to sciences and academic disciplines, discovery is the observation of new
phenomena, new actions, or new events and providing new
reasoning to
explain the knowledge gathered through such observations with previously
acquired knowledge from
abstract thought and everyday
experiences. A
discovery may sometimes be
based on earlier discoveries, collaborations,
or ideas. Some discoveries represent a radical breakthrough in knowledge
or technology.
OODA Loop (observe, orient, decide, and act)
Observational Study draws
inferences from a
sample to a
population where the independent
variable is not under the control of the
researcher because of ethical concerns or logistical constraints. One
common observational study is about the possible effect of a treatment on
subjects, where the
assignment of subjects into a treated group versus a
control group is outside the control of the
investigator. This is in
contrast with
experiments, such as
randomized controlled trials,
where each subject is randomly assigned to a treated group or a control
group.
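The key difference described above is who controls assignment. A minimal sketch in Python of the randomized-assignment step that distinguishes a controlled trial from an observational study (subject names and the `randomize` helper are hypothetical, for illustration only):

```python
import random

def randomize(subjects, seed=0):
    """Randomly split subjects into treated and control groups,
    as in a randomized controlled trial. In an observational study,
    by contrast, this assignment is outside the investigator's control."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = subjects[:]             # copy so the input is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treated, control = randomize(["s1", "s2", "s3", "s4", "s5", "s6"])
```

Because the split is random rather than self-selected, systematic differences between the two groups average out, which is what lets the experimenter attribute outcome differences to the treatment.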
Unit of Observation is the unit described by the data that one
analyzes. A study may treat groups
as a unit of observation with a country as the unit of
analysis, drawing conclusions on
group characteristics from data collected at the national level.
Unit of Analysis is the entity that frames what is being
analyzed in a
study, or is the entity being
studied as a whole, within which most
factors of causality and change exist.
Statistical Unit is one member of a set of entities being studied. It
is the main source for the mathematical abstraction of a "random
variable". Common examples of a unit would be a single person, animal,
plant, manufactured item, or country that
belongs to a larger collection
of such entities being studied.
Cherry Picking Data
Prima Facie
means on its first encounter or at first sight. The literal translation
would be "at first face" or "at first appearance".
Empirical Law is a law induced from
observation or
experiment, and
though valid for the
particular
instances observed, not to be relied on beyond the conditions on which
it rests.
Statistics.
What we
see at the same moment in time will sometimes
be understood differently from person to person. We see the same thing but
process it differently because we are using different information and
different
experiences to
compare something to what we just saw.
But even then, we will sometimes see things differently
depending on what part we
are focusing on. You may
miss
certain information because you're focusing on one part and not seeing
the whole picture. Two
eye witnesses are better than one eye
witness, but understanding how
eyesight
works and how the
mind works
will always be the key factor to
understanding.
Monitoring is the act of observing something (and sometimes keeping a
record of it). Keep tabs on; keep an eye on; keep under surveillance.
Check, track, or observe by means of a receiver.
Environmental Monitoring.
Magnifying Small Objects
Atomic Force Microscopy is a
very-high-resolution type of
scanning probe microscopy (SPM), with demonstrated
resolution on the
order
of fractions of a
nanometer, more than 1000 times better than the optical
diffraction limit. Scanning-force Microscopy or SFM.
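The "1000 times better than the optical diffraction limit" claim can be checked with the Abbe limit, d = λ / (2·NA). A small sketch (wavelength and numerical aperture values are illustrative, not from the text):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit d = lambda / (2 * NA): the smallest
    distance a conventional optical microscope can resolve."""
    return wavelength_nm / (2 * numerical_aperture)

# Green light (550 nm) with a high-NA (1.4) oil-immersion objective:
optical_limit = abbe_limit_nm(550, 1.4)    # roughly 196 nm

# AFM resolves fractions of a nanometer (order-of-magnitude figure),
# so it is indeed more than 1000 times finer than the optical limit.
afm_resolution_nm = 0.1
ratio = optical_limit / afm_resolution_nm  # well over 1000
```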
Microscopes -
Observing -
EEG -
Observer Effects
Scanning Probe Microscopy is a branch of microscopy that
forms images of surfaces using a physical probe that scans the specimen.
Gamma Spectroscopy identifies
atoms by detecting the energy of
gamma rays.
Crookes Tube is an early experimental electrical discharge
tube, with vacuum, used to discover the properties of cathode rays.
Electron Microscope is a
microscope that uses a beam of
accelerated
electrons as a source of
illumination.
Scanning Electron Microscope is a type of
electron microscope that produces images of a sample by scanning the
surface with a focused beam of electrons. The
electrons interact with
atoms in the sample, producing various signals that contain information
about the sample's surface
topography and composition. The electron beam
is scanned in a raster scan pattern, and the beam's position is combined
with the detected signal to produce an image. SEM can achieve resolution
better than 1
nanometer. Specimens can be observed in high vacuum in
conventional SEM, or in low vacuum or wet conditions in variable pressure
or environmental SEM, and at a wide range of cryogenic or elevated
temperatures with specialized instruments.
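The raster-scan image formation described above can be sketched in a few lines: the beam visits each (x, y) position in turn, and the signal detected there becomes one pixel. The `detect` function here is a hypothetical stand-in for the real detector chain:

```python
def raster_scan(width, height, detect):
    """Build an image the way an SEM does: step the beam through a
    raster pattern and record the detected signal at each position."""
    image = []
    for y in range(height):            # one scan line at a time
        row = []
        for x in range(width):         # beam position along the line
            row.append(detect(x, y))   # detected signal becomes the pixel
        image.append(row)
    return image

# Toy detector: signal grows with position, just to show the mechanics.
img = raster_scan(4, 3, lambda x, y: x + y)
```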
Freeze Frame.
Transmission Electron Microscopy is a microscopy technique in which a
beam of
electrons is transmitted through a
specimen to form an image. The specimen is most often an ultrathin section
less than 100 nm thick or a suspension on a grid. An image is formed from
the interaction of the electrons with the sample as the beam is
transmitted through the specimen. The image is then magnified and focused
onto an imaging device, such as a fluorescent screen, a layer of
photographic film, or a sensor such as a charge-coupled device.
Electron
Paramagnetic Resonance is a method for studying materials with unpaired electrons.
Cryo-Electron
Microscopy or cryo-EM is a cryomicroscopy technique applied to samples
cooled to cryogenic
temperatures.
For biological specimens, the structure is preserved by embedding in an
environment of vitreous ice. An aqueous sample solution is applied to a
grid-mesh and plunge-frozen in liquid ethane or a mixture of liquid ethane
and propane.
Electron Cryo-Microscopy: Using inexpensive technology to produce
high-resolution images. The trick: the
samples are flash frozen and then bombarded with
electrons. In the case of traditional electron microscopy, all of the
water is first extracted from the sample. This is necessary because the
investigation takes place in a vacuum, which means water would evaporate
immediately and make imaging impossible. However, because water molecules
play such an important role in biomolecules, especially in proteins, they
cannot be examined using traditional electron microscopy. Proteins are
among the most important building blocks of cells and perform a variety of
tasks. In-depth knowledge of their structure is necessary in order to
understand how they work.
Strengthening electron-triggered light emission. A new method can
produce a hundredfold increase in light emissions from a type of
electron-photon coupling, which is key to electron microscopes and other
technologies. The way electrons interact with photons of light is a key
part of many modern technologies, from lasers to solar panels to LEDs. But
the interaction is inherently a weak one because of a major mismatch in
scale: A wavelength of visible light is about 1,000 times larger than an
electron, so the way the two things affect each other is limited by that
disparity. Now, researchers at MIT and elsewhere have come up with an
innovative way to make much stronger interactions between photons and
electrons possible, in the process producing a hundredfold increase in the
emission of light from a phenomenon called
Smith-Purcell
radiation. The finding has potential implications for both commercial
applications and fundamental scientific research, although it will require
more years of research to make it practical.
Scanning Tunneling Microscope is an instrument for imaging
surfaces at the atomic level.
Sizes (nano) -
Telescopes (lens) -
Sensors (chromatography)
Super microscope shows nanoscale biological process for the first time.
New microscope provides more insight into arterial calcification. A new
microscope is capable of live imaging of biological processes in such
detail that moving protein complexes are visible.
Ghost Imaging speeds up super-resolution microscopy. New nanoscopy
approach poised to capture biological processes occurring inside cells at
submillisecond speeds.
Lattice-Light-Sheet Microscopy -
Video
(vimeo)
Light Sheet Fluorescence Microscopy is a
fluorescence
microscopy technique with an intermediate optical resolution, but good
optical sectioning capabilities and high speed.
Optical Coherence Tomography is an imaging technique that uses
low-coherence light to capture micrometer-resolution, two- and
three-dimensional images from within optical scattering media (e.g.,
biological tissue). It is used for medical imaging and industrial
nondestructive testing or NDT. Optical
coherence tomography is based on low-coherence interferometry, typically
employing near-infrared light. The use of relatively long wavelength light
allows it to penetrate into the scattering medium. Confocal microscopy,
another optical technique, typically penetrates less deeply into the
sample but with higher resolution. Depending on the properties of the
light source (superluminescent diodes, ultrashort pulsed lasers, and
supercontinuum lasers have been employed), optical coherence tomography
has achieved sub-micrometer resolution (with very wide-spectrum sources
emitting over a ~100 nm wavelength range). Optical coherence tomography is
one of a class of optical tomographic techniques. Commercially available
optical coherence tomography systems are employed in diverse applications,
including art conservation and diagnostic medicine, notably in
ophthalmology and optometry where it can be used to obtain detailed images
from within the retina. Recently, it has also begun to be used in
interventional cardiology to help diagnose coronary artery disease, and in
dermatology to improve diagnosis. A relatively recent implementation of
optical coherence tomography, frequency-domain optical coherence
tomography, provides advantages in the signal-to-noise ratio provided,
thus permitting faster signal acquisition.
Sub-surface imaging technology can expose counterfeit travel documents.
New research has found that optical coherence tomography or OCT imaging
technology can be utilized to distinguish between legitimate and
counterfeit travel documents.
Confocal Microscopy is an optical imaging technique for increasing
optical resolution and contrast of a micrograph by means of using a
spatial pinhole to block out-of-focus light in image formation. Capturing
multiple two-dimensional images at different depths in a sample enables
the reconstruction of three-dimensional structures (a process known as
optical sectioning) within an object. This technique is used extensively
in the scientific and industrial communities and typical applications are
in life sciences, semiconductor inspection and
materials science. Light travels
through the sample under a conventional microscope as far into the
specimen as it can penetrate, while a confocal microscope only focuses a
smaller beam of light at one narrow depth level at a time. The CLSM
achieves a controlled and highly limited depth of focus.
Differential Interference Contrast Microscopy is an optical microscopy
technique used to enhance the contrast in unstained, transparent samples.
DIC works on the principle of interferometry to gain information about the
optical path length of the sample, to see otherwise invisible features. A
relatively complex optical system produces an image with the object
appearing black to white on a grey background. This image is similar to
that obtained by phase contrast microscopy but without the bright
diffraction halo. Light waves travel at different speeds in different
materials.
Light Refraction.
Phase-Contrast Microscopy is an optical microscopy technique that
converts phase shifts in light passing through a transparent specimen to
brightness changes in the image. Phase shifts themselves are invisible,
but become visible when shown as brightness variations. When light waves
travel through a medium other than vacuum, interaction with the medium
causes the wave amplitude and phase to change in a manner dependent on
properties of the medium. Changes in amplitude (brightness) arise from the
scattering and absorption of light, which is often wavelength-dependent
and may give rise to colors. Photographic equipment and the human eye are
only sensitive to amplitude variations. Without special arrangements,
phase changes are therefore invisible. Yet, phase changes often carry
important information. Phase-contrast microscopy is particularly important
in biology. It reveals many cellular structures that are not visible with
a simpler bright-field microscope, as exemplified in the figure. These
structures were made visible to earlier microscopists by staining, but
this required additional preparation and thus killing the cells. The
phase-contrast microscope made it possible for biologists to study living
cells and how they proliferate through cell division. It is one of the few
methods available to quantify cellular structure and components that does
not use
fluorescence.
Phase Objects are samples that change
the phase but not the amplitude of a light wave. In contrast, amplitude
objects only affect the amplitude but not the phase of light. Flat and
unstained cells almost reach the characteristics of a phase object for
visible light.
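The phase shift a phase object imposes follows from the optical path difference: φ = 2π·Δn·t / λ. A worked sketch with illustrative values (the refractive-index difference and cell thickness are assumptions, not measurements from the text):

```python
import math

def phase_shift_rad(delta_n, thickness_nm, wavelength_nm):
    """Phase shift of light crossing a transparent (phase) object:
    phi = 2*pi * delta_n * t / lambda. The amplitude is unchanged,
    so this shift is invisible without phase-contrast optics."""
    return 2 * math.pi * delta_n * thickness_nm / wavelength_nm

# Illustrative: a cell ~5 um thick whose refractive index exceeds
# the surrounding medium by ~0.03, viewed in green light (550 nm).
phi = phase_shift_rad(0.03, 5000, 550)   # roughly 1.7 radians
```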
Polarized.
Bright-Field Microscopy is the simplest of all the optical microscopy
illumination techniques. Sample illumination is transmitted (i.e.,
illuminated from below and observed from above) white light, and contrast
in the sample is caused by attenuation of the transmitted light in dense
areas of the sample. Bright-field microscopy is the simplest of a range of
techniques used for illumination of samples in light microscopes, and its
simplicity makes it a popular technique. The typical appearance of a
bright-field microscopy image is a dark sample on a bright background,
hence the name.
Microbes Don’t Actually Look Like Anything (youtube) -
Journey to the Microcosmos (youtube channel).
Dark-Field Microscopy describes microscopy methods, in both light and
electron microscopy, which exclude the unscattered beam from the image. As
a result, the field around the specimen (i.e., where there is no specimen
to scatter the beam) is generally dark.
Echo Revolve
Fluorescence Microscope - Four Microscopes in One.
Mirrored chip could enable handheld dark-field microscopes. Simple
chip powered by quantum dots allows standard microscopes to visualize
difficult-to-image biological organisms.
Mass
Spectrometry
is an analytical technique that ionizes chemical species and sorts the
ions based on their mass-to-charge ratio. In simpler terms, a mass
spectrum measures the masses within a sample. Mass spectrometry is used in
many different fields and is applied to pure samples as well as complex
mixtures.
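The sorting step the definition describes — separating ions by their mass-to-charge ratio (m/z) — can be sketched directly; the masses and charges below are illustrative, not real instrument data:

```python
def sort_by_mz(ions):
    """Order ions by mass-to-charge ratio, the quantity a mass
    spectrometer actually separates on."""
    return sorted(ions, key=lambda ion: ion["mass"] / ion["charge"])

ions = [
    {"name": "A", "mass": 500.0, "charge": 1},   # m/z = 500
    {"name": "B", "mass": 900.0, "charge": 2},   # m/z = 450
    {"name": "C", "mass": 300.0, "charge": 1},   # m/z = 300
]
order = [ion["name"] for ion in sort_by_mz(ions)]
```

Note that the heavier ion B sorts before A: a doubly charged ion appears at half its mass, which is why the spectrum records m/z rather than mass alone.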
Atomic Spectroscopy.
Spectrometers
(spectrum)
Raman Spectroscopy is a laser-based microscopic device and a
spectroscopic technique used to observe vibrational, rotational, and other
low-frequency modes in a system. Raman spectroscopy is commonly used in
chemistry to provide a structural fingerprint by which molecules can be
identified. The term MOLE is used to refer to the Raman-based microprobe.
Macromolecular crystallography
(wiki) -
Illuminating The Secrets Of Crystals-Microcrystal Electron Diffraction In
Structural Biology.
Proteolysis
is the breakdown of
proteins into smaller polypeptides or
amino acids.
Nuclear Magnetic Resonance Spectroscopy is a spectroscopic technique
to observe local
magnetic fields
around atomic nuclei. The sample is placed in a magnetic field and the NMR
signal is produced by excitation of the nuclei sample with radio waves
into nuclear magnetic resonance, which is detected with sensitive radio
receivers. The intramolecular magnetic field around an atom in a molecule
changes the resonance frequency, thus giving access to details of the
electronic structure of a molecule and its individual functional groups.
As the fields are unique or highly characteristic to individual compounds,
in modern organic chemistry practice, NMR spectroscopy is the definitive
method to identify monomolecular organic compounds. Similarly, biochemists
use NMR to identify proteins and other complex molecules. Besides
identification, NMR spectroscopy provides detailed information about the
structure, dynamics, reaction state, and chemical environment of
molecules. The most common types of NMR are proton and carbon-13 NMR
spectroscopy, but it is applicable to any kind of sample that contains
nuclei possessing spin. Nuclear Magnetic Resonance (NMR) spectroscopy is
an analytical chemistry technique used in quality control and research for
determining the content and purity of a sample as well as its molecular
structure. For example, NMR can quantitatively analyze mixtures containing
known compounds. Nuclear magnetic resonance spectroscopy is widely used to
determine the structure of organic molecules in solution and study
molecular physics, crystals as well as non-crystalline materials. The NMR
phenomenon is based on the fact that nuclei of atoms have magnetic
properties that can be utilized to yield chemical information. Quantum
mechanically subatomic particles (electrons, protons and neutrons) can be
imagined as spinning on their axes. (most commonly known as NMR
spectroscopy or magnetic resonance spectroscopy (MRS)).
Nuclear
Magnetic Resonance Spectroscopy of Proteins
is a field of structural biology in which NMR spectroscopy is used to
obtain information about the structure and dynamics of
proteins, and also
nucleic acids, and their complexes.
Nuclear
Magnetic Resonance is a
physical observation in
which nuclei in a strong constant
magnetic
field are perturbed by a weak oscillating magnetic field (in the near
field and therefore not involving electromagnetic waves) and respond by
producing an electromagnetic signal with a frequency characteristic of the
magnetic field at the nucleus. This process occurs near resonance, when
the oscillation frequency matches the intrinsic frequency of the nuclei,
which depends on the strength of the static magnetic field, the chemical
environment, and the magnetic properties of the isotope involved; in
practical applications with static magnetic fields up to ca. 20 tesla, the
frequency is similar to VHF and UHF television broadcasts (60–1000 MHz).
NMR results from specific magnetic properties of certain atomic nuclei.
Nuclear magnetic resonance spectroscopy is widely used to determine the
structure of organic molecules in solution and study molecular physics,
crystals as well as non-crystalline materials. NMR is also routinely used
in advanced medical imaging techniques, such as in magnetic resonance
imaging (MRI). All isotopes that contain an odd number of protons and/or
neutrons (see Isotope) have an intrinsic nuclear magnetic moment and
angular momentum, in other words a nonzero nuclear spin, while all
nuclides with even numbers of both have a total spin of zero. The most
commonly used nuclei are 1H and 13C, although isotopes of many other
elements can be studied by high-field NMR spectroscopy as well. A key
feature of NMR is that the resonance frequency of a particular simple
substance is usually directly proportional to the strength of the applied
magnetic field. It is this feature that is exploited in imaging
techniques; if a sample is placed in a non-uniform magnetic field then the
resonance frequencies of the sample's nuclei depend on where in the field
they are located. Since the resolution of the imaging technique depends on
the magnitude of the magnetic field gradient, many efforts are made to
develop increased gradient field strength. The principle of NMR usually
involves three sequential steps: The alignment (polarization) of the
magnetic nuclear spins in an applied, constant magnetic field B0. The
perturbation of this alignment of the nuclear spins by a weak oscillating
magnetic field, usually referred to as a radio-frequency (RF) pulse. The
oscillation frequency required for significant perturbation is dependent
upon the static magnetic field (B0) and the nuclei of observation. The
detection of the NMR signal during or after the RF pulse, due to the
voltage induced in a detection coil by precession of the nuclear spins
around B0. After an RF pulse, precession usually occurs with the nuclei's
intrinsic Larmor frequency and, in itself, does not involve transitions
between spin states or energy levels. The two magnetic fields are usually
chosen to be perpendicular to each other as this maximizes the NMR signal
strength. The frequencies of the time-signal response by the total
magnetization (M) of the nuclear spins are analyzed in NMR spectroscopy
and magnetic resonance imaging. Both use applied magnetic fields (B0) of
great strength, often produced by large currents in superconducting coils,
in order to achieve dispersion of response frequencies and of very high
homogeneity and stability in order to deliver spectral resolution, the
details of which are described by chemical shifts, the Zeeman effect, and
Knight shifts (in metals). The information provided by NMR can also be
increased using hyperpolarization, and/or using two-dimensional,
three-dimensional and higher-dimensional techniques. NMR phenomena are
also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's
magnetic field (referred to as Earth's field NMR), and in several types of
magnetometers.
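The proportionality the text emphasizes — resonance frequency directly proportional to the applied field — is the Larmor relation f = γ̄·B0. A quick check in Python (gyromagnetic ratios are standard approximate values, stated here as assumptions):

```python
# Gyromagnetic ratios gamma/(2*pi), in MHz per tesla (approximate):
GAMMA_BAR_1H = 42.577    # proton (1H)
GAMMA_BAR_13C = 10.708   # carbon-13

def larmor_mhz(gamma_bar_mhz_per_t, b0_tesla):
    """Larmor (resonance) frequency: directly proportional to the
    static field B0, which is what MRI gradient imaging exploits."""
    return gamma_bar_mhz_per_t * b0_tesla

# At the ~20 tesla upper end mentioned in the text, the proton
# frequency lands in the UHF television range, consistent with
# the quoted 60-1000 MHz span.
f_20t = larmor_mhz(GAMMA_BAR_1H, 20.0)
```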
Multiangle
Light Scattering
describes a technique for measuring the
light scattered by a sample into a
plurality of angles.
Small
Angle Scattering
is a small-angle scattering method for structure analysis of biological
materials.
New Microscope captures detailed 3-D movies of cells deep within living
systems. Merging lattice light sheet microscopy with adaptive optics
reveals the most detailed picture yet of subcellular dynamics in
multicellular organisms.
Refuting a 70-year approach to predicting material microstructure.
Researchers have developed a new microscopy technique that maps material
microstructure in three dimensions; results demonstrate that the
conventional method for predicting materials' properties under high
temperature is ineffective.
Ultrafast
Laser Spectroscopy
is a spectroscopic technique that uses ultrashort pulse
lasers for the
study of dynamics on extremely short time scales (attoseconds to
nanoseconds).
High-speed microscope illuminates biology at the speed of life. The
team behind the revolutionary 3D SCAPE microscope announces today a new
version of this high-speed imaging technology. They used SCAPE 2.0 to
reveal previously unseen details of living creatures -- from neurons
firing inside a wriggling worm to the 3D dynamics of the beating heart of
a fish embryo, with far superior resolution and at speeds up to 30 times
faster than their original demonstration.
Time-lapse.
Dual-Polarization
Interferometry
is an analytical technique that probes molecular layers adsorbed to the
surface of a waveguide using the evanescent wave of a
laser beam. It is
used to measure the conformational change in proteins, or other
biomolecules, as they function (referred to as the conformation activity
relationship).
Circular Dichroism is dichroism involving circularly polarized light,
i.e., the differential absorption of left- and right-handed light.
Left-hand circular (LHC) and right-hand circular (RHC) polarized light
represent two possible spin angular momentum states for a photon, and so
circular dichroism is also referred to as dichroism for spin angular
momentum.
Neutron Imaging is the process of making an image with neutrons. The
resulting image is based on the neutron attenuation properties of the
imaged object. The resulting images have much in common with industrial
X-ray images, but since the image is based on neutron attenuating
properties instead of X-ray attenuation properties, some things easily
visible with neutron imaging may be very challenging or impossible to see
with
X-Ray Imaging Techniques (and vice
versa). X-rays are attenuated based on a material's density. Denser
materials will stop more X-rays. With neutrons, a material's likelihood of
attenuation of neutrons is not related to its density. Some light
materials such as boron will absorb neutrons while hydrogen will generally
scatter neutrons, and many commonly used metals allow most neutrons to
pass through them. This can make neutron imaging better suited in many
instances than X-ray imaging; for example, looking at O-ring position and
integrity inside of metal components, such as the segments joints of a
Solid Rocket Booster.
Neutron Diffraction is the application of
neutron scattering to the determination of the atomic and/or magnetic
structure of a material. A sample to be examined is placed in a beam of
thermal or cold neutrons to obtain a diffraction pattern that provides
information of the structure of the material. The technique is similar to
X-ray diffraction but due to their different scattering properties,
neutrons and X-rays provide complementary information: X-Rays are suited
for superficial analysis, strong x-rays from synchrotron radiation are
suited for shallow depths or thin specimens, while neutrons having high
penetration depth are suited for bulk samples.
X-Ray Crystallography is a technique used for determining the atomic
and molecular structure of a
crystal, in which the crystalline atoms cause
a beam of incident X-rays to diffract into many specific directions. By
measuring the angles and intensities of these diffracted beams, a
crystallographer can produce a three-dimensional picture of the density of
electrons within the crystal. From this electron density, the mean
positions of the atoms in the crystal can be determined, as well as their
chemical bonds, their disorder, and various other information.
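The "specific directions" into which the X-rays diffract are given by Bragg's law, n·λ = 2·d·sin(θ). A worked sketch solving for the diffraction angle (the wavelength is the common Cu K-alpha line and the plane spacing is illustrative):

```python
import math

def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
    """Solve Bragg's law n*lambda = 2*d*sin(theta) for the angle at
    which a diffracted beam appears from planes spaced d apart."""
    s = order * wavelength_angstrom / (2 * d_spacing_angstrom)
    return math.degrees(math.asin(s))

# Cu K-alpha X-rays (1.5406 angstroms) on planes 2.0 angstroms apart:
theta = bragg_angle_deg(1.5406, 2.0)   # roughly 22.7 degrees
```

Measuring many such angles and intensities is what lets the crystallographer reconstruct the electron-density map described above.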
X-Ray Absorption Spectroscopy is a widely used technique for
determining the local geometric and/or electronic structure of matter. The
experiment is usually performed at synchrotron radiation facilities, which
provide intense and tunable X-ray beams. Samples can be in the gas phase,
solutions, or solids.
First X-Ray of a Single Atom. Scientists have taken the world's first
X-ray SIGNAL (or SIGNATURE) of just one atom. This groundbreaking
achievement could revolutionize the way scientists detect materials.
Over the years, the quantity of materials in a sample required for X-ray
detection has been greatly reduced thanks to the development of
synchrotron X-ray sources and new instruments. To date, the smallest
amount of sample one can X-ray is an attogram, which is about 10,000 atoms
or more. This is because the X-ray signal produced by a single atom is
extremely weak, so conventional X-ray detectors cannot be used to
detect it. According to Hla, it is a long-standing dream of scientists to
X-ray just one atom, which is now being realized by the research team led
by him.
Synchrotron X-ray Scanning Tunneling Microscopy.
Synchrotron is a particular type of cyclic
particle
accelerator, descended from the cyclotron, in which the accelerating
particle beam travels around a fixed closed-loop path. The magnetic field
which bends the particle beam into its closed path increases with time
during the accelerating process, being synchronized to the increasing
kinetic energy of the particles. The synchrotron is one of the first
accelerator concepts to enable the construction of large-scale facilities,
since bending, beam focusing and acceleration can be separated into
different components. The most powerful modern particle accelerators use
versions of the synchrotron design. The largest synchrotron-type
accelerator, also the largest particle accelerator in the world, is the
27-kilometre-circumference (17 mi) Large Hadron Collider (LHC) near
Geneva, Switzerland, built in 2008 by the European Organization for
Nuclear Research (CERN). It can accelerate beams of protons to an energy
of 6.5 teraelectronvolts (TeV).
Synchrotron Light Source is a source of electromagnetic radiation
usually produced by a storage ring, for scientific and technical purposes.
First observed in synchrotrons, synchrotron light is now produced by
storage rings and other specialized particle accelerators, typically
accelerating electrons. Once the high-energy electron beam has been
generated, it is directed into auxiliary components such as bending
magnets and insertion devices (undulators or wigglers) in storage rings
and free electron lasers. These supply the strong magnetic fields
perpendicular to the beam which are needed to convert high energy
electrons into photons.
Synchrotron Radiation is the electromagnetic radiation emitted when
charged particles are accelerated radially, e.g., when they are subject to
an acceleration perpendicular to their velocity (a ⊥ v). It is produced,
for example, in synchrotrons using bending magnets, undulators and/or
wigglers. If the particle is non-relativistic, then the emission is called
cyclotron emission. If, on the other hand, the particles are relativistic,
sometimes referred to as ultrarelativistic, the emission is called
synchrotron emission. Synchrotron radiation may be achieved artificially
in synchrotrons or storage rings, or naturally by fast electrons moving
through magnetic fields. The radiation produced in this way has a
characteristic polarization and the frequencies generated can range over
the entire electromagnetic spectrum which is also called continuum
radiation.
Interferometry
is a family of techniques in which waves, usually
electromagnetic waves,
are superimposed causing the phenomenon of interference in order to
extract information. Interferometry is an important investigative
technique in the fields of astronomy, fiber optics, engineering metrology,
optical metrology, oceanography, seismology, spectroscopy (and its
applications to chemistry), quantum mechanics, nuclear and particle
physics, plasma physics, remote sensing, biomolecular interactions,
surface profiling, microfluidics, mechanical stress/strain measurement,
velocimetry, and optometry.
Probe-based
confocal laser endomicroscopy combines the slender camera-toting
probe traditionally snaked down the throat to view the insides of organs
(an endoscope) with a laser that lights up tissues, and sensors that
analyze the reflected fluorescent patterns. It offers a microscopic view
of living tissues instead of fixed ones.
Confocal
laser endomicroscopy (CLE) is a novel in vivo imaging technique
that can provide real-time optical biopsies in the evaluation of
pancreaticobiliary strictures and pancreatic cystic lesions (PCLs), both
of which are plagued by low sensitivities of routine evaluation
techniques. Compared to pathology alone, CLE is associated with a higher
sensitivity and accuracy for the evaluation of indeterminate
pancreaticobiliary strictures. CLE has the ability to determine the
malignant potential of PCLs. As such, CLE can increase the diagnostic
yield of endoscopic retrograde cholangiopancreatography and endoscopic
ultrasound, reducing the need for repeat procedures. It has been shown to
be safe, with an adverse event rate of ≤1%. More published literature
regarding its cost-effectiveness is needed.
More published literature regarding its cost-effectiveness is needed.
Contrast Agent
is a substance used to increase the contrast of structures or fluids
within the body in medical imaging. Contrast agents absorb or alter
external electromagnetism or ultrasound, which is different from
radiopharmaceuticals, which emit radiation themselves. In x-rays, contrast
agents enhance the radiodensity in a target tissue or structure. In MRIs,
contrast agents shorten (or in some instances increase) the relaxation
times of nuclei within body tissues in order to alter the contrast in
the image. Contrast agents are commonly used to improve the visibility of
blood vessels and the gastrointestinal tract. Several types of contrast
agent are in use in medical imaging and they can roughly be classified
based on the imaging modalities where they are used. Most common contrast
agents work based on X-ray attenuation and magnetic resonance signal
enhancement.
Imaging Machines (EEG) -
Microscopes (science tools)
Radiopharmaceuticals are a group of pharmaceutical drugs which have
radioactivity. Radiopharmaceuticals can be used as diagnostic and
therapeutic agents. Radiopharmaceuticals emit radiation themselves, which
is different from contrast media which absorb or alter external
electromagnetism or ultrasound. Radiopharmacology is the branch of
pharmacology that specializes in these agents. The main group of these
compounds are the radiotracers used to diagnose dysfunction in body
tissues. While not all medical isotopes are radioactive,
radiopharmaceuticals are the oldest and still most common such drugs.
Radioactive
Tracer is a chemical compound in which one or more atoms have been
replaced by a radionuclide so by virtue of its radioactive decay it can be
used to explore the mechanism of chemical reactions by tracing the path
that the radioisotope follows from reactants to products. Radiolabeling or
radiotracing is thus the radioactive form of isotopic labeling.
Radioisotopes of hydrogen, carbon, phosphorus, sulfur, and iodine have
been used extensively to trace the path of biochemical reactions. A
radioactive tracer can also be used to track the distribution of a
substance within a natural system such as a cell or tissue, or as a flow
tracer to track fluid flow. Radioactive tracers are also used to determine
the location of fractures created by hydraulic fracturing in natural gas
production. Radioactive tracers form the basis of a variety of imaging
systems, such as, PET scans, SPECT scans and technetium scans. Radiocarbon
dating uses the naturally occurring carbon-14 isotope as an isotopic
label.
Radionuclide is an atom that has excess nuclear energy, making it
unstable. This excess energy can be used in one of three ways: emitted
from the nucleus as gamma radiation; transferred to one of its electrons
to release it as a conversion electron; or used to create and emit a new
particle (alpha particle or beta particle) from the nucleus. During those
processes, the radionuclide is said to undergo radioactive decay. These
emissions are considered ionizing radiation because they are powerful
enough to liberate an electron from another atom.
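Radioactive decay of such unstable nuclei follows the exponential law N(t) = N0·e^(−λt), with decay constant λ = ln 2 / t½. A small Python sketch, using the roughly 6-hour half-life of technetium-99m (a common diagnostic radionuclide) as an illustrative number:

```python
# Radioactive decay: N(t) = N0 * exp(-lam * t), lam = ln(2) / t_half.
# Half-life below is technetium-99m, approximately 6 hours.
import math

t_half = 6.0                      # hours (Tc-99m, approximate)
lam = math.log(2) / t_half        # decay constant, 1/h

def remaining(n0, t):
    """Fraction of nuclei still undecayed after t hours."""
    return n0 * math.exp(-lam * t)

print(remaining(1.0, 6.0))   # one half-life  -> ~0.5
print(remaining(1.0, 24.0))  # four half-lives -> ~0.0625
```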
Computerized Tomography or CT scan combines a series of X-ray images
taken from different angles around your body and uses computer processing
to create cross-sectional images (slices) of the bones, blood vessels and
soft tissues inside your body. CT scan images provide more-detailed
information than plain X-rays do.
CAT scan
is a
medical imaging technique used to obtain detailed internal images of
the body. The personnel that perform CT scans are called radiographers or
radiology technologists. CT scanners use a rotating X-ray tube and a row
of detectors placed in a gantry to measure X-ray attenuations by different
tissues inside the body. The multiple X-ray measurements taken from
different angles are then processed on a computer using tomographic
reconstruction algorithms to produce tomographic (cross-sectional) images
(virtual "slices") of a body. CT scan can be used in patients with
metallic implants or pacemakers, for whom
magnetic resonance imaging or MRI is contraindicated.
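The tomographic-reconstruction step described above can be sketched numerically: simulate projections at many angles, then smear each one back across the image. This is a deliberately crude, NumPy-only illustration (nearest-neighbor rotation, unfiltered back-projection); real scanners use filtered back-projection or iterative algorithms:

```python
# Minimal sketch of tomographic reconstruction with NumPy only.
import numpy as np

def project(image, angles):
    """Sum the image along lines at each angle (crude Radon transform)."""
    n = image.shape[0]
    c = n // 2
    ys, xs = np.mgrid[0:n, 0:n]
    sino = []
    for th in angles:
        ct, st = np.cos(th), np.sin(th)
        # sample the image at coordinates rotated about the center
        xr = np.rint(ct * (xs - c) - st * (ys - c) + c).astype(int)
        yr = np.rint(st * (xs - c) + ct * (ys - c) + c).astype(int)
        ok = (xr >= 0) & (xr < n) & (yr >= 0) & (yr < n)
        rot = np.zeros_like(image)
        rot[ok] = image[yr[ok], xr[ok]]
        sino.append(rot.sum(axis=0))        # one detector row per angle
    return np.array(sino)

def backproject(sino, angles, n):
    """Smear each projection back across the image and average."""
    c = n // 2
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for proj, th in zip(sino, angles):
        ct, st = np.cos(th), np.sin(th)
        # inverse rotation: which detector bin does pixel (x, y) hit?
        xi = np.rint(ct * (xs - c) + st * (ys - c) + c).astype(int)
        ok = (xi >= 0) & (xi < n)
        recon[ok] += proj[xi[ok]]
    return recon / len(angles)

n = 65
phantom = np.zeros((n, n))
phantom[20:26, 40:46] = 1.0                  # a small dense "bone"
angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = backproject(project(phantom, angles), angles, n)
# the dense region reappears (blurred) at the correct location
```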
Radiographers are healthcare professionals who specialize in the
imaging of human anatomy for the diagnosis and treatment of pathology.
Scientists invent a new type of microscope that can see through an intact
skull. The microscope uses a combination of hardware and
software-based
adaptive optics to reconstruct the object image. Non-invasive
microscopic techniques such as
optical coherence microscopy and two-photon
microscopy are commonly used for in vivo imaging of living tissues. When
light passes through turbid materials such as biological tissues, two
types of light are generated: ballistic photons and multiply scattered
photons. The ballistic photons travel straight through the object without
experiencing any deflection and hence are used to reconstruct the object
image. On the other hand, the multiply scattered photons are generated via
random deflections as the light passes through the material and show up as
speckle noise in the reconstructed image. As the light propagates through
increasing distances, the ratio between multiply scattered and ballistic
photons increases drastically, thereby obscuring the image information. In
addition to the noise generated by the multiply scattered light, optical
aberration of ballistic light also causes contrast reduction and image
blur during the image reconstruction process.
Innovative microscopy technique reveals secrets of lipid synthesis inside
cells. Two-color infrared photothermal microscopy (2C-IPM) opens new
avenues for long-term study of lipid metabolism in living cells.
Everywhere and Nowhere at Once
Observer Effect refers to changes that the act of
observation will make on a phenomenon
being observed.
The act of
observing something could have an effect on what you are observing,
and trying to measure something can have an effect on what you are trying
to measure because the
instruments can have an effect. This is often the
result of instruments that, by necessity, alter the state of what they
measure in some manner. A commonplace example is checking the pressure in
an automobile tire; this is difficult to do without letting out some of
the air, thus changing the pressure. Furthermore, it is not possible to
see any object without light hitting the object, and causing it to emit
light; while this may seem negligible, the object still experiences a
change. This effect can be
observed in many domains of physics and can
often be reduced to insignificance by using better instruments or
observation techniques. In
quantum mechanics, there is a common
misconception that it is the mind of a
conscious observer
that causes the observer effect in quantum processes. It is rooted in a
misunderstanding of the
quantum wave function ψ and the quantum
measurement process.
Once one has measured the system, one knows its current state; and
this prevents it from being in one of its other states, it has apparently
decohered from them without prospects of future strong quantum
interference. This means that the type of measurement one performs on the
system affects the end-state of the system.
Some interpretations of quantum mechanics posit a central role for an
observer of a quantum phenomenon. The quantum mechanical observer is tied
to the issue of observer effect, where a measurement necessarily requires
interacting with the physical object being measured, affecting its
properties through the interaction. The term "observable" has gained a
technical meaning, denoting a
Hermitian operator that represents a measurement.
Double Slit Experiment -
Correlation does
not Imply Causation -
Mind over Matter -
People Watching -
Survey Errors
-
Sampling Errors -
Cherry Picking Data
Observational Error is the difference between a measured value of a
quantity and its true value. In statistics, an
error is not a "
mistake".
Variability is an inherent part of the results of measurements and of the
measurement process. Measurement errors can be divided into two
components: Random error and
systematic error. Random errors are
errors in measurement that
lead to measurable values being
inconsistent
when repeated measurements of a constant attribute or quantity are taken.
Systematic errors are errors
that are not determined by chance but are introduced by an inaccuracy
(involving either the
observation or measurement
process) inherent to the system. Systematic error may also refer to an
error with a non-zero mean, the effect of which is not reduced when
observations are averaged.
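The difference between the two components can be shown with a short simulation: averaging many repeated measurements shrinks the random error toward zero but leaves the systematic bias untouched. A Python sketch with illustrative, made-up numbers:

```python
# Random vs. systematic error: averaging removes only the random part.
import random

random.seed(1)
true_value = 100.0
bias = 0.8          # systematic error, e.g. a miscalibrated instrument

def measure():
    # each measurement carries the fixed bias plus Gaussian noise
    return true_value + bias + random.gauss(0.0, 2.0)

n = 10_000
mean = sum(measure() for _ in range(n)) / n
# mean is close to 100.8, not 100.0: the bias survives averaging
```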
Observer Errors
can happen when
people know they're being watched or videotaped; they will sometimes
change their behavior.
Probe Effect is unintended
alteration in system
behavior caused by measuring that system. In code profiling and
performance measurements, the delays introduced by insertion/removal of
code instrumentation may result in a non-functioning application, or
unpredictable behavior.
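A small Python illustration of the probe effect in profiling: timing instrumentation inserted into a loop is itself work, so the instrumented run takes measurably longer than the plain one (absolute times are machine-dependent):

```python
# The probe effect: per-iteration instrumentation changes what is measured.
import time

def work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def work_instrumented(n, log):
    total = 0
    for i in range(n):
        t = time.perf_counter()   # the "probe"
        total += i * i
        log.append(time.perf_counter() - t)
    return total

n = 200_000
t0 = time.perf_counter()
work(n)
plain = time.perf_counter() - t0

log = []
t0 = time.perf_counter()
work_instrumented(n, log)
probed = time.perf_counter() - t0
# probed exceeds plain: the act of measuring altered the system's timing
```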
The detector interferes with the
measurement. The detector after one of the slits intercepting the photon,
changes the boundary conditions to a different system, and thus a
different Ψ∗Ψ. It is no longer the same experimental setup.
Interference pattern is clearly observed.
To make the "which-way" detector, a quarter wave plate is put in front of
each slit. This device is a special crystal that can change linearly
polarized light into circularly polarized light.
Retrocausality or backwards causation is a concept of
cause and effect
in which an effect precedes its cause in time and so a later event affects
an earlier one. In quantum physics, the distinction between cause and
effect is not made at the most fundamental level and so time-symmetric
systems can be viewed as causal or retrocausal. Philosophical
considerations of time travel often address the same issues as
retrocausality, as do treatments of the subject in fiction, but the two
phenomena are distinct. Retrocausality is associated with the Double
Inferential state-Vector Formalism (DIVF), later known as the two-state
vector formalism (TSVF) in quantum mechanics, where the present is
characterised by quantum states of the past and the future taken in
combination.
Born Rule is a key postulate of quantum mechanics which gives the
probability that a measurement of a quantum system will yield a given
result. In its simplest form, it states that the probability density of
finding a particle at a given point is proportional to the square of the
magnitude of the particle's
wavefunction at that point.
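On a discretized grid the Born rule reads P(cell i) = |ψ(xᵢ)|²·Δx, and the probabilities sum to 1 once ψ is normalized. A minimal NumPy sketch with a Gaussian packet:

```python
# Born rule on a discretized wavefunction: probability per cell is
# |psi|^2 * dx, summing to 1 after normalization.
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2)                        # unnormalized Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalize: sum |psi|^2 dx = 1

prob = np.abs(psi)**2 * dx                     # Born rule, per cell
print(prob.sum())                              # ~1.0
print(prob[np.abs(x) <= 1].sum())              # P(|x| <= 1) ~ erf(1) ~ 0.84
```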
Observer Effect in information technology is the impact on the
behavior of a computer
process caused by
the act of observing the process
while it is running. This effect is a manifestation of the uncertainty
principle in information technology. The uncertainty principle is
attributed to
Werner Heisenberg and was originally referring to
quantum mechanics.
Heisenbug is a software bug that seems to disappear or alter
its behavior when one attempts to study it.
Uncertainty Principle also known as
Heisenberg's uncertainty
principle, is any of a variety of mathematical inequalities asserting a
fundamental limit to the
precision with which certain pairs of physical
properties of a
particle, known as complementary
variables, such as
position x and momentum p, can be known.
Chaos Theory -
Anti-Particle -
Black Holes -
Information Paradox
Heisenberg Uncertainty Principle states that there is an
absolute
limit on the combined
accuracy of certain pairs of
simultaneous, related measurements,
especially that of the position and momentum of a particle. Originally
posited as a problem of measurement, it was soon refined as an inherent
property of the universe.
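The bound is saturated by a Gaussian wave packet, for which Δx·Δp = ħ/2 exactly. This can be checked numerically (with ħ = 1) using ⟨p²⟩ = ħ²∫|ψ′|² dx for a real wavefunction:

```python
# Numerical check that a Gaussian packet saturates the uncertainty bound.
import numpy as np

hbar = 1.0
sigma = 0.7                                    # packet width (arbitrary)
x = np.linspace(-10 * sigma, 10 * sigma, 4001)
h = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

dx_ = np.sqrt(np.sum(x**2 * psi**2) * h)       # position spread (<x> = 0)
dpsi = np.gradient(psi, x)
dp_ = hbar * np.sqrt(np.sum(dpsi**2) * h)      # momentum spread (<p> = 0)
print(dx_ * dp_)                               # ~0.5, i.e. hbar / 2
```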
Refraction and
Diffraction of Light -
Wave or Particle?
Electrons are
waves and only
look like
particles when we look at them or try to
measure them.
Schrödinger Equation
Superposition.
Quantum
Fluctuation which is the
temporary appearance
of energetic
particles out of nothing, as allowed
by the
Uncertainty Principle. It is also
known as
vacuum fluctuation. It is the temporary
random change in the
amount of energy in a point in space, as prescribed by Werner Heisenberg's
uncertainty principle. They are minute random fluctuations in the values
of the
fields which represent elementary particles,
such as electric and magnetic
fields which represent the electromagnetic
force carried by photons, W and Z fields which carry the
weak force, and gluon fields which carry the strong force. Vacuum
fluctuations appear as
virtual particles, which
are always created in
particle-antiparticle pairs.
Since they are
created
spontaneously without a source of energy, vacuum fluctuations and
virtual particles are said to violate the
conservation of energy.
This is theoretically allowable because the particles
annihilate each other within a time limit
determined by the uncertainty principle so they are not directly
observable. This means that pairs of virtual particles with energy ΔE
and lifetime shorter than Δt are
continually created and annihilated in
empty space. Although the particles are not directly detectable, the
cumulative effects of these particles are measurable. For example, without
quantum fluctuations, the "bare" mass and charge of elementary particles
would be infinite; from renormalization theory the shielding effect of the
cloud of virtual particles is responsible for the finite mass and charge
of elementary particles. Another consequence is the Casimir effect. One of
the first observations which was evidence for vacuum fluctuations was the
Lamb shift in hydrogen. In July 2020, scientists reported that quantum
vacuum fluctuations can influence the motion of macroscopic, human-scale
objects by measuring correlations below the standard quantum limit between
the position/momentum uncertainty of the mirrors of LIGO and the photon
number/phase uncertainty of light that they reflect.
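The time limit set by the uncertainty principle can be estimated directly: a virtual electron–positron pair "borrows" ΔE = 2mₑc² and can persist at most Δt ≈ ħ/(2ΔE). A few lines of arithmetic in Python:

```python
# Energy-time uncertainty estimate for a virtual electron-positron pair.
hbar = 1.054571817e-34                   # J*s
me_c2 = 0.51099895e6 * 1.602176634e-19   # electron rest energy, J

dE = 2 * me_c2                           # energy "borrowed" by the pair
dt = hbar / (2 * dE)                     # maximum lifetime, s
print(dt)                                # on the order of 3e-22 s
```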
Wave Function Collapse occurs when a
wave function—initially
in a
superposition of several eigenstates—reduces
to a single eigenstate due to interaction with the external world. This
interaction is called an "
observation". It
is the essence of a measurement in quantum mechanics which connects the
wave function with classical observables like position and momentum.
Collapse is one of two processes by which
quantum systems evolve in time; the other is the continuous evolution
via the Schrödinger equation. Collapse is a black box for a
thermodynamically irreversible interaction with a classical environment.
Calculations of quantum decoherence show that when a quantum system
interacts with the environment, the
superpositions
apparently reduce to mixtures of classical alternatives. Significantly,
the combined wave function of the system and environment continues to obey
the Schrödinger equation throughout this apparent collapse. More
importantly, this is not enough to explain actual wave function collapse,
as decoherence does not reduce it to a single eigenstate. Historically
Werner Heisenberg was the first to use the idea of wave function reduction
to explain quantum measurement.
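A toy simulation of collapse, assuming nothing beyond the Born rule: the first measurement picks an eigenstate with probability |amplitude|², and an immediate repeat of the same measurement returns the same eigenstate, since the superposition has already collapsed:

```python
# Toy collapse model: measure a two-state superposition, then measure again.
import random

random.seed(7)
amplitudes = {"up": 0.6, "down": 0.8}        # |0.6|^2 + |0.8|^2 = 1

def measure(state):
    """Collapse a superposition (dict of amplitudes) to one eigenstate."""
    if isinstance(state, str):               # already collapsed: no change
        return state
    names = list(state)
    probs = [abs(state[k]) ** 2 for k in names]
    return random.choices(names, weights=probs)[0]

first = measure(amplitudes)
second = measure(first)                      # repeat on the collapsed state
assert first == second                       # repeated measurement agrees
```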
Measurement Problem in quantum mechanics is the problem of how (or
whether)
wave function collapse occurs. The inability to observe such a
collapse directly has given rise to different
interpretations of quantum
mechanics and poses a key set of questions that each interpretation must
answer. The wave function in quantum mechanics evolves deterministically
according to the Schrödinger equation as a linear superposition of
different states. However, actual
measurements
always find the physical system in a definite state. Any future evolution
of the wave function is based on the state the system was discovered to be
in when the measurement was made, meaning that the measurement "did
something" to the system that is not obviously a consequence of
Schrödinger evolution. The measurement problem is describing what that
"something" is, how a superposition of many possible values becomes a
single measured value. To express matters differently (paraphrasing Steven
Weinberg), the Schrödinger wave equation determines the wave function at
any later time. If observers and their measuring apparatus are themselves
described by a deterministic wave function, why can we not predict precise
results for measurements, but only probabilities? As a general question:
How can one establish a correspondence between quantum and classical
reality?
Spooky Action at a Distance is the concept that an object can be
moved, changed, or otherwise affected without being physically touched (as
in mechanical contact) by another object. That is, it is the nonlocal
interaction of objects that are separated in space.
Quantum Entanglement.
Complementarity in physics is both a theoretical and an experimental
result of quantum mechanics, also referred to as principle of
complementarity. It holds that objects have complementary properties which
cannot all be observed or measured simultaneously.
The complementarity principle was formulated by Niels Bohr, a leading
founder of
quantum mechanics. Examples of complementary properties that
Bohr considered: Position and momentum. Energy and duration. Spin on
different axes. Wave and particle. Value of a field and its change (at a
certain position). Entanglement and coherence.
Relativity & The
Equivalence of Reference Frames - Breakthrough Junior Challenge 2017 (youtube)
Smart atomic cloud solves Heisenberg's observation problem.
Wheeler's Delayed Choice Experiment attempts to decide
whether light somehow "senses" the experimental apparatus in the
double-slit experiment it will travel through and adjusts its behavior to
fit by assuming the appropriate determinate state for it, or whether light
remains in an indeterminate state, neither wave nor particle, until
measured. Wheeler's delayed-choice experiment is actually several thought
experiments in quantum physics.
Counterfactual Quantum Computation is a method of inferring the result
of a computation without actually running a quantum computer otherwise
capable of actively performing that computation.
Electro-Optic Modulator is an optical device in which a
signal-controlled element exhibiting the electro-optic effect is used to
modulate a beam of light. The modulation may be imposed on the phase,
frequency, amplitude, or polarization of the beam. Modulation bandwidths
extending into the gigahertz range are possible with the use of
laser-controlled modulators.
Mach–Zehnder Interferometer is a device used to determine the
relative phase shift variations between two collimated beams derived by
splitting light from a single source. The interferometer has been used,
among other things, to measure phase shifts between the two beams caused
by a sample or a change in length of one of the paths.
Ludwig Zehnder
(wiki).
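For an ideal Mach–Zehnder interferometer with 50/50 beam splitters, a relative phase φ between the arms routes the light between the two output ports as I₁ = cos²(φ/2) and I₂ = sin²(φ/2), conserving energy. A minimal Python sketch:

```python
# Ideal Mach-Zehnder interferometer: output intensities vs. arm phase shift.
import math

def output_intensities(phi):
    """Fractions of the input light at the two output ports."""
    return math.cos(phi / 2) ** 2, math.sin(phi / 2) ** 2

i1, i2 = output_intensities(0.0)        # equal paths: all light in port 1
print(i1, i2)                           # 1.0 0.0
i1, i2 = output_intensities(math.pi)    # half-wave shift: all in port 2
print(i1, i2)
```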
Sagnac Effect is a phenomenon encountered in interferometry that is
elicited by rotation. The Sagnac effect manifests itself in a setup called
a ring interferometer. A beam of light is split and the two beams are made
to follow the same path but in opposite directions. On return to the point
of entry the two light beams are allowed to exit the ring and undergo
interference. The relative phases of the two exiting beams, and thus the
position of the interference fringes, are shifted according to the angular
velocity of the apparatus. In other words, when the interferometer is at
rest with respect to the earth, the light travels at a constant speed.
However, when the interferometer system is spun, one beam of light will
slow with respect to the other beam of light. A gimbal mounted mechanical
gyroscope remains pointing in the same direction after spinning up, and
thus can be used as a rotational reference for an inertial navigation
system. With the development of so-called laser gyroscopes and fiber optic
gyroscopes based on the Sagnac effect, the bulky mechanical gyroscope is
replaced by one having no moving parts in many modern inertial navigation
systems. The principles behind the two devices are different, however. A
conventional gyroscope relies on the principle of conservation of angular
momentum whereas the sensitivity of the ring interferometer to rotation
arises from the invariance of the speed of light for all inertial frames
of reference.
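The fringe shift follows the Sagnac formula Δφ = 8πAΩ/(λc). A Python sketch for a fiber-optic gyroscope sensing Earth's rotation; the enclosed-area figure is illustrative, not a real device specification:

```python
# Sagnac phase shift for a rotating ring interferometer.
import math

c = 2.99792458e8            # speed of light, m/s
lam = 1.55e-6               # wavelength, m (telecom-band light)
A = 100.0                   # m^2, effective enclosed area (illustrative)
omega_earth = 7.292e-5      # Earth's rotation rate, rad/s

dphi = 8 * math.pi * A * omega_earth / (lam * c)
print(dphi)                 # radians; tiny, but measurable in a fiber gyro
```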
Transactional interpretation takes the psi and psi*
wave
functions of the standard quantum formalism to be retarded (
forward in
time) and advanced (
backward in time) waves that form a quantum
interaction as a Wheeler–Feynman handshake or transaction. Transactional
interpretation of
quantum mechanics was first
proposed in 1986 by John G. Cramer, who argues that it helps in developing
intuition for quantum processes. He also suggests that it avoids the
philosophical problems with the Copenhagen interpretation and the role of
the observer, and also resolves various quantum paradoxes. TIQM formed a
minor plot point in his science fiction novel Einstein's Bridge.
Neutrinos' Metamorphosis
De Broglie-Bohm Theory is an interpretation of quantum
theory. In addition to a
wave function on the space of all possible
configurations, it also postulates an actual configuration that exists
even when unobserved.
Doppler Effect is the change in frequency or
wavelength of a wave (or other periodic event) for an observer moving
relative to its source. It is named after the Austrian physicist Christian
Doppler, who proposed it in 1842 in Prague. It is commonly heard when a
vehicle sounding a siren or horn approaches, passes, and recedes from an
observer. Compared to the emitted frequency, the received frequency is
higher during the approach, identical at the instant of passing by, and
lower during the recession.
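For sound, the classical shift for a moving source and stationary observer is f_observed = f_source·v/(v ∓ v_s). A short Python example with a siren at 30 m/s reproduces the higher-on-approach, lower-on-recession pattern:

```python
# Classical Doppler shift for a moving sound source, stationary observer.
v = 343.0        # speed of sound in air, m/s (about 20 C)
f = 700.0        # siren frequency, Hz
vs = 30.0        # source speed, m/s

approaching = f * v / (v - vs)   # source moving toward the observer
receding = f * v / (v + vs)      # source moving away
print(round(approaching), round(receding))   # 767 644
```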
Bouncing-Droplet Experiments
The
Pilot-Wave Dynamics of Walking Droplets (youtube)
Mie
Scattering occurs when the diameters of atmospheric particulates are
similar to or larger than the wavelengths of the light. Dust, pollen,
smoke and microscopic water droplets that form clouds are common causes of
Mie scattering.
Schrödinger's Cat is a thought experiment, sometimes
described as a
paradox, devised by Austrian physicist Erwin Schrödinger in
1935. It illustrates what he saw as the problem of the Copenhagen
interpretation of quantum mechanics applied to everyday objects. The
scenario presents a cat that may be simultaneously both alive and dead, a
state known as a
quantum superposition, as a result of being linked to a
random subatomic event that may or may not occur. The thought experiment
is also often featured in theoretical discussions of the interpretations
of quantum mechanics. Schrödinger coined the term Verschränkung
(entanglement) in the course of developing the thought experiment.
Wave Function.
Observation Flaws
(Psychology) -
Paranormal
-
Free Will -
A person's thoughts can affect a water drop.
Implicate and Explicate Order is used to describe two different
frameworks for understanding the same phenomenon or aspect of reality. In
particular, the concepts were developed in order to explain the bizarre
behavior of subatomic particles which quantum physics struggles to
explain. In Bohm's
Wholeness and the Implicate Order, he used these notions to describe
how the appearance of such phenomena might appear differently, or might be
characterized by, varying principal factors, depending on contexts such as
scales. The implicate (also referred to as the "enfolded") order is seen
as a deeper and more fundamental order of reality. In contrast, the
explicate or "unfolded" order includes the abstractions that humans
normally perceive. As he wrote: In the enfolded [or implicate] order,
space and time are no longer the dominant factors determining the
relationships of dependence or independence of different elements. Rather,
an entirely different sort of basic connection of elements is possible,
from which our ordinary notions of space and time, along with those of
separately existent material particles, are abstracted as forms derived
from the deeper order. These ordinary notions in fact appear in what is
called the "explicate" or "unfolded" order, which is a special and
distinguished form contained within the general totality of all the
implicate orders (Bohm 1980, p. xv).
Hidden-Variable Theory are proposals to provide deterministic
explanations of quantum mechanical phenomena, through the introduction of
unobservable hypothetical entities. The existence of indeterminacy for
some measurements is assumed as part of the mathematical formulation of
quantum mechanics; moreover, bounds for indeterminacy can be expressed in
a quantitative form by the
Heisenberg uncertainty principle. As per its
mathematical formulation, quantum mechanics is non-deterministic, meaning
that it generally does not predict the outcome of any measurement with
certainty. Instead, it indicates what the probabilities of the outcomes
are, with the indeterminism of observable quantities constrained by the
uncertainty principle. The question arises whether there might be some
deeper reality hidden beneath quantum mechanics, to be described by a more
fundamental theory that can always predict the outcome of each measurement
with certainty: if the exact properties of every subatomic particle were
known the entire system could be modeled exactly using deterministic
physics similar to classical physics. In other words, it is conceivable
that quantum mechanics is an incomplete description of nature. The
designation of variables as underlying "hidden"
variables depends on the
level of physical description (so, for example, "if a gas is described in
terms of temperature, pressure, and volume, then the velocities of the
individual atoms in the gas would be hidden variables"). Physicists
supporting De Broglie–Bohm theory maintain that underlying the observed
probabilistic nature of the universe is a deterministic objective
foundation/property—the hidden variable. Others, however, believe that
there is no deeper deterministic reality in quantum mechanics. A lack of a
kind of realism (understood here as asserting independent existence and
evolution of physical quantities, such as position or momentum, without
the process of measurement) is crucial in the Copenhagen interpretation.
Realistic interpretations (which were already incorporated, to an extent,
into the physics of Feynman), on the other hand, assume that particles
have certain trajectories. Under such view, these trajectories will almost
always be continuous, which follows both from the finitude of the
perceived speed of light ("leaps" should rather be precluded) and, more
importantly, from the principle of least action, as deduced in quantum
physics by Dirac. But continuous movement, in accordance with the
mathematical definition, implies deterministic movement for a range of
time arguments; and thus realism is, under modern physics, one more reason
for seeking (at least certain limited) determinism and thus a
hidden-variable theory (especially that such theory exists: see De
Broglie–Bohm interpretation). Although determinism was initially a major
motivation for physicists looking for hidden-variable theories,
non-deterministic theories trying to explain what the supposed reality
underlying the quantum mechanics formalism looks like are also considered
hidden-variable theories; for example Edward Nelson's stochastic
mechanics. "
God does not play dice" In June
1926, Max Born published a paper, "Zur Quantenmechanik der Stoßvorgänge"
("Quantum Mechanics of Collision Phenomena") in the scientific journal
Zeitschrift für Physik, in which he was the first to clearly enunciate the
probabilistic interpretation of the quantum wave function, which had been
introduced by Erwin Schrödinger earlier in the year. Born concluded the
paper as follows: Here the whole problem of determinism comes up. From the
standpoint of our quantum mechanics there is no quantity which in any
individual case causally fixes the consequence of the collision; but also
experimentally we have so far no reason to believe that there are some
inner properties of the atom which conditions a definite outcome for the
collision. Ought we to hope later to discover such properties ... and
determine them in individual cases? Or ought we to believe that the
agreement of theory and experiment—as to the impossibility of prescribing
conditions for a causal evolution—is a pre-established harmony founded on
the nonexistence of such conditions? I myself am inclined to give up
determinism in the world of atoms. But that is a philosophical question
for which physical arguments alone are not decisive.
Length Contraction is the phenomenon of a decrease in length of an
object as measured by an observer who is traveling at any non-zero
velocity relative to the object.
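Quantitatively, L = L₀·√(1 − v²/c²). A short Python check: at 80% of the speed of light, a 10 m object is measured as 6 m along its direction of motion:

```python
# Length contraction: L = L0 * sqrt(1 - v^2 / c^2).
import math

def contracted_length(l0, v, c=2.99792458e8):
    """Length measured by an observer moving at speed v relative to the object."""
    return l0 * math.sqrt(1 - (v / c) ** 2)

c = 2.99792458e8
print(contracted_length(10.0, 0.8 * c))   # 6.0 m (within float rounding)
```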
Space time.
David Bohm was an American scientist who has been described
as one of the most significant theoretical physicists of the 20th century
and who contributed unorthodox ideas to quantum theory, neuropsychology
and the philosophy of mind.
Richard Feynman was an American theoretical physicist known
for his work in the path integral formulation of quantum mechanics, the
theory of quantum electrodynamics, and the physics of the superfluidity of
supercooled liquid helium, as well as in particle physics for which he
proposed the parton model. For his contributions to the development of
quantum electrodynamics, Feynman, jointly with Julian Schwinger and
Sin'ichirō Tomonaga, received the Nobel Prize in Physics in 1965.
Feynman Diagram is a pictorial representation of the mathematical
expressions describing the behavior and interaction of
subatomic particles.
Correspondence Problem refers to the problem of ascertaining
which parts of one image correspond to which parts of another image, where
differences are due to movement of the camera, the elapse of time, and/or
movement of objects in the photos.
Coherence in physics means that two wave sources are perfectly coherent
if they have a constant phase difference and the same frequency. It is an ideal
property of waves that enables stationary (i.e. temporally and spatially
constant) interference. It contains several distinct concepts, which are
limiting cases that never quite occur in reality but allow an
understanding of the physics of waves, and has become a very important
concept in quantum physics. More generally, coherence describes all
properties of the correlation between physical quantities of a single
wave, or between several waves or wave packets.
Electromagnetic Field -
Light -
Color -
Hertz -
Sound
-
Quantum
-
Reality
-
Spatial Intelligence.
Widely used engineering technique has unintended consequences. A
Focused Ion Beam (FIB) can in fact dramatically alter a material's
structural identity. A FIB setup is a scientific instrument that resembles
a scanning electron microscope (SEM). However, while the SEM uses a
focused beam of electrons to image the sample in the chamber, a FIB setup
uses a focused beam of ions instead. FIB can also be incorporated in a
system with both electron and ion beam columns, allowing the same feature
to be investigated using either of the beams. FIB should not be confused
with using a beam of focused ions for direct-write lithography (such as in
proton beam writing); these are generally quite different systems where
the material is modified by other mechanisms.
PhysClips Waves and Sound.
Perfect transmission through a barrier using sound. A research team has
for the first time experimentally proved a century-old quantum theory that
relativistic particles can pass through a barrier with 100% transmission.
The perfect transmission of sound through a barrier is difficult, if not
impossible, to achieve based on our existing knowledge. This is also true
with other energy forms such as light and heat.
Leggett-Garg Inequality is a mathematical inequality
fulfilled by all
macrorealistic physical theories. Here, macrorealism
(macroscopic realism) is a classical worldview defined by the conjunction
of two postulates: Macrorealism per se: "A macroscopic object, which has
available to it two or more macroscopically distinct states, is at any
given time in a definite one of those states." Noninvasive measurability:
"It is possible in principle to determine which of these states the system
is in without any effect on the state itself, or on the subsequent system
dynamics."
Life is
like a
simulation because humans understand how
simulations work
and see the similarities in life.
This does not mean that
life is a
simulation. But what it does mean is that we are getting
closer
to fully understanding how our universe works, and that we have
more
controls than we ever dreamed
about having. When people ask the wrong questions, they can
easily make inaccurate assumptions.
God
Science: Episode One - The Simulation Hypothesis (youtube)
Does matter create mind or does mind create matter?
Both. Matter creates the mind and the mind creates matter.
Mind over Matter...as a
Matter of Fact...the
Mind drives the
Mass.
-
Observation Errors
Corporeal is having material
or
physical form or substance. Affecting or characteristic of the body as
opposed to the mind or spirit.
Dualism is the position that
mental phenomena are, in some
respects,
non-physical, or that the mind and body are not identical. Thus,
it encompasses a set of views about the relationship between mind and
matter, and between subject and object, and is contrasted with other
positions, such as physicalism and enactivism, in the mind–body problem.
Waves - Oscillations
Wave is an
oscillation accompanied by a
transfer of energy
that travels through a
medium such
as
space or
mass. Wave is a
pattern
of
movement that
goes up and down or back and forth.
Frequency refers to the number of wave cycles that pass a given point
per unit of time. Wave motion transfers energy from one point to another,
often with little or no permanent displacement of the particles of the
transmission medium–that is, with little or no associated mass transport.
Waves consist, instead, of
oscillations or
vibrations (of
a physical quantity), around almost fixed locations. A wave is a
disturbance that transfers energy through matter or space. There are two
main types of waves.
Mechanical waves propagate through a medium, and the substance of this
medium is deformed. Restoring forces then reverse the deformation. For
example,
sound waves propagate via air molecules colliding with their
neighbors. When the molecules collide, they also bounce away from each
other (
a restoring force). This keeps the molecules from continuing to
travel in the direction of the wave. The second main type, electromagnetic
waves, do not require a medium. Instead, they consist of periodic
oscillations of
electrical and magnetic fields originally generated by
charged particles, and can therefore travel through a
vacuum. These types
vary in wavelength, and include
radio waves, microwaves, infrared
radiation,
visible light, ultraviolet radiation, X-rays and gamma rays.
Waves are described by a wave equation which sets out how the disturbance
proceeds over
time. The mathematical form of this equation varies
depending on the type of wave. Further, the behavior of particles in
quantum mechanics is described by waves. In addition, gravitational waves
also travel through space, which are a result of a vibration or movement
in gravitational fields. A wave can be transverse, where a disturbance
creates
oscillations that are perpendicular to the propagation of energy
transfer, or longitudinal: the oscillations are parallel to the direction
of energy propagation. While mechanical waves can be both transverse and
longitudinal, all
electromagnetic waves are transverse in free space.
Any
motion that repeats itself
after an interval of time is called vibration or oscillation. The swinging
of a pendulum and the
motion of a
plucked string are typical examples of vibration. The theory of vibration
deals with the study of oscillatory
motions of bodies and the forces associated with them.
Visible Spectrum -
Wave Particle Duality -
Noise -
Electricity -
Wireless Energy
-
Tone Generator
Waveform is the shape
and form of a signal such as a wave moving in a physical medium or an
abstract representation. In many cases the medium in which the wave
propagates does not permit a direct observation of the true form. In these
cases, the term "waveform" refers to the shape of a graph of the varying
quantity against time. An instrument called an oscilloscope can be used to
pictorially represent a wave as a repeating image on a screen. To be more
specific, a waveform is depicted by a graph that shows the changes in a
recorded signal's amplitude over the duration of recording. The amplitude
of the signal is measured on the y-axis (vertical), and time on the x-axis (horizontal).
Fields.
Wavefront of a
time-varying wave field is the set
(locus) of all points having the same phase. The term is generally
meaningful only for fields that, at each point, vary sinusoidally in time
with a single temporal frequency (otherwise the phase is not well
defined). Wavefronts usually move with time. For waves propagating in a
unidimensional medium, the wavefronts are usually single points; they are
curves in a two dimensional medium, and surfaces in a three-dimensional
one.
Mirrors.
Pulse in electronics is a sharp transient wave in the normal electrical
state, or a series of such transients. In physiology, the pulse is the
rhythmic contraction and expansion of the arteries with each beat of the
heart, or the rate at which the heart beats, usually measured to obtain a
quick evaluation of a person's health.
Pulse Rate.
Standing Wave is
a wave in which its peaks (or any other point on the wave) do not move
spatially. The amplitude of the wave at a point in space may vary with
time, but its phase remains constant. The locations at which the amplitude
is minimum are called nodes, and the locations where the amplitude is
maximum are called antinodes.
Doppler Effect is the change in frequency or wavelength of a wave for
a receiver and source in relative motion.
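The frequency shift for waves in a medium follows the standard relation f' = f(v + v_obs)/(v − v_src). A short Python sketch (the function name and the 343 m/s speed of sound in room-temperature air are assumptions):

```python
def doppler_frequency(f_source_hz, v_observer_m_s, v_source_m_s, v_wave_m_s=343.0):
    """Observed frequency for a wave traveling through a medium.
    Positive observer speed = moving toward the source;
    positive source speed = moving toward the observer."""
    return f_source_hz * (v_wave_m_s + v_observer_m_s) / (v_wave_m_s - v_source_m_s)

# A 440 Hz siren approaching a stationary listener at 30 m/s sounds higher:
print(doppler_frequency(440.0, 0.0, 30.0))  # ≈ 482 Hz
```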
Node in physics is a point along a standing wave where the wave has
minimum amplitude. For instance, in a vibrating guitar string, the ends of
the string are nodes. By changing the position of the end node through
frets, the guitarist changes the effective length of the vibrating string
and thereby the note played. The opposite of a node is an anti-node, a
point where the amplitude of the standing wave is a maximum. These occur
midway between the nodes.
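For a string fixed at both ends, the n-th harmonic has n + 1 evenly spaced nodes, including the two fixed ends. A minimal sketch (the function name and the 0.65 m guitar-scale length are illustrative):

```python
def node_positions(length_m, harmonic):
    """Node positions along a string fixed at both ends, vibrating in its
    n-th harmonic. There are harmonic + 1 nodes, counting both fixed ends."""
    return [k * length_m / harmonic for k in range(harmonic + 1)]

# Second harmonic of a 0.65 m string: nodes at both ends and the midpoint.
print(node_positions(0.65, 2))  # → [0.0, 0.325, 0.65]
```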
Sound Waves Make
Shapes -
Slow
Moving Waves in Giant Hanging Loops - Cool Science Experiment (youtube)
Arbitrary Waveform Generator is a piece of electronic test equipment
used to generate electrical waveforms. These waveforms can be either
repetitive or single-shot (once only) in which case some kind of
triggering source is required (internal or external). The resulting
waveforms can be injected into a device under test and analyzed as they
progress through it, confirming the proper operation of the device or
pinpointing a fault in it.
Digital
Pattern Generator.
Function Generator is usually a piece of
electronic test equipment or
software used to generate different types of electrical waveforms over a
wide range of frequencies. Some of the most common waveforms produced by
the function generator are the sine, square, triangular and sawtooth
shapes. These waveforms can be either repetitive or single-shot (which
requires an internal or external trigger source). Integrated circuits used
to generate waveforms may also be described as function generator ICs.
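The four common waveforms named above can each be computed from the phase (the position within the current cycle). A minimal software sketch of such a generator, assuming unit amplitude (the function name is illustrative):

```python
import math

def sample(waveform, freq_hz, t_s):
    """One sample (in the range -1..1) of the named waveform at time t_s."""
    phase = (freq_hz * t_s) % 1.0            # position within the current cycle, 0..1
    if waveform == "sine":
        return math.sin(2 * math.pi * phase)
    if waveform == "square":
        return 1.0 if phase < 0.5 else -1.0
    if waveform == "sawtooth":
        return 2.0 * phase - 1.0             # ramps from -1 up to +1, then resets
    if waveform == "triangle":
        return 1.0 - 4.0 * abs(phase - 0.5)  # -1 at the cycle ends, +1 at mid-cycle
    raise ValueError(f"unknown waveform: {waveform}")

# A 1 Hz sawtooth a quarter of the way through its cycle:
print(sample("sawtooth", 1.0, 0.25))  # → -0.5
```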
Electromagnetic waves have both
electric and
magnetic field components, which
oscillate in phase
perpendicular to each other and perpendicular to the direction of energy
propagation, but they
carry no
electric charge themselves. The creation of all electromagnetic waves
begins with an oscillating charged particle, which creates oscillating
electric and magnetic fields. Once in
motion, the electric
and magnetic fields that a charged particle creates are self-perpetuating:
time-dependent changes in one field (electric or magnetic) produce the
other.
Massless.
Wave
Function in
quantum physics is a mathematical
description of the quantum state of a system. The wave function is a
complex-valued probability amplitude, and the probabilities for the
possible results of measurements made on the system can be derived from
it. The wave function is a function of the degrees of freedom
corresponding to some maximal set of commuting observables. Once such
a representation is chosen, the wave function can be derived from the
quantum state.
The most common symbols for a wave function are the Greek letters ψ or
Ψ (lower-case and capital psi, respectively).
Coherence is the property of two wave sources that have a constant phase
difference, the same frequency, and the same waveform. It is an ideal
property of waves that enables stationary (i.e. temporally and spatially
constant) interference. More generally, coherence describes all properties
of the correlation between physical quantities of a single wave, or
between several waves or wave packets.
The equation, E=hf, is referred to as the Planck relation or the
Planck-Einstein relation. The letter h is named after Planck, as Planck’s
constant. Energy (E) is related to this constant h, and to the
frequency (f) of the
electromagnetic wave.
Planck Constant is the quantum of electromagnetic action that relates
a photon's energy to its frequency. The Planck constant multiplied by a
photon's frequency is equal to a photon's energy. The Planck constant is a
fundamental physical constant denoted as h, and of fundamental importance
in quantum mechanics. In metrology it is used to define the kilogram in SI
units. The Planck constant is defined to have the exact value h =
6.62607015×10⁻³⁴ J·s in SI units.
Planck-Einstein Relation is a fundamental equation in quantum
mechanics which states that the energy of a photon, E, known as photon
energy, is proportional to its frequency, ν: E = hν.
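Combining E = hf with c = λf gives the photon energy for any wavelength. A minimal sketch using the exact SI values of h and c (the function name and the 500 nm example are illustrative):

```python
H = 6.62607015e-34   # Planck constant, J·s (exact by SI definition)
C = 299_792_458.0    # speed of light, m/s (exact by SI definition)

def photon_energy_joules(wavelength_m):
    """E = h·f, with f = c / λ for an electromagnetic wave."""
    return H * C / wavelength_m

# A 500 nm (green) photon carries roughly 4×10⁻¹⁹ J:
print(photon_energy_joules(500e-9))
```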
Wave or Particle or Both
Wave Particle Duality is the concept that every elementary
particle or quantic entity may be partly described in terms not only of
particles, but also of waves. It expresses the inability of the classical
concepts "
particle" or "
wave" to fully describe the behavior of
quantum-scale objects.
Observation Errors -
Spirals
Wavicle is an entity
having characteristic properties of both waves and particles. A
wave-particle which simultaneously has the properties of a wave and a
particle.
Double-Slit Experiment is a demonstration that
light and matter can
display characteristics of both classically defined waves and
particles; moreover, it displays the fundamentally probabilistic nature of
quantum mechanical
phenomena.
Electrons behave differently when they're
observed.
Copenhagen Interpretation is an expression of the meaning of
quantum mechanics that states that material
objects, on a microscopic level, generally
do not have
definite properties prior to being measured, and quantum mechanics can
only predict the probability distribution of a given measurement's
possible results.
The act of measurement affects the
system, causing the set of probabilities to reduce to only one of the
possible values immediately after the measurement. This feature is known
as wave function collapse.
Wave
Interference is a phenomenon in which two waves superpose to form a
resultant wave of greater, lower, or the same amplitude. Interference
usually refers to the interaction of waves that are correlated or coherent
with each other, either because they come from the same source or because
they have the same or nearly the same frequency. Interference effects can
be observed with all types of waves, for example, light, radio, acoustic,
surface water waves or matter waves.
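Superposition can be shown directly by adding two sine waves sampled at the same instant: in phase they reinforce, half a cycle out of phase they cancel. A minimal sketch (function name and the 440 Hz example are illustrative):

```python
import math

def superpose(amplitude, freq_hz, phase_diff_rad, t_s):
    """Sum of two equal-amplitude sine waves that differ only in phase."""
    wave_a = amplitude * math.sin(2 * math.pi * freq_hz * t_s)
    wave_b = amplitude * math.sin(2 * math.pi * freq_hz * t_s + phase_diff_rad)
    return wave_a + wave_b

t = 0.25 / 440.0  # a quarter-cycle into a 440 Hz tone, where each sine peaks
print(superpose(1.0, 440.0, 0.0, t))      # in phase: constructive, amplitude doubles (≈ 2)
print(superpose(1.0, 440.0, math.pi, t))  # half a cycle out of phase: destructive (≈ 0)
```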
Interference
Pattern.
Quantum Decoherence is the loss of quantum coherence. In quantum
mechanics, particles such as
electrons behave like waves and are described
by a wavefunction. These waves can interfere, leading to the peculiar
behaviour of quantum particles. As long as there exists a definite phase
relation between different states, the system is said to be coherent. This
coherence is a fundamental property of
quantum
mechanics, and is necessary
for the functioning of
quantum computers. However, when a quantum system
is not perfectly isolated, but in contact with its surroundings, the
coherence decays with time, a process called quantum decoherence. As a
result of this process, the quantum behaviour is lost.
Quantum Superposition.
Wave Function Collapse is said to occur when a wave
function—initially in a
superposition of several eigenstates—appears to
reduce to a single eigenstate (by "observation"). It is the essence of
measurement in quantum mechanics and connects the wave function with
classical observables like position and momentum. Collapse of the wave
function is when multiple states suddenly transitions to a single definite
state upon
measurement or interaction with the
environment, essentially "collapsing" its potential possibilities into a
single observable outcome; this is a key concept in understanding the
strange behaviors of
quantum particles.
Schrödinger Equation is a mathematical equation that describes the
changes over time of a physical system in which
quantum
effects, such as
wave–particle duality, are significant. The equation
is a mathematical formulation for studying quantum mechanical systems. It
is considered a central result in the study of quantum systems and its
derivation was a significant landmark in developing the theory of quantum
mechanics.
The Schrödinger equation is a linear partial differential equation that
describes the wave function or state function of a quantum-mechanical
system.
Schrodinger's Cat -
Zero Point.
Probability Amplitude is a complex number used in describing the
behavior of systems. The modulus squared of this quantity represents
a probability or probability density. Probability amplitudes provide a
relationship between the wave function (or, more generally, of a quantum
state vector) of a system and the results of observations of that system.
Probability Wave. A quantum state of a
particle or system, as characterized by a wave propagating through space,
in which the square of the magnitude of the wave at any given point
corresponds to the probability of finding the particle at that point.
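The modulus-squared rule can be demonstrated with a toy two-state system: normalize the amplitudes, then square their moduli to get probabilities that sum to 1. The amplitude values below are illustrative, not taken from any experiment:

```python
# Hypothetical two-state system (e.g. spin up/down) with complex amplitudes.
amplitudes = [complex(1, 1), complex(0, 1)]        # unnormalized

# Normalize so total probability is 1: divide by sqrt(|a1|^2 + |a2|^2).
norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
amplitudes = [a / norm for a in amplitudes]

# Born rule: each outcome's probability is the modulus squared of its amplitude.
probabilities = [abs(a) ** 2 for a in amplitudes]
print(probabilities)   # the two values sum to 1 (about 2/3 and 1/3 here)
```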
A Real Life
Quantum Delayed Choice Experiment (youtube)
Beam Splitter is
an optical device that
splits a beam of light in
two. It is a crucial part of many optical experimental and
measurement systems, such as interferometers, also finding widespread
application in fibre optic telecommunications.
Pilot Wave was the first known example of a
hidden variable theory, where the state of a physical system, as
formulated by quantum mechanics, does not give a complete description for
the system; i.e., that quantum mechanics is ultimately incomplete, and
that a complete theory would provide descriptive categories to account for
all observable behavior and thus avoid any indeterminism.
De Broglie - Bohm theory is an interpretation of quantum mechanics. In
addition to a wavefunction on the space of all possible configurations, it
also postulates an actual configuration that exists even when unobserved.
The evolution over time of the configuration (that is, the positions of
all particles or the configuration of all fields) is defined by a guiding
equation that is the nonlocal part of the wave function. The evolution of
the wave function over time is given by the Schrödinger equation. (also
known as the pilot wave theory,
Bohmian mechanics,
Bohm's interpretation, and the causal interpretation).
Faraday Waves are nonlinear standing waves that appear on
liquids enclosed by a vibrating receptacle. When the vibration frequency
exceeds a critical value, the flat hydrostatic surface becomes unstable.
This is known as the Faraday instability.
Polarization of waves is a parameter applying to waves that
specifies the geometrical orientation of the oscillation. Electromagnetic
waves such as light exhibit multiple polarizations, as do gravitational
waves and sound waves in solids. On the other hand, sound waves in a
gas or liquid only oscillate in the wave's direction of propagation, and
the oscillation of ocean waves is always in the vertical direction. In
these cases one doesn't normally speak of "polarization" since the
oscillation's direction is not in question.
Physicists use a 350-year-old theorem that explains the workings of
pendulums and planets to reveal new properties of light waves. The
work, led by Xiaofeng Qian, assistant professor of physics at Stevens and
reported in the August 17 online issue of Physical Review Research, also
proves for the first time that a light wave's degree of non-quantum
entanglement exists in a direct and complementary relationship with its
degree of polarization. As one rises, the other falls, enabling the level
of entanglement to be inferred directly from the level of polarization,
and vice versa. This means that hard-to-measure optical properties such as
amplitudes, phases and correlations -- perhaps even those of quantum wave
systems -- can be deduced from something a lot easier to measure: light
intensity. Once the team visualized a light wave as part of a mechanical
system, new connections between the wave's properties immediately became
apparent -- including the fact that entanglement and polarization stood in
a clear relationship with one another.
Strings that can vibrate longer. Researchers have engineered
string-like resonators capable of vibrating longer at ambient temperature
than any previously known solid-state object -- approaching what is
currently only achievable near absolute zero temperatures. Their study
pushes the edge of nanotechnology and machine learning to make some of the
world's most sensitive mechanical sensors. The newly developed nanostrings
boast the highest mechanical quality factors ever recorded for any
clamped object in room-temperature environments; in their case, clamped to
a microchip. This makes the technology interesting for integration with
existing microchip platforms. Mechanical quality factors represent how
well energy rings out of a vibrating object. These strings are specially
designed to trap vibrations in and not let their energy leak out.
Order and Disorder in physics designates the presence or absence of
some
symmetry or correlation in a
many-particle system.
Quantum Superposition
The quantum twisting microscope: A new lens on quantum materials. A
clever take on the science of
twistronics offers new ways of exploring quantum phenomena. One of the
striking aspects of the quantum world is that a particle, say, an
electron, is also a wave, meaning that it exists in many places at the
same time. Researchers make use of this property to develop a new type of
tool -- the
quantum twisting microscope -- that can create novel quantum materials
while simultaneously gazing into the most fundamental quantum nature of
their electrons.
Zero-Point Energy is the lowest possible energy that a
quantum mechanical system may have, i.e. it is the energy of the system's
ground state.
Zero-point energy can have several different types of
context, e.g. it may be the energy associated with the ground state of an
atom, a subatomic particle or even the quantum vacuum itself.
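The textbook example is the quantum harmonic oscillator, whose energy levels are E_n = ħω(n + ½): even at n = 0 the energy is not zero. A minimal sketch (the function name and the ω = 10¹⁴ rad/s example are illustrative):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J·s

def oscillator_energy(omega_rad_s, n=0):
    """Energy levels of a quantum harmonic oscillator: E_n = ħω(n + 1/2).
    Even the ground state (n = 0) has the nonzero zero-point energy ħω/2."""
    return HBAR * omega_rad_s * (n + 0.5)

# Zero-point energy of an oscillator with ω = 1e14 rad/s:
print(oscillator_energy(1e14))  # ≈ 5.3e-21 J
```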
Perpetual.
Principle of Vibration. Nothing rests;
everything moves; everything vibrates. The Principle of Vibration states
that nothing in the universe is at rest, everything vibrates, everything
is in
motion. Vibration is in everything, from the tiniest molecule to the
biggest rock, in physical and biological systems, we find vibration in
matter, energy, light and sound. In physics, vibration is often called
oscillation - either a movement to and fro, as in the swing of a pendulum,
or random vibrations such as those exhibited in Brownian motion. It can be
described by three factors: the amplitude (size), the frequency (rate)
and the phase (timing). Occultists state that differences in rate and
character of vibration determine the different planes of being, seeing the
highest plane as that with the highest rate of vibration. Also, every
mental and/or emotional state has its own rate of vibration, the knowledge
of which could enable a skilled person to influence it at will. The
Principle of Vibration is closely connected to the Principle of Polarity
and the Principle of Rhythm.
Principle of Rhythm expresses the idea that
in everything there is
manifested a measured motion, a to and fro, a flow
and inflow, a swing backward and forward, a
pendulum-like movement.
Principle of Polarity embodies the idea
that
everything is dual, everything has two
poles, and everything has its opposite. All manifested things have two
sides, two aspects, or two poles.
Law of Vibration
states that everything is energy and the energy is vibrating at a certain
frequency.
Motion is
manifest in everything in the Universe, that
nothing rests, and everything moves, vibrates and circles.
Things Come In Waves. You can
ride the wave, or you can
let the
wave pass through you, or you can
go against the wave, and you can even
make
waves yourself. Waves are patterns, but not all
patterns
are good. If
bad things repeat
themselves, then you have problems. If good things repeat themselves,
then you have
progress. If you
make waves,
make sure that they are
good
waves that send
good
vibrations.
Go with the flow and ride
the wave. Don't swim against the current. You can either swim
parallel to the current and hope to find a better opportunity, or, you can
just go with the flow and take the
path of least resistance.
This is not about
conforming or
being passive, this about riding the wave until you can find a better wave
to ride. Positive energy is best when it is used most effectively. To be
bad or to give resistance to this energy is illogical. Negative energy is
to attract positive energy, it's not used to reject positive energy,
because when negative energy increases, so does decay and destruction.
"I like it when a surfer tells me
that "
Life is a Wave!" I like to reply and say that "Life is also a
particle and not just a wave", so go for it dude, enjoy the ride."
PIPELINE - The
Ventures (youtube) -
Wipe Out - The
Ventures (youtube) -
Walk Don't Run -
The Ventures (youtube)
Electromagnetic Spectrum
Electromagnetic Spectrum is the entire known range of frequencies of
electromagnetic radiation and their respective wavelengths and photon
energies. The
electromagnetic spectrum of an object has a different meaning, and is
instead the characteristic distribution of
electromagnetic radiation
emitted or absorbed by that particular object. The electromagnetic
spectrum extends from below the low frequencies used for modern
radio
communication to
gamma radiation at the short-wavelength (high-frequency)
end, thereby covering
wavelengths from thousands of kilometers down to a
fraction of the size of an atom. Visible light lies toward the shorter
end, with wavelengths from 400 to 700
nanometres. The limit for long
wavelengths is the size of the universe itself, while it is thought that
the short wavelength limit is in the vicinity of the Planck length. Until
the middle of the 20th century it was believed by most physicists that
this spectrum was infinite and continuous. Nearly all types of
electromagnetic radiation can be used for spectroscopy, to study and
characterize matter. Other technological uses are described under
electromagnetic radiation. (
Maxwell's
equations predicted an
infinite number of
frequencies of electromagnetic waves, all traveling at the speed of
light. The number of frequencies in the entire spectrum is the number 81
with 31 zeros after it).
Humans can
only see a small percentage of the entire
electromagnetic spectrum with our
eyes, but with
technology we can
see a lot of the wavelengths that are invisible to
our eyes.
If you could see all wavelengths of
light at once, the excess light would create a blinding glow,
making it impossible to see anything. The human brain is unable to process
all the information.
Visible Spectrum is the portion of the electromagnetic
spectrum that
is
visible to the
human eye. Electromagnetic radiation in this range of
wavelengths is called
visible light or simply
light. A typical
human eye
will respond to wavelengths from about 390 to 700 nm. In terms of
frequency, this corresponds to a band in the vicinity of 430–770 THz. The
spectrum does not, however, contain all the
colors that the human
eyes and
brain can distinguish. Unsaturated
colors such as pink, or purple
variations such as magenta, are absent, for example, because they can be
made only by a mix of multiple wavelengths. Colors containing only one
wavelength are also called pure colors or spectral
colors. Visible
wavelengths pass through the "optical window", the region of the
electromagnetic spectrum that allows wavelengths to pass largely unattenuated through the Earth's atmosphere. An example of this phenomenon
is that clean air
scatters blue light more than red wavelengths, and so
the midday sky appears blue. The optical window is also referred to as the
"visible window" because it overlaps the human visible response spectrum.
The near
infrared or NIR window lies just out of the human vision, as well
as the Medium Wavelength IR or MWIR window, and the Long Wavelength or Far
Infrared or LWIR or FIR window, although other animals may experience them.
Microwave is
an
electromagnetic wave with a wavelength in the
range 0.001–0.3 m,
shorter than that of a normal
radio wave but longer than those of infrared radiation. Microwaves
are used in radar, in communications, and for heating in microwave ovens
and in various industrial processes.
Terahertz is a unit of frequency, defined as one trillion cycles per
second, or 10¹² hertz.
Optical Window is the optical portion of the electromagnetic spectrum
that
passes through the atmosphere all the way to
the ground. Most EM wavelengths are blocked by the atmosphere, so
this is like a window that lets through only a narrow selection of what is
out there, though the sun is particularly active in the transmitted
wavelengths. It is called "optical" because the wavelengths we can see are
all in this range. The window runs from around 300 nanometers
(ultraviolet-B) at the short end up into the range the eye can use,
roughly 400–700 nm and continues up through the visual infrared to around
1100 nm, which is in the near-infrared range. There are also infrared and
"radio windows" that transmit some infrared and radio waves. The radio
window runs from about one centimeter to about eleven-meter waves. In
medical physics, the optical window is the portion of the
visible and infrared spectrum where living tissue absorbs relatively
little light. This window runs approximately from 650 to 1200 nm. At
shorter wavelengths, light is strongly absorbed by hemoglobin in blood,
while at longer wavelengths water strongly absorbs infrared light. In
optics, an optical window is a (usually at least mechanically flat,
sometimes optically flat, depending on resolution requirements) piece of
transparent (for a wavelength range of interest, not necessarily for
visible light) optical material that allows light into an optical
instrument. A window is usually parallel and is likely to be
anti-reflection coated, at least if it is designed for visible light. An
optical window may be built into a piece of equipment (such as a vacuum
chamber) to allow optical instruments to view inside that equipment.
Light -
Colors -
Eye Color -
Electricity -
Radio Waves
Full-Spectrum Light is light that covers the electromagnetic spectrum
from
infrared to near-ultraviolet, or all wavelengths that are
useful to plant
or animal life; in particular,
sunlight is
considered full spectrum, even though the solar spectral distribution
reaching Earth changes with time of day, latitude, and atmospheric
conditions. "Full-spectrum" is not a technical term when applied to an
electrical light bulb. Rather, it implies that the product emulates some
important quality of natural light. Products marketed as "full-spectrum"
may produce light throughout the entire visible spectrum, but without
producing an even spectral distribution. Some may not differ substantially
from lights not marketed as full-spectrum.
Trichromacy is the condition of possessing three independent channels
for conveying color information, derived from the three different cone
types. Organisms with trichromacy are called trichromats.
Spectrum is a condition that is not limited to a specific
set of values but can vary, without steps, across a continuum. The word
was first used scientifically in optics to describe the rainbow of colors
in visible light after passing through a
prism. As scientific
understanding of light advanced, it came to apply to the entire
electromagnetic spectrum.
Layers.
Emission
Spectrum of a chemical element or chemical compound is the spectrum of
frequencies of electromagnetic radiation emitted due to an atom or
molecule making a transition from a high energy state to a lower energy
state. The photon energy of the emitted photon is equal to the energy
difference between the two states. There are many possible electron
transitions for each atom, and each transition has a specific energy
difference. This collection of different transitions, leading to different
radiated wavelengths, make up an emission spectrum. Each element's
emission spectrum is unique. Therefore, spectroscopy can be used to
identify elements in matter of unknown composition. Similarly, the
emission spectra of molecules can be used in chemical analysis of
substances.
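The rule that the emitted photon's energy equals the gap between the two states can be made concrete with hydrogen, whose level gaps follow the well-known Rydberg formula. This is an illustrative sketch, not something computed in the text above; the constant and level numbers are standard physics values.

```python
# Illustrative sketch: wavelength of the photon emitted when a hydrogen
# electron drops from one energy level to a lower one (Rydberg formula).
RYDBERG = 1.0973731568e7  # Rydberg constant for hydrogen, in 1/m

def transition_wavelength_nm(n_high, n_low):
    """Wavelength (nm) of the photon for the n_high -> n_low transition."""
    inv_wavelength = RYDBERG * (1 / n_low**2 - 1 / n_high**2)  # in 1/m
    return 1e9 / inv_wavelength  # metres -> nanometres

# The n=3 -> n=2 transition gives the red Balmer line (H-alpha).
print(round(transition_wavelength_nm(3, 2)))  # 656 (nm)
```

Each distinct (n_high, n_low) pair gives one line, and the full set of lines is the element's unique emission spectrum.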
Atomic Spectrum is the
spectrum of frequencies of electromagnetic radiation emitted or absorbed
during transitions of electrons between energy levels within an atom. Each
element has a characteristic spectrum by which it can be recognized.
Spectral Density of a signal describes the distribution of power into the
frequency components composing that signal. According to Fourier analysis, any
physical signal can be decomposed into a number of discrete frequencies,
or a spectrum of frequencies over a continuous range. The statistical
average of a certain signal or sort of signal (including noise) as
analyzed in terms of its frequency content, is called its spectrum.
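The decomposition that Fourier analysis describes can be sketched directly: project a sampled signal onto individual frequencies and compare the power in each. The test signal (a 50 Hz tone plus a weaker 120 Hz tone) is made up for the demonstration.

```python
# Minimal sketch of Fourier analysis: compute the power of a sampled
# signal at chosen frequencies via a hand-written discrete Fourier sum.
import cmath
import math

FS = 1000  # sample rate, Hz
N = 1000   # one second of samples
signal = [math.sin(2 * math.pi * 50 * k / FS)
          + 0.5 * math.sin(2 * math.pi * 120 * k / FS)
          for k in range(N)]

def power_at(freq_hz):
    """Power in the discrete-Fourier-transform bin at freq_hz
    (valid here because FS == N, so bin m corresponds to m Hz)."""
    coeff = sum(signal[k] * cmath.exp(-2j * math.pi * freq_hz * k / N)
                for k in range(N))
    return abs(coeff) ** 2 / N

# The 50 Hz component dominates; 120 Hz is present but weaker; 80 Hz is absent.
print(power_at(50) > power_at(120) > power_at(80))  # True
```

A spectrum analyzer performs essentially this measurement across its whole frequency range, usually with a fast Fourier transform rather than the direct sum used here.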
Spectrum Analyzer measures the magnitude of an input signal versus
frequency within the full frequency range of the instrument. The primary
use is to measure the power of the spectrum of known and unknown signals.
The input signal that a spectrum analyzer measures is electrical; however,
spectral compositions of other signals, such as acoustic pressure waves
and optical light waves, can be considered through the use of an
appropriate transducer. Optical spectrum analyzers also exist, which use
direct optical techniques such as a monochromator to make measurements.
Scopes.
Spectrometer is a scientific instrument originally used to split light
into an array of separate colors, called a spectrum. Spectrometers were
developed in early studies of physics, astronomy, and chemistry. The
capability of spectroscopy to determine chemical composition drove its
advancement and continues to be one of its primary uses. Spectrometers are
used in astronomy to analyze the chemical composition of stars and planets
and to gather data on the origin of the universe. The concept of a
spectrometer now encompasses instruments that do not examine light:
spectrometers can also separate particles, atoms, and molecules by their
mass, momentum, or energy. These types of spectrometers are used in
chemical analysis and particle physics.
Spectrophotometry is the quantitative measurement of the reflection or
transmission properties of a material as a function of wavelength. It is
more specific than the general term electromagnetic spectroscopy in that
spectrophotometry deals with visible light, near-ultraviolet, and
near-infrared, but does not cover time-resolved spectroscopic techniques.
Spectrophotometry uses photometers, known as spectrophotometers, that can
measure a light beam's intensity as a function of its color (wavelength).
Important features of spectrophotometers are spectral bandwidth (the range
of colors it can transmit through the test sample), the percentage of
sample-transmission, the logarithmic range of sample-absorption, and
sometimes a percentage of reflectance measurement. A spectrophotometer is
commonly used for the measurement of transmittance or reflectance of
solutions, transparent or opaque solids, such as polished glass, or gases.
Many biochemicals are colored, that is, they absorb visible light and so
can be measured by colorimetric procedures. Even colorless biochemicals
can often be converted through chromogenic color-forming reactions into
colored compounds suitable for colorimetric analysis. Spectrophotometers
can also be designed to measure diffusivity across any of the listed light
ranges, which usually cover about 200 nm to 2,500 nm, using different
controls and calibrations. Within these ranges of light, calibrations are
needed on the machine using standards that vary in type depending on the
wavelength of the photometric determination.
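The "logarithmic range of sample-absorption" mentioned above is usually expressed through the standard absorbance relation A = log10(I0 / I), which compares the incident and transmitted beam intensities. A minimal sketch, with the intensity values invented for the example:

```python
# Sketch of the logarithmic absorbance scale used in spectrophotometry:
# A = log10(I0 / I), where I0 is the incident and I the transmitted intensity.
import math

def absorbance(incident, transmitted):
    """Absorbance of a sample from incident and transmitted intensities."""
    return math.log10(incident / transmitted)

def transmittance_percent(incident, transmitted):
    """Percentage of the beam that passes through the sample."""
    return 100.0 * transmitted / incident

# A sample that transmits 10% of the beam has absorbance 1;
# one that transmits 1% has absorbance 2.
print(absorbance(100.0, 10.0))  # 1.0
print(absorbance(100.0, 1.0))   # 2.0
```

Each unit of absorbance thus corresponds to a tenfold drop in transmitted intensity, which is why absorbance rather than raw transmission is reported over wide concentration ranges.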
Spectroscopy is the study of the interaction between matter and
electromagnetic radiation or any
interaction with radiative energy as a function of its wavelength or
frequency. Spectroscopic data is often represented by an emission
spectrum, a plot of the response of interest as a function of wavelength
or frequency.
Ultraviolet is electromagnetic radiation with a wavelength from 10 nm
(30 PHz) to 400 nm (750 THz), shorter than that of visible light but
longer than X-rays.
UV radiation constitutes about 10%
of the total light output of the Sun, and is thus present in sunlight. It
is also produced by electric arcs and specialized lights such as
mercury-vapor lamps, tanning lamps, and black lights. Although it is not
considered an ionizing radiation because its photons lack the energy to
ionize atoms, long-wavelength ultraviolet radiation can cause chemical
reactions and causes many substances to glow or
fluoresce. Consequently,
biological effects of UV are greater than simple
heating effects, and many
practical applications of UV radiation derive from its interactions with
organic molecules.
UV Index.
Spatial Intelligence -
Magnetism -
Acoustic Spectrum (sound)
A 100-year-old physics problem has been solved at EPFL. Researchers
challenge a fundamental law and discover that more electromagnetic energy
can be stored in wave-guiding systems than previously thought. Their trick
was to create asymmetric resonant or wave-guiding systems using magnetic
fields.
Can You See Me?
FLIR T1K Thermal Imaging Camera (youtube)
Thermography is an example of infrared imaging science. Thermographic
cameras usually detect
radiation in the long-infrared range of the electromagnetic spectrum
(roughly 9,000–14,000 nanometers or 9–14 µm) and produce images of that
radiation, called thermograms. Since infrared radiation is emitted by all
objects with a temperature above absolute zero according to the black body
radiation law, thermography makes it possible to see one's environment
with or without visible illumination. The amount of radiation emitted by
an object increases with temperature; therefore, thermography allows one
to see variations in temperature. When viewed through a thermal imaging
camera, warm objects stand out well against cooler backgrounds; humans and
other warm-blooded animals become easily visible against the environment,
day or night. As a result, thermography is particularly useful to the
military and other users of surveillance cameras.
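The statement that emitted radiation increases with temperature is quantified by the Stefan-Boltzmann law, j = sigma * T^4 for an ideal black body. A minimal sketch (the example temperatures are arbitrary; real objects emit somewhat less, scaled by their emissivity):

```python
# Stefan-Boltzmann law: total power radiated per square metre by an ideal
# black body grows with the fourth power of absolute temperature.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temp_kelvin):
    """Radiated power per square metre of an ideal black body at temp_kelvin."""
    return SIGMA * temp_kelvin ** 4

# A warm body (~310 K) radiates noticeably more than a cooler background
# (~280 K), which is why it stands out in a thermal image.
print(round(radiant_exitance(310)))  # ~524 W/m^2
print(round(radiant_exitance(280)))  # ~349 W/m^2
```

The fourth-power dependence means even modest temperature differences produce a clear contrast, which is exactly what a thermogram displays.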
Infrared is
electromagnetic radiation with longer wavelengths than those of
visible light, and is therefore
invisible, although it is sometimes
loosely called infrared light. It extends from the nominal red edge of the
visible spectrum at 700 nanometers (frequency 430 THz) to 1 millimeter
(300 GHz), although people can see infrared up to at least 1,050 nm in
experiments. Most of the thermal radiation emitted by objects near room
temperature is infrared. Like all EMR, IR carries radiant energy, and
behaves both like a wave and like its quantum particle, the
photon.
Infrared Thermometer
Infrared Spectroscopy involves the interaction of infrared radiation with
matter. Infrared Microspectroscopy. IR spectroscopy is a widely used and
versatile method for analysis at the molecular scale. It covers a range of
techniques, mostly based on absorption spectroscopy. As with all
spectroscopic techniques, it can be used to identify and study chemicals.
Samples may be solid, liquid, or gas. The method or technique of infrared
spectroscopy is conducted with an instrument called an infrared
spectrometer (or spectrophotometer) to produce an infrared spectrum. An IR
spectrum is essentially a graph of infrared light absorbance (or
transmittance) on the vertical axis vs. frequency or wavelength on the
horizontal axis. Typical units of frequency used in IR spectra are
reciprocal centimeters (sometimes called wave numbers), with the symbol
cm−1. Units of IR wavelength are commonly given in micrometers (formerly
called "microns"), symbol μm, which are related to wave numbers in a
reciprocal way. A common laboratory instrument that uses this technique is
a Fourier transform infrared (FTIR) spectrometer. The infrared portion of
the electromagnetic spectrum is usually divided into three regions; the
near-, mid- and far- infrared, named for their relation to the visible
spectrum. The higher-energy near-IR, approximately 14000–4000 cm−1
(0.8–2.5 μm wavelength) can excite overtone or harmonic vibrations. The
mid-infrared, approximately 4000–400 cm−1 (2.5–25 μm) may be used to study
the fundamental vibrations and associated rotational-vibrational
structure. The far-infrared, approximately 400–10 cm−1 (25–1000 μm), lying
adjacent to the microwave region, has low energy and may be used for
rotational spectroscopy. The names and classifications of these subregions
are conventions, and are only loosely based on the relative molecular or
electromagnetic properties.
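The reciprocal relation between wavenumbers (cm^-1) and wavelengths (um) noted above is simple to compute: since 1 cm = 10,000 um, a wavelength of L micrometres corresponds to 10000 / L reciprocal centimetres. A quick sketch using the region boundaries quoted in the paragraph:

```python
# Conversion between the two common IR spectroscopy units:
# wavelength in micrometres (um) and wavenumber in reciprocal
# centimetres (cm^-1). They are reciprocals: cm^-1 = 10000 / um.

def um_to_wavenumber(wavelength_um):
    """Wavenumber (cm^-1) for a wavelength given in micrometres."""
    return 10000.0 / wavelength_um

def wavenumber_to_um(wavenumber_cm):
    """Wavelength (um) for a wavenumber given in cm^-1."""
    return 10000.0 / wavenumber_cm

# Endpoints of the regions quoted above:
print(um_to_wavenumber(2.5))   # 4000.0 cm^-1: near-/mid-IR boundary
print(um_to_wavenumber(25))    # 400.0 cm^-1: mid-/far-IR boundary
print(um_to_wavenumber(1000))  # 10.0 cm^-1: far-IR / microwave edge
```

This is why an FTIR spectrum plotted against wavenumber runs "backwards" relative to one plotted against wavelength: high wavenumbers are short wavelengths.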
New infrared imaging technique reveals molecular orientation of proteins
in silk fibres.
Near-Infrared Spectroscopy is a spectroscopic method that uses the
near-infrared region of the electromagnetic spectrum (from 780 nm to 2500
nm). Typical applications include medical and physiological diagnostics
and research including blood sugar, pulse oximetry, functional
neuroimaging, sports medicine, elite sports training, ergonomics,
rehabilitation, neonatal research, brain computer interface, urology
(bladder contraction), and neurology (neurovascular coupling). There are
also applications in other areas as well such as pharmaceutical, food and
agrochemical quality control, atmospheric chemistry, combustion research
and astronomy.
Infrared Astronomical Satellite (IRAS)
Infrared Window is the overall dynamic property of the earth's
atmosphere, taken as a whole at each place and occasion of interest, that
lets some infrared radiation from the cloud tops and land-sea surface pass
directly to space without intermediate absorption and re-emission, and
thus without heating the atmosphere. It cannot be defined simply as a part
or set of parts of the electromagnetic spectrum, because the spectral
composition of window radiation varies greatly with varying local
environmental conditions, such as water vapour content and land-sea
surface temperature, and because few or no parts of the spectrum are
simply not absorbed at all, and because some of the diffuse radiation is
passing nearly vertically upwards and some is passing nearly horizontally.
A large gap in the absorption spectrum of water vapor, the main greenhouse
gas, is most important in the dynamics of the window. Other gases,
especially carbon dioxide and ozone, partly block transmission. An
atmospheric window is a dynamic property of the atmosphere, while the
spectral window is a static characteristic of the electromagnetic
radiative absorption spectra of many greenhouse gases, including water
vapour. The atmospheric window tells what actually happens in the
atmosphere, while the spectral window tells of one of the several abstract
factors that potentially contribute to the actual concrete happenings in
the atmosphere. Window radiation is radiation that passes through the
atmospheric window, whereas non-window radiation is radiation that does
not. Window wavelength radiation is radiation that, judging only from its
wavelength, is likely to pass through the atmospheric window. The
difference between window radiation and window wavelength radiation is
that window radiation is an actual component of the radiation, determined
by the full dynamics of the atmosphere, taking in all determining factors,
while window wavelength radiation is merely theoretically potential,
defined only by one factor, the wavelength.
Near-Infrared Window in Biological Tissue defines the range of
wavelengths from 650 to 1350 nanometre (nm) where light has its maximum
depth of penetration in tissue. Within the NIR window, scattering is the
most dominant light-tissue interaction, and therefore the propagating
light becomes diffused rapidly. Since scattering increases the distance
travelled by photons within tissue, the probability of photon absorption
also increases. Because scattering has weak dependence on wavelength, the
NIR window is primarily limited by the light absorption of blood at short
wavelengths and water at long wavelengths. The technique using this window
is called NIRS. Medical imaging techniques such as fluorescence
image-guided surgery often make use of the NIR window to detect deep
structures.
GLEAM Data Sphere Animation (video). Red indicates the lowest
frequencies, green the middle frequencies and blue the highest
frequencies.
Color-changing magnifying glass gives clear view of infrared light. By
trapping light into tiny crevices of gold, researchers have coaxed
molecules to convert invisible infrared into visible light, creating new
low-cost detectors for sensing.
Chromoscope lets you explore our galaxy, the Milky Way, and the distant
Universe in a range of wavelengths from gamma-rays to the longest radio
waves.
Electromagnetic radiation in this range of wavelengths is called visible
light or simply light. A typical human eye will respond to wavelengths
from about 390 to 700 nm. In terms of frequency, this corresponds to a
band in the vicinity of 430–770 THz.
Terahertz -
Wave of the future: Terahertz chips, a new way of seeing through matter -
1 E-7 m (wiki)
Cellphone Radiation.
There is an uncountable infinity of possible wavelengths. In general, the
frequency spectrum of electromagnetic radiation (e.g. light, radio) is
continuous, and thus between any two frequencies there is an uncountable
infinity of possible frequencies (just as there are uncountably many
numbers between 1 and 2).
Brain Waves.
Higher Frequency as things get Smaller.
Radio Waves have frequencies
as high as 300 gigahertz (GHz) to as low as 30 hertz (Hz). At 300 GHz, the
corresponding wavelength is 1 mm, and at 30 Hz is 10,000 km. Like all
other electromagnetic waves, radio waves travel at the speed of light in
vacuum.
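The endpoints quoted for radio waves follow from the basic relation between wavelength and frequency, lambda = c / f. A minimal sketch checking both figures:

```python
# Wavelength-frequency relation for electromagnetic waves: lambda = c / f.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Wavelength in metres of an EM wave at the given frequency."""
    return C / frequency_hz

# The radio-wave endpoints quoted above:
print(wavelength_m(300e9))  # 300 GHz -> about 0.001 m (1 mm)
print(wavelength_m(30))     # 30 Hz  -> about 10,000,000 m (10,000 km)
```

The same formula covers the whole spectrum, from radio through visible light (700 nm corresponds to roughly 430 THz) up to gamma rays.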
Ultrathin, flat lens resolves chirality and color. Light can also be
chiral: in chiral light, the direction of oscillation of the
electromagnetic wave rotates as the wave travels. Multispectral chiral
lens.
Computers act like human brains because human brains made computers. This
does not mean that machines can be human. "No one created math; math was
discovered because math already existed in nature. And just because math
exists in nature does not mean that all life is a calculation."
Videos about Physics
Minute Physics (youtube)
Khan Physics (videos)
Physics Fun (youtube channel)
Open Letter to the President: Physics Education (youtube)
Nassim Haramein (youtube)
Tom Campbell (youtube)
Through the Wormhole (youtube)
6.2 Introduction to Atomic Structure (youtube)
Khan Academy Electron Configurations (video)
Veritasium (youtube channel of science and engineering)
Science Videos and Films
Physics Teaching Resources
Physics Classroom
Physics World
Physics
Institute of Physics
Physics 4 Kids
Tutor 4 Physics
Nordic Institute for Theoretical Physics
Physics Illinois.edu
Physics Stackexchange
Publications in Physics (wiki)
Adventure in Physics and Math by Edward Witten (pdf)