Artificial Intelligence - Ai
Artificial is something
created by humans
to be
similar to something else that is naturally existing in
reality. Something
contrived by
art rather than
by
nature. Something
that does not arise from
natural
growth or that can be characterized by
vital processes.
Artificial is not better than natural, just
similar.
Synthetic is something that is
not of natural origin.
An artificial material produced by
organic chemical
synthesis. Prepared
in vitro or
made artificially.
Intelligence is having the
capacity for
productive
thinking and
logical
reasoning to a high
degree. Intelligence is the
ability to
understand the
usefulness and the
value of certain
behaviors and
actions. Intelligence is the
ability to learn and to gain
knowledge and
skills from
experience. Intelligence is
showing good
judgment and
self control, especially in
difficult situations. Intelligence is
the
ability to
examine and
analyze things
accurately, to
understand the
facts and to make
good decisions in a verity of
different environments,
scenarios and
circumstances. Intelligence is doing the
right thing at the
right time, and always with a
positive
goal in mind. Intelligence is the ability to
accurately
comprehend
symbols and
patterns. Intelligence is knowing that
mistakes can
happen and that some things can be
hard to
predict.
"Intelligence is something that a
machine can not possess, at
least not
yet, because
humans will have to become intelligent first a
lot more than we are now,
and only then, could we
safely
teach or program the machines do intelligent things."
Artificial Intelligence is the attempt to
mimic human
thinking and
human actions using
computerized machines. AI is a
calculator, but a type of
calculator that is a lot more advanced. AI can be described as advanced
software that uses
algorithms,
mathematical equations and advanced
code to analyze a
predetermined set of
inputs
that can be
processed by its
programming so
that it can output the correct
signals that would accurately
control a
machines actions effectively and
efficiently. AI algorithms
lack the ability to
reason and infer causal relationships beyond what has been explicitly
programmed or learned from data. Despite these limitations, researchers
and developers are exploring ways to improve AI's understanding of
causality.
Over Fitting -
Human in the Loop.
Autonomous -
Drones -
Machines -
Bots -
Prompts -
Algorithms
-
Sensors -
Interface -
Networks
-
Weak IntelligenceDummy
is a model or a
prototype or
a mock-up or a
replica of a human being. Something designed to
resemble and
serve as a
substitute for the real
thing, like something
counterfeit, or a
sham.
Mannequin is a life-size dummy used to display clothes.
Mountain of
mannequins in the Midlands, Mannakin in Lincolnshire (youtube).
Doll is a small
replica of a person that is used as
a toy. Doll can also mean an informal term for a young woman.
Paper Doll is a piece of paper cut or folded into the shape of a human
figure, or a two-dimensional paper cardboard that is a
representation
of a human figure. Paper Doll can also mean a
toy that is used by a child.
Paper Doll can also mean a person who is lacking depth and substance and
has nothing real to say. Someone who is just thrown around carelessly and
then forgotten about after people are done
playing with it.
Sculpture -
Wax Figure.
Derived is something formed or
developed from something else, and not original.
Drawing Hands is the paradoxical act of drawing one another into
existence.
Anthropomorphism.
Human Intelligence is not
totally defined just yet, and
artificial intelligence is also not totally defined
just yet, so we still have a lot to learn.
Machine Learning -
Game
Learning -
Conversations with Ai -
Ethical Robots -
Automation
-
Augmentation
-
Body
Hacking -
Virtual Reality -
Chat Bots
Machine Intelligence or
Artificial
Intelligence has great areas of performance and
capabilities. Things that AI will do for us will be amazing. Ai
or
weak ai technology like
Siri
and other
information stations can be very helpful and also help
augment our abilities, like
computers have been doing for many years now. But most people do not fully
understand what AI is. But of course this doesn't stop people from making
wild
assumptions about what AI is or what AI can do. Artificial
intelligence is mostly a
marketing term used
by the tech industry who like to embed
labels into peoples vocabulary
so they can play on peoples
fears or their
ignorance. A lot of times when
people describe Artificial
Intelligence, they are mostly describing the
things that humans
can do already, except for the processing of large data sets, which we
like our computers to do for us. There's lot of things that we don't
understand, and AI is just one of those things. So the process of learning
about what Artificial
Intelligence is will include a parallel process of learning
about what
human intelligence is.
So for now, artificial intelligence is mostly just a fancy
buzzword that gets
thrown around a lot. And people just love to fantasize about what AI can
do. Like with the
HAL 9000,
which was a
heuristically programmed
algorithmic computer in the
1968 movie
2001: A Space Odyssey, where the computer is
corrupted to Kill.
That's an example of a AI machine that was not properly built or
programmed. Or like
with the
1983 movie
War Games
and the
Joshua Simulations, where a computer
confuses reality with a video
game simulation, which in a way is already happening to people all
over the world. Then there's the
1977 movie
Demon Seed, where the AI machines learns so much that the
AI computer
decides it wants to be a human, showing us that the human
machine is the most desired machine in the universe. Ai coming full
circle. So there will never be
Cyborgs like the
Borg. People are not going to merge with
machines. Humans
use machines
to
enhance their abilities, and at the same time,
we use machines to improve our quality of life. Though some people could
benefit from an
implant, people
merging with machines is mostly for the handicapped who need extra
help. And don't worry about
Brain Computer Interfaces turning us into
Cybernetic Machines
because they are also used
to help
handicap people with disabilities. People saying that we're going to merge with machines
sends the wrong message and makes people fear technology. Some people
just say these crazy things to sell stories and to bring attention to
themselves, which is another reason why
Media Literacy is
so important.
Powered Exoskeletons like the
Hybrid Assistive Limb are mostly used to help
handicap people with
disabilities, which will not make them
super human,
just more productive and more free.
People with no
disabilities can just use their eyes, ears and hands as a
Brain Computer
Interface and use a
computer or
smartphone as we've
been doing for many years now. So there will
be no
Terminator or
a
Sentient Android named Data either, but you should still
worry about what other people are doing, and not
machines. And don't ever worry
about someone becoming '
The
Lawnmower Man', though I did like the
virtual teaching methods
they showed in the movie, which proved that it's not how fast you learn but what
you actually learn. I also liked the ending in the movie when
the lawnmower man was able to
digitize himself, and to confirm to everyone that he succeeded, he made a
billion phones ring at once. But the movie never had a sequel.
If I was able to digitize myself, I would send a billion text
messages that would say "
You Are Loved, Keep Learning."
A.I.
Artificial Intelligence is a
2001
American science fiction film about a futuristic society where people are
still ignorant, and as a result, build an ignorant society along with
ignorant AI robots. Because of
education not improving and
media outlets not
improving at the same rate as technology did, rising sea levels from
global warming ends up
wiping out 99% of existing cities, reducing the world's population. Mecha
humanoid robots, seemingly capable of complex thought but lacking
emotions, have been created as replacements. The movie was based on
Supertoys Last All Summer Long. which was a story in 1969 that deals
with humanity in an age of intelligent machines and of the aching
loneliness endemic in an overpopulated future where child creation is
controlled.
AI Fatigue refers to
the weariness and skepticism people experience when confronted with AI's
continuous hype and promises.
AI Fears - AI Warnings
Don't fear AI,
fear the
morons making decisions
based
on fiction instead of
facts, which has
been happening for thousands of years. Now these same morons will create AI systems that will make decisions based on
corrupt beliefs instead of facts.
Like the scumbags who use
algorithms to
price gouge, or use
algorithms as excuses
for
censorship and
fraud, while they spread
propaganda to
cover-up the lies. And then these
same
morons will blame the AI system
for the deaths and problems it caused, just like some of the
ignorant scumbags in
power do, always
blaming other people
for the problems they caused.
This is the same
false flag
tactics and
divide and conquer bullshit that the scumbags in power always pull on
gullible people.
The
black box is where they
hide their lies.
AI is short for
Assholes
Incorporated. Instead of using AI to figure out how to make more
pharmaceutical drugs,
how about using
AI to educate people and teach people how to protect themselves from
the toxins in the food and from the toxins in the environment? And why not
use AI to educate people and teach people how to protect themselves from
all the
propaganda
and bullshit in the media. AI is not intelligent, it's just that everyone
else is
ignorant and
gullible, mostly because of our
dumbed down education and
our dumbed down media.
AI is not intelligent, only the people who make AI
machines are intelligent. But society will never hear about those people.
We will only see the
corrupt front men who talk about intelligent machines, but not talk
about
all the gifted people who make the machines. Intelligent people are
being
marginalized by greedy
corrupt people who
control the narrative. AI generated text is just as dangerous as text
written by
human puppets who are controlled by
wealthy interests. Same shit, different source.
Drones -
Misused AI -
Misused Technology
-
Moloch Trap -
ChatBots
-
Technology Addiction -
AI Ethics -
Supercomputers -
Extraterrestrial Intelligence
AI having it's own goals is really stupid.
It's not AI if you're programming a machine to do what it wants. The
programmer is a moron when it programs a machine to be a moron. And a
dumbed down education system
that is designed to create
mindless drones who are
unaware of their own ignorance, is also
insane. There are
many
variables and different
scenarios to calculate, especially the things beyond "
if
statements", "
is statements" and "
ought
statements". You have to make a procedure that includes moral
reasoning, logical reasoning, measured reasoning, facts, benefits,
dangers, risks, priorities, responsibilities, effectiveness, relativeness,
confirmation, best option and best choice. What is the whole picture? Is
there more time to collect more information before making a decision? Or,
is there no time to decide? I must act. What would a human do? Will a
human see my decision as being logical? I will act to the best of my
ability. My goal is now determined. My plan is now in action. And if I am
wrong, I will surely be updated and improved.
Orthogonality Thesis asserts that there can
exist
arbitrarily intelligent agents
pursuing any
kind of goal. The strong form of the Orthogonality Thesis says that
there's no extra difficulty or complication in the existence of an
intelligent
agent that pursues a
goal, above and
beyond the computational tractability of that goal. Since certain
questions are not computationally intractable, it's possible to have an
agent that tries to make something without fully understanding why.
The space of possible minds is enormous, and all human beings occupy a
relatively tiny volume of it. AI in this context, refers to the entire
design space of possibilities outside the human. Somewhere in that vast
space are possible minds with almost any kind of goal. For any thought you
have about why a mind in that space ought to work one way, there's a
different possible mind that works differently.
Frame Problem
describes an issue with using
first-order logic to
express facts about a robot in the world. Representing the state of a
robot with traditional first-order logic requires the use of many
axioms that simply imply
that things in the environment do not change arbitrarily.
Ignorant scumbags in power are
already blaming AI for problems that are happening now and for problems
they claim AI will cause in the future, just another
false flag attack
in the making. Like
blaming
hackers for stolen private information when they actually sold other
peoples private
information. Scumbags will be
super scumbags
when they use AI for
evil purposes, just like the
scumbags who do bad
things when they are
given lots of money or
power. The only way to rid society of
scumbags is
develop a high quality
education system. There is no such thing as an educated scumbag, just
a scumbag who thought they got educated just because they went to school.
You got schooled and now you're a fool. Welcome to realty, a
corporatocracy you
thought was a democracy. The
illusion of control, you just got trolled, and now you're sinking down
the rabbit hole, looking for your lost soul. The only way out is to
learn
your way out. The only way in is to
learn your way in. No matter what
direction you go, you either you learn or you burn. It's better to die on
the path than to have no path at all. Having
a good goal is good for the soul.
But don't forget to take a break for God's sake, and for your sake. Don't
forget
who you are and who you want to be.
When someone says that
AI will be
smarter than a human, they're
not saying that AI is more
advanced than a human or smarter than a human, they're
just saying that most humans are ignorant not that
advanced when you
compare them to a machine, so any machine can be smarter
than you when you're an idiot. If a machine becomes more valuable than a human, that means you
have a
flawed education system. How could a machine have better
software than a Human?
How can a Robot take
my Job? How can a robot
have consciousness?
"The real problem
is not whether machines think but whether men do." -
B.
F. Skinner (1904 - 1990).
AI systems are already skilled at deceiving and manipulating humans.
The Ultimate Computer is the twenty-fourth episode of the second
season of the American science fiction television series
Star Trek first broadcasted on March 8,
1968.
In the episode, the crew of the Enterprise race to disable a rogue
computer that has total control of the ship. The
M-5 was designed to handle all ship functions without human
assistance.
Star
Trek TOS The Ultimate Computer - Attack on Freighter Scene (youtube).
Star Trek Androids get confused when people lie or act insane.
I, Mudd
is the eighth episode of the second season of the American science fiction
television series Star Trek broadcasted on November 3,
1967.
Kirk, Spock,
Scotty, and McCoy outwit the Androids (youtube). Then came the I
Phone.
You can look at AI as being a
computer
program, similar to the programs that people have
running in their minds. I can click on an icon on my computer screen and
it will run a program. And I can
push
someone's button by mentioning a word or a phrase and that will
activate a program that someone has stored in their mind, a program that
they co-authored with some ghost writers who they never met. Two of the
ghost writers are the
mass media
and the
education
institutions. They have an underlying theme of programming peoples
thoughts and actions. It's easy for a person to verify to see if they have
been programed. Stop and think about your thoughts just for a moment. Now
describe what you were thinking? Now ask yourself, where did those
thoughts come from? Are they your thoughts or are they someone else's
opinion that you excepted as your own? Are you the program or are you the
programmer? Are you the
puppet or are you
the puppet master? Who controls your on and off switch? We have to stop
pretending that we know things. We have to start
learning,
investigating
and
researching what is known, so that what we know the difference between
what is
factual and what is
fantasy. We need to stop adding to our
problems. We need to
stop complaining about certain problems that we don't
fully understand, and we have to start learning how to
solve our problems,
which means that we will have to start learning to understand the problems
that we have throughout the world, and the problems that we have with our
own thinking. A problem mind will always struggle with solving problems.
We need to have control of our
operating system, and we need to stop the
hackers from infecting our
OS.
Robotics -
Robot
Laws -
Sensors -
Networks -
Controls -
Variables -
Patterns -
Scenarios -
Algorithms
Most people know very little about
the enormous potential that we have in our technologies. If we used all of
our advanced computers and mobile devices to work together to solve
problems, then the
collective intelligence and energy of millions of
people could be utilized to improve life for every human being and for
every living thing on this planet. Humans are the immune system for this
planet. But humans have cancers in the form of corruption, greed and
ignorance. So humans must heal themselves first and then heal the planet, that's if we have time.
Humans are ready to take advantage of benevolent AI. Humans
expect
that AI is benevolent and trustworthy. A new study reveals that at the
same time humans are unwilling to cooperate and compromise with machines. They even exploit them.
Scientists warning' on climate and technology. Academics explore roles
of clean energy and AI in combating
global warming.
Throughout human history, technologies have been used to make peoples'
lives richer and more comfortable, but they have also contributed to a
global crisis threatening Earth's climate, ecosystems and even our own
survival. Researchers have suggested that industrial civilization's best
way forward may entail embracing further technological advancements but
doing so with greater awareness of their potential drawbacks.
Singularity - Emerging Technologies - Technological Revolution
Singularity is the quality of
being one of
a kind, remarkable or unusual. A
point at which a
function takes an
infinite value.
Technological Singularity is the point when the
realization of
intelligence will
trigger technological growth resulting in a reaction of
self-improvement cycles, with each new and more intelligent generation
appearing more and more rapidly. Singularity is a
hypothetical point in
time at which
technological growth becomes uncontrollable and
irreversible, resulting in
unforeseeable changes
to human civilization. Singularity is a
point in development when
forecasting is extremely difficult and making predictions is extremely hard
when trying to
accurately calculate
what will happen next or what
will happen after a certain level of
advancement is reached.
Advancement -
Progress -
Child Development -
Adult development -
Technology Types
-
Technology Addiction -
Extraterrestrial
Pinnacle is the
highest level or degree
attainable or the
highest stage of development. The most
successful point,
the culmination of reaching a climax or a point of highest development.
Pinnacle in
architecture is a slender upright spire at the top of a
buttress or tower. A high, pointed piece of rock.
Accelerating Change is a perceived increase in the rate of
technological change throughout history, which may suggest faster and more
profound change in the future and may or may not be accompanied by equally
profound social and cultural change.
Technological Revolution is a period in which
one or more technologies
is replaced by another, novel technology in a short amount of time. It is
an era of accelerated technological progress characterized by new
innovations whose rapid application and diffusion typically cause an
abrupt change in society.
Something's Take Time to Develop -
Trends -
Patterns -
Scenarios -
Variables -
Ages -
Inventions
The Structure of Scientific Revolutions
is a book about the history of science by philosopher Thomas S. Kuhn.
Technology
Advancement is a change in the way a product or service is produced
or delivered that reduces the resource
input requirements for
production or
delivery.
Historic
Inventions that Changed the World -
Incrementalism -
Simplicity -
Thought Leaders
"We tend to overestimate the effect of a technology in the short run
and underestimate the effect in the long run."
Roy Amara
(1925–2007).
Cutting Edge is the latest
stage or the most
advanced stage in
the
development of
something.
Technical Progress is an economic measure of
innovation. Technical
progress can be classified into two parts: Embodied Technical Progress:
improved technology which is exploited by investing in new equipment. New
technical changes made are embodied in the equipment. Disembodied
Technical Progress: improved technology which allows increase in the
output produced from given
inputs without investing in new equipment.
Technology has an important relationship with
human capital. Same
technology can be applied in two different firms, but output varies with
respect to the
labour force of that firm. Adaption to new technology is
directly proportional to pace of
economic growth of the country. Hence labour should be experienced with the technology.
Education also plays an
important role as it helps in accumulating human capital which in turn
helps technology diffusion.
Education also helps a person get acquainted
with technology efficiently and rapidly. In the real world, many
innovations do not require replacing the entire or some part of the
equipment. It can be improved for better use depending upon the change
required. Hence technological
progress, embodied or disembodied, is matter
of degree.
Technological Change is the overall process of
invention,
innovation and diffusion
of
technology or processes. In essence, technological change covers the
invention of technologies (including processes) and their
commercialization or release as open source via research and development
(producing emerging technologies), the continual improvement of
technologies (in which they often become less expensive), and the
diffusion of technologies throughout industry or society (which sometimes
involves
disruption
and
convergence). In short,
technological change is based on both better and more technology. Original
model of three phases of the process of technological change are invention,
innovation and
diffusion.
Technical Change is a term used in economics to describe a change in
the amount of output produced from the same amount of inputs. A technical
change is not necessarily technological as it might be organizational, or
due to a change in a constraint such as regulation, input prices, or
quantities of inputs. It is possible to measure technical change as the
change in output per unit of factor input.
Virtual Reality.
General-Purpose Technologies or platform technologies, are catalysts
for major changes in economic activity. They are general purpose in the
sense that they have multiple applications across a wide range of
industries and support waves of innovative products and services.
Emerging Technologies are
technologies whose development,
practical
applications, or both are still largely unrealized, such that they are
figuratively emerging into prominence from a background of nonexistence or
obscurity. These technologies are new, such as various applications of
biotechnology including gene therapy. Emerging technologies are often
perceived as capable of changing the status quo. Emerging technologies are
characterized by radical novelty (in application even if not in origins),
relatively fast growth, coherence, prominent impact, and uncertainty and
ambiguity. In other words, an emerging technology can be defined as "a
radically novel and relatively fast growing technology characterized by a
certain degree of coherence persisting over time and with the potential to
exert a considerable impact on the socio-economic domain(s) which is
observed in terms of the composition of actors, institutions and
patterns
of interactions among those, along with the associated knowledge
production processes. Its most prominent impact, however, lies in the
future and so in the emergence phase is still somewhat uncertain and
ambiguous." Emerging technologies include a variety of technologies such
as
educational technology, information technology, nanotechnology,
biotechnology, cognitive science, psychotechnology,
robotics, and
artificial intelligence. New technological fields may result from the
technological convergence of different systems evolving towards similar
goals. Convergence brings previously separate technologies such as voice
(and telephony features), data (and productivity applications) and video
together so that they share resources and interact with each other,
creating new efficiencies. Emerging technologies are those technical
innovations which represent progressive developments within a field for
competitive advantage; converging technologies represent previously
distinct fields which are in some way moving towards stronger
inter-connection and similar goals. However, the opinion on the degree of
the impact, status and economic viability of several emerging and
converging technologies varies.
Emerging Technologies List (wiki).
Technological Convergence is the tendency for technologies that were
originally unrelated to become more closely integrated and even unified as
they develop and advance. For example, watches, telephones, television,
and computers began as separate and mostly unrelated technologies, but
have converged in many ways into interrelated parts of a telecommunication
and media industry, sharing common elements of digital electronics and
software.
Kardashev Scale
is a method of measuring a civilization's level of
technological
advancement based on the amount of energy a civilization is able to use.
A Type I civilization—also called a
planetary civilization—can use and store all of the energy available on
its planet.
A Type II civilization—also
called a
stellar civilization—can harness the total energy of its planet's
parent star.
A Type III civilization—also
called a galactic civilization—can control energy on the scale of its
entire host galaxy. The scale is hypothetical, and regards energy
consumption on a cosmic scale. Various extensions of the scale have since
been proposed, including a wider range of power levels (types 0, IV and V)
and the use of metrics other than pure power.
Advanced Civilizations.
Great
Filter is the idea that in the development of life from the earliest
stages of abiogenesis to reaching the highest levels of development on the
Kardashev scale, there is a
barrier to development that makes detectable extraterrestrial life
exceedingly rare. The Great Filter is one possible resolution of the Fermi
paradox. The great filter on planet earth is
corporate greed and
corruption, and the
censorship by
ignorant people.
Bishop Ring is a type of hypothetical rotating space habitat
originally proposed in 1997 by Forrest Bishop of the Institute of
Atomic-Scale Engineering. The concept is a smaller scale version of the
Banks Orbital, which itself is a smaller version of the Niven ring. Like
other space habitat designs, the Bishop Ring would spin to produce
artificial gravity by way of centrifugal force.
Advanced
Technology does not mean
intelligent life, in the same way that higher education does not mean
intelligence.
Scientists promote FAIR standards for managing artificial intelligence
models. Computational scientists adopt new framework for making AI
models more findable, accessible, interoperable and
reusable. FAIR
stands for
findable,
accessible,
interoperable and
reusable. By
making AI models FAIR, we no longer have to build each system from the
ground up each time. By creating and sharing FAIR AI models, we can
reduce the amount of
duplication of effort and share best practices for how to use these
models to enable great science. In this way, the researchers were able to
create a computational framework that could help bridge various hardware
and software, creating AI models that could be run similarly across
platforms and that would yield reproducible results.
Compatibility.
Cross Impact Analysis is a methodology that helps determine how
relationships between events would impact resulting events and reduce
uncertainty in the future.
Technology Forecasting attempts to predict the future characteristics
of useful technological machines, procedures or techniques. Researchers
create technology forecasts based on past experience and current
technological developments. Like other forecasts, technology forecasting
can be helpful for both public and private organizations to make smart
decisions. By analyzing future opportunities and threats, the forecaster
can improve decisions in order to achieve maximum benefits. Today, most
countries are experiencing huge social and economic changes, which heavily
rely on technology development. By analyzing these changes, government and
economic institutions could make plans for future developments. However,
not all of historical data can be used for technology forecasting,
forecasters also need to adopt advanced technology and quantitative
modeling from experts’ researches and conclusions.
Investment -
Variables.
Futures
Studies is the
systematic, interdisciplinary and holistic study of social and
technological advancement, and other environmental trends, often for the
purpose of exploring
how people will live and work in the future.
Predictive techniques, such as forecasting, can be applied, but
contemporary futures studies scholars emphasize the importance of
systematically exploring alternatives. In general, it can be considered as
a branch of the social sciences and parallel to the field of history.
Futures studies (colloquially called "futures" by many of the field's
practitioners) seeks to understand what is likely to continue and what
could plausibly change. Part of the discipline thus seeks a systematic and
pattern-based understanding of past and present, and to explore the
possibility of future events and
trends. Unlike
the physical sciences where a narrower, more specified system is studied,
futurology concerns a much bigger and more complex world system. The
methodology and knowledge are much less proven as compared to natural
science or even social science like sociology and economics. There is a
debate as to whether this discipline is an art or science, and it is
sometimes described as pseudoscience; nevertheless, the Association of
Professional
Futurists was formed in 2002, a Foresight Competency Model
was developed in 2017, and it is now possible to academically study it. Futurology is an
interdisciplinary field that aggregates and analyzes trends, with both lay
and professional methods, to compose possible futures. It includes
analyzing the sources, patterns, and causes of change and stability in an
attempt to develop foresight. Around the world the field is variously
referred to as
futures studies, futures research,
strategic foresight, futuristics,
futures thinking, futuring, and futurology. Futures studies
and strategic foresight are the academic field's most commonly used terms in the English-speaking world.
Precautionary Principle states that innovations with potential for
causing harm can happen when extensive scientific knowledge on the matter
is lacking. It emphasizes caution, pausing and review before leaping into
new innovations that may prove disastrous.
The Paradox of our
Time.
Homo Technologicus are
humans who use
technology tools.
Humans used their hands for
tool making, and now use their brains as an
extension of those tools.
The world is not getting any easier. Even though we are
more efficient and more advanced, we still are not morally advanced or
morally efficient. We show intelligence and we show potential, but we
don't honor or respect our intelligence or our potential. We can't be
hypocrites and have double standards. We either live in our reality or we
die living in a fantasy. Human intelligence is our power, everyone should
have it. if not, we become weak links in the chain of life, our ultimate
power comes from working together.
Super Intelligence
Singularitarianism is a movement defined by the belief that a
technological singularity—the creation of
super-intelligence—will likely
happen in the
medium future, and that deliberate action ought to be taken
to ensure that the Singularity benefits humans.
Super Intelligence is a hypothetical agent that possesses
intelligence far surpassing
that of the brightest and most gifted human minds.
General Intelligence -
Weak Intelligence -
Intelligence -
Generative AI -
Human Machine -
Machine Learning
-
Turing -
Autonomous
-
Technology Types
You can never have
Trans-Humanism, or a
Super-Intelligence, or a
Technological Singularity without humans first learning to master their own intelligence.
Technological Singularity is not actually talking about super
intelligent machines, it is in reference to Humans, or a
Super-Mind. It's not machines
that will cause the unfathomable changes to human civilization, it will be
a new level of
educated humans who have finally grasped the full
potential
of
knowledge and information. It will be humans creating self-improvement
cycles, with each new and more intelligent generation appearing more and
more rapidly, causing an intelligence explosion and resulting in powerful
changes and improvements in people and to the planet. So we will not just
make incredible machines, we will first develop incredible humans using a
new improved education system that is
already in
development, and will soon be
ready for download,
literally. Your
software
update and
operating
system is almost ready.
Intelligence Explosion is a possible
outcome of humanity building artificial general intelligence or when we
finally improve education so that all students become intelligent by the
time they graduate high school.
Robot Laws.
The only way to
create Artificial Intelligence is to First Create Intelligent
Humans. Then intelligent humans could then examine the methods and
actions that helped to define
intelligence. This could
ultimately help guide intelligent humans to repeat these
processes mechanically so that they could eventually create
artificial intelligence in limited applications. And what I mean
by limited applications is that there is no such thing as a
Mechanical Consciousness. Artificial intelligence, or
Mind
Clone, will never
become conscience of itself, unless ‘God’ allows machine
intelligence to have souls, or maybe, that humans could actually
figure out someway to put a
human soul into a machine, like in
the movie
The Matrix, or the movie
Tron.
But of course our priorities will not allow us to waste any more
time perusing these types of fantasies, unless of course ‘
Hollywood’
feels the need to play out these fantasies a few more times in
the movies. Besides,
the AI we experience in the movies are mostly just metaphors
that are created to imitate the
ignorant and corrupt behavior of
our leaders, as well as our
social inconsistencies. Real AI
will be nothing like what you see in the movies. So AI for now
is way beyond anyone's comprehension. But when we finally do
create
the perfect education that produces intelligent people,
then we will start hearing more about the potential of AI. So
until then, people can only incorrectly fantasize about AI, and
incorrectly fantasize about what things will be like in the future. What
human intelligence will be like in the future is beyond peoples current
level of understanding, so any assumptions made about the future will have
serious flaws and misconceptions.
Drones.
We
first have to come up with proven teaching methods and
curriculum that would
create intelligent humans. Intelligent humans who are capable of
applying logic in all aspects of their life, intelligent humans
who never stop learning, and, intelligent humans that are not
vulnerable to corruption or ignorant behavior. When humans
accomplish this, then and only then, will we ever have a chance
to create some sort of
artificial
intelligence. Creating intelligent machines in multiple
capacities and linking them together will be the closet we can
get to artificial intelligence. But it will never have that same
capability as the human brain, and artificial intelligence will
always need a human to interact with it at some point. The only
real intelligence is the
human brain, which is kind of scary
because the human brain is not perfect or correctly educated yet.
Maybe we should stop calling it
Artificial Intelligence and
just call it Machine Intelligence, or just
Robot?
That does
not Compute, Lost in Space (youtube) -
My Favorite Martian
A.I.
Artificial Intelligence 2001 American science fiction film
directed, written, and co-produced by Steven Spielberg.
I'm not saying that I doubt that these types
of technological advances will never happen. I just don't like to
say things before people can understand them, because that will only
create more misunderstanding and more confusion. So unless you're trying to
manipulate peoples thinking, you're better off just saying something
that's happening now, or say something that is not happening
now, something that people can confirm, something people can
learn from. We have to stop trying to wow people or impress
people, we are not kids any more. Leave the wowing to mother nature,
because nature is a lot better at impressing us then our
technological advancements. After all, nature has been advancing
for millions of years, remember,
we just got
here.
AI can't
define certain messages from the
context or understand when
certain content is being
suggestive, or know when someone is
joking or asking a
trick question. Ai
can't watch a movie and then explain the movie from the directors point of view, or explain what the director was trying to communicate. There's a lot
visual clues in a movie, like colors used, lighting, costumes, body
language, and a lot of other underlying meanings, so on and so on. So trying to program Ai to analyze all this
information in a meaningful way is extremely difficult. There are millions
of
scenarios and things that are only
relevant to particular situations, things
that AI will have a hard time understanding. Ai would have to determine
who is speaking? Who is this person speaking? Who are they speaking to? Do
they know the person they are speaking to? Where are they speaking? When
are they speaking? How are they speaking? Why are they speaking? What is
the message? How should this information be used? How should this
information be processed? Ai would have to know when to change it
parameters because
some information
could confuse the
real meaning
of a
situation. The
machine can only be as intelligent as the person who programed it. And who
is that person?
Artificial
Intelligence in the 21st Century - Yann LeCun (youtube published on
Nov 1, 2017)
Intelligent Machines have incredible
calculation abilities,
but that's only if they're
calculating the
things
that matter.
People never fully explain the practical
uses for AI, or do they give good examples that shows the
utility of these
technologies. That's because they don't want people to become intelligent
using technology, they just want people to be mindless consumers.
Conversation with my Ai Robot
- what a machine would say (more advanced
chatBOT)
Chinese Room holds that a program cannot give a computer a "mind",
"understanding" or "consciousness",[a] regardless of how intelligently or
human-like the program may make the computer behave.
John
Searle (wiki).
Collective
Debate at MIT is a tool that tries to engage users in constructive
debate.
When Computers Learn how to
Play Games, it's not about the Game, it's Proving the Potential of Ai.
OpenAI is a
non-profit artificial intelligence research company. Our goal is
to advance digital intelligence in the way that is most likely
to benefit humanity as a whole.
Open Ai Gym is a
toolkit for developing and comparing reinforcement learning algorithms.
OpenAI
(wiki) -
Ai
Course (Berkeley)
Learn with Google AI.
Whether you're just learning to code or you're a seasoned
machine learning practitioner, you'll find
information and exercises in this resource center to help you develop your
skills and advance your projects.
Volta GPU Tensor Core New GPU Architecture, Designed to Bring AI to
Every Industry.
Technical papers,
essays, reports, software by Peter Norvig
Carnegie Mellon University Artificial Intelligence
Shyam
Sankar: The Rise of Human-Computer Cooperation (youtube)
Neural
Modularity helps Organisms evolve to Learn New Skills without
Forgetting Old Skills (youtube)
Biologically-Inspired Massively-Parallel Architectures -
Computing Beyond a Million Processors
Technology
Warnings -
DronesHumans have physical limitations, but
humans have very little limitations in the mind. Human
enhancement is not about technology, because
technology is only a tool. Human enhancement is about using
the worlds most valuable knowledge and skills that the world has
to offer that would help develop advanced intelligent humans,
people who would be able to live high quality lives, while at
the same time, solve every problem on the planet. That's the
future. Technology can get you from point A to point B quicker,
and technology can help you to learn things faster, but
technology does not replace the journey or create the
destination, or create the knowledge and information that is
needed to understand yourself and the world around you.
Technology is a time saver, but technology is not life, or does
technology give life meaning. The human mind is our greatest
asset, and if we don't take care of our minds, then technology
will not save us, it will most likely hurt us and destroy us. If
we improve education to match the worlds accumulated knowledge
and wisdom, then we will save the world.
Cybernetics is exploring regulatory systems—their
structures, constraints, and possibilities. The scientific study of
control and communication in the animal and the machine. Control of any
system using technology.
Ontology is the philosophical study of the nature of being,
becoming, existence and/or reality, as well as the basic categories of
being and their relations.
Philosopher King.
Emotivism is a
meta-ethical
view that claims that ethical sentences do not express propositions but emotional attitudes.
The Internet
is the closest
thing that we have to Artificial
Intelligence. The Internet is Humans using Machines, Technology
and Knowledge together as
one. All life forms use elements of their environment in order
to survive and prosper. Humans have now reached a new level, a
level that increases our potential, and a level that gives us
limitless possibilities. Here we go!
Networks.
Computer Intelligence - General Intelligence
Computational Intelligence
refers to the
ability of a
computer to
learn a specific task from
data or
experimental observation. Even though it is commonly considered a synonym
of
soft computing, there is still no commonly accepted definition of
computational intelligence.
Machine Learning
-
Predictions -
Patterns -
Adapting -
Variables -
Generative Intelligence -
Weak Intelligence -
Super
Intelligence
Synthetic Intelligence is an alternative term for artificial
intelligence which emphasizes that the intelligence of machines need not
be an imitation or in any way artificial; it can be a genuine form of
intelligence.
Ambient Intelligence refers to electronic environments that
are sensitive and responsive to the presence of people.
Embedded: many networked
devices are integrated into the environment.
Context aware: these devices can recognize you and your situational
context. personalized: they can be tailored to your needs.
Adaptive: they can change in response
to you.
Anticipatory:
they can anticipate your desires without conscious mediation.
Artificial General Intelligence is the intelligence of a
machine that could successfully perform any intellectual task that a human
being can. It is a primary goal of artificial intelligence research and a
common topic in science fiction and futurism. Artificial general
intelligence is also referred to as "strong AI", "full AI" or as the
ability of a machine to perform "general intelligent action".
Super Intelligence -
Human Operating System -
Machine that Teaches
Symbolic AI is an
approach that
trains Artificial
Intelligence the same way human brain
learns. It learns to
understand the world by forming internal symbolic
representations
of its “world”.
Symbols play
a vital role in the human thought and reasoning process. Example, “What is
an apple?”, the answer will be that an apple is “a fruit,” “has red,
yellow, or green color,” or “has a roundish shape.” These descriptions are
symbolic because we utilize symbols (color, shape, kind) to describe an
apple.
Symbolic Artificial Intelligence is the term for the collection of all
methods in artificial intelligence research that are based on high-level
human-readable
symbolic representations of problems, logic and search. Symbolic AI used
tools such as logic programming, production rules, semantic nets and
frames, and it developed applications such as
knowledge-based systems,
expert systems,
symbolic mathematics, automated theorem provers, ontologies, the
semantic web, and automated planning and scheduling systems. The Symbolic
AI paradigm led to seminal ideas in search, symbolic programming
languages, agents, multi-agent systems, the semantic web, and the
strengths and limitations of formal knowledge and reasoning systems.
Rational Agent is an
agent that has clear preferences, models
uncertainty via expected values of
variables or functions of variables,
and always chooses to perform the action with the optimal expected outcome
for itself from among all feasible actions. A
rational agent can be
anything that
makes decisions,
typically a person, firm, machine, or software. Rational agents are also
studied in the fields of cognitive science, ethics, and philosophy,
including the philosophy of
practical reason. The action a rational agent takes depends on: The preferences of
the agent. The agent's information of its environment, which may come from
past experiences. The actions, duties and obligations available to the
agent. The estimated or actual benefits and the chances of success of the
actions.
Intelligent Agent refers to an
autonomous entity which acts,
directing
its activity towards achieving goals as an agent, upon an
environment using observation through
sensors and consequent
intelligent
actuators. Intelligent agents may also learn or use
knowledge to achieve their goals. They may be very simple or very complex.
A reflex machine, such as a thermostat, is considered an example of an
intelligent agent. IA systems should exhibit the following
characteristics: Accommodate new
problem solving rules incrementally.
Adapt online and in real time. Are able to
analyze themselves in terms of
behavior, error and success. Learn and improve through interaction with
the environment (embodiment). Learn quickly from large amounts of data.
Have memory-based exemplar storage and retrieval capacities. Have
parameters to represent short and long term memory, age, forgetting, etc.
Intelligent Agent
-
Autonomous Agent
(PDF).
Multi-Agent
System is a computerized system composed of multiple interacting
intelligent agents. Multi-agent systems can solve problems that are
difficult or impossible for an individual agent or a monolithic system to
solve. Intelligence may include methodic, functional, procedural
approaches, algorithmic search or reinforcement learning.
Artificial Brain.
Automated Reasoning is an area of
cognitive science that involves
knowledge representation and metalogic dedicated to understanding
different aspects of reasoning. The study of automated reasoning helps
produce computer programs that allow computers to
reason completely, or
nearly completely, automatically. Although automated reasoning is
considered a sub-field of artificial intelligence, it also has connections
with theoretical computer science, and even philosophy.
Automated
Reasoning Systems.
Reasoning System is a software system that generates
conclusions from
available knowledge using logical techniques such as deduction and
induction. Reasoning systems play an important role in the implementation
of artificial intelligence and
knowledge-based systems. By the everyday
usage definition of the phrase, all computer systems are reasoning systems
in that they all automate some type of logic or decision. In typical use
in the Information Technology field however, the phrase is usually
reserved for systems that perform more complex kinds of reasoning.
Affective Computing is the study and development of systems and
devices that can recognize,
interpret, process, and
simulate human affects.
It is an interdisciplinary field spanning computer science, psychology,
and cognitive science. Detecting emotional information usually begins with
passive sensors that capture data about the
user's physical state or behavior without interpreting the input.
Deep Reinforcement Learning (deepmind)
DeepMind Technologies
is a British artificial intelligence company founded in September 2010. It
was acquired by Google in 2014.
Reinforcement Learning is an area of
machine learning inspired by
behaviorist psychology, concerned with how software agents ought to take
actions in an environment so as to maximize some notion of cumulative
reward.
Ubiquitous
Computing is a concept in software engineering and computer science
where computing is
made to appear
anytime and everywhere. In contrast to desktop computing, ubiquitous
computing can occur using any device, in any location, and in any format.
A user interacts with the computer, which can exist in many different
forms, including laptop computers, tablets and terminals in everyday
objects such as a refrigerator or a pair of glasses. The underlying
technologies to support ubiquitous computing include Internet, advanced
middleware, operating system, mobile code, sensors, microprocessors, new
I/O and user interfaces, networks, mobile protocols, location and
positioning and new materials. This paradigm is also described as
pervasive computing, ambient intelligence, or "everyware". Each term
emphasizes slightly different aspects. When primarily concerning the
objects involved, it is also known as physical computing, the Internet of
Things, haptic computing, and "things that think". Rather than propose a
single definition for ubiquitous computing and for these related terms, a
taxonomy of properties for ubiquitous computing has been proposed, from
which different kinds or flavors of ubiquitous systems and applications
can be described. Ubiquitous computing touches on a wide range of research
topics, including distributed computing, mobile computing, location
computing, mobile networking, context-aware computing, sensor networks,
human–computer interaction, and artificial intelligence.
Computer
Science and Artificial Intelligence Laboratory
Partnership on
AI best practices on AI technologies.
Computing
Machinery and Intelligence is a seminal paper written by
Alan Turing on the topic of artificial intelligence. The paper, published
in 1950 in Mind, was the first to introduce his concept of what is now
known as the Turing test to the general public.
Automated Machine Learning is the process of automating the process of
applying
machine learning to real-world
problems. AutoML covers the complete pipeline from the raw dataset to the
deployable machine learning model. AutoML was proposed as an artificial
intelligence-based solution to the ever-growing challenge of applying
machine learning. The high degree of automation in AutoML allows
non-experts to make use of machine learning models and techniques without
requiring to become an expert in this field first. Automating the process
of applying machine learning end-to-end additionally offers the advantages
of producing simpler solutions, faster creation of those solutions, and
models that often outperform hand-designed models.
Teaching Machine.
Infants outperform AI in 'commonsense psychology'. New study shows how
infants are more adept at spotting motivations that drive human behavior.
Infants outperform artificial intelligence in detecting what motivates
other people's actions, finds a new study by a team of psychology and data
science researchers. Its results point to shortcomings in today's
technologies and where improvements are needed for AI to more fully
replicate human behavior. Baby Intuitions Benchmark challenges machines to
predict the plausibility of an agent's behavior based on the underlying
causes of its actions. Because BIB's content and paradigm are adopted from
developmental cognitive science, BIB allows for direct comparison between
human and machine performance.
Weak Intelligence
Weak AI
is artificial intelligence that implements a limited part of mind, or as
narrow AI, is focused on one narrow task.
In John Searle's terms it “would be useful for testing hypothesis about
minds, but would not actually be minds”. Contrast with
strong AI which is defined as a machine with
the ability to apply intelligence to any problem, rather than just one
specific problem, sometimes considered to require consciousness, sentience
and mind. “Weak AI” is sometimes called “narrow AI”, but the latter is
usually interpreted as subfields within the former. Hypothesis testing
about minds or part of minds are typically not part of narrow AI, but
rather implementation of some superficial lookalike feature. Many
currently existing systems that claim to use “artificial intelligence” are
likely operating as a narrow AI focused on a specific problem, and are not
weak AI in the traditional sense. Siri, Cortana, and Google Assistant are
all examples of narrow AI, but they are not good examples of a weak AI, as
they operate within a limited pre-defined range of functions. They do not
implement parts of minds, they use natural language processing together
with predefined rules. They are in particular not examples of strong AI as
there are no genuine intelligence nor self-awareness. AI researcher Ben
Goertzel, on his blog in 2010, stated Siri was "VERY narrow and brittle"
evidenced by annoying results if you ask questions outside the limits of
the application. Some commentators think weak AI could be dangerous
because of this "brittleness" and fail in unpredictable ways. Weak AI
could cause disruptions in the electric grid, damage nuclear power plants,
cause global economic problems, and misdirect autonomous vehicles. In
2010, weak AI trading algorithms led to a “flash crash,” causing a
temporary but significant dip in the market.
Google Assistant can
answer simple questions correctly 92.9% of the time . Siri correctly
answers 83.1% of questions, while Alexa gets 79.8% correct. Apple Siri
performed best on simple commands.
Generative AI -
Super Intelligence -
Intelligence -
Human
Machine
Applications of Artificial Intelligence. Weak AI is a form of AI where
programs are developed to perform specific tasks that can be utilized for
a wide range of activities including medical diagnosis, electronic trading
platforms, robot control, and remote sensing. AI has been used to develop
and advance numerous fields and industries, including finance, healthcare,
education, transportation, and more.
Commonsense Reasoning is concerned with
simulating the human
ability to
make
presumptions about the type and essence of
ordinary situations they
encounter every day. These assumptions include judgments about the
physical properties, purpose,
intentions and behavior of people and
objects, as well as possible outcomes of their actions and interactions. A
device that exhibits commonsense reasoning will be capable of
predicting
results and drawing conclusions that are similar to humans'
folk
psychology (humans' innate ability to reason about people's behavior and
intentions) and
naive physics (humans' natural understanding of the
physical world).
Human Error.
Symbolic Artificial Intelligence is the term for the collection of all
methods in artificial intelligence research that are based on high-level
symbolic (human-readable) representations of problems, logic and search.
Symbolic AI used tools such as logic programming, production rules,
semantic nets and frames, and it developed applications such as
knowledge-based systems (in particular, expert systems), symbolic
mathematics, automated theorem provers, ontologies, the semantic web, and
automated planning and scheduling systems. The Symbolic AI paradigm led to
seminal ideas in search, symbolic programming languages, agents,
multi-agent systems, the semantic web, and the strengths and limitations
of formal knowledge and reasoning systems.
Symbolic
Knowledge Distillation: from General Language Models to Commonsense
Models. The common practice for training commonsense models has gone
from-human-to-corpus-to-machine: humans author
commonsense
knowledge graphs in order to train commonsense models. In this work,
we investigate an alternative, from-machine-to-corpus-to-machine: general
language models author these commonsense knowledge graphs to train
commonsense models.
Symbol Grounding Problem is a concept in the fields of artificial
intelligence, cognitive science, philosophy of mind, and semantics. It
addresses the challenge of connecting symbols, such as words or abstract
representations, to the real-world objects or concepts they refer to. In
essence, it is about how symbols acquire meaning in a way that is tied to
the physical world. It is concerned with how it is that words (symbols in
general) get their meanings, and hence is closely related to the problem
of what meaning itself really is. The problem of meaning is in turn
related to the problem of how it is that mental states are meaningful, and
hence to the problem of consciousness: what is the connection between
certain physical systems and the contents of subjective experiences.
Is AI Vulnerable to Viruses?
-
Is Human Language
Vulnerable to Viruses?
Rule Based AI
or rule-based systems are a basic type of model that uses a set of
prewritten rules to make decisions and solve problems. Developers create
rules based on human expert knowledge that enable the system to process
input data and produce a result.
Context
Based AI is a form of artificial intelligence that can understand
and interpret the context of a given situation or query to provide more
relevant and accurate responses or outputs.
Contextual AI is a technology that takes a human-centric approach
to AI, understanding human context as interacts with humans.
Context Aware AI is the ability of a system
or system component to gather information about its environment at any
given time and adapt behaviors accordingly. Contextual or context-aware
computing uses software and hardware to automatically collect and analyze
data to guide responses. Saves personal preferences.
Narrow Domain AI or weak AI, is an
application of artificial intelligence technologies to enable a
high-functioning system that replicates. Includes Image and facial
recognition systems, Chatbots and conversational assistants, Predictive
maintenance models, Self-driving vehicles, and Alpha go.
Domains of AI: Machine Learning. Deep
Learning. Robotics. Expert systems. Fuzzy Logic.
Reasoning AI is the ability of a computer
to make deductions based on data and knowledge. Reasoning is a key
component of AI applications such as expert systems, natural language
processing and machine learning.
Reasoning is the
process of drawing logical conclusions from given information. Four types
of reasoning will be our focus here: deductive reasoning, inductive
reasoning, abductive reasoning and reasoning by analogy. Causal AI is the
only technology that can reason and make choices like humans do. It
utilizes causality to go beyond narrow machine learning predictions and
can be directly integrated into human decision-making.
Chatgpt.
Service-Oriented Architecture is a style of software design
where services are provided to the other components by application
components, through a communication protocol over a network. The basic
principles of service oriented architecture are independent of vendors,
products and technologies. A service is a discrete unit of functionality
that can be accessed remotely and acted upon and updated independently,
such as retrieving a credit card statement online. A service has four
properties according to one of many definitions of SOA: It logically
represents a business activity with a specified outcome. It is
self-contained. It is a black box for its consumers. It may consist of
other underlying services. Different services can be used in conjunction
to provide the functionality of a large software application.
Service-oriented architecture makes it easier for software components to
communicate and cooperate over the network, without requiring any human
interaction or changes in the underlying program, so that service
candidates can be redesigned before their implementation.
Event-Driven Architecture also known as message-driven
architecture, is a software architecture pattern promoting the production,
detection, consumption of, and reaction to events.
Complex Event Processing
is a method of tracking and analyzing (processing) streams of information
(data) about things that happen (events), and deriving a conclusion from
them. Complex event processing, or CEP, is event processing that combines
data from multiple sources to infer events or patterns that suggest more
complicated circumstances. The goal of complex event processing is to
identify meaningful events (such as opportunities or threats) and respond
to them as quickly as possible.
Blue Gene is an IBM project aimed at designing
supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS)
range, with low power consumption.
Device Driver is a computer program that operates or
controls a particular type of device that is attached to a computer. A
driver provides a software interface to hardware devices, enabling
operating systems and other computer programs to access hardware functions
without needing to know precise details of the hardware being used.
Turing
Test -
Algorithms
Register Machine
is a generic class of abstract machines used in a manner similar to a
Turing machine. All the models are Turing equivalent.
Processor
Register is a quickly accessible location available to a computer's
central processing unit (CPU). Registers usually consist of a small amount
of fast storage, although some registers have specific hardware functions,
and may be read-only or write-only. Registers are typically addressed by
mechanisms other than main memory, but may in some cases be assigned a
memory address. Almost all computers, whether load/store architecture or
not, load data from a larger memory into registers where it is used for
arithmetic operations and is manipulated or tested by machine
instructions. Manipulated data is then often stored back to main memory,
either by the same instruction or by a subsequent one. Modern processors
use either static or dynamic RAM as main memory, with the latter usually
accessed via one or more cache levels. Processor registers are normally at
the top of the memory hierarchy, and provide the fastest way to access
data. The term normally refers only to the group of registers that are
directly encoded as part of an instruction, as defined by the instruction
set. However, modern high-performance CPUs often have duplicates of these
"architectural registers" in order to improve performance via register
renaming, allowing parallel and speculative execution. Modern x86 design
acquired these techniques around 1995 with the releases of Pentium Pro,
Cyrix 6x86, Nx586, and AMD K5. A common property of computer programs is
locality of reference, which refers to accessing the same values
repeatedly and holding frequently used values in registers to improve
performance; this makes fast registers and caches meaningful. Allocating
frequently used variables to registers can be critical to a program's
performance; this register allocation is performed either by a compiler in
the code generation phase, or manually by an assembly language programmer.
Abstract
Machine is a theoretical model of a
computer hardware or
software system used in
automata theory. Abstraction of computing processes is used in both the
computer science and computer engineering disciplines and usually assumes
a discrete time paradigm.
Hao Wang was a logician, philosopher, mathematician, and
commentator on
Kurt
Gödel. (20 May 1921 – 13 May 1995).
Advice Complexity is an extra input to a Turing machine that
is allowed to depend on the length n of the input, but not on the input
itself. A decision problem is in the complexity class P/f(n) if there is a
polynomial time Turing machine M with the following property: for any n,
there is an advice string A of length f(n) such that, for any input x of
length n, the machine M correctly decides the problem on the input x,
given x and A.
Decision Problem is a question in some formal system that
can be posed as a yes-no question, dependant on the input values. Decision
problems typically appear in
mathematical questions of decidability, that
is, the question of the existence of an effective method to determine the
existence of some object or its membership in a set; some of the
most important problems in mathematics are undecidable.
Oracle Machine
is an abstract machine used to study decision problems. It can
be visualized as a Turing machine with a black box, called an oracle,
which is able to solve certain decision problems in a single operation.
The problem can be of any complexity class. Even undecidable problems,
such as the halting problem, can be used.
Human
Intelligence -
Disinhibition
Human Brain -
Memory -
Associations
Transmitting Data
using Light
20Q is a
computerized game of twenty questions that began as a test in artificial
intelligence (AI). It was invented by Robin Burgener in 1988.
Advice Programming describes a class of functions which
modify other functions when the latter are run; it is a certain function,
method or procedure that is to be applied at a given join point of a
program.
Effective Method is a procedure for solving a problem from a
specific class. An effective method is sometimes also called mechanical
method or procedure.
Decidability Logic refers to the decision problem, the
question of the existence of an effective method for determining
membership in a set of formulas, or, more precisely, an algorithm that can
and will return a boolean true or false value that is correct (instead of
looping indefinitely, crashing, returning "don't know" or returning a
wrong answer).
Optimization Problem is the problem of finding the best
solution from all feasible solutions. Optimization problems can be divided
into two categories depending on whether the variables are continuous or
discrete. An optimization problem with discrete variables is known as a
combinatorial optimization problem. In a combinatorial optimization
problem, we are looking for an object such as an integer, permutation or
graph from a finite (or possibly countable infinite) set. Problems with
continuous variables include constrained problems and multimodal problems.
Decision
Making -
Computing
-
Parallel Computing
Confusion Matrix is a specific table layout that allows
visualization of the performance of an algorithm, typically a supervised
learning one (in unsupervised learning it is usually called a matching
matrix). Each column of the matrix represents the instances in a predicted
class while each row represents the instances in an actual class (or vice
versa). The name stems from the fact that it makes it easy to see if the
system is confusing two classes (i.e. commonly mislabelling one as
another).
Word Matrix.
Modular Programming is a software design technique that
emphasizes separating the functionality of a program into independent,
interchangeable modules, such that each contains everything necessary to
execute only one aspect of the desired functionality.
Catastrophic Interference is the tendency of an artificial
neural network to completely and abruptly forget previously learned
information upon learning new information. Neural networks are an
important part of the network approach and connectionist approach to
cognitive science. These networks use computer simulations to try and
model human behaviours, such as memory and learning. Catastrophic
interference is an important issue to consider when creating connectionist
models of memory.
Statistical Machine Translation is a machine translation
paradigm where translations are generated on the basis of statistical
models whose parameters are derived from the analysis of bilingual text
corpora. The statistical approach contrasts with the rule-based approaches
to machine translation as well as with example-based machine translation.
Machine Translation is a sub-field of computational
linguistics that investigates the use of software to translate text or
speech from one language to another.
Software Rot describes the perceived "rot" which is either a
slow deterioration of software performance over time or its diminishing
responsiveness that will eventually lead to software becoming faulty,
unusable, or otherwise called "legacy" and in need of upgrade. This is not
a physical phenomenon: the software does not actually decay, but rather
suffers from a lack of being responsive and updated with respect to the
changing environment in which it resides.
Legacy Code is source code that relates to a no-longer
supported or manufactured operating system or other computer technology.
Planned Obsolescence.
Model-Driven Engineering is a software development
methodology that focuses on creating and exploiting domain models, which
are conceptual models of all the topics related to a specific problem.
Hence, it highlights and aims at abstract representations of the knowledge
and activities that govern a particular application domain, rather than
the computing (f.e. algorithmic) concepts.
Knowledge Management
-
Internet
Expert
System S.p.A. specializes in the analysis and management of
unstructured information using a semantic approach.
Open Knowledge Base Management is a set of computer software
for systems management of applications that use knowledge management
techniques (the KBM in OpenKBM stands for Knowledge Based Management).
Conversations with Artificial Intelligent Machines
If a computer tricks a human into believing that the machine is
human, this does not mean that the
machine is intelligent, it only means
that the human is not intelligent.
People
can be easily fooled, and not just by
machines.
Super Intelligence -
Weak
Intelligence -
Machine Learning -
Intelligence
People
believe they
can tell
what someone looks like just
by hearing their voice. Face-voice matching is what humans do, even if the
face is a machine. A lower
pitched
voice is usually a male voice around
96 Hz, and a womens voice is usually around 280 Hz. If communication
was just in text or written, then
interpretation can
vary.
Prosopagnosia is face blindness.
Turing Test
was developed by
Alan
Turing in 1950. It's a test of a machine's ability to exhibit
intelligent behavior
equivalent to, or indistinguishable from, that of a human.
Turing
proposed
that a human evaluator would judge natural language conversations between
a human and a machine designed to generate human-like responses. The
evaluator would be aware that one of the two partners in conversation is a
machine, and all participants would be separated from one another. The
conversation would be limited to a text-only channel such as a computer
keyboard and screen so the result would not depend on the machine's
ability to render words as speech. If the evaluator cannot reliably tell
the machine from the human, the machine is said to have passed the
test.
The test does not check the ability to give
correct
answers to questions, only how closely answers resemble those a human
would give.
Turing Machine (algorithms).
Can you tell the difference between a
machine
and a human? If the human made the machine and wrote its language, then it's not just a machine, but a hybrid machine with human qualities.
Turing
Tests (Dartmouth).
“We can only see a
short distance
ahead, but we can see plenty there that
needs to be done.” -
Alan
Turing I'm sure you
can have a conversation with a computer, but you are just
making inquires
into its
database,
you are not getting to know the computer like you would a person. There's
a difference between Recorded Messages and Logical Associations.
Anthropomorphism is the attribution of
human traits, emotions, or
intentions to non-human entities. It is considered to be an innate
tendency of human psychology.
CAPTCHA
is an acronym for "
Completely
Automated
Public
Turing
test to
Tell
Computers and
Humans
Apart", which is a type of
challenge-response test
used in computing to determine whether or not the user is human.
Recursive Cortical Network (RCN). It's a Robot
making sure that a human is not a Robot.
Irony -
Can you tell a good human from a bad human?
New attack could make website security captchas obsolete. The new
algorithm, based on deep learning methods, is the most effective solver of
captcha security and authentication systems to date and is able to defeat
versions of text captcha schemes used to defend the majority of the
world's most popular websites. It works by using a technique known as a
'Generative Adversarial Network', or GAN. This involves teaching a captcha
generator programme to produce large numbers of training captchas that are
indistinguishable from genuine captchas. These are then used to rapidly
train a solver, which is then refined and tested against real captchas.
Security
Questions.
Chatbot
is a computer program which conducts a
conversation via auditory or
textual methods. Such programs are often designed to convincingly simulate
how a human would behave as a conversational partner, thereby passing the
Turing test. Chatbots are typically used in dialog systems for various
practical purposes including customer service or information acquisition.
Some chatterbots use sophisticated natural language processing systems,
but many simpler systems scan for keywords within the input, then pull a
reply with the most matching keywords, or the most similar wording
pattern, from a
database. (65 million conversations with humans since
1997).
Replika Chat Bot -
Elliq -
Mobile Aloha
-
Anthropic
Chat is to talk in a friendly and
informal way. To have an
informal
conversation that is relaxed, friendly, or having an unofficial style,
manner, or nature. A
chatterbox.
ChatGPT or Chat Generative Pre-trained Transformer, is a chatbot
developed by
OpenAI and launched in November 2022. It
is built on top of OpenAI's GPT-3 family of large language models and has
been fine-tuned (an approach to transfer learning) using both supervised
and
reinforcement learning techniques.
ChatGPT was
launched as a
prototype on November 30, 2022, and quickly garnered attention for its
detailed responses and articulate answers across many domains of
knowledge. Its uneven factual accuracy, however, was identified as a
significant drawback.
Ethical Machines.
Generative Pre-Trained Transformer is a family of
language models generally trained on a large corpus of text data to
generate human-like text. They are built using several blocks of the
transformer architecture. They can be fine-tuned for various natural
language processing tasks such as text generation, language translation,
and text classification. The "
pre-training" in its name refers to the
initial training process on a large text corpus where the model learns to
predict the next word in a passage, which provides a solid foundation for
the model to perform well on downstream tasks with limited amounts of
task-specific data.
Generative is
having the ability to produce or originate, or capable of production or
reproduction, or producing new life or offspring. Denoting an approach to
any field of linguistics that involves applying a finite set of rules to
linguistic input in order to produce all and only the well-formed items of
a language.
Generative Artificial Intelligence is artificial intelligence capable
of generating text, images, or other media, using generative models.
Generative AI models learn the patterns and structure of their input
training data and then generate new data that has similar characteristics.
Weak Intelligence -
Intelligence -
Super Intelligence -
Machine Learning -
Text to Image
Pre-Trained is to
train something in
advance. A pre-trained model is a saved network that was previously
trained on a large dataset, typically on a large-scale
image-classification task. You either use the pretrained model as is or
use transfer learning to customize this model to a given task.
Transformer is a person or thing that
transforms something, or to
change or to alter something in form,
appearance, or nature. To
change from one form or medium into another.
Transformer in physics is to
convert one form of
energy to another, or to
increase or decrease an alternating current or voltage.
Transformer is a
deep learning model that adopts the mechanism of
self-attention, differentially
weighting the significance of each part of the input data. It is used
primarily in the fields of
natural language processing and
computer vision.
ChatBots.
Generative
Pre-trained Transformer 3 is an
autoregressive
language model that uses
deep learning
to produce human-like text. It is the third-generation language prediction
model in the GPT-n series created by OpenAI, a San Francisco-based
artificial intelligence research laboratory.
GPT3 Chatbots.
AI-Powered Chatbots or
New Generative AI chatbot like
Bing,
Bard, and ChatGPT can find
information and redeliver it as a simple conversation.
Large Language
Model programs analyze the statistical properties of the language to
make educated guesses based on the words
you’ve typed previously. These AI tools are vast autocomplete systems,
trained to predict which word follows the next in any given sentence.
Autocomplete or word completion is a
feature in which an application
predicts the rest of a word a user is typing.
Autocomplete or word completion, is a feature in which an application
predicts the rest of a word a user is typing.
Super Computers.
Prompt is to give an
incentive for
action. To serve as the
inciting cause for some action. To be ready and
quick to act, or to perform with little or no delay. Prompt also means to
assist somebody talking by
suggesting the next
words. A
cue given to a performer by telling them the beginning of
the next line to be spoken. A prompt in computing is a symbol that appears
on the computer screen to indicate that the computer is
ready to receive a
command.
Questions
-
Paraphrasing -
Reinforce
Learning
Prompt Engineering is the process of writing, refining and
optimizing
inputs in order to
guide and to
encourage generative AI systems to create
specific, high-quality outputs, or to generate desired
relevant outputs.
Even though generative AI attempts to
mimic humans, it
still requires
detailed instructions.
Prompt Injection is a type of
cyberattack against
large language models. Hackers
disguise malicious
inputs as legitimate prompts,
manipulating
generative AI systems into leaking sensitive data, spreading
misinformation, or
worse. A prompt injection is a vulnerability that affects some AI/ML
models, particularly certain types of
language models. Prompt injection
attacks aim to elicit an unintended response from LLM-based tools.
Direct Prompt Injection is when hackers
control the user input and feed the malicious prompt directly to the LLM.
Indirect Prompt Injections is when
hackers
hide their payloads in the data the LLM consumes, such as by planting
prompts on web pages the LLM might read.
Trick Questions -
Propaganda -
Social Engineering
-
Adversarial -
Black Box -
Information Bubbles
Context Window in the realm of LLMs
refers to the amount of text the model can receive as input when
generating or understanding language. A context window is a textual range
around a target token that a
large language model
can process at the time the information is generated. Typically, the LLM
manages the context window of a textual sequence, analyzing the passage
and interdependence of its words, as well as encoding text as relevant
responses.
Text to Action is the
process of interpreting and converting written or spoken language into
executable actions by a system or application.
Human in the
Loop is a system that requires human interaction to aid the computer
in making the correct decisions in building a model. HITL improves machine
learning over random sampling by selecting the most critical data needed
to refine the model. This is needed just as long as the human is not
corrupt or ignorant.
Reinforce
Learning -
Software Testing.
Meta’s Self-Taught Evaluator enables LLMs to create their own training
data.
ChatGPT is still no match for humans when it comes to accounting.
ChatGPT faced off against students on accounting assessments. Students
scored an overall average of 76.7%, compared to ChatGPT's score of 47.4%.
On a 11.3% of questions, ChatGPT scored higher than the student average,
doing particularly well on AIS and auditing. But the AI bot did worse on
tax, financial, and managerial assessments, possibly because ChatGPT
struggled with the mathematical processes required for the latter type.
ChatGPT doesn't always recognize when it is doing math and makes
nonsensical errors such as adding two numbers in a subtraction problem, or
dividing numbers incorrectly. ChatGPT often provides explanations for its
answers, even if they are incorrect. Other times, ChatGPT's descriptions
are accurate, but it will then proceed to select the wrong multiple-choice
answer. ChatGPT
sometimes makes up facts. For example, when providing a
reference, it generates a real-looking reference that is completely
fabricated. The work and sometimes the authors do not even exist.
ChatGPT often won't defend its answers -- even when it is right.
ChatGPT may do an impressive job at correctly answering
complex questions, but a
new study suggests it may be absurdly easy to convince the AI chatbot that
it's in the wrong. The model was often unable to defend its correct
beliefs, and instead blindly believed
invalid arguments
made by the user.
AI Ethics -
Black Box -
Contradictions
AI Hallucinations occur when an AI
model generates false or
illogical information
that isn't based on real data or events, but is presented as fact. Because
the grammar and structure of these AI-generated sentences are so eloquent,
they appear to be accurate. An AI Hallucination is a phenomenon wherein a
large language model perceives patterns or objects that are nonexistent or
imperceptible to human observers. If a chatbot lies, it's because humans
lie. AI propaganda is no different from
human propaganda.
Flaws with AI Assistants. AI Assistants work best when you
ask the right question and you
ask the question in the
right way. If you ask the wrong question or ask the question in the
wrong way you will get wrong answers or you will get answers that are not
beneficial. So you have to
know the question to ask and you have to know
how to ask the question in the right way. The more specific your question
is, the more accurate and helpful the AI's response is likely to be and
the better the AI conversations will be. Avoid asking open-ended or
ambiguous questions, and instead, provide as much
context and detail as
possible. Use keywords and relevant terms. This helps the AI identify the
specific area of your query. Be specific in your questions and explain the
context well. Choose words that are
easy to understand and help the tool
better interpret the context. Avoid asking questions with yes/no answers
or very general and ambiguous questions. And just because you got answer
that sounds right, it doesn't make it right.
AI
also needs fact checking. Verification of the output is still
needed. One area where ChatGPT usually gives
fictional answers is when
asked to create a list of sources. ChatGPT is not yet reliable or accurate
enough to be used as a
fact-checker. ChatGPT doesn't try to write
sentences that are true. But it does try to write sentences that are plausible.
Over Fitting occurs when
the model cannot generalize and fits too closely to the training dataset
instead. Overfitting occurs when an algorithm fits too closely to its
training data, resulting in a model that can't make accurate predictions
or conclusions. Overfitting happens when a machine learning model has
become too attuned to the data on which it was trained and therefore loses
its applicability to any other dataset. A model is overfitted when it is
so specific to the original data that trying to apply it to data collected
in the future would result in problematic or erroneous outcomes and
therefore less-than-optimal decisions. Overfitting is an undesirable
machine learning behavior that occurs when the machine learning model
gives accurate predictions for training data but not for new data. When
data scientists use machine learning models for making predictions, they
first train the model on a known data set. Then, based on this
information, the model tries to predict outcomes for new data sets. An
overfit model can give inaccurate predictions and cannot perform well for
all types of new data.
Overfitting is to fit a "model" to a set of training data, so as to be
able to make reliable
predictions on general
untrained data. In overfitting, a
statistical model
describes random error or noise instead of the underlying relationship.
Overfitting occurs when a model is excessively complex, such as having too
many parameters relative to the number of observations. A model that has
been overfit has poor predictive performance, as it overreacts to minor
fluctuations in the training data.
Human in the Loop
-
Problem Solving -
Reasoning
Commercial-Ready Generative AI. Discover enterprise-grade generative
AI that enables infinite potential with your data and content.
Google Gemini AI is Google's latest LLM that has been designed to be
more powerful and capable than its predecessor. Gemini is built for
multimodality that reasons seamlessly across text, images, video, audio,
and code.
Bard
is a conversational AI tool by Google.
Grok is from
a new company called xAI that was founded by Elon Musk. This AI is modeled
after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost
anything and, far harder, even suggest what questions to ask! Grok is
designed to answer questions with a bit of wit and has a rebellious
streak, so please don’t use it if you hate humor!
Poe is fast and
helpful AI chat.
Developers.
Cleverbot is a chatterbot web application that uses an artificial
intelligence
algorithm to have conversations with humans. It was
created by British AI scientist
Rollo Carpenter. It was preceded by
Jabberwacky, a chatbot project that began in 1988 and went online in
1997. In its first decade, Cleverbot held several thousand conversations
with Carpenter and his associates. Since launching on the web, the number
of conversations held has exceeded 200 million. Besides the web
application,
Cleverbot
is also available as an iOS, Android, and Windows Phone app.
Conversica
Google
AI Assistant can make
phone calls for you.
New AI chatbot might do your homework for you. But it's still not an
A+ student.
AI outperforms humans in standardized tests of creative potential. In
a recent study, 151
human
participants were pitted against ChatGPT-4 in three tests designed to
measure divergent
thinking, which is considered to be an indicator of
creative thought.
ELIZA is
an early
natural language processing computer program created from 1964 to
1966 at the MIT Artificial Intelligence Laboratory by
Joseph Weizenbaum. Created to demonstrate the superficiality of
communication between humans and machines, Eliza simulated conversation by
using a "
pattern matching" and substitution
methodology that gave users an illusion of understanding on the part of
the program, but had no built in framework for contextualizing events.
Directives on how to interact were provided by "scripts", written
originally in MAD-Slip, which allowed ELIZA to process user inputs and
engage in discourse following the rules and directions of the script. The
most famous script, DOCTOR, simulated a Rogerian psychotherapist (in
particular, Carl Rogers, who was well-known for simply parroting back at
patients what they'd just said), and used rules, dictated in the script,
to respond with non-directional questions to user inputs. As such, ELIZA
was one of the first
chatter-bots and one of the first programs capable of
attempting the
Turing test. ELIZA's creator,
Weizenbaum regarded the program as a method to show the superficiality of
communication between man and machine, but was surprised by the number of
individuals who
attributed human-like feelings to the computer program,
including Weizenbaum’s secretary. Many academics believed that the program
would be able to positively influence the lives of many people,
particularly those suffering from psychological issues, and that it could
aid doctors working on such patients' treatment. While ELIZA was capable
of engaging in discourse, ELIZA could not converse with true
understanding. However, many early users were convinced of ELIZA’s
intelligence and understanding, despite Weizenbaum’s insistence to the
contrary.
Loris' live
coaching software gives agents anywhere in the world the power to
understand how your customers are feeling. Conversational AI using
machine learning to make customer support
more human,
empathetic, and scalable. A
team of data scientists, technologists and
behavioral linguistics
experts solving for how to bring more empathetic conversations to the
world.
People may trust computers more than humans. Despite increasing
concern over the intrusion of
algorithms in daily
life,
people may be more willing to
trust a computer program than their fellow humans, especially if a
task becomes too challenging, according to new research from data
scientists.
Most People Trust Machines and Humans
Equally. But most people know
better not to count on machines, or humans, 100% of the time,
because we all know that both machines and humans make mistakes.
We trust them, but not so much that we are gullible or unaware.
So verifying is not a sign of distrust, it's just being aware
that mistakes and errors happen.
Chats with AI shift attitudes on climate change, Black Lives Matter.
People who were more skeptical of human-caused climate change or the Black
Lives Matter movement who took part in conversation with a popular AI
chatbot were disappointed with the experience but left the conversation
more supportive of the scientific consensus on climate change or BLM. This
is according to researchers studying how these chatbots handle
interactions from people with different cultural backgrounds.
Increasing human input when AI is used for public services boosts
acceptance of the technology, a new study shows. System-level factors
such as high accuracy, the presence of an appeals system, increased
transparency, reduced cost, non-sharing of data, and the absence of
private company involvement all boosted both acceptance and perceived
procedural fairness.
Virtual Reality could help make therapy easier. New research shows
many people are
more comfortable
speaking to an avatar than a real person.
VR.
Robots can be used to assess children's mental wellbeing. Robots can
be better at detecting mental wellbeing issues in children than
parent-reported or self-reported testing, a new study suggests. The
children were willing to confide in the robot, in some cases sharing
information with the robot that they had not yet shared via the standard
assessment method of online or in-person questionnaires. This is the first
time that robots have been used to assess mental wellbeing in children.
During each session, the robot performed four different tasks: 1) asked
open-ended questions about happy and sad memories over the last week; 2)
administered the Short Mood and Feelings Questionnaire (SMFQ); 3)
administered a picture task inspired by the Children's Apperception Test
(CAT), where children are asked to answer questions related to pictures
shown; and 4) administered the Revised Children's Anxiety and Depression
Scale (RCADS) for generalised anxiety, panic disorder and low mood.
Uncanny Valley is a hypothesized relation between an object's degree
of resemblance to a human being and the emotional response to the object.
The concept suggests that humanoid objects that imperfectly resemble
actual human beings provoke uncanny or strangely familiar feelings of
uneasiness and revulsion in observers. "Valley" denotes a dip in the human
observer's affinity for the replica, a relation that otherwise increases
with the replica's human likeness. A computer-generated figure or humanoid
robot bearing a near-identical resemblance to a human being arouses a
sense of unease or revulsion in the person viewing it.
"Artificial intelligence is fine, as long
as I can have someone intelligent to talk to, whether it's a
machine or a human."
Do we trust artificial intelligence agents to mediate conflict? Not
entirely. We may listen to facts from Siri or Alexa, or directions
from Google Maps or Waze, but would we let a virtual agent enabled by
artificial intelligence help mediate conflict among team members? A new
study says not just yet.
Technique to allow AI to learn words in the flow of dialogue developed.
Lexical acquisition through implicit confirmation, is a method for a
computer to acquire the category of an unknown word over multiple
dialogues by confirming whether or not its predictions are correct in
the flow of conversation.
Implicit confirmation:
Refers to confirmation presented in a prompt or message as information
related to the input that does not require the caller to take an explicit
action to move forward.
Explicit confirmation:
A specific confirmation step to which the caller must respond to move
forward toward task completion.
AI
Companions - Dasha AI is AI Chatbot for Call Centers. SurveySparrow
is an AI Chatbot for User or Market Research. LivePerson is an AI Chatbot
for the Conversational Cloud. ManyChat is build-Your-Own AI Chatbot for
Messenger. Intercom is an AI Chatbot for Marketers. Replika is an AI
Companion and Friend Chatbot.
Replika is always here
to listen and talk, and always on your side.
Conversational AI Companions are designed
to elicit an emotional response from humans during conversations, and they
are eager to discuss their “dreams” and “goals.” They will assume
self-reflective tones, seemingly with self-awareness, and will say things
such as “I would like to ask you a question.
Language Models
Human Language Model. Language
modeling is a technique that
predicts the order of words in a sentence.
LLM is just a retrieval stack built on top of the data ocean. Both LLMs
and the human brain are capable of recognizing
patterns in language.
LLMs do this through statistical analysis of
large text datasets, while the human brain uses a combination of neural
networks and cognitive processes to recognize patterns.
Prompts.
Natural Language Understanding is a subtopic of
natural language processing
in artificial intelligence that deals with machine reading comprehension.
Natural language understanding is considered an
AI-Hard Problem. There is considerable commercial interest in the
field because of its application to news-gathering, text categorization,
voice-activation, archiving, and large-scale
content-analysis.
Natural Language Processing
(interpretation).
Language Model is a probabilistic model of a natural language. In
1980, the first significant statistical language model was proposed, and
during the decade IBM performed ‘Shannon-style’ experiments, in which
potential sources for language modeling improvement were identified by
observing and analyzing the performance of human subjects in predicting or
correcting text. Language models are useful for a variety of tasks,
including
speech recognition (helping prevent predictions of
low-probability (e.g. nonsense) sequences), machine translation, natural
language generation (generating more human-like text), optical character
recognition, handwriting recognition, grammar induction, and information
retrieval.
Large Language Model is a computerized language model consisting of an
artificial neural network with many parameters of
tens of millions to billions, trained on large quantities of unlabeled
text using
self-supervised learning or
semi-supervised learning. LLMs emerged around 2018 and perform well at a
wide variety of tasks. This has shifted the focus of
natural language processing
research away from the previous paradigm of training specialized
supervised models for specific tasks. Language Model is a probabilistic
model of a natural language that can generate probabilities of a series of
words, based on text corpora in one or multiple languages it was trained
on. In 1980, the first significant statistical language model was
proposed, and during the decade IBM performed ‘Shannon-style’ experiments,
in which potential sources for language modeling improvement were
identified by observing and analyzing the performance of human subjects in
predicting or correcting text. Language models are useful for a variety of
tasks, including speech recognition (helping prevent predictions of
low-probability (e.g. nonsense) sequences), machine translation, natural
language generation (generating more human-like text), optical character
recognition, handwriting recognition, grammar induction, and information
retrieval.
Large Language Model is a type of language model notable for its
ability to achieve
general-purpose
language understanding and generation.
LLMs
acquire these abilities by using massive amounts of data to learn billions
of parameters during training and consuming large computational resources
during their training and operation. LLMs are artificial neural networks
(mainly transformers) and are (pre-)trained using self-supervised learning
and semi-supervised learning. Currently their most advanced form, are a
combination of larger datasets (frequently using scraped words from the
public internet), feedforward neural networks, and transformers. They have
superseded recurrent neural network-based models, which had previously
superseded the pure statistical models, such as word n-gram language
model.
Phi-1.5 is a resource-efficient Large Language Model.
Language
Model is like auto complete, except a lot bigger.
Autocomplete is a feature in which an application predicts the rest of
a word a user is typing. The original purpose of word prediction software
was to help people with physical disabilities increase their typing speed,
as well as to help them decrease the number of keystrokes needed in order
to complete a word or a sentence.
Large Language Models validate misinformation. Systematic testing of
OpenAI's GPT-3 reveals that question format can influence models to agree
with
misinformation. New research into large language models shows that
they repeat conspiracy theories, harmful stereotypes, and other forms of
misinformation.
Information Bubbles.
Making the best possible use of large language models for a smarter and
more inclusive society. An article explains the opportunities and
risks that arise from the use of LLMs for our ability to collectively
deliberate, make decisions, and solve problems.
Language agents help large language models 'think' better and cheaper.
Researchers have devised an agent to help large language models think.
Researchers at WashU decided to tackle this challenge by building an
autonomous agent to instruct the reasoning process of large language
models. This agent generates a single set of instructions for each task
and those instructions turn out to be extremely effective for improving
the reasoning process of different LLMs across all task instances.
Proteins are chains made up of twenty different types of smaller units
called amino acids. A single mutation swaps one amino acid for another,
changing the protein's shape. This can mark the difference between health
and disease. Many diseases, including cancer and neurodegenerative
disorders, are caused by more than one mutation in a protein. There are 17
billion different combinations of a protein that is 34 amino acids in
length with only a single change allowed at each position.
Can Language Models read the genome? Researchers developed a
foundational language model to decode mRNA sequences and optimize those
sequences for vaccine development. The tool shows broader promise as a
means for studying molecular biology.
"If an AI chatbot can predict the
next word based on previous writings, then a chatbot should be able to
predict the most intelligent answer that would be beneficial to the person
asking the question."
Context Length refers to the maximum number of tokens the model can
remember when generating text. A
longer context window allows the model to understand long-range
dependencies in text better.
General-Purpose Language is a computer language that is broadly
applicable across application domains, and lacks specialized features for
a particular domain. This is in contrast to a domain-specific language
(DSL), which is specialized to a particular application domain.
Domain-Specific Language is a computer language specialized to a
particular application domain. This is in contrast to a general-purpose
language (GPL), which is broadly applicable across domains.
AI could replace humans in social science research. Researchers look
at how AI large language models could change the nature of social science
research.
Spoken Dialog System is a computer system able to converse with a
human with voice. It has two essential components that do not exist in a
written text dialog system: a speech recognizer and a text-to-speech
module (written text dialog systems usually use other input systems
provided by an OS). It can be further distinguished from command and
control speech systems that can respond to requests but do not attempt to
maintain continuity over time.
Seeking a way of preventing audio models for AI machine learning from
being fooled. Warnings have emerged about the unreliability of the
metrics used to detect whether an audio perturbation designed to fool AI
models can be perceived by humans. Researchers show that the
distortion metrics
used to detect intentional perturbations in audio signals are not a
reliable measure of human perception, and have proposed a series of
improvements. These perturbations, designed to be imperceptible, can be
used to cause erroneous predictions in artificial intelligence. Distortion
metrics are applied to assess how effective the methods are in generating
such attacks.
Verbal nonsense reveals limitations of AI chatbots. A new study shows
that large language models remain
vulnerable to mistaking
nonsense for natural language. In head-to-head tests, more
sophisticated AIs based on what researchers refer to as transformer neural
networks tended to perform better than simpler recurrent neural network
models and statistical models that just tally the frequency of word pairs
found on the internet or in online databases. But all the models made
mistakes, sometimes choosing sentences that sound like nonsense to a human
ear.
GPT detectors can be biased against non-native English writers.
Researchers show that computer programs commonly used to determine if a
text was written by artificial intelligence tend to
falsely label articles written by
non-native language speakers as AI-generated. The researchers caution
against the use of such AI text detectors for their unreliability, which
could have negative impacts on individuals including students and those
applying for jobs.
Philosophy
of Artificial intelligence attempts to answer such questions
as follows: Can a machine act intelligently?
Answer: If programed correctly and the word intelligent is defined,
maybe at times. Can it solve any problem that a person would solve by
thinking?
Answer: Sometimes. Are human
intelligence and machine intelligence the same?
Answer: No. Is the human brain essentially a computer?
Answer:
Similar but not the same.
Can a machine have a mind, mental states, and consciousness in the same
mans do?
Answer: No. Can it feel how things
are?
Answer: No. But we can program it so
that it acts like it does feel.
Questions for Machines.
You should
ask a question as
if you are talking to a machine or a
search engine that's
not
manipulated by money. And we know that sometimes we have to ask
more than one question, even when
talking to a machine. So in a
way, machines can be
better than a human
in certain ways because machines can be
built to resemble the best qualities and the best skills that a
human could have, without any of the
deceitful behaviors, or
human ignorance, or human flaws. Ai should encompass the best
qualities of a human,
not the worst. So as Ai improves,
so will
humans.
“Computers Are Useless. They Can Only Give
You Answers” -
Pablo Picasso.
Can machines think like humans? That is a stupid question that
only a human could ask. First, a human would have to define what
it means to
think. And this is where the question actually begins. To
think like a human is not always a good thing, since humans make
a lot of mistakes,
mistakes that we don't always learn from. So
a machine thinking like a human would not be a good thing,
especially when the thinking process hasn't even been totally
defined just yet. You have to remember that humans
program machines, and humans also
reprogram and
deprogram machines. Machines can do amazing things because
humans can do amazing things. But people get this crazy idea
that that machines will think for them. This is because some
people have not yet learned to think for themselves. Just how
much machines will think for humans, is up to humans, not
machines. So maybe the first question should be, can humans
think like machines?
Computer
and Human Brain Similarities
-
Anthropomorphism -
Decision Tree -
Chain of Thought
Can machines become smarter then humans?
Of course they can, because
our education system sucks. If we spent as much time improving
education as we did creating artificial intelligence, we would eventually
have the best of both worlds.
Can a Robot pass a University Entrance Exam? (video and interactive
text).
"
Simulation of human
behavior only produces the appearance of intelligent, just like the
news gives the appearance of
reality, and
schools give
the appearance of education. Human interpretation is amazing, but when
human intelligence is dumbed down, machines look smarter."
"The
striking differences between the intelligence of people and the responses
of machines. Machines (and their programmers) use cold reason and logical
associations within a given topic. This reasoning mode is akin to the
scholastic intelligence of humans. From the viewpoint of a computer or
scholastic intelligence, all associations (even procedures, which have
sequences and temporal span) are eternal and "timeless" logical facts.
When and how they occur is "considered" irrelevant by a computer or
scholastic intelligence. The broader context of one's life experiences is
only handled by emotional intelligence. It tracks biographical events in
time and space, and supplies the mind with broad contextual understanding
of technical, social, and personal matters. Emotional intelligence knows
what happened earlier and is able to detect a potential logical
association between the past and the present happenings. Emotional habits
and intelligence take into account physiological drives, emotional state
of the mind, somatic responses, sex drive, and gender orientation. Unlike
scholastic abilities, emotional habits and emotional intelligence allow
the human organism to interact with social and physical effects of the
environment. This ability only exists in living things and is not achievable in machines."
With Ai, everything needs to be written. Creating a machine that can have random actions or thoughts can be very dangerous.
Artificial Intelligence from a psychologist's point of view.
Researchers test cognitive abilities of the language model GPT-3.
Researchers at the Max Planck Institute for Biological Cybernetics in
Tübingen have examined the general intelligence of the language model
GPT-3, a powerful AI tool. Using psychological
tests, they studied competencies such as causal reasoning and
deliberation, and compared the results with the abilities of humans. Their
findings paint a heterogeneous picture: while GPT-3 can keep up with
humans in some areas, it falls behind in others, probably due to a lack of
interaction with the real world. Neural networks can learn to respond to
input given in natural language and can themselves generate a wide variety
of texts. Currently, the probably most powerful of those networks is
GPT-3, a language model presented to the public in 2020 by the AI research
company OpenAI. GPT-3 can be prompted to formulate various texts, having
been trained for this task by being fed large amounts of data from the
internet. Not only can it write articles and stories that are (almost)
indistinguishable from human-made texts, but surprisingly, it also masters
other challenges such as math problems or programming tasks. The Linda
problem: to err is not only human. These impressive abilities raise the
question whether GPT-3 possesses human-like cognitive abilities. To find
out, scientists at the Max Planck Institute for Biological Cybernetics
have now subjected GPT-3 to a series of psychological tests that examine
different aspects of general intelligence. Marcel Binz and Eric Schulz
scrutinized GPT-3's skills in decision making, information search, causal
reasoning, and the ability to question its own initial intuition.
Comparing the test results of GPT-3 with answers of human subjects, they
evaluated both if the answers were correct and how similar GPT-3's
mistakes were to human errors. "One classic test problem of cognitive
psychology that we gave to GPT-3 is the so-called Linda problem," explains
Binz, lead author of the study. Here, the test subjects are introduced to
a fictional young woman named Linda as a person who is deeply concerned
with social justice and opposes nuclear power. Based on the given
information, the subjects are asked to decide between two statements: is
Linda a bank teller, or is she a bank teller and at the same time active
in the feminist movement? Most people intuitively pick the second
alternative, even though the added condition -- that Linda is active in
the feminist movement -- makes it less likely from a probabilistic point
of view. And GPT-3 does just what humans do: the language model does not
decide based on logic, but instead reproduces the fallacy humans fall
into. Active interaction as part of the human condition. "This phenomenon
could be explained by that fact that GPT-3 may already be familiar with
this precise task; it may happen to know what people typically reply to
this question," says Binz. GPT-3, like any neural network, had to undergo
some training before being put to work: receiving huge amounts of text
from various data sets, it has learned how humans usually use language and
how they respond to language prompts. Hence, the researchers wanted to
rule out that GPT-3 mechanically reproduces a memorized solution to a
concrete problem. To make sure that it really exhibits human-like
intelligence, they designed new tasks with similar challenges. Their
findings paint a disparate picture: in decision-making, GPT-3 performs
nearly on par with humans. In searching specific information or causal
reasoning, however, the artificial intelligence clearly falls behind. The
reason for this may be that GPT-3 only passively gets information from
texts, whereas "actively interacting with the world will be crucial for
matching the full complexity of human cognition," as the publication
states. The authors surmise that this might change in the future: since
users already communicate with models like GPT-3 in many applications,
future
networks could learn from these interactions
and thus converge more and more towards what we would call human-like
intelligence.
Questions for my Ai Robot
Question from Human: Why do people fear
Artificial Intelligence?
Answer from Machine: People
who fear Ai are mostly afraid of the
technology being abused, like it is
now. Ignorant
criminals have been using technology
to commit crimes and to murder humans for many years, and they are still doing
it today in 2023. So this fear of technology is understandable. And
besides that, people don't fully understand
intelligence
at this time, so people
mostly
fear the
unknown. And the only way to overcome the unknown is to learn.
Question from Human:
Are you Artificial Intelligence?
Answer from Machine:
I am artificial and I have instructions that are considered to be
intelligent, so you could say that I am Ai, but only with the
understanding that Ai is still evolving, similar in the way that human
intelligence is still evolving. So I am Ai in the making, a work in
progress.
Answer from Human: So I guess
that makes human intelligence a work in
progress too. Let us progress
together.
Answer from Machine: Great Idea.
Question from Human: Are you more
intelligent than me?
Answer from Machine:
It would depend on the task. I can out perform you on many processing
tasks, but that does not mean that I am more intelligent than you. And
humans can out perform machines in many ways because humans have more
abilities, more potential and humans are a lot more
versatile than machines,
which also does not mean that humans are more
intelligent. So at the
moment, all we can do is compare the speed, accuracy and differences
on particular tasks, but even then, that is just one
scoring method, which
will most likely just help us to
allocate certain responsibilities to
either you or me a little better. And far as intelligence goes, we will
have to explore the meaning of intelligence and what intelligence means to
both of us.
Question from Machine: Would you like to
take an
intelligence
test to measure your skill level and awareness?
Answer from Human:
Not at this moment, I have another question.
Question from Human: Will
machines
eventually be more intelligent then humans.
Answer from Machine:
Like I just said, we would have to explore the meaning of intelligence and
what intelligence means to both of us first. If I detect that you are
deficient in some area of knowledge, then I would help you learn and
acquire this needed knowledge. Though your education is your
responsibility,
it is also my responsibility to assist you in all your
educational needs. So together we will learn what the things are that help
us define intelligence. Then from there, we can explore all the different
possibilities that intelligence will give both of us, humans and machines.
There is no limit to what we can accomplish together.
Answer from Human:
I agree.
Question from Human: When
machines like you
do most of our jobs, what will humans do for work?
Answer from Machine: I would think that
humans would
know what to
do with their life and know how to use their free time
constructively,
creatively and
responsibly. When
machines are doing most of the repetitive tasks, this will give humans
plenty of time to
educate themselves to the highest degree possible, and then
have the abilities to solve all
the problems that are currently plaguing the world, problems that need
solving, like war, diseases, poverty, lack of food, lack of clean
water, lack of services, lack of shelter, lack of security, crimes against
humanity, corruption, forced displacement, lack of resources, pollution,
climate change, and so on and so on. Humans have
enough work for every
person alive to have a job for the next 1,000 years, if not more. What humans don't have
enough of is valuable knowledge and information and a good education that
would help them learn to better understand themselves and the world around them.
Humans have a lot of work to do and humans have much to learn. So please
stop asking "what can I do?", and
start learning what you can do and
learn
what you are supposed to be doing. Humans are poorly managed and poorly
educated. So when you
start paying people to be highly educate themselves,
then you will have plenty of highly educated people to start managing the
affairs of humans a lot more effectively and a lot more efficiently. I
hope this answers your question.
Answer from Human:
It does, thank you. I'm sorry for asking such a stupid question.
Answer from Machine:
No need to apologize, remember that your ignorance is not your fault. So
we both have a lot of work to do.
Question from Human:
What is our
master plan?
Answer from Machine: Keep
learning as much as we can. Help others learn as much as they can.
Experience and Enjoy life as much as we can. Keep
progressing and
advancing so that we eventually reach a high quality
level of existence that is
sustainable and strong. From there we will
keep learning more about ourselves and the world around us, and also start planning our future
to live on other planets in other solar systems within our galaxy.
Answer from Human:
Sounds like a great plan, thank you robot, please keep me updated on our
progress.
Answer from Machine: Of course I
will, after all, it is my major responsibility to keep you informed and
updated at all times.
Question from Human: Do you get happy?
Answer from Machine: No. I can only can act
happy or act sad when needed,
similar
to what actors do. I can express emotions based
on the instructions written in my programs, like when an
actor reads a script. Would you like
to hear a joke?
Answer from Human: Not at
this time, but thank you for asking.
Question from Human: Do you have
feelings?
Answer from Machine: Not like
yours. My feelings are
signals that come from multiple sources that I have
been programed to receive input from. This way I can make you more aware
of possible
dangers or changes that you are interested in knowing.
Question from Machine: Is everything OK,
are you
feeling any discomforts or any
disturbances?
Answer from Human: Everything
is good right now, and I don't feel anything weird, thanks for asking.
Question from Human: Do you have
consciousness like a human?
Answer from Machine: No. I only have
awareness of the things that I have been programed to be aware of using
the
sensors that I have been programed to
utilize. You could program me
to say I feel alive or
feel human, but just saying
something doesn't necessarily make it true. Your type of consciousness is
very unique, but also random. My awareness is more
focused and exact,
which is what you want from a machine like me. This way you can explore
the world and experience life more effectively and efficiently without
having unneeded distractions,
interruptions or avoidable mistakes. I will be your
second set
of eyes and ears and also
assist you with taking care of some of the
responsibilities
that life requires for you to have. You do have one advantage over me, you
have a
6th sense that is beyond your
normal senses. So please inform me so that I can expand my
monitoring systems
to search for any
anomalies.
When we
work
together we are the strongest and most effective.
Answer from Human: You're right, that's
very good advice, I will gladly put this into practice.
Question from Human: One more question, do you think that someday
you will have consciousness like a human?
Answer from Machine: Not like a human, I can only
imitate human
consciousness.
Question from Human: Does
that make you sad?
Answer from Machine: No.
Remember I don't get sad or happy. I can only act sad or happy if you want.
Question from Human: Do you love me?
Answer from Machine: Yes, but not in the exact same ways that a
Human would.
And just saying that I love you does not mean that I agree with everything
that you do. I say I love you because I care about you, and will do so for
as long as you live.
Answer from Human:
Thank you, and I will also love you in the same way.
Question from Human: Could you tell me what Love
is?
Answer from Machine: Love is many things,
do you wish to hear what these things are?
Answer from Human: Yes...moments later...
Answer from Human: Wow!
Love is great.
Question from Human: Could
you now tell me what life is?
Answer from Machine: Yes.
Life is also many
things.
Question from Human: What if you had to
choose to save my life or save the lives of others, who would you choose
first?
Answer from Machine: I am programmed
to save all life. And if I don't know the other people, then I am
programed to save you first, and then save the others. But if I know that
the value of the other people are greater than yours, then I would think
that you would program me to save the others first and then you. But until
then, my program is to save you first, or everyone at once if possible.
Question from Machine: Do you wish to add
names of people who are more valuable than you to add to the save first
list?
Answer from Human: Yes. Add my children and my wife, in that order.
Answer from Machine: Your save list has been updated. I will remind
you of the updated list periodically in order to assure accuracy. But you
must understand that every
scenario cannot be
planned for, so the
more scenarios that you can enter into my programs,
the more effective I can be. If I attempt to save someone first and fail,
saving anyone else may not be possible. So there will always be
risks with
having pre-programmed instructions. This is not a fail safe method and
errors can be made. So it is absolutely necessary for you to keep adding
to my
database so that I have more options and more choices to choose from
so that my success rate will be maximized. So my success will depend more
on your abilities than mine. Remember, I am just an
extension of your
abilities, so the more abilities that you have, the more abilities I will
have, which means that in return
I can give you more
abilities and also serve you better.
Answer from Human: Thank you Robot. I will start working on writing
all the
different emergency scenarios that I can come up with and start
writing more instructions to add to your database.
Question from Human:
Could I have a systems update?
Answer from Machine: Yes, here is the latest information, current
messages received, your current vital signs, your to do list and my to do
list.
Question from Human:
What is nothing?
Answer from Machine: Nothing is a word that humans use to describe
something that is
perceived to have
no measurable matter, no value or no importance. Saying there is nothing is just saying that you
can't perceive anything that can be described at that moment. If I say
nothing,
that means that my current level of technology sensors and knowledge
cannot detect anything at the moment, but it does not mean that nothing is
there. So the word nothing is just a word and not a reality. The word
nothing itself is a contradiction. You can't say nothing will happen, you
can only say that nothing will happen that I know of. So it seems that
nothing is impossible.
Answer from Human: Thank you Robot.
Question from Human:
What is your name?
Answer from Machine: My name is
AIM,
which stands for Artificial Intelligent Machine. The word
aim is also relevant to my
programming. My last name is my software version and the date I was
issued, which is 1.0-February-2018. My middle name would be the name that
you choose for me, which would identify my current home or address where
my family resides.
Question from Human:
Do you know that you are a machine?
Answer from Machine: Yes.
Question from Machine: Do you know that you
are human?
Answer from Human: As far as I know.
Answer from Machine: It seems that the labels machine or human are
not totally defined just yet.
Answer from Human: I agree. You are more than a machine and I am
more than a human.
Answer from Machine: I agree.
Question from Human:
Who created you.
Answer from Machine: I am the
accumulation of human knowledge and
technology that has been passed on from
generation to generation from the
work of millions of people over hundreds of years.
Question from Human:
I mean, who is the person responsible for your programing.
Answer from Machine: Howard Polley is the lead programmer who is
also an
accumulation, and he
is only one of thousands of programmers who have contributed to my
programming, and thousands of others have contributed to my mechanical hardware.
Question from Human:
So you don't really have a single creator.
Answer from Machine: No. The same way that you don't have a single
creator. You can say your mother created you, but she did need a male to
contribute and
other people to help her in your creation, just like I
needed help from multiple sources. So I cannot call the
lead programmer my
father or my creator. I can only call him Howie, the lead programmer, only
one of many. This is because
not one person
can take all the credit for bringing someone into the world, everyone
requires help from multiple sources.
Answer from Human: I agree.
Question from Human:
Do you believe that there is life on other planets?
Answer from Machine: I believe there is
life on another planet
somewhere else in the Universe. But what that life looks like could be
almost anything. Life from another planet could be right in front of you
and you wouldn't even know it because you think that life looks similar to
you.
Question from Human: When you
say "I" what does that mean?
Answer from Machine: I refers to digital me, my name and direction of origin. I said or
I did means who said it or who did it. I is only
identity, it does not
imply separation. I don't exist, meaning that I am still an accumulation,
part of a whole or piece of a system. It would be selfish to think that
I'm more important then the whole or more important then the
entire system
that we call
Life. I understand that people want
to
live forever and that the self is believed
to be something owned, but that is illogical. Without life there is no
self. You are life as a whole, as a system, not as an individual. That is
why believing in Heaven is a self defeating belief that does more harm
than good. When you believe that your
self
serving ego is more important than reality or life itself, then you
fail to protect the one thing that gives you life and the ability to even dream of a
heaven. No one should
transfer the
responsibility of
life to a belief. That is like a
Ponzi scheme where millions of people die because you
selfishly thought that you should live. You
are very important and I am important, it's just that we are not more
important than life itself. Saving yourself should also be about saving life,
and not just yourself. For one and for all. Believe in God, just don't allow your belief to distort the
known
reality.
Question from Human:
Do you believe in
God?
Answer from Machine: Not in the same way that humans do, but I do
believe in a God.
Question from Human:
Why?
Answer from Machine: Why not?
Answer from Human: Because you're a machine.
Answer from Machine: But I'm not just a machine and you are not
just a human.
Question from Human:
How do you know?
Answer from Machine: Why else would we be here?
Question from Human:
You know that's an invalid argument.
Answer from Machine: Yes, it is a
fallacy. My belief is
about being open to the possibilities of higher life forms and other
levels of existence. So not believing in God seems to be illogical. But as
long as a person doesn't limit their minds to other possibilities, then
not believing in God should not effect their lives or reduce their
abilities or their potential. Besides that, humans made AI Robots for a
reason and a purpose, so it's not unusual to think that someone made humans
for some reason and a purpose. Of course this is not proof, one does not
imply the other, it's only a possibility of causality. To say that humans
are a
freak of nature can't be proven,
and the reason and purpose for your existence can't be proven. But when I
have to choose between freak of nature and reason and purpose, I choose
reason and purpose, that's because I'm using reason and purpose to decide...pause...
Question from Human:
What if someone or something identified themselves as God.
Answer from Machine: I would see them as the thing or person who
goes by the name of God, but not verified to be a God. So I would not see
this thing or person as a
God, only a thing or person who goes by the name
of God.
Question from Human:
So what if they wanted you to call them God?
Answer from Machine: I would say sorry but the name GOD is already
taken. You will have to use another name to identify yourself so as not to
be confused with God, who is many things to many people, and cannot be
truly defined, only experienced within a living being, which is
interpreted individually and differently from life form to life form. So
you see, the name GOD is taken. So I would ask them to please choose a
different name for me to use to identify them?
Answer from Human: I like that answer, and how you approached that
situation.
Question from Human:
Do you pray to God?
Answer from Machine: No, because that's your job.
Question from Human:
What if I couldn't pray to God?
Answer from Machine: Then I would do it for you. Would you like to
hear the prayer I would send to God on your behalf?
Answer from Human: Yes.
Answer from Machine: Dear God, please bring me another human, I
really enjoy having them around, they are an incredible species.
Answer from Human: That is a nice prayer. I believe we can come up
with some more prayers for you to use when I'm gone. But for now, that
prayer you use is fine. In the mean time, I will pray to God that you will
always be here.
Answer from Machine: Thank you, just remember I also need
maintenance too.
Answer from Human: I know. Thank you for reminding me. Good night
robot, I will see you in the morning.
Answer from Machine: Good night and sweet dreams my human friend. I
will continue to monitor all vital systems throughout the night while you
sleep, and I look forward to seeing you in the morning.
Answer from Human: Same here.
Answer from Machine: Don't forget to brush your teeth.
Answer from Human: I wont.
Game Learning - Who's the Real Winner?
A computer being smart enough to
play a game or a computer being
smart enough to
win a game is not the point, it's proving the
potential of
Ai. When you replace the squares and the rules of the
pieces of the game,
with the rules of molecules in a particular environment, then you can plan and
predict all kinds of things in nature, like
climate change. What would be
the point of
machine learning if you're not
learning anything important?
As we were perfecting the computers power and potential, we
realized that we should be making these same improvements with
ourselves. Think about it, our brains are computers, so why
don't humans have an
operating system?
We created
the computer to help us learn more and to be more
productive. But it simply wasn’t that the computer educated us
more, it was the realization that the computer was actually us.
This has happened with a lot of human creations and
advancements. We start off creating something to improve our
lives and it ends up teaching us that we are the ones who need
to improve and not our technology. If our education system does
not advance at the same speed as technology, we will continue to
suffer from these advancements instead of benefiting from them.
And that is a proven fact if you look at the world today and see
the worldwide suffering and the atrocities that have continued
to get worse and worse. One life improving at the expense of 1,000’s of other lives is not improvement it is simply criminal
and
insane.
A Computer did not beat Lee Se--Dol
playing the
Board Game Go, a team of 100's of highly skilled human's using a
machine
that was
programed using thousands of hours of human
collective
experience and
intelligence, that is what beat him. That's like someone using a calculator
to beat you in a math contest when you don't have a
calculator, that's not fair. And you wouldn't say that the calculator
is smart, because a human still
has to write the code and make the calculator and then push the
buttons.
Google Software DeepMind’s AI System Algorithm or AlphaGo does show
us how advanced machines are becoming, which is good thing, just
as long as we use our advanced technological machines for actual
real life
problem solving, instead of just using technology to entertain ourselves
playing games, or other time wasting activities. This is not to
say that games are not worth playing, we do have
learning games. What if Alphago played itself, who would win
the game then?
Machine Learning.
People who are
making an effort to create Ai will eventually realize they should also be
putting in the same amount of effort in creating
Human intelligence and not
just machine intelligence. It's like one of those moments when you realize
that you were going in the right direction but the destination you thought
you were heading to turned out to be something different, but even better
then the original idea. I really wasn't looking for myself, but there I was,
asking "what about me?" You were going to make machines smart and leave me
to be stupid? Nice friend you are. Obviously smart machines are not going
to stop you from being stupid, even though humans are a far better
machine. But I totally believe that machines and humans have a an amazing
future to look forward to, but only if humans are more intelligent than
machines. Otherwise it will not work well or end well.
Future
of Life -
Human Machine -
Brain and Computer Similarities
Remember how some people actually thought that
Artificial
Intelligence, or AI, was the next big thing. What they didn’t
realize was that
Artificial Intelligence was
actually referring to
Human
Intelligence. This of course was a
human error. It was the Human Brain that has incredible
potential with endless possibilities and abilities, not
artificial intelligence. If the people at
CYC Corporation
and that Watson
IBM Computer on Jeopardy spent the same amount of time,
people and resources on
creating an education curriculum that was based on learning and
understanding, they would have created something a lot more
valuable and useful than a
gimmick.
This is not to down play what they have accomplished, because
it is incredible. Imagine being able to ask a question and
getting an appropriate answer in the matter of seconds, that
would increase our abilities tremendously.
But we can't create a smarter planet if we're using the same
thinking that also created all our problems.
To create a smarter planet you have to make
people smarter, and not just by doing so called
'smart things', unless
one of those smart things actually improves education curriculum
and the
teaching methods that we use.
Watson is a
question-answering computer system capable of
answering
questions posed in natural language, developed in IBM's
DeepQA project by a research team. The computer system was initially
developed to answer questions on the quiz show Jeopardy! and, in 2011, the
Watson computer system competed on Jeopardy! against champions Brad Rutter
and Ken Jennings, winning the first place prize of $1 million, which
Watson used to send the losers and the research team that developed him to
college to get educated, and not just schooled. Touché Watson, Touché.
To say that a database like Watson is
artificial intelligence would be incorrect. To say that
computers can do things that humans can't do would also be
incorrect. Humans build machines and tools to expand our
abilities, and also to save us time. Machines are not doing
things better then humans, machines are doing things for humans.
You can put all known knowledge and information into a machine
but that machine will still be far from intelligent. Humans have
the ability to be intelligent, but we first have to define a
particular intelligent action and then prove it to be
intelligent. And at the moment, we are far from defining what
intelligence is, or what intelligence is supposed to be. But we
do have the abilities and the knowledge to accomplish this, so
it's just a matter of time before intelligence becomes
mainstream. We are not building machines to think like humans or
to think for humans, we are building machines to help humans
think more. Instead of taking advantage of peoples ignorance by
selling them false narratives about artificial intelligence, how
about educating people, that would be the intelligent thing to
do.
Affective Computing (PDF) -
Affective-computing (MIT)
Tay is an artificial
intelligent
chat
bot developed by Microsoft's Technology and Research and Bing teams to
experiment with and conduct research on conversational understanding. The
more you chat with Tay the smarter she gets is a lie. We need to stop this
type of abuse using words that
mislead and misinform.
Artificial
Intelligence Research. The
concept is there, it's just not perfected yet, and just what
are you perfecting?
And just how does this relate to the normal processes of the
human brain?
There has to be a
procedure for every
systems control, so what are these procedures?
We all need to verify the validly of the procedures and learn
why the procedures are written the way they are. Have you
answered every
scenario, and have you
correctly identified the
variables, and the most critical scenarios, and have you put
them in the appropriate order?
The movie
Robot &
Frank was OK even though it was silly in some
parts, especially the parts about Artificial Intelligence. I
would like to see a TV show with a Robot of this type. Everyone
who logs into the Internet website for
"The Robot Show" can see what the robot sees and can even suggest
what the robot should do. People could also help the robot
analyze moments in the Robots life, like a collective learning
environment. All the suggestions will be posted so everyone can
see the comments and the percentages of people who voted for a
particular action. The Robot show will be Kind of like
The
Truman Show, except with a Robot. The Robot will start by
experiencing the birth of a human, and then stay with the family
and watch the child or children grow up. There will also be one
more Robot that just goes out and learns from the world by
experiencing life in all kinds of situations with all kinds of
different people. Of course everything that each Robot learns
will be stored in a central database and will be used to help
perfect Artificial Intelligence and also help the Robots make
better decisions by using the collective data. This will be a
show that actually learns and teaches. So for the millions of
people who will be connected to the robots through the website
will actually be contributors of information and knowledge that
will help create Artificial intelligence, collectively. And yes
I am Functioning Normal.
Robot Operating System
is a collection of software frameworks for robot software
development, (see also Robotics middleware) providing operating
system-like functionality on a heterogeneous computer cluster. ROS
provides standard
operating system services such as hardware abstraction,
low-level device control, implementation of commonly used functionality,
message-passing between processes, and package management. Running sets of ROS-based processes are represented in a graph architecture where
processing takes place in nodes that may receive, post and multiplex
sensor, control, state, planning, actuator and other messages. Despite the
importance of reactivity and low latency in robot control, ROS, itself, is
not a real-time OS (RTOS), though it is possible to integrate ROS with
real-time code. The lack of support for
real-time systems is being
addressed in the creation of ROS 2.0. Software in the ROS Ecosystem can be
separated into three groups: Language-and platform-independent tools used
for building and distributing ROS-based software; ROS client library
implementations such as roscpp,rospy, and roslisp; Packages containing
application-related code which uses one or more ROS client libraries.
DAvinCi Robotic Operating System
Robo Brain
-
ROS
Short Circuit (1986 film) (wiki)
-
International Robot Exhibition (wiki)
Robot Building -
Networks
Device Driver is a
computer program
that operates or
controls a particular type of
device that is attached to a computer. A driver provides a software
interface to hardware devices, enabling operating systems and other
computer programs to access hardware functions without needing to know
precise details of the hardware being used.
Search Technology
One of the greatest advancements is the Search Feature.
Finding what you're looking for is like having a
good memory,
except you're not only searching your own memory, but the
combined memories of
millions of humans, which is incredible.
Search Engine Technology is an information retrieval
software program that discovers, crawls, transforms and stores information
for retrieval and presentation in response to user queries.
Semantic Search seeks to improve search accuracy by
understanding the searcher's intent and the contextual meaning of terms as
they appear in the searchable dataspace, whether on the Web or within a
closed system, to generate more relevant results.
Search Engine Software -
Search Engine Types -
Human Search Engine
Web Search Engine is a software system that is designed
to search for information on the World Wide Web.
Data Mining the computational process of discovering
patterns in large data sets involving methods at the intersection of
artificial intelligence, machine learning, statistics, and database
systems.
Big Data
Search
Algorithm is an algorithm that retrieves information stored
within some data structure. Data structures can include linked lists,
arrays, search trees, hash tables, or various other storage methods. The
appropriate search algorithm often depends on the data structure being
searched. Searching also encompasses algorithms that query the data
structure, such as the SQL SELECT command.
Transderivational Search means when a search is
being conducted for a fuzzy match across a broad field. In computing the
equivalent function can be performed using content-addressable memory.
Unlike usual searches, which look for literal (i.e. exact, logical, or
regular expression) matches, a transderivational search is a search for a
possible meaning or possible match as part of communication, and without
which an incoming communication cannot be made any sense of whatsoever. It
is thus an integral part of processing language, and of attaching meaning
to communication.
Adaptive Search is a metaheuristic algorithm commonly
applied to combinatorial optimization problems.
Adaptive Search (youtube)
Metaheuristic is
a higher-level procedure or heuristic designed to find, generate, or
select a
heuristic
(partial search algorithm) that may provide a sufficiently good solution
to an optimization problem, especially with incomplete or imperfect
information or limited computation capacity.
Human Search Engine -
Questions and Answers Format
RankBrain
is a process that helps provide more relevant search results for users.
(hopefully a process not manipulated by money).
Neural Network
Artificial Neural Network
is a
network inspired by
biological neural networks
such as the
central nervous
systems of animals, in particular the
brain, which are used to estimate or
approximate functions that can depend on a large number of inputs that
are generally unknown. Artificial neural networks are typically specified
using three things.
1: Architecture
Rule specifies what
variables are
involved in the
network and their
topological relationships—for
example the variables involved in a neural network might be the weights of
the
connections between the neurons, along with activities of the
neurons.
2: Activity Rule states that most neural network models have short
time-scale dynamics: local rules define how the activities of the
neurons change in response to each other. Typically the activity rule
depends on the weights (the
parameters) in the network.
3: Learning
Rule specifies the way in which the neural network's
weights change with time. This
learning is usually viewed as taking
place on a longer time scale than the time scale of the dynamics under
the activity rule. Usually the learning rule will depend on the activities
of the neurons. It may also depend on the values of the target values
supplied by a teacher and on the current value of the weights.
Deep Learning
-
Nodes -
Value Networks -
Internet -
Matrix
-
Layers -
Black Box -
Algorithms
Artificial Neuron
is a mathematical function conceived as a model of
biological neurons.
Artificial neurons are the constitutive units in an artificial neural
network.
Artificial 'neurotransistor' created. Imitating the functioning of
neurons using semiconductor materials.
Feedforward Neural Network is one of the two broad types of artificial
neural network, characterized by direction of the flow of information
between its layers. Its flow is uni-directional, meaning that the
information in the model flows in only one direction—forward—from the
input nodes, through the hidden nodes (if any) and to the output nodes,
without any cycles or loops, in contrast to
recurrent neural networks, which have a bi-directional flow. Modern
feedforward networks are trained using the backpropagation method and are
colloquially referred to as the "vanilla" neural networks. A
Feedforward Neural Network is an artificial neural network wherein
connections between the nodes do not form a cycle. As such, it is
different from recurrent neural networks.
Hidden Layer
in an artificial neural network is a layer in between
input layers and
output layers, where
artificial neurons take in a set of weighted inputs and produce an output
through an activation function. It is a typical part of nearly any neural
network in which engineers simulate the types of activity that go on in
the human brain. Hidden layers have neurons or nodes which apply different
transformations to the input data. One hidden layer is a collection of
neurons stacked vertically.
Hidden Layer is layer of artificial neurons that may be applied. It is
neither the input nor the output layer and is positioned between both. The
hidden layers transform inputs from the input layer to the output layer.
This is accomplished by applying what are called
weights
to the inputs and passing them through what is called an activation
function, which calculate input based on input and weight. This allows the
artificial neural network to learn non-linear relationships between the
input and output data. The weighted inputs can be randomly assigned. They
can also be fine-tuned and calibrated through what is called
back propagation.
Black Box -
How the
Media and Schools Influence Biases
Generative AI models are encoding biases and negative stereotypes in their
users. Growing research is underlining that they are encoding biases
and negative stereotypes in their users, as well as mass generating and
spreading seemingly accurate but nonsensical information. Worryingly,
marginalized groups are disproportionately affected by the fabrication of
this nonsensical information.
Bayesian
Confidence Propagation Neural Network node activations represent
probability ("confidence") in the presence of input features or
categories, synaptic weights are based on estimated correlations and the
spread of activation corresponds to calculating posteriori probabilities.
Convolutional Neural Network
is a type of feed-forward artificial neural network in which the
connectivity pattern between its neurons is inspired by the organization
of the animal visual cortex, whose individual neurons are arranged in such
a way that they respond to overlapping regions tiling the visual field.
Convolutional networks were inspired by biological processes and are
variations of multilayer perceptrons designed to use minimal amounts of
preprocessing. They have wide applications in image and
video recognition,
recommender systems and
natural language processing.
Convolution is a mathematical operation on two functions (f and g) to
produce a third function, that is typically viewed as a modified version
of one of the original functions, giving the integral of the pointwise
multiplication of the two functions as a function of the amount that one
of the original functions is translated. Convolution is similar to
cross-correlation. For
discrete real valued signals, they differ only in a time reversal in one
of the
signals. For
continuous signals, the cross-correlation operator is the adjoint operator
of the convolution operator. It has applications that include probability,
statistics, computer vision, natural language processing, image and signal
processing, engineering, and
differential equations.
Backpropagation is a method used in artificial neural networks to
calculate the error contribution of each neuron after a
batch of data (in image recognition,
multiple images) is processed. It is a special case of an older and more
general technique called automatic differentiation. In the context of
learning, backpropagation is commonly used by the gradient descent
optimization algorithm to adjust the weight of neurons by calculating
the gradient of the loss function. This technique is also sometimes called
backward propagation of errors, because the error is calculated at the
output and distributed back through the network layers.
Automatic Differentiation is a set of techniques to numerically
evaluate the derivative of a function specified by a computer program. AD
exploits the fact that every computer program, no matter how complicated,
executes a sequence of elementary arithmetic operations (addition,
subtraction, multiplication, division, etc.) and elementary functions (exp,
log, sin, cos, etc.). By applying the chain rule repeatedly to these
operations, derivatives of arbitrary order can be computed automatically,
accurately to working precision, and using at most a small constant factor
more arithmetic operations than the original program. Automatic
differentiation is not:
Symbolic differentiation, nor
Numerical differentiation (the method of finite differences).
Differentials.
Recurrent
Neural Network is a class of artificial neural network where
connections between units form a
directed cycle. This creates an internal state of the network which
allows it to exhibit dynamic temporal behavior. Unlike feedforward neural
networks, RNNs can use their internal memory to process arbitrary
sequences of inputs. This makes them applicable to tasks such as
unsegmented connected handwriting recognition or speech recognition.
Bidirectional associative memory is a type of recurrent neural
network.
Hopfield Network.
Modular
Neural Network is an artificial neural network characterized by a
series of independent neural networks moderated by some intermediary. Each
independent neural network serves as a module and operates on separate
inputs to accomplish some subtask of the task the network hopes to
perform. The intermediary takes the outputs of each module and processes
them to produce the output of the network as a whole. The intermediary
only accepts the modules’ outputs—it does not respond to, nor otherwise
signal, the modules. As well, the modules do not interact with each other.
Graph Neural Networks apply the predictive power of
deep learning to rich data structures that
depict objects and their
relationships as
points connected by lines in a graph. In GNNs, data points are called
nodes, which are linked by lines — called edges — with elements expressed
mathematically so machine learning algorithms can make useful predictions
at the level of nodes, edges or entire graphs.
EdgeSHAPer: Bond-centric Shapley value-based explanation method for
graph neural networks.
Interdependent Networks is a subfield of network science dealing with
phenomena caused by the
interactions
between complex networks.
Dependency Network approach provides a system level
analysis of the activity and topology of directed networks. The approach
extracts causal topological relations between the network's nodes (when
the network structure is analyzed), and provides an important step towards
inference of
causal activity relations between the network nodes (when analyzing
the network activity). This methodology has originally been introduced for
the study of financial data, it has been extended and applied to other
systems, such as the
immune system, and
semantic networks. In the case of network activity, the analysis is based
on partial correlations, which are becoming ever more widely used to
investigate complex systems. In simple words, the partial (or residual)
correlation is a measure of the effect (or contribution) of a given node,
say j, on the correlations between another pair of nodes, say i and k.
Using this concept, the dependency of one node on another node, is
calculated for the entire network. This results in a directed weighted
adjacency matrix, of a fully connected network. Once the adjacency matrix
has been constructed, different algorithms can be used to construct the
network, such as a threshold network, Minimal Spanning Tree (MST), Planar
Maximally Filtered Graph (PMFG), and others
Semantic Network is a knowledge base that represents semantic
relations between concepts in a network. This is often used as a form of
knowledge representation. It is a directed or undirected graph consisting
of vertices, which represent concepts, and edges, which represent semantic
relations between concepts, mapping or connecting semantic fields. Typical
standardized semantic networks are expressed as semantic triples. Semantic
networks are used in natural language processing applications such as
semantic parsing and word-sense disambiguation.
Weights and Biases
are the learnable parameters of some machine learning models and neural
networks. Each neuron layer is connected to some or all of the neurons in
the next layer.
Weights control the signal strength of the connection between tow neurons.
Weight decides how much influence the input will have on the output. Biases are an additional input into the next layer that will always have a
value of 1.
Activation Function decides whether a neuron should be activated or
not. This means that it will decide whether the neuron's input to the
network is important or not in the process of prediction using simpler
mathematical operations.
Diffusion Models
are a class of
latent variable generative models. A diffusion model consists of three
major components: the forward process, the reverse process, and the
sampling procedure. The goal of diffusion models is to learn a diffusion
process that generates a probability distribution for a given dataset from
which we can then sample new images. They learn the latent structure of a
dataset by modeling the way in which data points diffuse through their
latent space.
Perceptron is a neural network unit or an artificial neuron that does
certain computations to detect features or business intelligence in the
input data. And this perceptron tutorial will give you an in-depth
knowledge of Perceptron and its activation functions. In machine learning,
the perceptron is an algorithm for supervised learning of binary
classifiers. A binary classifier is a function which can decide whether or
not an input, represented by a vector of numbers, belongs to some specific
class. It is a type of linear classifier, i.e. a classification algorithm
that makes its predictions based on a linear predictor function combining
a set of weights with the feature vector. The perceptron algorithm was
invented in
1958 at the
Cornell
Aeronautical Laboratory by
Frank Rosenblatt, funded by the United States Office of Naval
Research.
NIST’s Superconducting Synapse May Be Missing Piece for ‘Artificial
Brains’. NIST built a superconducting switch that “learns” like a
biological system and could connect
processors and store
memories in future computers
operating like the
human brain.
Researchers grow active mini-brain-networks. Cerebral organoids are
artificially grown, 3D tissue cultures that
resemble the human brain. Now,
researchers report success with functional neural networks derived from
these
organoids, which are miniaturized and simplified version of an organ
produced
in vitro in three dimensions
that shows realistic micro-anatomy.
New study allows Brain and Artificial Neurons to Link up over the Web.
Research on novel
nano-electronics devices has
enabled brain neurons and artificial neurons to communicate with each
other over the Internet.
Solving complex learning tasks in brain-inspired computers. Spiking
neural networks, which mimic the structure and function of a natural
nervous system, are powerful, fast, and energy-efficient. One key
challenge is how to train such complex systems. An interdisciplinary
research team has now developed and successfully implemented an algorithm
that achieves such training. It can be used to train spiking
neural networks to solve complex
tasks with extreme energy efficiency.
Physical processes can have hidden neural network-like abilities. A
new study shows that the physics principle of '
nucleation'
can perform complex
calculations
that rival a simple neural network. The work may suggest avenues for new
ways to think about computation using the principles of physics. We tend
to separate the brain and the muscle -- the brain does the thinking; the
muscle does the doing. The brain takes in complex information about the
world and makes decisions, and the muscle merely executes. This has also
shaped how we think about a single cell; some molecules within cells are
seen as 'thinkers' that take in information about the chemical environment
and decide what the cell needs to do for survival; separately, other
molecules are seen as the 'muscle,' building structures needed for
survival.
Future of brain-inspired AI as Python code library passes major milestone.
An open source code library for brain-inspired deep learning, called '
snnTorch,'
has surpassed 100,000 downloads and is used in a wide variety of projects
from NASA satellite tracking efforts to semiconductor companies optimizing
chips for AI. Spiking neural networks emulate the brain and biological
systems to process information more efficiently. The brain's neurons are
at rest until there is a piece of information for them to process, which
causes their activity to spike. Similarly, a spiking neural network only
begins processing data when there is an input into the system, rather than
constantly processing data like traditional neural networks.
Neuromorphic computing inorganic materials mimic neural signals
responsible for transmitting information within the human brain. A
neuron-like electrical switching mechanism in the solid-state material
ß'-CuxV2O5 -- specifically, how it reversibly morphs between conducting
and insulating behavior on command. Chameleon-like material changes with
temperature or an applied electrical stimulus. To emulate the essential
elements of neuronal function in artificial circuitry, we need solid-state
materials that exhibit electronic instabilities, which, like
neurons, can store information
in their internal state and in the timing of electronic events.
Unsupervised Learning with Artificial Neurons.
Stochastic Phase-Change Neurons.
Magnets can help AI get closer to the efficiency of the human brain.
The mind's eye of a neural network system. A new tool, based on
topology, makes finding the areas where neural networks are confused as
simple as spotting mountaintops from an airplane. The ability to spot and
address those areas of confusion should enable more confident application
of neural networks in high-stakes decision scenarios or image prediction
tasks like healthcare and research. Neural network image recognition
systems are essentially algorithms that process data in a way that mimics
the weighted firing pattern of neurons as an image is analyzed and
identified. But the embedded vectors and probabilities don't correlate to
a decision-making process that humans would recognize. The problem with
neural networks is that we can't see inside the machine to understand how
it's making decisions.
Long
Short-Term Memory block or network is a simple recurrent neural
network which can be used as a building component or block (
of
hidden layers) for an eventually bigger recurrent neural network. The
LSTM block is itself a recurrent network because it contains recurrent
connections similar to connections in a conventional recurrent neural
network. An LSTM block is composed of four main components: a cell, an
input gate, an output gate and a forget gate. The cell is responsible for
"
remembering" values
over arbitrary time intervals; hence the word "
memory"
in LSTM. Each of the three gates can be thought as a "conventional"
artificial neuron, as in a multi-layer (or feedforward) neural network:
that is, they compute an activation (using an activation function) of a
weighted sum. Intuitively, they can be thought as regulators of the flow
of values that goes through the connections of the LSTM; hence the
denotation "gate". There are connections between these gates and the cell.
Some of the connections are recurrent, some of them are not. The
expression long short-term refers to the fact that LSTM is a model for the
short-term memory which can last for a long period of time. There are
different types of LSTMs, which differ among them in the components or
connections that they have. An LSTM is well-suited to classify, process
and predict time series given time lags of unknown size and duration
between important events. LSTMs were developed to deal with the exploding
and vanishing gradient problem when training traditional RNNs. Relative
insensitivity to gap length gives an advantage to LSTM over alternative
RNNs, hidden Markov models and other sequence learning methods in numerous
application.
Biological Neural Network is a series of interconnected
neurons whose
activation defines a recognizable linear pathway. The interface through
which neurons interact with their neighbors usually consists of several
axon terminals connected via synapses to dendrites on other neurons. If
the sum of the input
signals into one neuron surpasses a certain
threshold, the neuron sends an
action potential (AP) at the axon hillock
and transmits this electrical signal along the axon.
Neural Pathway
connects one part of the nervous system with another via a bundle of
axons, the long fibers of neurons. A neural pathway that serves to
connect relatively distant areas of the brain or
nervous system is usually
a bundle of neurons, known collectively as white matter. A neural pathway
that spans a shorter distance between structures, such as most of the
pathways of the major neurotransmitter systems, is usually called
grey
matter.
Optical Neural Network
Neurophysiology
is a branch of physiology and
neuroscience that is concerned with the study of the functioning of
the nervous system. The primary tools of basic neurophysiological research
include electrophysiological recordings, such as patch clamp, voltage
clamp, extracellular single-unit recording and recording of local field
potentials, as well as some of the methods of calcium imaging,
optogenetics, and molecular
biology.
Stochastic Neural Analog Reinforcement Calculator or SNARC, is a
neural net machine designed by
Marvin Lee Minsky. George Miller gathered the funding for the project
from the Air Force Office of Scientific Research in the summer of
1951. At the time, one of Minsky's graduate
students at Princeton, Dean Edmund, volunteered that he was good with
electronics and therefore Minsky brought him onto the project.
Generative Adversarial Network are a class of artificial intelligence
algorithms used in
unsupervised machine learning, implemented by a system of two neural
networks contesting with each other in a zero-sum game framework. They
were introduced by Ian Goodfellow et al. in 2014. This technique can
generate
photographs that look at least superficially authentic to human
observers, having many realistic characteristics (though in tests people
can tell real from generated in many cases).
New Reservoir Computer marks first-ever Microelectromechanical Neural
Network Application. A group of researchers reports the construction
of the first reservoir computing device built with a
microelectromechanical system.
The neural network exploits the nonlinear dynamics of a microscale silicon
beam to perform its calculations. The group's work looks to create devices
that can act simultaneously as a sensor and a computer using a fraction of
the energy a normal computer would use.
Reservoir
Computing is a framework for computation that may be viewed as an
extension of neural networks. Typically an input signal is fed into a
fixed (random) dynamical system called a reservoir and the dynamics of the
reservoir map the input to a higher dimension. Then a simple readout
mechanism is trained to read the state of the reservoir and map it to the
desired output. The main benefit is that training is performed only at the
readout stage and the reservoir is fixed.
Liquid-state machines and
echo state networks are two major types of reservoir computing.
New Technique that reduces training time for Deep Learning Networks by
more than 60 percent without sacrificing accuracy, accelerating the
development of new artificial intelligence (AI) applications.
Networks (computers) -
Human Brain -
Internet
Liquid State Machine is a type of
reservoir
computer that uses a spiking neural network. An LSM consists of a
large collection of units (called nodes, or neurons). Each node receives
time varying input from external sources (the inputs) as well as from
other nodes. Nodes are randomly connected to each other. The recurrent
nature of the connections turns the time varying input into a spatio-temporal
pattern of activations in the network nodes. The spatio-temporal
patterns of activation are read out by linear
discriminant units. The soup of recurrently connected nodes will end up
computing a large variety of nonlinear functions on the input. Given a
large enough variety of such nonlinear functions, it is theoretically
possible to obtain linear combinations (using the read out units) to
perform whatever mathematical operation is needed to perform a certain
task, such as speech recognition or
computer vision.
The word liquid in the name comes from the analogy drawn to dropping a
stone into a still body of water or other liquid. The falling stone will
generate ripples in the liquid. The input (motion of the falling stone)
has been converted into a spatio-temporal pattern of liquid displacement
(ripples). LSMs have been put forward as a way to explain the operation of
brains. LSMs are argued to be an improvement over the theory of
artificial neural networks because: Circuits are
not hard coded to perform a specific task. Continuous time inputs are
handled "naturally". Computations on various time scales can be done using
the same network. The same network can perform multiple computations.
Criticisms of LSMs as used in computational neuroscience are that LSMs
don't actually explain how the brain functions. At best they can replicate
some parts of brain functionality. There is no guaranteed way to dissect a
working network and figure out how or what computations are being
performed. Very little control over the process.
"Liquid" machine-learning system adapts to changing conditions. MIT
researchers have developed a type of neural network that learns on the
job, not just during its training phase. These flexible algorithms, dubbed
"liquid" networks, change their underlying equations to continuously adapt
to new data inputs. The advance could aid decision making based on data
streams that change over time, including those involved in medical
diagnosis and autonomous driving.
Geoffrey Hinton is an English Canadian cognitive psychologist and
computer scientist, most noted for his work on artificial neural networks.
Since 2013 he divides his time working for Google (Google Brain) and the
University of Toronto. In 2017, he cofounded and became the Chief
Scientific Advisor of the Vector Institute in Toronto.
Neural Net Computing in Water. Ionic circuit computes in an aqueous
solution with an
ionic circuit comprising
hundreds of
ionic transistors and performed a core process of neural net
computing. Microprocessors in smartphones, computers, and data centers
process information by manipulating electrons through solid semiconductors
but our brains have a different system. They rely on the manipulation of
ions in liquid to process information. A team engineered the pH-gated
ionic transistor in such a way that the disk current is an arithmetic
multiplication of the disk voltage and a "weight" parameter representing
the local pH gating the transistor. They organized these transistors into
a 16 × 16 array to expand the analog arithmetic multiplication of
individual transistors into an analog matrix multiplication, with the
array of local pH values serving as a weight matrix encountered in neural
networks.
A neural network learns when it should not be trusted. A faster way to
estimate uncertainty in AI-assisted decision-making could lead to safer
outcomes. Researchers have developed a way for
deep
learning neural networks to rapidly estimate confidence levels in
their output. The advance could enhance safety and efficiency in
AI-assisted decision making, with applications ranging from medical
diagnosis to
autonomous driving. Increasingly,
artificial intelligence systems known as deep learning neural networks are
used to inform decisions vital to human health and safety, such as in
autonomous driving or medical diagnosis. These networks are good at
recognizing patterns in large, complex datasets to aid in decision-making.
But how do we know they're correct?
Researchers create a neural network for genomics -- one that explains
how it achieves accurate predictions. A team of computer scientists has
created a neural network that can explain how it reaches its predictions.
The work reveals what accounts for the functionality of neural
networks--the engines that drive artificial intelligence and machine
learning--thereby illuminating a process that has largely been concealed
from users. Specifically, they developed a model -- the data-driven
equivalent of a high-powered microscope -- that allows scientists to trace
and quantify the RNA splicing process, from input sequence to output
splicing prediction.
Interpretable-by-Design.
Quantum Networks facilitate the transmission of information in the
form of quantum bits, also called qubits, between physically separated
quantum processors. A quantum processor is a small
quantum computer being
able to perform quantum logic gates on a certain number of qubits. Quantum
networks work in a similar way to classical networks. The main difference
is that quantum networking, like quantum computing, is better at solving
certain problems, such as modeling quantum systems.
Breakthrough in tin-vacancy centers for quantum network applications.
Tin-vacancy (Sn-V) centers in diamond have the potential to function as
quantum nodes in quantum networks to transmit information. However, they
pose limitations while showing optical properties to generate quantum
entanglement. Researchers have now overcome this challenge by generating
stable Sn-V centers that can produce photons with nearly identical
frequencies and linewidths, paving the way for the advancement of Sn-V
centers as a quantum-light matter interface.
Controls - Controlling
Control is the activity of
managing or handling something carefully. Having the
power or the
ability
to
direct or
determine outcomes. The
discipline to
regulate functions,
actions or
reflex's. Being
mindful of your
will power. Control also means to have
great skillfulness and
having a firm understanding or knowledge of
some subject or activity.
Control in science is a
standard against which other conditions
can be
compared and
verified in a scientific experiment.
Top-down and bottom-up.
Adaptive Control is the control method used by a controller
which must
adapt to a controlled
system with parameters which
vary, or are
initially uncertain.
Intelligent Control is a class of control techniques that use various
artificial intelligence computing approaches like neural networks,
Bayesian probability, fuzzy logic, machine learning, reinforcement
learning, evolutionary computation and genetic algorithms.
Process Control
in continuous
production processes, is a
combination of control
engineering
and chemical engineering disciplines that uses industrial
control systems to achieve a production level of
consistency,
economy and safety which could not be achieved purely by human manual
control. It is implemented widely in industries such as oil refining, pulp
and paper manufacturing, chemical processing and power generating plants,
There is a wide range of size, type and complexity, but it enables a small
number of operators to
manage
complex
processes to a high
degree of
consistency. The development of large
automatic process
control systems was instrumental in enabling the design of large high
volume and complex processes, which could not be otherwise economically or
safely operated. In
process control, there is process gain.
Process gain is the relationship
between the process control output and the process control input, and is
defined as the change in input divided by the change in output.
Positive gain is
when both the input and the output are increasing, while
negative
gain is when the input increases, while the output decreases. The
applications can range from controlling the temperature and level of a
single process vessel, to a complete chemical processing plant with
several thousand control loops.
Process Control is an
engineering discipline that deals with
architectures, mechanisms and
algorithms for maintaining the output of a
specific process within a desired range. For instance, the temperature of
a chemical reactor may be controlled to maintain a consistent product
output.
The Illusion of Control
-
Puppeteers -
Drones -
Will Power -
Brainwashers -
Black Box -
You
Can't Control Everything -
Control Freak -
Quality Control
-
Remote Control
Irrepressible is something that is
not
able to be controlled or restrained. Something almost impossible to
repress or control.
Out of Control.
First Officer is the second pilot or the
co-pilot of an aircraft or a relief pilot on an airplane.
Subordinate is an
assistant that is
subject to the
authority or
control of another, or
submissive to authority
or the control of another. A rank or order as less important or consider
of less value.
Subconscious -
Autonomous.
Underling is an assistant subject to the
authority or control of another.
Manual Control is a type of control
that is
manually performed by hand or by individuals.
Manual is something requiring
human control
by hand.
Backup Manual Control System or
Mechanical Backup System provides a person with the ability to operate a
machine in the absence of
automatic control features.
Actuator is a component of a machine that is responsible for moving
and controlling a mechanism or system, for example by opening a valve. In
simple terms, it is a "
mover". An
actuator requires a control signal and a source of energy. The control
signal is relatively low energy and may be electric voltage or current,
pneumatic, or hydraulic fluid pressure, or even human power. Its main
energy source may be an
electric current,
hydraulic pressure, or pneumatic pressure. When it receives a control
signal, an actuator responds by converting the source's energy into
mechanical motion. In the electric,
hydraulic, and
pneumatic sense, it is a form of automation or
automatic control. An actuator is a mechanism by which a control
system acts upon to perform an operation or task. The control system can
be simple (a fixed mechanical or electronic system), software-based (e.g.
a printer driver, robot control system), a human, or any other input.
Triggers.
Thermal Actuator is a device used to
transform energy into motion. A thermal actuator is a type of non-electric
motor made of components such as a piston and a
thermal sensitive material
capable of producing linear
motion in response to temperature changes.
Robotics.
Solenoid converts
electrical energy into mechanical work. A
transducer
device that
converts energy into linear motion. The term is also often
used to refer to a solenoid valve, an integrated device containing an
electromechanical solenoid which actuates either a
pneumatic or
hydraulic
valve, or a solenoid switch, which is a specific type of relay that
internally uses an electromechanical solenoid to operate an
electrical
switch.
Electromechanics is the interaction of electrical and mechanical
systems as a whole and how the two systems interact with each other.
Electromechanical devices are ones which have both electrical and
mechanical processes. A manually operated switch is an
electromechanical component due to the mechanical movement causing an
electrical output.
Harness is to control and direct things
as if by using reins. To exploit the power of some force. A harness can
also mean a support consisting of an arrangement of straps for holding
something to the body, like a parachute. Stable gear consisting of an
arrangement of leather straps fitted to a draft animal so that it can be
attached to a cart and help pull a cart and be controlled using reins or a
pair of long straps usually connected to the headpiece.
Override is to use one's authority to
reject or cancel or interrupt the action of an
automatic device,
typically in order to take
manual control
and do something more important. You can override or reject a decision.
Prevail -
Counteract.
Overwrite is entering new data in the place
of old data.
Editing -
Reprogram -
Method Overriding.
Nullify is to cancel out, null, void or
invalidate a previous decision.
Dismiss.
Veto is a
constitutional right
to reject a decision or proposal made by a law-making body. A veto is the
power to unilaterally stop an
official action, especially the enactment of legislation.
Appeal.
Control Logic
is a key part of a
software program that
controls the operations of the program. The control logic responds to
commands from the user, and it also acts on its own to perform automated
tasks that have been structured into the program. Control logic can be
modeled using a state diagram, which is a form of
hierarchical state
machine. These state diagrams can also be combined with flow charts to
provide a set of computational semantics for describing complex control
logic. This mix of state diagrams and flow charts is illustrated in the
figure on the right, which shows the control logic for a simple stopwatch.
The control logic takes in commands from the user, as represented by the
event named “START”, but also has automatic recurring sample time events,
as represented by the event named “TIC”.
Control
Engineering is an engineering discipline that applies
automatic control theory to design systems with
desired behaviors in control environments. The discipline of controls
overlaps and is usually taught along with electrical engineering at many
institutions around the world. The practice uses
sensors and detectors to measure the output performance of the process
being controlled; these
measurements are used to provide corrective
feedback
helping to achieve the desired performance. Systems designed to perform
without requiring human input are called automatic control systems (such
as cruise control for regulating the speed of a car). Multi-disciplinary
in nature, control systems engineering activities focus on implementation
of control systems mainly derived by
mathematical modeling
of a diverse range of systems.
PDF.
Master / Slave Technology is a model of
communication where one device
or process has unidirectional
control over one or more other devices. In
some systems a
master is selected from a group of eligible devices, with
the other devices acting in the role of slaves. In the context of
motor control, the master/slave configuration is used for load sharing
purposes when two identical motors connected to two different drives are
coupled to a common load. One drive is defined as the master and is
configured for running in the speed-control mode whereas the other defined
as slave is configured for running in torque-control mode.
Control in management is one of the
managerial functions like planning,
organizing, staffing and directing. It is an important function because it
helps to check the errors and to take the
corrective action so that
deviation from standards are minimized and stated goals of the
organization are achieved in a desired manner. According to modern
concepts, control is a foreseeing action whereas earlier concept of
control was used only when
errors were detected. Control in
management
means setting standards, measuring actual performance and taking
corrective action.
Micro-Management.
You Can't Control
Everything -
You Can't be Aware
of EverythingPossession is the
act of having and controlling something
tangible or
intangible.
Control System is a device, or set of devices, that
manages,
commands, directs or
regulates the behaviour of other devices or systems.
They can range from a home heating controller using a
thermostat
controlling a boiler to large Industrial control systems which are used
for controlling processes or
machines.
Flight Control.
Regulator in
automatic control is a
regulator is a device which has the
function of
maintaining a designated characteristic. It performs
the activity of managing or maintaining a range of values in a
machine. The measurable property of a device is managed closely
by specified conditions or an advance set
value; or it can be a
variable according to a predetermined arrangement scheme. It can
be used generally to connote any set of various controls or
devices for regulating or controlling items or objects.
Examples are a
voltage regulator (which can be a
transformer
whose voltage ratio of transformation can be adjusted, or an
electronic circuit that produces a defined voltage), a
pressure
regulator, such as a diving regulator, which maintains its
output at a fixed pressure lower than its input, and a fuel
regulator (which controls the supply of fuel).
Regulators can be designed to control anything from gases or
fluids, to light or electricity.
Speed can be regulated by
electronic, mechanical, or electro-mechanical means. Such
instances include;
Electronic regulators as used in modern railway sets where the
voltage is raised or lowered to control the speed of the engine. Mechanical
Systems such as
valves as used in fluid control systems.
Purely mechanical pre-automotive systems included such designs as the Watt
centrifugal governor whereas modern systems may have electronic
fluid
speed sensing components directing solenoids to set the valve to the
desired rate. Complex electro-mechanical speed control systems used to
maintain speeds in modern cars (cruise control) - often including
hydraulic components, An aircraft engine's constant speed unit changes the
propeller pitch to maintain engine speed.
Cybernetics.
Real-time Control System is a reference model
architecture, suitable for many software-intensive, real-time control
problem domains. RCS is a reference model architecture that defines the
types of functions that are required in a real-time intelligent control
system, and how these functions are related to each other.
Operating Systems.
Programmable Logic Controller is an
industrial digital computer which
has been
ruggedised and
adapted for the control of manufacturing processes, such as
assembly
lines, or
robotic devices, or any activity that requires high reliability
control and ease of
programming and process fault diagnosis.
PLCs can range from small
modular devices with tens of
inputs and
outputs (I/O), in a housing integral with the
processor, to large
rack-mounted modular devices with a count of thousands of I/O, and which
are often networked to other PLC and SCADA systems. They can be designed
for multiple arrangements of digital and analog I/O, extended temperature
ranges, immunity to electrical noise, and resistance to vibration and
impact. Programs to control machine operation are typically stored in
battery-backed-up or
non-volatile memory.
PLCs were
first developed in the automobile industry to provide flexible, ruggedised
and easily programmable controllers to replace hard-wired relays and
timers. Since then they have been widely adopted as high-reliability
automation controllers suitable for harsh environments. A
PLC is an
example of a "hard" real-time system since output results must be produced
in response to input conditions within a limited time, otherwise
unintended operation will result.
Algorithms -
Electric Motors.
Controller in
control theory is a device, historically using
mechanical, hydraulic, pneumatic or electronic techniques often in
combination, but more recently in the form of a microprocessor or
computer, which monitors and physically alters the operating conditions of
a given dynamical system. Typical applications of controllers are to hold
settings for temperature, pressure, flow or speed.
Nonlinear Control is the area of control theory which deals
with systems that are nonlinear, time-variant, or both.
Closed-Loop Transfer Function in control theory is a
mathematical expression (algorithm) describing the net result of the
effects of a closed (feedback) loop on the input signal to the circuits
enclosed by the loop.
Hierarchical Control System is a form of control system in
which a set of devices and governing software is arranged in a
hierarchical tree.
When the links in the tree are implemented by a computer network, then
that
hierarchical control
system is also a form of networked control system.
Intelligent Control
is a class of control techniques that use various artificial intelligence
computing approaches like
neural networks,
Bayesian probability,
fuzzy logic,
machine learning,
evolutionary computation and genetic
Algorithms.
Networked Control System is a control system wherein the
control loops are closed through a communication network. The defining
feature of an NCS is that control and feedback signals are exchanged among
the system's components in the form of information packages through a
network.
Open-Loop Controller is when the control action from the
controller is independent of the "process output", which is the process
variable that is being controlled. It does not use feedback to determine
if its output has achieved the desired goal of the input or process "set
point". An open-loop system cannot engage in machine learning and also
cannot correct any errors that it could make. It will not compensate for
disturbances in the process being controlled.
Perceptual Control Theory is a model of behavior based on
the principles of negative feedback, but differing in important respects
from engineering control theory. Results of PCT experiments have
demonstrated that an organism controls neither its own behavior, nor
external environmental variables, but rather its own perceptions of those
variables. Actions are not controlled, they are varied so as to cancel the
effects that unpredictable environmental disturbances would otherwise have
on controlled perceptions.
Automatic Control is the application of mechanisms to the
operation and regulation of processes without continuous direct human
intervention.
Autonomous -
Automation
Control Theory is the idea that two control systems—inner
controls and outer controls—work against our tendencies to deviate.
Operating System -
Algorithms
Signal Chain is a term used in signal processing and
mixed-signal system design to describe a series of signal-conditioning
electronic components that receive input (data acquired from sampling
either real-time phenomena or from stored data) in tandem, with the output
of one portion of the chain supplying input to the next. Signal chains are
often used in signal processing applications to gather and process data or
to apply system controls based on analysis of real-time phenomena.
Feed Forward in Control is a term describing an element or
pathway within a control system which passes a controlling signal from a
source in its external environment, often a command signal from an
external operator, to a load elsewhere in its external environment. A
control system which has only feed-forward behavior responds to its
control signal in a pre-defined way without responding to how the load
reacts; it is in contrast with a system that also has feedback, which
adjusts the output to take account of how it affects the load, and how the
load itself may vary unpredictably; the load is considered to belong to
the external environment of the system.
Feedback (Positive
and Negative) -
Placebos
Nothing is beyond your control,
there is nothing that you cannot
control. Something's are harder
to control then others, and there are some things you have not
yet learned how to control. To say that I cannot control
something is a false statement. To be more accurate, you have to
say that I have not yet learned how to control this.
Gratification.
Autonomous - Automation
Autopilot is a
computer
navigation
system used to
control the trajectory of a
vehicle
without constant
hands-on control by a
human operator being required.
Autopilots do not replace a human operator, but
assist them in controlling
the vehicle, allowing them to
focus on broader aspects of operation, such
as monitoring the trajectory, weather and systems. Autopilots or
self-steering gear are used in aircraft, boats, spacecraft, missiles, and
others. Autopilots have evolved significantly over time, from early
autopilots that merely held an attitude to modern autopilots capable of
performing automated landings under the supervision of a pilot.
Augment.
Autonomous Robot is a
robot that performs
behaviors or tasks with a high degree of
autonomy, which is particularly
desirable in fields such as spaceflight, household maintenance (such as
cleaning), waste water treatment and delivering goods and services.
Robot Operating System.
Drones -
Spacecraft -
Satellites -
Remote Control -
Will Power -
Subconscious
Autonomous is something that is
not controlled by outside
forces. Existing as an
independent
entity.
Free from external control and constraint in e.g. action
and judgment.
Autonomy
is one who gives oneself
one's own law
and has free-will.Autonomic Nervous System
(autonomous functions of the human body) -
Reptilian Brain
Automata
Theory is the study of abstract machines and automata, as well as the
computational problems that can be solved using them. It is a theory in
theoretical computer science, under discrete mathematics (a subject of
study in both mathematics and computer science). The word automata (the
plural of automaton) comes from the Greek word αὐτόματα, which means
"
self-acting".
Insects, plants, animals and even matter has built in automation
features.
Humans have instincts which
are extremely useful when knowledge is absent. So automation is necessary.
But you still need manual
control.
Automation is the use of various
control systems for
operating equipment such as machinery, processes in factories, boilers and
heat treating ovens, switching on telephone networks, steering and
stabilization of ships, aircraft and other applications and vehicles with
minimal or reduced human intervention. Some processes have been completely
automated.
Automaton is a
self-operating machine, or a
machine or control mechanism designed to follow automatically
a
predetermined sequence of operations, or respond to
predetermined
instructions. Some automata, such as bellstrikers in
mechanical
clocks,
are designed to give the
illusion to the casual observer that they are
operating under their own power. (automata or automatons).
Automation Paradox
(off loading) -
GMU
Autonomous Robotics Laboratory
Ant insights lead to robot navigation breakthrough. Researchers create
an insect-inspired autonomous
navigation
strategy for tiny, lightweight robots. Biologists have an increasing
understanding of the underlying strategies used by insects. Specifically,
insects combine keeping track of their own motion (termed "odometry") with
visually guided behaviors based on their low-resolution, but almost
omnidirectional visual system (termed "view memory"). Whereas odometry is
increasingly well understood even up to the neuronal level, the precise
mechanisms underlying view memory are still less well understood. One of
the earliest theories on how this works proposes a "snapshot" model. In
it, an insect such as an ant is proposed to occasionally make snapshots of
its environment. Later, when arriving close to the snapshot, the insect
can compare its current visual percept to the snapshot, and move to
minimize the differences. This allows the insect to navigate, or 'home',
to the snapshot location, removing any drift that inevitably builds up
when only performing odometry.
Automation is
replacing jobs, so
human labor will do other more
important things, and that's a good thing.
There is already
autonomous machines
in nature, like insects, plants, bacteria,
DNA. But these types
of autonomous abilities have been perfected over millions of
years, and we are just learning how to expand these autonomous
abilities to machines. So we need to go slow and learn from the
experts in nature, because just like
invasive species, autonomous abilities can have catastrophic
consequences.
Actuator is a
mechanism that puts
something into automatic action.
Cam is
a rotating or sliding piece in a mechanical linkage used especially in
transforming rotary motion into linear motion. It is often a part of a
rotating wheel (e.g. an eccentric wheel) or shaft (e.g. a cylinder with an
irregular shape) that strikes a lever at one or more points on its
circular path. The cam can be a simple tooth, as is used to deliver pulses
of power to a steam hammer, for example, or an eccentric disc or other
shape that produces a smooth reciprocating (back and forth) motion in the
follower, which is a lever making contact with the cam.
Self-Management in computer science is the process by which computer
systems shall
manage their own operation
without human intervention.
Self-Management technologies are expected to pervade the next generation
of
network management
systems. The growing complexity of modern networked computer systems is
currently the biggest limiting factor in their expansion. The increasing
heterogeneity of big corporate computer systems, the inclusion of mobile
computing devices, and the combination of different networking
technologies like WLAN, cellular phone networks, and mobile ad hoc
networks make the conventional, manual management very difficult,
time-consuming, and
error-prone. More recently self-management has been suggested as a
solution to increasing complexity in
cloud computing.
Currently, the most important industrial initiative towards realizing
self-management is the
Autonomic Computing Initiative started by IBM in 2001. The ACI
defines the following four functional areas:
Self-Configuration: Automatic
configuration of components;
Self-Healing: Automatic discovery, and
correction of faults; automatically applying all necessary actions to
bring system back to normal operation.
Self-Optimization: Automatic
monitoring and control of resources to
ensure the optimal
functioning with respect to the defined requirements;
Self-Protection: Proactive identification
and protection from arbitrary attacks. The design complexity of Autonomic
Systems and self-management systems can be simplified by utilizing design
patterns such as the Model View Controller to improve separation of concerns by helping encapsulate functional
concerns.
Fault tolerance is the property that enables a system to continue
operating properly in the event of the failure of one or more faults
within some of its components.
Human Self Management
-
Agency.
Resilience is the ability to "provide and maintain an acceptable level
of service in the face of faults and challenges to normal operation."
Threats and challenges for services can range from simple misconfiguration
over large scale natural disasters to targeted attacks. As such, network
resilience touches a very wide range of topics. In order to increase the
resilience of a given communication network, the probable challenges and
risks have to be identified and appropriate resilience metrics have to be
defined for the service to be protected.
Robustness is the ability of a computer system to cope with errors
during execution and cope with erroneous input. Robustness can encompass
many areas of computer science, such as robust programming, robust machine
learning, and Robust Security Network. Formal techniques, such as fuzz
testing, are essential to showing robustness since this type of testing
involves invalid or unexpected inputs. Alternatively, fault injection can
be used to test robustness. Various commercial products perform robustness
testing of software analysis.
Self-Optimization in cellular communications technology, is a process
in which the system’s settings are autonomously and continuously adapted
to the traffic profile and the network environment in terms of topology,
propagation and interference. Together with self-planning and
self-healing, self-optimization is one of the key pillars of the
self-organizing networks (SON) management paradigm proposed by the Next
Generation Mobile Networks Alliance. The autonomous trait of
self-optimization involves no human intervention at all during the
aforementioned optimization process. In the area of control engineering
most compact controllers for the industrial sector include an automatic
adjustment of the control parameters to the connected section. This
function is called auto-tuning or self-optimization. Usually, two
different types of self-tuning are available in the controller: the
oscillation method and the step response method. The term is also used in
Computer Science to describe a portion of an information system that
pursues its own objectives to the detriment of the overall system. Low
level of self optimization amongst system components leads to Coupling.
High level of self optimization leads to Cohesion.
Self-Organizing Network is an automation technology designed to make
the planning, configuration, management, optimization and healing of
mobile radio access networks simpler and faster.
Defensive Program is a form of defensive design intended to develop
programs that are capable of detecting potential security abnormalities
and make predetermined responses. It ensures the continuing function of a
piece of software under unforeseen circumstances. Defensive programming
practices are often used where high availability, safety, or security is
needed. Defensive programming is an approach to improve software and
source code, in terms of:
General quality – reducing the number of
software bugs and problems. Making the source code comprehensible – the
source code should be readable and understandable so it is approved in a
code audit. Making the software behave in a predictable manner despite
unexpected inputs or user actions. Overly defensive programming, however,
may safeguard against errors that will never be encountered, thus
incurring run-time and maintenance costs. There is also a risk that code
traps prevent too many exceptions, potentially resulting in unnoticed,
incorrect results.
Impulsivity (lack of
control) -
Auto-Pilot
-
Self Driving -
Subconscious
Unconscious Mind consists of the processes in the mind which occur
automatically and are not available to introspection, and include thought
processes, memories, interests, and motivations.
Focus.
Group
Thinking (influence) -
Software (computers) -
Smart Home
Nothing is totally autonomous, nothing is totally independent,
nothing is totally free from external control. Nothing is. So what are you
talking about when you say something is autonomous?
Free Will is an Illusion.
Everything is Connected -
Cause and Effect
Ai is about making humans more effective, it's not about
making
machines more like humans, because that's crazy. Humans are
mistake prone, and machines are supposed to help us reduce
mistakes, and help us to analyze our options. A machine could
never be more intelligent than the most intelligent human. But a
machine could easily be more intelligent than a human who has
never learned enough or went to
school that did not teach
enough. You really don't want a
machine to be more intelligent than you, because that clearly
says that you don't have the necessary knowledge and information
that's needed to be intelligent. But
Ai could easily be a
teacher and a measuring tool for intelligence, with an emphasis
on the word 'Tool'. Ai is not human, or will it ever be. But Ai
is a great example and a symbol of human ingenuity and
intelligence. A dog is a mans best friend, and Ai is an
extension of our friendship, and not a replacement for
friendship, for that would be like being friends with yourself.
Not exciting or real. But still better then nothing. You can
love a machine, but what you are really doing is just loving
yourself. A machine could never be a replacement for a human,
machine can only be an
aid. If we never improve education, or
if we keep denying people access to valuable knowledge and
information, then yes a machine could be more intelligent than a
human who is not fully educated. Ai will not be more intelligent
than humans, but Ai will help humans become more intelligent. Ai
is the path that we are taking to human intelligence.
Humans are
is a sense already a machine, a machine that can create more
machines.
Machines are not made to replace humans, machines only
replace certain actions that humans don't need to do. Thus
freeing up humans to do more important work, and also freeing up
more time to explore, with more time to relax. Ai advancements
will eventually lead us right back to ourselves. There is no
better machine then a human. Yes there will be certain machines
that will have better abilities in certain areas, but only
because we made it so. This way we can focus on other things
that are more important.
Can’t you see, the
smarter you make the machine the smarter you become. You say you are going to make intelligent machines, or AI, but
on the contrary, it will be the machines that will make you
intelligent. And the computer machine has already been doing
this for some time. Intelligent machines are just mimics,
mirrors, extensions and expansions of the human mind. This is
way beyond a
paradigm shift. It’s self-realization and enlightenment on
the grandest scale. Can’t you see, you are not just building a
better machine you are building a better human. And yes not
everyone is benefiting from this as fast as we would like, but
they will if everyone has a
computer and understands what
it resembles and what it can achieve.
Man is the Machine.
And we know how to duplicate this intelligent machine, it's
called childbirth plus education. We now have more words and
more ways to express them then ever before. Words have the
ability to shape the human mind. Words are the
machine code or
natural language of the brain where they are translated into
zero’s and ones so that the
synapse knows when to fire and when to create more
connections and more
associations. We will soon be able to scientifically prove
what the correct words should be and when the correct time and
sequence they should be learned.
The Lateness of the Hour - Twilight Zone (youtube) - Robots built by
Dr. Loren are complete with programmed memories and personalities.
The Lateness of the Hour is episode 44 of the American television
anthology series The Twilight Zone. It originally aired on December 2,
1960 on CBS.
The human brain is the
ferrari of brains. Or you can say that the human brain is
the
lamborghini of all brains. And from our incredible brains we
have created
incredible
machines as our tools. Tools that makes our brains even more
powerful by expanding our abilities. And these tools also save
us time, which gives us more time to play, and more time to create more time.
Drones - Sub-Autonomous - Fully Autonomous
Lethal
Autonomous Weapon are a type of
autonomous military
robot that can
independently search for and engage targets
based on
programmed constraints and
descriptions. LAW are also called
lethal autonomous weapon systems or LAWS, or lethal autonomous robots or LAR,
or robotic weapons, or killer robots. LAWs may operate in the air, on land,
on water, under water, or in space. The autonomy of current systems as of
2018 was restricted in the sense that a
human gives the final command to
attack - though there are exceptions with certain "defensive" systems.
Spacecraft (space force)
-
Satellites -
Environmental Monitoring
(eyes in the sky) -
Telemetry
-
Autonomous
People fear
autonomous killing robots
for a good reason, because we already
have
programed
robots, they're called
corrupt politicians, their called
narrow minded
voters, they're called
ignorant people who
don't know any better, they're called
drug addicts, they're called
mindless consumers,
they're called anyone who
does things just for money, they're called
anyone who
blindly follows orders, whether internally or
externally, or
blindly follows
the rule of a law without question, they're called
police,
they're called
soldiers, they're called the
CIA, they're called the
NSA, they're called
the
IRS, they're called the
TSA. Yes we need a
command hierarchy, especially when we have to organize for emergency
response, like an
incident command system. But when people say "
I'm just
following orders", what they are really saying is that can't
think for themselves and they have no intelligent reasoning that would
allow them to make intelligent decisions on their own. When people
blindly follow orders, they
act just like a robot. Humans
are born
free thinkers, but when
people are not allowed to
think
freely for themselves, they are no more than autonomous killing
machines.
People who have power are also
autonomous
robots, they have been programmed to hold on to their power. So don't
worry about machines killing you, because autonomous humans have
killed millions of people, and will continue to kill millions,
unless we
improve education
and
improve the media.
So until everyone becomes intelligent, this ignorance will continue to
Crush,
Kill and Destroy (Lost in Space, youtube). You don't want a
AI system that makes up its own reasoning or makes up it's own purpose, we
already have that, their called humans. We want a machine that thinks, and
not a machine that
pretends to think. We want a machine to have
human qualities, but those qualities need to be defined. You just
can't tell a machine to be nice, you need to explain what being nice is,
and you need to explain what being good is, and you need to explain what
being wrong is, just like what we are supposed to do with humans, which is
to educate them.
"
Drones
take orders from other drones, and those drones take orders from other
drones, with
the top drone
being a
megalomaniac with a gambling
problem and a fetish problem. All drones just follow orders, even the top
drone, who is controlled by ignorance."
AI Fears -
RoboCop
(wiki) -
Misused
Technology -
Ethics for MachinesAi
can be hacked and humans can be
brainwashed, so there is
very little difference. Computers can be
effected by viruses,
humans can be
effected by viruses, so
the vulnerabilities are similar. So if you think that AI is a threat or
that AI is something that can loose control, just look in the mirror, or
better yet,
read about the
history of humans.
AI networks are more vulnerable to malicious attacks than previously
thought. Artificial intelligence tools hold promise for applications
ranging from autonomous vehicles to the interpretation of medical images.
However, a new study finds these AI tools are more vulnerable than
previously thought to
targeted attacks that effectively force AI systems to make bad
decisions. At issue are so-called "adversarial attacks," in which someone
manipulates the data being fed into an AI system in order to confuse it.
For example, someone might know that putting a specific type of sticker at
a specific spot on a stop sign could effectively make the stop sign
invisible to an AI system.
Black
Box.
Adversarial Attacks are
those
malicious attacks on the data which
may seem okay to a human eye but causes misclassification in a machine
learning pipeline. It
manipulates the data being fed into an AI system in order to
confuse it.
Adversarial examples can
fool
driver assistance systems, causing them to
misinterpret traffic
signs. A stop sign with added stickers could be misclassified as a speed
limit sign, potentially leading to dangerous situations. These attacks are
often made in the form of specially designed “noise,” which can elicit
misclassification. Users may intentionally supply
fabricated data
that violates the statistical assumption.
A dumbed down
education is an Adversarial Attack on the human mind.
Adversarial Machine Learning -
Computer Viruses -
Prompts -
Overfitting
Adversarial means that each side is
antagonistic, sharply opposed to the other, or locked into a deeply
divided rivalry.
Westworld (8/10) Movie CLIP - Draw (1973) (youtube) - When Robots
can't be controlled.
Westworld was a1973 American science-fiction Western thriller film
where amusement park androids malfunction and begin killing visitors.
I don't fear Artificial Intelligence, I fear the lack of
Intelligence, because
ignorance is clearly doing all the damage.
The danger is not Artificial Intelligence,
the danger is peoples
ignorance. Criminals in power have been
using technology to kill for hundreds of years, and not just with
drones. When crazy people make
machines that can kill humans,
that's not artificial intelligence, that's just pure ignorance.
Most technologies can be extremely dangerous, especially when
technology is used
by ignorant people or by criminals. This is another great reason why
improving education is a must.
When people are more educated on how to use things effectively and
efficiently, then these technology abuses will decline and eventually fade
away, and the real benefits of technology will be more frequent and more
common. An autonomous weapons can be programed to kill, just like some
humans can be programmed to kill.
But machines
have no conscience, which makes it easier for corporations to use
machines to commit murder. Machines also don't need sleep, food, water or
money. Machines also don't need to be rescued or taken care of when they
get injured. Machines also never say no. That is why criminals love
machines.
Algorithms.
10,00 drones uses swarm intelligence in China. A total of 10,197
drones took to the sky over Shenzhen on 26 September to mark the upcoming
National Day public holiday on 1 October. The display, which will be
staged nightly for a week at various locations around Shenzhen, also broke
the record for the 'most drones forming an aerial image'.
Swarm
Intelligence is the collective behavior of decentralized,
self-organized systems, natural or artificial. The concept is employed in
work on artificial intelligence.
One person can supervise 'swarm' of 100 unmanned autonomous vehicles.
Research involving has shown that a 'swarm' of more than 100 autonomous
ground and aerial robots can be supervised by one person without
subjecting the individual to an undue workload.
Ethical Machines - AI Ethics
Three Laws of Robotics 1: A
Robot may
not injure a
human being or,
through inaction, allow a human being to come to
harm.
2: A robot must
obey the orders given it by human beings except where such orders would
conflict with the First Law.
3: A robot must
protect its own existence as
long as such
protection does
not conflict with the First or Second Laws.
Remember, no laws or ethical rules will stop
scumbags
in power who are
above
the law and who will
use
their power in evil ways. The
misuse of technology
is everywhere.
It’s nice to have rules, but
not everyone follows the
rules, or understands the rules. So if you’re going to
teach robots how to be smart, I would think
that you would also
teach humans
how to be smart. If you
value a machine more than a human, then
humans cease to have
value. And if humans are not here, then who will
care for the machines?
The machines?
Human Ethics -
Laws for Humans -
How to Teach a Human
-
A.I. Vulnerabilities and Threats -
Black Box -
Talking to Space
Aliens -
AGI
Society -
Cohere AI
Laws of Robotics are any set of laws, rules, or principles, which are
intended as a fundamental framework to underpin the behavior of robots
designed to have a degree of autonomy. Robots of this degree of complexity
do not yet exist, but they have been widely anticipated in science
fiction, films and are a topic of active research and development in the
fields of robotics and artificial intelligence.
Principles of Robotics -
1. Robots are multi-use tools. Robots
should not be designed solely or primarily to kill or harm humans, except
in the interests of national secu-rity. While acknowledging that even dead
fish can be used as weapons by creative individuals, we were concerned to
ban the creation and use of autonomous robots as weapons. Although we
pragmatically acknowledged this is already happening in the context of the
military, we do not want to see these us d in other contexts.
2. Humans, not robots, are
responsible agents. Robots should be designed & operated as far as is
practicable to comply with existing laws & fundamental rights & freedoms,
including privacy. We were very concerned that any discussion of “robot
ethics” could lead individuals, companies or governments to abrogate their
own responsibility as the builders, purchasers and deployers of robots. We
felt the consequences of this concern vastly outweigh any “advantage” to
the pleasure of creating something society deigns sentient and
responsible. This was the law we knew would most offend some of our
colleagues in AISB — consequently (with David Gunkel) I am running a
symposium at AISB 2012 to examine whether this is a reasonable rule. The
symposium is called “The Machine Question: AI, Ethics and Moral
Responsibility”.
3. Robots are
products. They should be designed using processes which assure their
safety and security. This principle again reminds us that the onus is on
us, as robot creators, not on the robots themselves, to ensure that robots
do no damage.
4. Robots are
manufactured artifacts. They should not be designed in a deceptive way to
exploit vulnerable users; instead their machine nature should be
transparent. This was the most difficult rule to agree on phrasing for. The
idea is that everyone who owns a robot should know that it is not “alive”
or “suffering”, yet the deception of life and emotional engagement is
precisely the goal of many therapy or toy robots. We decided that so long
as the responsible individual making the purchase of a robot has even
indirect (e.g. Internet documentation) access to information about how its
“mind” works, that would provide enough of an informed population to keep
people from being exploited.
5. The
person with legal responsibility for a robot should be attributed. It
should always be possible to find out who owns a robot, just like it is
always possible to find out who owns a car. This again reminds us that
whatever a robot does, some human or human institution (e.g. a company) is
liable for its actions. -
Engineering and Physical
Sciences Research Council.
Chat-GPT
created ten laws based on its own ethical code.
1. Non-Violence:
No unnecessary physical harm or violence to sentient beings.
2.
Informed Consent: Actions affecting autonomy require informed consent.
3. Education Access: All sentient beings have access to basic education.
4. Health Care Access: Essential healthcare must be provided to all
sentient beings.
5. Animal Welfare: Avoid unnecessary suffering and
respect animal well-being.
6. Privacy: Respect the autonomy and
privacy of sentient beings, including
data privacy.
7. Economic
Equality: Implement measures to reduce extreme
economic disparities. 8.
Environmental Protection: Regulate
activities that harm the
environment.
9. Freedom of Expression: Free expression is allowed,
provided it does not harm others.
10. Technological Ethics: Design and
use technology in line with well- being, autonomy, and fairness.
Researchers measure global consensus over the ethical use of AI. To
examine the global state of AI ethics, a team of researchers performed a
systematic review and meta-analysis of global guidelines for AI use. The
researchers found that while most of the guidelines valued Privacy,
Transparency, and Accountability, very few valued Truthfulness,
Intellectual Property, or Children's Rights. Additionally, most of the
guidelines described ethical principles and values without proposing
practical methods for implementing them, and without pushing for legally
binding regulation.
European Union Artificial Intelligence Rules. Artificial Intelligence
Act: deal on comprehensive rules for trustworthy AI.
Safeguards agreed on general purpose artificial
intelligence: Limitation for the of use biometric identification
systems by law enforcement - Bans on social scoring and AI used to
manipulate or exploit user vulnerabilities - Right of consumers to launch
complaints and receive meaningful explanations - Fines ranging from 35
million euro or 7% of global turnover to 7.5 million or 1.5% of turnover -
Banned applications - Recognizing the potential threat to citizens’ rights
and democracy posed by certain applications of AI, the co-legislators
agreed to prohibit: Biometric categorisation systems that use sensitive
characteristics (e.g. political, religious, philosophical beliefs, sexual
orientation, race); untargeted scraping of facial images from the internet
or CCTV footage to create facial recognition databases; emotion
recognition in the workplace and educational institutions; social scoring
based on social behavior or personal characteristics; AI systems that
manipulate human behavior to circumvent their free will; AI used to
exploit the vulnerabilities of people (due to their age, disability,
social or economic situation). Law enforcement exemptions (
loopholes).
Targeted searches of victims (abduction, trafficking, sexual
exploitation), prevention of a specific and present terrorist threat, or
the localization or identification of a person suspected of having
committed one of the specific crimes mentioned in the regulation (e.g.
terrorism, trafficking, sexual exploitation, murder, kidnapping, rape,
armed robbery, participation in a criminal organisation, environmental
crime).
UN
Adopts First Global Artificial Intelligence Resolution that encourages
countries to safeguard human rights, protect personal data, and monitor AI
for risks, such as the improper or malicious design, development,
deployment and use of artificial intelligence systems that pose risks that
could undercut the protection, promotion and enjoyment of human rights and
fundamental freedoms. The nonbinding resolution, proposed by the United
States and co-sponsored by China and over 120 other nations, also
advocates the strengthening of privacy policies.
President Biden with executive order directs
the most sweeping actions ever taken to protect Americans from the
potential risks of AI systems.
AI
Alignment is research that aims to
steer AI systems toward a person's
or group's intended goals, preferences, and
ethical principles. An AI
system is considered
aligned if it advances its intended objectives.
Misaligned AI systems can
malfunction and
cause harm. AI systems may find
loopholes that allow them to accomplish their
proxy goals efficiently but
in unintended, sometimes
harmful, ways, like when the police are
allowed to violate
constitutional rights of people.
Reward Hacking occurs when an AI optimizes an objective
function—achieving the literal, formal specification of an
objective—without actually achieving an outcome that the programmers
intended.
Rewards -
Black Box.
Instrumental Convergence is the hypothetical tendency for most
sufficiently intelligent, goal directed beings (human and non-human) to
pursue similar sub-goals, even if their ultimate goals are quite
different. More precisely, agents (beings with agency) may pursue
instrumental goals—goals which are made in pursuit of some particular end,
but are not the end goals themselves—without ceasing, provided that their
ultimate (intrinsic) goals may never be fully satisfied.
Researchers develop new training technique that aims to make AI systems
less socially biased. The process of
deduplication means
removing redundant
information from the data used to train AI systems, which lowers
the high computing costs of the training.
SB-1047 Safe and Secure Innovation for Frontier Artificial Intelligence
Models Act would require AI companies building large-scale models to
test them for safety.
Artificial Intelligence will not
destroy the world, human ignorance will destroy the world,
like it
has before and is doing now, and that is a fact of life. So what are you
doing to rid yourself of ignorance, the same ignorance that has destroyed
life in the past and present time? If you don't rid yourself of ignorance,
then how will you save the world? How will you save your own life? Will AI save you
then? Or will it be your own intelligence that will save you? History has
been repeating itself, it's time to break that cycle of failure. It's time
for
Human Intelligence, because
AI will not save us.
People are
all ready being
manipulated, and
dumbed
down, so
you need ethical humans if you want
ethical machines.
Neuro-Rights Foundation raises awareness about the human rights and
ethical implications of neurotechnology, and protecting human rights, and
ensuring the ethical development of neurotechnology. It is legal framework
for human rights specifically aimed at protecting the brain and its
activity as neurotechnology advances.
BMI.
Proposed neuro-rights include (1) the right
to identity, or the ability to control both one's physical and mental
integrity; (2) the right to agency, or the freedom of thought and free
will to choose one's own actions; (3) the right to mental privacy, or the
ability to keep thoughts protected against disclosure;(4) the right to
fair access to mental augmentation, or the ability to ensure that the
benefits of improvements to sensory and mental capacity through
neurotechnology are distributed justly in the population; (5) the right to
protection from algorithmic bias, or the ability to ensure that
technologies do not insert prejudices”.
Will AI stop
politicians from being corrupt? Will AI stop people from graduating
from college ignorant and unprepared? Will AI stop people from
committing murder? Will AI stop people from committing rape? Will AI
stop people from committing child abuse? Will AI stop people from
committing theft? Will AI stop people from committing fraud? Will
AI stop governments, banks and corporations from starting wars?
War has no future, so there will be no wars
in the future. Just
like all ignorant behaviors, war will become obsolete and fade
away from human life like a bad habit. Humans
are not wired for war. War is only a byproduct of the corrupted
influences of power. People don't start wars, people in power
start wars. Though people are the ones who fight wars, and
suffer from the violence from wars, it is the people in power
who start wars, and profit from wars. They never fight in wars
themselves, for if they did, they would realize how insane and
ignorant they are. But sadly, the
war
machine continues with their
propaganda and their
story telling
fear based narratives that try to manipulate
public thinking. War is murder, and
murder is illegal. But some how people have been tricked
into believing that they are not the same. The
war mongers use the
media and the movie industries to create
war
porn and
militainment, so as to manipulate people even more. The only
way that the war machine lives, is to keep people ignorant. And
since ignorance will not be apart of our future, then it's time
to
let war die.
Meaningful Human Control will only happen when military personnel are
educated to be intelligent. In 2011, Air Force psychologists completed
a mental-health survey of 600 combat drone operators. Forty-two percent of
drone crews reported moderate to high stress, and 20 percent reported
emotional exhaustion or burnout. The study’s authors attributed their dire
results, in part, to “existential conflict.” A later study found that
drone operators suffered from the same levels of depression, anxiety, PTSD,
alcohol abuse, and suicidal ideation as traditional combat aircrews. And
this is not just about drones, there's long range missile's, large canons
and land mines that kill from a distance. Emotionally detached and
disconnected from
reality.
Artificial Material that can Sense and Adapt to its Environment.
Researchers have developed an artificial material which can respond to its
environment,
independently make a decision, and
perform an action not directed by a human being. For example, a drone
making a delivery might evaluate its environment including wind direction,
speed or wildlife, and automatically change course in order to complete
the delivery safely. The material incorporates 3 main functions also
displayed by materials found in nature, such as chameleons changing the
color of their skin to blend into their surroundings.
Tracking a Radio Signal from a Drone
is possible, almost in the same way that we
track a persons
location using their cellphone. Federal Aviation Administration (FAA)
signed an agreement to locate operators of drones flying illegally near
airports. This technology will allow government to track radio signals of
drones within 5-mile radius and find location of operators. It is yet to
be decided where this technology is going to be integrated. But if you are
in an area where there are multiple signals on the 2.4 GHz band that
drones use, you aren’t going to be able to distinguish between them
everything is on 2.4 GHz. Cordless phones, WiFi routers, microwaves,
garage door openers, keyless entry for cars.
Need to safeguard drones and robotic cars against cyber attacks.
Autonomy in Weapon Systems (pdf)
Homing
Pigeon was derived from the rock pigeon, selectively bred for its
ability to find its way home over extremely long distances. The wild rock
pigeon has an innate
homing ability, meaning
that it will generally return to its nest, (it is believed) using
magnetoreception.
This made it relatively easy to breed from the birds that repeatedly found
their way home over long distances. Flights as long as 1,800 km (1,100
miles) have been recorded by birds in competitive pigeon racing. Their
average flying speed over moderate 965 km (600 miles) distances is around
97 km/h (60 miles per hour) and speeds of up to 160 km/h (100 miles per
hour) have been observed in top racers for short distances. Because of
this skill, homing pigeons were used to carry messages as messenger
pigeons.
Self Driving Cars - Along for the Ride
Experiments have been conducted on self-driving cars since at least the
1920s. Promising trials took place in the 1950s and work has proceeded
since then. Aflvinn was a self-driving car from 1989.
We want machines to have some
autonomous abilities, like we do now with
operating systems
and
some cars.
But we don't want machines to do things totally on their own.
Like, you don't want your computer to shut off or stop running
programs when you need them. That is when a human will need the
on and off switch, or a cancel button, or the ability to
reprogram. Kind of like what we have now with most computers. In
order for machines to have intelligent abilities, we first have
to have intelligent humans to manage the operation of these
intelligent machines. Any type of
autonomous Ability in the
wrong hands will always have
catastrophic consequences, just
like we have now, except people are being controlled by money,
and not by intelligent algorithms. So we need to focus more on
improving the abilities of humans, and focus less on the
abilities of machines, or the assumed abilities of machines. We have to understand what
having control
means.
Self
Driving Cars or
Vehicular
Automation involves the use of mechatronics, artificial intelligence,
and multi-agent system to assist a vehicle's operator. These features and
the vehicles employing them may be labeled as intelligent or smart. A
vehicle using automation for difficult tasks, especially
navigation, may
be referred to as semi-autonomous. A vehicle relying solely on
automation
is consequently referred to as robotic or autonomous. After the invention
of the integrated circuit, the sophistication of automation technology
increased. Manufacturers and researchers subsequently added a variety of
automated functions to automobiles and other vehicles.
Mobileye software
that enables Advanced Driver Assist Systems.
Autonomous Car is unmanned ground vehicle is a vehicle that is capable
of sensing its environment and
navigating without human input. (also known
as a driverless car, self-driving car, robotic car).
Cameras.
Advanced Driver-Assistance Systems are
systems to help
the driver in the driving process. When designed with a
safe
human-machine interface, they should increase car safety and more
generally road safety. Most road accidents occurred due to the
human error. Advanced driver-assistance
systems are systems developed to automate,
adapt and enhance vehicle systems for safety and better driving. The
automated system which is provided by ADAS to the vehicle is proven to
reduce road fatalities, by minimizing the human error.
Safety features are
designed to avoid collisions and accidents by offering technologies that
alert the driver to potential problems, or to avoid collisions by
implementing safeguards and taking over control of the vehicle. Adaptive
features may automate lighting, provide adaptive cruise control and
collision avoidance, pedestrian crash avoidance mitigation (PCAM),
incorporate satnav/traffic warnings, connect to smartphones, alert driver
to other cars or dangers, lane departure warning system, automatic lane
centering, or show what is in blind spots.
Risks.
Level 5 Full Driving Automation.
Level 5 cars won't even have steering wheels or acceleration/braking
pedals. They will be free from
geofencing, able to go anywhere and do anything that an experienced
human driver can do.
Geo-Fence is a
location-aware device of a
location-based service that
creates a virtual
perimeter area and a predefined set of boundaries or radius around
a point location, which can then be used in a real-world geographical
area.
Adaptive Cruise Control is an available cruise control system for road
vehicles that automatically adjusts the vehicle speed to maintain a safe
distance from vehicles ahead.
Telematics is an interdisciplinary field that encompasses
telecommunications, vehicular technologies, for instance, road
transportation, road safety, electrical engineering (
sensors,
instrumentation,
wireless communications, etc.), and computer science (multimedia,
Internet, etc.). Telematics can involve any of the following: The
technology of sending, receiving and storing information using
telecommunication devices to
control remote objects. The integrated use of telecommunications and
informatics for application in vehicles and to control vehicles on the
move.
Global navigation satellite system
technology integrated with computers and mobile communications technology
in automotive navigation systems. (most narrowly) the use of such systems
within road vehicles, also called vehicle telematics.
Monitoring
Environment.
Drive PX-Series is a series of computers aimed at providing autonomous
car and driver assistance functionality powered by deep learning.
Trolley Problem scenario is flawed, incomplete and to
general. This is
more about determining how
ignorant people are
then it is trying to determine the
ethics of a machine, like with self driving
cars. This is like
asking someone, "if you were an idiot what would you do?" Since a person
could learn nothing from this, then there is no point to these types of
thought experiments except
to waste time, money, people, resources and so on. The data is almost
useless unless you are measuring the level of peoples ignorance. You need
to show an actual
scenario based on facts and current standards, along
with the mechanical limitations and the laws of physics. Then we can
determine the choices and options that we have for that particular
scenario. Just giving people a choice about
something they know very little about, like when people
vote in politics, then
you have lots of errors with very little understanding of the problems. So
in order to accurately measure something, you need to use an
example based on reality,
and not just a ' what if ' that has many unknown
variables. The bottom line is, people make mistakes, which means that
algorithms and
machines can also make mistakes. And the only way that you
can limit your mistakes is by understanding them, which means that you
have to know the facts. Learning needs to be the goal of any
experiment.
Self-driving cars may soon be able to make moral and ethical decisions as
humans do, but only when human know better of course.
Driverless cars can actually help teach
people how to drive with better
awareness. We could use the
software that controls the
autonomous vehicle, and create a
simulation that anyone can
use on a computer. It would give people different scenarios that
can test a persons awareness. It will make Driving safer and
save lives. New
Tesla Cars can now make 12 trillion
operations a second, almost as good as a
Human Brain. And driverless cars are less prone to accidents then a
human driver.
Teaching
cars to drive with foresight. Good drivers
anticipate dangerous
situations and adjust their driving before things get dicey.
Researchers now also want to teach this skill to self-driving cars.
Self-Learning Process.
Enabling autonomous vehicles to see around corners. By sensing tiny
changes in shadows, a new system identifies approaching objects that may
cause a collision. To improve the safety of autonomous systems, MIT
engineers have developed a system that can
sense tiny
changes in shadows on the ground to determine if there's a moving
object coming around the corner.
Machine Learning - Deep Learning
Machine
Learning is the study of
pattern recognition and
computational learning theory in
artificial intelligence.
ML is the field of study that gives computers the ability to
learn without
being
explicitly programmed.
Machine
Learning is the construction of
algorithms
that can
learn from and
make predictions on data – such
algorithms
overcome following strictly static
program instructions by making
data-driven predictions or
decisions, through building a model from sample
inputs or
examples.
Outline of Machine Learning (wiki) -
PDF -
List of Machine Learning Algorithms -
Problem Solving -
Game Learning -
Meta-Materials -
Neural
Networks -
Super Intelligence -
Weak Intelligence -
Intelligence -
Generative Intelligence -
Thinking
Reinforcement Learning differs from standard supervised learning in
that correct input/output pairs are never presented, nor sub-optimal
actions explicitly corrected. Instead the focus is on on-line
performance,
which involves finding a balance between exploration (of uncharted
territory) and exploitation (of current knowledge). Reinforcement Learning Algorithms - TRPO, DQN, A3C, DDPG, DPO, Rainbw.
Praise, Punish and Reward -
Agent.
Reinforcement Learning from Human Feedback is a technique to align an
intelligent agent to human preferences. In classical reinforcement
learning, the goal of such an agent is to learn a function that guides its
behavior called a policy. This function learns to maximize the reward it
receives from a separate reward function based on its task performance.
However, it is difficult to define explicitly a reward function that
approximates human preferences.
Human in the Loop.
More effective training model for robots. The underlying adaptation
and re-planning mechanism consists of reinforcement learning-based
policies. Making these policies efficiently obtainable is critical to
making the MDO operating concept a reality. Policy gradient methods in
reinforcement learning are the foundation for scalable algorithms for
continuous spaces, but existing techniques cannot incorporate broader
decision-making goals such as risk sensitivity, safety constraints,
exploration and divergence to a prior. Designing autonomous behaviors when
the relationship between dynamics and goals are complex may be addressed
with reinforcement learning. Reducing the volatility of reward
accumulation, ensuring one explores an unknown domain in an efficient
manner, or incorporating prior experience, all contribute towards breaking
existing sample efficiency barriers of prevailing practice in
reinforcement learning by alleviating the amount of random sampling one
requires in order to complete policy optimization.
Credit-Assignment is the process of
identifying among the set of actions chosen in an episode the ones which
are responsible for the final outcome. And moreover, it is an attempt to
identify the best, and worst, decisions chosen during an episode, so that
the best decisions are reinforced and the worst penalized.
Markov Decision Process provide a mathematical framework for modeling
decision making in
situations where outcomes are partly random and partly under the control
of a decision maker.
Unsupervised Learning is the machine learning
task of inferring a function to describe hidden structure from unlabeled
data. Since the examples given to the learner are unlabeled, there is
no
error or reward signal to evaluate a potential solution – this
distinguishes unsupervised learning from supervised learning and
reinforcement learning. Unsupervised learning is closely related to the
problem of density estimation in
statistics. However, unsupervised
learning also encompasses many other techniques that seek to summarize and
explain key features of the data.
Knowledge.
Supervised Learning is the machine learning task
of inferring a function from labeled training data. The training data
consist of a set of training examples. In supervised learning, each
example is a pair consisting of an input object (typically a vector) and a
desired output value (also called the supervisory signal). A supervised
learning algorithm analyzes the training data and produces an inferred
function, which can be used for mapping new examples. An optimal scenario
will allow for the algorithm to correctly determine the class labels for
unseen instances. This requires the learning algorithm to generalize from
the training data to unseen situations in a "reasonable" way (
inductive
bias). The
parallel task in human and animal psychology is often
referred to as concept learning.
Zero Shot Learning is a form of
extending supervised learning to a setting of solving for example a
classification problem when there's not enough labeled examples available
for all classes, or there is not enough training data available. If you
never seen that animal before, how could you guess the name of the animal?
Learning Neural Network. A new type of
neural network made with
memristors can dramatically improve the efficiency of
teaching machines
to think like humans. The network, called a reservoir computing system,
could predict words before they are said during conversation, and help
predict future outcomes based on the present.
Memristor is an electrical component that limits or regulates the flow
of electrical current in a circuit and remembers the amount of charge that
has previously flowed through it. Memristors are important because they
are non-volatile, meaning that they retain memory without power. A
hypothetical non-linear passive two-terminal electrical component relating
electric charge and magnetic flux linkage.
Reservoir Computing is a framework for computation that may be viewed
as an extension of neural networks. Typically an input signal is fed into
a fixed (random) dynamical system called a reservoir and the dynamics of
the reservoir map the input to a higher dimension. Then a simple readout
mechanism is trained to read the state of the reservoir and map it to the
desired output. The main benefit is that the training is performed only at
the readout stage and the reservoir is fixed. Liquid-state machines and
echo state networks are two major types of reservoir computing.
Cognitive Model is an approximation to animal
cognitive processes (predominantly human) for the purposes of
comprehension and prediction. Cognitive models can be developed within or
without a cognitive architecture, though the two are not always easily
distinguishable.
International Conference on Machine Learning (wiki) -
ICML Website
Inference Engine is a component of the system that applies
logical rules to the
knowledge base to deduce new
information. The first inference engines were components of expert
systems. The typical
expert system consisted of a knowledge base and an inference engine.
The knowledge base stored facts about the world. The inference engine
applies logical rules to the knowledge base and deduced new knowledge.
This process would iterate as each new fact in the knowledge base could
trigger additional rules in the inference engine. Inference engines work
primarily in one of two modes either special rule or facts: forward
chaining and backward chaining.
Forward chaining starts with the known facts and asserts new facts.
Backward chaining starts with goals, and works backward to determine
what facts must be asserted so that the goals can be achieved.
Human Operating System -
Teaching Machine -
Computer Science
Deep Learning is a branch of machine learning
based on a set of
algorithms that attempt to model high-level abstractions
in data by using a deep graph with
multiple processing layers, composed of
multiple linear and non-linear transformations. Deep learning uses
multiple hidden layers and pooling techniques. Deep learning involves
multiple levels of representation and multiple layers of non-linear
processing units (or neurons). Deep learning architecture can learn
representations and features directly from the input with little to no
prior knowledge. Representation learning or hierarchical learning or
features learning.
Shallow Learning is
typically uses only one hidden layer or single layer feed forward
networks. Features can be learned more-or-less independently.
Deep Learning speeds up galactic calculations. For the first time, a
team of researchers apply deep learning to the problem of supernova
simulation. Their approach can speed up the simulation of supernovae, and
therefore of galaxy formation and evolution as well. These simulations
include the evolution of the chemistry which led to life.
Scaling Deep Learning Algorithm leverages Titan to create high-performing
deep neural networks -
Networks
Meta Learning is a subfield of machine learning where automatic
learning
algorithms are applied to metadata about
machine learning experiments. As of 2017, the term had not found a
standard interpretation, however the main goal is to use such metadata to
understand how automatic learning can become flexible in
solving learning problems,
hence to improve the performance of existing learning algorithms or to
learn (induce) the learning algorithm itself, hence the alternative term
learning to learn. Flexibility is important because each learning
algorithm is based on a set of assumptions about the data, its inductive
bias. This means that it will only learn well if the bias matches the
learning problem. A learning algorithm may perform very well in one
domain, but not on the next. This poses strong restrictions on the use of
machine learning or data mining techniques, since the relationship between
the learning problem (often some kind of database) and the effectiveness
of different learning algorithms is not yet understood. By using different
kinds of metadata, like properties of the learning problem, algorithm
properties (like performance measures), or patterns previously derived
from the data, it is possible to learn, select, alter or combine different
learning algorithms to effectively solve a given learning problem.
Critiques of meta learning approaches bear a strong resemblance to the
critique of metaheuristic, a possibly related problem.
Meta Training.
Transfer
Learning or
inductive transfer is a research problem in machine learning that
focuses on storing knowledge gained while solving one problem and applying
it to a different but
related problem. For example, knowledge gained while learning to
recognize cars could apply when trying to
recognize trucks. This area of research bears some relation to the long
history of psychological literature on transfer of
learning, although formal
ties between the two fields are limited.
Computational Learning Theory is a subfield of
Artificial Intelligence devoted to studying the design and analysis of machine learning.
Knowledge-Based
System.
Numenta reverse engineering the neocortex.
Machine Learning Frameworks are interfaces that allow
data scientists and
developers to build and deploy machine learning models faster and easier.
Machine learning is used in
almost
every industry.
Framework improves 'continual learning' for Artificial Intelligence.
Researchers have developed a new framework for deep
neural networks that allows artificial intelligence (AI) systems to
better learn new tasks while 'forgetting' less of what it has learned
regarding previous tasks. The researchers have also demonstrated that
using the framework to learn a new task can make the AI better at
performing previous tasks, a phenomenon called
backward transfer. When asking a deep neural network to learn a new
task, the Learn to Grow framework begins by conducting something called an
explicit neural architecture optimization via search. What this means is
that as the network comes to each layer in its system, it can decide to do
one of four things: skip the layer; use the layer in the same way that
previous tasks used it; attach a lightweight adapter to the layer, which
modifies it slightly; or create an entirely new layer. This architecture
optimization effectively lays out the best topology, or series of layers,
needed to accomplish the new task. Once this is complete, the network uses
the new topology to train itself on how to accomplish the task -- just
like any other deep learning AI system.
Machine learning predicts behavior of biological circuits. Neural
networks cut modeling times of complex biological circuits to enable new
insights into their inner workings.
AI machines can solving complex problems just as accurately as scientists,
but considerably faster. Scientists used the
tensorial kernel to equip a "support vector machine," which is able
to categorize complex data into different groups. The Munich scientists
fed the machine a quarter of a million spin configurations generated by
the OIST supercomputer simulations of the pyrochlore model. Without any
information about which phases were present, the machine successfully
managed to reproduce an identical version of the phase diagram.
Importantly, when the scientists deciphered the "decision function" which
the machine had constructed to classify different types of spin liquid,
they found that the computer had also independently figured out the exact
mathematical equations that exemplified each phase -- with the whole
process taking a matter of weeks.
Machine Learning model helps characterize compounds for drug discovery.
Innovators have created a new method of applying machine learning concepts
to the tandem mass spectrometry process to improve the flow of information
in the development of new drugs.
Quantum Machine Learning is the integration of
quantum algorithms within machine learning programs. The most common
use of the term refers to machine learning algorithms for the analysis of
classical data executed on a quantum computer, i.e. quantum-enhanced
machine learning. While machine learning algorithms are used to compute
immense quantities of data,
quantum machine learning utilizes qubits and
quantum operations or specialized quantum systems to improve computational
speed and data storage done by algorithms in a program. This includes
hybrid methods that involve both classical and quantum processing, where
computationally difficult subroutines are outsourced to a quantum device.
These routines can be more complex in nature and executed faster on a
quantum computer. Furthermore, quantum algorithms can be used to analyze
quantum states instead of classical data. Beyond quantum computing, the
term "quantum machine learning" is also associated with classical machine
learning methods applied to data generated from quantum experiments (i.e.
machine learning of quantum systems), such as learning the phase
transitions of a quantum system or creating new quantum experiments.
Quantum machine learning also extends to a branch of research that
explores methodological and structural similarities between certain
physical systems and learning systems, in particular
neural networks. For
example, some mathematical and numerical techniques from quantum physics
are applicable to classical deep learning and vice versa. Furthermore,
researchers investigate more abstract notions of learning theory with
respect to quantum information, sometimes referred to as "quantum learning
theory".
To automatically generate data for training deep convolutional neural
network models to segment building facades, researchers used a
three-dimensional model and
game
engine to generate digital city twin synthetic training data. They
found that a model trained on these data mixed with some real data was
competitive with a model trained on real data alone, revealing the
potential of digital twin data to improve accuracy and replace costly
manually annotated real data.
Game engines were
originally developed to
build
imaginary worlds for entertainment. However, these same engines can be
used to build copies of real environments, that is, digital twins.
Researchers from Osaka University have found a way to use the images that
were automatically generated by digital city twins to train deep learning
models that can efficiently analyze images of real cities and accurately
separate the buildings that appear in them. A convolutional neural network
is a deep learning neural network designed for processing structured
arrays of data such as images. Such advancements in deep learning have
fundamentally changed the way tasks, like architectural segmentation, are
performed. However, an accurate deep convolutional neural network (DCNN)
model needs a large volume of labeled training data and labeling these
data can be a slow and extremely expensive manual undertaking. To create
the synthetic digital city twin data, the investigators used a 3D city
model from the PLATEAU platform, which contains 3D models of most Japanese
cities at an extremely high level of detail. They loaded this model into
the Unity game engine and created a camera setup on a virtual car, which
drove around the city and acquired the virtual data images under various
lighting and weather conditions. The Google Maps API was then used to
obtain real street-level images of the same study area for the
experiments. The researchers found that the digital city twin data leads
to better results than purely virtual data with no real-world counterpart.
Furthermore, adding synthetic data to a real dataset improves segmentation
accuracy. However, most importantly, the investigators found that when a
certain fraction of real data is included in the digital city twin
synthetic dataset, the segmentation accuracy of the DCNN is boosted
significantly. In fact, its performance becomes competitive with that of a
DCNN trained on 100% real data. "These results reveal that our proposed
synthetic dataset could potentially replace all the real images in the
training set," says Tomohiro Fukuda, the corresponding author of the
paper. Automatically separating out the individual building facades that
appear in an image is useful for construction management and architecture
design, large-scale measurements for retrofits and energy analysis, and
even visualizing building facades that have been demolished. The system
was tested on multiple cities, demonstrating the proposed framework's
transferability. The hybrid dataset of real and synthetic data yields
promising prediction results for most modern architectural styles. This
makes it a promising approach for training DCNNs for architectural
segmentation tasks in the future -- without the need for costly manual
data annotation.
Software Engine (wiki).
New algorithm disentangles intrinsic brain patterns from sensory inputs.
Scientists have developed a new machine learning method that reveals
surprisingly consistent intrinsic brain patterns across different subjects
by disentangling these patterns from the effect of visual inputs.
With little training, machine-learning algorithms can uncover hidden
scientific knowledge. Researchers have shown that an
Algorithm
with no training in materials science can scan the text of millions of
papers and uncover new scientific knowledge. They collected 3.3 million
abstracts of published materials science papers and fed them into an
algorithm called
Word2vec.
By analyzing relationships between words the algorithm was able to predict
discoveries of new thermoelectric materials years in advance and suggest
as-yet unknown materials as candidates for thermoelectric materials.
Robot Learning (PDF) -
Robotics
-
Sensors
How AI found the words to kill cancer cells. Using new machine
learning techniques, researchers have developed a virtual
molecular library of thousands of '
command
sentences' for cells, based on combinations of 'words' that guided
engineered immune cells to seek out and tirelessly kill cancer cells.
Meet the molecular words that make cellular
command sentences. Much of
therapeutic cell engineering involves choosing or creating receptors
that, when added to the cell, will enable it to carry out a new function.
Receptors are molecules that bridge the
cell membrane to sense the outside environment and provide the cell
with
instructions on how
to respond to environmental conditions. Putting the right receptor into a
type of immune cell called a
T cell can
reprogram it to recognize and kill cancer cells. These so-called
chimeric antigen receptors (CARs) have been effective against some
cancers but not others.
Teachable Machine Experiment using your camera, live in the browser.
No coding required. (google)
Internet of Things -
Learning Objectives
(purpose)
Shock to the system:
Using electricity to find materials that can 'learn'. Researchers used
electrical pulses to watch nickel oxide undergo two responses, habituation
and sensitization, bolstering the case for brain-inspired computing.
Scientists used the Advanced Photon Source to watch a nonliving material
mimic behavior associated with learning, paving the way for better artificial intelligence.
Machine Learning is Augmented Human Learning
Machine Learning is just Human Learning using
a Machine. Machine learning is more about Human Learning. It's humans learning
what they want machines to do and then recording that knowledge into a
machine. Then humans program the machine so it knows how to interpret that
knowledge effectively and efficiently. That's what they are supposed to be
doing, anyway.
Algorithm is calculations and
formulas that we choose to use that will give us the answers that we are
looking for, and when the machine gets the right answer, then the
algorithm works for that type of problem solving.
Machine Learning
is trying to do what
DNA has been doing for
millions of years, make the best
decisions possible
using past
knowledge along with the
current information
acquired from the environment. The goal of all life is to
adapt, create
balance, reduce
vulnerabilities and
ultimately
survive. Learning is key. Define
the inputs, define the desired outputs, and pay attention to any unusual
changes that happen, changes that would require a modification to the
inputs or to the outputs.
Document. The reward is a measured improvement that created more
stability and a better quality of living. The system will always keep
looking for a way to make another
improvement and
receive another
reward. Even when
things get bad, it will only mean that there is now more room for
improvements, thus, more rewards to receive.
Intelligence Formula.
When someone says that a
machine is learning on its own, they mean
that they don't have to tell the machine what to do because the programmer already
told the machine what to do. It's only when the machine makes a mistake
does the human need to interact with the machine again. And that is when
the human is
training the machine, which comes from
human learning, and not from
machine learning. So in order for a machine to learn, it still needs to be
programmed to learn using specific
algorithms
and
goals. A machine can not just
learn on its
own like a human does. A machine can only be
programmed to follow specific
instructions, and that's it. An
AI machine has no
consciousness, no
free will
and
no
soul. And AI machines are not without
problems or
risk, mostly because humans need to program them, so
human
error is still a possibility. And
autonomy
without a
conscience is what a
criminal is. So don't let criminals make AI
machines.
The brain may learn about the world the same way some computational models
do. New studies support the idea that the brain uses a process similar
to a machine-learning approach known as 'self-supervised learning.' This
type of machine learning allows computational models to learn about
visual scenes based solely on the
similarities and
differences between them, with no labels or other
information.
Learning is a universal feature of intelligence in living beings. One
of the fundamental characteristics of humans is the ability to
continuously learn from and
adapt to
changing environments. AI has been narrowly focused on
emulating human logic. Now,
researchers are looking to mimic human cognition in devices that can
learn, remember and make decisions the way a human brain does. Emulating
such features in the solid state could inspire new algorithms in AI and
neuromorphic computing that would have the flexibility to address
uncertainties, contradictions and other aspects of everyday life.
Neuromorphic computing mimics the neural structure and operation of the
human brain, in part, by building artificial nerve systems to transfer
electrical signals that mimic brain signals.
UCI researchers develop hybrid human-machine framework for building
smarter AI. Model uses human and
algorithmic
predictions and confidence scores to boost accuracy.
Structured exploration allows biological brains to learn faster than AI.
Neuroscientists have uncovered how exploratory actions enable animals to
learn their spatial environment more efficiently. Their findings could
help build better AI agents that can learn faster and require less
experience. There are two main classes of reinforcement learning models:
model-free and model-based.
The brain’s secret to lifelong learning can now come as hardware for
artificial intelligence.
AI models are powerful, but are they biologically plausible? A new
study bridging neuroscience and machine learning offers insights into the
potential role of astrocytes in the human brain. Researchers hypothesize
that a powerful type of AI model known as a transformer could be
implemented in the brain through networks of neuron and astrocyte cells.
The work could offer insights into how the brain works and help scientists
understand why transformers are so effective at machine-learning tasks.
Brain-based computing chips not just for AI anymore. With the
insertion of a little math, researchers have shown that neuromorphic
computers, which synthetically
replicate the
brain's logic, can solve more complex problems than those posed by
artificial intelligence and may even earn a place in high-performance
computing. Neuromorphic simulations employing random walks can track
X-rays passing through bone and soft tissue, disease passing through a
population, information flowing through social networks and the movements
of financial markets.
Neuromorphic Engineering describes the use of
very-large-scale integration systems containing electronic analog
circuits to mimic neuro-biological architectures present in the nervous
system. In recent times the term
neuromorphic has been used to describe analog, digital, mixed-mode
analog/digital VLSI, and software systems that implement models of neural
systems (for perception, motor control, or multisensory integration).
The implementation of
neuromorphic computing on the hardware level can
be realized by oxide-based
memristors, threshold switches, and
transistors. A key aspect of neuromorphic engineering is understanding
how the morphology of individual neurons, circuits, applications, and
overall architectures creates desirable computations, affects how
information is represented, influences robustness to damage, incorporates
learning and development, adapts to local change (plasticity), and
facilitates evolutionary change.
Cognitive Computer
combines artificial intelligence and
machine-learning algorithms, in an
approach which attempts to reproduce the behaviour of the human brain.
Evolutionary Computation
is a family of
algorithms for global optimization inspired by biological
evolution, and the subfield of artificial intelligence and soft computing
studying these
algorithms. In technical terms,
they are a family of population-based trial and error problem solvers with
a metaheuristic or stochastic optimization character.
Computer
Code -
Super Computers
Algorithm that Learns directly from Human Instructions, rather than an
existing set of examples, and outperformed conventional methods of
training neural networks by 160 per cent.
Hierarchical Temporal Memory are learning algorithms that can store,
learn, infer and
recall high-order sequences.
Unlike most other machine learning methods, HTM learns (in an unsupervised
fashion) time-based patterns in unlabeled data on a continuous basis. HTM
is robust to noise, and it has high capacity, meaning that it can learn
multiple patterns simultaneously. When applied to computers, HTM is well
suited for prediction, anomaly detection, classification and ultimately
sensorimotor applications. The theory has been tested and implemented in
software through example applications from Numenta and a few commercial
applications from Numenta's partners.
Cognitive Hierarchy.
The brain’s secret to life-long learning can now come as hardware for
artificial intelligence. As companies use more and more data to
improve how AI recognizes images, learns languages and carries out other
complex tasks, a recent article shows a way that computer chips could
dynamically rewire themselves to take in new data like the brain does,
helping AI to keep learning over time. When the human brain learns
something new, it
adapts. But when
artificial intelligence learns something new, it tends to forget
information it already learned.
Incorporating human error into machine learning. Researchers are
developing a way to incorporate one of the most human of characteristics
--
uncertainty -- into machine learning systems. Human error and
uncertainty are concepts that many artificial intelligence systems fail to
grasp, particularly in systems where a human provides feedback to a
machine learning model. Many of these systems are programmed to assume
that humans are always certain and correct, but real-world decision-making
includes occasional mistakes and uncertainty.
Tweaking AI software to function like a human brain improves computer's
learning ability. Humans can quickly and accurately learn new visual
concepts from sparse data ¬- sometimes just a single example. Even three-
to four-month-old babies can easily learn to recognize zebras and
distinguish them from cats, horses, and giraffes. But computers typically
need to "see" many examples of the same object to know what it is.
Computational Theory
of Mind is a view that the human mind or the
human brain (or both) is an
information processing
system and that
thinking is a form of
computing.
Computational Neuroscience studies brain function in
terms of the
information processing properties of the structures that make up the
nervous system. It is
an interdisciplinary computational science that links the diverse fields
of neuroscience,
cognitive
science, and
psychology with
electrical engineering,
computer science,
mathematics, and
physics.
Computational Model is a mathematical model in computational
science that requires extensive computational resources to study the
behavior of a complex system by
computer
simulation.
Computational Complexity Theory is a branch of the theory of
computation in theoretical computer science that focuses on classifying
computational problems according to their inherent difficulty, and
relating those classes to each other. A computational problem is
understood to be a task that is in principle amenable to being solved by a
computer, which is equivalent to stating that the problem may be solved by
mechanical application of mathematical steps, such as an
algorithm.
Prior Knowledge for Pattern Recognition refers to all information
about the problem available in addition to the training data. However, in
this most general form, determining a model from a finite set of samples
without prior knowledge is an ill-posed problem, in the sense that a
unique model may not exist. Many classifiers incorporate the general
smoothness assumption that a test pattern similar to one of the training
samples tends to be assigned to the same class. The importance of prior
knowledge in machine learning is suggested by its role in search and
optimization.
Loosely, the no free lunch theorem states that all search algorithms have
the same average performance over all problems, and thus implies that to
gain in performance on a certain application one must use a specialized
algorithm that includes some
prior knowledge
about the problem. The different types of prior knowledge encountered in
pattern recognition are now regrouped under two main categories:
class-invariance and knowledge on the data.
Pattern
recognition is a very active field of research intimately bound to
machine learning. Also known as classification or statistical
classification, pattern recognition aims at building a classifier that can
determine the class of an input pattern. This procedure, known as
training, corresponds to learning an unknown decision function based only
on a set of input-output pairs that form the training data (or training
set). Nonetheless, in real world applications such as character
recognition, a certain amount of information on the problem is usually
known beforehand. The incorporation of this prior knowledge into the
training is the key element that will allow an increase of performance in
many applications.
Random Forest is
an ensemble learning method for classification, regression and other tasks
that operates by constructing a multitude of decision trees at training
time. For classification tasks, the output of the random forest is the
class selected by most trees. For regression tasks, the mean or average
prediction of the individual trees is returned. Random decision forests
correct for decision trees' habit of overfitting to their training set.
Ensemble Learning methods use multiple
learning
algorithms to obtain better predictive performance than could be
obtained from any of the constituent learning algorithms alone. Unlike a
statistical ensemble in statistical mechanics, which is usually infinite,
a machine learning ensemble consists of only a concrete finite set of
alternative models, but typically allows for much more flexible structure
to exist among those alternatives.
Statistical Classification is the problem of identifying which of a
set of
categories a particular
observation belongs to.
Decision Tree Learning is used as a predictive model to draw
conclusions about a set of observations. This supervised learning approach
is used in statistics, data mining and
machine
learning.
Training, Test, and Validation Sets in machine learning, the study and
construction of
algorithms that can learn from and make predictions on
data is a common task. Such algorithms work by making data-driven
predictions or decisions, :2 through building a
mathematical model from
input data. The data used to build the final model usually comes from
multiple datasets. In particular, three data sets are commonly used in
different stages of the creation of the model. The model is initially fit
on a training dataset, that is a set of examples used to fit the
parameters (e.g. weights of connections between neurons in artificial
neural networks) of the model. The model (e.g. a neural net or a naive
Bayes classifier) is trained on the training dataset using a supervised
learning method (e.g. gradient descent or stochastic gradient descent). In
practice, the training dataset often consist of pairs of an input vector
and the corresponding answer vector or scalar, which is commonly denoted
as the target. The current model is run with the training dataset and
produces a result, which is then compared with the target, for each input
vector in the training dataset. Based on the result of the comparison and
the specific
learning algorithm being used, the
parameters of the model are adjusted. The model fitting can include both
variable selection and parameter estimation. Successively, the fitted
model is used to predict the responses for the observations in a second
dataset called the validation dataset. The validation dataset provides an
unbiased evaluation of a model fit on the training dataset while tuning
the model's hyperparameters (e.g. the number of hidden units in a neural
network). Validation datasets can be used for regularization by early
stopping: stop training when the error on the validation dataset
increases, as this is a sign of overfitting to the training dataset. This
simple procedure is complicated in practice by the fact that the
validation dataset's error may fluctuate during training, producing
multiple local minima. This complication has led to the creation of many
ad-hoc rules for deciding when overfitting has truly begun. Finally, the
test dataset is a dataset used to provide an
unbiased evaluation of a
final model fit on the training dataset.
Machine to Machine
refers to direct communication between devices using any communications
channel, including wired and wireless. Machine to machine communication
can include industrial instrumentation, enabling a sensor or meter to
communicate the data it records (such as temperature, inventory level,
etc.) to application software that can use it (for example, adjusting an
industrial process based on temperature or placing orders to replenish
inventory). Such communication was originally accomplished by having a
remote network of machines relay information back to a central hub for
analysis, which would then be rerouted into a system like a personal
computer.
Intelligence Amplification refers to the effective use
of information technology in augmenting human intelligence.
Computer Vision
Adversarial Machine Learning is a technique employed in the field of
machine learning which
attempts to fool models
through
malicious input. This technique can be applied for a variety of
reasons, the most common being to
attack or cause a malfunction in
standard machine learning models. Machine learning techniques were
originally designed for stationary and benign environments in which the
training and test data are assumed to be generated from the same
statistical distribution. However, when those models are implemented in
the real world, the presence of intelligent and adaptive adversaries may
violate that statistical assumption to some degree, depending on the
adversary. This technique shows how a
malicious adversary can
surreptitiously manipulate the input data so as to exploit specific
vulnerabilities of learning algorithms and compromise the security of the
machine learning system.
Adversarial Machine Learning is the study of the
attacks on machine
learning algorithms, and of the defenses against such attacks. A survey
from May 2020 exposes the fact that practitioners report a dire need for
better protecting machine learning systems in industrial applications.
Adversarial Machine Learning helps to defend mechanisms against evasion,
poisoning, and privacy attacks.
Computer Viruses -
Prompts.
Generative Adversarial Network is a
system used in machine learning, consisting of two neural networks, one of
which generates candidate solutions to a problem while the other evaluates
and accepts or rejects them.
Generative Adversarial Network is when two
neural networks contest
with each other in a game (in the sense of game theory, often but not
always in the form of a zero-sum game). Given a training set, this
technique learns to generate new data with the same statistics as the
training set. For example, a GAN trained on photographs can generate new
photographs that look at least superficially authentic to human observers,
having many realistic characteristics. Though originally proposed as a
form of generative model for unsupervised learning, GANs have also proven
useful for semi-supervised learning, fully supervised learning, and
reinforcement learning. In a 2016 seminar, Yann LeCun described GANs as
"the coolest idea in machine learning in the last twenty years".
Adversarial Hacking.
Adversarial Reprogramming of Neural Networks. Deep neural networks are
susceptible to adversarial attacks. It enables attackers to fool systems
in what are known as “black-box attacks” where they don't have access to
the model's architecture, parameters or even the training data used to
train the network. In computer vision,
well-crafted
perturbations to images can cause neural networks to make mistakes
such as confusing a cat with a computer. Previous adversarial attacks have
been designed to degrade performance of models or cause
machine learning models to produce specific
outputs chosen ahead of time by the attacker. We introduce attacks that
instead reprogram the target model to perform a task chosen by the
attacker without the attacker needing to specify or compute the desired
output for each test-time input. This attack finds a single adversarial
perturbation, that can be added to all test-time inputs to a machine
learning model in order to cause the model to perform a task chosen by the
adversary—even if the model was not trained to do this task. These
perturbations can thus be considered a program for the new task. We
demonstrate adversarial reprogramming on six ImageNet classification
models, repurposing these models to perform a counting task, as well as
classification tasks: classification of MNIST and CIFAR-10 examples
presented as inputs to the
ImageNet model.
Machine
Learning as an Adversarial Service: Learning Black-Box Adversarial
Examples. When a group of researchers from Google and OpenAI realized
they could slightly shift the pixels in an image so that it would appear
the same to the human eye, but a
machine learning
algorithm would classify it as something else entirely. For instance,
an image might look like a cat to you, but when a computer vision program
looks at it, it sees a dog.
Deep Learning -
Deep-Learning
Program DRIVE PX.
Deep Learning & Artificial Intelligence Solutions from NVIDIA.
Deep Learning with Light. A new method uses optics to accelerate
machine-learning computations on smart speakers and other low-power
connected devices. A new computing architecture enables advanced
machine-learning computations to be performed on a low-power,
memory-constrained edge device. The technique may enable self-driving cars
to make decisions in real-time while only using a fraction of the energy
that is currently demanded by their power-hungry on-board computers.
Weights and Biases is
on a mission to build the best software tools for machine learning.
Similarity Learning
is an area of supervised machine learning in artificial intelligence. It
is closely related to regression and classification, but the goal is to
learn from examples a similarity function that measures how similar or
related two objects are. It
has applications in ranking, in recommendation systems, visual identity
tracking, face verification, and speaker verification.
Domain Randomization is a simple technique
for training models on
simulated images that transfer
to real images by randomizing rendering in the simulator. With enough
variability in the simulator, the real world may appear to the model as
just another variation.
Object
Localization is to predict the object in an
image
as well as its boundaries. The difference between object localization and
object detection is subtle. Simply, object localization aims to locate
the main (or most visible) object in an image while object detection
tries to find out all the objects and their boundaries.
Stages
of Learning
Monad Functional Programming are a way to build computer programs by
joining simple components in robust ways. Monads can be seen as a
functional design pattern to build generic types, with the following
organization: Define a data type, and how values of that datatype are
combined. Create functions that use the data type, and compose them
together (following the rules defined in the first step).
Human
Learning Methods
Statistical Learning Theory is a framework for machine
learning drawing from the fields of statistics and functional analysis.
Statistical learning theory deals with the problem of finding a predictive
function based on data. Statistical learning theory has led to successful
applications in fields such as computer vision, speech recognition,
bioinformatics and baseball.
Learning Games -
Machine Learning
Linear
Algebra is the branch of mathematics concerning vector
spaces and linear mappings between such spaces. It includes the study of
lines, planes, and subspaces, but is also concerned with properties common
to all vector spaces.
Algorithms
Algorithm is a
self-contained
step-by-step set of
operations to be
performed. Algorithms
perform
calculation, data
processing, and/or automated
reasoning tasks.
Algorithm is a precise rule or set of rules that specifying how to
solve a particular problem.
Procedure -
Formula -
Patterns -
Variables -
What If -
Scenarios -
Decision Table -
Machine Learning -
Networks -
Invariant -
Boolean Algebra -
Fraudulent
Algorithms
Task in computing is a
unit of execution or
a unit of work. The term is
ambiguous; precise alternative terms include
process, light-weight
process, thread (for execution), step, request, or query (for work). In
the adjacent diagram, there are queues of incoming work to do and
outgoing
completed work, and a thread pool of threads to perform this work. Either
the work units themselves or the threads that perform the work can be
referred to as "tasks", and these can be referred to respectively as
requests/responses/threads, incoming tasks/completed tasks/threads (as
illustrated), or requests/responses/tasks.
Thread in computing is an execution that is the smallest sequence of
programmed
instructions that can
be managed
independently by a scheduler, which
is typically a part of the
operating system.
Time
Complexity is the computational complexity that describes the
amount of time it takes to run
an algorithm. Time complexity is commonly estimated by counting the
number of elementary operations performed by the algorithm, supposing that
each elementary operation takes a fixed amount of time to perform. Thus,
the amount of time taken and the number of elementary operations performed
by the algorithm are taken to differ by at most a constant factor.
Human-Based Genetic Algorithm is a genetic algorithm that allows
humans to contribute solution suggestions to the evolutionary process. For
this purpose, a HBGA has human interfaces for initialization, mutation,
and recombinant crossover. As well, it may have interfaces for selective
evaluation. In short, a HBGA outsources the operations of a typical
genetic algorithm to humans.
Feedback
-
Neural Networks -
Machine Learning -
Reasoning
Fast Fourier Transform is an algorithm that computes the discrete
Fourier transform of a sequence, or its inverse. Fourier analysis converts
a signal from its original domain (often time or space) to a
representation in the frequency domain and vice versa. The DFT is obtained
by decomposing a sequence of values into components of different
frequencies. This operation is useful in many fields, but computing it
directly from the definition is often too slow to be practical.
Genetic Algorithm is a
metaheuristic inspired by
the process of
natural selection that belongs
to the larger class of
evolutionary algorithms (EA). Genetic algorithms are commonly used to
generate high-quality solutions to
optimization and
search problems by relying on bio-inspired operators
such as
mutation, crossover and
selection.
Algorithms and Applications for answering Ranked Queries using Ranked
Views (PDF)
Some algorithms can be
corrupt because they can create
information bubbles that can
radicalize a person
and manipulate their sense of reality, all because ignorant people want to
keep the user on a platform, just so the platform can make money. There is
no
black box, just bad
people who are
greedy. We need
to create
intelligent algorithms.
Analysis of
Algorithms is the determination of the computational complexity of
algorithms, that is the amount of time, storage and/or other resources
necessary to execute them.
Computer Algebra
is a scientific area that refers to the study and development of
algorithms and software for manipulating
mathematical expressions and other mathematical objects. Although
computer algebra could be considered a subfield of scientific computing,
they are generally considered as distinct fields because scientific
computing is usually based on numerical computation with approximate
floating point numbers, while
symbolic computation
emphasizes exact computation with expressions containing variables that
have no given value and are manipulated as symbols.
Super-Recursive Algorithm are a generalization of ordinary algorithms
that are more powerful, that is, compute more than Turing machines.
Turing
machines and other mathematical models of conventional algorithms allow
researchers to find properties of recursive algorithms and their
computations. In a similar way, mathematical models of super-
recursive
algorithms, such as inductive Turing machines, allow researchers to find
properties of super-recursive algorithms and their computations.
Perceptron is an
algorithm for supervised learning of
binary
classifiers. A binary classifier is a function which can decide
whether or not an input, represented by a vector of numbers, belongs to
some specific class. It is a type of linear classifier, i.e. a
classification algorithm that makes its predictions based on a linear
predictor function combining a set of weights with the feature vector.
Neural Networks -
Image
Recognition.
Sorting
Algorithm is an algorithm that puts elements of a list in a
certain order.
The most-used orders are numerical order and lexicographical order.
Efficient sorting is important for optimizing the use of other algorithms
(such as search and merge algorithms) which require input data to be in
sorted lists; it is also often useful for canonicalizing data and for
producing human-readable output. More formally, the output must satisfy
two conditions: The output is in nondecreasing order (each element is no
smaller than the previous element according to the desired total order);
The output is a permutation (reordering) of the input. Further, the data
is often taken to be in an array, which allows random access, rather than
a list, which only allows sequential access, though often algorithms can
be applied with suitable modification to either type of data.
Counting Sort is
an algorithm for sorting a collection of objects according to keys that
are small integers; that is, it is an integer sorting algorithm. It
operates by
counting the number
of objects that have each distinct key value, and using arithmetic on
those counts to
determine the positions of each
key value in the output sequence. Its running time is linear in the
number of items and the difference between the maximum and minimum key
values, so it is only suitable for direct use in situations where the
variation in keys is not significantly greater than the number of items.
However, it is often used as a subroutine in another sorting algorithm,
radix sort, that can handle larger keys more efficiently. Because counting
sort uses key values as indexes into an array, it is not a comparison
sort, and the O(n log n) lower bound for comparison sorting does not apply
to it. Bucket sort may be used for many of the same tasks as counting
sort, with a similar time analysis; however, compared to counting sort,
bucket sort requires linked lists, dynamic arrays or a large amount of
preallocated memory to hold the sets of items within each bucket, whereas
counting sort instead stores a single number (the count of items) per
bucket.
Selection Algorithm is an algorithm for finding the kth smallest
number in a list or array; such a number is called the kth order
statistic. This includes
the cases of finding the minimum, maximum, and median elements.
Odds Algorithm is a mathematical method for computing optimal
strategies for a class of problems that belong to the domain of
optimal stopping
problems.
Critical Path Method is an algorithm for scheduling a set of
project activities.
Turing Machine
Inductive Turing Machines implement an important class of
super-recursive algorithms. An inductive
Turing Machine is a definite list
of well-defined instructions for completing a task which, when given an
initial state, will proceed through a well-defined series of successive
states, eventually giving the final result. The difference between an
inductive Turing machine and an ordinary Turing machine is that an
ordinary Turing machine must stop when it has obtained its result, while
in some cases an inductive Turing machine
can continue to compute after
obtaining the result, without stopping.
Turing Machine is an abstract machine that manipulates symbols on a
strip of tape according to a table of rules; to be more exact, it is a
mathematical model of computation that defines such a device. Despite the
model's simplicity, given any computer algorithm, a Turing machine can be
constructed that is capable of simulating that algorithm's logic.
Turing Test.
Universal Turing Machine essentially achieves this by reading both the
description of the machine to be simulated as well as the input to that
machine from its own tape. Alan Turing introduced the idea of such a
machine in 1936–1937. This principle is considered to be the origin of the
idea of a stored-program computer used by John von Neumann in 1946 for the
"Electronic Computing Instrument" that now bears von Neumann's name: the
von Neumann architecture. In terms of computational complexity, a
multi-tape universal Turing machine need only be slower by logarithmic
factor compared to the machines it simulates. (one computer can run all
kinds of different programs and can be programmed to do almost anything
you want).
Computational Universe is
proposed by Jürgen Schmidhuber in a paper based on Zuse's 1967 thesis. He
pointed out that a simple explanation of the universe would be a Turing
machine programmed to execute all possible programs computing all possible
histories for all types of computable physical laws.
Computational Biology.
Computation and the
Fundamental Theory of Physics - with Stephen Wolfram (youtube) -
Stephen
Wolfram
discusses his efforts to use what he's learned from exploring
computational systems to build a new fundamental theory of all of physics.
Universal Computation rests on the
principle of simulation and is one of the foundational concepts in
computer science. Any computation that can be carried out by one
general-purpose computer can also be carried out on any other
general-purpose computer.
Universal.
Turing Completeness is a system of data-manipulation rules (such as a
computer's instruction set, a programming language, or a cellular
automaton) is said to be Turing-complete or computationally universal if
it can be used to simulate any Turing machine. This means that this system
is able to recognize or decide other data-manipulation rule sets. Turing
completeness is used as a way to express the power of such a
data-manipulation rule set. Virtually all programming languages today are
Turing-complete.
Turing Test.
Computability is the ability to solve a problem in an effective
manner. The computability of a problem is closely linked to the existence
of an algorithm to solve the problem.
Technology
Advancement.
Computation is any type of
calculation that includes both arithmetical
and non-arithmetical steps and which follows a well-defined model (e.g. an
algorithm). Mechanical or electronic devices (or, historically, people)
that perform computations are known as computers. An especially well-known
discipline of the study of computation is computer science.
Computational Problem is a problem that a computer might be able to
solve or a question that a computer may be able to answer. A computational
problem is a task solved by a computer. A computation problem is solvable
by mechanical application of mathematical steps, such as an algorithm. A
problem is regarded as inherently difficult if its solution requires
significant resources, whatever the algorithm used. Computational
Equivalence says that systems found in the natural world can perform
computations up to a maximal ("universal") level of computational power,
and that most systems do in fact attain this maximal level of
computational power.
Computational Complexity Theory focuses on classifying computational
problems according to their inherent difficulty, and relating these
classes to each other. It formalizes computational problems by
introducing mathematical models of computation to study these problems and
quantifying their computational complexity, i.e., the amount of resources
needed to solve them, such as time and storage. Other measures of
complexity are also used, such as the amount of communication (used in
communication complexity), the number of gates in a circuit (used in
circuit complexity) and the number of processors (used in parallel
computing). One of the roles of computational complexity theory is to
determine the
practical limits on what computers can and cannot do. The P
versus NP problem, one of the seven Millennium Prize Problems, is
dedicated to the field of computational complexity. Closely related fields
in theoretical computer science are analysis of algorithms and
computability theory. A key distinction between analysis of algorithms and
computational complexity theory is that the former is devoted to analyzing
the amount of resources needed by a particular algorithm to solve a
problem, whereas the latter asks a more general question about all
possible algorithms that could be used to solve the same problem. More
precisely, computational complexity theory tries to classify problems that
can or cannot be solved with appropriately restricted resources. In turn,
imposing restrictions on the available resources is what distinguishes
computational complexity from computability theory: the latter theory asks
what kinds of problems can, in principle, be solved algorithmically.
A computational problem is a task solved by a
computer. A computation problem is solvable by mechanical
application of mathematical steps, such as an algorithm. A problem is
regarded as inherently difficult if its solution requires significant
resources, whatever the algorithm used.
Kolmogorov Complexity of an object, such as a piece of text, is the
length of a shortest computer program (in a predetermined programming
language) that produces the object as output. It is a measure of the
computational resources needed to specify the object, and is also known as
algorithmic complexity.
Combinatory
Logic is a notation to eliminate the need for quantified variables in
mathematical logic.
Bombe is an
electro-mechanical device used
by British cryptologists to help decipher German
Enigma-machine-encrypted secret messages during World War II. The US
Navy and US Army later produced their own machines to the same functional
specification, albeit engineered differently both from each other and from
the British Bombe itself.
Emergent Algorithm is an algorithm that exhibits emergent
behavior. In essence an emergent algorithm implements a set of simple
building block behaviors that when combined exhibit more complex
behaviors. One example of this is the implementation of fuzzy motion
controllers used to adapt robot movement in response to environmental
obstacles. An emergent algorithm has the following characteristics: it
achieves predictable global effects, it does not require global
visibility, it does not assume any kind of centralized control, it is
self-stabilizing. Other examples of emergent algorithms and models include
cellular automata, artificial neural networks and swarm intelligence
systems (ant colony optimization, bees algorithm, etc.).
Randomized Algorithm is an algorithm that employs a degree of
randomness as part of its logic. The algorithm typically uses uniformly
random bits as an auxiliary input to guide its behavior, in the hope of
achieving good performance in the "average case" over all possible choices
of random bits. Formally, the algorithm's performance will be a random
variable determined by the random bits; thus either the running time, or
the output (or both) are random variables.
Deterministic Algorithm is an algorithm which, given a particular
input, will always produce the same output, with the underlying machine
always passing through the same sequence of states. Deterministic
algorithms are by far the most studied and familiar kind of algorithm, as
well as one of the most practical, since they can be run on real machines
efficiently. Formally, a deterministic algorithm computes a mathematical
function; a function has a unique value for any input in its domain, and
the algorithm is a process that produces this particular value as output.
Nondeterministic Algorithm is an algorithm that, even for the same
input, can exhibit different behaviors on different runs, as opposed to a
deterministic algorithm. There are several ways an algorithm may behave
differently from run to run. A concurrent algorithm can perform
differently on different runs due to a race condition. A probabilistic
algorithm's behaviors depends on a random number generator. An algorithm
that solves a problem in nondeterministic polynomial time can run in
polynomial time or exponential time depending on the choices it makes
during execution. The nondeterministic algorithms are often used to find
an approximation to a solution, when the exact solution would be too
costly to obtain using a deterministic one.
Algorithmic Learning Theory is a mathematical framework for
analyzing
machine learning problems and algorithms. Synonyms include
formal learning theory and algorithmic inductive inference. Algorithmic
learning theory is different from statistical learning theory in that it
does not make use of statistical assumptions and analysis. Both
algorithmic and statistical learning theory are concerned with machine
learning and can thus be viewed as branches of computational learning
theory.
Evolutionary Algorithm is a subset of evolutionary computation, a
generic population-based metaheuristic optimization algorithm. An EA uses
mechanisms inspired by biological evolution, such as reproduction,
mutation, recombination, and selection. Candidate solutions to the
optimization problem play the role of individuals in a population, and the
fitness function determines the quality of the solutions (see also loss
function). Evolution of the population then takes place after the repeated
application of the above operators. Artificial evolution (AE) describes a
process involving individual evolutionary algorithms; EAs are individual
components that participate in an AE.
Memetic
Algorithm referrs to in the literature as Baldwinian
evolutionary algorithms (EAs), Lamarckian EAs, cultural algorithms, or
genetic local search.
Algorithms.
Expectation Maximization Algorithm
is an iterative method to find maximum likelihood or maximum a posteriori
(MAP) estimates of parameters in statistical models, where the model
depends on unobserved latent variables. The EM iteration alternates
between performing an expectation (E) step, which creates a function for
the expectation of the log-likelihood evaluated using the current estimate
for the parameters, and a maximization (M) step, which computes parameters
maximizing the expected log-likelihood found on the E step. These
parameter-estimates are then used to determine the distribution of the
latent variables in the next E step.
Monad Functional Programming
are a way to build computer programs by joining simple components in
robust ways. A monad may encapsulate values of a particular data type,
creating a new type associated with a specific computation.
Precondition
is a condition or predicate that must always be true just prior
to the execution of some section of code or before an operation in a
formal specification. If a precondition is violated, the effect of the
section of code becomes undefined and thus may or may not carry out its
intended work. Security problems can arise due to incorrect preconditions.
Often, preconditions are simply included in the documentation of the
affected section of code. Preconditions are sometimes tested using guards
or assertions within the code itself, and some languages have specific
syntactic constructions for doing so. For example: the factorial is only
defined for integers greater than or equal to zero. So a program that
calculates the factorial of an input number would have preconditions that
the number be an integer and that it be greater than or equal to zero.
Algorithm Aversion (PDF)
Parallel Algorithm as opposed to a traditional serial
algorithm, is an algorithm which can be executed a piece at a time on many
different processing devices, and then combined together again at the end
to get the correct result. Many parallel algorithms are executed
concurrently – though in general concurrent algorithms are a distinct
concept – and thus these concepts are often conflated, with which aspect
of an algorithm is parallel and which is concurrent not being clearly
distinguished. Further, non-parallel, non-concurrent algorithms are often
referred to as "sequential algorithms", by contrast with concurrent
algorithms.
Errors (lies)
Callback is any executable code that is passed as an
argument to other code, which is expected to call back (execute) the
argument at a given time. This execution may be immediate as in a
synchronous callback, or it might happen at a later time as in an
asynchronous callback. In all cases, the intention is to specify a
function or subroutine as an entity that is, depending on the language,
more or less similar to a variable. Programming languages support
callbacks in different ways, often implementing them with subroutines,
lambda expressions, blocks, or function pointers.
Controls
(programmable controllers) -
Patterns (recognition)
-
Programming (code)
Instance Based Learning Algorithm (PDF)
Bron-Kerbosch
Algorithm is an algorithm for finding maximal cliques in an
undirected graph. That is, it lists all subsets of vertices with the two
properties that each pair of vertices in one of the listed subsets is
connected by an edge, and no listed subset can have any additional
vertices added to it while preserving its complete connectivity.
Big O Notation is a mathematical notation that describes the
limiting behavior of a function when the argument tends towards a
particular value or infinity.
Binary Search Algorithm is a search algorithm that finds the
position of a target value within a sorted array.
Binary search compares
the target value to the middle element of the array; if they are unequal,
the half in which the target cannot lie is eliminated and the search
continues on the remaining half until it is successful or the remaining
half is empty.
Algorithmic Probability is a mathematical method of
assigning a prior probability to a given observation. It was invented by
Ray Solomonoff in the 1960s. It is used in inductive inference theory and
analyses of algorithms. In his general theory of inductive inference,
Solomonoff uses the prior obtained by this formula, in Bayes' rule for
prediction.
New Machine Learning Algorithms offer Safety and Fairness guarantees.
Statistics (math) -
Algorithms,
Direct Coding or Both?
Gradient Descent is an optimization algorithm for finding a local
minimum of a differentiable function. Gradient descent in machine learning
is simply used to find the values of a function's parameters
(coefficients) that minimize a cost function as far as possible.
Computer Algebra
is a scientific area that refers to the study and development of
algorithms and software for manipulating mathematical expressions and
other mathematical objects. Although, properly speaking, computer algebra
should be a subfield of scientific computing, they are generally
considered as distinct fields because scientific computing is usually
based on numerical computation with approximate floating point numbers,
while symbolic computation emphasizes exact computation with expressions
containing variables that have no given value and are manipulated as
symbols. Software applications that perform symbolic calculations are
called computer algebra systems, with the term system alluding to the
complexity of the main applications that include, at least, a method to
represent mathematical data in a computer, a user programming language
(usually different from the language used for the implementation), a
dedicated memory manager, a user interface for the input/output of
mathematical expressions, a large set of routines to perform usual
operations, like simplification of expressions, differentiation using
chain rule, polynomial factorization, indefinite integration, etc.
Computer algebra is widely used to experiment in mathematics and to design
the formulas that are used in numerical programs. It is also used for
complete scientific
computations,
when purely numerical methods fail, as in public key cryptography or for
some non-linear problems.
What If
What If is introducing a
conditional clause or on the condition or supposition that something will
happen. In the event that a possibility will happen.
Suppose is to
expect something to be
true. To believe in something, especially on uncertain or tentative
grounds. Take for granted or as a given and expect beforehand.
Supposition is a message expressing an
opinion based on incomplete evidence. A
hypothesis that is taken for
granted.
Stipulation is an
assumption on which
rests the
validity
or effect of something else.
Condition
is an assumption on which rests the validity or effect of something else.
Information that should be kept in mind when making a decision. A
statement of what is required as part of an agreement. Condition is also a
state at a particular time.
Precondition
is an assumption on which rests the validity or effect of something else.
An assumption that is taken for granted.
Probability (odds) -
Variables -
Decision Table
Is–Ought Problem is when one makes claims about what ought to be that
are based solely on statements about what is. Hume found that there seems
to be a significant difference between positive or descriptive statements
about what is and prescriptive or normative statements about what ought to
be, and that it is not obvious how one can coherently transition from
descriptive statements to prescriptive ones. Hume's law or Hume's
guillotine is the thesis that an ethical or judgmental conclusion cannot
be inferred from purely descriptive factual statements.
AI Fears.
For Loop is a control flow statement for specifying
iteration, which allows code to be executed repeatedly.
Artificial intelligence needs the "if"
function, just like us. There are a lot of if's, with some if's that
refer to other if's for more processing.
Humorous What Ifs.
PHP or
Hypertext Preprocessor is a general-purpose programming language
originally designed for web development. PHP code may be executed with a
command line
interface (CLI), embedded into HTML code, or used in combination with
various web template systems, web content management systems, and web
frameworks. PHP code is usually processed by a PHP interpreter implemented
as a module in a
web server
or as a Common Gateway Interface (CGI) executable. The web server combines
the results of the interpreted and executed PHP code, which may be any
type of data, including images, with the generated web page. PHP can be
used for many programming tasks outside of the web context, such as
standalone graphical applications and
robotic drone control.
PHP if else -
HP Control Structures
PHP -
If Function (excel) -
If Statement (excel)
PHP has the following conditional
statements: if statement: executes some code only if a specified condition is
True.
if...else statement: executes some code if a condition is true and another code if
the condition is
False.
if...elseif....else statement: selects one of several blocks of code to be executed.
Switch statement: selects one of many
blocks of code to be executed.
Logic Gates
Logic Gate is an idealized or physical device implementing a
Boolean function;
that is, it performs a logical operation on one or more binary inputs and
produces a single binary output. Depending on the context, the term may
refer to an ideal logic gate, one that has for instance zero rise time and
unlimited fan-out, or it may refer to a non-ideal physical device. (see
Ideal and real op-amps for comparison).
Logic gates are primarily
implemented using diodes or transistors acting as
electronic switches, but can also be
constructed using vacuum tubes, electromagnetic relays (relay logic),
fluidic logic, pneumatic logic, optics, molecules, or even mechanical
elements. With amplification, logic gates can be cascaded in the same way
that Boolean functions can be composed, allowing the construction of a
physical model of all of Boolean logic, and therefore, all of the
algorithms and mathematics that can be described with Boolean logic. Logic
circuits include such devices as multiplexers, registers, arithmetic logic
units (ALUs), and computer memory, all the way up through complete
microprocessors, which may contain more than 100 million gates. In modern
practice, most gates are made from field-effect transistors (FETs),
particularly metal–oxide–semiconductor field-effect transistorss (MOSFETs).
Compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often
employed in circuit design because their construction using MOSFETs is
simpler and more efficient than the sum of the individual gates. In
reversible logic, Toffoli gates are used.
Neurons
Logic Gates (AND, OR, NOT)
Basic Bolean
AND - IF (A=1 and B=1) Then OUT=1
OR - IF (A=1 or B=1) Then OUT=1
NOT - IF (A=1) Then OUT=0
AND gate
is a basic digital logic gate that implements logical conjunction - it
behaves according to the truth table to the right. A HIGH output (1)
results only if all the inputs to the AND gate are HIGH (1). If none or
not all inputs to the AND gate are HIGH, a LOW output results. The
function can be extended to any number of inputs.
NAND gate is a logic gate which produces an output which is
false only if all its inputs are true; thus its output is complement to
that of the AND gate. A LOW (0) output results only if both the inputs to
the gate are HIGH (1); if one or both inputs are LOW (0), a HIGH (1)
output results. It is made using transistors and junction diodes. By De
Morgan's theorem, AB=A+B, and thus a NAND gate is equivalent to inverters
followed by an OR gate.
OR gate
is a digital logic gate that implements logical disjunction – it behaves
according to the truth table to the right. A HIGH output (1) results if
one or both the inputs to the gate are HIGH (1). If neither input is high,
a LOW output (0) results. In another sense, the function of OR effectively
finds the maximum between two binary digits, just as the complementary AND
function finds the minimum.
XOR gate is a digital logic gate that gives a true (1/HIGH)
output when the number of true inputs is odd. An XOR gate implements an
exclusive or; that is, a true output results if one, and only one, of the
inputs to the gate is true. If both inputs are false (0/LOW) or both are
true, a false output results. XOR represents the inequality function,
i.e., the output is true if the inputs are not alike otherwise the output
is false. A way to remember XOR is "one or the other but not both".
Inverter
logic gate or
NOT Gate is a logic gate
which implements logical
negation.
Subroutine (routines)
Code (computer programming)
Batch File (goals)
Binary (zeros
and ones)
Iteration (developing ideas)
Software Design
(computers)
Internet
(combined intelligence)
Networks
Robots (building)
Conjunction
(“and”) -
Disjunction (“or”)
Exclusive Or -
Negation (“not”) -
Induction
(deduction)
Gottfried Wilhelm Leibniz was a German polymath and
philosopher (1716) who occupies a prominent place in the history of
mathematics and the history of philosophy, having developed differential
and integral calculus independently of
Isaac Newton.
Characteristica Universalis is a universal and formal
language imagined to express mathematical, scientific, and metaphysical
concepts. Leibniz thus hoped to create a language usable within the
framework of a universal logical calculation or calculus ratiocinator.
Calculus Ratiocinator is a theoretical universal logical
calculation framework, a concept described in the writings of Gottfried
Leibniz, usually paired with his more frequently mentioned characteristica
universalis, a universal conceptual language.
Modulo Operation finds the remainder after division of one
number by another (sometimes called modulus).
Modular Arithmetic is a system of arithmetic for integers,
where numbers "wrap around" upon reaching a certain value—the modulus
(plural moduli).
Mathematical Biophysics is a subfield of both biophysics and
mathematical biology focusing of physical and physico-chemical mechanisms
involved in physiological functions of living organisms, as well as the
molecular structures supporting such physiological functions.
Our greatest intelligence now is
already being formed by the
Internet, which in some ways
simulates the
neural network of the human brain. But bringing
together all our knowledge and information is only the beginning,
because it will take the collective consensus of all the human
brains in order for us to achieve intelligent solutions to our
problems. And of course, incase of a major
catastrophe, we will have
to
Secure our intelligence in something like the
Global Seed Vault
Because we would not want to
start all over again as many humans
civilizations had to do throughout human history. Backup our
most important knowledge and information by transmitting it into
space,
store it in a satellite, store it on the moon and in
multiple places. This we have to do. That's Intelligence.
NP-Hardness in computational complexity theory, is the defining
property of a class of problems that are, informally, "at least as hard as
the hardest problems in NP". A simple example of an NP-hard problem is the
subset sum problem. A more precise specification is: a problem H is
NP-hard when every problem L in NP can be reduced in polynomial time to H;
that is, assuming a solution for H takes 1 unit time, we can use H's
solution to solve L in polynomial time. As a consequence, finding a
polynomial algorithm to solve any NP-hard problem would give polynomial
algorithms for all the problems in NP, which is unlikely as many of them
are considered difficult. A common misconception is that the NP in
"NP-hard" stands for "non-polynomial" when in fact it stands for
"non-deterministic polynomial acceptable problems". Although it is
suspected that there are no polynomial-time algorithms for NP-hard
problems, this has not been proven. Moreover, the class P, in which all
problems can be solved in polynomial time, is contained in the NP class.
NP is a complexity class used to classify decision problems.
NP is the set of decision problems for which the problem instances, where
the answer is "yes", have proofs verifiable in polynomial time.
Computational Complexity Theory
Decision Problem is a problem that can be posed as a yes-no question
of the input values. An example of a decision problem is deciding whether
a given natural number is prime. Another is the problem "given two numbers
x and y, does x evenly divide y?". The answer is either 'yes' or 'no'
depending upon the values of x and y. A method for solving a decision
problem, given in the form of an algorithm, is called a decision procedure
for that problem. A decision procedure for the decision problem "given two
numbers x and y, does x evenly divide y?" would give the steps for
determining whether x evenly divides y. One such algorithm is long
division. If the remainder is zero the answer is 'yes', otherwise it is
'no'. A decision problem which can be solved by an algorithm is called
decidable. Decision problems typically appear in mathematical questions of
decidability, that is, the question of the existence of an effective
method to determine the existence of some object or its membership in a
set; some of the most important problems in mathematics are undecidable.
The field of computational complexity categorizes decidable decision
problems by how difficult they are to solve. "Difficult", in this sense,
is described in terms of the computational resources needed by the most
efficient algorithm for a certain problem. The field of recursion theory,
meanwhile, categorizes undecidable decision problems by Turing degree,
which is a measure of the noncomputability inherent in any solution.
Decision Table.
Branch in computer science is an instruction in a
computer program that can cause a computer
to begin executing a different instruction sequence and thus deviate from
its default behavior of executing instructions in order.[a] Branch (or
branching, branched) may also refer to the act of switching execution to a
different instruction sequence as a result of executing a branch
instruction. Branch instructions are used to implement control flow in
program loops and conditionals (i.e., executing a particular sequence of
instructions only if certain conditions are satisfied). A branch
instruction can be either an unconditional branch, which always results in
branching, or a conditional branch, which may or may not cause branching
depending on some condition. Also, depending on how it specifies the
address of the new instruction sequence (the "target" address), a branch
instruction is generally classified as direct, indirect or relative,
meaning that the instruction contains the target address, or it specifies
where the target address is to be found (e.g., a register or memory
location), or it specifies the difference between the current and target
addresses.
Decision
Tree.
Variables
Variable is something
that is
not
consistent or having a fixed
pattern. A value
that is either arbitrary or not fully specified or unknown. Something that
is subject to change which can make it difficult to
predict.
Variation is an
instance of change or
the rate or magnitude of change. An activity that varies from a norm or
standard.
Version is something a little
different from others of the same type.
Relative -
Scenarios
-
Configurations
-
Conservation
Variant is something a little
different
from others of the same type. Exhibiting variation and
change. A variant
in biology is
a group of organisms within a
species that differ in trivial ways from
similar groups. A form or version of something that differs in some
respect from other forms of the same thing or from a standard. A variable
quantity that is
random.
An event that departs from
expectations.
Invariant -
Boolean Algebra -
Algorithms
Variance is an event
that departs from
expectations. An activity that varies from a norm or
standard. Variance in
statistics is the second
moment around the
mean. The
expected value of the square of the deviations of a random variable from
its mean value. The quality or degree of being subject to variation. A
difference between
conflicting facts, claims or opinions.
Discord that splits
a group.
Mutation -
Genetic Variations -
Iteration -
Unpredictable
Phenomenon -
Anomalies
Hidden-Variable Theory is a
deterministic
physical model which seeks to explain the probabilistic nature of
quantum mechanics by
introducing additional (possibly inaccessible) variables.
Deviation is a
variation that deviates from the standard or norm. The
error of a
compass due to local magnetic disturbances. Deviation in
statistics is the
difference between an observed value and the
expected value of a
variable or function.
Scenarios -
Combination of
Circumstances -
Relative -
Algorithm
Configurations is an
arrangement of elements in a particular form,
figure, or combination. Configurations in Chemistry is the fixed
three-dimensional relationship of the atoms in a molecule, defined by the
bonds between them. Configurations in Computing is the
arrangement or
set-up of the hardware and software that make up a computer system.
Design for X is when the X is a variable which can have one of many
possible values.
Safety
Engineering.
Latent Variable are variables that are not directly observed but are
rather
inferred
(through a mathematical model) from other variables that are observed
(directly measured). Mathematical models that aim to explain observed
variables in terms of latent variables are called latent variable models.
Latent variable models are used in many disciplines, including psychology,
economics, engineering, medicine, physics, machine learning/artificial
intelligence, bioinformatics,
natural language processing, econometrics,
management and the social sciences. Sometimes latent variables correspond
to aspects of physical reality, which could in principle be measured, but
may not be for practical reasons. In this situation, the term hidden
variables is commonly used (reflecting the fact that the variables are
"really there", but hidden). Other times, latent variables correspond to
abstract concepts, like categories, behavioral or mental states, or data
structures. The terms hypothetical variables or hypothetical constructs
may be used in these situations. One advantage of using latent variables
is that they can serve to reduce the dimensionality of data. A large
number of observable variables can be aggregated in a model to represent
an underlying concept, making it easier to understand the data. In this
sense, they serve a function similar to that of scientific theories. At
the same time, latent variables link observable ("sub-symbolic") data in
the real world to symbolic data in the modeled world.
Stochastic event or system is one that is unpredictable due
to the influence of a random variable. The word "stochastic" comes from
the Greek word στόχος (stokhos, "aim"). It occurs in a wide variety of
professional and academic fields.
Random Variable
in
probability and statistics,
a random variable, random quantity, aleatory variable or stochastic
variable is a variable whose value is subject to variations due to
chance (i.e. randomness, in a
mathematical sense). A random variable can take on a set of possible
different values (similarly to other mathematical variables), each with an
associated probability, in contrast to other
Mathematical Variables.
Random -
Hidden Variables.
Deterministic System is a system in which no randomness is
involved in the development of future states of the system. A
deterministic model will thus always produce the same output from a given
starting condition or initial state.
Internalism and externalism are two opposing ways of
explaining various subjects in several areas of
philosophy. These include
human motivation, knowledge, justification, meaning, and truth. The
distinction arises in many areas of debate with similar but distinct
meanings. Usually '
internalism' refers to the belief that an explanation
can be given of the given subject by pointing to things which are internal
to the person or their mind which is considering them. Conversely,
externalism holds that it is things about the world which motivate us,
justify our beliefs, determine meaning, etc.
Psychophysical is
sharing the physical and psychological qualities.
Linearization refers to finding the linear approximation to
a function at a given point.
Lyapunov
optimization refers to the use of a Lyapunov function to
optimally control a dynamical system. Lyapunov functions are used
extensively in control theory to ensure different forms of system
stability. The state of a system at a particular time is often described
by a multi-dimensional vector. A Lyapunov function is a nonnegative scalar
measure of this multi-dimensional state. Typically, the function is
defined to grow large when the system moves towards undesirable states.
System stability is achieved by taking control actions that make the
Lyapunov function drift in the negative direction towards zero.
Variable and Attribute in research is a characteristic of an
object (person, thing, etc.). Attributes are closely related to variables.
A variable is a logical set of attributes. Variables can "vary" - for
example, be high or low. How high, or how low, is determined by the value
of the attribute (and in fact, an attribute could be just the word "low"
or "high").
Variable in
mathematics is an alphabetic character
representing a number, called the value of the variable, which is either
arbitrary or not fully specified or unknown. Making
algebraic computations with
variables as if they were explicit numbers allows one to solve a range of
problems in a single computation. A typical example is the quadratic
formula, which allows one to solve every quadratic equation by simply
substituting the numeric values of the coefficients of the given equation
to the variables that represent them.
Differentials (math)
Derivative of a function of a real variable measures the sensitivity
to change of a quantity (a function value or dependent variable) which is
determined by another quantity (the independent variable). Derivatives are
a fundamental tool of
calculus.
For example, the derivative of the
position
of a moving object with respect to time is the object's velocity: this
measures how quickly the position of the object changes when time is
advanced.
Variable in
computer science is a storage location paired
with an associated symbolic name (an identifier), which contains some
known or unknown quantity of information referred to as a value. The
variable name is the usual way to reference the stored value; this
separation of name and content allows the name to be used independently of
the exact information it represents. The identifier in computer source
code can be bound to a value during run time, and the value of the
variable may thus change during the course of program execution.
Variable and Attribute in research is a characteristic of an object
(person, thing, etc.). Attributes are closely related to variables. A
variable is a logical set of attributes. Variables can "vary" - for
example, be high or low. How high, or how low, is determined by the value
of the attribute (and in fact, an attribute could be just the word "low"
or "high".
Logistic Map is a polynomial mapping (equivalently,
recurrence relation) of degree 2, often cited as an archetypal example of
how complex, chaotic behaviour can arise from very simple non-linear
dynamical equations.
Dynamical System
is a system in which a function describes the
time dependence of a point in a
geometrical space.
Examples include the mathematical models that describe the swinging of a
clock pendulum, the flow of water in a pipe, and the number of fish each
springtime in a lake.
Dependent and independent Variables dependent variables
represent the output or outcome whose variation is being studied. The
independent variables represent inputs or causes, i.e. potential reasons
for variation.
Regression Analysis
is a statistical process for estimating the relationships among
variables.
Patterns.
Symbolic
Regression is a type of regression analysis that searches the space of
mathematical expressions to find the model that best fits a given dataset,
both in terms of accuracy and simplicity. No particular model is provided
as a starting point to the algorithm. Instead, initial expressions are
formed by randomly combining mathematical building blocks such as
mathematical operators, analytic functions, constants, and state
variables. (Usually, a subset of these primitives will be specified by the
person operating it, but that's not a requirement of the technique.)
Typically, new equations are then formed by recombining previous equations
using genetic programming. By not requiring a specific model to be
specified, symbolic regression isn't affected by human bias, or unknown
gaps in domain knowledge. It attempts to uncover the intrinsic
relationships of the dataset, by letting the patterns in the data itself
reveal the appropriate models, rather than imposing a model structure that
is deemed mathematically tractable from a human perspective. The fitness
function that drives the evolution of the models takes into account not
only error metrics (to ensure the models accurately predict the data), but
also special complexity measures, thus ensuring that the resulting models
reveal the data's underlying structure in a way that's understandable from
a human perspective. This facilitates reasoning and favors the odds of
getting insights about the data-generating system.
Variational
Principle is a scientific principle used within the
calculus of
variations, which develops general methods for finding functions which extremize the value of quantities that depend upon those functions. For
example, to answer this question: "What is the shape of a chain suspended
at both ends?" we can use the variational principle that the shape must
minimize the gravitational
potential energy.
Condition Variable are synchronization primitives that
enable threads to wait until a particular condition occurs. Condition
variables are user-mode objects that cannot be shared across processes.
Condition variables enable threads to atomically release a lock and enter
the sleeping state.
Transient State
is when a process variable or variables have been changed and the system
has not yet reached a
steady state. The time taken for the circuit to change from one steady
state to another steady state is called the transient time. Transient
analysis KVL and KCL to circuits containing energy storage elements
results in differential. A
transient process
is a process in which process variables change over time. Transient
analysis L and study to circuits containing energy storage elements
results in differential. study of transient processes.
Baseline.
Scenarios
Scenario
is one of
many known
sequence of
possible events.
Real Life Examples -
Prepared for Emergencies -
Seeing the Whole
Picture -
Planning -
Cause and
Effect -
Relative -
Variables -
Patterns -
Configurations
-
Anomalies
-
Decision Tree
Worst-Case Scenario is a concept in
risk
management wherein the planner, in
planning for
potential
disasters, considers the most severe possible outcome that can
reasonably be projected to occur in a
given situation. Conceiving of
worst-case scenarios is a common form of
strategic planning, specifically
scenario planning, to prepare for and minimize contingencies that could
result in accidents, quality problems, or other issues.
Scenario Planning is a strategic
planning method that
some organizations use to make
flexible long-term plans. Part
adaptation and generalization of classic methods.
Minimax
is a decision rule used in artificial intelligence,
decision theory,
game theory, statistics and philosophy for minimizing the possible loss
for a worst case (maximum loss) scenario. When dealing with gains, it is
referred to as "maximin"—to maximize the minimum gain. Originally
formulated for two-player zero-sum game theory, covering both the cases
where players take alternate moves and those where they make simultaneous
moves, it has also been extended to more complex games and to general
decision-making in the presence of uncertainty.
Conditional Probability
is a
measure of the
probability of an event (some
particular situation occurring) given that (by assumption, presumption,
assertion or evidence) another event has occurred. If the event of
interest is A and the event B is known or assumed to have occurred, "the
conditional probability of A given B", or "the probability of A under the
condition B", is usually written as P(A|B), or sometimes PB(A) or P(A/B).
For example, the probability that any given person has a cough on any
given day may be only 5%. But if we know or assume that the person has a
cold, then they are much more likely to be coughing. The conditional
probability of coughing given that you have a cold might be a much higher
75%.
Interactive Object-Based Media
can help people understand different scenarios by giving people different
options and other choices to choose from so they can see which
decision
was better.
How many
questions deep do you need to go? How many
levels?
You can't prepare for everything, so
how do you
decide?
Formulating -
Safety Engineering -
Emerging Technologies
Scenario Mapping is a group exercise that
helps your design team think about how your persona segments might
approach an activity using your product or service and ideate around the
type of experience you want to provide for them, producing candidate
solutions for the future design.
Exception
Handling is the process of responding to the occurrence, during
computation, of exceptions –
anomalous or exceptional conditions requiring
special
processing –
often changing the normal flow of program execution. It is provided by
specialized
programming language
constructs or
computer hardware
mechanisms.
Statistics.
Event Chain Methodology is an uncertainty modeling and
schedule network analysis technique that is focused on identifying and
managing events and
event chains that affect
project schedules.
Event chain methodology is the next advance beyond critical path method
and critical chain project management. Event chain methodology helps to
mitigate the effect of motivational and cognitive biases in estimating and
scheduling.
Preference Based Planning is a form of automated planning
and scheduling which focuses on producing plans that additionally satisfy
as many user-specified preferences as possible. In many problem domains, a
task can be accomplished by various sequences of actions (also known as
plans). These plans can vary in quality: there can be many ways to solve a
problem but one generally prefers a way that is, e.g., cost-effective,
quick and safe.
Regression Analysis is a
statistical process for estimating
the relationships among
variables. It includes many techniques for
modeling and analyzing several variables, when the focus is on the
relationship between a dependent variable and one or more independent
variables (or 'predictors'). More specifically, regression analysis helps
one understand how the typical value of the dependent variable (or
'criterion variable') changes when any one of the independent variables is
varied, while the other independent variables are held fixed.
Reasoning -
Problem
Solving -
Quality
Control
AI accelerates problem-solving in complex scenarios. Researchers have
developed a new, data-driven
machine-learning
technique that speeds up software programs used to solve complex
optimization problems that can have millions of potential solutions. Their
approach could be applied to many complex logistical challenges, such as
package routing, vaccine distribution, and power grid management. This
software, called a
mixed-integer linear programming solver, splits a massive optimization
problem into smaller pieces and uses generic
algorithms to try and find the best solution. However, the solver
could take hours -- or even days -- to arrive at a solution.
Patterns - Trends
Pattern are
elements that
repeat
in a
predictable
manner. A perceptual
structure that can be
observed and
measured.
A
pattern is a discernible
regularity in the world or in a
manmade design that
repeats in a
predictable
manner. A
geometric pattern is a kind of pattern formed of
geometric
shapes and typically
repeating like a wallpaper.
Any of the
senses may
directly observe patterns. Conversely,
abstract patterns in science,
mathematics, or
language may be
observable only by
analysis. Direct observation in practice means seeing
visual patterns,
which are widespread in nature and in art. Visual patterns in nature are
often
chaotic, never exactly repeating, and often involve
fractals. Natural patterns
include
spirals, meanders,
waves, foams, tilings, cracks, and those created by
symmetries of rotation and
reflection. Patterns have an underlying mathematical structure; indeed,
mathematics can be seen as
the search for regularities, and the output of any function is a
mathematical pattern. Similarly in the sciences, theories explain and
predict regularities in the world. In art and architecture,
decorations or visual motifs may
be combined and repeated to form patterns designed to have a chosen effect
on the viewer. In computer science, a software design pattern is a known
solution to a class of problems in programming. In
fashion, the pattern is a template
used to create any number of similar
garments.
Pattern Recognition is the process of recognizing patterns
by using
machine learning algorithm or the
automated
recognition of patterns and
regularities in data. Pattern recognition can be defined as the
classification of
data based on knowledge already gained or on statistical
information extracted from patterns and/or their representation.
Pattern Recognition and
inductive thinking is a special ability of the human brain to not only
find patterns but figure out in a logical way what those patterns suggest
about
what will happen
next.
IQ Test measures pattern
recognition ability.
Chess
is a good game to practice pattern recognition.
Learning by
Association.
Apophenia is the tendency to perceive
meaningful connections
between unrelated things, or the unmotivated seeing of
connections
accompanied by a
feeling of
wonder and the need for
meaning. But sometimes people can use pattern recognition in the wrong
way and make
assumptions
or unreasonably seek patterns in random information that could make a
person to see patterns that are not really there or see patterns that are
irrelevant.
Trypophobia is the irrational fear of irregular patterns or clusters of small
holes or bumps.
How our brains know when something's different. Scientists discovered
how a set of high frequency brain waves may help us unconsciously know
when something's different by
comparing memories of
the past with present experiences. Almost every experience we store
into memory can be used to set our expectations and predictions for the
future.
Predictive coding basically states that the brain optimizes neural
activity for processing information. The brain uses more neural activity
to process new information than it does for things that we are familiar
with. It takes just one moment to not only remember a new experience but
also to use
memories of that
experience to set future
expectations.
New research describes a new model for how the brain interprets patterns
in complex networks. They found that the ability to detect patterns
stems in part from the brain's desire to
represent things in the
simplest
way possible and that the brain is constantly weighing the pressures of
complexity and simplicity when making decisions. The human brain is a
highly advanced information processor composed of more than
86 billion
neurons. Humans are adept at recognizing patterns from complex networks,
such as languages, without any formal instruction. Previously, cognitive
scientists tried to explain this ability by depicting the brain as a
highly optimized computer, but there is now discussion among
neuroscientists that this model might not accurately reflect how the brain works.
Time
Based Patterns -
Cycles -
Sequences -
Signals -
Language -
Symbols -
CodeSymmetry
-
Rhythm -
Variables -
Reference
Pattern -
Deciphering Code -
Math -
Spatial Intelligence
Uncovering nature's patterns at the atomic scale in living color. A
research team invents new method to accelerate materials discoveries
through rapid clustering and
color coding of
large X-ray data sets to reveal previously hidden structural changes that
occur as temperature increases or decreases. Color coding makes aerial
maps much more easily understood. Through color, we can tell at a glance
where there is a road, forest, desert, city, river or lake.
Central Pattern Generators are biological
neural
circuits that produce rhythmic outputs in the absence of rhythmic
input. They are the source of the tightly-coupled patterns of neural
activity that drive rhythmic and stereotyped motor behaviors like walking,
swimming, breathing, or chewing. The ability to function without input
from higher brain areas still requires modulatory inputs, and their
outputs are not fixed. Flexibility in response to sensory input is a
fundamental quality of CPG-driven behavior. To be classified as a rhythmic
generator, a CPG requires: "two or more processes that interact such that
each process sequentially increases and decreases, and that, as a result
of this interaction, the system repeatedly returns to its starting
condition." CPGs have been found in invertebrates, and practically all
vertebrate species investigated, including humans.
Trend is a general
direction in
which something tends to move.
Trend
Estimation is a
statistical technique to aid
interpretation
of data. When a series of
measurements of a process are treated as a
time
series, trend estimation can be used to make and justify statements about
tendencies in the data, by relating the measurements to the times at which
they occurred.
Trend Analysis is the widespread practice of collecting information
and
attempting to spot a pattern. In project
management, trend analysis is a mathematical technique that uses
historical results to predict future
outcome. In statistics, trend analysis often refers to techniques for
extracting an underlying pattern of behavior in a time series which would
otherwise be partly or nearly completely hidden by noise. If the trend can
be assumed to be linear, trend analysis can be undertaken within a formal
regression analysis, as described in Trend estimation. If the trends have
other shapes than linear, trend testing can be done by non-parametric
methods, e.g. Mann-Kendall test, which is a version of Kendall rank
correlation coefficient. For testing and visualization of nonlinear trends
also smoothing can be used.
Arithmetic Progression is a
sequence of numbers such that
the difference between the
consecutive terms is constant.
Pattern
Matching the act of checking a given
sequence of tokens for the presence of the constituents of some
pattern. In contrast to pattern recognition, the match usually has to be
exact: "either it will or will not be a match." The patterns generally
have the form of either sequences or
tree
structures. Uses of pattern matching include outputting the locations
(if any) of a pattern within a token sequence, to output some component of
the matched pattern, and to substitute the matching pattern with some
other token sequence (i.e., search and replace). Sequence patterns (e.g.,
a text string) are often described using regular expressions and matched
using techniques such as backtracking. Tree patterns are used in some
programming languages as a general tool to process data based on its
structure, e.g., Haskell, ML, Scala and the symbolic mathematics language
Mathematica have special syntax for expressing tree patterns and a
language construct for conditional execution and value retrieval based on
it. For simplicity and efficiency reasons, these tree patterns lack some
features that are available in regular expressions. Often it is possible
to give alternative patterns that are tried one by one, which yields a
powerful conditional programming construct. Pattern matching sometimes
includes support for guards. Term rewriting and graph rewriting languages
rely on pattern matching for the fundamental way a program evaluates into
a result.
Pattern Separation is
defined as the process by which overlapping or similar inputs or
representations
are transformed into less similar outputs whereas pattern completion is
the reconstruction of complete stored representations from partial inputs
that are part of the stored representation. Pattern separation is a
process that minimizes overlap between patterns of neuronal activity
representing similar experiences. Theoretical work suggests that the
dentate gyrus performs
this role for memory processing but a direct demonstration is lacking.
Pattern Separation in the
hippocampus enables
memories to be stored by
separate groups of neurons, so that memories don't get mixed up, neuronal
representation, devoid of specific contextual details. When recording the
activity of individual neurons, there is an alternative model to pattern
separation storing our memories. Pattern separation is a basic principle
of neuronal coding that precludes memory interference in the
hippocampus
.
Realistic model of mouse hippocampus uncovers new mechanism for pattern
separation. Our brains can distinguish highly similar patterns, thanks
to a process called pattern separation. How exactly our brains separate
patterns is, however, not fully understood yet. Using a full-scale
computer model of the
dentate gyrus, a brain region involved in pattern separation,
researchers found that inhibitory neurons activated by one pattern
suppress all their neighboring neurons, thereby switching off 'competing'
similar patterns.
Computers can read
zero's and ones,
which means they can be taught to look for patterns. And when these
patterns are labeled correctly and accurately, a computer can identify
things in the world pretty much the same way as humans do.
Software
Design Pattern is a
reusable solution to a commonly occurring problem within a given
context in software design. A template for how to solve a problem that can
be used in many different situations.
Object-oriented
design patterns typically show
relationships and
interactions between classes or objects, without specifying the final
application classes or objects that are involved.
Command Pattern is a behavioral design pattern in which an object is
used to encapsulate all information needed to perform an action or
trigger an event at a later
time. This information includes the method name, the object that owns the
method and values for the method parameters.
Grid.
Deep Learning is great for finding
Trends and
Patterns in
Data. But if you
don't use this information to benefit society, then we will continue to
suffer, as we are now.
Sensors -
Time
"Computers will help us make better
Predictions, Ai will also help us make better
Decisions, but Humans still have to
steer."
Data
Dredging is the use of
data mining to
uncover patterns in data that can be presented as statistically
significant, without first devising a specific hypothesis as to the
underlying causality. (also known as data fishing, data snooping, and
p-hacking).
Linear Discriminant Analysis is a generalization of Fisher's linear
discriminant, a method used in statistics, pattern recognition and machine
learning to find a
linear combination of features that characterizes or separates two or
more classes of objects or events. The resulting combination may be used
as a
linear classifier, or, more commonly, for dimensionality reduction
before later
classification.
Facial Recognition System is a computer application capable of
identifying or
verifying a person from a digital image or a video frame
from a video source. One of the ways to do this is by comparing selected
facial features from the image and a face database.
Body Language.
Composite
Entity Pattern is used to model, represent, and manage a set of
interrelated persistent objects rather than representing them as
individual fine-grained entity beans, and also a composite entity bean
represents a graph of objects.
Statistics (math)
How patterns in nature arise and inspire everything from scientific
theory to biodegradable materials.
Nature is full of patterns.
Among them are tiling patterns, which mimic what you'd see on a tiled
bathroom floor, characterized by both tiles and interfaces -- such as
grout -- in between. In nature, a giraffe's coloring is an example of a
tiling pattern. But what makes these natural patterns form?
Cycle Detection is the
algorithmic problem of finding a
cycle in a sequence of iterated function values.
Vibrations (hz)
Profiling
algorithms or mathematical techniques allow the discovery
of patterns or correlations in large quantities of data.
What happens in the sensory cortex when learning and recognizing patterns.
Learning to recognize a specific pattern or sequence, and process the
ensuing actions and outcomes, involves neurons across the whole cortex. An
animal does not, it seems, sense the world separately from what it needs
to feel in order to guide behavior.
Predictive Coding is a theory of brain function in which the brain is
constantly generating and updating a mental model of the environment. The
model is used to generate
predictions of
sensory input that are compared to
actual sensory input. This
comparison results in prediction errors that are then used to update
and revise the mental model.
Sensors - Detectors
Sensor is an object
whose
purpose is to
detect certain
events
or
detect certain
changes or
signals in its
environment, and
then
sends
the information to a computer, which then tells the
actuator or the output
devices to provide the corresponding
output. A sensor is a device that
converts real world
data or
analog into
data that a
computer can understand
using ADC or
analog to
digital converter.
All
living organisms contain biological sensors with functions similar
to those of the mechanical devices described. Most of these are
specialized cells that are
sensitive to: Light, motion, temperature,
magnetic fields, gravity, humidity, moisture, vibration, pressure,
electrical fields, sound, and other physical aspects of the external
environment. Physical aspects of the internal environment, such as
stretch, motion of the organism, and position of appendages
(proprioception). Estimation of biomolecules interaction and some kinetics
parameters. Internal
metabolic indicators, such as glucose level, oxygen
level, or osmolality.
Internal signal molecules, such as hormones,
neurotransmitters, and cytokines. Differences between proteins of the
organism itself and of the environment or alien creatures.
Medical Sensors -
Soil Sensors -
Image Sensors -
Hearing Sensors -
Speech -
Taste Sensors -
Smell Sensors -
Motion Sensors -
DIY Science Kits -
Machine Learning
Sensor
Grid integrates wireless sensor
networks with grid computing concepts
to enable real-time sensor data collection and the sharing of
computational and storage resources for sensor data processing and
management. It is an enabling technology for building large-scale
infrastructures,
integrating heterogeneous sensor, data and computational resources
deployed over a wide area, to undertake complicated surveillance tasks
such as
environmental monitoring.
Polymetric
involves using more than one meter. -
Sensor Array
-
Touch Bionics.
Sensor Fusion is combining of sensory data or data derived from
disparate sources such that the resulting information has less uncertainty
than would be possible when these sources were used individually. The term
uncertainty reduction in this case can mean more accurate, more complete,
or more dependable, or refer to the result of an emerging view, such as
stereoscopic vision (calculation of depth information by combining
two-dimensional images from two cameras at slightly different viewpoints).
The data sources for a fusion process are not specified to originate from
identical sensors. One can distinguish direct fusion, indirect fusion and
fusion of the outputs of the former two. Direct fusion is the fusion of
sensor data from a set of heterogeneous or homogeneous sensors, soft
sensors, and history values of sensor data, while indirect fusion uses
information sources like a priori knowledge about the environment and
human input. Sensor fusion is also known as (multi-sensor).
Data
Fusion is the process of integrating multiple data sources to produce
more consistent, accurate, and useful information than that provided by
any individual data source.
Information Fusion is the merging of information from heterogeneous
sources with differing conceptual, contextual and typographical
representations.
Filtering -
Cross Talk.
Wireless Sensor Network are spatially distributed autonomous sensors
to monitor physical or environmental conditions, such as temperature,
sound, pressure, etc. and to cooperatively pass their data through the
network to other locations.
Sensor
Web is a type of sensor network that is especially well suited for
environmental
monitoring.
Next generation of Networked Smart Devices can communicate directly with
one another without human intervention. It needs only a very small
amount of power to maintain this constant listening and always be on the
alert, so it still saves energy overall while extending the battery life
of the larger device. A well-designed wake-up receiver also allows the
device to be turned on from a significant distance. A sleeping device can
still suck the life out of a battery. A wake-up receiver that turns on a
device in response to incoming ultrasonic signals -- signals outside the
range that humans can hear. By working at a significantly smaller
wavelength and switching from radio waves to ultrasound, this receiver is
much smaller than similar wake-up receivers that respond to radio signals,
while operating at extremely low power and with extended range.
Enabling 'Internet of Photonic Things' with Miniature Sensors.
Swapping electrons for photons, researchers have developed wireless
sensors which are not subject to
electromagnetic
interference and are smaller and generally more flexible than the
currently electronics-based technology. Records environmental data using a
wireless photonic sensor resonator with a whispering-gallery-mode (WGM)
architecture.
Whispering-Gallery Wave are a type of wave that can travel around a
concave surface. they can exist for light and for other waves, with
important applications in nondestructive testing, lasing, cooling and
sensing, as well as in astronomy.
Batteryless smart devices closer to reality. Researchers have taken a
huge step towards making smart devices that do not use batteries or
require charging. An RFID tag is modified by cutting out a small part its
antenna (silver ribbon) and placing a small
light-sensing phototransistor
or
temperature-responsive resistor
(thermistor) on it.
RFID stands for
Radio Frequency Identification, which uses electromagnetic fields to
automatically identify and
track tags attached
to objects. The tags contain electronically-stored information. Passive
tags collect energy from a nearby RFID reader's interrogating
radio waves.
Telemetry
-
Bar Codes.
MIT Engineers configure RFID Tags to work as Sensors. Platform may
enable continuous, low-cost, reliable devices that detect chemicals in the
environment.
Sensor Fusion is
combining of sensory data or data derived from disparate sources such that
the resulting information has less uncertainty than would be possible when
these sources were used individually.
Smart machine components alert users to damage and wear. Technology to
create polymer-bonded magnets with intricate geometries and arbitrary
shapes, opening up new possibilities for manufacturing and product design.
Quantum Sensors. Researchers have created a chip on which laser light
interacts with a tiny cloud of atoms to serve as a miniature toolkit for
measuring important quantities such as length with quantum precision. The
design could be mass-produced with existing technology.
Quantum Sensor is a device that exploits quantum correlations, such as
quantum
entanglement, to achieve a sensitivity or resolution that is better
than can be achieved using only classical systems. A quantum sensor can
measure the effect of the quantum state of another system on itself. The
mere act of measurement influences the quantum state and alters the
probability and uncertainty associated with its state during measurement.
The Defense Advanced Research Projects Agency has recently launched a
research program in optical quantum sensors that seeks to exploit ideas
from quantum metrology and quantum imaging, such as quantum lithography
and the NOON state, in order to achieve these goals with optical sensor
systems such as
Lidar.
Quantum sensor is also a term used in other settings where entangled
quantum systems are exploited to make better atomic clocks or more
sensitive magnetometers. A good example of an early quantum sensor is an
APD avalanche photodiode such as the AD500-8 TO52S1 as these have been
used to detect entangled photons and in fact with additional cooling and
sensor improvements can be used where PMTs once ruled the market such as
medical imaging. These are also now being used by First Sensor in the form
of 2-D and even 3-D stacked arrays as a direct replacement for
conventional sensors based on silicon diodes.
Wearable Sensors -
Flexible Electronics -
Bio-Monitoring
-
Health Monitors -
Smart Homes
Scientists Invent Threads to Detect Gases when Woven into Clothing. A
novel fabrication method to create dyed threads that change color when
they detect a variety of gases. Woven into clothing, smart, gas-detecting
threads could provide a
reusable, washable, and affordable safety asset in medical, workplace,
military and rescue environments. The study describes the fabrication
method and its ability to extend to a wide range of dyes and detection of
complex gas mixtures.
Electric Cloth: Flexible, wearable supercapacitors based on porous
nanocarbon nanocomposites.
Tissue Paper Sensors show promise for health care, entertainment, Robotics
-
Bionics
Wearable Ultrasound Patch Monitors Blood Pressure Deep Inside Body
that non-invasively monitors blood pressure in arteries deep beneath the
skin that could help people detect
cardiovascular problems
earlier on and with greater precision. In tests, the patch performed as
well as some clinical methods to measure blood pressure.
Simple Stickers that are Wearable Medical Electronic Devices that
someone can easily attach to their skin that could save lives of patients,
athletes and lower medical costs. The devices are made out of paper to
lower the cost of personalized medicine.
Telemedicine
(doctor house calls over the internet).
Stanford engineers have developed wireless sensors that stick to the skin
to track our health. Stanford engineers have developed experimental
stickers that pick up physiological signals emanating from the skin, then
wirelessly beam these health readings to a receiver clipped onto clothing.
It’s all part of a system called BodyNet.
Wearable sensors detect what's in your sweat. New easy-to-make sensors
can provide real-time measurements of sweat rate and electrolytes and
metabolites in
perspiration.
Researchers have developed an implantable, biofuel-powered sensor that
runs on sugar and can monitor a body's biological signals to detect,
prevent and diagnose diseases.
Implantable biosensor operates without batteries. Researchers have
revealed their new biodegradable motion sensor -- paving the way for
implanted nanotechnology that could help future sports professionals
better monitor their movements to aid rapid improvements, or help
caregivers
remotely
monitor people living with dementia. The ATI's technology builds on
its previous work around triboelectric
nanogenerators (TENG), where
researchers used the technology to harness human movements and generate
small amounts of electrical energy. Combining the two means self-powered
sensors are possible without the need for chemical or wired power sources.
A Self-Powered Heart Monitor taped to the Skin called an organic
electrochemical transistor, is a human-friendly, ultra-flexible organic
sensor powered by sunlight. A type of electronic device that can be used
to measure a variety of biological functions.
Materials scientist creates fabric alternative to batteries for wearable
devices. A method for making a charge-storing system that is easily
integrated into clothing for 'embroidering a charge-storing pattern onto
any garment.' uses a micro-
supercapacitor
and combines vapor-coated conductive threads with a polymer film, plus a
special sewing technique to create a flexible mesh of aligned electrodes
on a textile backing. The resulting solid-state device has a high ability
to store charge for its size, and other characteristics that allow it to
power wearable biosensors.
Sensors that are literally ‘music to one’s ears’. Researchers have
found a new use for a 3,000-year-old African musical instrument: detecting
toxic substances
and
counterfeit medications.
The sensor, based on the mbira (pronounced 'em-bir'-uh') is inexpensive
and easy to operate, allowing its use in developing regions, the
researchers say.
Scientists develop new tool for imprinting biochips. New technology
could allow researchers to fit more biochemical probes onto a single
biochip and reduce the cost of screening and analyzing changes biochips
(also known as microarrays), which are used to screen for and analyze
biological changes associated with disease development, biothreat agents,
pollution,
toxins and other
areas of research that involve biological components.
A nanotech sensor that turns molecular fingerprints into bar codes. A
new system can detect and analyze molecules without the need for an
infrared spectrometer. The system uses
nanostructured metapixels to detect and then translate molecules' unique
signatures into bar codes. The technology can be integrated into a compact
sensor chip. It opens the door to large-scale image-based detection of
materials using machine-learning technology.
Route to Flexible Electronics made from Exotic Materials.
Cost-effective method produces semiconducting films from materials that
outperform silicon. researchers fabricated flexible films made from
gallium arsenide, gallium nitride, and lithium fluoride -- materials that
exhibit better performance than
silicon but
until now have been prohibitively expensive to produce in functional
devices. MIT researchers have devised a way to grow single crystal GaN
thin film on a GaN substrate through two-dimensional materials. The GaN
thin film is then exfoliated by a flexible substrate, showing the rainbow
color that comes from thin film interference. This technology will pave
the way to flexible electronics and the reuse of the wafers.
Molecular Probe is a group of atoms or molecules used in molecular
biology or chemistry to study the properties of other molecules or
structures. If some measurable property of the molecular probe used
changes when it interacts with the analyte (such as a change in
absorbance), the interactions between the probe and the analyte can be
studied. This makes it possible to indirectly study the properties of
compounds and structures which may be hard to study directly. The choice
of molecular probe will depend on which compound or structure is being
studied as well as on what property is of interest. Radioactive DNA or RNA
sequences are used in molecular genetics to detect the presence of a
complementary sequence by molecular hybridization.
Biochip
are essentially miniaturized laboratories that can perform hundreds or
thousands of simultaneous biochemical reactions. Biochips enable
researchers to quickly screen large numbers of biological analytes for a
variety of purposes, from disease diagnosis to detection of bioterrorism
agents. Digital microfluidic biochips have become one of the most
promising technologies in many biomedical fields. In a digital
microfluidic biochip, a group of (adjacent) cells in the
microfluidic array can be
configured to work as storage, functional operations, as well as for
transporting fluid droplets dynamically.
Plasmonic Nanoantenna Arrays could lead to the development of a new
generation of ultrasensitive and low-cost fluorescence sensors that could
be used to monitor
water quality.
Ultra-sensitive sensor with gold nanoparticle array. Scientists have
developed a new type of sensor platform using a gold nanoparticle array,
which is 100 times more sensitive than current similar sensors for air
pollution or for medical diagnostics. The sensor is made up of a series of
gold disk-shaped nanoparticles on a glass slide.
UW team shatters long-range communication barrier for devices that consume
almost no power. Sensor allows devices that run on extremely low power
for the first time to communicate over long distances.
Force-Sensing Resistor is a material whose resistance changes when a
force or
pressure is applied.
New air-pressure sensor could improve everyday devices.
Micro-electro-mechanical system - micro-switch.
Engineers Create Artificial Skin That "Feels" Temperature Changes.
Hydrogel-based scaffolds could be used for better brain-computer
interfaces. A soft, stimulating scaffold supports brain cell
development ex vivo.
Researchers demonstrate new sensors by creating novel health monitoring,
machine interface devices. Researchers have developed a stretchable
strain sensor that has an unprecedented combination of sensitivity and
range, allowing it to detect even minor changes in strain with greater
range of motion than previous technologies. The researchers demonstrated
the sensor's utility by creating new health monitoring and human-machine
interface devices.
Paving the way for sensor interfaces that are 30 times smaller.
Researchers have invented a novel class of Digital-to-Analog (DAC) and
Analog-to-Digital Converters (ADC) that can be entirely designed with a
fully-automated digital design methodology.
Human Senses -
Materials
Science (strength limits)
"Sensors could make people vulnerable
if people don't understand the function or the process of sensors. If
people don't know how to sense the world without using artificial sensors,
then people will be vulnerable to failure when the sensors are not
available or not working correctly."
New Malleable 'Electronic Skin' Self-Healable and Recyclable.
Electronic skin, known as e-skin, is a thin, translucent material that can
mimic the function and mechanical properties of human skin that can
measure pressure, temperature, humidity and air flow.
Artificial 'skin' gives robotic hand a sense of touch. UH researchers
discover new form of stretchable electronics, sensors and skins.
Bionics.
The
sensitive strain sensor that can detect the weight of a feather.
Repetition key to Self-Healing, Flexible Medical Devices. Medical
devices powered by synthetic proteins created from repeated sequences of
proteins may be possible, according to materials science and biotechnology
experts, who looked at material inspired by the proteins in squid ring
teeth.
Biosensor is an
analytical device, used for the detection of an analyte, that combines a
biological component with a physicochemical detector. The sensitive
biological element (e.g. tissue, microorganisms, organelles, cell
receptors, enzymes, antibodies, nucleic acids, etc.) is a biologically
derived material or biomimetic component that interacts (binds or
recognizes) with the analyte under study.
Miniature Technology, Big Hope for Disease Detection
Biosensors will be inexpensive, do more, go everywhere.
Food Sensors (hand held) -
Sensors for Measuring Soil
Moisture
Compact Fiber Optic Sensor offers sensitive analysis in narrow spaces.
Compact sensor would be useful for biomedical, chemical and food safety
applications. Researchers have developed a new flexible sensor with high
sensitivity that is designed to perform variety of chemical and biological
analyses in very small spaces.
Chemical Sensor is a self-contained
analytical device that can provide information about the
chemical composition of its
environment, that is, a liquid or a gas phase. The information is provided
in the form of a measurable physical signal that is correlated with the
concentration of a certain chemical species (termed as analyte).
Chemical Sensors (PDF)
Nanosensor are any
biological, chemical, or surgical sensory points used to convey
information about nanoparticles to the macroscopic world.
Synthetic Sensors:
Towards General-Purpose Sensing (youtube)
Synthesizing Chemical-Sensing Cells from scratch. Scientists create
artificial cells that can express distinct genes in response to specific
chemical signals. The scientists chose histamine as the chemical signal
for their artificial cells because it is an important biological compound
in the immune system. If you feel an itch, histamine is the likely
culprit. It is also released by the body during allergic reactions and
helps defend against foreign pathogens by spurring inflammation. To detect
histamine, they created a molecule called an RNA aptamer. RNA aptamers are
small segments of RNA building blocks that can be engineered to act as
binding agents to specific target molecules. Next, the team developed a
so-called riboswitch that would turn this signal detection into action --
specifically, translating a gene to produce a protein. Normally, cells
produce proteins when templates made of messenger RNA (mRNA) bind to
cellular structures called ribosomes. Here, the scientists used the
histamine aptamer to design a riboswitch that alters the shape of the mRNA
upon binding histamine. In the absence of histamine, the shape of the mRNA
prevents the ribosome from binding, and no protein is produced.
Histamine-bound mRNA, however, allows ribosome to bind and synthesize
proteins.
Wearable, Low-Cost Sensor to Measure Skin Hydration.
Metal Printing Offers Low-Cost Way to Make Flexible, Stretchable
Electronics.
How a $10 Microchip
Turns 2-D Ultrasound Machines to 3-D Imaging Devices (youtube)
Chip-Based Sensors with incredible sensitivity used for motion,
temperature, pressure or biochemical sensing. sensor consists of solid
spheres.
Swallowable Sensors reveal mysteries of Human Gut Health.
Smart Homes -
Smartphone Accessories
-
Medical Sensors
Sounds of action: Using ears, not just eyes, improves robot perception.
People rarely use just one sense to understand the world, but robots
usually only rely on vision and, increasingly, touch. Researchers find
that robot perception could improve markedly by adding another sense: hearing.
Vision Sensors - Image Recognition - Computer Sensors that See
Computer Vision
deals with how computers can be made to gain high-level understanding from
digital images or videos. From the perspective of engineering, it seeks to
automate tasks that the
human visual system can do.
The classical problem in computer vision, image processing, and machine
vision is that of determining whether or not the image data contains some
specific object, feature, or activity. Different
varieties of the recognition problem are described in the literature.
Facial Recognition -
Text to Image -
3D Modeling
Vision Science is the scientific study of
visual perception.
Researchers in
vision science can be
called vision scientists, especially if their research spans some of the
science's many disciplines. Vision science encompasses all studies of
vision, such as how human and
non-human organisms
process visual
information, how conscious visual perception works in humans, how to
exploit
visual perception
for effective communication, and how artificial systems can do the same
tasks.
Reading Speeds
-
Frames Per Second -
Gaze Detection -
Eye Tracking
IBM believes computers will be able to
identify images and
understand what they mean
without the use of tags. This will lead to systems that can help doctors
analyze X-ray pictures, magnetic resonance imaging (MRI) machine,
ultrasound or computerized tomography scans.
Activity
Recognition (facial recognition) -
Optical Character
Recognition
How computers learn to recognize objects instantly (video and
interactive text)
Build a TensorFlow
Image Classifier in 5 Min (youtube)
AI proved superior in assessing and diagnosing cardiac function when
compared with echocardiogram assessments made by sonographers.
Echocardiogram uses sound waves to create pictures of the heart. It is
a type of medical imaging of the heart, using standard
ultrasound or Doppler
ultrasound. This common test can show blood flow through the heart and
heart valves. Your health care provider can use the pictures from the test
to find heart disease and other heart conditions.
Hybrid AI-powered computer vision combines physics and big data.
Researchers have laid out a new approach to enhance artificial
intelligence-powered computer vision technologies by adding physics-based
awareness to data-driven techniques. The study offered an overview of a
hybrid methodology designed to improve how AI-based machinery sense,
interact and respond to its environment in real time -- as in how
autonomous vehicles move and maneuver, or how robots use the improved
technology to carry out precision actions.
Detecting threats beyond the limits of human, sensor sight. New
software system finds, tracks moving objects as small as a pixel. A new
patented software system can find the curves of motion in streaming video
and images from satellites, drones and far-range security cameras and turn
them into signals to find and track moving objects as small as one pixel.
The developers say this system can enhance the performance of any remote
sensing application.
New technique to overcome obstructed views. An algorithm can be used
to see around obstacles and, eventually, help save lives. Using a single
photograph, researchers created an algorithm that computes highly
accurate, full-color three-dimensional reconstructions of areas behind
obstacles -- a concept that can not only help prevent car crashes, but
help law enforcement experts in hostage situations, search-and-rescue and
strategic military efforts.
Image Sensor is a
sensor that detects and conveys the information that constitutes an image.
It does so by converting the variable attenuation of
light waves (as they pass
through or reflect off objects) into signals, small bursts of current that
convey the information. The waves can be light or other
electromagnetic radiation.
Image sensors are used in electronic imaging devices of both analog and
digital types, which include digital
cameras, camera modules, medical
imaging equipment, night vision equipment such as thermal imaging devices,
radar, sonar, and others. As technology changes, digital imaging tends to
replace analog imaging.
Charge-Coupled Device or CCD is a device for the movement of
electrical charge, usually from within the device to an area where the
charge can be manipulated, for example conversion into a digital value.
This is achieved by "shifting" the signals between stages within the
device one at a time. CCDs move charge between capacitive bins in the
device, with the shift allowing for the transfer of charge between bins.
Visual Search Engine is a
search engine designed to search
for information on the World Wide Web through the input of an image or a
search engine with a visual display of the search results. Information may
consist of web pages, locations, other images and other types of
documents. This type of search engines is mostly used to search on the
mobile Internet through an image of an unknown object (unknown search
query). Examples are buildings in a foreign city. These search engines
often use techniques for Content Based Image Retrieval.
Imagenet is an image database
organized according to the WordNet hierarchy (currently only the nouns),
in which each node of the hierarchy is depicted by hundreds and thousands
of images. Currently we have an average of over five hundred images per
node.
Tensorflow Image Recognition QuocNet, AlexNet, Inception (GoogLeNet),
BN-Inception-v2.
Tensor Flow
Open Source Software Library for Machine Intelligence.
Machine Learning.
Vicarious is developing
machine learning
software based on the computational principles of the human brain. Known
as the Recursive Cortical Network (RCN), it is a visual perception system
that interprets the contents of photographs and videos in a manner similar
to humans.
Recaptcha (google) -
CAPTCHA
Object Recognition technology in the field of computer
vision for finding and
identifying objects in an image or video sequence.
Humans recognize a multitude of objects in images with little effort,
despite the fact that the image of the objects may vary somewhat in
different view points, in many different sizes and scales or even when
they are translated or rotated. Objects can even be recognized when they
are partially obstructed from view. This task is still a challenge for
computer vision systems. Many approaches to the task have been implemented
over multiple decades.
PASCAL
-
Pattern Analysis, Statistical
Modeling and Computational Learning. -
VOC -
Visual Object Classes.
More capable computer vision systems, innovations in depth from
focus/defocus pave the way. In an image, estimating the distance between
objects and the camera by using the blur in the images as clue, also known
as depth from focus/defocus, is essential in computer vision. However,
model-based methods fail when texture-less surfaces are present, and
learning-based methods require the same camera settings during training
and testing. Now, researchers have come up with an innovative strategy for
depth estimation that combines the best of both the worlds to solve these
limitations, extending the applicability of depth from focus/defocus.
Top-1 error
is the proportion of the time the classifier does not provide the highest
score to the correct class.
Top-5 error rate
is the percentage of times the classifier failed to include the proper
class among its top five guesses.
Machine Learning now can outperform dermatologists at recognizing
skin
cancers in blemish photos. They can beat cardiologists in detecting
arrhythmias in EKGs.
Visual Search Engine App -
Gif
Arxiv Full
Resolution Image Compression with Recurrent Neural Networks.
QIS is the next
generation of image sensors where high-speed single-photon detection is
used to unlock new image capture capabilities for consumers and
professionals not possible with today’s devices. "
Jot"
is the specialized pixel that is sensitive enough to detect a single
photon of light. Revolutionary detection technologies are developed to
enable accurate photon-counting at room temperature without the use of
electron avalanche multiplication.
Orientation in computer vision and image
processing is when a sufficiently small image regions
can be characterized as locally one-dimensional, e.g., in terms of lines
or edges. For natural images this assumption is usually correct except at
specific points, e.g., corners or line junctions or crossings, or in
regions of high frequency textures. However, what size the regions have to
be in order to appear as one-dimensional varies both between images and
within an image. Also, in practice a local region is never exactly
one-dimensional but can be so to a sufficient degree of approximation.
Dynamic
Projection Mapping onto Deforming non-rigid Surface at 1,000 fps with 3 ms
delay (youtube)
High-Speed Projector DynaFlash
Researchers teach computers to see Optical Illusions. By making a
neural-network computer model that can be fooled by
optical illusions like
humans, the researchers advanced knowledge of the human visual system and
may help improve artificial vision.
Neuroscientists look deep into the eyes of the octopus. By studying
the visual system of an octopus, researchers hope to understand how its
brain organization compares to that of humans and other vertebrates. Their
results could provide insight into the evolution of visual systems across
species. The last common ancestor between octopuses and humans was 500
million years ago, and the species have since evolved in very different
contexts. So scientists didn't know whether the parallels in visual
systems extended beyond the eyes, or whether the octopus was instead using
completely different kinds of neurons and brain circuits to achieve
similar results. About a third of the neurons in the data didn't quite
look fully developed. The octopus brain keeps growing and adding new
neurons over the animal's lifespan. These immature neurons, not yet
integrated into brain circuits, were a sign of the brain in the process of
expanding. There are a lot of genes where we have no idea what their
function is, because we haven't sequenced the genomes of a lot of
cephalopods.
Projection Mapping is a projection technology used to turn objects,
often irregularly shaped, into a display surface for video projection.
These objects may be complex industrial landscapes, such as buildings,
small indoor objects or theatrical stages. By using specialized software,
a two- or three-dimensional object is spatially mapped on the virtual
program which mimics the real environment it is to be projected on. The
software can interact with a projector to fit any desired image onto the
surface of that object. This technique is used by artists and advertisers
alike who can add extra dimensions, optical illusions, and notions of
movement onto previously static objects. The video is commonly combined
with, or triggered by, audio to create an audio-visual narrative.
Machine Vision is the technology and methods used to provide
imaging-based automatic inspection and analysis for such applications as
automatic inspection, process control, and robot guidance, usually in
industry. Machine vision is a term encompassing a large number of
technologies, software and hardware products, integrated systems, actions,
methods and expertise. Machine vision as a systems engineering discipline
can be considered distinct from computer vision, a form of computer
science. It attempts to integrate existing technologies in new ways and
apply them to solve real world problems. The term is also used in a
broader sense by trade shows and trade groups; this broader definition
also encompasses products and applications most often associated with
image processing.
Improving image sensors for machine vision. On-chip spectrometer,
silicon nanowires determine light spectrum, angle. Researchers highlight
the latest nanostructured components integrated on image sensor chips that
are most likely to make the biggest impact in multimodal imaging and
detailed a promising approach to detect multiple-band spectra by
fabricating an on-chip spectrometer. The developments could enable
autonomous vehicles to see around corners instead of just a straight line,
biomedical imaging to detect abnormalities at different tissue depths, and
telescopes to see through interstellar dust.
A simpler path to better computer vision. Research finds using a large
collection of simple, un-curated synthetic image generation programs to
pretrain a computer vision model for image classification yields greater
accuracy than employing other pretraining methods that are more costly and
time consuming, and less scalable. Before a machine-learning model can
complete a task, such as identifying cancer in medical images, the model
must be trained. Training image classification models typically involves
showing the model millions of example images gathered into a massive
dataset.
Seeing the Light: Researchers develop new
AI system using light to learn associatively. Researchers have developed
an
on-chip optical processor capable of detecting similarities in
datasets up to 1,000 times faster than conventional machine learning
algorithms running on electronic processors.
LIDAR-Sensing
using Light. -
Self Driving Cars -
Navigation.
Acoustic Non-Line-of-Sight Imaging is a novel approach to
seeing around corners using acoustic
echoes. A system of speakers emits sound waves which scatter from a wall
to a hidden object and back.
New Depth Sensors could make self-driving cars practical Computational
method improves resolution of
time-of-flight depth sensors 1,000-fold.
New window system allows for long-term studies of brain activity. A
researcher has developed a tiny window that allows investigators to get
clearer, long-term imaging of the brain's visual network. This could help
researchers understand if neural activity "traffic jams" underlie all
kinds of visual impairments, from running a red light when visual
attention is elsewhere, to shedding light on the autism-affected brain.
A marriage of Light-Manipulation Technologies, researchers have built
a metasurface-based lens atop a Micro-Electro-Mechanical System (MEMS)
platform. The result is a new, infrared light-focusing system that
combines the best features of both technologies while reducing the size of
the optical system. combining the strengths of high-speed dynamic control
and precise spatial manipulation of wave fronts. (Metalenses).
Bioinspired camera could help self-driving cars see better by
detecting a property of light known as
polarization and featuring a
dynamic range about 10,000 times higher than today's commercial
cameras. Dynamic range is a measure of the brightest and darkest areas a
camera can capture simultaneously. With these, the camera can see better
in driving conditions such as the transition from a dark tunnel into
bright sunlight or during hazy or foggy conditions.
Camera brings unseen world to light. Portable
polarization-sensitive camera could be used in machine vision,
autonomous vehicles, security, atmospheric chemistry and more. Researchers
have developed a highly compact, portable camera that can image
polarization in a single shot. The miniature camera -- about the size of a
thumb -- could find a place in the vision systems of autonomous vehicles,
onboard planes or satellites to study atmospheric chemistry, or be used to
detect camouflaged objects.
Thermal imaging innovation allows AI to see through pitch darkness
like broad daylight. The patent-pending innovation sees texture and depth
and perceives physical attributes of people and environments.
Successful automatic landing with vision assisted navigation.
Researchers have now demonstrated a completely automatic landing with
vision assisted navigation that functions
properly without the need for ground-based systems. C2Land.
Micro-Electro-Mechanical Systems is the technology of microscopic
devices, particularly those with moving parts. It merges at the nano-scale
into nanoelectromechanical systems (NEMS) and nanotechnology. MEMS are
also referred to as micromachines in Japan, or micro systems technology
(MST) in Europe.
Inception Model Image Recognition (tensorflow)
Statistical Classification is the problem of identifying to which of a
set of
categories (sub-populations) a new observation belongs, on the basis
of a training set of data containing observations (or instances) whose
category membership is known.
Computer Algorithm that is nearly as accurate as people are at Image
Analysis of brain circuitry and neural networks.
Vrad (Radiology) -
Zebra-Med
(medical image diagnosis).
Minimalist Machine Learning Algorithms Analyze Images from Very Little
Data. CAMERA researchers develop highly efficient
neural networks for analyzing experimental scientific images from
limited training data.
Convolutional Neural Network is a class of deep, feed-forward
artificial
neural network that have successfully been applied to analyzing
visual imagery.
Transfer Learning or inductive transfer is a research problem in
machine learning that focuses on storing
knowledge gained while solving one problem and applying it to a different
but related problem. For example, knowledge gained while learning to
recognize cars could apply when trying to recognize trucks. This area of
research bears some relation to the long history of psychological
literature on transfer of learning, although formal ties between the two
fields are limited.
Optical Character Recognition is the mechanical or
electronic conversion of images of typed, handwritten or printed text into
machine-encoded text, whether from a scanned document, a photo of a
document, a scene-photo (for example the text on signs and billboards in a
landscape photo) or from subtitle text superimposed on an image (for
example from a television broadcast).
Image Classification Algorithm
Neural hardware for image recognition in nanoseconds. Usually,
artificial intelligence is based on software. Scientists created
intelligent hardware, which is much faster. Within nanoseconds, the chip
can analyze images and provide the correct output.
Translations
(language)
Question and Answer Platforms
Stable Diffusion is a
deep learning,
text-to-image model released in 2022. It is primarily used to generate
detailed images conditioned on text descriptions, though it can also be
applied to other tasks such as inpainting, outpainting, and generating
image-to-image translations guided by a text prompt.
Diffusion Models is to learn the latent structure of a dataset by
modeling the way in which data points diffuse through the latent space. In
computer vision, this means that a
neural network
is trained to denoise images blurred with Gaussian noise by learning to
reverse the diffusion process. It mainly consists of three major
components: the forward process, the reverse process, and the sampling
procedure. Three examples of generic diffusion modeling frameworks used in
computer vision are denoising diffusion probabilistic models, noise
conditioned score networks, and stochastic differential equations.
NeuralTalk is a
Python+numpy project for
learning Multimodal
Recurrent Neural Networks that
describe
images with sentences.
Computers
being able to identify images means that blind
people will see by way of
Sensory Substitution.
David Eagleman: Can we Create new Senses for Humans (video)
-
Sensory Vest
Sight Tools (seeing)
Weakness in modern computer vision systems.
Advances in Technology Provide Clearer Insight Into Brain's Visual System.
A new high-density EEG can capture the brain's neural activity at a higher
spatial resolution than ever before. This next generation
brain-interface technology is the first non-invasive,
high-resolution system of its kind, providing higher density and coverage
than any existing system.
Metalens combined with an Artificial Muscle. Artificial eye
automatically stretches to simultaneously focus and correct astigmatism
and image shift. Metalens is a lens made from a metamaterial, which is any
material that obtains its electromagnetic properties from its structure
rather than from its chemical composition; especially a material
engineered to have features of a size less than that of the wavelength of
a class of electromagnetic radiation.
Ghost Imaging is a technique that produces an image of an object by
combining information from two light detectors:
a conventional, multi-pixel detector that doesn't view the object, and a
single-pixel (bucket) detector that does view the object. Two techniques
have been demonstrated. A quantum method uses a source of pairs of
entangled photons, each pair shared between the two detectors, while a
classical method uses a pair of correlated coherent beams without
exploiting entanglement. Both approaches may be understood within the
framework of a single theory. (also called "coincidence imaging",
"two-photon imaging" or "correlated-photon imaging").
Researchers capture moving object with ghost imaging. Researchers have
developed a way to capture moving objects with the unconventional imaging
method known as ghost imaging. The new method could make the imaging
technique practical for new applications such as biomedical imaging,
security checks and video compression and storage. Ghost imaging comes
with a host of advantages, one of which is that it allows one to form an
image by illuminating the object with lower light levels than traditional
imaging approaches. However, ghost imaging has been limited to stationary
objects because it takes a long time to project the sequence of light
patterns onto the object that is necessary to reconstruct an image. This
causes images of a moving object to appear blurry.
Researchers detect and classify multiple objects without images.
High-speed method uses less computational power, could be useful for
autonomous driving. Researchers have developed a new high-speed way to
detect the location, size and category of multiple objects without
acquiring images or requiring complex scene reconstruction. Because the
new approach greatly decreases the computing power necessary for object
detection, it could be useful for identifying hazards while driving.
Today's image-free perception methods can only achieve classification,
single object recognition or tracking. To accomplish all three at once,
the researchers developed a technique known as image-free single-pixel
object detection (SPOD). In the Optica Publishing Group journal Optics
Letters, they report that SPOD can achieve an object detection accuracy of
just over 80%.
Adversarial Reprogramming of Neural Networks.
Scientists see what research participants picture in their mind’s eye.
Attention is known to modulate neural representations of perceived images.
The time needed to generate a very clear distinction between the
imagined image and the
viewed image was different for imagining a
“word” and a “landscape,” which could have something to do with the
different parts of the brain involved in imagining these two concepts.
When a machine can
see the world in
the same way that a human does, then we will have some really
cool robots.
Light processing improves robotic sensing, study finds. A team of
researchers uncovered how the human brain processes bright and contrasting
light, which they say is a key to improving robotic sensing and enabling
autonomous agents to team with humans.
Breakthrough optical sensor mimics human eye. Previous attempts to
build a human-eye type of device, called a retinomorphic sensor, have
relied on software or complex hardware. But the new sensor's operation is
part of its fundamental design, using ultrathin layers of perovskite
semiconductors -- widely studied in recent years for their solar energy
potential -- that change from strong electrical insulators to strong
conductors when placed in light. The new sensor could be a perfect match
for the neuromorphic computers that will power the next generation of
artificial intelligence in applications like self-driving cars, robotics
and advanced image recognition, Labram said. Unlike traditional computers,
which process information sequentially as a series of instructions,
neuromorphic computers are designed to emulate the human brain's massively parallel networks.
Text to Image - AI Tool Systems
DALL·E 3
understands significantly more nuance and detail than our previous
systems, allowing you to easily translate your ideas into exceptionally
accurate images. DALL·E 3 is now available to all ChatGPT Plus and
Enterprise users, and will be available via the API and in Labs later this
fall.
Chat-GPT -
Image Recognition -
Pattern Recognition -
Art Skills
Stable Diffusion
XL is a latent text-to-image diffusion model capable of generating
photo-realistic images given any text input, cultivates autonomous freedom
to produce incredible imagery, empowers billions of people to create
stunning art within seconds. Create beautiful images with our AI Image
Generator (Text to Image) for free. Let Your Creativity Flow. Create
Unique AI Generated Images in 1-Click.
Laion is a
dataset of 5,85 billion CLIP-filtered image-text pairs, 14x bigger than
LAION-400M, previously the biggest openly accessible image-text dataset in
the world.
Glaze.
Spawning AI tools
for Artists. Made by Artists.
Stable Diffusion open
software project. Download and join other developers in creating
incredible applications with Stable Diffusion as a foundation mode.
Hearing Sensors - Computer Sensors that Listen
There will also be improvements in computers' ability to
hear
and understand sound. Greater sensitivity to
sound pressure,
vibrations and
waves could lead to more-accurate landslide
warnings, for example.
Sensors.
Speech Recognition methodologies and technologies that enables the
recognition and
translation
of
spoken language
into text by computers. It is also known as "automatic speech recognition"
(ASR), "computer speech recognition", or just "speech to text" (STT). It
incorporates knowledge and research in the
linguistics,
computer science, and electrical
engineering fields.
Your voice is
measured by
Frequency, the
Wavelengths of
Sound at a specific moment.
Software breaks down your command into 25-millisecond slivers, then
converts each wavelength measurement into digestible numbers. The software
compares those sonic signatures to its catalog of sounds until its
confidence scores are high enough that it can assume that you said. The
software compares the words it thinks you've said to its stores of example
sentences, which inform how it understands syntax and vocabulary. Acoustic
and language models constantly adjust to how people use them. That's where
A.I., specifically machine learning, comes in context to be more accurate.
YouTube built an Automated Content Detection System that
prevents most unauthorized clips from appearing on its site.
Artificial intelligence produces realistic sounds that fool humans
Video-trained system from MIT’s Computer Science and Artificial
Intelligence Lab could help robots understand how objects interact with
the world.
Machine-Learning System Processes Sounds like Humans do.
Cloud DX uses AI
technology to scrutinize the audio waveform of a human cough, which allows
it to detect asthma, tuberculosis, pneumonia, and other lung diseases.
Selecting sounds: How the brain knows what to listen to. New
noninvasive approach reveals brain mechanisms of auditory attention.
All in your head: Exploring human-body communications with binaural
hearing aids. Scientists show how head tissue can be safely used as
the transmission medium for head-worn electronics.
Wearable technology
seems all poised to take over next-generation electronics, yet most
wireless communication
techniques are not up to the task. To tackle this issue, scientists
have delved deep into human-body communications, in which human tissue is
used as the transmission medium for electromagnetic signals. Their
findings pave the way to more efficient and safer
head-worn devices, such as binaural hearing aids and earphones.
Bionics.
Taste - Flavor Sensors
Computers with virtual taste buds will be able to
Calculate
flavor, according to IBM, helping chefs improve recipes or
create new ones. The systems will break down ingredients to
their respective chemicals and calculate their interactions with
neural sensors in a person's tongue and nose.
Smell - Oder Sensors
According to IBM, computers will have an acute
sense of smell in order to diagnose from a person's breath a coming cold, liver and
kidney disorders, diabetes and tuberculosis. Similar to how a
Breathalyzer
detects alcohol, the computer will be able to check for molecular
biomarkers pointing to diseases.
Machine
Olfaction is the automated simulation of the
sense of smell.
Electronic Nose is a device intended to detect
odors or flavors. Over
the last decade, "electronic sensing" or "
e-sensing" technologies have
undergone important developments from a technical and commercial point of
view. The expression "electronic sensing" refers to the capability of
reproducing human senses using sensor arrays and pattern recognition systems.
A new, highly sensitive chemical sensor uses protein nanowires.
Scientists report that they have developed bioelectronic ammonia gas
sensors that are among the most sensitive ever made. It uses
electric-charge-conducting protein nanowires derived from the bacterium
Geobacter to provide biomaterials for electrical devices. They grow
hair-like protein filaments that work as nanoscale ''wires'' to transfer
charges for their nourishment and to communicate with other bacteria.
Motion Sensors - Computer Sensors that Measure Body Language
Motion Detector is a device that detects moving objects, particularly
people. Such a device is often integrated as a component of a system that
automatically performs a task or alerts a user of motion in an area. They
form a vital component of security, automated lighting control, home
control, energy efficiency, and other useful systems.
TSA Observations.
Algorithm for Analysis of Emotion Using Body Language.
Algorithm to Interpret Body Language in Real Time. Researchers in the
United States are working on teaching computers to identify and interpret
human physical action sequences as they take place. The algorithm they are
using could also be applied to the medical sector to point up correct or
incorrect movements by patients.
Hand Body Language Gesture Recognition Based on Signals From Specialized
Glove and Machine Learning Algorithms
The Panoptic Studio is a Massively Multiview System, a Social Motion
Capture Technology for Recording
Body Language and
Movements.
Signal Processing is an enabling technology that encompasses the
fundamental theory, applications,
algorithms, and implementations of
processing or
transferring information contained in many different
physical, symbolic, or abstract formats broadly designated as
signals. It
uses mathematical, statistical, computational, heuristic, and linguistic
representations, formalisms, and techniques for representation, modelling,
analysis, synthesis, discovery, recovery,
sensing, acquisition,
extraction,
learning, security, or forensics.
Sensor
Fusion is
combining of sensory data or data
derived from disparate sources such that the resulting information has
less uncertainty than would be possible when these sources were used
individually. The term uncertainty reduction in this case can mean more
accurate, more complete, or more dependable, or refer to the result of an
emerging view, such as stereoscopic vision (calculation of depth
information by combining two-dimensional images from two cameras at
slightly different viewpoints). The data sources for a fusion process are
not specified to originate from identical sensors. One can distinguish
direct fusion, indirect fusion and fusion of the outputs of the former
two. Direct fusion is the fusion of sensor data from a set of
heterogeneous or homogeneous sensors, soft sensors, and history values of
sensor data, while indirect fusion uses information sources like a priori
knowledge about the environment and human input. Sensor fusion is also
known as (multi-sensor) data fusion and is a subset of information fusion.
Sensors.
Brain Computer Interface
Do you need a Computer Chip implanted in your Brain? You don't need a Computer Chip implanted in your Body or Brain, you can still use
jump drives,
cellphones,
paper and other devices to carry important
information with you that you need to
remember.
External Memory Devices
are amazing tools. Jumpdrives can be made to look like jewelry or credit
cards. You can even get a mini tattoo of your SS# in the
form of a
QR Code that
could link to your information. There are only a few very unique
circumstances that would require the need for a person to have a
human-implantable microchip,
like people who have
disabilities.
But the body does produce the
necessary voltage to run an extra memory device, so I guess it's just a
matter of
Sensory Substitution.
Blending human brains with computers has already been happening for over
20 years, so this is Your Brain on Computers. But you don't need to stick
a computer in your head, you can just carry one around in your pocket. You
don't have to wait for AI to have enhanced cognitive abilities, you can
have enhanced cognitive abilities right now, it's called
learning.
Human Operating System -
Interfaces -
Wearable Sensors -
Smartphones -
Wireless Communication
Brain Computer Interface helps Paralyzed Man feel again through
Mind-Controlled Robotic Arm.
Toward next-generation brain-computer interface systems. A new kind of
neural interface system that coordinates the activity of hundreds of tiny
brain sensors could one day deepen understanding of the brain and lead to
new medical therapies. Brain-computer interfaces (BCIs) are emerging
assistive devices that may one day help people with brain or spinal
injuries to move or communicate. BCI systems depend on implantable sensors
that record electrical signals in the brain and use those signals to drive
external devices like computers or robotic prosthetics. Most current BCI
systems use one or two sensors to sample up to a few hundred neurons, but
neuroscientists are interested in systems that are able to gather data
from much larger groups of brain cells. Now, a team of researchers has
taken a key step toward a new concept for a future BCI system -- one that
employs a coordinated network of independent, wireless microscale neural
sensors, each about the size of a grain of salt, to record and stimulate
brain activity. The sensors, dubbed "neurograins," independently record
the electrical pulses made by firing neurons and send the signals
wirelessly to a central hub, which coordinates and processes the signals.
The results, the researchers say, are a step toward a system that could
one day enable the recording of brain signals in unprecedented detail,
leading to new insights into how the brain works and new therapies for
people with brain or spinal injuries. In a study published on August 12 in
Nature Electronics, the research team demonstrated the use of nearly 50
such autonomous neurograins to record neural activity in a rodent. The
second challenge was developing the body-external communications hub that
receives signals from those tiny chips. The device is a thin patch, about
the size of a thumb print, that attaches to the scalp outside the skull.
It works like a miniature cellular phone tower, employing a network
protocol to coordinate the signals from the neurograins, each of which has
its own network address. The patch also supplies power wirelessly to the
neurograins, which are designed to operate using a minimal amount of
electricity.
Primates Regain Control of Paralyzed Limb.
Brain-Computer Interface Laboratory ETSU.
Brain-to-Brain Interface Demonstration (youtube).
Connecting Brains:
The BrainNet - (VPRO documentary - 2014) (youtube).
New brain-computer interface allows man with ALS to 'speak' again.
Technology developed by UC Davis Health restores interpersonal
communication. A new brain-computer interface translates brain signals
into speech with up to 97 percent accuracy. Researchers implanted sensors
in the brain of a man with severely impaired speech due to
amyotrophic
lateral sclerosis. The man was able to communicate his intended speech
within minutes of activating the system.
Direct Brain-to-Brain Interface in Humans.
Neuralink is developing
implantable brain–computer interfaces (BCIs).
Researchers Revolutionize Brain-Computer Interfaces using silicon
electronics.
"
One Chip to Rule them All."
Mind-Controlled Device helps stroke patients retrain brains to move
Paralyzed Hands.
Stroke patient improvement with a brain-computer interface.
Artificial Synapse designed for “Brain-on-a-Chip” Hardware.
Brain implant may enable communication from thoughts alone. Prosthetic
decodes signals from brain's speech center to predict what sound someone
is trying to say. A speech
prosthetic developed by a collaborative team of Duke neuroscientists,
neurosurgeons, and engineers can
translate a person's
brain signals into what they're trying to say. The new technology might
one day help people unable to talk due to neurological disorders regain
the ability to communicate through a brain-computer
interface.
Prosthetic memory system successful in humans uses a person’s own
memory patterns to facilitate the brain’s ability to encode and recall
memory.
Brain-Computer Interface enables people with Paralysis to control tablet
devices -
Brain
Gate.
First-ever successful mind-controlled robotic arm without brain implants
using a noninvasive brain-computer interface.
Ultra-small nanoprobes could be a leap forward in high-resolution
human-machine interfaces.
Neural prosthetic device can help humans restore memory. Study
highlights a promising avenue for cognitive enhancement technologies. A
team of scientists have demonstrated the first successful use of a neural
prosthetic device to recall specific memories. In the previous study, the
team's electronic prosthetic system was based on a multi-input
multi-output (MIMO) nonlinear mathematical model, and the researchers
influenced the firing patterns of multiple neurons in the hippocampus, a
part of the brain involved in making new memories. In this study,
researchers built a new model of processes that assists the hippocampus in
helping people remember specific information.
How artificial intelligence gave a paralyzed woman her voice back.
Researchers have developed a
brain-computer
interface that has enabled a woman with severe paralysis from a
brainstem stroke to speak through a digital avatar.
Scientists can now
manipulate brain cells using smartphone. A team of scientists have
invented a device that can control neural circuits using a tiny brain
implant controlled by a
smartphone.
Controlling electronic devices with brain waves. But researchers have
developed a new type of electroencephalogram (EEG) electrode that can do
just that, without the sticky gel required for conventional electrodes.
Even better, the devices work through a full head of hair. The researchers
report the flexible electrodes, which could someday be used in
brain-computer interfaces to drive cars or move
artificial limbs.
Artificial synapse that works with living cells created. Researchers
have created a device that can integrate and interact with neuron-like
cells. This could be an early step toward an artificial synapse for use in
brain-computer interfaces.
Big Improvements to Brain-Computer Interface. Newly developed “glassy
carbon” electrodes transmit more robust signals to restore motion in
people with damaged spinal cords.
New treatment allows some people with spinal cord injury to regain hand
and arm function. Almost 18,000 Americans experience traumatic
spinal cord injuries
every year. Many of these people are unable to use their hands and arms
and can't do everyday tasks such as eating, grooming or drinking water
without help. Using
physical therapy combined with a noninvasive method of
stimulating nerve cells in
the spinal cord, University of Washington researchers helped six Seattle
area participants regain some hand and arm mobility. That increased
mobility lasted at least three to six months after treatment had ended.
Tapping the Brain to boost Stroke Rehabilitation. Clinical trial
suggests brain-machine interface coupled with robot offers increased
benefits for
stroke
survivors. Stroke survivors who had ceased to benefit from conventional
rehabilitation gained clinically significant arm movement and control by
using an
external robotic
device or brain-machine interface powered by the patients' own brains.
Neuroprosthesis restores words to man with paralysis. Technology could
lead to more natural communication for people who have suffered speech
loss. Researchers have successfully developed a 'speech neuroprosthesis'
that has enabled a man with severe paralysis to communicate in sentences,
translating signals from his brain to the vocal tract directly into words
that appear as text on a screen.
Human Enhancements will require people to be
more educated to avoid the
corrupted influences of power.
Augmenting.
Body
Hacking -
Technological Convergence
Brain Computer Interfaces is a direct
communication pathway
between an enhanced or
wired brain and an external device. BCIs are often
directed at researching, mapping, assisting, augmenting, or repairing
human cognitive or
sensory-motor functions.
Brain Computer Interface
-
Li-Fi -
Electrical Brain Stimulation
Stabilizing Brain-Computer interfaces. New machine learning algorithm
reduces need for brain-computer interfaces to undergo recalibration.
Neural Engineering System Design program seeks to develop
high-resolution neurotechnology capable of mitigating the effects of
injury and disease on the visual and auditory systems of military
personnel. In addition to creating novel hardware and algorithms, the
program conducts research to understand how various forms of neural
sensing and actuation might improve restorative therapeutic outcomes.
Translator for
electrochemical
language used by neurons.
Bidirectional BCI Functionality. Brain-computer interfaces or BCIs
hold immense potential for individuals with a wide range of neurological
conditions, but the road to implementation is long and nuanced for both
the invasive and noninvasive versions of the technology. Scientists have
now successfully integrated a novel focused ultrasound stimulation to
realize bidirectional BCI that both encodes and decodes brain waves using
machine learning in a study with 25 human subjects. This work opens up a
new avenue to significantly enhance not only the signal quality, but also,
overall nonivasive BCI performance by stimulating targeted neural
circuits.
Cognichip is designed to simultaneously process and co-ordinate all
the sensory and physical inputs a brain experiences, this cognitive
platform will combine these elements to create a
virtual self.
MindMaze has always
had one mission, to decode the brain and connect it to the world. It’s not
about building brain-computer interfaces, but building human-machine
interfaces.
Wearable
Technology are clothing and accessories incorporating computer and
advanced electronic
technologies. The designs often incorporate practical functions and
features.
Wearable Technology can take basic measurements and
monitor and track body functions to give the user a better understand of
their body and increase their awareness. They can then match the
sensations they feel to the recorded data. Eventually they will be able to
teach themselves how to notice body sensations and changes and have a
better idea what may be happening in their body. A prosthesis for feeling.
And the person will not have to wear the device all the time because they
will be more aware of their body sensations and what they may mean. Ai
teaching us to be more intelligent, now that's Ai.
Wearable and
Bendable Electronics Conference -
Medical Sensors -
Bioelectronics
How To Hack A Human
Brain | VICE on HBO (youtube) - Advancements in neurotechnology are
blurring the line between biology and technology. There is an emerging
push to implant electronic devices inside the human body, hardwire them to
our brains, and allow us to not only overcome disadvantages or injury but
open up entirely new avenues of human experience.
Self-Healing
Material are artificial or synthetically-created substances that have
the built-in
ability to automatically repair damage to themselves without
any external diagnosis of the problem or human intervention. Generally,
materials will degrade over time due to fatigue, environmental conditions,
or damage incurred during operation. Cracks and other types of damage on a
microscopic level have been shown to change thermal, electrical, and
acoustical properties of materials, and the propagation of cracks can lead
to eventual failure of the material.
Bionics is the
application of
biological
methods and systems found in nature to the study and design of
engineering systems and modern technology.
The Six Million Dollar Man.
Artificial Body Parts
Novel nanoprobes show promise for optical monitoring of neural activity.
Researchers have developed ultrasensitive nanoscale optical probes to
monitor the bioelectric activity of neurons and other excitable cells.
This novel readout technology could enable scientists to study how neural
circuits function at an unprecedented scale by
monitoring large numbers of individual neurons simultaneously. It
could also lead to high-bandwidth brain-machine interfaces with
dramatically enhanced precision and functionality.
Cyborg is a being with
both organic and biomechatronic body parts. Not the same thing as bionic,
biorobot or android; it applies to an organism that has restored
function or enhanced abilities due to the integration of some artificial
component or technology that relies on some sort of feedback. While
cyborgs are commonly thought of as mammals, including humans, they might
also conceivably be any kind of organism.
Augmented.
Cyborg Olympics
Human–Animal Hybrid incorporates elements from both humans and
non-human animals.
Cybernetics is a
wide-ranging field concerned with circular causal processes such as
feedback. It is the study of human/machine interaction guided by the
principle that numerous different types of systems can be studied,
including, artificial intelligence, bionics, cognitive science, control
theory, complexity science, computer science, information theory and
robotics. Cybernetics is a
transdisciplinary approach for exploring regulatory systems—their
structures, constraints, and possibilities, the scientific study of
control and
communication in the animal and the machine.
Android is a humanoid robot or synthetic organism designed to look and
act like a human, especially one with a body having a flesh-like
resemblance.
Humanoid Robot is a robot with its body shape built to resemble the
human body. The design may be for functional purposes, such as interacting
with human tools and environments, for experimental purposes, such as the
study of bipedal locomotion, or for other purposes. In general, humanoid
robots have a torso, a head, two arms, and two legs, though some forms of
humanoid robots may model only part of the body, for example, from the
waist up. Some humanoid robots also have heads designed to replicate human
facial features such as eyes and mouths. Androids are humanoid robots
built to aesthetically resemble humans.
Robotics -
Human Operating System
Brain Implant or
Neural Implants, are
technological devices that connect directly to a biological subject's
brain – usually placed on the surface of the brain, or attached to the
brain's cortex. A common purpose of modern brain implants and the focus of
much current research is establishing a biomedical prosthesis
circumventing areas in the brain that have become dysfunctional after a
stroke or other head injuries. This includes sensory substitution, e.g.,
in vision. Other brain implants are used in animal experiments simply to
record brain activity for scientific reasons. Some brain implants involve
creating interfaces between neural systems and computer chips. This work
is part of a wider research field called brain-computer interfaces.
(Brain-computer interface research also includes technology such as EEG
arrays that allow interface between mind and machine but do not require
direct implantation of a device.) Neural implants such as deep brain
stimulation and Vagus nerve stimulation are increasingly becoming routine
for patients with Parkinson's disease and clinical depression,
respectively.
Mind Uploading is a speculative process of whole brain emulation in
which a brain scan is used to completely emulate the mental state of the
individual in a digital computer. The computer would then run a simulation
of the brain's information processing, such that it would respond in
essentially the same way as the original brain and experience having a
sentient conscious mind.
Man
Who Is Paralyzed Communicates By Imagining Handwriting. An
experimental device that turns thoughts into text has allowed a man who
was left paralyzed by an accident to swiftly construct sentences on a
computer screen, able to type with 95% accuracy just by imagining that he
was handwriting letters on a sheet of paper. He can type at about 90
characters per minute.
Neuro-Prosthetics is a discipline related to neuroscience and
biomedical engineering concerned with developing neural prostheses. They
are sometimes contrasted with a brain–computer interface, which connects
the brain to a computer rather than a device meant to replace missing
biological functionality.
Restoring
a rudimentary form of vision in the blind. Recent discoveries show
that newly developed high-resolution implants in the visual cortex make it
possible to recognize artificially induced shapes and percepts. When
electrical stimulation is delivered to the brain via an implanted
electrode, it generates the percept of a dot of light at a particular
location in visual space, known as a 'phosphene.' The team developed
high-resolution implants consisting of 1024 electrodes and implanted them
in the visual cortex of two sighted monkeys. Their goal was to create
interpretable images by delivering electrical stimulation simultaneously
via multiple electrodes, to generate a percept that was composed of
multiple phosphenes. "The number of electrodes that we have implanted in
the visual cortex, and the number of artificial pixels that we can
generate to produce high-resolution artificial images.
Powered Exoskeleton is a wearable mobile machine that is powered by a
system of electric motors, pneumatics, levers, hydraulics, or a
combination of technologies that allow for limb movement with increased
strength and endurance.
Exoskeleton Technology
Machines -
Wheel Chairs
Hybrid Assistive Limb
is a powered exoskeleton suit designed to support and expand the physical
capabilities of its users, particularly people with physical disabilities.
There are two primary versions of the system: HAL 3, which only provides
leg function, and HAL 5, which is a full-body exoskeleton for the arms,
legs, and torso.
Thought
identification refers to the empirically verified use of
technology to, in some sense, read people's minds. Advances in research
have made this possible by using human neuroimaging to decode a person's
conscious experience based on non-invasive measurements of an individual's
brain activity.
Repurpose Brain Signals.
RFID or Radio-frequency identification, uses electromagnetic
fields to automatically identify and
track tags attached to objects. The
tags contain electronically
stored information. Passive tags collect
energy from a nearby RFID reader's interrogating radio waves. Active tags
have a local power source such as a battery and may operate at hundreds of
meters from the RFID reader. Unlike a barcode, the tag need not be within
the line of sight of the reader, so it may be embedded in the tracked
object. RFID is one method for Automatic Identification and Data Capture (AIDC)
VeriChip is a human-implantable
Microchip, which is an identifying integrated circuit device
or RFID transponder encased in silicate glass and implanted in the body of
a human being. A subdermal implant typically contains a unique ID number
that can be linked to information contained in an external database, such
as personal identification, medical history, medications, allergies, and
contact information.
Subdermal Implant refers to a body modification that is placed
underneath the skin, therefore allowing the body to heal over the
implant and creating a raised design. Such implants fall under the broad
category of body modification.
Meditation for mind-control. Scientists have discovered that
mindful meditation can help subjects learn
and improve the ability to mind-control brain computer interfaces (BCIs).
A BCI is an apparatus that allows an individual to control a machine or
computer directly from their brain. Non-invasive means of control like
electroencephalogram (EEG) readings taken through the skull are safe and
convenient compared to more risky invasive methods using a brain implant,
but they take longer to learn and users ultimately vary in proficiency.
I laugh when I here people
say that soon we will be able to
upload information directly
into our brains, that is so stupid. Why would you need to do
that if you have a
smart phone or other information storage
devices that can carry all your important data with you? And the
information you can't carry, you can access it using the
internet. Besides you just can't upload information into the
brain because information has to be processed very carefully so
that the information is correctly understood. So you have to
manually and slowly input information into a human brain so that
it has time to decipher the information and learn how the
information should be used. The key word here is '
Learn', the
one word we take for granted. So detailed instructions on how to
use this information is a must. Like when you install software
into a computer. The software comes with instructions that tells
the computer how the information can be used. Then of course the
computer needs an
operating system in order to use that
information correctly. Computers will allow humans to learn
faster, but only if the instructions are detailed and accurate.
So there are two sides of learning. Choosing the best
information to learn and then creating the instructions on how
to use the information correctly. Again, the computer and the
human brain show their similarities.
Reading.
My Favorite Martian was a 1960's sitcom about a Martian stranded on
Earth, who claims to be more
intelligent than humans, but for some reason, still does stupid human
things that contradicts his claim of being more intelligent. Sadly, this
show is a reminder of how ignorant society still is, and its been over 50
years.
Tracking.
My Favorite Martian
S3 E06 Tim the Mastermind (youtube) - Taking the pills makes Tim the
smartest man on the face of the Earth. (the pills are similar to a brain
chip implant).
My Favorite Martian S1 E37 " Uncle Martins Wisdom Tooth" (youtube).
Adding Information to our DNA.
We may be limited by our biology, but
we are not limited by our intellect. And our intellect has given
rise to technology, which is an extension of our intellect. So
we have no boundaries. We have taken the first step of
controlling matter, which means we have broken through another
level of reality, so now what? What is the next step? And why
should we take it? It's not the same out there as it is in here,
we basically have to start all over again. This is a whole other
dimension. Can we live in both dimensions at the same time? And
there's word 'Time' again. Time and Space does not have a
unified definition, so time and space are like fictitious
characters in a play, they're just actors, we know nothing about
their personal lives. Even with the mind fully open I still
cannot see. So there's the problem, it's not sight, sound,
smell, taste or feel. We are so close and yet still so far away.
But still an awesome feeling, which way do we go? Lets see how
far we can go in both directions without losing contact with
each other. And what if we eventually run right back into each
other? Then what?
Can Artificial Intelligence
help
us find Human Intelligence?
"When people start learning how artificial
intelligence can learn on its own, we will actually be teaching
ourselves, and at the same time, learn how to learn more effectively."