Computers
A computer is a machine for performing calculations automatically. The word once also described an expert at calculation or at operating calculating machines.
"A Keen Impassioned Beauty of a Great Machine" - "A Bicycle for the Brain"

Everything about Computers... almost.
Hardware - IC's - Code - Software - OS - VPN - Servers - Networks - Super Computers
You can learn several different subjects at the same time when you're
learning about computers. You can learn Problem Solving, Math, Languages,
Communication, Technology, Electricity, Physics and Intelligence, just to name a few.
Basic Computer Skills - Computer Literacy - History - How Does a Computer Work? - Help
Computer Science is the study of the theory,
experimentation, and engineering that form the basis for the design and
use of computers. It is the scientific and practical approach to
computation and its applications and the systematic study of the
feasibility, structure, expression, and mechanization of the methodical
procedures (or
algorithms) that underlie the acquisition, representation,
processing, storage, communication of, and access to information. An
alternate, more succinct definition of computer science is the study of
automating algorithmic processes that scale. A computer scientist
specializes in the theory of computation and the design of computational
systems.
Computer Science Books (wiki)
List of Computer Books (wiki)
Theoretical
Computer Science
is a division or subset of general computer science and mathematics that
focuses on more abstract or mathematical aspects of computing and includes
the
theory of computation, which is the branch that deals with how
efficiently problems can be solved on a model of computation, using an
algorithm. The field is divided into three major branches: automata theory
and language, computability theory, and computational complexity theory,
which are linked by the question: "What are the fundamental capabilities
and limitations of computers?".
Doctor of Computer Science is a doctorate in Computer
Science by dissertation or multiple
research papers.
Computer Engineering is a discipline that integrates several fields of
electrical engineering and computer science
required to develop computer
hardware and
software. Computer engineers usually have training
in electronic engineering (or electrical engineering), software design,
and hardware–software integration instead of only software engineering or
electronic engineering. Computer engineers are involved in many hardware
and software aspects of computing, from the design of individual
microcontrollers, microprocessors, personal computers, and supercomputers,
to circuit design. This field of engineering not only focuses on how
computer systems themselves work, but also how they integrate into the
larger picture. Usual tasks involving computer engineers include writing
software and firmware for embedded microcontrollers, designing VLSI chips,
designing analog sensors, designing mixed signal circuit boards, and
designing
operating systems. Computer engineers are also suited for
robotics research, which relies heavily on using digital systems to
control and monitor electrical systems like motors, communications, and
sensors. In many institutions, computer engineering students are allowed
to choose areas of in-depth study in their junior and senior year, because
the full breadth of knowledge used in the design and application of
computers is beyond the scope of an undergraduate degree. Other
institutions may require engineering students to complete one or two years
of General Engineering before declaring computer engineering as their
primary focus.
Computer Architecture is a set of rules and methods that describe the
functionality, organization, and implementation of computer systems. Some
definitions of architecture define it as describing the capabilities and
programming model of a computer but not a particular implementation. In
other definitions computer architecture involves instruction set
architecture design, microarchitecture design, logic design, and
implementation.
Computer Movies
The Machine that
Changed the World - Episode II - Inventing the Future
(youtube)
HyperLand (youtube)
The Virtual Revolution (youtube)
Internet Rising (youtube)
The Code - Linux (film)
All Watched Over by Machines of Loving Grace (vimeo)
Kids Growing Up Online (PBS)
Charles Babbage
was an English polymath. A mathematician, philosopher, inventor and
mechanical engineer, Babbage is best remembered for originating the
concept of a digital programmable computer. (26 December 1791 – 18 October
1871).
List of Pioneers in Computer Science
Great Inventions
Difference Engine (youtube)
Difference Engine
is an automatic mechanical calculator designed to tabulate polynomial
functions. The name derives from the method of divided differences, a way
to interpolate or tabulate functions by using a small set of polynomial
coefficients. Most mathematical functions commonly used by engineers,
scientists and navigators, including logarithmic and trigonometric
functions, can be approximated by polynomials, so a difference engine can
compute many useful tables of numbers. The historical difficulty in
producing error-free tables by teams of mathematicians and human
"computers" spurred Charles Babbage's desire to build a mechanism to
automate the process.
Analytical Engine
was a proposed mechanical general-purpose computer designed by English
mathematician and computer pioneer Charles Babbage. It was first described
in 1837 as the successor to Babbage's difference engine, a design for a
mechanical computer. The Analytical Engine incorporated an arithmetic
logic unit, control flow in the form of conditional branching and loops,
and integrated memory, making it the first design for a general-purpose
computer that could be described in modern terms as Turing-complete. In
other words, the logical structure of the Analytical Engine was
essentially the same as that which has dominated computer design in the
electronic era. Babbage was never able to complete construction of any of
his machines due to conflicts with his chief engineer and inadequate
funding. It was not until the 1940s that the first general-purpose
computers were actually built, more than a century after Babbage had
proposed the pioneering Analytical Engine in 1837.
Computer History - Super Computers
Hardware
Hardware is the collection of physical components that
constitute a computer system. Computer hardware is the physical parts or
components of a computer, such as
monitor,
keyboard, computer data
storage,
hard disk drive (HDD), graphic card, sound card,
memory (RAM),
motherboard, and so on, all of which are tangible physical objects. By
contrast, software is instructions that can be stored and run by hardware.
Hardware is directed by the
software to execute any command or
instruction. A combination of hardware and software forms a usable
computing system.
Hardware
Architecture refers to the identification of a
system's
physical components and their interrelationships. This description, often
called a hardware design model, allows hardware designers to understand
how their components fit into a system architecture and provides to
software component designers important information needed for software
development and integration. Clear definition of a hardware architecture
allows the various traditional engineering disciplines (e.g., electrical
and mechanical engineering) to work more effectively together to develop
and manufacture new machines, devices and components.
Processors
Memory
Computer Memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term "primary storage". Computer memory operates at a high speed, for example
random-access memory (RAM), as a distinction from storage that
provides slow-to-access program and data storage but offers higher
capacities. If needed, contents of the computer memory can be transferred
to secondary storage, through a memory management technique called
"
virtual memory". An archaic synonym for memory is store. The term "
memory",
meaning "primary storage" or "main memory", is often associated with
addressable semiconductor memory, i.e.
integrated circuits
consisting of silicon-based
transistors, used for
example as primary storage but also other purposes in computers and other
digital electronic devices. There are two main kinds of semiconductor
memory,
volatile and
non-volatile. Examples of non-volatile memory are flash
memory (used as secondary memory) and
ROM,
PROM,
EPROM and
EEPROM memory
(used for storing firmware such as BIOS). Examples of volatile memory are
primary storage, which is typically dynamic random-access memory (DRAM),
and fast CPU cache memory, which is typically static random-access memory
(SRAM) that is fast but energy-consuming, offering lower memory areal
density than
DRAM. Most semiconductor memory is organized into memory
cells or bistable flip-flops, each storing one bit (0 or 1).
Flash memory
organization includes both one bit per memory cell and multiple bits per
cell (called MLC, Multiple Level Cell). The memory cells are grouped into
words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bit.
Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor
registers normally are not considered as memory, since they only store one
word and do not include an addressing mechanism. Typical secondary storage
devices are hard disk drives and
solid-state drives.
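The word-and-address arithmetic above is easy to check directly. Below is a minimal Python sketch showing how an N-bit address selects 2^N words and what that means for capacity; the 32-bit figures are illustrative examples only, not tied to any particular machine.

```python
# Minimal sketch: how an N-bit address relates to memory capacity.
# The example figures below are illustrative, not tied to any specific machine.

def addressable_words(address_bits: int) -> int:
    """An N-bit binary address can select 2**N distinct words."""
    return 2 ** address_bits

def capacity_bytes(address_bits: int, word_length_bits: int) -> int:
    """Total capacity = number of addressable words * bytes per word."""
    return addressable_words(address_bits) * (word_length_bits // 8)

if __name__ == "__main__":
    # Example: 32-bit addresses selecting 32-bit (4-byte) words.
    print(addressable_words(32))   # 4294967296 words
    print(capacity_bytes(32, 32))  # 17179869184 bytes (16 GiB)
```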
Memory Cell (computing) is the fundamental building block of computer
memory. The memory cell is an electronic circuit that stores one bit of
binary information and it must be set to store a logic 1 (high voltage
level) and reset to store a logic 0 (low voltage level). Its value is
maintained/stored until it is changed by the set/reset process. The value
in the memory cell can be accessed by reading it.
Random-Access Memory is a form of
computer data storage
which stores frequently used program instructions to increase the general
speed of a system. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory.
Non-Volatile
Memory is a type of computer memory that can retrieve stored
information even after having been power cycled (turned off and back on).
The opposite of non-volatile memory is
Volatile Memory which needs constant power in order to prevent data
from being erased.
Memory Error Correction
Conductive Bridging Random Access Memory (CBRAM) stores data in a non-volatile or near-permanent way, helping to reduce the size and power consumption of components.
Programmable Metallization Cell is a non-volatile computer memory that is a potential replacement for the widely used flash memory, providing a combination of longer lifetimes, lower power, and better memory density.
Flash Memory is a non-volatile computer storage medium that can be electrically erased and reprogrammed.
Jump Drive
Multi-Level Cell
is a memory element capable of storing more than a single bit of
information, compared to a single-level cell (SLC) which can store only
one bit per memory element. Triple-level cells (TLC) and quad-level cells
(QLC) are versions of MLC memory, which can store 3 and 4 bits per cell,
respectively. Note that due to the convention, the name "multi-level cell"
is sometimes used specifically to refer to the "two-level cell", which is
slightly confusing. Overall, the memories are named as follows:
SLC (1 bit per cell) - fastest, most reliable, but highest cost.
MLC (2 bits per cell).
TLC (3 bits per cell).
QLC (4 bits per cell) - slowest, lowest cost. Examples of MLC memories are MLC
NAND flash, MLC PCM (phase change memory), etc. For example, in SLC
NAND flash technology, each cell can exist in one of the two states,
storing one bit of information per cell. Most MLC NAND flash memory has
four possible states per cell, so it can store two bits of information per
cell. This reduces the amount of margin separating the states and results
in the possibility of more errors. Multi-level cells which are designed
for low error rates are sometimes called enterprise MLC (eMLC). There are
tools for modeling the area/latency/energy of MLC memories.
Solid-State
Storage is a type of non-volatile computer storage that stores and
retrieves digital information using only electronic circuits, without any
involvement of moving mechanical parts. This differs fundamentally from
the traditional electromechanical storage paradigm, which accesses data
using rotating or linearly moving media coated with magnetic material.
Solid-State
Drive is a solid-state storage device that uses integrated circuit
assemblies as memory to store data persistently. SSD technology primarily
uses electronic interfaces compatible with traditional block input/output
(I/O) hard disk drives (HDDs), which permit simple replacements in common
applications. New I/O interfaces like SATA Express and M.2 have been
designed to address specific requirements of the SSD technology.
SSDs have no moving mechanical components.
This distinguishes them from traditional electromechanical drives such as
hard disk drives (HDDs) or floppy disks, which contain spinning disks and
movable read/write heads. Compared with electromechanical drives, SSDs are
typically more resistant to physical shock, run silently, have quicker
access time and lower latency. However, while the price of SSDs has continued to decline over time, SSDs are (as of 2018) still more expensive
per unit of storage than HDDs and are expected to continue so into the
next decade.
Solid State Drive (amazon)
Hard Disk Drive is a data storage device that uses magnetic storage to
store and retrieve
digital
information using one or more rigid rapidly rotating disks (platters)
coated with magnetic material. The platters are paired with magnetic
heads, usually arranged on a moving actuator arm, which read and write
data to the platter surfaces. Data is accessed in a random-access manner,
meaning that individual blocks of data can be stored or retrieved in any
order and not only sequentially. HDDs are a type of non-volatile storage,
retaining stored data even when powered off.
Storage Types
NAND Gate is a logic gate which produces an output
which is false only if all its inputs are true; thus its output is the complement of that of the AND gate. A LOW (0) output results only if both
the inputs to the gate are HIGH (1); if one or both inputs are LOW (0), a
HIGH (1) output results. It is made using transistors and junction diodes.
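As a quick illustration of that truth table, here is a minimal Python sketch of a two-input NAND function; it models only the logic, not the transistor-level circuit.

```python
# Minimal sketch of NAND logic: output is LOW (0) only when both inputs are HIGH (1).
def nand(a: int, b: int) -> int:
    return 0 if (a == 1 and b == 1) else 1

# Print the truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
# 0 0 -> 1
# 0 1 -> 1
# 1 0 -> 1
# 1 1 -> 0
```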
Floating-gate MOSFET is a field-effect transistor, whose structure is
similar to a conventional MOSFET. The gate of the FGMOS is electrically
isolated, creating a floating node in DC, and a number of secondary gates
or inputs are deposited above the floating gate (FG) and are electrically
isolated from it. These inputs are only capacitively connected to the FG.
Since the FG is completely surrounded by highly resistive material, the
charge contained in it remains unchanged for long periods of time. Usually
Fowler-Nordheim tunneling and hot-carrier injection mechanisms are used to
modify the amount of charge stored in the FG.
Field-effect transistor is a transistor that uses an electric field to
control the electrical behaviour of the device. FETs are also known as
unipolar transistors since they involve single-carrier-type operation.
Many different implementations of field effect transistors exist. Field
effect transistors generally display very high input impedance at low
frequencies. The conductivity between the drain and source terminals is
controlled by an electric field in the device, which is generated by the
voltage difference between the body and the gate of the device.
Molecule that works as Flash Storage
Macronix
EEPROM
stands for electrically erasable programmable read-only memory and is a
type of non-volatile memory used in computers and other electronic devices
to store relatively small amounts of data but allowing individual bytes to
be erased and reprogrammed.
Computer Data Storage is a technology consisting of computer
components and recording media that are used to retain digital data. It is
a core function and fundamental component of computers.
Knowledge Preservation
Memory-Mapped File is a segment of virtual memory that has been
assigned a direct byte-for-byte correlation with some portion of a file or
file-like resource. This resource is typically a file that is physically
present on disk, but can also be a device, shared memory object, or other
resource that the operating system can reference through a file
descriptor. Once present, this correlation between the file and the
memory space permits applications to treat the mapped portion as if it
were primary memory.
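Python's standard mmap module exposes this idea directly: a mapped file can be read and written with ordinary slice operations, as if it were memory. The sketch below is a minimal example; the file name example.bin is hypothetical.

```python
# Hedged sketch: treating a file as if it were memory with Python's mmap module.
# "example.bin" is a hypothetical file name used only for illustration.
import mmap

with open("example.bin", "wb") as f:
    f.write(b"hello world")

with open("example.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:    # map the whole file
        print(mm[0:5])                      # read bytes like a slice of memory: b'hello'
        mm[0:5] = b"HELLO"                  # write through the mapping
        mm.flush()                          # push changes back to the file
```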
Memory-Mapped I/O and port-mapped I/O are two complementary methods of performing input/output (I/O) between the CPU and peripheral devices in a computer.
An alternative approach is using dedicated I/O processors, commonly known
as channels on mainframe computers, which execute their own instructions.
Virtual Memory is a memory management technique that is implemented
using both hardware and software. It maps memory addresses used by a
program, called virtual addresses, into physical addresses in computer
memory. Main storage, as seen by a process or task, appears as a
contiguous address space or collection of contiguous segments. The
operating system manages virtual address spaces and the assignment of real
memory to virtual memory. Address translation hardware in the CPU, often
referred to as a memory management unit or MMU, automatically translates
virtual addresses to physical addresses. Software within the operating
system may extend these capabilities to provide a virtual address space
that can exceed the capacity of real memory and thus reference more memory
than is physically present in the computer. The primary benefits of
virtual memory include freeing applications from having to manage a shared
memory space, increased security due to memory isolation, and being able
to conceptually use more memory than might be physically available, using
the technique of paging.
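The address translation step can be sketched in a few lines. The following Python example is a toy model only: the page size, page-table contents and addresses are invented, and a real MMU performs this lookup in hardware with multi-level tables and TLBs.

```python
# Minimal sketch of virtual-to-physical address translation with paging.
PAGE_SIZE = 4096  # 4 KiB pages (example value)

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_address: int) -> int:
    vpn = virtual_address // PAGE_SIZE       # virtual page number
    offset = virtual_address % PAGE_SIZE     # offset within the page
    if vpn not in page_table:
        # In a real system the OS would handle the page fault and load the page.
        raise MemoryError("page fault: page not resident")
    return page_table[vpn] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> frame 3 -> 0x3234
```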
Persistent
Memory is any method or apparatus for
efficiently
storing data structures such that they can continue to be accessed
using memory instructions or memory APIs even after the end of the process
that created or last modified them. Often confused with non-volatile
random-access memory (NVRAM), persistent memory is instead more closely
linked to the concept of persistence in its emphasis on program state that
exists outside the fault zone of the process that created it. Efficient,
memory-like access is the defining characteristic of persistent memory. It
can be provided using microprocessor memory instructions, such as load and
store. It can also be provided using APIs that implement remote direct
memory access verbs, such as RDMA read and RDMA write. Other
low-latency methods that allow byte-grain access to data also qualify.
Persistent memory capabilities extend beyond non-volatility of stored
bits. For instance, the loss of key metadata, such as page table entries
or other constructs that translate virtual addresses to physical
addresses, may render durable bits non-persistent. In this respect,
persistent memory resembles more abstract forms of computer storage, such
as file systems. In fact, almost all existing persistent memory
technologies implement at least a basic file system that can be used for
associating names or identifiers with stored extents, and at a minimum
provide file system methods that can be used for naming and allocating
such extents.
Magnetoresistive Random-Access Memory is a non-volatile random-access
memory technology available today that began its development in the 1990s.
Continued increases in density of existing memory technologies – notably
flash RAM and DRAM – kept it in a niche role in the market, but its
proponents believe that the advantages are so overwhelming that
magnetoresistive RAM will eventually become a dominant type of memory,
potentially even becoming a universal memory. It is currently in
production by Everspin, and other companies including GlobalFoundries and
Samsung have announced product plans. A recent, comprehensive review article on magnetoresistance and magnetic random access memories is available as an open access paper in Materials Today.
Universal Memory refers to a hypothetical computer data storage device
combining the cost benefits of DRAM, the speed of SRAM, the non-volatility
of flash memory along with infinite durability. Such a device, if it ever
becomes possible to develop, would have a far-reaching impact on the
computer market. Computers for most of their recent history have depended
on several different data storage technologies simultaneously as part of
their operation. Each one operates at a level in the memory hierarchy
where another would be unsuitable. A personal computer might include a few
megabytes of fast but volatile and expensive SRAM as the CPU cache,
several gigabytes of slower DRAM for program memory, and multiple hundreds
of gigabytes of the slow but non-volatile flash memory or a few terabytes
of "spinning platters" hard disk drive for long term storage.
Computer Memory (amazon)
Internal Hard Drives
(amazon)
Laptop Computers (amazon)
Desktop Computers (amazon)
Webopedia has definitions of words, phrases and abbreviations related to computing and information technology.
Motherboard
Motherboard is the main printed circuit board (PCB) found in
general purpose microcomputers and other expandable systems. It holds and
allows communication between many of the crucial electronic components of
a system, such as the central processing unit (CPU) and memory, and
provides connectors for other peripherals. Unlike a backplane, a
motherboard usually contains significant sub-systems such as the central
processor, the chipset's input/output and memory controllers, interface
connectors, and other components integrated for general purpose use.
Motherboard specifically refers to a PCB with expansion capability and, as the name suggests, this board is often referred to as the "mother"
of all components attached to it, which often include peripherals,
interface cards, and daughtercards: sound cards, video cards, network
cards, hard drives, or other forms of persistent storage; TV tuner cards,
cards providing extra USB or FireWire slots and a variety of other custom
components. Similarly, the term mainboard is applied to devices with a
single board and no additional expansions or capability, such as
controlling boards in laser printers, televisions, washing machines and
other embedded systems with limited expansion abilities.
Mother Board (image)
Printed Circuit Board mechanically supports and electrically
connects electronic components or electrical components using conductive
tracks, pads and other features etched from one or more sheet layers of
copper laminated onto and/or between sheet layers of a non-conductive
substrate. Components are generally soldered onto the PCB to both
electrically connect and mechanically fasten them to it. Printed circuit
boards are used in all but the simplest electronic products. They are also
used in some electrical products, such as passive switch boxes.
Circuit Board Components
Design (Circuit Boards)
Integrated Circuit - I.C.
Circuit Board Components
Processor
Microprocessor accepts digital or
binary data as
input,
processes
it according to instructions stored in its
memory, and provides results as output.
Central Processing Unit (CPU) carries out the instructions of a
computer
program by performing the basic
arithmetic, logical, control and input/output (I/O) operations
specified by the instructions.
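The fetch-decode-execute cycle that a processor repeats can be illustrated with a toy interpreter. The four-instruction "machine" below is invented purely for illustration; real instruction sets are far richer.

```python
# Toy sketch of the fetch-decode-execute cycle; the instruction set here
# (LOAD/ADD/PRINT/HALT) is invented purely for illustration.
program = [
    ("LOAD", 5),    # put 5 in the accumulator
    ("ADD", 7),     # add 7
    ("PRINT",),     # output the result
    ("HALT",),
]

accumulator = 0
pc = 0  # program counter

while True:
    instruction = program[pc]          # fetch
    opcode, *operands = instruction    # decode
    pc += 1
    if opcode == "LOAD":               # execute
        accumulator = operands[0]
    elif opcode == "ADD":
        accumulator += operands[0]
    elif opcode == "PRINT":
        print(accumulator)             # prints 12
    elif opcode == "HALT":
        break
```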
Coprocessor is a computer processor used to supplement the
functions of the primary processor (the CPU).
Multi-Core Processor can run multiple instructions at the
same time, increasing overall
speed for programs.
Multiprocessing
is a computer system having two or more processing units (multiple
processors) each sharing main
memory and peripherals, in order to
simultaneously process programs. It is the use of two or more central
processing units (CPUs) within a single computer system. The term also
refers to the ability of a system to support more than one processor or
the ability to allocate tasks between them. There are many variations on
this basic theme, and the definition of multiprocessing can vary with
context, mostly as a function of how CPUs are defined (multiple cores on
one
die, multiple dies in one package, multiple packages in one system
unit, etc.).
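In software, the same idea shows up as spreading independent work across several processors. A minimal sketch using Python's multiprocessing module is shown below; the squaring workload is arbitrary.

```python
# Hedged sketch: spreading independent work across multiple CPUs with
# Python's multiprocessing module (the workload here is arbitrary).
from multiprocessing import Pool, cpu_count

def square(n: int) -> int:
    return n * n

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:   # one worker process per CPU
        results = pool.map(square, range(10))   # work is divided among the workers
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```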
Graphics Processing Unit is a specialized electronic circuit designed
to rapidly manipulate and alter memory to accelerate the creation of
images in a frame buffer intended for output to a display device.
GPUs are used in embedded systems, mobile
phones, personal computers, workstations, and game consoles. Modern GPUs
are very efficient at manipulating computer graphics and image processing,
and their highly parallel structure makes them more efficient than
general-purpose CPUs for algorithms where the processing of large blocks
of data is done in parallel. In a personal computer, a GPU can be present
on a video card, or it can be embedded on the motherboard or—in certain
CPUs—on the CPU die.
NVIDIA TITAN V was marketed at its 2017 launch as the most powerful graphics card ever created for the PC.
Processor Design is the design engineering task of creating a
microprocessor, a component of computer hardware. It is a subfield of
electronics engineering and computer engineering. The design process
involves choosing an instruction set and a certain execution paradigm
(e.g. VLIW or RISC) and results in a microarchitecture described in e.g.
VHDL or Verilog. This description is then manufactured employing some of
the various semiconductor device fabrication processes. This results in a
die which is bonded onto a chip carrier. This chip carrier is then
soldered onto, or inserted into a socket on, a printed circuit board
(PCB). The mode of operation of any microprocessor is the execution of
lists of instructions. Instructions typically include those to compute or
manipulate data values using registers, change or retrieve values in
read/write memory, perform relational tests between data values and to
control program flow.
Multitasking - Batch Process
Process - Processing - Speed
Information Processor is a system (be it electrical,
mechanical or biological) which takes information (a sequence of
enumerated symbols or states) in one form and processes
(transforms) it into another form, e.g. to statistics, by an
algorithmic process. An information processing system is made up
of four basic parts, or sub-systems: input, processor, storage,
output.
Processor Affinity enables the binding and unbinding
of a process or a thread to a central processing unit.
555 timer IC is an
integrated circuit (chip) used in a
variety of timer, pulse generation, and oscillator applications. The 555
can be used to provide time delays, as an oscillator, and as a flip-flop
element. Derivatives provide two or four timing circuits in one package.
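In the common astable (oscillator) configuration, the output frequency of a 555 is usually approximated as f = 1.44 / ((R1 + 2*R2) * C). The Python sketch below just evaluates that formula; the component values are examples.

```python
# Hedged sketch: approximate output frequency of a 555 timer wired in the
# common astable (oscillator) configuration. R1, R2 and C values are examples.
def astable_frequency(r1_ohms: float, r2_ohms: float, c_farads: float) -> float:
    # Standard approximation: f = 1.44 / ((R1 + 2*R2) * C)
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Example: R1 = 10 kOhm, R2 = 100 kOhm, C = 10 uF -> roughly 0.69 Hz
print(round(astable_frequency(10_000, 100_000, 10e-6), 2))
```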
Semiconductor Design - standard cell methodology is a method of designing
application-specific integrated circuits (ASICs) with mostly digital-logic
features.
BIOS (an acronym for Basic Input/Output System and also known as the System BIOS,
ROM BIOS or PC BIOS) is a type of firmware used to perform hardware
initialization during the booting process (power-on startup) on IBM PC
compatible computers, and to provide runtime services for operating
systems and programs. The BIOS firmware is built into personal computers
(PCs), and it is the first software they run when powered on. The name
itself originates from the Basic Input/Output System used in the CP/M
operating system in 1975. Originally proprietary to the IBM PC, the BIOS
has been reverse engineered by companies looking to create compatible
systems and the interface of that original system serves as a de facto
standard.
Crystal Oscillator is an electronic oscillator circuit that
uses the mechanical resonance of a vibrating crystal of piezoelectric
material to create an electrical signal with a precise frequency.
Clock Speed typically refers to the
frequency at which a
chip like a central processing unit (CPU), one core of a multi-core
processor, is running and is used as an indicator of the processor's
speed. It is measured in clock cycles per second or its equivalent, the SI
unit hertz (Hz). The clock rate of the first generation of computers was
measured in hertz or kilohertz (kHz), but in the 21st century the speed of
modern CPUs is commonly advertised in gigahertz (GHz). This metric is most
useful when comparing processors within the same family, holding constant
other features that may impact performance. Video card and CPU
manufacturers commonly select their highest performing units from a
manufacturing batch and set their maximum clock rate higher, fetching a
higher price.
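The arithmetic behind these figures is simply the reciprocal of the frequency: a faster clock leaves less time per cycle. A small illustrative calculation:

```python
# Simple arithmetic behind clock rates: the time available for one clock
# cycle is the reciprocal of the frequency. Figures are illustrative.
def cycle_time_ns(clock_hz: float) -> float:
    return 1.0 / clock_hz * 1e9   # seconds per cycle, expressed in nanoseconds

print(cycle_time_ns(1e6))    # 1 MHz   -> 1000.0 ns per cycle
print(cycle_time_ns(3.5e9))  # 3.5 GHz -> ~0.29 ns per cycle
```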
Silicon
Photonics is the study and application of photonic systems
which use silicon as an optical medium.
Transistor
is a semiconductor device used to amplify or switch electronic signals and
electrical power. It is composed of semiconductor material usually with at
least three terminals for connection to an external circuit. A voltage or
current applied to one pair of the transistor's terminals controls the
current through another pair of terminals. Because the controlled (output)
power can be higher than the controlling (input) power, a transistor can
amplify a signal. Today, some transistors are packaged individually, but
many more are found embedded in
integrated circuits.
Memistors - Binary Code
Carbon Nanotube Field-effect Transistor refers to a
field-effect transistor that utilizes a single carbon nanotube or an array
of carbon nanotubes as the channel material instead of bulk silicon in the
traditional MOSFET structure. First demonstrated in 1998, there have been
major developments in CNTFETs since.
Analog Chip is a set of miniature electronic analog circuits
formed on a single piece of semiconductor material.
Analog Signal is any continuous
signal for which the time
varying feature (variable) of the signal is a representation of some other
time varying quantity, i.e., analogous to another time varying signal. For
example, in an
analog audio signal, the instantaneous voltage of the
signal varies continuously with the pressure of the
sound waves. It
differs from a digital signal, in which the continuous quantity is a
representation of a sequence of discrete values which can only take on one
of a finite number of values. The term analog signal usually refers to
electrical signals; however, mechanical, pneumatic, hydraulic, human
speech, and other systems may also convey or be considered analog signals.
An analog signal uses some property of the medium to convey the signal's
information. For example, an aneroid barometer uses rotary position as the
signal to convey
pressure information. In an electrical signal, the
voltage, current, or frequency of the signal may be varied to represent
the information.
Digital Signal is a
signal that is constructed from a
discrete set of
waveforms of a physical quantity so as to represent a
sequence of discrete values. A logic signal is a digital signal with only
two possible values, and describes an arbitrary bit stream. Other types of
digital signals can represent three-valued logic or higher valued logics.
Conversion
Spintronics and Nanophotonics combined in a 2-D material offer a way to convert spin information into a predictable light signal at room
temperature. The discovery brings the worlds of spintronics and
nanophotonics closer together and might lead to the development of an
energy-efficient way of processing data.
Math Works - Nimbula
Learning Tools
Digikey Electronic Components
Nand 2 Tetris
Interfaces - Brain - Robots - 3D Printing
Operating Systems - Code - Programming - Computer Courses
Online Dictionary of Computer - Technology Terms
CS Unplugged
- Computer Science without using computers.
Variable (cs)
Technology Education - Engineering
Technology Addiction - Technical Competitions
Internet
Computer Standards List (wiki)
IPv6 is the most recent version of the Internet Protocol.
IPv6
Web 2.0 (wiki)
Trouble-Shoot PC's - Fixing PC's - PC Maintenance Tips
Digital Displays
Digital Signage is a sub segment of signage. Digital signage uses technologies such as LCD, LED and Projection to display content such as digital images, video, streaming media, and information. It can be found in public spaces, transportation systems, museums, stadiums, retail stores, hotels, restaurants, and corporate buildings, etc., to provide wayfinding, exhibitions, marketing and outdoor advertising. The digital signage market is expected to grow from USD 15 billion to over USD 24 billion by 2020.
Display Device is an output device for presentation of
information in visual or
tactile form (the latter used for example in tactile electronic
displays for blind people). When the input information that is supplied
has an electrical signal, the display is called an electronic display.
Common applications for electronic visual displays are televisions or
computer monitors.
Colors - Eyes (sight) - Eye Strain
LED Display is a
flat panel display, which uses an array of light-emitting diodes as pixels
for a video display. Their brightness allows them to be used outdoors in
store signs and billboards, and in recent years they have also become
commonly used in destination signs on public transport vehicles. LED
displays are capable of providing general illumination in addition to
visual display, as when used for stage lighting or other decorative (as
opposed to informational) purposes.
Organic
Light-Emitting Diode (OLED) is a
Light-Emitting
Diode (LED) in which
the emissive electroluminescent layer is a film of organic compound that
emits light in response to an electric current. This layer of organic
semiconductor is situated between two electrodes; typically, at least one
of these electrodes is transparent. OLEDs are used to create digital
displays in devices such as television screens, computer monitors,
portable systems such as mobile phones, handheld game consoles and PDAs. A
major area of research is the development of white OLED devices for use in
solid-state lighting applications.
AMOLED is a display
technology used in smartwatches, mobile devices, laptops, and televisions.
OLED describes a specific type of thin-film-display technology in which
organic compounds form the electroluminescent material, and active matrix
refers to the technology behind the addressing of pixels.
High-Dynamic-Range Imaging is a high dynamic range (HDR) technique
used in imaging and photography to reproduce a greater dynamic range of
luminosity than is possible with standard digital imaging or photographic
techniques. The aim is to present a similar range of luminance to that
experienced through the human visual system. The human eye, through
adaptation of the iris and other methods, adjusts constantly to adapt to a
broad range of luminance present in the environment. The brain
continuously interprets this information so that a viewer can see in a
wide range of light conditions.
Graphics Display Resolution is the width and height dimensions of an
electronic visual display device, such as a computer monitor, in pixels.
Certain combinations of width and height are standardized and typically
given a name and an initialism that is descriptive of its dimensions. A
higher display resolution in a display of the same size means that
displayed content appears sharper.
4K Resolution
refers to a horizontal resolution on the order of 4,000 pixels and
vertical resolution on the order
of 2,000 pixels.
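The pixel counts behind these names are plain multiplication of width by height. A small illustrative calculation (the resolution names follow common usage):

```python
# Quick arithmetic on display resolutions: total pixels for a few common
# width x height combinations (names follow common usage).
resolutions = {
    "1080p (Full HD)": (1920, 1080),
    "1440p (QHD)": (2560, 1440),
    "4K UHD": (3840, 2160),
}

for name, (width, height) in resolutions.items():
    print(f"{name}: {width * height:,} pixels")
# 1080p (Full HD): 2,073,600 pixels
# 1440p (QHD): 3,686,400 pixels
# 4K UHD: 8,294,400 pixels
```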
Smartphones -
Tech Addiction
Computer Monitor
is an electronic visual display for computers. A monitor usually comprises
the display device, circuitry, casing, and power supply. The display
device in modern monitors is typically a thin film transistor liquid
crystal display (TFT-LCD) or a flat panel LED display, while older monitors used a cathode ray tube (CRT). It can be connected to the
computer via VGA, DVI, HDMI, DisplayPort, Thunderbolt, LVDS (Low-voltage
differential signaling) or other proprietary connectors and signals.
Durable Monitor Screens
(computers)
Liquid-Crystal Display is a flat-panel display or other electronically
modulated optical device that uses the light-modulating properties of
liquid crystals. Liquid crystals do not emit light directly, instead using
a backlight or reflector to produce images in color or monochrome. LCDs
are available to display arbitrary images (as in a general-purpose
computer display) or fixed images with low information content, which can
be displayed or hidden, such as preset words, digits, and 7-segment
displays, as in a digital clock. They use the same basic technology,
except that arbitrary images are made up of a large number of small
pixels, while other displays have larger elements. LCDs are used in a wide
range of applications including computer monitors,
televisions, instrument panels,
aircraft cockpit displays, and indoor and outdoor signage. Small LCD
screens are common in portable consumer devices such as digital cameras,
watches, calculators, and
mobile telephones, including smartphones. LCD
screens are also used on consumer electronics products such as DVD
players, video game devices and clocks. LCD screens have replaced heavy,
bulky cathode ray tube (CRT) displays in nearly all applications. LCD
screens are available in a wider range of screen sizes than CRT and plasma
displays, with LCD screens available in sizes ranging from tiny digital
watches to huge, big-screen television sets. Since LCD screens do not use
phosphors, they do not suffer image burn-in when a static image is
displayed on a screen for a long time (e.g., the table frame for an
aircraft schedule on an indoor sign). LCDs are, however, susceptible to
image persistence. The LCD screen is more energy-efficient and can be
disposed of more safely than a CRT can. Its low electrical power
consumption enables it to be used in battery-powered electronic equipment
more efficiently than CRTs can be. By 2008, annual sales of televisions
with LCD screens exceeded sales of CRT units worldwide, and the CRT became
obsolete for most purposes.
Touchscreen is an input and output device normally layered on the top
of an electronic visual display of an information processing system. A
user can give input or control the information processing system through
simple or multi-touch gestures by touching the screen with a special
stylus and/or one or more fingers. Some touchscreens use ordinary or
specially coated gloves to work while others may only work using a special
stylus/pen. The user can use the touchscreen to react to what is displayed
and to control how it is displayed; for example, zooming to increase the
text size. The touchscreen enables the user to
interact directly with what
is displayed, rather than using a mouse, touchpad, or any other such
device (other than a stylus, which is optional for most modern
touchscreens). Touchscreens are common in devices such as game consoles,
personal computers, tablet computers, electronic voting machines, point of
sale systems, and smartphones. They can also be attached to computers or,
as terminals, to networks. They also play a prominent role in the design
of digital appliances such as personal digital assistants (PDAs) and some
e-readers.
Touchscreen Features include recognizing multi-touch gestures like swipes, pinches, flicks, taps, double taps and drags; accepting touch inputs from gloved fingers, fingernails, pens, keys, credit cards, styluses, erasers, etc. Touch events can be recorded even if the user's finger does not touch the screen.
Interfaces
LCD that is paper-thin, flexible, light, tough and cheap, perhaps only costing $5 for a 5-inch screen - a flexible, paper-like display that could be updated as fast as the news cycle. Less than half a millimeter thick, the new flexi-LCD design could revolutionize printed media.
A front polarizer-free optically rewritable (ORW) liquid crystal display
(LCD).
7-Segment Display - 9-Segment Display - 14-Segment Display
Software
Software is that part of a computer
system that consists of
encoded
information or computer
instructions, in contrast to the physical
hardware from which the system is built.
Operating System -
Writing Software (word processing)
Software Engineering is the application of
engineering to the
development of software in a systematic method. Typical formal definitions
of Software Engineering are: Research, design,
develop, and test operating
systems-level software, compilers, and network distribution software for
medical, industrial, military,
communications, aerospace, business,
scientific, and general computing applications. The systematic application
of scientific and technological knowledge, methods, and experience to the
design, implementation, testing, and documentation of software; The
application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; An engineering
discipline that is concerned with all aspects of software production; And
the establishment and use of sound engineering principles in order to
economically obtain software that is
reliable and works efficiently on
real machines.
Software Architecture
refers to the high level structures of a software system, the discipline
of creating such structures, and the documentation of these structures.
These structures are needed to reason about the software system. Each
structure comprises software elements, relations among them, and
properties of both elements and relations. The architecture of a software
system is a metaphor, analogous to the architecture of a building.
Software Development is the process of computer programming,
documenting, testing, and bug fixing involved in creating and maintaining
applications and frameworks resulting in a software product. Software
development is a process of writing and maintaining the source code, but
in a broader sense, it includes all that is involved between the
conception of the desired software through to the final manifestation of
the software, sometimes in a planned and structured process. Therefore,
software development may include research, new development, prototyping,
modification, reuse, re-engineering, maintenance, or any other activities
that result in software products.
Reusability - Smart Innovation - Compatibility
Software Developer is a person concerned with facets of the
software development process, including the research, design, programming,
and testing of computer software. Other job titles which are often used
with similar meanings are programmer, software analyst, and software
engineer. According to developer Eric Sink, the differences between system
design, software development, and programming are more apparent. In the current marketplace there is a distinction between programmers and developers: one who
implements is not the same as the one who designs the class structure or
hierarchy. Even more so that developers become systems architects, those
who design the multi-leveled architecture or component interactions of a
large software system. (see also Debate over who is a software engineer).
Software Development Process is splitting of software
development work into distinct phases (or stages) containing activities
with the intent of better planning and management. It is often considered
a subset of the systems development life cycle. The methodology may
include the pre-definition of specific deliverables and artifacts that are
created and completed by a project team to develop or maintain an
application. Common methodologies include waterfall, prototyping,
iterative and incremental development, spiral development, rapid
application development, extreme programming and various types of agile
methodology. Some people consider a life-cycle "model" a more general term
for a category of methodologies and a software development "process" a
more specific term to refer to a specific process chosen by a specific
organization. For example, there are many specific software development
processes that fit the spiral life-cycle model.
Project Management.
Scrum (software development) is a framework for managing software
development. It is designed for teams of three to nine developers who
break their work into actions that can be completed within fixed duration
cycles (called "sprints"), track progress and re-plan in daily 15-minute
stand-up meetings, and collaborate to deliver workable software every
sprint. Approaches to coordinating the work of multiple scrum teams in
larger organizations include Large-Scale Scrum, Scaled Agile Framework (SAFe)
and Scrum of Scrums, among others.
Software Design is the process by which an agent creates a
specification of a software artifact, intended to accomplish goals, using
a set of primitive components and subject to constraints. Software design
may refer to either "all the activity involved in conceptualizing,
framing, implementing, commissioning, and ultimately modifying complex
systems" or "the activity following requirements specification and before
programming, as ... [in] a stylized software engineering process."
Software design usually involves problem solving and planning a software
solution. This includes both a low-level component and algorithm design
and a high-level, architecture design.
Software Design Pattern.
Agile
Software Development describes a set of principles for
software development under which requirements and solutions evolve through
the collaborative effort of self-organizing cross-functional teams. It
advocates adaptive planning, evolutionary development, early delivery, and
continuous improvement, and it encourages rapid and flexible response to
change. These principles support the definition and continuing evolution
of many software development methods.
Software Release Life Cycle is the sum of the stages of development
and maturity for a piece of computer software: ranging from its initial
development to its eventual release, and including updated versions of the
released version to help improve software or fix bugs still present in the
software.
Development Process - Develop Meaning
Software as a Service is a software licensing and delivery
model in which software is licensed on a subscription basis and is
centrally
hosted.
Computer Program is a collection of
instructions that performs a
specific task when executed by a computer. A computer requires programs to
function, and typically executes the program's instructions in a central
processing unit.
Computer Code
Instruction Set is the interface between a computer's
software and its hardware, and thereby enables the independent development
of these two computing realms; it defines the valid instructions that a
machine may execute.
Computing Platform means, in a general sense, the environment in which a piece of software is executed. It may be the hardware or the operating system
(OS), even a web browser or other application, as long as the code is
executed in it. The term computing platform can refer to different
abstraction levels, including a certain hardware architecture, an
operating system (OS), and runtime libraries. In total it can be said to
be the stage on which computer programs can run. A platform can be seen
both as a constraint on the application development process, in that
different platforms provide different functionality and restrictions; and
as an assistance to the development process, in that they provide
low-level functionality ready-made. For example, an OS may be a platform
that abstracts the underlying differences in hardware and provides a
generic command for saving files or accessing the network.
Scrum is an iterative and incremental agile software
development framework for managing product development. It defines "a
flexible, holistic product development strategy where a development team
works as a unit to reach a common goal", challenges assumptions of the
"traditional, sequential approach" to product development, and enables
teams to self-organize by encouraging physical co-location or close online
collaboration of all team members, as well as daily face-to-face
communication among all team members and disciplines involved. A key
principle of Scrum is its recognition that during product development, the
customers can change their minds about what they want and need (often
called requirements volatility), and that unpredicted challenges cannot be
easily addressed in a traditional predictive or planned manner. As such,
Scrum adopts an evidence-based empirical approach—accepting that the
problem cannot be fully understood or defined, focusing instead on
maximizing the team's ability to deliver quickly, to respond to emerging
requirements and to adapt to evolving technologies and changes in market
conditions.
Mobile Application Development
is a term used to denote the act or process by which application software
is developed for mobile devices, such as personal digital assistants,
enterprise digital assistants or mobile phones. These applications can be
pre-installed on phones during manufacturing platforms, or delivered as
web applications using server-side or client-side processing (e.g.,
JavaScript) to provide an "application-like" experience within a Web
browser. Application software developers also must consider a long array
of screen sizes, hardware specifications, and configurations because of
intense competition in mobile software and changes within each of the
platforms. Mobile app development has been steadily growing, in revenues
and jobs created. A 2013 analyst report estimates there are 529,000 direct
app economy jobs within the EU 28 members, 60% of which are mobile app
developers.
APPS (application software)
Software Testing is an
investigation conducted to provide information about the quality of
the product or service under test. Software testing can also provide an
objective, independent
view of the software to allow the business to appreciate and understand
the risks of software implementation. Test techniques include the process
of executing a program or application with the intent of finding software
bugs (
errors or other
defects), and verifying that the software
product is fit for use.
Software testing involves the execution of a software component or system
component to evaluate one or more properties of interest. In general,
these properties indicate the extent to which the component or system
under test: Meets the requirements that guided its design and development,
responds correctly to all kinds of inputs, performs its functions within
an acceptable time, is sufficiently usable, can be installed and run in
its intended environments, and achieves the general result its
stakeholders desire.
White-Box Testing is a method of testing software that tests internal
structures or workings of an application, as opposed to its functionality
(i.e. black-box testing).
Psychology -
Assessments
Black-Box Testing is a method of software testing that
examines the functionality
of an application without peering into its internal structures or
workings. This method of test can be applied virtually to every level of
software testing: unit, integration, system and acceptance. It is
sometimes referred to as specification-based testing.
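A black-box style unit test exercises only the public behaviour of the code under test. The sketch below uses Python's unittest module; the slugify function is a made-up example defined just for illustration.

```python
# Hedged sketch of black-box style unit tests: the tests exercise only the
# public behaviour of a function, not its internals. `slugify` is a made-up
# example function defined here for illustration.
import unittest

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces_are_collapsed(self):
        self.assertEqual(slugify("  Many   spaces  "), "many-spaces")

if __name__ == "__main__":
    unittest.main()
```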
A/B Testing is a term for a randomized experiment with two
variants, A and B, which are the control and variation in the controlled
experiment. A/B
testing is a form of statistical hypothesis testing with
two variants leading to the technical term, two-sample
hypothesis testing,
used in the field of
statistics.
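For conversion-rate experiments, the two-sample comparison is often done with a two-proportion z-test. The sketch below is one way to compute it in Python; all visitor and conversion counts are invented.

```python
# Hedged sketch of a two-sample comparison for an A/B test on conversion
# rates, using a standard two-proportion z-test. All counts are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return z, p_value

# Variant A: 200 conversions out of 5000 visitors; Variant B: 250 out of 5000.
z, p = two_proportion_z_test(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```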
Regression Testing
is a type of software testing which verifies that software, which was
previously developed and tested, still performs correctly after it was
changed or interfaced with other software. Changes may include software
enhancements, patches, configuration changes, etc. During regression
testing, new software bugs or regressions may be uncovered. Sometimes a
software change impact analysis is performed to determine what areas could
be affected by the proposed changes. These areas may include functional
and non-functional areas of the system.
Observations
Data-Driven Testing is a term used in the testing of
computer software to describe testing done using a table of conditions
directly as test inputs and verifiable outputs as well as the process
where test environment settings and control are not hard-coded. In the
simplest form the tester supplies the inputs from a row in the table and
expects the outputs which occur in the same row. The table typically
contains values which correspond to boundary or partition input spaces. In
the control methodology, test configuration is "read" from a database.
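In its simplest form this is just a table of inputs and expected outputs driven by one generic loop, as in the Python sketch below; the function under test and the table rows are invented for illustration.

```python
# Minimal sketch of data-driven testing: inputs and expected outputs live in
# a table, and one generic loop runs them all. The function under test and
# the table rows are invented for illustration.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9

test_table = [
    # (input_f, expected_c)
    (32.0, 0.0),
    (212.0, 100.0),
    (-40.0, -40.0),
]

for input_f, expected_c in test_table:
    result = fahrenheit_to_celsius(input_f)
    assert abs(result - expected_c) < 1e-9, f"{input_f}F -> {result}C, expected {expected_c}C"
print("all rows passed")
```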
Diagnose
Benchmark is the act of running a computer program, a set of
programs, or other operations, in order to assess the relative performance
of an object, normally by running a number of standard tests and trials
against it. The term 'benchmark' is also mostly utilized for the purposes
of elaborately designed benchmarking programs themselves.
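A minimal software benchmark can be as simple as timing the same workload two ways. The sketch below uses Python's timeit module; the list-building workload is only a stand-in.

```python
# Hedged sketch of a micro-benchmark using Python's timeit module; the two
# ways of building a list are just a stand-in workload.
import timeit

loop_version = "result = []\nfor i in range(1000):\n    result.append(i * i)"
comprehension_version = "result = [i * i for i in range(1000)]"

print("loop:         ", timeit.timeit(loop_version, number=10_000))
print("comprehension:", timeit.timeit(comprehension_version, number=10_000))
```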
Command Pattern is a behavioral design pattern in which an
object is used to encapsulate all information needed to perform an action
or trigger an event at a later time. This information includes the method
name, the object that owns the method and values for the method
parameters.
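A minimal Python sketch of that idea: each command object bundles the receiver, the method name and the parameter values so the call can be stored and triggered later. The Light class is invented for illustration.

```python
# Minimal sketch of the command pattern: each command object bundles the
# method to call, the object that owns it, and the parameter values, so the
# call can be stored and triggered later. The Light class is illustrative.
class Light:
    def set_brightness(self, percent: int) -> None:
        print(f"brightness set to {percent}%")

class Command:
    def __init__(self, receiver, method_name, *args):
        self.receiver = receiver
        self.method_name = method_name
        self.args = args

    def execute(self) -> None:
        getattr(self.receiver, self.method_name)(*self.args)

# Queue commands now, run them later (e.g. on a button press or scheduled job).
queue = [Command(Light(), "set_brightness", 75)]
for command in queue:
    command.execute()
```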
Iterative and incremental Development is any combination of
both iterative design or iterative method and incremental build model for
software development. The combination is of long standing and has been
widely suggested for large development efforts. For example, the 1985
DOD-STD-2167 mentions (in section 4.1.2): "During software development,
more than one iteration of the software development cycle may be in
progress at the same time." and "This process may be described as an
'evolutionary acquisition' or 'incremental build' approach." The
relationship between iterations and increments is determined by the
overall software development methodology and software development process.
The exact number and nature of the particular incremental builds and what
is iterated will be specific to each individual development effort.
OSI Model is a conceptual model that characterizes and
standardizes the communication functions of a telecommunication or
computing system without regard to their underlying internal structure and
technology. Its goal is the interoperability of diverse communication
systems with standard protocols. The model partitions a communication
system into abstraction layers. The original version of the model defined
seven layers. A layer serves the layer above it and is served by the layer
below it. For example, a layer that provides error-free communications
across a network provides the path needed by applications above it, while
it calls the next lower layer to send and receive packets that comprise
the contents of that path. Two instances at the same layer are visualized
as connected by a horizontal connection in that layer.
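The layering can be pictured as each layer wrapping the data handed down from the layer above. The Python sketch below is a toy model of that encapsulation; the bracketed "headers" are invented and hugely simplified.

```python
# Toy sketch of layered encapsulation in the spirit of the OSI model: each
# layer wraps the data handed down from the layer above. Header contents are
# invented and hugely simplified.
layers = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def encapsulate(payload: str) -> str:
    for layer in layers:                 # top of the stack down to the wire
        payload = f"[{layer}]{payload}"  # each layer adds its own header
    return payload

def decapsulate(frame: str) -> str:
    for layer in reversed(layers):       # strip the outermost header first
        frame = frame.replace(f"[{layer}]", "", 1)
    return frame

wire_format = encapsulate("hello")
print(wire_format)               # [Physical][Data Link]...[Application]hello
print(decapsulate(wire_format))  # hello
```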
Technology Stack is a set of software subsystems or
components needed to create a complete platform such that no additional
software is needed to support applications. Applications are said to "run
on" or "run on top of" the resulting platform. Some definitions of a
platform overlap with what is known as system software.
Abstraction Layer is a way of hiding the implementation
details of a particular set of functionality, allowing the separation of
concerns to facilitate interoperability and platform independence.
Software models that use layers of abstraction include the OSI 7-layer
model for computer network protocols, the OpenGL graphics drawing library,
and the byte stream input/output (I/O) model originated from Unix and
adopted by DOS, Linux, and most other modern operating systems.
Open Systems Interconnection is an effort to standardize
computer networking that was started in 1977 by the International
Organization for Standardization (ISO), along with the ITU-T.
Enterprise Architecture Framework defines how to create and
use an enterprise architecture. An architecture framework provides
principles and practices for creating and using the architecture
description of a system. It structures architects' thinking by dividing
the architecture description into domains, layers or views, and offers
models - typically matrices and diagrams - for documenting each view. This
allows for making systemic design decisions on all the components of the
system and making long-term decisions around new design, requirements,
sustainability and support.
Enterprise Architecture is "a well-defined practice for
conducting enterprise analysis, design, planning, and implementation,
using a holistic approach at all times, for the successful development and
execution of strategy. Enterprise architecture applies architecture
principles and practices to guide organizations through the business,
information, process, and technology changes necessary to execute their
strategies. These practices utilize the various aspects of an enterprise
to identify, motivate, and achieve these changes."
International Organization for Standardization is an
international standard-setting body composed of representatives from
various national standards organizations.
Conceptual Model is a representation of a system, made of
the composition of concepts which are used to help people know,
understand, or simulate a subject the model represents. Some models are
physical objects; for example, a toy model which may be assembled, and may
be made to work like the object it represents.
Model-Driven Engineering is a software development
methodology that focuses on creating and exploiting domain models, which
are conceptual models of all the topics related to a specific problem.
Hence, it highlights and aims at abstract representations of the knowledge
and activities that govern a particular application domain, rather than
the computing (e.g., algorithmic) concepts.
Model-Based Design is a mathematical and visual method of
addressing problems associated with designing complex control, signal
processing and communication systems. It is used in many motion control,
industrial equipment, aerospace, and automotive applications. Model-based
design is a methodology applied in designing embedded software.
Architectural Pattern
is a general, reusable solution to a commonly occurring problem in
software architecture within a given context. Architectural patterns are
similar to software design patterns but have a broader scope. The
architectural patterns address various issues in software engineering,
such as computer hardware performance limitations, high availability and
minimization of a business risk. Some architectural patterns have been
implemented within software frameworks.
Software Design Pattern is a general reusable solution to a commonly
occurring problem within a given context in software design. It is not a
finished design that can be transformed directly into source or
machine code. It is a description or
template for how to solve a problem that can be used in many different
situations. Design patterns are formalized best practices that the
programmer can use to solve common problems when designing an
application
or system.
Object-oriented design patterns typically show relationships
and interactions between classes or objects, without specifying the final
application classes or objects that are involved. Patterns that imply
mutable state may be unsuited for functional
programming languages, some
patterns can be rendered unnecessary in languages that have built-in
support for solving the problem they are trying to solve, and
object-oriented patterns are not necessarily suitable for
non-object-oriented languages. Design patterns may be viewed as a
structured approach to computer programming intermediate between the
levels of a programming paradigm and a concrete algorithm.
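A short illustrative sketch of one such pattern, the Observer pattern, in Python; the Subject class and the two observer callables are invented examples, and the exact shape of the pattern varies between languages.

# Subject keeps a list of observers and notifies them when its state changes.
class Subject:
    def __init__(self):
        self._observers = []
    def attach(self, observer):
        self._observers.append(observer)
    def notify(self, event):
        for observer in self._observers:
            observer(event)

# Observers are plain callables here; in other languages they might be
# objects implementing an Observer interface.
subject = Subject()
subject.attach(lambda event: print("logger saw:", event))
subject.attach(lambda event: print("ui refresh for:", event))
subject.notify("data updated")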
Resource-Oriented Architecture
is a style of software architecture and programming paradigm for designing
and developing software in the form of resources with "RESTful"
interfaces. These resources are software components (discrete pieces of
code and/or data structures) which can be reused for different purposes.
ROA design principles and guidelines are used during the phases of
software development and system integration.
Representational State Transfer or
RESTful Web services are one way of providing interoperability between
computer systems on the Internet. REST-compliant Web services allow
requesting systems to access and manipulate textual representations of Web
resources using a uniform and predefined set of stateless operations.
Other forms of Web service exist, which expose their own arbitrary sets of
operations such as WSDL and SOAP. (REST)
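A hedged sketch of a RESTful GET request using Python's standard library. The URL is hypothetical and stands in for whatever resource a real API exposes; the request is stateless, so everything the server needs travels with the request itself.

import json
import urllib.request

# Hypothetical endpoint used purely for illustration.
url = "https://api.example.com/books/42"

try:
    with urllib.request.urlopen(url, timeout=5) as response:
        # The response body is a textual representation of the resource.
        book = json.loads(response.read().decode("utf-8"))
        print(book)
except OSError as error:
    print("request failed (the example URL is not a real service):", error)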
Software Configuration Management is the task of
tracking and controlling changes in the software, part of the larger
cross-disciplinary field of configuration management. SCM practices
include revision control and the establishment of baselines. If something
goes wrong, SCM can determine what was changed and who changed it. If a
configuration is working well, SCM can determine how to replicate it
across many hosts.
Cucumber is a software tool that computer programmers use
for testing other software.
Selenium
Apache Maven
JWebUnit is a Java-based testing framework for web applications.
Apache JMeter is an Apache project that can be used as a
load testing tool for analyzing and measuring the performance of
a variety of services, with a focus on web applications.
Free Software
Structure
Interfaces
Matrix - Communications Protocol
Data
Learn to Code
Apps
Application Program, or app for short, is a computer program
designed to perform a group of coordinated functions, tasks, or
activities for the benefit of the user.
Application
Software is a computer program designed to perform a group
of coordinated functions, tasks, or activities for the benefit of the
user. Examples of an application include a word processor, a spreadsheet,
an accounting application, a web browser, a media player, an aeronautical
flight simulator, a console game or a photo editor. The collective noun
application software refers to all applications collectively. This
contrasts with system software, which is mainly involved with running the
computer. Applications may be bundled with the computer and its system
software or published separately, and may be coded as proprietary,
open-source or university projects. Apps built for mobile platforms are
called mobile apps.
Authoring System is a program that has pre-programmed
elements for the development of interactive multimedia software titles.
Authoring systems can be defined as software that allows its user to
create multimedia applications for manipulating multimedia objects.
Application Performance Management is the monitoring and management of
performance and availability of software applications. APM strives to
detect and diagnose complex application performance problems to maintain
an expected level of service. APM is "the translation of IT metrics into
business meaning (i.e., value)."
User Testing: from concept to launch, User Testing provides actionable
insights that enable you to create great experiences.
Validately: recruit testers, launch tests, and analyze results.
Lookback: a tool for design and user research.
Prototypes
(engineering)
Applications Interface
Create Mobile Apps
Phone Gap
Como
Sweb Apps - App Breeder
My App Builder
I Build App
Mobile Roadie
Yapp
App Makr
Best App Makers
Build Your Own Business Apps
in 3 Minutes
Gigster building your app
Google
Developer Apps
Thing Space Verizon
App Management Interface
AppCenter: The Pay-What-You-Want App Store
Health Medical Apps
Apps from Amazon
Car
Finder App
Visual Travel Tours
Audio Travel
Gate Guru App
App Brain
Trip It
Field Tripper App
Test Flight App
App Shopper
Red Laser
Portable Apps
I-nigma Bar Code Reader
More Apps
M-Pesa
Language Translators
Wikitude
Yellow Pages App
Portable Apps
What's App
Apps for Plant Lovers
Press Pad App for Digital Magazines and Publishers
Tech Fetch
Travel Tools
Cell Phones & Tools
Next Juggernaut
Rethink DB
Big in Japan
Near by Now
The Find
Milo
Apple
X Code
Quixey
Just in Mind
HyperCard is application software and a programming tool for
Apple Macintosh and Apple IIGS computers. It was among the first successful
hypermedia systems, predating the World Wide Web. It combines database
abilities with a graphical, flexible, user-modifiable interface. HyperCard
also features HyperTalk, a programming language for manipulating data and
the user interface.
Enable Cognitive Computing Features In Your App Using IBM Watson's
Language, Vision, Speech and Data APIs
Operating Systems
Operating System is system software that manages computer hardware and
software resources and provides common services for computer programs. All
computer programs, excluding firmware, require an operating system to
function.
Time-sharing operating systems
schedule tasks for efficient
use of the
system and may also include accounting software for cost
allocation of processor time, mass storage, printing, and other resources.
Timeline of Operating Systems
(wiki)
History of Operating Systems (wiki)
Types Of Operating Systems
Android
Operating System (wiki)
Red Hat Linux
Linux (wiki)
GNU
Ubuntu
Server OS
How to Dual Boot Linux on your PC
CloudReady lightweight operating system
Backup Operating System
Substitute Alternate Operating Systems
Open Source Operating Systems Comparisons (wiki)
Human Operating System (HOS)
Server Operating System
A server operating system, also called a server OS, is an
Operating System specifically designed to
run on servers, which are specialized computers that operate within a
client/server architecture to serve the requests of client computers on
the network.
The server operating system, or server OS, is the software
layer on top of which other software programs, or applications, can run on
the server hardware. Server operating systems help enable and facilitate
typical server roles such as Web server, mail server, file server,
database server, application server and print server.
Popular server
operating systems include Windows Server, Mac OS X Server, and variants of
Linux such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise
Server. The server edition of Ubuntu Linux is free.
Computing Types
Bio-inspired Computing is a field of study that loosely
knits together subfields related to the topics of connectionism, social
behaviour and emergence. It is often closely related to the field of
artificial intelligence, as many of its pursuits can be linked to machine
learning. It relies heavily on the fields of biology, computer science and
mathematics. Briefly put, it is the use of computers to model the living
phenomena, and simultaneously the study of life to improve the usage of
computers. Biologically inspired computing is a major subset of natural
computation.
Biological
Computation is the study of the computations performed by
natural biota, including the subject matter of systems biology; the design
of algorithms inspired by the computational methods of biota; the design
and engineering of manufactured computational devices using synthetic
biology components; and computer methods for the analysis of biological
data, elsewhere called computational biology. When biological computation refers
to using biology to build computers, it is a subfield of computer science
and is distinct from the interdisciplinary science of bioinformatics which
simply uses computers to better understand biology.
Computational Biology involves the development and
application of data-analytical and theoretical methods, mathematical
modeling and computational simulation techniques to the study of
biological, behavioral, and social systems. The field is broadly defined
and includes foundations in computer science, applied mathematics,
animation, statistics, biochemistry, chemistry, biophysics, molecular
biology, genetics, genomics, ecology, evolution, anatomy, neuroscience,
and visualization. Computational biology is different from biological
computation, which is a subfield of computer science and computer
engineering using bioengineering and biology to build computers, but is
similar to bioinformatics, which is an interdisciplinary science using
computers to store and process biological data.
Information
Model of Computation is the definition of the set of allowable
operations used in computation and their respective costs. It is used for
measuring the complexity of an algorithm in execution time and or memory
space: by assuming a certain model of computation, it is possible to
analyze the computational resources required or to discuss the limitations
of algorithms or computers.
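As a small illustration of the idea, the sketch below counts the comparisons made by a linear search under an assumed unit-cost model, the kind of accounting that a model of computation makes precise.

# Count the basic operations a linear search performs, assuming a unit-cost
# model in which each comparison costs 1. Under that model the worst-case
# cost grows linearly with the input size n.
def linear_search_cost(values, target):
    comparisons = 0
    for v in values:
        comparisons += 1
        if v == target:
            return comparisons
    return comparisons

print(linear_search_cost(list(range(1000)), 999))  # 1000 comparisons: worst case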
Computer Simulation -
Virtual Reality
Ubiquitous Computing is a concept in software engineering
and computer science where computing is made to appear anytime and
everywhere. In contrast to desktop computing, ubiquitous computing can
occur using any device, in any location, and in any format. A user
interacts with the computer, which can exist in many different forms,
including laptop computers, tablets and terminals in everyday objects such
as a fridge or a pair of glasses. The underlying technologies to support
ubiquitous computing include Internet, advanced middleware, operating
system, mobile code, sensors, microprocessors, new I/O and user
interfaces, networks, mobile protocols, location and positioning and new
materials.
Parallel Computing
is a type of computation in which many calculations or the execution of
processes are carried out simultaneously. Large problems can often be
divided into smaller ones, which can then be solved at the same time.
There are several different
forms of parallel computing: bit-level, instruction-level, data, and
task parallelism. Parallelism has been employed for many years, mainly in
high-performance computing, but interest in it has grown lately due to the
physical constraints preventing frequency scaling. As power consumption
(and consequently heat generation) by computers has become a concern in
recent years, parallel computing has become the dominant paradigm in
computer architecture, mainly in the form of multi-core processors.
Task
Parallelism is a form of parallelization of computer code across
multiple processors in parallel computing environments. Task parallelism
focuses on distributing tasks—concurrently performed by processes or
threads—across different processors. It contrasts to data parallelism as
another form of parallelism.
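A minimal Python sketch of task parallelism: two unrelated functions are submitted to a process pool and run concurrently on separate processors. The two functions themselves are arbitrary examples.

from concurrent.futures import ProcessPoolExecutor

# Two independent tasks distributed across worker processes; each task is a
# different function, which is what distinguishes task parallelism from
# data parallelism (the same operation applied to partitions of one dataset).
def count_primes(limit):
    return sum(1 for n in range(2, limit)
               if all(n % d for d in range(2, int(n ** 0.5) + 1)))

def sum_of_squares(limit):
    return sum(n * n for n in range(limit))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        primes = pool.submit(count_primes, 50_000)
        squares = pool.submit(sum_of_squares, 50_000)
        print(primes.result(), squares.result())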
Human Brain Parallel Processing
Human Centered Computing studies the design, development,
and deployment of mixed-initiative human-computer systems. It emerged
from the convergence of multiple disciplines that are concerned both with
understanding human beings and with the design of computational artifacts.
Human-centered computing is closely related to human-computer interaction
and information science. Human-centered computing is usually concerned
with systems and practices of technology use while human-computer
interaction is more focused on ergonomics and the usability of computing
artifacts and information science is focused on practices surrounding the
collection, manipulation, and use of information.
Cloud Computing is a type of Internet-based computing that
provides shared computer processing resources and data to computers and
other devices on demand. It is a model for enabling ubiquitous, on-demand
access to a shared pool of configurable computing resources (e.g.,
computer
networks,
servers,
storage, applications and services), which can
be rapidly provisioned and released with minimal management effort. Cloud
computing and storage solutions provide users and enterprises with various
capabilities to store and process their data in either privately owned, or
third-party data centers that may be located far from the user–ranging in
distance from across a city to across the world. Cloud computing relies on
sharing of resources to achieve coherence and economy of scale, similar to
a public utility such as the electricity grid.
Cloud Computing Tools
Reversible Computing
is a model of computing where the computational process to some
extent is reversible, i.e., time-invertible. In a computational model that
uses deterministic transitions from one state of the abstract machine to
another, a necessary condition for reversibility is that the relation of
the mapping from states to their successors must be one-to-one. Reversible
computing is generally considered an unconventional form of computing.
Adaptable -
Compatible
Natural Computing is a terminology introduced to encompass three
classes of methods: 1) those that take inspiration from nature for the
development of novel problem-solving techniques; 2) those that are based
on the use of computers to synthesize natural phenomena; and 3) those that
employ natural materials (e.g., molecules) to compute. The main fields of
research that compose these three branches are artificial neural networks,
evolutionary algorithms, swarm intelligence, artificial immune systems,
fractal geometry, artificial life, DNA computing, and quantum computing,
among others.
DNA
Computing is a branch of computing which uses DNA, biochemistry, and
molecular biology hardware, instead of the traditional silicon-based
computer technologies. Research and development in this area concerns
theory, experiments, and applications of DNA computing. The term "molectronics"
has sometimes been used, but this term had already been used for an
earlier technology, a then-unsuccessful rival of the first integrated
circuits; this term has also been used more generally, for molecular-scale
electronic technology.
UW engineers borrow from electronics to build largest circuits to date in
living eukaryotic cells. Living cells must constantly process
information to keep track of the changing world around them and arrive at
an appropriate response.
Quantum Computer
(super computers)
Networks
Computer Network is a
telecommunications network which
allows nodes to share resources. In computer networks, networked computing
devices exchange data with each other using a data link. The connections
between nodes are established using either cable media or wireless media.
The best-known computer network is the
Internet.
Server
is a computer program or a device that provides functionality for other
programs or devices, called "clients". This architecture is called the
client–server model, and a single overall computation is distributed
across multiple processes or devices. Servers can provide various
functionalities, often called "services", such as sharing data or
resources among multiple clients, or performing computation for a client.
A single server can serve multiple clients, and a single client can use
multiple servers. A client process may run on the same device or may
connect over a network to a server on a different device. Typical servers
are
Database Servers, file servers, mail servers, print servers, web
servers, game servers, and application servers.
Distributed
Computing is a field of computer science that studies distributed
systems. A distributed system is a model in which components located on
networked computers communicate and coordinate their actions by passing
messages. The components interact with each other in order to achieve a
common goal. Three significant characteristics of distributed systems are:
concurrency of components, lack of a global clock, and independent failure
of components. Examples of distributed systems vary from SOA-based systems
to massively multiplayer online games to peer-to-peer applications.
Proxy Server
is a server (a computer system or an application) that acts as an
intermediary for requests from clients seeking resources from other
servers. A client connects to the proxy server, requesting some service,
such as a file, connection, web page, or other resource available from a
different server and the proxy server evaluates the request as a way to
simplify and control its complexity. Proxies were invented to add
structure and encapsulation to distributed systems. Today, most proxies
are web proxies, facilitating access to content on the World Wide Web,
providing anonymity and may be used to bypass IP address blocking.
Automated
Server Infrastructures -
Autonomous
Network Science
is an academic field which studies complex networks such as
telecommunication networks, computer networks, biological networks,
cognitive and semantic networks, and social networks, considering distinct
elements or actors represented by nodes (or vertices) and the connections
between the elements or actors as links (or edges). The field draws on
theories and methods including graph theory from mathematics, statistical
mechanics from physics, data mining and information visualization from
computer science, inferential modeling from statistics, and social
structure from sociology. The United States National Research Council
defines network science as "the study of network representations of
physical, biological, and social phenomena leading to predictive models of
these phenomena."
Network Science
-
Network
Science
Network
Cultures
Coreos -
Omega -
Mesos
Network
Theory is the study of graphs as a representation of either
symmetric relations or, more generally, of asymmetric relations between
discrete objects. In computer science and network science, network theory
is a part of graph theory: a network can be defined as a graph in which
nodes and/or edges have attributes (e.g. names).
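A tiny Python illustration of a network as a graph with node attributes, stored as a plain adjacency list; the four nodes and their roles are made up for the example.

# A small network as a graph: nodes with attributes (roles) and undirected
# edges stored in an adjacency list.
nodes = {"A": "router", "B": "switch", "C": "server", "D": "laptop"}
edges = [("A", "B"), ("B", "C"), ("B", "D")]

adjacency = {node: [] for node in nodes}
for u, v in edges:
    adjacency[u].append(v)
    adjacency[v].append(u)

# Degree (number of links) is one of the simplest network-theory measures.
for node, neighbours in adjacency.items():
    print(f"{node} ({nodes[node]}): degree {len(neighbours)}")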
Network Monitoring is the use of a system that
constantly
monitors a computer network for slow or failing components and that
notifies the network administrator (via email, SMS or other alarms) in
case of outages or other trouble. Network monitoring is part of network
management.
Network Management is the process of administering and managing the computer networks of one
or many
organizations. Various services provided by network managers
include fault analysis, performance management, provisioning of network
and network devices, maintaining the quality of service, and so on.
Software that enables network administrators or network managers to
perform their functions is called network management software.
Network Partition refers to network decomposition into relatively
independent
subnets
for their separate optimization as well as network split due to the
failure of network devices. In both cases the partition-tolerant behavior
of subnets is expected. This means that even after a network is partitioned
into multiple sub-systems, it still works correctly. For example, in a
network with multiple subnets where nodes A and B are located in one
subnet and nodes C and D are in another, a partition occurs if the switch
between the two subnets fails. In that case nodes A and B can no longer
communicate with nodes C and D, but all nodes A-D work the same as before.
Resilience (network) is the ability to provide and maintain an
acceptable level of service in the face of faults and challenges to normal
operation.
Robustness (computer science) is the ability of a computer system to
cope with errors during execution and cope with erroneous input.
Fault-Tolerant Computer System are systems designed around the
concepts of fault tolerance. In essence, they must be able to continue
working to a level of satisfaction in the presence of faults.
Fault Tolerance is the property that enables a system to continue
operating properly in the event of the failure of (or one or more faults
within) some of its components. If its operating quality decreases at all,
the decrease is proportional to the severity of the failure, as compared
to a naively designed system in which even a small failure can cause total
breakdown. Fault tolerance is particularly sought after in
high-availability or
Life-Critical Systems. The ability of maintaining functionality when
portions of a system break down is referred to as graceful degradation.
Fail-Safe in engineering is a design feature or practice that in the
event of a specific type of failure, inherently responds in a way that
will cause no or minimal harm to other equipment, the environment or to
people.
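A hedged application-level sketch in Python of the ideas above: retry a transient fault a few times, then degrade gracefully to a fallback value rather than failing completely. The flaky_lookup function is an invented stand-in for any unreliable operation such as a network, disk, or remote-service call.

import random
import time

# Simulated unreliable operation that sometimes raises a transient error.
def flaky_lookup():
    if random.random() < 0.5:
        raise ConnectionError("simulated transient fault")
    return "fresh value"

def lookup_with_fallback(retries=3, fallback="cached value"):
    for attempt in range(retries):
        try:
            return flaky_lookup()
        except ConnectionError:
            time.sleep(0.1 * (attempt + 1))   # brief back-off between attempts
    return fallback                           # graceful degradation

print(lookup_with_fallback())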
Network Packet is a formatted unit of data carried by a
packet-switched network. Computer communications links that do not support
packets, such as traditional point-to-point telecommunications links,
simply transmit data as a bit stream. When data is formatted into packets,
packet switching is possible and the bandwidth of the communication medium
can be better shared among users than with circuit switching.
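An illustrative Python sketch of packet formatting: a made-up header layout (loosely UDP-like, but not a real standard) is packed into bytes and then parsed back out.

import struct

# Toy packet format: 2-byte source port, 2-byte destination port,
# 2-byte payload length, then the payload itself.
def build_packet(src_port, dst_port, payload: bytes) -> bytes:
    header = struct.pack("!HHH", src_port, dst_port, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    src_port, dst_port, length = struct.unpack("!HHH", packet[:6])
    return src_port, dst_port, packet[6:6 + length]

packet = build_packet(5000, 80, b"hello")
print(parse_packet(packet))   # (5000, 80, b'hello')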
Cluster Manager usually is a backend
graphical user interface
(GUI) or command-line software that runs on one or all cluster nodes (in
some cases it runs on a different server or cluster of management
servers.) The cluster manager works together with a cluster management
agent. These agents run on each node of the cluster to manage and
configure services, a set of services, or to manage and configure the
complete cluster server itself (see super computing).
In some cases the cluster manager is mostly used to dispatch work for the
cluster (or cloud) to perform. In this last case a subset of the cluster
manager can be a remote desktop application that is used not for
configuration but just to send work and get back work results from a
cluster. In other cases the cluster is more related to availability and
load balancing than to computational or specific service clusters.
Node
-
Ai
Network Administrator maintains
computer infrastructures with emphasis on networking. Responsibilities may
vary between organizations, but on-site servers, software-network
interactions as well as network integrity/resilience are the key areas of
focus.
Downstream Networking refers to data sent from a network
service provider to a customer.
Upstream Networking refers to the direction in which data
can be transferred from the client to the server (uploading).
Network Operating System is a specialized operating system
for a network device such as a router, switch or firewall. The term can
also refer to an operating system oriented to computer networking, one
that allows shared file and printer access among multiple computers in a
network and enables the sharing of data, users, groups, security,
applications, and other networking functions, typically over a local area
network (LAN) or private network. This sense is now largely
historical, as common operating systems generally now have such features
included.
Professional Services Networks are networks of independent
firms who come together to cost-effectively provide services to clients
through an organized framework.
Social Networks -
Collaborations
Value Network Analysis is a methodology for understanding,
using, visualizing, optimizing internal and external value networks and
complex economic ecosystems. The methods include visualizing sets of
relationships from a dynamic whole systems perspective. Robust network
analysis approaches are used for understanding value conversion of
financial and non-financial assets, such as intellectual capital, into
other forms of value.
Value Network is a business analysis perspective that
describes social and technical resources within and between businesses.
The nodes in a value network represent people (or roles). The nodes are
connected by interactions that represent tangible and intangible
deliverables. These
deliverables take the
form of knowledge or other intangibles and/or financial value.
Value networks exhibit
interdependence. They account for the overall worth of products and
services. Companies have both internal and external value networks.
Encapsulation Networking is a method of designing modular
communication protocols in which logically separate functions in the
network are abstracted from their underlying structures by inclusion or
information hiding within higher level objects.
Dynamic Network Analysis is an emergent scientific field
that brings together traditional social network analysis (SNA), link
analysis (LA), social simulation and multi-agent systems (MAS) within
network science and network theory.
Link Aggregation applies to various methods of combining
(aggregating) multiple network connections in parallel in order to
increase throughput beyond what a single connection could sustain, and to
provide redundancy in case one of the links should fail. A Link
Aggregation Group (LAG) combines a number of physical ports together to
make a single high-bandwidth data path, so as to implement the traffic
load sharing among the member ports in the group and to enhance the
connection reliability.
Artificial Neural Network
(ai)
Matrix (construct)
Cross Linking is a bond that links one polymer chain to
another. They can be
covalent bonds or ionic bonds.
Virtual Private Network (VPN)
Internet -
Internet Connection Types
Fiber Optics
Search Engines
Levels of Thinking - Information Technology
Network Topology is the arrangement of the various elements
(links, nodes, etc.) of a computer network. Essentially, it is the
topological structure of a network and may be depicted physically or
logically.
Routing is the process of selecting a path for traffic in a
network, or between or across multiple networks. Routing is performed for
many types of networks, including circuit-switched networks, such as the
public switched telephone network (PSTN), computer networks, such as the
Internet, as well as in networks used in public and private
transportation, such as the system of streets, roads, and highways in
national infrastructure.
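As an illustration of path selection, here is a standard Dijkstra shortest-path sketch in Python over a small made-up network whose edge weights might stand for link cost or latency.

import heapq

# Compute the cheapest path cost from a source node to every other node.
def dijkstra(graph, source):
    distances = {node: float("inf") for node in graph}
    distances[source] = 0
    queue = [(0, source)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > distances[node]:
            continue
        for neighbour, weight in graph[node].items():
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

network = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(network, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}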
Asymmetric Digital Subscriber Line is a type of digital
subscriber line (DSL) technology, a data communications technology that
enables faster data transmission over copper telephone lines than a
conventional voiceband modem can provide. ADSL differs from the less
common symmetric digital subscriber line (SDSL). In ADSL, Bandwidth and
bit rate are said to be asymmetric, meaning greater toward the customer
premises (downstream) than the reverse (upstream). Providers usually
market ADSL as a service for consumers for Internet access for primarily
downloading content from the Internet, but not serving content accessed by
others. (ADSL).
Cellular Network is a communication network where the last link is
wireless. The network is distributed over land areas called cells, each
served by at least one fixed-location transceiver, known as a cell site or
base station. This base station provides the cell with the network
coverage which can be used for transmission of voice, data and others. A
cell might use a different set of
frequencies from neighboring cells, to
avoid interference and provide guaranteed service quality within each
cell.
Wi-Fi is
a technology for wireless
local area networking
with devices based on the IEEE 802.11 standards. Wi-Fi is a trademark of
the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to
products that successfully complete interoperability certification
testing. Devices that can use Wi-Fi technology include personal computers,
video-game consoles,
phones
and tablets, digital cameras, smart TVs, digital audio players and modern
printers. Wi-Fi compatible devices can connect to the Internet via a WLAN
and a
wireless access point. Such an access point (or hotspot) has a range
of about 20 meters (66 feet) indoors and a greater range outdoors. Hotspot
coverage can be as small as a single room with walls that block radio
waves, or as large as many square kilometres achieved by using multiple
overlapping access points. Wi-Fi most commonly uses the 2.4 gigahertz (12
cm) UHF and 5.8 gigahertz (5 cm) SHF ISM radio bands. Anyone within range
with a wireless modem can attempt to access the network; because of this,
Wi-Fi is more vulnerable to attack (called eavesdropping) than wired
networks. Wi-Fi Protected Access is a family of technologies created to
protect information moving across Wi-Fi networks and includes solutions
for personal and enterprise networks. Security features of Wi-Fi Protected
Access constantly evolve to include stronger protections and new security
practices as the security landscape changes.
Microwaves.
Telephone
is a
telecommunications
device that permits two or more users to conduct a conversation when they
are too far apart to be heard directly. A
Telephone converts
sound,
typically and most efficiently the human voice, into electronic signals
suitable for transmission via cables or other transmission media over long
distances, and replays such signals simultaneously in audible form to its
user.
Tin Can Telephone
is a type of acoustic (non-electrical)
speech-transmitting device made
up of two tin cans, paper cups or similarly shaped items attached to
either end of a taut string or wire. It is a form of mechanical telephony,
where sound is converted into and then conveyed by vibrations along a
liquid or solid medium, and then reconverted back to sound.
Phone Network
Landline refers to a phone that uses a metal wire or
fibre optic
telephone line for transmission as distinguished from a mobile cellular
line, which uses radio waves for transmission.
Ethernet is a family of computer networking technologies
commonly used in local area networks (LAN), metropolitan area networks
(MAN) and wide area networks (WAN).
HomePNA is an incorporated non-profit industry association
of companies that develops and standardizes technology for home networking
over the existing coaxial cables and telephone wiring within homes, so new
wires do not need to be installed.
Communication Law is dedicated to the proposition that
freedom of speech is relevant and essential to every aspect of the
communication discipline.
Communications Act of 1934 was created for the purpose of
regulating interstate and foreign commerce in communication by wire and
radio so as to make available, so far as possible, to all the people of
the United States a rapid, efficient, nationwide, and worldwide wire and
radio communication service with adequate facilities at reasonable
charges, for the purpose of the national defense, and for the purpose of
securing a more effective execution of this policy by centralizing
authority theretofore granted by law to several agencies and by granting
additional authority with respect to interstate and foreign commerce in
wire and radio communication, there is hereby created a commission to be
known as the 'Federal Communications Commission', which shall be
constituted as hereinafter provided, and which shall execute and enforce
the provisions of this Act.
Telecommunications Policy is a framework of law directed by
government and the Regulatory Commissions, most notably the Federal
Communications Commission.
Communications Protocol
is a system of rules that allow two or more entities of a communications
system to transmit information via any kind of variation of a physical
quantity. These are the rules or standard that defines the syntax,
semantics and synchronization of communication and possible error recovery
methods. Protocols may be implemented by hardware, software, or a
combination of both.
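A toy protocol sketch in Python showing the syntax half of the definition above: a 4-byte length prefix followed by a UTF-8 JSON body, with the "type" field carrying the semantics. The format is invented for illustration and is not a real standard.

import json
import struct

# Encode a message as a 4-byte big-endian length prefix plus a JSON body.
def encode(message: dict) -> bytes:
    body = json.dumps(message).encode("utf-8")
    return struct.pack("!I", len(body)) + body

# Decode a frame back into a message dictionary.
def decode(data: bytes) -> dict:
    (length,) = struct.unpack("!I", data[:4])
    return json.loads(data[4:4 + length].decode("utf-8"))

frame = encode({"type": "ping", "seq": 1})
print(decode(frame))   # {'type': 'ping', 'seq': 1}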
Signal Corps develops, tests, provides, and manages
communications and information systems support for the command and control
of combined arms forces.
International Communications Law
consists primarily of a number of bilateral and multilateral
communications treaties.
Outline of Communication
(pdf)
Information and Communications Technology
is an extended term for information technology (IT) which stresses the
role of unified communications and the integration of telecommunications
(telephone lines and wireless signals), computers as well as necessary
enterprise software, middleware, storage, and audio-visual systems, which
enable users to access, store, transmit, and manipulate information. (ICT)
Unified Communications
is a marketing buzzword describing the integration of real-time enterprise
communication services such as instant messaging (chat), presence
information, voice (including IP telephony), mobility features (including
extension mobility and single number reach), audio, web & video
conferencing, fixed-mobile convergence (FMC), desktop sharing, data
sharing (including web connected electronic interactive whiteboards), call
control and speech recognition with non-real-time communication services
such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC
is not necessarily a single product, but a set of products that provides a
consistent unified user interface and user experience across multiple
devices and media types. In its broadest sense, UC can encompass all forms
of communications that are exchanged via a network to include other forms
of communications such as Internet Protocol Television (IPTV) and digital
signage communications as they become an integrated part of the network
communications deployment and may be directed as one-to-one communications
or broadcast communications from one to many. UC allows an individual to
send a message on one medium and receive the same communication on another
medium. For example, one can receive a voicemail message and choose to
access it through e-mail or a cell phone. If the sender is online
according to the presence information and currently accepts calls, the
response can be sent immediately through text chat or a video call.
Otherwise, it may be sent as a non-real-time message that can be accessed
through a variety of media.
Communicating Knowledge
Super Computers
Supercomputer
is a computer with a high level of computing performance
compared to a
general-purpose computer. Performance of a supercomputer is
measured in floating-point operations per second (FLOPS) instead of
million instructions per second (MIPS). As of 2015, there are
supercomputers which can perform up to quadrillions of FLOPS.
Floating
Point Operations Per Second or FLOPS, is a measure of computer
performance, useful in fields of scientific computations that require
floating-point calculations. For such cases it is a more accurate measure
than measuring instructions per second.
Floating-Point Arithmetic is arithmetic using formulaic representation
of
real numbers as an approximation so as
to support a
trade-off
between range and precision. For this reason, floating-point computation
is often found in systems which include very small and very large real
numbers, which require fast processing times. A number is, in general,
represented approximately to a fixed number of
significant digits (the significand) and scaled using an exponent in
some fixed base; the base for the scaling is normally two, ten, or
sixteen.
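A quick Python demonstration of the trade-off described above: decimal fractions are stored approximately as a significand scaled by a power of two.

import math
import sys

# Binary floating point cannot represent most decimal fractions exactly,
# which is why equality tests on floats can surprise.
print(0.1 + 0.2 == 0.3)            # False
print(0.1 + 0.2)                   # 0.30000000000000004

# A float is stored as significand * 2**exponent; frexp exposes the split.
significand, exponent = math.frexp(0.3)
print(significand, exponent)       # 0.6 -1, i.e. roughly 0.6 * 2**-1

# The machine epsilon bounds the relative rounding error of one operation.
print(sys.float_info.epsilon)      # 2.220446049250313e-16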
Petascale Computing is one quadrillion
Floating Point operations per second.
Exascale Computing is a billion billion calculations per
second.
Titan
is an upgrade of
Jaguar, a previous supercomputer at Oak Ridge, that uses graphics
processing units (GPUs) in addition to conventional central processing
units (CPUs). Titan is the first such hybrid to perform over 10 petaFLOPS.
Titan at
Oak Ridge
National Laboratory will soon be eclipsed by machines capable of
performing a
billion billion floating-point operations per second.
K Computer
meaning 10 quadrillion, is a supercomputer manufactured by Fujitsu which
is based on a distributed memory architecture with over 80,000 computer
nodes.
Nvidia DGX-2 Largest GPU, 2 petaFLOPS system that combines 16 fully
interconnected GPUs for 10X the
deep
learning performance.
Discover the
World’s Largest GPU: NVIDIA DGX-2 (youtube) -
VR
Quantum Computer studies theoretical computation systems
(quantum computers) that make direct use of
quantum-mechanical phenomena,
such as superposition and entanglement, to perform operations on data.
Quantum computers are different from binary digital electronic computers
based on
transistors. Whereas common digital computing requires that the
data be encoded into binary digits (bits), each of which is always in one
of two definite states (0 or 1), quantum computation uses quantum bits,
which can be in
superpositions of states. A quantum Turing machine is a
theoretical model of such a computer, and is also known as the universal
quantum computer. The field of quantum computing was initiated by the work
of Paul Benioff and Yuri Manin in 1980, Richard Feynman in 1982, and David
Deutsch in 1985. A quantum computer with spins as quantum bits was also
formulated for use as a quantum space–time in 1968.
Key component to scale up quantum computing invented.
Memristor.
Essential Quantum Computer Component Downsized by Two Orders of Magnitude.
Devices built to shield qubits from unwanted signals, known as
nonreciprocal devices, produce magnetic fields themselves. A traffic
roundabout for photons is only about a tenth of a millimeter in size,
and—more importantly—it is not magnetic. To receive a signal such as a
microwave photon from a qubit, while preventing noise and other spurious
signals from traveling toward the qubit, they use nonreciprocal devices,
such as
isolators or circulators. These devices control the signal traffic.
The 'roundabouts' the group has designed consist of aluminum circuits on a
silicon chip and they are the first to be based on micromechanical
oscillators: Two small silicon beams
oscillate on the chip like the
strings of a guitar and interact with the electrical circuit. These
devices are tiny in size—only about a tenth of a millimeter in diameter.
This is one of the major advantages the new component has over its
traditional predecessors, which were a few centimeters wide.
National Institute of Standards and Technology. Researchers Develop
Magnetic Switch to Turn On and Off a Strange Quantum Property. When an
electron moves around a closed path, ending up where it began, its
physical state may or may not be the same as when it left. Now, there is a
way to control the outcome, thanks to an international research group led
by
scientists at the National Institute of Standards and Technology (NIST).
Reversible Computing (defined above under Computing Types) is a model of
computing where the computational process is, to some extent, reversible,
i.e., time-invertible.
Landauer's Principle is a physical principle pertaining to the lower
theoretical limit of energy consumption of computation. It holds that "any
logically irreversible manipulation of information, such as the erasure of
a bit or the merging of two computation paths, must be accompanied by a
corresponding entropy increase in non-information-bearing degrees of
freedom of the
information-processing apparatus or its environment".
Another way of phrasing Landauer's principle is that if an observer loses
information about a physical system, the observer loses the ability to
extract work from that system. If no information is erased,
computation may in principle be achieved which is thermodynamically
reversible, and require no release of heat. This has led to considerable
interest in the study of reversible computing.
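The principle has a simple quantitative form: erasing one bit must dissipate at least k_B * T * ln(2) of energy. A short Python calculation at an assumed room temperature of 300 K:

import math

# Landauer's limit: minimum energy dissipated per erased bit.
BOLTZMANN = 1.380649e-23   # J/K
T = 300                    # kelvin, assumed room temperature

limit = BOLTZMANN * T * math.log(2)
print(f"{limit:.3e} J per erased bit")   # roughly 2.9e-21 J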
Transistor
stores a single “bit” of information.
If the transistor is “on,” it holds a 1, and if it’s “off,” it
holds a 0.
ON=1 / OFF=0
Qubit can store zeros and ones simultaneously,
or two magnetic fields at once.
Qubit is a unit of
quantum information—the quantum analogue of the classical bit. A qubit is
a two-state quantum-mechanical system, such as the polarization of a
single photon: here the two states are vertical polarization and
horizontal polarization. In a classical system, a bit would have to be in
one state or the other. However, quantum mechanics allows the qubit to be
in a superposition of both states at the same time, a property that is
fundamental to quantum computing.
(0.01
Kelvin) -
Entanglement -
Interaction Strengths -
Macroscopic
Scale -
Topology
Long-lived storage of a Photonic Qubit for worldwide Teleportation.
Light is an ideal carrier for quantum information encoded on single
photons, but transfer over long distances is inefficient and unreliable
due to losses. Direct teleportation
between the end nodes of a network
can be utilized to prevent the loss of precious quantum bits. First,
remote
entanglement
has to be created between the nodes; then, a suitable measurement on the
sender side triggers the "spooky action at a distance," i.e. the
instantaneous transport of the qubit to the receiver's node. However, the
quantum bit may be rotated when it reaches the receiver and hence has to
be reverted. To this end, the necessary information has to be classically
communicated from sender to receiver. This takes a certain amount of time,
during which the qubit has to be preserved at the receiver. Considering
two network nodes at the most distant places on earth, this corresponds to
a time span of 66 milliseconds.
Dephasing is a
mechanism that recovers classical behavior from a quantum system. It
refers to the ways in which coherence caused by perturbation decays over
time, and the system returns to the state before perturbation. It is an
important effect in molecular and atomic spectroscopy, and in the
condensed matter physics of mesoscopic devices.
Superposition Principle states that, for all linear systems, the net
response at a given place and time caused by two or more stimuli is the
sum of the responses that would have been caused by each stimulus
individually. So that if input A produces response X and input B produces
response Y, then input (A + B) produces response (X + Y).
Magnetic
Flux Quantum (wiki) -
Magnetic Flux
(wiki)
Quantum Annealing is a metaheuristic for finding the global
minimum of a given objective function over a given set of candidate
solutions (candidate states), by a process using quantum fluctuations.
Quantum annealing is used mainly for problems where the search space is
discrete (combinatorial optimization problems) with many local minima;
such as finding the ground state of a spin glass.
Monte Carlo Method are a broad class of computational algorithms that
rely on repeated random sampling to obtain numerical results. Their
essential idea is using randomness to solve problems that might be
deterministic in principle. They are often used in physical and
mathematical problems and are most useful when it is difficult or
impossible to use other approaches. Monte Carlo methods are mainly used in
three distinct problem classes: optimization, numerical integration, and
generating draws from a probability distribution.
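A classic minimal example of the method in Python, estimating pi by repeated random sampling; the sample count is arbitrary.

import random

# The fraction of random points that land inside the unit quarter-circle
# approaches pi/4 as the sample count grows.
def estimate_pi(samples=1_000_000):
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples

print(estimate_pi())   # close to 3.1416, with some run-to-run variation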
Quadratic Unconstrained Binary Optimization (wiki)
SQUID stands for
superconducting
quantum interference device, which is a very sensitive
magnetometer used to measure extremely
subtle magnetic fields, based on superconducting loops containing
Josephson junctions.
Josephson Effect is the phenomenon of supercurrent—i.e. a current that
flows indefinitely long without any voltage applied—across a device known
as a Josephson junction (JJ), which consists of two superconductors
coupled by a weak link. The weak link can consist of a thin insulating
barrier (known as a superconductor–insulator–superconductor junction, or
S-I-S), a short
section of non-superconducting metal (S-N-S), or a
physical constriction that weakens the superconductivity at the point of
contact (S-s-S).
Superconducting Tunnel Junction is an electronic device consisting of
two superconductors separated by a very thin layer of
insulating
material. Current passes through the junction via the process of quantum
tunneling. The STJ is a type of Josephson junction, though not all the
properties of the STJ are described by the Josephson effect. These devices
have a wide range of applications, including high-sensitivity detectors of
electromagnetic radiation, magnetometers, high speed digital circuit
elements, and quantum computing circuits.
Quantum Tunnelling refers to the
quantum mechanical phenomenon where a particle tunnels through a
barrier that it classically could not surmount. This plays an essential
role in several physical phenomena, such as the nuclear fusion that occurs
in main sequence stars like the Sun. It has important applications to
modern devices such as the tunnel diode, quantum computing, and the
scanning tunnelling microscope.
D-Wave Systems is a
quantum computing company, based in Burnaby, British Columbia, Canada.
D-Wave is the first company in the world to sell quantum computers. (10
Million Dollars).
2048 (video game) The game's objective is to slide numbered tiles on a
grid to combine them to create a tile with the number 2048; however, you
can keep playing the game, creating tiles with larger numbers (such as a
32,768 tile).
Non-Abelian -
Anyon
is a type of
quasiparticle that occurs
only in two-dimensional systems, with properties much less restricted than
fermions and bosons. In general, the operation of exchanging two identical
particles may cause a global phase shift but cannot affect observables.
Anyons are generally classified as abelian or non-abelian. Abelian anyons
have been detected and play a major role in the fractional quantum Hall
effect. Non-abelian anyons have not been definitively detected, although
this is an active area of research.
Method for Improving Quantum Information Processing. A new method for
splitting
light beams into their frequency
modes and encoding photons with quantum information.
Superconducting Qubit 3D integration prospects bolstered by new research.
As superconducting qubit technology grows beyond one-dimensional chains of
nearest-neighbour coupled qubits, larger-scale two-dimensional arrays of
qubits that may be entangled with each other are a natural next step.
Prototypical two-dimensional arrays have been built, but the challenge of
routing control
wiring and readout circuitry has, so far, prevented
the development of high fidelity qubit arrays of size 3×3 or larger. We
have developed a process for fabricating fully superconducting
interconnects that are materially compatible with our existing, high
fidelity, aluminum on silicon qubits. “This fabrication process opens the
door to the possibility of the close integration of two superconducting
circuits with each other or, as would be desirable in the case of
superconducting qubits, the close integration of one high-coherence qubit
device with a dense, multi-layer, signal-routing device”.
Stable Quantum Gate created - Stable Quantum Bits
Cost-Effective Quantum moves a step closer. The National Institute of
Standards and Technology, Colorado, prove the viability of a
measurement-device-independent
quantum
key distribution (QKD) system, based on readily available hardware
such as distributed feedback (DFB) lasers and field-programmable gate
arrays (FPGA) electronics, which enable time-bin qubit preparation and
time-tagging, and active feedback systems that allow for compensation of
time-varying properties of photons
after transmission through deployed
fibre.
Qubit
Oxford Quantum.
Excitonic Insulator. Rules for superconductivity mirrored in device's
braided qubits could form component of topological quantum computer.
Excitonium is a new form of matter, a condensate of excitons whose
experimental signature is a soft plasmon mode.
The Tianhe-2 was the world's most powerful supercomputer from 2013 to
2016; it demands about 24 megawatts of power, while the
human brain runs on roughly 10 to 20 watts.
Biological Neuron-Based Computer Chips (wetchips)
Artificial Intelligence
TOP 500 list of
the World’s Top Supercomputers
ASC Sequoia will have 1.6 petabytes of memory, 96 racks,
98,304 compute nodes, and 1.6 million cores. Though orders of magnitude
more powerful than such predecessor systems as ASC Purple and BlueGene/L,
Sequoia will be 160 times more power efficient than Purple and 17 times
more than BlueGene/L. Is expected to be one of the most powerful
supercomputers in the world, equivalent to the 6.7 billion people on earth
using hand calculators and working together on a calculation 24 hours per
day, 365 days a year, for 320 years…to do what Sequoia will do in one
hour.
DARPA or
Defense Advanced Research Projects Agency, is an agency of the U.S.
Department of Defense responsible for the development of emerging
technologies for use by the military.
Darpa
IARPA or Intelligence
Advanced Research Projects Activity, invests in high-risk, high-payoff
research programs to tackle some of the most difficult challenges of the
agencies and disciplines in the Intelligence Community (IC).
Institute for Computational Cosmology is to advance
fundamental knowledge in
cosmology.
Topics of active research include: the nature of dark matter and dark
energy, the evolution of cosmic structure, the formation of galaxies, and
the determination of fundamental parameters.
Fiber Optics
Artificial Intelligent
Computing (Turing - Machine learning)
Speed of Processing
Bandwidth (computing) is the bit-rate of available or consumed
information capacity expressed typically in metric multiples of bits per
second. Variously, bandwidth may be characterized as network bandwidth,
data bandwidth, or digital bandwidth. This definition of bandwidth is in
contrast to the field of signal
processing, wireless communications, modem
data transmission, digital communications, and electronics, in which
bandwidth is used to refer to analog signal bandwidth measured in hertz,
meaning the frequency range between lowest and highest attainable
frequency while meeting a well-defined impairment level in signal power.
However, the actual bit rate that can be achieved depends not only on the
signal bandwidth, but also on the noise on the channel.
Bit Rate is the
number of bits that are conveyed or processed per unit of time.
Bandwidth (signal processing) is the difference between the upper and
lower
frequencies in a
continuous set of frequencies. It is typically measured in hertz, and may
sometimes refer to passband bandwidth, sometimes to baseband bandwidth,
depending on context. Passband bandwidth is the difference between the
upper and lower cutoff frequencies of, for example, a band-pass filter, a
communication channel, or a signal spectrum. In the case of a low-pass
filter or baseband signal, the bandwidth is equal to its upper cutoff
frequency. Bandwidth in hertz is a central concept in many fields,
including electronics, information theory, digital communications, radio
communications, signal processing, and spectroscopy and is one of the
determinants of the capacity of a given communication channel. A key
characteristic of bandwidth is that any band of a given width can carry
the same amount of information, regardless of where that band is
located in the frequency spectrum. For example, a 3 kHz band can
carry a telephone conversation whether that band is at baseband (as in a
POTS telephone line) or modulated to some higher frequency.
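The relationship between bandwidth, noise and achievable bit rate mentioned above is captured by the Shannon-Hartley theorem, C = B * log2(1 + S/N). A small Python sketch using the 3 kHz telephone-band figure and an assumed 30 dB signal-to-noise ratio:

import math

# Shannon-Hartley theorem: maximum error-free bit rate of a channel,
# given bandwidth in hertz and a linear signal-to-noise ratio.
def channel_capacity(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz voice-grade telephone line with an assumed 30 dB SNR:
print(f"{channel_capacity(3000, 30):.0f} bits per second")   # about 29,902 bps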
Instructions Per Second is a measure of a computer's
processor speed. Many reported IPS values
have represented "peak" execution rates on artificial instruction
sequences with few branches, whereas realistic workloads typically lead to
significantly lower IPS values. Memory hierarchy also greatly affects
processor performance, an issue barely considered in IPS calculations.
Because of these problems, synthetic benchmarks such as Dhrystone are now
generally used to estimate
computer performance in commonly used applications, and raw IPS has
fallen into disuse.
Exascale Computing refers to computing systems capable of at least one
exaFLOPS,
or a
billion billion calculations per second.
Such capacity represents a thousandfold increase over the first petascale
computer that came into operation in 2008. (One exaflops is a thousand
petaflops, or a quintillion (10^18) floating point operations per second.)
At a supercomputing conference in 2009, Computerworld projected exascale
implementation by 2018. Exascale computing would be considered as a
significant achievement in computer engineering, for it is believed to be
the order of the processing power of the human brain at the neural
level (functional capacity might be lower). It is, for instance, the target power of
the Human Brain Project.
Spectral
Efficiency refers to the information rate that can be transmitted over
a given bandwidth in a specific
communication system.
It is a measure of how efficiently a limited frequency spectrum is
utilized by the physical layer protocol, and sometimes by the media access
control (the channel access protocol).
Signal Processing concerns the analysis, synthesis, and modification
of
signals, which are broadly
defined as functions conveying, "information about the behavior or
attributes of some phenomenon", such as sound, images, and biological
measurements. For example, signal processing techniques are used to
improve signal transmission fidelity, storage efficiency, and subjective
quality, and to emphasize or detect components of interest in a measured
signal.
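As a toy example of emphasizing the component of interest in a measured signal, the sketch below smooths a noisy sine wave with a simple moving-average filter; the signal, noise level, and window size are arbitrary choices for the example.

import math
import random

def moving_average(samples, window=5):
    # Smooth a signal by averaging each sample with the samples just before it.
    smoothed = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# A 2 Hz sine wave sampled 100 times per second, with random noise added.
clean = [math.sin(2 * math.pi * 2 * t / 100) for t in range(100)]
noisy = [s + random.uniform(-0.3, 0.3) for s in clean]
smoothed = moving_average(noisy)
print(f"first sample - noisy: {noisy[0]:+.2f}, smoothed: {smoothed[0]:+.2f}")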
Clock
Rate typically refers to the frequency at which a chip, such as a central
processing unit (CPU) or one core of a multi-core
processor, is running, and is used as an indicator of the processor's
speed.
Processing Speed is a
cognitive ability that could
be defined as the time it takes a person to do a mental task. It is
related to the speed in which a person can understand and react to the
information they receive, whether it be visual (letters and numbers),
auditory (language), or movement.
Virtual PC
Virtual
Machine is an emulation of a computer system. Virtual machines are based on
computer architectures and provide functionality of a physical computer.
Their implementations may involve specialized hardware, software, or a
combination. There are different kinds of virtual machines, each with
different functions: System virtual machines (also termed full
virtualization VMs) provide a substitute for a real machine. They provide
functionality needed to execute entire operating systems. A hypervisor
uses native execution to share and manage hardware, allowing for multiple
environments which are isolated from one another, yet exist on the same
physical machine. Modern hypervisors use hardware-assisted virtualization,
virtualization-specific hardware, primarily from the host CPUs. Process
virtual machines are designed to execute computer programs in a
platform-independent environment. Some virtual machines, such as QEMU, are
designed to also emulate different architectures and allow execution of
software applications and operating systems written for another CPU or
architecture. Operating-system-level virtualization allows the resources
of a computer to be partitioned via the kernel's support for multiple
isolated user space instances, which are usually called containers and may
look and feel like real machines to the end users.
Virtual
Desktop is a term used with respect to
user interfaces, usually
within the WIMP paradigm, to describe ways in which the virtual space of a
computer's desktop environment is expanded beyond the physical limits of
the screen's display area through the use of software. This compensates
for a limited desktop area and can also be helpful in reducing clutter.
There are two major approaches to expanding the virtual area of the
screen. Switchable virtual desktops allow the user to make virtual copies
of their desktop view-port and switch between them, with open windows
existing on single virtual desktops. Another approach is to expand the
size of a single virtual screen beyond the size of the physical viewing
device. Typically, scrolling/panning a subsection of the virtual desktop
into view is used to navigate an oversized virtual desktop.
Desktops v2.0 allows you to organize your applications on up
to four virtual desktops.
Hardware Virtualization is the virtualization of computers as complete
hardware platforms, certain logical abstractions of their componentry, or
only the functionality required to run various operating systems.
Virtualization hides the physical characteristics of a computing platform
from the users, presenting instead another abstract computing platform. At
its origins, the software that controlled virtualization was called a
"control program", but the terms "hypervisor" or "virtual machine monitor"
became preferred over time.
Virtualization Software, specifically emulators and hypervisors, are software
packages that emulate the whole physical computer machine, often providing
multiple virtual machines on one physical platform.
Hypervisor
is computer software, firmware, or hardware, that creates and runs virtual
machines. A computer on which a hypervisor runs one or more virtual
machines is called a host machine, and each virtual machine is called a
guest machine. The hypervisor presents the guest operating systems with a
virtual operating platform and manages the execution of the guest
operating systems. Multiple instances of a variety of operating systems
may share the virtualized hardware resources: for example, Linux, Windows,
and OS X instances can all run on a single physical x86 machine. This
contrasts with operating-system-level virtualization, where all instances
(usually called containers) must share a single kernel, though the guest
operating systems can differ in user space, such as different Linux
distributions with the same kernel.
Virtual Box
Sandbox
is a security mechanism for separating running programs. It is often used
to execute untested or untrusted programs or code, possibly from
unverified or untrusted third parties, suppliers, users or websites,
without risking harm to the host machine or operating system. A sandbox
typically provides a tightly controlled set of resources for guest
programs to run in, such as scratch space on disk and memory. Network
access, the ability to inspect the host system or read from input devices
are usually disallowed or heavily restricted. In the sense of providing a
highly controlled environment, sandboxes may be seen as a specific example
of virtualization. Sandboxing is frequently used to test unverified
programs that may contain a virus or other malicious code, without
allowing the software to harm the host device.
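A very rough sketch of the sandboxing idea on a Unix-like system: run an untrusted command in a child process with tight CPU-time and memory limits using Python's resource module. The limits and the command are arbitrary examples, and a real sandbox restricts far more (filesystem access, network access, system calls, and so on).

import resource
import subprocess
import sys

def limit_resources():
    # Cap CPU time at 2 seconds and address space at 256 MB for the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))

# Run the untrusted snippet in a separate interpreter with the limits applied.
result = subprocess.run(
    [sys.executable, "-c", "print('hello from the guest')"],
    preexec_fn=limit_resources,   # applied in the child just before it runs
    capture_output=True,
    text=True,
    timeout=5,                    # kill the child if it hangs
)
print(result.stdout.strip())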
Operating System Sandbox: Virtual PC (youtube)
VM Ware
Hyper-V
formerly known as Windows Server Virtualization, is a native hypervisor;
it can create virtual machines on x86-64 systems running Windows. Starting
with Windows 8, Hyper-V supersedes Windows Virtual PC as the hardware
virtualization component of the client editions of Windows NT. A server
computer running Hyper-V can be configured to expose individual virtual
machines to one or more networks.
Parallels
Virtual Private Server is a virtual machine sold as a service by an
Internet hosting service. A VPS runs its own copy of an operating system,
and customers may have superuser-level access to that operating system
instance, so they can install almost any software that runs on that OS.
For many purposes they are functionally equivalent to a dedicated physical
server, and being software-defined, are able to be much more easily
created and configured. They are priced much lower than an equivalent
physical server. However, as they share the underlying physical hardware
with other VPSs, performance may be lower, depending on the workload of
any other executing virtual machines.
Virtual Private Network enables users to send and receive data across
shared or public
networks
as if their computing devices were directly connected to the private
network. Applications running across the VPN may therefore benefit from
the functionality, security, and management of the private network.
Ultimate Guide to Finding the Best VPN
How does a VPN Work?
Safe Internet Use
Artificial Neural Network
Dedicated
Hosting Service is a type of Internet hosting in which the client leases an
entire server not shared with anyone else. This is more flexible than
shared hosting, as organizations have full control over the server(s),
including choice of operating system, hardware, etc. There is also another
level of dedicated or managed hosting commonly referred to as complex
managed hosting. Complex Managed Hosting applies to both physical
dedicated servers, Hybrid server and virtual servers, with many companies
choosing a hybrid (combination of physical and virtual) hosting solution.
Virtualization refers to the act of creating a
Virtual (rather than
actual) version of something, including virtual computer hardware
platforms, storage devices, and computer network resources.
Windows
Virtual PC is a virtualization program for Microsoft Windows. In July 2006
Microsoft released the Windows version as a free product.
Virtual PC
Virtual Reality
Remote PC to PC
Teaching via Video Conference - Remote IT Services
Remote PC to PC Services
Log Me In
Team Viewer
Go to Assist
Pogo Plug
Dyn DNS
Tight VNC
Web Conferencing
Tutoring
Open Source
Open-source Software is computer software with its source code made
available with a license in which the copyright holder provides the rights
to study, change, and distribute the software to anyone and for any
purpose. Open-source software may be developed in a collaborative public
manner. According to scientists who studied it, open-source software is a
prominent example of open collaboration.
Open Source
is a decentralized development model that encourages open collaboration. A
main principle of open-source software development is peer production,
with products such as source code, blueprints, and documentation freely
available to the public. The open-source movement in software began as a
response to the limitations of proprietary code. The model is used for
projects such as in open-source appropriate technologies, and open-source
drug discovery.
Business
Software Tools and Apps
Open Source Software
Open Source Education
Open Source Initiative
Open Source
Asterisk
open source framework for building communications applications
Alfresco
software built on open standards
Open-Source Electronics
Arduino
Raspberry Pi
Massimo Banzi (video)
Arduino 3D
Printer
Science Kits
Open
Source Hardware
Freeware Files
Computer Rentals
Rent Solutions
Vernon Computer
Source
Smart Source Rentals
Google Chromebook
Miles
Technologies Technology Solutions
Maximum PC
Knowledge Management
Artificial Intelligence
Science
Ideas
Innovation
Word Processors
Open Office Suite
Libre Office
Abi Source
Word
Processors List (PDF)
Google Docs
(writely)
Google
Business Tools and Apps
Zoho
Photo Editing Software
Final Draft (software) is screenwriting software for
Writing and
formatting screenplays to
standards
set by theater, television and film industries. The program can also be
used to write documents such as stageplays, outlines, treatments, query
letters, novels, graphic novels, manuscripts, and basic text documents.
Writing Tips
Scraper Wiki
getting data from the web, spreadsheets, PDFs.
Comet Docs
Convert, Store and Share your documents.
Computer Courses
W3 Schools
Webmaster Tutorials
Technology Terms
Creator Academy by Google
J Learning
Lynda
Compucert
Learning Tree
IT Training
Building a Search Engine
Tech Terms
Online Schools
Learn to Code
Computer Maintenance
Computer Hope
How to Geek
Stack Overflow
PC
Mag
Data Doctors
Repairs 4 Laptop
Maintain Your Computer
(wiki how)
PC User
Maintain PC (ehow)
Open Source Ultra Defrager
Data Recovery Software
Dmoz
Computers Websites
Inter-Hacktive
Hackerspace
Technology Tools
Math
Games
Information Management
Computer History
Laptops for Learning
Flash Drive Knowledge
Engineering
Design
Technology News
Self-Help Computer Resources
Thanks to the millions of people sharing their knowledge and experiences
online, you can pretty much learn anything you want on your own. So over
the years I have collected some great resources that come in handy.
Sharing is awesome!
Information Sources
Surfing the Internet Tips
First, a Little Warning:
When visiting other websites, be very careful what you click on; some
software downloads are very dangerous to your computer, so be absolutely
sure of what you are downloading. Read the ".exe" file name, and search the
internet for more info or to verify '.exe' executable files. It's a good
idea to always get a second opinion on what software you might need.
Free Virus Protection
Internet Browsers
Internet Safety
Info
Internet
Connections
Computer Quick Fix Tips
Make sure that your
Computer
System
Restore is on. This can sometimes be used to fix a bad
computer virus or malfunction. It's a good idea to do a System Restore and a
Virus Scan in Safe Mode (during computer restart, hit the F8 key and then
follow the instructions; F2 is Setup and F12 is the Boot Menu).
Warning: System Restore that is found under
Start/Programs/Accessories/System Tools is not the same as PC Restore, Factory
Settings or Image Restore, which will delete all your personal files and
software from the PC. If you don't have the OS Setup Disk that came with your PC,
the PC will sometimes have a Factory Settings copy installed. You need to do
this while your PC is rebooting: press 'Ctrl', then press F11, and then
release both at the same time. You should see something like Symantec Ghost
where you will be prompted to reinstall Factory Settings. This will delete all
your personal files and software from the PC so please back up first.
Always Have your Operating System Restore Disk or
Recovery
Disc handy because not all computer problems can be fixed. You also need
your Drivers and Applications Disks too. Always backup your most important
computer files because reinstalling the operating system will clear your
data.
Kingston DataTraveler 200 - 128 GB USB 2.0 Flash Drive DT200/128GB (Black)
Western Digital 2 TB USB 2.0 Desktop External Hard Drive
Sending Large Files
Bit Torrent Protocol (wiki)
Lime Wire
P2P
Send Large
Files
Zip Files
Stuffit File
Compression
File-Zilla
Dropbox
File
Sharing for Professionals
We
Transfer
You can try some of
these free programs to help keep your computer safe: (might be outdated)
Lava Soft Ad-Ware
Spybot
Search & Destroy
CCleaner
Malwarebytes
Hijack This
Spyware Blaster
Download.com has the
software above but be very careful not to click on the wrong download item.
Please verify the correct ".exe" file name.
Free Software? As the saying goes, "Nothing is Free." Free software
sometimes comes loaded with other software programs that you don't need. So
always check or uncheck the appropriate boxes, and read everything carefully.
But even then, they might sneak unwanted programs
by you, so you will have to remove those programs manually. With the
internet, dangers are always lurking around the corner, so please be careful, be
aware and educate yourself. When our computer systems and the internet are
running smoothly the beauty of this machine becomes very evident. This is the
largest collaboration of people in human history. With so many contributors from
all over the world, we now have more knowledge and information at our fingertips
than ever before; our potential is limitless.
Free
Software Info
Free Software Foundation
General Public License Free BSD
Jolla
Hadoop Apache
Open Mind
Software Geek
New Computers
Sadly, new PCs are loaded with a lot of bogus software and programs
that you don't need. Removing them can be a challenge, but it's absolutely
necessary if you want your PC to run smoothly without all those annoying
distractions that slow your PC down.
Lojack For Laptops (amazon)
Tired and disgusted with Windows 8's
dysfunctional operating system interface? Download
Classic Shell to make
your computer work like XP and find things again, or you can just
update your Windows 8.0 to Windows 8.1, because 8.1 is definitely better
than 8.0, but still not perfect yet.
Oasis Websoft
advances software by providing superior solutions for web applications, web
sites and enterprise software, and is committed to building infrastructure that
will ensure that the West African sub-region is not left behind in the
continuous evolution of information technology.
FAWN is a fast, scalable, and
energy-efficient cluster architecture for data-intensive computing.
BlueStacks is currently the best way to run Android apps on Windows. It
doesn’t replace your entire operating system. Instead, it runs Android apps
within a window on your Windows desktop. This allows you to use Android apps
just like any other program.
Utility Software is system software designed to help analyze,
configure, optimize or maintain a computer. Utility software, along with
operating system software, is a type of system software used to support
the computer infrastructure, distinguishing it from application software
which is aimed at directly performing tasks that benefit ordinary users.
Service-Oriented Architecture is an architectural pattern in computer
software design in which application components provide services to other
components via a communications protocol, typically over a network. The
principles of service-orientation are independent of any vendor, product
or technology. A service is a self-contained unit of functionality, such
as retrieving an online bank statement. By that definition, a service is
an operation that may be discretely invoked. However, in the Web Services
Description Language (WSDL), a service is an interface definition that may
list several discrete services/operations. And elsewhere, the term service
is used for a component that is encapsulated behind an interface. Services can be
combined to provide the functionality of a large software application. SOA
makes it easier for software components on computers connected over a
network to cooperate. Every computer can run any number of services,
and each service is built in a way that ensures that the service can
exchange information with any other service in the network without human
interaction and without the need to make changes to the underlying program
itself. SOA has also been described as a paradigm for organizing and utilizing
distributed capabilities that may be under the control of different ownership
domains, providing a uniform means to offer, discover, interact with and use
capabilities to produce desired effects consistent with measurable preconditions and expectations.
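A minimal sketch of a self-contained unit of functionality exposed over a network, using only Python's standard library; the /balance endpoint, the port, and the response data are invented for illustration. Any client that speaks HTTP can consume such a service, regardless of the language or platform it was written in, which is the core idea behind SOA.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class BankStatementService(BaseHTTPRequestHandler):
    # A tiny service exposing one discretely invokable operation.
    def do_GET(self):
        if self.path == "/balance":
            body = json.dumps({"account": "demo", "balance": 125.50}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), BankStatementService).serve_forever()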
Integrated Circuits - IC's

The First
Integrated Circuit on the right (September 12th, 1958)
And now almost 60 years later...
Integrated Circuit is a set of
electronic
circuits on one small flat piece (or "chip") of
semiconductor material, normally silicon. The integration of
large numbers of tiny
transistors into a small chip resulted in
circuits that are orders of magnitude smaller, cheaper, and
faster than those constructed of discrete
electronic components.
Integrated Circuit Design (wiki)
CMOS, or complementary
metal–oxide–semiconductor, is a technology for constructing integrated
circuits. CMOS technology is used in microprocessors, microcontrollers,
static RAM, and other digital logic circuits. CMOS technology is also used
for several analog circuits such as
image sensors
(CMOS sensor), data converters, and highly integrated transceivers for
many types of communication.
Die in the context of integrated circuits is a small block of
semiconducting material, on which a given functional circuit is
fabricated. Typically, integrated circuits are produced in large batches
on a single wafer of electronic-grade silicon (EGS) or other
semiconductor
(such as GaAs) through processes such as photolithography. The wafer is
cut (“diced”) into many pieces, each containing one copy of the circuit.
Each of these pieces is called a die.
Integrated Circuit Layout is the representation of an integrated
circuit in terms of planar geometric shapes which correspond to the
patterns of metal, oxide, or semiconductor layers that make up the
components of the integrated circuit. When using a standard process—where
the interaction of the many chemical, thermal, and photographic variables
is known and carefully controlled—the behaviour of the final integrated
circuit depends largely on the
positions and
interconnections of the geometric shapes. Using a computer-aided
layout tool, the layout engineer—or layout technician—places and connects
all of the components that make up the chip such that they meet certain
criteria—typically: performance, size, density, and manufacturability.
This practice is often subdivided between two primary layout disciplines:
Analog and digital. The generated layout must pass a series of checks in a
process known as physical verification. The most common checks in this
verification process are design rule checking (DRC), layout versus
schematic (LVS), parasitic extraction, antenna rule checking, and
electrical rule checking (ERC). When all verification is complete, the
data is translated into an industry-standard format, typically GDSII, and
sent to a semiconductor foundry. The process of sending this data to the
foundry is called tapeout because the data used to be shipped out on a
magnetic tape. The foundry
converts the data into another format and uses
it to generate the photomasks used in a photolithographic process of
semiconductor device fabrication. In the earlier, simpler, days of IC
design, layout was done by hand using opaque tapes and films, much like
the early days of printed circuit board (PCB) design. Modern IC layout is
done with the aid of IC layout editor software, mostly automatically using
EDA tools, including place and route tools or schematic-driven layout
tools. The manual operation of choosing and positioning the geometric
shapes is informally known as "polygon pushing".
Hardware
Computer Chip Close-up Macro Photo on right
Newly-discovered semiconductor dynamics may help improve energy efficiency.
The most common material for
semiconductors is
silicon, which is mined from Earth and then refined and purified. But pure
Silicon
doesn't conduct electricity, so the material is purposely and precisely
adulterated by the addition of other substances known as
Dopants.
Boron and phosphorus ions are common dopants added to silicon-based
semiconductors that allow them to conduct electricity. But the amount of
dopant added to a semiconductor matters -- too little dopant and the
semiconductor won't be able to conduct electricity. Too much dopant and
the semiconductor becomes more like a non-conductive insulator.
World's first 1,000-Processor Chip. A microchip containing
1,000 independent programmable processors has been designed. The
energy-efficient 'KiloCore' chip has a maximum computation rate
of 1.78 trillion instructions per second and contains 621
million transistors. The highest clock-rate processor ever
designed.
Nanoelectronics potentially make microprocessor chips work 1,000 times
faster. While most advanced electronic devices are powered by
photonics -- which
involves the use of photons to transmit information -- photonic elements
are usually large in size and this greatly limits their use in many
advanced
nanoelectronics systems.
Plasmons,
which are waves of
electrons that move
along the surface of a metal after it is struck by photons, hold great
promise for disruptive technologies in nanoelectronics. They are
comparable to
photons in terms of speed
(they also travel at the speed of light), and they are much smaller.
This unique property of plasmons makes them ideal for integration with
nanoelectronics. Innovative transducer can directly convert electrical
signals into plasmonic signals, and vice versa, in a single step. By
bridging plasmonics and nanoscale electronics, we can potentially make
chips run faster and reduce power losses. Our plasmonic-electronic
transducer is about 10,000 times smaller than optical elements. We believe
it can be readily integrated into existing technologies and can
potentially be used in a wide range of applications in the future.
5
Nanometer refers to the 5 nanometer (5 nm) node, the technology node
following the 7 nm node. Single-transistor 7 nm scale devices were first
produced by researchers in the early 2000s, and in 2003 NEC produced a 5
nm transistor. On June 5, 2017, IBM revealed that they had created 5 nm
silicon chips, using silicon nanosheets in a Gate All Around configuration
(GAAFET), a break from the usual FinFET design.
Semiconductor Device Fabrication is the process used to create the
integrated circuits that are present in everyday electrical and electronic
devices. It is a multiple-step sequence of photo lithographic and chemical
processing steps during which electronic circuits are gradually created on
a wafer made of pure semiconducting material. Silicon is almost always
used, but various compound semiconductors are used for specialized
applications. The entire manufacturing process, from start to packaged
chips ready for shipment, takes six to eight weeks and is performed in
highly specialized facilities referred to as fabs. For more advanced
semiconductor devices, such as the modern 14/10/7 nm nodes (as of 2017),
fabrication can take up to 15 weeks, with 11-13 weeks being the
industry average.
Atomically thin transistors that are two-dimensional.
Berkeley Lab-led research breaks a major barrier with the
Smallest Transistor Ever by creating a gate only 1 nanometer
long. High-end 20-nanometer-gate transistors are now on the market.
Molybdenum Disulfide (wiki)
Chip-Sized, High-Speed Terahertz Modulator raises possibility of
Faster Data Transmission.
Computers Made of Genetic Material? HZDR researchers conduct
electricity using DNA-based nanowires.
Semiconductor-free microelectronics are now possible, thanks to
metamaterials
Metamaterial is a material engineered to have a property
that is not found in nature.
Semiconductor-free microelectronics (youtube)
Superconducting nanowire memory cell, miniaturized technology
Breakthrough in Circuit Design Makes Electronics More Resistant to Damage
and Defects
2D materials that could make devices faster, smaller, and
efficient nanomaterials that are only a few
atoms in
thickness.
Polaritons in layered two-dimensional materials
Researchers pave the way for Ionotronic Nanodevices.
Discovery helps develop new kinds of electrically switchable
memories. Ionotronic devices rely on charge effects based on
ions, instead of electrons or in
addition to
electrons.
Discovery
of a topological semimetal phase coexisting with ferromagnetic behavior in
Sr1-yMnSb2 (y~0.08). New magnet displays electronic charge carriers
that have almost no mass. The magnetism brings with it an important
symmetry breaking property --
time
reversal symmetry, or TRS, breaking where the ability to run time
backward would no longer return the system back to its starting
conditions. The combination of relativistic electron behavior, which is
the cause of much reduced charge carrier mass, and TRS breaking has been
predicted to cause even more unusual behavior, the much sought after
magnetic Weyl semimetal phase.
Carbon Nanotube Transistors Outperform Silicon, for first time.
Engineers use Graphene as a “copy machine” to produce cheaper
Semiconductor Wafers. In 2016, annual global semiconductor sales
reached their highest-ever point, at $339 billion worldwide. In that same
year, the semiconductor industry spent about $7.2 billion worldwide on
wafers that serve as the substrates for microelectronics components, which
can be turned into transistors, light-emitting diodes, and other
electronic and photonic devices. MIT engineers may vastly reduce the
overall cost of wafer technology and enable devices made from more exotic,
higher-performing semiconductor materials than conventional silicon. Uses
graphene -- single-atom-thin sheets of graphite -- as a sort of "copy
machine" to transfer intricate crystalline patterns from an underlying
semiconductor wafer to a top layer of identical material.
Neuromorphic Engineering also known as neuromorphic
computing, is a concept describing the use of very-large-scale
integration (VLSI) systems containing electronic analog circuits
to mimic neuro-biological architectures present in the nervous
system. Very-Large-Scale Integration is the current level of
computer microchip miniaturization and refers to microchips
containing hundreds of thousands of transistors or more. LSI
(large-scale integration) meant microchips containing thousands
of transistors. Earlier, MSI (medium-scale integration) meant a
microchip containing hundreds of transistors and SSI
(small-scale integration) meant transistors in the tens.
Reconfigurable Chaos-Based Microchips Offer Possible Solution to
Moore’s Law. Nonlinear, chaos-based integrated circuits that
enable computer chips to perform multiple functions with fewer
transistors. The transistor circuit can be programmed to
implement different instructions by morphing between different
operations and functions. The potential of 100 morphable
nonlinear chaos-based circuits doing work equivalent to 100
thousand circuits, or of 100 million transistors doing work
equivalent to three billion transistors holds promise for
extending Moore’s law.
Five Times the Computing Power
Field-Programmable Gate Array is an integrated circuit designed to be
configured by a customer or a designer after manufacturing – hence "
field-programmable".
The FPGA configuration is generally specified using a
hardware description language (HDL), similar to that used for an
application-specific integrated circuit (ASIC). (Circuit diagrams were
previously used to specify the configuration, as they were for ASICs, but
this is increasingly rare.) FPGAs contain an array of programmable logic
blocks, and a hierarchy of reconfigurable interconnects that allow the
blocks to be "wired together", like many logic gates that can be
inter-wired in different configurations.
Logic
blocks can be configured to perform complex combinational functions,
or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks
also include memory elements, which may be simple flip-flops or more
complete blocks of memory.
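To get a feel for how one reconfigurable logic block can be "programmed" to act as different gates, the sketch below models a 2-input lookup table (LUT) in plain Python. This is only a software analogy of the concept, not real FPGA tooling or an HDL.

class LookupTable2:
    # A 2-input logic block configured by a 4-entry truth table.
    def __init__(self, truth_table):
        # truth_table[i] is the output for inputs (a, b) where i = a*2 + b.
        self.truth_table = truth_table

    def __call__(self, a, b):
        return self.truth_table[a * 2 + b]

# The same kind of block, "rewired" by loading a different configuration.
and_gate = LookupTable2([0, 0, 0, 1])
xor_gate = LookupTable2([0, 1, 1, 0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND =", and_gate(a, b), "XOR =", xor_gate(a, b))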
Fast Fourier Transform algorithm computes the discrete Fourier
transform (DFT) of a sequence, or its inverse (IFFT). Fourier analysis
converts a signal from its original domain (often time or space) to a
representation in the frequency domain and vice versa. An FFT rapidly
computes such transformations by factorizing the DFT matrix into a product
of sparse (mostly zero) factors.
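A small sketch using NumPy's FFT: a 50 Hz sine wave sampled at 1 kHz shows up as a peak at 50 Hz once the signal is moved into the frequency domain. The signal parameters are arbitrary example values.

import numpy as np

sample_rate = 1000                        # samples per second
t = np.arange(0, 1, 1 / sample_rate)      # one second of sample times
signal = np.sin(2 * np.pi * 50 * t)       # a 50 Hz sine wave

spectrum = np.fft.fft(signal)             # time domain -> frequency domain
freqs = np.fft.fftfreq(len(signal), 1 / sample_rate)

# The strongest frequency component should sit at (plus or minus) 50 Hz.
peak = freqs[np.argmax(np.abs(spectrum))]
print(f"Dominant frequency: {abs(peak):.0f} Hz")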
Redox-Based Resistive Switching Random Access Memory (ReRAM)
A team of international scientists have found a way to make
memory chips perform computing tasks, which is traditionally
done by computer processors like those made by Intel and
Qualcomm. Currently, all computer processors in the market are
using the
binary
system, which is composed of two states -- either 0 or 1.
For example, the letter A will be processed and stored as
01000001, an 8-bit character. However, the prototype ReRAM
circuit built by Asst Prof Chattopadhyay and his collaborators processes data in four states instead of two. For example,
it can store and process data as 0, 1, 2, or 3, a quaternary (base-4)
number system. Because ReRAM uses different electrical
resistance levels to store information, it could be possible to store
the data in an even higher number of states, speeding up
computing tasks beyond current limitations. In current computer
systems, all information has to be translated into a string of
zeros and ones before it can be processed.
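A quick sketch of the storage-density point: the letter 'A' needs eight binary digits but only four base-4 digits, so a four-state memory cell can hold the same character in half as many cells.

def to_digits(value, base, width):
    # Represent an integer as a fixed-width list of digits in the given base.
    digits = []
    for _ in range(width):
        digits.append(value % base)
        value //= base
    return list(reversed(digits))

code = ord("A")                             # 65
print("binary:", to_digits(code, 2, 8))     # [0, 1, 0, 0, 0, 0, 0, 1]
print("base 4:", to_digits(code, 4, 4))     # [1, 0, 0, 1]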
Parallel
Computing: 18-core credit card sized computer
Memristor, or memory
resistor, is a hypothetical non-linear
passive two-terminal electrical component relating
electric
charge and magnetic flux linkage. According to the
characterizing mathematical relations, the memristor would
hypothetically operate in the following way: The memristor's
electrical resistance is not constant but depends on the history
of current that had previously flowed through the device, i.e.,
its present resistance depends on how much electric charge has
flowed in what direction through it in the past; the device
remembers its history — the so-called
non-volatility property.
When the electric power supply is turned off, the memristor
remembers its most recent resistance until it is turned on
again. Memristor is capable of altering its resistance and
storing multiple memory states, ability to retain data by 'remembering'
the amount of charge that has passed through them - potentially resulting
in computers that
switch on and off instantly and
never forget. New memristor technology that can store up to 128
discernible memory states per switch, almost four times more than
previously reported.
Transistors
Illinois team advances GaN-on-Silicon technology towards
scalable high electron mobility transistors
Small tilt in
Magnets makes them viable Memory Chips -
Nano Technology
T-rays will “speed up” computer memory by a factor of 1,000
Germanium Tin Laser Could Increase Processing Speed of Computer
Chips
Silicon Photonics is the study and application of photonic
systems which use silicon as an optical medium. The silicon is
usually patterned with sub-micrometre precision, into
microphotonic components. These operate in the infrared, most
commonly at the 1.55 micrometre wavelength used by most fiber
optic telecommunication systems. The silicon typically lies on
top of a layer of silica in what (by analogy with a similar
construction in microelectronics) is known as silicon on
insulator (SOI).
Silicon Carbide is a compound of silicon and carbon with
chemical formula SiC. It occurs in nature as the extremely
rare mineral moissanite.
Synthetic silicon carbide powder has been mass-produced since
1893 for use as an abrasive. Grains of silicon carbide can be
bonded together by sintering to form very hard ceramics that are
widely used in applications requiring high endurance, such as
car brakes, car clutches and ceramic plates in bulletproof
vests. Electronic applications of silicon carbide such as
light-emitting diodes (LEDs) and detectors in early radios were
first demonstrated around 1907. SiC is used in semiconductor
electronics devices that operate at high temperatures or high
voltages, or both. Large single crystals of silicon carbide can
be grown by the Lely method; they can be cut into gems known as
synthetic moissanite. Silicon carbide with high surface area can
be produced from SiO2 contained in plant material.
ORNL Researchers Break Data Transfer Efficiency Record: the
transfer of information via superdense coding, a process by
which the properties of particles like photons, protons and
electrons are used to store as much information as possible.
Superdense Coding is a technique used to send two
bits of classical
information using only one qubit, which is a unit of quantum
information.
Quantum Computing makes direct use of quantum-mechanical
phenomena, such as superposition and entanglement, to perform
operations on data.
Quantum computers are different from binary
digital electronic computers based on transistors. Whereas
common digital computing requires that the data are encoded into
binary digits
(bits), each of which is always in one of two definite states (0
or 1), quantum computation uses quantum bits, which can be in
superpositions of states.
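Purely as a classical sketch of the underlying math (this simulates the arithmetic on an ordinary computer; it is not a quantum device): applying a Hadamard gate to a qubit that starts in the |0> state puts it into an equal superposition of 0 and 1.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)           # the |0> basis state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                          # apply the gate
probabilities = np.abs(state) ** 2               # Born rule: |amplitude| squared

# An equal superposition: a measurement gives 0 or 1 with 50% probability each.
print("amplitudes   :", state)
print("probabilities:", probabilities)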
A
Single Atom can store one bit of binary information. When the
holmium
atoms were placed on a special surface made of magnesium oxide,
they naturally oriented themselves with a magnetic north and south
pole, just like regular magnets have, pointing either straight up or down,
and remained that way in a stable condition. What’s more, they could make
the atoms flip by giving them a zap with a scanning tunneling microscope
that has a needle with a tip just one atom wide. Orientation conveys
binary information—either a
one
or a zero. Experiment shows that they could store one bit of
information in just one atom. If this kind of technology could be scaled
up, it theoretically could hold 80,000 gigabytes of information in just a
square inch. A credit-card-size device could hold 35 million songs. Atoms
could be placed within just about a nanometer of each other without
interfering with their neighbors, meaning they could be packed densely.
This tech won't show up in your smartphone anytime soon. For starters, the
experiment required a very, very chilly temperature: 1 kelvin,
which is colder than -450 degrees Fahrenheit. That's pretty energy intensive, and
not exactly practical in most data storage settings.
Single Molecules Can Work as Reproducible Transistors—at Room Temperature
Diamond-Based Circuits can take the Heat for advanced applications.
Researchers have developed a
hydrogenated diamond circuit operational at 300 degrees Celsius. When
power generators transfer electricity, they lose almost 10 percent of the
generated power. To address this, scientists are researching new diamond
semiconductor circuits to make power conversion systems more efficient
using hydrogenated diamond. These circuits can be used in diamond-based
electronic devices that are smaller, lighter and more efficient than
silicon-based devices.
The moment you turn on your PC, what
you
see is the work of thousands and thousands of people,
educated in the fields of engineering, science, math and
physics, just to name a few. And that's just the software. The
hardware also took the work of thousands of skilled people.
It covers many different industries, which adds the work of
thousands more people. I'm also a product that
took millions of people
over
thousands of years to
make, just to get me here in this
moment in time.
Computer Industry is the range of businesses involved in
designing computer
hardware and computer
networking infrastructures, developing
computer software, manufacturing
computer
components, and providing
information
technology (IT) services.
Software Industry includes businesses for development,
maintenance and
publication of
software that are using
different business models, also includes software services, such
as training, documentation, consulting and data recovery.
The First Computer
Antikythera Mechanism
is a 2,100-year-old analog computer. An international team of scientists has now read about
3,500 characters of explanatory text -- a quarter of the
original -- in the innards of the 2,100-year-old remains.
Analog Computer is a form of computer that uses the
continuously changeable aspects of physical phenomena such as
Electrical Network,
Mechanics, or
Hydraulics quantities to
model the problem being solved. Digital computers represent
varying quantities symbolically, as their numerical values
change. As an analog computer does not use discrete values, but
rather continuous values, processes cannot be reliably repeated
with exact equivalence, as they can with Turing machines. Unlike
digital signal processing, analog computers do not suffer from
the quantization noise, but are limited by analog noise.
Old Mechanical
Calculators (youtube)
Turing
Machine is an abstract machine that manipulates symbols on a
strip of tape according to a table of rules; to be more exact,
it is a mathematical model of computation that defines such a
device. Despite the model's simplicity, given any computer
algorithm, a
Turing machine can be constructed that is capable
of simulating that
algorithm's logic.
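A compact Python sketch of the idea: a table of rules drives a read/write head back and forth over a tape. The little machine below is a made-up example that flips every bit of a binary string and then halts.

def run_turing_machine(tape, rules, state="scan", blank="_"):
    # rules maps (state, symbol) -> (symbol_to_write, head_move, next_state),
    # where head_move is -1 (left), 0 (stay), or +1 (right).
    cells = dict(enumerate(tape))                # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A rule table that inverts each bit, moving right until it reads a blank.
flip_rules = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),
}
print(run_turing_machine("1011", flip_rules))    # prints 0100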
Curta is a small mechanical calculator developed by Curt
Herzstark in the 1930s in Vienna, Austria. By 1938, he had filed
a key patent, covering his complemented stepped drum, Deutsches
Reichspatent (German National Patent) No. 747073. This single
drum replaced the multiple drums, typically around 10 or so, of
contemporary calculators, and it enabled not only addition, but
subtraction through nines complement math, essentially
subtracting by adding. The nines' complement math breakthrough
eliminated the significant mechanical complexity created when
"borrowing" during subtraction. This drum would prove to be the
key to the small, hand-held mechanical calculator the Curta
would become. Curtas were considered the best portable
calculators available until they were displaced by electronic
calculators in the 1970s.
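A short sketch of the nines'-complement trick the Curta relied on: to subtract, add the nines' complement of the subtrahend and then fold the top carry back in (the "end-around carry"). The four-digit width is an arbitrary choice for the example.

def nines_complement(n, digits=4):
    # Replace each digit of n with 9 minus that digit.
    return (10 ** digits - 1) - n

def subtract_by_adding(a, b, digits=4):
    # Compute a - b (for a >= b) using only addition, as a stepped-drum machine would.
    total = a + nines_complement(b, digits)
    if total >= 10 ** digits:                    # a carry out of the top digit...
        total = total - 10 ** digits + 1         # ...gets added back in at the bottom
    return total

# 862 - 345: add the nines' complement 9654, getting 10516, then the
# end-around carry turns 0516 into 517.
print(subtract_by_adding(862, 345))              # 517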
Konrad
Zuse built the world's first programmable computer in
May 1941, calculating one operation per
second. The functional program-controlled Turing-complete Z3. Thanks to
this machine and its predecessors, Zuse has often been regarded as the
inventor of the modern computer. Zuse was also noted for the S2 computing
machine, considered the first process control computer. He founded one
of the earliest computer businesses in 1941, producing the Z4, which
became the world's first commercial computer. From 1943 to 1945 he
designed the first high-level programming language, Plankalkül. In 1969,
Zuse suggested the concept of a computation-based universe in his book
Rechnender Raum (Calculating Space).
List of Pioneers in Computer Science (wiki)
Abstract Machine also called an abstract computer, is a
theoretical model of a computer hardware or software system used
in automata theory. Abstraction of computing processes is used
in both the computer science and computer engineering
disciplines and usually assumes a discrete time paradigm.
Autonomous Machines
Jacquard Loom is a device fitted to a power loom that simplifies the
process of manufacturing textiles with such complex patterns as brocade,
damask and matelassé. It was invented by Joseph Marie Jacquard in
1804. The loom was controlled by a "chain
of cards"; a number of
punched cards laced
together into a continuous sequence. Multiple rows of holes were punched
on each card, with one complete card corresponding to one row of the
design.
Computer Programming in the Punched Card Era: from the
invention of computer programming languages up to the mid-1980s,
many if not most computer programmers created, edited and stored
their programs line by line on punched cards. The practice was
nearly universal with IBM computers in the era. A punched card
is a flexible write-once medium that encodes data, most commonly
80 characters. Groups or "decks" of cards form programs and
collections of data. Users could create cards using a desk-sized
keypunch with a typewriter-like keyboard. A typing error
generally necessitated repunching an entire card. In some
companies, programmers wrote information on special forms called
coding sheets, taking care to distinguish the digit zero from
the letter O, the digit one from the letter I, eight from B, two
from Z, and so on. These forms were then converted to cards by
keypunch operators, and in some cases, checked by verifiers. The
editing of programs was facilitated by reorganizing the cards,
and removing or replacing the lines that had changed; programs
were backed up by duplicating the deck, or writing it to
magnetic tape.
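As a small sketch of the 80-column constraint programmers worked within, the helper below lays out one line of code as a fixed 80-character card image with a sequence number in the final columns, so a dropped deck could be re-sorted. The exact column conventions varied by language and shop; the layout here is just an example.

def card_image(statement, sequence, width=80, seq_columns=8):
    # Lay out one source line as a fixed-width punched-card record.
    body_width = width - seq_columns
    if len(statement) > body_width:
        raise ValueError("statement too long for one card")
    return statement.ljust(body_width) + str(sequence).rjust(seq_columns, "0")

deck = [card_image(line, i * 10) for i, line in enumerate(["READ X", "PRINT X"], 1)]
for card in deck:
    print(repr(card), len(card))                 # each record is exactly 80 characters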
Keypunch is a device for precisely punching holes into stiff
paper cards at specific locations as determined by keys struck
by a human operator.
Punched Card is a piece of stiff paper that can be used to
contain digital information represented by the presence or
absence of holes in predefined positions. The information
might be data for data processing applications or, in earlier
examples, used to directly control automated machinery. The
terms IBM card, or Hollerith card specifically refer to punched
cards used in semiautomatic data processing. Punched cards were
widely used through much of the 20th century in what became
known as the data processing industry, where specialized and
increasingly complex unit record machines, organized into data
processing systems, used punched cards for data input, output,
and storage. Many early digital computers used punched cards,
often prepared using keypunch machines, as the primary medium
for input of both computer programs and data. While punched
cards are now obsolete as a recording medium, as of 2012, some
voting machines still use punched cards to record votes.

The First
Apple Computer on the right (
1976)
Monochrome Monitor, or green screen (the common name for a
monochrome monitor using a green "P1" phosphor screen), is a type of CRT
computer monitor that was very common in the early days of
computing, from the 1960s through the 1980s, before
color
monitors became popular. Monochrome monitors have only one color
of phosphor (mono means "one", and chrome means "
color"). Pixel
for pixel, monochrome monitors produce sharper text and images
than color CRT monitors. This is because a monochrome monitor is
made up of a continuous coating of phosphor and the sharpness
can be controlled by focusing the electron beam; whereas on a
color monitor, each pixel is made up of three phosphor dots (one
red, one blue, one green) separated by a mask. Monochrome
monitors were used in almost all dumb terminals and are still
widely used in text-based applications such as computerized cash
registers and point of sale systems because of their superior
sharpness and enhanced readability.
1983
Compaq
came out with a portable computer, which sold over 50,000 in the first
year, while IBM sold 750,000 PCs that same year. In a few short years,
Compaq became a billion dollar company. IBM tried to make
Intel
sell a new chip only to them, but Intel refused so it could sell its
chips to more companies.
Intel 80386 a
32-bit microprocessor introduced in 1985 had 275,000 transistors. In May
2006, Intel announced that 80386 production would stop at the end of
September 2007.
Embedded System
is a computer system with a dedicated function within a larger mechanical
or electrical system, often with real-time computing constraints. It
is embedded as part of a complete device often including hardware and
mechanical parts. Embedded systems control many devices in common use
today. Ninety-eight percent of all microprocessors are manufactured as
components of embedded systems.
Then the
Extended Industry Standard Architecture was announced in September
1988 by a consortium of PC clone vendors (the "Gang of Nine") as a counter
to IBM's use of its proprietary Micro Channel architecture (MCA) in its
PS/2 series and to end IBM's monopoly. But that only gave rise to Microsoft's
monopoly.
Exclusive Right. But by 1991, things got
worse for Compaq. Other computer companies came into the market, and IBM's
patent trolls attacked Compaq. In 2002 Compaq merged with
Hewlett Packard.
Organic computers are coming. Scientists have found a molecule
that will help to make organic electronic devices.
Radialenes are alicyclic organic compounds containing n
cross-conjugated exocyclic double bonds.
World's Smallest Computer
Michigan Micro Mote (M3)
Histories
Greatest Inventions
Computer History Films