Computers


Computer is a machine for performing calculations automatically; a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Historically, a computer was a person who was an expert at calculation or at operating calculating machines.

"A Keen Impassioned Beauty of a Great Machine" - "A Bicycle for the Brain"


Hardware - IC's - Code - Software - OS - VPN - Servers - Networks - Super Computers - Memory - Processing

You can learn several different subjects at the same time when you're learning about computers. You can learn Problem Solving, Math, Languages, Communication, Technology, Electricity, Physics and Intelligence, just to name a few.

Basic Computer Skills - Computer Literacy - History of Computers - Films about Computers - Computer Types

Computer Science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. It is the scientific and practical approach to computation and its applications and the systematic study of the feasibility, structure, expression, and mechanization of the methodical procedures or algorithms that underlie the acquisition, representation, processing, storage, communication of, and access to information. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems. Pioneers in Computer Science (wiki).

Computer Science Books (wiki) - List of Computer Books (wiki)

Theoretical Computer Science is a division or subset of general computer science and mathematics that focuses on more abstract or mathematical aspects of computing and includes the theory of computation, which is the branch that deals with how efficiently problems can be solved on a model of computation, using an algorithm. The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".

Doctor of Computer Science is a doctorate in Computer Science by dissertation or multiple research papers.

Computer Engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware–software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture. Usual tasks involving computer engineers include writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed signal circuit boards, and designing operating systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors. In many institutions, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year, because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one or two years of General Engineering before declaring computer engineering as their primary focus.

Telephone - Remote Work

Computer Architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation. How Does a Computer Work - Help Fixing PC's.

Minimalism in computing refers to the application of minimalist philosophies and principles in the design and use of hardware and software. Minimalism, in this sense, means designing systems that use the least hardware and software resources possible.


Computer Types


General Purpose Computer is a computer that is designed to be able to carry out many different tasks. Desktop computers and laptops are examples of general purpose computers. Among other things, they can be used to access the internet.

Personal Computer is a multi-purpose computer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not shared by many people at the same time through time-sharing.

First Computers - History of Computers - Super Computers - Artificial Intelligence

Smartphones - Remote Communication - Great Inventions - Technology Advancement

Analog Computer is a computer that processes analog data, using the continuous variation of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. Analog computers store data in a continuous form of physical quantities and perform calculations by measuring those quantities. This is quite different from digital computers, which represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Systems for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of an analog computer is a mechanical watch, where the continuous and periodic rotation of interlinked gears drives the second, minute and hour hands of the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task.

Future Computers Will Be Radically Different (youtube) - Matrix Multiplication - Neural Network - Image Recognition - Algorithms

Analog or linear circuits typically use only a few components and are thus some of the simplest types of ICs. Generally, analog circuits are connected to devices that collect signals from the environment or send signals back to the environment. An analog circuit works with analog signals. The full signal (a continuously variable signal) in the form of a wave carries more data—because it is a continuous wave—as opposed to a digitized waveform made up of binary ups and downs (or pulses). We live in an analog world. A linear circuit is a type of analog circuit designed to make a scaled copy of a waveform, meaning that the amplitude of the output is a fraction, or a multiple, of the amplitude of the input waveform. If the output amplitude is greater than the input amplitude, the circuit is an amplifier.

Analogue Electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term "analogue" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word analogos meaning "proportional".

Biological Computers - Computing Types

Ionic liquid-based reservoir computing: the key to efficient and flexible edge computing. Researchers have designed a tunable physical reservoir device based on the dielectric relaxation at an electrode-ionic liquid interface. Physical reservoir computing (PRC), which relies on the transient response of physical systems, is an attractive machine learning framework that can perform high-speed processing of time-series signals at low power. However, PRC systems have low tunability, limiting the range of signals they can process. Now, researchers from Japan present ionic liquids as an easily tunable physical reservoir device that can be optimized to process signals over a broad range of timescales by simply changing their viscosity.

10 types of computers include personal computers, desktops, laptops, tablets, hand-held computers, servers, workstations, mainframes, wearable computers and supercomputers.

Portable Computer was a computer designed to be easily moved from one place to another and included a display and keyboard. Operating Systems.

Laptop is a small portable personal computer with a "clamshell" form factor, typically having a thin LCD or LED computer screen mounted on the inside of the upper lid of the clamshell and an alphanumeric keyboard on the inside of the lower lid. The clamshell is opened up to use the computer. Laptops are folded shut for transportation, and thus are suitable for mobile use. Its name comes from lap, as it was deemed to be placed on a person's lap when being used. Although originally there was a distinction between laptops and notebooks (the former being bigger and heavier than the latter), as of 2014, there is often no longer any difference. Laptops are commonly used in a variety of settings, such as at work, in education, for playing games, Internet surfing, for personal multimedia, and general home computer use. Laptops combine all the input/output components and capabilities of a desktop computer, including the display screen, small speakers, a keyboard, hard disk drive, optical disc drive, pointing devices (such as a touchpad or trackpad), a processor, and memory into a single unit. Most modern laptops feature integrated webcams and built-in microphones, while many also have touchscreens. Laptops can be powered either from an internal battery or by an external power supply from an AC adapter. Hardware specifications, such as the processor speed and memory capacity, significantly vary between different types, makes, models and price points. Design elements, form factor and construction can also vary significantly between models depending on intended use. Examples of specialized models of laptops include rugged notebooks for use in construction or military applications, as well as low production cost laptops such as those from the One Laptop per Child (OLPC) organization, which incorporate features like solar charging and semi-flexible components not found on most laptop computers. Portable computers, which later developed into modern laptops, were originally considered to be a small niche market, mostly for specialized field applications, such as in the military, for accountants, or for traveling sales representatives. As the portable computers evolved into the modern laptop, they became widely used for a variety of purposes.

Tablet Computer is a mobile device, typically with a mobile operating system, touchscreen display, processing circuitry, and a rechargeable battery in a single thin, flat package. Tablets, being computers, do what other personal computers do, but lack some input/output (I/O) abilities that others have. Modern tablets largely resemble modern smartphones, the only differences being that tablets are relatively larger than smartphones, with screens 7 inches (18 cm) or larger, measured diagonally, and may not support access to a cellular network.

Desktop Computer is a personal computer designed for regular use at a single location on or near a desk or table due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit (CPU), memory, bus, and other electronic components), disk storage (usually one or more hard disk drives, optical disc drives, and in early models a floppy disk drive); a keyboard and mouse for input; and a computer monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk.

Workstation is a special computer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the group of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT and IBM which opened the door for the 3D graphics animation revolution of the late 1990s.

Industrial PC is a computer intended for industrial purposes (production of goods and services), with a form factor between a nettop and a server rack. Industrial PCs have higher dependability and precision standards, and are generally more expensive than consumer electronics. They often use complex instruction sets, such as x86, where reduced instruction sets such as ARM would otherwise be used. Controllers.


Computing Types


Bio-Inspired Computing is a field of study that loosely knits together subfields related to the topics of connectionism, social behaviour and emergence. It is often closely related to the field of artificial intelligence, as many of its pursuits can be linked to machine learning. It relies heavily on the fields of biology, computer science and mathematics. Briefly put, it is the use of computers to model the living phenomena, and simultaneously the study of life to improve the usage of computers. Biologically inspired computing is a major subset of natural computation.

Biological Computation encompasses: 1) the study of the computations performed by natural biota, including the subject matter of systems biology; 2) the design of algorithms inspired by the computational methods of biota; 3) the design and engineering of manufactured computational devices using synthetic biology components; and 4) computer methods for the analysis of biological data, elsewhere called computational biology. When biological computation refers to using biology to build computers, it is a subfield of computer science and is distinct from the interdisciplinary science of bioinformatics, which simply uses computers to better understand biology.

Computational Biology involves the development and application of data-analytical and theoretical methods, mathematical modeling and computational simulation techniques to the study of biological, behavioral, and social systems. The field is broadly defined and includes foundations in computer science, applied mathematics, animation, statistics, biochemistry, chemistry, biophysics, molecular biology, genetics, genomics, ecology, evolution, anatomy, neuroscience, and visualization. Computational biology is different from biological computation, which is a subfield of computer science and computer engineering using bioengineering and biology to build computers, but is similar to bioinformatics, which is an interdisciplinary science using computers to store and process biological data. Information.

Biological Computers are made of living cells. Instead of carrying electrical wiring, these computers use chemical inputs and other biologically derived molecules, such as proteins and DNA, to perform computational calculations that involve storing, retrieving and processing data.

DNA Computing is a branch of computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional silicon-based computer technologies. Research and development in this area concerns theory, experiments, and applications of DNA computing. The term "molectronics" has sometimes been used, but this term had already been used for an earlier technology, a then-unsuccessful rival of the first integrated circuits; this term has also been used more generally, for molecular-scale electronic technology.

Chemical Computer is an unconventional computer based on a semi-solid chemical soup where data are represented by varying concentrations of chemicals. The computations are performed by naturally occurring chemical reactions.

Computational Chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. It is essential because, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, it can in some cases predict hitherto unobserved chemical phenomena. It is widely used in the design of new drugs and materials.

UW engineers borrow from electronics to build largest circuits to date in living eukaryotic cells. Living cells must constantly process information to keep track of the changing world around them and arrive at an appropriate response.

Model of Computation is the definition of the set of allowable operations used in computation and their respective costs. It is used for measuring the complexity of an algorithm in execution time and/or memory space: by assuming a certain model of computation, it is possible to analyze the computational resources required or to discuss the limitations of algorithms or computers.

Computer Simulation - Virtual Reality - Turing Machine

Ubiquitous Computing is a concept in software engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets and terminals in everyday objects such as a fridge or a pair of glasses. The underlying technologies to support ubiquitous computing include Internet, advanced middleware, operating system, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning and new materials.

Quantum Computer (super computers)

Parallel Computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Working Together - Multitasking.

Task Parallelism is a form of parallelization of computer code across multiple processors in parallel computing environments. Task parallelism focuses on distributing tasks—concurrently performed by processes or threads—across different processors. It contrasts to data parallelism as another form of parallelism.
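
As a minimal sketch of task parallelism, the code below uses Python's standard concurrent.futures module to distribute independent tasks across worker processes; the count_primes function and its input sizes are invented for illustration, not taken from the text above.

```python
# Task parallelism sketch: independent tasks distributed across processes.
# count_primes and the task inputs are illustrative, not from the article.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit):
    """Count primes below limit with trial division (deliberately simple)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    tasks = [50_000, 60_000, 70_000, 80_000]   # one independent task per input
    with ProcessPoolExecutor() as pool:        # tasks run concurrently on separate CPUs
        results = list(pool.map(count_primes, tasks))
    print(dict(zip(tasks, results)))
```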

Human Brain Parallel Processing

Human Centered Computing studies the design, development, and deployment of mixed-initiative human-computer systems. It emerged from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts. Human-centered computing is closely related to human-computer interaction and information science. Human-centered computing is usually concerned with systems and practices of technology use, while human-computer interaction is more focused on ergonomics and the usability of computing artifacts, and information science is focused on practices surrounding the collection, manipulation, and use of information.

Distributed Computing is computing in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Distributed Workforce.

Edge Computing is a distributed computing paradigm that brings computation and data storage closer to the sources of data. This is expected to improve response times and save bandwidth. Edge computing is an architecture rather than a specific technology, and a topology- and location-sensitive form of distributed computing.

Cloud Computing is a type of Internet-based computing that provides shared computer processing resources and data to computers and other devices on demand. It is a model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources (e.g., computer networks, servers, storage, applications and services), which can be rapidly provisioned and released with minimal management effort. Cloud computing and storage solutions provide users and enterprises with various capabilities to store and process their data in either privately owned, or third-party data centers that may be located far from the user–ranging in distance from across a city to across the world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale, similar to a utility (like the electricity grid) over an electricity network. Cloud Computing Tools.

Reversible Computing is a model of computing where the computational process to some extent is reversible, i.e., time-invertible. In a computational model that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the relation of the mapping from states to their successors must be one-to-one. Reversible computing is generally considered an unconventional form of computing.
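
The one-to-one requirement can be made concrete with a small sketch: the controlled-NOT (CNOT) gate is a standard example of a reversible operation. This is an illustrative model, not a description of any particular reversible machine.

```python
# Reversible computing sketch: the controlled-NOT (CNOT) gate is a
# one-to-one mapping on two bits, so it can be undone by applying it again.

def cnot(a, b):
    """Map (a, b) -> (a, a XOR b); the control bit a is preserved."""
    return a, a ^ b

# Every distinct input maps to a distinct output (the mapping is one-to-one),
# and applying the gate twice recovers the original state (time-invertible).
for a in (0, 1):
    for b in (0, 1):
        out = cnot(a, b)
        assert cnot(*out) == (a, b)
        print((a, b), "->", out)
```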

Adaptable - Compatible

Natural Computing is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials (e.g., molecules) to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.



Hardware


Hardware is the collection of physical components that constitute a computer system, such as the monitor, keyboard, computer data storage, hard disk drive (HDD), graphics card, sound card, memory (RAM), and motherboard, all of which are tangible physical objects. Hardware is directed by the software to execute any command or instruction.

Computer Hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, monitor, keyboard, computer data storage, graphic card, sound card, speakers and motherboard. By contrast, software is instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes or modifications; whereas software is "soft" because it is easy to update or change. Intermediate between software and hardware is "firmware", which is software that is strongly coupled to the particular hardware of a computer system and thus the most difficult to change but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing. Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components.

Hardware Architecture refers to the identification of a system's physical components and their interrelationships. This description, often called a hardware design model, allows hardware designers to understand how their components fit into a system architecture and provides to software component designers important information needed for software development and integration. Clear definition of a hardware architecture allows the various traditional engineering disciplines (e.g., electrical and mechanical engineering) to work more effectively together to develop and manufacture new machines, devices and components. Processors.



Memory


Computer Memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term "primary storage". Computer memory operates at a high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access program and data storage but offers higher capacities. If needed, contents of the computer memory can be transferred to secondary storage, through a memory management technique called "virtual memory". An archaic synonym for memory is store. The term "memory", meaning "primary storage" or "main memory", is often associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary storage but also other purposes in computers and other digital electronic devices. There are two main kinds of semiconductor memory, volatile and non-volatile. Examples of non-volatile memory are flash memory (used as secondary memory) and ROM, PROM, EPROM and EEPROM memory (used for storing firmware such as BIOS). Examples of volatile memory are primary storage, which is typically dynamic random-access memory (DRAM), and fast CPU cache memory, which is typically static random-access memory (SRAM) that is fast but energy-consuming, offering lower memory areal density than DRAM. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and multiple bits per cell (called MLC, Multiple Level Cell). The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered as memory, since they only store one word and do not include an addressing mechanism. Typical secondary storage devices are hard disk drives and solid-state drives. Magnetic Memory (memristors).

Memory Cell in computing is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it. Brain Memory - Data Storage - Knowledge Preservation.
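
A toy model of the one-bit cell described above, with set, reset, and read operations; the class and its interface are invented for illustration.

```python
# One-bit memory cell sketch: set stores logic 1, reset stores logic 0,
# and the value persists until the next set/reset. The class is illustrative.

class MemoryCell:
    def __init__(self):
        self.value = 0          # power-on state, logic 0 (low voltage level)

    def set(self):              # store a logic 1 (high voltage level)
        self.value = 1

    def reset(self):            # store a logic 0 (low voltage level)
        self.value = 0

    def read(self):             # reading does not disturb the stored bit
        return self.value

cell = MemoryCell()
cell.set()
print(cell.read())  # 1 -- maintained until the set/reset process changes it
```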

Random-Access Memory is a form of computer data storage which stores frequently used program instructions to increase the general speed of a system. A RAM device allows data items to be read or written in almost the same amount of time. Working Memory.

Dynamic Random-Access Memory is a type of random access semiconductor memory that stores each bit of data in a separate tiny capacitor within an integrated circuit. The capacitor can either be charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1. The electric charge on the capacitors slowly leaks off, so without intervention the data on the chip would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM) which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since it loses its data quickly when power is removed. However, DRAM does exhibit limited data remanence. DRAM is widely used in digital electronics where low-cost and high-capacity memory is required. One of the largest applications for DRAM is the main memory (colloquially called the "RAM") in modern computers and graphics cards (where the "main memory" is called the graphics memory). It is also used in many portable devices and video game consoles. In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors. Due to its need of a system to perform refreshing, DRAM has more complicated circuitry and timing requirements than SRAM, but it is much more widely used. The advantage of DRAM is the structural simplicity of its memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities, making DRAM much cheaper per bit. The transistors and capacitors used are extremely small; billions can fit on a single memory chip. Due to the dynamic nature of its memory cells, DRAM consumes relatively large amounts of power, with different ways for managing the power consumption. DRAM had a 47% increase in the price-per-bit in 2017, the largest jump in 30 years since the 45% jump in 1988, while in recent years the price has been going down.
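
A toy numerical sketch of why DRAM needs refreshing: stored charge decays over time, and a refresh cycle rewrites the bit before it falls below the read threshold. All constants here are invented, not real device parameters.

```python
# Toy DRAM model: each bit is a capacitor whose charge leaks over time,
# so a refresh circuit must periodically rewrite it. Numbers are invented.

THRESHOLD = 0.5          # charge above this still reads as a stored 1

def leak(charge, ticks, rate=0.9):
    """Charge decays each tick; without refresh the bit is eventually lost."""
    for _ in range(ticks):
        charge *= rate
    return charge

def refresh(charge):
    """Read the bit and rewrite it at full charge, as a refresh cycle does."""
    return 1.0 if charge > THRESHOLD else 0.0

charge = 1.0                         # a freshly written 1
print(leak(charge, 10))              # ~0.35: below threshold, data lost
charge = refresh(leak(charge, 5))    # refreshed in time: restored to 1.0
print(charge)
```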

Universal Memory refers to a hypothetical computer data storage device combining the cost benefits of DRAM, the speed of SRAM, the non-volatility of flash memory along with infinite durability. Such a device, if it ever becomes possible to develop, would have a far-reaching impact on the computer market. Computers for most of their recent history have depended on several different data storage technologies simultaneously as part of their operation. Each one operates at a level in the memory hierarchy where another would be unsuitable. A personal computer might include a few megabytes of fast but volatile and expensive SRAM as the CPU cache, several gigabytes of slower DRAM for program memory, and multiple hundreds of gigabytes of the slow but non-volatile flash memory or a few terabytes of "spinning platters" hard disk drive for long term storage.

Universal Memory can record or delete data using 100 times less energy than Dynamic Random Access Memory (DRAM) and flash drives. It promises to transform daily life with its ultra-low energy consumption, allowing computers which do not need to boot up and which could sleep between key strokes. While writing data to DRAM is fast and low-energy, the data is volatile and must be continuously 'refreshed' to avoid it being lost: this is clearly inconvenient and inefficient. Flash stores data robustly, but writing and erasing is slow, energy intensive and deteriorates it, making it unsuitable for working memory.

Read-Only Memory is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all, so it is mainly used to store firmware (software that is closely tied to specific hardware, and unlikely to need frequent updates) or application software in plug-in cartridges. OS.

Volatile Memory is memory that requires power to maintain the stored information and only retains its contents while powered on; when the power is interrupted, the stored data is quickly lost, and thus would be gone after a restart. Short Term Memory.

Non-Volatile Memory is a type of computer memory that can retrieve stored information even after having been power cycled (turned off and back on). The opposite of non-volatile memory is Volatile Memory which needs constant power in order to prevent data from being erased. Memory Error Correction.

Conductive Bridging Random Access Memory or CBRAM stores data in a non-volatile or near-permanent way, helping to reduce the size and power consumption of components.

Programmable Metallization Cell is a non-volatile computer memory developed to replace the widely used flash memory, providing a combination of longer lifetimes, lower power, and better memory density.

Flash Memory is a non-volatile computer storage medium that can be electrically erased and reprogrammed. Jump Drive.

Multi-Level Cell is a memory element capable of storing more than a single bit of information, compared to a single-level cell (SLC) which can store only one bit per memory element. Triple-level cells (TLC) and quad-level cells (QLC) are versions of MLC memory, which can store 3 and 4 bits per cell, respectively. Note that due to the convention, the name "multi-level cell" is sometimes used specifically to refer to the "two-level cell", which is slightly confusing. Overall, the memories are named as follows: SLC (1 bit per cell) - fastest, more reliable, but highest cost. MLC (2 bits per cell). TLC (3 bits per cell). QLC (4 bits per cell) - slowest, least cost. Examples of MLC memories are MLC NAND flash, MLC PCM (phase change memory), etc. For example, in SLC NAND flash technology, each cell can exist in one of the two states, storing one bit of information per cell. Most MLC NAND flash memory has four possible states per cell, so it can store two bits of information per cell. This reduces the amount of margin separating the states and results in the possibility of more errors. Multi-level cells which are designed for low error rates are sometimes called enterprise MLC (eMLC). There are tools for modeling the area/latency/energy of MLC memories.
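
A small sketch of the arithmetic behind the cell types above: a cell holding n bits must distinguish 2^n charge levels, which is why QLC packs more data per cell but leaves smaller margins between states.

```python
# Cell levels vs. bits per cell: a cell that distinguishes 2**n levels
# stores n bits, so QLC needs 16 levels and thus tighter margins than SLC.

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    levels = 2 ** bits
    print(f"{name}: {bits} bit(s) per cell -> {levels} distinguishable levels")
```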

Solid-State Storage is a type of non-volatile computer storage that stores and retrieves digital information using only electronic circuits, without any involvement of moving mechanical parts. This differs fundamentally from the traditional electromechanical storage paradigm, which accesses data using rotating or linearly moving media coated with magnetic material.

Solid-State Drive is a solid-state storage device that uses integrated circuit assemblies as memory to store data persistently. SSD technology primarily uses electronic interfaces compatible with traditional block input/output (I/O) hard disk drives (HDDs), which permit simple replacements in common applications. New I/O interfaces like SATA Express and M.2 have been designed to address specific requirements of the SSD technology. SSDs have no moving mechanical components. This distinguishes them from traditional electromechanical drives such as hard disk drives (HDDs) or floppy disks, which contain spinning disks and movable read/write heads. Compared with electromechanical drives, SSDs are typically more resistant to physical shock, run silently, have quicker access time and lower latency. However, while the price of SSDs has continued to decline over time SSDs are (as of 2018) still more expensive per unit of storage than HDDs and are expected to continue so into the next decade.
Solid State Drive (amazon).

Scientists perfect technique to boost capacity of computer storage a thousand-fold. New technique leads to world’s densest solid-state memory that can store 45 million songs on the surface of a quarter.

Hard Disk Drive is a data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rapidly rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Storage Types.

NAND Gate is a logic gate which produces an output which is false only if all its inputs are true; thus its output is complement to that of the AND gate. A LOW (0) output results only if both the inputs to the gate are HIGH (1); if one or both inputs are LOW (0), a HIGH (1) output results. It is made using transistors and junction diodes.
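
Because NAND is functionally complete, the other basic gates can be built from it alone; the short sketch below models this at the truth-table level (an illustration, not hardware).

```python
# NAND sketch: output is false only when all inputs are true. Because NAND
# is functionally complete, NOT, AND and OR can all be built from it.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "AND:", and_(a, b), "OR:", or_(a, b))
```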

Floating-gate MOSFET is a field-effect transistor, whose structure is similar to a conventional MOSFET. The gate of the FGMOS is electrically isolated, creating a floating node in DC, and a number of secondary gates or inputs are deposited above the floating gate (FG) and are electrically isolated from it. These inputs are only capacitively connected to the FG. Since the FG is completely surrounded by highly resistive material, the charge contained in it remains unchanged for long periods of time. Usually Fowler-Nordheim tunneling and hot-carrier injection mechanisms are used to modify the amount of charge stored in the FG.

Field-effect transistor is a transistor that uses an electric field to control the electrical behaviour of the device. FETs are also known as unipolar transistors since they involve single-carrier-type operation. Many different implementations of field effect transistors exist. Field effect transistors generally display very high input impedance at low frequencies. The conductivity between the drain and source terminals is controlled by an electric field in the device, which is generated by the voltage difference between the body and the gate of the device.

Molecule that works as Flash Storage - Macronix

EEPROM stands for electrically erasable programmable read-only memory and is a type of non-volatile memory used in computers and other electronic devices to store relatively small amounts of data but allowing individual bytes to be erased and reprogrammed.

Computer Data Storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. Knowledge Preservation.

Memory-Mapped File is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource. This resource is typically a file that is physically present on disk, but can also be a device, shared memory object, or other resource that the operating system can reference through a file descriptor. Once present, this correlation between the file and the memory space permits applications to treat the mapped portion as if it were primary memory.
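
A minimal sketch using Python's standard mmap module, assuming a small scratch file named example.bin; once mapped, the file's bytes can be read and written as if they were ordinary memory.

```python
# Memory-mapped file sketch: the mapped bytes are accessed like memory,
# while the operating system pages the underlying file in and out.
import mmap

with open("example.bin", "wb") as f:            # create a small file to map
    f.write(b"hello memory-mapped world")

with open("example.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:        # map the whole file
        print(mm[0:5])                          # read as if it were memory
        mm[0:5] = b"HELLO"                      # write through the mapping

print(open("example.bin", "rb").read())         # the change landed in the file
```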

Memory-Mapped I/O and port-mapped I/O are two complementary methods of performing input/output (I/O) between the CPU and peripheral devices in a computer. An alternative approach is using dedicated I/O processors, commonly known as channels on mainframe computers, which execute their own instructions.

Virtual Memory is a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit or MMU, automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer. The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging.
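
A simplified sketch of the address translation described above: a page table maps virtual page numbers to physical frames, and the offset within the page is preserved. The table contents and page size here are illustrative, and a real MMU does this in hardware.

```python
# Virtual-to-physical address translation sketch. Page table entries are
# invented for the example; a missing entry models a (simplified) page fault.

PAGE_SIZE = 4096                      # a common page size

page_table = {0: 7, 1: 3, 2: 9}       # virtual page -> physical frame (made up)

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        raise LookupError("page fault: OS must load the page")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # virtual page 1, offset 0x234 -> frame 3
```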

Persistent Memory is any method or apparatus for efficiently storing data structures such that they can continue to be accessed using memory instructions or memory APIs even after the end of the process that created or last modified them. Often confused with non-volatile random-access memory (NVRAM), persistent memory is instead more closely linked to the concept of persistence in its emphasis on program state that exists outside the fault zone of the process that created it. Efficient, memory-like access is the defining characteristic of persistent memory. It can be provided using microprocessor memory instructions, such as load and store. It can also be provided using APIs that implement remote direct memory access verbs, such as RDMA read and RDMA write. Other low-latency methods that allow byte-grain access to data also qualify. Persistent memory capabilities extend beyond non-volatility of stored bits. For instance, the loss of key metadata, such as page table entries or other constructs that translate virtual addresses to physical addresses, may render durable bits non-persistent. In this respect, persistent memory resembles more abstract forms of computer storage, such as file systems. In fact, almost all existing persistent memory technologies implement at least a basic file system that can be used for associating names or identifiers with stored extents, and at a minimum provide file system methods that can be used for naming and allocating such extents.

Magnetoresistive Random-Access Memory is a non-volatile random-access memory technology available today that began its development in the 1990s. Continued increases in density of existing memory technologies – notably flash RAM and DRAM – kept it in a niche role in the market, but its proponents believe that the advantages are so overwhelming that magnetoresistive RAM will eventually become a dominant type of memory, potentially even becoming a universal memory. It is currently in production by Everspin, and other companies including GlobalFoundries and Samsung have announced product plans. A recent, comprehensive review article on magnetoresistance and magnetic random access memories is available as an open access paper in Materials Today.

Computer Memory (amazon) - Internal Hard Drives (amazon)

Laptop Computers (amazon) - Desktop Computers (amazon)

Webopedia has definitions to words, phrases and abbreviations related to computing and information technology.

Molecular Memory is a term for data storage technologies that use molecular species as the data storage element, rather than e.g. circuits, magnetics, inorganic materials or physical shapes. The molecular component can be described as a molecular switch, and may perform this function by any of several mechanisms, including charge storage, photochromism, or changes in capacitance. In a perfect molecular memory device, each individual molecule contains a bit of data, leading to massive data capacity. However, practical devices are more likely to use large numbers of molecules for each bit, in the manner of 3D optical data storage (many examples of which can be considered molecular memory devices). The term "molecular memory" is most often used to indicate very fast, electronically addressed solid-state data storage, as is the term computer memory. At present, molecular memories are still found only in laboratories.

Molecular Memory can be used to Increase the Memory Capacity of Hard Disks. Scientists have taken part in research where the first molecule capable of remembering the direction of a magnetic field above liquid-nitrogen temperatures has been prepared and characterized. The results may be used in the future to massively increase the storage capacity of hard disks without increasing their physical size.

Researchers develop 128Mb STT-MRAM with world's fastest write speed for embedded memory. A research team has successfully developed 128Mb-density STT-MRAM (spin-transfer torque magnetoresistive random access memory) with a write speed of 14 ns for use in embedded memory applications, such as cache in IoT and AI devices. This is currently the world's fastest write speed for embedded memory application with a density over 100Mb and will pave the way for the mass-production of large-capacity STT-MRAM. STT-MRAM is capable of high-speed operation and consumes very little power as it retains data even when the power is off. Because of these features, STT-MRAM is gaining traction as the next-generation technology for applications such as embedded memory, main memory and logic. Three large semiconductor fabrication plants have announced that risk mass-production will begin in 2018. As memory is a vital component of computer systems, handheld devices and storage, its performance and reliability are of great importance for green energy solutions. The current capacity of STT-MRAM ranges between 8Mb and 40Mb. But to make STT-MRAM more practical, it is necessary to increase the memory density. The team at the Center for Innovative Integrated Electronic Systems (CIES) has increased the memory density of STT-MRAM by intensively developing STT-MRAMs in which magnetic tunnel junctions (MTJs) are integrated with CMOS. This will significantly reduce the power consumption of embedded memory such as cache and eFlash memory. MTJs were miniaturized through a series of process developments. To reduce the memory size needed for higher-density STT-MRAM, the MTJs were formed directly on via holes -- small openings that allow a conductive connection between the different layers of a semiconductor device. By using the reduced-size memory cell, the research group designed 128Mb-density STT-MRAM and fabricated a chip. In the fabricated chip, the researchers measured the write speed of a subarray. As a result, high-speed operation at 14 ns was demonstrated at a low power supply voltage of 1.2 V. To date, this is the fastest write-speed operation in an STT-MRAM chip with a density over 100Mb in the world.



Motherboard - Main Circuit Board


Motherboard is the main printed circuit board or PCB found in general purpose microcomputers and other expandable systems. It holds and allows communication between many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard usually contains significant sub-systems such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general purpose use. Motherboard specifically refers to a PCB with expansion capability and as the name suggests, this board is often referred to as the "mother" of all components attached to it, which often include peripherals, interface cards, and daughtercards: sound cards, video cards, network cards, hard drives, or other forms of persistent storage; TV tuner cards, cards providing extra USB or FireWire slots and a variety of other custom components. Similarly, the term mainboard is applied to devices with a single board and no additional expansions or capability, such as controlling boards in laser printers, televisions, washing machines and other embedded systems with limited expansion abilities. Mother Board (image)

Circuit Board Components - Design (Circuit Boards)

Integrated Circuit - I.C.

Printed Circuit Board mechanically supports and electrically connects electronic components or electrical components using conductive tracks, pads and other features etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate. Components are generally soldered onto the PCB to both electrically connect and mechanically fasten them to it. Printed circuit boards are used in all but the simplest electronic products. They are also used in some electrical products, such as passive switch boxes.


Processor


Processor is the part of a computer or microprocessor chip that does most of the data processing, which is the procedure that interprets the input based on programmed instructions so that the information can be prepared for a particular purpose or output.

Algorithms - Computer Programs - Information Processing

Microprocessor accepts digital or binary data as input, processes it according to instructions stored in its memory, and provides results as output. Transistors.

Central Processing Unit carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions, using voltage levels as its language. The Central Processing Unit is also known as the CPU for short. The central processing unit includes registers, an arithmetic logic unit, and control circuits, which interpret and execute assembly language instructions. The CPU interacts with all the other parts of the computer architecture to make sense of the data and deliver the necessary output. The CPU is the most important processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units, or GPUs for short.

Coprocessor is a computer processor used to supplement the functions of the primary processor or the CPU.

Multi-Core Processor can run multiple instructions at the same time, increasing overall speed for programs that support parallel processing.

Multiprocessing is a computer system having two or more processing units (multiple processors) each sharing main memory and peripherals, in order to simultaneously process programs. It is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined. (multiple cores on one die, multiple dies in one package, multiple packages in one system unit, etc.). Brain Processing.

Context Switch in computing is the process of storing the state of a process or of a thread, so that it can be restored and execution resumed from the same point later. This allows multiple processes to share a single CPU, and is an essential feature of a multitasking operating system.
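
A cooperative sketch of context switching using Python generators: each task's execution state is saved at a yield and restored on the next resume, letting two tasks share one interpreter. Real operating systems do this preemptively in the kernel; this is only an analogy.

```python
# Cooperative context-switch sketch: each task's state lives in a generator,
# and the loop "switches context" by suspending one task and resuming another.

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # state is saved here; execution resumes later

ready = [task("A", 3), task("B", 3)]   # two runnable tasks sharing one CPU
while ready:
    current = ready.pop(0)             # pick the next task
    try:
        next(current)                  # restore its state and run to next yield
        ready.append(current)          # still runnable: back of the queue
    except StopIteration:
        pass                           # task finished
```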

Graphics Processing Unit is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics and image processing, and their highly parallel structure makes them more efficient than general-purpose CPUs for algorithms where the processing of large blocks of data is done in parallel. In a personal computer, a GPU can be present on a video card, or it can be embedded on the motherboard or—in certain CPUs—on the CPU die. NVIDIA marketed the TITAN V as the most powerful graphics card ever created for the PC at its 2017 launch.

Processor Design is the design engineering task of creating a microprocessor, a component of computer hardware. It is a subfield of electronics engineering and computer engineering. The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture described in e.g. VHDL or Verilog. This description is then manufactured employing some of the various semiconductor device fabrication processes. This results in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB). The mode of operation of any microprocessor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values and to control program flow.

Multitasking - Batch Process - Process - Processing - Speed

Information Processor is a system (be it electrical, mechanical or biological) which takes information (a sequence of enumerated symbols or states) in one form and processes (transforms) it into another form, e.g. to statistics, by an algorithmic process. An information processing system is made up of four basic parts, or sub-systems: input, processor, storage, output.

Processor Affinity enables the binding and unbinding of a process or a thread to a central processing unit.

Clock Signal is a particular type of signal that oscillates between a high and a low state and is used like a metronome to coordinate actions of digital circuits. A clock signal is produced by a clock generator. Although more complex arrangements are used, the most common clock signal is in the form of a square wave with a 50% duty cycle, usually with a fixed, constant frequency. Circuits using the clock signal for synchronization may become active at either the rising edge, falling edge, or, in the case of double data rate, both in the rising and in the falling edges of the clock cycle.

Clock Generator is a circuit that produces a timing signal, known as a clock signal, for use in synchronizing a circuit's operation. The signal can range from a simple symmetrical square wave to more complex arrangements. The basic parts that all clock generators share are a resonant circuit and an amplifier. The resonant circuit is usually a quartz piezo-electric oscillator, although simpler tank circuits and even RC circuits may be used. The amplifier circuit usually inverts the signal from the oscillator and feeds a portion back into the oscillator to maintain oscillation. The generator may have additional sections to modify the basic signal. The 8088 for example, used a 2/3 duty cycle clock, which required the clock generator to incorporate logic to convert the 50/50 duty cycle which is typical of raw oscillators. Other such optional sections include frequency divider or clock multiplier sections. Programmable clock generators allow the number used in the divider or multiplier to be changed, allowing any of a wide variety of output frequencies to be selected without modifying the hardware. The clock generator in a motherboard is often changed by computer enthusiasts to control the speed of their CPU, FSB, GPU and RAM. Typically the programmable clock generator is set by the BIOS at boot time to the selected value; although some systems have dynamic frequency scaling, which frequently re-programs the clock generator.

Crystal Oscillator is an electronic oscillator circuit that uses the mechanical resonance of a vibrating crystal of piezoelectric material to create an electrical signal with a precise frequency.

Clock Speed typically refers to the frequency at which a chip such as a central processing unit (CPU), or one core of a multi-core processor, is running, and is used as an indicator of the processor's speed. It is measured in clock cycles per second or its equivalent, the SI unit hertz (Hz). The clock rate of the first generation of computers was measured in hertz or kilohertz (kHz), but in the 21st century the speed of modern CPUs is commonly advertised in gigahertz (GHz). This metric is most useful when comparing processors within the same family, holding constant other features that may impact performance. Video card and CPU manufacturers commonly select their highest performing units from a manufacturing batch and set their maximum clock rate higher, fetching a higher price.

Counter in digital electronics is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock signal. The most common type is a sequential digital logic circuit with an input line called the "clock" and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter. A counter circuit is usually constructed of a number of flip-flops connected in cascade. Counters are a very widely used component in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits. (7 Bit Counter).
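
To make the cascade concrete, here is a minimal Python sketch (an illustration for this page, not any standard component) that models a ripple counter behaviorally: each simulated flip-flop toggles on the falling edge of the stage before it, so the outputs read out a binary count.

# Behavioral model only; real counters also have propagation delay per stage.
def ripple_counter(pulses, bits=3):
    """Simulate a ripple counter; return the count after each clock pulse."""
    state = [0] * bits          # flip-flop outputs, least significant first
    history = []
    for _ in range(pulses):
        # The clock pulse toggles stage 0; each 1 -> 0 transition ripples onward.
        for i in range(bits):
            state[i] ^= 1       # toggle this stage
            if state[i] == 1:   # no falling edge here, so the ripple stops
                break
        history.append(sum(bit << i for i, bit in enumerate(state)))
    return history

print(ripple_counter(8))  # [1, 2, 3, 4, 5, 6, 7, 0] -- wraps after 2**3 pulses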

555 timer IC is an integrated circuit (chip) used in a variety of timer, pulse generation, and oscillator applications. The 555 can be used to provide time delays, as an oscillator, and as a flip-flop element. Derivatives provide two or four timing circuits in one package.

Semiconductor Design standard cell methodology is a method of designing application-specific integrated circuits (ASICs) with mostly digital-logic features.

BIOS

Silicon Photonics is the study and application of photonic systems which use silicon as an optical medium.

Transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed of semiconductor material usually with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.

CPU - Binary Code - Memistors

Transistors, How do they work? (youtube)

Making your own 4 Bit Computer from Transistors (youtube)

See How Computers Add Numbers In One Lesson (youtube)
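
In the spirit of the lesson linked above, the following sketch shows how addition can emerge from nothing but logic operations. The function names are illustrative; the code models gate behavior, not real transistor-level timing.

def full_adder(a, b, carry_in):
    """One column of binary addition, built only from logic operations."""
    s = a ^ b ^ carry_in                         # XOR produces the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry propagates to the next column
    return s, carry_out

def add_bits(x, y, width=8):
    """Chain full adders together, as a CPU's adder circuit does."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(add_bits(23, 19))  # 42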

Carbon Nanotube Field-effect Transistor refers to a field-effect transistor that utilizes a single carbon nanotube or an array of carbon nanotubes as the channel material instead of bulk silicon in the traditional MOSFET structure. First demonstrated in 1998, CNTFETs have seen major developments since. E-Waste.

Analog Chip is a set of miniature electronic analog circuits formed on a single piece of semiconductor material.

Analog Signal is any continuous signal for which the time varying feature (variable) of the signal is a representation of some other time varying quantity, i.e., analogous to another time varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values. The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, human speech, and other systems may also convey or be considered analog signals. An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information.

Digital Signal is a signal that is constructed from a discrete set of waveforms of a physical quantity so as to represent a sequence of discrete values. A logic signal is a digital signal with only two possible values, and describes an arbitrary bit stream. Other types of digital signals can represent three-valued logic or higher valued logics. Conversion.

Spintronics and Nanophotonics combined in 2-D material is a way to convert the spin information into a predictable light signal at room temperature. The discovery brings the worlds of spintronics and nanophotonics closer together and might lead to the development of an energy-efficient way of processing data.

Math Works - Nimbula - Learning Tools - Digikey Electronic Components - Nand 2 Tetris

Interfaces - Brain - Robots - 3D Printing - Operating Systems - Code - Programming - Computer Courses - Online Dictionary of Computer - Technology Terms - CS Unplugged - Computer Science without using computers.

Computer Standards List (wiki) - IPv6, the most recent version of the Internet Protocol - Web 2.0 (wiki)

Trouble-Shoot PC's - Fixing PC's - PC Maintenance Tips

Variable (cs) - Technology Education - Engineering - Technology Addiction - Technical Competitions - Internet.


Digital Displays


Digital Signage is a sub segment of signage. Digital signage uses technologies such as LCD, LED and projection to display content such as digital images, video, streaming media, and information. It can be found in public spaces, transportation systems, museums, stadiums, retail stores, hotels, restaurants, and corporate buildings, to provide wayfinding, exhibitions, marketing and outdoor advertising. The digital signage market was projected to grow from about USD 15 billion to over USD 24 billion by 2020. Interface.

Display Device is an output device for presentation of information in visual or tactile form (the latter used for example in tactile electronic displays for blind people). When the input information is supplied as an electrical signal, the display is called an electronic display. Common applications for electronic visual displays are televisions or computer monitors.

Colors - Eyes (sight) - Eye Strain - Frame Rate

LED Display is a flat panel display, which uses an array of light-emitting diodes as pixels for a video display. Their brightness allows them to be used outdoors in store signs and billboards, and in recent years they have also become commonly used in destination signs on public transport vehicles. LED displays are capable of providing general illumination in addition to visual display, as when used for stage lighting or other decorative (as opposed to informational) purposes.

Organic Light-Emitting Diode (OLED) is a Light-Emitting Diode (LED) in which the emissive electroluminescent layer is a film of organic compound that emits light in response to an electric current. This layer of organic semiconductor is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, portable systems such as mobile phones, handheld game consoles and PDAs. A major area of research is the development of white OLED devices for use in solid-state lighting applications.

AMOLED is a display technology used in smartwatches, mobile devices, laptops, and televisions. OLED describes a specific type of thin-film-display technology in which organic compounds form the electroluminescent material, and active matrix refers to the technology behind the addressing of pixels.

High-Dynamic-Range Imaging is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

Graphics Display Resolution is the width and height dimensions of an electronic visual display device, such as a computer monitor, in pixels. Certain combinations of width and height are standardized and typically given a name and an initialism that is descriptive of its dimensions. A higher display resolution in a display of the same size means that displayed content appears sharper. Display Resolution is the number of distinct pixels in each dimension that can be displayed. Ratio.

4K Resolution refers to a horizontal resolution on the order of 4,000 pixels and vertical resolution on the order of 2,000 pixels.

Smartphones - Tech Addiction

Computer Monitor is an electronic visual display for computers. A monitor usually comprises the display device, circuitry, casing, and power supply. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) or a flat panel LED display, while older monitors used a cathode ray tube (CRT). It can be connected to the computer via VGA, DVI, HDMI, DisplayPort, Thunderbolt, LVDS (Low-voltage differential signaling) or other proprietary connectors and signals.

Durable Monitor Screens (computers)

Liquid-Crystal Display is a flat-panel display or other electronically modulated optical device that uses the light-modulating properties of liquid crystals. Liquid crystals do not emit light directly, instead using a backlight or reflector to produce images in color or monochrome. LCDs are available to display arbitrary images (as in a general-purpose computer display) or fixed images with low information content, which can be displayed or hidden, such as preset words, digits, and 7-segment displays, as in a digital clock. They use the same basic technology, except that arbitrary images are made up of a large number of small pixels, while other displays have larger elements. LCDs are used in a wide range of applications including computer monitors, televisions, instrument panels, aircraft cockpit displays, and indoor and outdoor signage. Small LCD screens are common in portable consumer devices such as digital cameras, watches, calculators, and mobile telephones, including smartphones. LCD screens are also used on consumer electronics products such as DVD players, video game devices and clocks. LCD screens have replaced heavy, bulky cathode ray tube (CRT) displays in nearly all applications. LCD screens are available in a wider range of screen sizes than CRT and plasma displays, with LCD screens available in sizes ranging from tiny digital watches to huge, big-screen television sets. Since LCD screens do not use phosphors, they do not suffer image burn-in when a static image is displayed on a screen for a long time (e.g., the table frame for an aircraft schedule on an indoor sign). LCDs are, however, susceptible to image persistence. The LCD screen is more energy-efficient and can be disposed of more safely than a CRT can. Its low electrical power consumption enables it to be used in battery-powered electronic equipment more efficiently than CRTs can be. By 2008, annual sales of televisions with LCD screens exceeded sales of CRT units worldwide, and the CRT became obsolete for most purposes.

Pixel is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen. Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black. A pixel is generally thought of as the smallest single component of a digital image. However, the definition is highly context-sensitive. For example, there can be "printed pixels" in a page, or pixels carried by electronic signals, or represented by digital values, or pixels on a display device, or pixels in a digital camera (photosensor elements). This list is not exhaustive and, depending on context, synonyms include pel, sample, byte, bit, dot, and spot. Pixels can be used as a unit of measure such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart. The measures dots per inch (dpi) and pixels per inch (ppi) are sometimes used interchangeably, but have distinct meanings, especially for printer devices, where dpi is a measure of the printer's density of dot (e.g. ink droplet) placement. For example, a high-quality photographic image may be printed with 600 ppi on a 1200 dpi inkjet printer. Even higher dpi numbers, such as the 4800 dpi quoted by printer manufacturers since 2002, do not mean much in terms of achievable resolution. The more pixels used to represent an image, the closer the result can resemble the original. The number of pixels in an image is sometimes called the resolution, though resolution has a more specific definition. Pixel counts can be expressed as a single number, as in a "three-megapixel" digital camera, which has a nominal three million pixels, or as a pair of numbers, as in a "640 by 480 display", which has 640 pixels from side to side and 480 from top to bottom (as in a VGA display), and therefore has a total number of 640×480 = 307,200 pixels or 0.3 megapixels. The pixels, or color samples, that form a digitized image (such as a JPEG file used on a web page) may or may not be in one-to-one correspondence with screen pixels, depending on how a computer displays an image. In computing, an image composed of pixels is known as a bitmapped image or a raster image. The word raster originates from television scanning patterns, and has been widely used to describe similar halftone printing and storage techniques. Anti-Aliasing is the smoothing of the jagged appearance of diagonal lines in a bitmapped image. The pixels that surround the edges of the line are changed to varying shades of gray or color in order to blend the sharp edge into the background. Matrix.
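
The pixel arithmetic above is easy to check in a few lines of Python; these helper functions are illustrative only.

def pixel_count(width, height):
    """Total pixels, e.g. the VGA example above."""
    return width * height

def megapixels(width, height):
    return width * height / 1_000_000

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density from resolution and the screen's physical diagonal."""
    diagonal_px = (width_px ** 2 + height_px ** 2) ** 0.5
    return diagonal_px / diagonal_inches

print(pixel_count(640, 480))   # 307200 pixels
print(megapixels(640, 480))    # 0.3072, i.e. roughly 0.3 megapixels
print(round(pixels_per_inch(1920, 1080, 24), 1))  # about 91.8 ppi on a 24-inch monitor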

A new look at color displays: tunable structural color images made by UV-patterning conducting polymer nanofilms on metal surfaces. Researchers have developed a method that may lead to new types of displays based on structural colors. The discovery opens the way to cheap and energy-efficient color displays and electronic labels.

Touchscreen is an input and output device normally layered on the top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus and/or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work while others may only work using a special stylus/pen. The user can use the touchscreen to react to what is displayed and to control how it is displayed; for example, zooming to increase the text size. The touchscreen enables the user to interact directly with what is displayed, rather than using a mouse, touchpad, or any other such device (other than a stylus, which is optional for most modern touchscreens). Touchscreens are common in devices such as game consoles, personal computers, tablet computers, electronic voting machines, point of sale systems, and smartphones. They can also be attached to computers or, as terminals, to networks. They also play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers. Touchscreen features include recognition of multi-touch gestures such as swipes, pinches, flicks, taps, double taps and drags; acceptance of touch input from gloved fingers, fingernails, pens, keys, credit cards, styluses, erasers, etc.; and, on some screens, recording of touch events even when the user's finger does not quite touch the screen.

Making plastic more transparent while also adding electrical conductivity. In an effort to improve large touchscreens, LED light panels and window-mounted infrared solar cells, researchers have made plastic conductive while also making it more transparent.

Interfaces

Stylus in computing is a small pen-shaped instrument that is used to input commands to a computer screen, mobile device or graphics tablet. With touchscreen devices, a user places a stylus on the surface of the screen to draw or make selections by tapping the stylus on the screen. In this manner, the stylus can be used instead of a mouse or trackpad as a pointing device, a technique commonly called pen computing. Pen-like input devices which are larger than a stylus, and offer increased functionality such as programmable buttons, pressure sensitivity and electronic erasers, are often known as digital pens.

7-Segment Display - 9-Segment Display - 14-Segment Display

An LCD that is paper-thin, flexible, light, tough and cheap, perhaps costing only $5 for a 5-inch screen: a flexible, paper-like display that could be updated as fast as the news cycle. Less than half a millimeter thick, the new flexi-LCD design could revolutionize printed media. It is a front polarizer-free optically rewritable (ORW) liquid crystal display (LCD).



Software - Digital Information


Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Software comprises written programs that are stored in read/write memory, including procedures or rules and associated documentation pertaining to the operation of a computer system. Software is a specialized tool for performing advanced calculations that allows the user to be more productive and to work incredibly fast and efficiently. Software is a time saver.

Operating System - Code - Language - Analog - Word Processing - Apps - Testing - AI - Algorithms

Software Engineering is the application of engineering to the development of software in a systematic method. Typical formal definitions of software engineering include: research, design, develop, and test operating systems-level software, compilers, and network distribution software for medical, industrial, military, communications, aerospace, business, scientific, and general computing applications; the systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software; the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; an engineering discipline that is concerned with all aspects of software production; and the establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines.

Software Architecture refers to the high level structures of a software system, the discipline of creating such structures, and the documentation of these structures. These structures are needed to reason about the software system. Each structure comprises software elements, relations among them, and properties of both elements and relations. The architecture of a software system is a metaphor, analogous to the architecture of a building.

Software Framework is an abstraction in which software providing generic functionality can be selectively changed by additional user-written code, thus providing application-specific software. A software framework provides a standard way to build and deploy applications. A software framework is a universal, reusable software environment that provides particular functionality as part of a larger software platform to facilitate development of software applications, products and solutions. Software frameworks may include support programs, compilers, code libraries, tool sets, and Application Programming Interfaces (APIs) that bring together all the different components to enable development of a project or system. Frameworks have key distinguishing features that separate them from normal libraries. Inversion of control: in a framework, unlike in libraries or in standard user applications, the overall program's flow of control is not dictated by the caller, but by the framework. Extensibility: a user can extend the framework, usually by selective overriding, or programmers can add specialized user code to provide specific functionality. Non-modifiable framework code: the framework code, in general, is not supposed to be modified, while accepting user-implemented extensions. In other words, users can extend the framework, but should not modify its code.

Cross-Platform Software is computer software that is designed to work in several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be directly run on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms. Cross-platform software is also called multi-platform software, platform-agnostic software, or platform-independent software. Inter-Disciplinarity.

Abstraction is a technique for hiding complexity of computer systems. It works by establishing a level of simplicity on which a person interacts with the system, suppressing the more complex details below the current level. The programmer works with an idealized interface (usually well defined) and can add additional levels of functionality that would otherwise be too complex to handle.

Software Development is the process of computer programming, documenting, testing, and bug fixing involved in creating and maintaining applications and frameworks resulting in a software product. Software development is a process of writing and maintaining the source code, but in a broader sense, it includes all that is involved between the conception of the desired software through to the final manifestation of the software, sometimes in a planned and structured process. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.

Don't Repeat Yourself is a principle of software development aimed at reducing repetition of software patterns, replacing it with abstractions or using data normalization to avoid redundancy.
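
A tiny hypothetical Python example of the principle: the first two functions repeat the same discount rule, while the third replaces the repetition with a single abstraction.

# Repetition: the discount rule is written twice, so a change must be made twice.
def book_price(base):
    return base - base * 0.10

def dvd_price(base):
    return base - base * 0.10

# DRY: the rule lives in one place and is reused everywhere it is needed.
def discounted(base, rate=0.10):
    """Single source of truth for the discount calculation."""
    return base - base * rate

print(discounted(20.0))  # 18.0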

Worse is Better is the idea that software that is limited, but simple to use, may be more appealing to the user and market than the reverse: quality does not necessarily increase with functionality, and there is a point where less functionality ("worse") is a preferable option ("better") in terms of practicality and usability.

Unix Philosophy is bringing the concepts of modularity and reusability into software engineering practice.

Reusability - Smart Innovation - Compatibility - Simplicity

Software Versioning assigns identifiers to software releases. Some schemes use a zero in the first sequence to designate alpha or beta status for releases that are not stable enough for general or practical deployment and are intended for testing or internal use only. A status digit can also be used in the third position: 0 for alpha (status), 1 for beta (status), 2 for release candidate, and 3 for (final) release. Version 1.0 is used as a major milestone, indicating that the software is "complete", that it has all major features, and is considered reliable enough for general release. A good example of this is the Linux kernel, which was first released as version 0.01 in 1991, and took until 1994 to reach version 1.0.0. Me 2.0
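
A minimal sketch of how dotted version numbers can be compared programmatically, assuming purely numeric components (real schemes with alpha/beta suffixes need more care):

def parse_version(text):
    """Split '1.4.2' into (1, 4, 2); assumes purely numeric components."""
    return tuple(int(part) for part in text.split("."))

# Tuples compare element by element, which matches versioning rules:
assert parse_version("0.9.9") < parse_version("1.0.0")   # pre-1.0 releases sort first
assert parse_version("1.0.2") < parse_version("1.0.10")  # 10 > 2 numerically, not textually
print(max(["0.01", "0.99.12", "1.0.0"], key=parse_version))  # 1.0.0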

Software Developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles which are often used with similar meanings are programmer, software analyst, and software engineer. According to developer Eric Sink, the differences between system design, software development, and programming are becoming more apparent. The current marketplace already shows a segregation between programmers and developers: the one who implements is not the same as the one who designs the class structure or hierarchy. Developers may even become systems architects, those who design the multi-leveled architecture or component interactions of a large software system. (see also Debate over who is a software engineer).

Software Development Process is splitting of software development work into distinct phases (or stages) containing activities with the intent of better planning and management. It is often considered a subset of the systems development life cycle. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, extreme programming and various types of agile methodology. Some people consider a life-cycle "model" a more general term for a category of methodologies and a software development "process" a more specific term to refer to a specific process chosen by a specific organization. For example, there are many specific software development processes that fit the spiral life-cycle model. Project Management.

Scrum in Software Development is a framework for managing software development. It is designed for teams of three to nine developers who break their work into actions that can be completed within fixed duration cycles (called "sprints"), track progress and re-plan in daily 15-minute stand-up meetings, and collaborate to deliver workable software every sprint. Approaches to coordinating the work of multiple scrum teams in larger organizations include Large-Scale Scrum, Scaled Agile Framework (SAFe) and Scrum of Scrums, among others. Scrum is an iterative and incremental agile software development framework for managing product development. It defines "a flexible, holistic product development strategy where a development team works as a unit to reach a common goal", challenges assumptions of the "traditional, sequential approach" to product development, and enables teams to self-organize by encouraging physical co-location or close online collaboration of all team members, as well as daily face-to-face communication among all team members and disciplines involved. A key principle of Scrum is its recognition that during product development, the customers can change their minds about what they want and need (often called requirements volatility), and that unpredicted challenges cannot be easily addressed in a traditional predictive or planned manner. As such, Scrum adopts an evidence-based empirical approach—accepting that the problem cannot be fully understood or defined, focusing instead on maximizing the team's ability to deliver quickly, to respond to emerging requirements and to adapt to evolving technologies and changes in market conditions.

Software Design is the process by which an agent creates a specification of a software artifact, intended to accomplish goals, using a set of primitive components and subject to constraints. Software design may refer to either "all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying complex systems" or "the activity following requirements specification and before programming, as ... [in] a stylized software engineering process." Software design usually involves problem solving and planning a software solution. This includes both a low-level component and algorithm design and a high-level, architecture design. Software Design Pattern.

Agile Software Development describes a set of principles for software development under which requirements and solutions evolve through the collaborative effort of self-organizing cross-functional teams. It advocates adaptive planning, evolutionary development, early delivery, and continuous improvement, and it encourages rapid and flexible response to change. These principles support the definition and continuing evolution of many software development methods.

Software Release Life Cycle is the sum of the stages of development and maturity for a piece of computer software: ranging from its initial development to its eventual release, and including updated versions of the released version to help improve software or fix bugs still present in the software.

Development Process - Develop Meaning

Software as a Service is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted.

Computer Program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes the program's instructions in a central processing unit.

Computer Code - Free Software

Instruction Set is the interface between a computer's software and its hardware, and thereby enables the independent development of these two computing realms; it defines the valid instructions that a machine may execute.

Computing Platform means, in a general sense, wherever any piece of software is executed. It may be the hardware or the operating system (OS), even a web browser or other application, as long as the code is executed in it. The term computing platform can refer to different abstraction levels, including a certain hardware architecture, an operating system (OS), and runtime libraries. In total it can be said to be the stage on which computer programs can run. A platform can be seen both as a constraint on the application development process, in that different platforms provide different functionality and restrictions; and as an assistance to the development process, in that they provide low-level functionality ready-made. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network.

Mobile Application Development is a term used to denote the act or process by which application software is developed for mobile devices, such as personal digital assistants, enterprise digital assistants or mobile phones. These applications can be pre-installed on phones during manufacturing, or delivered as web applications using server-side or client-side processing (e.g., JavaScript) to provide an "application-like" experience within a Web browser. Application software developers also must consider a long array of screen sizes, hardware specifications, and configurations because of intense competition in mobile software and changes within each of the platforms. Mobile app development has been steadily growing, in revenues and jobs created. A 2013 analyst report estimates there are 529,000 direct app economy jobs within the EU 28 members, 60% of which are mobile app developers.

APPS (application software)

Application Performance Management is the monitoring and management of the performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service.

Command Pattern is a behavioral design pattern in which an object is used to encapsulate all information needed to perform an action or trigger an event at a later time. This information includes the method name, the object that owns the method and values for the method parameters.
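
A short Python sketch of the pattern (the names are hypothetical): the command object encapsulates the receiver, the action, and its parameters, so an invoker can run, queue, log, or undo it later without knowing the details.

from dataclasses import dataclass

@dataclass
class InsertText:
    """A command object: it carries everything needed to run (or undo) the action later."""
    document: list
    text: str

    def execute(self):
        self.document.append(self.text)

    def undo(self):
        self.document.remove(self.text)

doc, history = [], []
cmd = InsertText(doc, "hello")
cmd.execute()           # the invoker runs the command without knowing its internals
history.append(cmd)     # keeping executed commands enables undo stacks and macros
history.pop().undo()
print(doc)              # [] -- the insertion was undone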

Iterative and incremental Development is any combination of both iterative design or iterative method and incremental build model for software development. The combination is of long standing and has been widely suggested for large development efforts. For example, the 1985 DOD-STD-2167 mentions (in section 4.1.2): "During software development, more than one iteration of the software development cycle may be in progress at the same time." and "This process may be described as an 'evolutionary acquisition' or 'incremental build' approach." The relationship between iterations and increments is determined by the overall software development methodology and software development process. The exact number and nature of the particular incremental builds and what is iterated will be specific to each individual development effort.

OSI Model is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to their underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers. The original version of the model defined seven layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that comprise the contents of that path. Two instances at the same layer are visualized as connected by a horizontal connection in that layer.

Technology Stack is a set of software subsystems or components needed to create a complete platform such that no additional software is needed to support applications. Applications are said to "run on" or "run on top of" the resulting platform. Some definitions of a platform overlap with what is known as system software.

Abstraction Layer is a way of hiding the implementation details of a particular set of functionality, allowing the separation of concerns to facilitate interoperability and platform independence. Software models that use layers of abstraction include the OSI 7-layer model for computer network protocols, the OpenGL graphics drawing library, and the byte stream input/output (I/O) model originated from Unix and adopted by DOS, Linux, and most other modern operating systems.

Open Systems Interconnection is an effort to standardize computer networking that was started in 1977 by the International Organization for Standardization (ISO), along with the ITU-T.

Enterprise Social Software comprises social software as used in "enterprise" (business/commercial) contexts. It includes social and networked modifications to corporate intranets and other classic software platforms used by large companies to organize their communication. In contrast to traditional enterprise software, which imposes structure prior to use, enterprise social software tends to encourage use prior to providing structure.

Enterprise Architecture Framework defines how to create and use an enterprise architecture. An architecture framework provides principles and practices for creating and using the architecture description of a system. It structures architects' thinking by dividing the architecture description into domains, layers or views, and offers models - typically matrices and diagrams - for documenting each view. This allows for making systemic design decisions on all the components of the system and making long-term decisions around new design, requirements, sustainability and support.

Enterprise Architecture is a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a holistic approach at all times, for the successful development and execution of strategy. Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes.

International Organization for Standardization is an international standard-setting body composed of representatives from various national standards organizations.

Conceptual Model is a representation of a system, made of the composition of concepts which are used to help people know, understand, or simulate a subject the model represents. Some models are physical objects; for example, a toy model which may be assembled, and may be made to work like the object it represents.

Model-Driven Engineering is a software development methodology that focuses on creating and exploiting domain models, which are conceptual models of all the topics related to a specific problem. Hence, it highlights and aims at abstract representations of the knowledge and activities that govern a particular application domain, rather than the computing (e.g. algorithmic) concepts.

Model-Based Design is a mathematical and visual method of addressing problems associated with designing complex control, signal processing and communication systems. It is used in many motion control, industrial equipment, aerospace, and automotive applications. Model-based design is a methodology applied in designing embedded software.

Architectural Pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are similar to software design pattern but have a broader scope. The architectural patterns address various issues in software engineering, such as computer hardware performance limitations, high availability and minimization of a business risk. Some architectural patterns have been implemented within software frameworks.

Software Design Pattern is a general reusable solution to a commonly occurring problem within a given context in software design. It is not a finished design that can be transformed directly into source or machine code. It is a description or template for how to solve a problem that can be used in many different situations. Design patterns are formalized best practices that the programmer can use to solve common problems when designing an application or system. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved. Patterns that imply mutable state may be unsuited for functional programming languages, some patterns can be rendered unnecessary in languages that have built-in support for solving the problem they are trying to solve, and object-oriented patterns are not necessarily suitable for non-object-oriented languages. Design patterns may be viewed as a structured approach to computer programming intermediate between the levels of a programming paradigm and a concrete algorithm.

Resource-Oriented Architecture is a style of software architecture and programming paradigm for designing and developing software in the form of resources with "RESTful" interfaces. These resources are software components (discrete pieces of code and/or data structures) which can be reused for different purposes. ROA design principles and guidelines are used during the phases of software development and system integration.

Representational State Transfer or RESTful Web services are one way of providing interoperability between computer systems on the Internet. REST-compliant Web services allow requesting systems to access and manipulate textual representations of Web resources using a uniform and predefined set of stateless operations. Other forms of Web service exist, which expose their own arbitrary sets of operations such as WSDL and SOAP. (REST)
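
As a hedged illustration using only Python's standard library, a stateless GET against a hypothetical endpoint might look like this; the URL and response shape are assumptions for the example, not a real API.

import json
import urllib.request

# Hypothetical resource URL -- in REST, the URL names the resource itself.
url = "https://api.example.com/users/42"

# GET retrieves a representation of the resource. The request is stateless:
# everything the server needs travels with the request itself.
request = urllib.request.Request(url, headers={"Accept": "application/json"}, method="GET")
with urllib.request.urlopen(request) as response:
    user = json.load(response)
print(user)

# The same URL with PUT would replace the resource and DELETE would remove it:
# a uniform, predefined set of operations applied to resources named by URLs.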

Software Configuration Management is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine what was changed and who changed it. If a configuration is working well, SCM can determine how to replicate it across many hosts.

Cucumber is a software tool that computer programmers use for testing other software.

Selenium Software (wiki)

Communications Protocol - Data - Structure - Interfaces - Matrix - Learn to Code - Free Software.

Apache Maven - JWebUnit, a Java-based testing framework for web applications.

Apache JMeter is an Apache project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications.

Extreme Programming is a software development methodology which is intended to improve software quality and responsiveness to changing customer requirements. As a type of agile software development, it advocates frequent "releases" in short development cycles, which is intended to improve productivity and introduce checkpoints at which new customer requirements can be adopted. Other elements of extreme programming include: programming in pairs or doing extensive code review, unit testing of all code, not programming features until they are actually needed, a flat management structure, code simplicity and clarity, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers. The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels. As an example, code reviews are considered a beneficial practice; taken to the extreme, code can be reviewed continuously, i.e. the practice of pair programming.


Software Testing


Software Testing is an investigation conducted to provide information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects), and verifying that the software product is fit for use. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test: Meets the requirements that guided its design and development, responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire.

Software Bug is an error, flaw or fault in the design, development, or operation of computer software that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs. Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations. A bug is an error detected in the development environment during the testing stage. A defect is a mismatch between the expected and actual result of software development, detected by a software developer or end customer in the production environment. A failure is an error found by the end user.

Smoke Testing is preliminary testing to reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset of test cases that cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly. When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called an intake test. Alternatively, it is a set of tests run on each new build of a product to verify that the build is testable before the build is released into the hands of the test team. In the DevOps paradigm, use of a build verification test (BVT) step is one hallmark of the continuous integration maturity stage.

White-Box Testing is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). Psychology - Assessments.

Black-Box Testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. It is sometimes referred to as specification-based testing. Typical black-box test design techniques include: decision table testing, all-pairs testing, equivalence partitioning, boundary value analysis, cause–effect graphs, error guessing, state transition testing, use case testing, user story testing, domain analysis, syntax testing, and combining techniques.

Gray Box Testing is a combination of white-box testing and black-box testing. The aim of this testing is to search for the defects if any due to improper structure or improper usage of applications.

Red Team imitates real-world attacks that can hit a company or an organization, and they perform all the necessary steps that attackers would use. By assuming the role of an attacker, they show organizations what could be backdoors or exploitable vulnerabilities that pose a threat to their cybersecurity. Human in the Loop.

A/B Testing is a term for a randomized experiment with two variants, A and B, which are the control and variation in the controlled experiment. A/B testing is a form of statistical hypothesis testing with two variants leading to the technical term, two-sample hypothesis testing, used in the field of statistics.
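
A minimal sketch of the underlying two-sample test, computed by hand with Python's standard library; the visitor and conversion numbers below are made up for illustration.

import math

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Two-sample test for conversion rates; returns z score and two-sided p-value."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf gives the two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Control A: 120 conversions out of 2400 visitors; variation B: 165 out of 2400.
z, p = two_proportion_z(120, 2400, 165, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the difference is not chance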

Regression Testing is a type of software testing which verifies that software, which was previously developed and tested, still performs correctly after it was changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. During regression testing, new software bugs or regressions may be uncovered. Sometimes a software change impact analysis is performed to determine what areas could be affected by the proposed changes. These areas may include functional and non-functional areas of the system. Observations.

Sandbox Test is a type of software testing environment that enables the isolated execution of software or programs for independent evaluation, monitoring or testing. In an implementation, a sandbox also may be known as a test server, development server or working directory. It's a testing environment that isolates untested code changes and outright experimentation from the production environment or repository, in the context of software development including Web development and revision control.

Data-Driven Testing is a term used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs as well as the process where test environment settings and control are not hard-coded. In the simplest form the tester supplies the inputs from a row in the table and expects the outputs which occur in the same row. The table typically contains values which correspond to boundary or partition input spaces. In the control methodology, test configuration is "read" from a database. Diagnose.
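
In its simplest form, the test table directly drives the run, as in this illustrative sketch (the clamp function is a hypothetical stand-in for real code under test):

# Hypothetical function under test.
def clamp(value, low, high):
    return max(low, min(value, high))

# Each row is a set of inputs plus the expected output.
test_table = [
    # value, low, high, expected
    (5,   0, 10,  5),   # inside the range
    (-3,  0, 10,  0),   # below the lower boundary
    (99,  0, 10, 10),   # above the upper boundary
    (0,   0, 10,  0),   # exactly on a boundary (partition edge)
]

for value, low, high, expected in test_table:
    actual = clamp(value, low, high)
    assert actual == expected, f"clamp({value}, {low}, {high}) = {actual}, expected {expected}"
print("all table rows passed")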

Unit Testing is a software development process in which the smallest testable parts of an application, called units, are individually scrutinized for proper operation. Software developers and sometimes QA staff complete unit tests during the development process. Unit testing is a software testing method by which individual units of source code—sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures—are tested to determine whether they are fit for use.
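
A minimal example using Python's built-in unittest module; word_count is a hypothetical unit under test, scrutinized in isolation.

import unittest

def word_count(text):
    """The unit under test: a small, individually testable piece of code."""
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("software testing saves time"), 4)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    unittest.main()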

Error Guessing is a test method in which test cases used to find bugs in programs are established based on experience in prior testing. The scope of test cases usually rely on the software tester involved, who uses past experience and intuition to determine what situations commonly cause software failure, or may cause errors to appear. Typical errors include divide by zero, null pointers, or invalid parameters. Error guessing has no explicit rules for testing; test cases can be designed depending on the situation, either drawing from functional documents or when an unexpected/undocumented error is found while testing operations.

Spike in software development is a product development method originating from Extreme Programming that uses the simplest possible program to explore potential solutions. It is used to determine how much work will be required to solve or work around a software issue. Typically, a "spike test" involves gathering additional information or testing for easily reproduced edge cases. The term is used in agile software development approaches like Scrum or Extreme Programming. A spike in a sprint can be used in a number of ways: As a way to familiarize the team with new hardware or software. To analyze a problem thoroughly and assist in properly dividing work among separate team members. Spikes tests can also be used to mitigate future risk, and may uncover additional issues that have escaped notice. A distinction can be made between technical spikes and functional spikes. The technical spike is used more often for evaluating the impact new technology has on the current implementation. A functional spike is used to determine the interaction with a new feature or implementation. To track such work items, in a ticketing system, a new user story can be set up for each spike, for organization purposes. Following a spike, the results (a new design, a refined workflow, etc.) are shared and discussed with the team.

Exploratory Testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Exploratory testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project. While the software is being tested, the tester learns things that, together with experience and creativity, generate good new tests to run. Exploratory testing is often thought of as a black box testing technique. Instead, those who have studied it consider it a test approach that can be applied to any test technique, at any stage in the development process. The key is not the test technique nor the item being tested or reviewed; the key is the cognitive engagement of the tester, and the tester's responsibility for managing his or her time.

Acceptance Testing is a test conducted to determine if the requirements of a specification or contract are met.

Benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly applied to the elaborately designed benchmarking programs themselves.
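
A small illustrative benchmark using Python's standard timeit module, comparing two implementations of the same operation under identical conditions:

import timeit

# Identical input for both candidates, built fresh by the setup statement.
setup = "data = list(range(10_000))"
trials = {
    "manual loop": "total = 0\nfor x in data: total += x",
    "builtin sum": "total = sum(data)",
}

for name, stmt in trials.items():
    seconds = timeit.timeit(stmt, setup=setup, number=1_000)
    print(f"{name:12s} {seconds:.3f}s for 1000 runs")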


Apps - Application Programs


Application Program, or APP for short, is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Software.

Application Software is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, a media player, an aeronautical flight simulator, a console game or a photo editor. The collective noun application software refers to all applications collectively. This contrasts with system software, which is mainly involved with running the computer. Applications may be bundled with the computer and its system software or published separately, and may be coded as proprietary, open-source or university projects. Apps built for mobile platforms are called mobile apps.

Authoring System is a program that has pre-programmed elements for the development of interactive multimedia software titles. Authoring systems can be defined as software that allows its user to create multimedia applications for manipulating multimedia objects.

Application Performance Management is the monitoring and management of performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is "the translation of IT metrics into business meaning ([i.e.] value).

User Testing - from concept to launch, UserTesting provides actionable insights enabling you to create great experiences.

Validately - recruit testers, launch tests, and analyze results.

Lookback - design & research tool.

Prototypes (engineering)

Native Application is a software program that is developed for use on a particular platform or device. Because a native app is built for use on a particular device and its OS, it has the ability to use device-specific hardware and software.

React Native App is a real mobile app. With React Native, you don't build a "mobile web app", an "HTML5 app", or a "hybrid app". You build a real mobile app that's indistinguishable from an app built using Objective-C or Java. React Native uses the same fundamental UI building blocks as regular iOS and Android apps.

Applications Interface

Create Mobile Apps Resources
Phone Gap
Como
Sweb Apps
App Breeder
My App Builder
I Build App
Mobile Roadie
Yapp
App Makr 
Best App Makers
Build Your Own Business Apps in 3 Minutes
Gigster building your app
Google Developer Apps
Thing Space Verizon
App Management Interface
AppCenter: The Pay-What-You-Want App Store
Health Medical Apps
Apps from Amazon
Car Finder App
Visual Travel Tours
Audio Travel
Gate Guru App
App Brain
Trip It
Field Tripper App
Test Flight App
App Shopper
Red Laser
Portable Apps
I-nigma Bar Code Reader
More Apps
M-Pesa
Language Translators
Wikitude
Yellow Pages App
Portable Apps
What's App
Apps for Plant Lovers
Press Pad App for Digital Magazines and Publishers
Tech Fetch
Travel Tools
Cell Phones & Tools
Next Juggernaut
Rethink DB
Big in Japan
Near by Now
The Find
Milo
Apple
X Code
Quixey
Just in Mind

HyperCard is application software and a programming tool for Apple Macintosh and Apple IIGS computers. It is among the first successful hypermedia systems before the World Wide Web. It combines database abilities with a graphical, flexible, user-modifiable interface. HyperCard also features HyperTalk, a programming language for manipulating data and the user interface.

Enable Cognitive Computing Features In Your App Using IBM Watson's Language, Vision, Speech and Data APIs.


Operating Systems


Operating System is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding Firmware, require an operating system to function. Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources. Interfaces (API).  Operating systems automatically Format Partitions with the appropriate File System during the operating system installation process.

Installable File System - File Format - Content Format

Timeline of Operating Systems (wiki) - History of Operating Systems (wiki)

Types Of Operating Systems

System Software is computer software designed to provide a platform to other software. Examples of system software include operating systems, computational science software, game engines, industrial automation, and software as a service applications.

Control Program for Microcomputers is a mass-market operating system created for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc. Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors.

Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, development starting in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others.

86-DOS is a discontinued operating system developed and marketed by Seattle Computer Products (SCP) for its Intel 8086-based computer kit. Initially known as Q-DOS (Quick and Dirty Operating System), the name was changed to 86-DOS once SCP started licensing the operating system in 1980.

Digital Research was a company created by Gary Kildall to market and develop his CP/M operating system and related 8-bit, 16-bit and 32-bit systems like MP/M, Concurrent DOS, Multiuser DOS, DOS Plus, DR DOS and GEM. It was the first large software company in the microcomputer world. Digital Research was based in Pacific Grove, California.

MS-DOS, an acronym for Microsoft Disk Operating System, is an operating system for x86-based personal computers mostly developed by Microsoft. Collectively, MS-DOS, its rebranding as IBM PC DOS, and some operating systems attempting to be compatible with MS-DOS, are sometimes referred to as "DOS" (which is also the generic acronym for disk operating system). MS-DOS was the main operating system for IBM PC compatible personal computers during the 1980s and the early 1990s, when it was gradually superseded by operating systems offering a graphical user interface (GUI), in various generations of the graphical Microsoft Windows operating system.

ReactOS is a free, open-source operating system developed through community collaboration and designed to be compatible with Windows applications and drivers.

Linux is a family of free and open-source software operating systems built around the Linux kernel. Typically, Linux is packaged in a form known as a Linux distribution (or distro for short) for both desktop and server use. The defining component of a Linux distribution is the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Many Linux distributions use the word "Linux" in their name. The Free Software Foundation uses the name GNU/Linux to refer to the operating system family, as well as specific distributions, to emphasize that most Linux distributions are not just the Linux kernel, and that they have in common not only the kernel, but also numerous utilities and libraries, a large proportion of which are from the GNU project.

GNU - Ubuntu

How to Dual Boot Linux on your PC - Virtual Machine

Open Source refers to any program whose source code is made available for use or modification as users or other developers see fit. Open source software is usually developed as a public collaboration and made freely available. Free Software.

Open-Source Software is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration.

Android Operating System is a mobile operating system developed by Google, based on a modified version of the Linux kernel and other open source software and designed primarily for touchscreen mobile devices such as smartphones and tablets. In addition, Google has further developed Android TV for televisions, Android Auto for cars, and Wear OS for wrist watches, each with a specialized user interface. Variants of Android are also used on game consoles, digital cameras, PCs and other electronics.

Open Source Operating Systems Comparisons (wiki)

An open source group wants Microsoft to release the Windows 7 source code on a blank hard drive; freeing Windows 7 would open doors.

Cloud Ready lightweight operating system - Backup Operating System

Red Hat Linux

Substitute Alternate Operating Systems

Human Operating System (HOS)

Server Operating System: A server operating system, also called a server OS, is an Operating System specifically designed to run on servers, which are specialized computers that operate within a client/server architecture to serve the requests of client computers on the network. The server OS is the software layer on top of which other software programs, or applications, can run on the server hardware. Server operating systems help enable and facilitate typical server roles such as web server, mail server, file server, database server, application server and print server. Popular server operating systems include Windows Server, Mac OS X Server, and variants of Linux such as Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server. The server edition of Ubuntu Linux is free.

Menu Navigation

File System controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file". The structure and logic rules used to manage the groups of information and their names is called a "file system". Computer File Systems (wiki).
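
A toy sketch of the idea in Python: a dictionary stands in for a directory, giving each piece of data a name so it can be isolated and identified. Real file systems add directories, permissions, and an on-disk layout; this is purely illustrative.

    # Toy illustration: without names, storage is one undifferentiated
    # run of bytes. A dict maps each file name to its own piece of data.
    storage = {}

    def write_file(name, data):
        storage[name] = bytes(data)

    def read_file(name):
        return storage[name]

    write_file("notes.txt", b"piece one")
    write_file("todo.txt", b"piece two")
    print(read_file("notes.txt"))  # each piece is isolated by its name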


Operating System Boot Process


Turn on power or wake up.
1. Power-On Self Test (POST): the computer checks all connected hardware, including RAM and secondary storage devices, to be sure it is all functioning properly. (This is like when you first wake up in the morning and you do a systems check to make sure that everything in your body is working normally.)
2. Load the BIOS: after POST has completed its job, the boot process searches the boot device list, starting at the top, for an available device with a BIOS on it, then loads the Basic Input/Output System (BIOS) from that device. The BIOS provides information on basic communications with peripheral devices, and communications on the motherboard itself. The I/O system is essential to the operation of the computer because it defines the rules for communications between the CPU and the other devices attached to the computer via the motherboard. The I/O system, sometimes found in the "io.sys" file on the boot device, provides extensions to the BIOS located in ROM on the motherboard.
3. Load the operating system: once the hardware is confirmed functional and the input/output system is loaded, the boot process loads the operating system from the boot device into RAM and executes any OS-specific configuration routines. The specific operating system is largely irrelevant, as the computer follows the same boot pattern in any case, as long as the OS's drivers can talk to the hardware on the machine.
4. Hand off control: once the operating system is safely loaded into RAM, the boot process relinquishes control to the OS, which then executes any pre-configured startup routines to define user configuration or application execution. These startup routines vary from one user to another, based on the user's preferences and desired configuration. When the startup applications complete, your computer is ready to use, or your mind is ready to use.
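
The stages above can be summarized in a short, illustrative Python sketch; the function names mirror the prose and do not correspond to any real firmware interface.

    # Illustrative only: each function stands for one boot stage.
    def power_on_self_test():
        print("POST: checking RAM and attached hardware")

    def load_bios():
        print("BIOS: basic I/O rules loaded from the boot device")

    def load_operating_system():
        print("OS: kernel and drivers copied into RAM")

    def run_startup_applications():
        print("Startup: user-configured programs launched; computer ready")

    # The boot process runs the stages strictly in order.
    for stage in (power_on_self_test, load_bios,
                  load_operating_system, run_startup_applications):
        stage()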

Booting is starting up a computer or computer appliance until it can be used. It can be initiated by hardware such as a button press or by software command. After the power is switched on, the computer is relatively dumb and can read only part of its storage, called read-only memory (ROM). There, a small program called firmware is stored. It performs power-on self-tests and, most importantly, allows access to other types of memory like a hard disk and main memory. The firmware loads bigger programs into the computer's main memory and runs them.

On general purpose computers, and also on smartphones and tablets, a boot manager may optionally run. The boot manager lets a user choose which operating system to run and set more complex parameters for it. The firmware or the boot manager then loads the boot loader into memory and runs it. This piece of software is able to place an operating system kernel like Windows or Linux into the computer's main memory and run it. Afterwards, the kernel runs so-called user space software, the best known of which is the graphical user interface (GUI), which lets the user log in to the computer or run other applications. The whole process may take from a fraction of a second to many seconds on modern general purpose computers.

Restarting a computer is also called a reboot, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware such as a button press or by software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained. The process of returning a computer from a state of hibernation or sleep does not involve booting. Minimally, some embedded systems do not require a noticeable boot sequence to begin functioning and when turned on may simply run operational programs that are stored in ROM. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system.

Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps. The usage calls attention to the requirement that, if most software is loaded onto a computer by other software already running on the computer, some mechanism must exist to load the initial software onto the computer. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory (ROM) of various types solved this paradox by allowing computers to be shipped with a startup program that could not be erased. Growth in the capacity of ROM has allowed ever more elaborate startup procedures to be implemented.

Boot Device is any piece of hardware that can read or contains the files required for a computer to start. For example, a hard drive, floppy disk drive, CD-ROM drive, DVD drive, and USB jump drive are all considered bootable devices.

Firmware is a specific class of computer software that provides the low-level control for the device's specific hardware. Firmware can either provide a standardized operating environment for the device's more complex software (allowing more hardware-independence), or, for less complex devices, act as the device's complete operating system, performing all control, monitoring and data manipulation functions. Typical examples of devices containing firmware are embedded systems, consumer appliances, computers, computer peripherals, and others. Almost all electronic devices beyond the simplest contain some firmware. Firmware is held in non-volatile memory devices such as ROM, EPROM, or flash memory. Changing the firmware of a device may rarely or never be done during its lifetime; some firmware memory devices are permanently installed and cannot be changed after manufacture. Common reasons for updating firmware include fixing bugs or adding features to the device. This may require ROM integrated circuits to be physically replaced, or flash memory to be reprogrammed through a special procedure. Firmware such as the ROM BIOS of a personal computer may contain only elementary basic functions of a device and may only provide services to higher-level software. Firmware such as the program of an embedded system may be the only program that will run on the system and provide all of its functions. Before the inclusion of integrated circuits, other firmware devices included a discrete semiconductor diode matrix. The Apollo guidance computer had firmware consisting of a specially manufactured core memory plane, called "core rope memory", where data was stored by physically threading wires through (1) or around (0) the core storing each data bit.



Servers


Computer Network is a telecommunications network which allows nodes to share resources. In computer networks, networked computing devices exchange data with each other using a data link. The connections between nodes are established using either cable media or wireless media. The best-known computer network is the Internet.

Server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, and a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are Database Servers, file servers, mail servers, print servers, web servers, game servers, and application servers. Remote.
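
A minimal client-server sketch using Python's standard socket module; the port number is arbitrary. One thread plays the server, echoing back whatever the client sends; real servers handle many clients concurrently.

    # Minimal client-server sketch (Python standard library only).
    import socket, threading, time

    def server():
        with socket.socket() as s:
            s.bind(("127.0.0.1", 50007))
            s.listen(1)
            conn, _ = s.accept()
            with conn:
                conn.sendall(b"echo: " + conn.recv(1024))  # the "service"

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                        # give the server time to start

    with socket.socket() as client:        # the "client" role
        client.connect(("127.0.0.1", 50007))
        client.sendall(b"hello")
        print(client.recv(1024))           # b'echo: hello'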

Proxy Server is a server that acts as an intermediary or Link for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems. Today, most proxies are web proxies, facilitating access to content on the World Wide Web, providing anonymity, and possibly being used to bypass IP address blocking. A proxy can be a computer system or an application. VPN.

Automated Server Infrastructures - Autonomous

Web Server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented. While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files. Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP (Hypertext Preprocessor), or other scripting languages. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically ("on-the-fly") as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content. Web servers can frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer since only a web browser is required (which now is included with most operating systems).
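
A minimal web server sketch using Python's standard http.server module; it serves one hard-coded HTML page at http://localhost:8000/ until interrupted. Real web servers add concurrency, security, caching, and file handling.

    # Minimal web server sketch: answers every GET with one HTML page.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Hello from a web server</body></html>"
            self.send_response(200)                       # HTTP status line
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)                        # deliver the page

    HTTPServer(("localhost", 8000), Hello).serve_forever()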



Networks


Network is an interconnected system of things or people. Connect two or more computers or other devices. A system of intersecting lines or channels. Network in electronics is a system of interconnected electronic components or circuits. A static IP address is an address that does not change. A dynamic IP address changes over time.

Computer Network is a digital telecommunications network for sharing resources between nodes, which are computing devices that use a common telecommunications technology. Data transmission between nodes is supported over data links consisting of physical cable media, such as twisted pair or fibre-optic cables, or by wireless methods, such as Wi-Fi, microwave transmission, or free-space optical communication. Network nodes are network computer devices that originate, route and terminate data communication. They are generally identified by network addresses, and can include hosts such as personal computers, phones, and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered (i.e. carried as payload) over other more general communications protocols. Computer networks support many applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers, printers, and fax machines, and use of email and instant messaging applications. Computer networks may be classified by many criteria, for example, the transmission medium used to carry their signals, bandwidth, communications protocols to organize network traffic, the network's size, topology, traffic control mechanism, and organizational intent. The best-known computer network is the Internet.

Network Science is an academic field which studies complex networks such as telecommunication networks, computer networks, biological networks, cognitive and semantic networks, and social networks, considering distinct elements or actors represented by nodes (or vertices) and the connections between the elements or actors as links (or edges). The field draws on theories and methods including graph theory from mathematics, statistical mechanics from physics, data mining and information visualization from computer science, inferential modeling from statistics, and social structure from sociology. The United States National Research Council defines network science as "the study of network representations of physical, biological, and social phenomena leading to predictive models of these phenomena."

Artificial Neural Network - Brain Network - Biological Network - Mycorrhizal Network - Virtual Private Network - Social Network - Power Grid

Network Topology is the arrangement of the various elements (links, nodes, etc.) of a computer network. Essentially, it is the topological structure of a network and may be depicted physically or logically.

Routing is the process of selecting a path for traffic in a network, or between or across multiple networks. Routing is performed for many types of networks, including circuit-switched networks, such as the public switched telephone network (PSTN), computer networks, such as the Internet, as well as in networks used in public and private transportation, such as the system of streets, roads, and highways in national infrastructure.
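
One classic way to select a path is Dijkstra's shortest-path algorithm; below is a minimal Python sketch over a made-up four-node network, where edge weights could stand for latency or cost.

    # Shortest-path route selection sketch (Dijkstra's algorithm).
    import heapq

    def shortest_path(graph, start, goal):
        frontier = [(0, start, [start])]      # (cost so far, node, path)
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, weight in graph[node]:
                heapq.heappush(frontier,
                               (cost + weight, neighbor, path + [neighbor]))
        return None

    net = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
           "C": [("D", 1)], "D": []}
    print(shortest_path(net, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])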

Router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork (e.g. the Internet) until it reaches its destination node. A router is connected to two or more data lines from different IP networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet header to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of IP routers are home and small office routers that simply forward IP packets between the home computers and the Internet. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.

MoFi Routers provide a local Wi-Fi and ethernet LAN network, and support tethering to a cellular hotspot or USB modem to share your cellular connection.

Modem is a hardware device that converts data from a digital format, intended for communication directly between devices with specialized wiring, into one suitable for a transmission medium such as telephone lines or radio. A modem modulates one or more carrier wave signals to encode digital information for transmission, and demodulates signals to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded reliably to reproduce the original digital data. Modems can be used with almost any means of transmitting analog signals, from light-emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines, to be demodulated by another modem at the receiver side to recover the digital data.

Home Network is a type of computer network that facilitates communication among devices within the close vicinity of a home.

Node in networking is either a connection point, a redistribution point (e.g. data communications equipment), or a communication endpoint (e.g. data terminal equipment). The definition of a node depends on the network and protocol layer referred to. A physical network node is an active electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communications channel. A passive distribution point such as a distribution frame or patch panel is consequently not a node.

Hub is a node with a number of links that greatly exceeds the average. Emergence of hubs is a consequence of a scale-free property of networks. While hubs cannot be observed in a random network, they are expected to emerge in scale-free networks. The rise of hubs in scale-free networks is associated with power-law distribution. Hubs have a significant impact on the network topology. Hubs can be found in many real networks, such as the Brain Network or the Internet.

Bridging Networking: a network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing. Routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer (layer 2). If one or more segments of the bridged network are wireless, the device is known as a wireless bridge. The main types of network bridging technologies are simple bridging, multiport bridging, and learning or transparent bridging. Audio Video Bridging is a set of technical standards that provide improved synchronization, low latency, and reliability for switched Ethernet networks.

Link Aggregation applies to various methods of combining (aggregating) multiple network connections in parallel in order to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links should fail. A Link Aggregation Group (LAG) combines a number of physical ports together to make a single high-bandwidth data path, so as to implement the traffic load sharing among the member ports in the group and to enhance the connection reliability.

Link Layer is the lowest layer in the Internet protocol suite, the networking architecture of the Internet. The link layer is the group of methods and communications protocols confined to the link that a host is physically connected to. The link is the physical and logical network component used to interconnect hosts or nodes in the network and a link protocol is a suite of methods and standards that operate only between adjacent network nodes of a network segment. Despite the different semantics of layering in TCP/IP and OSI, the link layer is sometimes described as a combination of the data link layer (layer 2) and the physical layer (layer 1) in the OSI model. However, the layers of TCP/IP are descriptions of operating scopes (application, host-to-host, network, link) and not detailed prescriptions of operating procedures, data semantics, or networking technologies. The link layer is described in RFC 1122 and RFC 1123. RFC 1122 considers local area network protocols such as Ethernet and other IEEE 802 networks (e.g. Wi-Fi), and framing protocols such as Point-to-Point Protocol (PPP) to belong to the link layer.

Distributed Computing is a field of computer science that studies distributed systems. A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.

Network Theory is the study of graphs as a representation of either symmetric relations or, more generally, of asymmetric relations between discrete objects. In computer science and network science, network theory is a part of graph theory: a network can be defined as a graph in which nodes and/or edges have attributes (e.g. names).

Network Monitoring is the use of a system that constantly monitors a computer network for slow or failing components and that notifies the network administrator (via email, SMS or other alarms) in case of outages or other trouble. Network monitoring is part of network management.

Network Management is the process of administering and managing the computer networks of one or many organizations. Various services provided by network managers include fault analysis, performance management, provisioning of network and network devices, maintaining the quality of service, and so on. Software that enables network administrators or network managers to perform their functions is called network management software.

Network Science - Network Cultures - Grids - Coreos - Omega - Mesos

Network Partition refers to network decomposition into relatively independent subnets for their separate optimization as well as network split due to the failure of network devices. In both cases the partition-tolerant behavior of subnets is expected. This means that even after network is partitioned into multiple sub-systems, it still works correctly. For example, in a network with multiple subnets where nodes A and B are located in one subnet and nodes C and D are in another, a partition occurs if the switch between the two subnets fails. In that case nodes A and B can no longer communicate with nodes C and D, but all nodes A-D work the same as before.

Resilience in networking is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation.

Robustness in computer science is the ability of a computer system to cope with errors during execution and cope with erroneous input.

Fault-Tolerant Computer System are systems designed around the concepts of fault tolerance. In essence, they must be able to continue working to a level of satisfaction in the presence of faults.

Fault Tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability or Life-Critical Systems. The ability of maintaining functionality when portions of a system break down is referred to as graceful degradation.
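
A minimal sketch of the idea in Python: a request is tried against redundant replicas in order, and if all fail the system degrades gracefully instead of breaking down. The server functions here are made-up stand-ins for real replicas.

    # Fault-tolerance sketch: try redundant replicas, degrade gracefully.
    def fetch_with_failover(servers, request):
        for server in servers:
            try:
                return server(request)          # first healthy replica wins
            except ConnectionError:
                continue                        # tolerate the fault, try next
        return "degraded: served stale cached copy"  # graceful degradation

    def healthy(req):  return f"fresh result for {req}"
    def broken(req):   raise ConnectionError("replica down")

    print(fetch_with_failover([broken, healthy], "page"))   # fresh result
    print(fetch_with_failover([broken, broken], "page"))    # degraded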

Fail-Safe in engineering is a design feature or practice that in the event of a specific type of failure, inherently responds in a way that will cause no or minimal harm to other equipment, the environment or to people.

Backup - Redundancy

Network Packet is a formatted unit of data carried by a packet-switched network. Computer communications links that do not support packets, such as traditional point-to-point telecommunications links, simply transmit data as a bit stream. When data is formatted into packets, packet switching is possible and the bandwidth of the communication medium can be better shared among users than with circuit switching.
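
A sketch of formatting data into packets, in Python: a made-up 8-byte header (source, destination, sequence number, payload length) is packed in front of the payload. Real packet formats such as IP or TCP define many more fields; this only illustrates the principle.

    # Packet formatting sketch with an invented 8-byte header.
    import struct

    HEADER = struct.Struct("!HHHH")  # four big-endian 16-bit fields

    def make_packet(src, dst, seq, payload: bytes) -> bytes:
        return HEADER.pack(src, dst, seq, len(payload)) + payload

    def parse_packet(packet: bytes):
        src, dst, seq, length = HEADER.unpack(packet[:HEADER.size])
        return src, dst, seq, packet[HEADER.size:HEADER.size + length]

    pkt = make_packet(1, 2, 7, b"hello")
    print(parse_packet(pkt))  # (1, 2, 7, b'hello')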

Cluster Manager usually is a backend graphical user interface (GUI) or command-line software that runs on one or all cluster nodes (in some cases it runs on a different server or a cluster of management servers). The cluster manager works together with a cluster management agent. These agents run on each node of the cluster to manage and configure services, a set of services, or to manage and configure the complete cluster server itself (see supercomputing). In some cases the cluster manager is mostly used to dispatch work for the cluster (or cloud) to perform. In this last case a subset of the cluster manager can be a remote desktop application that is used not for configuration but just to send work and get back work results from a cluster. In other cases the cluster is more related to availability and load balancing than to computational or specific service clusters.

Network Administrator maintains computer infrastructures with emphasis on networking. Responsibilities may vary between organizations, but on-site servers, software-network interactions as well as network integrity/resilience are the key areas of focus.

Downstream Networking refers to data sent from a network service provider to a customer.

Upstream Networking refers to the direction in which data can be transferred from the client to the server (uploading).

Network Operating System is a specialized operating system for a network device such as a router, switch or firewall. An operating system oriented to computer networking allows shared file and printer access among multiple computers in a network and enables the sharing of data, users, groups, security, applications, and other networking functions, typically over a local area network (LAN) or private network. This sense is now largely historical, as common operating systems generally now have such features included.

Professional Services Networks are networks of independent firms who come together to cost-effectively provide services to clients through an organized framework.

Social Networks - Collaborations

Value Network Analysis is a methodology for understanding, using, visualizing, optimizing internal and external value networks and complex economic ecosystems. The methods include visualizing sets of relationships from a dynamic whole systems perspective. Robust network analysis approaches are used for understanding value conversion of financial and non-financial assets, such as intellectual capital, into other forms of value.

Value Network is a business analysis perspective that describes social and technical resources within and between businesses. The nodes in a value network represent people (or roles). The nodes are connected by interactions that represent tangible and intangible deliverables. These deliverables take the form of knowledge or other intangibles and/or financial value. Value networks exhibit interdependence. They account for the overall worth of products and services. Companies have both internal and external value networks.

Encapsulation Networking is a method of designing modular communication protocols in which logically separate functions in the network are abstracted from their underlying structures by inclusion or information hiding within higher level objects.

Dynamic Network Analysis is an emergent scientific field that brings together traditional social network analysis (SNA), link analysis (LA), social simulation and multi-agent systems (MAS) within network science and network theory.

Artificial Neural Network (ai) - Matrix (construct)

Cross Linking is a bond that links one polymer chain to another. They can be covalent bonds or ionic bonds.

Virtual Private Network (VPN) - Internet - Internet Connection Types - Fiber Optics

Network Domain is an administrative grouping of multiple private computer networks or hosts within the same infrastructure. Domains can be identified using a domain name; domains which need to be accessible from the public Internet can be assigned a globally unique name within the Domain Name System (DNS).

Public Network is a type of network wherein anyone, namely the general public, has access and through it can connect to other networks or the Internet. This is in contrast to a private network, where restrictions and access rules are established in order to relegate access to a select few.

Search Engines - Levels of Thinking - Information Technology

Asymmetric Digital Subscriber Line is a type of digital subscriber line (DSL) technology, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. ADSL differs from the less common symmetric digital subscriber line (SDSL). In ADSL, bandwidth and bit rate are said to be asymmetric, meaning greater toward the customer premises (downstream) than the reverse (upstream). Providers usually market ADSL as a service for consumers for Internet access for primarily downloading content from the Internet, but not serving content accessed by others. (ADSL).


Telephone Networks


Cellular Network is a communication network where the last link is wireless. The network is distributed over land areas called cells, each served by at least one fixed-location transceiver, known as a cell site or base station. This base station provides the cell with the network coverage which can be used for transmission of voice, data and others. A cell might use a different set of frequencies from neighboring cells, to avoid interference and provide guaranteed service quality within each cell.

Cellular Repeater, also known as a cell phone signal booster or amplifier, is a type of bi-directional amplifier used to improve cell phone reception. A cellular repeater system commonly consists of a donor antenna that receives and transmits signal from nearby cell towers, coaxial cables, a signal amplifier, and an indoor rebroadcast antenna. Weboost gives you better cell signal wherever you are.

Internet Connection using Cellphone Towers.

Wi-Fi - Internet - Internet Connection Types

Pep Wave System: MAX routers will keep you connected using dual SIM cards and patented SpeedFusion Bandwidth Bonding technology.

Wi-Fi is a technology for wireless local area networking with devices based on the IEEE 802.11 standards. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technology include personal computers, video-game consoles, phones and tablets, digital cameras, smart TVs, digital audio players and modern printers. Wi-Fi compatible devices can connect to the Internet via a WLAN and a wireless access point. Such an access point (or hotspot) has a range of about 20 meters (66 feet) indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres achieved by using multiple overlapping access points. Wi-Fi most commonly uses the 2.4 gigahertz (12 cm) UHF and 5.8 gigahertz (5 cm) SHF ISM radio bands. Anyone within range with a wireless modem can attempt to access the network; because of this, Wi-Fi is more vulnerable to attack (called eavesdropping) than wired networks. Wi-Fi Protected Access is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of Wi-Fi Protected Access constantly evolve to include stronger protections and new security practices as the security landscape changes. Microwaves.

Bluetooth is a wireless technology standard used for exchanging data between fixed and mobile devices over short distances using short-wavelength UHF radio waves in the industrial, scientific and medical radio bands, from 2.400 to 2.485 GHz, and building personal area networks (PANs). It was originally conceived as a wireless alternative to RS-232 data cables. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents apply to the technology, which are licensed to individual qualifying devices. As of 2009, Bluetooth integrated circuit chips ship approximately 920 million units annually.

Quadrature Amplitude Modulation is the name of a family of digital modulation methods and a related family of analog modulation methods widely used in modern telecommunications to transmit information. It conveys two analog message signals, or two digital bit streams, by changing (modulating) the amplitudes of two carrier waves, using the amplitude-shift keying (ASK) digital modulation scheme or amplitude modulation or AM analog modulation scheme. The two carrier waves of the same frequency are out of phase with each other by 90°, a condition known as orthogonality or quadrature. The transmitted signal is created by adding the two carrier waves together. At the receiver, the two waves can be coherently separated (demodulated) because of their orthogonality property. Another key property is that the modulations are low-frequency/low-bandwidth waveforms compared to the carrier frequency, which is known as the narrowband assumption. Phase modulation (analog PM) and phase-shift keying (digital PSK) can be regarded as a special case of QAM, where the amplitude of the transmitted signal is a constant, but its phase varies. This can also be extended to frequency modulation (FM) and frequency-shift keying (FSK), for these can be regarded as a special case of phase modulation. QAM is used extensively as a modulation scheme for digital telecommunication systems, such as in 802.11 Wi-Fi standards. Arbitrarily high spectral efficiencies can be achieved with QAM by setting a suitable constellation size, limited only by the noise level and linearity of the communications channel. QAM is being used in optical fiber systems as bit rates increase; QAM16 and QAM64 can be optically emulated with a 3-path interferometer.
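
A minimal 16-QAM sketch in Python with NumPy: four bits select one of 16 constellation points, each encoded as a complex number I + jQ representing the two carriers that are 90 degrees out of phase. The bit-to-symbol mapping here is simplified for illustration; real systems typically use Gray coding.

    # 16-QAM sketch: 4 bits pick one (amplitude, phase) combination.
    import numpy as np

    levels = np.array([-3, -1, 1, 3])
    constellation = np.array([i + 1j * q for i in levels for q in levels])

    bits = [1, 0, 1, 1]                        # one 4-bit symbol
    index = int("".join(map(str, bits)), 2)    # naive (non-Gray) mapping
    symbol = constellation[index]
    print(symbol, "amplitude:", abs(symbol), "phase:", np.angle(symbol))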

Telephone is a telecommunications device that permits two or more users to conduct a conversation when they are too far apart to be heard directly. A Telephone converts sound, typically and most efficiently the human voice, into electronic signals suitable for transmission via cables or other transmission media over long distances, and replays such signals simultaneously in audible form to its user.

Plain Old Telephone Service is a retronym for voice-grade telephone service employing analog signal transmission over copper loops. POTS was the standard service offering from telephone companies from 1876 until 1988 in the United States, when the Integrated Services Digital Network (ISDN) Basic Rate Interface (BRI) was introduced, followed by cellular telephone systems and voice over IP or VoIP. POTS remains the basic form of residential and small business service connection to the telephone network in many parts of the world. The term reflects the technology that has been available since the introduction of the public telephone system in the late 19th century, in a form mostly unchanged despite the introduction of Touch-Tone dialing, electronic telephone exchanges and fiber-optic communication into the public switched telephone network or PSTN for short.

Tin Can Telephone is a type of acoustic (non-electrical) speech-transmitting device made up of two tin cans, paper cups or similarly shaped items attached to either end of a taut string or wire. It is a form of mechanical telephony, where sound is converted into and then conveyed by vibrations along a liquid or solid medium, and then reconverted back to sound.

Phone Network

Landline refers to a phone that uses a metal wire or fibre optic telephone line for transmission as distinguished from a mobile cellular line, which uses radio waves for transmission.

Ethernet is a family of computer networking technologies commonly used in local area networks or LAN, metropolitan area networks or MAN, and wide area networks or WAN.

HomePNA is an incorporated non-profit industry association of companies that develops and standardizes technology for home networking over the existing coaxial cables and telephone wiring within homes, so new wires do not need to be installed.

Interactive Voice Response is a technology that allows telephone users to interact with a computer-operated telephone system through the use of voice and DTMF tones input with a keypad.

Automated Attendant allows callers to be automatically transferred to an extension without the intervention of an operator/receptionist.
Answering Machine (wiki)


Communication Laws


Communication Law is dedicated to the proposition that freedom of speech is relevant and essential to every aspect of the communication discipline.

Communications Act of 1934 was created for the purpose of regulating interstate and foreign commerce in communication by wire and radio so as to make available, so far as possible, to all the people of the United States a rapid, efficient, nationwide, and worldwide wire and radio communication service with adequate facilities at reasonable charges, for the purpose of the national defense, and for the purpose of securing a more effective execution of this policy by centralizing authority theretofore granted by law to several agencies and by granting additional authority with respect to interstate and foreign commerce in wire and radio communication, there is hereby created a commission to be known as the 'Federal Communications Commission', which shall be constituted as hereinafter provided, and which shall execute and enforce the provisions of this Act.

Telecommunications Policy is a framework of law directed by government and the Regulatory Commissions, most notably the Federal Communications Commission.

Communications Protocol is a system of rules that allow two or more entities of a communications system to transmit information via any kind of variation of a physical quantity. These are the rules or standard that defines the syntax, semantics and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both.
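
A toy protocol sketch in Python: the agreed rule (the syntax) is a 4-byte big-endian length prefix followed by UTF-8 text, so the receiver knows exactly where each message ends. This framing rule is invented for illustration; real protocols also define semantics and error recovery.

    # Toy protocol: both sides share one framing rule.
    import struct

    def encode(message: str) -> bytes:
        data = message.encode("utf-8")
        return struct.pack("!I", len(data)) + data   # syntax: length, then body

    def decode(stream: bytes) -> str:
        (length,) = struct.unpack("!I", stream[:4])  # both ends share this rule
        return stream[4:4 + length].decode("utf-8")

    print(decode(encode("over and out")))  # 'over and out'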

Signal Corps develops, tests, provides, and manages communications and information systems support for the command and control of combined arms forces.

International Communications Law consists primarily of a number of bilateral and multilateral communications treaties.

Outline of Communication (pdf) - Communicating Knowledge

Information and Communications Technology is an extended term for information technology (IT) which stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals), computers as well as necessary enterprise software, middleware, storage, and audio-visual systems, which enable users to access, store, transmit, and manipulate information. (ICT)

Unified Communications is a marketing buzzword describing the integration of real-time enterprise communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web & video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC is not necessarily a single product, but a set of products that provides a consistent unified user interface and user experience across multiple devices and media types. In its broadest sense, UC can encompass all forms of communications that are exchanged via a network to include other forms of communications such as Internet Protocol Television (IPTV) and digital signage Communications as they become an integrated part of the network communications deployment and may be directed as one-to-one communications or broadcast communications from one to many. UC allows an individual to send a message on one medium and receive the same communication on another medium. For example, one can receive a voicemail message and choose to access it through e-mail or a cell phone. If the sender is online according to the presence information and currently accepts calls, the response can be sent immediately through text chat or a video call. Otherwise, it may be sent as a non-real-time message that can be accessed through a variety of media.



Super Computers


Supercomputer is a computer with a high level of computing performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS.

Floating Point Operations Per Second or FLOPS, is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.
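
A rough, illustrative way to estimate FLOPS in pure Python; interpreter overhead keeps the result many orders of magnitude below hardware peak, which is exactly why supercomputer benchmarks use optimized code such as LINPACK.

    # Crude FLOPS estimate: time a known number of multiply-adds.
    import time

    n = 1_000_000
    x = 1.000001
    start = time.perf_counter()
    total = 0.0
    for _ in range(n):
        total = total * x + 1.0          # one multiply + one add per loop
    elapsed = time.perf_counter() - start
    print(f"~{2 * n / elapsed:,.0f} FLOPS (pure-Python estimate)")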

Floating-Point Arithmetic is arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen.
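
A quick Python illustration of the significand-and-exponent representation and the approximation trade-off it implies:

    # Python floats are base-2 with a fixed-size significand, so some
    # decimal values can only be approximated.
    import math

    print(0.1 + 0.2)          # 0.30000000000000004 (rounding error)
    m, e = math.frexp(6.0)    # 6.0 == m * 2**e with 0.5 <= m < 1
    print(m, e)               # 0.75 3  (0.75 * 2**3 == 6.0)
    print((6.0).hex())        # 0x1.8000000000000p+2: significand and exponent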

Petascale Computing is one quadrillion Floating Point operations per second.

Exascale Computing is a billion billion calculations per second. Exascale computing is a type of ultra-powerful supercomputing, with systems performing a billion billion (10^18) calculations per second utilizing an infrastructure of CPUs and GPUs to process and analyze data. (exaFLOPS).

Frontier supercomputer, or OLCF-5, is the world's first exascale supercomputer. It is hosted at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, United States, and became operational in 2022. As of December 2023, Frontier is the world's fastest supercomputer. It is based on the Cray EX and is the successor to Summit (OLCF-4). Frontier achieved an Rmax of 1.102 exaFLOPS, which is 1.102 quintillion operations per second, using AMD CPUs and GPUs. Supercomputers, which harness the power of multiple interconnected processing cores, use an immense amount of energy, with some running at as much as 30 megawatts. In a year, these supercomputers consume as much power as a small city.

In 1973, an MIT computer predicted when civilization will end. Supercomputers can tell us our future and predict dangerous life-threatening hazards. But supercomputers can also confirm the negligence of greedy and corrupt companies and politicians. This is why the public will not hear about these predictions or calculations. Mass murder is a business, and corrupt corporations and politicians don't want the public to know this, or know about their devious psychopathic behavior that is causing deaths now and will increase the amount of deaths in the future. Supercomputers can be used to model potential future hazards based on current data and trends, and can predict how many humans will die. But corporate greed and corruption won't let you see the data. But they will give you chatbots to play with. People in power have all the advanced technology that the world needs for preservation and protection. But our technologies are being wasted and misused for selfish and narrow-minded goals. The lunatics have taken over the asylum and they are driving humanity off a cliff. The sad part is, these scumbags will survive the crash, which means that these scumbags will continue to do evil and continue to do ignorant things far into the future. A living hell will soon have an upgrade. They're not just using supercomputers to spy on people, they are also using this technology to kill whistleblowers, journalists and activists. The scumbags in power have everything they need to implement a police state; it's just a matter of when. You can't handle the truth, especially if you don't know the truth.

Global Warming - Pollution - Death Rates

Dojo is a Tesla supercomputer designed and built by Tesla for computer vision video processing and recognition. It will be used for training Tesla's machine learning models to improve its Full Self-Driving advanced driver-assistance system. According to Tesla, it went into production in July 2023.

Titan is an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that uses graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan is the first such hybrid to perform over 10 petaFLOPS. Titan at Oak Ridge National Laboratory will soon be eclipsed by machines capable of performing a billion billion floating-point operations per second.

Sierra Supercomputer or ATS-2, is a supercomputer built for the Lawrence Livermore National Laboratory for use by the National Nuclear Security Administration as the second Advanced Technology System. It is primarily used for predictive applications in stockpile stewardship, helping to assure the safety, reliability and effectiveness of the nation's nuclear weapons. Sierra is very similar in architecture to the Summit supercomputer built for the Oak Ridge National Laboratory. The Sierra system uses IBM POWER9 CPUs in conjunction with Nvidia Tesla V100 GPUs. The nodes in Sierra are Witherspoon S922LC OpenPOWER servers with two GPUs per CPU and four GPUs per node. These nodes are connected with EDR InfiniBand.

Summit Supercomputer or OLCF-4, is a supercomputer developed by IBM for use at Oak Ridge National Laboratory, which as of June 8, 2018 is the fastest supercomputer in the world, capable of 200 petaflops. Its current LINPACK benchmark is clocked at 122.3 petaflops. As of June 2018, the supercomputer is also the 5th most energy efficient in the world with a measured power efficiency of 13.889 GFlops/watts. Summit is the first supercomputer to reach exascale speed, achieving 1.88 exaflops during a genomic analysis and is expected to reach 3.3 exaflops using mixed precision calculations.

K Computer, named after the Japanese word "kei" meaning 10 quadrillion, is a supercomputer manufactured by Fujitsu which is based on a distributed memory architecture with over 80,000 computer nodes.

Aurora is a planned state-of-the-art exascale supercomputer designed by Intel/Cray for the U.S. Department of Energy's (DOE) Argonne Leadership Computing Facility (ALCF). The system is expected to become the first supercomputer in the United States to break the exaFLOPS barrier.

Nvidia DGX-2 is a 2-petaFLOPS system that combines 16 fully interconnected GPUs for 10X the deep learning performance. Discover the World's Largest GPU: NVIDIA DGX-2 (youtube) - VR.

Artificial Intelligent Chatbots

A multi-value logic transistor based on zinc oxide is capable of two stable intermediate states between 0 and 1.

AI can be trained to read the millions of research papers, scientific reports and peer reviewed scientific publications, that people have no time to read. Then you can use algorithms to look for patterns that would solve a problem or find a possible answer to a particular problem.

Making artificial intelligence more energy efficient. Researchers developed a state-of-the-art device that could reduce the energy consumption of artificial intelligence by a factor of at least 1,000: a new model, called computational random-access memory (CRAM), in which the data never leaves the memory. The International Energy Agency (IEA) issued a global energy use forecast in March of 2024, forecasting that energy consumption for AI is likely to double from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh in 2026. This is roughly equivalent to the electricity consumption of the entire country of Japan. According to the new paper's authors, a CRAM-based machine learning inference accelerator is estimated to achieve an improvement on the order of 1,000. Other examples showed energy savings of 2,500 and 1,700 times compared to traditional methods.

Classical computers can keep up with quantum computers and sometimes surpass them. A team of scientists has devised means for classical computing to mimic a quantum computer with far fewer resources than previously thought. The scientists' results show that classical computing can be reconfigured to perform faster and more accurate calculations than state-of-the-art quantum computers. Conventional computers process information in the form of digital bits (0s and 1s), while quantum computers deploy quantum bits (qubits) to store quantum information in values between 0 and 1. Under certain conditions this ability to process and store information in qubits can be used to design quantum algorithms that drastically outperform their classical counterparts.



Quantum Computing


Quantum Computer studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, which can be in superpositions of states. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980, Richard Feynman in 1982, and David Deutsch in 1985. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. Key component to scale up quantum computing invented. Memristor.
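
A minimal superposition sketch using plain linear algebra in Python with NumPy: a qubit state is a 2-vector of amplitudes, a Hadamard gate puts |0> into an equal superposition, and squared amplitudes give measurement probabilities. This simulates the math classically; it does not run on quantum hardware.

    # Qubit superposition simulated with ordinary linear algebra.
    import numpy as np

    ket0 = np.array([1.0, 0.0])                   # definite state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0                              # (|0> + |1>) / sqrt(2)
    probabilities = np.abs(state) ** 2
    print(probabilities)                          # [0.5 0.5], 50/50 on measurement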

Quantum Supremacy is defined as the potential ability of quantum computing devices to solve problems that normal computers simply can’t. The weaker quantum advantage is the potential to solve problems merely faster. In computational-complexity-theoretic terms, this generally means providing a superpolynomial speedup over the best known or possible classical algorithm.

Essential Quantum Computer Component Downsized by Two Orders of Magnitude. Devices built to shield qubits from unwanted signals, known as nonreciprocal devices, produce magnetic fields themselves. The new component, a traffic roundabout for photons, is only about a tenth of a millimeter in size and, more importantly, it is not magnetic. To receive a signal such as a microwave photon from a qubit, while preventing noise and other spurious signals from traveling toward the qubit, researchers use nonreciprocal devices, such as isolators or circulators, which control the signal traffic. The 'roundabouts' the group has designed consist of aluminum circuits on a silicon chip and are the first to be based on micromechanical oscillators: two small silicon beams oscillate on the chip like the strings of a guitar and interact with the electrical circuit. These devices are tiny, only about a tenth of a millimeter in diameter, which is one of the major advantages the new component has over its traditional predecessors, which were a few centimeters wide. Silicon provides means to control quantum bits for faster algorithms. Quantum bits are now easier to manipulate for devices in quantum computing, thanks to enhanced spin-orbit interaction in silicon.

National Institute of Standards and Technology. Researchers Develop Magnetic Switch to Turn On and Off a Strange Quantum Property. When an electron moves around a closed path, ending up where it began, its physical state may or may not be the same as when it left. Now, there is a way to control the outcome, thanks to an international research group led by scientists at the National Institute of Standards and Technology (NIST).

By subjecting a quantum computer’s qubits to quasi-rhythmic laser pulses based on the Fibonacci sequence, physicists demonstrated a way of storing quantum information that is less prone to errors.

Reversible Computing is a model of computing where the computational process to some extent is reversible, i.e., time-invertible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the relation of the mapping from states to their successors must be one-to-one. Reversible computing is generally considered an unconventional form of computing.
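A small Python sketch of that one-to-one condition, under the usual toy model of gates on classical bits: the reversible CNOT gate maps the four 2-bit states onto themselves with no collisions, while AND collapses four inputs to two outputs and so cannot be undone.

    # The CNOT gate is reversible: each 2-bit state has a unique successor.
    def cnot(a, b):
        return a, b ^ a  # flips b only when the control bit a is 1

    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    images = [cnot(a, b) for a, b in states]
    print(sorted(images) == sorted(states))  # True: a one-to-one mapping

    # AND is irreversible: it merges computation paths.
    and_images = [a & b for a, b in states]
    print(len(set(and_images)))  # 2 outputs for 4 inputs: information is lost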

Landauer's Principle is a physical principle pertaining to the lower theoretical limit of energy consumption of computation. It holds that "any logically irreversible manipulation of information, such as the erasure of a bit or the merging of two computation paths, must be accompanied by a corresponding entropy increase in non-information-bearing degrees of freedom of the information-processing apparatus or its environment". Another way of phrasing Landauer's principle is that if an observer loses information about a physical system, the observer loses the ability to extract work from that system. If no information is erased, computation may in principle be achieved in a way that is thermodynamically reversible, requiring no release of heat. This has led to considerable interest in the study of reversible computing.
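A quick worked example of the bound, assuming room temperature (300 K): erasing one bit dissipates at least k·T·ln(2) joules.

    # Landauer limit for erasing one bit at room temperature.
    import math
    k = 1.380649e-23        # Boltzmann constant, J/K
    T = 300.0               # room temperature, K
    E_min = k * T * math.log(2)
    print(E_min)            # ~2.87e-21 J per erased bit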

Transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0.
ON=1 / OFF=0

Qubit can store zeros and ones simultaneously, or two Magnetic Fields at once. Qubit is a unit of quantum information—the quantum analogue of the classical bit. A qubit is a two-state quantum-mechanical system, such as the polarization of a single photon: here the two states are vertical polarization and horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a superposition of both states at the same time, a property that is fundamental to quantum computing.
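A minimal numeric sketch of superposition: a qubit state is a normalized pair of complex amplitudes for |0> and |1>, and the squared magnitudes give the measurement probabilities.

    # An equal superposition of |0> and |1> yields 50/50 measurement odds.
    import math
    alpha = 1 / math.sqrt(2)    # amplitude of |0>
    beta = 1 / math.sqrt(2)     # amplitude of |1>
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    print(p0, p1)               # ~0.5 ~0.5; probabilities always sum to 1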

(0.01 Kelvin) - Entanglement - Interaction Strengths - Macroscopic Scale - Topology

Scientists managed to instantly “teleport” data between two chips that are not connected for the very first time. Researchers at the Technical University of Denmark have created “chip-scale devices” that are able to utilize quantum physics to manipulate single particles of light. The team’s findings have been published in the journal Nature Physics.

Long-lived storage of a Photonic Qubit for worldwide Teleportation. Light is an ideal carrier for quantum information encoded on single photons, but transfer over long distances is inefficient and unreliable due to losses. Direct teleportation between the end nodes of a network can be utilized to prevent the loss of precious quantum bits. First, remote entanglement has to be created between the nodes; then, a suitable measurement on the sender side triggers the "spooky action at a distance," i.e. the instantaneous transport of the qubit to the receiver's node. However, the quantum bit may be rotated when it reaches the receiver and hence has to be reverted. To this end, the necessary information has to be classically communicated from sender to receiver. This takes a certain amount of time, during which the qubit has to be preserved at the receiver. Considering two network nodes at the most distant places on earth, this corresponds to a time span of 66 milliseconds.
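The 66 millisecond figure is easy to sanity-check: the classical message cannot travel faster than light, and half of Earth's circumference is roughly 20,000 km.

    # Rough check of the 66 ms figure for antipodal network nodes.
    c = 3.0e8                     # speed of light, m/s
    distance = 2.0e7              # ~half Earth's circumference, m
    print(distance / c * 1000)    # ~66.7 milliseconds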

Dephasing is a mechanism that recovers classical behavior from a quantum system. It refers to the ways in which coherence caused by perturbation decays over time, and the system returns to the state before perturbation. It is an important effect in molecular and atomic spectroscopy, and in the condensed matter physics of mesoscopic devices.

Superposition Principle states that, for all linear systems, the net response at a given place and time caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. So that if input A produces response X and input B produces response Y then input (A + B) produces response (X + Y).
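The defining property is easy to state in code; a minimal check with the linear system f(x) = 3x:

    # Linearity: the response to A + B equals the sum of the responses.
    def f(x):
        return 3 * x

    A, B = 2.0, 5.0
    print(f(A + B) == f(A) + f(B))  # True for any linear system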

Magnetic Flux Quantum (wiki) - Magnetic Flux (wiki)

Quantum Annealing is a metaheuristic for finding the global minimum of a given objective function over a given set of candidate solutions (candidate states), by a process using quantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimization problems) with many local minima; such as finding the ground state of a spin glass.
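Quantum annealing itself needs quantum hardware, but its classical cousin, simulated annealing, illustrates the same search idea and fits in a short sketch. The random couplings below are a made-up toy spin-glass instance, not a real benchmark.

    # Simulated annealing on a tiny Ising-style spin glass: uphill moves
    # are accepted with a probability that shrinks as the system cools.
    import math, random

    random.seed(0)
    n = 8
    J = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

    def energy(s):
        return sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

    s = [random.choice((-1, 1)) for _ in range(n)]
    T = 2.0
    while T > 0.01:
        i = random.randrange(n)
        old = energy(s)
        s[i] = -s[i]                      # propose flipping one spin
        dE = energy(s) - old
        if dE > 0 and random.random() > math.exp(-dE / T):
            s[i] = -s[i]                  # reject the uphill move
        T *= 0.99                         # cool down slowly
    print(s, energy(s))                   # a low-energy configuration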

Monte Carlo Method are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. Their essential idea is using randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution.
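The textbook example is estimating pi by random sampling: points are thrown into the unit square, and the fraction landing inside the quarter circle approaches pi/4.

    # Monte Carlo estimate of pi via repeated random sampling.
    import random
    random.seed(1)
    N = 100_000
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1 for _ in range(N))
    print(4 * hits / N)   # approaches 3.14159... as N grows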

Quadratic Unconstrained Binary Optimization (wiki)
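A QUBO problem asks for the binary vector x minimizing x^T Q x. A brute-force sketch for a tiny, made-up Q matrix (annealers exist precisely because this enumeration explodes for large n):

    # Exhaustive search over all binary vectors for a 3x3 toy QUBO.
    from itertools import product

    Q = [[-1, 2, 0],
         [0, -1, 2],
         [0, 0, -1]]

    def qubo_value(x):
        return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

    best = min(product((0, 1), repeat=3), key=qubo_value)
    print(best, qubo_value(best))   # (1, 0, 1) with value -2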

SQUID stands for superconducting quantum interference device, which is a very sensitive magnetometer used to measure extremely subtle magnetic fields, based on superconducting loops containing Josephson junctions.

Josephson Effect is the phenomenon of supercurrent—i.e. a current that flows indefinitely long without any voltage applied—across a device known as a Josephson junction (JJ), which consists of two superconductors coupled by a weak link. The weak link can consist of a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-s-S).

Superconducting Tunnel Junction is an electronic device consisting of two superconductors separated by a very thin layer of insulating material. Current passes through the junction via the process of quantum tunneling. The STJ is a type of Josephson junction, though not all the properties of the STJ are described by the Josephson effect. These devices have a wide range of applications, including high-sensitivity detectors of electromagnetic radiation, magnetometers, high speed digital circuit elements, and quantum computing circuits.

Quantum Tunnelling refers to the quantum mechanical phenomenon where a particle tunnels through a barrier that it classically could not surmount. This plays an essential role in several physical phenomena, such as the nuclear fusion that occurs in main sequence stars like the Sun. It has important applications to modern devices such as the tunnel diode, quantum computing, and the scanning tunnelling microscope.

D-Wave Systems is a quantum computing company based in Burnaby, British Columbia, Canada. D-Wave was the first company in the world to sell quantum computers, with early systems priced at around 10 million dollars.

2048 (video game) The game's objective is to slide numbered tiles on a grid to combine them to create a tile with the number 2048; however, you can keep playing the game, creating tiles with larger numbers (such as a 32,768 tile).

Non-Abelian - Anyon is a type of quasiparticle that occurs only in two-dimensional systems, with properties much less restricted than fermions and bosons. In general, the operation of exchanging two identical particles may cause a global phase shift but cannot affect observables. Anyons are generally classified as abelian or non-abelian. Abelian anyons have been detected and play a major role in the fractional quantum Hall effect. Non-abelian anyons have not been definitively detected, although this is an active area of research.

Skyrmion is a topologically stable field configuration of a certain class of non-linear sigma models. It was originally proposed as a model of the nucleon by Tony Skyrme in 1962. As a topological soliton in the pion field, it has the remarkable property of being able to model, with reasonable accuracy, multiple low-energy properties of the nucleon, simply by fixing the nucleon radius. It has since found application in solid state physics, as well as having ties to certain areas of string theory. Skyrmions as topological objects are important in solid state physics, especially in the emerging technology of spintronics. A two-dimensional magnetic skyrmion, as a topological object, is formed, e.g., from a 3D effective-spin "hedgehog" (in the field of micromagnetics: out of a so-called "Bloch point" singularity of homotopy degree +1) by a stereographic projection, whereby the positive north-pole spin is mapped onto a far-off edge circle of a 2D-disk, while the negative south-pole spin is mapped onto the center of the disk. In a spinor field such as for example photonic or polariton fluids the skyrmion topology corresponds to a full Poincaré beam (which is, a quantum vortex of spin comprising all the states of polarization). Skyrmions have been reported, but not conclusively proven, to be in Bose-Einstein condensates, superconductors, thin magnetic films and in chiral nematic liquid crystals. As a model of the nucleon, the topological stability of the Skyrmion can be interpreted as a statement that the baryon number is conserved; i.e. that the proton does not decay. The Skyrme Lagrangian is essentially a one-parameter model of the nucleon. Fixing the parameter fixes the proton radius, and also fixes all other low-energy properties, which appear to be correct to about 30%. It is this predictive power of the model that makes it so appealing as a model of the nucleon. Hollowed-out skyrmions form the basis for the chiral bag model (Cheshire Cat model) of the nucleon. Exact results for the duality between the fermion spectrum and the topological winding number of the non-linear sigma model have been obtained by Dan Freed. This can be interpreted as a foundation for the duality between a QCD description of the nucleon (but consisting only of quarks, and without gluons) and the Skyrme model for the nucleon. The skyrmion can be quantized to form a quantum superposition of baryons and resonance states. It could be predicted from some nuclear matter properties.

Method for Improving Quantum Information Processing. A new method for splitting light beams into their frequency modes and encoding photons with quantum information.

World's first logical quantum processor. A team has realized a key milestone in the quest for stable, scalable quantum computing. For the first time, the team has created a programmable, logical quantum processor, capable of encoding up to 48 logical qubits, and executing hundreds of logical gate operations. Their system is the first demonstration of large-scale algorithm execution on an error-corrected quantum computer, heralding the advent of early fault-tolerant, or reliably uninterrupted, quantum computation.

Scalable and fully coupled quantum-inspired processor solves optimization problems. Researchers experimentally demonstrate the first fully connected annealing processor that can be scaled up across multiple chips.

Kink State control may provide pathway to quantum electronics. Researchers develop a robust quantum highway with switch to control electron movement. We have developed a quantum highway system that could carry electrons without collision, be programmed to direct current flow and is potentially scalable -- all of which lays a strong foundation for future studies exploring the fundamental science and application potentials of this system.

Superconducting Qubit 3D integration prospects bolstered by new research. As superconducting qubit technology grows beyond one-dimensional chains of nearest-neighbour coupled qubits, larger-scale two-dimensional arrays of qubits that may be entangled with each other are a natural next step. Prototypical two-dimensional arrays have been built, but the challenge of routing control wiring and readout circuitry has, so far, prevented the development of high-fidelity qubit arrays of size 3×3 or larger. Researchers have developed a process for fabricating fully superconducting interconnects that are materially compatible with existing high-fidelity aluminum-on-silicon qubits. “This fabrication process opens the door to the possibility of the close integration of two superconducting circuits with each other or, as would be desirable in the case of superconducting qubits, the close integration of one high-coherence qubit device with a dense, multi-layer, signal-routing device”.

Stable Quantum Gate created - Stable Quantum Bits

Cost-Effective Quantum Computing moves a step closer. The National Institute of Standards and Technology, Colorado, proved the viability of a measurement-device-independent quantum key distribution (QKD) system based on readily available hardware, such as distributed feedback (DFB) lasers and field-programmable gate array (FPGA) electronics that enable time-bin qubit preparation and time-tagging, and active feedback systems that allow for compensation of time-varying properties of photons after transmission through deployed fibre.

Qubit Oxford Quantum.

Excitonic Insulator. Rules for superconductivity mirrored in a device whose braided qubits could form a component of a topological quantum computer. Excitonium is a new form of matter identified by its soft plasmon signature.

The Tianhe-2, one of the most powerful supercomputers built to date, demands 24 megawatts of power, while the human brain runs on just 10 watts.

Biological Neuron-Based Computer Chips (wetchips)
Artificial Intelligence

TOP 500 list of the World’s Top Supercomputers

ASC Sequoia will have 1.6 petabytes of memory, 96 racks, 98,304 compute nodes, and 1.6 million cores. Though orders of magnitude more powerful than such predecessor systems as ASC Purple and BlueGene/L, Sequoia will be 160 times more power efficient than Purple and 17 times more power efficient than BlueGene/L. It is expected to be one of the most powerful supercomputers in the world, equivalent to the 6.7 billion people on earth using hand calculators and working together on a calculation 24 hours per day, 365 days a year, for 320 years, to do what Sequoia will do in one hour.
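The calculator analogy roughly checks out, assuming Sequoia's approximately 20 petaFLOPS peak (a figure not stated above) and one calculation per second per person:

    # Sanity check: hand calculations vs. one hour of a ~20 petaFLOPS machine.
    people = 6.7e9
    seconds = 320 * 365 * 24 * 3600      # 320 years of round-the-clock work
    hand_calcs = people * seconds        # ~6.8e19 calculations
    sequoia_hour = 20e15 * 3600          # ~7.2e19 floating-point operations
    print(hand_calcs / sequoia_hour)     # ~0.9: the analogy roughly holds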

DARPA or Defense Advanced Research Projects Agency, is an agency of the U.S. Department of Defense responsible for the development of emerging technologies for use by the military. Darpa.

IARPA or Intelligence Advanced Research Projects Activity, invests in high-risk, high-payoff research programs to tackle some of the most difficult challenges of the agencies and disciplines in the Intelligence Community (IC).

Institute for Computational Cosmology works to advance fundamental knowledge in cosmology. Topics of active research include: the nature of dark matter and dark energy, the evolution of cosmic structure, the formation of galaxies, and the determination of fundamental parameters.

New Building Block in Quantum Computing demonstrated. Researchers have demonstrated a new level of control over photons encoded with quantum information. The team's experimental system allows them to manipulate the frequency of photons to bring about superposition, a state that enables quantum operations and computing.

We believe that the brain stores information about our surroundings in so-called cognitive spaces. This concerns not only geographical data, but also relationships between objects and experience. The term 'cognitive spaces' refers to mental maps in which we arrange our experience. Everything that we encounter has physical properties, whether a person or an object, and can therefore be arranged along different dimensions. The very regular activation pattern of grid cells can also be observed in humans, and importantly, not only during navigation through geographical spaces. Grid cells are also active when learning new concepts.

Fiber Optics

Artificial Intelligent Computing (Turing - Machine learning)

In a step forward for orbitronics, scientists break the link between a quantum material's spin and orbital states. The advance opens a path toward a new generation of logic and memory devices that could be 10,000 times faster than today's. In designing electronic devices, scientists look for ways to manipulate and control three basic properties of electrons: their charge; their spin states, which give rise to magnetism; and the shapes of the fuzzy clouds they form around the nuclei of atoms, which are known as orbitals.

Seeing electron movement at fastest speed ever could help unlock next-level quantum computing. New technique could enable processing speeds a million to a billion times faster than today's computers and spur progress in many-body physics. The key to maximizing traditional or quantum computing speeds lies in our ability to understand how electrons behave in solids, and researchers have now captured electron movement in attoseconds, the fastest speed yet. To see electron movement within two-dimensional quantum materials, researchers typically use short bursts of focused extreme ultraviolet (XUV) light. Those bursts can reveal the activity of electrons attached to an atom's nucleus. But the large amounts of energy carried in those bursts prevent clear observation of the electrons that travel through semiconductors, as in current computers and in materials under exploration for quantum computers. U-M engineers and partners employ two light pulses with energy scales that match that of those movable semiconductor electrons. The first, a pulse of infrared light, puts the electrons into a state that allows them to travel through the material. The second, a lower-energy terahertz pulse, then forces those electrons into controlled head-on collision trajectories. The crashes produce bursts of light, the precise timing of which reveals interactions behind quantum information and exotic quantum materials alike. "We used two pulses, one that is energetically matched with the state of the electron, and then a second pulse that causes the state to change. We can essentially film how these two pulses change the electron's quantum state and then express that as a function of time." The two-pulse sequence allows time measurement with a precision better than one percent of the oscillation period of the terahertz radiation that accelerates the electrons. Quantum materials could possess robust magnetic, superconductive or superfluid phases, and quantum computing represents the potential for solving problems that would take too long on classical computers. Pushing such quantum capabilities will eventually create solutions to problems that are currently out of our reach. That starts with basic observational science.

Supercomputers without Waste Heat. Physicists explore superconductivity for information processing. Lossless electrical transfer of magnetically encoded information is possible. Magnetic materials are often used for information storage. Magnetically encoded information can, in principle, also be transported without heat production by using the magnetic properties of electrons, the electron spin. Combining the lossless charge transport of superconductivity with the electronic transport of magnetic information -- i.e. "spintronics" -- paves the way for fundamentally novel functionalities for future energy-efficient information technologies.



Speed of Processing


Bandwidth in computing is the bit-rate of available or consumed information capacity expressed typically in metric multiples of bits per second. Variously, bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth. This definition of bandwidth is in contrast to the field of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth is used to refer to analog signal bandwidth measured in hertz, meaning the frequency range between lowest and highest attainable frequency while meeting a well-defined impairment level in signal power. However, the actual bit rate that can be achieved depends not only on the signal bandwidth, but also on the noise on the channel. Chips.

Bit Rate is the number of bits that are conveyed or processed per unit of time.
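A small worked example of what these numbers mean in practice, using made-up but typical values:

    # Time to move a 4 GB file over a link with 100 Mbit/s of usable capacity.
    file_bits = 4 * 8 * 10**9        # 4 gigabytes expressed in bits
    link_bps = 100 * 10**6           # 100 megabits per second
    print(file_bits / link_bps)      # 320 seconds, about 5.3 minutes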

Bandwidth in signal processing is the difference between the upper and lower frequencies in a continuous set of frequencies. It is typically measured in hertz, and may sometimes refer to passband bandwidth, sometimes to baseband bandwidth, depending on context. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. In the case of a low-pass filter or baseband signal, the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information,  regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency.
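The bandwidth/noise trade-off mentioned above is captured by the Shannon–Hartley theorem, C = B · log2(1 + S/N). A worked example for the 3 kHz telephone band with a 30 dB signal-to-noise ratio:

    # Channel capacity of a telephone-grade band: C = B * log2(1 + S/N).
    import math
    B = 3000.0                 # bandwidth, Hz
    snr = 1000.0               # 30 dB expressed as a power ratio
    C = B * math.log2(1 + snr)
    print(C)                   # ~29,900 bit/s, close to dial-up modem rates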

Instructions Per Second is a measure of a computer's processor speed. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads typically lead to significantly lower IPS values. Memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.

Exascale Computing refers to computing systems capable of at least one exaFLOPS, or a billion billion calculations per second. Such capacity represents a thousandfold increase over the first petascale computer that came into operation in 2008. (One exaFLOPS is a thousand petaFLOPS, or a quintillion, 10^18, floating point operations per second.) At a supercomputing conference in 2009, Computerworld projected exascale implementation by 2018. Exascale computing would be considered a significant achievement in computer engineering, for it is believed to be on the order of the processing power of the human brain at the neural level (functional capacity might be lower). It is, for instance, the target power of the Human Brain Project.

Researchers create first-of-its-kind composable storage platform for high-performance computing. The new framework, called BespoKV, pushes the limits of high-performance computing, targeting the exascale, or a billion billion calculations per second.

Spectral Efficiency refers to the information rate that can be transmitted over a given bandwidth in a specific communication system. It is a measure of how efficiently a limited frequency spectrum is utilized by the physical layer protocol, and sometimes by the media access control (the channel access protocol).

Signal Processing concerns the analysis, synthesis, and modification of signals, which are broadly defined as functions conveying, "information about the behavior or attributes of some phenomenon", such as sound, images, and biological measurements. For example, signal processing techniques are used to improve signal transmission fidelity, storage efficiency, and subjective quality, and to emphasize or detect components of interest in a measured signal.

Clock Rate typically refers to the frequency at which a chip like a central processing unit (CPU), one core of a multi-core processor, is running and is used as an indicator of the processor's speed.
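Converting a clock rate into time per cycle is a one-line calculation:

    # A 3 GHz core completes one clock cycle in a third of a nanosecond.
    clock_hz = 3.0e9
    print(1 / clock_hz)    # ~3.33e-10 s, i.e. ~0.33 ns per cycle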

Processing Speed is a cognitive ability that could be defined as the time it takes a person to do a mental task. It is related to the speed in which a person can understand and react to the information they receive, whether it be visual (letters and numbers), auditory (language), or movement. Thinking Fast.

Fast is acting or moving quickly or rapidly. Being hurried and brief.

Quick is to perform with little or no delay. Moving fast and lightly. Apprehending and responding with speed and sensitivity.



Virtual PC


Virtual Machine is an emulation of a computer system. Virtual machines are based on computer architectures and provide functionality of a physical computer. Their implementations may involve specialized hardware, software, or a combination. There are different kinds of virtual machines, each with different functions: System virtual machines (also termed full virtualization VMs) provide a substitute for a real machine. They provide functionality needed to execute entire operating systems. A hypervisor uses native execution to share and manage hardware, allowing for multiple environments which are isolated from one another, yet exist on the same physical machine. Modern hypervisors use hardware-assisted virtualization, virtualization-specific hardware, primarily from the host CPUs. Process virtual machines are designed to execute computer programs in a platform-independent environment. Some virtual machines, such as QEMU, are designed to also emulate different architectures and allow execution of software applications and operating systems written for another CPU or architecture. Operating-system-level virtualization allows the resources of a computer to be partitioned via the kernel's support for multiple isolated user space instances, which are usually called containers and may look and feel like real machines to the end users.

Virtual-Box - Virtual machines allow one or more 'guest' operating systems to run inside another on the same PC. This is useful when you need access to multiple operating systems to run different software that runs on a particular OS.

Privacy - Safe Internet Use - Tor Project - Dark Web

How to install a Virtual Machine (youtube)

Virtual Desktop is a term used with respect to user interfaces, usually within the WIMP paradigm, to describe ways in which the virtual space of a computer's desktop environment is expanded beyond the physical limits of the screen's display area through the use of software. This compensates for a limited desktop area and can also be helpful in reducing clutter. There are two major approaches to expanding the virtual area of the screen. Switchable virtual desktops allow the user to make virtual copies of their desktop view-port and switch between them, with open windows existing on single virtual desktops. Another approach is to expand the size of a single virtual screen beyond the size of the physical viewing device. Typically, scrolling/panning a subsection of the virtual desktop into view is used to navigate an oversized virtual desktop.

v2.0 Desktops allows you to organize your applications on up to four virtual desktops.

Hardware Virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead another abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.

Virtualization Software specifically emulators and hypervisors, are software packages that emulate the whole physical computer machine, often providing multiple virtual machines on one physical platform. The table below compares basic information about platform virtualization hypervisors.

Hypervisor is computer software, firmware, or hardware, that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and OS X instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

Sandbox is a security mechanism for separating running programs. It is often used to execute untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system. A sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as scratch space on disk and memory. Network access, the ability to inspect the host system or read from input devices are usually disallowed or heavily restricted. In the sense of providing a highly controlled environment, sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently used to test unverified programs that may contain a virus or other malicious code, without allowing the software to harm the host device.

Operating System Sandbox: Virtual PC (youtube) - VM Ware - Parallels.

Hyper-V formerly known as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V supersedes Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks.


VPN - Virtual Private Network


Virtual Private Network enables users to send and receive data across shared or public networks as if their computing devices were directly connected to the private network. Applications running across the VPN may therefore benefit from the functionality, security, and management of the private network.

Ultimate Guide to Finding the Best VPN

How does a VPN Work?

Hotspot Shield provides secure and private access to a free and open Internet.

Artificial Neural Network

Dedicated Hosting Service is a type of Internet hosting in which the client leases an entire server not shared with anyone else. This is more flexible than shared hosting, as organizations have full control over the server(s), including choice of operating system, hardware, etc. There is also another level of dedicated or managed hosting commonly referred to as complex managed hosting. Complex Managed Hosting applies to both physical dedicated servers, Hybrid server and virtual servers, with many companies choosing a hybrid (combination of physical and virtual) hosting solution.

Virtual Private Server is a virtual machine sold as a service by an Internet hosting service. A VPS runs its own copy of an operating system, and customers may have superuser-level access to that operating system instance, so they can install almost any software that runs on that OS. For many purposes they are functionally equivalent to a dedicated physical server, and being software-defined, are able to be much more easily created and configured. They are priced much lower than an equivalent physical server. However, as they share the underlying physical hardware with other VPSs, performance may be lower, depending on the workload of any other executing virtual machines.

Virtualization refers to the act of creating a virtual rather than actual version of something, including virtual computer hardware platforms, storage devices, and computer network resources.

Virtual PC - Virtual Reality

Windows Virtual PC is a virtualization program for Microsoft Windows. In July 2006 Microsoft released the Windows version as a free product.



Remote - PC to PC


Remote is a place situated far away from the main headquarters of an operation or far from the main centers of the population. A place that is situated far from where you are now.

Online Education - Remote Learning - Home Work - AI Teachers - Virtual Training - Telemedicine - Telemetry - Video Conferencing - Travel Work

Remote Control is a component of an electronic device used to operate the device from a distance, usually wirelessly. A TV remote control can control certain TV functions like adjusting the volume or changing the channel. But remember, you can change the channel, but you can't control the content, so it might be the same shit on a different channel.

Remote Desktop Software refers to a software or operating system feature that allows a personal computer's desktop environment to be run remotely on one system (usually a PC, but the concept applies equally to a server), while being displayed on a separate client device. Remote desktop applications have varying features. Some allow attaching to an existing user's session (i.e., a running desktop) and "remote controlling", either displaying the remote control session or blanking the screen. Taking over a desktop remotely is a form of remote administration.

Remote Access Server allows users to gain access to files and print services on the LAN from a remote location. For example, a user who dials into a network from home using an analog modem or an ISDN connection will dial into a remote access server.

Remote Computer is a computer to which a user does not have physical access, but which he or she can access or manipulate via some kind of computer network.

Radio Control is the use of control signals transmitted by radio to remotely control a device.

Telepresence - Interfaces - Remote Viewing - Back Door - Displays

Tele-Operation indicates the operation of a system or machine at a distance.

Telecommand is a command sent to control a remote system or systems not directly connected (e.g. via wires) to the place from which the telecommand is sent.

Remote Work or working from home is an employment arrangement in which employees do not commute to a central place of work, such as an office building, warehouse, or retail store. Instead, work can be accomplished in the home, such as in a study, a small office/home office and/or a telecentre. A company in which all workers perform remote work is known as a distributed company. Remote work is also called work from anywhere, telework, remote job, mobile work, and distance work.

Some employers spy on employees using tracking tools such as video feeds and keystroke monitoring software. 96% of Remote Companies Use Employee Monitoring Software.

Remote work is not good for people who lack discipline or have bad time management skills, or have no purpose or passion for the work they do. Working alone or solo is a skill.

Hybrid Workplace is a model that mixes in-office and remote work to offer flexibility and support to employees. In a hybrid workplace, employees typically enjoy more autonomy and better work-life balance – and are more engaged as a result.

Ghost Work is work performed by a human, but believed by a customer to be performed by an automated process. Ghost work focuses on task-based and content-driven work that can be funneled through the Internet and application programming interfaces or APIs. This work can include labelling, editing, moderating, and sorting information or content. Ghost work can be performed remotely and on a contractual basis. It is an invisible workforce, scaled for those who desire full-time, part-time, or ad-hoc work. Ghost work is differentiated from gig work or temporary work because it is task-based and uncredited. While gig work involves a general platform, ghost work emphasizes the software or algorithm aspect of assisting machines to automate further. Through labelling content, ghost workers teach the machine to learn. Ghost work is often characterized by low wages, no benefits and isolation from colleagues.

Telecommuting is a work arrangement in which employees do not commute or travel (e.g. by bus or car) to a central place of work, such as an office building, warehouse, or store. Teleworkers in the 21st century often use mobile telecommunications technology such as Wi-Fi-equipped laptop or tablet computers and smartphones to work from coffee shops; others may use a desktop computer and a landline phone at their home.

Honest Work - Temporary Work - Cooperatives - Skill Sharing

Cottage Industry is a business or manufacturing activity carried on in a person's home. Solitude (introverts).

Home Office is a room in a person's house where he or she does office work. A home office is a designated space in a person's residence used for official business purposes and provides a place to work from home.

Teaching via Video Conference - Remote IT Services - Remote PC to PC Services - Log Me In - Team Viewer - Go to Assist - Pogo Plug - Dyn DNS - Tight VNC - Web Conferencing - Tutoring

Virtual Assistant is generally self-employed and provides professional administrative, technical, or creative or social assistance to clients remotely from a home office. Because virtual assistants are independent contractors rather than employees, clients are not responsible for any employee-related taxes, insurance or benefits, except in the context that those indirect expenses are included in the VA's fees. Clients also avoid the logistical problem of providing extra office space, equipment or supplies. Clients pay for 100% productive work, and can work with Virtual Assistants, individually, or in multi-VA firms to meet their exact needs. Virtual Assistants usually work for other small businesses, but can also support busy executives. It is estimated that there are as few as 5,000-10,000 or as many as 25,000 virtual assistants worldwide. The profession is growing in centralized economies with "fly-in fly-out" staffing practices.


Working while Traveling


Remote Job is one that is done away from the office in a remote location. Remote workers can be more productive than their office-bound counterparts. Synchronous Communication - Asynchronous Communication.

Visas - Immigration - Work Life Balance - Remote Work

Digital Nomads are people who use telecommunications technologies to earn a living and, more generally, conduct their life in a nomadic manner. Such workers often work remotely from foreign countries, coffee shops, public libraries, co-working spaces, or recreational vehicles. This is often accomplished through the use of devices that have wireless Internet capabilities such as smartphones or mobile hotspots. Successful digital nomads typically have a financial cushion. Other names used are Nomadic Computer Programmer, Online Virtual Assistant, Professional Consultant and Geographical Free Agent. Mobile Homes.

Global Nomad is a person who is living a mobile and international lifestyle. Global nomads aim to live location-independently, seeking detachment from particular geographical locations and the idea of territorial belonging.

Work on the Road is traveling from place to place while working away from one's home or office.

Working Vacation or workcation is the combination of holiday time with remote work. Business and Leisure is the merging of business and leisure travel or to do something enjoyable that is related to one's work. Working Holiday allows someone to visit a country for longer than the average tourist with the opportunity to take on short-term jobs to save money or at least help fund the trip. Many people seek short-term jobs in multiple regions as a way to explore that country in-depth. Work & Travel programs enable you to earn money and experience daily life for part of your program as you travel and explore the country.

Business Tourism can be divided into primary and secondary activities. Primary ones are business (work)-related, and included activities such as consultancy, inspections, and attending meetings. Secondary ones are related to tourism (leisure) and include activities such as dining out, recreation, shopping, sightseeing, meeting others for leisure activities, and so on. While the primary ones tend to be seen as more important, the secondary ones are nonetheless often described as "substantial".

Business Travel is travel undertaken for work or business purposes, as opposed to other types of travel, such as for leisure purposes or regularly commuting between one's home and workplace. According to a survey 88% small business owners enjoy business travel. Workers who travel away from the workplace on business can be considered "traveling employees," even if the travel is local and of limited duration.

Portable is something that is easily or conveniently transported. Transportation.

Traveling Nurse or travel nurse is a nurse hired by a healthcare staffing company to work temporary contracts at hospitals and other healthcare providers in different locations around the country, away from the nurse’s legal tax home. The nurse moves around the country periodically for the work.

Common careers involving business travel include: Salespeople, Sales engineers, Executives, Field engineers, Project managers, Trainers, Consultants, An au pair is a professional live-in babysitter or nanny. Peace Corps / NGO Work. Additionally, it is common to see doctors, nurses, and other medical professionals flying for work. Often lawyers, politicians, athletes, clergy, military, academics, and journalists conduct business travel on a regular basis. Travel Blogging, including food bloggers, mommy bloggers, fashion bloggers, and lifestyle bloggers. Videography / Vlogging / YouTube. Yacht Sailing Jobs. Freelance Travel Photographer. Bartending Jobs Abroad. Musician / Street Performer. A Local Tour Guide. Traveling Yoga Instructor. Freelance Travel Writer. Freelance Massage Therapist. Website & Graphic Design. Work On A Cruise Ship. Traveling Festival Work. Scuba Diving Instructor. A Flight Attendant. Foreign Service Travel Jobs. Travel Agent. Teach English Abroad or Teach English Online. Online Translation Jobs.

Backpacker Worker and vagabonds do work that I’ll call “alternative” travel jobs. The type of work that may not require a computer or a college degree, but has a more hands-on approach. Think musicians, artists, or manual labor. Pay could be under the table.

Migrant Worker is a person who migrates within a home country or outside it to pursue work. Migrant workers usually do not have the intention to stay permanently in the country or region in which they work. Seasonal Travel Jobs. Construction, school teachers, commercial fishing, oil workers, electricians, ski resort staff, etc. These jobs depend on what skills you currently possess or are willing to learn.

Distributed Workforce is a workforce that reaches beyond the restrictions of a traditional office environment. A distributed workforce is dispersed geographically over a wide area – domestically or internationally. By installing key technologies, distributed companies enable employees located anywhere to access all of the company’s resources and software such as applications, data and e-mail without working within the confines of a physical company-operated facility. This is not a virtual business, where employees are distributed but remain primarily unconnected. A company with a distributed workforce connects its employees using a networking infrastructure that makes it easy for team members across the world to work together. Using a shared software approach called SaaS, or software as a service, workers and teams can share files securely as well as access the company’s databases, file sharing, telecommunications/unified communications, Customer relationship management (CRM), video teleconferencing, human resources, IT service management, accounting, IT security, web analytics, web content management, e-mail, calendars and much more.

Some things you can't do from a distance; you have to be on site, physically and mentally. Working remotely is very convenient, but sometimes you have to be present in the place where the work is being done. Boots on the ground is not just a military thing, it is a life thing. Salespeople are the ones manning booths at trade shows, driving from site to site visiting customers, and calling their way through lists of phone numbers.

Putting-Out System is a means of subcontracting work. Historically, it was also known as the workshop system and the domestic system. In putting-out, work is contracted by a central agent to subcontractors who complete the work in off-site facilities, either in their own homes or in workshops with multiple craftsmen. It was used in the English and American textile industries, in shoemaking, lock-making trades, and making parts for small firearms from the Industrial Revolution until the mid-19th century. After the invention of the sewing machine in 1846, the system lingered on for the making of ready-made men's clothing. The domestic system was suited to pre-urban times because workers did not have to travel from home to work, which was quite impracticable due to the state of roads and footpaths, and members of the household spent many hours in farm or household tasks. Early factory owners sometimes had to build dormitories to house workers, especially girls and women. Putting-out workers had some flexibility to balance farm and household chores with the putting-out work, this being especially important in winter. The development of this trend is often considered to be a form of proto-industrialization, and remained prominent until the Industrial Revolution of the 19th century. At that point, it underwent name and geographical changes. However, bar some technological advancements, the putting-out system has not changed in essential practice. Contemporary examples can be found in China, India, and South America, and are not limited to the textiles industry.

Crowdsourcing - Part Time Work - Flextime

Expatriate is a person residing in a country other than their native country.

Inside Contracting is the practice of hiring contractors who work inside the proprietor's factory. It replaced the putting out system, where contractors worked in their own facilities.



Open Source Software


Open-Source Software is computer software with its source code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. Open Source Software may be developed in a collaborative public manner. According to scientists who studied it, open-source software is a prominent example of open collaboration. Open Source Education.

Open-Source Hardware consists of physical artifacts of technology designed and offered by the open-design movement. Both free and open-source software (FOSS) and open-source hardware are created by this open-source culture movement and apply a like concept to a variety of components. It is sometimes, thus, referred to as FOSH (free and open-source hardware). The term usually means that information about the hardware is easily discerned so that others can make it – coupling it closely to the maker movement. Hardware design (i.e. mechanical drawings, schematics, bills of material, PCB layout data, HDL source code and integrated circuit layout data), in addition to the software that drives the hardware, are all released under free/libre terms. The original sharer gains feedback and potentially improvements on the design from the FOSH community. There is now significant evidence that such sharing can drive a high return on investment for the scientific community. Since the rise of reconfigurable programmable logic devices, sharing of logic designs has been a form of open-source hardware. Instead of the schematics, hardware description language (HDL) code is shared. HDL descriptions are commonly used to set up system-on-a-chip systems either in field-programmable gate arrays (FPGA) or directly in application-specific integrated circuit (ASIC) designs. HDL modules, when distributed, are called semiconductor intellectual property cores, also known as IP cores. Open-source hardware also helps alleviate the issue of proprietary device drivers for the free and open-source software community, however, it is not a pre-requisite for it, and should not be confused with the concept of open documentation for proprietary hardware, which is already sufficient for writing FLOSS device drivers and complete operating systems. The difference between the two concepts is that OSH includes both the instructions on how to replicate the hardware itself as well as the information on communication protocols that the software (usually in the form of device drivers) must use in order to communicate with the hardware (often called register documentation, or open documentation for hardware), whereas open-source-friendly proprietary hardware would only include the latter without including the former.

Open Source is a decentralized development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as in open-source appropriate technologies, and open-source drug discovery. Open Source Initiative - Open Source.

Business Software Tools and Apps

Asterisk open source framework for building communications applications.

Alfresco software built on open standards.

Open-Source Electronics
Arduino - Raspberry Pi
Massimo Banzi (video)
Arduino 3D Printer (youtube)
Science Kits
Freeware Files


Computer Rentals


Rent Solutions
Vernon Computer Source
Smart Source Rentals
Google Chromebook

Miles Technologies Technology Solutions.

Maximum PC

Knowledge Management - Artificial Intelligence - Science - Ideas - Innovation


Word Processors


Word Processor is a device or computer program that provides for input, editing, formatting and output of text, often with some additional features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general purpose computers.

Open Office Suite
Libre Office
Abi Source
Note Tab text-processing and HTML editor.
Word Processors List (PDF)
Google Docs (writely)
Google Business Tools and Apps
Zoho
Photo Editing Software
Free Software

Scraper Wiki getting data from the web, spreadsheets, PDFs.

Comet Docs Convert, Store and Share your documents.

Final Draft is screenwriting software for writing and formatting screenplays to the standards set by the theater, television and film industries. The program can also be used to write documents such as stageplays, outlines, treatments, query letters, novels, graphic novels, manuscripts, and basic text documents. Writing Tips.



Computer Courses


W3 Schools
Webmaster Tutorials
Technology Terms
Creator Academy by Google
J Learning
Lynda
Compucert
Learning Tree
IT Training
Building a Search Engine
Tech Terms

Online Schools - Learn to Code



Computer Maintenance


Computer Hope
How to Geek
Stack Overflow
PC Mag
Data Doctors
Repairs 4 Laptop
Maintain Your Computer (wiki how)
PC User
Maintain PC (ehow)
Open Source Ultra Defrager
Data Recovery Software
Dmoz Computers Websites
Inter-Hacktive

Hackerspace
Technology Tools
Math
Games
Information Management
Computer History
Laptops for Learning
Flash Drive Knowledge
Engineering Design
Technology News

Remote Computer Assistance

Self-Help Computer Resources

Thanks to the millions of people sharing their knowledge and experiences online, you can pretty much learn anything you want on your own.  So over the years I have collected some great resources that come in handy. Sharing is awesome!
Information Sources.

Surfing the Internet Tips

First a Little Warning: When visiting other websites be very careful what you click on because some software downloads are very dangerous to your computer, so be absolutely sure what you are downloading. Read the ".exe" file name. Search the internet for more info, or to verify '.exe' executable files. It's a good idea to always get a second opinion on what software you might need.

Free Virus Protection
Internet Browsers
Internet Safety Info
Internet Connections

Computer Quick Fix Tips

Make sure that your Computer System Restore is on. This can sometimes be used to fix a bad computer virus or malfunction. It's a good idea to do a System Restore and a Virus Scan in Safe Mode. (During computer restart, hit the F8 key and then follow the instructions; F2 is Setup and F12 is the Boot Menu.) Warning: the System Restore found under Start/Programs/Accessories/System Tools is not the same as PC Restore, Factory Settings or Image Restore, which will delete all your personal files and software from the PC. If you don't have the OS Setup Disk that came with your PC, then sometimes the PC will have a Factory Settings copy installed. This you need to do while your PC is rebooting: press 'Ctrl', then press F11, and then release both at the same time. You should see something like Symantec Ghost, where you will be prompted to reinstall Factory Settings. This will delete all your personal files and software from the PC, so please back up first.

Always Have your Operating System Restore Disk or Recovery Disc handy because not all computer problems can be fixed. You also need your Drivers and Applications Disks too. Always backup your most important computer files because reinstalling the operating system will clear your data.

Kingston DataTraveler 200 - 128 GB USB 2.0 Flash Drive DT200/128GB (Black)
Western Digital 2 TB USB 2.0 Desktop External Hard Drive

Sending Large Files
Bit Torrent Protocol (wiki)
Lime Wire P2P
Send Large Files
Zip Files
Stuffit File Compression
File-Zilla
Dropbox
File Sharing for Professionals
We Transfer

You can try some of these free programs to help keep your computer safe: (might be outdated)
Lava Soft Ad-Ware
Spybot Search & Destroy
CCleaner
Malwarebytes
Hijack This
Spyware Blaster

Download.com has the software above but be very careful not to click on the wrong download item. Please verify the correct ".exe" file name.

Free Software?

As the saying goes, "Nothing is Free." Free software sometimes comes loaded with other software programs that you don't need, so always check or uncheck the appropriate boxes, and read everything carefully. Even then, they might sneak unwanted programs past you, and you will have to remove those programs manually. With the internet, dangers are always lurking around the corner, so please be careful, be aware and educate yourself. When our computer systems and the internet are running smoothly, the beauty of this machine becomes very evident. This is the largest collaboration of people in human history. With so many contributors from all over the world, we now have more knowledge and information at our fingertips than ever before; our potential is limitless. Open Source - Operating Systems

Software Repository is a storage location from which software packages may be retrieved and installed on a computer.

Free Software Info (wiki) - Free Software Foundation - General Public License - Free BSD - Jolla (wiki) - Hadoop Apache - Open Mind - Software Geek - Word and Text Processing Software

New Computers - Sadly, new PCs are loaded with a lot of bogus software and programs that you don't need. Removing them can be a challenge, but it's absolutely necessary if you want your PC to run smoothly, without all those annoying distractions that slow it down.

Lojack For Laptops (amazon)

Tired and disgusted with the Windows 8 dysfunctional operating system interface? Download Classic Shell to make your computer work like XP and find things again, or just update your Windows 8.0 to Windows 8.1, because 8.1 is definitely better than 8.0, but still not perfect yet.

Oasis Websoft advances software by providing superior solutions for web applications, web sites and enterprise software, committed to building infrastructure that will ensure that the West African sub-region is not left behind in the continuous evolution of information technology.

FAWN (Fast Array of Wimpy Nodes) is a fast, scalable, and energy-efficient cluster architecture for data-intensive computing.

BlueStacks is currently the best way to run Android apps on Windows. It doesn’t replace your entire operating system. Instead, it runs Android apps within a window on your Windows desktop. This allows you to use Android apps just like any other program.

Utility Software is system software designed to help analyze, configure, optimize or maintain a computer. Utility software, along with operating system software, is a type of system software used to support the computer infrastructure, distinguishing it from application software which is aimed at directly performing tasks that benefit ordinary users.

Service-Oriented Architecture is an architectural pattern in computer software design in which application components provide services to other components via a communications protocol, typically over a network. The principles of service-orientation are independent of any vendor, product or technology. A service is a self-contained unit of functionality, such as retrieving an online bank statement. By that definition, a service is an operation that may be discretely invoked. However, in the Web Services Description Language (WSDL), a service is an interface definition that may list several discrete services/operations, and elsewhere the term service is used for a component that is encapsulated behind an interface; the term is widely ambiguous. Services can be combined to provide the functionality of a large software application. SOA makes it easier for software components on computers connected over a network to cooperate. Every computer can run any number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself. SOA is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations.
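
Service-orientation is easier to see in code than in prose. Below is a minimal Python sketch of the idea, with invented service names and functions: each service is a self-contained unit registered behind a uniform interface and invoked by name, so callers need no knowledge of its implementation. Real SOA adds a network protocol such as HTTP; that plumbing is omitted here.

    # A toy service registry: services are discovered by name and
    # invoked through one uniform interface.
    registry = {}

    def service(name):
        def register(fn):
            registry[name] = fn
            return fn
        return register

    @service("statement")
    def bank_statement(account):
        # a self-contained unit of functionality
        return f"statement for account {account}"

    @service("report")
    def monthly_report(account):
        # services can be combined into larger applications
        return registry["statement"](account) + ", summarized monthly"

    print(registry["report"]("12-345"))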



Integrated Circuits - IC's


The first integrated circuit, handmade by Jack Kilby on September 12, 1958 (shown on the right). And now, almost 60 years later...

Integrated Circuit is a set of electronic circuits on one small flat piece (or "chip") of semiconductor material, normally silicon. The integration of large numbers of tiny transistors into a small chip resulted in circuits that are orders of magnitude smaller, cheaper, and faster than those constructed of discrete electronic components. Integrated Circuit Design (wiki).

CMOS or Complementary metal–oxide–semiconductor, is a technology for constructing integrated circuits. CMOS technology is used in microprocessors, microcontrollers, static RAM, and other digital logic circuits. CMOS technology is also used for several analog circuits such as image sensors (CMOS sensor), data converters, and highly integrated transceivers for many types of communication.

Die in the context of integrated circuits is a small block of semiconducting material, on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (“diced”) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die.

Integrated Circuit Layout is the representation of an integrated circuit in terms of planar geometric shapes which correspond to the patterns of metal, oxide, or semiconductor layers that make up the components of the integrated circuit. When using a standard process—where the interaction of the many chemical, thermal, and photographic variables is known and carefully controlled—the behaviour of the final integrated circuit depends largely on the positions and interconnections of the geometric shapes. Using a computer-aided layout tool, the layout engineer—or layout technician—places and connects all of the components that make up the chip such that they meet certain criteria—typically: performance, size, density, and manufacturability. This practice is often subdivided between two primary layout disciplines: Analog and digital. The generated layout must pass a series of checks in a process known as physical verification. The most common checks in this verification process are design rule checking (DRC), layout versus schematic (LVS), parasitic extraction, antenna rule checking, and electrical rule checking (ERC). When all verification is complete, the data is translated into an industry-standard format, typically GDSII, and sent to a semiconductor foundry. The process of sending this data to the foundry is called tapeout because the data used to be shipped out on a magnetic tape. The foundry converts the data into another format and uses it to generate the photomasks used in a photolithographic process of semiconductor device fabrication. In the earlier, simpler, days of IC design, layout was done by hand using opaque tapes and films, much like the early days of printed circuit board (PCB) design. Modern IC layout is done with the aid of IC layout editor software, mostly automatically using EDA tools, including place and route tools or schematic-driven layout tools. The manual operation of choosing and positioning the geometric shapes is informally known as "polygon pushing". Hardware.

Computer chip close-up macro photo (on right).

Newly discovered semiconductor dynamics may help improve energy efficiency. The most common material for semiconductors is silicon, which is mined from the Earth and then refined and purified. But pure silicon doesn't conduct electricity, so the material is purposely and precisely adulterated by the addition of other substances known as dopants. Boron and phosphorus ions are common dopants added to silicon-based semiconductors that allow them to conduct electricity. But the amount of dopant added to a semiconductor matters -- too little dopant and the semiconductor won't be able to conduct electricity; too much and it becomes more like a non-conductive insulator.

World's first 1,000-Processor Chip. A microchip containing 1,000 independent programmable processors has been designed. The energy-efficient 'KiloCore' chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. The highest clock-rate processor ever designed.

Nanoelectronics potentially make microprocessor chips work 1,000 times faster. While most advanced electronic devices are powered by photonics -- which involves the use of photons to transmit information -- photonic elements are usually large in size, and this greatly limits their use in many advanced nanoelectronics systems. Plasmons, which are waves of electrons that move along the surface of a metal after it is struck by photons, hold great promise for disruptive technologies in nanoelectronics. They are comparable to photons in terms of speed (they also travel at the speed of light), and they are much smaller. This unique property of plasmons makes them ideal for integration with nanoelectronics. An innovative transducer can directly convert electrical signals into plasmonic signals, and vice versa, in a single step. By bridging plasmonics and nanoscale electronics, chips could potentially run faster with lower power losses. The plasmonic-electronic transducer is about 10,000 times smaller than optical elements, and the researchers believe it can be readily integrated into existing technologies and potentially used in a wide range of applications in the future.

Method identified to double computer processing speeds. Scientists introduce what they call 'simultaneous and heterogeneous multithreading' or SHMT. This system doubles computer processing speeds with existing hardware by simultaneously using graphics processing units (GPUs), hardware accelerators for artificial intelligence (AI) and machine learning (ML), or digital signal processing units to process information.

New chip opens door to AI computing at light speed. Engineers have developed a new chip that uses light waves, rather than electricity, to perform the complex math essential to training AI. The chip has the potential to radically accelerate the processing speed of computers while also reducing their energy consumption.

5 Nanometer refers to the 5 nanometer node, the technology node following the 7 nm node. Single-transistor 7 nm scale devices were first produced by researchers in the early 2000s, and in 2003 NEC produced a 5 nm transistor. On June 5, 2017, IBM revealed that they had created 5 nm silicon chips, using silicon nanosheets in a Gate-All-Around configuration (GAAFET), a break from the usual FinFET design.

Toward ever-more powerful microchips and supercomputers. A look at the process to extend 'Moore's law,' which has doubled the number of transistors that can be packed on a microchip roughly every two years, and develop new ways to produce more capable, efficient, and cost-effective chips. The PPPL scientists modeled what is called "atomic layer etching" (ALE), an increasingly critical fabrication step that aims to remove single atomic layers from a surface at a time. This process can be used to etch complex three-dimensional structures with critical dimensions that are thousands of times thinner than a human hair into a film on a silicon wafer.

Physicists show how frequencies can easily be multiplied without special circuitry. A new discovery by physicists could make certain components in computers and smartphones obsolete. The team has succeeded in directly converting frequencies to higher ranges in a common magnetic material without the need for additional components. Frequency multiplication is a fundamental process in modern electronics. Non-linear electronic circuits are typically used to generate the high-frequency gigahertz signals needed to operate today's devices. The team at MLU has now found a way to do this within a magnetic material without the electronic components that are usually used for this. Instead, the magnetization is excited by a low-frequency megahertz source. Using the newly discovered effect, the source generates several frequency components, each of which is a multiple of the excitation frequency. These cover a range of six octaves and reach up to several gigahertz. This is like hitting the lowest note on a piano while also hearing the corresponding harmonic tones of the higher octaves.
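
The general principle, that a nonlinear response to a low-frequency drive produces components at integer multiples of the drive frequency, can be demonstrated numerically. The Python sketch below uses a crude generic nonlinearity and a Fourier transform; it illustrates frequency multiplication in the abstract, not a model of the magnetic effect described above.

    import numpy as np

    # Drive a system with a pure 5 Hz tone, distort it, and inspect
    # the spectrum of the result.
    t = np.linspace(0, 1, 1000, endpoint=False)     # 1 second at 1000 Hz
    tone = np.sin(2 * np.pi * 5 * t)                # low-frequency excitation
    distorted = np.sign(tone)                       # a crude nonlinear response
    spectrum = np.abs(np.fft.rfft(distorted))
    strongest = np.sort(np.argsort(spectrum)[-3:])  # bin k corresponds to k Hz
    print(strongest)                                # [ 5 15 25]: multiples of 5 Hz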

Semiconductor Device Fabrication is the process used to create the integrated circuits that are present in everyday electrical and electronic devices. It is a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material. Silicon is almost always used, but various compound semiconductors are used for specialized applications. The entire manufacturing process, from start to packaged chips ready for shipment, takes six to eight weeks and is performed in highly specialized facilities referred to as fabs. For more advanced devices at modern nodes (as of 2017, the 14/10/7 nm nodes), fabrication can take up to 15 weeks, with 11-13 weeks being the industry average.

Atomically Thin, Two-Dimensional Transistors - ultrathin transistors for faster computer chips, built from two-dimensional materials with an insulator made of calcium fluoride.

Berkeley Lab-led research breaks major barrier with the Smallest Transistor Ever by creating gate only 1 nanometer long. High-end 20-nanometer-gate transistors now on the market. Molybdenum Disulfide (wiki).

Winged microchip is smallest-ever human-made flying structure. The size of a grain of sand, dispersed microfliers could monitor air pollution, airborne disease and environmental contamination. By studying the aerodynamics of wind-dispersed seeds, researchers developed a flying microchip (or 'microflier') that catches the wind and passively flies through the air. Packed with ultra-miniaturized technology, including sensors and wireless communication capabilities, these microfliers could be used to monitor air pollution, airborne disease and more.

Atomically thin, transition metal dichalcogenides could increase computer speed, memory by a million times. Transition metal dichalcogenides (TMDCs) possess optical properties that could be used to make computers run a million times faster and store information a million times more energy-efficiently, according to a new study. Transition Metal Dichalcogenide Monolayers (wiki).

Atom-thin transistor uses half the voltage of common semiconductors, boosts current density. The two-dimensional structure could be key for quantum computing, extending Moore's Law. Researchers report a new, two-dimensional transistor made of graphene and molybdenum disulfide that needs less voltage and can handle more current than today's semiconductors. A paper-thin gallium oxide transistor handles more than 8,000 volts.

Faster and more efficient information transfer. Physicists use antiferromagnetic rust to carry information over long distances at room temperature.

Chip-Sized, High-Speed Terahertz Modulator raises possibility of Faster Data Transmission.

Discovery of new nanowire assembly process could enable more powerful computer chips. Researchers have developed a technique to precisely manipulate and place nanowires with sub-micron accuracy. This discovery could accelerate the development of even smaller and more powerful computer chips. The innovative method uses novel tools, including ultra-thin filaments of polyethylene terephthalate or PET with tapered nanoscale tips that are used to pick up individual nanowires.

Quantum computing engineers set new standard in silicon chip performance. Engineers have substantially extended the time that their quantum computing processors can hold information, by more than 100 times compared to previous results. A team of researchers at UNSW Sydney has broken new ground in proving that 'spin qubits' -- properties of electrons representing the basic units of information in quantum computers -- can hold information for up to two milliseconds. The duration of time that qubits can be manipulated in increasingly complicated calculations, known as 'coherence time', is 100 times longer than previous benchmarks in the same quantum processor.

Computers Made of Genetic Material? HZDR researchers conduct electricity using DNA-based nanowires.

Semiconductor-free microelectronics are now possible, thanks to metamaterials.

Metamaterial is a material engineered to have a property that is not found in nature.

Strain Engineering refers to a general strategy employed in semiconductor manufacturing to enhance device performance. Performance benefits are achieved by modulating strain, as one example, in the transistor channel, which enhances electron mobility (or hole mobility) and thereby conductivity through the channel. Another example are semiconductor photocatalysts strain-engineered for more effective use of sunlight.

Fast-track strain engineering for speedy biomanufacturing. Using engineered microbes as microscopic factories has given the world steady sources of life-saving drugs, revolutionized the food industry, and allowed us to make sustainable versions of valuable chemicals previously made from petroleum. But behind each biomanufactured product on the market today is the investment of years of work and many millions of dollars in research and development funding. Scientists want to help the burgeoning industry reach new heights by accelerating and streamlining the process of engineering microbes to produce important compounds with commercial-ready efficiency.

Semiconductor-free microelectronics (youtube)

Superconducting nanowire memory cell, miniaturized technology.

Researchers trap atoms, forcing them to serve as photonic transistors. This groundbreaking research demonstrates a potential for quantum networks based on cold-atom integrated nanophotonic circuits. Researchers at Purdue University have trapped alkali atoms (cesium) on an integrated photonic circuit, which behaves like a transistor for photons (the smallest energy unit of light) similar to electronic transistors. These trapped atoms demonstrate the potential to build a quantum network based on cold-atom integrated nanophotonic circuits. The team, led by Chen-Lung Hung, associate professor of physics and astronomy at the Purdue University College of Science, published their discovery in the American Physical Society's Physical Review X.

Breakthrough in Circuit Design Makes Electronics More Resistant to Damage and Defects.

Researchers resurrect and improve a technique for detecting transistor defects. A traditional method gets a new lease on life and may provide a new standard for measuring electric current. Researchers have revived and improved a once-reliable technique to identify and count defects in transistors, the building blocks of modern electronic devices such as smartphones and computers.

2D materials that could make devices faster, smaller, and efficient nanomaterials that are only a few atoms in thickness.

Polaritons in layered two-dimensional materials.

Researchers pave the way for Ionotronic Nanodevices. Discovery helps develop new kinds of electrically switchable memories. Ionotronic devices rely on charge effects based on ions, instead of electrons or in addition to electrons.

Discovery of a topological semimetal phase coexisting with ferromagnetic behavior in Sr1-yMnSb2 (y~0.08). New magnet displays electronic charge carriers that have almost no mass. The magnetism brings with it an important symmetry breaking property -- time reversal symmetry, or TRS, breaking where the ability to run time backward would no longer return the system back to its starting conditions. The combination of relativistic electron behavior, which is the cause of much reduced charge carrier mass, and TRS breaking has been predicted to cause even more unusual behavior, the much sought after magnetic Weyl semimetal phase.

Topological Transistors and beyond-CMOS electronics. For the first time, the topological state in a topological insulator has been switched on and off using an electric field. Researchers proved this is possible at room temperature, which is necessary for any viable replacement to CMOS technology in everyday applications. Ultra-low energy electronics such as topological transistors would allow computing to continue to grow without being limited by available energy as we near the end of achievable improvements in traditional, silicon-based electronics (a phenomenon known as the end of Moore's Law). To be a viable alternative to current, silicon-based technology (CMOS), topological transistors must operate at room temperature (without the need for expensive supercooling), 'switch' between conducting (1) and non-conducting (0) states, and switch extremely rapidly, by application of an electric field. The stakes for information and communication technology (ICT) are high: the energy burnt in computation accounts for 8% of global electricity use, ICT energy use is doubling every decade, ICT contributes as much to climate change as the aviation industry, and Moore's Law, which has kept ICT energy in check for 50 years, will end in the next decade.

Engineers build LEGO-like artificial intelligence chip. The new design is stackable and reconfigurable, for swapping out and building on existing sensors and neural network processors. Engineers built a new artificial intelligence chip, with a view toward sustainable, modular electronics. The chip can be reconfigured, with layers that can be swapped out or stacked on, such as to add new sensors or updated processors.

Physicists develop Printable Organic Transistors. Scientists have come a step closer to the vision of a broad application of flexible, printable electronics. The team has succeeded in developing powerful vertical organic transistors with two independent control electrodes.

Print complete Large Scale Integrated Circuits with more than 100 organic electrochemical transistors. We can now place more than 1000 organic electrochemical transistors on an A4-sized plastic substrate, and can connect them in different ways to create different types of printed integrated circuits.

Carbon Nanotube Transistors Outperform Silicon, for first time.

Engineers use graphene as a "copy machine" to produce cheaper semiconductor wafers. In 2016, annual global semiconductor sales reached their highest-ever point, at $339 billion worldwide. In that same year, the semiconductor industry spent about $7.2 billion worldwide on wafers that serve as the substrates for microelectronics components, which can be turned into transistors, light-emitting diodes, and other electronic and photonic devices. A technique developed by MIT engineers may vastly reduce the overall cost of wafer technology and enable devices made from more exotic, higher-performing semiconductor materials than conventional silicon. It uses graphene -- single-atom-thin sheets of graphite -- as a sort of "copy machine" to transfer intricate crystalline patterns from an underlying semiconductor wafer to a top layer of identical material.

Reconfigurable Chaos-Based Microchips Offer Possible Solution to Moore’s Law. Nonlinear, chaos-based integrated circuits that enable computer chips to perform multiple functions with fewer transistors. The transistor circuit can be programmed to implement different instructions by morphing between different operations and functions. The potential of 100 morphable nonlinear chaos-based circuits doing work equivalent to 100 thousand circuits, or of 100 million transistors doing work equivalent to three billion transistors holds promise for extending Moore’s law.

Transistors can now both Process and Store information. Building a functional transistor integrated with ferroelectric RAM.

Advancement in thermoelectricity could light up the Internet of Things. Researchers have improved the efficiency of heat-to-electricity conversion in gallium arsenide semiconductor microstructures. By judicious spatial alignment of electrons within a two-dimensional electron gas system with multiple subbands, one can substantially enhance the power factor compared with previous iterations of analogous systems. This work is an important advance in modern thermoelectric technology and will benefit the global integration of the Internet of Things. We demonstrate a two-dimensional electron gas (2DEG) system with multiple subbands that uses gallium arsenide. The system is different from conventional methods of thermoelectric conversion.

Organic electronics lead to new ways to sense light. Researchers from Osaka University have developed a soft, flexible, and wireless optical sensor based on carbon nanotubes and organic transistors formed on ultra-thin polymer film for new imaging applications and nondestructive analysis methods.

Extreme Ultraviolet Lithography is a next-generation lithography technology using a range of extreme ultraviolet wavelengths, roughly spanning a 2% FWHM bandwidth about 13.5 nm. In August 2019, Samsung announced the use of EUV for its own 7nm Exynos 9825 chip. However, yield issues have been a concern. ASML, the sole EUV tool supplier, reported in June 2019 that pellicles required for critical layers still required improvements. In September 2019, Huawei announced a 5G version of its Kirin 990 chip that was made in a TSMC 7nm process with EUV, as well as a non-5G version that was made in a conventional TSMC 7nm process; the first phone to use the Kirin 990 5G chip will ship starting 1st of November 2019; in October, TSMC announced products were shipping. TSMC had indicated in the first quarter of 2019 that EUV-generated N7+ revenue would amount to no more than 1 billion TWD (32 million USD) in 2019. For 2020, more focus is being placed on more extensive use of EUV for "5nm" or "N5," although cost per transistor is still a concern. While EUV has entered its production phase, the volume is supported by fewer than 50 systems worldwide; by comparison, as of 2013, over 200 Deep Ultraviolet Lithography (DUV) immersion systems were already deployed. ASML.

Lithography is a method of printing originally based on the immiscibility of oil and water. The printing is from a stone (lithographic limestone) or a metal plate with a smooth surface. It was invented in 1796 by German author and actor Alois Senefelder as a cheap method of publishing theatrical works. Lithography can be used to print text or artwork onto paper or other suitable material.

Photolithography also called optical lithography or UV lithography, is a process used in microfabrication to pattern parts of a thin film or the bulk of a substrate (also called a wafer). It uses light to transfer a geometric pattern from a photomask (also called an optical mask) to a photosensitive (that is, light-sensitive) chemical photoresist on the substrate. A series of chemical treatments then either etches the exposure pattern into the material or enables deposition of a new material in the desired pattern upon the material underneath the photoresist. In complex integrated circuits, a CMOS wafer may go through the photolithographic cycle as many as 50 times. Photolithography shares some fundamental principles with photography in that the pattern in the photoresist etching is created by exposing it to light, either directly (without using a mask) or with a projected image using a photomask. This procedure is comparable to a high precision version of the method used to make printed circuit boards. Subsequent stages in the process have more in common with etching than with lithographic printing. This method can create extremely small patterns, down to a few tens of nanometers in size. It provides precise control of the shape and size of the objects it creates and can create patterns over an entire surface cost-effectively. Its main disadvantages are that it requires a flat substrate to start with, it is not very effective at creating shapes that are not flat, and it can require extremely clean operating conditions. Photolithography is the standard method of printed circuit board (PCB) and microprocessor fabrication.

Five Times the Computing Power - Superconductors

Field-Programmable Gate Array is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). (Circuit diagrams were previously used to specify the configuration, as they were for ASICs, but this is increasingly rare.) FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.
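
The "configured after manufacturing" idea can be sketched in software. In this hypothetical Python model, a logic block is just a small lookup table (LUT): loading a different truth table re-programs the same block into an AND gate or an XOR gate, which is essentially what an FPGA bitstream does for thousands of blocks at once (real FPGAs add routing, flip-flops and an HDL toolchain).

    # A 2-input lookup table: the "configuration" is the truth table.
    def make_lut(truth_table):
        def logic_block(a, b):
            return truth_table[(a << 1) | b]   # index by the input bits
        return logic_block

    AND = make_lut([0, 0, 0, 1])   # outputs for inputs 00, 01, 10, 11
    XOR = make_lut([0, 1, 1, 0])
    print(AND(1, 1), XOR(1, 0))    # 1 1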

Fast Fourier Transform algorithm computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IFFT). Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain and vice versa. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors.
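
As a concrete sketch, here is the classic radix-2 Cooley-Tukey FFT in Python. It recursively splits the DFT into even- and odd-indexed halves, which is where the sparse-factor speedup comes from, and the result is checked against NumPy's own FFT.

    import numpy as np

    def fft(x):
        # x must have a power-of-two length
        n = len(x)
        if n == 1:
            return x
        even, odd = fft(x[0::2]), fft(x[1::2])        # split and recurse
        twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
        return np.concatenate([even + twiddle * odd,
                               even - twiddle * odd])

    signal = np.random.rand(8)
    print(np.allclose(fft(signal), np.fft.fft(signal)))   # True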

Redox-Based Resistive Switching Random Access Memory (ReRAM) - A team of international scientists have found a way to make memory chips perform computing tasks, which is traditionally done by computer processors like those made by Intel and Qualcomm. Currently, all computer processors on the market use the binary system, which is composed of two states -- either 0 or 1. For example, the letter A will be processed and stored as 01000001, an 8-bit character. However, the prototype ReRAM circuit built by Asst Prof Chattopadhyay and his collaborators processes data in four states instead of two. For example, it can store and process data as 0, 1, 2, or 3, known as a ternary number system. Because ReRAM uses different electrical resistances to store information, it could be possible to store the data in an even higher number of states, hence speeding up computing tasks beyond current limitations. In current computer systems, all information has to be translated into a string of zeros and ones before it can be processed.
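
The saving from multi-state cells is easy to check. The short Python sketch below writes the character A (code 65) in base 2, base 3 and base 4: seven binary digits versus only four digits in the higher bases.

    # Compare how many digits the same value needs in different bases.
    def to_base(n, base):
        digits = ""
        while n:
            digits = str(n % base) + digits
            n //= base
        return digits or "0"

    n = ord("A")            # 65, stored as 01000001 in 8-bit binary
    print(to_base(n, 2))    # 1000001 -- seven binary digits
    print(to_base(n, 3))    # 2102 -- four ternary digits
    print(to_base(n, 4))    # 1001 -- four digits with four-state cells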

Parallel Computing: 18-core credit-card-sized computer.

Very Large Scale Integration is the process of creating an integrated circuit (IC) by combining millions of MOS transistors onto a single chip. VLSI began in the 1970s when MOS integrated circuit chips were widely adopted, enabling complex semiconductor and telecommunication technologies to be developed. The microprocessor and memory chips are VLSI devices. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI lets IC designers add all of these into one chip.

Neuromorphic Engineering also known as neuromorphic computing, is a concept describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. Very-Large-Scale Integration is the current level of computer microchip miniaturization and refers to microchips containing in the hundreds of thousands of transistors. LSI (large-scale integration) meant microchips containing thousands of transistors. Earlier, MSI (medium-scale integration) meant a microchip containing hundreds of transistors and SSI (small-scale integration) meant transistors in the tens.

Memristor or memory resistor, is a hypothetical non-linear passive two-terminal electrical component relating electric charge and magnetic flux linkage. According to the characterizing mathematical relations, the memristor would hypothetically operate in the following way: the memristor's electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history -- the so-called non-volatility property. When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again. A memristor is capable of altering its resistance and storing multiple memory states, retaining data by 'remembering' the amount of charge that has passed through it -- potentially resulting in computers that switch on and off instantly and never forget. New memristor technology can store up to 128 discernible memory states per switch, almost four times more than previously reported.
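
That history-dependent resistance can be mimicked with a toy model. The Python sketch below loosely follows the linear-drift picture of a memristor; R_ON, R_OFF and Q_MAX are illustrative numbers, not measured device data.

    import numpy as np

    R_ON, R_OFF, Q_MAX = 100.0, 16000.0, 1e-4   # illustrative values only

    def resistance(q):
        w = np.clip(q / Q_MAX, 0.0, 1.0)        # internal state in [0, 1]
        return R_OFF + (R_ON - R_OFF) * w       # drifts from R_OFF toward R_ON

    q = 0.0                                     # total charge through the device
    for current, seconds in [(1e-6, 30), (0.0, 60), (1e-6, 30)]:
        q += current * seconds                  # the charge is the memory
        print(f"{seconds}s at {current}A -> R = {resistance(q):.0f} ohms")

With no current flowing (the middle step), the accumulated charge and therefore the resistance stay put: that is the non-volatility described above.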

Memristors Cut Energy Consumption by a Factor of 100. A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.

Transistors - Phototransistor - Superconductors

Memristive devices by combining incipient ferroelectrics and graphene. Scientists are working to create neuromorphic computers, with a design based on the human brain. A crucial component is a memristive device, the resistance of which depends on the history of the device - just like the response of our neurons depends on previous input. Materials scientists analyzed the behavior of strontium titanium oxide, a platform material for memristor research and used the 2D material graphene to probe it.

New technique for magnetization switching that is nearly 100 times faster than state-of-the-art spintronic devices. The advance could lead to the development of ultrafast magnetic memory for computer chips that would retain data even when there is no power. (the process used to 'write' information into magnetic memory).

Researchers harness 2D magnetic materials for energy-efficient computing. An MIT team precisely controlled an ultrathin magnet at room temperature, which could enable faster, more efficient processors and computer memories.

Light-induced Meissner effect. Researchers have developed a new experiment capable of monitoring the magnetic properties of superconductors at very fast speeds. Superconductivity is a fascinating phenomenon which allows a material to sustain an electrical current without any loss. This collective quantum behavior of matter only appears in certain conductors at temperatures far below ambient. A number of modern studies have investigated this behavior in so-called non-equilibrium states, that is, in situations in which the material is pushed away from thermal equilibrium. In these conditions, it appears that at least some of the features of superconductivity can be recreated even at ambient temperatures. Such non-equilibrium high temperature superconductivity, shown to exist under irradiation with a laser pulse, may be useful for applications different from the ones envisaged for the stationary version of superconductivity, for example in high-speed devices controlled by laser pulses. This phenomenon has been termed "light-induced superconductivity," signaling an analogy with its equilibrium counterpart. The experiment was made possible by placing a spectator crystal in close vicinity of the sample under investigation and using it to measure the local magnetic field strength. The crystal reflects changes in the magnetic field as changes in the polarization state of a femtosecond laser pulse. Due to the short duration of the probe pulse, the researchers can reconstruct the time evolution of the magnetic field surrounding the YBa2Cu3O6.48 sample with sub-picosecond resolution and unprecedented sensitivity. The Meissner Effect is the expulsion of a magnetic field from a superconductor during its transition to the superconducting state when it is cooled below the critical temperature. This expulsion will repel a nearby magnet.

New Fermi arcs could provide a new path for electronics. Newly discovered Fermi arcs that can be controlled through magnetism could be the future of electronics based on electron spins. During a recent investigation of the rare-earth monopnictide NdBi (neodymium-bismuth), researchers discovered a new type of Fermi arc that appeared at low temperatures when the material became antiferromagnetic, i.e., neighboring spins point in opposite directions.

Mass production of revolutionary computer memory moves closer with ULTRARAM™ on silicon wafers for the first time. A pioneering type of patented computer memory known as ULTRARAM™ has been demonstrated on silicon wafers in what is a major step towards its large-scale manufacture. ULTRARAM™ is a novel type of memory with extraordinary properties. It combines the non-volatility of a data storage memory, like flash, with the speed, energy-efficiency and endurance of a working memory, like DRAM. To do this it utilizes the unique properties of compound semiconductors, commonly used in photonic devices such as LEDs, laser diodes and infrared detectors, but not in digital electronics, which is the preserve of silicon.

Engineers put tens of thousands of artificial brain synapses on a single chip. The design could advance the development of small, portable AI devices. Engineers have designed a 'brain-on-a-chip,' smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors -- silicon-based components that mimic the information-transmitting synapses in the human brain.

Study opens route to Ultra-Low-Power Microchips. Innovative approach to controlling magnetism could lead to next-generation memory and logic devices.

Illinois team advances GaN-on-Silicon technology towards scalable high electron mobility transistors.

Small tilt in Magnets makes them viable Memory Chips - Nano Technology

T-Rays will “speed up” computer memory by a factor of 1,000.

Germanium Tin Laser Could Increase Processing Speed of Computer Chips.

Computer Chip Vulnerabilities Discovered.

Fast, Flexible Ionic Transistors for Bioelectronic Devices. Researchers have developed the first biocompatible internal-ion-gated organic electrochemical transistor (IGT) that is fast enough to enable real-time signal sensing and stimulation of brain signals.

Synaptic Transistor is an electrical device that can learn in ways similar to a neural synapse. It optimizes its own properties for the functions it has carried out in the past. The device mimics the behavior of the property of neurons called spike-timing-dependent plasticity, or STDP.

Light Speed

Virtually Energy-Free Superfast Computing invented by scientists using light pulses.

Optical switching at record speeds opens door for ultrafast, light-based electronics and computers. Imagine a home computer operating 1 million times faster than the most expensive hardware on the market. Now, imagine that being the industry standard. Physicists hope to pave the way for that reality. Semiconductor-based transistors are in all of the electronics that we use today (as of 2023). Semiconductors in electronics rely on electrical signals transmitted via microwaves to switch -- either allow or prevent -- the flow of electricity and data, represented as either "on" or "off." The future of electronics will instead be based on using laser light to control electrical signals, opening the door for the establishment of "optical transistors" and the development of ultrafast optical electronics.

Lightwave electronics for faster compute speeds. Researchers from the University of Rochester have created lightwave-based logic gates. Lightwave electronics is a technique in which laser pulses are used to guide and speed up electronics. The concept is that an ultrashort laser pulse's extremely high-speed oscillating electric field can excite electrons in an incident material and create an effective current. A computer that runs at one petahertz could essentially complete one quadrillion computational operations each second. The petahertz is an SI unit of frequency equal to 10^15 hertz.

Silicon Photonics is the study and application of photonic systems which use silicon as an optical medium. The silicon is usually patterned with sub-micrometre precision, into microphotonic components. These operate in the infrared, most commonly at the 1.55 micrometre wavelength used by most fiber optic telecommunication systems. The silicon typically lies on top of a layer of silica in what (by analogy with a similar construction in microelectronics) is known as silicon on insulator (SOI).

Silicon Carbide is a compound of silicon and carbon with chemical formula SiC. It occurs in nature as the extremely rare mineral moissanite. Synthetic silicon carbide powder has been mass-produced since 1893 for use as an abrasive. Grains of silicon carbide can be bonded together by sintering to form very hard ceramics that are widely used in applications requiring high endurance, such as car brakes, car clutches and ceramic plates in bulletproof vests. Electronic applications of silicon carbide such as light-emitting diodes (LEDs) and detectors in early radios were first demonstrated around 1907. SiC is used in semiconductor electronics devices that operate at high temperatures or high voltages, or both. Large single crystals of silicon carbide can be grown by the Lely method; they can be cut into gems known as synthetic moissanite. Silicon carbide with high surface area can be produced from SiO2 contained in plant material.

ORNL researchers break data transfer efficiency record with the transfer of information via superdense coding, a process by which the properties of particles like photons, protons and electrons are used to store as much information as possible.

Superdense Coding is a technique used to send two bits of classical information using only one qubit, which is a unit of quantum information.

Quantum Computing makes direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data are encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, which can be in superpositions of states.
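
The superdense coding protocol mentioned above is small enough to simulate with a 4-element state vector. In this textbook Python sketch (not ORNL's experiment), Alice and Bob share a Bell pair; Alice applies one of four operations to her qubit alone to encode two classical bits, and Bob's measurement in the Bell basis recovers both bits.

    import numpy as np

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])              # bit flip
    Z = np.array([[1, 0], [0, -1]])             # phase flip

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # shared state (|00> + |11>)/sqrt(2)

    encodings = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

    bell_basis = {                              # Bob's measurement basis
        (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
        (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
        (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
        (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
    }

    for bits, gate in encodings.items():
        state = np.kron(gate, I) @ bell         # Alice acts on her qubit only
        probs = {k: abs(v @ state) ** 2 for k, v in bell_basis.items()}
        print(bits, "->", max(probs, key=probs.get))   # decodes both bits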

A Single Atom can store One Bit of Binary information. When the holmium atoms were placed on a special surface made of magnesium oxide, they naturally oriented themselves with a magnetic north and south pole -- just like regular magnets have -- pointing either straight up or down, and remained that way in a stable condition. What's more, the researchers could make the atoms flip by giving them a zap with a scanning tunneling microscope that has a needle with a tip just one atom wide. Orientation conveys binary information -- either a one or a zero. The experiment shows that they could store one bit of information in just one atom. If this kind of technology could be scaled up, it theoretically could hold 80,000 gigabytes of information in just a square inch. A credit-card-size device could hold 35 million songs. Atoms could be placed within just about a nanometer of each other without interfering with their neighbors, meaning they could be packed densely. This tech won't show up in your smartphone anytime soon. For starters, the experiment required a very, very chilly temperature: about 1 kelvin, which is colder than -450 degrees Fahrenheit. That's pretty energy intensive, and not exactly practical in most data storage settings.
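
The density figure follows directly from the spacing: one bit per atom, atoms about a nanometer apart. A quick check of the arithmetic in Python:

    # One bit per square nanometer, converted to gigabytes per square inch.
    NM_PER_INCH = 2.54e7                        # 1 inch = 25.4 mm = 2.54e7 nm
    bits_per_sq_inch = NM_PER_INCH ** 2         # one bit per nm^2
    gigabytes = bits_per_sq_inch / 8 / 1e9      # bits -> bytes -> GB
    print(f"{gigabytes:,.0f} GB per square inch")   # about 80,000 GB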

Single-Atom Transistor is the world's smallest. This quantum electronics component switches electrical current by controlled repositioning of a single atom, now also in the solid state in a gel electrolyte. The single-atom transistor works at room temperature and consumes very little energy, which opens up entirely new perspectives for information technology.

Single Molecules Can Work as Reproducible Transistors—at Room Temperature.

Diamond-Based Circuits can take the Heat for advanced applications. Researchers have developed a hydrogenated diamond circuit operational at 300 degrees Celsius. When power generators transfer electricity, they lose almost 10 percent of the generated power. To address this, scientists are researching new diamond semiconductor circuits to make power conversion systems more efficient using hydrogenated diamond. These circuits can be used in diamond-based electronic devices that are smaller, lighter and more efficient than silicon-based devices.

Controlling waves in magnets with superconductors for the first time. Quantum physicists have shown that it's possible to control and manipulate spin waves on a chip using superconductors for the first time. These tiny waves in magnets may offer an alternative to electronics in the future, interesting for energy-efficient information technology or connecting pieces in a quantum computer, for example. The breakthrough primarily gives physicists new insight into the interaction between magnets and superconductors.

Superatomic semiconductor sets a speed record. A team of chemists describes the fastest and most efficient semiconductor yet: a superatomic material called Re6Se8Cl2. The atomic structure of any material vibrates, which creates quantum particles called phonons. Phonons in turn cause the particles -- either electrons or electron-hole pairs called excitons -- that carry energy and information around electronic devices to scatter in a matter of nanometers and femtoseconds. This means that energy is lost in the form of heat, and that information transfer has a speed limit.

Transforming biology to design next-generation computers, using a surprise ingredient. A group has found ways of transforming structures that occur naturally in cell membranes to create other architectures, like parallel 1nm-wide line segments, more applicable to computing.

The moment you turn on your pc, what you see is the work of thousands and thousands of people, educated in the fields of engineering, science, math and physics, just to name a few. And that's just the software. The hardware also took the work of thousands of skilled people. Covering many different industries, which adds the work of thousands of more people. I'm also a product that took millions of people over thousands of years to make, just to get me here in this moment in time.

Researchers Repurpose failed cancer drug into Printable Semiconductor. Biological molecules once considered for cancer treatment are now being repurposed as organic semiconductors for use in chemical sensors and transistors.

Computer Industry is the range of businesses involved in designing computer hardware and computer networking infrastructures, developing computer software, manufacturing computer components, and providing information technology (IT) services. Software Industry includes businesses for development, maintenance and publication of software that are using different business models, also includes software services, such as training, documentation, consulting and data recovery.

Source of error in an industry-standard calibration method that could lead microchip manufacturers to lose a million dollars or more in a single fabrication run. The error occurs when measuring very small flows of exotic gas mixtures. Small gas flows occur during chemical vapor deposition (CVD), a process that occurs inside a vacuum chamber when ultra-rarefied gases flow across a silicon wafer to deposit a solid film. CVD is widely used to fabricate many kinds of high-performance microchips containing as many as several billion transistors. CVD builds up complex 3D structures by depositing successive layers of atoms or molecules; some layers are only a few atoms thick. A complementary process called plasma etching also uses small flows of exotic gases to produce tiny features on the surface of semiconducting materials by removing small amounts of silicon.



The First Computer - History of Computers


Antikythera Mechanism is a 2,100-year-old ancient analog computer. An international team of scientists has now read about 3,500 characters of explanatory text on it. The device was used to predict astronomical positions and eclipses for calendar and astrological purposes decades in advance. It could also be used to track the four-year cycle of athletic games, similar to an Olympiad, the cycle of the ancient Olympic Games. The device was housed in the remains of a 34 cm × 18 cm × 9 cm (13.4 in × 7.1 in × 3.5 in) wooden box. Detailed imaging of the mechanism suggests that it had 37 gear wheels enabling it to follow the movements of the Moon and the Sun through the zodiac, to predict eclipses and even to model the irregular orbit of the Moon, where the Moon's velocity is higher in its perigee than in its apogee. It was a complex clockwork mechanism composed of 37 meshing bronze gears, some turning clockwise and some counterclockwise. The gear teeth were in the form of equilateral triangles with an average circular pitch of 1.6 mm, an average wheel thickness of 1.4 mm and an average air gap between gears of 1.2 mm. The teeth were probably created from a blank bronze round using hand tools; this is evident because not all of them are even. The Sun gear is b1/b2 and b2 has 64 teeth. The artefact was retrieved from the sea in 1901, and identified on 17 May 1902 as containing a gear by archaeologist Valerios Stais, among wreckage retrieved from a shipwreck off the coast of the Greek island Antikythera. The instrument is believed to have been designed and constructed by Greek scientists and has been variously dated to about 87 BC, or between 150 and 100 BC, or to 205 BC, or to within a generation before the shipwreck, which has been dated to approximately 70-60 BC.

Punch Card - Computer Types - Computer History

Difference Engine is an automatic mechanical calculator designed to tabulate polynomial functions. It was designed in the 1820s and was first created by Charles Babbage. The name, the difference engine, is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Some of the most common mathematical functions used in engineering, science and navigation were, and still are, computable with the difference engine's capability of computing logarithmic and trigonometric functions, which can be approximated by polynomials, so a difference engine can compute many useful tables of numbers. Ada Lovelace.
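
The method of differences reduces polynomial tabulation to pure addition, which is all the engine's gear trains had to perform. A minimal Python sketch, tabulating p(x) = x^2 + x + 41 (a polynomial often used to demonstrate the engine; its second difference is the constant 2):

    # Tabulate p(x) = x^2 + x + 41 using additions only, the way a
    # difference engine does.
    value = 41   # p(0)
    d1 = 2       # first difference, p(1) - p(0)
    d2 = 2       # second difference, constant for a quadratic
    for x in range(10):
        print(x, value)      # 41, 43, 47, 53, 61, ...
        value += d1          # next table entry by addition
        d1 += d2             # next first difference by addition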

Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, a design for a simpler mechanical computer. The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the logical structure of the Analytical Engine was essentially the same as that which has dominated computer design in the electronic era. The Analytical Engine is one of the most successful achievements of Charles Babbage. Babbage was never able to complete construction of any of his machines due to conflicts with his chief engineer and inadequate funding. It was not until the late 1940s that the first general-purpose computers were actually built, more than a century after Babbage had proposed the pioneering Analytical Engine in 1837. Difference Engine (youtube) - Women Coders.

Charles Babbage was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage is best remembered for originating the concept of a digital programmable computer. (Born December 26, 1791 - Died October 18, 1871).

Analog Computer is a type of computer that uses the continuously changeable aspects of physical phenomena such as electrical network, mechanics, or hydraulics quantities to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude. Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Systems for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Mechanical Calculators.

Because an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines. Unlike digital signal processing, analog computers do not suffer from quantization noise, but they are limited by analog noise.

Mechanical Computer is built from mechanical components such as levers and gears, rather than electronic components. The most common examples are adding machines and mechanical counters, which use the turning of gears to increment output displays. More complex examples could carry out multiplication and division—Friden used a moving head which paused at each column—and even differential analysis. One model sold in the 1960s calculated square roots. Mechanical computers can be either analog, using smooth mechanisms such as curved plates or slide rules for computations; or digital, which use gears.

Differential Analyser is a mechanical analogue computer designed to solve differential equations by integration, using wheel-and-disc mechanisms to perform the integration. It was one of the first advanced computing devices to be used operationally. The original machines could not add, but then it was noticed that if the two wheels of a rear differential are turned, the drive shaft will compute the average of the left and right wheels. A simple gear ratio of 1:2 then enables multiplication by two, so addition (and subtraction) are achieved. Multiplication is just a special case of integration, namely integrating a constant function.
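
The adding trick is worth a two-line sketch: the drive shaft of a differential turns at the average of the two wheel speeds, and the 1:2 gear ratio doubles that average back into a sum.

    # Mechanical addition with a differential gear, in miniature.
    def differential_add(left, right):
        average = (left + right) / 2   # what the drive shaft computes
        return 2 * average             # what the 1:2 gearing recovers

    print(differential_add(3, 5))      # 8.0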

Curta is a small mechanical calculator developed by Curt Herzstark in the 1930s in Vienna, Austria. By 1938, he had filed a key patent, covering his complemented stepped drum, Deutsches Reichspatent (German National Patent) No. 747073. This single drum replaced the multiple drums, typically around 10 or so, of contemporary calculators, and it enabled not only addition, but subtraction through nines' complement math, essentially subtracting by adding. The nines' complement breakthrough eliminated the significant mechanical complexity created when "borrowing" during subtraction. This drum would prove to be the key to the small, hand-held mechanical calculator the Curta would become. Curtas were considered the best portable calculators available until they were displaced by electronic calculators in the 1970s.
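
Nines'-complement subtraction, the Curta's "subtracting by adding" trick, is easy to sketch in Python. This toy version works on a fixed number of digits and assumes a non-negative result; the +1 stands in for the end-around carry.

    DIGITS = 6

    def nines_complement(n):
        return int("9" * DIGITS) - n             # e.g. 999999 - n

    def subtract_by_adding(a, b):                # computes a - b for a >= b
        total = a + nines_complement(b) + 1      # only additions are needed
        return total % 10 ** DIGITS              # drop the overflow carry

    print(subtract_by_adding(532, 67))           # 465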

Turing Machine is an abstract machine that manipulates symbols on a strip of tape according to a table of rules; to be more exact, it is a mathematical model of computation that defines such a device. Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. It was first described by Alan Turing in 1936.
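
Because the model needs only a tape, a head, a current state and a rule table, a simulator fits in a few lines of Python. The hypothetical machine below flips every bit it reads and halts at the blank symbol:

    # Rules map (state, symbol) -> (symbol to write, head move, next state).
    rules = {
        ("run", "0"): ("1", +1, "run"),
        ("run", "1"): ("0", +1, "run"),
        ("run", "_"): ("_",  0, "halt"),   # blank: stop
    }

    tape = list("0110") + ["_"]
    head, state = 0, "run"
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move

    print("".join(tape))   # 1001_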

Konrad Zuse built the world's first programmable computer, the functional, program-controlled, Turing-complete Z3, completed in May 1941 and performing roughly one operation per second. Thanks to this machine and its predecessors, Zuse has often been regarded as the inventor of the modern computer. Zuse was also noted for the S2 computing machine, considered the first process control computer. He founded one of the earliest computer businesses in 1941, producing the Z4, which became the world's first commercial computer. From 1943 to 1945 he designed the first high-level programming language, Plankalkül. In 1969, Zuse suggested the concept of a computation-based universe in his book Rechnender Raum (Calculating Space).

Harvard Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC), was a general-purpose electromechanical computer used in the war effort during the last part of World War II. One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann.

Pioneers in Computer Science (wiki)

Abstract Machine, also called an abstract computer, is a theoretical model of a computer hardware or software system used in automata theory. Abstraction of computing processes is used in both the computer science and computer engineering disciplines and usually assumes a discrete time paradigm.

Autonomous Machines (drones)

Jacquard Loom is a device fitted to a power loom that simplifies the process of manufacturing textiles with such complex patterns as brocade, damask and matelassé. It was invented by Joseph Marie Jacquard in 1804. The loom was controlled by a "chain of cards"; a number of punched cards laced together into a continuous sequence. Multiple rows of holes were punched on each card, with one complete card corresponding to one row of the design.

Computer Programming in the Punched Card Era: from the invention of computer programming languages up to the mid-1980s, many if not most computer programmers created, edited and stored their programs line by line on punched cards. The practice was nearly universal with IBM computers of the era. A punched card is a flexible write-once medium that encodes data, most commonly 80 characters per card. Groups or "decks" of cards form programs and collections of data. Users could create cards using a desk-sized keypunch with a typewriter-like keyboard; a typing error generally necessitated repunching an entire card. In some companies, programmers wrote information on special forms called coding sheets, taking care to distinguish the digit zero from the letter O, the digit one from the letter I, eight from B, two from Z, and so on. These forms were then converted to cards by keypunch operators and, in some cases, checked by verifiers. Programs were edited by reorganizing the cards and removing or replacing the lines that had changed, and backed up by duplicating the deck or writing it to magnetic tape.

Keypunch is a device for precisely punching holes into stiff paper cards at specific locations as determined by keys struck by a human operator.

Punched Card is a piece of stiff paper that can be used to contain digital information represented by the presence or absence of holes in predefined positions. The information might be data for data processing applications or, in earlier examples, used to directly control automated machinery. The terms IBM card and Hollerith card specifically refer to punched cards used in semiautomatic data processing. Punched cards were widely used through much of the 20th century in what became known as the data processing industry, where specialized and increasingly complex unit record machines, organized into data processing systems, used punched cards for data input, output, and storage. Many early digital computers used punched cards, often prepared using keypunch machines, as the primary medium for input of both computer programs and data. While punched cards are now obsolete as a recording medium, as of 2012, some voting machines still used punched cards to record votes.
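As a sketch of the idea that holes in predefined positions encode data, here is a toy 12-row by 80-column card in plain Python. The per-column encoding below is a simplified stand-in (each column holds one character's ASCII bits), not the real Hollerith code:

    ROWS, COLS = 12, 80  # the classic 80-column card layout

    def punch_card(text):
        card = [[False] * COLS for _ in range(ROWS)]
        for col, ch in enumerate(text[:COLS]):
            for row in range(8):  # 8 ASCII bits fit easily in 12 rows
                card[row][col] = bool(ord(ch) >> row & 1)
        return card

    def read_card(card):
        codes = (sum(card[row][col] << row for row in range(8)) for col in range(COLS))
        return "".join(chr(c) if c else " " for c in codes).rstrip()

    print(read_card(punch_card("HELLO, WORLD")))  # -> "HELLO, WORLD"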

Even back in 1968, workers were worried about being replaced by technology | RetroFocus (youtube) - An episode of the Australian TV program Four Corners predicted that computers would soon take over and change the workforce.

First Apple Computer: the Apple I (1976).

Monochrome Monitor or green screen was the common name for a monochrome monitor using a green "P1" phosphor screen, a CRT computer monitor that was very common in the early days of computing, from the 1960s through the 1980s, before color monitors became popular. Monochrome monitors have only one color of phosphor (mono means "one" and chrome means "color"). Pixel for pixel, monochrome monitors produce sharper text and images than color CRT monitors, because a monochrome monitor is made up of a continuous coating of phosphor and the sharpness can be controlled by focusing the electron beam, whereas on a color monitor each pixel is made up of three phosphor dots (one red, one blue, one green) separated by a mask. Monochrome monitors were used in almost all dumb terminals and are still widely used in text-based applications such as computerized cash registers and point-of-sale systems because of their superior sharpness and enhanced readability.

Computer History Chart. In 1983, Compaq came out with a portable computer that sold over 50,000 units in its first year, while IBM sold 750,000 PCs that same year; in a few short years, Compaq became a billion-dollar company. IBM tried to make Intel sell a new chip only to them, but Intel refused so it could sell its chips to more companies. The Intel 80386, a 32-bit microprocessor introduced in 1985, had 275,000 transistors. In May 2006, Intel announced that 80386 production would stop at the end of September 2007.

Embedded System is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today. Ninety-eight percent of all microprocessors are manufactured as components of embedded systems.
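As an illustration of the pattern (a dedicated function running in a fixed loop under a timing constraint), here is a minimal MicroPython-style sketch; the pin numbers and the door-switch scenario are assumptions for illustration, though machine.Pin and time.sleep_ms are real MicroPython APIs:

    from machine import Pin   # MicroPython's GPIO API
    import time

    led = Pin(2, Pin.OUT)      # on-board LED (pin number is board-specific)
    switch = Pin(4, Pin.IN)    # hypothetical door switch

    while True:                    # embedded firmware runs one dedicated task, forever
        led.value(switch.value())  # mirror the switch state onto the LED
        time.sleep_ms(10)          # 10 ms polling period: a soft real-time deadline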

Then the Extended Industry Standard Architecture was announced in September 1988 by a consortium of PC clone vendors (the "Gang of Nine") as a counter to IBM's use of its proprietary Micro Channel Architecture (MCA) in its PS/2 series, and to end IBM's monopoly. But that only gave rise to Microsoft's monopoly. Exclusive Right. By 1991, things got worse for Compaq: other computer companies came into the market, and IBM's patent trolls attacked Compaq. In 2002, Compaq merged with Hewlett-Packard.

Organic computers are coming. Scientists have found a molecule that could help make organic electronic devices.

Radialenes are alicyclic organic compounds containing n cross-conjugated exocyclic double bonds.

World's Smallest Computer: the Michigan Micro Mote (M3).

History's Greatest Inventions - Computer Types


Computer Movies - Films about Computers


The Machine that Changed the World - Episode II - Inventing the Future (youtube)
HyperLand (youtube)
The Virtual Revolution (youtube)
Internet Rising (youtube)
The Code - Linux (film)
All Watched Over by Machines of Loving Grace (vimeo)
Kids Growing Up Online (PBS)


"Most of what we think of as computers is just an illusion. The text on the screen doesn't really exist. Its an array of lights being manipulated by the computer based on the contents of the computers ram which is just a bunch of low and high voltage states."





