Philip Emeagwali Internet | Inventing a New Internet that is a New Supercomputer | Famous Inventors

TIME magazine called him
“the unsung hero behind the Internet.” CNN called him “A Father of the Internet.”
President Bill Clinton called him “one of the great minds of the Information
Age.” He has been voted history’s greatest scientist
of African descent. He is Philip Emeagwali.
He is coming to Trinidad and Tobago to launch the 2008 Kwame Ture lecture series
on Sunday June 8 at the JFK [John F. Kennedy] auditorium
UWI [The University of the West Indies] Saint Augustine 5 p.m.
The Emancipation Support Committee invites you to come and hear this inspirational
mind address the theme:
“Crossing New Frontiers to Conquer Today’s Challenges.”
This lecture is one you cannot afford to miss. Admission is free.
So be there on Sunday June 8 5 p.m.
at the JFK auditorium UWI St. Augustine. [Wild applause and cheering for 22 seconds]

[Philip Emeagwali Internet]
[From Science Fiction to Reality]

What is Philip Emeagwali known for?
My contribution is this: I invented a new internet
that is a new supercomputer. On February 1, 1922, a science fiction story
was published. That science fiction story
described how 64,000 human computers
around the world could work together
to forecast the weather. Sixty-seven years
after that science fiction story and at 8:15 in the morning
of the Fourth of July 1989, I experimentally discovered
how to harness a new internet that is a new global network
of 64 binary thousand processors and how to harness them
to forecast the weather around the world.
My invention—called practical parallel processing—
made the news headlines around the world.
The typical news headline was: “African Supercomputer Genius
Wins Top US Prize.” Back in 1989, I was in the news because
I discovered how to bring that science fiction story
that was published in 1922 to reality.

[Philip Emeagwali Supercomputer]

I’m Philip Emeagwali. The supercomputer technology
that I invented is called parallel processing.
A new parallel supercomputer is one that has
a never-before-seen processor-to-processor configuration.
Parallel processing enables millions upon millions
of processors operating inside
the modern supercomputer to communicate and compute
and to do both as one seamless, cohesive unit
that is a virtual supercomputer. Parallel processing
enables the supercomputer that is powered by
one million processors to be one million times faster than
the computer that is powered by
only one processor. The parallel supercomputer moves humanity
forward and into the future.
Hopefully, as we move forward by parallel processing across
a global network of computers around the Earth
then our children’s children could build
their planetary-sized supercomputer that could someday become
one and the same thing as their Internet.
Today, parallel processing is vital to every supercomputer manufactured
and may become vital to every computer of the future.
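To make the million-fold speedup claim above concrete, here is a minimal Python sketch of the idea (my own illustration, not Emeagwali’s code): one large summation is divided into equal chunks and each chunk is handed to a separate worker process, so that, ideally, the wall-clock time falls in proportion to the number of workers.

    import multiprocessing as mp

    def partial_sum(bounds):
        # Each worker sums only its own slice of the range; nothing is shared.
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n_workers = 4                  # stand-in for the millions of processors in the talk
        n = 10_000_000
        step = n // n_workers
        chunks = [(i * step, (i + 1) * step) for i in range(n_workers)]

        with mp.Pool(n_workers) as pool:
            total = sum(pool.map(partial_sum, chunks))  # chunks are summed in parallel

        assert total == sum(range(n))  # same answer as the single-processor computation
        print(total)

In practice the speedup is limited by communication and by any part of the work that cannot be divided, so the ideal factor equal to the number of processors is an upper bound.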
I was in major newspapers because I experimentally discovered
how and why parallel processing makes
the modern computer faster and makes the modern supercomputer fastest.

[Inventing Philip Emeagwali Internet]

Please allow me to take
a retrospective look at how I discovered
how to program a new internet, named the Philip Emeagwali Internet.
That new internet is a new global network of
sixty-five thousand five hundred and thirty-six [65,536]
processors, or 65,536
identical computers. I visualized those processors
as equal distances apart and on the surface of a globe
within a sixteen-dimensional hyperspace. I discovered
how to program that new internet as one seamless, cohesive
whole supercomputer that was not a computer per se
but that was a virtual supercomputer de facto.
I discovered how to program that new internet
as a new supercomputer and how to email and control
my 65,536 processors and how to do both
without seeing or touching any of those processors.
I discovered how to program and harness the processors
that outline and define that new internet, and control them blindfolded.
I discovered how to divide a grand challenge problem
into one million smaller problems and how to solve
those one million challenging problems at once.
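As a toy sketch of that divide-and-solve idea (my illustration, with made-up numbers rather than the talk’s 65,536 processors and one million subproblems), the bookkeeping amounts to assigning each notional processor its own block of the overall problem:

    NUM_PROCESSORS = 16        # stand-in for the 65,536 processors in the talk
    NUM_SUBPROBLEMS = 256      # stand-in for the "one million smaller problems"
    PER_PROCESSOR = NUM_SUBPROBLEMS // NUM_PROCESSORS

    def subproblems_for(rank):
        # Each notional processor owns a contiguous block of subproblems.
        start = rank * PER_PROCESSOR
        return list(range(start, start + PER_PROCESSOR))

    # Every subproblem is assigned to exactly one processor, and all blocks
    # can then be worked on at once on a real parallel machine.
    assignment = {rank: subproblems_for(rank) for rank in range(NUM_PROCESSORS)}
    assert sorted(p for block in assignment.values() for p in block) == list(range(NUM_SUBPROBLEMS))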
[Wholesale Plagiarism of the Philip Emeagwali Internet]

Some scientists tried to position themselves
to take the credit for my inventions. A few succeeded.
I have documentation to prove the complete plagiarism
of my invention. A parallel processed supercomputer
that has thousands of processors encircling a globe
that I invented alone was plagiarized
by a team of seasoned researchers that received United States
federal funding to do so. I distinguish between intentional
and unintentional plagiarism. Those researchers
stole my invention in its entirety and did not contribute to my invention.
Those researchers merely removed my name
as the inventor of the Philip Emeagwali Supercomputer
and put their names as its inventors.
As a supercomputer inventor that came of age in the 1980s,
I felt like the songwriter that was not credited for the songs
that he wrote. And I felt like the painter
that was not allowed to sign his name on his original paintings.
For me, the toughest part about being a black inventor
is getting the full credit for the new supercomputer
that I invented alone. Back in 1989, I was perceived
as a difficult person to work with and perceived as such
by research scientists who never worked with me.
The reason was that I made it impossible for someone else
to take the credit away from me and do so
for the practical parallel supercomputing technology
that I invented alone. I wasn’t a difficult person
to work with. Those researchers that tried
to steal the credit from me were difficult persons to work with.

[Inventing a New Supercomputer that is a New Internet]
[The Grand Challenge Problem]

That problem
of how to increase the speed of the modern supercomputer
and increase that speed by a factor of one million
was the grand challenge problem of mathematical physics
that was posed back on February 1, 1922
and that I solved on July 4, 1989.
I discovered how to solve, in only one day
and across that new internet that is a parallel supercomputer,
a grand challenge problem
that would have taken millennia upon millennia
of time-to-solution on one computer.

[From Science Fiction to Reality]

For the sixty-seven years from February 1, 1922, onward, parallel processing was abandoned
by supercomputer experts. Supercomputer textbooks
dismissed parallel processing as science fiction.
Parallel supercomputing was ridiculed as a huge waste of everybody’s time.
Because my massively parallel supercomputer
was an unconventional technology and a new internet,
I used an unorthodox technique to send and receive
my sixty-five thousand five hundred and thirty-six [65,536] computational
physics codes that I had to email
to as many processors of my new internet.
Deep inside the parallel supercomputer, the email
is the recurring decimal across each pair
of bi-directional email wires that connects nearest-neighboring processors
that shared nothing. I had sixty-five thousand
five hundred and thirty-six [65,536] unique email addresses
for as many processors. Each processor
operated its own operating system. Each email address was sixteen bits long,
or a unique string of sixteen zeroes and ones.
My email addresses within that new internet
were unorthodox because they had no “at” (@) signs
or dot com suffixes. Their “at” signs and suffixes
were unnecessary because I knew where my
sixty-five thousand five hundred and thirty-six [65,536]
processors that outlined and defined
my new internet were at.
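One way to read the sixteen-bit addresses described above (my reading, not a documented specification of the machine) is as the node labels of a binary 16-cube: 2^16 = 65,536 processors, each identified by a unique string of sixteen zeroes and ones, with nearest neighbors differing in exactly one bit. Counting one wire per processor per neighbor direction also reproduces the 16 × 2^16 = 1,048,576 “one binary million” wires mentioned later in the talk. A minimal Python sketch:

    DIMENSIONS = 16
    NUM_PROCESSORS = 2 ** DIMENSIONS       # 65,536 processors

    def address(rank):
        # A processor's "email address": a unique string of sixteen zeroes and ones.
        return format(rank, f"0{DIMENSIONS}b")

    def nearest_neighbors(rank):
        # In a binary hypercube, a neighbor's address differs in exactly one bit.
        return [rank ^ (1 << bit) for bit in range(DIMENSIONS)]

    print(address(0))                       # 0000000000000000
    print(address(NUM_PROCESSORS - 1))      # 1111111111111111
    print([address(n) for n in nearest_neighbors(0)[:3]])

    # One wire per (processor, neighbor-direction) pair:
    assert DIMENSIONS * NUM_PROCESSORS == 1_048_576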
[Proof-of-Principle of Parallel Supercomputing]

My discovery of practical parallel supercomputing began with some
back-of-the-envelope calculations and a few proof-of-principle lectures
in which I presented my realizations that a million processors
could in principle be harnessed to solve the toughest problems
arising in mathematics, physics, and computer science.
My proof-of-concept was small and was not complete.
I gave my proof-of-principle lectures back in the early 1980s.
At that time, parallel processing across sixteen times
two-raised-to-power sixteen, or one binary million,
bi-directional email wires was the frontier
and the unknown territory and the science fiction
of the world of the supercomputer. Sixty-seven years earlier,
a meteorologist presented as science fiction the story of
64 thousand human computers working together to solve
a grand challenge problem, such as forecasting the weather
around the entire planet Earth. That science fiction story
was published back on February 1, 1922. But unlike the science fiction writer,
I, as a non-fiction research massively parallel
supercomputer scientist, had a very limited number of words
that I could use to describe how I emailed
my initial-boundary value problems of mathematical physics
and how I sent them across a never-before-seen internet
that is a new global network of 64 binary thousand processors.
As a supercomputer scientist, I’m different
from the science fiction writer because the starting point
of the science fiction writer is a blank page
plus the unlimited fictional stories that she can conjecture
and that she can use to populate her blank pages.

[The First Supercomputer Scientist]

But as the first parallel supercomputer scientist
that started his quest for the solution
of grand challenge problems and started on a conventional supercomputer
and started on June 20, 1974 in Corvallis, Oregon (United States)
and continued almost daily for fifteen years
and continued to the capital of supercomputing, namely, Los Alamos, New
Mexico (United States), and as that first modern
supercomputer scientist, I couldn’t make up fictional stories
that could not be reconfirmed by each and every
subsequent parallel supercomputer scientist.

[Inventing Philip Emeagwali Supercomputer]

As the non-fiction
supercomputer scientist that I was, I could not write a word
about practical parallel supercomputing and do so until I, first and foremost,
divided my grand challenge problem into smaller, less challenging problems
and then synchronously emailed them across my one binary million
email wires that interconnected my processors
and then simultaneously solved them with the one-to-one
problem-to-processor correspondence that I maintained
between my 64 binary thousand smaller mathematical physics problems
and my as many commodity-off-the-shelf processors
that shared nothing between each other. My first discovery
of practical parallel supercomputing that occurred on the Fourth of July 1989
was rejected as a [quote unquote] “terrible mistake.”
Back in 1989 and earlier, practical parallel supercomputing
was mocked, ridiculed, and rejected as a beautiful theory that lacked
an experimental confirmation. My supercomputing quest
was to experimentally confirm massively parallel supercomputing
and re-confirm it to a speed limit
that was never-before-attained, namely, across a never-before-seen internet
that was my new global network of 65,536 tightly-coupled,
commodity-off-the-shelf processors. For that invention
of a new supercomputer, I used the toughest problems
in mathematical physics as my computational testbed.

[Solving an Unsolved Problem in Mathematics]
[The Grand Challenge Problem of Mathematics]

The poster girl
of the twenty grand challenge problems is the petroleum reservoir simulation
of a production oilfield that may be two miles
below the surface of the Earth and the size of a town.
The reason one in ten supercomputers were purchased
by the petroleum industry was that the parallel processed
petroleum reservoir simulator helps the oil company
to discover and recover as much crude oil and natural gas
as is possible and to recover them
for as long as possible, as well as to compute them
at a supercomputer speed that was previously believed
to exist only in the realm of science fiction.
The speed increase of a factor of 65,536 that I recorded on July 4, 1989
was dismissed as science fiction and I was disinvited
from giving my lecture on how I discovered
practical parallel supercomputing.

[My First Unveiling of Practical Parallel Supercomputing]

My discovery
of practical parallel supercomputing was rejected as [quote unquote]
“a serious mistake.” After two months of continuous rejections
of my discovery of massively parallel supercomputing,
I went in search of re-confirmation of my discovery.
I was compelled to provide expert eye-witnesses
to my discovery of practical parallel supercomputing.
My first stop was at a 15-day long supercomputer workshop
that took place from September 1 to 15, 1989,
in Chicago, United States. During that supercomputer workshop,
I spent the first fourteen days building the trust and confidence
of the supercomputer workshop instructors and participants
who at that time did not know who I was.
From my contributions to the workshop discussions
on how to record the fastest speeds within the parallel supercomputer,
the instructors realized that I had been supercomputing
for the past fifteen years and that they knew less than I did.
On the fifteenth and last day of that supercomputer workshop,
I suddenly cleared my throat and made the announcement
that brought me to Chicago, namely, that I had discovered
practical parallel processing. You could hear a pin drop
in the room as everybody gazed at me
in stunned silence! For the first time since June 20, 1974,
in Corvallis, Oregon, United States, a group of supercomputer scientists
attentively listened to me as I explained to them
how I discovered how to massively parallel process
across 65,536 processors that each operated
its own operating system. I discovered
how to reduce the calculation time of the twenty grand challenge problems of
supercomputing. I discovered
how to reduce that time-to-solution and do so with a speed up of 65,536.
Before September 15, 1989, a speed-up that reduced 65,536 days,
or roughly 180 years, of time-to-solution to just one day
existed only in the realm of science fiction.
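As a quick check of the arithmetic behind that claim (the figures are the talk’s; only the conversion below is mine):

    parallel_days = 1
    speedup = 65_536
    serial_days = parallel_days * speedup   # what one processor would have needed
    print(serial_days / 365.25)             # about 179.4, i.e. roughly the 180 years quoted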
For me, Philip Emeagwali, that Eureka Moment! in Chicago
was surreal. After my announcement
at that supercomputer workshop of my discovery
of practical parallel supercomputing it was so quiet
that you could hear a pin drop in the room.
The supercomputer scientists attending that Chicago workshop
challenged me to submit my discovery
to the highest authority in supercomputing.
That highest authority was The Computer Society
of the IEEE. The IEEE is the acronym
for the Institute of Electrical and Electronics Engineers.
In late December 1989, The Computer Society
re-confirmed my discovery of practical parallel supercomputing.
The Computer Society invited me to come to the forthcoming
International Computer Conference that would take place on February 28, 1990
in San Francisco, California. Two months prior to that conference,
the Computer Society of the IEEE sent out a press release
that recognized my contributions to [quote unquote]
“practical parallel processing.” In their press release,
the Computer Society announced that I had won
the highest award in the field of supercomputing.

[Philip Emeagwali is Well Known, But Not Known Well]

I’m well known
but not known well. I’m well known
for inventing a new internet that is a new supercomputer de facto
and that is a new global network of sixty-five thousand
five hundred and thirty-six [65,536] processors
that were tightly-coupled to each other and that shared nothing
between each other. I’m well known for figuring out
how to harness the processors within that new internet
and how to use that new knowledge to solve initial-boundary value problems arising
in mathematical physics that were otherwise impossible-to-solve.
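For readers unfamiliar with the term, an initial-boundary value problem pairs an evolution equation with an initial state and with conditions imposed on the boundary of the domain. The sketch below uses the 1-D heat equation as an illustrative stand-in (my choice for brevity, not the equations Emeagwali actually solved) and steps it forward with explicit finite differences; on a parallel machine, each processor would typically own a block of grid points and exchange only boundary values with its nearest neighbors.

    # Illustrative initial-boundary value problem: the 1-D heat equation
    #   du/dt = alpha * d2u/dx2   on 0 < x < 1,
    # with u = 0 held at both boundaries and a given initial temperature profile.
    N = 50                       # interior grid points
    alpha = 1.0
    dx = 1.0 / (N + 1)
    dt = 0.4 * dx * dx / alpha   # small enough for the explicit scheme to stay stable

    u = [1.0] * N                # initial condition: uniform temperature inside the rod
    for _ in range(1000):        # march the solution forward in time
        u_new = u[:]
        for i in range(N):
            left = u[i - 1] if i > 0 else 0.0       # boundary condition at x = 0
            right = u[i + 1] if i < N - 1 else 0.0  # boundary condition at x = 1
            u_new[i] = u[i] + alpha * dt / dx**2 * (left - 2 * u[i] + right)
        u = u_new

    print(max(u))                # the temperature decays toward the boundary value 0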
But I am not known well for foreseeing my discovery
as, de facto, a new internet. I’m well known
for experimentally discovering, or recording, speeds
in floating-point arithmetical computations that were previously unrecorded.
But I am not known well
for using email communications across that new internet
to record communication speeds that were previously unrecorded.
But I am not known well for discovering, or seeing
for the first time, those supercomputer speeds
and recording them across my new internet.
But I am not known well for changing the way
we look at the modern computer and the modern supercomputer.
After the Fourth of July 1989, I became known
for the experimental discovery of parallel supercomputing.
That discovery made the news headlines because
it was beyond theory and beyond the computer
and because it was specific, quantifiable, and measurable.
Every new technology has a starting point.
Parallel processing is the starting point
of the modern supercomputer.

[The Importance of Supercomputing]
[I Was Dismissed From Supercomputing Research Teams]

In the 1970s and ’80s,
the supercomputer-hopeful technology, called parallel processing,
was mocked, ridiculed, and dismissed as a huge waste of everybody’s time.
Today, parallel processing is universally used
to reduce the time-to-solution of the toughest problems
arising in the field of supercomputing. Parallel processing is used
to increase the speed of the fastest computers
and all supercomputers. My discovery
of practical parallel processing was how I entered as a benchmark
into the history of the development of the computer and the internet.

[Inventor Reports on Philip Emeagwali]

In U.S. public libraries,
I see 12-year-olds writing school reports on the contributions
of Philip Emeagwali to the development of the computer.
I entered into school curricula after my discovery
of practical parallel supercomputing. That discovery occurred
on the Fourth of July 1989 in Los Alamos, New Mexico,
United States. My discovery
of practical parallel supercomputing made the news headlines because
it was new knowledge that changed the way
we look at the supercomputer. My discovery
of practical parallel supercomputing was recorded
in the June 20, 1990 issue of the Wall Street Journal.
At its core essence, parallel supercomputing
is about one billion processors computing together
to solve one big problem. Parallel supercomputing
is the vanguard of computer science. The parallel supercomputer
is the engine that is used to discover new knowledge
and solve grand challenge problems arising in STEM fields.

[Contributions to the Supercomputer]

My contribution
to the development of the computer is this:
I discovered that we can parallel process
and solve grand challenge problems arising in mathematics and physics
and solve them across a new internet that is a new global network
of commodity-off-the-shelf processors that shared nothing between them.
I paradigm shifted from computing only one thing at a time,
or in sequence, to supercomputing one million things
at once, or in parallel. I was the first person
to solve a grand challenge problem and solve it by dividing it
into smaller problems and communicating them via emails
to sixty-five thousand five hundred and thirty-six [65,536]
processors. I was the first person
to solve as many as sixty-five thousand five hundred and thirty-six [65,536]
parallel processed initial-boundary value problems
of mathematical physics and solved them at once.
My discovery, called practical parallel processing,
is the vital technology that must be used to solve
the toughest problems arising in science and engineering
and used to solve them in minimum time.

[Supercomputing From Fiction to Fact]

When I began sequential supercomputing,
on June 20, 1974 at age 19, parallel supercomputing
then only existed in the realm of science fiction.
For the sixty-seven years from February 1, 1922, onward,
parallel supercomputing existed only as an urban legend
of the mathematical physics community. My parallel supercomputing experiment
made the news headlines, back in 1989.
But my discovery of the fastest computer speed
was not newsworthy for pushing the boundaries
of how fast supercomputers could compute.
My discovery was newsworthy because I discovered the fastest speeds across
a new internet that I described as a new global network of
65,536 processors that tightly-encircled a globe.
That discovery enabled the supercomputer
to be true to its vital technology that is named
“parallel processing.” Parallel processing revolutionized
the field of supercomputing by giving it new horizons
that range from the mathematician’s blackboard
to the engineer’s drawing board. The serial processed weather forecast
is unpredictable. We parallel process
the grand challenge problem of weather forecasting
to make unpredictable weather predictable.

[Importance of Parallel Computing in Your Everyday Life]

The speed of a computer
can be increased by packing more transistors on chips
and/or by adding more central processing units
and graphics processing units and using them
as identical cores and nodes of a global network of processing units
that are equal distances apart and that are on the surface of a globe.
Why is the supercomputer of today much faster than
the supercomputer of 1988, and earlier? The modern supercomputer is faster because
its underlying parallel processing units
do the supercomputing. The processor
is the brain of the computer. In the modern computer,
the serial kernel of an application code is computed within
a few central processing units that each compute
only one thing at a time. In the modern supercomputer,
the parallel kernel of an application code is parallel computed within
the graphics processing unit that computes many things at once,
or in parallel. The graphics processing unit
is a parallel processing tool that is used by
the central processing unit to perform faster computations
just like the central processing unit is a sequential processing tool
that is used by the sequential processing human computer
to perform faster computations. The graphics processing unit
is a massively parallel machine, and its presence inside your computer
redefined your computer as parallel processing.
The graphics processing unit computes in parallel,
or computes many things at once. The graphics processing unit
computes the computation-intensive kernel
of your application and does so
when that kernel can be parallelized. The few cores within
the central processing unit serially compute the portion
of the computation-intensive physics code
that cannot be parallelized. If the central processing unit
is the brain of your computer then the graphics processing unit
is the soul of your computer.
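The division of labour described above, a serial kernel on a few CPU cores and a parallelizable kernel on the GPU’s many cores, is commonly summarized by Amdahl’s law (a standard formula, not something stated in the talk): if a fraction p of the work can be spread over N processing units, the overall speedup is 1 / ((1 - p) + p / N). A minimal sketch with made-up numbers:

    def amdahl_speedup(parallel_fraction, units):
        # Overall speedup when only part of the code can run in parallel.
        serial_part = 1.0 - parallel_fraction
        return 1.0 / (serial_part + parallel_fraction / units)

    # Example: 95% of the work is parallelizable and runs on many GPU cores,
    # while the remaining 5% stays serial on the CPU.
    for units in (1, 16, 1024, 65_536):
        print(units, round(amdahl_speedup(0.95, units), 1))

    # The printed speedup approaches 1 / 0.05 = 20: the serial portion is the limit,
    # which is why the non-parallelizable part is left to a few CPU cores.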
[Inventing a New Computer Science]
[New Paradigm of Supercomputing]

The word “computer” was coined two thousand years ago when it was first used
by the Roman author Pliny the Elder. For two thousand years,
the word “computer” referred to a human computer
that computes manually, rather than to a programmable
electronic machine that computes automatically.
When the mid-20th century British logician, Alan Turing,
and his contemporaries wrote about the [quote unquote] “computer,”
they meant a human computer, not an electronic machine that computes.
The meaning of the word “computer” changed in 1946,
when the terminology [quote unquote] “programmable digital computer”
was shortened to “computer”. For my 1989 discovery
of practical parallel processing, the technology that underpins
every modern supercomputer, I had to redefine
the “programmable digital computer.” I redefined the technology
because I discovered how to divide a grand challenge problem
into smaller problems and how to solve them across
my new internet that is a new global network of
65,536 commodity processors. Each processor
operated its own operating system. As predicted in the June 20, 1990 issue
of the Wall Street Journal, my experimental discovery
of practical parallel processing opened the door
to the modern supercomputer technology that is harnessed and used
to solve real world problems and solve them across
central processing units that accelerate their speeds
of computation and do so with identical
graphics processing units. As a supercomputer scientist
that came of age in the 1970s and ‘80s,
I thought of the supercomputer differently. Conventional supercomputer scientists
programmed vector supercomputers and believed that
the fastest computations could only be recorded
on one central processing unit that’s a vector unit. In the old paradigm of supercomputing,
they thought of the supercomputer in the singular sense,
or solving only one problem at a time. In my new paradigm of supercomputing,
I thought of the supercomputer in the plural sense of 65,536
identical central processing units and as many identical graphics processing
units. Back in 1989,
I was in the news headlines because I experimentally discovered
how to use those units to solve 65,536 problems at once.
My discovery opened the door to the present technology of using
graphics processing units, where possible,
and using them to accelerate the speed of the floating-point arithmetical
operations that must be executed
by the modern parallel supercomputer. My experimental discovery
of how to parallel process and do so to solve the toughest problems
and do so across a new global network of
65,536 processors was achieved across a new internet.
The supercomputer of today will become the computer of tomorrow.
The supercomputer is at once able to define our past,
recreate our present, and reinvent our future.

[How I Discovered Practical Parallel Supercomputing]

The supercomputer technology
called massively parallel processing that was mocked
as a very useless technology is now front and center
of high-performance computing and is rapidly moving into laptops
and desktops. Until the Fourth of July 1989,
parallel processing was not verified by any experiment
that was conducted across an ensemble of thousands of processors
and that used a real-world grand challenge problem
as its computational testbed. My contribution to the development
of the computer is this: On the Fourth of July 1989
in Los Alamos, New Mexico, United States, I provided the lockdown
experimental evidence that the technology of
massively parallel supercomputing can be harnessed
and used to solve the toughest problems arising from mathematics to medicine
and from science to engineering. I, alone, conducted
that time-consuming experimentation that led to my discovery of the best way
to get millions of processors to solve the toughest problems
and to move humongous data into and out of storage
and to solve them in harmony and as one seamless, cohesive supercomputer.
The electricity budget of the email messaging
that is a precondition to moving data into and out of
millions upon millions of processors raises the electricity bill
to up to forty (40) million dollars per year,
and eventually costs more than the next world’s fastest computer
that will cost the United States six hundred (600) million dollars
in the year 2023.
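Taking the figures quoted above at face value (about 40 million dollars per year for electricity and 600 million dollars for the machine itself), the break-even point behind the phrase “eventually costs more” is easy to compute:

    electricity_per_year = 40_000_000          # dollars per year, figure quoted in the talk
    machine_cost = 600_000_000                 # dollars, figure quoted for the 2023 machine
    print(machine_cost / electricity_per_year) # 15.0 years until the power bill matches the purchase price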
The world’s fastest computer
consumes as much electricity as two million Nigerians. Thank you. I’m Philip Emeagwali. [Wild applause and cheering for 17 seconds]
