Our digital world is expanding daily. From Artificial Intelligence and the Internet of Things to Quantum Computing and Social Media, the amount of data that is criss-crossing the globe is already at astounding levels and is only set to increase exponentially. We are living in the era of data.
While all this growth is potentially a good thing for humankind, we are becoming constrained by our own wires and fibers. They are no longer sufficient to connect all the digital resources of the planet fairly, equally and, most importantly, efficiently. Long gone are the days of the telephone switch operator. Today, systems must move Petabits (1,000,000,000,000,000 bits) of data every second to keep up with the demands of not just people, but machines and algorithms as well… this is the beginning of the real hockey stick curve.
With present-day switching systems using every electronic and photonic tool at their disposal to move and switch all this data, while consuming ever more power, we have arrived at a point where the architectures of 60 years ago can no longer sustain these systems, exactly when our beloved Moore’s Law seems to be dying and we can no longer expect ever-increasing performance from our microchips.
A paradigm shift is needed – only one that feels like no shift at all…
The opportunity is clear: if we want global, instantaneous access to ever more information, especially for data-hungry workloads like A.I., then we have to improve efficiency.
Today, many systems are starved for data simply because they cannot pull enough of it in from the outside world. CPUs sit idle, processing nothing at all, while waiting for their pieces of information. Not only is this a waste of operational cycles and power, it also leaves larger and larger amounts of data never accessed at all! Delays will push traffic toward only the most available data, concentrating access around the largest and best-connected service points.
The equitable movement of all data on the Internet is then of paramount importance, not only to individuals, but to industries (big and small), science, the economy, and geopolitical representation as well.
There is a lot to be learned from nature – we have examples everywhere from Velcro to Penicillin.
We have even used the most basic hypotheses of how the brain works to create massive models for computing called Neural Networks, the basis of modern A.I.
The Axonal Networks vision is to also use the brain as a starting point, but rather than modeling the neuron as the center of computing, the goal is to use the axons, dendrites and synapses as a model for the interconnects.
While the brain has more than ten trillion interconnects, it in fact uses its resources judiciously. Nature has learned well the rules of distributed computing and communications: it connects regions with varying amounts of connectivity while optimizing the use of each link to limit energy consumption.
Somewhere, then, between an everything-to-everything connection method and a serial game of broken telephone lies the key to the brain’s interconnection topology. While nowhere near the complexity of the human brain, the key here is to use well-known engineering and computer-science constructs, but in a more optimized, efficient, and powerful interconnection architecture.
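To make that trade-off concrete, here is a minimal sketch comparing how many links or switch elements the extremes require to connect n endpoints. These are standard textbook formulas, not Axonal Networks’ actual design; the function names and sample sizes are illustrative assumptions.

```python
# Illustrative link/element counts for three ways to connect n endpoints.
import math

def full_mesh_links(n: int) -> int:
    """Everything-to-everything: every endpoint wired to every other."""
    return n * (n - 1) // 2

def chain_links(n: int) -> int:
    """Serial 'broken telephone': endpoints strung in a line."""
    return n - 1

def banyan_2x2_elements(n: int) -> int:
    """Multistage (banyan-style) fabric of 2x2 switch elements:
    log2(n) stages of n/2 elements each (n a power of two)."""
    return (n // 2) * int(math.log2(n))

for n in (8, 64, 1024):
    print(f"n={n:5d}  mesh={full_mesh_links(n):7d}  "
          f"chain={chain_links(n):5d}  banyan={banyan_2x2_elements(n):6d}")
```

At n = 1024, the full mesh needs 523,776 links while the multistage fabric needs only 5,120 elements: the middle ground the brain seems to exploit.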
For a very big problem – you need a very big solution… This is why multiple technologies must be used in unison to address the interconnect dilemma…
Architecture – the way the wires are arranged.
Photonics – what the wires transmit, and,
Logic – how it all knows what to do…
A switch architecture is exactly that: an arrangement of “switches”. Switches are simple, primitive constructions that typically allow the flow of data in either of two directions based on a control signal. By taking portions of well-known structures from the data-communications domain of the past decades, namely the Banyan and Crossbar switch topologies, and re-casting them using the concept of spatial division multiplexing (multiple, parallel pathways in a distributed system), a new architecture has been envisioned.
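For a flavour of how a banyan-class fabric steers traffic, here is a minimal sketch of destination-tag (self-routing) delivery through an 8x8 omega network built from 2x2 elements. This is the classic textbook construction, not Axonal Networks’ architecture; all names are illustrative.

```python
# A toy self-routing pass through an 8x8 omega (banyan-class) network.
# Each 2x2 element inspects one destination-address bit:
# 0 -> upper output, 1 -> lower output.

def shuffle(pos: int, n_bits: int) -> int:
    """Perfect-shuffle wiring: rotate the port number's bits left by one."""
    msb = (pos >> (n_bits - 1)) & 1
    return ((pos << 1) | msb) & ((1 << n_bits) - 1)

def route(src: int, dst: int, n_bits: int = 3) -> list[int]:
    """Trace the port a packet occupies after each stage."""
    pos, trace = src, [src]
    for stage in range(n_bits):
        pos = shuffle(pos, n_bits)               # wiring between stages
        bit = (dst >> (n_bits - 1 - stage)) & 1  # routing bit, MSB first
        pos = (pos & ~1) | bit                   # 2x2 element picks an output
        trace.append(pos)
    return trace

print(route(src=5, dst=2))  # arrives at port 2 regardless of source
```

Each element needs only its single routing bit and no global controller; the control logic is as distributed as the wiring itself.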
Being a distributed system then requires the advantages of light and photonics. Since many distributed wires must still carry exceptionally high-speed data, and this data must occasionally travel over larger physical areas (perhaps tens of meters), the efficiency of optical signalling and its immunity to noise and disruption make it the only choice.
Lastly, the straightforward partitioning of the "wires" from the "transistors" is nearing its end. The optical transceiver, which uses glass fibres as the "wires" by converting between light and electricity, has been exclusively responsible for transmitting data, while the microchip has been exclusively responsible for processing it. This boundary, however, erodes every year as light propagates deeper into the computing sanctuary.
The ultimate solution is to bring the light to the last micron… right to the electrical devices that do the “thinking”… and make light part of the computing.
To do this, the third pillar of the solution involves optical gates. Using technologies like silicon photonics, a fundamental device capable of directly interpreting light signals and creating new ones will revolutionize the way computing works. By eliminating the conversions performed by the optical transceiver (from electricity to light and back again), the speed of processing can be dramatically increased.
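A back-of-envelope sketch of why this matters: with conventional transceivers, every switching hop costs two domain conversions, while an all-optical gate keeps the signal in the optical domain end to end. The hop counts below are illustrative assumptions, not measured figures.

```python
# Count optical<->electrical domain conversions along a path.
# Conventional hop: O->E at ingress, E->O at egress (2 conversions).
# All-optical gate: the signal never leaves the optical domain.

def oeo_conversions(hops: int, all_optical: bool) -> int:
    return 0 if all_optical else 2 * hops

for hops in (1, 4, 16):  # illustrative path lengths
    print(f"{hops:2d} hops: transceivers={oeo_conversions(hops, False):2d}, "
          f"optical gates={oeo_conversions(hops, True)}")
```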
This triple-play of architecture, photonics and logic achieves something that no other switching system has offered before: a continually scalable, high-speed interconnection fabric that can be added to over and over, starting from the smallest switch core.