If you’re reading this article, you have optical communications technology to thank for it. Optical communication underpins our everyday internet access. Over the past years, scientists have made progress in transmitting both data and power to devices from a distance.


Murat Uysal and his team from Ozyegin University, Turkey, have published a paper titled “SLIPT for Underwater Visible Light Communications: Performance Analysis and Optimization.” The paper provides a detailed analysis of a new algorithm for transmitting both data and power to underwater devices using light, at the highest efficiency achieved to date.


New algorithm optimizes underwater communication and power for simultaneous transfer
Source: Google images


Humans have explored underwater mysteries for centuries. In recent years, scientists have been deploying underwater sensors to gather and study information. The current method of transmitting signals underwater uses sound waves. Though sound waves can travel long distances through the watery depths, they carry far less data than light waves.


Visible light communication can provide data rates at several orders of magnitude beyond the capabilities of traditional acoustic techniques and is particularly suited for emerging bandwidth-hungry underwater applications.


Murat Uysal, a professor in the Department of Electrical and Electronics Engineering at Ozyegin University, Turkey.


It is difficult to manage and maintain sensors or other electronic devices in such environments, particularly when it comes to replacing batteries. Devices that can work with solar panels come with an added advantage: light signals can carry data while the solar energy is being harvested. In such a setup, an autonomous vehicle can be deployed to transmit data to and receive data from the sensor.


The power derived from the light signals received at the sensor can be split into Alternating Current (AC) and Direct Current (DC) components, where the AC component carries the data and the DC component acts as a power source. Murat Uysal calls this the AC-DC Separation (ADS) method, and it is the gist of the published paper.
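The idea can be sketched in a few lines (an illustrative toy, not the paper’s actual algorithm): the received photocurrent’s DC average is harvested as power, while the zero-mean AC remainder carries the data.

```python
# Illustrative toy of the AC-DC separation (ADS) idea, not the paper's
# actual algorithm: split the received photocurrent into a DC average
# (harvested as power) and a zero-mean AC remainder (the data signal).

def ac_dc_split(samples):
    """Return (dc, ac): the DC level and the zero-mean AC waveform."""
    dc = sum(samples) / len(samples)      # DC component -> energy harvesting
    ac = [s - dc for s in samples]        # AC component -> data detection
    return dc, ac

# hypothetical photodiode samples of an intensity-modulated light signal
received = [1.2, 0.8, 1.3, 0.7, 1.25, 0.75]
dc, ac = ac_dc_split(received)
print(dc)   # average optical power available for harvesting
print(ac)   # zero-mean waveform that still carries the modulation
```

The split costs nothing in information: adding the two parts back together reproduces the original signal exactly.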



The team also proposed another method that strategically switches between energy harvesting and data transfer, optimizing the performance of the switching itself. This process is called Simultaneous Light Information and Power Transfer (SLIPT). However, the SLIPT switching method has not surpassed the ADS method in terms of performance.


The feasibility of wireless power was already successfully demonstrated in underwater environments [using light], despite the fact that seawater conductivity, temperature, pressure, water currents, and biofouling phenomenon impose additional challenges.


These optical communication methods are still experimental. According to Uysal, the SLIPT method has greater commercial potential. Advances in these methods could ultimately lead to underwater modems that use visible light. Meanwhile, scientists continue their research into deploying sensors underwater, powering them remotely, and studying the oceans and underwater life.

What is 5G?


Information is transmitted in various forms – over wire, radio, optical or other electromagnetic systems. Over time, the way we transmit information has changed drastically. The latest technology for information transmission is 5G (Fifth Generation), an upgrade of the existing 4G technology.


Fifth Generation (5G) in mobile communications – Basic things you need to know
Image source: Google

History of 5G:


5G is the next generation of cellular technology, and its rollout started at the end of 2018. On 3rd April 2019, 5G mobile services were introduced in Chicago and Minneapolis. This was the first time anywhere in the world that a 5G-enabled smartphone was connected to a 5G network.


Verizon is the leading company that developed, established and accelerated 5G innovation in the initial stages. In 2015, Verizon created the 5G Technology Forum (5GTF), which helped accelerate the release of the Third Generation Partnership Project (3GPP) 5G New Radio (NR) standard in December 2017. Later, Verizon introduced a number of 5G phones, including the Samsung Galaxy S10 5G, Galaxy Note 10+ 5G and LG V50, among others.


Technical aspects of 5G:


5G initially used high-frequency spectrum known as millimeter wave spectrum. Carriers brand this millimeter wave service with their own labels: Verizon calls it 5G UW (Ultra Wideband), while AT&T (American Telephone and Telegraph Company) calls it 5G Plus.


5G runs over low band, mid band and millimeter wave airwaves. Low band 5G provides the best coverage, but its speeds are only modestly better than 4G. Millimeter wave uses a higher frequency range and delivers speeds above 1 Gbps, but its range is short and it does not penetrate indoors well. Mid band, as the name implies, offers faster speeds than low band and better coverage than millimeter wave.


5G vs 4G:


  • 5G is superior to 4G. 4G was built upon the data and application technology introduced by 3G in the early 2000s. 5G provides higher reliability and far faster speeds than 4G.
  • 4G LTE (Long Term Evolution) technology can only use lower frequency bands, up to 6 GHz, whereas 5G millimeter wave radio bands span 30 GHz to 300 GHz.
  • 5G supports massive data transfers and can reach speeds up to 20 times faster than 4G LTE: 4G LTE has a peak speed of 1 Gbps, while 5G can achieve speeds of 20 Gbps.
  • 5G has the capacity to handle up to a million devices per square kilometer. On 4G, devices competing for bandwidth result in slow network connectivity.
  • 5G is more efficient than 4G, meaning it wastes less power, and it offers better mobility.
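The peak-rate comparison can be sanity-checked with simple arithmetic (idealized peak rates; real-world throughput is far lower):

```python
# Time to move a 10 GB file at the quoted peak rates (idealized: ignores
# protocol overhead, congestion and real-world radio conditions).

def download_seconds(file_gigabytes, link_gbps):
    """Seconds to transfer a file of the given size over the given link."""
    bits = file_gigabytes * 8          # gigabytes -> gigabits
    return bits / link_gbps

print(download_seconds(10, 1))    # 4G LTE peak (~1 Gbps)  -> 80.0 seconds
print(download_seconds(10, 20))   # 5G peak   (~20 Gbps)   -> 4.0 seconds
```

The 20x ratio between the two peak rates translates directly into a 20x shorter transfer time.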

Comparing 5G vs 4G
Image source: Google

Before 5G:


Every generation of wireless communication standards has brought technological improvements. 1G, the first generation of wireless telephone technology, used analog telecommunication standards and was introduced in 1979. 2G (Second Generation), introduced in the early 1990s, moved to digital telecommunications, with speeds of 14.4 kbps.


2G provided services such as text messages, picture messages and multimedia message services. 3G technology was introduced in 1998 and provides an information transfer rate of at least 144 kbit/s. CDMA2000 is a family of 3G mobile technology standards for sending voice, data and signaling data between mobile phones and cell sites.


4G technology was introduced in 2009 in Oslo, Norway and Stockholm, Sweden, and in the United States in 2011 on the 700 MHz band. Applications of 4G include mobile web access, IP telephony, gaming services, high-definition mobile TV, video conferencing, 3D television and cloud computing.


Before 5G - Fifth Generation
Image source: Google

Benefits of 5G:


  • 5G greatly enhances the speed, coverage and responsiveness of wireless networks.
  • 5G offers speeds of up to 1 Gbps and beyond (10 to 100 times faster than a typical cellular connection).
  • Latency – the delay between sending and receiving information – drops as low as 1 millisecond.
  • 5G targets 1000x bandwidth per unit area, 99.99% availability and 100% coverage.


Conclusion:


In conclusion, 5G is one of the biggest technological innovations of our lifetime, with unlimited possibilities. 5G can change everything.

 

5G can change everything
Image source: Google

Apple has been working on bone-conduction technology and recently filed a patent titled “Multipath audio stimulation using audio compressors”.


Is Apple working on bone-conduction audio technology for AirPods?
Image source: Google

Conduction and types of conduction:


A medium capable of transmitting sound is called a sound conductor. The most common conductor we rely upon is air. When a sound is generated from a source, it vibrates the air; when these vibrations reach the human ear, the brain interprets them and we perceive the sound. This is called air-conduction, as air is the medium carrying the sound vibrations. With bone-conduction technology, a user instead hears sound vibrations through the cranium, i.e. the skull. In contrast to air-conduction, bone-conduction transmits the sound vibrations directly through the user’s body.


Air-conduction and Bone-conduction
Image source: Wiki

Working of Bone-conduction AirPods:


Though Apple is calling them bone-conduction AirPods, they don’t rely entirely on bone-conduction. These AirPods filter the input audio into a high-frequency component and a low-frequency component. A low-frequency compressor reduces the dynamic range of the low-frequency component, while an air-conduction transducer converts the high-frequency component into air vibrations that the user can sense. Finally, a bone-conduction transducer converts the compressed low-frequency component into vibrations in the cranial bone.
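As a rough illustration of the signal chain described in the patent (this is a generic sketch, not Apple’s actual method – the one-pole crossover and compressor parameters below are invented), the audio can be split into bands and the low band compressed:

```python
import math

# Generic sketch of a band-split + compressor chain (NOT Apple's patented
# design): a one-pole low-pass gives the low band for the bone-conduction
# path, the residual is the high band for the air-conduction path.

def split_bands(samples, alpha=0.1):
    """One-pole low-pass filter; the high band is the residual."""
    low, high, state = [], [], 0.0
    for s in samples:
        state += alpha * (s - state)   # leaky integrator = low-pass filter
        low.append(state)              # -> bone-conduction transducer
        high.append(s - state)         # -> air-conduction transducer
    return low, high

def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce dynamic range: attenuate the portion above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(math.copysign(mag, s))
    return out

# a 50 Hz test tone at an 8 kHz sample rate (illustrative values)
tone = [math.sin(2 * math.pi * 50 * n / 8000) for n in range(256)]
low, high = split_bands(tone)
low = compress(low)  # compressed low band drives the bone transducer
```

By construction the two bands sum back to the original signal, so nothing is lost in the split; only the low band’s loud peaks are squeezed.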


Components of bone-conduction explained in Apple's patent document


The bone-conduction AirPods can produce different sound sensations depending on the portion of the cranium through which the sound is transmitted. For example, vibrations sent through the temporal bones produce a different sound experience for the user. Sound vibrations can also be transmitted through other parts of the cranium, such as the nasal bone, sphenoid bone or jaw bone.


Every technology comes with its own pros and cons, and a technology succeeds only if it can overcome the most significant cons. Here are the pros and cons of bone-conduction technology.


Pros of bone-conduction technology:

  • As the vibrations are sensed through bones rather than air, the user is still able to hear external sounds even with the AirPods in place.
  • Bone-conduction helps people with hearing impairments get the most out of the sounds they can perceive.
  • The technology is particularly useful where air-conduction is not possible, for example in space.

Cons of bone-conduction technology:

  • The technology doesn’t work well for high frequencies, even though humans can hear frequencies from 20 Hz to 20 kHz.
  • As the sound vibrations are transmitted through the bones, high-intensity sounds may cause tickling or even annoying vibrations.

Though there is a chance Apple will implement this technology in its smart glasses, the AirPods are more likely to bring it to market first. There are various other speculations, such as Apple working on a hearing-impairment project. Until Apple makes an announcement, these speculations will keep mushrooming!


You can find the patent submitted by Apple here.

What are Low Earth Orbit (LEO) satellites?


A Low Earth Orbit satellite orbits at an altitude of up to 2,000 km above the earth’s surface – unlike a geostationary satellite, which sits at roughly 35,786 km. LEO satellites are used for limited-area coverage. Compared to conventional geostationary satellites, LEO satellites require far less energy to place into orbit, and their proximity to earth results in high bandwidth and low latency for communication.
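Kepler’s third law makes the altitude difference concrete: a satellite at the 2,000 km LEO ceiling circles the earth in about two hours, versus roughly a day at geostationary altitude. A quick sketch (standard constants, circular orbits assumed):

```python
import math

# Orbital period T = 2*pi*sqrt(a^3 / mu) for a circular orbit of
# semi-major axis a (Earth radius + altitude).
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def orbital_period_minutes(altitude_m):
    a = R_EARTH + altitude_m
    return 2 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60

print(round(orbital_period_minutes(2_000_000)))    # LEO ceiling  -> ~127 min
print(round(orbital_period_minutes(35_786_000)))   # GEO altitude -> ~1436 min
```

The much smaller orbital radius is also why the signal round-trip to a LEO satellite is milliseconds rather than the quarter-second-plus of a geostationary hop.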


What is a satellite constellation?


A group of artificial satellites that work collectively is called a satellite constellation. Satellite constellations are usually formed from LEO satellites placed in sets of complementary orbital planes. The satellites of these constellations communicate with the earth through globally distributed ground stations.


Amazon’s Project Kuiper for Low Earth Orbit (LEO) satellite constellation
Image source: iStockphoto

What is Project Kuiper?


Kuiper Systems LLC is a subsidiary of Amazon, founded in 2019 and backed by a huge investment of $10 billion. The primary objective of Kuiper Systems is to deploy a constellation of 3,236 LEO satellites to provide internet connectivity to remote locations that lack optical fiber connectivity. Amazon calls this effort Project Kuiper.


Kuiper Systems LLC and Project Kuiper were named after the Kuiper belt, the circumstellar disc in the outer solar system extending from the orbit of Neptune to about 50 AU (Astronomical Units) from the Sun.

Project Kuiper
Image source: Google


Approval for such satellite communication projects is handled by the Federal Communications Commission (FCC). Though Amazon announced Project Kuiper the previous spring, the FCC took some time before approving it with a 5-0 vote. In the words of Dave Limp, Senior Vice President at Amazon:

We have heard so many stories lately about people who are unable to do their job or complete schoolwork because they don’t have reliable internet at home. There are still too many places where broadband access is unreliable or where it doesn’t exist at all. Kuiper will change that. Our $10 billion investment will create jobs and infrastructure around the United States that will help us close this gap. We appreciate the FCC's unanimous, bipartisan support on this issue, and I want to thank Chairman Pai and the rest of the Commission for taking this important first step with us. We’re off to the races.


Amazon is also trying to partner with public and private firms to extend the reach of Project Kuiper as far as possible. With Project Kuiper, Amazon also plans to offer backhaul solutions for wireless carriers extending LTE and 5G service to new regions of the USA. The development and testing of Project Kuiper will be carried out in a facility opening in Redmond, Wash.


Research in the field of satellite constellations has been gaining popularity recently. Though Amazon’s constellation has only just been approved, SpaceX, led by Elon Musk, has been creating a buzz over the past few months. OneWeb also plans to launch as many as 650 satellites initially to form its own satellite constellation.

Nex Computer LLC is a technology firm based in California that has come up with a revolutionary docking concept called NexDock.


NexDock turns your Smartphone into a laptop – At a fraction of the cost of a laptop


Basic idea of NexDock:


Innovations and advancements in mobile technology have made the mobile phone a basic need for every individual. The simple, innovative idea behind NexDock is to take advantage of the processing power already in your mobile. Many corporate professionals already enjoy the convenience of a docking station rather than carrying a laptop between home and the workplace.


Nex Computers takes this convenience to the next level by letting you dock a mobile phone to the NexDock device.


How does it work?


If your mobile phone comes with a “desktop mode” feature, then this is for you. All you have to do is enable “desktop mode” on your mobile and connect it to the NexDock, which provides all the features you expect from a laptop.


Working of NexDock

Specifications of NexDock docking station:


Specifications of Nex Computers LLC's NexDock


Operating system details:


A natural question when docking comes up is: “What does the operating system look like after docking the mobile?” As soon as you dock your mobile to the NexDock, the apps on the phone become usable from the “laptop”. They become resizable, and a Windows-like environment is created from Android. This gives the experience of Windows with the power of Android.


Windows-like Android Operating System for NexDock



Compatible smartphones:


Though this is a gold mine of a concept, the caveat is that your phone must support the “desktop mode” feature, and not every mobile comes with it. Nex Computers has therefore listed the smartphones that support docking. At the time of writing, Samsung, Huawei and LG are the only mobile phone companies supporting docking.

Ports of NexDock

Samsung calls its “desktop mode” Samsung DeX, and Huawei calls its version Easy Projection. LG has not yet announced a name for this feature.


Samsung:

  • Galaxy S8, S8+, S8 Active, S9, S9+
  • Note 8, 9, 10, 10+
  • Galaxy S10, S10E, S10+, Fold
  • Galaxy S20, S20+, S20 Ultra

Huawei:

  • Mate 10, Mate 10 Pro
  • Mate 20, 20 Pro, 20 Pro X, Mate 30
  • P20, P20 Pro, P30, P30 Pro
  • Honor Note 10, View 20

LG (After Android 10 update):

  • G8 Thinq
  • V50 Thinq
  • Velvet

Benefits of the very idea of NexDock:


  • You can use your existing data plan and stay connected all the time.
  • No need to carry a power bank – you can charge your phone from the NexDock.
  • A new phone is like a new laptop – when you buy a new mobile phone, your NexDock “laptop” gains the power and capability of the new phone.

What better can NexDock yield?


  • NexDock can connect to a Raspberry Pi and harness its true potential, letting you build your own “developer” PC at the lowest possible price.
  • You can use it as a secondary screen for your primary computer.
  • If you have a gaming console, you can use the NexDock as its monitor.
  • If you have a TV stick like the Amazon Fire Stick, you get a TV on the go.
NexDock with Raspberry Pi


What’s next?


Nex Computers LLC is working on the next version of NexDock – NexDock 2 – which addresses the drawbacks of the first device. The NexDock 2 is available at a price of $259. The specifications of NexDock 2 follow.


  • Display: 13.3-inch, 1920 x 1080 pixels resolution
  • Dimensions: 317 x 215 x 15.9 mm
  • Weight: 1420 grams
  • Audio: Four 1W speakers
  • Battery: 6,800 mAh

Daimler's Mercedes and Nvidia are together pioneering a new technology in the automobile industry: the Software-Defined Car (SDC). It can be compared to a modern mobile device – we buy a phone, receive the latest software via updates, and use the upgraded features. An SDC works in a similar way: the customer buys the car, then periodically downloads new features or updates, some of which may not even have been available at the time of purchase. In technical terms, dedicated hardware Electronic Control Units (ECUs) are replaced with software.


Mercedes and Nvidia announce Drive AGX Orin SoC for Software-Defined Car
Image source : Google


In the automobile industry, cars are categorized into four levels based on the amount of human interaction required.


Level 1 – Completely manual (old generation cars)


Level 2 – The car assists the driver (with navigation and ABS etc.)


Level 3 – Human intervention is needed at times (for parking, paying tolls etc.)


Level 4 – No need of human interaction at all


Tesla stood as the first major initiative in autonomous cars over the past years. Later, Volkswagen followed Tesla with a project that is now bearing fruit in the form of an all-electric car. This all-electric car employs a single master electronic architecture that powers all the electric and self-driving cars manufactured by Volkswagen. However, there has been a delay in the launch of these all-electric cars due to a glitch in the software.


This hints that Daimler might move into the software industry to develop its own software, though it is not yet clear how Daimler will manage it. Daimler spokesman Bernhard Wardin stated that the model to be launched as an SDC has been selected, but he declined to disclose the model and the name of the car.


Technical details of Drive AGX Orin SoC:


The AGX Orin SoC is fabricated with 17 billion transistors, and Nvidia claims it is a new-generation deep learning and computer vision accelerator. The AGX Orin SoC is expected to deliver as many as 200 trillion operations per second – nearly seven times the performance of Nvidia’s Xavier series SoCs. Notably, Nvidia has announced the Orin series even though Xavier itself only started shipping last year.


Nvidia Xavier Series SoC
Image source : Google


With the advent of this entirely new series of SoC, Daimler’s Mercedes moves into the “Level 4” category. Though Audi is also working toward a Level 4 car, it uses Nvidia’s Xavier SoC. With all the automobile companies putting effort into manufacturing Level 4 cars, the industry seems headed for tough competition.


Danny Shapiro, Nvidia’s senior director of automotive stated the following.


In modern cars, there can be 100, up to 125, ECUs. Many of those will be replaced by software apps. That will change how different things function in the car—from windshield wipers to door locks to performance mode.

Artificial Intelligence (AI) has been one of the fastest-evolving technologies of the past few years. Following AI’s advent into creative branches like music, painting, sculpture and choreography, the tech world has recently seen an astonishing application of AI: literature.


The AI Poet – “Deep-speare” – mastered the dynamics of Shakespeare’s work
Image source : Google
 

Following are stanzas from sonnets written by William Shakespeare and by Deep-speare.


Sonnet written by William Shakespeare


Sonnet written by Deep-speare


On close reading, the piece created by Deep-speare is complete nonsense. But at first glance, you cannot tell the stanza created by Deep-speare from the one by Shakespeare.


The research team consisted of three machine learning researchers and one scholar of literature. The team trained Deep-speare on 2,700 sonnets taken from Project Gutenberg; apart from these sonnets, no other input was given. From this input alone, the “poet” worked out three things on its own: rhyme, rhyming scheme and the fundamentals of human language.


The models generated by AI systems are exceptional at discovering patterns, some of which were never intended. Such unexpectedly recognized patterns are referred to as “accidental creativity.”


A sonnet is a kind of poem comprising fourteen lines. Sonnets are often used as a tool to present a problem and a corresponding resolution: the problem the poet wants to express is presented in three stanzas called quatrains, and the resolution follows in a couplet. English poets developed a sonnet style with 10-syllable lines in an unstressed-stressed rhythmic pattern called “iambic pentameter.” Shakespeare used this form so frequently that it came to be called the Shakespearean sonnet.


Shakespearean sonnet
Image source : Wikihow

With the Deep-speare project, the researchers generated individual quatrains in the style of Shakespearean sonnets, with verses in iambic pentameter and regular rhyme schemes.


The Poetic process:


Following are the various steps involved in the poetic process followed by Deep-speare.


Step 1:

Deep-speare first chooses the last word of a line by assessing how frequently each word appears at the end of a line across the inputs provided. A list of the top five words is generated, from which one is sampled and placed as the last word of the new line.

Step 2:

The process is repeated for every word of the line, working from the frequency with which words appear next to each other in the training sonnets.


Steps 1 and 2


Step 3:

This procedure is repeated to generate as many candidate lines as possible. Each generated line is scored against a rhythm model, and Deep-speare samples a line that fits well into the iambic pentameter of the sonnet model.

Step 3

Step 4:

Deep-speare repeats these steps, bottom to top, for all the lines under consideration.

Step 5:

To conclude a poem, a “rhymability” score is computed, and by sampling according to this score, Deep-speare finishes the poem.

Steps 4 and 5
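The steps above can be sketched as a toy program. The vocabulary, word frequencies and line length below are invented for illustration; the real Deep-speare uses neural language, rhythm and rhyme models rather than raw counts.

```python
import random

# Toy sketch of backwards line generation: pick a line's last word from
# end-of-line frequencies, then extend the line backwards word by word.
# All data here is made up for illustration.

END_WORD_FREQ = {"day": 9, "night": 7, "love": 6, "eyes": 4, "light": 3}

# frequency of words appearing immediately *before* a given word
PREV_WORD_FREQ = {
    "day": {"summer's": 5, "dying": 2},
    "night": {"silent": 4, "darkest": 3},
    "love": {"tender": 4, "secret": 2},
    "eyes": {"weeping": 3, "shining": 3},
    "light": {"fading": 4, "gentle": 2},
}

def sample_top_k(freqs, k=5):
    """Keep the k most frequent candidates and sample one by frequency."""
    top = sorted(freqs, key=freqs.get, reverse=True)[:k]
    return random.choices(top, weights=[freqs[w] for w in top])[0]

def generate_line(length=4):
    line = [sample_top_k(END_WORD_FREQ)]      # step 1: pick the last word
    while len(line) < length:                 # step 2: extend backwards
        prev = PREV_WORD_FREQ.get(line[0], END_WORD_FREQ)
        line.insert(0, sample_top_k(prev))
    return " ".join(line)

print(generate_line())
```

Steps 3 to 5 would then generate many such candidate lines, score each against rhythm and rhymability models, and keep the best-fitting ones.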

A Neural Network (NN) is a system that “learns” to perform a task based upon the inputs given to it. Neural networks occur naturally in humans, where they activate to perform specific tasks; these are called biological or natural neural networks.


Indian scientists proposed a highly efficient Spiking Neural Networks (SNN) approach
Illustration : iStockphoto


Scientists are working hard to recreate such a “neural” system artificially, in what are called Artificial Neural Networks (ANNs). The tasks these ANNs perform are represented in the form of mathematical functions called artificial neurons, which are the basic building blocks of ANNs. Spiking Neural Networks (SNNs) are a further category of artificial neural networks that closely mimic natural or biological neural networks.


The artificial neurons used in SNNs differ from those used in conventional ANNs: they have the ability to “fire” like biological neurons, i.e. they release bursts of electricity. These impulses help them connect with the neurons surrounding them and form a neural network.


A group of researchers from the Indian Institute of Technology Bombay (IITB) is working on the third generation of these SNNs. Compared to previous generations, the third-generation systems are far more efficient at firing impulses and forming networks, which allows more “neurons” to be placed on a computer chip. This mimicry gives us a better way to understand the very nature of the human brain and its functionality.


Working of artificial neurons in SNNs:


As mentioned earlier, the neurons in the human brain communicate by transmitting electrical spikes among themselves. In SNNs, such electrical impulses are produced by leaky capacitors: when a leaky capacitor reaches a threshold charge, its voltage or current “spikes” out, affecting the neighboring capacitor. In these devices, the tiny charging currents are supplied by quantum mechanical tunneling. Recursive spiking among the capacitors forms a network that can work on a given task without any training inputs.
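The “leaky capacitor reaching a threshold” behavior described above is the classic leaky integrate-and-fire neuron model. A minimal sketch (the parameters are illustrative, not values from the IITB device):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: integrate the input
# current, leak charge every step, and emit a spike when the membrane
# potential crosses the threshold. Parameters are illustrative only.

def lif_spikes(input_current, threshold=1.0, leak=0.1, dt=1.0):
    """Return the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t, i in enumerate(input_current):
        v += dt * (i - leak * v)    # charge the leaky capacitor
        if v >= threshold:          # potential crosses the threshold
            spikes.append(t)        # -> the neuron "fires"
            v = 0.0                 # reset after the spike
    return spikes

print(lif_spikes([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```

A steady input produces a regular spike train; a stronger input charges the capacitor faster and fires more often, which is how such neurons encode signal strength in spike timing.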


Contribution of research team from IITB:


The research team from IITB has replaced the conventional artificial neurons with silicon-based devices called Metal-Oxide-Semiconductor Field-Effect Transistors (MOSFETs). Compared to traditional transistors, MOSFETs produce more efficient tunneling, and more efficient tunneling results in stronger SNNs with better capabilities for learning and adapting.


Advantage of artificial neurons used in SNNs:


Replacing the conventional artificial neurons with MOSFETs adds another operating mode to these SNNs: off-current mode. This mode allows the capacitors to be 10,000 times smaller than they would otherwise need to be for the currents passed through them, which makes these neurons more energy efficient than conventional neurons. In the words of Tanmay Chavan, a member of IITB's research team:


The use of quantum mechanical tunneling provides incredible control, which is a huge advantage. Given the fantastic performance at a unit neuron level, we plan to demonstrate networks of such neurons to understand how models of networks of neurons behave on silicon. This will enable us to understand the robustness and systems-level efficiency of the technology.


You can find the simulations to visualize SNNs here and here.


Google Glass is one of the most prestigious projects Google has initiated. Google started shipping the Google Glass prototype on 15th April 2013 and tentatively shelved it by 15th January 2015. Google later launched the Google Glass Enterprise Edition in July 2017 and the Enterprise Edition 2 in May 2019. Though Google Glass was much anticipated throughout its incarnation and reincarnation, it was not as successful as expected.


Google acquires Human Computer Interfaces and Smart glasses company – North


Despite the odds, Google didn’t give up on Glass. As part of enhancing the product, Google has acquired North, a Canada-based human computer interfaces and smart glasses company. North was established in 2012 as Thalmic Labs and later renamed. Rick Osterloh, Senior Vice President of Devices & Services at Google, has said that a day is coming when we will live with a new kind of computing called ambient computing.


From 10 blue links on a PC, to Maps on your mobile phone, to Google Nest Hub sharing a recipe in the kitchen, Google has always striven to be helpful to people in their daily lives. We’re building towards a future where helpfulness is all around you, where all your devices just work together and technology fades into the background. We call this ambient computing.


North had been working on a device called the Myo Gesture Control Armband, which translates neuromuscular impulses into electrical signals that a computer can understand and respond to. Later, the company started a project called Focals, smart glasses built around direct retinal projection and prescription compatibility.


Myo Gesture Control Armband

Though there was no mention of the price Google paid for the acquisition, Google announced the news on its blog.


Is the world really ready to adopt smart glasses?


Apart from this acquisition and the opinions of various people in tech, is the world really ready for this technology – smart glasses?


The answer is ‘yes’, but with a caveat. Back in 2014, a mobile phone with 1 GB of RAM was considered a flagship. Fast-forward to 2020 and we are at 12–16 GB of RAM. This transition occurred because people felt a real need for that much memory. If smart glasses become something people really want – or if Google can present a product too compelling to refuse – smart glasses are all but certain to change the path of technology.


What is Quantum internet?


A traditional computer uses electric charge, such as the charge accumulated between two parallel plates, to represent either 0 or 1 – a bit. In a quantum computer, 0 and 1 can be represented by the state of electrons or photons. For example, the spin of an electron can encode 0 and 1 as spin-up and spin-down, and the polarization of a photon can encode 0 as horizontal polarization and 1 as vertical. These are called qubits, short for quantum bits.


Quantum internet – China successfully transmits quantum memory over 50km

Just as a network of traditional computers used for data access across the globe is called the internet, a network of quantum computers is called the quantum internet.


In traditional computing, a bit has only one state at a time – either 0 or 1. In quantum computing, a qubit can be in both 0 and 1 states simultaneously. The quantum internet has not yet moved beyond the lab: the DARPA Quantum Network, the world’s first quantum network, operates 10 optical nodes across Boston and Cambridge, Massachusetts.


Recently, a team of scientists successfully transmitted quantum memory over more than 50 km. By the metrics of the modern internet this is not a great range, but it is more than 40 times the previous record for quantum memory transmission.


When two or more qubits are brought together, they can become entangled with each other. Entanglement affects qubits in a quite strange manner: any action performed on one of the entangled particles instantaneously affects the remaining particles, irrespective of the distance between them. Albert Einstein famously called this phenomenon “spooky action at a distance”.
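A toy model can illustrate the strangest part of entanglement: each half of an entangled pair looks perfectly random on its own, yet the two measurement outcomes always agree, no matter how far apart the particles are. The sketch below (an assumption-laden classical stand-in, not real quantum mechanics) reproduces only the correlation, for a Bell pair of the form (|00> + |11>)/√2.

```python
import random

def measure_bell_pair(rng=random):
    """Measure both halves of an entangled Bell pair.

    Each party alone sees a fair coin flip, but the two results
    are always identical - the hallmark of this Bell state.
    """
    outcome = rng.choice([0, 1])  # shared 50/50 result
    return outcome, outcome       # Alice's bit, Bob's bit

alice, bob = measure_bell_pair()
print(alice == bob)  # True, every time
```

Note that neither party controls which outcome appears, which is why entanglement alone cannot be used to send a chosen message faster than light.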


Quantum Communication

Jian-Wei Pan is one of the research scientists working on building the quantum internet. In 2017, Pan and his team built the Beijing–Shanghai quantum secure communication backbone network with the help of an Earth-orbiting satellite relay named Micius. Pan has also mentioned that they successfully demonstrated the entanglement of particles through empty space. Here is the full interview with Pan from March 2019.


Quantum leaps - China's Earth-orbiting satellite Micius
Image source - Google

Data rates in quantum networks:


Quantum networking takes advantage of this “spooky action at a distance” phenomenon to transmit data. Hence, in comparison with the data rate of the traditional internet, the data rate of quantum networks can be several times higher.


Security in quantum networks:


In quantum networks, data is stored as a superposition of 0 and 1, so communication between two quantum computers cannot be silently intercepted. If the data is intercepted, the state of the particles is disturbed. This leads to another layer of security: communication between two quantum computers cannot be tapped without the computers detecting that an eavesdropper was present.
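The detection property can be made concrete with a simplified, BB84-style sketch. In this toy simulation (a classical model under stated assumptions, not real quantum key distribution), the sender encodes each bit in a randomly chosen basis; an eavesdropper who measures in the wrong basis randomizes the bit, so roughly 25% of the bits the legitimate parties later compare come out wrong – revealing the tap.

```python
import random

def transmit(n_bits, eavesdrop, rng):
    """Return the error rate on bits where sender and receiver bases match."""
    errors, kept = 0, 0
    for _ in range(n_bits):
        bit = rng.choice([0, 1])
        basis_a = rng.choice(["+", "x"])      # sender's encoding basis
        value, basis = bit, basis_a
        if eavesdrop:
            basis_e = rng.choice(["+", "x"])  # eavesdropper's guess
            if basis_e != basis:
                value = rng.choice([0, 1])    # wrong basis randomizes the bit
            basis = basis_e                   # photon re-encoded in Eve's basis
        basis_b = rng.choice(["+", "x"])      # receiver's measurement basis
        measured = value if basis_b == basis else rng.choice([0, 1])
        if basis_b == basis_a:                # sifting: keep matching-basis bits
            kept += 1
            errors += (measured != bit)
    return errors / kept

rng = random.Random(0)
print(transmit(20000, eavesdrop=False, rng=rng))  # 0.0 - clean channel
print(transmit(20000, eavesdrop=True, rng=rng))   # ~0.25 - tap detected
```

With no eavesdropper the compared bits agree perfectly; any interception pushes the error rate toward 25%, which is how the parties learn the channel was compromised (though not who compromised it).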


Limitations of quantum networks:


Though the quantum internet offers significant advantages, the technology currently faces several limitations.


In practice, entangling particles is not an easy task. Even a slight change in temperature or the slightest vibration in the medium can disturb the state of the particles, which ultimately results in data loss.


Jian-Wei Pan in lab building the quantum internet
Image source - Google

It is expected to take at least another decade to build a successful quantum network with all of these challenges addressed.


The near-term target is to make telecommunication more secure. The quantum internet of the future might be completely different from what we imagine now.


Facial recognition has become one of the technologies used in many walks of modern life, in applications such as security and defense, retail and marketing, healthcare, and hospitality. Security and defense accounts for by far the largest share of facial recognition usage.



The origin of facial recognition dates back to the 1800s. Almost as soon as the camera was invented, law enforcement began exploiting its potential by recording images of criminals to identify repeat offenders.


Despite such critical applications, San Francisco decided to ban the use of facial recognition in May 2019, becoming the first city in the world to impose such a ban. Recently, the city of Boston joined the list by banning facial recognition as well.


In one case, five watches worth $3,800 were stolen from a Shinola retail store. As part of the investigation, the police collected the CCTV footage and ran facial recognition software on the person captured in it. The software identified the person as Robert Julian-Borchak Williams, a 42-year-old from Farmington Hills, Michigan. The charges against Williams were eventually dropped because the facial recognition software had produced a false match.


In this context, the Boston City Council voted to ban the use of such software; the measure now goes to Boston Mayor Marty Walsh. Councilor Ricardo Arroyo, one of the sponsors of the bill, noted that the facial recognition software used by the police is inaccurate for people of color.

Beyond this practical example of facial recognition error, MIT conducted a study of the accuracy of such software. It revealed an error rate of 0.8% for light-skinned men versus 34.7% for dark-skinned women.


Light-skinned man identified by Facial recognition software


Dark-skinned woman identified by Facial recognition software


It has an obvious racial bias and that's dangerous. But it also has sort of a chilling effect on civil liberties. And so, in a time where we're seeing so much direct action in the form of marches and protests for rights, any kind of surveillance technology that could be used to essentially chill free speech or ... more or less monitor activism or activists is dangerous.


The Boston Police Commissioner, William Gross, said that the current technology is not reliable. In his words: “Until this technology is 100%, I'm not interested in it. I didn’t forget that I'm African American and I can be misidentified as well.”


Against the backdrop of the recent uprising against racism, Williams’s case is truly alarming: because of a software error, he was accused and had to appear in court. In an era of breaking down inequalities, the software we build, the algorithms implemented in machine learning systems, and the applications of those algorithms should be designed with the greatest care.