The Essential Guide to Digital Transformation

Chapter 6

The Beginner's Guide to Emerging Technologies

7 April, 2020

Fifth Generation (5G) cellular

First, let's put 5G into perspective

2G phones let us send SMS text messages, 3G let us upload pictures, and 4G let us watch video. So what's the big deal with 5G? Actually, there isn't one single feature that defines 5G, because there are many! Some applications have been around for a while, just waiting for the network technology to catch up before they can become as mainstream as instant messaging, digital photography, and streamed movies. 5G is that network technology.
A few examples: virtual and augmented reality video; high-speed gaming without a console; remotely driving vehicles on public roads; surgeons performing operations with robots in rural clinics from the comfort of the city hospital; intelligent video cameras improving the policing of street crime. 5G is all about speed: it brings a massive increase in the speed of your Internet connection and vastly reduces the response time of whatever it is you are doing. We call these characteristics "bandwidth" and "latency": bandwidth goes up, latency goes down.
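The bandwidth and latency trade described above is easy to see with some back-of-envelope arithmetic. A minimal sketch, assuming purely illustrative figures (roughly 50 Mbps and 50 ms for 4G, 1 Gbps and 5 ms for 5G — these are not guaranteed network speeds):

```python
# Back-of-envelope comparison of download time for a 25 MB video clip.
# The throughput and latency figures are illustrative assumptions only.

def transfer_time_s(size_bytes, bandwidth_bps, latency_s):
    """Connection latency plus the time to push the bytes through."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

clip = 25 * 1024 * 1024  # a 25 MB video clip

t_4g = transfer_time_s(clip, 50e6, 0.050)   # ~50 Mbps, ~50 ms latency
t_5g = transfer_time_s(clip, 1e9, 0.005)    # ~1 Gbps, ~5 ms latency

print(f"4G: {t_4g:.2f} s, 5G: {t_5g:.2f} s")
```

Even with these rough numbers, the same clip arrives roughly twenty times faster, and the lower latency is what makes interactive uses like gaming and remote driving feel instant.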
5G isn't an upgrade of one single technology; it is about connecting everything faster. We can break it down into four main components, and all four need to be upgraded: (1) your phone, (2) the antenna it talks to, (3) the fibre-optic cable in the ground, and (4) the core network that manages everything.

1. Your Phone

So, let's start with the first component: your phone. Yes, you will need a new phone, and that phone must contain a new radio chip that communicates 5G signals over 5G radio frequencies. But other than that, there will likely be no difference from your existing smartphone. Same screen, same apps. It may feel a bit quicker when you swipe between apps and websites, because the response time over the 5G network is improved, but you're not going to notice an immediate difference to your mobile phone service compared to your 4G phone. Look at it like this: you can already watch live video on your 4G phone in great HD quality today, and any doctor would tell you your eye would not see any increase in picture quality on that small screen if it were sent as Ultra-HD 4k video over 5G.
But over time, you will see new virtual and augmented reality apps that take advantage of the 5G network. For example, you could be sitting in Wembley stadium watching a football match and using the Sports Channel AR app on your 5G phone to see player details superimposed over the live broadcast. Or you could even insert yourself into the football or basketball action on screen to make a cute video for Instagram.

2. Antenna

The second component is the 5G antenna that your new 5G phone connects to. We call this the Radio Access Network (RAN), and it uses radio waves to talk to your phone. These 5G antennas are quite different to the mobile antennas that you see on top of buildings today. The 5G antenna will be a rectangle about 70 cm by 40 cm. Inside is a grid of tiny antennas that we call "Massive MIMO". There will be either 32 or 64 transmitters and 32 or 64 receivers inside that single antenna unit, which gives it the name "Multiple In, Multiple Out", or "MIMO"; with that many elements, it really is massive. 5G is being licensed by governments in higher radio frequencies than 2G, 3G, or 4G mobile networks. The higher the radio frequency, the smaller the individual antenna, which means we can pack more of them into a single antenna unit. 5G is most commonly being licensed in what's called the C-band, but it's also being licensed in the much higher mm-wave frequencies, where these Massive-MIMO antennas will have hundreds of transmitters and receivers in a single unit. The more transmitters and receivers working simultaneously, the faster your Internet connection.
With so many antennas packed into a small space, we can use Artificial Intelligence (AI) to direct the radio waves to your specific location in a process called "beamforming". Think of beamforming as a searchlight picking out exactly where you are. No longer will you have to wave your phone in the air to get a stronger signal. The AI will also manage the power of the radio beam so that it just reaches your phone and goes no further, saving electricity.
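The searchlight effect comes from feeding each tiny antenna the same signal with a slightly different phase. A minimal textbook sketch of how those phase offsets are computed for a uniform linear array, assuming an illustrative 3.5 GHz C-band carrier and the common half-wavelength element spacing:

```python
import math

# Phase offsets a Massive-MIMO panel would apply to steer a beam toward
# one user. Carrier frequency and spacing are illustrative assumptions.

C = 3e8                      # speed of light, m/s
freq_hz = 3.5e9              # illustrative C-band carrier
wavelength = C / freq_hz
spacing = wavelength / 2     # common half-wavelength element spacing

def steering_phases(n_elements, angle_deg):
    """Phase shift (radians) per element of a uniform linear array."""
    angle = math.radians(angle_deg)
    return [2 * math.pi * spacing * n * math.sin(angle) / wavelength
            for n in range(n_elements)]

# Steer an 8-element row of the panel 30 degrees off boresight:
phases = steering_phases(8, 30)
print([round(p, 2) for p in phases])
```

Each element lags its neighbour by a fixed amount, so the individual waves add up constructively only in the chosen direction; the real antenna does this in two dimensions and updates the angles continuously as you move.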

3. Fibre Optic

The third component is the physical network that links the antennas and transports data all around the country. You may hear it called “backhaul” or “transmission”, but here we’re going to call it the Transport Network. In the same way the Radio Access Network uses different radio frequencies to transmit data through the air, the Transport Network uses different light frequencies to transmit data over fibre-optic cable. There is no point using Massive-MIMO antennas in the Radio Access Network if you do not also increase capacity in the Transport Network behind it. We do this by increasing the number of light wavelengths we get down a single fibre-optic cable.
The latest 5G Transport Network will fit 120 different wavelengths down a single fibre, which is a huge amount of capacity. To route your data around the country, these optical signals have historically been converted to electrical signals so routers could work out where they needed to go, and every conversion adds delay. No longer. The latest routing technology works directly with the light signals, so no conversion to electrical signals is needed and your data races through the network with far less delay. Buffering of Ultra-HD 4k video should become a thing of the past.

4. The Core

Finally, we get to the fourth component: the Core Network, where radio and light waves become recognisable as your data. The core network understands two sorts of information: Control Plane information, which determines what's going on in the network, and User Plane information, which is yours. Don't worry about the Control Plane. The User Plane is where the data becomes things you know about: websites, streamed video, VoIP calls, instant messaging, or emails. The name "Core Network" is misleading. It doesn't mean it sits at the centre of the physical network. In fact, the 5G Core Network is a hierarchy of physical data centres that we're now calling the Telco Cloud. This cloudified Core Network comprises racks and racks of computers and storage built in air-conditioned units everywhere from huge out-of-town facilities to the smallest telephone exchange at the "edge" of the network. It's here, at the edge, that User Plane functions like streamed video will sit, ensuring the best possible response time for whatever it is you're doing online. The cloudification of the core network for 5G will bring with it concepts like Mobile Edge Computing (MEC) and Network Slicing that will enable things like the live football augmented reality app and the remote driving of vehicles that we mentioned earlier.

The 5 Keys to 5G

Speeds and feeds

Speeds of up to 20 Gbps will be achieved using a combination of innovations such as carrier aggregation (CA), massive multiple input multiple output (MIMO), and quadrature amplitude modulation (QAM).
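These three innovations multiply together, which is where headline figures like 20 Gbps come from. A simplified sketch of that multiplication, assuming illustrative bandwidth, spectral-efficiency, and overhead figures (this is not the 3GPP peak-rate formula):

```python
import math

# Simplified peak-rate estimate: carrier aggregation x MIMO layers x
# bits per QAM symbol. Bandwidth, symbol rate, and overhead values are
# illustrative assumptions, not standardized parameters.

def peak_rate_gbps(carriers, mimo_layers, qam_order,
                   symbols_per_sec_per_hz=1.0, bandwidth_hz=100e6,
                   overhead=0.25):
    bits_per_symbol = math.log2(qam_order)   # e.g. 256-QAM -> 8 bits
    raw = (carriers * mimo_layers * bits_per_symbol
           * symbols_per_sec_per_hz * bandwidth_hz)
    return raw * (1 - overhead) / 1e9        # net rate after overhead

# 8 aggregated 100 MHz carriers, 4 MIMO layers, 256-QAM:
print(f"{peak_rate_gbps(8, 4, 256):.1f} Gbps")
```

With these assumed numbers the estimate lands near the 20 Gbps headline; the point is that no single innovation gets there alone.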

Unlicensed spectrum

MNOs are increasingly using unlicensed spectrum in the 2.4 and 5 Gigahertz (GHz) frequency bands. 5G networks will need to tap into the vast amount of spectrum available in these unlicensed bands to offload traffic in heavily congested areas and provide connectivity for billions of IoT devices. Advancements in Wi-Fi, LTE in Unlicensed spectrum (LTE-U), License Assisted Access (LAA), and MulteFire, among others, provide better quality and regulated access to unlicensed spectrum.

Internet of Things (IoT)

IoT devices pose a diverse set of requirements and challenges for 5G networks, so it's only natural that a diverse set of solutions has emerged to meet them. A few of these solutions include NarrowBand IoT (NB-IoT), LTE Category M1 (LTE-M), Long Range (LoRa), and Sigfox.


Virtualization

Network functions virtualization (NFV) enables the massive scale and rapid elasticity that MNOs will require in their 5G networks. Virtualization enables a virtual evolved packet core (vEPC), centralized radio access network (C-RAN), mobile edge computing (MEC), and network slicing.

New Radio (NR)

Although the other 5G innovations introduced in this section all have strong starting points in LTE Advanced Pro, 5G NR is a true 5G-native radio access technology, standardized by 3GPP in Release 15. 5G NR addresses the need for a new radio access technology that will enable access speeds of up to 20 Gbps.

Artificial Intelligence (AI)

What is Artificial Intelligence?

The concept of what defines AI has changed over time, but at the core there has always been the idea of building machines which are capable of thinking like humans.
After all, human beings have proven uniquely capable of interpreting the world around us and using the information we pick up to effect change. If we want to build machines to help us do this more efficiently, then it makes sense to use ourselves as a blueprint.
AI, then, can be thought of as simulating the capacity for abstract, creative, deductive thought – and particularly the ability to learn which this gives rise to – using the digital, binary logic of computers.
Research and development work in AI is split between two branches. One is labelled “applied AI” which uses these principles of simulating human thought to carry out one specific task. The other is known as “generalized AI” – which seeks to develop machine intelligences that can turn their hands to any task, much like a person.
Artificial Intelligence (AI) represents machine-based intelligence, typically manifest in "cognitive" functions that humans associate with other human minds. There are a range of different technologies involved in AI including Machine Learning, Natural Language Processing, Deep Learning, and more. Cognitive Computing involves self-learning systems that use data mining, pattern recognition, and natural language processing to mimic the way the human brain works.
AI is increasingly integrated in many areas including Internet search, entertainment, commerce applications, content optimization, and robotics. The long-term prospect for these technologies is that they will become embedded in many different other technologies and provide autonomous decision making on behalf of humans, both directly, and indirectly through many processes, products, and services. AI is anticipated to have an ever increasing role in ICT including both traditional telecommunications as well as many communications enabled applications and digital commerce.
AI is rapidly becoming integrated into many aspects of communication, applications, content, and commerce. One such area transformed by AI is Customer Relationship Management (CRM). AI-enabled chatbots represent an advanced technology for automated CRM solutions. Existing User Interfaces (UI) do not scale very well. Chatbots represent a way for brands, businesses, and publishers to interact with users without requiring them to download an app, become familiar with a new UI, or configure and update it regularly. Chatbots provide conversational interfaces supported by AI to deliver automated, contextual communications.
AI is undergoing a transformation from silo implementations to a utility function across many industry verticals as a form of Artificial General Intelligence (AGI) capability. This capability is becoming embedded and/or associated with many applications, services, products, and solutions. Mind Commerce sees AI innovation in a variety of areas including personalized AI to both support and protect end-users. The Internet of Things (IoT) is a particularly important area for AI as a means for safeguarding assets, reducing fraud, and supporting analytics and automated decision making.
Another important industry solution for AI is Virtual Personal Assistant (VPA) applications, which use Autonomous Agents and Smart Machine technology to enable an Ambient User Experience for applications and services. VPAs rely upon software that provides advice while interfacing in a human-like fashion. The emerging role of the intelligent VPA encompasses answering questions in an advisory role and performing specific actions virtually on behalf of humans. The Internet of Things (IoT) intensifies this need as machines interact with other machines and humans autonomously.

What are the key developments in AI?

All of these advances have been made possible due to the focus on imitating human thought processes. The field of research which has been most fruitful in recent years is what has become known as “machine learning”. In fact, it’s become so integral to contemporary AI that the terms “artificial intelligence” and “machine learning” are sometimes used interchangeably.
However, this is an imprecise use of language, and the best way to think of it is that machine learning represents the current state of the art in the wider field of AI. The foundation of machine learning is that rather than having to be taught to do everything step by step, machines, if they can be programmed to think like us, can learn to work by observing, classifying, and learning from their mistakes, just as we do.
Perhaps the single biggest enabling factor has been the explosion of data which has been unleashed since mainstream society merged itself with the digital world. This availability of data – from things we share on social media to machine data generated by connected industrial machinery – means computers now have a universe of information available to them, to help them learn more efficiently and make better decisions.

What is the future of AI?

That depends on who you ask, and the answer will vary wildly!
Real fears have been voiced that the development of an intelligence which equals or exceeds our own, but which has the capacity to work at far higher speeds, could have negative implications for the future of humanity, and not just in apocalyptic sci-fi such as The Matrix or The Terminator, but by respected scientists like Stephen Hawking.
Even if robots don’t eradicate us or turn us into living batteries, a less dramatic but still nightmarish scenario is that automation of labour (mental as well as physical) will lead to profound societal change – perhaps for the better, or perhaps for the worse.

Data Analytics

What is Data Analytics?

Data analytics is the use of processes and technology, typically some sort of analytics software, to extract valuable insight out of datasets. This insight is then applied in a number of ways depending on the business, its industry, and unique requirements.
Data analytics is important because it helps businesses become data-driven, meaning decisions are supported through the use of data. Data analytics is also helping businesses to predict problems before they occur and map out possible solutions.
While more businesses turn to data analytics to identify gaps, there are still plenty of people who could use some clarification. That’s why we’re starting with the root of data analysis: discerning qualitative data from quantitative data.
The convergence of Cloud, Data Management, and IoT Platforms and Solutions is enabling the next evolution of data analytics, in which enterprises will realize significant tangible and intangible benefits from IoT data. The ability to sort data in a raw format, store it in different structural formats, and subsequently release it for further analytics will be of paramount importance across all industry verticals. IoT Data as a Service (IoTDaaS) offers convenient and cost-effective solutions to enterprises of various sizes and domains. IoTDaaS covers retrieving, storing, and analyzing information, offered either as individual services or as integrated packages depending on the customer's budget and requirements.
Every large corporation collects and maintains a huge amount of human-oriented data associated with its customers, including their preferences, purchases, habits, and other personal information. As the Internet of Things (IoT) progresses, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Industrial IoT (IIoT) and Enterprise IoT deployments will generate a substantial amount of data, most of which will be of the unstructured variety, requiring next-generation data analytics tools and techniques. Streaming IoT business data is highly valuable when it can be put into context and processed in real time, as this will facilitate completely new product and service offerings.

Section 1: Qualitative and quantitative data

Data analytics draws on both qualitative and quantitative data. The makeup of these data types is important because it determines how the data will be analyzed later on. Let's start with qualitative data.
Understanding qualitative data
Qualitative data asks “why,” and consists of characteristics, attributes, labels, and other identifiers. Some examples of how qualitative data is generated include:
  • Texts and documents
  • Audio and video recordings
  • Images and symbols
  • Interview transcripts and focus groups
  • Observations and notes
Qualitative data is descriptive and non-statistical, as opposed to quantitative data.
Understanding quantitative data
Quantitative data asks “how much” or “how many,” and consists of numbers and values. Some examples of how quantitative data is generated include:
  • Tests
  • Experiments
  • Surveys
  • Market research
  • Metrics
Quantitative data is statistical, conclusive, and measurable, making it the better candidate for data analysis.
With a grasp on the two types of data, it’s now time to see why data structures make such a difference as well.

Section 2: Structured and unstructured data

Next, onto structured and unstructured data. How data is structured will determine how it is collected and processed, and which methods will need to be used to extract insight. Let’s start with structured data.
Understanding structured data
Structured data is most often categorized as quantitative data. It is, as you may have guessed by its name, highly-structured and organized so it can be easily searched in relational databases. Think of spreadsheets and tables. Some examples of structured data include:
  • Names and dates
  • Home and email addresses
  • Identification numbers
  • Transactional information
Structured data is generally preferred for data analysis since it’s much easier for machines to digest, as opposed to unstructured data.
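A minimal sketch of why structured data is so easy to search: rows with fixed columns slot straight into a relational database and can be queried in one line. The table and column names here are invented purely for illustration:

```python
import sqlite3

# Structured customer records in an in-memory relational database.
# Schema and sample rows are made up for illustration.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE customers (
                    id INTEGER PRIMARY KEY,
                    name TEXT,
                    email TEXT,
                    signup_date TEXT)""")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?, ?)",
    [(1, "Ada", "ada@example.com", "2020-01-15"),
     (2, "Grace", "grace@example.com", "2020-03-02")])

# Once the schema is fixed, searching is a single declarative query:
rows = conn.execute(
    "SELECT name FROM customers WHERE signup_date >= '2020-02-01'"
).fetchall()
print(rows)
```

An email body or a video file offers no such fixed columns, which is exactly why unstructured data needs the different tooling described next.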
Understanding unstructured data
Unstructured data actually accounts for more than 80 percent of all data generated today. The downside to this is that unstructured data cannot be collected and processed using conventional tools and methods.
To harness unstructured data, more modern approaches like utilizing NoSQL databases or loading raw data into data lakes will need to be considered. Some examples of unstructured data include:
  • Emails and SMS
  • Audio and video files
  • Social media
  • Satellite and surveillance imagery
  • Server and web logs
Making sense of unstructured data isn’t an easy task, but for more predictive and proactive insights, more businesses are looking at ways to deconstruct it.

Section 3: The data analysis process

Now that we know the anatomy of data, it’s time to see the steps businesses have to take to analyze it. This is known as the data analysis process.
Step 1
The first step in this process is defining a need for analysis. Are sales dwindling? Are production costs soaring? Are customers satisfied with your product? These are questions that will need to be considered.
Step 2
Next, onto collecting data. A business will typically gather structured data from its internal sources, such as CRM software, ERP systems, marketing automation tools, and more. There are also many open data sources to gather external information. For example, accessing finance and economic datasets to locate any patterns or trends.
Step 3
After you have all the right data, it’s time to sort through and clean any duplicates, anomalous data, and other inconsistencies that could skew the analysis.
Step 4
Now for the analysis, and there are a number of ways to do so. For example, business intelligence software could generate charts and reports that are easily understood by decision-makers. One could also perform a variety of data mining techniques for deeper analysis. This step depends on the business’ requirements and resources.
Step 5
The final step is putting analysis into action. How one interprets the results of the analysis is crucial for resolving the business problem brought up in step one.
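Steps 2 through 4 can be sketched in a few lines of code. The monthly sales figures below are fabricated, and the outlier rule (drop anything more than three times the median) is a deliberately crude stand-in for real anomaly detection:

```python
import statistics

# Toy walk-through of the analysis process on made-up sales figures:
# collect raw records, remove duplicates and anomalies, then summarize.

raw_sales = [120, 120, 135, 128, 9999, 131, 127, 135]   # fabricated data

# Step 3: drop duplicates while keeping order, then filter outliers
# with a crude rule: discard anything above 3x the median.
seen, deduped = set(), []
for value in raw_sales:
    if value not in seen:
        seen.add(value)
        deduped.append(value)
median = statistics.median(deduped)
clean = [v for v in deduped if v <= 3 * median]

# Step 4: a basic descriptive summary for the report.
summary = {"count": len(clean),
           "mean": round(statistics.mean(clean), 1),
           "median": statistics.median(clean)}
print(summary)
```

Notice how the single 9999 entry would have wrecked the mean had step 3 been skipped, which is why cleaning always precedes analysis.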
Data analysis may have a set of steps, but not every analysis shows the same picture, which brings us to the next topic.

Section 4: Types of data analytics

Not all analyses are created equal. Each has its own level of complexity and depth of insight. Below are the four types of data analytics you'll commonly hear about.
1. Descriptive analytics
Descriptive analytics is introductory, retrospective, and is the first step of identifying “what happened” regarding a business query. For example, this type of analysis may point toward declining website traffic or an uptick in social media engagement. Descriptive analytics is the most common type of business analytics today.
2. Diagnostic analytics
Diagnostic analytics is retrospective as well, although it identifies "why" something may have occurred. It is a more in-depth, drilled-down analytical approach and may apply data mining techniques to provide context to a business query.
3. Predictive analytics
Predictive analytics attempts to forecast what is likely to happen next based on historical data. This is a type of advanced analytics, utilizing data mining, machine learning, and predictive modeling.
The usefulness of predictive analytics software transcends many industries. Banks are using it for clearer fraud detection, manufacturers are using it for predictive maintenance, and retailers are using it to identify up-sell opportunities.
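The simplest possible predictive model is a straight-line trend. A minimal sketch fitting ordinary least squares to fabricated monthly revenue and extrapolating one month ahead; real predictive analytics would use far richer features and models:

```python
# Toy predictive-analytics example: fit a linear trend to past monthly
# revenue (fabricated numbers) and forecast the next month.

months = [1, 2, 3, 4, 5, 6]
revenue = [100, 104, 109, 113, 118, 122]      # fabricated history

n = len(months)
mean_x = sum(months) / n
mean_y = sum(revenue) / n

# Ordinary least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, revenue))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

forecast_month_7 = intercept + slope * 7
print(f"Forecast for month 7: {forecast_month_7:.1f}")
```

The model only extrapolates the historical pattern; the moment the underlying behaviour changes, the forecast degrades, which is why predictive systems are retrained continuously.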
4. Prescriptive analytics
Prescriptive analytics is an analysis of extreme complexity, often requiring data scientists with prior knowledge of prescriptive models. Utilizing both historical data and external information, prescriptive analytics could provide calculated next steps a business should take to solve its query.
While every business would love to tap prescriptive analytics, the resources required put it out of reach for many. Still, there are some analytics trends we can expect to take shape soon.

Internet of Things (IoT)

We all know that IoT is changing industries across the board, from agriculture to healthcare to manufacturing and everything in between, but what is IoT, exactly? Working for an Internet of Things (IoT) company, I get asked that question all the time, and I've worked hard to boil the answer down to something anyone can understand. Here's everything you need to know about the Internet of Things.

What is Internet of Things (IoT)?

How are you reading this post right now? It might be on desktop, on mobile, maybe a tablet, but whatever device you’re using, it’s most definitely connected to the internet.
An internet connection is a wonderful thing: it gives us all sorts of benefits that just weren't possible before. If you're old enough, think of your cellphone before it was a smartphone. You could call and you could text, sure, but now you can read any book, watch any movie, or listen to any song, all in the palm of your hand. And that's just to name a few of the incredible things your smartphone can do.
Connecting things to the internet yields many amazing benefits. We’ve all seen these benefits with our smartphones, laptops, and tablets, but this is true for everything else too. And yes, I do mean everything.

“IoT means taking all the things in the world and connecting them to the internet.”

IoT definition for beginners
I think that confusion arises not because the concept is so narrow and tightly defined, but rather because it’s so broad and loosely defined. It can be hard to nail down the concept in your head when there are so many examples and possibilities in IoT.
To help clarify, I think it’s important to understand the benefits of connecting things to the internet. Why would we even want to connect everything to the internet?

Why IoT Matters

When something is connected to the internet, that means that it can send information or receive information, or both. This ability to send and/or receive information makes things smart, and smart is good.
Let's use smartphones again as an example. Right now you can listen to just about any song in the world, but it's not because your phone actually has every song in the world stored on it. It's because every song in the world is stored somewhere else, and your phone can send information (asking for that song) and then receive information (streaming that song on your phone).
To be smart, a thing doesn’t need to have super storage or a supercomputer inside of it. All a thing has to do is connect to super storage or to a supercomputer. Being connected is awesome.
In the Internet of Things, all the things that are being connected to the internet can be put into three categories:
  • Things that collect information and then send it.
  • Things that receive information and then act on it.
  • Things that do both.
And all three of these have enormous benefits that feed on each other.
1. Collecting and Sending Information
This means sensors. Sensors could be temperature sensors, motion sensors, moisture sensors, air quality sensors, light sensors, you name it. These sensors, along with a connection, allow us to automatically collect information from the environment which, in turn, allows us to make more intelligent decisions.
On the farm, automatically getting information about the soil moisture can tell farmers exactly when their crops need to be watered. Instead of watering too much (which can be an expensive over-use of irrigation systems and environmentally wasteful) or watering too little (which can be an expensive loss of crops), the farmer can ensure that crops get exactly the right amount of water. More money for farmers and more food for the world!
Just as our sight, hearing, smell, touch, and taste allow us, humans, to make sense of the world, sensors allow machines to make sense of the world.
2. Receiving and Acting on Information
We’re all very familiar with machines getting information and then acting. Your printer receives a document and it prints it. Your car receives a signal from your car keys and the doors open. The examples are endless.
Whether it's as simple as sending the command "turn on" or as complex as sending a 3D model to a 3D printer, we know that we can tell machines what to do from far away. So what?
The real power of the Internet of Things arises when things can do both of the above. Things that collect information and send it, but also receive information and act on it.
3. Doing Both
Let’s quickly go back to the farming example. The sensors can collect information about the soil moisture to tell the farmer how much to water the crops, but you don’t actually need the farmer. Instead, the irrigation system can automatically turn on as needed, based on how much moisture is in the soil.
You can take it a step further too. If the irrigation system receives information about the weather from its internet connection, it can also know when it’s going to rain and decide not to water the crops today because they’ll be watered by the rain anyways.
And it doesn’t stop there! All this information about the soil moisture, how much the irrigation system is watering the crops, and how well the crops actually grow can be collected and sent to supercomputers that run amazing algorithms that can make sense of all this information.
And that’s just one kind of sensor. Add in other sensors like light, air quality, and temperature, and these algorithms can learn much much more. With dozens, hundreds, thousands of farms all collecting this information, these algorithms can create incredible insights into how to make crops grow the best, helping to feed the world’s growing population.
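The "doing both" irrigation logic above fits in a few lines: read a soil-moisture sensor, check the rain forecast, then decide. The thresholds and sensor readings below are invented for illustration:

```python
# Sketch of a smart irrigation decision: water only if the soil is dry
# AND meaningful rain is not expected. All values are invented.

def should_irrigate(soil_moisture_pct, rain_forecast_mm,
                    dry_threshold=30, rain_threshold=5):
    """Return True when the crops actually need watering."""
    if soil_moisture_pct >= dry_threshold:
        return False          # soil is wet enough already
    if rain_forecast_mm >= rain_threshold:
        return False          # let the rain do the watering
    return True

print(should_irrigate(22, 0))    # dry soil, no rain forecast
print(should_irrigate(22, 12))   # dry soil, but rain is coming
print(should_irrigate(45, 0))    # soil already moist
```

In a real deployment the sensor feed and forecast would arrive over the network, and the decision itself could be refined by the learning algorithms described above, but the collect-decide-act loop is exactly this shape.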

3 Categories of Internet of Things (IoT) Solutions

1. Consumer IoT
In terms of Consumer IoT, there are a few particularly important consumer-oriented markets including Connected Automobiles, Connected Homes, and personal electronics such as Wearable Technology.
Connected Automobiles refers to the use of IoT and broadband communications technology (LTE, WiFi, and soon 5G) in the car, typically in conjunction with smartphones or other handheld and wearable devices. Vehicles are at the forefront of a major convergence that includes a few key technologies: 5G, Artificial Intelligence, Data Management (Big Data, Analytics, Visualization, etc.), Cloud Technologies, and IoT.
Connected (or Smart) Homes are Internet-connected residences that provide an enhanced lifestyle for their occupants by way of home automation as well as enhanced information, entertainment, and safety services. The Connected Home ecosystem is rapidly expanding beyond merely Connected Entertainment (TV, Receiver, DVD Recorder, Media Player, Gaming Consoles) to include many areas such as Home and Office Equipment (Printer, VoIP Phone, etc.), Personal Consumer Electronics (Wireless IP Camera, Smartphone, Tablet, Portable Media Players, Navigation Devices, etc.), Energy Management (Temperature, Lighting, Heating and Air Conditioning), Safety, Smart Consumer Appliances (Washing Machine, Refrigerator, etc.), and more.
Wearable technology is increasingly becoming an important medium for communication and infotainment services, as well as health, textile, military, and industrial solutions. Wearables provide both a new user interface and a convenient, always-available means of signaling, communications, and control via IoT.
This segment has the potential for massive transformation in many industries. Early-adopter industries include clothing, healthcare, sports, and fitness. For example, wearable devices and digital healthcare represent two dominant trends that are poised to redefine virtually everything about how health products and services are delivered and supported. Ranging from telemedicine to self-monitoring and diagnosis, wearable devices and IoT will start as a novelty and achieve necessity status as insurance companies' cost optimization becomes the main driver for adoption and usage.
2. Enterprise IoT
Enterprise IoT is concerned with a variety of factors dealing with the efficiency and effectiveness of business operations. For example, one important area to consider is the transition from traditional Enterprise Resource Planning (ERP) to IoT-enabled ERP, and the impact of IoT-enabled ERP on the enterprise as a whole. Leading ERP solution providers are adding IoT capabilities to ERP systems to generate meaningful insights for businesses. ERP systems are being coupled with sensors and other IoT devices that transmit data into the ERP system in real time without human intervention.
In another example that cuts across the Consumer, Enterprise, and Industrial IoT markets, consumer appliance data is fed directly into the manufacturer's ERP system without any intermediary system. This expedites fault finding and proactive maintenance using machine-generated data. This type of consumer-centric ERP process will be the new reality for enterprise ERP systems integrated with IoT solutions.
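The pattern above can be sketched in a few lines of Python. This is a hypothetical illustration (the asset names, threshold, and record fields are invented), showing machine-generated readings flowing straight into an ERP-style work-order list with no human in the loop:

```python
# Hypothetical sketch: appliance sensor readings flow into an ERP-style
# record store, and a simple rule raises a proactive maintenance order
# automatically. All names and values are illustrative.

VIBRATION_LIMIT = 7.0  # assumed fault threshold for a washing-machine motor

def ingest(erp_orders, reading):
    """Consume one machine-generated reading; open a work order on anomaly."""
    if reading["vibration"] > VIBRATION_LIMIT:
        erp_orders.append({
            "type": "proactive_maintenance",
            "asset": reading["asset"],
            "reason": f"vibration {reading['vibration']} > {VIBRATION_LIMIT}",
        })

orders = []
for r in [{"asset": "WM-104", "vibration": 3.2},
          {"asset": "WM-104", "vibration": 8.1},   # anomaly -> work order
          {"asset": "WM-207", "vibration": 2.9}]:
    ingest(orders, r)
print(len(orders), orders[0]["asset"])  # 1 WM-104
```

A real integration would of course go through the ERP vendor's API rather than an in-memory list, but the trigger logic is the same.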
3. Industrial IoT
The industrial sector is rapidly integrating the Internet of Things (IoT) with other key technologies such as 3D Printing, Big Data, and Streaming Analytics. Typically referred to as the Industrial Internet of Things (IIoT) or simply the Industrial Internet, IoT in industry includes Connected Manufacturing, in which the combination of certain key technologies is anticipated to substantially advance the Industry 4.0 revolution towards increasingly smarter manufacturing. In terms of core functionality for Connected Manufacturing, IIoT provides the basis for communications, control, and automated data capture. Data Analytics provides the means to process vast amounts of machine-generated and often unstructured data. Accordingly, Big Data technologies and predictive analytics enable the streamlining of industrial processes. AI technology provides the means to further automate decision making and to engage machine learning for ongoing efficiency and effectiveness improvements.
IIoT is poised to transform many industry verticals such as Agriculture, Automotive, Healthcare, and more. Initially focused on improving existing processes and augmenting current infrastructure, IIoT will evolve to encompass next-generation methods and procedures. For example, IoT in Agriculture (IoTAg) represents a more specific use of the technology, wherein agricultural planning and operations become connected in ways previously impossible were it not for advances in sensors, communications, data analytics, and other IoTAg areas. IoT in Healthcare is another promising example. The evolving area of Real-Time Remote Medical Diagnosis Systems promises to revolutionize the detection and prescriptive abilities of healthcare diagnostics as IoT technologies integrate with Electronic Healthcare Record systems.

Near Field Communication (NFC)

What is NFC?

NFC (near-field communication) is what enables two devices to communicate wirelessly when they’re close together. NFC is actually a subset of something called RFID (radio-frequency identification), a technology that allows us to identify things through radio waves. RFID is nothing new: it’s been used for decades for things like scanning items in grocery stores, tracking luggage at baggage claim, and tagging cattle.
NFC, which was introduced in the early 2000s, uses a specific RFID frequency (13.56 MHz, to be exact) for close-range communications. To date, one of the more common uses for NFC is in identification cards that grant access to places like office buildings and private garages. But increasingly, NFC is being used to power something called “contactless” payments.
NFC isn’t just useful on its own—it can also be used in conjunction with other cutting-edge technologies such as the Internet of Things (IoT). From smartphones to home automation, this article will discuss the ways in which NFC and IoT intersect.
NFC enables simplified transactions, data exchange, pairing, and wireless connections between two objects in close proximity to one another (up to about 10 cm apart). Because the communication is one-to-one and requires such close proximity, data privacy is more inherent than with other wireless approaches.
The benefits of NFC include easy connections, rapid transactions, and simple exchange of data. NFC serves as a complement to other popular wireless technologies such as Bluetooth, which has a wider range than NFC but which also consumes more power.

How does NFC work with IoT?

Have you ever wondered about the science behind tap-and-go technologies like Apple Pay and contactless credit cards? In many cases, these services are powered by a method of wirelessly transferring data called near-field communication (NFC).
The Internet of Things (IoT) is a massive network of billions of devices, from industrial sensors to self-driving cars, that are connected to the Internet in order to collect and exchange information. Tech market research company Juniper Research projects that by 2020, there will be 38.5 billion IoT-connected gadgets.
By enabling closer integration and communication between devices, the IoT is widely expected to shake up the ways that people live, work, and play. However, there are a few serious roadblocks that stand on the path to mainstream IoT adoption.
For example, how do IoT objects know what a user is intending to do? How can you develop IoT devices that are secure from external attacks? How can you connect unpowered objects to the IoT?
NFC solves many of the challenges associated with IoT:
1. With a straightforward tap-and-go mechanism, NFC makes it simple and intuitive to connect two different IoT devices.
2. Because NFC chips must be in close proximity to each other to initiate a transaction, an NFC tap is a clear sign that the user intends to take a certain action. The short range of NFC also protects against unauthorized access by hackers.
3. NFC includes built-in features such as encryption that cut down on the potential for eavesdropping and other malicious activities.
4. Even objects without power or an IoT connection can passively exchange data via NFC tags. Users with an NFC-enabled device can tap the gadget to get information such as URLs.
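To make item 4 concrete, here is a minimal Python sketch of the NDEF URI record that such a passive tag typically carries, following the NFC Forum's URI record layout. It is simplified to a single short record with a subset of the prefix table, and the URL is invented; real tags additionally wrap records in the tag's memory structure:

```python
# Sketch: encoding a URL as an NFC Forum NDEF URI record, the payload an
# unpowered NFC tag serves to a phone that taps it. Simplified to one
# short record; a subset of the standard URI prefix codes is shown.

URI_PREFIXES = {0x01: "http://www.", 0x03: "http://", 0x04: "https://"}

def encode_uri_record(url: str) -> bytes:
    """Build a single short NDEF record of well-known type 'U' (URI)."""
    prefix_code, rest = 0x00, url
    for code, prefix in URI_PREFIXES.items():
        if url.startswith(prefix):
            prefix_code, rest = code, url[len(prefix):]
            break
    payload = bytes([prefix_code]) + rest.encode("utf-8")
    header = 0xD1  # MB=1, ME=1, SR=1 (short record), TNF=0x01 (well-known)
    return bytes([header, 0x01, len(payload), ord("U")]) + payload

def decode_uri_record(record: bytes) -> str:
    """Inverse of encode_uri_record, for the reader side."""
    payload_len = record[2]
    payload = record[4:4 + payload_len]
    return URI_PREFIXES.get(payload[0], "") + payload[1:].decode("utf-8")

record = encode_uri_record("https://example.com/menu")
print(decode_uri_record(record))  # round-trips back to the URL
```

The one-byte prefix code is a nice illustration of the format's design: tags often have only a few dozen bytes of memory, so common URL prefixes are compressed into a single byte.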
For example, NFC technology can be used as a substitute for hotel key cards. By downloading your hotel reservation to a mobile app, the NFC chip in your smartphone becomes a key that can unlock your door. In addition, NFC technology can be integrated almost anywhere you might need cheap, battery-less electronic tags like in event tickets and animal tags for wildlife or livestock tracking.
Another major NFC use case for IoT is home automation. For example, introducing a new device onto your “smart home” network can be a laborious process that involves long passwords and complicated configurations.
You can skip this process by equipping your home with an NFC-enabled IoT “gateway” that serves as the nexus for all IoT applications. When you introduce a new device with an NFC tag, you can simply tap the device against the gateway to automatically connect it to your home network.
A second challenge for building a unified smart home is the use of different communications technologies, such as Wi-Fi and Bluetooth. NFC tags can bridge the gap between these technologies with a single tap, letting you do away with the time-consuming process of device discovery and pairing.

Why is NFC the critical link to IoT?


According to market research, soon more users will access the Internet wirelessly via mobile devices than from wired Ethernet connections. These mobile devices offer several different wireless connectivity options, each with different strengths and capabilities. But only NFC is specifically designed and engineered to provide zero-power operation and maximize privacy, both at very low cost.
NFC by design has a limited field of operation, which prevents data snooping that could occur from a distance. It also requires intent—the application of an NFC-enabled device to an NFC-enabled object—in order to read its memory. This approach is in contrast to protocols such as WiFi, which require radios to broadcast information regardless of intent. The limited field plus other features of the protocol help to ensure that data exchange only occurs with the intended party.
Low power
When communicating between an NFC reader and an NFC transponder (tag), energy harvested from the RF field of the reader powers the tag, enabling connectivity for Internet of Things (IoT) devices without using batteries or power. This energy harvesting feature enables a number of low-power and low-cost applications.
Low cost
Adding a connected NFC tag to an embedded system can establish connectivity to mobile devices at much lower cost than Bluetooth or WiFi approaches. In addition, eliminating the need for a battery in an embedded system can further lower an application’s overall bill of materials.
Comparing wireless protocols
Designers have several choices for connectivity, all with trade-offs (see the comparison table below). WiFi, ZigBee, and Bluetooth all have different strengths and capabilities. None, however, was specifically defined and engineered to provide zero-power operation and maximize privacy, both at very low cost, as NFC was.

NFC Principles of Operation

NFC has three communication modes: Read/Write, Peer-to-Peer, and Card Emulation.
Read/Write mode
In Read/Write mode, an NFC reader/writer (or NFC-enabled mobile phone acting as a traditional contactless reader/writer) reads data from NFC-enabled smart objects and acts upon that information. With an NFC-enabled phone, for example, users can automatically connect to websites via a retrieved URL, send short message service (SMS) texts without typing, obtain coupons, etc., all with only a touch of their device to the object.
Peer-to-Peer mode
In Peer-to-Peer mode, any NFC-enabled reader/writer can communicate with another NFC reader/writer to exchange data, with the same advantages of safety, security, intuitiveness, and simplicity inherent in Read/Write mode. In Peer-to-Peer mode, one of the reader/writers behaves as a tag, creating a communication link. For example, two devices (such as smartphones) with readers/writers can communicate with each other.
Card Emulation mode
An NFC device in Card Emulation mode can replace a contactless smartcard, enabling the use of NFC-enabled devices within the existing contactless card infrastructure for operations such as ticketing, access control, transit, tollgates, and contactless payments.
NFC Read/Write mode for embedded systems
Most embedded applications that utilize NFC will use Read/Write mode for the link. In these cases, an NFC-enabled device, such as a mobile device, will provide the active reader, and the tag will be in the embedded system.
Functionally, a connected NFC tag in an embedded system behaves similarly to a dual port memory. One of the memory ports is accessed wirelessly through an NFC interface. The other port is accessed by the embedded system.
Through this functionality, data can pass from an external source (e.g., an NFC-enabled mobile device) to the embedded system. Furthermore, because NFC connected tags are passive, they can be read from, or written to, by the external source even when the embedded system is powered off.
Because NFC connected tags function similarly to dual-port memories, they facilitate any application that requires data transfer between an embedded system and an external system with an NFC reader/writer, such as an NFC-enabled mobile device.
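A rough Python model of this dual-port behaviour may help. The class and method names here are hypothetical; the point being modelled is that the RF port keeps working even while the embedded host is powered off, because the reader's field supplies the tag's energy:

```python
# Minimal sketch (names hypothetical): a connected NFC tag modeled as a
# dual-port memory. One port is the RF interface (reader side; works even
# with the host off, since the reader's field powers the tag). The other
# is the wired interface used by the embedded system's microcontroller.

class ConnectedNfcTag:
    def __init__(self, size: int = 64):
        self._mem = bytearray(size)
        self.host_powered = False  # embedded system power state

    # --- RF port: always available, energy-harvested from the reader ---
    def rf_read(self, addr: int, n: int) -> bytes:
        return bytes(self._mem[addr:addr + n])

    def rf_write(self, addr: int, data: bytes) -> None:
        self._mem[addr:addr + len(data)] = data

    # --- Wired port: only usable while the embedded system is powered ---
    def host_read(self, addr: int, n: int) -> bytes:
        if not self.host_powered:
            raise RuntimeError("host is powered off")
        return bytes(self._mem[addr:addr + n])

tag = ConnectedNfcTag()
tag.rf_write(0, b"config=fast")  # phone writes settings while device is off
tag.host_powered = True
print(tag.host_read(0, 11))      # MCU picks the settings up at next boot
```

This is why NFC suits configuration use cases: a phone can drop settings into a powered-down appliance, and the appliance reads them the next time it boots.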

Blockchain Technology

What is Blockchain?

Blockchain is the name of a whole new technology. As the name states, it is a sequence of blocks or groups of transactions that are chained together and distributed among the users.

“The blockchain is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value.”

In the end, it works as an immutable record of transactions that does not require an external authority to validate the authenticity and integrity of the data. Transactions are typically economic, but we can store any kind of information in the blocks.
Even though we call it a ‘new’ technology, its origins are generally dated to 1991, when Haber and Stornetta published “How to Time-Stamp a Digital Document” in the Journal of Cryptology. However, it is only now that its popularity has soared, thanks to the success of Bitcoin and other cryptocurrencies.

What is NOT Blockchain?

Before describing the Blockchain, we will start by clarifying what is NOT Blockchain. Many people misunderstand the terms and concepts, leading to typical mistakes like the following:
- Blockchain is NOT a cryptocurrency.
- Blockchain is NOT a programming language.
- Blockchain is NOT a cryptographic codification.
- Blockchain is NOT an AI or Machine Learning technology.
- Blockchain is NOT a Python library or framework.

How does it work?

The value of the Blockchain technology comes from the distributed security of the system. For this reason, there are several characteristics that are completely necessary for developing or using a Blockchain.
We describe the five key concepts that are the basis of Blockchain technology as we know it to date:
1. Cryptographic Hash
2. Immutable Ledger
3. P2P Network
4. Consensus Protocol
5. Block Validation or ‘Mining’
Cryptographic Hash
A hash is a cryptographic function that transforms any input data into a fixed-length string of characters. Different inputs will, for all practical purposes, produce different outputs, and the result is deterministic: if you use the same input, the output value will always be the same.
One of the most important features of hash functions is that the conversion is one-way: you cannot reverse the function to recover the original input.
There are many algorithms that create different hash variations. For every input, the algorithm generates a completely different output, and it is not possible to predict how input changes will affect the output.
The Blockchain nodes use Hash functions to create a unique identifier of any block of transactions. Every block includes the Hash value of the previous block.
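You can try these properties yourself with Python's standard hashlib library, which includes SHA-256, the hash function Bitcoin uses:

```python
import hashlib

# Same input -> same digest (deterministic); a one-character change -> a
# completely different digest (the unpredictability described above).
def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

a = sha256_hex("Alice pays Bob 5")
b = sha256_hex("Alice pays Bob 5")
c = sha256_hex("Alice pays Bob 6")

print(a == b)   # True: deterministic
print(a == c)   # False: tiny change, unrelated output
print(len(a))   # 64 hex characters, regardless of input length
```

The fixed-length output is what makes hashes practical as block identifiers: a block of any size is summarized by the same short fingerprint.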
Immutable Ledger
This feature is tightly related to the previous one. Since every block of the chain contains the hash of the previous one, it is not possible to modify any block without changing the entire chain. Hence, the chain works as an immutable digital ledger.
Let us see an example: a chain in which every block has been hashed, with each hash included in the following block. If an attacker removes, adds, or modifies any transaction in the first block, HASH#1 will change.
HASH#1 is included as part of the contents of Block 2. Because of that, HASH#2 will change too, and the mismatch will propagate to every block after the one under attack. Users will then declare the chain invalid.
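This tamper-propagation behaviour is easy to demonstrate in a few lines of Python. The chain here is deliberately simplified (real blockchains add timestamps, nonces, and Merkle trees), but the hash-linking is the real mechanism:

```python
import hashlib, json

# Each block stores the hash of its predecessor, so editing any
# historical transaction invalidates every later block.

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(tx_lists):
    chain, prev = [], "0" * 64
    for txs in tx_lists:
        block = {"prev_hash": prev, "transactions": txs}
        chain.append(block)
        prev = block_hash(block)
    return chain

def is_valid(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain([["Alice->Bob: 5"], ["Bob->Carol: 2"], ["Carol->Dan: 1"]])
print(is_valid(chain))                          # True
chain[0]["transactions"][0] = "Alice->Bob: 50"  # attacker edits history
print(is_valid(chain))                          # False: HASH#1 no longer matches
```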
P2P Network
The Blockchain does not need any external or internal trust authority. This is possible because the Blockchain data is distributed among all the users. Every user has their own copy of the transactions and hashed blocks, and they spread the information about any new transaction to the entire network. This way, it is not possible for anyone to alter the information in the chain, since it is stored not by an individual entity but by an entire network of node users.
Once a block of transactions is validated, it is added to the chain and every user updates their local copy. Even if an attacker were to modify your local chain, the network would not accept any block from the altered blockchain.
Consensus Protocol
But which chain is the real Blockchain? Users need to reach an agreement about the validity of the chain before adding more blocks.
Every time a node adds a new block, all the users have to validate the block using a common protocol. Typically, the nodes reach a consensus about the correctness of a new block through Proof of Work or Proof of Stake methods.
The nodes check that the new block meets the requirements of their Proof method, including validation of all the transactions inside the block. If the block is valid, they consider it part of the Blockchain and keep adding new blocks on top of it.
If different users hold different, apparently valid chains, they will discard the shorter one and select the longest chain as the main Blockchain. As in any Byzantine Fault Tolerance (BFT) system, they will reach agreement on the correct chain as long as at least 2/3 of the total nodes are not malicious.
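The longest-valid-chain rule can be sketched as follows. Again this is simplified: "validity" here is just hash consistency, standing in for the full Proof checks a real node would perform:

```python
import hashlib, json

# Sketch of the longest-valid-chain rule: given the chains held by several
# nodes, discard any that fail validation, then adopt the longest survivor.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

def build(txs_per_block):
    chain, prev = [], "0" * 64
    for txs in txs_per_block:
        block = {"prev_hash": prev, "transactions": txs}
        chain.append(block)
        prev = block_hash(block)
    return chain

def choose_chain(candidates):
    valid = [c for c in candidates if is_valid(c)]
    return max(valid, key=len)

honest = build([["a"], ["b"], ["c"]])   # three valid blocks
short  = build([["a"], ["b"]])          # lagging but valid
forged = build([["a"], ["b"], ["c"]])
forged[1]["transactions"] = ["stolen"]  # tampered, now invalid

print(len(choose_chain([short, forged, honest])))  # 3: the honest chain wins
```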
Block Validation or ‘Mining’
This feature is actually not strictly necessary for a Blockchain, as examples like the CREDITS platform show. However, it is probably one of the best-known aspects of Blockchain, thanks to the Bitcoin chain.
The term ‘mining’ refers to the act of meeting the Proof of Work requirements for adding a new block of pending transactions to the Blockchain. There are many different mining methods, as they are custom-defined for each chain.
The PoW method usually requires the user to create a block whose hash code satisfies certain restrictions. Since the hash code is unpredictable, the ‘miners’ have to test possible combinations until they meet the requirements. These restrictions define the difficulty of the network.
Once a ‘miner’ node finds a solution to the PoW problem, it adds the block to the chain, and every other node checks the validity of the PoW according to the Consensus Protocol. If the block is legitimate, they include it in their own local copies of the Blockchain.
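Here is a toy version of Proof of Work mining in Python. The difficulty is tiny so it runs instantly, but the principle is the same one Bitcoin uses: a trial-and-error search for a hash that satisfies a restriction, which is then cheap for every other node to verify:

```python
import hashlib

# Toy Proof of Work: find a nonce whose hash, combined with the block
# data, starts with a number of zero hex digits set by the difficulty.

def mine(block_data: str, difficulty: int = 4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block#42: Alice->Bob 5", difficulty=4)
print(verify("block#42: Alice->Bob 5", nonce))  # True: cheap for others to check
```

Note the asymmetry: finding the nonce takes many hash attempts, but checking it takes exactly one. That asymmetry is what makes the network's difficulty setting enforceable.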

Is There More Than One Type of Blockchain?

Public Blockchains
Public blockchains are open-source software used by everyone participating in the network. Anyone can join, and the network has a global foundation. For example, a lot of cryptocurrencies are built on existing blockchains, ERC20 tokens being the most well-known example built on Ethereum.
Private Blockchains
These use the same principles as public ones, except the software is proprietary and hosted on private servers instead. Companies such as Walmart are developing their own blockchains to track supply-chain logistics.

Why is blockchain important?

We are all now used to sharing information through a decentralized online platform: the internet. But when it comes to transferring value – e.g. money, ownership rights, intellectual property, etc. – we are usually forced to fall back on old-fashioned, centralized institutions or establishments like banks or government agencies. Even online payment methods which have sprung into existence since the birth of the internet – PayPal being the most obvious example – generally require integration with a bank account or credit card to be useful.
Blockchain technology offers the intriguing possibility of eliminating this “middleman”. It does this by filling three important roles – recording transactions, establishing identity and establishing contracts – traditionally carried out by the financial services sector.
This has huge implications because, worldwide, the financial services market is the largest sector of industry by market capitalization. Replacing even a fraction of this with a blockchain system would result in a huge disruption of the financial services industry, but also a massive increase in efficiencies.
The third role, establishing contracts, opens up a treasure trove of opportunities. Apart from a unit of value (like a bitcoin), blockchain can be used to store any kind of digital information, including computer code.
That snippet of code could be programmed to execute whenever certain parties enter their keys, thereby agreeing to a contract. The same code could read from external data feeds — stock prices, weather reports, news headlines, or anything that can be parsed by a computer, really — to create contracts that are automatically executed when certain conditions are met.
These are known as “smart contracts,” and the possibilities for their use are practically endless.
For example, your smart thermostat might communicate energy usage to a smart grid; when a certain number of wattage hours has been reached, another blockchain automatically transfers value from your account to the electric company, effectively automating the meter reader and the billing process.
Or, smart contracts might be put to use in the regulation of intellectual property, controlling how many times a user can access, share, or copy something. It could be used to create fraud-proof voting systems, censorship-resistant information distribution, and much more.
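The smart-meter example above can be sketched as ordinary Python. To be clear, this is not how on-chain contracts are actually written (Ethereum contracts, for instance, compile to bytecode executed by every node); it only illustrates the trigger-on-condition logic, with invented names, thresholds, and prices:

```python
# Hypothetical sketch of the smart-meter example: a contract object
# watches a data feed (metered watt-hours) and automatically transfers
# value each time a threshold is crossed. Purely illustrative.

class MeterContract:
    def __init__(self, threshold_wh, price, payer, payee):
        self.threshold_wh, self.price = threshold_wh, price
        self.payer, self.payee = payer, payee
        self.total_wh = 0

    def report_usage(self, wh):
        """Called by the smart meter; settles automatically at each threshold."""
        self.total_wh += wh
        while self.total_wh >= self.threshold_wh:
            self.total_wh -= self.threshold_wh
            self.payer["balance"] -= self.price
            self.payee["balance"] += self.price

home, utility = {"balance": 100}, {"balance": 0}
contract = MeterContract(threshold_wh=1000, price=3, payer=home, payee=utility)
for reading in [400, 350, 300, 250]:    # 1300 Wh total -> one settlement
    contract.report_usage(reading)
print(home["balance"], utility["balance"])  # 97 3
```

The key property a real blockchain adds is that nobody, including the two parties, can alter or skip the settlement logic once it is deployed.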
The point is that the potential uses for this technology are vast, and I predict that more and more industries will find ways to put it to good use in the very near future.

Augmented Reality (AR)

What is Augmented Reality?

Augmented reality is technology that expands our physical world by adding layers of digital information onto it. Unlike Virtual Reality (VR), AR does not create a whole artificial environment to replace the real one with a virtual one. AR appears in the direct view of an existing environment and adds sounds, videos, and graphics to it.

“A view of the physical real-world environment with superimposed computer-generated images, thus changing the perception of reality, is AR.”

The term itself was coined back in 1990, and some of the first commercial uses were in television and the military. With the rise of the Internet and smartphones, AR rolled out its second wave and is nowadays mostly associated with interactive experiences. 3D models are projected directly onto physical things or fused together in real time, and various augmented reality apps impact our habits, social life, and the entertainment industry.
AR apps typically connect digital animation to a special ‘marker’, or use the GPS in phones to pinpoint the location. Augmentation happens in real time and within the context of the environment, for example, overlaying scores onto the live feed of a sporting event.

How does Augmented Reality work?

For many of us, the question “what is augmented reality?” implies a technical side: how does AR work? AR can use a range of data (images, animations, videos, 3D models), and people see the result in both natural and synthetic light. Also, unlike in VR, users are aware of being in the real world, which is enhanced by computer vision.
AR can be displayed on various devices: screens, glasses, handheld devices, mobile phones, head-mounted displays. It involves technologies like S.L.A.M. (simultaneous localization and mapping), depth tracking (briefly, a sensor data calculating the distance to the objects), and the following components:
Cameras and sensors
These collect data about the user’s interactions and send it for processing. Cameras on devices scan the surroundings, and with this information a device locates physical objects and generates 3D models. They may be special-duty cameras, as in Microsoft HoloLens, or common smartphone cameras for taking pictures and videos.
Processing
AR devices eventually should act like little computers, something modern smartphones already do. In the same manner, they require a CPU, a GPU, flash memory, RAM, Bluetooth/WiFi, GPS, etc., to be able to measure speed, angle, direction, orientation in space, and so on.
Projection
This refers to a miniature projector on AR headsets, which takes data from the sensors and projects digital content (the result of processing) onto a surface to view. In fact, projection in AR has not yet matured enough for wide use in commercial products and services.
Reflection
Some AR devices have mirrors to assist human eyes in viewing virtual images. Some have an “array of small curved mirrors” and some have a double-sided mirror to reflect light to a camera and to the user’s eye. The goal of such reflection paths is to perform proper image alignment.

Types of Augmented Reality

1. Marker-based AR
Some also call it image recognition, as it requires a special visual object and a camera to scan it. It may be anything from a printed QR code to special signs. In some cases, the AR device also calculates the position and orientation of the marker to position the content. Thus, a marker initiates digital animations for users to view, and images in a magazine may turn into 3D models.
2. Markerless AR
Also known as location-based or position-based augmented reality, markerless AR uses GPS, a compass, a gyroscope, and an accelerometer to provide data based on the user’s location. This data then determines what AR content you find or get in a certain area. Given the availability of smartphones, this type of AR typically produces maps and directions, and nearby-business information. Applications include events and information, pop-up business ads, and navigation support.
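At its core, markerless AR content selection is just geometry: given the phone's GPS fix, decide which content anchors fall within range of the user. A stdlib-only Python sketch, with invented points of interest and radius:

```python
import math

# Sketch of the location-based idea: given the phone's GPS fix, select
# which AR content (nearby businesses, labels) to overlay. The place
# names, coordinates, and radius are made up for illustration.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

POIS = [  # (name, lat, lon) -- hypothetical content anchors
    ("Cafe Aurora",   51.5008, -0.1247),
    ("Museum Annex",  51.5014, -0.1260),
    ("Distant Hotel", 51.5200, -0.1000),
]

def overlays_for(lat, lon, radius_m=300):
    """Return the POI labels the AR view should draw at this location."""
    return [name for name, plat, plon in POIS
            if haversine_m(lat, lon, plat, plon) <= radius_m]

print(overlays_for(51.5007, -0.1246))  # only the two nearby anchors
```

A real app would then use the compass and gyroscope readings to place each label at the correct bearing on screen; the distance filter above is the first step.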
3. Projection-based AR
Projection-based AR projects synthetic light onto physical surfaces, and in some cases allows the user to interact with it. These are the holograms we have all seen in sci-fi movies like Star Wars. The system detects user interaction with a projection by its alterations.
4. Superimposition-based AR
Superimposition-based AR replaces the original view with an augmented one, fully or partially. Object recognition plays a key role; without it, the whole concept is simply impossible. We have all seen an example of superimposed augmented reality in the IKEA Catalog app, which allows users to place virtual items from the furniture catalog in their rooms.

Augmented reality devices

Many modern devices already support augmented reality, from smartphones and tablets to gadgets like Google Glass and other handheld devices, and these technologies continue to evolve. For processing and projection, AR devices and hardware first of all need sensors, cameras, an accelerometer, a gyroscope, a digital compass, GPS, a CPU, and displays, as already mentioned.
Devices suitable for Augmented reality fall into the following categories:
- Mobile devices (smartphones and tablets)
The most widely available devices and the best fit for AR mobile apps, ranging from pure gaming and entertainment to business analytics, sports, and social networking.
- Special AR devices
Designed primarily and solely for augmented reality experiences. One example is head-up displays (HUDs), which send data to a transparent display directly in the user’s view. Originally introduced to train military fighter pilots, such devices now have applications in aviation, the automotive industry, manufacturing, sports, etc.
- AR glasses (or smart glasses)
Google Glass, Meta 2 Glasses, Laster See-Thru, Laforge AR eyewear, etc. These units are capable of displaying notifications from your smartphone, assisting assembly-line workers, providing hands-free access to content, etc.
- AR contact lenses (or smart lenses)
Taking Augmented Reality one step even further. Manufacturers like Samsung and Sony have announced the development of AR lenses. Respectively, Samsung is working on lenses as the accessory to smartphones, while Sony is designing lenses as separate AR devices (with features like taking photos or storing data).
- Virtual retinal displays (VRD)
Creating images by projecting laser light onto the human retina. Aiming at bright, high-contrast, and high-resolution images, such systems have yet to be made practical.

Virtual Reality (VR)

What is virtual reality?

Virtual reality (VR) means experiencing things through our computers that don't really exist. From that simple definition, the idea doesn't sound especially new. When you look at an amazing Canaletto painting, for example, you're experiencing the sights and sounds of Italy as it was about 250 years ago—so that's a kind of virtual reality. In the same way, if you listen to ambient instrumental or classical music with your eyes closed, and start dreaming about things, isn't that an example of virtual reality—an experience of a world that doesn't really exist? What about losing yourself in a book or a movie? Surely that's a kind of virtual reality?
If we're going to understand why books, movies, paintings, and pieces of music aren't the same thing as virtual reality, we need to define VR fairly clearly. For the purposes of this simple, introductory article, I'm going to define it as:
A believable, interactive 3D computer-created world that you can explore so you feel you really are there, both mentally and physically. Putting it another way, virtual reality is essentially:
1. Believable
You really need to feel like you're in your virtual world (on Mars, or wherever) and to keep believing that, or the illusion of virtual reality will disappear.
2. Interactive
As you move around, the VR world needs to move with you. You can watch a 3D movie and be transported up to the Moon or down to the seabed—but it's not interactive in any sense.
3. Computer-generated
Why is that important? Because only powerful machines, with realistic 3D computer graphics, are fast enough to make believable, interactive, alternative worlds that change in real-time as we move around them.
4. Explorable
A VR world needs to be big and detailed enough for you to explore. However realistic a painting is, it shows only one scene, from one perspective. A book can describe a vast and complex "virtual world," but you can only really explore it in a linear way, exactly as the author describes it.
5. Immersive
To be both believable and interactive, VR needs to engage both your body and your mind. Paintings by war artists can give us glimpses of conflict, but they can never fully convey the sight, sound, smell, taste, and feel of battle. You can play a flight simulator game on your home PC and be lost in a very realistic, interactive experience for hours (the landscape will constantly change as your plane flies through it), but it's not like using a real flight simulator (where you sit in a hydraulically operated mockup of a real cockpit and feel actual forces as it tips and tilts), and even less like flying a plane.
We can see from this why reading a book, looking at a painting, listening to a classical symphony, or watching a movie don't qualify as virtual reality. All of them offer partial glimpses of another reality, but none are interactive, explorable, or fully believable. If you're sitting in a movie theater looking at a giant picture of Mars on the screen, and you suddenly turn your head too far, you'll see and remember that you're actually on Earth and the illusion will disappear. If you see something interesting on the screen, you can't reach out and touch it or walk towards it; again, the illusion will simply disappear. So these forms of entertainment are essentially passive: however plausible they might be, they don't actively engage you in any way.
VR is quite different. It makes you think you are actually living inside a completely believable virtual world (one in which, to use the technical jargon, you are partly or fully immersed). It is two-way interactive: as you respond to what you see, what you see responds to you: if you turn your head around, what you see or hear in VR changes to match your new perspective.

Types of virtual reality

"Virtual reality" has often been used as a marketing buzzword for compelling, interactive video games or even 3D movies and television programs, none of which really count as VR because they don't immerse you either fully or partially in a virtual world. Search for "virtual reality" in your cellphone app store and you'll find hundreds of hits, even though a tiny cellphone screen could never get anywhere near producing the convincing experience of VR. Nevertheless, things like interactive games and computer simulations would certainly meet parts of our definition up above, so there's clearly more than one approach to building virtual worlds—and more than one flavor of virtual reality. Here are a few of the bigger variations:
1. Fully immersive
For the complete VR experience, we need three things. First, a plausible and richly detailed virtual world to explore; a computer model or simulation, in other words. Second, a powerful computer that can detect what we're doing and adjust our experience accordingly, in real time (so what we see or hear changes as fast as we move—just like in real reality). Third, hardware linked to the computer that fully immerses us in the virtual world as we roam around. Usually, we'd need to put on what's called a head-mounted display (HMD) with two screens and stereo sound, and wear one or more sensory gloves. Alternatively, we could move around inside a room, fitted out with surround-sound loudspeakers, onto which changing images are projected from outside. We'll explore VR equipment in more detail in a moment.
2. Non-immersive
A highly realistic flight simulator on a home PC might qualify as nonimmersive virtual reality, especially if it uses a very wide screen, with headphones or surround sound, and a realistic joystick and other controls. Not everyone wants or needs to be fully immersed in an alternative reality. An architect might build a detailed 3D model of a new building to show to clients that can be explored on a desktop computer by moving a mouse. Most people would classify that as a kind of virtual reality, even if it doesn't fully immerse you. In the same way, computer archaeologists often create engaging 3D reconstructions of long-lost settlements that you can move around and explore. They don't take you back hundreds or thousands of years or create the sounds, smells, and tastes of prehistory, but they give a much richer experience than a few pastel drawings or even an animated movie.
3. Collaborative
What about "virtual world" games like Second Life and Minecraft? Do they count as virtual reality? Although they meet the first four of our criteria (believable, interactive, computer-created and explorable), they don't really meet the fifth: they don't fully immerse you. But one thing they do offer that cutting-edge VR typically doesn't is collaboration: the idea of sharing an experience in a virtual world with other people, often in real time or something very close to it. Collaboration and sharing are likely to become increasingly important features of VR in future.
4. Web-based
Virtual reality was one of the hottest, fastest-growing technologies in the late 1980s and early 1990s, but the rapid rise of the World Wide Web largely killed off interest after that. Even though computer scientists developed a way of building virtual worlds on the Web (using a technology analogous to HTML called Virtual Reality Markup Language, VRML), ordinary people were much more interested in the way the Web gave them new ways to access real reality—new ways to find and publish information, shop, and share thoughts, ideas, and experiences with friends through social media. With Facebook's growing interest in the technology, the future of VR seems likely to be both Web-based and collaborative.
5. Augmented reality
Mobile devices like smartphones and tablets have put what used to be supercomputer power in our hands and pockets. If we're wandering round the world, maybe visiting a heritage site like the pyramids or a fascinating foreign city we've never been to before, what we want is typically not virtual reality but an enhanced experience of the exciting reality we can see in front of us. That's spawned the idea of augmented reality (AR), where, for example, you point your smartphone at a landmark or a striking building and interesting information about it pops up automatically. Augmented reality is all about connecting the real world we experience to the vast virtual world of information that we've collectively created on the Web. One of these worlds is real and the other virtual, but the idea of exploring and navigating the two simultaneously does, nevertheless, have things in common with virtual reality. For example, how can a mobile device figure out its precise location in the world? How do the things you see on the screen of your tablet change as you wander round a city? Technically, these problems are similar to the ones developers of VR systems have to solve—so there are close links between AR and VR.

What equipment do we need for virtual reality?

Close your eyes and think of virtual reality and you probably picture something like our top photo: a geek wearing a wraparound headset (HMD) and datagloves, wired into a powerful workstation or supercomputer. What differentiates VR from an ordinary computer experience (using your PC to write an essay or play games) is the nature of the input and output. Where an ordinary computer uses things like a keyboard, mouse, or (more exotically) speech recognition for input, VR uses sensors that detect how your body is moving. And where a PC displays output on a screen (or a printer), VR uses two screens (one for each eye), stereo or surround-sound speakers, and maybe some forms of haptic (touch and body perception) feedback as well. Let's take a quick tour through some of the more common VR input and output devices.
1. Head-mounted displays (HMDs)
There are two big differences between VR and looking at an ordinary computer screen: in VR, you see a 3D image that changes smoothly, in real time, as you move your head. That's made possible by wearing a head-mounted display, which looks like a giant motorbike helmet or welding visor, but consists of two small screens (one in front of each eye), a blackout blindfold that blocks out all other light (eliminating distractions from the real world), and stereo headphones. The two screens display slightly different, stereoscopic images, creating a realistic 3D perspective of the virtual world. HMDs usually also have built-in accelerometers or position sensors so they can detect exactly how your head and body are moving (both position and orientation—which way they're tilting or pointing) and adjust the picture accordingly. The trouble with HMDs is that they're quite heavy, so they can be tiring to wear for long periods; some of the really heavy ones are even mounted on stands with counterweights. But HMDs don't have to be so elaborate and sophisticated: at the opposite end of the spectrum, Google has developed a low-cost pair of cardboard goggles with built-in lenses that convert an ordinary smartphone into a crude HMD.
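The stereoscopic trick is simply to render the scene from two viewpoints separated by the wearer's interpupillary distance (IPD), around 63 mm for a typical adult. A minimal Python sketch, using that figure as an illustrative default:

```python
def eye_positions(head_x, ipd_mm=63.0):
    """Return left and right eye camera x-positions, offset from the head
    position by half the interpupillary distance (IPD). An HMD renders
    the scene once from each of these viewpoints, one per screen, which
    is what produces the stereoscopic 3D effect."""
    half_ipd_m = (ipd_mm / 1000.0) / 2.0   # half the IPD, in metres
    return head_x - half_ipd_m, head_x + half_ipd_m
```

Getting the IPD wrong makes the virtual world feel subtly too large or too small, which is why many headsets let you adjust it mechanically.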
2. Immersive rooms
An alternative to putting on an HMD is to sit or stand inside a room onto whose walls changing images are projected from outside. As you move in the room, the images change accordingly. Flight simulators use this technique, often with images of landscapes, cities, and airport approaches projected onto large screens positioned just outside a mockup of a cockpit. A famous 1990s VR experiment called CAVE (Cave Automatic Virtual Environment), developed at the University of Illinois at Chicago by Thomas DeFanti and colleagues, also worked this way. People moved around inside a large cube-shaped room with semi-transparent walls onto which stereo images were back-projected from outside. Although they didn't have to wear HMDs, they did need stereo glasses to experience full 3D perception.
3. Datagloves
See something amazing and your natural instinct is to reach out and touch it—even babies do that. So giving people the ability to handle virtual objects has always been a big part of VR. Usually, this is done using datagloves, which are ordinary gloves with sensors wired to the outside to detect hand and finger motions. One technical method of doing this uses fiber-optic cables stretched the length of each finger. Each cable has tiny cuts in it so, as you flex your fingers back and forth, more or less light escapes. A photocell at the end of the cable measures how much light reaches it and the computer uses this to figure out exactly what your fingers are doing. Other gloves use strain gauges, piezoelectric sensors, or electromechanical devices (such as potentiometers) to measure finger movements.
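The fiber-optic approach boils down to a calibration curve: the less light reaches the photocell, the more the finger is bent. A hypothetical linear model in Python (the calibration values are invented for illustration; real gloves are calibrated per wearer):

```python
def flex_angle(light_reading, light_straight=1.0, light_full_bend=0.4,
               max_angle=90.0):
    """Estimate finger flexion from a photocell reading.

    Hypothetical linear calibration: with the finger straight, all the
    light arrives (light_straight); fully bent, the cuts in the fibre
    leak most of it away (light_full_bend). Readings in between are
    interpolated to an angle between 0 and max_angle degrees."""
    # Clamp to the calibrated range before interpolating
    reading = max(min(light_reading, light_straight), light_full_bend)
    bent_fraction = (light_straight - reading) / (light_straight - light_full_bend)
    return bent_fraction * max_angle
```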
4. Wands
Even simpler than a dataglove, a wand is a stick you can use to touch, point to, or otherwise interact with a virtual world. It has position or motion sensors (such as accelerometers) built in, along with mouse-like buttons or scroll wheels. Originally, wands were clumsily wired into the main VR computer; increasingly, they're wireless.

Applications of virtual reality

VR has always suffered from the perception that it's little more than a glorified arcade game—literally a "dreamy escape" from reality. In that sense, "virtual reality" can be an unhelpful misnomer; "alternative reality," "artificial reality," or "computer simulation" might be better terms. The key thing to remember about VR is that it really isn't a fad or fantasy waiting in the wings to whisk people off to alternative worlds; it's a hard-edged practical technology that's been routinely used by scientists, doctors, dentists, engineers, architects, archaeologists, and the military for about the last 30 years. What sorts of things can we do with it?
1. Education
Difficult and dangerous jobs are hard to train for. How can you safely practice taking a trip to space, landing a jumbo jet, making a parachute jump, or carrying out brain surgery? All these things are obvious candidates for virtual reality applications. As we've seen already, flight cockpit simulators were among the earliest VR applications; they can trace their history back to mechanical simulators developed by Edwin Link in the 1920s. Just like pilots, surgeons are now routinely trained using VR. In a 2008 study of 735 surgical trainees from 28 different countries, 68 percent said the opportunity to train with VR was "good" or "excellent" for them and only 2 percent rated it useless or unsuitable.
2. Scientific visualization
Anything that happens at the atomic or molecular scale is effectively invisible unless you're prepared to sit with your eyes glued to an electron microscope. But suppose you want to design new materials or drugs and you want to experiment with the molecular equivalent of LEGO. That's another obvious application for virtual reality. Instead of wrestling with numbers, equations, or two-dimensional drawings of molecular structures, you can snap complex molecules together right before your eyes. This kind of work began in the 1960s at the University of North Carolina at Chapel Hill, where Frederick Brooks launched GROPE, a project to develop a VR system for exploring the interactions between protein molecules and drugs.
3. Medicine
Apart from its use in things like surgical training and drug design, virtual reality also makes possible telemedicine (monitoring, examining, or operating on patients remotely). A logical extension of this has a surgeon in one location hooked up to a virtual reality control panel and a robot in another location (maybe an entire continent away) wielding the knife. The best-known example of this is the daVinci surgical robot, released in 2009, of which several thousand have now been installed in hospitals worldwide. Introduce collaboration and there's the possibility of a whole group of the world's best surgeons working together on a particularly difficult operation—a kind of WikiSurgery, if you like!
Although it's still early days, VR has already been tested as a treatment for various kinds of psychiatric disorder (such as schizophrenia, agoraphobia, and phantom-limb pain), and in rehabilitation for stroke patients and those suffering degenerative diseases such as multiple sclerosis.
4. Industrial design and architecture
Architects used to build models out of card and paper; now they're much more likely to build virtual reality computer models you can walk through and explore. By the same token, it's generally much cheaper to design cars, airplanes, and other complex, expensive vehicles on a computer screen than to model them in wood, plastic, or other real-world materials. This is an area where virtual reality overlaps with computer modeling: instead of simply making an immersive 3D visual model for people to inspect and explore, you're creating a mathematical model that can be tested for its aerodynamic, safety, or other qualities.
5. Games and entertainment
Games were among the first applications to embrace VR and remain one of its biggest drivers. Arcade-style VR games date back to the early 1990s, and modern headsets such as the Oculus Rift, HTC Vive, and PlayStation VR are aimed squarely at home gamers, offering fully immersive play instead of a flat screen. The same technology is spilling over into wider entertainment: immersive movies, virtual concerts, and theme-park rides that blend physical motion with computer-generated worlds.

Pros and cons of virtual reality

Like any technology, virtual reality has both good and bad points. How many of us would rather have a complex brain operation carried out by a surgeon trained in VR than by someone who has merely read books or watched over the shoulders of their peers? How many of us would rather practice our driving on a car simulator before we set foot on the road? Or sit back and relax in a Jumbo Jet, confident in the knowledge that our pilot practiced landing at this very airport, dozens of times, in a VR simulator before she ever set foot in a real cockpit?
Critics always raise the risk that people may be seduced by alternative realities to the point of neglecting their real-world lives—but that criticism has been leveled at everything from radio and TV to computer games and the Internet. And, at some point, it becomes a philosophical and ethical question: What is real anyway? And who is to say which is the better way to pass your time? Like many technologies, VR takes little or nothing away from the real world: you don't have to use it if you don't want to.
The promise of VR has loomed large over the world of computing for at least the last quarter century—but remains largely unfulfilled. While science, architecture, medicine, and the military all rely on VR technology in different ways, mainstream adoption remains virtually nonexistent; we're not routinely using VR the way we use computers, smartphones, or the Internet. But the 2014 acquisition of VR company Oculus, by Facebook, greatly renewed interest in the area and could change everything. Facebook's basic idea is to let people share things with their friends using the Internet and the Web. What if you could share not simply a photo or a link to a Web article but an entire experience? Instead of sharing photos of your wedding with your Facebook friends, what if you could make it possible for people to attend your wedding remotely, in virtual reality, in perpetuity? What if we could record historical events in such a way that people could experience them again and again, forever more? These are the sorts of social, collaborative virtual reality sharing that (we might guess) Facebook is thinking about exploring right now. If so, the future of virtual reality looks very bright indeed!

Robotic Process Automation (RPA)

What is Robotic Process Automation?

Robotic Process Automation is a software-based technology utilising software robots to emulate human execution of a business process. This means that the robot performs the task on a computer, using the same interface a human worker would: clicking, typing, opening applications, using keyboard shortcuts, and more.

“software robots that mimic and integrate human actions within digital systems to optimize business processes. RPA captures data, runs applications, triggers responses, and communicates with other systems to perform a variety of tasks.”

Definition of Robotic process automation (RPA)
It is predominantly used to automate business processes and tasks, resulting in reductions in spending and giving businesses a competitive edge.
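To make the "virtual worker" idea concrete, here is a minimal Python sketch of a bot replaying a recorded script of interface actions against a mock application. The application, the script format, and the action names are all invented for illustration; real RPA products drive actual GUIs:

```python
class MockApp:
    """Stand-in for a GUI application; the robot drives it through the
    same kinds of interface actions a human worker would use."""
    def __init__(self):
        self.opened = False
        self.focused_field = None
        self.fields = {}

    def open(self):
        self.opened = True

    def click(self, field):
        self.focused_field = field

    def type_text(self, text):
        self.fields[self.focused_field] = text

def run_bot(app, script):
    """Replay a recorded script of (action, argument) steps in order."""
    for action, argument in script:
        if action == "open":
            app.open()
        elif action == "click":
            app.click(argument)
        elif action == "type":
            app.type_text(argument)
    return app
```

The essential point is that the bot touches only the application's front end; nothing about the application itself has to change.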
RPA is versatile and flexible enough to be used in businesses of all sizes, from start-ups to enterprise organizations. Here is a rundown of the two common types available in the market:
1. Programmable bots
A programmable robot is defined by set rules and instructions. Parameters need to be defined by programmers before the bot can get to work. Ultimately, this involves mapping out a process – step-by-step – which can be very time consuming for more complex tasks.
2. Intelligent bots
Bots with artificial intelligence can analyse data – both historical and current – to learn how employees perform a process. The robot will follow your clicks, mouse movements and actions. After a period of time when enough data has been analysed, the bot will have enough data to complete the process itself. Intelligent and self-learning bots are better suited to perform processes involving unstructured data and processes that involve fluctuating parameters.
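That learning step can be pictured as simple statistics over recorded sessions: after watching several employees perform a process, the bot keeps the most common action observed at each step. A deliberately crude Python sketch of the idea (real products use far more sophisticated models; the action names are invented):

```python
from collections import Counter

def learn_process(sessions):
    """Infer a process from recorded employee sessions by keeping the
    most common action seen at each step, a crude stand-in for the
    statistical learning an intelligent bot performs."""
    steps = min(len(session) for session in sessions)
    learned = []
    for i in range(steps):
        actions_at_step = Counter(session[i] for session in sessions)
        learned.append(actions_at_step.most_common(1)[0][0])
    return learned
```

Note how the one session containing a stray "scroll" is outvoted by the majority, which is why these bots need enough observed data before they can run unattended.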

How does RPA work?

Automation technology has been a staple of business for the last decade, but in recent years, RPA technology has reached an impressive level of sophistication while retaining ease-of-use. It is no longer a tool that is solely used to facilitate the automation of simple and repetitive IT tasks. RPA is maturing, and with the convergence of other technologies – such as artificial intelligence and machine learning (ML) – we are beginning to explore new possibilities.

RPA compared to traditional process transformation approaches

The potential benefits of RPA are considerable – but the risk, as with any new technology, is that it won't be thoroughly understood and projects will not make the best use of the approach.
Unlike other forms of automation, RPA has the intelligence to decide if a process should occur. It can analyse data presented to it and make a decision based on the logic parameters set in place by the developer. In comparison to other forms of automation, it does not require system integration.
RPA is a broad field and there is a wide array of technologies in the market that greatly differ from one another. However, most RPA products will comprise RPA developer tools, a controller and the robot itself.
Businesses can leverage RPA in a multitude of different ways. Flexible and easy to implement, some businesses may find that they use it in a way that is unique to their organization. Determining what processes should be automated is a key strategic point. There is no point in automating a process just for the sake of it.
Whilst it is great at driving operational excellence, some processes are more viable for automation than others. It is always good practice to roll out RPA slowly to mitigate the teething issues that often come with technology implementation. The most viable candidates for automation are those that are simple, repetitive and easy to define. These processes will likely be rule-based and composed of easily definable, structured data.

Top 3 benefits of Robotic Process Automation

1. Automatable work
One of the predominant draws of RPA is that it takes on automatable work – relieving human workers of repetitive clerical processes such as data entry and data manipulation, and allowing them to focus on complex, value-adding tasks that elevate a business.
2. Reduction in human error and costs
The errors to which human workers are prone – particularly during long, repetitive tasks, through tiredness and boredom – are largely eliminated with RPA. This results in work that is more accurate, timely and consistent, ensuring that time and money aren't lost correcting old work or creating duplicates.
3. It works on existing IT infrastructure and is non-invasive
RPA works alongside existing IT infrastructure; it just needs to be trained on how to use it. This is a major benefit for organisations using legacy systems. It interfaces with front-end infrastructure and uses the same graphic user interface (GUI) that human workers would use to complete a task, ensuring that the IT landscape doesn’t have to be changed to accommodate RPA – keeping costs to a minimum.
Summary: RPA is the application of software as a virtual workforce. It is governed by set rules and business logic set by the RPA developers. It can perform complex tasks just as a human worker would, emulating interaction within a GUI, giving developers the opportunity to create a workforce that mimics the same manual path that a human would take at a fraction of the cost.

Which processes should you automate with RPA?

To maximize the impact of RPA, identify the most impactful processes. These processes tend to be:
1. Impacting both cost and revenues
The most impactful processes are expensive and touch customers. For example, quote-to-cash can be expensive if pricing rules are not clear, and the speed and effectiveness of the quote-to-cash process can make or break a sale. Such processes are good candidates for RPA if they can be automated.
2. High volume
One of the key benefits of RPA is reduction of human effort. You should start automating your highest volume processes first.
3. Fault tolerant
If a process cannot handle any errors, then its automation should either be deprioritized or there should be a quality control process to ensure that automation errors get caught. RPA bots rely on the user interface (UI) to carry out their tasks, so they can make errors when the UI or the process changes. For example, it makes sense to automate the invoice-to-pay process for most companies; however, payments above a certain value would still need to be approved by humans.
4. Speed-sensitive
Any processes that can delay delivery of services to customers are good candidates for automation as automation can make processes instantaneous.
5. Requiring irregular labor
Since finding temporary labor is difficult, processes with irregular labor demands force companies to staff for peak demand, which is inefficient. RPA bots can easily scale up or down, easily managing peak demand.
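The five criteria above can be combined into a rough screening score for prioritizing candidate processes. A Python sketch with purely illustrative weights (any real prioritization would need weights agreed with the business):

```python
def rpa_priority(process):
    """Toy screening score for RPA candidates; weights are illustrative."""
    weights = {
        "cost_revenue_impact": 3,   # touches both cost and revenue
        "high_volume": 3,           # frequent executions amplify savings
        "fault_tolerant": 2,        # errors can be caught or absorbed
        "speed_sensitive": 2,       # faster processing helps customers
        "irregular_labor": 1,       # demand peaks are hard to staff for
    }
    return sum(weight for key, weight in weights.items() if process.get(key))
```

Scoring each candidate this way gives a ranked backlog, so the slow roll-out recommended earlier can start with the highest scorers.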

Which processes are most easily automated with RPA?

1. Rules based
Ideal processes can be described by specific rules. RPA bots need to be programmed, and if the rules of the process cannot be programmed, then that process is not a great candidate for RPA. AI can be trained with complex rules and can even uncover rules that are not apparent to human operators. However, automation of such processes requires careful observation of RPA results, since there may be cases where the AI incorrectly identifies rules.
2. With few exceptions
This is similar to the “rules based” criteria above. However, some processes have so many undocumented rules that even if they are rules based, it is time consuming to identify all rules via interviews with domain experts. Such processes are not good candidates for automation.
3. Company-specific
Is this a process that all companies undertake in the same way, or is it unique to your company? For example, expense auditing takes place in a similar fashion in most companies of similar sizes. Building an RPA system for expense auditing would be costlier and less effective than just using a solution built for such a process.
4. Mature
Automating a process that is changing every day is a waste of time because developers will spend a lot of time on maintenance. Stable processes are good candidates for automation.
5. Not on the roadmap for new systems
Replacing legacy systems can automate processes even more effectively than RPA. RPA bots need to rely on screen scraping and may introduce errors. Additionally, installing two automation methods for a process does not make sense.

Beacon Technology

What are Beacons?

Beacons are transmitters that broadcast signals at set intervals so that smart devices within their proximity can listen for these signals and react accordingly. They run on Bluetooth low-energy (BLE) wireless technology.

What is iBeacon?

iBeacon is Apple’s implementation of Bluetooth low-energy (BLE) wireless technology to create a different way of providing location-based information and services to iPhones and other iOS devices. iBeacon arrived with iOS 7, which means it works with the iPhone 4s or later, the iPad (third generation onwards), the iPad mini, and the iPod touch (fifth generation or later). So very few people own devices not compatible with iBeacon, which is great news for anyone broadcasting iBeacon signals. It’s worth noting the same BLE technology is also compatible with Android 4.3 and above.

What is BLE technology?

BLE is a type of Bluetooth technology that uses very little energy, hence the name: Bluetooth Low Energy. BLE communication consists of advertisements: small packets of data broadcast at regular intervals through radio waves. BLE broadcasting is a one-way communication method; the beacon simply advertises its packets of data. These packets can then be picked up by smart devices nearby and used to trigger things like push messages, app actions, and prompts on the smart device.
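For the curious, the iBeacon flavour of those advertisement packets carries its data in a fixed 25-byte manufacturer-specific payload: Apple's company ID (0x004C), two iBeacon indicator bytes, a 16-byte proximity UUID, 2-byte major and minor values, and a calibrated 1-meter signal power. A Python sketch that packs and unpacks that layout (the UUID below is just an example value):

```python
import struct
import uuid

def build_ibeacon_payload(proximity_uuid, major, minor, tx_power):
    """Pack the 25-byte manufacturer-specific data of an iBeacon advert:
    Apple company ID (0x004C, little-endian), iBeacon type and length
    bytes (0x02, 0x15), the 16-byte proximity UUID, big-endian major
    and minor, and the calibrated power at 1 m as a signed byte."""
    return (struct.pack("<H", 0x004C)
            + bytes([0x02, 0x15])
            + uuid.UUID(proximity_uuid).bytes
            + struct.pack(">HHb", major, minor, tx_power))

def parse_ibeacon_payload(payload):
    """Recover (uuid, major, minor, tx_power) from the payload above."""
    proximity_uuid = str(uuid.UUID(bytes=payload[4:20]))
    major, minor, tx_power = struct.unpack(">HHb", payload[20:25])
    return proximity_uuid, major, minor, tx_power
```

In practice the UUID identifies the deployment (say, a retail chain), while major and minor narrow it down to a particular store and a particular beacon.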
A typical beacon broadcasts its signal every 100 ms. If you are using a beacon that is plugged in, you can increase the frequency of the beacon without having to worry about battery life. This allows for quicker discovery by smartphones and other Bluetooth-enabled devices.
BLE technology is ideal for contextual and proximity awareness. A beacon's typical broadcast range is between 10 and 30 meters. Some vendors advertise beacons as broadcasting up to 75 meters, but that is measured in an ideal setting with nothing obstructing the signal, so you should count on 30 meters as your broadcast range. This is ideal for indoor location tracking and awareness.
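Receiving devices estimate how far away a beacon is from the gap between the received signal strength (RSSI) and the beacon's calibrated power at one meter, usually via the standard log-distance path-loss model. A Python sketch, assuming a calibrated 1-meter RSSI of -59 dBm (a common but device-specific figure):

```python
def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Rough distance (in metres) from received signal strength, using
    the log-distance path-loss model. tx_power is the calibrated RSSI
    at 1 m; the exponent is about 2 in free space and higher indoors.
    Real readings fluctuate, so treat the result as approximate."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))
```

This is why obstructions matter so much: walls and bodies weaken the signal, which this model reads as extra distance.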

How is BLE technology used?

iBeacons are used to send contextually aware, value-driven messages to consumers. Beacons should be about providing value to the customer, not just advertisements. In a store this could be product information, while in an airport it could be flight information once travellers arrive at the gate. There are endless options within every scenario.
Let’s go through what a beacon experience may look like in a retail store.
When a customer is 10-30 meters away from the store, a business can deliver content that entices customers to enter the retail location. They can do this by delivering a lock screen message.
With this are a few important notes:

First, the consumer must be in front of the store for at least 20 seconds in order to receive the lock screen notification. This prevents lock screens from being bombarded with messages.

Second, the consumer must have either the store's app or a piece of its mobile wallet content. In other words, the consumer must have something on their device that communicates directly with the beacon's UUID. If they do not, the signals cannot trigger a command within the phone.

These contextually aware reactions take place within an app: if the consumer opens the app, the screens they see depend on their location. The app then allows consumers to add the offer to their mobile wallet, creating another connection between the consumer and the store.

When a customer has entered the store and is roughly 5-10 meters away from a beacon, the business could deliver a link to product information, reviews, or videos demonstrating the product being used.

When a customer is less than 2 meters away from the beacon, the business could enable a mobile coupon to show on the lock screen or use the app to confirm a transaction.

When customers leave the store, the business should retarget users with content that increases repeat visits and loyalty. They can do this through coupons or even a simple thank you message. Businesses can choose what time to deliver this content and what type of content to deliver.
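The retail walkthrough above amounts to a simple mapping from estimated distance to engagement action. A Python sketch using the article's own thresholds (the zones and messages are illustrative, not an iBeacon standard):

```python
def zone_action(distance_m):
    """Map an estimated beacon distance to the engagement step described
    in the walkthrough above."""
    if distance_m < 2:
        return "show coupon / confirm transaction"
    if distance_m <= 10:
        return "send product info, reviews or demo video"
    if distance_m <= 30:
        return "lock-screen message enticing entry"
    return "out of range"
```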

What’s the difference between beacons and GPS?

Once you know how a beacon works, you may think that it's somewhat similar to GPS, and when it comes to sending out signals they're a bit similar, but not quite the same. GPS stands for Global Positioning System and the system is composed of three parts: satellites, a ground station, and receivers. There are around 30 satellites up in space orbiting the Earth as we speak (yes, right now). A receiver can be anything from a car to a phone–anything that receives the signal that is being sent from the satellite. In order to track your location, the receiver uses signals from several satellites to calculate the distances from itself to those satellites, and thus pinpoint where you are.
When it comes to beacons, though, much less work goes into it. We mentioned earlier that beacons are a much simpler technology, and they are. Unlike GPS, a beacon isn't relying on a constellation of satellites; it simply broadcasts its identity, that Unique ID code we mentioned, to whatever device is in range. GPS can position you anywhere in the world, but it typically doesn't work well indoors. GPS also requires at least three satellites to give you a location and often does so to an accuracy of 1-50 meters. Beacons, on the other hand, can work indoors or outdoors, and they can achieve finer accuracy.
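The distances-to-position step is called trilateration. In two dimensions with three known reference points it reduces to a small linear system; the same geometric idea, extended to 3D and with clock corrections, is what a GPS receiver solves with satellites. A self-contained Python sketch:

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Pinpoint a 2D position from distances to three known points.
    Subtracting the three circle equations pairwise leaves a 2x2
    linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # zero if the three points are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
```

For example, a point 5 units from (0,0), √65 from (10,0) and √45 from (0,10) can only be at (3,4).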

How do I know if I need GPS or beacons?

Each of these technologies serves a purpose, but depending on the project, one may be better than the other, and sometimes you may want both. When it comes down to it, GPS informs apps of longitude and latitude points. Beacons, however, can be customized at a much more granular level and allow a company to target more strategically how their users receive and digest information. Beacons also give more control to the company that installs them. Most Bluetooth beacons can last up to two years without being replaced, but are easily accessible should any changes or testing need to occur.

Where do mobile apps come in?

Here is the part of the equation that we as users don’t often think about. If you’ve ever explored the “Settings” in your mobile phone, then you know you have the option to have your Bluetooth set to either “On” or “Off.” When your Bluetooth is set to “On,” then your device can receive broadcasted messages from other devices and beacons, in this case.
When your Bluetooth setting is “On,” then whenever you’re within broadcasting distance of a beacon, your apps can receive the beacon’s Unique ID. For those with privacy concerns, it’s important to know that beacons don’t track you; all they do is send signals from one point to another (kind of like two tin cans with string tied between them, but with a bit more sophistication). In fact, if you’re an iPhone or Android user, chances are that you’re already impacted by beacons on a daily basis whenever you’re interacting with your phone. For example, say you open your Starbucks app to buy your morning coffee ahead of time on the way to the shop: the beacon’s location is detected by your phone, and the app then knows to serve you up any new seasonal offers or deals being promoted. Imagine if you had to go to the homepage of the app each time you wanted to find a great new morning deal. That would be such a hassle! But that’s just one of the ways that beacons make for a positive, more tailored user experience.

iBeacon Implementation scenarios

1. Retail/Shopping
Imagine if you could find your way throughout a store based on guidance coming from an app that knows where you are in relation to the types of products you’re likely to buy. This sort of technology has been around for years (we built one of the first indoor navigation apps for Macy’s) and with beacon technology, this sort of experience is even easier. Stores like Target and others are incorporating beacon technology to show shoppers nearby deals and to be even more connected to their consumers throughout the shopping experience. Baseball stadiums are using beacons to get you to your seat or to a hotdog. American Eagle stores use beacons to give shoppers updates on discounts, rewards based on their locations, and other product recommendations.
2. Transportation
Because GPS is only accurate to a certain extent, those with visual impairments can benefit from even more precise micro-navigation, which can be enhanced with beacons. We worked with the Perkins School for the Blind to create an app called BlindWays, which helps the visually impaired confidently navigate public transportation. The team also paired up with the MBTA to integrate beacon technology with the app so that users have more insight into how near or far from a stop they are. Other helpful information to know, besides how far away a bus stop is, is when the bus will arrive. Beacons aren’t constrained to apps but can work with other technologies, too. The MTA trains in the New York City subway use beacons to broadcast a signal between the train and the station’s platform to alert commuters of when their train will arrive.
3. News
Apps can connect with beacons to receive news that is relevant to their users' locations. When a user walks by a location, a beacon transmits a signal to the phone, and the app knows to deliver news relevant to that location for as long as they're there; once they leave the area and the beacon's signal is no longer detected, the app stops displaying that news. This is called geo-fencing, and it can use a combination of beacons and GPS. This use of beacons is especially helpful for those hoping to keep up to date on current events, but it can also be used by companies and apps that issue emergency alerts. For example, RapidSOS is using Bluetooth beacons to ensure that in an emergency, your location can be detected more accurately.
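The geo-fencing behaviour described above can be sketched in a few lines: the app shows location-tagged items only while a matching beacon is in range, and clears them once the signal is lost. The beacon IDs and news items here are invented for illustration.

```python
# Hypothetical mapping from beacon IDs to location-tagged news items.
NEWS_BY_BEACON = {
    "city-hall": ["Council meeting at 6pm"],
    "stadium": ["Home game tonight"],
}

def visible_news(detected_beacons):
    """Return the news items the app should display right now,
    based on which beacons are currently in range."""
    items = []
    for beacon_id in detected_beacons:
        items.extend(NEWS_BY_BEACON.get(beacon_id, []))
    return items

print(visible_news({"city-hall"}))  # in range of the city hall beacon
print(visible_news(set()))          # left the area: nothing is shown
```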
4. Hospitality
Resorts and restaurants can use beacons to let their patrons know what's happening around them, from when the turn-down service arrives to nearby restaurants that cater to their specific dietary restrictions. Starwood Hotels recently completed a trial using beacons to help concierges connect with customers for a faster check-in process, give housekeeping insight into whether or not guests were still in the room, and even test a way for guests to enter their rooms without a key.
5. Travel
Some airlines and airports are working towards using beacons in security lines to identify which airlines have passengers who will run late for their flights. But beyond function and utility, Virgin Atlantic uses beacon technology at London's Heathrow airport to notify premium passengers visiting its lounges about their electronic boarding passes and the in-flight entertainment that awaits them. More and more companies are beginning to think about beacons through the lens of the user experience and are considering how beacons can provide extra delight and surprise within their products.

Edge Computing

What is the “Edge”?

The ‘Edge’ refers to computing infrastructure located close to the source of data: a distributed framework in which data is processed as close to the originating data source as possible. This infrastructure must make effective use of resources that may not be continuously connected to a network, such as laptops, smartphones, tablets, and sensors. Edge Computing covers a wide range of technologies, including wireless sensor networks, cooperative distributed peer-to-peer ad-hoc networking and processing (also classifiable as local cloud/fog computing), mobile edge computing, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality, and more.
Cloud Computing is expected to go through a phase of decentralization. Edge Computing embodies this shift, bringing compute, storage, and networking closer to the consumer.

Why do we need the "Edge"?

A legit question! Why do we even need Edge Computing? What are the advantages of this new infrastructure?

Imagine a self-driving car that continuously sends a live stream to central servers, and now has to make a critical decision. The consequences can be disastrous if the car waits for the central servers to process the data and respond. Although algorithms like YOLOv2 have sped up object detection, the latency sits in the part of the system where the car has to send terabytes to the central server, receive the response, and only then act. Hence, basic processing, like deciding when to stop or decelerate, needs to happen in the car itself.

The goal of Edge Computing is to minimize latency by bringing public cloud capabilities to the edge. This can be achieved in two forms: a custom software stack emulating the cloud services on existing hardware, or the public cloud seamlessly extended to multiple point-of-presence (PoP) locations. Following are some promising reasons to use Edge Computing:
1. Privacy: Avoid sending all raw data to be stored and processed on cloud servers.
2. Real-time responsiveness: Sometimes the reaction time can be a critical factor.
3. Reliability: The system keeps working even when disconnected from cloud servers, removing a single point of failure.
To understand the points above, let's take the example of a device that responds to a hotword, say, Jarvis from Iron Man. Imagine if your personal Jarvis sent all of your private conversations to a remote server for analysis. Instead, it is intelligent enough to respond only when it is called, and at the same time it is real-time and reliable.
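The privacy point can be sketched as a simple on-device gate: only utterances that begin with the hotword ever leave the device, and everything else is discarded locally. The hotword and transcripts below are illustrative; a real assistant would run an acoustic model rather than match text.

```python
HOTWORD = "jarvis"

def process_audio_locally(transcript: str) -> bool:
    """Decide on-device whether this audio should be sent to the cloud.

    Only utterances that start with the hotword are uploaded; all other
    audio never leaves the device, which is the privacy win of edge
    processing.
    """
    return transcript.lower().startswith(HOTWORD)

# Of these two utterances, only the hotword one is uploaded.
uploads = [t for t in ["private chat", "jarvis, lights on"]
           if process_audio_locally(t)]
print(uploads)
```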

Intel CEO Brian Krzanich said at an event that autonomous cars will generate 40 terabytes of data for every eight hours of driving. With that flood of data, transmission time goes up substantially. For self-driving cars, real-time or near-instant decisions are essential, and this is where edge computing infrastructure comes to the rescue. These cars need to decide in a split second whether to stop or not, otherwise the consequences can be disastrous.
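A quick back-of-envelope calculation shows why shipping that much data to the cloud is impractical. Taking Krzanich's 40 TB figure and assuming (purely for illustration) a dedicated 1 Gbit/s uplink:

```python
# 40 TB per 8 hours of driving, uploaded over an assumed 1 Gbit/s link.
data_bits = 40e12 * 8   # 40 terabytes expressed in bits
link_bps = 1e9          # hypothetical 1 Gbit/s uplink
seconds = data_bits / link_bps
print(f"{seconds / 3600:.1f} hours to upload")  # ~88.9 hours
```

Uploading eight hours of driving data would take roughly 89 hours at that rate, which is why the raw stream has to be processed at the edge rather than shipped to central servers.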

Another example is drones or quadcopters: if we use them to identify people or deliver relief packages, the machines should be intelligent enough to make basic decisions locally, like changing path to avoid an obstacle.

Forms of Edge Computing

Device Edge
In this model, Edge Computing is taken to the customers within their existing environments. Examples include AWS Greengrass and Microsoft Azure IoT Edge.
Cloud Edge
This model of Edge Computing is essentially an extension of the public cloud. Content Delivery Networks are classic examples of this topology, in which static content is cached and delivered through geographically distributed edge locations.
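The core routing idea behind a CDN-style cloud edge can be sketched very simply: send each request to the point of presence with the lowest measured latency. The PoP names and round-trip times below are made up.

```python
# Hypothetical measured round-trip times (ms) to a few PoP locations.
POPS = {"us-east": 12.0, "eu-west": 85.0, "ap-south": 190.0}

def nearest_pop(latencies_ms: dict) -> str:
    """Pick the point of presence with the lowest round-trip time."""
    return min(latencies_ms, key=latencies_ms.get)

print(nearest_pop(POPS))  # routes this client to us-east
```

Real CDNs layer anycast routing, DNS steering, and cache-hit statistics on top of this, but latency-based selection is the essential mechanism.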

Vapor IO is an emerging player in this category, attempting to build infrastructure for the cloud edge. Its products include the Vapor Chamber, a self-monitoring enclosure with embedded sensors that is continuously monitored and evaluated by the Vapor Edge Controller (VEC) software. Vapor IO has also built OpenDCRE, which we will see later in this blog.
The fundamental difference between device edge and cloud edge lies in the deployment and pricing models. Each model suits different use cases, and sometimes it can be an advantage to deploy both.

Edges around you

Edge Computing examples can be increasingly found around us:

1. Smart street lights
2. Automated Industrial Machines
3. Mobile devices
4. Smart Homes
5. Automated Vehicles (cars, drones etc)
Data transmission is expensive. By bringing compute closer to the origin of data, latency is reduced and end users get a better experience. Some of the evolving use cases of Edge Computing are Augmented Reality (AR), Virtual Reality (VR), and the Internet of Things. For example, the rush people got from playing an Augmented Reality based Pokémon game wouldn't have been possible without real-time responsiveness; it worked because the smartphone itself was doing the AR, not the central servers. Machine Learning (ML) can also benefit greatly from Edge Computing: all the heavy-duty training of ML algorithms can be done in the cloud, and the trained model can be deployed on the edge for near real-time, or even real-time, predictions. In today's data-driven world, edge computing is becoming a necessary component.
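The train-on-cloud, infer-on-edge pattern can be sketched as follows: the cloud ships only the fitted parameters, and the device evaluates the model locally with no network round trip. The weights and toy linear classifier here are invented for illustration; a real deployment would ship something like a quantized neural network.

```python
# Parameters assumed to have arrived from a cloud training job.
CLOUD_TRAINED_WEIGHTS = [0.8, -0.5]
BIAS = 0.1

def edge_predict(features):
    """Run inference entirely on the device: no network round trip.

    A toy linear classifier standing in for a deployed trained model.
    """
    score = sum(w * x for w, x in zip(CLOUD_TRAINED_WEIGHTS, features)) + BIAS
    return 1 if score > 0 else 0

print(edge_predict([1.0, 0.2]))  # → 1
```

Only the small weight vector crosses the network once; every prediction afterwards is local and real-time.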
There is a lot of confusion between Edge Computing and IoT. Put simply, Edge Computing is, in a way, the intelligent Internet of Things (IoT), and it actually complements traditional IoT. In the traditional model of IoT, all the devices, sensors, mobiles, laptops, and so on, are connected to a central server. Now imagine commanding your lamp to switch off: for such a simple task, data needs to be transmitted to the cloud and analyzed there before the lamp receives a command to switch off. Edge Computing brings that computation closer to your home: either the fog layer between the lamp and the cloud servers is smart enough to process the data, or the lamp itself is.
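The lamp example can be sketched as a tiny gateway that handles simple commands locally and escalates only what it doesn't understand. The command set and messages are illustrative.

```python
# Commands a hypothetical fog/edge gateway can resolve without the cloud.
LOCAL_COMMANDS = {"lamp on", "lamp off"}

def handle_command(command: str) -> str:
    """Handle simple smart-home commands at the edge; escalate the rest."""
    if command in LOCAL_COMMANDS:
        return f"handled locally: {command}"
    return f"forwarded to cloud: {command}"

print(handle_command("lamp off"))
print(handle_command("play my vacation video"))
```

The lamp responds instantly even if the internet connection is down, which is exactly the reliability argument made earlier.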

The Fog

Sandwiched between the edge layer and the cloud layer sits the Fog Layer, which bridges the connection between the other two.
Fog Computing pushes intelligence down to the local area network level of the network architecture, processing data in a fog node or IoT gateway.

Edge computing pushes the intelligence, processing power and communication capabilities of an edge gateway or appliance directly into devices like programmable automation controllers (PACs).

Examples of Edge Computing

1. Autonomous Vehicles
For autonomous driving technologies to replace human drivers, cars must be capable of reacting to road incidents in real-time. On average, it may take 100 milliseconds for data to travel between vehicle sensors and backend cloud datacenters. For driving decisions, this delay can have a significant impact on the reactions of self-driving vehicles. Toyota predicts that the amount of data transmitted between vehicles and the cloud could reach 10 exabytes per month by the year 2025. If network capacity fails to accommodate the necessary traffic, vendors of autonomous vehicle technologies may be forced to limit the self-driving capabilities of their cars.
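That 100-millisecond figure is easier to appreciate as distance travelled. A quick calculation (pure arithmetic, assuming only the latency quoted above) shows how far a car moves before any cloud response can arrive:

```python
# Distance travelled during a 100 ms cloud round trip at highway speeds.
latency_s = 0.100
for kmh in (50, 100, 130):
    metres = kmh / 3.6 * latency_s  # km/h -> m/s, times latency
    print(f"{kmh} km/h -> {metres:.2f} m travelled before a response")
```

At 130 km/h the car covers over 3.6 metres blind, which is why the braking decision must be made on board.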

In addition to the data growth and existing network limitations, technologies such as 5G connectivity and Artificial Intelligence are paving the way for Edge Computing. 5G will help deploy computing capabilities closer to the logical edge of the network in the form of distributed cellular towers. The technology will be capable of greater data aggregation and processing while maintaining high-speed data transmission between vehicles and communication towers. AI will further facilitate intelligent decision-making in real-time, allowing cars to react faster than humans to abrupt changes in traffic flow.
2. Fleet Management
Logistics service providers leverage IoT telematics data for effective fleet management operations. Drivers rely on vehicle-to-vehicle communication as well as information from backend control towers to make better decisions. Locations with low connectivity and signal strength limit the speed and volume of data that can be transmitted between vehicles and backend cloud networks. With the advent of autonomous vehicle technologies that rely on real-time computation and data analysis, fleet vendors will seek efficient means of network transmission to maximize the value of fleet telematics data for vehicles travelling to distant locations.

By bringing computation capabilities into close proximity to fleet vehicles, vendors can reduce the impact of communication dead zones, since data no longer needs to be sent all the way back to centralized cloud data centers. Effective vehicle-to-vehicle communication will enable coordinated traffic flows between fleet platoons, as AI-enabled sensor systems deployed at the network edge communicate insightful analytics rather than raw data.
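"Analytics instead of raw data" can be sketched as an edge node that summarizes telematics samples before anything crosses the network. The field names and values are illustrative.

```python
# An edge node condenses raw telematics samples into one small summary,
# so only the summary (not every GPS/speed reading) is uploaded.
def summarize(samples):
    speeds = [s["speed_kmh"] for s in samples]
    return {
        "n_samples": len(samples),
        "avg_speed_kmh": sum(speeds) / len(speeds),
        "max_speed_kmh": max(speeds),
    }

raw = [{"speed_kmh": v} for v in (62, 70, 68, 75)]
print(summarize(raw))  # one compact dict instead of four raw records
```

In a dead zone, the node can keep accumulating samples and transmit the summary once connectivity returns.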
3. Predictive Maintenance
The manufacturing industry relies heavily on the performance and uptime of automated machines. In 2006, the cost of manufacturing downtime in the automotive industry was evaluated at $1.3 million per hour. A decade later, the rising financial investment in vehicle technologies and the growing profitability of the market make unexpected service interruptions more expensive by multiple orders of magnitude.

With Edge Computing, IoT sensors can monitor machine health and identify signs of time-sensitive maintenance issues in real-time. The data is analyzed on the manufacturing premises and analytics results are uploaded to centralized cloud data centers for reporting or further analysis. Analyzing anomalies can allow the workforce to perform corrective measures or predictive maintenance earlier, before the issue escalates and impacts the production line. Analyzing the most impactful machine health metrics can allow organizations to prolong the useful life of manufacturing machines. As a result, manufacturing organizations can lower the cost of maintenance, improve operational effectiveness of the machines and realize higher return on assets.
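On-premises anomaly detection of the kind described above can be sketched with a simple rolling-baseline check: flag any sensor reading that drifts too far from the recent average. The window size, tolerance, and vibration values are illustrative, not taken from any real deployment.

```python
# Flag machine-health readings that deviate from a rolling mean by more
# than a fixed tolerance; runs entirely on the factory edge node.
def find_anomalies(readings, window=3, tolerance=5.0):
    anomalies = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if abs(readings[i] - baseline) > tolerance:
            anomalies.append(i)  # index of the suspicious reading
    return anomalies

vibration = [10.1, 10.3, 9.9, 10.2, 22.5, 10.0]
print(find_anomalies(vibration))  # flags the spike at index 4
```

Only flagged events (not the raw sensor stream) would then be uploaded to the cloud for reporting, matching the data flow described above.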
4. Voice Assistance
Voice Assistance technologies such as Amazon Echo, Google Home and Apple Siri, among others, are pushing the boundaries of AI. An estimated 56.3 million smart voice assistant devices will be shipped globally in 2018. Gartner predicts that 30 percent of consumer interactions with technology will take place via voice by the year 2020. This fast-growing consumer technology segment requires advanced AI processing and low-latency response times to deliver effective interactions with end-users.

Particularly for use cases that involve AI voice assistance, the technology's needs go beyond computational power and data transmission speed. The long-term success of voice assistance depends on the consumer privacy and data security capabilities of the technology. Sensitive personal information is a treasure trove for underground cybercrime rings, and potential network vulnerabilities in voice assistance systems could pose unprecedented security and privacy risks to end-users. To address this challenge, vendors such as Amazon are enhancing their AI capabilities and deploying the technology closer to the edge, so that voice data doesn't need to move across the network. Amazon is reportedly working to develop its own AI chip for its Echo devices.

The prevalence of edge computing in the voice assistance segment will hold equal importance for enterprise users, as employees working in the field or on the manufacturing line will be able to access and analyze useful information without interrupting manual work.

According to the Gartner Hype Cycle 2017, Edge Computing is drawing closer to the Peak of Inflated Expectations and will likely reach the Plateau of Productivity in two to five years. Considering the ongoing research and developments in AI and 5G connectivity technologies, and the rising demands of smart industrial IoT applications, Edge Computing may reach maturity faster than expected.
