Part I
Concepts and Adoption Challenges
1 Introduction to the Internet of Things
Karolina Baras and Lina M. P. L. Brito
1.1 Introduction
Back in 1989, there were around 100,000 hosts connected to the Internet (Zakon, 2016), and the World Wide Web (WWW) came to life a year later at CERN with the first and only site at the time. 1 Ten years after Tim Berners-Lee unleashed the WWW, a whole new world of possibilities started to emerge when Kevin Ashton, from the Massachusetts Institute of Technology's (MIT) Auto-ID Labs, coined the term Internet of Things (Ashton, 2009). In the same year, Neil Gershenfeld published his work on things that think, where he envisioned the evolution of the WWW as "things start to use the Net so that people don't need to" (Gershenfeld, 1999). Simultaneously, at the Xerox PARC Laboratories in Palo Alto, California, the so-called third era of modern computing was dawning, with Mark Weiser introducing the concept of "ubiquitous computing" in his paper published in Scientific American (Weiser, 1991). Tabs, pads, and boards were proposed as the essential building blocks for the computing of the future. Wireless networking and seamless access to shared resources would make the user experience with technology as enjoyable as "a walk in the woods."
In 1999, the number of hosts exceeded 2 million and the number of sites jumped to 4 million (Zakon, 2016). The Institute of Electrical and Electronics Engineers (IEEE) 802.11b (Wi-Fi) standard had just been published, with transmission rates of up to 11 Mbit/s. GSM was growing fast, but the phones were not at all smart yet; they allowed only for making phone calls and sending short messages. GPS signals for civil use were still degraded by selective availability, and the receivers were heavy, bulky, and expensive. The area of wireless sensor networks (WSNs) also emerged in the 1990s with the concept of smart dust: a large number of tiny devices scattered around an area, capable of sensing, recording, and wirelessly communicating the sensed data.
At the dawn of the eagerly anticipated twenty-first century, technological growth accelerated at an unprecedented pace. Although the reports published 10 and 20 years after Weiser's vision showed that not everything had turned out just as he had imagined, significant changes were introduced in the way we use technology and live with it. Our habits changed, our interaction with technology changed, and the way we grow, play, study, work, and communicate changed as well.
In 2005, the International Telecommunication Union (ITU) published its first report on the Internet of Things (IoT), noting that
"Machine-to-machine communications and person-to-computer communications will be extended to things, from everyday household objects to sensors monitoring the movement of the Golden Gate Bridge or detecting earth tremors. Everything from tyres to toothbrushes will fall within communications range, heralding the dawn of a new era, one in which today's internet (of data and people) gives way to tomorrow's Internet of Things." (ITU-T, 2005)
Three years later, in 2008, the number of devices connected to the Internet outnumbered the world's population for the first time. The introduction of Internet Protocol version 6 (IPv6) 2 resolved the problem of IP address exhaustion, which had become imminent near the end of the twentieth century. The first international conference on the IoT 3 took place in March 2008, gathering industry and academia experts to share their knowledge, experience, and ideas on the emerging concept. In the following years, the number of IoT-related events and conferences grew enormously.
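To give a sense of why IPv6 removed the addressing bottleneck, the short Python sketch below (illustrative only; `2001:db8::/64` is the prefix reserved for documentation examples) uses the standard-library `ipaddress` module to compare the entire IPv4 address space with a single customary IPv6 /64 subnet.

```python
import ipaddress

# IPv4 offers 2**32 addresses in total -- too few once connected
# devices outnumber people; IPv6 widens the space to 2**128.
IPV4_TOTAL = 2 ** 32

# A single /64, the customary IPv6 subnet size, already holds 2**64
# addresses (2001:db8::/64 is the documentation prefix).
subnet = ipaddress.ip_network("2001:db8::/64")

print(f"IPv4 total:   {IPV4_TOTAL:,}")
print(f"One IPv6 /64: {subnet.num_addresses:,}")
print(f"A /64 holds {subnet.num_addresses // IPV4_TOTAL:,}x the IPv4 space")
```

In other words, even one standard IPv6 subnet holds over four billion times as many addresses as the whole IPv4 Internet, which is why address scarcity ceased to be a constraint on connecting "things."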
Open-source electronics platforms such as Arduino, 4 which reached the market between 2005 and 2008, gave birth to millions of new ideas and projects for home and office automation, education, and leisure. Other examples of single-board computers (SBCs) followed: Raspberry Pi, 5 BeagleBone Black, 6 Intel Edison, 7 and so on. Today, one can buy a dozen tiny but fairly powerful computers for less than $50 each, connect them to the Internet and to a plethora of sensors and actuators, collect and analyze gigabytes of data, and build home or office automation projects with real-time visualizations of the information generated from the data on the fly. Alternatively, one can use remote networks of intelligent devices deployed elsewhere, for example, OneLab. 8
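A typical hobbyist project of the kind just described reduces to a loop of sampling a sensor, aggregating, and reporting. The Python sketch below is a minimal illustration under stated assumptions: `read_temperature_c` is a hypothetical stand-in for a real sensor driver (on a Raspberry Pi it would read a GPIO- or I2C-attached sensor) and here simply simulates readings near 21 °C.

```python
import random
import statistics

def read_temperature_c():
    # Hypothetical sensor driver: a real SBC project would query a
    # GPIO- or I2C-attached sensor here; we simulate values near 21 C.
    return 21.0 + random.uniform(-0.5, 0.5)

def collect(samples=10):
    # Sample the sensor a fixed number of times (a real deployment
    # would sleep between readings and timestamp each sample).
    return [read_temperature_c() for _ in range(samples)]

data = collect()
print(f"mean: {statistics.mean(data):.2f} C, "
      f"spread: {max(data) - min(data):.2f} C")
```

Swapping the simulated driver for a real one, and the `print` for an HTTP or MQTT publish, turns this sketch into the skeleton of the automation projects mentioned above.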
In 2009, the Commission of the European Communities published a report on the IoT action plan for Europe, showing that the IoT had reached a very high level of importance among European politicians, commercial and industry partners, and researchers (Commission of the European Communities, 2009). Several global standards initiatives have been created in recent years to discuss and define IoT-related issues and to establish global agreement on the standard technologies to be deployed in IoT projects. For example, oneM2M 9 was created in 2012 as a global standards initiative covering machine-to-machine and IoT technologies, ranging from requirements, architecture, and application programming interface (API) specifications to security solutions and interoperability issues.
In 2015, the European Commission created the Alliance for Internet of Things Innovation (AIOTI) 10 to foster interaction and collaboration between IoT stakeholders. The convergence of cloud computing, the miniaturization and falling cost of sensors and microcontrollers, and the omnipresence of digital connectivity all contributed to making the IoT a reality for years to come.
In fact, some sources consider that the four pillars of digital transformation are cloud, mobility, big data, and social networking, and that IoT is based on these (IDC, 2015; i-SCOOP, 2015).
Gartner forecasts that by 2020 there will be more than 20 billion "things" connected to the Internet (Gartner, Inc., 2013). This number excludes PCs, smartphones, and tablets.
Now that the IoT is finally becoming a reality, there is a need for a global understanding of its definition, a reference architecture (RA), requirements, and standards. The following sections give an overview of the current IoT landscape and highlight some of the proposals under discussion in the several groups, alliances, and consortia focused on the IoT. There is at least agreement on some of the requirements that need to be addressed, but there is still room for improvement and for even more collaboration among the stakeholders. For example, unique device identification, system modularity, security, privacy, and low cost are some of the issues that need further discussion and action.
This chapter covers the fundamentals of the IoT, its application domains, and the main challenges that still need to be overcome. The rest of the chapter is organized as follows: Section 1.2 reviews the main definitions and concepts involved. Some of the proposed architectures and reference models (RMs) are described in Section 1.3. Section 1.4 includes an overview of IoT-enabling technologies and the efforts of several working groups and consortia to create standards for the IoT. Section 1.5 gives an overview of the main IoT application domains, and Section 1.6 highlights the main IoT implementation challenges. The last section provides the main conclusions of the chapter and outlines current trends.
1.2 Definition of IoT
Several international organizations and research centers have been involved in the creation of common standards for the IoT. One of the first steps in this process has been to find a common definition. The first definitions of the IoT were tightly coupled to the radio-frequency identification (RFID)-related context of the Auto-ID Labs at MIT, where the term first emerged. As the concept became universal, the definition evolved toward more general terms. For Kevin Ashton, the meaning of the IoT and the consequences of its implementation in our environments are the following (Ashton, 2009):
"If we had computers that knew everything there was to know about things – using data they gathered without any help from us – we would be able to track and count everything, and greatly reduce waste, loss and cost. We would know when things needed replacing, repairing or recalling, and whether they were fresh or past their best. We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves."
Another definition of the IoT is the following (Atzori et al., 2010):
"The basic idea of this concept is the pervasive presence around us of a variety of things or objects – such as Radio-Frequency IDentification (RFID) tags, sensors, actuators, mobile phones, etc. – which, through unique addressing schemes, are able to interact with each other and cooperate with their neighbors to reach common goals."
Study Group 20 (SG 20) was created in 2015 as a result of the 10-year experience period that followed the publication of the first ITU report on the IoT in 2005 and of the findings of the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Focus Group on Smart Sustainable Cities, which ceased to exist in 2015. In the SG 20 recommendation document Y.2060 (ITU-T, 2012), the following definition is given:
"Internet of things (IoT): A global infrastructure for the information society, enabling advanced services by interconnecting (physical and virtual) things based on existing and evolving interoperable information and communication technologies.
NOTE 1 – Through the exploitation of identification, data capture, processing and communication capabilities, the IoT makes full use of things to offer services to all kinds of applications, whilst ensuring that security and privacy requirements are fulfilled.
NOTE 2 – From a broader perspective, the IoT can be perceived as a vision with technological and societal implications."
The ITU-T document goes on to explain ...