Embedded Software Development for Safety-Critical Systems, Second Edition

Chris Hobbs

  1. 366 pages
  2. English
  3. ePUB (mobile-friendly)
  4. Available on iOS and Android

About this book

This is a book about the development of dependable, embedded software. It is for systems designers, implementers, and verifiers who are experienced in general embedded software development, but who are now facing the prospect of delivering a software-based system for a safety-critical application. It is aimed at those creating a product that must satisfy one or more of the international standards relating to safety-critical applications, including IEC 61508, ISO 26262, EN 50128, EN 50657, IEC 62304, or related standards.

Of the first edition, Stephen Thomas, PE, Founder and Editor of FunctionalSafetyEngineer.com said, "I highly recommend Mr. Hobbs' book."


Information

Publisher: CRC Press
Year: 2019
ISBN: 9781000507331
Edition: 2
Part I
Background

Chapter 1
Introduction
We’re entering a new world in which data may be more important than software.
Tim O’Reilly
This is a book about the development of dependable, embedded software.
It is traditional to begin books and articles about embedded software with the statistic of how many more lines of embedded code there are in a modern motor car than in a modern airliner. It is traditional to start books and articles about dependable code with a homily about the penalties of finding bugs late in the development process — the well-known exponential cost curve.
What inhibits me from this approach is that I have read Laurent Bossavit’s wonderful book, The Leprechauns of Software Engineering (reference [1]), which ruthlessly investigates such “well-known” software engineering preconceptions and exposes their lack of foundation.
In particular, Bossavit points out the circular logic associated with the exponential cost of finding and fixing bugs later in the development process: “Software engineering is a social process, not a naturally occurring one — it therefore has the property that what we believe about software engineering has causal impacts on what is real about software engineering.” It is precisely because we expect it to be more expensive to fix bugs later in the development process that we have created procedures that make it more expensive.
Bossavit’s observations will be invoked several times in this book because I hope to shake your faith in other “leprechauns” associated with embedded software. In particular, the “100 million lines of code in a modern car” seems to have become a mantra from which we need to break free.
Safety Culture
A safety culture is a culture that allows the boss to hear bad news.
Sidney Dekker
Most of this book addresses the technical aspects of building a product that can be certified to a standard, such as IEC 61508 or ISO 26262. There is one additional, critically important aspect of building a product that could affect public safety — the responsibilities carried by the individual designers, implementers and verification engineers. It is easy to read the safety standards mechanically, and treat their requirements as hoops through which the project has to jump, but those standards were written to be read by people working within an established safety culture.
Anecdote 1
I first started to think about the safety-critical aspects of a design in the late 1980s, when I was managing the development of a piece of telecommunications equipment.
A programmer, reading the code at his desk, realized that a safety check in our product could be bypassed. When a technician was working on the equipment, the system carried out a high-voltage test on the external line as a safety measure. If a high voltage was present, the software refused to close the relays that connected the technician’s equipment to the line.
The fault found by the programmer allowed the high-voltage check to be omitted under very unusual conditions.
I was under significant pressure from my management to ship the product. It was pointed out that high voltages rarely were present and, even if they were, it was only under very unusual circumstances that the check would be skipped.
At that time, none of the techniques described in this book for assessing the situation and making a reasoned, justifiable decision were available to me. It was this incident that set me off down the road that has led to this book.
Annex B of ISO 26262-2 provides a list of examples indicative of good or poor safety cultures, including “groupthink” (bad), intellectual diversity within the team (good), and a reward system that penalizes those who take short-cuts that jeopardize safety (good).
Everyone concerned with the development of a safety-critical device needs to be aware that human life may hang on the quality of the design and implementation.
The official inquiry into the Deepwater Horizon tragedy (reference [2]) specifically addresses the safety culture within the oil and gas industry: “The immediate causes of the Macondo well blowout can be traced to a series of identifiable mistakes made by BP, Halliburton, and Transocean that reveal such systematic failures in risk management that they place in doubt the safety culture of the entire industry.”
The term “safety culture” appears 116 times in the official Nimrod Review (reference [3]) following the investigation into the crash of the Nimrod aircraft XV230 in 2006. In particular, the review includes a whole chapter describing what is required of a safety culture and explicitly states that “The shortcomings in the current airworthiness system in the MOD are manifold and include 
 a Safety Culture that has allowed ‘business’ to eclipse Airworthiness.”
In a healthy safety culture, any developer working on a safety-critical product has the right to know how to assess a risk, and has the duty to bring safety considerations forward.
As Les Chambers said in his blog in February 2012† when commenting on the Deepwater Horizon tragedy:
We have an ethical duty to come out of our mathematical sandboxes and take more social responsibility for the systems we build, even if this means career threatening conflict with a powerful boss. Knowledge is the traditional currency of engineering, but we must also deal in belief.
One other question that Chambers addresses in that blog posting is whether it is acceptable to pass a decision “upward.” In the incident described in Anecdote 1, I refused to sign the release documentation and passed the decision to my boss. Would that have absolved me morally or legally from any guilt in the matter, had the equipment been shipped and had an injury resulted? In fact, my boss also refused to sign and shipment was delayed at great expense.
Anecdote 2
At a conference on safety-critical systems that I attended a few years back, a group of us were chatting during a coffee break. One of the delegates said that he had a friend who was a lawyer. This lawyer quite often defended engineers who had been accused of developing a defective product that had caused serious injury or death. Apparently, the lawyer was usually confident that he could get the engineer proven innocent if the case came to court. But often the case never came to court, because the engineer had committed suicide. This anecdote killed the conversation, as we reflected on its implications for each of us personally.
Our Path
I have structured this book as follows:
Background material.
Chapter 2 introduces some of the terminology to be found later in the book. This is important because words such as fault, error, and failure, often used interchangeably in everyday life, have ...
