Paper available

The paper entitled «Proposal of an Adaptive Fault Tolerance Mechanism to Tolerate Intermittent Faults in RAM», written by J.-Carlos Baraza-Calvo, Joaquín Gracia-Morán, Luis-J. Saiz-Adalid, Daniel Gil-Tomás and Pedro-J. Gil-Vicente, can be downloaded from the journal Electronics.

Abstract: Due to transistor shrinking, intermittent faults are a major concern in current digital systems. This work presents an adaptive fault tolerance mechanism based on error correction codes (ECC), able to modify its behavior when the error conditions change without increasing the redundancy. As a case example, we have designed a mechanism that can detect intermittent faults and swap from an initial generic ECC to a specific ECC capable of tolerating one intermittent fault. We have inserted the mechanism in the memory system of a 32-bit RISC processor and validated it by using VHDL simulation-based fault injection. We have used two (39, 32) codes: a single error correction–double error detection (SEC–DED) and a code developed by our research group, called EPB3932, capable of correcting single errors and double and triple adjacent errors that include a bit previously tagged as error-prone. The results of injecting transient, intermittent, and combinations of intermittent and transient faults show that the proposed mechanism works properly. As an example, the percentage of failures and latent errors is 0% when injecting a triple adjacent fault after an intermittent stuck-at fault. We have synthesized the adaptive fault tolerance mechanism proposed in two types of Field Programmable Gate Arrays (FPGAs): non-reconfigurable and partially reconfigurable. In both cases, the overhead introduced is affordable in terms of hardware, time and power consumption.
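The mechanism described above starts from a generic SEC-DED code. As a toy illustration of the SEC-DED principle only, the sketch below uses a small (8,4) extended Hamming code, not the paper's (39,32) codes and not the EPB3932 code: a single flipped bit is corrected, while a double flip is detected but not corrected.

```python
def encode(d):
    """Encode 4 data bits [d1,d2,d3,d4] into an 8-bit SEC-DED codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # parity over positions 3,5,7
    p2 = d1 ^ d3 ^ d4                     # parity over positions 3,6,7
    p3 = d2 ^ d3 ^ d4                     # parity over positions 5,6,7
    code = [p1, p2, d1, p3, d2, d3, d4]   # Hamming(7,4), positions 1..7
    p0 = 0
    for b in code:
        p0 ^= b                           # overall parity enables double-error detection
    return code + [p0]

def decode(c):
    """Return ('ok' | 'corrected' | 'double', data bits)."""
    syndrome = 0
    for pos in range(1, 8):               # syndrome = XOR of positions of set bits
        if c[pos - 1]:
            syndrome ^= pos
    overall = 0
    for b in c:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = 'ok'
    elif overall == 1:                    # odd overall parity: a single error, correctable
        c = list(c)
        if syndrome:
            c[syndrome - 1] ^= 1          # flip the erroneous bit back
        else:
            c[7] ^= 1                     # the error hit the overall parity bit itself
        status = 'corrected'
    else:                                 # nonzero syndrome, even parity: double error
        status = 'double'
    return status, [c[2], c[4], c[5], c[6]]
```

The adaptive idea in the paper is that, once intermittent faults are detected, the decoder logic is swapped for a stronger code of the same length, so the stored check bits need not change.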

Paper accepted at IEEE Transactions on Dependable and Secure Computing

The paper entitled «A Multi-criteria Analysis of Benchmark Results With Expert Support for Security Tools», written by Miquel Martínez, Juan-Carlos Ruiz, Nuno Antunes, David de Andrés and Marco Vieira, has been accepted for publication in the journal IEEE Transactions on Dependable and Secure Computing.

Abstract. The benchmarking of security tools aims to determine which tools are more suitable to detect system vulnerabilities or intrusions. The analysis process is usually oversimplified by employing just a single metric out of the large set of those available. Accordingly, the decision may be biased by not considering the relevant information provided by the neglected metrics. This paper proposes a novel approach that takes into account several metrics, different scenarios, and the advice of multiple experts. The proposal relies on experts quantifying the relative importance of each pair of metrics towards the requirements of a given scenario. Their judgments are aggregated using group decision making techniques, and weighted according to the familiarity of the experts with the metrics and the scenario, to compute a set of weights accounting for the relative importance of each metric. Then, weight-based multi-criteria decision making techniques can be used to rank the benchmarked tools. The usefulness of this approach is shown by analyzing two different sets of vulnerability and intrusion detection tools from the perspective of multiple/single metrics and different scenarios.
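The pairwise-judgment-to-weights step can be sketched with the classic geometric-mean (AHP-style) aggregation, followed by a weighted-sum ranking. This is a minimal illustration of the general idea, not the paper's exact technique; the comparison matrix, tool names, and metric scores below are invented.

```python
import math

def weights_from_pairwise(a):
    """Geometric-mean weights from a reciprocal pairwise-comparison matrix,
    where a[i][j] says how much more important metric i is than metric j."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in a]
    total = sum(gm)
    return [g / total for g in gm]

def rank_tools(scores, w):
    """Rank tools by the weighted sum of their normalized metric scores."""
    ranked = sorted(scores.items(),
                    key=lambda kv: sum(s * wi for s, wi in zip(kv[1], w)),
                    reverse=True)
    return [name for name, _ in ranked]

# Hypothetical expert judgment over three metrics (e.g. precision, recall,
# detection time): the first metric is deemed 3x as important as the second
# and 5x as important as the third for this scenario.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = weights_from_pairwise(A)

# Invented normalized scores per tool, one value per metric.
scores = {'toolA': [0.9, 0.4, 0.5],
          'toolB': [0.5, 0.9, 0.9]}
ranking = rank_tools(scores, w)
```

In the paper, several such judgment matrices (one per expert) are aggregated and the experts' familiarity with each metric and scenario modulates their influence; the sketch above shows only the single-expert core.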

Paper accepted at Electronics Journal

The paper entitled «Proposal of an Adaptive Fault Tolerance Mechanism to Tolerate Intermittent Faults in RAM», written by J.-Carlos Baraza-Calvo, Joaquín Gracia-Morán, Luis-J. Saiz-Adalid, Daniel Gil-Tomás and Pedro-J. Gil-Vicente, has been accepted for publication in the journal Electronics.


Paper accepted at Electronics journal

The paper entitled “Reducing the Overhead of BCH Codes: New Double Error Correction Codes”, authored by Luis-J. Saiz-Adalid, Joaquín Gracia-Morán, Daniel Gil-Tomás, J.-Carlos Baraza-Calvo and Pedro-J. Gil-Vicente, has been published in the journal Electronics.

Abstract

The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-known class of powerful error correction cyclic codes. BCH codes can correct multiple errors with minimal redundancy. Primitive BCH codes only exist for some word lengths, which frequently do not match those employed in digital systems. This paper focuses on double error correction (DEC) codes for word lengths that are powers of two (8, 16, 32, and 64), which are commonly used in memories. We also focus on hardware implementations of the encoder and decoder circuits for very fast operation. This work proposes new low redundancy and reduced overhead (LRRO) DEC codes, with the same redundancy as the equivalent BCH DEC codes, but whose encoder and decoder circuits present a lower overhead (in terms of propagation delay, silicon area usage and power consumption). To design the new codes, we used a methodology that searches for parity check matrices based on error patterns. We implemented and synthesized the codes, and compared the results with those obtained for the BCH codes. Our implementation of the decoder circuits achieved reductions between 2.8% and 8.7% in propagation delay, between 1.3% and 3.0% in silicon area, and between 15.7% and 26.9% in power consumption. Therefore, we propose LRRO codes as an alternative for protecting information against multiple errors.
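The search criterion behind a parity-check-matrix-based DEC design can be stated concretely: a binary matrix H defines a double error correction code exactly when the syndromes of all error patterns of weight up to 2 are nonzero and pairwise distinct (equivalently, every four columns of H are linearly independent). The sketch below checks that property; the small matrices are textbook examples (a length-5 repetition code, which is DEC, and Hamming(7,4), which is not), not the LRRO matrices from the paper.

```python
from itertools import combinations

def syndrome(H, error_positions):
    """Syndrome of an error pattern: XOR (GF(2) sum) of the touched H columns,
    with each column packed into an integer."""
    r = len(H)
    s = 0
    for pos in error_positions:
        col = 0
        for i in range(r):
            col = (col << 1) | H[i][pos]
        s ^= col
    return s

def is_dec(H):
    """True iff all weight-1 and weight-2 error patterns have distinct,
    nonzero syndromes, i.e. H yields a double error correction code."""
    n = len(H[0])
    patterns = [(i,) for i in range(n)] + list(combinations(range(n), 2))
    syndromes = [syndrome(H, p) for p in patterns]
    return 0 not in syndromes and len(set(syndromes)) == len(syndromes)

# Length-5 repetition code (minimum distance 5): corrects two errors.
H_rep5 = [[1, 1, 0, 0, 0],
          [1, 0, 1, 0, 0],
          [1, 0, 0, 1, 0],
          [1, 0, 0, 0, 1]]

# Hamming(7,4) (minimum distance 3): single error correction only.
H_ham7 = [[0, 0, 0, 1, 1, 1, 1],
          [0, 1, 1, 0, 0, 1, 1],
          [1, 0, 1, 0, 1, 0, 1]]
```

The paper's methodology additionally steers the search toward matrices whose encoder and decoder circuits are cheap to implement; this sketch covers only the correctness test such a search must pass.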

TFG: Desarrollo e implementación de un sistema empotrado con propiedades de tolerancia a fallos para sistemas de confort de vehículos autónomos

The student Carmelo Martínez Ruiz has successfully defended his TFG (final degree project) entitled «Desarrollo e implementación de un sistema empotrado con propiedades de tolerancia a fallos para sistemas de confort de vehículos autónomos» (Development and implementation of an embedded system with fault tolerance properties for comfort systems of autonomous vehicles), co-supervised by Joaquín Gracia Morán and Luis J. Saiz Adalid.

Congratulations!!!

Abstract:

This work studies embedded systems with fault tolerance properties, protecting the system with Error Correction Codes (ECC). The objective is to apply such protection to the comfort systems of autonomous vehicles. The work shows how to implement a high-efficiency ECC in an embedded system to avoid corrupted measurements, and errors are injected into the system to test the efficiency of the ECC. To demonstrate the operation of a protected system, the STM32F429i-disc1 board and the DHT11 sensor are studied in depth. Obtaining and processing the data provided by the sensor is key, and a comprehensive explanation of how to do it is provided. Finally, the protected and unprotected systems are compared; the reliability and precision guaranteed by the ECC in the protected system leave no doubt that it is necessary if an acceptable level of efficiency is to be achieved.
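The inject-errors-and-compare experiment can be illustrated with a deliberately simple stand-in for the ECC used in the TFG: triple modular redundancy with bitwise majority voting over an 8-bit sensor reading. The reading value and flip counts below are invented for illustration.

```python
import random

def protect(value):
    """Store three copies of the 8-bit sensor reading."""
    return [value, value, value]

def vote(copies):
    """Bitwise majority vote over the three stored copies."""
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

def inject(copies, n_flips, rng):
    """Flip n_flips random bits across the three 8-bit copies."""
    copies = list(copies)
    for _ in range(n_flips):
        i = rng.randrange(3)              # pick a copy
        copies[i] ^= 1 << rng.randrange(8)  # flip one of its 8 bits
    return copies

rng = random.Random(42)
reading = 0b01011101                      # hypothetical DHT11 humidity byte

# Campaign: inject one bit flip per trial; majority voting masks every one,
# whereas an unprotected byte would be wrong after every flip.
errors = sum(vote(inject(protect(reading), 1, rng)) != reading
             for _ in range(1000))
```

With two or more flips the vote can fail (when the same bit position is hit in two different copies), which is why the TFG uses a more capable ECC than this stand-in.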
