Technology & Engineering

Data Analysis in Engineering

Data analysis in engineering involves the systematic process of inspecting, cleaning, transforming, and modeling data to extract useful information and make informed decisions. It encompasses various statistical and computational techniques to interpret and analyze complex engineering data sets, enabling engineers to identify patterns, trends, and insights that drive improvements in design, performance, and decision-making processes.
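
To make the inspect–clean–transform–model sequence concrete, here is a minimal Python sketch that walks a small, invented set of sensor readings through each stage using only the standard library; the data, variable names, and the choice of a simple least-squares trend line are illustrative assumptions rather than part of any source quoted below (the linear_regression helper requires Python 3.10 or later).

```python
import statistics

# Hypothetical hourly readings from a temperature sensor (degrees F); None marks dropouts.
raw_readings = [68.0, 68.4, None, 69.1, 69.5, 70.2, None, 71.0, 71.6, 72.1]

# Inspect: how many readings are there, and how many are missing?
print(f"{len(raw_readings)} readings, {sum(r is None for r in raw_readings)} missing")

# Clean: drop the missing readings, keeping each reading's hour index for the model.
cleaned = [(hour, temp) for hour, temp in enumerate(raw_readings) if temp is not None]

# Transform: convert Fahrenheit to Celsius.
hours = [h for h, _ in cleaned]
temps_c = [(t - 32.0) * 5.0 / 9.0 for _, t in cleaned]

# Describe the cleaned, transformed data.
print(f"mean = {statistics.mean(temps_c):.2f} C, stdev = {statistics.stdev(temps_c):.2f} C")

# Model: fit a least-squares trend line (temperature vs. hour) to look for drift.
fit = statistics.linear_regression(hours, temps_c)
print(f"trend: {fit.slope:.3f} C per hour, intercept {fit.intercept:.2f} C")
```
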

Written by Perlego with AI-assistance

4 Key excerpts on "Data Analysis in Engineering"

  • Data Analyst

    Careers in data analysis

    It draws attention to the fact that the process of analysing data includes the tasks of manipulating the data by cleaning and transforming it as well as the task of discovering useful information from it. Manipulating data is typically classed as a computer science skill, whereas the discovery of information from data is a statistical or machine learning skill. Successful data analysis requires both of these skill sets.
    There are a number of standard process models for data analysis, most notably CRISP-DM, SEMMA and KDD.
    The role of data in society
    Data analysis is rapidly becoming one of the most important and challenging activities driving improvements in business performance, public services and other important aspects of society. This is happening because the volumes of data available for analysis continue to expand, the hardware accessible to us grows in data-processing power, and the algorithms we can apply to the data become ever more advanced. These factors mean that we can get better insight from data than ever before.
    This insight is used by businesses to develop products and services that more precisely suit their customers and increase business profits. These developments in data analysis also benefit other areas, such as the public sector, where the data is used to find the most cost-effective solutions to benefit all social groups.
    There are continual new announcements in the press, commercial trade magazines and academic literature of novel applications of algorithms, automated decision-making and more efficient strategic decisions based on data. These news stories remind us how important data analysis is in transforming political debate, the provision of public services and commercial performance, and in solving research questions.
    But these achievements are only possible because there are skilled data analysts able to work with the technology, apply the algorithms, explain the analysis and communicate the results to decision-makers. The growing importance of data in decision-making is creating an increasing demand for the education of new data analysts and the advancement of skills of experienced data analysts. To be successful and to keep up with all these developing areas, data analysts need a combination of skills that enable them to extract, process and manipulate data using programming languages and databases, as well as statistical skills and acumen in communication. The role of the data analyst is detailed further in Chapter 2.
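
To illustrate the two skill sets the first excerpt distinguishes, the following Python sketch first manipulates some invented, messy export rows (parsing, validating and cleaning them, the computer-science half) and then discovers a simple relationship in the cleaned values (a correlation, the statistical half); the record format and field names are made up for this example, and statistics.correlation requires Python 3.10 or later.

```python
from statistics import correlation

# Hypothetical raw export: "machine_id,hours_run,defect_count", with some bad rows.
raw_rows = [
    "M01,120,3",
    "M02,95,1",
    "M03,,4",        # missing hours_run
    "M04,210,7",
    "not a record",
    "M05,160,5",
]

# Manipulation (computer-science skill): parse, validate and clean the records.
cleaned = []
for row in raw_rows:
    parts = row.split(",")
    if len(parts) != 3 or not parts[1] or not parts[2]:
        continue  # skip malformed or incomplete rows
    cleaned.append((parts[0], float(parts[1]), int(parts[2])))

# Discovery (statistical skill): is longer running time associated with more defects?
hours = [h for _, h, _ in cleaned]
defects = [d for _, _, d in cleaned]
print(f"kept {len(cleaned)} of {len(raw_rows)} rows")
print(f"correlation(hours_run, defect_count) = {correlation(hours, defects):.2f}")
```
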
  • Essentials of Data Science and Analytics

    Statistical Tools, Machine Learning, and R-Statistical Software Overview

    Statistics is the science and art of making decisions using data. It is often called the science of data and is about analyzing and drawing meaningful conclusions from the data. Almost every field uses data and statistics to learn about systems and their processes. In fields such as business, research, health care, and engineering, a vast amount of raw data is collected and warehoused rapidly; this data must be analyzed to be meaningful. In this chapter, we will look at different types of data. It is important to note that data are not always numbers; they can be in the form of pictures, voice or audio, and other categories. We will briefly explore how to make efficient decisions from data. Statistical tools will aid in gaining skills such as (i) collecting, describing, analyzing, and interpreting data for intelligent decision making, (ii) realizing that variation is an integral part of data, (iii) understanding the nature and pattern of variability of a phenomenon in the data, and (iv) being able to measure the reliability of the parameters of the population from which the sample data are collected, in order to draw valid inferences.
    The applications of statistics can be found in a majority of issues that concern everyday life. Examples include surveys related to consumer opinions, marketing studies, and economic and political polls.
    Current Developments in Data Analysis
    Because of the advancement in technology, it is now possible to collect massive amounts of data. Lots of data, such as web data, e-commerce data, purchase transactions at retail stores, and bank and credit card transaction data, among others, are collected and warehoused by businesses. There has been increasing pressure on businesses to provide high-quality products and services to improve their market share in this highly competitive market. Not only is it critical for businesses to meet and exceed customer needs and requirements, but it is also important for them to process and analyze large amounts of data efficiently in order to seek hidden patterns in the data. The processing and analysis of large data sets falls under the emerging fields known as big data, data mining, and analytics.
    To process these massive amounts of data, data analytics and data mining use statistical techniques and algorithms to extract nontrivial, implicit, previously unknown, and potentially useful patterns. Because applications of data mining tools are growing, there will be more demand for professionals trained in data mining. The knowledge discovered from this data in order to make intelligent data-driven decisions is referred to as business intelligence and business analytics.
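
As a small, hypothetical illustration of the statistical skills listed in the previous excerpt (describing data, recognizing variation, and measuring the reliability of an estimated population parameter), the sketch below summarizes an invented sample of tensile-strength measurements and forms a rough 95% confidence interval for the population mean; the data values and the hard-coded t critical value are assumptions made only for this example.

```python
import math
import statistics

# Hypothetical tensile-strength measurements (MPa) from ten test specimens.
sample = [512.0, 498.5, 505.2, 510.8, 494.9, 507.3, 501.1, 509.6, 503.4, 499.7]

n = len(sample)
mean = statistics.mean(sample)    # describing the data: central tendency
stdev = statistics.stdev(sample)  # variation: sample standard deviation

# Reliability of the estimate: standard error of the mean and a 95% confidence
# interval for the population mean. The t critical value for 9 degrees of
# freedom is roughly 2.262, and the measurements are assumed roughly normal.
std_err = stdev / math.sqrt(n)
half_width = 2.262 * std_err

print(f"sample mean : {mean:.1f} MPa")
print(f"sample stdev: {stdev:.1f} MPa")
print(f"95% CI for the population mean: ({mean - half_width:.1f}, {mean + half_width:.1f}) MPa")
```
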
  • The Management of a Student Research Project
    • John A Sharp, John Peters, Keith Howard(Authors)
    • 2017(Publication Date)
    • Routledge
      (Publisher)
    For the purposes of this chapter analysis will be assumed to involve the ordering and structuring of data to produce knowledge. Structuring will be taken to include summarising and categorising the work of others, which is often the main form of analysis in lower level research projects. Data will be interpreted broadly as information gathered by observation, through books or pictures, field surveys, laboratory experiments, etc. The chapter will cover approaches to analysis that can be applied in a variety of different fields. As such it addresses two topics often separated in texts on research methodology, namely research design and data analysis. In practice, these are closely interlinked – at least in principle – since the design determines the data and what can be done with it, whereas the end purposes of the data analysis are the major determinants of the research design. For this reason we deal here with both aspects.
    The definition of analysis proposed is a broad one and it will be presumed to embrace a whole range of activities of both the qualitative and the quantitative type. From the 1960s through to the beginning of the 1990s there was a discernible tendency for research to make increasing use of quantitative analysis and in particular statistical methods, for example in attempting to quantify notions such as culture. Statistical methods enjoy a special position in research because they grew up through attempts by mathematicians to provide solutions to problems of defining and building knowledge noted by philosophers. They therefore provide a very useful model of many of the processes of data analysis. Furthermore, they reflect the structure of the analysis process in many different fields. Inevitably, then, the bias of this chapter is towards statistical methods. Rather than give a detailed account of some techniques, which would inevitably involve arbitrary selection, we provide here a brief outline of a rather greater number, indicating in particular where they fit into the overall picture, and follow this outline with detailed references in the bibliography at Appendix 2. To avoid pointless repetition in what follows it will be assumed that researchers will consult the bibliography for further details of techniques of interest to them.
    Despite the bias towards statistical methods we shall pay some attention to the considerable proliferation of readily available computer analysis packages that has taken place in the past fifteen years. These range from packages used by engineers to analyse and synthesise designs, through to packages designed to facilitate the analysis of qualitative data, such as interview notes.
  • Data Science for Civil Engineering
    • Rakesh K. Jain, Prashant Shantaram Dhotre, Deepak Tatyasaheb Mane, Parikshit Narendra Mahalle(Authors)
    • 2023(Publication Date)
    • CRC Press
      (Publisher)
    Khan and Ayyoob, 2018). To support better decision-making, big data are converted into smart data or smaller problems using advanced technologies such as statistics, AI, ML, data mining, blockchain, etc. The following section discusses in detail the big data analytical techniques used for deriving smart data.
    6.2.2.1 Statistical
    Statistical methods are conventional data analysis techniques designed to perform regression, categorization, segmentation, clustering, anomaly detection, prediction, etc. For example, in groundwater modeling, multivariate regression analysis (e.g., geostatistics) can be used to measure causal relationships between a series of variables and to predict the outcome of a dependent variable (Gandomi and Haider, 2015). Statistical techniques are widely used for monitoring groundwater systems, water quality and flow; however, they are not capable of handling highly dimensional, heterogeneous, and noisy data (Gandomi and Haider, 2015). Additionally, they are challenging to implement in parallel-processing/HPC environments, which is essential when dealing with big data (Chen and Zhang, 2014). Nevertheless, the scientific community is integrating these methods into advanced data analytics disciplines for handling structured data.
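
As a toy version of the multivariate regression the excerpt describes, the sketch below fits an ordinary least-squares model that predicts groundwater level from two invented predictors (rainfall and pumping rate); the variable names, data, and the use of NumPy's lstsq solver are illustrative assumptions, not the workflow of the cited authors.

```python
import numpy as np

# Hypothetical monthly observations for one monitoring well.
rainfall_mm   = np.array([120.0,  85.0,  60.0,  30.0,  15.0,  95.0, 140.0, 110.0])
pumping_m3d   = np.array([400.0, 420.0, 450.0, 480.0, 500.0, 430.0, 390.0, 410.0])
water_level_m = np.array([ 12.4,  11.8,  11.1,  10.2,   9.6,  11.9,  12.9,  12.2])

# Design matrix with an intercept column: level ~ b0 + b1*rainfall + b2*pumping.
X = np.column_stack([np.ones_like(rainfall_mm), rainfall_mm, pumping_m3d])

# Ordinary least-squares fit of the multivariate linear model.
coeffs, _residuals, _rank, _sv = np.linalg.lstsq(X, water_level_m, rcond=None)
b0, b1, b2 = coeffs
print(f"level ~ {b0:.2f} + {b1:.4f} * rainfall + {b2:.4f} * pumping")

# Predict the water level for a new month (illustrative values only).
new_month = np.array([1.0, 100.0, 440.0])
print(f"predicted level: {new_month @ coeffs:.2f} m")
```
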
    6.2.2.2 Data Mining
    Data mining is an analytical technique for extracting relevant information from large amounts of data for the purpose of decision making (Figure 6.8). It is widely used for understanding patterns and relationships, categorizing data, etc. (e.g., Ali et al., 2016). Many statistical and ML techniques are used in data mining applications. Data mining techniques can be applied to structured and unstructured data of any volume and type, such as image, video, and text. Many data mining tools, such as R-Programming Tool, WEKA, RATTLE, JHEPWORK, ORANGE, and KNIME, are used, including in water resources engineering applications, for gaining knowledge and uncovering hidden content in the data. In the water resources domain, data mining techniques are used in rainfall analysis, weather report analysis, drought forecasting, inflow/outflow prediction, temperature pattern studies, etc. For example, reservoir management is a critical task for engineers in a particular watershed and depends heavily on downstream water requirements, inflow, and storage capacity. To ensure a balanced or optimized water supply, historical reservoir operation data are analyzed and hidden patterns are derived based on reservoir rule curves (e.g., Ahmad and Simonovic, 2000). Such a model extracts knowledge in the form of 'if-then' rules of pattern, which are subsequently used for simulating reservoir operation followed by the decision-making processes (Mohan and Ramsundram, 2013).
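
As a toy counterpart to the 'if-then' rule extraction described above, the following sketch groups invented historical reservoir-operation records by storage and inflow bands and summarizes the typical release for each combination; the thresholds, field names, and data are assumptions for illustration only and do not reproduce the cited models.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical reservoir operation records:
# (storage as % of capacity, inflow in m^3/s, release in m^3/s).
records = [
    (85, 320, 300), (90, 350, 340), (88, 150, 210),
    (60, 300, 250), (55, 120, 110), (40, 90, 60),
    (35, 280, 180), (92, 400, 390), (58, 140, 120),
]

def storage_band(pct):
    return "high" if pct >= 75 else "medium" if pct >= 50 else "low"

def inflow_band(q):
    return "high" if q >= 250 else "low"

# Group historical releases by (storage band, inflow band).
releases = defaultdict(list)
for storage, inflow, release in records:
    releases[(storage_band(storage), inflow_band(inflow))].append(release)

# Emit one if-then rule per observed combination, using the mean historical release.
for (s_band, i_band), vals in sorted(releases.items()):
    print(f"IF storage is {s_band} AND inflow is {i_band} "
          f"THEN release ~ {mean(vals):.0f} m^3/s  (from {len(vals)} records)")
```
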
Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.