
Query Data

Query data refers to the process of requesting specific information from a database using a query language such as SQL. Queries allow users to retrieve, filter, and manipulate data according to their specific requirements. Querying data is essential for extracting meaningful insights and generating reports from large datasets, which makes it a fundamental concept in database management and data analysis.
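To make this concrete, the sketch below issues a simple SQL query from Python against a small in-memory database. The orders table, its columns, and the sample rows are invented purely for illustration, and the standard-library sqlite3 module stands in for whatever database engine a real application would use; the same SELECT/WHERE/GROUP BY shape applies regardless of engine.

```python
import sqlite3

# Build a small in-memory database; the table and rows are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "Ada", 120.0), (2, "Grace", 75.5), (3, "Ada", 42.0)],
)

# Query the data: retrieve rows, filter them with a condition, and aggregate the result.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders WHERE amount > 50 GROUP BY customer"
).fetchall()

for customer, total in rows:
    print(customer, total)   # prints: Ada 120.0 / Grace 75.5
```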

Written by Perlego with AI-assistance

4 Key excerpts on "Query Data"

Index pages curate the most relevant extracts from our library of academic textbooks. They’ve been created using an in-house natural language model (NLM), each adding context and meaning to key research topics.
  • The Psychology of Questions
    • Arthur C. Graesser, John B. Black (Authors)
    • 2017 (Publication Date)
    • Routledge (Publisher)
    13 Data Base Querying by Computer
    Steven P. Shwartz, Cognitive Systems Inc.
    Wendy G. Lehnert, Univ. of Massachusetts Amherst
    The computer-human interface problem is rapidly becoming a critical concern to the data processing industry. Computerized data bases are currently available for a wide spectrum of applications ranging from marketing to law to finance. The computer systems attached to these data bases can, in principle, provide answers to an enormous number of user questions. But in fact, only a small percentage of the potential user population knows how to formulate queries to a data base, i.e., only those with data processing skills.

    Data Base Querying: A Brief History

    Natural language processing technology represents an enormous advance in the user-friendliness of computer systems. In the following section, we present a brief history of the development of user-friendly systems in order to put this new technology in perspective.

    Programming Languages

    The earliest computer data base query systems required queries written in machine or assembler language. To program in machine or assembler language one needs a knowledge of the physical structure of the data base, a knowledge of programming constructs such as loops, variables, and procedures, and a knowledge of the architecture of the machine on which the data base resides. Such programming skills are present in only a very small percentage of the potential user population for most data bases.
    The development of high-level languages, such as FORTRAN, reduced the requisite level of programming expertise. A program written in a particular high-level language will run with virtually no modification on a wide variety of different computers. For this reason, high-level programmers do not need to be knowledgeable concerning computer architecture. However, considerable programming skill is still necessary to access a data base using even high-level languages. A high-level programmer must still know about the physical structure of the data base and have an understanding of programming concepts.
  • Questions and Information Systems
    • Thomas W. Lauer, Eileen Peacock, Arthur C. Graesser (Authors)
    • 2013 (Publication Date)
    • Psychology Press (Publisher)
    Methods for querying each of these types of databases have been developed. Indeed, the types of questions that can be asked are determined by the structure of the database to a greater extent than the topic or content of the database. Unfortunately, there has been little research on what types of questions are most appropriate for particular knowledge domains. Individuals are undoubtedly inhibited from asking the right kinds of questions because of the constraints in the structure and content of the database.

    TECHNIQUES FOR QUERYING SYSTEMS

    In the first section of this chapter, we review the different types of query methods that have been developed. At one extreme, there are structured query methods that are strict command sequences in which it is necessary for users to memorize complicated commands and the parameters of each command. At the other end of the continuum, there are natural language methods that accept simple English statements.

    Structured Queries

    Structured queries are used to elicit information from relational databases. Traditional relational databases consist of objects and specified relations between these objects. For example, users of relational databases may perceive the data as being in tabular form. Users are able to manipulate the data and query the database by using operators that create new tables from the old ones (Date, 1986). One command may be used to pull out a subset of rows from a particular table, whereas another operator may pull out a subset of the columns.
    Other types of relational databases have roots in artificial intelligence, such as semantic networks (Collins & Loftus, 1975), scripts (Schank & Abelson, 1977), frames (Minsky, 1975), and conceptual graphs (Sowa, 1984). These databases are structured in a fashion that supports rapid, intelligent, and knowledge-intensive retrieval. Rather than being constructed from a set of rows and columns, these databases are structured according to the types of information they contain. For example, information may include taxonomic, causal, descriptive, or goal-oriented knowledge. Each of these types of knowledge is structured in a different way and does not assume the traditional tabular form.
    There have been at least five major methods for querying traditional relational databases. These include (a) rigid query syntax, (b) retrieval by reformulation, (c) menu-driven natural language, (d) natural language, and (e) query using truth-table exemplars.
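One way to picture the rigid-query-syntax method, and the row and column operators described above, is the minimal sketch below, which runs SQL statements against a toy table from Python. The employees table and its contents are hypothetical, invented only for this illustration; the row-subset operator corresponds to SQL's WHERE clause (selection) and the column-subset operator to the column list in SELECT (projection).

```python
import sqlite3

# Hypothetical relational table, used only to illustrate the two operators.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Lin", "Sales", 48000), ("Omar", "IT", 61000), ("Rosa", "IT", 58000)],
)

# Pull out a subset of the rows (selection): only the IT department.
it_rows = conn.execute("SELECT * FROM employees WHERE dept = 'IT'").fetchall()

# Pull out a subset of the columns (projection): names and salaries only.
names_and_salaries = conn.execute("SELECT name, salary FROM employees").fetchall()

# Because each result is itself table-shaped, the operators compose.
it_names = conn.execute(
    "SELECT name FROM employees WHERE dept = 'IT' ORDER BY salary DESC"
).fetchall()

print(it_rows)             # [('Omar', 'IT', 61000.0), ('Rosa', 'IT', 58000.0)]
print(names_and_salaries)  # [('Lin', 48000.0), ('Omar', 61000.0), ('Rosa', 58000.0)]
print(it_names)            # [('Omar',), ('Rosa',)]
```

The other four methods in the list above replace this fixed command syntax with progressively more forgiving interfaces, up to free-form natural language.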
  • Big Data Computing: A Guide for Business and Technology Managers

    Because data streams are normally not stored in any kind of data repository, effective and efficient management and analysis of stream data pose great challenges to researchers. Currently, many researchers are investigating various issues relating to the development of data stream management systems. A typical query model in such a system is the continuous query model, where predefined queries constantly evaluate incoming streams, collect aggregate data, report the current status of data streams, and respond to their changes.
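The sketch below is a minimal, hypothetical rendering of that continuous query model: a Python generator stands in for the incoming stream, and an invented threshold predicate plays the role of the predefined query. A real data stream management system would register many such queries and evaluate them as events arrive; only the general shape (per-event evaluation, windowed aggregate, status reporting) is intended to carry over.

```python
from collections import deque

def continuous_query(stream, threshold=100.0, window=5):
    """Hypothetical predefined query: for each incoming reading, maintain a
    sliding window, report the running aggregate, and flag values above a
    threshold."""
    recent = deque(maxlen=window)
    for reading in stream:                      # evaluated continuously, per event
        recent.append(reading)
        average = sum(recent) / len(recent)     # aggregate over the current window
        status = "ALERT" if reading > threshold else "ok"
        yield reading, average, status          # current status of the stream

# Usage with a toy stream; in practice this would be a live feed of events.
readings = [42.0, 97.5, 130.2, 88.1, 101.9]
for reading, avg, status in continuous_query(readings):
    print(f"reading={reading} window_avg={avg:.1f} {status}")
```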

    2.6.7 Web Databases

    The World Wide Web and its associated distributed information services, such as Yahoo!, Google, Facebook, and Twitter, provide rich, worldwide, online information services, where data objects are linked together to facilitate interactive access. Users seeking information of interest traverse from one object via links to another. Such systems provide ample opportunities and challenges for data mining. For example, understanding user access patterns will not only help improve system design (by providing efficient access between highly correlated objects) but also lead to better marketing decisions (e.g., by placing advertisements in frequently visited documents, or by providing better customer/user classification and behavior analysis). Capturing user access patterns in such distributed information environments is called web usage mining (or weblog mining) or web analytics.
    Although web pages may appear fancy and informative to human readers, they can be highly unstructured and lack a predefined schema, type, or pattern. Thus, it is difficult for computers to understand the semantic meaning of diverse web pages and structure them in an organized way for systematic information retrieval and data mining. Web services that provide keyword-based searches without understanding the context behind the web pages can only offer limited help to users. For example, a web search based on a single keyword may return hundreds of web page pointers containing the keyword, but most of the pointers will be very weakly related to what the user wants to find. Web mining or analytics can often provide more help here than web search services can. For example, authoritative web page analysis based on linkages among web pages can help rank web pages based on their importance, influence, and topics. Automated web page clustering and classification help group and arrange web pages in a multidimensional manner based on their contents. Web community analysis helps identify hidden web social networks and communities and observe their evolution. Web mining is the development of scalable and effective web data analysis and mining methods. It may help us learn about the distribution of information on the web in general, characterize and classify web pages, and uncover web dynamics and the association and other relationships among different web pages, users, communities, and web-based activities.
  • A-Z of Digital Research Methods
    • Catherine Dawson (Author)
    • 2019 (Publication Date)
    • Routledge (Publisher)
    CHAPTER 23

    Information retrieval

    Overview

    Information retrieval (IR) refers to the science of searching for information. For the purpose of this book it refers specifically to searching and recovering relevant information stored digitally in documents, files, databases, digital libraries, digital repositories and digital archives, for example. It also refers to the processes involved in searching and recovering this information such as gathering, storing, processing, indexing, querying, filtering, clustering, classifying, ranking, evaluating and distributing information. Although IR does have a long pre-digital history, it is beyond the scope of this book: if you are interested in finding out about the history of IR research and development, see Sanderson and Croft (2012). IR in the digital environment can be seen to be closely related to data mining. The difference is that IR refers to the process of organising data and building algorithms so that queries can be written to retrieve the required (and relevant) information. Data mining, on the other hand, refers to the process of discovering hidden patterns, relationships and trends within the data (see Chapter 12 ). IR can be seen as problem-orientated whereas data mining is data-orientated (or data-driven).
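As a toy illustration of the indexing and querying steps mentioned above (not a description of any particular IR system), the sketch below organises a few invented documents into an inverted index and answers a keyword query against it. Requiring every term to match gives simple AND semantics; real IR systems add ranking, stemming, and much more.

```python
from collections import defaultdict

# Toy document collection, invented for illustration.
documents = {
    "doc1": "information retrieval searches stored digital documents",
    "doc2": "data mining discovers hidden patterns in data",
    "doc3": "querying an index retrieves relevant documents quickly",
}

# Indexing step: map each term to the set of documents that contain it.
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        inverted_index[term].add(doc_id)

def keyword_query(*terms):
    """Querying step: return the documents containing all of the given terms."""
    result_sets = [inverted_index.get(t.lower(), set()) for t in terms]
    return set.intersection(*result_sets) if result_sets else set()

print(keyword_query("documents"))               # {'doc1', 'doc3'}
print(keyword_query("retrieval", "documents"))  # {'doc1'}
```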
    There are various subsets within the field of information retrieval in the digital environment. These include (in alphabetical order):
    • Blog retrieval: retrieving and ranking blogs according to relevance to the query topic (see Zahedi et al., 2017 for information about improving relevant blog retrieval methods).
    • Cross-lingual information retrieval (or cross-language information retrieval): searching document collections in multiple languages and retrieving information in a language other than that expressed in the query.
    • Document retrieval: locating meaningful and relevant documents in large collections. This can include records, official papers, legal papers, certificates, deeds, charters, contracts, legal agreements and collections of articles or papers. It can include paper documents that have been scanned or those that have been created digitally.