Robert J. Glushko
We consider a family to be a collection of people affiliated by some connections, such as common ancestors or a common residence. The Simpson family includes a man named Homer and a woman named Marge, the married parents of three sibling children, a boy named Bart and two girls, Lisa and Maggie. This magical family speaks many languages, but most often uses the language of the local television station. In the English-speaking Simpson family, the boy describes his parents as his father and mother and his two siblings as his sisters. In the Spanish-speaking Simpson family he refers to his parents as su padre y su madre and his sisters are las hermanas. In the Chinese Simpson family the sisters refer to each other according to their relative ages: Lisa, the elder, as jiě jie, and Maggie, the younger, as mèi mei.
Kinship relationships are ubiquitous and widely studied, and the names and significance of kinship relations like “is parent of” or “is sibling of” are familiar ones, making kinship a good starting point for understanding relationships in organizing systems. An organizing system can make use of existing relationships among resources, or it can create relationships by applying organizing principles to arrange the resources. Organizing systems for digital resources or digital description resources are the most likely to rely on explicit relationships to enable interactions with the resources.
In a classic book called Data and Reality, William Kent defines a relationship as an association among several things, with that association having a particular significance. “The things being associated,” the components of the relationship, are people in kinship relationships but more generally can be any type of resource (Resources in Organizing Systems), when we relate one resource instance to another. When we describe a resource (Resource Description and Metadata), the components of the relationship are a primary resource and a description resource. If we specify sets of relationships that go together, we are using these common relationships to define resource types or classes, which more generally are called categories (Categorization: Describing Resource Classes and Types). We can then use resource types as one or both of the components of a relationship when we want to further describe the resource type or to assert how two resource types go together to facilitate our interactions with them.
We begin with a more complete definition of relationship and introduce five perspectives for analyzing them: semantic, lexical, structural, architectural, and implementation. We then discuss each perspective, introducing the issues that each emphasizes, and the specialized vocabulary needed to describe and analyze relationships from that point of view. We apply these perspectives and vocabulary to analyze the most important types of relationships in organizing systems.
The concept of a relationship is pervasive in human societies in both informal and formal senses. Humans are inescapably related to generations of ancestors, and in most cases they also have social networks of friends, co-workers, and casual acquaintances to whom they are related in various ways. We often hear that our access to information, money, jobs, and political power is all about “who you know,” so we strive to “network” with other people to build relationships that might help us expand our access. In information systems, relationships between resources embody the organization that enables finding, selection, retrieval, and other interactions.
Most organizing systems are based on many relationships to enable the system to satisfy some intentional purposes with individual resources or the collection as a whole. In the domain of information resources, common resources include web pages, journal articles, books, datasets, metadata records, and XML documents, among many others. Important relationships in the information domain that facilitate purposes like finding, identifying, and selecting resources include “is the author of,” “is published by,” “has publication date,” “is derived from,” “has subject keyword,” “is related to,” and many others.
When we talk about relationships we specify both the resources that are associated along with a name or statement about the reason for the association. Just identifying the resources involved is not enough because several different relationships can exist among the same resources; the same person can be your brother, your employer, and your landlord. Furthermore, for many relationships the directionality or ordering of the participants in a relationship statement matters; the person who is your employer gives a paycheck to you, not vice versa. Kent points out that when we describe a relationship we sometimes use whole phrases, such as “is-employed-by,” if our language does not contain a single word that expresses the meaning of the relationship.
The implementation perspective considers how the relationship is implemented in a particular notation and syntax and the manner in which relationships are arranged and stored in some technology environment.
To describe relationships among resources, we need to understand what the relations mean. This semantic perspective is the essence of relationships and explains why the resources are related, relying on information that is not directly available from perceiving the resources. In our Simpson family example, we noted that Homer and Marge are related by marriage, and also by their relationship as parents of Bart, Lisa, and Maggie, and none of these relationships are directly perceivable. This means that “Homer is married to Marge” is a semantic assertion, but “Homer is standing next to Marge” is not.
Semantic relationships are commonly expressed with a predicate with one or more arguments. A predicate is a verb phrase template for specifying properties of objects or a relationship among objects. In many relationships the predicate is an action or association that involves multiple participants that must be of particular types, and the arguments define the different roles of the participants.
The sequence, type, and role of the arguments are an essential part of the relationship expression. The sequence and role are explicitly distinguished when predicates that take two arguments are expressed using a subject-predicate-object syntax that is often called a triple because of its three parts:

Homer → is-married-to → Marge
However, we have not yet specified what the “is-married-to” relationship means. People can demonstrate their understanding of “is-married-to” by realizing that alternative and semantically equivalent expressions of the relationship between Homer and Marge might be:

Marge → is-married-to → Homer

Homer → is-the-husband-of → Marge

Marge → is-the-wife-of → Homer
Going one step further, we could say that people understand the equivalence of these different expressions of the relationship because they have semantic and linguistic knowledge that relates some representation of “married,” “husband,” “wife,” and other words. None of that knowledge is visible in the expressions of the relationships so far, all of which specify concrete relationships about individuals and not abstract relationships between resource classes or concepts. We have simply pushed the problem of what it means to understand the expressions into the mind of the person doing the understanding.
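The concrete relationship statements above can be sketched in code. The following is a minimal sketch, assuming only standard Python: each statement is a (subject, predicate, object) tuple, and a helper filters the set by subject and predicate. The predicate names are illustrative, not drawn from any standard vocabulary.

```python
# Relationship statements as subject-predicate-object triples.
triples = {
    ("Homer", "is-married-to", "Marge"),
    ("Homer", "is-parent-of", "Bart"),
    ("Marge", "is-parent-of", "Bart"),
}

def objects_of(subject, predicate):
    """Return every object related to `subject` by `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}
```

The tuple ordering preserves the directionality the text emphasizes: ("Homer", "is-parent-of", "Bart") and ("Bart", "is-parent-of", "Homer") are different statements.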
We can be more rigorous and define the words used in these expressions so they are “in the world” rather than just “in the mind” of the person understanding them. We can write definitions about these resource classes:

Marriage: a consensual and contractual relationship between two people that is recognized by law, traditionally intended to last a lifetime.

Husband: a married man, considered in relation to his wife.

Wife: a married woman, considered in relation to her husband.
Definitions like these help a person learn and make some sense of the relationship expressions involving Homer and Marge. However, these definitions are not in a form that would enable someone to completely understand the Homer and Marge expressions; they rely on other undefined terms (consensual, law, lifetime, etc.), and they do not state the relationships among the concepts in the definitions. Furthermore, for a computer to understand the expressions, it needs a computer-processable representation of the relationships among words and meanings that makes every important semantic assumption and property precise and explicit. We will see what this takes starting in the next section.
In this discussion we will use entity type, class, concept, and resource type as synonyms. Entity type and class are conventional terms in data modeling and database design, concept is the conventional term in computational or cognitive modeling, and we use resource type when we discuss organizing systems. Similarly, we will use entity occurrence, instance, and resource instance when we refer to one thing rather than to a class or type of them.
All of these are fundamental in organizing systems, both for describing and arranging resources themselves, and for describing the relationships among resources and resource descriptions.
Class inclusion is the fundamental and familiar “is-a,” “is-a-type-of,” or “subset” relationship between two entity types or classes where one is contained in and thus more specific than the other more generic one.
A set of interconnected class inclusion relationships creates a hierarchy, which is often called a taxonomy.
Each level in a taxonomy subdivides the class above it into sub-classes, and each sub-class is further subdivided until the differences that remain among the members of each class no longer matter for the interactions the organizing system needs to support. We discuss the design of hierarchical organizing systems in “Principles for Creating Categories.”
All of the examples in the current section have expressed abstract relationships between classes, in contrast to the earlier concrete ones about Homer and Marge, which expressed relationships between specific people. Homer and Marge are instances of classes like “married people,” “husbands,” and “wives.” When we make an assertion that a particular instance is a member of a class, we are classifying the instance. Classification is a class inclusion relationship between an instance and a class, rather than between two classes. (We discuss classification in detail in Classification: Assigning Resources to Categories.)
This is just the lowest level of the class hierarchy in which Homer is located; he is also a man, a human being, and a living organism (in cartoon land, at least). You might now remember the bibliographic class inclusion hierarchy we discussed earlier; a specific physical item like your dog-eared copy of Macbeth is also a particular manifestation in some format or genre, and this expression is one of many for the abstract work.
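One way to see how classification and class inclusion work together is a small sketch in Python, using a hypothetical hierarchy running from “husband” up to “living organism” as in the Homer example; the class names and structure are assumptions for illustration.

```python
# Hypothetical class-inclusion ("is-a-type-of") hierarchy: each class
# maps to its more generic parent class; None marks the top.
parent_class = {
    "husband": "man",
    "man": "human being",
    "human being": "living organism",
    "living organism": None,
}

# Classification relates an *instance* to a class, not a class to a class.
instance_of = {"Homer": "husband"}

def all_classes(instance):
    """Every class the instance belongs to, from most to least specific."""
    classes = []
    c = instance_of[instance]
    while c is not None:
        classes.append(c)
        c = parent_class[c]
    return classes
```

Because class inclusion is transitive, classifying Homer at the lowest level is enough to place him in every more generic class above it.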
Part-whole inclusion or meronymic inclusion is a second type of inclusion relationship. It is usually expressed using “is-part-of,” “is-partly,” or with other similar predicate expressions. Winston, Chaffin, and Herrmann identified six distinct types of part-whole relationships. Their meaning subtly differs depending on whether the part is separately identifiable and whether the part is essential to the whole.
Component-Object is the relationship type when the part is a separate component that is arranged or assembled with other components to create a larger resource. In “Resources with Parts,” we used as an example the component-object relationship between an engine and a car:

Engine → is-part-of → Car
The components of this type of part-whole relationship need not be physical objects; “Germany is part of the European Union” expresses a component-object relationship. What matters is that the component is identifiable on its own as an integral entity and that the components follow some kind of patterned organization or structure when they form the whole. Together the parts form a composition, and the parts collectively form the whole. A car that lacks the engine part will not work.
Member-Collection is the part-whole relationship type where “is-part-of” means “belongs-to,” a weaker kind of association than component-object because there is no assumption that the component has a specific role or function in the whole.
The members of the collection exist independently of the whole; if the whole ceases to exist the individual resources still exist.
Portion-Mass is the relationship type when all the parts are similar to each other and to the whole, unlike either of the previous types, where engines are not like tires or cars, and books are not like record albums or libraries.
Stuff-Object relationships are most often expressed using “is-partly” or “is-made-of” and are distinguishable from component-object ones because the stuff cannot be separated from the object without altering its identity. The stuff is not a separate ingredient that is used to make the object; it is a constituent of it once it is made.
Feature-Activity is a relationship type in which the components are stages, phases, or sub activities that take place over time. This relationship is similar to component-object in that the components in the whole are arranged according to a structure or pattern.
Locative and Temporal Inclusion is a third type of inclusion relationship between a container, area, or temporal duration and what it surrounds or contains. It is most often expressed using “is-in” as the relationship. However, the entity that is contained or surrounded is not a part of the including one, so this is not a part-whole relationship.
In contrast to inclusion expressions that state relationships between resources, attribution relationships assert or assign values to properties for a particular resource. In Resource Description and Metadata we used “attribute” to mean “an indivisible part of a resource description” and treated it as a synonym of “property.” We now need to be more precise and carefully distinguish between the type of the attribute and the value that it has. For example, the color of any object is an attribute of the object, and the value of that attribute might be “green.”
Some frameworks for semantic modeling define “attribute” very narrowly, restricting it to expressions with predicates with only one argument to assert properties of a single resource, distinguishing them from relationships between resources or resource types that require two arguments:

Martin → is-small
However, it is always possible to express statements like these in ways that make them into relationships with two arguments:

Martin → has-size → small
Dedre Gentner notes that this supposed distinction between one-predicate attributes and two-predicate relationships depends on context. For example, small can be viewed as an attribute, X → is-small, or as a relationship between X and some standard or reference Y, X → is-smaller-than → Y.
Martin → has-height → 38 inches

Martin → has-height → 965 mm

These two statements express the idea that Martin is small. However, many implementations of attribution relationships treat the attribute values literally. This means that unless we can process these two statements using another relationship that expresses the conversion of inches to mm, the two statements could be interpreted as saying different things about Martin’s size.
Finally, we note that we can express attribution relationships about other relationships, like the date a relationship was established. Homer and Marge Simpson’s wedding anniversary is an attribute of their “is-married-to” relationship.
The semantic distinctions between attributes and other types of relationships are not strong ones, but they can be made clearer by implementation choices. For example, XML attributes are tightly coupled to a containing element, and their literal values are limited to atomic items of information. In contrast, inclusion relationships are expressed by literal containment of one XML element by another.
Consider “Homer has a car” and “the car has an engine.” In the second of these relationships “has” is an elliptical form of “has as a part,” expressing a part-whole relationship rather than one of possession.
The concept of possession is especially important in institutional organizing systems, where questions of ownership, control, responsibility and transfers of ownership, control, and responsibility can be fundamental parts of the interactions they support. However, possession is a complex notion, inherently connected to societal norms and conventions about property and kinship, making it messier than institutional processes might like.
Possession relationships also imply duration or persistence, and are often difficult to distinguish from relationships based on habitual location or practice. Miller and Johnson-Laird illustrate the complex nature of possession relationships with this sentence, which expresses three different types of them:
Semantic relationships can have numerous special properties that help explain what they mean and especially how they relate to each other. In the following sections we briefly explain those that are most important in systems for organizing resources and resource descriptions.
In most relationships the order in which the subject and object arguments are expressed is central to the meaning of the relationship. If X has a relationship with Y, it is usually not the case that Y has the same relationship with X. For example, because “is-parent-of” is an asymmetric relationship, only the first of these relationships holds:

Homer → is-parent-of → Bart

Bart → is-parent-of → Homer
In contrast, some relationships are symmetric or bi-directional, and reversing the order of the arguments of the relationship predicate does not change the meaning. As we noted earlier, these two statements are semantically equivalent because “is-married-to” is symmetric:

Homer → is-married-to → Marge

Marge → is-married-to → Homer
We can represent the symmetric and bi-directional nature of these relationships by using a double-headed arrow:

Homer ↔ is-married-to ↔ Marge
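A symmetric relationship can be implemented so that argument order cannot matter at all. A minimal sketch, assuming only standard Python: store each marriage as an unordered pair (a frozenset), so the two equivalent statements map to the same stored fact.

```python
# Store a symmetric relationship as unordered pairs, so that
# "Homer is-married-to Marge" and "Marge is-married-to Homer"
# are the same stored fact.
married = {frozenset({"Homer", "Marge"})}

def is_married_to(a, b):
    # frozenset ignores ordering, so both argument orders succeed.
    return frozenset({a, b}) in married
```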
Transitivity is another property that can apply to semantic relationships. When a relationship is transitive, if X and Y have a relationship, and Y and Z have the same relationship, then X also has the relationship with Z. Any relationship based on ordering is transitive, which includes numerical, alphabetic, and chronological ones as well as those that imply qualitative or quantitative measurement. Because “is-taller-than” is transitive:

Homer → is-taller-than → Bart

Bart → is-taller-than → Lisa

Homer → is-taller-than → Lisa
Inclusion relationships are inherently transitive, because just as “is-taller-than” is an assertion about relative physical size, “is-a-type of” and “is-part-of” are assertions about the relative sizes of abstract classes or categories. An example of transitivity in part-whole or meronymic relationships is: (1) the carburetor is part of the engine, (2) the engine is part of the car, (3) therefore, the carburetor is part of the car. 
Transitive relationships enable inferences about class membership or properties, and allow organizing systems to be more efficient in how they represent them since transitivity enables implicit relationships to be made explicit only when they are needed.
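This efficiency gain can be made concrete with a sketch of transitive inference over “is-part-of,” using the carburetor, engine, and car example from above; only the two direct assertions are stored, and the third is derived on demand.

```python
# Only direct part-whole assertions are stored; the carburetor-to-car
# relationship is left implicit and derived only when needed.
part_of = {
    ("carburetor", "engine"),
    ("engine", "car"),
}

def is_part_of(part, whole):
    """Follow is-part-of assertions transitively (assumes no cycles)."""
    if (part, whole) in part_of:
        return True
    # Try every whole that `part` is directly part of, and recurse upward.
    return any(is_part_of(mid, whole) for (p, mid) in part_of if p == part)
```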
Any relationship that is both symmetric and transitive is an equivalence relationship; “is-equal-to” is obviously an equivalence relationship because if A=B then B=A and if A=B and B=C, then A=C. Other relationships can be equivalent without meaning “exactly equal,” as is the relationship of “is-congruent-to” for all triangles.
We often need to assert that a particular class or property has the same meaning as another class or property or that it is generally substitutable for it. We make this explicit with an equivalence relationship.
For asymmetric relationships, it is often useful to be explicit about the meaning of the relationship when the order of the arguments in the relationship is reversed. The resulting relationship is called the inverse or the converse of the first relationship. If an organizing system explicitly represents that:

Homer → is-parent-of → Bart
We can then conclude that:

Bart → is-child-of → Homer
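Declaring inverses lets an organizing system store each fact in one direction and derive the reversed statement when it is needed. A sketch under that assumption, with illustrative predicate names:

```python
# Each predicate's declared inverse; storing one direction of a fact
# is enough to derive the other.
inverse = {"is-parent-of": "is-child-of"}

facts = {("Homer", "is-parent-of", "Bart")}

def inferred_inverses(facts):
    """Derive the reversed statement for every fact with a declared inverse."""
    return {(o, inverse[p], s) for (s, p, o) in facts if p in inverse}
```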
We have now described the types and properties of semantic relationships in enough detail to return to the challenge we posed earlier: what information is required to fully understand relationships? This question has been asked and debated for decades, and we will not pretend to answer it fully here. However, we can sketch out some of the basic parts of the solution.
Let us begin by recalling that a taxonomy captures a system of class inclusion relationships in some domain. But as we have seen, there are a great many kinds of relationships that are not about class inclusion. All of these other types of relationships represent knowledge about the domain that is potentially needed to understand statements about it and to make sense when more than one domain of resources or activities comes together.
For example, in the food domain, whose partial taxonomy appeared earlier, we can assert relationships about properties of classes and instances, express equivalences about them, and otherwise enhance the representation of the food domain to create a complex network of relationships. In addition, the food domain intersects with food preparation, agriculture, commerce, and many other domains. We also need to express the relationships among these domains to fully understand any of them.
Grilling → is-a-type-of → Food Preparation
Temperature → is-a-measure-of → Grilling
Hamburger → is-equivalent-to → Ground Beef
Hamburger → is-prepared-by → Grilling
Hamburger Sandwich → is-a-type-of → Prepared Food
Rare → is-a → State of Food Preparation
Well-done → is-a → State of Food Preparation
Meat → is-preserved-by → Freezing
Thawing → is-the-inverse-of → Freezing
In this simple example we see that class inclusion relationships form a kind of backbone to which other kinds of relationships attach. We also see that there are many potentially relevant assertions that together represent the knowledge that just about everyone knows about food and related domains. A network of relationships like these creates a resource that is called an ontology. A visual depiction of the ontology illustrates this idea that it has a taxonomy as its conceptual scaffold.
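A few of the food-domain assertions above can be encoded as triples to show how an equivalence relationship extends what a query can find. This is a toy sketch in plain Python, not how an actual OWL reasoner works.

```python
# A few of the food-domain assertions, encoded as triples.
ontology = {
    ("Grilling", "is-a-type-of", "Food Preparation"),
    ("Hamburger", "is-equivalent-to", "Ground Beef"),
    ("Hamburger", "is-prepared-by", "Grilling"),
    ("Meat", "is-preserved-by", "Freezing"),
}

def objects(subject, predicate, kb):
    """Objects related to `subject`, also honoring declared equivalences."""
    found = {o for (s, p, o) in kb if s == subject and p == predicate}
    # Equivalence is symmetric: statements about either equivalent term
    # apply to both, so also query the other member of the pair.
    for (s, p, o) in kb:
        if p == "is-equivalent-to" and subject in (s, o):
            other = o if subject == s else s
            found |= {x for (s2, p2, x) in kb if s2 == other and p2 == predicate}
    return found
```

The query `objects("Ground Beef", "is-prepared-by", ontology)` succeeds only because of the equivalence assertion; nothing about grilling was stated directly about Ground Beef.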
There are numerous formats for expressing ontologies, but many of them have converged on or are based on the Web Ontology Language (OWL), developed by the W3C. OWL ontologies use a formal logic-based language that builds on RDF to define resource classes and assign properties to them in rigorous ways, arrange them in a class hierarchy, establish their equivalence, and specify the properties of relationships.
Ontologies are essential parts of some organizing systems, especially information-intensive ones where the scope and scale of the resources require an extensive and controlled description vocabulary. The most extensive ontology ever created is Cyc, born in 1984 as an artificial intelligence research project. Three decades later, the latest version of the Cyc ontology contains several hundred thousand terms and millions of assertions that interrelate them.
The semantic perspective for analyzing relationships is the fundamental one, but it is intrinsically tied to the lexical one because a relationship is always expressed using words in a specific language. For example, we understand the relationships among the concepts or classes of “food,” “meat,” and “beef” by using the words “food,” “meat,” and “beef” to identify progressively smaller classes of edible things in a class hierarchy.
The connection between concepts and words is not so simple. In the Simpson family example with which we began this chapter, we noted with “father” and “padre” that languages differ in the words they use to describe particular kinship relationships. Furthermore, we pointed out that cultures differ in which kinship relationships are conceptually distinct, so that languages like Chinese make distinctions about the relative ages of siblings that are not made in English.
This is not to suggest that an English speaker cannot notice the difference between his older and younger sisters, only that this distinction is not lexicalized—captured in a single word—as it is in Chinese. This “missing word” in English from the perspective of Chinese is called a lexical gap. Determining exactly when a lexical gap exists is sometimes tricky, because it depends on how we define “word”—polar bear and sea horse are not lexicalized as single words, but each is a single meaning-bearing unit because we do not decompose and reassemble meaning from the two separate words. These lexical gaps differ from language to language, whereas “conceptual gaps”—the things we cannot think of or directly experience, like the pull of gravity—may be innate and universal. We revisit this issue as “linguistic relativity” in Categorization: Describing Resource Classes and Types.
Earlier in this book we discussed the naming of resources and the design of a vocabulary for resource description, and we explained how increasing the scope and scale of an organizing system makes it essential to be more systematic and precise in assigning names and descriptions. We need to be sure that the terms we use to organize resources capture the similarities and differences between them well enough to support our interactions with them. After our discussion about semantic relationships in this chapter, we now have a clearer sense of what is required to bring like things together, keep different things separate, and to satisfy any other goals for the organizing system.
For example, if we are organizing cars, buses, bicycles, and sleds, all of which are vehicles, there is an important distinction between vehicles that are motorized and those that are powered by human effort. It might also be useful to distinguish vehicles with wheels from those that lack them. Not making these distinctions leaves an unbalanced or uneven organizing system for describing the semantics of the vehicle domain. However, only the “motorized” concept is lexicalized in English, which is why we needed to invent the “wheeled vehicle” term in the second case.
Simply put, we need to use words effectively in organizing systems. To do that, we need to be careful about how we talk about the relationships among words and how words relate to concepts. There are two different contexts for those relationships.
First, we discuss the relationships among the meanings of words and the most commonly used tool for describing them, the thesaurus; we then turn to the relationships among word forms.
When words encode the semantic distinctions expressed by class inclusion, the word for the more specific class in this relationship is called the hyponym, while the word for the more general class to which it belongs is called the hypernym. George Miller suggested an exemplary formula for defining a hyponym as its hypernym preceded by adjectives or followed by relative clauses that distinguish it from its co-hyponyms, mutually exclusive subtypes of the same hypernym.
For example, robin is a hyponym of bird, and could be defined as “a migratory bird that has a clear melodious song and a reddish breast with gray or black upper plumage.” This definition does not describe every property of robins, but it is sufficient to differentiate robins from bluebirds or eagles.
Part-whole or meronymic semantic relationships have lexical analogues in metonymy, when an entity is described by something that is contained in or otherwise part of it. A country’s capital city or a building where its top leaders reside is often used as a metonym for the entire government: “The White House announced today…” Similarly, important concentrations of business activity are often metonyms for their entire industries: “Wall Street was bailed out again…”
Synonymy is the relationship between words that express the same semantic concept.
The strictest definition is that synonyms “are words that can replace each other in some class of contexts with insignificant changes of the whole text’s meaning.” This is an extremely hard test to pass, except for acronyms or compound terms like “USA,” “United States,” and “United States of America” that are completely substitutable.
Most synonyms are not absolute synonyms, and instead are considered propositional synonyms. Propositional synonyms are not identical in meaning, but they are equivalent enough that substituting one for the other will not change the truth value of the sentence. This weaker test lets us treat words as synonyms even though their meanings subtly differ. For example, if Lisa Simpson can play the violin, then because “violin” and “fiddle” are propositional synonyms, no one would disagree with an assertion that Lisa Simpson can play the fiddle.
An unordered set of synonyms is often called a synset, a term first used by the WordNet “semantic dictionary” project started in 1985 by George Miller at Princeton. Instead of using spelling as the primary organizing principle for words, WordNet uses their semantic properties and relationships to create a network that captures the idea that words and concepts are an inseparable system. Synsets are interconnected by both semantic relationships and lexical ones, enabling navigation in either space.
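The synset idea can be sketched with plain Python sets. The particular synsets below are illustrative stand-ins, not actual WordNet data.

```python
# WordNet-style synsets: each sense is a set of interchangeable words,
# and a polysemous word appears in more than one synset.
synsets = [
    {"violin", "fiddle"},
    {"bank", "depository financial institution"},
    {"bank", "riverbank"},
]

def senses(word):
    """All synsets containing the word; more than one indicates polysemy."""
    return [s for s in synsets if word in s]
```

Because “bank” appears in two synsets, looking it up returns both of its senses, which is exactly the representation of polysemy discussed below.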
We introduced the lexical relationship of polysemy, when a word has several different meanings or senses, in the context of problems with names. For example, the word “bank” can refer to a river bank, a money bank, a bank shot in basketball or billiards, an aircraft maneuver, and other concepts.
Polysemy is represented in WordNet by including a word in multiple synsets. This enables WordNet to be an extremely useful resource for sense disambiguation in natural language processing research and applications. When a polysemous word is encountered, it and the words that are nearby in the text are looked up in WordNet. By following the lexical relationships in the synset hierarchy, a “synset distance” can be calculated. The smallest semantic distance between the words, which identifies their most semantically specific hypernym, can be used to identify the correct sense. For example, in the sentence:
Put the money in the bank
Two of the three WordNet senses for “money” are:
1) the most common medium of exchange
2) the official currency issued by a government or national bank
and the first two of the ten WordNet senses for “bank” are:
1) a financial institution that accepts deposits
2) sloping land, especially the slope beside a body of water
The synset hierarchies for the two senses of “money” intersect after a very short path with the hierarchy for the first sense of “bank,” but do not intersect with the second sense of “bank” until they reach very abstract concepts.
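The disambiguation idea can be sketched with toy hypernym chains. The chains below are hypothetical, invented so that the two “money” senses meet the financial sense of “bank” quickly; they are not actual WordNet paths.

```python
# Hypothetical hypernym chains (NOT actual WordNet paths), ordered from
# most specific to most general, for two senses each of "money" and "bank".
hypernyms = {
    "money#1": ["medium of exchange", "monetary system", "possession", "entity"],
    "money#2": ["currency", "monetary system", "possession", "entity"],
    "bank#1": ["financial institution", "monetary system", "possession", "entity"],
    "bank#2": ["slope", "geological formation", "natural object", "entity"],
}

def distance(sense_a, sense_b):
    """Total steps up both chains to their first shared hypernym."""
    chain_a, chain_b = hypernyms[sense_a], hypernyms[sense_b]
    best = float("inf")
    for i, h in enumerate(chain_a):
        if h in chain_b:
            best = min(best, i + chain_b.index(h))
    return best

def disambiguate(word, context_sense):
    """Pick the sense of `word` whose chain lies closest to the context sense."""
    candidates = [s for s in hypernyms if s.startswith(word + "#")]
    return min(candidates, key=lambda s: distance(s, context_sense))
```

In “Put the money in the bank,” both senses of “money” intersect the chain for bank#1 after two steps but meet bank#2 only at the very abstract “entity,” so the financial sense is chosen.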
Antonymy is the lexical relationship between two words that have opposite meanings. Antonymy is a very salient lexical relationship, and for adjectives it is even more powerful than synonymy. In word association tests, when the probe word is a familiar adjective, the most common response is its antonym; a probe of “good” elicits “bad,” and vice versa. Like synonymy, antonymy is sometimes exact and sometimes more graded.
Contrasting or binary antonyms are used in mutually exclusive contexts where one or the other word can be used, but never both. For example, “alive” and “dead” can never be used at the same time to describe the state of some entity, because the meaning of one excludes or contradicts the meaning of the other.
Other antonymic relationships between word pairs are less semantically sharp because they can sometimes appear in the same context as a result of the broader semantic scope of one of the words. “Large” and “small,” or “old” and “young” generally suggest particular regions on size or age continua, but “how large is it?” or “how old is it?” can be asked about resources that are objectively small or young.
The words that people naturally use when they describe resources reflect their unique experiences and perspectives, and this means that people often use different words for the same resource and the same words for different ones. Guiding people when they select description words from a controlled vocabulary is a partial solution to this vocabulary problem that becomes increasingly essential as the scope and scale of the organizing system grows. A thesaurus is a reference work that organizes words according to their semantic and lexical relationships. Thesauri are often used by professionals when they describe resources.
Thesauri have been created for many domains and subject areas. Some thesauri are very broad and contain words from many disciplines, like the Library of Congress Subject Headings (LOC-SH) used to classify any published content. Other commonly used thesauri are more focused on a single domain or discipline.
We can return to our simple food taxonomy to illustrate how a thesaurus annotates vocabulary terms with lexical and semantic relationships. The class inclusion relationships of hypernymy and hyponymy are usually encoded using BT (“broader term”) and NT (“narrower term”):
The BT and NT relationships in a thesaurus create a hierarchical system of words, but a thesaurus is more than a lexical taxonomy for some domain because it also encodes additional lexical relationships for the most important words. Many thesauri emphasize the cluster of relationships for these key words and de-emphasize the overall lexical hierarchy.
A thesaurus might employ USE as the inverse of the UF (“used for”) relationship to refer from a less preferred or variant term to a preferred one:
Thesauri also use RT (“related term” or “see also”) to indicate terms that are not synonyms but which often occur in similar contexts:
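These thesaurus conventions are easy to model in code. The sketch below, with illustrative food-domain terms, stores each relationship together with its inverse, so that adding BT automatically records NT, adding UF records USE, and RT is kept symmetric:

```python
from collections import defaultdict

class Thesaurus:
    """Tiny thesaurus recording BT/NT, USE/UF, and RT relationships."""
    def __init__(self):
        # term -> relationship label -> set of related terms
        self.rel = defaultdict(lambda: defaultdict(set))

    def add_bt(self, term, broader):
        """Record BT (broader term) and its inverse NT (narrower term)."""
        self.rel[term]["BT"].add(broader)
        self.rel[broader]["NT"].add(term)

    def add_uf(self, preferred, variant):
        """Record UF (used for) and its inverse USE on the variant term."""
        self.rel[preferred]["UF"].add(variant)
        self.rel[variant]["USE"].add(preferred)

    def add_rt(self, a, b):
        """RT (related term) is symmetric."""
        self.rel[a]["RT"].add(b)
        self.rel[b]["RT"].add(a)

t = Thesaurus()
t.add_bt("fruit", "food")     # food is a broader term for fruit
t.add_bt("apple", "fruit")
t.add_uf("soda", "pop")       # "soda" is preferred; "pop" is a variant
t.add_rt("fruit", "dessert")

print(t.rel["fruit"]["NT"])   # {'apple'}
print(t.rel["pop"]["USE"])    # {'soda'}
```

Storing both directions of each relationship is what lets a thesaurus answer navigation questions from either end, whether a user starts from the preferred term or the variant.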
The relationships among word meanings are critically important. Whenever we create, combine, or compare resource descriptions we also need to pay attention to relationships between word forms. These relationships begin with the idea that all natural languages create words and word forms from smaller units. The basic building blocks for words are called morphemes and can express semantic concepts (when they are called root words) or abstract concepts like “pastness” or “plural.” The analysis of the ways by which languages combine morphemes is called morphology.
“uncertain” = “certain” (root) + “un” (negation)
“denied” = “deny” (root) + “ed” (past tense)
Morphological analysis of a language is heavily used in text processing to create indexes for information retrieval. For example, stemming (discussed in more detail in Interactions with Resources) is morphological processing which removes prefixes and suffixes to leave the root form of words. Similarly, simple text processing applications like hyphenation and spelling correction solve word form problems using roots and rules because this approach is more scalable and robust than solving them using word lists. Many misspellings of common words (e.g., “pain”) are words of lower frequency (e.g., “pane”), so adding “pane” to a list of misspelled words would incorrectly flag its legitimate uses. In addition, because natural languages are generative and create new words all the time, a word list can never be complete; for example, when “flickr” occurs in text, is it a misspelling of “flicker” or the correct spelling of the popular photo-sharing site?
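A much-simplified stemmer shows the roots-and-rules idea. The suffix list and the minimum-stem rule below are toy assumptions, not the rules of a production stemmer such as Porter's:

```python
# Candidate suffixes, longest first so "ness" wins over a trailing "s".
SUFFIXES = ["ations", "ation", "ness", "ment", "ing", "ity", "ed", "s"]

def stem(word):
    """Strip at most one common suffix, keeping a stem of at least 3 letters."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word  # no suffix matched; treat the word as already a root

print(stem("walking"), stem("darkness"), stem("payment"))  # walk dark pay
```

Real stemmers add spelling-adjustment rules on top of suffix stripping; this sketch, for instance, would reduce “denied” to “deni” rather than restoring the root “deny.”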
Derivational morphology deals with how words are created by combining morphemes. Compounding, putting two “free morphemes” together as in “batman” or “catwoman,” is an extremely powerful mechanism. The meaning of some compounds is easy to understand when the first morpheme qualifies or restricts the meaning of the second, as in “birdcage” and “tollbooth.” However, many compounds take on new meanings that are not as literally derived from the meaning of their constituents, like “seahorse” and “batman.”
Other types of derivations using “bound” morphemes follow more precise rules for combining them with “base” morphemes. The most common types of bound morphemes are prefixes and suffixes, which usually create a word of a different part-of-speech category when they are added. Familiar English prefixes include “a-,” “ab-,” “anti-,” “co-,” “de-,” “pre-,” and “un-.” Among the most common English suffixes are “-able,” “-ation,” “-ify,” “-ing,” “-ity,” “-ize,” “-ment,” and “-ness.” Compounding and adding prefixes or suffixes are simple mechanisms, but very complex words like “unimaginability” can be formed by using them in combination.
Inflectional mechanisms change the form of a word to represent tense, aspect, agreement, or other grammatical information. Unlike derivation, inflection never changes the part-of-speech of the base morpheme. The inflectional morphology of English is relatively simple compared with other languages.
The structural perspective analyzes the association, arrangement, proximity, or connection between resources without primary concern for their meaning or the origin of these relationships. We take a structural perspective when we define a family as “a collection of people” or when we say that a particular family like the Simpsons has five members. Sometimes all we know is that two resources are connected, as when we see a highlighted word or phrase that is pointing from the current web page to another. At other times we might know more about the reasons for the relationships within a set of resources, but we still focus on their structure, essentially merging or blurring all of the reasons for the associations into a single generic notion that the resources are connected.
Travers and Milgram conducted a now-famous study in the 1960s involving the delivery of written messages between people in the midwestern and eastern United States. If a person did not know the intended recipient, he was instructed to send the message to someone that he thought might know him. The study demonstrated what Travers and Milgram called the “small world problem,” in which any two arbitrarily selected people were separated by an average of fewer than six links.
It is now common to analyze the number of “degrees of separation” between any pair of resources. For example, Markoff and Sengupta describe a 2011 study using Facebook data that computed the average “degree of separation” of any two people in the Facebook world to be 4.74.
See http://oracleofbacon.org/ for a web-based demonstration of “Kevin Bacon Numbers,” which measure the average degrees of separation among more than 2.6 million actors in more than 1.9 million movies. Its name reflects the parlor game “Six Degrees of Kevin Bacon,” a pun on “six degrees of separation” that is often associated with Travers and Milgram’s work; the game relies on the remarkable variety of Bacon’s roles, and hence the number of fellow actors in his movies (two actors in the same movie have one degree of separation). Bacon’s average separation from all other actors is 2.994, but it turns out that more than 300 actors are closer to the center of the movie universe than Bacon. Try some famous actors and see if their Bacon Numbers are greater or smaller than Bacon’s. (Hint: older actors have been in more movies.)
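Degrees of separation are simply shortest paths in a graph, which a breadth-first search finds directly. The co-appearance graph below is invented for illustration, not real movie data:

```python
from collections import deque

# Toy co-appearance graph: an edge means two actors were in the same movie.
GRAPH = {
    "Bacon": {"A", "B"},
    "A": {"Bacon", "C"},
    "B": {"Bacon"},
    "C": {"A", "D"},
    "D": {"C"},
}

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: fewest co-appearance links between two actors."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, distance = frontier.popleft()
        if node == goal:
            return distance
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, distance + 1))
    return None  # the goal is unreachable from the start

print(degrees_of_separation(GRAPH, "Bacon", "D"))  # 3
```

An actor’s “centrality” in the sense used by the Oracle of Bacon is then just the average of these shortest-path lengths to every other actor in the graph.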
Many types of resources have internal structure in addition to their structural relationships with other resources. Of course, we have to remember (as we discussed earlier) that we often face arbitrary choices about the abstraction and granularity with which we describe the parts that make up a resource and whether some combination of resources should also be identified as a resource. This is not easy when you are analyzing the structure of a car with its thousands of parts, and it is even harder with information resources, where there are many more ways to define parts and wholes. However, an advantage for information resources is that their internal structural descriptions are usually highly “computable,” something we consider in depth in Interactions with Resources.
Management science is constantly reevaluating different structures for organizations. Many large businesses are organized similarly near the top, with a board of directors, a chief executive officer, and other executives who manage the vice presidents or directors of various business units. Within and across these business units, however, there are significant variations in how a business can organize its people.
Management strategies are built around the style of organization the business has chosen. These organizational choices reflect the CEO’s management philosophy, the industry, regulatory requirements, operating scale, and other factors. Strict hierarchies are a traditional approach, with a tree structure leading from the lowest level worker directly up to the CEO. The strict management hierarchy at Foxconn is needed to enable close oversight of large numbers of low level employees in the manufacturing industry, with workers organized by physical location.
Other firms have a matrix structure in which an employee can be working on multiple projects, and reporting to a different manager for each one. A consulting firm’s matrix structure might emphasize an employee’s functional role (e.g., “process engineering consultant”) and disassociate it from the employee’s home location, which is why consultants spend so much time traveling on airplanes from project to project.
In the discipline of organizing we emphasize “intentional structure” created by people or by computational processes rather than accidental or naturally-occurring structures created by physical and geological processes. We acknowledged in “Intentional Arrangement” that there is information in the piles of debris left after a tornado or tsunami and in the strata of the Grand Canyon. These structural patterns might be of interest to meteorologists, geologists, or others but because they were not created by an identifiable agent following one or more organizing principles, they are not our primary focus.
Find a map of the states (or provinces or other divisions) in your country. You probably think of some set of these as members of a collection. Other than their literal arrangement (e.g., “x is next to y, y is east of z”), how could you describe their relationships to each other within the collection? Are these relationships based on natural or unintentional properties or intentional ones? Example: in the United States, California, Oregon, and Washington are considered the “West Coast” and the Pacific Ocean determines their western boundaries. Some of the borders between the states are natural, determined by rivers, and other borders are more intentional and arbitrary.
Some organizing principles impose very little structure. For a small collection of resources, co-locating them or arranging them near each other might be sufficient organization. We can impose two- or three-dimensional coordinate systems on this “implicit structure” and explicitly describe the location of a resource as precisely as we want, but we more naturally describe the structure of resource locations in relative terms. In English we have many ways to describe the structural relationship of one resource to another: “in,” “on,” “under,” “behind,” “above,” “below,” “near,” “to the right of,” “to the left of,” “next to,” and so on. Sometimes several resources are arranged or appear to be arranged in a sequence or order and we can use positional descriptions of structure: a late 1990s TV show described the planet Earth as the “third rock from the Sun.”
We pay most attention to intentional structures that are explicitly represented within and between resources because they embody the design or authoring choices about how much implicit or latent structure will be made explicit. Structures that can be reliably extracted by algorithms become especially important for very large collections of resources whose scope and scale defy structural analysis by people.
We almost always think of human and other animate resources as unitary entities. Likewise, many physical resources like paintings, sculptures, and manufactured goods have a material integrity that makes us usually consider them as indivisible. For an information resource, however, it is almost always the case that it has or might have had some internal structure or sub-division of its constituent data elements.
In fact, since all computer files are merely encodings of bits, bytes, characters, and strings, all digital resources exhibit some internal structure, even if that structure is only discernible by software agents. Fortunately, the once inscrutable internal formats of word processing files are now much more interpretable after they were replaced by XML-based formats in the last decade.
When an author writes a document, he or she gives it some internal organization with its title, section headings, typographic conventions, page numbers, and other mechanisms that identify its parts and their significance or relationship to each other. The lowest level of this structural hierarchy, usually the paragraph, contains the text content of the document. Sometimes the author finds it useful to identify types of content like glossary terms or cross-references within the paragraph text. Document models that mix structural description with content “nuggets” in the text are said to contain mixed content.
Mixed content distinguishes XML from other data representation languages. It is this structural feature, combined with the fact that child nodes in the XML Infoset are ordered, that makes it possible for XML documents to function both as human reader-oriented, textual documents and as structured data formats. It allows us to use natural language in writing descriptions while still enabling us to identify content by type by embedding markup to enclose “semantic nuggets” in otherwise undifferentiated text.
The Guidelines for Electronic Text Encoding and Interchange, produced by the Text Encoding Initiative (TEI), for example, include a set of elements and attributes for Names, Dates, People, and Places.
In data-intensive or transactional domains, document instances tend to be homogeneous because they are produced by or for automated processes, and their information components will appear predictably in the same structural relationships with each other. These structures typically form a hierarchy expressed in an XML schema or word processing style template. XML documents describe their component parts using content-oriented elements like <ITEM>, <NAME>, and <ADDRESS>, that are themselves often aggregate structures or containers for more granular elements. The structures of resources maintained in databases are typically less hierarchical, but the structures are precisely captured in database schemas.
The internal parts of XML documents can be described, found and selected using the XPath language, which defines the structures and patterns used by XML forms, queries, and transformations. The key idea used by XPath is that the structure of XML documents is a tree of information items called nodes, whose locations are described in terms of the relationships between nodes. The relationships built into XPath, which it calls axes, include self, child, parent, following, and preceding, making it very easy to specify a structure-based query like “find all sections in Chapter 1 through Chapter 5 that have at least two levels of subsections.” In addition, tools like Schematron take advantage of XPath’s structural descriptions to test assertions about a document’s structure and content. For example, a common editorial constraint might be that a numbered list must have at least three items.
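This kind of structure-based query can be tried with Python's standard library, which supports a small subset of XPath (full axes like following and preceding require a complete XPath engine such as the one in lxml). Here we find sections that contain at least one nested section; the document is a toy example:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<book>
  <chapter><title>One</title>
    <section><title>1.1</title>
      <section><title>1.1.1</title></section>
    </section>
  </chapter>
  <chapter><title>Two</title>
    <section><title>2.1</title></section>
  </chapter>
</book>""")

# The predicate [section] selects only section elements that have a
# section child, i.e., sections with at least one level of subsection.
nested = doc.findall(".//section[section]")
print([s.find("title").text for s in nested])  # ['1.1']
```

The query treats the document exactly as XPath does: as a tree of nodes located by their relationships to other nodes, rather than by their textual content.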
In more qualitative, less information-intensive and more experience-intensive domains, we move toward the narrative end of the Document Type Spectrum, and document instances become more heterogeneous because they are produced by and for people. (See the sidebar.) The information conveyed in the documents is conceptual or thematic rather than transactional, and the structural relationships between document parts are much weaker. Instead of precise structure and content rules, there is usually just a shallow hierarchy marked up with word processing styles or HTML tags like <HEAD>, <H1>, <H2>, and <LIST>.
Structural metadata, in the form of a schema for a database or document, describes a class of information resources, and may also prescribe grammatical details of inclusion and attribution relationships among the components. For example, the chapters of this book contain four levels of subsections. Each of those sections contains a title, some paragraphs and other text blocks, and subordinate sections. The textual content of the paragraphs includes highlighted terms and phrases that are defined in situ and referenced again in the glossary and index; there are also bibliographic citations that are reflected in the bibliography and index. We can discover these characteristics of a book through observation, but we could also examine its structural metadata, in its schema.
Structural metadata allows us to describe and prescribe relations among database tables, within the chapters of a book, or among parts in an inventory management system. The schema for HTML, for example, informs us that the <A> element can be used to signal a hypertext link-end; whether that link-end is an anchor or a target, or both, depends on the combination of values assigned to attributes. In HTML, the optional REL attribute may contain a value that signals the purpose of a hypertext link, and any HTML element may include a CLASS attribute value that may be used as a CSS selector for the purposes of formatting or dynamic interactions.
The usefulness of any given schema is often a function of the precision with which we may make useful statements based upon the descriptions and prescriptions it offers. Institutional schemas tend to be more prescriptive and restrictive, stressing professional orthodoxy and conformance to controlled vocabularies. Schemas for the information content in social and informal applications tend to be less prescriptive. Whether and how we use structural metadata is a tradeoff. Structural metadata is essential to enable quality control and maintenance in information collection and publishing processes, but someone has to do the work to create it.
Analyze the structure of your syllabus for this course. What are its structural elements and some of the rules that specify how they are organized? Remember, think in terms of structural elements and not presentational elements. How does this structural schema compare to those of other course syllabi? What kinds of interactions would be enabled if all of your courses used the same syllabus schema?
The internal structural hierarchy in a resource is often extracted and made into a separate and familiar description resource called the “table of contents” to support finding and navigation interactions with the primary resource. In a printed media context, any given content resource is likely to only be presented once, and its page number is provided in the table of contents to allow the reader to locate the chapter, section or appendix in question. In a hypertext media context, a given resource may be a chapter in one book while being an appendix in another. Some tables of contents are created as a static structural description, but others are dynamically generated from the internal structures whenever the resource is accessed. In addition, other types of entry points can be generated from the names or descriptions of content components, like selectable lists of tables, figures, maps, or code examples.
The schema most commonly used for producing technical books is called DocBook; it describes every XML element and attribute and prescribes their grammatical forms. The schema lets us know that a formal paragraph must include a title, and that a title may contain emphasis. A schema can also describe and prescribe the lexical value space of a postal code, or require that every list must have at least three items. The DocBook schema is well-documented and has been production-tested in institutional publishing contexts for over twenty years.
Identifying the components and their structural relationships in documents is easier when they follow consistent rules for structure (e.g., every non-text component must have a title and caption) and presentation (e.g., hypertext links in web pages are underlined and change cursor shapes when they are “moused over”) that reinforce the distinctions between types of information components. Structural and presentation features are often ordered on some dimension (e.g., type size, line width, amount of white space) and used in a correlated manner to indicate the importance of a content component.
Many indexing algorithms treat documents as “bags of words” to compute statistics about the frequency and distribution of the words they contain while ignoring all semantics and structure. In Interactions with Resources, we contrast this approach with algorithms that use internal structural descriptions to retrieve more specific parts of documents.
Many types of resources have “structural relationships” that interconnect them. Web pages are almost always linked to other pages. Sometimes the links among a set of pages remain mostly within those pages, as they are in an e-commerce catalog site. More often, however, links connect to pages in other sites, creating a link network that cuts across and obscures the boundaries between sites.
The links between documents can be analyzed to infer connections between the authors of the documents. Using the pattern of links between documents to understand the structure of knowledge and of the intellectual community that creates it is not a new idea, but it has been energized as more of the information we exchange with other people is on the web or otherwise in digital formats. An important function in Google’s search engine is the page rank algorithm that calculates the relevance of a page in part using the number of links that point to it while giving greater weight to pages that are themselves linked to often.
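The core of the page rank idea fits in a few lines: repeatedly redistribute each page's current rank to the pages it links to, damped by a factor that models a reader occasionally jumping to a random page. This is a minimal sketch of the published algorithm, not Google's implementation, and the link data is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a dict mapping page -> list of outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank from the "random jump".
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                # A page shares its damped rank equally among its link targets.
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
            else:
                # A dangling page spreads its rank evenly over all pages.
                for p in pages:
                    new[p] += damping * rank[page] / len(pages)
        rank = new
    return rank

LINKS = {"a": ["b"], "b": ["c"], "c": ["a", "b"], "d": ["b"]}
ranks = pagerank(LINKS)
print(max(ranks, key=ranks.get))  # b
```

Page “b” comes out on top because it has the most incoming links, including one from the well-linked page “c”; this captures the weighting described above, where links from frequently linked pages count for more.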
The concept of read-only or follow-only structures that connect one document to another is usually attributed to Vannevar Bush in his seminal 1945 essay “As We May Think.” Bush called it associative indexing, defined as “a provision whereby any item may be caused at will to select immediately and automatically another.” The “item” connected in this way was for Bush most often a book or a scientific article. However, the anchor and destination of a hypertext link can be a resource of any granularity, ranging from a single point or character to a paragraph, a document, or any part of the resource to which the ends of the link are connected. The anchor and destination of a web link are its structural specification, but we often need to consider links from other perspectives. (See the sidebar.)
The inclusion, by hypertext reference, of a resource or part of a resource into another resource is called transclusion. Transclusion is normally performed automatically, without user intervention. The inclusion of images in web documents is an example of transclusion. Transclusion is a frequently used technique in business and legal document processing, where re-use of consistent and up-to-date content is essential to achieve efficiency and consistency.
Theodor Holm Nelson, in a book intriguingly titled Literary Machines, renamed associative indexing as hypertext decades later, expanding the idea to make it a writing style as well as a reading style. Nelson urged writers to use hypertext to create non-sequential narratives that gave choices to readers, using a novel technique for which he coined the term transclusion.
At about the same time, and without knowing about Nelson’s work, Douglas Engelbart’s Augmenting Human Intellect described a future world in which professionals equipped with interactive computer displays utilize an information space consisting of cross-linked resources.
In the 1960s, when computers lacked graphic displays and were primarily employed to solve complex mathematical and scientific problems that might take minutes, hours, or even days to complete, Nelson’s and Engelbart’s visions of hypertext-based personal computing may have seemed far-fetched. In spite of this, by 1968, Engelbart and his team had demonstrated a human-computer interface including the mouse, hypertext, and interactive media, along with a set of guiding principles.
A lexical perspective on hypertext links concerns the words that are used to signal the presence of a link or to encode its type. In web contexts, the words in which a structural link is embedded are called the anchor text. More generally, rhetorical structure theory analyzes how different conventions or signals in texts indicate relationships between texts or parts of them, like the subtle differences in polarity among “see,” “see also,” and “but see” as citation signals.
Many hypertext links in web pages are purely structural because they lack explicit representation of the reason for the relationship. When it is evident, this semantic property of the link is called the link type.
An architectural perspective on links considers whether links are one-way or bi-directional. When a bi-directional link is created between an anchor and a destination, it is as though a one-way link that can be followed in the opposite direction is automatically created. Two one-way links serve the same purpose, but the return link is not automatically established when the first one is created. A second architectural consideration is whether to employ binary links, connecting one anchor to one destination, or n-ary links, connecting one anchor to multiple types of destinations.
A “front end” or “surface” implementation perspective on hypertext links concerns how the presence of the link is indicated in a user interface; this is called the “link marker”; underlining or coloring of clickable text are conventional markers for web links. A “back end” implementation issue is whether links are contained or embedded in the resources they link or whether they are stored separately in a link base.
Hypertext links are now familiar structural mechanisms in information applications because of the World Wide Web, proposed in 1989 by Tim Berners-Lee and Robert Cailliau. They invented the methods for encoding and following hypertext links using the now popular HyperText Markup Language (HTML). The resources connected by HTML’s hypertext links are not limited to text or documents. Selecting a hypertext link can invoke a connected resource that might be a picture, video, or interactive application.
By 1993, personal computers with graphic displays, speakers, and mouse pointers had become ubiquitous. NCSA Mosaic is widely credited with popularizing the World Wide Web and HTML in 1993 by introducing inline graphics, audio, and video media, rather than requiring users to open media segments in a separate window. The ability to transclude images and other media transformed the World Wide Web from a text-only viewer with links to a “networked landscape” with hypertext signposts to guide the way. On 12 November 1993, the first full release of NCSA Mosaic on the world’s three most popular operating systems (X Windows, Microsoft Windows, and Apple Macintosh) enabled the general public to access the network with a graphical browser.
We can portray a set of links between resources graphically as a pattern of boxes and links. Because a link connection from one resource to another need not imply a link in the opposite direction, we distinguish one-way links from explicitly bi-directional ones.
A graphical representation of link structure is shown in the left panel of the figure. For a small network of links, a diagram like this one makes it easy to see that some resources have more incoming or outgoing links than other resources. However, for most purposes we leave the analysis of link structures to computer programs, and there it is much better to represent the link structures more abstractly in matrix form. In this matrix the resource identifiers on the row and column heads represent the source and destination of the link. This is a full matrix because not all of the links are symmetric; a link from resource 1 to resource 2 does not imply one from 2 to 1.
A matrix representation of the same link structure is shown in the right panel of the figure. This representation models the network as a directed graph in which the resources are the vertices and the relationships are the edges that connect them. We now can apply graph algorithms to determine many useful properties. A very important property is reachability, the “can you get there from here” property. Other useful properties include the average number of incoming or outgoing links, the average distance between any two resources, and the shortest path between them.
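Reachability can be computed directly from the matrix form. Warshall's algorithm below derives the transitive closure of a small, deliberately asymmetric link matrix invented for illustration:

```python
def reachability(adjacency):
    """Warshall's algorithm: transitive closure of a directed-link matrix."""
    n = len(adjacency)
    reach = [row[:] for row in adjacency]  # copy so the input stays intact
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i can reach j if it can reach k and k can reach j.
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = 1
    return reach

# A 1 in row i, column j means resource i links to resource j.
# The matrix is full (not symmetric): 0 links to 1, but not vice versa.
links = [
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
]
print(reachability(links)[0][2])  # 1: resource 0 reaches resource 2 via resource 1
```

The same closure matrix answers every “can you get there from here” question at once; shortest paths and average distances require the breadth-first search shown earlier in the chapter instead.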
Information scientists began studying the structure of scientific citation, now called bibliometrics, nearly a century ago to identify influential scientists and publications. This analysis of the flow of ideas through publications can identify “invisible colleges” of scientists who rely on each other’s research, and recognize the emergence of new scientific disciplines or research areas. Universities use bibliometrics to evaluate professors for promotion and tenure, and libraries use it to select resources for their collections.
The expression of citation relationships between documents is especially nuanced in legal contexts, where the use of legal cases as precedents makes it essential to distinguish precisely where a new ruling lies on the relational continuum between “Following” and “Overruling” with respect to a case it cites. The analysis of legal citations to determine whether a cited case is still good law is called Shepardizing because lists of cases annotated in this way were first published in the late 1800s by Frank Shepard, a salesman for a legal publishing company.
The links pointing to a web page might be thought of as citations to it, so it is tempting to make the analogy to consider Shepardizing the web. But unlike legal rulings, web pages aren’t always persistent, and only courts have the authority to determine the value of cited cases as precedents, so Shepard-like metrics for web pages would be tricky to calculate and unreliable.
Nevertheless, the web’s importance as a publishing and communication medium is undeniable, and many scholars, especially younger ones, now contribute to their fields by blogging, Tweeting, leaving comments on online publications, writing Wikipedia articles, giving MOOC lectures, and uploading papers, code, and datasets to open access repositories. Because the traditional bibliometrics pay no attention to this body of work, alternative metrics or “altmetrics” have been proposed to count these new venues for scholarly influence.
Facebook’s valuation is based on its ability to exploit the structure of a person’s social network to personalize advertisements for people and “friends” to whom they are connected. Many computer science researchers are working to determine the important characteristics of people and relationships that best identify the people whose activities or messages influence others to spend money.
The architectural perspective emphasizes the number and abstraction level of the components of a relationship, which together characterize the complexity of the relationship. We will briefly consider three architectural issues: degree (or arity), cardinality, and directionality.
These architectural concepts come from data modeling and they enable relationships to be described precisely and abstractly, which is essential for maintaining an organizing system that implements relationships among resources. Application and technology lifecycles have never been shorter, and vast amounts of new data are being created by increased tracking of online interactions and by all the active resources that are now part of the Internet of Things. Organizing systems built without clear architectural foundations cannot easily scale up in size and scope to handle these new requirements.
The degree or arity of a relationship is the number of entity types or categories of resources in the relationship. This is usually, though not always, the same as the number of arguments in the relationship expression.
The expression “is-married-to(Homer, Marge)” is a relationship of degree 2, a binary relationship between two entity types, because the “is-married-to” relationship as we first defined it requires one of the arguments to be of entity type “husband” and the other to be of type “wife.”
Now suppose we change the definition of marriage to allow the two participants in a marriage to be any instance of the entity type “person.” The relationship expression looks exactly the same, but its degree is now 1, a unary relationship, because only one entity type is needed to instantiate the two arguments.
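The distinction between a relationship’s degree and its number of arguments can be sketched in a few lines of Python (the degree helper is our own illustration, not part of any modeling standard):

```python
# Degree (arity) counts distinct entity types, which can differ from
# the number of arguments in the relationship expression.

def degree(argument_types):
    """Return the degree of a relationship, given the entity type
    of each of its arguments."""
    return len(set(argument_types))

# "is-married-to" defined over the types "husband" and "wife":
# two arguments, two entity types -> binary (degree 2).
print(degree(["husband", "wife"]))   # 2

# Redefined so both participants are simply of type "person":
# still two arguments, but only one entity type -> unary (degree 1).
print(degree(["person", "person"]))  # 1
```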
It is always possible to represent ternary relationships as a set of binary ones by creating a new entity type that relates to each of the others in turn. This new entity type is called a dummy in modeling practice.
This transformation from a sensible ternary relationship to three binary ones involving a DUMMY entity type undoubtedly seems strange, but it enables all relationships to be binary while still preserving the meaning of the original ternary one. Making all relationships binary makes it easier to store relationships and combine them to discover new ones.
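A sketch of this transformation in Python; the ternary “gives” relationship, the name of the linking entity, and the three binary relationship names are all invented for illustration:

```python
# Turning one ternary relationship into three binary ones by introducing
# a new ("dummy") linking entity that participates in each of them.

def ternary_to_binary(rel_name, giver, gift, recipient):
    """Replace rel_name(giver, gift, recipient) with three binary
    relationships that all share a newly created linking entity."""
    dummy = f"{rel_name}-event-1"  # instance of the new dummy entity type
    return [
        (dummy, "involves-giver", giver),
        (dummy, "involves-gift", gift),
        (dummy, "involves-recipient", recipient),
    ]

# A ternary relationship: Homer gives a saxophone to Lisa.
for triple in ternary_to_binary("gives", "Homer", "saxophone", "Lisa"):
    print(triple)
```

Because every binary relationship mentions the shared dummy entity, the original three-way association can always be reassembled from the pieces.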
The cardinality of a relationship is the number of instances that can be associated with each entity type in a relationship. At first glance this might seem to be degree by another name, but it is not.
Cardinality is easiest to explain for binary relationships. If we return to Homer and Marge, the binary relationship that expresses that they are married husband and wife is a one-to-one relationship because a husband can only have one wife and a wife can only have one husband (at a time, in monogamous societies like the one in which the Simpsons live).
As we did with the ternary relationship, we can transform this more complex relationship architecture into a set of simpler ones by restricting expressions about being a parent to one-to-one cardinality.
The one-to-many expression brings all three of Homer’s children together as arguments in the same relational expression, making it more obvious that they share the same relationship than in the set of separate and redundant one-to-one expressions.
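The two cardinality styles can be contrasted in a short Python sketch; the tuple representation is just one possible encoding:

```python
# A one-to-many relationship expression...
one_to_many = ("Homer Simpson", "is-parent-of", ["Bart", "Lisa", "Maggie"])

# ...transformed into a set of separate one-to-one expressions.
subject, predicate, objects = one_to_many
one_to_one = [(subject, predicate, child) for child in objects]

for expression in one_to_one:
    print(expression)

# The one-to-many form makes it obvious that the three children share
# the same relationship to Homer; the one-to-one forms are redundant
# but architecturally simpler to store and combine.
```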
The directionality of a relationship defines the order in which the arguments of the relationship are connected. A one-way or uni-directional relationship can be followed in only one direction, whereas a bi-directional one can be followed in both directions.
All symmetric relationships are bi-directional, but not all bi-directional relationships are symmetric. The relationship between a manager and an employee is “employs” in one direction, a different meaning from the “is-employed-by” relationship in the opposite direction. As in this example, the relationship is often lexicalized in only one direction.
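One way to handle a bi-directional but asymmetric relationship is to record each predicate’s inverse explicitly, as in this sketch (the employer example is our own illustration):

```python
# An asymmetric bi-directional relationship: following it in the
# opposite direction changes its name, and its meaning.

INVERSES = {"employs": "is-employed-by", "is-employed-by": "employs"}

def reverse(triple):
    """Follow a relationship in the opposite direction by swapping
    the arguments and substituting the inverse predicate."""
    subject, predicate, obj = triple
    return (obj, INVERSES[predicate], subject)

print(reverse(("Mr. Burns", "employs", "Homer Simpson")))
# ('Homer Simpson', 'is-employed-by', 'Mr. Burns')
```

A symmetric relationship like “is-married-to” would simply be its own inverse in the table.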
Finally, the implementation perspective on relationships considers how a relationship is realized or encoded in a technology context. The implementation perspective contrasts strongly with the conceptual, structural, and architectural perspectives, which emphasize the meaning and abstract structure of relationships. The implementation perspective is a superset of the lexical perspective, because the choice of the language in which to express a relationship is an implementation decision. However, most people think of implementation as all of the decisions about technological form rather than just about the choice of words.
In this book we focus on the fundamental issues and challenges that apply to all organizing systems, and not just on information-intensive ones that rely extensively on technology. Even with this reduced scope, there are some critical implementation concerns about the notation, syntax, and deployment of the relationships and other descriptions about resources. We briefly introduce some of these issues here and then discuss them in detail in “The Forms of Resource Descriptions.”
The choice of implementation determines how easy it is to understand and process a set of relationships. For example, the second sentence of this chapter is a natural language implementation of a set of relationships in the Simpson family:
Homer Simpson → is-married-to → Marge Simpson
Homer Simpson → is-parent-of → Bart
Homer Simpson → is-parent-of → Lisa
Homer Simpson → is-parent-of → Maggie
Marge Simpson → is-married-to → Homer Simpson
Marge Simpson → is-parent-of → Bart
Marge Simpson → is-parent-of → Lisa
Marge Simpson → is-parent-of → Maggie
Bart Simpson → is-a → Boy
Lisa Simpson → is-a → Girl
Maggie Simpson → is-a → Girl
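These triples could be implemented directly as data; here is a minimal Python sketch that stores them as a set of tuples and follows one relationship:

```python
# The family relationships as subject-predicate-object triples.
triples = {
    ("Homer Simpson", "is-married-to", "Marge Simpson"),
    ("Homer Simpson", "is-parent-of", "Bart Simpson"),
    ("Homer Simpson", "is-parent-of", "Lisa Simpson"),
    ("Homer Simpson", "is-parent-of", "Maggie Simpson"),
    ("Marge Simpson", "is-married-to", "Homer Simpson"),
    ("Marge Simpson", "is-parent-of", "Bart Simpson"),
    ("Marge Simpson", "is-parent-of", "Lisa Simpson"),
    ("Marge Simpson", "is-parent-of", "Maggie Simpson"),
    ("Bart Simpson", "is-a", "Boy"),
    ("Lisa Simpson", "is-a", "Girl"),
    ("Maggie Simpson", "is-a", "Girl"),
}

def objects_of(subject, predicate):
    """Return every object related to the subject by the predicate."""
    return sorted(o for s, p, o in triples if s == subject and p == predicate)

print(objects_of("Homer Simpson", "is-parent-of"))
# ['Bart Simpson', 'Lisa Simpson', 'Maggie Simpson']
```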
In the following example of a potential XML implementation syntax, we emphasize class inclusion relationships by using elements as containers, and the relationships among the members of the family are expressed explicitly through references, using XML’s ID and IDREF attribute types:
<Parents children="Bart Lisa Maggie">
  <Father name="Homer" spouse="Marge"/>
  <Mother name="Marge" spouse="Homer"/>
</Parents>
<Children parents="Homer Marge">
  <Boy name="Bart" siblings="Lisa Maggie"/>
  <Girl name="Lisa" siblings="Bart Maggie"/>
  <Girl name="Maggie" siblings="Bart Lisa"/>
</Children>
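To show how such markup enables processing, here is a Python sketch that parses a well-formed version of the family markup; a <Family> root element is added here only so the fragment is a single XML document:

```python
import xml.etree.ElementTree as ET

# A well-formed version of the family markup, wrapped in an assumed
# <Family> root element so it parses as one XML document.
xml_source = """
<Family>
  <Parents children="Bart Lisa Maggie">
    <Father name="Homer" spouse="Marge"/>
    <Mother name="Marge" spouse="Homer"/>
  </Parents>
  <Children parents="Homer Marge">
    <Boy name="Bart" siblings="Lisa Maggie"/>
    <Girl name="Lisa" siblings="Bart Maggie"/>
    <Girl name="Maggie" siblings="Bart Lisa"/>
  </Children>
</Family>
"""

root = ET.fromstring(xml_source)

# Class inclusion is expressed by containment: every element inside
# <Children> is a member of the Children class.
children = [elem.get("name") for elem in root.find("Children")]
print(children)  # ['Bart', 'Lisa', 'Maggie']

# Reference-based relationships are expressed in attribute values.
lisa = root.find(".//Girl[@name='Lisa']")
print(lisa.get("siblings").split())  # ['Bart', 'Maggie']
```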
None of the models we have presented so far in this chapter represents the complexities of modern families that involve multiple marriages and children from more than one marriage, but they are sufficient for our limited demonstration purposes.
The syntax and grammar of a language consist of the rules that determine which combinations of its words are allowed and are thus grammatical or well-formed. Natural languages have substantial similarities in having nouns, verbs, adjectives, and other parts of speech, but they differ greatly in how they arrange them to create sentences. Conformance to the rules for arranging these parts makes a sentence syntactically compliant but does not mean that the expression is semantically comprehensible; the classic example is Chomsky’s anomalous sentence: “Colorless green ideas sleep furiously.”
Any meaning this sentence has is odd, difficult to visualize, and outside of readily accessible experience, but anyone who knows the English language can recognize that it follows its syntactic rules, unlike this sentence, which breaks them and seems completely meaningless: “Furiously sleep ideas green colorless.”
The most basic requirement for an implementation syntax is that it can represent all the expressions that need to be expressed. For the examples in this chapter we have used an informal combination of English words and symbols (arrows and parentheses) that you could understand easily, but this simple notation is incapable of expressing most of what we can readily say in English. This expressive benefit of natural language accrues only to people, however; a more restrictive and formal syntax is easier for computers to process.
A second consideration is that the implementation can be understood and used by its intended users. We can usually express a relationship in different languages while preserving its meaning, just as we can usually implement the same computing functionality in different programming languages. From a semantic perspective these three expressions are equivalent:
However, whether these expressions are equivalent for someone reading them depends on which languages they understand.
An analogous situation occurs with the implementation of web pages. HTML was invented as a language for encoding how web pages look in a browser, and most of the tags in HTML represent the simple structure of an analogous print document. Representing paragraphs, list items, and numbered headings with <P>, <LI>, and <Hn> makes using HTML so easy that schoolchildren can create web pages. However, the “web for eyes” implemented using HTML is far less efficient or practical for computers that need to treat content as product catalogs, orders, invoices, payments, and other business transactions and information that can be analyzed and processed. This “web for computers” is best implemented using domain-specific vocabularies in XML.
In the previous sections, as we surveyed the five perspectives for analyzing relationships, we mentioned numerous examples in which relationships have important roles in organizing systems. In this final section we examine three contexts for organizing systems in which relationships are especially fundamental: the Semantic Web and Linked Data, bibliographic organizing systems, and situations involving system integration and interoperability.
In a classic 2001 paper, Tim Berners-Lee laid out a vision of a Semantic Web in which all information could be shared and processed by automated tools as well as by people. The essential technologies for making the web more semantic and relationships among web resources more explicit are applications of XML, including RDF and OWL. Many tools have been developed to support more semantic encoding, but most still require substantial expertise in semantic technologies and web standards.
More likely to succeed are applications that aim lower, not trying to encode all the latent semantics in a document or web page. For example, some wiki and blogging tools contain templates for semantic annotation, and Wikipedia has thousands of templates and “info boxes” to encourage the creation of information in content-encoded formats.
The “Linked Data” movement is an extension of the Semantic Web idea to reframe the basic principles of the web’s architecture in more semantic terms. Instead of the limited role of links as simple untyped relationships between HTML documents, links between resources described by RDF can serve as the bridges between islands of semantic data, creating a Linked Data network or cloud.
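The bridging role of shared identifiers can be sketched in miniature; all URIs below are invented, and the dc: and foaf: prefixes stand in for the Dublin Core and FOAF vocabularies:

```python
# Linked Data in miniature: two independently maintained sets of
# RDF-style triples are joined because they use the same URI to
# identify a resource. (All URIs here are made up for illustration.)

library_data = {
    ("http://example.org/work/moby-dick", "dc:creator",
     "http://example.org/person/melville"),
}
biography_data = {
    ("http://example.org/person/melville", "foaf:name", "Herman Melville"),
}

linked = library_data | biography_data

# Following the shared URI bridges the two "islands" of data:
creator = next(o for s, p, o in linked if p == "dc:creator")
name = next(o for s, p, o in linked
            if s == creator and p == "foaf:name")
print(name)  # Herman Melville
```

The two data sets never mention each other; the typed link from the work to the person is what lets a query cross from one island to the other.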
Much of our thinking about relationships in organizing systems for information comes from the domain of bibliographic cataloging of library resources and the related areas of classification systems and descriptive thesauri. Bibliographic relationships provide an important means to build structure into library catalogs.
Bibliographic relationships are common among library resources. Smiraglia and Leazer found that approximately 30% of the works in the Online Computer Library Center (OCLC) WorldCat union catalog have associated derivative works. Relationships among items within these bibliographic families differ, but the average family size for those works with derivative works was found to be 3.54 items. Moreover, “canonical” works that have strong cultural meaning and influence, such as “the plays of William Shakespeare” and The Bible, have very large and complex bibliographic families.
Barbara Tillett, in a study of 19th- and 20th-century catalog rules, found that many different catalog rules have existed over time to describe bibliographic relationships. She developed a taxonomy of bibliographic relationships that includes equivalence, derivative, descriptive, whole-part, accompanying, sequential or chronological, and shared characteristic. These relationship types span the relationship perspectives defined in this chapter; equivalence, derivative, and description are semantic types; whole-part and accompanying are part semantic and part structural types; sequential or chronological are part lexical and part structural types; and shared characteristics are part semantic and part lexical types.
Smiraglia expanded on Tillett’s derivative relationship to create seven subtypes: simultaneous derivations, successive derivations, translations, amplifications, extractions, adaptations, and performances.
In “Identity and Bibliographic Resources,” we briefly mentioned the four-level abstraction hierarchy for resources introduced in the Functional Requirements for Bibliographic Records (FRBR) report. FRBR was highly influenced by Tillett’s studies of bibliographic relationships, and prescribes how the relationships among resources at different levels are to be expressed (work-work, expression-expression, work-expression, expression-manifestation, and so on).
Many cataloging researchers have recognized that online catalogs do not do a very good job of encoding bibliographic relationships among items, both due to catalog display design and to the limitations of how information is organized within catalog records. Author name authority databases, for example, provide information for variant author names, which can be very important in finding all of the works by a single author, but this information is not held within a catalog record. Similarly, MARC records can be formatted and displayed in web library catalogs, but the data within the records are not available for re-use, re-purposing, or re-arranging by researchers, patrons, or librarians.
The Resource Description and Access (RDA) next-generation cataloging rules are attempting to bring together disconnected resource descriptions to provide more complete and interconnected data about works, authors, publications, publishers, and subjects.
The move in RDA to encode bibliographic data in RDF stems from the desire to make library catalog data more web-accessible. As web-based data mash-ups, application programming interfaces (APIs), and web searching are becoming ubiquitous and expected, library data are becoming increasingly isolated. The developers of RDA see RDF as the means for making library data more widely available online.
In addition to simply making library data more web accessible, RDA seeks to leverage the distributed nature of the Semantic Web. Once rules for describing resources, and the relationships between them, are declared in RDF syntax and made publicly available, the rules themselves can be mixed and mashed up. Creators of information systems that use RDF can choose elements from any RDF schema. For example, we can use the Dublin Core metadata schema (which has been aligned with the RDF model) and the Friend of a Friend (FOAF) schema (a schema to describe people and the relationships between them) to create a set of metadata elements about a journal article that goes beyond the standard bibliographic information. RDA’s process of moving to RDF is well underway.
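For example, a description of a journal article that mixes the two schemas might look like the following sketch; the property choices and all values are illustrative, not a normative application profile:

```python
# A resource description mixing elements from two RDF-aligned schemas:
# Dublin Core for standard bibliographic properties, FOAF for
# information about the author as a person. (All values are invented.)

article = {
    "dc:title": "Relationships in Organizing Systems",
    "dc:date": "2013",
    "dc:language": "en",
    "foaf:maker": {
        "foaf:name": "A. Author",                     # hypothetical author
        "foaf:homepage": "http://example.org/~author",  # hypothetical URL
    },
}

# Properties from either vocabulary can be selected independently.
print(article["dc:title"])
print(article["foaf:maker"]["foaf:name"])
```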
Integration is the controlled sharing of information between two (or more) business systems, applications, or services within or between firms. Integration means that one party can extract or obtain information from another, but it does not imply that the recipient can make use of the information.
Interoperability goes beyond integration to mean that systems, applications, or services that exchange information can make sense of what they receive. Interoperability can involve identifying corresponding components and relationships in each system, transforming them syntactically to the same format, structurally to the same granularity, and semantically to the same meaning.
For example, an Internet shopping site might present customers with a product catalog whose items come from a variety of manufacturers who describe the same products in different ways. Likewise, the end-to-end process from customer ordering to delivery requires that customer, product and payment information pass through the information systems of different firms. Creating the necessary information mappings and transformations is tedious or even impossible if the components and relationships among them are not formally specified for each system.
In contrast, when these models exist as data or document schemas or as classes in programming languages, identifying and exploiting the relationships between the information in different systems to achieve interoperability or to merge different classification systems can often be completely automated. Because of the substantial economic benefits to governments, businesses, and their customers of more efficient information integration and exchange, efforts to standardize these information models are important in numerous industries. “Interactions with Resources” will dive deeper into interoperability issues, especially those that arise in business contexts.
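A toy example of the mappings involved, with invented field names for two manufacturers’ product descriptions:

```python
# Interoperability sketch: two manufacturers describe the same kind of
# product differently; explicit mappings transform both descriptions
# into one target schema. (All field names and values are invented.)

maker_a = {"item_name": "USB Cable", "price_usd": 4.99}
maker_b = {"productTitle": "USB Cable",
           "cost": {"amount": 4.99, "currency": "USD"}}

def from_maker_a(record):
    # Syntactic mapping: rename fields to the target schema.
    return {"name": record["item_name"], "price": record["price_usd"]}

def from_maker_b(record):
    # Semantic check: the target schema assumes prices in US dollars.
    assert record["cost"]["currency"] == "USD"
    # Structural mapping: flatten the nested cost structure.
    return {"name": record["productTitle"], "price": record["cost"]["amount"]}

catalog = [from_maker_a(maker_a), from_maker_b(maker_b)]
print(catalog[0] == catalog[1])  # True: the descriptions now correspond
```

Each transformation touches the three dimensions mentioned above: syntax (field names), structure (flattening the nested cost), and semantics (checking the currency).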
What is a relationship?
A relationship is “an association among several things, with that association having a particular significance.”
Why is it essential to include the type of association in a specification of a relationship?
Just identifying the resources involved is not enough because several different relationships can exist among the same resources.
What is the most typical grammatical model for expressing a relationship?
Most relationships between resources can be expressed using a subject-predicate-object model.
What knowledge does a computer need to be able to understand relational expressions?
For a computer to understand relational expressions, it needs a computer-processable representation of the relationships among words and meanings that makes every important semantic assumption and property precise and explicit.
What are three broad categories of semantic relationships?
Three broad categories of semantic relationships are inclusion, attribution, and possession.
What is a taxonomy?
A set of interconnected class inclusion relationships creates a hierarchy called a taxonomy.
What kind of semantic relationship is expressed by a classification?
Classification is a class inclusion relationship between an instance and a class.
What kinds of inferences are possible when relationships are transitive?
Ordering and inclusion relationships are inherently transitive, enabling inferences about class membership and properties.
What is an ontology?
Class inclusion relationships form a framework to which other kinds of relationships attach, creating a network of relationships called an ontology.
What is hyponymy?
When words encode the semantic distinctions expressed by class inclusion, the more specific class is called the hyponym; the more general class is the hypernym.
What is a practical application of morphological analysis?
Morphological analysis of how words in a language are created from smaller units is heavily used in text processing.
What are the two types of structural relationships?
Structural relationships can exist within a resource or between resources; many types of resources have internal structure in addition to their structural relationships with other resources.
What is link analysis?
Using the pattern of links between documents to understand the structure of knowledge and the structure of the intellectual community that creates it is an idea that is nearly a century old.
When are hypertext links merely structural?
Many hypertext links are purely structural because there is no explicit representation of the reason for the relationship.
What aspects of relationships between resources does the architectural perspective emphasize?
The architectural perspective on relationships emphasizes the number and abstraction level of the components of a relationship; three important issues are degree, cardinality, and directionality.
What are the essential semantic web technologies?
The essential technologies for making the web more semantic and relationships among web resources more explicit are XML, RDF, and OWL.
What is the origin of the study of relationships in organizing systems?
Much of our thinking about relationships in organizing systems for information comes from the domain of bibliographic cataloging of library resources and the related areas of classification systems and descriptive thesauri.
What is RDA?
The Resource Description and Access(RDA) next-generation cataloging rules are attempting to bring together disconnected resource descriptions.
What is integration?
Integration is the controlled sharing of information between two (or more) business systems, applications, or services within or between firms.
How is interoperability different from integration?
Interoperability goes beyond integration to mean that systems, applications, or services that exchange information can make sense of what they receive.
 The Simpsons TV show began in 1989 and is now the longest-running scripted TV show ever. The official website is www.thesimpsons.com. The show is dubbed into French, Italian, and Spanish for viewers in Quebec, France, Italy, Latin America, and Spain. The Simpsons Movie has been dubbed into Mandarin Chinese and Cantonese. For more information about Mandarin kinship terms see http://mandarin.about.com/od/vocabularylists/tp/family.htm. (Yes, we know that Bart actually calls his father by his first name.)
 Kinship can be studied from both anthropological and biological perspectives, which differ in the degree to which they emphasize social relationships and genetic ones. Kinship has been systematically studied since the nineteenth century, when a system of kinship classification still taught today was developed. A detailed interactive web tutorial developed by Brian Schwimer can be found at http://umanitoba.ca/faculties/arts/anthropology/kintitle.html.
 Kent’s Data and Reality was first published in 1978 with a second edition in 1998. Kent was a well-known and well-liked researcher in data modeling at IBM, and his book became a cult classic. In 2012, seven years after Kent’s death, a third edition came out, slightly revised and annotated but containing essentially the same content as the book from 34 years earlier because its key issues about data modeling are timeless.
 “Semantic” is usually defined as “relating to meaning or language” and that does not seem helpful here.
 For decades important and vexing questions have been raised about the specificity of these predicate-argument associations and how or when the semantic constraints they embody combine with syntactic and contextual constraints during the process of comprehending language. Consider how “While in the operating room, the surgeon used a knife to cut the ____” generates a different expectancy from the same predicate and agent in “While at the fancy restaurant, the surgeon used a knife to cut the ____.”
 This book is not the place for the debate over the definition of marriage. We are not bigots; we just do not need this discussion here. If these definitions upset you here, you will feel better later in the book.
 Typically, when people use language they operate on the assumption that everyone shares their model of the world, providing the common ground that enables them to communicate. As we saw in Resources in Organizing Systems and Resource Description and Metadata, because of the vocabulary problem and the different purposes for using resources and language, this assumption is often wrong. This paves the way for serious misunderstandings, since what is assumed to be shared knowledge may not really be shared or understood the same way.
 Which of these classifications is most relevant depends on the context. In addition, there might be other Homer Simpsons who are not cartoon characters or who are not married, so we might have to disambiguate this homonymy to make sure we are referring to the intended Homer Simpson.
 Martin is the animated gecko who is the advertising spokesman for Geico Insurance (http://www.geico.com/). Martin’s wit and cockney accent make him engaging and memorable, and a few years ago he was voted the favorite advertising icon in the US.
 Ontology is a branch of philosophy concerned with what exists in reality and the general features and relations of whatever that might be. Computer science has adopted “ontology” to refer to any computer-processable resource that represents the relationships among words and meanings in some knowledge domain.
 Languages and cultures differ in how they distinguish and describe kinship, so Bart might find the system of family organization easier to master in some countries and cultures and more difficult in others.
 This example comes from (Fellbaum 2010, pages 236-237). German has the word Kufenfahrzeug for a vehicle on runners.
 The quote continues: “The references to ‘some class’ and to ‘insignificant change’ make this definition rather vague, but we are not aware of any significantly stricter definition. Hence the creation of synonymy dictionaries, which are known to be quite large, is rather a matter of art and insight” (p. 314).
 George Miller made many important contributions to the study of mind and language during his long scientific career. His most famous article, “The Magical Number Seven, Plus or Minus Two,” was seminal in its proposals about information organization in human memory, even though it is one of the most misquoted scientific papers of all time. Relatively late in his career Miller began the WordNet project to build a semantic dictionary, which is now an essential resource in natural language processing applications. See http://wordnet.princeton.edu/.
 These contrasting meanings for “bank” are clear cases of polysemy, but there are often much subtler differences in meaning that arise from context. The verb “save” seems to mean something different in “The shopper saved…” versus “The lifeguard saved…” although they overlap in some ways. Various definitions of polysemy have been proposed, but there is no rigorous test for determining when word meanings diverge sufficiently to be called different senses.
 Languages differ a great deal in morphological complexity and in the nature of their morphological mechanisms. Mandarin Chinese has relatively few morphemes and few grammatical inflections, which leads to a huge number of homophones. English is pretty average on this scale.
 These so-called endocentric compounds essentially mean what the morphemes would have meant separately. But if a “birdcage” is exactly a “bird cage,” what is gained by creating a new word? This question has long been debated in subject classification, where it is framed as the contrast between “pre-coordination” and “post-coordination.” For example, is it better to pre-classify some resources as about “Sports Gambling,” or should such resources be found by intersecting those classified as about “Sports” and those about “Gambling”?
 English nouns have plural (book/books) and possessive forms (the professor’s book), adjectives have comparatives and superlatives (big/bigger/biggest), and regular verbs have only four inflected forms (see http://cla.calpoly.edu/~jrubba/morph/morph.over.html). In contrast, in Classical Greek each noun can have 11 word forms, each adjective 30, and every regular verb over 300.
 Of the five perspectives on relationships in this chapter, the structural one comes closest to the meaning of “relation” in mathematics and computer science, where a relation is a set of ordered elements (“tuples”) of equal degree. A binary relation is a set of element pairs, a ternary relation is a set of 3-tuples, and so on. The elements in each tuple are “related” but they do not need to have any “significant association” or “relationship” among them.
 This seems like an homage to Jimi Hendrix, based on the title of his 1967 song “Third Stone from the Sun” (http://en.wikipedia.org/wiki/Third_Stone_from_the_Sun).
 The subfield of natural language processing called “named entity recognition” has as its goal the creation of mixed content by identifying people, companies, organizations, dates, trademarks, stock symbols, and so on in unstructured text.
 Text Encoding Initiative.
 These layout and typographic conventions are well known to graphic designers  but are also fodder for more academic treatment in studies of visual language or semiotics .
 PageRank was first described while its inventors were computer science graduate students at Stanford. It is not a coincidence that the technique shares a name with one of its inventors, Google co-founder and CEO Larry Page. The ultimate authority on how PageRank works is Google; see https://www.google.com/insidesearch/howsearchworks/thestory/.
 “Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them…” See http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/.
 Douglas Engelbart credits Bush’s “As We May Think” article as his direct inspiration. Engelbart was in the US Navy, living in a hut in the South Pacific during the last stages of WWII, when he read The Atlantic Monthly magazine in which Bush’s article was published.
 For example, an author might use “See” as in “See (Glushko et al. 2013)” when referring to this chapter if it is consistent with his point of view. On the other hand, that same author could use “but” as a contrasting citation signal, writing “But see (Glushko et al. 2013)” to express the relationship that the chapter disagrees with him.
 Before the web, most hypertext implementations were in stand-alone applications like CD-ROM encyclopedias or in personal information management systems that used “cards” or “notes” as metaphors for the information units that were linked together, typically using rich taxonomies of link types.
 Many of the pre-web hypertext designs of the 1980s and 1990s allowed for n-ary links. The Dexter hypertext reference model elegantly describes the typical architectures. However, there is some ambiguity in the use of the term “binary” in hypertext link architectures. One-to-one vs. one-to-many is a cardinality distinction, and some people reserve “binary” for discussions of degree.
 Most designers use a variety of visual cues and conventions to distinguish hyperlinks (e.g., plain hyperlink, button, selectable menu, etc.) so that users can anticipate how they work and what they mean. A recent counter-trend called “flat design” —exemplified most notably by the user interfaces of Windows 8 and iOS 7— argues for a minimalist style with less variety in typography, color, and shading. Flat designs are easier to adapt across multiple devices, but convey less information.
 Most web links are very simple in structure. The anchor text in the linking document is wrapped in <A> and </A> tags, with an HREF (hypertext reference) attribute that contains the URI of the link destination if it is in another page, or a reference to an ID attribute if the link is to a different part of the same page. HTML also has a <LINK> tag, which, along with <A> have REL (relationship) and REV (reverse relationship) attributes that enable the encoding of typed relationships in links. In a book context for example, link relationships and reverse relations include obvious candidates such as next, previous, parent, child, table of contents, bibliography, glossary and index.
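A small Python sketch, using only the standard library’s html.parser, shows how such typed link relationships could be collected from markup (the sample markup is invented):

```python
from html.parser import HTMLParser

# Collect typed link relationships (REL attributes) from <A> and <LINK>
# tags, using only the standard library.

class RelCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.relations = []

    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            attrs = dict(attrs)
            if "rel" in attrs and "href" in attrs:
                self.relations.append((attrs["rel"], attrs["href"]))

# Invented sample markup using book-style link relationships.
sample = """
<link rel="next" href="chapter2.html">
<link rel="glossary" href="glossary.html">
<a rel="index" href="index.html">Index</a>
"""

parser = RelCollector()
parser.feed(sample)
print(parser.relations)
```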
 Using hypertext links as interaction controls is the modern dynamic manifestation of cross references between textual commentary and illustrations in books, a mechanism that dates from the 1500s . Hypertext links can be viewed as state transition controls in distributed collections of web-based resources; this design philosophy is known as Representational State Transfer(REST). See .
 Mosaic was developed in Joseph Hardin’s lab at the National Center for Supercomputing Applications(NCSA), hosted by the University of Illinois, at Urbana/Champaign by Marc Andreesen, Eric Bina and a team of student programmers. Mosaic was initially developed on the Unix X Window System. See http://www.ncsa.illinois.edu/Projects/mosaic.html.
 Eugene Garfield developed many of the techniques for studying scientific citation, and he has been called the “grandfather of Google” (http://blog.lib.uiowa.edu/hardinmd/2010/07/12/eugene-garfield-librarian-grandfather-of-google/) because of Google’s use of citation patterns to determine relevance.
 Shepard first put adhesive stickers into case books, then published lists of cases and their citations. Shepardizing is a big business for Lexis/Nexis and Westlaw (where the technique is called “KeyCite”).
 See http://altmetrics.org/manifesto/ for the original call for altmetrics. Altmetric.com and Plum Analytics are firms that provide altmetrics to authors, publishers, and academic institutions. In 2016 the National Information Standards Organization sought to standardize the definition and use cases for altmetrics, which should benefit everyone who cares about them. See also http://www.niso.org/topics/tl/altmetrics_initiative/.
 We are assuming a schema that establishes that the name attributes are of type ID and that the other attributes are of type IDREFS. This schema allows for polygamy, the possibility of multiple values for the spouse attribute. Restrictions on the number of spouses can be enforced with Schematron.
 Chomsky used these now-famous sentences to motivate the distinction between syntax and semantics. He argued that since the probability in both cases that the words had previously occurred in this order was essentially zero, statistics of word occurrence could not be part of language knowledge. See http://en.wikipedia.org/wiki/Colorless_green_ideas_sleep_furiously.
Ironically, the web was not semantic originally because Berners-Lee implemented web documents using a presentation-oriented HTML markup language. Designing HTML to be conceptually simple and easy to implement led to its rapid adoption. HTML documents can make assertions and describe relationships using REL and REV attributes, but browsers still do not provide useful interactions for link relations.
 Barbara Tillett has written extensively about the theory of bibliographic relationships; an especially useful resource is her chapter in a comprehensive collection ambitiously titled Relationships in the Organization of Knowledge.
 The FRBR entities, RDA data elements, and RDA value vocabularies have been defined in alignment with RDF using the Simple Knowledge Organization System (SKOS). SKOS is an “RDF-compliant language specifically designed for term lists and thesauri.” The SKOS website provides lists of registered RDF metadata schemas and vocabularies. From these, information system designers can create application profiles for their resources, selecting elements from multiple schemas, including FRBR and RDA vocabularies.