

 

Embedded Metadata, Part I: The Basics and a History

By Greg Reser and Johanna Bauman

From Images, the newsletter of the Visual Resources Association (VRA), December 2009, vol. 6, no. 6

 

 

Embedding metadata in digital files of all kinds has become commonplace, and many of us use embedded data on a daily basis without giving it a second thought.  One of the most prevalent examples is the metadata embedded in audio files.  This data allows you to browse your music files in a program like iTunes by artist, album, and song title, and to transfer them, together with all of their associated metadata, to your iPod, where you can access your files just as you do on your computer.  Wouldn't it be great if you could do the same thing with digital image files as they move from one system, application, or computer to another?  It can be done, but as managers of visual resources, we have to carefully consider the benefits and risks of embedding data in image files.  In this two-part article, we will parse out the differences between the types of embedded metadata, provide a background to its history, discuss its uses and limitations, and offer some actual use cases where embedded metadata is being used by members of the VR community.

 

The Basics of Embedded Metadata

At its most basic level, data is embedded in all of the objects around us.  We can look at a coffee cup, a painting, or a stapler and discern information about it through simple observation: the mug is ceramic, the painting is on canvas, and the stapler is six and a half inches long.  These simple powers of observation, however, are not always enough.  If we want to answer more in-depth questions, such as what types of materials were used in creating these objects or when they were created, we need to turn either to human experts who can analyze the objects or to technological tools such as chemical analysis.  Inasmuch as these data are inherent to the objects themselves, they will be available as long as the objects continue to exist.  Once we have made our observations and extracted the information from the objects, we can record it, and as long as we can maintain a relationship between the objects and the recorded information (usually by labeling the object with a number that corresponds to a number in the recorded information), we do not have to subject them to the same analysis again and again in order to access and share this information.  Enter the catalog, whether in the form of lists, cards, or electronic databases.

Like the objects listed above, all digital files contain data about themselves.  Unlike the objects, however, digital files are themselves made up of bits of data in the form of 1s and 0s, and they require the presence of additional data, as well as an interface (in the form of computer software and hardware), even to be observed.  Without such information as file type, compression algorithm, and color profile, image viewing tools (such as Photoshop or a web browser) would be unable to decode and display an image.  Precisely because digital images and digital metadata are made of the same stuff (1s and 0s), it is possible to store them together as one digital file rather than having to rely upon an external catalog (although the embedded data, like the image itself, still requires an interface in order to be viewed).  Image formats such as JPEG and TIFF actually have segments of their file structure set aside for storing data about the image.  In a TIFF, for example, this information is stored in the header, which tells software how to read the file by directing it to the image file directory (IFD), where metadata is stored in distinct byte groupings called tags.1 Moreover, the devices used to create image files, such as cameras and scanners, write a large variety of data (e.g., time and date, camera make and model, exposure and color settings, and sometimes GPS location coordinates) to the digital image as it is being created.
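For readers who want to see that structure for themselves, here is a minimal sketch in Python (the file name is a placeholder) that reads a TIFF header and lists the raw tag entries in the first IFD.  A real TIFF reader would also interpret tag types and follow value offsets; this only walks the layout described above.

    import struct

    def read_first_ifd(path):
        """List the raw tag entries in a TIFF file's first IFD."""
        with open(path, "rb") as f:
            # Bytes 0-1: byte order ("II" = little-endian, "MM" = big-endian)
            endian = "<" if f.read(2) == b"II" else ">"
            # Bytes 2-3: the magic number 42; bytes 4-7: offset of the first IFD
            magic, ifd_offset = struct.unpack(endian + "HI", f.read(6))
            assert magic == 42, "not a TIFF file"
            f.seek(ifd_offset)
            # An IFD begins with a 2-byte count of its 12-byte tag entries
            (count,) = struct.unpack(endian + "H", f.read(2))
            tags = {}
            for _ in range(count):
                tag_id, tag_type, value_count = struct.unpack(endian + "HHI", f.read(8))
                value_or_offset = f.read(4)  # small values sit inline; larger ones are offsets
                tags[tag_id] = (tag_type, value_count, value_or_offset)
            return tags

    # Tag 270 is ImageDescription, 306 is DateTime ("scan.tif" is hypothetical)
    print(sorted(read_first_ifd("scan.tif")))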

The “embedded” data we have been discussing so far has been limited to describing the materiality of the objects or files: what they are made of, when they were made, and what tools were used to make them; in short, the technical metadata.  These technical data, however, do not tell the whole story.  In the case of the mug, painting, and stapler, we have no idea who made them, why they were made, who owns (or owned) them, and where they are physically located.  In the case of the digital file, we know nothing about what it depicts, whether it is a picture of a mug, a painting, or a stapler.  This contextual or descriptive data is the key to making objects and their surrogate digital images discoverable and accessible to users.  As mentioned above, descriptive data about objects (and analog images of them in the form of slides or photographs) are typically stored separately in a catalog (or database), which provides users with a key to search and retrieve them.  A familiar example is the slide collection with an electronic database.2 Most digital image databases and delivery systems follow the same model as the electronic data record and the slide: a user searches the records in a database, and each record provides a link to a digital image.  Usually, the image is displayed on screen along with the cataloging data, so that users know for certain what they are looking at.  However, as soon as that image is downloaded to the user's own computer, it becomes disconnected from the database; its relationship to the descriptive data that was used to retrieve it is lost.  It would be as if, as soon as a slide was removed from its drawer, all the information on its label disappeared (with the possible exception of the image accession number).

Overcoming the disconnect between descriptive data and digital images has driven the development of standards for encoding rich descriptive metadata directly in the image files themselves alongside the technical metadata, the goal being to create an image file that is truly self-describing, carrying information about itself and what it depicts everywhere it goes.

 

A Short History of Embedded Metadata

While the potential for embedding metadata in digital images had been there from the beginning, it took certain developments in software and data standardization to make it consistently encodable, readable, and writable.

In the early years of digital imaging, not only was there no consistent means of encoding data in an image, but every digital device maker also had its own proprietary image format.  This meant that to view or edit an image, a user had to have the correct software, which was usually supplied by the manufacturer.  This situation made image sharing extremely difficult until the TIFF format came along and made a big difference.  The "Tagged Image File Format" was originally created by the Aldus Corporation (makers of PageMaker) as an attempt to get the desktop scanner vendors of the mid-1980s to agree on a common scanned image file format.  TIFF was designed to be cross-platform and to remain backward and forward compatible.  As a result, most modern image software can open and manipulate TIFF files created 20 years ago.3

This goal of creating an environment in which software applications could easily exchange images also applied to metadata.  In the TIFF standard, data segments or "tags" were set aside for metadata and were designed to be flexible and extensible, allowing for rich data describing any aspect of the image.  Although most of the data accommodated in the header falls into the technical realm, there is evidence that the developers of TIFF were thinking about descriptive metadata from the beginning.  The core TIFF tags, the essentials that all mainstream TIFF developers should support in their products, include ImageDescription, DateTime, Artist, and Copyright.  More tags for descriptive metadata have been added over the years, creating almost endless possibilities for users.  In fact, TIFF allows for an unlimited amount of user-defined metadata, but the most important of these extensions are EXIF and XMP.
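As an illustration, and assuming the Pillow imaging library and a hypothetical file name, those core descriptive tags can be read by name rather than by numeric ID:

    from PIL import Image
    from PIL.TiffTags import TAGS  # maps numeric TIFF tag IDs to their names

    # "scan.tif" is a placeholder for any TIFF file
    with Image.open("scan.tif") as img:
        for tag_id, value in img.tag_v2.items():
            name = TAGS.get(tag_id, str(tag_id))
            # Print only the core descriptive tags, when present
            if name in ("ImageDescription", "DateTime", "Artist", "Copyright"):
                print(f"{name}: {value}")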

While the TIFF standard was widely adopted for desktop publishing, manufacturers in the emerging digital camera industry needed to define specific image attributes to assure that professional and consumer products would communicate easily with each other.  In 1998, the EXIF (Exchangeable Image File Format) standard was developed by the Japan Electronics and Information Technology Industries Association (JEITA) to standardize the way electronic devices format and record image metadata at the time the image is created.4 EXIF has become a default standard: it is found on just about every imaging device made and is used to record such things as camera settings, time and date, image size, compression, camera make and model, and color information.  When images are viewed or edited in image editing software, all of this image information can be displayed.  EXIF also became a part of the TIFF format, with 59 private EXIF tags.  In 2002, 27 GPS tags were added to EXIF, allowing hardware and software to accurately capture geospatial data.  With the introduction of EXIF, a truly robust schema for capturing technical metadata across different image formats came into being.
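As a quick sketch, again assuming Pillow and a hypothetical file name, the EXIF data a camera wrote can be listed by tag name:

    from PIL import Image
    from PIL.ExifTags import TAGS  # maps numeric EXIF tag IDs to names

    # "photo.jpg" is a placeholder for any camera-produced image
    with Image.open("photo.jpg") as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            print(TAGS.get(tag_id, tag_id), value)  # e.g. Make, Model, DateTime
        # Detailed exposure settings live in a sub-IFD: exif.get_ifd(0x8769)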

Even with the introduction of TIFF tags and EXIF, embedded metadata did not move easily between operating systems and software tools.  One problem was that, out of necessity, organizations created metadata in file-specific or non-standard ways.  While these tags filled a need, such as workflow management, they were often not readable outside of the custom software in which they were created.  In response to this, Adobe developed the Extensible Metadata Platform (XMP) as an open-source infrastructure for creating custom schemas that are file and platform neutral.  XMP was designed to normalize different schemas while retaining their unique elements and values.  To make this possible, XMP uses the web-friendly Resource Description Framework (RDF), Extensible Markup Language (XML), and Uniform Resource Identifier (URI) standards,5 which provide consistent ways to represent and exchange objects and concepts and their relationships to one another.  XMP was first introduced in 2001 as part of Adobe Acrobat 5.0.  Since then it has been added to all of Adobe's Creative Suite products and has gained acceptance from Apple, Microsoft, and most commercial software developers.
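To give a feel for the encoding, here is a small, invented XMP packet: an RDF/XML fragment carrying two Dublin Core fields, parsed with Python's standard library (the title and creator values are hypothetical):

    import xml.etree.ElementTree as ET

    # A minimal, made-up XMP packet: RDF/XML carrying Dublin Core fields
    xmp_packet = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
        <rdf:Description rdf:about="">
          <dc:title><rdf:Alt><rdf:li xml:lang="x-default">Blue Mug</rdf:li></rdf:Alt></dc:title>
          <dc:creator><rdf:Seq><rdf:li>Jane Photographer</rdf:li></rdf:Seq></dc:creator>
        </rdf:Description>
      </rdf:RDF>
    </x:xmpmeta>"""

    ns = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
          "dc": "http://purl.org/dc/elements/1.1/"}
    root = ET.fromstring(xmp_packet)
    print(root.find(".//dc:title/rdf:Alt/rdf:li", ns).text)  # -> Blue Mug

Because the packet is plain XML text wrapped in RDF, any tool that can parse XML can read it, which is precisely what makes XMP file and platform neutral.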

In the 1970s, long before the era of digital imaging and embedded metadata, news wire services were transmitting photographs with basic descriptive metadata encoded as text.  These early images were sent on teletype machines (similar to a fax), and the method was very simple: information such as caption, location, and credit was typewritten or pasted directly at the top of the print.  The standard format for the text was developed in 1979 by the International Press Telecommunications Council (IPTC),6 a consortium of the world's major news agencies, news publishers, and news industry vendors.  Later, in 1991, a new standard, the "Information Interchange Model" (IIM), was created to handle digital resources, with metadata encoded as binary data within the file.  Early adopters of IPTC IIM created their own methods of encoding it in images; it was not until Adobe incorporated IIM into Photoshop in 1994 that its use became widespread in the press industry.  Today IPTC is probably one of the most widely used schemas for embedding data in images.
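As a sketch of how that binary IIM data is addressed today (again assuming Pillow and a hypothetical file name), each datum is keyed by a record and dataset number:

    from PIL import Image, IptcImagePlugin

    # "wirephoto.jpg" is a placeholder; IIM values come back as raw bytes
    with Image.open("wirephoto.jpg") as img:
        iptc = IptcImagePlugin.getiptcinfo(img) or {}
        # IIM datasets are addressed as (record, dataset) pairs:
        # 2:105 = Headline, 2:110 = Credit, 2:120 = Caption/Abstract
        for key in ((2, 105), (2, 110), (2, 120)):
            if key in iptc:
                print(key, iptc[key].decode("utf-8", errors="replace"))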

[Image: 1983 UPI wirephoto with caption "header" taped on.  XMP is a method for encoding this metadata into the digital image itself.]

 

In 2004, Adobe began a formal collaboration with the IPTC and with press and advertising representatives to create an XMP version of the IPTC IIM schema.  The result was the 2005 release of the "IPTC Core Schema for XMP" version 1.0, containing 31 elements broken up into four major categories: photographer, content, image, and status.  To answer the needs of the stock photo and cultural heritage communities, another update was released in 2009 with 45 additional elements, which allow the identification of people and artworks in an image.7

We have seen how embedded metadata developed from the need to easily share image files across applications and was then extended by the need of news organizations to send images electronically around the world.  The most recent standard, XMP, is gaining wide acceptance among hardware and software makers and should bring us closer to the goal of making image metadata a ubiquitous part of using digital images.  Image users, whether they are faculty, students, VR curators, or large data aggregators, should be able to open and work with image metadata as easily as they open the image itself.

Stay tuned for Part II, where we will address some of the more practical aspects of embedded metadata: the challenges of managing multiple schemas even with the potential for standardization provided by XMP, tools that can be used for embedding metadata, its uses and limitations for managing visual resources, and, finally, some actual use cases.

 

 


Notes:

 

1 AWare Systems' TIFF FAQ, http://www.awaresystems.be/imaging/tiff/faq.html (website); Adobe TIFF Specification 6.0, Section 2: TIFF Structure, http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf (PDF).

2 Although it could be argued that a well organized analog collection where the slides or photographs are labeled doesn’t really require an outside database in order for images to be searched and retrieved.  The organization of the drawers themselves and the metadata printed on them in the form of labels and captions together comprise a self-contained image information and retrieval system.

3 The goal for backward compatibility of TIFF files is explained in the Adobe TIFF Specification 6.0, http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf  (PDF), page 7.

4 See Exif Version 2.2 (PDF), http://exif.org/Exif2-2.PDF, section 3.2.

5 RDF/XML Syntax Specification (Revised), http://www.w3.org/TR/REC-rdf-syntax/ (website).  A concise reference can be found at: rdf:about, http://rdfabout.com/quickintro.xpd (website).

6 The original IPTC standard published in 1979 and updated in 1995: http://www.iptc.org/std/IPTC7901/1.0/specification/7901V5.pdf (PDF).  Current information and documentation is available at the IPTC website http://www.iptc.org.

7 IPTC Core & Extension: http://www.iptc.org/cms/site/index.html?channel=CH0099 (website)

 
