

VRA_panel_User_Scenario_1 (redirected from VRA_beta_User_Scenario_1)

Page history last edited by GregReser 10 years, 2 months ago

return to VRA XMP Info Panel



Task: Faculty member catalogs their personal research slide collection

User Type: Faculty member (non-VR)

Tools: Adobe CS4, VRA file info panel, EMET, MS Excel, MDID, VPN

Background: Professor Smither is in the Classics department at her university. Over several decades she has amassed a substantial personal image collection of 35mm slides, which she alone has curated and used; they were never accessioned into the university's slide library (now the visual resources center). She is currently working with the VR curator to have her slides scanned, but the task of cataloging them falls back to the professor. She has taken extensive notes on most of her slides, but nothing was divided into separate fields, and only she really understands everything she has scribbled on those slide labels over the years.

Narrative: Professor Smither now has a few thousand slides scanned and ready for cataloging. Before the scanning, the VR curator encouraged her to group together slides that share similar data (e.g., images from the same archaeological site, all terra-cotta vases together, etc.). Using the grouping provided by Professor Smither, the VR curator assigns a student to go through the images in Bridge, creating and applying metadata templates to one folder of images at a time. These images, stored on a server, are then ready for Professor Smither to enhance the cataloging, which she can do from her own personal computer by logging onto the server over a virtual private network (VPN), opening the images in Bridge, and editing the metadata one image at a time. Once all of the images are cataloged, the VR curator uses EMET to extract the embedded records to Excel, where she checks the data and makes corrections using "find and replace", concatenate, "split column data", and "drag to fill", and then imports the records into MDID, from which Professor Smither can access the images and use them in her teaching in the current digital environment.
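The curator's Excel cleanup steps (split a free-text field into columns, find and replace, concatenate) could equally be scripted. A minimal sketch with pandas, where the column names ("Notes", "Site", etc.) and sample values are invented for illustration and do not come from EMET's actual output:

```python
import pandas as pd

# A few rows standing in for records extracted from the embedded metadata
df = pd.DataFrame({
    "Notes": ["Athens; terra-cotta vase", "Delphi; temple ruins"],
    "Photographer": ["Smither", "Smither"],
})

# "Split column data": break the free-text notes into two separate fields
df[["Site", "Subject"]] = df["Notes"].str.split("; ", n=1, expand=True)

# "Find and replace": normalize a spelling variant across all records
df["Subject"] = df["Subject"].str.replace("terra-cotta", "terracotta", regex=False)

# Concatenate: build a credit line from an existing field
df["Credit"] = "Photo: " + df["Photographer"]
```

Scripting these transformations makes them repeatable if a later batch of slides arrives, whereas manual Excel edits would have to be redone by hand.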

