Memoir of a chart of the East Coast of Arabia – Project 1

In my post about crowdsourcing I mentioned a digitisation project we worked on as a class. We made accounts on the site 18thConnect, and together we digitised Memoir of a chart of the east coast of Arabia (1764). The manuscript recounts a journey and describes an old map. To access the site, one needs to create an account and password, and can immediately start editing any text that is available on the site.

This manuscript, written back in 1764, was run through OCR software, in particular TypeWright. The text is read automatically, and then editors like me and my classmates have access to every line the software identified. This includes many misread ink spots, paper folds and other artefacts, which should be excluded from the data. There are also mistakes in the spelling of words, and sometimes letters that are hard to read. This is the first level of editing that we had to do. Many other additions to the text and its metadata followed. As my final project I decided to finalise the document and prepare it for online publication.

In the following paragraphs I will describe the process, some problems and solutions, and generally reflect on the making of such a publication.

Turning the Memoir into a TEI XML file requires many additions to the original text and corrections of the OCR's mistakes. The format is associated with specific tags, similar to those in HTML but more literature-specific. In order to add more information about the particular layout of the text, the fonts, and the characters' size and purpose, I started using the TEI Guidelines. They are very complex because they deal with texts in enormous detail. There are notations for verse structures, rhymes, play structure, character information and many, many others. There are tags that give incredible amounts of metadata about the information put in the text. In order to properly annotate a text in this manner one must know the specific tags, similar to learning a programming language. I am still far from competent in the many functions of the language, so my current edition of the Memoir isn't very rich in metadata. However, diving into the depths of TEI showed me how much there is to learn and what precision of information one can achieve. I found that TEI is not a very popular language outside the limited circle of Digital Humanities. I also noticed that many of the examples on the site are given with logograms, which suggests involvement from East Asian countries.
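To make the format concrete, here is a hedged sketch in Python: the element names (p, body, hi with a rend attribute) are genuine TEI conventions, but the snippet and its sample content are my own illustration, not part of the project's actual workflow.

```python
import xml.etree.ElementTree as ET

# Build a minimal TEI-style fragment: a paragraph with one emphasised phrase.
# <hi rend="..."> is the standard TEI way to record typographic emphasis.
text = ET.Element("text")
body = ET.SubElement(text, "body")
p = ET.SubElement(body, "p")
p.text = "Memoir of a chart of the "
hi = ET.SubElement(p, "hi", rend="italic")
hi.text = "East Coast of Arabia"
hi.tail = ", 1764."

xml_string = ET.tostring(text, encoding="unicode")
print(xml_string)
```

The output is a well-formed fragment in which the emphasis survives as data rather than as appearance, which is exactly what makes TEI so precise.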

Throughout the whole text there is a spelling peculiarity that comes from the age of the English used. Where modern spelling has "s", the author often wrote the long "s" (which looks much like an "f"), according to the conventions of the time. In order to provide an accurate "translation", editors have to replace the old letter with its modern equivalent to keep the meaning. There are also simple misreadings, where the OCR couldn't make out the text because of a blur, and we had to correct those as well. These minor things were fixed within a couple of days after we started. There were also many symbols, such as pictograms of anchors,

Anchor pictogram

degree notations and fractions, that aren't traditionally found on a keyboard. These are initially noted with "@" and later replaced with the right symbol following the TEI Guidelines.
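As an illustration of that replacement step, here is a small Python sketch. The named-placeholder convention (@anchor@, @degree@, @half@) and the symbol table are my own simplification for the example; in practice each "@" has to be resolved by checking the page image.

```python
# Hypothetical placeholder names -> the Unicode symbols they stand for.
SYMBOLS = {
    "@anchor@": "\u2693",  # anchor pictogram
    "@degree@": "\u00b0",  # degree notation
    "@half@": "\u00bd",    # the fraction one half
}

def resolve_placeholders(line: str) -> str:
    """Replace named '@...@' placeholders with their Unicode symbols."""
    for placeholder, symbol in SYMBOLS.items():
        line = line.replace(placeholder, symbol)
    return line

print(resolve_placeholders("Lat. 23@degree@ 30' N, depth 4@half@ fathoms @anchor@"))
```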

Punctuation and emphasis like italics, bold, and underline are manually input to correspond to the text. This task first requires a closer look at the text and its meaning. Adding the different character-descriptive tags provides a further reading from the editor's side. The process demands attention, first to copy the text and second to transfer the meaning correctly. Here the editor has some freedom of interpretation, but mostly the obligation to transfer the author's intent fully. The same goes for page layouts. They should be input so that the original arrangement is kept even in digital format and can be recreated. These minor but important additions relate to the typography of the text. With time, understandings and interpretations change, so keeping as much as possible of what was intended is important. The software, TypeWright, keeps track of all changes made to each line, with username, time and the change itself. This information is valuable and is kept in the XML file, because it is very influential to the meaning of the text.

I have used the tag hi rend="italics"/"bold"/etc. to mark text. The location of certain pieces of text I have marked with hi rend="center"/"right"/"left". These tags are specific to TEI; however, I have also used tags, such as the paragraph marker p, that are typical for HTML but used in TypeWright as well. Mistakes can be made in any of these small insertions, or at least there are enough different ways to write them that differences will occur. For example, I know that not all commas that follow an italicised piece are themselves italicised. Sometimes I would put the closing tag /hi before an additional comma because I hadn't seen it. There are many minor variations like these that come either from the inconsistency of one editor or from the different styles of many. I would also add that even in this text I don't think I have managed to capture quite all of the elements on the page. There are lists, different indentations and stories within stories that I haven't marked. This is partly because I don't feel very comfortable using all of the TEI notations, and partly because it requires a lot of time and screen staring.
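A simple guard against such slips is to check each edited line for well-formedness. This little Python helper is my own sketch, not something 18thConnect provides; it flags lines where a hi tag is opened but never closed:

```python
import xml.etree.ElementTree as ET

def is_well_formed(line: str) -> bool:
    """Return True if the line parses as XML once wrapped in a root element."""
    try:
        ET.fromstring(f"<line>{line}</line>")
        return True
    except ET.ParseError:
        return False

print(is_well_formed('a <hi rend="italic">word</hi>,'))  # balanced tags
print(is_well_formed('a <hi rend="italic">word,'))       # missing closing /hi
```

It won't catch a comma left outside the emphasis, of course, only structural breakage; the interpretive inconsistencies between editors remain a human problem.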

Once the text and all of the information contained on the page have been transcribed, the editor can add annotations, comments, footnotes, and any type of tag to provide additional interpretations. In my edition of the Memoir I have not completed this phase. My interactions with the text end with the input of the actual information.

From here, however, the academic interest begins, and with it the need for more precise knowledge of TEI. The map can be described in great detail, as there are repeating characters such as Captain Smith, and there are many locations. I think that, given the nature of the text – the description of an old map and the memoir of a traveller – it would be very appropriate for someone to georeference the locations and see whether the distance estimates are correct. As we noticed during GIS Day, the coasts around the Arabian Peninsula change often, and it would be interesting to see the changes from 300 years ago. Moreover, there are instances of interaction with groups living near the coasts. There are lists of stocks, plans for trip provisions, and accounts of valuables that the Captain encountered.
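Once the places were georeferenced, checking the memoir's distance estimates would mostly be great-circle arithmetic. Here is a sketch using the haversine formula; the two coordinate pairs below are invented placeholders, not locations taken from the text.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Two hypothetical points along the Arabian coast:
print(round(haversine_km(24.0, 54.0, 25.0, 56.0), 1))
```

Comparing such computed distances against the 1764 estimates, converted from leagues or miles, would show how accurate the chart's author really was.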

The user interface of the site 18thConnect cannot be left without comment, because in such crowdsourcing initiatives the user experience is very important. The site doesn't look too welcoming from the home page, but the bigger issues appear when editing starts. The portion of the original text that is visible is very small, which would make sense if that space were given over to editing instead. But it isn't. The OCRed text appears in a very small box below the original, which is not easy to navigate with a mouse, as you have to click on the next line in order to start editing it. The problem is that the given keyboard shortcuts for moving between lines, inserting lines, deleting and submitting don't work properly. Most of the times I used them the site didn't respond, and even after clicking the "Insert line" button the page wouldn't refresh and no result would show. Not to mention that if you are on iOS the shortcut commands are completely inaccurate.

That is not meant to discourage you from editing texts; it is just a note to the creators. There is a certain satisfaction of contributing to the world that I associate with this finished digitisation and with others I have completed. Although small, it is a lasting impact that one can make on the world, recorded in the metadata of the work to be remembered as long as it exists. I think it is fulfilling for people to spend some time on a text, because it triggers both internal and external change.

NOTE: To anyone interested in this particular text, I should mention that I know of at least two instances where something must be added, but I am unable to do so.
Top of page 8 needs a line with hi rend="center" (2) /hi added.
On page 9, line 23: a boat "founded" or "sounded"?
A table needs to be added on page 4.

Participants:

rindh.hosting.nyu.edu

whataboutlife.hosting.nyu.edu

themultilingualmuslimah.hosting.nyu.edu

digitalhumanties.hosting.nyu.edu

mlmidh.hosting.nyu.edu

AND Professor Wrisley in his course https://wp.nyu.edu/ahcad139/

GIS Day 2016 NYUAD

Maps are representations of reality that are dense in information. You can see where, and even when, things were and are.
When comparing two maps, one observes the differences between them. The emphasis of a map can range from detailed buildings to depictions of nature – water and topography – which allows you to:
1. Understand the aim of the map
2. Evaluate the accuracy of the map
3. Learn about long-term changes (in the UAE's case, the changing coastline) and thus about the ecological impact of development.
There is another way to use mapping: in cases where you have addresses and information about a place. For example, NYUAD's Akkasah Archive holds a large collection of photos of the Middle East with dates, locations and names – metadata – from Egyptian, Emirati (possibly) and Turkish photo studios, and sometimes personal cameras. When you visually locate the information from these pictures in space, you can learn:
1. How the locations connect with the types of neighbourhoods.
2. About people's lives: clothing, style, tastes.
3. [A personal favourite] About individuals. Pictures paint a frame of a life and can sometimes tell great stories – be it a very open Egyptian lady in many photos, or tourists who had pictures of the Olympics in 1963 and Hitler.
So mapping information onto actual space adds another dimension to certain data, and I find it quite useful, for two reasons: it teaches easy interaction with maps, and it helps me understand the underlying choices when making a map or using one to interpret information, since what is depicted on the map is what will be of importance.
In the meantime, I found that I have kept a number of maps of a similar kind – tourist maps. As I travel around, I have realised that the only people who would make any use of a paper map are tourists. I know I wouldn't use one in my own city, although we also have these tourist maps, so I know there is something fundamentally wrong with these maps when it's your own town. You care more about banks, about street names and numbers, and none of those cutely drawn cafés has what you need.
However, I have found myself a tourist many times, and each time those tiny, cute maps have been somehow useful, because they have allowed me to study the interesting parts of an area so I can easily recall and imagine it. When I come back to a city I simply recognise the tourist attractions and destinations, and am able to estimate my location and orient myself. For this I admire the people who choose what to depict, for doing it so well that I learned from it. This is the art of making maps: accurate representations of where we are or were that contain much more information, shown in the context demanded by their use, which then helps you read it. Maps can show the real scale of places and people.
In a TED Talk, Danny Dorling uses maps and the data they represent to show a bigger picture, which we often lose sight of behind all the bad news. Things are going well for the Earth: the population is reaching a balance, governments are controlling pollution (especially in more polluted areas – Japan, the USA), and more and more people are getting educated and caring about nature. We have enough food for everyone [I don't know his views on diets] if we decrease the amount of meat we eat.

Maps represent more than we can actively see, so they seem like authoritative pieces of big truth [about big things "out there"]. But maps are someone's interpretation of spatial information, and the truth is relative to the purpose of the map. They are not necessarily real as in "out there"; if Abu Dhabi were as big as a local bank's map showed it, nothing of such scale would be possible for the city. But they represent the things that are "out there" through a certain prism of need. And depending on those needs, they can be used for marketing and targeting strategies instead of spatial orientation. They are institutionally or personally motivated.

NodeGoat

Like every piece of software, NodeGoat has some limitations. For simple data analysis with single edges and not many nodes it works fine. The problems come when the amount of data increases. First, it doesn't provide ready-made layouts for data input, so making these takes a lot of time, and it doesn't support many types of input. In the visualisation view it struggles with too much information, which overlaps, lags, and becomes really hard to navigate. It also doesn't have an interface for accessing information that is present in the input but not shown in the visual network. The site is not very user-friendly, but it is still well made.

From all of these problems and interactions I learned a lot about what such software should look like and what it should be able to do. Firstly, especially in our project, where there were different types of people (actors, directors, etc.), I think colour coordination, plus a legend, would be very useful. When a project has more than two connected types, it becomes really hard to understand and navigate, which was pointed out by most of the authors we had to read for the class. Not only does it complicate things further, the network also loses its meaning when there is no differentiation from person to person or object. The only conclusions I am able to draw from the network we created concern the "volume" of each producer or film – how many others were connected to it – but not any specific patterns that emerge.
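The colour-coordination idea can be sketched in plain Python. The people, film and colour choices below are invented examples, not our actual NodeGoat data:

```python
# A toy two-mode network: every node carries a type, and a legend maps
# each type to a colour for the visualisation.
NODE_TYPES = {
    "Actor A": "actor",
    "Director D": "director",
    "Film F": "film",
}
EDGES = [("Actor A", "Film F"), ("Director D", "Film F")]

LEGEND = {"actor": "red", "director": "blue", "film": "green"}

def node_colours(types, legend):
    """Map each node to the colour of its type, ready for a plotting library."""
    return {node: legend[t] for node, t in types.items()}

colours = node_colours(NODE_TYPES, LEGEND)
print(colours["Film F"])
```

Any visualisation tool that accepts a per-node colour list could consume this mapping, and the legend itself doubles as the on-screen key.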

In order to find patterns, one should keep the information included as limited as possible. As Weingart said in his writing about networks, in order to make use of them one has to be very focused in her choice of nodes and edges, and very careful with symmetric and asymmetric relationships. In the case of the Egyptian Movies project, I think we overcomplicated the network by adding so much information about the relationships between different people that the connections to movies literally disappeared under a web of marriages and divorces.
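Keeping to a single edge type also makes the "volume" measure mentioned above trivial to compute. A sketch with invented person-to-film edges:

```python
from collections import Counter

# Invented person-to-film edges; one edge type only, per Weingart's advice.
edges = [
    ("Actor A", "Film 1"), ("Actor A", "Film 2"),
    ("Actor B", "Film 1"), ("Director D", "Film 1"),
]

# Count how many edges touch each node: its "volume" in the network.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

print(degree.most_common(1))  # the best-connected node
```

With the marriages and divorces stripped out, a measure like this immediately shows which films and people sit at the centre of the network.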

Themes

In my experience of Digital Humanities, the recurring themes are of two kinds: those about the subject itself – the meta ones – and those about its insides.

Of the first kind, I see a lot of similarity to the themes in psychology: what is this study? Is it a science, or a humanities discipline? What is its purpose, and who does it? These meta questions are common to all newly defined disciplines. Establishing the work of the field is important, given that it is taking new leaps now as hardware also persistently develops. Using computers to map out texts is at the basis of teaching neural networks to create text themselves and to understand more complex ideas. Making precise historical maps and digitising information will contribute to that as well. So, no matter what the exact meta answers are, it is definitely a useful area of study and should be established as one.

The other type of recurring themes is inevitable to mention when speaking about digital humanities: data, metadata, mappings, crowd projects. In the archaeological project that I participated in with my professor and a group of archaeological scientists, all of these themes made an appearance. Gathering data since 2005, the archaeologists studying Saadiyat Island discovered the many hearths on the island, showing that life existed here for millennia. They then put all of the information together to form a map of the place, which was being enhanced by 3D mapping devices that could take data from the ground and create elaborate pictures, including height. All of the people who participated in the trip also participated in the crowdsourcing part of the project. We all contributed to making a precise map, with location coordinates, of all of the objects observable on the site. This included not only historical items, but also items that became history as we documented them.

Data

Data comes in many shapes and forms – objects, pictures, text, etc. – but what really concerns humanists is data in the form of 1s and 0s, just like the data that interests programmers. Our data, though, begins its way in a very different, physical format. All information created in the humanities can be turned into computational data. The works of a poet can be translated into computer language and analysed for their contents, or used to teach a computer something. The paintings of an artist can be digitised for a machine to count the strokes. A historian's works can be compared to those of people long gone. These are just a few examples of data in the humanities.
Data can be stored in many ways, or formats, depending on the need for it. For creating networks, for example, the .csv and .xls formats are a good, easy way to keep data organised and accessible to other software (Google Sheets, Microsoft Excel, etc.). An extension that keeps all of the digital information in a picture is .raw; however, it is quite large.
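As a small illustration of why .csv is so convenient, here is how Python's standard csv module reads a network edge list; the file contents below are invented for the example.

```python
import csv
import io

# An edge list as it might be exported from Google Sheets or Excel.
csv_text = """source,target
Actor A,Film 1
Actor B,Film 1
"""

# DictReader turns the header row into keys, so each row is self-describing.
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["source"], "->", rows[0]["target"])
```

The same two-column file opens unchanged in a spreadsheet, a network tool, or a script, which is exactly what makes the format such a good interchange choice.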
On the topic of big things, data sets can be small, medium or big. Small data is easy to navigate and is even workable by hand. Medium data requires a lot of input and software in order to be processed. Lastly, there is big data: it is gathered from many sources, and analysing it, even with software, usually takes a substantial amount of time. It is really hard to work with, because mistakes are hard to find and details might be left out, so results depend heavily on the software working with it. A famous example of a failure with big data is Google's flu predictor, which failed numerous times due to misinformation, or because it didn't take into account the metadata of the information it had (e.g. the time of year, which correlates with the spread of the flu).

Metadata

This is the background information generated not directly by users but by their devices. I believe the kind most relevant to humanities data collection is location. Every message, photo or check-in has coordinates attached to it, with the particular geolocation the phone estimates based on GPS satellites – and these are rarely far off. So, without the need to manually input locations or search for pins on maps, one can use this to map anything. In the humanities this can be used for projects that gather information about anything that can currently be photographed – just as we did in the archaeological project.
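Photo metadata (EXIF) stores those GPS coordinates as degrees, minutes and seconds; converting them to the decimal degrees that mapping tools expect is simple arithmetic. A sketch with invented sample values:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds plus hemisphere (N/S/E/W) to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# A hypothetical photo taken somewhere around Abu Dhabi:
lat = dms_to_decimal(24, 31, 12.0, "N")
lon = dms_to_decimal(54, 26, 24.0, "E")
print(round(lat, 4), round(lon, 4))
```

Once every photo in a collection is reduced to such a decimal pair, plotting the whole archive on a map becomes a bulk operation rather than manual pin-hunting.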

But metadata can be added manually as well, which makes it useful not only for location but for sorting information by anything input as metadata. Say, in a play one can ask many questions that cannot always be answered from the pure text, like: whom does the line address, where in space was it said, where on stage was it said, etc. These types of details can also be contained in the metadata of a play, in order to help both scholars and even actors work with it. In fact, intonation, accent, facial expression – all kinds of details can be added to the metadata bank of a file. Possibly, in future productions where the actor's own interpretation of a character or situation is not considered important, he could even learn a play on his own purely from metadata. All directors beware!

Wait – if there is so much information behind the scenes of electronic files, then how do I know what I am giving away? And who has access to this metadata again? Well, this touches on one of the biggest privacy issues of the 21st century. More info – Metadata.

Crowdsourcing

This is the miracle of the accessible Internet. People are on it all the time, they generate data and metadata of all sorts, and they can be useful in helping computers analyse it. The simplest example we looked at was a site where people sort images into categories in order to help a machine learn to differentiate them. This task is impossible for software to do properly and subjectively the way humans would, but not important enough to pay people to do it (in this case). So, simply by asking the people of the Internet to assist, one can accomplish a lot of things – for free. Users accumulate great amounts of knowledge about the world, and sharing it with people who can work with it is always valuable. Crowdsourcing doesn't only have to be online; it can easily happen in real life as well, as in the example of the archaeology field trip. However, the information gathered always ends up accumulated in a digital source in order to be easy to work with, so having the user input it digitally to begin with is even easier.
In class we are working on a project concerning the food places around Abu Dhabi and, particularly, whether they can serve NYUAD students well (prices, closeness to campus, delivery, etc.). In this project we are working as a team to input data, but we are also allowing other people to input information about their food adventures in the city as they go along, without any incentive – crowdsourcing. I find it remarkable how well Google Forms works for such endeavours: it is easy to work with, not time-consuming to set up, and has enough options.

Crowdsourcing

With the Internet available to a big part of the human population, and with its size bigger than ever before, crowdsourcing has become a great way to gather data and information. There is a great deal of work to be done at the Internet's scale, and bringing in the newer social and psychological sciences to gather data directly from people, without too much overhead, is a great way to get things done. Wikipedia is the biggest example of such a project. Everyone uses it as a source of information when curious about something, and it is entirely the creation of users putting in information. Of course, a certain amount of editing must be done, but that is a lot easier once you have a page already written out.

It works for many things: open-source programs, digitisation of texts, running studies, informing about different places and events all around the Earth. Crowdsourcing lets anyone become a part of something bigger than themselves and accomplish more as a part of humanity.

In the case of making text inputs and working with digital texts as a whole – "digital humanities" – crowdsourcing is a very good way to increase the worldwide heritage. With so many texts written before it became possible to spread them at super-fast speed to all corners of the Earth, there is a great need for help that doesn't require any special skills. Checking texts for mistakes made by OCR software, scanning old books, etc. is easy for everyone, and there is a need for it.

Here in Abu Dhabi there is a great language barrier, as Arabic is hardly read by computers and many mistakes result. Also, in the context of NYUAD, we have a grand library collection of ancient Arabic texts that most of the world hasn't yet seen. Making a catalogue of these texts would be one way to increase their availability, and it could be done using crowdsourcing from within NYUAD. Beyond that, having many students fluent in Arabic, we could even transfer these texts into a digital format that is accurate and useful to scholars all around the world.

This work would be hard and long, and a great pressure on only those people who know Arabic, so the best part of crowdsourcing – that it is free – will not be an option for this project. Some sort of encouragement would be needed to find enough participants, but even then the numbers would be small, and the chance of students carrying out the workload is small. This is one of the biggest problems faced by projects like this, which have a limited target crowd. So instead we could work out something that would help the Western world get to know the Arab world, rather than only those already interested in Arabic.

Another suggestion for a crowdsourcing project that could be carried out in the UAE, with the special help of NYUAD, is gathering a "yellow pages" for Abu Dhabi. A lot of tourist guides are posted on the Internet, but for the UAE they don't include sufficient information, and not many people are writing them. One option is the expats living here, but they have to attend to their lives and work, so they don't have enough time to explore; another is journalists, who only come to visit; and the last, least likely, is the locals who live here. This is a country with a very closed society, so having some way to understand more about it, presented in a pleasant, user-friendly manner, would be a great project.

NYUAD students, and other students who live here because they attend international universities, are the best option for crowdsourcing on such a project: we have a lot of free time, we travel around a lot, we try to find different forms of entertainment, and we also have different perspectives on what is "fun", "interesting", or "cheap", which would allow a broader audience to gather. Sharing one's adventures is a good motivation to participate, because people love to talk about themselves. If there is a unified form for presenting and rating information about plans, it will also be easily categorised (in fact, as it is input). Such a project may face editing problems, because young people can sometimes be too impulsive and write things that are not accurate; so, just like Wikipedia, there would be a need for editors, who would not only check the entries but, if possible, even visit the places to give a more informed statement on a place, event or series of events.

Digitised Text

Before and after - Bulgarian in Abbyy FineReader

The miracles of the 21st century! Not only do we have the .pdf format for scanned documents, but we can also turn them into text documents that one can edit. This can be done in a number of ways, including with Google's application, but we use Abbyy FineReader. The pros of this software are that it supports many different languages, many of them with included dictionaries to help with word recognition. You can also teach it to recognise words (when using it on Windows) and patterns in order to clean out mistakes. Good thing the creators included a dictionary for the Bulgarian language! (Thanks, neighbours!)

My project will most definitely be a continuation of my work at the Sofia Central Library (Столична библиотека). There I used to index, and often type up on a computer – digitise by hand – the works and books of old Bulgarian collections. Most of the books were quite old: whatever was left after the bombing of the library in 1945. I hope I will be able to get scanned versions of such documents and digitise them. So far, I am working collaboratively with the library to find the most unique and intriguing works for my personal corpus. If that doesn't work out, I will take the most practical texts, ones that would help them expand their collection.

On the left you can see the result of turning a .pdf in Bulgarian into a raw format to work with. The original wasn't a paper source, so it was quite easy for the software to recognise. Moreover, as this is a text in Bulgarian, I noticed that the software even underlined the places where spelling mistakes were made. There are also a couple of lines of old Bulgarian poetry, and the software noted that there was something a little off about the language. Amazing!