Fire All Mighty


My idea behind the creation of my first project for Cardboard had many dimensions – social, humanitarian, and the pure joy of creation. I portrayed a calm night in a forest, with a full moon in the distance and a peaceful background of night bugs singing. This whole utopia was extended by a not-so-utopian factor: it was all on fire. There were fires burning everywhere around the player, their blazing sounds covering the peaceful melody of the night. The only possible interaction with the environment was to shoot out more fireballs, so the player was unable to prevent the disaster from happening; she could only make it worse.

The first interpretation of my project lies in the social context. It is about the literal destruction that humans cause in Nature. Our involvement only makes it worse, as we are unable to turn back time and prevent the disaster from happening in the first place.

The second interpretation of my project is a little more subtle. I imagined this forest as a map of the human brain and consciousness, with its sulci and gyri as the terrain – a mountain on which ideas, the trees, grow. When destruction, depression, or any other mental condition takes over – the fires – oftentimes the individual cannot stop it by rushing to take some action, shooting fireballs in my example, and should instead stay calm and take it all in; let it flow through them. To me this is a very important issue, both because of my own recent and not-so-recent struggles, and because I have come to realise that many people suffer in their everyday lives because they are not willing to stay calm and let go. I wanted to show this struggle and also give a way out of it – listening to the peaceful sounds, looking around, and enjoying what one has. This works in the context of NYUAD because we live in a desert, so such forests are not present here, but they are still highly appreciated by many.
Of course, it can also be interpreted as humans' inability to prevent entropy, a metaphor for our ongoing struggle. This is a darker interpretation, but still a valid one.

Lastly, I enjoy forests and fire, and I wanted to experience them. Virtual reality allows one to interact with fire like never before, like a superpower, and to manipulate it in previously impossible ways. It empowers people, which makes them feel good.

On the topic of creation, I will now say a few words about the actual code behind the project. For the fires spread around the forest I used a free asset from the Unity Asset Store, which came with the fires and their sounds already prepared. The background, the terrain, the forests and the moon I made using options already built into Unity. For the realistic forest sounds I used a free recording of a forest that I found online.

Shooting the fire, my interaction, was naturally the hardest part to make. At first I wanted to use the Fire Asset's options; however, I learned that reading someone else's code and being able to understand and use it is not always the most straightforward thing, and I was unable to do it. So I did what my professor recommended: “Why not make it yourself?”

Thus, I created a particle system that I connected to the Cardboard trigger with an if statement, so that it fires only when you press the trigger. Then I got the position of the player's view in order to make the particles shoot into the right field of view. Using the position and rotation of the camera, I told the particle system to go in that direction with the Instantiate function.

My biggest problem was making the particles move uniformly in the direction in which they were shot, because by default they spread around. In the Update function of their script I added movement along transform.forward, multiplied by a travel-time factor that I could control. Then I attached a material and a sound effect from the Fire Asset to my particles so that they look like little fireballs.
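The movement logic behind this is simple vector math. The real script is Unity C#, but a minimal Python sketch of the idea looks like this (all names here are my own illustration, not the project's actual code): each frame, the fireball's position advances along the direction the camera faced when it was fired, scaled by a speed and the frame time.

```python
# Sketch of the fireball motion logic (the real project uses a Unity C#
# script; class and variable names here are illustrative only).

class Fireball:
    def __init__(self, position, forward, speed):
        self.position = list(position)  # world-space position (x, y, z)
        self.forward = forward          # unit direction the camera faced when fired
        self.speed = speed              # units per second, tunable

    def update(self, delta_time):
        # Equivalent of "transform.position += transform.forward * speed * dt":
        # move uniformly along the firing direction instead of spreading out.
        for i in range(3):
            self.position[i] += self.forward[i] * self.speed * delta_time

# Fire one ball straight along the camera's view direction (+z here)
ball = Fireball(position=(0.0, 1.6, 0.0), forward=(0.0, 0.0, 1.0), speed=10.0)
for _ in range(60):          # simulate one second at 60 frames per second
    ball.update(1.0 / 60.0)
print(ball.position)         # has travelled ~10 units along z
```

Because every particle keeps the forward vector it was created with, they all fly in a straight line rather than dispersing.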

If I had to work more on this project I would do the following:

  • make the fires in the forest appear gradually for the purpose of player immersion
  • make a single particle when triggering the button
  • make the particle induce fire where it lands – this could be done by checking whether there is anything in the way of the particle with a Raycast function, and then triggering a fire asset or particle system on the spot defined by the location of the original particle when the raycast returns a small distance (0.4f, for example)
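The raycast idea in the last bullet is really just geometry: find where the particle's path meets a surface and spawn a fire there if the hit is close enough. Here is a minimal Python sketch of that logic; Unity's Physics.Raycast would handle arbitrary colliders, so the flat ground plane and all function names below are my own simplifying assumptions.

```python
def ray_hits_ground(origin, direction, ground_y=0.0):
    """Return the point where a ray meets a flat ground plane, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy >= 0:                       # ray goes up or level: it never lands
        return None
    t = (ground_y - oy) / dy          # parameter along the ray to the plane
    return (ox + t * dx, oy + t * dy, oz + t * dz)

def maybe_spawn_fire(origin, direction, max_distance=0.4):
    """Return the spot to ignite if the hit is within max_distance,
    mirroring 'when the raycast returns a small distance (0.4f)'."""
    hit = ray_hits_ground(origin, direction)
    if hit is None:
        return None
    distance = sum((h - o) ** 2 for h, o in zip(hit, origin)) ** 0.5
    return hit if distance <= max_distance else None
```

In Unity the same check would read the `hit.distance` field of the raycast result and instantiate the fire prefab at `hit.point`.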

Memoir of a chart of the East Coast of Arabia – Project 1

In my post about crowdsourcing I mentioned a digitisation project we worked on as a class. We made accounts on the site 18thConnect, and together we digitised Memoir of a chart of the east coast of Arabia (1764). This manuscript is the telling of a journey and the description of an old map. To access the site one needs to create an account and password, and can immediately start editing any text that is available on the site.

This manuscript, written back in 1764, is read through an OCR tool, in particular TypeWright. The text is read, and then an editor, like me and my classmates, has access to every line identified by the software. This includes many mistakenly read ink spots, paper folds and the like, which should be removed so that they are not included as data. There are also mistakes in the spelling of words, and sometimes letters that are hard to read. This is the first level of editing that we had to do. Many other additions to the text and its metadata followed. As my final project I decided to finalise the document and prepare it for online publication.

In the following paragraphs I will describe the process, some problems and some solutions, and reflect generally on the making of such a publication.

Turning the Memoir into a TEI XML file requires many additions to the original text and corrections of the OCR's mistakes. This format is associated with specific tags, similar to those in HTML, but more literature-specific. In order to add more information about the specific layout of the text – the fonts, the characters' size, their purpose – I started using the TEI Guidelines. They are very complex because they deal with texts in enormous detail. There are notations for verse structures, rhymes, play structure, character information and many, many others. There are tags that give incredible amounts of metadata about the information put into the text. In order to properly annotate a text in this manner one must know the specific tags, similar to learning a programming language. I am not yet competent in the many functions of the language, so my current edition of the Memoir isn't very profound in metadata. However, diving into the depths of TEI showed me how much there is to learn and what precision of information one can achieve. I found that TEI is not a very popular language outside the limited circle of Digital Humanities. I also noticed that many of the examples on the site are given with logograms, which suggests involvement from East Asian countries.

Throughout the whole text there is a spelling peculiarity that comes from the age of the English language used. Often, instead of the modern “s”, the author would put the long “s” (which resembles an “f”), according to the spelling of the time. In order to provide an accurate “translation”, editors have to replace the old letter with its modern equivalent to keep the meaning. There are also plain spelling mistakes, where the OCR couldn't read the text because of a blur, and we had to correct those as well. These were minor things that were fixed a couple of days after we started. There were also many symbols – such as pictograms of anchors, degree notations and fractions – that aren't traditionally included on a keyboard. Those are initially noted with “@” and later replaced with the right symbol using the notations in the TEI Guidelines.

Punctuation and emphasis like italics, bold, and underline are manually inputted to correspond to the text. This task first requires a closer look at the text and its meaning. Putting in the different character-descriptive tags provides a further reading from the editor's side. The process happens attentively, first in order to copy the text and second to transfer the meaning correctly. Here the editor has some freedom of interpretation, but mostly the obligation to fully transfer the author's intent. The same goes for page layouts. They should be inputted so that the original arrangement is kept even in digital format and can be recreated as well. These minor but important additions relate to the typography of the text. With time, understandings and interpretations change, so keeping as much as possible of what was intended is important. The software, TypeWright, keeps track of all changes made to each line, with the username, time and the change made. This information is valuable, and it is kept in the XML file, because it is very influential to the meaning of the text.

I have used the tag “hi rend=“italic”/“bold”/etc.” to mark text. The location of certain pieces of text I have marked with “hi rend=“center”/“right”/“left””. These tags are specific to TEI; however, I have also used other tags, such as marking a paragraph with “p”, that are typical for HTML but are used in TypeWright as well. Mistakes can be made in any of these small insertions by the editor. Or at least there are enough different ways for them to be written that differences may occur. For example, I know that not all commas that follow an italicised piece are themselves italicised. Sometimes I would put the closing tag “/hi” before an additional comma because I hadn't seen it. There are many minor variations like these that come either from the inconsistency of one editor or from the different styles of many. I would also add that even in this text I don't think I have managed to capture quite all of the elements on the page. There are lists, different indentations and stories within stories that I haven't marked. This is partly because I don't feel very comfortable using all of the TEI notations, and partly because it requires a lot of time and screen-staring.

Having transcribed the text and all of the information contained within the page, the editor can add annotations, comments, footnotes, and any type of tag to provide additional interpretations. In my edition of the Memoir I have not completed this phase. My interactions with the text end with the input of the actual information.

From here, however, begins the academic interest and the need for more precise knowledge of TEI. The map can be described in great detail, as there are some repeating characters, such as Captain Smith, and there are many locations. I think that, given the nature of the text – the description of an old map and the memoir of a traveller – it would be very appropriate for someone to georeference the locations and see if the distance estimates are correct. As we noticed during GIS Day, the coasts around the Arabian Peninsula change often, and it would be interesting to see the changes from over 250 years ago. Moreover, there are instances of interaction with groups living near the coasts. There are lists of stocks, plans for trip provisions, and accounts of valuables that the Captain encountered.

The user interface of the site 18thConnect cannot be left without comment, because in such crowdsourcing initiatives the user experience is very important. The site doesn't look too welcoming from the home page, but the bigger issues appear when editing starts. The portion of the original text that is visible is very small, which would make sense if this space were given over to editing instead. But it isn't. The OCRed text appears in a very small box below the original, which is not very easy to navigate with a mouse, as you have to click on the next line in order to start editing it. The problem is that the given keyboard shortcuts for moving between lines, inserting lines, deleting and submitting don't work properly. Most of the times I have used them the site doesn't respond, and even after clicking the “Insert line” button the page doesn't refresh and no result shows. Not to mention that if you are using iOS the shortcut commands are completely inaccurate.

That is not meant to discourage you from editing texts; it is just a note to the creators. There is a certain satisfaction of contributing to the world that I associate with this finished digitisation, and with others I have completed. Although small, it is a lasting impact that one can make on the world, which will be recorded in the metadata of the work and remembered for as long as it exists. I think it is fulfilling for people to spend some time on a text, because it triggers both internal and external change.

NOTE: To anyone interested in this particular text, I should mention that I know of at least two instances where something must be added, but I am unable to do so.
Top of page 8 needs to have a line added: “hi rend=“center””(2)“/hi”.
On page 9, line 23: a Boat “founded” or “sounded”?
Add a table on page 4.




In my experience of Digital Humanities, the recurring themes are of two kinds: the ones about the subject itself, the meta ones, and the ones about its insides.

Of the first kind, I see a lot of similarities to the themes in psychology: what is this study? Is it a science, a humanities discipline? What is its purpose, and who does it? These types of meta questions are common to all newly defined disciplines. Establishing the work of the field is important given that it is taking new leaps now, as hardware also persistently develops. Using computers to map out texts is at the basis of teaching neural networks to create texts themselves and to understand more complex ideas. Making precise historical maps and digitising information will also do that. So, no matter what the exact meta is, it is definitely a useful area of study and should be established as one.

The other type of recurring themes is inevitable to mention when speaking about digital humanities: data, metadata, mappings, crowd projects. In the archaeological project that I participated in with my professor and a group of archaeological scientists, all of these themes made an appearance. Gathering data since 2005, the archaeologists studying Saadiyat Island discovered the many hearths on the island, showing that life has existed here for millennia. Then they put all of the information together to form a map of the place, which was further advanced by 3D mapping devices that could take data from the ground and create elaborate pictures, including height. All of the people who took part in the trip also took part in the crowdsourcing side of the project. We all contributed to making a precise map, with location coordinates, of all of the objects that were observable on the site. This included not only historical items, but also ones that became history as we documented them.


Data comes in many shapes and forms – objects, pictures, text, etc. – but really what concerns humanitarians is in the form of 1s and 0s, just like the data that interests programmers. But our data begins its way in a very different, real format. All information created in the humanities can be turned into computational data. The works of some poet can be translated into computer language and analysed for their contents, or used to teach a computer something. The paintings of an artist can be digitised for a machine to count the strokes. A historian's works can be compared to those of people long gone. These are just a few examples of data in the humanities.
Data can be stored in many ways, or formats, depending on the need for it. For creating networks, for example, the extensions .csv or .xls are a good, easy way to keep data organised and easily accessible to other software (Google Sheets, Microsoft Excel, etc.). An extension that keeps all of the digital information in a picture is .raw; however, such files are quite big.
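As a tiny illustration of why .csv suits such data, a few lines of Python can write and re-read a small table that any spreadsheet could also open. The miniature character-network table below is my own example, not data from the Memoir project.

```python
import csv
import io

# A miniature humanities data set: nodes of a character network
# (an invented example, loosely inspired by the Memoir's Captain Smith).
rows = [
    {"name": "Captain Smith", "role": "narrator", "mentions": 12},
    {"name": "the pilot",     "role": "guide",    "mentions": 3},
]

buffer = io.StringIO()                 # stands in for a real .csv file on disk
writer = csv.DictWriter(buffer, fieldnames=["name", "role", "mentions"])
writer.writeheader()
writer.writerows(rows)

buffer.seek(0)
back = list(csv.DictReader(buffer))   # Google Sheets or Excel read the same format
print(back[0]["name"])                # prints "Captain Smith"
```

The format is just plain text with commas, which is exactly what makes it so portable between tools.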
On the topic of big things, data sets can be of small, medium or big sizes. Small data is easy to navigate and is even workable by hand. Medium data requires a lot of input and software in order to be processed. Lastly, there is big data: it is gathered from many sources, and analysing it, even with software, usually takes a substantial amount of time. It is really hard to work with because mistakes are hard to find and details might be left out, so the outcome really depends on the software working with it. A famous example of a failure with big data is Google Flu Trends, which failed numerous times due to misinformation, or because it didn't take into account the metadata of the information it had (e.g. the time of year, which correlates with the spread of the flu).


Metadata is the background information generated not directly by users but by devices. I believe the kind most relevant to humanities data collection is location. Every message, photo or check-in has coordinates attached to it, with the particular geolocation the phone estimates based on GPS satellites – and these are hardly ever wrong. So, without the need to manually input locations or search for pins on maps, one can just use this to map anything. In the humanities this can be used for projects that gather information about anything that is currently available to be photographed – just like we did in the archaeological project.

But metadata can be added manually as well, which makes it useful not only for location but for sorting information by anything inputted as metadata. Say, in a play one can ask many questions that cannot always be answered from the pure text, like: whom does the line address, where in space was it said, where in stage space was it said, etc. These types of details can also be contained in the metadata of a play, in order to help both literary scholars and even actors work with it. In fact, intonation, accent, facial expression – all kinds of details can be added to a metadata bank of a file. Possibly, in future productions where the actor's own interpretation of a character or situation is not considered important, he could even learn a play on his own, only with information from the metadata. All directors beware!
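Such a metadata bank could be as simple as one record per line of the play, queryable by any field. A small Python sketch (all field names here are my own illustration of what such a record could hold):

```python
# A hypothetical metadata record for one line of a play; the fields are
# invented examples of the details discussed above.
line_metadata = {
    "speaker": "First Actor",
    "addressee": "Second Actor",       # whom the line addresses
    "stage_position": "downstage left",  # where in stage space it was said
    "intonation": "rising",
    "facial_expression": "surprised",
}

def lines_addressed_to(lines, who):
    """Filter a list of line records by any metadata field, here the addressee."""
    return [line for line in lines if line["addressee"] == who]

print(lines_addressed_to([line_metadata], "Second Actor")[0]["speaker"])
# prints "First Actor"
```

Sorting or filtering by stage position, intonation, or any other field works the same way, which is exactly what makes manually added metadata so flexible.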

Wait – if there is so much information behind the scenes of electronic files, then how do I know what I am giving away? And who has access to this metadata again? Well, this is tangent to one of the biggest privacy issues of the 21st century. More info – Metadata.


Crowdsourcing is the miracle of the accessible Internet. People are on it all the time, they generate data and metadata of all sorts, and these can be useful to computers that analyse them. The simplest example we looked at was a site where people had to sort images into categories in order to help a machine learn to differentiate them. This task is impossible for software to do properly and subjectively the way humans would, but it is not important enough to assign people to do it for a paycheck (in this case). So, simply by asking the people of the Internet to assist, one can accomplish a lot of things – for free. Users accumulate great amounts of knowledge about the world, and sharing it with people who can work with it is always great. Crowdsourcing doesn't only have to be online; it can easily happen in real life as well, as in the example with the archaeology class. However, the information gathered always ends up accumulated in a digital source in order to be easy to work with, so making the user input it digitally to begin with is even easier.
In class we are working on a project concerning the food places around Abu Dhabi and, particularly, whether they can serve NYUAD students well (prices, closeness to campus, delivery, etc.). In this project we are working as a team to input data, but we are also allowing other people to input information about their food adventures in the city, just as they go along, without any type of incentive – crowdsourcing. I find it amusing how well Google's platform Google Forms works for such endeavours, as it is easy to work with, not time-consuming to set up, and has enough options.

Digitised Text

Before and after - Bulgarian in Abbyy FineReader

The miracles of the 21st century! Not only do we have the .pdf format for scanned documents, but we can also turn them into text documents which one can edit. This can be done in a number of ways, including using Google's application, but we use ABBYY FineReader. The pros of this software are that it has many different languages, many of them with included dictionaries to help with word recognition. You can also teach it to recognise words (while using it on Windows) and patterns in order to clean out mistakes. Good thing the software is Russian and the creators included a dictionary for the Bulgarian language! (Thanks, neighbours!)

My project will most definitely be a continuation of my work at the Sofia Central Library (Столична библиотека). There I used to index and often type up on a computer – digitise by hand – the works and books of old Bulgarian collections. Most of the books were quite old, whatever was left after the bombing of the library in 1945. I hope I will be able to get scanned versions of such documents and digitise them. So far, I am working collaboratively with the library to find the most unique and intriguing works for me to put into my personal corpus. If not, I will get the most practical texts, ones that would help them expand their collection.

On the left you can see the result of turning a .pdf in Bulgarian into raw text to work with. Originally this wasn't a paper source, so it was quite easy for the software to recognise. However, as this is a text in Bulgarian, I noticed that it even underlined the places where spelling mistakes were made. Also, there are a couple of lines from old Bulgarian poetry, and the software noted that there is something a little bit off with the language. Amazing!