2025-group-13

This is the repository for the group project of the unit Software Engineering Discipline and Practice (COMSM0166) 2025, part of the MSc Computer Science at the University of Bristol.

Game: DINO ESCAPE

Game cover

Team

Group 13 photo
| Name | Email | GitHub username | Main role |
| --- | --- | --- | --- |
| Ran Tian | [email protected] | HaruTian4604 | Graphics & Design |
| Shrirang Lokhande | [email protected] | ShrirangL | Software Developer |
| Mahesh Nanavare | [email protected] | MaheshNanavare | Software Developer |
| Aya Saneh | [email protected] | aya-codes | Project Manager |
| Santiago Muriel | [email protected] | smurielv | Software Developer |

Project Report

Introduction

ADD gif playing the game

Dino Escape is an action-packed retro platformer inspired by classic Mario Bros gameplay, but with a prehistoric twist and a different gameplay experience. Step into the role of the last T-Rex, racing against extinction through challenging levels and vibrant landscapes.

Though our game is inspired by classic platformer games, the health bar provides a slightly different experience. Firstly, players need to pay attention to the environment: looking for food, which increases their health, and avoiding dangerous elements, which rapidly decrease it. Secondly, because the harsh landscape constantly threatens the player, health drains continuously. This adds a time element, making players feel the constant threat of extinction.
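As a minimal, illustrative sketch of this mechanic (the class and property names below, such as Dino and DRAIN_PER_FRAME, are hypothetical and not taken from our codebase), the health bar can be thought of as a per-frame update in p5.js:

```javascript
// Illustrative sketch of the health mechanic described above; all names
// and numbers here are placeholders, not the game's actual values.
const DRAIN_PER_FRAME = 0.05;   // constant drain: the "extinction clock"

class Dino {
  constructor() {
    this.health = 100;
  }
  update() {
    this.health = constrain(this.health - DRAIN_PER_FRAME, 0, 100); // p5.js clamp
  }
  eat(food) {
    this.health = min(100, this.health + food.nutrition);  // food restores health
  }
  hitHazard(hazard) {
    this.health = max(0, this.health - hazard.damage);     // lava, ice, enemies
  }
}
```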

ADD something gif showing extinction theme

The extinction theme is present throughout the game: in the first part, a volcano has erupted and lava is everywhere – jump on the platforms to escape death. In the second part, your world has frozen over, and every moment you stay outside threatens your existence. While the first part is relatively easy and achievable for gamers of any skill level, the second part is far more challenging, and it has a way of keeping even experienced gamers engaged.

The game uses classic keyboard controls to ease the learning curve, while also introducing responsive design through the novelty of a mobile version. Developed primarily with p5.js, with additional flair from HTML and CSS, Dino Escape merges a nostalgic gaming experience and style with modern web technologies to deliver a uniquely engaging adventure.

Requirements

Ideation & early stage design

The ideation process to choose and define our game began with every member researching and brainstorming, after which we compiled an initial list of 13 proposals. We defined a few categories by which we could evaluate the complexity of each idea, such as the number of core mechanics or the physics requirements. Then, we evaluated each option using the following criteria: Attractiveness, Complexity, Feasibility using p5.js, and User Friendliness.

ADD Ideation gif

From there, we ranked our choices, discussed the ones that scored the best, and chose two very different games to validate through paper prototypes with potential users. Specifically, we chose ‘Dino Escape’ and ‘Train of Thought’.

Choosing our game

ADD Gifs of our paper prototypes

Based on the feedback we received, we settled on ‘Dino Escape’. First, it was clearly more intriguing to potential users who got to see and try both prototypes. Second, many of our peers pointed out that it had much more potential for expansion: new levels, different enemies, and a more salient story.

Before developing the full game design and story, we carried out some feasibility studies to get started with p5.js, to gather more feedback about the game mechanics, and to make sure we were still excited about ‘Dino Escape’. We developed our first digital prototype:

ADD Gifs of our first prototype(s).

Stakeholders

Having studied Ian Alexander’s Stakeholder Onion Model, we were equipped to think much more broadly about our potential stakeholders. As we filled in the Onion diagram, we realised that this game would have an impact beyond the deadline for this report. Just as we looked to the work of previous students for inspiration, future students will look to ours.

If we were to continue working on the game in the future, its impact would expand to new stakeholders. For example, the inner circle, which is currently limited to the members of Group 13, who are mainly developers, could grow to encompass more roles such as product owners, financial advisors, and marketing experts.

In “Stakeholders Without Tears: Understanding Project Sociology by Modeling Stakeholders”, Ian Alexander and Suzanne Robertson discuss what a stakeholder is, ways of identifying stakeholders, and the continuous process of maintaining relationships with stakeholders and ensuring their involvement. The section of the article on awareness of role changes helped us understand the impact of our own role changes within the group. For us, changing roles created a more balanced sense of shared ownership of our product.

ADD Onion Model

Reflections

During the first design stage, through user testing and feedback we were able to validate assumptions, collect new requirements, receive suggestions and broaden our understanding of the game. From this information we were able to adjust and complete the game design, based on a user-centred approach and continuous iteration.

We reflected on and learned how organisations use epics and user stories to clarify product requirements and organise work, with acceptance criteria guiding whether goals have been met. Initiatives represent the highest level of long-term goals, while epics consist of multiple user stories spanning several sprints, and user stories describe achievable use cases for users within a single sprint. Teams vary in focus, with some addressing minority user groups early and others concentrating on the main target group first, and either approach requires thorough research. Although templates for user stories and acceptance criteria provided initial guidance, their rigidity eventually gave way to a more flexible approach that proved particularly useful in our game development process.

Use Case Specifications

ADD Diagram with explanation

Epics and User Stories

ADD Kanban gif

Here are a few of the user stories that shaped our game development:

  • As a DevOps engineer, I want to configure branch protection rules for the main branch, so that I can prevent accidental force pushes and deletions and enforce pull requests before merging, maintaining code integrity and stability.
  • As someone who rarely plays games, I want to control the character using the right, left, and up keys, so that navigation is intuitive and the learning curve is smaller when I play.
  • As an experienced gamer, I want the collisions between the player and other objects to feel realistic, so that I am not distracted by collision bugs while playing.
  • As an amateur user, I need clear instructions on how to move the character (dinosaur) and what to pay attention to, so that I can spend less time figuring out how to play.
  • As a user with a busy schedule, I want to be able to pause and resume the game, so that I am able to play in short bursts and my progress is not lost.
  • As an avid reader and movie watcher, I want to see elements of the story as I am playing, so that I feel engaged throughout and excited to complete the game.
  • As a person who does not own a laptop, I want to be able to play the game from my phone, so that I can experience this game without needing to borrow a computer.
  • As someone with some knowledge about dinosaurs, I want to see that the extinction aspect has a basis in scientific knowledge, so that I feel the developers cared enough to learn about this area.
  • As a lover of art, I want the visual aspect of this game to be pleasing, so that it can be enjoyable to play.
  • As a player of several platformer games, I want this game to have an interesting twist, so that I can experience both familiarity and variety.
  • As an environmentally-conscious gamer, I want this game to be designed with sustainability in mind, so that playing has a low impact on the environment.

In “The Use and Effectiveness of User Stories in Practice” by Garm Lucassen and colleagues at Utrecht University, the authors found that the “why” in a user story is actually necessary, though many would consider it optional (Lucassen et al., 2016). Though we often struggled to identify why a user story was important, the article stresses the importance of clarifying that reason, especially for less experienced teams. We found that searching for that reason often led us to prioritise the user story, instead of seeing it as just another task to complete.

Design

  • 15% ~750 words
  • System architecture. Class diagrams, behavioural diagrams.

Implementation

How it started

Start describing how we start the development of our game.

Challenge 1 - Game Mechanics and Collision Detection

Go into detail about our first challenge, game mechanics and collision detection.
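To give a flavour of what this challenge involves, the sketch below shows one common platformer approach, axis-aligned bounding-box (AABB) overlap testing; it is purely illustrative, and the names and numbers are hypothetical rather than taken from our codebase.

```javascript
// Hypothetical axis-aligned bounding-box (AABB) overlap test, a common
// starting point for platformer collision detection; illustrative only.
function overlaps(a, b) {        // a, b: { x, y, w, h }
  return a.x < b.x + b.w &&
         a.x + a.w > b.x &&
         a.y < b.y + b.h &&
         a.y + a.h > b.y;
}

// Example: snapping the dino on top of a platform after a fall.
const dino = { x: 40, y: 90, w: 20, h: 20 };
const platform = { x: 0, y: 105, w: 200, h: 10 };
if (overlaps(dino, platform)) {
  dino.y = platform.y - dino.h;  // resolve the collision by moving the dino up
}
```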

How it evolves

How this process starts changing and evolving.

Challenge 2 - Enabling a Mobile Version

After participating in the Testathon and reflecting on the next steps for the game, we concluded that enabling the game for mobile devices would be our next big challenge. We started by researching the main implications and feasibility concerns, and the following key points became our work plan.

  • Input method changes. Our game used keyboard input (e.g., arrow keys, spacebar) to control gameplay. Mobile devices lack a physical keyboard, so we would need to adapt the controls to touch interactions.
  • Responsive design and layout. Our game was optimized for desktop resolutions, with fixed canvas sizes and positioning. Mobile screens are smaller and come in different aspect ratios, so the layout must adapt to these constraints.
  • Performance and optimization. Our game was built for desktop environments, where performance constraints are generally less severe. Mobile devices have less processing power and memory than desktops, so graphics, animations, and collision detection may need optimization.
  • Testing across devices. Our game had been tested mainly on desktop browsers, and mobile devices vary widely in browser capabilities, touch responsiveness, and performance.

For the first point, we designed a central controller class for all input handling, able to receive signals from multiple sources (keyboard, mouse, and touch events). It was hard at the beginning, because we needed to refactor all the previous input events, but it was more than worth it: we not only implemented the desired feature, we also ended up optimizing our game and improving the quality of our code. The main change was switching from checking which key was pressed or released inside every class that handles input, to a centralized controller that runs only once per draw() iteration and propagates the signal to every part of the game that needs it. We also decided to implement ‘buttons’ inside the game canvas, but only ‘activate’ them if the device has touch capabilities.
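A minimal sketch of this idea in p5.js is shown below; the class name InputController, its methods, and the touch-button zones are our own illustrative choices, not the real implementation.

```javascript
// Illustrative central input controller, polled once per draw() iteration;
// names and layout are hypothetical, not the game's actual code.
class InputController {
  constructor() {
    this.actions = { left: false, right: false, jump: false };
  }
  poll() {
    // Keyboard input via p5.js helpers
    this.actions.left  = keyIsDown(LEFT_ARROW);
    this.actions.right = keyIsDown(RIGHT_ARROW);
    this.actions.jump  = keyIsDown(UP_ARROW) || keyIsDown(32); // spacebar
    // On-canvas "buttons", only considered on touch-capable devices
    if (window.matchMedia('(pointer: coarse)').matches) {
      for (const t of touches) {                 // p5.js touch list
        if (t.x < width / 3)            this.actions.left  = true;
        else if (t.x > (2 * width) / 3) this.actions.right = true;
        else                            this.actions.jump  = true;
      }
    }
  }
}

const input = new InputController();

function setup() {
  createCanvas(400, 200);
}

function draw() {
  input.poll();                      // one poll per frame...
  // player.handle(input.actions);   // ...then every object reads the same state
}
```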

Image: Include a gif showing something about this

For the second point, we faced some common challenges, such as extracting device characteristics and making decisions based on them. As in almost all of our development journey, after experimenting and testing we decided to set rules according to the screen size of the device on which our game is running. Combining HTML, CSS, and p5.js, we were able to display the whole page, and the game itself, in the best possible fit for the available screen.
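A simplified sketch of this approach is shown below; the 16:9 aspect ratio and the 960 px cap are placeholder values, not the rules we actually settled on.

```javascript
// Simplified responsive-canvas sketch in p5.js; breakpoints and aspect
// ratio are placeholder values, not the game's real sizing rules.
const ASPECT = 16 / 9;

function fittedSize() {
  let w = min(windowWidth, 960);     // cap the width on large screens
  let h = w / ASPECT;
  if (h > windowHeight) {            // e.g. portrait phones: fit by height instead
    h = windowHeight;
    w = h * ASPECT;
  }
  return { w, h };
}

function setup() {
  const { w, h } = fittedSize();
  createCanvas(w, h);
}

function windowResized() {           // p5.js callback on resize or rotation
  const { w, h } = fittedSize();
  resizeCanvas(w, h);
}
```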

Image: Include a gif showing something about this
Image: We could include an image comparing two or three different displays

For the third point, apart from the optimization already achieved on the first point, we focused on three aspects …

Image: Include a gif showing something about this

For the last point, it is important to mention that testing across devices was how we worked through the first three points. During development, we continuously tested new features and changes using the developer tools of different web browsers; specifically, we used the device emulator capabilities to test against different mobile devices. Then, after deployment, we used our own phones and other real mobile devices to continue testing. This process was a key part of successfully overcoming the challenge: we not only confirmed that things worked as expected, we also found a lot of bugs and errors. For example, the first time we deployed these features, they only worked well on the latest, better-equipped devices, and only in specific web browsers.

Image: Include a gif showing something about this

Learnings and improvements

What we learn in the process and how we improve the way of developing.

After iterating over several sprints and switching roles within the team, we realised that maintaining a consistent and coherent code style is very difficult. It took us many sprints and a couple of refactoring efforts to achieve better integration between the components developed by different people. We were able to improve our development process by adopting some basic standards and quality rules, such as using naming conventions and defining how to document code.

Evaluation

To evaluate our game, we conducted qualitative evaluations using Think Aloud sessions, Heuristic Evaluation, and user interviews, as well as quantitative evaluations using the System Usability Scale and the NASA Task Load Index. We used the qualitative evaluations to guide our development of new features and the fine-tuning of existing ones.

On February 25th and March 4th, our classmates evaluated our game during the lab sessions. However, we wanted more feedback and a larger sample, so we also attended the Testathon on March 5th. In preparation, we created a consent form that participants could fill in, containing options such as consent to be photographed or recorded while playing or giving feedback. We also prepared a participant information sheet that explained our game and why we were collecting feedback and taking photos and videos. These documents can be found here.

ADD link to documents

Qualitative Evaluation

At the Testathon, we received feedback from a wide variety of players, including many with experience in game development, so the qualitative feedback we gathered there was rich and helpful. Though we did perform several Think Aloud and Heuristic evaluations, most of the feedback came in the form of interviews with a few simple prompting questions. We prepared these questions ahead of time as a team, focusing on avoiding leading questions. Some examples are “What did you think about the visuals and graphic design?”, “What do you think the game is about?”, and “What do you imagine for the next level?”.

The responses were analysed as follows: first, statements were categorised according to whether they highlighted an issue or something the tester liked. Then, focusing on the issues, we categorised them further by the aspect of the game concerned and counted how many times each type of issue was mentioned. The most pressing issues were: jumping mechanics (10 mentions), consistency in graphics (8 mentions), the need for more levels (7 mentions), lack of sound or music (6 mentions), specific animation issues (6 mentions), and unclear health indicators (5 mentions). After the Testathon, two members of our team dedicated several weeks to these issues and were able to resolve all of them. A further analysis of the specific issues mentioned can be found here.

ADD The full analysis links

Quantitative Evaluation

As for the quantitative evaluation, we developed a Google Form for each of the two surveying methods we learned: the System Usability Scale (SUS) and the NASA Task Load Index (TLX). We collected a large number of responses for both: 47 for the SUS and 32 for the TLX. The SUS results indicated no significant difference between the levels in terms of usability; we scored above average for both, with a mean of 82.6 for level one and 83.4 for level two. On the other hand, there was a definite difference between the two levels in terms of task load. Based on the evaluations of eleven participants who completed the two levels in different orders, we obtained a W test statistic of 2, where a value below 10 would indicate a statistically significant difference. On average, the task load scores were 41.4 and 59.8 percent for levels one and two respectively, indicating that level two was significantly more challenging than level one, in line with our aim for the two levels.
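For context, a SUS score is computed on a 0–100 scale from ten Likert items; the sketch below shows the standard published scoring rule, not necessarily the exact formula our form used.

```javascript
// Standard SUS scoring: ten items rated 1-5, odd items positively worded,
// even items negatively worded; the result is scaled to 0-100.
// Illustrative helper, not necessarily how our survey tallied scores.
function susScore(responses) {              // responses: array of 10 values in [1, 5]
  let sum = 0;
  for (let i = 0; i < 10; i++) {
    sum += (i % 2 === 0) ? responses[i] - 1   // items 1, 3, 5, 7, 9
                         : 5 - responses[i];  // items 2, 4, 6, 8, 10
  }
  return sum * 2.5;
}

// Example: a fairly positive response set scores well above the usual ~68 average.
console.log(susScore([5, 2, 4, 1, 5, 2, 5, 1, 4, 2])); // 87.5
```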

Testing

Process

Creating a team

As a team, we met frequently and always in person. We made use of spaces with screens or whiteboards, which we used to express and discuss our ideas. For our first few meetings, we focused on getting to know each other better, to create an atmosphere where everyone felt comfortable and part of the team. For example, during one of our first meetings, everyone drew a picture of their country of origin, trying to locate their hometown, and told us about their culture and history.

ADD photo from first meeting if exists

Later, our meetings varied in content and focus, but overall we stuck to an Agile mindset and a Scrum methodology and workflow. We had four sprints of one or two weeks each, during which we tried to develop our game efficiently, focusing on a specific goal each time. Afterwards, we would meet for a Sprint Review focused on what we had done, a Sprint Retrospective focused on how we had worked together, and Sprint Planning for the next sprint.

Team profiling

One of our first meetings after getting comfortable with each other was a team profiling session. We spent time listening and understanding each other's experiences, interests, and strengths.

Team profiling activity

We defined the necessary roles for our project, differentiating between critical roles, which would be needed throughout the project, and complementary roles, which would be needed for a specific period of time.

We had a lengthy discussion about whether we should each work in the role we are most skilled at, or whether to explore our interests, and even discussed this with our professors. We decided it would be best to balance both, giving each other opportunities to explore new things by shuffling the roles after the second sprint. This gave each person the ability to experience something new, as well as work in a role they had experience in. At one point, someone felt that they wanted to change roles, and several team members shifted roles to make that happen. In the end, we each did some game development as well as at least one other role.

Team profiling activity

Choosing our tools

Though we had great communication in person and found it easy to work together, it was harder to keep working when we were apart, so we needed well-defined, simple ways to communicate. We used both Microsoft Teams and WhatsApp for communication, depending on the type of message. For coding, we used Visual Studio Code as our IDE and GitHub for collaboration and version control. Google Drive helped us organise and share our documents, and PowerPoint was useful for creating simple diagrams and visual aids.

For communication:
  • WhatsApp: to keep in touch, ask quick questions, and schedule meetings.
  • Microsoft Teams: to centralise communication (discussion, reviews, etc.) and enable asynchronous work.

For coding and writing:
  • Visual Studio Code: our IDE for software development.
  • Google Drive: to write and edit the report, collect pictures and videos, and manage all our documentation and notes (weekly tasks, documents, spreadsheets, PDFs, etc.).
  • GitHub: to host the central repository of the project, including all the game code, and more: GitHub Projects to manage the Kanban board, GitHub Actions to automate the deployment of our code, and GitHub Pages to host our game so it can be played online.

For designing and editing graphics and visual aids:
  • Microsoft PowerPoint: for putting images together and making small edits to them, as well as creating diagrams for this report.

Adjusting along the way

Regarding the implementation of Agile and Scrum, our first sprint was one week long. We used it to test the methodology and the p5.js framework and to attempt a first prototype of our game. We decided to have stand-up meetings four days a week to discuss any issues we were facing. Up until that point we had been using only WhatsApp for communication outside meetings, and we hoped the stand-ups would ease communication and stress.

After that sprint, we met for our first Sprint Review and Retrospective, and it was clear that a week-long sprint was too short and that the stand-ups were not working for us. We decided that all future sprints would last two weeks and that we would have longer meetings twice a week. Later, we added Microsoft Teams as a second space for communication, which gave us a clearer boundary between work and personal life, and we all committed to checking Teams regularly.

According to Martin Fowler, the essence of Agile is to be “adaptive” and “people-oriented” (Fowler, 2019). By adopting new tools when needed and choosing and changing roles for each other’s benefit, we were able to implement the core principles of Agile during this project.

Sustainability, Accessibility, and Ethics

The following section will cover some aspects of the environmental and technical impacts of our game.

Environmental Impact and Sustainability

When we see smoke rise from a chimney, fumes escape from a car exhaust, or piles of plastic sit in a landfill, we can make a direct association with the environmental impact. In comparison, the impact of digital storage is inconspicuous and invisible.

When we began developing our game, we decided on a feature-branch approach. Practically, this meant that we created a new branch for every feature we wanted to develop, then opened a pull request to merge that feature branch into main. GitHub adds a message when a pull request is merged, stating that the feature branch can now safely be deleted, but we avoided deleting any branches or code, assuming that something might become useful again in the future and naively believing that there was no harm in keeping everything.

ADD Branches.gif

After the sessions about Sustainability and looking over the Green Software Patterns, we realised that this way of working was really unsustainable. For the first time, we paid attention to how many stale branches, and how much duplicated code, we were keeping around. We added a new issue to our workflow: cleaning up our codebase. First, we would need to delete all inactive branches, i.e. branches where no new work was being done. Second, we noticed that a lot of our classes included segments of commented-out code that were just cluttering our codebase without doing anything. We also removed any unused CSS definitions and combined many of our images into a single composite image (a sprite sheet) of multiple assets.

ADD Composite image
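In p5.js, individual frames can be drawn directly from such a composite image using the nine-argument form of image(); the file name and the 64 × 64 frame size below are placeholders, not our actual assets.

```javascript
// Drawing one frame from a composite sprite sheet with p5.js image();
// 'assets/dino-sheet.png' and the 64x64 frame size are placeholder values.
let sheet;
const FRAME = 64;

function preload() {
  sheet = loadImage('assets/dino-sheet.png');
}

function setup() {
  createCanvas(200, 200);
}

function draw() {
  background(220);
  const frameIndex = floor(frameCount / 6) % 4;   // cycle through 4 frames
  // image(img, dx, dy, dWidth, dHeight, sx, sy, sWidth, sHeight)
  image(sheet, 70, 70, FRAME, FRAME,
        frameIndex * FRAME, 0, FRAME, FRAME);
}
```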

When we consider the environmental impact, we also think of the carbon footprint of running our game on any device. Though this is difficult to estimate, the carbon footprints of manufacturing and using laptops and desktop computers are well researched. According to “Environmental impact of IT: desktops, laptops and screens”, an article by the University of Oxford’s IT Services, the estimated cost of producing and running their standard desktop computer and screen for six years was around “778kg CO2e” (University of Oxford, 2022). The article also highlights the greater impact of laptops because of their more frequent replacement.

ADD image from testathon of using multiple computers to test

Interestingly, the article clarifies that “around 85 per cent results from manufacture and shipping, and just 15 per cent from electricity consumption” (University of Oxford, 2022). However, since a significant proportion of our electricity is carbon-based, any unnecessary electricity consumption should be taken seriously. The article by the University of Oxford’s IT services highlights the impact of simply using our personal computers differently: using the default power-saving modes, shutting down at the end of the day, and avoiding unnecessary charging (University of Oxford, 2022). In future versions of our game, we could include reminders to take these simple, impactful steps, or find a creative way to weave these messages into our extinction theme.

Technical Impact and Sustainability

According to the Sustainability Assessment Framework (SusAF), technical sustainability comprises five aspects: maintainability, usability, adaptability, security, and scalability. For several of these aspects, the critical point is that our game is currently hosted on GitHub Pages and runs in a web browser, and we have mostly tested it in mainstream browsers such as Google Chrome, Mozilla Firefox, and Microsoft Edge. If these platforms were to change in major ways, we would need to test and update our game, so continuous testing and updating is part of its maintainability. As for security, the game does not collect or store any information, so it is not an attractive target for attackers; all our code is also publicly available on GitHub, so obtaining it is not a motive for attack either. In terms of scalability, our game can currently be played on multiple devices at once, but we have not tested this at a large enough scale to know whether there are limits to this capacity.

As for usability, the SusAF mentions how usable the product is for different users with different abilities. While designing our game, we kept in mind that not all users would be avid gamers with plenty of time and energy to learn a new game. As such, we worked to make our game intuitive and easy to learn, with simple controls and clear instructions. We have also made some adjustments to the visuals of the game, making the graphics higher contrast for easier detection of different elements as well as making all the objects slightly bigger. Importantly, a user could play and enjoy the game regardless of their ability to hear the sound effects or music. We did this by having cues like the player flashing red when losing health. If we had more time, we would add other ways to play the game without using the keyboard or touchscreen, similarly to what the 2024 group 18 did with their game, ‘Oiram’.

Conclusion

  • 10% ~500 words
  • Reflect on project as a whole. Lessons learned. Reflect on challenges. Future work.

Contribution Statement

| Name | Contribution |
| --- | --- |
| Ran Tian | 20 |
| Shrirang Lokhande | 20 |
| Mahesh Nanavare | 20 |
| Aya Saneh | 20 |
| Santiago Muriel | 20 |

References

Alexander, I. and Robertson, S. (no date) Stakeholders without Tears: Understanding Project Sociology by Modeling Stakeholders. Available at: https://www.scenarioplus.org.uk/papers/stakeholders_without_tears/stakeholders_without_tears.htm (Accessed: 15 April 2025).

Fowler, M. (2019) Agile Software Guide. Available at: https://martinfowler.com/agile.html.

Lucassen, G. et al. (2016) ‘The use and effectiveness of user stories in practice’, Lecture Notes in Computer Science, pp. 205–222. doi:10.1007/978-3-319-30282-9_14.

University of Oxford (2022) Environmental impact of IT: desktops, laptops and screens. Available at: https://www.it.ox.ac.uk/article/environment-and-it.