Native Android Project Management App with Java

Initialized 2021.03.16 | Revised 2023.11.16
CSCI 5115 - UI Design, Implementation & Evaluation is a course offered at the University of Minnesota to both undergraduate and graduate students. Over the length of the semester, students worked in groups on a single project that involved selecting a problem, researching user needs and current solutions, designing and developing a minimum viable product, and collecting user feedback to plan next steps.
PLANiT logo and backsplash designed by Conrad (Cal) Carlson

Background

I was placed in a group full of motivated students, which meant our topic list from the brainstorming session was filled with a lot of "Big problems"1. As someone with ADHD, even big problems that aren't "Big problems" are hard for me.

Yes, I'll fixate on things and can work on a project for weeks to the detriment of other responsibilities. But many times I'll get an idea for something big, or need to work on a major project, and won't know where to begin. Other times I will simply be too anxious about how long the planning itself might take.

Our group had a variety of experiences like this. We had so much going on that it was often difficult to choose where to start. We wanted to help make the decision process less daunting and thought that helping people with project management and organization would free up time for some to tackle those "Big problems".

Formative Research

Once our group had selected project management as our issue of choice, we had to decide how to solve it. We obviously had our own experiences to work from, but in order to address these issues on a larger scale, we needed to gather outside perspectives.

Methods

Part of the class structure was interviewing volunteers (primarily other students) about our chosen topics. Before we could do that, we had to make sure that the whole group was on the same page.

User Study Protocol

A user study protocol is a packet of forms and guidelines to follow during the interview process. Ours included a consent form for the participant to sign, a questionnaire for short answers and demographic information, and a detailed checklist of the key interview questions to discuss.

The point of the packet is to ensure that each interviewer has all the materials they need for the session. It's as much for the interviewer's preparation as it is to help gather the same types of information from all the participants. It shouldn't restrict the interviewer from introducing other questions, but is there to help them cover key topics.

The consent form is particularly important because it gives the participant an outline of how the information will be used and a general sense of the kind of data being collected. We wanted to make sure that participants felt safe sharing their experiences with us, so we did our best to anonymize the data we collected for the project.

Interviews

Each of our group members was assigned to conduct one user study interview. Participants were self-selected and identified as having trouble with time management. Interviews were scheduled for 30 minutes and recorded over Zoom to simplify the transcription process.2

Our questions focused on current and past time management skills. We wanted to know what our participants had already tried and where they were still struggling.

I had this really unsustainable system--just a long list of all the things I needed to do. The most stressful, like things due the next day, would be the highest priority. What would constantly happen is the things that were due in three weeks but actually a lot more work to complete would never get done. There was always something ahead of them in the queue.

Problems like the ones Alex described were remarkably close to some of our own experiences. I had personally tried bullet journaling (with Moleskines and everything) and would constantly find myself copying many things over day after day to the point where it was discouraging.

I always have a lot of other homework to do. So, I seldom have a fixed timeline and it really depends.

Blake's sentiment above captures another problem: sometimes there just isn't time to organize the tasks that need to be done. Putting energy into efficiency doesn't seem worth it when you're overwhelmed by the tasks themselves. Casey expressed a similar concern.

I think I just overthink and try to do too many projects at once.

But Casey perfectly captured the reason we chose this project in the first place.

I think I would prefer organizing my energy differently so I don't feel like I have to do all the projects at once.

Blake was the most helpful in describing the things they wanted to see out of a new application.

In projects like this there is usually a member responsible for reminding others about deadlines, like when things need to be finished.
When I leave my home and go outside, some apps that require internet might not remind me of the due dates.
When I am focusing on a task, chat notifications are usually annoying because I need to stop my work and figure out what notification is coming.

So these were some of the things we wanted to focus on: an application that could handle the organization of multiple projects without relying on internet-based tools and with minimal interruptions to the user's focus.

Ideation

Once all the sessions were complete and the interviews transcribed, we began our qualitative analysis. We swapped transcripts with each other and began coding the responses we received from the participants. The goal here was to identify major themes. We were able to build an affinity map that helped us pick out four of them:

  • Visualizing timelines
  • Connecting with collaborators
  • Self-motivation
  • Productivity

For ideation, we used the IDEO method, which encourages recording every idea as a means of surfacing your best ones. The key themes from the previous stage framed our brainstorming. We came up with 85 feature ideas to start, then re-categorized them into another affinity map to identify core themes as we started planning for app development.

From there it was time to decide the features we wanted to focus on. We originally wanted to tackle four things:

  • Calendar and contact integration so users would spend less time importing data they already entered elsewhere
  • A "keep on task" notification to check-in with users while they worked on tasks
  • Automatic estimations to help users learn how long tasks take them
  • Visual progress metrics so users could keep tabs on how well they were doing

Unfortunately, as we started to plan for development, it became clear that we didn’t have time to learn app integrations, implement interactive notifications, or train neural networks, so we aimed our minimum viable product at ensuring users could see their progress.

Development

What we came up with was an app that broke down people's to-do lists into two types of data: projects and tasks.

Projects were the larger items that had deadlines and were otherwise made up of smaller items we called tasks. Tasks were meant to be the actionable items, the things that made sense to complete in one sitting. A task gets assigned a size that represents how long the user thinks the task will take, and a priority to determine how critical that task is to the completion of the project.

This information helps the app organize all the tasks a user has entered into one master list. Over time, the intention is to improve how this algorithm selects and organizes these tasks. This data also contributes to the completion percentage of the projects. Small tasks contribute less to the completion than larger ones.
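The project/task model and the size-weighted completion percentage described above could be sketched as follows. This is an illustrative reconstruction, not the app's actual code; the class names, size weights, and method are assumptions.

```java
import java.util.List;

// Illustrative sketch of the two data types described above.
// Names and weights are assumptions, not PLANiT's actual classes.
public class ProjectModel {
    // A task's size represents how long the user thinks it will take.
    enum Size {
        SMALL(1), MEDIUM(2), LARGE(4);
        final int weight;
        Size(int weight) { this.weight = weight; }
    }

    // A task is an actionable item with a size and a priority.
    record Task(String title, Size size, int priority, boolean done) {}

    // Completion percentage weighted by task size: finishing a large
    // task moves the bar further than finishing a small one.
    static double completionPercent(List<Task> tasks) {
        int total = 0, finished = 0;
        for (Task t : tasks) {
            total += t.size().weight;
            if (t.done()) finished += t.size().weight;
        }
        return total == 0 ? 0.0 : 100.0 * finished / total;
    }
}
```

With one large task done and one small task remaining, the weighting reports 80% complete rather than the 50% a plain count would give.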

Tasks can also be ordered. Users can link tasks together by setting one as a blocker to another, provided both belong to the same project. A blocked task cannot be completed until all of its blocking tasks have been completed first. The intention here is to help users focus on the tasks they can actually complete.
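The blocking rule amounts to a simple lookup: a task may be completed only when every task blocking it is already done. A minimal sketch, assuming a map from task ids to blocker ids (the representation and names here are illustrative, not taken from the app's source):

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of the blocker rule: a task cannot be completed
// until every task blocking it is done. Names are illustrative.
public class BlockerRules {
    // blockers maps a task id to the ids of the tasks that block it.
    static boolean canComplete(String taskId,
                               Map<String, Set<String>> blockers,
                               Set<String> completed) {
        for (String blocker : blockers.getOrDefault(taskId, Set.of())) {
            if (!completed.contains(blocker)) return false;
        }
        return true;
    }
}
```

In the real app the check would run against tasks within a single project, since blockers can only link tasks in the same project.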

User Testing

Once we had a functional app, the next step was to get some users to test it for us. This was less focused on current bugs and more focused on how the app looked and felt to the user. Were they able to navigate the app with minimal assistance?

Objectives

We decided on four objectives for our test sessions:

  1. Identifying any basic design issues with our application
  2. Gathering any general feedback the users had about their experience
  3. Comparing how well our application held up to competitors
  4. Getting input on future functionality that would enhance the user's experience

But, the most important thing we wanted to address was ease of use.

Intuitive Use

We wanted the experience to be intuitive enough for the users to add projects and tasks on their own without outside help. Each of us tested with two participants where we asked them to make a new project with a predefined due date and to add a couple of tasks to it.

While the users worked on this test, we were watching for three things:

  1. The total number of mistakes or blunders they made while trying to complete the test
  2. The types of mistakes that occurred and what caused them
  3. How long it took our users to complete the test from start to finish

On average it took our users 4 minutes and 15 seconds to complete this test with a cold open to the application and a generic set of instructions. There was a standard deviation of 42.7 seconds, which could have been due to a number of factors, including some of our participants being long-time Apple users. Unfortunately we didn't capture our users' preferred mobile OS during our first round of testing, but we would likely include that in future sessions.

Some other problems our users faced were not recognizing the check mark as the icon to tap to save and not understanding what the size of a task meant.

Organizational Preferences

The next thing we wanted to measure was how well the app organized tasks and if the users agreed with the algorithm. In this test we asked users to rank which tasks seemed most important. They were free to scroll through the list but we were hoping they would pick the top three items shown here.

83.3% of the users agreed that "Pass your classes" would be the first item they would select; however, consensus on the second and third items dropped significantly. Only 41.6% chose "Apply to graduate" for the second task and 33.3% chose "Milestone 2 Demonstration" as their third.

Competition

After that, we wanted to see how our app stacked up against other task management applications. We knew the competition was steep, as the most common applications our testers already used were apps like Google Calendar and Trello. However, we were pleasantly surprised.

When we asked our testers to rate the usability of our application and the applications they already used, PLANiT came in with an average of 4.08 out of 5 and a standard deviation of 0.9. This was compared to an average of 4.5 across the different competitor apps with a standard deviation of 0.67. Although the competitors scored higher on average, the scores were within one standard deviation of each other.

Our application still has much more room for improvement, but this was a promising start.

Conclusions

Given the time we had to complete this project, there are many aspects that we are proud of, but the result is far from a finished application.

Possible Next Steps

We received a lot of positive feedback about the general appearance and flow of the application. There seemed to be very few errors in navigation and users could complete tasks fairly quickly, but we were still able to gather some great suggestions from users.

There was still a strong desire to integrate with services like Google and Outlook, as we saw in our formative research. This would be the first major feature update we would tackle in order to compete with the other apps on the market.

On the smaller scale, we also got feedback around adding an "In Progress" state, implementing better ways to group projects and tasks in the overview, and adding a save button at the bottom of the edit pages to be more consistent with current UI standards. There were many more suggestions, but these stood out as clear next steps.

Acknowledgements

Thanks to the University of Minnesota College of Science and Engineering for offering the UI Design, Implementation & Evaluation course, and to Professor Lana Yarosh for instructing.

Our team comprised six students. Thanks to Waller Li for assisting me with the bulk of development, to Marissa Lund and Cal Carlson for visual design and drafting presentation materials, and to Haoyu Tan and An Nguyen for assisting with the interviews and qualitative analysis. It was thanks to everyone's hard work that we were able to deliver a functional, minimum viable product with a smooth user experience.


  1. "Big problems" are defined as issues that have a global or universal scope. They are characteristically difficult because they are often interdisciplinary and have large-scale impacts. Solving them involves a huge amount of participation from multiple groups and often an unpleasant amount of change from society as a whole.
  2. For the purposes of this article, participants were given aliases and their quotes are summarized for easier reading.
  • Android
  • Java
  • Research