Team Code Rangers present DogApp

Identify any dog you see

We brainstormed and came up with several ideas that we thought would give us the chance to integrate some cool technology. Initially we decided to make a bird identification app, but the API we found was not very accurate and we were unable to find any suitable datasets. We talked about alternative subjects for identification and found a good dataset of dog images to use. We searched for an API but again couldn't find one that achieved the accuracy we'd need to make a good app, so we decided to train a model ourselves.

Users are able to log into their account, authenticating with Firebase. The home page shows a leaderboard to create a bit of fun competition between users, with the ability to view the current leaders' saved dogs. Users can navigate to the camera page and upload an image, or use their device's camera to capture an image for evaluation by the model.

They receive a response showing the model's top three matches and the corresponding confidence levels. Users are also able to navigate to a list of their saved dogs, which is a record of all the requests they've made to the model.
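The top-three step can be sketched as follows (a minimal illustration only; the breed labels, scores, and `top_matches` helper are hypothetical, and the real model's output layer may differ):

```python
import math

def top_matches(scores, labels, k=3):
    """Convert raw model scores to confidences via softmax and return the top k."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    confidences = [e / total for e in exps]
    ranked = sorted(zip(labels, confidences), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Example with hypothetical scores for four breeds
labels = ["beagle", "pug", "corgi", "husky"]
scores = [2.0, 0.5, 1.2, -0.3]
print(top_matches(scores, labels))
```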

The Team

  • Iretunde Soleye

  • Haley Visconti

  • Candace Davies

Technologies


We used:
Backend: SQL, Python, Node.js, AWS, FastAPI, PyTorch, Docker, Hugging Face
Frontend: React Native, TypeScript, Firebase, Expo, Tailwind

Model:
We acquired a dataset of tens of thousands of dog photos from Kaggle, a machine-learning site. We then used PyTorch to create a CNN and trained it on the dataset, using Google Colab for compute resources, and achieved over 90% accuracy on validation data. We then made the model callable via an API with FastAPI, containerised it with Docker, and uploaded it to Hugging Face.

We created a database with Postgres to store user and dog image information, and created endpoints with Node/Axios/Express. The backend is hosted with Supabase and Render. We store user images, both dog photos and avatars, on AWS in an S3 bucket, with an upload pipeline from the backend.
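An upload pipeline of that shape might look roughly like the following, sketched here in Python for consistency even though the actual backend endpoints used Node. The bucket name and key-naming scheme are assumptions, not taken from the real project:

```python
import uuid

def make_object_key(user_id: str, kind: str) -> str:
    """Build a unique S3 object key, namespaced by image kind (e.g. 'dogs'
    or 'avatars') and user, so each user's images are easy to list."""
    return f"{kind}/{user_id}/{uuid.uuid4().hex}.jpg"

def upload_image(user_id: str, kind: str, image_bytes: bytes) -> str:
    # boto3 is imported lazily so the key helper stays usable without AWS credentials.
    import boto3
    key = make_object_key(user_id, kind)
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="dogapp-images",  # hypothetical bucket name
        Key=key,
        Body=image_bytes,
        ContentType="image/jpeg",
    )
    return key
```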

App:
We used React Native to build an app that runs on both mobile and web. We used Expo to test the app on both platforms by running a dev server, allowing us to update our codebase and see the changes immediately. We also used TypeScript to build some of our component files. For user authentication we used Firebase, which stored user emails and provided us with a unique ID that we linked back to our database to change the state of the app for each user. Finally, for certain elements of our app, we used Tailwind.

Challenges Faced

There was an issue initially with the model's API. We decided to use Gradio because it was a nice little prepackaged solution, but it turned out to be incompatible with iOS, so we had to explore alternatives, which is how we ended up using FastAPI and Docker.
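A containerised FastAPI service of this shape is typically built from a Dockerfile along these lines (a sketch only; the file names, base image, and port are assumptions, not taken from the actual repo):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker caches this layer across rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the FastAPI app and model weights into the image.
COPY . .

# Serve the API with uvicorn inside the container.
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```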

Building the pipeline to AWS and setting up authentication with Firebase were both fiddly and required some trial and error.

It was the same on the backend with AWS as on the frontend with Firebase: using the online GUI to initialise everything was pretty straightforward, aside from setting bucket policies, but getting it all hooked up in the backend was once again fiddly and required a bit of creativity to test.