anchor | positive | negative | anchor_status
---|---|---|---|
## Inspiration
The inspiration for this project was drawn from the daily experiences of our team members. As post-secondary students, we often make purchases for our peers for convenience, yet forget to follow up. This can lead to disagreements and accountability issues. Thus, we came up with the idea of CashDat to alleviate this commonly faced issue. People will no longer have to remind their friends about paying them back! With the available APIs, we realized that we could create an application to directly tackle this problem.
## What it does
CashDat is an application available on the iOS platform that allows users to keep track of who owes them money, as well as who they owe money to. Users are able to scan their receipts, divide the costs with other people, and send requests for e-transfer.
## How we built it
We used Xcode to program a multi-view app and implement all the screens/features necessary.
We used Python and the Optical Character Recognition (OCR) capabilities built into the Google Cloud Vision API to implement text extraction in the cloud. This was used specifically to pull item names and prices from the scanned receipts.
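For a rough idea of how this step works (an illustrative sketch, not the exact code in the app), the receipt parsing could look like the following, where the trailing-price pattern is an assumption about typical receipt lines:

```python
# Illustrative sketch: pull raw text off a receipt with Google Cloud Vision,
# then pair each line's item name with a trailing price.
import re
from google.cloud import vision

def extract_line_items(receipt_bytes: bytes):
    client = vision.ImageAnnotatorClient()
    response = client.text_detection(image=vision.Image(content=receipt_bytes))
    full_text = response.text_annotations[0].description if response.text_annotations else ""

    items = []
    for line in full_text.splitlines():
        # Assumption: an item line ends with a price such as "12.99" or "$12.99".
        match = re.search(r"^(.*?)[\s.]*\$?(\d+\.\d{2})\s*$", line)
        if match and match.group(1).strip():
            items.append((match.group(1).strip(), float(match.group(2))))
    return items
```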
We used Google Firebase to store user login information, receipt images, as well as recorded transactions and transaction details.
Figma was used to design the front-end mobile interface that users interact with. The application itself was primarily developed in Swift with a focus on iOS support.
## Challenges we ran into
We found that we had a lot of great ideas for utilizing sponsor APIs, but due to time constraints we were unable to fully implement them.
The main challenge was incorporating the Request Money option from the Interac API into our application and Swift code. Because the API was still in beta, it was difficult to implement on an iOS app. We certainly hope to keep working on the Interac API integration, as it is a crucial part of our product.
## Accomplishments that we're proud of
Overall, our team was able to develop a functioning application and to use new APIs provided by sponsors. We used modern design elements and integrated them with the software.
## What we learned
We learned about implementing different APIs and about iOS development overall. We also had very little experience with the Flask backend deployment process. This proved to be quite difficult at first, but we learned about setting up environment variables and off-site servers.
## What's next for CashDat
We see a great opportunity for the further development of CashDat, as it helps streamline current payment workflows. We plan on continuing to develop this application to further optimize the user experience. | ## Inspiration
test
## What it does
## How I built it
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for Driving Safety | ## Inspiration
Today we live in a world that is all online, with the pandemic forcing us to stay at home. Because of this, our team and the people around us were forced to rely on video conference apps for school and work. Although these apps function well, there was always something missing, and we were faced with new problems we weren't used to facing. Personally, I kept forgetting to mute my mic when going to the door to yell at my dog, accidentally disturbing the entire video conference. For others, it was a lack of accessibility tools that made the experience more difficult. And some were simply scared of something embarrassing happening during class while it was being recorded, to be posted and watched on repeat! We knew something had to be done to fix these issues.
## What it does
Our app essentially takes over your webcam to give the user more control over what it does and when it does it. The goal of the project is to add all the missing features that we wished were available during all our past video conferences.
Features:
Webcam:
1 - Detect when user is away
This feature will automatically blur the webcam feed when a User walks away from the computer to ensure the user's privacy
2- Detect when user is sleeping
We all fear falling asleep on a video call and being recorded by others; our app will detect if the user is sleeping and automatically blur the webcam feed.
3- Only show registered user
Our app allows the user to train a simple AI face recognition model in order to only allow the webcam feed to show if they are present. This is ideal to prevent one's children from accidentally walking in front of the camera and putting on a show for all to see :)
4- Display Custom Unavailable Image
Rather than blur the frame, we give the option to choose a custom image to pass to the webcam feed when we want to block the camera
Audio:
1- Mute Microphone when video is off
This option allows users to additionally have the app mute their microphone when the app changes the video feed to block the camera.
Accessibility:
1- ASL Subtitle
Using another AI model, our app will translate your ASL into text, giving people who cannot speak another channel of communication
2- Audio Transcriber
This option will automatically transcribe all you say to your webcam feed for anyone to read.
Concentration Tracker:
1- Tracks the user's concentration level throughout their session, making them aware of the time they waste and giving them the chance to change bad habits.
## How we built it
The core of our app was built with Python using OpenCV to manipulate the image feed. The AI models used to detect the different visual situations are a mix of haar\_cascades from OpenCV and deep learning models that we built on Google Colab using TensorFlow and Keras.
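A minimal sketch of the "blur when nobody is in frame" idea, using OpenCV's bundled haar cascade (the real app also relies on the deep learning models mentioned above, so treat this as an illustration only):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # physical webcam; processed frames feed the virtual camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Nobody detected in front of the camera: blur the whole feed.
        frame = cv2.GaussianBlur(frame, (51, 51), 0)
    cv2.imshow("Boom output", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```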
The UI of our app was created using Electron with React.js and TypeScript using a variety of different libraries to help support our app. The two parts of the application communicate together using WebSockets from socket.io as well as synchronized Python threads.
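As a small sketch of that communication (the event name and server address are made up for illustration), the Python side can push state changes to the Electron UI like this:

```python
import socketio

sio = socketio.Client()
sio.connect("http://localhost:3000")  # assumption: the Electron app hosts the socket.io server here

def notify_camera_blocked(blocked: bool) -> None:
    # Called from the camera-processing thread whenever the feed is blurred/unblurred.
    sio.emit("camera_state", {"blocked": blocked})
```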
## Challenges we ran into
Damn, where to start haha...
Firstly, Python is not a language any of us are too familiar with, so from the start, we knew we had a challenge ahead. Our first main problem was figuring out how to hijack the webcam video feed and pass the feed on to be used by any video conference app, rather than make our app for a specific one.
The next challenge we faced was mainly figuring out a method of communication between our front end and our python. With none of us having too much experience in either Electron or in Python, we might have spent a bit too much time on Stack Overflow, but in the end, we figured out how to leverage socket.io to allow for continuous communication between the two apps.
Another major challenge was making the core features of our application communicate with each other. Since the major parts (speech-to-text, camera feed, camera processing, socket.io, etc) were mainly running on blocking threads, we had to figure out how to properly do multi-threading in an environment we weren't familiar with. This caused a lot of issues during the development, but we ended up having a pretty good understanding near the end and got everything working together.
## Accomplishments that we're proud of
Our team is really proud of the product we have made and have already begun proudly showing it to all of our friends!
Considering we all have an intense passion for AI, we are super proud of our project from a technical standpoint, finally getting the chance to work with it. Overall, we are extremely proud of our product and genuinely plan to better optimize it in order to use within our courses and work conference, as it is really a tool we need in our everyday lives.
## What we learned
From a technical point of view, our team has learnt an incredible amount over the past few days. Each of us tackled problems using technologies we had never used before that we can now proudly say we understand how to use. For me, Jonathan, I mainly learnt how to work with OpenCV, following a 4-hour long tutorial to learn the inner workings of the library and how to apply it to our project. For Quan, it was mainly creating a structure that would allow our Electron app and Python program to communicate together without killing the performance. Finally, Zhi worked for the first time with the Google API in order to get our speech to text working; he also learned a lot of Python and about multi-threading in Python to set everything up together. Together, we all had to learn the basics of AI in order to implement the various models used within our application and to finally attempt (not a perfect model by any means) to create one ourselves.
## What's next for Boom. The Meeting Enhancer
This hackathon is only the start for Boom as our team is exploding with ideas!!! We have a few ideas on where to bring the project next. Firstly, we would want to finish polishing the existing features in the app. Then we would love to make a marketplace that allows people to choose from any kind of trained AI to determine when to block the webcam feed. This would allow for limitless creativity from us and anyone who would want to contribute!!!! | losing |
## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have much to start with. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
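As an illustration of the goal-generation piece (shown in Python for brevity; the prompt wording, model settings, and habit fields are assumptions, and call shapes vary a bit between Cohere SDK versions):

```python
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

def weekly_goal(habit: str, weekly_spend: float) -> str:
    # Hypothetical prompt template for a spending habit pulled from the user's transactions.
    prompt = (
        f"A student spends about ${weekly_spend:.2f} per week on {habit}. "
        "Write one short, encouraging weekly goal to help them cut this down."
    )
    response = co.generate(prompt=prompt, max_tokens=50, temperature=0.7)
    return response.generations[0].text.strip()

print(weekly_goal("bubble tea", 42.50))
```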
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, working without any sleep was definitely the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :) | ## Inspiration
We can make customer support so, so much better, for both customers and organizations alike. This project was inspired by the frankly terrible wait times and customer support information that the DriveTest centres across Ontario have.
## What it does
At a high level, our platform integrates analytics for both in-person and online customer support lines (using computer vision and Genesys's API, respectively), and uses them to give customers real-time data on which customer support channel to use at a given time. It also provides organizations with analytics and metrics to optimize their customer support pipelines.
Speaking about the internals, our platform utilizes computer vision to determine the number of people in a line at any given moment, then uses AI to calculate the approximate waiting time for that line. This use case covers in-person customer support interactions. Our platform also uses Genesys's Developer API (EstimateWaitTime) to calculate, for any given organization and queue, the wait time and backlog of customer support cases. It then combines these two forms of customer support, allowing customers to make informed decisions as to where to go for customer support, and giving organizations robust analytics on which customer support channels can be further optimized (such as hiring more people to serve as chat agents).
## How we built it
OpenCV along with a custom algorithm for people counting within a certain bounding area was used for the Computer Vision aspect to determine the number of people in a line, in-person. This data is sent to a Flask server. We also used Genesys's API along with simulated Genesys customer-agent interactions to determine how long the wait time is for online customer support. From the Flask server, this data goes to two different front-ends:
1. For customers: customers simply see a dashboard with the wait time for online customer support, as well as the wait times at nearby branches of the company (say, Ontario DriveTest centres) – created using Bulma and Vanilla JS
2. For organizations: organizations see robust analytics regarding wait times at certain intervals, certain points in the day, etc. They can also compare and contrast online and in-person customer support analytics. Organizations can use these metrics to optimize customer support to reduce the load on certain employees, and by making customer support more efficient for customers.
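A simplified sketch of the in-person people counter described above (the production version uses our custom bounding-area algorithm; the region coordinates and endpoint below are placeholders):

```python
import cv2
import requests

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

LINE_AREA = (100, 200, 500, 600)  # x1, y1, x2, y2 of the queue region (placeholder values)

def count_people_in_line(frame) -> int:
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    x1, y1, x2, y2 = LINE_AREA
    count = 0
    for (x, y, w, h) in boxes:
        cx, cy = x + w // 2, y + h // 2  # centre of each detected person
        if x1 <= cx <= x2 and y1 <= cy <= y2:
            count += 1
    return count

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    requests.post("http://localhost:5000/line-count",  # placeholder Flask endpoint
                  json={"branch": "drivetest-toronto", "count": count_people_in_line(frame)})
```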
## Challenges we ran into
Working with many services (Computer Vision + Python, Flask backend, Vanilla JS frontend, Vue.js frontend) was a challenge, since we had to find a way to pass the data from one service to another, reliably. We decided to fix this by using a key-value store for redundancy to ensure data is not lost through numerous layers of transmission.
## Accomplishments that we're proud of
Creating a working product using Genesys's API!
## What we learned
The opportunity that lies within the field of customer support and unifying both online and in-person components of it. Also, the opportunities that Genesys's API holds in terms of empowering organizations to make their customer support as efficient as possible.
## What's next for QuicQ
We wanted to use infrared sensors instead of cameras to detect people in a line in-person, due to privacy concerns, but we couldn't find infrared sensors for this hackathon! So, we will integrate them in a future version of QuicQ. | A flower hunting game made using Godot.
Credits:
3D map model generated using [blender-osm](https://prochitecture.gumroad.com/l/blender-osm) with data from <https://www.openstreetmap.org/> (License: <https://www.openstreetmap.org/copyright>). | partial |
## Introduction
2020 has been a very hard year for all of us, we have all faced unique challenges and hardships. From COVID-19, to civil unrest, and plenty of injustice, people all over the world are fighting for a better future. In these unprecedented times, when we are more disconnected and isolated than ever, it is important to be informed of protests and other social movements around the world.
Somo is an intuitive map that displays where social movements are trending around the world. Our goal is to show the public where the support for these movements is coming from and raise further awareness for their cause.
## Tech Stack and Tools
* [React](https://reactjs.org/)
* [Node.js](https://nodejs.org/en/) / [Express](https://expressjs.com/)
* [Ant Design](https://ant.design/)
* APIs: [Twitter](https://developer.twitter.com/en), [OpenCage](https://opencagedata.com/api), [GetDayTrends](https://getdaytrends.com/)
## Features
Somo is a full stack [React](https://reactjs.org/) application with a [Node.js](https://nodejs.org/en/) and [Express](https://expressjs.com/) backend. Client side code is written in React and the backend API is written using Express.
### Description
Somo provides a detailed description of all major social movements around the world
### Map
Our map provides a great view of where in the world a particular social movement is happening
### Twitter Feed
The Twitter Sidebar provides the top tweets of the social movement to keep you updated on the latest news
## Challenges
In terms of challenges, we had trouble setting up for front-end development but as a team we collaborated and were able to solve a number of issues, one of which had to do with dependencies.
APIs are an amazing way for developers to get access to features for an application. However, a challenge we faced was working with unfamiliar APIs and dealing with free account limitations from Twitter, OpenCage and React-Map. By familiarizing ourselves with the documentation and understanding the API limitations, we were quickly able to overcome this challenge.
## Development mode
In the development mode, we will have 2 servers running. The front end code will be served by the [webpack dev server](https://webpack.js.org/configuration/dev-server/) which helps with hot and live reloading. The server side Express code will be served by a node server using [nodemon](https://nodemon.io/) which helps in automatically restarting the server whenever server side code changes.
## Quick Start
```
# Clone the repository
git clone https://github.com/ashtan19/SoMo.git
# Go inside the directory
cd SoMo
# Install dependencies
npm install
# Start development server
npm run dev
```
## Folder Structure
All the source code lives inside the **src** directory. Inside src, there are client and server directories. All the frontend code (React, CSS, JS and any other assets) lives in the client directory. Backend Node.js/Express code lives in the server directory.
## Future Developments
* Implement interactive markers on the map to display number of tweets & country
* Zoom function where the user can look at smaller countries in more detail
* Filter inappropriate and irrelevant tweets
* Auto generate trending hashtags and filter based on relevance
* Have links to learn more or donate to support causes | ## Inspiration
With the excitement around blockchain and the ever-growing concerns regarding privacy, we wanted to disrupt one of the largest technology standards yet: email. Email accounts are mostly centralized and contain highly valuable data, so one small breach or corrupt act can seriously jeopardize millions of people. The solution lies with the blockchain, providing encryption and anonymity, with no chance of anyone but you reading your email.
Our technology is named after Soteria, the goddess of safety and salvation, deliverance, and preservation from harm, which we believe perfectly represents our goals and aspirations with this project.
## What it does
First off is the blockchain and message protocol. Similar to the PGP protocol, it offers *security* and *anonymity*, while also **ensuring that messages can never be lost**. On top of that, we built a messenger application loaded with security features, such as our facial recognition access option. The only way to communicate with others is by sharing your 'address' with each other through a convenient QR code system. This prevents anyone from obtaining a way to contact you without your **full discretion**. Goodbye, spam/scam email.
## How we built it
First, we built the blockchain with a simple Python Flask API interface. The overall protocol is simple and can be built upon by many applications. Next, all that remained was making an application to take advantage of the blockchain. To do so, we built a React Native mobile messenger app, with quick testing through Expo. The app features key and address generation, which can then be shared through QR codes, so we implemented a scan-and-be-scanned flow for engaging in communications: a fully consensual agreement, so that not just anyone can message anyone. We then added an extra layer of security by harnessing Microsoft Azure's Face API cognitive services for facial recognition. Every time the user opens the app, they must scan their face for access, ensuring only the owner can view their messages, if they so desire.
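A stripped-down sketch of what such a Flask interface can look like (field names and the hashing scheme here are illustrative, not the actual Soteria protocol):

```python
import hashlib, json, time
from flask import Flask, jsonify, request

app = Flask(__name__)
chain = [{"index": 0, "timestamp": 0, "message": "genesis", "prev_hash": "0"}]

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

@app.route("/messages", methods=["POST"])
def add_message():
    payload = request.get_json()
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "message": payload["ciphertext"],   # assumed to be encrypted client-side
        "recipient": payload["recipient"],  # recipient's public address
        "prev_hash": block_hash(chain[-1]),
    }
    chain.append(block)
    return jsonify(block), 201

@app.route("/messages/<recipient>")
def get_messages(recipient):
    return jsonify([b for b in chain if b.get("recipient") == recipient])
```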
## Challenges we ran into
Our biggest challenge came from the encryption/decryption process that we had to integrate into our mobile application. Since our platform was react native, running testing instances through Expo, we ran into many specific libraries which were not yet supported by the combination of Expo and React. Learning about cryptography and standard practices also played a major role and challenge as total security is hard to find.
## Accomplishments that we're proud of
We are really proud of our blockchain for its simplicity, while taking on a huge challenge. We also really like all the features we managed to pack into our app. None of us had too much React experience but we think we managed to accomplish a lot given the time. We also all came out as good friends still, which is a big plus when we all really like to be right :)
## What we learned
Some of us learned our appreciation for React Native, while some learned the opposite. On top of that we learned so much about security, and cryptography, and furthered our beliefs in the power of decentralization.
## What's next for The Soteria Network
Once we have our main application built we plan to start working on the tokens and distribution. With a bit more work and adoption we will find ourselves in a strong position to pursue an ICO. This would then enable us to further develop and enhance our protocol and messaging app. We see lots of potential in our creation and believe privacy and consensual communication is an essential factor in our increasingly connected, social-networking world. | Ever wonder where that video clip came from? Probably some show or movie you've never watched. Well, with RU Recognized, you can do a reverse video search to find out what show or movie it's from.
## Inspiration
We live in a world rife with movie and TV show references, and not being able to identify these references is a sign of ignorance in our society. More importantly, the feeling of not being able to remember what movie or show that one really funny clip was from can get really frustrating. We wanted to enable every single human on this planet to be able to seek out and enjoy video-based content easily but also efficiently. So, we decided to make **Shazam, but for video clips!**
## What it does
RU Recognized takes a user submitted video and uses state of the art algorithms to find the best match for that clip. Once a likely movie or tv show is found, the user is notified and can happily consume the much desired content!
## How we built it
We took on a **3 pronged approach** to tackle this herculean task:
1. Using **AWS Rekognition's** celebrity detection capabilities, potential celebs are spotted in the user-submitted video. These identifications have a harsh confidence-value cutoff to ensure only the strongest matches are kept.
2. We scrape the video using **AWS' Optical Character Recognition** (OCR) capabilities to find any identifying text that could help in identification.
3. **Google Cloud's** Speech to Text API allows us to extract the audio into readable plaintext. This info is threaded through Google Cloud Custom Search to find a large unstructured datadump.
To parse and extract useful information from this amorphous data, we also maintained a self-curated, specialized, custom-made dataset built from various data banks, including **Kaggle's** actor info, as well as IMDB's incredibly expansive database.
Furthermore, due to the uncertain nature of the recognition API's, we used **clever tricks** such as cross referencing celebrities seen together, and only detecting those that had IMDB links.
Correlating the information extracted from the video with the known variables stored in our database, we are able to make an educated guess at origins of the submitted clip.
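A condensed sketch of the frame-sampling and celebrity-detection step (the sampling rate and confidence cutoff are illustrative values, and the real pipeline cross-references the other signals described above):

```python
import cv2
import boto3

rekognition = boto3.client("rekognition")

def celebrities_in_clip(video_path: str, every_n_frames: int = 30, min_conf: float = 90.0):
    names = set()
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            _, jpeg = cv2.imencode(".jpg", frame)
            result = rekognition.recognize_celebrities(Image={"Bytes": jpeg.tobytes()})
            for celeb in result.get("CelebrityFaces", []):
                # Keep only high-confidence matches that come with a reference URL (e.g. IMDB).
                if celeb["MatchConfidence"] >= min_conf and celeb.get("Urls"):
                    names.add(celeb["Name"])
        frame_idx += 1
    cap.release()
    return names
```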
## Challenges we ran into
Challenges are an obstacle that our team is used to, and they only serve to make us stronger. That being said, some of the (very frustrating) challenges we ran into while trying to make RU Recognized a good product were:
1. As with a lot of new AI/ML algorithms on the cloud, we struggled a lot with getting our accuracy rates up for identified celebrity faces. Since AWS Rekognition is trained on images of celebrities from everyday life, being able to identify a heavily costumed/made-up actor is a massive challenge.
2. Cross-connecting across various cloud platforms such as AWS and GCP led to some really specific and hard-to-debug authorization problems.
3. We faced a lot of obscure problems when trying to use AWS to automatically detect the celebrities in the video, without manually breaking it up into frames. This proved to be an obstacle we weren't able to surmount, and we decided to sample the frames at a constant rate and detect people frame by frame.
4. Dataset cleaning took hours upon hours of work and dedicated picking apart. IMDB datasets were too large to parse completely and ended up costing us hours of our time, so we decided to make our own datasets from this and other datasets.
## Accomplishments that we're proud of
Getting the frame by frame analysis to (somewhat) accurately churn out celebrities and being able to connect a ton of clever identification mechanisms was a very rewarding experience. We were effectively able to create an algorithm that uses 3 to 4 different approaches to, in a way, 'peer review' each option, and eliminate incorrect ones.
## What I learned
* Data cleaning is ver very very cumbersome and time intensive
* Not all AI/ML algorithms are magically accurate
## What's next for RU Recognized
Hopefully integrate all this work into an app that is user-friendly and way more accurate, with the entire IMDB database to reference. | ## Inspiration
## Inspiration
Business cards haven't changed in years, but cARd can change this! Inspired by the rise of augmented reality applications, we see potential for creative networking. Next time you meet someone at a conference, a career fair, etc., simply scan their business card with your phone and watch their entire online portfolio enter the world! The business card will be saved, and the experience will be unforgettable.
## What it does
cARd is an iOS application that allows a user to scan any business card to bring augmented reality content into the world. Using OpenCV for image rectification and OCR (optical character recognition) with the Google Vision API, we can extract both the business card and the text on it. Feeding the extracted image back to the iOS app, ARKit can effectively track our "target" image. Furthermore, we use the OCR result to grab information about the business card owner in real time! Using selenium, we effectively gather information from Google and LinkedIn about the individual. When returned to the iOS app, the user is presented with information populated around the business card in augmented reality!
## How I built it
Some of the core technologies that go into this project include the following:
* ARKit for augmented reality in iOS
* Flask for the backend server
* selenium for collecting data about the business card owner on the web in real-time
* OpenCV to find the rectangular business card in the image and use a homography to map it into a rectangle for AR tracking
* Google Vision API for optical character recognition (OCR)
* Text to speech
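A minimal sketch of the OpenCV step from the list above, finding the largest four-point contour and warping it into a flat rectangle (the output size is an assumption, and a production version would also sort the corners into a consistent order first):

```python
import cv2
import numpy as np

def rectify_card(image_bgr, out_w=900, out_h=500):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 75, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:  # assume the biggest quadrilateral is the card
            # Note: corners are used as-is here; a real version would order them
            # (top-left, top-right, bottom-right, bottom-left) before warping.
            src = approx.reshape(4, 2).astype("float32")
            dst = np.array([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]], dtype="float32")
            homography = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(image_bgr, homography, (out_w, out_h))
    return None  # no card-like quadrilateral found
```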
## Challenges I ran into
## Accomplishments that I'm proud of
## What I learned
## What's next for cARd
Get cARd on the app store for everyone to use! Stay organized and have fun while networking! | ## Inspiration
I've always been fascinated by the complexities of UX design, and this project was an opportunity to explore an interesting mode of interaction. I drew inspiration from the futuristic UIs that movies have to offer, such as Minority Report's gesture-based OS or Iron Man's heads-up display, Jarvis.
## What it does
Each window in your desktop is rendered on a separate piece of paper, creating a tangible version of your everyday computer. It is a fully featured desktop, with specific shortcuts for window management.
## How I built it
The hardware is a combination of a projector and a webcam. The camera tracks the position of the sheets of paper, on which the projector renders the corresponding window. An OpenCV backend does the heavy lifting, calculating the appropriate translation and warping to apply.
## Challenges I ran into
The projector was initially difficult to set up, since it has a fairly long focusing distance. Also, the engine that tracks the pieces of paper was incredibly unreliable under certain lighting conditions, which made it difficult to calibrate the device.
## Accomplishments that I'm proud of
I'm glad to have been able to produce a functional product that could possibly be developed into a commercial one. Furthermore, I believe I've managed to put an innovative spin on one of the oldest concepts in the history of computers: the desktop.
## What I learned
I learned lots about computer vision, and especially on how to do on-the-fly image manipulation. | ## Inspiration
I wanted a platform that incentivizes going outside and taking photos. It also helps people who want to build a photography portfolio.
## What it does
Every morning, a random photography prompt will appear on the app/website. The users will then be able to go out and take a photo of said prompt. The photo will be objectively rated on focus quality, which basically asks whether the subject is in focus and whether there is any apparent motion blur. The photo will also be rated on correct exposure (lighting). Photos are marked out of 100, using interesting code to determine the scores. We would also like to implement a leaderboard of the best photos taken of said subject.
## How we built it
A bunch of Python and a little bit of HTML. The future holds React code to make everything run and look much better.
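A sketch of the kind of scoring described above; the thresholds and scaling factors here are illustrative, not the exact numbers in the app:

```python
import cv2
import numpy as np

def focus_score(image_bgr) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance => blurry
    return float(min(100.0, sharpness / 5.0))          # assumed scaling to 0-100

def exposure_score(image_bgr) -> float:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mean_brightness = float(np.mean(gray))              # 0 (black) to 255 (white)
    return max(0.0, 100.0 - abs(mean_brightness - 128.0) / 128.0 * 100.0)

photo = cv2.imread("submission.jpg")
print(f"focus: {focus_score(photo):.0f}/100, exposure: {exposure_score(photo):.0f}/100")
```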
## Challenges we ran into
everything.
## Accomplishments that we're proud of
Managed to get a decent scoring method for both categories, which had pretty fair outcomes. Also, I got to learn a lot about Flask.
## What we learned
A lot of fun flask information, and how to connect backend with frontend.
## What's next for PictureDay
Many things mentioned above, such as:
* Leaderboard
* Photo gallery/portfolio
* pretty website
* social aspects such as adding friends. | winning |
## Inspiration
There are many occasions where we see a place in a magazine, or just any image source online and we don't know where the place is. There is no description anywhere, and a possible vacation destination may very possibly just disappear into thin air. We certainly did not want to miss out.
## What it does
Take a picture of a place. Any place. And upload it onto our web app. We will not only tell you where that place is located, but immediately generate a possible trip plan from your current location. That way, you will be able to know how far away you are from your desired destination, as well as how feasible this trip is in the near future.
## How we built it
We first figured out how to use Google Cloud Vision to retrieve the data we wanted. We then processed pictures uploaded to our Flask application, retrieved the location, and wrote the location to a text file. We then used Beautiful Soup to read the location from the text file, and integrated the Google Maps API, along with numerous tools within the API, to display possible vacation plans, and the route to the location.
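A hedged sketch of the landmark-lookup step (writing the result to the text file for the front end is omitted here):

```python
from google.cloud import vision

def locate_place(photo_path: str):
    client = vision.ImageAnnotatorClient()
    with open(photo_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.landmark_detection(image=image)
    if not response.landmark_annotations:
        return None
    landmark = response.landmark_annotations[0]
    latlng = landmark.locations[0].lat_lng
    return landmark.description, latlng.latitude, latlng.longitude

print(locate_place("mystery_place.jpg"))  # e.g. ("Eiffel Tower", 48.858..., 2.294...)
```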
## Challenges we ran into
This was our first time building a dynamic web app and using so many APIs, so it was pretty challenging. Our final obstacle, reading from a text file using JavaScript, turned out to be our toughest challenge: we realized it was not possible due to security concerns, so we had to do it through Beautiful Soup.
## Accomplishments that we're proud of
We're proud of being able to integrate many different API's into our application, and being able to make significant progress on the front end, despite having only two beginner members. We encountered many difficulties throughout the building process, and had some doubts, but we were still able to pull through and create a product with an aesthetically pleasing GUI that users can easily interact with.
## What we learned
We got better at reading documentation for different API's, learned how to integrate multiple API's together in a single application, and realized we could create something useful with just a bit of knowledge.
## What's next for TravelAnyWhere
TravelAnyWhere can definitely be taken on to a whole other level. Users could be provided with different potential routes, along with recommended trip plans that visit other locations along the way. We could also allow users to add multiple pictures corresponding to the same location to get a more precise reading on the destination through machine learning techniques. | ## Inspiration
We all had a common qualm about traveling abroad: while it was exciting to see new things in new places and even newer nations, we just found it hard to plan the trip and get around. Luckily for us, Google Maps and overpriced roaming helped us evade the fate of getting lost in the city.
## What it does
HappyTravels is an application designed to simplify the traveling process. It utilizes Radar.io to find tourist locations (e.g. a restaurant or a museum) near the user and Google Maps to bring up a path to a selected location. The locations can be filtered by category and the user's preferences to surface results that may interest them. The user's preferences are tracked by the application based on how frequently they visit a certain shop or place. The longer the user uses the application, the more precise and accurate the preferences become, providing more tailored recommendations for each user. When the user travels to another country, they can enter the categories they want to try there; even if those categories are not among their top preferences at the moment, they will be prioritized while searching for locations to recommend.
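A small sketch of that preference-tracking idea (the boost given to trip-specific categories is an illustrative assumption):

```python
from collections import Counter

class PreferenceTracker:
    def __init__(self):
        self.visit_counts = Counter()
        self.trip_wishlist = set()  # categories the user wants to try on this trip

    def record_visit(self, category: str) -> None:
        self.visit_counts[category] += 1

    def ranked_categories(self):
        # Wishlist categories get a large boost so they surface even with few visits.
        def score(category):
            return self.visit_counts[category] + (100 if category in self.trip_wishlist else 0)
        categories = set(self.visit_counts) | self.trip_wishlist
        return sorted(categories, key=score, reverse=True)

prefs = PreferenceTracker()
prefs.record_visit("cafe"); prefs.record_visit("cafe"); prefs.record_visit("museum")
prefs.trip_wishlist.add("street food")
print(prefs.ranked_categories())  # ['street food', 'cafe', 'museum']
```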
## How we built it
We set out to make a web app that relied on Flask to generate a website displaying our app's results. However, time wasn't on our side, and we ran out of it before we could link the web app GUI to the backend. Our app is built with the Google Maps APIs, the Radar.io API and Flask, interconnected with a backend full of Python.
## Challenges we ran into
One technical challenge we ran into was integrating the APIs into our program. In addition, there was a lot of debugging due to some people not being familiar with Python syntax. Challenges we ran into as a group include a lack of clear communication and project plans that were constantly being updated. While our group had a lot of vision and ideas for how the project should end up, it was difficult to constantly integrate these ideas into our main project plan.
## Accomplishments that we're proud of
Most of us are newbie hackers, so having come up with this in a short time makes me very happy. All of our members also learned a ton of stuff, such as Flask, basic HTML, JavaScript, and using APIs in various contexts and languages. Also, our algorithm ran very well after a painstakingly long debug process, and we are super proud of it. Lastly, having been able to scrape the code together after an intense amount of time without sleep was a great experience. Teamwork makes the dreamwork!
## What we learned
We learned numerous different technologies we have not used before. Some of our group members got their first experience with API usage while others got experience with development with Python. In addition, we learned about Flask usage and how Python can be used for development of web applications.
## What's next for HappyTravels
The nearest goal for HappyTravels is to create a clean website application that will be easier for our users to interact with. It may expand into a mobile application as well, since it is common for travelers to bring their phones wherever they go. In the long term, we may look into using machine learning and data from all our users to create a more effective location filter that can enhance the user experience.
We were going to build a themed application to time portal you back to various points in the internet's history that we loved, but we found out prototyping with retro looking components is tough. Building each component takes a long time, and even longer to code. We started by automating parts of this process, kept going, and ended up focusing all our efforts on automating component construction from simple Figma prototypes.
## What it does
Give the plugin a Figma frame that has a component roughly sketched out in it. Our code will parse the frame and output JSX that matches the input frame. We use semantic detection with Cohere classify on the button labels combined with deterministic algorithms on the width, height, etc. to determine whether a box is a button, input field, etc. It's like magic! Try it!
## How we built it
Under the hood, the plugin is a transpiler for high level Figma designs.
Similar to a C compiler compiling C code to binary, our plugin uses an abstract syntax tree like approach to parse Figma designs into html code.
Figma stores all its components (buttons, text, frames, input fields, etc.) in nodes. Nodes store properties about the component or type of element, such as height, width, absolute position, and fills, as well as their children nodes: other components that live within the parent component. Consequently, these nodes form a tree.
Our algorithm starts at the root node (root of the tree), and traverses downwards. Pushing-up the generated html from the leaf nodes to the root.
The base case is if the component was 'basic', one that can be represented with two or fewer html tags. These are our leaf nodes. Examples include buttons, body texts, headings, and input fields. To recognize whether a node was a basic component, we leveraged the power of an LLM.
We parsed the information stored in each node given to us by Figma into English sentences, then used it to train/fine-tune our classification model provided by co:here. We decided to use ML to do this since it is more flexible to unique and new designs. For example, we were easily able to create 8 different designs of a destructive button, and it would be time-consuming relative to the length of this hackathon to come up with a deterministic algorithm. We also opted to parse the information into English sentences instead of just feeding the model raw Figma node information since the LLM would have a hard time understanding data that didn't resemble a human language.
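The plugin itself calls Cohere from TypeScript; the Python sketch below just illustrates the idea of describing a node in English and asking the classifier what it is (labels and phrasings are made up, and the classify example class differs a bit between Cohere SDK versions):

```python
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

examples = [
    cohere.ClassifyExample(text="a 120x40 rounded rectangle with the label 'Submit'", label="button"),
    cohere.ClassifyExample(text="a 320x48 outlined rectangle with placeholder 'Enter your email'", label="input"),
    cohere.ClassifyExample(text="a 24px bold text layer that says 'Welcome back'", label="heading"),
    cohere.ClassifyExample(text="a 14px text layer with two sentences of copy", label="body text"),
]

node_description = "a 96x36 filled rectangle with the label 'Delete account'"
result = co.classify(inputs=[node_description], examples=examples)
print(result.classifications[0].prediction)  # expected: "button"
```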
At each node level in the tree, we grouped the children nodes based on a visual hierarchy. Humans do this all the time: if things are closer together, they're probably related, and we naturally group them. We achieved a similar effect by calculating the spacing between each component, then greedily grouping them based on spacing size. Components with spacings that were within a tolerance percentage of each other were grouped under one html `<div>`. We also determined the alignments (cross-axis, main-axis) of these grouped children to handle designs with different combinations of orientations. Finally, the function recurses on their children, and their converted code is pushed back up to the parent to be composited, until the root contains the code for the design. Our recursive algorithm made our plugin flexible to the countless designs possible in Figma.
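A simplified, pure-Python sketch of that greedy grouping step (the tolerance value and data shape are assumptions):

```python
def group_by_spacing(children, tolerance=0.2):
    """children: list of (name, start, end) along one axis, already sorted by start."""
    groups, current = [], [children[0]]
    prev_gap = None
    for prev, node in zip(children, children[1:]):
        gap = node[1] - prev[2]  # space between the previous child's end and this one's start
        if prev_gap is None or abs(gap - prev_gap) <= tolerance * max(prev_gap, 1):
            current.append(node)
            prev_gap = gap if prev_gap is None else prev_gap
        else:
            groups.append(current)
            current, prev_gap = [node], None
    groups.append(current)
    return groups

children = [("logo", 0, 40), ("title", 50, 120), ("subtitle", 130, 180), ("cta", 260, 300)]
print(group_by_spacing(children))
# [[('logo', 0, 40), ('title', 50, 120), ('subtitle', 130, 180)], [('cta', 260, 300)]]
```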
## Challenges we ran into
We ran into three main challenges. One was calculating the spacing. While it was easy to just apply an algorithm that merges two components at a time (similar to mergesort), it would produce too many nested divs and wouldn't really be useful for developers using the created component. So we came up with our greedy algorithm. However, due to our perhaps mistaken focus on efficiency, we decided to implement a more difficult O(n) algorithm to determine spacing, where n is the number of children. This sapped a lot of time away, which could have been used for other tasks and supporting more elements.
The second main challenge was with ML. We were actually using Cohere Classify wrongly, not taking semantics into account and trying to feed it raw numerical data. We eventually settled on using ML for what it was good at - semantic analysis of the label, while using deterministic algorithms to take other factors into account. Huge thanks to the Cohere team for helping us during the hackathon! Especially Sylvie - you were super helpful!
We also ran into issues with theming on our demo website. To show how extensible and flexible theming could be on our components, we offered three themes - windows XP, 7, and a modern web layout. We were originally only planning to write out the code for windows XP, but extending the component systems to take themes into account was a refactor that took quite a while, and detracted from our plugin algorithm refinement.
## Accomplishments that we're proud of
We honestly didn't think this would work as well as it does. We've never built a compiler before, and from learning off blog posts about parsing abstract syntax trees to implementing and debugging highly asychronous tree algorithms, I'm proud of us for learning so much and building something that is genuinely useful for us on a daily basis.
## What we learned
Leetcode tree problems actually are useful, huh.
## What's next for wayback
More elements! We can currently only detect buttons, text form inputs, text elements, and pictures. We want to support forms too, and automatically insert the controlling components (e.g. useState) where necessary. | losing |
## Inspiration
The worldwide phenomenon that is Wordle brings together users from all over the world to a play a once-a-day game. Loosely inspired by this successful tale, we present to you Bubble:
Young people from around the world have felt the repercussions on our mental well-being due to the never-ending restrictions imposed due to Covid-19. In the meantime, while we have been spending more time online, we have started to feel more disconnected from the real world. Journaling every day has proven to provide many mental health benefits, keeping people grounded and encouraging the practice of mindfulness and gratitude. Bubble is our solution to make self-reflection more accessible by encouraging members to get creative with daily journaling and reflection, as well as providing a moderated, anonymous bulletin board for people from all around the world to read each other's unique stories to a common worldwide prompt.
Also loosely inspired by "Humans of New York" and "Amours Solitaires" submissions; the NYT Crossword app's Daily Mini; and the apps that send out daily Bible verses/motivational quotes to subscribers.
## What it does
Every day a journaling prompt is sent to our subscribers via SMS. The text will direct users to our website, and they will then be able to reflect and flesh out their thoughts before submitting an anonymous response. After submission, the responses will be displayed for members to browse and enjoy. Using our auto-tagging system, each response will be parsed and tagged for its topic. This allows for better categorization and organization of responses.
**Ideal Goal**: Our idea was to create an everyday activity to encourage young people to take a moment out of their day to reflect. Our hope was to accomplish this by using Twilio to send out a daily prompt to each subscriber. It would be a simple yet thoughtful prompt that would ideally allow members to take a moment to think in an otherwise go-go-go world. Members would be able to respond to these prompts and then their answers would be anonymously published to our website for the purposes of inspiring others by sharing stories of gratitude, motivation, and other #wholesome themes.
**Actual Accomplished Goal**: As so often happens, not everything went our way. We couldn't get many things to work, and thus one of our main features was out of the picture. From there we ran into further problems understanding and implementing full-stack concepts that were foreign to all of us. In the end we completed three interesting programs, each a piece of the puzzle, but undoubtedly there is much to do before the project is finished. We have a website that allows people to publish their story (but with no database knowledge it just kind of disappears), a program that scours Reddit for thought-provoking statements, and we used AI to classify words to give inspiration for any given prompt.
## How we built it
Using the Reddit API and web-scraping techniques, we collect writing prompts from the r/WritingPrompts subreddit. The top prompt from this subreddit is sent daily to subscribers via SMS through Twilio's API. Users are then directed to our website, which was built using Flask and Twilio's API. Finally, our tagging system is built using the spaCy and NLTK libraries. This program analyzes responses to the prompts we got from Reddit and returns commonly seen and powerful keywords. We used Figma for the design aspects of the project.
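A condensed sketch of that daily pipeline (credentials, phone numbers, and the site URL are placeholders):

```python
import praw
from twilio.rest import Client

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="bubble-bot")
twilio = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")

subscribers = ["+15551234567"]  # placeholder numbers pulled from our subscriber store

top_post = next(reddit.subreddit("WritingPrompts").top(time_filter="day", limit=1))
prompt = top_post.title.removeprefix("[WP]").strip()

for number in subscribers:
    twilio.messages.create(
        body=f"Today's Bubble prompt: {prompt}\nReflect and share: https://example.com",
        from_="+15559876543",  # our Twilio number (placeholder)
        to=number,
    )
```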
## Challenges we ran into
Integrating all the pieces of our project into one cohesive unit proved to be very challenging. Our original vision was to collect responses via SMS using the Twilio API; after hours of coding, we realized this was not in our capabilities. Our final demo is split into various aspects that currently operate independently however in the future we hope to integrate these components into one unit.
## Accomplishments that we're proud of
We are proud of our persistence in debugging the various obstacles and adversities that came up throughout our project journey. This was our team's first hackathon, so it was difficult to get started on an idea as we were super clueless in the beginning, but we are incredibly happy with how far we have come.
Each of us learned new concepts and put ourselves out of our comfort zones in the pursuit of new skills. While we may not have created a groundbreaking project, ultimately we had fun, we got frustrated, we googled, we talked, we laughed, and in the end, we all become better computer scientists.
## What we learned
Aside from learning that Jonah's Red Bull limit is 4 cans (the most important finding from this weekend, clearly), we learned what an API is, how to use one, and the basics of full-stack development. Our team came into this project with zero experience working with front-end development and discovered how particularly challenging it is to connect the backend and frontend. We were exposed to new technologies such as Twilio and Glitch, which allowed us to add functionality to our project that we would otherwise not have been able to. We also learned a ton from the workshops and are personally very excited to code and launch our own websites! After getting sleep of course. #shamelessplug
## What's next for Bubble
We hope that Bubble will allow for users from all around the world to connect through words and to realize that we are all probably more similar to one another than we would think. In the future, we envision creating sub-forums (or sub-bubbles, if you will) for various communities and institutions and to have Bubble serve as a platform for these organizations to host digital bulletin boards for their employees/students/members - similar to monday.com, slack, or asana, but for more wholesome purposes. | ## How we built it
We built our project using JavaScript, CSS, and HTML.
## Challenges we ran into
Two teammates are in Canada and one is in China. The thirteen-hour time difference made it very hard for us to communicate about the project whenever we wanted. Also, only two of us had some basic knowledge of front-end development, and the other hadn't learned anything about web development before.
## Accomplishments that we're proud of
We tried something new. None of us had experience building a Chrome extension before. We are proud that we completed our project on time and that the extension meets all the goals we set at the beginning.
## What we learned
We learned how to build a chrome extension. One of our teammates learned CSS, HTML, and JavaScript in only five hours. | ## Slooth
Slooth.tech was born from the combined laziness of four Montréal-based hackers and their frustration with hard-to-navigate school websites.
When faced with the task of creating a hack for McHacks 2016, the creators of Slooth found the perfect opportunity to solve a problem they faced for a long time: navigating tediously complicated school websites.
Inspired by Natural Language Processing technologies and personal assistants such as Google Now and Siri, Slooth was aimed at providing students with an easy and modern way to access important documents on their school websites.
The Chrome extension Slooth was built with two main features in mind: customization and ease of use.
# Customization:
Slooth is based on user-recorded macros. Each user will record any actions they wish to automate using the macro recorder and associate an activation phrase with it.
# Ease of use:
Slooth is intended to simplify its user's workflow. As such, it was implemented as an easily accessible Chrome extension and utilizes voice commands to lead its user to their destination.
# Implementation:
Slooth is a Chrome extension built in JS and HTML.
The speech recognition part of Slooth is based on the Nuance ASR API kindly provided to all McHacks attendees.
# Features:
-Fully customizable macros
-No background spying. Slooth's speech recognition is done completely server side and notifies the user when it is recording their speech.
-Minimal server side interaction. Slooth's data is stored entirely locally, never shared with any outside server. Thus you can be confident that your personal browsing information is not publicly available.
-Minimal UI. Slooth is designed to simplify one's life. You will never need a user guide to figure out Slooth.
# Future
While Slooth reached its set goals during McHacks 2016, it still has room to grow.
In the future, the Slooth creators hope to implement the following:
-Full compatibility with single page applications
-Fully encrypted autofill forms synched with the user's Google account for cross platform use.
-Implementation of the Nuance NLU api to add more customization options to macros (such as verbs with differing parameters).
# Thanks
Special thanks to the following companies for their help and support in providing us with resources and APIs:
-Nuance
-Google
-DotTech | losing |
## Inspiration
“**Social media sucks these days.**” — These were the first few words we heard from one of the speakers at the opening ceremony, and they struck a chord with us.
I’ve never genuinely felt good while being on my phone, and like many others I started viewing social media as nothing more than a source of distraction from my real life and the things I really cared about.
In December 2019, I deleted my accounts on Facebook, Instagram, Snapchat, and WhatsApp.
For the first few months — I honestly felt great. I got work done, focused on my small but valuable social circle, and didn’t spend hours on my phone.
But one year into my social media detox, I realized that **something substantial was still missing.** I had personal goals, routines, and daily checklists of what I did and what I needed to do — but I wasn’t talking about them. By not having social media I bypassed superficial and addictive content, but I was also entirely disconnected from my network of friends and acquaintances. Almost no one knew what I was up to, and I didn’t know what anyone was up to either. A part of me longed for a level of social interaction more sophisticated than Gmail, but I didn’t want to go back to the forms of social media I had escaped from.
One of the key aspects of being human is **personal growth and development** — having a set of values and living them out consistently. Especially in the age of excess content and the disorder of its partly-consumed debris, more people are craving a sense of **routine, orientation, and purpose** in their lives. But it’s undeniable that **humans are social animals** — we also crave **social interaction, entertainment, and being up-to-date with new trends.**
Our team’s problem with current social media is its attention-based reward system. Most platforms reward users based on numeric values of attention, through measures such as likes, comments and followers. Because of this reward system, people are inclined to create more appealing, artificial, and addictive content. This has led to some of the things we hate about social media today — **addictive and superficial content, and the scarcity of genuine interactions with people in the network.**
This leads to a **backward-looking user-experience** in social media. The person in the 1080x1080 square post is an ephemeral and limited representation of who the person really is. Once the ‘post’ button has been pressed, the post immediately becomes an invitation for users to trap themselves in the past — to feel dopamine boosts from likes and comments that have been designed to make them addicted to the platform and waste more time, ultimately **distorting users’ perception of themselves, and discouraging their personal growth outside of social media.**
In essence — We define the question of reinventing social media as the following:
*“How can social media align personal growth and development with meaningful content and genuine interaction among users?”*
**Our answer is High Resolution — a social media platform that orients people’s lives toward an overarching purpose and connects them with liked-minded, goal-oriented people.**
The platform seeks to do the following:
**1. Motivate users to visualize and consistently achieve healthy resolutions for personal growth**
**2. Promote genuine social interaction through the pursuit of shared interests and values**
**3. Allow users to see themselves and others for who they really are and want to be, through natural, progress-inspired content**
## What it does
The following are the functionalities of High Resolution (so far!):
After Log in or Sign Up:
**1. Create Resolution**
* Name your resolution, whether it be Learning Advanced Korean, or Spending More Time with Family.
* Set an end date to the resolution — i.e. December 31, 2022
* Set intervals that you want to commit to this goal for (Daily / Weekly / Monthly)
**2. Profile Page**
* Ongoing Resolutions
+ Ongoing resolutions and level of progress
+ Clicking on a resolution opens up the timeline of that resolution, containing all relevant posts and intervals
+ Option to create a new resolution, or ‘Discover’ resolutions
* ‘Discover’ Page
+ Explore other users’ resolutions, that you may be interested in
+ Clicking on a resolution opens up the timeline of that resolution, allowing you to view the user’s past posts and progress for that particular resolution and be inspired and motivated!
+ Clicking on a user takes you to that person’s profile
* Past Resolutions
+ Past resolutions and level of completion
+ Resolutions can either be fully completed or partly completed
+ Clicking on a past resolution opens up the timeline of that resolution, containing all relevant posts and intervals
**3. Search Bar**
* Search for and navigate to other users’ profiles!
**4. Sentiment Analysis based on IBM Watson to warn against highly negative or destructive content**
* Two functions for sentiment analysis of textual data on the platform:
* One function to analyze the overall positivity/negativity of the text
* Another function to analyze the amount of joy, sadness, anger and disgust in the text
* When the user tries to create a resolution that seems to be triggered by negativity, sadness, fear or anger, we show them a gentle alert that this may not be best for them, and ask if they would like to receive some support.
* In the future, we can further implement this feature to do the same for comments on posts.
* This particular functionality has been demo'ed in the video, during the new resolution creation.
* **There are two purposes for this functionality**:
* a) We want all our members to feel that they are in a safe space, and while they are free to express themselves freely, we also want to make sure that their verbal actions do not pose a threat to themselves or to others.
* b) Current social media has been shown to be a propagator of hate speech leading to violent attacks in real life. One prime example is the Easter attacks that took place in Sri Lanka exactly a year ago: <https://www.bbc.com/news/technology-48022530>
* If social media had a mechanism to prevent such speech from being rampant, the possibility of such incidents occurring could have been reduced.
* Our aim is not to police speech, but rather to make people more aware of the impact of their words, and in doing so also try to provide resources or guidance to help people with emotional stress that they might be feeling on a day-to-day basis.
* We believe that education at the grassroots level through social media will have an impact on elevating the overall wellbeing of society.
## How we built it
Our tech stack primarily consisted of React (with Material UI), Firebase and IBM Watson APIs. For the purpose of this project, we opted to use the full functionality of Firebase to handle the vast majority of functionality that would typically be done on a classic backend service built with NodeJS, etc. We also used Figma to prototype the platform, while IBM Watson was used for its Natural Language toolkits, in order to evaluate sentiment and emotion.
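Purely for illustration, here is a minimal Python sketch of the two sentiment/emotion checks described above, using the IBM Watson NLU SDK (our production calls run from the web stack, and the API key, service URL, version string and helper name below are placeholders):

```python
# Hedged sketch: scoring a new resolution's text with IBM Watson NLU.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, SentimentOptions, EmotionOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

nlu = NaturalLanguageUnderstandingV1(
    version="2021-08-01",                      # placeholder API version date
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
nlu.set_service_url("YOUR_SERVICE_URL")

def analyze_resolution_text(text):
    """Return overall sentiment plus joy/sadness/anger/disgust scores for a resolution."""
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions(), emotion=EmotionOptions()),
    ).get_result()
    return {
        "sentiment": result["sentiment"]["document"],            # label + score
        "emotion": result["emotion"]["document"]["emotion"],     # joy, sadness, anger, disgust, fear
    }
```

If the returned sentiment label is negative or the sadness/anger/fear scores are high, the platform shows the gentle alert described above.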
## Challenges we ran into
A bulk of the challenges we encountered had to do with React Hooks. A lot of us were only familiar with an older version of React that opted for class components instead of functional components, so getting used to Hooks took a bit of time.
Another issue that arose was pulling data from our Firebase datastore. Again, this was a result of lack of experience with serverless architecture, but we were able to pull through in the end.
## Accomplishments that we're proud of
We’re really happy that we were able to implement most of the functionality that we set out to when we first envisioned this idea. We admit that we might have bitten off a bit more than we could chew in setting out to recreate an entire social platform in a short amount of time, but we believe the proof of concept comes through in our demo.
## What we learned
Through research and long contemplation on social media, we learned a lot about the shortcomings of modern social media platforms, for instance how they facilitate unhealthy addictive mechanisms that limit personal growth and genuine social connection, as well as how they have failed in various cases of social tragedies and hate speech. With that in mind, we set out to build a platform that could be on the forefront of a new form of social media.
From a technical standpoint, we learned a ton about how Firebase works, and we were quite amazed at how well we were able to work with it without a traditional backend.
## What's next for High Resolution
One of the first things that we’d like to implement next is the ‘Group Resolution’ functionality. As of now, users browse through the platform to find and connect with like-minded people pursuing similarly-themed interests. We think it would be interesting to allow users to create and pursue group resolutions with other users, forming more closely-knit and supportive communities of people who are actively communicating and working towards the same resolution.
We would also like to develop a sophisticated algorithm to tailor the users’ ‘Discover’ page, so that the shown content is relevant to their past resolutions. For instance, if the user has completed goals such as ‘Wake Up at 5:00AM’, and ‘Eat breakfast everyday’, we would recommend resolutions like ‘Morning jog’ on the discover page. By recommending content and resolutions based on past successful resolutions, we would motivate users to move onto the next step. In the case that a certain resolution was recommended because a user failed to complete a past resolution, we would be able to motivate them to pursue similar resolutions based on what we think is the direction the user wants to head towards.
We also think that High Resolution could potentially become a platform for recruiters to spot dedicated and hardworking talent, through the visualization of users’ motivation, consistency, and progress. Recruiters may also be able to use the platform to communicate with users and host online workshops or events.
With more classes and educational content transitioning online, we think the platform could serve as a host for online lessons and bootcamps for users interested in various topics such as coding, music, gaming, art, and languages, as we envision our platform being highly compatible with existing online educational platforms such as Udemy, Leetcode, KhanAcademy, Duolingo, etc.
The overarching theme of High Resolution is **motivation, consistency, and growth.** We believe that having a user base that adheres passionately to these themes will open to new opportunities and both individual and collective growth. | ## Inspiration
It took us a while to think of an idea for this project; after a long day of Zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive, our friends in the other room would offer little encouragements to keep us going, and we started to realize just how powerful those comments are. For all people working online, and university students in particular, balancing life on and off the screen is difficult. We often find ourselves forgetting to do daily tasks like drinking enough water or even just taking a small break, and, when we do, there is very often negativity towards the idea of rest. This is where You're Doing Great comes in.
## What it does
Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select different kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals) they will receive random positive messages, either through text or call, that will inspire and encourage. There is also an additional feature where users can send messages to friends so that they can share warmth and support because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through and so we have a list of further resources available on our site.
## How we built it
We built it using:
* AWS
+ DynamoDB
+ Lambda
+ Cognito
+ APIGateway
+ Amplify
* React
+ Redux
+ React-Dom
+ MaterialUI
* serverless
* Twilio (see the sketch after this list)
* Domain.com
* Netlify
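As a rough illustration of the Twilio reminder step, here is a minimal Python sketch; the phone numbers, environment-variable names and messages are placeholders, and our actual Lambda functions may be wired differently:

```python
# Illustrative only: send a user one of their saved encouragements at their chosen interval.
import os, random
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def send_reminder(to_number, messages):
    """Text the user a random positive message from their own list."""
    client.messages.create(
        body=random.choice(messages),
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=to_number,
    )

send_reminder("+15551234567", ["Drink some water!", "You're doing great - take a break."])
```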
## Challenges we ran into
Centring divs should not be so difficult :(
Transferring the name servers from domain.com to Netlify
Serverless deploying with dependencies
## Accomplishments that we're proud of
Our logo!
It works :)
## What we learned
We learned how to host a domain and we improved our front-end html/css skills
## What's next for You're Doing Great
We could always implement more reminder features and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add a chatbot functionality so that users could do a little check in when they get a message. | ## Inspiration
Product managers and startup founders rely heavily on qualitative, not quantitative data, to make product decisions. Their primary touchpoints with customers is user interviews. However, user interviews are useless if each lives in a siloed document. How can we assist product managers and startup founders in create a coherent body of knowledge from their user interactions?
## What it does
From a note, users can choose a specific paragraph and query for semantically similar information across the database with one shortcut. The query sidebar returns not only the relevant notes, but also points out exactly which line/paragraph of those notes is relevant to the query.
## How we built it
* User interface: Next.js with Typescript
* Backend API: Python Flask
* Database: Pinecone (vector database) and Firebase
We store the original documents in Firebase Firestore, but we also dissect our notes into paragraphs, embed them into vectors, and store the vectors into Pinecone for the purpose of semantic search.
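A rough sketch of that paragraph-level flow is below, using Pinecone's classic Python client; the `embed()` helper, index name, environment and metadata fields are placeholders rather than our exact implementation:

```python
# Hedged sketch of paragraph embedding, storage and semantic query with Pinecone.
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
index = pinecone.Index("notefusion-paragraphs")

def embed(text):
    """Placeholder for whichever embedding model turns a paragraph into a vector."""
    raise NotImplementedError

def store_note(note_id, paragraphs):
    # One vector per paragraph, with enough metadata to point back to the Firestore document.
    index.upsert([
        (f"{note_id}-{i}", embed(p), {"note_id": note_id, "paragraph": i})
        for i, p in enumerate(paragraphs)
    ])

def query_similar(selected_paragraph, top_k=5):
    # Matches carry metadata telling us which note and which paragraph to highlight.
    return index.query(vector=embed(selected_paragraph), top_k=top_k, include_metadata=True)
```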
## Challenges we ran into
Solidifying the idea for the project was the most difficult part, because there were various ways we could go about solving the problem. This led to miscommunication during development that hindered our progress.
## Accomplishments that we're proud of
Despite not having much experience in full-stack development or AI, we still completed the project and learned many valuable skills along the way.
## What's next for NoteFusion
Given more time, we would like to enhance the product to give a detailed analysis of the highlighted content based on past notes, suggesting personalized approaches for note-takers.
This app uses generative AI and personalization to enhance your note-taking experience. It improves the quality of your notes by clarifying any gibberish and incorporates easy commands without relying on other sources like MLA, LaTeX, and more. This transforms your disorganized notes into easy-to-understand and personalized content, tailored to your learning needs. | partial |
Aweare aims to tackle climate change at its root - overconsumption. Sea levels rising, global warming, extreme weather conditions - these are all symptoms of what centuries of buying things we don't need has resulted in. Our global efforts to combat climate change are slower because consumers and businesses are pushing responsibility back and forth - it's a blame game. Aweare aims to put responsibility in everyone's hands by creating social pressure for both the average shopper and the clothing companies. We believe that as Aweare trends, consumers will push each other to adopt more sustainable purchase habits simply out of ethical pressure. Simultaneously, as the demand for environmental footprint information per piece of clothing increases and shareholders begin to take notice, companies will provide this information as a customary accompaniment to their price tags. This is our goal, because the issue of climate change requires a two-pronged approach.
How it works:
1. The user signs up on the Aweare web app and uploads an image of a piece of clothing they're looking to buy. The user also enters the material composition listed on the tag (e.g., percentage of cotton, polyester, nylon).
2. Aweare matches this image against a trained database of images and classifies the piece of clothing by type (e.g., t-shirt, sweatpants, hoodie, button-down).
3. We use the type of clothing to derive its associated weight. Then we use the weight and material composition of the clothing to compute the CO2 emissions that went into manufacturing that item (sketched below).
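A toy illustration of step 3 is below; the garment weights and per-kilogram emission factors are placeholder numbers for the sketch, not the figures Aweare actually uses:

```python
# Toy CO2 estimate from garment type + material composition (all factors are illustrative).
GARMENT_WEIGHT_KG = {"t-shirt": 0.2, "hoodie": 0.5, "sweatpants": 0.45, "button-down": 0.25}
KG_CO2_PER_KG_FABRIC = {"cotton": 8.0, "polyester": 9.5, "nylon": 12.0}

def estimate_co2(garment_type, composition):
    """composition maps material -> fraction, e.g. {"cotton": 0.6, "polyester": 0.4}."""
    weight = GARMENT_WEIGHT_KG[garment_type]
    return sum(weight * frac * KG_CO2_PER_KG_FABRIC[mat] for mat, frac in composition.items())

print(estimate_co2("t-shirt", {"cotton": 0.6, "polyester": 0.4}))  # ~1.72 kg CO2e with these toy factors
```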
Our inspiration stems from the identification of two critical problems patients face in the health industry: information overload and inadequate support post-diagnosis, which results in isolation. We saw an opportunity to leverage computer vision, machine learning, and user-friendly interfaces to simplify the way diabetes patients interact with their health information and to connect individuals with similar health conditions and severity.
## What it does:
Our project is a web app that fosters personalized diabetes communities while alleviating information overload to enhance the well-being of at-risk individuals. Users can scan health documents, receive health predictions, and find communities that resonate with their health experiences. It streamlines the entire process, making it accessible and impactful.
## How we built it:
We built this project collaboratively, combining our expertise in various domains. Frontend development was done using Next.js, React, and Tailwind CSS. We leveraged components from <https://www.hyperui.dev> to ensure scalability and flexibility in our project. Our backend relied on Firebase for authentication and user management, PineconeDB for the creation of curated communities, and TensorFlow for the predictive model. For the image recognition, we used React-webcam and Tesseract for the optical character recognition and data parsing. We also used tools like Figma, Canva, and Google Slides for design, prototyping and presentation. Finally, we used the Discord.py API to automatically generate the user communication channels
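To make the scan-and-parse step concrete, here is a simplified sketch using Tesseract's Python bindings purely for illustration (the app itself wires Tesseract into the web frontend); the field regex and report layout are assumptions about a typical blood report, not our exact parser:

```python
# Hedged sketch: OCR a scanned blood report and pull out numeric fields.
import re
import pytesseract
from PIL import Image

def extract_bloodwork(image_path):
    text = pytesseract.image_to_string(Image.open(image_path))
    fields = {}
    # e.g. a line like "Glucose 5.4 mmol/L" -> {"Glucose": 5.4}
    for name, value in re.findall(r"([A-Za-z][A-Za-z ]+?)\s+(\d+(?:\.\d+)?)", text):
        fields[name.strip()] = float(value)
    return fields
```

The extracted fields are what we feed into the TensorFlow prediction model and the community-grouping step.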
## Challenges we ran into:
We encountered several challenges throughout the development process. These included integrating computer vision models effectively, managing the flow of data between the frontend and backend, and ensuring the accuracy of health predictions. Additionally, coordinating a diverse team with different responsibilities was another challenge.
## Accomplishments that we're proud of:
We're immensely proud of successfully integrating computer vision into our project, enabling efficient document scanning and data extraction. Additionally, building a cohesive frontend and backend infrastructure, despite the complexity, was a significant accomplishment. Finally, we take pride in successfully completing our project goal, effectively processing user blood report data, generating health predictions, and automatically placing our product users into personalized Discord channels based on common groupings.
## What we learned:
Throughout this project, we learned the value of teamwork and collaboration. We also deepened our understanding of computer vision, machine learning, and front-end development. Furthermore, we honed our skills in project management, time allocation, and presentation.
## What's next for One Health | Your Health, One Community.:
In the future, we plan to expand the platform's capabilities. This includes refining predictive models, adding more health conditions, enhancing community features, and further streamlining document scanning. We also aim to integrate more advanced machine-learning techniques and improve the user experience. Our goal is to make health data management and community connection even more accessible and effective. | ## Inspiration
Across the globe, a critical shortage of qualified teachers poses a significant challenge to education. The average student-to-teacher ratio in primary schools worldwide stands at an alarming **23:1!** In some regions of Africa, this ratio skyrockets to an astonishing **40:1**. [Research 1](https://data.worldbank.org/indicator/SE.PRM.ENRL.TC.ZS) and [Research 2](https://read.oecd-ilibrary.org/education/education-at-a-glance-2023_e13bef63-en#page11)
As populations continue to explode, the demand for quality education has never been higher, yet the *supply of capable teachers is dwindling*. This results in students receiving neither the attention nor the **personalized support** they desperately need from their educators.
Moreover, a staggering **20% of students** experience social anxiety when seeking help from their teachers. This anxiety can severely hinder their educational performance and overall learning experience. [Research 3](https://www.cambridge.org/core/journals/psychological-medicine/article/much-more-than-just-shyness-the-impact-of-social-anxiety-disorder-on-educational-performance-across-the-lifespan/1E0D728FDAF1049CDD77721EB84A8724)
While many educational platforms leverage generative AI to offer personalized support, we envision something even more revolutionary. Introducing **TeachXR—a fully voiced, interactive, and hyper-personalized AI** teacher that allows students to engage just like they would with a real educator, all within the immersive realm of extended reality.
*Imagine a world where every student has access to a dedicated tutor who can cater to their unique learning styles and needs. With TeachXR, we can transform education, making personalized learning accessible to all. Join us on this journey to revolutionize education and bridge the gap in teacher shortages!*
## What it does
**Introducing TeachVR: Your Interactive XR Study Assistant**
TeachVR is not just a simple voice-activated Q&A AI; it’s a **fully interactive extended reality study assistant** designed to enhance your learning experience. Here’s what it can do:
* **Intuitive Interaction**: Use natural hand gestures to circle the part of a textbook page that confuses you.
* **Focused Questions**: Ask specific questions about the selected text for summaries, explanations, or elaborations.
* **Human-like Engagement**: Interact with TeachVR just like you would with a real person, enjoying **millisecond response times** and a human voice powered by **Vapi.ai**.
* **Multimodal Learning**: Visualize the concepts you’re asking about, aiding in deeper understanding.
* **Personalized and Private**: All interactions are tailored to your unique learning style and remain completely confidential.
### How to Ask Questions:
1. **Circle the Text**: Point your finger and circle the paragraph you want to inquire about.
2. **OK Gesture**: Use the OK gesture to crop the image and submit your question.
### TeachVR's Capabilities:
* **Summarization**: Gain a clear understanding of the paragraph's meaning. TeachVR captures both book pages to provide context.
* **Examples**: Receive relevant examples related to the paragraph.
* **Visualization**: When applicable, TeachVR can present a visual representation of the concepts discussed.
* **Unlimited Queries**: Feel free to ask anything! If it’s something your teacher can answer, TeachVR can too!
### Interactive and Dynamic:
TeachVR operates just like a human. You can even interrupt the AI if you feel it’s not addressing your needs effectively!
## How we built it
**TeachXR: A Technological Innovation in Education**
TeachXR is the culmination of advanced technologies, built on a microservice architecture. Each component focuses on delivering essential functionalities:
### 1. Gesture Detection and Image Cropping
We have developed and fine-tuned a **hand gesture detection system** that reliably identifies gestures for cropping based on **MediaPipe gesture detection**. Additionally, we created a custom **bounding box cropping algorithm** to ensure that the desired paragraphs are accurately cropped by users for further Q&A.
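A condensed sketch of that gesture-to-crop idea is below, using MediaPipe Hands; the thresholds and the circle-tracking buffer are simplified stand-ins for our tuned detection and cropping logic:

```python
# Hedged sketch: track the index fingertip and crop the circled region of the page.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

def index_fingertip(frame_bgr):
    """Return the index fingertip as pixel coordinates, or None if no hand is detected."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    tip = results.multi_hand_landmarks[0].landmark[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
    h, w = frame_bgr.shape[:2]
    return int(tip.x * w), int(tip.y * h)

def crop_from_trace(frame_bgr, trace):
    """Crop the page region bounded by the fingertip positions collected while circling."""
    xs, ys = zip(*trace)
    return frame_bgr[min(ys):max(ys), min(xs):max(xs)]
```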
### 2. OCR (Word Detection)
Utilizing **Google AI OCR service**, we efficiently detect words within the cropped paragraphs, ensuring speed, accuracy, and stability. Given our priority on latency—especially when simulating interactions like pointing at a book—this approach aligns perfectly with our objectives.
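A bare-bones version of that OCR call with the Google Cloud Vision Python client is shown below; in production the bytes come from the cropped frame streamed off the headset rather than a file on disk:

```python
# Minimal sketch of text detection on a cropped paragraph image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def ocr_paragraph(image_bytes):
    response = client.text_detection(image=vision.Image(content=image_bytes))
    if response.error.message:
        raise RuntimeError(response.error.message)
    annotations = response.text_annotations
    return annotations[0].description if annotations else ""
```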
### 3. Real-time Data Orchestration
Our goal is to replicate the natural interaction between a student and a teacher as closely as possible. As mentioned, latency is critical. To facilitate the transfer of image and text data, as well as real-time streaming from the OCR service to the voiced assistant, we built a robust data flow system using the **SingleStore database**. Its powerful real-time data processing and lightning-fast queries enable us to achieve sub-1-second cropping and assistant understanding for prompt question-and-answer interactions.
### 4. Voiced Assistant
To ensure a natural interaction between students and TeachXR, we leverage **Vapi**, a natural voice interaction orchestration service that enhances our feature development. By using **DeepGram** for transcription, **Google Gemini 1.5 flash model** as the AI “brain,” and **Cartesia** for a natural voice, we provide a unique and interactive experience with your virtual teacher—all within TeachXR.
## Challenges we ran into
### Challenges in Developing TeachXR
Building the architecture to keep the user-cropped image in sync with the chat on the frontend posed a significant challenge. Due to the limitations of the **Meta Quest 3**, we had to run local gesture detection directly on the headset and stream the detected image to another microservice hosted in the cloud. This required us to carefully adjust the size and details of the images while deploying a hybrid model of microservices. Ultimately, we successfully navigated these challenges.
Another difficulty was tuning our voiced assistant. The venue we were working in was quite loud, making background noise inevitable. We had to fine-tune several settings to ensure our assistant provided a smooth and natural interaction experience.
## Accomplishments that we're proud of
### Achievements
We are proud to present a complete and functional MVP! The cropped image and all related processes occur in **under 1 second**, significantly enhancing the natural interaction between the student and **TeachVR**.
## What we learned
### Developing a Great AI Application
We successfully transformed a solid idea into reality by utilizing the right tools and technologies.
There are many excellent pre-built solutions available, such as **Vapi**, which has been invaluable in helping us implement a voice interface. It provides a user-friendly and intuitive experience, complete with numerous settings and plug-and-play options for transcription, models, and voice solutions.
## What's next for TeachXR
We’re excited about what the future of **TeachXR** holds: even greater innovations! We’ll be considering **adaptive learning algorithms** that tailor content in real time based on each student’s progress and engagement.
Additionally, we will work on integrating **multi-language support** to ensure that students from diverse backgrounds can benefit from personalized education. With these enhancements, TeachXR will not only bridge the teacher shortage gap but also empower every student to thrive, no matter where they are in the world! | losing |
## Inspiration
Students are often put into a position where they do not have the time nor experience to effectively budget their finances. This unfortunately leads to many students falling into debt, and having a difficult time keeping up with their finances. That's where wiSpend comes to the rescue! Our objective is to allow students to make healthy financial choices and be aware of their spending behaviours.
## What it does
wiSpend is an Android application that analyses financial transactions of students and creates a predictive model of spending patterns. Our application requires no effort from the user to input their own information, as all bank transaction data is synced in real-time to the application. Our advanced financial analytics allow us to create effective budget plans tailored to each user, and to provide financial advice to help students stay on budget.
## How I built it
wiSpend is built as an Android application that makes REST requests to our hosted Flask server. This server periodically makes requests to the Plaid API to obtain financial information and processes the data. The Plaid API allows us to access users' banking data from major financial institutions, including transactions, balances, assets & liabilities, and much more. We focused on analysing the credit and debit transaction data, and applied statistical analytics techniques to identify trends in the transactions. Based on the analysed results, the server determines what financial advice, in the form of a notification, to send to the user at any given point in time.
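As a hedged sketch of the kind of trend analysis the server runs (assuming the transactions have already been pulled from Plaid), the snippet below flags an unusually expensive week; the 20% threshold and field names are arbitrary examples, not our tuned values:

```python
# Illustrative spending-trend check over Plaid-style transaction records.
import pandas as pd

def weekly_spending_alert(transactions):
    """transactions: [{"date": "2019-01-05", "amount": 12.50, "category": "Food"}, ...]"""
    df = pd.DataFrame(transactions)
    df["date"] = pd.to_datetime(df["date"])
    weekly = df.set_index("date")["amount"].resample("W").sum()
    if len(weekly) < 2:
        return None
    baseline = weekly[:-1].mean()
    if weekly.iloc[-1] > 1.2 * baseline:
        return (f"Heads up: you spent ${weekly.iloc[-1]:.2f} this week, "
                f"about 20% above your usual ${baseline:.2f}.")
    return None
```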
## Challenges I ran into
Integration and creating our data processing algorithm.
## Accomplishments that I'm proud of
This was the first time we as a group brought all our individual work on the project together and successfully integrated it! This is a huge accomplishment for us, as the integration step is usually the blocking factor for a successful hackathon project.
## What I learned
Interfacing the Android app with the web server was a huge challenge, but it pushed us as developers to find clever solutions to the roadblocks we encountered and thereby develop our own skills.
## What's next for wiSpend
Our next feature would be a sophisticated budgeting tool to assist users in their budgeting needs. We also plan on creating a mobile UI that can provide even more insights to users in the form of charts, graphs, and infographics, as well as further developing our web platform to create a seamless experience across devices.
The idea of Fizz inspired us and motivated us to work on a simple yet intuitive budgeting solution that can be used by students especially teens.
## What it does
Fizzy is a simple yet seamless way for teens to keep track of their expenses and visualize them through graphs and charts. Fizzy also provides recommendations for users in managing their expenses. We alert users when their expenses cross a given threshold. Further, Fizzy can help the user with decision making and budgeting by providing an overview of how money is spent and how it can be saved.
## How we built it
We built Fizzy as a cross platform Flutter application. We used Firebase Firestore to save user data and retrieve it in real time. We also used a number of plugins and libraries which include charts for data visualization and tensorflow lite for deep learning.
## Challenges we ran into
* We lost touch with Flutter because it had been months since we last worked with it. It took us quite a while to get back up to speed.
* Data visualization consumed a remarkable amount of time.
## Accomplishments that we're proud of
* Coming up with a fully functional project within the time limits.
* Team work.
## What's next for Fizzy
* Researching the potential of such ideas in Fintech and learning their shortcomings.
* Reconstructing the idea based on those inferences and trying to deploy it in the market.
Have you ever wondered if your outfit looks good on you? Have you ever wished you did not have to spend so much time trying on your whole closet, taking a photo of yourself and sending it to your friends for some advice? Have you ever wished you had worn a jacket because it was much windier than you thought? Then MIR will be your new best friend - all problems solved!
## What it does
Stand in front of your mirror. Then ask Alexa for fashion advice. A photo of your outfit will be taken, then analyzed to detect your clothing articles, including their types, colors, and logos (bonus points if you are wearing a YHack t-shirt!). MIR will simply let you know if your outfit looks great, or if there is something even better in your closet. Examples of things that MIR takes into account include the types and colors of the outfit, the current weather, logos, etc.
## How I built it
### Frontend
React Native app for the smart mirror display. Amazon Lambda for controlling an Amazon Echo to process voice commands.
### Backend
Google Cloud Vision for identifying features and colors on a photo. Microsoft Cognitive Services for detecting faces and estimating where clothing would be. Scipy for template matching. Forecast.io for weather information.
Runs on Flask on Amazon EC2.
## Challenges I ran into
* Determining a good way to isolate clothing in an image - vision networks get distracted by things easily.
* React Native is amazing when it does work, but is just a pain when it doesn't.
* Our original method of using Google's Reverse Image Search for matching logos did not work as consistently.
## Accomplishments that I'm proud of
It works!
## What I learned
It can be done!
## What's next for MIR
MIR can be further developed and used in many different ways!
## Another video demo:
<https://youtu.be/CwQPjmIiaMQ> | partial |
## Inspiration
As developers, we were on a mission to create something truly extraordinary. Something that would change the way people approach fashion and make getting dressed in the morning an easier and more enjoyable experience.
Introducing Rate My Fit, our revolutionary AI software program that rates people's outfits based on color coordination, mood/aesthetic, appropriateness for the current weather, and the combination of complementary textures. We wanted to create a tool that not only enhances people's fashion sense but also helps them make the best outfit choices for any occasion and weather.
## What it does
We used cutting-edge image recognition technology and machine learning algorithms to train our program to understand the nuances of fashion and personal style. It can analyze an individual's outfit and give instant feedback on how to make it even better.
We are passionate about our technology and the impact it has on people's lives. We believe that our AI outfit rating program will empower individuals to make confident and stylish fashion choices, regardless of their body type, skin tone, or personal style.
## How we built it
Building our AI outfit rating software was a challenging and exciting journey. Our goal was to create a program that was not only accurate and efficient but also user-friendly and visually appealing.
We began by selecting the appropriate technology stack for our project. We chose to use Python and Flask for the back-end, JavaScript, CSS, and HTML for the front-end, and a state-of-the-art computer vision architecture in Python for the image recognition component.
To train our computer vision model, we collected a dataset of over 200,000 images of various outfits. We carefully curated the dataset to ensure a diverse representation of styles, body types, and occasions. Using this dataset, we were able to train our model to accurately recognize and analyze different aspects of an outfit such as color coordination, mood/aesthetic, appropriateness for the current weather, and the combination of complementary textures.
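A heavily condensed training sketch is shown below; the dataset path, class count and hyper-parameters are placeholders for the full pipeline described above, and we use a standard transfer-learning setup rather than reproducing our exact architecture:

```python
# Hedged sketch: fine-tune a pretrained backbone on labelled outfit images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/outfits/train", transform=tfms)   # placeholder path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))       # e.g. outfit-rating buckets

optim = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optim.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optim.step()
```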
Once the model was trained, we integrated it into our web application using Flask. The front-end team used JavaScript, CSS, and HTML to create a visually appealing and user-friendly interface. We also added a weather API to the software to provide real-time information on the current weather and make the rating even more accurate.
The final product is a powerful yet easy-to-use software that can analyze an individual's outfit and provide instant feedback on how to make it even better. We are proud of the technology we used and the impact it has on people's lives.
## Challenges we ran into
* **Cleaning and organizing the dataset**: With over 200,000 images to sift through, it was a daunting task to ensure that the images were high quality, diverse and appropriately labeled. It took a lot of time and effort to make sure the dataset was ready for training.
* **Building the complex JavaScript UI**: We wanted to create a visually appealing and user-friendly interface that would make it easy for users to interact with the software. However, this required a lot of attention to detail and testing to ensure that everything worked smoothly and looked good on various devices.
* **Creating the back-end processing for the analytics**: We needed to create an efficient pipeline to process the outfit ratings in real-time and provide instant feedback to the users. This required a lot of experimentation and testing to get the right balance between speed and accuracy.
* **Training the model** in PyTorch on a GPU: We had to optimize the training process and make sure the model was ready in time for the project submission. It was a race against time, but with a lot of hard work we were able to meet the deadline.
## Accomplishments that we're proud of
We're most proud of being able to train and deploy our own custom computer vision model. This is something we've all had the ambition to take on for quite a while but were always intimidated by the daunting task that training a neural network entails. Additionally, we're proud of building a full-stack web app that works on both mobile and desktop.
Overall, building this software was a challenging but rewarding experience. We learned a lot and pushed ourselves to new limits in order to deliver a product that we are truly proud of.
## What we learned
**Using Cuda to train PyTorch models can be very frustrating!** (Documentation is lacking). Also, building tests for the back-end to validate the quality of the ratings and the overall user experience was fun but more intensive than we envisioned.
## What's next for Rate My Fit
* Adding a live fit detection feature.
* Configuring a database to allow users to save their outfits for future reference.
* Add more analytical functionalities.
* Be able to recognize a wider range of clothing styles and garments. | ## 💡 **Inspiration**
The COVID-19 pandemic exposed the weaknesses of the fast fashion industry, revealing unsustainable practices that left over 70 million garment workers without pay. In response, we developed our virtual try-on app to advocate for sustainability and equality in fashion. Our platform enables users to explore their personal style while minimizing waste and production. By offering virtual try-ons, we shift the focus from fast fashion to thoughtful consumption. We promote diversity by encouraging users to experiment with styles from various cultures, fostering an inclusive fashion community.
## 🔍 **What It Does**
FashioNova intelligently recommends clothing based on your input in the web app. You can enter any prompt and virtually try on clothes using your camera. Our technology detects your body and overlays the selected garments onto your image, allowing you to see how they would look on you. FashioNova streamlines your shopping experience, saving you time by eliminating the hassle of trying on clothes and searching for outfits.
This presents a fantastic opportunity for fashion companies to upload images of their own clothing, allowing customers to virtually "Try before they buy," ultimately enhancing the shopping experience and boosting sales.
## ⚙️ **How we built it**
***Frontend:*** We first made our logo with Adobe Express. Then, we created the client side of our web app using React.js and JSX based on a high-fidelity prototype we created using Figma.
***Backend:*** Our backend was programmed in Python with Flask. It contains the WebRTC video component for the computer vision and VR/AR parts of the virtual try-on, as well as the Cloudflare AI API logic that produces clothing recommendations from the existing wardrobe based on the user's prompt.
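As a hedged illustration of the recommendation call, the snippet below uses Cloudflare's Workers AI REST interface; the endpoint shape, model name, prompt format and response fields are our assumptions and may need adjusting against the current docs:

```python
# Assumed shape of a Workers AI chat request used to pick items from the wardrobe.
import os
import requests

ACCOUNT_ID = os.environ["CF_ACCOUNT_ID"]
API_TOKEN = os.environ["CF_API_TOKEN"]
MODEL = "@cf/meta/llama-2-7b-chat-int8"   # placeholder model name

def recommend_outfit(prompt, wardrobe):
    url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
    body = {"messages": [
        {"role": "system", "content": f"Pick items only from this wardrobe: {', '.join(wardrobe)}"},
        {"role": "user", "content": prompt},
    ]}
    r = requests.post(url, headers={"Authorization": f"Bearer {API_TOKEN}"}, json=body)
    r.raise_for_status()
    return r.json()["result"]["response"]
```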
## 🚧 **Challenges we ran into**
We had some issues regarding the WebSocket logic (Socketio server) which caused a massive lag in the real-time video footage with the body node displays and virtual clothes. To avoid this error, we resorted to just displaying the current computer's webcam. We also had trouble navigating the CloudFlare AI API to choose clothing options based on the desired prompt and the machine learning classification dataset we created, but after some hard perseverance, we were able to implement the feature.
## ✔️ **Accomplishments that we're proud of**
We are very proud of our ability to create a working video recognition with a body node detection system that allowed us to creatively add the virtual clothes fitted on the user. Creating and training a machine learning model for classification and labelling clothing data was a first for all of us. Also integrating CloudFlare Ai for the first time and integrating it for creative use felt very achievable! Plus, creating a unique idea and having functional backend features and a very artsy front-end design was very rewarding!!!
## 📚 **What we learned**
As a team, we shared valuable knowledge and unique experiences, learning from one another throughout the process. We selected video recognition technology for our project, despite it being new to most of us. Although we encountered numerous bugs along the way, we worked together to overcome them, strengthening our collaboration and problem-solving skills.
## 🔭 **What's next for us!**
1. *Expanded Try-Ons:* We plan to introduce a wider variety of clothing styles and add accessories like shoes, hats, sunglasses, necklaces, and scarves for a more complete virtual experience.
2. *Enhanced Interactivity:* We aim to incorporate a location feature that displays the weather forecast and suggests appropriate clothing based on current conditions, as well as add a 3D rendering/design option to make the clothes look more realistic!
3. *Relevant Datasets:* We aim to enhance our model's ability to recognize and understand various clothing styles by utilizing datasets that feature a broader and more diverse range of apparel.
4. *Marketing and future:* We hope to partner with clothing companies to add a shopping feature within the app. Clothing stores that want to allow their clothes to be tried on virtually will be added to our shopping catalogue and users can try out clothes without even going to the store. If they like a piece, they will be directed to the company's website link for that product.
5. *Diversity Implementations:* Our web app aims to help all kinds of people. We are excited to continue working on this project to add helpful features to people with disabilities/injuries, people living busy lives, and people who want to upgrade their fashion through a cultural and futuristic approach.
We hope to make shopping for clothes easier by reducing physical strain and time spent shopping, and by making the whole experience more convenient for our future clients. Join us to help improve the future of virtual fashion!
Invest in **FASHIONOVA !!!** | ## MoodBox
### Smart DJ'ing using Facial Recognition
You're hosting a party with your friends. You want to play the hippest music and you’re scared of your friends judging you for your taste in music.
You ask your friends what songs they want to listen to… And only one person replies with that one Bruno Mars song that you’re all sick of listening to.
Well fear not, with MoodBox you can now set a mood and our app will intelligently select the best songs from your friends’ public playlists!
### What it looks like
You set up your laptop on the side of the room so that it has a good view of the room. Create an empty playlist for your party. This playlist will contain all the songs for the night. Run our script with that playlist, sit back and relax.
Feel free to adjust the level of hypeness as your party progresses. Increase the hype as the party hits the drop and then make your songs more chill as the night winds down into the morning. It’s as simple as adjusting a slider in our dank UI.
### Behind the scenes
We used Python’s `face_recognition` package together with the `opencv` library to implement facial recognition on ourselves. We keep a map from our facial encodings to Spotify user IDs, which we use to find each guest's saved songs.
We use the `spotipy` package to manipulate the playlist in real time. Once we find a new face in the frame, we first read the current mood from the slider, and then find songs in that user’s public library that best match the mood set by the host.
Once someone is out of the frame for long enough, they get removed from our buffer, and their songs get removed from the playlist. This also ensures that the playlist is empty at the end of the party, and everyone goes home happy. | losing |
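Below is a compressed sketch of the add-on-entry flow described above; the known encodings, user IDs and playlist ID are placeholders, mood filtering is omitted for brevity, and the spotipy call names reflect the current client (our hack-night code may have used older equivalents). It also assumes Spotify credentials in the standard `SPOTIPY_*` environment variables:

```python
# Hedged sketch: match a face to a Spotify user and pull some of their tracks into the party playlist.
import face_recognition
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-public"))

def identify_guest(frame_rgb, known_encodings, user_ids):
    for encoding in face_recognition.face_encodings(frame_rgb):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            return user_ids[matches.index(True)]
    return None

def add_guest_tracks(party_playlist_id, guest_user_id, per_guest=10):
    playlists = sp.user_playlists(guest_user_id)["items"]
    if not playlists:
        return
    tracks = sp.playlist_items(playlists[0]["id"], limit=per_guest)["items"]
    uris = [t["track"]["uri"] for t in tracks if t["track"]]
    sp.playlist_add_items(party_playlist_id, uris)
```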
## Inspiration
While attending Hack the 6ix, our team had a chance to speak to Advait from the Warp team. We got to learn about terminals and how he got involved with Warp, as well as his interest in developing something completely new for the 21st century. Through this interaction, our team decided we wanted to make an AI-powered developer tool as well, which gave us the idea for Code Cure!
## What it does
Code Cure can call your python file and run it for you. Once it runs, you will see your output as usual in your terminal, but if you experience any errors, our extension runs and gives some suggestions in a pop-up as to how you may fix it.
## How we built it
We made use of Azure's OpenAI service to power our AI code fixing suggestions and used javascript to program the rest of the logic behind our VS code extension.
## Accomplishments that we're proud of
We were able to develop an awesome AI-powered tool that can help users fix errors in their python code. We believe this project will serve as a gateway for more people to learn about programming, as it provides an easier way for people to find solutions to their errors.
## What's next for Code Cure
As of now, we are only able to send our output through a popup on the user's screen. In the future, we would like to implement a stylized tab where we are able to show the user different suggestions using the most powerful AI models available to us. | ## Inspiration
[Cursorless](https://www.cursorless.org/) is a voice interface for manipulating text (e.g., code). We saw its potential as a bold new interface for text editing. However, it is very unintuitive and learning Cursorless amounts to learning a new language, with unnatural and complicated syntax.
This was the inspiration behind Verbalist. We want to harness the power of voice (and AI) to greatly improve productivity while editing text, especially code.
Most other AI products access user data. We also want to ensure data security of our product.
## What it does
Verbalist is a VSCode extension that enables users to edit their code by voice. After a user downloads and configures the extension, they can record small voice snippets describing the high-level actions they want to take on text. Then, our AI models decide the specific actions to execute in order to carry out those high-level actions--all without processing the content of the file.
## Challenges and what we learned
We learned some limitations of using large-language models on difficult, real-world tasks. For example, the LLM model we used often struggled to identify a correct, intuitive sequence of actions to perform the user's specified action. We spent a long time refining prompts; we learned that our final results were very sensitive to the quality of our prompts.
We also spent a while setting up the interaction between our main extension TypeScript file and our Python file, which handled the recording and AI processing. Through this process, we learned how to set up inter-process communication and extensively using the standard libraries (e.g., input/output streams) of both Python and TypeScript.
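The rough shape of the Python side of that inter-process link is below; the message fields and handler behaviour are illustrative, not our exact protocol:

```python
# Hedged sketch: read one JSON request per line from the TypeScript host, reply on stdout.
import sys
import json

def handle(request):
    # e.g. {"type": "transcribe"} or {"type": "plan_actions", "utterance": "..."}
    if request.get("type") == "ping":
        return {"ok": True}
    return {"ok": False, "error": f"unknown request {request.get('type')}"}

for line in sys.stdin:
    if not line.strip():
        continue
    response = handle(json.loads(line))
    sys.stdout.write(json.dumps(response) + "\n")
    sys.stdout.flush()
```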
## Accomplishments we're proud of
Our extension allows users to use natural language to manipulate collections of lines and perform simple find-and-replace operations.
We also built on top of the VSCode text editing API to allow for higher-level operations without providing any file contents to AI.
## What's next
The concepts behind this prototype can easily be extended to a fully-functional extension that adds a functionality not present in any other software today. We can implement more high-level, detailed actions for the AI to perform; for example, the ability to rename a variable, surround an expression in parentheses, or perform actions across multiple files. The voice interface can become a natural extension of the keyboard, one that allows programmers to spend less time thinking and more time doing. | ## Inspiration
Let's face it: Museums, parks, and exhibits need some work in this digital era. Why lean over to read a small plaque when you can get a summary and details by tagging exhibits with a portable device?
There is a solution for this of course: NFC tags are a fun modern technology, and they could be used to help people appreciate both modern and historic masterpieces. Also there's one on your chest right now!
## The Plan
Whenever a tour group, such as a student body, visits a museum, they can streamline their activities with our technology. When a member visits an exhibit, they can scan an NFC tag to get detailed information and receive a virtual collectible based on the artifact. The goal is to facilitate interaction amongst the museum patrons for collective appreciation of the culture. At any time, the members (or, as an option, group leaders only) will have access to a live slack feed of the interactions, keeping track of each other's whereabouts and learning.
## How it Works
When a user tags an exhibit with their device, the Android mobile app (built in Java) will send a request to the StdLib service (built in Node.js) that registers the action in our MongoDB database, and adds a public notification to the real-time feed on slack.
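The real service is Node.js on StdLib; purely for illustration, here is a Python-flavoured sketch of the shape of that tag handler. The webhook URL, Mongo URI and field names are placeholders:

```python
# Illustrative tag handler: record the visit and post it to the live Slack feed.
import datetime
import requests
from pymongo import MongoClient

visits = MongoClient("mongodb://localhost:27017")["museum"]["visits"]
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"   # placeholder incoming webhook

def handle_tag(member, exhibit):
    visits.insert_one({"member": member, "exhibit": exhibit,
                       "at": datetime.datetime.utcnow()})
    requests.post(SLACK_WEBHOOK, json={"text": f"{member} just tagged '{exhibit}'"})
    return {"summary": f"Details and a collectible for {exhibit} sent to {member}"}
```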
## The Hurdles and the Outcome
Our entire team was green to every technology we used, but our extensive experience and relentless dedication let us persevere. Along the way, we gained experience with deployment oriented web service development, and will put it towards our numerous future projects. Due to our work, we believe this technology could be a substantial improvement to the museum industry.
## Extensions
Our product can be easily tailored for ecotourism, business conferences, and even larger scale explorations (such as cities and campus). In addition, we are building extensions for geotags, collectibles, and information trading. | losing |
## Inspiration
I embarked on the journey of creating the Narsingdee Biggan Club web application driven by my passion for programming and my love for creative projects. My toolbox for this endeavor included ReactJS, NodeJS, NextJS, CSS3, HTML5, and a handful of other technologies.
## What motivated me
Programming is not just a hobby for me; it is a source of inner peace and satisfaction. The act of creating something new fuels my passion, and I draw inspiration from the endless possibilities offered by technology.
## What I learned
I faced challenges and obstacles in this work, but I embraced them as opportunities to grow. Whenever I hit a problem, my instinct was to turn to Google and developer forums for a solution. This habit of constant learning not only helped me overcome challenges but also expanded my knowledge and problem-solving skills.
## Building the project
Building the Narsingdi Biggan Club web application was a labor of love. I carefully crafted each element, using ReactJS for dynamic user interfaces, NodeJS for server-side functionality, and NextJS for rendering, while CSS3 and HTML5 were my tools for designing the application's visuals.
## Challenges faced
No project is without its share of challenges. I faced problems ranging from debugging complex code to optimizing performance, but these challenges only deepened my commitment to the work. Each problem I solved was a step towards realizing my vision for the Narsingdi Biggan Club web application.
Ultimately, this project reflects not only my technical skills but also my passion for design and creativity.
OhMyDog is an app developed as part of an initiative to improve the living conditions of animals across the world. As a 100% non-profit project, OhMyDog directs all of its ad revenue towards enabling our volunteers to shelter, feed, and take care of animals in need.
As an app, OhMyDog teaches its users some really cool animal facts(!), all while generating ad-revenue for a great cause. Additionally, users can accumulate coins that they can spend on digital food and water, medication, or adoption that our (future) sponsors and volunteers will work to match in the real world. | ## Inspiration
Video games evolved when the Xbox Kinect was released in 2010, but for some reason we reverted back to controller-based games. We are here to bring back the amazingness of movement-controlled games with a new twist: reinventing how mobile games are played!
AR.cade uses a body part detection model to track movements that correspond to controls for classic games that are ran through an online browser. The user can choose from a variety of classic games such as temple run, super mario, and play them with their body movements.
## How we built it
* The first step was setting up OpenCV and importing a body-part tracking model from Google MediaPipe
* Next, based on the positions and angles between the landmarks, we created classification functions that detected specific movements, such as when an arm or leg was raised or the user jumped
* Then we mapped these movement classifications to keybinds on the computer. For example, when the user raises their right arm, it corresponds to the right arrow key (see the sketch after this list)
* We then embedded some online games of our choice into our front and and when the user makes a certain movement which corresponds to a certain key, the respective action would happen
* Finally, we created a visually appealing and interactive frontend/loading page where the user can select which game they want to play
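A simplified take on the arm-raise-to-arrow-key mapping is shown below; the wrist/shoulder comparison and pyautogui keybinds stand in for our full set of classification functions (the real app also debounces key presses):

```python
# Hedged sketch: raise an arm above the shoulder to press the matching arrow key.
import cv2
import pyautogui
import mediapipe as mp

pose = mp.solutions.pose.Pose()
L = mp.solutions.pose.PoseLandmark

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        lm = results.pose_landmarks.landmark
        # y grows downward, so a wrist above the shoulder means the arm is raised
        if lm[L.RIGHT_WRIST].y < lm[L.RIGHT_SHOULDER].y:
            pyautogui.press("right")
        elif lm[L.LEFT_WRIST].y < lm[L.LEFT_SHOULDER].y:
            pyautogui.press("left")
```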
## Challenges we ran into
A large challenge we ran into was embedding the video output window into the front end. We tried passing it through an API, and it worked with a basic plain video; however, the difficulties arose when we tried to pass the video with the body-tracking model overlaid on it.
## Accomplishments that we're proud of
We are proud of the fact that we are able to have a functioning product in the sense that multiple games can be controlled with body part commands of our specification. Thanks to threading optimization there is little latency between user input and video output which was a fear when starting the project.
## What we learned
We learned that it is possible to embed other websites (such as simple games) into our own local HTML sites.
We learned how to map landmark node positions into meaningful movement classifications considering positions, and angles.
We learned how to resize, move, and give priority to external windows such as the video output window
We learned how to run python files from JavaScript to make automated calls to further processes
## What's next for AR.cade
The next steps for AR.cade are to implement a more accurate body tracking model in order to track more precise parameters. This would allow us to scale our product to more modern games that require more user inputs such as Fortnite or Minecraft. | losing |
## Inspiration
Coming from South-East Asia, we have seen the havoc that natural disasters can wreak on urban populations
We wanted to create a probe that can assist on-site Search and Rescue team members to detect and respond to nearby survivors
## What it does
Each Dandelyon probe detects changes in its surroundings and pushes data regularly to the backend server.
Additionally, each probe has a buzzer that produces a noise if it detects changes in the environment to attract survivors.
Using various services, visualise data from all probes at the same time to investigate and determine areas of interest to rescue survivors.
## What it consists of
* Deployable IoT Probe
* Live data streams
* Data Visualisation on Microsoft Power BI
* Data Visualisation on WebApp with Pitney Bowes API(dandelyon.org)
## How we built it
**Hardware**
* Identified the sensors that we would be using
* Comprises of:
1. Cell battery
2. Breadboard
3. Jumper Wires
4. Particle Electron 2G (swapped over to our own Particle 3G as it has better connectivity) + Cellular antenna
5. GPS + external antenna
6. Sound detector sensor
7. Buzzer
8. Accelerometer
* Soldered pin headers onto sensors
* Tested the functionality of each sensor
1. Wired each sensor alone to the Electron
2. Downloaded the open source libraries for each sensor from GitHub
3. Wrote a code for main function for the sensor to communicate with the Electron
4. Read the output from each sensor and check if it's working
* Integrated every sensor with the Electron
* Tested the final functionality of the Electron
**Software**
* Infrastructure used
1. Azure IoT Hub
2. Azure Stream Analytics
3. Azure NoSQL
   1. Microsoft Power BI
4. Google Cloud Compute
   1. Particle Cloud with Microsoft Azure IoT Hub integration
* Backend Development
1. Flow of live data stream from Particle devices
2. Supplement live data with simulated data (sketched below)
3. Data is piped from Azure IoT Hub to PowerBI and Webapp Backend
4. PowerBI used to display live dashboards with live charts
5. WebApp displays map with live data
* WebApp Development
Deployed NodeJS server on Google Cloud Compute connected to Azure NoSQL database. Fetches live data for display on map.
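As a sketch of the simulated-probe feed mentioned under Backend Development, the snippet below pushes fake telemetry to Azure IoT Hub with the Python device SDK; the connection string and payload fields are placeholders for what the real Electron probes report via the Particle Cloud integration:

```python
# Hedged sketch: a simulated Dandelyon probe publishing telemetry to Azure IoT Hub.
import json
import random
import time
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("YOUR_DEVICE_CONNECTION_STRING")

while True:
    payload = {
        "probe_id": "sim-01",
        "lat": 1.3521 + random.uniform(-0.01, 0.01),
        "lon": 103.8198 + random.uniform(-0.01, 0.01),
        "sound_level": random.randint(0, 1023),
        "accel_delta": round(random.random(), 3),
    }
    client.send_message(Message(json.dumps(payload)))
    time.sleep(10)
```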
## Challenges we ran into
Hardware Integration
Azure IoT Stream connecting to PowerBI as well as our custom back-end
Working with live data streams
## Accomplishments that we're proud of
Integrating the Full Hardware suite
Integrating Probe -> Particle Cloud -> Azure IoT -> Azure Stream Analytics -> PowerBI
and Azure Stream Analytics -> Azure NoSQL -> Node.Js -> PitneyBowes/Leaflet
## What we learned
## What's next for Dandelyon
Prototyping the delivery shell used to deploy Dandelyon probes from a high altitude
Developing the backend interface used to manage and assign probe responses
In times of disaster, the capacity of rigid networks like cell service and internet dramatically decreases at the same time demand increases as people try to get information and contact loved ones. This can lead to crippled telecom services which can significantly impact first responders in disaster struck areas, especially in dense urban environments where traditional radios don't work well. We wanted to test newer radio and AI/ML technologies to see if we could make a better solution to this problem, which led to this project.
## What it does
Device nodes in the field network to each other and to the command node through LoRa to send messages, which helps increase the range and resiliency as more device nodes join. The command & control center is provided with summaries of reports coming from the field, which are visualized on the map.
## How we built it
We built the local devices using Wio Terminals and LoRa modules provided by Seeed Studio; we also integrated magnetometers into the devices to provide a basic sense of direction. Whisper was used for speech-to-text with Prediction Guard for summarization, keyword extraction, and command extraction, and trained a neural network on Intel Developer Cloud to perform binary image classification to distinguish damaged and undamaged buildings.
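A minimal sketch of the command-node side of that report pipeline is below; the model size is arbitrary and the `summarize()` stub stands in for the Prediction Guard summarization step described above:

```python
# Hedged sketch: transcribe a field report with Whisper, then hand it off for summarization.
import whisper

stt = whisper.load_model("base")

def summarize(text):
    """Placeholder for the Prediction Guard summarization/keyword-extraction call."""
    raise NotImplementedError

def process_field_report(wav_path):
    transcript = stt.transcribe(wav_path)["text"]
    return {
        "transcript": transcript,
        "summary": summarize(transcript),
    }
```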
## Challenges we ran into
The limited RAM and storage of microcontrollers made it more difficult to record audio and run TinyML as we intended. Many modules, especially the LoRa and magnetometer, did not have existing libraries so these needed to be coded as well which added to the complexity of the project.
## Accomplishments that we're proud of:
* We wrote a library so that LoRa modules can communicate with each other across long distances
* We integrated Intel's optimization of AI models to make efficient, effective AI models
* We worked together to create something that works
## What we learned:
* How to prompt AI models
* How to write drivers and libraries from scratch by reading datasheets
* How to use the Wio Terminal and the LoRa module
## What's next for Meshworks - NLP LoRa Mesh Network for Emergency Response
* We will improve the audio quality captured by the Wio Terminal and move speech-to-text processing to the edge to increase transmission speed and reduce bandwidth use.
* We will add a high-speed LoRa network to allow for faster communication between first responders in a localized area
* We will integrate the microcontroller and the LoRa modules onto a single board with GPS in order to improve ease of transportation and reliability | ## Inspiration
We're students, and that means one of our biggest inspirations (and some of our most frustrating problems) come from a daily ritual - lectures.
Some professors are fantastic. But let's face it, many professors could use some constructive criticism when it comes to their presentation skills. Whether it's talking too fast, speaking too *quietly* or simply not paying attention to the real-time concerns of the class, we've all been there.
**Enter LectureBuddy.**
## What it does
Inspired by lackluster lectures and little to no interfacing time with professors, LectureBuddy allows students to signal their instructors with teaching concerns on the spot while also providing feedback to the instructor about the mood and sentiment of the class.
By creating a web-based platform, instructors can create sessions from the familiarity of their smartphone or laptop. Students can then provide live feedback to their instructor by logging in with an appropriate session ID. At the same time, a camera intermittently analyzes the faces of students and provides the instructor with a live average mood for the class. Students are also given a chat room for the session to discuss material and ask each other questions. At the end of the session, the Lexalytics API is used to parse the chat room text and provide the instructor with the average tone of the conversations that took place.
Another important use for LectureBuddy is as an alternative to tedious USATs or other instructor evaluation forms. Currently, teacher evaluations are completed at the end of terms, and students are frankly no longer interested in providing critiques, as any change will not benefit them. LectureBuddy’s live feedback and student interactivity provide the instructor with consistent information. This allows them to adapt their teaching styles and change topics to better suit the needs of the current class.
## How I built it
LectureBuddy is a web-based application; most of the developing was done in JavaScript, Node.js, HTML/CSS, etc. The Lexalytics Semantria API was used for parsing the chat room data and Microsoft’s Cognitive Services API for emotions was used to gauge the mood of a class. Other smaller JavaScript libraries were also utilised.
## Challenges I ran into
The Lexalytics Semantria API proved to be a challenge to set up. The out-of-the box javascript files came with some errors, and after spending a few hours with mentors troubleshooting, the team finally managed to get the node.js version to work.
## Accomplishments that I'm proud of
Two first-time hackers contributed some awesome work to the project!
## What I learned
"I learned that json is a javascript object notation... I think" - Hazik
"I learned how to work with node.js - I mean I've worked with it before, but I didn't really know what I was doing. Now I sort of know what I'm doing!" - Victoria
"I should probably use bootstrap for things" - Haoda
"I learned how to install mongoDB in a way that almost works" - Haoda
"I learned some stuff about Microsoft" - Edwin
## What's next for Lecture Buddy
* Multiple Sessions
* Further in-depth analytics from an entire semester's worth of lectures
* Pebble / Wearable integration!
@Deloitte See our video pitch! | partial |
## Inspiration
Our project is inspired by the sister of one our creators, Joseph Ntaimo. Joseph often needs to help locate wheelchair accessible entrances to accommodate her, but they can be hard to find when buildings have multiple entrances. Therefore, we created our app as an innovative piece of assistive tech to improve accessibility across the campus.
## What it does
The user can find wheelchair accessible entrances with ease and get directions on where to find them.
## How we built it
We started off using MIT’s Accessible Routes interactive map to see where the wheelchair friendly entrances were located at MIT. We then inspected the JavaScript code running behind the map to find the latitude and longitude coordinates for each of the wheelchair locations.
We then created a Python script that filtered out the latitude and longitude values, ignoring the other syntax from the coordinate data, and stored the values in separate text files.
We tested whether our method would work in Python first, because it is the language we are most familiar with, by using string concatenation to add the proper Java syntax to the latitude and longitude points. Then we printed all of the points to the terminal and imported them into Android Studio.
After being certain that the method would work, we uploaded these files into the raw folder in Android Studio and wrote code in Java that would iterate through both of the latitude/longitude lists simultaneously and plot them onto the map.
The next step was learning how to change the color and image associated with each marker, which was very time intensive, but led us to having our custom logo for each of the markers.
Separately, we designed elements of the app in Adobe Illustrator and imported logos and button designs into Android Studio. Then, through trial and error (and YouTube videos), we figured out how to make buttons link to different pages, so we could have both a FAQ page and the map.
Then we combined both of the apps together atop of the original maps directory and ironed out the errors so that the pages would display properly.
## Challenges we ran into/Accomplishments
We had a lot more ideas than we were able to implement. Stripping our app to basic, reasonable features was something we had to tackle in the beginning, but it kept changing as we discovered the limitations of our project throughout the 24 hours. Therefore, we had to sacrifice features that we would otherwise have loved to add.
A big difficulty for our team was combining our different elements into a cohesive project. Since our team split up the usage of Android Studio, Adobe illustrator, and programming using the Google Maps API, it was most difficult to integrate all our work together.
We are proud of how effectively we were able to split up our team’s roles based on everyone’s unique skills. In this way, we were able to be maximally productive and play to our strengths.
We were also able to add Boston University accessible entrances in addition to MIT's, which proved that we could adopt this project for other schools and locations, not just MIT.
## What we learned
We used Android Studio for the first time to make apps. We discovered how much Google API had to offer, allowing us to make our map and include features such as instant directions to a location. This helped us realize that we should use our resources to their full capabilities.
## What's next for HandyMap
If given more time, we would have added many features such as accessibility for visually impaired students to help them find entrances, alerts for issues with accessing ramps and power doors, a community rating system of entrances, using machine learning and the community feature to auto-import maps that aren't interactive, and much, much more. Most important of all, we would apply it to all colleges and even anywhere in the world. | ## Inspiration
We got lost so many times inside MIT... And no one could help us :( No Google Maps, no Apple Maps, NO ONE. Since now, we always dreamed about the idea of a more precise navigation platform working inside buildings. And here it is. But that's not all: as traffic GPS usually do, we also want to avoid the big crowds that sometimes stand in corridors.
## What it does
Using just the pdf of the floor plans, it builds a digital map and creates the data structures needed to find the shortest path between two points, considering walls, stairs and even elevators. Moreover, using fictional crowd data, it avoids big crowds so that it is safer and faster to walk inside buildings.
## How we built it
Using k-means, we created nodes and clustered them using the elbow diminishing returns optimization. We obtained the hallways centers combining scikit-learn and filtering them applying k-means. Finally, we created the edges between nodes, simulated crowd hotspots and calculated the shortest path accordingly. Each wifi hotspot takes into account the number of devices connected to the internet to estimate the number of nearby people. This information allows us to weight some paths and penalize those with large nearby crowds.
A path can be searched on a website powered by Flask, where the corresponding result is shown.
## Challenges we ran into
At first, we didn't know which was the best approach to convert a pdf map to useful data.
The maps we worked with are taken from the MIT intranet and we are not allowed to share them, so our web app cannot be published as it uses those maps...
Furthermore, we had limited experience with Machine Learning and Computer Vision algorithms.
## Accomplishments that we're proud of
We're proud of having developed a useful application that can be employed by many people and can be extended automatically to any building thanks to our map recognition algorithms. Also, using real data from sensors (wifi hotspots or any other similar devices) to detect crowds and penalize nearby paths.
## What we learned
We learned more about Python, Flask, Computer Vision algorithms and Machine Learning. Also about frienship :)
## What's next for SmartPaths
The next steps would be honing the Machine Learning part and using real data from sensors. | ## Inspiration
Wanted to build a fun app that combined the fun of gifs with the hard headlines of the finance world
## What it does
The Funancial Times is a news site built for the 21st century. In today's ridiculous economic and political climate, we strived to make a feed that could adequately describe current headlines.
## How we built it
React, giphy search API, Kensho Graph API
## Challenges we ran into
Learning React, API request limits
## What we learned
React
## What's next for Funancial Times
Date changing, full production API | winning |
## Inspiration
In Vancouver, we are lucky to be surrounded by local markets, seafood vendors, and farms. As students we noticed that most people around us fall into habit of shopping from large supermarket chains such as Costco, Walmart, Superstore, which import foods full of preservatives from faraway distribution centres - ultimately selling less fresh food. We wanted to build an app like "Yelp" to encourage healthy grocery choices and simultaneously support local grocers and farmers.
## What it does
Peas of Mind is an web application that allows users to leave reviews of different fruits, vegetables, and seafood weekly so that users will be able to find local grocery stores that have the freshest groceries compared to large supermarket chains. One feature allows users to shop fresh seasonal products, while our Map page highlight popular vendors in walkable radius, or just search the product to find which store(s) have the freshest product.
## How we built it
Back end
* Node.js
* Express.js
* Google Maps API
Front end
* React
Database
* CockroachDB
Design
* Figma
## Challenges we ran into
Working with a new tech stack was a learning curve for some people. Ambitious platform involving user identification, customized feeds, search engines, and review processes. This led to a major time constraint where we found it difficult to juggle all of these challenges on top of dealing with typical coding bugs.
## Accomplishments that we're proud of
We think the concept of our web application builds community engagement for local small businesses, promotes excitement around health and nutrition when grocery shopping. We are also proud of the design for the web application because it's simple and effective, which will make the app easy for users to use.
## What we learned
We learnt that it is always helpful to take a step back when we run into problems or bugs, lean on each other for help, and not be scared of asking mentors or others for help or feedback on our project.
## What's next for Peas of Mind
At the time of writing, we have all the components for the app but we need more time to tie things together, and fully build out our design. We would like to build it out specifically for the community of Vancouver. | ## Inspiration
Students are running low on motivation to do schoolwork since lockdown / the pandemic. This site is designed to help students be more motivated to finish classes by providing a better sense of accomplishment and urgency to schoolwork.
## What it does
Provides a way for students to keep track and stay on top of their deliverables.
## How we built it
Two of us did the backend (database, python driver code [flask]), and the other two did the frontend (figma mockups, html, css, javascript)
## Challenges we ran into
Setting up the database, connecting front/backend, figuring out git (merging in particular).
## Accomplishments that we're proud of
We are very proud of our team collaboration, and ability to put together all this in the short time span. This was 3/4 members first ever hackathon, so the entire thing was such a fun and enjoyable learning experience.
## What we learned
Literally everything here was a huge learning experience for all team members.
## What's next for QScore - Gamify School!
We really think that we can extend this project further by adding more functionality. We want to add integrations with different university's if possible, maybe make a friends system/social media aspect.
## Discord Information
Team 5
Devy#2975
han#0288
Infinite#6201
naters#3774 | # Megafind
## Summary
Megafind is a webapp based platform for hosting live lecture sessions. Professors can begin a session that students in the lecture can join using some provided passcode. Upon joining the live session, students gain access to multiple features that provide an enhanced lecture experience. The dashboard has 3 main features:
1. The first is simply the lecture slides embedded into the left half of their screen––this is for the students to follow along with the presentation.
2. The right side contains two tabs. One is a live transcript of what the professor is saying that updates in real time. The app parses the professor's words in real time to find relevant people, places etc. Each term deemed relevant has a hyperlink to additional resources. These keywords are also stored in a digest that we will send at the end.
3. The third feature is an in-browser note taker that begins lecture with all the bullet points/text scraped from the powerpoint presentation. This way, students can focus on putting their own thoughts/notes instead of simply copying the lecture bullets.
At the end of the lecture, Megafind sends each student a copy of their "lecture digest" which contains 3 parts:
1. A summary of the lecture created by performing natural language understanding on the transcript
2. The notes taken by the student in lecture
3. Each keyword that we picked up on compiled into a list with a short summary of its definition (for study guides/quick reference) | losing |
## Inspiration
We were inspired by the resilience of freelancers, particularly creative designers, during the pandemic. As students, it's easy to feel overwhelmed and not value our own work. We wanted to empower emerging designers and remind them of what we can do with a little bit of courage. And support.
## What it does
Bossify is a mobile app that cleverly helps students adjust their design fees. It focuses on equitable upfront pay, which in turn increases the amount of money saved. This can be put towards an emergency fund. On the other side, clients can receive high-quality, reliable work. The platform has a transparent rating system making it easy to find quality freelancers.
It's a win-win situation.
## How we built it
We got together as a team the first night to hammer out ideas. This was our second idea, and everyone on the team loved it. We all pitched in ideas for product strategy. Afterwards, we divided the work into two parts - 1) Userflows, UI Design, & Prototype; 2) Writing and Testing the Algorithm.
For the design, Figma was the main software used. The designers (Lori and Janice) used a mix iOS components and icons for speed. Stock images were taken from Unsplash and Pexels. After quickly drafting the storyboards, we created a rapid prototype. Finally, the pitch deck was made to synthesize our ideas.
For the code, Android studio was the main software used. The developers (Eunice and Zoe) together implemented the back and front-end of the MVP (minimum viable product), where Zoe developed the intelligent price prediction model in Tensorflow, and deployed the trained model on the mobile application.
## Challenges we ran into
One challenge was not having the appropriate data immediately available, which was needed to create the algorithm. On the first night, it was a challenge to quickly research and determine the types of information/factors that contribute to design fees. We had to cap off our research time to figure out the design and algorithm.
There were also technical limitations, where our team had to determine the best way to integrate the prototype with the front-end and back-end. As there was limited time and after consulting with the hackathon mentor, the developers decided to aim for the MVP instead of spending too much time and energy on turning the prototype to a real front-end. It was also difficult to integrate the machine learning algorithm to our mini app's backend, mainly because we don't have any experience with implement machine learning algorithm in java, especially as part of the back-end of a mobile app.
## Accomplishments that we're proud of
We're proud of how cohesive the project reads. As the first covid hackathon for all the team members, we were still able to communicate well and put our synergies together.
## What we learned
Although a simple platform with minimal pages, we learned that it was still possible to create an impactful app. We also learned the importance of making a plan and time line before we start, which helped us keep track of our progress and allows us to use our time more strategically.
## What's next for Bossify
Making partnerships to incentivize clients to use Bossify! #fairpayforfreelancers | ## Inspiration
Covid-19 has turned every aspect of the world upside down. Unwanted things happen, situation been changed. Lack of communication and economic crisis cannot be prevented. Thus, we develop an application that can help people to survive during this pandemic situation by providing them **a shift-taker job platform which creates a win-win solution for both parties.**
## What it does
This application offers the ability to connect companies/manager that need employees to cover a shift for their absence employee in certain period of time without any contract. As a result, they will be able to cover their needs to survive in this pandemic. Despite its main goal, this app can generally be use to help people to **gain income anytime, anywhere, and with anyone.** They can adjust their time, their needs, and their ability to get a job with job-dash.
## How we built it
For the design, Figma is the application that we use to design all the layout and give a smooth transitions between frames. While working on the UI, developers started to code the function to make the application work.
The front end was made using react, we used react bootstrap and some custom styling to make the pages according to the UI. State management was done using Context API to keep it simple. We used node.js on the backend for easy context switching between Frontend and backend. Used express and SQLite database for development. Authentication was done using JWT allowing use to not store session cookies.
## Challenges we ran into
In the terms of UI/UX, dealing with the user information ethics have been a challenge for us and also providing complete details for both party. On the developer side, using bootstrap components ended up slowing us down more as our design was custom requiring us to override most of the styles. Would have been better to use tailwind as it would’ve given us more flexibility while also cutting down time vs using css from scratch.Due to the online nature of the hackathon, some tasks took longer.
## Accomplishments that we're proud of
Some of use picked up new technology logins while working on it and also creating a smooth UI/UX on Figma, including every features have satisfied ourselves.
Here's the link to the Figma prototype - User point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=68%3A3872&scaling=min-zoom)
Here's the link to the Figma prototype - Company/Business point of view: [link](https://www.figma.com/proto/HwXODL4sk3siWThYjw0i4k/NwHacks?node-id=107%3A10&scaling=min-zoom)
## What we learned
We learned that we should narrow down the scope more for future Hackathon so it would be easier and more focus to one unique feature of the app.
## What's next for Job-Dash
In terms of UI/UX, we would love to make some more improvement in the layout to better serve its purpose to help people find an additional income in job dash effectively. While on the developer side, we would like to continue developing the features. We spent a long time thinking about different features that will be helpful to people but due to the short nature of the hackathon implementation was only a small part as we underestimated the time that it will take. On the brightside we have the design ready, and exciting features to work on. | ## Inspiration
*Our project, Chill Chat, was inspired by the growing need for accessible and empathetic mental health support. We wanted to create a platform that could provide comfort and guidance to individuals seeking emotional well-being, especially in times of stress or anxiety. The increasing popularity of AI-powered chatbots and the potential for emotional intelligence in these tools motivated us to develop Chill Chat as a compassionate and personalized virtual companion.*
## What it does
Chill Chat is an AI-powered voice assistant designed to offer emotional support and guidance to users. It utilizes Hume AI's capabilities to understand and respond to users' emotional states, providing tailored advice, coping mechanisms, and resources. The app aims to create a safe and supportive space where users can express their feelings, receive empathetic responses, and find tools to manage their emotional well-being.
## How we built it
We built Chill Chat using a combination of Python, Flask, and Hume AI's Empathic Voice Interface (EVI). The backend of our application handles the WebSocket connection with EVI, processes user input, and generates responses based on the user's emotional state. The frontend is built using HTML, CSS, and JavaScript to provide a user-friendly interface for interacting with the chatbot.
## Challenges
We encountered technical challenges related to integrating the Hume AI API with our web application, which required careful debugging and problem-solving.
## Accomplishments that we're proud of
We are particularly proud of the ability of Chill Chat to provide empathetic and relevant responses to users. The AI's ability to recognize and understand emotions, even in complex or nuanced situations, is a significant accomplishment. We are also pleased with the user-friendly interface and intuitive design of the application, which makes it easy for users to interact with the chatbot and benefit from its services.
What we learned
Through the development of Chill Chat, we gained valuable insights into the challenges and opportunities of building AI-powered emotional support tools. We learned the importance of carefully curating training data, fine-tuning the AI model, and ensuring a user-friendly interface. We also gained a deeper understanding of the potential benefits and limitations of AI in providing emotional support.
## What's next for Chill Chat
We envision Chill Chat evolving into a more comprehensive platform that offers a wider range of features and resources for mental health support. | winning |
## Inspiration ✨
It can be hard to find the motivation to study, especially when social media suffocates you with an endless stream of distractions, leaving very little time for yourself. And with modern pressures to stay in the moment, balancing your social presence and self-improvement is becoming increasingly difficult, often forcing us to opt for one over the other. But what if you didn't have to? Now introducing Studyscout, an innovative tech solution seeking to connect you with friends while achieving your personal learning goals. Whether you enjoy a lively work environment or a quiet one, our app aims to be accessible to all needs, providing real-time updates on study spaces from the comfort of your home. We utilize the latest indoor-mapping technology with a network of automated IoT devices so you can finally gain some well-deserved "me time".
## What it does 🔥
As a project centered around creating accessible experiences for students, we connect the real world with digitized information through a carefully engineered IoT solution. Each device is affixed under a study space and uses sensor data to determine the number of occupants. Upon detecting a successful movement check, the device promptly uploads the data to MongoDb, where it is processed by our server in real-time to relay information to our users. Additionally, our implementation of Mappedin allows users the most convenient method to visually present our data, enabling future expansion into other fields, such as hospitals, venues, social coordination, and more.
Users can [view the deployed website here](https://www.studyscout.tech/) which spots are readily available for use without walking a marathon for a table. The [full front-end is available here](https://studyspaces-sand.vercel.app/).
Studyscout pairs real-time sensor data with our easy-to-use website. This way, we can effectively save important student time while still providing the outside study experience. With our anonymous data tracking, local management of these spaces can utilize this data to determine peak business hours, area use frequency and ultimately optimize for student convenience.
## How we built it 🛠️
Studyscout was built using a Raspberry Pi and a basic motion sensor. This acts as our physical device pasted under study spaces and can be connected via a local network or cloud. Our front-end was built using React libraries, Figma and standard HTML/CSS and Javascript. For our backend, we rely on MongoDB for our databases, MappedIn for interactive 3-D visualization technology, and Vercel for domain hosting.
## Challenges we ran into 💥
Our team ran into many initial changes setting up our MongoDB database. We followed the Telus Front End tutorial and were met with many errors. The rest of our journey was fairly smooth until we ran into errors with integrating our interactive MappedIn component. Three hours, 2 mentors, and many discussions later, we were left with a single "resize" function error that wasn't even utilized in our local repository. This caused us to lose our progress transferring from React to Next.js as well, and we had to default back to our original libraries. Additionally, our frontend had no experience with React until one week before the hackathon. As the sole member responsible for creating the entire front-end development in React and UI/UX design in Figma, she struggled with the sheer amount of work.
## Accomplishments that we’re proud of 🎉
One of our biggest accomplishments was having the hardware work properly and detect motion. Getting an effective database up and running that could take user data was also a significant accomplishment for us. We worked through many front-end issues with React and Next.js, learning a lot along the way. As for Patti, she is proud of how she was able to design, program, and deploy a website by herself within the time period despite her lack of knowledge (and sleep!).
## What we learned 🧠
Patti - Effectively utilizing React, front-end technologies, and design principles to build a functional, responsive website.
//
Raymond - Hardware and python-end of pushing to Mongodb, hosting with Vercel, designing Mappedin table layout for IBK
//
Steven - Worked on connecting the backend using Next.js and MongoDB and troubleshooting different errors.
//
Xavier - Python computer vision and connecting frontend and backend with React and MongoDB
## What’s next ⏭️
Full implementation of users and exploring further in the direction of socializing and gamifying studying. Creating a 3-D enclosure to house the device safely. | ## Inspiration
Our inspiration for Smart Sprout came from our passion for both technology and gardening. We wanted to create a solution that not only makes plant care more convenient but also promotes sustainability by efficiently using water resources.
## What it does
Smart Sprout is an innovative self-watering plant system. It constantly monitors the moisture level in the soil and uses this data to intelligently dispense water to your plants. It ensures that your plants receive the right amount of water, preventing overwatering or underwatering. Additionally, it provides real-time moisture data, enabling you to track the health of your plants remotely.
## How we built it
We built Smart Sprout using a combination of hardware and software. The hardware includes sensors to measure soil moisture, an Arduino microcontroller to process data, and a motorized water dispenser to regulate watering. The software utilizes custom code to interface with the hardware, analyze moisture data, and provide a user-friendly interface for monitoring and control.
## Challenges we ran into
During the development of Smart Sprout, we encountered several challenges. One significant challenge was optimizing the water dispensing mechanism to ensure precise and efficient watering. The parts required by our team, such as a water pump, were not available. We also had to fine-tune the sensor calibration to provide accurate moisture readings, which took much more time than expected. Additionally, integrating the hardware with a user-friendly software interface posed its own set of challenges.
## Accomplishments that we're proud of
The rotating bottle, and mounting it. It has to be rotated such that the holes are on the top or bottom, as necessary, but the only motor we could find was barely powerful enough to turn it. We reduced friction on the other end by using a polygonal 3d-printed block, and mounted the motor opposite to it. Overall, finding an alternative to a water pump was something we are proud of.
## What we learned
As is often the case, moving parts are the most complicated, but we also are using the arduino for two things at the same time: driving the motor and writing to the display. Multitasking is a major component of modern operating systems, and it was interesting to work on it in this case here.
## What's next for Smart Sprout
The watering system could be improved. There exist valves that are meant to be electronically operated, or a human designed valve and a servo, which would allow us to link it to a municipal water system. | ## Inspiration
We often found ourselves stuck at the start of the design process, not knowing where to begin or how to turn our ideas into something real. In large organisations these issues are not only inconvenient and costly, but also slow down development. That is why we created ConArt AI to make it easier. It helps teams get their ideas out quickly and turn them into something real without all the confusion.
## What it does
ConArt AI is a gamified design application that helps artists and teams brainstorm ideas faster in the early stages of a project. Teams come together in a shared space where each person has to create a quick sketch and provide a prompt before the timer runs out. The sketches are then turned into images and everyone votes on their team's design where points are given from 1 to 5. This process encourages fast and fun brainstorming while helping teams quickly move from ideas to concepts. It makes collaboration more engaging and helps speed up the creative process.
## How we built it
We built ConArt AI using React for the frontend to create a smooth and responsive interface that allows for real-time collaboration. On the backend, we used Convex to handle game logic and state management, ensuring seamless communication between players during the sketching, voting, and scoring phases.
For the image generation, we integrated the Replicate API, which utilises AI models like ControlNet with Stable Diffusion to transform the sketches and prompts into full-fledged concept images. These API calls are managed through Convex actions, allowing for real-time updates and feedback loops.
The entire project is hosted on Vercel, which is officially supported by Convex, ensuring fast deployment and scaling. Convex especially enabled us to have a serverless experience which allowed us to not worry about extra infrastructure and focus more on the functions of our app. The combination of these technologies allows ConArt AI to deliver a gamified, collaborative experience.
## Challenges we ran into
We faced several challenges while building ConArt AI. One of the key issues was with routing in production, where we had to troubleshoot differences between development and live environments. We also encountered challenges in managing server vs. client-side actions, particularly ensuring smooth, real-time updates. Additionally, we had some difficulties with responsive design, ensuring the app looked and worked well across different devices and screen sizes. These challenges pushed us to refine our approach and improve the overall performance of the application.
## Accomplishments that we're proud of
We’re incredibly proud of several key accomplishments from this hackathon.
Nikhil: Learned how to use a new service like Convex during the hackathon, adapting quickly to integrate it into our project.
Ben: Instead of just showcasing a local demo, he managed to finish and fully deploy the project by the end of the hackathon, which is a huge achievement.
Shireen: Completed the UI/UX design of a website in under 36 hours for the first time, while also planning our pitch and brand identity, all during her first hackathon.
Ryushen: He worked on building React components and the frontend, ensuring the UI/UX looked pretty, while also helping to craft an awesome pitch.
Overall, we’re most proud of how well we worked as a team. Every person filled their role and brought the project to completion, and we’re happy to have made new friends along the way!
## What we learned
We learned how to effectively use Convex by studying its documentation, which helped us manage real-time state and game logic for features like live sketching, voting, and scoring. We also learned how to trigger external API calls, like image generation with Replicate, through Convex actions, making the integration of AI seamless. On top of that, we improved our collaboration as a team, dividing tasks efficiently and troubleshooting together, which was key to building ConArt AI successfully.
## What's next for ConArt AI
We plan to incorporate user profiles in order to let users personalise their experience and track their creative contributions over time. We will also be adding a feature to save concept art, allowing teams to store and revisit their designs for future reference or iteration. These updates will enhance collaboration and creativity, making ConArt AI even more valuable for artists and teams working on long-term projects. | partial |
## Inspiration
Peer-review is critical to modern science, engineering, and healthcare
endeavors. However, the system for implementing this process has lagged behind
and results in expensive costs for publishing and accessing material, long turn
around times reminiscent of snail-mail, and shockingly opaque editorial
practices. Astronomy, Physics, Mathematics, and Engineering use a "pre-print
server" ([arXiv](https://arxiv.org)) which was the early internet's improvement
upon snail-mailing articles to researchers around the world. This pre-print
server is maintained by a single university, and is constantly requesting
donations to keep up the servers and maintenance. While researchers widely
acknowledge the importance of the pre-print server, there is no peer-review
incorporated, and none planned due to technical reasons. Thus, researchers are
stuck with spending >$1000 per paper to be published in journals, all the while
individual article access can cost as much as $32 per paper!
([source](https://www.nature.com/subscriptions/purchasing.html)). For reference,
a single PhD thesis can contain >150 references, or essentially cost $4800 if
purchased individually.
The recent advance of blockchain and smart contract technology
([Ethereum](https://www.ethereum.org/)) coupled with decentralized
file sharing networks ([InterPlanetaryFileSystem](https://ipfs.io))
naturally lead us to believe that archaic journals and editors could
be bypassed. We created our manuscript distribution and reviewing
platform based on the arXiv, but in a completely decentralized manner.
Users utilize, maintain, and grow the network of scholarship by simply
running a simple program and web interface.
## What it does
arXain is a Dapp that deals with all the aspects of a peer-reviewed journal service.
An author (wallet address) will come with a bomb-ass paper they wrote.
In order to "upload" their paper to the blockchain, they will first
need to add their file/directory to the IPFS distributed file system. This will
produce a unique reference number (DOI is currently used in journals)
and hash corresponding to the current paper file/directory.
The author can then use their address on the Ethereum network to create a new contract
to submit the paper using this reference number and paperID. In this way, there will
be one paper per contract. The only other action the
author can take on that paper is submitting another draft.
Others can review and comment on papers, but an address can not comment/review
its own paper. The reviews are rated on a "work needed", "acceptable" basis
and the reviewer can also upload an IPFS hash of their comments file/directory.
Protection is also built in such that others can not submit revisions of the
original author's paper.
The blockchain will have a record of the initial paper submitted, revisions made
by the author, and comments/reviews made by peers. The beauty of all of this is
one can see the full transaction histories and reconstruct the full evolution of
the document. One can see the initial draft, all suggestions from reviewers,
how many reviewers, and how many of them think the final draft is reasonable.
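
To make those rules concrete, here is a minimal, illustrative Python model of the logic the contract enforces. The real thing is a Solidity smart contract; all names below are ours, chosen only for illustration.

```python
class PaperContract:
    """Illustrative model of arXain's one-contract-per-paper rules (not the Solidity source)."""

    def __init__(self, author, paper_id, ipfs_hash):
        self.author = author            # Ethereum address that deployed the contract
        self.paper_id = paper_id        # reference number, analogous to a DOI
        self.drafts = [ipfs_hash]       # history of IPFS hashes, one per submitted draft
        self.reviews = []               # (reviewer, verdict, comments_ipfs_hash) tuples

    def submit_draft(self, sender, new_ipfs_hash):
        # Only the original author may add revisions to this paper.
        if sender != self.author:
            raise PermissionError("only the author can submit a new draft")
        self.drafts.append(new_ipfs_hash)

    def submit_review(self, sender, verdict, comments_ipfs_hash):
        # Authors cannot review their own paper; verdicts are binary.
        if sender == self.author:
            raise PermissionError("authors cannot review their own paper")
        if verdict not in ("work needed", "acceptable"):
            raise ValueError("verdict must be 'work needed' or 'acceptable'")
        self.reviews.append((sender, verdict, comments_ipfs_hash))

    def status(self):
        accepted = sum(1 for _, v, _ in self.reviews if v == "acceptable")
        return {"drafts": len(self.drafts), "reviews": len(self.reviews), "acceptable": accepted}
```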
## How we built it
There are 2 main back-end components, the IPFS file hosting service
and the Ethereum blockchain smart contracts. They are bridged together
with ([MetaMask](https://metamask.io/)), a tool for connecting
the distributed blockchain world, and by extension the distributed
papers, to a web browser.
We designed smart contracts in Solidity. The IPFS interface was built using a
combination of Bash, HTML, and a lot of regex!
Then we connected the IPFS distributed net with the Ethereum Blockchain using
MetaMask and Javascript.
## Challenges we ran into
On the Ethereum side, setting up the Truffle Ethereum framework and test
networks was challenging. Learning the limits of Solidity and constantly
reminding ourselves that we had to remain decentralized was hard!
The IPFS side required a lot of clever regex-ing. Ensuring that public access
to researchers' manuscripts and review histories required proper identification
and distribution on the network.
The hardest part was using MetaMask and Javascript to call our contracts
and connect the blockchain to the browser. We struggled for hours
trying to get JavaScript to deploy a contract on the blockchain. We were all
new to functional programming.
## Accomplishments that we're proud of
Closing all the curly bois and parentheses in JavaScript.
Learning a whole lot about the blockchain and IPFS. We went into this
weekend wanting to learn about how the blockchain worked, and came out
learning about Solidity, IPFS, JavaScript, and a whole lot more. You can
see our "genesis-paper" on an IPFS gateway (a bridge between HTTP and IPFS) [here](https://gateway.ipfs.io/ipfs/QmdN2Hqp5z1kmG1gVd78DR7vZmHsXAiSbugCpXRKxen6kD/0x627306090abaB3A6e1400e9345bC60c78a8BEf57_1.pdf)
## What we learned
We went into this knowing only that there was a way to write smart contracts,
that IPFS existed, and minimal JavaScript.
We gained intimate knowledge of setting up the Ethereum Truffle framework,
Ganache, and test networks, along with the development side of Ethereum
Dapps: the Solidity language and JavaScript tests with the Mocha framework.
We learned how to navigate the filespace of IPFS, hash and organize directories,
and how file distribution works on a P2P swarm.
## What's next for arXain
With some more extensive testing, arXain is ready for the Ropsten test network
*at the least*. If we had a little more ETH to spare, we would consider launching
our Dapp on the Main Network. arXain PDFs are already on the IPFS swarm and can
be accessed by any IPFS node. | ## Inspiration
It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate, you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that eventually end up in their storage lockers. This cycle of buy-store-collect-dust inspired us to develop LendIt, a product that aims to curb the growing waste economy and generate passive income for the users on the platform.
## What it does
A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage.
## How we built it
Our Smart Lockers are built with Raspberry Pi 3 (64-bit, 1 GB RAM, ARM64) single-board computers and are connected to our app by interfacing with Google's Firebase. The locker also uses facial recognition powered by OpenCV and object detection with Google's Cloud Vision API.
For our app, we've used Flutter/Dart and interfaced with Firebase. To ensure *trust*, which is core to borrowing and lending, we've experimented with Ripple's API to create an escrow system.
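
For reference, a minimal sketch of the face-detection step on the locker might look like the following. This uses the stock Haar cascade that ships with OpenCV; the camera index is a placeholder, and the full pipeline on the locker adds a trained recognizer on top of this to decide whose face it is.

```python
import cv2

# Stock frontal-face Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # camera index is a placeholder

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # The cropped face would be handed to a recognizer that decides
        # whether to unlock the latch for this particular user.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("lendit-locker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```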
## Challenges we ran into
We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker!
On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky.
With the core tech of the project relying heavily on the Google Cloud Platform, we had to resort to unconventional methods to utilize its capabilities with an internet connection that played Russian roulette.
## Accomplishments that we're proud of
The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of.
## What's next for LendIt
We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy. | ## Inspiration
We all spend too much money on Starbucks, except for Arri, whose guilty pleasure is a secret that he won't tell us.
## What it does
Have you ever shopped at a disorganized grocery store where price tags are all over the place? Or gone into a cafe that doesn't list its prices on the menu? *cough Starbucks cough*
Wallai is a personal finance app that helps you control your spending in spite of these barriers! You simply take a picture of a food item, and it'll figure out its price for you. If the software isn't able to figure out the price, it'll prompt you to add the item and its price (once you've figured it out) to our database to improve the experience for future users. As an Android app, it's convenient for everyday use, and its clear, lightweight interface provides an unusually hassle-free way to stay in control of your finances.
*"With great money comes rich, then which equals...equals...great discipline."*
— Kamran Tayyab 2019
## How we built it
Teamwork makes the dream work.
## Challenges we ran into
Coming up with the idea, it was so challenging it took us 12 hours.
Also the fact that we came up with the idea 12 hours into the hackathon.
And Android errors, all of them.
## Accomplishments that we're proud of
We came up with an idea during the hackathon!
We're also proud of Arri not sleeping.
## What we learned
Android is a b\*itch.
## What's next for wallai
We want to partner with restaurants to get better data that'll make our object identification more accurate. Once we do this, we would also like to further optimize how we store and retrieve information from the database as it grows.
We'd also like to create an iOS version to make the app available to more people! | winning |
# Inspiration
When we ride a cycle we steer it towards the direction we fall, and that's how we balance. But the amazing part is that we (or our brains) compute a complex set of calculations to balance ourselves, and we do it naturally. With some practice it gets better. The calculations that go on in our minds highly inspired us. At first we thought of creating the cycle itself, but later, after watching "Handle" from Boston Dynamics, we thought of creating the self-balancing robot "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system.
# What it does
Istable is a self-balancing robot that runs on two wheels and tries to stay vertical at all times. Istable can move in all directions, rotate, and be a good companion for you. It gives you a really good idea of robot dynamics and kinematics and of how to mathematically model a system. It can also be used to teach college students how to tune the PID of a closed-loop system; PID is a widely used and very robust control algorithm. Methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented on it to teach how to obtain the coefficients in real time, taking students from theoretical modelling to real-time implementation. We can also use it in hospitals, where medicines are kept at very low temperatures and rooms need absolute sterilization to enter; these robots can carry out those operations there. In the coming days we may see our hospitals in a new form.
# How we built it
The mechanical body was built using scrap wood lying around. We first made a plan of how the body would look and gave it the shape of an "I", which is where it gets its name. We made the frame; placed the batteries (2x 3.7 V li-ion), the heaviest component; attached two micro metal motors of 600 RPM at 12 V; cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary); used a boost converter to stabilize and step up the voltage from 7.4 V to 12 V; a motor driver to drive the motors; an MPU6050 to measure the inclination angle; and a Bluetooth module to read the PID parameters from a mobile app to fine-tune the PID loop. And the brain: a microcontroller (LGT8F328P). Next we made a free-body diagram, located the centre of gravity (it needs to sit above the wheel axis), and adjusted the weight distribution accordingly. Next we made a simple mathematical model to represent the robot; it is used to find the transfer function that represents the system. Later we used that to calculate the impulse and step response of the robot, which is crucial for tuning the PID parameters if you are taking a mathematical approach, and we did that here: no hit-and-trial, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable.
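
The firmware itself runs in C on the microcontroller, but the discrete PID update it performs each loop is the same idea as this Python illustration. The gains and loop period below are placeholders, not our tuned values.

```python
KP, KI, KD = 25.0, 0.5, 1.2   # placeholder gains, not our tuned values
DT = 0.005                    # loop period in seconds (200 Hz as an example)
SETPOINT = 0.0                # desired tilt angle: perfectly vertical

integral = 0.0
prev_error = 0.0

def pid_step(measured_angle):
    """One discrete PID update: returns the motor command for this tick."""
    global integral, prev_error
    error = SETPOINT - measured_angle
    integral += error * DT                  # discrete integral (rectangular rule)
    derivative = (error - prev_error) / DT  # discrete derivative (backward difference)
    prev_error = error
    output = KP * error + KI * integral + KD * derivative
    # Clamp to the motor driver's PWM range.
    return max(-255, min(255, output))
```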
# Challenges we ran into
We were having trouble balancing Istable at first (which is obvious ;) ), and we realized that this was due to the placement of the gyro. We had placed it at the top at first; we corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle and thus improved stabilization greatly. Next came fine-tuning the PID loop: we did get the initial values of the PID coefficients mathematically, but the fine-tuning took a hell of a lot of effort, and that was really challenging.
# Accomplishments we are proud of
Firstly, just a simple change of the position of the gyro improved the stabilization, and that gave us high hopes; we were losing confidence before. The mathematical model, or transfer function, we found agreed well with the real-time outputs, and we were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds.
# What we learned
We learned a lot, a lot. From kinematics and mathematical modelling to control algorithms beyond PID, ranging from adaptive control to numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error. We learned about digital filtering to filter out noisy data, and along with that about complementary filters to fuse the data from the accelerometer and gyroscope to find accurate angles.
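
A complementary filter of the kind we mention is only a couple of lines. Here is a sketch; the 0.98/0.02 weights and the loop period are typical example values, not necessarily the exact ones we shipped.

```python
import math

ALPHA = 0.98   # trust the gyro for fast changes, the accelerometer for drift correction
DT = 0.005     # loop period in seconds (example value)

angle = 0.0

def complementary_filter(accel_x, accel_z, gyro_rate):
    """Fuse accelerometer and gyroscope readings into one tilt angle (degrees)."""
    global angle
    # Accelerometer gives an absolute but noisy tilt estimate.
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    # Gyro gives a smooth but drifting estimate when integrated.
    gyro_angle = angle + gyro_rate * DT
    angle = ALPHA * gyro_angle + (1.0 - ALPHA) * accel_angle
    return angle
```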
# What's next for Istable
We will try to add an onboard display to change the coefficients directly on the robot. Upgrade the algorithm so that it can auto-tune the coefficients itself. Add odometry for localized navigation. Also use it as an IoT device to serve real-time operation with real-time updates.
Physiotherapy is expensive for what it provides you with. A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution, and technology exists to help people who need medical aid but don't have the time and money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3D interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
On the embedded-system side of things, we used a single Raspberry Pi for all the sensor processing. The Pi is in charge of interfacing with one IMU, while an Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device, which displays the 3D interactive GUI and processes the ML data. All the data in the backend is relayed and managed using ROS. This data is then uploaded to Firebase, where the information is saved in the cloud and can be accessed anytime from a smartphone. Firebase also handles plotting data to give accurate numerical feedback on the many values: orientation, trajectory, and improvement over time.
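
As a rough sketch of the relay step, the central device reads the Arduino's serial stream and republishes it as a ROS topic, something like the following. The serial port, baud rate, line format, and topic name are all assumptions made for illustration.

```python
import rospy
import serial
from sensor_msgs.msg import Imu

def relay():
    rospy.init_node("arm_imu_relay")
    pub = rospy.Publisher("arm/imu", Imu, queue_size=10)
    # Port, baud rate and comma-separated line format are placeholders.
    link = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
    rate = rospy.Rate(100)
    while not rospy.is_shutdown():
        line = link.readline().decode(errors="ignore").strip()
        try:
            ax, ay, az, gx, gy, gz = map(float, line.split(","))
        except ValueError:
            continue  # skip malformed lines from the serial link
        msg = Imu()
        msg.header.stamp = rospy.Time.now()
        msg.linear_acceleration.x, msg.linear_acceleration.y, msg.linear_acceleration.z = ax, ay, az
        msg.angular_velocity.x, msg.angular_velocity.y, msg.angular_velocity.z = gx, gy, gz
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    relay()
```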
## Challenges WE ran into
Hooking up two IMUs to the same RPi is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a Firebase connection with the app that supported the high data volume we were sending over, and creating a graphing mechanism that is meaningful.
Parking services suck!! They are expensive! We want better. We need better.
## What it does
Through an Android app built using Google Maps, Firebase, and Places, we offer a solution for people to rent out their residential parking spots to other clients for a fee. Just like Airbnb, but for parking.
## How we built it
We used Android Studio to get the Maps API working, and Firebase to build a database for our service.
## Challenges we ran into
Android Studio not working.
Not once could I code on my own laptop.
## Accomplishments that we're proud of
Cool idea.
Getting APIs to work.
## What we learned
Firebase. How hard Android is.
## What's next for ParkTheValley
Develop on our own time and make it work!! With a bot! | winning |
## Inspiration
Nutritional labels tell you about the % of the daily recommended nutrients and vitamins the food you consume provides you with. Now, what if there was also a way to track the environmental impact of the food you consume?
The Footprint app allows its users to scan grocery store items and get their estimated carbon and water footprints so that they can make informed decisions regarding the food they buy. The app also allows its users to track their carbon and water footprints over time, to see the impact of their changes in lifestyle and diet. We aim to make life more sustainable! :joy:
## What it does
There are multiple functions inside the app, but our goal is to make it as simple as possible for the users. Here are the steps:
1. Open the app to the home page
2. Press "SCAN PRODUCT" button, which will navigate to the product scanner page
3. On the scanner page, users can either use the camera to take a picture of grocery products or select an image from their phone album.
4. Not satisfied with the photo? No worries! We have a retake option!
5. The "Use Photo" button on the button right navigates the user to the carbon footprint and water footprint data and the comparison with similar grocery items.
This is accomplished by predicting the product using image classification. The image is uploaded to the cloud and processed. The prediction data is stored in an output file that shows the grocery item detected.
6. On the home page, the other button, "VIEW PROFILE", links to a user's personal account to keep track of their carbon footprint and outputs a pie chart for the different products that are documented. It also shows the percentage difference of the user's carbon footprint compared to that of the average American citizen.
## How we built it
First, we brainstormed about the topic: "Sustainability" for innovative ideas and solutions. Later, we benchmarked through current apps in the genre for more inspiration and understand what is insufficient in the market that we could contribute to.
A Convenient and Informative Tool to Raise the Awareness of Sustainability
To save the user's time, our goal is to make it as handy as possible. We implement image recognition, since vision is the most straightforward information, and prediction, to save customer's time to type in brands and search for the right product. Charts and plots are shown for effortless understanding of carbon footprint and water footprint data.
After having a direction to proceed, we started gathering data for data analysis. The datasets contain carbon and water footprint information.
JavaScript is used to construct front-end coding and python in addition to ImageAI are used to compute the prediction. The app is made via expo and react.
## Challenges we ran into
Some challenges we ran into involved being able to integrate our image classification model with JavaScript code and being able to run image classification real-time on a mobile device.
## Accomplishments that we're proud of
For this project, all team members stepped out of their comfort zones and pushed themselves to work in unfamiliar areas all while having fun!! We are proud to have created a product that aims to bring sustainability awareness to the general population and are proud at how far we progressed in the development of our app.
## What we learned
From a general perspective, we worked with data related to carbon and water footprints, and in the process, discovered interesting and shocking facts related to carbon consumption and emission and water usage. From a technical perspective, some members of the team had never worked with web or mobile development before, so we learned much about the coding details and all the new and exciting web development technologies that are out there. Additionally, we also delved into the existing research, models, and approaches in the field of grocery and retail image classification.
## What's next for Footprint
Although we are proud of our work, there are a few functions that we would like to improve in the future:
* Expand database and train more data for prediction
* Upload real-time image to local server for faster image detection
* Real-time video for camera and real-time image detection
* Augmented reality icons to show if grocery items are sustainable | ## Inspiration
Of all the mammals on Earth, only 4% are wild; the remaining 96% are livestock and humans. For birds only 30% are wild, the rest being chickens and poultry (Bar-On et al., 2018).
Food, Agriculture, and Land Use directly account for 24% of greenhouse gas sources, more than Transportation (14%) and Industry (21%), and on par with Electricity Production (25%) (IPCC, 2014). Food accounts for up to 37% of the global greenhouse emissions and 70% of water withdrawals when taking into account all phases of production and distribution (IPCC, 2019).
A global transition towards more sustainable food will be among the most important strategies to reduce human impact on planetary resources. Many people want to do their part to reduce emissions, but they do not know where to start. Here we present the most accurate and up to date database on food carbon footprints to provide knowledge and tools that can support turning ideas into action.
## What it does
Our app informs users about their carbon footprint based on the food products they buy from stores. Using a revolutionary dataset from Nature Scientific Data, built from 3,349 carbon footprint values extrapolated from 841 publications, we calculated the carbon footprint of specific foods based on the type of food and the quantity consumed. The application takes in user input to create a shopping list of grocery items, either through a photo upload or by adding each item manually. The application then automatically looks for similar replacement items in the person's shopping list that have a lower carbon footprint.
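To make the calculation concrete, here is a minimal sketch of the footprint lookup for a parsed shopping list. The emission factors shown are illustrative placeholders; the app itself uses the values from the Nature Scientific Data dataset.

```python
# Illustrative emission factors (kg CO2e per kg of food); not the dataset's real values.
EMISSION_FACTORS = {"beef": 26.5, "rice": 3.9, "milk": 2.8, "lentils": 1.6}

def shopping_list_footprint(items):
    """items: list of (food_name, kilograms) tuples from the parsed receipt."""
    total = 0.0
    for food, kg in items:
        factor = EMISSION_FACTORS.get(food.lower())
        if factor is None:
            continue  # unmatched items would go through the NLP matching fallback
        total += factor * kg
    return total

print(shopping_list_footprint([("beef", 0.5), ("rice", 1.0), ("milk", 2.0)]))  # ~22.75 kg CO2e
```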
## How we built it
This project was built using the MERN stack, with a splash of computer vision via OpenCV and Microsoft Azure Machine Learning, and D3 for visualization. The frontend components were built using Material UI. A MongoDB database was used to store shopping lists and client data. D3, a JavaScript visualization library, was used to create the explorable graph of carbon footprints per food.
## Challenges we ran into
One of the biggest challenges we ran into was trying to upload an image to be stored into a URL using React. This step was crucial to the development of our project since the use case of scanning receipts for grocery and food items relied heavily on image input and processing. We thought of different ways to try to diagnose and tackle the problem such as uploading the image onto a free image hosting service, reading the image into an array of bytes (metadata was lost), and asking a mentor for help.
Additionally, we believe that coming up with an idea that we were all passionate about was the hardest part. Brainstorming did not come easy to us since most of us did not know each other prior to the hackathon. Our entire team knew that we wanted to do something sustainability-related. We cycled through many ideas before settling on this one. We struggled with narrowing down our priorities since we had so many ideas to branch out upon. Some of the ideas that we wanted to implement (but couldn't get to) are listed in the last section of the README below.
## Accomplishments that we're proud of
We're proud of all we've done so far, from all the time spent on brainstorming to coming up with a prototype for demonstration. Our team is particularly proud of how we were able to combine our steeply different skills to create an application. Our UI was designed by a team member who has never worked on UI in the past. Deploying and running the CV algorithm was also one of the most time-consuming and messy tasks, but we are still glad that our hard work was able to be showcased.
## What we learned
There's no doubt that each of us has learned a lot from this project. Breaking it down individually,
Mei: I learned how to build and design the webpage using Material UI, along with using React to integrate GET and POST requests from the frontend to the backend.
## What's next for The Secret 37%
The first next step is to make our features work flawlessly. We had a difficult time integrating D3 with react. In addition, we want to make sure that the computer vision for converting receipts into carbon footprints works is reliable. We would also like to add more NLP so that the algorithm automatically finds the closest match for any food item (e.g. user types in Special K, the algorithm match cornflakes, the closest match in our database). As another next step, we would like the user to be able to track their monthly emissions from food by compiling all receipts/shopping lists. Finally, we would like to make this an integrated phone app as well. | **In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**
## Inspiration
Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief.
## What it does
**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer’s home to receive shelter.
## How we built it
We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community.
## Challenges we ran into
Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including a blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to add a blood donation to our future aspirations for this project due to the time constraint of the hackathon.
## Accomplishments that we're proud of
We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and the Google Maps API) during the past 24 hours. We had huge aspirations, and eventually we created an app that can potentially save people's lives.
## What we learned
We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.
## What's next for Stronger Together
We have high hopes for the future of this app. The goal is to add an AI based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources such as blood donations. | losing |
## Inspiration
According to an article, about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.
## What it does
The Smart bin is able, using object detection, to sort plastic, glass, metal, and paper.
Trash bins all around Canada are split into different types of trash, which can get frustrating. This inspired us to build a solution that doesn't require us to think about the kind of trash being thrown away.
The Waste Wizard takes any kind of trash you want to throw out, uses machine learning to detect which bin it should be disposed in, and drops it into the proper disposal bin.
## How we built it
Using recyclable cardboard, used DC motors, and 3D-printed parts.
## Challenges we ran into
We had to train our model from the ground up, which included gathering all the data ourselves.
## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.
## What we learned
How to create and train a model, 3D print gears, and use sensors.
## What's next for Waste Wizard
A Smart bin able to sort the 7 types of plastic | ## 💡 Inspiration 💯
Have you ever faced a trashcan with a seemingly endless number of bins, each one marked with a different type of recycling? Have you ever held some trash in your hand, desperately wondering if it can be recycled? Have you ever been forced to sort your trash in your house, the different bins taking up space and being an eyesore? Inspired by this dilemma, we wanted to create a product that took all of the tedious decision-making out of your hands. Wouldn't it be nice to be able to mindlessly throw your trash in one place, and let AI handle the sorting for you?
## ♻️ What it does 🌱
IntelliBin is an AI trashcan that handles your trash sorting for you! Simply place your trash onto our machine, and watch it be sorted automatically by IntelliBin's servo arm! Furthermore, you can track your stats and learn more about recycling on our React.js website.
## 🛠️ How we built it 💬
Arduino/C++ Portion: We used C++ code on the Arduino to control a servo motor and an LED based on serial input commands. Importing the servo library allows us to access functions that control the motor and turn on the LED colours. We also used the Serial library in Python to take input from the main program and send it to the Arduino. The Arduino then sent binary data to the servo motor, correctly categorizing garbage items.
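As a rough illustration of the Python-to-Arduino handoff, here is a minimal pyserial sketch; the port name and the one-byte command protocol are assumptions, not the project's exact code.

```python
# Minimal sketch: Python side sends a one-byte sort decision over serial.
import serial
import time

arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # port is a placeholder
time.sleep(2)  # give the Arduino time to reset after the serial connection opens

def send_sort_decision(recyclable: bool) -> None:
    # Assumed protocol: 'R' = recyclable bin, 'T' = trash bin; the Arduino maps
    # this to a servo angle and an LED colour.
    arduino.write(b"R" if recyclable else b"T")

send_sort_decision(True)
```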
Website Portion: We used React.js to build the front end of the website, including a profile section with user stats, a leaderboard, a shop to customize the user's avatar, and an information section. MongoDB was used to build the user registration and login process, storing usernames, emails, and passwords.
Google Vision API: In tandem with computer vision, we were able to take the camera input and feed it through the Vision API to interpret what was in front of us. Using this output, we could tell the servo motor which direction to turn based on whether the item was recyclable, helping us sort which bin the object would be pushed into.
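Below is a minimal sketch of that Vision API call using the google-cloud-vision client library; the recyclable keyword list is only an illustration of the idea, not IntelliBin's actual rules.

```python
# Minimal sketch of label detection with the Google Cloud Vision client library.
from google.cloud import vision

RECYCLABLE_HINTS = {"bottle", "tin", "aluminium", "paper", "cardboard", "glass"}  # illustrative

def is_recyclable(image_bytes: bytes) -> bool:
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    labels = {label.description.lower() for label in response.label_annotations}
    return bool(labels & RECYCLABLE_HINTS)

with open("frame.jpg", "rb") as f:  # file name is a placeholder for the camera frame
    print(is_recyclable(f.read()))
```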
## 🚧 Challenges we ran into ⛔
* Connecting the Arduino to the arms
* Determining the optimal way to manipulate the Servo arm, as it could not rotate 360 degrees
* Using global variables on our website
* Configuring MongoDB to store user data
* Figuring out how and when to detect the type of trash on the screen
## 🎉 Accomplishments that we're proud of 🏆
In a short span of 24 hours, we are proud to:
* Successfully engineer and program a servo arm to sort trash into two separate bins
* Connect and program LED lights that change colors depending on whether the trash is recyclable or non-recyclable
* Utilize Google Cloud Vision API to identify and detect different types of trash and decide if it is recyclable or not
* Develop an intuitive website with React.js that includes login, user profile, and informative capabilities
* Drink a total of 9 cans of Monsters combined (the cans were recycled)
## 🧠 What we learned 🤓
* How to program in C++
* How to control servo arms at certain degrees with an Arduino
* How to parse and understand Google Cloud Vision API outputs
* How to connect a MongoDB database to create user authentification
* How to use global state variables in Node.js and React.js
* What types of items are recyclable
## 🌳 Importance of Recycling 🍀
* Conserves natural resources by reusing materials
* Requires less energy compared to using virgin materials, decreasing greenhouse gas emissions
* Reduces the amount of waste sent to landfills
* Decreases disruption to ecosystems and habitats
## 👍How Intellibin helps 👌
**Efficient Sorting:** Intellibin utilizes AI technology to efficiently sort recyclables from non-recyclables. This ensures that the right materials go to the appropriate recycling streams.
**Increased Recycling Rates:** With Intellibin making recycling more user-friendly and efficient, it has the potential to increase recycling rates.
**User Convenience:** By automating the sorting process, Intellibin eliminates the need for users to spend time sorting their waste manually. This convenience encourages more people to participate in recycling efforts.
**In summary:** Recycling is crucial for environmental sustainability, and Intellibin contributes by making the recycling process more accessible, convenient, and effective through AI-powered sorting technology.
## 🔮 What's next for Intellibin⏭️
The next steps for Intellibin include refining the current functionalities of our hack, along with exploring new features. First, we wish to expand the trash detection database, improving capabilities to accurately identify various items being tossed out. Next, we want to add more features such as detecting and warning the user of "unrecyclable" objects. For instance, Intellibin could notice whether the cap is still on a recyclable bottle and remind the user to remove the cap. In addition, the sensors could notice when there is still liquid or food in a recyclable item, and send a warning. Lastly, we would like to deploy our website so more users can use Intellibin and track their recycling statistics! | ## Inspiration
Navigating to multiple destinations can be a hassle when the functionality isn't supported by your favorite navigation app. Choosing the order of destinations that would have the least impact on your trip time is also difficult to do at a glance. We wanted a simple way to navigate through a set of errands, without having to evaluate the routes ourselves.
## What it does
Users add errands or tasks associated with a location to a list using 1 of 3 goal-oriented templates. The user can then request a route and have the shortest path between their tasks and locations mapped out for navigation purposes.
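The app's exact routing logic isn't shown here, but as a sketch of the idea, a simple greedy nearest-neighbour pass is one way to order the stops:

```python
# Sketch only: greedy nearest-neighbour ordering over straight-line distances.
import math

def order_stops(start, stops):
    """start: (lat, lng); stops: list of (name, (lat, lng)). Returns a visiting order."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])  # fine at city scale; use haversine for accuracy

    route, current, remaining = [], start, list(stops)
    while remaining:
        nearest = min(remaining, key=lambda stop: dist(current, stop[1]))
        remaining.remove(nearest)
        route.append(nearest[0])
        current = nearest[1]
    return route

print(order_stops((37.77, -122.42), [("Target", (37.78, -122.40)), ("Bank", (37.76, -122.43))]))
```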
## How we built it
The app was built with Ionic to allow for use on multiple mobile platforms, using Angular 2 and JavaScript.
## Challenges we ran into
Actually defining a route given a set of locations.
## Accomplishments that we're proud of
We were able to provide specific locations for user inputs (i.e. locate the nearest Target store for an input of "target").
We were able to display a pin location on the map for each task.
## What's next for Map My Errands
We hope to support multiple lists in future iterations, and provide a finer control over the tasks used to generate the route. We would also like to save a given route to offer users the ability to repeat errands without the need to recreate their lists. | winning |
## Inspiration
Over one-fourth of Canadians will have to deal with water damage in their homes during their lifetimes. This issue causes many Canadians overwhelming stress due to its sheer economic and residential implications.
In an effort to tackle these core issues, we have designed a solution that makes future leaks avoidable. Our prototype system, composed of software and hardware, will ensure house leaks are a thing of the past!
## What is our planned solution?
To prevent leaks, we have designed a system of components that, when functioning together, allows the user to monitor the status of their plumbing.
Our system is comprised of:
>
> Two types of leak detection hardware
>
>
> * Acoustic leak detectors: monitor abnormal pipe sounds.
> * Water detection probes: monitor the presence of water in unwanted areas.
>
>
>
Our hardware components will have the ability to send data to a local network, to then be stored in the cloud.
>
> Software components
>
>
> * Secure cloud to store vital information regarding pipe leakages.
> * Future planned app/website with the ability to receive such information
>
>
>
## Business Aspect of Leakio
On its own, this solution is profitable through selling the hardware to consumers. For insurance companies, however, this is a vital solution that has the potential to save millions of dollars.
It is far more economical to prevent a leak than to fix it after it has already happened. Paying the average cost of $10,900 USD to fix water damage or a freezing claim is now avoidable!
In addition to the savings, our planned system will be able to send insurance companies data such as which houses or areas have the most leaks, or individual risk assessments. This would allow insurance companies to set more appropriate rates for the consumer, benefiting both the consumer and the insurer.
### Software
Front End:
This includes our app design in Figma, which was crafted using knowledge of proper design and ratios. Specifically, we wanted to create an app design that looked simple but still had all the complex features needed to feel professional. This is something we are proud of, as we feel this component was successful.
Back End:
PHP, MySQL, Python
### Hardware
Electrical
* A custom PCB is designed from scratch using EAGLE
* Consists of a USB-C charging port, a lithium battery charging circuit, an ESP32, a water sensor connector, and a microphone connector
* The water sensor and microphone extend out from the PCB, which is why they need connectors
3D-model
* The hub contains all the electronics and the sensors
* An easy-to-install design that places the microphone within the walls, close to the pipes
## Challenges we ran into
Front-End:
There were many challenges we ran into, especially regarding some technical aspects of Figma, though the most challenging part was implementing the design.
Back-End:
This is where we faced most of our challenges, including building the acoustic leak detector, proper sound recognition, cloud development, and data transfer.
It was the first time any of us had used MySQL, and we created our database on the Google Cloud SQL platform. We also had to use both Python and PHP to retrieve and send data, two languages we are not super familiar with.
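As an illustration of that Python-to-database step, here is a minimal sketch using the mysql-connector-python driver; the connection details, table, and column names are hypothetical.

```python
# Minimal sketch: push one sensor reading from Python into the Cloud SQL database.
import mysql.connector

conn = mysql.connector.connect(
    host="YOUR_CLOUD_SQL_IP", user="leakio", password="***", database="leakio"  # placeholders
)

def log_reading(device_id: str, water_detected: bool, sound_level: float) -> None:
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO readings (device_id, water_detected, sound_level) VALUES (%s, %s, %s)",
        (device_id, water_detected, sound_level),
    )
    conn.commit()
    cursor.close()

log_reading("hub-01", False, 0.12)
```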
We also had no idea how to set up a neural network with PyTorch, and finding the proper data to train on was very difficult.
## Accomplishments that we're proud of
Learning a lot of new things within a short period of time.
## What we learned
Google Cloud:
Creating a MySQL database and setting up a Deep Learning VM.
MySQL:
Using MySQL and syntaxes, learning PHP.
Machine Learning:
How to set up PyTorch.
PCB Design:
Learning how to use EAGLE to design PCBs.
Raspberry Pi:
Autorunning Python scripts and splitting .wav files (a short splitting sketch follows this list).
Others:
Not to leave the recording to the last hour. It is hard to cut to 3 minutes with an explanation and demo.
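For the Raspberry Pi piece above, here is a minimal sketch of splitting a recording into fixed-length chunks with Python's standard wave module; the 5-second chunk length is an assumption.

```python
# Minimal sketch: split a .wav recording into fixed-length chunks for classification.
import wave

def split_wav(path: str, seconds: int = 5) -> None:
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            with wave.open(f"{path.rsplit('.', 1)[0]}_{index:03d}.wav", "wb") as out:
                out.setparams(params)
                out.writeframes(frames)
            index += 1

split_wav("pipe_recording.wav")  # file name is a placeholder
```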
## What's next for Leakio
* Properly implement audio classification using PyTorch
* Possibly create a network of devices to use in a single home
* Find more economical components
* Code for ESP32 to PHP to Web Server
* Test on an ESP32 | ## Inspiration
In 2012, infants and newborns made up 73% of hospital stays in the U.S. and 57.9% of hospital costs, adding up to $21,654.6 million. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years, which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a Python script. After loading the data onto a local machine, it is passed into a regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model are passed back into the website, where they are displayed through graphs and other means of data visualization. The resulting dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and ensure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs, thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
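As a simplified illustration of the prediction step, here is a minimal scikit-learn sketch that outputs such a probability; logistic regression is used here as the probability-outputting model, and the features and training rows are placeholders rather than real patient data.

```python
# Minimal sketch of a probability-outputting model; data below is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: temperature (°C), hours since last meal, fluid intake (mL)
X_train = np.array([[36.8, 2, 120], [39.1, 6, 30], [37.0, 3, 100], [38.7, 5, 40]])
y_train = np.array([0, 1, 0, 1])  # 1 = required medical attention (doctor-verified label)

model = LogisticRegression().fit(X_train, y_train)

def attention_probability(temp: float, hours_since_meal: float, fluid_ml: float) -> float:
    return float(model.predict_proba([[temp, hours_since_meal, fluid_ml]])[0, 1])

print(f"{attention_probability(38.9, 5, 35):.1%}")
```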
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and data-basing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results were shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, python script, and website were all commendable achievements for us. These components on their own are fairly useless, our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using sha256 we securely passed each user a unique and near impossible to reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set-up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application in IOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status. | ## Inspiration
As the imperative to reduce carbon emissions intensifies, businesses are increasingly adopting innovative approaches to shrink their carbon footprint. One such approach, carbon offsetting, has grown into a [$5.5 billion industry](https://journals.library.columbia.edu/index.php/cjel/article/view/10442), garnering support from both fossil fuel companies and environmental advocates alike. Rather than curbing emissions directly at their source, carbon offsets mitigate carbon pollution by either preventing potential emissions or directly extracting carbon dioxide from the atmosphere, often employing technologies such as carbon capture and storage (CCS) or direct air capture (DAC). Presently, numerous critical sectors—such as the aviation industry—face challenges in achieving decarbonization due to financial constraints, material limitations, and a lack of innovative solutions. **As such, the combination of decarbonization strategies and carbon offsetting is often the most economical; by reducing easily avoidable emissions and offsetting the rest, companies can achieve net-zero emissions without the financial pressures of complete emissions reductions.**
However, there are three existing barriers to achieving this diversified net-zero strategy. First, reducing emissions is often a lengthy process, often taking years to decarbonize key components of a company’s business model fully. Second, reducing low-cost emissions is dependent on the equilibrium price of carbon offset—a price that often needs to be calculated far in advance to successfully execute decarbonization. Third, the financial tradeoff between the price of offsets and emissions reductions is blurry, with no clear brightline between offset and reduced emissions.
## What does our project do?
Our product solves these issues by providing key insights about decarbonization strategy through forward pricing curves for carbon offsets. By calculating the expected demand and supply of carbon offsets for a given time in the future, the market equilibrium—the going price of carbon offsets at that time—can be estimated and given to the user in the present. Our results are displayed on a website dashboard.
Our project contains two central components: the market price of carbon credits over time, based on calculated demand and supply curves, and a carbon-zero pathway based on these curves (plus user inputs and other external data). The demand and supply curves are generated for each year until the user's target year; this allows users to see how the market changes year by year and to optimize how they distribute their offset purchases over that period. The carbon-zero pathway allows the user to see the most economical split between reducing emissions directly and purchasing offsets, given those prices.
After inputting information about a company’s carbon footprint, emissions targets, and the marginal cost of abatement, financial metrics about the best carbon-zero path are computed. The output of this function relies on market equilibrium outputs only available after computing the forward price curves. The output gives the user valuable information about the percentage breakdown between decarbonized emission reductions and credit offsets, and the financial projections of both strategies.
This tool, importantly, allows companies to see how the market price of carbon credits changes over time based on user-given inputs such as the type of carbon removal technology, the year of their net-zero goal, and the scope of emissions they are considering. This tool emphasizes the importance of decarbonization by reorienting the goal of maximizing profits towards environmental advocacy. This is key in assisting companies plan their emissions-reduction strategy in the most economical way—two priorities that have often been at odds with each other.
## How we built our project
We built all of our projection models in Python and used the Reflex framework to display our outputs in a user dashboard. Each of the three system functions—the demand curve, supply curve, and decarbonization-offset tradeoff scheme—are integrated into the user dashboard, with various inputs related to the computation of the base futures curve model and tradeoff model being imputed as fields. Each of the three system functions are described below:
The demand curve uses the emissions profiles of the top 2,000 largest companies in the world to project their future carbon emissions based on their emissions reduction goals. Using generated GICS sector data for industry-specific emissions—combined with company revenue—we project yearly emissions—including scope 1, 2, and 3 emissions—depending on carbon reduction goals. Using an exponential Pareto distribution—tuned to represent companies buying offsets closer to their consumption date—we generalize the year-to-year demand for carbon permits on a per-company basis. This data returns the expected demand for carbon credits per year for a theoretical global market. Using this data, combined with decarbonization prices generalized from half-normal, sector-level decarbonization costs, our model produces the expected carbon demand for a given period for each company. As each company faces a different cost of decarbonizing their business, the marginal benefit of each company is unique. We fit a demand curve to these projected quantity and price demands from individual companies using polynomial regressions from this data. This process is done for each year in the model.
The supply curve is interpolated from the carbon credit's technology pathway and price range. Based on our research, most companies pay anywhere from $50/tCO2 to $700/tCO2 for a range of carbon credits. This data is combined with the cost of implementing the selected pathway for a single ton of CO2 and the ranking of the industries based on their relative CO2 emissions. A coefficient is calculated from these values that informs the supply curve on the scale of $50/tCO2 to $700/tCO2. We create a supply curve for each year until the user-specified target year, with the values adjusted for the following years accordingly.
The decarbonization-offset tradeoff graph is produced by mapping another exponential Pareto distribution to the marginal cost of reducing carbon emissions. The resulting intersection of the market price of carbon offsets for a given year with the CDF returns the percentage of total emissions for which it is more economical to reduce the production of emissions over buying offsets, and vice versa. The total cost associated with purchasing carbon offsets, alongside the quantity, can be calculated for a given company’s carbon footprint.
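As a small illustration of how an equilibrium falls out of the two curves, here is a minimal numpy sketch that fits polynomial demand and supply curves to sample (quantity, price) points and intersects them; the sample points below are placeholders, not output from our models.

```python
# Minimal sketch: fit demand/supply polynomials for one year and find their intersection.
import numpy as np

qty = np.linspace(1, 100, 20)                  # Mt CO2 of offsets (placeholder scale)
demand_price = 700 * np.exp(-qty / 40) + 50    # willingness to pay falls with quantity
supply_price = 50 + 4.5 * qty                  # marginal cost rises with quantity

demand_fit = np.polyfit(qty, demand_price, deg=3)
supply_fit = np.polyfit(qty, supply_price, deg=1)

# Equilibrium where demand(q) - supply(q) = 0
roots = np.roots(np.polysub(demand_fit, supply_fit))
q_eq = next(r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real < qty.max())
p_eq = np.polyval(supply_fit, q_eq)
print(f"equilibrium: {q_eq:.1f} Mt at ${p_eq:.0f}/tCO2")
```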
## Challenges we ran into
The biggest challenge we ran into was the lack of data for building the supply curves. One way in which we handled this was by taking data corresponding to the maximum and minimum prices, as well as the range of carbon credits companies were buying in bulk. We used these to generate a pseudo-supply curve that would allow us to reasonably estimate the curve as if we had the data. We expect this problem to be solved as more data becomes available.
## What we learned and accomplished
We learned a lot about time-series analysis, data analysis, and synthesizing, as well as how to use Reflex to implement our website. We made a working model and website on a project that aimed to encapsulate the entire carbon credits market for the next few decades.
## What's next for CarbonInsights
Short Term: The biggest thing we would like to change is to make the website dynamically update. Although the models we wrote *are perfectly suited to update dynamically*, the Reflex framework we are using has made that difficult.
Long Term: Improve the statistical models to create the projection for the supply-demand curves and use ARIMA, ETS, and ML models like Random Forests to validate and provide a more detailed analysis of the trends. | winning |
## Inspiration
Travelling can be a pain. You have to look up attractions ahead of time and spend tons of time planning out what to do. Shouldn't travel be fun, seamless and stress-free?
## What it does
SightSee takes care of the annoying part of travel. You start by entering the city that you're visiting, your hotel, a few attractions that you'd like to see and we take care of the rest. We provide you with a curated list of recommendations based on TripAdvisor data depending on proximity to your attractions and rating. We help you discover new places to visit as well as convenient places for lunch and dinner. Once you've finalized your plans, out pops your itinerary for the day, complete with a walking route on Google Maps.
## How we built it
We used the TripAdvisor API and the Google Maps Embed API. It's built as a single-page Web application, powered by React and Redux. It's hosted on an Express.js-based web server on an Ubuntu 14.04 VM in Microsoft Azure.
## Challenges we ran into
We ran into challenges with the TripAdvisor API and its point of interest data, which can be inaccurate at times.
## Accomplishments that we're proud of
The most awesome user interface ever! | ## Inspiration
Planning vacations can be hard. Traveling is a very fun experience but often comes with a lot of stress of curating the perfect itinerary with all the best sights to see, foods to eat, and shows to watch. You don't want to miss anything special, but you also want to make sure the trip is still up your alley in terms of your own interests - a balance that can be hard to find.
## What it does
explr.ai simplifies itinerary planning with just a few swipes. After selecting your destination, the duration of your visit, and a rough budget, explr.ai presents you with a curated list of up to 30 restaurants, attractions, and activities that could become part of your trip. With an easy-to-use swiping interface, you choose what sounds interesting or not to you, and after a minimum of 8 swipes, let explr.ai's power convert your opinions into a full itinerary of activities for your entire visit.
## How we built it
We built this app using React Typescript for the frontend and Convex for the backend. The app takes in user input from the homepage regarding the location, price point, and time frame. We pass the location and price range into the Google API to retrieve the highest-rated attractions and restaurants in the area. Those options are presented to the user on the frontend with React and CSS animations that allow you to swipe each card in a Tinder-style manner. Taking consideration of the user's swipes and initial preferences, we query the Google API once again to get additional similar locations that the user may like and pass this data into an LLM (using Together.ai's Llama2 model) to generate a custom itinerary for the user. For each location outputted, we string together images from the Google API to create a slideshow of what your trip would look like and an animated timeline with descriptions of the location.
## Challenges we ran into
Front-end and design require a LOT of skill. It took us quite a while to come up with our project, and we originally were planning on a mobile app, but it's also quite difficult to learn completely new languages such as Swift along with new technologies all in a couple of days. Once we started on explr.ai's backend, we also had trouble passing the appropriate information to the LLM to get back proper data that we could inject back into our web app.
## Accomplishments that we're proud of
We're proud of the overall functionality and our ability to get something working by the end of the hacking period :') More specifically, we're proud of some of our frontend, including the card swiping and timeline animations, as well as the ability to parse data from various APIs and put it together with lots of user input.
## What we learned
We learned a ton about full-stack development overall, whether that be the importance of Figma and UX design work, or how to best split up a project when every part is moving at the same time. We also learned how to use Convex and Together.ai productively!
## What's next for explr.ai
We would love to see explr.ai become smarter and support more features. explr.ai, in the future, could get information from hotels, attractions, and restaurants to be able to check availability and book reservations straight from the web app. Once you're on your trip, you should also be able to check in to various locations and provide feedback on each component. explr.ai could have a social media component of sharing your itineraries, plans, and feedback with friends and help each other better plan trips. | ![alt text](https://cdn.discordapp.com/attachments/974437158834282577/974980085201387540/unknown.png)
## 💡INSPIRATION💡
Our team is from Ontario and BC, two provinces that have been hit HARD by the opioid crisis in Canada. Over **4,500 Canadians under the age of 45** lost their lives through overdosing during 2021, almost all of them preventable, a **30% increase** from the year before. During an unprecedented time, with the world dealing with the COVID pandemic and the war in Ukraine and so much destruction and sadness all around, knowing that there are still people fighting to make a better world inspired us. Our team wanted to try and make a difference in our country and our communities, so... we came up with **SafePulse, an app to combat the opioid crisis, where you're one call away from OK, not OD.**
**Please checkout what people are doing to combat the opioid crisis, how it's affecting Canadians and learn more about why it's so dangerous and what YOU can do.**
<https://globalnews.ca/tag/opioid-crisis/>
<https://globalnews.ca/news/8361071/record-toxic-illicit-drug-deaths-bc-coroner/>
<https://globalnews.ca/news/8405317/opioid-deaths-doubled-first-nations-people-ontario-amid-pandemic/>
<https://globalnews.ca/news/8831532/covid-excess-deaths-canada-heat-overdoses/>
<https://www.youtube.com/watch?v=q_quiTXfWr0>
<https://www2.gov.bc.ca/gov/content/overdose/what-you-need-to-know/responding-to-an-overdose>
## ⚙️WHAT IT DOES⚙️
**SafePulse** is a mobile app designed to combat the opioid crisis. SafePulse provides users with resources that they might not know about, such as *'how to respond to an overdose'* or *'where to get free naloxone kits'.* Phone numbers for live support from 24/7 nurses are also provided; this way, if the user chooses to administer drugs to themselves, they can try to do it safely under the instructions of a registered nurse. There is also an Emergency Response Alarm for users: the alarm alerts emergency services and informs them of the type of drug administered, the user's location, and their access instructions. The information provided to users through these resources, and to emergency services through the alarm system, is vital in overdose prevention.
## 🛠️HOW WE BUILT IT🛠️
We wanted to get some user feedback to help us decide/figure out which features would be most important for users and ultimately prevent an overdose/saving someone's life.
Check out the [survey](https://forms.gle/LHPnQgPqjzDX9BuN9) and the [results](https://docs.google.com/spreadsheets/d/1JKTK3KleOdJR--Uj41nWmbbMbpof1v2viOfy5zaXMqs/edit?usp=sharing)!
As a result of the survey, we found out that many people don't know what the symptoms of an overdose are or what they may look like, so we added another page before the user exits the timer to double-check whether or not they have symptoms. We also determined that having instructions available while the user is overdosing increases the chances of someone helping.
So, we landed on 'passerby information' and 'supportive resources' as our additions to the app.
Passerby information is information that anyone can access while the user is in a state of emergency to try and save their life. This took the form of the 'SAVEME' page, a set of instructions for Good Samaritans that could ultimately save the life of someone who's overdosing.
Supportive resources are resources that the user might not know about or might need to access, such as live support from registered nurses, free naloxone kit locations, safe injection site locations, how to use a Narcan kit, and more!
Tech Stack: ReactJS, Firebase, Python/Flask
SafePulse was built with ReactJS on the frontend; we used Flask, Python, and Firebase for the backend, and the Twilio API to make the emergency calls.
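For illustration, here is a minimal sketch of the emergency call with the Twilio Python helper library; the phone numbers and spoken message are placeholders, not the app's real configuration.

```python
# Minimal sketch of placing the alert call via Twilio; all identifiers are placeholders.
import os
from twilio.rest import Client

client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def trigger_emergency_call(drug: str, location: str, access_note: str) -> None:
    message = (
        f"SafePulse alert. Possible overdose involving {drug} at {location}. "
        f"Access instructions: {access_note}."
    )
    client.calls.create(
        to="+15551234567",      # emergency contact / dispatch line (placeholder)
        from_="+15557654321",   # Twilio number (placeholder)
        twiml=f"<Response><Say>{message}</Say></Response>",
    )

trigger_emergency_call("opioids", "123 King St N, Waterloo", "side door unlocked")
```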
## 😣 CHALLENGES WE RAN INTO😣
* It was Jacky's **FIRST** hackathon and Matthew's **THIRD** so there was a learning curve to a lot of stuff especially since we were building an entire app
* We originally wanted to make the app using the MERN stack; we tried setting up the database and connecting with Twilio, but it was too difficult with all of the debugging + learning Node.js and the Twilio documentation at the same time 🥺
* Twilio?? HUGEEEEE PAIN, we couldn't figure out how to get different Canadian phone numbers to work for outgoing calls and also have our own custom messages for a little while. After a couple hours of reading documentation, we got it working!
## 🎉ACCOMPLISHMENTS WE ARE PROUD OF🎉
* Learning git and firebase was HUGE! Super important technologies in a lot of projects
* With only 1 frontend developer, we managed to get a sexy looking app 🤩 (shoutouts to Mitchell!!)
* Getting Twilio to work properly (its our first time)
* First time designing a supportive app that's ✨**functional AND pretty** ✨without a dedicated ui/ux designer
* USER AUTHENTICATION WORKS!! ( つ•̀ω•́)つ
* Using so many tools, languages and frameworks at once, and making them work together :D
* Submitting on time (I hope? 😬)
## ⏭️WHAT'S NEXT FOR SafePulse⏭️
SafePulse has a lot to do before it can be deployed as a genuine app.
* Partner with local governments and organizations to roll out the app and get better coverage
* Add addiction prevention resources
* Implement google maps API + location tracking data and pass on the info to emergency services so they get the most accurate location of the user
* Turn it into a web app too!
* Put it on the app store and spread the word! It can educate tons of people and save lives!
* We may want to change from firebase to MongoDB or another database if we're looking to scale the app
* Business-wise, a lot of companies sell user data or exploit their users - we don't want to do that - we'd be looking to completely sell the app to the government and get a contract to continue working on it/scale the project. Another option would be to sell our services to the government and other organizations on a subscription basis, this would give us more control over the direction of the app and its features while partnering with said organizations
## 🎁ABOUT THE TEAM🎁
*we got two Matthew's by the way (what are the chances?)*
Mitchell is a 1st year Computer Science student at Carleton University. He is most interested in programming language engineering. You can connect with him at his [LinkedIn](https://www.linkedin.com/in/mitchell-monireoluwa-mark-george-261678155/) or view his [Portfolio](https://github.com/MitchellMarkGeorge)
Jacky is a 2nd year Systems Design Engineering student at the University of Waterloo. He is most experienced with embedded programming and backend. He is looking to explore various fields in development. He is passionate about reading and cooking. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/chenyuxiangjacky/) or view his [Portfolio](https://github.com/yuxstar1444)
Matthew B is an incoming 3rd year computer science student at Wilfrid Laurier University. He is most experienced with backend development but looking to learn new technologies and frameworks. He is passionate about music and video games and always looking to connect with new people. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-borkowski-b8b8bb178/) or view his [GitHub](https://github.com/Sulima1)
Matthew W is a 3rd year computer science student at Simon Fraser University, currently looking for a summer 2022 internship. He has formal training in data science. He's interested in learning new frontend skills/technologies and honing his current ones. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/), visit his [website](https://wongmatt.dev) or take a look at what he's [working on](https://github.com/WongMatthew)
### 🥳🎉THANK YOU WLU FOR HOSTING HAWKHACKS🥳🎉 | partial |
## Inspiration
Polycystic Ovary Syndrome (PCOS) is a disorder that affects 5-10% of the female population due to an imbalance of hormones. Women who experience PCOS have an increased risk of type 2 diabetes, high blood pressure, high cholesterol, anxiety, and depression. Like a lot of women's disorders, it's common for PCOS to receive a delayed diagnosis, or no diagnosis at all, due to a lack of awareness of its wide range of symptoms. As a consequence, many women unknowingly suffer from the health risks without proper treatment.
We have developed an application for the You.com search engine that compares your health data with information about women's health from your searches, allowing you to monitor your health status and identify potential indicators of PCOS.
## What it does
When you search anything related to women's sexual and reproductive health, our application pops up with 3 key features: (1) providing comprehensive information on PCOS via web scraping, (2) comparing these symptoms with information from your health app, and (3) a donation feature that allows you to contribute to organizations dedicated to providing resources to women with the condition.
## How we built it
Using You.com's Developer Dashboard, we utilized their editor to design the user interface, incorporating two APIs for personalized health data and generalized PCOS information. Furthermore, we integrated checkbook.io to enable donations directly to community organizations just with payee info!
## Challenges we ran into
We encountered challenges while incorporating the API into the You.com codebase, primarily due to the limitations of the "Form" components and difficulties placing components precisely as desired. Additionally, the integration of Checkbook.io was challenging due to the steps involved with user authentication and bank account creation.
## Accomplishments that we're proud of
The donation app tile is fully functional and we can track the donations given by the user through email and the Sandbox environment.
## What we learned
We learned how to create and integrate APIs, explored front-end development on You.com, and worked with HTTP POST and GET methods, all while deepening our knowledge of PCOS and its impact on women's health.
## What's next for YouCare
We aim to expand this tool to include more topics in women's health like STDs, pregnancy and sex education, each with their unique features to improve awareness. It’s capabilities can further develop to address all queries that deal with women’s health like providing advice on topics related to women's health, like periods or menstrual products. For instance, it can show you a summary of your cycle and suggest products that might be useful for you, and physicians you should consider etc. | ## Inspiration
Unhealthy diet is the leading cause of death in the U.S., contributing to approximately 678,000 deaths each year, due to nutrition and obesity-related diseases, such as heart disease, cancer, and type 2 diabetes. Let that sink in; the leading cause of death in the U.S. could be completely nullified if only more people cared to monitor their daily nutrition and made better decisions as a result. But **who** has the time to meticulously track every thing they eat down to the individual almond, figure out how much sugar, dietary fiber, and cholesterol is really in their meals, and of course, keep track of their macros! In addition, how would somebody with accessibility problems, say blindness for example, even go about using an existing app to track their intake? Wouldn't it be amazing to be able to get the full nutritional breakdown of a meal consisting of a cup of grapes, 12 almonds, 5 peanuts, 46 grams of white rice, 250 mL of milk, a glass of red wine, and a big mac, all in a matter of **seconds**, and furthermore, if that really is your lunch for the day, be able to log it and view rich visualizations of what you're eating compared to your custom nutrition goals?? We set out to find the answer by developing macroS.
## What it does
macroS integrates seamlessly with the Google Assistant on your smartphone and let's you query for a full nutritional breakdown of any combination of foods that you can think of. Making a query is **so easy**, you can literally do it while *closing your eyes*. Users can also make a macroS account to log the meals they're eating everyday conveniently and without hassle with the powerful built-in natural language processing model. They can view their account on a browser to set nutrition goals and view rich visualizations of their nutrition habits to help them outline the steps they need to take to improve.
## How we built it
DialogFlow and the Google Action Console were used to build a realistic voice assistant that responds to user queries for nutritional data and food logging. We trained a natural language processing model to identify the difference between a call to log a food eaten entry and simply a request for a nutritional breakdown. We deployed our functions written in node.js to the Firebase Cloud, from where they process user input to the Google Assistant when the test app is started. When a request for nutritional information is made, the cloud function makes an external API call to nutrionix that provides nlp for querying from a database of over 900k grocery and restaurant foods. A mongo database is to be used to store user accounts and pass data from the cloud function API calls to the frontend of the web application, developed using HTML/CSS/Javascript.
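Although our fulfillment runs as Node.js Cloud Functions, the Nutritionix request it makes looks roughly like the Python sketch below; the endpoint, headers, and response fields are based on the Nutritionix natural-language API docs and should be double-checked, and the keys shown are placeholders.

```python
# Sketch only: endpoint/headers/fields assumed from Nutritionix's natural-language API.
import requests

def nutrition_breakdown(meal_description: str) -> list:
    response = requests.post(
        "https://trackapi.nutritionix.com/v2/natural/nutrients",
        headers={"x-app-id": "YOUR_APP_ID", "x-app-key": "YOUR_APP_KEY"},
        json={"query": meal_description},
        timeout=10,
    )
    foods = response.json().get("foods", [])
    return [(f["food_name"], f["nf_calories"], f["nf_protein"], f["nf_sugars"]) for f in foods]

print(nutrition_breakdown("a cup of grapes, 12 almonds, 46 grams of white rice and a big mac"))
```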
## Challenges we ran into
Learning how to use the different APIs and the Google Action Console to create intents, contexts, and fulfillment was challenging on it's own, but the challenges amplified when we introduced the ambitious goal of training the voice agent to differentiate between a request to log a meal and a simple request for nutritional information. In addition, actually finding the data we needed to make the queries to nutrionix were often nested deep within various JSON objects that were being thrown all over the place between the voice assistant and cloud functions. The team was finally able to find what they were looking for after spending a lot of time in the firebase logs.In addition, the entire team lacked any experience using Natural Language Processing and voice enabled technologies, and 3 out of the 4 members had never even used an API before, so there was certainly a steep learning curve in getting comfortable with it all.
## Accomplishments that we're proud of
We are proud to tackle such a prominent issue with a very practical and convenient solution that really nobody would have any excuse not to use; by making something so important, self-monitoring of your health and nutrition, much more convenient and even more accessible, we're confident that we can help large amounts of people finally start making sense of what they're consuming on a daily basis. We're literally able to get full nutritional breakdowns of combinations of foods in a matter of **seconds**, that would otherwise take upwards of 30 minutes of tedious google searching and calculating. In addition, we're confident that this has never been done before to this extent with voice enabled technology. Finally, we're incredibly proud of ourselves for learning so much and for actually delivering on a product in the short amount of time that we had with the levels of experience we came into this hackathon with.
## What we learned
We made and deployed the cloud functions that integrated with our Google Action Console and trained the nlp model to differentiate between a food log and nutritional data request. In addition, we learned how to use DialogFlow to develop really nice conversations and gained a much greater appreciation to the power of voice enabled technologies. Team members who were interested in honing their front end skills also got the opportunity to do that by working on the actual web application. This was also most team members first hackathon ever, and nobody had ever used any of the APIs or tools that we used in this project but we were able to figure out how everything works by staying focused and dedicated to our work, which makes us really proud. We're all coming out of this hackathon with a lot more confidence in our own abilities.
## What's next for macroS
We want to finish building out the user database and integrating the voice application with the actual frontend. The technology is really scalable and once a database is complete, it can be made so valuable to really anybody who would like to monitor their health and nutrition more closely. Being able to, as a user, identify my own age, gender, weight, height, and possible dietary diseases could help us as macroS give users suggestions on what their goals should be, and in addition, we could build custom queries for certain profiles of individuals; for example, if a diabetic person asks macroS if they can eat a chocolate bar for lunch, macroS would tell them no because they should be monitoring their sugar levels more closely. There's really no end to where we can go with this! | ## Inspiration
Have either of the following happened to you?
* Ever since elementary school you've been fascinated by 17th century Turkish ethnography. Luckily, you just discovered a preeminent historian's blog about the collapse of the Ottoman Empire. Overjoyed, you start to text your friends, but soon remember that they're into 19th century Victorian poetry. If only you could share your love of historical discourse with another intellectual.
* Because you're someone with good taste, you're browsing Buzzfeed. Somehow "27 Extremely Disturbing Wikipedia Pages That Will Haunt Your Dreams" is not cutting it for you. Dang. If only you could see what your best friend Alicia was browsing. She would definitely know how to help you procrastinate on your TreeHacks project.
* On a Friday night, all your close friends have gone to a party, and you are bored to death. You look through the list of your Facebook friends. There are hundreds of people online, but you feel awkward and don’t know how to start a conversation with any of them.
Great! Because we built PageChat for you. We all have unique interests, many of which are expressed through our internet browsing. We believe that building simple connections through those interests is a powerful way to improve well-being. We built a *convenient* and *efficient* tool to connect people through their internet browsing.
## What it does
PageChat is a Google Chrome extension designed to promote serendipitous connections by offering one-on-one text chats centered on internet browsing. When active, PageChat
* displays what you and your friends are currently reading, allowing you to discover and share interesting articles
* centers the conversation around web pages by giving friends the opportunity to chat with each other directly through Chrome
* intelligently connects users with similar interests by creating one-on-one chats for users all over the world visiting the same webpage
## How we built it
### Chatting
The Chrome extension was built with Angular. The background script keeps track of tab updates and activations, and this live information is sent to the backend. The Angular app retrieves the list of friends and other users online and displays it in the Chrome extension. For friends, the title of the page they are currently reading is displayed. For users who are not friends, only those who are on the same web page are displayed. For each user displayed in the Chrome extension, we can start a chat. Then, the list view changes to a chat room where users can have a discussion.
Instead of maintaining our own server, we used Firebase extensively. The live connections are managed by the Realtime Database. Yet, since Firestore is easier to work with, we use a Cloud Function to reflect changes in live usage to Firestore. Thus, there is a ‘status’ collection that contains live information about the connection state and the URL and page title each user is looking at. The friend relations are maintained with a ‘friends’ collection. We use Firechat, an open-source realtime chatting app built on Firebase, so all the chatting activities and histories are saved in the chats collection.
One interesting collection is the ‘feature’ collection. It stores the feature vector, which is an array of 256 numbers, for each user. Whenever a user visits a new page (in the future we plan to change the feature vector update criterion), a Cloud Function is triggered, and using our model, the feature vector is updated. The feature vector is used to find better matches i.e. people that would have similar interests among friends and other users using PageChat. As more data is accumulated, the curated list of people users would want to talk to would improve.
### User Recommendations
If several people are browsing the same website and want to chat with each other, how do we pair them up? Intuitively, people who have similar browsing histories will have more in common to talk about, so we should group them together. We maintain a dynamic **feature vector** for each user, based on their reading history, and train it so that users with similar interests end up with feature vectors that are close in cosine distance.
When a user is active on PageChat and visits a site on our whitelist (we don't want sites with generic titles, so we stick to mostly news sites), we obtain the title of the page. We make the assumption that the title is representative of the content our user is reading. For example, we would expect that "27 Extremely Disturbing Wikipedia Pages That Will Haunt Your Dreams" has different content from "Ethnography Museum of Ankara". To obtain a reasonable embedding of our title, we use [SBERT](https://arxiv.org/abs/1908.10084), a BERT-based language model trained to predict the representation of sentences. SBERT can attend to the salient keywords in each title and can obtain a global representation of the title.
Next, we need some way to update feature vectors whenever a user visits a new page. This is well-suited for a recurrent neural network. These models maintain a hidden state that is continually updated with each new query. We will use an [LSTM](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) to update our feature vectors. It takes in the previous feature vector and new title and outputs the new feature vector.
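To make this concrete, here is a minimal sketch of the update step, assuming the `sentence-transformers` SBERT checkpoint and the encoder dimensions described in the training section below (the model name and dimensions are illustrative, not our exact configuration):

```python
# Minimal sketch: SBERT embeds the page title, an LSTM cell folds it into the user's
# running feature vector. Model name and dimensions are illustrative.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-mpnet-base-v2")       # 768-d title embeddings
encoder = nn.LSTMCell(input_size=768, hidden_size=128)

def update_feature_vector(title, state):
    """state = (hidden, cell); hidden is the per-user feature vector we store."""
    emb = torch.tensor(sbert.encode(title)).unsqueeze(0)   # shape (1, 768)
    return encoder(emb, state)

# A new user starts from zeros; every whitelisted page they visit refines the vector.
state = (torch.zeros(1, 128), torch.zeros(1, 128))
state = update_feature_vector("Ethnography Museum of Ankara", state)
user_vector = state[0].squeeze(0)    # written back to the 'feature' collection
```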
Finally, we need to train our LSTM. Fortunately, UCI has released a [dataset](https://archive.ics.uci.edu/ml/datasets/News+Aggregator) of news headlines along with their respective category (business, science and technology, entertainment, health). We feed these headlines as training data for the model. Before training, we preprocess the headlines, embedding all of them with SBERT. This greatly reduces training times. We use the following three-step procedure to train the model:
1. *Train an [autoencoder](https://arxiv.org/abs/1502.04681) to learn a compressed representation of the feature title.* SBERT outputs features vectors with 768 elements, which is large and unwieldy. An autoencoder is an unsupervised learning model that is able to learn high-fidelity compressed representations of the input sequence. The encoder (an LSTM) encodes a sequence of headlines into a lower-dimensional (128) space and the decoder (another LSTM) decodes the sequence. The model's goal is to output a sequence as close as possible to the original input sequence. After training, we will end up with an encoder LSTM that is able to faithfully map 768 element input vector to a 128 element space and maintain the information of the representation. We use a [contractive autoencoder](https://icml.cc/2011/papers/455_icmlpaper.pdf), which adds an extra loss term to promote the encoder to be less sensitive to variance in the input.
2. *Train the encoder to condense feature vectors that share a category.* Suppose that Holo reads several business articles and Lawrence also reads several business articles. Ideally, they should have similar feature vectors. To train, we build sequences of 5 headlines, where each headline in a sequence is drawn from the same category. The encoder LSTM from the previous step encodes these sequences to feature vectors. We train it to obtain a higher cosine similarity for vectors that share a category than for vectors that don't.
3. *Train the encoder to condense feature vectors that share a story.* The dataset also has a feature corresponding to the specific news story an article covers. Similar to step 2, we build sequences of 5 headlines, where each headline in a sequence is drawn from the same story. By training the encoder to predict a high cosine similarity for vectors that share a story, we further improve the representation of the feature vectors.
We provide some evaluations for our model and document our training process in this [Google notebook](https://colab.research.google.com/drive/1B4eWINVyWntF0VESUUgRt2opA4QFoS-M?usp=sharing). Feel free to make a copy and tinker with it!
## What's next for PageChat
* Implementation of more features (e.g. group chatting rooms, voice chats, reading pattern analysis per user) that make the app more fun to use. | winning |
## 💡 Inspiration 💡
Have you ever wished you could play the piano perfectly? Well, instead of playing yourself, why not get Ludwig to play it for you? Regardless of your ability to read sheet music, just upload it to Ludwig and he'll scan, analyze, and play the entire sheet music within the span of a few seconds! Sometimes, you just want someone to play the piano for you, so we aimed to make a robot that could be your little personal piano player!
This project allows us to bring music to places like elderly homes, where live performances can uplift residents who may not have frequent access to musicians. We were excited to combine computer vision, MIDI parsing, and robotics to create something tangible that shows how technology can open new doors.
Ultimately, our project makes music more inclusive and brings people together through shared experiences.
## ❓What it does ❓
Ludwig is your music prodigy. Ludwig can read any sheet music that you upload to him, then convert it to a MIDI file, convert that to playable notes on the piano scale, then play each of those notes on the piano with its fingers! You can upload any kind of sheet music and see the music come to life!
## ⚙️ How we built it ⚙️
For this project, we leveraged OpenCV for computer vision to read the sheet music. The sheet reading goes through a process of image filtering, converting it to binary, classifying the characters, identifying the notes, then exporting them as a MIDI file. We then have a server running for transferring the file over to Ludwig's brain via SSH. Using the Raspberry Pi, we leveraged multiple servo motors with a servo module to simultaneously move multiple fingers for Ludwig. In the Raspberry Pi, we developed functions, key mappers, and note mapping systems that allow Ludwig to play the piano effectively.
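As a rough illustration, here is a condensed Python sketch of the MIDI-to-finger step on the Raspberry Pi; the servo channels, press angles, and note-to-finger mapping are placeholders rather than Ludwig's actual calibration:

```python
# Condensed sketch of the playback step on the Pi. The note-to-finger mapping and the
# press/rest angles are placeholders, not Ludwig's real calibration.
import mido
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)                              # PCA9685-style servo module
NOTE_TO_FINGER = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4}     # middle C..G mapped to channels

def press(finger, down):
    kit.servo[finger].angle = 40 if down else 90         # press vs. rest position

def play(midi_path):
    for msg in mido.MidiFile(midi_path).play():          # .play() keeps real-time spacing
        if msg.type in ("note_on", "note_off") and msg.note in NOTE_TO_FINGER:
            press(NOTE_TO_FINGER[msg.note], msg.type == "note_on" and msg.velocity > 0)

play("sheet_music_output.mid")
```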
## Challenges we ran into ⚔️
We had a few bumps in the road along the way. Some major ones included file transferring over SSH, as well as making fingers strong enough to press the piano keys and withstand the torque. It was also fairly difficult to figure out the OpenCV pipeline for reading the sheet music. We had a model that was fairly slow at reading and converting the music notes, but we were able to learn from the mentors at Hack The North how to speed it up and make it more efficient.
## Accomplishments that we're proud of 🏆
* Got a working robot to read and play piano music!
* File transfer working via SSH
* Conversion from MIDI to key presses mapped to fingers
* Piano-playing melody abilities!
## What we learned 📚
* Working with Raspberry Pi 3 and its libraries for servo motors and additional components
* Working with OpenCV and fine tuning models for reading sheet music
* SSH protocols and just general networking concepts for transferring files
* Parsing MIDI files into useful data through some really cool Python libraries
## What's next for Ludwig 🤔
* MORE OCTAVES! We might add some sort of DC motor with a gearbox, essentially a conveyor belt, which would enable the motors to move up the piano keyboard to allow for more octaves.
* Improved photo recognition for reading accents and BPM
* Realistic fingers via 3D printing | ## Inspiration
There were two primary sources of inspiration. The first one was a paper published by University of Oxford researchers, who proposed a state of the art deep learning pipeline to extract spoken language from video. The paper can be found [here](http://www.robots.ox.ac.uk/%7Evgg/publications/2018/Afouras18b/afouras18b.pdf). The repo for the model used as a base template can be found [here](https://github.com/afourast/deep_lip_reading).
The second source of inspiration is an existing product on the market, [Focals by North](https://www.bynorth.com/). Focals are smart glasses that aim to put the important parts of your life right in front of you through a projected heads up display. We thought it would be a great idea to build onto a platform like this through adding a camera and using artificial intelligence to gain valuable insights about what you see, which in our case, is deciphering speech from visual input. This has applications in aiding individuals who are deaf or hard-of-hearing, noisy environments where automatic speech recognition is difficult, and in conjunction with speech recognition for ultra-accurate, real-time transcripts.
## What it does
The user presses a button on the side of the glasses, which begins recording, and upon pressing the button again, recording ends. The camera is connected to a Raspberry Pi, which is a web-enabled device. The Raspberry Pi uploads the recording to Google Cloud and submits a POST request to a web server along with the uploaded file name. The web server downloads the video from Google Cloud, runs facial detection through a Haar cascade classifier, and feeds the result into a transformer network that transcribes the video. Once finished, a front-end web application is notified through socket communication, which results in the front end streaming the video from Google Cloud and displaying the transcription output from the back-end server.
## How we built it
The hardware platform is a raspberry pi zero interfaced with a pi camera. A python script is run on the raspberry pi to listen for GPIO, record video, upload to google cloud, and post to the back-end server. The back-end server is implemented using Flask, a web framework in Python. The back-end server runs the processing pipeline, which utilizes TensorFlow and OpenCV. The front-end is implemented using React in JavaScript.
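Here is an abridged sketch of that back-end flow (bucket name, route, and the lip-reading model hand-off are placeholders; the actual transformer comes from the Oxford repo linked above):

```python
# Abridged back-end flow: download the clip, crop faces with a Haar cascade, then hand
# the crops to the lip-reading transformer. Bucket and route names are placeholders.
import cv2
from flask import Flask, request, jsonify
from google.cloud import storage

app = Flask(__name__)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_read(face_crops):
    """Placeholder for the 3D-conv + transformer model from the Oxford repo."""
    raise NotImplementedError

@app.route("/process", methods=["POST"])
def process():
    filename = request.json["filename"]
    storage.Client().bucket("synviz-recordings").blob(filename) \
        .download_to_filename("/tmp/clip.mp4")

    crops, cap = [], cv2.VideoCapture("/tmp/clip.mp4")
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            crops.append(frame[y:y + h, x:x + w])        # face region for the lip reader
        ok, frame = cap.read()

    transcript = lip_read(crops)
    # ...emit a socket event so the React front end can show the result...
    return jsonify({"transcript": transcript})
```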
## Challenges we ran into
* TensorFlow proved to be difficult to integrate with the back-end server due to dependency and driver compatibility issues, forcing us to run it on CPU only, which does not yield maximum performance
* It was difficult to establish a network connection on the Raspberry Pi, which we worked around through USB-tethering with a mobile device
## Accomplishments that we're proud of
* Establishing a multi-step pipeline that features hardware, cloud storage, a back-end server, and a front-end web application
* Design of the glasses prototype
## What we learned
* How to set up a back-end web server using Flask
* How to facilitate socket communication between Flask and React
* How to set up a web server through localhost tunneling using ngrok
* How to convert a video into a text prediction through 3D spatio-temporal convolutions and transformer networks
* How to interface with Google Cloud for data storage between various components such as hardware, back-end, and front-end
## What's next for Synviz
* With a stronger on-board battery, a 5G network connection, and a more powerful compute server, we believe it will be possible to achieve near real-time transcription from a video feed that could be implemented on an existing platform like North's Focals to deliver a promising business appeal
Watching the news and hearing the stories of people whose lives were ruined by felony convictions. Many ex-felons find it difficult to reintegrate with society due to lack of resources/opportunities. A small amount of cash will help them readjust to normal life.
## What it does
It uses the power of blockchain technology to set up a charity fund that ex-felons can withdraw from.
## How we built it
We built it using React, Express, PostgreSQL, and Pi Network.
## Challenges we ran into
Some challenges that we ran into were installing and implementing Pi Network and displaying the data on the form.
## Accomplishments that we're proud of
Building a full-stack web app
## What we learned
We learned about the various languages that we used in order to code this project and gained a deeper understanding of how to build applications.
## What's next for Charity Chain
We will expand our reach and influence to help as many disadvantaged ex-felons as possible to create a better world for everyone. | winning |
## Inspiration
The emotions people display through text can be really important data for the companies they address. We found this really interesting and saw a lot of potential in using this program to help companies manage their social media presence. We also really wanted to learn and implement Google Cloud APIs and use Beautiful Soup for web scraping, so this project seemed like a perfect fit.
## What it does
Our program collects data from websites about customers' experiences with flights and uses natural language processing to determine their sentiment towards the airlines they used.
## How we built it
We used Beautiful Soup to scrape data from websites and then used Google's Natural Language API to retrieve the sentiment value associated with the site's data. We then implemented flags to retrieve more specific data about customers' experiences.
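A minimal sketch of the scrape-then-score flow looks like this (the review URL and CSS selector are placeholders for whichever airline-review pages we target):

```python
# Hedged sketch of the scrape-then-score flow; the URL and CSS selector are placeholders.
import requests
from bs4 import BeautifulSoup
from google.cloud import language_v1

def scrape_reviews(url):
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [div.get_text(strip=True) for div in soup.select("div.review-body")]

def sentiment(text):
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    # Score ranges from -1.0 (negative) to 1.0 (positive)
    return client.analyze_sentiment(request={"document": doc}).document_sentiment.score

for review in scrape_reviews("https://example.com/airline-reviews"):
    print(round(sentiment(review), 2), review[:80])
```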
## Challenges we ran into
We did not have the right version of Python when downloading the API client from Google. This wasted a lot of our time, as it was not clear that this was the reason we were unable to proceed.
## Accomplishments that we're proud of
We are content that we could integrate web scraping effectively with our NLP commands.
## What we learned
We learnt that understanding and using an API may not be easy at first, but after using it a few times, the process becomes significantly easier!
## What's next for Sentiment Scraper
We hope to contact Facebook and Twitter so we can access their APIs and retrieve data from them too. Additionally, we want to include more flags that the user can choose when running our program - allowing them to access more information that the NLP provides. | ## Inspiration
Our team became curious about the concept of sentiment analysis after stumbling across it in the HuggingFace API documentation. After discussing the various real life problems to which sentiment analysis could be applied, we decided that we would build a general platform in which a user could freely determine the concepts that they want to explore.
## What it does
A user is presented with a simple screen featuring a search bar and a drop down of time frames. When they type in a word, the backend runs a combination of Selenium and chromedriver to web scrape the top tweets for that keyword within the timeframe specified by the user. The text from these tweets is then passed through a sentiment analyzer, in which the HuggingFace API scores each text on a "positivity" scale from 1-100. The user is then presented with a "positivity ratio", which gives them an idea of how that specific keyword is perceived by Twitter.
## How we built it
We built the backend with Python3. Most of that file, however, is the logic for Selenium and its chromedriver. Through manual instructions, Selenium logs into Twitter with a burner Twitter account and parses through tweets that appear in Twitter's search function. All of the text is then sent to the sentiment\_analysis file, which attaches a score and populates various data visualization tools (bar plots, histograms) to eventually send back to the user. The frontend is built with React, and Flask was also used to help make the correct calls from user input to the backend, and the resulting data back to the frontend.
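For illustration, here is a condensed sketch of the scroll-and-score loop; the CSS selector, scroll counts, and timings are illustrative, and it assumes an already logged-in Selenium session:

```python
# Condensed scroll-and-score loop; selector and timings are illustrative, and we assume
# Selenium has already logged into the burner account.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
driver = webdriver.Chrome()
driver.get("https://twitter.com/search?q=treehacks&f=live")

tweets = set()
for _ in range(5):                                   # scroll to trigger lazy loading
    for el in driver.find_elements(By.CSS_SELECTOR, '[data-testid="tweetText"]'):
        tweets.add(el.text)
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(3)                                    # wait for new tweets to render

scores = [classifier(t[:512])[0] for t in tweets]
positive = sum(1 for s in scores if s["label"] == "POSITIVE")
print(f"Positivity ratio: {positive}/{len(scores)}")
```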
## Challenges we ran into
Twitter is notoriously difficult to get information from. We first attempted the use of Tweepy (the official Twitter developer tools API), which failed for two main reasons: the free developer account would only allow access to tweets from the homepage (as opposed to search) and would only show tweets from up to 7 days prior. We then switched gears and attempted various web scraping tools. However, it was extremely difficult to automate this process due to Twitter's new guidelines on web scraping and bot usage; we ended up using Selenium as a sort of proxy to log into an account and look through the timeline.
Another issue we had was dynamic loading: Twitter doesn't load tweets (or various other features) until a user manually scrolls through the timeline/search feed to see them. As a result, we were forced to scroll the page down and have the program wait as the tweets loaded. The combination of this issue and the inability to pull tweets in the background made collection an extremely slow process (~60 seconds to parse text from 100 tweets).
Our other main issue was bringing user input into the backend, and then back to the frontend as organized data. Because the frontend and backend files were made completely independently, we ran into a lot of trouble getting the two to work together.
## Accomplishments that we're proud of
We were really proud to create a fully functional "bot" with the capability to collect text from tweets.
## What we learned
We learned a lot about web scraping, dove deep into the concept of sentiment analysis, and ultimately gained well rounded exposure to full stack development.
## What's next for CHARm
We want to implement categorization by date, which would allow users to see trends in sentiment of certain concepts (Ex. a celebrity 6 months ago vs. today). This could be extremely applicable for those with political campaigns, social media influencer accounts, or even people with a desire to learn. | ## Inspiration
We're computer science students, need we say more?
## What it does
"Single or Nah" takes in the name of a friend and predicts if they are in a relationship, saving you much time (and face) in asking around. We pull relevant Instagram data including posts, captions, and comments to drive our Azure-powered analysis. Posts are analyzed for genders, ages, emotions, and smiles -- with each aspect contributing to the final score. Captions and comments are analyzed for their sentiment, which give insights into one's relationship status. Our final product is a hosted web-app that takes in a friend's Instagram handle and generate a percentage denoting how likely they are to be in a relationship.
## How we built it
Our first problem was obtaining Instagram data. The tool we use is a significantly improved version of an open-source Instagram scraper API (<https://github.com/rarcega/instagram-scraper>). The tool originally ran as a Python command-line tool, which was impractical to use in a WebApp. We modernized the tool, giving us increased flexibility and allowing us to use it within a Python application.
We run Microsoft's Face-API on the target friend's profile picture to guess their gender and age -- this will be the age range we are interested in. Then, we run through their most recent posts, using Face-API to capture genders, ages, emotions, and smiles of people in those posts to finally derive a sub-score that will factor into the final result. Our guess: the happier the photos and the more pictures with the opposite gender, the less likely you are to be single!
We take a similar approach to captions and comments. First, we used Google's Word2vec to generate words semantically similar to certain keywords (love, boyfriend, girlfriend, relationship, etc.) and to assign weights to those words. Furthermore, we included emojis (usually a good giveaway!) in our weighting scheme [link](https://gist.github.com/chrisfischer/144191eae03e64dc9494a2967241673a). We use Microsoft's Text Analytics API alongside this keyword-weight scheme to obtain a sentiment sub-score and a keyword sub-score.
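A small sketch of the keyword-weighting step, assuming a pretrained word2vec model loaded with gensim (the seed words and weights below are illustrative):

```python
# Keyword-weighting sketch with gensim; seed words and base weights are illustrative.
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)
SEEDS = {"love": 1.0, "boyfriend": 1.5, "girlfriend": 1.5, "relationship": 1.2}

keyword_weights = dict(SEEDS)
for seed, base in SEEDS.items():
    for word, similarity in w2v.most_similar(seed, topn=10):
        # Neighbouring words inherit a weight scaled by their similarity to the seed
        keyword_weights.setdefault(word.lower(), round(base * similarity, 2))

def caption_score(caption):
    return sum(keyword_weights.get(tok, 0.0) for tok in caption.lower().split())

print(caption_score("date night with the boyfriend"))
```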
Once we have these sub-scores, we aggregate them into a final percentage denoting how likely your friend is to be single. Then it was time to take it live. We integrated all the individual calculations and aggregations into a Django app, then hosted all necessary computation using Azure WebApps. Finally, we designed a simple interface to accept inputs and display results with a combination of HTML, CSS, JavaScript, and jQuery.
## Challenges we ran into
The main challenge was that we were limited by our resources. We only had access to basic accounts for some of the software we used, so we had to be careful how on often and how intensely we used tools to prevent exhausting our subscriptions. For example, we limited the number of posts we analyzed per person. Also, our Azure server uses the most basic service, meaning it does not have enough computing power to host more than a few clients.
The application only works on "public" Instagram accounts, so we were unable to find a good number of test subjects to fine-tune our process. For the accounts we did have access to, the application produced a reasonable answer, leading us to believe that the app is a good predictor.
## Accomplishments that we're proud of
We're proud that we were able to build this WebApp using tools and APIs that we hadn't used before. In the end, our project worked reasonably well and accurately. We were able to try it on people and get a score, which is an accomplishment in itself. Finally, we're proud that we were able to create a relevant tool in today's age of social media -- I mean, I know I would use this app to narrow down who to DM.
## What we learned
We learned about the Microsoft Azure API (Face API, Text Analytics API, and web hosting), NLP techniques, and full stack web development. We also learned a lot of useful software development techniques such as how to better use git to handle problems, creating virtual environments, as well as setting milestones to meet.
## What's next for Single or Nah
The next steps for Single or Nah is to make the website and computations more scalable. More scalability allows more people to use our product to find who they should DM -- and who doesn't want that?? We also want to work on accuracy, either by adjusting weights given more data to learn from or by using full-fledged Machine Learning. Hopefully more accuracy would save "Single or Nah" from some awkward moments... like asking someone out... who isn't single... | losing |
## My Samy helps:
**Young marginalized students**
* Anonymous process: Ability to ask any questions anonymously without feeling judged
* Get relevant resources: More efficient process for them to ask for help and receive immediately relevant information and resources
* Great design and user interface: Easy to use platform with kid friendly interface
* Tailored experience: Computer model is trained to understand their vocabulary
* Accessible anytime: Replaces the need to schedule an appointment and meet someone in person which can be intimidating. App is readily available at any time, any place.
* Free to use platform
**Schools**
* Allows them to support every student simultaneously
* Provides a convenient process as the recommendation system is automatized
* Allows them to receive a general report that highlights the most common issues students experience
**Local businesses**
* Gives them an opportunity to support their community in impactful ways
* Allows them to advertise their services
Business Plan:
<https://drive.google.com/file/d/1JII4UGR2qWOKVjF3txIEqfLUVgaWAY_h/view?usp=sharing> | ## Inspiration
We got together a team passionate about social impact, and all the ideas we had kept going back to loneliness and isolation. We have all been in high pressure environments where mental health was not prioritized and we wanted to find a supportive and unobtrusive solution. After sharing some personal stories and observing our skillsets, the idea for Remy was born. **How can we create an AR buddy to be there for you?**
## What it does
**Remy** is an app that contains an AR buddy who serves as a mental health companion. Through information accessed from "Apple Health" and "Google Calendar," Remy is able to help you stay on top of your schedule. He gives you suggestions on when to eat, when to sleep, and personally recommends articles on mental health hygiene. All this data is aggregated into a report that can then be sent to medical professionals. Personally, our favorite feature is his suggestions on when to go on walks and your ability to meet other Remy owners.
## How we built it
We built an iOS application in Swift with ARKit and SceneKit, with Apple Health data integration. Our 3D models were created with Mixamo.
## Challenges we ran into
We did not want Remy to promote codependency in its users, so we specifically set time aside to think about how we could specifically create a feature that focused on socialization.
We've never worked with AR before, so this was an entirely new set of skills to learn. The biggest challenge was learning how to position AR models in a given scene.
## Accomplishments that we're proud of
We have a functioning app of an AR buddy that we have grown heavily attached to. We feel that we have created a virtual avatar that many people really can fall for.
## What we learned
Aside from this being many of the team's first times work on AR, the main learning point was about all the data that we gathered on the suicide epidemic for adolescents. Suicide rates have increased by 56% in the last 10 years, and this will only continue to get worse. We need change.
## What's next for Remy
While our team has set out for Remy to be used in a college setting, we envision many other relevant use cases where Remy will be able to better support one's mental health wellness.
Remy can be used as a tool by therapists to get better insights on sleep patterns and outdoor activity done by their clients, and this data can be used to further improve the client's recovery process. Clients who use Remy can send their activity logs to their therapists before sessions with a simple click of a button.
To top it off, we envisage the Remy application being a resource hub for users to improve their overall wellness. Through providing valuable sleep hygiene tips and even lifestyle advice, Remy will be the one-stop, holistic companion for users experiencing mental health difficulties to turn to as they take their steps towards recovery. | ## Inspiration
What happens when you throw 3 Carleton students into a 36 hour hackathon with a Waterloo student? For us, it was Note Padd (triple entendre - can you find all 3 meanings?), a web application that takes text and composes a song from it.
The idea was pitched as an interesting way of exploring natural language processing that incorporates a staff of programming and design principles. Note Padd parses English text and considers sentence length, punctuation, and syllables to create a song unique to the text. The option of using a major or minor scale is also presented to the user.
The project was an interesting challenge for our team that was both achievable and fun.
## What it does
This is a fun application that parses any structured english text for various elements such as sentence length, punctuation and syllables to create a unique song! You will also be able to customize the sound of your text with either major or minor pentatonic pitch.
## How we built it
We used JavaScript, HTML, and CSS to develop this application. The most important APIs we used were Tone.js and NexusUI. We also used the Materialize framework (based on Google's Material Design) to make the user interface more visually appealing.
## Challenges we ran into
One of the main challenges was finding a collection to represent sentences, words and syllables for easy access.
There were also a lot of complexities in actually parsing the text and making it sound good with the individuals tones, as well as fixing the bugs in both our code and inside Tone.js and nexusUI.
## Accomplishments that we're proud of
We are proud of this application as a whole! We never thought it would sound this great or that the language processing would work this well.
## What we learned
We learned that it is worth going through the painstaking process of learning new frameworks and new technologies, because it will pay off.
## What's next for Note Padd
Note Padd is going to implement more effects in the future, as well as more parsing methods. | winning |
## **Inspiration**
Ever had to wipe your hands constantly to search for recipes and ingredients while cooking?
Ever wondered about the difference between your daily nutrition needs and the nutrition of your diets?
Vocal Recipe is an integrated platform where users can easily find everything they need to know about home-cooked meals! Information includes recipes with nutrition information, measurement conversions, daily nutrition needs, cooking tools, and more! The coolest feature of Vocal Recipe is that users can access the platform through voice control, which means they do not need to constantly wipe their hands to search for information while cooking. Our platform aims to support healthy lifestyles and make cooking easier for everyone.
## **How we built Vocal Recipe**
Recipes and nutrition information are retrieved from Spoonacular, an integrated food and recipe API.
The voice control system is implemented using Dasha AI - an AI voice recognition system that supports conversation between our platform and the end user.
The measurement conversion tool is implemented using a simple calculator.
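For reference, here is a rough sketch of the recipe lookup; the Spoonacular endpoint and parameter names reflect our understanding of the API and should be treated as assumptions:

```python
# Rough sketch of the recipe lookup; treat the exact parameter names as assumptions
# about the Spoonacular API and double-check them against its documentation.
import requests

API_KEY = "YOUR_SPOONACULAR_KEY"

def search_recipes(query, n=3):
    resp = requests.get(
        "https://api.spoonacular.com/recipes/complexSearch",
        params={"apiKey": API_KEY, "query": query, "number": n, "addRecipeNutrition": True},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["results"]

for recipe in search_recipes("tomato soup"):
    calories = next(x for x in recipe["nutrition"]["nutrients"] if x["name"] == "Calories")
    print(recipe["title"], "-", calories["amount"], calories["unit"])
```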
## **Challenges and Learning Outcomes**
One of the main challenges we faced was the limited trials that Spoonacular offers for new users. To combat this difficulty, we had to switch between team members' accounts to retrieve data from the API.
The time constraint was another challenge that we faced. We did not have enough time to formulate and develop the whole platform in just 36 hours, so we broke the project down into stages and completed the first three.
It was also our first time using Dasha AI, a relatively new platform for which little open-source code could be found. We got the opportunity to explore and experiment with this tool. It was a memorable experience.
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was tricky to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | ## Inspiration
We got into a debate about our hackathon idea. One thing led to another, and we built YapBot!
## What it does
* **Smart Argument Generation**: Yapbot analyzes your debate topic and delivers well-structured arguments that are both compelling and effective.
* **Real-Time Suggestions**: Get instant feedback and argument enhancements as you engage in debates!
* **Diverse Array of Styles**: From Yoda to Peter Griffin, debate in the style of any character/personality of your choice!
* **Seamless Experience**: Enjoy a user-friendly interface that makes accessing and utilizing Yapbot’s features effortless.
* **Real Time transcription**: Get a real-time transcription of your debate with an option to export the transcription at any point of time.
## How we built it
First, we experimented with the Cohere API to get a working local script that we could use in the Flask backend we needed to build for our hack. We then moved on to designing the REST API that our React frontend would ultimately communicate with. Once we had the API spec ironed out, our team split into frontend and backend groups until we were done with YapBot.
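A minimal sketch of the argument-generation call behind our Flask endpoint (the preamble wording and model name are illustrative, not our exact prompt):

```python
# Minimal sketch of the argument-generation call; preamble wording and model name are
# illustrative, not our production prompt.
import cohere

co = cohere.Client("COHERE_API_KEY")

def generate_argument(topic, stance, persona="Yoda"):
    response = co.chat(
        model="command-r",
        preamble=(f"You are a debate coach speaking in the style of {persona}. "
                  "Give three concise, well-structured arguments and nothing else."),
        message=f"Debate topic: {topic}. Argue {stance}.",
    )
    return response.text

print(generate_argument("pineapple belongs on pizza", "in favour"))
```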
## Challenges we ran into
* React/FE build systems - some of us were not too experienced with React, so getting the FE to look and feel good was a bit challenging.
* Prompt engineering - it was quite difficult for us to settle on a good prompt to use with Cohere's LLM. After some trial and error, we settled on the one we are using on the backend.
* Live audio streaming - figuring out how to live stream the audio from our computer microphone and get it transcribed to text was quite a challenge. We ended up using the [Deepgram](https://deepgram.com/) API with a websocket connection to ensure low latency live audio transcription.
## Accomplishments that we're proud of
This was our first time working with the Cohere LLM API, so we were particularly proud of our prompt engineering that enabled the core functionality of YapBot.
## What we learned
LLM hallucinations are very common without careful prompt engineering. We think that this is a particularly applicable skill now that LLMs are becoming more commonplace in all facets of society.
## What's next for YapBot
* Automatic voice classification/recognition (diarization)
* Clone your voice + text-to-speech (TTS)
* Debate transcript analysis (timeseries data analysis) | winning |
## Inspiration
Self-motivation is hard. It’s time for a social media platform that is meaningful and brings a sense of achievement instead of frustration.
While various pro-exercise campaigns and apps have tried to inspire people, it is difficult to stay motivated with so many other more comfortable distractions around us. Surge is a social media platform that helps solve this problem by empowering people to exercise. Users compete against themselves or new friends to unlock content that is important to them through physical activity.
True friends are formed through adversity, and we believe that users will form more authentic, lasting relationships as they compete side by side in fitness challenges tailored to their ability levels.
## What it does
When you register for Surge, you take an initial survey about your overall fitness, preferred exercises, and the websites you are most addicted to. This survey serves as the starting point from which Surge creates your own personalized challenges: run 1 mile to watch Netflix, for example. Surge links to your phone or IoT wrist device (Fitbit, Apple Watch, etc.) and, using its own Chrome browser extension, 'releases' content that is important to the user when they complete the challenges.
The platform is a 'mixed bag'. Sometimes users will unlock rewards such as vouchers or coupons, and sometimes they will need to complete the challenge to unlock their favorite streaming or gaming platforms.
## How we built it
Back-end:
We used Python Flask to run our webserver locally as we were familiar with it and it was easy to use it to communicate with our Chrome extension's Ajax. Our Chrome extension will check the URL of whatever webpage you are on against the URLs of sites for a given user. If the user has a URL locked, the Chrome extension will display their challenge instead of the original site at that URL. We used an ESP8266 (onboard Arduino) with an accelerometer in lieu of an IOT wrist device, as none of our team members own those devices. We don’t want an expensive wearable to be a barrier to our platform, so we might explore providing a low cost fitness tracker to our users as well.
We chose Google's Firebase as our database for this project as it supports calls from many different endpoints. We integrated it with our Python and Arduino code and intended to integrate it with our Chrome extension as well; however, we ran into trouble doing that, so we used AJAX to send a request to our Flask server, which then acts as a middleman between the Firebase database and our Chrome extension.
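A sketch of that middleman route looks roughly like this; the collection and field names are placeholders for our actual Firestore layout:

```python
# Sketch of the Flask middleman the extension calls via AJAX; collection and field
# names are placeholders for our real Firestore layout.
import firebase_admin
from firebase_admin import credentials, firestore
from flask import Flask, jsonify, request

firebase_admin.initialize_app(credentials.Certificate("serviceAccount.json"))
db = firestore.client()
app = Flask(__name__)

@app.route("/challenge")
def challenge():
    user, url = request.args["user"], request.args["url"]
    doc = db.collection("locks").document(user).get().to_dict() or {}
    locked = url in doc.get("locked_sites", [])
    # The extension swaps the page for the challenge text whenever `locked` is true
    return jsonify({"locked": locked,
                    "challenge": doc.get("challenge", "Run 1 mile to unlock this site!")})
```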
Front-end:
We used Figma to prototype our layout, and then converted to a mix of HTML/CSS and React.js.
## Challenges we ran into
Connecting all the moving parts: the IOT device to the database to the flask server to both the chrome extension and the app front end.
## Accomplishments that we're proud of
Please see above :)
## What we learned
Working with firebase and chrome extensions.
## What's next for SURGE
Continue to improve our front end. Incorporate analytics to accurately identify the type of physical activity the user is doing. We would also eventually like to include analytics that gauge how easily a person is completing a task, to ensure the fitness level that they have been assigned is accurate. | ## Inspiration
Physiotherapy is expensive for what it provides you with. A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution, and technology exists to help people who need medical aid but don't have the time or money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3d interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
On the embedded-system side of things, we used a single Raspberry Pi for all the sensor processing. The Pi is in charge of interfacing with one IMU, while an Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device, which displays the 3D interactive GUI and processes the ML data. All the data in the backend is relayed and managed using ROS. This data is then uploaded to Firebase, where the information is saved in the cloud and can be accessed anytime from a smartphone. Firebase also handles plotting the data to give accurate numerical feedback on values such as orientation, trajectory, and improvement over time.
## Challenges WE ran into
Hooking up two IMUs to the same RPi is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a Firebase connection with the app that could support the high data volume we were sending over, and creating a graphing mechanism that is meaningful.
With the recent scare of Hurricane Florence, we wanted to create a system to more quickly identify people needing aid in disaster-stricken areas.
## What it does
Safety Net is an application designed to run in cooperation with a swarm of drones. Search and Rescue drones are already becoming commonplace, and optimizing their capabilities has the potential to save countless lives. Drones that are deployed to an area affected by natural disasters are often sent to find specific missing persons or to investigate infrastructure failures. Safety Net expands the reconnaissance abilities of the drone swarm by identifying any individuals that are possibly remaining in the region. No prior information is needed about the area - the drones simply sweep the affected zones and use Haar Cascades to spot people in need of rescue on the ground. Safety Net then refines the results using IBM-Watson before transmitting its findings to a base-of-operations in the area. From there, a local server fitted with Google's Maps API drops pins wherever people are found, and the operator can quickly view still images of the area to identify people in need as quickly as possible.
## How we built it
We used OpenCV and IBM-Watson as our two visual classifiers. The location data and images are sent to a Node JS server, which communicates with the Google Maps API to display a comprehensive map to the operators.
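As a rough sketch, the per-image person detection looks like this before the IBM-Watson refinement step (the cascade file and thresholds are the stock OpenCV defaults, not tuned values):

```python
# Per-image person detection before the IBM-Watson refinement; the cascade file and
# thresholds are stock OpenCV defaults rather than tuned values.
import cv2

body_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_fullbody.xml")

def find_people(image_path):
    frame = cv2.imread(image_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = body_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    return frame, boxes                       # (x, y, w, h) per detected person

frame, boxes = find_people("drone_frame.jpg")
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
# if len(boxes): POST the annotated still + GPS fix to the Node.js server for a map pin
```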
## Challenges we ran into
None of our 4 team members were too familiar with Node JS or front-end development. Much of our time was spent simply exploring the available functionality.
## Accomplishments that we're proud of
In line with the previous question, we were very proud that we were out of our comfort zones for much of the time, and yet were able to have a finished project with the intended functionality. Although it is not perfect, Safety Net does its job well and could potentially be developed into life-saving software.
## What we learned
As previously mentioned, we learned a lot of new technologies.
## What's next for Safety Net
We'll continue to develop the project, cleaning up the interface and adding reach goals until we're satisfied with the relevance and helpfulness of Safety Net. | winning |
## Inspiration
Google Glass was, and always will be, one of the most magical advancements in wearable, intelligent technology. With recent advances in mobile processing power and computer vision techniques, the team was driven to combine ideas of the past with technologies of the present to solve complex, yet critical, problems in healthcare and lifestyle, to approach the future.
## Introduction
Our solution, coined "QuitIT!", aims to help smokers quit by preventing relapse through environment-based risk prediction. Cigarette smoking results in the deaths of 500,000 US individuals a year, yet the best smoking cessation interventions, of which only a small percentage of smokers take advantage, achieve less than 20% long-term (6-month) abstinence rates. We rely heavily on the concept of just-in-time adaptive interventions (JITAIs), which aim to provide the right type and amount of support, at the right time, by adapting to an individual's changing internal and contextual state.
## What it does
Our device attaches to the user's head by clipping onto sunglasses, or being placed inside a hat, to provide an accurate stream of the user's point of view (POV). Images are taken every 10 minutes and classified as smoking or non-smoking areas using a deep learning framework deployed on the cloud (IBM Watson Health). Classifications are sent to a backend server to be stored securely and temporarily. Heuristics on the server, derived from the literature, dictate whether the user is notified. The patient receives an SMS when he or she enters an area likely to elicit smoking cravings. The SMS prompts the patient to employ an appropriate cessation intervention, such as chewing nicotine gum or applying a nicotine patch.
The user can then check their activity/progress on a simple mobile application. Weekly goals motivate users to reduce their nicotine/smoking reliance. User-specific encounters with smoking environments can serve to warn other patients, since our backend solution records geographical location and displays elegant visualizations in the map functionality. Critically, accumulation of user data across regions can serve to define smoking areas for non-smokers as well. Non-smoking users such as pregnant women, asthma patients, etc. could benefit from our solution by minimizing adverse outcomes due to passive smoking. Lastly, our solution provides a mechanism for physician/psychiatrist data access for monitoring user activity. This mechanism, however, is completely dependent upon the consent of the user, to maintain privacy compliance.
## How we built it
We used a Raspberry Pi Model B with its companion Pi Camera v2 for capturing images and communicating with cloud and backend components. Bash scripts served to capture images at a constant rate (1 image ~ every 10 minutes) as well as communicate with our deep learning classification algorithm implemented using IBM's Visual Recognition service. The algorithm outputs a binary variable and was trained on ~350 images. Half were images of smoking areas and half were images of non-smoking areas. The images were scraped from Google Image Search using a Python package. Image queries were based upon objects/scenes found to be associated with smoking and non-smoking in recent literature (see below).
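A Python sketch of the Pi-side loop (our actual implementation uses bash scripts, and the classifier ID, credentials, and Firebase URL below are placeholders):

```python
# Python sketch of the Pi-side loop (the real version is a pair of bash scripts); the
# classifier ID, API key, and Firebase URL are placeholders.
import subprocess, time, requests
from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

vr = VisualRecognitionV3(version="2018-03-19",
                         authenticator=IAMAuthenticator("WATSON_API_KEY"))
FIREBASE_URL = "https://quitit-demo.firebaseio.com/classifications.json"

while True:
    subprocess.run(["raspistill", "-o", "/tmp/pov.jpg", "-w", "640", "-h", "480"], check=True)

    with open("/tmp/pov.jpg", "rb") as img:
        result = vr.classify(images_file=img,
                             classifier_ids=["smoking_vs_nonsmoking_1"]).get_result()

    classes = result["images"][0]["classifiers"][0]["classes"]
    smoking_area = max(classes, key=lambda c: c["score"])["class"] == "smoking"

    # Hand the binary label to Firebase, where the notification heuristics live
    requests.post(FIREBASE_URL, json={"smoking": smoking_area, "ts": time.time()})
    time.sleep(600)                                   # ~1 image every 10 minutes
```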
Classification results were returned to the Pi, after which they were directly passed to a Firebase database. This is followed by a request to the user's mobile device, which obtains geolocation coordinates and also allows for an app update if the user is in a smoking area. The app (iOS) was designed using Swift and serves to show a map of smoking areas and to motivate the user to meet weekly milestones/goals. The mobile app also provides recommendations if an individual is tempted to smoke at his/her location (chewing nicotine gum or applying nicotine patches).
## Challenges we ran into
None of us really had any backend development experience. We had initially chosen a different backend solution and spent quite a bit of time trying to resolve issues before switching to Firebase. Choosing a mobile OS was also challenging because only half of our team had access to macOS. Regardless, we assigned tasks accordingly and overcame our hurdles.
## Accomplishments that we're proud of
We managed to create a novel end-to-end, mobile, machine learning framework for healthcare in less than 24 hrs. Our team members all came with different experiences, interests, and expertise but managed to address all parts of our project. This is especially significant given the diverse composition of our project, which pertained to topics such as deep learning, hardware, camera sensing, backend development, mobile development, and clinical data science.
## What we learned
During the first three quarters of our project, our team had doubts about whether we would have a final product. There were just so many components to our ambitious project that it was very difficult at times to envision a finished product. Alongside learning about IBM's Visual Recognition platform and backend development, our team truly embraced the process of hacking by persevering through our setbacks.
## What's next for QuitIT: Helping smokers quit with computer vision
Our team is very fond of our project, despite its lack of immediate practicality. Despite to this, the team would be open to continuing work, especially on interesting analytics which could originate from this unique type of device. We believe that the device can only get better with more users and data and would be very interested in learning how alter the existing configuration to allow for greater scalability.
## Acknowledgements
The team is extremely grateful to the IBM support team present at HackMIT. They went above and beyond with their technical support and patience with our frequent questions and inquiries.
Our team leader, Abhi Jadhav, conducts research under the supervision of Drs. Matthew Engelhard and Joe McClernon, in the Department of Psychiatry & Behavioral Sciences at Duke University. We acknowledge Abhi's mentors for their invaluable guidance and mentoring during his summer research internship, which sparked Abhi's enthusiasm and passion for this specific project and field of research.
## References
Engelhard MM, Oliver JA, Henao R, et al. Identifying Smoking Environments From Images of Daily Life With Deep Learning. JAMA Netw Open. Published online August 2, 2019;2(8):e197939. doi:10.1001/jamanetworkopen.2019.7939
As students, we always try to optimize everything from our study habits to our sleep schedules. But above all, we agreed that the most important thing to optimize was also the most neglected: health. After a careful look into the current status of health-tracking apps, we noticed a few main problems.
* A surplus of health apps: With an excessive number of health apps on the market, users can feel overwhelmed when choosing the right app for their needs while weighing the trade-offs against features missed from other apps. This also leads to a variety of charges and memberships required for necessary health features.
* Lacks a call to action: While the mass amount of health data from wearables is beneficial, health apps lack actionable steps you can take to improve in areas where you are lacking.
* Unclear impact: While metrics and health data are important, health apps fail to alert users to the severity of possible problems with their health. Users can’t differentiate between a singularly bad day and a heavy risk of depression with the current status of health apps.
# 📖 - What it does
We built OptiFi to create a novel, multifaceted approach to creating an all-inclusive health app based on users' health data. We created four main features to fully encapsulate the main use cases of a variety of health apps shown in the slideshow above: Diagnosis, Nutrition Scanner, Health Overview, and Automated Scheduler. Using advanced data analytics, generative AI, and cloud computing, we can take health data, create personalized daily habits, and import them straight into their Google calendar. Check out the other features:
##### Diagnostic:
Based on the data collected by your wearable health-tracking device, we use GPT-4o to diagnose the user's three most prevalent health concerns. Specifically, with OpenAI Assistants we add the user's parsed health data to the LLM's context window. Each concern is paired with an estimated risk factor to communicate how severe the situation is if any health category is lacking.
##### Nutrition Scanner:
Our app also includes a scanner that uses Anthropic’s Claude 3.5 Sonnet via the Amazon Bedrock API to analyze the amount of calories in each picture. Using the camera app on any phone, snap a picture of any food you are about to consume and let our AI log the amount of calories in that meal. In addition, utilizing OpenVINO on Intel Tiber Developer Cloud, we provide healthy food recommendations similar to the foods you like so you can be happy and healthy!
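Under the hood, the scanner boils down to a single Bedrock call like the sketch below (the model ID and prompt wording are illustrative):

```python
# The scanner boils down to one Bedrock call; the model ID and prompt are illustrative.
import base64, json, boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def calories_from_screenshot(path):
    with open(path, "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode()
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/png", "data": img_b64}},
                {"type": "text",
                 "text": "Estimate the calories in this meal. Reply with a number and a one-line breakdown."},
            ],
        }],
    }
    resp = bedrock.invoke_model(modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
                                body=json.dumps(body))
    return json.loads(resp["body"].read())["content"][0]["text"]

print(calories_from_screenshot("lunch.png"))
```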
##### Health Overview:
The health overview page displays all of your important health data in one easily consumable format. The interactive page allows you to easily view your daily habits from hours slept to steps walked, all condensed into one cohesive page. Furthermore, you can talk to our live AI Voicebot Doctor to answer any questions or health concerns. It will listen to your symptoms, confirm your diagnosis, and provide steps for a path to recovery all in a hyperrealistic-sounding voice provided by ElevenLabs.
##### Automated Scheduler:
Recommends healthy activities to plan in your schedule based on your diagnosis results with GPT-4o. Automatically adds accepted events into your calendar with Google Calendar API. The scheduled event includes what it is, the location, a description explaining why it recommended this event based on citations from your health data, a start time and date, and an end time and date.
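A minimal sketch of that GPT-4o-to-Calendar hand-off (prompt wording, time zone, and the "primary" calendar are placeholders for our actual configuration):

```python
# GPT-4o drafts the event as JSON, then the Calendar API inserts it; prompt wording,
# time zone, and the "primary" calendar are placeholders.
import json
from openai import OpenAI
from googleapiclient.discovery import build

def schedule_recommendation(diagnosis, creds):
    raw = OpenAI().chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user",
                   "content": "Given this diagnosis, suggest one healthy activity as JSON "
                              "with keys summary, location, description, start, end "
                              "(RFC3339 datetimes): " + diagnosis}],
    ).choices[0].message.content
    event = json.loads(raw)

    body = {"summary": event["summary"],
            "location": event["location"],
            "description": event["description"],
            "start": {"dateTime": event["start"], "timeZone": "America/Los_Angeles"},
            "end": {"dateTime": event["end"], "timeZone": "America/Los_Angeles"}}
    service = build("calendar", "v3", credentials=creds)
    return service.events().insert(calendarId="primary", body=body).execute()
```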
# 🔧 - How we built it
##### Building the Calendar Scheduler:
* Google Cloud (gcloud): for Google account authentication to access user calendars
* Google Calendar API: for managing our health calendars and events
* OpenAI API (GPT-4o): for generation of event timing and details
##### Building the Nutrition Scanner:
* Anthropic’s Claude-Sonnet 3.5: for computer vision to determine calories in food screenshots
* AWS Amazon Bedrock API: for accessing and interfacing the vision LLM
* Pillow (PIL): to perform lossless compression of food PNG image inputs
* Watchdog: file system listener to access recently uploaded food screenshots to the backend
##### Collecting user fitness and health data:
* Apple HealthKit: for exporting Apple Watch and iPhone fitness and health data
* NumPy: for math and data processing
* Pandas: for data processing, organization, and storage
##### Adding personalized recommendations:
* Intel Tiber Developer Cloud: development environment and compute engine
* Intel OpenVINO: for optimizing and deploying the neural network model
* PyTorch: for building the recommendation model with neural networks and for additional optimization
##### AI Voicebot Doctor:
* Assembly AI: for real-time transcription of the conversation (speech-to-text)
* OpenAI (GPT-4o): takes the transcribed user text and generates an appropriate response
* ElevenLabs: for realistic AI audio generation (text-to-speech)
##### Building our web demos:
Gradio: an open-sourced Python package with customizable UI components to demo the many different features integrated into our application
# 📒 - The Efficacy of our Models
##### Collecting health and fitness data for our app:
By exporting data from the iPhone Health app, we can gain insights into sleep, exercise, and other activities. The Apple HealthKit data is stored in an XML file with each indicator paired with a value and datetime. So we chose to parse the data to a CSV, then aggregate the data with NumPy and Pandas to extract daily user data and data clean. Our result is tabular data that includes insights on sleep cycle durations, daily steps, heart rate variability when sleeping, basal energy burned, active energy burned, exercise minutes, and standing hours.
For aggregating sleep cycle data, we first identified “sessions”, which are periods in which an activity took place, like a sleep cycle. To do this we built an algorithm that analyzes the gaps between indicators, with large gaps (> 1 hr) distinguishing between two different sessions. With these sessions, we could aggregate based on the datetimes of the session starts and ends to compute heart rate variability and sleep cycle data (REM, Core, Deep, Awake).

The rest of our core data is combined using similar methodology, with summations over datetimes compiling averages, durations, and totals into an exported data frame for easy and comprehensive information access. This demonstrates our team’s commitment to scalability and building robust data pipelines, as our data processing techniques suit any data exported from the iPhone Health app and organize it as input for the LLM’s context window.

We chose GPT-4o as our LLM to diagnose the user’s top three most prevalent health concerns and the corresponding risk factor of each. We used an AI Assistant to parse the relevant information from the Health app data and limited the outputs to a large list of potential illnesses.
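A simplified sketch of the session-splitting step (column names and the one-hour gap threshold mirror the description above; the exact code differs):

```python
# Simplified session splitting: a gap of more than an hour between consecutive sleep
# records starts a new session. Column names mirror the description above.
import pandas as pd

def aggregate_sleep_sessions(df, gap_hours=1.0):
    """df columns: start (datetime), end (datetime), stage (REM/Core/Deep/Awake)."""
    df = df.sort_values("start").reset_index(drop=True)
    gap = df["start"] - df["end"].shift()
    df["session"] = (gap > pd.Timedelta(hours=gap_hours)).cumsum()
    df["minutes"] = (df["end"] - df["start"]).dt.total_seconds() / 60

    # One row per session: bounds plus total minutes in each sleep stage
    return df.groupby("session").apply(lambda s: pd.Series({
        "session_start": s["start"].min(),
        "session_end": s["end"].max(),
        **s.groupby("stage")["minutes"].sum().to_dict(),
    }))
```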
##### AI Voicebot Doctor
This script exemplifies an advanced, multi-service AI integration for real-time medical diagnostics using sophisticated natural language processing (NLP) and high-fidelity text-to-speech synthesis. The AI\_Assistant class initializes with secure environment configuration, instantiating AssemblyAI for real-time audio transcription, OpenAI for contextual NLP processing, and ElevenLabs for speech synthesis. It employs AssemblyAI’s RealtimeTranscriber to capture and process audio, dynamically handling transcription data through asynchronous callbacks. User inputs are appended to a persistent conversation history and processed by OpenAI’s gpt-4o model, generating diagnostic responses. These responses are then converted to speech using ElevenLabs' advanced synthesis, streamed back to the user. The script’s architecture demonstrates sophisticated concurrency and state management, ensuring robust, real-time interactive capabilities.
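Condensed, the loop looks roughly like the sketch below; the SDK call names follow the AssemblyAI real-time API and the pre-1.0 ElevenLabs SDK, so treat exact signatures as assumptions:

```python
# Condensed version of the loop; SDK call names follow the AssemblyAI real-time API and
# the pre-1.0 ElevenLabs SDK, so treat exact signatures as assumptions.
import assemblyai as aai
from openai import OpenAI
from elevenlabs import generate, stream, set_api_key

aai.settings.api_key = "ASSEMBLYAI_KEY"
set_api_key("ELEVENLABS_KEY")
client = OpenAI()
history = [{"role": "system", "content": "You are a cautious, friendly medical assistant."}]

def on_data(transcript):
    if not isinstance(transcript, aai.RealtimeFinalTranscript):
        return                                        # ignore partial transcripts
    history.append({"role": "user", "content": transcript.text})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    stream(generate(text=text, voice="Rachel", stream=True))   # speak the reply

transcriber = aai.RealtimeTranscriber(sample_rate=16_000, on_data=on_data,
                                      on_error=lambda err: print(err))
transcriber.connect()
transcriber.stream(aai.extras.MicrophoneStream(sample_rate=16_000))
```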
##### Our recommendation model:
We used the Small VM - Intel® Xeon 4th Gen ® Scalable processor compute instance in the Intel Tiber Developer Cloud as a development environment with compute resources to build our model. We collect user ratings and food data to store for further personalization. We then organize it into three tensor objects to prepare for model creation: Users, Food, and Ratings. Next, we build our recommendation model using PyTorch’s neural network library, stacking multiple embedding and linear layers and optimizing with mean squared error loss. After cross-checking with our raw user data, we tuned our hyperparameters and compiled the model with the Adam optimizer to achieve results that closely match our user’s preferences. Then, we exported our model into ONNX format for compatibility with OpenVINO. Converting our model into OpenVINO optimized our model inference, allowing for instant user rating predictions on food dishes and easy integration with our existing framework. To provide the user with the best recommendations while ensuring we keep some variability, we randomize a large sample from a pool of food dishes, taking the highest-rated dishes from that sample according to our model.
# 🚩 - Challenges we ran into
We did not have enough compute resources on our Intel Developer Cloud instance. The only instance available did not have enough memory to support fine tuning a large LLM, crashing our Jupyter notebooks upon run.
# 🏆 - Accomplishments that we're proud of
Connecting phone screenshots to the backend on our computers → implemented a file system listener to manipulate a Dropbox file path connecting to our smart devices
Automatically scheduling a Google Calendar event → used two intermediary LLMs between input and output with one formatted to give Event Name, Location, Description, Start Time and Date, and End Time and Date and the other to turn it into a JSON output. The JSON could then be reliably extracted as parameters into our Google Calendar API
Configuring cloud compute services and instances in both our local machine and virtual machine instance terminals
# 📝 - What we learned
Nicholas: "Creating animated high-fidelity mockups in Figma and leading a full software team as PM.”
Marcus: "Using cloud compute engines such as Intel Developer Cloud, AWS, and Google Cloud to bring advanced AI technology to my projects"
Steven: "Integrating file listeners to connect phone images uploaded to Dropbox with computer vision from LLMs on my local computer."
Sean: "How to data clean from XML files with Pandas for cohesive implementation with LLMs."
# ✈️ - What's next for OptiFi
We envision OptiFi’s future plans in phases. Each of these phases were inspired by leaders in the tech-startup space.
### PHASE 1: PRIORITIZE SPEED OF EXECUTION
Phase 1 involves the following goals:
* Completing a fully interactive frontend that connects with each other instead of disconnected parts
* Any investment will be spent towards recruiting a team of more engineers to speed up the production of our application
* Based on “the agility and speed of startups allow them to capitalize on new opportunities more effectively” (Sam Altman, CEO of OpenAI)
### PHASE 2: UNDERSTANDING USERS
* Mass user test our MVP through surveys, interviews, and focus groups
* Tools: Qualtrics, Nielsen, UserTesting, Hotjar, Optimizely, and the best of all – personal email/call reach out
* Based on “Hey, I’m the CEO. What do you need? That’s the most powerful thing.” (Jerry Tan, President and CEO of YCombinator)
### PHASE 3: SEEKING BRANDING MENTORSHIP
Phase 3 involves the following goals:
* Follow pioneers in becoming big in an existing market by establishing incredible branding like Dollar Shave Club and Patagonia
* Align with advocating for preventative care and early intervention
* Based on “Find mentors who will really support your company and cheerlead you on” (Caroline Winnett, SkyDeck Executive Director)
## 📋 - Evaluator's Guide to OptiFi
##### Intended for judges, however the viewing public is welcome to take a look.
Hey! We wanted to make this guide to help provide you with further information on our implementations of our AI and other programs and provide a more in-depth look to cater to both the viewing audience and evaluators like yourself.
#### Sponsor Services and Technologies We Have Used This Hackathon
##### AWS Bedrock
Diet is an important part of health! So we wanted a quick and easy way to introduce this without the user having to constantly input information.
In our project, we used AWS Bedrock for our Nutrition Scanner. We accessed Anthropic’s Claude 3.5 Sonnet, which has vision capabilities, with Amazon Bedrock’s API.
##### Gradio
* **Project Demos and Hosting:** We hosted our demo on a Gradio playground, utilizing their easy-to-use widgets for fast prototyping.
* **Frontend:** Gradio rendered all the components we needed, such as text input, buttons, images, and more.
* **Backend:** Gradio played an important role in our project in letting us connect all of our different modules. In this backend implementation, we seamlessly integrated our features, including the nutrition scanner, diagnostic, and calendar.
##### Intel Developer Cloud
Our project needed the computing power of Intel cloud computers to quickly train our custom AI model, our food recommendation system.
This leap in compute speed powered by Intel® cloud computing and OpenVINO enabled us to re-train our models with lightning speed as we worked to debug and integrate them into our backend. It also made fine-tuning our model much easier as we could tweak the hyperparameters and see their effects on model performance within seconds.
As more users join our app and scan their food with the Nutrition Scanner, the need for speed becomes increasingly important, so by running our model on Intel Developer Cloud, we are building a prototype that is scalable for a production-level app.
##### Open AI
To create calendar events and generate responses for our Voicebot, we used Open AI’s generative AI technology. We used GPT-3.5-turbo to create our Voicebot responses to the user, quickly getting information to the user. However, a more advanced model, GPT-4o, was necessary to not only follow the strict response guidelines for parsing responses but also to properly analyze user health data and metrics and determine the best solutions in the form of calendar events.
##### Assembly AI and ElevenLabs
We envision a future where it would be more convenient to find information by talking to an AI assistant versus a search function, enabling a hands-free experience.
With Assembly AI’s speech-to-text streaming technology, we could stream audio input from the user device’s microphone and send it to an LLM for prompting in real time! ElevenLabs on the other hand, we used for text-to-speech, speaking the output from the LLM prompt also in real time! Together, they craft an easy and seamless experience for the user.
##### GitHub
We used GitHub for our project by creating a GitHub repository to host our hackathon project's code. We leveraged GitHub not only for code hosting but also as a platform to collaborate, push code, and receive feedback. | ## Inspiration
\_ "According to Portio Research, the world will send 8.3 trillion SMS messages this year alone – 23 billion per day or almost 16 million per minute. According to Statistic Brain, the number of SMS messages sent monthly increased by more than 7,700% over the last decade" \_
The inspiration for TextNet came from the crazy mobile internet data rates in Canada and throughout North America. The idea was to provide anyone with an SMS enabled device to access the internet!
## What it does
TextNet exposes the following internet primitives through basic SMS:
1. Business and restaurant recommendations
2. Language translation
3. Directions between locations by bike/walking/transit/driving
4. Image content recognition
5. Search queries.
6. News update
TextNet can be used by anyone with an SMS enabled mobile device. Are you \_ roaming \_ in a country without access to internet on your device? Are you tired of paying the steep mobile data prices? Are you living in an area with poor or no data connection? Have you gone over your monthly data allowance? TextNet is for you!
## How we built it
TextNet is built using the Stdlib API with node.js and a number of third party APIs. The Stdlib endpoints connect with Twilio's SMS messaging service, allowing two way SMS communication with any mobile device. When a user sends an SMS message to our TextNet number, their request is matched with the most relevant internet primitive supported, parsed for important details, and then routed to an API. These API's include Google Cloud Vision, Yelp Business Search, Google Translate, Google Directions, and Wolfram Alpha. Once data is received from the appropriate API, the data is formatted and sent back to the user over SMS. This data flow provides a form of text-only internet access to offline devices.
## Challenges we ran into
Challenge #1 - We arrived at HackPrinceton at 1am Saturday morning.
Challenge #2 - Stable SMS data flow between multiple mobile phones and internet API endpoints.
Challenge #3 - Google .json credential files working with our Stdlib environment
Challenge #4 - Sleep deprivation ft. car and desks
Challenge #5 - Stdlib error logging
## Accomplishments that we're proud of
We managed to build a basic offline portal to the internet in a weekend. TextNet has real world applications and is built with exciting technology. We integrated an image content recognition machine learning algorithm which given an image over SMS, will return a description of the contents! Using the Yelp Business Search API, we built a recommendation service that can find all of the best Starbucks near you!
Two of our planned team members from Queen's University couldn't make it to the hackathon, yet we still managed to complete our project and we are very proud of the results (only two of us) :)
## What we learned
We learned how to use Stdlib to build a server-less API platform. We learned how to interface SMS with the internet. We learned *all* about async / await and modern Javascript practices. We learned about recommendation, translate, maps, search queries, and image content analysis APIs.
## What's next for TextNet
Finish integrate of P2P payment using stripe
## What's next for HackPrinceton
HackPrinceton was awesome! Next year, it would be great if the team could arrange better sleeping accommodations. The therapy dogs were amazing. Thanks for the experience! | losing |
## Inspiration
As victims, bystanders and perpetrators of cyberbullying, we felt it was necessary to focus our efforts this weekend on combating an issue that impacts 1 in 5 Canadian teens. As technology continues to advance, children are being exposed to vulgarities online at a much younger age than before.
## What it does
**Prof**(ani)**ty** searches through any webpage a child may access, censors black-listed words and replaces them with an appropriate emoji. This easy to install chrome extension is accessible for all institutional settings or even applicable home devices.
## How we built it
We built a Google chrome extension using JavaScript (JQuery), HTML, and CSS. We also used regular expressions to detect and replace profanities on webpages. The UI was developed with Sketch.
## Challenges we ran into
Every member of our team was a first-time hacker, with little web development experience. We learned how to use JavaScript and Sketch on the fly. We’re incredibly grateful for the mentors who supported us and guided us while we developed these new skills (shout out to Kush from Hootsuite)!
## Accomplishments that we're proud of
Learning how to make beautiful webpages.
Parsing specific keywords from HTML elements.
Learning how to use JavaScript, HTML, CSS and Sketch for the first time.
## What we learned
The manifest.json file is not to be messed with.
## What's next for PROFTY
Expand the size of our black-list.
Increase robustness so it parses pop-up messages as well, such as live-stream comments. | ## What it does
"ImpromPPTX" uses your computer microphone to listen while you talk. Based on what you're speaking about, it generates content to appear on your screen in a presentation in real time. It can retrieve images and graphs, as well as making relevant titles, and summarizing your words into bullet points.
## How We built it
Our project is comprised of many interconnected components, which we detail below:
#### Formatting Engine
To know how to adjust the slide content when a new bullet point or image needs to be added, we had to build a formatting engine. This engine uses flex-boxes to distribute space between text and images, and has custom Javascript to resize images based on aspect ratio and fit, and to switch between the multiple slide types (Title slide, Image only, Text only, Image and Text, Big Number) when required.
#### Voice-to-speech
We use Google’s Text To Speech API to process audio on the microphone of the laptop. Mobile Phones currently do not support the continuous audio implementation of the spec, so we process audio on the presenter’s laptop instead. The Text To Speech is captured whenever a user holds down their clicker button, and when they let go the aggregated text is sent to the server over websockets to be processed.
#### Topic Analysis
Fundamentally we needed a way to determine whether a given sentence included a request to an image or not. So we gathered a repository of sample sentences from BBC news articles for “no” examples, and manually curated a list of “yes” examples. We then used Facebook’s Deep Learning text classificiation library, FastText, to train a custom NN that could perform text classification.
#### Image Scraping
Once we have a sentence that the NN classifies as a request for an image, such as “and here you can see a picture of a golden retriever”, we use part of speech tagging and some tree theory rules to extract the subject, “golden retriever”, and scrape Bing for pictures of the golden animal. These image urls are then sent over websockets to be rendered on screen.
#### Graph Generation
Once the backend detects that the user specifically wants a graph which demonstrates their point, we employ matplotlib code to programmatically generate graphs that align with the user’s expectations. These graphs are then added to the presentation in real-time.
#### Sentence Segmentation
When we receive text back from the google text to speech api, it doesn’t naturally add periods when we pause in our speech. This can give more conventional NLP analysis (like part-of-speech analysis), some trouble because the text is grammatically incorrect. We use a sequence to sequence transformer architecture, *seq2seq*, and transfer learned a new head that was capable of classifying the borders between sentences. This was then able to add punctuation back into the text before the rest of the processing pipeline.
#### Text Title-ification
Using Part-of-speech analysis, we determine which parts of a sentence (or sentences) would best serve as a title to a new slide. We do this by searching through sentence dependency trees to find short sub-phrases (1-5 words optimally) which contain important words and verbs. If the user is signalling the clicker that it needs a new slide, this function is run on their text until a suitable sub-phrase is found. When it is, a new slide is created using that sub-phrase as a title.
#### Text Summarization
When the user is talking “normally,” and not signalling for a new slide, image, or graph, we attempt to summarize their speech into bullet points which can be displayed on screen. This summarization is performed using custom Part-of-speech analysis, which starts at verbs with many dependencies and works its way outward in the dependency tree, pruning branches of the sentence that are superfluous.
#### Mobile Clicker
Since it is really convenient to have a clicker device that you can use while moving around during your presentation, we decided to integrate it into your mobile device. After logging into the website on your phone, we send you to a clicker page that communicates with the server when you click the “New Slide” or “New Element” buttons. Pressing and holding these buttons activates the microphone on your laptop and begins to analyze the text on the server and sends the information back in real-time. This real-time communication is accomplished using WebSockets.
#### Internal Socket Communication
In addition to the websockets portion of our project, we had to use internal socket communications to do the actual text analysis. Unfortunately, the machine learning prediction could not be run within the web app itself, so we had to put it into its own process and thread and send the information over regular sockets so that the website would work. When the server receives a relevant websockets message, it creates a connection to our socket server running the machine learning model and sends information about what the user has been saying to the model. Once it receives the details back from the model, it broadcasts the new elements that need to be added to the slides and the front-end JavaScript adds the content to the slides.
## Challenges We ran into
* Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening sentences into bullet points. We ended up having to develop a custom pipeline for bullet-point generation based on Part-of-speech and dependency analysis.
* The Web Speech API is not supported across all browsers, and even though it is "supported" on Android, Android devices are incapable of continuous streaming. Because of this, we had to move the recording segment of our code from the phone to the laptop.
## Accomplishments that we're proud of
* Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
* Working on an unsolved machine learning problem (sentence simplification)
* Connecting a mobile device to the laptop browser’s mic using WebSockets
* Real-time text analysis to determine new elements
## What's next for ImpromPPTX
* Predict what the user intends to say next
* Scraping Primary sources to automatically add citations and definitions.
* Improving text summarization with word reordering and synonym analysis. | ## Inspiration
Creating user interfaces doesn't come easy to everyone. We wanted to make prototyping designs quick and easy. Optical character recognition was one of the major features we wanted to implement into our program.
## What it does
* Handwritten or computer generated text is analyzed
* Convert analysis to a jsx file that is rendered
* Redo analysis whenever there is a change
## How we built it
* EasyOCR reads handwritten and computer generated text
* Our backend uses a Node server to receive JSONs
* The backend creates jsx files
## Challenges we ran into
* When trying to implement the OCR functions of several cloud services (Azure, GCP, AWS), we ran into issues with credentials
* Recognizing hand written text (balancing latency vs accuracy)
* Sending images via POST
* Integrating all the components
## Accomplishments that we're proud of
* Program can recognize text
* The backend server is capable of writing a jsx file
## What we learned
* OCR training
* Writing to a file with escape characters using Node
## What's next for Gen Dev
* Send images directly via POST
* Improve file management system
* Using vector calculations to analyze whiteboard movement | winning |
[Example brainrot output](https://youtube.com/shorts/vmTmjiyBTBU)
[Demo](https://youtu.be/W5LNiKc7FB4)
## Inspiration
Graphic design is a skill like any other, that if honed, allows us to communicate and express ourselves in marvellous ways. To do so, it's massively helpful to receive specific feedback on your designs. Thanks to recent advances in multi-modal models, such as GPT-4o, even computers can provide meaningful design feedback.
What if we put a sassy spin on it?
## What it does
Adobe Brainrot is an unofficial add-on for Adobe Express that analyzes your design, creates a meme making fun of it, and generates a TikTok-subway-surfers-brainrot-style video with a Gordon Ramsey-esque personality roasting your design. (Watch the attached video for an example!)
## How we built it
The core of this app is an add-on for Adobe Express. It talks to a server (which we operate locally) that handles AI, meme-generation, and video-generation.
Here's a deeper breakdown:
1. The add-on screenshots the Adobe Express design and passes it to a custom-prompted session of GPT-4o using the ChatGPT API. It then receives the top design issue & location of it (if applicable).
2. It picks a random meme format, and asks ChatGPT for the top & bottom text of said meme in relation to the design flaw (e.g. "Too many colours"). Using the memegen.link API, it then generates the meme on-the-fly and insert it into the add-on UI.
3. Using yt-dlp, it downloads a "brainrot" background clip (e.g. Subway Surfers gameplay). It then generates a ~30-second roast using ChatGPT based on the design flaw & creates a voiceover using it, using OpenAI Text-to-Speech. Finally, it uses FFmpeg to overlay the user's design on top of the "brainrot" clip, add the voiceover in the background, and output a video file to the user's computer.
## Challenges we ran into
We were fairly unfamiliar with the Adobe Express SDK, so it was a learning curve getting the hang of it! It was especially hard due to having two SDKs (UI & Sandbox). Thankfully, it makes use of existing standards like JSX.
In addition, we researched prompt-engineering techniques to ensure that our ChatGPT API calls would return responses in expected formats, to avoid unexpected failure.
There were quite a few challenges generating the video. We referenced a project that did something similar, but we had to rewrite most of it to get it working. We had to use a different yt-dl core due to extraction issues. FFmpeg would often fail even with no changes to the code or parameters.
## Accomplishments that we're proud of
* It generates brainrot (for better or worse) videos
* Getting FFmpeg to work (mostly)
* The AI outputs feedback
* Working out the SDK
## What we learned
* FFmpeg is very fickle
## What's next for Adobe Brainrot
We'd like to flesh out the UI further so that it more *proactively* provides design feedback, to become a genuinely helpful (and humorous) buddy during the design process. In addition, adding subtitles matching the text-to-speech would perfect the video. | ## Inspiration
memes have become a cultural phenomenon and a huge recreation for many young adults including ourselves. for this hackathon, we decided to connect the sociability aspect of the popular site "twitter", and combine it with a methodology of visualizing the activity of memes in various neighborhoods. we hope that through this application, we can create a multicultural collection of memes, and expose these memes, trending from popular cities to a widespread community of memers.
## What it does
NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich visual of a map with Pepe the frog markers that mark different cities on the map that has dank memes. Pepe markers are sized by their popularity score which is composed of retweets, likes, and replies. Clicking on Pepe markers will bring up an accordion that will display the top 5 memes in that city, pictures of each meme, and information about that meme. We also have a chatbot that is able to reply to simple queries about memes like "memes in Vancouver."
## How we built it
We wanted to base our tech stack with the tools that the sponsors provided. This started from the bottom with CockroachDB as the database that stored all the data about memes that our twitter web crawler scrapes. Our web crawler was in written in python which was Google gave an advanced level talk about. Our backend server was in Node.js which CockroachDB provided a wrapper for hosted on Azure. Calling the backend APIs was a vanilla javascript application which uses mapbox for the Maps API. Alongside the data visualization on the maps, we also have a chatbot application using Microsoft's Bot Framework.
## Challenges we ran into
We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, finding how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing twitter as our resource and tried to come up with the hypest hashtags for the project.
A big problem we ran into was that our database completely crashed an hour before the project was due. We had to redeploy our Azure VM and database from scratch.
## Accomplishments that we're proud of
We were proud that we were able to use as many of the sponsor tools as possible instead of the tools that we were comfortable with. We really enjoy the learning experience and that is the biggest accomplishment. Bringing all the pieces together and having a cohesive working application was another accomplishment. It required lots of technical skills, communication, and teamwork and we are proud of what came up.
## What we learned
We learned a lot about different tools and APIs that are available from the sponsors as well as gotten first hand mentoring with working with them. It's been a great technical learning experience. Asides from technical learning, we also learned a lot of communication skills and time boxing. The largest part of our success relied on that we all were working on parallel tasks that did not block one another, and ended up coming together for integration.
## What's next for NWMemes2017Web
We really want to work on improving interactivity for our users. For example, we could have chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics. It would also be great to grab memes from different websites to make sure we cover as much of the online meme ecosystem. | ## Inspiration
There's something about brief glints in the past that just stop you in your tracks: you dip down, pick up an old DVD of a movie while you're packing, and you're suddenly brought back to the innocent and carefree joy of when you were a kid. It's like comfort food.
So why not leverage this to make money? The ethos of nostalgic elements from everyone's favourite childhood relics turns heads. Nostalgic feelings have been repeatedly found in studies to increase consumer willingness to spend money, boosting brand exposure, conversion, and profit.
## What it does
Large Language Marketing (LLM) is a SaaS built for businesses looking to revamp their digital presence through "throwback"-themed product advertisements.
Tinder x Mean Girls? The Barbie Movie? Adobe x Bob Ross? Apple x Sesame Street? That could be your brand, too. Here's how:
1. You input a product description and target demographic to begin a profile
2. LLM uses the data with the Co:here API to generate a throwback theme and corresponding image descriptions of marketing posts
3. OpenAI prompt engineering generates a more detailed image generation prompt featuring motifs and composition elements
4. DALL-E 3 is fed the finalized image generation prompt and marketing campaign to generate a series of visual social media advertisements
5. The Co:here API generates captions for each advertisement
6. You're taken to a simplistic interface where you can directly view, edit, generate new components for, and publish each social media post, all in one!
7. You publish directly to your business's social media accounts to kick off a new campaign 🥳
## How we built it
* **Frontend**: React, TypeScript, Vite
* **Backend**: Python, Flask, PostgreSQL
* **APIs/services**: OpenAI, DALL-E 3, Co:here, Instagram Graph API
* **Design**: Figma
## Challenges we ran into
* **Prompt engineering**: tuning prompts to get our desired outputs was very, very difficult, where fixing one issue would open up another in a fine game of balance to maximize utility
* **CORS hell**: needing to serve externally-sourced images back and forth between frontend and backend meant fighting a battle with the browser -- we ended up writing a proxy
* **API integration**: with a lot of technologies being incorporated over our frontend, backend, database, data pipeline, and AI services, massive overhead was introduced into getting everything set up and running on everyone's devices -- npm versions, virtual environments, PostgreSQL, the Instagram Graph API (*especially*)...
* **Rate-limiting**: the number of calls we wanted to make versus the number of calls we were allowed was a small tragedy
## Accomplishments that we're proud of
We're really, really proud of integrating a lot of different technologies together in a fully functioning, cohesive manner! This project involved a genuinely technology-rich stack that allowed each one of us to pick up entirely new skills in web app development.
## What we learned
Our team was uniquely well-balanced in that every one of us ended up being able to partake in everything, especially things we hadn't done before, including:
1. DALL-E
2. OpenAI API
3. Co:here API
4. Integrating AI data pipelines into a web app
5. Using PostgreSQL with Flask
6. For our non-frontend-enthusiasts, atomic design and state-heavy UI creation :)
7. Auth0
## What's next for Large Language Marketing
* Optimizing the runtime of image/prompt generation
* Text-to-video output
* Abstraction allowing any user log in to make Instagram Posts
* More social media integration (YouTube, LinkedIn, Twitter, and WeChat support)
* AI-generated timelines for long-lasting campaigns
* AI-based partnership/collaboration suggestions and contact-finding
* UX revamp for collaboration
* Option to add original content alongside AI-generated content in our interface | partial |
## Inspiration
To set our goal, we were grandly inspired by the Swiss system, which has proven to be one of the most functional democracy in the world. In Switzerland, there is a free mobile application, VoteInfo, which is managed by a governmental institution, but is not linked to any political groups, where infos about votes and democratic events happening at a national, regional and communal scale are explained, vulgarized and promoted. The goal is to provide the population a deep understanding of the current political discussions and therefore to imply everyone in the Swiss political life, where every citizen can vote approximately 3 times a year on national referendum to decide the future of their country. We also thought it would be interesting to expand that idea to enable elected representative, elected staff and media to have a better sense of the needs and desires of a certain population.
Here is a [link](https://www.bfs.admin.ch/bfs/fr/home/statistiques/politique/votations/voteinfo.html) to the swiss application website (in french, german and italian only).
## What it does
We developed a mobile application where anyone over 18 can have an account. After creating their account and entering their information (which will NOT be sold for profit), they will have the ability to navigate through many "causes", on different scales. For example, a McGill student could join the "McGill" group, and see many ideas proposed by member of elected staff, or even by regular students. They could vote for or against those, or they could choose to give visibility to an idea that they believe is important. The elected staff of McGill could then use the data from the votes, plotted in the app in the form of histograms, to see how the McGill community feels about many different subjects. One could also join the "Montreal Nightlife" group. For instance, a non-profit organization with governmental partnerships like [mtl2424](https://www.mtl2424.ca/), which is currently investigating the possibility of extending the alcohol permit fixed to 3 a.m., could therefore get a good understanding of how the Montreal population feels about this idea, by looking on the different opinion depending on the voters' age, their neighbourhood, or even both!
## How we built it
We used Figma for the graphic interface, and Python (using Spyder IDE) for the data analysis and the graph plotting ,with Matplotlib and Numpy libraries.
## Challenges we ran into
We tried to build a dynamic interface where one could easily be able to set graphs and histograms to certain conditions, i.e. age, gender, occupation... However, the implementation of such deep features happened to be too complicated and time-consuming for our level of understanding of software design, therefore, we abandoned that aspect.
Also, as neither of us had any real background in software design, building the app interface was very challenging.
## Accomplishments that we're proud of
We are really proud of the idea in itself, as we really and honestly believe that, especially in small communities like McGill, it could have a real positive impact. We put a lot of effort into building a realistic and useful tool that we, as students and members of different communities, would really like to have access to.
## What we learned
The thing we mainly learned was how to create a mobile app interface. As stipulated before, it was a real challenge, as neither of us had any experience in software development, so we had to learn while creating our interface.
As we were limited in time and knowledge, we also learned how to understand the priorities of our projects and to focus on them in the first place, and only afterward try to add some features.
## What's next for Kairos
The first step would be to implement our application's back-end and link it to the front-end.
In the future, we would really like to create a nice, dynamic and clean UI, to be attractive and easy to use for anyone, of any age, as the main problem with implementing technological tools for democracy is that the seniors are often under-represented.
We would also like to implement a lot of features, like a special registration menu for organizations to create groups, dynamic maps, discussion channels etc...
Probably the largest challenge in the upcoming implementations will be to find a good way to ensure each user has only one account, to prevent pollution in the sampling. | ## Inspiration
Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future.
## What it does
A finance program portal that provides interactive and accessible financial literacies to customers in marginalized communities improving their financially intelligence, discipline and overall, the Canadian economy 🪙. Along with these daily tips, users have access to brief video explanations of each daily tip with the ability to view them in multiple languages and subtitles. There will be short, quick easy plans to inform users with limited knowledge on the Canadian financial system or existing programs for marginalized communities. Marginalized groups can earn benefits for the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support in their financial needs!
## How we built it
The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content.
## Challenges we ran into
Designing and developing a product that can simplify the large topics under financial literacy, tools and benefits for users and customers while making it easy to digest and understand such information | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy” error that prevented expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos of them online but they were very difficult to follow especially for a beginner | Small and merge errors prevented the app from running properly which delayed our demo completion.
## Accomplishments that we're proud of
**Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life!
## What we learned
**Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy” error that prevented expo from being installed on VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied it in the development of the UI/UX interface.
## What's next for Wealthguide
Newsletter Subscription 📰: Up to date information on current and today’s finance news. Opportunity for Wealthsimple product promotion as well as partnering with Wealthsimple companies, sponsors and organizations. Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and have access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open up savings account, apply for mortgages, learn how to budget and more. Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks including when they open during the year and the application deadlines. E.g OSAP Applications. | Welcome to our demo video for our hack “Retro Readers”. This is a game created by our two man team including myself Shakir Alam and my friend Jacob Cardoso. We are both heading into our senior year at Dr. Frank J. Hayden Secondary School and enjoyed participating in our first hackathon ever, Hack The 6ix a tremendous amount.
We spent over a week brainstorming ideas for our first hackathon project and because we are both very comfortable with the idea of making, programming and designing with pygame, we decided to take it to the next level using modules that work with APIs and complex arrays.
Retro Readers was inspired by a social media post pertaining to another text font that was proven to help mitigate reading errors made by dyslexic readers. Jacob found OpenDyslexic which is an open-source text font that does exactly that.
The game consists of two overall gamemodes. These gamemodes aim towards an age group of mainly children and young children with dyslexia who are aiming to become better readers. We know that reading books is becoming less popular among the younger generation and so we decided to incentivize readers by providing them with a satisfying retro-style arcade reading game.
The first gamemode is a read and research style gamemode where the reader or player can press a key on their keyboard which leads to a python module calling a database of semi-sorted words from Wordnik API. The game then displays the word back to the reader and reads it aloud using a TTS module.
As for the second gamemode, we decided to incorporate a point system. Using the points the players can purchase unique customizables and visual modifications such as characters and backgrounds. This provides a little dopamine rush for the players for participating in a tougher gamemode.
The gamemode itself is a spelling type game where a random word is selected using the same python modules and API. Then a TTS module reads the selected word out loud for readers. The reader then must correctly spell the word to attain 5 points without seeing the word.
The task we found the most challenging was working with APIs as a lot of them were not deemed fit for our game. We had to scratch a few APIs off the list for incompatibility reasons. A few of these APIs include: Oxford Dictionary, WordsAPI and more.
Overall we found the game to be challenging in all the right places and we are highly satisfied with our final product. As for the future, we’d like to implement more reliable APIs and as for future hackathons (this being our first) we’d like to spend more time researching viable APIs for our project. And as far as business practicality goes, we see it as feasible to sell our game at a low price, including ads and/or pad cosmetics. We’d like to give a special shoutout to our friend Simon Orr allowing us to use 2 original music pieces for our game. Thank you for your time and thank you for this amazing opportunity. | partial |
# Healthy.ly
An android app that can show if a food is allergic to you just by clicking its picture. It can likewise demonstrate it's health benefits, ingredients, and recipes.
## Inspiration
We are a group of students from India. The food provided here is completely new to us and we don't know the ingredients. One of our teammates is dangerously allergic to seafood and he has to take extra precautions while eating at new places. So we wanted to make an app that can detect if the given food is allergic or not using computer vision.
We also got inspiration from the HBO show **Silicon Valley**, where a guy tries to make a **Shazam for Food** app.
Over time our idea grew bigger and we added nutritional value and recipes to it.
## What it does
This is an android app that uses computer vision to identify food items in the picture and shows you if you are allergic to it by comparing the ingredients to your restrictions provided earlier. It can also give the nutritional values and recipes for that food item.
## How we built it
We developed a deep learning model using **Tensorflow** that can classify between 101 different food items. We trained it using the **Google Compute Engine** with 2vCPUs, 7.5 GB RAM and 2 Tesla K80 GPU. This model can classify 101 food items with over 70% accuracy.
From the predicted food item, we were able to get its ingredients and recipes from an API from rapidAPI called "Recipe Puppy". We cross validate the ingredients with the items that the user is allergic to and tell them if it's safe to consume.
We made a native **Android Application** that lets you take an image and uploads it to **Google Storage**. The python backend runs on **Google App Engine**. The web app takes the image from google storage and using **Tensorflow Serving** finds the class of the given image(food name). It uses its name to get its ingredients, nutritional values, and recipes and return these values to the android app via **Firebase**.
The Android app then takes these values and displays them to the user. Since most of the heavy lifting happens in the cloud, our app is very light(7MB) and is **computationally efficient**. It does not need a lot of resources to run. It can even run in a cheap and underperforming android mobile without crashing.
## Challenges we ran into
>
> 1. We had trouble converting our Tensorflow model to tflite(tflite\_converter could not convert a multi\_gpu\_model to tflite). So we ended up hosting it on the cloud which made the app lighter and computationally efficient.
> 2. We are all new to using google cloud. So it took us a long time to even figure out the basic stuff. Thanks to the GCP team, we were able to get our app up and running.
> 3. We couldn't use the Google App Engine to support TensorFlow(we could not get it working). So we have hosted our web app on Google Compute Engine
> 4. We did not get a UI/UX designer or a frontend developer in our team. So we had to learn basic frontend and design our app.
> 5. We could only get around 70% validation accuracy due to the higher computation needs and less available time.
> 6. We were using an API from rapidAPI. But since yesterday, they stopped support for that API and it wasn't working. So we had to make our own database to run our app.
> 7. Couldn't use AutoML for vision classification, because our dataset was too large to be uploaded.
>
>
>
## What we learned
Before coming to this hack, we had no idea about using cloud infrastructure like Google Cloud Platform. In this hack, we learned a lot about using Google Cloud Platform and understand its benefits. We are pretty comfortable using it now.
Since we didn't have a frontend developer we had to learn that to make our app.
Making this project gave us a lot of exposure to **Deep Learning**, **Computer Vision**, **Android App development** and **Google Cloud Platform**.
## What's next for Healthy.ly
1. We are planning to integrate **Google Fit** API with this so that we can get a comparison between the number of calories consumed and the number of calories burnt to give better insight to the user. We couldn't do it now due to time constraints.
2. We are planning to integrate **Augmented Reality** with this app to make it predict in real-time and look better.
3. We have to improve the **User Interface** and **User Experience** of the app.
4. Spend more time training the model and **increase the accuracy**.
5. Increase the **number of labels** of the food items. | ## Inspiration
As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad.
## What It Does
After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make.
## How We Built It
On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data.
On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase.
## Challenges We Ran Into
Some of the challenges we ran into included deploying our Flask to Google App Engine, and styling in react. We found that it was not possible to write into Google App Engine storage, instead we had to write into Firestore and have that interact with Google App Engine.
On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation. | ## Inspiration
Whenever friends and I want to hang out, we always ask about our schedule. Trying to find a free spot could take a long time when I am with a group of friends. I wanted to create an app that simplifies that process.
## What it does
The website will ask the duration of the event and ask to fill out each person's schedule. The app will quickly find the best period for everyone.
## How we built it
We built it using HTML/CSS and Javascript. We also used a calendar plugin.
## Challenges we ran into
The integration of the calendar plugin is hard and the algorithm that finds the best time slot is also hard to implement.
## Accomplishments that we're proud of
Although the site is not finished, we have the overall website structure done.
## What we learned
It was hard to work on this project in a team of 2.
## What's next for MeetUp
We will try to finish this project after pennapps. | partial |
## What it does
Using Blender's API and a whole lot of math, we've created a service that allows you to customize and perfectly fit 3D models to your unique dimensions. No more painstaking adjustments and wasted 3D prints necessary, simply select your print, enter your sizes, and download your fitted prop within a few fast seconds. We take in specific wrist, forearm, and length measurements and dynamically resize preset .OBJ files without any unsavory warping. Once the transformations are complete, we export it right back to you ready to send off to the printers.
## Inspiration
There's nothing cooler than seeing your favorite iconic characters coming to life, and we wanted to help bring that magic to 3D printing enthusiasts! Just starting off as a beginner with 3D modeling can be a daunting task -- trust us, most of the team are in the same boat with you. By building up these tools and automation scripts we hope to pave a smoother road for people interested in innovating their hobbies and getting out cool customized prints out fast.
## Next Steps
With a little bit of preprocessing, we can let any 3D modeler upload their models to our web service and have them dynamically fitted in no time! We hope to grow our collection of available models and make 3D printing much easier and more accessible for everyone. As it grows we hope to make it a common tool in every 3D artists arsenal.
*Special shoutout to Pepsi for the Dew* | ## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations and getting the blocks to move and generate took some trouble. Cutting back on how many details we add into the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldly gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to Unity is used to render 3D environments, how assets and textures are edited in Blender, what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as your switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibility are endless | ## Inspiration
Like most university students, we understand and experience the turbulence that comes with relocating every 4 months due to coop sequences while keeping personal spendings to a minimum. It is essential for students to be able to obtain an affordable and effortless way to release their stresses and have hobbies during these pressing times of student life.
## What it does
AirDrum uses computer vision to mirror the standard drum set without the need for heavy equipment, high costs and is accessible in any environment.
## How we built it
We used python (NumPy, OpenCV, MatPlotLib, PyGame, WinSound) to build the entire project.
## Challenges we ran into
The documentation for OpenCV is less robust than what we wanted, which lead to a lot of deep dives on Stack Overflow.
## Accomplishments that we're proud of
We're really happy that we managed to actually get something done.
## What we learned
It was our first time ever trying to do anything with OpenCV, so we learned a lot about the library, and how it works in conjunction with NumPy.
## What's next for AirDrums
The next step for AirDrums is to add more functionality, allowing the user to have more freedom with choosing which drums parts they would like and to be able to save beats created by the user. We also envision a guitar hero type mode where users could try to play the drum part of a song or two. We could also expand to different instruments. | winning |
## Inspiration
When the first experimental COVID-19 vaccine became available in China, hundreds of people started queuing outside hospitals, waiting to get that vaccine. Imagine this on a planetary scale, when everybody around the world has to be vaccinated. There's a big chance that, while queuing, people spread the virus to those around them or get infected themselves, because they cannot maintain social distancing at all. We sure don't want that to happen.
The other big issue is the flood of conspiracy theories, rumors, stigma, and other forms of disinformation spreading across social media about COVID-19 and its vaccines. This misinformation frustrates users, leaving many asking: which claims are right, and which are wrong?
## What it does
Immunize is a mobile app that can save your life and your time. The goal is to make mass-vaccination distribution more effective, faster, and less crowded. With the app, you can book your vaccine appointment based on your own preferences: users can easily choose the nearest hospital and schedule an appointment that fits their availability.

In addition, our research found that most COVID-19 vaccines require 2 doses given about 3 weeks apart to reach high effectiveness, and there is a real chance people forget to return for the follow-up shot. We minimize that risk: the app automatically schedules the patient's second vaccination, so there is less likelihood of user error, and the reminder system (a notification feature) reminds them on their phone on the day of each appointment.
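To make the follow-up rule concrete, here is a minimal sketch of how the second-dose date can be derived from the first dose and the vaccine type. The intervals are the ones we mention under "What we learned" below; the function and vaccine names are illustrative, not our exact implementation.

```python
from datetime import date, timedelta

# Days between first and second dose, per vaccine (values taken from the
# intervals mentioned in "What we learned" below; illustrative only).
DOSE_INTERVAL_DAYS = {
    "pfizer": 21,
    "moderna": 28,
}

def schedule_second_dose(first_dose: date, vaccine: str) -> date:
    """Return the date the app would auto-book for the follow-up shot."""
    return first_dose + timedelta(days=DOSE_INTERVAL_DAYS[vaccine.lower()])

# Example: a first Pfizer dose on Jan 4 books the second dose for Jan 25.
print(schedule_second_dose(date(2021, 1, 4), "pfizer"))  # 2021-01-25
```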
## How we built it
We built the prototype using Flutter as our mobile client. We integrated Radar.io for hospital search, GCP for facial recognition, and Twilio for SMS reminders. The mobile client connects to Firebase: Firebase Auth for authentication, Firebase Storage for avatars, and Firestore for user metadata. A second backend is hosted with DataStax.
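As a rough sketch of the reminder flow, this is the shape of the Twilio call the backend makes on the day of a booked appointment; the credentials, phone numbers, and message text are placeholders rather than values from our codebase.

```python
from twilio.rest import Client

# Placeholder credentials: in the real app these come from environment
# variables on the backend host, never from the mobile client.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def send_appointment_reminder(patient_phone: str, hospital: str, when: str) -> None:
    """Send an SMS on the day of a booked vaccination appointment."""
    client.messages.create(
        to=patient_phone,
        from_="+15551234567",  # Twilio number (placeholder)
        body=f"Immunize reminder: your vaccination at {hospital} is today at {when}.",
    )
```

Keeping this call on the server side means the Twilio credentials never ship inside the Flutter app.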
## Challenges we ran into
Working with an international team was very challenging with team members 12+ hours apart. All of us were learning something new, whether it was Flutter, facial recognition, or experimenting with new APIs. Flutter's APIs were very experimental: the camera API had to be rolled back two major versions (released in less than 2 months) to find a working version compatible with online tutorials.
## Accomplishments that we're proud of
The features:
1. **QR Code Feature** for storing all personal data + health condition, so user don't need to wait for a long queue of administrative things.
2. **Digital Registration Form** checking if user is qualified of COVID-19 vaccine and which vaccine suits best.
3. **Facial Recognition** due to potential fraud in people who are not eligible for vaccination attempting to get limited supplies of vaccine, we implemented facial recognition to confirm the user for the appointment is the same one that showed up.
4. **Scheduling Feature** based on date, vaccine availability, and the nearby hospital.
5. **Appointment History** to track all data of patients, this data can be used for better efficiency of mass-vaccination in the future.
6. **Immunize Passport** for vaccine & get access to public spaces. This will create domino effect for people to get vaccine as soon as possible so that they can get access.
7. **Notifications** to remind patients of every appointment and any important news via SMS and push notifications
8. **Vaccine Articles** - to ensure the user can get the accurate information from a verified source.
9. **Emergency Button** - In case there are side effects after vaccination.
10. **Closest Hospitals/Pharmacies** - based on a user's location, users can get details about the closest hospitals through Radar.io Search API.
## What we learned
We researched and learned a lot about COVID-19 vaccines: some coronavirus vaccines may work better in certain populations than others; one vaccine may work better in the elderly than in younger populations, while another may work better in children. Research suggests the coronavirus vaccine will likely require 2 shots to be effective, taken 21 days apart for Pfizer's vaccine and 28 days apart for Moderna's.
## What's next for Immunize
The final step is to propose this solution to our government. We really hope this app can be implemented in real life and be a way for people to get the COVID-19 vaccine effectively, efficiently, and safely. We also plan to polish our mobile app and build out an informational web app plus a dedicated app for hospital staff to scan QR codes and verify patient faces (currently they have to use the same app as the client). | ## Inspiration
No one likes waiting around too much, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience.
## What it does
We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, telling their symptoms and related details and circumstances in a conversational style. They also have the option of just writing these into a standard form if that is easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate' or 'critical' and is added to the virtual queue. This way hospitals can take care of their patients more efficiently through a fair ranking system (which also accounts for time of arrival), and patients are more satisfied because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which relieves stress, and they are shown a progress bar indicating whether a doctor has reviewed their case, insurance has been contacted, or any status has changed. Patients are also provided with tips and educational content regarding their symptoms and pains, countering the abundant stream of misinformation that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative, so let's try to change that!
## How we built it
We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for voice assistance and the Azure Text Analytics API for text processing. The designs were built in Figma.
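For reference, here is a stripped-down sketch of how a symptom description can be scored with the Azure Text Analytics SDK before triage rules map the result to 'mild', 'moderate' or 'critical'. The endpoint, key, and field choices are placeholders rather than our production configuration.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-key>"  # placeholder

client = TextAnalyticsClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY))

def score_symptoms(description: str) -> dict:
    """Return overall sentiment and confidence scores for one patient description."""
    doc = client.analyze_sentiment(documents=[description])[0]
    return {
        "sentiment": doc.sentiment,                  # "positive" / "neutral" / "negative"
        "negative": doc.confidence_scores.negative,  # used downstream as a rough urgency signal
        "positive": doc.confidence_scores.positive,
    }

# Example: score_symptoms("Sharp chest pain for two hours, shortness of breath.")
```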
## Challenges we ran into
Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, so it was worth chewing on it longer. We have had a little experience with voice assistants in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were also thinking of implementing multiple input languages so that less fluent English speakers could use the app as well.
## Accomplishments that we're proud of
Treehacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem and we enjoyed creating something in the domain.
## What we learned
We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, for some of us this was the first hackathon attended so working together with new people with different experiences definitely proved to be exciting and valuable. | ## Inspiration
On campus, students are always overly stressed. To promote the well-being of students, and of people in general, we created this little app to help everyone de-stress!
## What it does
This is a game that lets people create their own "homework man" figure and animate it by pressing the "do homework" button.
## How we built it
We built it in Swift 4 using touch handling and UIButton.
## Challenges we ran into
Originally, we were working on designing another app. However, after we realized the complexity of building that app, we started to work on this game. The challenges we ran into when designing this game were mostly getting used to Swift, a language we were not familiar with.
## Accomplishments that we're proud of
We built a game... with emojis in it!
## What we learned
The usage of Swift and Xcode.
## What's next for The Homework Man
We are planning to add more features, like dancing or playing, to animate the homework man in various ways. | partial
## Inspiration
The bitalino system is a great new advance in affordable, do-it-yourself biosignals technology. Using this technology, we want to make an application that provides an educational tool for exploring how the human body works.
## What it does
Currently, it uses the ServerBIT architecture to get ECG signals from a connected bitalino and draw them in an HTML page in real time using JavaScript. In this hack, the smoothie.js library was used instead of jQuery flot to provide smoother plotting.
## How I built it
I built the Lubdub Club using Hugo Silva's ServerBIT architecture. From that, the ECG data was drawn using smoothie.js. A lot of work was put in to make a good and accurate ECG display, which is why smoothie was used instead of flot. Other work involved adjusting for the correct ECG units, and optimizing the scroll speed and scale of the plot.
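Part of "adjusting for the correct ECG units" is converting the raw ADC samples that ServerBIT forwards into millivolts. A sketch of that conversion is below; the constants (10-bit ADC, 3.3 V supply, sensor gain of 1100) are typical bitalino ECG datasheet values and should be treated as assumptions rather than values verified against this exact board.

```python
def adc_to_millivolts(adc_value, n_bits=10, vcc=3.3, gain=1100):
    """Convert a raw bitalino ECG ADC sample to millivolts.

    Constants follow the commonly cited bitalino ECG transfer function:
    ECG(V) = ((ADC / 2**n) - 0.5) * VCC / gain
    """
    volts = ((adc_value / float(2 ** n_bits)) - 0.5) * vcc / gain
    return volts * 1000.0

# A mid-scale sample (512 on a 10-bit ADC) maps to roughly 0 mV.
print(adc_to_millivolts(512))
```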
## Challenges I ran into
The biggest challenge we ran into was getting the Python API to work. There are a lot more dependencies for it than is written in the documentation, but that may be because I was using a regular Python installation on Windows. I installed WinPython to make sure most of the math libraries (pylab, numpy) were installed, and installed everything else afterwards. In addition, there is a problem with the server where the TCP listener will not close properly, which caused a lot of trouble in testing.
Apart from that, getting a good ECG signal was very challenging, as testing was done using electrode leads on the hands, which admittedly gives a signal that is quite susceptible to interference (both from surrounding electronics and from movement). Although we never got an ECG signal close to the ones in the demos online, we did end up with a signal that was definitely an ECG and had recognizable PQRS phases.
## Accomplishments that I'm proud of
I am proud that we were able to get the Python API working with the bitalino, as it seems that many others at Hack Western 2 were unable to. In addition, I am happy with the way the smoothie.js plot came out, and I think it is a great improvement over the original flot plot.
Although we did not have time to set up a demo site, I am quite proud of the name our team came up with (lubdub.club).
## What I learned
I learned a lot of Javascript, jQuery, Python, and getting ECG signals from less than optimal electrode configurations.
## What's next for Lubdub Club
What's next is to implement some form of wave-signal analysis to clean up the ECG waveform, and to perform calculations to find values like heart rate. Also, I would like to make the Python API / ServerBIT easier to use (maybe rewrite from scratch or at least collect all dependencies in an installer). Other things include adding more features to the HTML site, like changing colour to match heartrate, music, and more educational content. I would like to set up lubdub.club, and maybe find a way to have the data from the bitalino sent to the cloud and then displayed on the webpage. | ## Inspiration
I was inspired to make this device while sitting in physics class. I really felt compelled to make something using what I learned inside the classroom and apply my education to something practical. Growing up I always remembered playing with magnetic kits and loved the feeling of repulsion between magnets.
## What it does
There is a base layer of small magnets taped together so that each north pole faces up. Hall effect sensors measure the variations in the magnetic field created by a magnet attached to the user's finger. This allows the device to track the finger and determine how the user is interacting with the upward-pointing field.
## How I built it
It is built using the Intel Edison. Each hall effect sensor is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate, which determines where the user's finger is. From there the sensor data is sent over a serial port to a Processing program on the computer that demonstrates that it works by mapping the motion of the object.
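The demo program is written in Processing, but the serial side is easy to illustrate in Python as well. The port name, baud rate, and comma-separated frame format below are assumptions for illustration only, not the exact protocol of the prototype.

```python
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 9600  # placeholder port/baud for the Edison link
GRID_COLUMNS = 4                   # assumed number of hall effect sensors in a row

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # Assumed frame format: "0,1,0,0" -> one on/off flag per hall effect sensor
        states = [int(v) for v in line.split(",") if v in ("0", "1")]
        if len(states) != GRID_COLUMNS or 1 not in states:
            continue
        # Estimate finger position as the average index of triggered sensors
        triggered = [i for i, s in enumerate(states) if s]
        position = sum(triggered) / len(triggered)
        print(f"finger near column {position:.1f}")
```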
## Challenges I ran into
I faced many challenges, two of which dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which means they are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would have made the device more accurate. The other hardware issue was having a lot of very small, worn-out magnets; I had to find a way to tape and hold them all together because they sit in an unstable configuration when arranged to create an almost uniform magnetic field on the base. Another problem was dealing with the Edison: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up working with serial ports after initially trying to map the data into a Unity game.
## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device to prove the concept of a more interactive environment that gives the user a sense of touch, rather than devices like the Kinect and Leap Motion that track your motion in thin air without any real interaction. Some areas where this concept can be useful are learning environments and physical therapy, helping people learn to do things again after a tragedy, since it is always better to learn with a sense of touch.
## What I learned
I had a grand vision of this project from thinking about it beforehand, and I thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at the machine level.
## What's next for Tactile Leap Motion
Creating a better prototype with better hardware (stronger magnets and more accurate sensors). | ## Summary
OrganSafe is a revolutionary web application that tackles the growing health & security problem of black marketing of donated organs. The verification of organ recipients leverages the Ethereum Blockchain to provide critical security and prevent improper allocation for such a pivotal resource.
## Inspiration
The [World Health Organization (WHO)](https://slate.com/business/2010/12/can-economists-make-the-system-for-organ-transplants-more-humane-and-efficient.html) estimates that one in every five kidneys transplanted per year comes from the black market. There is a significant demand for solving this problem which impacts thousands of people every year who are struggling to find a donor for a significantly need transplant. Modern [research](https://ieeexplore.ieee.org/document/8974526) has shown that blockchain validation of organ donation transactions can help reduce this problem and authenticate transactions to ensure that donated organs go to the right place!
## What it does
OrganSafe facilitates organ donations with authentication via the Ethereum Blockchain. Users can start by registering on OrganSafe with their health information and desired donation, and then the application's algorithms will automatically match users based on qualifying priority for available donations. Hospitals can easily track donations of organs and easily record when recipients receive their donation.
## How we built it
This application was built using React.js for the frontend of the platform, Python Flask for the backend and API endpoints, and Solidity + Web3.js for the Ethereum Blockchain.
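As a sketch of the Flask side, one endpoint for recording a donation could look roughly like the following; the route, fields, and in-memory list are simplified placeholders for a demo (real persistence and the Solidity verification step are not shown).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
donations = []  # demo-only in-memory store; a real deployment needs a database

@app.route("/api/donations", methods=["POST"])
def register_donation():
    """Record a donor's offered organ so the matching logic can rank recipients."""
    payload = request.get_json(force=True)
    record = {
        "donor_id": payload.get("donor_id"),
        "organ": payload.get("organ"),
        "blood_type": payload.get("blood_type"),
        "verified_on_chain": False,  # flipped after the Ethereum verification step
    }
    donations.append(record)
    return jsonify(record), 201

if __name__ == "__main__":
    app.run(debug=True)
```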
## Challenges we ran into
The biggest challenge we ran into was connecting the different components of our project. We had three major components (frontend, backend, and the blockchain) that were developed separately and needed to be integrated together. This turned out to be the biggest hurdle we needed to figure out. Dealing with the API endpoints and the Solidity integration was one of the problems we had to leave for future development. One challenge we did solve was the difficulty of backend development and setting up API endpoints: without persistent data storage in the backend, we implemented basic storage using localStorage in the browser to keep the user experience working. This allowed us to implement a majority of our features as a temporary fix for our demonstration. Some other challenges we faced included figuring out certain syntactical elements of the new technologies we dealt with (such as using Hooks and state in React.js). It was a great learning opportunity for our group, as immersing ourselves in the project allowed us to become more familiar with each technology!
## Accomplishments that we're proud of
One notable accomplishment is that every member of our group interfaced with new technology that we had little to no experience with! Whether it was learning how to use React.js (such as learning about React fragments) or working with Web3.0 technology such as the Ethereum Blockchain (using MetaMask and Solidity), each member worked on something completely new! Although there were many components we simply did not have the time to complete due to the scope of TreeHacks, we were still proud of being able to put together a minimum viable product in the end!
## What we learned
* Fullstack Web Development (with React.js frontend development and Python Flask backend development)
* Web3.0 & Security (with Solidity & Ethereum Blockchain)
## What's next for OrganSafe
After TreeHacks, OrganSafe will first look to tackle some of the potential areas that we did not get to finish during the hackathon. Our first step would be to finish development of the full-stack web application we intended by fleshing out our backend and moving forward from there. Persistent user data in a database would also allow users and donors to continue to use the site even after an individual session. Furthermore, scaling both the site and the blockchain for the application would allow greater usage by a larger audience, allowing more recipients to be matched with donors. | partial
Recommends exercises in real-time as the user is exercising based on their heart rate. | ## Inspiration
We often do homework late when there are no TA hours scheduled, so we were thinking that having an easy way to reach out to tutors to get homework help would be very efficient and appreciated.
## What it does
It authenticates users with GAuth, asks them whether they want to be a tutor or a student along with other relevant info, and then connects students waiting in the queue to a tutor who can help, so they can discuss problems over real-time chat.
## How we built it
We used TypeScript, Firebase for authentication, Firestore as the database, React as our frontend library, Socket.io for real-time chat, and, tentatively, peer.js for real-time video streams.
## Challenges we ran into
We tried to implement video streams with peer.js in addition to chat, but we had difficulties matching students and tutors through peer.js. Queuing and matching students with tutors was also challenging because there were multiple client requests that had to be synced with a single data structure on the server.
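The actual implementation is TypeScript on top of Socket.io, but the synchronization idea is easy to sketch in Python: keep both queues behind a single lock so concurrent join requests cannot double-match anyone. Everything below is an illustration of the idea, not our server code.

```python
import threading
from collections import deque

_lock = threading.Lock()
_waiting_students = deque()
_available_tutors = deque()

def join(role: str, user_id: str):
    """Register a connecting client; return a (student, tutor) pair when a match exists."""
    with _lock:  # serialize all client requests against the shared queues
        queue = _waiting_students if role == "student" else _available_tutors
        queue.append(user_id)
        if _waiting_students and _available_tutors:
            return _waiting_students.popleft(), _available_tutors.popleft()
    return None  # caller keeps the client in a waiting state

# Example: join("tutor", "t1") -> None; join("student", "s1") -> ("s1", "t1")
```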
## Accomplishments that we're proud of
The entire thing: authentication, learning a lot more about Socket.io, and using WebRTC for the live video stream.
## What we learned
We had some experience with React and web development before, but not a ton, so getting to build a functional homework-help platform in less than 36 hours was very fulfilling and helped us accelerate our learning.
## What's next for HWHelp
We think that the idea is a neat concept, and we hope to fully integrate video streams, as well as a more complex and individualized matchmaking process. | ## Inspiration
Remember the thrill of watching mom haggle like a pro at the market? Those nostalgic days might seem long gone, but here's the twist: we can help you carry on the generational legacy. Introducing our game-changing app – it's not just a translator, it's your haggling sidekick. This app does more than break down language barriers; it helps you secure deals. You'll learn the tricks to avoid the tourist trap and get the local price, every time.
We’re not just reminiscing about the good old days; we’re rebooting them for the modern shopper. Get ready to haggle, bargain, and save like never before!
## What it does
Back to the Market is a mobile app specifically crafted to enhance communication and negotiation for users in foreign markets. The app shines in its ability to analyze quoted prices using local market data, cultural norms, and user-set preferences to suggest effective counteroffers. This empowers users to engage in informed and culturally appropriate negotiations without being overcharged. Additionally, Back to the Market offers a customization feature, allowing users to tailor their spending limits. The user interface is simple and cute, making it accessible for a broad range of users regardless of their technical experience. Its integration of these diverse features positions Back to the Market not just as a tool for financial negotiation, but as a comprehensive companion for a more equitable, enjoyable, and efficient international shopping experience.
## How we built it
Back to the Market was built by separating the front end from the back end. The front end consists of React Native, Expo Go, and JavaScript for the mobile app. The back end is written in Python and connects the front end to the Cohere API, which generates responses and determines appropriate steps during the negotiation process.
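A condensed sketch of what the negotiation call can look like is below. It assumes the Cohere Python SDK's generate endpoint; the model name, prompt wording, and parameters are placeholders rather than the exact ones in our backend.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

def suggest_counteroffer(item: str, quoted_price: float, budget: float) -> str:
    """Ask the model for a polite counteroffer below the quoted price."""
    prompt = (
        f"A street vendor quoted {quoted_price} for a {item}. "
        f"The shopper's budget is {budget}. "
        "Suggest a short, polite counteroffer and one haggling tip."
    )
    response = co.generate(
        model="command",   # assumed model name
        prompt=prompt,
        max_tokens=80,
        temperature=0.4,
    )
    return response.generations[0].text.strip()

# Example: suggest_counteroffer("silk scarf", 30.0, 18.0)
```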
## Challenges we ran into
During the development of Back to the Market, we faced two primary challenges. First was our lack of experience with React Native, a key technology for our app's development. While our team was composed of great coders, none of us had ever used React prior to the competition. This meant we had to quickly learn and master it from the ground up, a task that was both challenging and educational. Second, we grappled with front-end design. Ensuring the app was not only functional but also visually appealing and user-friendly required us to delve into UI/UX design principles, an area we had little experience with. Luckily, through the help of the organizers, we were able to adapt quickly with few problems. These challenges, while demanding, were crucial in enhancing our skills and shaping the app into the efficient and engaging version it is today.
## Accomplishments that we're proud of
We centered the button on our first try 😎
In our 36-hour journey with Back to the Market, there are several accomplishments that stand out. Firstly, successfully integrating Cohere for both the translation and bargaining aspects of the app was a significant achievement. This integration not only provided robust functionality but also ensured a seamless user experience, which was central to our vision.
Secondly, it was amazing to see how quickly we went from zero React Native experience to making an entire app with it in less than 24 hours. We were able to create an app that is both aesthetically pleasing and highly functional. This rapid skill acquisition and application in a short time frame was a testament to our team's dedication and learning agility.
Finally, we take great pride in our presentation and slides. We managed to craft an engaging and dynamic presentation that effectively communicated the essence of Back to the Market. Our ability to convey complex technical details in an accessible and entertaining manner was crucial in capturing the interest and understanding of our audience.
## What we learned
Our journey with this project was immensely educational. We learned the value of adaptability through mastering React Native, a technology new to us all, emphasizing the importance of embracing and quickly learning new tools. Furthermore, delving into the complexities of cross-cultural communication for our translation and bargaining features, we gained insights into the subtleties of language and cultural nuances in commerce. Our foray into front-end design taught us about the critical role of user experience and interface, highlighting that an app's success lies not just in its functionality but also in its usability and appeal. Finally, creating a product is the easy part; making people want it is where a lot of people fall short. Crafting an engaging presentation refined our storytelling and communication skills.
## What's next for Back to the Market
Looking ahead, Back to the Market is poised for many exciting developments. Our immediate focus is on enhancing the app's functionality and user experience. This includes integrating translation features to allow users to stay within the app throughout their transaction.
In parallel, we're exploring the incorporation of AI-driven personalization features. This would allow Back to the Market to learn from individual user preferences and negotiation styles, offering more tailored suggestions and improving the overall user experience. The idea can be expanded by creating a feature for users to rate suggested responses. Use these ratings to refine the response generation system by integrating the top-rated answers into the Cohere model with a RAG approach. This will help the system learn from the most effective responses, improving the quality of future answers.
Another key area of development is utilising computer vision so that users can simply take a picture of the item they are interested in purchasing instead of having to input an item name, which is especially handy in areas where you don’t know exactly what you’re buying (ex. cool souvenir).
Furthermore, we know that everyone loves a bit of competition, especially in the world of bargaining where you want the best deal possible. That’s why we plan on incorporating a leaderboard for those who save the most money via our negotiation tactics. | losing |
## Inspiration
A teammate's near-death experience on a bicycle triggered a months-long quest for knowledge regarding the policies and civic infrastructure that continuously neglect the needs of cyclists -- and consequently our environment and society -- while catering to the hundreds of thousands of cars congesting and overpowering our city streets.
## What it does
3feet is a complete toolkit for the urban commuter, a set of hardware complete with a voice-interactive mobile app companion.
3feet's vision tackles the issue of cyclist safety, twofold:
One, by ensuring reliable, realtime, crowdsourced incident reporting without any additional work on the part of the cyclists. Namely, the iOS app automatically records the date, time, and coordinates of any 3feet violations detected by the sensor the moment they occur as a "data snapshot" that will then be made open for review on our public database.
Two, by providing immediate emergency assistance to the cyclist in question: buzzing handlebars for close calls and varying light patterns to keep drivers mindful from behind.
## How we built it
We pinned the necessary components to an Arduino breadboard:
1. three blinking lights indicating increasing levels of "awareness"
2. an ultrasonic sensor to read the distance of approaching vehicles
3. a haptic buzzer to notify the user that a vehicle is approaching (in the final product, this results in shaking handlebars)
Using esri, we rendered a dynamic visualization of the user's current geolocation as well as locations of previous 3feet violations.
Using Houndify, we created an integrated hands-free voice interface to have the sensor detection and alerts turn on or off based on such custom commands as "on", "turn on", "turn on 3feet", "keep me safe", "off", or our most controversial: "feelin' dangerous".
Using Firebase Firestore, we were able to spin up a lightweight, real-time-updating database accessible across platforms, devs, and languages, while also having the ability to reduce large datasets into chunks for public consumption.
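To illustrate the shape of a violation "data snapshot", here is how an equivalent write looks from Python with the firebase-admin SDK; the collection name, fields, and service-account path are placeholders, and the real writes happen from the iOS client.

```python
import firebase_admin
from firebase_admin import credentials, firestore

cred = credentials.Certificate("service-account.json")  # placeholder path
firebase_admin.initialize_app(cred)
db = firestore.client()

def report_violation(lat: float, lng: float, distance_cm: int):
    """Store one 3feet violation snapshot for the public incident database."""
    db.collection("violations").add({
        "lat": lat,
        "lng": lng,
        "distance_cm": distance_cm,
        "reported_at": firestore.SERVER_TIMESTAMP,
    })

# Example: report_violation(43.0096, -81.2737, 58)
```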
## Challenges we ran into
* Software: weaving different APIs into one project
* Android dev's computer not being able to run Android Studio
* As hardware/software integration noobs, we had a lot of fun theorizing and designing a multi-threaded system that could efficiently handle the Bluetooth signals sent out from the Arduino
* Hardware: broken sensors, and 3.3 V LED lights getting burnt out by our 5.5 V supply when tested without resistors
## Accomplishments that we're proud of
* tackling problems with patience consistently
* taking advantage of the resources available to us at TreeHacks (i.e. mentors; shoutout to Kevin, Nick Swift, Nigel, Scott, James, and Katy from SoundHound)
## What we learned
Those of us who hadn't even seen an Arduino board up close before were taught the code by more experienced members, and vice versa, with mobile app developers teaching syntax, basic networking paradigms, and coding environments to software newbies! We scratched each other's backs like nobody's business.
## What's next for 3feet
* Android app
* sensors for both the front *and* the back of the bicycle for more accurate readings
* a damage-control approach to bike wrecks involving flashing lights and alarms that draw attention to the cyclist (who may be unconscious or somehow incapacitated) to avoid being run over by cars
* taking advantage of esri's route drawing API to give commuters navigation tailored to their needs (maybe avoiding areas that have clearly higher concentrations of incidents, while also keeping in mind hills, construction, or terrain that could be just as much of a hazard)
* having a second page on the app to display an easily-readable mobile display of the public incident database
* fine-tuning the map's pin retrieval to allow for more scalability with an increasingly large dataset
* bike-to-street communication, particularly with traffic lights | ## Inspiration
Parker was riding his bike down Commonwealth Avenue on his way to work this summer when a car pulled out of nowhere and hit his front tire. Luckily, he wasn't hurt, but he saw his life flash before his eyes in that moment and it really left an impression on him. (His bike made it out okay as well, other than a bit of tire misalignment!)
As bikes become more and more ubiquitous as a mode of transportation in big cities with the growth of rental services and bike lanes, bike safety is more important than ever.
## What it does
We designed *Bikeable*, a Boston directions app for bicyclists that uses machine learning to generate directions based on prior bike accidents in police reports. You simply enter your origin and destination, and Bikeable creates a path for you to follow that balances efficiency with safety. While it's comforting to know that you're on a safe path, we also incorporated heat maps showing where hotspots of bicycle theft and accidents occur, so you can be better informed in the future!
## How we built it
Bikeable is built on Google Cloud Platform's App Engine (GAE) and utilizes the best features of three mapping APIs (Google Maps, HERE.com, and Leaflet) to deliver directions in one seamless experience. Being built on GAE, Flask served as a solid bridge between a Python backend with machine learning algorithms and an HTML/JS frontend. Domain.com allowed us to get a cool domain name for our site, and GCP allowed us to connect many small features quickly as well as host our database.
## Challenges we ran into
We ran into several challenges.
Right off the bat we were incredibly productive and got a snappy UI up and running immediately through the accessible Google Maps API. We were off to an incredible start, but soon realized that the only effective way to account for safety while maintaining maximum efficiency in travel time would be to highlight accident clusters and steer waypoints away from them. We realized that the Google Maps API would not be ideal for the ML in the back end, simply because our avoidance algorithm did not work well with how that API is set up. We then decided on the HERE Maps API because of its unique ability to avoid areas in its routing algorithm. Once the front end for HERE Maps was developed, we attempted to deploy to Flask, only to find that jQuery somehow hindered our ability to view the map on our website. After hours of working through App Engine and Flask, we found a third mapping JS library called Leaflet that had many of the visual features we wanted. We ended up combining the best components of all three APIs to develop Bikeable over the past two days.
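A reduced sketch of the clustering idea: run DBSCAN over accident coordinates from the police-report data and hand the hotspot centroids to the routing API as areas to avoid. The eps and min_samples values here are illustrative, not our tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def accident_hotspots(coords, eps=0.0008, min_samples=5):
    """Cluster (lat, lng) accident points; return one centroid per hotspot.

    eps is expressed in degrees here for simplicity (roughly 80 m); real tuning differs.
    """
    coords = np.asarray(coords)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(coords).labels_
    centroids = []
    for label in set(labels) - {-1}:     # -1 is DBSCAN's noise label
        centroids.append(coords[labels == label].mean(axis=0))
    return centroids  # fed to the routing layer as areas to steer waypoints away from

# Example: accident_hotspots([(42.3505, -71.1054), (42.3506, -71.1055), (42.3610, -71.0890)])
```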
The second large challenge we ran into was the Cross-Origin Resource Sharing (CORS) errors that seemed to never end. In the final stretch of the hackathon we were getting ready to link our front and back end with JSON files, but we kept getting blocked by CORS errors. After several hours of troubleshooting we realized our mistake of crossing between localhost and the public domain, and we kept deploying to test rather than running locally through Flask.
## Accomplishments that we're proud of
We are incredibly proud of two things in particular.
Primarily, all of us worked on technologies and languages we had never touched before. This was an insanely productive hackathon, in that we honestly got to experience things that we never would have the confidence to even consider if we were not in such an environment. We're proud that we all stepped out of our comfort zone and developed something worthy of a pin on github.
We also were pretty impressed with what we were able to accomplish in the 36 hours. We set up multiple front ends, developed a full ML model complete with incredible data visualizations, and hosted on multiple different services. We also did not all know each other and the team chemistry that we had off the bat was astounding given that fact!
## What we learned
We learned BigQuery, NumPy, Scikit-learn, Google App Engine, Firebase, and Flask.
## What's next for Bikeable
Stay tuned! Or invest in us that works too :)
**Features that are to be implemented shortly and fairly easily given the current framework:**
* User reported incidents - like Waze for safe biking!
* Bike parking recommendations based on theft reports
* Large altitude increase avoidance to balance comfort with safety and efficiency. | ## Inspiration
**Introducing Ghostwriter: Your silent partner in progress.** Ever been in a class where resources are so hard to come by, you find yourself practically living at office hours? As teaching assistants on **increasingly short-handed course staffs**, it can be **difficult to keep up with student demands while making long-lasting improvements** to your favorite courses.
Imagine effortlessly improving your course materials as you interact with students during office hours. **Ghostwriter listens intelligently to these conversations**, capturing valuable insights and automatically updating your notes and class documentation. No more tedious post-session revisions or forgotten improvement ideas. Instead, you can really **focus on helping your students in the moment**.
Ghostwriter is your silent partner in educational excellence, turning every interaction into an opportunity for long-term improvement. It's the invisible presence that delivers visible results, making continuous refinement effortless and impactful. With Ghostwriter, you're not just tutoring or bug-bashing - **you're evolving your content with every conversation**.
## What it does
Ghostwriter hosts your class resources, and supports searching across them in many ways (by metadata, semantically by content). It allows adding, deleting, and rendering markdown notes. However, Ghostwriter's core feature is in its recording capabilities.
The record button starts a writing session. As you speak, Ghostwriter will transcribe and digest your speech, decide whether it's worth adding to your notes, and, if so, navigate to the appropriate document and insert the new content with line-by-line granularity, integrating seamlessly with your current formatting.
## How we built it
We used Reflex to build the app full-stack in Python and to support the various note-management features, including adding, deleting, selecting, and rendering notes. As notes are added to the application database, they are also summarized and then embedded by Gemini 1.5 Flash-8B before being added to ChromaDB under a shared key. Our semantic search is likewise powered by Gemini embeddings and ChromaDB.
The recording feature is powered by Deepgram's threaded live-audio transcription API. The text is processed live by Gemini, and chunks are sent to ChromaDB for queries. Distance metrics are used as thresholds to decide whether to skip the chunk, add to an existing note, or create a new note. In the latter two cases, llama3-70b-8192 is run through Groq to write into our existing documents. It does this through RAG over our docs, as well as some prompt engineering. To make insertion granular, we add unique tokens to identify candidate insertion points throughout the original text; we then structurally generate the desired markdown along with the desired point of insertion, and render the changes live to the user.
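The note-matching step reduces to an embed-and-query against ChromaDB, roughly as sketched below; the collection name, toy embedding, and distance threshold are illustrative stand-ins for the Gemini embeddings and tuned cutoffs.

```python
import chromadb

client = chromadb.Client()
notes = client.get_or_create_collection("course_notes")

# Adding a summarized note (the embedding would come from the Gemini embedding model):
notes.add(ids=["note-1"],
          documents=["Binary search runs in O(log n) on sorted arrays..."],
          embeddings=[[0.01] * 8])  # toy 8-dimensional embedding for illustration

def best_matching_note(chunk_embedding, threshold=0.35):
    """Return the id of the closest existing note, or None if a new note should be created."""
    result = notes.query(query_embeddings=[chunk_embedding], n_results=1)
    if not result["ids"][0]:
        return None
    distance = result["distances"][0][0]
    return result["ids"][0][0] if distance <= threshold else None

# Example: best_matching_note([0.01] * 8)
```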
## Challenges we ran into
Using Deepgram and live generation required a lot of tasks to run concurrently without blocking UI interactivity. We had some trouble reconciling the requirements posed by Deepgram and Reflex on how these were handled, which required us to redesign the backend a few times.
Generation was also rather difficult, as text would come out with irrelevant vestiges and explanations. It took a lot of trial and error through prompting and other tweaks to the generation calls and structure to get our required outputs.
## Accomplishments that we're proud of
* Our whole live note-generation pipeline!
* From audio transcription process to the granular retrieval-augmented structured generation process.
* Spinning up a full-stack application using Reflex (especially the frontend, as two backend engineers)
* We were also able to set up a few tools to push dummy data into various points of our process, which made debugging much, much easier.
## What's next for GhostWriter
Ghostwriter can work on the student-side as well, allowing a voice-interface to improving your own class notes, perhaps as a companion during lecture. We find Ghostwriter's note identification and improvement process very useful ourselves.
On the teaching end, we hope GhostWriter will continue to grow into a well-rounded platform for educators on all ends. We envision that office hour questions and engagement going through our platform can be aggregated to improve course planning to better fit students' needs.
Ghostwriter's potential doesn't stop at education. In the software world, where companies like AWS and Databricks struggle with complex documentation and enormous solutions teams, Ghostwriter shines. It transforms customer support calls into documentation gold, organizing and structuring information seamlessly. This means fewer repetitive calls and more self-sufficient users! | partial |
## Inspiration
Building domain-specific automated systems in the real world is painstaking, requiring massive codebases for exception handling and robust testing of behavior for all kinds of contingencies — automated packaging, drone delivery, home surveillance, and search and rescue are all enormously complex and result in highly specialized industries and products that take thousands of engineering hours to prototype.
But it doesn’t have to be this way! Large language models have made groundbreaking strides towards helping out with the similarly tedious task of writing, giving novelists, marketing agents, and researchers alike a tool to iterate quickly and produce high-quality writing exhibiting both semantic precision and masterful high-level planning.
Let’s bring this into the real world. What if asking “find the child in the blue shirt and lead them to the dinner table” was all it took to create that domain-specific application?
Taking the first steps towards generally intelligent embodied AI, DroneFormer turns high-level natural language commands into long scripts of low-level drone control code leveraging advances in language and visual modeling. The interface is the simplest imaginable, yet the applications and end result can adapt to the most complex real-world tasks.
## What it does
DroneFormer offers a no-code way to program a drone via generative AI. You can easily control your drone with simple written high-level instructions. Simply type up the command you want and the drone will execute it — flying in spirals, exploring caves to locate lost people with depth-first search, or even capturing stunning aerial footage to map out terrain. The drone receives a natural language instruction from the user (e.g. "find my keys") and explores the room until it finds the object.
## How we built it
Our prototype compiles natural language instructions down into atomic actions for the DJI Tello via in-context learning using the OpenAI GPT-3 API. These actions include primitive actions from the DJI SDK (e.g. forward, back, clockwise turn) as well as custom object detection and visual language model query actions we built, leveraging zero-shot image and multimodal models such as YOLOv5 and image processing frameworks such as OpenCV. We include a demo of searching for and locating objects using the onboard Tello camera and object detection.
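To make the compile target concrete, here is a toy version of the loop: prompt the legacy GPT-3 completion API with the primitive vocabulary and then execute each emitted primitive. It assumes the djitellopy wrapper for the Tello primitives and the pre-1.0 openai SDK; both are illustrative stand-ins for the actual executor.

```python
import openai
from djitellopy import Tello

PRIMITIVES = "takeoff, land, forward <cm>, back <cm>, cw <deg>, ccw <deg>"

def compile_instruction(instruction: str) -> list:
    """Ask the LLM to translate an instruction into one primitive per line."""
    prompt = (f"Translate the command into drone primitives ({PRIMITIVES}), "
              f"one per line.\nCommand: {instruction}\nPrimitives:")
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=200)
    return [ln.strip() for ln in resp.choices[0].text.splitlines() if ln.strip()]

def execute(plan: list) -> None:
    """Run each primitive on the drone in order."""
    drone = Tello()
    drone.connect()
    for step in plan:
        op, *args = step.split()
        if op == "takeoff": drone.takeoff()
        elif op == "land": drone.land()
        elif op == "forward": drone.move_forward(int(args[0]))
        elif op == "back": drone.move_back(int(args[0]))
        elif op == "cw": drone.rotate_clockwise(int(args[0]))
        elif op == "ccw": drone.rotate_counter_clockwise(int(args[0]))

# execute(compile_instruction("fly a small square and land"))
```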
## Challenges we ran into
One significant challenge was deciding on an ML model that best fit our need for performant real-time object detection. We experimented with state-of-the-art models such as BLIP and GLIP, which were either too slow at inference time or not performing as expected in terms of accuracy. Ultimately, we settled on YOLOv5 as having a good balance between latency and the ability to collect knowledge about an image. We were also limited by the lack of powerful onboard compute, which meant the drone needed to connect to an external laptop (which had to serve both the drone and internet networks at once, resolved by using Ethernet and wireless at the same time) that in turn connects to the internet for OpenAI API inference.
## Accomplishments that we're proud of
We were able to create an MVP! DroneFormer successfully generates complex 20+ line instructions to detect and navigate to arbitrary objects given a simple natural language instruction to do so (e.g. “explore, find the bottle, and land next to it”).
## What we learned
Hardware is a game changer! Embodied ML is a completely different beast than even a simulated reinforcement learning environment, and working with noisy control systems adds many sources of error on top of long-term language planning. To deal with this, we iterated much more frequently and added functionality for new corner cases and ambiguity over the course of the project, rewriting as necessary. Additionally, connectivity issues arose often due to the three-tiered nature of the system spanning the drone, laptop, and cloud backends.
## What's next for DroneFormer
We were constrained by the physical confines of the TreeHacks drone room and obstacles available in the vicinity, as well as the short battery life of the Tello drone. Expanding to larger and more complex hardware, environments, and tasks, we expect the DroneFormer framework to handily adapt, given a bit of prompt engineering, to emergent sophisticated behaviors such as:
* Watching over a child wandering around the house and reporting any unexpected behavior according to a fine-tuned classifier
* Finding that red jacket that you could swear was on the hanger but which has suddenly disappeared
* Checking in “person” if the small coffee shop down the street is still open despite the out-of-date Google Maps schedule
* Sending you a picture of the grocery list you forgot at home
DroneFormer will be a new type of personal assistant — one that always has your back and can bring the magic of complex language model planning to the embodied real world. We’re excited!
<https://medium.com/@sidhantbendre22/hacking-the-moonshot-stanford-treehacks-2023-9166865d4899> | ## Inspiration
## What it does
It searches for a water bottle!
## How we built it
We built it using a Roomba, a Raspberry Pi with a Pi Camera, Python, and Microsoft's Custom Vision.
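A minimal sketch of the recognition call from the Pi: post a captured frame to the Custom Vision prediction endpoint and check for a confident bottle tag. The endpoint URL is whatever the Custom Vision portal provides, and the tag name and threshold here are assumptions for illustration.

```python
import requests

PREDICTION_URL = "https://<copied-from-the-custom-vision-portal>"  # placeholder
PREDICTION_KEY = "<prediction-key>"                                # placeholder

def sees_water_bottle(image_path: str, threshold: float = 0.7) -> bool:
    """Send one camera frame to Custom Vision and check the bottle tag's probability."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            PREDICTION_URL,
            headers={"Prediction-Key": PREDICTION_KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    resp.raise_for_status()
    return any(p["tagName"] == "water bottle" and p["probability"] >= threshold
               for p in resp.json().get("predictions", []))

# The Roomba's drive loop keeps turning/advancing until sees_water_bottle(...) returns True.
```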
## Challenges we ran into
Attaining wireless communication between the Pi and all of our PCs was tricky. Implementing Microsoft's API was troublesome as well: a few hours before judging we exceeded the call volume for the API and were forced to create a new account to use the service. This meant re-training the AI, altering code, and losing access to our higher-accuracy recognition iterations during final tests.
## Accomplishments that we're proud of
Actually getting the wireless networking to work consistently. Surpassing challenges like framework indecision between IBM and Microsoft. We are proud of making things that lacked proper documentation, like the Pi camera, work.
## What we learned
How to use GitHub, train an AI object recognition system using Microsoft APIs, and write clean drivers.
## What's next for Cueball's New Pet
Learn to recognize other objects. | ## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
-The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
-The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
-The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
-The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a mesh-networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to a centralized server that hosts the apps' data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold fake pop quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do mesh networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture, we gained a massive amount of ease of use and stability, which let us implement more features.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps such as: a visual-programming tool that supports simple block-programming, a text editor, a messaging system, and a more-indepth UI for everything. | partial |
## Inspiration
Our team became curious about the concept of sentiment analysis after stumbling across it in the HuggingFace API documentation. After discussing the various real life problems to which sentiment analysis could be applied, we decided that we would build a general platform in which a user could freely determine the concepts that they want to explore.
## What it does
A user is presented with a simple screen featuring a search bar and a drop down of time frames. When they type in a word, the backend runs a combination of Selenium and chromedriver to web scrape the top tweets for that keyword within the timeframe specified by the user. The text from these tweets is then passed through a sentiment analyzer, in which the HuggingFace API scores each text on a "positivity" scale from 1-100. The user is then presented with a "positivity ratio", which gives them an idea of how that specific keyword is perceived by Twitter.
## How we built it
We built the backend with Python 3. Most of that file, however, is the logic for Selenium and its chromedriver. Through manual instructions, Selenium logs into Twitter with a burner Twitter account and parses through tweets that appear in Twitter's search function. All of the text is then sent to the sentiment\_analysis file, which attaches a score and populates various data visualization tools (bar plots, histograms) to eventually send back to the user. The frontend is built with React, and Flask was also used to route user input to the backend and send the resulting data back to the frontend.
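In outline, the two halves look like the sketch below: Selenium scrolls the search feed so lazily loaded tweets render, and the collected text is scored for positivity. The CSS selector is an assumption about Twitter's markup, and a local transformers sentiment pipeline stands in for the HuggingFace API call; login handling and explicit waits are omitted.

```python
import time
from selenium import webdriver
from transformers import pipeline

def collect_tweet_texts(url: str, scrolls: int = 5) -> list:
    """Scroll an already-logged-in Twitter search page so lazily loaded tweets render."""
    driver = webdriver.Chrome()
    driver.get(url)
    texts = []
    for _ in range(scrolls):
        driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)  # wait for dynamic loading; explicit waits are better in practice
        texts = [e.text for e in driver.find_elements("css selector", "[data-testid='tweetText']")]
    driver.quit()
    return texts

def positivity_ratio(texts: list) -> float:
    """Score each tweet and return the fraction labeled positive."""
    classifier = pipeline("sentiment-analysis")
    labels = [r["label"] for r in classifier(texts, truncation=True)]
    return labels.count("POSITIVE") / max(len(labels), 1)
```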
## Challenges we ran into
Twitter is notoriously difficult to get information from. We first attempted the use of Tweepy (the official Twitter developer tools API), which failed for two main reasons: the free developer account would only allow access to tweets from the homepage (as opposed to search) and would only show tweets from up to 7 days prior. We then switched gears and attempted various web scraping tools. However, it was extremely difficult to automate this process due to Twitter's new guidelines on web scraping and bot usage; we ended up using Selenium as a sort of proxy to log into an account and look through the timeline.
Another issue we had was that of dynamic loading- Twitter doesn't load tweets (or various other features) until a user manually scrolls through the timeline/search feed to see it. As a result, we were also forced to manually scroll the page down and have the program wait as the tweets loaded. The combination of this issue, as well as the inability to pull tweets from the background made this collection an extremely slow process (~60 seconds to parse text from 100 tweets).
Our other main issue was bringing user input into the backend, and then back to the frontend as organized data. Because the frontend and backend files were made completely independently, we ran into a lot of trouble getting the two to work together.
## Accomplishments that we're proud of
We were really proud to create a fully functional "bot" with the capability to collect text from tweets.
## What we learned
We learned a lot about web scraping, dove deep into the concept of sentiment analysis, and ultimately gained well rounded exposure to full stack development.
## What's next for CHARm
We want to implement categorization by date, which would allow users to see trends in sentiment of certain concepts (Ex. a celebrity 6 months ago vs. today). This could be extremely applicable for those with political campaigns, social media influencer accounts, or even people with a desire to learn. | ## Inspiration
It is a data-driven world, and we need to sift through data to get what we need and can use. Sentiment and topic analysis are tools that companies use to collect and analyze customer feedback in order to better understand data trends and make plans for improvement.
## What it does
The data we use is scraped directly from the web to ensure that it is fresh and insightful. By running through about 5,000 training examples, we are able to predict the topic of each answer quite accurately with the help of machine learning and topic analysis. Along the way, we make full use of the Twitter API for tasks like data cleaning. In the end, we visualize the results as a pie chart of customer feedback on Twitter and provide clear results to the company: the chart shows the major reasons for the company's negative sentiment and how they are distributed across social media.
## How I built it
First of all, we did research on how to do sentiment analysis on customer feedback, which led us to data mining, data pre-processing, classification, and analysis.
For the data mining, we wanted to web-scrape the keywords from the website directly. However, we found out that Twitter has its own API, and we requested approval for a Twitter developer account. Once we got the consumer key, we downloaded a dataset of customer feedback with the keyword, Twitter ID, and associated address, and wrote the relevant entries into a new corpus file.
After we extracted the data for the input keyword, we started to clean it by tokenizing the text and removing punctuation.
After training a Naive Bayes classifier and predicting on the new dataset, we were able to get the labels (positive/negative/neutral) and the accuracy of the model.
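For clarity, the classification step boils down to the standard bag-of-words Naive Bayes recipe sketched below; the file name, column names, and test split are illustrative rather than taken from our notebook.

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Assumed columns: "text" (cleaned tweet) and "sentiment" (positive/negative/neutral)
df = pd.read_csv("airline_tweets.csv")

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(df["text"])
y = df["sentiment"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = MultinomialNB().fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```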
## Challenges I ran into
For web scraping, it was hard to filter the meaningful data we need and convert it from HTML to a CSV file. For the labels, we only introduced the sentiment label in training, classified into positive, negative, and neutral; it was difficult to create more labels/topics from the comments of each Twitter ID.
## Accomplishments that I'm proud of
We were able to use topic and sentiment analysis to classify different topics: airline timing issues, customers' baggage issues, and customer service issues. After obtaining the percentage of each component, we were able to plot it and give the users of our project a better visualization of the results.
## What I learned
We learned that instead of writing web-scraping functions ourselves, we could use APIs directly, which saves a lot of hassle. By implementing machine learning algorithms and topic and sentiment analysis, we got a better understanding of how machine learning is applied to business planning and operations. Sentiment and topic analysis were totally new to us, and we had to learn the algorithms and their applications from scratch.
## What's next for JetBlue Sentiment Analysis
We hope to create more labels such as year, season, time, region, country, etc. In doing so, we can build a more detailed and specific picture of how customers feel about the company and, for each strategy JetBlue has adopted, find out about the resulting business trends. | ## Inspiration
“**Social media sucks these days.**” — These were the first few words we heard from one of the speakers at the opening ceremony, and they struck a chord with us.
I’ve never genuinely felt good while being on my phone, and like many others I started viewing social media as nothing more than a source of distraction from my real life and the things I really cared about.
In December 2019, I deleted my accounts on Facebook, Instagram, Snapchat, and WhatsApp.
For the first few months — I honestly felt great. I got work done, focused on my small but valuable social circle, and didn’t spend hours on my phone.
But one year into my social media detox, I realized that **something substantial was still missing.** I had personal goals, routines, and daily checklists of what I did and what I needed to do — but I wasn’t talking about them. By not having social media I bypassed superficial and addictive content, but I was also entirely disconnected from my network of friends and acquaintances. Almost no one knew what I was up to, and I didn’t know what anyone was up to either. A part of me longed for a level of social interaction more sophisticated than Gmail, but I didn’t want to go back to the forms of social media I had escaped from.
One of the key aspects of being human is **personal growth and development** — having a set of values and living them out consistently. Especially in the age of excess content and the disorder of its partly-consumed debris, more people are craving a sense of **routine, orientation, and purpose** in their lives. But it’s undeniable that **humans are social animals** — we also crave **social interaction, entertainment, and being up-to-date with new trends.**
Our team’s problem with current social media is its attention-based reward system. Most platforms reward users based on numeric values of attention, through measures such as likes, comments and followers. Because of this reward system, people are inclined to create more appealing, artificial, and addictive content. This has led to some of the things we hate about social media today — **addictive and superficial content, and the scarcity of genuine interactions with people in the network.**
This leads to a **backward-looking user-experience** in social media. The person in the 1080x1080 square post is an ephemeral and limited representation of who the person really is. Once the ‘post’ button has been pressed, the post immediately becomes an invitation for users to trap themselves in the past — to feel dopamine boosts from likes and comments that have been designed to make them addicted to the platform and waste more time, ultimately **distorting users’ perception of themselves, and discouraging their personal growth outside of social media.**
In essence — We define the question of reinventing social media as the following:
*“How can social media align personal growth and development with meaningful content and genuine interaction among users?”*
**Our answer is High Resolution — a social media platform that orients people’s lives toward an overarching purpose and connects them with like-minded, goal-oriented people.**
The platform seeks to do the following:
**1. Motivate users to visualize and consistently achieve healthy resolutions for personal growth**
**2. Promote genuine social interaction through the pursuit of shared interests and values**
**3. Allow users to see themselves and others for who they really are and want to be, through natural, progress-inspired content**
## What it does
The following are the functionalities of High Resolution (so far!):
After Log in or Sign Up:
**1. Create Resolution**
* Name your resolution, whether it be Learning Advanced Korean, or Spending More Time with Family.
* Set an end date for the resolution (e.g., December 31, 2022)
* Set intervals that you want to commit to this goal for (Daily / Weekly / Monthly)
**2. Profile Page**
* Ongoing Resolutions
+ Ongoing resolutions and level of progress
+ Clicking on a resolution opens up the timeline of that resolution, containing all relevant posts and intervals
+ Option to create a new resolution, or ‘Discover’ resolutions
* ‘Discover’ Page
+ Explore other users’ resolutions, that you may be interested in
+ Clicking on a resolution opens up the timeline of that resolution, allowing you to view the user’s past posts and progress for that particular resolution and be inspired and motivated!
+ Clicking on a user takes you to that person’s profile
* Past Resolutions
+ Past resolutions and level of completion
+ Resolutions can either be fully completed or partly completed
+ Clicking on a past resolution opens up the timeline of that resolution, containing all relevant posts and intervals
**3. Search Bar**
* Search for and navigate to other users’ profiles!
**4. Sentiment Analysis based on IBM Watson to warn against highly negative or destructive content**
* Two functions perform sentiment analysis on textual data across the platform (a minimal sketch follows this list):
* One function to analyze the overall positivity/negativity of the text
* Another function to report to the user the amount of joy, sadness, anger, and disgust detected in the text
* When the user tries to create a resolution that seems to be triggered by negativity, sadness, fear or anger, we show them a gentle alert that this may not be best for them, and ask if they would like to receive some support.
* In the future, we can further implement this feature to do the same for comments on posts.
* This particular functionality has been demo'ed in the video, during the new resolution creation.
* **There are two purposes for this functionality**:
* a) We want all our members to feel that they are in a safe space, and while they are free to express themselves freely, we also want to make sure that their verbal actions do not pose a threat to themselves or to others.
* b) Current social media has been shown to be a propagator of hate speech leading to violent attacks in real life. One prime example is the Easter Attacks that took place in Sri Lanka exactly a year ago: <https://www.bbc.com/news/technology-48022530>
* If social media had a mechanism to prevent such speech from being rampant, the possibility of such incidents occurring could have been reduced.
* Our aim is not to police speech, but rather to make people more aware of the impact of their words, and in doing so also try to provide resources or guidance to help people with emotional stress that they might be feeling on a day-to-day basis.
* We believe that education at the grassroots level through social media will have an impact on elevating the overall wellbeing of society.
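Below is a minimal sketch of these two checks. It assumes the IBM Watson Natural Language Understanding Python SDK with placeholder credentials; the app itself calls Watson from its JavaScript stack, and the alert threshold is illustrative.

```python
# Minimal sketch, not production code: Watson NLU returns an overall sentiment score
# plus per-emotion scores (joy, sadness, anger, disgust, fear) for a piece of text.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import EmotionOptions, Features, SentimentOptions

nlu = NaturalLanguageUnderstandingV1(
    version="2022-04-07",
    authenticator=IAMAuthenticator("YOUR_WATSON_API_KEY"),  # placeholder credentials
)
nlu.set_service_url("YOUR_WATSON_SERVICE_URL")

def analyze_resolution(text: str) -> dict:
    """Function 1: overall positivity/negativity plus the emotion scores."""
    result = nlu.analyze(
        text=text,
        features=Features(sentiment=SentimentOptions(), emotion=EmotionOptions()),
    ).get_result()
    return {
        "sentiment": result["sentiment"]["document"]["score"],  # -1 (negative) .. +1 (positive)
        "emotions": result["emotion"]["document"]["emotion"],   # joy, sadness, anger, disgust, fear
    }

def needs_gentle_alert(text: str, threshold: float = 0.5) -> bool:
    """Function 2: flag resolutions that look driven by negativity, sadness, fear, or anger."""
    scores = analyze_resolution(text)
    heavy_negative = any(scores["emotions"].get(e, 0.0) > threshold for e in ("sadness", "anger", "disgust", "fear"))
    return scores["sentiment"] < 0 or heavy_negative
```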
## How we built it
Our tech stack primarily consisted of React (with Material UI), Firebase and IBM Watson APIs. For the purpose of this project, we opted to use the full functionality of Firebase to handle the vast majority of functionality that would typically be done on a classic backend service built with NodeJS, etc. We also used Figma to prototype the platform, while IBM Watson was used for its Natural Language toolkits, in order to evaluate sentiment and emotion.
## Challenges we ran into
The bulk of the challenges we encountered had to do with React Hooks. A lot of us were only familiar with an older version of React that opted for class components instead of functional components, so getting used to Hooks took a bit of time.
Another issue that arose was pulling data from our Firebase datastore. Again, this was a result of lack of experience with serverless architecture, but we were able to pull through in the end.
## Accomplishments that we're proud of
We’re really happy that we were able to implement most of the functionality that we set out to when we first envisioned this idea. We admit that we might have bitten off a bit more than we could chew by setting out to recreate an entire social platform in a short amount of time, but we believe the proof of concept is demonstrated through our demo.
## What we learned
Through research and long contemplation on social media, we learned a lot about the shortcomings of modern social media platforms, for instance how they facilitate unhealthy addictive mechanisms that limit personal growth and genuine social connection, as well as how they have failed in various cases of social tragedies and hate speech. With that in mind, we set out to build a platform that could be on the forefront of a new form of social media.
From a technical standpoint, we learned a ton about how Firebase works, and we were quite amazed at how well we were able to work with it without a traditional backend.
## What's next for High Resolution
One of the first things that we’d like to implement next is the ‘Group Resolution’ functionality. As of now, users browse through the platform, find and connect with like-minded people pursuing similarly-themed interests. We think it would be interesting to allow users to create and pursue group resolutions with other users, to form more closely-knit and supportive communities with people who are actively communicating and working towards achieving the same resolution.
We would also like to develop a sophisticated algorithm to tailor the users’ ‘Discover’ page, so that the shown content is relevant to their past resolutions. For instance, if the user has completed goals such as ‘Wake Up at 5:00AM’, and ‘Eat breakfast everyday’, we would recommend resolutions like ‘Morning jog’ on the discover page. By recommending content and resolutions based on past successful resolutions, we would motivate users to move onto the next step. In the case that a certain resolution was recommended because a user failed to complete a past resolution, we would be able to motivate them to pursue similar resolutions based on what we think is the direction the user wants to head towards.
We also think that High Resolution could potentially become a platform for recruiters to spot dedicated and hardworking talent, through the visualization of users’ motivation, consistency, and progress. Recruiters may also be able to use the platform to communicate with users and host online workshops or events.
With more classes and educational content transitioning online, we think the platform could serve as a host for online lessons and bootcamps for users interested in various topics such as coding, music, gaming, art, and languages, as we envision our platform being highly compatible with existing online educational platforms such as Udemy, Leetcode, KhanAcademy, Duolingo, etc.
The overarching theme of High Resolution is **motivation, consistency, and growth.** We believe that having a user base that adheres passionately to these themes will open to new opportunities and both individual and collective growth. | losing |
## Inspiration
With the COVID-19 crisis, it has become more difficult to connect with others so we thought of having music being a simple icebreaker in conversations.
## What it does
Muse is a website that combines music streaming and social networking to allow users with similar musical interests to meet each other. Muse will match users on the platform who both love the same song. Matched users can then be taken to the direct messaging page on Muse, where they can discuss their interests and hobbies further. There is also a community page where everyone can share their music preferences.
## How we built it
React/Firebase/HTML/CSS
## Challenges we ran into
We were very lost on how to approach the Spotify API and ultimately had to switch our project approach towards the end.
## Accomplishments that we're proud of
This is the first full web development project that we have completed and though it's not perfect, we learned a lot.
## What's next for Muse
Deeper integration of the Spotify API to gather users' listening activity and better match users.
[From Spotify](https://ads.spotify.com/en-GB/news-and-insights/audio-therapy/)
We set out at Delta Hacks X to make a project that would be *fun* to use. More importantly, however, we wanted to make a project that would help people. We wanted to connect people through a shared medium, and the most obvious answer was **music**. Post-pandemic, Spotify reports 31% more monthly active users year over year. Nowadays, it's hard to find someone who doesn't listen to music. Music isn't just a tool to beat boredom or stay focused. Music brings people together, especially in times of need. Tone is a minimal tool that puts the emphasis on two things: people, and the music they love.
## What it does 💻
Tone uses your Spotify playback history to connect you with people around the world that have similar history to you. You can then have a conversation with that user while listening to a song you both like, allowing you to form a connection through the **magical** **medium** of **music**. When you guys are done talking, Tone will add the user you talked with to your friends list, allowing you to chat with them whenever you desire.
## How we built it 🚀
Tone is bootstrapped with Next.JS and written in TypeScript. Prisma is used as a middleman to the PostgreSQL database, and Tailwind CSS is used for all the styling. Spotify's Web API is used to retrieve your stats and store them in the database, allowing custom algorithms to calculate your top three genres. We then crawl the client database to find another person who is into the same music as you, making sure you haven't matched with them before. The project is hosted on Cloudflare, and Redis is used to check which of your friends are online.
## Challenges we ran into 🧱
Spotify's Web API is very particular with its routes and syntax. A couple of times throughout the project, we forgot a line of code here or there and spent hours debugging as a consequence. Additionally, we weren't able to meet all of our goals for the app, leaving a couple of features out due to time constraints.
## Accomplishments that we're proud of ✅
The chat and UI function flawlessly, and we're pretty proud that we were able to ship a clean, functional product in 24 hours after starting from scratch.
## What we learned 📖
We learned that you should always thoroughly read documentation before using API's. Aside from that, this was our first time using Redis and Cloudflare, both of which worked well for our app.
## What's next for Tone ➡️
Tone isn't stopping here. Soon, you will be able to locate concerts to attend with your nearby Tone friends, ones that your circle can enjoy together. To improve user experience, we are planning to implement sentiment analysis using an LLM to block negative users from being added to friend lists. | # Delta Draw
Watch the [video](https://www.youtube.com/watch?v=hDX5sQmFqY8)
## Project Description:
1. A **mechanical shell** with 3D printed gondolas and a motor-guided marker
2. An **image-processing pipeline** that determines how to draw uploaded images start-to-finish
3. A **Web-based HMI** that allows images to be uploaded & drawn
The whiteboard was scavenged from McMaster's IEEE Student Branch | losing |
## Inspiration
Everyone gets tired waiting for their large downloads to complete. BitTorrent is awesome, but you may not have a bunch of peers ready to seed it. Fastify, a download accelerator as a service, solves both these problems and regularly enables 4x download speeds.
## What it does
The service accepts a URL and spits out a `.torrent` file. This `.torrent` file allows you to tap into Fastify's speedy seed servers for your download.
We even cache some downloads so popular downloads will be able to be pulled from Fastify even speedier!
Without any cache hits, we saw the following improvements in download speeds with our test files:
```
| File size         | 512 MB   | 1 GB   | 2 GB    | 5 GB    |
|-------------------|----------|--------|---------|---------|
| Regular Download  | 3 mins   | 7 mins | 13 mins | 30 mins |
| Fastify           | 1.5 mins | 3 mins | 5 mins  | 9 mins  |
|-------------------|----------|--------|---------|---------|
| Effective Speedup | 2x       | 2.33x  | 2.6x    | 3.3x    |
```
*test was performed with slices of the ubuntu 16.04 iso file, on the eduroam network*
## How we built it
Created an AWS cluster and began writing Go code to accept requests and the front-end to send them. Over time we added more workers to the AWS cluster and improved the front-end. Also, we gratefully received some much-needed Vitamin Water.
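The key artifact the service produces is a `.torrent` whose metainfo points clients at Fastify's seed servers. For illustration only, the Python sketch below shows the shape of that metainfo and the bencoding it is serialized with; the real implementation is in Go, and the tracker URL, file name, and piece size are placeholders.

```python
# Illustration of the .torrent metainfo a "fastified" download carries (values are placeholders).
import hashlib

def bencode(value) -> bytes:
    """Minimal bencoder for ints, bytes/str, lists and dicts (the .torrent wire format)."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        items = sorted(value.items())  # keys must be sorted per the spec
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(value))

piece_length = 256 * 1024
file_bytes = b"..."  # the download, fetched once by Fastify's seed servers
pieces = b"".join(
    hashlib.sha1(file_bytes[i:i + piece_length]).digest()
    for i in range(0, len(file_bytes), piece_length)
)
metainfo = {
    "announce": "http://tracker.fastify.example/announce",  # placeholder tracker URL
    "info": {
        "name": "ubuntu-16.04.iso",
        "length": len(file_bytes),
        "piece length": piece_length,
        "pieces": pieces,
    },
}
with open("download.torrent", "wb") as f:
    f.write(bencode(metainfo))
```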
## Challenges we ran into
The BitTorrent protocol and architecture was more complicated for seeding than we thought. We were able to create `.torrent` files that enabled downloads on some BitTorrent clients but not others.
Also, our "buddy" (*\*cough\** James *\*cough\**) ditched our team, so we were down to only 2 people off the bat.
## Accomplishments that we're proud of
We're able to accelerate large downloads by 2-5 times as fast as the regular download. That's only with a cluster of 4 computers.
## What we learned
Bittorrent is tricky. James can't be trusted.
## What's next for Fastify
More servers on the cluster. Demo soon too. | ![](https://user-images.githubusercontent.com/43010710/219844570-3b4ff1c1-0bec-4009-a7b1-0c73eedf5789.png)
## Problem Statement 💡
The modern electronic health record (EHR) encompasses a treasure trove of information across patient demographics, medical history, clinical data, and other health system interactions (Jensen *et al.*). Although the EHR represents a valuable resource to track clinical care and retrospectively evaluate clinical decision-making, the data deluge of the EHR often obfuscates key pieces of information necessary for the physician to make an accurate diagnosis and devise an effective treatment plan (Noori and Magdamo *et al.*). Physicians may struggle to rapidly synthesize the lengthy medical histories of their patients; in the absence of data-driven strategies to extract relevant insights from the EHR, they are often forced to rely on intuition alone to generate patient questions. Further, the EHR search interface is rarely optimized for the physician search workflow, and manual search can be both time-consuming and error-prone.
The volume and complexity of the EHR can lead to missed opportunities for physicians to gather critical information pertinent to patient health, leading to medical errors or poor health outcomes. It is imperative to design tools and services to reduce the burden of manual EHR search on physicians and help them elicit the most relevant information from their patients.
## About Amanuensis 📝
Amanuensis is an AI-enabled physician assistant for automated clinical summarization and question generation. By arming physicians with relevant insights collected from the EHR as well as with patient responses to NLP-generated questions, we empower physicians to achieve more accurate diagnoses and effective treatment plans. The Amanuensis pipeline is as follows:
1. **Clinical Summarization:** Through our web application, physicians can access medical records of each of their patients, where they are first presented with a clinical summary: a concise, high-level overview of the patient's medical history, including key information such as diagnoses, medications, and allergies. This clinical summary is automatically generated by Amanuensis using Generative Pre-Trained Transformer 3 (GPT-3), an autoregressive language model with a 2048-token-long context and 175 billion parameters. The clinical summary may be reviewed by the physician to ensure that the summary is accurate and relevant to the patient's health.
2. **Question Generation:** Next, Amanuensis uses GPT-3 to automatically generate a list of questions that the physician can ask their patient to elicit more information and identify relevant information in the EHR that the physician may not have considered. The NLP-generated questions are automatically sent to the patient *prior* to their appointment (*e.g.*, once the appointment is scheduled); then, the physician can review the patient's responses and use them to inform their clinical decision-making during the subsequent encounter. Importantly, we have tested Amanuensis on a large cohort of high-quality simulated EHRs generated by SyntheaTM.
By guiding doctors to elicit the most relevant information from their patients, Amanuensis can help physicians improve patient outcomes and reduce the incidences of all five types of medical errors: medication errors, patient care complications, procedure/surgery complications, infections, and diagnostic/treatment errors.
## Building Process 🏗
To both construct and validate Amanuensis, we used the [SyntheaTM](https://synthetichealth.github.io/synthea/) library to generate synthetic patients and associated EHRs (Walonoski *et al.*). SyntheaTM is an open-source software package that simulates the lifespans of synthetic patients using realistic models of disease progression and corresponding standards of care. These models rely on a diverse set of real-world data sources, including the United States Census Bureau demographics, Centers for Disease Control and Prevention (CDC) prevalence and incidence rates, and National Institutes of Health (NIH) reports. The SyntheaTM package was developed by an international research collaboration involving the MITRE Corporation and the HIKER Group, and is in turn based on the Publicly Available Data Approach to the Realistic Synthetic EHR framework (Dube and Gallagher). We customized the SyntheaTM synthetic data generation workflow to produce the following 18 data tables (see also the [SyntheaTM data dictionary](https://github.com/synthetichealth/synthea/wiki/CSV-File-Data-Dictionary)):
| Table | Description |
| --- | --- |
| `Allergies` | Patient allergy data. |
| `CarePlans` | Patient care plan data, including goals. |
| `Claims` | Patient claim data. |
| `ClaimsTransactions` | Transactions per line item per claim. |
| `Conditions` | Patient conditions or diagnoses. |
| `Devices` | Patient-affixed permanent and semi-permanent devices. |
| `Encounters` | Patient encounter data. |
| `ImagingStudies` | Patient imaging metadata. |
| `Immunizations` | Patient immunization data. |
| `Medications` | Patient medication data. |
| `Observations` | Patient observations including vital signs and lab reports. |
| `Organizations` | Provider organizations including hospitals. |
| `Patients` | Patient demographic data. |
| `PayerTransitions` | Payer transition data (*i.e.*, changes in health insurance). |
| `Payers` | Payer organization data. |
| `Procedures` | Patient procedure data including surgeries. |
| `Providers` | Clinicians that provide patient care. |
| `Supplies` | Supplies used in the provision of care. |
To simulate an EHR system, we pre-processed all synthetic data (see `code/construct_database.Rmd`) and standardized all fields. Next, we constructed a PostgreSQL database and keyed relevant tables together using primary and foreign keys constructed by hand. In total, our database contains **199,717 records** from **20 patients** across **262 different fields**. However, it is important to note that our data generation pipeline is scalable to tens of thousands of patients (and we have tested this synthetic data generation capacity).
Finally, we coupled the PostgreSQL database with the [RedwoodJS](https://redwoodjs.com/) full stack web development framework to build a web application that allows:
1. **Physicians:** Physicians to access the clinical summaries and questions generated by Amanuensis for each of their patients.
2. **Patients:** Patients to access the questions generated by Amanuensis and respond to them via a web form.
To generate both clinical summaries and questions for each patient, we used the [OpenAI GPT-3 API](https://beta.openai.com/docs/api-reference/completions/create). In both cases, GPT-3 was prompted with a subset of the EHR record for a given patient inserted into a prompt template for GPT-readability (a minimal prompting sketch appears after the list below). Other key features of our web application include:
1. **Authentication:** Users can log in with their email addresses; physicians are automatically redirected to their dashboard upon login, while patients are redirected to a page where they can respond to the questions generated by Amanuensis.
2. **EHR Access:** Physicians can also access the full synthetic EHR for each patient as well as view autogenerated graphs and data visualizations, which they can use to review the accuracy of the clinical summaries and questions generated by Amanuensis.
3. **Patient Response Collection:** Prior to an appointment, Amanuensis will automatically collect the patient's responses to the NLP-generated questions and send them to the physician. During an appointment, physicians will be informed by these responses which will facilitate better clinical decision-making.
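The sketch below illustrates the prompting step using the (then-current) legacy `openai` Python completions API. The prompt wording, model choice, and helper names are illustrative rather than Amanuensis's exact templates.

```python
# Hedged sketch: an EHR excerpt is inserted into a template and sent to the GPT-3
# completions endpoint (legacy `openai` < 1.0 interface); prompts are illustrative.
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

SUMMARY_TEMPLATE = (
    "You are assisting a physician. Summarize this patient's history in five sentences, "
    "covering active conditions, current medications, and allergies.\n\nEHR excerpt:\n{ehr}"
)
QUESTION_TEMPLATE = (
    "Based on this patient summary, list five questions the physician should ask "
    "before the next appointment.\n\nSummary:\n{summary}"
)

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=400,
        temperature=0.2,
    )
    return response["choices"][0]["text"].strip()

def summarize_and_question(ehr_excerpt: str) -> tuple[str, str]:
    summary = complete(SUMMARY_TEMPLATE.format(ehr=ehr_excerpt))
    questions = complete(QUESTION_TEMPLATE.format(summary=summary))
    return summary, questions
```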
## Future Directions 🚀
In the future, we hope to integrate Amanuensis into existing EHR systems (*e.g.*, Epic, Cerner, etc.), providing physicians with a seamless, AI-powered assistant to help them make more informed clinical decisions. We also plan to enrich our NLP pipeline with real patient data rather than synthetic EHR records. In concert with gold-standard annotations generated by physicians, we intend to fine-tune our question generation and clinical summarization models on real-world data to improve the sophistication and fidelity of the generated text and enable more robust clinical reasoning capabilities.
# Development Team 🧑💻
* [Ayush Noori](mailto:anoori@college.harvard.edu)
* [Iñaki Arango](mailto:inakiarango@college.harvard.edu)
* [Addea Gupta](mailto:addeagupta@college.harvard.edu)
* [Smriti Somasundaram](mailto:smritisomasundaram@college.harvard.edu)
This project was completed during the [TreeHacks 2023](https://www.treehacks.com/) hackathon at Stanford University.
# Award Descriptions 🏆
Below, we provide descriptions of specific prizes at TreeHacks 2023 which we believe Amanuensis is a strong candidate for. Please note that this list is non-exhaustive. We thank the sponsors and judges for their consideration.
### Patient Safety Technology Challenge
By guiding physicians to make more informed clinical decisions, Amanuensis will help reduce medical errors and improve patient safety. We anticipate that Amanuensis is well poised to avert patient harm and reduce the incidence of medical errors across the continuum of care, including:
1. *Medication errors:* Through AI-based summarization of the medication history and identification of key risk factors for adverse drug events, potential drug-drug interactions, drug-allergy reactions, and drug-disease contraindications, Amanuensis will help physicians prescribe safer medications.
2. *Procedural/surgical errors:* With access to patient responses to relevant AI-generated questions, physicians will be more prepared for procedures and surgeries, and will be able to identify potential complications and risks.
3. *Diagnostic errors:* By providing physicians with a summary of the patient's medical history, relevant clinical findings, and most salient symptoms, Amanuensis will help physicians make more accurate diagnoses and avoid treatment errors.
We thank Dr. Paul Tang for his time and guidance throughout the hackathon.
### Best Startup and Most Likely to Become a Business
The EHR industry is ripe for disruption. Among hospitals with over 500 beds, the two dominant EHR systems, Epic and Cerner, hold 85% market share: Epic’s market share is 58%, and Cerner’s market share is 27%. Yet, physicians consistently express frustration with their antiquated and inefficient interfaces. By contrast, Amanuensis offers a unique value proposition to physicians: a seamless, AI-powered assistant that may integrate with or supplant existing EHR systems. We believe that Amanuensis will be well positioned to disrupt the EHR industry and become a leading provider of AI-powered clinical decision support tools.
### Best Use of Data
Amanuensis is undergirded by careful and nuanced analysis of large electronic medical datasets. To construct and validate Amanuensis, we generated a dataset with 199,717 records from 20 patients across 262 different fields; further, our synthetic EHR data generation pipeline can scale to tens of thousands of patients. Our deep understanding of the data enabled us to construct a large relational database with explicit foreign and primary keys; we later exploit these relations to efficiently query the database in JavaScript. Finally, informed by our high-quality dataset, we used leading large language models like GPT-3 from OpenAI to generate clinical summaries and questions for each patient, and designed a user interface for clinicians to both access and validate this information.
### Best Hack for a Real World Use Case
We carefully designed our solution to address a real-world problem: the need for more efficient and effective clinical decision support tools. Further, we iterated on our solution throughout the hackathon, incorporating feedback from physicians such as [Dr. Paul Tang](https://www.linkedin.com/in/paultang/) and other stakeholders to ensure that our solution is both feasible, impactful, and closely aligned with physician needs.
### Most Ethically Engaged Hack
Patient privacy and safety are at the core of our hack. Thus, we invested significant effort in generating synthetic data records, which we provided to downstream language models in our web application. We also designed our web application to leave the database unexposed, and we plan to integrate Amanuensis into existing EHR systems to ensure that patient data is stored securely and is only accessible to authorized users.
## References 📚
1. Noori, A. et al. Development and Evaluation of a Natural Language Processing Annotation Tool to Facilitate Phenotyping of Cognitive Status in Electronic Health Records: Diagnostic Study. *Journal of Medical Internet Research* **24**, e40384 (2022).
2. Jensen, P. B., Jensen, L. J. & Brunak, S. Mining electronic health records: towards better research applications and clinical care. *Nat Rev Genet* **13**, 395–405 (2012).
3. Walonoski, J. et al. Synthea: An approach, method, and software mechanism for generating synthetic patients and the synthetic electronic health care record. *Journal of the American Medical Informatics Association* **25**, 230–238 (2018).
4. Dube, K. & Gallagher, T. Approach and Method for Generating Realistic Synthetic Electronic Healthcare Records for Secondary Use. in *Foundations of Health Information Engineering and Systems* (eds. Gibbons, J. & MacCaull, W.) 69–86 (Springer, 2014). doi:10.1007/978-3-642-53956-5\_6. | ![Tech Diagram](https://i.ibb.co/StWfp3V/Princeton-1.jpg)
## Inspiration
With the rise of non-commission brokerages, more teens are investing their hard-earned money in the stock market. I started using apps such as Robinhood when I was in high school, but I had no idea what stocks to buy. Trading securities has a very steep learning curve for the average joe.
We see that lots of new investors have very little experience in the market. Communities like Wall Street Bets have developed that further encourage wild speculation. Many open margin accounts, take out huge loans, and take on more risk than they realize. Also, many of these new investors are not financially savvy, nor are they looking to learn about finance in depth. We wanted to build an app that helps new investors reduce risk and understand what they own without needing much financial background. Hence, we came up with a social media app, Due Diligence!
## What it does
Due Diligence allows the user to take a photo of everyday objects: car, laptop, fridge, calculator, collection of watches, etc... and recommends stocks associated with the images. For instance, if you upload a photo of your car, our app will recommend stocks like Tesla, General Motors, and Ford. Our object detection model also has brand name recognition. Taking a photo of a MacBook will lead our app to recommend the AAPL stock. By recommending companies that manufacture the products and services Due Diligence users use daily, we believe that our userbase can better evaluate the company, its business model, and its position in the market, and come up with a reasonable and safe decision on whether to buy the stock or not.
Our application also has a chat feature. When a user registers for Due Diligence, we ask them questions about their investment strategy (growth, value) and their investment horizon. After a user gets a stock recommendation from our app, the user can choose to chat with another person looking to buy the same stock. We match the user with a partner that has similar investment strategies and investment horizons. They are able to use commands to get more specific information about a stock (get recent news articles, get its price, get its EPS this quarter) and we generate tailored questions for them to talk about based on their investment strategies.
## How we built it
We used React Native for the front-end, Flask for the back-end, and Postman for testing.
**1. Back-End**
The back-end dealt with managing users, saving investment strategies, classifying images to stock tickers, and matching/managing the chat. We used MongoDB Atlas to save all the data from the users and chat, and to update the front-end if necessary. We also used the IEX Cloud API which is an all-around stocks API that gives us the price, news, ticker symbol of a stock.
**2. Front-End**
We used React Native for the front end. We were experienced web developers but had little experience in app development. Being able to use web technologies sped up our development process.
**3. Google Cloud Vision API**
We used the Google Cloud Vision API to detect multiple items and logos in an image. After getting the tag names of the image, we ran it through our classification model to the image into ticker symbols.
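A rough sketch of this detection step with the Cloud Vision Python client is shown below; the hackathon build may have called the API differently, so treat the function shape as illustrative.

```python
# Label + logo detection for one uploaded photo (assumes GOOGLE_APPLICATION_CREDENTIALS is set).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def detect_tags(image_bytes: bytes) -> list[str]:
    """Return generic labels ("laptop", "car") and brand logos ("Apple") for one photo."""
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    logos = client.logo_detection(image=image).logo_annotations
    return [l.description for l in labels] + [l.description for l in logos]
```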
**4. Classification Model**
The IEX Cloud API can search stocks based on their sector, so we had to relate products to sectors. This is where Naive Bayes came in. We weren't able to find an existing dataset, so we created our own and used it to train the Bayes model's posterior probabilities. We built our dataset by figuring out how products mapped to business sectors (mobile phone -> tech, sports car -> automobile industry, etc...).
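The toy example below mirrors that idea with scikit-learn; the training rows and sector labels are stand-ins for the hand-built dataset, not the actual data.

```python
# Toy product-tag -> sector classifier (illustrative training rows only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tags = ["mobile phone laptop", "sports car sedan", "fridge microwave oven", "sneakers jersey"]
sectors = ["Technology", "Automotive", "Consumer Appliances", "Apparel"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(tags, sectors)

print(model.predict(["smart watch phone"]))  # -> a sector to look up in IEX Cloud
```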
## Challenges we ran into
Relating products to their associated companies was a very hard problem. There wasn't an API for it. We resorted to using a machine learning model, but even so, it was very challenging to think of a better way to do the mapping. Also, we were novice app developers. We learned how to use React Native from scratch, which was very time-consuming. Finally, working remotely with everyone was very challenging. We solved this problem by using Discord and getting into team calls often.
## Accomplishments that we're proud of
We are proud that we were able to think of new solutions to problems that haven't been addressed. Mapping images to stock recommendations has not been done before, and we had to build our Bayes model from scratch. Also, we are proud that we learned to build a mobile app in less than 48 hours!
## What we learned
We learned new technologies such as React Native and new models like Naive Bayes. Most of all, we learned how to work together in a challenging situation, and come up with a program that we are very proud of.
## What's next for DD - Due Diligence
We ran out of time while finishing up the front-end/back-end integration, so finishing that will be our top priority. Other than that, we think expanding our services beyond the NYSE and NASDAQ by integrating foreign exchanges into our app would be useful. Also, we think adding new asset classes such as bonds and ETFs is the right step forward. We were able to build a fraction of our ideas this weekend, but we hope to build additional features in the future.
## Inspiration
Every musician knows that moment of confusion, that painful silence as onlookers shuffle awkwardly while you frantically turn the page of the sheet music in front of you. While large solo performances may have people in charge of turning pages, for larger-scale ensemble works this obviously proves impractical. At this hackathon, inspired by the discussion around technology and music at the keynote speech, we wanted to develop a tool that could aid musicians.
Seeing AdHawk's MindLink demoed at the sponsor booths ultimately gave us a clear vision for our hack. MindLink, a deceptively ordinary-looking pair of glasses, can track the user's gaze in three dimensions, recognize events such as blinks, and even display the user's view through an external camera. Blown away by the possibilities and opportunities this device offered, we set out to build a hands-free sheet music tool that simplifies working with digital sheet music.
## What it does
Noteation is a powerful sheet music reader and annotator. All the musician needs to do is to upload a pdf of the piece they plan to play. Noteation then displays the first page of the music and waits for eye commands to turn to the next page, providing a simple, efficient and most importantly stress-free experience for the musician as they practice and perform. Noteation also enables users to annotate on the sheet music, just as they would on printed sheet music and there are touch controls that allow the user to select, draw, scroll and flip as they please.
## How we built it
Noteation is a web app built using React and Typescript. Interfacing with the MindLink hardware was done on Python using AdHawk's SDK with Flask and CockroachDB to link the frontend with the backend.
## Challenges we ran into
One challenge we came across was deciding how to optimally allow the user to turn page using eye gestures. We tried building regression-based models using the eye-gaze data stream to predict when to turn the page and built applications using Qt to study the effectiveness of these methods. Ultimately, we decided to turn the page using right and left wink commands as this was the most reliable technique that also preserved the musicians' autonomy, allowing them to flip back and forth as needed.
Strategizing how to structure the communication between the front and backend was also a challenging problem to work on as it is important that there is low latency between receiving a command and turning the page. Our solution using Flask and CockroachDB provided us with a streamlined and efficient way to communicate the data stream as well as providing detailed logs of all events.
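A simplified sketch of that relay is shown below. The endpoint names and payloads are invented for illustration, and the CockroachDB-backed event log is replaced with an in-memory list.

```python
# Hedged sketch of the relay between the eye-tracking process and the reader UI.
from flask import Flask, jsonify, request

app = Flask(__name__)
pending_events = []  # stand-in for the CockroachDB-backed event log

@app.route("/events", methods=["POST"])
def record_event():
    """Called by the Python gaze-tracking loop, e.g. {"gesture": "right_wink"}."""
    pending_events.append(request.get_json())
    return jsonify({"ok": True})

@app.route("/events/next", methods=["GET"])
def next_event():
    """Polled by the React front end; a right wink maps to 'next page', a left wink to 'previous'."""
    if not pending_events:
        return jsonify({"action": None})
    gesture = pending_events.pop(0).get("gesture")
    action = {"right_wink": "next_page", "left_wink": "previous_page"}.get(gesture)
    return jsonify({"action": action})

if __name__ == "__main__":
    app.run(port=5000)
```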
## Accomplishments that we're proud of
We're so proud we managed to build a functioning tool that we genuinely believe is super useful. As musicians, this is something we've legitimately thought would be useful in the past, and being given access to pioneering technology to make it happen was super exciting, all while working with a piece of cutting-edge hardware that we had zero experience using before this weekend.
## What we learned
One of the most important things we learnt this weekend were the best practices to use when collaborating on project in a time crunch. We also learnt to trust each other to deliver on our sub-tasks and helped where we could. The most exciting thing that we learnt while learning to use these cool technologies, is that the opportunities are endless in tech and the impact, limitless.
## What's next for Noteation: Music made Intuitive
Some immediate features we would like to add to Noteation are enabling users to save the PDF with their annotations and adding a landscape mode where two pages can be displayed at a time. We would also really like to explore more features of MindLink and allow users to customize their control gestures. There's even the possibility of expanding the feature set beyond just changing pages, especially for non-classical musicians who might have other electronic devices to potentially control. The possibilities really are endless and are super exciting to think about!
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and express how they feel.
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, and corresponding keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
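Below is a hedged sketch of the "mood to keywords to image" step. The write-up does not name the exact libraries, so the snippet assumes alpha/beta band powers have already been extracted from the Ganglion stream and uses Hugging Face's `diffusers` package to run Stable Diffusion; the mood heuristic and keyword table are illustrative.

```python
# Hedged sketch only: band-power extraction is assumed done elsewhere, and the
# mood heuristic and keyword mapping are illustrative, not the team's actual logic.
import torch
from diffusers import StableDiffusionPipeline

MOOD_KEYWORDS = {
    "calm": "serene watercolor landscape, soft morning light",
    "focused": "minimalist geometric study, sharp clean lines",
    "excited": "vivid abstract explosion of color, dynamic brushstrokes",
    "stressed": "stormy expressionist seascape, heavy dark tones",
}

def mood_from_bands(alpha: float, beta: float) -> str:
    """Very rough heuristic: a high beta/alpha ratio reads as arousal."""
    ratio = beta / max(alpha, 1e-6)
    if ratio > 1.5:
        return "stressed"
    if ratio > 1.0:
        return "excited"
    return "focused" if ratio > 0.6 else "calm"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

mood = mood_from_bands(alpha=9.0, beta=16.0)
image = pipe(MOOD_KEYWORDS[mood]).images[0]  # several pieces could be generated per mood
image.save("artwork.png")
```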
## Challenges we ran into
We faced a series of challenges throughout the hackathon, which is perhaps an essential part of all hackathons. Initially, we struggled to set up the electrodes on the BCI so that they were receptive enough, and to work our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a BeReal-like social media platform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but also hear what it sounds like.
Have you ever thought that BeReals are too fake? An app built to challenge the unrealistic beauty standards set by typical social media turned into one of the giants it aimed to destroy. With BrainbeatReal we capture the real "you". Our state-of-the-art social media platform prompts you to take a picture only when your heart rate and brain waves are elevated, because you're only real when your body tells you that you are.
## What it does
Our application monitors your heart rate and brain waves measured by EEG to capture moments in your life and share a more real you. When your heart rate is high and brainwave activity signals more vivid emotions, the app will prompt you to take a picture through your front and back camera to share this interesting moment in your life. There is no other way to post media: you can post only when your body feels like it! We have our AI assistant Brainreader perform sentiment analysis on your pictures to generate a nonsensical caption. These moments are aggregated and analyzed to generate daily insights for the users. We auto-generate your captions to represent what your mind and heart really think, without an opportunity for filtering.
## How we built it
**See systems diagram above**
Our application interacts with three physical devices: an Apple Watch for heart rate monitoring, the Neurosity Crown for brainwave monitoring, and the user-facing application on the user's mobile device. At a certain threshold, the Apple Watch sends a request to the backend to start the brainbeat recording process; this notification is relayed to the user's phone, which checks and collects the user's current brainwaves to confirm an interesting event has occurred, triggering the start of a brainbeat.
To record a "brainbeat", the heart rate, brain wave, and front/back camera image data are passed into our backend server. The backend server will (1) generate an interesting caption for the images and (2) store and process the data into the database.
(1) The caption generation is powered by the LLaVA (Large Language and Vision Assistant) v1.5 7B and LLaMA 3.1 8B models through the **Groq** API. The LLaVA model analyzes the images to provide a description for the front and back camera photos. The descriptions are then fed into the LLaMA model, which is prompted to generate a funny, nonsensical caption for the brainbeat (a minimal sketch of this chain appears below).
(2) The brainbeat data (caption, photo URLs, heart rate, etc.) is stored in a **Databricks** managed table to take advantage of Databricks' extensive infrastructure. This table is queried in a scheduled job to aggregate the day's brainbeat data, which is stored in a separate table for analytics and insights. The job runs every day at 12am on the previous day's data. This process allows for more optimized retrieval of the aggregated insights on user request. There is also an opportunity for future extension in collecting streaming data of the user's brain waves / heart rate information for more intensive processing using Databricks' pipelines.
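Here is a hedged sketch of the two-step captioning chain on Groq. The model identifiers and the vision message format are written from memory and may need adjusting, and the prompt text is illustrative.

```python
# Hedged sketch: step 1 describes each photo with LLaVA, step 2 turns the descriptions
# into a nonsensical caption with LLaMA. Model IDs and the image-message format are
# assumptions; prompts are illustrative.
from groq import Groq

client = Groq(api_key="YOUR_GROQ_API_KEY")  # placeholder

def describe_image(image_url: str) -> str:
    """Step 1: LLaVA turns a camera photo into a short text description."""
    response = client.chat.completions.create(
        model="llava-v1.5-7b-4096-preview",  # assumed model ID
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this photo in one sentence."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

def nonsense_caption(front_desc: str, back_desc: str, heart_rate: int) -> str:
    """Step 2: LLaMA turns both descriptions plus the heart rate into a silly caption."""
    prompt = (
        f"Front camera: {front_desc}\nBack camera: {back_desc}\n"
        f"Heart rate: {heart_rate} bpm.\nWrite one funny, nonsensical caption."
    )
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```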
**Technologies used:**
* React Native Expo mobile application
* FastAPI with Python backend
* Databricks tables / jobs
* Groq
* Swift for WatchOS app
* Render for hosting
## Challenges we ran into
We worked with a lot of new technologies and three different devices to create our vision of this app. Some challenges included:
**Hardware Integration**: We did a lot of planning based on the possibilities of hardware availability and how we could allow the devices to communicate with each other. In the end a lot of the hardware we originally wanted was unavailable, so we had to choose a backup plan with more difficult implementation.
**Working with new technology**: We had the opportunity to try out a lot of new technology that we haven’t had experience using before. For example, no one on the team has used Swift, Groq or Databricks before, but they ended up being essential parts to our project's success.
## Accomplishments that we're proud of
We’re proud to have completed an end to end project with a couple of different applications and integrations. There were many hiccups along the way where we could have thrown in the towel but we chose to stay up and work hard to complete our project.
## What we learned
Throughout the course of 32 hours, tireless effort was put into making a product that met our expectations. We learned that perseverance can get you a long way and how much of an itch a good project can scratch. Technically, we were able to learn more about integrations, databases, data analytics, mobile applications, watch applications and EEG analysis.
## What’s next?
BrainBeatReal has a lot of potential on the analytics side; we believe that users can gain a lot of insightful information through the analysis of their heartbeat and brain wave data. As a result, we want to integrate a streaming table and a data pipeline to better capture and aggregate data in real time. | winning |
## Inspiration
Over the past 30 years, the percentage of American adults who read literature has dropped about 14%. We found our inspiration. The issue we discovered is that, due to the rise of modern technologies, movies and other films are more captivating than reading a boring book. We wanted to change that.
## What it does
By implementing Google’s Mobile Vision API, Firebase, IBM Watson, and Spotify's API, Immersify first scans text through our Android application using Google’s Mobile Vision API. Once the text is stored in Firebase, IBM Watson’s Tone Analyzer deduces its emotion. A dominant emotion score is then sent to Spotify’s API, where the appropriate music is played for the user. With Immersify, text can finally be brought to life and readers can feel more engaged with their novels.
## How we built it
On the mobile side, the app was developed using Android Studio. The app uses Google’s Mobile Vision API to recognize and detect text captured through the phone’s camera. The text is then uploaded to our Firebase database.
On the web side, the application pulls the text sent by the Android app from Firebase. The text is then passed into IBM Watson’s Tone Analyzer API to determine the tone of each individual sentence with the paragraph. We then ran our own algorithm to determine the overall mood of the paragraph based on the different tones of each sentence. A final mood score is generated, and based on this score, specific Spotify playlists will play to match the mood of the text.
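A hedged Python sketch of that web-side flow is below. SDK choices, credentials, and the tone-to-playlist mapping are placeholders; the original app may call both services from JavaScript rather than Python.

```python
# Hedged sketch: per-sentence tones from Watson are reduced to one dominant mood,
# which then picks a Spotify playlist. Credentials and mappings are placeholders.
from collections import defaultdict

import spotipy
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_watson import ToneAnalyzerV3
from spotipy.oauth2 import SpotifyClientCredentials

tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=IAMAuthenticator("WATSON_API_KEY"))
tone_analyzer.set_service_url("WATSON_SERVICE_URL")
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(client_id="SPOTIFY_ID", client_secret="SPOTIFY_SECRET"))

MOOD_QUERY = {
    "joy": "upbeat acoustic",
    "sadness": "melancholy piano",
    "anger": "dark intense orchestral",
    "fear": "tense cinematic strings",
}

def dominant_mood(passage: str) -> str:
    """Sum each tone's score across sentences and keep the strongest overall tone."""
    analysis = tone_analyzer.tone({"text": passage}, content_type="application/json").get_result()
    totals = defaultdict(float)
    for sentence in analysis.get("sentences_tone", []):
        for tone in sentence["tones"]:
            totals[tone["tone_id"]] += tone["score"]
    return max(totals, key=totals.get) if totals else "joy"

def playlist_for(passage: str) -> str:
    """Map the dominant mood to a Spotify playlist the reader can listen to."""
    query = MOOD_QUERY.get(dominant_mood(passage), "ambient reading music")
    result = sp.search(q=query, type="playlist", limit=1)
    return result["playlists"]["items"][0]["external_urls"]["spotify"]
```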
## Challenges we ran into
Getting Firebase to cooperate with both our mobile app and our web app was difficult for our whole team. Querying the API took multiple attempts, as our POST request to IBM Watson was out of sync. In addition, the text recognition function in our mobile application did not perform as accurately as we anticipated.
## Accomplishments that we're proud of
Some accomplishments we’re proud of are successfully using Google’s Mobile Vision API and IBM Watson’s API.
## What we learned
We learned how to push information from our mobile application to Firebase and pull it through our web application. We also learned how to use new APIs we never worked with in the past. Aside from the technical aspects, as a team, we learned collaborate together to tackle all the tough challenges we encountered.
## What's next for Immersify
The next step for Immersify is to incorporate this software with Google Glass. This would eliminate the two-step process of having to take a picture in the Android app and then go to the web app to generate a playlist.
## How to run:
* Clone the package
* Pull from anas-dev and dev to get all the packages
* Run **npm run postinstall** at the root to install the dependencies of all sub-packages
* Navigate to each exercise and run the command **npm run start**
## CodaCat is the perfect place for children to learn about coding with Coda in a simple and fun way. It includes many levels with multiple concepts to help children learn to code from a young age.
We were inspired by the teacher's pet and the warm and fuzzy categories to create something cute and easy to pick up for children.
### There are five main concepts included in the game, each with levels of varying difficulty:
### Inputting and Outputting
A task where the player has to make the cat get onto the screen by simply using .show().
A small interaction between Coda and the player where the player has to output a greeting and receives a greeting back from Coda.
An assignment that includes positions and the interaction with the cursor and mouse. The player has to code the position of the mouse and this makes the eyes of the cat look in that direction when the mouse key is pressed.
### If and Else Statements
For this task, the player writes if statements to build a game of finding where Coda is hiding on the screen. The player has to change the volume of a 'meow' based on the distance away from Coda, so that it gets louder as you get closer.
This assignment will include Coda walking back and forth from left to right on the screen. The player has to create if statements for when Coda reaches the right end of the screen to change directions and walk the opposite way and also do the same when reaching the left end.
### Loop Structures
The player creates a program with a for loop that chooses the number of cats on each row to be outputted on the screen.
### Arrays
Coda will be showing off his dance moves, and the player's job is to understand the idea of arrays by creating and choosing the order of the dance moves. The dance moves will then be repeated based on the order created by the player.
The player has to write slightly more complicated code that includes a bubble sort. There will be an array of cats of various heights, and the job of the player is to order the cats from shortest to tallest from left to right. This will show the cats moving around as they are sorted, illustrating the idea of a bubble sort.
### Functions
Coda and his friends are hungry for food. The player's job is to create a function that receives the number of cats and gives each of them food to be happy and healthy.
### Others
Flocking Concept - The cats will run toward food placed at a certain position on the edge of the screen. All the cats try to reach the food and move in groups, but also do not want to collide with each other. The player has to figure out the mechanism and make it work. (This concept is still being worked out.)
The internet is filled with user-generated content, and it has become increasingly difficult to manage and moderate all of the text that people are producing on a platform. Large companies like Facebook, Instagram, and Reddit leverage their massive scale and abundance of resources to aid in their moderation efforts. Unfortunately for small to medium-sized businesses, it is difficult to monitor all the user-generated content being posted on their websites. Every company wants engagement from their customers or audience, but they do not want bad or offensive content to ruin their image or the experience for other visitors. However, hiring someone to moderate or build an in-house program is too difficult to manage for these smaller businesses. Content moderation is a heavily nuanced and complex problem. It’s unreasonable for every company to implement its own solution. A robust plug-and-play solution is necessary that adapts to the needs of each specific application.
## What it does
That is where Quarantine comes in.
Quarantine acts as an intermediary between an app’s client and server, scanning the bodies of incoming requests and “quarantining” those that are flagged. Flagging is performed automatically, using both pretrained content moderation models (from Azure and Moderation API) and an in-house machine learning model that adapts to meet the needs of the application’s particular content. Once a piece of content is flagged, it appears in a web dashboard, where a moderator can either allow or block it. The moderator’s labels are continuously used to fine-tune the in-house model. Together, the in-house model and the pretrained models form a robust meta-model.
## How we built it
Initially, we built an aggregate program that takes in a string and runs it through the Azure moderation and Moderation API services. After combining the results, we compare them with our machine learning model's output to make sure no other potentially harmful posts make it through our identification process. That data is then stored in our database. We built a clean, easy-to-use dashboard for the grader using React and Material UI. It pulls the flagged items from the database and displays them on the dashboard. Once the grader makes a decision, it is sent back to the database and the case is resolved. We incorporated this entire pipeline into a REST API, so our customers can pass their input through our programs and then access the flagged items on our website.
Users of our service don’t have to change their code, simply they append our url to their own API endpoints. Requests that aren’t flagged are simply instantly forwarded along.
## Challenges we ran into
Developing the in-house machine learning model and getting it to run on the cloud proved to be a challenge, since the parameters and size of the in-house model are in constant flux.
## Accomplishments that we're proud of
We were able to make a super easy-to-use service. A company can add Quarantine with less than one line of code.
We're also proud of our adaptive content model that constantly updates based on the latest content blocked by moderators.
## What we learned
We learned how to successfully integrate an API with a machine learning model, database, and front-end. We had learned each of these skills individually before, but we had to figure out how to combine them all.
## What's next for Quarantine
We have plans to take Quarantine even further by adding customization to how items are flagged and handled. Certain locations are known to route a disproportionate amount of spam, so we could analyze the regions harmful user-generated content is coming from. We are also keen on monitoring the stream of activity of individual users, as well as tracking requests in relation to each other (to detect mass spamming). Furthermore, we are curious about adding the surrounding context of the content, since it may be helpful in the grader's decisions. We're also hoping to leverage the data we accumulate from content moderators to help monitor content across apps using shared labeled data behind the scenes. This would make Quarantine more valuable to companies as it monitors more content.
## Inspiration
The simple purpose of this project was to create something helpful but also fun. Inspired by the theremin, this instrument requires no physical contact to play. Thus, it is great for people who cannot play an instrument due to conditions like arthritis or Parkinson's. Additionally, it is helpful for developing fine motor skills, as a therapeutic medium, or for ear training, and can be enjoyed by people of all ages.
## What it does
The Leap Motion tracks your hand movements and plays a note accordingly. Using your right hand, you control the frequency, while your left hand controls volume. To play, all you need to do is pinch the fingers on your right hand. The x-axis controls pitch modulation to produce a vibrato, so by shaking your right hand side to side, you can create a sound that mimics the human voice. This hands-free system has a game mode and a free play mode. In game mode, you can follow the purple bar depicted on the screen to play some of your favorite songs, whereas in free play, you are free to create and practice. You can also create and upload your own songs for others to play and enjoy.
## How we built it
We used HTML for the structure of the project. For styling, we used CSS. The Leap Motion functionality was implemented with JavaScript and jQuery. We also added PHP to support the uploading of songs and used a MySQL database to store the data. Much of our process involved trial and error, since it was our first time using the Leap Motion.
## Challenges we ran into
We had difficulty implementing the Web Audio API, as well as creating the game with JavaScript due to delays and functions running asynchronously. We also had difficulty setting up the Leap Motion, since we were new to the process.
## Accomplishments that we're proud of
Figuring out the hardware.
Trying new things.
Finishing off with a good product.
## What we learned
None of us have had experience with the Leap Motion before, so we learned how much hard work goes into learning about a new piece of hardware and working in a new environment.
## What's next for Pinch Perfect
Sleep. Adding a login and user base. | ## Inspiration
Our inspiration was to improve the learning experience with technology through inuitive controls, efficient motions and minimal design. We also recognize the diversity of learning styles different individuals possess and wanted to push the boundaries of conventional learning.
## What it does
We used the Leap Motion device to detect and translate hand gestures into practical digital actions. It serves as a highly customizable controller that can be integrated and personlized to the user's preferences.
## How we built it
We processed the Leap Motion's three-dimensional vectorized data to define custom hand gestures by analyzing individual finger positions, directions, and movement. We then used a real-time Firebase back-end to allow the controller to be connected with other devices.
## Challenges we ran into
Creating accurate, consistent, and distinct hand gestures was a lot more challenging than we anticipated. However, we managed to utilize the data measured to eventually create reliable gestures.
## Accomplishments that we're proud of
We are proud of implementing a fully operational, unique and nontraditional medium of approaching technology education.
## What's next for SynHaptic
We would like to improve the human-computer interaction through more gestures and more accurate sensing. We also envision SynHaptic's gesture-based learning being applied to the classroom space, with multiple users (students and teachers), as well as curriculum or concept-based features. | ## Inspiration
After learning about the current shortcomings of disaster response platforms, we wanted to build a modernized emergency services system to assist relief organizations and local governments in responding faster and appropriately.
## What it does
safeFront is a cross between next-generation 911 and disaster response management. Our primary users are local governments and relief organizations. The safeFront platform provides organizations and governments with the crucial information that is required for response, relief, and recovery by organizing and leveraging incoming disaster related data.
## How we built it
safeFront was built using React for the web dashboard and a Flask service housing the image classification and natural language processing models to process the incoming mobile data.
## Challenges we ran into
Ranking urgency of natural disasters and their severity by reconciling image recognition, language processing, and sentiment analysis on mobile data and reported it through a web dashboard. Most of the team didn't have a firm grasp on React components, so building the site was how we learned React.
## Accomplishments that we're proud of
Built a full stack web application and a functioning prototype from scratch.
## What we learned
Stepping outside of our comfort zone is, by nature, uncomfortable. However, we learned that we grow the most when we cross that line.
## What's next for SafeFront
We'd like to expand our platform for medical data, local transportation delays, local river level changes, and many more ideas. We were able to build a fraction of our ideas this weekend, but we hope to build additional features in the future. | losing |
## Inspiration
At the start of this hackathon, our entire team began by brainstorming a plethora of ideas on the whiteboard. We spent numerous hours ideating, debating and voting on the one idea that we thought would not only be the most innovative hackathon project but also attempt to tackle an emerging technology that we believed was going to be the future of tomorrow. As we decided to take a break, one of our team member's started scrolling twitter when they discovered an individual by the name of [Nathan Gitter](https://twitter.com/nathangitter/status/1015645660509540353?lang=en). One of the latest projects he was working on and had tweeted about was imagining future AR interactions with wearables. And then it hit us....Augmented Reality (or AR) was a fairly untapped space, yet the possibilities of creating an interactive user experience were endless.
## What it does
InstAR brings augmented reality into life at the palm of your hands. With the use of an iPhone app it uses the iPhone’s camera to analyze the surroundings around the user. When the camera detects a certain image, additional information is shown in the form of augmented reality. For example, when the app detects the poster for the movie Spiderman – Into the Spiderverse it starts playing the trailer for the movie on the iPhone, overlaying the image of the advertisement. This allows something such as a static poster to be turned into an interactive activity for the user. It creates a more exciting experience for the user as well as boots promotion for the movie.
## How we built it
![home screen](https://i.imgur.com/4BzNyuH.png) ![AR Banner](https://i.imgur.com/LhsLHVI.jpg) ![AR Movie poster](https://i.imgur.com/ueRGdOo.jpg) ![AR Apple Watch app extension](https://i.imgur.com/03l3h0e.jpg)
We used Xcode to make use of Swift and ARKit to develop our application. We were able to upload this app to an iPhone which we used for testing. Since we did not have access to movie posters, we used an iPad to display a movie poster in its place. We did additional testing with art on a piece of paper to ensure that our app can detect content that is not being displayed on a screen. Our tests were successful, and we were able to demonstrate the functionality of our app.
## Challenges we ran into
The biggest challenge that we ran into while developing InstAR was truly getting started with the platform. None of the members on the team had ever worked on any sort of AR project in the past and it was definitely a little intimidating at first to learn a bunch of new platforms/languages in a span of just 24 hours.
## Accomplishments that we're proud of
As a team, we believe that our biggest accomplishment is being able to overcome our obstacle of not having past experience with AR. By the end of the hackathon not only were we able to learn a new language/platform but also go once step further and actually apply our learnings to a variety of different use cases that we see a potential for AR to grow in, by showcasing our working application.
## What we learned
On the technical side, we learned how to use Swift and ARKit. This was the first time for all of us using these tools. By researching we learned how AR is being implemented in several different ways around the world, such as in the video that inspired us to use AR. From brainstorming and coming up with the idea for our application we learned about the ways that AR could possibly affect our lives in the future.
## What's next for InstAR
There is a lot of prospect in the future of InstAR. This technology could also be used to promote events outside of movies and hackathons. It could be used to share links regarding more information about certain events, or play a sample of a live concert that a poster is advertising for. Beyond that, we believe that InstAR could be used in other places such as museums. For example, at the ROM there are skeletons of dinosaurs placed out in the form of how they once lived. Using AR, the user could look at the dinosaur bones through their iPhones and view on their screens how the dinosaurs once looked millions of years ago. This would take advantage of the technology available to us and make museums much more interactive for users of all ages. We truly believe that InstAR has the potential to be very prominent in the future. | ## Inspiration
Sometimes we will wake up in the morning and realize that we want to go for a short jog but don't want to be out for too long. Therefore it would be helpful to know how long the route is so we can anticipate the time we would spend. Google Maps does have a feature where we can input various destinations and find their total distance, but this typically requires a precise address where a runner would not necessarily care for.
## What it does
The user selects a general loop around which they want to run, and the Map will "snap" that path to the closest adjacent roads. At the click of a button, they will also be able to find the total distance of their route. If mistakes were made in generating the route, they can easily clear it and restart.
## How I built it
We used Google Cloud, particularly Maps and integrated that into Javascript. We looked through documentation to find strategies in determining the device location and route planning strategies through sample code in the API. We built on top of them to generate desirable polylines and calculate distances as accurately as possible. Additionally, we used web development (HTML and CSS) to build a simple yet attractive interface for the software.
## Challenges I ran into
A more practical use of this type of application is obviously a mobile application for easy access to . We spent countless hours trying to learn Java and work with Android Studio, but the complexity of all the libraries and features made it extremely difficult to work with. As a result, we transitioned over to a desktop web server, as we were slightly more comfortable working with Javascript. Within the web app, we spent a lot of time trying to implement polylines and snapping them to roads properly.
## Accomplishments that I'm proud of
We were able to make polylines work out, which was the most difficult and core part of our hack.
## What I learned
Always look for documentation and search for answers online. Javascript has a lot of resources to learn and use, and is very flexible to use. We definitely improved my knowledge of web development through this hack.
## What's next for Run Master
We are going to automate a method to have the user *input* a distance and the application with generate a suggested loop. It is much more difficult but will definitely be a very useful feature! | ## Inspiration
A couple weeks ago, a friend was hospitalized for taking Advil–she accidentally took 27 pills, which is nearly 5 times the maximum daily amount. Apparently, when asked why, she responded that thats just what she always had done and how her parents have told her to take Advil. The maximum Advil you are supposed to take is 6 per day, before it becomes a hazard to your stomach.
#### PillAR is your personal augmented reality pill/medicine tracker.
It can be difficult to remember when to take your medications, especially when there are countless different restrictions for each different medicine. For people that depend on their medication to live normally, remembering and knowing when it is okay to take their medication is a difficult challenge. Many drugs have very specific restrictions (eg. no more than one pill every 8 hours, 3 max per day, take with food or water), which can be hard to keep track of. PillAR helps you keep track of when you take your medicine and how much you take to keep you safe by not over or under dosing.
We also saw a need for a medicine tracker due to the aging population and the number of people who have many different medications that they need to take. According to health studies in the U.S., 23.1% of people take three or more medications in a 30 day period and 11.9% take 5 or more. That is over 75 million U.S. citizens that could use PillAR to keep track of their numerous medicines.
## How we built it
We created an iOS app in Swift using ARKit. We collect data on the pill bottles from the iphone camera and passed it to the Google Vision API. From there we receive the name of drug, which our app then forwards to a Python web scraping backend that we built. This web scraper collects usage and administration information for the medications we examine, since this information is not available in any accessible api or queryable database. We then use this information in the app to keep track of pill usage and power the core functionality of the app.
## Accomplishments that we're proud of
This is our first time creating an app using Apple's ARKit. We also did a lot of research to find a suitable website to scrape medication dosage information from and then had to process that information to make it easier to understand.
## What's next for PillAR
In the future, we hope to be able to get more accurate medication information for each specific bottle (such as pill size). We would like to improve the bottle recognition capabilities, by maybe writing our own classifiers or training a data set. We would also like to add features like notifications to remind you of good times to take pills to keep you even healthier. | losing |
## Inspiration:-
The inspiration behind Eat-O-Pia comes from its name, which is to build a utopia for healthy eaters. Unhealthy dietary habits and their detrimental impact on the human body is a profound concern challenging today’s generation. According to WHO, unhealthy diets are one of the leading risk factors for diseases such as obesity, heart disease and diabetes, which collectively contribute to millions of deaths per year. In a world overrun by fast food chains, our platform aims to empower individuals to make healthier food choices, one meal at a time.
Our application brings our vision to life by providing a social media platform that fosters a supportive community which serves as a beacon of encouragement to improve everyone’s food habits.
## Purpose:-
At the heart of Eat-o-pia is a simple yet profound purpose: to build a community of fit foodies and promote healthy eating. We envision a world where every meal is a celebration of well-being, and our app is the catalyst for this transformation. We understand that eating is a social experience, and by connecting people with a shared passion for nutritious food, we can make the journey to a healthier lifestyle more enjoyable and sustainable.
## Features:-
Recipe Exchange: Users can share photos of their delicious, healthy meals and provide the recipes, inspiring others to explore nutritious cooking.
Health Rating: Our unique health rating system empowers users to rate the healthiness of shared dishes, promoting accountability and education.
Progress Tracking: Users receive feedback on their food choices based on community ratings, fostering self-awareness and motivating better dietary decisions.
Monthly Challenges: Engage in various healthy eating challenges within our "Foodie" community, promoting exploration and growth.
Avo, Your AI Companion: Meet Avo, our AI chatbot companion, designed to teach users how to cook the dishes they aspire to try, providing guidance and answering culinary questions.
Encouragement to Cook: Our platform is a wellspring of inspiration for cooking nutritious food at home, making healthy eating accessible and enjoyable.
## How Did we build it?
The development process can be broken down in 5 steps:-
1) Ideation:- The project began with a clear conceptualization of the idea. The app's core features, were decided based on the platform's primary mission of promoting healthy eating.
2) Design and UI:- User experience (UX) and user interface (UI) design were critical components, ensuring that the app would be intuitive and appealing to users. Design decisions were made to align with the project's mission, creating an engaging and user-friendly environment for users to explore and share nutritious food experiences.
3) Technology Stack Selection:-
Choosing the right technology stack was crucial for the project's success. The team selected a combination of technologies and tools that included:
**CockroachDB**: As the database solution, CockroachDB was chosen for its distributed, resilient, and highly available architecture. It provided the foundation for data management and real-time updates.
**together.ai**: The development team leveraged together.ai's natural language processing capabilities to create Avo, the AI chatbot companion. This technology helped in providing conversational and educational support to users.
**React Native**: For building the mobile app, React Native, a popular framework for cross-platform development, was used. It allowed for the development of a single codebase that could run on both iOS and Android.
**Web Technologies**: For creating the interactive web platform, standard web technologies like HTML, CSS, and JavaScript were employed.
4) Development:-
Implementing and building the mobile application
5) Testing:-
User testing (tested by different users including our team members)
What did we learn?
1) The power of community: As the saying goes “A community is like a mosaic. The pieces alone are beautiful, but together they create a masterpiece”. Thus the best way to address the issue of unhealthy food habits is to build a vibrant community of individuals passionate about healthy eating where each individual inspires and motivates others to make better food choices thus helping everyone collective achieve their goals.
2) Integration of together.ai API: Our journey led us to harness the potent capabilities of the together.ai API, marking our first foray into this innovative technology. This novel experience equipped us with the means to create 'Avo,' our artificial intelligence chatbot. The process of working with this API and leveraging its diverse models to shape Avo's conversational and educational prowess was instrumental in achieving our mission. This new endeavor introduced us to the exciting world of AI, expanding our horizons and adding a layer of innovation to our project.
3) Impact on public health: Our journey has taught us that the pursuit of healthier living is not just a personal endeavor; it's a collective mission that thrives on the support, inspiration, and education shared within a vibrant community. We've learned that every small step towards better food choices can lead to significant improvements in our overall well-being.
## Challenges Faced ? :
1) Keeping users engaged:- Keeping users engaged and motivated to make healthier choices was one of the major challenges. Community challenges was one way to keep users engaged.
2) Scalability:- As the user base of Eat-O-Pia grew, ensuring that the platform could handle real-time interactions and maintain data integrity became a challenge
3) User-Generated Content Quality:- The third significant challenge we encountered in the development of Eat-O-Pia was maintaining the quality and integrity of user-generated content. User-generated content is at the core of our platform, as users share photos of their meals, rate the healthiness of other dishes, and exchange recipes. Ensuring that the content contributed positively to the community and aligned with our mission of promoting healthy eating was a critical challenge.
4)Professional and Ethical Content: We faced the challenge of ensuring that the content shared on the platform was not only accurate in terms of healthiness ratings but also related and ethical. It was vital to prevent the spread of misleading or harmful information. We addressed this challenge by implementing content guidelines and a checkbox for accepting the terms of ethical content while creating account.
## Languages:-
JavaScript (for frontend development)
Python (for AI chatbot development)
## Frameworks:
React Native (for mobile app development)
Flask (for connecting database to front-end)c
## Web Technologies:
HTML, CSS (for web platform development)
AI and Natural Language Processing:-
together.ai (for AI chatbot development)
## Databases:-
1.CockroachDB (distributed, resilient, and highly available database)
2.Firebase (storage, authentication, and highly available database)
## Cloud Services:-
Intel (utilized for cloud infrastructure and services)
Mobile Development Tools:-
Expo (for making a universal native app for iOS, Android, and the web)
## Version Control:-
Git (for version control and collaboration)
These technologies, including Intel for cloud services, were thoughtfully chosen to create a robust, user-friendly, and scalable platform that fulfills the vision of Eat-O-Pia in promoting healthy eating and building a supportive community. | ## Inspiration: It's time that everyone became less confused about how to eat right. We thought this was the perfect opportunity to explore solutions.
## What it does: This iOS app automates meal recommendations based on the the user's inputted characteristics, medical condition, and goals. Many of us face decision fatigue when we're confused about how to use leftover ingredients, how to make a healthy meal out of it, and what to make for the next meal. Our object-detection feature facilitates these decisions by recommending recipes that will fit the user's nutritional needs, goals, budget, and preferences. The user has the option to track a serving of the recipe they've made which will be analyzed by an existing API (on Edamam-Nutrition) so that users can get immediate feedback about how they can reach their nutrition targets for the day (with suggestions: "have you tried this recipe?", "this recipe may help you achieve your protein goals for the day", etc). User's ultimately gain time by spending less time deliberating and more time cooking and eating well; the app simultaneously tackles food waste by encouraging the use of leftovers.
## How I built it: We built a mobile application through SwiftUI, and used ARKit2 for object-detection (of food). For our own learning experience, we attempted to build the training model from scratch and spent hours collecting data to detect enough detail on different fruits. Once the food was detected, users are given an option to find recipes containing those detected foods which was enabled with the use of Edamam-API. The nutrition facts were extracted from those recipes from the same API to be compared with the the individual's nutrient requirements.
## Challenges I ran into: 3-D detecting the food was a great hurdle to say the least. Learning how to update every part of my macOS system and downloading packages for more than a few hours was a learning curve as well for the new hacker in the team.
## Accomplishments that I'm proud of: We're proud about working with complete strangers for the first time, being extremely resourceful throughout, learning completely novel technologies, having fun, and overcoming the hottest sauce alive. The experienced were very patient to the newbies and the newb was willing to learn.
## What I learned: We're proud that we know how to navigate ARKit2 and Swift, which were two technologies that we are grateful to learn.
## What's next for Smhacked: Smhacked was a medium for a group of passionate, perserving, and patient hackers to unite and explore a completely new technology. Each individual tackled an unfamiliar territory, and these skills will transcend Smhacked into future endeavors because of the people we met and the programs we risked to explore. Smhacked was an adventure that may see the day again with sudden inspiration or with greater practice with the programs we used. | ## Inspiration
We wanted to explore what GCP has to offer more in a practical sense, while trying to save money as poor students
## What it does
The app tracks you, and using Google Map's API, calculates a geofence that notifies the restaurants you are within vicinity to, and lets you load coupons that are valid.
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React-Native was fairly new, linking a python script to a Node backend, connecting Node.js to react-native
## What we learned
New exposure APIs and gained experience on linking tools together
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants. | losing |
# yhack
JuxtaFeeling is a Flask web application that visualizes the varying emotions between two different people having a conversation through our interactive graphs and probability data. By using the Vokaturi, IBM Watson, and Indicoio APIs, we were able to analyze both written text and audio clips to detect the emotions of two speakers in real-time. Acceptable file formats are .txt and .wav.
Note: To differentiate between different speakers in written form, please include two new lines between different speakers in the .txt file.
Here is a quick rundown of JuxtaFeeling through our slideshow: <https://docs.google.com/presentation/d/1O_7CY1buPsd4_-QvMMSnkMQa9cbhAgCDZ8kVNx8aKWs/edit?usp=sharing> | ## Inspiration
When visiting a clinic, two big complaints that we have are the long wait times and the necessity to use a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (For example, someone with Parkinsons disease writing with a pen). In response to these problems, we created Touchless.
## What it does
* Touchless is an accessible and contact-free solution for gathering form information.
* Allows users to interact with forms using voices and touchless gestures.
* Users use different gestures to answer different questions.
* Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no.
* Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated.
* Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices.
## How we built it
* Gesture and voice components are written in Python.
* The gesture component uses OpenCV and Mediapipe to map out hand joint positions, where calculations could be done to determine hand symbols.
* SpeechRecognition recognizes user speech
* The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises.
* We use AWS Gateway to open a connection to a custom lambda function which has been assigned roles using AWS Iam Roles to restrict access. The lambda generates a secure key which it sends with the data from our form that has been routed using Flask, to our noSQL dynmaoDB database.
## Challenges we ran into
* Tried to set up a Cerner API for FHIR data, but had difficulty setting it up.
* As a result, we had to pivot towards using a noSQL database in AWS as our secure backend database for storing our patient data.
## Accomplishments we’re proud of
This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective.
## What we learned
We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects.
## What’s next for Touchless
In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components. | ## Inspiration
As students and programmers we can spend countless iterations and follow ups prompting ChatGPT to give us the answer that we are looking for. Often before we reach a working solution we've already used up our tokens for ChatGPT 4.0. Not only is prompting time consuming, but also token-consuming. Thus, we decided to create something convenient and practical that we can use to help us engineer our prompts for large language models.
## What it does
Promptli is a prompt engineering chrome extension that provides personalized suggestions and improves your prompts as you type them live in the chatbot. By using frameworks from our research into prompt engineering, Promptli analyzes the prompt on the following dimensions:
1. Identify any ambiguities, vague terms, or areas that could benefit from more detail or context.
2. Assess the logical flow and organization of the prompt.
3. Evaluate the word choice, vocabulary precision, and overall readability.
4. Determine whether the prompt is appropriately focused and scoped for its intended purpose.
5. Making prompts more "AI-friendly"
Make the most out of AI with Promptli, your personal prompt engineer.
## How we built it
We built a chrome extension using:
* Javascript for the back-end server, scripting, and content rendering
* HTML & CSS for a user-friendly UI/UX that seamlessly integrates with the chat interface
* Gemini API for Promptli’s prompt engineering capabilities and JSON file rendering
* 100ms processing time for real time prompt improvement
* Effortless rendering of new prompts
* Interactive and easy-to-use UI/UX with single click per prompt
## Challenges we ran into
\We began this project without prior experience with creating web extensions. Our first step was looking at resources from the Chrome Developers site. Specifically focused on chrome extension sample code and familiarizing ourselves with their API.
Our second challenge was integrating our extension with AI platforms in a seamless fashion that was easy to use and wouldn’t block users’ workflow. This posed greater difficulties than we had initially anticipated as we needed to inject our JS into the website's dynamic layout. After several iterations of designs and troubleshooting our extensions compatibility with the AI site’s code, we were able to create something that works alongside the website’s ever changing layout and is intuitive to use.
Lastly, having to work in a time-crunch with little to no sleep was definitely challenging but at the same time thrilling to build something with our friends that we truly would use and love as a product.
## Accomplishments that we're proud of
We're proud of building an app that we could use ourselves and provides practical value to any daily user of a chat bot. A lot of the time, the world seems overcrowded with artificial intelligence and machines so we are also are proud of creating a more human friendly experience that allows better communication between machine learning models and the human being.
## What we learned
I think the best learning experience was about ourselves and our capabilities when working with friends for something we were passionate about. This was a really fun experience and we loved hacking our persimmon app: Promptli.
Other things we learned:
* how to code a chrome extension
* how to create a back end server
* how to utilize javascript to call API and parse it through
* how to prompt engineer
* how to prompt chatbots to output file structure
* how to sleep under a table
## What's next for Promptli
We're currently working on integration with a wider range of generative AI services and hope to make the product more accessible with future partnerships. We also hope for a custom prompting model for long term sustainability fine tuned for writing good prompts. | winning |
## 🤔 Problem Statement
* The complexity of cryptocurrency and blockchain technology can be intimidating for many potential users.
* According to a survey by Finder, 36% of Americans are not familiar with cryptocurrencies, and 29% find it too complicated to understand.
* BitBuddy aims to create an easy-to-use, AI-driven chatbot that simplifies transactions and interactions with cryptocurrencies, using natural language processing and cutting-edge technology like OpenAI and ChatGPT.
* Research shows that 53% of people feel intimidated by the complexities of cryptocurrency and blockchain technology.
* BitBuddy aims to make crypto accessible to everyone by creating a system that allows users to send funds, mint NFTs, and buy crypto through a simple chat interface.
* BitBuddy aims to break down the barriers to entry that have traditionally limited the growth and adoption of crypto, making it more accessible to a wider range of people.
## 💡 Inspiration
* Shared passion for blockchain technology and its potential to revolutionize finance.
* Desire to simplify the process of interacting with cryptocurrencies and make it accessible to everyone.
* Belief in the potential of chatbots and natural language processing to simplify complex tasks.
## 🤖 What it does
* MintBuddy is a blockchain chatbot that simplifies the process of minting NFTs.
* It guides users through the process step-by-step using Voiceflow and Midjourney APIs.
* Users provide name, description, wallet address, and image URL or generate an NFT using Midjourney.
* The bot will mint the NFT and provide a transaction hash and ID, along with URLs to view the NFT on block explorers and OpenSea.
* It is compatible with various different chains and eliminates the need for users to navigate complex blockchain systems or switch between platforms.
* MintBuddy makes minting NFTs easy and accessible to everyone using AI-driven chatbots and cutting-edge APIs.
## 🧠 How we built it
* MintBuddy simplifies the process of interacting with cryptocurrencies and blockchain technology.
The platform utilizes a variety of cutting-edge technologies, including Voiceflow, Verbwire API, and OpenAI.
* Voiceflow is used to create the user flows and chatbot pathways that enable users to send funds, mint NFTs, and buy crypto.
* Verbwire API is integrated with the chatbot to generate unique and creative NFT art using prompts and have it minted on the blockchain within minutes.
* OpenAI's natural language processing and ChatGPT algorithms are used to extract key information from user requests and enable the chatbot to respond accordingly.
* The combination of these technologies creates a user-friendly and accessible platform that makes blockchain technology and cryptocurrencies accessible to everyone, regardless of their technical knowledge or experience.
## 🧩 Challenges we ran into
* Developing MintBuddy was challenging but rewarding for the team.
* One major challenge was integrating Metamask with Voiceflow as they cannot run asynchronous functions together, but they found an alternative solution with Verbwire after contacting Voiceflow customer service.
* Another challenge was developing the NFT minting feature, which required a lot of back-and-forth communication between the user and the bot, but they were able to create a functional and user-friendly feature.
* The team looks forward to tackling new challenges and creating a platform that simplifies the process of interacting with cryptocurrencies and makes it accessible to everyone.
## 🏆 Accomplishments that we're proud of
* Developed a user-friendly and accessible platform for MintBuddy
* Integrated OpenAI ChatGPT into the platform, proving to be a powerful tool in simplifying the process of interacting with cryptocurrencies
* Integrated Verbwire API securely and ensured users could confirm transactions at every stage
* Developed a chatbot that makes blockchain technology accessible to everyone
* Took a risk in diving into unfamiliar technology and learning new skills, which has paid off in creating a platform that simplifies the process of interacting with cryptocurrencies and blockchain technology
## 💻 What we learned
* Gained a deep understanding of blockchain technology and proficiency in using various blockchain services
* Acquired valuable skills in natural language processing and AI technology, including the integration of OpenAI ChatGPT
* Developed soft skills in communication, collaboration, and project management
* Learned how to manage time effectively and work under tight deadlines
* Realized the importance of education and outreach when it comes to cryptocurrencies and blockchain technology
## 🚀 What's next for
* MintBuddy aims to make cryptocurrency and blockchain technology more accessible than ever before by adding more user-friendly features, tutorials, information, and gamification elements.
* The platform is working on integrating more APIs and blockchain platforms to make it more versatile and powerful with seamless natural language processing capabilities.
* The platform plans to allow users to buy and sell crypto directly through the chatbot, eliminating the need for navigating complex trading platforms.
* MintBuddy wants to become the go-to platform for anyone who wants to interact with cryptocurrency and blockchain technology, empowering people to take control of their finances with ease.
* The platform simplifies the process of interacting with complex systems, making it accessible to everyone, regardless of their technical knowledge or experience, and driving mass adoption of cryptocurrencies and blockchain technology.
## 📈 Why MintBuddy?
MintBuddy should win multiple awards at this hackathon, including the Best Generative AI, Best Blockchain Hack, Best Use of Verbwire, and Best Accessibility Hack. Our project stands out because we've successfully integrated multiple cutting-edge technologies to create a user-friendly and accessible platform that simplifies the process of interacting with cryptocurrencies and blockchain technology. Here's how we've met each challenge:
* Best Generative AI: We've utilized Voiceflow and OpenAI ChatGPT to create an AI chatbot that guides users through the process of sending funds, minting NFTs, and buying crypto. We've also integrated Verbwire API to generate unique and creative NFTs for users without requiring them to have any design skills or knowledge.
* Best Blockchain Hack: We've successfully tackled the problem of complexity and lack of accessibility in interacting with cryptocurrencies and blockchain technology. With our platform, anyone can participate in the blockchain world, regardless of their technical expertise.
* Best Use of Verbwire: We've used Verbwire API to generate creative and unique NFTs for users, making it easier for anyone to participate in the NFT market.
* Best Accessibility Hack: We've made creating and generating NFTs easy, even for people without any technical knowledge. Our chatbot's natural language processing capabilities make it possible for anyone to interact with our platform, regardless of their technical expertise.
Overall, our project MintBuddy stands out because it tackles the issue of accessibility and user-friendliness in interacting with cryptocurrencies and blockchain technology. We're confident that our innovative use of technology and dedication to accessibility make MintBuddy a standout project that deserves recognition at this hackathon. | ## Inspiration
Our inspiration for TeddyTalk stems from the desire to create a magical and educational companion for children. In a world where technology is advancing rapidly, we wanted to apply the power of artificial intelligence to stimulate learning and communication in a friendly manner. Acknowledging that brain development is at its highest between ages 2 and 7, where access to technologies like AI and the internet is very limited, TeddyTalk provides a tool to enhance early education.
## What it does
TeddyTalk enables kids to chat with their teddy bear, enjoying storytelling answering questions, and playing games. A key feature is the parental control dashboard, allowing parents to regulate subjects, monitor conversations between their child and the toy, and access functionalities for a secure and personalized experience.
## How we built it
We constructed our system by implementing a central script on a Raspberry Pi, serving as the core component. This script manages voice recognition through the AssemblyAI API and utilizes the Mistral-7b Large Language Model (LLM) for text generation. The generated text is then forwarded to ElevenLabs for Text-to-Speech (TTS) voice generation.
Additionally, our system is designed to maintain a dynamic interaction with a parental dashboard, which is built using Vue.js. The dashboard facilitates communication with the Raspberry Pi by exchanging messages through an S3 AWS bucket. The main script on the Raspberry Pi uploads chat history and retrieves any instructions or updates from the parental dashboard, ensuring seamless integration and interaction between the user interface and the hardware.
## Challenges we ran into
Addressing privacy concerns and implementing robust security measures to safeguard the interactions and data within TeddyTalk, particularly considering its child-centric nature.
## Accomplishments that we're proud of
Achieving a user-friendly interface that makes TeddyTalk accessible and enjoyable for both children and parents
## What we learned
Understanding the crucial role of early years in shaping social and emotional development, alongside recognizing the rapid language development during this period.
## What's next for TeddyTalk
Exploring partnerships with educational experts and institutions to enhance TeddyTalk's educational content and align it with the latest developments in early childhood education.
Actively collecting and incorporating user feedback to refine and enhance TeddyTalk's features, addressing the specific needs and preferences of both children and parents.
Expanding TeddyTalk's compatibility with various smart devices, ensuring accessibility across different platforms and devices for a seamless user experience. | ## Inspiration
How many times have you been in a lecture in person or online and wished that you could discuss the current topic with someone since you're having a hard time understanding in real-time? How many times have you taken a picture of the whiteboard to look at it later but never ended up studying from it? What about the volunteer note-takers in every course for accessibility? We wanted to create an app that would fix all of these problems and more by allowing more collaboration and connection during lectures and more aid with note taking.
## What it does
It is a social media study app, where you can choose courses and for each lecture, there is a channel where students can talk to each other. Furthermore, each lecture has a notes section where they can send pictures of notes to that section and it will transcribe it which reduces the need for volunteer notetakers and helps people who feel like it's tough to follow the lecture in real time while taking notes.
## How we built it
We decided to use a MERN stack as it's simple and reliable for storing, processing, retrieving and displaying information. First, we created a basic entity relation of the data that we would be working with [ER](https://res.cloudinary.com/dgmplm2mm/image/upload/v1709472777/wdkyeqwvpru0nuhb9emu.png). We delegated tasks among ourselves to focus on design, routes and the front-end. [figma](https://www.figma.com/file/nx3HUKc4jtir3QrKJ4fSTN/uottahack?type=design&node-id=0%3A1&mode=design&t=hkRPnNNAdy5mNkti-1). We eventually started designing the REST requests necessary for the front-end [Ideas][<https://res.cloudinary.com/dgmplm2mm/image/upload/v1709472981/jlxo0iwfzqarwyue5azm.jpg>]. We used a Flask server to serve as an endpoint for the application to do gpt prompts. We also used Cloudinary to store pictures as MongoDB is not great at storing pictures. We deployed the flask and the express server on an ec2 instance so that the website once hosted on vercel could use the APIs.
## Challenges we ran into
1. Storing user images in a way that wasn't slow.
At first we wanted to store the images using base64 URIs that would represent the binary data of the image itself. This proved to be quite slow when sending and receiving requests, and it quickly bloated the mongodb cluster. To avoid this, we decided to store the images in a cloud platform and store the urls to those images in the mongodb cluster.
2. Setting up the domain
There was not much documentation for working with the website for creating the domain name. In addition, as it was our first time working with a domain name, we had a lot of difficulty working with it but in the end it worked out fine.
3. Setting up the EC2 instance
Although it was easy to setup initially, for some reason while the requests to the ec2 worked well with postman, our app when using the exact same requests were not able to get the responses.
## Accomplishments that we're proud of
We are proud to have completed the project.
We are proud to have setup our app on the domain name.
We are proud to have designed and set things up from the start to help us succeed!
## What we learned
We learned about the importance of designing entities and thinking about rest connections from the start.
We learned about the importance of designing front end look from beforehand.
We learned how to use Cloudinary in MERN applications to hold images.
We learned how domains worked.
## What's next for Acadameet
In the future, we want to add many more of the features we wanted to add, such as group creation helpers, group assignment meeting automations and various other features that would make studying or doing group work much easier. We believe that this is important for the future of study | partial |
## Inspiration
I tried to find any sort of backend tools to build apis with python, but there are none! So I made my own.
This was my original design for a no trus
![PlantUML](https://www.plantuml.com/plantuml/png/VP1H2i8m38RVUufTO3SG6NEFFWZJ0uGiRc6raJJp_BR3Gl5XUKk-_EGZNJHGsrxUEJS8ckX8-Y5jsdFJUy4L5-3WRc1CmGgRZkWfmL4y8FKgAwdRgvxiAuyGjtTASAJfuY56VZA2iOJxeWGsa17czZ-QZrzbdab_njcrTYy-JRibKikLnA5fHs5A3_a2)
But there are so few libraries to support the authentication flow for metamask that I had to rewrite the metamask auth flow from scratch, and thats what I built! Hopefully this project will allow more frequent use of blockchain authentication on the backend.
## What it does
![UML Diagram](https://www.plantuml.com/plantuml/png/ZP51YiCm34NtEOKEC7E1MSo4APHcqpi8Hqr4Mnai9QMthr2o2C6KxcpyzFuNRjMms7I_DSUInKXw-Fw5VqnNF_v0Ge6k0-L1WEMlYOU52JwWaGe1Ao18baHrqsc5RAJGKG_deAlA3aQS2MOgV657dtagY_rh4wUaobd0iXj2j4yTo43eyQxO0MMD2sCgP2wCZlLkcqts_AURBsFLnkf5jceMI9ZyZgevCHhr7hTRlVLH_yCN)
Full flow for a low trust backend
![UML2](https://www.plantuml.com/plantuml/png/RSyn3i8m38NXFQTu1u0BC435WCZ27QBK40jmfDYjnErn8KE7Rf7o-RSygALcq_iLBgsAaOpO7t5E-UdF0t8In0ZbXu3b59QFRhftLAWHM94WLJ9jbwuTMQ5VZaRS5hsTZ2Xf9ipK-CvEFtLg__fkjKv9bQl0gXV2u9D-o1S0)
This is the design of our library, it authenticates the user by verifying a signed message and provides a key that can be saved in memory and used to encrypt top secret data that only the user can read.
## How we built it
The library is designed for FastAPI in python, but it should work with flask as well. I also wrote the frontend flow in plain Typescript.
## Challenges we ran into
The lack of documentation and lack of support in general. Very few people use blockchain for backend authentication but they should! It's very convenient and can be pseudonymous.
## Accomplishments that we're proud of
Getting as far as I did, the entire project is close to 300 lines of code and it was very difficult to debug. Next step is definitely to include detailed black box tests ensuring that our middleware is secure.
## What we learned
Metamask auth is tricky
## What's next for Metamask Authorization Middleware for FlaskAPI
Blackbox testing, and integration with popular libraries | ## Inspiration
We know investing can be scary for finance newbies. How about sparing 1% of every transaction?
Even in a digital-only bank, clients will *always* want to make sound decisions and feel secure in their investments. With ISET, we *ensure* that this fundamental goal is met.
## What it does
Every time you make a purchase or deposit a cheque, we put away 1% (or higher if you are down for it) of the amount into an investment account. All you need to do is select a preference for the investment: Low risk? High return? Support sustainability? We got you! In the end, you receive interest as well as reward points for your investment.
ISET educates clients with graphs detailing not only the progress of your investment but also the results of other pathways. What would have happened if you invested 5% more? 5% less? With this knowledge in hand, we guide you to become a better investor with the ease and transparency you need online. Then, you'll be able to make better use of ISET's customizable investment options,
This mobile application can be a stand-alone app or a feature in a banking app. ISET shows you that investment is just as easy as putting away pennies.
## Accomplishments that we're proud of
Completing the entire implementation part in 6 hours. The sleek UI design. The collaboration between four new friends and the way we kept our morale up as if we'd known each other for more than 36 hours. And of course, the working final project.
## How we built it
The front-end design ISET involved the use of Figma, Canva, and Photoshop. The back-end employed Android Studio and Capital One's Nessie API with Java as the programming language of choice.
## What we learned
tl;dr: The importance of caffeine, juice boxes, weird-flavoured-but-surprisingly-good bubble tea, and of course, friends.
Those of us who can call UofTHacksVII our first hackathon learned about both the atmosphere of hackathons and the new technologies listed above, improving our social and technical skills! Of course, the more experienced members learned how to introduce these concepts to beginners, improving their mentorship and articulation.
## Challenges we faced
The brainstorming process was a huge pain. It took half the time. Beginners had to quickly grasp new skills, feeling frustrated when software did not work as expected and not knowing how to fix it. Some parts didn't even work together: for example, the UI had to be completely remade in the proper resolution. Experienced members had to take a leading role and gauge what ideas were possible with existing resources, making brainstorming take longer than possible. The team also faced general problems like sleeping and meeting up (with a snowstorm and a doctor's appointment here and there).
## What's next for ISET
The possibilities for ISET are endless. ISET can be fully integrated into a digital banking app, combining more features to create a comprehensive online bank. Or, as data is inputted, ISET could process the user's preferences to provide a truly personalized experience. As a digital application, ISET could provide summaries and statistics for clients that a person could not compute on the spot! Overall, ISET can be improved and upgraded to the ideal online banking experience. | ## Introduction
[Best Friends Animal Society](http://bestfriends.org)'s mission is to **bring about a time when there are No More Homeless Pets**
They have an ambitious goal of **reducing the death of homeless pets by 4 million/year**
(they are doing some amazing work in our local communities and definitely deserve more support from us)
## How this project fits in
Originally, I was only focusing on a very specific feature (adoption helper).
But after conversations with awesome folks at Best Friends came a realization that **bots can fit into a much bigger picture in how the organization is being run** to not only **save resources**, but also **increase engagement level** and **lower the barrier of entry points** for strangers to discover and become involved with the organization (volunteering, donating, etc.)
This "design hack" comprises of seven different features and use cases for integrating Facebook Messenger Bot to address Best Friends's organizational and operational needs with full mockups and animated demos:
1. Streamline volunteer sign-up process
2. Save human resource with FAQ bot
3. Lower the barrier for pet adoption
4. Easier donations
5. Increase visibility and drive engagement
6. Increase local event awareness
7. Realtime pet lost-and-found network
I also "designed" ~~(this is a design hack right)~~ the backend service architecture, which I'm happy to have discussions about too!
## How I built it
```
def design_hack():
s = get_sketch()
m = s.make_awesome_mockups()
k = get_apple_keynote()
return k.make_beautiful_presentation(m)
```
## Challenges I ran into
* Coming up with a meaningful set of features that can organically fit into the existing organization
* ~~Resisting the urge to write code~~
## What I learned
* Unique organizational and operational challenges that Best Friends is facing
* How to use Sketch
* How to create ~~quasi-~~prototypes with Keynote
## What's next for Messenger Bots' Best Friends
* Refine features and code :D | losing |
## Inspiration
Ubisoft's challenge (a matter of time) + VR gaming + Epic anime protagonists
## What it does
It entertains (and it's good at it too!)
## How we built it
Unity, C#, Oculus SDK
## Challenges we ran into
Time crunch, limited assets, sleep debt
## Accomplishments that we're proud of
Playable game made with only 2 naps and 1 meal
## What we learned
36 hrs is a lot less than 48 hrs
## What's next for One in the Chamber
Oculus Start?? :pray: | ## Inspiration
What inspired us to build this application was spreading mental health awareness in relationship with the ongoing COVID-19 pandemic around the world. While it is easy to brush off signs of fatigue and emotional stress as just "being tired", often times, there is a deeper problem at the root of it. We designed this application to be as approachable and user-friendly as possible and allowed it to scale and rapidly change based on user trends.
## What it does
The project takes a scan of a face using a video stream and interprets that data by using machine learning and specially-trained models for emotion recognition. Receiving the facial data, the model is then able to process it and output the probability of a user's current emotion. After clicking the "Recommend Videos" button, the probability data is exported as an array and is processed internally, in order to determine the right query to send to the YouTube API. Once the query is sent and a response is received, the response is validated and the videos are served to the user. This process is scalable and the videos do change as newer ones get released and the YouTube algorithm serves new content. In short, this project is able to identify your emotions using face detection and suggest you videos based on how you feel.
## How we built it
The project was built as a react app leveraging face-api.js to detect the emotions and youtube-music-api for the music recommendations. The UI was designed using Material UI.
The project was built using the [REACT](https://reactjs.org/) framework, powered by [NodeJS](https://nodejs.org/en/). While it is possible to simply link the `package.json` file, the core libraries that were used were the following
* **[Redux](https://react-redux.js.org/)**
* **[Face-API](https://justadudewhohacks.github.io/face-api.js/docs/index.html)**
* **[GoogleAPIs](https://www.npmjs.com/package/googleapis)**
* **[MUI](https://mui.com/)**
* The rest were sub-dependencies that were installed automagically using [npm](https://www.npmjs.com/)
## Challenges we ran into
We faced many challenges throughout this Hackathon, including both programming and logistical ones, most of them involved dealing with React and its handling of objects and props. Here are some of the most harder challenges that we encountered with React while working on the project:
* Integration of `face-api.js`, as initially figuring out how to map the user's face and adding a canvas on top of the video stream proved to be a challenge, given how none of us really worked with that library before.
* Integration of `googleapis`' YouTube API v3, as the documentation was not very obvious and it was difficult to not only get the API key required to access the API itself, but also finding the correct URL in order to properly formulate our search query. Another challenge with this library is that it does not properly communicate its rate limiting. In this case, we did not know we could only do a maximum of 100 requests per day, and so we quickly reached our API limit and had to get a new key. Beware!
* Correctly set the camera refresh interval so that the canvas can update and be displayed to the user. Finding the correct timing and making sure that the camera would be disabled when the recommendations are displayed as well as when switching pages was a big challenge, as there was no real good documentation or solution for what we were trying to do. We ended up implementing it, but the entire process was filled with hurdles and challenges!
* Finding the right theme. It was very important to us from the very start to make it presentable and easy to use to the user. Because of that, we took a lot of time to carefully select a color palette that the users would (hopefully) be pleased by. However, this required many hours of trial-and-error, and so it took us quite some time to figure out what colors to use, all while working on completing the project we had set out to do at the start of the Hackathon.
## Accomplishments that we're proud of
While we did face many challenges and setbacks as we've outlined above, the results we something that we can really be proud of. Going into specifics, here are some of our best and satisfying moments throughout the challenge:
* Building a well-functioning app with a nice design. This was the initial goal. We did it. We're super proud of the work that we put in, the amount of hours we've spent debugging and fixing issues and it filled us with confidence knowing that we were able to plan everything out and implement everything that we wanted, given the amount of time that we had. An unforgettable experience to say the least.
* Solving the API integration issues which plagued us since the start. We knew, once we set out to develop this project, that meddling with APIs was never going to be an easy task. We were very unprepared for the amount of pain we were about to go through with the YouTube API. Part of that is mostly because of us: we chose libraries and packages that we were not very familiar with, and so, not only did we have to learn how to use them, but we also had to adapt them to our codebase to integrate them into our product. That was quite a challenge, but finally seeing it work after all the long hours we put in is absolutely worth it, and we're really glad it turned out this way.
## What we learned
To keep this section short, here are some of the things we learned throughout the Hackathon:
* How to work with new APIs
* How to debug UI issues and use components to build our applications
* Understand and fully utilize React's suite of packages and libraries, as well as other styling tools such as MaterialUI (MUI)
* Rely on each other's strengths
* And much, much more, but if we kept talking, the list would go on forever!
## What's next for MoodChanger
Well, given how the name **is** *Moodchanger*, there is one thing that we all wish we could change next. The world!
PS: Maybe add file support one day? :pensive:
PPS: Pst! The project is accessible on [GitHub](https://github.com/mike1572/face)! | ## Inspiration
While working with a Stanford PhD student on Natural Language Processing, I was pointed to a paper that outlines a lightweight model which very effectively translates between languages. The average size of the model's saved weights is only about 5KB. Suddenly, it hit me--**what if someone could download translation capability on their phone, so that they can translate offline?** This is crucial for when you're in another country and don't have access to WiFi or a cellular network, but need to communicate with a local. *This is the most common use case for translators, yet there's no solution available.* **Thus, Offline Translate was born!**
## What it does
The app allows you to use it online just like any other translating app, using Google Cloud ML to serve the desired TensorFlow model. Then, when you know you're going to a country where you don't have internet access, **you can download the specific language-to-language translator onto your device for translation anytime, anywhere!**
## How I built it
The Neural Net is a **Python-implemented TensorFlow encoder-decoder RNN** which takes a variable-length input sequence, computes a representation of the phrase, and finally decodes that representation into a different language. Its architecture is based on cutting-edge research, specifically this paper: <https://arxiv.org/pdf/1406.1078.pdf>. I **extended the TensorFlow Sequence-to-Sequence (seq2seq) library** to work effectively for this specific whitepaper. I also started **building a custom Long Short-Term Memory Cell** which computes the representation in the RNN, in order to adhere to the mathematics proposed in the whitepaper.
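As a rough illustration of the architecture described above, here is a minimal tf.keras encoder-decoder sketch. The vocabulary sizes and layer dimensions are placeholder assumptions, and this is not the custom seq2seq extension used in the actual project.

```python
# Minimal tf.keras sketch of the encoder-decoder idea described above.
# Vocabulary sizes and dimensions are placeholder assumptions; the actual project
# extended TensorFlow's seq2seq library rather than using this exact code.
import tensorflow as tf

SRC_VOCAB, TGT_VOCAB, EMB, UNITS = 8000, 8000, 128, 256

# Encoder: embeds the source sentence and compresses it into a fixed representation.
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(SRC_VOCAB, EMB)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(UNITS, return_state=True)(enc_emb)

# Decoder: starts from the encoder states and emits target-language tokens.
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(TGT_VOCAB, EMB)(dec_in)
dec_out, _, _ = tf.keras.layers.LSTM(UNITS, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(TGT_VOCAB, activation="softmax")(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```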
The app mockup was made using a prototyper, in order to communicate the concept effectively.
## Challenges I ran into
The biggest problems I ran into were ML-related issues. TensorFlow is still a bit tricky when it comes to working with variable-length sequences, so it was difficult wrangling the seq2seq library to work for me (specifically by having to extend it).
## Accomplishments that I'm proud of
I'm very proud that I was able to actually **extend the seq2seq library** and get a very difficult concept working in just 36 hours. I'm also proud that I have a **clear path of development next-steps** in order to get this fully functional. I'm happy that I got this whole project figured out in such a short amount of time!
## What I learned
The big thing: **it is possible to make rigid TensorFlow libraries work for you.** The source code is flexible and portable, so it's not difficult to mix pre-packaged functionality with personal implementations. I also learned that I can function much better than I thought I could on just 2 hrs of sleep.
## What's next for tf-rnn-encoder-decoder
The immediate next step is to get inference working 100% correctly. Once that is done, the technology itself will be solid. Next will be to turn the mockup of the app into a working app, allowing users to download their preferred models onto their phone and translate anytime, anywhere!
## Inspiration
We have a problem! We have a new generation of broke philanthropists.
The majority of students do not have a lot of spare cash, so it can be challenging for them to choose between investing in their own future and the causes they believe in to build a better future for others.
On the other hand, large companies have the capital needed to make sizeable donations but many of these acts go unnoticed or quickly forgotten.
## What it does
What if I told you that there is a way to support your favourite charities while also saving money? Students no longer need to choose between investing and donating!
Giving tree changes how we think about investing. Giving tree focuses on a charity driven investment model providing the ability to indulge in philanthropy while still supporting your future financially.
We created a platform that connects students to companies that make donations to the charities that they are interested in. Students will be able to support charities they believe in by investing in companies that are driven to make donations to such causes.
Our mission is to encourage students to invest in companies that financially support the same causes they believe in. Students will be able to not only learn more about financial planning but also help support various charities and services.
## How we built it
### Backend
The backend of this application was built using Python. In the backend, we were able to overcome one of our largest obstacles: the fact that this concept has never been done before! We really struggled to find a database or API that would provide us with information on what companies were donating to which charities.
So, how did we overcome this? We wanted to avoid having to manually input the data we needed as this was not a sustainable solution. Additionally, we needed a way to get data dynamically. As time passes, companies will continue to donate and we needed recent and topical data.
Giving Tree overcomes these obstacles using a 4 step process:
1. Using a google search API, search for articles about companies donating to a specified category or charity.
2. Identify all the nouns in the header of the search result.
3. Using the nouns, look for companies with data in Yahoo Finance that have a strong likeness to the noun.
4. Get the financial data of the company mentioned in the article and return the financial data to the user.
This was one of our greatest accomplishments of this project. We were able to overcome an obstacle that almost made us want to do a different project. Although the algorithm can occasionally produce false positives, it works more often than not and allows us to have a self-sustaining platform to build off of.
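For illustration, here is a hedged sketch of the 4-step pipeline: `search_headlines()` stands in for the Google search API, the noun extraction is simplified to a capitalization heuristic, and the `yfinance` package is assumed as the Yahoo Finance data source.

```python
# Hedged sketch of the 4-step pipeline above. search_headlines() stands in for the
# Google search API, the noun heuristic is simplified, and the yfinance package is
# assumed as the Yahoo Finance data source.
import re
import yfinance as yf

KNOWN_TICKERS = {"microsoft": "MSFT", "apple": "AAPL", "walmart": "WMT"}  # illustrative subset

def search_headlines(category: str) -> list:
    # Placeholder: in the real pipeline this calls a Google search API for
    # "<company> donates to <category>" style articles.
    return [f"Microsoft donates $10M to {category} charities"]

def candidate_nouns(headline: str) -> list:
    # Step 2 (simplified): treat capitalized words as candidate nouns/companies.
    return re.findall(r"\b[A-Z][a-z]+\b", headline)

def companies_for_category(category: str) -> list:
    results = []
    for headline in search_headlines(category):                 # step 1
        for noun in candidate_nouns(headline):                   # step 2
            ticker = KNOWN_TICKERS.get(noun.lower())              # step 3: match noun to a listed company
            if ticker:
                quote = yf.Ticker(ticker)                          # step 4: pull financial data
                price = quote.history(period="1d")["Close"].iloc[-1]
                results.append({"company": noun, "ticker": ticker, "price": float(price)})
    return results
```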
### Flask
```shell script
$ touch application.py
```

```python
from flask import Flask

application = Flask(__name__)

@application.route('/')
def hello_world():
    return 'Hello World'
```
```shell script
$ export FLASK_APP="application.py"
$ flask run
```
Now runs locally:
<http://127.0.0.1:5000/>
### AWS Elastic Beanstalk
Create a Web Server Environment:
```shell script
AWS -> Services -> Elastic beanstalk
Create New Application called hack-western-8 using Python
Create New Environment called hack-western-8-env using Web Server Environment
```
### AWS CodePipeline
Link to Github for Continuous Deployment:
```shell script
Services -> Developer Tools -> CodePipeline
Create Pipeline called hack-western-8
GitHub Version 2 -> Connect to Github
Connection Name -> Install a New App -> Choose Repo Name -> Skip Build Stage -> Deploy to AWS Elastic Beanstalk
```
This link is no longer local:
<http://hack-western-8-env.eba-a5injkhs.us-east-1.elasticbeanstalk.com/>
### AWS Route 53
Register a Domain:
```shell script
Route 53 -> Registered Domains -> Register Domain -> hack-western-8.com -> Check
Route 53 -> Hosted zones -> Create Record -> Route Traffic to IPv4 Address -> Alias -> Elastic Beanstalk -> hack-western-8-env -> Create Records
Create another record but with alias www.
```
Now we can load the website using:<br/>
[hack-western-8.com](http://hack-western-8.com)<br/>
www.hack-western-8.com<br/>
http://hack-western-8.com<br/>
http://www.hack-western-8.com<br/>
Note that it says "Not Secure" beside the link<br/>
### AWS Certificate Manager
Add SSL to use HTTPS:
```shell script
AWS Certificate Manager -> Request a Public Certificate -> Domain Name "hack-western-8.com" and "*.hack-western-8.com" -> DNS validation -> Request
$ dig +short CNAME -> No Output? -> Certificate -> Domains -> Create Records in Route 53
Elastic Beanstalk -> Environments -> Configuration -> Capacity -> Enable Load Balancing
Load balancer -> Add listener -> Port 443 -> Protocol HTTPS -> SSL certificate -> Save -> Apply
```
Now we can load the website using:
<https://hack-western-8.com>
<https://www.hack-western-8.com>
Note that there is a lock icon beside the link to indicate that we are using a SSL certificate so we are secure
## Challenges we ran into
The most challenging part of the project was connecting the charities to the companies. We allowed the user to either type the charity name or choose a category that they would like to support. Once we knew what charity they are interested in, we could use this query to scrape information concerning donations from various companies and then display the stock information related to those companies. We were able to successfully complete this query and we can display the donations made by various companies in the command line, however further work would need to be done in order to display all of this information on the website. Despite these challenges, the current website is a great prototype and proof of concept!
## Accomplishments that we're proud of
We were able to successfully use the charity name or category to scrape information concerning donations from various companies. We not only tested our code locally, but also deployed this website on AWS using Elastic Beanstalk. We created a unique domain for the website and we made it secure through a SSL certificate.
## What we learned
We learned how to connect Flask to AWS, how to design an eye-catching website, how to create a logo using Photoshop and how to scrape information using APIs.
We also learned about thinking outside the box. To find the data we needed we approached the problem from several different angles. We looked for ways to see what companies were giving to charities, where charities were receiving their money, how to minimize false positives in our search algorithm, and how to overcome seemingly impossible obstacles.
## What's next for Giving Tree
Currently, students have 6 categories they can choose from, in the future we would be able to divide them into more specific sub-categories in order to get a better query and find charities that more closely align with their interests.
**Health**
- Medical Research
- Mental Health
- Physical Health
- Infectious Diseases

**Environment**
- Ocean Conservation
- Disaster Relief
- Natural Resources
- Rainforest Sustainability
- Global Warming

**Human Rights**
- Women's Rights
- Children

**Community Development**
- Housing
- Poverty
- Water
- Sanitation
- Hunger

**Education**
- Literacy
- After School Programs
- Scholarships

**Animals**
- Animal Cruelty
- Animal Health
- Wildlife Habitats
We would also want to connect the front and back end. | # Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However, the vibrant campus life has recently become endangered, as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a couple key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represented a students course identity. We wanted to take advantage of the web3 storage as this would allow students to permanently store their course identity to be easily accessed. We also made use of Firebase to store the dynamic nodes and connections between courses and classes.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development-level Python server. We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front end.
### Graph Construction
Treating the firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we constructed a graph in Python using networkX, in which each person and course is a node with a type label “user” or “course” respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it.
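A minimal sketch of this graph construction, assuming a simple exponential recency weighting (the actual weighting function may differ):

```python
# Sketch of the user-course graph described above. The recency weighting function
# is an illustrative assumption; node/edge attributes mirror the description.
import networkx as nx

def recency_weight(quarter_index: int, current_index: int, decay: float = 0.9) -> float:
    # Newer quarters get weights closer to 1.0.
    return decay ** (current_index - quarter_index)

def build_graph(users: dict, current_quarter: int) -> nx.Graph:
    """users: {username: [(course_code, quarter_index), ...]}"""
    G = nx.Graph()
    for username, courses in users.items():
        G.add_node(username, type="user")
        for course, quarter in courses:
            G.add_node(course, type="course")
            G.add_edge(username, course, weight=recency_weight(quarter, current_quarter))
    return G
```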
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
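A hedged sketch of that scoring, blending cosine similarity of the node embeddings with a recency-weighted overlap heuristic; the 0.5 blend factor is an assumption:

```python
# Sketch of the similarity score: cosine similarity of GAT embeddings blended with
# the recency-weighted overlap heuristic. The 0.5 blend factor is an assumption.
import numpy as np

def heuristic_similarity(courses_a: dict, courses_b: dict) -> float:
    """courses_*: {course_code: recency_weight}."""
    shared = set(courses_a) & set(courses_b)
    union = set(courses_a) | set(courses_b)
    num = sum(min(courses_a[c], courses_b[c]) for c in shared)
    den = sum(max(courses_a.get(c, 0), courses_b.get(c, 0)) for c in union) or 1.0
    return num / den

def similarity(emb_a: np.ndarray, emb_b: np.ndarray,
               courses_a: dict, courses_b: dict, blend: float = 0.5) -> float:
    cosine = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return blend * cosine + (1 - blend) * heuristic_similarity(courses_a, courses_b)
```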
With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with, and whom they may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graph.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with next.js. We were able to quickly ramp up to using it as we had react experience and were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several different storyboards we would be interested in implementing for Course Connections. One would be course recommendations. We discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
Our solution, AirPool, is an open-source decentralized network that democratizes access to high-performance computing. By allowing individuals and organizations to contribute their unused GPU and CPU cores, we create a shared, affordable infrastructure for distributing computational services. What's unique about AirPool is that resource availability is dynamically allocated based on project popularity, similar to GitHub repository stars. This ensures that groundbreaking open-source projects get the resources they need to thrive.
AirPool empowers small companies and developers to tackle sophisticated AI models, train large language models, and perform complex data analysis without the heavy upfront investment in hardware. With our platform, computational resources become as accessible and affordable as open-source software.
Join us in revolutionizing the computational landscape, making high-performance computing affordable, and leveling the playing field for innovators everywhere. With AirPool, we're not just sharing resources; we're fostering an open-source ecosystem that accelerates innovation for all. | winning |
## Overview
People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf/mute and can not communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak.
You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio which is converted into text for the other person to read.
## How we built it
We used **OpenCV** and **Tensorflow** to build the Sign to Text functionality, using over 2500 frames to train our model. For the Text to Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend.
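As a rough sketch of the Text-to-Sign audio step, a Flask route could forward uploaded audio to AssemblyAI's REST API and return the transcript. The route name, polling loop, and error handling below are simplified assumptions rather than the project's exact code.

```python
# Hedged sketch of the Text-to-Sign audio step: a Flask route that sends uploaded
# audio to AssemblyAI's REST API and returns the transcript. Route name, polling
# interval, and error handling are simplified assumptions.
import time
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
ASSEMBLYAI_KEY = "YOUR_API_KEY"  # placeholder
HEADERS = {"authorization": ASSEMBLYAI_KEY}

@app.route("/transcribe", methods=["POST"])
def transcribe():
    audio_bytes = request.files["audio"].read()
    # 1) Upload the raw audio.
    upload = requests.post("https://api.assemblyai.com/v2/upload",
                           headers=HEADERS, data=audio_bytes)
    audio_url = upload.json()["upload_url"]
    # 2) Request a transcript for the uploaded file.
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=HEADERS, json={"audio_url": audio_url}).json()
    # 3) Poll until the transcript is ready.
    while True:
        result = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                              headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            break
        time.sleep(1)
    return jsonify({"text": result.get("text", "")})
```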
For the frontend, we used **React** (JS) and MaterialUI to create a visual and accessible way for users to communicate.
## Challenges we ran into
* We had to re-train our models multiple times to get them to work well enough.
* We switched from running our applications entirely on Jupyter (using Anvil) to a React App last-minute
## Accomplishments that we're proud of
* Using so many tools, languages and frameworks at once, and making them work together :D
* submitting on time (I hope? 😬)
## What's next for SignTube
* Add more signs!
* Use AssemblyAI's real-time API for more streamlined communication
* Incorporate account functionality + storage of videos | ## Inspiration
Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions.
While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care.
## What it does
Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity.
## How we built it
This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards.
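A minimal sketch of how a dispense event might be logged to the Azure SQL server from the Pi, assuming a `dispense_events` table and the `pyodbc` driver (the actual schema and Houndify intent handling are not shown):

```python
# Hedged sketch of how a dispense event could be recorded in Azure SQL from the Pi.
# The table name, columns, and connection string fields are assumptions; the
# Houndify intent handling is abstracted behind handle_dispense().
import datetime
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-server.database.windows.net;"      # placeholder
    "DATABASE=medley;UID=medley_user;PWD=***"       # placeholder
)

def log_dispense(patient_rfid: str, medication: str, dose_mg: int) -> None:
    with pyodbc.connect(CONN_STR) as conn:
        conn.execute(
            "INSERT INTO dispense_events (patient_rfid, medication, dose_mg, dispensed_at) "
            "VALUES (?, ?, ?, ?)",
            patient_rfid, medication, dose_mg, datetime.datetime.utcnow(),
        )
        conn.commit()

def handle_dispense(patient_rfid: str) -> None:
    # Called after the voice intent and daily-dose check pass; motor control omitted.
    log_dispense(patient_rfid, medication="ibuprofen", dose_mg=200)
```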
## Challenges we ran into
Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours were spent just trying to figure out the correct audio stream encoding.
Of course, it wouldn’t be a hackathon project without overscoping and then panic as the deadline draws nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself.
## Accomplishments that we're proud of
Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief.
## What we learned
Among many things:
* The complexity and difficulty of implementing mechanical systems
* How to adjust mechatronics design parameters
* Usage of Azure SQL and WordPress for dynamic user pages
* Use of the Houndify API and custom commands
* Raspberry Pi audio streams
## What's next for Medley
One feature we would have liked more time to implement is better database reporting and analytics. We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem. | ## **Inspiration**
1/5 elderly Canadians feel isolated as a result of their environment. In an ever-changing world, society often forgets about the people that built the present world we live in. Elderly people constantly battle between the desire to connect with the changing society around them and the worry that they are being a bother to their children and grandchildren.
Having something that always has time to understand and talk to you would be great for loneliness and mental health issues in elderly people. This is where BloomBuddy comes in. Like a human, the plant lives alongside the user, experiencing the same environmental changes and requiring the same basic necessities. As such, it understands the environment that elderly people live in and is able to be a present and effective companion for them. BloomBuddy not only analyzes the words coming from the user but also the environment and climate surrounding the user that could impact their mental health.
## **What it does**
Using machine learning, BloomBuddy is able to analyze the environment both the plant and the user live in and generate a personality based on those metrics. Through the unique personality created from the shared environment between the user and BloomBuddy, the plant is able to provide realistic and relatable responses to the user. BloomBuddy lives alongside the human and generates outputs based on whether it needs resources (e.g. water, more light...); it is self-caring and automatically notifies the user about its metrics using sensors. The user is able to communicate with their unique BloomBuddy through a screen and typed input, effectively communicating with the personalized ML model as they would a human. Each BloomBuddy is unique in both personality and digital presence, with an individualized NFT generated per plant built on the **Flow** blockchain network.
*The plant's metrics are easily viewable through the 7-inch display or through **Kintone**, where the user can see further data analysis of measured metrics throughout a period of time.*
## **How we built it**
Some of the sensors we used include a photoresistor, a humidity sensor, a temperature sensor, and a moisture sensor to collect a wide variety of data. The Arduino IDE was used to program the Arduino and the ESP32, as well as the UART communication between the two, so that data could be stored on the ESP32 and displayed on the screen. We built the webpage with HTML for the template, CSS for the styling, and JavaScript for the functionality, while also integrating the OpenAI API into the webpage. The NFT was created on the Flow blockchain network using Cadence and JavaScript to create the smart contracts and environment to validate the NFT.
## **Challenges we ran into**
It was our first time working with a lot of the technologies and we namely had several issues with hardware.
Our initial approach with the Raspberry Pi failed due to a faulty SD card reader. After hours of work to debug and resolve the issue, we pivoted to an Arduino to fulfill the role of central computing unit for our project. However, the Arduino still had some issues communicating with the ESP32 that needed to be solved.
When implementing Flow into our project, we were unfamiliar with the tooling, and it was our first time setting up a Web3 environment. We encountered many issues getting the environment, smart contracts, and scripts set up.
Nevertheless, our team learned a lot by working with new software and languages. It took all of 24+ hours with no sleep and many Red Bulls but we'd like to think it was worth it :).
## **Accomplishments that we're proud of**
We were able to support our hardware system with a strong, multi-aspect software system consisting of tech ranging from Web3 to web and app development, data processing, and more. The entire tech stack that we learned was geared towards the one goal of providing the most realistic interactive machine for offering companionship to elderly people, and we'd like to say we've accomplished that.
As a team, we used our time efficiently by delegating responsibilities well based on skills and experience. Despite issues with hardware and software, our team pushed through it and learned lots in the process.
The issue we tackled hit home for our entire team, as we've seen firsthand how elderly people get unintentionally neglected in our fast-paced society. We're proud that within 24 hours, we were able to construct an MVP of a promising solution that we all truly believe in.
## **What we learned**
Although we had many setbacks, we overcame most of them and learned from our experiences. We quickly learned how to call the OpenAI API using JavaScript from a microcontroller such as the Arduino. Furthermore, as we had limited experience with NFTs, we learned how to use Flow for Web3 implementation with NFTs. Through the hackathon, we were able to deploy a webpage on the ESP32 with the help of the mentor panel, as without them, we wouldn't have been able to get past this step.
Most of all, we've learned that any technology is learnable given a strong passion for the project and a team that's motivated to learn. As a collective team, we can confidently say we've learned way more then we expected coming into MakeUofT.
## **What's next for: BloomBuddy**
The next step is scaling the project by adding more functionalities to
1) Increase accessibility for elderly people with disabilities
2) Generate more impact by personalizing the plant more to tailor to individual mental illnesses
3) Further develop Web3 functionality to enhance NFT collections and reward users with FLOW tokens.
and more...
BloomBuddy is just the start of what personifying our hobbies could look like. Whether we realize it or not, every living being shows its behavior in some way. BloomBuddy is proof of concept that personifying the things we care about can enhance the growth of others as well as ourselves, and feasibly make a larger impact in the world with the help of technologies like AI and Web3.
# Trusty Paws
## Inspiration
We believe that every pet deserves to find a home. Animal shelters have often had to euthanize animals due to capacity issues and lack of adoptions. Our objective with Trusty Paws is to increase the rate of adoptions of these animals by increasing the exposure of shelters to potential pet owners as well as to provide an accessible platform for users to browse and adopt their next best friend.
Trusty Paws also aims to be an online hub for all things related to your pet, from offering an online marketplace for pet products to finding a vet near you.
## What it does
Our vision for Trusty Paws is to create a platform that brings together all the players that contribute to the health and happiness of our pawed friends, while allowing users to support local businesses. Each user, shelter, seller, and veterinarian contributes to a different aspect of Trusty Paws.
**Users**:
Users are your everyday users who own pets or are looking to adopt pets. They will be able to access the marketplace to buy items for their pets, browse through each shelter's pet profiles to fill out adoption requests, and find the nearest pet clinic.
**Shelters**:
Shelter accounts will be able to create pet profiles for each of their animals that are up for adoption! Each pet will have its own profile that can be customized with pictures and other fields providing further information on them. The shelter receives an adoption request form each time a user applies to adopt one of their pets.
**Sellers**:
Sellers will be able to set up stores showing all of their product listings, which include, but are not limited to, food, toys, accessories, and many other products. Our marketplace will provide the opportunity for local businesses that have been affected by Covid-19 to reach their target audience while abiding by health and safety guidelines. For users, it will be a convenient way to satisfy all the needs of their pet in one place. Finally, our search bar will allow users to search for specific items for a quick and efficient shopping experience.
**Veterinarians**:
Veterinarians will be able to set up a profile for their clinic, with all the pertinent information such as their opening hours, services provided, and location.
## How we built it
For the front-end, React, Bootstrap, and Materialize CSS were used to achieve the visual design of the current website. In fact, the very first step we undertook was to draft an initial prototype of the product in Figma to ensure all the requirements and required features were met. After a few iterations of redesigning, we each dove into developing the necessary individual components, forms, and pages for the website. Once all components were completed, the next step was to route pages together in order to achieve a seamless navigation of the website.
We used Firebase within Node.js to implement a framework for the back-end. Using Firebase, we implemented a NoSQL database using Cloud Firestore. Data for users (all types), pets, products, and adoption forms, along with their respective fields, were stored as documents in their respective collections.
Finally, we used Google's Distance Matrix API to compute distances between two addresses and find the nearest services when necessary, such as the closest vet clinics or the closest shelters.
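For illustration (in Python here, though the site's backend is Node.js), a Distance Matrix call to rank clinics or shelters by distance might look like this; the helper and key names are assumptions:

```python
# Illustrative sketch (in Python; the site's backend is Node.js) of how the
# Distance Matrix API can rank clinics or shelters by distance from a user.
import requests

API_KEY = "YOUR_MAPS_KEY"  # placeholder

def nearest(origin: str, destinations: list) -> list:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/distancematrix/json",
        params={"origins": origin, "destinations": "|".join(destinations), "key": API_KEY},
        timeout=10,
    ).json()
    rows = resp["rows"][0]["elements"]
    ranked = [(dest, row["distance"]["value"])                 # distance in metres
              for dest, row in zip(destinations, rows) if row["status"] == "OK"]
    return sorted(ranked, key=lambda pair: pair[1])
```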
## Challenges we ran into
Although we were successful at accomplishing the major features of the website, we encountered many challenges throughout the weekend. As we started working on Trusty Paws, we realized that the initial implementation was not as user-friendly as we wanted it to be. We then decided to take a step back and return to the initial design phase. Another challenge we ran into was that most of the team was unfamiliar with the development tools necessary for this project, such as Firebase, Node.js, Bootstrap, and Redux.
## Accomplishments that we're proud of
We are proud that our team learned so much over the course of a few days.
## What's next for Trusty Paws
We want to keep improving our users' experience by optimizing the current features. We also want to improve the design and user friendliness of the interface. | ## Inspiration
Let's be honest. Sometimes, we have a paper cup, and we look at both the trash can and the recycling bin. We might throw the paper cup away in the trash because the recycling is just a little further. I'm definitely a culprit.
Our team set out to invent a fun perception of recycling by creating a digital pet that can only be cared for through recycling verified by Gemini's ML image recognition.
Something as simple as a tiny pet, backed by the complexity of Gemini, makes me take that extra step to throw away that paper cup into the recycling bin—to make sure my pet survives and keeps the world green.
## What it does
Take a photo of yourself recycling an item, and using image recognition, Gemini checks whether it is a valid photo. After a successful photo, you can feed your digital pet a ton of snacks, making your pet progressively get bigger and bigger...
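As a hedged illustration of that check, the `google-generativeai` Python client could be used as below; the model name, prompt, and yes/no parsing are assumptions, and the app itself integrates Gemini from its web front end.

```python
# Hedged sketch of the recycling check, using the google-generativeai Python client
# for illustration (the app calls Gemini from its web front end). The model name,
# prompt, and yes/no parsing are assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GEMINI_KEY")          # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")   # assumed vision-capable model

def is_valid_recycling_photo(image_path: str) -> bool:
    image = Image.open(image_path)
    prompt = ("Does this photo show a person placing a recyclable item into a "
              "recycling bin? Answer only YES or NO.")
    response = model.generate_content([prompt, image])
    return response.text.strip().upper().startswith("YES")
```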
## How we built it
We began sketching each page to track the user's potential dopamine flow. Usually, recycling is seen as an inconvenience, especially since they are outnumbered by regular trashcans 2-to-1. We wanted the users to associate with recycling positively, so we created a pet for them to take care of. This emotional investment changes the idea of recycling from a burden to an opportunity to care for your digital pet.
## Challenges we ran into
* Integrating the Gemini API was difficult at the start, but it was smooth the second time we tried
* Choosing an easily viewable color scheme
* Staying awake
## Accomplishments that we're proud of
* 'Baby Chester' turning out as a cute pet
* The quick start to building SnackSnap, making the rest of the days less stressful
## What we learned
* Shipping fast!!!
* Pushing the limits of getting deep work done
* The excitement of working with Gemini
* Working with each other's strengths
* Trust in each other
## What's next for SnackSnap
* Your pet can have babies
* Discover new verticals
* Future integration into spatial computing, where there is virtually no friction for the user and we can auto-track their recycling activity
No one likes waiting around too much, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience.
## What it does
We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, where they describe their symptoms, related details, and circumstances in a conversational style. They also have the option of just writing these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate' or 'critical' and is added to the virtual queue. This way the hospitals can take care of their patients more efficiently by having a fair ranking system (including time of arrival) that determines the queue, and patients have a higher satisfaction level as well, because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are also shown a progress bar to see whether a doctor has reviewed their case, insurance was contacted, or any status changed. Patients are also provided with tips and educational content regarding their symptoms and pains, battling the abundant stream of misinformation and inaccuracy that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative, let's try to change that!
## How we built it
We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistance and the Azure Text Analytics API for processing. The designs were built in Figma.
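A minimal sketch of the triage and queueing logic described above, independent of the Houndify/Azure services used for input processing; the keyword lists and ordering rule are illustrative assumptions:

```python
# Hedged sketch of the triage logic described above: map reported symptoms to a
# 'mild'/'moderate'/'critical' label and order the virtual queue by severity and
# arrival time. Keyword lists and thresholds are illustrative assumptions.
CRITICAL = {"chest pain", "difficulty breathing", "severe bleeding", "unconscious"}
MODERATE = {"high fever", "fracture", "persistent vomiting", "deep cut"}

def categorize(symptoms: list) -> str:
    reported = {s.lower() for s in symptoms}
    if reported & CRITICAL:
        return "critical"
    if reported & MODERATE:
        return "moderate"
    return "mild"

def order_queue(patients: list) -> list:
    """patients: [{'name': ..., 'symptoms': [...], 'arrival': unix_time}, ...]"""
    rank = {"critical": 0, "moderate": 1, "mild": 2}
    return sorted(patients,
                  key=lambda p: (rank[categorize(p["symptoms"])], p["arrival"]))
```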
## Challenges we ran into
Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, so it was worth chewing on it longer. We have had a little experience with voice assistance in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were thinking of implementing multiple user input languages so that less fluent English speakers could use the app as well.
## Accomplishments that we're proud of
Treehacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem and we enjoyed creating something in the domain.
## What we learned
We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, for some of us this was the first hackathon attended so working together with new people with different experiences definitely proved to be exciting and valuable. | winning |
## Inspiration
One of the biggest challenges faced by families in war-affected countries was receiving financial support from their family members abroad. High transaction fees, lack of alternatives and a lack of transparency all contributed to this problem, leaving families struggling to make ends meet.
According to the World Bank, the **average cost of sending remittances to low income countries is a striking 7% of the amount sent**. For conflict-affected families, a 7% transaction fee can mean the difference between putting food on the table and going hungry for days. The truth is that the livelihoods of those left behind vitally depend on remittance transfers. Remittances are of central importance for restoring stability for families in post-conflict countries. At Dispatch, we are committed to changing the lives of war-stricken communities. Our novel app allows families to receive money from their loved ones, without having to worry about the financial barriers that had previously stood in their way.
However, the problem is far larger. Economically, over **$20 billion** has been sent back and forth in the United States this year, and we are barely even two months in. There are more than 89 million migrants in the United States itself. In a hugely untapped market that cares little about its customers and is dominated by exploitative financial institutions, we provide the go-to technology-empowered alternative that lets users help their families and friends around the world. We provide a globalized, one-stop shop for sending money across the world.
*Simply put, we are the iPhone of a remittance industry that uses landlines.*
## What problems exist
1. **High cost, mistrust and inefficiency**: Traditional remittance services often charge high fees for their services, which significantly reduces the amount of money that the recipient receives. **A report by the International Fund for Agricultural Development (IFAD) found that high costs of remittance lead to a loss of $25 billion every year for developing countries**. Additionally, they don’t provide clear information on exchange rate and fees, which leads to mistrust among users. Remittance services tend to have an upper limit on how much one can send per transaction, and they end up leading to security issues once money has been sent over. Lastly, these agencies take days to acknowledge, process, and implement a certain transaction, making immediate transfers intractable.
2. **Zero alternatives = exploitation**: It’s also important to note that very few traditional remittance services are offered in countries affected by war. Remittance services tend not to operate in these regions. With extremely limited options, families are left with no option but to accept the high fees and poor exchange rates by these agencies. This isn’t unique to war stricken countries. This is a huge problem in developing countries. Due to the high fees associated with traditional remittance services, many families in developing countries are unable to fully rely on remittance alone to support themselves. As a result, they may turn to alternative financial options that can be exploitative and dangerous. One such alternative is the use of loan sharks, who offer quick loans with exorbitant interest rates, often trapping borrowers in a cycle of debt.
## How we improve the status quo
**We are a mobile application that provides a low-cost, transparent and safe way to remit money. With every transaction made through Dispatch, our users are making a tangible difference in the lives of their loved ones.**
1. **ZERO Transaction fees**: Instead of charging a percentage-based commission fee, we charge a subscription fee per month. This has a number of advantages. Foremost, it offers a cost effective solution for families because it remains the same regardless of the transfer amount. This also makes the process transparent and simpler as the total cost of the transaction is clear upfront.
2. **Simplifying the process**: Due to the complexity of the current remittance process, migrants may find themselves vulnerable to exploitative offers from alternative providers. This is because they don’t understand the details and risks associated with these alternatives. On our app, we provide clear and concise information that guides users through the entire process. A big way of simplifying the process is to provide multilingual support. This not only removes barriers for immigrants, but also allows them to fully understand what’s happening without being taken advantage of.
3. **Transparency & Security**
* Clearly stated and understood fees and exchange rates - no hidden fees
* Real-time exchange rate updates
* Remittance tracker
* Detailed transaction receipts
* Secure user data (Users can only pay when requested to)
4. **Instant notifications and Auto-Payment**
* Reminders for bill payments and insurance renewals
* Can auto-pay bills (confirmation is required each time before it's done) so the user remains worry-free and does not require an external calendar to manage finances
* Notifications for when new requests have been made by the remitter
## How we built it
1. **Backend**
* Our backend is built on an intricate [relational database](http://shorturl.at/fJTX2) between users, their transactions, and the 170 currencies and their exchange rates (a schema sketch follows this list)
* We use the robust Checkbook API as the framework to make payments and keep track of the invoices of all payments run through Dispatch
2. **Frontend**
* We used the handy and intuitive Retool environment to develop a rudimentary app prototype, as demonstrated in our [video demo](https://youtu.be/rNj2Ts6ghgA)
* It implements most of the core functionality of our app and makes use of our functional MySQL database to create a working app
* The Figma designs represent our vision of what the end product UI would look like
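As referenced under "Backend", here is a hedged sketch of what the relational schema could look like, shown with `mysql-connector-python`; the table and column names are assumptions:

```python
# Hedged sketch of the relational schema referenced under "Backend": users,
# currencies, and transactions. Table/column names are assumptions, shown here
# with mysql-connector-python.
import mysql.connector

SCHEMA = [
    """CREATE TABLE IF NOT EXISTS users (
           id INT AUTO_INCREMENT PRIMARY KEY,
           name VARCHAR(100) NOT NULL,
           home_currency CHAR(3) NOT NULL
       )""",
    """CREATE TABLE IF NOT EXISTS currencies (
           code CHAR(3) PRIMARY KEY,
           usd_rate DECIMAL(12, 6) NOT NULL      -- units of this currency per 1 USD
       )""",
    """CREATE TABLE IF NOT EXISTS transactions (
           id INT AUTO_INCREMENT PRIMARY KEY,
           sender_id INT NOT NULL,
           recipient_id INT NOT NULL,
           amount_usd DECIMAL(12, 2) NOT NULL,
           currency CHAR(3) NOT NULL,
           created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
       )""",
]

def init_db() -> None:
    conn = mysql.connector.connect(host="localhost", user="dispatch",
                                   password="***", database="dispatch")  # placeholders
    cur = conn.cursor()
    for statement in SCHEMA:
        cur.execute(statement)
    conn.commit()
    conn.close()
```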
## Challenges we ran into
1. International money transfer regulations
2. Government restrictions on currencies /embargos
3. Losing money initially with our business model
## Accomplishments that we're proud of
1. Develop an idea with immense social potential
2. Integrating different APIs into one comprehensive user interface
3. Coming from a grand total of no hackathon experience, we were able to build a functioning prototype of our application.
4. Team bonding – jamming to Bollywood music
## What we learned
1. How to use Retool and Checkbook APIs
2. How to deploy a full fledged mobile application
3. How to use MySQL
4. Understanding the challenges faced by migrants
5. Gained insight into how fintech can solve social issues
## What's next for Dispatch
The primary goal of Dispatch is to empower war-affected families by providing them with a cost-effective and reliable way to receive funds from their loved ones living abroad. However, our vision extends beyond this demographic, as we believe that everyone should have access to an affordable, safe, and simple way to send money abroad.
We hope to continuously innovate and improve our app. We hope to utilize blockchain technology to make transactions more secure by providing a decentralized and tamper proof ledger. By leveraging emerging technologies such as blockchain, we aim to create a cutting-edge platform that offers the highest level of security, transparency and efficiency.
Ultimately, our goal is to create a world where sending money abroad is simple, affordable, and accessible to everyone. **Through our commitment to innovation, transparency, and customer-centricity, we believe that we can achieve this vision and make a positive impact on the lives of millions of people worldwide.**
## Ethics
Banks are structurally disincentivized to help make payments seamless for migrants. We read through various research reports, including the Global Migration Group's 2013 report "Exploitation and abuse of international migrants, particularly those in an irregular situation: a human rights approach", to further understand the violation of present ethical constructs.
As an example, consider how bad a 3% transaction fee (using any traditional banking service) can be for an Indian student whose parents pay Stanford tuition -
3% of $82,162 = $2,464.86 (USD) ≈ ₹202,291 (INR) [at 1 USD = 82.07 INR]
That is, it costs an extra 200,000 Indian rupees for a family that pays Stanford tuition via a traditional banking service. Consider the fact that, out of 1.4 billion Indians, this is greater than the average annual income for an Indian. Just the transaction fees alone can devastate a home.
Clearly, we don’t destroy homes, hearts, or families. We build them, for everyone without exception.
We considered the current ethical issues that arise with traditional banking or online payment systems. The following ethical issues arise with creating exclusive, expensive, and exploitative payment services for international transfers:
1. Banks earn significant revenue from remittance payments, and any effort to make the process more seamless could potentially reduce their profits.
2. Banks may view migrant populations as a high-risk group for financial fraud, leading them to prioritize security over convenience in remittance payments
3. Remittance payments are often made to developing countries with less developed financial infrastructure, making it more difficult and costly for banks to facilitate these transactions
4. Many banks are large, bureaucratic organizations that may not be agile enough to implement new technologies or processes that could streamline remittance payments.
5. Banks may be more focused on attracting higher-value customers with more complex financial needs, rather than catering to the needs of lower-income migrants.
6. The regulatory environment surrounding remittance payments can be complex and burdensome, discouraging banks from investing in this area.
7. Banks do not have a strong incentive to compete on price in the remittance market, since many migrants are willing to pay high fees to ensure their money reaches its intended recipient.
8. Banks may not have sufficient data on the needs and preferences of migrant populations, making it difficult for them to design effective remittance products and services.
9. Banks may not see remittance payments as a strategic priority, given that they are only a small part of their overall business.
10. Banks may face cultural and linguistic barriers in effectively communicating with migrant populations, which could make it difficult for them to understand and respond to their needs.
Collectively, as remittances lower, we lose out on the effects of trickle-down economics in developing countries, detrimentally harming how they operate and even stunting their growth in some cases. For the above reasons, our app could not be a traditional online banking system.
We feel there is an ethical responsibility to help other countries benefit from remittances. Crucially, we feel there is an ethical responsibility to help socioeconomically marginalized communities help their loved ones. Hence, we wanted to use technology as a means to include, not exclude, and built an app that we hope can be versatile and inclusive to the needs of our users. We needed our app design to be helpful to our users - allowing them to gain all the necessary information and make bill payments across the world easier. We carefully chose product design elements that were not wordy but simple and clear, and provided clear action items that indicated what needed to be done. However, we anticipated the following ethical issues arising from our implementation:
1. Data privacy: Remittance payment apps collect a significant amount of personal data from users. It is essential to ensure that the data is used ethically and is adequately protected.
2. Security: Security is paramount in remittance payment apps. Vulnerabilities or data breaches could lead to significant financial losses or even identity theft. Fast transfers can often lead to mismanagement in accounting.
3. Accessibility: Migrants who may be unfamiliar with technology or may not have access to smartphones or internet may be left out of such services. This raises ethical questions around fairness and equity.
4. Transparency: It is important to provide transparent information to users about the costs and fees associated with remittance payment apps, including exchange rates, transfer fees, and any other charges. We even provide currency optimization features that allow users to leverage low/high exchange rates so that they can save money whenever possible.
5. Inclusivity: Remittance payment apps should be designed to be accessible to all users, regardless of their level of education, language, or ability. This raises ethical questions around inclusivity and fairness.
6. Financial education: Remittance payment apps could provide opportunities for financial education for migrants. It is important to ensure that the app provides the necessary education and resources to enable users to make informed financial decisions.
Conscious of these ethical issues, we came up with the following solutions to provide a more principally robust app:
1. Data privacy: We collect minimal user data. The only information we care about is who sends and gets the money. No extra information is ever asked for. For undocumented immigrants this often becomes a concern and they cannot benefit from remittances. The fact that you can store the money within the app itself means that you don’t need to go through the bank's red-tape just to sustain yourself.
2. Security: We only send user data once the user posts a request from the sender. We prevent spam by only allowing contacts to send those requests to you. This prevents the user from sending large amounts of money to the wrong person. We made fast payments only possible in highly urgent queries, allowing for a priority based execution of transactions.
3. Accessibility: Beyond simple button clicks, we don’t require migrants to have a detailed or nuanced knowledge of how these applications work. We simplify the user interface with helpful widgets and useful cautionary warnings so the user gets questions answered even before asking them.
4. Transparency: With live exchange rate updates, simple reminders about what to pay when and to who, we make sure there is no secret we keep. For migrants, the assurance that they aren’t being “cheated” is crucial to build a trusted user base and they deserve to have full and clearly presented information about where their money is going.
5. Inclusivity: We provide multilingual preferences for our users, which means that they always end up with the clearest presentation of their finances and can understand what needs to be done without getting tangled up within complex and unnecessarily complicated “terms and conditions”.
6. Financial education: We provide accessible support resources sponsored by our local partners on how to best get accustomed to a new financial system and understand complex things like insurance and healthcare.
Before further implementation, we need to robustly test how secure and spam-free our payment system could be. Having a secure payment system is a high ethical priority for us.
Overall, we felt there were a number of huge ethical concerns that we needed to solve as part of our product and design implementation. We felt we were able to mitigate a considerable percentage of these concerns to provide a more inclusive, trustworthy, and accessible product to marginalized communities and immigrants across the world. | ## Inspiration
One of our team members was in the evacuation warning zone for the raging California fires in the Bay Area just a few weeks ago. Part of their family's preparation for this disaster included the tiresome, tedious, time-sensitive process of listing every item in their house for insurance claims in the event that it burned down. This process took upwards of 15 hours split between 3 people working on it, and even then many items were missed and unaccounted for. Claim Cart is here to help!
## What it does
Problems Solved
(1) Families often have many belongings they don’t account for. It’s time intensive and inconvenient to coordinate, maintain, and update extensive lists of household items. Listing mundane, forgotten items can potentially add thousands of dollars to their insurance.
(2) Insurance companies have private master lists of the most commonly used items and what the cheapest viable replacements are. Families are losing out on thousands of dollars because their claims don’t state the actual brand or price of their items. For example, if a family listed “toaster”, they would get $5 (the cheapest alternative), but if they listed “stainless steel - high end toaster: $35” they might get $30 instead.
Claim Cart has two main value propositions: time and money. It is significantly faster to take a picture of your items than manually entering every object in. It’s also more efficient for members to collaborate on making a family master list.
## Challenges I ran into
Our team was split between 3 different time zones, so communication and coordination was a challenge!
## Accomplishments that I'm proud of
For three of our members, PennApps was their first hackathon. It was a great experience building our first hack!
## What's next for Claim Cart
In the future, we will make Claim Cart available to people on all platforms. | ## Inspiration
Beer pong is a popular hobby among university students; however, it can get boring. We decided to add another level of difficulty and interactivity to the game.
## What it does
Our beer pong setup rotates the cups and uses sensors to determine which cups are left. Each round, a random remaining cup is chosen and displayed using the front LEDs to be worth bonus points. The score, time, and cup statuses are streamed and displayed in real time on a web application so players can see how they rank amongst their friends.
## How we built it
We used cardboard, servos, and an Arduino to spin the cups and gather data from the hardware end. A control program with internal logic to govern games and transmit data was created via C++. The data was sent to the web app using the MQTT IoT protocol through a personal MQTT Broker (cloudMQTT). Node.js was used to create the server which interfaced with the MQTT Broker. The web app was created using HTML, CSS, and Angular.js.
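For anyone curious about the data flow, here is a minimal sketch of the MQTT side in Python. Our actual server was Node.js, and the broker host, credentials, topic name, and payload shape below are placeholders:

```python
import json
import paho.mqtt.client as mqtt

BROKER = "your-instance.cloudmqtt.com"   # placeholder CloudMQTT host
TOPIC = "beerpong/state"                 # assumed topic name

def on_message(client, userdata, msg):
    # assumed payload shape: {"score": 40, "time": 512, "cups": [1, 0, 1, ...], "bonus": 3}
    state = json.loads(msg.payload)
    print("cups left:", sum(state["cups"]), "| score:", state["score"])

client = mqtt.Client()
client.username_pw_set("user", "password")   # CloudMQTT credentials
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```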
## Challenges we ran into
Hardware side: Configuration, connectivity, insufficient documentation.
Software side: Limited computational resources, deciding between different implementation methods, learning curve for websockets and MQTT.
## Accomplishments that we're proud of
We were able to build a rotating beer pong contraption, stream data from the Arduino to a web app, and adapt and overcome problems. Many many things did not go to plan, but we put in a lot of effort to hack things together.
## What we learned
The Intel Edison is very hard to set up and use. We also learned about MQTT and WebSockets.
## What's next for HashtagAlternativeHacks
Go home to shower | winning |
## What is Smile?
Smile is a web app that makes you smile! Studies show that smiling, even a forced one, is proven to help with mental health. Our app makes sure you get your smiles in, alongside prompting you to come up with positive affirmations about yourself.
Our app provides a quick, easy, and re-usable set of tools that can help reduce your stress by making you smile and invoke more positive vibes!
Users participate in a smile mile, which consists of 3 different activities that were scientifically designed to help with positivity.
The user starts by showing a large smile for a couple of seconds; once the app has determined you are smiling, it moves you on to the next stage.
In this stage, the user must actively say/type 3 positive compliments to themselves. This is to help them get in the mindset of self-appreciating thoughts!
Finally, we finish the run with some light-hearted music and additional resources that the user can look into when they’re feeling down or want to read more into it.
## Inspiration
As students who just finished our exams, we noticed our mood was becoming more negative. With the added anxiety of seeing our final marks come out, we needed some guidance.
Research shows that there’s merit in doing simple activities to help boost your mood!
The effects positive affirmations have on your mental well-being:
<https://scholar.dominican.edu/scw/SCW2020/conference-presentations/63/>
Benefits of smiling:
<https://www.tandfonline.com/doi/full/10.1080/17437199.2022.2052740>
<https://www.sclhealth.org/blog/2019/06/the-real-health-benefits-of-smiling-and-laughing/>
Smiling for health
<https://www.nbcnews.com/better/health/smiling-can-trick-your-brain-happiness-boost-your-health-ncna822591>
## How we built it
We built our web application using Javascript, and NextJS. We leveraged Computer Vision and NLP to validate user interactions.
Computer vision was used for Smile detection, where the user is required to smile for at least 10 seconds. This was important to the project since we needed to validate that the user was really smiling throughout this activity.
NLP was used for sentiment analysis. This was important to the project since we needed to make sure the user wasn't inputting negative compliments and stayed focused on positivity.
For the sentiment analysis portion, we used MonkeyLearn's classification library, as it provided a set of models that fit our requirements and had a faster turn-around rate. However, it's a free trial, so usage is limited.
For the Smile detection, we used face-api's various models, which can be found in `public/models`. This consists of the models we used for detecting landmarks such as the mouth and the eyes. However, the models are for general landmark detection, which could be improved upon by focusing only on targeting whether the user is smiling or not!
## Challenges we ran into
We faced a multitude of challenges going through this project:
* Figuring out which models best fit our requirements
* Designing and implementing the user flow in a minimalistic manner
* Fixing hydration issues with NextJS
3 out of the 4 members don't have access to a camera, so we relied on one person to handle the Computer Vision aspect of the project. This proved to be the bottleneck and required us to manage our time properly (and a little bit of help from "borrowing" my brother's laptop).
Likewise, half of our team was inexperienced with building a web application, so the steps involved with onboarding and mentoring added to the time crunch.
An interesting challenge we ran into was dealing with the hybrid nature of the event. Our team was fluid with how we wanted to communicate as a couple of our team members couldn’t make it to campus, or couldn’t stay for long. This required us to think creatively to figure out how to effectively communicate with the team.
## Accomplishments that we're proud of
Getting the different activities to work was a major concern for all of us, and decided the feasibility of the project, so being able to see a final product that includes all of these features was a lovely sight to see.
Our team management skills were one soft skill we were proud of. Since our team consisted of students in different years and disciplines, we wanted to make sure we best used our strengths while still providing a surmountable challenge. We were able to do this by segmenting responsibilities between the team, and pairing whenever we needed assistance.
Balancing the project work and attending the fun on-campus activities. A lot of the team was interested in the other events throughout the hackathon and were worried that we might run out of time.
## What we learned
Browser-based CV models are difficult to manage since they need to be small enough to load on the client side quickly, but also be verbose enough to detect facial features in different lighting.
NLP models are a hit or miss for a broad topic like sentiment analysis, since the use of negation words could completely change the intent of the sentence, but Bag-of-Words models still classify it as positive.
It’s extremely hard to center a div at 6 am, when we’re all sleep-deprived.
The fun wasn’t the end result, it was the journey and the struggles we had along the way!
## What's next for Smile
* Add more activities to the Smile Mile, so there’s a broad span of activities the user could choose from
* Build our own in-house models for both sentiment analysis and computer vision, since the current models are for general cases and can be improved upon through specialization
* We want to polish up the user interface, making things look more refined.
* Creating a mobile app, to make sure you get your smiles on the go!
* Notifications, to remind you to smile, in a Pomodoro-esque style.
## What’s next for you?
It’s obvious! SMILE 😄 | ## Inspiration
Too many times have broke college students looked at their bank statements and lamented how much money they could've saved if they had known about alternative purchases or savings earlier.
## What it does
SharkFin helps people analyze and improve their personal spending habits. SharkFin uses bank statements and online banking information to determine areas in which the user could save money. We identified multiple different patterns in spending that we then provide feedback on to help the user save money and spend less.
## How we built it
We used Node.js to create the backend for SharkFin, and we used the Viacom DataPoint API to manage multiple other API's. The front end, in the form of a web app, is written in JavaScript.
## Challenges we ran into
The Viacom DataPoint API, although extremely useful, was something brand new to our team, and there were few online resources we could look at. We had to understand completely how the API simplified and managed all the APIs we were using.
## Accomplishments that we're proud of
Our data processing routine is highly streamlined and modular and our statistical model identifies and tags recurring events, or "habits," very accurately. By using the DataPoint API, our app can very easily accept new APIs without structurally modifying the back-end.
## What we learned
## What's next for SharkFin | ## Inspiration
Thinking back to going to the mall every weekend to hang out with friends, I realized that ecommerce can sometimes be a tiresome and lonely time. Furthermore shopping online tends to give you a disconnected feeling where you end up mindlessly shopping and completely overshooting your budget.
With this in mind, we wanted to create a more personal and social ecommerce experience. Hence ShopPal was born: an AI cartoon character that interacts with the user, asking them casual questions, telling jokes, and making unique comments based on their search history and the types of websites they are looking at, in order to replicate real-life conversations.
## What it does
ShopPal is a Chrome browser extension that is easily accessible from the Chrome extensions tab. Our code features an animated character, designed and created from scratch by one of our team members, who appears and disappears across the extension popup, spouting messages to its users.
## How we built it
None of our three members had very much coding experience, so we stuck together and created the framework of the site using html and CSS, however after that was finished, we all branched out and tried to create our own unique part of the browser extension, searching for some new knowledge that we could learn and implement in order to create a more accessible and enjoyable website.
## Challenges we ran into
The undeniable, daunting fact that we are incredibly inexperienced had nothing on our unwavering determination to climb this mountain of a project. Being the first hackathon for all of our team members, many things were surprising. We faced many problems, including time management, underdeveloped technical skills, and especially a lack of sleep.
## What we learned and Accomplishments that we're proud of
Admittedly, we knew next to nothing before participating in this hackathon. While there is only so much one can learn in 36 hours, we definitely made the most of our time. Not only did we gain tons of valuable experience with a few different languages, we learned valuable skills and concepts which we will be sure to apply in the future. While there were some bumps in the road, they were more learning experience than challenge. One thing we struggled with was time management, but now we know where we went wrong, and how we can do better in our next event.
## What's next for ShopPal
After creating a simple AI and persona, we want to delve deeper into this character that we had built and truly push it to it's limits. We would aspire to create a robot that gets to know people based on their online habits and how they interact with the AI. ShopPal would store this information and compare it with other data gathered in order to learn more about online habits and find new creative ways to be useful and innovative. | winning |
## Inspiration
* A plethora of students spend enormous amounts of money on building their career. Whether it's on education, online courses, career counsellors and what not! We built this platform with a vision to help students who are at the initial stages of their careers to make a more informed decision on where their career interests lie. We were inspired by this idea because we as students have been through the hassle of connecting with several people and perusing through various websites just to get some direction with the career path that interested us.
## What it does
* Career Parenting brings young students a one-stop shop to figure out how they can go about pursuing their dream career. We help a student understand what skills are required by them to pursue a particular role, what online courses and/or certifications they could take to improve their skills.
* One major thing that students worry about is student debt and how cost-effective their return on investments would be in various undergrad/grad schools. We bring together the trending skillsets around the globe along with the average salaries that students could expect.
## How I built it
* We built it using the MERN stack, that is MongoDB, ExpressJS, NodeJS and ReactJS.
## Challenges I ran into
* Fetching data from multiple sources was a challenging task without doing web crawling
## What I learned
* Working with new technologies within a short span of time taught us how valuable it can be to work together as a team.
## What's next for Career Parenting
* We aim to add more features like helping students decide on a university they want to attend based on the role they want to pursue in the future, how quickly they can pay their student debt based on the university they decide to attend as well as the average salary they can expect for their chosen role
* We also look forward to adding machine learning algorithms to better predict the student loan repayment amount and duration, as well as how much time it would take students to accomplish their career goals.
* We plan to integrate nearby events so that students can attend events related to their goals
* We plan to enhance features like displaying online courses in the order of the ratings it has received from multiple sources, add profiles, add the ability to interact with each other who share the same goal | ## Inspiration
As students, we know navigating a career path is often confusing and overwhelming, especially when geographical location, access to resources, and personal background create additional barriers. We were inspired by the challenges that many people face when trying to advance in their careers, particularly those in underrepresented regions or communities. We wanted to create a tool that could help anyone, regardless of where they come from, gain clarity on their career goals and how to achieve them. The idea was to harness the power of AI to provide personalized, actionable career advice that takes into account an individual’s unique context—be it their location, skills, education, or interests. This idea was mainly inspired by reading various SDGs such as "8. Decent Work and Economic Growth" and "Quality Education".
## What it does
Our platform helps users by generating personalized career paths based on key inputs such as current location, skills, education, and professional interests. The AI model processes this information and provides users with a 5 step plan for their desired career, outlining the skills they should develop, and various resources to help them. Additionally, the platform adjusts recommendations based on the user's location to ensure that they are both relevant and realistic.
## How we built it
Frontend: We used React to build a responsive and user-friendly interface. The frontend allows users to easily input their data and interact with the platform.
Backend: The backend was built using Node.js and frameworks such as Express.js to handle the API requests, user data processing, and communication with the AI model.
Database: We implemented PostgreSQL to store user data, including their inputted information and the career paths generated by the AI model.
AI Model: We used the llama-8b-8192 model for our career advice generation purposes. The model was personalized to analyze user inputs and generate personalized career recommendations based on the factors we gave it.
Development Tools: Kaggle was used to fine-tune the AI model, while Postman was used to test API and form requests.
## Challenges we ran into
* The main challenge we ran into was fine-tuning our AI model with a custom data set as none of us had any previous ML experience and had to learn from scratch this weekend by following various tutorials.
* Another challenge we experienced was handling various errors in our backend with our login and registration. We tackled this by using JWTs to simplify our login/logout process and express-session to handle the user's session.
## Accomplishments that we're proud of
We’re incredibly proud of the fact that we successfully built a working website capable of generating career paths that adapt to users’ unique circumstances. We believe the ability to factor in both geographical and personal elements to offer tailored advice sets our project apart. Additionally, we are proud of the smooth integration between the AI model, frontend, and backend, which ensures a cohesive and responsive user experience.
## What we learned
Throughout the development process, we gained significant experience in training and fine-tuning AI models. We also learned the importance of refining data inputs and how they affect AI output, particularly when trying to achieve a high degree of personalization. Collaboratively, our team improved our problem-solving skills, learning to overcome technical challenges related to AI integration, backend efficiency, and frontend responsiveness.
## What's next for CareerPathAI
We would say the next main steps for CareerPathAI would be how to web scrape from various sources to get better data for our model, and also how to properly fine-tune/train an AI model for better personalization to help more people. | # coTA
🐒👀 Monkey see Monkey Learn 🙉🧠 -- scroll and absorb lectures! 📚
## 💡 Inspiration
Do you spend hours on social media platforms? Have you noticed the addictive nature of short-form videos? Have you ever found yourself remembering random facts or content from these videos? Do you ever get lost in those subway surfer, minecraft, or some satisfying video and come out learning about random useless information or fictional stories from Reddit?
Let’s replace that irrelevant content with material you need to study or learn. Check out coTA! coTA is a new spin on silly computer generated content where you can be entertained while learning and retaining information from your lecture slides.
## 🤔 What it does
We take traditional lectures and make them engaging. Our tool ingests PowerPoint presentations, comprehends the content, and creates entertaining short-form videos to help you learn about lecture material. This satisfies a user’s need for entertainment while staying productive and learning about educational content. Instead of robot-generated fictional Reddit post readers, our tool will teach you about the educational content from your PowerPoint presentations in an easy-to-understand manner.
We also have a chatting feature where you can chat with Cohere's LLM to better understand the PowerPoint with direct context. The chatting feature also helps users clarify any questions they have with the power of `cohere's` web-search `connector`, which is powered by Google Search!
## 🛠️ How we built it
The Stack: `FastAPI`, `React`, `CoHereAPI`,`TailwindCSS`
For our front end, we used React, creating a minimalist and intuitive design so that any user can easily use our app.
For our backend, we used Python. We utilized a Python library called `python-pptx` to convert PowerPoint presentations into strings to extract lecture content. We then used `Cohere’s` RAG model with the `command-nightly` model to read in and vectorize the document data. This prepares for querying to extract information directly from the PowerPoint. This ensures that any questions that come directly from the PowerPoint content will not be made up and will teach you content within the scope of the class and PowerPoint. This content can then be added to our videos so that users will have relevant and correct information about what they are learning. When generating content, we used web sockets to sequentially generate content for the videos so that users do not have to wait a long time for all the slides to be processed and can start learning right away.
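As a rough sketch of that backend flow (the API key, prompt, and chunking shown here are placeholders; the real parsing and RAG setup is more involved):

```python
import cohere
from pptx import Presentation

co = cohere.Client("YOUR_API_KEY")   # placeholder key

def extract_slides(path):
    """Pull the raw text out of each slide with python-pptx."""
    prs = Presentation(path)
    slides = []
    for slide in prs.slides:
        text = [s.text_frame.text for s in slide.shapes if s.has_text_frame]
        slides.append("\n".join(text))
    return slides

def script_for_slide(slide_text):
    """Ask the model for a short, plain-English explanation grounded in the slide."""
    response = co.chat(
        model="command-nightly",
        message="Explain this slide for a short study video, in simple terms.",
        documents=[{"title": "slide", "snippet": slide_text}],
    )
    return response.text
```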
When creating the video, we used `JavaScript’s` built-in API called `Speech Synthesis` to read out loud the content. We displayed the text by parsing and manipulating the strings so that it would fit nicely in the video frame. We also added video footage to be played in the background to keep users engaged and entertained. We tinted the videos to keep users intrigued while listening to the content. This ultimately leads to an easy and fun way to help students retain information and learn more about educational content.
For each video, we made it possible for users to chat to learn more about the content in case they have further questions and can clarify if they don’t understand the content well. This is also done using `Cohere's` API to gain relevant context and up to date info from Google Search
## 🏔️ Challenges we ran into
One of the biggest issues we encountered was the inconsistency of the `cochat` endpoint in returning similar outputs. Initially, we prompted the LLM to parse out key ideas from the PowerPoints and return them as an array. However, the LLM sometimes struggled with matching quotations or consistently returning an array-formatted output. When we switched our models to use `Cohere’s` `command-nightly`, we noticed faster and better results. However, another issue we noticed is that if we overload a prompt, the LLM will have further issues following the strict return formatting, despite clear prompting.
Another significant issue was that parsing through our PowerPoints could take quite some time because our PowerPoints were too large. We managed to fix this by splicing the PowerPoints into sections, making it bite-sized for the model to quickly parse and generate content. However, this is a bottleneck at the moment because we can’t generate content as quickly as platforms like TikTok or YouTube, where it’s just a pre-made video. In the future, we plan to add a feature where users must watch at least 5 seconds so that we can keep users focused instead of being entertained by the scroll effect.
We spent a lot of time trying to create an efficient backend system that utilized both a RESTful API and FastAPI's WebSocket support to handle generating video content from the slides dynamically, instead of waiting for all of the PowerPoints to be processed, as they would take up to a minute per Cohere call.
Regarding git commits, we accidentally overwrote some code because we miscommunicated on our git pushes. So, we will be sure to communicate when we are pushing and pulling, and of course, regularly pull from the main branch.
## ⭐ Accomplishments that we're proud of
We centered the divs on our first try 😎
We successfully used `Cohere’s` RAG model, which was much easier than we expected. We thought we would need a vector database and langchain, but instead, it was just some really simple, easy calls to the API to help us parse and generate our backend.
We are also really proud of our video feature. It’s really cool to see how we were able to replicate the smooth scrolling effect and text overlay, which is completely done in our frontend in React. Our short-video displayer looks as great as YouTube, TikTok, and Instagram!
## 🧠 What we learned
We gained a wealth of knowledge about RAG from the workshops at Deltahacks, facilitated by Bell ai, and from the Cohere API demo with Raymond. We discovered how straightforward it was to use RAG with Cohere’s API. RAG is an impressive technology that not only provides up-to-date information but also offers relevant internal data that we can easily access for our everyday LLMs.
## 🔮 What's next for coTA
One feature we’re excited to add is quizzes to ensure that users are actively engaged. Quizzes would serve as a tool to reinforce the learning experience for users.
We’re also looking forward to optimizing our system by reusing a vectorized document instead of having to refeed the API. This could save a significant amount of time and resources, and potentially speed up content generation. One approach we’re considering is exploring Langchain to see if they offer any support for this, as they do have conversational support! We’re eager to delve into this outside the scope of this hackathon and learn more about the incredible technologies that Cohere can provide.
In terms of background videos, we’re planning to expand beyond the pool of videos we currently have. Our existing videos align more with meme trends, but we’re interested in exploring a more professional route where relevant videos could play in the background. This could potentially be achieved with AI video generators, but for now, we can only hope for a near future where easily accessible video AI becomes a reality.
We’re considering implementing a bottleneck scrolling feature so that users will have to watch at least a portion of the video before skipping.
Lastly, we plan to utilize more AI features such as stable defusion or an image library to bring up relevant images for topics. | losing |
## Inspiration
Grip strength has been shown to be a powerful biomarker for numerous physiological processes. Two particularly compelling examples are Central Nervous System (CNS) fatigue and overall propensity for Cardiovascular Disease (CVD). The core idea is not about building a hand grip strengthening tool, as this need is already largely satisfied within the market by traditional hand grip devices currently. Rather, it is about building a product that leverages the insights behind one’s hand grip to help users make more informed decisions about their physical activities and overall well-being.
## What it does
Gripp is a physical device that users can squeeze to measure their hand grip strength in a low-cost, easy-to-use manner. The resulting measurements can be benchmarked against previous values taken by oneself, as well as comparable peers. These will be used to provide intelligent recommendations on optimal fitness/training protocols through providing deeper, quantifiable insights into recovery.
## How we built it
Gripp was built using a mixture of both hardware and software.
On the hardware front, the project began with a Computer-Aided Design (CAD) model of the device. With the requirement to build around the required force sensors and accompanying electronics, the resulting model was customized exclusively for this product, and subsequently, 3-D printed. Other considerations included the ergonomics of holding the device, and adaptability depending on the hand size of the user. Exerting force on the Wheatstone bridge sensor causes it to measure the voltage difference caused by minute changes to resistance. These changes in resistance are amplified by the HX711 amplifier and converted using an ESP32 into a force measurement.
From there, the data flows into a MySQL database hosted in Apache for the corresponding user, before finally going to the front-end interface dashboard.
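A minimal sketch of the bridge between the ESP32 and the backend (the port name and endpoint are placeholders; the real code also tags readings with the user and timestamps them):

```python
import serial      # pyserial
import requests

PORT = "COM3"                                  # machine-specific serial port
ENDPOINT = "http://localhost:5000/api/grip"    # hypothetical Flask route

def stream_readings():
    with serial.Serial(PORT, 115200, timeout=1) as esp32:
        while True:
            line = esp32.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue
            try:
                force = float(line)            # ESP32 prints one force value per line
            except ValueError:
                continue                       # skip malformed serial data
            requests.post(ENDPOINT, json={"user": "demo", "force": force})
```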
## Challenges we ran into
There were several challenges that we ran into.
On the hardware side, getting the hardware to consistently output a force value was challenging. Further, listening in on the COM port, interpreting the serial data flowing in from the ESP-32, and getting it to interact with Python (where it needed to be to flow through the Flask endpoint to the front end) was challenging.
On the software side, our team was challenged by the complexities of the operations required, most notably the front-end components, with minimal experience in React across the board.
## Accomplishments that we're proud of
Connecting the hardware to the back-end database to the front-end display, and facilitating communication both ways, is what we are most proud of, as it required navigating several complex issues to reach a sound connection.
## What we learned
The value of having another pair of eyes on code rather than trying to individually solve everything. While the latter is often possible, it is a far less efficient methodology, especially when others are around to help.
## What's next for Gripp
Next for Gripp on the hardware side is continuing to test other prototypes of the hardware design, as well as materials (e.g., a silicon mould as opposed to plastic). Additionally, facilitating the hardware/software connection via Bluetooth.
From a user-interface perspective, it would be optimal to move from a web-based application to a mobile one.
On the front-end side, continuing to build out other pages will be critical (trends, community), as well as additional features (e.g., readiness score). | ## Why
Type 2 diabetes can be incredibly tough, especially when it leads to complications. I've seen it firsthand with my uncle, who suffers from peripheral neuropathy. Watching him struggle with insensitivity in his feet, having to go to the doctor regularly for new insoles just to manage the pain and prevent further damage—it’s really painful. It's constantly on my mind how easily something like a pressure sore could become something more serious, risking amputation. It's heartbreaking to see how diabetes quietly affects his everyday life in ways people do not even realize.
## What
Our goal is to create a smart insole for diabetic patients living with type 2 diabetes. This insole is designed with several pressure sensors placed at key points to provide real-time data on the patient’s foot pressure. By continuously processing this data, it can alert both the user and their doctor when any irregularities or issues are detected. What’s even more powerful is that, based on this data, the insole can adjust to help correct the patient’s walking stance. This small but important correction can help prevent painful foot ulcers and, hopefully, make a real difference in their quality of life.
## How we built it
We built an insole with 3 sensors on it (the sensors are a hackathon project on their own) that checks the plantar pressure exerted by the patient. We stream and process the data and feed it to another model sole that changes shape based on the gait analysis, helping correct the patient's walk in real time.
Concurrently, we stream the data out to our dashboard to show recent activity, alerts, and live data about a patient's behavior so that doctors can monitor them remotely and step in at any early signs of neural degradation.
## Challenges we ran into and Accomplishments that we're proud of
So, we hit a few bumps in the road since most of the hackathon projects were all about software, and we needed hardware to bring our idea to life. Cue the adventure! We were running all over the city—Trader Joe's, Micro Center, local makerspaces—you name it, we were there, hunting for parts to build our force sensor. When we couldn’t find what we needed, we got scrappy. We ended up making our own sensor from scratch using PU foam and a pencil (yep, a pencil!). It was a wild ride of custom electronics, troubleshooting hardware problems, and patching things up with software when we couldn’t get the right parts.
In the end, we’re super proud of what we pulled off—our own custom-built sensor, plus the software to bring it all together. It was a challenge, but we had a blast, and we're thrilled with what we made in the time we had!
## What we learned
Throughout this project, we learned that flexibility and resourcefulness are key when working with hardware, especially under tight time constraints, as we had to get creative with available materials.
As well as this, we learnt a lot about preventative measures that can be taken to reduce the symptoms of diabetes, and we are optimistic about how we can continue to help people with diabetes.
## What's next for Diabeteasy
Everyone in our team has close family affected by diabetes, meaning this is a problem very near and dear to all of us. We strive to continue developing and delivering a prototype to those around us who we can see, first hand, the impact and make improvements to refine the design and execution. We aim to build relations with remote patient monitoring firms to assist within elderly healthcare, since we can provide one value above all; health. | Valentine’s Day can be tough. Those without a special someone often feel alone and this made us think back to the simpler childhood days of Valentine’s grams. Back then, something as simple as a message from a friend could change our world. Our project is a literal manifestation of that concept. While one user is in a virtual world, outsiders can text Valentine’s grams to the user. The world changes to visualize the sentiment of the grams the user receives.
For most of us, it was our first hackathon and we were excited by the many technical and creative skills needed to build a socially-driven world together:
* Virtual Reality: create an interactive world using an entirely new medium and device, the HTC Vive
* Social: include non-VR users in the experience in a novel and intuitive way
* Database and Web Dev: enable anyone to text real-time Valentine’s grams into the virtual world
* 3D Modeling: create a flat-shaded lowpoly world which visualizes a transition between happiness and sadness
* Artificial Intelligence: use sentiment analysis to assign positive or negative sentiment to Valentine’s grams
On this Valentine’s Day, we’re looking back on what we built and loving that it incorporates so many diverse skills we bring together as a team but do not have alone. We hope you enjoy and do not forget the power your words have to change someone’s world. | winning |
## Bringing your music to life, not just to your ears but to your eyes 🎶
## Inspiration 🍐
Composing music through scribbling notes or drag-and-dropping from MuseScore couldn't be more tedious. As pianists ourselves, we know the struggle of trying to bring our impromptu improvisation sessions to life without forgetting what we just played or having to record ourselves and write out the notes one by one.
## What it does 🎹
Introducing PearPiano, a cute little pear that helps you pair the notes to your thoughts. As a musician's best friend, Pear guides pianists through an augmented simulation of a piano where played notes are directly translated into a recording and stored for future use. Pear can read both single notes and chords played on the virtual piano, allowing playback of your music with cascading tiles for full immersion. Seek musical guidance from Pear by asking, "What is the key signature of C-major?" or "Tell me the notes of the E-major diminished 7th chord." To fine tune your compositions, use "Edit mode," where musicians can rewind the clip and drag-and-drop notes for instant changes.
## How we built it 🔧
Using Unity Game Engine and the Oculus Quest, musicians can airplay their music on an augmented piano for real-time music composition. We used OpenAI's Whisper for voice dictation and C# for all game-development scripts. The AR environment is entirely designed and generated using the Unity UI Toolkit, allowing our engineers to realize an immersive yet functional musical corner.
## Challenges we ran into 🏁
* Calibrating and configuring hand tracking on the Oculus Quest
* Reducing positional offset when making contact with the virtual piano keys
* Building the piano in Unity: setting the pitch of the notes and being able to play multiple at once
## Accomplishments that we're proud of 🌟
* Bringing a scaled **AR piano** to life with close-to-perfect functionalities
* Working with OpenAI to synthesize text from speech to provide guidance for users
* Designing an interactive and aesthetic UI/UX with cascading tiles upon recording playback
## What we learned 📖
* Designing and implementing our character/piano/interface in 3D
* Emily had 5 cups of coffee in half a day and is somehow alive
## What's next for PearPiano 📈
* VR overlay feature to attach the augmented piano to a real one, enriching each practice or composition session
* A rhythm checker to support an aspiring pianist to stay on-beat and in-tune
* A smart chord suggester to streamline harmonization and enhance the composition process
* Depth detection for each note-press to provide feedback on the pianist's musical dynamics
* With the up-coming release of Apple Vision Pro and Meta Quest 3, full colour AR pass-through will be more accessible than ever — Pear piano will "pair" great with all those headsets! | ## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which will produce different images for the same audio input.
## How we built it:
Our first task was using the Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata to our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino we used python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value that was recorded by the Arduino.
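A stripped-down sketch of that pipeline (the serial port, thresholds, and colour/shape mapping here are placeholders; our real themes are more elaborate):

```python
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyUSB0")          # port is machine-specific
it = util.Iterator(board)
it.start()
sensor = board.get_pin("a:2:i")          # analog pin 2, input

pygame.init()
screen = pygame.display.set_mode((800, 600))

readings = []
for _ in range(200):                     # sample a short clip of voltages
    value = sensor.read()                # float in [0.0, 1.0], or None before the first report
    if value is not None:
        readings.append(value)
    pygame.time.wait(20)

for i, v in enumerate(readings):         # one shape per reading
    colour = (int(255 * v), 80, int(255 * (1 - v)))
    x, y = 20 + (i * 30) % 760, 50 + 40 * ((i * 30) // 760)
    if v < 0.3:
        pygame.draw.circle(screen, colour, (x, y), 12)
    else:
        pygame.draw.rect(screen, colour, (x - 12, y - 12, 24, 24))
    pygame.display.flip()
```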
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece. We would also want to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces. | ## Inspiration
Everyone loves music, but finding new music can be a difficult task. We found that music websites, such as Spotify, emphasize recommendations similar to what we currently enjoy, but don't address the music that we don't enjoy. As a result, we wanted to come up with an app that recommends songs not necessarily always similar to what we like, but also distinct from our dislikes, introducing us to potentially new genres.
## What it does
Using a Tinder-like front-end, users can listen to ditties of popular songs from Spotify to discover new genres. By swiping left or right, users can influence subsequent song recommendations.
## How we built it
We created a recommendation algorithm using Scikit-learn and the Spotipy API that analyzes the user swipes to suggest new songs on our web app. Our web app was created using React.js and Flask, and our database of choice was CockroachDB.
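A simplified sketch of the idea (credentials are placeholders, and the real algorithm weights dislikes as well; this is just the shape of it):

```python
import numpy as np
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials
from sklearn.neighbors import NearestNeighbors

sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="CLIENT_ID", client_secret="CLIENT_SECRET"))   # placeholders

def features(track_id):
    f = sp.audio_features([track_id])[0]
    return [f["danceability"], f["energy"], f["valence"], f["tempo"] / 250.0]

def recommend(candidate_ids, liked_ids, disliked_ids, k=5):
    X = np.array([features(t) for t in candidate_ids])
    taste = np.array([features(t) for t in liked_ids]).mean(axis=0)  # centre of liked songs
    nn = NearestNeighbors(n_neighbors=min(k, len(candidate_ids))).fit(X)
    _, idx = nn.kneighbors([taste])
    return [candidate_ids[i] for i in idx[0] if candidate_ids[i] not in disliked_ids]
```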
## Challenges we ran into
This was the first full stack app we've made, so we ran into a lot of issues connecting the front-end and back-end components, as well as setting up our database. Additionally, it was a challenge to gather all of our data, as the API we used limited our request volume. In the end, with CockroachDB, we were able to connect and store data into it; however, we were unable to resolve querying from it on our "Liked Songs" page.
## Accomplishments that we're proud of
We are super proud of our front-end's interactability and our app's ability to dynamically generate cards that align with the user's interests!
## What we learned
We learned how to connect various python libraries (Scikit-learn, Flask, Spotipy, etc.) together to develop a robust full stack application.
## What's next for Dittycal
We hope to further develop the social aspect of this application, as given the time constraints of this hackathon, we focused on the core functionality (allowing users to find songs they like based previous ditties they've liked). It would be awesome to implement an account system, so that user song data can be analyzed and inputted to a model that implements collaborative filtering (think TikTok, Tinder, or even Netflix!). | winning |
# Inspiration
As a team we decided to develop a service that we thought would not only be extremely useful to us, but to everyone around the world that struggles with storing physical receipts. We were inspired to build an eco friendly as well as innovative application that targets the pain points behind filing receipts, losing receipts, missing return policy deadlines, not being able to find the proper receipt with a particular item as well as tracking potentially bad spending habits.
# What it does
To solve these problems, we are proud to introduce Receipto, a universal receipt tracker whose mission is to empower users with their personal finances, to help them track spending habits more easily, and to replace physical receipts to reduce global paper usage.
With Receipto you can upload or take a picture of a receipt, and it will automatically recognize all of the information found on the receipt. Once validated, it saves the picture and summarizes the data in a useful manner. In addition to storing receipts in an organized manner, you can get valuable information on your spending habits, you would also be able to search through receipt expenses based on certain categories, items and time frames. The most interesting feature is that once a receipt is loaded and validated, it will display a picture of all the items purchased thanks to the use of item codes and an image recognition API. Receipto will also notify you when a receipt may be approaching its potential return policy deadline which is based on a user input during receipt uploads.
# How we built it
We have chosen to build Receipto as a responsive web application, allowing us to develop a better user experience. We first drew up story boards by hand to visually predict and explore the user experience, then we developed the app using React, ViteJS, ChakraUI and Recharts.
For the backend, we decided to use NodeJS deployed on Google Cloud Compute Engine. In order to read and retrieve information from the receipt, we used the Google Cloud Vision API along with our own parsing algorithm.
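Our backend is Node.js, but purely as an illustration, the OCR step looks roughly like this Python sketch of the same Cloud Vision call (the parsing rule shown is a toy version of our actual algorithm):

```python
from google.cloud import vision

def read_receipt(image_path):
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.text_detection(image=image)
    text = response.text_annotations[0].description if response.text_annotations else ""
    items = []
    for line in text.splitlines():
        name, _, price = line.rpartition(" ")
        # keep lines that end in a price-looking token (very rough heuristic)
        if name and price.replace("$", "").replace(",", ".").replace(".", "", 1).isdigit():
            items.append({"name": name, "price": price})
    return items
```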
Overall, we mostly focused on developing the main ideas, which consist of scanning and storing receipts as well as viewing the images of the items on the receipts.
# Challenges we ran into
Our main challenge was implementing the image recognition API, as it involved a lot of trial and error. Almost all receipts are different depending on the store and province. For example, in Quebec, there are two different taxes displayed on the receipt, and that affected how our app was able to recognize the data. To fix that, we made sure that if two types of taxes are displayed, our app would recognize that it comes from Quebec, and it would scan it as such. Additionally, almost all stores have different receipts, so we have adapted the app to recognize most major stores, but we also allow a user to manually add the data in case a receipt is very different. Either way, a user will know when it's necessary to change or to add data with visual alerts when uploading receipts.
Another challenge was displaying the images of the items on the receipts. Not all receipts had item codes, stores that did have these codes ended up having different APIs. We overcame this challenge by finding an API called stocktrack.ca that combines the most popular store APIs in one place.
# Accomplishments that we're proud of
We are all very proud to have turned this idea into a working prototype, as we agreed to pursue this idea knowing the difficulty behind it. We have many great ideas to implement in the future and have agreed to continue this project beyond McHacks in hopes of one day completing it. We are grateful to have had the opportunity to work together with such talented, patient, and organized team members.
# What we learned
With all the different skills each team member brought to the table, we were able to pick up new skills from each other. Some of us got introduced to new coding languages, others learned new UI design skills as well as simple organization and planning skills. Overall, McHacks has definitely showed us the value of team work, we all kept each other motivated and helped each other overcome each obstacle as a team.
# What's next for Receipto?
Now that we have a working prototype ready, we plan to further test our application with a selected sample of users to improve the user experience. Our plan is to polish up the main functionality of the application, and to expand the idea by adding exciting new features that we just didn't have time to add. Although we may love the idea, we need to make sure to conduct more market research to see if it could be a viable service that could change the way people perceive receipts and potentially considering adapting Receipto. | ## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
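The app itself is written in Java for Android, but as an illustration of the speech-to-text step, the Azure Speech Services round trip looks roughly like this Python sketch (the subscription key and region are placeholders):

```python
import azure.cognitiveservices.speech as speechsdk

def transcribe_grocery_entry(key="YOUR_KEY", region="eastus"):   # placeholders
    config = speechsdk.SpeechConfig(subscription=key, region=region)
    recognizer = speechsdk.SpeechRecognizer(speech_config=config)  # default microphone input
    result = recognizer.recognize_once()
    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        return result.text      # e.g. "two litres of milk and a dozen eggs"
    return ""
```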
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app. | ## Inspiration 💡
Our inspiration stemmed from a desire to empower 💪 elderly individuals who may struggle with memory loss to live more independently and confidently in their own homes. We recognized the importance of creating a solution that not only addresses the practical challenge of finding objects but also provides companionship and reassurance. 😊
## What it does 🤖
Spot my good boy is a revolutionary dog-like 🐾 robot designed to assist the elderly in locating misplaced items within their living space. Using advanced technology and cutting-edge AI, it responds to natural language commands, navigates the environment to find requested objects, and provides verbal feedback to confirm its success😊🎉. Additionally, it serves as a vigilant companion, capable of alerting users to potential hazards in their surroundings. 🤖🔍👀
## How we built it 🛠️
We leveraged cutting-edge technology to retrofit a Boston Dynamics robot with custom software and hardware components. Through intensive programming and integration efforts, we enabled the robot to understand and execute complex commands, navigate autonomously, and communicate effectively with users. 💻🌟 This involved extensive collaboration between a huge diversity of technical experts 👏👥 from our team: software engineers, hardware specialists, and user experience designers. 💻🛠️👩💻
## Challenges we ran into 😵
Developing Spot my good boy presented several significant challenges, including: 😨
* Integrating natural language processing capabilities to ensure seamless communication. 🤖🗣️
* Designing an intuitive user interface for elderly users with varying levels of technological familiarity. 👵👴🖥️
* Implementing robust object recognition and navigation algorithms to locate items accurately. 📸🔎
* Managing multiple AI agents and API calls to services such as Google Text-to-Speech (gTTS), OpenAI's Whisper, GPT-4, and OpenCV cascade models for object detection. 🤖🤖🤖
* Hardware integration for sound, microphone, camera, and movement controls, ensuring seamless interaction and functionality across different components (a minimal sketch of the detection-and-announce loop follows this list). 🎤📷🕹️
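A minimal sketch of that detection-and-announce loop (the face cascade below is a stand-in; the real system uses cascades aimed at household objects, and playback and movement run through Spot's own APIs):

```python
import cv2
from gtts import gTTS

# stand-in cascade; the real models target household objects rather than faces
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_and_announce(frame, item_name="keys"):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits):
        gTTS(f"I found your {item_name}.").save("announce.mp3")   # played on Spot's speaker
        return hits[0]     # (x, y, w, h) used to steer toward the object
    return None
```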
## Accomplishments that we're proud of 🎉
We are immensely proud of several accomplishments, including:
* Successfully creating a functional prototype of Spot my good boy within the constraints of the hackathon timeframe. ⏱️🤖
* Demonstrating the robot's ability to understand and respond to verbal commands in real-time. 🎙️🤖
* Implementing reliable object detection and navigation algorithms, ensuring accurate retrieval of items. 📸🧭🔍
* Establishing a foundation for future development and refinement of the technology. 🚀🌟
## What we learned 💡
Our journey with Spot my good boy taught us invaluable lessons about teamwork, innovation, and the power of technology to positively impact lives. 🌟🚀 Key takeaways include:
* The importance of user-centered design in creating inclusive and accessible solutions. 👥🔎
* The challenges and complexities of integrating hardware and software components in robotics projects. 🤖🔨
* The significance of empathy and understanding when designing for vulnerable populations. 👵👴💞
* The potential for technology to enhance the quality of life for elderly individuals and caregivers alike. 🌟📈
## What's next for Spot my good boy 🚀
Moving forward, we envision several exciting opportunities to further enhance and refine Spot my good boy:
* Conducting additional user testing and feedback sessions to iteratively improve the robot's functionality and user experience 👥📝
* Exploring potential partnerships with assisted living facilities and healthcare providers to deploy Spot in real-world environments 🏥👵
* Continuously updating and expanding Spot's capabilities through software updates 📲🔄
* Investigating advanced features such as fall detection, medication reminders, and remote monitoring capabilities 👀💊⏰
* Collaborating with researchers and industry experts to explore the broader implications of robotics in eldercare and aging-in-place initiatives 👴👵🤖
With dedication and innovation, Spot my good boy has the potential to revolutionize the way we support and care for elderly individuals, empowering them to live independently and confidently in their own homes. 🏡👵👍 | winning |
## Inspiration
After considering what hackathon project to pursue, our team, JAKT, encountered a common obstacle that visually impaired individuals face: their struggle with interacting with technology and our dynamic society. While there are existing solutions, they are expensive, bulky, and therefore inaccessible to many. Text2Touch is our take on the braille display, driving it from both computer text and computer vision. This versatility, along with its accessibility, is what we would like to bring to the community in hopes of providing the visually impaired with a new, cheaper, and more environmentally friendly way of interacting with our dynamic world.
## What it does
Text2Touch takes a screen's text, downloads it as a .txt file, and parses the text into serial data that is then sent to an Arduino and rendered as braille.
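A minimal host-side sketch of that text-to-serial step is below. This is our own illustration, not the project's code: the braille table, port name, and one-byte-per-cell protocol are assumptions.

```python
import serial  # pyserial

BRAILLE = {  # dot patterns as 6-bit masks (dots 1-6); only a few letters shown
    "a": 0b000001, "b": 0b000011, "c": 0b001001,
    "d": 0b011001, "e": 0b010001, " ": 0b000000,
}

def send_text(path, port="/dev/ttyUSB0", baud=9600):
    """Read the downloaded .txt file and stream one braille cell per byte to the Arduino."""
    with open(path) as f, serial.Serial(port, baud, timeout=1) as link:
        for ch in f.read().lower():
            cell = BRAILLE.get(ch)
            if cell is not None:
                link.write(bytes([cell]))  # the Arduino drives one electromagnet per bit

if __name__ == "__main__":
    send_text("page.txt")  # assumed output file from the Chrome extension
```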
## How we built it
Our initial plan was to 3D print the parts that make up the electromagnets' chassis. However, our parts (as of 10:26 AM) have not been printed yet. Thus, we utilized aluminum cans, recycled cardboard, and clean paper food wrappings to create our ECO-FRIENDLY prototype of the braille display.
We split off to tackle the firmware, hardware and software components of this project. Our firmware was coded on the Arduino IDE, our hardware was glued, taped, soldered, and tested to completion, and our software was programmed with Python and Javascript. In the end, we came together to debug and combine our efforts into a tangible prototype that we're extremely excited to share!
## Challenges we ran into
There were many setbacks involving the hardware, chassis construction, and library utilization in our software. However, we tried to do the best we could through our dedicated teamwork and debugging efforts.
## Accomplishments that we're proud of
We're proud that we utilized relays to construct our electromagnets, that we persevered with our chassis despite technical and inventory limitations, and that we were able to accomplish this volume of work in 24 hours.
## What's next for Text2Touch
We want to refine Text2Touch to create a seamless user experience. We will do so by possibly implementing text-to-speech, refining our camera-to-touch feature, and improving the interactions between our Chrome Extension, the Arduino, and the Text2Touch Braille Display. | ## Inspiration
A member of our core team is very close with his cousin, who is severely disabled. Thus, we approached YHack with a socially conscious hack that would assist those who don't have the same opportunities to attend hackathons as we do. Although our peers who are visually impaired use current methods and technology including echolocation, seeing-eye dogs, and a white cane for assistance, existing aids fall short of the potential presented by today's technology. We decided to design and construct a revolutionary product that allows those who are blind to have a greater sense of their surroundings rather than just what lies five feet ahead.
Looking to our community, we reached out and spoke with a prominent economics professor from Brown University, professor Roberto Serrano. He explained that, "The cane isn't perfect. For example, if an obstacle is not on the floor, but is up above, you are likely to bump into it. I would think that some electronic device that alerts me to its presence would help."
Thus, Louis was born, a proprietary, mobile braille reader that not only alerts but also locates and describes one's surroundings from a small, integrated camera.
## What it does
Louis uses a raspberry-pi camera to take images that are then uploaded and processed by the Microsoft Azure (vision) API, Google Cloud (vision) API, and Facebook Graph API to provide short-text summaries of the image. This text is converted to a Braille matrix which is transformed into a series of stepper motor signals. Using two stepper motors, we translate the image into a series of Braille characters that can be read simply by the sliding of a finger.
## How we built it
The hardware was designed using SolidWorks run on Microsoft Remote Desktop. Over a series of 36 hours we ventured to Maker Spaces to prototype our designs before returning to Yale to integrate them and refine our design.
## Challenges we ran into
In order to make an economically feasible system, rather than creating actuators for every braille button, we devised a system using a series of eight dot-combinations that can represent an unlimited number of braille characters. We designed our own braille discs that are turned to form a recognizable braille pattern. We ran into a huge roadblock: how to turn one braille piece at a time while keeping the rest constant. We overcame this obstacle and designed a unique, three-part inner turning mechanism that allowed us to translate the whole platform horizontally and rotate a single piece at a time.
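To make the eight-combination idea concrete: each braille cell splits into a left column (dots 1-3) and a right column (dots 4-6), and each column can only show one of 2^3 = 8 patterns, so two 8-position discs can cover every character. The sketch below is our own illustration of that arithmetic; the 200-step motor and the mapping from patterns to disc positions are assumptions, not Louis's firmware.

```python
STEPS_PER_REV = 200                      # assumed stepper resolution
STEPS_PER_POSITION = STEPS_PER_REV // 8  # eight dot-combinations per disc

BRAILLE_DOTS = {"a": (1,), "b": (1, 2), "c": (1, 4), "l": (1, 2, 3), "o": (1, 3, 5)}

def column_codes(ch):
    """Split a character's dots into (left, right) column patterns, each 0..7."""
    dots = BRAILLE_DOTS.get(ch, ())
    left = sum(1 << (d - 1) for d in dots if d <= 3)
    right = sum(1 << (d - 4) for d in dots if d >= 4)
    return left, right

def steps_to_show(ch, current):
    """Relative steps each disc must turn to display `ch` from its current positions."""
    target = column_codes(ch)
    return [((t - c) % 8) * STEPS_PER_POSITION for t, c in zip(target, current)]

print(steps_to_show("c", (0, 0)))  # both discs advance one position: [25, 25]
```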
At first, we attempted to transform a visual input to an audio headset or speaker, but we realized we were making a product rather than something that actually makes a difference in people's lives. When someone loses one of their senses, the others become incredibly more precise. Many people in the world who are visually impaired count on the sounds we hear everyday to guide them; therefore, it's imperative that we looked towards touch: a sense that is used far less for reference and long-range navigation.
## What we learned
In 36 hours, we were able to program and build a platform that takes the images we see (and others cannot) and converts them into a physical language on a 3D-printed, completely self-designed system.
In addition, we explored the numerous applications of Microsoft Azure and the burgeoning field of image processing.
## What's next for Louis
We are going to Kinect!
Unfortunately, we were unable to gain access to a Microsoft Kinect; nevertheless, we look forward to returning to Brown University with Louis and integrating the features of Kinect to a Braille output. We hope to grant our peers and colleagues with visual impairment unparalleled access to their surroundings using touch and the physical language of braille. | ## Inspiration
Between 1994 and 2013 there were 6,873 natural disasters worldwide, which claimed 1.35 million lives, or almost 68,000 lives on average each year. In many of these disasters, people don't know where to go or what is safe. For example, during Hurricane Harvey, many people learned about dangers such as alligators or flooding through other people and through social media.
## What it does
* Post a natural disaster hazard in your area
* Crowd-sourced hazards
* Pulls government severe weather data
* IoT sensor system to take atmospheric measurements and display on map
* Twitter social media feed of trending natural disasters in the area
* Machine learning image processing to analyze posted images of natural disaster hazards
Our app **Eye in the Sky** allows users to post about various natural disaster hazards at their location, upload a photo, and share a description of the hazard. These points are crowd-sourced and displayed on a map, which also pulls live data from the web on natural disasters. It also allows these users to view a Twitter feed of current trending information on the natural disaster in their location. The last feature is the IoT sensor system, which takes measurements about the environment in real time and displays them on the map.
## How I built it
We built this app using Android Studio. Our user-created posts are hosted on a Parse database (run with mongodb). We pull data from the National Oceanic and Atmospheric Administration severe weather data inventory. We used Particle Electron to collect atmospheric sensor data, and used AWS to store this data in a JSON.
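As a rough illustration of the data flow (ours, not the team's code; the endpoint URL and field names are made up), the map layer mainly needs every source normalized into one marker format:

```python
import requests

SENSOR_FEED = "https://example-bucket.s3.amazonaws.com/sensors.json"  # placeholder URL

def sensor_markers():
    """Pull the Particle Electron readings stored on AWS and turn them into map markers."""
    readings = requests.get(SENSOR_FEED, timeout=10).json()
    return [
        {"lat": r["lat"], "lng": r["lng"],
         "label": "Sensor: %.1f C, %.0f%% humidity" % (r["temp_c"], r["humidity"])}
        for r in readings
    ]

def hazard_markers(posts):
    """Crowd-sourced hazard posts (from the Parse backend) in the same marker format."""
    return [
        {"lat": p["location"]["latitude"], "lng": p["location"]["longitude"],
         "label": p["description"]}
        for p in posts
    ]
```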
## Challenges I ran into
We had some issues setting up an AWS instance to run our image processing on the backend, but other than that, decently smooth sailing.
## Accomplishments that I'm proud of
We were able to work really well together, using everyone's strengths as well as combining the pieces into a cohesive app that we're really proud of. Everyone's piece was integrated well into the app, and I think we demonstrated good software collaboration.
## What I learned
We learned how to effectively combine our code across various languages and databases, which is a big challenge in software as a whole. We learned Android Development, how to set up AWS, and how to create databases to store custom data. Most importantly, we learned to have fun!
## What's next for Eye in the Sky
In the future, we would like to add verification of people's self-reported hazards (through machine learning and through up-voting and down-voting).
We would also like to improve the efficiency of our app and reduce its reliance on network connectivity, since the network may be poor or unavailable during a natural disaster.
We would also like to add more online datasets and to display storms with bigger bubbles rather than just points.
We would also like to attach our sensor to drones or automobiles to gather more weather data points to display on our map | losing |
# DriveWise: Building a Safer Future in Route Planning
Motor vehicle crashes are the leading cause of death among teens, with over a third of teen fatalities resulting from traffic accidents. This represents one of the most pressing public safety issues today. While many route-planning algorithms exist, most prioritize speed over safety, often neglecting the inherent risks associated with certain routes. We set out to create a route-planning app that leverages past accident data to help users navigate safer routes.
## Inspiration
The inexperience of young drivers contributes to the sharp rise in accidents and deaths as can be seen in the figure below.
![Injuries and Deaths in Motor Vehicle Crashes](https://raw.githubusercontent.com/pranavponnusamy/Drivewise/refs/heads/main/AccidentsByAge.webp)
This issue is further intensified by challenging driving conditions, road hazards, and the lack of real-time risk assessment tools. With limited access to information about accident-prone areas and little experience on the road, new drivers often unknowingly enter high-risk zones—something traditional route planners like Waze or Google Maps fail to address. However, new drivers are often willing to sacrifice speed for safer, less-traveled routes. Addressing this gap requires providing insights that promote safer driving choices.
## What It Does
We developed **DriveWise**, a route-planning app that empowers users to make informed decisions about the safest routes. The app analyzes 22 years of historical accident data and utilizes a modified A\* heuristic for personalized planning. Based on this data, it suggests alternative routes that are statistically safer, tailoring recommendations to the driver’s skill level. By factoring in variables such as driver skill, accident density, and turn complexity, we aim to create a comprehensive tool that prioritizes road safety above all else.
### How It Works
Our route-planning algorithm is novel in its incorporation of historical accident data directly into the routing process. Traditional algorithms like those used by Google Maps or Waze prioritize the shortest or fastest routes, often overlooking safety considerations. **DriveWise** integrates safety metrics into the edge weights of the routing graph, allowing the A\* algorithm to favor routes with lower accident risk.
**Key components of our algorithm include:**
* **Accident Density Mapping**: We map over 3.1 million historical accident data points to the road network using spatial queries. Each road segment is assigned an accident count based on nearby accidents.
* **Turn Penalties**: Sharp turns are more challenging for new drivers and have been shown to contribute to unsafe routes. We calculate turn angles between road segments and apply penalties for turns exceeding a certain threshold.
* **Skillfulness Metric**: We introduce a driver skill level parameter that adjusts the influence of accident risk and turn penalties on route selection. New drivers are guided through safer, simpler routes, while experienced drivers receive more direct paths.
* **Risk-Aware Heuristic**: Unlike traditional A\* implementations that use distance-based heuristics, we modify the heuristic to account for accident density, further steering the route away from high-risk areas.
By integrating these elements, **DriveWise** offers personalized route recommendations that adapt as the driver's skill level increases, ultimately aiming to reduce the likelihood of accidents for new drivers.
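As a minimal sketch of this kind of risk-aware weighting (our own simplification, not the team's exact code): the graph is assumed to be an OSMnx-style road network whose edges already carry an `accident_count` attribute from the mapped crash data, and the turn penalty is omitted for brevity.

```python
import math
import networkx as nx

def safety_weight(length_m, accidents, skill, alpha=50.0):
    # Less skilled drivers (skill near 0) pay a larger penalty on accident-dense segments.
    return length_m + alpha * (1.0 - skill) * accidents

def safest_route(G, origin, destination, skill=0.3):
    for _, _, data in G.edges(data=True):
        data["risk_weight"] = safety_weight(
            data.get("length", 0.0), data.get("accident_count", 0), skill)

    def heuristic(a, b):
        # Haversine distance in meters between graph nodes ("x" = lon, "y" = lat).
        lat1, lon1 = math.radians(G.nodes[a]["y"]), math.radians(G.nodes[a]["x"])
        lat2, lon2 = math.radians(G.nodes[b]["y"]), math.radians(G.nodes[b]["x"])
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6_371_000 * math.asin(math.sqrt(h))

    return nx.astar_path(G, origin, destination, heuristic=heuristic, weight="risk_weight")
```

Raising `skill` toward 1 shrinks the accident penalty, so the same call gradually converges on the ordinary shortest path as the driver gains experience.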
## Accomplishments We're Proud Of
We are proud of developing an algorithm that not only works effectively but also has the potential to make a real difference in road safety. Creating a route-planning tool that factors in historical accident data is, to our knowledge, a novel approach in this domain. We successfully combined complex data analysis with an intuitive user interface, resulting in an app that is both powerful and user-friendly.
We are also kinda proud of our website. Learn more about us at [idontwannadie.lol](https://idontwannadie.lol/)
## Challenges We Faced
This was one of our first hackathons, and we faced several challenges. Having never deployed anything before, we spent a significant amount of time learning, debugging, and fixing deployment issues. Designing the algorithm to analyze accident patterns while keeping the route planning relatively simple added considerable complexity. We had to balance predictive analytics with real-world usability, ensuring that the app remained intuitive while delivering sophisticated results.
Another challenge was creating a user interface that encourages engagement without overwhelming the driver. We wanted users to trust the app’s recommendations without feeling burdened by excessive information. Striking the right balance between simplicity and effectiveness through gamified metrics proved to be an elegant solution.
## What We Learned
We learned a great deal about integrating large datasets into real-time applications, the complexities of route optimization algorithms, and the importance of user-centric design. Working with the OpenStreetMap and OSMnx libraries required a deep dive into geospatial analysis, which was both challenging and rewarding. We also discovered the joys and pains of deploying an application, from server configurations to domain name setups.
## Future Plans
In the future, we see the potential for **DriveWise** to go beyond individual drivers and benefit broader communities. Urban planners, law enforcement agencies, and policymakers could use aggregated data to identify high-risk areas and make informed decisions about where to invest in road safety improvements. By expanding our dataset and refining our algorithms, we aim to make **DriveWise** functional in more regions and for a wider audience.
## Links
* **Paper**: [Mathematical Background](https://drive.google.com/drive/folders/1Q9MRjBWQtXKwtlzObdAxtfBpXgLR7yfQ?usp=sharing)
* **GitHub**: [DriveWise Repository](https://github.com/pranavponnusamy/Drivewise)
* **Website**: [idontwannadie.lol](https://idontwannadie.lol/)
* **Video Demo**: [DriveWise Demo](https://www.veed.io/view/81d727bc-ed6b-4bba-95c1-97ed48b1738d?panel=share) | ## Inspiration
We got lost so many times inside MIT... And no one could help us :( No Google Maps, no Apple Maps, NO ONE. Since then, we have always dreamed about a more precise navigation platform that works inside buildings. And here it is. But that's not all: as traffic GPS apps usually do, we also want to avoid the big crowds that sometimes stand in corridors.
## What it does
Using just the pdf of the floor plans, it builds a digital map and creates the data structures needed to find the shortest path between two points, considering walls, stairs and even elevators. Moreover, using fictional crowd data, it avoids big crowds so that it is safer and faster to walk inside buildings.
## How we built it
Using k-means, we created nodes and chose the number of clusters with the elbow (diminishing-returns) criterion. We obtained the hallway centers by combining scikit-learn clustering with k-means filtering. Finally, we created the edges between nodes, simulated crowd hotspots, and calculated the shortest path accordingly. Each wifi hotspot reports the number of devices connected to the internet, which we use to estimate the number of nearby people. This information allows us to weight the corresponding paths and penalize those with large nearby crowds.
A path can be searched on a website powered by Flask, where the corresponding result is shown.
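A simplified sketch of the crowd-aware routing described above (ours, not the project's code): the node/edge construction from the floor-plan pipeline is assumed to have already happened, and the penalty formula is purely illustrative.

```python
import networkx as nx

def crowd_penalty(devices_nearby, per_device=0.5):
    """Extra cost added to an edge for every device its nearest wifi hotspot reports."""
    return per_device * devices_nearby

def shortest_safe_path(G, start, goal):
    for u, v, data in G.edges(data=True):
        data["weight"] = data["distance"] + crowd_penalty(data.get("devices", 0))
    return nx.shortest_path(G, start, goal, weight="weight")

# Toy example: two routes between hallway nodes A and C; the crowded one loses.
G = nx.Graph()
G.add_edge("A", "B", distance=10, devices=40)   # crowded corridor
G.add_edge("B", "C", distance=10, devices=40)
G.add_edge("A", "D", distance=14, devices=0)    # longer but empty
G.add_edge("D", "C", distance=14, devices=0)
print(shortest_safe_path(G, "A", "C"))          # ['A', 'D', 'C']
```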
## Challenges we ran into
At first, we didn't know which was the best approach to convert a pdf map to useful data.
The maps we worked with are taken from the MIT intranet and we are not allowed to share them, so our web app cannot be published as it uses those maps...
Furthermore, we had limited experience with Machine Learning and Computer Vision algorithms.
## Accomplishments that we're proud of
We're proud of having developed a useful application that can be employed by many people and can be extended automatically to any building thanks to our map recognition algorithms. Also, using real data from sensors (wifi hotspots or any other similar devices) to detect crowds and penalize nearby paths.
## What we learned
We learned more about Python, Flask, Computer Vision algorithms and Machine Learning. Also about friendship :)
## What's next for SmartPaths
The next steps would be honing the Machine Learning part and using real data from sensors. | ## Inspiration
We all love games, and we wanted to create a bot which could amplify the users' experience on Discord by turning their server into an MMO community, with the RPG creation feature allowing the server's admins to create endless content for the other server members to enjoy.
## What it does
Our bot allows all of the members of a Discord server to create their own fictional characters who can be used within RPG-style games created by admins in the server. Outside of the RPGs, the characters can be used within the server's world; users can have jobs and are able to care for their character.
After the bot is added to a server, members can create their own characters and choose a job. Members can use these characters within the server's environment, their own custom RPG adventures, or the sample RPG we provided.
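The writeup doesn't name a Discord library, but as a hedged illustration, the character-creation command described above could look roughly like this with discord.py (all names and the in-memory storage are our own assumptions):

```python
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

characters = {}  # author id -> character dict; a real bot would persist this

@bot.command()
async def create(ctx, name: str, job: str):
    """!create <name> <job> registers a character tied to the server member."""
    characters[ctx.author.id] = {"name": name, "job": job, "gold": 0}
    await ctx.send(f"{ctx.author.display_name} created {name} the {job}!")

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```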
## Challenges I ran into
The major problem we faced is that we started off with an ambitious plan to create an online multiplayer game in Unity, which we later changed to our current idea, leaving us less time than we would have liked to work on our project.
## Accomplishments that I'm proud of
For most of us, it was our first or second hackathon, so successfully creating this bot was an incredible experience we all are proud of.
## What I learned
## What's next for MMORPG bot
In the future, we'd like to expand the bot's capabilities to offer an even more immersive and customizable experience. Due to the time limitations of this event, we would never be able to add every feature seen in highly complex MMOs or RPGs, but those features could easily be added to expand upon what we currently have. We'd also like to host the bot publicly so it could be present on many servers simultaneously. Additionally, we'd like to offer more RPG games built into the bot, so that users are able to play a variety of games when they do not feel like making their own. In the future, the bot would allow users to post their custom games publicly so that users in other servers could play them. | winning
## Inspiration
Our goal was to create a system that could improve both a shopper's experience and a business's profit margin. People waste millions of hours every year waiting in supermarket lines and businesses need to spend a significant amount of money on upkeep, staffing, and real estate for checkout lines and stations.
## What it does
Our mobile app, web app, and anti-theft system provide an easy, straightforward platform for businesses to incorporate our technology and for consumers to use it.
## How we built it
We used RFID technology and bluetooth in conjunction with an Arduino Mega board for most of the core functions. A long-range RFID reader and controller was used for the anti-theft system. A multi-colored LED, buzzer, and speaker were used for supplementary functions.
## Challenges we ran into
We ran into many challenges related to the connections between the three mostly self-sufficient systems, especially with the anti-theft system. It took a long time to get the security system working using a macro script.
## Accomplishments that we're proud of
It works!
## What we learned
We each focused on one specific area of the app and because all of the subsystems of the project were reliant on one another, we had to spend a large amount of time communicating with each other and ensuring that all of the components worked together.
## What's next for GoCart
While our project serves as a proof of concept for the technology and showcases its potential impact, we believe that with a slightly higher budget, and thus access to stronger technologies, this technology can have a real commercial impact on the shopping industry. More specifically, by improving the range of the RFID readers and the accuracy of product detection, we would be able to increase the reliability of the anti-theft system and the ease of use for shoppers. From a hardware standpoint, the readers on the carts need to be much more compact and discreet. | ## Inspiration
Initially, our team was interested in exploring generative AI and Virtual Reality and how it could be integrated into one’s lifestyle to simplify a niche task. However, after a thorough conversation with representatives (shout out Neal) at Careyaya, we realized there was an incredibly neglected and untapped market: medical technology. Specifically, elderly and those that suffer from anxiety, depression, or any common mental disorders don’t have a “safe space” to live within.
## Overview
Introducing Shanti, a Sanskrit translation for “Peace”, a platform striving to accelerate the impact of therapeutic tools through VR. From grieving the death of a lost one to stressing over a test, anyone can immerse themselves in a hyper-realistic AI 360-degree panorama of an event or scene they enjoy. In a way, it becomes their safe haven or go-to meditation spot. Our tech is built using Express, React, Node, MATLAB (back-end), and GCP. We also make use of a ton of genAI APIs :)
## Challenges
However, there were numerous challenges we faced, both during the technical development and ideation phases. We pivoted our idea multiple times when coming across more applicable ideas that could better serve those with mental disorders. In addition, we had to work through API compatibility issues and ensure that the visual 360-degree outputs were accurate and aligned with the prompts given.
## Shanti and Beyond!
We sincerely believe that Shanti could scale into a live product used by those that face struggles both externally and internally, physically or mentally. And as a result, we hope to scale it further after HackHarvard, thank you! | ## Inspiration
How many times have you gone to a website, looked at the length of the content, and decided that it was not worth your time to read the entire page? The inspiration for my project comes from this exact problem. Imagine how much easier it would be if website owners could automatically generate a summary of their webpage for their users to consume.
## What it does
Allows the website owner to easily generate a summary of their webpage and place it anywhere on their webpage.
## How I built it
I used CloudFlare so that it would be easily available to website owners as an app. I used JavaScript to fetch the summary from a third-party API and to update the HTML of the page.
## Challenges I ran into
It was hard trying to find a summarizer that was suitable for my needs. Many of them only accept URLs as input and many are just not good enough. Once I found one that was suitable, I also had difficulty incorporating it within my app.
I had also never done anything with html or css before so it was hard trying to create the view that the user would see.
## Accomplishments that I'm proud of
It works, looks nice, and is easy to use
## What I learned
How to make a CloudFlare app, html, css
## What's next for TL;DR
Add more options for the app (maybe show relevant tweets?) and possibly create a Chrome web extension that does the same thing. | partial
## Inspiration
As OEMs (original equipment manufacturers) and consumers keep putting on brighter and brighter lights, oncoming traffic can be blinded. Combined with fatigue and the difficulty of judging distance, this makes it increasingly hard to drive safely at night. An extra set of night-vision eyes would go a long way toward protecting your own, and that's where NCAR comes into play. The Nighttime Collision Avoidance Response system provides those extra eyes via an infrared camera: machine learning classifies obstacles detected in the road, and the system projects light to indicate them, allowing safe driving regardless of the time of day.
## What it does
* NCAR provides users with an affordable wearable tech that ensures driver safety at night
* With its machine learning model, it can detect when humans are on the road when it is pitch black
* The NCAR alerts users of obstacles on the road by projecting a beam of light onto the windshield using the OLED Display
* If the user’s headlights fail, the infrared camera can act as a powerful backup light
## How we built it
* Machine Learning Model: TensorFlow API (a minimal detection-loop sketch follows this list)
* Python Libraries: OpenCV, PyGame
* Hardware: Raspberry Pi 4B, 1-inch OLED display, infrared camera
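Our own illustrative sketch of that detection loop follows; it is not the team's code. The TF Hub model handle shown is just one publicly available COCO-trained SSD, and the alert print stands in for driving the OLED projection.

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")  # assumed model choice
PERSON_CLASS_ID = 1  # "person" in the COCO label map

def person_detected(frame, threshold=0.5):
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    result = detector(batch)
    classes = result["detection_classes"][0].numpy()
    scores = result["detection_scores"][0].numpy()
    return bool(np.any((classes == PERSON_CLASS_ID) & (scores >= threshold)))

cap = cv2.VideoCapture(0)  # infrared camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if person_detected(frame):
        print("ALERT: pedestrian ahead")  # in NCAR this would trigger the OLED/light alert
```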
## Challenges we ran into
* Training a machine learning model with limited training data
* Our infrared camera broke down, so we had to rely on old footage for the ML model
## Accomplishments that we're proud of
* Implementing a model that can detect human obstacles from 5-7 meters from the camera
* building a portable design that can be implemented on any car
## What we learned
* Learned how to code different hardware sensors together
* Building a TensorFlow model on a Raspberry Pi
* Collaborating with people with different backgrounds, skills and experiences
## What's next for NCAR: Nighttime Collision Avoidance System
* Building a more custom training model that can detect and calculate the distances of the obstacles to the user
* A more sophisticated system of alerting users of obstacles on the path that is easy to maneuver
* Be able to adjust the OLED screen with a 3d printer to display light in a more noticeable way | # talko
Hello! talko is a project for nwHacks 2022.
Interviews can be scary, but they don't have to be! We believe that practice and experience is what gives you the
confidence you need to show interviewers what you're made of. talko is made for students and new graduates who want to
learn to fully express their skills in interviews. With everything online, it's even more important now that you can
get your thoughts across clearly virtually.
As students who have been and will be looking for co-ops, we know very well how stressful interview season can be;
we took this as our source of inspiration for talko.
talko is an app that helps you practice for interviews. Record and time your answers to interview questions to get
feedback on how fast you're talking and view previous answers.
## Features
* View answer history for previous answers: playback recordings, words per minute, standard deviation of talking speed, and overall answer quality.
* Integrated question bank with a variety of topics.
* Skip answers you aren't ready to answer.
* Adorable robots!!
## Technical Overview
For Talko’s front-end, we used React to create a web app that can be used on both desktop and mobile devices.
We used Figma for the wireframing and Adobe Fresco for some of the aesthetic touches and character designs.
We created the backend using Nodejs and Express. The api handles uploading, saving and retrieving recordings, as well as
fetching random questions from our question bank. We used Google Cloud Firestore to save data about past answers, and Microsoft
Azure to store audio files and use speech-to-text on our clips. In our api, we calculate the average words per minute over
the entire answer, as well as the variance in the rate of speech.
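The metric itself is simple. Here's an illustration of the calculation (shown in Python for brevity, even though the real api is Node/Express; the segment format is an assumption):

```python
from statistics import pvariance

def answer_metrics(segments):
    """segments: list of (word_count, duration_seconds) pairs from the speech-to-text result."""
    total_words = sum(w for w, _ in segments)
    total_minutes = sum(d for _, d in segments) / 60
    avg_wpm = total_words / total_minutes
    per_segment_wpm = [w / (d / 60) for w, d in segments if d > 0]
    return avg_wpm, pvariance(per_segment_wpm)

# Example: three segments of one answer -> average WPM and how much the pace wobbled.
print(answer_metrics([(40, 15), (55, 20), (30, 12)]))
```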
## Challenges & Accomplishments
Creating this project in just 24 hours was quite the challenge! While we have worked with some of these tools before,
it was our first time working with Microsoft Azure. We're really proud of what we managed to put together over this weekend.
Another issue we had is that it can take a while to get speech-to-text results from Azure. We wanted to send
a response back to the frontend quickly, so we decided to calculate the rate of speech variance afterwards and
patch our data in Firestore.
## What's next for talko?
* Tagged questions: get questions most relevant to your industry
* Better answer analysis: use different NLP APIs and assess the text to give better stats and pointers
+ Are there lots of pauses and filler words in the answer?
+ Is the answer related to the question?
+ Given a job description selected or supplied by the user, does the answer cover the keywords?
+ Is the tone of the answer formal, assertive?
* View answer history in more detail: option to show transcript and play back audio recordings
* Settings to personalize your practice experience: customize number of questions and answer time limit.
## Built using
![image](https://img.shields.io/badge/Node.js-339933?style=for-the-badge&logo=nodedotjs&logoColor=white)
![image](https://img.shields.io/badge/React-20232A?style=for-the-badge&logo=react&logoColor=61DAFB)
![image](https://img.shields.io/badge/Express.js-000000?style=for-the-badge&logo=express&logoColor=white)
![image](https://img.shields.io/badge/microsoft%20azure-0089D6?style=for-the-badge&logo=microsoft-azure&logoColor=white)
![image](https://img.shields.io/badge/firebase-ffca28?style=for-the-badge&logo=firebase&logoColor=black)
![drawing](https://github.com/nwhacks-2022/.github/blob/main/assets/rainbow.png?raw=true)
### Thanks for visiting! | ## Inspiration
The original inspiration came from the popular augmented-reality Android-based game "Ingress", in which random strangers around the world team up in factions in the game in order to achieve certain objectives in real life. We took the fundamental augmented-reality (and teamwork!) concept of "Ingress" and decided to add a touch of fitness to it, and out came Exgress (Ingress with Exercises, clever, we know).
Just take a quick look at all the sports around you, and it's easy to see that humans simply love to compete. By capturing this essence of competition, Exgress not only pushes people to increase their daily amount of exercise, but also helps them feel accomplished with the community while doing so. The app will drive competition and in turn drive people to reach their fitness goals!
## What Exgress does
Although Exgress took its roots from the game Ingress, it is first and foremost a fitness app. The application requires data from a Microsoft Fitness Band. The high-level overview is that upon signing up for Exgress, a user will choose one of two teams. The objective of each team is to capture as many real-life landmarks as they can, while defending those already captured from the other team. All the stats within Exgress are derived from the data taken from the Microsoft Fitness Band. We use band stats such as the heart rate sensor and the calorie counter to formulate how strong a team is at a given time. With these stats, users can proceed around their neighborhoods and capture the beacons.
## How We built it
We built this application on the Android Platform. Throughout the project, we utilized Android Studio, Microsoft Azure as the back-end, Microsoft Fitness Band API, and the Google Maps API.
## Challenges We ran into
As none of us were familiar with Android in depth (half of us were actually Android beginners), the initial climb to set up the project was a lot more difficult than we expected. However, after 36 hours of stackoverflowing, we are really proud of our prototype of Exgress.
## Accomplishments that We're proud of
EXGRESS YEEEAAHH | winning |
## Mafia
Mafia is a party game for a group of friends to get together and yell and lie at each other. It is incredibly fun, and I play all of the time with my friends and family. During the game of mafia, there are townspeople and mafia members, and each group seeks to kill off the other. The mafia are the informed minority: there are few of them (generally two), and the townspeople are the majority. The mafia know who is in the mafia, but the townspeople do not, so the goal of the game is for the townspeople to discover who is in the mafia. Because the game is played this way, there must be one person acting as the narrator, who tells the townspeople what the mafia do at night. This player does not participate in the game, and generally, every player in the game makes it more fun. Also, the mafia members and the townspeople must be chosen randomly, so playing cards or some other randomizer is required for the game.
## UMafia comes to the rescue.
UMafia is a companion for the party game: it is meant to assist the game, rather than something like "Town of Salem", which is just a way to play the game online. UMafia is meant for people to sit around a living room or a campfire, and pull out their phones to vote and kill other players. UMafia takes the place of the narrator, meaning that an extra person can play, and it also randomizes the roles. It makes friendly mafia games much easier.
## How I built it
UMafia is a website built with HTML, CSS, JavaScript and Bootstrap. It should be compatible with mobile devices. I modeled UMafia after [spyfall](http://spyfall.crabhat.com/), which is a different party game to be played in person. It is nearly 500 lines of code, made mostly from scratch, except for some Bootstrap effects which I added at the last minute. The JavaScript creates an array pL (short for playerList), which holds player data and role data. It randomizes roles and keeps track of how many players remain. If all of the mafia players are dead, then the townspeople win, and if the number of mafia players exceeds the number of townspeople, then the mafia wins.
## The Interface
During the day cycle of the game, all of the players vote for one player to be killed, and during the night cycle the players choose which other players to be the target of their abilities. Therefore, I chose the interface to be a list of buttons, with each player on one of the buttons. Whenever a player makes a decision (i.e., when a player votes for another player, or when the mafia decides to kill a player), UMafia puts the decision in an array called "decisions" and checks whether there are enough decisions to proceed. During the daytime, as soon as a majority of the decisions are made for one player, that player is killed and the game shifts to night. During the nighttime, UMafia waits for every player to make a decision, even players who have no role. These players must select another player, even if it does not do anything; otherwise people would be able to deduce who is in the mafia based on who is on their phones. After every player makes a decision during the nighttime, the game kills whichever player was chosen by the Godfather (the leader of the mafia), and then the game transitions back to daytime. In this way, the game goes back and forth between day and night until any win conditions are met.
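The site itself is plain JavaScript, but the day-phase rule it describes boils down to something like the following language-agnostic sketch (shown in Python, with our own names):

```python
from collections import Counter

def resolve_day(decisions, alive_players):
    """decisions: list of player names voted for, one entry per vote cast so far."""
    target, count = Counter(decisions).most_common(1)[0]
    if count > len(alive_players) // 2:   # strict majority reached
        alive_players.remove(target)
        return target                     # this player is killed and night begins
    return None                           # otherwise keep waiting for more votes

def check_win(alive_players, mafia):
    mafia_alive = [p for p in alive_players if p in mafia]
    if not mafia_alive:
        return "townspeople win"
    if len(mafia_alive) > len(alive_players) - len(mafia_alive):
        return "mafia wins"
    return None
```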
## Challenges
Several issues came up during the development of this game. During the production, everything broke multiple times, and most of the code was rewritten. In particular, using buttons in html was difficult, because I needed to create a function that created n buttons (where n is the number of players), and each button had to have an event listener for a different function. Initially this did not work, every event listener was the same. Google eventually led me to a solution in which I used the button object within the parameters of the event listener to convey information. Another issue was that of interaction between users. However, it would be hard to test if I constantly needed multiple users. Therefore, I made a temporary solution, which is a drop-down list of users, which would simulate different users. I have not yet created a way for users to interact, but it will be coming soon.
## Takeaways from YHack
After YHack, I feel optimistic about the future of UMafia. I have learned many things, and just as I have improved my code, my code has improved me. I stepped outside my comfort zone this time and searched through APIs for help in my code. Previously, I was doing all of my code from scratch, and challenging myself to use API code took a lot of time and energy. In the end, I did not end up using Google Cloud APIs as I had planned, but instead, I tried Bootstrap, which was much easier to access compared to Google's APIs. Using that allowed my entire interface to evolve. YHack made a boost in knowledge which will propel this project into the future.
## What's next?
The next few things for UMafia are integration of new roles, user interaction, and scalability. Adding new roles like doctor, detective and vigilante will improve the experience of the game. Adding user interaction is the final step before the application is functional. After that, UMafia should be expanded to include any number of players and many more new roles. | ## Inspiration
*Mafia*, also known as *Werewolf*, is a classic in-person party game that university and high school students play regularly. It's been popularized by hit computer games such as Town of Salem and Epic Mafia that serve hundreds of thousands of players, but where these games go *wrong* is that they replace the in-person experience with a solely online experience. We built Super Mafia as a companion app that people can use while playing Mafia with their friends in live social situations to *augment* rather than *replace* their experience.
## What it does
Super Mafia replaces the role of the game's moderator, freeing up every student to play. It also allows players to play character roles which normally aren't convenient or even possible in-person, such as the *gunsmith* and *escort*.
## How we built it
Super Mafia was built with Flask, Python, and MongoDB on the backend, and HTML, CSS, and Javascript on the front-end. We also spent time learning about mLab which we used to host the database.
## Challenges we ran into
Our biggest challenge was making sure that our user experience would be simple and approachable for young users, while still accommodating the extra features we built.
## Accomplishments that we're proud of
We survived the deadly combo of a cold night and the 5th floor air conditioning.
## What we learned
How much sleeping during hackathons actually improves your focus...lol
## What's next for Super Mafia
* Additional roles (fool, oracle, miller, etc) including 3rd party roles. A full list of potential roles can be found [here](https://epicmafia.com/role)
* Customization options (length of time/day)
* Last words/wills
* Animations and illustrations | *Everything in this project was completed during TreeHacks.*
*By the way, we've included lots of hidden fishy puns in our writeup! Comment how many you find!*
## TL; DR
* Illegal overfishing is a massive issue (**>200 billion fish**/year), disrupting global ecosystems and placing hundreds of species at risk of extinction.
* Satellite imagery can detect fishing ships but there's little positive data to train a good ML model.
* To get synthetic data: we fine-tuned Stable Diffusion on **1/1000th of the data** of a typical GAN (and 10x the training speed) on satellite pictures of ships and achieved comparable quality to SOTA. We only used **68** original images!
* We trained a neural network using our real and synthetic data that detected ships with **96%** accuracy.
* Built a global map and hotspot dashboard that lets governments view realtime satellite images, analyze suspicious activity hotspots, & take action.
* Created a custom polygon renderer on top of ArcGIS
* Our novel Stable Diffusion data augmentation method has potential for many other low-data applications.
Got you hooked? Keep reading!
## Let's get reel...
Did you know global fish supply has **decreased by [49%](https://www.scientificamerican.com/article/ocean-fish-numbers-cut-in-half-since-1970/)** since 1970?
While topics like deforestation and melting ice dominate sustainability headlines, overfishing is a seriously overlooked issue. After thoroughly researching sustainability, we realized that this was an important but under-addressed challenge.
We were shocked to learn that **[90%](https://datatopics.worldbank.org/sdgatlas/archive/2017/SDG-14-life-below-water.html) of fisheries are over-exploited** or collapsing. What's more, around [1 trillion](https://www.forbes.com/sites/michaelpellmanrowland/2017/07/24/seafood-sustainability-facts/?sh=2a46f1794bbf) (1,000,000,000,000) fish are caught yearly.
Hailing from San Diego, Boston, and other cities known for seafood, we were shocked to hear about this problem. Research indicates that despite many verbal commitments to fish sustainably, **one in five fish is illegally caught**. What a load of carp!
### People are shellfish...
Around the world, governments and NGOs have been trying to reel in overfishing, but economic incentives and self-interest mean that many ships continue to exploit resources secretly. It's hard to detect small ships on the world's 140 million square miles of ocean.
## What we're shipping
In short (we won't keep you on the hook): we used custom Stable Diffusion to create realistic synthetic image data of ships and trained convolutional neural networks (CNNs) to detect and locate ships from satellite imagery. We also built a **data visualization platform** for stakeholders to monitor overfishing. To enhance this platform, we **identified several hotspots of suspicious dark vessel activity** by digging into 55,000+ AIS radar records.
While people have tried to build AI models to detect overfishing before, accuracy was poor due to high class imbalance. There are few positive examples of ships on water compared to the infinite negative examples of patches of water without ships. Researchers have used GANs to generate synthetic data for other purposes. However, it takes around **50,000** sample images to train a decent GAN. The largest satellite ship dataset only has ~2,000 samples.
We realized that Stable Diffusion (SD), a popular text-to-image AI model, could be repurposed to generate unlimited synthetic image data of ships based on relatively few inputs. We were able to achieve highly realistic synthetic images using **only 68** original images.
## How we shipped it
First, we read scientific literature and news articles about overfishing, methods to detect overfishing, and object detection models (and limitations). We identified a specific challenge: class imbalance in satellite imagery.
Next, we split into teams. Molly and Soham worked on the front-end, developing a geographical analysis portal with React and creating a custom polygon renderer on top of existing geospatial libraries. Andrew and Sayak worked on curating satellite imagery from a variety of datasets, performing classical image transformations (rotations, flips, crops), fine-tuning Stable Diffusion models and GANs (to compare quality), and finally using a combination of real and synthetic data to train a CNN. Andrew also worked on design, graphics, and AIS data analysis. We explored Leap ML and Runway fine-tuning methods.
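A rough sketch of that final training step (ours, not the team's code): the directory layout, image size, and architecture are assumptions; the point is simply that real and synthetic chips sit side by side in the same labeled folders.

```python
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",  # assumed layout: data/train/ship, data/train/no_ship (real + synthetic mixed)
    image_size=(128, 128), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # ship vs. open water
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```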
## Challenges we tackled
Building Earth visualization portals is always quite challenging, but we could never have predicted the waves we would face. Among animations, rotations, longitude, latitude, country and ocean lines, and the most-feared WebGL, we had a lot to learn. For ocean lines, we made an API call to a submarine transmissions library and recorded features to feed into a JSON. Inspired by the beautiful animated globes of Stripe's and CoPilot's landing pages alike, we challenged ourselves to write our own, and succeeded.
Additionally, the synthesis between globe to 3D map was difficult, as it required building a new scroll effect compatible with the globe. These challenges, although significant at the time, were ultimately surmountable, as we navigated through their waters unforgivingly. This enabled the series of accomplishments that ensued.
It was challenging to build a visual data analysis layer on top of the ArcGIS library. The library was extremely granular, requiring us to assimilate the meshes of each individual polygon to display. To overcome this, we built our own component-based layer that enabled us to draw on top of a preexisting map.
## Making waves (accomplishments)
Text-to-image models are really cool but have failed to find that many real-world use cases besides art and profile pics. We identified and validated a relevant application for Stable Diffusion that has far-reaching implications for agriculture, industry, medicine, defense, and more.
We also made a sleek and refined web portal to display our results in just a short amount of time. We also trained a CNN on the real and synthetic data that detects ships with 96% accuracy.
## What we learned
### How to tackle overfishing:
We learned a lot about existing methods to combat overfishing that we didn't know about. We really became more educated on ocean sustainability practices and the pressing nature of the situation. We schooled ourselves on AIS, satellite imagery, dark vessels, and other relevant topics.
### Don't cast a wide net. And don't go overboard.
Originally, we were super ambitious with what we wanted to do, such as implementing Monte Carlo particle tracking algorithms to build probabilistic models of ship trajectories. We realized that we should really focus on a couple of ideas at max because of time constraints.
### Divide and conquer
We also realized that splitting into sub-teams of two to work on specific tasks and being clear about responsibilities made things go very smoothly.
### Geographic data visualization
Building platforms that enable interactions with maps and location data.
## What's on the horizon (implications + next steps)
Our Stable Diffusion data augmentation protocol has implications for few-shot learning of any object for agricultural, defense, medical and other applications. For instance, you could use our method to generate synthetic lung CT-Scan data to train cancer detection models or fine-tune a model to detect a specific diseased fruit not covered by existing general-purpose models.
We plan to create an API that allows anyone to upload a few photos of a specific object. We will build a large synthetic image dataset based off of those objects and train a plug-and-play CNN API that performs object location, classification, and counting.
While general purpose object detection models like YOLO work well for popular and broad categories like "bike" or "dog", they aren't feasible for specific detection purposes. For instance, if you are a farmer trying to use computer vision to detect diseased lychees. Or a medical researcher trying to detect cancerous cells from a microscope slide. Our method allows anyone to obtain an accurate task-specific object detection model. Because one-size-fits-all doesn't cut it.
We're excited to turn the tide with our fin-tech!
*How many fish/ocean-related puns did you find?* | losing |
## Inspiration
Inspired by a team member's desire to study through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app.
## What it does
Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (only needs to be a few seconds long) and a PDF of a textbook and uses existing Deepfake technology to synthesize the dictation from the textbook using the users' favorite voice. The Deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time-domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents!
## How we built it
We combined a number of different APIs and technologies to build this app. For leveraging scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing Deepfake code written for Python and Tensorflow (see Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating.
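A hypothetical glue sketch of that server flow (our own illustration; the route, field names, and the four placeholder helpers stand in for the real OCR call and the three voice-cloning networks):

```python
from flask import Flask, request, send_file

app = Flask(__name__)

# Placeholders for the real pieces: a cloud PDF-to-text call, the speaker encoder,
# the seq2seq synthesizer, and the WaveRNN vocoder.
def extract_text(pdf_file): ...
def encode_speaker(voice_file): ...
def synthesize_mel(text, embedding): ...
def vocode(mel): ...

@app.route("/synthesize", methods=["POST"])
def synthesize():
    voice = request.files["voice"]      # few-second template voice clip
    pdf = request.files["textbook"]     # textbook page(s) as PDF
    text = extract_text(pdf)
    embedding = encode_speaker(voice)   # fixed-size voice embedding
    mel = synthesize_mel(text, embedding)
    wav_path = vocode(mel)              # time-domain audio written to disk
    return send_file(wav_path, mimetype="audio/wav")

if __name__ == "__main__":
    app.run(debug=True)
```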
## Challenges we ran into
Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing Deepfake/Voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning.
## Accomplishments that we're proud of
We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into a usable app within hours.
## What we learned
We learned today that sometimes the seemingly simplest things (dealing with python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful. We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products.
## What's next for EduVoicer
EduVoicer still has a long way to go before it could gain users. Our first next step is to implementing functionality, possibly with some image segmentation techniques, to decide what parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan on ways of both increasing efficiency (time-wise) and scaling the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by length of input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file. | ## Inspiration
Cliff is dyslexic, so reading is difficult and slow for him and makes school really difficult.
But, he loves books and listens to 100+ audiobooks/yr. However, most books don't have an audiobook, especially not textbooks for schools, and articles that are passed out in class. This is an issue not only for the 160M people in the developed world with dyslexia but also for the 250M people with low vision acuity.
After moving to the U.S. at age 13, Cliff also needed something to help him translate assignments he didn't understand in school.
Most people become farsighted as they get older, but often don't have their glasses with them. This makes it hard to read forms when needed. Being able to listen instead of reading is a really effective solution here.
## What it does
Audiobook maker allows a user to scan a physical book with their phone to produce a digital copy that can be played as an audiobook instantaneously in whatever language they choose. It also lets you read the book with text at whatever size you like, to help people who have low vision acuity or are missing their glasses.
## How we built it
In Swift and iOS using Google ML and a few clever algorithms we developed to produce high-quality scanning, and high quality reading with low processing time.
## Challenges we ran into
We had to redesign a lot of the features to make the app user experience flow well and to allow the processing to happen fast enough.
## Accomplishments that we're proud of
We reduced the time it took to scan a book by 15X after one design iteration and cut the processing time it took to OCR (optical character recognition) the book from over an hour to effectively instantaneous, using an algorithm we built.
We allow the user to have audiobooks on their phone, in multiple languages, that take up virtually no space on the phone.
## What we learned
How to work with Google ML, how to work around OCR processing time. How to suffer through git Xcode Storyboard merge conflicts, how to use Amazon's AWS/Alexa's machine learning platform.
## What's next for Audiobook Maker
Deployment and use across the world by people who have Dyslexia or Low vision acuity, who are learning a new language or who just don't have their reading glasses but still want to function. We envision our app being used primarily for education in schools - specifically schools that have low-income populations who can't afford to buy multiple of books or audiobooks in multiple languages and formats.
## Treehack themes
treehacks education vertical > personalization > learning styles (build a learning platform, tailored to the learning styles of auditory learners) - I'm an auditory learner, and I've dreamed of a tool like this since the time I was 8 years old and struggling to learn to read. I'm so excited that now it exists and every student with dyslexia or a learning difference will have access to it.
treehacks education vertical > personalization > multilingual education (English-as-a-second-language students often get overlooked. Are there ways to leverage technology to create more open, multilingual classrooms?) Our software allows any book to become polylingual.
treehacks education vertical > accessibility > refugee education (What are ways technology can be used to bring content and make education accessible to refugees? How can we make the transition to education in a new country smoother?) - Make it so they can listen to material in their mother tongue if needed, or have a voice read along with them in English. Make it so that they can carry their books wherever they go by scanning a book once and then having it for life.
treehacks education vertical > language & literacy > mobile apps for English literacy (How can you build mobile apps to increase English fluency and literacy amongst students and adults?) - One of the best ways to learn how to read is to listen to someone else doing it and to follow along yourself. Audiobook Maker lets you do that. From a practical perspective, learning how to read is hard, and it is difficult for an adult learning a new language to achieve proficiency and a high reading speed. To bridge that gap, Audiobook Maker makes sure that every person can understand and learn from any text they encounter.
treehacks education vertical > language & literacy > in-person learning (many people want to learn second languages) - Audiobook Maker allows users to live in a foreign country and understand more of what is going on. It allows users to challenge themselves to read or listen to more of their daily work in the language they are trying to learn, and it can help users understand while they are studying a foreign language in cases where the meaning of text in a book or elsewhere is not clear.
We worked a lot with Google ML and Amazon AWS. | ## Inspiration
Vision is perhaps our most important sense; we use our sight every waking moment to navigate the world safely, to make decisions, and to connect with others. As such, keeping our eyes healthy is extremely important to our quality of life. In spite of this, we often neglect to get our vision tested regularly, even as we subject our eyes to many varieties of strain in our computer-saturated lives. Because visiting the optometrist can be both time-consuming and difficult to schedule, we sought to create MySight – a simple and inexpensive way to test our vision anywhere, using only a smartphone and a Google Cardboard virtual reality (VR) headset. This app also has large potential impact in developing nations, where administering eye tests cheaply using portable, readily available equipment can change many lives for the better.
## What it does
MySight is a general vision testing application that runs on any modern smartphone in concert with a Google Cardboard VR headset. It allows you to perform a variety of clinical vision tests quickly and easily, including tests for color blindness, stereo vision, visual acuity, and irregular blindspots in the visual field. Beyond informing the user about the current state of their visual health, the results of these tests can be used to recommend that the patient follow up with an optometrist for further treatment. One salient example would be if the app detects one or more especially large blindspots in the patient’s visual field, which is indicative of conditions requiring medical attention, such as glaucoma or an ischemic stroke.
## How we built it
We built MySight using the Unity gaming engine and the Google Cardboard SDK. All scripts were written in C#. Our website (whatswrongwithmyeyes.org) was generated using Angular2.
## Challenges we ran into
None of us on the team had ever used Unity before, and only two of us had even minimal exposure to the C# language in the past. As such, we needed to learn both Unity and C#.
## Accomplishments that we're proud of
We are very pleased to have produced a working version of MySight, which will run on any modern smartphone.
## What we learned
Beyond learning the basics of Unity and C#, we also learned a great deal more about how we see, and how our eyes can be tested.
## What's next for MySight
We envision MySight as a general platform for diagnosing our eyes’ health, and potentially for *improving* eye health in the future, as we plan to implement eye and vision training exercises (c.f. Ultimeyes). | winning |
Handling personal finances can be a challenging task, and there doesn't exist a natural user experience for engaging with your money. Online banking portals and mobile apps are one-off interactions that don't help people manage their money over the long term. We solved this problem with Alex.
We built Alex with the goal of making it easier to stay on top of your finances through a conversational user interface. We believe that the chatbot as a layer of abstraction over financial information will make managing budgets and exploring transactions easier tasks for people.
Use Alex to look at your bank balances and account summary. See how much you spent on Amazon over the last two months, or take a look at all of your restaurant transactions since you opened your account. You can even send money to your friends.
There were a few technically-challenging problems we had to solve while building Alex. We had to handle OAuth2 and other identification tokens through Facebook and bank account information to ensure security. Allowing the user to make queries in natural language required machine learning and training a model to identify different intents and parameters within a sentence. We even attempted to build a custom solution to maintain long-term memory for our bot—a still unsolved problem in natural language processing.
Alex is first and foremost a consumer product, but we believe that it provides value beyond the individual. With some additions, banks could use Alex to handle their customer support, saving countless hours of phone calls and wasted time on both ends. In a business setting, banks could learn much more about their customers' behavior through interactions with Alex. | ## Inspiration and What it does
We often go out with a lot of amazing friends for trips, restaurants, tourism, weekend expeditions and whatnot. Every outing has an associated Messenger group chat. We wanted a way to split money that is better than discussing it in the group chat, asking people for their public keys/usernames, and paying on a different platform. We've integrated the two so that we can do transactions and chat in a single place.
We (our team) believe that **"The Future of money is Digital Currency "** (-Bill Gates), and so, we've integrated payment with Algorand's AlgoCoins with the chat. To make the process as simple as possible without being less robust, we extract payment information out of text as well as voice messages.
## How I built it
We used the Google Cloud NLP and IBM Watson Natural Language Understanding APIs to extract the relevant information. Voice messages are first converted to text using Rev.ai speech-to-text. We complete the payment on the blockchain set up with the Algorand API. All scripts and the database are hosted on an AWS server.
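The write-up doesn't include code, but a minimal sketch of the extract-and-pay step might look like the following (Python; the regex is a deliberately simplified stand-in for the Google Cloud NLP / Watson NLU extraction, and the node URL and key handling are assumptions rather than the project's actual setup):

```python
import re
from algosdk.v2client import algod
from algosdk import transaction  # algosdk >= 2.0; older versions use algosdk.future.transaction

# Hypothetical node config -- replace with real credentials.
ALGOD_URL, ALGOD_TOKEN = "https://testnet-api.algonode.cloud", ""

def extract_payment(message: str):
    """Very rough stand-in for the NLP step: pull an amount and a payee name out of a chat message."""
    amount = re.search(r"\$?(\d+(?:\.\d+)?)", message)
    payee = re.search(r"(?:pay|send|owe)\s+(\w+)", message, re.IGNORECASE)
    if amount and payee:
        return payee.group(1), float(amount.group(1))
    return None

def send_algo(sender: str, sender_sk: str, receiver: str, microalgos: int) -> str:
    """Build, sign, and submit a simple Algorand payment transaction; amount is in microAlgos."""
    client = algod.AlgodClient(ALGOD_TOKEN, ALGOD_URL)
    params = client.suggested_params()
    txn = transaction.PaymentTxn(sender, params, receiver, microalgos)
    signed = txn.sign(sender_sk)
    return client.send_transaction(signed)

# extract_payment("I'll pay Alice 12.50 for pizza") -> ("Alice", 12.5)
```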
## Challenges I ran into
It turned out to be unexpectedly hard to accurately find out the payer and payee. Dealing with the blockchain part was a great learning experience.
## Accomplishments that I'm proud of
We were able to make it work in less than 24 hours.
## What I learned
A lot of different APIs
## What's next for Mess-Blockchain-enger
Different kinds of currencies, more messaging platforms | ## Inspiration
Picture this: we were lining up for dinner on the first day of the hackathon, thinking about how hungry we were. And then it hit us: what if we did something about food? Recommending places according to your interests? Helping people explore the city by recommending locations based on their emotional states?
## What it does
Our Discord Travel Therapy Bot aims to help Torontonians in their daily lives by recommending different places around the city based on their current mental state. The app strives to counter negative emotions by suggesting tailored destinations to explore that can help people feel better.
## How we built it
Our Discord bot's backend is mainly developed in Python, and with the help of the Discord API, the bot provides a user-friendly and understandable interface.
After getting input from users, our app uses Co:here to generate recommendations based on the emotions displayed by the user.
## Challenges we ran into
Along the development of our Discord Bot, we encountered many difficult challenges. One of the biggest challenges we encountered was learning how to create a Discord bot and apply the Discord API. Due to the various functions and limitations of Discord API, we discovered that our bot could only be used by one person at a time.
Another challenge was incorporating Co:here into our bot's functionality. As it was our first time using any form of AI or ML, learning how to use Co:here proved to be a difficult feat, especially ensuring the module would return the right kind of output.
Despite these challenges, we were able to successfully create a functioning bot, learning a lot in the process.
## What we learned
* How to create a discord bot
* AI and ML in the form of Co:here | partial |
## Inspiration
We wanted to make a game, and since we are both enthusiastic about health, we decided to focus on nutrition.
## What it does
Our game is focused around avoiding unhealthy foods and eating healthy foods.
## How we built it
We used the pygame package to implement the GUI, and we split up the work so that one of us handled the logic and the other the display.
## Challenges we ran into
Since we are all beginners, this was our first time using the pygame package. We had a lot of passion and a lot of ideas for this project, but unfortunately, due to our inexperience, we were unable to implement everything we wanted.
## Accomplishments that we're proud of
We believe we were able to implement an interactive learning game to educate others about nutrition with fun visuals that will make people laugh.
## What we learned
We learned how to use git to cooperate with each other. Meshing our different skills to achieve a common goal was a meaningful experience.
## What's next for Get fit or quit
Implementing new interactions between the foods and the character. Adding other foods. More playable characters. Increasing the number of fun settings to play the game. | ## Nutrition-Tracker Project
# Overview
The Nutrition-Tracker is a comprehensive fitness tracker designed to provide detailed descriptions of various foods and keep track of the user's dietary intake. This project integrates multiple functionalities, including database management, artificial intelligence (AI), and macronutrient planning, to create a seamless and user-friendly experience.
# Inspiration
The inspiration for this project came from a personal journey towards better health and fitness. As women who have struggled to keep track of our daily nutritional intake, we realized the need for a tool that could simplify this process. The existing apps were either too complex or lacked the specific features we were looking for, such as detailed nutritional information and AI-driven recommendations. This motivated us to build a customized solution that could cater to these requirements and help others achieve their fitness goals as well.
# What We Learned
Through the development of the Nutrition-Tracker, we gained significant insights into various domains:
* Database Management: We learned how to design and interact with databases efficiently, ensuring data integrity and optimal performance.
* AI Integration: Implementing AI functionalities, especially natural language processing (NLP) and machine learning (ML), taught us how to harness the power of AI for practical applications.
* Nutritional Science: Researching nutritional values and dietary recommendations expanded our understanding of macronutrients and their impact on health.
* Software Development: The project enhanced our skills in modular programming, error handling, and user interface design.
# Project Structure
**food.py**
The 'food.py' file is the backbone of the project, handling all interactions with the database; a small illustrative sketch follows the list below. It includes functions for:
* Connecting to the Database: Establishes a connection to the SQLite database.
* Executing SQL Queries: Performs CRUD (Create, Read, Update, Delete) operations on the food records.
* Exception Handling: Manages errors and exceptions that arise during database operations.
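For illustration, a pared-down version of such a SQLite CRUD layer could look like this (the table and column names are our own assumptions, not the project's actual schema):

```python
import sqlite3

def get_connection(db_path: str = "nutrition.db") -> sqlite3.Connection:
    """Connect to the database and make sure the (illustrative) foods table exists."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS foods ("
        "id INTEGER PRIMARY KEY, name TEXT, calories REAL, protein REAL)"
    )
    return conn

def add_food(conn: sqlite3.Connection, name: str, calories: float, protein: float) -> int:
    """Create a food record, with the kind of exception handling described above."""
    try:
        cur = conn.execute(
            "INSERT INTO foods (name, calories, protein) VALUES (?, ?, ?)",
            (name, calories, protein),
        )
        conn.commit()
        return cur.lastrowid
    except sqlite3.Error as exc:
        conn.rollback()
        raise RuntimeError(f"insert failed: {exc}") from exc

def get_food(conn: sqlite3.Connection, name: str):
    """Read a single food record by name."""
    return conn.execute("SELECT * FROM foods WHERE name = ?", (name,)).fetchone()
```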
**food\_ai.py**
The food\_ai.py file integrates AI capabilities to enhance the user experience. It leverages libraries like OpenAI to provide intelligent features such as:
* Nutritional Information Generation: Uses NLP to generate detailed nutritional information from food descriptions.
* Dietary Recommendations: Provides personalized dietary recommendations based on user inputs and goals.
* AI-Driven Tasks: Performs various tasks like predicting nutritional values for unlisted foods.
**macro\_planner.py**
The macro\_planner.py file is focused on planning and tracking macronutrient intake. It includes functionalities to:
* Create Meal Plans: Allows users to create daily or weekly meal plans.
* Track Nutritional Intake: Calculates the total nutritional intake for each meal or day.
* Manage Plans: Provides CRUD (Create, Read, Update, Delete) operations for meal plans.
**main.py**
The main.py file serves as the entry point for the application, orchestrating the interactions between different modules. It can be configured to run as a command-line interface (CLI) or a web server.
Key Features:
* Initialize Modules: Sets up the necessary modules and configurations.
* Handle User Inputs: Processes user commands or web requests.
* Coordinate Interactions: Ensures smooth interaction between 'food.py', 'food_ai.py', and 'macro_planner.py'.
# Challenges Faced
* Database Optimization: Ensuring the database could handle large volumes of data efficiently required careful indexing and query optimization.
* AI Model Integration: Integrating AI models, especially for NLP tasks, posed challenges in terms of model accuracy and performance.
* User Experience Design: Creating a user-friendly interface that was both functional and intuitive required multiple iterations and user feedback.
* Data Accuracy: Ensuring the nutritional data was accurate and reliable involved extensive research and validation.
# Conclusion
The Nutrition-Tracker project has been a rewarding endeavor, combining our interests in technology and fitness. By addressing the challenges and learning from them, we were able to create a tool that simplifies nutritional tracking and helps users make informed dietary choices. We are excited to continue improving this project and exploring new features to enhance its utility! | ## Inspiration
Being a student at the University of Waterloo, every other semester I have to attend interviews for co-op positions. Although it gets easier to talk to people the more often you do it, I still feel slightly nervous during such face-to-face interactions. In this nervousness, the fluency of my conversation isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application.
## What it does
InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations.
## How I built it
In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The mediaRecorder API was used to receive and parse spoken text into an audio file which later gets transcribed by the Watson API.
The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS.
## Challenges I ran into
"Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly.
## Accomplishments that I'm proud of
I am very proud of the entire application itself. Before coming to Qhacks, I only knew how to do Front-End Web Development. I didn't have any knowledge of back-end development or with using API's. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also for using multiple API's in one application successfully.
## What I learned
I learned a lot of things during this hackathon. I learned back-end programming, how to use APIs, and how to develop a coherent web application from scratch.
## What's next for InterPrep
I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project! | losing |
## Inspiration
Our inspiration came from the danger of skin-related diseases, along with the rising costs of medical care. DermaFix not only provides a free alternative for those who can't afford to visit a doctor, but also provides real-time diagnosis.
## What it does
Scans and analyzes the user's skin, determining if the user has any sort of skin disease. If anything is detected, possible remedies are provided, along with a Google map displaying nearby places to get treatment.
## How we built it
We learned to create a Flask application, using HTML, CSS, and JavaScript to develop the front end. We used TensorFlow, training an image classifier machine learning model to differentiate between clear skin and 20 skin diseases.
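As a rough illustration of how such a setup can fit together, here is a minimal Flask inference endpoint around a saved Keras classifier (the model filename, input size, and label list are assumptions, not the actual DermaFix values):

```python
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("skin_classifier.h5")   # hypothetical saved model
LABELS = ["clear_skin", "acne", "eczema"]                   # truncated example label list

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded image, resize to the model's input size, and normalize.
    img = Image.open(request.files["image"].stream).convert("RGB").resize((224, 224))
    batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
    probs = model.predict(batch)[0]
    top = int(np.argmax(probs))
    return jsonify({"label": LABELS[top], "confidence": float(probs[top])})
```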
## Challenges we ran into
Fine-tuning the image classifying model to be accurate at least 85% of the time.
## Accomplishments that we're proud of
Creating a model that is accurate 95% of the time.
## What we learned
HTML, CSS, Flask, TensorFlow
## What's next for DermaFix
Using a larger dataset for a much more accurate diagnosis, along with more APIs, in order to contact nearby doctors and automatically set appointments for those that need it | We created a web application with investment price predictions across asset classes utilizing regression and an underlying mean-reversion philosophy, incorporating R, HTML, CSS, and Javascript.
Our basic investment approach is intelligent mean-reversion analysis. We believe that the best assets to invest in are those that demonstrate an upward long-term trend; have a downward short-term trend, though depreciation has recently ceased (by mean reversion, these asset prices are at their trough, making now the optimal time to invest); and are undervalued, as determined by historical analysis and comparison with other markets. To accurately identify these assets, we analyze long-term trends (over the last two years), short-term trends (over the last month), and interrelationships between various asset classes.
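A simplified sketch of that screen, assuming daily closing prices and purely illustrative window lengths, might look like this in Python:

```python
import numpy as np

def screen_asset(prices: np.ndarray) -> bool:
    """prices: daily closes, oldest first (needs at least ~2 years of data).

    A rough sketch of the screen described above; the window lengths and the
    'depreciation has ceased' test are illustrative choices, not the real model.
    """
    days = np.arange(len(prices))
    long_coeffs = np.polyfit(days[-504:], prices[-504:], 1)   # ~2 trading years
    short_slope = np.polyfit(days[-21:], prices[-21:], 1)[0]  # ~1 trading month
    recent_slope = np.polyfit(days[-5:], prices[-5:], 1)[0]   # last trading week
    undervalued = prices[-1] < np.poly1d(long_coeffs)(days[-1])
    return (
        long_coeffs[0] > 0        # upward long-term trend
        and short_slope < 0       # downward short-term trend...
        and recent_slope >= 0     # ...whose decline has recently ceased
        and undervalued           # currently below its long-term trend line
    )
```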
Future forecasts are made by examining momentum (short and longer-term trends as assessed through regression) and whether the assets are currently overvalued or undervalued (evaluated by comparing current circumstances to long-term regression and trends). Long-term and short-term regression combined give a nuanced picture of the asset price trends and allow intelligent future forecasts where the degree to which short and long term analysis are incorporated into the prediction varies according to the amount of extrapolation (e.g. 1 month will emphasize short-term trends, 2-yr will emphasize long-term trends) and statistical confidence in trends inferred from regression data. | ## Inspiration
We college students can all relate to having a teacher who was not engaging enough during lectures, or who mumbled to the point where we could not hear them at all. Instead of finding solutions to help the students outside of the classroom, we realized that teachers need better feedback to see how they can improve themselves to create better lecture sessions and earn better RateMyProfessors ratings.
## What it does
Morpheus is a machine learning system that analyzes a professor's lesson audio in order to differentiate between various emotions portrayed through their speech. We then use an original algorithm to grade the lecture. Similarly, we record and score the professor's body language throughout the lesson using motion detection/analyzing software. We then store everything in a database and show the data on a dashboard which the professor can access and utilize to improve their body and voice engagement with students. This is all in hopes of allowing the professor to be more engaging and effective during their lectures through their speech and body language.
## How we built it
### Visual Studio Code/Front End Development: Sovannratana Khek
Used a premade React foundation with Material UI to create a basic dashboard. I deleted and added certain pages which we needed for our specific purpose. Since the foundation came with components pre-built, I looked into how they worked and edited them to work for our purpose instead of working from scratch, to save time on styling to a theme. I needed to add a couple of new original functionalities and connect to our database endpoints, which required learning a fetching library in React. In the end we have a dashboard with a development history displayed through a line graph representing a score per lecture (refer to section 2) and a selection for a single lecture summary display. This is based on our backend database setup. There is also space available for scalability and added functionality.
### PHP-MySQL-Docker/Backend Development & DevOps: Giuseppe Steduto
I developed the backend for the application and connected the different pieces of the software together. I designed a relational database using MySQL and created API endpoints for the frontend using PHP. These endpoints filter and process the data generated by our machine learning algorithm before presenting it to the frontend side of the dashboard. I chose PHP because it gives the developer the option to quickly get an application running, avoiding the hassle of converters and compilers, and gives easy access to the SQL database. Since we're dealing with personal data about the professor, every endpoint is only accessible after authentication (handled with session tokens), and data is stored following security best practices (e.g. salting and hashing passwords). I deployed a PhpMyAdmin instance to easily manage the database in a user-friendly way.
In order to make the software easily portable across different platforms, I containerized the whole tech stack using docker and docker-compose to handle the interaction among several containers at once.
### MATLAB/Machine Learning Model for Speech and Emotion Recognition: Braulio Aguilar Islas
I developed a machine learning model to recognize speech emotion patterns using MATLAB's Audio Toolbox, Simulink, and Deep Learning Toolbox. I used the Berlin Database of Emotional Speech to train my model. I augmented the dataset in order to increase the accuracy of my results and normalized the data in order to seamlessly visualize it using a pie chart, providing an easy and seamless integration with our database that connects to our website.
### Solidworks/Product Design Engineering: Riki Osako
Utilizing Solidworks, I created the 3D model design of Morpheus including fixtures, sensors, and materials. Our team had to consider how this device would be tracking the teacher’s movements and hearing the volume while not disturbing the flow of class. Currently the main sensors being utilized in this product are a microphone (to detect volume for recording and data), nfc sensor (for card tapping), front camera, and tilt sensor (for vertical tilting and tracking professor). The device also has a magnetic connector on the bottom to allow itself to change from stationary position to mobility position. It’s able to modularly connect to a holonomic drivetrain to move freely around the classroom if the professor moves around a lot. Overall, this allowed us to create a visual model of how our product would look and how the professor could possibly interact with it. To keep the device and drivetrain up and running, it does require USB-C charging.
### Figma/UI Design of the Product: Riki Osako
Utilizing Figma, I created the UI design of Morpheus to show how the professor would interact with it. In the demo shown, we made it a simple interface for the professor so that all they would need to do is scan in using their school ID, then either check their lecture data or start the lecture. Overall, the professor is able to see if the device is tracking their movements and volume throughout the lecture and see the results of their lecture at the end.
## Challenges we ran into
Riki Osako: Two issues I faced were learning how to model the product in Solidworks in a way that would feel simple for the user to understand, and using Figma for the first time. I had to do a lot of research, watching Amazon videos to see how they created their Amazon Echo model and looking back at my UI/UX notes from the Google Coursera certification course that I'm taking.
Sovannratana Khek: The main issues I ran into stemmed from my inexperience with the React framework. Oftentimes, I’m confused as to how to implement a certain feature I want to add. I overcame these by researching existing documentation on errors and utilizing existing libraries. There were some problems that couldn’t be solved with this method as it was logic specific to our software. Fortunately, these problems just needed time and a lot of debugging with some help from peers, existing resources, and since React is javascript based, I was able to use past experiences with JS and django to help despite using an unfamiliar framework.
Giuseppe Steduto: The main issue I faced was making everything run in a smooth way and interact in the correct manner. Often I ended up in a dependency hell, and had to rethink the architecture of the whole project to not over engineer it without losing speed or consistency.
Braulio Aguilar Islas: The main issue I faced was working with audio data in order to train my model and finding a way to quantify the fluctuations that resulted in different emotions when speaking. Also, the dataset was in German.
## Accomplishments that we're proud of
Achieved about 60% accuracy in detecting speech emotion patterns, wrote data to our database, and created an attractive dashboard to present the results of the data analysis while learning new technologies (such as React and Docker), even though our time was short.
## What we learned
As a team coming from different backgrounds, we learned how we could utilize our strengths in different aspects of the project to operate smoothly. For example, Riki is a mechanical engineering major with little coding experience, but we were able to use his strengths in that area to create a visual model of our product and a UI design interface using Figma. Sovannratana is a freshman at his first hackathon and was able to use the experience to create a website for the first time. Braulio and Giuseppe were the most experienced on the team, but we were all able to help each other, not just in the coding aspect but with different ideas as well.
## What's next for Untitled
We have a couple of ideas on how we would like to proceed with this project after HackHarvard and after hibernating for a couple of days.
From a coding standpoint, we would like to improve the UI experience for the user on the website by adding more features and better style designs for the professor to interact with. In addition, add motion tracking data feedback to the professor to get a general idea of how they should be changing their gestures.
We would also like to integrate a student portal and gather data on student performance to help the teacher better understand where the students need the most help.
From a business standpoint, we would like to possibly see if we could team up with our university, Illinois Institute of Technology, and test the functionality of it in actual classrooms. | losing |
## Inspiration
Traffic is a pain and a hurdle for everyone. It costs time and money for everyone stuck within it. We wanted to empower everyone to focus on what they truly enjoy instead of having to waste their time in traffic. We found the challenge of connecting autonomous vehicles and enabling them to work closely with each other to maximize traffic flow very interesting. We were specifically interested in aggregating real data to make decisions and evolving those decisions over time using artificial intelligence.
## What it does
We engineered an autonomous network that minimizes the time delay for each car in the network as it moves from its source to its destination. The idea is to have 0 intersections, 0 accidents, and maximize traffic flow.
We did this by developing a simulation in P5.js and training a network of cars to interact with each other in such a way that they do not collide and still travel from their source to target destination safely. We slowly iterated on this idea by first introducing incentivizing factors and penalty points. This allowed the cars to learn not to collide with each other and to follow the goal they're set out to achieve. After creating a full simulation with intersections (allowing cars to turn and drive so they stop the least number of times), we created a simulation in Unity. This simulation looked much nicer and took the values trained by our best result from our genetic AI. From the video, we can see that the generation is flawless; there are no accidents, and traffic flows seamlessly. This was the result of hundreds of generations of training of the genetic AI. You can see our video for more information!
## How I built it
We trained an evolutionary AI on many physical parameters to optimize for no accidents and maximal speed. This allowed the AI to experiment with different weights for each factor in order to reach our goal: having the cars travel from source to destination while staying a safe distance away from all other cars.
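As a toy illustration of that kind of evolutionary loop (the fitness function below is a dummy stand-in for the actual driving simulation, and all constants are illustrative):

```python
import random

POP_SIZE, N_WEIGHTS, MUTATION = 50, 8, 0.1

def evaluate(weights):
    """Stand-in fitness: in the real project this would run the simulation with these
    weights and return (reward for reaching destinations quickly) - (collision penalty)."""
    return -sum(w * w for w in weights)  # dummy value so the sketch runs

def breed(a, b):
    child = [random.choice(pair) for pair in zip(a, b)]       # crossover
    return [w + random.gauss(0, MUTATION) for w in child]     # mutation

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(POP_SIZE)]
for generation in range(100):
    ranked = sorted(population, key=evaluate, reverse=True)
    parents = ranked[: POP_SIZE // 5]                         # keep the best 20%
    population = parents + [
        breed(random.choice(parents), random.choice(parents))
        for _ in range(POP_SIZE - len(parents))
    ]
```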
## Challenges we ran into
Deciding which parameters to tune, removing any bias, and setting up the testing environment. To remove bias, we ended up introducing randomly generated parameters in our genetic AI and "breeding" two good outcomes. Setting up the simulation was also tricky as it involved a lot of vector math.
## Accomplishments that I'm proud of
Getting the network to communicate autonomously and work in unison to avoid accidents and maximize speed. It's really cool to see the genetic AI evolve from not being able to drive at all, to fully being autonomous in our simulation. If we wanted to apply this to the real world, we can add more parameters and have the genetic AI optimize to find the parameters needed to reach our goals in the fastest time.
## What I learned
We learned how to model and train a genetic AI. We also learned how to deal with common issues and deal with performance constraints effectively. Lastly, we learned how to decouple the components of our application to make it scalable and easier to update in the future.
## What's next for Traffix
We want to expand the user-facing features of the mobile app and improve the data analytics platform for the city. We also want to be able to extend this to more generalized parameters so that it could be applied in more dimensions. | ## Inspiration
We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. With emerging and developing technology, we implemented neural network models to recognize objects and images, and converting them to an auditory output.
## What it does
XTS takes an **X** and turns it **T**o **S**peech.
## How we built it
We used PyTorch, Torchvision, and OpenCV with Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program.
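For example, a minimal frame-description routine built on a pre-trained Torchvision detector might look like this (the `weights` argument and label handling vary with the Torchvision version installed, and the label map here is truncated for illustration):

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

COCO_NAMES = {1: "person", 3: "car", 17: "cat", 18: "dog"}   # truncated COCO label map

def describe(image_path: str, threshold: float = 0.8) -> str:
    """Run the detector on one frame and build a sentence that can be spoken aloud."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        pred = model([img])[0]
    found = [
        COCO_NAMES.get(int(lbl), f"object {int(lbl)}")
        for lbl, score in zip(pred["labels"], pred["scores"])
        if score > threshold
    ]
    return "I see " + ", ".join(found) if found else "Nothing recognized."

# describe("frame.jpg") could then be handed to a text-to-speech engine.
```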
## Challenges we ran into
While attempting to run the Python code, the video rendering and text-to-speech were out of sync, and the frame-by-frame object recognition was limited in speed by our system's graphics processing and its capacity for running machine-learning models. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration due to backwards incompatibilities between module versions.
## Accomplishments that we're proud of
We are so proud that we were able to implement neural networks as well as object detection using Python. We were also happy to be able to test our program with various images and video recordings and get accurate output. Lastly, we were able to create a sleek user interface that integrates with our program.
## What we learned
We learned how neural networks function and how to augment the machine learning model including dataset creation. We also learned object detection using Python. | # The project
HomeSentry is an open source platform that turns your old phones or any devices into a distributed security camera system. Simply install our app on any mobile device and start monitoring your home (or any other place). HomeSentry gives a new life to your old devices while bringing you the peace of mind you deserve!
## Inspiration
We all have old phones stored in the bottom of a drawer waiting to be used for something. This is where the inspiration for HomeSentry came from. We wanted to give our old cellphones and electronic devices a new use so they don't just collect dust over time. Generally speaking, every cellphone has a camera that could be used for something, and we thought using it for security reasons would be a great idea. Home surveillance camera systems are often very expensive or complicated to set up. Our solution is very simple and costs close to nothing since it's equipment you already have.
## How it works
HomeSentry turns your old cellphones into a complete security system for your home. It's a modular solution where you can register as many devices as you have at your disposal. Every device is linked to your account and automatically streams its camera feed to your personal dashboard in real time. You can view your security footage from anywhere by logging in to your HomeSentry dashboard.
## How we built it
The HomeSentry platform consists of 3 main components:
#### 1. HomeSentry Server
The server's main responsibility is to handle all authentication requests and orchestrate camera connections. It is in charge of connecting the mobile app to the user's dashboard so that the dashboard can receive the live stream footage. This server is built with Node.js and uses MongoDB to store user accounts.
#### 2. HomeSentry Mobile app
The user opens the app on their cellphone and enters their credentials. They may then start streaming the video from their camera to the server. The app is currently a web app built with the Angular framework. We plan to convert it to an Android/iOS application using Apache Cordova at a later stage.
#### 3. HomeSentry Dashboard
The dashboard is the user's main management panel. It allows the user to watch all of the streams they are receiving from the connected cellphones. The website was also built with Angular.
## Technology
On a more technical note, this app uses several open source frameworks and libraries to accomplish its work. Here's a quick summary.
The NodeJS server is built with TypeScript, Express.JS. We use Passport.JS + MongoDB as our authentication system and SocketIO to exchange real time data between every user's devices (cameras) and the dashboard.
On the mobile side we are using WebRTC to access the devices' camera stream and to link it to the dashboard. Every camera stream is distributed over a peer-to-peer connection with the web dashboard when it becomes active. This ensures the streams' privacy and reduces video latency. We used Peer.JS and SocketIO to implement this mechanism.
Just like the mobile client, the web dashboard is built with Angular and frontend libraries such as Bootstrap or feather-icons.
## Challenges we ran into (and what we've learned)
Overall, we've learned that sending live streams is quite complicated!
We had underestimated the effort required to send and manage this feed. While working with this type of media, we learned how to communicate with WebRTC. At the beginning, we tried to do everything ourselves and use different protocols such as RTMP, but we came to a point where it was a little buggy. Late in the event, we found and used the PeerJS lib to manage those streams, and it considerably simplified our code.
We found that working with mobile frameworks like Xamarin is much more complicated for this kind of project. The easiest way was clearly JavaScript, and it allows a greater variety of devices to be registered as cameras.
The project also helped us improve our knowledge of real time messaging and WebSocket by using SocketIO to add a new stream without having to refresh the web page.
We also used an authentication library we hadn't used before, called PassportJS for Node. With this we were able to show only the streams of a specific user.
We hosted an app service with NodeJS on Azure for the first time and configured the CI from GitHub. It's nice to see that they use GitHub Actions to automate this process.
We've also sharpened our skills with various frontend technologies such as Angular.
## What's next for HomeSentry
HomeSentry works very well for displaying the feeds of a specific user. Now, what might be cool is to add some analytics on that feed to detect motion and different events. We could send a notification of these movements by SMS/email, or even send a push notification if we could compile this application in Cordova and distribute it to the App Store and Google Play. Adding the ability to record and save the feed when motion is detected could be a great addition. With detection, we should store this data in local storage and in the cloud. Working offline could also be a great addition. Finally, improving quality assurance to ensure that the panel works on any device would be a great idea.
We believe that HomeSentry can be used in many residences and we hope this application will help people secure their homes without having to invest in expensive equipment. | partial
## Inspiration
We wanted to make hardware technology that allows people with visual disabilities to be able to feel as independent as anyone else.
## What it does
It can direct people with visual disabilities to their desired destination, letting them know when to take a turn and whether there is a person or object in front of them to avoid.
## How I built it
We built the application side using Swift with the MapKit and Bluetooth frameworks. We built the hardware side with an Arduino, C code, and a Bluetooth module to communicate with the mobile app.
## Challenges I ran into
We could not pair our Bluetooth module initially and we also could not add sound commands.
## Accomplishments that I'm proud of
We made a portable prototype, which we can start using to test whether it can help people with visual disabilities.
## What I learned
We learned a lot about hardware components and how to configure them properly with different parts such as ultrasonic sensors and servo motors.
## What's next for Bliglass
Testing the prototype with people | ## Inspiration
Both of us have glasses and have pretty bad vision. Even though we are nowhere near severe visually impairment, problems dealing with eyesight are things we easily relate to. We could only imagine how it felt to never be able to see properly, because even by removing our glasses, a lot of our sight is gone and life is so uncomfortable in those moments. We also have seen people try to combat this problem in the past, but none of the solutions have been successfully grasped by society. We wanted to give our best shot at changing that.
## What it does
Mirage has 2 parts: the module and the app. The module is a 3D printed device that mounts onto any standard cane that visually impaired people use. The module has a 1080p camera that is connected to a gimbal, and recognizes street signs, humans, roadways, paths, obstacles, that are in front of the person as they walk, and then sends audio to bluetooth headphones informing the person of all these things. The module + headphone setup keeps the individual aware of their surroundings at all times, even when they are alone. But, Mirage is so much more. Mirage is the perfect cross-section between hardware and software to help the visually impaired. The mobile application can be downloaded by the blind person's caretaker, and with the app, they are able to communicate via audio to the visually impaired person's headset, as well as get a live camera feed of the cane's field of view as the person is walking. The caretaker can do whatever they need to, even when their loved one is away and alone, but can still check in when needed.
## How we built it
We used Autodesk Inventor to design the 3D printable module from scratch. We used a Raspberry Pi to perform all Computer Vision using Python and OpenCV. We only wanted to use a camera for vision because we were able to get the most information, with the least amount of sensors. Once we recognized objects, we used a TTS (text to speech) software to create warnings and commands for the visually impaired person that would be played via bluetooth connection from the Raspberry Pi to Apple Airpods. Since the cane is always moving at different angles, we didn't want the camera feed to be all shaky, so we created a gimbal mechanism utilizing control algorithms with a servo, IMU, and Arduino UNO board to stabilize the camera as the person walked. We connected all of these components to a portable battery pack that was also mounted on the cane. In order to build the caretaker's app, we used Swift and Xcode. In order to achieve live streamed video between Python (Raspberry Pi) and Swift (iPhone) we used the Twilio Video API. We created a webpage using Ngrok, node.js, and html/css/js that would be launched with a Python program on the Raspberry Pi, which would connect to Swift on the iPhone to achieve a live stream. Because of the livestream, we were also able to send messages of GPS coordinates of the module device to the app, to then be plotted on a map for the caretaker. We also coded secure user authentication for the caretakers by utilizing Google's Firebase Authentication/Database servers, so they can each securely log in to our app and store any private information about them or their family on the app.
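As a very rough sketch of the detection-to-audio loop on the Pi (OpenCV's built-in HOG person detector and pyttsx3 are used here purely as illustrative stand-ins for our actual vision pipeline and Bluetooth audio setup):

```python
import cv2
import pyttsx3

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
tts = pyttsx3.init()
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))            # keep the Pi's workload light
    people, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(people) > 0:
        tts.say("Person ahead")                      # played over the paired headphones
        tts.runAndWait()
```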
## Challenges we ran into
The biggest challenge we ran into was our choice to use a Raspberry Pi to power all the computer vision and live streaming. Not only had we never programmed with a Pi before, but we quickly found out that it was very, very slow. We originally were working with Amazon AWS's Rekognition API to better detect surroundings, but the board wouldn't stay under 85-90 degrees C with it running in addition to the other things. We ended up needing to make a lot of compromises to our CV software because of its hardware limitations, so in the future we certainly plan to use a more powerful machine, like an Nvidia Jetson Nano. Another one of the biggest challenges we ran into was successfully creating a live video stream across multiple platforms (Python, Swift, HTML/CSS/JS). Although we had worked a little with these languages in the past, we struggled a lot getting each individual portion to work on its own even before we tried to connect everything.
## Accomplishments that we're proud of
One of the things that we are most proud of this hackathon is our ability to use so much technology that we had never messed with in the past, yet still finish our work in a timely fashion. We both certainly agree that of the few hackathons we have gone to, we definitely learned the most at PennApps, and maybe that is thanks to the 36 hours of hacking time. In the past, we were known for always cutting it close and finishing at the end, but here we had a bit of extra wiggle room and time to make modifications and really test everything towards the end.
## What we learned
While working on Mirage with the Raspberry Pi, we learned a lot about Python, OpenCV, and the whole world of computer vision. We also tried to implement many APIs throughout portions of this hack, something that we never really messed with in the past. After seeing how beneficial and useful they are, they will certainly make their way into every hack we create in the future. We also learned a lot about Swift, as it was only our second time ever using Xcode to create mobile apps. We feel a lot more comfortable with the environment and utilizing all its powerful features. And lastly, our ambition to incorporate so many things into this project taught us so much in so many different areas this weekend, which is certainly our most valuable takeaway.
## What's next for Mirage
First off, we plan to switch out the Raspberry Pi with a much more powerful alternative so our computer vision and live streaming can work seamlessly without lag and delay. We also plan to improve our voice feedback with the Apple Airpods. Right now, we're like Siri. One day, we'll be like Jarvis. | ## Inspiration
When it comes to finding solutions to global issues, we often feel helpless: making us feel as if our small impact will not help the bigger picture. Climate change is a critical concern of our age; however, the extent of this matter often reaches beyond what one person can do....or so we think!
Inspired by the feeling of "not much we can do", we created *eatco*. *Eatco* allows the user to gain live updates and learn how their usage of the platform helps fight climate change. This allows us to not only present users with a medium to make an impact but also helps spread information about how mother nature can heal.
## What it does
While *eatco* is centered around providing an eco-friendly alternative lifestyle, we narrowed our approach to something everyone loves and can adapt to: food! Beyond the many health benefits of adopting a vegetarian diet, such as lowering cholesterol intake and protecting against cardiovascular diseases, a meatless diet also allows you to reduce greenhouse gas emissions, which contribute to 60% of our climate crisis. Providing users with a vegetarian (or vegan!) alternative to their favourite foods, *eatco* aims to use small wins to create a big impact on the issue of global warming. Moreover, with an option to connect their *eatco* account with Spotify, we engage our users and make them love the cooking process even more by using their personal song choices, mixed with the flavours of our recipe, to create a personalized playlist for every recipe.
## How we built it
For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. There are three main pages the site routes to: the main app, and the login and register page. The login pages utilized a minimalist aesthetic with a CSS style sheet integrated into an HTML file while the recipe pages used React for the database. Because we wanted to keep the user experience cohesive and reduce the delay with rendering different pages through the backend, the main app — recipe searching and viewing — occurs on one page. We also wanted to reduce the wait time for fetching search results so rather than rendering a new page and searching again for the same query we use React to hide and render the appropriate components. We built the backend using the Flask framework. The required functionalities were implemented using specific libraries in python as well as certain APIs. For example, our web search API utilized the googlesearch and beautifulsoup4 libraries to access search results for vegetarian alternatives and return relevant data using web scraping. We also made use of Spotify Web API to access metadata about the user’s favourite artists and tracks to generate a personalized playlist based on the recipe being made. Lastly, we used a mongoDB database to store and access user-specific information such as their username, trees saved, recipes viewed, etc. We made multiple GET and POST requests to update the user’s info, i.e. saved recipes and recipes viewed, as well as making use of our web scraping API that retrieves recipe search results using the recipe query users submit.
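To give a flavour of what the recipe-search endpoint could look like, here is a simplified sketch (the way result titles are scraped here is an assumption rather than our production scraper, and real use would need to respect rate limits):

```python
import requests
from bs4 import BeautifulSoup
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/recipes")
def recipes():
    # Build a vegetarian-flavoured search query from the user's recipe request.
    query = request.args.get("q", "") + " vegetarian recipe"
    html = requests.get(
        "https://www.google.com/search", params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"}, timeout=10,
    ).text
    # Result titles commonly appear in <h3> tags on the results page (an assumption).
    soup = BeautifulSoup(html, "html.parser")
    titles = [h.get_text(strip=True) for h in soup.find_all("h3")][:5]
    return jsonify({"query": query, "results": titles})
```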
## Challenges we ran into
In terms of the front-end, we should have considered implementing Routing earlier because when it came to doing so afterward, it would be too complicated to split up the main app page into different routes; this however ended up working out alright as we decided to keep the main page on one main component. Moreover, integrating animation transitions with React was something we hadn’t done and if we had more time we would’ve liked to add it in. Finally, only one of us working on the front-end was familiar with React so balancing what was familiar (HTML) and being able to integrate it into the React workflow took some time. Implementing the backend, particularly the spotify playlist feature, was quite tedious since some aspects of the spotify web API were not as well explained in online resources and hence, we had to rely solely on documentation. Furthermore, having web scraping and APIs in our project meant that we had to parse a lot of dictionaries and lists, making sure that all our keys were exactly correct. Additionally, since dictionaries in Python can have single quotes, when converting these to JSONs we had many issues with not having them be double quotes. The JSONs for the recipes also often had quotation marks in the title, so we had to carefully replace these before the recipes were themselves returned. Later, we also ran into issues with rate limiting which made it difficult to consistently test our application as it would send too many requests in a small period of time. As a result, we had to increase the pause interval between requests when testing which made it a slow and time consuming process. Integrating the Spotify API calls on the backend with the frontend proved quite difficult. This involved making sure that the authentication and redirects were done properly. We first planned to do this with a popup that called back to the original recipe page, but with the enormous amount of complexity of this task, we switched to have the playlist open in a separate page.
## Accomplishments that we're proud of
Besides our main idea of helping users reduce their carbon footprint, we are proud of accomplishing our Spotify integration. Using the Spotify API and metadata was something none of the team had worked with before, and we're glad we learned the new skill because it adds great character to the site. We all love music, and being able to use metadata for personalized playlists satisfied our inner musical geek; the integration turned out great, so we're really happy with the feature. Along with our vast recipe database thus far, we are also proud of our integration! Creating a full-stack database application can be tough and putting together all of our different parts was quite hard, especially as it's something we have limited experience with; hence, we're really proud of our service layer. Finally, this was the first time our front-end developers used React for a hackathon; using it in a time- and resource-constrained environment for the first time and managing to do it as well as we did is also one of our greatest accomplishments.
## What we learned
This hackathon was a great learning experience for all of us because everyone delved into a tool that they'd never used before! As a group, one of the main things we learned was the importance of a good git workflow because it allows all team members to have a medium to collaborate efficiently by combing individual parts. Moreover, we also learned about Spotify embedding which not only gave *eatco* a great feature but also provided us with exposure to metadata and API tools. Moreover, we also learned more about creating a component hierarchy and routing on the front end. Another new tool that we used in the back-end was learning how to perform database operations on a cloud-based MongoDB Atlas database from a python script using the pymongo API. This allowed us to complete our recipe database which was the biggest functionality in *eatco*.
## What's next for Eatco
Our team is proud of what *eatco* stands for and we want to continue this project beyond the scope of this hackathon and join the fight against climate change. We truly believe in this cause and feel eatco has the power to bring meaningful change; thus, we plan to improve the site further and release it as a web platform and a mobile application. Before making *eatco* publicly available we want to add more functionality, further improve the database, and present the user with a more accurate update of their carbon footprint. In addition to making our recipe database bigger, we also want to focus on enhancing the front-end for a better user experience. Furthermore, we hope to include features such as connecting to maps (if the user doesn't have a certain ingredient, they will be directed to the nearest facility where that item can be found), and better use of the Spotify metadata to generate even better playlists. Lastly, we want to add a water-saved feature to help address the global water crisis, because eating green also helps cut back on wasteful water consumption! We firmly believe that *eatco* can go beyond the range of the last 36 hours and make impactful change on our planet; hence, we want to share with the world that global issues don't always need huge corporate or public support to be solved; one person can also make a difference. | losing
## Inspiration
We wanted to make a unique strategy board game.
The game 2048 gave us the idea to make a board game involving numbers.
## What it does
It is a game that involves thinking, some simple calculations, and a bit of luck.
## How we built it
Using HTML5, CSS, and JS with its libraries (jQuery), we managed to create a dynamic page.
## Challenges we ran into
The toughest part was to split the canvas, get the coordinates and clear a specific part of the canvas.
## Accomplishments that we're proud of
Coming up with the idea and implementing the interface and the necessary code in 12 hours is something we're really proud of. Also, it is an original idea for a game; hopefully no one has beaten us to it.
## What's next for Grand Multiple
We'll add some features, fix some minor bugs, and improve the design. | ## Inspiration
We often see people throw garbage in the recycling, or recycling in the garbage, and wanted to, in some way, take action on that.
## What it does
It educates the greater population on how to recycle, and how not to, in a gamified way.
## How we built it
Most of us worked on the website; one member worked on the game.
## Challenges we ran into
It was difficult to set up Git for us to all collaborate together since we've never touched the technology. Alongside that, learning brand-new technologies was a challenge in itself, and only now do we somewhat grasp HTML, CSS, and JS.
## Accomplishments that we're proud of
The fact we managed to make the site and game in such a short amount of time is great!
## What we learned
How to
* Program in HTML, CSS, JS
* Use Godot to make games and host them online
* Use Git
* Set up repositories and collaborate on projects
## What's next for Sort It Out
* We want to update the site and game to be more robust and modern. With time, the game and site could end up being very good. As is, it's only sufficient at best. | ## Inspiration
We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept except towards making more sustainable purchases online.
## What it does
Recommends sustainable and local business alternatives when shopping online.
## How we built it
Front-end was built with React.js and Bootstrap. The back-end was built with Python, Flask and CockroachDB.
## Challenges we ran into
Difficulties setting up the environment across the team, especially with cross-platform development in the back-end. Extracting the current URL from a webpage was also challenging.
## Accomplishments that we're proud of
Creating a working product!
Successful end-to-end data pipeline.
## What we learned
We learned how to implement a Chrome Extension. Also learned how to deploy to Heroku, and set up/use a database in CockroachDB.
## What's next for Conscious Consumer
First, it's important to expand to make it easier to add local businesses. We want to continue improving the relational algorithm that takes an item on a website, and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data easier. | losing |
## Inspiration
We were inspired by our passion for mental health awareness, journaling, and giraffes to create Giraffirmations. During this time of isolation, we found ourselves falling into negative mindsets and allowing feelings of hopelessness to creep in. This greatly impacted our mental health and we saw that journalling centred around gratitude helped to improve our attitudes. We also found ourselves spending hours in front of our computers and thought that it would be a good idea to allow for quick journalling breaks right from our favourite browser.
## What it does
Giraffirmations prompts users to reflect on positive experiences and promotes feelings of gratitude. Users can jot down their feelings and save them for future references, reinforcing happy thought patterns!
There is also a hidden easter egg for additional fun surprises to boost the user's mood :)
## How we built it
* 60% JavaScript
* 25% HTML
* 15% CSS
* 110% passion and fun! (plus some helpful APIs)
## Challenges we ran into
* Implementing tabs within the extension
* Using Chrome Storage Sync API
* Retrieving the world date and time using JavaScript
* Controlling Youtube ad frequencies
## Accomplishments that we're proud of
* Learning JavaScript in a day
* Working in a team of 2!
* Learning how to randomize link destinations
* Coming up with a great extension name
## What we learned
* Chrome Storage Sync API is HARD
* Colours and fonts matter
* Version control is a lifesaver
## What's next for Giraffirmations
* Showing all of the user's previous entries
* Implementing reminder notifications to journal
* Gamification aspects (growing a Giraffe through positivity!)
* Dark mode! | ## Inspiration
Our inspiration comes from many of our own experiences with dealing with mental health and self-care, as well as from those around us. We know what it's like to lose track of self-care, especially in our current environment, and wanted to create a digital companion that could help us in our journey of understanding our thoughts and feelings. We were inspired to create an easily accessible space where users could feel safe in confiding in their mood and check-in to see how they're feeling, but also receive encouraging messages throughout the day.
## What it does
Carepanion allows users an easily accessible space to check-in on their own wellbeing and gently brings awareness to self-care activities using encouraging push notifications. With Carepanion, users are able to check-in with their personal companion and log their wellbeing and self-care for the day, such as their mood, water and medication consumption, amount of exercise and amount of sleep. Users are also able to view their activity for each day and visualize the different states of their wellbeing during different periods of time. Because it is especially easy for people to neglect their own basic needs when going through a difficult time, Carepanion sends periodic notifications to the user with messages of encouragement and assurance as well as gentle reminders for the user to take care of themselves and to check-in.
## How we built it
We built our project through the collective use of Figma, React Native, Expo and Git. We first used Figma to prototype and wireframe our application. We then developed our project in Javascript using React Native and the Expo platform. For version control we used Git and Github.
## Challenges we ran into
Some challenges we ran into included transferring our React knowledge into React Native knowledge, as well as handling package managers with Node.js. With most of our team having working knowledge of React.js but being completely new to React Native, we found that while some of the features of React were easily interchangeable with React Native, some features were not, and we had a tricky time figuring out which ones did and didn't. One example of this is passing props; we spent a lot of time researching ways to pass props in React Native. We also had difficult time in resolving the package files in our application using Node.js, as our team members all used different versions of Node. This meant that some packages were not compatible with certain versions of Node, and some members had difficulty installing specific packages in the application. Luckily, we figured out that if we all upgraded our versions, we were able to successfully install everything. Ultimately, we were able to overcome our challenges and learn a lot from the experience.
## Accomplishments that we're proud of
Our team is proud of the fact that we were able to produce an application from ground up, from the design process to a working prototype. We are excited that we got to learn a new style of development, as most of us were new to mobile development. We are also proud that we were able to pick up a new framework, React Native & Expo, and create an application from it, despite not having previous experience.
## What we learned
Most of our team was new to React Native, mobile development, as well as UI/UX design. We wanted to challenge ourselves by creating a functioning mobile app from beginning to end, starting with the UI/UX design and finishing with a full-fledged application. During this process, we learned a lot about the design and development process, as well as our capabilities in creating an application within a short time frame.
We began by learning how to use Figma to develop design prototypes that would later help us in determining the overall look and feel of our app, as well as the different screens the user would experience and the components that they would have to interact with. We learned about UX, and how to design a flow that would give the user the smoothest experience. Then, we learned how basics of React Native, and integrated our knowledge of React into the learning process. We were able to pick it up quickly, and use the framework in conjunction with Expo (a platform for creating mobile apps) to create a working prototype of our idea.
## What's next for Carepanion
While we were nearing the end of work on this project during the allotted hackathon time, we thought of several ways we could expand and add to Carepanion that we did not have enough time to get to. In the future, we plan on continuing to develop the UI and functionality, ideas include customizable check-in and calendar options, expanding the bank of messages and notifications, personalizing the messages further, and allowing for customization of the colours of the app for a more visually pleasing and calming experience for users. | ## Inspiration
The inspiration behind Go Desk was to take AI chatbots to the next level for SMEs and startups. We wanted to help businesses focus on growth and innovation rather than getting bogged down by repetitive customer support tasks. By automating support calls, we aim to give businesses more time to build and scale.
## What it does
Go Desk is a phone-based customer support AI agent that allows businesses to create intelligent agents for answering customer questions and performing specific tasks. These agents go beyond simple responses—they can cancel orders, book appointments, escalate cases, and update information, without requiring human intervention.
## How we built it
Go Desk was built on Open AI's reliable APIs for conversational generation and intent comprehension. We integrated Twilio to handle phone calls programmatically, using speech-to-text for voice input processing. The backend was developed with Node.js and TypeScript, while the frontend was built with Vue.js.
## Challenges we ran into
We faced a few challenges, especially in figuring out the right idea to implement. Initially, we planned a hardware project, but due to lack of components and other issues, we decided to pivot to an AI-based solution. Without a designer on the team, we had to get creative with the UI, and while it was a bit hacky, we’re proud of the result!
## Accomplishments that we're proud of
This was our first project involving large language models (LLMs), and we pulled it off with almost no sleep in two days! We’re also proud of the fact that we managed to pivot the project successfully and deliver a fully functional AI-powered solution.
## What's next for Go Desk
We plan to iterate on the platform, refine its features, and validate the idea by testing it with real customers. Our goal is to keep improving based on feedback and make Go Desk a go-to tool for businesses needing advanced AI-powered customer support. | partial |
## Inspiration
Everyone gets tired waiting for their large downloads to complete. BitTorrent is awesome, but you may not have a bunch of peers ready to seed it. Fastify, a download accelerator as a service, solves both these problems and regularly enables 4x download speeds.
## What it does
The service accepts a URL and spits out a `.torrent` file. This `.torrent` file allows you to tap into Fastify's speedy seed servers for your download.
We even cache some downloads so popular downloads will be able to be pulled from Fastify even speedier!
Without any cache hits, we saw the following improvements in download speeds with our test files:
```
| | 512Mb | 1Gb | 2Gb | 5Gb |
|-------------------|----------|--------|---------|---------|
| Regular Download | 3 mins | 7 mins | 13 mins | 30 mins |
| Fastify | 1.5 mins | 3 mins | 5 mins | 9 mins |
|-------------------|----------|--------|---------|---------|
| Effective Speedup | 2x | 2.33x | 2.6x | 3.3x |
```
*test was performed with slices of the ubuntu 16.04 iso file, on the eduroam network*
## How we built it
Created an AWS cluster and began writing Go code to accept requests and the front-end to send them. Over time we added more workers to the AWS cluster and improved the front-end. Also, we generously received some well-needed Vitamin Water.
## Challenges we ran into
The BitTorrent protocol and architecture was more complicated for seeding than we thought. We were able to create `.torrent` files that enabled downloads on some BitTorrent clients but not others.
Also, our "buddy" (*\*cough\** James *\*cough\**) ditched our team, so we were down to only 2 people off the bat.
## Accomplishments that we're proud of
We're able to accelerate large downloads by 2-5 times as fast as the regular download. That's only with a cluster of 4 computers.
## What we learned
Bittorrent is tricky. James can't be trusted.
## What's next for Fastify
More servers on the cluster. Demo soon too. | ## Inspiration
You use Apple Music. Your friends all use Spotify. But you're all stuck in a car together on the way to Tahoe and have the perfect song to add to the road trip playlist. With TrainTrax, you can all add songs to the same playlist without passing the streaming device around or hassling with aux cords.
Have you ever been out with friends on a road trip or at a party and wished there was a way to more seamlessly share music? TrainTrax is a music streaming middleware that lets cross platform users share music without pulling out the aux cord.
## How it Works
The app authenticates a “host” user sign through their Apple Music or Spotify Premium accounts and let's them create a party where they can invite friends to upload music to a shared playlist. Friends with or without those streaming service accounts can port through the host account to queue up their favorite songs. Hear a song you like? TrainTrax uses Button to deep links songs directly to your iTunes account, so that amazing song you heard is just a click away from being yours.
## How We Built It
The application is built with Swift 3 and Node.js/Express. A RESTful API let’s users create parties, invite friends, and add songs to a queue. The app integrates with Button to deep link users to songs on iTunes, letting them purchase songs directly through the application.
## Challenges We Ran Into
• The application depended a lot on third party tools, which did not always have great documentation or support.
• This was the first hackathon for three of our four members, so a lot of the experience came with a learning curve. In the spirit of collaboration, our team approached this as a learning opportunity, and each member worked to develop a new skill to support the building of the application. The end result was an experience focused more on learning and less on optimization.
• Rain.
## Accomplishments that we're proud of
• SDK Integrations: Successful integration with Apple Music and Spotify SDKs!
• Button: Deep linking with Button
• UX: There are some strange UX flows involved with adding songs to a shared playlist, but we kicked of the project with a post-it design thinking brainstorm session that set us up well for creating these complex user flows later on.
• Team bonding: Most of us just met on Friday, and we built a strong fun team culture.
## What we learned
Everyone on our team learned different things.
## What's next for TrainTrax
• A web application for non-iPhone users to host and join parties
• Improved UI and additional features to fine tune the user experience — we've got a lot of ideas for the next version in the pipeline, including some already designed in this prototype: [TrainTrax prototype link](https://invis.io/CSAIRSU6U#/219754962_Invision-_User_Types) | ## Inspiration
it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks.
for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>)
this enables a new paradigm self-assembling software: machines controlling machines.
what if we could harness this to make our own lives better -- a lil LLM that works for you?
## What it does
i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf.
it's just like talking to your friend and asking them to help you out.
## How we built it
a lot of prompt engineering + few shot prompting.
## What's next for jarbls
shopping, logistics, research, etc -- possibilities are endless
* more integrations !!!
the capabilities explode exponentially with the number of integrations added
* long term memory
come by and i can give you a demo | partial |
## Inspiration
As students around 16 years old, skin conditions such as acne make us even more self-conscious than we already are. Furthermore, one of our friends is currently suffering from eczema, so we decided to make an app relating to skin care. While brainstorming for ideas, we realized that the elderly are affected by more skin conditions than younger people. These skin diseases can easily transform into skin cancer if left unchecked.
## What it does
Ewmu is an app that can assist people with various skin conditions. It utilizes machine learning to provide an accurate evaluation of the skin condition of an individual. After analyzing the skin, Ewmu returns some topical creams or over-the-top-medication that can alleviate the users' symptoms.
## How we built it
We built Ewmu by splitting the project into 3 distinct parts. The first part involved developing and creating the Machine Learning backend model using Swift and the CoreML framework. This model was trained on datasets from Kaggle.com, which we procured over 16,000 images of various skin conditions ranging from atopic dermatitis to melanoma. 200 iterations were used to train the ML model, and it achieved over 99% training accuracy, and 62% validation accuracy and 54% testing accuracy.
The second part involved deploying the ML model on a flask backend which provided an API endpoint for the frontend to call from and send the image to. The flask backend fed the image data to the ML model which gave the classification and label for the image. The result was then taken to the frontend where it was displayed.
The frontend was built with React.JS and many libraries that created a dashboard for the user. In addition we used libraries to take a photo of the user and then encoded that image to a base64 string which was sent to the flask backend.
## Challenges we ran into
Some challenges we ran into were deploying the ML model to a flask backend because of the compatibility issue between Apple and other platforms. Another challenge we ran into was the states within React and trying to get a still image from the webcam, then mapping it over to a base64 encode, then finally sending it over to the backend flask server which then returned a classification.
## Accomplishments that we're proud of
* Skin condition classifier ML model
+ 99% training accuracy
+ 62% validation accuracy
+ 54% testing accuracy
We're really proud of creating that machine learning model since we are all first time hackers and haven't used any ML or AI software tools before, which marked a huge learning experience and milestone for all of us. This includes learning how to use Swift on the day of, and also cobbling together multiple platforms and applications: backend, ML model, frontend.
## What we learned
We learned that time management is all to crucial!! We're writing this within the last 5 minutes as we speak LMAO. From the technical side, we learned how to use React.js to build a working and nice UI/UX frontend, along with building a flask backend that could host our custom built ML model. The biggest thing we took away from this was being open to new ideas and learning all that we could under such a short time period!
* TIL uoft kids love: ~~uwu~~
## What's next for Ewmu
We're planning on allowing dermatologists to connect with their patients on the website. Patients will be able to send photos of their skin condition to doctors. | ## Inspiration
Throughout Quarantine and the global pandemic that we are all currently experiencing, I have begun to reflect on my own life and health in general. We may believe we have a particular ailment when, in fact, it is actually our fear getting the best of us. But how can one be sure that the symptoms they are having are as severe as they seem to be? As a result, we developed the Skin Apprehensiveness Valdiator and Educator App to help not only maintain our mental balance in regards to paranoia about our own health, but also to help front line staff solve the major pandemic that has plagued the planet.
## What it does
The home page is the most critical aspect of the app. It has a simple user interface that allows the user to choose between using an old photo or taking a new photo of any skin issues they may have. They can then pick or take a screenshot, and then, after submission, we can use the cloud to run our model and receive results from it, thanks to Google's Cloud ML Kit. We get our overall diagnosis of what illness our Model thinks it is, as well as our trust level. Following that, we have the choice of viewing more information about this disease through a wikipedia link or sending this to a specialist for confirmation.
Tensorflow and a convolutional neural network with several hidden layers were used to build the first Machine Learning Algorithm. It uses the Adam optimizer and has a relu activation layer. We've also improved the accuracy by using cross validation.
The second section of the app is the specialist section, where you can view doctors whose diagnoses you can check online. You may send them a text, email, or leave a voicemail to request a consultation and learn more about your diagnosis. This involves questions like what should my treatment be, when should I begin, where should I go, and how much will it cost, as well as some others.
The third section of the app is the practices section, which helps you to locate dermatology practices in your area that can assist you with your care. You can see a variety of information about a location, including its Google Review Average ranking, total number of reviews, phone number, address, and other relevant information. You also get a glimpse into how their office is decorated. You may also click on the location to be redirected to Google Maps directions to that location.
The tips section of the app is where you can find different links from reputable sources that will help you get advice on care, diagnosis, or skin disorders in a geological environment.
## How we built it
ReactJS, Expo, Google Cloud ML Kit, Tensorflow, practoAPI, Places API | ## Window Share
**Intuitive content sharing. Simply drag a program window from one computer to another.**
**Seamless, local sharing**
Window Share is the most intuitive way for you to share the things you enjoy with those close to you, help your co-workers on their projects, use multiple computers for optimum efficiency, or get through the wee hours of the morning at your next hackathon.
**Drag and drop.**
All you need to do is move your mouse off the side of your screen and it controls the computer next to you. If you’re grabbing a window, it’ll send whatever file is open in that window along for the ride. And if they don’t have the program you have the file open in, it will open in their default.
**Cross-platform**
Yes, if you drag a Notepad file onto your friend’s Mac, it’ll open Textedit. (It’s a new way to pass notes in class) | partial |
## Inspiration
According to WebAim, in 2021, the top 1 million websites had an average of 51.4 accessibility errors on the homepage alone (<https://webaim.org/projects/million/>). After learning about the lack of website accessibility, we wanted to find a solution for improving the user experience for disabled individuals. Creating accessible websites ensures equal access for all, aligns with social responsibility and innovation, and benefits both users and businesses.
## What it does
The user first inputs the URL of a website and after the click of a button, our web application identifies and displays various accessibility issues it finds on the web page through visual graphical elements (i.e. pie charts and bar graphs).
## How we built it
We split up our project into frontend and backend tasks. The frontend was responsible for collecting a user-inputted website URL and displaying the accessibility data as graphical elements using Taipy’s chart GUI functionality. The backend consisted of a GET request to the WAVE API using the URL passed in from the data. Then, it returned the accessibility issues from the WAVE API, capturing the necessary data in JSON format. We captured the JSON data into arrays and fed the information to the Taipy chart functions to display the relevant fields to the user. Both the frontend and the backend leveraged Taipy’s full-stack development capabilities.
## Challenges we ran into
As a relatively new library, we didn’t have many resources to consult when building our Taipy application. The biggest challenge was navigating the functionalities present in the library and making sure that we could return the right data to the user. One challenge we had was taking the user-inputted URL and using that in our GET request to the WAVE API. This was difficult as we could not get the data to update in real-time when a new URL was inputted. In the end, we resolved this by adding the JSON parsing functionality in an `on_button_action` function and realized that we had to update the `state` variable in order to have the data refreshed to accurately reflect the user-inputted URL field.
## Accomplishments that we're proud of
We’re proud that we were able to debug our issues particularly since there was not a lot of support for using Taipy. We were also a new team and were quick to come up with an idea and start collaborating on the project. While we didn’t get as much sleep this weekend, we are proud that we were able to get some rest and participate in workshops alongside working on this project!
## What we learned
Here are a couple of things we learned:
* How to develop a multi-page web application from scratch using Python
* How to handle HTTP requests using the WAVE API in Python
* How to parse and collect necessary data from a JSON file
* How to display information and datasets through Taipy bar graphs and pie charts
* How to break down a project into manageable tasks
* How to create PR templates on Github
## What's next for AccChecky
As next steps, we would like to highlight areas of the website with colour contrasting issues through heatmaps so the user has more actionable items to work on to improve the accessibility. We would also look into storing previous accessibility checks to display gradual improvement over time through other graphical charts (i.e. line graphs). Lastly, to scale our app, we could compare inputted websites against large datasets to show severity of accessibility issues from the particular user-inputted URL to other websites that exist on the internet as a form of calibration. | ## Inspiration
Research shows that maximum people face mental or physical health problems due to their unhealthy daily diet or ignored symptoms at the early stages. This app will help you track your diet and your symptoms daily and provide recommendations to provide you with an overall healthy diet. We were inspired by MyFitnessPal's ability to access the nutrition information from foods at home, restaurants, and the grocery store. Diet is extremely important to the body's wellness, but something that is hard for any one person to narrow down is: What foods should I eat to feel better? It is a simple question, but actually very hard to answer. We eat so many different things in a day, how do you know what is making positive impacts on your health, and what is not?
## What it does
Right now, the app is in a pre-alpha phase. It takes some things as input, carbs, fats, protein, vitamins, and electrolyte intake in a day. It sends this data to a Mage API, and Mage predicts how well they will feel in that day. The Mage AI is based off of sample data that is not real-world data, but as the app gets users it will get more accurate. Based off of our data set that we gather and the model type, the AI maintains 96.4% accuracy at predicting the wellness of a user on a given day. This is based off of 10000 users over 1 day, or 1 user over 10000 days, or somewhere in between. The idea is that the AI will be constantly learning as the app gains users and individual users enter more data.
## How we built it
We built it in Swift using the Mage.ai for data processing and API
## Challenges we ran into
Outputting the result on the App after the API returns the final prediction. We have the prediction score displayed in the terminal, but we could not display it on the app initially. We were able to do that after a lot of struggle. All of us made an app and implemented an API for the very first time.
## Accomplishments that we're proud of
-- Successfully implementing the API with our app
-- Building an App for the very first time
-- Creating a model for AI data processing with a 96% accuracy
## What we learned
-- How to implement an API and it's working
-- How to build an IOS app
-- Using AI in our application without actually knowing AI in depth
## What's next for NutriCorr
--Adding different categories of symptoms
-- giving the user recommendations on how to change their diet
-- Add food object to the app so that the user can enter specific food instead of the nutrient details
-- Connect our results to mental health wellness and recommendations. Research shows that people who generally have more sugar intake in their diet generally stay more depressed. | ## Inspiration
The idea behind Halo - Virtual Companion emerged from the increasing global issue of emotional well-being, especially in today's fast-paced world where loneliness, stress, and mental health challenges have become common. Inspired by recent advances in emotionally intelligent AI and technologies like Hume AI, we wanted to create a personal companion that could provide real-time emotional insights and support to users in their home environments. Halo aims to help users reflect on their emotions, providing meaningful insights and fostering a sense of connection by offering conversations tailored to emotional needs.
## What it does
Halo is a virtual mental wellness companion that uses AI to detect a user's emotions through conversations. It tracks emotional history, provides personalized insights, and even recommends music based on the user’s mood. Halo can recognize when a user enters the room, initiate a conversation, and summarize daily emotional experiences using AI-powered tools. The app visualizes this data to help users track their emotional well-being over time and receive actionable insights on improving habits, aiming to offer a comforting presence when they need it the most.
## How we built it
We built Halo using a combination of advanced technologies:
* **Hume AI**: To detect and analyze emotions in real-time during conversations.
* **TensorFlow & COCO-SSD**: For object detection and motion tracking, which recognizes the user when they enter the room.
* **Gemini API**: To summarize conversations and emotional trends.
* **React**: For building the user interface of the web app.
* **Spotify API**: For generating personalized music recommendations based on the user's emotions.
* **Phoenix Tracing**: For tracking how accurately the AI processes and summarizes the emotional data.
## Challenges we ran into
One of the biggest challenges we faced was integrating various APIs and ensuring that the emotion detection was both accurate and seamless. Handling real-time emotional data in a way that felt natural while respecting privacy was a technical hurdle. Another challenge was optimizing the AI to not only detect motion but also provide insights from the captured conversations . Additionally, ensuring that the dashboard visualization was both insightful and user-friendly within a short timeframe was a major task.
## Accomplishments that we're proud of
We are proud of how well Halo integrates multiple technologies to form a seamless experience. We successfully created a system where users can receive real-time emotional insights and reflect on their daily emotional trends. The music recommendation feature, which tailors playlists based on emotions, turned out to be an exciting feature that adds significant value. The sleek user interface and data visualizations for tracking emotional history were also highlights of the project. We are very proud of the final outcome and how many technologies we were able to integrate. It was all of our first time using computer vision in a project, and also our first exposure to some of the tools like Arize and Hume. Not to mention, half of our team are first-time hackers!
## What we learned
Throughout the development of Halo, we deepened our understanding of emotion recognition technologies and how they can be applied in real-life scenarios. We learned how to effectively integrate complex APIs like Hume AI, Gemini, and Spotify, while navigating the challenges of real-time data processing and analysis. Collaboration within a tight timeframe helped us refine our project management, communication, and technical debugging skills.
Additionally, working as a team significantly improved our version control skills. We learned to efficiently handle merge conflicts, manage pull requests, and maintain clean, organized code through frequent reviews and continuous integration. These experiences taught us how to work more effectively in a collaborative development environment.
## What's next for Halo - Virtual Companion
We are excited about the future of Halo and have several ideas to enhance its emotional intelligence capabilities. We plan to incorporate additional AI features such as mental health exercises and personalized advice based on emotional trends, while refining our emotion detection algorithms to improve accuracy. We envision expanding Halo to serve more user personas, such as seniors living independently, where mood tracking could be shared with family members for added support. Additionally, we plan to implement a Med Gemini-based AI assistant for deep conversations triggered by specific emotional cues, offering scientifically-backed mental health advice. As we continue developing Halo, we hope to scale it to support a larger user base, explore integrations with health apps and services, and enrich user experience with more AI-driven insights, making a meaningful impact on emotional well-being. | partial |
## Inspiration
Currently the insurance claims process is quite labour intensive. A person has to investigate the car to approve or deny a claim, and so we aim to make the alleviate this cumbersome process smooth and easy for the policy holders.
## What it does
Quick Quote is a proof-of-concept tool for visually evaluating images of auto accidents and classifying the level of damage and estimated insurance payout.
## How we built it
The frontend is built with just static HTML, CSS and Javascript. We used Materialize css to achieve some of our UI mocks created in Figma. Conveniently we have also created our own "state machine" to make our web-app more responsive.
## Challenges we ran into
>
> I've never done any machine learning before, let alone trying to create a model for a hackthon project. I definitely took a quite a bit of time to understand some of the concepts in this field. *-Jerry*
>
>
>
## Accomplishments that we're proud of
>
> This is my 9th hackathon and I'm honestly quite proud that I'm still learning something new at every hackathon that I've attended thus far. *-Jerry*
>
>
>
## What we learned
>
> Attempting to do a challenge with very little description of what the challenge actually is asking for is like a toddler a man stranded on an island. *-Jerry*
>
>
>
## What's next for Quick Quote
Things that are on our roadmap to improve Quick Quote:
* Apply google analytics to track user's movement and collect feedbacks to enhance our UI.
* Enhance our neural network model to enrich our knowledge base.
* Train our data with more evalution to give more depth
* Includes ads (mostly auto companies ads). | ## Inspiration
Recognizing the disastrous effects of the auto industry on the environment, our team wanted to find a way to help the average consumer mitigate the effects of automobiles on global climate change. We felt that there was an untapped potential to create a tool that helps people visualize cars' eco-friendliness, and also helps them pick a vehicle that is right for them.
## What it does
CarChart is an eco-focused consumer tool which is designed to allow a consumer to make an informed decision when it comes to purchasing a car. However, this tool is also designed to measure the environmental impact that a consumer would incur as a result of purchasing a vehicle. With this tool, a customer can make an auto purhcase that both works for them, and the environment. This tool allows you to search by any combination of ranges including Year, Price, Seats, Engine Power, CO2 Emissions, Body type of the car, and fuel type of the car. In addition to this, it provides a nice visualization so that the consumer can compare the pros and cons of two different variables on a graph.
## How we built it
We started out by webscraping to gather and sanitize all of the datapoints needed for our visualization. This scraping was done in Python and we stored our data in a Google Cloud-hosted MySQL database. Our web app is built on the Django web framework, with Javascript and P5.js (along with CSS) powering the graphics. The Django site is also hosted in Google Cloud.
## Challenges we ran into
Collectively, the team ran into many problems throughout the weekend. Finding and scraping data proved to be much more difficult than expected since we could not find an appropriate API for our needs, and it took an extremely long time to correctly sanitize and save all of the data in our database, which also led to problems along the way.
Another large issue that we ran into was getting our App Engine to talk with our own database. Unfortunately, since our database requires a white-listed IP, and we were using Google's App Engine (which does not allow static IPs), we spent a lot of time with the Google Cloud engineers debugging our code.
The last challenge that we ran into was getting our front-end to play nicely with our backend code
## Accomplishments that we're proud of
We're proud of the fact that we were able to host a comprehensive database on the Google Cloud platform, in spite of the fact that no one in our group had Google Cloud experience. We are also proud of the fact that we were able to accomplish 90+% the goal we set out to do without the use of any APIs.
## What We learned
Our collaboration on this project necessitated a comprehensive review of git and the shared pain of having to integrate many moving parts into the same project. We learned how to utilize Google's App Engine and utilize Google's MySQL server.
## What's next for CarChart
We would like to expand the front-end to have even more functionality
Some of the features that we would like to include would be:
* Letting users pick lists of cars that they are interested and compare
* Displaying each datapoint with an image of the car
* Adding even more dimensions that the user is allowed to search by
## Check the Project out here!!
<https://pennapps-xx-252216.appspot.com/> | ## Inspiration
As students, we wanted to generate a solution for efficient and convenient meal planning based on the weekly deals at the grocery store. The plan is to save people money and encourage people to try out new recipes at the same time.
## What it does
In summary, the app combines web scraping, API integration, frontend and backend development, and database management to deliver a user-friendly platform for personalized meal recommendations, leveraging both user preferences and current grocery deals. Users can benefit from a streamlined approach to meal planning and grocery shopping, making the process more convenient and enjoyable.
## How we built it
The app was built using React for the frontend, Django for the backend, Firebase for the database, and Python for web scraping. It integrates the Edamam API to offer personalized meal recommendations based on scraped data from weekly grocery flyers. The development process included setting up environments, creating an intuitive UI with React, implementing backend logic with Django, and deploying on suitable platforms. The app enhances meal planning by combining modern web technologies, real-time data, and user-friendly interfaces.
## Challenges we ran into
Web scraping complications - As it turns out, websites don't usually love it when Python scrypts are run to read and extract the content of its source code. This led us to having several grocery websites block us from gathering the data we were after (Walmart, Loblaws, ...)
API call limitations - As non paying users, we were informed personnally by a rep from Edamam API that we had exceeded the limit of API calls for a free user by 300%. This was not shocking considering the large dataset we were trying to produce for the backend of our application.
Firebase datase Read limits - Limits to daily read limits from Firebase database slowed down our progress tremendously as we were unable to see anything from our Database for the rest of the day.
## Accomplishments that we're proud of
-Learning web app development with React and Django
-Modelling a solution that is meaningful to us and we believe can help people.
-Rendering a successful pre release product.
## What we learned
-Web app development with React front end and Django back end
-Web scrapping scrypt with Python
-Integrating API into web applications for research and data production
-Javascript coding in general
-Planning and executing a tech project from start to finish.
## What's next for ezEATS
-Expand the operation to store the data from many grocery stores to go worldwide.
-Implement more functionalities to our web app. Accounts for users to save liked recipies, self updating database for the new flyers every week. | winning |
## Inspiration
Have you ever had to stand in line and tediously fill out your information for contact tracing at your favorite local restaurant? Have you ever asked yourself what's the point of traffic jams at restaurants which rather than reducing the risk of contributing to the spreading of the outbreak ends up increasing social contact and germ propagation? If yes, JamFree is for you!
## What it does
JamFree is a web application that supports small businesses and restaurants during the pandemic by completely automating contact tracing in order to minimize physical exposure and eliminate the possibility of human error in the event where tracing back on customer visits is necessary. This application helps support local restaurants and small businesses by alleviating the pressure and negative impact this pandemic has had on their business.
In order to accomplish this goal, here's how it would be used:
1. Customer creates an account by filling out the required information restaurants would use for contact tracing such as name, email, and phone number.
2. A QR code is generated by our application
3. Restaurants also create a JamFree account with the possibility of integrating with their favorite POS software
4. Upon arrival at their favorite restaurant, the restaurant staff would scan the customer's QR code from our application
5. Customer visit has now been recorded on the restaurant's POS as well as JamFree's records
## How we built it
We divided the project into two main components; the front-end with react components to make things interactive while the back-end used Express to create a REST API that interacts with a cockroach database. The whole project was deployed using amazon-web services (serverless servers for a quick and efficient deployment).
## Challenges we ran into
We had to figure out how to complete the integration of QR codes for the first time, how to integrate our application with third-party software such as Square or Shopify (OAuth), and how to level out the playing field with the adaptability of new technologies and different languages used across the team.
## Accomplishments that we're proud of
We successfully and simply integrated or app with POS software (e.g. using a free Square Account and Square APIs in order to access the customer base of restaurants while keeping everything centralized and easily accessible).
## What we learned
We became familiar with OAuth 2.0 Protocols, React, and Node. Half of our team was compromised of first-time hackers who had to quickly become familiar with the technologies we used. We learnt that coding can be a pain in the behind but it is well worth it in the end! Teamwork makes the dream work ;)
## What's next for JamFree
We are planning to improve and expand on our services in order to provide them to local restaurants. We will start by integrating it into one of our teammate's family-owned restaurant as well as pitch it to our local parishes to make things safer and easier. We are looking into integrating geofencing in the future in order to provide targeted advertisements and better support our clients in this difficult time for small businesses. | **Inspiration**
Toronto ranks among the top five cities in the world with the worst traffic congestion. As both students and professionals, we faced the daily challenge of navigating this chaos and saving time on our commutes. This led us to question the accuracy of traditional navigation tools like Google Maps. We wondered if there were better, faster routes that could be discovered through innovative technology.
**What it does**
ruteX is an AI-driven navigation app that revolutionizes how users find their way. By integrating Large Language Models (LLMs) and action agents, ruteX facilitates seamless voice-to-voice communication with users. This allows the app to create customized routes based on various factors, including multi-modal transportation options (both private and public), environmental considerations such as carbon emissions, health metrics like calories burned, and cost factors like the cheapest parking garages and gas savings.
**How we built it**
We developed ruteX by leveraging cutting-edge AI technologies. The core of our system is powered by LLMs that interact with action agents, ensuring that users receive personalized route recommendations. We focused on creating a user-friendly interface that simplifies the navigation process while providing comprehensive data on various routing options.
**Challenges we ran into**
Throughout the development process, we encountered challenges such as integrating real-time data for traffic and environmental factors, ensuring accuracy in route recommendations, and maintaining a smooth user experience in the face of complex interactions. Balancing these elements while keeping the app intuitive required significant iterative testing and refinement.
**Accomplishments that we're proud of**
We take pride in our app's simplistic user interface that enhances usability without sacrificing functionality. Our innovative LLM action agents (using fetch ai) effectively communicate with users, making navigation a more interactive experience. Additionally, utilizing Gemini as the "brain" of our ecosystem has allowed us to optimize our AI capabilities, setting ruteX apart from existing navigation solutions.
**What we learned**
This journey has taught us the importance of user feedback in refining our app's features. We've learned how critical it is to prioritize user needs and preferences while also staying flexible in our approach to integrating AI technologies. Our experience also highlighted the potential of AI in transforming traditional industries like navigation.
**What's next for ruteX**
Looking ahead, we plan to scale ruteX to its full potential, aiming to completely revolutionize traditional navigation methods. We are exploring integration with wearables like smartwatches and smart lenses, allowing users to interact with their travel assistant effortlessly. Our vision is for users to simply voice their needs and enjoy their journey without the complexities of conventional navigation. | # Doctors Within Borders
### A crowdsourcing app that improves first response time to emergencies by connecting city 911 dispatchers with certified civilians
## 1. The Challenge
In Toronto, ambulances get to the patient in 9 minutes 90% of the time. We all know
that the first few minutes after an emergency occurs are critical, and the difference of
just a few minutes could mean the difference between life and death.
Doctors Within Borders aims to get the closest responder within 5 minutes of
the patient to arrive on scene so as to give the patient the help needed earlier.
## 2. Main Features
### a. Web view: The Dispatcher
The dispatcher takes down information about an ongoing emergency from a 911 call, and dispatches a Doctor with the help of our dashboard.
### b. Mobile view: The Doctor
A Doctor is a certified individual who is registered with Doctors Within Borders. Each Doctor is identified by their unique code.
The Doctor can choose when they are on duty.
On-duty Doctors are notified whenever a new emergency occurs that is both within a reasonable distance and the Doctor's certified skill level.
## 3. The Technology
The app uses *Flask* to run a server, which communicates between the web app and the mobile app. The server supports an API which is used by the web and mobile app to get information on doctor positions, identify emergencies, and dispatch doctors. The web app was created in *Angular 2* with *Bootstrap 4*. The mobile app was created with *Ionic 3*.
Created by Asic Chen, Christine KC Cheng, Andrey Boris Khesin and Dmitry Ten. | losing |
## Inspiration
You see a **TON** of digital billboards at NYC Time Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**.
## What it does
I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~
The billboard is equipped with a **camera**, which periodically samples the audience in front of it. Then, it passes the image to a series of **computer vision** algorithm (Thank you *Microsoft Cognitive Services*), which extracts several characteristics of the viewer.
In this prototype, the billboard analyzes the viewer's:
* **Dominant emotion** (from facial expression)
* **Age**
* **Gender**
* **Eye-sight (detects glasses)**
* **Facial hair** (just so that it can remind you that you need a shave)
* **Number of people**
And considers all of these factors to present with targeted ads.
**As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)**
## How I built it
Here is what happens step-by-step:
1. Using **OpenCV**, billboard takes an image of the viewer (**Python** program)
2. Billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the result
3. Billboard analyzes the result and decides on which ads to serve (**Python** program)
4. Finalized ads are sent to the Billboard front-end via **Websocket**
5. Front-end contents are served from a local web server (**Node.js** server built with **Express.js framework** and **Pug** for front-end template engine)
6. Repeat
## Challenges I ran into
* Time constraint (I actually had this huge project due on Saturday midnight - my fault -, so I only **had about 9 hours to build** this. Also, I built this by myself without teammates)
* Putting many pieces of technology together, and ensuring consistency and robustness.
## Accomplishments that I'm proud of
* I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates.
## What's next for Interactive Time Square
* This prototype was built with off-the-shelf computer vision service from Microsoft, which limits the number of features for me to track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the Billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~ | ## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized brail menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech API's to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to indentify objects and surroundings in the real world which the user can then query about.
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service a language they were comfortable in. However when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with a data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine. | ## Inspiration
The inspiration for this project was both personal experience and the presentation from Ample Labs during the opening ceremony. Last summer, Ryan was preparing to run a summer computer camp and taking registrations and payment on a website. A mother reached out to ask if we had any discounts available for low-income families. We have offered some in the past, but don't advertise for fear of misuse of the discounts by average or high-income families. We also wanted a way to verify this person's income. If we had WeProsper, verification would have been easy. In addition to the issues associated with income verification, it is likely that there are many programs out there (like the computer camps discounts) that low-income families aren't aware of. Ample Labs' presentation inspired us with the power of connecting people with services they should be able to access but aren't aware of. WeProsper would help low-income families be aware of the services available to them at a discount (transit passes, for another example) and verify their income easily in one place so they can access the services that they need without bundles of income verification paperwork. As such, WeProsper gives low-income families a chance to prosper and improve financial stability. By doing this, WeProsper would increase social mobility in our communities long-term.
## What it does
WeProsper provides a login system which allows users to verify their income by uploading a PDF of their notice of assessment or income proof documents from the CRA and visit service providers posted on the service with a unique code the service provider can verify with us to purchase the service. Unfortunately, not all of this functionality is implemented just yet. The login system works with Auth0, but the app mainly runs with dummy data otherwise.
## How We built it
We used Auth0, react, and UiPath to read the PDF doing our on-site demo. UiPath would need to be replaced in the future with a file upload on the site. The site is made with standard web technologies HTML, CSS and Javascript.
## Challenges We ran into
The team was working with technologies that are new to us, so a lot of the hackathon was spent learning these technologies. These technologies include UiPath and React.
## Accomplishments that we're proud of
We believe WeProsper has a great value proposition for both organizations and low-income families and isn't easy replicated with other solutions. We excited about the ability to share a proof-of-concept that could have massive social impact. Personally, we are also excited that every team member improved skills (technical and non-technical) that will be useful to them in the future.
## What we learned
The team learned a lot about React, and even just HTML/CSS. The team also learned a lot about how to share knowledge between team members with different backgrounds and experiences in order to develop the project.
## What's next for WeProsper
WeProsper would like to use AI to detect anomalies in the future when verifying income. | winning |
## Inspiration
According to the World Health Organization (WHO), 1 in every 5 college students suffer from mental health disorders including depression and anxiety. This epidemic has far-reaching personal and societal consequences – increasing disease risk, fracturing relationships, and reducing workforce productivity.
Current methods of depression diagnosis are extremely time-consuming, relying heavily on one-on-one clinical interviews conducted manually by psychiatrists. Moreover, the subjective nature of evaluations often lead to inconsistent medical advice and recommended treatment paths across different specialists. As a result, there is a clear need for a solution for efficient and standardized depression diagnosis.
## What it does
SmartPsych is a streamlined web platform for automated depression diagnosis from videos of natural patient conversation. Leveraging a unique mix of audio, computer vision, and natural language processing deep learning algorithms, SmartPsych accurately pinpoints and quantifies the complex network of symptoms underlying depression for further psychiatric evaluation. Furthermore, SmartPsych has a special sentence-by-sentence playback feature that enables psychiatrists to hone in on specific sentences (color-coded for severity from green to red being the most severe). This enables psychiatrists to identify the patient-specific struggles and affected topics – paving the way for personalized treatment.
SmartPsych is currently able to detect depression sentiment from patient words, negative emotions (e.g. anger, contempt, fear, sadness) from facial images, and valence and arousal (i.e. positive energy levels) from spectral qualities of a user's voice.
Overall, SmartPsych represents an entirely new way of depression diagnosis – instead of wasting the valuable time of psychiatrists, we delegate the tedious task of identifying depression symptoms to machines and bring in psychiatrists at the end for the final diagnosis and treatment when their expertise is most crucial. This hybrid man and machine model increases efficiency and retains accuracy compared to relying on machine learning predictions alone.
## How I built it
The web application was built entirely in Flask; to facilitate user interaction with the application, I used the pre-designed UIKit components to create a minimalistic yet user-friendly front-end.
Since current sentiment analysis methods are unable to directly infer depression, I trained my own CNN-LSTM neural network on the 189 interview transcripts of both depressed and non-depressed patients from the USC Distress Analysis Wizard-of-Oz database. After undergoing supervised learning, my neural network had a validation accuracy of 69%, which is highly comparable to current human inter-rater variabilities between 70-75%.
Emotion classification and facial landmark detection was performed through the Microsoft Cognitive Face API, valence and arousal prediction was performed through multi-variate regression on audio Mel Frequency Cepstral Coefficients (MFCC), and speech-to-text transcription was performed through Google Cloud.
## Accomplishments that I'm proud of
Integrating custom-trained Keras/Tensorflow models, Google and Microsoft cloud APIs, and regression trees into a single unified and streamlined application.
## What I learned
The sheer amount of factors behind depression, and calling cloud APIs!
## What's next for SmartPsych
While SmartPsych can now be easily used on the web for effective depression diagnosis, there is still much room left for improvement. Development of a mobile form of SmartPsych to increase usability, integration of other biometrics (i.e. sleep quality, heart rate, exercise) into depression diagnosis, and increasing understanding of depression sentiment prediction with entity analysis. | ## Inspiration
As post secondary students, our mental health is directly affected. Constantly being overwhelmed with large amounts of work causes us to stress over these large loads, in turn resulting in our efforts and productivity to also decrease. A common occurrence we as students continuously endure is this notion that there is a relationship and cycle between mental health and productivity; when we are unproductive, it results in us stressing, which further results in unproductivity.
## What it does
Moodivity is a web application that improves productivity for users while guiding users to be more in tune with their mental health, as well as aware of their own mental well-being.
Users can create a profile, setting daily goals for themselves, and different activities linked to the work they will be doing. They can then start their daily work, timing themselves as they do so. Once they are finished for the day, they are prompted to record an audio log to reflect on the work done in the day.
These logs are transcribed and analyzed using powerful Machine Learning models, and saved to the database so that users can reflect later on days they did better, or worse, and how their sentiment reflected that.
## How we built it
***Backend and Frontend connected through REST API***
**Frontend**
* React
+ UI framework the application was written in
* JavaScript
+ Language the frontend was written in
* Redux
+ Library used for state management in React
* Redux-Sagas
+ Library used for asynchronous requests and complex state management
**Backend**
* Django
+ Backend framework the application was written in
* Python
+ Language the backend was written in
* Django Rest Framework
+ built in library to connect backend to frontend
* Google Cloud API
+ Speech To Text API for audio transcription
+ NLP Sentiment Analysis for mood analysis of transcription
+ Google Cloud Storage to store audio files recorded by users
**Database**
* PostgreSQL
+ used for data storage of Users, Logs, Profiles, etc.
## Challenges we ran into
Creating a full-stack application from the ground up was a huge challenge. In fact, we were almost unable to accomplish this. Luckily, with lots of motivation and some mentorship, we are comfortable with naming our application *full-stack*.
Additionally, many of our issues were niche and didn't have much documentation. For example, we spent a lot of time on figuring out how to send audio through HTTP requests and manipulating the request to be interpreted by Google-Cloud's APIs.
## Accomplishments that we're proud of
Many of our team members are unfamiliar with Django let alone Python. Being able to interact with the Google-Cloud APIs is an amazing accomplishment considering where we started from.
## What we learned
* How to integrate Google-Cloud's API into a full-stack application.
* Sending audio files over HTTP and interpreting them in Python.
* Using NLP to analyze text
* Transcribing audio through powerful Machine Learning Models
## What's next for Moodivity
The Moodivity team really wanted to implement visual statistics like graphs and calendars to really drive home visual trends between productivity and mental health. In a distant future, we would love to add a mobile app to make our tool more easily accessible for day to day use. Furthermore, the idea of email push notifications can make being productive and tracking mental health even easier. | ## Inspiration
As university students, emergency funds may not be on the top of our priority list however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount and goal for the emergency fund and the app will create a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user will have milestones or achievements for reaching certain sub goals while also giving them extra motivation if their emergency fund falls below the baseline amount they set up. Users will also be able to change their employment status after creating an account in the case of a new job or career change and the app will adjust their deposit plan accordly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in a short period of time can easily be agreed upon to be the greatest challenge that we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was our first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience and so we have planned projections for the future of Spend2Save. These projections include but are not limited to, plans such as integration with actual bank accounts at RBC. | partial |
## Inspiration
Being students in a technical field, we all have to write and submit resumes and CVs on a daily basis. We wanted to incorporate multiple non-supervised machine learning algorithms to allow users to view their resumes from different lenses, all the while avoiding the bias introduced from the labeling of supervised machine learning.
## What it does
The app accepts a resume in .pdf or image format as well as a prompt describing the target job. We wanted to judge the resume based on layout and content. Layout encapsulates font, color, etc., and the coordination of such features. Content encapsulates semantic clustering for relevance to the target job and preventing repeated mentions.
### Optimal Experience Selection
Suppose you are applying for a job and you want to mention five experiences, but only have room for three. cv.ai will compare the experience section in your CV with the job posting's requirements and determine the three most relevant experiences you should keep.
### Text/Space Analysis
Many professionals do not use the space on their resume effectively. Our text/space analysis feature determines the ratio of characters to resume space in each section of your resume and provides insights and suggestions about how you could improve your use of space.
### Word Analysis
This feature analyzes each bullet point of a section and highlights areas where redundant words can be eliminated, freeing up more resume space and allowing for a cleaner representation of the user.
## How we built it
We used a word-encoder TensorFlow model to provide insights about semantic similarity between two words, phrases or sentences. We created a REST API with Flask for querying the TF model. Our front end uses Angular to deliver a clean, friendly user interface.
## Challenges we ran into
We are a team of two new hackers and two seasoned hackers. We ran into problems with deploying the TensorFlow model, as it was initially available only in a restricted Colab environment. To resolve this issue, we built a RESTful API that allowed us to process user data through the TensorFlow model.
## Accomplishments that we're proud of
We spent a lot of time planning and defining our problem and working out the layers of abstraction that led to actual processes with a real, concrete TensorFlow model, which is arguably the hardest part of creating a useful AI application.
## What we learned
* Deploy Flask as a RESTful API to GCP Kubernetes platform
* Use most Google Cloud Vision services
## What's next for cv.ai
We plan on adding a few more features and making cv.ai into a real web-based tool that working professionals can use to improve their resumes or CVs. Furthermore, we will extend our application to include LinkedIn analysis between a user's LinkedIn profile and a chosen job posting on LinkedIn. | # 🎓 **Inspiration**
Entering our **junior year**, we realized we were unprepared for **college applications**. Over the last couple of weeks, we scrambled to find professors to work with to possibly land a research internship. There was one big problem though: **we had no idea which professors we wanted to contact**. This naturally led us to our newest product, **"ScholarFlow"**. With our website, we assure you that finding professors and research papers that interest you will feel **effortless**, like **flowing down a stream**. 🌊
# 💡 **What it Does**
Similar to the popular dating app **Tinder**, we provide you with **hundreds of research articles** and papers, and you choose whether to approve or discard them by **swiping right or left**. Our **recommendation system** will then provide you with what we think might interest you. Additionally, you can talk to our chatbot, **"Scholar Chat"** 🤖. This chatbot allows you to ask specific questions like, "What are some **Machine Learning** papers?". Both the recommendation system and chatbot will provide you with **links, names, colleges, and descriptions**, giving you all the information you need to find your next internship and accelerate your career 🚀.
# 🛠️ **How We Built It**
While half of our team worked on **REST API endpoints** and **front-end development**, the rest worked on **scraping Google Scholar** for data on published papers. The website was built using **HTML/CSS/JS** with the **Bulma** CSS framework. We used **Flask** to create API endpoints for JSON-based communication between the server and the front end.
To process the data, we used **sentence-transformers from HuggingFace** to vectorize everything. Afterward, we performed **calculations on the vectors** to find the optimal vector for the highest accuracy in recommendations. **MongoDB Vector Search** was key to retrieving documents at lightning speed, which helped provide context to the **Cerebras Llama3 LLM** 🧠. The query is summarized, keywords are extracted, and top-k similar documents are retrieved from the vector database. We then combined context with some **prompt engineering** to create a seamless and **human-like interaction** with the LLM.
# 🚧 **Challenges We Ran Into**
The biggest challenge we faced was gathering data from **Google Scholar** due to their servers blocking requests from automated bots 🤖⛔. It took several hours of debugging and thinking to obtain a large enough dataset. Another challenge was collaboration – **LiveShare from Visual Studio Code** would frequently disconnect, making teamwork difficult. Many tasks were dependent on one another, so we often had to wait for one person to finish before another could begin. However, we overcame these obstacles and created something we're **truly proud of**! 💪
# 🏆 **Accomplishments That We're Proud Of**
We’re most proud of the **chatbot**, both in its front and backend implementations. What amazed us the most was how **accurately** the **Llama3** model understood the context and delivered relevant answers. We could even ask follow-up questions and receive **blazing-fast responses**, thanks to **Cerebras** 🏅.
# 📚 **What We Learned**
The most important lesson was learning how to **work together as a team**. Despite the challenges, we **pushed each other to the limit** to reach our goal and finish the project. On the technical side, we learned how to use **Bulma** and **Vector Search** from MongoDB. But the most valuable lesson was using **Cerebras** – the speed and accuracy were simply incredible! **Cerebras is the future of LLMs**, and we can't wait to use it in future projects. 🚀
# 🔮 **What's Next for ScholarFlow**
Currently, our data is **limited**. In the future, we’re excited to **expand our dataset by collaborating with Google Scholar** to gain even more information for our platform. Additionally, we have plans to develop an **iOS app** 📱 so people can discover new professors on the go! | ## Inspiration
America's unhoused have been underserved by financial technology developments and are in an increasingly difficult situation as the world transitions to electronic payments. We wanted to build the financial infrastructure to support our homeless populations and meet them at their tech level. That's why we focused on a solution that does not require the use of a phone or the ownership of any technology to function.
## What it does
Our banking infrastructure enables the unhoused to receive electronic donations stigma-free (without having to use square or a phone). We provide free banking services to people who traditionally have high difficulty levels getting a bank account. Additionally, we provide great benefits for donators who use our platform by providing them tax write-offs for donations that were previously unrecognizable for tax purposes. For unhoused populations who use our web-app when checking their account status, we have built NLP-powered financial literacy education materials for them to learn and earn financial rewards. The local government interface we built using Palantir Foundry enables municipal clerks to go directly to those who are the most in need with tax distributions.
## How we built it
We built Givn using Palantir Foundry and Next.js for front-end, Firebase (NoSQL) and Express/Vercel for backend, and Unit APi and GPT-3 API. We piped in transaction data that we tracked and created through our Unit banking system into Palantir Foundry to display for local government managers. We used GPT-3 to create financial literacy content and we used Next.js and firebase to run the transaction system by which donators can donate and unhoused populations can make purchases.
## Challenges we ran into
We had significant challenges with Foundry because Foundry is not a publicly available software and it had a steep learning curve. Cleaning and piping in census data and building our own API to transfer transaction data from our core product to Foundry for local government operators to take action on was the most difficult part of our Foundry integration. We eventually solved these issues with some creative PySpark and data-processing skills.
## Accomplishments that we're proud of
We are proudest of our core product--a debit card that enables electronic payments for the unhoused. We believe that the financial infrastructure supporting unhoused populations has been lacking for a long time and we are excited to build in a space that can make such a large impact on people's financial well-being. From a technical perspective, we are the proudest of the API and integrations we built between Foundry and our core product to enable municipalities to understand and support those who are in need in their community. Specifically, municipal clerks can monitor poverty levels, donation levels, average bank account savings, and spending of the unhoused--all while protecting the identity and anonymity of unhoused populations.
## What we learned
We learned so much! Our proficiency with Foundry is quite strong after this weekend--pushing out a functional product with technology you had never worked with before will do that to you. We also learned how to build embedded banking systems with the Unit API and their affiliated banks--Piermont Bank, Thread Bank, Blue Ridge Bank, and Choice Bank. A few members of our team became more familiar with some areas of the stack they hadn't worked with before--front-end, back-end, and the OpenAI API were all refreshed for a few of our members, respectively.
## What's next for GIV
We plan to continue building Givn until it is ready for deployment with a local government and a go-to-market apparatus can be spun up. | partial |
# A-EYE 👀✏️📙
Need help improving your studying? A-EYE is an innovative study helper using **eye-tracking** technology and **facial recognition** to analyze an individual's *study productivity*.
## Inspiration
Studying in different environments can significantly impact focus and productivity. We were inspired to analyze if we could determine a student's optimal environment that yields the highest productivity score over a period of time using image analysis. To do this, we wanted students to be able to track their study sessions with A-EYE which will generate a report after the session denoting how focused the student was throughout the session.
## What it does
Our project uses eye-tracking technology and facial recognition to analyze an individual's study productivity. The system calculates an arbitrary "focus" score over a time frame of 10-20 seconds, using weights derived from scientific research studies. Constantly changing gaze, facial expression can denote poor concentration while looking up for long periods can denote good concentration. Once the study session ends, the user receives a chart showing their focus level over time and an overall productivity score.
## How we built it
We built the project using the following technologies:
* **Frontend**: Angular
* **Backend**: Django + MongoDB
* **Image Analysis**: Python, OpenCV, DeepFace, GazeTracking
The frontend captures images from video and sends them to the backend. We then use gaze tracker and deepface to process each image, aggregating 5 images in a 10 second window to determine characteristics like frequent glancing, change in emotions or constant neutral expression and gaze. For each window, a focus score is produced using heuristics obtained from the research papers. After the study session ends, the frontend fetches the processed windows and displays the result as a graph to the user.
## Challenges we ran into
* Integrating various technologies (Angular, Django, OpenCV, DeepFace) into a cohesive system
* Ensuring accurate and reliable gaze tracking and facial recognition
* Creating meaningful, logical and scientifically-backed weights for the focus score calculation
## Accomplishments that we're proud of
* Successfully integrating eye-tracking and facial recognition technologies to analyze study productivity
* Developing a system that provides real-time feedback on focus levels
* Creating an innovative tool that has the potential to help individuals optimize their study environments and improve productivity
## What we learned
* The importance of seamless integration between frontend and backend technologies
* Techniques for effective gaze tracking and facial recognition
* How to process and analyze large volumes of image data efficiently
* The value of using scientific research to inform our focus score calculations
## Limitations
* Hard to distinguish between certain actions like person looking down when thinking or looking at their paper vs when person is looking at their phone (assuming phone is out of frame)
* Arbitrary weights could yield innacurate results since every student's behavior is different
* Eye gaze model not 100% accurate which could lead to discrepancies in the data collected
* Multiple faces could be detected
## What's next for A-EYE
* Enhancing the accuracy and reliability of the gaze tracking and facial recognition components
* Implementing user feedback to improve the focus algorithm as every student exhibits different behaviors when studying
* Ensure privacy by processing data onsite vs on the server
* Batch processing on the fly with queue and map-reduce for performance and scalability
## References
1. [Analysis of Learners' Emotions in E-Learning Environments Based on Cognitive Sciences](https://www.researchgate.net/publication/380588073_Analysis_of_Learners'_Emotions_in_E-Learning_Environments_Based_on_Cognitive_Sciences)
2. [Gaze direction as a facial cue of memory retrieval state](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2022.1063228/full)
3. [Landing page image](https://pixabay.com/illustrations/lofi-book-reading-study-table-8390942/) | ## Inspiration
Through personal conversations with friends, we find that it is a common issue for our peers to dispose of uneaten food due to their inability to finish it. Most people feel that some restaurants have unexpectedly large serving sizes which they are unable to finish.
After conducting some background research regarding food wastage, we realized that food wastage is a significant issue that requires our immediate attention.
## What it does
Ever feel like you have ordered too much food but feel bad about throwing it away? Sometimes feel like you’re hungry but the portion size at a restaurant just wasn’t enough? The uncertainty of how much we can eat and how much food we actually get from ordering at the store can lead to some frustration. Oftentimes, it also leads to food wastage. But look no further – Portion.io would bring about a peace of mind for you when making food choices that’s both beneficial for the environment and your stomach!
Portion.io is able to recommend to you food stores that best fit your portion size so that you can make more informed food choices! After providing your basic physical information, Portion.io utilizes an algorithm to predict your portion size – and it constantly updates this suggestion based on your feedback after a meal. You would also be able to share your food experiences with your friends and even gather others to join you on food adventures in order to ensure the food portion served is being maximized. Now, you’ll be able to enjoy food, share with friends and reduce food waste – in the all-in-one Portion.io application.
## How we built it
We built the frontend portion of the application first using React and Bootstrap in order to have the different pages in place.
We then integrated with the backend MongoDB database using Express.js.
Concurrently, we also added on extra features such as user authentication using Auth0.
## Challenges we ran into
We faced difficulties in setting up the chat functions and integrating new APIs that we were unfamiliar with, such as Twilio. By referencing online documentations and consulting mentors at PennApps, we were able to pick up new skills in using these APIs and learned to debug any issues we faced along the way.
Due to the lack of experience in backend development for some of us on the project, we spent more time setting up and debugging the backend portion of the project leading to less progress made overall that we intended. Nonetheless, we managed to build a prototype of each feature we wanted to include to provide a general idea on how the complete application could look like.
## Accomplishments that we're proud of
Most of us have more experience with front-end development, mainly in React, so we were able to set up the front-end of the application fairly quickly. We were able to plan out our timeline and allocate more time to explore the backend development and other new APIs which we were less familiar with. This allowed us to be able to come up with a basic prototype of the project.
We were also able to figure out the integration of a new API which we have less experience with.
## What we learned
We learned how to break down a large scale project into modular features that can be built individually before combining together. This allowed us to work on multiple components simultaneously and get a basic prototype of each feature working.
We tried out the integration of new APIs for the chat function without prior experience. However, we learned to figure out how the APIs work based on existing knowledge that we have regarding software development in the front-end area. This allowed us to build on our prior knowledge of web development to pick up new skills in other APIs.
## What's next for Portion.io
We think it would be possible to integrate a feature that gives users insight on nutrition and health. This would promote the sustainability feature of our application. | ## Slooth
Slooth.tech was born from the combined laziness and frustration towards long to navigate school websites of four Montréal based hackers.
When faced with the task of creating a hack for McHacks 2016, the creators of Slooth found the perfect opportunity to solve a problem they faced for a long time: navigating tediously complicated school websites.
Inspired by Natural Language Processing technologies and personal assistants such as Google Now and Siri, Slooth was aimed at providing an easy and modern way to access important documents on their school websites.
The Chrome extension Slooth was built with two main features in mind: customization and ease of use.
# Customization:
Slooth is based on user recorded macros. Each user will record any actions they which to automate using the macro recorder and associate an activation phrase to it.
# Ease of use:
Slooth is intended to simplify its user's workflow. As such, it was implemented as an easily accessible Chrome extension and utilizes voice commands to lead its user to their destination.
# Implementation:
Slooth is a Chrome extension built in JS and HTML.
The speech recognition part of Slooth is based on the Nuance ASR API kindly provided to all McHacks attendees.
# Features:
-Fully customizable macros
-No background spying. Slooth's speech recognition is done completely server side and notifies the user when it is recording their speech.
-Minimal server side interaction. Slooth's data is stored entirely locally, never shared with any outside server. Thus you can be confident that your personal browsing information is not publicly available.
-Minimal UI. Slooth is designed to simplify one's life. You will never need a user guide to figure out Slooth.
# Future
While Slooth reached its set goals during McHacks 2016, it still has room to grow.
In the future, the Slooth creators hope to implement the following:
-Full compatibility with single page applications
-Fully encrypted autofill forms synched with the user's Google account for cross platform use.
-Implementation of the Nuance NLU api to add more customization options to macros (such as verbs with differing parameters).
# Thanks
Special thanks to the following companies for their help and support in providing us with resources and APIs:
-Nuance
-Google
-DotTech | losing |
[Play The Game](https://gotm.io/askstudio/pandemic-hero)
## Inspiration
Our inspiration comes from the concern of **misinformation** surrounding **COVID-19 Vaccines** in these challenging times. As students, not only do we love to learn, but we also yearn to share the gifts of our knowledge and creativity with the world. We recognize that a fun and interactive way to learn crucial information related to STEM and current events is rare. Therefore we aim to give anyone this opportunity using the product we have developed.
## What it does
In the past 24 hours, we have developed a pixel art RPG game. In this game, the user becomes a scientist who has experienced the tragedies of COVID-19 and is determined to find a solution. Become the **Hero of the Pandemic** through overcoming the challenging puzzles that give you a general understanding of the Pfizer-BioNTech vaccine's development process, myths, and side effects.
Immerse yourself in the original artwork and touching story-line. At the end, complete a short feedback survey and get an immediate analysis of your responses through our **Machine Learning Model** and receive additional learning resources tailored to your experience to further your knowledge and curiosity about COVID-19.
Team A.S.K. hopes that through this game, you become further educated by the knowledge you attain and inspired by your potential for growth when challenged.
## How I built it
We built this game primarily using the Godot Game Engine, a cross-platform open-source game engine that provides the design tools and interfaces to create games. This engine uses mostly GDScript, a python-like dynamically typed language designed explicitly for design in the Godot Engine. We chose Godot to ease cross-platform support using the OpenGL API and GDScript, a relatively more programmer-friendly language.
We started off using **Figma** to plan out and identify a theme based on type and colour. Afterwards, we separated components into groupings that maintain similar characteristics such as label outlining and movable objects with no outlines. Finally, as we discussed new designs, we added them to our pre-made categories to create a consistent user-experience-driven UI.
Our Machine Learning model is a content-based recommendation system built with Scikit-learn, which works with data that users provide implicitly through a brief feedback survey at the end of the game. Additionally, we made a server using the Flask framework to serve our model.
## Challenges I ran into
Our first significant challenge was navigating through the plethora of game features possible with GDScript and continually referring to the documentation. Although Godot is heavily documented, as an open-source engine, there exist frequent bugs with rendering, layering, event handling, and more that we creatively overcame
A prevalent design challenge was learning and creating pixel art with the time constraint in mind. To accomplish this, we methodically used as many shortcuts and tools as possible to copy/paste or select repetitive sections.
Additionally, incorporating Machine Learning in our project was a challenge in itself. Also, sending requests, display JSON, and making the recommendations selectable were considerable challenges using Godot and GDScript.
Finally, the biggest challenge of game development for our team was **UX-driven** considerations to find a balance between a fun, challenging puzzle game and an educational experience that leaves some form of an impact on the player. Brainstorming and continuously modifying the story-line while implementing the animations using Godot required a lot of adaptability and creativity.
## Accomplishments that I'm proud of
We are incredibly proud of our ability to bring our past experiences gaming into the development process and incorporating modifications of our favourite gaming memories. The development process was exhilarating and brought the team down the path of nostalgia which dramatically increased our motivation.
We are also impressed by our teamwork and team chemistry, which allowed us to divide tasks efficiently and incorporate all the original artwork designs into the game with only a few hiccups.
We accomplished so much more within the time constraint than we thought, such as training our machine learning model (although with limited data), getting a server running up and quickly, and designing an entirely original pixel art concept for the game.
## What I learned
As a team, we learned the benefit of incorporating software development processes such as **Agile Software Development Cycle.** We solely focused on specific software development stages chronologically while returning and adapting to changes as they come along. The Agile Process allowed us to maximize our efficiency and organization while minimizing forgotten tasks or leftover bugs.
Also, we learned to use entirely new software, languages, and skills such as Godot, GDScript, pixel art, and design and evaluation measurements for a serious game.
Finally, by implementing a Machine Learning model to analyze and provide tailored suggestions to users, we learned the importance of a great dataset. Following **Scikit-learn** model selection graph or using any cross-validation techniques are ineffective without the data set as a foundation. The structure of data is equally important to manipulate the datasets based on task requirements to increase the model's score.
## What's next for Pandemic Hero
We hope to continue developing **Pandemic Hero** to become an educational game that supports various age ranges and is worthy of distribution among school districts. Our goal is to teach as many people about the already-coming COVID-19 vaccine and inspire students everywhere to interpret STEM in a fun and intuitive manner.
We aim to find support from **mentors** along the way, who can help us understand better game development and education practices that will propel the game into a deployment-ready product.
### Use the gotm.io link below to play the game on your browser or follow the instructions on Github to run the game using Godot | ## Inspiration
With caffeine being a staple in almost every student’s lifestyle, many are unaware when it comes to the amount of caffeine in their drinks. Although a small dose of caffeine increases one’s ability to concentrate, higher doses may be detrimental to physical and mental health. This inspired us to create The Perfect Blend, a platform that allows users to manage their daily caffeine intake, with the aim of preventing students from spiralling down a coffee addiction.
## What it does
The Perfect Blend tracks caffeine intake and calculates how long it takes to leave the body, ensuring that users do not consume more than the daily recommended amount of caffeine. Users can add drinks from the given options and it will update on the tracker. Moreover, The Perfect Blend educates users on how the quantity of caffeine affects their bodies with verified data and informative tier lists.
## How we built it
We used Figma to lay out the design of our website, then implemented it into Velo by Wix. The back-end of the website is coded using JavaScript. Our domain name was registered with domain.com.
## Challenges we ran into
This was our team’s first hackathon, so we decided to use Velo by Wix as a way to speed up the website building process; however, Wix only allows one person to edit at a time. This significantly decreased the efficiency of developing a website. In addition, Wix has building blocks and set templates making it more difficult for customization. Our team had no previous experience with JavaScript, which made the process more challenging.
## Accomplishments that we're proud of
This hackathon allowed us to ameliorate our web design abilities and further improve our coding skills. As first time hackers, we are extremely proud of our final product. We developed a functioning website from scratch in 36 hours!
## What we learned
We learned how to lay out and use colours and shapes on Figma. This helped us a lot while designing our website. We discovered several convenient functionalities that Velo by Wix provides, which strengthened the final product. We learned how to customize the back-end development with a new coding language, JavaScript.
## What's next for The Perfect Blend
Our team plans to add many more coffee types and caffeinated drinks, ranging from teas to energy drinks. We would also like to implement more features, such as saving the tracker progress to compare days and producing weekly charts. | ## Inspiration
Since this was the first hackathon for most of our group, we wanted to work on a project where we could learn something new while sticking to familiar territory. Thus we settled on programming a discord bot, something all of us have extensive experience using, that works with UiPath, a tool equally as intriguing as it is foreign to us. We wanted to create an application that will allow us to track the prices and other related information of tech products in order to streamline the buying process and enable the user to get the best deals. We decided to program a bot that utilizes user input, web automation, and web-scraping to generate information on various items, focusing on computer components.
## What it does
Once online, our PriceTracker bot runs under two main commands: !add and !price. Using these two commands, a few external CSV files, and UiPath, this stores items input by the user and returns related information found via UiPath's web-scraping features. A concise display of the product’s price, stock, and sale discount is displayed to the user through the Discord bot.
## How we built it
We programmed the Discord bot using the comprehensive discord.py API. Using its thorough documentation and a handful of tutorials online, we quickly learned how to initialize a bot using Discord's personal Development Portal and create commands that would work with specified text channels. To scrape web pages, in our case, the Canada Computers website, we used a UiPath sequence along with the aforementioned CSV file, which contained input retrieved from the bot's "!add" command. In the UiPath process, each product is searched on the Canada Computers website and then through data scraping, the most relevant results from the search and all related information are processed into a csv file. This csv file is then parsed through to create a concise description which is returned in Discord whenever the bot's "!prices" command was called.
## Challenges we ran into
The most challenging aspect of our project was figuring out how to use UiPath. Since Python was such a large part of programming the discord bot, our experience with the language helped exponentially. The same could be said about working with text and CSV files. However, because automation was a topic none of us hardly had any knowledge of; naturally, our first encounter with it was rough. Another big problem with UiPath was learning how to use variables as we wanted to generalize the process so that it would work for any product inputted.
Eventually, with enough perseverance, we were able to incorporate UiPath into our project exactly the way we wanted to.
## Accomplishments that we're proud of
Learning the ins and outs of automation alone was a strenuous task. Being able to incorporate it into a functional program is even more difficult, but incredibly satisfying as well. Albeit small in scale, this introduction to automation serves as a good stepping stone for further research on the topic of automation and its capabilities.
## What we learned
Although we stuck close to our roots by relying on Python for programming the discord bot, we learned a ton of new things about how these bots are initialized, the various attributes and roles they can have, and how we can use IDEs like Pycharm in combination with larger platforms like Discord. Additionally, we learned a great deal about automation and how it functions through UiPath which absolutely fascinated us the first time we saw it in action. As this was the first Hackathon for most of us, we also got a glimpse into what we have been missing out on and how beneficial these competitions can be. Getting the extra push to start working on side-projects and indulging in solo research was greatly appreciated.
## What's next for Tech4U
We went into this project with a plethora of different ideas, and although we were not able to incorporate all of them, we did finish with something we were proud of. Some other ideas we wanted to integrate include: scraping multiple different websites, formatting output differently on Discord, automating the act of purchasing an item, taking input and giving output under the same command, and more. | winning |
## Inspiration
In a world where refrigerators are humble white boxes merely used to store food, average and ubiquitous are the most common adjectives when describing one of the most important appliances. Several upgrades are necessary for a technology that essentially hasn't changed much since its introduction a century ago. We wanted to tackle food waste and create a better experience with the Frigid
## What it does
Analyzes the image of the food placed in its basket using Microsoft's Computer Vision API, Raspberry Pi, and Arduino.
Places food in specific compartment for it using a laser-cut acrylic railing frame and basket, actuated by motors. The plan for a full product is to maintain a specific temperature for each compartment (our prototype uses LEDs as an example of different temperatures) that allows the food to last as long as possible, maintain flavor and nutrition, and either freeze or defrost foods such as meats from the tap of a finger. This also makes it much easier to arrange groceries after buying.
Using the identification from the vision system, Frigid can also warn you when you are running low on supplies or if the food has gone bad. Notifications over the internet allow you to easily order more food or freeze food that is getting close the expiration date.
## How we built it
Laser cut acrylic, stepper motors, Arduino, Raspberry Pi. As a hardware project, we spent most of our time trying to build our product from scratch and maintain structural rigidity. The laser cut acrylic is what we used to build the frame, basket holding the food, and compartments for food to be held in.
## Challenges we ran into
-limited laser cutting hours
-limited materials
-limited tools
-LED control
-power
-hot glue
-working with new technology
## Accomplishments that we're proud of
-Microsoft api
-first hardware hack
-first time using arduino and raspberry pi together
-first time working with Nelson
-lots of laser cutting
-learning about rigidity and structural design
-LED strips
-transistor logic
-power management
## What we learned
-soldering
-power management
-Controlling arduinos with raspberry pi
## What's next for Frigid
There is a huge potential for future development with Frigid, namely working in the temperature features and increasing the amount of food types the vision system can recognize. As more people use the product, we hope we can use the data to create better vision recognition and figure out ways to reduce food waste. | ## Inspiration
As a team, we were immediately intrigued by the creative freedom involved in building a ‘useless invention’ and inspiration was drawn from the ‘useless box’ that turns itself off. We thought ‘why not have it be a robot arm and give it an equally intriguing personality?’ and immediately got to work taking our own spin on the concept.
## What It Does
The robot has 3 servos that allow the robot to move with personality. Whenever the switch is pressed, the robot executes a sequence of actions in order to flick the switch and then shut down.
## How We Built It
We started by dividing tasks between members: the skeleton of the code, building the physical robot, and electronic components. A CAD model was drawn up to get a gauge for scale, and then it was right into cutting and glueing popsicle sticks. An Exacto blade was used to create holes in the base container for components to fit through to keep everything neat and compact. Simultaneously, as much of the code and electronic wiring was done to not waste time.
After the build was complete, a test code was run and highlighted areas that needed to be reinforced. While that was happening, calculations were being done to determine the locations the servo motors would need to reach in order to achieve our goal. Once a ‘default’ sequence was achieved, team members split to write 3 of our own sequences before converging to achieve the 5th and final sequence. After several tests were run and the code was tweaked, a demo video was filmed.
## Challenges We Ran Into
The design itself is rather rudimentary, being built out of a Tupperware container, popsicle sticks and various electronic components to create the features such as servo motors and a buzzer. Challenges consisted of working with materials as fickle as popsicle sticks – a decision driven mainly by the lack of realistic accessibility to 3D printers. The wood splintered and was weaker than expected, therefore creative design was necessary so that it held together.
Another challenge was the movement. Working with 3 servo motors proved difficult when assigning locations and movement sequences, but once we found a ‘default’ sequence that worked, the other following sequences slid into place. Unfortunately, our toils were not over as now the robot had to be able to push the switch, and initial force proved to be insufficient.
## Accomplishments That We’re Proud Of
About halfway through, while we were struggling with getting the movement to work, thoughts turned toward what we would do in different sequences. Out of inspiration from other activities occurring during the event, it was decided that we would add a musical element to our ‘useless machine’ in the form of a buzzer playing “Tequila” by The Champs. This was our easiest success despite involving transposing sheet music and changing rhythms until we found the desired effect.
We also got at least 3 sequences into the robot! That is more than we were expecting 12 hours into the build due to difficulties with programming the servos.
## What We Learned
When we assigned tasks, we all chose roles that we were not normally accustomed to. Our mechanical member worked heavily in software while another less familiar with design focused on the actual build. We all exchanged roles over the course of the project, but this rotation of focus allowed us to get the most out of the experience. You can do a lot with relatively few components; constraint leads to innovation.
## What’s Next for Little Dunce
So far, we have only built in the set of 5 sequences, but we want Little Dunce to have more of a personality and more varied and random reactions. As of now, it is a sequence of events, but we want Little Dunce to act randomly so that everyone can get a unique experience with the invention. We also want to add an RGB LED light for mood indication dependent on the sequence chosen. This would also serve as the “on/off” indicator since the initial proposal was to have a robot that goes to sleep. | ## Inspiration
Lyft's round up and donate system really inspired us here.
We wanted to find a way to benefit both users and help society. We all want to give back somehow, but don't know how sometimes or maybe we want to donate, but don't really know how much to give back or if we could afford it.
We wanted an easy way incorporated into our lives and spending habits.
This would allow us to reach a wider amount of users and utilize the power of the consumer society.
## What it does
With a chrome extension like "Heart of Gold", the user gets every purchase's round up to nearest dollar (for example: purchase of $9.50 has a round up of $10, so $0.50 gets tracked as the "round up") accumulated. The user gets to choose when they want to donate and which organization gets the money.
## How I built it
We built a web app/chrome extension using Javascript/JQuery, HTML/CSS.
Firebase javascript sdk library helped us store the calculations of the accumulation of round up's.
We make an AJAX call to the Paypal API, so it took care of payment for us.
## Challenges I ran into
For all of the team, it was our first time creating a chrome app extension. For most of the team, it was our first time heavily working with javascript let alone using technologies like Firebase and the Paypal API.
Choose what technology/platform would make the most sense was tough, but the chrome extension would allow for more relevance since a lot of people make more online purchases nowadays and an extension can run in the background/seem omnivalent.
So we picked up the javascript language to start creating the extension. Lisa Lu integrated the PayPal API to handle donations and used HTML/CSS/JavaScript to create the extension pop-up. She also styled the user interface.
Firebase was also completely new to us, but we chose to use it because it didn't require us to have a two step process: a server (like Flask) + a database (like mySQL or MongoDB). It also helped that we had a mentor guide us through. We learned a lot about the Javascript language (mostly that we haven't even really scratched the surface of it), and the importance of avoiding race conditions. We also learned a lot about how to strategically structure our code system (having a background.js to run firebase database updates
## Accomplishments that I'm proud of
Veni, vidi, vici.
We came, we saw, we conquered.
## What I learned
We all learned that there are multiple ways to create a product to solve a problem.
## What's next for Heart of Gold
Heart of Gold has a lot of possibilities: partnering with companies that want to advertise to users and social good organizations, making recommendations to users on charities as well as places to shop, game-ify the experience, expanding capabilities of what a user could do with the round up money they accumulate. Before those big dreams, cleaning up the infrastructure would be very important too. | losing |
## Inspiration
We're four college freshmen who were expecting new experiences with interactive and engaging professors in college; however, COVID-19 threw a wrench in that (and a lot of other plans). Since all of us are currently learning online through various video lecture platforms, we've found that these lectures sometimes move too fast or are just flat-out boring. Summaread is our solution to transform video lectures into an easy-to-digest format.
## What it does
"Summaread" automatically captures lecture content using an advanced AI NLP pipeline to automatically generate a condensed note outline. All one needs to do is provide a YouTube link to the lecture or a transcript and the corresponding outline will be rapidly generated for reading. Summaread currently generates outlines that are shortened to about 10% of the original transcript length. The outline can also be downloaded as a PDF for annotation purposes. In addition, our tool uses the Google cloud API to generate a list of Key Topics and links to Wikipedia to encourage further exploration of lecture content.
## How we built it
Our project comprises several interconnected components, which we detail below:
**Lecture Detection**
Our product automatically detects when lecture slides change, which lets the NLP model summarize each slide's content separately and improves the quality of the results. This component uses the Google Cloud Platform API to detect changes in lecture content and records a timestamp at each transition.
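Our actual detector calls the Google Cloud Platform API, but the core idea can be sketched locally with simple frame differencing; the sampling rate and threshold below are illustrative values, not our tuned ones:

```python
# Simplified stand-in for the slide-change detector (the real pipeline uses the
# Google Cloud Platform API; this local sketch relies on frame differencing).
import cv2

def detect_slide_changes(video_path, sample_every=30, diff_threshold=20.0):
    """Return approximate timestamps (in seconds) where the slide likely changed."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    timestamps, prev_gray, frame_idx = [0.0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(cv2.resize(frame, (320, 180)), cv2.COLOR_BGR2GRAY)
            if prev_gray is not None:
                # A large mean pixel difference between sampled frames suggests a new slide.
                if cv2.absdiff(gray, prev_gray).mean() > diff_threshold:
                    timestamps.append(frame_idx / fps)
            prev_gray = gray
        frame_idx += 1
    cap.release()
    return timestamps
```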
**Text Summarization**
We use the Hugging Face summarization pipeline to automatically summarize each group of text that falls within a set word-count range. This is repeated for every group of text previously generated by the Lecture Detection step.
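A minimal sketch of this step (the length limits below are illustrative, not our tuned values):

```python
# Each transcript chunk between two slide changes is condensed with the
# Hugging Face summarization pipeline.
from transformers import pipeline

summarizer = pipeline("summarization")

def summarize_chunks(chunks):
    notes = []
    for chunk in chunks:
        result = summarizer(chunk, max_length=80, min_length=15, do_sample=False)
        notes.append(result[0]["summary_text"])
    return notes
```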
**Post-Processing and Formatting**
Once the summarized content is generated, the text is split into sentences and processed into a set of coherent bullet points using Natural Language Processing techniques. The text is also formatted for easy reading with “sub-bullet” points that give further explanation of each main bullet point.
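A rough sketch of the formatting step, assuming NLTK's sentence tokenizer (our exact splitting and formatting rules differ slightly):

```python
# Split each summary into sentences; the first sentence becomes a bullet point
# and the remaining sentences become its sub-bullets.
import nltk
nltk.download("punkt", quiet=True)
from nltk.tokenize import sent_tokenize

def to_outline(summaries):
    lines = []
    for summary in summaries:
        sentences = sent_tokenize(summary)
        if not sentences:
            continue
        lines.append(f"- {sentences[0]}")
        lines.extend(f"    - {s}" for s in sentences[1:])
    return "\n".join(lines)
```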
**Key Concept Suggestions**
To generate key concepts, we used the Google Cloud Platform API to scan over the condensed notes our model generates and provide wikipedia links accordingly. Some examples of Key Concepts for a COVID-19 related lecture would be medical institutions, famous researchers, and related diseases.
**Front-End**
The front end of our website was set-up with Flask and Bootstrap. This allowed us to quickly and easily integrate our Python scripts and NLP model.
## Challenges we ran into
1. Text summarization is extremely difficult -- while there are many powerful algorithms for turning articles into paragraph summaries, there is essentially nothing on shortening conversational sentences like those found in a lecture into bullet points.
2. Our NLP model is quite large, which made it difficult to host on cloud platforms
## Accomplishments that we're proud of
1) Making a multi-faceted application, with a variety of machine learning and non-machine learning techniques.
2) Working on an unsolved machine learning problem (lecture simplification)
3) Real-time text analysis to determine new elements
## What we learned
1) First time for multiple members using Flask and doing web development
2) First time using Google Cloud Platform API
3) Running deep learning models makes my laptop run very hot
## What's next for Summaread
1) Improve our summarization model through improving data pre-processing techniques and decreasing run time
2) Adding more functionality to generated outlines for better user experience
3) Allowing for users to set parameters regarding how much the lecture is condensed by | **Finding a problem**
Education policy and infrastructure tend to neglect students with accessibility issues. They are oftentimes left on the backburner while funding and resources go into research and strengthening the existing curriculum. Thousands of college students struggle with taking notes in class due to various learning disabilities that make it difficult to process information quickly or write down information in real time.
Over the past decade, Offices of Accessible Education (OAE) have been trying to help support these students by hiring student note-takers and increasing ASL translators in classes, but OAE is constrained by limited funding and low interest from students to become notetakers.
This problem has been particularly relevant for our TreeHacks group. In the past year, we have become notetakers for our friends because there are not enough OAE notetakers in class. Being note writers gave us insight into what notes are valuable for those who are incredibly bright and capable but struggle to write. This manual process where we take notes for our friends has helped us become closer as friends, but it also reveals a systemic issue of accessible notes for all.
Coming into this weekend, we knew note taking was an especially interesting space. GPT3 had also been on our mind as we had recently heard from our neurodivergent friends about how it helped them think about concepts from different perspectives and break down complicated topics.
**Failure and revision**
Our initial idea was to turn videos into transcripts and feed these transcripts into GPT-3 to create the lecture notes. This idea did not work out because we quickly learned the transcript for a 60-90 minute video was too large to feed into GPT-3.
Instead, we decided to incorporate slide data to segment the video and use slide changes to organize the notes into distinct topics. Our overall idea had three parts: extract timestamps the transcript should be split at by detecting slide changes in the video, transcribe the text for each video segment, and pass in each segment of text into a gpt3 model, fine-tuned with prompt engineering and examples of good notes.
We ran into challenges every step of the way as we worked with new technologies and dealt with the beast of multi-gigabyte video files. Our main challenge was identifying slide transitions in a video so we could segment the video based on these slide transitions (which signified shifts in topics). We initially started with heuristics-based approaches to identify pixel shifts. We did this by iterating through frames using OpenCV and computing metrics such as the logarithmic sum of the bitwise XORs between images. This approach resulted in several false positives because the compressed video quality was not high enough to distinguish shifts in a few words on the slide. Instead, we trained a neural network using PyTorch on both pairs of frames across slide boundaries and pairs from within the same slide. Our neural net was able to segment videos based on individual slides, giving structure and organization to an unwieldy video file. The final result of this preprocessing step is an array of timestamps where slides change.
Next, this array was used to segment the audio input, which we did using Google Cloud’s Speech to Text API. This was initially challenging as we did not have experience with cloud-based services like Google Cloud and struggled to set up the various authentication tokens and permissions. We also ran into the issue of the videos taking a very long time, which we fixed by splitting the video into smaller clips and then implementing multithreading approaches to run the speech to text processes in parallel.
**New discoveries**
Our greatest discoveries lay in the fine-tuning of our multimodal model. We implemented a variety of prompt engineering techniques to coax our generative language model into producing the type of notes we wanted from it. In order to overcome the limited context size of the GPT-3 model we utilized, we iteratively fed chunks of the video transcript into the OpenAI API at once. We also employed both positive and negative prompt training to incentivize our model to produce output similar to our desired notes in the output latent space. We were careful to manage the external context provided to the model to allow it to focus on the right topics while avoiding extraneous tangents that would be incorrect. Finally, we sternly warned the model to follow our instructions, which did wonders for its obedience.
These challenges and solutions seem seamless, but our team was on the brink of not finishing many times throughout Saturday. The worst was around 10 PM. I distinctly remember my eyes slowly closing, a series of crumpled papers scattered nearby the trash can. Each of us was drowning in new frameworks and technologies. We began to question, how could a group of students, barely out of intro-level computer science, think to improve education.
The rest of the hour went in a haze until we rallied around a text from a friend who sent us some amazing CS notes we had written for them. Their heartfelt words of encouragement about how our notes had helped them get through the quarter gave us the energy to persevere and finish this project.
**Learning about ourselves**
We found ourselves, after a good amount of pizza and a bit of caffeine, diving back into documentation for react, google text to speech, and docker. For hours, our eyes grew heavy, but their luster never faded. More troubles arose. There were problems implementing a payment system and never-ending CSS challenges. Ultimately, our love of exploring technologies we were unfamiliar with helped fuel our inner passion.
We knew we wanted to integrate Checkbook.io’s unique payments tool, and though we found their API well architectured, we struggled to connect to it from our edge-compute centric application. Checkbook’s documentation was incredibly helpful, however, and we were able to adapt the code that they had written for a NodeJS server-side backend into our browser runtime to avoid needing to spin up an entirely separate finance service. We are thankful to Checkbook.io for the support their team gave us during the event!
Finally, at 7 AM, we connected the backend of our website with the fine-tuned gpt3 model. I clicked on CS106B and was greeted with an array of lectures to choose from. After choosing last week’s lecture, a clean set of notes were exported in LaTeX, perfect for me to refer to when working on the PSET later today!
We jumped off of the couches we had been sitting on for the last twelve hours and cheered. A phrase bounced inside my mouth like a rubber ball, “I did it!”
**Product features**
Real time video to notes upload
Multithreaded video upload framework
Database of lecture notes for popular classes
Neural network to organize video into slide segments
Multithreaded video to transcript pipeline | ## Inspiration
Our journey to creating this project stems from a shared realization: the path from idea to execution is fraught with inefficiencies that can dilute even the most brilliant concepts. As developers with a knack for turning visions into reality, we've faced the slow erosion of enthusiasm and value that time imposes on innovation. This challenge is magnified for those outside the technical realm, where a lack of coding skills transforms potential breakthroughs into missed opportunities. Harvard Business Review and TechCrunch analyzed Y Combinator startups and found that around 40% of founders are non-technical.
Drawing from our experiences in fast-paced sectors like health and finance, we recognized the critical need for speed and agility. The ability to iterate quickly and gather user feedback is not just beneficial but essential in these fields. Yet, this process remains a daunting barrier for many, including non-technical visionaries whose ideas have the potential to reshape industries.
With this in mind, we set out to democratize the development process. Our goal was to forge a tool that transcends technical barriers, enabling anyone to bring their ideas to life swiftly and efficiently. By leveraging our skills and insights into the needs of both developers and non-developers alike, we've crafted a solution that bridges the gap between imagination and tangible innovation, ensuring that no idea is left unexplored due to the constraints of technical execution.
This project is more than just a tool; it's a testament to our belief that the right technology can unlock the potential within every creative thought, transforming fleeting ideas into impactful realities.
## What it does
Building on the foundation laid by your vision, MockupMagic represents a leap toward democratizing digital innovation. By transforming sketches into interactive prototypes, we not only streamline the development process but also foster a culture of inclusivity where ideas, not technical prowess, stand in the spotlight. This tool is a catalyst for creativity, enabling individuals from diverse backgrounds to participate actively in the digital creation sphere.
The user can upload a messy sketch on paper to our website. MockupMagic will then digitize your low-fidelity prototype into a high-fidelity replica with interactive capabilities. The user can also see code alongside the generated mockups, which serves as both a bridge to tweak the generated prototype and a learning tool, gently guiding users toward deeper technical understanding. Moreover, the integration of a community feedback mechanism through the Discussion tab directly within the platform enhances the iterative design process, allowing for real-time user critique and collaboration.
MockupMagic is more than a tool; it's a movement towards a future where the digital divide is narrowed, and the translation of ideas into digital formats is accessible to all. By empowering users to rapidly prototype and refine their concepts, we're not just accelerating the pace of innovation; we're ensuring that every great idea has the chance to be seen, refined, and realized in the digital world.
## How we built it
Conceptualization: The project began with brainstorming sessions where we discussed the challenges non-technical individuals face in bringing their ideas to life. Understanding the value of quick prototyping, especially for designers and founders with creative but potentially fleeting ideas, we focused on developing a solution that accelerates this process.
Research and Design: We conducted research to understand the needs of our target users, including designers, founders, and anyone in between who might lack technical skills. This phase helped us design a user-friendly interface that would make it intuitive for users to upload sketches and receive functional web mockups.
Technology Selection: Choosing the right technologies was crucial. We decided on a combination of advanced image processing and AI algorithms capable of interpreting hand-drawn sketches and translating them into HTML, CSS, and JavaScript code. We leveraged and finetuned existing AI models from MonsterAPI and GPT API and tailored them to our specific needs for better accuracy in digitizing sketches.
Development: The development phase involved coding the backend logic that processes the uploaded sketches, the AI model integration for sketch interpretation, and the frontend development for a seamless user experience. We used the Reflex platform to build out our user-facing website, capitalizing on their intuitive Python-like web development tools.
Testing and Feedback: Rigorous testing was conducted to ensure the accuracy of the mockups generated from sketches. We also sought feedback from early users, including designers and founders, to understand how well the tool met their needs and what improvements could be made.
## Challenges we ran into
We initially began by building off our own model, hoping to aggregate quality training data mapping hand-drawn UI components to final front-end components, but we quickly realized this data was very difficult to find and hard to scrape for. Our model performs well for a few screens however it still struggles to establish connections between multiple screens or more complex actions.
## Accomplishments that we're proud of
Neither of us had much front-end & back-end experience going into this hackathon, so we made it a goal to use a framework that would give us experience in this field. After learning about Reflex during our initial talks with sponsors, we were amazed that Web Apps could be built in pure Python and wanted to jump right in. Using Reflex was an eye-opening experience because we were not held back by preconceived notions of traditional web development - we got to enjoy learning about Reflex and how to build products with it. Reflex’s novelty also translates to limited knowledge about it within LLM tools developers use to help them while coding, this helped us solidify our programming skills through reading documentation and creative debugging methodologies - skills almost being abstracted away by LLM coding tools. Finally, our favorite part about doing hackathons is building products we enjoy using. It helps us stay aligned with the end user while giving us personal incentives to build the best hack we can.
## What we learned
Through this project, we learned that we aren’t afraid to tackle big problems in a short amount of time. Bringing ideas on napkins to full-fledged projects is difficult, and it became apparent hitting all of our end goals would be difficult to finish in one weekend. We quickly realigned and ensured that our MVP was as good as it could get before demo day.
## What's next for MockupMagic
We would like to fine-tune our model to handle more edge cases in handwritten UIs. While MockupMagic can handle a wide range of scenarios, we hope to perform extensive user testing to figure out where we can improve our model the most. Furthermore, we want to add an easy deployment pipeline to give non-technical founders even more autonomy without knowing how to code. As we continue to develop MockupMagic, we would love to see the platform being used even at TreeHacks next year by students who want to rapidly prototype to test several ideas! | winning |
## Inspiration
We were inspired by the notion that correlation does not imply causation, and we set out to test the truth behind this statement. After spending 2 of our 1 hour brainstorming budget throwing ideas at each other, we decided on combining star alignment with machine learning. The outcome? This application. Pleiades. Named after The Seven Sisters star system, part of the constellation Taurus. In this day and age, many people face the problem of finding a true, deep connection with others. What could be deeper than the connection we have with the Universe?
## What it does
This is a dating application. There are many like it, but this one is ours, and it contains several key differences. Our dating app utilizes dark maths and powerful star tracking to calculate everyone's unique Astral Love Quotient. Separate from other dating apps that rely on decades old static data, ours stays up-to-date by constantly recalculating based on the movement of the Pleiades star cluster. In addition, our highly classified ALQ algorithm takes into account various cosmic factors such as the frequency of the universe to ensure that any factor that affects your love life is accounted for.
## How we built it
Our application relies on Tangram for easily training our machine learning algorithm. To train our model we used data from the General Social Survey, which is collected by the National Opinion Research Center at the University of Chicago. By selecting key parameters and doing thorough data cleaning using the software package Stata, we were able to confidently feed our model thousands of data points and guarantee a match. Using Flask we were able to connect our python backend to a HTML, CSS, and JS frontend.
## Challenges we ran into
The stars are fickle things, and so is code. The main challenges we faced were communicating with our website through python and figuring out how we could send and receive data from users that sign up for our astral love service. We burned through many different ideas for how to do this since none of us have any experience with web development beyond basic HTML and CSS. There were unfortunately a few moments where we spent precious hours on a single problem that had a retrospectively easy solution.
## Accomplishments that we're proud of
We are proud of our ability to combine seemingly unrelated aspects of the universe as well as seemingly unrelated programs. This is our first hackathon and we are excited to share what we have been able to put together, and the coding challenges we surmounted.
## What we learned
We have learned that brainstorming is hard, but when there is an idea that motivates everyone on the team, people will work with passion to meet goals. Also, web development is a pain on such tight time constraints when no one has much in-depth experience with it.
## What's next for Pleiades
Pleiades is a wonderful concept, although it has its flaws. The GSS dataset, while super cool, is probably not the best for determining someone's love life, and we did not have ample time to vet it properly. Also our application lacks many features we had planned but couldn't achieve, such as a proper database for ~real~ users and not just accounts we created ourselves. | ## Inspiration
University gets students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist with a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations could be recorded if the user chooses, they could track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion!
## What it does
Companion is a full-stack web application that allows users to be able to record their mood and describe their day and how they feel to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With realtime text-to-speech functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry.
## How we built it
We leveraged many technologies such as React.js, Python, Flask, Node.js, Express.js, Mongodb, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot machine learning models and trained data was built using OpenAI.
## Challenges we ran into
Some of the challenges we ran into were being able to connect between the front-end, back-end and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the end-points to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it.
## Accomplishments that we're proud of
We're proud of the ideas we've brought to the table, as well the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app.
## What we learned
We learned how to work with different APIs and create various API end-points. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really help with speeding up our coding time when everything is nice and set up with everyone understanding everything.
## What's next for Companion
The next steps for Companion are:
* Ability to book appointments with a live therapists if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee.
* Machine learning model that adapts to what the user has written in their diary that day, that works better to give people sound advice, and that is trained on individual users rather than on one dataset for all users.
## Sample account
If you can't register your own account for some reason, here is a sample one to log into:
Email: [demo@example.com](mailto:demo@example.com)
Password: password | # FaceConnect
##### Never lose a connection again! Connect with anyone, any wallet, and send transactions through an image of one's face!
## Inspiration
Have you ever met someone and instantly connected with them, only to realize you forgot to exchange contact information? Or, even worse, you have someone's contact but they are outdated and you have no way of contacting them? I certainly have.
This past week, I was going through some old photos and stumbled upon one from a Grade 5 Summer Camp. It was my first summer camp experience, I was super nervous going in but I had an incredible time with a friend I met there. We did everything together and it was one of my favorite memories from childhood. But there was a catch – I never got their contact, and I'd completely forgotten their name since it's been so long. All I had was a physical photo of us laughing together, and it felt like I'd lost a precious connection forever.
This dilemma got me thinking. The problem of losing touch with people we've shared fantastic moments with is all too common, whether it's at a hackathon, a party, a networking event, or a summer camp. So, I set out to tackle this issue at Hack The Valley.
## What it does
That's why I created FaceConnect, a Discord bot that rekindles these connections using facial recognition. With FaceConnect, you can send connection requests to people as long as you have a picture of their face.
But that's not all. FaceConnect also allows you to view account information and send transactions if you have a friend's face. If you owe your friend money, you can simply use the "transaction" command to complete the payment.
Or even if you find someone's wallet or driver's license, you can send a reach out to them just with their ID photo!
Imagine a world where you never lose contact with your favorite people again.
Join me in a future where no connections are lost. Welcome to FaceConnect!
## Demos
Mobile Registration and Connection Flow (Registering and Detecting my own face!):
<https://github.com/WilliamUW/HackTheValley/assets/25058545/d6fc22ae-b257-4810-a209-12e368128268>
Desktop Connection Flow (Obama + Trump + Me as examples):
<https://github.com/WilliamUW/HackTheValley/assets/25058545/e27ff4e8-984b-42dd-b836-584bc6e13611>
## How I built it
FaceConnect is built on a diverse technology stack:
1. **Computer Vision:** I used OpenCV and the Dlib C++ Library for facial biometric encoding and recognition.
2. **Vector Embeddings:** ChromaDB and Llama Index were used to create vector embeddings of sponsor documentation.
3. **Document Retrieval:** I utilized Langchain to implement document retrieval from VectorDBs.
4. **Language Model:** OpenAI was employed to process user queries.
5. **Messaging:** Twilio API was integrated to enable SMS notifications for contacting connections.
6. **Discord Integration:** The bot was built using the discord.py library to integrate the user flow into Discord.
7. **Blockchain Technologies:** I integrated Hedera to build a decentralized landing page and user authentication. I also interacted with Flow to facilitate seamless transactions.
## Challenges I ran into
Building FaceConnect presented several challenges:
* **Solo Coding:** As some team members had midterm exams, the project was developed solo. This was both challenging and rewarding as it allowed for experimentation with different technologies.
* **New Technologies:** Working with technologies like ICP, Flow, and Hedera for the first time required a significant learning curve. However, this provided an opportunity to develop custom Language Models (LLMs) trained on sponsor documentation to facilitate the learning process.
* **Biometric Encoding:** It was my first time implementing facial biometric encoding and recognition! Although cool, it required some time to find the right tools to convert a face to a biometric hash and then compare these hashes accurately.
## Accomplishments that I'm proud of
I're proud of several accomplishments:
* **Facial Recognition:** Successfully implementing facial recognition technology, allowing users to connect based on photos.
* **Custom LLMs:** Building custom Language Models trained on sponsor documentation, which significantly aided the learning process for new technologies.
* **Real-World Application:** Developing a solution that addresses a common real-world problem - staying in touch with people.
## What I learned
Throughout this hackathon, I learned a great deal:
* **Technology Stacks:** I gained experience with a wide range of technologies, including computer vision, blockchain, and biometric encoding.
* **Solo Coding:** The experience of solo coding, while initially challenging, allowed for greater freedom and experimentation.
* **Documentation:** Building custom LLMs for various technologies, based on sponsor documentation, proved invaluable for rapid learning!
## What's next for FaceConnect
The future of FaceConnect looks promising:
* **Multiple Faces:** Supporting multiple people in a single photo to enhance the ability to reconnect with groups of friends or acquaintances.
* **Improved Transactions:** Expanding the transaction feature to enable users to pay or transfer funds to multiple people at once.
* **Additional Technologies:** Exploring and integrating new technologies to enhance the platform's capabilities and reach beyond Discord!
### Sponsor Information
ICP Challenge:
I leveraged ICP to build a decentralized landing page and implement user authentication so spammers and bots are blocked from accessing our bot.
Built custom LLM trained on ICP documentation to assist me in learning about ICP and building on ICP for the first time!
I really disliked deploying on Netlify and now that I’ve learned to deploy on ICP, I can’t wait to use it for all my web deployments from now on!
Canister ID: be2us-64aaa-aaaaa-qaabq-cai
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/ICP.md>
Best Use of Hedera:
With FaceConnect, you are able to see your Hedera account info using your face, no need to memorize your public key or search your phone for it anymore!
Allow people to send transactions to people based on face! (Wasn’t able to get it working but I have all the prerequisites to make it work in the future - sender Hedera address, recipient Hedera address).
In the future, to pay someone or a vendor in Hedera, you can just scan their face to get their wallet address instead of preparing QR codes or copy and pasting!
I also built a custom LLM trained on Hedera documentation to assist me in learning about Hedera and building on Hedera as a beginner!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/hedera.md>
Best Use of Flow
With FaceConnect, to pay someone or a vendor in Flow, you can just scan their face to get their wallet address instead of preparing QR codes or copy and pasting!
I also built a custom LLM trained on Flow documentation to assist me in learning about Flow and building on Flow as a beginner!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/flow.md>
Georgian AI Challenge Prize
I was inspired by the data sources listed in the document by scraping LinkedIn profile pictures and their faces for obtaining a dataset to test and verify my face recognition model!
I also built a custom LLM trained on Georgian documentation to learn more about the firm!
Link: <https://github.com/WilliamUW/HackTheValley/blob/readme/GeorgianAI.md>
Best .Tech Domain Name:
FaceCon.tech
Best AI Hack:
Use of AI include:
1. Used Computer Vision with OpenCV and the Dlib C++ Library to implement AI-based facial biometric encoding and recognition.
2. Leveraged ChromaDB and Llama Index to create vector embeddings of sponsor documentation
3. Utilized Langchain to implement document retrieval from VectorDBs
4. Used OpenAI to process user queries for everything Hack the Valley related!
By leveraging AI, FaceConnect has not only addressed a common real-world problem but has also pushed the boundaries of what's possible in terms of human-computer interaction. Its sophisticated AI algorithms and models enable users to connect based on visuals alone, transcending language and other barriers. This innovative use of AI in fostering human connections sets FaceConnect apart as an exceptional candidate for the "Best AI Hack" award.
Best Diversity Hack:
Our project aligns with the Diversity theme by promoting inclusivity and connection across various barriers, including language and disabilities. By enabling people to connect using facial recognition and images, our solution transcends language barriers and empowers individuals who may face challenges related to memory loss, speech, or hearing impairments. It ensures that everyone, regardless of their linguistic or physical abilities, can stay connected and engage with others, contributing to a more diverse and inclusive community where everyone's unique attributes are celebrated and connections are fostered.
Imagine trying to get someone’s contact in Germany, or Thailand, or Ethiopia? Now you can just take a picture!
Best Financial Hack:
FaceConnect is the ideal candidate for "Best Financial Hack" because it revolutionizes the way financial transactions can be conducted in a social context. By seamlessly integrating facial recognition technology with financial transactions, FaceConnect enables users to send and receive payments simply by recognizing the faces of their friends.
This innovation simplifies financial interactions, making it more convenient and secure for users to settle debts, split bills, or pay for services. With the potential to streamline financial processes, FaceConnect offers a fresh perspective on how we handle money within our social circles. This unique approach not only enhances the user experience but also has the potential to disrupt traditional financial systems, making it a standout candidate for the "Best Financial Hack" category. | partial |
## Inspiration
"Thrice upon a time, our four incredible ninjas decided to attend a hackathon. After winning, some other salty ninjas decide to take them out. In order to get home and sleep after a long night of coding, our heroic ninjas must dodge incoming attacks to escape. "
## What it does
Sleepy ninja allows the user to play randomly as one of four characters, Sleepy Ninja, Angry Ninja, Happy Ninja, and Naruto.
Press 'Space' to jump over deadly ninja stars. Beware, they all come at you at different speeds! Some of them even come from the air... | ## Github REPO
<https://github.com/charlesjin123/medicine-scanner>
## Inspiration
We believe everyone deserves easy, safe access to their medication – yet the small fine print that often takes up hundreds to thousands of words printed on the back of medicine bottles is incredibly inaccessible for a huge market. Watching elderly patients struggle to read fine print, and non-English speakers feeling overwhelmed by confusing medical terms, inspired us to act. Imagine the power of a tool that turns every medication bottle into a personalized, simplified guide in the simple to understand form of cards — that’s the project we’re building. With Med-Scanner, we’re bridging gaps in healthcare access and redefining patient safety.
## What it does
Med-Scanner is a game changer, transforming complex medication information into easy-to-understand digital cards. By simply scanning a medication with your phone, users instantly get critical info like dosages, side effects, and interactions, all laid out clearly and concisely. We even speak the instructions for those who are visually impaired. Med-Scanner is the safe and accessible patient care solution for everyone.
## How we built it
Using cutting-edge tech like React, Python, and NLP models, we built Med-Scanner from the ground up. The app scans medication labels using advanced OCR and analyzes it with NLP. But we didn’t stop there. We infused text-to-speech for the blind, and personalized chatbots for even further support.
## Challenges we ran into
We thrived under time pressure to build a Med-Scanner. One of our greatest challenges was perfecting the OCR to handle blurry, inconsistent images from different formats. Plus, developing an interface that’s accessible, intuitive, and incredibly simple for older users pushed us to innovate like never before. However, with our team of four bring together a multitude of talents, we were able to overcome these challenges to fulfill our mission.
## Accomplishments that we're proud of
The fact that we did it — we brought this ambitious project to life — fills us with pride. We built a prototype that not only works but works brilliantly, turning complex medical details into clear, actionable cards. We’re especially proud of the accuracy of our OCR model and the seamless voice-over features that make this tool genuinely accessible. We’re also proud of creating a product that’s not just tech-savvy, but mission-driven— making healthcare safer for millions of people. | ## Inspiration
Jessica here - I came up with the idea for BusPal out of expectation that the skill has already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning off and on my lights with Amazon skills and routines. The fact that she could not check when my bus to school was going to arrive was surprising at first - until I realized that Amazon and Google are one of the biggest rivalries there is between 2 tech giants. However, I realized that the combination of Alexa's genuine personality and the powerful location ability of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: to be a convenient Alexa skill that will improve my morning routine - and everyone else's.
## What it does
This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free.
## How we built it
Through the Amazon Alexa builder, Google API, and AWS.
## Challenges we ran into
We originally wanted to use stdlib, however with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway into the hackathon.
## Accomplishments that we're proud of
Completing Phase 1 of the project - giving Alexa the ability to take in a destination, and deliver a bus time, route, and stop to leave for.
## What we learned
We learned how to use AWS, work with Node.js, and how to use Google APIs.
## What's next for Bus Pal
Improve the text ability of the skill, and enable calendar integration. | losing |
## Inspiration
A homecooked chicken biriyani; Chinese dumplings on the dinner table. The only problem? We, college students, are away from home.
And we deeply miss home-cooked meals.
So much so, that a survey conducted on college students showed that 77% of respondents ranked home-cooked meals in the top 3 things they miss from home.
On the other hand, aunties, uncles and mothers love to cook. They find great joy in feeding their young ones.
So, what if there was a way to connect local aunties to college students, and finally have a way to satisfy the craving for home-cooked meals?
Introducing Aunties Kitchen!
## What it does
Auntie’s Kitchen connects “aunties” with local college students. Students are able to explore different homecooked meals from different cultural backgrounds that are being delivered each week to their campus and can place an order. For aunties, we simplified the process of scheduling and managing who they’re making meals for.
At least for our initial product, aunties can create weekly meals that students are able to sign up for. The aunties then go to the colleges they chose to deliver for and students are able to pay and meet the auntie.
## How we built it
* The frontend and backend is built using Next.js with Typescript both hosted on the same server
* Authentication for the aunties and the students was done using NextAuth and Google OAuth2.
* The data for students, aunties, and meals are stored in a MongoDB database. We utilize RestAPIs to create, retrieve, update, and delete the entities involved.
* Deployed our web application using Vercel
+ Utilized Shadcn components and composed our styles using Tailwind CSS
* Collaborated using Github and Git for version control
## Challenges we ran into
* Setting up user authentication had a lot of moving pieces
* There were some styling issues with Tailwind CSS and getting it to work with Shadcn components
* Next.js 14 had a lot of new updates that some teammates were not aware of. Learning those new features needed some time
## Accomplishments that we're proud of
In the beginning, we were thinking about how to make something that would be the most technically complex and trying to force LLMs or AI into our application. But, after some ideation, we realized we needed to solve an actual problem. Thus, we framed our thinking from solution -> problem, to problem -> solution. And eventually, we came up with Auntie’s Kitchen. We feel accomplished in the sense that even though this application is not as technically complex, it solves a problem that many students have.
## What we learned
We learned that it’s more important to work on something that is a genuine problem as opposed to building “wouldn’t it be cool if” products. We had all these ideas involving AI and workflow automation but at the end of the day, we felt that the problem with homecooked meals was something that just resonated more with us.
## What's next for Auntie's Kitchen
Aside from additional product improvements, one potential next move for Auntie’s Kitchen is to offer the services we give to aunties to restaurants. Many restaurants want to do large orders on a weekly or monthly basis but the systems in place to do so are really messy and oftentimes just a massive group chat. The value for restaurants is that they’re able to do a bulk order and earn a large chunk of money from one delivery as well as service customers not in their local area. The value for users is that they can order from restaurants they love even if it’s one that’s far away, with little to no delivery fees. | ## Inspiration
While attending Hack the 6ix, our team had a chance to speak to Advait from the Warp team. We got to learn about terminals and how he got involved with Warp, as well as his interest in developing something completely new for the 21st century. Through this interaction, my team decided we wanted to make an AI-powered developer tool as well, which gave us the idea for Code Cure!
## What it does
Code Cure can call your python file and run it for you. Once it runs, you will see your output as usual in your terminal, but if you experience any errors, our extension runs and gives some suggestions in a pop-up as to how you may fix it.
## How we built it
We made use of Azure's OpenAI service to power our AI code fixing suggestions and used javascript to program the rest of the logic behind our VS code extension.
## Accomplishments that we're proud of
We were able to develop an awesome AI-powered tool that can help users fix errors in their python code. We believe this project will serve as a gateway for more people to learn about programming, as it provides an easier way for people to find solutions to their errors.
## What's next for Code Cure
As of now, we are only able to send our output through a popup on the user's screen. In the future, we would like to implement a stylized tab where we are able to show the user different suggestions using the most powerful AI models available to us. | ## Inspiration
As college students who recently graduated high school in the last year or two, we know first-hand the sinking feeling that you experience when you open an envelope after your graduation, and see a gift card to a clothing store you'll never set foot into in your life. Instead, you can't stop thinking about the latest generation of AirPods that you wanted to buy. Well, imagine a platform where you could trade your unwanted gift card for something you would actually use... you would actually be able to get those AirPods, without spending money out of your own pocket. That's where the idea of GifTr began.
## What it does
Our website serves as a **decentralized gift card trading marketplace**. A user who wants to trade their own gift card for a different one can log in and connect to their **Sui wallet**. Following that, they will be prompted to select their gift card company and cash value. Once they have confirmed that they would like to trade the gift card, they can browse through options of other gift cards "on the market", and if they find one they like, send a request to swap. If the other person accepts the request, a trustless swap is initiated without the use of a intermediary escrow, and the swap is completed.
## How we built it
In simple terms, the first party locks the card they want to trade, at which point a lock and a key are created for the card. They can request a card held by a second party, and if the second party accepts the offers, both parties swap gift cards and corresponding keys to complete the swap. If a party wants to tamper with their object, they must use their key to do so. The single-use key would then be consumed by the smart contract, and the trade would not be possible.
Our website was built in three stages: the smart contract, the backend, and the frontend.
**The smart contract** hosts all the code responsible for automating a trustless swap between the sender and the recipient. It **specifies conditions** under which the trade will occur, such as the assets being exchanged and their values. It also has **escrow functionality**, responsible for holding the cards deposited by both parties until swap conditions have been satisfied. Once both parties have undergone **verification**, the **swap** will occur if all conditions are met, and if not, the process will terminate.
**The backend\* acts as a bridge between the smart contract and the front end, allowing for \*\*communication** between the code and the user interface. The main way it does this is by **managing all data**, which includes all the user accounts, their gift card inventories, and more. Anything that the user does on the website is communicated to the Sui blockchain. This **blockchain integration** is crucial so that users can initiate trades without having to deal with the complexities of blockchain.
**The frontend** is essentially everything the user sees and does, or the UI. It begins with **user authentication** such as the login process and connection to Sui wallet. It allows the user to **manage transactions** by initiating trades, entering in attributes of the asset they want to trade, and viewing trade offers. This is all done through React to ensure *real-time interaction* so that new offers are seen and updated without refreshing the page.
## Challenges we ran into
This was **our first step into the field** of Sui blockchain and web 3 entirely, so we found it to be really informative, but also really challenging. The first step we had to take to address this challenge was to begin learning Move through some basic tutorials and set up a development environment. Another challenge was the **many aspects of escrow functionality**, which we addressed through embedding many tests within our code. For instance, we had to test that that once an object was created, it would actually lock and unlock, and also that if the second shared party stopped responding or an object was tampered with, the trade would be terminated.
## Accomplishments that we're proud of
We're most proud of the look and functionality of our **user interface**, as user experience is one of our most important focuses. We wanted to create a platform that was clean, easy to use and navigate, which we did by maintaining a sense of consistency throughout our website and keep basic visual hierarchy elements in mind when designing the website. Beyond this, we are also proud of pulling off a project that relies so heavily on **Sui blockchain**, when we entered this hackathon with absolutely no knowledge about it.
## What we learned
Though we've designed a very simple trading project implementing Sui blockchain, we've learnt a lot about the **implications of blockchain** and the role it can play in daily life and cryptocurrency. The two most important aspects to us are decentralization and user empowerment. On such a simple level, we're able to now understand how a dApp can reduce reliance on third party escrows and automate these processes through a smart contract, increasing transparency and security. Through this, the user also gains more ownership over their own financial activities and decisions. We're interested in further exploring DeFi principles and web 3 in our future as software engineers, and perhaps even implementing it in our own life when we day trade.
## What's next for GifTr
Currently, GifTr only facilitates the exchange of gift cards, but we are intent on expanding this to allow users to trade their gift cards for Sui tokens in particular. This would encourage our users to shift from traditional banking systems to a decentralized system, and give them access to programmable money that can be stored more securely, integrated into smart contracts, and used in instant transactions. | losing |
## Inspiration
As college students, we didn't know anything, so we thought about how we can change that. One way was by being smarter about the way we take care of our unused items. We all felt that our unused items could be used in better ways through sharing with other students on campus. All of us shared our items on campus with our friends but we felt that there could be better ways to do this. However, we were truly inspired after one of our team members, and close friend, Harish, an Ecological Biology major, informed us about the sheer magnitude of trash and pollution in the oceans and the surrounding environments. Also, as the National Ocean Science Bowl Champion, Harish truly was able to educate the rest of the team on how areas such as the Great Pacific Garbage Patch affect the wildlife and oceanic ecosystems, and the effects we face on a daily basis from this. With our passions for technology, we wanted to work on an impactful project that caters to a true need for sharing that many of us have while focusing on maintaining sustainability.
## What it does
The application essentially works to allow users to list various products that they want to share with the community and allows users to request items. If one user sees a request they want to provide a tool for or an offer they find appealing, they’ll start a chat with the user through the app to request the tool. Furthermore, the app sorts and filters by location to make it convenient for users. Also, by allowing for community building through the chat messaging, we want to use
## How we built it
We first, focused on wireframing and coming up with ideas. We utilized brainstorming sessions to come up with unique ideas and then split our team based on our different skill sets. Our front-end team worked on coming up with wireframes and creating designs using Figma. Our backend team worked on a whiteboard, coming up with the system design of our application server, and together the front-end and back-end teams worked on coming up with the schemas for the database.
We utilized the MERN technical stack in order to build this. Our front-end uses ReactJS in order to build the web app, our back-end utilizes ExpressJS and NodeJS, while our database utilizes MongoDB.
We also took plenty of advice and notes, not only from mentors throughout the competition, but also our fellow hackers. We really went around trying to ask for others’ advice on our web app and our final product to truly flush out the best product that we could. We had a customer-centric mindset and approach throughout the full creation process, and we really wanted to look and make sure that what we are building has a true need and is truly wanted by the people. Taking advice from these various sources helped us frame our product and come up with features.
## Challenges we ran into
Integration challenges were some of the toughest for us. Making sure that the backend and frontend can communicate well was really tough, so what we did to minimize the difficulties. We designed the schemas for our databases and worked well with each other to make sure that we were all on the same page for our schemas. Thus, working together really helped to make sure that we were making sure to be truly efficient.
## Accomplishments that we're proud of
We’re really proud of our user interface of the product. We spent quite a lot of time working on the design (through Figma) before creating it in React, so we really wanted to make sure that the product that we are showing is visually appealing.
Furthermore, our backend is also something we are extremely proud of. Our backend system has many unconventional design choices (like for example passing common ids throughout the systems) in order to avoid more costly backend operations. Overall, latency and cost and ease of use for our frontend team was a big consideration when designing the backend system
## What we learned
We learned new technical skills and new soft skills. Overall in our technical skills, our team became much stronger with using the MERN frameworks. Our front-end team learned so many new skills and components through React and our back-end team learned so much about Express. Overall, we also learned quite a lot about working as a team and integrating the front end with the back-end, improving our software engineering skills
The soft skills that we learned about are how we should be presenting a product idea and product implementation. We worked quite a lot on our video and our final presentation to the judges and after speaking with hackers and mentors alike, we were able to use the collective wisdom that we gained in order to really feel that we created a video that shows truly our interest in designing important products with true social impact. Overall, we felt that we were able to convey our passion for building social impact and sustainability products.
## What's next for SustainaSwap
We’re looking to deploy the app in local communities as we’re at the point of deployment currently. We know there exists a clear demand for this in college towns, so we’ll first be starting off at our local campus of Philadelphia. Also, after speaking with many Harvard and MIT students on campus, we feel that Cambridge will also benefit, so we will shortly launch in the Boston/Cambridge area.
We will be looking to expand to other college towns and use this to help to work on the scalability of the product. We ideally, also want to push for the ideas of sustainability, so we would want to potentially use the platform (if it grows large enough) to host fundraisers and fundraising activities to give back in order to fight climate change.
We essentially want to expand city by city, community by community, because this app also focuses quite a lot on community and we want to build a community-centric platform. We want this platform to just build tight-knit communities within cities that can connect people with their neighbors while also promoting sustainability. | ## Inspiration
While looking for genuine problems that we could solve, it came to our attention that recycling is actually much harder than it should be. For example, when you go to a place like Starbucks and are presented with the options of composting, recycling, or throwing away your empty coffee, it can be confusing and for many people, it can lead to selecting the wrong option.
## What it does
Ecolens uses a cloud-based machine learning webstream to scan for an item and tells the user the category of item it is that they scanned, providing them with a short description of the object and updating their overall count of consuming recyclable vs. unrecyclable items as well as updating the number of items that they consumed in that specific category (i.e. number of water bottle consumed)
## How we built it
This project consists of both a front end and a back end. The backend of this project was created using Java Spring and Javascript. Javascript was used in the backend in order to utilize Roboflow and Ultralytics which allowed us to display the visuals from Roboflow on the website for the user to see. Java Spring was used in the backend for creating a database that consisted of all of the scanned items and tracked them as they were altered (i.e. another item was scanned or the user decided to dump the data).
The front end of this project was built entirely through HTML, CSS, and Javascript. HTML and CSS were used in the front end to display text in a format specific to the User Interface, and Javascript was used in order to implement the functions (buttons) displayed in the User Interface.
## Challenges we ran into
This project was particularly difficult for all of us because of the fact that most of our team consists of beginners and there were multiple parts during the implementation of our application that no one was truly comfortable with. For example, integrating camera support into our website was particularly difficult as none of our members had experience with JavaScript, and none of us had fully fledged web development experience. Another notable challenge was presented with the backend of our project when attempting to delete the user history of items used while also simultaneously adding them to a larger “trash can” like a database.
From a non-technical perspective, our group also struggled to come to an agreeance on how to make our implementation truly useful and practical. Originally we thought to have hardware that would physically sort the items but we concluded that this was out of our skill range and also potentially less sustainable than simply telling the user what to do with their item digitally.
## Accomplishments that we're proud of
Although we can acknowledge that there are many improvements that could be made, such as having a cleaner UI, optimized (fast) usage of the camera scanner, or even better responses for when an item is accidentally scanned, we’re all collectively proud that we came together to find an idea that allowed each of us to not only have a positive impact on something we cared about but to also learn and practice things that we actually enjoy doing.
## What we learned
Although we can acknowledge that there are many improvements that could be made, such as having a cleaner UI, optimized (fast) usage of the camera scanner, or even better responses for when an item is accidentally scanned, we’re all collectively proud that we came together to find an idea that allowed each of us to not only have a positive impact on something we cared about but to also learn and practice things that we actually enjoy doing.
## What's next for Eco Lens
The most effective next course of action for EcoLens is to assess if there really is a demand for this product and what people think about it. Would most people genuinely use this if it was fully shipped? Answering these questions would provide us with grounds to move forward with our project. | ## Inspiration
Physiotherapy is expensive for what it provides you with, A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution and technology exists to help people in need of medical aid despite not having the time and money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3d interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
on the embedded system side of things, we used a single raspberry pi for all the sensor processing. The Pi is in charge of interfacing with the IMU while another Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device where it displays the 3D interactive GUI and processes the ML data. all the data in the backend is relayed and managed using ROS. This data is then uploaded to firebase where the information is saved on the cloud and can be accessed anytime by a smartphone. The firebase also handles plotting data to give accurate numerical feedback of the many values orientation trajectory, and improvement over time.
## Challenges WE ran into
hooking up 2 IMU to the same rpy is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a firebase connection with the app that supported the high data volume that we were able to send over and to create a graphing mechanism that is meaningful. | partial |
## venturso.me
## Inspiration
When you're visiting a new place, it's a hassle to plan out what your going to do. With venturso.me, you can plan your trip with the press of a button. Just enter in your point and time of departure and arrival and we will fill in all the stops in between. Don't want to visit that boring science museum? Easy - mark it as a no-go and we will recalculate your itinerary!
### Origin of the name
We take advantage of the .me domain, and the word adventuresome to create venturso.me. It's also a play on the phrase adventure, so me.
## Features
* PDF itinerary generation
* Route display on map
* Rejecting destinations
* Pinning destinations | ## Instago
Instago! Your goto **travel planner app**! We're taking the most annoying parts about planning for a trip and leaving only the funnest parts to you! Just give us your destination and we'll quickly generate you a trip to the 5 most attractive tourist destinations starting at the most ideally located hotel at your destination.
We were inspired by our experiences with planning trips with our friends. Often times, it would take us a few days to find the best places we were interested to see, in a city. Unfortunately, we also had to spend time filtering away the destinations that we were uninterested in or did not fulfil our constraints. We hoped that by building Instago, we can help reduce the time it takes for people to plan their trips and get straight to the exciting preparation for the big day!
In this project, we chose to run our front-end with **React** and build our back-end with **Stdlib**, a technology we have never used before.
On the front-end side, we used **Google Maps API** for the first time in conjunction with React. This proved to be more challenging than expected as Google does not provide any in-house APIs that primarily support React. We also lost a significant amount of time trying many npm wrapper packages for google maps. They either did not support Google Directions, a key feature we needed for our project or were poorly documented. We ended up discovering the possibility of introducing script tag elements in React which allowed us to import the vanilla Google Maps JS API which supported all map features.
The largest challenge was in the back-end. While Stdlib was phenomenally flexible and easy to set up, test, debug and deploy, we needed to write our own algorithm to determine how to rank tourist attractions. We considered factors such as tourist attraction location, type of tourist attraction, number of ratings and average rating to calculate a score for each attraction. Since our API required making multiple requests to Google Places to get tourist attraction information, we had to make compromises to ensure the speed of our algorithm was reasonable for users.
The number of next steps for our project is boundless! We have plans to integrate Facebook/Google login so that we can take in user likes and preferences to make even more tailored travel plans (Users can also store and share their trips on their associated accounts)! We want to apply a semantic similarity calculation using **Word2vec** models and compare a city's tourist attraction names and types with the interests of users to gather a list of places a user would most likely visit. Concerts or sport games that are happening around the time of the user's trip could also be added to an iteniary. We also have plans to add budget constraints and modes of travel to the calculation. It was too much to add all these cool features into our project before demoing, but we are excited to add them in later!
Overall, this was an awesome project and we are so proud of what we made! :) | ## Inspiration
After witnessing the power of collectible games and card systems, our team was determined to prove that this enjoyable and unique game mechanism wasn't just some niche and could be applied to a social activity game that anyone could enjoy or use to better understand one another (taking a note from Cards Against Humanity's book).
## What it does
Words With Strangers pairs users up with a friend or stranger and gives each user a queue of words that they must make their opponent say without saying this word themselves. The first person to finish their queue wins the game. Players can then purchase collectible new words to build their deck and trade or give words to other friends or users they have given their code to.
## How we built it
Words With Strangers was built on Node.js with core HTML and CSS styling as well as usage of some bootstrap framework functionalities. It is deployed on Heroku and also makes use of TODAQ's TaaS service API to maintain the integrity of transactions as well as the unique rareness and collectibility of words and assets.
## Challenges we ran into
The main area of difficulty was incorporating TODAQ TaaS into our application since it was a new service that none of us had any experience with. In fact, it isn't blockchain, etc, but none of us had ever even touched application purchases before. Furthermore, creating a user-friendly UI that was fully functional with all our target functionalities was also a large issue and challenge that we tackled.
## Accomplishments that we're proud of
Our UI not only has all our desired features, but it also is user-friendly and stylish (comparable with Cards Against Humanity and other genre items), and we were able to add multiple word packages that users can buy and trade/transfer.
## What we learned
Through this project, we learned a great deal about the background of purchase transactions on applications. More importantly, though, we gained knowledge on the importance of what TODAQ does and were able to grasp knowledge on what it truly means to have an asset or application online that is utterly unique and one of a kind; passable without infinite duplicity.
## What's next for Words With Strangers
We would like to enhance the UI for WwS to look even more user friendly and be stylish enough for a successful deployment online and in app stores. We want to continue to program packages for it using TODAQ and use dynamic programming principles moving forward to simplify our process. | losing |
## 💡 Inspiration
Generation Z is all about renting - buying land is simply out of our budgets. But the tides are changing: with Pocket Plots, an entirely new generation can unlock the power of land ownership without a budget.
Traditional land ownership goes like this: you find a property, spend weeks negotiating a price, and secure a loan. Then, you have to pay out agents, contractors, utilities, and more. Next, you have to go through legal documents, processing, and more. All while you are shelling out tens to hundreds of thousands of dollars.
Yuck.
Pocket Plots handles all of that for you.
We, as a future LLC, buy up large parcels of land, stacking over 10 acres per purchase. Under the company name, we automatically generate internal contracts that outline a customer's rights to a certain portion of the land, defined by 4 coordinate points on a map.
Each parcel is now divided into individual plots ranging from 1,000 to 10,000 sq ft, and only one person can own a contract to each plot to the plot.
This is what makes us fundamentally novel: we simulate land ownership without needing to physically create deeds for every person. This skips all the costs and legal details of creating deeds and gives everyone the opportunity to land ownership.
These contracts are 99 years and infinitely renewable, so when it's time to sell, you'll have buyers flocking to buy from you first.
You can try out our app here: <https://warm-cendol-1db56b.netlify.app/>
(AI features are available locally. Please check our Github repo for more.)
## ⚙️What it does
### Buy land like it's ebay:
![](https://i.imgur.com/PP5BjxF.png)
We aren't just a business: we're a platform. Our technology allows for fast transactions, instant legal document generation, and resale of properties like it's the world's first ebay land marketplace.
We've not just a business.
We've got what it takes to launch your next biggest investment.
### Pocket as a new financial asset class...
In fintech, the last boom has been in blockchain. But after FTX and the bitcoin crash, cryptocurrency has been shaken up: blockchain is no longer the future of finance.
Instead, the market is shifting into tangible assets, and at the forefront of this is land. However, land investments have been gatekept by the wealthy, leaving little opportunity for an entire generation
That's where pocket comes in. By following our novel perpetual-lease model, we sell contracts to tangible buildable plots of land on our properties for pennies on the dollar.
We buy the land, and you buy the contract.
It's that simple.
We take care of everything legal: the deeds, easements, taxes, logistics, and costs. No more expensive real estate agents, commissions, and hefty fees.
With the power of Pocket, we give you land for just $99, no strings attached.
With our resell marketplace, you can sell your land the exact same way we sell ours: on our very own website.
We handle all logistics, from the legal forms to the system data - and give you 100% of the sell value, with no seller fees at all.
We even will run ads for you, giving your investment free attention.
So how much return does a Pocket Plot bring?
Well, once a parcel sells out its plots, it's gone - whoever wants to buy land from that parcel has to buy from you.
We've seen plots sell for 3x the original investment value in under one week. Now how insane is that? The tides are shifting, and Pocket is leading the way.
### ...powered by artificial intelligence
**Caption generation**
*Pocket Plots* scrapes data from sites like Landwatch to find plots of land available for purchase. Most land postings lack insightful descriptions of their plots, making it hard for users to find the exact type of land they want. With *Pocket Plots*, we transformed links into images, into helpful captions.
![](https://i.imgur.com/drgwbft.jpg)
**Captions → Personalized recommendations**
These captions also inform the user's recommended plots and what parcels they might buy. Along with inputting preferences like desired price range or size of land, the user can submit a text description of what kind of land they want. For example, do they want a flat terrain or a lot of mountains? Do they want to be near a body of water? This description is compared with the generated captions to help pick the user's best match!
![](https://i.imgur.com/poTXYnD.jpg)
### **Chatbot**
Minute Land can be confusing. All the legal confusion, the way we work, and how we make land so affordable makes our operations a mystery to many. That is why we developed a supplemental AI chatbot that has learned our system and can answer questions about how we operate.
*Pocket Plots* offers a built-in chatbot service to automate question-answering for clients with questions about how the application works. Powered by openAI, our chat bot reads our community forums and uses previous questions to best help you.
![](https://i.imgur.com/dVAJqOC.png)
## 🛠️ How we built it
Our AI focused products (chatbot, caption generation, and recommendation system) run on Python, OpenAI products, and Huggingface transformers. We also used a conglomerate of other related libraries as needed.
Our front-end was primarily built with Tailwind, MaterialUI, and React. For AI focused tasks, we also used Streamlit to speed up deployment.
### We run on Convex
We spent a long time mastering Convex, and it was worth it. With Convex's powerful backend services, we did not need to spend infinite amounts of time developing it out, and instead, we could focus on making the most aesthetically pleasing UI possible.
### Checkbook makes payments easy and fast
We are an e-commerce site for land and rely heavily on payments. While stripe and other platforms offer that capability, nothing compares to what Checkbook has allowed us to do: send invoices with just an email. Utilizing Checkbook's powerful API, we were able to integrate Checkbook into our system for safe and fast transactions, and down the line, we will use it to pay out our sellers without needing them to jump through stripe's 10 different hoops.
## 🤔 Challenges we ran into
Our biggest challenge was synthesizing all of our individual features together into one cohesive project, with compatible front and back-end. Building a project that relied on so many different technologies was also pretty difficult, especially with regards to AI-based features. For example, we built a downstream task, where we had to both generate captions from images, and use those outputs to create a recommendation algorithm.
## 😎 Accomplishments that we're proud of
We are proud of building several completely functional features for *Pocket Plots*. We're especially excited about our applications of AI, and how they make users' *Pocket Plots* experience more customizable and unique.
## 🧠 What we learned
We learned a lot about combining different technologies and fusing our diverse skillsets with each other. We also learned a lot about using some of the hackathon's sponsor products, like Convex and OpenAI.
## 🔎 What's next for Pocket Plots
We hope to expand *Pocket Plots* to have a real user base. We think our idea has real potential commercially. Supplemental AI features also provide a strong technological advantage. | ## FLEX [Freelancing Linking Expertise Xchange]
## Inspiration
Freelancers deserve a platform where they can fully showcase their skills, without worrying about high fees or delayed payments. Companies need fast, reliable access to talent with specific expertise to complete jobs efficiently. "FLEX" bridges the gap, enabling recruiters to instantly find top candidates through AI-powered conversations, ensuring the right fit, right away.
## What it does
Clients talk to our AI, explaining the type of candidate they need and any specific skills they're looking for. As they speak, the AI highlights important keywords and asks any more factors that they would need with the candidate. This data is then analyzed and parsed through our vast database of Freelancers or the best matching candidates. The AI then talks back to the recruiter, showing the top candidates based on the recruiter’s requirements. Once the recruiter picks the right candidate, they can create a smart contract that’s securely stored and managed on the blockchain for transparent payments and agreements.
## How we built it
We built starting with the Frontend using **Next.JS**, and deployed the entire application on **Terraform** for seamless scalability. For voice interaction, we integrated **Deepgram** to generate human-like voice and process recruiter inputs, which are then handled by **Fetch.ai**'s agents. These agents work in tandem: one agent interacts with **Flask** to analyze keywords from the recruiter's speech, another queries the **SingleStore** database, and the third handles communication with **Deepgram**.
Using SingleStore's real-time data analysis and Full-Text Search, we find the best candidates based on factors provided by the client. For secure transactions, we utilized **SUI** blockchain, creating an agreement object once the recruiter posts a job. When a freelancer is selected and both parties reach an agreement, the object gets updated, and escrowed funds are released upon task completion—all through Smart Contracts developed in **Move**. We also used Flask and **Express.js** to manage backend and routing efficiently.
## Challenges we ran into
We faced challenges integrating Fetch.ai agents for the first time, particularly with getting smooth communication between them. Learning Move for SUI and connecting smart contracts with the frontend also proved tricky. Setting up reliable Speech to Text was tough, as we struggled to control when voice input should stop. Despite these hurdles, we persevered and successfully developed this full stack application.
## Accomplishments that we're proud of
We’re proud to have built a fully finished application while learning and implementing new technologies here at CalHacks. Successfully integrating blockchain and AI into a cohesive solution was a major achievement, especially given how cutting-edge both are. It’s exciting to create something that leverages the potential of these rapidly emerging technologies.
## What we learned
We learned how to work with a range of new technologies, including SUI for blockchain transactions, Fetch.ai for agent communication, and SingleStore for real-time data analysis. We also gained experience with Deepgram for voice AI integration.
## What's next for FLEX
Next, we plan to implement DAOs for conflict resolution, allowing decentralized governance to handle disputes between freelancers and clients. We also aim to launch on the SUI mainnet and conduct thorough testing to ensure scalability and performance. | ## Inspiration
Every year thousands of companies are compromised and the authentication information for many is stolen. The consequence of such breaches is immense and damages the trust between individuals and organizations. There is significant overhead for an organization to secure it's authentication methods, often usability is sacrificed. Users must trust organizations with their info and organizations must trust that their methods of storage are secure. We believe this presents a significant trust and usability problem. What if we could leverage the blockchain, to do this authentication trustlessly between parties? Using challenge and response we'd be able to avoid passwords completely. Furthermore, this system of permissions could be extended from the digital world to physical assets, i.e. giving somebody the privilege to unlock your door.
## What it does
Entities can assign and manage privileges for resources they possess by publishing that a certain user (with an associated public key) has access to a resource on the ethereum blockchain (this can be temporary or perpetual). During authentication, entities validate that users hold the private keys to their associated public keys using challenge and response. A user needs only to keep his private key and remember his username.
## How we built it
We designed and deployed a smart contract on the Ropsten Ethereum testnet to trustlessly manage permissions. Users submit transactions and read from this contract as a final authority for access control. An android app is used to showcase real-life challenge and response and how it can be used to validate privileges trustfully between devices. A web app is also developed to show the ease of setup for an individual user. AWS Lambda is used to query the blockchain through trusted apis, this may be adjusted by any user to their desired confidence level. A physical lock with an NFC reader was to be used to showcase privilege transfer, but the NFC reader was broken.
## Challenges we ran into
The NFC reader we used was broken so we were unable to demonstrate one potential application. Since Solidity (Ethereum EVM language) is relatively new there was not an abundance of documentation available when we ran into issues sending and validating transactions, although we eventually fixed these issues.
## Accomplishments that we're proud of
Trustless authentication on the blockchain, IoT integration, Ethereum transactions greatly simplified for users (they need not know how it works), and Login with username
## What we learned
We learned a lot about the quirks of Ethereum and developing around it. Solidity still has a long way to go regarding developer documentation. The latency of ethereum transactions, scalability of ethereum, and transaction fees on the network present limiting factors towards future adoption, though we have demonstrated that such a trustless authentication scheme using the blockchain is indeed secure and easy to use.
## What's next for Keychain
Use a different chain with faster transaction times and lower fees, or even rolling our own chain using optimized for keychain. More digital and IoT demos demonstrating ease of use. | winning |
## Inspiration
Whether there is a fire threatening the Notre Dame, wildfires invading California's forests, or a small city fire, emergency services have difficulty arriving at the location.
## What it does
When Phoenix is activated via the web app, the carbon dioxide sensor, or the heat sensor, it flies to the location and tracks the fire. At the same time, the data from the sensors and a live video is displayed on the web app so emergency services can monitor the fire. When Phoenix locates the fire, it sprays an extinguishing agent ('silly string' Pennapps's birthday theme :)) at the fire!
## How I built it
We created the web app with Python, JSON, and CSS, Google Cloud Vision and hacked the movement of the drone with the PS drone API. The silly string is fired using a micro servo controlled by an esp32!
## Challenges I ran into
-Since a laptop can only be connected to one wifi network at a time, we needed to find a wifi adapter
-integrate the parts of the project (ml, web app, drone movement, servo control)
## Accomplishments that I'm proud of
* We integrated all the parts of the project
## What I learned
-Python, PS Drone API, Google Cloud Vision detection
## What's next for Phoenix
We hope to be able to add more sensors, and use Phoenix for natural disaster relief too. | # RiskWatch
## Inspiration
## What it does
Our project allows users to report fire hazards with images to a central database. False images could be identified using machine learning (image classification). Also, we implemented methods for people to find fire stations near them. We additionally implemented a way for people to contact Law enforcement and fire departments for a speedy resolution. In return, the users get compensation from insurance companies. Idea is relevant because of large wildfires in California and other states.
## How we built it
We build the site from the ground up using ReactJS, HTML, CSS and JavaScript. We also created a MongoDB database to hold some location data and retrieve them in the website. Python was also used to connect the frontend to the database.
## Challenges we ran into
We initially wanted to create a physical hardware device using a Raspberry Pi 2 and a RaspiCamera. Our plan was to create a device that could utilize object recognition to classify general safety issues. We understood that performance would suffer greatly when going in, as we thought 1-2 FPS would be enough. After spending hours compiling OpenCV, Tensorflow and Protobuf on the Pi, it was worth it. It was surprising to achieve 2-3 FPS after object recognition using Google's SSDLiteNetv2Coco algorithm. But unfortunately, the Raspberry Pi camera would disconnect often and eventually fail due to a manufacturing defect. Another challenge we faced at the final hours was that our original domain choice was mistakenly marked available by the registry when it really was taken, but we eventually resolved it by talking to a customer support representative.
## Accomplishments that we're proud of
We are proud of being able to quickly get back on track after we had issues with our initial hardware idea and repurpose it to become a website. We were all relatively new to React and quickly transitioned from using Materialize.CSS at all the other hackathons we went to.
### Try it out (production)!
* clone the repository: `git clone https://github.com/dwang/RiskWatch.git`
* run `./run.sh`
### Try it out (development)!
* clone the repository: `git clone https://github.com/dwang/RiskWatch.git`
* `cd frontend` then `npm install`
* run `npm start` to run the app - the application should now open in your browser
* start the backend with `./run.sh`
## What we learned
Our group learned how to construct and manage databases with MongoDB, along with seamlessly integrating them into our website. We also learned how to make a website with React, making a chatbot, using image recognition and even more!
## What's next for us?
We would like to make it so that everyone uses our application to be kept safe - right now, it is missing a few important features, but once we add those, RiskWatch could be the next big thing in information consumption.
Check out our GitHub repository at: <https://github.com/dwang/RiskWatch> | ## Inspiration
In the next 10-15 years, the rise of autonomous machine to machine payments will become growingly obvious and lucrative industry to serve.
In seeing how soon large scale fleets of autonomous Uber cars will not only drive us, but begin negotiating and paying for their own parking with their fellow smart device such as parking meters, the necessity of a singular platform to manage all of these IoT devices and their related working capital is clear.
With the growth of novel primitives such as cryptocurrency, the ability to microtransact has never been easier, and will provide the substrate to the revolution of machine to machine economy. To tackle this massive opportunity we created *Horizon*, for managing IoT devices and their associated working capital spending.
## What it does
Horizon is to IoT working capital as Amazon Web Services is to computing resources. Horizon provides the ability to create and register hundreds of IoT devices to a single container with a single funding wallet to track, manage and analyze the spending of IoT devices.
IoT devices register with a unique ID, and cryptocurrency address and solicit funds from their governing wallet in order to make purchases from other machines.
## How We Built It
Horizon is a Rails Web App and API communicating with a Blockchain to move funds around to IoT devices
## Challenges Ran Into
Working with several high complexity technologies made this project difficult. We had trouble connecting to the blockchain and spent many hours attempting to work with Ruby and Ethereum
## What We Learned
Many insights about the nitty gritty of ethereum transactions.
## What's next for Horizon
Scalable containers, building out the blockchain platform, statistics and in depth analysis. | partial |
# Are You Taking
It's the anti-scheduling app. 'Are You Taking' is the no-nonsense way to figure out if you have class with your friends by comparing your course schedules with ease. No more screenshots, only good vibes!
## Inspiration
The fall semester is approaching... too quickly. And we don't want to have to be in class by ourselves.
Every year, we do the same routine of sending screenshots to our peers of what we're taking that term. It's tedious, and every time you change courses, you have to resend a picture. It also doesn't scale well to groups of people trying to find all of the different overlaps.
So, we built a fix. Introducing "Are You Taking" (AYT), an app that allows users to upload their calendars and find event overlap.
It works very similar to scheduling apps like when2meet, except with the goal of finding where there *is* conflict, instead of where there isn't.
## What it does
The flow goes as follows:
1. Users upload their calendar, and get a custom URL like `https://areyoutaking.tech/calendar/<uuidv4>`
2. They can then send that URL wherever it suits them most
3. Other users may then upload their own calendars
4. The link stays alive so users can go back to see who has class with who
## How we built it
We leveraged React on the front-end, along with Next, Sass, React-Big-Calendar and Bootstrap.
For the back-end, we used Python with Flask. We also used CockroachDB for storing events and handled deployment using Google Cloud Run (GCR) on GCP. We were able to create Dockerfiles for both our front-end and back-end separately and likewise deploy them each to a separate GCR instance.
## Challenges we ran into
There were two major challenges we faced in development.
The first was modelling relationships between the various entities involved in our application. From one-to-one, to one-to-many, to many-to-many, we had to write effective schemas to ensure we could render data efficiently.
The second was connecting our front-end code to our back-end code; we waited perhaps a bit too long to pair them together and really felt a time crunch as the deadline approached.
## Accomplishments that we're proud of
We managed to cover a lot of new ground!
* Being able to effectively render calendar events
* Being able to handle file uploads and store event data
* Deploying the application on GCP using GCR
* Capturing various relationships with database schemas and SQL
## What we learned
We used each of these technologies for the first time:
* Next
* CockroachDB
* Google Cloud Run
## What's next for Are You Taking (AYT)
There's a few major features we'd like to add!
* Support for direct Google Calendar links, Apple Calendar links, Outlook links
* Edit the calendar so you don't have to re-upload the file
* Integrations with common platforms: Messenger, Discord, email, Slack
* Simple passwords for calendars and users
* Render a 'generic week' as the calendar, instead of specific dates | ## Inspiration
We were inspired by all the people who go along their days thinking that no one can actually relate to what they are experiencing. The Covid-19 pandemic has taken a mental toll on many of us and has kept us feeling isolated. We wanted to make an easy to use web-app which keeps people connected and allows users to share their experiences with other users that can relate to them.
## What it does
Alone Together connects two matching people based on mental health issues they have in common. When you create an account you are prompted with a list of the general mental health categories that most fall under. Once your account is created you are sent to the home screen and entered into a pool of individuals looking for someone to talk to. When Alone Together has found someone with matching mental health issues you are connected to that person and forwarded to a chat room. In this chat room there is video-chat and text-chat. There is also an icebreaker question box that you can shuffle through to find a question to ask the person you are talking to.
## How we built it
Alone Together is built with React as frontend, a backend in Golang (using Gorilla for websockets), WebRTC for video and text chat, and Google Firebase for authentication and database. The video chat is built from scratch using WebRTC and signaling with the Golang backend.
## Challenges we ran into
This is our first remote Hackathon and it is also the first ever Hackathon for one of our teammates (Alex Stathis)! Working as a team virtually was definitely a challenge that we were ready to face. We had to communicate a lot more than we normally would to make sure that we stayed consistent with our work and that there was no overlap.
As for the technical challenges, we decided to use WebRTC for our video chat feature. The documentation for WebRTC was not the easiest to understand, since it is still relatively new and obscure. This also means that it is very hard to find resources on it. Despite all this, we were able to implement the video chat feature! It works, we just ran out of time to host it on a cloud server with SSL, meaning the video is not sent on localhost (no encryption). Google App Engine also doesn't allow websockets in standard mode, and also doesn't allow `go.mod` on `flex` mode, which was inconvenient and we didn't have time to rewrite parts of our webapp.
## Accomplishments that we're proud of
We are very proud for bringing our idea to life and working as a team to make this happen! WebRTC was not easy to implement, but hard work pays off.
## What we learned
We learned that whether we work virtually together or physically together we can create anything we want as long as we stay curious and collaborative!
## What's next for Alone Together
In the future, we would like to allow our users to add other users as friends. This would mean in addition of meeting new people with the same mental health issues as them, they could build stronger connections with people that they have already talked to.
We would also allow users to have the option to add moderation with AI. This would offer a more "supervised" experience to the user, meaning that if our AI detects any dangerous change of behavior we would provide the user with tools to help them or (with the authorization of the user) we would give the user's phone number to appropriate authorities to contact them. | # Pose-Bot
### Inspiration ⚡
**In these difficult times, where everyone is forced to work remotely and with the mode of schools and colleges going digital, students are
spending time on the screen than ever before, it not only affects student but also employees who have to sit for hours in front of the screen. Prolonged exposure to computer screen and sitting in a bad posture can cause severe health problems like postural dysfunction and affect one's eyes. Therefore, we present to you Pose-Bot**
### What it does 🤖
We created this application to help users maintain a good posture and save from early signs of postural imbalance and protect your vision, this application uses a
image classifier from teachable machines, which is a **Google API** to detect user's posture and notifies the user to correct their posture or move away
from the screen when they may not notice it. It notifies the user when he/she is sitting in a bad position or is too close to the screen.
We first trained the model on the Google API to detect good posture/bad posture and if the user is too close to the screen. Then integrated the model to our application.
We created a notification service so that the user can use any other site and simultaneously get notified if their posture is bad. We have also included **EchoAR models to educate** the children about the harms of sitting in a bad position and importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS and JavaScript.Tensorflow.js was used to detect pose and JavaScript API to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify user's pose, proximity to screen and if the user is holding a phone.
3. For training our model we used our own image as the train data and tested it in different settings.
4. This model is then used to classify the users video feed to assess their pose and detect if they are slouching or if they are too close too screen or are sitting in a generally a bad pose.
5. If the user sits in a bad posture for a few seconds then the bot sends a notificaiton to the user to correct their posture or move away from the screen.
### Challenges we ran into 🧠
* Creating a model with good acccuracy in a general setting.
* Reverse engineering the Teachable Machine's Web Plugin snippet to aggregate data and then display notification at certain time interval.
* Integrating the model into our website.
* Embedding EchoAR models to educate the children about the harms to sitting in a bad position and importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application, which can make a small difference in in our everyday health. We successfully made the applicaition display
system notifications which can be viewed across system even in different apps. We are proud that we could shape our idea into a functioning application which can be used by
any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API.
We also learned how to create a notification service for a application. And the best of all **playing with EchoAR models** to create a functionality which could
actually benefit student and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a chrome extension
So that the user can use the functionality on their web browser.
#### ➡ Improve the pose detection model.
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students more concentrate.
Include more functionality like screen time, and detecting if the user is holding their phone, so we can help users to concentrate.
### Help File 💻
* Clone the repository to your local directory
* `git clone https://github.com/cryptus-neoxys/posture.git`
* `npm i -g live-server`
* Install live server to run it locally
* `live-server .`
* Go to project directory and launch the website using live-server
* Voilla the site is up and running on your PC.
* Ctrl + C to stop the live-server!!
### Built With ⚙
* HTML
* CSS
* Javascript
+ Tensorflow.js
+ Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys) | partial |
## Inspiration
Although each of us came from different backgrounds, we each share similar experiences/challenges during our high school years: it was extremely hard to visualize difficult concepts, much less understand the the various complex interactions. This was most prominent in chemistry, where 3D molecular models were simply nonexistent, and 2D visualizations only served to increase confusion. Sometimes, teachers would use a combination of Styrofoam balls, toothpicks and pens to attempt to demonstrate, yet despite their efforts, there was very little effect. Thus, we decided to make an application which facilitates student comprehension by allowing them to take a picture of troubling text/images and get an interactive 3D augmented reality model.
## What it does
The app is split between two interfaces: one for text visualization, and another for diagram visualization. The app is currently functional solely with Chemistry, but can easily be expanded to other subjects as well.
If the text visualization is chosen, an in-built camera pops up and allows the user to take a picture of the body of text. We used Google's ML-Kit to parse the text on the image into a string, and ran a NLP algorithm (Rapid Automatic Keyword Extraction) to generate a comprehensive flashcard list. Users can click on each flashcard to see an interactive 3D model of the element, zooming and rotating it so it can be seen from every angle. If more information is desired, a Wikipedia tab can be pulled up by swiping upwards.
If diagram visualization is chosen, the camera remains perpetually on for the user to focus on a specific diagram. An augmented reality model will float above the corresponding diagrams, which can be clicked on for further enlargement and interaction.
## How we built it
Android Studio, Unity, Blender, Google ML-Kit
## Challenges we ran into
Developing and integrating 3D Models into the corresponding environments.
Merging the Unity and Android Studio mobile applications into a single cohesive interface.
## What's next for Stud\_Vision
The next step of our mobile application is increasing the database of 3D Models to include a wider variety of keywords. We also aim to be able to integrate with other core scholastic subjects, such as History and Math. | ## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out. | ## Inspiration
Since the advent of cloud based computing, personal computers have started to become less and less powerful, to the point where they have started to become little more than web viewers. While this has lowered costs, and consequently more people have access to computers, people have less freedom to run the programs that they want to and are limited to using applications that large companies, who are usually very disconnected from their users, decide they can run.
This is where we come in.
## What it does
Our project allows people to to connect to a wifi network, but instead of getting access to just the internet, they also get access to a portal where they can run code on powerful computers
For example, a student can come to campus and connect to the network, and they instantly have a way to run their projects, or train their neural networks with much more power than their laptop can provide.
## How we built it
We used used Django and JavaScript for the the interface that the end user accesses. We used python and lots of bash scripts to get stuff working on our servers, on both the low cost raspberry pis, and the remote computer that does most of the procesing.
## Challenges we ran into
We had trouble sand boxing code and setting limits on how much compute time one person has access to. We also had issues with lossy compression.
## Accomplishments that we're proud of
Establishing asynchronous connections between 3 or more different computers at once.
Managed to gain access to our server after I disabled passwords but forgot to copy over my ssh keys.
## What we learned
How to not mess up permissions, how to manage our very limited time even though we're burn't out.
## What's next for Untitled Compute Power Sharing Thing
We intend to fix a few small security issues and add support for more programming languages. | winning |
## Inspiration
Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese Aravind is sad. We want to solve this problem for all the Aravinds in the world -- not just for Chinese though, for any language!
## What it does
TranslatAR allows you to see English (or any other language of your choice) subtitles when you speak to other people speaking a foreign language. This is an augmented reality app which means the subtitles will appear floating in front of you!
## How we built it
We used Microsoft Cognitive Services's Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes.
## Challenges we ran into
One of the biggest challenges we faced was trying to add the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source.
## Accomplishments that we're proud of
Our biggest achievement is definitely multi-threading the app to be able to translate a lot of different languages at the same time using different endpoints. This makes real-time multi-lingual conversations possible!
## What we learned
We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system that works very well from scratch using OpenCV libraries and Python Imaging Library.
## What's next for TranslatAR
We want to launch this App in the AppStore so people can replicate VR/AR on their own phones with nothing more than just an App and an internet connection. It also helps a lot of people whose relatives/friends speak other languages. | ## Inspiration
As victims, bystanders and perpetrators of cyberbullying, we felt it was necessary to focus our efforts this weekend on combating an issue that impacts 1 in 5 Canadian teens. As technology continues to advance, children are being exposed to vulgarities online at a much younger age than before.
## What it does
**Prof**(ani)**ty** searches through any webpage a child may access, censors black-listed words and replaces them with an appropriate emoji. This easy to install chrome extension is accessible for all institutional settings or even applicable home devices.
## How we built it
We built a Google chrome extension using JavaScript (JQuery), HTML, and CSS. We also used regular expressions to detect and replace profanities on webpages. The UI was developed with Sketch.
## Challenges we ran into
Every member of our team was a first-time hacker, with little web development experience. We learned how to use JavaScript and Sketch on the fly. We’re incredibly grateful for the mentors who supported us and guided us while we developed these new skills (shout out to Kush from Hootsuite)!
## Accomplishments that we're proud of
Learning how to make beautiful webpages.
Parsing specific keywords from HTML elements.
Learning how to use JavaScript, HTML, CSS and Sketch for the first time.
## What we learned
The manifest.json file is not to be messed with.
## What's next for PROFTY
Expand the size of our black-list.
Increase robustness so it parses pop-up messages as well, such as live-stream comments. | ## Inspiration
It is difficult for university students to find the time and money to go to the gym. Although some YouTube videos teach exercises that can be done at home without weights, it's not always easy to self-correct without a gym buddy.
## What it does
When a user works out at home, they can place their laptop camera and display at the front of their space. They carry an Arduino microcontroller in their pocket and tape a haptic motor to their wrist or side. They select from a list of exercises--so far we have implemented tricep pushups and squats--and computer vision is used to detect form errors. The haptic motor alerts the user to form errors, so they know to look at the screen for feedback.
These are the implemented feedback items:
TRICEP PUSHUPS:
* Move wrists closer together or farther apart such that they are under the shoulders
* Keep elbows tucked in through the pushup
SQUATS:
* Go lower
* Keep knees directly above ankles, not too far forward
* Sit more upright with a straight back
## How we built it
We used a pretrained implementation of CMU Posenet in Tensorflow ([link](https://github.com/ildoonet/tf-pose-estimation?fbclid=IwAR1CBbW9_A3_vrwbKDmAiZJ3tQ3owjEk9NFHZ8ufRfA_QhDfOSYK-p1SYaA)) for pose estimation. We analyzed coordinates of joints in the image using our own Python functions based on expert knowledge of workout form.
The vision processing feedback outputs from the laptop are interfaced to an Arduino Uno over Bluetooth connection, and the Uno controls a Grove haptic motor.
## Challenges I ran into
* Diagnosing physical hardware problems: We spent a lot of time debugging a Raspberry Pi with a faulty SD card. We learned that it's important to debug from the hardware level up.
* Finding usable TensorFlow models that fit well to our mission. We got a lot better at filtering usable sources and setting up command line environments.
* Creating a durable and wearable design of the fitness buddy. We experienced issues with haptic motor connector wires breaking as we exercised. We learned the importance of component research in planning physical designs.
## Accomplishments that I'm proud of
* Integrating Python and Arduino using a Bluetooth module to achieve haptic feedback.
* Labelling joints and poses for analysis through appropriate machine learning models.
* Adding analysis to machine learning outputs to make them useful in a real life context.
* Learning to use different languages and products (including Raspberry Pi) to perform specific technical tasks.
## What I learned
* How to use many different hardware products and techniques, including a bluetooth module, haptic motors and controllers, and a Raspberry Pi (which we did not use in our final design). We also improved our Arduino and circuit skills.
* The efficiency and output derivation of many different machine learning models.
* The importance of prototyping physical systems that people will interact with and could break.
* A greater sense of focus towards better wellbeing of individual people through exercise.
## What's next for Fitness Buddy: Haptic Feedback on Exercise Form:
* Incorporate and add software for a variety of different exercises.
* Migrate to Raspberry Pi for a more portable experience.
* Integrate with Google Home for more seamless IoT ("Ok Google, start my pushup routine!").
* Add goal setting and facial recognition for different household users with different goals. | winning |
## Inspiration
Old school bosses don't want want to see you slacking off and always expect you to be all movie hacker in the terminal 24/7. As professional slackers, we also need our fair share of coffee and snacks. We initially wanted to create a terminal app to order Starbucks and deliver it to the E7 front desk. Then bribe a volunteer to bring it up using directions from Mappedin. It turned out that it's quite hard to reverse engineer Starbucks. Thus, we tried UberEats, which was even worse. After exploring bubble tea, cafes, and even Lazeez, we decided to order pizza instead. Because if we're suffering, might as well suffer in a food coma.
## What it does
Skip the Walk brings food right to your table with the help of volunteers. In exchange for not taking a single step, volunteers are paid in what we like to call bribes. These can be the swag hackers received, food, money,
## How we built it
We used commander.js to create the command-line interface, Next.js to run MappedIn, and Vercel to host our API endpoints and frontend. We integrated a few Slack APIs to create the Slack bot. To actually order the pizzas, we employed Terraform.
## Challenges we ran into
Our initial idea was to order coffee through a command line, but we soon realized there weren’t suitable APIs for that. When we tried manually sending POST requests to Starbucks’ website, we ran into reCaptcha issues. After examining many companies’ websites and nearly ordering three pizzas from Domino’s by accident, we found ourselves back at square one—three times. By the time we settled on our final project, we had only nine hours left.
## Accomplishments that we're proud of
Despite these challenges, we’re proud that we managed to get a proof of concept up and running with a CLI, backend API, frontend map, and a Slack bot in less than nine hours. This achievement highlights our ability to adapt quickly and work efficiently under pressure.
## What we learned
Through this experience, we learned that planning is crucial, especially when working within the tight timeframe of a hackathon. Flexibility and quick decision-making are essential when initial plans don’t work out, and being able to pivot effectively can make all the difference.
## Terraform
We used Terraform this weekend for ordering Domino's. We had many close calls and actually did accidentally order once, but luckily we got that cancelled. We created a Node.JS app that we created Terraform files for to run. We also used Terraform to order Domino's using template .tf files. Finally, we used TF to deploy our map on Render. We always thought it funny to use infrastructure as code to do something other than pure infrastructure. Gotta eat too!
## Mappedin
Mappedin was an impressive tool to work with. Its documentation was clear and easy to follow, and the product itself was highly polished. We leveraged its room labeling and pathfinding capabilities to help volunteers efficiently deliver pizzas to hungry hackers with accuracy and ease.
## What's next for Skip the Walk
We plan to enhance the CLI features by adding options such as reordering, randomizing orders, and providing tips for volunteers. These improvements aim to enrich the user experience and make the platform more engaging for both hackers and volunteers. | ## Inspiration
Many people on our campus use an app called When2Meet to schedule meetings, but its UI is terrible, its features are limited, and overall we thought it could be done better. We brainstormed what would make When2Meet better and decided the biggest things would be a simple new UI as well as a proper account system to see all the meetings you have.
## What it does
Let's Meet is an app that allows people to schedule meetings effortlessly. "Make an account and make scheduling a breeze." A user can create a meeting and share it with others. Then everyone with access can choose which times work best for them.
## How we built it
We used a lot of Terraform! We really wanted a serverless microservice architecture, so we chose to deploy on AWS. Since we were already using Lambdas for the backend, it made sense to add Amplify for the frontend, Cognito for logging in, and DynamoDB for data storage. We wrote over 900 lines of Terraform to get our Lambdas deployed, API Gateway properly configured, permissions set correctly, and everything else we use in AWS wired up. Outside of AWS, we utilized React with Ant Design components. Our Lambdas ran on Python 3.12.
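To give a rough idea of what one of those Python 3.12 Lambdas looks like, here is a minimal sketch of a create-meeting handler behind API Gateway's proxy integration, writing to DynamoDB with boto3. The table name, environment variable, and item fields are illustrative placeholders rather than our exact schema:

```python
import json
import os
import uuid

import boto3

# Hypothetical table name; ours is provisioned and named through Terraform.
TABLE_NAME = os.environ.get("MEETINGS_TABLE", "meetings")
table = boto3.resource("dynamodb").Table(TABLE_NAME)


def lambda_handler(event, context):
    """Create a meeting from an API Gateway proxy request and store it in DynamoDB."""
    body = json.loads(event.get("body") or "{}")
    meeting = {
        "meetingId": str(uuid.uuid4()),
        "title": body.get("title", "Untitled meeting"),
        "slots": body.get("slots", []),  # e.g. ["2024-02-03T10:00", ...]
    }
    table.put_item(Item=meeting)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"meetingId": meeting["meetingId"]}),
    }
```

Keeping each handler this small and single-purpose is what made wiring everything together through 900 lines of Terraform manageable.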
## Challenges we ran into
The biggest challenge we ran into was a bug with AWS. For roughly five hours we fought intermittent 403 responses. Initially we had an authorizer on the API Gateway, but after a short time we removed it. We confirmed it was deleted by searching for it with the CLI, and we double-checked in the web console because we suspected the authorizer, but it wasn't there either. In the end, everything around the API Gateway had to be manually deleted and rebuilt. Thanks to Terraform, restoring it all was relatively easy.
Another challenge was using Terraform and AWS themselves. We had almost no knowledge of either going in, and coming out we know there is so much more to learn, but with these skills we feel confident we can set up anything in AWS.
## Accomplishments that we're proud of
We are so proud of our deployment and cloud architecture. We think that having built a cloud project of this scale in this time frame is no small feat. Even with some challenges, our determination to complete the project helped us get through. We are also proud of our UI as we continue to strengthen our design skills.
## What we learned
We learned that implementing Terraform can sometimes be difficult depending on the scope and complexity of the task. This was our first time using a component library for frontend development and we now know how to design, connect, and build an app from start to finish.
## What's next for Let's Meet
We would add more features such as syncing the meetings to a Google Calendar. More customizations and features such as location would also be added so that users can communicate where to meet through the web app itself. | ## Inspiration
With the whole world being forced online, we wanted to help people connect by creating a simple, easy-to-use PC-building website.
Many PC-building websites tend to use terms that beginners cannot understand, while also either making the user select all of their parts themselves or generating a single random build based on very few factors.
## What it does
Our Auto PC builder automatically compiles a list of viable components in real time based on a user's budget. This is more than just a simple hardcoded program: our program uses an API that connects to a sophisticated scraper, which gathers data on almost every component released since 2005 from PCPartPicker, and from as many regions as the user desires. We completed an overhaul of the web scraping system the API accessed in order to get more relevant data for our project.
Using this, we can get the live data of thousands of components at once, and decide which component best suits the user's needs!
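To give a feel for the budget logic on its own (independent of the scraper), here is a simplified sketch of how a build can be assembled from per-category price data. The catalog, prices, and budget split below are made-up placeholders; the real app fills these from the live data pulled through the API:

```python
from dataclasses import dataclass


@dataclass
class Part:
    name: str
    price: float


# Placeholder catalog; the real app fills this from the scraper API's live data.
CATALOG = {
    "cpu": [Part("Ryzen 5 5600", 129.99), Part("Core i7-13700K", 389.99)],
    "gpu": [Part("RTX 4060", 299.99), Part("RX 7800 XT", 499.99)],
    "ram": [Part("16GB DDR4-3200", 39.99), Part("32GB DDR5-6000", 109.99)],
}

# Rough share of the total budget each category is allowed to consume (made up).
BUDGET_SPLIT = {"cpu": 0.35, "gpu": 0.45, "ram": 0.20}


def build_for_budget(budget: float) -> dict[str, Part]:
    """Pick the best (priciest) part in each category that still fits its slice."""
    build = {}
    for category, parts in CATALOG.items():
        cap = budget * BUDGET_SPLIT[category]
        affordable = [p for p in parts if p.price <= cap]
        if affordable:
            build[category] = max(affordable, key=lambda p: p.price)
    return build


if __name__ == "__main__":
    for category, part in build_for_budget(800).items():
        print(f"{category}: {part.name} (${part.price})")
```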
## How we built it
All of the backend was done using Python 3.10 and a vastly modified and improved version of <https://github.com/JonathanVusich/pcpartpicker>.
(See Imports At Bottom)
The project prototype was first drafted on Figma and then coded using HTML and CSS to create a basic draft website.
## Challenges we ran into
Amy: This was my first time working with Figma which was quite challenging in itself. One of the main challenges I ran into while using Figma was trying to make animations and interactions between specific objects for prototyping.
David: As a relatively new coder, I found it very challenging to learn multiprocessing, caching, lxml/scraping, selenium/chromedriver, and how to navigate around the PCPartPicker DDoS protection, all for the first time. On top of this, the code I based my scraper on was written for a very outdated Python version, meaning much of it needed to be rewritten. A demon that I ran into was the unfortunate case where my chromedriver package, which enabled the Cloudflare bypass, itself used multiprocessing, which meant that I couldn't scrape with more than one instance at once: daemonic processes cannot spawn children. So I had to find a way to implement a pool that was non-daemonic (see the sketch at the end of this section). A little scuffed, but I eventually got it working.
Mercy: Worked on the front end, wanted to use JS to manipulate JSON files and put them into a database.
Jagrit: Dealing with the way the API interfaced with the dataset was difficult, as it took a lot of in-depth understanding of an API that we had never worked with before. We spent a whole day modifying the API and successfully implemented it.
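For reference, the non-daemonic pool workaround David mentions above is roughly the following. This is a minimal sketch of the well-known pattern (not our exact scraper code): a `Pool` whose workers always report `daemon=False` and are therefore allowed to spawn their own child processes. The `scrape_region` function is just a stand-in for the real scraping job:

```python
import multiprocessing
import multiprocessing.pool


class NoDaemonProcess(multiprocessing.Process):
    """A worker process that always reports daemon=False, so it may spawn children."""

    @property
    def daemon(self):
        return False

    @daemon.setter
    def daemon(self, value):
        pass  # silently ignore attempts to mark the worker as daemonic


class NoDaemonContext(type(multiprocessing.get_context())):
    Process = NoDaemonProcess


class NestablePool(multiprocessing.pool.Pool):
    """A Pool whose workers are non-daemonic and can therefore start subprocesses."""

    def __init__(self, *args, **kwargs):
        kwargs["context"] = NoDaemonContext()
        super().__init__(*args, **kwargs)


def scrape_region(region: str) -> str:
    # Stand-in for the real scraping job, which itself launches a child process
    # (e.g. a chromedriver helper), which is exactly what a daemonic worker cannot do.
    child = multiprocessing.Process(target=print, args=(f"scraping {region}",))
    child.start()
    child.join()
    return region


if __name__ == "__main__":
    with NestablePool(processes=4) as pool:
        print(pool.map(scrape_region, ["us", "ca", "uk", "au"]))
```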
## Accomplishments that we're proud of
David: This was my first time writing up a shoddy database and creating such a large scraping program. I'm proud that it functions as intended. Definitely had a huge boost to my knowledge in the last 48 hours!
Mercy: Coded a nice front end on a time crunch.
## What we learned
Amy: I learnt much about creating a project prototype on Figma during the hackathon. I also helped Mercy in the front end portion of creating our website which challenged me to learn HTML and CSS.
David: I ended up learning a lot about Python multiprocessing, as not using it made the scraping take too long (over 30 minutes)! Some minor accomplishments were learning how to use selenium and lxml scraping while also practicing good coding. A regret was that I did not comment my code as I was too busy writing it. (But it's ok, no one will see it anyways.)
Mercy: Always good to practice web dev, always something new to learn, had to adapt due to time crunch but ended up okay.
Jagrit: One issue in the backend was how slow the web scraping was. To speed it up, we had to use multiprocessing; however, the standard Pool class caused problems, so we created a whole new class that mimicked Pool functionality.
## What's next for QHACKS2022
If we had more time to work on the project during the hackathon, we would definitely want to finish turning the prototype into a full website.
Another possible addition is to make a user database so that users can create an account and save their PC builds. It would also be interesting to make a community on the website for people to share their builds and also give helpful tips and tricks. | partial |
Discord:
* spicedGG#1470
* jeremy#1472
![patientport logo](https://i.imgur.com/qWsX4Yw.png)
## 💡 Inspiration
As healthcare is continuing to be more interconnected and advanced, patients and healthcare resources will always have to worry about data breaches and the misuses of private information. While healthcare facilities move their databases to third-party providers (Amazon, Google, Microsoft), patients become further distanced from accessing their own medical record history, and the complete infrastructure of healthcare networks are significantly at risk and threatened by malicious actors. Even a single damaging attack on a centralized storage solution can end up revealing much sensitive and revealing data.
To combat this risk, we created Patientport as a decentralized and secure solution for patients to easily view the requests for their medical records and take action on them.
## 💻 What it does
Patientport is a decentralized, secure, and open medical record solution. It is built on the Ethereum blockchain and securely stores all of your medical record requests, responses, and exchanges through smart contracts. Your medical data is encrypted and stored on the blockchain.
By accessing the powerful web application online through <patientport.tech>, the patient can gain access to all these features.
First, on the website, the patient authenticates to the blockchain via MetaMask, and provides the contract address that was provided to them from their primary care provider.
Once they complete these two steps, a user has the ability to view all requests made about their medical record by viewing their “patientport” smart contract that is stored on the blockchain.
For demo purposes, the instance of the Ethereum blockchain that the application connects to is hosted locally.
However, anyone can compile and deploy the smart contracts on the Ethereum mainnet and connect to our web app.
## ⚙️ How we built it
| | |
| --- | --- |
| **Application** | **Purpose** |
| React, React Router, Chakra UI
| Front-end web application
|
| Ethers, Solidity, MetaMask
| Blockchain, Smart contracts
|
| Netlify
| Hosting
|
| Figma, undraw.co
| Design
|
## 🧠 Challenges we ran into
* Implementation of blockchain and smart contracts was very difficult, especially since the web3.js API was incompatible with the latest version of react, so we had to switch to a new, unfamiliar library, ethers.
* We ran into many bugs and unfamiliar behavior when coding the smart contracts with Solidity due to our lack of experience with it.
* Despite our goals and aspirations for the project, we had to settle to build a viable product quickly within the timeframe.
## 🏅 Accomplishments that we're proud of
* Implementing a working and functioning prototype of our idea
* Designing and developing a minimalist and clean user interface through a new UI library and reusable components with a integrated design
* Working closely with Solidity and MetaMask to make an application that interfaces directly with the Ethereum blockchain
* Creating and deploying smart contracts that communicate with each other and store patient data securely
## 📖 What we learned
* How to work with the blockchain and smart contracts to make decentralized transactions that can accurately record and encrypt/decrypt transactions
* How to work together and collaborate with developers in a remote environment via Github
* How to use React to develop a fully-featured web application that users can access and interact with
## 🚀 What's next for patientport
* Implementing more features, data, and information into patientport via a more robust smart contract and blockchain connections
* Developing a solution for medical professionals to handle their patients’ data with patientport through a simplified interface of the blockchain wallet | ## Inspiration
The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class and being inspired by the 'in-game advertising' episode on Silicon Valley.
## What it does
InstaPresent is a tool that uses your computer's microphone to generate a presentation in real-time. It can retrieve images and graphs and summarize your words into bullet points.
## How we built it
We used Google's Text To Speech API to process audio from the laptop's microphone. The Text To Speech is captured when the user speaks and when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed.
## Challenges We ran into
Summarizing text into bullet points was a particularly difficult challenge as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent, but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project.
## Accomplishments that we're proud of
We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements.
## What's next for InstaPresent
In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering. | ## Inspiration
COVID-19 has drastically transformed education from in-person to online. While being more accessible, e-learning imposes challenges in terms of attention for both educators and students. Attention is key to any learning experience, and it could normally be assessed approximately by the instructor from the physical feedback of students. However, it is not feasible for instructors to assess the attention levels of students in a remote environment. Therefore, we aim to build a web app that could assess attention based on eye-tracking, body-gesture, and facial expression using the Microsoft Azure Face API.
## What it does
C.L.A.A.S takes the video recordings of students watching lectures (with explicit consent and ethics approval) and process them using Microsoft Azure Face API. Three features including eye-tracking, body posture, and facial expression with sub-metrics will be extracted from the output of the API and analyzed to determine the attention level of the student during specific periods of time. An attention average score will be assigned to each learner at different time intervals based on the evaluation of these three features, and the class attention average score will be calculated and displayed across time on our web app. The results would better inform instructors on sections of the lecture that gain attraction and lose attention in order for more innovative and engaging curriculum design.
## How we built it
1. The front end of the web app is developed using Python and the Microsoft Azure Face API. Video streaming decomposes the video into individual frames from which key features are extracted using the Microsoft Azure Face API.
2. The back end of the web app is also written with Python. With literature review, we created an algorithm which assesses attention based on three metrics (blink frequency, head position, leaning) from two of the above-mentioned features (eye-tracking and body gesture). Finally, we output the attention scores averaged across all students with respect to time on our web app.
## Challenges we ran into
1. Lack of online datasets and limitation on time prevents us from collecting our own data or using machine learning models to classify attention.
2. Insufficient literature to provide quantitative measure for the criteria of each metric.
3. Decomposing a video into frames of image on a web app.
4. Lag during data collection.
## Accomplishments that we're proud of
1. Relevance of the project for education
2. Successfully extracting features from video data using the Microsoft Azure Face API
3. Web design
## What we learned
1. Utilizing the Face API to obtain different facial data
2. Computer vision features that could be used to classify attention
## What's next for C.L.A.A.S.
1. Machine learning model after collection of accurate and labelled baseline data from a larger sample size.
2. Address the subjectiveness of the classification algorithm by considering more scenarios and doing more lit review
3. Test the validity of the algorithm with more students
4. Improve web design, functionalities
5. Address limitations of the program from UX standpoint, such as lower resolution camera, position of their webcam relative to their face | winning |
## Inspiration
The inspiration behind our machine learning app that can diagnose blood diseases runs deep within us. Both of us, as teammates, have been touched by the impact of blood diseases within our own families. Witnessing our loved ones facing the challenges posed by these conditions ignited a passion to create a tool that could potentially alleviate the suffering of others. Our personal connections served as a powerful driving force, propelling us to combine our technical expertise with our heartfelt motivations. Through this app, we aim to provide timely and accurate diagnoses, ultimately contributing to better healthcare outcomes for those at risk and underscoring the importance of empathy-driven innovation.
## What it does
The web application prompts the user to upload an image of their blood cells. The application will then utilize machine learning to identify possible diseases and inform the user of their diagnosis. The possible diagnoses include sickle cell disease, thalassemia, and leukemia. If there were no distinguishable features of a listed disease, the application will inform the user that their blood cells are healthy. It also includes a very brief explanation of the diseases and their symptoms.
## How we built it
First, we used fast.ai libraries to create a machine-learning model built within Kaggle. This first step uses ResNet-18 as a neural network to train our specified model. ResNet-18 is a convolutional neural network that holds millions of reference photos in thousands of different categories and works to identify objects. Next, we trained our specific model using 8000 images of the diseases and healthy blood cells with various conditions and edge cases. This model was then implemented into a second file that uses our pre-trained model to apply to our web application. To create the web application, we used Gradio to locally host a website that could apply the machine learning model. We then refined the UI and added text to guide the user through the process.
## Challenges we ran into
One of the biggest challenges that we ran into was learning how to implement our model into Gradio. Having never applied a model to Gradio in the past, we were tasked with learning the development process and application of Gradio. Eventually, we were able to overcome the difficulties by lots of trial and error and various video tutorials regarding model application and the syntax of Gradio in python.
## Accomplishments that we're proud of
We are extremely proud of the accuracy rate yielded by our model and the intuitive nature of the web application. Having yielded an approximate 95% accuracy from our test trial images, we are thrilled with the high rate of correctness that our machine learning app has achieved. The app's user-friendly interface is designed with accessibility in mind, ensuring that individuals, medical professionals, and caregivers can navigate it with ease. Seeing our project come to fruition has deepened our conviction in the potential of technology to bridge gaps in healthcare, and it reinforces our commitment to applying our skills to causes that hold personal significance.
## What we learned
As a whole, we expanded our knowledge regarding maching learning models and web application development as a whole. Having gone through the process of creating a functioning application required the use of new concepts such as using Gradio and increasing error rate percentage rates within the model.
## What's next for Blood Cell Disease Identifier using Machine Learning
We hope to expand the web application in the future. While easy and simple to use as of now, we hope to add more to the app to increase accuracy, information retention, and advice. One of the biggest improvements to be made is increasing the variety of blood diseases detectable by our model. With more time available after this event, we will be able to advance our front regarding the number of detectable diseases. | ## Inspiration
The inspiration for this app arose from two key insights about medical education.
1. Medicine is inherently interdisciplinary. For example, in fields like dermatology, pattern recognition plays a vital role in diagnosis. Previous studies have shown that incorporating techniques from other fields, such as art analysis, can enhance these skills, highlighting the benefits of cross-disciplinary approaches. Additionally, with the rapid advancement of AI, which has its roots in pattern recognition, there is a tremendous opportunity to revolutionize medical training.
2. Second, traditional methods like textbooks and static images often lack the interactivity and personalized feedback needed to develop diagnostic skills effectively. Current education emphasizes the knowledge of various diagnostic features, but not the ability to recognize such features. This app was designed to address these gaps, creating a dynamic, tech-driven solution to better prepare medical students for the complexities of real-world practice.
## What it does
This app provides an interactive learning platform for medical students, focusing on dermatological diagnosis. It presents users with real-world images of skin conditions and challenges them to make a diagnosis. After each attempt, the app delivers personalized feedback, explaining the reasoning behind the correct answer, whether the diagnosis was accurate or not. By emphasizing pattern recognition and critical thinking, in concert with a comprehensive dataset of over 400,000 images, the app helps students refine their diagnostic skills in a hands-on manner. With its ability to adapt to individual performance, the app ensures a tailored learning experience, making it an effective tool for bridging the gap between theoretical knowledge and clinical application.
## How we built it
To build the app, we utilized a variety of tools and technologies across both the frontend and backend. On the frontend, we implemented React with TypeScript and styled the interface using TailwindCSS. To track user progress in real time, we integrated React’s Rechart library, allowing us to display interactive statistical visualizations. Axios was employed to handle requests and responses between the frontend and backend, ensuring smooth communication. On the backend, we used Python with Pandas, Scikit-Learn, and Numpy to create a machine learning model capable of identifying key factors for diagnosis. Additionally, we integrated OpenAI’s API with Flask to generate large language model (LLM) responses from user input, making the app highly interactive and responsive.
## Challenges we ran into
One of the primary challenges we encountered was integrating OpenAI’s API to deliver real-time feedback to users, which was critical for enhancing the app's personalized learning experience. Navigating the complexities of API communication and ensuring seamless functionality required significant troubleshooting. Additionally, learning how to use Flask to connect the frontend and backend posed another challenge, as some team members were unfamiliar with this framework. This required us to invest time in researching and experimenting with different approaches to ensure proper integration and communication between the app's components.
## Accomplishments that we're proud of
We are particularly proud of successfully completing our first hackathon, where we built this app from concept to execution. Despite being new to many of the technologies involved, we developed a full-stack application, learning the theory and implementation of tools like Flask and OpenAI's API along the way. Another accomplishment was our ability to work as a cohesive team, bringing together members from diverse, interdisciplinary backgrounds, both in general interests and in past CS experiences. This collaborative effort allowed us to combine different skill sets and perspectives to create a functional and innovative app that addresses key gaps in medical education.
## What we learned
Throughout the development of this app, we learned the importance of interdisciplinary collaboration. By combining medical knowledge, AI, and software development, we were able to create a more effective and engaging tool than any one field could produce alone. We also gained a deeper understanding of the technical challenges that come with working on large datasets and implementing adaptive feedback systems.
## What's next for DermaDrill
Looking ahead, there are many areas our app can expand into. With AI identifying the reasoning behind a certain diagnosis, we can explore the potential for diagnostic assistance, where AI can identify areas that may be abnormal to ultimately support clinical decision-making, giving physicians another tool. Furthermore, in other fields that are based on image-based diagnosis, such as radiology or pathology, we can apply a similar identification and feedback system. Future applications of such an app can enhance clinical diagnostic abilities while acknowledging the complexities of real world practice. | ## Inspiration
Growing up, many of us dreamed of exploring the world of music. However, we soon discovered that pursuing this passion often requires significant financial investment in production, mixing, and additional instruments. 🎛️ Well-meaning adults frequently warned us about the instability and security concerns associated with a music career. The reality is that music can be challenging, and we want to change that narrative.
There are barriers to entry at every turn:
* 87.6% of all artists remain undiscovered.
* Out of 1.3 million registered artists on Chartmetric over the past year, 710,000 have yet to achieve any career milestones.
* Only 11% of independent artists can sustain a living through music.
Even signing with a record label doesn’t guarantee success:
* Labels typically take 50-90% of the revenue generated by an artist, often with fine print that includes hidden distributor fees and overhead costs.
* Remarkably, even with a major label, only 1 in 2,149 artists achieves commercial success.
With soundchain, we aim to empower aspiring artists to pursue their musical dreams independently while connecting them with a supportive community that believes in their potential. This platform is also perfect for hobbyists who have a polished piece sitting in their files, just waiting for the right moment to be released. Whether you’re looking to build a career in music or simply share your passion, we’re here to help you take that next step. 🔊
## What it does
soundchain is a decentralized platform that connects aspiring artists with music enthusiasts, enabling fans to discover and support emerging talent through unbiased music playback and NFT purchases in exchange for rights. Artists and "angel music investors" are matched through our music-forward playback system.
It's actually quite simple:
1. Artists upload snippets of their creative process (verses, beats, melody lines, anything in between). They set SUI-based crowdfunding goals to produce their song, and issuance terms for partners.
2. Potential supporters browse uploaded projects. They can view goals and decide to support projects based on the provided snippets. They buy-in to the potential of the project.
3. The Sui blockchain records the smart contract and creates an enforceable legal parameter for investors.
4. If the project takes off, initial investors can now resell their rights on secondary web3 markets. 🚀
## How we built it
We utilized the Sui blockchain for its scalability and performance, implementing smart contracts to handle transactions. The application is integrated/connected with google-auth and the Enoki developer portal. The platform was developed with a user-friendly interface mocked on Figma to ensure seamless interactions between artists and listeners. Our front-end is made using Next.js, typescript, and assets from libraries such as material UI.
## Challenges we ran into
This project was our team's first time ever working with any web3 products. We went from not understanding what blockchain even was to maybe understanding a little bit of how blockchain works. We faced challenges in the integration of blockchain elements into our front-end application since we weren't familiar with the technology. Additionally, navigating copyright issues and establishing fair revenue-sharing models for artists required careful consideration.
## Accomplishments that we're proud of
We think that we came up with a bangin' 🎸 idea that can really change the way funding in the music industry works. We also ensured that this product was realistic and viable for integration in the current state of the music industry.
## What we learned
SO MUCH !!!! All new ways of designing, methodologies of ideation, new technologies, and first-time experiences. We also learned the importance of community engagement in building a platform that truly serves its users. Understanding the pain points and publication process from both artists and listeners helped us refine our approach and identify key features that enhance the overall experience.
## What's next for soundchain
* Mint NFTs that provide a visual badge/proof of support for supporters.
* Implement tags for different types of projects including Songwriting, Lyrics, Rap, and Beats.
* Integrate tracking systems to measure expected revenue shares through traditional music streaming applications, allowing the smart contract to take a more active role in revenue sharing. | losing |
## Inspiration
At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste. Waste of food contributes to the acceleration of global warming as more produce is needed to maintain the same levels of demand. In a startling report from the Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-n-Out, that goes to around 400 burgers. At reFresh, we believe that this level of waste is unacceptable in our modern society, imagine every family in America throwing away 400 perfectly fine burgers. Therefore we hope that our product can help reduce food waste and help the environment.
## What It Does
reFresh offers users the ability to input ingredients they have lying around and to find the corresponding recipes that use those ingredients making sure nothing goes to waste! Then, from the ingredients left over of a recipe that we suggested to you, more recipes utilizing those same ingredients are then suggested to you so you get the most usage possible. Users have the ability to build weekly meal plans from our recipes and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of an ingredient you need and the cost of those ingredients.
## How We Built It
To make our idea come to life, we utilized the Flask framework to create our web application that allows users to use our application easily and smoothly. In addition, we utilized a Walmart Store API to retrieve various ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as ingredients needed. All the data is then backed by SQLAlchemy to store ingredient, recipe, and meal data.
## Challenges We Ran Into
Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times that we implemented things that could have been better done if we had better communication. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe.
However, these challenges definitely taught us a lot of things, ranging from a better understanding to programming languages, to learning how to work and communicate better in a team.
## Accomplishments That We're Proud Of
Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy to look at splash page complete with an infographic, and being able to manipulate two different APIs to feed of off each other and provide users with a new experience.
## What We Learned
This was all of our first hackathon, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Boostrap and Flask. We also definitely learned the value of concise communication.
## What's Next for reFresh
There are a number of features that we would like to implement going forward. Possible avenues of improvement would include:
* User accounts to allow ingredients and plans to be saved and shared
* Improvement in our search to fetch more mainstream and relevant recipes
* Simplification of ingredient selection page by combining ingredients and meals in one centralized page | ## Inspiration
One of our teammates works part time in Cineplex and at the end of the day, he told us that all their extra food was just throw out.This got us thinking, why throw the food out when you can you earn revenue and some end of the day sales for people in the local proxmity that are looking for something to eat.
## What it does
Out web-app will give a chance for the restaurant to publish the food item which they are selling with a photo of the food. Meanwhile, users have the chance to see everything in real-time and order food directly from the platform. The web-app also identifies the items in the food, nutrient facts, and health benefitis, pro's and con's of the food item and displays it directly to the user. The web-app also provides a secure transaction method which can be used to pay for the food. The food by the restaurant would be sold at a discounted price.
## How I built it
The page was fully made by HTML, CSS, JavaScript and jQuery. There would be both a login and signup for both the restaurants wanting to sell and also for the participants wanting to buy the food.Once signed up for the app, the entry would get stored into Azure and would request for access to the Android Pay app which will allow the users to use Android Pay to pay for the food. When the food is ordered, we use the Clarifai API which allows the users can see the ingredients, health benefits, nutrient facts, pro's and con's of the food item on their dashboard and the photo of the app. This would all come together once the food is delivered by the restaurant.
## Challenges I ran into
Challenges we ran into were getting our database working as none of us have past experiences using Azure. The biggest challenge we ran into was our first two ideas but after talking to sponsors we found out that they were too limiting meaning we had to let go of the ideas and keep coming up with new ones. We started hacking late afternoon on Saturday which cut our time to finish the entire thing.
## Accomplishments that I'm proud of
We are really proud of getting the entire website up and running properly within the 20 hours as we started late enough with database problems that we were at the point of giving up on Sunday morning. Additionally we were very proud of getting our Clarifai API working as none of us had past experenices with Clarifai.
## What I learned
The most important thing we learned out of this hackathon was to start with a concrete idea early on as if this was done for this weekend, our idea could've included a lot more functions. This would benefit both our users and consumers.
## What's next for LassMeal
Our biggest next leap would be modifying the delivery portion of the food item. Instead of the restaurant delivering the food, users that sign up for the food service, also have a chance to become a deliverer. If they are within the distance of the restaurant and going back in the prxomity of the user's home, they would be able to pick up food for the user and deliver it and earn a percentage of the entire order. This would mean both the users and restaurants are earning money now for food that was once losing them money as they were throwing it out.Another additoin would be taking our Android Mockups and transferring them into a app meaning now both the users and restaurants have a way to buy/publish food via a mobile device. | ## Inspiration
In a world where the voices of the minority are often not heard, technology must be adapted to fit the equitable needs of these groups. Picture the millions who live in a realm of silence, where for those who are deaf, you are constantly silenced and misinterpreted. Of the 50 million people in the United States with hearing loss, less than 500,000 — or about 1% — use sign language, according to Acessibility.com and a recent US Census. Over 466 million people across the globe struggle with deafness, a reality known to each in the deaf community. Imagine the pain where only 0.15% of people (in the United States) can understand you. As a mother, father, teacher, friend, or ally, there is a strong gap in communication that impacts deaf people every day. The need for a new technology is urgent from both an innovation perspective and a human rights perspective.
Amidst this urgent disaster of an industry, a revolutionary vision emerges – Caption Glasses, a beacon of hope for the American Sign Language (ASL) community. Caption Glasses bring the magic of real-time translation to life, using artificial neural networks (machine learning) to detect ASL "fingerspeaking" (their one-to-one version of the alphabet), and creating instant subtitles displayed on glasses. This revolutionary piece effortlessly bridges the divide between English and sign language. Instant captions allow for the deaf child to request food from their parents. Instant captions allow TAs to answer questions in sign language. Instant captions allow for the nurse to understand the deaf community seeking urgent care at hospitals. Amplifying communication for the deaf community to the unprecedented level that Caption Glasses does increases the diversity of humankind through equitable accessibility means!
With Caption Glasses, every sign becomes a verse, every gesture an eloquent expression. It's a revolution, a testament to humanity's potential to converse with one another. In a society where miscommunication causes wars, there is a huge profit associated with developing Caption Glasses. Join us in this journey as we redefine the meaning of connection, one word, one sign, and one profound moment at a time.
## What it does
The Caption Glasses provide captions displayed on glasses after detecting American Sign Language (ASL). The captions are instant and in real-time, allowing for effective translations into the English Language for the glasses wearer.
## How we built it
Recognizing the high learning curve of ASL, we began brainstorming for possible solutions to make sign language more approachable to everyone. We eventually settled on using AR-style glasses to display subtitles that can help an ASL learner quickly identify what sign they are looking at.
We started our build with hardware and design, starting off by programming a SSD1306 OLED 0.96'' display with an Arduino Nano. We also began designing our main apparatus around the key hardware components, and created a quick prototype using foam.
Next, we got to loading computer vision models onto a Raspberry Pi4. Although we were successful in loading a basic model that looks at generic object recognition, we were unable to find an ASL gesture recognition model that was compact enough to fit on the RPi.
To circumvent this problem, we made an approach change that involved more use of the MediaPipe Hand Recognition models. The particular model we chose marked out 21 landmarks of the human hand (including wrist, fingertips, knuckles, etc.). We then created and trained a custom Artificial Neural Network that takes the position of these landmarks, and determines what letter we are trying to sign.
At the same time, we 3D printed the main apparatus with a Prusa I3 3D printer, and put in all the key hardware components. This is when we became absolute best friends with hot glue!
## Challenges we ran into
The main challenges we ran into during this project mainly had to do with programming on an RPi and 3D printing.
Initially, we wanted to look for pre-trained models for recognizing ASL, but there were none that were compact enough to fit in the limited processing capability of the Raspberry Pi. We were able to circumvent the problem by creating a new model using MediaPipe and PyTorch, but we were unsuccessful in downloading the necessary libraries on the RPi to get the new model working. Thus, we were forced to use a laptop for the time being, but we will try to mitigate this problem by potentially looking into using ESP32i's in the future.
As a team, we were new to 3D printing, and we had a great experience learning about the importance of calibrating the 3D printer, and had the opportunity to deal with a severe printer jam. While this greatly slowed down the progression of our project, we were lucky enough to be able to fix our printer's jam!
## Accomplishments that we're proud of
Our biggest accomplishment is that we've brought our vision to life in the form of a physical working model. Employing the power of 3D printing through leveraging our expertise in SolidWorks design, we meticulously crafted the components, ensuring precision and functionality.
Our prototype seamlessly integrates into a pair of glasses, a sleek and practical design. At its heart lies an Arduino Nano, wired to synchronize with a 40mm lens and a precisely positioned mirror. This connection facilitates real-time translation and instant captioning. Though having extensive hardware is challenging and extremely time-consuming, we greatly take the attention of the deaf community seriously and believe having a practical model adds great value.
Another large accomplishment is creating our object detection model through a machine learning approach of detecting 21 points in a user's hand and creating the 'finger spelling' dataset. Training the machine learning model was fun but also an extensively difficult task. The process of developing the dataset through practicing ASL caused our team to pick up the useful language of ASL.
## What we learned
Our journey in developing Caption Glasses revealed the profound need within the deaf community for inclusive, diverse, and accessible communication solutions. As we delved deeper into understanding the daily lives of over 466 million deaf individuals worldwide, including more than 500,000 users of American Sign Language (ASL) in the United States alone, we became acutely aware of the barriers they face in a predominantly spoken word.
The hardware and machine learning development phases presented significant challenges. Integrating advanced technology into a compact, wearable form required a delicate balance of precision engineering and user-centric design. 3D printing, SolidWorks design, and intricate wiring demanded meticulous attention to detail. Overcoming these hurdles and achieving a seamless blend of hardware components within a pair of glasses was a monumental accomplishment.
The machine learning aspect, essential for real-time translation and captioning, was equally demanding. Developing a model capable of accurately interpreting finger spelling and converting it into meaningful captions involved extensive training and fine-tuning. Balancing accuracy, speed, and efficiency pushed the boundaries of our understanding and capabilities in this rapidly evolving field.
Through this journey, we've gained profound insights into the transformative potential of technology when harnessed for a noble cause. We've learned the true power of collaboration, dedication, and empathy. Our experiences have cemented our belief that innovation, coupled with a deep understanding of community needs, can drive positive change and improve the lives of many. With Caption Glasses, we're on a mission to redefine how the world communicates, striving for a future where every voice is heard, regardless of the language it speaks.
## What's next for Caption Glasses
The market for Caption Glasses is insanely large, with infinite potential for advancements and innovations. In terms of user design and wearability, we can improve user comfort and style. The prototype given can easily scale to be less bulky and lighter. We can allow for customization and design patterns (aesthetic choices to integrate into the fashion community).
In terms of our ML object detection model, we foresee its capability to decipher and translate various sign languages from across the globe pretty easily, not just ASL, promoting a universal mode of communication for the deaf community. Additionally, the potential to extend this technology to interpret and translate spoken languages, making Caption Glasses a tool for breaking down language barriers worldwide, is a vision that fuels our future endeavors. The possibilities are limitless, and we're dedicated to pushing boundaries, ensuring Caption Glasses evolve to embrace diverse forms of human expression, thus fostering an interconnected world. | partial |