# Slacker
Created by Albert Lai, Hady Ibrahim, and Varun Kothandaraman

GitHub: *[Slacker Github](https://github.com/albertlai431/slacker-chore)*

## Inspiration
In shared housing, organizing chores so that everyone does their fair share of the work is a major hassle. In most cases, without direct instruction, people simply forget about the slice of work they need to complete.

## What it does
Slacker is a web app that allows users to join a group containing the members of their household; from an overall larger list of items, tasks are automatically assigned to each member of the group. Each member has several task viewpoints, the main pages being the user's own personal list, the total group list, each group member's activity, and settings. The user's personal list of chores refreshes each week with one-time and repeating chores, and forgotten/overdue chores appear at the top of the screen on every group member's personal page for quicker completion.

## How we built it
Slacker was built using a combination of React and Chakra UI, with GitHub for source control. Additionally, we created mockups of both the desktop pages and the mobile app we were planning to build. For pictures of the mockups, kindly check out the images we have attached to this Devpost. (A minimal sketch of the kind of task-assignment logic behind the automatic scheduling follows at the end of this write-up.)

## Challenges we ran into
Originally, our plan was to create an iOS/Android app with React Native and build out our full Figma app mockups. The full idea simply had too many features and details to do both:

* Create the mobile application
* Create the full application, with all the features we brainstormed

The first challenge we ran into was the mockup and design of the application. UI/UX design caused us a lot of grief, as we found it difficult to create designs that both looked good and were easy to understand in terms of functionality. The second challenge was the Google authentication feature we created for logging into the website: its implementation produced a lot of issues and bugs that delayed our total work time considerably. Within the time constraint, we were able to create a React web application with some basic functionality as a prototype of our original idea.

## Accomplishments that we're proud of
We are happy with the web application we have created so far as a prototype in the given time. We have implemented:

* The landing page
* Google authentication
* The home screen
* Tasks that are automatically assigned to users on a recurring basis
* Invite and join group
* Labelling the slacker member with the fewest tasks
* Donut graphs indicating task completion every week
* The ability to see every task for each day
* The ability to sign out of the webpage
* and even more!

## What we learned
As a group, since this was the first hackathon for the majority of us, we put more emphasis and time on brainstorming an idea instead of just sitting down and starting to code our project. We definitely learned that coming into the hackathon with some preconceived notions of what we individually wanted to code would have saved us more than half a day. We were also surprised to learn how useful Figma is as a UI/UX design tool for web development. The ability to copy-paste CSS code for each element of the webpage was instrumental in creating a working prototype faster.

## What's next for Slacker
For Slacker, the next steps are to:

* Finish the web application with all of the features
* Create and polish the full web application, with all the visual features we brainstormed
* Finish the mobile application with the same features as the web application
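The write-up says tasks from the group list are assigned automatically, with the member holding the fewest tasks flagged as the "slacker". Below is a minimal Python sketch of one way such least-loaded assignment could work; the function and field names are hypothetical, since Slacker's actual implementation is in React/JavaScript.

```python
from collections import defaultdict

def assign_chores(chores, members, current_load=None):
    """Assign each chore to the member with the fewest chores so far.

    `chores` is a list of chore names, `members` a list of member ids, and
    `current_load` an optional dict of member id -> number of open chores.
    All names here are hypothetical, for illustration only.
    """
    load = defaultdict(int, current_load or {})
    assignments = defaultdict(list)
    for chore in chores:
        # Pick the least-loaded member; ties are broken by member order.
        target = min(members, key=lambda m: load[m])
        assignments[target].append(chore)
        load[target] += 1
    # The member with the fewest assigned chores is the current "slacker".
    slacker = min(members, key=lambda m: load[m])
    return assignments, slacker

if __name__ == "__main__":
    weekly = ["dishes", "vacuum", "trash", "bathroom", "groceries"]
    tasks, slacker = assign_chores(weekly, ["albert", "hady", "varun"])
    print(tasks, "slacker:", slacker)
```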
# Mental-Health-Tracker

## Mental & Emotional Health Diary
This project was made because we all know how much of a pressing issue mental health and depression can be, not only for ourselves, but for thousands of other students. Our goal was to make something where someone could accurately assess and track their own mental health using the tools Google has made available. We wanted the person to be able to openly express their feelings in the diary for their own personal benefit.

Along the way, we learned about using Google's Natural Language processor, developing with Android Studio, and deploying an app on Google's App Engine with a `node.js` framework. Those last two parts turned out to be the greatest challenges. Android Studio was a challenge because one of our developers had not used Java in a long time, nor had he ever developed with `.xml`; he was pushed to learn a lot about the program in a limited amount of time. The greatest challenge, however, was deploying the app using Google App Engine. This tool is extremely useful, and it was made to seem easy to use, but we struggled to implement it using `node.js`. Issues arose with errors involving `favicon.ico` and `index.js`. It took us hours to resolve this issue and we were very discouraged, but we pushed through. After all, we had everything else - we knew we could push through this.

The end product is an app in which the user signs in with their Google account. It opens to the home page, where the user is prompted to answer four questions relating to their mental health for the day, and then rate themselves on a scale of 1-10 in terms of their happiness for the day. After this is finished, the user is given their mental health score, along with an encouraging message tagged with a cute picture. The user then has the option to view a graph of their mental health and happiness statistics to see how they have progressed over the past week, or a calendar to see their happiness scores and specific answers for any day of the year. (A minimal sketch of how diary text could be scored with Google's Natural Language API follows below.)

Overall, we are very happy with how this turned out. We even have ideas for how we could do more, as we know there is always room to improve!
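The diary relies on Google's Natural Language processor to assess free-text entries. Below is a minimal, hedged sketch of how a daily entry could be scored with the Google Cloud Natural Language Python client; the original backend used `node.js`, so this Python version is illustrative only, it assumes credentials are already configured, and the combined scoring formula is made up for the example.

```python
from google.cloud import language_v1

def score_entry(text: str) -> float:
    """Return a sentiment score in [-1.0, 1.0] for one diary entry."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment.score

def daily_mental_health_score(answers: list[str], happiness: int) -> float:
    """Combine per-answer sentiment with the 1-10 self-rating.

    The weighting here is hypothetical; the real app's scoring formula is
    not described in the write-up.
    """
    avg_sentiment = sum(score_entry(a) for a in answers) / len(answers)
    return round(25 * (avg_sentiment + 1) + 5 * happiness, 1)
```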
## Inspiration
STEM was inspired by our group members, who have all experienced the failure of a personal health goal. We believed that setting similar goals with our friends and seeing their progress would inspire us to work harder towards completing our own goals. We also agreed that this may encourage us to start challenges that we see our friends partaking in, which can help us develop healthy lifestyle habits.

## What it does
STEM provides a space where users can set their health goals in the form of challenges and visualize their progress in the form of a growing tree. Additionally, users can see others' progress within the same challenges to further motivate them. Users can help promote fitness and health by creating their own challenges and inviting their friends, family, and colleagues.

## How we built it
This mobile application was built with React Native, Expo CLI, and Firebase.

## Challenges we ran into
One challenge that we ran into was the time limit. There were a few parts of our project that we designed in Figma and intended to code, but we were unable to do so. Furthermore, none of our group members had prior experience using React Native, which, in combination with the time limit, led to some planned features going undeveloped. Another challenge was the fact that our project is a very simple idea with a lot of competition.

## Accomplishments that we're proud of
We are very proud of our UI and the aesthetics of our project. No member of our group had prior experience with React Native, and therefore we are proud that we were able to build and submit a functional project within 36 hours. Lastly, we are also very proud that we were able to develop an idea with the potential to be a future business.

## What we learned
Throughout this weekend, we learned how to be more consistent with version control in order to work better and faster as a team. We also learned how to build an effective NoSQL database schema.

## What's next for STEM
As we all believe that STEM has the potential to be a future business, we will continue developing the code and deploy it. We will be adding a live feed page that will allow you to see, like, and comment on friends' posts. Users will be able to post about their progress in challenges. STEM will also reach out and try to partner with companies to create incentives for certain achievements made by users (e.g., getting a discount on certain sportswear brands after completing a physical challenge or reaching a certain tree level).
## Inspiration
With ubiquitous and readily available ML/AI turnkey solutions, the major bottlenecks of data analytics lie in the consistency and validity of datasets. **This project aims to enable a labeller to be consistent with both their fellow labellers and their past self while seeing the live class distribution of the dataset.**

## What it does
The UI allows a user to annotate datapoints from a predefined list of labels while seeing the distribution of labels that other annotators have previously assigned to that particular datapoint. The project also leverages AWS' BlazingText service to suggest labels for incoming datapoints, using models that are retrained and redeployed as more labelled information is collected. Furthermore, the user also sees the top N similar datapoints (using Overlap Coefficient Similarity; a small sketch follows below) and their corresponding labels. In theory, this added information motivates the annotator to remain consistent when labelling datapoints and to be aware of the labels that other annotators have assigned.

## How we built it
The project uses Google's Firestore realtime database with AWS SageMaker to streamline the creation and deployment of text classification models. For the front end we used Express.js, Node.js, and CanvasJS to create the dynamic graphs. For the backend we used Python, AWS SageMaker, Google's Firestore, and several NLP libraries such as spaCy and Gensim. We leveraged the realtime functionality of Firestore to trigger functions (via listeners) in both the front end and the back end. After K detected changes in the database, a new BlazingText model is trained, deployed, and used for inference on the current unlabelled datapoints, with the pertinent changes shown on the dashboard.

## Challenges we ran into
The initial set-up of SageMaker was a major timesink; the constant permission errors when trying to create instances and assign roles were very frustrating. Additionally, our limited knowledge of front-end tools made the process of creating dynamic content challenging and time-consuming.

## Accomplishments that we're proud of
We actually got the ML models deployed and predicting our unlabelled data in a pretty timely fashion using a fixed number of triggers from Firebase.

## What we learned
Clear and effective communication is super important when designing the architecture of technical projects. There were numerous times when two team members were vouching for the same structure, but the lack of clarity led to an apparent disparity. We also realized Firebase is pretty cool.

## What's next for LabelLearn
Creating a more interactive UI, optimizing performance, and adding more sophisticated text similarity measures.
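The write-up names Overlap Coefficient Similarity as the measure used to surface the top N similar datapoints. Below is a minimal Python sketch of that measure over token sets; the whitespace tokenization and the `top_n_similar` helper are illustrative assumptions, not LabelLearn's actual code.

```python
def overlap_coefficient(a: str, b: str) -> float:
    """Overlap coefficient: |A ∩ B| / min(|A|, |B|) over word sets."""
    set_a, set_b = set(a.lower().split()), set(b.lower().split())
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / min(len(set_a), len(set_b))

def top_n_similar(query: str, corpus: list[str], n: int = 5):
    """Return the n datapoints most similar to `query`, highest score first."""
    scored = [(overlap_coefficient(query, doc), doc) for doc in corpus]
    return sorted(scored, reverse=True)[:n]
```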
## Inspiration
During extreme events such as natural disasters or virus outbreaks, crisis managers are the decision makers. Their job is difficult since the right decision can save lives while the wrong decision can lead to their loss. Making such decisions in real time can be daunting when there is insufficient information, which is often the case. Recently, big data has gained a lot of traction in crisis management by addressing this issue; however, it creates a new challenge: how can you act on data when there's just too much of it to keep up with? One example of this is the use of social media during crises. In theory, social media posts can give crisis managers an unprecedented level of real-time situational awareness. In practice, the noise-to-signal ratio and volume of social media are too large to be useful. I built CrisisTweetMap to address this issue by creating a dynamic dashboard for visualizing crisis-related tweets in real time. The focus of this project was to make it easier for crisis managers to extract useful and actionable information. To showcase the prototype, I used tweets about the current coronavirus outbreak.

## What it does
* Scrapes live crisis-related tweets from Twitter
* Classifies tweets into relevant categories with a deep learning NLP model
* Extracts geolocation from tweets with different methods
* Pushes classified and geolocated tweets to a database in real time
* Pulls tweets from the database in real time to visualize on the dashboard
* Allows dynamic user interaction with the dashboard

## How I built it
* Tweepy + a custom wrapper for scraping and cleaning tweets
* AllenNLP + torch + BERT + the CrisisNLP dataset for model training/deployment
* spaCy NER + geotext for extracting location names from text
* geopy + a gazetteer Elasticsearch Docker container for resolving locations to geolocations
* shapely for sampling geolocations from bounding boxes
* SQLite3 + pandas for database push/pull (see the sketch below)
* Dash + plotly + Mapbox for live visualizations

## Challenges I ran into
* Geolocation is hard
* Stream stalling due to a large/slow neural network
* Responsive, interactive visualization of large amounts of data

## Accomplishments that I'm proud of
* A working prototype

## What I learned
* Different methods for fuzzy geolocation from text
* Live map visualizations with Dash

## What's next for CrisisTweetMap
* Other crises, like extreme weather events
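The pipeline pushes classified, geolocated tweets into SQLite with pandas and pulls them back for the Dash dashboard. Below is a minimal sketch of that push/pull step; the database path, table name, and column names are assumptions for illustration.

```python
import sqlite3
import pandas as pd

DB_PATH = "tweets.db"  # hypothetical path

def push_tweets(rows: list[dict]) -> None:
    """Append newly classified/geolocated tweets to the database."""
    df = pd.DataFrame(rows)  # columns e.g. id, text, label, lat, lon, ts
    with sqlite3.connect(DB_PATH) as conn:
        df.to_sql("tweets", conn, if_exists="append", index=False)

def pull_recent(limit: int = 500) -> pd.DataFrame:
    """Fetch the most recent tweets for the live dashboard."""
    with sqlite3.connect(DB_PATH) as conn:
        return pd.read_sql_query(
            "SELECT * FROM tweets ORDER BY ts DESC LIMIT ?",
            conn,
            params=(limit,),
        )
```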
## Inspiration
We started off by thinking, "What is something someone needs today?" In light of the stock market not doing so well, and the amount of false information being spread over the Internet these days, we figured it was time to get things right by understanding the stock market. We know that no human could analyze a company without bias from the company's history and its potential stereotypes, but nothing beats using an NLP model to understand the current situation of a company. Thinking about the capabilities of the Cohere NLP and what we know and want from the stock market led us to a solution: Stocker.

## What it does
The main application allows you to search for words that make up different stocks. Then, for each company that matches the inputted string, the backend grabs the company and searches through its recent news via a web scraper on Google News. We collect all of the headlines and evaluate the status of the company according to a rating system. Finally, we summarize all the data by running Generate on the collected text and outputting the result.

## How we built it
The stocks corresponding to the search were grabbed via the NASDAQ API. Then, once the promise is fulfilled, the React page can update the list with ratings already prepended. The backend runs on Google Cloud and was built in Python with a Flask server. This backend communicates directly with the Cohere API, specifically the Generate and Classify functionalities. Classify is used to evaluate company status from the headlines, and it is coupled with Generate to get a text summary of all the headlines. Then, the best ones are selected and displayed with links to the specific articles so people can verify the truthfulness of the information. We trained Classify with several tests to ensure the API understood what we were asking of it, rather than being too extreme or imprecise. (A small sketch of the headline-rating step is included after this write-up.)

## Challenges we ran into
Coming up with a plan for how to bring everything together was difficult -- we knew that we wanted to get data to pass into a Classify model, but how the scraping would work and being able to communicate that data took time to plan in order to execute. The entire backend was a little challenging for the team members, as it was their first time working with Flask on the backend. This resulted in some trouble getting things set up, but more significantly, the process of deploying the backend involved lots of research and testing, as nobody on our team knew how our backend could be deployed. On the front-end side, there were some hiccups with getting the data to show for all objects being outputted (i.e., how mapping and conditional rendering work in React was a learning curve). There were also some bugs with small technical details, but those were eventually figured out. Finally, bringing together the back end and front end and troubleshooting all the small errors was a bit challenging, given the amount of time that was remaining. Overall, though, most errors were solved in appropriate amounts of time.

## Accomplishments that we're proud of
Finally figuring out the deployment of the backend was one of the highlights for sure, as it took some time of researching and experimenting. Another big one was getting the front end designed from the Figma prototype we made and combining it with the functional, but very barebones, infrastructure of our app that we made as a proof of concept. Having the front-end design work with object arrays as a whole rather than individual objects made the code a lot more standardized and consolidated in the end as well, which was nice to see.

## What we learned
We learned that it is important to do more research on standard templates for writing code that can be deployed easily. Some of us also got experience in Flask while others fine-tuned their React skills, which was great to see, as that proficiency became useful when the back end, front end, and REST API were coming together (sudden edits were very easy and smooth to make).

## What's next for Stocker
Stocker can add more categories and get smarter for sure. For example, it could try to interpret the recent trend of where the stock has been headed, and also use sources of data other than the news. Stocker relies heavily on the training model and the severity of article names, but in the future it could get smarter with more sources such as these.
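As described above, Stocker classifies each scraped headline and rolls the results up into a company rating. The Python sketch below shows one way that aggregation could look; `classify_headlines` is a toy stand-in for the team's Cohere Classify call, and the label set and scoring weights are assumptions, not Stocker's actual values.

```python
from collections import Counter

# Hypothetical label set; the write-up only says headlines feed a rating system.
WEIGHTS = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}

def classify_headlines(headlines: list[str]) -> list[str]:
    """Toy stand-in for the Cohere Classify call used in the real backend.

    It keys off a few words so the sketch runs end to end; swap in the real
    API request here.
    """
    positive, negative = {"beats", "surges", "record"}, {"falls", "lawsuit", "misses"}
    labels = []
    for headline in headlines:
        words = set(headline.lower().split())
        if words & positive:
            labels.append("positive")
        elif words & negative:
            labels.append("negative")
        else:
            labels.append("neutral")
    return labels

def rate_company(headlines: list[str]) -> tuple[float, Counter]:
    """Average the per-headline labels into a single score in [-1, 1]."""
    labels = classify_headlines(headlines)
    score = sum(WEIGHTS.get(lbl, 0.0) for lbl in labels) / max(len(labels), 1)
    return round(score, 2), Counter(labels)
```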
## Inspiration
Every student knows the struggle that is course registration. You're tossed into an unfamiliar system with little advice and all these vague rules and restrictions to follow. All the while, courses are filling up rapidly. Far too often, students - often underclassmen - are stuck without the courses they need. We were inspired by these pain points to create Schedge, an automatic schedule generator.

## What it does
Schedge helps freshmen build their schedule by automatically selecting three out of a four-course load. The three courses consist of a Writing the Essay course (the mandatory writing seminar for NYU students), a Core course like Quantitative Reasoning, and a course in the major of the student's choosing. Furthermore, we provide sophomores with potential courses to take after their freshman year, whether that's a follow-up to Writing the Essay or a more advanced major course.

## How we built it
We wrote the schedule generation algorithm in Rust, as we needed it to be blazing fast and well designed. The front end is React with TypeScript and Material UI. The algorithm, while technically NP-complete over all courses, uses some shortcuts and heuristics to allow for fast schedule generation. (A toy illustration of the core conflict check appears below.)

## Challenges we ran into
We had some trouble with the data organization, especially with structuring courses and their potential meeting times.

## Accomplishments that we're proud of
Using a more advanced systems language such as Rust in a hackathon. Also, our project has immediate real-world applications at NYU. We plan on extending it and providing it as a service.

## What we learned
Courses have a lot of different permutations and complications.

## What's next for Schedge
More potential majors and courses! Features for upperclassmen!
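Schedge's actual algorithm is written in Rust and uses heuristics the write-up doesn't detail, so the Python sketch below only illustrates the basic building block: checking meeting-time conflicts while picking one section per course via naive search. The data shapes are assumptions for the example.

```python
from itertools import product

# A meeting is (day, start_minute, end_minute); a section is a list of meetings.
def conflicts(a, b):
    """True if two meetings overlap on the same day."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def compatible(schedule, section):
    """True if `section` clashes with nothing already chosen."""
    return not any(
        conflicts(m1, m2) for m1 in section for s in schedule for m2 in s
    )

def generate(courses):
    """Brute-force one conflict-free pick of a section per course.

    `courses` maps course name -> list of candidate sections. Real schedule
    generators prune this search heavily; this is only the naive core idea.
    """
    names = list(courses)
    for combo in product(*(courses[n] for n in names)):
        chosen = []
        for section in combo:
            if not compatible(chosen, section):
                break
            chosen.append(section)
        else:
            return dict(zip(names, combo))
    return None
```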
## Inspiration
As a team of post-secondary students, we've all been through the torment of realising that the courses you intended to take have times that conflict with each other. But if there's one thing AI can do, it's making decisions in a short period of time (provided it has the data). Rather than having students search through each course description to decide how they'll arrange their schedule, we wanted to create a product that could generate schedules for them, so long as they provide sufficient information to decide which courses should be in the schedule, and when.

## What it does
Borzoi Scheduler is a website that builds course schedules for UofT students. Users just need to provide their program of study, the semester they're planning for, and the times when they don't want classes, and Borzoi will generate a schedule for them. With additional exchanges between the user and Borzoi's AI chat, further specifications can be made to ensure the schedule is as relevant as possible to the user and their needs.

## How we built it
Figma was used to create a high-fidelity prototype of the website, demonstrating its functionality with a sample use case. Meanwhile, Python was used in combination with the ChatGPT API to build the chat that users interact with to create their personalised schedules (a minimal sketch of such a chat call follows this write-up). As for the website itself, we used HTML, CSS, and JavaScript for its creation and design. Last, but not least, we attempted to use Flask to bring the front end and back end together. Given the time constraint, we were unable to incorporate the databases that would have been required if we actually had to create schedules with UofT courses. However, our team was able to utilise these tools to create a bare-bones version of our website.

## Challenges we ran into
Although we were able to settle on an idea relatively early on, due to a lack of experience with the software tools we'd previously learned about, our team had trouble identifying where to start on the project, as well as the technicalities behind the way it worked. We recognised the need for implementing AI, databases, and some sort of frontend/backend, but were unsure how, exactly, that implementation worked. To find our way to the start of actually creating the project, we consulted multiple resources: from Google, to the mentors, and even to ChatGPT, the very AI we intended to use in our website. Many of the answers we got were beyond our understanding, and we often felt just as confused as when we first started searching. After a good night's rest and some more discussion, we realised that our problem was that we were thinking too broadly. By breaking our ideas down into smaller, simpler chunks, we were able to get clearer answers and simultaneously identify the steps we needed to take to implement our ideas. Our team still came across many unknowns along the way, but with the support of the mentors and quite a bit of self-learning, each of these points was clarified, and we were slowly, but surely, able to move along our development journey.

## Accomplishments that we're proud of
Our team is proud of all that we were able to learn in these past 2-3 days! Although we weren't able to come up with how to write all the code completely on our own, it was a rewarding experience being exposed to so many development tools, learning the pros and cons of each, and (at the cost of our sleep) figuring out how to use the new knowledge. In particular, at the start of this event, our group wanted to work with AI specifically because none of us had experience with it; we wanted to use this hackathon as an excuse to learn more about this topic and the tools needed to apply it, and we were not disappointed. The time spent doing research and asking mentors for suggestions deepened our understanding of the use of AI, as well as a variety of other tools that we'd often heard of but had never interacted with until we participated in this hackathon.

## What we learned
As mentioned in the accomplishments section, after these past 2-3 days, we now know quite a bit more about AI and other topics such as APIs, JavaScript, etc. But technical knowledge aside, we discovered the importance of breaking problems down into more manageable pieces. When we first started trying to work on our idea, it felt almost impossible for us to even get one function working. But by setting mini goals and working through each one slowly and carefully, we were eventually able to create what we have now!

## What's next for Borzoi Scheduler
At the moment, there are still a number of functionalities we're hoping to add (features we wanted to add if we had more time). For one, we want to make the service more accessible by providing voice input and multilingual support (possibly with the use of WhisperAI). For another, we're hoping to allow users to save their schedule in both a visual and textual format, depending on their preferences. Once those functions are implemented and tested, we want to consider the scope of our service. Currently, Borzoi Scheduler is only available for the students of one school, but we're hoping to be able to extend this service to other schools as well. Knowing that many students also have to work to pay for rent, tuition, and more, we want to allow as many people as possible to have access to this service so that they can save time to focus on their hobbies, relationships, and their own health. Though this is a big goal, we're hoping that by collaborating with school services to provide accurate course information, as well as receiving possible funding for the project from the schools, this mission will be made possible. Furthermore, as scheduling is done not only by students, but also by organisations and individuals, we would like to consider creating or adapting Borzoi Scheduler for these audiences so that they may also save time on organising their time.
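The chat component pairs Python with the ChatGPT API. Below is a minimal sketch of one such call using the `openai` Python package's v1-style client; the system prompt and model name are assumptions for illustration, not the team's actual configuration.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You help UofT students plan a conflict-free course schedule. "
    "Ask for their program, semester, and blocked times, then propose courses."
)  # hypothetical prompt

def chat_turn(history: list[dict], user_message: str) -> str:
    """Send one user message plus prior history and return the assistant's reply."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        *history,
        {"role": "user", "content": user_message},
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model
        messages=messages,
    )
    return response.choices[0].message.content
```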
# Are You Taking

It's the anti-scheduling app. 'Are You Taking' is the no-nonsense way to figure out if you have class with your friends by comparing your course schedules with ease. No more screenshots, only good vibes!

## Inspiration
The fall semester is approaching... too quickly. And we don't want to be in class by ourselves. Every year, we do the same routine of sending screenshots to our peers of what we're taking that term. It's tedious, and every time you change courses, you have to resend a picture. It also doesn't scale well to groups of people trying to find all of the different overlaps. So, we built a fix. Introducing "Are You Taking" (AYT), an app that allows users to upload their calendars and find event overlap. It works very similarly to scheduling apps like when2meet, except with the goal of finding where there *is* conflict, instead of where there isn't.

## What it does
The flow goes as follows:

1. Users upload their calendar and get a custom URL like `https://areyoutaking.tech/calendar/<uuidv4>`
2. They can then send that URL wherever it suits them most
3. Other users may then upload their own calendars
4. The link stays alive so users can go back to see who has class with whom

## How we built it
We leveraged React on the front end, along with Next, Sass, React-Big-Calendar, and Bootstrap. For the back end, we used Python with Flask. We also used CockroachDB for storing events and handled deployment using Google Cloud Run (GCR) on GCP. We were able to create Dockerfiles for both our front end and back end separately and deploy them each to a separate GCR instance. (A small sketch of the core overlap check is included after this write-up.)

## Challenges we ran into
There were two major challenges we faced in development. The first was modelling relationships between the various entities involved in our application. From one-to-one, to one-to-many, to many-to-many, we had to write effective schemas to ensure we could render data efficiently. The second was connecting our front-end code to our back-end code; we waited perhaps a bit too long to pair them together and really felt a time crunch as the deadline approached.

## Accomplishments that we're proud of
We managed to cover a lot of new ground!

* Effectively rendering calendar events
* Handling file uploads and storing event data
* Deploying the application on GCP using GCR
* Capturing various relationships with database schemas and SQL

## What we learned
We used each of these technologies for the first time:

* Next
* CockroachDB
* Google Cloud Run

## What's next for Are You Taking (AYT)
There are a few major features we'd like to add!

* Support for direct Google Calendar links, Apple Calendar links, and Outlook links
* Editing the calendar so you don't have to re-upload the file
* Integrations with common platforms: Messenger, Discord, email, Slack
* Simple passwords for calendars and users
* Rendering a 'generic week' as the calendar, instead of specific dates
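AYT's core question is where two people's events *do* overlap. Below is a minimal Python sketch of that pairwise interval check; the tuple-based event representation is an assumption, since the real backend stores events in CockroachDB.

```python
from datetime import datetime

# An event is (title, start, end) with datetime bounds; a hypothetical shape.
def overlaps(a, b):
    """True if two events share any time."""
    return a[1] < b[2] and b[1] < a[2]

def shared_classes(cal_a, cal_b):
    """Return pairs of events from two calendars that overlap in time."""
    return [(ea, eb) for ea in cal_a for eb in cal_b if overlaps(ea, eb)]

if __name__ == "__main__":
    alice = [("CS101", datetime(2022, 9, 6, 10), datetime(2022, 9, 6, 11))]
    bob = [("CS101", datetime(2022, 9, 6, 10, 30), datetime(2022, 9, 6, 12))]
    print(shared_classes(alice, bob))
```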
## Inspiration
The JetBlue challenge of YHack.

## What it does
A website with sentiment analysis of JetBlue.

## How I built it
Python and data scraping, using TextBlob for sentiment analysis (a small sketch follows below).

## Challenges I ran into
Choosing between TextBlob and NLTK.

## Accomplishments that I'm proud of
Having a finished product.

## What I learned
How to do sentiment analysis.

## What's next for FeelingBlue
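The write-up says sentiment analysis was done with TextBlob. Below is a minimal sketch of scoring scraped review text that way; the example strings are placeholders, not the project's actual data.

```python
from textblob import TextBlob

reviews = [
    "The flight was delayed for three hours.",   # placeholder text
    "Friendly crew and a smooth landing!",
]

for text in reviews:
    polarity = TextBlob(text).sentiment.polarity  # -1.0 (negative) to 1.0 (positive)
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    print(f"{label:>8} ({polarity:+.2f}): {text}")
```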
## Inspiration
We got our inspiration from looking at the tools provided to us in the hackathon. We saw that we could use the Google APIs effectively when analyzing the sentiment of customers' reviews on social media platforms. With the wide range of possibilities it gave us, we got the idea of using programs to present the data visually.

## What it does
JetBlueByMe is a program which takes over 16,000 reviews from TripAdvisor, and hundreds of tweets from Twitter, and presents them in a graphable way. The first representation is an effective yet simple word cloud, which shows more frequently used adjectives in larger type. The other is a bar graph showing which words appear most consistently.

## How we built it
The first step was to scrape data off multiple websites. To do this, a web scraping robot by UiPath was used. This saved a lot of time and allowed us to focus on other aspects of the program. For Twitter, Python had to be used in conjunction with the Beautiful Soup library to extract the tweets and hashtags. This was only possible after receiving permission 10 hours after applying to Twitter for API access. The Google Sentiment API and Syntax API were used to create the final product. The Syntax API helped extract the adjectives from the reviews so we could show a word cloud. To display the word cloud, the programming was done in R, as it is an effective language for data manipulation.

## Challenges we ran into
We were initially unable to use UiPath for Twitter to scrape data, as the page didn't have a next button, so the robot did not continue on its own. This was fixed using Beautiful Soup in Python. Also, when trying to extract the adjectives, the compiling was very slow, causing us to fall back about 2 hours. None of us knew the ins and outs of web scraping, hence it was a challenging problem for us.

## Accomplishments that we're proud of
We are happy about finding an effective way to scrape words using both UiPath and Beautiful Soup. Also, we weren't aware that Google provided an API for sentiment analysis, and access to that was a big plus. We learned how to utilize our tools and incorporated them into our project. We also used Firebase to help store data on the cloud so we know it's secure.

## What we learned
Web scraping was a big thing that we all learned, as it was new to all of us. We had to research extensively before applying any idea. Most of the group did not know how to use the language R, but we understood the basics by the end. We also learned how to set up Firebase and a Google Cloud service, which will definitely be a big asset in our future programming endeavours.

## What's next for JetBlueByMe
Our web scraping application can be optimized, and we plan on getting a live feed set up to show review sentiment in real time. With time and resources, we would be able to implement that.
## Inspiration
Given the increase in mental health awareness, we wanted to focus on therapy treatment tools in order to enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to progress emotionally with their clients, and there is no assistive digital tool for therapists to keep track of clients' sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data and track patient progress over time.

## Our Team
* Vanessa Seto, Systems Design Engineering at the University of Waterloo
* Daniel Wang, CS at the University of Toronto
* Quinnan Gill, Computer Engineering at the University of Pittsburgh
* Sanchit Batra, CS at the University of Buffalo

## What it does
Inkblot is a digital tool that gives therapists a second opinion by performing sentiment analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions and gives therapists useful data points that aren't usually captured in typical hand-written notes. Some key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript and look at specific keywords associated with certain emotions. Another key feature is the progress tab, which displays past therapy sessions with easy-to-interpret sentiment data visualizations, allowing therapists to see the overall ups and downs in a patient's visits.

## How we built it
We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch. For the back end, we hosted a REST API built with Flask on GCP in order to easily access the APIs offered by GCP. Most notably, we took advantage of the Google Vision API to perform sentiment analysis and used the Speech-to-Text API to transcribe a patient's therapy session.

## Challenges we ran into
* Integrating a chart library in Angular that met our project's complex data needs
* Working with raw data
* Audio processing and conversions for session video clips

## Accomplishments that we're proud of
* Using GCP to its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / Load Balancer, as well as both the Vision API and Speech-to-Text
* Implementing the entire front end from scratch in Angular, with the integration of real-time data
* Great UI design :)

## What's next for Inkblot
* Database integration: keeping user data, keeping historical data, user profiles (login)
* Twilio integration
* HIPAA compliance
* Investigating blockchain technology with the help of Blockstack
* Testing the product with professional therapists
# MediBot: Help us help you get the healthcare you deserve

## Inspiration
Our team went into the ideation phase of TreeHacks 2023 with the rising relevance and prominence of conversational AI as a "fresh" topic occupying our minds. We wondered whether and how we could apply conversational AI technology such as chatbots to benefit people, especially those who may be underprivileged or underserviced in areas within the potential influence of this technology. We were brooding over the six tracks and various sponsor rewards when inspiration struck. We wanted to make a chatbot within healthcare, specifically patient safety. Being international students, we recognize some of the difficulties that arise when living in a foreign country in terms of language and the ability to communicate with others. Through this empathetic process, we arrived at a group that we defined as the target audience of MediBot: children and non-native English speakers who face language barriers and interpretive difficulties in their communication with healthcare professionals. We realized very early on that we did not want to replace the doctor in diagnosis, but rather to equip our target audience with the ability to express their symptoms clearly and accurately. After some deliberation, we decided that the optimal way to accomplish that using conversational AI was a chatbot that asks clarifying questions to help label the symptoms for the users.

## What it does
MediBot initially prompts users to describe their symptoms as best as they can. The description is then evaluated for similarity against a list of proper medical terms (symptoms). If the symptom descriptions are rather vague (they do not match the list of official symptoms well, or they are blanket terms), MediBot asks the patients clarifying questions to identify the symptoms with the user's added input. For example, when told "My head hurts," MediBot will ask them to distinguish between headaches, migraines, or potentially blunt force trauma. If the description of a symptom is specific and relatable to official medical terms, MediBot asks questions about associated symptoms - symptoms that are known probabilistically to appear together with the ones the user has already listed. The bot is designed to avoid making an initial diagnosis, using a double-blind inquiry process to control for potential confirmation biases: the bot will not tell the doctor its predictions regarding what the user has, and it will not nudge users into confessing or agreeing to a symptom they do not experience. Instead, the doctor is given a list of what the user was likely describing at the end of the conversation between the bot and the user. The predictions from the inquiry process are a product of the associative relationships among symptoms. MediBot keeps track of these associative relationships through cosine similarity and weight distribution after a vectorization encoding process, and over time it zones in on a specific condition (determined by the highest similarity score). This process also helps maintain context throughout the chat conversation. Finally, the conversation between the patient and MediBot ends in the following cases: the user needs to leave, the associative-symptoms process suspects one condition much more than the others, or the user finishes discussing all the symptoms they experienced. (A minimal sketch of the similarity matching between user descriptions and official symptom terms follows below.)
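MediBot matches free-text symptom descriptions against official symptom terms using embedding similarity; the write-up mentions Hugging Face models and cosine similarity. The sketch below uses the `sentence-transformers` library as a stand-in, and the specific model name, symptom list, and threshold are assumptions, not MediBot's actual configuration.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed model; the write-up only says a Hugging Face similarity model was used.
model = SentenceTransformer("all-MiniLM-L6-v2")

SYMPTOMS = ["headache", "migraine", "fever", "chills", "nausea", "chest pain"]
symptom_embeddings = model.encode(SYMPTOMS, convert_to_tensor=True)

def match_symptoms(description: str, threshold: float = 0.45, top_k: int = 3):
    """Return (symptom, score) pairs; a low best score suggests a vague description."""
    query = model.encode(description, convert_to_tensor=True)
    scores = util.cos_sim(query, symptom_embeddings)[0]
    ranked = sorted(zip(SYMPTOMS, scores.tolist()), key=lambda p: -p[1])[:top_k]
    needs_clarification = ranked[0][1] < threshold
    return ranked, needs_clarification

if __name__ == "__main__":
    print(match_symptoms("my head hurts"))
```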
## How we built it
We constructed the MediBot web application in two interconnected stages: frontend and backend. The front end is a mix of ReactJS and HTML. There is only one page accessible to the user, which is the chat page between the user and the bot. The page was made reactive through several styling options and the use of state in the messages. The back end was constructed using Python, Flask, and machine learning models from OpenAI and Hugging Face. Flask handles communication between the various Python scripts holding the MediBot response model and the chat page on the front end. Python was used to process the data, encode the NLP models and their calls, and store and export responses. We used prompt engineering through OpenAI to train a model to ask clarifying questions and perform sentiment analysis on user responses. Hugging Face was used to create an NLP model that runs a similarity check between the user's input of symptoms and the official list of symptoms.

## Challenges we ran into
Our first challenge was familiarizing ourselves with virtual environments and solving dependency errors when pushing and pulling from GitHub. Each of us initially had different versions of Python and different operating systems. We quickly realized that this would hinder our progress greatly; after fixing the first series of dependency issues, we started coding in virtual environments as a solution. The second great challenge we ran into was integrating the three separate NLP models into one application, because they are all resource-intensive in terms of RAM and we only had computers with around 12 GB free for coding. To circumvent this, we had to employ intermediate steps when feeding the result from one model into the next. Finally, the third major challenge was resting and sleeping well.

## Accomplishments we are proud of
First and foremost, we are proud of the fact that we have a functioning chatbot that accomplishes what we originally set out to do. In this group, three of us had never coded an NLP model and the fourth had only coded smaller-scale ones. Thus, the integration of three of them into one chatbot, with a front end and back end, is something that we are proud to have accomplished in the timespan of the hackathon. Second, we are happy to have a relatively small error rate in our model. We informally tested it with varied prompts, and it performed within expectations every time.

## What we learned
This was the first hackathon for half of the team, and for three of the four of us, it was the first time working with virtual environments and collaborating using Git. We quickly learned how to push, pull, and commit changes. Before the hackathon, only one of us had worked on an ML model, but together we learned to create NLP models and use OpenAI and prompt engineering (credits to the OpenAI Mem workshop). This project's scale helped us understand the intrinsic moldability of these ML models. Working on MediBot also helped us become much more familiar with the idiosyncrasies of ReactJS and its use in tandem with Flask for dynamically changing webpages. As mostly beginners, we experienced our first true taste of product ideation, project management, and collaborative coding environments.

## What's next for MediBot
The next immediate steps for MediBot involve making the application more robust and capable. In more detail, first we will encode the ability for MediBot to detect and define more complex language in simpler terms.
Second, we will improve upon the initial response to allow for more substantial multi-symptom functionality. Third, we will expand the processing of qualitative answers from users to include information like the length of pain, the intensity of pain, and so on. Finally, after this more robust system is implemented, we will begin the training phase by speaking to healthcare providers and testing it out on volunteers.

## Ethics
Our design aims to improve patients' healthcare experience and bridge the gap between having a condition and getting the desired treatment. We believe expression barriers and technical knowledge should not be missing stones in that bridge. The ethics of our design therefore hinges on providing quality healthcare for all. We intentionally stopped short of providing a diagnosis with MediBot because of the following ethical considerations:

* **Bias mitigation:** Whatever diagnosis we provide might induce unconscious biases, like confirmation or availability bias, affecting the medical provider's ability to give a proper diagnosis. It must be noted, however, that MediBot is capable of producing a diagnosis. Perhaps MediBot can be used in further research to check the credibility of AI diagnosis by comparing its prediction against the doctor's after a diagnosis has been made.
* **Patient trust and safety:** We're not yet at the point in our civilization's history where patients are comfortable getting a diagnosis from an AI. MediBot's intent is to help nudge us a step down that path by seamlessly, safely, and without negative consequence integrating AI within the more physical, intimate environments of healthcare. We envision MediBot in these hospital spaces, helping users articulate their symptoms better without fear of getting a wrong diagnosis. We're humans; we like when someone gets us, even if that someone is artificial.

However, the implementation of AI for pre-diagnosis still raises many ethical questions and considerations:

* **Fairness:** Use of MediBot requires a working knowledge of the English language. This automatically skews its accessibility. There are still many immigrants for whom the questions, as simple as we have tried to make them, might be too much. This is a severe limitation to our ethic of assisting these people. A next step might include introducing further explanation of troublesome terms in their language (note: the process of pre-diagnosis will remain in English; only troublesome terms that the user cannot understand in English may be explained in a more familiar language. This way we further build patients' vocabulary and help their familiarity with English). There are also accessibility concerns, as hospitals in certain regions or economic strata may not have the resources to incorporate this technology.
* **Bias:** We put serious thought into bias mitigation, both on the side of the doctor and the patient. It is important to ensure that MediBot does not lead the patient into reporting symptoms they don't necessarily have or induce availability bias. We aimed to circumvent this by asking questions seemingly at random from a list of symptoms generated by our sentence similarity model. This avoids leading the user in just one direction. However, this does not eradicate all biases, as associative symptoms are hard to mask from the patient (i.e., a patient may think "chills" if you ask about cold), so this remains a consideration.
* **Accountability:** Errors in symptom identification can be tricky to detect, making it very hard for the medical practitioner to know when the symptoms are a true reflection of the patient's actual state. Who is responsible for the consequences of wrong pre-diagnoses? It is important to establish clear systems of accountability and checks for detecting and correcting errors in MediBot.
* **Privacy:** MediBot will be trained on patient data and patient-doctor diagnoses in future operations. There remain concerns about privacy and data protection. This information, especially identifying information, must be kept confidential and secure. One method of handling this is asking users at the very beginning whether they want their data to be used for diagnostics and training or not.
## Inspiration
No one likes waiting around too much, especially when we feel we need immediate attention. 95% of people in hospital waiting rooms tend to get frustrated over waiting times and uncertainty. And this problem affects around 60 million people every year, just in the US. We would like to alleviate this problem and offer alternative services to relieve the stress and frustration that people experience.

## What it does
We let people upload their medical history and list of symptoms before they reach the waiting rooms of hospitals. They can do this through the voice assistant feature, where, in a conversational style, they describe their symptoms and the related details and circumstances. They also have the option of just writing these in a standard form, if that's easier for them. Based on the symptoms and circumstances, the patient receives a category label of 'mild', 'moderate', or 'critical' and is added to the virtual queue. This way, hospitals can take care of their patients more efficiently by having a fair ranking system (which also accounts for time of arrival) that determines the queue, and patients have a higher satisfaction level as well, because they see a transparent process without the usual uncertainty and they feel attended to. They can be told an estimated range of waiting time, which frees them from stress, and they are shown a progress bar to see whether a doctor has reviewed their case already, whether insurance has been contacted, or any other status change. (A toy sketch of the kind of triage labelling described here appears after this write-up.) Patients are also provided with tips and educational content regarding their symptoms and pains, battling the abundant stream of misinformation and inaccuracy that comes from the media and unreliable sources. Hospital experiences shouldn't be all negative - let's try to change that!

## How we built it
We are running a Microsoft Azure server and developed the interface in React. We used the Houndify API for the voice assistant and the Azure Text Analytics API for processing. The designs were built in Figma.

## Challenges we ran into
Brainstorming took longer than we anticipated and we had to keep our cool and not stress, but in the end we agreed on an idea that has enormous potential, and it was worth chewing on it longer. We have had a little experience with voice assistants in the past but had never used Houndify, so we spent a bit of time figuring out how to piece everything together. We were also thinking of implementing multiple user input languages so that less fluent English speakers could use the app as well.

## Accomplishments that we're proud of
Treehacks had many interesting side events, so we're happy that we were able to piece everything together by the end. We believe that the project tackles a real and large-scale societal problem, and we enjoyed creating something in this domain.

## What we learned
We learned a lot during the weekend about text and voice analytics and about the US healthcare system in general. Some of us flew in all the way from Sweden, and for some of us this was the first hackathon attended, so working together with new people with different experiences definitely proved to be exciting and valuable.
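The write-up describes mapping a patient's reported symptoms to a 'mild', 'moderate', or 'critical' label and placing them in a queue ordered by severity and arrival time. The sketch below is a deliberately simplified, keyword-based illustration of that idea; the real project used the Azure Text Analytics API, and the keyword lists and ordering rule here are assumptions.

```python
from datetime import datetime

# Hypothetical keyword lists; a production system would use a trained model.
CRITICAL = {"chest pain", "unconscious", "severe bleeding", "stroke"}
MODERATE = {"fracture", "high fever", "persistent vomiting"}

def triage_label(symptoms: str) -> str:
    """Map a free-text symptom description to a coarse severity label."""
    text = symptoms.lower()
    if any(k in text for k in CRITICAL):
        return "critical"
    if any(k in text for k in MODERATE):
        return "moderate"
    return "mild"

SEVERITY_RANK = {"critical": 0, "moderate": 1, "mild": 2}

def queue_order(patients: list[dict]) -> list[dict]:
    """Sort by severity first, then by time of arrival."""
    return sorted(patients, key=lambda p: (SEVERITY_RANK[p["label"]], p["arrived"]))

if __name__ == "__main__":
    patients = [
        {"name": "A", "label": triage_label("mild headache"), "arrived": datetime(2021, 1, 1, 9)},
        {"name": "B", "label": triage_label("chest pain and dizziness"), "arrived": datetime(2021, 1, 1, 9, 30)},
    ]
    print([p["name"] for p in queue_order(patients)])
```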
## Inspiration
As the number of Internet users grows, there is an increasing need for online technical support. We asked ourselves, what would a help desk look like 30 years from now? Would there be human-to-human interaction? Perhaps the user's problem could be solved before it even happens. While we aren't currently able to predict the future, we can still make it as easy as possible for a client to find the help they need in a timely manner. Too much of technical support teams' time is wasted by users not being able to find what they need or not knowing how to solve their problems. We decided to make a tool that makes it as easy as possible for them to find the solutions.

## What it does
Penguin Desk is a universal Google Chrome extension. Most technical problems can be reduced to specific roots, allowing solutions to be streamlined for the majority of websites. The user tells our extension what issue they're having or what they would like to accomplish. Our extension quickly searches for all the possible paths that the user could take and automatically performs what it determines is the best action. These actions could be anything from page redirection, to highlighting steps on each page, to autofilling forms. Because of the similar format of most websites, this extension automatically adapts to the user's needs.

## How we built it
We built the tool as a Google Chrome extension using JavaScript, HTML, and CSS. To run it, we sideload it into the browser. We use a synonyms API to find other words on the page that could help the user solve their problem.

## Challenges we ran into
We ran into numerous bugs - everything from console logging breaking our program, to comments being ignored. We found it difficult to get the files to communicate with each other.

## Accomplishments and what we learned
None of us had built a Google Chrome extension before, so it was quite the learning experience.

## What's next for Penguin Desk
Penguin Desk was initially going to have users log in to an account so that we could store data such as their personal information and previous help requests. We didn't have time to implement that, but it would have been nice to have.
## Inspiration
Due to heavy workloads and family problems, people often forget to take care of their health and diet. Common health problems people face nowadays are blood pressure (BP) issues, heart problems, and diabetes. Many people also face mental health problems due to studies, jobs, or other pressures. This project can help people find out about these health problems. It also helps people recycle items easily, as items are divided into 12 different classes, and it helps people who do not have any knowledge of plants by predicting whether a plant has a disease or not.

## What it does
On the Garbage page, when we upload an image, it classifies which kind of garbage it is, which helps people recycle easily. On the Mental Health page, when we answer some questions, it predicts whether we are facing some kind of mental health issue. The Health page is divided into three parts: the first predicts whether you have heart disease, the second predicts whether you have diabetes, and the third predicts whether you have BP problems. The COVID-19 page classifies whether you have COVID or not, and the Plant_Disease page predicts whether a plant has a disease or not.

## How we built it
We built it using Streamlit and OpenCV. (A minimal sketch of a Streamlit image-classification page in this style is included below.)

## Challenges we ran into
Deploying the website to Heroku was very difficult, as it is not something we generally do. Most of this was new to us except for deep learning and ML, so it was very difficult overall due to the time restraint. The overall logic, and figuring out how we should calculate everything, was difficult to determine within the time limit. Overall, time was the biggest constraint.

## Accomplishments that we're proud of

## What we learned
TensorFlow, Streamlit, Python, HTML5, CSS3, OpenCV, machine learning, deep learning, and using different Python packages.

## What's next for Arogya
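The app is described as a set of Streamlit pages where an uploaded image is classified (garbage type, plant disease, and so on). The sketch below shows roughly what one such page could look like with Streamlit, OpenCV, and a Keras model; the model file, input size, and class names are placeholders, not Arogya's actual files.

```python
import cv2
import numpy as np
import streamlit as st
from tensorflow.keras.models import load_model

CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]  # placeholder labels

@st.cache_resource
def get_model():
    return load_model("garbage_model.h5")  # hypothetical model file

st.title("Garbage Classification")
uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])

if uploaded is not None:
    # Decode the uploaded bytes with OpenCV and resize to the model's input shape.
    image = cv2.imdecode(np.frombuffer(uploaded.read(), np.uint8), cv2.IMREAD_COLOR)
    st.image(cv2.cvtColor(image, cv2.COLOR_BGR2RGB), caption="Uploaded image")
    batch = np.expand_dims(cv2.resize(image, (224, 224)) / 255.0, axis=0)
    prediction = get_model().predict(batch)[0]
    st.write(f"Predicted class: **{CLASSES[int(np.argmax(prediction))]}**")
```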
## Inspiration
Over 70 million people around the world use sign language as their native form of communication - 70 million voices that are not fully recognized in today's society. This disparity inspired our team to develop a program that allows for real-time translation of sign language onto a text display monitor, allowing for more inclusivity: those who do not know sign language can communicate with a new community, breaking down language barriers.

## What it does
It translates sign language into text in real time.

## How we built it
We set up the environment by installing different packages (OpenCV, MediaPipe, scikit-learn) and set up a webcam. (A minimal sketch of the landmark-capture step is shown after this write-up.)

* Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, which takes the whole image frame, and sorted the captures into different categories to classify the letters.
* Data processing: We used MediaPipe's computer vision inference to capture hand gestures and localize the landmarks of the fingers.
* Train/test model: We trained our model to detect matches between the trained images and the hand landmarks captured in real time.

## Challenges we ran into
The challenges we ran into began with our team struggling to come up with a topic to develop. Then we ran into the issue of integrating our sign language detection code with the hardware, as our laptop lacked the ability to effectively process the magnitude of our code.

## Accomplishments that we're proud of
The accomplishment that we are most proud of is that we were able to incorporate hardware in our project as well as machine learning with a focus on computer vision.

## What we learned
At the beginning of our project, our team was inexperienced in developing machine learning code. However, through extensive research on machine learning, we were able to expand our knowledge in under 36 hours to develop a fully working program.
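The pipeline described above captures hand landmarks with MediaPipe and feeds them to a scikit-learn classifier. Below is a minimal sketch of the landmark-extraction step using MediaPipe's `solutions.hands` API; flattening the landmarks into a feature vector is an assumption about how the features could be fed to a classifier, not the team's exact code.

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def landmark_features(frame):
    """Return a flat [x0, y0, x1, y1, ...] vector for the first detected hand, or None."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    hand = results.multi_hand_landmarks[0]
    features = []
    for lm in hand.landmark:
        features.extend([lm.x, lm.y])  # normalized coordinates in [0, 1]
    return features  # 21 landmarks -> 42 features for e.g. a scikit-learn classifier

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        print(landmark_features(frame))
```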
## Inspiration
Amidst hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought gave birth to our inspiration for this web-based app, which provides activities customised to a person's mood to help them relax and rejuvenate.

## What it does
We planned to create a platform that detects a user's mood through facial recognition, recommends yoga poses to lighten their mood and evaluates their correctness, and helps the user jot their thoughts in a self-care journal.

## How we built it
Frontend: HTML5, CSS (framework used: Tailwind), JavaScript
Backend: Python, JavaScript
Server side: Node.js, Passport.js
Database: MongoDB (for user login), MySQL (for mood-based music recommendations)

## Challenges we ran into
Incorporating OpenCV in our project was a challenge, but it was very rewarding once it all worked. But since all of us were first-time hackers, and due to time constraints, we couldn't deploy our website externally.

## Accomplishments that we're proud of
Mental health issues are among the least-addressed diseases, even though medically they rank in the top 5 chronic health conditions. We at Umang are proud to have taken notice of such an issue and to help people recognise their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective as well as push them towards a more sound mind and body. We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that this website includes so many features to help improve the user's mental health: letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture just by sitting at home with an AI instructor. Furthermore, completing this within 24 hours was an achievement in itself, since it was our first hackathon, which was very fun and challenging.

## What we learned
We learnt how to implement OpenCV in projects. Another skill we gained was how to use Tailwind CSS. Besides that, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists.

## What's next for Umang
While the core functionality of our app is complete, it can of course be further improved.

1) We would like to add a chatbot which can be the user's guide/best friend and give advice when the user is in mental distress.
2) We would also like to add a mood log which can keep track of the user's daily mood, and if a serious degradation of mental health is seen, it can directly connect the user to medical helpers or therapists for proper treatment.

This lays the groundwork for further expansion of our website. Our spirits are up and the sky is our limit.
## Inspiration
There are many scary things in the world, ranging from poisonous spiders to horrifying ghosts, but none of them scare people more than the act of public speaking. Over 75% of people suffer from a fear of public speaking, so what if there was a way to tackle this problem? That's why we created Strive.

## What it does
Strive is a mobile application that leverages voice recognition and AI technologies to provide instant, actionable feedback on the vocal delivery of a person's presentation. Once you have recorded your speech, Strive calculates various performance variables such as voice clarity, filler word usage, voice speed, and voice volume. It then renders these performance variables in an easy-to-read statistics dashboard and provides a customized feedback page with tips to improve your presentation skills. In the settings page, users can add custom filler words that they would like to avoid saying during their presentation, and they can personalize their speech coach for a more motivational experience. On top of the in-app analysis, Strive also sends the feedback results to the user via text message, making it easy to share or forward an analysis. (An illustrative sketch of this kind of analysis follows this write-up.)

## How we built it
Using the collaboration tool Figma, we designed wireframes of our mobile app, and we used Photoshop and GIMP to customize every page for an intuitive user experience. To create the front end of our app we used the Unity game engine: within Unity we sculpted each app page and connected components to backend C# functions and services. We leveraged IBM Watson's speech toolkit to calculate the performance variables and used stdlib's cloud function features for text messaging.

## Challenges we ran into
Given our technical backgrounds, one challenge we ran into was developing a simplistic yet intuitive user interface that helps users navigate the various features within our app. By leveraging collaborative tools such as Figma and seeking inspiration from platforms such as Dribbble, we were able to collectively develop a design framework that best suited the needs of our target user.

## Accomplishments that we're proud of
Creating a fully functional mobile app while leveraging an unfamiliar technology stack: a simple application people can use to start receiving actionable feedback on improving their public speaking skills. Anyone can use our app to improve their public speaking and conquer their fear of it.

## What we learned
Over the course of the weekend, one of the main things we learned was how to create an intuitive UI, and how important it is to understand the target user and their needs.

## What's next for Strive - Your Personal AI Speech Trainer
* Model the voices of famous public speakers for a more realistic experience when giving personal feedback (using the Lyrebird API).
* Calculate more performance variables for an even better analysis and more detailed feedback.
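Strive computes its performance variables with IBM Watson's speech toolkit inside Unity/C#; purely as an illustration of the kind of post-transcription analysis involved (not Strive's implementation), a small Python sketch of pace and filler-word metrics could look like this. The filler list and the example numbers are assumptions.

```python
# Sketch: compute simple delivery metrics from a speech transcript and its duration.
# The filler-word list and example values are illustrative assumptions.
from collections import Counter

DEFAULT_FILLERS = {"um", "uh", "like", "basically", "actually"}

def analyze_speech(transcript: str, duration_seconds: float, custom_fillers=()):
    cleaned = [w.strip(".,!?").lower() for w in transcript.split()]
    fillers = DEFAULT_FILLERS | {w.lower() for w in custom_fillers}
    filler_counts = Counter(w for w in cleaned if w in fillers)
    wpm = len(cleaned) / (duration_seconds / 60)   # speaking pace in words per minute
    return {
        "words_per_minute": round(wpm, 1),
        "filler_words": dict(filler_counts),
        "filler_ratio": round(sum(filler_counts.values()) / max(len(cleaned), 1), 3),
    }

print(analyze_speech("So um this is basically my pitch", duration_seconds=4))
```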
## Inspiration
It is nearly a year since the start of the pandemic, and going back to normal still feels like a distant dream. As students, most of our time is spent attending online lectures, reading e-books, listening to music, and playing online games. This forces us to spend immense amounts of time in front of a large monitor, clicking the same monotonous buttons. Many surveys suggest that this has increased anxiety levels in the youth. Basically, we are losing the physical stimulus of reading an actual book in a library, going to an arcade to enjoy games, or playing table tennis with our friends.

## What it does
It does three things:
1) Controls any game, such as Asphalt 9, using a hand or a non-digital steering wheel.
2) Helps you zoom in, zoom out, and scroll up and down using only hand gestures.
3) Helps you browse any music of your choice using voice commands, with gesture controls for volume, pause/play, skip, etc.

## How we built it
The three main technologies used in this project are:
1) Python 3: the software suite is built using Python 3 and was initially developed in the Jupyter Notebook IDE.
2) OpenCV: the software uses the OpenCV library in Python to implement most of its gesture recognition and motion analysis tasks.
3) Selenium: a web driver that was used extensively to control the web-interface interaction component of the software.

## Challenges we ran into
1) Selenium only works with Chrome version 81 and is very hard to debug :(
2) Finding the perfect HSV ranges corresponding to different colours was a tedious task and required a special script to make it easier (a generic sketch of that idea follows this write-up).
3) Pulling an all-nighter (a coffee does NOT help!)

## Accomplishments that we're proud of
1) Successfully amalgamated computer vision, speech recognition, and web automation into a suite of software, not just a single program!

## What we learned
1) How to debug Selenium efficiently
2) How to use angle geometry for steering a car using computer vision
3) How to stabilize errors in object detection

## What's next for E-Motion
I plan to implement more components in E-Motion that will help browse the entire computer, and to make the voice commands more precise by ignoring background noise.
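The HSV-range tuning mentioned above is a standard OpenCV colour-tracking pattern; the sketch below is a generic illustration of it (not the project's script), and the HSV bounds are placeholders that would be tuned per lighting setup.

```python
# Sketch: isolate a coloured marker (e.g. on a cardboard steering wheel) by HSV range.
# The bounds below are placeholders; in practice they are tuned per lighting setup.
import cv2
import numpy as np

LOWER = np.array([100, 120, 70])   # illustrative lower HSV bound (a blue-ish marker)
UPPER = np.array([130, 255, 255])  # illustrative upper HSV bound

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)           # white where the colour matches
    moments = cv2.moments(mask)
    if moments["m00"] > 0:                          # centroid of the detected blob
        cx, cy = moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]
        cv2.circle(frame, (int(cx), int(cy)), 8, (0, 255, 0), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Tracking two such markers (one per hand) and taking the angle of the line between their centroids is one way the "angle geometry" steering mentioned above could be computed.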
## Inspiration
The inspiration behind our project stems from the personal challenges we've faced in the past with public speaking and presentations. Like many others, we often found ourselves feeling overwhelmed by nerves, shaky voices, and the pressure of presenting in front of peers. This anxiety would sometimes make it difficult to communicate ideas effectively, and the fear of judgment made the experience even more daunting. Recognizing that we weren't alone in these feelings, we wanted to create a solution that could help others overcome similar hurdles. That's where Vocis comes in. Its aim is to give people the freedom and the ability to practice their presentation skills at their own pace, in a safe, supportive environment. Whether it's practicing for a school project, a work presentation, or simply building the confidence to speak in front of others, the platform allows users to refine their delivery.

## What It Does
Our project aims to simulate real-life challenges that presenters might face, for example handling difficult situations like Q&A sessions, dealing with hecklers, or responding to aggressive viewers. By creating these simulated scenarios, our software prepares users for the unpredictability of live presentations. We hope that by giving people the tools and the settings to practice on their own terms, they can gradually build the skills and self-assurance needed to present with ease in any setting.

## Tech Stack - How Vocis is built
* ReactJS
* shadcn
* Next.js
* Tailwind CSS
* Hume AI
* OpenAI

## Challenges We Faced
During the hackathon, one of the key challenges we faced was the need to dive into extensive documentation while working on the API implementation, as we had never worked with Hume before. On top of that, since none of us have much experience with the backend of an app, it was taxing to learn and implement at the same time. This already time-consuming task became more difficult due to unstable internet connectivity, which led to unexpected delays in accessing resources and troubleshooting problems in real time and put additional pressure on our timeline. Despite these setbacks, our team worked hard to adapt and maintain momentum.

## Accomplishments
Despite the challenges we faced, we were able to build a functional prototype that demonstrates the core of our program: simulating difficult, real-life scenarios for presenters and public speakers. It is the bare bones, but we're very proud of ourselves for being able to do that and create a wonderful project.

## What We Learned
We learned to deliver a viable project in limited time, overcoming our shortcomings along the way. Through multiple workshops and insightful help from mentors, we learned more about APIs: implementing them, making sure they cooperate with each other, and streamlining the process. We also discovered a lot of new, cool, and amazing technologies created by a lot of amazing people that allowed us to achieve the aim of our project.

## What's Next For Vocis
* Allow multiple users to present at the same time, with the AI creating situations for multiple "panelists".
* Add many more situations that panelists and presenters may face, such as different types of aggressive people and journalists that are a little too overbearing.
* Add reactions from the listening audience to create a more realistic experience for the presenter.
* More security measures, including authentication.
## Inspiration
Inspired by [SIU Carbondale's Green Roof](https://greenroof.siu.edu/siu-green-roof/), we wanted to create an automated garden watering system that would help address issues ranging from food deserts to lack of agricultural space to storm water runoff.

## What it does
This hardware solution takes in moisture data from soil and determines whether or not the plant needs to be watered. If the soil's moisture is too low, the valve will dispense water and the web server will display that water has been dispensed.

## How we built it
First, we tested the sensor and determined the boundaries between dry, damp, and wet based on the sensor's output values. Then, we took the boundaries and divided them into percentage soil moisture. Specifically, the sensor measures the conductivity of the material around it, so water, being the most conductive, produced the highest values and air, being the least conductive, produced the lowest. Soil falls in the middle, and the moisture ranges were defined by the pure-air and pure-water boundaries. From there, we built the hardware setup: the sensor connected to an Arduino UNO microcontroller connected to a Raspberry Pi 4, which controls a solenoid valve that releases water when the soil moisture reading is less than 40% wet. (A rough sketch of this decision logic follows this write-up.)

## Challenges we ran into
At first, we aimed too high. We wanted to incorporate weather data into our water dispensing system, but the information flow and JSON parsing were not cooperating with the Arduino IDE. We consulted with a mentor, Andre Abtahi, who helped us get a better perspective of our project scope. It was difficult to focus on what it meant to truly create a minimum viable product when we had so many ideas.

## Accomplishments that we're proud of
Even though our team is spread across the country (California, Washington, and Illinois), we were still able to create a functioning hardware hack. In addition, as beginners we are very excited about this hackathon's outcome.

## What we learned
We learned about wrangling APIs, how to work in a virtual hackathon, and project management. Upon reflection, creating a balance between feasibility, ability, and optimism is important for guiding the focus and motivation of a team. Being mindful about energy levels is especially important for long sprints like hackathons.

## What's next for Water Smarter
Lots of things! What's next for Water Smarter is weather-controlled water dispensing. Given humidity, precipitation, and other weather data, our water system will dispense more or less water. This adaptive water feature will save water and let nature pick up the slack. We would use the OpenWeatherMap API to gather the forecasted volume of rain, predict the potential soil moisture, and have the watering system dispense an adjusted amount of water to maintain correct soil moisture content. In a future iteration of Water Smarter, we want to stretch the use of live geographic data even further by suggesting appropriate produce for each growing zone in the US, which will personalize the water conservation process. Not all plants are appropriate for all locations, so we would want to provide the user with options for optimal planting. We can use software like ArcScene to look at sunlight exposure according to regional 3D topographical images and suggest planting locations and times. We want our product to be user friendly, so we want to improve our aesthetics and show more information about soil moisture beyond just notifying that water has been dispensed.
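The watering decision described above (calibrate the raw conductivity reading against the pure-air and pure-water boundaries, convert it to a 0-100% moisture scale, and open the valve below 40%) can be sketched roughly as follows; the calibration constants and serial port are assumptions for illustration, not the team's measured values.

```python
# Sketch: convert raw conductivity readings into a moisture percentage and decide
# whether to water. Calibration constants and the serial port are illustrative.
import serial  # pyserial

RAW_AIR = 250     # assumed raw reading in dry air  (0 % moisture)
RAW_WATER = 800   # assumed raw reading in water    (100 % moisture)
THRESHOLD = 40    # water when soil moisture drops below 40 %

def to_percent(raw: int) -> float:
    span = RAW_WATER - RAW_AIR
    return max(0.0, min(100.0, (raw - RAW_AIR) / span * 100))

with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as arduino:  # Arduino over USB
    while True:
        line = arduino.readline().decode(errors="ignore").strip()
        if not line.isdigit():
            continue
        moisture = to_percent(int(line))
        if moisture < THRESHOLD:
            print(f"{moisture:.0f}% moisture -> dispensing water")
            # here the Pi would energise the solenoid valve (e.g. via a GPIO-driven relay)
        else:
            print(f"{moisture:.0f}% moisture -> soil is damp enough")
```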
## Inspiration
As the world progresses into the digital age, there is a huge simultaneous focus on creating various sources of clean energy that are sustainable and affordable. Unfortunately, there is minimal focus on ways to sustain the increasingly rapid production of energy. Energy is wasted every day as utility companies oversupply power to certain groups of consumers.

## What It Does
Thus, we bring you Efficity, a device that helps utility companies analyze and predict the load demand of a housing area. By leveraging the expanding, ubiquitous arrival of Internet of Things devices, we can access energy data in real time. Utility companies can then estimate the ideal power to supply to a housing area while still satisfying the load demand. With this, less energy is wasted, improving energy efficiency. On top of that, everyday consumers get easy access to their own personal usage for tracking.

## How We Built It
Our prototype is built primarily around a DragonBoard 410c, where a potentiometer is used to represent the varying load demand of consumers. By using the analog capabilities of a built-in Arduino (ATmega328P), we can calculate the power consumed by the load in real time. A Python script is then run on the DragonBoard to receive the data from the Arduino over serial communication. The DragonBoard further complements our design with its built-in WiFi capabilities: we can send HTTP requests to a web server hosted by energy companies. In our case, we explored sending this data to a free IoT platform web server, which allows a user anywhere to track energy usage and perform analytics, for example with MATLAB. (A bare-bones sketch of this serial-to-web data path follows this entry.) In addition, the DragonBoard comes with a fully usable GUI and a compatible HDMI monitor for users who are less familiar with command-line controls.

## Challenges We Ran Into
There were many challenges throughout the hackathon. First, we had trouble grasping the operation of the DragonBoard. The first 12 hours were spent just learning how to use the device itself, and it did not help that our first DragonBoard was defective and did not come with a pre-flashed operating system! Next time, we plan to ask more questions early on rather than fixating on problems we believed were trivial. Next, we had a hard time coding the WiFi functionality of the DragonBoard, largely due to the lack of expertise in the area among most members. For future reference, we find it advisable to have a greater diversity of team members to facilitate faster development.

## Accomplishments That We're Proud Of
Overall, we are proud of what we achieved, as this was our first time participating in a hackathon, with members ranging from first- all the way to fourth-year students. From learning how to operate the DragonBoard 410c to getting hands-on experience implementing IoT capabilities, we thoroughly believe that HackWestern has broadened all our perspectives on technology.

## What's Next for Efficity
If this pitch is successful at this hackathon, we plan to iterate further and develop the full potential of the DragonBoard prototype. There are numerous algorithms we would love to implement and explore to process the collected data, since the DragonBoard is quite a powerful device with its own operating system. We may also want to include extra hardware add-ons such as silent alarms for over-usage or solar panels to allow a fully self-sustained device.
To take this one step further: if we are able to build a fully functional product, we can opt to pitch this idea to investors!
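A bare-bones version of the data path described in "How We Built It" above (read each power reading from the ATmega over serial, then post it to an IoT web service) could look like the sketch below; the endpoint URL, API key, field names, and serial port are placeholders, not Efficity's real configuration.

```python
# Sketch: forward power readings from the on-board microcontroller to an IoT platform.
# The serial port, endpoint URL, and API key are placeholders.
import time
import requests
import serial  # pyserial

ENDPOINT = "https://example-iot-platform.test/update"  # placeholder URL
API_KEY = "YOUR_WRITE_KEY"                             # placeholder key

with serial.Serial("/dev/ttyMSM1", 115200, timeout=2) as mcu:
    while True:
        raw = mcu.readline().decode(errors="ignore").strip()
        try:
            watts = float(raw)              # the microcontroller sends one reading per line
        except ValueError:
            continue
        requests.post(ENDPOINT, data={"api_key": API_KEY, "power_w": watts}, timeout=5)
        time.sleep(10)                      # send every 10 s to stay within rate limits
```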
# See our presentation [here](https://docs.google.com/presentation/d/1AWFR0UEZ3NBi8W04uCgkNGMovDwHm_xRZ-3Zk3TC8-E/edit?usp=sharing)

## Inspiration
Without purchasing hardware, there are few ways to have contact-free interactions with your computer. To make such technologies accessible to everyone, we created one of the first touch-less, hardware-less means of computer control by employing machine learning and gesture analysis algorithms. Additionally, we wanted to make it as accessible as possible in order to reach a wide demographic of users and developers.

## What it does
Puppet uses machine learning techniques such as k-means clustering to distinguish between different hand signs. It then interprets the hand signs as computer inputs, such as key presses or mouse movements, to give the user full control without a physical keyboard or mouse.

## How we built it
Using OpenCV to capture the user's camera input and MediaPipe to parse hand data, we could capture the relevant features of a user's hand. Once these features are extracted, they are fed into the k-means clustering algorithm (built with scikit-learn) to distinguish between different types of hand gestures. The hand gestures are then translated into specific computer commands, pairing AppleScript with PyAutoGUI to provide the user with the Puppet experience. (A rough sketch of the clustering step appears after this write-up.)

## Challenges we ran into
One major issue was that in the first iteration of our k-means clustering algorithm the clusters were colliding. We fed the model the distance of each finger on your hand from your wrist and designed it to return the relevant gesture. Though we considered changing this to a coordinate-based system, we settled on making the hand gestures more distinct under our current distance system. This was ultimately the best solution because it allowed us to keep a small model while increasing accuracy. Mapping a finger position on camera to a point for the cursor on the screen was not as easy as expected. Because of inaccuracies in the hand detection, among other things, the mouse was at first very shaky. Additionally, it was nearly impossible to reach the edges of the screen because your finger would not be detected near the edge of the camera's frame. In our Puppet implementation, we constantly *pursue* the desired cursor position instead of directly *tracking it* with the camera. Also, we scaled our coordinate system so it requires less hand movement to reach the screen's edge.

## Accomplishments that we're proud of
We are proud of the gesture recognition model and motion algorithms we designed. We also take pride in the organization and execution of this project in such a short time.

## What we learned
A lot was discovered about the difficulties of utilizing hand gestures. From a data perspective, many of the gestures look very similar, and it took us time to develop specific transformations, models, and algorithms to parse our data into individual hand motions and signs. Also, our team members possess diverse and separate skill sets in machine learning, mathematics, and computer science. We can proudly say it required nearly all three of us to overcome any major issue presented. Because of this, we all leave here with a more advanced skill set in each of these areas and better continuity as a team.

## What's next for Puppet
Right now, Puppet can control presentations, the web, and your keyboard. In the future, Puppet could control much more.
* Opportunities in education: Puppet provides a more interactive experience for controlling computers. This feature could be used in elementary school classrooms to give kids hands-on learning with maps, science labs, and language.
* Opportunities in video games: As Puppet advances, it could give game developers a way to create games where the user interacts without a controller. Unlike technologies such as the Xbox Kinect, it would require no additional hardware.
* Opportunities in virtual reality: Cheaper VR alternatives such as Google Cardboard could be paired with Puppet to create a premium VR experience with at-home technology. This could be used in both examples described above.
* Opportunities in hospitals / public areas: People have been especially careful about avoiding germs lately. With Puppet, you won't need to touch any keyboards or mice shared by many doctors, providing a more sanitary way to use computers.
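To make the clustering step described above concrete: the features are the distances from each fingertip landmark to the wrist, and k-means groups frames into gesture clusters. The sketch below is an illustration under those assumptions, not Puppet's source; the data file and cluster count are placeholders.

```python
# Sketch: cluster hand poses by fingertip-to-wrist distances with k-means.
# Assumes MediaPipe's 21-landmark hand model (wrist = 0, fingertips = 4, 8, 12, 16, 20).
import numpy as np
from sklearn.cluster import KMeans

FINGERTIPS = [4, 8, 12, 16, 20]
WRIST = 0

def distance_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (21, 2) array of normalized (x, y) points for one frame."""
    wrist = landmarks[WRIST]
    return np.array([np.linalg.norm(landmarks[tip] - wrist) for tip in FINGERTIPS])

# frames: a stack of landmark arrays collected while holding each gesture.
frames = np.load("hand_frames.npy")               # illustrative file, shape (N, 21, 2)
X = np.stack([distance_features(f) for f in frames])

kmeans = KMeans(n_clusters=4, n_init=10).fit(X)   # e.g. 4 distinct gestures
# Each cluster id is then mapped to a command (key press, click, cursor mode, ...).
print(kmeans.predict(X[:5]))
```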
## Inspiration
Aravind doesn't speak Chinese. When Nick and Jon speak in Chinese, Aravind is sad. We want to solve this problem for all the Aravinds in the world, and not just for Chinese: for any language!

## What it does
TranslatAR lets you see English (or any other language of your choice) subtitles when you speak with people speaking a foreign language. This is an augmented reality app, which means the subtitles appear floating in front of you!

## How we built it
We used Microsoft Cognitive Services' Translation APIs to transcribe speech and then translate it. To handle the augmented reality aspect, we created our own AR device by combining an iPhone, a webcam, and a Google Cardboard. In order to support video capturing along with multiple microphones, we multithread all our processes. (A small threading sketch follows this write-up.)

## Challenges we ran into
One of the biggest challenges we faced was adding the functionality to handle multiple input sources in different languages simultaneously. We eventually solved it with multithreading, spawning a new thread to listen, translate, and caption for each input source.

## Accomplishments that we're proud of
Our biggest achievement is definitely multithreading the app to translate many different languages at the same time using different endpoints. This makes real-time multilingual conversations possible!

## What we learned
We familiarized ourselves with the Cognitive Services API and were also able to create our own AR system from scratch that works very well, using the OpenCV libraries and the Python Imaging Library.

## What's next for TranslatAR
We want to launch this app in the App Store so people can replicate VR/AR on their own phones with nothing more than an app and an internet connection. It also helps a lot of people whose relatives and friends speak other languages.
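The one-thread-per-source approach described above can be illustrated with a small Python threading sketch; `transcribe_and_translate` is a stand-in for the Cognitive Services round trip and is not the project's actual code, and the sources and languages are invented for the example.

```python
# Sketch: one worker thread per input source, each producing captions independently.
# transcribe_and_translate is a stand-in for the speech + translation API calls.
import queue
import threading

captions = queue.Queue()

def transcribe_and_translate(chunk: bytes, language: str) -> str:
    return f"[{language} -> en] ..."   # placeholder for the real API round trip

def worker(source_id: int, language: str, audio_chunks):
    for chunk in audio_chunks:
        text = transcribe_and_translate(chunk, language)
        captions.put((source_id, text))   # the AR caption renderer drains this queue

sources = [(0, "zh", [b"..."] * 3), (1, "es", [b"..."] * 3)]   # illustrative inputs
threads = [threading.Thread(target=worker, args=s, daemon=True) for s in sources]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not captions.empty():
    print(captions.get())
```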
## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gameify the entire learning experience and make it immersive all while providing users with the resources to dig deeper into concepts.

## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.

## How we built it
We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and concurrent MVVM design architecture. 3D Models were converted through Reality Converter as .usdz files for AR modelling. We stored these files on the Google Cloud Bucket Storage, with their corresponding metadata on CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.

## Challenges we ran into
Learning to build for the VisionOS was challenging mainly due to the lack of documentation and libraries available. We faced various problems with 3D Modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations within the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!

## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life.
## Inspiration
Our inspiration for this project was to empower individuals, enabling them to make well-informed financial choices effortlessly. Our goal was to eliminate the need for extensive hours of sifting through intricate credit card terms and conditions, making financial decision-making accessible to everyone.

## What it does
The application stores information about the various credit cards used by the client. When a client wishes to make a purchase, they can simply open the app. It will then utilize location data to automatically identify the type of store they are in and recommend the most suitable credit card for that specific transaction, optimizing their benefits and rewards.

## How we built it
The framework of the mobile application is React Native. We used Python for information processing and retrieval to and from the Google Places API as well as the OpenAI API. We also used JavaScript for app functionality.

## Challenges we ran into
When we retrieved users' location data, we could initially only map their location to an address that might contain multiple stores, depending on store density (in plazas, malls, etc.). We solved this problem by having the program pick the closest store within a 100m radius of the retrieved address. (A rough sketch of this lookup follows this write-up.)

## Accomplishments that we're proud of
Building a React Native app with no prior experience with the framework. Working around hurdles within our tech stack while maintaining the original project idea. Utilizing and learning how to use new APIs (OpenAI, Google Places).

## What we learned
How to work efficiently in a team, by finding what needs to be done and delegating tasks among the group.

## What's next for Blitz
Future additions for Blitz are to add a database of credit cards and their rewards, so the user can search for the card they have and add it to their account; index our own map data to make it more accurate for our use case; track user spending to recommend new cards; create a physical card that stores all the other credit card data and automatically chooses the card for you; and a Chrome extension for online checkout.
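One way the closest-store fix described above could be sketched is a Places Nearby Search around the phone's coordinates, taking the nearest result and reading its type; the API key and coordinates are placeholders, and this is an illustration rather than Blitz's production request.

```python
# Sketch: find the closest store to the user's location with the Places Nearby Search API.
# The API key and coordinates are placeholders; error handling is omitted for brevity.
import requests

API_KEY = "YOUR_GOOGLE_PLACES_KEY"   # placeholder
lat, lng = 43.6629, -79.3957         # placeholder user location

resp = requests.get(
    "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
    params={
        "location": f"{lat},{lng}",
        "rankby": "distance",        # results come back nearest-first
        "type": "store",
        "key": API_KEY,
    },
    timeout=5,
).json()

if resp.get("results"):
    nearest = resp["results"][0]
    print(nearest["name"], nearest.get("types", []))
    # The store's types (e.g. "supermarket", "gas_station") would then be mapped
    # to the reward categories of the user's saved credit cards.
```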
## Inspiration
Our inspiration came from the challenge proposed by Varient, a data aggregation platform that connects people with similar mutations together to help doctors and users.

## What it does
Our application allows the user to upload an image file. The image is then sent to Google Cloud's Document AI to extract the body of text, process it, and match it against the datastore of gene names.

## How we built it
While originally we had planned to feed this body of text to a Vertex AI ML model for entity extraction, the trained model was not accurate due to a small dataset. Additionally, we attempted to build a BigQuery ML model but struggled to format the data in the target column as required. Due to time constraints, we explored a different path and downloaded a list of gene symbols from the HUGO Gene Nomenclature Committee's website (<https://www.genenames.org/>). Using Node.js and Multer, the image is processed and the text contents are efficiently matched against the datastore of gene names. The app returns a JSON of the matching strings in order of highest frequency. (A small illustration of this matching idea follows this write-up.) The web app is hosted on Google Cloud through App Engine at (<https://uofthacksix-chamomile.ue.r.appspot.com/>). The UI, while very simple, is easy to use. The intent of this project was to create something that could easily be integrated into Varient's architecture; converting this project into an API that passes the JSON to the client would be very simple.

## How it meets the theme "restoration"
The overall goal of this application, which has been partially implemented, was to match mutated gene names from user-uploaded documents and connect users with resources and with others who share the same gene mutation. This would allow them to share strategies or items that have helped them deal with living with the gene mutation, letting these individuals restore some normalcy in their lives.

## Challenges we ran into
Some of the challenges we faced:
* having a small dataset to train Vertex AI on
* time constraints on learning the new technologies and the best way to use them effectively
* formatting the data in the target column when attempting to build a BigQuery ML model

## Accomplishments that we're proud of
The accomplishment that we are all proud of is the exposure we gained to all the new technologies we discovered and used this weekend. We had no idea how many AI tools Google offers. Taking the risk to step out of our comfort zone and attempt to learn and use them in such a short amount of time is something we are all proud of.

## What we learned
This entire project was new to all of us. We had never used Google Cloud in this manner before, only for Firestore. We were unfamiliar with Express, and machine learning was something only one of us had a small amount of experience with. We learned a lot about Google Cloud and how to access the API through Python and Node.js.

## What's next for Chamomile
The hack is not as complete as we would like, since ideally there would be a machine learning component to confirm the guesses made by the substring matching, and more data to improve the Vertex AI model. Improving on this would be a great step for this project, along with a more put-together UI to match the theme of this application.
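The matching step above (scan the extracted text against the HGNC gene-symbol list and return matches ordered by frequency) is simple enough to sketch; the file name and tokenisation below are assumptions, and the actual project implements this step in Node.js rather than Python.

```python
# Sketch: match OCR-extracted text against a list of gene symbols, most frequent first.
# The symbol file name is a placeholder; the real project does this in Node.js.
import re
from collections import Counter

with open("hgnc_gene_symbols.txt") as f:              # one symbol per line, e.g. BRCA1
    gene_symbols = {line.strip().upper() for line in f if line.strip()}

def match_genes(document_text: str):
    tokens = re.findall(r"[A-Za-z0-9-]+", document_text.upper())
    hits = Counter(t for t in tokens if t in gene_symbols)
    return [{"gene": g, "count": c} for g, c in hits.most_common()]

print(match_genes("Variant of uncertain significance in BRCA1; BRCA1 c.68_69delAG ..."))
```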
## Inspiration
The inspiration for this project came from the UofTHacks Restoration theme and Varient's project challenge. The initial idea was to detect a given gene mutation in a given genetic testing report. This is an extremely valuable asset for the medical community, given the current global situation with the COVID-19 pandemic. As we can already see, misinformation and distrust in the medical community continue to grow, so we must try to leverage technology to solve this ever-expanding problem. One way Geneticheck can restore public trust in the medical community is by bridging the gap between confusing medical reports and the average person's medical understanding.

## What it does
Geneticheck is a smart software that allows a patient, or the parents of patients with rare diseases, to gather more information about their specific conditions and genetic mutations. The reports are scanned to find the gene mutation, and Geneticheck shows where the gene mutation is located on the original report. It also provides the patient with more information regarding their gene mutation, specifically the associated diseases and phenotypes (related symptoms) they may now have. Given a gene mutation, the software searches through the Human Phenotype Ontology database and auto-generates a PDF report that lists all the information a patient will need following a genetic test. The descriptions for each phenotype are given in layman-like language, which allows the patient to understand the symptoms associated with the gene mutation, resulting in patients and loved ones being more observant of their status.

## How we built it
Geneticheck was built using Python and Google Cloud's Vision API. Other libraries were also explored, such as PyTesseract, but they yielded lower gene detection results. (A rough sketch of the OCR step appears after this write-up.)

## Challenges we ran into
One major challenge was initially designing the project in Python. Development in Python was chosen for its rapid R&D capabilities and the potential need to do image processing in OpenCV. As the project developed and Google Cloud's Vision API was deemed acceptable for use, moving to a web-based Python framework was judged too time-consuming. In the interest of time, the Python-based command line tool had to be selected as the current basis of interaction.

## Accomplishments that we're proud of
One proud accomplishment of this project is the success rate of the overall algorithm: it successfully detects all 47 gene mutations in their related images. The other great accomplishment was the quick development of PDF generation software to expand the capabilities and scope of the project, providing the end user/patient with more information about their condition and ultimately restoring their faith in the medical field through better understanding and knowledge.

## What we learned
Topics learned include OCR in Python, optimizing images for text OCR with PyTesseract, PDF generation in Python, setting up Flask servers, and a lot about genetic data!

## What's next for Geneticheck
The next steps include porting the working algorithms over to a web-based framework, such as React. Running the algorithms behind a web front end would give the user browser-based interaction, which is the best interactive format for the everyday person. Other steps are to gather more genetic test results and to provide treatment options in the reports as well.
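As a rough sketch of the OCR step above (not Geneticheck's code), Google Cloud's Vision client can pull the text out of a scanned report before the gene lookup and HPO phenotype search run; the file path and gene list are illustrative, and application credentials are assumed to be configured in the environment.

```python
# Sketch: OCR a genetic-test report with the Cloud Vision API, then scan for gene symbols.
# Assumes GOOGLE_APPLICATION_CREDENTIALS is configured; paths and symbols are illustrative.
from google.cloud import vision

def extract_report_text(path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.document_text_detection(image=image)   # dense-text OCR
    return response.full_text_annotation.text

text = extract_report_text("reports/sample_report.png")
for symbol in ("CFTR", "MECP2", "SCN1A"):                     # illustrative gene list
    if symbol in text.upper():
        print(f"Found mutation gene: {symbol}")
        # the matched symbol then drives the HPO phenotype lookup and the PDF report
```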
## Inspiration

## What it does
PhyloForest helps researchers and educators by improving how we see phylogenetic trees. Strong, useful data visualization is key to new discoveries and patterns. Thanks to our product, users can better perceive the depth of trees because we communicate widths rather than lengths, and the distance between proteins is based on actual branch lengths scaled to size.

## How we built it
We used EggNOG to get phylogenetic trees in Newick format, then parsed them using a recursive algorithm to get the differences within the protein group in question. We connected names to IDs using the EBI (European Bioinformatics Institute) database, then took the lengths between the proteins and scaled them to size for our Unity environment. After we put together all this information, we went through an extensive integration process with Unity. We used EBI APIs for taxon information, and EggNOG gave us NCBI (National Center for Biotechnology Information) identities and structure. We could not use local NCBI lookup (as EggNOG does) due to the limitations of virtual reality headsets, so we used the EBI taxon lookup API instead to make the tree interactive and accurately reflect the taxon information of each species in question. Lastly, we added UI components to make the app easy to use for both educators and researchers.

## Challenges we ran into
Parsing the EggNOG Newick tree was our first challenge because there was limited documentation and the data sets were very large, which made it difficult to debug results, especially with the Unity interface. We also had difficulty finding a database that could connect NCBI IDs to taxon information with our VR headset. We also had to implement a binary tree structure from scratch in C#. Lastly, we had difficulty scaling the orthologs horizontally in VR in a way that would preserve the true relationships between the species.

## Accomplishments that we're proud of
The user experience is very clean and immersive, allowing anyone to visualize these orthologous groups. Furthermore, we think this occupies a unique space that intersects the fields of VR and genetics. Our features, such as depth and linearized length, would not be as cleanly implemented in a 2-dimensional model.

## What we learned
We learned how to parse Newick trees, how to display a binary tree with branches dependent on certain lengths, and how to create a model that relates large amounts of data on base pair differences in DNA sequences to something that highlights these differences in an innovative way.

## What's next for PhyloForest
Making the UI more intuitive so that anyone would feel comfortable using it. We would also like to display more information when you click on each ortholog in a group, expand the number of proteins people can select, and let users manipulate proteins by dragging branches to better identify patterns between orthologs.
## Inspiration
I'm taking a class called How To Make (Almost) Anything that goes through many aspects of digital fabrication and embedded systems. For the first assignment we had to design a model for our final project while trying out different modeling software. As a beginner, I decided to take the opportunity to learn more about Unity through this hackathon.

## What it does
It plays like the 15-tile sliding block puzzle game.

## How we built it
I used Unity.

## Challenges we ran into
Unity is difficult to navigate; there were a lot of hidden settings that made things not show up or scale. Since I'm not familiar with C# or Unity, I spent a lot of time learning about different methods and data structures. Referencing objects across the different scripts and attributes is not obvious, and I ran into a lot of those kinds of issues.

## Accomplishments that we're proud of
About 60% functional.

## What's next for 15tile puzzle game
Making it 100% functional.
## Inspiration
We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware, who talked about how using textiles would be great in a project. Something clicked, and we started brainstorming ideas. We ended up with this project, which could help a lot of people in need, including friends and family close to us.

## What it does
It senses the orientation of your hand and outputs either a key press, a mouse movement, or a mouse press. What it outputs is completely up to the user.

## How we built it
We sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.

## Challenges we ran into
Limited resources because certain hardware components were out of stock, time management (because of all the fun events!), and Arduino communication through the serial port.

## Accomplishments that we're proud of
We all learned new skills, like sewing, coding in C++, and programming the Arduino to communicate with other languages, like C#. We're also proud of the fact that we fully completed our project, even though it's our first hackathon.

## What we learned
~~how 2 not sleep lolz~~ Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.

## What's next for this project
We didn't get to add all the features we wanted, due to both hardware and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making it wireless, and adding a touch sensor to the glove.
## Inspiration
I was inspired to make this device while sitting in physics class. I really felt compelled to make something that applied what I learned inside the classroom to something practical. Growing up I always remember playing with magnetic kits and loved the feeling of repulsion between magnets.

## What it does
There is a base layer of small magnets all taped together so the north pole faces up. Hall effect sensors measure the variances in the magnetic field created by the user's magnet, which is attached to their finger. This allows the device to track the user's finger and determine how they are interacting with the upward-pointing magnetic field.

## How I built it
It is built using the Intel Edison. Each Hall effect sensor is either on or off depending on whether there is a magnetic field pointing down through the face of the black plate, which determines where the user's finger is. From there the analog data is sent via serial port to a Processing program on the computer that demonstrates that it works; it simply takes the data and maps the motion of the object. (A rough sketch of the receiving side follows this write-up.)

## Challenges I ran into
There were many challenges. Two of them dealt with just the hardware. I bought the wrong type of sensors: these are threshold sensors, which are either on or off, instead of linear sensors that give a voltage proportional to the strength of the magnetic field around them, which would have allowed the device to be more accurate. The other challenge was dealing with a lot of very small, worn-out magnets. I had to find a way to tape and hold them all together because they sit in an unstable configuration when creating an almost uniform magnetic field on the base. Another problem was the Edison itself: I was planning on just controlling the mouse to show that it works, but the mouse library only works with the Arduino Leonardo. I had to come up with a way to transfer the data to another program, which is how I ended up dealing with serial ports; I initially tried mapping it into a Unity game.

## Accomplishments that I'm proud of
I am proud of creating a hardware hack that I believe is practical. I used this device to prove the concept of creating a more interactive environment for the user with a sense of touch, rather than tools like the Kinect and Leap Motion that track your motion in thin air without any real interaction. Some areas where this concept could be useful are learning environments or physical therapy, helping people learn to do things again after a tragedy, since it is always better to learn with a sense of touch.

## What I learned
I had a grand vision of this project beforehand and thought it was going to work out great in theory! I learned how to adapt to many changes and overcome them with limited time and resources. I also learned a lot about dealing with serial data and how the Intel Edison works at a machine level.

## What's next for Tactile Leap Motion
Creating a better prototype with better hardware (stronger magnets and more accurate sensors).
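The serial hand-off described above (the board streams the on/off state of each Hall-effect sensor, and a desktop program maps that to a finger position) might look roughly like this on the receiving side; the port name, the line format, and the 4x4 grid size are all assumptions invented for the illustration.

```python
# Sketch: read a grid of on/off Hall-effect states over serial and track the finger cell.
# Port name, baud rate, line format, and the 4x4 grid size are illustrative assumptions.
import serial  # pyserial

ROWS, COLS = 4, 4

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if len(line) != ROWS * COLS or not set(line) <= {"0", "1"}:
            continue                      # expect e.g. "0000010000000000"
        if "1" in line:
            idx = line.index("1")         # first sensor triggered by the finger magnet
            row, col = divmod(idx, COLS)
            print(f"finger over cell ({row}, {col})")
            # this (row, col) is what the display program maps to object motion
```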
## What it does
Khaledifier replaces all quotes and images around the internet with pictures and quotes of DJ Khaled!

## How we built it
A Chrome web app written in JS interacts with live web pages to make changes. The app sends a quote to a server which tokenizes the words into types using NLP. This server then makes a call to an Azure Machine Learning API that has been trained on DJ Khaled quotes to return the closest matching one.

## Challenges we ran into
Keeping the server running with older Python packages, and for free, proved to be a bit of a challenge.
## Inspiration
Inspired by a team member's desire to study for his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app.

## What it does
Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (it only needs to be a few seconds long) and a PDF of a textbook, and uses existing deepfake technology to synthesize the dictation from the textbook using the user's favorite voice. The deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time domain via the WaveRNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents!

## How we built it
We combined a number of different APIs and technologies to build this app. For scalable machine learning and intelligence compute, we relied heavily on the Google Cloud APIs, including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing deepfake code written for Python and TensorFlow (see the GitHub repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app manipulates.

## Challenges we ran into
Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies of communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate the existing deepfake/voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning.

## Accomplishments that we're proud of
We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into a usable app within hours.

## What we learned
We learned that sometimes the seemingly simplest things (dealing with Python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful.
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products.

## What's next for EduVoicer
EduVoicer still has a long way to go before it could gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide which parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan on both increasing efficiency (time-wise) and scaling the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
## Inspiration
What if you could automate one of the most creative performances that combine music and spoken word? Everyone's watched those viral videos of insanely talented rappers online, but what if you could reach that level of skill? Enter **ghostwriter**, freestyling reimagined.

## What it does
**ghostwriter** allows you to skip through pre-selected beats, where it will then listen to your bars and suggest possible rhymes to help you freestyle. With the 'record' option, you can listen back to your freestyles and even upload them to share with your friends and listen to your friends' freestyles.

## How we built it
In order to build **ghostwriter** we used Google Cloud Services for speech-to-text transcription, the Cohere API for rhyming suggestions, Socket.io for real-time communication between frontend and backend, Express.js for the backend, and the CockroachDB distributed SQL database to store transcriptions as well as the audio files. We used React for the frontend and styled it with the Material UI library. (A toy stand-in for the rhyme-suggestion step appears after this write-up.)

## Challenges we ran into
We had some challenges detecting where the end of a bar might be, as different rhyme schemes and flows have varying pauses. Instead, we decided to display rhyming suggestions for each word, so the user has the freedom to decide when they want to end their bar and start another. Another issue was figuring out the latency of the API calls to make sure the data was retrieved in time for the user to think of another bar. Finally, we also had some trouble using audio media players to record the user's freestyle along with the background music; however, we were able to find a solution in the end.

## Accomplishments that we're proud of
We are really proud to say that what we created during the past 36 hours meets its intended purpose. We were able to put all the components of this project in motion for the software to successfully hear our words and generate rhyming suggestions in time for the user to think of another line and continue their freestyle. Additionally, using technologies that were new to us and coding away until we reached our goal expanded our technical expertise.

## What we learned
We learned how to use React and move text around to match our desired styling. Next, we learned how to interact with numerous APIs (including Cohere's) in order to organize the data in the way most efficient for us to display to the user. Finally, we got a bit better at freestyling ourselves.

## What's next for Ghostwriter
For **ghostwriter**, we aim to curate freestyle beats more carefully and build a social community to highlight the most fire freestyles. Our goal is to turn today's rappers into tomorrow's Hip-Hop legends!
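Ghostwriter's rhyme suggestions come from the Cohere API; purely as a toy stand-in for that step (an assumption, not the team's implementation), the CMU Pronouncing Dictionary via the `pronouncing` package can suggest rhymes for the last transcribed word of a bar.

```python
# Sketch: suggest rhymes for the last word of the user's latest bar.
# Uses the CMU Pronouncing Dictionary as a toy stand-in for the Cohere-based suggester.
import pronouncing

def suggest_rhymes(bar: str, limit: int = 8):
    last_word = bar.strip().split()[-1].strip(".,!?").lower()
    return pronouncing.rhymes(last_word)[:limit]

# Prints a list of candidate rhymes, which the UI would surface while the beat plays.
print(suggest_rhymes("I came through the door ready to flow"))
```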
## Why we made Time Capsule
Traditional physical photo albums & time capsules are not easily accessible or sharable and are limited in storage capabilities. And while cloud-based photo album services offer remote access, collaborative sharing, and automatic backup, you are not in full control of your photos, there is often a subscription cost, and a risk of deletion.

## What it does
Time\_capsule.tech is a blockchain-based **photo album** that employs smart contracts to function as a **virtual time capsule** for each image. By storing and encrypting your photos on an *Interplanetary File System* (IPFS) 🪐🌌, the risk of data loss is minimised greatly, as well as adding **unparalleled security, permanence, and control of your own memories**. 📷

## How we built it
While similar to Next.js, the front end was built with **Starknet.js**, a frontend library for easy integration with Starknet custom hooks and components. Also, **Cairo** with intermediary **Sierra** was used for the deployment of contracts both locally as well as remotely on IDEs such as Remix. Finally, to ensure that images remained decentralized, we strived to use an **IPFS** system to host our images. And also *a lot* of dedication. 🔥

## Accomplishments that we're proud of
* Setting up a local devnet for deploying contracts
* Understanding the file structure of Starknet.js
* Trying most of the outdated tech for IPFS

## What we learned / Challenges we ran into
We learned about blockchain, specifically smart contracts and their use cases. On a technical level, we learned about Cairo development, standards for ERC20 contracts, and differences in Starknet.js. On a more practical level, each member brought unique skills and perspectives to the table, fostering a fun and constructive environment. Our collective efforts resulted in an overall successful outcome as well as a positive and enjoyable working experience.

## What's next for Time Capsule
* A more thorough implementation of DevOps tools such as Vercel for branch deployment as well as Github actions for functional testing
* 3-D visualisation of photos with libraries such as three.js or CSS animations
* Incorporate other Ethereum branches onto the network
* Sleep 🛌, gaming 🖥️ 🎮

Overall, it was a great time for all and it was a pleasure attending this year's event.
## Inspiration
According to Statistics Canada, nearly 48,000 children are living in foster care. In the United States, there are ten times as many. Teenagers aged 14-17 are the most at risk of aging out of the system without being adopted, and many choose to opt out when they turn 18. At that age, most youths like our team are equipped with a lifeline back to a parent or relative. However, without the benefit of a stable and supportive home, fostered youths, after emancipation, lack consistent security for their documents, tacit guidance for practical tasks, and moral aid in building meaningful relationships through life's ups and downs. Despite the success possible during foster care, there is overwhelming evidence that our conventional system alone inherently cannot guarantee the necessary support to bridge a foster youth's path into adulthood once they exit the system.

## What it does
A virtual, encrypted, and decentralized safe for essential records. There is a built-in scanner function and a resource of contacts who can mentor and aid the user. Alerts can prompt the user to tasks such as booking annual doctors' appointments and tell them, for example, about openings for suitable housing and jobs. Youth in foster care can start using the app at age 14 and slowly build a foundation well before they plan for emancipation.

## How we built it
The essential decentralized component of this application, which stores images on an encrypted blockchain, was built on the Internet Computer Protocol (ICP) using Node JS and Azle. Node JS and React were also used to build our user-facing component. Encryption and decryption were done using CryptoJS.

## Challenges we ran into
ICP turned out to be very difficult to work with - attempting to publish the app to a local but discoverable device was nearly impossible. Apart from that, working with such a novel technology through an unfamiliar library caused many small yet significant mistakes that we wouldn't have been able to resolve without the help of ICP mentors. There were many features we worked on that were put aside to prioritize, first and foremost, the security of the users' sensitive documents.

## Accomplishments that we're proud of
Since this was the first time any of us worked on blockchain, having a working application make use of such a technology was very satisfying. Some of us also worked with React and front-end development for the first time, and others worked with package managers like npm for the first time as well. Apart from the hard skills developed throughout the hackathon, we're also proud of how we distributed the tasks amongst ourselves, allowing us to stay (mostly) busy without overworking anyone.

## What we learned
As it turns out, making a blockchain application is easier than expected! The code was straightforward and ICP's tutorials were easy to follow. Instead, we spent most of our time wrangling with our coding environment, and this experience gave us a lot of insight into computer networks, blockchain organization, CORS, and methods of accessing blockchain applications through code run in standard web apps like React.

## What's next for MirrorPort
Since the conception of MirrorPort, it has always been planned to become a safe place for marginalized youths. Often, they lose contact with adults who have mentored or housed them. This app will provide this information to the user, with the consent of the mentor.
Additionally, alerts will be implemented to prompt the user to tasks such as booking annual doctors' appointments and to tell them, for example, about openings for suitable housing and jobs. It could also be a tool for tracking progress against their aspirations and providing tailored resources that map out a transition plan. We're looking to migrate the dApp to mobile for more accessibility and portability, and 2FA would be implemented for login security. Adding a document translation feature would also make the dApp work well with immigrant documents across borders.
## Inspiration It’s Friday afternoon, and as you return from your final class of the day cutting through the trailing winds of the Bay, you suddenly remember the Saturday trek you had planned with your friends. Equipment-less and desperate you race down to a nearby sports store and fish out $$$, not realising that the kid living two floors above you has the same equipment collecting dust. While this hypothetical may be based on real-life events, we see thousands of students and people alike impulsively spending money on goods that would eventually end up in their storage lockers. This cycle of buy-store-collect dust inspired us to develop Lendit this product aims to stagnate the growing waste economy and generate passive income for the users on the platform. ## What it does A peer-to-peer lending and borrowing platform that allows users to generate passive income from the goods and garments collecting dust in the garage. ## How we built it Our Smart Lockers are built with RaspberryPi3 (64bit, 1GB RAM, ARM-64) microcontrollers and are connected to our app through interfacing with Google's Firebase. The locker also uses Facial Recognition powered by OpenCV and object detection with Google's Cloud Vision API. For our App, we've used Flutter/ Dart and interfaced with Firebase. To ensure *trust* - which is core to borrowing and lending, we've experimented with Ripple's API to create an Escrow system. ## Challenges we ran into We learned that building a hardware hack can be quite challenging and can leave you with a few bald patches on your head. With no hardware equipment, half our team spent the first few hours running around the hotel and even the streets to arrange stepper motors and Micro-HDMI wires. In fact, we even borrowed another team's 3-D print to build the latch for our locker! On the Flutter/ Dart side, we were sceptical about how the interfacing with Firebase and Raspberry Pi would work. Our App Developer previously worked with only Web Apps with SQL databases. However, NoSQL works a little differently and doesn't have a robust referential system. Therefore writing Queries for our Read Operations was tricky. With the core tech of the project relying heavily on the Google Cloud Platform, we had to resolve to unconventional methods to utilize its capabilities with an internet that played Russian roulette. ## Accomplishments that we're proud of The project has various hardware and software components like raspberry pi, Flutter, XRP Ledger Escrow, and Firebase, which all have their own independent frameworks. Integrating all of them together and making an end-to-end automated system for the users, is the biggest accomplishment we are proud of. ## What's next for LendIt We believe that LendIt can be more than just a hackathon project. Over the course of the hackathon, we discussed the idea with friends and fellow participants and gained a pretty good Proof of Concept giving us the confidence that we can do a city-wide launch of the project in the near future. In order to see these ambitions come to life, we would have to improve our Object Detection and Facial Recognition models. From cardboard, we would like to see our lockers carved in metal at every corner of this city. As we continue to grow our skills as programmers we believe our product Lendit will grow with it. We would be honoured if we can contribute in any way to reduce the growing waste economy.
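To illustrate the computer-vision step mentioned in "How we built it": the lockers use OpenCV-powered facial recognition on the Raspberry Pi. The loop below is only a rough sketch of the detection half of that pipeline and is not the team's code; recognizing *who* the borrower is would need an additional step (for example an LBPH recognizer or an embedding model) plus the Firebase booking check, which is shown only as a hypothetical hook.

```python
# Minimal sketch of the face-detection step on the Raspberry Pi (not the team's actual code).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)  # Pi camera exposed as a video device

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # hypothetical hook: verify the borrower, then drive the stepper to open the latch
        print(f"face detected ({len(faces)}), checking against the Firebase booking...")
    cv2.imshow("lendit", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```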
## Inspiration The Canadian winter's erratic bouts of chilling cold have caused people who have to be outside for extended periods of time (like avid dog walkers) to suffer from frozen fingers. The current method of warming up your hands using hot pouches that don't last very long is inadequate in our opinion. Our goal was to make something that kept your hands warm and *also* let you vent your frustrations at the terrible weather. ## What it does **The Screamathon3300** heats up the user's hand based on the intensity of their **SCREAM**. It interfaces an *analog electret microphone*, *LCD screen*, and *thermoelectric plate* with an *Arduino*. The Arduino continuously monitors the microphone for changes in volume intensity. When an increase in volume occurs, it triggers a relay, which supplies 9 volts, at a relatively large amperage, to the thermoelectric plate embedded in the glove, thereby heating the glove. Simultaneously, the Arduino displays an encouraging prompt on the LCD screen based on the volume of the scream. ## How we built it The majority of the design process was centered around the use of the thermoelectric plate. Some research and quick experimentation helped us conclude that the thermoelectric plate's increase in heat was dependent on the amount of supplied current. This realization led us to use two separate power supplies: a 5 volt supply from the Arduino for the LCD screen, electret microphone, and associated components, and a 9 volt supply solely for the thermoelectric plate. Both circuits were connected through a relay (driven by the Arduino output) which controlled the connection between the 9 volt supply and the thermoelectric load. This design decision provided electrical isolation between the two circuits, which is much safer than having common sources and ground when 9 volts and large currents are involved with an Arduino and its components. Safety features directed the rest of our design process, like the inclusion of a kill-switch which immediately stops power being supplied to the thermoelectric load, even if the user continues to scream. Furthermore, a potentiometer placed in parallel with the thermoelectric load gives control over how quickly the heat increases, as it limits the current flowing to the load. ## Challenges we ran into We tried to implement a feedback loop with ambient temperature sensors, but even when the plate's temperature changed significantly, the sensors registered only very small changes. Our goal of having an optional, non-scream-controlled mode ultimately failed because we never got that sensor feedback system working. Since we did not own components such as the microphone, relay, or battery pack, we could not solder many connections, so we could not make a permanent build. ## Accomplishments that we're proud of We're proud of using a unique transducer (a thermoelectric plate) that uses an uncommon trigger (current instead of voltage level), which forced us to design with added safety considerations in mind. Our design was also constructed of entirely sustainable materials, other than the electronics. We also achieved a seamless integration of analog and digital signals in the circuit (baby mixed-signal processing). ## What we learned We had very little prior experience interfacing thermoelectric plates with an Arduino. We learned to effectively leverage analog signal inputs to reliably trigger our desired system output, as well as manage physical device space restrictions (for it to be wearable).
## What's next for Screamathon 3300 We love the idea of people having to scream continuously to get a job done, so we will expand our line of *Scream* devices, such as the scream-controlled projectile launcher, scream-controlled coffee maker, scream-controlled alarm clock. Stay screamed-in!
## Inspiration Noise sensitivity is common in autism, but it can also affect individuals without autism. Research shows that 50 to 70 percent of people with autism experience hypersensitivity to everyday sounds. This inspired us to create a wearable device to help individuals with heightened sensory sensitivities manage noise pollution. Our goal is to provide a dynamic solution that adapts to changing sound environments, offering a more comfortable and controlled auditory experience. ## What it does SoundShield is a wearable device that adapts to noisy environments by automatically adjusting calming background audio and applying noise reduction. It helps individuals with sensory sensitivities block out overwhelming sounds while keeping them connected to their surroundings. The device also alerts users if someone is behind them, enhancing both awareness and comfort. It filters out unwanted noise using real-time audio processing and only plays calming music if the noise level becomes too high. If it detects a person speaking or if the noise is low enough to be important, such as human speech, it doesn't apply filters or background music. ## How we built it We developed SoundShield using a combination of real-time audio processing and computer vision, integrated with a Raspberry Pi Zero, a headphone, and a camera. The system continuously monitors ambient sound levels and dynamically adjusts music accordingly. It filters noise based on amplitude and frequency, applying noise reduction techniques such as Spectral Subtraction, Dynamic Range Compression to ensure users only hear filtered audio. The system plays calming background music when noise levels become overwhelming. If the detected noise is low, such as human speech, it leaves the sound unfiltered. Additionally, if a person is detected behind the user and the sound amplitude is high, the system alerts the user, ensuring they are aware of their surroundings. ## Challenges we ran into Processing audio in real-time while distinguishing sounds based on frequency was a significant challenge, especially with the limited computing power of the Raspberry Pi Zero. Additionally, building the hardware and integrating it with the software posed difficulties, especially when ensuring smooth, real-time performance across audio and computer vision tasks. ## Accomplishments that we're proud of We successfully integrated computer vision, audio processing, and hardware components into a functional prototype. Our device provides a real-world solution, offering a personalized and seamless sensory experience for individuals with heightened sensitivities. We are especially proud of how the system dynamically adapts to both auditory and visual stimuli. ## What we learned We learned about the complexities of real-time audio processing and how difficult it can be to distinguish between different sounds based on frequency. We also gained valuable experience in integrating audio processing with computer vision on a resource-constrained device like the Raspberry Pi Zero. Most importantly, we deepened our understanding of the sensory challenges faced by individuals with autism and how technology can be tailored to assist them. ## What's next for SoundSheild We plan to add a heart rate sensor to detect when the user is becoming stressed, which would increase the noise reduction score and automatically play calming music. Additionally, we want to improve the system's processing power and enhance its ability to distinguish between human speech and other noises. 
We're also researching specific frequencies that can help differentiate between meaningful sounds, like human speech, and unwanted noise to further refine the user experience.
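As a rough illustration of the noise-reduction step described above (not the shipped code), spectral subtraction can be sketched in a few lines: estimate an average noise spectrum from a quiet clip, subtract it from each STFT frame, and gate the calming music on the residual level. The sample rate, frame size, and decibel threshold below are assumptions.

```python
# Hedged sketch of the spectral-subtraction idea: subtract an estimated noise
# spectrum from each STFT frame, keep the original phase, and only trigger the
# calming audio when the residual level stays high.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(audio, noise_clip, fs=16000, nperseg=512):
    _, _, S = stft(audio, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_clip, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # average noise magnitude per frequency bin
    mag = np.clip(np.abs(S) - noise_mag, 0.0, None)     # subtract, floor at zero
    cleaned = mag * np.exp(1j * np.angle(S))            # keep the original phase
    _, out = istft(cleaned, fs=fs, nperseg=nperseg)
    return out

def too_loud(chunk, threshold_db=-20.0):
    """Simple RMS gate for deciding when to start the background music (threshold assumed)."""
    rms = np.sqrt(np.mean(chunk ** 2) + 1e-12)
    return 20 * np.log10(rms) > threshold_db
```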
## Inspiration Our goal as a team was to use emerging technologies to promote a healthy living through exclaiming the importance of running. The app would connect to the user’s strava account track running distance and routes that the user runs and incentivise this healthy living by rewarding the users with digital artwork for their runs. There is a massive runners’ community on strava where users regularly share their running stats as well as route maps. Some enthusiasts also try to create patterns with their maps as shown in this reddit thread: <https://www.reddit.com/r/STRAVAart/>. Our app takes this route image and generates an NFT for the user to keep or sell on OpenSea. ## What it does ruNFT’s main goal is to promote a healthy lifestyle by incentivizing running. The app connects to users’ Strava accounts and obtains their activity history. Our app uses this data to create an image of each run on a map and generates an NFT for the user. Additionally, this app builds a community of health enthusiasts that can view and buy each other’s NFT map collections. There are also daily, weekly, and all time leaderboards that showcase stats of the top performers on the app. Our goal is to use this leaderboard to derive value for the NFTs as users with the best stats will receive rarer, more valuable tokens. Overall, this app serves as a platform for runners to share their stats, earn tokens for living a healthy lifestyle, and connect with other running enthusiasts around the world. With the growing interest of NFTs booming in the blockchain market with many new individuals taking interest in collecting NFTs, runners can now use our app to create and access their NFTs while using it as motivation to improve their physical health. ## How we built it The front-end was developed using flutter. Initial sketches of how the user interface would look was conceptualized in photoshop where we decided on the color-scheme and the layout. We took these designs to flutter using some online tutorials as well as some acquired tips from the DeltaHacks Flutter workshop. Most of the main components in the front-end were buttons and a header for navigation as well as a form for some submissions regarding minting. The backend was hosted on Heroku and consisted of manipulating and providing data to/from the Strava API, our MongoDB database, in which we used express to serve the data. We also integrated the ability to automate the minting process in the backend by using web.js and the alchemy api. We simply initiate the mintNFT method from on smart contract, while passing a destination wallet address, this is how our users are able to view and receive their minted strava activities. ## Challenges we ran into One of the biggest challenges we ran into was merge conflicts. While GitHub makes it very easy to share and develop code with a group of people, it became hard to distribute who was coding what, oftentimes creating merge conflicts. At many times, this obstacle would take away from our precious time so our solution was to use a scrum process where we had a sprint to develop a certain feature for 2 hour sprints and meeting after using discord to keep ourselves organized. Other challenges that we faced included production challenges with Rinkeby TestNet where its servers were down for hours into Saturday halting our production significantly, however, we overcame that challenge by developing creative ways in the local environment to test our features. 
Finally, working with flutter being new to us was a challenge of its own, however, it became very annoying when implementing some of the backend features from the Strava API. ## Accomplishments that we're proud of We are really proud of how we used the emerging popularity of NFTs to promote running as now the users will have an incentive to go running and practise a healthier lifestyle essentially giving running a value. We are also really proud of learning flutter and other technologies we used for development that we were not really familiar with. As emerging software engineers, we understand that it will be very important to keep up with new software languages, technologies and methodologies, and this weekend, from what we accomplished by building an app using something none of us knew proves we can continue to adapt and grow as developers. ## What we learned The biggest point of learning for us was how to use flutter for mobile app development since none of us had used flutter before. We were able to do research and learn how the flutter environment works and how it can make it really easy to create apps. With our group's growing interest in NFTs and the NFT market we also learnt a few important things when it comes to creating NFTs and managing them and also what gives NFTs or digital artwork value. ## What's next for ruNFT There are many features that we would like to continue developing in the interface of the app itself. We believe that there is so much more that the app can do for the user. One of the primary motives we have is to create a page that allows the user to see their own collection from within the app as well as a feature such as a blog where stories of running and the experiences of the users can be posted like a feed. Since the app is focused around NFTs, we want to set up a place where NFTs can be sold and bought from within the app using current blockchain technologies and secure transactions. This can make it easier for newer users to operate selling and buying of NFTs easily and do not need to access other resources for this. All in all, we are proud of what we have accomplished and with the constant changes in the markets and blockchain technologies, there are so many more new things that will come for us to implement.
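For the map-to-artwork step described in "What it does", the core idea is simply to draw the activity's GPS trace as a standalone image that then gets minted. The team's real pipeline runs in Node.js on Heroku; the Python sketch below is only an illustration, and it assumes the Strava polyline has already been decoded into (lat, lon) pairs.

```python
# Rough sketch of turning a run's GPS trace into the artwork that gets minted.
# Coordinates are assumed to be already-decoded (lat, lon) pairs from the activity polyline.
import matplotlib
matplotlib.use("Agg")              # headless rendering on a server
import matplotlib.pyplot as plt

def render_route(points, out_path="route.png"):
    lats = [p[0] for p in points]
    lons = [p[1] for p in points]
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.plot(lons, lats, linewidth=3)
    ax.set_aspect("equal")
    ax.axis("off")                 # just the shape of the run, no axes or basemap
    fig.savefig(out_path, dpi=200, bbox_inches="tight", transparent=True)
    plt.close(fig)
    return out_path

# Example with made-up coordinates:
# render_route([(43.260, -79.920), (43.270, -79.910), (43.265, -79.900)])
```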
## 💡 INSPIRATION 💡 Many students have **poor spending habits** and losing track of one's finances may cause **unnecessary stress**. As university students ourselves, we're often plagued with financial struggles. As young adults down on our luck, we often look to open up a credit card or take out student loans to help support ourselves. However, we're deterred from loans because they normally involve phoning automatic call centers which are robotic and impersonal. We also don't know why or what to do when we've been rejected from loans. Many of us weren't taught how to plan our finances properly and we frequently find it difficult to keep track of our spending habits. To address this problem troubling our generation, we decided to create AvaAssist! The goal of the app is to **provide a welcoming place where you can seek financial advice and plan for your future.** ## ⚙️ WHAT IT DOES ⚙️ **AvaAssist is a financial advisor built to support young adults and students.** Ava can provide loan evaluation, financial planning, and monthly spending breakdowns. If you ever need banking advice, Ava's got your back! ## 🔎RESEARCH🔍 ### 🧠UX Research🧠 To discover the pain points of existing banking practices, we interviewed 2 and surveyed 7 participants on their current customer experience and behaviors. The results guided us in defining a major problem area and the insights collected contributed to discovering our final solution. ### 💸Loan Research💸 To properly predict whether a loan would be approved or not, we researched what goes into the loan approval process. The resulting research guided us towards ensuring that each loan was profitable and didn't take on too much risk for the bank. ## 🛠️ HOW WE BUILT IT🛠️ ### ✏️UI/UX Design✏️ ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911782991204876348/Loan_Amount.gif) Figma was used to create a design prototype. The prototype was designed in accordance with Voice UI (VUI) design principles & Material design as a base. This expedited us to the next stage of development because the programmers had visual guidance in developing the app. With the use of Dasha.AI, we were able to create an intuitive user experience in supporting customers through natural dialog via the chatbot, and a friendly interface with the use of an AR avatar. Check out our figma [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=206%3A3694&scaling=min-zoom&page-id=206%3A3644&starting-point-node-id=206%3A3694&show-proto-sidebar=1) Check out our presentation [here](https://www.figma.com/proto/0pAhUPJeuNRzYDBr07MBrc/Hack-Western?node-id=61%3A250&scaling=min-zoom&page-id=2%3A2) ### 📈Predictive Modeling📈 The final iteration of each model has a **test prediction accuracy of +85%!** ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911592566829486120/unknown.png) We only got to this point because of our due diligence, preprocessing, and feature engineering. After coming up with our project, we began thinking about and researching HOW banks evaluate loans. Loan evaluation at banks is extremely complex and we tried to capture some aspects of it in our model. We came up with one major aspect to focus on during preprocessing and while searching for our datasets, profitability. There would be no point for banks to take on a loan if it weren't profitable. We found a couple of databases with credit card and loan data on Kaggle. The datasets were smaller than desired. 
We had to be very careful during preprocessing when deciding what data to remove and how to fill NULL values to preserve as much data as possible. Feature engineering was certainly the most painstaking part of building the prediction model. One of the most important features we added was the Risk Free Rate (CORRA). The Risk Free Rate is the rate of return of an investment with no risk of loss. It helped with the engineering process of another feature, min\_loan, which is the minimum amount of money that the bank can make with no risk of loss. Min\_loan would ultimately help our model understand which loans are profitable and which aren't. As a result, the model learned to decline unprofitable loans. ![alt text](https://cdn.discordapp.com/attachments/910655355661463584/911981729168887948/unknown.png) We also did market research on the average interest rate of specific types of loans to make assumptions about certain features to supplement our lack of data. For example, we used the average credit card loan interest rate of 22%. The culmination of newly engineered features and the already existing data resulted in our complex, high accuracy models. We have a model for Conventional Loans, Credit Card Loans, and Student Loans. The model we used was RandomForests from sklearn because of its wide variety of hyperparameters and robustness. It was fine-tuned using gridsearchCV to find its best hyperparameters. We designed a pipeline for each model using Pipeline, OneHotEncoder, StandardScaler, FunctionTransformer, GradientBoostingClassifier, and RandomForestClassifier from sklearn. Finally, the models were saved as pickle files for front-end deployment. ### 🚀Frontend Deployment🚀 Working on the frontend was a very big challenge. Since we didn't have a dedicated or experienced frontend developer, there was a lot of work and learning to be done. Additionally, a lot of ideas had to be cut from our final product as well. First, we had to design the frontend with React Native, using our UI/UX Designer's layout. For this we decided to use Figma, and we were able to dynamically update our design to keep up with any changes that were made. Next, we decided to tackle hooking up the machine learning models to React with Flask. Having Typescript communicate with Python was difficult. Thanks to these libraries and a lot of work, we were able to route requests from the frontend to the backend, and vice versa. This way, we could send the values that our user inputs on the frontend to be processed by the ML models, and have them give an accurate result. Finally, we took on the challenge of learning how to use Dasha.AI and integrating it with the frontend. Learning how to use DashaScript (Dasha.AI's custom programming language) took time, but eventually, we started getting the hang of it, and everything was looking good! ## 😣 CHALLENGES WE RAN INTO 😣 * Our teammate, Abdullah, who is no longer on our team, had family issues come up and was no longer able to attend HackWestern unfortunately. This forced us to get creative when deciding a plan of action to execute our ambitious project. We needed to **redistribute roles, change schedules, look for a new teammate, but most importantly, learn EVEN MORE NEW SKILLS and adapt our project to our changing team.** As a team, we had to go through our ideation phase again to decide what would and wouldn't be viable for our project. We ultimately decided to not use Dialogflow for our project. 
However, this was a blessing in disguise because it allowed us to hone in on other aspects of our project such as finding good data to enhance user experience and designing a user interface for our target market. * The programmers had to learn DashaScript on the fly which was a challenge as we normally code with OOP’s. But, with help from mentors and workshops, we were able to understand the language and implement it into our project * Combining the frontend and backend processes proved to be very troublesome because the chatbot needed to get user data and relay it to the model. We eventually used react-native to store the inputs across instances/files. * The entire team has very little experience and understanding of the finance world, it was both difficult and fun to research different financial models that banks use to evaluate loans. * We had initial problems designing a UI centered around a chatbot/machine learning model because we couldn't figure out a user flow that incorporated all of our desired UX aspects. * Finding good data to train the prediction models off of was very tedious, even though there are some Kaggle datasets there were few to none that were large enough for our purposes. The majority of the datasets were missing information and good datasets were hidden behind paywalls. It was for this reason that couldn't make a predictive model for mortgages. To overcome this, I had to combine datasets/feature engineer to get a useable dataset. ## 🎉 ACCOMPLISHMENTS WE ARE PROUD OF 🎉 * Our time management was impeccable, we are all very proud of ourselves since we were able to build an entire app with a chat bot and prediction system within 36 hours * Organization within the team was perfect, we were all able to contribute and help each other when needed; ex. the UX/UI design in figma paved the way for our front end developer * Super proud of how we were able to overcome missing a teammate and build an amazing project! * We are happy to empower people during their financial journey and provide them with a welcoming source to gain new financial skills and knowledge * Learning and implementing DashaAi was a BLAST and we're proud that we could learn this new and very useful technology. We couldn't have done it without mentor help, 📣shout out to Arthur and Sreekaran📣 for providing us with such great support. * This was a SUPER amazing project! We're all proud to have done it in such a short period of time, everyone is new to the hackathon scene and are still eager to learn new technologies ## 📚 WHAT WE LEARNED 📚 * DashaAi is a brand new technology we learned from the DashaAi workshop. We wanted to try and implement it in our project. We needed a handful of mentor sessions to figure out how to respond to inputs properly, but we're happy we learned it! * React-native is a framework our team utilized to its fullest, but it had its learning curve. We learned how to make asynchronous calls to integrate our backend with our frontend. * Understanding how to take the work of the UX/UI designer and apply it dynamically was important because of the numerous design changes we had throughout the weekend. * How to use REST APIs to predict an output with flask using the models we designed was an amazing skill that we learned * We were super happy that we took the time to learn Expo-cli because of how efficient it is, we could check how our mobile app would look on our phones immediately. 
* First time using AR models in Animaze, it took some time to understand, but it ultimately proved to be a great tool! ## ⏭️WHAT'S NEXT FOR AvaAssist⏭️ AvaAssist has a lot to do before it can be deployed as a genuine app. It will only be successful if the customer is satisfied and benefits from using it, otherwise, it will be a failure. Our next steps are to implement more features for the user experience. For starters, we want to implement Dialogflow back into our idea. Dialogflow would be able to understand the intent behind conversations and the messages it exchanges with the user. The long-term prospect of this would be that we could implement more functions for Ava. In the future Ava could be making investments for the user, moving money between personal bank accounts, setting up automatic/making payments, and much more. Finally, we also hope to create more tabs within the AvaAssist app where the user can see their bank account history and its breakdown, user spending over time, and a financial planner where users can set intervals to put aside/invest their money. ## 🎁 ABOUT THE TEAM🎁 Yifan is a 3rd year interactive design student at Sheridan College, currently interning at SAP. With experience in designing for social startups and B2B software, she is interested in expanding her repertoire in designing for emerging technologies and healthcare. You can connect with her at her [LinkedIn](https://www.linkedin.com/in/yifan-design/) or view her [Portfolio](https://yifan.design/) Alan is a 2nd year computer science student at the University of Calgary. He's has a wide variety of technical skills in frontend and backend development! Moreover, he has a strong passion for both data science and app development. You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/alanayy/) Matthew is a 2nd year student at Simon Fraser University studying computer science. He has formal training in data science. He's interested in learning new and honing his current frontend skills/technologies. Moreover, he has a deep understanding of machine learning, AI and neural networks. He's always willing to have a chat about games, school, data science and more! You can reach out to him at his [LinkedIn](https://www.linkedin.com/in/matthew-wong-240837124/) **📣📣 SHOUT OUT TO ABDULLAH FOR HELPING US THROUGH IDEATION📣📣** You can still connect with Abdullah at his [LinkedIn](https://www.linkedin.com/in/abdullah-sahapdeen/) He's super passionate about reactJS and wants to learn more about machine learning and AI! ### 🥳🎉 THANK YOU UW FOR HOSTING HACKWESTERN🥳🎉
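For readers who want to picture the predictive-modeling section above, here is a compressed sketch of that kind of sklearn pipeline: OneHotEncoder and StandardScaler feeding a RandomForest, tuned with GridSearchCV and pickled for the Flask backend. The column names, grid values, and CSV file are hypothetical placeholders, not the team's actual features or data.

```python
# Compressed sketch of a loan-approval pipeline in the style described above.
import pickle
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("credit_card_loans.csv")                 # hypothetical dataset
numeric = ["income", "loan_amount", "min_loan", "risk_free_rate"]
categorical = ["employment_type", "housing_status"]

pre = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
pipe = Pipeline([("pre", pre), ("clf", RandomForestClassifier(random_state=42))])

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="approved"), df["approved"], test_size=0.2, random_state=42
)
grid = {"clf__n_estimators": [200, 500], "clf__max_depth": [None, 10]}   # assumed grid
search = GridSearchCV(pipe, grid, cv=5)
search.fit(X_train, y_train)
print("test accuracy:", search.score(X_test, y_test))

with open("credit_card_model.pkl", "wb") as f:            # saved for the Flask backend
    pickle.dump(search.best_estimator_, f)
```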
## Inspiration Our inspiration came from the annoying amount of times we have had to take out a calculator after a meal with friends and figure out how much to pay each other, make sure we have a common payment method (Venmo, Zelle), and remember if we paid each other back or not a week later. So to answer this question we came up with a Split that can easily divide our expenses for us, and organize the amount we owe a friend, and payments without having a common platform at all in one. ## What it does This application allows someone to put in a value that someone owes them or they owe someone and organize it. During this implementation of a due to someone, you can also split an entire amount with multiple individuals which will be reflected in the amount owed to each person. Additionally, you are able to clear your debts and make payments through the built-in Checkbook service that allows you to pay just given their name, phone number, and value amount. ## How we built it We built this project using html, css, python, and SQL implemented with Flask. Alongside using these different languages we utilized the Checkbook API to streamline the payment process. ## Challenges we ran into Some challenges we ran into were, not knowing how to implement new parts of web development. We had difficulty implementing the API we used, “Checkbook” , using python into the backend of our website. We had no experience with APIs and so implementing this was a challenge that took some time to resolve. Another challenge that we ran into was coming up with different ideas that were more complex than we could design. During the brainstorming phase we had many ideas of what would be impactful projects but were left with the issue of not knowing how to put that into code, so brainstorming, planning, and getting an attainable solution down was another challenge. ## Accomplishments that we're proud of We were able to create a fully functioning, ready to use product with no prior experience with software engineering and very limited exposure to web dev. ## What we learned Some things we learned from this project were first that communication was the most important thing in the starting phase of this project. While brainstorming, we had different ideas that we would agree on, start, and then consider other ideas which led to a loss of time. After completing this project we found that communicating what we could do and committing to that idea would have been the most productive decision toward making a great project. To complement that, we also learned to play to our strengths in the building of this project. In addition, we learned about how to best structure databases in SQL to achieve our intended goals and we learned how to implement APIs. ## What's next for Split The next step for Split would be to move into a mobile application scene. Doing this would allow users to use this convenient application in the application instead of a browser. Right now the app is fully supported for a mobile phone screen and thus users on iPhone could also use the “save to HomeScreen” feature to utilize this effectively as an app while we create a dedicated app. Another feature that can be added to this application is bill scanning using a mobile camera to quickly split and organize payments. In addition, the app could be reframed as a social media with a messenger and friend system.
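To make the backend idea concrete, here is a minimal sketch of the split-and-record flow with Flask and SQLite: divide a bill evenly among the participants and store each resulting debt. The route name and schema are illustrative only, and the Checkbook payment call is deliberately omitted since its API isn't reproduced here.

```python
# Minimal illustration of the split-and-record idea (not the actual app):
# store pairwise debts in SQLite and divide a bill evenly among participants.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB = "split.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS debts
                       (debtor TEXT, creditor TEXT, amount REAL)""")

@app.post("/split")
def split_bill():
    data = request.get_json()
    payer, friends, total = data["payer"], data["friends"], float(data["total"])
    share = round(total / (len(friends) + 1), 2)          # payer is included in the split
    with sqlite3.connect(DB) as con:
        for friend in friends:
            con.execute("INSERT INTO debts VALUES (?, ?, ?)", (friend, payer, share))
    return jsonify({"each_owes": share})

if __name__ == "__main__":
    init_db()
    app.run(debug=True)
```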
## Inspiration Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future. ## What it does A finance program portal that provides interactive and accessible financial literacies to customers in marginalized communities improving their financially intelligence, discipline and overall, the Canadian economy 🪙. Along with these daily tips, users have access to brief video explanations of each daily tip with the ability to view them in multiple languages and subtitles. There will be short, quick easy plans to inform users with limited knowledge on the Canadian financial system or existing programs for marginalized communities. Marginalized groups can earn benefits for the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support in their financial needs! ## How we built it The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content. ## Challenges we ran into Designing and developing a product that can simplify the large topics under financial literacy, tools and benefits for users and customers while making it easy to digest and understand such information | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy” error that prevented expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos of them online but they were very difficult to follow especially for a beginner | Small and merge errors prevented the app from running properly which delayed our demo completion. ## Accomplishments that we're proud of **Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life! ## What we learned **Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy” error that prevented expo from being installed on VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied it in the development of the UI/UX interface. 
## What's next for Wealthguide Newsletter Subscription 📰: Up to date information on current and today’s finance news. Opportunity for Wealthsimple product promotion as well as partnering with Wealthsimple companies, sponsors and organizations. Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and have access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open up savings account, apply for mortgages, learn how to budget and more. Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks including when they open during the year and the application deadlines. E.g OSAP Applications.
## Inspiration Fully homomorphic computing is a hip new crypto trick that lets you compute on encrypted data. It's pretty wild, so I wanted to try something wild with it. FHE has been getting super fast - boolean operations now only take tens of milliseconds, down from minutes or hours just a few years ago. Most applications of FHE still focus on computing known functions on static data, but it's fast enough now to host a real language all on its own. The function I'm homomorphically evaluating is *eval*, and the data I'm operating on is code. "Brainfreeze" is what happens if you think about this too hard for too long. ## What it does Brainfreeze is a fully-homomorphic runtime for the language [Brainfuck](https://en.wikipedia.org/wiki/Brainfuck). ## How I built it I wrote Python bindings for the TFHE C library for fast FHE. TFHE only exposes boolean operations on single bits at a time, so I wrote a framework for assembling and evaluating virtual homomorphic circuits in Python. Then I wrote an ALU for simple 8-bit arithmetic, and a tiny CPU for dispatching on Brainfuck's 8 possible operations. ## Does it work? No! I didn't have time to finish the entire instruction set - only moving the data pointer (< and >) and incrementing and decrementing the data (+ and -) work right now :-/. It turns out that computers are complicated and I don't remember as much of 6.004 as I thought I did. ## Could it work? Definitely at small scales! But there are some severe limiting factors. FHE guarantees - mathematically - to leak **absolutely no** information about the data it's operating on, and that results in a sort of catastrophically exponential branching nightmare: the computer has to execute *every possible instruction on every possible memory address **during every single clock cycle***, because it's not sure which is the "real" data or the "real" instruction and which is just noise.
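For contrast with the homomorphic version, here is the *plaintext* reference semantics of the four currently working operations (<, >, +, -) as a small Python sketch, not the project's code. The actual Brainfreeze CPU cannot branch on an encrypted opcode, so every cycle it must evaluate the equivalent of all of these cases over every memory cell and select the result with encrypted multiplexers.

```python
# Plaintext reference semantics for the four currently working Brainfuck ops.
# The homomorphic CPU has to evaluate all branches every cycle and multiplex the
# results with encrypted selector bits, since it never learns which opcode or
# which memory cell is the "real" one.
def step(op: str, tape: list, ptr: int) -> int:
    if op == ">":
        ptr = (ptr + 1) % len(tape)
    elif op == "<":
        ptr = (ptr - 1) % len(tape)
    elif op == "+":
        tape[ptr] = (tape[ptr] + 1) % 256     # 8-bit cells, matching the tiny ALU
    elif op == "-":
        tape[ptr] = (tape[ptr] - 1) % 256
    return ptr

def run(program: str, cells: int = 30):
    tape, ptr = [0] * cells, 0
    for op in program:
        ptr = step(op, tape, ptr)
    return tape, ptr

print(run("+++>--<+"))   # tape starts [4, 254, 0, ...], pointer back at cell 0
```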
## Inspiration We want to share the beauty of the [Curry-Howard isomorphism](https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence) and automated proof checking with beginning programmers. The concepts of Types and Formal Proofs are central to many aspects of computer science. ProofLang is to Agda the way Python is to C. We believe that the beauty of mathematical proofs and formal verification can be appreciated by more than CS theorists when taught the right way. The best way to build this intuition is using visualizations, which is what this project aims to do. By presenting types as containers of variants, it allows a teacher to demonstrate the concept of type inhabitation, and why that is central to automated theorem proving. ## What it does ProofLang is a simplified, type-based programming language. It also comes with an online interpreter and a real-time visualization tool, which displays all the types in a way that builds the right kind of intuition about them (with regards to theorem proving and the [calculus of constructions](https://en.wikipedia.org/wiki/Calculus_of_constructions)), alongside the instantiations of the types, showing a ledger of evidence. ## How we built it We wrote ProofLang, the programming language itself, from the ground up based on the [calculus of constructions](https://en.wikipedia.org/wiki/Calculus_of_constructions), but simplified it enough for beginner audiences. The interpreter is written in Rust and compiled down to WebAssembly, which is imported as a JavaScript library into our React frontend. ## Challenges we ran into We ran into challenges integrating WebAssembly with our React frontend. `web-pack` compiles our Rust code down into JavaScript for Node.js rather than the web JS that React uses. Since the interpreter is written in Rust, there was some fighting with the borrow-checker involved as well. ## Accomplishments that we're proud of We are proud of building our own interpreter! We also created a whole programming language, which is pretty awesome. We even wrote a tiny parser combinator framework similar to [nom](https://docs.rs/nom/latest/nom/), since we could not figure out a few edge cases; a sketch of the idea follows below. ## What's next for ProofLang Support for function types, as well as type constructors that are not unit-like! Going forward, we would also like to add a visual programming aspect to it, where users can click and drag on a visual interface much like [Snap](https://snap.berkeley.edu/) to write code, which would make it even more accessible to beginner programmers and mathematicians.
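The real parser-combinator framework is written in Rust in the style of nom; the Python miniature below only conveys the flavor of the idea (tiny parsers return a (value, rest-of-input) pair or None, and combinators compose them). It is an illustration, not the project's code.

```python
# Miniature parser-combinator sketch: each parser maps an input string to
# (parsed_value, remaining_input) on success, or None on failure.
def char(c):
    return lambda s: (c, s[1:]) if s[:1] == c else None

def seq(*parsers):
    def run(s):
        out = []
        for p in parsers:
            r = p(s)
            if r is None:
                return None
            value, s = r
            out.append(value)
        return out, s
    return run

def alt(*parsers):
    def run(s):
        for p in parsers:
            r = p(s)
            if r is not None:
                return r
        return None
    return run

def many(p):
    def run(s):
        out = []
        while (r := p(s)) is not None:
            value, s = r
            out.append(value)
        return out, s
    return run

ident = many(alt(*[char(c) for c in "abcdefghijklmnopqrstuvwxyz"]))
print(ident("forall x"))   # (['f', 'o', 'r', 'a', 'l', 'l'], ' x')
```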
## Inspiration We enjoyed playing the computer party game [*Keep Talking and Nobody Explodes*](http://www.keeptalkinggame.com/) with our friends and decided that a real-life implementation would be more accessible and interesting. It's software brought to life. ## What it does Each randomly generated "bomb" has several modules that must be defused in order to win the game. Here's the catch: only one person can see and interact with the bomb. The other players have the bomb defusal manual to defuse the bomb and must act as "experts," communicating quickly with the bomb defuser. And you only have room for three errors. Puzzle-solving, communication, and interpretation skills will be put to the test as players race the five-minute clock while communicating effectively. Here are the modules we built: * **Information Display** *Sometimes, information is useful.* In this display module, we display the time remaining and the serial number of the bomb. How can you use this information? * **Simple Wires** *Wires are the basis of all hardware hacks. But sometimes, you have to pull them out.* A schematic is generated, instructing players to set up a variety of colored wires into six pins. There's only one wire to pull out, but which one? Only the "experts" will know, following a series of conditional statements. * **The Button** *One word. One LED. One button.* Decode this strange combination and figure out if the button saying "PRESS" should be pressed, or if you should hold it down and light up another LED. * **Password** *The one time you wouldn't want a correct horse battery.* Scroll through letters with buttons on an LCD display, in hopes of stumbling upon an actual word, then submit it. * **Simon Says** *The classic childhood toy and perfect Arduino hack, but much, much crueler.* Follow along the flashing LEDs and repeat the pattern - but you must map it to the correct pattern first. ## How we built it We used six Arduino Unos, with one for each module and one for a central processor to link all of the modules together. Each module is independent, except for two digital outputs indicating the number of strikes to the central processor. On breadboards, we used LEDs, LCD displays, and switches to provide a simple user interface. ## Challenges we ran into Reading the switches on the Simon Says module, interfacing all of the Arduinos together ## Accomplishments that we're proud of Building a polished product in a short period of time that made use of our limited resources ## What we learned How to use Arduinos, the C programming language, connecting digital and analog components ## What's next for Keep Talking Arduino More modules, packaging and casing for modules, more options for players
## Inspiration As college students, our lives are often filled with music: from studying at home, partying, to commuting. Music is ubiquitous in our lives. However, we find the current process of listening to music and controlling our digital music player pretty mechanical and boring: it’s either clicking or tapping. We wanted to truly interact with our music. We want to feel our music. During one brainstorming session, a team member jokingly suggested a Minority Report-inspired gesture UI system. With this suggestion, we realized we can use this hackathon as a chance to build a cool interactive, futuristic way to play music. ## What it does Fedoract allows you to control your music in a fun and interactive way. It wireless streams your hand gestures and allows you to control your Spotify with them. We are using a camera mounted on a fedora to recognize hand gestures, and depending on which gesture, we can control other home applications using the technology of IoT. The camera will be mounted wirelessly on the hat and its video feed will be sent to the main computer to process. ## How we built it For the wireless fedora part, we are using an ESP32-CAM module to record and transmit the video feed of the hand gesture to a computer. The ESP32-CAM module will be powered by a power supply built by a 9V battery and a 3V3/5V Elegoo Power Supply. The video feed is transmitted through WiFi and is connected to the main computer to be analyzed using tools such as OpenCV. Our software will then calculate the gesture and perform actions on Spotify accordingly. The software backend is built using the OpenCV and the media pipe library. The media pipe library includes a hand model that has been pre-trained using a large set of data and it is very accurate. We are using this model to get the positions of different features (or landmarks) of the hand, such as fingertips, the wrist, and the knuckles. Then we are using this information to determine the hand gesture made by the user. The Spotify front end is controlled and accessed using the Selenium web driver. Depending on the action determined by hand gesture recognition, the program presses the corresponding button. Note the new window instantiated by the web driver does not have any prior information. Therefore, we need to log in to Spotify through an account at the start of the process. Then we can access the media buttons and other important buttons on the web page. Backend: we used OpenCV in combination with a never-seen-before motion classification algorithm. Specifically, we used Python scripts using OpenCV to capture webcam input to get hand recognition to recognize the various landmarks (joints) of the hand. Then, motion classification was done through a non-ML, trigonometric approach. First, a vector of change in X and Y input movement was computed using the first and last stored hand coordinates for some given period after receiving some hand motion input. Using deltaX and delta Y, we were able to compute the angle of the vector on the x-y plane, relative to a reference angle that is obtained using the display's width and height. If the vector is between the positive and negative reference angles, then the motion is classified and interpreted as Play Next Song, and so on for the other actions. See the diagrams below for more details. ## Challenges we ran into The USB-to-TTL cable we got for the ESP32 CAM was defective, so we were spending way too much time trying to fix and find alternative ways with the parts we have. 
Worse of all, we were also having trouble powering the ESP32-CAM both when it was connected directly to the computer and when it was running wirelessly using its own power supply. The speaker we bought was too quiet for our purposes, and we did not have the right types of equipment to get our display working in time. The ESP32 CAM module is very sensitive to power fluctuations in addition to having an extremely complicated code upload process. The community around the device is very small therefore there was often misleading advice. This led to a long debugging process. The software also had many issues. First of all, we needed to install MediaPipe on our ARM (M1) Macs to effectively develop using OpenCV but we figured out that it wasn’t supported only after spending some time trying to install it. Eventually, we resorted to the Intel chip version of PyCharm to install MediaPipe, which surprisingly worked, seeing as our chips are not Intel-manufactured. As a result, PyCharm was super slow and this really slowed down the development process. Also, we had minor IDE issues when importing OpenCV in our scripts, so we hotfixed that by simply creating a new project (shrug). Another thing was trying to control the keyboard via the OS but it turned out to be difficult for keys other than volume, so we resorted to using Selenium to control the Spotify client. Additionally, in the hand gesture tracking, the thumbs down gesture was particularly difficult because the machine kept thinking that other fingers were lifted as well. In the hand motion tracking process, the x and y coordinates were inverted, which made the classification algorithm a lot harder to develop. Then, bridging the video live stream coming from the ES32-CAM to the backend was problematic and we spent around 3 hours trying to find a way to effectively and simply establish a bridge using OpenCV so that we could easily redirect the video live stream to be the SW's input feed. Lastly, we needed to link the multiple functionality scripts together, which wasn’t obvious. ## Accomplishments that we're proud of One thing the hardware team is really proud of is the perseverance displayed during the debugging of our hardware. Because of faulty connection cords and unstable battery supply, it took us over 14 hours simply just to get the camera to connect wirelessly. Throughout this process, we had to use an almost brute force approach and tried all possible combinations of potential fixes. We are really surprised we have mental toughness. The motion classification algorithm! It took a while to figure out but was well worth it. Hand gesture (first working product in the team, team spirit) This was our first fully working Minimum Viable Product in a hackathon for all of the team members ## What we learned How does OpenCV work? We learned extensively how serial connection works. We learned that you can use the media pipe module to perform hand gesture recognition and other image classification using image capture. An important thing to note is the image capture must be in RGB format before being passed into the Mediapipe library. We also learned how to use the image capture with webcams to test in development and how to draw helpful figures on the output image to debug. ## What's next for Festive Fedora There is a lot of potential for improvements in this project. For example, we can put all the computing through a cloud computing service. 
Right now, we have the hand gesture recognition calculated locally, and having it online means we will have more computing power, meaning that it will also have the potential to connect to more devices by running more complicated algorithms. Something else we can improve is that we can try to get better hardware such that we will have less delay in the video feed, giving us more accuracy for the gesture detection.
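Putting the pieces above together, a condensed sketch of the gesture loop might look like the following: MediaPipe supplies hand landmarks per frame, the wrist's displacement over a short window gives (dx, dy), and that vector's angle is compared against a reference angle derived from the frame size to choose an action. The movement threshold and action names are assumptions, and the real project drives Spotify through Selenium rather than printing.

```python
# Condensed sketch of the gesture loop described above (not the exact project code).
import math
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
ACTIONS = {"right": "next track", "left": "previous track", "up": "volume up", "down": "volume down"}

def classify(dx, dy, width, height):
    ref = math.degrees(math.atan2(height, width))     # reference angle from the frame size
    ang = math.degrees(math.atan2(-dy, dx))           # image y axis points down, so flip it
    if -ref <= ang <= ref:
        return "right"
    if ang >= 180 - ref or ang <= ref - 180:
        return "left"
    return "up" if ang > 0 else "down"

cap = cv2.VideoCapture(0)                             # or the ESP32-CAM stream URL
with mp_hands.Hands(max_num_hands=1) as hands:
    start = None
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        res = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_hand_landmarks:
            wrist = res.multi_hand_landmarks[0].landmark[mp_hands.HandLandmark.WRIST]
            h, w = frame.shape[:2]
            pos = (wrist.x * w, wrist.y * h)
            if start is None:
                start = pos
            elif abs(pos[0] - start[0]) + abs(pos[1] - start[1]) > 120:   # assumed threshold, px
                print(ACTIONS[classify(pos[0] - start[0], pos[1] - start[1], w, h)])
                start = None
```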
## Inspiration The inspiration for ResuMate came from observing how difficult it can be for undergraduate students and recent graduates to get personalized and relevant feedback on their resumes. We wanted to create a tool that could provide intelligent, real-time resume analysis specifically for technology-related jobs, focusing on internship and new grad roles. By leveraging AI, we aim to help candidates enhance their resumes and improve their chances in the competitive tech job market. ## What it does ResuMate is an AI-powered web application that analyzes resumes by providing personalized eligibility and compatibility assessments. It identifies key strengths and areas for improvement based on keyword matching and specific job requirements for tech roles. Users receive insights on which parts of their resume align with job descriptions and suggestions to fill in missing skills or keywords. ## How we built it ResuMate is built using modern web technologies: * React for building a responsive frontend interface. * Next.js for server-side rendering and easy routing. * Pyodide to run Python in the browser, enabling advanced resume analysis through Python libraries like PyPDF2. * CSS Modules to style the application components consistently and modularly. -Cerebras API (Llama3 model) as AI API to generate personalized feedback recommendations based on Large Language Models (LLMs) The core functionality revolves around uploading a PDF resume, processing it with Python code in the browser, and providing feedback based on keyword analysis using LLM call API. ## Challenges we ran into One of the key challenges we faced was transferring PDF content to text within a JavaScript framework. Parsing PDFs in a web environment isn't straightforward, especially in a client-side context where JavaScript doesn't natively support the full breadth of PDF handling like Python does. Integrating Pyodide was crucial for running Python libraries like PyPDF2 to handle the PDF extraction, but it introduced challenges in managing the virtual filesystem and ensuring seamless communication between JavaScript and Python. ## Accomplishments that we're proud of We successfully integrated Python code execution in the browser through Pyodide, allowing us to analyze resumes in real time without needing a backend server for processing. Additionally, we created a user-friendly interface that helps users understand what keywords are missing from their resumes, which will directly improve their job applications. ## What we learned Throughout this project, we learned how to: * Seamlessly integrate Python within a JavaScript framework using Pyodide. * Handle complex file uploads and processing entirely on the client-side. * Optimize PDF text extraction and keyword matching for real-time performance. * Work as a team to overcome technical challenges and meet our project goals. ## What's next for ResuMate Moving forward, we plan to: * Improve the accuracy of our PDF text extraction, especially for resumes with complex formatting. * Expand the keyword matching and scoring algorithms to handle more specific job descriptions and fields. * Develop a more advanced suggestion system that not only identifies missing keywords but also provides actionable advice based on the latest job market trends. * Add support for more resume formats, including Word documents and plain text.
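At its core, the analysis described above boils down to extracting the resume's text and matching it against a job-specific keyword list. The sketch below shows that step as a standalone script with PyPDF2; in ResuMate it runs inside the browser through Pyodide, and the keyword set here is purely hypothetical.

```python
# Standalone sketch of the extract-then-match step; in ResuMate this runs in the
# browser via Pyodide rather than as a local script. Keywords are hypothetical.
from PyPDF2 import PdfReader

KEYWORDS = {"python", "react", "sql", "git", "docker", "rest", "agile"}

def extract_text(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    return " ".join((page.extract_text() or "") for page in reader.pages)

def keyword_report(pdf_path: str):
    words = set(extract_text(pdf_path).lower().split())
    found = sorted(KEYWORDS & words)
    missing = sorted(KEYWORDS - words)
    score = round(100 * len(found) / len(KEYWORDS))
    return {"score": score, "found": found, "missing": missing}

print(keyword_report("resume.pdf"))
```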
## Inspiration The cryptocurrency market is an industry which is expanding at an exponential rate. Every day, thousands of new investors of all kinds are getting into this volatile market. With more than 1,500 coins to choose from, it is extremely difficult for those new investors to choose the wisest investment. Our goal is to make it easier for them to select the pearl amongst the sea of cryptocurrencies. ## What it does To directly tackle the challenge of choosing a cryptocurrency, our website has a compare function which can compare up to 4 different cryptos at once. All of the information from the chosen cryptocurrencies is pertinent and displayed in an organized way. We also have a news feature for investors to follow the trendiest news concerning their precious investments. Finally, we have an awesome bot which will answer any questions the user has about cryptocurrency. Our website is simple and elegant to provide a hassle-free user experience. ## How we built it We started by building a design prototype of our website using Figma. As a result, we had a good idea of our design pattern, and Figma provided us with some CSS code from the prototype. Our front-end is built with React.js and our back-end with Node.js. We used Firebase to host our website. We fetched cryptocurrency data from multiple APIs (CoinMarketCap.com, CryptoCompare.com, and NewsApi.org) using Axios. Our website is composed of three components: the coin comparison tool, the news feed page, and the chatbot. ## Challenges we ran into Throughout the hackathon, we ran into many challenges. First, since we had a huge amount of data at our disposal, we had to manipulate it very efficiently to keep the website fast and performant. Then, there were many bugs we had to solve when integrating Cisco's widget into our code. ## Accomplishments that we're proud of We are proud that we built a web app with three fully functional features. We worked well as a team and had fun while coding. ## What we learned We learned to use many new APIs, including Cisco Spark and Nuance Nina. We also learned to always keep a backup plan for when APIs are not working in our favor. The distribution of the work was good, and overall it was a great team experience. ## What's next for AwsomeHack * New stats for the crypto compare tool, such as the number of Twitter and Reddit followers, plus tracking GitHub commits to provide a measure of development activity. * Sign in, register, portfolio, and watchlist features. * Support for desktop applications (Mac/Windows) with Electron
## Inspiration

I got annoyed at Plex's lack of features.

## What it does

Provides direct database and disk access to Plex configuration.

## How I built it

Python.

## Challenges I ran into

## Accomplishments that I'm proud of

## What I learned

## What's next for InterPlex
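The writeup doesn't include code, so as a hedged illustration of what "direct database access to Plex configuration" can look like: Plex Media Server stores its library metadata in a SQLite database that Python can open directly. The database path and table name below are assumptions based on community documentation of Plex's on-disk layout (typical Linux install), not code from InterPlex itself.

```python
# Hedged sketch: read titles straight out of Plex's SQLite library database.
# Path and table name are assumptions; adjust them for your own installation.
import sqlite3

PLEX_DB = ("/var/lib/plexmediaserver/Library/Application Support/"
           "Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db")

def list_titles(limit: int = 10) -> list:
    # Open read-only so we never corrupt the live database.
    conn = sqlite3.connect(f"file:{PLEX_DB}?mode=ro", uri=True)
    try:
        cur = conn.execute(
            "SELECT title FROM metadata_items WHERE title != '' LIMIT ?", (limit,)
        )
        return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()

if __name__ == "__main__":
    for title in list_titles():
        print(title)
```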
## Inspiration

We've had roommate troubles in the past, so we decided to make something to help us change that.

## What it does

It keeps track of tasks and activities among roommates and gamifies these tasks with a reward system to motivate everyone to commit to the community (a small sketch of this idea follows below).

## Challenges we ran into

The two biggest obstacles we ran into were version control and the Firebase documentation/database.

## Accomplishments that we're proud of

We completed our core features, and we made a decent-looking app.

## What we learned

Take heed when it comes to version control.

## What's next for roomMe

We would like to add more database support and more features that allow communication with other people in your group. We would also like to add extension apps to further enhance the experience of roomMe, such as Venmo, Google Calendar, and GroupMe. We are also considering creating a game where people can spend their credits.
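roomMe's data lives in Firebase, so the following is only a hypothetical, stack-agnostic sketch of the reward mechanic described above: completing a chore credits the roommate who did it, and those credits could later be spent in the planned game. All names and values here are made up for illustration.

```python
# Hypothetical sketch of roomMe's reward mechanic (the real app stores this in Firebase).
from dataclasses import dataclass, field

@dataclass
class Roommate:
    name: str
    credits: int = 0

@dataclass
class Chore:
    title: str
    reward: int        # credits earned for completing the chore
    done_by: str = ""  # empty string means "not done yet"

@dataclass
class House:
    roommates: dict = field(default_factory=dict)
    chores: list = field(default_factory=list)

    def complete(self, chore: Chore, who: str) -> None:
        """Mark a chore done and award its credits to the roommate who did it."""
        chore.done_by = who
        self.roommates[who].credits += chore.reward

house = House(roommates={"sam": Roommate("sam"), "alex": Roommate("alex")})
dishes = Chore("Do the dishes", reward=5)
house.chores.append(dishes)
house.complete(dishes, "sam")
print(house.roommates["sam"].credits)  # 5
```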
## Inspiration

We wanted to use Livepeer's features to build a unique streaming experience for gaming content, for both streamers and viewers. Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establishes a more unified social ecosystem for viewers, allowing them to connect and interact on a deeper level.

## What it does

kizuna has aspirations to implement the following features:

* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlight smaller, local, and upcoming streamers

## How we built it

Our web application was built using React, utilizing React Router to navigate through webpages and Livepeer's API to allow users to upload content and host livestreams (a rough sketch of a Livepeer stream-creation call appears at the end of this writeup). For background context, Livepeer describes itself as a decentralized video infrastructure network.

The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.

## Challenges we ran into

We had barriers with the API key provided by Livepeer.studio, which put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Translating the design from Figma into the application was a barrier as well; we hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue developing this project! You can discover more about Livepeer's API [here](https://livepeer.org/).

## Accomplishments that we're proud of

Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language to staying awake no matter how tired we were, we are all proud of each other's dedication to creating a great project.

## What we learned

Although we knew of each other before the hackathon, we all agreed that having teammates you can collaborate with is a fundamental part of developing a project. The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time applying what he had practiced in small React projects. This was Kennedy's first hackathon, where she learned how to implement CSS. The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.

## What's next for kizuna

Our team maintains our intention to continue to develop this application to its full potential.
Although all of us are still learning, we would like to accomplish the next steps in our application:

* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesigning some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize genres of streams
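kizuna itself calls Livepeer from its React front end; the snippet below is only a hedged, language-agnostic illustration of creating a stream through Livepeer Studio's REST API. The endpoint, header, environment variable name, and response fields are assumptions based on our reading of Livepeer Studio's public docs, not kizuna's actual code.

```python
# Hedged illustration: create a livestream via Livepeer Studio's REST API.
# Endpoint and fields are assumptions from Livepeer Studio's docs; verify before use.
import os
import requests

API_KEY = os.environ["LIVEPEER_STUDIO_API_KEY"]  # hypothetical env var name

def create_stream(name: str) -> dict:
    resp = requests.post(
        "https://livepeer.studio/api/stream",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"name": name},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # streamKey is what the broadcaster plugs into their streaming software;
    # playbackId is what viewers use to watch.
    return {"streamKey": data.get("streamKey"), "playbackId": data.get("playbackId")}

if __name__ == "__main__":
    print(create_stream("my-first-kizuna-stream"))
```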
## Inspiration

We wanted to do something fun and exciting, nothing too serious. Slang is a vital component to thrive in today's society. Ever seen Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways and to show how today's music is changing with the slang thrown in.

## What it does

You choose your desired song; it prints out the lyrics for you and even sings them in a robotic voice. It then looks up the Urban Dictionary meaning of the slang, swaps it in for the original words, and attempts to sing the result (a rough sketch of the lookup-and-replace step follows this writeup).

## How I built it

We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, and we used Kaggle datasets and the Zdict APIs.

## Challenges I ran into

Redirect issues with Flask were frequent, and the excessive API calls made the program super slow.

## Accomplishments that I'm proud of

The excellent UI design, along with the amazing results that can be produced from the translation of slang.

## What I learned

We learned a lot of things.

## What's next for SlangSlack

We are going to transform the way today's millennials keep up with growing trends in slang.
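As a hedged sketch of the lookup-and-replace step: Urban Dictionary exposes a widely used public define endpoint, and the snippet below swaps a slang term in a lyric line for its top definition. The response shape and the example words are assumptions for illustration; the real project also uses the Zdict APIs and NLP libraries that aren't shown here.

```python
# Hedged sketch: replace a slang word in a lyric with its Urban Dictionary definition.
# Uses Urban Dictionary's public define endpoint; response shape assumed from common usage.
from typing import Optional
import requests

def urban_define(term: str) -> Optional[str]:
    resp = requests.get(
        "https://api.urbandictionary.com/v0/define",
        params={"term": term},
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json().get("list", [])
    # Definitions contain [bracketed] cross-links; strip the brackets for readability.
    return entries[0]["definition"].replace("[", "").replace("]", "") if entries else None

def translate_line(line: str, slang_words: list) -> str:
    for word in slang_words:
        definition = urban_define(word)
        if definition:
            line = line.replace(word, f"({definition.splitlines()[0]})")
    return line

if __name__ == "__main__":
    # Example slang terms chosen for illustration only.
    print(translate_line("My dawg would prolly do it", ["dawg", "prolly"]))
```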
## Inspiration

It's easy to zone out in online meetings and lectures, and it's difficult to rewind without losing focus in the moment. It could also be disrespectful to others if you expose the fact that you weren't paying attention. Wouldn't it be nice if we could just quickly skim through a list of keywords to immediately see what happened?

## What it does

Rewind is an intelligent, collaborative, and interactive web canvas with built-in voice chat that maintains a live-updated list of keywords summarizing the voice chat history. You can see timestamps of the keywords and click on them to reveal the actual transcribed text.

## How we built it

Communications: WebRTC, WebSockets, HTTPS.

We used WebRTC, a peer-to-peer protocol, to connect users through a voice channel, and we used WebSockets to update the web pages dynamically so that users get instant feedback on each other's actions. Additionally, a web server is used to maintain stateful information. For summarization and live transcript generation, we used Google Cloud APIs, covering both natural language processing and speech recognition.

Audio transcription and summary: Google Cloud Speech (live transcription) and the Natural Language API (for summarization); a rough sketch of the keyword-extraction step follows this writeup.

## Challenges we ran into

There were many challenges in bringing this project to reality. On the backend, one of the hardest problems was getting WebRTC to work on both the backend and the frontend; we spent more than 18 hours on it to reach a working prototype. The frontend development was also full of challenges: the design and implementation of the canvas involved a lot of trial and error, and the history-rewinding page was time-consuming as well. Overall, most components of the project took the combined effort of everyone on the team, and we learned a lot from this experience.

## Accomplishments that we're proud of

Despite all the challenges we ran into, we were able to build a working product with many different features. Although the final product is by no means perfect, we had fun working on it, utilizing every bit of intelligence we had. We were proud to have learned many new tools and gotten through all the bugs!

## What we learned

For the backend, the main thing we learned was how to use WebRTC, including client negotiation and management. We also learned how to use Google Cloud Platform in a Python backend and integrate it with the WebSockets. As for the frontend, we learned to use various JavaScript elements to help develop an interactive client web app. We also learned event delegation in JavaScript, which was essential to the history page of the frontend.

## What's next for Rewind

We imagine a mini dashboard that also shows other live-updated information, such as sentiment and a summary of the entire meeting, as well as the ability to examine information on a particular user.
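As a hedged sketch of the keyword-extraction step mentioned above, here is a minimal example using Google Cloud's Natural Language API to pull entities out of a transcript chunk and keep the most salient ones as keywords. The salience threshold and the choice of entities as keywords are assumptions for illustration, not necessarily how Rewind computes its list.

```python
# Hedged sketch: derive keywords from a transcript chunk with the Google Cloud
# Natural Language API. Requires google-cloud-language and application credentials.
from google.cloud import language_v1

def extract_keywords(transcript: str, min_salience: float = 0.02) -> list:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=transcript,
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_entities(document=document)
    # Keep the most salient entity names as the "keywords" shown on the timeline.
    return [
        entity.name
        for entity in response.entities
        if entity.salience >= min_salience
    ]

if __name__ == "__main__":
    chunk = "Next week the team will demo the WebRTC voice channel and the canvas."
    print(extract_keywords(chunk))
```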
## Inspiration

We were motivated to tackle linguistic challenges in the educational sector after juxtaposing our personal experience with current news. There are currently over 70 million asylum seekers, refugees, and internally displaced people around the globe, and this statistic highlights the problem of individuals from different linguistic backgrounds being forced to assimilate into a culture and language different from theirs. As one of our teammates had sought a new home in a new country, we had a first-hand perspective on how difficult this transition is. In addition, our other team members had volunteered extensively within the educational system in developing communities, both locally and globally, and saw a similar need among individuals who were unable to meet their community's linguistic standards. We also iterated upon our idea to ensure that we are holistically supporting our communities by considering the financial implications of taking time to refine your language skills instead of working.

## What it does

Fluently's main purpose is to provide equitable education worldwide. By providing a user-customized curriculum and linguistic practice, students can further develop their understanding of their language. It helps students focus on the areas where they need the most improvement, so they can make progress at their own pace and feel more confident in their language skills while also practicing comprehension. By using artificial intelligence to analyze pronunciation, our site provides feedback that is both personalized and objective.

## How we built it

Developing the web application was no easy feat. As we were searching for an AI model to help us through our journey, we stumbled upon OpenAI, specifically Microsoft Azure's cognitive services that draw on OpenAI's language-processing capabilities. This API gave us the ability to analyze voice patterns and fluency and to transcribe the passages used in the application. Figuring out the documentation, as well as how the AI would interact with the user, was most important for us to execute properly, since the AI acts as the tutor/mentor for the students. We developed a diagram that breaks down the passages read to the student phonetically and gives a score out of 100 for how well each word was pronounced, based on the API's internal grading system (a minimal sketch of this pronunciation-assessment step appears just below). As this is our first iteration of the web app, we wanted to explore how much information we could extract from the user to see what is most valuable to display to them in the future.

Integrating the API with the web host was a new feat for us as a young team. We were confident in our Python abilities to host the AI services and found Flask, a Python framework that let us serve the HTML and JavaScript front end of the application from Python. By using Flask, we were able to host our AI services in Python while continuously managing our front end through Python scripts. This made room for the development of our backend systems, Convex and Auth0. Auth0 gives members coming into the application a personalized experience by having them sign into their own account; the account is then stored in the Convex database and used to track their learning progress and skill development over time.
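Below is a minimal sketch, under stated assumptions, of the pronunciation-assessment step described above, using the Azure Speech SDK for Python. The key, region, audio filename, and reference passage are placeholders, and the exact configuration may differ from what Fluently ships.

```python
# Minimal sketch of pronunciation assessment with the Azure Speech SDK (assumptions noted).
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

SPEECH_KEY = "YOUR_AZURE_SPEECH_KEY"   # placeholder
SPEECH_REGION = "eastus"               # placeholder

def assess_pronunciation(audio_path: str, reference_text: str) -> dict:
    speech_config = speechsdk.SpeechConfig(subscription=SPEECH_KEY, region=SPEECH_REGION)
    audio_config = speechsdk.audio.AudioConfig(filename=audio_path)
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config,
                                            audio_config=audio_config)

    # Score the reading out of 100 against the passage the student was asked to read aloud.
    pron_config = speechsdk.PronunciationAssessmentConfig(
        reference_text=reference_text,
        grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
        granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    )
    pron_config.apply_to(recognizer)

    result = recognizer.recognize_once()
    assessment = speechsdk.PronunciationAssessmentResult(result)
    return {
        "recognized_text": result.text,
        "accuracy": assessment.accuracy_score,
        "fluency": assessment.fluency_score,
        "pronunciation": assessment.pronunciation_score,
    }

if __name__ == "__main__":
    print(assess_pronunciation("student_reading.wav",
                               "The quick brown fox jumps over the lazy dog."))
```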
All in all, each component of the application, from the AI learning models and the generation of custom passages for the user to the backend that communicates between the JavaScript and Python server host and streamlines the storage of user data, came with its own challenges, but everything came together seamlessly as we guide the user from our simple login system to the passage generator and speech analyzer that give them constructive feedback on their fluency and pronunciation.

## Challenges we ran into

As a team of mostly beginners, this was our first time working with many of these technologies, especially AI APIs. We needed to be patient working with key codes and go through an experimental process of trying out small tests before heading toward our main goal. One major issue we faced was the visualization of data for the user: it was hard to synthesize the analysis done by the AI into something that leaves users confident about what they need to improve. To solve this problem, we first sought out how much information we could extract from the AI; in future iterations we will simply display the distilled feedback. Another issue we ran into was integrating Convex into the application. The major difficulty came from developing JavaScript functions that would communicate back to the Python server hosting the site. This was thankfully resolved; we are grateful to the Convex mentors at the event who helped us develop personalized JavaScript functions that work seamlessly with our Auth0 authentication and the rest of the application to record users as they come and go.

## Accomplishments that we're proud of

One accomplishment we are proud of is the implementation of Convex and Auth0 with Flask and Python. Since Python isn't the primary target language for either service, we had to piece together a way to fit both into our project, collaborating with the team at Convex to help us out. This gave us a strong authentication platform for our web application and a database to store user data. Another accomplishment was the transition from a React Native application to Flask with Python. Since none of the group had seen Flask before, or worked with it for that matter, we really had to hone our ability to learn on the fly and apply what we already knew about Python to make the web app work with this system. Additionally, we take pride in our work with OpenAI, specifically Azure. We worked through roadblocks in finding a voice-recognition AI to implement our natural language processing vision, and we are proud of the resilience and conviction we showed toward our overall mission: using new technology to build a better educational tool.

## What we learned

As beginners at our first hackathon, not only did we learn about the technical side of building a project, we were also able to hone our teamwork skills as we dove headfirst into a project with individuals we had never worked with before. As a group, we collectively learned about every aspect of coding a project, from refining our terminal skills to working with unique technology like Microsoft Azure Cognitive Services. We also improved our skills with cutting-edge technologies like Convex and OpenAI.
We came out of this experience not only growing as programmers but also as individuals who are confident they can take on the real-world challenges of today to build a better tomorrow.

## What's next?

We hope to continue building out the natural language processing features to offer the technology in other languages. In addition, we hope to integrate other educational resources, such as videos or quizzes, to continue building other linguistic and reading skill sets. We would also love to explore the intersection of gaming and natural language processing to see if we can make the experience more engaging for the user. Finally, we hope to expand the ethical dimension of the project by building a donation platform that lets users donate money to developing communities and pay the generosity forward, ensuring that others are able to benefit from refining their linguistic abilities. The money would go to a community in need that uses our platform, to fund further educational resources there.

## Bibliography

United Nations High Commissioner for Refugees. "Global Forced Displacement Tops 70 Million." UNHCR, The UN Refugee Agency, <https://www.unhcr.org/en-us/news/stories/2019/6/5d08b6614/global-forced-displacement-tops-70-million.html>.