Gesture Recognition Home Automation
By: Muhammad Anwar, Jomarc Julius Silos, Ricardo Campos, Favienne Lopez
Department: Engineering
Faculty Advisor: Dr. George Anwar
Today, smart devices are activated by verbal commands to a central hub. As long as you speak within the device's optimal range and with sufficient clarity, you can control any of your smart devices; otherwise, you must adjust your voice until the command registers correctly. This system works for most, if not all, average consumers. Despite the advancements in current home automation, however, it does not work for individuals who are deaf-mute, since this community cannot issue vocal commands to the smart home hub. Their only means of activation would be an external device capable of controlling the home automation system. Even if such a user could issue a command, a visual aid or another form of verification would be required to confirm that the correct command was received. Without a reply from the home automation system, the user would not know whether the device understood the command properly: the device might fail to perform the requested command, or perform an entirely different command that was never intended.
Understanding this gap between accessibility and home automation, our group proposes Gesture Recognition Home Automation (GRHA), which will bridge it and make smart home automation inclusive to everyone. A consumer does not have to be deaf-mute to use GRHA; anyone able to demonstrate a gesture to the device can operate it. GRHA will be a device that makes virtual assistants accessible to deaf and hard-of-hearing individuals, which is especially important given the growing popularity of smart home devices. Below is a design of our product, along with a flowchart of how the software will function within the hardware we will be using.
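The core software idea, translating a recognized gesture into a device command and returning a visible confirmation the user can read, can be sketched in a few lines. Note that this is only an illustrative sketch: the gesture names, device actions, and the `handle_gesture` function are hypothetical placeholders, not the project's actual command set or implementation.

```python
# Hypothetical sketch of GRHA's command-dispatch step.
# All gesture labels and device actions below are illustrative
# assumptions, not the project's actual mappings.

GESTURE_COMMANDS = {
    "thumbs_up": ("lights", "on"),
    "thumbs_down": ("lights", "off"),
    "open_palm": ("thermostat", "raise"),
    "fist": ("thermostat", "lower"),
}

def handle_gesture(label: str) -> str:
    """Look up a recognized gesture and return a confirmation message
    the hub could display, so a deaf or hard-of-hearing user can verify
    that the intended command was understood before it runs."""
    action = GESTURE_COMMANDS.get(label)
    if action is None:
        # Visual feedback replaces the spoken error a voice assistant gives.
        return "Gesture not recognized, please try again"
    device, command = action
    return f"Confirmed: {device} -> {command}"
```

Returning an explicit confirmation string, rather than silently executing, reflects the verification step described above: the user always sees which command the device understood.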