A Study on Wearable Tech Interfaces and Perception: Cognitive AI-Enabled Device

Authors

  • Debjyoti Bagchi Calcutta Institute of Engineering and Management https://orcid.org/0009-0007-4372-5369
  • Samir Biswas Calcutta Institute of Engineering and Management
  • Uryaswi Bhowmick Calcutta Institute of Engineering and Management
  • Titash Das Calcutta Institute of Engineering and Management
  • Srijaa Chatterjee Calcutta Institute of Engineering and Management
  • Gouree Purkait Calcutta Institute of Engineering and Management
  • Tathagata Ghosh Calcutta Institute of Engineering and Management

DOI:

https://doi.org/10.24203/5xjf4286

Keywords:

Smart glass, Augmented and Virtual Reality, AI

Abstract

In this paper, we explore the possibility of utilizing state-of-the-art hardware architecture to develop an interactive Artificial Intelligence (AI)-based virtual agent in an Internet of Things (IoT)-enabled smart glass. In contrast to traditional systems that rely on the Central Processing Unit (CPU) and Graphics Processing Unit (GPU), we examine a proposed hardware system that employs neuromorphic computing for precise, energy-efficient, real-time processing, resolving the problems of latency and excessive power consumption. When combined with conventional Complementary Metal-Oxide-Semiconductor (CMOS) technology, Resistive Random Access Memory (ReRAM) offers fast, non-volatile memory that facilitates parallel computing and guarantees smooth data storage and retrieval for resource-constrained wearables. This study also emphasizes the application of intelligent virtual agents based on cognitive AI in wearable technology. To create an immersive human-computer interface, we attempted to build an intelligent interactive AI avatar with a Cycle Generative Adversarial Network (CycleGAN) that mimics the user's traits, and then pass its output to a Large Language Model (LLM) to generate motion sequences that perform additional tasks, transforming LLM prompt responses into movements.
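As a rough illustration of the prompt-to-motion idea described in the abstract (not the paper's actual implementation), the mapping from an LLM's text response to avatar motion primitives can be sketched as a keyword-triggered lookup. All names here (`MOTION_LIBRARY`, `reply_to_motion`, the specific triggers) are hypothetical placeholders:

```python
# Hypothetical sketch: map an LLM's text reply onto a sequence of avatar
# motion primitives. A real system would use learned motion generation;
# this toy version uses keyword triggers purely for illustration.

MOTION_LIBRARY = {
    "greet": ["raise_right_arm", "wave_hand", "lower_right_arm"],
    "agree": ["nod_head"],
    "point": ["extend_index_finger", "rotate_wrist"],
    "idle":  ["breathe", "blink"],
}

def reply_to_motion(llm_reply: str) -> list[str]:
    """Translate an LLM text reply into a flat motion-primitive sequence."""
    triggers = {"hello": "greet", "hi": "greet", "yes": "agree",
                "there": "point", "look": "point"}
    sequence: list[str] = []
    for word in llm_reply.lower().split():
        motion = triggers.get(word.strip(".,!?"))
        if motion:
            sequence.extend(MOTION_LIBRARY[motion])
    # Fall back to an idle animation when no trigger matches.
    return sequence or MOTION_LIBRARY["idle"]

print(reply_to_motion("Hello! Yes, look there."))
```

In a full pipeline, the keyword table would be replaced by a motion-generation model conditioned on the LLM output, and the primitives would drive the CycleGAN-derived avatar's skeleton.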

Author Biographies

  • Debjyoti Bagchi, Calcutta Institute of Engineering and Management

Assistant Professor of Computer Science and Engineering, Calcutta Institute of Engineering and Management, Kolkata, India

  • Samir Biswas, Calcutta Institute of Engineering and Management

Assistant Professor of Information Technology, Calcutta Institute of Engineering and Management, Kolkata, India

  • Titash Das, Calcutta Institute of Engineering and Management

Graduate student, Class of 2024, Department of Information Technology, Calcutta Institute of Engineering and Management, Kolkata, India

  • Srijaa Chatterjee, Calcutta Institute of Engineering and Management

Graduate student, Class of 2024, Department of Information Technology, Calcutta Institute of Engineering and Management, Kolkata, India

  • Gouree Purkait, Calcutta Institute of Engineering and Management

Graduate student, Class of 2024, Department of Information Technology, Calcutta Institute of Engineering and Management, Kolkata, India

  • Tathagata Ghosh, Calcutta Institute of Engineering and Management

Graduate student, Class of 2024, Department of Information Technology, Calcutta Institute of Engineering and Management, Kolkata, India

References

[1] Chheang et al., “Towards Anatomy Education with Generative AI-based Virtual Assistants in Immersive Virtual Reality Environments,” in 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), ISBN: 979-8-3503-7203-8.

[2] A. Howard, W. Hope, and A. Gerada, “ChatGPT and antimicrobial advice: the end of the consulting infection doctor?” The Lancet Infectious Diseases, vol. 23, no. 4, pp. 405–406, 2023.

[3] K. Cheng, Q. Guo, Y. He, Y. Lu, S. Gu, and H. Wu, “Exploring the potential of GPT-4 in biomedical engineering: the dawn of a new era,” Annals of Biomedical Engineering, pp. 1–9, 2023.

[4] Y. He, H. Tang, D. Wang, S. Gu, G. Ni, and H. Wu, “Will ChatGPT/GPT-4 be a lighthouse to guide spinal surgeons?” Annals of Biomedical Engineering, pp. 1–4, 2023.

[5] S. Pedram, G. Kennedy, and S. Sanzone, “Toward the validation of VR-HMDs for medical education: a systematic literature review,” Virtual Reality, pp. 1–26, 2023.

[6] H. Makinen, E. Haavisto, S. Havola, and J.-M. Koivisto, “User experiences of virtual reality technologies for healthcare in learning: An integrative review,” Behaviour & Information Technology, vol. 41, no. 1, pp. 1–17, 2022.

[7] J. J. Reyes-Cabrera, J. M. Santana-Núñez, A. Trujillo-Pino, M. Maynar, and M. A. Rodriguez-Florido, “Learning Anatomy through Shared Virtual Reality,” in Eurographics Workshop on Visual Computing for Biology and Medicine. The Eurographics Association, 2022.

[8] Jui-Fa Chen et al., “Constructing an Intelligent Behavior Avatar in a Virtual World: A Self-Learning Model Based on Reinforcement,” in IRI-2005 IEEE International Conference on Information Reuse and Integration, 2005, ISBN: 0-7803-9093-8.

[9] Minna Vasarainen, Sami Paavola, and Liubov Vetoshkina, “A Systematic Literature Review on Extended Reality: Virtual, Augmented and Mixed Reality in Collaborative Working Life Setting,” International Journal of Virtual Reality, ISSN: 2727-9979, 18 October 2021.

[10] Albadra, D., Elamin, Z., Adeyeye, K., Polychronaki, E., Coley, D.A., Holley, J., & Copping, A. (2020). Participatory design in refugee camps: comparison of different methods and visualization tools. Building Research and Information, 49(2), 248–264. https://doi.org/10.1080/09613218.2020.1740578

[11] Stanton, N.A., Plant, K.L., Roberts, A.P., Allison, C.K., & Howell, M. (2020). Seeing through the mist: an evaluation of an iteratively designed head-up display, using a simulated degraded visual environment, to facilitate rotary-wing pilot situation awareness and workload. Cognition, Technology & Work, 22(3), pp. 549–563. https://doi.org/10.1007/s10111-019-00591-2

[12] Zahabi, M., & Abdul Razak, A. (2020). Adaptive virtual reality-based training: a systematic literature review and framework. Virtual Reality: the Journal of the Virtual Reality Society. https://doi.org/10.1007/s10055-020-00434-w

[13] Jiarui Zhu, Radha Kumaran, Chengyuan Xu, and Tobias Höllerer, “Free-form Conversation with Human and Symbolic Avatars in Mixed Reality,” in 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), ISBN: 979-8-3503-2838-7.

[14] D. Adiwardana, M.-T. Luong, D. R. So, J. Hall, N. Fiedel, R. Thoppilan, Z. Yang, A. Kulshreshtha, G. Nemade, Y. Lu, and Q. V. Le. Towards a human-like open-domain chatbot, 2020.

[15] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20. Curran Associates Inc., Red Hook, NY, USA, 2020.

[16] M. Ghoshal, J. Ong, H. Won, D. Koutsonikolas, and C. Yildirim. Colocated immersive gaming: A comparison between augmented and virtual reality. In 2022 IEEE Conference on Games (CoG), pp. 594–597, 2022. doi: 10.1109/CoG51982.2022.9893708

[17] S. Z. Hassan, P. Salehi, R. K. Røed, P. Halvorsen, G. A. Baugerud, M. S. Johnson, P. Lison, M. Riegler, M. E. Lamb, C. Griwodz, and S. S. Sabet. Towards an AI-driven talking avatar in virtual reality for investigative interviews of children. In Proceedings of the 2nd Workshop on Games Systems, GameSys ’22, pp. 9–15. Association for Computing Machinery, New York, NY, USA, 2022.

[18] Adélaïde Genay, Anatole Lécuyer, and Martin Hachet, “Being an Avatar for Real: A Survey on Virtual Embodiment in Augmented Reality,” IEEE Transactions on Visualization and Computer Graphics, vol. 28, no. 12, 1 December 2022, ISSN: 1941-0506.

[19] Raghavendra K. Chunduri and Darshika G. Perera, “Neuromorphic Sentiment Analysis Using Spiking Neural Networks,” MDPI Sensors, 6 September 2023, https://doi.org/10.3390/s23187701

[20] Strubell, E.; Ganesh, A.; McCallum, A. Energy and Policy Considerations for Modern Deep Learning Research. Proc. AAAI Conf. Artif. Intell. 2020, 34, 13693–13696.

[21] Javanshir, A.; Nguyen, T.T.; Mahmud, M.A.P.; Kouzani, A.Z. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput. 2022, 34, 1289–1328.

[22] Yamazaki, K.; Vo-Ho, V.K.; Bulsara, D.; Le, N. Spiking Neural Networks and Their Applications: A Review. Brain Sci. 2022, 12, 863.

[23] Dang, N.C.; Moreno-García, M.N.; De la Prieta, F. Sentiment analysis based on deep learning: A comparative study. Electronics 2020, 9, 483.

[24] Shafin, M.A.; Hasan, M.M.; Alam, M.R.; Mithu, M.A.; Nur, A.U.; Faruk, M.O. Product review sentiment analysis by using NLP and machine learning in Bangla language. In Proceedings of the 2020 23rd International Conference on Computer and Information Technology (ICCIT), Dhaka, Bangladesh, 19–21 December 2020; pp. 1–5.

[25] Chen, C.-H.; Chen, P.-Y.; Lin, J.C.-W. An Ensemble Classifier for Stock Trend Prediction Using Sentence-Level Chinese News Sentiment and Technical Indicators. Int. J. Interact. Multimed. Artif. Intell. 2022, 7, 53–6

[26] Chunduri, R.K.; Cherukuri, A.K. Big Data Processing Frameworks and Architectures: A Survey. In Handbook of Big Data Analytics; IET Digital Library: Stevenage, UK, 2021; Volume 1, pp. 37–104.

[27] Ricketts, J.; Barry, D.; Guo, W.; Pelham, J. A scoping literature review of natural language processing application to safety occurrence reports. Safety 2023, 9, 22.

[28] Susanne Schmidt, Oscar Ariza, and Frank Steinicke, “Intelligent Blended Agents: Reality–Virtuality Interaction with Artificially Intelligent Embodied Virtual Humans,” 27 November 2020, https://doi.org/10.3390/mti4040085

[29] Kangsoo Kim, Celso M. de Melo, Nahal Norouzi, Gerd Bruder, and Gregory F. Welch, “Reducing Task Load with an Embodied Intelligent Virtual Assistant for Improved Performance in Collaborative Decision Making,” in 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), ISBN: 978-1-7281-5609-5.

[30] Sixue Wu, Le Xu, Zhaoyang Dai, and Younghwan Pan, “Factors Affecting Avatar Customization Behavior in Virtual Environments,” Electronics 2023, 12(10), 2286, 18 May 2023, https://doi.org/10.3390/electronics12102286

[31] E. Aksan et al., “A Spatio-temporal Transformer for 3D Human Motion Prediction,” in 2021 International Conference on 3D Vision (3DV), ISBN: 978-1-6654-2688-6.

[32] Yujun Cai, Lin Huang, Yiwei Wang, Tat-Jen Cham, Jianfei Cai, Junsong Yuan, Jun Liu, Xu Yang, Yiheng Zhu, Xiaohui Shen, et al. Learning progressive joint propagation for human motion prediction. In European Conference on Computer Vision, pages 226–242. Springer, 2020.

[33] Wei Mao, Miaomiao Liu, and Mathieu Salzmann. History repeats itself: Human motion prediction via motion attention. In ECCV, 2020.

[34] Takano, M.; Taka, F. Fancy avatar identification and behaviors in the virtual world: Preceding avatar customization and succeeding communication. Comput. Hum. Behav. Rep. 2022, 6, 100176.

[35] Pauw, L.S.; Sauter, D.A.; van Kleef, G.A.; Lucas, G.M.; Gratch, J.; Fischer, A.H. The avatar will see you now: Support from a virtual human provides socio-emotional benefits. Comput. Hum. Behav. 2022, 136.

[36] Nabity-Grover, T.; Cheung, C.M.; Thatcher, J.B. Inside out and outside in: How the COVID-19 pandemic affects self-disclosure on social media. Int. J. Inf. Manag. 2020, 55, 102188.

[37] Geraets, C.; Tuente, S.K.; Lestestuiver, B.; van Beilen, M.; Nijman, S.; Marsman, J.; Veling, W. Virtual reality facial emotion recognition in social environments: An eye-tracking study. Internet Interv. 2021, 25, 100432.

[38] Marín-Morales, J.; Llinares, C.; Guixeres, J.; Alcañiz, M. Emotion Recognition in Immersive Virtual Reality: From Statistics to Affective Computing. Sensors 2020, 20, 5163.

[39] Cui, D.; Kao, D.; Mousas, C. Toward understanding embodied human-virtual character interaction through virtual and tactile hugging. Comput. Animat. Virtual Worlds 2021, 32, e2009.

[40] De Ma et al., “Darwin: A Neuromorphic Hardware Co-Processor Based on Spiking Neural Networks,” Journal of Systems Architecture, vol. 77, 17 January 2017, https://doi.org/10.1016/j.sysarc.2017.01.003

[41] Xingrun Xing et al., “SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking,” 5 July 2024, arXiv:2407.04752.

[42] Malyaban Bal and Abhronil Sengupta. SpikingBERT: Distilling BERT to train spiking language models using implicit differentiation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 10998–11006, 2024.

[43] Rui-Jie Zhu, Qihang Zhao, Guoqi Li, and Jason K. Eshraghian, “SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks,” 11 July 2024, arXiv:2302.13939.

[44] Lasse F. Wolff Anthony, Benjamin Kanding, and Raghavendra Selvan. Carbontracker: Tracking and predicting the carbon footprint of training deep learning models. arXiv preprint arXiv:2007.03051, 2020.

[45] Sami Barchid, José Mennesson, Jason Eshraghian, Chaabane Djéraba, and Mohammed Bennamoun. Spiking neural networks for frame-based and event-based single object localization. Neurocomputing, p. 126805, 2023.

[46] Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. GPT-NeoX-20B: An open-source autoregressive language model. arXiv preprint arXiv:2204.06745, 2022.

[47] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:1877–1901, 2020.

[48] Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. Rethinking attention with performers. arXiv preprint arXiv:2009.14794, 2020.

[49] Haoming Chu et al., “A Neuromorphic Processing System With Spike-Driven SNN Processor for Wearable ECG Classification,” IEEE Transactions on Biomedical Circuits and Systems, vol. 16, no. 4, August 2022, ISSN: 1940-9990.

[50] Y. Liu et al., “An 82nW 0.53 pJ/SOP clock-free spiking neural network with 40μs latency for AIoT wake-up functions using ultimate-event-driven bionic architecture and computing-in-memory technique,” in Proc. IEEE Int. Solid-State Circuits Conf., 2022, pp. 372–374.

Published

2025-04-26

Issue

Section

Articles

How to Cite

A Study on Wearable Tech Interfaces and Perception: Cognitive AI-Enabled Device. (2025). International Journal of Computer and Information Technology (2279-0764), 14(1). https://doi.org/10.24203/5xjf4286
