Deep convolutional neural networks for large-scale speech tasks. Neural Networks, (64):39--48, Elsevier, 2015. [PUMA: deep learning, neural nets, speech, LVCSR recognition]
Deep learning for robust feature generation in audiovisual emotion recognition. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3687--3691, 2013. [PUMA: deep learning, deep belief network, audiovisual, feature selection, IEMOCAP]
Deep learning in neural networks: An overview. Neural Networks, (61):85--117, Elsevier, 2015. [PUMA: deep learning, overview, history, theoretical background]
Demonstrating PalmTouch: The Palm as an Additional Input Modality on Commodity Smartphones. Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, ACM, New York, NY, USA, 2018. [PUMA: palm, capacitive image, smartphone, machine learning, visus:mayersn, visus:leht, vis(us), vis-sks, visus:henzens]
Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016. [PUMA: deep learning, CNN, DenseNet, CIFAR, ImageNet]
Embedding Virtual and Remote Experiments Into a Cooperative Knowledge Space. The 2008 Frontiers in Education Conference (FIE 2008), Saratoga Springs, NY, October 2008. [PUMA: virtual laboratories, remote experiments, Web services, collaborative learning]
Identification and Off-Policy Learning of Multiple Objectives Using Adaptive Clustering. Neurocomputing, (263), 2017. [PUMA: reinforcement learning, off-policy learning, Q-learning, multiobjective, adaptive clustering]
InfiniTouch: Finger-Aware Interaction on Fully Touch Sensitive Smartphones. Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, ACM, New York, NY, USA, 2018. [PUMA: finger-aware interaction, touchscreen, machine learning, visus:mayersn, visus:leht, vis(us), vis-sks, visus:henzens]
Joint Self-Supervised Image-Volume Representation Learning with Intra-Inter Contrastive Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 2023. [PUMA: learning, transformer]
Learning salient features for speech emotion recognition using convolutional neural networks. IEEE Transactions on Multimedia, (16)8:2203--2213, IEEE, 2014. [PUMA: speech emotion recognition, salient feature learning, CNN, discriminative analysis, classification, DES, MES, Emo-DB, SAVEE]
One Model To Learn Them All. arXiv preprint arXiv:1706.05137, 2017. [PUMA: deep learning, multi-domain model, multimodal]
PalmTouch: Using the Palm As an Additional Input Modality on Commodity Smartphones. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 360:1--360:13, ACM, New York, NY, USA, 2018. [PUMA: palm, capacitive image, smartphone, machine learning, visus:mayersn, visus:leht, vis(us), vis-sks, visus:henzens]
Predicting economic growth with stock networks. Physica A: Statistical Mechanics and its Applications, (489):102--111, January 2018. [PUMA: econophysics, economic growth, stock networks, machine learning, Naïve Bayes classifier]
Towards End-To-End Speech Recognition with Recurrent Neural Networks. ICML, (14):1764--1772, 2014. [PUMA: speech recognition, end-to-end learning, RNN]
Variational Autoencoders for Learning Latent Representations of Speech Emotion. arXiv preprint arXiv:1712.08708, 2017. [PUMA: variational autoencoder, auto-encoder, representation learning, emotion recognition, IEMOCAP]