I'm interested in everything around AI engineering and research.
- I am interested in how to use current state-of-the-art artificial intelligence systems (machine learning, large language models, data science, statistics, …) in practice for the benefit of others: automating mundane tasks, science, technology development, mathematics, healthcare, preventing various risks, chatbot assistants grounded in knowledge, etc., using LlamaIndex, LangChain, AutoGen, vector databases, and more. (Courses: DeepLearning.AI; Practical Deep Learning for Coders; Stanford CS229: Machine Learning)
- I am interested in how to build good AI models from scratch in PyTorch, or not quite from scratch, for example by fine-tuning. I enjoy building large language models and other deep learning models from scratch with PyTorch, Keras, fastai, etc. (I really love Neural Networks: Zero to Hero by Andrej Karpathy, who worked at OpenAI and Tesla and taught at Stanford.)
- I am interested in trying to understand current and future AI systems mathematically and empirically: why and how they work, and how to make them much more reliable, robust, steerable, creative, intelligent, safe, etc. across all levels of their development! A better steering wheel for AI systems would be great! RLHF, prompt engineering, systems made of LLMs, and current reverse-engineering methods don't seem to be enough! Mechanistic interpretability, the neurosymbolic paradigm, the weak-to-strong generalization paradigm, and formal verification sound promising! (A Comprehensive Mechanistic Interpretability Explainer & Glossary - Dynalist)
- I am interested in the future, with a focus on the future of AI: not just the state of the art of the technology itself, but what implications it has for our systems of politics, culture, economy, governance, other technologies, etc. In general I think about how to make the world better for everyone with increasing automation. How can it go well and have a positive impact on our future? What political changes will we have to make? What risks exist? Where can we make the most beneficial progress? I don't want to see people suffer; maybe something like universal basic income or universal basic services will be needed to cope with job loss from increasing automation, so that technology generates abundance for the benefit of all, not just a select few. How do we minimize power concentration in the hands of the few? I'm trying to find solutions! I really care about a future filled with a ton of free, fulfilled sentient beings flourishing, instead of dystopias and catastrophes! (David Shapiro)
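The knowledge-grounded assistants and vector databases mentioned in the first bullet mostly boil down to similarity search over embeddings. A minimal pure-Python sketch of that retrieval step, with hand-made toy vectors standing in for a real embedding model:

```python
# Toy sketch of retrieval for a knowledge-grounded assistant: find the
# stored "document" whose embedding is most similar to the query.
# The embeddings below are made-up toy vectors, not from a real model.
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Tiny "vector database": text paired with a toy embedding.
docs = {
    "PyTorch is a deep learning framework.": [0.9, 0.1, 0.0],
    "Universal basic income is a policy idea.": [0.0, 0.2, 0.9],
}

def retrieve(query_vec):
    # Return the document text with the highest similarity to the query.
    return max(docs, key=lambda text: cosine(query_vec, docs[text]))

print(retrieve([0.8, 0.2, 0.1]))  # → PyTorch is a deep learning framework.
```

A real setup would get embeddings from a model and store them in a vector database (for example via LlamaIndex or LangChain), but the principle is the same.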
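The "from scratch" spirit of the second bullet starts very small: the counting-based character-bigram model that opens Karpathy's Neural Networks: Zero to Hero, sketched here in pure Python (the training names are toy data I made up, not the course's real dataset):

```python
# Character-bigram language model "from scratch": count which character
# follows which, then turn counts into conditional probabilities.
from collections import defaultdict

names = ["emma", "ava", "anna"]  # toy stand-in for a real name dataset

# counts[a][b] = how often character b follows character a;
# "." marks the start and end of a name.
counts = defaultdict(lambda: defaultdict(int))
for name in names:
    chars = ["."] + list(name) + ["."]
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1

def prob(a, b):
    # P(next char = b | current char = a), estimated from the counts.
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

print(prob("a", "."))  # how often a name ends after 'a' → 0.6
```

The course then replaces these counts with learned parameters in PyTorch and works up to a full transformer, but the probabilistic idea is already here.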
And many other things; see my website, burnyverse.com.