
synthesis.js's Introduction

About me

Katsuomi Kobayashi, a.k.a. korinVR (he/him), is a Unity developer, software engineer, and VR enthusiast. I got an Oculus Rift DK1 in 2013 and started creating VR experiences and exhibiting them at events on my own; before I knew it, this had become my main job. After working on several commercial VR projects, I joined Hashilus in 2016, where I was the main programmer of "Salomon Carpet" and "VR Attack on Titan THE HUMAN RACE" at VR PARK TOKYO in Shibuya. In December 2019, I joined XVI Inc. and started developing "AniCast Maker" for Oculus Quest (released in April 2021). I am a co-author of "VR Content Development Guide 2017" and have spoken at XR Kaigi 2020, CEDEC 2021, and XR Kaigi 2021. You can read about my activities in detail on my website (in Japanese).

📫 Get in touch

My primary social media account is Twitter, @korinVR (in Japanese). I also have an English Twitter account, @korinVR_en, but it is less active.

synthesis.js's People

Stargazers

12 stargazers

Watchers

3 watchers

Forkers

micahscopes

synthesis.js's Issues

Audio latency on web browsers

I set the Web Audio script processor buffer size to 1024 samples (23.2 ms at 44.1 kHz), but this is too large for real-time performance. A 512-sample buffer causes occasional noise, and a 256-sample buffer produces nothing but noise in Chrome on my PC.
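For reference, a minimal sketch (not code from this repository) of how the script processor buffer size maps to latency; the figures above follow from bufferSize / sampleRate. Note that ScriptProcessorNode has since been deprecated in favor of AudioWorklet, and that on current Chrome the context must also be unlocked by a user gesture, as the issues below describe.

    // Sketch: how the script processor buffer size translates to latency.
    const context = new AudioContext();
    const bufferSize = 1024; // 512 gives occasional glitches, 256 constant noise

    // Each processing buffer adds bufferSize / sampleRate seconds of latency:
    // 1024 / 44100 ≈ 23.2 ms, 512 ≈ 11.6 ms, 256 ≈ 5.8 ms.
    console.log(`buffer latency: ${(1000 * bufferSize / context.sampleRate).toFixed(1)} ms`);

    // 0 input channels, 1 output channel.
    const processor = context.createScriptProcessor(bufferSize, 0, 1);
    let phase = 0;
    processor.onaudioprocess = (event) => {
      const output = event.outputBuffer.getChannelData(0);
      for (let i = 0; i < output.length; i++) {
        output[i] = 0.2 * Math.sin(phase); // 440 Hz test tone
        phase += (2 * Math.PI * 440) / context.sampleRate;
      }
    };
    processor.connect(context.destination);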

If you want to create a truly "playable" instrument, you should choose an appropriate native platform and programming language instead. For example, Core Audio on iOS with Objective-C/C++ can sustain a stable 128-sample buffer (about 3 ms). That is why the iPhone has so many musical instrument apps.

ref. Android’s 10 Millisecond Problem explained | Hacker News

VirtualKeyboard is broken

Due to the autoplay restriction introduced in Chrome 71, we need to create the AudioContext when a key is first pressed (see the sketch under the next issue).

Create audio context only after a user gesture

I shared the showcase page with a friend and he couldn't play any sounds. It turned out that the audio context was created before any user gesture, so it was created in the suspended state and never produced sound.

You could add a getAudioContext() function that creates the context only when it is first needed (i.e. when the user presses a button or a key), stores it in a global variable, and returns the cached instance on subsequent calls. Alternatively, you could listen for user gestures and call context.resume() at least the first time such an event fires.
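A minimal sketch combining both suggestions; getAudioContext() is the hypothetical helper named above, not existing code in this repository.

    // Create the AudioContext lazily, on the first user gesture.
    let audioContext = null;

    function getAudioContext() {
      if (audioContext === null) {
        // Constructed inside a user-gesture handler, so Chrome lets it run.
        audioContext = new AudioContext();
      }
      // If it was still created in the suspended state, resume it on a gesture.
      if (audioContext.state === 'suspended') {
        audioContext.resume();
      }
      return audioContext;
    }

    // Example: the virtual keyboard creates the context on the first key press.
    document.addEventListener('keydown', () => {
      const context = getAudioContext();
      // ... start a voice using `context` ...
    });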
