
What's Missing?

As we saw on the previous page, there are numerous mobile translation tools, ranging from online text and photo translation on our phones and devices to real-time translation earbuds and audio-assisted translation paired with our smartphones.

In all cases, the translation has been either inconsistent or not very natural in terms of timing and fluency between the speakers. The earbuds, while promising, do not facilitate a natural interaction between individuals and are time-consuming and clunky. They also do not give the speakers a chance to speak the foreign language themselves, although that is a separate issue entirely.

What I would like to see: 

1. Improved real-time translation

2. More natural integration and interaction between speakers

3. A more inconspicuous and mobile technology, in the form of immersive lenses/screens or a microscopic implant

4. Translation technology that can assist in casual situations, rather than just polite and formal ones

5. The ability for the translating tech to:

状況を読む (Jōkyō o yomu), or rather "read the situation", and be able to pick up on subtle differences between various signals, gestures and other forms of unspoken communication

6. The ability for deep learning technology to process more slang and colloquial expressions (a rough sketch of what trying this out looks like follows below)
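As a quick illustration of point 6, here is a minimal sketch of how you might poke at this yourself with the open-source Hugging Face transformers library. It assumes the community Helsinki-NLP/opus-mt-ja-en model is available; that particular model is just an example I picked for the sketch, not a recommendation, and the point is only to compare how one off-the-shelf model handles a formal phrase versus a slangy one.

# A minimal sketch (assumes the "transformers" library is installed and the
# community Helsinki-NLP/opus-mt-ja-en model can be downloaded).
# It simply feeds one formal and one colloquial Japanese sentence to an
# off-the-shelf model and prints the English output for comparison.

from transformers import pipeline

# Load a generic Japanese-to-English translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ja-en")

phrases = {
    "formal":     "この料理は大変おいしいです。",        # "This dish is very delicious."
    "colloquial": "やばい、これマジでうまいんだけど。",  # slangy: "Wow, this is seriously good."
}

for register, text in phrases.items():
    result = translator(text)
    print(f"{register}: {result[0]['translation_text']}")

Comparing the two outputs side by side is a quick way to see how well (or poorly) a given model copes with register and slang, which is exactly the gap I would like deep learning translation to close.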
