Ollo,

My name is Ran Li, and I'm always doing idea engineering.

PROJECTS:

The best way to learn a new language

Vocvov

Vocvov makes learning a new language easy and fun by leveraging a UGC recommendation system for multimedia content.

By empowering users to upload text mnemonics, interesting images, and video clips, we are gradually building a community for people who are struggling with language learning.

Mixed Reality In-person Social Network Assistance

So-Show

So-Show is a more intuitive way to interact with people at social events: it attaches each person's digital social profile to their physical presence in space.

By linking a personal mobile device with the system, profile edits made on the phone are reflected in mixed reality in real time.

This creates a new social networking experience in which real-time digital information supplements what would otherwise be unknown about the people present in the mixed space.
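
As a rough sketch of how that phone-to-headset sync could work: a profile edit published from the phone is fanned out to every mixed-reality client rendering that person. The hub, the fan-out queues, and the ProfileEdit fields are all illustrative assumptions, not the actual implementation.

    # Hypothetical sketch of So-Show's real-time profile sync.
    import asyncio
    from dataclasses import dataclass

    @dataclass
    class ProfileEdit:
        user_id: str
        field: str    # e.g. "interests"
        value: str

    class ProfileSyncHub:
        def __init__(self):
            self._subscribers = []    # one queue per connected MR headset

        def subscribe(self):
            """An MR headset registers to receive live profile edits."""
            q = asyncio.Queue()
            self._subscribers.append(q)
            return q

        async def publish(self, edit):
            """A phone pushes an edit; every MR client receives it at once."""
            for q in self._subscribers:
                await q.put(edit)

    async def main():
        hub = ProfileSyncHub()
        headset = hub.subscribe()
        await hub.publish(ProfileEdit("ran", "interests", "mixed reality"))
        edit = await headset.get()
        print(f"render over {edit.user_id}: {edit.field} = {edit.value}")

    asyncio.run(main())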

To buy or not to buy a Coke? SOMEBODY help me!

Personal-Assistant

This personal assistant is designed to provide people with more visual and vivid guidance to help with personal habit formation. I believe this can be a better solution than a voice assistant from a user-interaction point of view. It leverages the immersive interaction experience that AR provides to help people in a more convincing way and to deliver more effective behavior interventions.

Real-Virtual-Real Chain

Real-Virtual Balancing

The project aimed to create an experience that transfers events between reality and virtual space. Actions initiated in reality change the virtual world, which afterwards creates real-world effects of its own. Control is no longer a single direction that always originates from reality. Just as the Turing Test is for AI, the ultimate test for mixed reality will be whether we can tell which pieces are real and which are virtual when we watch dominoes falling, one pushing the next.

Mixed Reality Language Learning Assistance

Hololingo

An augmented reality application that helps students learn languages.

The application takes voice input from teachers, who set up the language-learning goals by looking at real-world objects and binding pronunciations to them. After this setup stage, students look for these objects in the real world, listening for the spatial sound sources as guidance. Triggering a genuine 'discovering' mindset through mixed reality and virtual spatial sound is an effective way to learn vocabulary.
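
As a rough illustration of the binding idea, here is a minimal Python sketch; the WordAnchor fields and the distance-based volume rule are my assumptions for illustration, not the application's actual design.

    # A teacher's recorded pronunciation is anchored to an object's position;
    # playback gets louder as the student closes in on the hidden sound source.
    import math
    from dataclasses import dataclass

    @dataclass
    class WordAnchor:
        word: str           # e.g. "chair"
        audio_clip: str     # teacher's recorded pronunciation
        position: tuple     # object's (x, y, z) in the room

    def playback_volume(anchor, student_pos, max_range=8.0):
        """Simple volume falloff: louder as the student gets closer."""
        dist = math.dist(anchor.position, student_pos)
        return max(0.0, 1.0 - dist / max_range)

    chair = WordAnchor("chair", "chair_teacher.wav", (2.0, 0.0, 3.0))
    print(playback_volume(chair, (0.0, 0.0, 0.0)))  # ~0.55: warmer, keep looking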

Hospital Remote Control Interface

Home-care-robot

Robots capable of 'sensing' patients' homes are deployed to their houses after the patients are discharged from the hospital, for follow-up medical care.

The robots' 'sensing' system includes a map-generation module that creates maps of the patients' homes, enabling better observation by the hospital, and a human-tracking module through which the robots can also 'sense' the patients themselves.

Through the system, doctors can remotely observe multiple patients in their homes without leaving the hospital, and provide help in time.
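
To make the map-generation idea concrete, here is a toy sketch of accumulating the robot's range readings into a 2D occupancy grid that a hospital could view; the grid size, resolution, and update rule are assumptions for illustration, not the deployed system.

    # Each range reading is converted into a grid cell and marked occupied.
    import math

    GRID = 50    # 50 x 50 cells
    RES = 0.1    # 10 cm per cell
    occupancy = [[0 for _ in range(GRID)] for _ in range(GRID)]

    def mark_hit(robot_x, robot_y, angle_rad, dist_m):
        """Project one range reading into the grid and bump its hit count."""
        hx = robot_x + dist_m * math.cos(angle_rad)
        hy = robot_y + dist_m * math.sin(angle_rad)
        i, j = int(hx / RES) + GRID // 2, int(hy / RES) + GRID // 2
        if 0 <= i < GRID and 0 <= j < GRID:
            occupancy[j][i] += 1    # higher count = more confident it's a wall

    # One simulated scan: a wall roughly 1 m in front of the robot.
    for deg in range(-30, 31):
        mark_hit(0.0, 0.0, math.radians(deg), 1.0)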

EMG Human-Computer Interfaces

Adaptive EMG controller

The project's goal is to replace traditional hand-operated control interfaces such as joysticks and keyboards, so that people with no limb mobility are better served.

The control interface module takes in multiple control signals from facial expressions via the Emotiv headset toolkit, processing the raw signals into filtered outputs.
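
As a sketch of that raw-to-filtered step, something like a moving-average filter plus a threshold turns a noisy expression channel into a clean on/off command; the window size and threshold here are illustrative, not the values from the real pipeline.

    # Smooth a noisy facial-expression channel, then threshold it.
    from collections import deque

    class ExpressionFilter:
        def __init__(self, window=10, threshold=0.6):
            self.samples = deque(maxlen=window)
            self.threshold = threshold

        def update(self, raw):
            """Feed one raw sample; return True when the smoothed signal fires."""
            self.samples.append(raw)
            smoothed = sum(self.samples) / len(self.samples)
            return smoothed > self.threshold

    smile = ExpressionFilter()
    stream = [0.2, 0.9, 0.1, 0.8, 0.9, 0.95, 0.9, 0.85, 0.9, 0.9]
    commands = [smile.update(s) for s in stream]  # isolated spikes are ignored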

The modularized interface was adapted to control a wheelchair's navigation system, so people who cannot use their limbs (e.g., locked-in syndrome) can still have some mobility. Wheelchair users were able to navigate relatively complex environments through the new interface. By recording the control data together with the wheelchair's trajectory, we also explored optimizations that utilize this history data.

The interface can also be adapted to personal computers for web browsing, which is convenient for people who cannot use a mouse or keyboard. Users can browse a range of predefined websites with basic navigation abilities.

The same controller can also be easily adapted to become a control interface for robotic systems.

US Defense Advanced Research Projects Agency (DARPA) Robotics Challenge

DARPA Atlas

I worked on the three-door task as part of the 2013-2014 DARPA challenge. It requires the robot to locate three doors, open them, and walk through each one.

The task used point cloud data collected by a rotating 360-degree LIDAR sensor. The robot identified the doors and the positions of their handles; human operators then controlled the robot to open the doors and walk through them.
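
Since doors show up as large, near-vertical planes in the point cloud, plane segmentation is the natural first step. Here is a hedged sketch using RANSAC plane fitting via the open3d library; the actual DRC pipeline used different tooling, and the scan file name is a placeholder.

    # Fit the dominant plane; its inliers are a door candidate, and the
    # points just off the plane are where a handle would be found.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.pcd")   # placeholder LIDAR scan

    plane_model, inlier_idx = pcd.segment_plane(distance_threshold=0.02,
                                                ransac_n=3,
                                                num_iterations=1000)
    a, b, c, d = plane_model
    print(f"door-candidate plane: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

    door = pcd.select_by_index(inlier_idx)               # points on the plane
    rest = pcd.select_by_index(inlier_idx, invert=True)  # handle lives here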

A Low Cost Object Detection System

LIDAR imitation

A mixed sensor system combining IR and ultrasonic sensors is built in a highly configurable design that covers a 360-degree detection range. It is designed as an alternative to an expensive 360-degree LIDAR, lowering the cost from $3,000 to less than $300.
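
A rough sketch of the sweep-and-fuse loop: step a mount through 360 degrees, read both sensors at each angle, and keep whichever reading each sensor is better at. The step size, crossover range, and hardware-access functions are hypothetical placeholders.

    import time

    STEP_DEG = 2    # angular resolution of the sweep

    def read_ir(angle_deg):           # placeholder: the real driver reads an ADC
        return 0.5                    # metres (simulated)

    def read_ultrasonic(angle_deg):   # placeholder: the real driver times an echo
        return 2.0                    # metres (simulated)

    def scan():
        ranges = {}
        for angle in range(0, 360, STEP_DEG):
            ir, us = read_ir(angle), read_ultrasonic(angle)
            # IR is accurate up close; ultrasonic covers longer distances.
            ranges[angle] = ir if ir < 0.8 else us
            time.sleep(0.005)         # let the mount settle between readings
        return ranges

    print(scan()[0])    # range at 0 degrees, in metres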

A Geo-location Based Opinion Sharing Interface In Audio Format

Post-it

The goal is to use real-world information (personal voices and geo-locations) to augment the traditional text-based post-sharing system and encourage people to interact in a more personal way.

An opinion-sharing platform was designed for this purpose, helping people post their thoughts on Shakespeare's literary works.

The platform takes audio recordings that people upload, combined with traditional text posts. It also pins each poster's geo-location onto a world map, so people learn more about their peers and feel more comfortable sharing their own thoughts.
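
A minimal sketch of what one entry on the platform combines; the field names are illustrative, not the real schema.

    from dataclasses import dataclass

    @dataclass
    class OpinionPost:
        author: str
        text: str         # the traditional text post
        audio_url: str    # the uploaded voice recording
        lat: float        # pin position on the world map
        lon: float

    post = OpinionPost("ran", "Hamlet's hesitation reads differently to me.",
                       "/audio/hamlet_take.mp3", 42.36, -71.06)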

Memorialize the dead in a digital way

Digital Cemetery

How can we memorialize the dead in a generation that is used to digitizing almost anything into bits and bytes through cameras?

This project is an experiment in providing a new way for people to keep memories of their beloved family and friends in digital form. People can view images of the tombs from maps of the cemeteries.

170 tombs built 150 years ago in Millbury Cemetery (Millbury, MA, US) have been uploaded in this way.

Motivation:

My research interests are the interaction bridges from human to machine, human to information, and human to human.

I believe interfaces should be powered by new 'sensing' technologies, and that they should be capable of learning from interaction history data to encourage positive interactions and to lower costs and barriers.

I spent most of my college years building interfaces between humans and computers/robots. I experimented with EEG and EMG technologies combined with voice control and built an interface through which people with no limb control can still operate devices well. This interface was adapted to control electric wheelchairs, a vehicle robot base, and web browsing. I had the amazing opportunity to participate in projects like the DARPA Humanoid Robot Project and NASA Robo-Ops.

After graduation, I started working in User Experience Development at Fidelity. Working there taught me a lot about building interfaces that people are drawn to interact with, through user research and in-person testing. I became interested in building new interfaces that encourage human interaction.
I experimented with and developed a few web-based applications. One of them is an opinion-sharing interface in audio format, with the posters' geo-locations pinned on a world map. The goal was to create a different sharing experience, one that was geo-location-based and audio-formatted. With opinions posted on a geo-location-based world map, it was amazing to see people post different opinions shaped by their cultures.

Mixed reality control and interaction interfaces are my current focus. Building on audio interfaces with 3D spatial sound, I created a discovery-oriented language-learning interface that simulates the natural learning experience. I also built a real-time, in-person social interaction tool that smooths interactions between strangers by rendering their profile information next to their bodies, following them in real time. This reduces the time cost of introductions and social ice-breaking, so people can start a meaningful conversation without the somewhat awkward initial greeting stage. By seeing others' interests directly, people are effectively encouraged to talk to those who share them. I believe this type of interface will dramatically lower the cost of in-person communication in groups where people know little about each other but wish to have enjoyable, meaningful conversations. I was also fortunate to collaborate on Mindful Wearable, a project born in the MIT Media Lab's Fluid Interfaces group, which creates mixed-reality interventions on human behavior.

There's no denying that current AR hardware, in both physical size and computing power, is still a few years away from complete adoption in certain areas. But that actually makes this a perfect time and opportunity for expanding and pushing the edge of understanding and applications in these areas.

Interfaces should eventually have memories of their own. These memory histories are the foundation of the AI behind such interfaces. I experimented with integrating memory into the control interface on the wheelchair project: the control histories act as parallel inputs, just as the direct control inputs do. For a probability-based interface controller, this can filter out control noise and produce more accurate outcomes.
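
Here is a toy sketch of that 'history as a parallel input' idea: the noisy instantaneous decoding is fused with a prior built from the user's control history, and the history breaks the tie. All probabilities are made up for illustration.

    # Fuse the noisy instantaneous estimate with a prior from control history.
    def fuse(current, history_prior):
        """Pick the command maximizing P(signal | cmd) * P(cmd | history)."""
        scores = {cmd: p * history_prior.get(cmd, 1e-6)
                  for cmd, p in current.items()}
        return max(scores, key=scores.get)

    # Instantaneous EMG decoding is ambiguous between "left" and "forward"...
    current = {"left": 0.40, "forward": 0.38, "right": 0.22}
    # ...but in this corridor the user has almost always driven forward.
    history_prior = {"left": 0.10, "forward": 0.80, "right": 0.10}
    print(fuse(current, history_prior))   # -> "forward"; history removes the noise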

Bio

Worcester Polytechnic Institute (2010 – 2014), Worcester
Bachelor's degree, Electrical and Computer Engineering && Minor in Computer Science
GPA: 3.97/4.0

EMC (2012.1-2012.6), Framingham
Software Developer Co-op

Wayfair (2014-2015), Boston
Software Developer

Fidelity User Experience Department, Boston (2015-2017)
User Interface Developer

Highland Times Technology, Beijing, China (2017-2019)
Tech Cofounder

Award:

Salisbury Prize Award 2014:
Awarded to 22 out of 1,000 senior students at Worcester Polytechnic Institute

NASA Robo-Ops robotic challenge 2013: 3rd Place

Publication:

Augmenting a voice and facial expression control of a robotic wheelchair with assistive navigation
2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Authors: Dmitry A. Sinyukov, Ran Li, Runzi Gao, Nicholas W. Otero, Taşkın Padir

Press

At WPI, a push to make smart wheelchairs