Intel outlines ESP project

Everyday Sensing and Perception initiative aims to make computers more 'aware'

Intel has revealed details of its Everyday Sensing and Perception (ESP) project which aims to make computers more context-sensitive.

Andrew Chien, director of Intel's Corporate Technology Research unit, told the Intel Developer Forum in Shanghai that ESP would "drive fundamental research advances that enable computing systems to become more aware in everyday activities and environments".

Computers excel at analytical tasks such as automation, complex analysis, modelling and simulation, but lack the 'human touch'.

Intel is working closely with universities and research labs to help "simplify and enrich all aspects of work and daily life".

The improved use of new and existing sensors and inferences will allow a higher level of understanding from raw data, and Chien believes that ESP could trigger a "Cambrian" explosion in the world of computing.

Chien identified four research projects on which ESP will focus to achieve "90 per cent accuracy for 90 per cent of the day".

The first is 'Laugh', which looks at interaction and social behaviour. By using sounds, motion and images to become more aware of the user's activity, smart applications can suggest related information or appropriate music, add comments, or point out relevant topics.

The next is 'Learn', which seeks to better understand our interests and motivations, giving systems a clearer idea of our goals and current capabilities.

Improved analysis of previous research and current topics of interest would allow systems to better guide and educate users rather than just provide reams of information, much of which will be inappropriate for that user at that time.

Third is 'Touch', which looks to bridge the gap between the physical and virtual world. As robotics becomes more advanced, computers need to better understand physical objects and the dynamics of the real world.

This includes the ability to recognise specified objects as well as move and manipulate them with the correct amount of force and speed.

The final focus point is 'Move', which deals with understanding location and physical context.

Location-aware systems are increasingly popular, but further integration of systems such as GPS and image recognition would enable devices to garner a level of understanding about the user's situation and provide relevant advice and content accordingly.

Chien concluded that, through close collaboration with other institutions, devices and systems will be able to use high-level semantics to understand and become aware of the world around them and the needs of the user.