
Interview with Keynote Speaker Dr. Stephan Sigg


Stephan Sigg deals with what happens behind the chatbot conversation, the smartwatch, or the personalized recommendations we get on Netflix – a wide and far more complex world. It is the world of mathematical patterns and sensor technologies that enables machines to interact in a context-sensitive and natural way. Let's dive deeper and take the opportunity to talk with him about these questions:

H. Maier: First, could you explain to those who are not yet familiar with information technology but want to understand: How do we get from light sensors on my lamp to individual information from a home robot, like “Hello Mr. Sigg – your lightbulb is not working well anymore. Shall I order a new one from Amazon for you? You still have the Christmas voucher from your sister.”

S. Sigg: First of all, thank you for the nice and spot-on introduction and for the opportunity to discuss and share ideas with the experts, theoreticians and practitioners attending the Information Energy 2018 conference. I am very much looking forward to this event. In your question, you are comparing two systems that can be perceived as services adapting to changing environmental conditions. The lamp exploits a simple actuator to establish a perception of intelligent behavior: in low light conditions, the light sensor controls the current flowing through the circuit. These elements determine the amount of current to pass through based on the amount of light they detect.



Intelligent system – simple everyday example

Despite the seemingly intelligent behavior of such a system, it is indeed passive and triggered by environmental input to the actuator. Similar to a mechanical light switch that is triggered by the intensity of physical force, the light sensor acts as a switch that is triggered by the intensity of light.
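The passive, switch-like behavior described above can be sketched in a few lines. This is a minimal illustration with invented values (the lux threshold and current range are assumptions, not real lamp specifications): below a brightness threshold, the sensor lets current flow; above it, the lamp stays off.

```python
LUX_THRESHOLD = 50  # assumed trigger level in lux (hypothetical)

def lamp_current(ambient_lux: float, max_current_ma: float = 20.0) -> float:
    """Return the current (mA) the light sensor lets through the circuit."""
    if ambient_lux >= LUX_THRESHOLD:
        return 0.0  # bright enough: the "switch" stays open, lamp off
    # darker ambient light -> proportionally more current to the lamp
    return max_current_ma * (1.0 - ambient_lux / LUX_THRESHOLD)

print(lamp_current(80))  # daylight -> 0.0
print(lamp_current(0))   # darkness -> 20.0
```

The point of the sketch is that there is no model of the world here at all: the output is a fixed function of one environmental input, exactly like the mechanical switch in the analogy.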

The robot, on the other hand, is a much more advanced system in many respects. It interacts using natural human language; it knows my name and my social network. It detects an anomaly – a deviation from normal operation – and makes suggestions based on knowledge about my usual shopping preferences, the existing private inventory of the product in demand, and different payment options.

Intelligent system example with an artificial assistant (e.g. chatbot)

The way we get to agents with such a level of intelligence is data – huge amounts of data: from profile-type data such as names, ages, and addresses of individuals, to social network groups, interactions, and resources (items in the fridge, light bulbs in the cupboard, …), to streams of continuous sensor readings from all kinds of environmental and on-body sensors in order to detect such anomalies. [3]

To detect an anomaly, the system must first establish a perception of normal behavior. For this, feature values are extracted from sensed data streams. For instance, reduced light intensity, squealing sounds emitted from the bulb, or increased flickering. In a multi-dimensional feature space, an anomaly is spotted by feature values that fall far off the typical feature samples. The reaction of the system to such an anomaly might then follow a number of rules based on the mentioned profile type, social network and inventory data. So, data is the key to enabling such seemingly intelligent behavior. 
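The anomaly-detection idea above can be made concrete with a small sketch. All numbers are invented for illustration: a history of "normal" bulb readings (light intensity, flicker rate) defines the typical region of the feature space, and a new sample far from it, measured in per-feature z-scores, is flagged as an anomaly.

```python
import statistics

# Hypothetical "normal" samples of (light intensity, flicker rate)
NORMAL = [(800, 0.10), (790, 0.12), (805, 0.09), (795, 0.11), (810, 0.10)]

def is_anomaly(sample, history=NORMAL, threshold=3.0):
    """Flag a sample whose value in any feature dimension lies more than
    `threshold` standard deviations from the mean of the normal history."""
    for i, value in enumerate(sample):
        column = [h[i] for h in history]
        mean, stdev = statistics.mean(column), statistics.stdev(column)
        if stdev and abs(value - mean) / stdev > threshold:
            return True
    return False

print(is_anomaly((798, 0.10)))  # typical reading -> False
print(is_anomaly((420, 2.50)))  # dim, flickering bulb -> True
```

Real systems use richer features and multivariate models, but the principle is the same: "normal" is learned from data, and an anomaly is whatever falls far outside it.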

H. Maier: Can you name three examples of future technologies that will change our daily life? 

S. Sigg: If I had to pick three, it would be Artificial Intelligence, 5G, and the bundle of security/privacy/authentication.

First: Artificial Intelligence is slightly hyped nowadays, due largely to the impressive advances in deep learning methods in various domains. With the help of vast amounts of data and enormous computational power, the accuracy of deep learning classifiers has advanced significantly. In addition, transfer learning and zero-shot learning paradigms have significant potential to further improve the perceived performance of classification algorithms.


Transfer learning attempts to re-use trained models in similar domains so that less training is required in a new domain. Zero-shot learning enables the learning of a new (not previously trained) concept given a semantic knowledge database in which the new concept is described.
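The zero-shot idea described above can be illustrated with a toy example. Everything here is invented for the sketch: a semantic knowledge base describes each concept by a handful of attributes, so a class never seen in training ("smart bulb") can still be recognized by matching an observation to the closest attribute description.

```python
# Semantic knowledge base: each concept described by attribute values
# (emits_light, has_screen, networked). All entries are hypothetical.
KNOWLEDGE_BASE = {
    "lamp":       (1, 0, 0),
    "television": (1, 1, 1),
    "smart bulb": (1, 0, 1),  # never trained on; known only by description
}

def zero_shot_classify(observed_attributes):
    """Assign the concept whose semantic description is closest
    (squared distance) to the observed attribute vector."""
    def distance(description):
        return sum((a - b) ** 2 for a, b in zip(observed_attributes, description))
    return min(KNOWLEDGE_BASE, key=lambda name: distance(KNOWLEDGE_BASE[name]))

print(zero_shot_classify((1, 0, 1)))  # -> "smart bulb"
```

In practice the attribute vectors come from learned embeddings rather than hand-written tuples, but the mechanism is the same: the semantic description stands in for missing training data.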

A prominent example of the high potential of AI applications is autonomous driving, which already shows impressive results in current realizations. Another good example, exploiting speech processing, is the live translation of spoken input, and there are many further recent adaptations of AI in various domains that hold great potential. These technologies will significantly change our way of life, how we use our time, and with whom we interact.

Second: As a second example of future technologies that will change our daily life, I believe that 5G and, more generally, the proliferation of the IoT will have a disruptive impact. One promise of 5G is a unified standard (though still different for device classes and distinct frequency ranges) ranging from tiny IoT devices to vehicular communication.

By simplifying communication across devices of all classes, 5G can build the backbone of a ubiquitous network among devices that seamlessly share data and processing load. Furthermore, this unified interface can work for further services that use the existing infrastructure. For instance, think of device-free radio sensing, a paradigm that exploits fluctuations in, e.g., signal strength, phase, and frequency spectrum for environmental perception. Together with a ubiquitous 5G instrumentation, it has the potential to turn virtually every electronic device into a connected environmental sensor, able to track presence, activities, gestures and more. [4]
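A minimal sketch of the device-free sensing principle, with invented signal values: a person moving through a radio link perturbs the received signal strength (RSSI), so a simple sliding variance over the RSSI stream can already separate an empty room (stable signal) from presence (fluctuating signal). Real systems such as those in [4] use far richer channel features, but the intuition is the same.

```python
import statistics

def presence_detected(rssi_window, variance_threshold=2.0):
    """Flag presence when RSSI fluctuation (variance, in dB^2) exceeds
    a threshold. Threshold and readings are hypothetical."""
    return statistics.pvariance(rssi_window) > variance_threshold

empty_room    = [-60.1, -60.0, -60.2, -59.9, -60.1]  # stable link
person_moving = [-60.0, -55.3, -63.8, -57.1, -61.5]  # multipath churn

print(presence_detected(empty_room))     # -> False
print(presence_detected(person_moving))  # -> True
```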

Finally: any service in the IoT and among connected personal devices can only be successful if it establishes trust in how data is managed, especially as regards confidentiality and availability.
Authentication and encryption are therefore of immense importance. But the solutions in use nowadays to ensure authentication and security have not kept up with the immense advances in information technology, miniaturization and device proliferation. With the increasing number of devices with which we interact routinely, authentication and encryption are required that seamlessly integrate into daily routines.

For instance, common solutions for providing user-friendly authentication on mobile devices make significant compromises in security. Pattern-based inputs, for instance, are easily overcome via shoulder surfing or smudge attacks, while biometrics cannot withstand any targeted attack, as the biometric token used (e.g. fingerprints, iris, gait,…) is continuously observable with contemporary image and video technology.

Usable security schemes exploiting, for instance, fuzzy cryptography and multiple implicit feature patterns might, on the other hand, also be able to provide seamless context-based authentication among interface-less devices. [2]
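The core intuition behind such fuzzy, implicit schemes can be sketched in a toy example. This is a greatly simplified illustration loosely inspired by the idea behind gait-based pairing as in BANDANA [2], not its actual protocol: two devices worn on the same body quantize their gait measurements into bit fingerprints; noisy sensing makes the bits similar but not identical, so authentication tolerates a few mismatching bits instead of requiring exact equality.

```python
def gait_fingerprint(accel_samples, threshold=0.0):
    """Quantize acceleration samples into a bit fingerprint
    (1 if above threshold, else 0). Purely illustrative."""
    return [1 if s > threshold else 0 for s in accel_samples]

def same_body(bits_a, bits_b, max_mismatch=2):
    """Fuzzy match: accept if at most `max_mismatch` bits differ."""
    mismatches = sum(a != b for a, b in zip(bits_a, bits_b))
    return mismatches <= max_mismatch

# Hypothetical readings from two sensors on the same walking person:
hip   = gait_fingerprint([0.4, -0.2, 0.7, -0.9, 0.3, -0.1, 0.6, -0.5])
wrist = gait_fingerprint([0.5, -0.1, 0.6, -0.8, 0.2,  0.1, 0.7, -0.4])

print(same_body(hip, wrist))  # one mismatching bit -> True
```

The real scheme additionally uses fuzzy cryptography so that the noisy fingerprints never have to be exchanged in the clear; this sketch only shows the tolerance-to-noise idea.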

H. Maier: What are the next steps you and your colleagues are working on?

S. Sigg: I am expecting tremendous advances in AI methods and applied AI research in the next couple of years. These might go together with new approaches to process Big Data and to filter relevant information.

In particular, transfer learning will receive increased attention in the next few years. In addition, advances in deep learning models, for instance, towards applying deep learning classifiers on devices with restricted resources, will find their way into applied research in various fields.
  
Our group is focusing on ambient intelligence, and in particular on realizing machine learning for environmental perception on battery-less devices.
In particular, we are not aiming for close-to-perfect recognition accuracy by exploiting tremendous processing and storage resources, as is common, for instance, in applied deep learning research. Instead, we aim to provide “good enough” accuracy at a minimum resource cost (power, CPU, storage).

It is sometimes most efficient to distribute processing and storage demand across nodes; aggregation tasks, for instance, can be carried out during the simultaneous transmission of data on a wireless channel. We are trying to be as efficient as possible and also to involve, for instance, autonomous backscatter nodes that can draw part or all of their energy parasitically from the surrounding environment. In time, this might lead to maintenance-free autonomous nodes that provide environmental perception.
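The idea of aggregating during simultaneous transmission can be illustrated with a toy, idealized model (no noise, no channel gain, values invented): when several nodes transmit analog amplitudes at the same time, the channel itself superimposes them, so the receiver obtains the sum of all readings without collecting each one individually.

```python
# Hypothetical temperature readings at four sensor nodes:
sensor_readings = [21.5, 22.0, 21.8, 22.3]

# Idealized channel superposition: simultaneous analog transmissions add up,
# so the receiver directly observes the sum of all readings.
received = sum(sensor_readings)
average = received / len(sensor_readings)

print(round(average, 2))  # -> 21.9
```

The appeal is in the communication cost: one simultaneous transmission replaces one transmission per node, which matters greatly for battery-less or backscatter devices.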

H. Maier: Reading the comments on documentaries about future scenarios – you read positive as well as critical ones. To sum up, the big critical point seems to be the issue of safety in an increasingly connected world. How far along is the research with safety models to deal with this challenge?

S. Sigg: Yes, safety, security and privacy are very important and pressing challenges. This is also reflected in the large recent interest in blockchains. This concept still leaves a number of challenges to be addressed, especially when it comes to scalability or latency. However, recently, good solutions have been proposed, such as the OmniLedger and ByzCoin concepts brought forward by the group around Bryan Ford [1]. I am really excited about these developments.

H. Maier: Talking on a measurable scale: suppose ubiquitous computing is fully part of our everyday life – we use pens and tablets instead of mouse and keyboard, and our sweatshirt is a wearable user interface – and that is a 10 on our scale, while the static website content we are familiar with from the early 90s is a 1. Where are we now?

S. Sigg: This is a difficult question. I am not sure if static webpages would score a 1 on my scale for ubiquitous computing or whether it is actually possible to identify a single fixed starting point for any such dynamic and gradual development.

In my perception, we are not that far from integrating user interfaces into garments in actual end-user products. Prototypes of such clothing are regularly displayed at Ubicomp and other leading conferences, especially in HCI.

Also, garments as intelligent user interfaces could surely be advanced further by removing the need for an explicit interface at all. For instance, imagine implicit interaction and control triggered by your actions, your context, and the way you behave, anticipating your input to the interface rather than reacting to it. [5]

To answer your question, I see us at the beginning of an exciting journey with many exciting advances still to be experienced.

H. Maier: So, how long will it be until we experience true pervasive computing?
S. Sigg:
Curiosity comes from being excited about learning things you do not yet know. I feel curious about many directions in pervasive and ubiquitous computing, and I hope and expect that this feeling might prevail for a fairly long time.

H. Maier: Last question – fictional scenario: The time traveler Dr. Emmett Brown from the film series Back to the Future answers two questions from the future. What would you ask him?

S. Sigg:
- Is the total transparency of all actors a viable solution to resolve privacy concerns?
- What kind of experiment were you performing on October 26, 1985 that made all your watches run 25 minutes late? 

Text Sources:

[1] Kokoris-Kogias, Eleftherios, et al. “OmniLedger: A Secure, Scale-Out, Decentralized Ledger.” IACR Cryptology ePrint Archive 2017 (2017): 406.
[2] D. Schürmann, A. Brüsch, S. Sigg and L. Wolf, “BANDANA – Body area network device-to-device authentication using natural gAit,” 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), 2017.
[3] Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46, 3, Article 33 (January 2014), 33 pages.
[4] S. Savazzi, S. Sigg, M. Nicoli, V. Rampa, S. Kianoush and U. Spagnolini, “Device-Free Radio Vision for Assisted Living: Leveraging wireless channel quality information for human sensing,” IEEE Signal Processing Magazine, vol. 33, no. 2, pp. 45-58, March 2016.
[5] M. Elhamshary, A. Basalmah and M. Youssef, “A Fine-Grained Indoor Location-Based Social Network,” IEEE Transactions on Mobile Computing, vol. 16, no. 5, pp. 1203-1217, May 2017.

Picture Sources:

  • Main Picture: Lightgrafity Street, Original Pixabay, Author: Felix_Broennimann
  • Daylight, window: Original Pixabay, Author: banshiwal
  • IoT & Cloud Database: www.istockphoto.com