Expert Interview: Hexagon Geosystems – Aldo Facchin

May 26th, 2021 | by Marius Dupuis

(7 min read)

In our very first expert interview, we had a chance to speak to Aldo Facchin of Hexagon Geosystems about digital twins and how to take geodata from collection all the way to making it available for different stakeholders’ use cases, with a special focus on data quality. We hope you’ll enjoy this interview as much as we enjoyed conducting it:


Aldo, thank you very much for taking the time to answer a few questions for the blog of our GEONATIVES Think Tank! We highly appreciate that you agreed to this interview at our first request, even before we went public with our initiative. Would you mind introducing yourself to our readers?

Marius, thanks to you for this opportunity. My name is Aldo Facchin and I work for Hexagon’s Geosystems division. I’ve been working in the geospatial industry since 1992, mainly in the software business, with some “excursions” into hardware and data capture services. Regarding road-related topics, I was part of a team that created a mobile mapping system back in the year 2000.

From 2013 I held the role of R&D Manager for Mobile Mapping at Leica Geosystems, part of Hexagon, and I have just been appointed Vice President R&D for the Reality Capture division.

You are involved in activities across our five pillars, most prominently as one of the “stakeholders” in “data lake”, “data processing” and “tooling”. Would you mind describing what roles you see yourself in?

My favorite role is to listen to and evaluate users’ problems and to create efficient “solutions” that address them. At Hexagon’s Geosystems division, we have a tremendous advantage: we design, build, and distribute both sensors and software. My role is to leverage this advantage and provide the best toolbox with the most efficient combination of hardware, software, and services to create these “solutions”.

What would the “ideal” digital twin of reality in terms of geodata look like for you?

I don’t see an ideal “one size fits all” digital twin. It’s all about how you want to use the “twin”.

For some applications, the best photorealistic model is the perfect twin. But I could easily list applications where the best and most accurate geometric representation at centimeter/millimeter level is the best twin. For other applications, the best twin is the most accurate functional model of the reality.

If you think of a city, a highly detailed 3D photorealistic representation is commonly considered a digital twin… but if you have to simulate the behavior of a car, a detailed digital twin must include the traffic lanes and how they are connected to allow traffic flow.
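
To make the “traffic flow” part concrete, here is a minimal, purely illustrative sketch in Python (not any specific map format or Hexagon tool; all lane IDs are hypothetical) of how a lane-level twin encodes not just geometry but also which lanes connect to which:

```python
# Minimal, illustrative lane-connectivity model (hypothetical lane IDs, no real map format).
from dataclasses import dataclass, field

@dataclass
class Lane:
    lane_id: str
    length_m: float
    successors: list[str] = field(default_factory=list)  # lanes a vehicle may continue into

# A highway lane feeding into an off-ramp: photorealistic geometry alone would not tell
# a traffic simulator that "highway_l1" connects to "ramp_l1".
lanes = {
    "highway_l1": Lane("highway_l1", 500.0, successors=["highway_l1_next", "ramp_l1"]),
    "highway_l2": Lane("highway_l2", 500.0, successors=["highway_l2_next"]),
    "ramp_l1": Lane("ramp_l1", 250.0),
}

def reachable(start: str) -> set[str]:
    """All lanes defined here that a vehicle starting in `start` can eventually reach."""
    seen, stack = set(), [start]
    while stack:
        lane = stack.pop()
        if lane in seen or lane not in lanes:
            continue
        seen.add(lane)
        stack.extend(lanes[lane].successors)
    return seen

print(reachable("highway_l1"))  # {'highway_l1', 'ramp_l1'}
```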

Collecting geodata is one of your key business fields. What do your quality assurance processes look like, and what levels of accuracy can you achieve with your equipment? How does a user of your data know about the quality and accuracy level of each batch?

The quality of mobile mapping equipment (hardware + algorithms + workflows) has increased significantly over the last 10 years. We now have LiDAR sensors that can measure the road surface and the surrounding corridor, up to 100 m and more, with millimeter-level accuracy.

We have positioning technology built on GNSS, IMU and SLAM, combined with multi-pass adjustment, that reaches 1-2 cm of absolute accuracy on a regular basis, also thanks to global GNSS correction infrastructures such as our HxGN SmartNet solution.

Achieving this quality is guided by automatic processing that determines the best processing parameters for a given project based on the precise GNSS coverage.
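
As a simplified, back-of-the-envelope illustration of why driving the same corridor more than once helps (this is not the actual adjustment algorithm, just the standard error model for independent random errors):

```python
import math

# Toy model: each pass has an independent, zero-mean random error (sigma in metres).
# Real multi-pass adjustment also corrects systematic offsets and bridges GNSS outages,
# so treat the numbers purely as an order-of-magnitude illustration.
sigma_single_pass = 0.03  # assumed 3 cm random error for a single pass
for n_passes in (1, 2, 4):
    combined = sigma_single_pass / math.sqrt(n_passes)
    print(f"{n_passes} pass(es): ~{combined * 100:.1f} cm expected random error")
# 1 pass(es): ~3.0 cm | 2 pass(es): ~2.1 cm | 4 pass(es): ~1.5 cm
```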

We are at the point where the R&D effort is focused not on improving quality but on making this quality repeatable, achievable, and accessible to everybody, every time, everywhere.

Any data that has been measured is “old data”. What possibilities do you see for continuous updating of geodata, e.g. by vehicles equipped with all kinds of ADAS sensors?

The topic of data obsolescence is very important!

Since a system (like a human) uses data to make decisions, and data is supposed to be a representation (a model) of reality, the efficiency of data collection is a key component of the accuracy. A system that requires too much time to collect or process the data has a component of “inaccuracy” built in by nature; this is true for all dynamic scenarios.

Having said that, not all sensors can produce equally accurate data. Continuously updating geodata with ADAS sensors is a good idea, of course, but only for applications that don’t require a particular level of quality or accuracy.

For geospatial information where geometric accuracy plays a role, ADAS sensors are probably not yet good enough.

However, to detect whether a road has changed or a new lane has been built, that is, to diagnose the obsolescence of your high-definition base map, ADAS sensors can play a key role.

Collecting high-quality data for high-definition base maps is an expensive process, so having a good indication of where the data needs updating is very important.
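
A minimal sketch of that idea, assuming we already have an ADAS-detected lane boundary matched point-by-point to the HD base map (the coordinates and the 0.5 m threshold below are made up):

```python
import math

# Flag map locations where ADAS observations drift too far from the HD base map,
# i.e. candidates for a fresh high-accuracy survey.
THRESHOLD_M = 0.5  # assumed decision threshold

def needs_resurvey(map_points, observed_points, threshold=THRESHOLD_M):
    """Return indices of map points whose matched observation deviates beyond the threshold."""
    flagged = []
    for i, ((mx, my), (ox, oy)) in enumerate(zip(map_points, observed_points)):
        if math.hypot(mx - ox, my - oy) > threshold:
            flagged.append(i)
    return flagged

base_map = [(0.0, 0.00), (10.0, 0.10), (20.0, 0.20)]   # local metric coordinates
adas_obs = [(0.0, 0.05), (10.0, 0.90), (20.0, 0.25)]   # second point hints at a geometry change
print(needs_resurvey(base_map, adas_obs))              # [1] -> send a mapping vehicle there
```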

Going one step further: Do you see a chance that geodata may also be completely crowdsourced by using on-board sensors of regular vehicles only?

It’s an interesting scenario for the mid to long term. But I see an opportunity for the short term: there are a lot of special fleets of vehicles driving around most of the roads regularly.

I’m thinking of garbage trucks, buses, taxis, etc.

These are perfect carriers for some types of data collection devices. This is already a reality in the rail industry, for example.

Reality is huge and providing a digital twin of it is an even bigger task. Still, we see that today many companies measure the same areas again and again (just think of all the digital twins of San Francisco’s road network). How can we avoid this overlap and multiplication of efforts in small areas and get the coverage of different areas in the world instead?

First of all, we need to ask ourselves whether it is necessary to avoid this overlap.

Data is collected (and hardware/software are created) to solve problems.

We accept competition on hardware and software tools from different vendors, and we accept that having more choices for hardware and software is good for the market. Why can’t we expect the same for data?

Geometric accuracy, completeness, and level of obsolescence are key factors for the data. Do we want data providers to compete on quality and offer choices to the market? Or do we prefer a “one size fits all” (probably “one size fits nothing”) solution?

Data is the fuel for machine learning and Artificial Intelligence (AI), so different data sources can determine the performance of a system (a simulation system or even an autonomous car).

If “having better data” is one of the variables, as important as having better software, then we have to think about the goal of “avoiding overlap and multiplication of efforts”.

There is also another dimension to this problem, and it’s about the liability of system performance, and how it relates to the data.

Reality is available to everyone (who wants to perceive it). Who should own the digital twin of reality? How do you see ownership relations between reality and its digital version?

I don’t yet have a clear opinion, but I would like to trigger a discussion around the licensing aspects, even if it could sound philosophical.

If you make money with the digital twin of my house (or my road), I could argue that your business would not be the same without my house (or my road)… and so this should be defined and regulated somewhere.

If you take a picture of me and/or my house and you put it online for free or accidentally, this is one story… but if your business is to sell the picture of me and/or my house, then the discussion should be different.

But this is strictly a personal thought… as I said.

Weighing “what is good for the community” against the rights of single individuals is also a good thing.

Where does the value of geodata come from? If we look into the stages from collecting the raw data, pre-processing them and shaping them according to a consumer’s requirements – what share of value creation will roughly sit in each stage?

Geodata is the twin of reality, but as I said before, “twin” is too generic. The digital twin is an abstraction of some of the features of reality. It could be a perfect-looking twin (like a photorealistic 3D representation) or a twin with perfect behavior (like an OpenDRIVE model with all the driving rules, lanes, etc.).

“How it works” or “how it looks” are equally important, so I don’t see a single answer to the question.

The costs of the “collecting” phase can be significantly different if you use a terrestrial (static) laser scanner, a terrestrial mobile mapping system, a UAV or an airplane mapping system.

The phase of “shaping” (I assume this is the modeling part) depends on the level of completeness you need to represent the complexity of reality.

Modeling the OpenDRIVE of a highway and modeling the center of San Francisco are two completely different tasks.
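
As a rough way to see that difference in numbers, here is a small Python sketch that counts roads and driving lanes in an OpenDRIVE file, assuming the usual road/lanes/laneSection/lane structure (the file name is hypothetical):

```python
import xml.etree.ElementTree as ET

# Rough complexity indicator for an OpenDRIVE network: number of roads and driving lanes.
tree = ET.parse("network.xodr")   # hypothetical file name
root = tree.getroot()

roads = root.findall("road")
driving_lanes = [
    lane
    for road in roads
    for lane in road.iter("lane")
    if lane.get("type") == "driving"
]
print(f"{len(roads)} roads, {len(driving_lanes)} driving lane entries")
# A short highway stretch stays small; the center of San Francisco explodes in both numbers.
```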

Looking at the future of mobility: What role will geodata play as a means for manufacturers and providers of transport and mobility solutions to differentiate themselves from their competitors?

With the fast growth of AI, I see a bright future for data.

We are moving into a world where labelled data is becoming true intellectual property, as important as the algorithm (or even more important).

The quality of any autonomous driving system will be affected by the quality and quantity of the data used for training and, not less important, for simulation and validation of the algorithm.

In the software industry, we all know how important data is to test our products and systems, hence geodata has a very high value to us all.

Follow-up to the previous question: Who will rule the world – geodata or algorithms?

I tend to have a comprehensive approach to this question.

In the chain sensors + algorithms + data + algorithms (I intentionally mention “algorithms” twice, since they are needed both to extract actionable data from the sensors and to extract actionable information from the data), if I remove any of these components, the chain is broken.

Bad data with a good algorithm will provide bad results, good data with bad algorithms will provide bad results, bad hardware with good algorithms will also provide bad results… etc.

It is enough to have one single “bad” component to get bad results… so, for me, they are equally important.

Final question: We just started our initiative; what topics would you like to see covered in a blog like ours?

First of all, congratulations on this new initiative!

This intersection between the geospatial and the automotive industry is very exciting, and to me, both sides will see huge complementary benefits. I’d like to know more about how this is perceived from the automotive side.

Thanks a lot, Aldo!
