All of those automations that we looked at (and of course all the automations that you are now going to develop and use) are amazing. They enable us to work more efficiently, and they will help you to free up time for the creative work that we as humans, rather than machines, are uniquely placed to do.
But all of these automations only enable us to work better, more efficiently and more reliably with our data. Essentially, we are doing more of the same.
The next step on the data ladder is to use all this data to gain insights and to inform decision making. For example, how do we assess whether a design is optimised for wellbeing, space utilisation or CO2 emissions? Or how can we assess the likelihood that a project will finish on time and within budget?
For answers to questions like these, where the number of different factors (or variables) is enormous and their interactions are too complex for us to understand, the normal tools available to us are just not good enough.
What we need to do is use AI, and in particular Machine Learning (ML) and a certain type of machine learning known as Deep Learning (DL). This involves gathering increasingly large amounts of data and analysing it using state-of-the-art techniques. Deep learning has, within the space of three or four years, transformed the accuracy, reliability and quality of AI.
I’m aware that for most people the idea of AI is either something very scary that is going to take over our lives en route to the apocalypse, or something that has benefits but is only accessible to the tech giants (the Googles, Apples, Facebooks and Amazons). Neither of those views is correct.
On the first point, machine learning just aggregates lots of data to develop a machine learning model. Once a model has been produced, it takes an input and provides a result. The result is purely what the model assesses as the most likely result, and the most likely result is assessed using all the previous examples that were used to build the model: what it learnt from. Now, I’m quite certain that deep learning, which is by far the best AI technique that we have, isn’t going to result in an AI apocalypse, at least not anytime soon.
On the second point, that AI and machine learning are only for the tech giants: the tech giants are undoubtedly using and benefiting from AI, but the principles of machine learning were developed in academia, which means that it is available to all of us.
And better still, there are machine learning tools available to us all. And guess what, most of them can be used in a Jupyter Notebook, using Python, a Pandas dataframe and open source machine learning packages.
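To make that concrete, here is a minimal sketch of that kind of notebook workflow, using Pandas and the open source scikit-learn package to estimate the likelihood of a project overrunning (the question from earlier). The file name, column names and choice of model are purely illustrative assumptions, not a description of any particular system; the point is simply that a handful of lines is enough to train and evaluate a model.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical export of historical project data -- the file and column names are illustrative
projects = pd.read_csv("historical_projects.csv")

# Encode the categorical variables and separate the features from the outcome we want to predict
features = pd.get_dummies(projects[["sector", "procurement_route", "floor_area_m2", "contract_value"]])
overran = projects["finished_late"]

X_train, X_test, y_train, y_test = train_test_split(features, overran, test_size=0.2, random_state=42)

# A simple off-the-shelf model -- no tech-giant infrastructure required
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("Probability of overrun for the first few test projects:", model.predict_proba(X_test)[:5, 1])
```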
Doing Artificial Intelligence
Before we finish, here are a couple of examples of our own. They are in the early stages of implementation, but they have reached an acceptable level of accuracy for our internal use.
Building on our ability to assess model objects for Uniclass classifications, we have a machine learning model that automatically applies a classification to objects.
It assesses either an architectural or a building services model and takes data about each object, such as the IFC type, the object dimensions, the location relative to floor level, the name of the object and ten other variables, and provides the most likely Uniclass product classification for the object. It does this using a combination of what is called natural language processing for the object name and a tabular model for the remaining data.
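As a rough illustration of how text and tabular data can be combined in a single classifier, here is a hedged sketch using scikit-learn. This is not necessarily the framework or architecture behind our model, and the column names and file names are hypothetical, but it shows the general idea: text features from the object name sit alongside encoded tabular variables, and both feed one classifier.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: object names plus tabular attributes,
# labelled with the Uniclass product code we want to predict
df = pd.read_csv("classified_objects.csv")

preprocess = ColumnTransformer([
    # Simple bag-of-words features stand in for the natural language processing of the object name
    ("name_text", TfidfVectorizer(), "object_name"),
    # One-hot encode the categorical tabular variables
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["ifc_type", "discipline"]),
    # Numeric variables pass straight through to the classifier
    ("numeric", "passthrough", ["width_mm", "height_mm", "level_offset_mm"]),
])

pipeline = Pipeline([
    ("features", preprocess),
    ("classifier", LogisticRegression(max_iter=1000)),
])

pipeline.fit(df, df["uniclass_code"])

# Predict the most likely Uniclass product classification for objects that arrive unclassified
new_objects = pd.read_csv("unclassified_objects.csv")
print(pipeline.predict(new_objects))
```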
We are using this internally when we receive a client model for use within Operance that has no, or very few, classifications. But in theory this could be extended through plugins to apply classifications directly within Revit.
And then finally, we have our Operance AI Object Detection, where you can take a photo of a building asset, such as a door, a light fitting or a boiler, and identify what it is from 130 defined Uniclass products. Go ahead, have a go yourself!
This uses a groundbreaking technique called transfer learning. Transfer learning builds directly on top of some of the largest available datasets (in this case ImageNet, which has some 14 million images in more than 20,000 categories) and then adds our new data, which for the initial learning uses around 100 images for each of the 130 building asset categories. During training and validation of the model we get an accuracy of around 86%, which a few years ago would have been viewed as cutting edge.
To give you an idea of how good this is, a highly cited academic paper from 2016 achieved an accuracy of 85% on just 20 categories, and those were very broad and distinct categories such as boat, bird and bike, rather than the quite narrow and similar building assets we are dealing with, such as a consumer unit compared to a distribution board, or an external louvre compared to an internal ventilation grille.
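To show what transfer learning looks like in practice, here is a hedged sketch using PyTorch and torchvision, which is not necessarily the framework our model was actually built with; the folder path, network choice and hyper-parameters are illustrative assumptions. The key idea is visible in the code: we start from a network already trained on ImageNet, freeze its feature extractor, and train only a new final layer on our own categories.

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Hypothetical folder of asset photos, one sub-folder per Uniclass product category
data_dir = "asset_photos/train"

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet statistics
])

dataset = datasets.ImageFolder(data_dir, transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a network pre-trained on ImageNet and replace only the final classification layer
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained feature extractor
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))  # e.g. 130 asset categories

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# A single illustrative training epoch; in practice you would train for several
# epochs and measure accuracy on a held-out validation set
model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```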
And this technology is accessible to all of us. Consider how object detection could be integrated into an automated assessment of progress, or assist with labelling site photographs or recording evidence of completed work.