
atomicBIM: Splitting Data to Unleash BIM's Power

July 28, 2017

It is not uncommon for one generation to continue in the familiar ways of a previous era even while a significant shift is occurring around them. The Greeks, for example, continued to build their temples according to the visual expression of earlier wood construction even though a new medium, stone, had made its woodcraft-based symbolism obsolete. The old iconography was quite literally ‘enshrined,’ and the new generation chose simply not to question the previous path despite a new material and medium.

We may find a similarly significant shift is occurring in the development of BIM – and now is a good time to re-focus and take stock lest we unintentionally build a faulty workflow. In particular, with the accelerating success of BIM adoption, the shift from a single-player process focused on modeling, to a multi-player BIM process focused on information, may soon prompt careful reconsideration of our entire approach to BIM models.

The most obvious concern – unmanageable file sizes – has been unfolding over several years. In our present workflow, we are creating ever larger, difficult-to-access BIM models within a single file, or a series of large linked files, which are likely to severely hamper the BIM process. An additional concern, however, is that we have no workable protocols or established tools to retrieve and manage the data at the other end for “data extraction.” If we are to fully address these issues, we may want a comprehensive re-evaluation of the downstream uses before we get too far along our present path.

Instead of continuing to create ever larger files, we ought to conceptualize and structure the BIM environment for quick and easy access. We could imagine an arrangement where BIM is composed of many tiny pieces of data – which we are calling atomicBIM, that is, BIM in small, discrete packets. An atomized information structure would provide granularity and rapid access, so that subsets of BIM information could be retrieved without a massive download.

The Atomic Structure of BIM

The term “atomicBIM” evokes an image of BIM data in small packets, similar to atoms of an element. This notion’s core difference from a large single file, which also contains many pieces, is that in atomicBIM these pieces would be distinctly addressed and more easily accessed in isolation. This leads to an intriguing question: what exactly might the atomic structure of BIM look like in this scenario? If we could get closer to the very heart of BIM’s core properties, we might better understand how to unpack it for downstream uses.

Most dictionaries define an atom as: “…the smallest unit of an element, having all the characteristics of that element…” Thus, an atom is understood to be a very small but nonetheless still recognizable piece of the substance that it comprises. An atom of BIM then should still possess all the required characteristics and capabilities for the construction process.

Let’s examine how BIM objects have evolved over time and try to discern their “atomic nature” from that investigation. The evolution can be traced through four distinct phases of development, the last of which is still nascent.

BM: The origin of today’s BIM movement is wholly predicated on the idea that buildings are three-dimensional endeavors and that, as a result, 3D models are a valuable tool in predicting scenarios for the design and construction effort. This 3D-modeling effort in isolation we could simply call “BM” for Building Modeling. The value proposition of “BM” alone is to understand the relationship between purely physical, geometric components. The “information” that exists in the BM model is simply spatial – where things start and stop, how they are arranged – which, while valuable, is really just graphical information.

In the first pure BM phase, the atoms were simply 3D objects – there was no other data, no opportunity to create schedules of components, or arrange them on a timeline, or count them for cost estimating. They were simply geometry. To do any of these more advanced things, we would have to add data in some form to the objects. This is largely what happened in the next stage of evolution: BM+I.

BM+I: In the first advance, data tags were added to 3D objects. The geometry object dataset was simply expanded to contain fields of data that were attached to the geometry. The tags were added without much architectural “context,” i.e., the objects did not understand that they were parts of a building. They were simply digital 3D objects with data fields; they might as well have been parts of a household drill.

However, if users could pluck the data from the 3D objects, arrange them into a spreadsheet, and add the necessary context of what the data meant – often in their heads – it would be possible to get some useful output, such as an equipment schedule. The data transfer was not, however, reciprocal: once a design change was made, the data fields – again without context – would have to be exported and manipulated all over again.

Atoms of BM+I, therefore, were 3D objects with data attached to them, similar to a “pin cushion” configuration. The "pinheads" containing object data were not automatically related to each other.
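
To make the distinction concrete, a minimal sketch of such a “pin cushion” atom might look like the following Python fragment. The structure and field names here are purely illustrative, not taken from any particular BIM format:

from dataclasses import dataclass, field

@dataclass
class BMIAtom:
    geometry: list                               # e.g. a list of vertices or a solid definition
    tags: dict = field(default_factory=dict)     # flat, context-free data "pins"

door = BMIAtom(
    geometry=[(0, 0, 0), (0.9, 0, 0), (0.9, 0, 2.1)],                # placeholder vertices
    tags={"Mark": "D-101", "FireRating": "60 min", "Width_mm": 900},
)

# The tags can be dumped into a spreadsheet, but nothing in the structure says
# this object is a door in a wall on a particular level.
print(door.tags)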

BIM: Before too long – originating in the manufacturing world – a new breed of software was maturing that took a totally new approach: 3D modeling was created within context. In the manufacturing arena, software such as Parametric Technology Corporation’s Pro/ENGINEER emerged that could emulate manufacturing processes. Plain CAD objects were not the centerpiece of this software, however; fabrication management and process simulation were. It was a bold new direction in digital design, and it soon arrived at the door of the AEC industry.

In the AEC arena, similar software soon emerged, particularly Revit, which had at its core a database. In this new arrangement, the “information engine” was at the center of the software, and both graphical representations and schedules were driven by data managed by the engine.

In addition, data objects were clearly situated in an architectural context – walls, for example, were “hardwired” to have certain behaviors, such as hosting doors and windows; gridlines and floor levels were understood as they actually exist in the construction world – that is, as major determinants of building layout. Every component “knew” which floor level it belonged to, and all manner of architectural objects were capable of being scheduled.

At this stage, contextualized building-related data was born: atoms of 3D objects with attached data floating in a further building data context, what Victorians might have called an “ether.” This heralded the arrival of BIM – “BM” linked to information-management. This is roughly where we find ourselves today, with 3D objects in a context that also creates linkages between the object data. But that is not the end of the story.
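
By way of contrast with the earlier BM+I sketch, the fragment below adds a shared building context – levels and hosting relationships – so that object data is linked rather than free-floating. Again, every name here is hypothetical and only meant to illustrate the idea:

from dataclasses import dataclass, field

@dataclass
class Level:
    name: str
    elevation_m: float

@dataclass
class BIMObject:
    category: str                      # e.g. "Wall", "Door"
    level: Level                       # every component "knows" its floor level
    host: "BIMObject | None" = None    # e.g. a door hosted by a wall
    data: dict = field(default_factory=dict)

level_2 = Level("Level 2", 4.2)
wall = BIMObject("Wall", level_2, data={"FireRating": "2 hr"})
door = BIMObject("Door", level_2, host=wall, data={"Mark": "D-201"})

# Because the context is shared, a schedule can be derived by traversing these
# relationships instead of re-keying exported tags.
print(door.level.name, door.host.data["FireRating"])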

BI(m): The B.I.M. phase generally captures BIM’s status as we now know it, and much of its present focus revolves around its use in the design process – the data-input phase. However, we are quickly evolving into a new phase where 3D models get transferred downstream to an increasing cast of builders, owners, and operations personnel. In this still-developing phase – which we could call BI(m) – information about the project elements may be of greater importance than their particular 3D model shape. This is because once a project passes a certain point in design, the workflow emphasis shifts, and it becomes equally crucial to get that data out. Though data extraction is an ongoing activity during design, the nexus of the significant switch from data input to data extraction is often the bid date – the point where the design is complete and the project enters the Bid, Construction, and Operations phases.

The BI(m) paradigm is one that values the 3D object less, and the information about that object more. Obviously, it is not possible to discard model geometry completely in BI(m), but at this stage the BIM object is superseded by its real-world counterpart, and what is needed is the critical information required to track construction projects through into building operations. In fact, one of the more accessible examples of BI(m) would be the emerging COBie specification, where the model number, manufacturer and spare-parts information is what owners want, extracted from the 3D model into a simple and lightweight spreadsheet.
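
As a rough illustration of that shift in emphasis, the sketch below throws the geometry away and writes only the management data to a lightweight spreadsheet. The column names are loosely COBie-flavored but are not the actual COBie schema:

import csv

# Non-geometric handover data plucked from the model (illustrative values only).
components = [
    {"Name": "AHU-1", "Manufacturer": "Acme HVAC", "ModelNumber": "AX-500",
     "SerialNumber": "123456", "WarrantyEnd": "2015-06-30"},
    {"Name": "P-3", "Manufacturer": "FlowCo", "ModelNumber": "FC-20",
     "SerialNumber": "987654", "WarrantyEnd": "2014-01-15"},
]

with open("handover.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(components[0].keys()))
    writer.writeheader()
    writer.writerows(components)    # the owner receives data, not a 3D model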

The Evolution of BIM: As we can see, the composition of BIM has gradually evolved from “BM” (Building Modeling) towards “BI” (Building Information), with various combinations in between. This sequential development often mirrors the shift in emphasis seen over a project’s phases as the work moves from design to construction – the shift from modeling to information, from placing objects to retrieving information about them. Over the life of a project, various participants will shift their focus from geometric aspects to data aspects.

[Image: Evolution of Atoms of BIM]

BIM atoms, therefore, can be viewed as discrete, self-contained building elements, often whole components or assemblies – like a door, an air terminal or a wall – but these components float in the ether of a building context. So the atomic theory of BIM recognizes that there are two distinct components – objects, and a context. Atoms possess sufficient characteristics to be recognizable outside of the ether but are less useful without that ether or context of project information.

Authoring and Integration

Breaking large datasets into atom-sized packets is not the only solution for ballooning BIM files. It is possible to approach the problem by simply making the aggregation of large files more efficient. In this aggregative approach, large chunks of a project created by multiple authors and software would be compressed in a common format so that they could be dealt with as a unified whole.

In the construction field for example, this approach can frequently be seen when contractors use software like Navisworks for clash-detection, timeline-management and coordination. Typically, very large BIM files are translated from their native format into a unified proprietary format and then combined for various operations.

In the file translation process, however, 3D objects lose a lot of their native intelligence, and the extra intelligence added by 4D or clash-detection exercises is not easily transferred back to the authoring BIM application. This is one of the reasons why the alternative, the ‘atomic’ approach, is worth considering. In the long run, if we could formulate atoms of BIM in such a way that they remain intact as they are passed from application to application for non-destructive information authoring, then the workflow would be more robust and extensible.

This introduces a further, significant concept for future BIM software development – that there will be authoring software, and there will be an integrative setting. Authoring software will populate a BIM environment with atoms, or add discipline-specific information to those atoms. Current BIM applications, for instance, are examples of atom-generation authoring software. Likewise, energy or cost estimating programs will also be information-authoring software.

The authoring concept is especially important because ultimately, BIM authoring is unlikely to be the sole preserve of architects, engineers and other building designers; it will expand to include property managers, financiers, estimators, suppliers, procurers – in fact, anyone whose day-to-day job deals with the built environment.

Once the authoring of atoms is established, it is time to consider the nature of the second component, the integrative context. It is likely that the BIM “model” that most people will interact with will be a more static repository of atoms, rather than a live interactive design environment. Each participant’s BIM authoring software will produce or add atoms of data, placing them in this context; the BIM environment will be the “ether” which manages those atoms. An energy analysis, for example, might check out the rooms and spaces from a BIM model, along with the climatic context (southern exposure, etc.). An energy software package would then author new information on those atoms and upload them again to the BIM repository for all participants to view. In an ideal arrangement, atoms will be able to be continuously checked out of the BIM repository and authored anew with updated information.
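
A conceptual sketch of that check-out/author/check-in cycle is shown below. The repository class and its methods are hypothetical, intended only to illustrate how a discipline-specific tool might borrow a slice of atoms, add information and return it:

class BIMRepository:
    """Hypothetical central repository (the "ether") holding atoms of data."""

    def __init__(self, atoms):
        self.atoms = atoms                          # id -> dict of properties

    def check_out(self, predicate):
        """Return only the atoms a participant actually needs."""
        return {k: v for k, v in self.atoms.items() if predicate(v)}

    def check_in(self, authored):
        """Merge newly authored information back, non-destructively."""
        for k, new_data in authored.items():
            self.atoms[k].update(new_data)

repo = BIMRepository({
    "room-101": {"type": "Room", "area_m2": 32.0, "orientation": "south"},
    "wall-874": {"type": "Wall", "area_m2": 12.5},
})

# An energy tool checks out only the rooms, authors new data and checks it in.
rooms = repo.check_out(lambda atom: atom["type"] == "Room")
repo.check_in({k: {"annual_kwh_per_m2": 95.0} for k in rooms})
print(repo.atoms["room-101"])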

Atoms and Ether

The concepts underlying atomicBIM may appear unfamiliar and remote, but in fact, the recognition that large files could be managed by fragmenting them was once present in very early CAD packages such as MicroGDS and ArrisCAD. In ArrisCAD, for instance, CAD layers were often saved as entirely separate .LYR files which were combined at the user’s desktop into the “drawing” file they required. Though BIM’s rich data adds several layers of complexity to atom management, the underlying idea was already discernible in those early tools. Fortunately, many of the core concepts required for BIM “atoms” and “ether” are already under development.

Currently, the foremost candidate for the title of “BIM atom” is the IFC (Industry Foundation Classes) format, together with its allied initiatives in the buildingSMART alliance. Though originally created to address interoperability and the operations lifecycle, the IFC initiative has continued to draw further contributions from other groups, which taken together offer one of the best examples of defining atoms of BIM today. IFCs hold both geometric representations and associated data, so they already reflect many of the major characteristics of BM+I above.

[Image: IFD as a Mapping Mechanism – image courtesy Lars Bjørkhaug and Håvard Bell, www.ifd-library.org]

IFC is structured so that its classifications describe building components in a uniform way, allowing designers, suppliers and fabricators to agree on common terminology and property sets. More recently, an international working group has added important functionality to the IFC with the IFD (International Framework for Dictionaries). The IFD effort has developed a scheme in which building components can be consistently identified and classified. The IFD initiative is set to unite with the IFC efforts in the next IFC2x4 release for universal tracking and naming of objects over the lifecycle of a project. Essentially, the IFC/IFD combination will create unique, traceable atoms of BIM – 3D geometry with information data fields attached (which we previously termed BM+I).
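
As an illustration only – this is not the actual IFC schema – an individual traceable atom might carry a persistent identifier, an IFD-style classification reference, a geometric representation and its property sets, roughly along these lines:

import uuid

def make_atom(ifc_class, ifd_concept, property_sets):
    """Build an illustrative atom record; not the real IFC/IFD data model."""
    return {
        "GlobalId": str(uuid.uuid4()),   # persists as the atom moves between applications
        "IfcClass": ifc_class,           # e.g. "IfcDoor"
        "IfdConcept": ifd_concept,       # dictionary/classification reference
        "Geometry": None,                # placeholder for a geometric representation
        "PropertySets": property_sets,
    }

door_atom = make_atom(
    "IfcDoor",
    "ifd:door",                          # hypothetical concept identifier
    {"Pset_DoorCommon": {"FireRating": "60 min", "IsExternal": False}},
)
print(door_atom["GlobalId"], door_atom["IfcClass"])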

A full discussion of the IFC/IFD effort is beyond this article’s scope, but suffice it to say that it is a creative and exciting initiative with much promise for BIM’s future. With common definitions established, various pieces of software can deal with the same IFC instance and add information as required. The IFC/IFD pairing is ready to take the atomicBIM effort as far as the “BM+I” stage with a uniformly accepted file format.

The second key component of atomicBIM, the ether, is also under development, with several different efforts underway. Similar to the aggregative approach which we discussed earlier, some applications that aggregate models already use IFCs as their core/interchange medium. The concept most often judged the most promising is that of the ‘model server,’ i.e., a central server environment which manages, classifies and distributes pieces of the model in much the same way as a central personnel database dispenses data records in response to queries.

In the AEC domain, we can already see some moves toward server-based collaboration with applications such as Bentley’s ProjectWise Integration Server. It’s worth noting that the pursuit of model servers is not confined to the AEC industry, demonstrating that this is an established approach to large object datasets. In the product manufacturing arena, applications have been developed to control parts and objects that can be accessed by multiple players from a central database. Many of these server solutions are employing widely accepted querying techniques, such as SQL, to structure their operation.

Given that IFCs are the leading candidate for the role of “atoms of construction,” however, there are also several efforts underway to develop IFC-based model servers which will fulfill this functionality. There are already some commercial IFC servers available and a number of others in development. Several of these initiatives are based in Europe or are joint initiatives with US organizations.

This includes Oracle, a company whose products are known for their ability to manage large databases. Oracle is already an accepted standard in other industries, where its products manage production on a large, distributed basis. It is possible we will see some of that technology migrate over to our industry in the near future in the form of a model server.

It remains to be seen whether the IFC format will actually become the final working prototype of BIM atoms, but if not, at the very least much of its core thinking is invaluable to an atomic approach to BIM. Whichever final form they take, the eventual role of BIM atoms and their contextual ether will be to create remote, universally accessible model repositories that contain all data concerning a particular design project.

The Benefits of atomicBIM

The atomic approach to BIM may soon become a pressing need as BIM adoption spreads into more and more fields of activity. When mature, the BIM workflow will involve a multitude of players, and a fully developed BIM repository will ideally house a vast store of information about the buildings involved. Regardless of how powerful future computer hardware becomes, it is likely that we will simply add more information to challenge the speed of the new hardware as soon as it is available. Even if speed were not a factor, there will clearly be massive amounts of data, and so some ability to parse and slice the data will be a requirement.

There are several ways in which an atomicBIM approach can streamline the BIM workflow:

  • Extracting slices of data, and then processing them in a variety of authoring applications;
  • Enabling the use of thin-client devices for lean, efficient access to large datasets;
  • Easing interoperability and aggregation of data from multiple sources.

Slicing Data: As previously described, a large but granular data structure would permit significant workflow improvements over our current situation. Some of the major advantages will include lean sowing and harvesting of data in central databases. In addition, there will be a short-term benefit to atomicBIM – helping control file size during production. But it is the long-term goal that is most important: setting up a structure so that, regardless of the rate of growth, BIM has a robust, scalable platform for future additions.

There are countless forms that “data-slices” from a BIM model could take. In all likelihood, these slices will take the form of structured queries from a team member to the central model server. For instance, sample queries might ask: “how many fire-rated doors are in this building?” or “what is the detail of the floor/beam/wall connection on the 12th floor at column line F-12?” Ideally, the model server would deliver only the appropriate atoms and associated data in return to these various queries.
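
To make the idea tangible, here is a small sketch of such a slice expressed as a structured query, using SQLite and a made-up table layout to stand in for a model server:

import sqlite3

conn = sqlite3.connect(":memory:")                  # stand-in for a model server
conn.execute("CREATE TABLE doors (mark TEXT, level TEXT, fire_rating TEXT)")
conn.executemany(
    "INSERT INTO doors VALUES (?, ?, ?)",
    [("D-101", "Level 1", "60 min"),
     ("D-102", "Level 1", None),
     ("D-201", "Level 2", "90 min")],
)

# "How many fire-rated doors are in this building?"
(count,) = conn.execute(
    "SELECT COUNT(*) FROM doors WHERE fire_rating IS NOT NULL"
).fetchone()
print(count)    # only the relevant rows/atoms travel, never the whole model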

[Image: Thin client access to large BIM database]

Thin-Client BIM Access: BIM data will ultimately need to become a ubiquitous resource for project participants. This means we will really want BIM information where it can provide the most value – on the job site and on any device.

Today, there are already forward-thinking design teams who bring a BIM presence from the design environment to the job site. The next obvious step will be to expand BIM beyond the job trailer and make it universally accessible.

The ultimate experience of BIM model accessibility will probably occur when a device as lean as an iPhone or other handheld can query a large BIM database and extract precisely the information required for the task at hand without much latency. We will need atomicBIM to lower that threshold so that anyone can easily be involved.
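
A thin-client exchange of this kind might look something like the following sketch, in which only a small query travels up and only the matching atoms come back. The server address and parameters are entirely hypothetical:

from urllib.parse import urlencode

BASE = "https://bim-server.example.com/api/atoms"   # hypothetical model server

def build_query(project, **filters):
    """Compose the small request a handheld device would send."""
    return f"{BASE}?{urlencode({'project': project, **filters})}"

# e.g. a site query from a phone: just the fire-rated doors on Level 12
url = build_query("tower-a", category="Door", level="Level 12", fire_rated="true")
print(url)
# A real server would answer with a few kilobytes of JSON describing only those
# atoms, rather than requiring a multi-gigabyte model download.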

Interoperability: Interoperability is a critical issue for the new workflow. Despite recent welcome moves by Autodesk and Bentley to allow interoperability between their design software, everyone (including these companies themselves) acknowledges that this is only a step towards a larger, more difficult goal. No matter how many corporations agree to make their software more compatible, it will never be practical to have all software necessary for BIM purposes talk directly to each other. Luckily, most BIM software already exports to the IFC format.

The common language of IFCs will greatly help to promote collaboration. Universal open interoperability will enable the traffic of commonly agreed definitions of construction items in a BIM environment, independent of any single software application. This will enable the authoring capabilities of a whole host of software applications to participate in a unified model.

Conclusion: Avoiding "Carpentry in Marble"

The technical challenges facing BIM adoption today, and in the near future, are largely the result of legacy workflow protocols reinforced by current software. The resulting large files trigger a host of unprecedented and unintended issues – WAN acceleration needs, hardware upgrades and software overload. Even the impending move to 64-bit computing systems is unlikely to remedy the issue in the long run. File-size issues are unlikely to be solved by simply optimizing current BIM software, no matter how many resources are devoted to that task. What is needed is a reassessment of our vision for the eventual BIM model.

The atomicBIM approach would aggregate tiny pieces of data into a larger whole through a ‘big data’ server model so that the data can be queried quickly. Clearly, transitioning to a granular form of BIM will be a wrenching but important re-alignment. But there are many exciting research initiatives related to model servers (many originating in Europe) that are exploring new ways to manage building data. While it is a somewhat futuristic proposition, these efforts point to a more workable path than our current trajectory of ever-growing BIM models.

Though our current BIM solutions have served us well over the last decade, they may not be equipped to lead us to future success. In particular, they have not created scalable, open or granular access to the information we create during design activities. The approach suggested here, atomicBIM, proposes to structure that information in a more manageable way.

 - originally published by AECbytes