Digital Modelling Technologies: Reals Within Reals

Words by STEPHEN COUSINS AND KATIE PUCKETT


With the latest digital modelling technologies, we can immerse ourselves in building and city designs, marshal vast quantities of data and conceive of places we never thought possible. Prepare to step inside the most realistic game you’ll ever play …

Calum Sinclair is standing at the edge of the motorway, cars speeding by. When he spots a gap in the traffic, he runs out, snatches a piece of construction debris and races back to the safety of the hard shoulder. He’s learning to do a “dash and grab”, a dangerous manoeuvre that he’s never done before. Then he makes a fatal misjudgement.

If this was real life, Sinclair would probably be dead. Fortunately, he’s safely seated in his office wearing a VR headset, shocked but unharmed. Being run over in virtual reality is a deliberately overwhelming experience: “The camera really shakes so it makes you feel a bit bizarre,” says Sinclair. “It’s something you’re going to remember, and it needs to be. Before it would have been ‘read this four-page document about how to collect something from the middle of the road’. This is a lot more engaging.”

Sinclair is not a highway engineer, he’s a specialist in immersive technologies and he joined WSP after a degree in visual effects and a masters in “serious games” — the application of gaming technologies to real-life problems. He never expected to be working in the built environment, but it’s a career trajectory that’s set to become more common. An industry long derided as Luddite is beginning to adopt a range of powerful tools from the gaming and entertainment sectors, while technology firms seize on the wider potential applications for their inventions. Rapid advances in digital modelling and visualization, coupled with artificial intelligence (AI), big data and 3D printing, are transforming everything from on-site training to city planning. They enable forms that could never have been conceived of or built, internal environments that respond intuitively to users, and an unprecedented degree of analysis of design, construction and performance. Together they hold the promise of better designed, more efficient, more pleasant buildings and cities.

The virtual realm

The most immediately obvious benefit is clearer communication. VR brings technical drawings to vivid life and makes them comprehensible as never before: you don’t have to imagine what a 2D plan might feel like in real life, you’re right there. For clients and the public, this helps to prevent unpleasant surprises and expensive or impossible alterations at later stages, improving the product and the experience. For the project team, it can make collaboration easier and more effective.

As well as safety training aids, Sinclair has produced VR simulations and interactive walkthroughs for a consultation on an NHS hospital and a configuration tool for a new apartment building in France. Medical staff were able to roam freely through the virtual spaces flagging up issues in the design and suggesting improvements, while apartment buyers customized the finishes in their new home and could immediately see the impact on their budget. “That’s the power of gaming engines: they work in real-time,” he says.

Welcome to Virtual Chicago


WSP’s 3D model incorporates infrastructure and covers 450 square miles.


Another advantage is graphics. “It looks better, so it’s really good for visualizing and communicating a design,” says Sinclair. “Not everyone is going to jump into Revit engineering software and just start making changes, but anyone could open The Sims and build a house. There’s no reason why it has to be that complex.”

On the New Slussen urban transformation project in Stockholm, which aims to be one of the first in the world to deliver all design information digitally, a fire protection engineer from Greater Stockholm Fire Department donned a pair of VR goggles to carry out a 1:1 scale safety review inside the lock channel. Using two joystick controllers he was able to navigate through the complex and examine how ambulance personnel would enter to carry a person out after an accident, and check corridor widths and stairway angles. “Drawings wouldn’t have been able to give the same mental understanding of the spaces,” says Johan Stribeck, business area manager for BIM/VR at Tikab, lead technology consultant on the project.

“In less than a minute he understood how to move around and had started his review. In 30 minutes he had finished his work and was quite impressed by the technique.”

Architect Foster + Partners used VR to review the design of a large building on the site. “Although many issues could be uncovered using drawings and a physical model, VR makes it easier and more immediate,” says project partner Ricky Sandhu. “For example, we had an internal debate about a roof garden on the building. I wanted to create a form of allotment, others wanted a thick forest. When I was in the VR model I realized the trees would obscure beautiful historic views of Gamla Stan [the old town]. We turned off the trees and all of a sudden we could see the vista. That kind of thing is really powerful.”

On the Twentytwo development in London, 4D BIM was used to move backwards or forwards in time through planned sequences of work

Visualizations are particularly useful for illustrating hard-to-imagine experiential qualities. For a 60-storey glazed tower project, a WSP team in Boulder, Colorado produced time-lapse videos to demonstrate how sun would penetrate throughout the space at different times of day and year. “We wanted to show them why motorized external shades were necessary to improve the user experience in internal conference rooms,” says vice president Jay Wratten, who leads the BOLD&R innovation centre. “As soon as we played the video, nothing more was required to convince them.”

“We turned off the trees and all of a sudden we could see the vista. That kind of thing is really powerful”

Ricky Sandhu, Foster + Partners

VR technology is improving in leaps and bounds, with new developments in hardware and software for every budget. VR experiences are accessible to anyone with a smartphone and a low-cost “cardboard” headset through a growing number of apps. Meanwhile, the resolution of VR headsets such as Oculus Rift, HTC Vive Pro and Samsung Gear VR is steadily improving and the field of view widening to create more and more lifelike experiences. Introducing 3D sound into the mix could further enhance the immersiveness, something currently under development in WSP’s San Francisco office. “One of our guys is working on a way to create a VR acoustic model of a space, so a client wearing a VR headset and earphones could understand how echoey a space will be or how dampened the sound,” says Wratten. “Where we’re headed is a VR model where we can all jump in together and work through some of these relational or experiential issues.” 

Until recently, producing a realistic 3D simulation from a building design model was a complex, multistage process. Real-life models are much more detailed than game worlds, and one downside of a serious game is that you can’t cheat.

“In a game, you can design a level so that the computer doesn’t need to calculate that many things at once,” explains Kristian Svensson, visualization specialist at WSP in Stockholm. “You can’t do that with a model of a subway station or a city.”

One way to get round this is to strip out unnecessary features that don’t add to the experience, he says. “In a Revit model of a wall, there will be geometry on top of the wall and beneath it, but that information is not really useful when you’re walking around in VR.”
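The principle Svensson describes can be sketched in a few lines of Python. This is a deliberately simplified illustration, not any real export pipeline: faces a viewer at eye level can never see, such as the top and bottom of a wall, are simply dropped before the mesh goes to the game engine. The data structure is invented for the example.

```python
# Hypothetical sketch: strip faces a walking viewer can never see
# before exporting a wall to a real-time engine. Real tools such as
# Datasmith do far more, but the idea is the same.

def optimize_for_vr(faces, hidden_orientations=("top", "bottom")):
    """Keep only the faces a viewer at eye level could actually see."""
    return [f for f in faces if f["orientation"] not in hidden_orientations]

wall = [
    {"id": 1, "orientation": "front"},
    {"id": 2, "orientation": "back"},
    {"id": 3, "orientation": "top"},     # buried in the ceiling
    {"id": 4, "orientation": "bottom"},  # buried in the floor
]
visible = optimize_for_vr(wall)  # half the geometry, same experience
```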

There are now one-click tools to do this. Unreal Engine’s Datasmith, for example, simplifies the process of importing CAD data and optimizing textures and geometry, making it as easy to create convincing visualizations as to export a PDF. An architect or engineer can hit a button to produce a model that can be viewed on a VR device, allowing them to sit with a client and incorporate changes instantly and tangibly.

For now, this will only work for relatively small datasets or simulations taken from a single model: for more complicated projects, 3D artists will often combine information from as many as 15 different models together. “The technology is quite fresh, but there are better tools all the time,” says Svensson. “Game tools are not really used to optimizing building information models, but they’re getting there with each iteration.”

The triangular roof design of Nvidia’s Santa Clara campus was modelled using a GPU render engine, which simulated how the materials would react to changes in daylight.
Left and below: Matterport is a 3D data capture system that uses a rotating scanner to fire infrared grids over surfaces, recording dimensional depth. Within 1-4 hours of the scanning process, users have a photomapped 3D model.

Exporting reality

What if the designer and the client aren’t in the same room? Even optimized 3D visualizations can be very large files, up to 20 GB, so they are not easily shared via conventional means. “Accessibility is a big focus for our department,” says Svensson. “It’s not the coolest, but it enables so many other things. It will have a huge impact on our industry.”

Until recently, viewing 3D renders over the internet involved the installation of a browser plug-in — requiring administrator privileges and, according to Svensson, “a major hassle as IT departments don’t want to support too many plug-ins”. But with an open-source technology called WebGL now integrated into 95% of browsers, there is no need: 3D rendering takes place directly in the browser.

There was also the problem of a lack of a standard file format, but this is also being resolved by the growing adoption of glTF, the equivalent of a JPEG for 3D files. This allows easier interoperability between software, so files can be authored once, consumed everywhere. There is great interest from all the major players, and it’s already embedded in Windows 10 and the Microsoft Office suite of products.

For very large datasets, 3D tiles, another open-source format, is gaining traction. “Up to 50MB is fine to load in a web browser,” explains Svensson, “but when you have a dataset of a few gigabytes, you need another technology that can chop it up into thousands of pieces and send it to the viewer to put back together.”
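The chopping-up Svensson describes is essentially spatial subdivision. As a toy illustration only (real 3D Tiles implementations add levels of detail, bounding volumes and streaming), a quadtree split keeps dividing a point set until every tile is small enough to send to the viewer on its own:

```python
def tile(points, max_points=4, depth=0, max_depth=8):
    """Split 2D points into quadtree tiles of at most max_points each."""
    if len(points) <= max_points or depth == max_depth:
        return [points]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
    quadrants = {}
    for p in points:
        # Assign each point to one of four quadrants around the midpoint
        quadrants.setdefault((p[0] > mx, p[1] > my), []).append(p)
    tiles = []
    for sub in quadrants.values():
        tiles.extend(tile(sub, max_points, depth + 1, max_depth))
    return tiles

# A 16-point "city" splits into four tiles a browser could stream one by one
city = [(x, y) for x in range(4) for y in range(4)]
tiles = tile(city)
```

The viewer then reassembles only the tiles in view, so a multi-gigabyte dataset never has to load in one piece.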

Just like the movies

As for the VR experience, the Holy Grail is “inside-out” tracking: getting rid of the wires. “Right now, you have to have quite a beefy computer, cables, external sensors, and to be up and running in an office will probably take you half an hour,” says Svensson. “With inside-out tracking, you will be able to just put the helmet on and start navigating in any space. That’s the technology leap that’s happening right now.”

Infrared sensors will be embedded into the helmet to produce a point cloud to recognize the wearer’s movements, similar to Microsoft’s Kinect motion sensor technology for the Xbox.

Seeing the light in Silicon Valley

Nvidia and Gensler simulate lighting and materials in real time on a campus HQ

Graphics processor giant Nvidia’s 500,000ft² campus in Santa Clara, California, covers just two storeys, giving it one of the largest footprints outside of Facebook’s 430,000ft² single-room headquarters in Menlo Park.

Just as chip design prioritizes the flow of information, the design of the building focuses on the flow of people, providing opportunities to interact and collaborate. The designers say that the optimized layout makes employees 20 times more likely to bump into one another than in a traditional commercial office building.

The iconic triangular pattern of the roof takes its design cues from polygons — the foundation of computer graphics. A huge atrium at the centre provides access to all key amenities and even functions as the main entrance to the parking garage below.

Nvidia wanted to push the capabilities of its technology to help Gensler visualize its designs. The GPU (graphics processing unit) render engine Iray was developed to make it possible to produce extremely accurate simulations of light and materials in real time, when multiple GPUs were linked together. Building materials were scanned and represented in the Iray renderings, which allowed each material to react to light as it would in reality. All design changes on the projects were visualized in VR for interrogation at client review meetings. Every day during the construction phase, Nvidia flew drones over the site to take photographs of progress; these were used as the basis of 3D progress models, also viewed in VR. Nvidia added in the ability to scrub through multiple days at a time to understand the overall progression of construction at a single meeting.

“If you’re building a road between point A and point B, there are infinite solutions. Are we really going to find the best one by testing only three of them?”

Pontus Bengtson, WSP

The technology has now shrunk so it can fit inside a phone, though it’s quite power-hungry and will drain the battery quickly. The other essential component of inside-out tracking is geolocation: the VR device needs to know exactly where you are with pinpoint accuracy. “If you turn your head rapidly and then turn back to look at a building and it’s moved 10m, you’re not going to trust what you’re seeing. GPS technology can position the user at the beginning, but after that, we need stable image tracking that never fails.”

For the future, Light Field technology holds the possibility of cinema-quality imaging, which Svensson says will be “like walking in an animated movie”. “VR works great, but it doesn’t have the highest fidelity yet. This will increase it to a level where you can’t tell if it’s real or not.” Light Field will shortly be available for gaming, and it will be baked into rendering software so that 3D artists can create and walk inside movies.
“You can imagine how many extra frames that requires — when you move your head a centimetre, you need to have a different field of view from that perspective. It will require huge bandwidth, but with an intelligent way of compressing information and sending it over the web, you’ll be able to run it on your phone.”

All of this will also require much greater processing power. In the past, the only options were to own banks of computers or to rent them through cloud services. Distributed rendering offers a cheaper alternative by allowing computer owners with graphics cards to sell processing power they’re not using — similar to the way in which cryptocurrencies such as Bitcoin are mined.

True disruptors

A less immersive but potentially even more transformative alternative to VR is augmented reality (AR), or its more immersive sibling, mixed reality (MR). This is a hybrid approach where 3D holograms and context-specific data are projected onto a glass visor, allowing users to view and manipulate 3D models while maintaining visual contact with other people and their surroundings.

“We believe augmented and mixed reality will be the true disruptors of the industry,” says Octavian Gheorghiu, a member of the Specialist Modelling Group at Foster + Partners. “At present, the technology is not developed enough for mass adoption, but we expect that to change very soon.”

Like VR, AR can be viewed through headsets such as Magic Leap Lightwear or Microsoft’s HoloLens, but also using a smartphone. In 2017, both Apple and Google released improved software development kits, respectively known as ARKit and ARCore. “That’s a good way to get started with AR,” says Svensson. “It’s not going to be as accurate as hardware solutions but it’s a great eye-opener for what it can do.”

For example, AR could be used in the planning process for new developments, to demonstrate the impact on the local environment, Svensson adds. “If I stand looking out of my window at a future building site, I’m worried about what it will look like, how noisy the construction will be, whether I’ll have any sun left. Now we try to answer those questions through brochures and city meetings, but it’s still difficult for people to understand how it will affect them. It could be easily done using AR technology and a phone.”

AR also supports project collaboration, as designs for 3D components for MEP, architectural details and furniture can be viewed in context, overlaid on the landscape or existing infrastructure to the correct scale and orientation. Site work could be greatly simplified if those undertaking complex specialist assembly could be guided by visual cues projected onto the visor.

Global design firm Gensler used AR when it remodelled its Los Angeles office, to visualize the impact of adding a glass-encased skybridge to connect two buildings. “Using augmented reality allows us to view designs in context at very early stages,” says Retha Swanepoel, associate and design technology manager. “AR helps immensely to communicate these things on site and reduces back and forth communication. It allows the architect, contractor and client to have the same spatial understanding of the environment.”

AR and other forms of “mixed reality” will definitely become more integral to the design and delivery process, she adds. “Potentially people will be able to work and collaborate with no monitors required. That would immensely change the way spaces are perceived, used and designed. What might a future desk or a meeting room look like? Would these be required at all in an office environment?”

Foster + Partners’ Gheorghiu suggests that AR imagery could remain a feature of the finished building: “The buildings themselves might be void of physical decoration or wayfinding, with customized graphics overlaid on the physical space for each user instead. Perhaps in the future, we could be asked to design both the physical space and the digital augmented interactive layer that occupies that space. This will change our relationship with the clients, making our services like those of web masters, designing the back-end of the project once, and continuously redesigning the front-end to new design paradigms and client needs.”

The real value of BIM

But even the most sophisticated visualization technology stands or falls on the quality of the data underpinning it. This is where building information modelling (BIM) comes in.

BIM is not so much a tool for design, as for creating and managing information. The model is a 3D representation backed by a database: a digital description of every element of a built asset. These models are expanding in detail and complexity: as well as data on the physical properties of each component, they may also include information on construction programmes and cost, maintenance needs and energy usage.

4D BIM, for example, shows how the construction will take shape over time: on the Twentytwo development in London, contractor Multiplex uses the software to scrub backwards or forwards in time through planned sequences of work to check for clashes between different packages and trades. Some sequences are then exported into VR so site managers can understand potential issues. 4D modelling consultancy Freeform has developed a tool that makes it possible to “laser etch” onto objects and surfaces in VR. “So you can stand in the environment and mark where an opening needs to be cut into a wall, or where a hoist must be installed,” explains managing director James Bowles. It is also exploring how to link this to cost data, so as users change the build sequence, they receive immediate feedback on the implications for their budget.
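Stripped to its essence, the “scrubbing” idea is that every element in the model carries a planned build date, so the state of the site on any day is just a filter over the database. A minimal sketch, with element names and dates invented for illustration:

```python
from datetime import date

# Each model element carries its planned construction date (invented data)
elements = [
    {"name": "core wall",    "trade": "concrete", "built": date(2024, 3, 1)},
    {"name": "steel frame",  "trade": "steel",    "built": date(2024, 5, 1)},
    {"name": "curtain wall", "trade": "facade",   "built": date(2024, 8, 1)},
]

def model_at(when):
    """Return the elements that exist on site on a given date."""
    return [e["name"] for e in elements if e["built"] <= when]

mid_build = model_at(date(2024, 6, 1))  # scrub to June: no facade yet
```

Scrubbing the timeline backwards or forwards is then just calling the filter with a different date, which is why the sequence can be replayed in real time inside VR.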

It is not yet clear whether the savings promised by BIM for the construction industry have come to pass — or how easy they will be to quantify. But there can be little doubt about BIM’s potential to improve the finished product, by making it much easier to compare different options and the impact of design changes on a myriad of outcomes.

“Schedule and cost are important from a financial point of view, but energy and materials are much more interesting from the point of view of a better world,” says Pontus Bengtson, head of project technology at WSP in Malmö. Take a house: “If you have data about how the windows let the sun in and the level of insulation, you can make an energy calculation. Then you change the architecture, with more windows here and less windows there, and you want to know how the cost is affected and you can see the energy consumption. So you can start to have an iterative process to balance these things where you end up with a very cost-effective house that is environmentally friendly.”

Bengtson cites a Swedish study into how many iterations were tested on construction projects. “The average was three. But if you’re building a road between point A and point B, there are infinite solutions. Are we really going to find the best one by testing only three of them? Probably not. So why do we only test so few? Because it takes too much time to do the iterations. If we can make that easier, so we could test 200, we can do more and better with the same investment.”
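The kind of 200-option sweep Bengtson has in mind becomes trivial once energy and cost are functions of the design parameters. This toy model is not from any real project — the U-values, prices, solar gain and energy target are all invented, and the physics is crude — but it shows the iterative balancing act in miniature:

```python
# Invented inputs for illustration only
U_WALL, U_WINDOW = 0.18, 1.2        # U-values, W/m²K (assumed)
COST_WALL, COST_WINDOW = 250, 600   # build cost per m² (assumed)
SOLAR_GAIN = 100                    # useful solar gain, kWh per m² glazing per year (crude)
FACADE_AREA = 100.0                 # m² of facade being designed
DEGREE_HOURS = 80_000               # heating degree-hours per year (assumed climate)

def evaluate(window_fraction):
    """Return (annual heating energy in kWh, build cost) for one option."""
    win = FACADE_AREA * window_fraction
    wall = FACADE_AREA - win
    energy = (wall * U_WALL + win * U_WINDOW) * DEGREE_HOURS / 1000 - win * SOLAR_GAIN
    cost = wall * COST_WALL + win * COST_WINDOW
    return energy, cost

# Sweep 200 options instead of 3, then keep the cheapest design that
# meets an (invented) energy target of 1,200 kWh per year
options = [i / 200 for i in range(201)]
feasible = [(f,) + evaluate(f) for f in options if evaluate(f)[0] <= 1200]
best_fraction, best_energy, best_cost = min(feasible, key=lambda t: t[2])
```

Testing 200 iterations here costs a fraction of a second; the hard part on real projects, as Bengtson notes, is making cost, energy and buildability recompute this automatically from the model.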

“We can observe how people use a space and overlay temperature and acoustic data to identify problems or improve conditions”

Jay Wratten, WSP

BIM’s great potential lies in the “I” — the information in the model. “It’s easy to change the geometry, but to prove the design we have to be able to see the impact on cost, energy, materials, how to build it and the operational side,” says Bengtson. “That’s what takes the time.”

WSP’s lighting design team is piloting a project to embed and use data about lighting control systems and the power consumption of light fittings. This means that when layout designs are altered, the impact on energy consumption and efficiency will be updated automatically. “Delete a hub of six lights and immediately the schedule will update in sync with the lighting manufacturer and contractor,” explains Wratten. “Or if we add lights, a red flag will appear to indicate the energy code allowance has been exceeded, in which case we can rethink the design instead of finding all this out in the week before our deadline.”
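The red-flag check Wratten describes amounts to re-running a lighting power density calculation every time a fitting is added or deleted. A minimal sketch, with the 10W/m² allowance, room size and fittings all invented for illustration (real energy codes are far more granular):

```python
# Invented allowance and room for illustration
ALLOWANCE_W_PER_M2 = 10.0
ROOM_AREA_M2 = 30.0

def lighting_power_density(fittings):
    """Total connected lighting load per square metre."""
    return sum(f["watts"] for f in fittings) / ROOM_AREA_M2

def check(fittings):
    """Re-run the code check after any change to the layout."""
    lpd = lighting_power_density(fittings)
    return "OK" if lpd <= ALLOWANCE_W_PER_M2 else "RED FLAG: allowance exceeded"

fittings = [{"id": i, "watts": 40} for i in range(6)]   # six 40W fittings
status_before = check(fittings)                          # 240W / 30m² = 8 W/m²
fittings += [{"id": 6, "watts": 40}, {"id": 7, "watts": 40}]
status_after = check(fittings)                           # 320W / 30m² ≈ 10.7 W/m²
```

Because the check is instant, the designer learns about the breach at the moment the lights are added, not in the week before the deadline.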

It could also make a difference throughout the building’s life, by enabling the creation of a “digital twin” to compare to real-time operation. This might be used to understand performance anomalies (why is the office so warm?), present information to occupants (what conference rooms are available?), or track maintenance procedures (how soon will the carpet wear out?). “We’re moving towards a world where we don’t just hand the owners the keys and the O&M manual,” says Wratten, “we hand them the digital model of architecture, structure and how it is supposed to operate.”

This could also be fed with real-time data from sensors integrated throughout a space — the internet of things — as well as user feedback gathered from smartphone apps. “We can observe how people use a space and overlay temperature and acoustic data to identify problems or improve conditions — perhaps pre-cooling a space to anticipate a heat load or flashing the lights when it gets too loud.”

The next step will be to combine this with artificial intelligence: “A client I spoke to who owns an AI platform is very interested in harvesting the data we get out of the building in real-time and comparing that to either causal relationships we have observed in the past, to predict problems, or to our energy model, to automatically alert them if the building is performing out of spec.”
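The out-of-spec alert that client describes reduces, at its simplest, to comparing metered readings against the model’s prediction and flagging deviations beyond a tolerance. A sketch with invented numbers and an invented 15% threshold:

```python
def out_of_spec(predicted_kwh, measured_kwh, tolerance=0.15):
    """True if measured use deviates from the model by more than the tolerance."""
    return abs(measured_kwh - predicted_kwh) > tolerance * predicted_kwh

# Four days of (model prediction, metered reading), invented for illustration
readings = [(100, 104), (100, 98), (100, 130), (100, 112)]
alerts = [day for day, (pred, meas) in enumerate(readings)
          if out_of_spec(pred, meas)]  # only the 30%-over day trips the alarm
```

The harder, AI-shaped part is deciding *why* a day tripped the alarm — which is where the observed causal relationships come in.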

Meanwhile, it is getting easier and cheaper to produce BIM models of real-world buildings, using 3D scanners and tools to convert point clouds into BIM. For example, Matterport is a 3D data-capture system that rapidly records and uploads accurate scans of real-world buildings or spaces to the cloud. While not as dimensionally accurate as more conventional laser scanning — the point spacing is 4-6mm, compared to 1mm for laser scanning — it has the advantage of accessibility and speed: within 1-4 hours of the scanning process, non-specialist users can have a photomapped 3D model.

In the US, Matterport has become famous for enabling Google Street View-style click-throughs of properties for sale, but it is also being used on construction sites as a communication tool. The BAM-Ferrovial-Kier joint venture building the Farringdon Crossrail station project in London is deploying it at critical milestones when subcontractors hand over to the following trade. So for example, electricians will scan the space before and after the first fix, adding tags to the 3D model that can be viewed on any device. These link back to a document control system to highlight any outstanding RFIs, with everything time- and date-stamped. Post-construction it could be possible to compare BIM models with completed buildings and automatically identify where changes have occurred — replacing the time-consuming task of creating as-built drawings.

Endangered buildings preserved in VR

3D archive brings threatened heritage to a wider audience.


Design by algorithm

Advances in 3D modelling have already had a profound effect on the form of buildings, and they are now beginning to reshape the user experience. Computer-aided parametric design is the creation of a digital model following a series of pre-programmed rules that generate certain elements automatically, so it is based on internal logic rather than human manipulation. Typically, parametric rules create relationships between different elements, so a rule might be created to ensure that a wall starts at floor level and reaches the underside of a ceiling. Then if the floor-to-ceiling height is changed, the wall automatically adjusts to fit. Parametric design also makes it possible to design very complex geometries and structures, which architects such as Zaha Hadid, Frank Gehry and Daniel Libeskind have exploited to create distinctive expressionistic forms.

These parameters are continually widening, taking in not just building form, structure and manufacture, but less tangible factors such as lighting, acoustics and energy efficiency. The latest “generative” design software enables the designer to function more like a curator. Effectively, generative design uses software algorithms to produce optimum forms for products and buildings, similar to the way in which organisms evolve in the natural world, without the need for human intervention. The designer enters a set of interdependent parameters, and the computer uses these to generate numerous designs that can either be put to use or become a springboard for new creations.
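In the simplest terms, the generative loop is: propose many candidates within the parameters, score each against the designer’s objectives, and hand the best to the human curator. This toy sketch invents both the candidate generator and the scoring function purely for illustration:

```python
import random

random.seed(42)  # reproducible sketch

def generate():
    """Propose one candidate layout within fixed parameters (invented)."""
    return {"daylight": random.uniform(0, 10),
            "exit_distance": random.uniform(0, 10)}

def score(layout):
    """Higher is better: reward daylight, penalize distance to exits (invented weights)."""
    return layout["daylight"] - 0.5 * layout["exit_distance"]

# The computer explores a thousand options; the designer curates the top five
candidates = [generate() for _ in range(1000)]
shortlist = sorted(candidates, key=score, reverse=True)[:5]
```

Real generative tools replace random sampling with evolutionary or gradient-based search and far richer objectives, but the curator-at-the-end structure is the same.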

Architect Herzog & de Meuron used this process to develop some 10,000 unique shell-shaped acoustic panels for the dramatic curved main auditorium of the Elbphilharmonie concert hall in Hamburg. Parameters for acoustic performance were combined with the architect’s preferences for a consistent, beautiful skin, and the need for a smoother surface in areas where audience members could touch the panels.

Generative design can also help to solve issues related to comfort conditions and programme. In this, it is aided by advances in crowd modelling software, which make it possible to simulate the effects on human behaviour of a range of conditions — such as temperature, noise, oxygen levels and opportunities for physical interaction.

Autodesk, for example, used its generative design tool Project Discovery to produce thousands of possible layouts for its new office in Toronto. The software combined parameters for windows, stairs, elevators and the physical floor space with the preferences of individuals on aspects such as distance to neighbours and amenities, daylight, visual distractions and views of the outside.

Shajay Bhooshan, leader of the ZH CODE computation and design research group at Zaha Hadid Architects, says that parameters around occupant behaviour are increasingly influencing the form that a building takes: “Where previously we would run simulations to predict a building’s structural or environmental performance, the trend now is to understand social aspects, using data from building sensors, social media streams and other tech to predict how the average person might behave in an office or apartment. The fields of data forensics and data acquisition and analysis are beginning to creep into the early parts of design.”

The New York by Gehry tower in Manhattan. Advanced digital modelling ensured that the detailing worked coherently and the installation of the complex, draped-fabric cladding would be problem-free.

3D printing unleashed

We are still in the foothills of generative design’s capabilities. Some of the most intriguing developments are being made in conjunction with another digital technology — 3D printing — which offers the compelling prospect of fully integrating digital design and construction. Foster + Partners, for example, has developed analysis and control tools that link design with Fused Deposition Modelling, a robotic technique for 3D printing not in layers but in 3D space. Although the method compromises slightly on accuracy, it opens up the possibility of giving shape to far more complex, digitally designed forms. According to Foster’s Jan Dierckx, another member of the Specialist Modelling Group, “This changes the design method considerably. Whereas for traditional printing any model can be sliced into 2D layers and printed automatically, now there is an opportunity to explicitly design and optimize the structure.”

Many construction components are a certain shape and size because of manufacturing and logistics constraints, points out Bengtson. If they could be 3D-printed on site using the minimum amount of material to meet strength and other requirements, the potential savings on materials and transport are huge. “That’s why this technique is so interesting and why it will be so disruptive: that’s when we can really start to save the world.”

Mining for data

If one thing unites all of these digital modelling tools, it is their insatiable thirst for data. Matterport, for example, hosts a vast amount of geospatial data on Amazon web servers — over 650,000 3D models, primarily of real estate. The owners of Matterport are exploring the use of AI to automatically categorize and interpret the spaces and objects in this database. “In real estate it might recognize when a lounge is a lounge or a kitchen is a kitchen by recognizing the standard objects in a room type,” explains Karl Pallas, co-director at Immerse UK, which sells Matterport in the UK. “They are mining the data, and the more they have, the more they can do with it.”

Likewise, BIM models will become more intelligent as wider trends such as big data and the internet of things filter through to the design process. Data extracted from internet-connected sensors will give designers unprecedented access to metrics related to building use, performance and user behaviour. At the same time, archived data on projects, including 3D models, 2D drawings, images and text, can be interrogated and those insights transferred to new projects.

“The trend now is to understand social aspects, using data from building sensors, social media streams and other tech to predict how the average person might behave in an office or apartment”

Shajay Bhooshan, Zaha Hadid Architects

Gaming company Ubisoft has trained an AI program to spot when its coders are about to make a mistake and alert them. Its R&D division fed the program with ten years of code from its software library, so that it could learn from historic mistakes and predict when a coder is about to repeat them. Why couldn’t this work for building design too? Post-occupancy evaluation data could be mined along with archived project data to create a repository of solutions and flag up potential issues early in the process.

In time, building models could be linked together to provide ever more complex simulations of the built environment. “On large schemes, we build up BIM models over multiple buildings until we have coverage over a quite considerable area,” says Nick Edwards, principal at architect BDP. “In other areas we might work with landowners who own an estate — as each individual building is mapped and data coordinated, a picture is built up.”

As these models become broader in scope, they become increasingly powerful tools, averting the need to duplicate survey information and providing a more comprehensive understanding of how buildings and their users interrelate. “We could overlay pedestrian and cycle routes with air-pollution mapping and sunlight and noise to see how well public spaces work,” says Edwards. “The more you can feed in, the more you can extract.”

Cities including London, Hamburg, Singapore and Helsinki are all developing intelligent 3D models of the urban realm to help streamline planning and design. In Chicago, WSP is developing a model that interlinks with existing engineering design software, to display not just buildings but transportation and other infrastructure projects.

One day soon these models could be expanded to include live data feeds on anything from vehicle traffic to building performance to air pollution, offering an unprecedented opportunity for city authorities to monitor and tweak systems, and for developers and architects to test out the impact of schemes on their surroundings. “It is not an unrealistic expectation,” says Edwards, “but it needs more lead from the public sector because, by default, the private sector is in competition, so many of the projects we work on involve signing non-disclosure agreements. There’s a lot of protection about data, yet if you can unlock some of that in a collective way, there is benefit for everyone — not just teams trying to progress projects, but society as a whole.”

In an increasingly digital world, computers will complement rather than replace human intelligence. Designers will be marshalling ever more powerful tools and interpreting and refining the results they produce. “There still needs to be judgement because ‘computer says’ doesn’t necessarily mean it’s right,” points out Edwards. “Sometimes things can run counter to each other: in a city people might want more public space, but that can push them further apart and reduce the efficiency of local services. Sometimes a smaller amount of space that’s better maintained can produce a better outcome. The more tools we have to make those judgements, the better.”

When computers can take on the heavy lifting, design time will be freed up to focus on areas where human insight can genuinely bring value. “We need to think about what’s unique to us,” says Bengtson. “Computers will struggle for many years to understand feelings, empathy, fantasy — that’s what we should add.”
