Calibrating lighting simulation with panoramic high dynamic range imaging

Fast, accurate, high-resolution field measurement of the light reaching human eyes can help calibrate and validate lighting simulation. This study established a workflow for lighting simulation calibration aided by the recently developed panoramic high dynamic range imaging (HDRI) with a 360° field of view (FOV) and conducted a case study. Panoramic maps of illuminance and of the coefficient of variation of luminance (CV_L) can be retrieved from the HDR images to compare the directional light reaching preset viewpoints within occupants' visual range. The case study showed a high correlation (r_CV.L ≥ 0.900, r_E ≥ 0.990) of the spatial luminous distribution between the simulation and the built reality. Close to 90% of the simulated illuminance data had an error rate within 20% (|e| ≤ 20%), as revealed by the 360° residual maps of illuminance. The proposed lighting calibration approach with panoramic HDR imaging was validated to improve lighting simulation accuracy.


Introduction
Computer simulation of various artificial and daylit luminous environments is useful for representing the built reality in lighting design and evaluation.In contrast to field measurements that would otherwise take a long time to collect limited data, computer simulation is considered more efficient in conducting pre-and postoccupancy lighting evaluation in various lighting conditions.The easy adjustment of computer modelling and simulation enables flexible lighting layouts and corresponding energy calculations (Altenberg Vaz and Inanici 2021), and analysis of dynamic human-light interactions on diverse space users (Ochoa, Aries, and Hensen 2012).With simulated versatile lighting conditions, lighting professionals can evaluate lighting quality, visual comfort (Suk 2019), the non-image-forming effect of light (Jung and Inanici 2019), and long-term light exposure of a target (space occupant or object) (Mardaljevic et al. 2021).
In the past decades, lighting simulation algorithms (e.g. direct lighting calculations [Yves and Willems 1993], raytracing [Jones and Reinhart 2017], radiosity [Willmott and Heckbert 1997], and spectrum and circadian stimulus [He, Yan, and Cai 2022]) have been continuously upgraded to better represent the built reality, serve different simulation purposes, speed up the simulation process, and increase the simulation accuracy.
Ideally, choosing a validated, proper lighting algorithm can serve the right simulation purpose with the least workload. Nevertheless, there is always a concern about the accuracy of computer-aided lighting simulation, with multiple possible causes rooted in computer hardware, lighting algorithms, simulation techniques, and accuracy benchmarks. Continuously improved computer hardware and software may alleviate but cannot eliminate this concern without proper calibration and validation of the simulation results against the built reality, which still faces two major challenges today, as follows.
First, the input geometry and lighting metrics in computer simulation largely affect the precision of lighting modelling and its output. Field measurements with acceptable accuracy and limited resolution are used as a reference to which the simulation is compared, and any input information mismatched with the built reality may lower the simulation accuracy. Conventional light metre measurement and the camera-aided 180° field of view (FOV) high dynamic range (HDR) imaging technique have limitations in collecting input geometry and lighting metrics for calibrating the simulation accuracy. Firstly, although horizontal illuminance measurement can be easily conducted on a known horizontal task plane, vertical illuminance measurement, whose aiming direction may vary, is difficult to calibrate: even a slight rotation of the light metre or camera lens may cause an evident change in the measured value (Cai et al. 2018). Secondly, as a space observer's eyes and the corresponding vertical/corneal illuminance are of concern in modern lighting analysis (Jung and Inanici 2019; Suk 2019), it is important to validate the lighting model at many different viewpoints of the space occupants performing typical tasks along varying viewing directions. A conventional grid-based measurement at dozens of points, using light metres or data loggers wired to sensors or the 180° HDR imaging, would be overwhelming in terms of field workload, yet the resolution is still considered too low for most simulation applications. Thirdly, regarding other input lighting metrics, although a luminaire's light distribution curve can be provided by the manufacturer, its light loss factor, which changes over time, is hard to validate in the field. Surface reflectance, BTDF (bidirectional transmittance distribution function), and BRDF (bidirectional reflectance distribution function), which could be measured in a laboratory setting (Ochoa, Aries, and Hensen 2012), are also difficult to measure in the field. The reflectance of Lambertian surfaces may be approximated in the field with an illuminance metre, but the specularity of glossy or polished surfaces is difficult to obtain in field measurement. Additionally, the time span between two consecutive points in the field measurement of changing electric lighting and daylighting could produce undesirably large errors (Cai 2013), which is particularly evident in camera-aided 180° photography (Mahlab et al. 2023), making it an unreliable reference for calibrating the model. In conclusion, using conventional light metres or the 180° HDR imaging technique, it is a challenge to quickly obtain accurate geometry and lighting input for computer simulation while keeping the field-measurement workload in balance. Developing a fast and more reliable approach for field measurement is necessary to overcome those limitations.
Second, for the calibration of lighting models, a well-agreed benchmark error for acceptable simulation accuracy is still unavailable and thus needs to be proposed and validated in each individual simulation. One study (Ochoa, Aries, and Hensen 2012) explored the accuracy of lighting simulation with a mean biased error (MBE) of 0-9% and a root mean squared error (RMSE) of 14%-19%, validated against the built reality in a controlled laboratory, while it was hard to keep the error within 5% in complex situations. Accordingly, they proposed a benchmark error rate of 10% for average illuminance and an acceptable maximum of 20% at a single point. Another recent simulation study in a daylit space (Quek and Jakubiec 2021) observed relative root mean squared errors (RMSE_rel) of 25.8% and 45.5% for simulating horizontal and vertical illuminance, respectively. Based on those two studies, a benchmark error rate within an acceptable range of lower and upper thresholds [−20%, 20%] was suggested (Ochoa, Aries, and Hensen 2012) for the simulation accuracy of horizontal illuminance in practice. Note that those benchmark error rates are not for vertical illuminance measured at the human eye position with varying aiming directions. Therefore, a new benchmark accuracy is necessary for light reaching human eyes, which is explored in the present study.
In summary, to overcome those obstacles and improve the accuracy of lighting simulation, a new protocol is proposed and explored in the present study for fast and accurate calibration of the lighting simulation model using the latest camera-aided field measurement technologies at the space occupants' eyes. This protocol covers two major aspects: (a) 360° panoramic high dynamic range imaging (HDRI) (Li and Cai 2022a; Li and Cai 2022b) to help conduct field measurement and calibrate the simulation, given the innate property of a panoramic camera to approximate human vision and its high efficiency in capturing the field lighting, and (b) the benchmark error rate.
The proposed protocol was developed based on the latest advancement of camera-aided lighting measurement for the calibration of computer modelling and simulation. Historically, dynamic daylight influenced by changing sun position and cloud coverage has been hard to simulate precisely using the 15 preset stereotypes of sky proposed by CIE (ISO 15469:2004(E)/CIE S 011:2003 2004). The 15 CIE sky models may not align well with the local sky at a particular time, causing errors in the simulation (Reinhart and Andersen 2006). To overcome this limitation, a circular fisheye HDR image could be taken in the same view as the interior observer to calibrate the computer model with real outdoor obstruction and cloud coverage (Inanici 2010). Hereby, real-time daylight arriving at the interior observer's position from the skylight and window could be approximated to supplement the CIE sky. However, this method requires long-term, real-time sky HDR images taken at a specific project site at all possible viewpoints of observers, at least hourly over years, for year-round simulation, leading to an overwhelming workload in HDR imaging.
Alternatively, computer simulation can be calibrated using only a few typical field measurements. HDR images that contain millions of per-pixel luminous data could be either taken in the field or generated via computer simulation at the same location in the same viewing direction for comparison and validation (Au 2013; Mardaljevic et al. 2021). Therefore, to reduce the field measurement workload, real-time HDR images could be taken in the field at only a few typical viewpoints at a specific time under the same type of sky and used as a reference to calibrate the computer modelling. The so-calibrated computer model can then generate as many HDR images as necessary at different locations in different viewing directions at that specific time under that specific sky condition. This may largely reduce the workload of HDR imaging.
On the other hand, conducting calibration on a daylight-integrated lighting model requires capturing the changing daylight repeatedly and fast enough in the field, which would otherwise produce large errors. To simplify the calibration process and improve the accuracy of field measurement, the present study proposed calibrating a base lighting model without daylight, using constant artificial lighting as a reference light source to calibrate the luminous features of the space. After that, daylight parameters, such as a CIE sky or the field-captured luminance distribution of the sky, could be input into the base model for daylighting calibration in a future study.
The recently developed 360° panoramic high dynamic range imaging (HDRI) (Li and Cai 2022a; Li and Cai 2022b) was devised in the present study not only to facilitate the field measurement but also to calibrate and validate the lighting simulation model, given the innate property of a panoramic camera to approximate human vision and its high efficiency in capturing the field lighting. The 360° HDR images were either captured in the field or generated in the computer simulation at the same viewpoints in the same viewing direction for comparison.
Compared with the traditional test-point measurement/calibration along six directions in Cartesian coordinates (Figure 1a), the omnidirectional point measurement/calibration (Figure 1b) can precisely capture the target (e.g. a light source with a large error) with its precise geometric information, instead of estimating the approximate position of the target in a quadrant. Meanwhile, with a higher-resolution dataset, the omnidirectional measurement allows geometric correction according to the panoramic image in the post-test analysis.
Accordingly, in the present study, a corresponding workflow was established to capture the luminous environment in the field and then compare and evaluate its similarity to the virtual simulation model. This approach was applied in a workspace to explore the simulation accuracy via calibrations of the input lighting parameters.

Workflow
An accurate lighting simulation with the aid of 360° panoramic HDR imaging needs calibration and validation in three main steps: (a) conducting the field measurement, (b) building a lighting model calibrated to the field measurement, and (c) comparing the simulation results with the field measurements using a protocol for acceptable error rates. The workflow is shown in Figure 2, with more details expounded as follows.

Field measurement with 360°HDRI
In the first step of the workflow, a field measurement is conducted at the typical viewpoints to collect all necessary input lighting and geometry data to be used as a reference for calibrating the lighting simulation model. In addition to light metres, a recently calibrated Ricoh Theta Z1 camera can be used for 360° panoramic HDR imaging (Li and Cai 2022a; Li and Cai 2022b) in the field measurement. The Theta Z1 camera's accuracy has been validated with an average error rate of 4.0% ± 2.4% for luminance measurement and 3.1% ± 2.6% for illuminance measurement in building interiors (Li and Cai 2022b). The 360° HDR images taken with this panoramic camera are dual fisheye images in equisolid-angle projection after an angular correction (Li and Cai 2022a). For photometric calibration of the 360° HDR images, an illuminance metre can be used at every viewpoint to measure the illuminance at the camera's front and rear lenses, or along five reference directions (α = 0-360°, β = 90°), (0/360°, 0°), (90°, 0°), (180°, 0°), (270°, 0°) (Li and Cai 2022b). The results are then used to calculate a calibration factor CF_E using Equation (1). In practice, to speed up the field measurement, the illuminance calibration could be conducted only at the camera's front and rear lenses. The 360° HDR images, calibrated for illuminance and using the preset white balance following the previous studies (Li and Cai 2022a; Li and Cai 2022b), are then used for calibrating the computer modelling and simulation in Step 2.
$$CF_E = \frac{E_{meter}}{E_{HDR}} \qquad (1)$$

where E_meter and E_HDR are the illuminance measured with the light metre and retrieved from the raw HDR image after vignetting calibration, respectively. Different from conventional grid-based metre measurement across the entire space, which is tedious, the time-saving 360° image-based measurement can be conducted at only a few typical viewpoints to record the entire luminous environment visible at those viewpoints. Typical viewpoints were selected by observing the space occupancy pattern and the frequency of occupants staying at each task spot. Often the task spots with the most space occupants and/or the longest light exposure are selected as typical viewpoints. For measuring stable lighting conditions in a small space, a single panoramic camera can be mounted at each viewpoint, one after another. Given that the measurement takes approximately one minute at each viewpoint, a total of less than ten minutes to conduct the field measurement with a single camera is considered acceptable. If the room has dynamic lighting conditions (e.g. daylighting under a partially cloudy sky), multiple panoramic cameras could be used simultaneously to expedite the field measurement, at the cost of increased equipment expense. Alternatively, in a large space with both stable and dynamic lighting areas, the target space can be divided into subregions, each using either a single camera or multiple cameras. Such strategies can improve the measurement accuracy in a shortened time, although the 360° HDR imaging method still cannot handle rapidly changing daylight. In the present study, the field measurement was conducted at night under stable electric lighting to calibrate the lighting simulation model in terms of the spatial light distribution and light level.
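As a concrete illustration of Equation (1), the following Python sketch (with illustrative readings and hypothetical helper names; the array shapes are assumptions, not the study's own code) computes CF_E and applies it to the per-pixel luminance of a vignetting-corrected HDR image:

```python
import numpy as np

def calibration_factor(e_meter, e_hdr):
    """Equation (1): CF_E = E_meter / E_HDR, averaged over the reference
    directions measured with the illuminance metre at the viewpoint."""
    e_meter = np.asarray(e_meter, dtype=float)
    e_hdr = np.asarray(e_hdr, dtype=float)
    return float(np.mean(e_meter / e_hdr))

# Sped-up field protocol: readings at the camera's front and rear lenses only
e_meter = [412.0, 268.0]   # lx, measured with the light metre (illustrative)
e_hdr = [399.5, 255.1]     # lx, retrieved from the vignetting-corrected HDR image
cf_e = calibration_factor(e_meter, e_hdr)

# Scale the per-pixel luminance of the HDR image before any further analysis
luminance_raw = np.full((400, 800), 50.0)   # cd/m^2, placeholder equirectangular map
luminance_cal = cf_e * luminance_raw
```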

Lighting modelling & simulation
With all input geometry and lighting metrics obtained from the field measurement, a geometry model of the office space can be built in Rhinoceros 6.0 or another 3D modelling platform, with the correct layout of the room space, furniture, visible HVAC equipment, and the typical viewpoints of space occupants at preset standing or sitting heights. Elements with the same surface texture need to be grouped in one layer in preparation for setting their reflectance. In the present study, DIVA (Solemma), a plugin for Rhino and Grasshopper, was employed to conduct photometric calculations with adjustable geometry, luminaire setup, surface photopic reflectance, and sky condition, and then render the HDR images of the corresponding luminous environments. Since the simulation in the calibration phase would not involve daylight, the dynamic sky condition was excluded by setting the time at night. The room surface reflectance can be approximated with an illuminance metre by measuring the ratio of the reflected illuminance to the incoming illuminance at multiple sampling points. The luminaire's shape and initial intensity distribution curve can be obtained from the IES file provided by the manufacturer and validated against the spatial luminous distribution of the tested site. The maintained lumen output of the luminaire, however, should be determined later, after the other lighting parameters (e.g. geometry, room reflectance, relative luminous distribution curve of the luminaire) are calibrated, since the light loss factor would otherwise be difficult to measure in the field.
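Where such a reflectance check is needed, the ratio described above can be computed directly; the sketch below is a minimal illustration assuming paired illuminance readings (all values and names are hypothetical):

```python
import numpy as np

def approx_reflectance(e_reflected, e_incoming):
    """Approximate a near-Lambertian surface reflectance as the ratio of
    reflected to incoming illuminance, averaged over the sampling points."""
    ratio = np.asarray(e_reflected, float) / np.asarray(e_incoming, float)
    return float(ratio.mean())

# Three illustrative sampling points on a white wall (readings in lx)
rho_wall = approx_reflectance([310.0, 298.0, 335.0],
                              [402.0, 391.0, 420.0])   # -> roughly 0.78
```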
Additionally, a virtual camera needs to be set up and calibrated in the simulation. A balance must be struck between the pixel resolution of the rendered HDR images, which drives the accuracy, and the corresponding rendering time. Our pilot study indicated that an image resolution of 800 × 400 pixels was sufficient, and it was adopted in this study for expedited simulation and data treatment. As for the lens projection, the '360° equirectangular' option available in DIVA for Grasshopper can be adopted, corresponding to the synthesized equirectangular image of the 360° panoramic camera. The similarity between the computer-rendered and field-taken HDR images can then be compared and analyzed at each viewpoint without any image distortion correction.

Calibration process
Next, the calibration process starts by comparing the lighting and geometry information retrieved from the rendered HDR image with that of the field measurement taken at exactly the same viewpoint in the same viewing direction. Key visible geometric information embedded in the 360° panoramic luminance map includes the location and size of walls, ceiling, floor, furniture, and objects. Key lighting metrics can also be retrieved from the panoramic HDR images, whether taken in the field or rendered in the computer simulation, for comparison, including the 360° luminance map, the 360° coefficient of variation of luminance (CV_L) map (Li and Cai 2022a), and the 360° illuminance map.
Note that a 180° luminance map, retrieved from the 360° luminance map in an identified viewing direction, can be used to calculate per-pixel illuminance (E) or CV_L for making the 360° illuminance or CV_L map. The illuminance received at the camera lens (or the simulated eyes of a space occupant) from any direction across the 360° viewing field is calculated using Equation (2) (Li and Cai 2022a), based on a corresponding 180° HDR image generated in that specific viewing direction from the panoramic 360° HDR image (Bourke 2016). The illuminance data at the measurement point with corresponding viewing directions are then mapped onto a 360° E map across the entire 360° viewing field (Li and Cai 2022a). Similarly, the coefficient of variation of luminance (CV_L) of each retrieved 180° HDR image is calculated using Equation (3) and mapped to the corresponding viewing direction to create a 360° CV_L map.
$$E_i = \sum_{j \in FOV_{180°}} L_j\,\omega_j \cos\theta_j \qquad (2)$$

$$CV_{L\_i} = \frac{L_{SD(FOV_{180°})}}{L_{mean(FOV_{180°})}} \qquad (3)$$

where E_i is the illuminance at a corresponding pixel on the 360° equirectangular projection map, captured from the 180° HDR image aiming at a specific viewing direction; L_j and θ_j are the luminance of the j-th pixel within the 180° FOV and the angle between that pixel's direction and the viewing axis, respectively; ω_j is the solid angle of a single pixel, a constant for equisolid-angle projection; and CV_L_i, L_SD(FOV180°), and L_mean(FOV180°) are the coefficient of variation (CV), standard deviation, and average value of all pixel luminances within the 180° FOV, respectively.
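A minimal Python sketch of Equations (2) and (3) follows, assuming the 180° view has already been reprojected as an equisolid fisheye luminance array; the geometry helpers and placeholder data are illustrative, not the study's own code:

```python
import numpy as np

def illuminance_180(luminance, omega, cos_theta, fov_mask):
    """Equation (2): E = sum_j L_j * omega * cos(theta_j) over the 180-deg FOV;
    omega is the constant per-pixel solid angle of the equisolid projection and
    theta_j the angle between pixel j's direction and the viewing axis."""
    return float(np.sum(luminance[fov_mask] * omega * cos_theta[fov_mask]))

def luminance_cv_180(luminance, fov_mask):
    """Equation (3): CV_L = SD / mean of all pixel luminances within the FOV."""
    vals = luminance[fov_mask]
    return float(vals.std() / vals.mean())

# Illustrative geometry for one reprojected 180-deg equisolid fisheye image
h = w = 400
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - w / 2, yy - h / 2) / (w / 2)                # normalized radius
fov_mask = r <= 1.0                                           # circular image area
theta = 2.0 * np.arcsin(np.clip(r / np.sqrt(2.0), 0.0, 1.0))  # equisolid mapping
omega = 2.0 * np.pi / fov_mask.sum()                          # hemisphere, equal split
luminance = np.full((h, w), 50.0)                             # cd/m^2, placeholder

E = illuminance_180(luminance, omega, np.cos(theta), fov_mask)  # ~50*pi lx here
cv = luminance_cv_180(luminance, fov_mask)                      # 0 for a uniform field
```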
For retrieving those photometric metrics used in the comparison, HDR image transformation between 2D and 3D coordinates needs to be conducted in this study. Equisolid-angle projection is adopted in the present study to reproject a new 180° HDR image via the 2D-3D-2D transformation (Li and Cai 2022a; Li and Cai 2022b). For a 360° equirectangular HDR image, Figure 3 illustrates the 2D coordinates (Figure 3(a), in the shaded rectangular shape) and the 3D coordinates in object space (Figure 3(b), in the shaded hemisphere view) with a corresponding principal point (x, y). The image retrieval process has three steps: (1) extracting the 180° HDR image with a specified viewing direction from the panoramic 360° HDR image, (2) conducting the 2D-3D coordinate transformation on those extracted pixels, and (3) reprojecting the 180° HDR image into the target coordinates (see the sketch below). Figure 4 shows an example 180° HDR image retrieved in a specified viewing direction. Once available, the comparison of the photometric metrics starts in a few sub-tasks as follows. Firstly, the 360° luminance maps are used to compare the simulated and field-measured luminance distributions as well as the visible geometry of the space embedded in the luminance map. Any mismatch of the simulated luminous environment or incorrectly modelled room enclosure can be visibly spotted on the panoramic HDR images and corrected at once. Note that this visual comparison of 360° HDR images in the luminance map is approximately region to region, via visual spotting with the help of a few sample points, which provides the desired fast comparison and convenience but limits the accuracy.
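The 2D-3D-2D lookup described above can be sketched as follows; this is an assumed nearest-neighbour implementation with a z-up world frame, not the authors' MATLAB programme:

```python
import numpy as np

def extract_fisheye(equirect, lon_deg, lat_deg, size=400):
    """Reproject a 180-degree equisolid fisheye view aimed at (lon, lat) out of
    a 360-degree equirectangular map (H x W array) via the 2D -> 3D -> 2D
    lookup: fisheye pixel -> camera-frame ray -> world-frame ray -> map pixel."""
    H, W = equirect.shape
    yy, xx = np.mgrid[0:size, 0:size]
    u = (xx - size / 2) / (size / 2)             # image x, to the right
    v = (size / 2 - yy) / (size / 2)             # image y grows downward
    r = np.hypot(u, v)
    inside = r <= 1.0                            # circular fisheye area
    theta = 2.0 * np.arcsin(np.clip(r / np.sqrt(2.0), 0.0, 1.0))  # equisolid
    phi = np.arctan2(v, u)
    # pixel direction in the camera frame (z = viewing axis)
    xc = np.sin(theta) * np.cos(phi)
    yc = np.sin(theta) * np.sin(phi)
    zc = np.cos(theta)
    # orthonormal camera basis expressed in the z-up world frame
    lam, eta = np.radians(lon_deg), np.radians(lat_deg)
    fwd = np.array([np.cos(eta) * np.cos(lam), np.cos(eta) * np.sin(lam), np.sin(eta)])
    right = np.array([-np.sin(lam), np.cos(lam), 0.0])
    up = np.cross(fwd, right)                    # completes the basis
    w = xc[..., None] * right + yc[..., None] * up + zc[..., None] * fwd
    # world ray -> (longitude, latitude) -> equirectangular pixel
    lon_w = np.arctan2(w[..., 1], w[..., 0])
    lat_w = np.arcsin(np.clip(w[..., 2], -1.0, 1.0))
    col = ((lon_w + np.pi) / (2.0 * np.pi) * (W - 1)).round().astype(int)
    row = ((np.pi / 2.0 - lat_w) / np.pi * (H - 1)).round().astype(int)
    view = np.zeros((size, size))
    view[inside] = equirect[row[inside], col[inside]]
    return view

# Example: extract the view aimed at longitude 90 deg with a horizontal tilt
pano = np.full((400, 800), 50.0)                 # placeholder luminance map
fisheye = extract_fisheye(pano, lon_deg=90.0, lat_deg=0.0)
```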
Secondly, a more extensive comparison and calibration of the simulated luminous environment against the built reality can be conducted via point-to-point comparison of the 360° HDR images, for higher precision than the traditional manual calibration using a single sample point (local calibration) or a few sample points across the scene (global calibration). From a 360° HDR image, either computer-rendered or field-captured at the same viewpoint, the panoramic 360° luminance map, 360° luminance CV map (CV_L), 360° illuminance map (E), or 360° luminous flux map can be retrieved for comparison between the field measurement and the computer modelling. In this step, only the 360° luminance CV map (CV_L) and the illuminance map were compared point to point, as shown in Figure 5, on the corresponding paired HDR images taken at each viewpoint to reveal the model's difference from the built reality for global or localized calibration.
It is worth mentioning that a pilot study showed a low correlation between the simulated and field-measured luminance compared at the pixel level, for two possible reasons. First, the geometry model manually constructed in this study simplified the details of furniture, equipment, and occupants' personal objects. Second, the location and aiming direction of the camera were set manually in the field and might differ slightly from the computer simulation. In a point-to-point analysis, even a slight offset of the camera and visible objects would cause a significant disparity between the simulated and field-measured luminance maps. Alternatively, to calibrate the light distribution and light level perceived by human eyes in the simulation model against the field measurement, illuminance and luminance CV are appropriate metrics with sufficient precision for the lighting calibration, provided that the computer model has sufficient detail and accuracy.
Thus, in this study, the model's similarity to the built reality was quantitatively evaluated with Pearson correlation coefficients (r) of the per-pixel illuminance (E) and luminance coefficient of variation (CV_L) distributions, respectively (Equations (4) and (5)):

$$r_E = \frac{\sum_{i=1}^{n}(E_{sml\_i}-\bar{E}_{sml})(E_{fld\_i}-\bar{E}_{fld})}{\sqrt{\sum_{i=1}^{n}(E_{sml\_i}-\bar{E}_{sml})^2}\sqrt{\sum_{i=1}^{n}(E_{fld\_i}-\bar{E}_{fld})^2}} \qquad (4)$$

$$r_{CV.L} = \frac{\sum_{i=1}^{n}(CV_{Lsml\_i}-\overline{CV}_{Lsml})(CV_{Lfld\_i}-\overline{CV}_{Lfld})}{\sqrt{\sum_{i=1}^{n}(CV_{Lsml\_i}-\overline{CV}_{Lsml})^2}\sqrt{\sum_{i=1}^{n}(CV_{Lfld\_i}-\overline{CV}_{Lfld})^2}} \qquad (5)$$

where r_E and r_CV.L are the Pearson correlation coefficients of illuminance (E) and luminance CV, respectively; E_sml_i and E_fld_i are the simulated and field-measured illuminance data points; CV_Lsml_i and CV_Lfld_i are the simulated and field-measured luminance CV data points; and n is the number of illuminance or CV_L data points.
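In practice, the two coefficients reduce to a standard Pearson correlation over the paired maps; a minimal sketch with synthetic placeholder data:

```python
import numpy as np

def pearson_r(sim, fld):
    """Equations (4)-(5): Pearson correlation between the paired simulated and
    field-measured maps (per-direction illuminance E or luminance CV)."""
    return float(np.corrcoef(np.ravel(sim), np.ravel(fld))[0, 1])

# Toy check with synthetic data: a near-linear pair should give r close to 1
rng = np.random.default_rng(0)
e_fld = rng.uniform(50.0, 600.0, size=481)               # lx, field samples
e_sml = 1.05 * e_fld + rng.normal(0.0, 10.0, size=481)   # a close simulation
r_e = pearson_r(e_sml, e_fld)                            # ~0.998 here
```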
Additionally, a mismatch from an inappropriately set lighting effect can be discovered via the calculated mean biased error (MBE) and root mean squared error (RMSE) of per-pixel illuminance (E), using Equations (6) and (7), respectively. The corresponding photometric calibrations can then be conducted by adjusting the light level and light distribution pattern settings in the simulation.

$$MBE = \frac{1}{n}\sum_{i=1}^{n}\left(E_{sml\_i}-E_{fld\_i}\right) \qquad (6)$$

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(E_{sml\_i}-E_{fld\_i}\right)^2} \qquad (7)$$
where E_sml_i and E_fld_i are the illuminance in the corresponding viewing direction retrieved from the simulated and field-captured HDR images, respectively, and n is the number of illuminance data points. Moreover, for a lighting simulation with sufficient accuracy, a residual rate (e) (Equation (8)) of illuminance within 20% is deemed necessary (Ochoa, Aries, and Hensen 2012; ISO 15469:2004(E)/CIE S 011:2003 2004). Therefore, the percentage of acceptable illuminance data (R_(|e|≤20%)) can be calculated (Equation (9)) as an indicator of simulation accuracy at each or all preset viewpoints. Correspondingly, 360° residual rate (e) maps are generated to exhibit the magnitude and viewing direction of each residual point. Additionally, the relative percentages of RMSE (RMSE_rel) and MBE (MBE_rel) of illuminance against the average illuminance of the scene can be calculated using Equations (10) and (11), where Ē_field is the mean of the illuminances in the 360° directions retrieved from the field-captured HDR image:

$$e = \frac{E_{sml\_i}-E_{fld\_i}}{E_{fld\_i}} \times 100\% \qquad (8)$$

$$R_{(|e|\le 20\%)} = \frac{n_{|e|\le 20\%}}{n} \times 100\% \qquad (9)$$

$$RMSE_{rel} = \frac{RMSE}{\bar{E}_{field}} \times 100\% \qquad (10)$$

$$MBE_{rel} = \frac{MBE}{\bar{E}_{field}} \times 100\% \qquad (11)$$
where e and R_(|e|≤20%) are the residual rate of a data point and the percentage of acceptable illuminance data points with a residual rate within 20%, respectively; n_(|e|≤20%) and n are the number of data points with acceptable accuracy (|e| ≤ 20%) and the total number of data points in the residual map, respectively.
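Equations (6)-(11) can be evaluated together in a few lines; the sketch below is illustrative, with hypothetical function and key names:

```python
import numpy as np

def error_metrics(e_sim, e_fld):
    """Equations (6)-(11): MBE, RMSE, per-point residual rate e (%), the share
    of acceptable points R(|e| <= 20%), and the relative errors against the
    mean field-measured illuminance of the scene."""
    e_sim, e_fld = np.asarray(e_sim, float), np.asarray(e_fld, float)
    diff = e_sim - e_fld
    mbe = diff.mean()                                    # Eq. (6)
    rmse = np.sqrt((diff ** 2).mean())                   # Eq. (7)
    e_rate = diff / e_fld * 100.0                        # Eq. (8), in %
    r_accept = (np.abs(e_rate) <= 20.0).mean() * 100.0   # Eq. (9), in %
    e_mean_fld = e_fld.mean()
    return {"MBE": mbe, "RMSE": rmse,
            "RMSE_rel": rmse / e_mean_fld * 100.0,       # Eq. (10), in %
            "MBE_rel": mbe / e_mean_fld * 100.0,         # Eq. (11), in %
            "R_accept": r_accept, "e_map": e_rate}

# Toy example with three viewing directions (lx)
stats = error_metrics([420.0, 505.0, 188.0], [400.0, 520.0, 210.0])
```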
Accordingly, in the present study, the judging criteria for the calibration metrics are preset as r_CV.L ≥ 0.9, r_E ≥ 0.9, MBE_rel ≤ 10%, RMSE_rel ≤ 20%, and R_(|e|≤20%) ≥ 80%. As shown in Figure 6, the total lumen output of the luminaires is scaled to calibrate the light level within the visual field after the calibration of the luminous distribution using the correlation coefficients of illuminance (r_E) and luminance CV (r_CV.L). If r_CV.L and r_E fail to meet their criteria, the light distribution at the corresponding viewpoint should be double-checked to validate the position of the virtual camera, the luminous intensity distribution curve of the luminaire, and the reflectance of the room surfaces in the simulation. For the typical visual field of space occupants, light from the upper side is often contributed by overhead luminaires and upper room surfaces with an open ceiling. In contrast, light reflected from the floor and lower walls is usually obstructed by objects on the floor, furniture, and the occupant's own body.
In the present study, the calibration of lighting simulation adopted a limited visual range with a vertical aiming angle of η ≥ −30°. The downward visual field η < −30° was excluded for two reasons. First, the tripod holding the panoramic camera in the field measurement shields a lower portion (η < −80°) of the FOV of the camera lens, while the virtual camera in the computer simulation has no such obstruction. This discrepancy would lead to an unjustified, extremely large error in the luminous information close to η = −90° in the computer simulation. Meanwhile, considering the occupants' head and eye rotation, the comfortable visual range of humans is λ ∈ [−180°, 180°] in the horizontal direction and η ∈ [−30°, 90°] in the vertical direction (Tilley 2001), and the neck and body of a human would also shield the visual field at large downward pitch angles. Second, the measured lens illuminance has a low contribution from the ground and tabletop falling within the lower FOV of η ∈ [−90°, −30°], while the reflectance of temporal objects often has a significant impact on the illuminance level; it would therefore be inappropriate to use the mixed information in the lower FOV of η ∈ [−90°, −30°] on the panoramic illuminance residual map to calibrate the luminous information (Figure 6 shows the workflow of the calibration process with the preset judging criteria). Future studies will need to explore a more precise calibration method considering the lower FOV of η ∈ [−90°, −30°] in the 360° space and temporal objects, rather than modelling a tripod to support the virtual camera to match the field measurement, which is not preferred due to the obstruction of view.
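Because the panoramic maps are equirectangular, restricting the analysis to η ≥ −30° amounts to masking the lower rows; a minimal sketch, assuming rows run from η = +90° at the top to η = −90° at the bottom:

```python
import numpy as np

def visual_range_mask(h=400, w=800, eta_min_deg=-30.0):
    """Boolean mask over an equirectangular map whose rows span the tilt angle
    eta from +90 deg (top row) to -90 deg (bottom row), keeping only the
    common visual range eta >= eta_min used for calibration in this study."""
    eta = np.linspace(90.0, -90.0, h)                    # per-row tilt angle
    return np.broadcast_to((eta >= eta_min_deg)[:, None], (h, w))

mask = visual_range_mask()
# e.g. residual_map[mask] then feeds the histograms and R(|e| <= 20%) statistics
```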

Experiment setup
A case study was conducted in a daylit office (Figure 7) of approximately 70 m² at the University of Kansas (38.9717°N, 95.2353°W). This multifunctional office consists of a niche on the west side with three stationary computer desks for study, and a larger open space with four moveable tables and chairs used for lectures and conferences. The space has an open ceiling with exposed equipment, a concrete floor, and white walls. Daylight enters through a south-facing strip window and a small east-facing inside window opening towards an atrium. Ten direct-indirect linear luminaires (Litecontrol NOP-ID-5900, 1.2 m long), each fitted with three fluorescent tubes (Philips F32T8, 28 W × 3), were suspended below the ceiling. Following the first step of the workflow (Figure 2), six typical viewpoints of space users were identified in this space, as shown in Figure 7, whose locations were measured in the field with a laser distance metre in reference to several marked wall surfaces. Viewpoints 1-4 were set at the eye height of 1.20 m above the floor for seated space occupants, and Viewpoints 5 and 6 were at the standing eye height of 1.65 m for speakers in the front space. At Viewpoints 1-4, as shown in Figure 7, the main viewing directions of students were aimed toward the TV screen used for lecturing, while the instructor at Viewpoints 5 and 6 had his/her main viewing directions toward the audience area. With the identified viewpoints and main viewing directions, six groups of dual-fisheye LDR images were taken at night, without daylight, to generate panoramic HDR images after the vignetting and luminance calibration.
Then, in the second step of the proposed workflow in Figure 2, a demonstration of lighting simulation calibration with panoramic HDRI was conducted in the case study in the multifunctional daylit office space (Figure 7). A 3D model was built in Rhinoceros 6.0, and its photometric attributes were defined in DIVA for Grasshopper.
Last, the calibration of the lighting model went through the preset parameters, the comparison of panoramic HDR images, and the detailed calibration process with manipulation of various lighting parameters, as follows.

Generating and comparing panoramic HDR images
Comparison of 360° HDR images taken in the field and generated in the computer simulation at the same viewpoint is a new technique for the calibration of lighting modelling. As shown in Figure 8, the correlation coefficients of illuminance (r_E) and luminance CV (r_CV.L) were used as the initial calibration metrics at this stage, assuming sufficient accuracy of geometry and luminous distribution when r_E and r_CV.L reached 0.9. During the field measurement, six panoramic HDR images were captured: at Viewpoints 1-4 for calibration of the model, and at Viewpoints 5-6 for further validation. With approximated lighting parameters, six panoramic HDR images were rendered in the 3D model at the corresponding viewpoints. The captured and rendered HDR images were compared visually and computationally side by side in Figure 9, with r_E and r_CV.L all reaching 0.90, and any evident difference of the model geometry from the built reality was visually spotted. A slight orientation deviation, if any, could be fixed by inputting a correction angle into the MATLAB calculation programme before generating the panoramic illuminance/luminance CV maps. After the initial calibration of geometry and light distribution, the space enclosure and luminaires in the simulation were consistent with those in the real scenes perceived at the six viewpoints, while the furniture and other moveable objects projected in the lower side of the FOV, with a pitch angle η ∈ [−90°, −30°], were loosely approximated. After the manual geometry calibration, the corresponding field-captured and simulation-rendered 360° HDR images could be generated and compared quantitatively for the calibration of lighting parameters, as disclosed in Sections 3.3 and 3.4.

Preset lighting parameters in the model
To calibrate the lighting simulation against the field measurement, three lighting parameters were adjusted in the computer modelling: room surface reflectance, the luminous intensity distribution curve, and the luminaire's lumen output. The adjustments of these lighting parameters span from precise calibration to loose approximation, in order to explore the influence of each parameter on the model calibration.
Before the adjustment of lighting parameters, some room surfaces' attributes were assumed calibrated and held at constant settings. The glazing of the south-facing window was set as 'double-pane low-E glass' with a transmittance of 65% in the modelling, matching the actual specification of the window glass. The reflectance of some object surfaces was assigned constant values, as shown in Table 1, which approximate the actual field measurements.
However, the main room surfaces (interior walls, ceiling, and floor) were given adjustable reflectance values in DIVA, in addition to the actual field measurement, for simulation calibration. As shown in Table 2, besides the calibrated values, the loosely approximated lighting parameters are marked with *. For example, Reflectance #1 is the optimal reflectance calibrated to the built reality, while Reflectance #2* is a loosely approximated value used to investigate the influence of reflectance deviation on the modelling accuracy.
The ten suspended direct-indirect linear luminaires (Litecontrol NOP-ID-5900), the only electric light source in the space, were used as a reference to calibrate the simulation model. The initial IES file of the luminaire was downloaded from the manufacturer's website. As shown in Figure 10, Intensity #1 (solid line) is calibrated to the IES file provided by the manufacturer, with more uplight, while Intensity #2* (dashed line) is a loosely approximated curve with slightly less uplight and thus more direct downlight. Despite the different intensity distribution curves, the luminaires had the same lumen output (6900 lm, #100%), unless later adjusted to 13800 lm (#200%*).
The accuracy of the loosely approximated models versus that of the optimally calibrated model was compared in Table 3, which shows a total of five lighting models with adjusted lighting parameters. Model 1 was simulated under the least favourable conditions, with all parameters loosely approximated, while Model 5 used only calibrated parameters. In between, Models 2-4 were each simulated with one loosely approximated parameter: room reflectance, the luminaire's luminous intensity distribution curve, or the luminaire's total lumen output, respectively.

Results of the calibration
To show how the adjustment of lighting parameters affects the simulation accuracy, the simulation results of the different models were compared at Viewpoint 1, located in the space centre, as an example. As shown in Table 4, unsurprisingly, the loosely approximated Model 1 had the lowest accuracy in illuminance simulation, while the calibrated Model 5 had the least errors.
Model 1 has a maximum illuminance residual of 1965 lx between the simulation and field measurement at the viewing direction (horizontal 70°, vertical 60°), the highest RMSE of 1114 lx (334% of the average illuminance), and an MBE of 1011 lx (303%). In contrast, Model 5 has the lowest MBE and RMSE of 25.4 lx (8%) and 30.5 lx (15%), respectively, and its maximum illuminance residual is within 90 lx, evidently lower than that of Model 1. As expected, Models 2-4 had moderate simulation errors in between those of Models 1 and 5. Among Models 1-5, the high Pearson correlation coefficients of luminance CV (r_CV.L ≥ 0.94) and illuminance (r_E ≥ 0.98) between the rendered and field-captured images indicate that the simulated luminous distribution is similar to the real condition. However, the other illuminance statistics (MBE, RMSE, Max.E in lx) show a discrepancy between the simulated light levels of the different models and the shared field measurement. This is possibly caused by similar light distribution patterns and geometry but different light levels.
Additionally, it was found that adjustments to the luminaire's lumen output (Model 4) and intensity distribution curve (Model 3) may have a greater influence on the accuracy of the simulated illuminance than changing the surface reflectance (Model 2), resulting in larger MBE and RMSE and lower R_(|e|≤20%). Last, the optimal model (Model 5), with all calibrated lighting parameters, shows the least illuminance error, with the lowest MBE and RMSE of less than 26 lx (< 10% of the average) and 31 lx (< 15%), respectively, and the smallest maximum residual (Max.E), within 85 lx.
Next, a panoramic illuminance residual map was used to show the difference between the simulated illuminance and the field measurement at a specific measurement point (e.g. Viewpoints 1-6) in any viewing direction within the 360° panoramic view. Note that on the map, the residual rate is marked positive (e ≥ 0) when the simulated illuminance is larger than or equal to the field-measured illuminance, and negative (e < 0) when the simulated illuminance is less than the field-measured illuminance. The panoramic illuminance residual map can either cover both positive and negative values on the same map, or separate them into two complementary maps (one positive residual rate map, one negative residual rate map) for the convenience of using colour scales to show the residual rate magnitude. This case study generated a total of 12 (6 × 2) panoramic illuminance residual maps at the six viewpoints for each of the lighting models 1-5 and compared them for calibration purposes. Given the page limit of this paper, the results of only one viewpoint for Models 1-5 are shown in Figure 11. Since the negative residual rate of illuminance has minimal coverage on the maps, it was not included; Figure 11(a) and (b) show the positive residuals of illuminance on maps and in histograms, respectively.
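Splitting a residual map into its complementary positive and negative halves is straightforward; the sketch below uses NaN to blank out the cells belonging to the other map (names are illustrative):

```python
import numpy as np

def split_residual_map(e_rate):
    """Split a panoramic residual-rate map (%) into the complementary positive
    (e >= 0) and negative (e < 0) maps; NaN marks cells on the other map, so
    each map can use its own colour scale for magnitude."""
    pos = np.where(e_rate >= 0.0, e_rate, np.nan)
    neg = np.where(e_rate < 0.0, e_rate, np.nan)
    return pos, neg

# Tiny demonstration with a 2 x 2 residual-rate map (%)
e_rate_map = np.array([[12.0, -5.0], [31.0, -18.0]])
pos_map, neg_map = split_residual_map(e_rate_map)
```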
As shown in Figure 11, Models 1-4 simulated nearly all illuminance at Viewpoint 1 higher than the field measurement, as indicated by the positive residual rates (e ≥ 0); adjusting the lighting parameters changed the magnitude of the residual rates but hardly affected their distribution pattern. Additionally, the frequency of the residual rate can be retrieved within a limited visual range of the most common view of space observers (tilt angle η ≥ −30°, aiming angle −180° ≤ λ < 180°), as shown in Figure 11(b). On the histograms, two dashed lines indicate the acceptable illuminance data range (|e| ≤ 20%). Model 2, with manipulated room surface reflectance, has 51.2% of acceptable illuminance data (R_(|e|≤20%)). In contrast, no illuminance data in Models 1, 3, and 4 fell into the acceptable range, indicating lower simulation accuracy in those models. Then, Figure 12 compares the panoramic spatial distribution of both the positive and negative illuminance residual rates (e) mapped at Viewpoints 1-6 of only Model 5 (the optimal one). The positive residual map (left column in Figure 12) and negative residual map (right column) are complementary; combined, they make a whole 360° panoramic map. Large residual rates were observed at Viewpoints 1 and 4 in the downward-looking direction (lower than −30°), possibly caused by light reflection from furniture and objects on the tabletop and floor. At Viewpoints 2, 3, 5, and 6, relatively larger residuals were observed at a few scattered points, caused by nearby light trespass.
The details of Model 5 were first calibrated with the field measurement at Viewpoints 1-4, and the so-calibrated Model 5 was then validated against the built reality at Viewpoints 5-6. The thresholds for both the calibration and validation processes were MBE_rel ≤ 10%, RMSE_rel ≤ 20%, and r_E, r_CV.L ≥ 0.9 within the visual range (η ≥ −30°). The resulting similarity of lighting Model 5 to the built reality was analyzed as follows. The descriptive statistics are listed in Table 5, which shows a slight difference between the simulated and field-measured results (< 10 lx, with sample size N = 2886). The lighting at Viewpoints 3 and 5 is the dimmest (E_avg: 117 lx in the field vs. 106 lx in the simulation) and brightest (E_avg: 695 lx in the field vs. 670 lx in the simulation), respectively. Table 6 compares the field measurement and the simulation. The maximum illuminance residual (Max.E) is lower than 65 lx at all six viewpoints. The MBE and RMSE of illuminance at the six viewpoints within the visual range (η ≥ −30°) do not exceed 30 lx (< 10%) and 40 lx (< 15%), respectively. The remaining inconsistency between the simulation and the field measurement was possibly caused by the loosely approximated specular reflectance of the furniture in the DIVA software, which was not measured and validated in the field.
To further illustrate the simulation accuracy in the most commonly observed viewing directions (tilt angle η ≥ −30°, aiming angle −180° ≤ λ < 180°), Figure 13 shows the retrieved histograms of the illuminance residual rate within the limited visual range (η ≥ −30°) at the six viewpoints of the calibrated Model 5. It was found that the residual rates are normally distributed around 0% at Viewpoints 3, 4, and 5, but not at Viewpoints 1, 2, and 6. It was also found that the peak normalized frequency of the data points always fell into the acceptable range [−20%, 20%]. The illuminance simulation accuracy at Viewpoints 1-5 (R_(|e|≤20%) ≥ 90%) is higher than that at Viewpoint 6 (R_(|e|≤20%) = 79.2%). Moreover, Figure 14(a) and (c) exhibit the linear relationship between the simulated and field-measured illuminance falling within the most commonly used visual range (tilt angle η ≥ −30°, aiming angle −180° ≤ λ < 180°) in the calibration and validation phases (N = 1924, 962). Within this limited visual range, close to 90% (94.2%, 89.6%) of the simulated illuminance data have acceptable accuracy, with a residual rate lower than 20%. The R² of the linear fit between the simulated and field-measured illuminance is over 0.99, which indicates that the calibrated simulation was effective and accurate in predicting the illuminance distribution at the six viewpoints of space occupants.
In summary, the simulation results at the six viewpoints mostly stay within the preset thresholds for acceptable accuracy (MBE_rel ≤ 10%, RMSE_rel ≤ 20%, and R_(|e|≤20%) ≥ 80%), except for the MBE_rel of 10.2% > 10% at Viewpoint 4 and the R_(|e|≤20%) of 79.2% < 80% at Viewpoint 6, which slightly exceed the benchmark values. Considering that the MBE and Max.E of illuminance are less than 30 and 50 lx at Viewpoints 4 and 6, respectively, the calibrated simulation Model 5, with all fully calibrated lighting parameters, could be deemed accurate enough to simulate the illuminance at the position of the occupants' eyes.

Discussion and conclusion
The present study established a workflow to calibrate and validate lighting simulation with field measurement aided by 360° panoramic HDR photography. High dynamic range imaging (HDRI) technologies have been widely used in lighting measurement and simulation to facilitate per-pixel analysis and in-depth lighting research. With the emergent 360° panoramic HDRI (Li and Cai 2022a; Quek and Jakubiec 2021), a large amount of luminous data can be obtained with a relatively light field-measurement workload. Different from the conventional 180° HDR image, which measures only one illuminance in its aiming direction, the 360° panoramic HDR image can yield countless illuminance values aimed in any arbitrary direction. Thus, the utilization of 360° panoramic images could upgrade lighting measurement and computer simulation and help find a new way to estimate the real-time light exposure of human eyes.
As a result, the 360° panoramic HDRI can help collect the overall luminous status of the whole 3D space at a specific viewpoint and realize a point-by-point photometric comparison. The simulated and field-measured illuminance at the corresponding viewpoint and viewing direction of every pixel can be compared. A few statistical indicators were adopted to evaluate the accuracy of the simulation model, including the correlation coefficients (r_E and r_CV.L), mean biased error (MBE and MBE_rel), root mean squared error (RMSE and RMSE_rel), and the percentage of illuminance data with an acceptable residual rate (R_(|e|≤20%)). Whether the total input lumens or the luminous distribution pattern in the space needed to be modified can be judged from a group of residual rate maps of the illuminance data; once all the preset benchmark thresholds were achieved, the calibrated model was deemed optimal.
A case study was implemented in the multifunctional office with six test viewpoints. It took about 62 s to capture a group of LDR images at a single viewpoint, and less than 10 min to complete the whole field measurement at the six viewpoints, including the time to shift the location of the camera and to run the programme on the Raspberry Pi controlling the camera. The calibration benchmark was preset as MBE_rel ≤ 10%, RMSE_rel ≤ 20%, and R_(|e|≤20%) ≥ 80%. The simulation results showed that once the geometry was set appropriately, the correlation coefficient of the 360° luminous data between the simulation and field measurement reached a high level (r > 0.90), even though an incorrect setting of the luminaire's luminous intensity distribution curve would lead to a different luminous distribution in the space. The total lumen output of the luminaire only affected the total quantity of light in the space, changing the MBE and RMSE of illuminance evidently but not influencing the luminous distribution or the correlation coefficients (r_E or r_CV.L) between the simulation and the reality. The loosely approximated room surface reflectance impacted the simulation accuracy, but less than the other lighting parameters (the luminous distribution curve and total lumen output of the luminaire), with lower MBE_rel (18% vs. 85%, 128%), lower RMSE_rel (27% vs. 99%, 146%), and higher R_(|e|≤20%) (51% vs. 0%).
Moreover, the accuracy of the calibrated model with all fully calibrated lighting parameters was examined. It is worth mentioning that it is impossible, and unnecessary, to perfectly reproduce the occupied space in the virtual model to simulate the spatial illuminance distribution. The per-pixel luminance correlation analysis between the simulation and the field measurement would be affected by movable furniture, occupants' personal belongings, and slight geometry mismatches. Considering that light from the downward side of each measurement point might be affected by light reflections from objects, furniture, and even the occupants' bodies on the floor, which was not the focus of the lighting simulation calibration, a limited visual range of −30° ≤ η ≤ 90°, −180° ≤ λ < 180° was preset as the most commonly observed visual field to filter the illuminance data and conduct the calibration. The present study showed that the proposed calibration process was able to improve the accuracy of the simulation, with the correlation coefficients for 360° illuminance and CV_L reaching 0.99 and 0.90, respectively. At five of the six viewpoints, over 90% of the simulated illuminance in any viewing direction within the limited visual range (η ≥ −30°) maintained an acceptable residual rate within 20%. The MBE_rel and RMSE_rel are within 10.2% and 12.1%, respectively. The R² of the linear fit between the simulated and field-measured illuminance is over 0.99 in both the calibration and the validation phases. Although the MBE_rel at Viewpoint 4 and R_(|e|≤20%) at Viewpoint 6 slightly exceeded their preset benchmark values, the MBE and maximum illuminance residual at those two viewpoints are small (less than 30 and 50 lx, respectively). Therefore, the calibrated lighting simulation model was still deemed accurate.
On the other hand, the present study still has a few limitations that need to be addressed in the future.
First, the experimental space was small, with only six test points investigated. There is a need to validate this method in a larger space with more test points and various pre-occupation statuses to see whether the benchmark level of accuracy for calibration needs to be lowered to prevent overfitting. In this study, the preset benchmark values were based on previous studies (Ochoa, Aries, and Hensen 2012; ISO 15469:2004(E)/CIE S 011:2003 2004), some of which conducted only grid-based horizontal illuminance measurements. Since none of them calibrated the spatial illuminance in either the entire 360° or a limited visual range (η ≥ −30°) in a built space, future studies are needed to test more scenarios and preset more reasonable benchmark values.
Second, this study used stable electric lighting as the reference light source for the simulation calibration; no changing daylighting was involved. Due to the time span of capturing HDR images, changing daylight, sun position, and cloud coverage would affect the accuracy of the lighting measurement. This problem could be addressed in the future by using more cameras set at different viewpoints to capture multiple HDR images at the same time. The base lighting model calibrated in the present study with only electric lighting can be used as a validated luminous environment that takes further input of daylight parameters for daylighting calibration in a future study.
Third, the present study did not calibrate the surface reflectance to precisely describe the BRDF (reflection) and BTDF (transmission) of every surface. Therefore, this simulation could only imitate the human perception of light in the real space, not the actual visual appearance of the space. It is worth exploring in a future study whether there is a way to scan the real lighting scenario and transform it into a more precise lighting simulation model with the help of HDRI.
Last, the geometry model in the present study was built and calibrated manually, which might introduce random errors. One study (Mahlab et al. 2023) utilized a laser scanner to construct the 3D model, which suggests a potential way to improve the geometry input; however, the high cost of a laser scanner is a potential hurdle. Other emergent commercial 3D scanner apps and consumer products are able to collect the point cloud of a normal indoor space, which could pave a new path to reconstructing the 3D space quickly and accurately, as the field luminous and geometric information might be collected simultaneously.
To conclude, this study devised and validated a procedure to calibrate a lighting simulation to achieve higher accuracy with the aid of a 360° panoramic HDR camera. The luminous distribution pattern and the light level of the simulation could be adjusted by manipulating different lighting parameters, and their impact on the simulation is reflected in a group of panoramic luminous data maps. The simulation model with all fully calibrated lighting parameters achieved a high correlation with the built reality after the calibration. A large portion of the illuminance data within the normal human visual range reached acceptable accuracy. The calibration approach proposed in the present study can be used in other similar scenarios with various space sizes, mostly diffuse-reflectance surfaces, and no changing daylight, that can be captured by a 360° HDR image with sufficient detail. Therefore, the proposed simulation calibration protocol with the aid of 360° HDR imaging could improve the accuracy of future lighting simulations.

Figure 1 .
Figure 1. Comparison between (a) the traditional six-direction measurement and (b) the proposed omnidirectional measurement at a single test point to detect a target source.

Figure 2 .
Figure 2. The workflow of lighting simulation with the aid of 360°panoramic HDR imaging, covering field measurement, lighting simulation, and calibration process.

Figure 3 .
Figure 3. Demonstration of the coordinate conversion between the 2D equirectangular projection (a) and the 3D geographic coordinates (b), and the viewing direction defined by the horizontal viewing direction (x) and vertical viewing direction (y).

Figure 4 .
Figure 4. An example of retrieving a 180° FOV circular fisheye HDR image with a specified viewing direction (b) from a panoramic HDR image with equirectangular projection (a).

Figure 5 .
Figure 5. Illustration of the point-to-point simulation calibration with the field measurement. The x-axis and y-axis are the horizontal and vertical viewing angles, respectively. A single data point located at each block (at an interval of 5° or 10°) on the simulated luminous map (illuminance, luminance CV, etc.) is compared to the data point located at the same block on the field-measured luminous map. The result is a 360° residual map of the difference in illuminance or luminance CV.

Figure 6 .
Figure 6. Workflow of the calibration process with the preset judging criteria (r_CV.L ≥ 0.9, r_E ≥ 0.9, MBE_rel ≤ 10%, RMSE_rel ≤ 20%, and R_(|e|≤20%) ≥ 80%).

Figure 7 .
Figure 7. The layout of the space with the pendant lighting fixtures (grey lines) and the selected Viewpoints 1-6 (red circles). The larger and smaller arrows starting from the viewpoints denote the aiming directions of the space occupants in this scenario.

Figure 8 .
Figure 8. The initial calibration of panoramic HDR images captured in the field and rendered in the simulation, taking Viewpoint 1 as an example with an approximated geometry setting in the simulation.

Figure 9 .
Figure 9. The panoramic HDR images captured in the field (left column) and rendered by the 3D model (right column) at the six preset viewpoints, in two phases in total.

Figure 10 .
Figure 10. The luminous intensity distribution (or power distribution) curves of two different luminaires. The solid line and dashed line represent the curves of the calibrated luminaire with 80% uplight and 20% downlight (Intensity #1) and a different luminaire with 60% uplight and 40% downlight (Intensity #2*), respectively.

Figure 11 .
Figure 11. The positive illuminance residual rate maps (a) and the normalized frequency histograms of the illuminance residual rate (b) for the five models (1-5) at the same Viewpoint 1 (Model 1 manipulated all parameters; Model 2 manipulated surface reflectance; Model 3 manipulated the luminaire intensity distribution; Model 4 manipulated the luminaire lumen output). (a) The x-axis and y-axis represent the horizontal aiming angle λ (°) and vertical tilt angle η (°), respectively; the colour bar is the residual rate (%). (b) The x-axis and y-axis represent the residual rate (e, %) and the normalized frequency (Norm. Freq) of the residual rates falling into the acceptable illuminance data range (|e| ≤ 20%), shown between the two dashed lines.

Figure 12 .
Figure 12. The positive (left column) and negative (right column) panoramic illuminance residual rate maps of the calibrated Model 5 at Viewpoints 1-6.

Figure 13 .
Figure 13. The histograms of the illuminance residual rate distribution (N = 481) of the calibrated simulation model at all Viewpoints 1-6, for data falling within the most common visual range of space observers (tilt angle η ≥ −30°, aiming angle −180° ≤ λ < 180°). Between the two dashed lines is the acceptable relative illuminance error (|e| ≤ 20%). R_(|e|≤20%) is the ratio of illuminance data with an acceptable residual rate.

Figure 14 .
Figure 14. Aggregated illuminance data in the calibration (Viewpoints 1-4) and validation (Viewpoints 5-6) phases falling within the limited visual range (vertical viewing direction η ≥ −30°) are compared. (a) and (c) are the linear relationships between the rendered and field-captured illuminance data, where the dashed line denotes the linear fit of the simulated illuminance and the solid line the desired perfect fit with the best simulation accuracy; r_E is the Pearson correlation coefficient. (b) and (d) are the frequency histograms of the residual rate distribution.

Table 1 .
The constant reflectance settings of the objects' opaque surfaces.

Table 3 .
The manipulation of lighting parameters in the different lighting models (the loosely approximated parameters are marked with *).

Table 4 .
The simulation accuracy, in comparison to the field measurement, of the panoramic photometric measurements at Viewpoint 1 (sample size N = 481 viewing directions) for the five lighting models with different adjusted lighting parameters.
Note: r_CV.L and r_E are the Pearson correlation coefficients of the panoramic luminance CV (coefficient of variation) and illuminance, respectively. Max.E: the maximum illuminance residual at this viewpoint.

Table 5 .
Descriptive statistics of the field-measured and simulated results.

Table 6 .
The accuracy of the calibrated model at the six viewpoints in all 360° viewing directions (N = 703) and in the usual visual range (N = 481). Note: r_CV.L and r_E are the correlation coefficients of CV_L and illuminance (E) retrieved from the panoramic HDR images rendered by the model and captured in the field. Max.E is the maximum illuminance residual at the specific viewpoint.