Introduction
An integrative activity
Refinery production scheduling (RPS) is about understanding, modeling and solving production problems inside a very complex process industry (discussed in
Joly, 2012). It includes the management of timing, sizing, allocation and sequencing decisions in a connected and nonlinear world, where optimization trade-offs, specialized processing models and complex blending correlations must be considered in conjunction with a myriad of technical, economic, environmental, and commercial constraints (
Lee et al., 1996;
Pinto et al., 2000).
Over the decades (Symonds, 1955), with the increasing complexity of petroleum refineries, it has become increasingly clear that RPS is a highly profitable activity, and for modern high-performing oil refineries, RPS is now understood to be a strategic tool involving many areas. First, advanced tools built on proprietary or confidential know-how are involved (e.g., business intelligence, processing models, blending correlations and product chemistry). Second, RPS depends on an experienced refinery staff whose expertise is required to interpret the results and perform optimization in the technical sense of the word. Third, having an intelligent production strategy is crucial not only to achieve profitability but also for performance, including energy and environmental efficiency, logistics reliability and, hence, customer satisfaction. Fourth, RPS is a paradigm-breaking activity that drives changes needed inside the organization (see section 1.3). Fifth, and most importantly, RPS integrates data, systems, technologies, work processes and people, thereby rendering schedulers the true ‘brain’ of an oil refinery.
Given this powerful integrative function, RPS is expected to be central to refining business performance in the (holistic) context of the Fourth Industrial Revolution. In contrast to the reductionist, exact world of discrete manufacturing (e.g., the automobile industry), the chemical process industry in general, and oil refineries in particular, may represent a conspicuous environment for exploiting the emergent behavior (
Ottino, 2011) of the manufacturing process. For example, in-line blending operations—which are also the basis for cutting-edge production technologies such as in-line blending certification (
Feital et al., 2013)—represent a didactic example, since blending has been considered the refinery’s last chance to impact profitability (
Kelly and Mann, 2003). Increasing attention has been devoted to analyzing the main opportunities and challenges in the field (
Li, 2016;
Li et al., 2016), which include the urgent need for an industry-university research coalition (
Yuan et al., 2017). In this sense, here we unify insights from industry and academia to examine promising routes for furthering RPS development in smart refineries.
Overview of RPS technology
The union between discrete, event-based simulation techniques and simple, rule-based heuristics proved to be the first technological marriage necessary to handle realistic RPS problems. Relying on this combination, pioneering commercial RPS technologies first released in the 1990s (e.g., ORION by Aspentech) opened up a very profitable software market and won over a new category of important users: the refinery schedulers. Highly customized and complex electronic spreadsheets could finally begin to be replaced by standardized, user-friendly computer-aided decision-making tools.
The ability to easily represent highly specialized operational rules and process models was a hallmark of this innovative class of industrial automation technology. Since RPS intelligence is a competitive differentiator (
Joly, 2012), the appearance of the first commercial solutions motivated some oil companies to develop their own RPS applications. Examples include the BR-SIPP® by
Magalhães et al. (1998) at Petrobras (Brazil) and the OMV Scheduling System by
Steinschorn and Hofferl (1997) at the OMV Schwechat refinery (Austria).
Such endeavors carried out inside the industrial environment propelled the search for novel and automated solution approaches in tight collaboration with academia (
Kelly, 2003). Linear optimization formulations, already in widespread use for solving tactical production planning problems, gained a new function: they evolved substantially into mixed-integer programming (MIP)-based formulations as a first attempt to solve large-scale scheduling problems (
Pinto et al., 2000;
Joly and Pinto, 2003). The results of this fruitful work between industry and academia were also gradually captured by commercial vendors.
As a result, the next remarkable technological progress in the RPS software market was the introduction of optimization-based techniques (e.g., MIP) (
Biegler and Grossmann, 2004;
Grossmann and Lee, 2003;
Kelly and Zyngier, 2007) in the 2000s for solving well-defined refinery subsystems (e.g.,
Lee et al., 1996;
Castro and Grossmann, 2014;
Kelly et al., 2017). Typically, these subsystems are in the crude-oil and product blending areas, whose topology, routings and configuration can normally be generalized to any refinery.
Currently, there are many good RPS tools from traditional vendors, such as Aspentech, Haverly, Honeywell, Technip, Princeps, Soteica, etc. Although potentially suboptimal solutions were still being obtained in practice, these new tools promptly proved to be useful in supporting refinery schedulers in finding feasible and generally good quality solutions for critical refinery subsystems. However, as noted by
Yuan et al. (2017), the capacity to complete the scheduling of an entire oil refinery or petrochemical plant—a key feature of advanced or smart manufacturing—has not yet been achieved.
Role of the human factor
The importance of data for RPS systems finally became evident to those in the highest positions inside the corporation when they tried to improve their overall value chains. Since a very large set of structured and consistent information is required to run an RPS system, the refinery data had to be organized appropriately. In Brazilian refineries, this triggered an unprecedented level of IT investments in the 2000s to harmonically integrate many refinery and corporate systems.
Not surprisingly, such work typically led to new job openings, since large data sets from many systems had to be vetted and treated (e.g., screened, converted, reconciled, checked) before being fed to the RPS application. The mantra of “Best Data + Best Decisions = Best Business” began to be popularized inside organizations, transcending the technical habitat that had been typical of engineering departments in oil refineries. As a result, work processes inside the corporation (not only inside the refinery) also had to be systematized and improved as prerequisites for successful project implementations.
When properly implemented, RPS applications have also provided the driving force for changing cultures and paradigms inside the operational environment. With the introduction of standardized RPS tools, the short-term production scheduling information could finally be properly cataloged and, hence, accessed by others. Actually, RPS applications have ‘democratized’ and standardized operational information inside the corporate environment.
This was an important milestone. In many cases, such standardization also paved the way for changes in the way the refining business was managed at the operational level. Once the ‘information monopoly’ had been broken, staff that were previously considered ‘irreplaceable’ (and, in general, their ‘frozen’ viewpoints regarding how things should be done) could finally be replaced, thereby also rejuvenating the operationalization of the RPS activity as a whole. As witnessed in Petrobras and abroad, the introduction of automation technologies in RPS allowed the role of the human factor to evolve from a tedious and uninspiring activity to a highly motivating predictive and prescriptive analytical activity.
RPS in a smart manufacturing reality
The context
Both the European Industry 4.0 (I4) initiative (reviewed in
Lasi et al., 2014) and the American Smart Manufacturing (SM) initiative (
Davis et al., 2012) are about intensively and consistently interweaving real-world industrial operations and their ongoing processes with computing and communication infrastructures in a cyber-physical system (CPS) (
Monostori, 2014;
Lee et al., 2015) (Table 1).
By simultaneously providing and using the data-accessing and data-processing services available on the Internet, the promise of CPS is to drastically increase the interaction between automated and human-based working processes. The high flexibility of the plant design (e.g., virtual plant commissioning), operation, and diagnosis of the production system provided by smart networking will give rise to completely new business models that will support optimal resource utilization and smart control (
Jazdi, 2014). Robustness, autonomy, self-consciousness, self-organization, self-diagnosis, self-repair (self-X), real-time monitoring, responsiveness and predictability are some hallmarks of production systems based on the I4 philosophy, the newest focus of many in the field of process systems engineering (PSE).
Central to any CPS model is the concept of ‘digital twins,’ which are digital models of industrial equipment and manufacturing processes (
General Electric, 2016). Digital twins rely on a connected, responsive and predictive software-machine existing at the nexus of physical engineering and data science. Their value translates directly to measurable business outcomes, such as minimization of asset downtime and maintenance costs, optimization of energy and utility efficiency, reduction in cycle times, better multi-unit coordination and increased market agility. Regarding the refining business, the concept of ‘digital twins’ leads to another concept, the virtual refinery (
Moro, 2009). The virtual refinery is a new underlying philosophical paradigm in the petroleum refining industry that requires a knowledge breakthrough, or even a cultural ‘break-with’ inside the entire organization. For instance, the versatility provided by virtual instrumentation (software-redundant measurement systems (
Kelly and Zyngier, 2008a)) allows firms to readily estimate the customized data required to properly run RPS applications. Moreover, by replacing complex and expensive hardware systems (e.g., online analyzers) with conventional hardware (computers) and software (e.g., process simulators), synthetic instruments also allow refiners to reduce the operating and maintenance costs of physical instruments.
A real-world example in Brazilian oil refineries is the ‘virtual laboratory’. Basically, it consists of efficiently integrating advanced process simulators (e.g., PetroSIM by Yokogawa (KBC)) with plant data information systems (e.g., PI by OSI-Systems) in a real-time environment. By adopting this simple recipe, accurate and reliable estimations of process stream properties (including compositional information) have promptly been obtained in silico. This has not only minimized the need for laboratory manpower and, in some cases, for maintaining sophisticated and expensive in-line analyzers but has also improved the execution of many high-profile engineering tasks, such as process monitoring and model updating.
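As a minimal sketch of this recipe (with hypothetical tag names, a placeholder property correlation standing in for the rigorous simulator, and a stubbed historian interface rather than any specific PI or PetroSIM API), a ‘virtual laboratory’ soft sensor reduces to a read–estimate–write cycle:

```python
# Minimal 'virtual laboratory' soft-sensor sketch. The historian interface is
# stubbed with a static snapshot, the tag names are hypothetical, and the
# property correlation is a placeholder for a rigorous simulator calculation.
SNAPSHOT = {"CDU1.TI_TOP": 118.0,          # column top temperature, degC
            "CDU1.TI_DIESEL_DRAW": 262.0,  # diesel draw temperature, degC
            "CDU1.REFLUX_RATIO": 2.4}

def read_tag(tag: str) -> float:
    # In practice: a call to the plant data historian's client API.
    return SNAPSHOT[tag]

def write_tag(tag: str, value: float) -> None:
    # In practice: publish the synthetic (soft-sensor) value back to the historian.
    print(f"{tag} <- {value:.1f}")

def estimate_diesel_d95(top_c: float, draw_c: float, reflux: float) -> float:
    # Placeholder linear correlation; a real virtual laboratory would call a
    # calibrated flowsheet model (e.g., a process simulator) here.
    return 180.0 + 0.45 * draw_c - 0.20 * top_c - 3.0 * reflux

# One estimation cycle; in practice this runs periodically against live data.
d95 = estimate_diesel_d95(read_tag("CDU1.TI_TOP"),
                          read_tag("CDU1.TI_DIESEL_DRAW"),
                          read_tag("CDU1.REFLUX_RATIO"))
write_tag("CDU1.DIESEL_D95_SOFT", d95)
```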
Refinery production scheduling 4.0
Mobile devices and related technologies (e.g., RFID) will provide cheap time- and distance-independent access to the data, processes, and services of automated production systems. Key technologies (e.g., industrial wireless networks) allow networking among data from/to automation devices, equipment status, and operating supplies. By consistently assembling large plant data sets from physical and synthetic instruments, CPS will provide data integrity and data integration, thereby heightening the integrative role of RPS. Decision-making intelligence will rely on the real-time cooperation among decentralized optimizers (
Kelly and Zyngier, 2008b) that are integrated in the decision automation engine (e.g., cloud computing) with powerful big-data analytics, visualization and sharing of information (
Qin, 2014).
Therefore, refinery schedulers will make decisions inside a workforce environment supported by a real-time cyber representation of the physical world. Even though diagnostics can also partly be performed by schedulers (who have experience in plant operations), information from big data analytics will be retrieved on-demand, intelligently used and linked so that an automated (self-) diagnosis (root cause analysis) can be produced. Ideally, RPS in a smart manufacturing reality presupposes no human diversion (distraction, deviation, digression, etc.) or limitations in intelligence (degree of training, subjective viewpoints, prior experience, etc.) in decision-making and execution. Therefore, with leading-edge smart scheduling, the RPS can, in principle, exclude the human factor from its failure or success.
Examples
Example I: Failure of the crude-oil charging pump
Let us consider the crude-oil area of an oil refinery as depicted in Fig. 1. In this first example, it is hypothesized that (a) a crude-oil charging pump connecting a crude-oil tank to the crude-oil distillation unit (CDU) will fail on day 5 due to a severe wear in the rolling bearings, and (b) after failure, the pump remains out of service for 24 h (for maintenance).
Over a 10-day scheduling horizon, the production schedule is theoretically assessed under two scenarios: the oil refinery running under either an Industry 3.0 (I3) or an Industry 4.0 (I4) technological background (Fig. 2).
In I3, the scheduler is provided with the best decision-making technology currently available. This consists of having a standalone RPS solution integrated to corporate databases from which the initial or opening conditions of the plant can automatically be loaded to run the RPS application off-line. In I4, the industrial wireless network enables all equipment, devices, workmen, terminals and other wireless nodes to finish complicated tasks individually and cooperate with other equipment, thereby providing a new architecture based on quality of service and quality of data for running a RPS in real time in an industrial private and secure cloud. The RPS application in I4 runs in real time, relying on additional, real-time processed information from the plant similar to real-time traffic information while driving. This may include optimal data from smart entities (e.g., sensors, machines, workmen, mobile devices) and distributed real-time optimizers and monitors which, in turn, are fed back with updated information from the RPS application, thereby improving their optimized trajectories in a virtuous cycle.
For the sake of simplicity, our analysis is restricted to only one I4 property: equipment self-diagnosis. Hence, our analysis is also conservative in terms of potential benefits. Regardless of the context (I3 or I4), determining the best operational schedule involves many interrelated decision-making processes. They comprise logic and logistics decisions for resource selection and sequencing over time (e.g., start and end times of logistic operations involving crude-oil tank selections; blending sequencing, etc.) as well as decisions in a continuous domain, such as optimal blending recipes and batch sizes, optimal flowrates, and distillation cut points (
Kelly and Mann, 2003).
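To illustrate how such logic/logistics and continuous decisions appear together in a single formulation, the toy model below mixes binary tank-selection variables with continuous unloading flows over a 10-day horizon. It is only a sketch built on assumed data (inventories, demand, maximum rates), written in Pyomo with an open-source MIP solver; it is not any vendor's scheduling model, and ramp/transition behavior is crudely approximated by penalizing switchovers.

```python
# Toy MILP mixing discrete tank-selection and continuous flow decisions over a
# 10-day horizon. All data are assumed for illustration; requires Pyomo and an
# MIP solver such as CBC. Not a vendor scheduling model.
import pyomo.environ as pyo

days = list(range(1, 11))                        # 10-day horizon
tanks = ["HC-1", "LC-2", "HC-3", "LC-4"]
inv0 = {"HC-1": 60.0, "LC-2": 50.0, "HC-3": 70.0, "LC-4": 55.0}  # kbbl
demand, fmax = 18.0, 15.0                        # CDU charge and max rate, kbbl/d

m = pyo.ConcreteModel()
m.y = pyo.Var(days, tanks, domain=pyo.Binary)            # 1 if tank k feeds the CDU on day d
m.f = pyo.Var(days, tanks, domain=pyo.NonNegativeReals)  # unloading flow, kbbl/d
m.inv = pyo.Var(days, tanks, domain=pyo.NonNegativeReals)
m.sw = pyo.Var(days, tanks, domain=pyo.NonNegativeReals) # switch-on indicator

m.link = pyo.Constraint(days, tanks, rule=lambda m, d, k: m.f[d, k] <= fmax * m.y[d, k])
m.charge = pyo.Constraint(days, rule=lambda m, d: sum(m.f[d, k] for k in tanks) == demand)
m.pair = pyo.Constraint(days, rule=lambda m, d: sum(m.y[d, k] for k in tanks) <= 2)
m.bal = pyo.Constraint(days, tanks, rule=lambda m, d, k:
        m.inv[d, k] == (inv0[k] if d == 1 else m.inv[d - 1, k]) - m.f[d, k])
# crude surrogate for switchover/ramp cost: count tank start-ups
m.start = pyo.Constraint(days, tanks, rule=lambda m, d, k:
        m.sw[d, k] >= m.y[d, k] - (0 if d == 1 else m.y[d - 1, k]))
m.obj = pyo.Objective(expr=sum(m.sw[d, k] for d in days for k in tanks), sense=pyo.minimize)

pyo.SolverFactory("cbc").solve(m)                # any locally installed MILP solver
schedule = {d: [k for k in tanks if pyo.value(m.y[d, k]) > 0.5] for d in days}
print(schedule)
```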
For a scheduler working under I3, the underlying assumption is that equipment failures will not occur. Therefore, the optimal production schedule is first determined at day 0, relying exclusively on known (and, therefore, past) information. Typically, this comprises a large data set, which must capture a snapshot of the plant at that time (i.e., the plant initial conditions; e.g., crude-oil inventory levels and qualities) and known operational information about the future (e.g., the crude-oil supply schedule, equipment programmed maintenance, production targets, etc.). In general, a multi-objective optimization procedure will be performed to harmonically satisfy operational (e.g., plant operational stability) and non-operational (e.g., plant profitability) aims. If no unexpected events appear, the real-world will (approximately) behave as idealized (or scheduled) in silico.
This is illustrated in the top panel of Fig. 2, which represents the output from a competent scheduler working under I3. Under such a hypothetical condition (unexpected events do not occur), the production schedule is determined considering proper ramp times (R1), during which the unloading flows from two sequenced charging tanks to the CDU are gradually decreased and increased, separately (Fig. 3). This operational procedure is a ‘good practice’ that allows for smooth transitory operation in the CDU during charging tank switchovers. In an I3 reality, the lack of intelligent automation systems that improve equipment monitoring and data reliability does not allow schedulers to know in advance (and, hence, compute) a future ‘unexpected’ event when running their RPS application at day 0. Thus, if an unexpected event occurs, fast rescheduling must be executed by schedulers to avoid a crisis.
Let us assume that the pump feeding the light crude-oil from charging tank 2 (LC-2) to the CDU fails at day 5 due to excessive wear in its rolling bearings (Fig. 2, middle panel). As a result, the LC-2 injection into the crude-oil blend mix will stop suddenly, thereby implying (undesirable) disturbances in the CDU operation despite the expected compensatory actions from the APC/RTO applications. After the pump failure, the feed flowrate of heavy crude-oil from charging tank 1 (HC-1) is increased by these online applications to compensate for the reduction in the feed flowrate of the CDU. As a result, the unloading of tank 1 is completed earlier than originally scheduled. After an elapsed (lag) time L spanning from the event detection (day 5) to the conclusion of (unplanned) logistic operations in the tank farm, the light crude-oil from tank 4 (LC-4) begins to be injected into the crude-oil blend mix that feeds the CDU. This operation takes place without a smooth changeover occurring between tanks 2 and 4. Stable operation in the CDU is therefore impaired.
Stable operation in the CDU will only be restored S time units after LC-4 is aligned to load the CDU. The early emptying of tank 1 (caused by the increase in the HC-1 flowrate ordered by the APC to compensate for the abrupt fall in the LC-2 flowrate right after the pump failure) forces the ramp time between tank 1 and tank 3 (which is unavailable until the beginning of day 6) to be shortened (R2<R1). This, in turn, imposes a non-smooth transition between two different heavy crude-oils and, hence, additional disturbances in the CDU. Moreover, if the scheduler is unaware of the period required for pump maintenance when urgently rerunning the RPS application, additional operational discontinuities may arise, thus amplifying the aforementioned negative consequences.
Conversely, in a smart industrial environment (Fig. 2, bottom panel), the RPS solver is supplied with real-time information related to equipment self-diagnosis. This is achieved with support from the digital twin models of every pump and by continuously analyzing each model using advanced statistical tools (discussed in
General Electric, 2016). Here, engineers can not only predict the time of failure but also bring the pump down for maintenance predictively, eliminating the costs of unnecessary downtime and mitigating the risks of unplanned outages.
In an I4 context, the (true) global optimal solution is determined all at once on day 0 (or at any time at which updated plant self-diagnosis information becomes available for performing rescheduling). In this scenario, the crude-oil supply from LC-2 is preventively interrupted U time-units before the expected time of failure for the pump connecting tank 2 to the CDU. This allows the switchover between the light crude-oil tanks (LC-2 to LC-4) to be performed, satisfying optimal conditions with respect to ramp times. Even though the I4 optimal solution (Fig. 2, bottom panel) requires the same number of tank switchover operations as determined in the I3 rescheduling (Fig. 2, middle panel), the switchover operations are now ideally performed, that is, they are executed obeying adequate ramp times (R1), thereby resulting in stable CDU operation.
It is worth noting that, besides potentially affecting operational stability, an unexpected (and, in general, urgent) need for rescheduling may (and in general does) imply suboptimal operation along with loss of economic opportunities. Worse still, it may also introduce potential risks to plant safety if it is not carefully planned and implemented at the operational level. For a scheduler working in a smart refinery, equipment self-diagnosis (relying on lube-oil temperature and pump shaft vibration information, for example) will automatically indicate the need for programmed maintenance of the charging pump. This information is updated in real time and sent to the RPS application, which optimizes the programmed flowrate at which the pump must operate in the short term, thereby establishing a virtuous cycle toward optimized operation. In such a holistic approach, more stable operation based on a production schedule that tends “to do the right thing the first time” is the net outcome. Here, operational discontinuities and economic losses associated with unexpected plan changes are minimized (or avoided).
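A minimal sketch of how a digital-twin prognosis could be turned into scheduling data is given below; the prognosis values, safety margin, ramp duration and helper names are hypothetical, and in a real implementation these results would be fed into the RPS model as equipment-availability constraints rather than printed.

```python
# Sketch: turning a digital-twin failure prediction into scheduling inputs.
# Times are hours from the start of the horizon; the prognosis values, safety
# margin and ramp duration are hypothetical.
from dataclasses import dataclass

@dataclass
class PumpPrognosis:
    predicted_failure_h: float   # e.g., from a bearing-wear degradation model
    maintenance_h: float         # expected repair duration

def unavailability_window(p: PumpPrognosis, safety_margin_h: float = 6.0):
    """Interval during which the pump (and its tank alignment) must not be scheduled."""
    start = p.predicted_failure_h - safety_margin_h
    return start, start + safety_margin_h + p.maintenance_h

def latest_switchover_start(window_start_h: float, ramp_h: float) -> float:
    """Latest time to begin the LC-2 -> LC-4 ramp so it ends before the outage."""
    return window_start_h - ramp_h

prognosis = PumpPrognosis(predicted_failure_h=5 * 24.0, maintenance_h=24.0)
w0, w1 = unavailability_window(prognosis)
print(f"pump unavailable in [{w0:.0f} h, {w1:.0f} h]")
print(f"begin tank switchover no later than {latest_switchover_start(w0, ramp_h=8.0):.0f} h")
```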
Example II: CDU tray damage
Let us discuss the differences between I3 and I4 with respect to a distinct situation: impaired functional performance of the equipment. This may be a case of tray damage inside the CDU. In this case, the equipment (CDU) will not stop working. However, the fractionation inside the column becomes noticeably impaired once this damage occurs. Specifically, distillate yields and qualities, such as distillation temperatures, will change due to the loss of separation efficiency, causing negative outcomes.
Unlike in the previous example (pump failure), schedulers will now have to contend with the problem for some time (until the scheduled date of the next programmed refinery maintenance stop). Worse still, there is a considerable probability that the event may even go unnoticed by schedulers working under an I3 reality. This is because the typical symptom (loss of column efficiency and performance) may partially be offset/hindered by actions from advanced process control (APC) and/or real-time optimization (RTO) applications, particularly if the stream quality control of the side-cuts is poor. Therefore, monetary losses in this example may potentially become larger than in the previous example because essentially no action (by schedulers) may be implemented to properly address the problem in an I3 reality.
In contrast, the efficient equipment self-diagnosis associated with I4 will reveal that the CDU performance has been undesirably changed. As a result, new crude-oil assays (constant parameters related to distillation yields as a function of crude-oil type and CDU operating mode) must be determined/estimated in silico (e.g., by using process simulators) and then loaded into the RPS solution to properly solve the crude-oil blend shop optimization problem. Once properly implemented, this action will allow the APCs and RTOs to play their original roles and collaborate efficiently with other automation applications toward finding a systemically new optimal operating point associated with the ‘new’ hardware. Distinctions between I3 and I4 for both examples are summarized in Table 2.
Challenges
We argue that RPS in an I4 reality brings academic and industrial challenges, which can be categorized into four main topics (Table 3).
Information technology and the Internet
In terms of IT, things have improved since 2007, most notably with the Cloud (big data and servers), which will be responsible for data storage, cleaning, mining, high-performance computing, remote support and other services that bridge the industrial network and the application layers. Cloud computing is now mainstream, which makes leveraging it in industry much easier and more secure (discussed in
Li et al., 2015). This is an essential aspect for I4 because the integrated optimization of the whole system beyond the refinery limits remains a computationally intractable problem (
Grossmann, 2005). However, a major ongoing challenge is to find IT resources and proper technologies to support the transformation of industrial systems in the information age; it is like changing the tire while the car is moving. For instance, cybersecurity and big data are critical issues that require very experienced industrial transformation specialists with interdisciplinary skills and subject matter domain knowledge.
The role of IT in general—and the Internet in particular—in intelligent industrial systems can be even more crucial in some instances, such as the Brazilian one. With occasional exceptions (e.g., the automobile and electronics sectors), Brazilian industry is still at the level of Industry 2.0 (I2). Recent discussions carried out by FIESP (Federation of Industries of the State of São Paulo) indicated that I4 is possible in Brazil, even starting from an I2 baseline. However, it is necessary to look for routes that will materialize short-term results while the Brazilian industrial park completes its technological transition to a condition closer to that idealized for I4. In this sense, a major recommendation from consultants has been “Just use the Internet.” In other words, the main challenge for smart manufacturing in Brazil is to seek direct communication between suppliers and consumers. “Do not think about products, think about services” is the mantra that best summarizes the understanding of what advanced manufacturing should look for in such a reality.
Mathematical techniques
Refinery modeling
A first challenge lies in the fact that an oil refinery is an open system. It maintains itself in a continuous inflow and outflow, a building-up and breaking-down of its material components. The same final state may be reached from different initial conditions and in different ways (the principle of equifinality). As Horgan highlights, for open systems “our knowledge of them is always partial, approximate, at best” (
Horgan, 1995). Open systems require input parameters, mathematical structures and assumptions that are not fully known. Even when measurements are available, they are never available for all model elements, and they always contain some level of uncertainty and inconsistency. In addition, an oil refinery is also a complex system. Decomposing the system and analyzing subparts does not necessarily give a clue as to the behavior of the whole (
Ottino, 2011). In this case, reductionist and deterministic attempts fail to provide an explanation because complex engineered systems are characterized by emergent properties that typically appear as a result of nonlinear interactions among the system components. This may be a particularly problematic issue at the microscopic level (e.g., reactive processes). Hence, the biggest challenge is to capture the network collectively in a single model to compute the emergent behavior, which, we argue, may impact the optimal solution. However, the current state-of-the-art technology in scheduling simulators—especially spreadsheets—does not consider the flowsheet as a whole and thus collectively misses substantial profit and performance opportunities. Similarly, advanced controllers (e.g., multivariable model predictive control, MPC), such as DMCplus, RMPCT, PACE/SMOCpro, etc., do not consider the flowsheet explicitly; they only consider controlled (dependent) and manipulated (independent) variables.
A second challenge is to incorporate the real-world feedback of ongoing changes into the virtual environment; this is what we refer to as the “parameter feedback” addressed at FOCAPO 2008 (Kelly and Zyngier, 2008c). As the models are always improved with gain and bias updating, this is a crucial aspect for a successful RPS project in an I4 reality, where online rescheduling (
Zhang et al., 2015;
Gupta et al., 2016) is expected to occur. Here, data reconciliation emerges as a critical issue, since it is instrumental to a) validate the material balances for consistency, b) regress/fit/calibrate model parameters based on plant measurement feedback, and c) verify and validate the system’s measurements. However, we are not properly leveraging the power of data reconciliation and regression to manage the past/present rolling horizon. Instead, many consider state estimation (Kalman filtering) to be the way forward, but it does not handle nonlinearities properly. In fact, state estimation is just a subset of data reconciliation, which has been proven effective (
Kelly and Zyngier, 2008a).
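As a minimal illustration of the material-balance validation step above, the following sketch reconciles raw flow measurements by weighted least squares subject to a linear balance; the one-node topology, measurement values and variances are assumed purely for illustration.

```python
# Minimal steady-state data reconciliation sketch: weighted least squares
# subject to a linear material balance. Topology and values are illustrative.
import numpy as np

# One mixing/splitting node: feed - product1 - product2 = 0  ->  A @ x = 0
A = np.array([[1.0, -1.0, -1.0]])

y = np.array([100.0, 58.0, 45.0])      # raw measurements (58 + 45 != 100)
sigma = np.array([2.0, 1.0, 1.5])      # measurement standard deviations
V = np.diag(sigma ** 2)                # measurement covariance

# Closed-form reconciled estimate: x = y - V A^T (A V A^T)^-1 A y
x = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)

print("reconciled flows:", np.round(x, 2))
print("balance residual:", float(A @ x))   # ~0: the balance now closes
```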
Here, current industrial challenges involve a) the integration of real-time scheduling optimization with feedback and hybrid model predictive control, and b) minimizing timeliness, which can be defined as the time between when data are expected and when they are readily available for use (
Loshin, 2011). A recent project at a Canadian iron-ore processing plant (Rio Tinto) exemplifies this crossover between real-time scheduling and hybrid model predictive control in an application called “Smart Sweeping.” Discretely controlling the positions of shuttle conveyors dumping crushed ore onto multiple stockpiles that feed several grinding mills, while simultaneously controlling the stockpiles’ continuous holdup/inventory profiles using new industrial radar level-sensing devices, is an excellent example of nascent Industry 4.0 technology and concepts at work.
Algorithms
Decision-making processes concerning continuous variables (e.g., to model in-line blending units and conversion units) should gain increasing attention in the RPS community due to the increasing need to produce clean fuels from the processing of poor crude-oils. Nonlinear optimization using, for instance, sequential quadratic programming (SQP), the generalized reduced gradient (GRG2) algorithm or the augmented Lagrangian method does not work in practice for large-scale planning and scheduling models with an indefinite Hessian (diagonally dominant quadratics). On the other hand, successive linear programming (SLP) works very well and is industry-proven in planning optimization. XPRESS-SLP in Spiral Plan and the homegrown solvers in PIMS, GRTMPS and RPMS are illustrative examples. Therefore, some researchers have invested substantially in developing SLP algorithms that can be called using any commercial or community-based LP and QP solver, and not just XPRESS for XPRESS-SLP and LINDO for LINDO-SLP.
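For illustration only, the loop below sketches the basic SLP idea on a toy nonlinear problem: linearize the nonlinear constraint at the current point, solve an LP within a step (trust-region) bound, and iterate. It uses SciPy's LP solver and made-up data; it is in no way representative of how XPRESS-SLP or the solvers embedded in PIMS, GRTMPS or RPMS are implemented.

```python
# Conceptual SLP loop on a toy problem: linearize the nonlinear constraint at
# the current point, solve an LP within a step (trust-region) bound, iterate.
import numpy as np
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2   s.t.   x1*x2 <= 4,   0 <= x1, x2 <= 3
c = np.array([-3.0, -2.0])               # linprog minimizes, so negate
x = np.array([1.0, 1.0])                 # feasible starting point
step = 1.0                               # step (trust-region) bound

def g(x):      return x[0] * x[1] - 4.0  # nonlinear constraint, g(x) <= 0
def grad_g(x): return np.array([x[1], x[0]])

for _ in range(50):
    # Linearized constraint: g(xk) + grad_g(xk) @ (x - xk) <= 0
    A_ub = grad_g(x).reshape(1, -1)
    b_ub = np.array([grad_g(x) @ x - g(x)])
    bounds = [(max(0.0, xi - step), min(3.0, xi + step)) for xi in x]
    x_new = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs").x
    if g(x_new) > 1e-6:                  # true constraint violated: shrink step, retry
        step *= 0.5
        continue
    if np.linalg.norm(x_new - x) < 1e-6: # converged
        break
    x = x_new

print("SLP point:", np.round(x, 3), "objective:", round(3 * x[0] + 2 * x[1], 3))
```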
MILP technology, which is well suited to representing typical on-off operations in RPS (e.g., resource selection and sequencing), is not being leveraged properly. Scheduling simulators currently relegate most (or even all) logic and logistics decisions to the scheduler, and MPC only solves for continuous variables. MILP can—and should—be considered within nondeterministic approaches, such as predictive modeling. Predictive modeling (identification and estimation) represents underlying relationships in historical data to explain the data and make predictions, forecasts or classifications regarding future events. By combining distinct methods such as statistical analysis, historical review, pattern recognition, mathematical programming techniques, risk analysis, deep learning, neural networks, machine learning and other artificial intelligence techniques, its promise is to solve the whole RPS problem, which remains intractable by deterministic optimization methods.
Refining business
To impact the profitability of their existing installations, refiners need to adopt more phenomenological approaches (e.g., molecular management of refinery operations and petroleomics (
Wu and Zhang, 2010)). For instance, one can use crude-oil composition tracking to predict the yields and properties of key distillate streams using the assay-data and the fractionator model updated with feedback from the measurement system. In this sense, initial forays can be traced back to 1995 with pipeline-to-tank and tank-to-CDU movements being logged directly in the Honeywell TDC3000 DCS at Exxon Canada’s Nanticoke refinery. In Brazil, Petrobras has invested millions of dollars since the 2000s in the difficult task of developing their own integrated solutions for monitoring and optimizing the refinery logistics in real time (the GOMM project (
Moro and Zanin, 2014)). However, despite the monumental efforts made by these companies, the challenge has not yet been completely overcome. Other oil companies, such as ExxonMobil, have invested considerable effort in so-called “molecular management,” which also justified RPS projects vis-à-vis Aspentech's scheduling solution—though we lack information regarding whether they used crude-oil composition tracking/tracing results as a data feed to their scheduling and distillation-unit advanced controls.
Today, many APC vendors are struggling with determining how to control distillation and fractionation units given the continuously changing crude-oil diet/slate (i.e., “crude-switching”). This issue is even more important for optimizing the crude-oil feed blend shop when the crude-oil tanks are running-gauge tanks (simultaneous input and output flows), which makes composition tracking even more challenging, as one needs to integrate at least every 1 min to improve accuracy. We argue that the actual opportunity here is to track the crude-oil compositions every hour by, for example, integrating every minute, especially for running-gauge tanks in an automated fashion while also suggesting which crude-oil tanks have active transfers or movements in and out. Then, take the assays with effective cut point temperatures and predict compositional and property information for the various refinery streams (e.g., hydrotreating feed total/reactive sulfur). If there is a laboratory measurement for it, then y_measured = gain * y_model + bias can be used, where past routine operating data (what we call the “past rolling horizon”) can be used to calibrate the gain and bias (
Kelly and Zyngier, 2017). If the crude-oil compositions can be accurately tracked and have reasonably good assays and laboratory data, then a very effective prediction of any product stream leaving the CDU/VDU in real-time can be made, which can then be used as feedforward/disturbance variables in model predictive controllers. We speculate that few, if any, refiners are doing this now, despite the increasing availability of supporting technology (e.g., PetroSIM by Yokogawa (KBC)).
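A stripped-down sketch of the two ingredients mentioned above—minute-wise integration of a running-gauge tank balance (assuming perfect mixing) and the gain/bias calibration of a model-predicted property against laboratory data—is shown below; all numbers are illustrative, not plant data.

```python
# Sketch of crude composition tracking for a running-gauge tank (perfect mixing
# assumed) plus gain/bias calibration of a model-predicted property against
# laboratory data. All numbers are illustrative.
import numpy as np

def track_tank(v0, x0, f_in, x_in, f_out, dt_min=1.0):
    """Integrate tank volume and crude composition every dt_min minutes."""
    v, x = float(v0), np.asarray(x0, dtype=float)
    for fi, xi, fo in zip(f_in, x_in, f_out):
        dt = dt_min / 60.0                                  # hours
        v_new = v + (fi - fo) * dt
        # perfectly mixed tank: outflow leaves at the current tank composition
        x = (v * x + fi * dt * np.asarray(xi) - fo * dt * x) / max(v_new, 1e-9)
        v = v_new
    return v, x

# one hour of simultaneous receipt (crude B) and rundown to the CDU
n = 60
v, x = track_tank(v0=40.0, x0=[0.7, 0.3],                   # kbbl, fractions of crudes A/B
                  f_in=[6.0] * n, x_in=[[0.0, 1.0]] * n,    # receive pure crude B
                  f_out=[4.0] * n)                          # charge the CDU
print(f"after 1 h: {v:.1f} kbbl, composition A/B = {np.round(x, 3)}")

# y_measured = gain * y_model + bias, calibrated on the past rolling horizon
y_model = np.array([0.52, 0.55, 0.61, 0.58])                # model predictions (e.g., sulfur wt%)
y_lab = np.array([0.55, 0.57, 0.65, 0.60])                  # matching laboratory results
gain, bias = np.polyfit(y_model, y_lab, 1)
print(f"gain = {gain:.2f}, bias = {bias:.3f}")
```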
However, the greatest challenge with better crude-oil feed composition tracking, specifically, is the ability to log, in real-time, the movements of crude-oil deliveries from pipelines/marine vessels, tank-to-tank transfers, tank-to-blender and tank-to-CDU. Therefore, searching for crude-oil composition tracking as a “soft-sensor” remains an engineering aim to be pursued in an I4 reality. It may be posed as an excellent example of an I4 and Analytics 3.0 application. This is because it requires a network model/cyber-physical system and is for predictive analytics applied to achieve better crude-oil feed control, optimization and scheduling, i.e., feedforward / anticipatory control.
Human resources
Even more than a revolution in the industrial environment, I4 is a revolution for our society and the way it is hierarchically structured. From a business management standpoint, past paradigms and beliefs must be reassessed at all levels of the organization before migrating manufacturing to a new reality. I4 will represent a special challenge for managers. Here, technical education has proven to be the best catalyst for promoting the required ‘cultural revolution’ in the industrial environment in the automation era of I3 (
Joly et al., 2015).
Previously involved only in “Project Management,” many managers expect the technical staff/consultants/contractors to be solely responsible for implementing new solutions inside their organization. What some managers do not understand is that business re-engineering is as much a management job as it is an engineering and operations job. While the severity of this aspect may vary from company to company, it is likely to be more pronounced in state-owned companies, as we have witnessed in Brazil. Antiquated viewpoints that do not consider IT (and IT-related fields, e.g., artificial intelligence) as a core business area of an oil company must be changed. In the age of the Internet-of-Things, everything is connected to everything else.
By connecting and integrating traditional industries to promote flexibility, adaptability, and efficiency and increase effective communication between producers and consumers, the objective of I4 is to ensure high product quality at a minimal price. Therefore, any discipline, field or domain that enables a company to reach this lofty goal must also be understood as a core business area. Refinery production planning and scheduling may be posed as an example (
Joly, 2012). Considering this, work processes inside the entire organization must be reassessed to meet the needs of an integrated business’ core areas in a timely manner. Better data and better decisions equal better business. If managers do not understand this, then engineers and operators will not care either (or conflicts among them will result). A deplorable example comes from a Brazilian refinery: to satisfy a corporate KPI related to the number of refinery employees, refinery managers halted work on data reconciliation. Although this peculiar example may be considered an aberrant case, data reconciliation is another subject that deserves much more attention than it has often received from many oil refiners and others involved in production, yield and loss accounting.
Conclusions
In this report, we have sought to show that the best operationalization of RPS in an I4 reality does not depend strictly upon having good scheduling software. Rather, the optimal prediction of future operations will depend on the CPS. The prerequisite for success now lies in having intelligent data management, analytics and computational capabilities that form the cyber space. In other words, the approach has changed: its philosophical basis has evolved from a reductionist to a holistic viewpoint of the problem (Fig. 1).
Therefore, more than any traditional off-the-shelf RPS solution available today, flexible and integrative specialized modeling platforms (as embedded systems) will be increasingly necessary to produce a truly optimized production schedule. Having proper working processes, highly educated personnel, coherent and comprehensive KPI models, and, of course, bright managers with open minds to find new solutions and test new technologies are the complementary requirements.
At the proposed endpoint, schedulers will evolve from mere users or operators of software packages into technically qualified modelers educated to develop optimization models and to improve solution strategies on their own. Additionally, schedulers will have a high profile in the I4 environment by decisively contributing to the design of novel systems and integrative working processes from the Aristotelian perspective of the whole. As a true organism coexisting with its environment, the Refinery 4.0 is more than the sum of its parts.