What is "inferential control"? Some say it is the type of
control scheme which tries to control product qualities without an on-stream
analyzer. Traditionally people implemented inferential models measuring
simple signals, such as a tray temperature on a column, in an attempt
to avoid analyzers. The premise is: if we are able to maintain that tray
temperature at a constant value, then the product quality is constant.
Inferential controls configured this way have had mixed success. In a
typical distillation column, product quality is sensitive to tray temperature,
but it also changes in response to other operational variables. No inferential
controller can be very successful unless it models the influence of all
pertinent variables.
To improve these simple-minded inferential controls, industry has started
to employ multivariable models, where the effect of each variable is
determined by linear regression analysis. This enhancement works very
well in conjunction with analyzers. The linear regression predictor
is imprecise, but it gets much of the control response timed correctly,
and analyzer feedback then corrects the residual imprecision. Still, this
approach cannot substitute inferential controls for analyzers, because
the correlations are empirical and their range is limited. They tend to
drift with operational changes and require frequent calibration, often
once a day or even once a shift.
Along came the simulation industry with an opposite approach: "Tell
me the feed qualities and main control actions, and I will tell you the
product qualities". Present day computers can do wonders in simulating
equipment behavior. In addition to predicting the product qualities they
can compute the complete set of instrument readings on the unit. However
it turns out that in a complex oil refinery situation, people do not know
the feed qualities precisely. With imprecise knowledge of the feed, many
of the measurements predicted by simulation do not agree with actual plant
measurements. The quality predictions are then suspect.
What then is a reasonable inferential controller? This paper proposes
an approach which adheres to four principles:
- The correlations must be based on fundamental laws or at least on
established engineering procedures, such as API methods.
- All predicted plant variables have to agree with process measurements.
- Operator inputs, entered as numbers into the program, tend to be
unreliable in both timing and value; they are best avoided.
- There must be a simple way to calibrate the model.
Reliance on fundamental physical laws gives the models a certain solidity.
As a minimum, the models are valid over a wide range of operation and
require little or no tuning. Once tuning is established, these models do
not tend to drift. Further, the models are verifiable and tunable via a
few simple sets of steady state data.
The paper proceeds to describe four examples of working inferential
control schemes.
**SIMPLE DISTILLATION COLUMN MODELS**
A good many distillation columns have controls as shown in Figure 1.
This control configuration is commonly called "heat balance control with tray
temperature feedback". The tray temperature controller infers bottom
purity from tray temperature, and manipulates the reboiler to keep purity
(tray temperature) constant. Reflux policy is left for the operator, or
sometimes it is controlled in ratio to the feed or distillate.
The problem is obvious. In addition to tray temperature there are two
other variables which affect bottom purity: pressure and V/L (vapor to
liquid) ratio. In multicomponent distillation columns there are additional
disturbance variables associated with light-light keys and heavy-heavy
keys. We need a model to predict how product quality varies with all process
variables, and how to correct the tray temperature to keep quality at
target. If we can come up with such a model, we could place it as shown
in Figure 2, manipulating the tray temperature controller to keep the
composition truly constant.
It turns out that the knowledge for developing inferential models for
this class of distillation problems is documented in the open literature
and has been there for a long time. Our model is based on discoveries
now more than fifty years old:
1. Clausius Clapeyron type equation for vapor pressures of the two components
at the tray temperature.
VP(i) = EXP[A(i) - B(i)/TT] [1]
VP(i) = Vapor pressure of component i
A(i), B(i) = Known coefficients of the Clausius Clapeyron equation for
component i.
TT = Tray temperature in absolute units
2. Raoult's (or Henry's) law for volatilities of the two components at
the tray temperature and pressure.
K(i) = VP(i) / P [2]
K(i) = volatility of component i.
P = Tray pressure in absolute units.
3. Colburn equations (Ind. Eng. Chem. 33, 459 (1941)) for the effect
of internal reflux.
YN(i)/XB(i) = [Z(i)-1] * [K(i)-1] / [U(i)-1] + 1 [3]
U(i) = K(i) * (V/L) [4]
Z(i) = U(i)**N [5]
XB(i) = mol fraction of component i in the bottom product
N = the number of theoretical trays up to and including the tray whose
temperature is measured.
YN(i) = mol fraction of component i in the vapor above tray N.
K(i) = average volatility of component i in the section below the tray
of interest.
(V/L) = vapor to liquid mol ratio on the tray of interest.
4. A calibration procedure adjusting the tray efficiency to match the
model to laboratory readings.
Considering model calibration, it is best to use the least known value,
in our case tray efficiency, as a knob for getting the best fit between
model and laboratory measurements. Equation 5 does not require N to be
an integer number, and so the adjustment is continuous. To estimate the
number of theoretical trays we need at least one good set of measurements:
tray temperature and pressure, a calculation of V/L from process measurements
around the column, and a laboratory measurement of the bottom composition.
Given that we are dealing with an accurate theoretical model, three or
four sets of values suffice for calibrating the model with certainty.
The application is configured as a computer control scheme as shown in
Figure 2. The calculation procedure includes perhaps 50 Fortran statements,
executed once per minute.
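The calculation above can be sketched for a binary column. The Clausius-Clapeyron coefficients, operating conditions and tray count below are hypothetical, chosen only to illustrate equations 1 through 5; the bisection search on N mirrors the continuous tray-efficiency adjustment used for calibration.

```python
import math

def colburn_ratio(A, B, TT, P, VL, N):
    """Ratio YN/XB for one component from equations 1-5.
    A, B: Clausius-Clapeyron coefficients; TT: tray temperature (absolute);
    P: tray pressure (absolute); VL: vapor-to-liquid mol ratio;
    N: number of theoretical trays (need not be an integer)."""
    VP = math.exp(A - B / TT)       # eq. 1: vapor pressure
    K = VP / P                      # eq. 2: volatility by Raoult's law
    U = K * VL                      # eq. 4 (assumes U != 1)
    Z = U ** N                      # eq. 5
    return (Z - 1.0) * (K - 1.0) / (U - 1.0) + 1.0   # eq. 3

def bottoms_fraction(light, heavy, TT, P, VL, N):
    """Bottoms mol fraction of the light component in a binary mixture,
    using the closures XB(1)+XB(2)=1 and YN(1)+YN(2)=1."""
    r1 = colburn_ratio(*light, TT, P, VL, N)
    r2 = colburn_ratio(*heavy, TT, P, VL, N)
    return (1.0 - r2) / (r1 - r2)

def calibrate_trays(light, heavy, TT, P, VL, xb_lab, n_lo=1.0, n_hi=30.0):
    """Adjust the continuous theoretical tray count N by bisection until
    the model reproduces a laboratory bottoms analysis xb_lab."""
    f = lambda n: bottoms_fraction(light, heavy, TT, P, VL, n) - xb_lab
    if f(n_lo) * f(n_hi) > 0.0:
        raise ValueError("lab value not bracketed; widen the N range")
    for _ in range(60):
        mid = 0.5 * (n_lo + n_hi)
        if f(n_lo) * f(mid) <= 0.0:
            n_hi = mid
        else:
            n_lo = mid
    return 0.5 * (n_lo + n_hi)

# Demonstration with invented coefficients and conditions.
light = (10.0, 3000.0)   # hypothetical A, B for the light component
heavy = (10.0, 3800.0)   # hypothetical A, B for the heavy component
xb = bottoms_fraction(light, heavy, 350.0, 1.0, 1.2, 5.0)
n = calibrate_trays(light, heavy, 350.0, 1.0, 1.2, xb)
```

With one good data set (tray temperature, pressure, V/L and a lab bottoms analysis) the routine recovers N; the additional data sets mentioned above would confirm the fit.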
**NAPHTHA CUTPOINT CALCULATION AND CONTROL**
Figure 3 shows our second inferential control problem: Predicting and
controlling naphtha (the top product) TBP (True Boiling Point) cutpoint
on fractionators. In the petroleum industry this is an important problem
to solve. In terms of abundance, almost every major unit in a refinery
has a fractionator for product separation. In terms of economics, there
are large differences in product prices and incorrect product cutting
is costly.
Traditionally the control method of choice has been a column top temperature
controller, manipulating reflux flow into the column. Column top temperature
is indicative of the dew point of naphtha, and hence is related to the
cutpoint. However, the column top temperature is sensitive to partial
pressure variations and temperature control alone cannot work well without
analyzer feedback. The contribution of this inferential model is partial
pressure correction which permits prediction of the cutpoint with accuracy
of about 3 °F. This accuracy makes it possible to employ the model
in lieu of an analyzer.
One difficulty in estimating the partial pressure is the prediction of
how much LPG is dissolved in naphtha and reflux, and how much of the light
naphtha is evaporated into off gas. The overhead drum equilibrium changes
with operating pressure and weather, and we need a fairly elaborate flash
calculation at overhead drum conditions.
The fractionator top inferential model incorporates principles which
are similar to those of the distillation column model of the previous
section, and in addition it makes use of certain API (American Petroleum
Institute) procedures for dealing with a continuous boiling curve rather
than discrete components.
1. Clausius Clapeyron type equation for vapor pressures of the components
at overhead drum temperature, as shown in equation 1.
2. Raoult's (or Henry's) law for volatilities of the components at
the overhead drum temperature and pressure, as shown in equation 2.
3. A flash calculation model for the overhead drum taking into account
four pseudo components: Heavy naphtha, light naphtha, LPG and gas. This
model is for predicting evaporation of light naphtha into tail gas and
absorption of LPG in naphtha product and reflux.
4. Correction of boiling temperature from measured to atmospheric pressure.
This is also a form of Clausius Clapeyron correction.
TA = TM * [C1 - C2 * LN(PM)] / [C1 - TVA * LN(PM)] [6]
TA = Boiling temperature at atmospheric pressure in °K.
TM = Measured temperature in °K.
C1, C2 = Known constants.
PM = Measured pressure (or partial pressure) in absolute atmospheres.
LN(PM) = Natural log of PM.
5. The established API procedure for converting between TBP and EFV (Equilibrium
Flash Evaporation) curves. Our model treats heavy naphtha as a continuous
boiling curve material, whereas all other components (LPG, gas, steam and
light naphtha) are treated as having discrete boiling points.
6. The calibration mechanism is based on changing one constant in the
EFV to TBP conversion procedure.
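The drum flash of step 3 can be sketched as a standard four-pseudo-component Rachford-Rice calculation. The K-values and feed split below are hypothetical; in the actual model they would come from equations 1 and 2 at the measured drum temperature and pressure.

```python
def rachford_rice(z, K, iters=100):
    """Vapor mol fraction V/F from the Rachford-Rice equation
    sum z_i*(K_i - 1)/(1 + (V/F)*(K_i - 1)) = 0, solved by bisection."""
    def f(vf):
        return sum(zi * (Ki - 1.0) / (1.0 + vf * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 1e-9, 1.0 - 1e-9
    if f(lo) < 0.0:
        return 0.0                      # feed is all liquid
    if f(hi) > 0.0:
        return 1.0                      # feed is all vapor
    for _ in range(iters):              # f decreases monotonically in V/F
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def drum_flash(z, K):
    """Liquid (x) and vapor (y) compositions at the overhead drum.
    x shows LPG absorbed into naphtha and reflux; y shows light
    naphtha evaporated into off gas."""
    vf = rachford_rice(z, K)
    x = [zi / (1.0 + vf * (Ki - 1.0)) for zi, Ki in zip(z, K)]
    y = [Ki * xi for Ki, xi in zip(K, x)]
    return vf, x, y

# Hypothetical drum conditions for the four pseudo components:
# gas, LPG, light naphtha, heavy naphtha.
K = [50.0, 5.0, 0.8, 0.05]
z = [0.05, 0.10, 0.35, 0.50]
vf, x, y = drum_flash(z, K)
```

As the drum pressure or weather changes the K-values, the same routine re-predicts the equilibrium split, which is what feeds the partial pressure correction.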
The inferential model determines a precise column top temperature control
target and manipulates the top temperature controller to reach that
target, as shown in Figure 3. This controller has been implemented on many
fractionator columns without any analyzer feedback.
**VISBREAKING UNIT CONVERSION CALCULATION AND CONTROL**
Visbreaking is the process of mild thermal cracking of oil residues.
The process takes place first in a furnace coil and then in a soaker drum
as shown in Figure 4. It is important to predict and control the extent
of reaction of the visbreaker. At too high a rate there is excessive coke
laydown on the furnace coils and drum. At too low a rate operational profits
suffer. In terms of abundance, visbreaking units are in operation mostly
outside of the US. Still, visbreaking reactions occur not only in visbreakers
but also in other units, as an unwanted side reaction. A model predicting coke
laydown is useful for optimizing the operation of these units.
Our model based visbreaker control relies on simple reaction kinetics
principles plus flash calculations:
1. The extent of reaction is predicted from a time - temperature history
model integrated over the furnace and soaker:
dX = EXP[K1 * T - K2] * dt [7]
X = Extent of reaction
dX = Delta change in extent of reaction per unit length of furnace coil
dt = Delta residence time inside the unit length of coil
K1, K2 = Known coefficients.
T = Temperature in the middle of the unit length of coil.
2. A bulk density model based on flash calculation at coil conditions.
Bulk density is necessary for the calculation of dt of equation 7. The
components participating in the flash model are known as a function of X,
and their volatilities are known by equations 1 and 2.
3. After performing flash and density calculations, dt is estimated as:
dt = dV * Density / Feed [8]
dV = Volume of coil bound within the unit length of interest
Feed = Mass flow of the unit feed
4. While the extent of reaction can be calculated via equation 7, the
products of reaction are separated in a fractionator, and the extent of
reaction can also be computed from temperature and flow measurements around
the fractionator. This gives us a feedback mechanism to correct for the
crackability of the feed, i.e., to update K2 of equation 7.
5. A model for predicting coke laydown from extent of reaction and feed
characteristics. This part of the model is proprietary and details cannot
be given.
6. The model is calibrated by a measurement of asphaltene solubility
of the product. Asphaltene solubility is related to coking tendency.
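Steps 1 through 3 amount to a marching integration along the coil. The segment data and kinetic coefficients below are invented for the demonstration; a real application would take temperatures from coil thermocouples and densities from the flash model.

```python
import math

def extent_of_reaction(temps, volumes, densities, feed, K1, K2):
    """Integrate equation 7 segment by segment along the furnace coil.
    temps:     mid-segment temperatures, absolute units
    volumes:   segment volumes dV
    densities: bulk density in each segment (from the flash model)
    feed:      mass flow of the unit feed
    K1, K2:    kinetic coefficients of equation 7"""
    X = 0.0
    for T, dV, rho in zip(temps, volumes, densities):
        dt = dV * rho / feed                 # eq. 8: segment residence time
        X += math.exp(K1 * T - K2) * dt      # eq. 7: incremental conversion
    return X

# Hypothetical ten-segment coil profile (temperatures in K) and
# invented coefficients K1, K2.
temps = [700.0 + 5.0 * i for i in range(10)]
volumes = [0.5] * 10
densities = [700.0] * 10
X = extent_of_reaction(temps, volumes, densities, 50.0, 0.03, 28.0)
X_hot = extent_of_reaction([t + 10.0 for t in temps], volumes,
                           densities, 50.0, 0.03, 28.0)
```

The strong exponential dependence on temperature is what makes the time-temperature history, rather than any single measurement, the right severity variable to hold constant.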
This visbreaking model has been put to the test in several visbreaking units.
In these initial implementations the part of the model which predicts coke
laydown was not yet available, and the model simply works to keep the
extent of reaction constant. This means that the allowable extent of reaction
needs to be updated every time a change in feed occurs. Still, it lends
significant help to the operator, because at constant feedstock the model
perfectly decouples the effects of changes in throughput, pressure, and
feed and soaker temperatures.
Future implementations of the visbreaker application will incorporate
coke laydown correlations. That will improve the feed forward prediction
and reduce the need to input product asphaltene content.
**FCC UNIT CONVERSION CALCULATION AND CONTROL**
Fluid catalytic crackers are built with reactor / regenerator configuration
as shown in Figure 5. Hot catalyst powder combines with heavy hydrocarbon
feed. Both flow through a riser into the reactor. Cracked hydrocarbon
vapor flows from the reactor into the fractionator, while spent catalyst
flows down back to the regenerator, where coke deposits are burnt off.
The kinetic equation for the reaction is theoretically known:
X / (1-X) = F1[Feed] * F2[Cat] * F3[Temp] * F4[RT] * F5[CCR] [9]
X = Fraction converted to gasoline: C5-400°F boiling range
F1 = Function of feed properties
F2 = Function of catalyst
F3 = Function of reactor temperature
F4 = Function of residence time in the riser (most of the reaction takes
place in the riser, not in the reactor)
F5 = Function of catalyst to feed ratio.
F3, F4 and F5 are well defined functions. While calculations of the riser
residence time and catalyst circulation rate from measurements around
the reactor / regenerator are not trivial, they are doable. The calculations
are based on heat balance, mass balance, and more specifically carbon
balance. On the other hand, F1 and F2 are nearly impossible to predict.
F2 has to do not only with fresh catalyst but also with rate of catalyst
contamination (function of feed), catalyst addition rate and regenerator
burning characteristics. F1 is even more difficult. Precise identification
of the feed requires extensive (and expensive) testing, and even then
some uncertainty remains.
Our on-line control modeling approach is to simplify equation 9 by assuming
that F1 and F2 do not change quickly:
X / (1-X) = K * F3[Temp] * F4[RT] * F5[CCR] [10]
Obviously our assumption of K being a constant is questionable. We only
expect it to be a constant during a steady operation. Once the feedstock
changes, F1 will change abruptly, causing a fast transient in K. Changes
in catalyst addition or deterioration rate will cause a slow transient
in K.
However this does not present a serious problem because K is measurable.
From fractionator flow and temperature data we can compute the conversion
X. F3, F4 and F5 are known, and so K can be calculated as:
K = X / {F3[Temp] * F4[RT] * F5[CCR] * (1-X)} [11]
This permits the use of certain control functions which are important for
optimized FCC operation.
1. When throughput changes, the riser residence time changes. Our model
of equation 10 supplies a precise feed forward mechanism for changing the
reactor temperature (and hence F3) to keep conversion constant. This is somewhat
tricky because temperature control is accomplished by manipulating catalyst
circulation, and circulation changes affect F4 and F5 in a secondary way.
However the model takes all that into account, and it keeps modifying
the reactor temperature until the target conversion is reached.
2. The catalyst circulation rate may change for a variety of reasons.
Catalyst circulation closes the heat balance, and an increase in feed
temperature, for example, will create a situation where less catalyst
is needed to heat the feed. Again here the model provides a mechanism
for manipulating the reactor temperature to keep conversion constant.
3. When feed characteristics change, we have a feedback mechanism, in
the way of equation 11, to update the model "constant" K, and
then to correct conversion back to its target.
4. When feed characteristics change it is typical that the conversion
target also changes. Our model provides a mechanism for manipulating the
reactor temperature to change conversion to its new target.
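Equations 10 and 11 together give a calibrate-then-move pattern that covers the four control functions above. The functional forms assumed below for F3, F4 and F5 (an Arrhenius term and two power laws) and all coefficients are hypothetical stand-ins; the paper states only that these functions are well defined.

```python
import math

# Hypothetical stand-ins for the well-defined functions F3, F4, F5.
def F3(T):   return math.exp(18.0 - 9000.0 / T)   # reactor temperature, absolute
def F4(rt):  return rt ** 0.6                      # riser residence time
def F5(ccr): return ccr ** 0.8                     # catalyst-to-feed ratio

def conversion(K, T, rt, ccr):
    """Equation 10 solved for X, from X/(1-X) = K*F3*F4*F5."""
    g = K * F3(T) * F4(rt) * F5(ccr)
    return g / (1.0 + g)

def update_K(X_meas, T, rt, ccr):
    """Equation 11: feedback update of the lumped 'constant' K from a
    conversion X_meas computed around the fractionator."""
    return X_meas / (F3(T) * F4(rt) * F5(ccr) * (1.0 - X_meas))

def temperature_for_target(K, X_target, rt, ccr, lo=700.0, hi=850.0):
    """Feed forward move: bisect on reactor temperature until equation 10
    hits the conversion target (conversion rises with temperature)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if conversion(K, mid, rt, ccr) < X_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Calibrate K from a measured conversion at 800 K, then raise the target.
K = update_K(0.70, 800.0, 3.0, 6.0)
T_new = temperature_for_target(K, 0.75, 3.0, 6.0)
```

In practice the temperature move is executed through the catalyst circulation loop, and the resulting changes in F4 and F5 are folded back into the same iteration, as the text describes.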
**CONCLUSIONS**
We have shown some examples of a modeling technique which is based on
fundamental physical principles. These models are powerful in several
ways:
1. They use engineering variables, such as heat duty or partial pressure.
They are easily understood by process engineers.
2. They are based on nonlinear physical laws. Their range of validity
is greater than that of linear regression models.
3. They guarantee no conflict between plant data and model results.
4. They require no operator inputs. All calculations are based on process
measurements.
5. In comparison to on-stream analyzers, models work better in some ways:
they do not have long measurement delays and they do not break down. In
one way models are inferior: well maintained analyzers can achieve
a slightly more accurate steady state reading.
6. Certain properties cannot be measured by analyzers at all. In those
cases the use of models is the only option. An example of a property which
cannot be measured by an on-stream analyzer is coke laydown tendency.
7. In a multivariable control environment the models work as inputs
to the dynamic controller. This captures the best of both worlds: precise
dynamic control together with improved nonlinear model accuracy.