• Comparison of single-events microkinetic and continuous lumping hydrocracking models.
• The continuous lumping model is more accurate for simulating the yield structure.
• The single-events model provides detailed kinetic data.
• Conversion and cracking/isomerization product distributions are traced.
Development of models for industrial hydrocrackers has received a great deal of attention from the scientific community over the past decades. Two fundamentally different modelling approaches are compared in this paper: a continuous lumping model with three families (paraffins, naphthenes, and aromatics) and a single events microkinetic model. The aim is to demonstrate the differences in the capabilities of the two modelling frameworks. Both models are capable of simulating experimental data from hydrocracking of a pre-treated Vacuum Gas Oil in a pilot plant at industrial conditions. The continuous lumping model provides better predictions of the macroscopic effluent characteristics, such as the yield structure and the PNA (Paraffin, Naphthene, Aromatic) distribution in the middle distillate cut. It requires only the feed SIMDIS (Simulated Distillation) and PNA composition to be known. The single events model, on the other hand, provides information which is not available in a simple continuous lumping model. An analysis of the reaction kinetics of paraffins and mono-naphthenes is performed to demonstrate this aspect. The single events model is far more complex and computationally expensive than the continuous lumping model. In conclusion, the two approaches should be considered complementary rather than competitive. Used in conjunction, they can balance the drawbacks of each individual modelling approach.
• Main problem: how to quantify uncertainties in model predictions.
• Sobol indices quantify the relationship between inputs and the output.
• Sobol indices identify non-influential inputs.
• Sobol indices show a strong impact of resins and a low impact of ppH2S.
• We apply Sobol indices to HDN and HDS reactions.
This work applies sensitivity analysis to a kinetic model of hydrotreating processes. The proposed approach is subdivided into several steps. The first step is to build the kinetic model. The second step is to develop a simplified model (meta-model) on which inexpensive calculations can be performed in the next step. The third step is to use the meta-model to estimate the influence of each input on the model output, without resorting to Monte Carlo methods, which are difficult to manage. Sensitivity analyses are very useful for model comparison and uncertainty estimation.
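To illustrate the third step, the first-order Sobol index of each input can be estimated from a cheap meta-model with a Saltelli-type pick-freeze scheme. The sketch below is not the hydrotreating meta-model from the paper: it uses a hypothetical two-input toy function in which input 1 dominates and input 2 has a weak effect (analogous to resins vs. ppH2S above).

```python
import numpy as np

# Hypothetical stand-in for the meta-model: a cheap function of two inputs.
# Input 1 dominates the output; input 2 has only a weak effect.
def metamodel(x):
    return np.sin(x[:, 0]) + 0.1 * x[:, 1] ** 2

rng = np.random.default_rng(0)
n = 200_000
A = rng.uniform(-np.pi, np.pi, size=(n, 2))  # two independent input samples
B = rng.uniform(-np.pi, np.pi, size=(n, 2))

f_A = metamodel(A)
f_B = metamodel(B)
var_y = f_A.var()

# Saltelli estimator of the first-order index S_i = V(E[Y|X_i]) / V(Y):
# replace column i of A with the corresponding column of B, re-evaluate,
# and correlate the change with f_B.
S = []
for i in range(2):
    A_Bi = A.copy()
    A_Bi[:, i] = B[:, i]
    S.append(np.mean(f_B * (metamodel(A_Bi) - f_A)) / var_y)

print(S)  # S[0] large, S[1] small: input 2 is nearly non-influential
```

An input with a first-order index near zero can be fixed at its nominal value, which is exactly how non-influential inputs such as ppH2S are screened out.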
This paper presents a diagnostic module developed by IFP and currently tested off-line on a FCC (Fluid Catalytic Cracking) pilot plant. The method uses four successive, complementary techniques, which make it possible to go step by step from the observations to a sentence in natural language describing the faults. First, a quantitative causal model is elaborated from a quantitative behavioural model; causality is obtained from the structure of each equation. Then, global and local alarms are generated using residuals (differences between measurements and model outputs) and fuzzy logic reasoning. Next, a hitting set algorithm is applied to determine sets of components or pieces of equipment suspected of abnormal behaviour. Finally, expert human operator knowledge about those components is used to identify the fault(s) and produce messages for the operators. The performance of the diagnostic module is illustrated on four practical scenarios of abnormal behaviour. This work is conducted as part of the EC-funded CHEM project.
This article presents the ASCO diagnostic system (Aide à la supervision et à la conduite des opérateurs) developed by IFP and tested off-line on a FCC (Fluid Catalytic Cracking) pilot plant. It relies on four successive, complementary modules. Starting from a set of observations, these modules provide the operators with a message indicating the fault and its consequences. The first module generates a quantitative causal model of normal process behaviour. The second module performs fault detection: it raises alarms from the observations. These alarms are then processed by the localization module (a hitting set algorithm), which produces a list of physical components suspected of being faulty. Finally, expert knowledge about these components is processed automatically by the identification module, which returns a message to the operator. This message describes the fault, the actions to take to handle the operation or the maintenance to perform, and the repercussions of the fault on the process. The results obtained are illustrated with four real scenarios of abnormal behaviour. This work was carried out as part of the European CHEM project.
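The localization step above rests on a hitting set computation: each alarm yields a conflict set of components that cannot all be healthy at once, and the suspects are the minimum-cardinality sets of components intersecting every conflict. A minimal brute-force sketch, with hypothetical component names (the real module's data structures are not described in the abstract):

```python
from itertools import combinations

def minimal_hitting_sets(conflicts):
    """Return all minimum-cardinality sets that intersect every conflict set."""
    components = sorted(set().union(*conflicts))
    for size in range(1, len(components) + 1):
        hits = [set(c) for c in combinations(components, size)
                if all(set(c) & conflict for conflict in conflicts)]
        if hits:
            return hits  # stop at the smallest size that works
    return []

# Illustrative conflicts: each set lists components that cannot all be
# healthy simultaneously, given the raised alarms.
conflicts = [{"valve", "sensor"}, {"sensor", "pump"}, {"valve", "pump"}]
suspects = minimal_hitting_sets(conflicts)
print(suspects)  # no single component hits all three conflicts, so pairs
```

Enumeration is exponential in the number of components; production diagnosis engines use dedicated algorithms (e.g. Reiter's HS-tree) instead, but the input/output contract is the same.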
High Throughput Experimentation (HTE) is a rapidly expanding field. However, the productivity gains obtained via the synthesis or parallel testing of catalysts may be lost due to poor data management (numerous manual inputs, information that is difficult to access, etc.). A global framework has therefore been developed. It integrates the HTE pilot plants into the global information system and provides dedicated computer tools offering spectacular time savings in the operation of HTE units, information storage and the rapid extraction of relevant information. To optimize the productivity of engineers, Excel has been included in the system, with specific features added so that it can be treated as an industrial tool (development and updating of additional modules, etc.). The success of the information system is largely due to the chosen development method. An Agile method (Agile Alliance (2012) http://www.agilealliance.org/the-alliance/) was chosen since close collaboration between the computer specialists and the chemical engineers is essential. Rather than a global and precise description of the framework, which would be long and tedious, the framework is presented through 3 examples: scheduling experiments applied to zeolite synthesis; data management (storage and access); real application to pilot plants: dedicated interfaces to operate and supervise HTE pilot plants, and comparison of test runs from several pilot plants.
High Throughput Experimentation (HTE, French: Expérimentation Haut Débit, EHD) is a rapidly expanding field. However, the productivity gains obtained via the synthesis or parallel testing of catalysts can be wiped out by poor data management (numerous manual inputs, difficult access to information, etc.). In this paper, a new architecture integrating the HTE units into a global information system is presented. Dedicated computer tools have been developed; they provide spectacular time savings in the operation of HTE units, data storage and the rapid extraction of relevant information. The approach adopted was guided by an Agile method (Agile Alliance (2012) http://www.agilealliance.org/the-alliance/) based on very close collaboration between the chemists and the computer specialists. Excel, the chemists' main tool, has been placed at the core of the information system, with bidirectional links (input/output) to the databases and the various pilot units. Rather than a global description of the information system, which would be long and tedious, the framework is presented through 3 main examples: experiment scheduling using production management tools; optimized data management (storage, queries) and data analysis; application examples on pilot units: development of dedicated interfaces for operating and monitoring the units and for exploiting data across multiple runs and multiple pilot units.
In order to deal with the complexity of the diagnosis of FCC pilot plants, several modelling approaches were developed, combined and tested on-line. Two causal modelling approaches were investigated, based on control loop analysis and on the detailed equations describing the behaviour of the process. These models are used online to detect faults on process variables. Information on the components of the system allows faults on physical components to be isolated. Then, using expert knowledge, information is given to the operator. This paper details the different kinds of models, their use in the diagnosis module and a case study on the IFP FCC pilot plant. This work is conducted as part of the CHEM project.
Error‐in‐variables model (EVM) methods are used for parameter estimation when independent variables are uncertain. During EVM parameter estimation, output measurement variances are required as ...weighting factors in the objective function. These variances can be estimated based on data from replicate experiments. However, conducting replicates is complicated when independent variables are uncertain. Instead, pseudo‐replicate runs may be performed where the target values of inputs for repeated runs are the same, but the true input values may be different. Here, we propose a method to estimate output‐measurement variances for use in multivariate EVM estimation problems, based on pseudo‐replicate data. We also propose a bootstrap technique for quantifying uncertainties in resulting parameter estimates and model predictions. The methods are illustrated using a case study involving n‐hexane hydroisomerization in a well‐mixed reactor. Case‐study results reveal that assumptions about input uncertainties can have important influences on parameter estimates, model predictions and their confidence intervals.
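The pseudo-replicate variance estimate and the residual bootstrap can be sketched on a toy single-parameter model. The hexane hydroisomerization model itself is not reproduced here; the rate constant, noise levels and replicate layout below are all illustrative, and with a single output and a common variance the weighted fit collapses to ordinary least squares (a full EVM fit would also estimate the true input values).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear rate model y = k * x with uncertain inputs.
k_true = 2.0
x_target = np.repeat(np.array([1.0, 2.0, 3.0]), 5)      # 5 pseudo-replicates per setting
x_true = x_target + rng.normal(0, 0.05, x_target.size)  # true inputs drift from targets
y_obs = k_true * x_true + rng.normal(0, 0.10, x_target.size)

# Output variance pooled over pseudo-replicates sharing a target setting;
# it absorbs both output noise and the input uncertainty.
var_y = np.mean([y_obs[x_target == t].var(ddof=1) for t in np.unique(x_target)])

def fit_k(x, y):
    # WLS with a single common variance reduces to OLS through the origin.
    return np.sum(x * y) / np.sum(x * x)

k_hat = fit_k(x_target, y_obs)

# Residual bootstrap: resample residuals to quantify parameter uncertainty.
resid = y_obs - k_hat * x_target
boots = [fit_k(x_target, k_hat * x_target + rng.choice(resid, resid.size, replace=True))
         for _ in range(2000)]
ci = np.percentile(boots, [2.5, 97.5])
print(k_hat, ci)  # point estimate near k_true, with a bootstrap interval
```

Re-running the sketch with a larger input standard deviation widens the interval, which mirrors the case-study finding that assumptions about input uncertainties influence parameter estimates and their confidence intervals.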