E&P companies struggle with data quality, conformance, accuracy, and cross-system integrity, and the result is inaccurate reporting. There is no “single source of truth” for the full population of wells and their attributes. Worse, some attributes differ between systems — even the well name: one system may store “Murphy_25” while another stores “murphy-025”. These inconsistencies make it very difficult to produce reporting that spans multiple systems or spreadsheets.
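As a rough illustration, a small normalization routine can collapse such naming variants into one comparable key. The rules below (lowercasing, collapsing separators, stripping leading zeros) are assumptions for the sake of example, not a documented industry standard:

```python
import re

def normalize_well_name(name: str) -> str:
    """Normalize a well name so variants from different systems compare equal.

    Hypothetical rules for illustration only: lowercase, collapse
    underscores/hyphens/whitespace into one separator, and strip leading
    zeros from numeric segments ("025" -> "25").
    """
    name = name.strip().lower()
    # Treat underscores, hyphens, and runs of whitespace as one separator.
    name = re.sub(r"[\s_\-]+", "-", name)
    # Strip leading zeros from numeric segments.
    name = re.sub(r"\b0+(\d)", r"\1", name)
    return name

# Both spellings from the example above collapse to the same key:
normalize_well_name("Murphy_25")   # -> "murphy-25"
normalize_well_name("murphy-025")  # -> "murphy-25"
```

A real matching pipeline would pair a key like this with fuzzy matching and human review rather than relying on normalization alone.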
Pulling reports from disparate systems such as production accounting, ERP, well management, and forecasting can be a time-consuming process that yields inaccurate, incomplete, or hard-to-digest reports. Even when a report contains correct data, without a clear, easy-to-use visualization it can be hard to determine which actions should take priority.
The process of uploading new data to your systems can also be lengthy and error prone. If data is incorrect in the source system and no screening process ensures data accuracy, the same errors will carry through once the data is uploaded into your system.
E&P companies must implement a proper master data management solution to establish a single source of truth that integrates with their disparate systems. It can enable:
When performing a data upload for either an initial implementation or a recent acquisition, we offer a data load process that blends data according to your rules and then transfers it smoothly and accurately to all systems.
Data load is a simple, multi-step process for ensuring that accurate data enters your environment and is pushed to all relevant systems.
1. During the data blend process, users define rules to determine which system will be used for each data field. They also create promotion rules that allow completion or wellbore data to “bubble up” and create a golden record at the wellbore or well header level.
2. The Data Load app identifies individual well data across multiple systems using three essential data fields: well name, API number, and cost center. If all three fields match, then the record is blended and can be pushed through to all other systems. Where there is only a partial match, users can work through the data in a friendly user interface (or Excel files, etc., for large implementations) to group records that represent the same well.
3. The Well Master Data view contains all the data fields for each well, along with information on the system of record (including last change date/time) and the status of synchronization with subscriber system(s).
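Step 1’s field-level blend rules can be sketched roughly as follows. The rule table, system names, and field names here are hypothetical assumptions for illustration, not the product’s actual schema; the idea is simply that each golden-record field is taken from its ruled system, with a fallback to any system that supplies a value:

```python
from typing import Any

# Hypothetical blend rules (assumed names): the preferred system of record
# for each field of the golden record.
BLEND_RULES = {
    "well_name":   "well_management",
    "api_number":  "well_management",
    "cost_center": "erp",
    "daily_oil":   "production_accounting",
}

def build_golden_record(records: dict[str, dict[str, Any]]) -> dict[str, Any]:
    """Take each field from its ruled system; fall back to any system
    that has a value for the field."""
    golden: dict[str, Any] = {}
    for field, preferred in BLEND_RULES.items():
        # Try the preferred system first, then every other system.
        sources = [records.get(preferred, {})] + list(records.values())
        for data in sources:
            if field in data:
                golden[field] = data[field]
                break
    return golden

records = {
    "well_management":       {"well_name": "murphy-25", "api_number": "42-123-45678"},
    "erp":                   {"well_name": "Murphy_25", "cost_center": "CC-100"},
    "production_accounting": {"daily_oil": 118.4},
}
build_golden_record(records)
# -> {"well_name": "murphy-25", "api_number": "42-123-45678",
#     "cost_center": "CC-100", "daily_oil": 118.4}
```

A production implementation would also record provenance (which system each value came from, and when) so the master data view can show the system of record and last change time.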
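Step 2’s matching logic — full match on well name, API number, and cost center is blended automatically, a partial match goes to a user for review — can be sketched like this. The record fields and the full/partial/none statuses are illustrative assumptions, not the app’s actual behavior:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WellRecord:
    source_system: str
    well_name: str
    api_number: str   # API well identifier
    cost_center: str

def match_status(a: WellRecord, b: WellRecord) -> str:
    """Classify a pair of records as a full match, partial match, or no match."""
    matches = [
        a.well_name == b.well_name,
        a.api_number == b.api_number,
        a.cost_center == b.cost_center,
    ]
    if all(matches):
        return "full"     # blend automatically and push to all systems
    if any(matches):
        return "partial"  # route to a user (or Excel export) for manual grouping
    return "none"

prod = WellRecord("production", "murphy-25", "42-123-45678", "CC-100")
erp  = WellRecord("erp", "murphy-25", "42-123-45678", "CC-100")
fcst = WellRecord("forecasting", "murphy-25", "42-123-00000", "CC-100")

match_status(prod, erp)   # -> "full"
match_status(prod, fcst)  # -> "partial"
```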
The Well Lifecycle’s workflow component pushes a well from preconception all the way through production and, eventually, abandonment. The workflow’s tasks are divided among multiple teams, each of which receives a notification when it is its turn to complete a task. Tasks are either manual or automated: during manual tasks, specific team members enter data; automated tasks then push that data to the relevant systems. This “push” technology can also support the initial creation of well records in source systems across the enterprise. Upon completion of a task, the next team in the task list receives a notification that it is their turn. There are multiple workflow categories, one for each part of the lifecycle:
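The alternating manual/automated task flow described above can be sketched minimally as follows. The task names, team names, and notification callback are hypothetical, chosen only to show the ordering and hand-off behavior:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    team: str
    automated: bool = False

# Illustrative workflow (assumed task list): manual data entry by a team,
# followed by an automated push of that data to subscriber systems.
WORKFLOW = [
    Task("enter_permit_data", team="land"),
    Task("push_permit_to_erp", team="system", automated=True),
    Task("enter_completion_data", team="drilling"),
    Task("push_completion_to_subscribers", team="system", automated=True),
]

def run_workflow(tasks, notify):
    """Walk the task list in order: notify each team when its turn arrives,
    and record automated pushes as they execute."""
    for task in tasks:
        if task.automated:
            notify(task.team, f"automated task '{task.name}' executed")
        else:
            notify(task.team, f"your turn: complete '{task.name}'")

log = []
run_workflow(WORKFLOW, lambda team, msg: log.append((team, msg)))
# log[0] -> ("land", "your turn: complete 'enter_permit_data'")
```

A real engine would block on each manual task until the team confirms completion before moving on; this sketch only shows the sequencing and notifications.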
Data Quality is the final module of the Well Lifecycle Workflow. This module ensures data accuracy across your entire organization: business users can import and/or manually create validation rules, and the system flags records that violate them. Users can then correct the flagged data to improve data integrity.
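Rule-based flagging of this kind can be sketched as a table of named predicates applied to each record. The rule names, field names, and formats below are assumptions for illustration (e.g., treating a 10-digit API number as valid), not the module’s actual rule set:

```python
# Hypothetical validation rules: (rule name, predicate on a well record).
RULES = [
    ("api_number_present", lambda rec: bool(rec.get("api_number"))),
    ("api_number_format",
     lambda rec: len(str(rec.get("api_number", "")).replace("-", "")) == 10),
    ("cost_center_prefix",
     lambda rec: str(rec.get("cost_center", "")).startswith("CC-")),
]

def flag_violations(records):
    """Return {record id: [violated rule names]} for records breaking any rule."""
    flagged = {}
    for rec_id, rec in records.items():
        violations = [name for name, ok in RULES if not ok(rec)]
        if violations:
            flagged[rec_id] = violations
    return flagged

wells = {
    "W1": {"api_number": "42-123-45678", "cost_center": "CC-100"},
    "W2": {"api_number": "", "cost_center": "100"},
}
flag_violations(wells)
# -> {"W2": ["api_number_present", "api_number_format", "cost_center_prefix"]}
```

The flagged map is what a data steward would work through to manually fix records, as described above.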
When you have high-quality data that has been cross-referenced across your entire organization, you can make business decisions confidently. You will know that any analytics you produce manually or view in an analytics dashboard are accurate and up to date.