Benchmark Import pipeline
The following diagram shows the data flow in the iterative import methodology by area of responsibility across the project teams, including Implementation Specialists, Business Analysts, the Client, and Conversion Developers. This process would be repeated, iterating through the source data to incrementally convert and populate the Benchmark schema.
The process has four phases:
Mapping and Analysis: The Business Analysts review the customer’s legacy system to determine which data is relevant to the Benchmark business requirements and produce the mapping documents.
Configuration: The Implementation Specialists would then parse this information to set up the initial base configuration items.
Conversion: The Conversion Developers would consume the produced mappings and merge them with the configuration to produce a usable Benchmark implementation. The result would be validated through automated scripts that check that the Benchmark business requirements are properly configured and populated (illustrative sketches of the mapping hand-off and a validation script follow this phase list).
Bug/Issue resolution: If any issues are reported in the validation phase, whether from conversion or configuration, they would be resolved at the appropriate stage; i.e. source data would be massaged by logic in the Conversion step, or configuration would be corrected in the Configuration stage. The validation scripts would then be re-run, and the cycle repeats until all the data categories are exhausted.
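To illustrate the hand-off between the mapping and conversion phases, here is a minimal sketch, in Python, of how a mapping document might be loaded and applied. The file layout, column headers, and field names (legacy_field, benchmark_field, CUST_NM, and so on) are hypothetical placeholders, not part of any actual Benchmark schema.

    import csv

    def load_mapping(path):
        """Read a mapping document (legacy field -> Benchmark field) from a CSV
        with columns: legacy_field, benchmark_field, transform (optional)."""
        mapping = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                mapping[row["legacy_field"]] = {
                    "target": row["benchmark_field"],
                    "transform": row.get("transform") or None,
                }
        return mapping

    def convert_record(legacy_record, mapping):
        """Apply the mapping to a single legacy record, producing a Benchmark row."""
        converted = {}
        for source_field, rule in mapping.items():
            value = legacy_record.get(source_field)
            if rule["transform"] == "upper" and value is not None:
                value = value.upper()  # example transform only
            converted[rule["target"]] = value
        return converted

    # Hypothetical usage:
    # mapping = load_mapping("client_x_mapping.csv")
    # benchmark_row = convert_record({"CUST_NM": "Acme"}, mapping)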
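Likewise, here is a minimal sketch of the kind of automated validation script the Conversion and Bug/Issue resolution phases rely on, assuming checks are plain functions that return failing rows. The rule and field names below are assumptions for illustration, not actual Benchmark business requirements.

    def check_required_fields(rows, required=("customer_id", "benchmark_name")):
        """Flag rows missing any field the Benchmark configuration requires
        (field names here are placeholders)."""
        return [r for r in rows if any(not r.get(field) for field in required)]

    def run_validation(rows, checks):
        """Run every check and collect failures; intended to be re-run on each
        iteration of the bug/issue-resolution loop until no failures remain."""
        failures = {}
        for check in checks:
            bad_rows = check(rows)
            if bad_rows:
                failures[check.__name__] = bad_rows
        return failures

    # Hypothetical usage:
    # failures = run_validation(converted_rows, [check_required_fields])
    # if failures:
    #     report them and route back to the Conversion or Configuration stage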
Dependencies:
To achieve the proposed method above, we’ll need to harness project-agnostic tooling that is:
Reusable. Investing development time in tools that can be used across all implementations is key to reducing re-work.
Scalable. Features and use cases can be added to the tools over time to increase their usability.
Crowd-maintained. Developers maintaining the same codebase would reduce silos, increase proficiency, and standardize practices.
Source-controlled. Keeping the tools in source control eliminates unsanctioned copies of the same objects being modified by individuals outside the team.
Suggested starter tooling: