Architecting the Target State over the Years
What did we do?
Not so long ago, when thinking of target-state architecture for an enterprise, one would think in terms of systems.
How are users using them? How are they connected? How are they deployed? How are they managed? And so on. Data was created, stored, transported, and copied. Data served a purpose toward the end goal, with systems connected to serve the enterprise.
With the introduction of microservices architecture, some monolithic systems were decomposed into services, providing greater connectivity, scalability, and, more importantly, governance over the data being transported.
Introducing microservices did not really eliminate legacy systems or the need for monoliths. It created an architecture where mainframe systems, monolithic systems, and microservices-based systems all coexisted. Point-to-point connections became more and more prevalent. Data was transported and copied many times over to serve different purposes. Connections became complex. Services became bloated.
With most newer platforms being SaaS, plug-and-play became mainstream. Services served those purposes as well, carrying data into and out of the enterprise. Enterprise technology sprawl only increased.
In come AI-based tools, and now there is a huge demand for data. Not just any data, but structured data. Alright, do we have a data warehouse in there somewhere? Great, so let's pull data out of that and feed the AI machine. We are all set. Right? RIGHT? No? What's wrong?
Part 2 is out; read it here.

