I would not say that this is correct. Think of all the network traffic in a cloud: traffic between VMs (via tunnels or VLANs), traffic between VMs and the outside world, iSCSI, external APIs, internal APIs, message queue, database, Swift data, Ceph data, Ceph control, live migration, and whatever else may come to mind.
Depending on the size of your installation, you may want to separate this traffic onto several networks. However, nodes don't usually have five or ten network interfaces (except for blade servers, where network interfaces can be added at will). A typical example: networks are implemented as VLANs, and each node has a single interface bond that carries all of them.
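To make that concrete, here is a rough sketch of such a node using iproute2 commands. The interface names, VLAN IDs and addresses are invented for illustration, and in practice this would live in the distribution's persistent network configuration rather than being typed by hand:

```
# Hypothetical node: two NICs aggregated into one bond, VLAN sub-interfaces on top.
ip link add bond0 type bond mode 802.3ad
ip link set eno1 down; ip link set eno1 master bond0
ip link set eno2 down; ip link set eno2 master bond0
ip link set bond0 up

# One VLAN per traffic class (IDs and addresses are placeholders).
ip link add link bond0 name bond0.10 type vlan id 10   # management / API
ip link add link bond0 name bond0.20 type vlan id 20   # overlay (VXLAN/GRE tunnels)
ip link add link bond0 name bond0.30 type vlan id 30   # storage (iSCSI, Ceph)
ip addr add 10.0.10.11/24 dev bond0.10
ip addr add 10.0.20.11/24 dev bond0.20
ip addr add 10.0.30.11/24 dev bond0.30
ip link set bond0.10 up; ip link set bond0.20 up; ip link set bond0.30 up
```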
Here is a blog series with a more realistic description of production networking. It was written in the Liberty/Mitaka timeframe, but the principles are the same today.
Of course, for a proof-of-concept cloud, training, self-education, etc., you can set up OpenStack nodes with two networks and two interfaces. But you don't even need two; a single network is sufficient, in particular for an all-in-one cloud.
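As an illustration, a minimal DevStack local.conf for such a single-NIC all-in-one node needs nothing more than the host's one address; the password and IP below are placeholders:

```
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=192.168.1.10   # the node's only address; all traffic shares it
```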
To answer your question:
The provider interface connects the node to a provider network, such as an external network. This gives instances connectivity to the outside world.
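Roughly what this looks like in practice, following the layout of the install guides (the interface name eth1, the network name and the addresses are just examples): the L2 agent is told which physical interface is the provider interface, and an admin then creates the external provider network on top of it.

```
# /etc/neutron/plugins/ml2/linuxbridge_agent.ini (excerpt)
#   [linux_bridge]
#   physical_interface_mappings = provider:eth1     # eth1 = the provider interface

# Create the external (provider) network and a subnet that gives
# instances and routers a path to the outside world.
openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider-net
openstack subnet create --network provider-net \
  --allocation-pool start=203.0.113.101,end=203.0.113.200 \
  --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider-subnet
```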
The management network carries API traffic between the OpenStack servers, as well as message-queue and database traffic. In a two-network configuration, it typically also carries VM traffic over GRE or VXLAN tunnels, plus external API traffic; in short, everything except provider traffic.
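You can see this in the two-network layout of the install guides: the tunnel endpoint is bound to the node's management address, so overlay traffic rides on the management network. A sketch using crudini on the Linux bridge agent config (you can just as well edit the file by hand; the IP is a placeholder for this node's management address):

```
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan true
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 10.0.0.31
crudini --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
```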