TopologyUpdater (TU) is an existing API service that has been repeatedly adapted and improved since the mainnet launch in the summer of 2020 to cope with the growing number of participating relay nodes. Currently about 2,700 nodes on mainnet and about 150 on testnet use TU. Originally intended as a short-term transitional solution until the P2P module is ready, it provides each participating node with 10-20 working peer connections distributed from very near to far around the globe. Stale nodes are sorted out by automated quality tests within an hour. Participation is generally free and permissionless. This currently creates about 40,000 inter-peer connections and a significantly more decentralised Cardano network, supplementing the (few) connections manually configured by SPOs and the star-topology relays provided by IOG. Every day, the TopologyUpdater servers handle about 80,000 requests. Part of this proposal covers the continued operation of this service for another 12 months.
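For context, the interaction pattern each relay follows is a simple push-and-fetch cycle: announce yourself and your current tip, then periodically download a fresh peer list. The Python sketch below illustrates the idea; the endpoint URL and parameter names are placeholders for illustration, not the actual TU API.

```python
import requests

API = "https://api.example.org/topology/v1"  # hypothetical endpoint, for illustration only

def register(port: int, block_no: int, network_magic: int) -> None:
    """Announce this relay's reachable port and current tip so the service
    can include it in its hourly reachability/quality checks."""
    r = requests.get(API, params={
        "port": port,
        "blockNo": block_no,
        "magic": network_magic,
    }, timeout=10)
    r.raise_for_status()

def fetch_peers(max_peers: int = 15) -> dict:
    """Download a fresh list of 10-20 geographically spread peers
    to merge into the node's topology file."""
    r = requests.get(f"{API}/fetch", params={"max": max_peers}, timeout=10)
    r.raise_for_status()
    return r.json()
```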
The second part of this project is a new, additional API service that will measure individual block propagation times across the Cardano network. The real-time aggregation of this data will provide an exceptional overview and basis for comparison. The resulting insights into individual and general optimisation opportunities for the network are very promising. Participating operators can compare the performance of their nodes (topology, network, CPU, configuration) with the average and the best in the entire network. Errors such as unnecessary or incorrect delays in the generation and distribution of blocks from certain pools can be identified and thus addressed quickly and concretely.
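One way to capture such a measurement is a small record per block and relay: when the block header was first seen, when the block body finished downloading, and when the block was adopted into the local chain. The field names in this sketch are illustrative assumptions, not the final API schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class BlockPropagationReport:
    """One relay's timing observations for one block (delays in ms
    relative to the slot's nominal start)."""
    block_hash: str
    slot: int
    relay_id: str          # pseudonymous identifier, not an IP address
    header_delay_ms: int   # header first received
    block_delay_ms: int    # full block body downloaded
    adopt_delay_ms: int    # block adopted into the local chain

# Hypothetical example of a single data point as it could be submitted:
report = BlockPropagationReport(
    block_hash="a1b2c3...", slot=71234567, relay_id="relay-042",
    header_delay_ms=120, block_delay_ms=310, adopt_delay_ms=345,
)
print(json.dumps(asdict(report)))
```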
The data collected will also be made available for fundamental optimisation projects (research, engineering and application development) aimed at significantly increasing the transaction throughput and network capacity of the Cardano blockchain. The proposal includes the costs for setting up and operating this service for one year.
The TopologyUpdater has been used since 2020 with a steadily increasing number of relays. This number should be maintained in line with the development of the network or continue to grow, at least until the P2P module can take over this task.
The new BlockPerformance API and Dashboard to be developed is currently at the stage of a preliminary study (see the illustrative graphic: https://cardano.ideascale.com/userimages/accounts/93/936143/panel_upload_48088/TUBlockPerf_dashboard-2ce138.png).
As with the TopologyUpdater, an open-source script will be made available to every SPO free of charge, with which they can easily and reliably report the timing data of their relays. By participating, the SPO gains free access to the data and findings. The visualisation dashboard will provide web-based interactive Gantt timeline diagrams with filtering and sorting functions, as well as detailed popup information for each reported block's propagation progress and time. There will be violin-plot graphs presenting each relay's performance ranges, to be compared with the network average and the best-performing relays in each category. Additional graphs and statistics over time will be created and made available to the public and to other ecosystem members for further analysis, interpretation and use cases such as application planning and design. A rough sketch of the relay-side reporting follows below.
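As a rough sketch of what such a reporting script could do, the snippet below scans a JSON-per-line node log for the three propagation milestones per block. The trace names shown are indicative only; the actual log format depends on the node version and its tracing configuration.

```python
import json

# Trace kinds are indicative assumptions; exact names depend on the
# cardano-node version and tracing configuration.
EVENTS = {
    "TraceDownloadedHeader": "header_at",
    "CompletedBlockFetch": "block_at",
    "AddedToCurrentChain": "adopted_at",
}

def collect_timings(log_path: str) -> dict:
    """Scan a JSON-per-line node log and collect, per block hash, the
    timestamps of the three propagation milestones."""
    timings: dict[str, dict] = {}
    with open(log_path) as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            kind = entry.get("data", {}).get("kind")
            if kind in EVENTS:
                block = entry["data"].get("block", "unknown")
                timings.setdefault(block, {})[EVENTS[kind]] = entry.get("at")
    return timings
```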
We are aiming for at least 150 participating nodes for the BlockPerf tool, but are designing the system so that 1500 nodes can also participate if there is interest.
The main goal is not to increase the number of relay nodes, but to support ongoing improvements, quality control and performance tuning. Operators and application designers should be able to use these graphs, reports and data to form, support or revise their positions in future topics and discussions.
The collection of data must be done reliably without affecting the basic function and performance of the relays (low memory and CPU usage, no block propagation delays). The data must be made available in aggregated, summarised and anonymised form, without exposing individual nodes' network connections, setups and behaviour. The aim is to develop comprehensible representations that summarise the large amounts of data in a clear and insightful way.
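A minimal sketch of how such anonymised aggregation could look, assuming per-relay adoption delays have already been collected for one block; the summary statistics chosen here are illustrative, not the final reporting format.

```python
import statistics

def aggregate(block_reports: list[dict]) -> dict:
    """Reduce per-relay delays for one block to anonymous summary
    statistics; individual relay identifiers are never published."""
    delays = sorted(r["adopt_delay_ms"] for r in block_reports)
    n = len(delays)
    return {
        "samples": n,
        "median_ms": statistics.median(delays),
        "p95_ms": delays[int(0.95 * (n - 1))],
        "max_ms": delays[-1],
    }

# Example: five relays reported adoption delays for the same block.
print(aggregate([{"adopt_delay_ms": d} for d in (280, 310, 345, 400, 900)]))
```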