Are National Broadband Networks Doomed?

Over the years, I’ve worked with national broadband projects in Australia, New Zealand, Qatar and Singapore. More recently in Australia, the National Broadband Network (NBN) has been in the news for all the wrong reasons. Its Retail Service Providers (RSPs) – the companies who sell NBN services to end customers – have been up in arms about NBN approaching large enterprise customers directly. Today, NBN announced that it would no longer do so, and that has RSPs like Vocus, Mactel and Telstra very happy. Arguably, NBN’s decision to sell direct was in breach of the founding principles that the Australian government put in place when it created NBNCo.

Such controversy is not why I think the NBN and its equivalents in other countries are doomed, although it isn’t helping their case in the eyes of the public and end customers.

No, I think the proliferation of 5G networks and, more recently, global players like SpaceX’s Starlink constellation could be the harbinger of death for the NBN.

Slow Rollouts

NBN has been copping a lot of flak lately in the media for taking too long to roll out. I get it: Australia is a HUGE country – even with most Australians living within an hour of the coast, that’s still a lot of physical ground to be covered by the fibre and HFC networks that serve the bulk of NBN end users. This has led to a level of dissatisfaction with the NBN as a whole.

Slow Network

Those end customers that do have an NBN connection often complain to the telecommunications ombudsman about the service they get. While some of those faults lie at the doorstep of the RSPs, some are due to physical failures of modems and network termination devices, and some are the fault of NBN itself. In all cases, because in Australia the NBN is part of the RSPs’ product offerings (i.e. it is customer facing), NBN cops the blame for ALL of the issues. As an example, my RSP (Optus) sold me a 100/40 HFC based NBN connection, which is usually fine: I often get 90-95 Mbps downstream and 30-37 Mbps upstream. However, so many HFC customers were seeing much slower than advertised speeds that Optus removed that speed combination from the market – the fastest they sell now is 50/20 (50 Mbps down, 20 Mbps up).

5G Networks

The 5G rollout in Australia is still pretty limited, but the 4G (LTE) rollout is pretty comprehensive, and on 4G I often see speeds approaching my home NBN based connection. Assuming 5G brings a significant boost in speed (along with many other advantages, including much greater density of connections per cell), a 5G connection promises to be faster than the NBN – and without tying the end customer down to their home boundaries.

If you add unlimited plans (in terms of gigabytes transferred up or down) to such a 5G (or even 4G) service, then you have a strong competitor to the NBN.

Some local mobile network providers and even MVNOs are already talking about selling fixed wireless services instead of an NBN based home (or office) connection.

Starlink

This morning, SpaceX launched another 60 satellites into orbit, bringing the total to 240 – that’s 120 new satellites within a month – well on the way to 12,000 satellites.

As I mentioned in my previous blog post (see https://telcotalk.online/index.php/2020/01/09/starlink-a-global-csp-disruptor/), SpaceX’s Starlink constellation of communications satellites promises to deliver broadband (up to 10 Gbps) AND low latency (good for gaming) to 100% of Australia (other than the Australian Antarctic Territory). If SpaceX can deliver reasonable plans (in terms of speed, capacity and price), it will be a strong competitor for the NBN. If the plans are right, it could kill the NBN.

Two NBN alternatives – either could kill NBN

Sure, the NBN in Australia is facing some significant challenges, but these are exactly the same challenges that all national broadband networks/projects face… Customers have zero allegiance to the NBN – and if 5G or Starlink can provide faster speeds at a competitive price, the NBN is doomed.

If you disagree, let me know what you think…

Starlink – a global CSP disruptor

As SpaceX reaches 180 Starlink satellites in low Earth orbit, on the road to 12,000 once the network is complete, it’s becoming increasingly apparent that Starlink is set to become a global disruptor in the telecommunications industry. Once complete, SpaceX’s network will be nearly ubiquitous (except for the polar regions). The diagram below illustrates the coverage area – basically 100% coverage of most of the populated regions of the Earth:

Starlink network coverage (approximate)

The big benefit the Starlink satellite constellation has over a traditional communications satellite sitting in geostationary orbit is the time it takes for a signal to get from one point on Earth to another. The Starlink constellation, at an altitude of 550 km, is much closer to us than a geostationary communications satellite at approximately 35,786 km. Starlink will relay information between satellites before downlinking to the destination. For an example connection between Australia and the Middle East, the diagram below shows the Starlink connection in white, a fibre (mainly undersea) connection in red and the geostationary satellite connection in yellow. It is approximately to scale.
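
To put rough numbers on that altitude difference, here’s a back-of-the-envelope propagation-delay comparison. This is only a sketch: it uses the standard orbital altitudes and the speed of light in a vacuum, and ignores ground-track offsets, inter-satellite hops, routing and processing time, so real latencies will be higher.

```python
# Rough one-way propagation delay for an up-and-down bounce via a satellite.
# Illustrative only: ignores ground-track offsets and processing overhead.
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def bounce_delay_ms(altitude_km: float) -> float:
    """One-way delay (ms) for a signal going up to the satellite and back down."""
    return (2 * altitude_km / C_KM_PER_S) * 1000

leo_ms = bounce_delay_ms(550)      # Starlink-style low Earth orbit
geo_ms = bounce_delay_ms(35_786)   # geostationary orbit

print(f"LEO bounce: {leo_ms:.1f} ms, GEO bounce: {geo_ms:.1f} ms")
# The LEO bounce comes out under 4 ms versus well over 200 ms for GEO -
# and a real round trip (request plus response) doubles both figures.
```

Even before counting inter-satellite hops, the distance alone explains why a geostationary link can never compete with a LEO constellation on latency.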

A scale comparison of Starlink vs Geostationary Satellite and Fibre connections (Melbourne to the Middle East)

The white Starlink connection can closely approximate a great circle path and is thus shorter than the red fibre connection (which must travel where the undersea cables have been laid). Yes, there are potentially more hops in a Starlink connection than in a traditional satellite connection, but the distances involved are MUCH shorter. The other issue with traditional geostationary satellite connections is that, because of the distances involved, the signal strength is relatively weak, and so these links suffer higher data loss. That means they don’t just use standard TCP/IP; they use protocols with much greater error correction and parity capabilities, which has the cumulative effect of slowing down the connection. Combine that slow connection speed with high latency (again, because of the distances involved), and traditional satellite carriers have a big challenge ahead of them compared to Starlink, which promises much greater speeds at much lower latency – competitive with fibre for all but the largest bandwidth consumers.

To give you an idea of the comparison between Starlink and geostationary satellite communications, I built a quick animation – remember this shows only half of the connection (one way); a real connection would take double the time, as the response needs to get back to the initiator.

A visualisation of the speed difference between Starlink and traditional geostationary satellite communications

What about for connections between two systems in the same country, as opposed to international connections? Even local in-country CSPs face significant competition from Starlink.

Look at the Telstra coverage map for Australia+ – Telstra is easily the largest telco in Australia with the best coverage – and yet there is plenty of space with no coverage at all. Contrast that with the 100% coverage Starlink would provide, with lower latency and much higher throughput… What do you think? Will Telstra, or any other CSP in the world, face a challenge from Starlink? I think so…

January 2020 Telstra Mobile Coverage Map

+ Yes, I know Australia has a very large area and a relatively small population, so the problem is not as big in other countries – but just imagine 100% coverage, 100% of the time, in any country you ever visit…

SpaceX launch new batch of Starlink satellites

I’ve just watched the SpaceX launch of the latest batch of 60 Starlink satellites into low Earth orbit – aimed at providing low latency internet services all over the world. Initially, SpaceX are targeting the North American market – I mean, why wouldn’t they? The US has such a disjointed connectivity marketplace, with a mixture of metro area networks (WiFi and WiMAX based) in small towns, LTE/5G in larger population centres, HFC cable and fibre connectivity options for fixed services, and probably still a bit of xDSL running around… not to mention the oft complained about mobile network coverage. Starlink (despite being internet rather than voice focused) has the potential to steal a lot of the subscribers who live in or travel to marginal coverage areas. Think of it – 100% coverage of North America at up to 10 Gbps. If the price is competitive, why wouldn’t you as a subscriber go with that option?

There were a few things that piqued my interest with this launch in particular:

  • The launch of these Starlink satellites came in close succession to the December ’19 launch of the Kacific comms satellite (ironically, on a SpaceX Falcon 9) – a more conventional geostationary communications satellite, targeted at providing services to the South Pacific, SE Asia and Himalayan nations (not Australia) via Ka band radio (thus the name). They plan to provide services to over 600 million subscribers – from the following countries (from https://www.kacific.com):
    • American Samoa
    • Bangladesh
    • Bhutan
    • Brunei
    • Cook Islands
    • East Timor
    • Federated States of Micronesia
    • Fiji
    • French Polynesia
    • Guam
    • Indonesia
    • Kiribati
    • Malaysia
    • Myanmar
    • Nepal
    • New Zealand
    • Niue
    • Northern Mariana Islands
    • Papua New Guinea
    • Philippines
    • Samoa
    • Solomon Islands
    • Tonga
    • Tuvalu
    • Vanuatu

Obviously, the bulk of those subscribers are going to come from Indonesia, the most populous country in their target list. It makes me wonder about the competition between Kacific and Starlink for those same subscribers once SpaceX establishes its services in the North American market and spreads its wings to the rest of the world…

  • The Starlink swarm of satellites have had astronomers up in arms because of the additional light and radio pollution these satellites have been adding to the night sky making it difficult for both visual and radio astronomers to get good observations. With more than 12,000 (!!!) Starlink satellites planned to go into orbit, we’re just seeing the beginning of this problem.
Telescopes at Lowell Observatory in Arizona captured this image of galaxies on May 25, their images marred by the reflected light from more than 25 Starlink satellites as they passed overhead.
Victoria Girgis/Lowell Observatory – image linked from astro.princeton.edu

I noted during the latest launch coverage that the SpaceX presenter said one of the satellites launched today had been ‘darkened’ to reduce reflections, in the hope that it would lessen the effect on visual (at least) astronomy. Let’s hope it works.

If you want to read up on Starlink’s effect on astronomy, I’d suggest you read this article on Nat Geo – https://www.nationalgeographic.com/science/2019/05/elon-musk-starlink-internet-satellites-trouble-for-astronomy-light-pollution/

For sure, these launches are great to watch – they remind me of watching Apollo 17 launch as a boy (the only one I remember from way back then) and the excitement I felt…

Driving Analytics in a Telco

Originally posted on 21Sep17 to IBM Developerworks (11,101 views)

An ex-colleague of mine (Violet Le – now the Marketing Director at Imageware) asked me about the drivers for analytics in telcos. I’ll admit that it’s a subject that I haven’t really given a lot of thought to – all the projects that I’ve worked on in the past that have included analytics have had a larger business case that I was trying to solve: marketing, future planning, sales, etc. I’ve never worked on an analytics project for the sake of analytics, nor have I designed a solution that was just (or mainly) analytics.

There is definite value in analytics providing insight into how the business is running – enabling a business to plan for the future and to manage how it runs in the present. Both strategic and tactical cases for analytics would seem to me to be of value to any business. An analytics system that delivers insight into the business (customer behaviour, sales effectiveness, capacity usage and predictions, etc.) is great, but at the end of the day a telco needs to do something with that information/insight to actually deliver business benefits.

As I’m no analytics specialist, I won’t try to describe how to define or build those systems. What I will try to do is describe the pieces around the analytics systems that make use of that insight to deliver real value for the CSP.

What are the business cases that I’ve seen?

  1. Sales & Marketing
    • Driving promotions to positively affect subscriber retention or acquisition… I did a project with Globe Telecom in the Philippines which was primarily aimed at driving SMS based outbound marketing promotions, targeted based on subscriber behaviour. An example: if a subscriber had a pre-paid balance of less than (say) 5 pesos, and then topped up by more than 20 pesos but less than 50 pesos, send a promo encouraging the subscriber to top up by more than 100 pesos… all the interaction is via SMS (via a ParlayX SMS API).
    • Back in 2013, I did an Ignite presentation at the IBM Impact Conference in Las Vegas – here is the presentation (Smarter Marketing for Telecom – Impact 2013).
    • Social networking analysis to determine who should be targeted. IBM’s Research group pushed for years a Social Networking Analysis capability that looked at social networking connections to determine which subscribers are followers and which are community leaders and influencers, and targeted promotions based on that assessment.
  2. Networks
    • Ensuring utilisation of the network is optimised for the load requirements. I worked with a telco in Hong Kong that wanted to dynamically adjust the quality of service delivered to a specific user based on their location (in real time) and a historical analysis of the traffic on the network. For example, if a subscriber was entering an MTR (subway) station and the analytics showed that particular station typically got very high numbers of subscribers all watching YouTube clips at that time of day on that day of the week, then lower the QoS setting for that subscriber UNLESS they were a premium or post-paid customer, in which case keep the QoS settings the same. The rating as a premium subscriber could be derived from their past behaviour and spend – from a traditional analytics engine.
    • Long term planning on the network. SDN/NFV will allow networks to be more agile, which will reduce the need for traditional offline analytics to drive network planning and make the real time view more relevant as networks adapt to real time loads dynamically… as traffic increases in particular sections of the network, real time analytics and predictions will drive the SDN to scale up that part of the network on demand. This is where next gen AIs may be useful in predicting where the load will be in the network and then using SDN to increase capacity BEFORE the load is detected… think Watson from IBM and similar.
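
The Globe Telecom promotion rule above boils down to simple trigger logic evaluated on each top-up event. Here’s a minimal sketch of that rule – the thresholds, message text and function names are illustrative only, not Globe’s actual campaign or API:

```python
# A minimal sketch of the SMS promo trigger described above.
# Thresholds and message text are illustrative, not the real campaign rules.
from typing import Optional

def promo_for_topup(balance_before: float, topup_amount: float) -> Optional[str]:
    """Return a promo message if this top-up matches the campaign rule,
    otherwise None (no promo sent)."""
    # Rule: balance was below 5 pesos AND top-up was between 20 and 50 pesos.
    if balance_before < 5 and 20 < topup_amount < 50:
        return "Top up 100 pesos or more and get bonus credit!"
    return None

# In the real system, a non-None result would be handed to an outbound
# SMS gateway (e.g. via a ParlayX SMS API) for delivery to the subscriber.
msg = promo_for_topup(balance_before=3.0, topup_amount=30.0)
```

The value of the analytics layer is in choosing those thresholds per subscriber segment; the delivery side is just rule evaluation plus an SMS send.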

A few years ago, a number of ex colleagues (from IBM) formed a company on the back of real time marketing use case for Telcos and since then, they’ve gone ahead in leaps and bounds. (Check them out if you’re interested, the company name is Knowesis)

Do you have other significant use cases for analytics in a CSP? I’m sure there are more – I’m not claiming this is an exhaustive list, merely the cases that I’ve seen multiple times in my time as a solution architect focused on the telecommunications industry.

Progress on the mismatch between the TMF SID and TMF API data model

Originally posted on 4Sep17 to IBM Developerworks (10,430 Views)

I wouldn’t normally just post a link to someone else’s work here, but in this case Frank Wong – a colleague of mine at my new company (DGIT Systems) – has done some terrific work in helping to eliminate the mismatch between the data model used by the TMF’s REST based APIs and the TMF’s Information Model (SID). I know this was an issue that IBM were also looking to resolve. In the effort to encourage the use of a simple REST interface, the data model used in the TMF’s APIs has been greatly simplified from the comprehensive (some might say complex) data model that is the SID. This meant that a CSP using the SID internally to connect internal systems needed to map it to the simplified API data model to expose those APIs externally. There was no easy one-to-one mapping, which meant one could not simply create an API for an existing business service (eTOM or otherwise) – a lot more custom data modelling work was required.
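
To illustrate why that mapping is not one-to-one, here’s a deliberately toy sketch: a flat, simplified API-style payload being expanded into a nested, richer internal structure. All field names below are hypothetical, chosen only to show the shape of the problem – they are not taken from the actual SID or TMF Open API specifications.

```python
# Toy illustration of mapping a flat, simplified API payload onto a richer,
# nested internal (SID-like) structure. All field names are hypothetical.
api_product = {"id": "P100", "name": "Broadband 100", "price": 79.95}

def to_internal(p: dict) -> dict:
    """Expand a flat API record into a nested internal representation.
    The flat price must be unpacked into a structured price object, and
    internal attributes with no API counterpart need assumed defaults."""
    return {
        "ProductOffering": {
            "ID": p["id"],
            "Name": p["name"],
            "ProductOfferingPrice": {
                "Price": {"amount": p["price"], "units": "AUD"},
                # Assumed default - the simplified payload carries no price type.
                "priceType": "recurring",
            },
        }
    }

internal = to_internal(api_product)
```

Every one of those assumed defaults and structural expansions is custom mapping work that someone has to design, which is exactly the gap the work described above is trying to close.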

This interview with Frank by the TMF illustrates some of the latest work to resolve that mismatch – read it at https://inform.tmforum.org/open-apis/2017/08/apis-need-good-parents-catalog-success/?mkt_tok=eyJpIjoiTm1aa1pUVXhOR001TkRFMSIsInQiOiJXbEpaajNHRmR1Rm9meTZzQlMzMnJRODJDNlllUjdsdFk2RUxNMDVRS25HMEdlOTZzK3NDNkx5YkZXSjlyQW42eDkrQW5lT0pkRVFpdm5lNXJIdW9STGpaYWV5aHZiald0b1JBenhlSTFRV2FUMVhFNXBLUlRkZ05MV2ZZK1JSViJ9

Are MicroServices the future INSIDE a CSP?

Originally posted on 30Aug17 to IBM Developerworks (11,517 Views)

Across many industries, including the Telecommunications sector, there seems to be a strong movement towards a MicroServices Architecture and (somewhat) away from Service Oriented Architecture. I’ve seen this move in a CSP here in Australia. The TeleManagement Forum have a significant project that is trying to standardise the REST APIs that a CSP might publish.

The TMF state:

“TM Forum’s Open API program is a global initiative to enable end to end seamless connectivity, interoperability and portability across complex ecosystem based services. The program is creating an Open API suite which is a set of standard REST based APIs enabling rapid, repeatable, and flexible integration among operations and management systems, making it easier to create, build and operate complex innovative services. TM Forum REST based APIs are technology agnostic and can be used in any digital service scenario, including B2B value fabrics, Internet of Things, Smart Health, Smart Grid, Big Data, NFV, Next Generation OSS/BSS and much more.”

“TM Forum is bringing different stakeholders from across industries to work together and build key partnerships to create the APIs and connections. The reference architecture and APIs we are co-creating are critical enablers of our API program and open innovation approach for building innovative new digital services in a number of key areas, including IoT applications, smart cities, mobile banking and more.”

Laurent Leboucher, Vice President of APIs & Ecosystems, Orange

I’ve been a part of a number of projects where these REST APIs have been exposed primarily to a CSP’s trading partners – my very first Service Delivery Platform exposed APIs to external developers. Back then, it was Parlay X Web Services (REST didn’t really exist, and there were certainly no telco standards in place for REST based interfaces) that exposed the functionality of network elements to 3rd party developers. Many of the APIs that the TMF have defined seem to be more focused on OSS/BSS functions instead. Now that the TMF have quite a number of Open APIs defined, some network focused APIs are coming onto the list – for instance, a Location API would typically have been exposed using the ParlayX Web Services or ParlayREST interfaces to the network’s Location Based Server (LBS). As a result, there does seem to be a small amount of crossover between the new TMF APIs and the older ParlayREST APIs.

Does this mean that the new TMF OpenAPIs are of no use? Not at all.  There are certainly advantages to exposing functions that a CSP has to external developers and REST based OpenAPIs make the consumption of those functions easier than the ParlayX web services or Parlay CORBA services have been in the past.  Ease of consumption is not to be underestimated.  An API that is easy to include in an application and provides a real capability that would have been otherwise difficult to provide stands a much greater chance of wide usage.

Sure, there is a place for externalising the OSS/BSS functions of a CSP. Trading partners could place orders against a CSP, they could bill to a subscriber’s post or pre-paid accounts, they could update the subscriber profile held by the CSP. All relevant use cases for externalising the TMF Open APIs.

The big question in my mind is will REST APIs be of use internally?

REST based APIs being easier to integrate internally will drive some value. But in CSPs that have significant investments in a Service Oriented Architecture (SOA), I’m struggling to see the business value in abandoning that in favour of a MicroServices Architecture where there is no common integration tool and no common orchestration capability – rather, lots and lots of point to point integrations through REST APIs.

For those of us that have been around a while, you will have seen point to point integrations and the headaches they cause – complex dependencies in mesh architectures make maintenance hard and expensive. Changing (say) a billing system that is integrated through multiple point to point connections is a nightmare – even if those interfaces are described by a standardised API. The plain truth of the matter is that not all of those interfaces will be adequately described by the TMF’s Open APIs, so custom API specifications will arise and make swapping out the billing system expensive. Additionally, not all of a CSP’s internal systems will have TMF Open API compliant interfaces – many won’t even support REST interfaces natively. Changing all of a CSP’s systems to ensure they have a REST interface is a non-trivial task.
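
The maintenance cost of a point to point mesh is easy to quantify: with n systems fully meshed, the number of pairwise integrations grows quadratically, and swapping out any one system touches up to n-1 of those links. A quick sketch:

```python
# Why point-to-point meshes get expensive: n fully-meshed systems need
# n*(n-1)/2 pairwise integrations, and replacing one system (e.g. billing)
# means reworking up to n-1 of them.
def mesh_links(n: int) -> int:
    """Number of pairwise integrations in a full mesh of n systems."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:>3} systems -> {mesh_links(n):>4} integrations, "
          f"{n - 1} links touched when one system is replaced")
```

Contrast that with a hub (ESB/orchestration) topology, where n systems need only n connections to the hub and a system swap touches exactly one.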

A Hybrid environment may be needed.

I’d suggest that a hybrid approach is needed – existing Enterprise Service Buses may be able to interface with REST APIs. Certainly IBM’s Integration Bus and the (now superseded) WebSphere Enterprise Service Bus could connect to REST APIs just as easily as they could connect to Web Services, files and other connectivity options. The protocol transformation capabilities of an ESB can provide REST APIs for systems that would otherwise not support such modern interfaces. Similarly, where a function is not provided by a single system, a traditional orchestration (BPM) capability can coordinate multiple systems to present a single interface to that capability, even if (behind the scenes) multiple end point systems are involved in providing the functionality of that transaction/interface. The diagram below shows my thinking on what should be in place…