SpaceX launch new batch of Starlink satellites

I’ve just watched the SpaceX launch of the latest batch of 60 Starlink satellites into low Earth orbit – aimed at providing low latency internet services all over the world. Initially, SpaceX are targeting the North American market – I mean, why wouldn’t they? The US has such a disjointed connectivity marketplace, with a mixture of Metro Area Networks (WiFi and WiMAX based) in small towns, LTE/5G in larger population centres, HFC cable and Fibre connectivity options for fixed services and probably still a bit of xDSL running around… Not to mention the oft complained about mobile network coverage. Starlink (despite being Internet rather than voice focused) has the potential to steal a lot of the subscribers that live in or travel to marginal coverage areas. Think of it – 100% coverage of North America at up to 10Gbps – if the price is competitive, why wouldn’t you as a subscriber go with that option?

There were a few things that piqued my interest with this launch in particular:

  • The launch of these Starlink satellites came in close succession to the December ’19 launch of the Kacific comms satellite (ironically, also on a SpaceX Falcon 9) – a more conventional geostationary communications satellite, aimed at providing services to the South Pacific, SE Asia and the Himalayan nations (not Australia) via Ka-band radio (thus the name). Kacific plan to provide services to over 600 million subscribers in the following countries (from https://www.kacific.com):
    • American Samoa
    • Bangladesh
    • Bhutan
    • Brunei
    • Cook Islands
    • East Timor
    • Federated States of Micronesia
    • Fiji
    • French Polynesia
    • Guam
    • Indonesia
    • Kiribati
    • Malaysia
    • Myanmar
    • Nepal
    • New Zealand
    • Niue
    • Northern Mariana Islands
    • Papua New Guinea
    • Philippines
    • Samoa
    • Solomon Islands
    • Tonga
    • Tuvalu
    • Vanuatu

Obviously, the bulk of those subscribers are going to come from Indonesia, the most populous country in their target list. It makes me wonder about the competition between Kacific and Starlink for those same subscribers once SpaceX establish their services in the North American market and spread their wings to the rest of the world…

  • The Starlink swarm of satellites has had astronomers up in arms because of the additional light and radio pollution these satellites add to the night sky, making it difficult for both visual and radio astronomers to get good observations. With more than 12,000 (!!!) Starlink satellites planned to go into orbit, we’re just seeing the beginning of this problem.
Telescopes at Lowell Observatory in Arizona captured an image of galaxies on May 25, marred by the reflected light from more than 25 Starlink satellites as they passed overhead.
Victoria Girgis/Lowell Observatory – image linked from astro.princeton.edu

I noted during the latest launch coverage that the SpaceX presenter said one of the satellites launched today had been ‘darkened’ to reduce reflections, in the hope that it would lessen the effect on visual astronomy at least. Let’s hope it works.

If you want to read up on Starlink’s effect on astronomy, I’d suggest you read this article on Nat Geo – https://www.nationalgeographic.com/science/2019/05/elon-musk-starlink-internet-satellites-trouble-for-astronomy-light-pollution/

For sure, these launches are great to watch, and they remind me of watching the Apollo 17 launch as a boy (that’s the only one I remember from way back then) and the excitement I felt…

Driving Analytics in a Telco

Originally posted on 21Sep17 to IBM Developerworks (11,101 views)

An ex-colleague of mine (Violet Le – now the Marketing Director at Imageware) asked me about the drivers for analytics in telcos. I’ll admit that it’s a subject I haven’t really given a lot of thought to – all the projects I’ve worked on in the past that included analytics had a larger business case that I was trying to solve: marketing, future planning, sales and so on. I’ve never worked on an analytics project for the sake of analytics, nor have I designed a solution that was just (or mainly) analytics.

There is definite value in analytics in providing insight into how the business is running – enabling a business to plan for the future and to manage how it runs in the present. Both strategic and tactical cases for analytics would seem to me to be of value to any business. An analytics system that delivers insight into the business (customer behaviour, sales effectiveness, capacity usage and predictions etc.) is great, but at the end of the day a telco needs to do something with that information/insight to actually deliver business benefits.

As I’m no analytics specialist, I won’t try to describe how to define or build those systems. What I will try to do is describe the bits around the analytics systems that make use of that insight to deliver real value for the CSP.

What are the business cases that I’ve seen?

  1. Sales & Marketing 
    • Driving promotions to positively affect subscriber retention or acquisition. I did a project with Globe Telecom in the Philippines that was primarily aimed at driving SMS-based outbound marketing promotions, targeted based on subscriber behaviour. An example might be: if a subscriber had a pre-paid balance of less than (say) 5 pesos, and then topped up by more than 20 pesos but less than 50 pesos, send a promo encouraging them to top up by more than 100 pesos. All of the interaction is via SMS (via a ParlayX SMS API) – see the first sketch after this list.
    • Back in 2013, I did an Ignite presentation at the IBM Impact Conference in Las Vegas – here is the presentation (Smarter Marketing for Telecom – Impact 2013)
    • Social networking analysis to determine who should be targeted. IBM’s Research group spent years pushing a Social Networking Analysis capability that looked at social networking connections to determine which subscribers are followers and which are community leaders and influencers, and targeted subscribers based on that assessment.
  2. Networks
    • Ensuring utilisation of the network is optimised for the load requirements. I worked with a telco in Hong Kong that wanted to dynamically adjust the quality of service delivered to a specific user based on their location (in real time) and a historical analysis of the traffic on the network. For example, if a subscriber was entering the MTR (subway) station and the analytics showed that particular station typically got very high numbers of subscribers all watching YouTube clips at that time of day on that day of the week, then lower the QoS setting for that subscriber UNLESS they were a premium or post-paid customer, in which case keep the QoS settings the same. The rating as a premium subscriber could be derived from their past behaviour and spend – from a traditional analytics engine. A sketch of this logic also follows the list.
    • Long term planning on the network. SDN/NFV will allow networks to be more agile, which will reduce the need for traditional offline analytics to drive network planning and make the real-time view more relevant as networks adapt to real-time loads dynamically. As traffic increases in particular sections of the network, real-time analytics and predictions will drive the SDN to scale up that part of the network on demand. This is where the new next-gen AIs may be useful in predicting where the load in the network will be and then using SDN to increase capacity BEFORE the load is detected (think Watson from IBM and similar).
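To make that Globe-style trigger concrete, here is a minimal sketch of the rule described in the first bullet above – the thresholds, the MSISDN and the send_sms() helper are all hypothetical stand-ins (the real campaign logic ran in a dedicated campaign engine driving a ParlayX SMS gateway):

```python
# Hypothetical sketch of a balance-triggered top-up promotion.
# Thresholds and the send_sms() gateway wrapper are illustrative only.

LOW_BALANCE = 5.0   # pesos: subscriber is running low
TOPUP_MIN = 20.0    # pesos: smallest top-up that triggers the promo
TOPUP_MAX = 50.0    # pesos: top-ups above this need no encouragement

def send_sms(msisdn: str, text: str) -> None:
    """Stand-in for a ParlayX/SMS-gateway call."""
    print(f"SMS to {msisdn}: {text}")

def on_topup(msisdn: str, balance_before: float, topup_amount: float) -> None:
    """Fire a targeted promo when a low-balance subscriber makes a mid-size top-up."""
    if balance_before < LOW_BALANCE and TOPUP_MIN < topup_amount < TOPUP_MAX:
        send_sms(msisdn,
                 "Top up 100 pesos or more this week and get bonus load!")

on_topup("+639170000000", balance_before=3.50, topup_amount=30.0)
```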
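And a similar sketch for the Hong Kong QoS case – again, the cell ID, the congestion profile and the QoS levels are assumptions for illustration; in the real system the decision was driven by a real-time analytics engine fed by network probes:

```python
# Hypothetical sketch of location + history driven QoS adjustment.
# Cell IDs, profiles and QoS levels are illustrative only.

from datetime import datetime

# Pretend output of the offline analytics engine: (cell, weekday, hour) -> congested?
CONGESTION_PROFILE = {("MTR-Central", 4, 18): True}  # Friday 6pm rush hour

def is_premium(subscriber: dict) -> bool:
    """Derived from past behaviour and spend by the analytics engine."""
    return subscriber.get("segment") in ("premium", "postpaid")

def qos_for(subscriber: dict, cell_id: str, now: datetime) -> str:
    key = (cell_id, now.weekday(), now.hour)
    if CONGESTION_PROFILE.get(key) and not is_premium(subscriber):
        return "BEST_EFFORT"   # throttle pre-paid users in a congested cell
    return "STANDARD"          # premium/post-paid keep their normal QoS

print(qos_for({"segment": "prepaid"}, "MTR-Central",
              datetime(2017, 9, 22, 18, 30)))  # -> BEST_EFFORT
```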

A few years ago, a number of ex-colleagues (from IBM) formed a company on the back of the real-time marketing use case for telcos, and since then they’ve come along in leaps and bounds. (Check them out if you’re interested – the company name is Knowesis.)

Do you have other significant use cases for analytics in a CSP? I’m sure there are – I’m not claiming this is an exhaustive list, merely the cases that I’ve seen multiple times in my time as a solution architect focused on the telecommunications industry.

Progress on the mismatch between the TMF SID and TMF API data model

Originally posted on 4Sep17 to IBM Developerworks (10,430 Views)

I wouldn’t normally just post a link to someone else’s work here, but in this case Frank Wong – a colleague of mine at my new company (DGIT Systems) – has done some terrific work in helping to eliminate the mismatch between the data model used by the TMF’s REST-based APIs and the TMF’s Information Model (SID). I know this was an issue that IBM were also looking to resolve. In the effort to encourage the use of a simple REST interface, the data model used in the TMF’s APIs was greatly simplified from the comprehensive (some might say complex) data model that is the SID. This meant that a CSP using the SID internally to connect internal systems needed to map to the simplified API data model to expose those APIs externally – and because there was no easy one-to-one mapping, one could not simply create an API for an existing business service (eTOM or otherwise); a lot more custom data modelling work was required.
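To make the shape of the problem concrete, here is a small, purely illustrative sketch of the kind of flattening involved – the field names are hypothetical and deliberately not taken from the actual SID or TMF Open API schemas:

```python
# Illustrative only: a SID-style order (deeply structured) flattened into the
# kind of simplified resource a REST API would expose. All field names are
# hypothetical, not real SID or TMF Open API schema elements.

sid_customer_order = {
    "CustomerOrder": {
        "ID": "CO-1001",
        "describedBy": [{"name": "priority", "value": "1"}],
        "CustomerOrderItem": [
            {"ID": "1", "action": "add",
             "ProductOffering": {"ID": "PO-FIBRE-100", "name": "Fibre 100"}}
        ],
    }
}

def to_api_product_order(sid_order: dict) -> dict:
    """Flatten a SID-style CustomerOrder into a simplified API payload."""
    co = sid_order["CustomerOrder"]
    return {
        "id": co["ID"],
        "priority": next((c["value"] for c in co.get("describedBy", [])
                          if c["name"] == "priority"), None),
        "orderItem": [
            {"id": item["ID"], "action": item["action"],
             "productOffering": {"id": item["ProductOffering"]["ID"]}}
            for item in co.get("CustomerOrderItem", [])
        ],
    }

print(to_api_product_order(sid_customer_order))
```

Every one of those little flattening decisions is custom mapping work that someone has to design, build and maintain – which is exactly the gap the catalog-driven approach in Frank’s work aims to close.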

This interview with Frank by the TMF illustrates some of the latest work to resolve that mismatch – read it at https://inform.tmforum.org/open-apis/2017/08/apis-need-good-parents-catalog-success/?mkt_tok=eyJpIjoiTm1aa1pUVXhOR001TkRFMSIsInQiOiJXbEpaajNHRmR1Rm9meTZzQlMzMnJRODJDNlllUjdsdFk2RUxNMDVRS25HMEdlOTZzK3NDNkx5YkZXSjlyQW42eDkrQW5lT0pkRVFpdm5lNXJIdW9STGpaYWV5aHZiald0b1JBenhlSTFRV2FUMVhFNXBLUlRkZ05MV2ZZK1JSViJ9

What’s all the fuss about Orchestration for NFV?

Originally posted on 6Jun17 to IBM Developerworks (11,950 Views)

Think about it – orchestration is everywhere in a telco: the Order to Cash process, the Ticket to Resolution process, the service and resource fulfilment processes and even the NFV MANO processes. Orchestration is everywhere…

There is a hierarchy to processes in a Telco – just as the TMF recognises that there is a hierarchy in business services (within the eTOM Process Framework). At the highest level, the Order to Cash process might look like this:

Each task in this swimlane diagram will have multiple sub-processes. If we delve down into the provision resources task, for instance, a CSP will need processes that interrogate the resource catalog and network inventory to determine where in the network that resource can be placed and what characteristics need to be set, then tell the resource manager to provision that resource. If it’s a physical resource, that may involve allocating a technician to install it. If it’s a virtual resource such as a Virtual Network Function (VNF), then the Network Function Virtualisation (NFV) orchestration engine will need to be told to provision that VNF. If we go one level deeper, the NFV orchestration engine will need to tell the VNF Manager to provision that VNF and then update the network inventory.
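Here is a rough sketch of that layered decision in code – purely illustrative, with hypothetical helper names standing in for the real inventory, workforce management and NFV orchestrator interfaces:

```python
# Illustrative sketch of the layered "provision resources" flow described above.
# All helper names (query_inventory, dispatch_technician, nfvo_instantiate)
# are hypothetical stand-ins for real OSS/MANO interfaces.

def query_inventory(resource_spec: dict) -> dict:
    """Ask catalog + inventory where the resource can go and how to configure it."""
    return {"site": "POP-3", "params": {"vlan": 204}}

def dispatch_technician(resource_spec: dict, placement: dict) -> None:
    # Physical path: allocate a field technician to install the resource.
    print(f"Work order: install {resource_spec['type']} at {placement['site']}")

def nfvo_instantiate(resource_spec: dict, placement: dict) -> None:
    # Virtual path: the NFV orchestrator tells the VNF Manager to instantiate.
    print(f"NFVO: instantiate VNF {resource_spec['type']} at {placement['site']}")

def update_inventory(resource_spec: dict, placement: dict) -> None:
    print(f"Inventory updated: {resource_spec['type']} @ {placement['site']}")

def provision_resource(resource_spec: dict) -> None:
    placement = query_inventory(resource_spec)
    if resource_spec["virtual"]:
        nfvo_instantiate(resource_spec, placement)
    else:
        dispatch_technician(resource_spec, placement)
    update_inventory(resource_spec, placement)

provision_resource({"type": "vFirewall", "virtual": True})
```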

Perhaps the diagram below will help you to understand what I mean:

This diagram is a very simplified hierarchical process model designed to show the layers of process. As you can see, there are many layers of orchestration required in a CSP and as long as the orchestration engine is flexible enough and can handle the integration points with the many systems it needs to interact with, there is no real reason why the same orchestration engine couldn’t be used by all levels of process.

Over the past couple of years, as NFV has risen significantly in popularity and interest, I’ve seen many players in the market talk about orchestration engines that just handle NFV orchestration and nothing else. To me, that seems like a waste. Why put in an orchestration engine that is used only for NFV when you still need orchestration engines for the higher process layers as well? I’d suggest that a common orchestration and common integration capability makes the most sense, delivering:

  • High levels of reuse
  • Maximising utilisation of software capabilities
  • Common Admin and Development skills for all levels of process (be they business focussed or service or resource focussed)
  • Common tooling
  • Common Integration patterns (enabling developers and management staff to work across all layers of the business)
  • Greater Business Agility – able to react to changing business and technical conditions faster

There are a number of integration platforms – typically marketed as Enterprise Service Buses (ESBs) – that can handle integration through Web Services, XML/HTTP, file, CORBA/IIOP and even socket/RPC connections for those legacy systems that many telcos still have hanging around. An ESB can work well in a microservices environment too – so don’t think that just because you have an ESB you’re fighting against microservices; you are not. Microservices can make use of the ESB for connectivity to conventional Web Services (SOA) as well as legacy systems.
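As a trivial, purely illustrative sketch (the ESB endpoint and payload are hypothetical), the microservice below neither knows nor cares that the ESB is mediating to a legacy SOAP system behind the scenes:

```python
# Illustrative only: a microservice reaching a legacy system through a
# REST facade exposed on the ESB. The endpoint and payload are hypothetical.

import requests

def get_customer(customer_id: str) -> dict:
    # The ESB mediates: REST/JSON on this side, SOAP/legacy on the other.
    resp = requests.get(
        f"https://esb.example.com/api/customers/{customer_id}", timeout=5)
    resp.raise_for_status()
    return resp.json()

print(get_customer("CUST-0042"))
```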

A common orchestration layer would drive consistency in processes at all layers of a telco – and there are a number of Business Process Management orchestration engines out there with the flexibility to work with the integration layer to orchestrate processes from the lowest level (such as within a Network Function Virtualisation (NFV) environment) all the way up to the highest levels of business process. The orchestrations should be defined in a standard language such as Business Process Execution Language (BPEL) or Business Process Model and Notation (BPMN).

To me, it makes no sense to re-invent the wheel and have orchestration engines just for the NFV environment, and different orchestration engines for Service Order Management, Resource Order Management, Customer Order Management, Service Assurance, Billing, Partner/Supplier Management and so on – all of these orchestration requirements could be handled by a single orchestration engine. Additionally, this would make disaster recovery simpler, faster and cheaper as well (fewer software components to be restored in a disaster situation).

Blockchain and BPM – follow up

Originally posted on 31May17 to IBM Developerworks (12,500 Views)

A link to this blog entry (link now broken) popped up in my LinkedIn feed today, which in turn linked to a Developerworks article – Combine business process management and blockchain (link now broken) – which steps you through a use case and allows you to build your own basic BPM & blockchain demo. Complex processes could save and retrieve data to/from a blockchain, ensuring that all processes in all organisations (within the same company and across company boundaries) are using the most up-to-date data.

I thought it would be appropriate to paste in a link, given my previous post on blockchain in telcos. As I think about this topic more, I can see a few more use cases in telecom. I’ll explore them in subsequent posts, but for now I think it’s important that we be pragmatic about this. Re-engineering processes to make good use of blockchain is non-trivial and therefore will have a cost associated with it. Will the advantages in transparency and resilience be worth the cost of making the changes? Speaking of resilience, don’t forget the damage that a failure can cause. British Airways’ IT system (which I believe is outsourced, but I cannot be sure) was down for the better part of three days – failures like that have the potential to bring down a business. We don’t know yet what will happen to BA in the long term, but you certainly don’t want the same sort of failure happening to your business.

Business Agility Through Standards Alignment To Ease The Pain of Major System Changes

Originally posted on 21Mar14 to IBM Developerworks (14,349 Views)

Why TMF Frameworx?

The TeleManagement Forum (TMF) have defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility – which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.

Without a Services Oriented Architecture (SOA) – which is the situation many CSPs are in currently – there is no common integration layer and no common way to perform the format transformations through which multiple systems can communicate correctly. A typical illustration of this point-to-point integration might look like the illustration to the right:

Each of the orange ovals represents a transformation of information so that the two systems can understand each other – each of which must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of integrating and maintaining such transformations, not to mention the cost of maintaining competency within the IT organisation.

A basic SOA environment introduces the concept of an Enterprise Service Bus (ESB), which provides a common way to integrate systems and a common way of building transformations between the information models used by multiple systems. The illustration below shows this basic Services Oriented Architecture – note that we still have the same number of transformations to build and maintain, but now they can be built using common methods, tools and skills.

If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the illustration below. Ensuring that all the traffic across the ESB is SID-aligned means that as the CSP changes systems (such as CRM or Billing), the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise be achieved. It will also reduce ongoing IT maintenance costs.
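The arithmetic behind this is worth spelling out: with point-to-point integration every pair of communicating systems needs its own transformation, while with a canonical model on the bus each system needs exactly one. A quick sketch (assuming, for simplicity, that every system talks to every other system):

```python
# Transformations needed to let n systems all talk to each other:
# point-to-point grows quadratically, a canonical model (e.g. the SID) linearly.

def point_to_point(n: int) -> int:
    return n * (n - 1) // 2   # one transformation per pair of systems

def canonical_model(n: int) -> int:
    return n                  # one transformation per system, to/from the SID

for n in (5, 10, 20):
    print(f"{n} systems: {point_to_point(n)} point-to-point "
          f"vs {canonical_model(n)} canonical transformations")
# 5 -> 10 vs 5; 10 -> 45 vs 10; 20 -> 190 vs 20
```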

As I’m sure you’re aware, most end-to-end business processes need to orchestrate multiple systems. If we take the next step and insulate those end-to-end business processes from the functions that are specific to the various end-point systems using a standard process framework such as eTOM, then business processes can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do), the end-to-end business processes will not need to change – in fact the processes will not even be aware that the end system has changed.

When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in effort required to introduce new systems into the CSP’s environment. Additionally, if the CSP decide to take a phased approach to the migration of the CRM systems (as opposed to a big bang) the eTOM aligned business processes can dynamically select which of the two CRM systems should be used for this particular process instance.
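A minimal sketch of what that dynamic selection might look like – the migration flag and the endpoint URLs are assumptions for illustration, not a real product interface:

```python
# Illustrative: an eTOM-aligned process choosing between legacy and new CRM
# during a phased migration. The migration set and endpoints are hypothetical.

MIGRATED_CUSTOMERS = {"CUST-0042"}   # fed from the migration tracking system

def crm_endpoint(customer_id: str) -> str:
    """Route each process instance to whichever CRM owns this customer today."""
    if customer_id in MIGRATED_CUSTOMERS:
        return "https://new-crm.example.com/api"
    return "https://legacy-crm.example.com/api"

print(crm_endpoint("CUST-0042"))  # already migrated -> new CRM
print(crm_endpoint("CUST-0007"))  # still on the legacy CRM
```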

What that means for the CSP.

Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP’s first priority; this will not only allow the integration and migration effort of subsequent major projects to be minimised, it will also reduce the time to market for new processes and products that the CSP might offer into the market.

Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, the alignment with the SID and eTOM within Mobitel allowed the merged organisation to meet the government’s deadlines for the specific target KPIs:

  • Be able to provide subscribers with a joint bill
  • Enable CSR from both organisations to sell/service products from both organisations
  • Offer a quad-play product that combined offerings from both Telekom Slovenia and Mobitel
  • All within six months.

Recommended Approach

When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make based on past observations of other CSPs that have undertaken multiple significant system replacement projects:

  1. Use TMF Frameworx to minimise integration work (this requires an integration and process orchestration environment, such as the one the ESB/SOA project is building, to be in place)
  2. Use TMF eTOM to build system independent business processes so that as those major systems change, end to end business processes do not need to change and can dynamically select the legacy or new system during the migration phases of the system replacement projects.
  3. To achieve 1 and 2, the CSP will need to have SOA and BPM infrastructure in place first that is capable of integrating with ALL of the systems within the CSP (not just limited to, say, CRM or ERP)
  4. If you have the luxury of time, don’t try to run the projects simultaneously, rather run them linearly. If this cannot be achieved due to business constraints, limit the concurrent projects to as few systems as possible, and preferably to systems that don’t have a lot of interaction with each other.

TeleManagement Forum Africa Summit 2012

Originally posted on 29Sep12 to IBM Developerworks (13,053 Views)

Last week, I was at the TeleManagement Forum’s (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications (I am Level 3 currently) in the process framework (eTOM) – if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark) so I don’t know if I did enough to get over the line. Regardless, the event was well attended, with 200-230 attendees over the two days of the conference.

It was interesting to hear the presenters’ thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure. In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different? I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple’s FaceTime video chat hasn’t taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made. Personally, I think that voice (despite its declining popularity, relatively speaking, in the developed world) will remain the key application – especially voice over LTE – for the foreseeable future in Africa. I also think that social networking (be it Facebook, Friendster, MySpace or some other Africa-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.

Telco standards gone, dead and buried

Originally posted on 22Aug12 to IBM Developerworks (13,006 Views)

Further to my last post, it now looks like the WAC is completely dead and buried. One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) – there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don’t intend to get into a big discussion about TelcoML, I do want to talk about telco standards in general. The telco standards that seem to take hold are the ones with a strong engineering background – I am thinking of networking standards like SS7, INAP, CAMEL, SigTRAN etc. – but the telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good – they make it easier and cheaper for telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any telco. So why don’t they stick? Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I’m thinking of the WAC here), yet nothing comes of them?

If we look at Parlay, for example – sure, CORBA is hard, so I get why it didn’t take off. But ParlayX with web services is easy: pretty much every IDE in the world can build a SOAP request from the WSDL for that web service (see the sketch after the list below for just how little client code is needed) – why didn’t it take off? I’ve spoken to telcos all around the world about ParlayX, but it’s rare to find one that is truly committed to the standard. Sure, the RFPs say ‘must have ParlayX’, but then after they implement the software (Telecom Web Services Server, in IBM’s case) they either continue to offer their previous in-house developed interfaces for those network services and don’t use ParlayX, or they just don’t follow through with their plans to expose the services externally. Why did we bother? ParlayX stagnated for many years with little real adoption from telcos. Along comes the GSMA with OneAPI and the mantra ‘ParlayX web services are still too complicated, let’s simplify them and also provide a REST-based interface’. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFPs) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn’t really exist any more and we have ParlayREST as a replacement. Will that get any more uptake? I don’t think so. The TMF Frameworx seem to have more adoption, but they are the exception to the rule. I am not really sure why telco standards efforts have such a tough time of it, but I suspect that it comes down to:

  • Lack of long-term thinking within telcos – there are often too many tactical requirements to be fulfilled and the long-term strategy never gets going (this is like governments with four-year terms never being able to get 20-year projects over the line – they’re too worried about getting the day-to-day things patched up and then getting re-elected)
  • Senior executives in telcos who truly don’t appreciate the benefits of standardisation – I am not sure if this is because executives come from a non-technical background or for some other reason.
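For what it’s worth, here is roughly how little client code a ParlayX-style sendSms call needs from a modern SOAP stack (the sketch mentioned above). The WSDL URL, sender name and addresses are placeholders, and parameter names vary between ParlayX versions, so treat this as a sketch rather than a working integration:

```python
# Sketch of a ParlayX-style SendSms call using the zeep SOAP client.
# The WSDL URL and parameter values are placeholders - check your
# gateway's actual WSDL; ParlayX versions differ slightly.

from zeep import Client

client = Client("https://gateway.example.com/parlayx/SendSmsService?wsdl")

result = client.service.sendSms(
    addresses=["tel:+61400000000"],   # destination MSISDN(s)
    senderName="12345",               # short code / sender id
    message="Hello from ParlayX",
)
print("request id:", result)
```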

What to do? I guess I will keep preaching about standards – they are fundamental to IBM’s strategy and operations, after all – and keep up with the new ones as they come along. Let’s hope that telcos start to understand why they should be using standards as much as possible; after all, standards will make their lives easier and their operations cheaper.

This Is Not a Test: The Emergency Alert System Is Worthless Without Social Networks

Originally posted on 17Nov11 to IBM Developerworks (11,306 Views)

Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Of particular interest is that the USA has avoided the SMS channel, whereas in Australia that has been the primary channel – alternatives like TV and radio are seen as less pervasive and thus a lower priority. I don’t think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS – the problem is getting everyone to “friend” the NEWS system so that they see updates and warnings!

New version of SPDE announced at TeleManagement World 2011

Originally posted on 26May11 to IBM Developerworks (12,948 Views)

Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for telecom that has been around since 2000. Over the years, it has evolved with changes in market requirements and architecture maturity. The link below is for the launch:

http://www-01.ibm.com/software/industry/communications/framework/index.html

The following enhancements are part of the new SPDE 4.0 Framework:

1. CSP Business Function Domains –  a clear articulation of “communications service provider business domains” that describe the business functions that are common to any service provider across the world.  These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:

  • Customer Management
  • Sales & Marketing
  • Operations Support
  • Subscriber Services
  • Corporate Management
  • Information Technology
  • Network Technology

2. New Capabilities – In the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.

3. Introduction of SPDE Enabled Business Projects – these deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.

4. Improved alignment with TeleManagement Forum (TMF) industry standards – a clearly defined depiction of the areas of alignment to TMF Frameworx, key industry standards that underpin much of the communications industry investment.

5. Simplified Graphics and Messaging – to improve ease of adoption and consumability by a broader LOB audience.

Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping deliver value-added services that launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change.  IBM’s Industry Frameworks play a critical role in our ability to deliver smarter planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs.