A few weeks ago, I attended a TMForum local event which had a number of different tracks for different topics. While I’d normally be attending the fulfilment track that my CEO Greg Tilton was running, I opted for the 5G track to check out an area I don’t normally get too involved in.
There was a lot of discussion on 5G slices. If you’re wondering what a 5G slice is, you can think of it as a subset of the bandwidth that’s dedicated to a particular purpose. That might be for a specific enterprise, for emergency services, or for a device class like IoT. If an enterprise customer wants to guarantee that they’ll have bandwidth no matter how busy a cell tower is, a 5G slice is the way to do that.
Let’s take the example of an enterprise customer buying a 5G slice for themselves. Let’s say that they also buy cloud compute and storage – perhaps resold through the Telco that’s providing the 5G slice – and that the same Telco is also providing an e-WAN to connect their headquarters with their branches all over the country, with a fail-over to the 5G slice that they bought. The e-WAN fibre network could be completely software defined (SD-WAN) for greater flexibility and resilience. The Telco might also resell Office 365, Dynamics CRM and other cloud applications… more and more, the Telco in this scenario is providing what in the past has been provided by the Enterprise’s IT department.
Are we seeing Telcos becoming the new IT department of the enterprise?
Over the years, many internal IT departments have shrunk, particularly in terms of the services that they provide in house. More and more, those departments are outsourcing software services to cloud providers like Salesforce.com, ServiceNow.com and (of course) inomial.net. As we see them also outsource infrastructure services to the likes of Amazon Web Services, Google Cloud and IBM Softlayer, there’s less and less for the IT departments to manage (particularly in terms of in-house server rooms).
Compare the current IT world with my first job in the sector way back in 1994, where I was responsible for the day-to-day running of:
A DEC VAX mini-computer
A Lotus Notes collaboration platform
A Lotus cc:Mail email platform
A bank of 36 modems for remote staff to connect back to the core systems
Changing backup tapes and taking them once a week to a bank safety deposit box for offsite storage
We didn’t have any other offices, so no real multi-site connectivity required, but if we did, back then it would have most likely been a leased ISDN line between the different sites.
Let’s compare with what I see in IT departments today…
Remote staff connectivity over the internet
Outsourced software platforms (providing capabilities like billing, CRM, Ticketing etc)
That doesn’t leave a lot of in-house services that the IT departments are providing.
That doesn’t mean that the Telcos are replacing the IT departments, but if a Telco is selling advanced services such as compute, storage, software and networks as a service, we could get to a situation where Telcos are providing more IT department services than not – it’s an opportunity for Telcos to elevate themselves up from the networks that they traditionally provide…
I see some Telcos reselling SaaS products of other cloud providers, but not a lot of them are tying it all together in a comprehensive offering to Enterprise customers in an effort to take over from the in house IT departments.
I watch some of the Internet’s stupidest conspiracy theories; chemtrails, jets don’t burn fuel, flat earth, hollow earth, moon landings were faked, vaccines cause autism (and apparently every other childhood ailment), naturopathy, chiropractic, HAARP weather modification etc – they all provide a bit of amusement to me when I consider how easy it is for some people to get sucked in.
Recently, I’ve been hearing more and more conspiracy theories around 5G mobile networks. Some more ridiculous than others:
5G causes cancer and other cellular mutations
5G causes headaches
5G causes weather problems such as storms
5G causes COVID-19 (SARS-Cov-2 Coronavirus)
5G causes trees to be deformed, losing branches and leaves
One of my family members has been spreading these conspiracy theories, trying to convince the rest of my family to sign a petition on some protest web site to prevent the roll out of 5G. She has been seeing conspiracy memes blaming 5G for mass bird death events and for wellness issues like nausea and headaches (allegedly suffered at the Glastonbury Music Festival – I’ll cover this one separately).
Let’s get a basic understanding of the facts first.
5G is the term used to identify the fifth generation of mobile phone technology – moving beyond 4G (generally Long Term Evolution, or LTE). 4G was itself an evolutionary change from 3G that did not require telcos to completely rebuild their networks; they just added some components to each cell site and turned it on. I know that’s a rash generalisation, but in simple terms that’s what needed to happen for the 4G rollout, and it adequately explains why 4G was rolled out around the world so much more quickly than 3G.
5G radio frequencies are higher on the electromagnetic radiation scale than previous generations of mobile phone networks but still much lower than radiation we’re exposed to from numerous other sources. 5G mobile network frequencies are split into two basic bands – the lower frequencies which are within the same range of previous generations and the higher frequency band in the 26-39 GHz range.
5G is not just about more bandwidth/speed for consumers, it’s also about supporting a much greater density of devices – this is key for the Internet of Things (IoT) becoming pervasive – more and more small devices connecting to improve our lives in lots of different ways.
The higher frequency and corresponding shorter wavelength has much less penetration power than longer-wavelength signals.
Because of the lower penetration power for (the higher frequency) 5G networks, the range of the network will be significantly reduced compared to previous generation networks. This means that a 5G network will need to have many more cells to cover the same geographic region. The distance between cell towers will be much shorter as a result.
So, should you be freaked out by 5G mobile networks or not?
The short answer is no. The long answer is that it’s complicated; long term testing of a 5G network’s effects on humans has not been conducted, so some caution should be exercised. That said, you should also consider the following:
We are constantly exposed to EMF radiation at higher frequencies and higher power than a 5G network could ever deliver and we don’t suffer any significant consequences from that.
The unlicensed ElectroMagnetic Field (EMF) spectrum around 2.4 and 5 GHz is widely used by WiFi networks, microwave ovens and other items around the house – this sits in the same range as the lower band of the 5G frequencies.
The incidence of brain cancer has not increased in the past 20 years, even as mobile phones have become more and more pervasive. Experts consider brain cancer the most likely cancer if mobile phone usage did have a causal effect, because of the phone’s proximity to the head when on a call – which is also when a phone’s power output is at its maximum.
Because of the weak penetration power of the 5G radio signal, it cannot penetrate more than 5-8mm beneath the skin of a person.
5G enabled phones have between 1 and 3 watts of transmission power – that’s very little power.
EMF radiation in the tens of GHz range does have the ability to excite molecules and apply a warming effect; however, because of the weak power levels, a human would not be able to detect such warming of their skin or sub-dermal tissue.
EMF below 750 THz (7.5 x 10^14 Hz) does not have enough energy to knock electrons out of their orbits and is called non-ionising radiation. Non-ionising radiation is not considered harmful by the health community. 5G has a much lower frequency (at most 39 GHz, or 3.9 x 10^10 Hz) than this.
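The non-ionising argument is easy to check with a back-of-the-envelope photon energy calculation. A sketch (the ionising boundary figure is approximate, as above):

```python
# Photon energy E = h * f: one way to see why 39 GHz radio is non-ionising.
PLANCK_H = 6.626e-34     # Planck constant, joule-seconds
EV_PER_JOULE = 6.242e18  # electron-volts per joule

def photon_energy_ev(freq_hz):
    """Energy carried by a single photon of the given frequency, in eV."""
    return PLANCK_H * freq_hz * EV_PER_JOULE

print(f"39 GHz (top 5G band): {photon_energy_ev(39e9):.1e} eV")
print(f"750 THz (ionising boundary): {photon_energy_ev(7.5e14):.1f} eV")
# A 39 GHz photon carries roughly 19,000 times less energy than light at the
# ionising boundary - far too little to knock an electron out of an atom.
```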
Don’t get freaked out by the term EMF radiation – you’re not going to grow extra eyes or get super powers!
The diagram below illustrates the EMF spectrum – from non-ionising all the way to harmful Gamma Rays (which are ionising – do you remember that from high school? That’s when I first learned about them).
With all the paranoia surrounding the 5G rollout, I’m constantly amazed by the lack of critical thinking skills being exhibited by so many people. For instance, the alleged health impact of the 5G trial at Glastonbury. With about 10 minutes of research online – being careful of sources – it seems that BT were planning to run a 5G trial at the Glastonbury music festival in 2019. These mass gatherings provide telcos with a unique test that they cannot achieve in a lab with a handful of 5G handsets. A local committee called Villagers Against Masts (VAM), who seem to have swallowed the 5G conspiracy theories hook, line and sinker, petitioned the local council to prevent the 5G trial. Frightened of the bad press they would get from VAM, the council prevented BT from running the trial. I could find no reports from reputable news sources indicating that anyone attending the festival suffered from headaches, nausea or other mysterious health conditions. It didn’t happen. Despite the fact that the 5G trial didn’t even go ahead, there were still memes published suggesting it caused all sorts of health issues for the festival goers.
So – a lack of critical thinking prevented all these conspiracy theorists from getting to the bottom of the real situation – instead they chose to accept a meme they saw on Facebook or Instagram as the truth.
Another instance of 5G supposedly damaging life is the claimed mass bird deaths. Again, a little bit of research online reveals that the locations of these mass bird death events are never where there is a 5G deployment. One example given to me as “proof” was a mass bird death event that happened in the Welsh countryside at Anglesey – follow the link and you’ll see how far Anglesey is from the major cities in the UK. Where do you think UK telcos are deploying 5G networks first? Yep – where their customers are – in the major cities. Nowhere near Anglesey. A lack of critical thinking is preventing people from dismissing these crazy conspiracy theories with a minimal amount of thought or effort.
Amid our current worldwide COVID-19 pandemic, I see the same lack of critical thinking being applied to claims that 5G causes COVID-19. Radio waves cannot cause a virus to develop. Wuhan, China was not the first city in the world to have 5G coverage – in fact its 5G coverage is quite spotty; London, England has much more coverage than Wuhan.
Think about it:
The virus may have started in Wuhan, but it has spread worldwide and the 5G deployments have only just begun in most countries – there is a distinct lack of a causal relationship.
Radio waves (EMF radiation) cannot cause mutation of viruses let alone magically create viruses or damage human cells.
The majority of COVID-19 cases in Australia trace their roots to the Ruby Princess cruise ship that docked in Sydney with multiple infections – the ship does not have a 5G network aboard and did not visit cities with 5G networks prior to arriving in Sydney with infected people on board.
I see the same pattern among flat earthers, moon landing deniers, anti-vaxxers and 5G conspiracists: a lack of critical thinking, and an eagerness to accept these crazy ideas because that’s all they’re being fed within their echo chamber bubbles on social media. That’s why these insane theories continue to circulate and continue to drag in new people who fail to exercise critical thinking.
Well – this post has gone a bit longer than I had intended. Sorry about that. All I ask is that you use critical thinking when you come across these crazy 5G conspiracies and encourage others to do the same.
While the world is in lockdown, I seem to be one of the lucky ones that can continue to work – in fact, I’m feeling as busy as I’ve ever been while at DGIT systems. For the most part, we’re all working hard to sell and deliver Quote Order Bill solutions to our Telco customers.
For those that are not as lucky as me, those that have lost their jobs, those that have been temporarily stood down, those that have had to leave their jobs to look after kids that are now having to be home or remote schooled, I feel for you. I’m not in any position to promise relief or to change the direction of this pandemic other than working from home and isolating – doing my little part to slow the path of this terrible disease.
So, I’d like to pass on my encouragement to everyone. Stay the course – isolate yourselves until the health professionals say you don’t need to. Those that need to look for new work, be patient and keep at it. Roll with these punches, and keep on keeping on. If we all do that, we’ll get through this crisis.
Since my last post, OneWeb launched 34 new low Earth orbit (LEO) communications satellites (in an effort to rival Starlink, but with a much less ambitious network), then, as the COVID-19 lockdown commenced in Europe and many other countries worldwide, abruptly declared bankruptcy. What a bizarre turn of events.
OneWeb’s constellation was to be made up of just under 650 satellites at approximately 1,200 km altitude, on polar orbits. For a summary of their plans, check out the most recent launch video below:
Now, 650 is a MUCH smaller network than Starlink’s planned LEO constellation of around 12,000 satellites, and some of the results showed it – greater latency than Starlink and lower throughput. Still, broadband speeds in excess of 400 Mbps and latency of 32 ms are a huge improvement on a traditional geostationary communications satellite. Back on 21Mar20, not quite three weeks ago as I write this post, OneWeb launched their third batch of 34 satellites from Baikonur Cosmodrome in Kazakhstan, giving them a total of 74 birds in the sky. I love a rocket launch, so here it is at t-10s for your enjoyment.
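The latency advantage over geostationary comes straight from orbital altitude. Here’s a rough sketch of the best-case round-trip propagation delay (signal straight up and back down twice, ignoring processing and ground routing):

```python
# Best-case round-trip time for a bent-pipe satellite link: the signal travels
# user -> satellite -> ground station and back, i.e. four altitude-length hops.
C = 299_792_458  # speed of light in a vacuum, metres per second

def min_rtt_ms(altitude_km):
    """Minimum physically possible round trip via a satellite overhead, in ms."""
    one_hop_s = altitude_km * 1000 / C
    return 4 * one_hop_s * 1000

print(f"OneWeb at 1,200 km: {min_rtt_ms(1200):.0f} ms minimum")
print(f"Geostationary at 35,786 km: {min_rtt_ms(35786):.0f} ms minimum")
# ~16 ms vs ~477 ms - which is why OneWeb's measured 32 ms beats any
# geostationary service by an order of magnitude.
```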
So, with a reasonable start to their network deployment, a 2021 launch of their service was looking good… until just six days later, when OneWeb filed for US Chapter 11 bankruptcy protection. I’m no accountant and I don’t pretend to understand US bankruptcy law, but this is not like bankruptcy in Australia, where creditors would move in and sell off the assets to try and recoup some of their money. In the USA, the company keeps operating and reorganises its finances to ease the burden on its creditors. In this case, OneWeb’s statement says they’re trying to reorganise their finances with a view to continuing operations (which at this point in the company’s life means network deployment). These steps resulted from finance negotiations that were progressing, but fell in a hole when the markets tanked as a result of the COVID-19 pandemic.
OneWeb have about half of their planned 44 base stations built and another 580 satellites to get into orbit. Let’s hope they can get their finances in order to finish out their build, because until they do, all they have is a lot of liability.
A quick comparison between OneWeb and Starlink reveals the obvious advantage that Starlink have right now:
This is all very interesting, but what’s it got to do with the Telecommunications industry?
Well, as these players move along with their deployments, it’s going to be harder and harder for Telcos as we know them now to compete against these new companies. If OneWeb can get past these financial problems – and that’s a whole other discussion given the state of the markets as a result of COVID-19 – we could see national CSPs that are focussed on (particularly) IP traffic (let’s face it, aren’t they all!) fall to these new competitors.
It all depends on companies like OneWeb and Starlink being able to survive the current financial crisis, on their price plans, and on whether they can live up to their promised performance. I can’t predict the future, so you and I will need to wait and see…
I’ve just watched the SpaceX launch of the latest batch of 60 Starlink satellites into low Earth orbit – aimed at providing low latency internet services all over the world. Initially, SpaceX are targeting the North American market – I mean, why wouldn’t they? The US has such a disjointed connectivity marketplace, with a mixture of Metro Area Networks (WiFi and WiMAX based) in small towns, LTE/5G in larger population centres, HFC cable and fibre connectivity options for fixed services and probably still a bit of xDSL running around… not to mention the oft complained about mobile network coverage. Starlink (despite being Internet rather than voice focused) has the potential to steal a lot of the subscribers that live in or travel to marginal coverage areas. Think of it – 100% coverage of North America at up to 10 Gbps – if the price is competitive, why wouldn’t you as a subscriber go with that option!
There were a few things that piqued my interest with this launch in particular:
The launch of these Starlink satellites came in close succession after the December ’19 launch of the Kacific comms satellite (ironically on a SpaceX Falcon 9), a more conventional geostationary communications satellite targeted at providing services to the South Pacific, SE Asia and Himalayan nations (not Australia) via Ka band radio (thus the name). They plan to provide services to over 600 million subscribers from the following countries (from https://www.kacific.com):
Federated States of Micronesia
Northern Mariana Islands
Papua New Guinea
Obviously, the bulk of those subscribers are going to come from Indonesia, the most populous country in their target list. It makes me wonder about the competition between Kacific and Starlink for those same subscribers once SpaceX establish their services in the North American market and spread their wings to the rest of the world…
The Starlink swarm of satellites has had astronomers up in arms because of the additional light and radio pollution these satellites add to the night sky, making it difficult for both visual and radio astronomers to get good observations. With more than 12,000 (!!!) Starlink satellites planned to go into orbit, we’re just seeing the beginning of this problem.
I noted during the latest launch coverage that the SpaceX presenter said one of the satellites launched today had been ‘darkened’ to reduce reflections, in the hope that it would lessen the effect on (at least) visual astronomy. Let’s hope it works.
For sure, these launches are great to watch and remind me of when I watched Apollo 17 launch as a boy (that’s the only one I remember from way back then) and the excitement I felt when I watched that launch…
Originally posted on 21Sep17 to IBM Developerworks (11,101 views)
An ex-colleague of mine (Violet Le – now the Marketing Director at Imageware) asked me about the drivers for analytics in Telcos. I’ll admit that it’s a subject that I haven’t really given a lot of thought to – all the projects that I’ve worked on in the past that included analytics had a larger business case that I was trying to solve: marketing, future planning, sales etc. I’ve never worked on an analytics project for the sake of analytics, nor have I designed a solution that was just (or mainly) analytics.
There is a definite value in analytics in providing an insight into how the business is running – to enable business to plan for the future and to manage how they run in the present. Both Strategic and Tactical cases for analytics would seem to me to be of value to any business. An analytics system that delivers insight into the business (customer behaviour, sales effectiveness, capacity usage and predictions etc) is great, but at the end of the day, a Telco needs to do something about that information/insight to actually deliver business benefits.
As I’m no analytics specialist, I won’t try to describe how to define or build those systems. What I will try to do is describe the bits around the analytics systems that make use of that insight to deliver real value for the CSP.
What are the business cases that I’ve seen?
Sales & Marketing
Driving promotions to positively affect subscriber retention or acquisition… I did a project with Globe Telecom in the Philippines which was primarily aimed at driving SMS based outbound marketing promotions targeted on subscriber behaviour. An example might be: if a subscriber had a pre-paid balance less than (say) 5 pesos and then topped up by more than 20 pesos but less than 50 pesos, send a promo encouraging the subscriber to top up by more than 100 pesos… all the interaction is via SMS (through a ParlayX SMS API).
Social networking analysis to determine who should be targeted. IBM’s Research group pushed for years a Social Networking Analysis capability that looked at social networking connections to determine which subscribers are followers and which are community leaders and influencers, and targeted campaigns based on that assessment.
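That trigger logic boils down to a simple rule. A minimal sketch – the thresholds mirror the example above, but the promo text and function names are mine, not Globe’s:

```python
# Hypothetical top-up-triggered promotion rule, per the example above.
LOW_BALANCE_PESOS = 5
MIN_TOPUP_PESOS = 20
MAX_TOPUP_PESOS = 50

def promo_for_topup(balance_before, topup_amount):
    """Return an SMS promo message if this top-up matches the campaign rule."""
    low_balance = balance_before < LOW_BALANCE_PESOS
    qualifying_topup = MIN_TOPUP_PESOS < topup_amount < MAX_TOPUP_PESOS
    if low_balance and qualifying_topup:
        return "Top up 100 pesos or more today for bonus load!"
    return None  # no campaign triggered; send nothing

print(promo_for_topup(3, 30))   # matches the rule -> promo SMS text
print(promo_for_topup(10, 30))  # balance wasn't low enough -> None
```

In a real deployment, the analytics engine would evaluate rules like this in real time against the charging system’s top-up events and hand matches to the SMS gateway.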
Ensuring utilisation of the network is optimised for the load requirements. I worked with a telco in Hong Kong that wanted to dynamically adjust the quality of service delivered to a specific user based on their location (in real time) and a historical analysis of the traffic on the network. For example, if a subscriber was entering an MTR (subway) station and the analytics showed that particular station typically got very high numbers of subscribers all watching YouTube clips at that time of day on that day of the week, then lower the QoS setting for that subscriber UNLESS they were a premium or post-paid customer, in which case keep the QoS settings the same. The rating as a premium subscriber could be derived from their past behaviour and spend – from a traditional analytics engine.
Long term planning of the network. SDN/NFV will allow networks to be more agile, which will reduce the need for traditional offline analytics to drive network planning and make the real time view more relevant as networks adapt to real time loads dynamically… as traffic increases in particular sections of the network, real time analytics and predictions will drive the SDN to scale up that part of the network on demand. This is where next gen AIs may be useful in predicting where the load will be in the network and then using SDN to increase capacity BEFORE the load is detected… think Watson from IBM and similar.
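In pseudo-code terms, that QoS decision is a lookup against the analytics output plus a subscriber-tier override. A sketch, with hypothetical station names, hours and tier labels:

```python
# Hypothetical real-time QoS decision driven by historical congestion analytics.
# An analytics engine would populate this set nightly; here it's hard-coded.
PREDICTED_CONGESTION = {("mtr_admiralty", 18)}  # (station, hour-of-day) pairs

def qos_for_subscriber(tier, station, hour):
    """Choose a QoS level: premium/post-paid subscribers are never throttled."""
    if tier in ("premium", "postpaid"):
        return "normal"
    if (station, hour) in PREDICTED_CONGESTION:
        return "reduced"  # predicted video rush hour at this station
    return "normal"

print(qos_for_subscriber("prepaid", "mtr_admiralty", 18))  # reduced
print(qos_for_subscriber("premium", "mtr_admiralty", 18))  # normal
```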
A few years ago, a number of ex colleagues (from IBM) formed a company on the back of real time marketing use case for Telcos and since then, they’ve gone ahead in leaps and bounds. (Check them out if you’re interested, the company name is Knowesis)
Do you have other significant use cases for analytics in a CSP? I’m sure there are some – I’m not claiming this is an exhaustive list, merely the cases that I’ve seen multiple times in my years as a solution architect focused on the telecommunications industry.
Originally posted on 4Sep17 to IBM Developerworks (10,430 Views)
I wouldn’t normally just post a link to someone else’s work here, but in this case Frank Wong – a colleague of mine at my new company (DGIT Systems) – has done some terrific work in helping to eliminate the mismatch between the data model used by the TMF’s REST based APIs and the TMF’s Information Model (SID). I know this was an issue that IBM were also looking to resolve. In the effort to encourage the use of a simple REST interface, the data model used in the TMF’s APIs has been greatly simplified from the comprehensive (some might say complex) data model that is the SID. This meant that a CSP using the SID internally to connect internal systems needed to map to the simplified API data model to expose those APIs externally – and there was no easy one-to-one mapping, which meant one could not simply create an API for an existing business service (eTOM or otherwise); a lot more custom data modelling work was required.
Originally posted on 30Aug17 to IBM Developerworks (11,517 Views)
Across many industries, including the Telecommunications sector, there seems to be a strong movement towards a MicroServices Architecture and (somewhat) away from Service Oriented Architecture. I’ve seen this move in a CSP here in Australia. The TeleManagement Forum have a significant project that is trying to standardise the REST APIs that a CSP might publish.
The TMF state:
“TM Forum’s Open API program is a global initiative to enable end to end seamless connectivity, interoperability and portability across complex ecosystem based services. The program is creating an Open API suite which is a set of standard REST based APIs enabling rapid, repeatable, and flexible integration among operations and management systems, making it easier to create, build and operate complex innovative services. TM Forum REST based APIs are technology agnostic and can be used in any digital service scenario, including B2B value fabrics, Internet of Things, Smart Health, Smart Grid, Big Data, NFV, Next Generation OSS/BSS and much more.”
“TM Forum is bringing different stakeholders from across industries to work together and build key partnerships to create the APIs and connections. The reference architecture and APIs we are co-creating are critical enablers of our API program and open innovation approach for building innovative new digital services in a number of key areas, including IoT applications, smart cities, mobile banking and more.”
Laurent Leboucher, Vice President of APIs & Ecosystems, Orange
I’ve been a part of a number of projects where these REST APIs have been exposed primarily to a CSP’s trading partners – my very first Service Delivery Platform exposed APIs to external developers. Back then, it was ParlayX Web services that exposed the functionality of network elements to 3rd party developers (REST didn’t really exist and there were certainly no Telco standards in place for REST based interfaces). Many of the APIs that the TMF have defined seem to be more focused on OSS/BSS functions instead. Now that the TMF have quite a number of Open APIs defined, some network focused APIs are coming onto the list – for instance, a Location API that would typically have been exposed using the ParlayX Web Services or ParlayREST interfaces to the network’s Location Based Server (LBS). As a result, there does seem to be a small amount of crossover between the new TMF APIs and the older ParlayREST APIs.
Does this mean that the new TMF OpenAPIs are of no use? Not at all. There are certainly advantages to exposing functions that a CSP has to external developers and REST based OpenAPIs make the consumption of those functions easier than the ParlayX web services or Parlay CORBA services have been in the past. Ease of consumption is not to be underestimated. An API that is easy to include in an application and provides a real capability that would have been otherwise difficult to provide stands a much greater chance of wide usage.
Sure, there is a place for externalising the OSS/BSS functions of a CSP. Trading partners could place orders against a CSP, they could bill to a subscriber’s post or pre-paid accounts, they could update the subscriber profile held by the CSP. All relevant use cases for externalising the TMF Open APIs.
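To make the ease-of-consumption point concrete, here’s a sketch of what placing an order through a TMF-style Product Ordering endpoint might look like. The host is a placeholder and the body is heavily simplified from the real TMF622 schema:

```python
# Assembling a POST against a hypothetical CSP's TMF-style Product Ordering API.
import json
import urllib.request

BASE = "https://api.example-csp.com/tmf-api/productOrderingManagement/v4"

def build_order_request(product_offering_id):
    """Build (but don't send) a simplified productOrder creation request."""
    body = {
        "orderItem": [
            {"action": "add", "productOffering": {"id": product_offering_id}}
        ]
    }
    return urllib.request.Request(
        f"{BASE}/productOrder",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_order_request("enterprise-5g-slice")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would submit it to the (placeholder) host
```

The point is that a trading partner needs nothing more than HTTP and JSON to consume this, where the old ParlayX web services required SOAP tooling.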
The big question in my mind is will REST APIs be of use internally?
REST based APIs are easier to integrate, and that will drive some value internally. But in CSPs that have significant investments in a Service Oriented Architecture (SOA), I’m struggling to see the business value in abandoning that in favour of a MicroServices Architecture where there is no common integration tool and no common orchestration capability – rather lots and lots of point to point integrations through REST APIs.
For those of us that have been around a while, you will have seen point to point integrations and the headaches they cause – complex dependencies in mesh architectures make maintenance hard and expensive. Changing a (say) billing system that is integrated through multiple point to point connections is a nightmare, even if a standardised API describes those interfaces. The plain truth of the matter is that not all of those interfaces will be adequately described by the TMF’s Open APIs, so custom API specifications will arise and make swapping out the billing system expensive. Additionally, not all of a CSP’s internal systems will have TMF Open API compliant interfaces – many won’t even support REST interfaces natively. Changing all of a CSP’s systems to ensure they have a REST interface is a non-trivial task.
A Hybrid environment may be needed.
I’d suggest that a hybrid approach is needed – existing Enterprise Service Buses may be able to interface with REST APIs. Certainly IBM’s Integration Bus and the (now superseded) WebSphere Enterprise Service Bus could connect to REST APIs just as easily as they could connect to Web Services, files and other connectivity options. The protocol transformation capabilities of an ESB can provide REST APIs to systems that would otherwise not support such modern interfaces. Similarly, where a function is not provided by a single system, a traditional orchestration (BPM) capability can coordinate multiple systems to provide a single interface to that capability, even if (behind the scenes) there are multiple end point systems involved in providing the functionality of that transaction/interface. The diagram below shows my thinking of what should be in place…
Originally posted on 6Jun17 to IBM Developerworks (11,950 Views)
Think about it – orchestration is everywhere in a Telco – the Order to Cash process, The Ticket to Resolution process, the service and resource fulfilment process and even the NFV MANO processes. Orchestration is everywhere…
There is a hierarchy to processes in a Telco – just as the TMF recognises that there is a hierarchy in business services (within the eTOM Process Framework). At the highest level, the Order to Cash process might look like this:
Each task in this swimlane diagram will have multiple sub-processes. If we delve down into the provision resources task for instance, a CSP will need processes that will interrogate the resource catalog and network inventory to determine where in the network that resource can be put and what characteristics need to be set, then tell the resource manager to provision that resource. If it’s a physical resource, that may involve allocating a technician to install the physical resource. If it’s a virtual resource such as a Virtual Network Function (VNF) then the Network Function Virtualisation (NFV) orchestration engine will need to be told to provision that VNF. If we go one level deeper, the NFV Orchestration engine will need to tell the NFV Manager to provision that VNF and then update the network inventory.
Perhaps the diagram below will help you to understand what I mean:
This diagram is a very simplified hierarchical process model designed to show the layers of process. As you can see, there are many layers of orchestration required in a CSP and as long as the orchestration engine is flexible enough and can handle the integration points with the many systems it needs to interact with, there is no real reason why the same orchestration engine couldn’t be used by all levels of process.
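To illustrate the idea of one engine serving every layer, here’s a toy sketch where a single generic engine drives the customer order, resource provisioning and NFV layers. The engine, step names and context keys are all invented for illustration:

```python
# One generic "engine" used at every layer: run steps in order, sharing context.
def run(steps, context):
    for step in steps:
        step(context)
    return context

def provision_vnf(ctx):       # lowest layer: NFV orchestration
    ctx["vnf"] = f"vnf-for-{ctx['resource']}"

def provision_resource(ctx):  # resource layer delegates down via the same engine
    ctx["resource"] = f"res-for-{ctx['order']}"
    run([provision_vnf], ctx)

def take_order(ctx):          # top layer: customer order management
    ctx["order"] = ctx["request"]
    run([provision_resource], ctx)

result = run([take_order], {"request": "broadband-100"})
print(result["vnf"])  # vnf-for-res-for-broadband-100
```

Each layer calls the same `run` engine on its own sub-process, which is the point: the hierarchy of processes doesn’t demand a hierarchy of different orchestration products.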
Over the past couple of years, as NFV has risen significantly in popularity and interest, I’ve seen many players in the market talk about orchestration engines that just handle NFV orchestration and nothing else. To me, that seems like a waste. Why put in an orchestration engine that is just used for NFV when you still need orchestration engines for the higher process layers as well? I’d suggest that a common orchestration and common integration capability makes the most sense, delivering:
High levels of reuse
Maximising utilisation of software capabilities
Common Admin and Development skills for all levels of process (be they business focussed or service or resource focussed)
Common Integration patterns (enabling developers and management staff to work across all layers of the business)
Greater Business Agility – able to react to changing business and technical conditions faster
There are a number of integration platforms – typically marketed as Enterprise Service Buses (ESBs) – that can handle integration through Web Services, XML/HTTP, File, CORBA/IIOP, even Socket/RPC connections for those legacy systems that many telcos still have hanging around. An ESB can work well in a MicroServices environment too – so don’t think that just because you have an ESB you’re fighting against MicroServices – you are not. MicroServices can make use of the ESB for connectivity to conventional Web Services (SOA) as well as legacy systems.
A common orchestration layer would drive consistency in processes at all layers of a Telco. There are a number of Business Process Management orchestration engines out there that have the flexibility to work with the integration layer to orchestrate processes from the lowest level (such as within a Network Function Virtualisation (NFV) environment) all the way up to the highest levels of business process. The orchestrations should be defined in a standard language such as Business Process Execution Language (BPEL) or Business Process Model and Notation (BPMN).
To me, it makes no sense to re-invent the wheel and have one orchestration engine just for the NFV environment and different orchestration engines for Service Order Management, Resource Order Management, Customer Order Management, Service Assurance, Billing, Partner/Supplier management etc – all of these orchestration requirements could be handled by a single orchestration engine. Additionally, this would make disaster recovery simpler, faster and cheaper (fewer software components to be restored in a disaster situation).
Originally posted on 31May17 to IBM Developerworks (12,500 Views)
A link to this blog entry (link now broken) popped up in my LinkedIn feed today, which in turn linked to a Developerworks article – Combine business process management and blockchain (link now broken) – which steps you through a use case and lets you build your own basic BPM and Blockchain demo. Complex processes could save and retrieve data to/from a blockchain, ensuring that every process in any organisation (within the same company and across company boundaries) is using the most up to date data.
I thought it would be appropriate to paste in the link given my previous post on Blockchain in Telcos. As I think about this topic more, I can see a few more use cases in Telecom. I’ll explore them in subsequent posts, but for now I think it’s important that we be pragmatic about this. Re-engineering processes to make good use of blockchain is non-trivial and will therefore have a cost associated with it. Will the advantages in transparency and resilience be worth the cost of making the changes? Speaking of resilience, don’t forget the damage that a failure can cause: British Airways’ IT systems (which I believe are outsourced, but I cannot be sure) were down for the better part of three days – failures like that have the potential to bring down a business. We don’t know yet what will happen to BA in the long term, but you certainly don’t want the same sort of failure happening to your business.