Blockchain in Telcos?

Originally posted on 23may17 to IBM Developerworks (16,554 Views)

If you, like me, are hearing ‘blockchain this, blockchain that’, it almost seems like blockchain will bring about world peace, solve global hunger and feed your pets for you!  We’re obviously at the ‘peak of inflated expectations’ of the Gartner hype cycle.

I saw a tweet yesterday from an ex-colleague at IBM that spoke about using blockchain to combat fraud in a Telco. While I can see that as a possible use case, I was thinking about other opportunities for blockchain.

Perhaps I need to explain blockchain briefly so that those who aren’t familiar with it can also follow the Telecom use cases for blockchain. Wikipedia defines it like this:

“A blockchain… is a distributed database that maintains a continuously growing list of records, called blocks, secured from tampering and revision. Each block contains a timestamp and a link to a previous block. By design, blockchains are inherently resistant to modification of the data — once recorded, the data in a block cannot be altered retroactively. Through the use of a peer-to-peer network and a distributed timestamping server, a blockchain database is managed autonomously. Blockchains are ‘an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. The ledger itself can also be programmed to trigger transactions automatically.’”

So, it’s an immutable record of changes to something.  Thinking about that yesterday, a number of Telecom use cases came to mind that could use blockchain. I’m not suggesting that Telcos should use blockchain or that it’s needed, just that they could. These are the use cases I came up with:

  • Fraud prevention: being immutable makes it harder to ‘slip one by’ the normal accounting checks and balances that any large company has. I suppose the real question is ‘exactly which records need to be stored in a blockchain to enable that fraud prevention?’ The obvious one is the billing records.
  • Billing – maintaining the state of post-paid billing accounts: who is making payments, billing amounts and other billing events (such as rate changes, grace periods etc.)
  • Tracking changes to the network – at the moment, many of the changes being made in a Telco’s network may be made by staff, but increasingly, maintenance and management of the network is being outsourced to external companies, and you want to keep an eye on them to ensure they’re doing what they say they’re doing. In the new world of Software Defined Networks (SDN) utilising Network Function Virtualisation (NFV) to build and change the network architecture at a rate we’ve not seen before, it becomes important for a Telco to be able to track changes to the network to diagnose faults and customer complaints. Over a 24 hour period, a path on a network that supports enterprise customer X may change tens of times – a much higher frequency than would be possible if the network elements were physical.
  • Tracking changes to accounts by customers and Telco staff – I could imagine a situation where a customer claims that they didn’t request a configuration change; a blockchain-based record of changes could be used to track back through all the changes in a customer’s account to determine what happened and when – potentially enabling a Telco to limit its liability to the customer… or vice versa…
  • Tracking purchases – a blockchain record of purchases would allow a CSP to rebuild a customer’s liability from base information, provided there was an immutable record of the data records as well…
  • xDRs – any type of Data Record (CDRs, EDRs…) could be stored in a blockchain to facilitate rebuilding of a client’s history and billing records from base data. The problem with using a blockchain to store xDRs is the size requirement. I know that large CSPs in India, for example, produce between five and ten BILLION records per day. It wouldn’t take long for that to build up to a very large storage requirement – even if you store only the mediated data records, it’s going to be very large. I guess the question is: ‘what is the return on investment?’ – is it worthwhile doing? I can’t think of a business case to justify such an investment, but there may be one out there.
  • Assurance events – recording records associated with trouble tickets and problem resolution.
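To make the ‘immutable record’ idea behind all these use cases concrete, here is a minimal sketch in Python of how billing or account-change events could be chained together by hash so that any retroactive edit is detectable. The record fields are invented for illustration; a real deployment would of course use a proper distributed ledger, not a single in-memory list:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # hash the block's canonical JSON form, which includes the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev_hash": prev, "record": record}
    block["hash"] = block_hash({"prev_hash": prev, "record": record})
    chain.append(block)

def verify(chain: list) -> bool:
    # tampering with any earlier record breaks every later link in the chain
    prev = "0" * 64
    for b in chain:
        if b["prev_hash"] != prev:
            return False
        if b["hash"] != block_hash({"prev_hash": b["prev_hash"], "record": b["record"]}):
            return False
        prev = b["hash"]
    return True

chain = []
append_block(chain, {"event": "rate_change", "account": "A-1", "rate": 0.05})
append_block(chain, {"event": "payment", "account": "A-1", "amount": 20.0})
assert verify(chain)
chain[0]["record"]["rate"] = 0.01   # tamper with history...
assert not verify(chain)            # ...and verification fails
```

The point of the sketch is simply that each block commits to its predecessor, so ‘slipping one by’ the accounting checks requires rewriting every subsequent block.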

I don’t for a second think that all of these can be justified in terms of cost/benefit analysis, but I could see blockchain being used in these scenarios. 

Do you have any ideas? Please leave a comment below.

<edit>

I realise I missed the usual business case that blockchain is used for – a financial ledger. Obviously storing a CSP’s financial data in a blockchain would work (and make sense) as it would in ANY other enterprise. I really wanted to illustrate the CSP specific use cases for blockchain.

</edit>

Business Agility Through Standards Alignment To Ease The Pain of Major System Changes

Originally posted on 21Mar14 to IBM Developerworks (14,349 Views)

Why TMF Frameworx?

The TeleManagement Forum (TMF) has defined a set of four frameworks collectively known as Frameworx. The key frameworks that will deliver business value to the CSP are the Information Framework (SID) and the Process Framework (eTOM). Both of these can deliver increased business agility – which will reduce time to market and lower IT costs. In particular, if a CSP is undertaking multiple major IT projects in the near term, TMF Frameworx alignment will ease the pain associated with those major projects.

Without a Services Oriented Architecture (SOA) – the current situation in many CSPs – there is no common integration layer and no common way to perform the format transformations needed so that multiple systems can communicate correctly. A typical illustration of this point-to-point integration might look like the illustration to the right:

Each of the orange ovals represents a transformation of information so that the two systems can understand each other – each of which must be developed and maintained independently. These transformations will typically be built with a range of different technologies and methods, thus increasing the IT costs of integrating and maintaining such transformations, not to mention maintaining competency within the IT organisation.

A basic SOA environment introduces the concept of an Enterprise Service Bus (ESB), which provides a common way to integrate systems together and a common way of building transformations between the information models used by multiple systems. The illustration below shows this basic Services Oriented Architecture – note that we still have the same number of transformations to build and maintain, but now they can be built using common methods, tools and skills.

If we now introduce a standard information model such as the SID from the TeleManagement Forum, we can reduce the number of transformations that need to be built and maintained to one per system, as shown in the illustration below. Ensuring that all the traffic across the ESB is SID-aligned means that as the CSP changes systems (such as CRM or Billing) the effort required to integrate the new system into the environment is dramatically reduced. That will enable the introduction of new systems faster than could otherwise have been achieved. It will also reduce the ongoing IT maintenance costs.
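The arithmetic behind that reduction is worth sketching. A back-of-the-envelope illustration (assuming the worst case, where every system must exchange data with every other system):

```python
def point_to_point_transforms(n: int) -> int:
    # without a canonical model: one transformation per pair of systems
    return n * (n - 1) // 2

def canonical_model_transforms(n: int) -> int:
    # with a shared model like the SID: one transformation per system,
    # to/from the canonical form carried on the ESB
    return n

for n in (4, 8, 16):
    print(f"{n} systems: {point_to_point_transforms(n)} point-to-point "
          f"vs {canonical_model_transforms(n)} with a canonical model")
```

The point-to-point count grows quadratically while the canonical-model count grows linearly, which is why the savings become dramatic precisely when a CSP has many systems to integrate.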

As I’m sure you’re aware, most end to end business processes need to orchestrate multiple systems. If we take the next step and insulate those end to end business processes from the functions that are specific to the various end-point systems, using a standard process framework such as eTOM, then business processes can be independent of systems such as CRM, Billing, Provisioning etc. That means that if those systems change in the future (as many CSPs are looking to do), the end to end business processes will not need to change – in fact the process will not even be aware that the end system has changed.

When changing (say) the CRM system, you will need to remap the eTOM business services to the specific native services and rebuild a single integration and a single transformation to/from the standard data model (SID). This is a significant reduction in the effort required to introduce new systems into the CSP’s environment. Additionally, if the CSP decides to take a phased approach to the migration of the CRM systems (as opposed to a big bang), the eTOM-aligned business processes can dynamically select which of the two CRM systems should be used for a particular process instance.
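That dynamic selection can be sketched very simply. This is only an illustration of the routing decision; the account IDs, endpoint names and migration marker are all invented, and in practice the decision would live in the process orchestration layer rather than a lookup table:

```python
# hypothetical marker for accounts already moved to the new CRM
MIGRATED_ACCOUNTS = {"acct-1001", "acct-1002"}

def select_crm(account_id: str) -> str:
    """Route an eTOM-aligned business service to the right CRM
    during a phased migration."""
    return "new-crm" if account_id in MIGRATED_ACCOUNTS else "legacy-crm"

print(select_crm("acct-1001"))  # routed to the new CRM
print(select_crm("acct-9999"))  # still served by the legacy CRM
```

Because the end to end process only ever calls the standard business service, adding this one routing decision is the entire cost of running both CRMs side by side during the migration.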

What that means for the CSP.

Putting in place a robust integration and process orchestration environment that is aligned to TMF Frameworx should be the CSP’s first priority; this will not only allow the integration and migration efforts of subsequent major projects to be minimised, it will also reduce the time to market for new processes and products that the CSP might offer into the market.

Telekom Slovenia is a perfect example of this. When the Slovenian government forced Mobitel (Slovenia) and Telekom Slovenia to merge, Mobitel’s existing alignment with the SID and eTOM allowed the merged organisation to meet the government’s deadlines for the specific target KPIs:

  • Be able to provide subscribers with a joint bill
  • Enable CSRs from both organisations to sell/service products from both organisations
  • Offer a quad-play product that combined offerings from both Telekom Slovenia and Mobitel
  • All within six months.

Recommended Approach

When a CSP is undertaking multiple concurrent major IT replacement projects, there are a number of recommendations that IBM would make based on past observations of other CSPs that have undertaken multiple significant system replacement projects:

  1. Use TMF Frameworx to minimise integration work (this requires an integration and process orchestration environment, such as the one the ESB/SOA project is building, to be in place)
  2. Use TMF eTOM to build system independent business processes so that as those major systems change, end to end business processes do not need to change and can dynamically select the legacy or new system during the migration phases of the system replacement projects.
  3. To achieve 1 and 2, the CSP will need to have SOA and BPM infrastructure in place first that is capable of integrating with ALL of the systems within the CSP (not just limited to, say, CRM or ERP)
  4. If you have the luxury of time, don’t try to run the projects simultaneously; rather, run them linearly. If this cannot be achieved due to business constraints, limit the concurrent projects to as few systems as possible, and preferably to systems that don’t have a lot of interaction with each other.

Study: VoLTE slashes smartphone battery life by 50% – FierceBroadbandWireless

Originally posted on 30Nov12 to IBM Developerworks ( 11,011 Views)

Operators hoping to engage in widespread deployment of voice over LTE in order to gain spectral efficiencies in their network may face some unhappy customers because one vendor’s recent tests showed that VoLTE calls can slash a device’s talk-time battery life by half.
Here is the URL for this bookmark: http://www.fiercebroadbandwireless.com/story/study-volte-slashes-smartphone-battery-life-50/2012-11-27

For years now, we’ve known that higher speed mobile networks would mean more power required in handsets to maintain the higher bandwidth connections. I recall it being raised as a concern when UMTS (3G) was being rolled out while GPRS and EDGE were the dominant technologies in the mobile data networks. In fact, while I am travelling, I often switch off my 3G/3.5G network capability and drop back to GPRS or EDGE just to make my battery last through the day. It’s interesting that it has been quantified like this.

When you think about it though, it makes sense. VoLTE (Voice over LTE) is not using a traditional GSM or CDMA circuit, rather it is using a packet data network to encapsulate the voice traffic – so it is voice over a data network.  We’ve known for a long time that data traffic (particularly higher speed data traffic) uses a lot more power than voice traffic.  More power equals less talk time from the same charge. 

This study is a US-based one, so it brings the baggage of CDMA rather than the GSM that the rest of the world uses, but I think there are lessons here for the GSM carriers around the world too. CDMA battery life (in my experience) has been on a par with GSM battery life, so I think it would be reasonable to equate the CDMA battery life in this study with GSM battery life.

Battery drain - CDMA vs VoLTE
Unfortunately, FierceWireless no longer appears to exist, thus the diagram and the article have disappeared… 🙁

I am seeing more and more countries around the world clawing back the 2G spectrum for use with Digital TV, LTE or other local requirements. At some point in the future (at least for some markets) the only voice traffic will be VoLTE, and those subscribers will have severely reduced standby and talk time compared to mobile phones of a few years back. Will that lead to a backlash in the community? By that point it may be too late, with the spectrum redeployed for other uses. Will we end up with VoLTE being the only voice option in some countries while others still have CDMA or GSM voice networks – and will that complicate things for phone manufacturers? Remember the days of so-called ‘Global phones’ that had to cater for all the different spectrum bands used around the world? Yes, multi-band phones became pervasive, but will Global Phones that retain backward compatibility with GSM networks be so popular when the primary channel for mobile phone distribution is still the carriers themselves – carriers that have committed to VoLTE in their own country?

Who knows. I do think that we’ll end up with a big group of primarily voice subscribers who aren’t going to be happy campers!

TeleManagement Forum Africa Summit 2012

Originally posted on 29Sep12 to IBM Developerworks (13,053 Views)

Last week, I was at the TeleManagement Forum’s (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications (I am currently Level 3) in the process framework (eTOM) – if I have passed the exam, I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don’t know if I did enough to get over the line. Regardless, the event was well attended, with 200-230 attendees for the two days of the conference.

It was interesting to hear the presenters’ thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure. In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different? I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple’s FaceTime video chat hasn’t taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made.

Personally, I think that voice (despite its relatively declining popularity in the developed world) will remain the key application in Africa for the foreseeable future, especially voice over LTE. I also think that social networking (be it Facebook, Friendster, MySpace or some other Africa-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.

High Availability – unbelievable claims

Originally posted on 06Sep12 to IBM Developerworks (15,303 Views)

The other day, I was at a customer proof of concept, where the customer asked for 99.9999% availability within the Proof of Concept environment. Let me explain briefly the environment for the Proof of Concept – we were allocated ONE HP Proliant server, with twelve cores and needed to run the following:

  • IBM BPM Advanced (BPM Adv)
  • WebSphere Operational Decision Management (WODM)
  • WebSphere Services Registry & Repository (WSRR)
  • Oracle DB (not sure what version the customer installed). 

Obviously we needed to use VMWare to deploy the software, since installing all of the software directly on the server (and being able to demonstrate any level of redundancy) would be impossible. Any of you that understand High Availability as I do would say it can’t be done in a Proof of Concept – and I agree. Yet our competitor claims they have demonstrated six nines (99.9999% availability) in this Proof of Concept environment – deployed on the customer’s hardware; hardware that did not have any redundancy at all. I call shenanigans on the competitor’s claims. Unfortunately for us, the customer swallowed the claim hook, line and sinker.

I want to explain why their claim of six nines cannot be substantiated and why the customer should be sceptical as soon as a vendor – any vendor – makes such claims. First, let’s think about what 99.9999% availability really means. To quantify that figure: it means 31.5 seconds of unplanned downtime per year! For a start, how could you possibly measure availability for a year over a two-week period? Our POC server VMs didn’t crash for the entire time we had them running – does that entitle us to claim 100% availability? No way. The simple fact is that the Proof of Concept was deployed in a virtualised environment on a single physical machine – without redundant hard drives or power supplies – so there is no way we or our competition could possibly claim any level of availability given the unknowns of the environment.
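The downtime arithmetic behind those nines is easy to check for yourself; a quick sketch:

```python
def max_downtime_seconds(nines: int, period_seconds: int = 365 * 24 * 3600) -> float:
    """Unplanned downtime budget over one year at 'n nines' of availability."""
    # e.g. six nines = 99.9999% available = 0.0001% unavailable
    return period_seconds * 10 ** (-nines)

for n in (3, 5, 6):
    print(f"{n} nines: {max_downtime_seconds(n):.1f} seconds of downtime per year")
```

Six nines works out to about 31.5 seconds per year, which is why a two-week observation window on a single server says nothing at all about it.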
In order to achieve high levels of availability, there can be no single point of failure: no failure points in the Network, the Hardware or the Software. For example, that means:

  • Hardware
    • Multiple redundant Network Interface Connectors
    • RAID 1+0 drive array,
    • Multiple redundant power supplies,
    • Multiple redundant network switches,
    • Multiple redundant network backbones
  • Software
    • Hardened OS
    • Minimise unused OS services
    • Use Software clustering capabilities (WebSphere n+x clustering *)
    • Active automated management of the software and OS
    • Database replication / clustering (eg Oracle RAC or DB2 HADR)
    • HA on network software elements (eg DNS servers etc)

We need to go back to the Telco and impress upon them that six nines availability depends on all of the above factors (and probably some others!) and is not just about measuring the availability of the software over a short (and non-representative) sample period.

Typically this level of HA is very expensive; indeed, every additional ‘9’ increases the cost exponentially – that is, six nines (99.9999% availability) is exponentially more expensive than five nines (99.999% availability). I found this great diagram that illustrates the cost versus HA level.

This diagram is actually from an IBM Redbook (see http://www.redbooks.ibm.com/redbooks/pdfs/sg247700.pdf ), which has a terrific section on High Availability – it illustrates how there is a compromise point between the level of high availability (aiming for continuous availability) and the cost of the infrastructure to provide that level of availability.


* Note:

  • n is number of servers needed to handle load requirements
  • x is the number of redundant nodes in the cluster – to achieve six 9s, this should be in excess of 2
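The effect of those redundant nodes can be made concrete with an idealised model. This assumes node failures are fully independent, which real deployments (shared power, network, storage) rarely deliver, so treat it as an upper bound rather than a prediction:

```python
from math import comb

def cluster_availability(node_avail: float, n: int, x: int) -> float:
    """Probability that at least n of n+x independent nodes are up,
    i.e. the cluster can still carry its full load."""
    total = n + x
    return sum(comb(total, k) * node_avail**k * (1 - node_avail)**(total - k)
               for k in range(n, total + 1))

# nodes that are individually only 99% available, with n=2 needed for the load:
for x in (0, 1, 2):
    print(f"n=2, x={x}: {cluster_availability(0.99, 2, x):.8f}")
```

Under these (optimistic) independence assumptions, each extra redundant node buys roughly two more nines, which is why the note above calls for more than two redundant nodes to reach six 9s.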

Telco standards gone, dead and buried

Originally posted on 22Aug12 to IBM Developerworks (13,006 Views)

Further to my last post, it now looks like the WAC is completely dead and buried. One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) – there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don’t intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.

The Telco standards that seem to take hold are the ones with a strong engineering background – I am thinking of networking standards like SS7, INAP, CAMEL, SIGTRAN etc. – but the Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good – they make it easier and cheaper for Telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any Telco. So, why don’t they stick? Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I’m thinking of the WAC here), yet nothing comes of them?

If we look at Parlay, for example: sure, CORBA is hard, so I get why it didn’t take off. But ParlayX with web services is easy – pretty much every IDE in the world can build a SOAP request from the WSDL for that web service – so why didn’t it take off? I’ve spoken to telcos all around the world about ParlayX, but it’s rare to find one that is truly committed to the standard. Sure, the RFPs say ‘must have ParlayX’, but then after they implement the software (Telecom Web Services Server in IBM’s case) they either continue to offer their previous in-house developed interfaces for those network services and don’t use ParlayX, or they just don’t follow through with their plans to expose the services externally. Why did we bother? ParlayX stagnated for many years with little real adoption from Telcos.
Along comes the GSMA with OneAPI and the mantra ‘ParlayX web services are still too complicated; let’s simplify them and also provide a REST-based interface’. No new services, just the same ones as ParlayX, but simplified. Yes, I responded to a lot of Requests For Proposal (RFPs) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to 3rd party developers as they originally intended. So now OneAPI doesn’t really exist any more and we have ParlayREST as a replacement. Will that get any more take-up? I don’t think so. The TMF Frameworx seem to have more adoption, but they are the exception to the rule. I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:

  • Lack of long term thinking within telcos – there are often too many tactical requirements to be fulfilled, and the long term strategy never gets going (this is like governments with four-year terms not being able to get 20-year projects over the line – they’re too worried about getting the day to day things patched up and then getting re-elected)
  • Senior executives in Telcos who truly don’t appreciate the benefits of standardisation – I am not sure if this is because executives come from a non-technical background or for some other reason.

What to do? I guess I will keep preaching about standards – they are fundamental to IBM’s strategy and operations, after all – and keep up with the new ones as they come along. Let’s hope that Telcos start to understand why they should be using standards as much as possible; after all, standards will make their lives easier and their operations cheaper.

WAC Whacked: Telecom-Backed Alliance Merges Into GSMA, Assets Acquired By API Management Service Apigee | TechCrunch

Originally posted on 17Jul12 to IBM Developerworks (9,830 Views)

“Apigee, the API management company that was most recently spotted powering that new “print to Walgreens” feature in half a dozen or so mobile applications, is now acquiring the technology assets of WAC, aka the Wholesale Applications Community. WAC, an alliance of global telecom companies, like AT&T, Verizon, Sprint, Deutsche Telecom, China Mobile, Orange, and others (and pegged by TechCrunch writer Jason Kincaid back in 2010 as “a disaster in the making“) was intent on building a platform that would allow mobile developers to build an application once, then run it on any carrier, OS or device. The group also developed network API technology, which is another key piece to today’s acquisition.”

TechCrunch – techcrunch.com/2012/07/17/wac-whacked-telecom-backed-alliance-merges-into-gsma-assets-acquired-by-api-management-service-apigee/

I think this is a really interesting development.  The Wholesale Application Community (WAC) was supposed to give Telcos a way of minimising the revenue losses to the likes of Apple’s App Store and Google Play.  IBM’s Telecom Solution Lab in France built a demonstration, shown at Mobile World Congress (MWC) in 2011, of how a Telco’s own app store could incorporate applications from the WAC App Store as well as other app stores within its own combined app store.  I’ve demonstrated this a number of times around the world, and the thing that always seemed odd to me is that applications in the WAC App Store could not be native applications (for Android, Blackberry, WinMob or Symbian); they could ONLY be HTML5-based apps.  That was always going to limit the number of apps that would be in the WAC App Store, and since the WAC was announced at MWC 2010, the number of apps in the store has never really taken off.

I’m not sure if this is effectively the end of the road for the WAC, or if it’s just a stop on their journey.   Certainly, the Telcos that I have dealt with that form the core WAC Telco members remain dedicated to the WAC. I guess we’ll have to wait and see what happens.

This Is Not a Test: The Emergency Alert System Is Worthless Without Social Networks

Originally posted on 17Nov11 to IBM Developerworks (11,306 Views)

This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires.
Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
Of particular interest is that the USA has avoided the SMS channel, when in Australia that has been the primary channel – alternatives like TV and radio are seen as less pervasive and thus a lower priority.  I don’t think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS – the problem is getting everyone to “friend” the NEWS system so that they see updates and warnings!

TelecomTV | News | News Alert: HP drops WebOS and plans to sell its PC business

Originally posted on 29Aug11 to IBM Developerworks (10,011 Views)

Here is the URL for this bookmark: www.telecomtv.com/comspace_newsDetail.aspx?n=47960


Wow!  HP getting out of PCs and abandoning their very recent and very significant investment in Palm – then on top of that, they’re looking to buy Autonomy!
 
While I can understand HP getting out of the PC business – it’s a very competitive marketplace with low margins; after all, that is why IBM sold its PC division to Lenovo – what surprises me is the timing.  Only 18 months after buying Palm for US$1.2 billion, they’re cutting their losses and shedding it.
 
Since I don’t live in the US, I can’t comment on the marketing push that HP put behind the Pre and the TouchPad, but I’ve never seen any marketing for them.  When your competitor is Apple, the only way to make any dent is to push and push hard.  They needed to out-market Apple, and I’m sure I don’t need to tell you how difficult and expensive that would be!