TeleManagement Forum Africa Summit 2012

Originally posted on 29Sep12 to IBM Developerworks (13,053 Views)

Last week, I was at the TeleManagement Forum’s (TMF) Africa Summit event in Johannesburg, South Africa. The main reason for me attending was to finish off my TMF certifications in the process framework (eTOM) – I am currently Level 3, and if I have passed the exam I will be Level 4 certified. It was a really tough exam (75% pass mark), so I don’t know if I did enough to get over the line. Regardless, the event was well attended, with 200-230 attendees over the two days of the conference.

It was interesting to hear the presenters’ thoughts on telco usage within Africa into the future. Many seemed to think that video would drive future traffic for telcos. I am not so sure. In other markets around the world, video was also projected to drive 3G network adoption, yet this has not happened anywhere. Why do all these people think that Africa will be different? I see similar usage patterns in parts of Asia, yet video has not taken off there. Skype carries many more voice-only calls than video calls. Apple’s Facetime video chat hasn’t taken off like Apple predicted. 3G video calls make up a tiny proportion of all calls made. Personally, I think that voice (despite its relative decline in popularity in the developed world) will remain the key application in Africa for the foreseeable future, especially voice over LTE. I also think that social networking (be it Facebook, Friendster, MySpace or some other Africa-specific tool) will drive consumer data (LTE) traffic. Humans are social animals, and I think these sorts of social interactions will apply just as much in the African scenario as they have in others.

Telco standards gone, dead and buried

Originally posted on 22Aug12 to IBM Developerworks (13,006 Views)

Further to my last post, it now looks like the WAC is completely dead and buried. One thing that is creating a lot of chatter at the moment, though, is TelcoML (Telco Markup Language) – there is a lot of discussion about it on the TeleManagement Forum (TMF) community site, and while I don’t intend to get into a big discussion about TelcoML, I do want to talk about Telco standards in general.

The Telco standards that seem to take hold are the ones with a strong engineering background – I am thinking of networking standards like SS7, INAP, CAMEL, SIGTRAN and so on – but the Telco standards focussed on the IT domain (like Parlay, ParlayX, OneAPI, ParlayREST and perhaps TelcoML) seem to struggle to get real penetration. Sure, standards are good – they make it easier and cheaper for Telcos to integrate and introduce new software, and they make it easier for ISVs to build software that can be deployed at any telco. So why don’t they stick? Why do we see a progression of standards that are well designed and have the collaboration of a core set of telcos around the world (I’m thinking of the WAC here), yet nothing comes of them?

If we look at Parlay, for example: sure, CORBA is hard, so I get why it didn’t take off. But ParlayX with web services is easy – pretty much every IDE in the world can build a SOAP request from the WSDL for that web service – so why didn’t it take off? I’ve spoken to telcos all around the world about ParlayX, but it’s rare to find one that is truly committed to the standard. Sure, the RFPs say “must have ParlayX”, but after they implement the software (Telecom Web Services Server in IBM’s case) they either continue to offer their previous in-house developed interfaces for those network services and don’t use ParlayX, or they just don’t follow through with their plans to expose the services externally. Why did we bother? ParlayX stagnated for many years with little real adoption from Telcos.

Along comes the GSMA with OneAPI and the mantra “ParlayX web services are still too complicated, let’s simplify them and also provide a REST based interface”. No new services, just the same ones as ParlayX, but simplified (see the sketch at the end of this post for the sort of call that was promised). Yes, I responded to a lot of Requests For Proposal (RFPs) asking for OneAPI support, but I have not seen one telco that has actually exposed those OneAPI interfaces to third-party developers as they originally intended. So now OneAPI doesn’t really exist any more and we have ParlayREST as a replacement. Will that get any more uptake? I don’t think so. The TMF Frameworx seem to have more adoption, but they are the exception to the rule. I am not really sure why Telco standards efforts have such a tough time of it, but I suspect that it comes down to:

  • Lack of long-term thinking within telcos – there are often too many tactical requirements to be fulfilled, so the long-term strategy never gets going (much like governments with four-year terms never getting 20-year projects over the line – they are too worried about patching up the day-to-day things and then getting re-elected)
  • Senior executives in Telcos who truly don’t appreciate the benefits of standardisation – I am not sure if this is because executives come from a non-technical background or for some other reason.

What to do? I guess I will keep preaching about standards – they are fundamental to IBM’s strategy and operations, after all – and keep up with the new ones as they come along. Let’s hope that Telcos start to understand why they should be using standards as much as possible; after all, they will make their lives easier and their operations cheaper.
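To make the ParlayX/OneAPI point above a bit more concrete, here is a rough sketch of the sort of call OneAPI promised third-party developers – a single HTTP POST to send an SMS rather than a SOAP envelope built against the ParlayX SendSms WSDL. The gateway host is made up, and the resource path and JSON field names are from my recollection of the OneAPI RESTful SMS interface, so treat every identifier here as illustrative rather than as a working client for any particular operator:

```python
from urllib.parse import quote

import requests  # third-party HTTP library (pip install requests)

GATEWAY = "https://oneapi.example-telco.com"  # hypothetical operator gateway
SENDER = "tel:+15550000001"                   # illustrative sender address / short code

# Approximate shape of a OneAPI-style outbound SMS request body.
payload = {
    "outboundSMSMessageRequest": {
        "address": ["tel:+15551234567"],
        "senderAddress": SENDER,
        "outboundSMSTextMessage": {"message": "Hello from a third-party app"},
    }
}

resp = requests.post(
    f"{GATEWAY}/smsmessaging/v1/outbound/{quote(SENDER, safe='')}/requests",
    json=payload,
    auth=("developer-id", "developer-secret"),  # placeholder credentials
    timeout=10,
)
print(resp.status_code, resp.text)
```

If something that simple still can’t get exposed to third-party developers, the problem clearly isn’t the technology.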

This Is Not a Test: The Emergency Alert System Is Worthless Without Social Networks

Originally posted on 17Nov11 to IBM Developerworks (11,306 Views)

Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Of particular interest is that the USA has avoided the SMS channel, whereas in Australia that has been the primary channel – alternatives like TV and radio are seen as less pervasive and thus a lower priority. I don’t think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS – the problem is getting everyone to “friend” the NEWS system so that they see updates and warnings!

New version of SPDE announced at TeleManagement World 2011

Originally posted on 26May11 to IBM Developerworks (12,948 Views)

Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for Telecom that has been around since 2000.  Over the years, it has evolved with changes in market requirements and architecture maturity.  The link below is for the launch:

http://www-01.ibm.com/software/industry/communications/framework/index.html

The following enhancements are part of the new SPDE 4.0 Framework:

1. CSP Business Function Domains –  a clear articulation of “communications service provider business domains” that describe the business functions that are common to any service provider across the world.  These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:

  • Customer Management
  • Sales & Marketing
  • Operations Support
  • Subscriber Services
  • Corporate Management
  • Information Technology
  • Network Technology

2. New Capabilities – In the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.

3. Introduction of SPDE Enabled Business Projects – these deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.

4. Improved alignment with TeleManagement Forum (TMF) Industry Standards – a clearly defined depiction of the areas of alignment to TMF Frameworx – key industry standards that underpin much of the communications industry investment.

5. Simplified Graphics and Messaging – to improve ease of adoption and consumability by a broader LOB audience.

Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping deliver value-added services that launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change.  IBM’s Industry Frameworks play a critical role in our ability to deliver smarter planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs.

TeleManagement World 2011

Originally posted on 25May11 to IBM Developerworks (12,766 Views)

I am in Dublin at the moment for TeleManagement World 2011, which has changed locations from Nice, France last year. It looks to be a very interesting conference. I’ve already done two days of training and now we’re beginning the sessions. The keynote session has the Irish Minister for Communications, Mr Rabitte, who is talking about the challenges that CSPs face all around the world. He is also talking about an innovation programme that the Irish Government has started called ‘Examplar‘, which is part of their NGN trial network. I’ll see if I can get some more info over the next few days…

Steven Shurrock, CEO at O2  Ireland

Steven Shurrock, the new CEO at O2 Ireland, who has been in the role for just six months, is very bullish about the opportunities in Ireland for data services. After Steven, we saw a host of keynote speakers focused on a number of themes; common threads across many of the presenters included:

  • Standards compliance, including certification against standards – particularly the TMF Frameworx standards
  • Horizontal platforms and moving away from silos as the core IT strategy
  • SOA as the basis for all new IT initiatives

I have recorded a number of the keynote speakers as video, but for the time being those files are very large.  Once I have had a chance to transcode them to a smaller size, I’ll add them to the blog as well – while not particularly technical, they’re very interesting from a Telecom perspective.

Governments vs Blackberry – what’s it all about?

Originally posted on 13Aug10 to IBM Developerworks (19,781 Views)

Over the past few weeks, I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism.  First cab off the rank was India a couple of years ago, asking Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asking for the encryption keys.  That went quiet until recently (1Jul10), when the Indian Government again asked RIM for access to the Blackberry traffic and gave RIM 15 days to comply (see this post: Indian govt gives RIM, Skype 15 days notice, warns Google – Telecompaper).  That deadline has passed, and the Indian government yesterday gave RIM a new deadline of 31Aug10 (see Indian govt gives 31 August deadline for BlackBerry solution – Telecompaper). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.

First was the United Arab Emirates (UAE), which will put a ban on Blackberry devices in place, forcing the local Communications Service Providers (CSPs) to halt the service from 11Oct10.  RIM are meeting with the UAE government, but who knows where that will lead, with the Canadian government stepping in to defend its golden-haired child, RIM.  Following the UAE ban, Saudi Arabia, Lebanon and more recently Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (see UAE cellular carrier rolls out spyware as a 3G “update”) suggesting that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a ‘special update’ – it turns out that the update was just a Trojan which delivered a spyware application to the Blackberry devices to allow the government to monitor all the traffic! (Wow!)

Much of the hubbub seems to be around the use of Blackberry Messenger, an Instant Messaging function similar to Lotus Sametime Mobile, but hosted by RIM themselves which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.

I guess at this stage, it might be helpful to describe how RIM’s service works.  From a historical point of view, RIM were a pager company.  Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers.  That’s where all the RIM contact centre staff sat and answered phones, typed messages into their internal systems and sent the messages out to the subscribers.  RIM had the brilliant idea to make their pagers two-way, so that the person being paged could respond, initially with just an acknowledgement that they had read the message, and later with full text messages.  That’s the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages – after all, they now had a full keyboard, a well established NOC-based delivery system and a return path via the NOC for messages sent from the device.  The only thing that remained was a link into an enterprise email system.  That’s where the Blackberry Enterprise Server (BES) comes in.  The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC).  The connection from the device to the NOC is encrypted, and the connection from the NOC to the BES is encrypted.  Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).

Blackberry Topology

Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) – since then, this situation may have changed.

Blackberry Messenger traffic doesn’t go via the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the Messenger service – and this appears to be the crux of what the various governments are concerned about.  Anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES, which explains why the governments don’t seem to be chasing the other VPN vendors (including IBM with Lotus Mobile Connect) for access to the encrypted traffic between the device and the enterprise VPN server.  Importantly, other VPN vendors typically don’t have a NOC in the mix (apart from the USA-based Good, who have a very similar model to RIM).  I guess the governments don’t see the threat coming from enterprise customers, but rather from individuals who buy Blackberry devices.

To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:

Lotus Mobile Connect topology

If we extend that thought a little more, a terrorist cell could set themselves up as a pseudo-enterprise by deploying a traditional VPN solution in conjunction with an enterprise-type instant messaging server and therefore avoid the ban on Blackberries.  The VPN server and IM server could even be located in another country, which would avoid the possibility of the government easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN).  It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy…
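To make the comparison in this post a bit more concrete, here is a small sketch of the three message paths discussed above and where each one terminates. The labels are my own, not RIM or IBM terminology – the point is simply that the only party who ever sees cleartext sits at the end of the last hop, which is why Blackberry Messenger (terminating on RIM’s infrastructure and the handsets, never on an enterprise server) is the bit the governments are worried about:

```python
# Purely illustrative: hop labels are mine, not RIM or IBM terminology.
PATHS = {
    "BES email": [
        ("Blackberry device", "RIM NOC", "encrypted"),
        ("RIM NOC", "BES inside the enterprise", "encrypted"),
        ("BES", "Domino / Exchange mail server", "enterprise LAN"),
    ],
    "Blackberry Messenger": [
        ("Blackberry device A", "RIM NOC", "encrypted"),
        ("RIM NOC", "Blackberry device B", "encrypted"),
    ],
    "Enterprise VPN + IM (e.g. Lotus Mobile Connect)": [
        ("Any device", "VPN server inside the enterprise", "encrypted"),
        ("VPN server", "Enterprise IM server", "enterprise LAN"),
    ],
}

for name, hops in PATHS.items():
    print(name)
    for src, dst, link in hops:
        print(f"  {src} -> {dst} [{link}]")
    # Whoever controls the endpoint of the final hop is the one a government
    # would have to compel to get at the cleartext.
    print(f"  cleartext endpoint: {hops[-1][1]}\n")
```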

New Zealand’s National Broadband Project – progressing

Originally posted on 5Jul10 to IBM Developerworks (10,547 Views)

When I last posted about New Zealand’s National Broadband project, it seemed to me to be much more focused on the subscribers and the products they would have available to them (and the retailers that sold them) than on the high-speed backbone network.  My impressions may have been tainted by the work I was doing on the Telecom New Zealand Undertaking In Progress (UIP) project – the rather public, forced split of Telecom New Zealand’s Retail, Wholesale and Network departments to ensure equivalency of input for all retail and wholesale partners for (only) broadband services.

My understanding of the situation has developed somewhat since then, and we can see that the situation in New Zealand also involves a similar structure to what is happening in Australia with the Communications Alliance and the NBN Company.  In New Zealand, the companies are a little different.  Certainly, we have the NZ Government Ministry of Economic Development (MED) as one participant, then we have Crown Fibre Holdings (not much of a web site there!) – set up by the Government to manage the process of selecting the companies to build the National Broadband Network and to manage the government’s investment in the NBN.  Together with the companies that are bidding for the deal, Crown Fibre Holdings will form Local Fibre Companies (LFCs) which (combined) will match the government’s contribution to the NBN.  That will mean the total project will cost NZ$3 Billion**, with the LFCs kicking in NZ$1.5B and the NZ government contributing NZ$1.5B.  I don’t have the full schedule, but from a couple of sources I have compiled an overview of the progress to date:

  • 21 October 2009 – Communications and Information Technology Minister Steven Joyce announced the government’s process for selecting private sector co-investment partners.
  • 13 November 2009 – Intention to respond due. 
  • 9 December 2009 – The Ministry and Crown Fibre Holdings released clarifications and amendments
  • 14 January 2010 – The Ministry and Crown Fibre Holdings released additional clarifications and amendments with respect to the Invitation to Participate.
  • 29 January 2010 – Proposals must be lodged
  • 4 February 2010 –  Crown Fibre Holdings notify respondents of handover of responsibility for the partner selection process
  • August 2010 – Refined Proposals to be re-submitted to the government (See http://www.totaltele.com/view.aspx?C=0&ID=456818 )
  • October 2010 – Successful respondents announced/notified.

What I find a bit interesting is that the government is only looking to cover 75% of the population by 2019.  For a small country (compared to Australia at least), that seems to me to be a very low target to aim for.  If we compare that with Australia’s NBN project, their target is 90% coverage at greater than 100Mbps and 10% at greater than 12Mbps (that’s 100% coverage!) by 2017.  Admittedly, the Australian project has about a year’s head start, but it’s also a MUCH bigger country with a population nearly five times larger.  Let’s have a quick look at the comparison:


| | Australia | New Zealand | Ratio (AU to NZ) |
| --- | --- | --- | --- |
| Population | 22.4M | 4.3M | 5.2 |
| Area | 7,617,930 km² | 268,021 km² | 28.4 |
| Population density | 2.833/km² | 16.1/km² | 0.17 |
| Planned NBN completion year | 2018 | 2019 | – |
| NBN coverage | 22.4M (100%*) | 3M (70%) | 7.5 |
| NBN cost** | AU$40B = US$33B | NZ$3B = US$2B | 16.5 |
| NBN cost per person (US$/person) | US$1,473 | US$666 | 2.2 |
| NBN cost per area (US$/km²) | US$4,331 | US$7,462 | 0.6 |

* 100% coverage is split between greater than 100Mbps (90%) and greater than 12Mbps (10%)
** One billion here uses the short scale definition = 10^9 = 1,000,000,000
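For anyone who wants to check my arithmetic, here is the quick calculation behind the per-person and per-area rows above (using the rounded US$ conversions from the table, so the output only matches the table to within rounding):

```python
# Quick reproduction of the per-person and per-km2 cost figures in the table above.
# The US$ figures are the rounded conversions used in the table (AU$40B ~= US$33B,
# NZ$3B ~= US$2B); "covered_people" is the population each NBN plans to reach.
countries = {
    "Australia":   {"cost_usd": 33e9, "covered_people": 22.4e6, "area_km2": 7_617_930},
    "New Zealand": {"cost_usd": 2e9,  "covered_people": 3.0e6,  "area_km2": 268_021},
}

for name, c in countries.items():
    per_person = c["cost_usd"] / c["covered_people"]
    per_km2 = c["cost_usd"] / c["area_km2"]
    print(f"{name}: US${per_person:,.0f} per covered person, US${per_km2:,.0f} per km2")

# Prints roughly US$1,473 / US$4,332 for Australia and US$667 / US$7,462 for
# New Zealand, matching the table to within rounding.
```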

What do I take from this quick comparison?  Let’s take a quick look at the numbers.  Obviously, Australia is a much bigger country (28.4 times larger) and has a much larger population (5.2 times larger), so it is reasonable (in my opinion) that the cost per potential NBN customer should be higher for Australia (and it is, at 2.2 times higher), but the thing that makes me ponder is the cost per square kilometre: New Zealand’s is nearly twice that of Australia’s.  Given that the New Zealand target is only 70% of the population, which lets them avoid areas that are physically difficult to provide coverage to (I’m no NZ geologist, but I would imagine lots of the South Island’s most mountainous areas would pose significant problems for cablers), I find myself wondering why the NZ network is going to be so expensive.  I guess it could be a matter of scale – but I thought the biggest cost was actually laying the cables rather than the back-end systems which every broadband network will need (routers, switches, administration and management systems).  Maybe I am missing something – does anyone have any ideas?


edit:  I’ve just found this quote in Wikipedia which (I think) is truly revealing when you consider New Zealand’s 70% coverage target:

“New Zealand is a predominantly urban country, with 72% of the population living in 16 main urban areas and 53% living in the four largest cities of Auckland, Christchurch, Wellington, and Hamilton.”

source: wikipedia.com

By only extending the NBN to those 16 main urban areas and nowhere else – they’ve achieved their target!  You wouldn’t want to live in country New Zealand and be dependent on a fast network!

iPhone 4 Facetime standards

Originally posted on 15Jun10 to IBM Developerworks (11,653 Views)

Nokia e71 making a video call

Since I penned my last post, I have done some more reading on Facetime and watched Steve Jobs’s launch of Facetime.  While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264, AAC, SIP, STUN, TURN, ICE, RTP and SRTP as all being used), I am somewhat bemused by the “standards” discussion that most of the media seem to be focusing on with regard to Facetime.  Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities – from the likes of Skype, MS Messenger, GTalk and others.  Am I the only one that has noticed the iPhone 4 is not a PC and is in fact a mobile phone?  Why is it that no one else is questioning interoperability with existing video-chat-capable mobile phones?
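As an aside, STUN is probably the least familiar item in that list: it is the piece that lets a device behind a NAT discover its public address and port so that media can flow between endpoints. Below is a minimal sketch of a STUN Binding Request using only the Python standard library – the server named is just a well-known public STUN server, and the sketch illustrates the protocol generally, not anything Facetime-specific:

```python
import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)  # a well-known public STUN server
MAGIC_COOKIE = 0x2112A442                   # fixed value from RFC 5389

def stun_binding_request():
    """Send a STUN Binding Request and return the reflexive (public) IP and port."""
    txn_id = os.urandom(12)
    # Header: type=0x0001 (Binding Request), length=0, magic cookie, transaction ID
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(request, STUN_SERVER)
    data, _ = sock.recvfrom(2048)
    sock.close()

    msg_type, msg_len, _cookie = struct.unpack("!HHI", data[:8])
    assert msg_type == 0x0101, "expected a Binding Success Response"

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (type 0x0020).
    # Older servers may return MAPPED-ADDRESS (0x0001) instead, which this sketch ignores.
    pos = 20
    while pos < 20 + msg_len:
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:
            port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", addr)), port
        pos += 4 + attr_len + ((4 - attr_len % 4) % 4)  # attributes are 32-bit aligned
    return None

if __name__ == "__main__":
    print(stun_binding_request())
```

TURN and ICE then build on this: TURN relays media when a direct path can’t be found, and ICE is the procedure for trying the candidate paths in order.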

After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA – where it was launched.  It’s only natural.  The problem with the US telecoms market is that it is not representative of the rest of the world – which has had video calling for ages and doesn’t really use it.  Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before – I wasn’t there, so I can’t be sure.  Right now, the Facetime capability on the iPhone 4 is only for WiFi connections – which makes it pretty limiting.  Apparently there is no setup required and no buddy list; you just use the phone number to make a video call – which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi connected.  At least with the standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call or at least a phone call.  Jobs didn’t explain what would happen if the recipient was not WiFi connected – does it just make a voice call instead?  I hope so.

(Note: the original post had a flash video of a video call conducted from my Nokia e71 phone – I’m trying to find the original recording of the call (3GVideoCall/3GVideoCall_controller.swf) and I’ll update this post if I can find it)

If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, and longer ago in other countries. If I were in an HSDPA coverage area, I would expect the video call to be higher quality due to the increased bandwidth available.

I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare.  Video calling was a key part of the 3G launch in Australia for all of the telcos.  This article from the 14Apr03 Sydney Morning Herald (the day before the first official 3G network launched in Australia) illustrates what I am talking about.  The authors say that the network’s “…main feature is that it makes video calling possible via mobile phone.”  Think about it for a second.  That’s from more than seven years ago, and Australia was far from the first country to get a 3G network.  A lifetime in today’s technology evolution.  Still the crowds clapped and cheered as Jobs made a video call.  If I had been in the audience, I think I would have yawned at that point.

The other interesting thing that I noticed in Jobs’s speech was his swipe at the telcos.  He implied that they needed to get their networks in order to support video calls.  Evidence from the rest of the world would suggest that is not the case – perhaps it is in the USA, or perhaps he is trying to deflect blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple’s application store policies involving applications that could be seen to compete with services from AT&T.  I am not sure how much stick AT&T deserve on that front, but it’s pretty obvious from Jobs’s comments that he is not in love with carriers – and certainly from what I’ve seen, carriers are not in love with Apple.  It might be interesting to see how long the relationship lasts.  My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.

On another related point, I have been searching the Internet to find out which standards body Apple submitted Facetime to for certification – Jobs says in the launch that it will be done “tomorrow” – this could be marketing speak for ‘in the future’, or it could literally mean the day after he launched the iPhone 4.  If anyone knows, please let me know – I want to have a look into the way Facetime works.


Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.

Jobs has lofty goal for iPhone 4’s FaceTime video chat with open standard – Computerworld

Originally posted on 10Jun10 to IBM Developerworks (11,776 Views)

Regarding this article: http://www.computerworld.com/s/article/9177819/Jobs_has_lofty_goal_for_iPhone_4_s_FaceTime_video_chat_with_open_standard


I came across this article today – Apple wanting to propose their new Facetime technology for video chat now that they finally have a camera on the front of the iPhone 4.  I’m now on my second phone with a camera on the front face of the phone (that’s at least four years that my phones have had video chat capabilities), a capability which has not proved to be much more than a curiosity where Telcos have launched it around the world.  I recall the first 3G network launch in Australia – for Hutchison’s ‘3’ network – video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it.  I wonder why Steve Jobs thinks this will be any different.  At least the video chat capabilities that are already in the market comply with a standard, which means that on my Nokia phone I can have a video call with someone on a (say) Motorola phone.  With Apple’s Facetime, it’s only iPhone 4 to iPhone 4 (which does not support a 4G network like LTE or WiMax, I hasten to add).  If Apple really is worried about standards, as the Computerworld article suggests, then I have to ask why Apple doesn’t make their software comply with existing 3GPP video call standards instead of ‘inventing their own’.  If Apple were truly concerned about interoperability, that would have been a more sensible path.

According to Wikipedia, in Q2 2007 there were “…over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries”.  Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and more recently, HSPA networks) have been deployed throughout the world.  Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol.  That protocol (3G-324M – see this article from commdesign.com for a great explanation of the protocol and its history, from way back in 2003!) has been around for a while, and yes, it was developed because the original UMTS networks couldn’t support IPv6 or the low-latency connectivity needed to provide a good quality video call over a purely IP infrastructure.  But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP) and mobile WiMax being deployed in the USA by Sprint and at a few other locations around the world (see the WiMax Forum’s April 2010 report – note that the majority of these WiMax deployments are not for mobile WiMax, and as far as I know Sprint are the first to be actively deploying WiMax-enabled mobile phones as opposed to mobile broadband USB modems).  So perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks.  I think that would be a valid thing to do right now.  If it were up to me, I would be looking at SIP-based solutions and learning from the success that companies like Skype have had with their video calling (albeit only on PCs and with proprietary technology) – wouldn’t it be great if you could video call anyone from any device?
I guess the thing that annoys me most about Apple’s arrogance is the way they ignore the prior work in the field.  Wouldn’t it be better to make Facetime compatible with the hundreds of millions of handsets already deployed rather than introduce yet another incompatible technology and proclaim it as “…going to be a standard”?
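For completeness, here is a tiny sketch of the kind of media description a SIP-based video call carries in its INVITE. The address, ports and payload type numbers are invented for the example; the point is that the SDP offer/answer format (RFC 4566) is existing, codec-agnostic machinery – an H.264 call over LTE fits into it just as easily as anything Apple might invent:

```python
# A minimal, hand-rolled SDP body of the sort a SIP INVITE carries for a video call.
# Address, port and payload type number are invented for the example; only the
# overall format follows the SDP specification.
def video_offer(local_ip: str, rtp_port: int) -> str:
    lines = [
        "v=0",
        f"o=caller 2890844526 2890844526 IN IP4 {local_ip}",
        "s=video call",
        f"c=IN IP4 {local_ip}",
        "t=0 0",
        f"m=video {rtp_port} RTP/AVP 96",  # dynamic payload type 96 ...
        "a=rtpmap:96 H264/90000",          # ... mapped to H.264 at the standard 90 kHz clock
        "a=sendrecv",
    ]
    return "\r\n".join(lines) + "\r\n"

print(video_offer("192.0.2.10", 49170))
```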

My 2c worth…

ICE at TeleManagement World 2010 – a great example of real benefits from TMF Frameworx

Originally posted on 29May10 to IBM Developerworks (23,580 Views)

Yes, I should have posted this a week ago during the TeleManagement World conference – I’ve been busy since then and the wireless network at the conference was not available in most of the session rooms – at least that is my excuse.

Ricardo Mata, Sub-Director, VertICE (OSS) Project from ICE

At Impact 2010 in Las Vegas we heard from the IBM Business Partner (GBM) on the ICE project.  At TMW 2010, it was ICE themselves presenting on ICE and their journey down the TeleManagement Forum Frameworx path.  Ricardo Mata, Sub-Director of the VertICE (OSS) Project at ICE (see his picture to the right), presented on ICE’s projects to move Costa Rica’s legacy carrier to a position that will allow them to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there.  ICE used IBM’s middleware to integrate components from a range of vendors and align them to the TeleManagement Forum’s Frameworx (the new name for eTOM, TAM and SID).  In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.

I wish I could share with you the entire slide pack, but I think I might incur the wrath of the TeleManagement Forum if I were to do that.  If you want to see these great presentations from Telcos from all around the world, you will just have to stump up the cash and get yourself to Nice next year.  Finally, I want to illustrate the integration architecture that ICE used – this diagram is similar to the one from Impact, but importantly, I think it shows ICE’s view of the architecture rather than IBM’s or GBM’s.

For the benefit of those that don’t understand some of the acronyms in the architecture diagram above, let me explain them a bit:

  • ESB – Enterprise Services Bus
  • TOCP – Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack) – IBM’s product to help Telcos get in line with the TMF Frameworx
  • NGOSS – Next Generation Operations Support Systems (the old name for TMF Frameworx)
  • TAM – Telecom Applications Map
  • SID – Shared Information / Data model