This Is Not a Test: The Emergency Alert System Is Worthless Without Social Networks

Originally posted on 17Nov11 to IBM Developerworks (11,306 Views)

Here is the URL for this bookmark: gizmodo.com/5857897/this-is-not-a-test-the-emergency-alert-system-is-worthless-without-social-networks
This makes for an interesting comparison to the National Emergency Warning System (NEWS) that was implemented in Australia last year as a result of the Black Saturday bushfires. Of particular interest is that the USA have avoided the SMS channel, whereas in Australia it has been the primary channel – alternatives like TV and radio are seen as less pervasive and so get a lower priority. I don't think that NEWS here in Oz is connected to Twitter, Facebook, Foursquare or any other social networking site either, but that could be an extension to NEWS – the problem is getting everyone to "friend" the NEWS system so that they actually see the updates and warnings!

New version of SPDE announced at TeleManagement World 2011

Originally posted on 26May11 to IBM Developerworks (12,948 Views)

Yesterday, IBM launched the latest iteration of the Service Provider Delivery Environment (SPDE), a software framework for Telecom that has been around since 2000.  Over the years, it has evolved with changes in market requirements and architecture maturity.  The link below is for the launch:

http://www-01.ibm.com/software/industry/communications/framework/index.html

The following enhancements are part of the new SPDE 4.0 Framework:

1. CSP Business Function Domains –  a clear articulation of “communications service provider business domains” that describe the business functions that are common to any service provider across the world.  These business domains offer us a simpler way to introduce the SPDE capabilities to a LOB audience, as well as to other client and partner constituents that are new to SPDE:

  • Customer Management
  • Sales & Marketing
  • Operations Support
  • Subscriber Services
  • Corporate Management
  • Information Technology
  • Network Technology

2. New Capabilities – In the areas of cloud, B2B commerce, enterprise marketing management, business analytics, and service delivery.

3. Introduction of the SPDE Enabled Business Projects –  that deliver solutions to address common business and IT needs for the LOB (CIO/CTO/CMO) and represent repeatable solutions and patterns harvested from client engagements.

4. Improved alignment with TeleManagement Forum (TMF) Industry Standards – a clearly defined depiction of the areas of alignment to TMF Frameworx, the key industry standards that underpin much of the communications industry investment.

5. Simplified Graphics and Messaging – to improve ease of adoption and consumability by a broader LOB audience.

Built on best practices and patterns from client engagements with CSPs around the world, IBM SPDE 4.0 is the blueprint that enables Smarter Communications by helping CSPs launch smarter services, drive smarter operations and build smarter networks. IBM is leading a conversation in the marketplace about how our world is becoming smarter, and software is at the very heart of this change.  IBM's Industry Frameworks play a critical role in our ability to deliver Smarter Planet solutions by pulling together deep industry expertise, technology and a dynamic infrastructure from across the company to provide clients with offerings targeted to their industry-specific needs.

TeleManagement World 2011

Originally posted on 25May11 to IBM Developerworks (12,766 Views)

I am in Dublin at the moment for TeleManagement World 2011, which has changed location from Nice, France, where it was held last year. It looks to be a very interesting conference. I've already done two days of training and now we're beginning the sessions. The keynote session has the Irish Minister for Communications, Mr Rabbitte, who is talking about the challenges that CSPs face all around the world. He is also talking about an innovation programme that the Irish Government have started called 'Exemplar', which is part of their NGN trial network. I'll see if I can get some more info over the next few days…

Steven Shurrock, CEO at O2 Ireland

Steven Shurrock, the new CEO at O2 Ireland, who has been in the role for just six months, is very bullish about the opportunities in Ireland for data services. After Steven, we saw a host of keynote speakers focused on a number of themes; common threads across the presenters included:

  • Standards compliance – including certification against standards, particularly the TMF Frameworx standards
  • Horizontal platforms – moving away from silos as the core of their IT strategy
  • SOA as the basis for all of the new IT initiatives

I have recorded a number of the keynote speakers on video, but for the time being those files are very large.  Once I have had a chance to transcode them to a smaller size, I'll add them to the blog as well – while not particularly technical, they're very interesting from a Telecom perspective.

Governments vs Blackberry – what’s it all about?

Originally posted on 13Aug10 to IBM Developerworks (19,781 Views)

Over the past few weeks, I have been watching what seems to be a snowballing issue of governments spying on their citizens in the name of protection from terrorism.  First cab off the rank was India, which a couple of years ago asked Research In Motion (RIM) for access to the data stream for Indian Blackberry users, then asked for the encryption keys.  That went quiet until recently (1Jul10), when the Indian Government again asked RIM for access to Blackberry traffic and gave RIM 15 days to comply (see this post: Indian govt gives RIM, Skype 15 days notice, warns Google – Telecompaper).  That deadline has passed, and the Indian government yesterday gave RIM a new deadline of 31Aug10 (see Indian govt gives 31 August deadline for BlackBerry solution – Telecompaper). In parallel, a number of other nations have asked their CSPs or RIM for access to the data sent via Blackberry devices.

First was the United Arab Emirates (UAE), which has announced a ban on Blackberry devices that will force the local Communications Service Providers (CSPs) to halt the service from 11Oct10.  RIM are meeting with the UAE government, but who knows where that will lead now that the Canadian government has stepped in to defend its golden-haired child, RIM.  Following the UAE ban, Saudi Arabia, Lebanon and more recently Indonesia have all said they will also consider a ban on RIM devices. As an interesting aside, I read an article a week ago (see UAE cellular carrier rolls out spyware as a 3G "update") suggesting that the UAE government sent all Etisalat Blackberry subscribers an email advising them to update their devices with a 'special update' – it turns out that the update was a Trojan which delivered a spyware application to the Blackberry devices to allow the government to monitor all the traffic! (wow!)

Much of the hubbub seems to be around the use of Blackberry Messenger, an Instant Messaging function similar to Lotus Sametime Mobile, but hosted by RIM themselves which allows all Blackberry users (even on different networks and telcos) to chat to each other via their devices.

I guess at this stage, it might be helpful to describe how RIM's service works.  From a historical point of view, RIM were a pager company.  Pagers need a Network Operations Centre (NOC) to act as a single point from which to send all the messages out to the pagers.  That's where all the RIM contact centre staff sat and answered phones, typed messages into their internal systems and sent the messages out to the subscribers.  RIM had the brilliant idea to make their pagers two-way, so that the person being paged could respond – initially with just an acknowledgement that they had read the message, and later with full text messages.  That's the point at which the pagers gained QWERTY keyboards. From there, RIM made the leap in functionality to support emails as well as pager messages – after all, they now had a full keyboard, a well-established NOC-based delivery system and a return path via the NOC for messages sent from the device.  The only thing that remained was a link into an enterprise email system.  That's where the Blackberry Enterprise Server (BES) comes in.  The BES sits inside the enterprise network, connects to the Lotus Domino or MS Exchange servers and acts as a connection to the NOC in Canada (the home of RIM and the location of the RIM NOC).  The connection from the device to the NOC is encrypted, and the connection from the NOC to the BES is encrypted.  Because of that encryption, there is no way for a government such as India, the UAE, Indonesia, Saudi Arabia or others to intercept the traffic over either of the links (to or from the NOC).

Blackberry Topology

Last time I spoke to someone at RIM about this topology, they told me that RIM did not support putting the BES in the DMZ (where I would have put it) – since then, this situation may have changed.

Blackberry Messenger traffic doesn't go to the BES; instead it goes from the device up to the NOC and then back down to the second Blackberry, which means that non-enterprise subscribers also have access to the messenger service – and this appears to be the crux of what the various governments are concerned about.  Anybody, including a terrorist, could buy a Blackberry phone and have access to the encrypted Blackberry Messenger service without needing to connect their device to a BES, which explains why the governments don't seem to be chasing the other VPN vendors (including IBM, with Lotus Mobile Connect) for access to the encrypted traffic between the device and the enterprise VPN server.  Importantly, other VPN vendors typically don't have a NOC in the mix (apart from the US-based Good, who have a very similar model to RIM).  I guess the governments don't see the threat coming from enterprise customers, but rather from the individuals who buy Blackberry devices.
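To make those two paths concrete, here's a toy sketch of the routing described above. It is purely illustrative – the class names, host names and message shapes are my own invention, not RIM's actual implementation:

    # Toy model of the two Blackberry traffic paths described above.
    # Everything here is illustrative only - invented names, no real protocols.

    class NOC:
        """RIM's Network Operations Centre: every message passes through here."""
        def route(self, message):
            if message["type"] == "enterprise_email":
                # Encrypted NOC -> BES link; only the BES (inside the enterprise)
                # can decrypt and hand the mail to Domino/Exchange.
                return f"NOC -> BES ({message['enterprise']}) -> mail server"
            if message["type"] == "bbm":
                # Blackberry Messenger never touches a BES - the NOC relays it
                # straight back down to the recipient's handset.
                return f"NOC -> handset of {message['to']}"
            return "NOC -> dropped (unknown traffic type)"

    def send_from_handset(noc, message):
        # The device -> NOC link is also encrypted, so a government tapping the
        # radio network or the Internet link sees only ciphertext.
        return "handset -> " + noc.route(message)

    noc = NOC()
    print(send_from_handset(noc, {"type": "enterprise_email",
                                  "enterprise": "ExampleCorp", "to": "alice"}))
    print(send_from_handset(noc, {"type": "bbm", "to": "consumer-PIN-1234"}))

The point of the sketch is simply that both paths funnel through the NOC, but only the enterprise path terminates behind a BES that the subscriber's organisation controls.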

To illustrate how a VPN like Lotus Mobile Connect differs from the Blackberry topology above, have a look at the diagram below:

Lotus Mobile Connect topology

If we extend that thought a little more, a terrorist cell could set themselves up as a pseudo-enterprise by deploying a traditional VPN solution in conjunction with an enterprise-style instant messaging server and thereby avoid the ban on Blackberries.  The VPN server and IM server could even be located in another country, which would prevent the government from easily getting a court order to intercept traffic within the enterprise environment (on the other end of the VPN).  It will be interesting to see if those governments try to extend the reach of their prying to this type of IM strategy…

New Zealand’s National Broadband Project – progressing

Originally posted on 5Jul10 to IBM Developerworks (10,547 Views)

When I last posted about New Zealand's National Broadband project, it seemed to me to be much more focused on the subscribers and the products they would have available to them (and the retailers that sold them) than on the high-speed backbone network.  My impressions may have been tainted by the work I was doing on the Telecom New Zealand Undertaking In Progress (UIP) project – the rather public, forced split of Telecom New Zealand's Retail, Wholesale and Network departments to ensure equivalency of input for all retail and wholesale partners for (only) broadband services.

My understanding of the situation has developed somewhat since then, and the structure in New Zealand turns out to be similar to what is happening in Australia with the Communications Alliance and the NBN Company.  In New Zealand, the companies are a little different.  Certainly, we have the NZ Government's Ministry of Economic Development (MED) as one participant; then we have Crown Fibre Holdings (not much of a web site there!), set up by the Government to manage the process of selecting the companies to build the National Broadband Network and to manage the government's investment in it.  Together with the companies that are bidding for the deal, Crown Fibre Holdings will form Local Fibre Companies (LFCs) which (combined) will match the government's contribution.  That will mean the total project will cost NZ$3 Billion**, with the LFCs kicking in NZ$1.5B and the NZ government contributing NZ$1.5B.  I don't have the full schedule, but from a couple of sources I have compiled an overview of the progress to date:

  • 21 October 2009 – Communications and Information Technology Minister Steven Joyce announced the government’s process for selecting private sector co-investment partners.
  • 13 November 2009 – Intention to respond due. 
  • 9 December 2009 – The Ministry and Crown Fibre Holdings released clarifications and amendments
  • 14 January 2010 – The Ministry and Crown Fibre Holdings released additional clarifications and amendments with respect to the Invitation to Participate.
  • 29 January 2010 – Proposals to be lodged
  • 4 February 2010 – Crown Fibre Holdings notified respondents that it had taken over responsibility for the partner selection process
  • August 2010 – Refined Proposals to be re-submitted to the government (See http://www.totaltele.com/view.aspx?C=0&ID=456818 )
  • October 2010 – Successful respondents announced/notified.

What I find a bit interesting is that the government are only looking to cover 75% of the population by 2019.  For a small country (compared to Australia at least), that seems to me to be a very low target to aim for.  If we compare that with Australia's NBN project, the target there is 90% coverage at greater than 100Mbps and the remaining 10% at greater than 12Mbps (that's 100% coverage!) by 2017.  Admittedly, the Australian project has about a year's head start, but it's also a MUCH bigger country with a population nearly five times larger.  Let's have a quick look at the comparisons:


                                 | Australia        | New Zealand     | Ratio (AU to NZ)
Population                       | 22.4M            | 4.3M            | 5.2
Area                             | 7,617,930 km2    | 268,021 km2     | 28.4
Population Density               | 2.83/km2         | 16.1/km2        | 0.17
Planned NBN Completion Year      | 2018             | 2019            | –
NBN Coverage                     | 22.4M (100%*)    | 3M (70%)        | 7.5
NBN Cost**                       | AU$40B = US$33B  | NZ$3B = US$2B   | 16.5
NBN Cost per person (US$/person) | US$1473          | US$666          | 2.2
NBN Cost per area (US$/km2)      | US$4331          | US$7462         | 0.6

* 100% coverage is split between greater than 100Mbps (90%) and greater than 12Mbps (10%)
** One Billion here uses the short scale definition = 10⁹ = 1,000,000,000

What do I take from this quick comparison?  Let's take a quick look at the numbers.  Obviously, Australia is a much bigger country (28.4 times larger) and has a much larger population (5.2 times larger), so it is reasonable (in my opinion) that the cost per potential NBN customer should be higher for Australia (and it is, at 2.2 times higher).  The thing that makes me ponder is the cost per square kilometre: New Zealand's is nearly twice that of Australia.  When the New Zealand target is only 70% of the population, which lets them avoid areas that are physically difficult to provide coverage to (I'm no NZ geologist, but I would imagine the South Island's most mountainous areas would pose significant problems for cablers), I find myself wondering why the NZ network is going to be so expensive.  I guess it could be a matter of scale – but I thought the biggest cost was actually laying the cables rather than the back-end systems which every broadband network will need (routers, switches, administration and management systems).  Maybe I am missing something – does anyone have any ideas?
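For transparency, here's roughly how the derived rows in the table were calculated – a quick sketch using the same rounded US$ conversions as the table (AU$40B ≈ US$33B and NZ$3B ≈ US$2B at roughly mid-2010 exchange rates, which you should treat as assumptions):

    # Back-of-the-envelope numbers behind the derived rows in the comparison table.
    au = {"area_km2": 7_617_930, "coverage": 22.4e6, "cost_usd": 33e9}  # AU$40B ~= US$33B
    nz = {"area_km2": 268_021,   "coverage": 3.0e6,  "cost_usd": 2e9}   # NZ$3B  ~= US$2B

    for name, c in (("Australia", au), ("New Zealand", nz)):
        per_person = c["cost_usd"] / c["coverage"]   # US$ per covered person
        per_km2    = c["cost_usd"] / c["area_km2"]   # US$ per square kilometre
        print(f"{name}: US${per_person:,.0f}/person, US${per_km2:,.0f}/km2")

    # The ratio that prompted the question above: NZ spend per km2 vs Australia's
    ratio = (nz["cost_usd"] / nz["area_km2"]) / (au["cost_usd"] / au["area_km2"])
    print(f"NZ cost per km2 is about {ratio:.1f} times Australia's")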


edit:  I’ve just found this quote in Wikipedia which (I think) is truly revealing when you consider New Zealand’s 70% coverage target:

“New Zealand is a predominantly urban country, with 72% of the population living in 16 main urban areas and 53% living in the four largest cities of Auckland, Christchurch, Wellington, and Hamilton.”

source: wikipedia.com

By extending the NBN only to those 16 main urban areas and nowhere else, they've achieved their target!  You wouldn't want to live in country New Zealand and need a fast network!

Smarter homes and Smarter Telcos, what’s the link?

Originally posted on 29Jun10 to IBM Developerworks (10,979 Views)

I was looking at where some of the traffic for this blog comes from this morning. Someone had used Google to search for "ibm sdp cloud", which I am glad to say yielded this blog as the third and fourth results. Above Telco Talk in the results was a post from 2005 from fellow MyDeveloperworks blogger Bobby Woolf, What is in RAD 6.0 – which is interesting in that the post wasn't about Service Delivery Platforms and the term "SDP" is only mentioned in the comments, yet it rated higher in Google's index than my posts, which have been about cloud, SDPs or both! That's another conversation though…

The thing that really caught my attention was a new whitepaper from IBM on Smarter Homes. This has been an ongoing area of interest for me for a few years now. The new whitepaper, "The IBM vision of a smarter home enabled by cloud technology", is interesting – it talks about some of the concepts that I have seen coming over the past few years, but it also introduces cloud-based service providers as the key enabler outside the home that will allow smarter homes to deliver on their lofty promises. In the introduction, the whitepaper states:

A common services delivery platform based on industry standards supports cooperative interconnection and creation of new services. Implementation inside the cloud delivers quick development of services at lower cost, with shorter time to market, facilitating rapid experimentation and improvement. The emergence of cloud computing, Web services and service-oriented architecture (SOA), together with new standards, is the key that will open up the field for the new smarter home services.

Excerpt from “The IBM vision of a smarter home enabled by cloud technology”

The dependence on external networks (from our homes) and external Communications Service Providers presents an opportunity for them to provide much more than just the pipe to the house. This is an area that some Telcos are trying to tap into already. Here in Australia, Telstra have recently introduced a home-based smart device called the T-Hub, which is intended to arrest some of the decline in homes installing or keeping landline phones (in Australia, more and more homes are buying a naked DSL or Hybrid Fibre Coax (HFC) service for Internet, using mobile phones for voice calls and not having a home phone service at all). I recently cancelled my Telstra home phone service, so I cannot buy one of the T-Hubs, and apparently it won't work with my home phone service via my HFC connection. It is an intriguing idea though. I find myself wondering if Telstra's toe in the Smarter Home pond is too little, too late. For years, Telstra's Innovation Centres (one in Melbourne and one in Sydney) had standing demonstrations of smarter home technology (I think the previous Telstra CEO, Sol Trujillo, closed them down). I even helped to install a Smarter Healthcare demo at the Sydney Telstra Innovation Centre a few years ago (more on that later), and their demos were every bit as good as the demos that IBM has at the Austin (Texas, USA) and La Gaude (France) Telecom Solutions Labs.

Further into the whitepaper, when talking about cloud-based Service Delivery Platforms (p. 10), there is a nice summary of why a Telco would consider a cloud deployment of their SDP:

An SDP in the cloud supports the expansion of the services scope by enabling new services in existing markets and by expanding existing services into new markets with minimum risk. By exposing standard service interfaces in the network, it enables third parties to integrate their services quickly, or to build new services based on the service components provided in the SDP. This creates the opportunity for new business models, for instance, for media distribution and advertising throughout multiple delivery scenarios.
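To put "exposing standard service interfaces" into more concrete terms, here's a toy sketch of the sort of third-party-facing API an SDP in the cloud might expose. The endpoints, names and payloads are entirely made up for illustration – this is not IBM's SPDE, nor any real Telco API:

    # Hypothetical partner-facing service exposure - illustration only.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/sdp/v1/sms", methods=["POST"])
    def send_sms():
        """Made-up 'send SMS' capability exposed to third-party developers."""
        body = request.get_json(force=True)
        # A real SDP would mediate this onto the network (authentication,
        # charging, policy, throttling); here we just acknowledge the request.
        return jsonify({"to": body.get("to"), "status": "queued"}), 202

    @app.route("/sdp/v1/subscriber/<msisdn>/location", methods=["GET"])
    def locate(msisdn):
        """Made-up location lookup a partner could build new services on."""
        return jsonify({"msisdn": msisdn, "cell": "dummy-cell-id"})

    if __name__ == "__main__":
        app.run(port=8080)

The point is less the code than the model: once capabilities like messaging, charging or location sit behind stable interfaces, third parties can compose them into new services without ever touching the network itself.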

I think the whitepaper illustrates what all Telcos should be thinking about – the agility needed to compete in today's marketplace. Cloud is one way to enhance that agility, and it also adds elasticity – the ability to grow and shrink as market demand grows and shrinks. Sorry for rambling a bit there… some semi-random thoughts kept popping up while talking about smarter homes and Telcos. Anyway, I would encourage you to have a read of the whitepaper for yourself. It's available via SlideShare.


Disclaimer: I own a small number of shares in Telstra Corp.

Airtel App Central surpasses 13 Million downloads – shows other Telcos how it’s done

Originally posted on 25Jun10 to IBM Developerworks (8,988 Views)

Wow!

In just five months, Bharti Airtel's app store has had over 13 million downloads.  What a terrific example of a Telco app store in action and (presumably) making money for the Telco.  This article came across my screen this afternoon, and it ties in with my previous posts about Bharti's app store and about carriers wanting to run their own stores (something I've seen all over Asia) to try and arrest some of the revenue bleeding to Apple (and to a lesser extent Google, Nokia and RIM) through single-brand (phone) app stores.

http://www.telecompaper.com/news/printarticle.aspx?cid=742043 – Thursday 24 June 2010 | 03:29 AM CET, Telecompaper

The article is really brief, barely a footnote, but it does lay out some interesting facts:

  • 13 Million downloads since Feb ’10
  • Over 71,00 Applications available, up from 1250 at launch
  • Support for 780 different devices
  • 1.2 downloads per second (see the quick sanity check below)
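That last figure roughly checks out; a quick back-of-the-envelope calculation (assuming somewhere between 125 and 150 days from the February launch to this late-June article) gives:

    # Sanity check on the "1.2 downloads per second" claim above.
    downloads = 13_000_000
    for days in (125, 150):          # roughly Feb '10 launch to late June '10
        rate = downloads / (days * 24 * 3600)
        print(f"over {days} days: {rate:.2f} downloads/second")
    # ~1.2/s at the short end, ~1.0/s at the long end - consistent with the article.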

I guess having over 200 million subscribers does help achieve these sorts of numbers 🙂 . I have a bit of background on Airtel's App Central store and the technology it uses, much of it IBM technology.  IBM Portal and Mobile Portal Accelerator are used to drive the interface, which is able to support over 8,000 different devices, from iPhones to WebTVs (remember them?  They seem to be making a bit of a comeback at the moment) and everything in-between.  These screen dumps are from their old mobile site – I will post some new ones if I can get them soon.


 Airtel’s App Central on a PC

iPhone 4 Facetime standards

Originally posted on 15Jun10 to IBM Developerworks (11,653 Views)

Nokia e71 making a video call

Since I penned my last post, I have done some more reading on Facetime and watched Steve Jobs' launch of Facetime.  While I will happily admit that Apple have in fact used some standards within their Facetime technology (Jobs lists H.264, AAC, SIP, STUN, TURN, ICE, RTP and SRTP as all being used), I am somewhat bemused by the "standards" discussion that most of the media seem to be focusing on with regard to Facetime.  Almost everyone that refers to compliance with standards is talking about interoperability with current PC-based video chat capabilities – from the likes of Skype, MS Messenger, GTalk and others.  Am I the only one that has noticed the iPhone 4 is not a PC and is in fact a mobile phone?  Why is it that no one else is questioning interoperability with existing video-chat-capable mobile phones?
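As an aside, the standards Jobs listed do fit together in a well-understood way.  Here's a rough sketch of the kind of SDP offer a SIP client might send for an H.264/AAC call secured with SRTP, with ICE candidates gathered via STUN/TURN – the addresses, ports and payload numbers are invented for illustration, and this is not a captured Facetime handshake:

    # Illustrative only: a hand-written SDP offer showing where the standards
    # Jobs listed would appear. Not a real Facetime session description.
    sdp_offer = "\n".join([
        "v=0",
        "o=alice 2890844526 2890844526 IN IP4 203.0.113.10",
        "s=video call",
        "c=IN IP4 203.0.113.10",
        "t=0 0",
        # Audio stream: AAC over SRTP (RTP/SAVP is the secure RTP profile)
        "m=audio 49170 RTP/SAVP 96",
        "a=rtpmap:96 MP4A-LATM/44100",                # AAC mapping (illustrative)
        "a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:<keying-material>",
        "a=candidate:1 1 UDP 2130706431 203.0.113.10 49170 typ host",  # ICE
        # Video stream: H.264 over SRTP
        "m=video 51372 RTP/SAVP 97",
        "a=rtpmap:97 H264/90000",                     # H.264 mapping (illustrative)
        "a=candidate:1 1 UDP 1694498815 203.0.113.10 51372 typ srflx "
        "raddr 192.168.0.2 rport 51372",              # reflexive candidate via STUN
    ])
    print(sdp_offer)   # this body would travel inside a SIP INVITE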

After thinking on this for a little while, I guess it might be that most of the media coverage about the iPhone 4 is coming from the USA – where it was launched.  It's only natural.  The problem with the US telecoms market is that it is not representative of the rest of the world, which has had video calling for ages and doesn't really use it.  Perhaps it was the overflowing Apple Kool-Aid fountain at the iPhone 4 launch that got the audience clapping when Jobs placed a video call, or perhaps it was just that they had never seen a video call before – I wasn't there, so I can't be sure.  Right now, the Facetime capability on the iPhone 4 is only for WiFi connections – which makes it pretty limiting.  Apparently there is no setup required and no buddy list; you just use the phone number to make a video call – which is the way video calling already works (see the screen dump of my phone to the right and the short video below), but the WiFi limitation on the iPhone 4 will mean that you have to guess when the recipient is WiFi-connected.  At least with a standard 3GPP video call, the networks are ubiquitous enough to pretty much guarantee that if the recipient is connected to a network, they can receive a video call or at least a voice call.  Jobs didn't explain what would happen if the recipient was not WiFi-connected – does it just make a voice call instead?  I hope so.

(Note: the original post had a flash video of a video call conducted from my Nokia e71 phone – I’m trying to find the original recording of the call (3GVideoCall/3GVideoCall_controller.swf) and I’ll update this post if I can find it)

If you look at the pixelation and general poor quality of the video call, consider that I am in a UMTS coverage area, not HSPA (the phone would indicate 3.5G if I were), so this is what was available more than seven years ago in Australia, and longer ago in other countries. If I were in an HSDPA coverage area, I would expect the video call to be higher quality due to the increased bandwidth available.

I recall that in 2003, Hutchison 3 launched their 3G network in Australia with much fanfare.  Video calling was a key part of the 3G launch in Australia for all of the telcos.  This article from the 14Apr03 Sydney Morning Herald (one day before the first official 3G network launch in Australia) illustrates what I am talking about.  The authors say that the network's "…main feature is that it makes video calling possible via mobile phone."  Think about it for a second.  That's from more than seven years ago, and Australia was far from the first country to get a 3G network.  A lifetime in today's technology evolution.  Still the crowd clapped and cheered as Jobs made a video call.  If I had been in the audience, I think I would have yawned at that point.

The other interesting thing that I noticed in Jobs' speech was his swipe at the Telcos.  He implied that they needed to get their networks in order to support video calls.  Evidence from the rest of the world would suggest that is not the case – perhaps it is in the USA, or perhaps he is trying to deflect blame for not allowing Facetime over 3G connections away from Apple and back to the likes of AT&T, who have copped a lot of flak over their alleged influence on Apple's App Store policies for applications that could be seen as competitive with services from AT&T.  I am not sure how much stick AT&T deserve on that front, but it's pretty obvious from Jobs' comments that he is not in love with carriers – and certainly, from what I've seen, carriers are not in love with Apple.  It might be interesting to see how long the relationship lasts.  My guess is that as long as Apple devices continue to be popular, both parties will be forced to share the same bed.

On another related point, I have been searching the Internet to find out which standards body Apple has submitted Facetime to for certification – Jobs said in the launch that it would be done "tomorrow".  This could be marketing speak for 'in the future', or it could literally mean the day after he launched the iPhone 4.  If anyone knows, please let me know – I want to have a look into the way Facetime works.


Thanks very much to my colleague Geoff Nicholls for taking the Video Call in the video above.

Jobs has lofty goal for iPhone 4’s FaceTime video chat with open standard – Computerworld

Originally posted on 10Jun10 to IBM Developerworks (11,776 Views)

Regarding this article: http://www.computerworld.com/s/article/9177819/Jobs_has_lofty_goal_for_iPhone_4_s_FaceTime_video_chat_with_open_standard


I came across this article today – Apple wanting to propose their new Facetime technology as a standard for video chat, now that they finally have a camera on the front of the iPhone 4.  I'm now on my second phone with a front-facing camera (that's at least four years that my phones have had video chat capabilities), and video chat has not proved to be much more than a curiosity wherever Telcos have launched it around the world.  I recall the first 3G network launch in Australia – for Hutchison's '3' network – video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it.  I wonder why Steve Jobs thinks this will be any different.  At least the video chat capabilities that are already in the market comply with a standard, which means that on my Nokia phone I can have a video call with someone on a (say) Motorola phone.  With Apple's Facetime, it's only iPhone 4 to iPhone 4 (a device which, I hasten to add, does not support a 4G network like LTE or WiMax).  If Apple really is worried about standards, as the Computerworld article suggests, then I have to ask why Apple doesn't make their software comply with the existing 3GPP video call standards instead of 'inventing their own'.  If Apple were truly concerned about interoperability, that would have been a more sensible path.

According to Wikipedia, in Q2 2007 there were "… over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries."  Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and more latterly, HSPA networks) have been deployed throughout the world.  Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol.  That protocol (3G-324M – see this article from commdesign.com for a great explanation of the protocol and its history, from way back in 2003!) has been around for a while, and yes, it was developed because the original UMTS networks couldn't support IPv6 or the low-latency connectivity needed to provide a good quality video call over a purely IP infrastructure.  But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP) and mobile WiMax being deployed in the USA by Sprint and at a few other locations around the world (see the WiMax Forum's April 2010 report – note that the majority of these WiMax deployments are not for mobile WiMax, and as far as I know Sprint are the first to be actively deploying WiMax-enabled mobile phones as opposed to mobile broadband USB modems).  So perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks.  I think that would be a valid thing to do right now.  If it were up to me, I would be looking at SIP-based solutions and learning from the success that companies like Skype have had with their video calling (albeit only on PCs and with proprietary technology) – wouldn't it be great if you could video call anyone from any device?
I guess the thing that annoys me most about Apple's arrogance is the way they ignore the prior work in the field.  Wouldn't it be better to make Facetime compatible with the hundreds of millions of handsets already deployed, rather than introduce yet another incompatible technology and proclaim it as "… going to be a standard"?

My 2c worth…

ICE at TeleManagement World 2010 – a great example of real benefits from TMF Frameworx

Originally posted on 29May10 to IBM Developerworks (23,580 Views)

Yes, I should have posted this a week ago during the TeleManagement World conference – I’ve been busy since then and the wireless network at the conference was not available in most of the session rooms – at least that is my excuse.

Ricardo Mata, Sub-Director, VertICE (OSS) Project from ICE

At Impact 2010 in Las Vegas, we heard from the IBM Business Partner (GBM) about the ICE project.  At TMW 2010, it was ICE themselves presenting on their journey down the TeleManagement Forum Frameworx path.  Ricardo Mata, Sub-Director of the VertICE (OSS) Project at ICE (see his picture to the right), presented on ICE's projects to move Costa Rica's legacy carrier to a position that will allow them to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there.  ICE used IBM's middleware to integrate components from a range of vendors and align them to the TeleManagement Forum's Frameworx (the new name for eTOM, TAM and SID).  In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.

I wish I could share the entire slide pack with you, but I think I might incur the wrath of the TeleManagement Forum if I were to do that.  If you want to see these great presentations from Telcos all around the world, you will just have to stump up the cash and get yourself to Nice next year.  Finally, I want to illustrate the integration architecture that ICE used – this diagram is similar to the one from Impact, but importantly, I think it shows ICE's view of the architecture rather than IBM's or GBM's.

For the benefit of those that don’t understand some of the acronyms in the architecture diagram above, let me explain them a bit:

  • ESB – Enterprise Service Bus
  • TOCP – Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack) – IBM's product to help Telcos align with the TMF Frameworx
  • NGOSS – Next Generation Operations Support Systems (the old name for TMF Frameworx)
  • TAM – Telecom Applications Map
  • SID – Shared Information / Data model (a tiny illustrative sketch of the idea follows below)
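For anyone wondering what "aligning to the SID" actually buys you, here's a heavily simplified sketch of the flavour of it: a shared, vendor-neutral information model that every system integrated over the ESB agrees to speak.  The classes and fields below are my own toy simplification, not the real SID schema:

    # Toy, heavily simplified take on SID-style shared entities. The real SID
    # defines hundreds of entities (Customer, Product, Service, Resource, ...);
    # these dataclasses only illustrate the idea of one common model that every
    # OSS/BSS component on the bus exchanges.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CustomerAccount:
        account_id: str
        name: str

    @dataclass
    class Service:
        service_id: str
        specification: str           # e.g. the name of a service specification
        state: str = "designed"      # designed -> provisioned -> active ...

    @dataclass
    class CustomerOrder:
        order_id: str
        account: CustomerAccount
        services: List[Service] = field(default_factory=list)

    # CRM, provisioning and billing systems all exchange the same shapes, so the
    # ESB mediates transport and format rather than arguing about meaning.
    order = CustomerOrder("ORD-1", CustomerAccount("ACC-7", "Example Subscriber"),
                          [Service("SVC-42", "Broadband 10Mbps")])
    print(order)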