Jobs has lofty goal for iPhone 4’s FaceTime video chat with open standard – Computerworld

Originally posted on 10Jun10 to IBM Developerworks (11,776 Views)

Regarding this article: http://www.computerworld.com/s/article/9177819/Jobs_has_lofty_goal_for_iPhone_4_s_FaceTime_video_chat_with_open_standard


I came across this article today – Apple wanting to propose their new FaceTime technology for video chat now that they finally have a camera on the front of their iPhone 4.  I'm now on my second phone with a camera on the front (that's at least four years that my phones have had video chat capabilities), and the feature has not proved to be much more than a curiosity where Telcos have launched it around the world.  I recall the first 3G network launch in Australia – for Hutchison's '3' network – video chat was seen as the next big thing, the killer application, yet apart from featuring in some reality shows on TV, very few people used it.  I wonder why Steve Jobs thinks this will be any different.  At least the video chat capabilities already in the market comply with a standard, which means that on my Nokia phone I can have a video call with someone on a (say) Motorola phone.  With Apple's FaceTime, it's iPhone 4 to iPhone 4 only (and the iPhone 4 does not support a 4G network like LTE or WiMAX, I hasten to add).  If Apple really is worried about standards, as the Computerworld article suggests, then I have to ask why Apple doesn't make its software comply with the existing 3GPP video call standards instead of 'inventing their own'.  If Apple were truly concerned about interoperability, that would have been the more sensible path.

According to Wikipedia, in Q2 2007 there were "… over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries."  Today, in 2010, I would feel very confident in doubling those figures given the rate at which UMTS networks (and more recently, HSPA networks) have been deployed throughout the world.  Of note is that the Chinese 3G standard (TD-SCDMA) also supports the same video call protocol.  That protocol (3G-324M – see this article from commdesign.com for a great explanation of the protocol and its history – from way back in 2003!) has been around for a while, and yes, it was developed because the original UMTS networks couldn't support IPv6 or the low-latency connectivity needed to provide a good quality video call over a purely IP infrastructure.

But things have changed, with LTE gathering steam all around the world (110 telcos across 48 countries according to 3GPP) and mobile WiMAX being deployed in the USA by Sprint and at a few other locations around the world (see the WiMAX Forum's April 2010 report – note that the majority of these WiMAX deployments are not for mobile WiMAX, and as far as I know Sprint are the first to be actively deploying WiMAX-enabled mobile phones as opposed to mobile broadband USB modems).  So perhaps it is time to revisit those video calling standards and update them with something that can take advantage of these faster networks.  I think that would be a valid thing to do right now.  If it were up to me, I would be looking at SIP-based solutions and learning from the success that companies like Skype have had with their video calling (albeit only on PCs and with proprietary technology) – wouldn't it be great if you could video call anyone from any device?
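To make the SIP suggestion concrete, here is roughly what a standards-based video call setup looks like on the wire: the caller sends a SIP INVITE carrying an SDP offer that lists the audio and video streams it wants to establish. This is a minimal, hand-written sketch – the addresses, ports and codecs are illustrative only:

```
INVITE sip:bob@example.com SIP/2.0
Via: SIP/2.0/UDP 192.0.2.10;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: Alice <sip:alice@example.com>;tag=1928301774
To: Bob <sip:bob@example.com>
Call-ID: a84b4c76e66710@192.0.2.10
CSeq: 314159 INVITE
Contact: <sip:alice@192.0.2.10>
Content-Type: application/sdp

v=0
o=alice 2890844526 2890844526 IN IP4 192.0.2.10
s=Video call
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 96
a=rtpmap:96 AMR/8000
m=video 51372 RTP/AVP 97
a=rtpmap:97 H264/90000
```

The called device answers with an SDP of its own and the two ends exchange RTP media directly. Because the signalling and codecs are open standards, any compliant endpoint can call any other – which is exactly the property FaceTime currently lacks.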
I guess the thing that annoys me most is Apple's arrogance in ignoring the prior work in the field.  Wouldn't it be better to make FaceTime compatible with the hundreds of millions of handsets already deployed, rather than introducing yet another incompatible technology and proclaiming it as "… going to be a standard"?

My 2c worth…

ICE at TeleManagement World 2010 – a great example of real benefits from TMF Frameworx

Originally posted on 29May10 to IBM Developerworks (23,580 Views)

Yes, I should have posted this a week ago during the TeleManagement World conference, but I've been busy since then and the wireless network at the conference was not available in most of the session rooms – at least, that's my excuse.

Ricardo Mata, Sub-Director, VertICE (OSS) Project from ICE

At Impact 2010 in Las Vegas we heard from the IBM Business Partner (GBM) on the ICE project.  At TMW 2010, it was ICE themselves presenting on their journey down the TeleManagement Forum Frameworx path.  Ricardo Mata, Sub-Director, VertICE (OSS) Project from ICE (see his picture to the right) presented on ICE's projects to move Costa Rica's legacy carrier to a position that will allow them to remain competitive when the government opens up the market to international competitors such as Telefonica, who are champing at the bit to get in there.  ICE used IBM's middleware to integrate components from a range of vendors and align them to the TeleManagement Forum's Frameworx (the new name for eTOM, TAM and SID).  In terms of what ICE wanted to achieve with this project (they call it PESSO), this diagram shows it really well.

I wish I could share the entire slide pack with you, but I think I might incur the wrath of the TeleManagement Forum if I were to do that.  If you want to see these great presentations from Telcos all around the world, you will just have to stump up the cash and get yourself to Nice next year.  Finally, I want to illustrate the integration architecture that ICE used – this diagram is similar to the one from Impact, but importantly it shows ICE's view of the architecture rather than IBM's or GBM's.

For the benefit of those that don’t understand some of the acronyms in the architecture diagram above, let me explain them a bit:

  • ESB – Enterprise Services Bus
  • TOCP – Telecom Operations Content Pack (the old name for WebSphere Telecom Content Pack) – IBM's product to help Telcos align with the TMF Frameworx
  • NGOSS – Next Generation Operations Support Systems (the old name for TMF Frameworx)
  • TAM – Telecom Applications Map
  • SID – Shared Information / Data model

Impact 2010 – ICE and CAFTA Next Generation OSS/BSS

Originally posted on 06May10 to IBM Developerworks (16,509 Views)

ICE present at Impact’10

In Costa Rica, the government-owned telco ICE is being forced to open up its market to competitors because of the Central American Free Trade Agreement (CAFTA) that Costa Rica has joined. This represents a huge change for ICE who, as a power and communications provider without a competitor in their market, had no competitive forces pushing them to modernise their systems and processes. As a result, fulfilment of basic services took weeks.

GBM, an IBM business partner, and IBM Software Group proposed to ICE that they base their new OSS/BSS architecture on the TeleManagement Forum's Frameworx (eTOM, TAM, SID, TNA), using the WebSphere Telecom Content Pack and IBM Dynamic Process Edition to give ICE the standards compliance and dynamic BPM capabilities they needed. By using WTCP and DPE, ICE reduced the effort required to build and deploy their new processes by an estimated 20-50%. A fundamental principle of Dynamic BPM is the Business Services layer, which sits on top of the BPM layer, which in turn sits on the SOA layer. A Business Service is abstracted up from the physical process. For instance, a business service might be 'Check Technical Availability', which would apply regardless of the service you are talking about – mobile, POTS or xDSL. These business services are defined within the Telecom Content Pack, which enables system integrators like GBM to accelerate the architecture work on projects like this one for ICE.
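To make that abstraction concrete, here's a minimal sketch in Python (purely illustrative – the content pack defines these business services as middleware artefacts, not Python) of one stable 'Check Technical Availability' business service fronting several product-specific implementations:

```python
from abc import ABC, abstractmethod

class TechnicalAvailabilityChecker(ABC):
    """Product-specific implementation hidden behind the business service."""
    @abstractmethod
    def check(self, address: str) -> bool: ...

class MobileChecker(TechnicalAvailabilityChecker):
    def check(self, address: str) -> bool:
        # would query a radio coverage database (stubbed here)
        return True

class XdslChecker(TechnicalAvailabilityChecker):
    def check(self, address: str) -> bool:
        # would query line-length / DSLAM port data (stubbed here)
        return True

# One stable business service, many product implementations.
CHECKERS = {"mobile": MobileChecker(), "xdsl": XdslChecker()}

def check_technical_availability(product: str, address: str) -> bool:
    """The 'Check Technical Availability' business service."""
    return CHECKERS[product].check(address)

print(check_technical_availability("xdsl", "1 Example Street"))
```

The processes above only ever see check_technical_availability; adding POTS or swapping an implementation doesn't disturb them.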

GBM made use of IBM's Rapid Delivery Environment (RDE), sending a number of their architects to the IBM Telecom Solution Lab in Austin, Texas for six weeks to conduct a proof of concept and to learn how to apply WTCP to a real customer situation such as the one faced by ICE. The RDE allowed GBM to work with the IBM experts to build the first few scenarios so that GBM could continue the work locally in Costa Rica without a lot of assistance from IBM. The other benefit of using the RDE is access to the eTOM level 4, 5 and 6 assets – the connections to the physical systems that the RDE has previously developed. For instance, the connection to the Oracle Infranet billing engine, which can then be reused by other customers who also engage with the RDE.
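That reuse idea is easiest to see as a narrow connector interface that hides the vendor system. A hypothetical sketch – the real RDE assets are middleware artefacts, and these operation names are invented for illustration:

```python
class BillingConnector:
    """Hypothetical low-level (eTOM level 4-6) connector contract."""
    def create_account(self, customer_id: str) -> str:
        raise NotImplementedError
    def rate_event(self, account_id: str, event: dict) -> float:
        raise NotImplementedError

class InfranetConnector(BillingConnector):
    """Built once in the RDE against Oracle's Infranet billing engine,
    then reused by later customer engagements (calls stubbed here)."""
    def create_account(self, customer_id: str) -> str:
        return f"ACCT-{customer_id}"
    def rate_event(self, account_id: str, event: dict) -> float:
        return 0.0

connector: BillingConnector = InfranetConnector()
print(connector.create_account("C-1001"))
```

Because every engagement codes to the connector contract, the expensive part – the vendor-specific implementation – is written once and carried forward.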

GBM and ICE have not yet been able to measure the acceleration that WTCP and DPE provided, but anecdotal evidence suggests that it was significant. In preparation for CAFTA, ICE have already launched a 3G network and are preparing to launch pre-paid services to compete with the several new operators that will enter the market this year.

Impact 2010 – AT&T, Using SOA & BPM to accelerate business value

Originally posted on 05May10 to IBM Developerworks (16,501 Views)

AT&T are part way through a major SOA/BPM project which, if you know a little about their history*, must be an enormous task. They are introducing modelling tools and reverse-modelling their existing systems, as well as using a tool from iRise to prototype the user interfaces and reduce the risk of missing the business requirements.

They have deployed Rational RequisitePro to capture requirements without the need to drag users away from their beloved MS Word. In the last five months, their requirements register has grown from 15,000 requirements in January to over 30,000 now. That certainly illustrates the traction they are achieving with their business people. Users access RequisitePro via Citrix sessions, and the tools are available to thousands of business users.

AT&T are also exposing WebSphere Business Modeler and iRise to a smaller set of subject matter experts – building a Centre of Excellence in UI design and process modelling. So far, they have modelled over 800 process flows based on eTOM models which have been extended to meet their specific requirements. All of these are stored within a common Rational Asset Manager instance, which helps their business analysts to improve asset use and reuse.

Those process models feed directly into a model-driven development (MDD) method which is aligned with the requirements and process models. That MDD method uses WebSphere Integration Developer (WID) and Rational Software Architect (RSA) for development, with the WebSphere Process Server (WPS) runtime supported by WebSphere Business Modeler and WebSphere Service Registry and Repository (WSRR). IBM GBS have put in place processes to support AT&T's development life cycle and governance requirements.

Key success factors that AT&T see include:

  • Solve Critical Business Problems
  • Win over senior Exec support
  • Achieve Business Partner Alignment
  • Integrated Tools Approach
  • Organisational transformation
  • Infrastructure investment
  • Communicate, communicate, communicate!


* AT&T have been through multiple de-mergers and mergers and acquisitions over the past 10 years resulting in a hugely complex IT environment.

Impact 2010 – BPM in the Cloud

Originally posted on 05May10 to IBM Developerworks (10,946 Views)

I have just seen Amy Wohl of Amy D Wohl Opinions present on cloud computing. She went through the various cloud models and spoke about Community Clouds, by which she means multiple community-focused clouds as part of a larger (private) cloud. An example is the Vietnamese government, which bought an IBM CloudBurst to provide multiple virtual private clouds to small businesses in Vietnam so that they can have access to computing power that they would not otherwise be able to afford. For Telcos, this could be an offering to their local community groups – perhaps local schools, bars, sporting clubs, service clubs and the like – but also potentially for commercial organisations, perhaps small businesses.

She also made the interesting point that (in her opinion) we are too early in the cloud evolution to actually define standards. She believes that any standards set now would stifle innovation in cloud technology and interoperability. I was interested to hear this since I attended a web conference call a few weeks ago run by the TeleManagement Forum as part of its effort to create standards around clouds, particularly for enterprise use rather than public clouds. I guess enterprise cloud users are the type most likely to need interoperability first, hence the emphasis on standards.

John Falkl from IBM

Amy co-presented with John Falkl from IBM, who discussed BPM within the cloud. Given that BPM is a business function, subjects such as security are usually among the biggest hurdles for cloud services. There are multiple factors that fall under the title of 'security', such as encryption, roles, authentication (especially when using federated or external authentication services), legal data protection requirements and authorisations. John also pointed out a number of factors to consider in enterprise cloud services, including governance models (which he sees as an extension of normal enterprise governance models). John's view on standards for cloud services is that they will most likely start with Web Services standards such as WS-Provisioning, and he mentioned that there were multiple efforts around cloud standards under way. I might see if I can have a chat with both John and Amy after the session to get their views on the TMF's efforts around cloud standards. If that discussion is interesting, I will report back.

Amy made a really interesting point during the Q&A – she said that when she was at Microsoft a few weeks ago she asked about transactional activity in their cloud, and they said that MS could not do it. Very interesting, especially when you consider that transactional integrity is a core capability of IBM's cloud.

<edit>
I asked Amy about the TMF cloud standardisation effort – she hadn't heard about it, but did say that she thought the TMF's approach of asking enterprise customers to specify their requirements was right, and that those requirements were probably the right place to start for any cloud standards too.
</edit>

Impact 2010 – Orange France, Decreasing the development time for Telco apps

Originally posted on 05May10 to IBM Developerworks (8,995 Views)

Orange in France are using WebSphere sMash to provide an easy development environment, using PHP and Groovy to build Telco-enabled applications that consume Orange Application Programming Interfaces (APIs) exposed through pre-built widgets. The custom Orange API is not compliant with either OneAPI or ParlayX, and I would normally not endorse a custom API like this, but time-to-market pressures meant that Orange had to move before the (OneAPI) standards were in place. What I would take from their experience in France is their model and use cases, all of which could (now) be done using standard APIs. Interestingly, I think that Orange could also use IBM Mashup Center to support developers with even fewer skills than the PHP and Groovy developers they're currently targeting.
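As a rough illustration of why the standard APIs matter: GSMA OneAPI exposes Telco capabilities such as SMS as plain REST, so any web developer can consume them from any language. A hedged sketch in Python – the host, credentials and exact payload fields below are placeholders, not Orange's actual endpoint:

```python
import requests  # third-party HTTP client library

# Hypothetical OneAPI-style SMS endpoint; host and credentials are placeholders.
BASE = "https://oneapi.example-telco.com/smsmessaging/v1"
SENDER = "tel:+14155550100"

resp = requests.post(
    f"{BASE}/outbound/{SENDER}/requests",
    auth=("app_id", "app_secret"),          # placeholder credentials
    data={
        "address": "tel:+14155550123",      # recipient
        "senderAddress": SENDER,
        "message": "Hello from a OneAPI-style REST call",
    },
)
print(resp.status_code, resp.headers.get("Location"))
```

The point is less the exact fields than the shape: a widget, a PHP page or a Groovy script can all drive the same standard interface, which is exactly what a custom API forfeits.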

http://orange-innovation.tv/webtv/getVideo.php?id=1040

Impact 2010 – Telus overview – Ed Jung

Originally posted on 04May10 to IBM Developerworks (9,176 Views)

Ed Jung, Telus Canada

Telus is a Communications Service Provider in Canada, the second largest in their market, with 12M connections (wireline, mobile and broadband). Telus have a very complex mix of products, services and systems, and they need to maximise their investments while still being able to grow and keep a lid on their costs. New projects still need to be implemented through good times and bad, so they need an architecture that will allow Telus to continue to grow and contain costs through a range of economic conditions. Telus selected an agile strategy: a reasonable investment early on, with the plan to then support new 'projects' through small incremental investments. Ed Jung from Telus characterised the 'projects' in the later stages as rule or policy changes which may or may not require a formal release.

To achieve this agility, Telus are using WebSphere Telecom Content Pack (WTCP) as an accelerator to keep costs down while still maintaining standards compliance for their architecture. He sees the key success factors as:

  • Selecting a key implementation partner (IBM)
  • Using standards where possible to maintain consistency

Telus elected to start with fulfilment scenarios within their IPTV system. The basis for this is a data mapping to and from a common model – within the TeleManagement Forum's standards, that common model is the SID. Ed sees this common model as key to their success.
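The common-model idea is simple to sketch: every system-specific record is translated into (and out of) a SID-style shape at the boundary, so the processes only ever see one model. A toy example in Python – the field names are invented for illustration, and the real SID entities are far richer:

```python
# A vendor-specific record as it might arrive from an IPTV fulfilment system.
vendor_order = {"custNo": "C-1001", "svc": "IPTV-GOLD", "addr": "12 Example Ave"}

def to_common_model(rec: dict) -> dict:
    """Map a vendor record onto a SID-style CustomerOrder shape
    (invented field names; the real SID model is far richer)."""
    return {
        "customerOrder": {
            "customerId": rec["custNo"],
            "orderItems": [{"productOffering": rec["svc"]}],
            "deliveryAddress": rec["addr"],
        }
    }

print(to_common_model(vendor_order))
```

Each new system only needs its own pair of mappings to and from the common model, rather than point-to-point mappings to every other system.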

Dynamic endpoint selection is used within Telus to enable their processes to integrate with and participate in their BPM layer. Ed suggests the key factors for a successful WTCP project are:

  • Adopt a reference architecture
  • Select a good partner
  • Seed money for lab trials
  • Refine architecture
  • Choose correct pilots
  • Put governance in place (business and architects)
  • Configure data / reduce code

Ed thinks that last point (configure data / reduce code) is the best description of an agile architecture, one that really drives a lower total cost of ownership as well as lower capital expenditure for each project.
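It is also exactly what the dynamic endpoint selection mentioned above does in practice: the process resolves its target endpoint from registry data at runtime rather than hard-coding it, so a routing change is a data change, not a code release. A minimal sketch – in a real deployment the lookup would go to a registry such as WSRR, and the table below is purely illustrative:

```python
# Endpoint registry data (in production this would live in a service
# registry such as WSRR, not in code).
ENDPOINTS = {
    ("CheckTechnicalAvailability", "iptv"): "http://iptv-oss.internal/avail",
    ("CheckTechnicalAvailability", "mobile"): "http://mobile-oss.internal/avail",
}

def resolve_endpoint(operation: str, product: str) -> str:
    """Pick the service endpoint at runtime from configuration data."""
    return ENDPOINTS[(operation, product)]

# Re-routing IPTV availability checks to a new OSS is a one-row data
# change -- no process redeployment required.
print(resolve_endpoint("CheckTechnicalAvailability", "iptv"))
```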

Towards a Future in the Clouds (for Telcos)

Originally posted on 15Feb10 to IBM Developerworks where it got 12,073 Views

A colleague of mine at IBM, Anthony Behan, has just had an article published in BillingOSS magazine.  I'll admit that I had never heard of the magazine before, but this particular issue has quite a few articles about cloud computing in a Telco environment. I don't agree with all of the content in the e-zine, but it is still an interesting read nonetheless.  Check out the full issue at http://www.billingoss.com/101 and Anthony's article on p. 48.
 
The image is a screen capture of Anthony’s article from the billingoss.com web site.

A tale of three National Broadband Networks

Originally posted on 21Feb10 to IBM Developerworks where it got 12,303 Views

Providing a National Broadband Network within a country is seen by many governments as a way to help their population and country compete with other countries.  I have been involved in three NBN projects: Australia, Singapore and New Zealand.  I don't claim to be an expert in all three projects (which are ongoing), but I thought I would share some observations and comparisons between them.

Where Australia and Singapore have both opted to build a new network with (potentially) new companies running it, New Zealand has taken a different path.  The Kiwis have decided to split the incumbent (and formerly monopoly) Telecom New Zealand into three semi-separated 'companies' – Retail, Wholesale and Chorus (the network) – but only for the 'regulated products', which for the New Zealand government means 'broadband'.  All three still report to a single TNZ CEO.  I have not seen any direction in terms of Fibre to the Home or Fibre to the Node; the product has just been defined as 'broadband'.  The really strange thing with this split is that the three business units will continue to operate as they did in the past for other non-regulated products such as voice.
 
As an aside, the Kiwi government not regulating voice seems an odd decision to me – especially when you compare it to countries like Australia and the USA, where the government has mandated that the Telcos provide equivalent voice services to the entire population. Sure, New Zealand is a much smaller country, but it is not without its own geographic challenges in providing services to all Kiwis, yet voice remains unregulated.

Telecom NZ is now Spark

A key part of the separation is that these three business units are obliged to provide the same level of service to external companies as they provide to Telecom's other business units.  For example, if Vodafone wants to sell a Telecom Wholesale product, then Telecom Wholesale MUST treat Vodafone identically to the way they treat Telecom Retail.  Likewise, Chorus must do the same for its customers, which would include ISPs as well as potentially other local Telcos (Vodafone, TelstraClear and 2degrees).  This equivalency of input seems to me to be an attempt to get to a similar place to Singapore (more on that later).  Telecom NZ have already spent tens of millions of NZ$ to this point and they don't have a lot to show for it yet.  It seems to me like the government is trying to get to an NBN state of play by using Telecom's current network and perhaps adding to it as needed.  For the Kiwi population, that's nothing flash like Fibre to the Home, but more like Fibre to the Node with a DSL last-mile connection.  That will obviously limit the sorts of services that could be delivered over the network.  When other countries are talking about speeds in excess of 100Mbps to the home, New Zealand will be limited to DSL speeds until the network is extended to a full FTTH deployment (not planned at the moment, as far as I am aware).

Singapore, rather than splitting up an existing telco (like SingTel or StarHub), has gone to tender for the three layers – Network, Wholesale and Retail.  The government (Singapore Ltd) has decided that there should be only one network, run by one company (Nucleus Connect – providing Fibre to the Home), that there would be a maximum of three wholesale companies, and as many retail companies as the market will support.  A big difference from New Zealand is that the Singapore government wants the wholesalers to offer a range of value-added services – what they refer to as 'sit forward' services that engage the population, rather than 'sit back' services that do not.  Retail companies would be free to pick and choose wholesale products from different wholesalers to provide differentiation of services.

Singapore, New Zealand and Australia are vastly different countries – Singapore is only 700 km² in size, Australia is a continent in its own right, and New Zealand sits somewhere in between, at the smaller end.  This is naturally going to have a dramatic effect on each government's approach to an NBN.  Singapore's highly structured approach is typical of the way Singapore does things.  Australia's approach is less controlled – due to the nature of the political environment in Australia rather than its size – and New Zealand's approach seems somewhat half-hearted by comparison.  I am not sure why the NZ government has not elected to build a new network independent of Telecom NZ's current network.

In Australia, on the other hand, the government have set up the Communications Alliance to manage the NBN and subcontract to the likes of Telstra, Optus and others.  The interesting thing with that approach (other than the false start that has already cost Australian taxpayers AU$30 million), and the thing that sets it apart from Singapore, is that it doesn't seem to have any focus on value-added services – it's all about the network.  Even the wholesaler plan for Australia is talking about layer 2 protocols (see the Communications Alliance wiki).  All of the documents I have seen from the Communications Alliance are about the network – all very low-level stuff.

Of course, these three countries are not the only ones going through an NBN project.  For example, the Philippines had a shot at one a few years ago – the bid was won by ZTE, but then a huge scandal caused the project to be abandoned.  It came back a while later as the Government Broadband Network (GBN), but that doesn't really help the average Filipino.  It's interesting to see how these projects develop around the world…

Quality, Speed, Price: Pick two

Originally posted on 02Feb10 to IBM Developerworks where it got 15,259 Views

On the Wednesday of the week before last (the week before my leave), at about 1am my time, I got an urgent request for an RFI response to be presented back to the customer at Friday noon (GMT+8 – 3pm for me, and 2.5 business days for the locals in that timezone).  This RFI was asking lots of hypothetical questions about what this particular telco might do with their Service Delivery Platform (SDP).  It had plenty of requirements like "Email service" or "App Store Service" and so on.  These 'use cases' made up 25% of the overall score, but did not have any more detail than I have quoted here.  Two to four words for each use case.  Crazy!  If I am responding to this, such loose scope means I can interpret the use cases any way that I want.  It also means that to meet all 14 use cases – ranging from 'Instant Messaging and Presence Service (IMPS)' to 'Media Content and Management Service' to 'Next-Generation Network Convergence innovative services' – the proposal and the system would have to be a monster with lots of components.  The real problem with such vague requirements is that vendors will answer the way they think the customer wants them to, rather than the customer telling them what they want to see in the response.  The result will be six or eight different responses that vary so much that they cannot be compared, which defeats the whole point of running an RFI process – to compare vendors and ultimately select one to grant the project to.

On top of the poor quality of the RFI itself, the lack of time to respond creates great difficulties for the people responding.  'So what? I don't care, it's their job' you might say, and to an extent you would be correct, but think about it like this: a short timeframe to respond means that the vendor has to find whoever they can internally to respond – they don't have time to find the best person.  A short timeframe means that the customer is more likely to get a cookie-cutter solution (one that the vendor has done before) rather than a solution designed to meet their actual needs.  A short timeframe means that the vendor may not have enough time to do a proper risk assessment and quality assurance on the proposal – both of which will increase the cost quoted in the proposal.

All of these factors should be of interest to the Telco that is asking for the proposal because they all have a direct effect on the quality and price of the project and ultimately the success of the project. 

I know this problem is not unique to the Telecom industry, but of all the industries I have worked with in my IT career, the Telcos seem to do it most often.  I could go on and on quoting examples of ultra-short lead times to write proposals – sometimes as little as 24 hours (to answer 600 questions in that case) – but all it would do is get me riled up thinking about them.

The whole subject reminds me of what my boss in a photo lab (long before my IT career began) used to say: "Quality, Speed, Price: Pick two".  Think about it – it rings true, doesn't it?