Impact 2010 – Orange France: Decreasing the development time for Telco apps

Originally posted on 05May10 to IBM Developerworks (8,995 Views)

Orange in France are using WebSphere sMash to provide an easy development environment, using PHP and Groovy to build Telco-enabled applications that consume Orange's Application Programming Interfaces (APIs), which are exposed through pre-built widgets. The custom Orange API is not compliant with either OneAPI or ParlayX, and I would normally not endorse a custom API like this, but time-to-market pressures meant that Orange had to move before the (OneAPI) standards were in place. What I would take from their experience in France is their model and use cases, all of which could be done now using standards for those APIs. Interestingly, I think that Orange could also use IBM Mashup Center to support developers with even fewer skills than the PHP and Groovy developers they're currently targeting.

http://orange-innovation.tv/webtv/getVideo.php?id=1040

Impact 2010 – Gridit case study

Originally posted on 05May10 to IBM Developerworks (13,014 Views)

Gridit is a Finnish company, founded only in 2009, that provides online retail services. They are owned by nine local network providers. Think of them as an aggregated application store that sells a broad range of services and products from those nine network companies as well as third-party content providers. They plan to sell services and content such as:

  • Music
  • Games
  • e-books
  • Access
  • Data storage
  • Information Security
  • Home services
  • VoIP
  • IPTV

They do not make exclusive agreements with the content/service providers, giving their customers freedom of choice. For Gridit, the customer is king – they will seek out new content providers if there is demand from the customers. Gridit also interact with local network providers and third-party content providers, giving the customers a single point of contact and billing for the services that they resell.

What Gridit are providing is pretty similar to an app store solution we deployed last year in Vietnam, which was also a joint venture between a number of Telcos and a bank, providing a retail online store for products and services from those communications providers as well as third-party content providers. The difference is that Gridit are also offering a hosted wholesale service – I could go to Gridit and build my new company 'Larmourcom', offering products and services from a range of providers that Gridit front-ends for Larmourcom. Gridit can stand up an online commerce portal for Larmourcom and also provide an interface to the back-end providers to allow for traditional and non-traditional service assurance, fulfilment and billing processes.

To achieve this abstraction from the back-end providers, Gridit have used WebSphere Telecom Content Pack to provide an architectural framework and accelerator for all of those services. IBM has helped Gridit to take the processes defined within the TeleManagement Forum's standards (eTOM, TAM, SID) and map them to the lower-level processes, wherever the content or services come from.

Like the Vietnamese app store, Gridit are also using WebSphere Commerce to provide the online commerce and catalogue. For Gridit, the benefits they expect to see (as a result of a Business Value Assessment that was conducted) include 48% faster time to value by using Dynamic BPM and Telecom Content Pack versus a traditional BPM model. That is real business value and a great story for both Gridit and IBM.

Impact 2010 – Telus overview – Ed Jung

Originally posted on 04May10 to IBM Developerworks (9,176 Views)

Ed Jung, Telus Canada

Telus is a Communications Service Provider in Canada, the second largest in their market with 12M connections (wireline, mobile and broadband). Telus have a very complex mix of products, services and systems, and they need to maximise their investments while still being able to grow and keep a lid on their costs. New projects still need to be implemented through good times and bad, so they need an architecture that will allow Telus to continue to grow and contain costs through a range of economic conditions. Telus selected an agile strategy: a reasonable investment early on, with the plan to become agile and support new 'projects' through small add-ons in terms of investment. Ed Jung from Telus characterised the 'projects' in the later stages as rule or policy changes which may or may not require a formal release.

To achieve this agility, Telus are using WebSphere Telecom Content Pack (WTCP) as an accelerator to keep costs down, while still maintaining standards compliance for their architecture. He sees the key success factors as:

  • Selecting a key implementation partner (IBM)
  • Using standards where possible to maintain consistency

For Telus, they elected to start with fulfilment scenarios within their IPTV system. The basis for this is a data mapping to and from a common model – within the TeleManagement Forum’s standards, that relates to the SID. Ed sees this common model as key to their success.

Dynamic endpoint selection is used within Telus to enable their processes to integrate and participate with their BPM layer. Ed suggests the key factors for a successful WTCP project are:

  • Adopt a reference architecture
  • Select a good partner
  • Seed money for lab trials
  • Refine architecture
  • Choose correct pilots
  • Put governance in place (business and architects)
  • Configure data / reduce code

Ed thinks that last point (configure data / reduce code) is the best description of an agile architecture that really drives a lower total cost of ownership for projects, as well as lower capital expenditure for each project.

Am I the only one over the Apple hype?

Originally posted on 21Apr10 to IBM Developerworks and got 8,859 Views

I am sitting here in Singapore reading today's Straits Times, keeping up with affairs in the region and around the world, when on page 3 (the most important page in a newspaper after the front page) is an article about the leaked/lost next-generation iPhone that Gizmodo reportedly paid US$5,000 for (other online reports I've read have suggested other amounts, such as US$350; I'm not sure who is right). The article occupied almost half of page 3… for the next-gen iPhone… That seems excessive to me for a non-specialist publication, but I guess it is reflective of the general hype that exists around Apple products. The previous hype was around the next-gen MacBooks with faster processors, and prior to that the iPad. I've read articles suggesting that the iPad will revolutionise newspapers and home computing and telcos. I'm not so sure. While I think a lot of iPads will be sold worldwide (once it is released outside of the USA), I also think a lot of those devices will get a lot of use through a honeymoon period and then sit idle until they are eventually disposed of. I am so sick of the hype around all these Apple products. There are some things that Apple do really well (UI and design) and some they do really poorly (business use support, locking in users). I respect them, but I do not like them. It reminds me of a great parody that The Onion did a while ago:
Apple Introduces Revolutionary New Laptop With No Keyboard

Towards a Future in the Clouds (for Telcos)

Originally posted on 15Feb10 to IBM Developerworks where it got 12,073 Views

A colleague of mine at IBM, Anthony Behan, has just had an article published in BillingOSS magazine. I'll admit that I had never heard of the magazine before, but this particular issue has quite a few articles about Cloud computing in a Telco environment. I don't agree with all of the content in the e-zine, but it is an interesting read nonetheless. Check out the full issue at http://www.billingoss.com/101 and Anthony's article on p. 48.
 
The image is a screen capture of Anthony’s article from the billingoss.com web site.

A tale of three National Broadband Networks

Originally posted on 21Feb10 to IBM Developerworks where it got  12,303 Views

Providing a National Broadband Network within a country is seen by many governments as a way to help their population and country compete with other countries. I have been involved in three NBN projects: Australia, Singapore and New Zealand. I don't claim to be an expert in all three projects (which are ongoing), but I thought I would share some observations and comparisons between them.

Where Australia and Singapore have both opted to build a new network with (potentially) new companies running it, New Zealand has taken a different path. The Kiwis have decided to split the incumbent (and former monopoly) Telecom New Zealand into three semi-separated 'companies' – Retail, Wholesale and Chorus (the network) – but only for the 'regulated products', which for the New Zealand government means 'broadband'. They all still report to a single TNZ CEO. I have not seen any direction in terms of Fibre to the Home or Fibre to the Node; the product is just defined as 'broadband'. The really strange thing with this split is that the three business units will continue to operate as they did in the past for other non-regulated products such as voice.
 
As an aside, the Kiwi government not regulating voice seems an odd decision to me – especially when you compare it to countries like Australia and the USA, where the government has mandated that the Telcos provide equivalent voice services to the entire population. Sure, New Zealand is a much smaller country, but it is not without its own geographic challenges in providing services to all kiwis.

Telecom NZ is now Spark

A key part of the separation is that these three business units are obliged to provide the same level of service to external companies as they provide to Telecom and its other business units. For example, if Vodafone wants to sell a Telecom Wholesale product, then Telecom Wholesale MUST treat Vodafone identically to the way they treat Telecom Retail. Likewise, Chorus must do the same for its customers, which would include ISPs as well as potentially other local Telcos (Vodafone, TelstraClear and 2degrees). This equivalency of input seems to me to be an attempt to get to a similar place to Singapore (more on that later). Telecom NZ have already spent tens of millions of NZ$ to this point and they don't have a lot to show for it yet. It seems to me like the Government is trying to get to an NBN state of play by using Telecom's current network and perhaps adding to that as needed. For the kiwi population, that's nothing flash like fibre to the home, but more like Fibre to the Node with a DSL last mile. That will obviously limit the sorts of services that could be delivered over that network. When other countries are talking about speeds in excess of 100Mbps to the home, New Zealand will be limited to DSL speeds until the network is extended to a full FTTH deployment (not planned at the moment, as far as I am aware).

Singapore, rather than splitting up an existing telco (like Singtel or Starhub), have gone to tender for the three layers – Network, Wholesale and Retail. The government (Singapore Ltd) has decided that there should be only one network, run by one company (Nucleus Connect – providing Fibre to the Home), that there would be a maximum of three wholesale companies, and as many retail companies as the market will support. A big difference from New Zealand is that the Singapore government wants the wholesalers to offer a range of value-added services – what they refer to as 'sit forward' services that engage the population, rather than 'sit back' services that do not. Retail companies would be free to pick and choose wholesale products from different wholesalers to differentiate their services.

Singapore, New Zealand and Australia are vastly different countries – Singapore is only 700km2 in size, Australia is a continent in its own right, and New Zealand is at the smaller end of in between. This is naturally going to have a dramatic effect on each government's approach to an NBN. Singapore's highly structured approach is typical of the way Singapore does things. Australia's approach is less controlled – due to the nature of the political environment in Australia rather than its size – and New Zealand's approach seems somewhat half-hearted by comparison. I am not sure why the NZ government has not elected to build a new network independent of Telecom NZ's current network.

In Australia, on the other hand, the government have set up the Communications Alliance to manage the NBN and subcontract to the likes of Telstra, Optus and others. The interesting thing with that approach (other than the false start that has already cost the Australian taxpayers AU$30 million), and the thing that sets it apart from Singapore, is that it doesn't seem to have any focus on value-added services – it's all about the network. Even the wholesaler plan for Australia is talking about layer 2 protocols (see the Communications Alliance Wiki). All of the documents I have seen from the Communications Alliance are about the network – all very low-level stuff.

Of course, these three countries are not the only countries that are going through a NBN project.  For example the Philippines had a shot at one a few years ago – the bid was won by ZTE, but then a huge scandal caused the project to be abandoned.  It came back a while later as the Government Broadband Network (GBN) but that doesn’t really help the average Filipino.  It’s interesting to see how these projects develop around the world…

Quality, Speed, Price: Pick two

Originally posted on 02Feb10 to IBM Developerworks where it got 15,259 Views

On the Wednesday of the week before last (the week before my leave), at about 1am my time, I got an urgent request for an RFI response to be presented back to the customer at Friday noon (GMT+8 – 3pm for me – 2.5 business days for the locals in that timezone). This RFI was asking lots of hypothetical questions about what this particular telco might do with their Service Delivery Platform (SDP). It had plenty of requirements like "Email service" or "App Store Service" and so on. These 'use cases' made up 25% of the overall score, but did not have any more detail than I have quoted here. Two to four words for each use case. Crazy! If I am responding to this, such loose scope means I can interpret the use cases any way that I want. It also means that to meet all the use cases (14 in all), ranging from 'Instant Messaging Presence Service (IMPS)' to 'Media Content and Management Service' to 'Next-Generation Network Convergence innovative services', the proposal and the system would have to be a monster with lots of components. The real problem with such vague requirements is that vendors will answer the way they think the customer wants them to, rather than the customer telling them what they want to see in the response. The result will be six or eight responses that vary so much that they cannot be compared, which defeats the whole point of running the RFI process – to compare vendors and ultimately select one to grant the project to.

On top of the poor quality of the RFI itself, the lack of time to respond creates great difficulties for the people responding. 'So what? I don't care, it's their job,' you might say, and to an extent you would be correct, but think about it like this: a short timeframe to respond means that the vendor has to find whoever they can internally to respond – they don't have time to find the best person. A short timeframe means that the customer is more likely to get a cookie-cutter solution (one that the vendor has done before) rather than a solution that is designed to meet their actual needs. A short timeframe means that the vendor may not have enough time to do a proper risk assessment and quality assurance on the proposal – both of which will increase the cost quoted on the proposal.

All of these factors should be of interest to the Telco that is asking for the proposal because they all have a direct effect on the quality and price of the project and ultimately the success of the project. 

I know this problem is not unique to the Telecom industry, but of all the industries I have worked with in my IT career, the Telcos seem to do it more often.  I could go on and on quoting examples of ultra short lead times to write proposals – sometimes as little as 24 hours (to answer 600 questions in that case), but all it would do is get me riled up thinking about them.

The whole subject reminds me of what my boss in a photolab (long before my IT career began) would say “Quality, Speed, Price: Pick two”.  Think about it – it rings true doesn’t it?

How I go about sizing

Originally posted on 22Jan10 to IBM Developerworks where it got 20,321 Views

Sizing of software components (and therefore also hardware) is a task that I often need to perform. I spend a lot of time on it, so I figured I would share how I go about doing it and what factors I take into account. It is an inexact science. While I talk about sizing Telecom Web Services Server (TWSS) for the most part, the same principles can be applied to any sizing exercise. Please also note that the numbers stated are examples only and should NOT be used to perform any sizing calculations of your own!

Inevitably, when asked to do a sizing, I am forced to make assumptions about traffic predictions. I don't like doing it, but it is rare for customers to have really thought through the impact that their traffic estimates/projections will have on the sizing of a solution or its price.

Assumptions are OK

Just as long as you state them – in fact, they can be viewed as a way to wiggle out of any commitment to the sizing should ANY of the assumptions not hold true once the solution has been deployed. Let me give you an example: I have seen RFPs that asked for 500 Transactions Per Second (TPS), but neglected to state anywhere what a transaction actually is. When talking about a product like Telecom Web Services Server, you might assume that the transactions they're talking about are SMS, but in reality they might be talking about MMS or some custom transaction – a factor which would have a very significant effect on the sizing estimate. Almost always, different transaction types will place different loads on systems.

Similarly, it is rare for a WebSphere Process Server opportunity (at a Telco, anyway) to fully define the processes that will be implemented and their volumes once the system goes into production. So, what do I do in these cases? My first step is to try to get the customer to clear up the confusion. I often have multiple attempts at explaining to the customer why we need such specific information – it is to their benefit, after all: they're much more likely to get the right (sized) system for their needs. This is not always successful, so my next step is to make assumptions to fill in the holes in the customer's information. I am always careful to write those assumptions down and include them with my sizing estimations. At this point, industry experience and thinking about potential use cases really help to make the assumptions reasonable (or I think so, anyway 🙂).

For instance, if a telco has stated that the Parlay X Gateway must be able to service 5,760,000 SMS messages per day, I think it would be reasonable to assume that very close to 100% of those will be sent within a 16-hour window (while people are awake, and to avoid complaints to the telco about SMS messages that come in at all hours – remembering that we are talking about applications sending SMS messages, nothing to do with user-to-user SMS). That gets us down to 360,000 (5,760,000/16) SMS per hour, or 100 TPS for SendSMS over SMPP. Now, this is fine as an average number, but I guarantee that the distribution of those messages will not be even, so you have to assume that peak usage will be somewhat higher than 100 TPS, remembering that we have to size for peak load, not average. How much higher will depend on the use cases. If the customer can't give you those, then pick a number that your gut tells you is reasonable – let's say 35% higher than average, which is roughly 135 TPS of SendSMS over SMPP. (I say roughly because if that is your peak load, then as our total is constant for the day (5,760,000), the load must be lower during the non-busy hours. As we are making up numbers here anyway, I wouldn't worry about this discrepancy, and erring on the side of over-sizing is the safer option anyway – provided you don't overdo the over-sizing.)
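The busy-hour arithmetic above can be sketched in a few lines. All figures are the illustrative ones from the text (the 16-hour window and the 35% uplift are assumptions, as discussed), not numbers to size against:

```python
# Worked example of the busy-hour arithmetic. Illustrative only --
# do not use these figures for a real sizing.

messages_per_day = 5_760_000   # stated requirement: SendSMS over SMPP
active_window_h = 16           # assumption: ~100% of app traffic in 16 waking hours
peak_uplift = 0.35             # assumption: busy hour runs 35% above average

avg_tps = messages_per_day / (active_window_h * 3600)
peak_tps = avg_tps * (1 + peak_uplift)

print(f"average: {avg_tps:.0f} TPS, peak: {peak_tps:.0f} TPS")
# average: 100 TPS, peak: 135 TPS
```

Writing the calculation down like this also makes the assumptions explicit, which is exactly the point of the section above.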

Assumptions are your friend

I said I prefer not to make lots of assumptions, but stating stringent assumptions can be your friend if the system does not perform as you predicted and the influencing factors are not exactly as you stated in your assumptions. For instance, if you work on the basis of a 35% increase in load during the busy hour and it turns out to be 200%, your sizing is going to be way off. But because you asked the customer for the increase in load during the busy hour and they did not give you the information, you were forced to make an assumption – they know their business better than we ever could, and if they can't or won't predict such an increase during the busy hour, then we cannot reasonably be expected to predict it accurately either. The assumptions you stated will save your (and IBM's) neck. If you didn't explicitly state your assumptions, then you would be leaving yourself open to all sorts of consequences – and not good ones, at that.

Understand the hardware that you are deploying to

I saw a sizing estimate the other week that was supposed to handle about 500 TPS of SendSMS over SMPP, but the machine quoted would have been able to handle around 850 TPS; I would call that overdoing the over-sizing. This over-estimate happened because the person who did the sizing failed to take into account the differences between the chosen deployment platform and the platform that the TWSS performance team did their testing on.

If you look at the way our Processor Value Unit (PVU) based software licensing works, you will pretty quickly come to the conclusion that not all processors are equal. PVUs are based on the architecture of the CPU – some are valued at just 30 PVUs per core (SPARC eight-core CPUs), older Intel CPUs are 50 PVUs per core, while newer ones are 70 PVUs per core. PowerPC chips range from 80 to 120 PVUs per core. Basically, the higher the PVU rating, the more powerful each core on that CPU is.

Processors that are rated at higher PVUs per core are more likely to be able to handle more load per core than ones with lower PVU ratings. Unfortunately, PVUs are not granular enough to use as the basis for sizing (remember them, though – we will come back to PVUs later in the discussion). To compare the performance of different hardware, I use RPE2 benchmark scores. IBM's Systems and Technology Group (Hardware) keeps track of RPE2 scores for IBM hardware (System p and x, at least). Since pricing is done by CPU core, you should also do your sizing estimate by CPU core. For TWSS sizing, I use a spreadsheet from Ivan Heninger (ex WebSphere Software for Telecom Performance Team lead). Ivan's spreadsheet works on the basis of CPU cores for (very old) HS21 blades. Newer servers/CPUs and PowerPC servers are pretty much all faster than the old clunkers Ivan had for his testing. To bridge the gap between the capabilities of his old test environment and modern hardware, I use RPE2 scores. Since Ivan's spreadsheet delivers a number-of-cores-required result, I break the RPE2 score for each server down to an RPE2 score per core, then use the ratio between the RPE2 score per core for the new server and the test server to figure out how many cores of the new hardware are required to meet the performance requirement.
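That RPE2 scaling step boils down to a simple ratio. A minimal sketch, with made-up RPE2 figures purely for illustration (real scores come from the STG tables mentioned above):

```python
import math

# Scale a core count from the reference (test) server to new hardware
# using the ratio of RPE2-per-core scores. RPE2 values here are invented
# for illustration only.

def cores_on_new_hw(ref_cores_needed, ref_rpe2_per_core, new_rpe2_per_core):
    """Cores of new hardware needed to match the reference sizing."""
    ratio = new_rpe2_per_core / ref_rpe2_per_core
    return math.ceil(ref_cores_needed / ratio)   # round up to whole cores

# e.g. the spreadsheet says 12 cores of the old reference blade, and the
# new blade's cores score twice as high on RPE2:
print(cores_on_new_hw(12, 1000, 2000))   # -> 6
```

Note the `ceil`: you can only buy whole cores, so always round up, never down.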

OK – so now, using the spreadsheet, you key in the TPS required for the various transaction types – let's say 500 TPS of SendSMS over SMPP (just to keep it simple; normally you would also have to take into account Push WAP and MMS messages, not to mention other transaction types such as Location requests, which are not covered by the spreadsheet). That's 12 x 2 cores on Ivan's old clunkers, but on newer hardware such as newer HS21s with 3GHz CPUs it's 6 x 2 cores, and on JS12 blades it is also 6 x 2 cores. 'Oh, that's easy,' you say, 'the HS21s are only 50 PVUs each – I'll just go with Linux on HS21 blades and that will be the best bang for the buck for the customer.' Well, don't forget that Intel no longer makes dual-core CPUs for servers – they're all quad-core – so in the above example you have to buy 8 x 2 cores for the HS21s rather than the 6 x 2 cores for the JS12/JS22 blades.

Note the 'x 2' after each number: that is because for TWSS in production deployments, you must separate the TWSS Access Gateway (AG) and the TWSS Service Platform (SP). The 'x 2' indicates that the AG and the SP each require that number of cores.

Let's work that through:

Let's first say that TWSS is $850 per PVU.

For the faster HS21s – that's 8 x 2 x 50 x $850 = $680,000 for the TWSS licences alone.
For JS12s – that's 6 x 2 x 80 x $850 = $816,000 for the TWSS licences alone.

Also (and all sales people who are pricing this should know this), the pre-requisites for TWSS must be licensed separately as well. That means the appropriate number of PVUs for WESB (for the TWSS AG) and for WAS ND (for the TWSS SP), as well as the database. It's pretty easy to see how the numbers add up quickly and how much your sizing estimate can affect the price of the solution.
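The licence arithmetic above is simple enough to sketch. The $850-per-PVU price and the per-core PVU ratings are the illustrative figures from this post, not real list prices:

```python
# Licence cost = cores x PVUs-per-core x price-per-PVU.
# All figures illustrative, per the worked example above.

def licence_cost(cores, pvu_per_core, price_per_pvu=850):
    return cores * pvu_per_core * price_per_pvu

# TWSS needs the core count twice: once for the Access Gateway and
# once for the Service Platform (the "x 2" above).
hs21 = licence_cost(8 * 2, pvu_per_core=50)   # quad-core Intel at 50 PVU/core
js12 = licence_cost(6 * 2, pvu_per_core=80)   # PowerPC at 80 PVU/core

print(hs21, js12)   # 680000 816000
```

The same helper can then be reused for the WESB, WAS ND and database pre-requisites, each at its own PVU count.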

Database sizing for TWSS

For the database, of course we prefer to use DB2, but most telcos will demand Oracle, in my experience. For TWSS, the size of the server is usually not the bottleneck in the environment; what matters for high transaction rates is the DB writes and reads per second, which equates to disk input/output. It is VITAL to have an appropriate number of disk spindles in the database disk array to achieve the throughput required – the spreadsheet will give you the number of disk drives that need to be in the striped array to achieve the throughput. For the above 500 TPS example, it is 14.6 disks = 15 disks, since you can't buy only part of a disk. While striping (RAID 0) will give you throughput across your disk array, if one drive fails, you're sunk. To achieve protection, you must go with RAID 1+0 (sometimes called RAID 10), which gives you both striping (RAID 0) and mirroring (RAID 1). RAID 1+0 immediately doubles your disk count, so we're up to 30 disks in the array. Our friends at STG should be able to advise on the most suitable disk array unit to go with. In terms of CPU for the database server, as I said, it does not get heavily loaded. The spreadsheet indicates that 70.7% of the reference HS21 (Ivan's clunker) would be suitable, so a single-CPU JS12 or HS21 blade – even an old one – would be fine.

Every time I do a TWSS sizing, I get asked how much capacity we need in the RAID 1+0 disk array – despite my always asking for the smallest disks possible. Remember, we are going for a (potentially) large array to get throughput, not storage space. In reality, I would expect a single 32GB HDD would easily handle the size requirements for the database, so space is not an issue at all when we have 30 disks in our array. To answer the question about what size: the smallest possible, since that will also be the cheapest possible, provided it does not compromise the seek and data transfer rates of the drive. In the hypothetical 30-drive array, if we select the smallest drive now available (136GB), we would have a massive ~2TB of usable space (15 x 136GB, since the other 15 drives are mirrors), which is way over what we need in terms of space, but it is the only way we can currently get the throughput needed for the disk I/O on our database server. Exactly the same principles apply regardless of whether DB2 or Oracle is used for the database.
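Putting the disk arithmetic together – spindles for throughput, doubled for mirroring, capacity as a by-product. The 14.6-spindle figure is the spreadsheet output quoted above; everything here is illustrative:

```python
import math

# RAID 1+0 array sketch for the I/O-bound TWSS database, per the
# discussion above. Figures are the illustrative ones from the text.

spindles_for_io = 14.6    # spreadsheet output for ~500 TPS of SendSMS
disk_size_gb = 136        # smallest drive available

striped = math.ceil(spindles_for_io)   # RAID 0 stripe set: 15 whole disks
total_disks = striped * 2              # RAID 1+0 mirrors the stripe: 30 disks
usable_gb = striped * disk_size_gb     # capacity of one side of the mirror

print(total_disks, usable_gb)   # 30 2040
```

The point the numbers make: you buy 30 disks for I/O and get ~2TB of space you mostly don't need – the array is sized for spindles, not gigabytes.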

Something that I have yet to see empirical data on is how Solid State Drives (SSDs), with their higher I/O rates, would perform in a RAID 1+0 array. In such an I/O-intensive application, I suspect they would allow us to drop the total number of 'disks' in the array quite significantly, but I don't have any real data to back that up or to size an array of SSDs.

We have also considered using an in-memory database such as solidDB, either as the working database or as a 'cache' in front of DB2, but the problem there is that the level of SQL supported by solidDB is not the same as that supported by DB2 or Oracle's conventional database. Porting the TWSS code to use solidDB would require a significant investment in development.

Remember : Sizing estimates must always be multiples of the number of cores per CPU

Make sure you have enough overhead built into your calculations for other processes that may be using CPU cycles on your servers. I assume that the TWSS processes will only ever use a maximum of 50% of the CPU – that leaves the other 50% for other tasks and processes that may be running on the system. As a result, I always state that with my assumptions as well. As an example, I would say:

To achieve 500 TPS (peak) of SendSMS over SMPP at 50% CPU utilisation, you will need 960 PVUs of TWSS on JS12 (BladeCenter JS12 P6 4.0GHz-4MB (1ch/2co)) blades or 800 PVUs on HS21 (BladeCenter HS21 XM Xeon L5430 Quad Core 2.66GHz (1ch/4co)) blades. I would then list the assumptions that I had made to get to the 500 TPS figure, such as:

  • There is no allowance made for PushWAP or MMS included in the sizing estimate.
  • 500 TPS is the peak load and not an average load
  • SMSC has a SMPP interface available
  • All application driven SMS traffic will be during a 16 hour window
  • etc
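For completeness, here is how the 960 and 800 PVU figures in the example statement fall out of the earlier core counts. A sketch only: the core counts already embed the spreadsheet, RPE2 scaling and 50% headroom steps, and the PVU ratings are the illustrative per-core values used throughout this post:

```python
# PVUs = cores-per-tier x number-of-tiers x PVUs-per-core.
# For TWSS there are two tiers: the Access Gateway and the Service
# Platform each need the full core count (the earlier "x 2").

def pvus_needed(cores_per_tier, pvu_per_core, tiers=2):
    return cores_per_tier * tiers * pvu_per_core

print(pvus_needed(6, 80))   # JS12 blades (80 PVU/core) -> 960 PVUs
print(pvus_needed(8, 50))   # HS21 blades (50 PVU/core) -> 800 PVUs
```

Note how the hardware with fewer required cores (JS12) still ends up with more PVUs because of its higher per-core rating – which is exactly why the licence cost comparison earlier favoured the HS21s.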

What about High Availability?

Well, I think that High Availability (HA) is probably a topic in its own right, but it does have a significant effect on sizing, so I will talk about it in that regard. HA is generally specified in nines – by that I mean, if a customer asks for 'five nines', they mean 99.999% availability per annum (that's about 5.2 minutes per year of unplanned downtime). Three nines (99.9% available) or even two nines (99%) are also sometimes asked for. Often, customers will ask for five nines without realising the significant impact that such a requirement will have on the software, hardware and services sizing. If we start adding additional nodes into clusters for server components, that will not only improve the availability of that component, it will also increase the transaction capability – and the price. The trick is to find the right balance between hardware sizing and HA requirements. For example: say a customer wanted 400 TPS of Transaction X, but also wanted HA. Let's assume a single JS22 (2 x dual-core PowerPC) blade can handle the 400 TPS requirement. We could go with JS22 blades and just add more to the cluster to build up the availability and remove single points of failure. As soon as we do that, we are also increasing the licence cost along with the actual capacity of the component. With three nodes in the cluster, we would have 1200 TPS capability and three times the price of what they actually need, just to get HA. If instead we use JS12 blades (1 x dual-core PowerPC), which have half the computing power of a JS22, we could have three JS12s in a cluster, achieve 3 x 200 (say) TPS = 600 TPS, and even with a single node in the cluster down, still achieve the 400 TPS requirement. With JS12s, we meet the performance requirement and have the same level of HA as we did with 3 x JS22s, but the licensing price will be half that of the JS22-based solution (at 1.5 x the single-JS22 option).
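The JS22-versus-JS12 trade-off above can be checked with a small n-minus-one calculation. The TPS-per-blade and core counts are the illustrative figures from the example, nothing more:

```python
# Does a cluster still meet the TPS requirement with one node down,
# and what does it cost in cores? Figures from the worked example above.

def ha_option(blades, tps_per_blade, cores_per_blade, required_tps):
    surviving_tps = (blades - 1) * tps_per_blade   # capacity with one node failed
    return {"meets_ha": surviving_tps >= required_tps,
            "total_cores": blades * cores_per_blade}

required = 400
js22 = ha_option(blades=3, tps_per_blade=400, cores_per_blade=4, required_tps=required)
js12 = ha_option(blades=3, tps_per_blade=200, cores_per_blade=2, required_tps=required)

print(js22)   # {'meets_ha': True, 'total_cores': 12}
print(js12)   # {'meets_ha': True, 'total_cores': 6} -- same HA, half the cores to license
```

Both options survive a node failure, but the JS12 cluster does it on half the licensable cores – which is the whole point of right-sizing the blades to the HA requirement.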

I guess the point I am trying to get across is to think about your options and consider whether there are ways to adjust the deployment hardware to get the most appropriate sizing for the customer and their requirements. The whole thing just requires a bit of thinking…

What other tools are available for sizing?

IBMers have a range of tools available to help with sizing – the TWSS spreadsheet I was talking about earlier, various online tools and of course Techline. Techline is also available to our IBM Business Partners via the PartnerWorld web site (you need to be a registered Business Partner to access the Techline pages on the PartnerWorld site). For more mainstream products such as WAS, WPS, Portal etc., Techline is the team to help Business Partners – they have questionnaires that they will use to gather all the parameters they need to do the sizing. Techline is the initial contact point for sizing support. For more specialised product support (like TWSS and the other WebSphere Software for Telecom products) you may need to contact your local IBM team for help. If you are a partner, feel free to contact me directly for assistance with sizing WsT products.

There is an IBM class for IT Architects called ‘Architecting for Performance’ – don’t let the title put you off, others can do it – I did, and I am neither an architect (I am a specialist) nor from IBM Global Services (although everyone else in the class was!). If you get the opportunity to attend the class, I recommend it – you work through plenty of exercises, and while you don’t do any component sizing, you do do some whole-system sizing, which is a similar process. I am not sure if the class is open to Business Partners; if it is, I would also encourage architects and specialists from our BPs to do the class as well. Let me take that on as a task – I will see if it is available externally and report back.

Sizing estimation is not an exact science

:-)

As I glance back over this post, I guess I have been rambling a bit, but hopefully you now understand some of the factors in doing a sizing estimate. The introduction of assumptions and other factors beyond your knowledge and control makes sizing non-exact – it will always be an estimate and you cannot guarantee its accuracy. That is something you should also state along with your assumptions.

App Stores – are they the right move for Telcos?

Originally posted on 18Jan10 to IBM Developerworks where it got 13,471 Views

App Stores Background

I know lots of people are saying that Apple invented the Application Store (App Store) for their iPhone/iTouch range of devices, but they would be wrong. App stores have been around for years – I have been a customer of Handango since before I joined IBM’s Pervasive Computing team, and that team has been gone for over three years now. Handango are an Internet-based app store that have supported multiple handheld PDA and phone platforms. Others that I’ve used in the past include Tucows, although Tucows is more than just mobile applications – they also cover Win32, Linux, Mac etc. as well. The big things that Apple did differently from Handango and their Internet brethren were:

  • Restrict applications to a single platform (I count the iTouch and the iPhone as the same thing, since the key difference lies in the mobile phone part, not the computing part of the device)
  • Restrict the development tooling and platform environment by license restrictions (All applications must be approved by Apple and must not breach their license agreement – you still can’t get a Java Virtual Machine on an iPhone for instance)
  • Force users to install via their iTunes installation on their PC/Mac or over the air from their device. Not being an iPhone user, I am not 100% sure of this point. Is there an iTunes install for Linux? (Other platforms allow apps to be installed via Bluetooth, memory cards, IR and direct USB copying.)

Of course, Apple’s device competitors are trying to catch the same wave that Apple have been riding and deploy their own application store equivalents. We’ve seen efforts from Google, Nokia, Palm and Research In Motion (RIM – makers of the BlackBerry), and interestingly, all have been somewhat successful – successful at attracting developers, which is key to then attracting users. Here are their app stores:

Personally, I am not a fan of Apple’s restrictive market practices and much prefer the more open ecosystem that surrounds the Symbian and Windows Mobile platforms. I have in the past written applications for Palm Garnet (née PalmOS), Symbian and Windows Mobile for use within a corporate environment – something that is not possible with Apple’s licensing policies and their forcing developers to upload apps to the App Store so that Apple can approve them and then include them in the App Store catalogue. If I only want to write an application for my customer, I cannot deploy it directly to the customer’s iPhones unless they have been jailbroken – the only alternative is for Apple to look at and approve the application and then sign it. While the others also have the concept of signed and certified applications, you can install unsigned or uncertified applications on the other major platforms if you want (except for Android, which appears to be going down a similar, if less restrictive, path to Apple’s).

Telcos and App Stores

In the past year, Telcos all around the world have watched Apple’s App Store take off and seen their interaction with iPhone subscribers reduced to that of supplier of the pipe to the Internet – way down from the high-value position that most carriers aspire to in order to improve ARPU. I’ve seen requests from many Telcos in that time for Application Store or Widget Store capability. The telcos – understandably – want to raise their profile in the eyes of the subscriber and their worth in the value proposition. I have seen request for proposal documents from telcos in China, Taiwan, Vietnam and the USA, and queries from telcos in Thailand, the Philippines, Singapore, Japan and other countries. App/Widget Stores are certainly one of the topics of the moment for Telcos.

The key differentiators that separate a Telco from Apple’s App Store are:

  • Support for multiple smartphone platforms – Symbian, BlackBerry, Windows Mobile, Garnet (and presumably soon WebOS and Android as well)
  • The ability to sell things other than on-device applications – this might include pre-paid top-ups, ringtones, ringback tones and telco-hosted applications (which could be delivered by the Telco’s Service Delivery Platform (SDP))

In fact, IBM has won and has (partially at this stage) implemented an app store in Vietnam. Because of the telecom environment in Vietnam, this App Store is not actually within a telco but is instead run by an external company*. The app store was implemented with a combination of WebSphere Portal (to provide the user interface), WebSphere Commerce (to provide the catalogue and sales part of the App Store) and WebSphere Message Broker (for the integration requirements). I was involved from the very initial stages of that project.

The company intends to launch a Mobile Commerce and Advertising Platform (MCAP), a multi-channel platform enabling its members to do small-value electronic transactions (m-commerce and e-commerce). Some of their use cases include:

  • Mobile phone content buying and selling (logo, ring tone, ring-back-tone…)
  • Purchasing small value digital products such as software, e-books, etc.
  • Buying and selling of other services and products such as information services, souvenirs, electronic tickets, promotion vouchers, etc.
  • Low value payment services, such as prepaid top-up, game top-up, bill payment, etc.
  • Online marketing, advertisement and promotion services over Internet and Mobile.

I don’t often get involved in WebSphere Commerce projects (it tends to be a very specialised field), but we do have a number of Telcos who are using WebSphere Commerce – not necessarily in App Stores – and based on the experience in Vietnam, it would not be a big leap to add that capability to their existing deployments.

The use of WebSphere Portal provides an easy and extensible user interface primarily targeted at the PC, and with the addition of the Mobile Portal Accelerator (née WebSphere Everyplace Mobile Portal Enable) to the existing Portal, that user interface can be extended to over 10,000 separate devices, providing subscribers with an experience optimised for their device.

Where does this leave those Telcos who haven’t made the leap to their own app store? In my opinion, they still have time to catch the wave, and certainly if they want to avoid the Apple effect and being reduced to a bit-pipe provider, they need to do something to add value in the eyes of the subscriber. Apple’s model doesn’t help them with that, but perhaps the other device-specific app stores won’t be so carrier-unfriendly. I will see what I can find out on this issue and report back in another post.

Buy for now 

;-)

* Once that customer has agreed to be a formal reference, I will share additional details in a future post.

If you want some background reading on App Stores, here are a couple of articles I would suggest:

User Generated content for Telcos

Originally posted on 11Jan10 to IBM Developerworks where it got 7,683 Views

There are a number of opportunities, as I see them, for Telcos over the next few years. There are definite opportunities in social networking, which will enable a carrier to move from a traditional communication model with their subscribers to a more collaborative and open ‘Shared Social Space’. For mobile operators, this market movement presents both opportunities and risks for the telco making that journey.

Opportunities

  • Extension of Social Networking to the mobile where operators continue to enjoy exclusivity
  • Extend enterprise offerings to include Social Networking and collaborative services
  • Offer users of mobile, online, and possibly IPTV, a unified rich Social network-experience across the “3 screens”

Risks

  • Virtual mobile social networking operators reducing network owners to bit-pipe

One of the most obvious moves for a mobile carrier is to simply allow mobile access to social networking tools. While this might satisfy the subscribers who want mobile access to Facebook, LinkedIn, MySpace etc., they are effectively reducing themselves to a bit-pipe (all of those companies already have mobile interfaces for their platforms). If Telcos are going to effectively fight off those Internet-based rivals, the Telco MUST offer some value beyond just the pipe. That’s where the Telcos have to use their closeness to the subscriber and their brand to best advantage, to ensure that the carriers do not get relegated to the ‘plumbing’.

I spoke about the advantages that Telcos have over the Internet-based social networking providers; this is where the Telco must play their hand and use those advantages, because failure to do so will result in them being just providers of bandwidth. One way for Telcos to exert more ownership and maintain their value for subscribers is through User Generated Content (UGC). I’ve spoken to a number of Telcos in the past year about UGC in Australia, Malaysia and Thailand – for some reason, those that I spoke to all had the idea that it is a space they ‘should’ move into, yet not one of them has actually had the guts to step up and do it.

This is contrary to the way I think they should be moving, and it feels to me like they are stuck in an old telco product mindset. The whole idea behind long-tail applications is that you have a very short time to market for your products and try out as many as you can: kill off the ones that fail, and keep and extend/enhance the ones that do well. I speak to Telcos a lot about long-tail applications – usually with respect to technology like IBM Mashup Center, WebSphere sMash and Telecom Web Services Server, but also with respect to traditional SOA. For example, Globe Telecom in the Philippines are executing a long-tail strategy for new products brilliantly – with the help of an SDP and a unified service creation environment based on IBM’s software, they are able to bring new products to market in as little as 15 days. It used to take them at least eight months! They know that some products will work and some will fail, but by taking advantage of the short time to market, they can launch many products very quickly. This strategy is proving very successful for Globe, their resellers and their subscribers.

Sorry – got on a bit of a tangent there… back to UGC.

A Telco that deploys a User Generated Content framework has a number of opportunities for revenue – some better advised than others. For instance, one carrier I spoke with last year wanted to charge artists/publishers to upload content. To me, that would be a good way to prevent their platform from ever taking off. The lists below show the various revenue opportunities from the artists/publishers, the consumers, the advertisers and others.

Artist / Publisher

Uploads:

  • Mobile: Free apart from Data Charges ($)
  • Web: Free

Viewing:

  • Mobile: Free apart from Data Charges ($)
  • Web: Free

Consumer Purchases:

  • Ringtones/RBTs ($)
  • Own Ringtone/RBT ($)
  • Ringtones ($)
  • Artist Wallpaper ($)
  • Video ($)
  • MP3 ($)

Premium Subscriptions ($):

  • Automated alerts e.g. fan club started, x number of downloads made, etc.

Subscriber

Consumer Viewing:

  • Mobile:
    • Free apart from Data Charges ($)
  • Web:
    • Free

Consumer Purchases (any channel):

  • Ringtones/RBT’s ($)
  • Ringtones ($)
  • Artist Wallpaper ($)
  • Video ($)
  • MP3 ($)

Premium Subscriptions ($):

  • Automated alerts e.g. Photo tagged

Advertisers

Broadcast Ad Revenue

  • Ad inserts before video starts ($)
  • Ad inserts could vary by site the video is published to e.g. Youth brand, Facebook, YouTube, etc
  • Ad Funded Content
  • Advertiser decides to do a “Powered by …” sponsorship for a given artist via their Fan Club for a week. ($)

Variability:

  • Different ad pricing by site published to ($)
  • Different pricing by artist popularity ($)

Endorsements:

  • Voted #1 by Telco XYZ Social Community
  • Community Testing ($)
  • Selling the value of the community to relevant companies that want to test new products/ideas within the community.

Supplemental

  • Cross Selling / Up Selling to Artists and Consumers
  • Transactional service revenue from enabling 3rd parties to consume and use the common web services exposed from the UGC platform.
  • Synthetic Currency – encourage community members to reward each other; to do this, they’ll need to purchase synthetic currency from the carrier.
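One way to reason about the lists above is as a simple model keyed by stakeholder, tallying which streams are billable. This is just an illustrative sketch – the stream names are paraphrased from the lists above, and the True/False flags mirror the ($) markers, nothing more:

```python
# Toy model of the UGC revenue streams listed above.
# True means the stream is billable (marked "($)" in the lists).
revenue_streams = {
    "artist/publisher": {
        "mobile upload data charges": True,
        "web upload": False,
        "premium subscriptions": True,
    },
    "subscriber": {
        "ringtones/RBTs": True,
        "artist wallpaper": True,
        "video": True,
        "MP3": True,
        "premium subscriptions": True,
    },
    "advertiser": {
        "pre-roll ad inserts": True,
        "ad-funded sponsorship": True,
        "community testing": True,
    },
}

for party, streams in revenue_streams.items():
    billable = [name for name, charged in streams.items() if charged]
    print(f"{party}: {len(billable)} billable stream(s)")
```

Even a toy model like this makes the balance visible: the upload side is kept mostly free to encourage content creation, while consumption and advertising carry the billable streams.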

As you can see, there are plenty of opportunities to establish new revenue streams from UGC – but any telco looking to move into this area needs to tread a careful line between generating revenue and encouraging usage.