IT outsourcing prediction
#136
Join Date: Mar 2013
Location: US of A
Programs: Delta Diamond, United 1K, BA Blue, Marriott Titanium, Hilton Gold, Amex Platinum
Posts: 1,775
Cruz said that the "backup system had not worked properly". To make sure things work properly and are thoroughly tested, you need people on the ground at those facilities, not Indians thousands of miles away.
#137
Original Poster
Join Date: Aug 2015
Location: USA
Programs: BA Silver
Posts: 812
Synstar used to be known as Granada. They provide hardware break-fix services that were traditionally supplied by hardware vendors. BoHo, as it is known, was not operated by a third party.
#138
Join Date: Apr 2014
Location: London
Programs: Don't even mention it. Grrrrrrr.
Posts: 968
#139
Join Date: May 2017
Posts: 2
I used to work at BA in the IT department and I'm afraid I have to disagree with your assessment of their abilities. It's true that the outsourcing companies do have a few able staff, but the vast majority of the staff on the BA contract seem to have had the absolute minimum of training....
#140
Join Date: Mar 2013
Location: US of A
Programs: Delta Diamond, United 1K, BA Blue, Marriott Titanium, Hilton Gold, Amex Platinum
Posts: 1,775
#141
Original Poster
Join Date: Aug 2015
Location: USA
Programs: BA Silver
Posts: 812
I used to work at BA in the IT department and I'm afraid I have to disagree with your assessment of their abilities. It's true that the outsourcing companies do have a few able staff, but the vast majority of the staff on the BA contract seem to have had the absolute minimum of training....
#142
Join Date: Jul 2009
Location: Basingstoke, UK
Programs: BA, EK, Hilton H, Starwood, A-Club
Posts: 75
I worked for a large Japanese electronics company that outsourced IT to TCS. Within the first weeks of live management from Chennai they'd managed to delete the network login accounts for most of the senior people in Europe, and then, when they'd restored the accounts, failed to tie them back to email accounts. When I highlighted that this would be the first of many issues with the competency of the staff in India, I was told that I wasn't being positive etc. Luckily I got a nice payoff into early retirement. ^
#143
Moderator, Iberia Airlines, Airport Lounges, and Ambassador, British Airways Executive Club
Join Date: Feb 2010
Programs: BA Lifetime Gold; Flying Blue Life Platinum; LH Sen.; Hilton Diamond; Kemal Kebabs Prized Customer
Posts: 63,821
I used to work at BA in the IT department and I'm afraid I have to disagree with your assessment of their abilities. It's true that the outsourcing companies do have a few able staff, but the vast majority of the staff on the BA contract seem to have had the absolute minimum of training....
#144
Join Date: Dec 2015
Location: UK
Programs: BAEC Silver, *A, Marriott
Posts: 181
I worked for a large Japanese electronics company that outsourced IT to TCS. Within the first weeks of live management from Chennai they'd managed to delete the network login accounts for most of the senior people in Europe, and then, when they'd restored the accounts, failed to tie them back to email accounts. When I highlighted that this would be the first of many issues with the competency of the staff in India, I was told that I wasn't being positive etc. Luckily I got a nice payoff into early retirement. ^
This resembles my experiences with them, in a different industry (financial services). However, at the end of the day, as I remind people, you can outsource specific tasks, but you can't outsource responsibility. BA is responsible for the overall integrity of their service. As mentioned above, it seems like a combination of a power outage (not uncommon), followed by human error in bringing the system back up, possibly compounded by further human error.
Most large organisations that run well have a handful of very experienced people who know how to manage difficult problems without panicking, and this small group is responsible for what looks like a well oiled machine. I hope BA hasn't removed these key people.
#145
Join Date: Sep 2007
Location: BOS
Programs: BA - Blue > Bronze > Silver > Bronze > Blue
Posts: 6,812
You certainly can't offshore responsibility. My experience with offshore IT echoes that of many on here. Completely reactive, and simply following instructions that they would follow even if it ended the world.
#146
FlyerTalk Evangelist
Join Date: Dec 2003
Location: MAN and LON
Programs: Mucci, BAEC LT Gold, HH Dia, MR LT Plat, IHG Diamond Amb, Amex Plat
Posts: 13,773
You also can't effectively offshore judgement which may be needed to override the "script" being followed. When judgement is required offshored Ops fall apart.
#147
FlyerTalk Evangelist
Join Date: Jun 2004
Location: LON, ACK, BOS..... (Not necessarily in that order)
Programs: **Mucci Diamond Hairbrush** - compared to that nothing else matters (+BA Bronze)
Posts: 15,132
Does anyone with any knowledge of how the BA set-up works want to comment on this?
From the IT rumour mill
Allegedly, the staff at the Indian data centre were told to apply some security fixes to the computers in the data centre. The BA IT systems have two parallel systems to cope with updates. What was supposed to happen was that they apply the fixes to the computers of the secondary system and, when all is working, apply them to the computers of the primary system. In this way, the programs all keep running without any interruption.
What they actually did was apply the patches to _all_ the computers. Then they shut down and restarted the entire data centre. Unfortunately, computers in these data centres are used to being up and running for lengthy periods of time. That means, when you restart them, components like memory chips and network cards fail. Compounding this, if you start all the systems at once the power drain is immense, and you may end up with not enough power going to the computers - this can also cause components to fail. It takes quite a long time to identify all the hardware that failed and replace it.
So the claim that it was caused by "power supply issues" is not untrue. Bluntly - some idiot shut down the power.
Would this have happened if outsourcing had not been done? Probably not, because prior to outsourcing you had BA employees who were experienced in maintaining BA computer systems and knew without thinking what the proper procedures are. To the offshore staff there is no context; they've no idea what they're dealing with - it's just a bunch of computers that need to be patched. Job done, get bonus for doing it quickly, move on.
https://forums.theregister.co.uk/for...aining/3191302
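For what it's worth, the "patch the secondary first, verify, then touch the primary" procedure the rumour describes can be sketched roughly like this. To be clear, this is a generic illustration, not BA's actual tooling: the host names, the patch command, and the health check are all invented, and a real system would use proper orchestration.

```python
# Sketch of a patch-secondary-first rolling update, with staggered
# restarts so the whole estate never power-cycles at once.
# All host names and commands here are hypothetical.
import subprocess
import time

SECONDARY = ["dc2-app01", "dc2-app02"]  # standby site (invented names)
PRIMARY = ["dc1-app01", "dc1-app02"]    # live site (invented names)

def patch_host(host: str) -> bool:
    """Apply security fixes to one host; stubbed with echo for illustration."""
    result = subprocess.run(["echo", "patching", host], capture_output=True)
    return result.returncode == 0

def healthy(host: str) -> bool:
    """Stand-in for a real health check (service probe, ping, etc.)."""
    return True

def rolling_update(secondary, primary):
    # 1. Patch the standby site first; the live site keeps serving traffic.
    for host in secondary:
        if not (patch_host(host) and healthy(host)):
            raise RuntimeError(f"secondary {host} failed; live site untouched")
    # 2. Only once the whole standby site is verified, patch the live site
    #    one host at a time - never all of them in one go.
    for host in primary:
        if not (patch_host(host) and healthy(host)):
            raise RuntimeError(f"primary {host} failed; fail over to secondary")
        time.sleep(0.1)  # stagger restarts to avoid a power/load spike

rolling_update(SECONDARY, PRIMARY)
```

The point of the ordering is that a bad patch only ever takes out the site that isn't carrying traffic; applying it everywhere at once, as the rumour alleges, removes that safety net entirely.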
#149
FlyerTalk Evangelist
Join Date: Jun 2004
Location: LON, ACK, BOS..... (Not necessarily in that order)
Programs: **Mucci Diamond Hairbrush** - compared to that nothing else matters (+BA Bronze)
Posts: 15,132
#150
Join Date: Aug 2007
Location: Cheshire / Wherever they send me
Programs: BA Blue, Marriott Plat Life, UA Silver (thx Marriott), IHG Gold, Accor Plat, Hilton Diamond
Posts: 943
This resembles my experiences with them, in a different industry (financial services). However, in the end of the day, as I remind people, you can outsource specific tasks, but you can't outsource responsibility. BA is responsible for the overall integrity of their service. As mentioned above, it seems like a combination of a power outage (not uncommon), followed by human error in bringing the system back up, possibly compounded human error.
Most large organisations that run well have a handful of very experienced people who know how to manage difficult problems without panicking, and this small group is responsible for what looks like a well oiled machine. I hope BA hasn't removed these key people.
1 - The design of the failover / backup etc was wrong in the first place. Key to any outsourcing is having the right people to review any designs / changes etc - i.e. the key people above.
2 - More likely, the failover plan had never been tested, and when you test it, you find out the flaws in the plan. Once again this is down to poor management and not the outsourcing, as the management must have made this decision.
Don't get me wrong, the outage would probably have been shorter if there were a bunch of people on site. However, to blame them for something which clearly hasn't worked, and which would have been implemented many years before they came in, is not the correct answer.
I know of another large organisation which globally switches its systems from one data centre to the other, once a year, to make sure that it works. Most of their support is done from India, but they know that if it goes bang in one data centre, then they've a tried and tested method of switching to the other. Key to it working is that all of the key decisions, both management and design, are made by experienced UK people and not the cheapest resource available.
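That yearly switch boils down to a scheduled drill: fail over, verify, and only commit if the standby actually holds up. A minimal sketch of the idea (the site names and service check are invented for illustration, not any real organisation's setup):

```python
# Sketch of an annual data-centre failover drill.
# Site names and the service check are invented for illustration.

def service_ok(site: str) -> bool:
    """Stand-in for real end-to-end checks run against a site."""
    return site in ("DC-A", "DC-B")  # pretend only these two sites pass

def failover_drill(active: str, standby: str) -> str:
    """Switch traffic to the standby site and verify it actually works.
    Returns the new active site, or stays put if the standby fails."""
    if not service_ok(standby):
        # This is the whole point of the drill: discover the broken
        # standby on a quiet planned weekend, not during a real outage.
        return active
    return standby

# DC-B becomes active only if it passes the checks.
new_active = failover_drill("DC-A", "DC-B")
```

The design choice worth noting is that the drill is routine and reversible: because it runs every year, the day a real outage forces the switch, it's a rehearsed procedure rather than a first-ever experiment.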