Satellite broadband: the unloved cousin of internet connectivity

Satellite broadband has been around for many years now, and holds out the promise of internet access for users located pretty much anywhere in the UK that happens to be above ground. So why is this never mentioned when the issue of universal broadband access is raised?

Access to broadband internet is still an issue for many people in the UK, especially those living in remote or rural areas. The problem is that ADSL broadband delivered via a telephone line is limited by the distance to the telephone exchange; the further away you are, the more the signal quality becomes degraded.

This can be addressed in several ways, such as by using a fibre optic connection rather than the telephone line, but fibre can be costly to roll out, so providers like BT have focused first on areas where there are large numbers of subscribers, and may be reluctant to expand into areas where there are too few people to help them recoup the investment in infrastructure.

This can be frustrating for anybody in this situation, but especially for business users, for whom the internet is increasingly vital for access to information and reaching customers. A recent episode of the BBC magazine programme Countryfile focused on farmers, who complained that vital information for their industry is increasingly delivered over the web, making internet access a necessity.

A common theme in features like this is simply to berate the government for its failure to stump up the money for broadband roll-out to rural areas. Few mention that such customers could be served by satellite broadband, at least until their area is served by an acceptable terrestrial broadband service.

Satellite broadband does what its name suggests: it delivers internet access via a two-way connection to a satellite in a geostationary orbit above the Earth. This is different from satellite TV, where a satellite simply broadcasts a television signal to everyone with a receiver within the satellite’s ground area footprint. (Satellite TV companies such as Sky that also offer internet access deliver it via a telephone line.)

To receive satellite broadband, a customer will require a satellite dish to be installed somewhere outside their premises. This is connected to a satellite modem, the equivalent of a broadband modem or router that you will see in a home with a typical broadband service.

So, the big advantage of satellite broadband is that you can access it pretty much anywhere. But what about disadvantages? Firstly, customers will have to stump up for an installation fee to have an engineer visit their premises, fit the satellite dish and orient it towards the satellite that delivers the service, as well as install the satellite modem. This typically starts at around £100, but can run to several hundred.

Secondly, potential customers should be aware that satellite internet access is subject to much longer latency than a terrestrial broadband connection. Latency is basically the “round trip” time taken for a signal from your computer to reach its destination and a response to come back.

Satellite broadband has this problem because data has to be transmitted from your computer up to the satellite in geostationary orbit, then back down again to the service provider’s ground station, from where it is routed onto the wider internet. The response has to return using the same route, and all of this adds delay.
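
To put a rough figure on that delay, a back-of-the-envelope calculation illustrates the point; it is only a sketch, assuming a geostationary altitude of roughly 35,786km and signal propagation at the speed of light:

    # Rough estimate of the minimum round-trip delay over a geostationary satellite link
    ALTITUDE_KM = 35786            # approximate altitude of a geostationary orbit
    SPEED_OF_LIGHT_KM_S = 299792   # signal propagation speed, kilometres per second

    # The data crosses the Earth-to-satellite gap four times in total:
    # up and down on the way out, then up and down again for the response.
    one_hop_seconds = ALTITUDE_KM / SPEED_OF_LIGHT_KM_S
    round_trip_ms = 4 * one_hop_seconds * 1000

    print(f"Minimum round trip: about {round_trip_ms:.0f} ms")   # roughly 480 ms
    # Real-world figures are higher still once routing and processing delays are added.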

For many applications, such as browsing the web or downloading files, this latency is not really noticeable. Where it can become an issue is with videoconferencing or internet telephony, where users may notice a lag in the connection, and with online action games, which are particularly sensitive to delay.

Another problem that satellite broadband customers may experience is that the signal quality can be affected by the weather conditions, especially rain or snow.

In terms of speed, most satellite internet providers offer download speeds of up to 20Mbps, and upload speeds of between 1Mbps and about 6Mbps. The packages on offer vary, often based as much on the amount of data you are allowed to download each month as on the actual speed of the service, with some starting at just £10 per month.
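
It is worth noting how quickly a fast connection can eat into a monthly allowance, which is why the data cap matters as much as the headline speed. A quick calculation, assuming a purely hypothetical 20GB monthly allowance rather than any specific provider's package:

    # How quickly could a hypothetical 20GB monthly allowance be used up at full speed?
    cap_gb = 20          # assumed monthly data allowance (illustrative only)
    speed_mbps = 20      # headline download speed, megabits per second

    cap_megabits = cap_gb * 1000 * 8                 # gigabytes -> megabits (decimal units)
    hours_at_full_speed = cap_megabits / speed_mbps / 3600

    print(f"Allowance exhausted after about {hours_at_full_speed:.1f} hours at full speed")
    # Roughly 2.2 hours of sustained downloading would consume the entire month's allowance.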

What this all means is that satellite broadband is not for everyone, but if you do live in a remote or rural location and there are few other options, it is worth evaluating, rather than waiting for BT to get around to upgrading the infrastructure in your area.

Some satellite broadband providers covering the UK:

Tooway

Avonline

Europasat

Internet of Things or the Internet of hyperbole?

The Internet of Things (or IoT, if you prefer) is one of those nebulous concepts that covers a multitude of things, rather like “cloud computing”, and thus gets hyped up and misappropriated, with vendors, marketers and journalists alike attaching the term to almost everything in order to attract attention to an otherwise me-too product or dull article.

Perhaps it is because there are so many wide-ranging use cases for the Internet of Things that it gets everybody confused. However, as I have opined on more than one occasion, the Internet of Things is basically just the internet, but with a whole lot more devices connected to it than before, and new applications built on top of them.

The basic premise behind the Internet of Things is that internet access is now almost ubiquitous (at least for most people in the developed world), and reaches almost anywhere. Instead of just using the internet so that millions of people can check their status updates on social media, why not also use it to connect up things that it would be handy to get data from, like weather stations, or traffic flow sensors, or anything else that would not previously have been connected?

In the past, if you wanted to collect temperature and rainfall data from a bunch of weather stations dotted around the landscape, you might have had to connect them up using some proprietary wireless system, or had a telephone line wired to each one so you could use a dial-up modem.

Nowadays, you can (relatively) easily connect anything to the internet, whether via WiFi or over a cellular network, and take advantage of ready-made protocol stacks for communicating with your central systems, making it cheaper and easier to build such a solution.
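
As an illustration of how little is involved nowadays, here is a minimal sketch of a weather station publishing a reading over MQTT, one such ready-made protocol. The broker address and topic are invented for the example, and it assumes the paho-mqtt Python library (using its classic 1.x-style client API) is installed:

    import json
    import paho.mqtt.client as mqtt   # off-the-shelf MQTT protocol stack

    # Broker address and topic are invented for this example
    BROKER = "broker.example.com"
    TOPIC = "weather/station-42/reading"

    reading = {"temperature_c": 11.3, "rainfall_mm": 2.4}

    client = mqtt.Client()                         # classic (1.x-style) client API
    client.connect(BROKER, 1883)                   # standard unencrypted MQTT port
    client.publish(TOPIC, json.dumps(reading))     # send the reading to the central system
    client.disconnect()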

In indoor environments, it is typically even easier to connect things up, especially in offices where there is often WiFi signal coverage almost everywhere, or an Ethernet port within a few metres.

There are two other key factors that have enabled the recent uptake of projects and solutions that we can label as Internet of Things, and these are the increasing miniaturisation of compute hardware, and the growth of analytics tools that can sift through captured data and glean some useful insight from it.

Most people have likely heard of the Raspberry Pi, the credit-card sized device that was initially designed to help children learn programming skills. This device and others like it now pack a considerable amount of compute power at a low cost.

Meanwhile, collecting telemetry data from internet-enabled devices and hardware allows analytics tools to look for patterns, such as those indicating that a fault is developing in a piece of machinery, for example.
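
As a crude sketch of what that kind of pattern-spotting can look like, the following flags telemetry readings that sit well outside the recent norm; the vibration figures are invented purely for the example:

    from statistics import mean, stdev

    # Invented vibration telemetry from a piece of machinery (arbitrary units)
    readings = [0.41, 0.43, 0.40, 0.44, 0.42, 0.41, 0.45, 0.43, 0.71, 0.78]

    baseline = readings[:8]                            # treat the earlier readings as "normal"
    threshold = mean(baseline) + 3 * stdev(baseline)   # anything well above this looks suspect

    for i, value in enumerate(readings):
        if value > threshold:
            print(f"Reading {i}: {value} exceeds {threshold:.2f} - possible developing fault")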

However, the Internet of Things is also leading to a whole load of ill-considered, mostly consumer-focused products like internet-connected toasters, connected lightbulbs, and that evergreen cliché, the connected fridge.

Many of these products seem to offer little real advantage for the massive inconvenience they bring, by which I mean the need to configure and set up the connected wotsit and keep it up to date with the inevitable stream of patches and bugfixes that any self-respecting “smart” device seems to require these days.

Then there is the never-ending stream of hyperbole about the Internet of Things, such as the recent claim by one technology publication that every single thing you buy in future will be connected, and that buyers will have no choice in the matter. The publication in question even quoted a respected security researcher as suggesting as much.

This is ridiculous for several reasons. Embedded compute hardware is cheap, but not so cheap that a connected device would cost less than the same device without it. And not least, there is the ludicrous notion that users would have no choice about the device connecting itself to the internet and reporting on what kind of toast they eat, and so on.

How this is supposed to happen when even expert users still have problems getting some devices to connect to the internet is glossed over. Is it supposed to read your mind to get the WiFi password to your network?

Sadly, it looks like we can all expect more of this in future. The ‘things’ in the Internet of Things means that almost any device or application can be cast as a magic new gizmo that is going to make all of our lives better.

The reality is likely to be more prosaic. The real Internet of Things use cases are more likely to be applications such as building automation, traffic flow monitoring, and the aforementioned connecting of sensors to industrial equipment to monitor performance and allow for predictive maintenance rather than waiting for faults to cause downtime before addressing them.

Ten years of the iPhone

Apparently, it is now ten years since Steve Jobs stood up on stage and announced the worst kept secret of the decade: that Apple was about to enter the smartphone market, with a device that would become known as the iPhone.

That Apple was working on a mobile phone was widely reported and speculated on, but the precise details were kept under wraps, and thus the firm still managed to surprise everyone with the device that Jobs finally showed off on stage.

It is interesting now to note that at the time there was considerable debate over whether the iPhone would be a success for Apple or not. I can recall being in two minds about this, because the first iPhone was something of a brick compared with contemporary candybar smartphones, and it supported only 2G cellular communications, meaning that data access was painfully slow even by mobile standards.

However, I was also well aware of the “Apple effect”, which meant that almost anything the company produced was eagerly snapped up by adoring Apple fans.

But there were several significant new capabilities that Apple brought to the market with the original iPhone: it was among the first with a capacitive touchscreen, had a user interface designed to make the best use of it, and was the first to offer gesture-based controls such as pinch-to-zoom to make small text readable in the Safari browser.

In my opinion, it was the user interface that made the iPhone a success. Contemporary smartphones required the user to navigate on-screen options using a keypad, or with the crude stylus that Windows Mobile forced on users because its interface was modelled on that of desktop Windows, with on-screen controls that were often tiny and difficult to hit accurately.

This meant that, despite the glaring lack of support for 3G wireless, the relatively high cost of the device, and its chunky size, the iPhone was an attractive option for those who wanted a smartphone but didn’t want to have to do a PhD in Computer Science before they could operate it.

With later models, Apple fixed the lack of 3G support and delivered another crucial innovation in the shape of the App Store, enabling users to purchase and download new applications direct to their device at the touch of a button. In contrast, users of other smartphones typically had to install an app by downloading it to a PC, syncing it to their mobile device and then running an installer.

This masterstroke benefited Apple, its users, and developers. Users got a trusted source for applications, developers got a pre-built store to showcase their wares, and Apple got to make revenue from every app sale. Many users now cite the broad range of apps as their main reason for choosing an iPhone.

To summarise, Apple didn’t invent the smartphone, but it was the first with many of the features that users now consider to be an indispensable part of the smartphone experience. The iPhone has shaped the smartphone market to such an extent that it is fair to say it gave the industry a kick up the backside, and led to the mobile world we see today.

My original review of the first iPhone, from IT Week in December 2007 (content now moved to Computing) when the device finally made it to these shores, can still be read here.

Why all the confusion about hybrid cloud?

 


There has been a rash of stories in the IT press lately regarding how hybrid cloud is going to be the predominant model for adoption of cloud services among organisations, with the implication seeming to be that this is some shock revelation that nobody expected.

Cloud computing is still a relatively young sector of the IT industry, with Amazon’s S3 service launching back in 2006 and a handful of applications such as webmail pre-dating that. For that reason alone, it is unlikely to have yet reached the point where it has taken over the business computing market, especially when you consider that large enterprises tend to be conservative and move somewhat cautiously.

Nevertheless, cloud computing has grown swiftly, and uptake continues to expand. There are good reasons for this; services such as those from Amazon’s AWS stable have demonstrated that some processes can be offloaded to the cloud with a cost saving to the customer. Email is a good example, as is software development and testing, along with backup and disaster recovery.

Cost savings typically come from the customer not having to purchase, deploy and maintain their own on-premise infrastructure in order to operate the service. Another key factor is flexibility, with cloud services holding out the promise that customers can pay for only the resources they use, and even swap service providers or bring service provisioning back in-house if they so desire, without having to pay a penalty charge. However, the reality does not always live up to that promise.

Going hybrid

Hybrid cloud gets its name from the fact that it is a combination of on-premise IT service delivery and some IT services delivered from an external provider, often a public cloud platform such as AWS. In the first case, the organisation may have its own “private cloud” operating on its own infrastructure, but this is not always the case.

Having a “private cloud” implies that the organisation has organised its internal IT infrastructure into a giant pool of resources that can then be carved up as necessary in order to meet the demands of various user groups and applications. Often this is delivered by putting in place some sort of self-service kiosk or portal user interface that allows users to request the compute resources they need. In other words, it needs to act like a public cloud service, such as AWS.
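
Behind such a portal, requests are ultimately translated into API calls against the infrastructure, in much the same way that a public cloud exposes an API to its customers. As a rough sketch of the kind of call involved, here is how a small server might be requested from AWS’s EC2 service using the boto3 Python library; the machine image ID is a placeholder rather than a real AMI, and credentials are assumed to be configured already:

    import boto3   # AWS SDK for Python

    ec2 = boto3.resource("ec2")

    # Request a single small virtual server; the image ID below is a placeholder
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical machine image, not a real AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Requested instance:", instances[0].id)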

However, this requires a high degree of automation at the management layer in order to provision resources on demand and to keep track of used and available resources. It also implies that the infrastructure itself must be capable of being carved up in this manner, which traditional hardware was not really designed for. For this reason, many enterprises still have a lot of legacy IT kit that works perfectly well, but may not easily be co-opted into an on-premise private cloud.

But this does not stop organisations from taking advantage of cloud services. In fact, many will have started out on the road to cloud adoption through their development teams turning to AWS for speedy procurement of servers and other resources for initial development and test work, rather than go through the often interminable process of getting appropriate resources from their own IT department.

Later, those organisations may well have decided that it would be wiser to bring their development hosting back behind the firewall, which would entail building something on-premises that functioned like AWS. Often, this would require new-build infrastructure, and IT vendors quickly sensed an opportunity and started to offer turnkey solutions designed to do just that: converged infrastructure composed of servers, storage and networking kit all pre-configured to work together, or more recently, hyperconverged systems which integrate server and storage into an appliance-like node, ideal for scaling out as demand requires.

Using cloud for greater flexibility

But even with an on-premises private cloud, organisations may find that they still have a requirement for public cloud services. The classic example is to meet seasonal demand, such as an online shopping site needing extra resources in order to cope with a peak in the volume of orders in the run-up to the Christmas period.

This is the source of the term cloud bursting, meaning that a business’s compute capacity can “burst” out beyond the bounds of the organisation’s own infrastructure in order to continue to service an increase in workload, without the business having to own enough infrastructure to meet these peaks in demand, resources that would be underused for the rest of the time.
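
The decision logic behind cloud bursting can be sketched very simply: serve demand from on-premise capacity first, and only provision public cloud resources for the overflow. A toy illustration, with the capacity figure and provisioning function invented for the example:

    ON_PREM_CAPACITY = 100   # requests per second the in-house infrastructure can handle

    def provision_cloud_capacity(overflow):
        # Stand-in for a real call to a public cloud provider's API
        print(f"Bursting: provisioning cloud capacity for {overflow} req/s of overflow")

    def handle_demand(current_load):
        if current_load <= ON_PREM_CAPACITY:
            print(f"{current_load} req/s served entirely on-premise")
        else:
            provision_cloud_capacity(current_load - ON_PREM_CAPACITY)

    handle_demand(80)    # normal trading: stays in-house
    handle_demand(250)   # seasonal peak: the overflow bursts out to the public cloud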

That is just one scenario for how an organisation could arrive at a hybrid cloud deployment, but there are others, of course. For example, an enterprise may prefer to host its public-facing website in a public cloud provider’s datacentre, in order to keep it separate from the on-premise IT infrastructure it uses to actually operate the business.

So far, this discussion has largely focused on infrastructure services delivered from a cloud, otherwise known as infrastructure as a service (IaaS). Other models include platform as a service (PaaS), which typically refers to a hosted environment for developing and running applications, and software as a service (SaaS), which refers to cloud-hosted applications themselves, such as email or CRM.

The services that any given enterprise customer requires are likely to come from more than one provider, so that an organisation may decide that its app development needs are best met by AWS, while email is outsourced to Microsoft’s Office 365 service and off-site disaster recovery is entrusted to a third cloud provider. In fact, some recent surveys suggest that many enterprises are using or piloting services from as many as six different providers.

This arrangement is described as a multi-cloud strategy, yet another industry buzzword, and can also be understood as a way for organisations to hedge their bets against getting locked into one cloud provider.

Cloud computing may still be a relatively young sector of the IT industry, but it has already come a long way, and is now accepted as just another tool in the IT department’s kit for meeting the compute requirements of the organisation. There is now a broad range of cloud services available on the market, offering different capabilities and designed to meet a range of different requirements.

This is just as well, as no single cloud provider is likely to meet all the requirements of an enterprise customer. It is also unlikely that enterprises are going to entrust all of their IT requirements to a cloud service provider now or for the foreseeable future.

You can expect the majority of organisations to keep their most critical IT functions operating internally, and when you realise that, it is pretty obvious that hybrid cloud will be the predominant model of cloud computing for some time to come.