Binary is dead.

Binary remains dead. And we have killed him. Yet his shadow still looms.

New Noise…

So it’s 2013. Firstly, for my own sanity and posterity, I need to start writing things down. Also my brain is constantly trying to derail me with incongruous thoughts.

Onto the article…

We all do it every day, some more than most, and most of the time we are not even aware that we are doing it. Of course I am talking about sending and receiving information: we have all become meccas of information, personal, dedicated and complex data consumers and creators. For most people this is enough and they do not consider the physicality of this virtual existence, but we all get frustrated when we are told we can’t have content when we want it…

The trouble is that the physical construction of the network is diverse, old and inefficient. Our data has to be sliced up, traverse the wire to get to us, then be put back together before it can be displayed. If you think of the wire as a city and imagine having an item that you need to get from one side to the other, then chopping it up, putting the pieces in different cars and sending them on different routes. Some are 30mph roads, some 60mph roads, some one-way streets, and some pieces even get lost. Now you can see why having to wait for your media is a little trivial by comparison.

So why do we do it this way?

Well, it’s all about integrity, which for the most part is reassuring, as so few things are. The original network was the PSTN (Public Switched Telephone Network), and all it had to do was carry a signal down a wire (and yes, it was an analogue AM signal). Now there are a number of problems with sending analogue signals, primarily interference and attenuation; put simply, the distance and quality of the physical medium have a direct effect on the signal. So the answer was to switch to frequency modulation (FM), as this is less prone to interference. These were the foundations of our networks, but with the adoption and diversification of users and products over the years, additions had to be made: FM would no longer cut it. So we went ‘digital’, away from the sine/cosine to the square wave, as it’s easier to distinguish 1 from 0 than 2356 from 2355. The introduction of ‘digital’ networks meant that we could push more information down the wire, but it also meant shattering our information so that it could be squeezed into packets and frames, letting us monitor flow and prioritise data. This all sounds great, but it has a cost, and ultimately that cost is data rates.
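The ‘shattering’ described above can be illustrated with a toy sketch (this is the principle only, not any real framing protocol): number the chunks, let them arrive in any order, and reassemble by sequence number.

```python
import random

def packetize(data: bytes, size: int = 4):
    """Split a payload into (sequence_number, chunk) frames."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(frames):
    """Rebuild the payload regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(frames))

frames = packetize(b"hello, network!")
random.shuffle(frames)   # frames may take different routes and arrive out of order
assert reassemble(frames) == b"hello, network!"
```

The overhead of the sequence numbers and the reordering work at the far end is exactly the cost the paragraph above is pointing at.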

Let’s take a look at data rates. Even if you are local on a dedicated bus you can only really expect 6 MB/s, which is approximately 22 GB an hour; over the PSTN you get nearer 1 GB per hour, dependent on distance and network conditions. People often wax lyrical about fibre optics, but the deficiency is not in the physical medium, rather the transport method. We effectively flash lights on and off down a fibre optic cable that is capable of carrying far more diverse and complex information. We are so fixated on this base 2 system of moving information, reducing it to its simplest form and pushing it over the wire.
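As a quick sanity check on that first figure (taking 1 GB as 1000 MB for round numbers):

```python
# 6 MB/s sustained for a full hour, expressed in GB
mb_per_second = 6
gb_per_hour = mb_per_second * 3600 / 1000
print(gb_per_hour)   # 21.6, i.e. roughly the 22 GB quoted
```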

Let’s look at biology, which is just nature’s way of dealing with physics. DNA is a biological wave function containing the potentia for our life, much as the equations used by Erwin Schrödinger and his successors describe how the quantum state of a physical system changes with time. The equivalent of binary in a biological setting would be, rather than the genesis of life (sperm and eggs, the conjoining of chromosomes and information), to instead try to give birth to the whole person in totality, taking into account every experience in that person’s life. So why do we send data like this? Okay, data does not have the subtle nuances that a life has; it has a totality and a finite form, but it also has an independent signature in ‘time’, if you will.

Let’s look at a file end to end, assuming that the start is time period 0 and we progress to the end. Now let’s look at the data value at period 1, and so forth. So we create a model of time period mapped against value, much like you would use to create a graph. Now if we use these values to modulate a carrier wave we are left with a fixed-wavelength but variable-amplitude signature of the file. We can then further represent this wave signature as a complex wave function.

Then all we need do is send the wave function from A to B and rebuild the data at the receiving end, for want of a better explanation we are talking about a form of ‘quantum compression’.
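As a toy illustration of the idea (not a working modem, and the sampling scheme here is invented purely for the example), each byte of the file becomes the amplitude of one cycle of a fixed-wavelength carrier, and the original data can be recovered from the resulting signature:

```python
import math

def wave_signature(data: bytes, samples_per_symbol: int = 8):
    """Amplitude-modulate a fixed-frequency carrier with byte values."""
    signal = []
    for amplitude in data:                        # time period 0, 1, 2, ...
        for s in range(samples_per_symbol):
            phase = 2 * math.pi * s / samples_per_symbol
            signal.append(amplitude * math.sin(phase))
    return signal

def recover(signal, samples_per_symbol: int = 8):
    """Recover each byte from the peak amplitude of its symbol."""
    out = bytearray()
    for i in range(0, len(signal), samples_per_symbol):
        symbol = signal[i:i + samples_per_symbol]
        out.append(round(max(symbol)))
    return bytes(out)

assert recover(wave_signature(b"data")) == b"data"
```

Real channels would, of course, smear those amplitudes with exactly the interference and attenuation discussed earlier, which is the hard part this sketch ignores.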


What is the future for Google News?

I’ve been following this debate about Google News with some interest over the past few years. Things appear to be hotting up. Brazil is leading the way with its recent announcement by the Associação Nacional de Jornais (ANJ) that its members (154 Brazilian newspapers) are no longer going to allow Google to display content from their websites. The nub of the argument is this – does Google provide value to these news providers by displaying links to their content? And should Google pay for the privilege of displaying this content? After all, Google is monetizing this content & unless a complaint is made the profits from this process are not shared with the originators of the content. France is also making a move – a law that they are debating could result in Google refusing to display news content from French newspapers. So same result, different route. Who is losing out here? Are both parties shooting each other in the foot? Should there be a greater spirit of co-operation & alignment here, or is this the beginning of the end of Google’s dominance over the internet? And will Google & Twitter ever get back into bed together? Here are a few interesting articles I found that you might want to look at before forming your own opinion.

Brazilian newspapers yank content from Google News, blame lack of compensation


Brazil's largest association of newspapers, the Associação Nacional de Jornais (ANJ), has announced that its members are opting out of Google News. In late 2010 the ANJ — which… By Andrew Webster, October 21, 2012.

Is Google a free speech opportunist? — Tech News and Analysis


Google says the First Amendment should apply to its search results — even if this allows the company to favor its own products over those of its competitors. Is this a legitimate argument?

French Law Endangers Google's 'Very Existence', Threatens


What happens if Google boycotts an entire country's news content? We might get to witness such a trade war if Google excludes French news from its search results because of a proposed law that requires search engines to

Without Realtime Search, Google Risks Pushing News Seekers


Now, the best place to search if you want to find up to the second news and commentary about something that is happening “right now,” is undeniably Twitter. That might have still been the case even when Google had the


Remote Server Log Aggregation and Management

The Death of SysAdmin

These days it seems that everybody has a server. And with the advent and acceptance of affordable ‘Cloud’ computing this is not likely to change.

I would like to think that everyone who owns or operates a server has the skills to correctly manage it… on the other hand, I guess it’s cool they generally don’t have these skills, else I’d be out of work! However, in order to combat this lack of knowledge and skill, a lot of ‘white label’ companies and services have sprung up to cover the shortfall.

How the world has changed over the past decade. Now no one wants to pay for a dedicated Sys-Admin, right?

Surely I can get an App for that?

This is yet another unforgivable word that apple (note the small a) has forced into common parlance. I guess the short answer is yes, you can, and there are a number of companies out there now that offer to manage your server. It’s something we do here at SEM Solutions and I can confidently say that we’re very good at it. There are also a growing number of companies that will manage various aspects of your server, take all that scary data and make it all shiny and pretty. Cool right?

So this is where I could fire up the Nerd Rage! But here’s the funny thing: I’m not going to. The basic premise of these services is good; it gives companies better visibility of important IT assets that were previously more of an annoyance than a benefit. More knowledge can never be a bad thing… although I’d like to forget Babylon 5! My reservations come from a security standpoint, but before I start cracking wise let’s just look at what these services do.

I have a server running a number of websites; let’s just say that some of them are e-commerce with a few blogs, and that my server has been configured to a high standard with all the correct logging: access / error / apache / nginx / mail logs etc. are all set up and running. Now let’s assume I’m not a Sys-Admin and my commercial end-user control panel doesn’t display these logs. What do I do? I could log in via shell and manually check them one at a time, but shell scares the hell out of me and I’ve heard that I ‘could’ cause some damage to the server, so that’s out. I don’t want to lose my job! Besides, I don’t know what I’m looking for, and it seems like it should be easier than that. Let’s fire up Google and see what the all-seeing eye of Sauron can pull up. Two minutes later, and without need of the Nazgûl, I’ve found a service that can take my stats and make them look all pretty. It seems cheap and they handle all of the ‘techie’ stuff. Cool! Also they manage it remotely, which can only be a good thing, right?
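As an aside, a quick look at an access log needn’t involve much shell-fu at all. A few lines of Python will summarise status codes from an access log (the path below is hypothetical, and this assumes the status code sits in the ninth whitespace-separated field, as it does in the common Apache/nginx combined log format):

```python
from collections import Counter

def status_summary(log_path):
    """Count HTTP status codes in a combined-format access log."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            # field 8 is the status code in the combined log format
            if len(fields) > 8 and fields[8].isdigit():
                counts[fields[8]] += 1
    return counts

# e.g. status_summary("/var/log/nginx/access.log")
```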

So far it all seems like a good idea, and as I said the idea is sound… but here comes the ‘but.’ You’ve invested heavily in configuring your shiny new secure server; it’s locked down with jailed access and good ACLs in the firewall. So now you’re going to send your data over the public internet to another server, one you have no control over or guarantees from, to potentially share with the world? Call me just a little bit stupid, okay, more like a whole lot stupid, but does this seem like a good idea? Would Batman ride into the streets of Gotham without his cowl on?

So what is the solution? Do I just pay a Sys-Admin? Or do I just forget about checking my logs?

Well no. Perhaps we just need to start asking different things of the companies that sell these services rather than just looking at the end shiny bauble. We need to take ownership of our data.

Here’s the SEM Solution:

As a company we provide Server Management services which range from hands-on daily / weekly server log management to remote statistics. Recently we have been approached to provide live server statistics for several well-known companies so that they can monitor servers to ascertain whether they are under load or even under potential attack. Being a moderately good coder as well as the member of the team most fanatical about system security, I set out to develop a Python server. No, not Django! Frankly, that piece of software worries me. I needed a server that could listen for incoming traffic, then securely process and record the resultant data. ‘Easy’ you say! I could use SSL for the encryption. Umm, no! SSL is easily dealt with by even a novice cracker, and to be fair, if my server has let itself be discovered in the first place I’ve already failed. Additionally, I wanted to create dedicated VPN-ish tunnels for each client/server: separation of the streams allows the data to be processed faster and in context, and I can strip the encapsulation and pass the packet up the stack. It’s kind of a layered ‘Black Box’ approach.
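By way of illustration only (this is a stdlib-only toy, not the actual SEM implementation, and it omits the encryption, tunnelling and sub-socket layers entirely), the basic shape of such a listener is a threaded TCP server that records incoming stat lines per client address:

```python
import socketserver
import threading
from collections import defaultdict

# Recorded stats, keyed by client address (a stand-in for real processing).
records = defaultdict(list)
records_lock = threading.Lock()

class StatsHandler(socketserver.StreamRequestHandler):
    def handle(self):
        client_ip = self.client_address[0]
        for raw in self.rfile:                       # one stat per line
            line = raw.decode("utf-8", "replace").strip()
            if line:
                with records_lock:
                    records[client_ip].append(line)

class StatsServer(socketserver.ThreadingTCPServer):
    allow_reuse_address = True

# Serve on an ephemeral port in a background thread.
server = StatsServer(("127.0.0.1", 0), StatsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A client then just opens a TCP connection and writes newline-delimited stats; everything interesting in the real system happens in the layers this sketch leaves out.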

(Here’s the thing I probably should have mentioned before: I might be a colossal geek who references Star Trek, LOTR and always the Dark Knight, but I have been writing code and fiddling with the fabric of the internet since I was seven. I do have an in-depth knowledge of this stuff!)

With the streams separated I can use the client’s IP to lock and encrypt the data. But I can already hear you going ‘yeah, but that level of insanity would need multiple sockets open on a server!’ Client side, who cares? But I want as few open sockets on my server as possible, which I can’t disagree with, so just extend the socket class, add some new encapsulation wizardry and voilà: sub-sockets! Shimple. Basically, you take that lump of socket you had, smash it on the head with a hammer and split it like the atom. In fact, that’s why they are called ‘Quarksockets.’ (You can Google that, but no one else has coined it, so I guess you’re back here.) Then I added a soupçon of port-knocking (if you’ve read this far I’m guessing you know what that is) to actively police all sockets; besides, you’re not even binding a process to a socket, so good luck scanning it. The long and short of it is we have a server that is invisible to the web but can still handle AES-encrypted TCP connection streams. If it can be done like this then why settle for flashing your logs in public?
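Port-knocking itself is simple enough to sketch. The gatekeeper watches connection attempts and only authorises an IP once it has hit a secret sequence of ports within a time window. The class below is illustrative bookkeeping only (a real implementation would sit on the firewall, and the port numbers and names are made up):

```python
import time

class KnockGuard:
    """Authorise an IP only after it knocks the secret port sequence in order."""

    def __init__(self, sequence, window=10.0):
        self.sequence = list(sequence)   # secret ports, in order
        self.window = window             # seconds allowed for the whole dance
        self.progress = {}               # ip -> (next_index, first_knock_time)
        self.authorised = set()

    def knock(self, ip, port, now=None):
        now = time.monotonic() if now is None else now
        index, started = self.progress.get(ip, (0, now))
        if now - started > self.window or port != self.sequence[index]:
            self.progress[ip] = (0, now)             # wrong or late: start over
            return False
        index += 1
        if index == len(self.sequence):              # full sequence completed
            self.authorised.add(ip)
            del self.progress[ip]
            return True
        self.progress[ip] = (index, started)
        return False

guard = KnockGuard([7000, 8000, 9000])
for port in (7000, 8000, 9000):
    guard.knock("203.0.113.7", port)
assert "203.0.113.7" in guard.authorised
```

Until an IP completes the dance, nothing is bound or listening from its point of view, which is the property the paragraph above is leaning on.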

Of course, whilst I hope you find this little article useful it is also a shameless piece of promotion for the Server Management services that we provide for our wonderful clients. Interested in finding out more? Give me a call on 01797 361 688 and ask for Sam. Or drop me an email:



Planning Your Migration

Why Are You Migrating?

One of the first questions that you need to ask yourself is “Why am I migrating?”.
There are probably several reasons for migrating. These may include some of the following:
• server / software is old
• server / software is failing
• current support level is not what I require
• current level of access is too restrictive
• feature list is too restrictive
• costs are too high
• current host is in the wrong location (US vs UK, for example)

The choices you make for your new server / host / system should be based on the motivation for your migration. You should also allow for your future plans, otherwise you will find that you need to migrate again if you start a new project that requires something you do not have.

Your Migration Budget

It is a fact of life that most businesses do not budget for a migration. It is a “hidden” cost, if you like. Things like development, research, new systems, etc. are easy to budget for because they are obvious costs. A migration is often not budgeted for because until you need to migrate your systems you will not think about it! However, every business that uses servers will, at some stage, need to upgrade to newer systems. If your accounts dept. does not already budget for this then raise the point with them. On average you should expect to perform a migration every two to five years.

What Should I Migrate?

Deciding what you are going to migrate is a fundamental step in planning your migration. If you are moving to a new server with a fresh operating system then take the opportunity to do some spring cleaning on your data! It is always a good idea to keep data backups for historical reasons, but this does not mean that your shiny new server needs to have all that data on it! You may find, for example, that you have thousands of old emails dating back to year 200x which you haven’t looked at in years… Do you really need to transfer these to your new email server? You may have backup files from the last time that you did a migration – do you need these transferred to your new server or would they be better off sitting on an external hard drive in your office drawer? If you are not sure then it is always better to have too much than too little, especially when storage space is so cheap these days. But if you know you don’t really need it, why go to the expense of transferring your dead wood to your new systems?

Planning your time frame

Every now and again we get a phone call that goes along the lines of:
“Hi, my server is dying and I need it migrated this weekend as a matter of urgency”.
Or words to that effect.

If your server is that bad then why have you left it so long before doing something about it? We get people telling us that their server has been playing up for months but recently it has been really bad. The chances are, especially if you have a hard drive problem, that pulling all your data off it will be the final nail in the coffin! As soon as you realise that there is a problem you need to be proactive; at the very least back up your essential data!

Another thing to consider is that if you are going to ask someone to perform a migration at very short notice then you will expect to pay a premium. Not only are you asking them to stop working on other projects they are dealing with, you are also asking them to potentially give up their free time in order to meet your deadline!

Then there’s the testing phase of a migration. You should be prepared to test your new systems thoroughly before you relaunch. Nobody will understand how your site works like you do. Test, test and test again! Take this testing phase into account when you plan your migration time frame, and be prepared to wait for problems to be resolved. Fixing a problem is usually a very fast process; actually tracking down the cause of a problem can take hours of delving into file systems and checking permissions, modules, settings, etc. If you have access to the original developer of the software / website then they will probably be able to tell you what the problem is and how to fix it very quickly!

When Should I Re-launch?

The re-launch is the single most important stage to plan for! Re-launching means swapping from your existing server to your shiny new one. Things to consider are:
• if it all goes wrong how quickly can I switch back to the existing server?
• if it does go wrong who will be available to help put it right?
• have I made sure that I am available to ensure that the process goes smoothly?

When you re-launch, unfortunately things can go wrong no matter how much testing has been undertaken. 90% of problems will become apparent and be resolved during the testing phase. That still leaves 10% potentially unaccounted for. In the rare circumstance that something does go wrong you need to have someone available to help put things right.
The upshot of this is that re-launching at 2am on a Sunday morning is a bad idea. Very bad! The re-launch process requires clear thinking and the availability of people who can help sort things out. It is worth bearing in mind that a re-launch should not, in most cases, require more than one hour of downtime and it is often considerably less. Sometimes it is possible to make the switch over with no downtime at all but this is rare and depends on a number of factors.

Our philosophy is that the best time to perform a re-launch is on a Monday or Tuesday morning between 8am and 10am. This means that there is enough time to do preparation (re-sync files, mail and databases) before major traffic starts hitting the server. Also, the start of the week means that you have four or five days of access to people who can help put things right. There is nothing worse than having your website or server offline and not being able to get in touch with someone to fix it!

The Final Stages

Okay, great – the website or server has been transferred and is now live! Nobody noticed any downtime and everything is running much faster than it was! There will now be a few days where you will probably notice little niggles – for example, your spam filtering isn’t working as well as it was. During this phase you need to decide whether any problems are migration related or not. If you suddenly notice that a script on your website is not working (this should really have been picked up during the testing phase) then investigation will be required to put it right. Extra time will be spent on the migration to resolve such problems and hence there will be overages incurred (did I mention how important the pre re-launch testing phase was?).

Once everything is resolved and your server is running well, the migration is over. Go and visit your accounting dept. and make sure that they are putting the money aside for your next migration in a few years’ time!

Author: Michael Moore, SEM Solutions


Significant Increases in Website Performance with Varnish

The team at SEM Solutions have been investigating enterprise-level tools for improving the speed and performance of large high-traffic websites and have recently discovered a powerful open source solution called Varnish. The team behind Varnish make their money providing enterprise-level support, integration & configuration. Their client list is small but incredibly impressive. Top of the list is Facebook, with a testimonial from David Recordon, Head of open source initiatives at Facebook. If this isn’t enough then there are 5 more clients listed, including MercadoLibre, the largest online trading platform and the market leader in e-commerce in Latin America! Read the MercadoLibre testimonial & case study here.

After a number of successful installs of the Varnish software we are looking to make this available as an option to all our migration customers. It is refreshing to find success and innovation still alive and kicking in the open source software market, particularly in the area of website and server performance which is only going to become a bigger issue over the next few years.

The Varnish website confidently states that ‘Varnish Cache is most often the single most critical piece of software in a web based business.’ With clients like Facebook standing alongside them & confirming this when stating ‘Varnish is our favored HTTP cache and we use it heavily; whenever you load photos and profile pictures of your friends on Facebook, there’s a very good chance that Varnish is involved’ then this is clearly a tool that must be taken seriously.

For more information on our work with Varnish, and how we can use it to significantly reduce the operational costs of running a large website, call us on +44 (0)1797 361 688.


New ASA regulations to ensure stricter controls on website content

From the 1st March 2011, the Advertising Standards Authority (ASA) will be able to regulate marketing communications on company websites for the first time. The following statement is from the ASA – “From 1 March 2011 the ASA’s online remit now extends to cover companies’ own marketing claims on their own websites and in other non-paid for space they control. This landmark development brings enhanced consumer protection, particularly for children.”

It’s impossible to know the impact of this change without some real-life cases to go on so I guess we’ll just have to watch this space for the moment.

Read the ASA’s own post on this development.


What is ‘Cloud’ Technology?

Dedicated managed hosting

As one of the few specialist migration providers in the UK we have a wealth of experience of working with cloud hosting. But what exactly is the ‘cloud’? Our view is that this is a much-misused term, and this misuse is creating confusion in the business community. To compound the issue, the term ‘cloud’ means different things depending on who’s using it.

Cloud Computing Examples

Applied to computing, the word ‘cloud’ is highly ambiguous. Type ‘The Cloud’ into Google and you will get a mix of results that include: providers of public access Wi-Fi hotspots, hosting companies offering a form of virtualized hosting, and providers of Cloud Computing, which generally means location-independent computing and is sometimes referred to as ‘software as a service’ (e.g. Microsoft Azure or Google Docs). What this boils down to is that the term ‘Cloud’ is used as a variety of metaphors for ‘the internet’ and location-independent IT services.

For the purposes of this article I will be discussing Cloud hosting & Cloud computing and not public access Wi-Fi hotspots.

Cloud Hosting

Shared hosting has been around for a long time. More recently we’ve seen the coming of age of ‘virtualization’ technology. This freed up computers, which had traditionally only been able to run a single instance of an operating system or computing environment, to be able to run virtual machines instead. Suddenly this created an opportunity for the hosting companies to charge clients on a per-usage basis and virtualization has since become part of an overall trend in hosting and corporate IT.

An older concept of ‘grid computing’ was brought to life through the use of virtualization. An operating system could exist across many servers instead of one. The client only pays for the resources they use, while also getting the benefit of huge scalability that was never really available to the masses before virtualization came along. Applications could now tap into the resources of a whole network of hardware instead of a single machine.

For hosting companies there was another benefit. At enterprise level there has been a tradition of over-selling hosting services. Companies were being sold powerful dedicated servers yet more often than not they were only using a fraction of the resources available to them. I suspect that this was, for a long time, a source of irritation to the enterprise level hosting companies. They would have looked at all these unused resources and seen a missed opportunity. Cloud computing was the solution. Clients pay for exactly what they use. No more wasted resources. Everyone wins.

Cloud Computing

Essentially, Cloud computing is any form of location-independent computing. Since the mid-eighties we’ve become used to working with ever more powerful personal computers on which we install software such as word processors or email clients. This represented a significant change from the previous decade, when mainframe computing dominated the corporate environment. Since the mid-nineties we’ve enjoyed the benefits of the Internet and we’ve been able to stay connected while moving around more freely. When broadband was introduced it created the possibility of moving significant chunks of IT online. Online trading became a reality, e-commerce was born and email took over as the dominant form of corporate communication. Companies empowered their staff by moving data centres (e.g. customer databases) online and allowing them access to this information from anywhere, provided they had an Internet connection. Everything from stock control to customer service was streamlined and web-enabled. But certain computing services were more stubborn, such as word processing. These stayed on the local machine until cloud computing came along. Now even word processors can be accessed online. Google offers its own web-based equivalent of Microsoft Office free of charge – it is called Google Docs. Microsoft is pursuing its paid-for model, offering its Office suite and more as a web-based Cloud-computing service.

So, what could you migrate to the Cloud? Almost all of your IT functions / infrastructure can be externally hosted these days, and therefore all of your IT can potentially be moved to the Cloud. This includes almost all of the software your business relies on, although I must point out that one company – Sage – and its popular accounting software is probably proving the bottleneck for getting all IT onto a hosted environment. Almost every other IT function can be moved out of the office and onto an external hosted environment. With the Cloud you get pricing that makes this an investment worth considering, especially in today’s difficult economic environment. So, here’s a (by no means complete) list of the commonly used IT services that can move to either Cloud hosting or Cloud computing…

What Benefits Does the Cloud Offer?

To me, the cloud (small ‘c’) is a metaphor for remote computing, and it defines the way I like to do business now. I no longer need to worry about my location because I can access all of my business data over any web-enabled device, whether it be my Powerbook, my HTC Android phone or a friend’s computer. It also means I worry less about backing up data from local machines. As I’ve moved more of my computing to the cloud I’ve needed to keep less data on my personal devices. Hardware is therefore (and perhaps ironically) becoming less important to me. This has enabled me to create a robust, ever-evolving & infinitely scalable IT infrastructure with virtually no capital expenditure and very low running costs. I need very little support and I have a high sense of satisfaction as a customer using these services.

Many of the services we use, such as Google Docs, are even free, and as a company we now use Google Docs more than we use Microsoft Office. If you’ve ever wanted to collaborate on a spreadsheet it is indeed a revolution! We’ve got websites hosted on ‘The Cloud’ and we’re huge fans of companies like 37-Signals who provide brilliant software as a service (SaaS) such as Basecamp for project management or Highrise for our CRM. Services in the ‘Cloud’ seem to share another benefit: they can be set up very quickly. No more waiting for servers to be commissioned or software to be installed and configured. And the configuration can also be changed quickly and with no downtime. For example, when we’re working on cloud servers we often ramp up the RAM, dropping it back to the original setting when the work is finished. This is a big win in today’s fast-moving business environment.

What about the downsides?

I suppose the most important thing to consider when weighing up the cloud in its many forms is connectivity. Accessing the cloud relies on having a fast internet connection, especially with today’s rich user interfaces and dynamic content.

We’ve experienced problems with database connectivity on a cloud web-hosting environment in the past, but only on one occasion.

What else can we add to this list? I invite you to share any negative experiences of cloud computing here…

Migrating to the Cloud

We’ve carried out many migrations to the various types of cloud for our clients, giving us an unrivalled wealth of expertise in this area. As with the many uses of the term ‘cloud’, there are also many ways of migrating to the cloud, and it would be misleading to give the impression that all your IT functions can be migrated as part of a single process.

As a business using the cloud, our process of migrating to the cloud was actually a variety of processes, carried out independently of each other over the course of 2 to 3 years. Different aspects of the business required different solutions and the migration process was unique for each. And we use a variety of providers.

Careful consideration needs to be given before making any IT decision. But unlike traditional and costly IT solutions, the risks, in my view, are often much lower with the cloud. For example, cloud hosting companies & software providers do not normally require lengthy contracts to be signed, and services can be turned on and off almost instantly. Trials can be run with little risk before long-term decisions are taken. This to me is a significant USP for the cloud in general. Speed is everything today. And today it is the 26th of January 2011. I look forward to reading your comments.

> Contact us if you’re thinking of migrating to the Cloud.


Migrating to the Cloud

We’ve carried out many migrations to the various types of ‘Cloud’ for our clients, giving us an unrivalled wealth of expertise when it comes to Cloud migrations.

Migrating to these solutions takes careful planning and expert execution in order to minimise the disruption to your business. This is where our expertise comes in. With first-hand experience of most forms of the Cloud we can give sound advice, helping you to make the best decisions for your business. We can also plan and manage any Cloud migration, ensuring a smooth and painless transition. Our migration service has won awards, and some of the biggest and best hosting companies in the world regularly recommend us to their customers – that’s how good we are!


© Copyright 2012 SEM Solutions Ltd. Thanks for visiting!