One of my passions is eSports. As an eSports fanatic, my journey has taken me into the world of commentary. One of the most useful skills to have in the world of eSports commentary is a working knowledge of how to run a live stream. This was even more important in 2010 when I began, because the tools were primitive and the ability to use them was a rare commodity.
Originally, the way to do things was to use a combination of the free-to-use Flash Media Live Encoder (FMLE) to broadcast the stream, the tools inside VH Multi Camera Studio to efficiently and effectively capture what was on your screen, and finally IrfanView to add screen overlays such as sponsor logos. These tools are free to use, and at the time they were good enough. Being able to run a live stream of whatever gaming content you wanted from your own home, without the need of a $30,000+ telecaster setup, was a very new idea; however, it was one that entirely redefined the way the 18-34 male demographic consumes gaming-related media.
This interested several companies. The creators of VH Multi Camera Studio, Split Media Labs, set out to take their product one step further and create an easy-to-use, all-in-one solution for live streaming called XSplit. XSplit was free during its initial beta stages, but has moved to a subscription model now that version 1.0 has been released. XSplit’s main competition is Wirecast by Telestream, which comes with a much heftier price tag.
Each one has some pros and cons outside of the price tag. Wirecast’s interface is absolutely brilliant: built on layers, you can manipulate each scene in maximum detail on the fly as needed. The key point as far as gaming content is concerned is the ability to manipulate a scene off-stream and then apply it once you have prepared everything completely, using either the tools and media that come pre-canned within the software or by adding in your own. Wirecast’s audio controls are also excellent, allowing you to do a vast amount of mixing in software without the need of an expensive (although most certainly useful) hardware mixer, and giving you the ability to change the volume of each individual element within the stream.
XSplit’s interface is much more primitive, to say the least. XSplit is capable of capturing your default audio output device (as defined by the OS) as well as a microphone. XSplit also has a maximum of 12 scenes to transition between, and each one needs to be set up as completely as possible before the production begins, as any changes you make will be displayed to the audience as you make them.
The deciding factor here is the encoder each of these programs is built around. Both programs are capable of H.264 up to 1080p (or even higher) at 60fps in virtually any format you could ever need; however, XSplit’s encoder produces a cleaner-looking stream, and at the end of the day the clarity of the content itself is a higher priority than some bells and whistles. Simply put, XSplit is “good enough” – with proper preparation and knowledge of your event and stream beforehand, you can create a very professional broadcast for anyone to consume easily.
This article was first published by The Register on February 24, 2012. The original can be found here.
Today I wanted to buy a metal business card case I could carry around in my pocket. I asked Google Maps politely if it knew where in Edmonton I could find such a widget, preferably on the way home. Google didn’t have the faintest clue where I could get such a thing, no matter how delicately I phrased the request.
I eventually switched to Google proper, asked Bing and even tried Twitter. I only ever really came up with three viable results, all of which were at least 15 km in the wrong direction, and I had no intention of wasting two hours (and $20 of gas) trekking across the bridge in rush hour to pick up a few $10 cases.
In a city of one million people, less than a handful of companies spoke enough internet to have relevant search results on Google, and none of them had figured out how to integrate with Google Maps. Local businesses desperately need a lesson in Internet Presence Management (IPM).
IPM encompasses the totality of a company’s (or individual’s) online presence. Far more than simple Search Engine Optimization (SEO), it covers the existence (or lack thereof) of an online catalogue of goods and services, social media usage, and various flavours of astroturfing. While it is traditional for IT types to deride and disparage all aspects of IPM, it has become absolutely essential to the survival of the modern business.
Gaming Google isn’t enough; its influence is all too often overstated. IT types like to make jokes about people who believe that the internet is a little blue “e”, but the joke strikes closer to home than we might like.
Social media sites have become the internet for many people. Corporate presence on Facebook is the be-all and end-all of corporate discoverability to a certain segment of the population. Even if you have your own webpage, they’ll only ever find that page if it is linked to in some fashion through Facebook.
Twitter, Reddit and others have become product search and feedback appliances for millions. Unsure which widget or bobbin to buy, they punch the contenders into the social media search boxes and see what the hive mind has to say. App stores too are an important avenue of corporate visibility. Being the first in your market to figure out how to make an app for that can give you a slight edge and a sense of novelty. Being among the last enables only irrelevance and eventual doom.
For marketing to succeed, marketers need access to the raw materials of the craft: accurate and adequate information. What does your company offer? What products does it sell? Which products sell well, and which sell poorly? Which products are overstocked? Which suppliers are experiencing troubles? Metrics and statistics are vital to doing the job right.
Getting the information out of the various databases it lives in and then presenting it in a readable fashion is the tech end of things. There are various enterprise metrics applications to help with this, but someone still has to get the data out of whatever systems you are using and into the ERP, CRM and Analytics suites.
Tagging content is another huge hurdle. How searchable is your website? Punching LOHAN into The Register‘s search bar is roughly as useful to me as using the tags feature, and both get me the info I am seeking. Asking Dell’s search for the replacement fan I need is significantly less helpful.
Here again, IT is needed. The data exists, but presenting it to the internal site search – not to mention Google and other web properties – in a fashion they can understand is a challenge. Using warm bodies to tag things manually may solve the problem in the short term, but long-term solutions must be automated if they are to succeed.
This all must be considered against the backdrop of the rise of a generation of individuals for whom instant gratification is the norm. The average person doesn’t want to shop in your store. They don’t want to wander aimlessly through malls and “make a day of it” shopping for $100 worth of kitsch.
Consumers are starting to wise up to the fact that their time is simply more valuable than that. “Buy local” falls down when local businesses simply haven’t a clue what IPM is.
Especially when “at least one of everything” companies like Amazon most certainly do.
First: let’s admit that there does not exist primary science that conclusively and definitively pegs the exact percentage of our population for whom a social media site has become “the lens through which they view all content on the internet.” I would go so far as to say that this is A) an impossibility and B) functionally irrelevant. The percentage will be in constant flux as the habits of individuals (and groups) change.
But there are a number of studies that have been conducted so far that hint at this, and the reality of it is considered “common knowledge” amongst a certain brand of IPM nerd. The proof will out when the science is done, but studies to really refine the error bars around the exact percent of users for whom this is true are only now getting underway.
One person you could talk to about this is Scott Galloway, a professor at NYU’s Stern School of Business and one of the more notable digital strategy experts. Consider also the numerous studies showing how little email is being used by young people, with Facebook rapidly slotting into the role that email once filled. (Many argue that Twitter is slotting into the role that Google once filled.)
Dr. Michael Fenichel – amongst many, many others – has done a great deal of hard, primary research into Facebook, social media and internet usage. Indeed, his research has convinced him that Facebook/Internet Addiction Disorder is a very real phenomenon, and should be added to the DSM-V.
Beyond that, there are numerous industry studies that have noted – and then explored in depth – the reality of particular social media sites becoming “the entire internet” for some segments of the population. These are studies performed not by organisations who would benefit from Facebook/Twitter/etc. becoming a vehicle for advertising, but rather by organisations with a driving need to know exactly how people shop, how they do product research and what influences their decisions.
Starting in 2007 we have a report from private equity firm Veronis Suhler Stevenson and PQ Media. They note that for the first time in decades, 2007 saw people spend less time on traditional media and more time on the internet. The study also noted a huge uptick in advertiser spending online as well as consumer online purchasing. They predicted that by 2011, the Internet would be the largest advertising medium.
They were right.
In the intervening years, hundreds of studies have been run on the topic. In 2009, we have a study from the Retail Advertising and Marketing Association (via BIGresearch). They concluded – amongst other things – that moms (women with children younger than 18) spend way more time on social media than anyone else. They also use social media for product research, trusting peer opinion above all other review methodologies.
Pew Research in 2010 concluded that 58% of all Americans have done research for products online, numbers that get a lot larger when you narrow the view to the critical 18-32 age bracket. While there was no social media component to this study, the thing that got everyone’s attention was the fact that internet users in higher-income brackets do significantly more online research than those in lower income brackets.
In September 2011, Nielsen released a report saying that social media (in which they include blogs) account for nearly 25% of all time spent online. That’s more than double the amount of time spent in online games. 3/4 of all internet users participate in social media.
Critically, 60% of people with “three or more digital means of research for product purchases” discovered retailers or brands from a social networking site. According to the same study, Americans spend significantly more time on Facebook – 53.5% – than on any other site.
Again, these are merely sample studies I am discussing. There are hundreds of studies – and a lot of primary science – that cover this area of discussion. Suffice it to say that the most critical demographic – 18 to 32 year olds – are strongly influenced by social media. So much so that they skew the statistics for “all internet users” towards the realm of “depressing amounts of time spent on Facebook.”
That “the internet” is for some – indeed for an increasing number – Facebook, Twitter, Reddit or so forth is not merely my opinion. It is the considered opinion of several experts in the area; I have merely taken notice. More importantly, this trend is increasing.
These social media websites are now the lens through which an ever increasing percentage of our population absorb their daily dose of internets.
I have recently been involved in an interesting debate focused on the concept of “bring your own device” computing (BYOD). I argue that no company will go out of business implementing BYOD, while others argue strenuously against the entire concept except under very narrowly limited circumstances.
Previous iterations of the argument focused on the costs of BYOD (is it cheaper?), the security (isn’t BYOD a security threat?), demand from end users, and possible resistance from IT.
I make the argument in the latter case that there are enough unemployed IT guys out there right now that resistance from IT is functionally irrelevant. IT operations staff are functionally disposable; there are so many of us that for every one you fire, a dozen more are willing to step into the position. That varies by region, but I feel that on a global scale this is largely accurate.
IT staffing deficiencies are largely in development, Big Data, niche virtualisation deployments, Metal as a Service (MaaS) or in specialisations such as CCIEs, high-end storage and so forth. Sysadmins are a dime a dozen, and this is a fundamental premise to be borne in mind when reading the below.
BYOD policy MAY be more expensive, but this is not guaranteed. There are many high profile examples of successful deployments. (Intel and Google spring to mind.) Thus when the business side of the company comes to IT and says “make it happen,” they know it’s possible. The question is “do your extant IT staff have the skill to pull it off properly?”
If they don’t, you fire them and you get new IT staff.
Most businesses are small and medium enterprises. They aren’t running 1000 seats and they don’t need their data screwed down tighter than Fort Knox. In fact, on the lower end of the SME side of life, the time has come for them to bid adieu to their IT departments altogether. They can have IT delivered to them as a service cheaper and more securely than they are getting it now.
One argument against BYOD is that “you must open up more information to the internet.” I’m going to call bollocks here. Done even halfway competently, BYOD allows you tighter control of your information than most businesses currently have.
Let’s consider the average SME today: one (maybe two) overworked sysadmins. When they are not trying to prop up the ancient servers, they are rebuilding (again) some desktop, or stuck on a support call with a user who can’t remember that “clicking” and “double-clicking” are different.
These companies exist in an environment where half the company runs as local administrators because – despite IT’s warnings against such behaviour – the alternatives are simply less convenient. SMEs are companies where the IT is in nearly every case not “proper” to begin with. They aren’t set up by whitepaper, and they aren’t managed and locked down like a Fortune 500 company.
There are orders of magnitude more of these companies than there are organisations that are “doing it right” today.
So what does a BYOD approach with Virtual Desktop Infrastructure (VDI) and Software as a Service (SaaS) bring? Well, first off, it allows you to put everything in a single location. No information arrives or departs by USB stick, CD, DVD or any other physical means. The endpoints don’t get to talk to the core network unless they are locked down. Everything else comes through an RDP session.
I’ve been running VDI at dozens of SMEs since 2005, and in all but one case I haven’t had a single person notice that they can’t move files off the network (except through the internet) yet! They just don’t care. Everything they’d want to do with those files, they can do through RDP. (Yes, we block RDP file transfer, USB pass-through, etc.)
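For the curious, that RDP lockdown is usually done through Group Policy; under the hood, these map to the fDisable* policy values for Terminal Services. A sketch of the relevant registry values (a .reg-style fragment – verify against your own GPO setup before relying on it):

```
; HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
"fDisableCdm"=dword:00000001   ; block drive redirection (file transfer)
"fDisableClip"=dword:00000001  ; block clipboard redirection
"fDisableCcm"=dword:00000001   ; block COM port redirection
"fDisableLPT"=dword:00000001   ; block printer (LPT) redirection
```

Pushing these via GPO rather than per-machine registry edits means an unmanaged endpoint never gets the chance to opt out.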
AHA, you say! A weakness in his argument! They can move files around using the internet! We must prevent this at all costs!
Bah. This is what IDSes are for. Check out Palo Alto Networks, for example (http://www.paloaltonetworks.com/index.php). They have IDS/IPS systems that outclass everything anyone else can bring to bear in this space: dirt cheap, application aware, and simple to configure. Even my precious Linux boxes configured as network-sniffing IDS/IPS systems simply can’t compete.
Suddenly, I can manage the band instead of the box. Sure, you can move information off the network using the internet, but I can monitor and restrict it with an appliance: a simple plug-and-play appliance that a twelve-year-old could manage. This is a great example of the commoditisation of IT. What ten years ago was deep voodoo now comes in a nice pre-canned box that simply does the thing for you.
So now we’ve got a great big ball of everything living in the datacenter, maybe with a few select SaaSy apps on the web. It all goes through an awesome IDS/IPS which allows me to filter it, and I even work with my SaaS providers to ensure that our instances of the SaaSy applications have logins restricted to selected IPs.
The only way you are getting information off this network is to take a photograph of someone’s screen while they are RDPed in. If you are honestly concerned about this – if this is a legitimate security threat to you – then you are either dangerously paranoid, or you work in the kind of organisation that has enough qualified and competent IT personnel that you should be talking to them about this topic instead of reading my blog. (Suffice it to say that even this risk can be mitigated using any of a number of different technologies.) This is a realm of infosec paranoia that is simply out of scope for this post.
The inevitable argument is “well, that’s not true BYOD! In a real BYOD environment, people can use files on their computers!”
Quite right.
But that’s where BYOD gives you awesome options. Most people don’t need this, so they can (and will) use RDP. If you want to do things locally on your system, then you have to accept some restrictions. Management software has to be put on your PC, and it will restrict what you are able to do: Mobile Device Management for the cell phones and tablets, Puppet for the Macs and Linux boxen, and an Active Directory join for the Windows boxes.
The choice is up to the end user. BYOD and third-party management software have allowed me to provide greater security than I would otherwise be allowed to provide by the business owners under a more traditional model. Why? Because BYOD gets the convenience part of the security/convenience equation right.
Not the bogeyman
The argument that BYOD is usually/probably “bad” is rooted in several assumptions that just don’t hold true for the vast majority of the world. The first: that BYOD is being implemented in an environment that is properly set up already. This is almost never the case. The second: that IT has the kind of pull within an organisation that it can set things up properly and manage by fiat and edict. Again: what year are you from, 2000?
In these organisations, BYOD is probably not a consideration. IT still has its little empire, and it will viciously and vociferously defend it against all comers. These shops have the talent and knowledge to pull off BYOD properly if they so choose, but they won’t if they can possibly avoid it.
And frankly, who cares? These companies have something that works: proper security. They just don’t get any real benefit from BYOD beyond staff retention and a modification of CAPEX as a line item. BYOD will cost them more than their current setup if for no other reason than that you will have to cram it down the throats of IT.
So we have proven that BYOD is not a magic solution for all companies in all cases. Who has ever claimed that it was? My previous arguments on this topic have argued – quite simply – that no company is going to go out of business for deploying it. SMEs either have the talent to deploy this or they don’t. If they do, then their guys will probably jump all over it as a chance to (finally) do some real security in the enterprise. If they don’t, then they will bring in consultants/contractors – myself, say – who know this stuff cold and deliver the transition as a proper service.
If the company is large enough (and with a well enough set up extant IT apparatus) that the benefits of BYOD are marginal to begin with, then they already have the IT guys who are fully capable of pulling this off properly and securely, should they choose to do so.
BYOD is not a risk. It isn’t a security threat. It isn’t a disaster waiting to happen and it isn’t automatically – or even in most cases – a negative approach to computing. Quite the opposite; for the vast majority of organisations it provides the opportunity to significantly simplify their IT delivery.
I can’t afford a really pricey third-party spam filtering option. GFI, Symantec, even Microsoft offer up some pretty robust solutions. They are pricey though, and I don’t see why I should fight that particular funding war when there are some easy solutions available for free. In my particular environment, I run an Exchange 2010 server front-ended by a CentOS box running Sendmail, SpamAssassin, ClamAV and a few others.
The first and most important thing is of course to go get the latest and greatest CentOS. As of the time of this write-up, that would be CentOS 5.5. Toss it in a virtual machine and install it with nothing but the bare bones. In my case, I gave it two interfaces: one directly externally accessible, and the other on my local LAN. (I trust iptables to keep the baddies out as much as I do any other firewall, so I see little reason to hide the spam server behind a separate firewall and port forward.) Let’s get to the build.
0) Set up your IP addressing according to your own internal schema. Pointing the spam server at your internal DNS (probably your domain controller) saves you having to build extensive hosts files on the spam server. (It will be talking to your Active Directory, so using your AD’s DNS is a good plan.)
1) Enable the RPMforge repo. (https://rpmrepo.org/RPMforge/Using) I use this for the simple reason that they have a tendency to keep ClamAV significantly more up to date than Red Hat (and thus CentOS) does. If you don't use RPMforge, eventually ClamAV will get so out of date it will refuse to download new definitions. Save yourself the aggravation; use RPMforge. (I tend to wget the latest rpm, then "yum install [rpm name] --nogpgcheck". This is because CentOS doesn't natively have RPMforge's key available, and RPMforge keeps changing the location on their site where they store the rpm installer for the key…)
2) Install the necessary software: yum install procmail sendmail sendmail-cf sendmail-milter clam* spamass* pyzor perl-Razor-Agent
3) Download and install Webmin: RPMs are available, and certainly work well enough.
4) Disable SELinux, and allow ports 10000 (Webmin) and 25 (SMTP) through the firewall. You can usually do this from the command line via system-config-securitylevel on a base CentOS install. Don’t forget to restart the system after disabling SELinux! I know that there are ways around disabling SELinux, but frankly I’m too lazy to futz with the thing. (At some point in the future I will figure out how to get SpamAssassin and ClamAV working with SELinux enabled.)
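If you prefer the command line to system-config-securitylevel, the equivalent steps look something like this (a sketch only – paths and syntax are typical of CentOS 5, so double-check on your own install):

```
# Disable SELinux permanently (takes effect after the reboot mentioned above)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Open SMTP (25) and Webmin (10000) in iptables, then persist the rules
iptables -I INPUT -p tcp --dport 25 -j ACCEPT
iptables -I INPUT -p tcp --dport 10000 -j ACCEPT
service iptables save
```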
5) Create a user called “sendmail” in your Active Directory under the OU “Users.”
6) Save the password for this user in a file on the spam server. I used /etc/mail/ldap.secret. (Whatever path you choose here must match the -P path in the LDAP configuration later.)
7) Log into Webmin, and under servers go to “Sendmail Mail Server.”
The following is what we are going to need to modify to get Sendmail to use ClamAV and SpamAssassin. It will also be set up to talk to your domain controller in order to look up users when a server attempts to deliver mail. In this way, the Sendmail server will be able to reject recipients who don’t exist in your organization. (Thus avoiding a truckload of NDRs from your Exchange server.)
1) Under Webmin -> Servers -> Sendmail Mail Server -> Domain Routing (mailertable)
The mailertable tells Sendmail where to send e-mail it receives for a given domain. In the example below, domain1.com and domain2.com are being redirected to internalmailserver.company.local. To achieve this, click on “manually edit /etc/mail/mailertable.” Update it to suit your configuration.
Mailertable example:
domain1.com smtp:internalmailserver.company.local
domain2.com smtp:internalmailserver.company.local
2) Under Webmin -> Servers -> Sendmail Mail Server -> Spam control (access)
This file contains a list of servers allowed to use your spam server as a relay. While open e-mail relays are generally a very bad plan, in this case a restricted relay is an excellent way to scan all your outbound company e-mail. Enter the internal IP addresses of your Exchange server (and any other e-mail-sending systems in your organization) here. You can then configure them to treat your spam server as a “smart host,” thus providing antiviral and antispam scanning for all outbound e-mail traffic. To achieve this, click on “manually edit /etc/mail/access.” Update it to suit your configuration.
Access list example:
172.16.0.30 RELAY
mail.internalmailserver.company.local RELAY
3) Under Webmin -> Servers -> Sendmail Mail Server -> Relay Domains
Enter a list (separated by carriage returns) of all domains that you will be handling internally and which you wish to pass through this spam server.
Relay Domains example:
domain1.com
domain2.com
4) Under Webmin -> Servers -> Sendmail Mail Server -> Sendmail M4 Configuration
This is the heart of configuring Sendmail. Most of the default configuration provided by CentOS 5.5 is good, but we need to add a few goodies to get it working the way we want it.
The first and most important thing is the setting LOCAL_DOMAIN(`'). There is a big push right now by e-mail administrators the world over to require reverse DNS. Long story short: the hostname of your spam server (as your incoming and outgoing mail point) absolutely must match the reverse DNS of the IP address assigned to it. That reverse DNS also needs to contain the word “mail.” So the hostname of your spam server should be something akin to mail.domain.com, and the reverse DNS on the external IP address provided to you by your ISP should also read mail.domain.com.
In this vein, it is a good idea to set LOCAL_DOMAIN(`') to LOCAL_DOMAIN(`mail.domain.com'). This means your spam server will always accept mail for “mail.domain.com” without forwarding it to your Exchange server (an odd requirement that some e-mail administrators have begun to put into place). It still allows you to forward mail bound for domain.com internally.
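You can sanity-check the forward/reverse DNS pairing from any Linux box with dig (mail.domain.com and 203.0.113.25 below are placeholders – substitute your own hostname and external IP):

```
dig +short mail.domain.com      # should print your external IP, e.g. 203.0.113.25
dig +short -x 203.0.113.25      # should print mail.domain.com.
```

If the two answers don’t point at each other, get your ISP to fix the PTR record before going live, or plenty of remote mail servers will refuse your mail.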
Keep an eye out for this command: DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA'). Toss a “dnl #” in front of it if you want your Sendmail to listen on any addresses other than 127.0.0.1!
I also tend to dnl # out EXPOSED_USER(`root') and FEATURE(`accept_unresolvable_domains') for sanity reasons.
The rest of the commands I won’t go into too much detail on; if you are really curious, there is plenty of documentation available online as to their specific functions. If you are reading this page, I trust you are capable of spotting where in the configuration you should be changing “domain.com” and “company.local” style values to suit your configuration.
FEATURE(`greet_pause')dnl
define(`LUSER_RELAY', `error:5.1.1:"550 User unknown"')dnl
INPUT_MAIL_FILTER(`clamav-milter', `S=/var/clamav/clmilter.socket, T=S:4m;R:4m')dnl
INPUT_MAIL_FILTER(`spamassassin', `S=:/var/run/spamass.sock, F=, T=C:15m;S:4m;R:4m;E:10m')dnl
define(`confINPUT_MAIL_FILTERS', `clamav-milter,spamassassin')dnl
define(`confDOUBLE_BOUNCE_ADDRESS', `')dnl
FEATURE(`ldap_routing',, `ldap -1 -T<TMPF> -v mail -k proxyAddresses=SMTP:%0', `bounce')dnl
LDAPROUTE_DOMAIN(`domain1.com')dnl
LDAPROUTE_DOMAIN(`domain2.com')dnl
define(`confLDAP_DEFAULT_SPEC', `-h "domaincontroller.company.local" -d "CN=sendmail,CN=Users,DC=company,DC=local" -M simple -P /etc/mail/ldap.secret -b "DC=company,DC=local"')dnl
Once you have finished this, go save and rebuild the Sendmail configuration. It’s a good plan to restart Sendmail at this point to see if it blows up. Remember that Sendmail is really grouchy if you have an extra carriage return, or forget a ` or a '.
For SpamAssassin configuration, first go to Webmin -> Servers -> SpamAssassin Mail Filter -> Setup Procmail For SpamAssassin and enable SpamAssassin.
Next stop is Webmin -> Servers -> SpamAssassin Mail Filter and modify to your heart’s desire. I generally change the setting “Prepend text to Subject: header” to read [SPAM ASSASSIN DETECTED SPAM]. This then allows me to set either an Outlook rule or an Exchange -> Hub Transport -> Transport rule.
In the case of a local Outlook rule each client must be individually configured to deal with the [SPAM ASSASSIN DETECTED SPAM] in the subject line of “spam” e-mails. (I usually have them directed to the “Junk-Email” folder.)
In the case of an Exchange -> Hub Transport -> Transport rule, I usually set Exchange to assign anything with [SPAM ASSASSIN DETECTED SPAM] in the subject line a Spam Confidence Level (SCL) of 7. If you want to enable SCL junk filtering and set your own SCL levels, you will need some Exchange PowerShell commands. Google can tell you more. http://msexchangeteam.com/archive/2009/11/13/453205.aspx is a good article to read as well.
Set-ContentFilterConfig -SCLDeleteEnabled $true -SCLDeleteThreshold 9
Set-ContentFilterConfig -SCLRejectEnabled $true -SCLRejectThreshold 8
Set-OrganizationConfig -SCLJunkEnabled $true -SCLJunkThreshold 7
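The Hub Transport rule itself can also be created from the Exchange Management Shell rather than clicking through the GUI. Something along these lines should do it (the rule name is arbitrary, and you should verify the parameters against your own Exchange 2010 install):

```
New-TransportRule -Name "Tag SpamAssassin hits" `
  -SubjectContainsWords "[SPAM ASSASSIN DETECTED SPAM]" `
  -SetSCL 7
```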
Go to Exchange -> Hub Transport -> Anti-Spam -> Content Filtering. Enable it, and uncheck any boxes except “Delete Messages that have an SCL greater than or equal to.” The rationale behind this is that the SpamAssassin server is doing all the heavy filtering. If you allow Exchange to reject mails, you are going to end up with a mess of rejection NDRs that will pile up and go nowhere. Similarly, under Exchange -> Hub Transport -> Remote Domains -> Default (*) I really recommend disabling non-delivery reports. There is a growing trend amongst email administrators to not accept mail from domains that send NDRs, as NDRs are being used by spammers as a vector to get spam into people’s e-mail boxes.
Run freshclam and sa-update from the command line to get ClamAV and SpamAssassin updated to the latest definitions.
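To keep definitions current without manual intervention, a pair of /etc/crontab entries does the trick. A sketch (the binary paths are typical for CentOS 5 with RPMforge packages, but verify them with `which` on your own box):

```
# Update ClamAV definitions nightly at 02:00
0 2 * * * root /usr/bin/freshclam --quiet
# Update SpamAssassin rules at 02:30; restart spamd only if new rules arrived
30 2 * * * root /usr/bin/sa-update && /sbin/service spamassassin restart
```

Note that sa-update exits non-zero when there is nothing new, so the && conveniently skips the restart on quiet nights.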
Go into Webmin -> System -> Bootup and Shutdown. Make sure important things like ClamAV-Milter, SpamAssassin and Sendmail are all set to start on boot (and are currently running.)
That’s it! If you’ve done it right, you should now have a CentOS box capable of receiving e-mail from the internet, scanning it for viruses and spam, and forwarding it on to your Exchange server. The Exchange server itself can be configured with junk-filtering properties, adding a second layer of protection. (Though in truth I’ve not needed it: SpamAssassin does the job just fine, and better than Exchange’s native capabilities.)
Microsoft’s licensing is a problem. For a company that makes its bread and butter on the midmarket, Microsoft sure can seem hostile to those of us who live and work in this arena. Indeed, Microsoft’s licensing compares more naturally to that of other enterprise players. Oracle’s licensing is byzantine and overtly a profit-maximization approach, but Oracle doesn’t have anywhere near as many SKUs in play as Microsoft. IBM is a better comparison; it has a similar number of SKUs, and no incentive to make its licensing comprehensible to normal people.
Contrast VMware to Microsoft as a “complete experience.” Microsoft’s offerings are incredibly powerful. As this review clearly shows, the joined-up nature of the System Center suite can enable a “total package” that overwhelms anything VMware can bring to bear. That said, VMware’s licensing is simple. Truly understanding Microsoft’s licensing – enough to make sure you aren’t paying a dollar more than you have to – is a career in itself, requiring the full-time efforts of an intelligent, educated individual.
VMware’s products are, by comparison, child’s play to install and administer. It took me three weeks of concerted effort to build a test lab with enough software to evaluate System Center 2012 against its two immediate predecessors. By contrast, it took less than an hour to do the same with VMware.
Interaction with Microsoft’s licensing department always leaves me with the impression that I’ve been had; there’s a scam afoot and I’m not the one running it.
I can’t speak to how Microsoft treats customers with over 1000 seats; mine are all between 1 and 1000 seats, and most are between 50 and 250. What I can say is that in this space, I dislike dealing with Microsoft intensely. Microsoft doesn’t want to deal with us “irrelevant” SMEs directly; they want us to use VARs. Frankly, I don’t trust VARs at all. Not once have I found one willing and able to optimise my licence usage, and I have saved clients tens, even hundreds of thousands of dollars over VAR quotes by doing the legwork myself.
Instead, Microsoft positions their products to be appealing if you have fewer than 25 seats, or more than 250. If you live in the 50-250 seat range – where most of my customers do – then the licensing is not only hard to optimise, it is outright punitive. The Microsoft ecosystem between 25 and 250 seats constitutes a barrier to entry for any company; something Microsoft has no intention of addressing in its reckless bid to drive the middle of the bell curve into a subscription model with a far higher TCO for midmarket organisations than a perpetually licensed product. Doubly so when you consider that most midmarket companies live on equipment refresh cycles of five or six years, not three.
I like Microsoft’s technology. I think they make some of the best software in the world – inarguably the best in several fields. Even so, I go out of my way to use competing products in many places because of the complexity of Microsoft licensing. Other vendors may (or may not) be more expensive than Microsoft, but when an alternative vendor’s licensing is less opaque – and better tiered! – you don’t walk away from purchases wondering if you could have gotten a better deal if you had only known the ins and outs a little bit better.
]]>
So why is it that anybody who spends even a little bit of time shopping online has run into numerous websites that completely ignore the impression that they are making on customers? I can only guess.
Maybe it has to do with the fact that building a website is often a collaborative project, which creates challenges. There could potentially be a whole team of specialists working on the website: programmer, graphic designer, copywriter and photographer, to name a few.
The question that needs to be asked is: has anyone been assigned to take a more general view and figure out if the finished project is doing its job? Judging by some of the websites I’ve used recently, the answer could be “no.”
Let me share a web design horror story. Names have been withheld, to protect the guilty, but I assure you that this site exists.
You’re doing it wrong
An international pizza joint with an outlet near my house has had a sexy website upgrade. It has mouth-watering photography and cool interactive menus. It is also now nearly impossible to order pizza.
Here is what happens when you try:
1) On arriving at the site’s main page, you are asked for your postal code. This is presumably so the site can customize the menus you see based on what is actually available at the restaurant closest to your location. Not a bad idea.
2) You type in your postal code. The website displays an error message.
3) Repeat.
4) If you manage to get past step three, you get to play with the nifty interactive menus. Unfortunately, they don’t work in some web browsers. This means that you can see a lot of menu options, but only select some of them.
5) You disable all of your browser plugins. You are still unable to order pizza.
6) You switch browsers. Still unable to order pizza.
7) You order one of the menu options that you can actually select, instead of the pizza that you wanted.
8) In order to pay for your order, you have to log in with a username and password. There is a prominent “Register Here” button. You have a bad feeling about this.
9) “The username that you have selected is already in use. Please try another username.”
10) Repeat.
11) “The password that you have selected is already in use…” (Wait… what? Telling users that a password is taken confirms someone else’s password – a genuine security hole.)
12) You decide that phoning the restaurant would probably be faster. You try to look up the phone number in the “Restaurant Locations” area of the website… however, you need to be logged in with a username and password to view this information.
13) You return to trying to create an account and finally come up with a magical combination of username and password that allows you to do so. (Intriguingly, “BrokenWebsite” is unavailable because it is already in use by someone else.)
14) You submit your order, by now a little worried about trusting your credit card information to this company.
15) “We are sorry; we are unable to process your order at this time. We are aware of this issue, and are working to resolve it.” Maybe it’s a temporary glitch?
16) Re-order.
17) “We are sorry; we are unable to process your order at this time….” Not temporary. Great.
18) Ask your spouse for the phone number to order pizza.
19) Call the pizza place’s national call centre and place an order. Note with increasing frustration that the number for the national call centre has not been visible on any part of the website that you have been able to access so far.
20) Receive two confirmation e-mails for the two orders that you (supposedly unsuccessfully?) placed online. Three orders of pizza are now on their way to your house.
21) Phone back the national call centre and ask them to cancel two of the three orders. They have no record of you having submitted any orders at all. Apparently they have been having some trouble with their website.
22) Ask a live human being for the phone number for the actual restaurant that is making your pizza. Phone them and cancel two out of three orders.
23) More than an hour after beginning the entire process, receive your pizza.
Doing it right
Contrast this with a competing pizza restaurant that does not even offer online ordering. Instead, they have an easy-to-read menu on their website and a phone number to call to place your order. It takes less than five minutes, and the pizza is usually at your door in half an hour.
Guess which pizza place gets my business?
This is of course an extreme example of web design that has completely missed the point. There are less dramatic ways in which a website can unintentionally make a bad impression. Poor spelling, bad grammar or inaccurate information can also make a business look unprofessional and put off potential customers.
Before your website launches, get someone to look at it with the needs of your target audience in mind. These needs are probably pretty simple. No matter what business you are in, your prospective customers need to know the same basic things: who you are, what products or services you offer, and how they can get them from you.
Your customers need the information on your website to be accurate and up-to-date. They need to be able to access it no matter what device or browser they are using. (Has anyone checked to see what your website looks like on a smartphone?)
Most importantly, they need your website not to be an enormous pain to use. Your competition is only a few clicks and keystrokes away. Make a mistake and your customers can and will go elsewhere.
]]>
The document types most frequently created are word-processing files, spreadsheets and presentations. Most of the world knows these as Word, Excel and PowerPoint files. There are, however, alternatives to Microsoft’s Office suite of applications.
Apple’s alternative – iWork – is gaining momentum at a surprising rate. Google Apps is also steadily gaining followers, despite being largely an online-only proposition. (Desktop and mobile clients exist, but their current value for content creation is questionable at best.) IBM offers Lotus Symphony while the open source community offers LibreOffice, both descended from the now largely defunct OpenOffice.org.
All of these packages can “get the job done,” but some do so better than others. Microsoft’s suite is certainly the most established productivity suite, and with good reason. Office includes Outlook, arguably the single best email client ever developed, and integrates with a plethora of collaboration and communications software, ranging from SharePoint and Exchange to Lync. These applications in turn integrate with other applications; the “full stack” Microsoft productivity approach is nothing short of amazing.
The fly in the ointment is cost. Microsoft is expensive. The advantage of Microsoft is the integration between all the various components, but actually licensing the totality of those components drives the cost per user north of $1500 per refresh cycle.
iWork offers only a word processor, spreadsheet and presentation application, but on the flipside it starts at $80 per user and goes down from there. (Apple volume licensing starts at 10 users.) This is a critical consideration as Macs are starting to invade the enterprise.
Google – champion of cloud computing – takes a different path. They charge $5 per user per month for Google Apps, banking on flexibility of licence management to bring the customer base in. Integration with communications services is free, but availability is restricted to certain countries only. The slow but steady conversion from other office packages to the browser-based suite has shown that this model is sustainable. Microsoft certainly thinks so, having recently launched their own competitor.
IBM takes a different approach: Lotus Symphony is free. Again, only the most basic productivity apps are included; the business model is based on hooking you into IBM as a supplier so they can sell you Lotus Notes, a top-notch communications suite that runs a little north of $100 per user.
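Pulling the numbers above together: a back-of-the-envelope, per-user comparison over a single five-year refresh cycle (the five-year cycle and the use of the quoted list prices are my assumptions – volume discounts will move these figures):

```shell
years=5

microsoft=1500                # "full stack" Microsoft, per refresh cycle
iwork=80                      # one-off purchase per user
google=$((5 * 12 * years))    # $5/user/month subscription
ibm=100                       # Symphony is free; Lotus Notes ~$100/user
libreoffice=0                 # free as in beer

echo "Microsoft \$$microsoft, iWork \$$iwork, Google Apps \$$google, IBM \$$ibm, LibreOffice \$$libreoffice"
```

Over that cycle the subscription lands at $300 per user – far cheaper than the full Microsoft stack, but notably more than a one-off purchase or a free suite, which is why the length of your refresh cycle matters so much.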
Finally, there is LibreOffice. LibreOffice is free – free as in beer as well as free as in speech. There is a certain hope that you will donate to the project (through money or developer time) if you find the product useful, but the open source philosophy has provided a top-notch productivity suite at no cost to the end user.
The snag, of course, is that there is no communications suite at all. The open source e-mail clients on offer are maintained by separate teams; the integration is utter pants and, quite frankly, they simply aren’t all that good. There is no collaboration software to speak of, and the systems administration overhead of finding relevant open source products to fill the gap and glue them all together is a “hidden cost” worth bearing in mind.
At the end of the day, there is choice. The sheer variety of offerings means that you are not restricted when choosing the right fit for your business. eGeek Consulting uses LibreOffice for content creation and Google Apps for collaboration and communications. Our clients use many and varied mixes of the software suites discussed above, and we are constantly researching alternatives.
It is worth the time to analyse your business productivity software needs. Understanding what is on offer and choosing the right fit can increase productivity and save you a significant amount of money in licensing fees.
]]>