Øredev 2011 in the rear-view mirror – Part 4
This is the fourth part of my sum-up of Øredev 2011. It has taken quite a long time to get this summary finished, so I will write a bit less about each session in this final post and refer to external resources instead of spending a lot of lines describing products and concepts.
For more Øredev, check out the previous parts in this sum up as well:
Okay, here we go. First, though: I thought I would be able to cover all remaining sessions in this post, but I will not. There will be a fifth part…and possibly a sixth.
2:4 – Udi Dahan – Who needs a Service Bus anyway
Udi Dahan, founder of NServiceBus, gave a nice session about why we should consider using a service bus. He began with the history of the service bus, speaking about CORBA, the rise and fall of the broker architecture, and how a service bus differs from a broker (a broker is in the middle of everything; a service bus is everywhere).
A bus is distributed everywhere, plugged into each part of the system. There is no remoting, since none is needed. While a broker sits in the middle, tying everything together, a bus communicates with messages and makes sure that all subscribers receive the messages they should receive.
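The publish/subscribe idea behind a bus can be sketched in a few lines. This is a toy in-process Python analogue, not NServiceBus itself (which is a full .NET framework with durable queues, retries and transports); `MessageBus`, `OrderPlaced` and the handlers are made up for illustration:

```python
from collections import defaultdict

class MessageBus:
    """Toy in-process publish/subscribe bus (illustration only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, message_type, handler):
        self._subscribers[message_type].append(handler)

    def publish(self, message):
        # Every subscriber of this message type gets its own copy --
        # the publisher does not know or care who is listening.
        for handler in self._subscribers[type(message)]:
            handler(message)

class OrderPlaced:
    def __init__(self, order_id):
        self.order_id = order_id

bus = MessageBus()
received = []
bus.subscribe(OrderPlaced, lambda m: received.append(("billing", m.order_id)))
bus.subscribe(OrderPlaced, lambda m: received.append(("shipping", m.order_id)))
bus.publish(OrderPlaced(42))
```

The point of the pattern is in the last line: the publisher fires one event, and both subscribers react without the publisher being coupled to either of them.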
Udi finally demonstrated NServiceBus and how to set it up in various ways. The demo was pretty cool, but hard to describe, so if you have not checked out NServiceBus, or any other buses for that matter, make sure to do so. They are great for certain tasks.
2:5 – Jeff Atwood – Creating a Top 500 Internet Website in C# for Dummies
When you publish your kick-ass web site for the world to see and use, how do you make it stand up to the traffic? Jeff knows, and shared his four greatest means of optimization:
- Static content
- Reverse proxy
- Multitenancy
- Caching
A CDN (Content Delivery Network) is a must-have. If you do not want to use a cloud-based service like Amazon S3, at least put your static content on a simply configured server of your own, separate from your application logic. That way your content can be distributed across the world and served from the location closest to each user.
A reverse proxy distributes incoming requests over a number of internal servers. With load balancing capabilities, it can drastically improve the amount of traffic your site can handle. Just make sure to make it sticky if a client has to be routed to the same server for each request.
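Sticky routing is usually a one-line setting in the proxy itself (e.g. `ip_hash` in nginx), but the idea behind it is simple enough to sketch: hash something stable about the client so the same client always lands on the same backend. The server names here are hypothetical:

```python
import hashlib

SERVERS = ["app-1", "app-2", "app-3"]  # hypothetical internal servers

def pick_server(client_ip):
    """Sticky load balancing: hash the client address so the same
    client is always routed to the same backend server."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

# The same client always gets the same server; different clients
# are spread across the pool.
assert pick_server("203.0.113.7") == pick_server("203.0.113.7")
```

Real proxies also handle backends dropping out of the pool, which a plain modulo hash does not; consistent hashing is the usual refinement.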
Multitenancy means that one application serves many purposes. Running several applications on one server makes each perform worse than one application configured to do several things. So, have one application handle several sites and services and you'll be off to performance heaven.
Caching means…well, yeah, we all know. The issue is how to cache. Having one cache per server may cause inconsistency, while one that is shared by all may perform poorly. Jeff uses MySQL for cache storage: one cache per app, plus one that is shared by all and syncs with the individual cache instances.
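The per-app-plus-shared setup Jeff described can be sketched like this: read from the fast local cache first, fall back to the shared store, and write through to both so other apps see new values. All names here are made up for the sketch; the shared store stands in for the MySQL-backed cache:

```python
class TwoTierCache:
    """Local cache backed by a shared store (illustration only)."""

    def __init__(self, shared):
        self.local = {}      # fast, per-app cache
        self.shared = shared # slower store shared by all apps

    def get(self, key):
        if key in self.local:
            return self.local[key]
        value = self.shared.get(key)
        if value is not None:
            self.local[key] = value  # sync shared -> local on a miss
        return value

    def set(self, key, value):
        self.local[key] = value
        self.shared[key] = value  # write through so other apps see it

shared_store = {}  # stand-in for the shared MySQL cache
app_a = TwoTierCache(shared_store)
app_b = TwoTierCache(shared_store)
app_a.set("user:1", "Jeff")
# app_b misses locally but finds the value in the shared cache
assert app_b.get("user:1") == "Jeff"
```

The inconsistency trade-off from the text is visible here too: if `app_a` later overwrites the value, `app_b`'s local copy is stale until it expires or resyncs.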
Jeff also spoke about serialization and how you must consider your serialization options – binary serialization may crash if the assembly changes, and XML may be CPU intensive. A final piece of advice was to design your systems as if you have a server farm, caching etc., even if you do not have one at the moment.
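Jeff's point was about the .NET serializers, but the trade-off is the same in any stack. A Python analogue, with `json` standing in for text/XML serialization and `pickle` for binary:

```python
import json
import pickle

record = {"id": 1, "name": "Jeff"}

# Text serialization: human-readable and tolerant of code changes,
# but parsing costs more CPU than a binary format.
as_json = json.dumps(record)

# Binary serialization: compact and fast, but coupled to the code
# that produced it -- a class/assembly change can make old payloads
# unreadable, just like the .NET case described above.
as_pickle = pickle.dumps(record)

assert json.loads(as_json) == record
assert pickle.loads(as_pickle) == record
```

Either way the advice stands: pick the format deliberately, and test that old payloads still deserialize after you change your types.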
A great, but intense session. I hope my description did it justice.
2:6 – Marc Mercuri – Cloud First Services
Marc covered a lot of ground in his session. He started off by stating that you must adopt a completely different mindset when you develop for the cloud (how come these Americans make such obvious things sound smart when they say them?) and design all new applications as if they are to run in the cloud.
Marc went through various hosting alternatives (on premise, cloud-based and partner hosted) and some of the popular service models:
- Infrastructure as a service (Amazon EC2 etc.) – you get a server somewhere and do the rest yourself
- Platform as a service (Azure, AppEngine etc.) – a configured environment to which you add your applications
- Software as a service (nuff commercials 🙂) – free or commercial software, ready to be used by you and others
If we break down our services into well-defined capabilities, workloads, solutions, roles and services, we will be able to:
- scale them independently of each other
- replace one service with another one with the same capabilities
- move, exchange or delete one service, without making the rest fail
With cloud-based services, we must think async for all tasks, designing them to be stateless and always assuming that the services we depend on may be unavailable at any moment. Designing your services this way will prepare them for what will inevitably come – services that are not available at the moment 🙂
And now for some great advice in condensed form. Use distributed caches, queues, external data storage etc., and you will be able to easily scale out when you create that killer app that the whole world wants to use. Consider your storage alternatives: some data is perfect for a relational database, while other data may fit better in NoSQL or BLOB storage. Boost availability with redundancy (multiple instances) and resiliency (the ability to recover).
And finally, some final words of wisdom:
- Moving to the cloud is NOT equivalent to designing for the cloud
- Believing that moving to the cloud means moving all or nothing, is plain wrong
- Platform SLAs are not application SLAs. Assuring platform uptime does not mean covering your application logic.
- Bad applications will not behave better in the cloud.
- Support and operations are not automatically automated
Phew, not bad for a one hour session, huh? And then, I have even excluded the Azure-specific parts.