I spoke yesterday on distributed architecture at SMU. The audience was college students who hadn’t encountered distributed architecture before, so I took it from “what is architecture?” to some basic patterns. Slides available here.
So far this year the theme has been NoSQL. This past Thursday I did a variant of a NoSQL talk I gave at SMU. It’s a big-picture talk where we then get into the details of a few different NoSQL engines, including MongoDB, Couch, and Neo4j, showing where to apply these technologies at a fictitious startup, FriendsWhoCook.com.
This month Sam Martindale is speaking on Hadoop with Azure and Elastic MapReduce. And soon Paul Kavanaugh is slated to do a deep dive on RavenDB.
Yesterday I had the pleasure of speaking about the Kinect for the fifth time, this time at the North Dallas .NET User Group. NDDNUG is a great group. Even with it pouring outside, we had a decent turnout.
This was the first time since the 1.0 release that I’ve presented on it. What I didn’t expect was how much shorter my presentation was, thanks to how much work has gone into simplifying the .NET Kinect development experience. Bit twiddling on depth data, MTAs, DMOs, and much more are just gone from the experience. The team deserves some serious kudos for this.
A couple people asked me about slides and code. Code is available on bitbucket.
I’m happy to announce I’ll be speaking at NDDNUG (the North Dallas .NET User Group) on Kinect development on June 6th next month. The timing is auspicious. Microsoft hinted that the Kinect 1.5 SDK would release toward the end of the month, and it’s here, and there’s some great new stuff. Here’s what excites me:
- Face Tracking. This is huge for those trying to do animated films on the cheap. This will map a person’s eyebrows, mouth position and more onto a 3D mesh. You smile and your puppet self will smile.
- Kinect Studio. Record and playback of Kinect data (no more jumping up and down just to write software, though it was great exercise). This is going to make debugging key scenarios much easier.
- Seated skeletal tracking. Seated skeletons will no longer dissolve into painful yoga poses.
- Joint Orientation. The Kinect can tell you about rotated joints such as the rotation of the wrist. Good for playing twister and revving virtual motorcycles!
- Improved green-screen effect (mapping RGB to depth), most noticeable in the speed of tracking (from about 10ms down to 2ms), though it still comes with a halo of noise.
- Additional languages for speech recognition.
For new users and old, the quality of the Kinect developer samples, and especially the Kinect Explorer (the key sample application demonstrating Kinect features), has been greatly improved. Also released is a well-written Human Interface Guidelines document. This 70-page PDF gives insight into good Kinect HCI for both audio and kinetic inputs. It also lends a domain language with which to discuss Kinect. What the guide doesn’t do is give technical details on how to develop gestures.
Developers are still hoping that a gesture recognition SDK, and possibly a facial recognition SDK, are in the works, but there’s no clear indication that either is coming.
I’m a fan of Windows. I want Windows to continue to flourish. I actually hope Metro catches fire, but I have some reservations.
I’m sure to Microsoft the problem seems insurmountable (Apple’s market and mind share, the rising tablet market, the sad rate of Windows Phone sales), and I understand trying to take things in a different direction. I also understand that turning a ship like the Microsoft Windows team must be near impossible, since the design and development cycles started years ago. Yet every day I see more MacBooks than PCs at coffee shops. PCs still rule the enterprise, but iDevices are making inroads. And almost worse, it looks like Apple may be about to take on the Xbox.
But we have to ask something realistic here. If Metro is failing to sell on Windows Phone (which I own and use daily), why does Microsoft believe it will succeed on tablets? And why would it succeed on a laptop or desktop, for which it isn’t the optimized experience?
I’m not an award-winning designer, but I think I have some good insight. Here’s where I think things went awry.
Windows 8 misinterprets telemetry data.
In WW2 the Allies ran a program to analyze aircraft and figure out where armor needed to be added. They looked at all of the planes coming back and did a frequency analysis of where the bullet holes were. Some areas were so riddled that easily 60% of the bullet holes fell in a few key areas.
The first reaction is to armor these heavily hit areas of the plane. This is wrong. These planes survived. The armor should go everywhere else.
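The survivorship-bias argument can be simulated in a few lines. This is a toy sketch; the areas and loss probabilities are invented purely for illustration:

```python
import random

# P(plane is lost | hit in this area) -- invented numbers for the sketch.
# Critical areas (engine, cockpit) tend to down the plane; the others don't.
AREAS = {"engine": 0.8, "cockpit": 0.7, "fuselage": 0.1, "wings": 0.1}

random.seed(1)
surviving_hits = {area: 0 for area in AREAS}

for _ in range(10_000):
    area = random.choice(list(AREAS))      # each sortie takes one hit somewhere
    if random.random() > AREAS[area]:      # the plane makes it home
        surviving_hits[area] += 1          # ...and only then do we see the hole

# Inspecting only the returners, the fuselage and wings look the most "hit" --
# exactly the areas that least need armor. The engine hits went down with
# the planes we never examined.
print(surviving_hits)
```

The counts for fuselage and wings dwarf those for engine and cockpit, even though every area was hit equally often: the sample we inspect is filtered by survival.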
In a similar manner, MS is collecting telemetry data from customers who have "come back" and changing the product based on their usage patterns. This is wrong. MS needs to study the ones who left for other operating systems and understand why they left.
Windows 8 underestimates the value of familiarity.
People tell me OSX is beautiful. I think that’s rubbish. Have you seen the big ugly white menu bar at the top, circa 1990? Microsoft long ago did away with such things in its Office products by replacing them with the ribbon.
And let’s look at the iPad. Just take a phone and "make it big"? Surely this would be a recipe for failure…
Yet OSX market share continues to explode despite the big ugly menu bar. iPad sales couldn’t be hotter. Mac customers are repeat customers.
Apple isn’t very "innovative" – quite the opposite. They make one or two big innovations, then find something that works and keep at it until everyone agrees. Many people still hate the ribbon (of course the telemetry data says otherwise, because many Office users switched to the Mac, where they get to keep their ugly but familiar toolbars and menus). Even if the ribbon surfaces more commands, it’s jarring for users who are attuned to a product. Better would have been a search box.
Don’t underestimate the power of familiarity and muscle memory. I still "save" by hitting Alt-F, then S. It’s inefficient. I know Ctrl-S is better. But I’ve been doing it that way for so many years, it’s a habit.
People hit the Start button and have an expectation based on over 15 years of familiarity. Some will be flexible and accept this significant change. Others will be very confused, so much so that they may think the PC is "broken" and they will want it "the old way". When this starts happening PC makers will start offering Windows 7 instead of 8.
Windows 8 devalues consistency
MS is betting that people want one device, not two. In some ways they are right. Most people I know (yes, this is just my experience) want one device. But they want either a tablet or a laptop. The problem is the use cases are different. Trying to put them in the same operating system gives us an OS with split-brain syndrome. People want consistency.
Consistency is why the Office ribbon was a bad idea. For each program, users must learn something new. Even the sizes of things aren’t consistent, and the layout is haphazard. With menus you can easily read what you are trying to do. With toolbars, icon sizes are consistent and easier to scan. Yes, menus and toolbars and launch bars and start buttons don’t "surface" commands well – but there are other ways to handle this than to throw everything the user could possibly need onto the screen.
In the Metro start menu, the only consistent motif is the rectangle. It’s why tiles are a bad idea. Metro tiles don’t promote mental mapping. From cognitive science we know we can only hold 7 +/- 2 things in mind at a time unless we "chunk". Chunking in Metro amounts to a sea of tiles grouped by spacing. This is why folders are so valuable. The Metro tiles are akin to taking your filing cabinet and dumping its contents on your desk.
Windows 7 was universally praised. Why lose that momentum? It seems more sensible to release Windows 8 as a tablet-only release and "wait and see" before trying to push it into the enterprise.
For decades enterprises have understood the database as the true technical authority. No matter what insanity we application developers create, the relational database – enforcing constraints and referential integrity, disallowing nulls, protecting unique values, wrapping it all in transactions and ACID guarantees – keeps the chaos at bay and gives us Truth.
But nothing is free. Requiring all domain truth to be expressible in a single model, the relational entity store, is like painting the world with only primary colors. The subtlety of the domain is lost. Meaningful questions are obscured by the checklist of entity-relational questions. Should the customer’s name be 50 characters or 100? Should it be non-nullable? In what aggregate does it belong? These questions hide the more important ones – what is the purpose of this data to the enterprise? In what context is it valuable?
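To make that checklist concrete, here is a minimal sketch using SQLite (table and column names invented for the example) of the constraint machinery described above. The engine itself, not application code, rejects each violation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in to FK enforcement
conn.execute("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,          -- disallowing nulls
        email TEXT NOT NULL UNIQUE    -- protecting unique values
    )""")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id)  -- referential integrity
    )""")

conn.execute("INSERT INTO customer (name, email) VALUES ('Ada', 'ada@example.com')")

# Each of these violates a constraint and is rejected by the database engine:
for bad in [
    "INSERT INTO customer (name, email) VALUES (NULL, 'x@example.com')",   # null name
    "INSERT INTO customer (name, email) VALUES ('Eve', 'ada@example.com')",# duplicate email
    "INSERT INTO orders (customer_id) VALUES (999)",                       # no such customer
]:
    try:
        conn.execute(bad)
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```

The point of the post stands either way: the engine answers "is this row legal?" with certainty, but it never asks why the data matters to the enterprise.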
Entities come with baggage. They stop us from thinking YAGNI at an architectural level. If we don’t need the data, we shouldn’t persist it. If we don’t need an event, we shouldn’t publish it. If a tree falls in the forest, unless it serves a purpose in the forest service business capability, it will not publish a sound event.
The hardest part of SOA is getting past our preconceived notions of what business is all about. We have them at many levels. We have baked them into relational databases and we have "anemic domain" models in our heads from years of repetitive system design. To get SOA right, we first have to get over ourselves.
This is my interpretation of what I learned in Udi’s SOA course. I highly recommend everyone take his course or at a minimum rent his videos.
Last time we discussed what service oriented architecture is. We discovered that services should align with and enforce enterprise semantics and rules instead of tangentially relate to them like applications tend to do.
This brings me to Udi’s definition of a service which works well with my preferred definition of SOA:
A service is the technical authority over a specific business capability. Any piece of data or rule must be owned by only one service. – Udi Dahan
"Technical authority" is intentionally strong. To be an authority means to have power of determination.
An authoritative service completely owns its data, its schema, and its concept of consistency. No outside force may tell a service what is true inside its boundary. For example, enforcing referential integrity in a database across services is disallowed; it infringes on the technical authority of the service over its data.
A service also owns its business rules. To own a rule, a service must be able to change its interpretation of a rule without "issuing a memo". Other services shouldn’t be affected by this change. This is stronger than loosely coupled – this is autonomous.
To be autonomous as discussed in this blog post requires services to communicate asynchronously. If services make synchronous calls, we create strong temporal coupling between them.
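As a toy sketch of that asynchronous style (service and event names are invented, and an in-process queue stands in for a real durable message bus):

```python
import queue
import threading

bus = queue.Queue()  # stand-in for a durable message bus

def place_order(order_id):
    # The ordering service publishes an event and returns immediately --
    # it never waits on, or even knows about, its subscribers.
    bus.put({"event": "OrderPlaced", "order_id": order_id})
    return "accepted"

processed = []

def billing_service():
    # The billing service consumes events on its own schedule;
    # the publisher is never blocked by how slow (or down) billing is.
    while True:
        msg = bus.get()
        if msg is None:
            break
        processed.append(msg["order_id"])
        bus.task_done()

worker = threading.Thread(target=billing_service)
worker.start()

print(place_order(42))   # returns "accepted" immediately, no temporal coupling
bus.join()               # demo only: wait so we can observe the result
bus.put(None)
worker.join()
print(processed)         # billing handled the event asynchronously: [42]
```

If `place_order` instead called billing directly, an outage in billing would stall order taking, which is exactly the temporal coupling the post warns about.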
A more surprising takeaway of being a technical authority is that services must own their presentation elements. This means our applications will decouple orthogonally instead of horizontally, as services bundle data, schema, business rules, and presentation elements together.
Providing presentation elements isn’t the same as a full-blown stylized UI; a separate branding service is responsible for that. Instead, services emit a small amount of HTML or JSON, or own an MVC controller. This is controversial, as horizontal decoupling has been beaten into our skulls for so many years. But it makes sense if services have been constructed at the right level of granularity.
With almost territorial delineation between services, how do we build applications? Udi notes in this blog post that applications are simply mashups of autonomous services: loosely communicating, context-aware services. The job of an application is to provide context to its services.
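A hypothetical sketch of the mashup idea, with invented service and product names: each service owns its data and its small presentation fragment, while the application supplies context (which product) and composes the fragments.

```python
def pricing_fragment(product_id):
    # The pricing service owns prices AND how a price is presented.
    prices = {101: "$19.99"}
    return f'<span class="price">{prices[product_id]}</span>'

def inventory_fragment(product_id):
    # The inventory service owns stock levels AND their presentation.
    stock = {101: 3}
    return f'<span class="stock">{stock[product_id]} left</span>'

def product_page(product_id):
    # The application is a mashup: it provides context (the product id)
    # and owns layout, but never reaches inside another service's data.
    return "<div>" + pricing_fragment(product_id) + inventory_fragment(product_id) + "</div>"

html = product_page(101)
print(html)
```

Note the application never queries a shared price or stock table; if pricing changes how a price is computed or displayed, no other service needs a "memo".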
Udi didn’t cover why he chose the specific language of business capability instead of the more familiar business process. Business capabilities describe what a business does – the functions a business performs (use cases at a high level). Processes, on the other hand, define how a business operates and are anchored in today’s thinking about the business. For example, a business capability might be "Generate Bill". The specifics of how a bill is generated – whether we create a PDF or send an email – are the process.
Sounds great, but if we don’t create code that actually prints PDFs or sends emails, nothing gets implemented. So why capabilities? Because they map well to services. A capability is at the right level of granularity to be represented by a service. Capabilities are black boxes that encapsulate the processes they perform. Services are the same: sharing minimally and, if constructed correctly, appearing to observers as autonomous black boxes of functionality. Capabilities are stable over time. For our investment in services to be valuable, they too must be stable over time. The implementation can change as the environment and the organization change, but service boundaries need not be redefined, just as core business capabilities rarely need to be redefined.
Now that we have a good idea of what we want out of our services, we can start talking about the moving parts that make all of this SOA machinery work.
This is my interpretation of what I learned from Udi’s 5 day intensive SOA course.
Learning something complex and paradigm-shifting like service oriented architecture (SOA) can be tough. It’s doubly difficult when there is little agreement on the definition of the thing being learned.
If you look into its earliest history, SOA is usually defined in terms of specific technical stacks such as SOAP. The W3C penned a more general definition, discussing the desirable characteristics of SOA such as exposed metadata, network friendliness, and platform neutrality.
My problem with these (and many) definitions of SOA is they lean too technical. If SOA is just another technical stack, it’s not an architecture, it’s an implementation.
The definition of SOA I like the most comes from this article by Boris Lublinsky. It states:
SOA can be defined as an architectural style promoting the concept of business-aligned enterprise service as the fundamental unit of designing, building, and composing enterprise business solutions. Multiple patterns, defining design, implementations, and deployment of the SOA solutions, complete this style – Boris Lublinsky
What I like best about this definition is the emphasis on services as a unit of business alignment.
According to Boris, because of the way IT needs are addressed, the enterprise becomes a mesh of siloed applications, each created with only partial business alignment. Enterprise business processes become shaded by application-specific purposes. According to the article, applications "manifest themselves as islands of data and islands of automation."
Islands of data develop when enterprise concepts are defined narrowly to meet specific application needs. These islands cause semantic dissonance, as each application models a tiny, discolored slice of the enterprise concept. Such islands result in difficult-to-reconcile data duplication between applications. For example, an insurance claims application may contain demographics in a different format than a CRM application. They may be similar, but not the same.
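To illustrate, here is a contrived example (field names and formats invented) of two applications holding "similar but not the same" demographics, and the brittle normalization needed to reconcile them:

```python
# The claims application and the CRM each model the same person differently.
claims_record = {"insured_name": "SMITH, JOHN Q", "dob": "03/07/1975"}
crm_record    = {"first": "John", "last": "Smith", "birth_date": "1975-03-07"}

def crude_match(claims, crm):
    # Each island encodes names and dates its own way, so matching means
    # normalizing both sides -- and hoping the normalizations agree.
    last, rest = claims["insured_name"].split(", ")
    first = rest.split()[0]                      # drop the middle initial
    month, day, year = claims["dob"].split("/")  # US-style date
    iso_dob = f"{year}-{month}-{day}"            # ISO-style date
    return (first.lower() == crm["first"].lower()
            and last.lower() == crm["last"].lower()
            and iso_dob == crm["birth_date"])

print(crude_match(claims_record, crm_record))  # True, but only via brittle heuristics
```

Every such heuristic is a place where semantic dissonance leaks into code; a middle name stored differently, or a European date format, silently breaks the match.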
Islands of automation are the applications themselves. These force enterprise users to "application hop" in order to complete meaningful work. Business processes become disjointed, often requiring users to copy and paste between applications or invoke multiple executables or web sites to perform work. This context switching costs the enterprise in terms of lost time, concentration, and duplication of effort.
SOA then, as an architectural style, should strive to end semantic dissonance in the enterprise and bring about business alignment. SOA should tear apart application islands in order to reconstruct the enterprise as a series of autonomous services, each with clear ownership and responsibility for its semantic concepts and business rules.
This means if we want to do more than pay lip service to SOA, we’re going to have to cause some enterprise pain. According to Conway’s law, the enterprise will produce designs that are copies of its own communication structures.
This means if we have four teams bent on doing SOA but projects are divided up around existing application silos, we’ll just get four additional silo applications. According to Conway, if we divide IT by technical lines (Development, Management, Support, Help Desk) we will find each group acting autonomously and aligning along technical boundaries rather than business aligned service boundaries.
If we want to do SOA with a capital S and reap all of its rewards, we must accept it will involve changes to IT. How projects are conceived and budgeted, how teams are formed, even the very structure of IT governance will need to be examined.
Next time we will discuss Udi’s definition of a service and how it is critical to a successful SOA.
This is my interpretation of what I learned from Udi’s 5 day intensive SOA course.
Most of the other fallacies are well documented elsewhere and I don’t want to steal Udi’s thunder by covering them in depth here. As I urged, take his course!
The last three fallacies were penned by Ted Neward of both Java and .NET fame. The first two can be found in his magnum opus Effective Enterprise Java, and the last in a blog post lamenting that he wished it had made the book. The very last one is of particular interest because it is the most controversial: Business Logic Can and Should be Centralized.
When Ted discusses the 11th fallacy, he’s talking about the fact that business logic is necessarily going to be distributed in a distributed system – that we will need to enforce the same rules on clients, on servers, and in databases. As Ted states, this is a hard one to swallow, because we believe in DRY (don’t repeat yourself). We want to write per the Once and Only Once rule. We feel like bad programmers when we repeat ourselves.
What do we do when in one ear the Pragmatic Programmers are whispering "don’t repeat yourself!" and in the other, "coupling is bad!"? How do we not repeat ourselves and keep a decoupled architecture? If we write it only once, we will be coupled to it everywhere it is used. But if we decentralize our rules, we have repeated ourselves. How can we resolve such things?
Udi has a quote I feel is brilliant.
When two principles are pushing in opposite directions, some underlying assumption is wrong. Often the word "the" is the culprit – Udi Dahan
What a beautiful insight!
If loose coupling and DRY are pushing in opposite directions what assumption is wrong?
At this point we have to ask – why are we striving for DRY in business rules anyway (aside from the fact that some old guys with beards told us it was the right thing to do)? Udi points out it’s not making changes in repeated code that is hard, it’s finding all the places where changes need to be made!
So what assumption is wrong? Where is the word "the" causing us pain? Udi offers that we have one "the view" of an architecture. Instead of trying to centralize business logic with DRY, why not have multiple views of your architecture? What if we tagged each requirement so that with a simple grep query we could easily find everything related to a business rule and make a code change? Giving credit where due, Udi attributes this idea to Philippe Kruchten’s paper Architectural Blueprints – The "4+1" View Model of Software Architecture.
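The tagging idea can be sketched in a few lines. The REQ-nnn tag format and file contents here are invented purely for illustration; in practice this is just `grep -rn 'REQ-104' src/`:

```python
import re

# Pretend source tree: the same business rule (REQ-104) is deliberately
# repeated in two places, and each repetition carries its requirement tag.
source_files = {
    "billing.py": "def compute_tax(order):  # REQ-104: apply regional tax rules\n    ...\n",
    "ui.py":      "def show_total(order):   # REQ-104: display tax-inclusive total\n    ...\n",
    "auth.py":    "def login(user):         # REQ-201: lock out after 5 attempts\n    ...\n",
}

def find_requirement(tag):
    # A grep-like scan: the "requirement view" of the architecture.
    pattern = re.compile(re.escape(tag))
    return sorted(name for name, text in source_files.items() if pattern.search(text))

print(find_requirement("REQ-104"))  # every place the rule is (re)implemented
```

The repetition is still there, but finding every site a rule touches, which Udi identifies as the actually hard part, becomes a one-line query.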
Does this smack of the extreme programming view that "the code" is the only model? Possibly. I refer the reader to Eric Evans’s work in Domain-Driven Design, where he says "documents should work for a living." I can’t think of a better way for user stories and requirements to stay alive and relevant than to have them cross-reference directly with the actual code that implements them in a "requirement view" of our architecture.