Gic Chic
Monday, October 18, 2004
 
The Explosion of Separated Concerns
As discussed in prior posts, I believe that MDSOC is the system development perspective of choice for moving the software industry toward our next capability plateau. However, the road is barely planned, and certainly not paved. There are a number of pitfalls that must be addressed if we are to avoid the sinkholes that lie along our path. Here, I will discuss one of the larger problems, and hint at a means to resolve it.

In a system project of any worthy level of complexity, the number of stakeholders will be high. If we divide them into two collections, functional and ancillary stakeholders, we can represent the concerns of our functional stakeholders through a number of well-recognized mechanisms in the realm of requirements gathering and specification. However, the needs of ancillary stakeholders are not so easily expressed. In this group, I would include such entities as project managers, financiers, regulators, and quality reviewers, just to name a few. In isolation, each of these stakeholders presents a collection of concerns that must be addressed in order to consider the project successful. Collectively, they represent a combinatorial explosion of considerations as their needs are taken into account throughout the system's development lifecycle.

Traditionally, this type of problem (that is, the collection of stakeholder concerns throughout the SDLC) would be resolved through some form of multivariate analysis. While this approach is certainly valid, there are issues that are not easily addressed. First, there is no clear definition of exactly what stakeholders exist, how they are grouped, and what their concerns actually are. Second, it is certain that many of these concerns are not numerical, and that their relationships are not so easily defined. In fact, it is likely that a significant amount of judgment would be required to resolve conflicts among these concerns and maximize the overall benefit.

There is a technique designed for just such problems. According to a paper on the subject by Tom Ritchey, this approach is:

"A generalised method for structuring and analysing complex problem fields."

It certainly sounds like we have found our solution: Morphological Analysis. I would urge the interested reader to peruse the linked paper, the site in which it resides, and the world in general for more information on this marginalized, but phenomenally useful, technique.

Our approach, then, is to first identify the stakeholders and their concerns. Utilizing Morphological Analysis, we establish (demonstrably) the most effective way to resolve all stakeholder concerns. Finally, we put the determined action plan into place, and voilà! Problem solved!
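
To make the mechanics a little more concrete, here is a minimal sketch, in Python, of the kind of morphological field and cross-consistency assessment Ritchey describes. The parameters, values, and incompatibility judgments are entirely my own invented placeholders, not a worked analysis:

    # Illustrative morphological field for a system-development effort.
    # Parameters, values, and incompatibilities are hypothetical; a real study
    # would derive them from the identified stakeholders and their concerns.
    from itertools import product

    field = {
        "funding_model":    ["fixed_bid", "time_and_materials", "internal"],
        "regulatory_rigor": ["none", "audited", "certified"],
        "review_cadence":   ["per_milestone", "continuous"],
        "team_training":    ["existing_practices", "new_method"],
    }

    # Cross-consistency assessment: pairs of values judged incompatible.
    # In practice these judgments come from the stakeholders themselves.
    inconsistent = {
        ("fixed_bid", "continuous"),          # continuous review is hard to price up front
        ("certified", "existing_practices"),  # certification assumed to require the new method
    }

    def consistent(config):
        values = set(config)
        return not any(a in values and b in values for a, b in inconsistent)

    names = list(field)
    solutions = [dict(zip(names, combo))
                 for combo in product(*field.values())
                 if consistent(combo)]

    print(f"{len(solutions)} of {3 * 3 * 2 * 2} configurations survive the cross-consistency check")

The point is not the toy numbers but the shape of the activity: enumerate the whole space of stakeholder positions, then prune it with explicit, recorded judgments rather than ad hoc intuition.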

Of course, an approach is a long way from a methodology. There are still many issues to deal with. One of the largest concerns we face is the need to establish the identity and concerns of stakeholders on a per-implementation basis. Is this really necessary, or can we anticipate them, reducing the per-implementation overhead? Another concern involves the question of how consistent the resulting approach would be with industry practice. It is one thing to find the ideal solution to a particular problem; it is another thing entirely to have to train the whole team on a new approach for each project.

If you haven't already anticipated my response to these issues, then you didn't pay particular attention to my word selection in the previous paragraph. The answer, quite simply, is to define a "Process Management" stakeholder, and express the issues above (along with others not defined here) as their concerns.

All the pieces are in place. Now, if I or others can only find the time to begin the Morphological Analysis activities applied to System Development, I am confident the result will be a significant advance in industry practices.

Monday, October 11, 2004
 
Superhuman Superorganisms
In the course of information absorption, I come across "amazing discoveries" that leave me rather unimpressed. This in no way diminishes the accomplishment of the researchers involved, as my reaction is often nothing more than the result of my relative distance from the subject under consideration. Having long ago reached a similar (though unstudied) conclusion, my thoughts have moved on to areas of more immediate interest.

One such discovery is covered somewhat loosely in the Wired magazine site's story titled People Are Human-Bacteria Hybrid. An intriguing title, to be sure-- more eye-catching than the original article's title, The Challenges of Modeling Mammalian Biocomplexity. The discovery outlined in the article is that the human cells in our body are dramatically outnumbered by microbial cells, and the impact that fact carries. Consideration of this commensal (wired word) or symbiotic (tired word?) relationship has significant consequences in a number of fields. The example most telling to me is the field of pharmacology.

Armed with this knowledge, researchers have an additional criterion by which to measure pharmacological properties such as metabolism, efficacy, and toxicity. This promises significantly improved understanding, and therefore modeling, of these interactions. In turn, this should lead to the ability to create more effective medications, and to predict potential side effects more effectively, based on (yet-to-be-developed?) tests of a patient's "superorganic composition".

Intriguing, to be sure. Good, for a fact. Surprising, not at all. But that's the advantage of being a sideliner in any game. Being free to observe the obvious nature of a discovery, making sweeping judgments rather than informed decisions, is amazingly cathartic.

As is often the case in these blogs, that last statement is charged with innuendo. Indeed, one might even say that it is central to a major area of investigation being undertaken right now by various Aesthesis team members. Unfortunately, aside from an intriguing but uninformative reference, there is little more to say at this time. It is my hope that at some point I will be able to post more information on this exciting area of research. Until that time, at least I have planted the seed.

Tuesday, September 28, 2004
 
qDoS
Quantum Cryptography (QC) is all the rage in technical journals. It seems mankind has finally arrived at a point in our understanding of the fabric of the universe at which we can take advantage of those immutable laws, and bend that fabric to our will. Glorious stuff indeed.

Among the claims that QC promotes is "perfect encryption": the ability to share information in a way that guarantees its security against an attacker of unbounded capacity. This claim, in and of itself, is valid. Following the protocols established for the execution of QC, such a channel can be established. The problem arises when we attempt to use the data in a real-world situation.

Current bandwidths of the quantum channel are very low-- somewhere on the order of kilobits per second. Further, the nature of the protocols requires that a significant number of these bits be discarded-- on the order of 60% in a highly optimized environment. Therefore, from a practical perspective, the use of the quantum channel as a means of securing the entire communication is unlikely for the foreseeable future. Instead, it is often used for Quantum Key Distribution (QKD).
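
To put rough numbers on that claim (the figures below are assumptions chosen for illustration, not measurements of any real system), consider how far even an optimistic quantum channel falls short of one-time-padding a modest public link:

    # Back-of-the-envelope key-rate check. Every figure is an illustrative assumption.
    raw_rate_bps    = 5_000        # assumed raw quantum-channel rate, bits per second
    discard_ratio   = 0.60         # fraction lost to sifting, error correction, etc.
    usable_bps      = raw_rate_bps * (1 - discard_ratio)

    public_link_bps = 1_000_000    # a modest 1 Mbit/s public channel, for comparison

    print(f"usable key material : {usable_bps:,.0f} bits/sec")
    print(f"one-time-pad demand : {public_link_bps:,} bits/sec")
    print(f"shortfall           : {public_link_bps / usable_bps:,.0f}x")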

The security concern at this point should be obvious: per the principle of the weakest link, our total communication channel is now only as strong as the algorithm used to encrypt the public channel. Therefore, our evaluation of the value of QKD as a distribution mechanism must be based solely on the value of a "perfectly secure" key exchange-- and not on the security of the entire communication. That, unfortunately, is a topic of discussion for a future article.

Before we even get to that point, we must consider other inherent vulnerabilities of the protocols in question. Specifically, I want to consider Denial of Service (DoS) attacks. In extant networking environments, DoS is achieved by the process of saturating various aspects of the network in such a way that normal traffic cannot be processed effectively; the network becomes unusable.

In the world of QC, significant emphasis is put on the fact that any attempt to eavesdrop on the conversation is detectable-- which is indeed the case. Unfortunately, it is the nature of that detection that represents a significant vulnerability that I have labeled qDoS. I will now examine the protocols and vulnerabilities presented, using the common cryptographic protocol descriptive notation in which Alice and Bob are communicants, and Mallet is the malicious party (having superseded Eve's traditional role as eavesdropper, as we shall see).

The first step in a QC/QKD protocol is sifting: the selection of usable bits from the quantum stream-- those for which sender and receiver happen to have made compatible random measurement choices. We will leave aside, for now, how the random determination of bits on both ends is achieved. Though it is another critical aspect of our weakest-link analysis that should be considered, there are ways of dealing with this issue that in effect provide the sought-after level of confidence. The bottom line is that in this process, the volume of the stream is significantly reduced, on average. However, we are left, in the end, with a collection of candidate key bits that move forward to the next stage.
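
For the curious, here is a toy sketch of sifting in Python. I am assuming a BB84-style prepare-and-measure protocol, an ideal noise-free channel, and no eavesdropper; real implementations differ in many details:

    # Toy BB84-style sifting: ideal, noise-free channel, no eavesdropper.
    import random

    N = 10_000
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]   # '+' rectilinear, 'x' diagonal
    bob_bases   = [random.choice("+x") for _ in range(N)]

    # With matching bases Bob recovers Alice's bit exactly; otherwise his result is random.
    bob_bits = [a if ab == bb else random.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: keep only the positions where the bases happened to agree.
    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
              if ab == bb]

    print(f"raw bits: {N}, sifted bits: {len(sifted)} (roughly half survive)")
    print(f"errors in sifted key: {sum(a != b for a, b in sifted)}")  # zero without noise or Eve

This is the baseline against which eavesdropping is measured in the next step.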

Next, error correction is initiated. In this process, parity bits are used to verify the validity of the candidate key stream. These parity bits, considered to be "revealed information", further reduce the quantum channel bandwidth. However, an even worse problem lingers: the source of error. Error can occur for any of a number of reasons, most of which are related to various forms of noise. One, however, is the heart of this blog's topic: eavesdropping.

You see, the guarantee that eavesdropping will be detected is based on the fact that any attempt to do so will disrupt the measurements that occur in sifting, thereby revealing the attempt in the form of error bits. However, indicating that such an intrusion is underway does nothing to alleviate the fact that by simply having attempted to eavesdrop, our attacker has now become a disruptive force on the quantum channel: error bits, by definition, cannot be used by QKD. In effect, the principle at play here is if you seek, you shall not find-- you shall only destroy. The rather high bar previously established by DoS attacks (network saturation) has been replaced by the act of merely "observing".
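
The same toy model makes the qDoS point visible. Suppose Mallet mounts a simple intercept-resend attack, measuring each photon in a randomly chosen basis and sending his result along; this is one illustrative attack, not a claim about any particular product. Roughly a quarter of the sifted bits now disagree, the intrusion is duly detected, and the whole batch of key material must be thrown away-- detection and denial of service in a single stroke:

    # Extending the sifting sketch: Mallet measures each photon in a random basis
    # and resends it, disturbing the states Bob later measures.
    import random

    N = 10_000
    alice_bits   = [random.randint(0, 1) for _ in range(N)]
    alice_bases  = [random.choice("+x") for _ in range(N)]
    mallet_bases = [random.choice("+x") for _ in range(N)]
    bob_bases    = [random.choice("+x") for _ in range(N)]

    def measure(bit, prep_basis, meas_basis):
        """Measuring in the preparation basis returns the bit; otherwise a random result."""
        return bit if prep_basis == meas_basis else random.randint(0, 1)

    mallet_bits = [measure(a, ab, mb) for a, ab, mb in zip(alice_bits, alice_bases, mallet_bases)]
    # Mallet resends in his own basis, so Bob now measures Mallet's state, not Alice's.
    bob_bits    = [measure(m, mb, bb) for m, mb, bb in zip(mallet_bits, mallet_bases, bob_bases)]

    sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
              if ab == bb]
    errors = sum(a != b for a, b in sifted)

    # Expect an error rate near 25%: Alice and Bob discard the batch, which is
    # exactly the denial of service-- Mallet never needed to read a single bit.
    print(f"sifted bits: {len(sifted)}, error rate: {errors / len(sifted):.1%}")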

This problem is exacerbated by potential solutions to traffic analysis. The concern, simply stated, is that the presence of communication on the quantum channel is sufficient evidence of the presence of encrypted communication on the public channel. One resolution is the establishment of a quantum channel backbone across which multiple QKD requests can be executed. While resolving the traffic analysis problem, this expands the scope of a qDoS attack to include all participants on the affected backbone.

In closing, naysaying is the easiest of tasks. I prefer to offer not just problems, but solutions. In this case, however, the solution is not very clear. When manipulating the principles of physics to achieve such a lofty goal, you inherit the risk that such manipulation brings with it an unacceptable consequence. If the issues outlined in this post are indeed real, alternatives are hard to come by: selecting a more amenable set of "laws of physics" is not an option.

This is an interesting field with a lot of positive potential. It is my hope that the negative potential I have outlined can somehow be rectified in a way that makes it a practical solution to a significant problem. In the meantime, there are hints, scattered throughout this post, that other issues are afoot. Interesting fodder, to be sure. Stay tuned!

Monday, August 09, 2004
 
Concerns Over MDA
The Software Engineering industry is in a state of turmoil, as a number of practitioners attempt to insert (or, in some cases, exert) their perspectives on proper practice into the industry. The individual development processes are not that big an issue, in my book-- each and every one supports the needs of a wide variety of projects, and should be considered in that light. For now, I choose to focus on the Unified Process, because it seems to solve the problems I encounter most efficiently.

What does interest me is the variety of perspectives on the specification of the problems faced in our industry. Specifically, this post will discuss the seemingly challenging relationship between two of these perspectives: MDSOC and MDA.

I have written before about some of the issues I have with MDA, and my preferred perspective of ADM. I have also lightly touched on MDSOC, and the value it brings to certain viewpoints. Now I will make the kind of bold and brazen statements that I so enjoy: MDSOC and MDA are incompatible in their current incarnations, and MDSOC is right.

MDA, as implemented today, most often involves the transformation of UML models into an implementation presumed to represent both the functional and architectural concerns of the system. While on the surface this seems logical, there is a fallacy lying just under the surface: UML is not capable of representing these concerns. Therefore, MDA tools regularly use the extension mechanisms of UML, such as stereotypes and profiles, to provide the additional information necessary for the generation mechanism.

The problem with this approach is that there is no standardization around the nature of these extensions. So, while achieving "standards compliance", these tools also require significant knowledge of non-standard extensions to take advantage of their power.
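
As a caricature of the point (the model representation and stereotype names below are invented for illustration, deliberately not drawn from any particular vendor's profile), a generator only has something to work with when it recognizes its own tool-specific extensions:

    # A toy model-to-code generator. The elements and stereotype names are
    # hypothetical; real MDA tools each define their own, mutually incompatible sets.
    model = [
        {"name": "Customer",     "stereotype": "entity"},    # vendor-specific convention
        {"name": "OrderService", "stereotype": "session"},   # vendor-specific convention
        {"name": "AuditLog",     "stereotype": None},        # plain, unextended UML
    ]

    def generate(element):
        if element["stereotype"] == "entity":
            return f"class {element['name']}: pass  # persistence scaffolding would go here"
        if element["stereotype"] == "session":
            return f"class {element['name']}: pass  # remote-facade scaffolding would go here"
        # Without the non-standard extension, the tool has nothing to go on.
        return f"# {element['name']}: no stereotype, nothing generated"

    for e in model:
        print(generate(e))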

If we are required to gain an understanding of non-standard extensions in order to realize the advantages of MDA, then why not expand our capabilities to additional disciplines, improving their productivity and quality as well? This is the promise of MDSOC. Of course, the nature of these extensions is a different problem altogether, and one I am most interested in exploring further.

Friday, July 09, 2004
 
What Do You Want From Me!?
In my recent digressions on architecture, I have pointed out that the increasing effectiveness of COTS products in the resolution of architectural concerns should lead to a different approach to all specification and implementation disciplines. The question remains: "how do we go about this?" I have learned during my many lives that arguing with kings is not a wise strategy. Therefore, I will "Start at the beginning and when [I] come to the end, stop."

The beginning, it seems to me, is requirements. If we are to implement ADM, we must understand the requirements of our system as it relates to architecture. We must also understand the relationship of those requirements to others, in order to prioritize and establish assessment and mitigation strategies.

Architects have attempted to specify these requirements (often termed supplemental requirements) around reliability, security, performance, etc. In other words, we have attempted to specify the system requirements, as opposed to the business functional requirements, that must be fulfilled in order to achieve all business goals. Collectively, these requirements have driven most (but not all) of the architectural elements of the system.

All of this is well and good in isolation, yet questions arise concerning the integration of these requirements into the more useful business functional requirements of the system. Those business functional requirements are themselves divided into areas, which often correspond with business organizations such as accounting, sales, management, etc. These might be further decomposed as necessary in order to determine the best resource for elaboration.

So many ways of looking at the system, and so many experts required to elaborate them. How can we ever hope to bring them together (in any complex system) in a way that fulfills all those stakeholder needs? Remember, we have only covered requirements-- there are still a huge number of processes that must occur, each with their own stakeholder needs, before a final system is delivered.

The answer, in a nutshell, is MDSOC. By allowing the specification of each of these areas (and their constituent decompositions) as concerns, MDSOC provides a mechanism whereby the unification of the concerns can be achieved across all specification and implementation disciplines.
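
A toy composition, loosely in the spirit of MDSOC's hyperslices, may help. The concern modules, the names, and the merge-by-name rule below are my own illustration, not the semantics of any existing tool:

    # Two independently specified concerns, unified by name correspondence.
    accounting_concern = {
        "Invoice.total": "sum the line items",
        "Invoice.post":  "write to the general ledger",
    }
    audit_concern = {
        "Invoice.post":  "record who posted, and when",
    }

    def compose(*concerns):
        """Unify concern specifications; units addressed by several concerns are merged."""
        unified = {}
        for concern in concerns:
            for unit, behaviour in concern.items():
                unified.setdefault(unit, []).append(behaviour)
        return unified

    for unit, behaviours in compose(accounting_concern, audit_concern).items():
        print(f"{unit}: " + "; then ".join(behaviours))

Each stakeholder area specifies only its own concern, and the composition step-- however it is ultimately realized-- is responsible for producing the single, unified artifact.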

At this time, MDSOC is fairly immature. It is no panacea awaiting application to resolve all our woes. Rather, it is a framework that I believe can be expanded upon to discover the proper processes necessary to evolve software engineering practice to the point necessary to meet the next generation's expectations.

The journey is begun. Proceeding to the end seems a deliciously enticing process, indeed.

Thursday, July 08, 2004
 
Trust No one.
Aside from being the mantra of one of my favorite television serials of all time, the statement "trust no one" also serves as an interesting mathematical limit to the concept of authentication. Specifically, I am willing to state with some confidence that I trust, absolutely, all claims made by no one. You see, "no one" doesn't tend to make many claims, and therefore does nothing to stretch their credibility. While seemingly nonsensical, or at best very "Zen", I urge the reader to think through this statement a little further.

As previously mentioned, the various perspectives of authentication form a multidimensional space in which the "trust" one places in an identity claim can be evaluated. That, however, is only the beginning of the problem. The next problem is to determine just how "evaluable" such a space might be. It does no good to provide a theoretical model which has no practical way of being resolved.

By demonstrating (anecdotally) that there is a definition of the upper bound (I trust, in the absolute, "no one") we have established one of the criteria of evaluability. Next, it is easy to postulate that there is a minimum as well: "even if I truly believed you are who you claim to be, I would place no trust in you." This, however, creates an interesting consequence: if we have a guaranteed minimum value (absolutely no trust) which we can place on an identity claim and a guaranteed maximum value (perfect trust) on another claim, then we have both upper and lower bounds on our results. This means we can normalize any evaluation result to any range of values we want: like, say, 0…1. This is an important conclusion that supports the tractability of any claim that meets these criteria. Further, if our formula is not merely continuous but smooth, it would be differentiable, which would facilitate the evaluation of trust over time.
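
Concretely, and purely as a sketch of the normalization argument (the raw scale below is invented), once both bounds exist any raw evaluation can be mapped onto 0…1:

    # Normalizing a bounded trust evaluation onto [0, 1]. The raw scale is invented.
    T_MIN, T_MAX = -10.0, 10.0   # assumed bounds: absolute distrust .. perfect trust

    def normalize(raw_trust: float) -> float:
        """Map a raw, bounded trust score onto 0..1."""
        clamped = max(T_MIN, min(T_MAX, raw_trust))
        return (clamped - T_MIN) / (T_MAX - T_MIN)

    print(normalize(-10.0))  # 0.0 -- "no trust at all"
    print(normalize(0.0))    # 0.5
    print(normalize(10.0))   # 1.0 -- the "no one" limit: every claim fully trusted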

How, then, do we begin to produce a formula that takes these various aspects into account and provides a useful evaluation of the identity claim? And how does this tie in with the concept of trust? Ah, fodder for another post.

Wednesday, July 07, 2004
 
FeliCa BohiCa?
When I spoke of NTT DoCoMo's new cell phone with embedded FeliCa technology, I posed a few questions, and mentioned that I might some day get around to answering them. While some are far too technical for such a "brief" environment as this blog, one, at least, deserves attention in the context of other posts here.

I have written before about the establishment of identity and its relationship to trust. What I have not written about (this post constitutes a preview) is the relationship of trust to value. To summarize: the amount of trust required for a set of transactions is directly proportional to the value of the transactions.

So, what is the problem with the "DoCoMo" solution? If it provides no more FeliCa based value than a single transaction, it is no better (possibly worse) than a standalone card. If it provides access to valueless additional transactions, the same logic applies. Only if the FeliCa implementation increases the value of the transactions under consideration can it truly be considered a value add.

Ah, there's the rub.

If the value increases beyond a certain threshold, then an authentication method based on nothing more than "what you have" becomes unable to support the trust required for the transaction set. Another preview: this is the exact point at which fraud becomes profitable.
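
One rough way to picture that threshold-- every number below is a placeholder, not an estimate of FeliCa economics-- is as a break-even test between the attacker's cost to defeat the "what you have" factor and the value extractable before detection:

    # Illustrative break-even check for a possession-only ("what you have") factor.
    cost_to_defeat_factor = 500.0   # attacker's cost to obtain or clone the device
    value_per_transaction = 30.0    # value extractable per fraudulent transaction
    transactions_before_detection = 25

    expected_gain = value_per_transaction * transactions_before_detection

    if expected_gain > cost_to_defeat_factor:
        print("fraud is profitable: the factor no longer supports the required trust")
    else:
        print("fraud is unprofitable at this transaction value")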

I don't pretend to know what that "threshold" value is. I have read a bit about the FeliCa security model, and wouldn't personally rate it too high. It is very good for what it claims to do, but like so many other things before it, I believe this might just be stretching it beyond the point for which it was intended. Here's hoping that the bright folks at NTT are able to take advantage of the technology and cultural synergy in a way that lets it stretch without breaking.
