Why Are There So Many Hurdles to Efficient SAM Benchmarking?
When dealing with Software Analysis and Measurement benchmarking, people's behavior generally falls into two categories

Two opposite sides
When dealing with Software Analysis and Measurement benchmarking, people’s behavior generally falls in one of the following two categories:

  • “Let’s compare anything and draw conclusions without giving any thought about relevance and applicability”
  • “There is always something that differs and nothing can ever be compared”

As is often the case, the sensible middle ground gets little traction.

Benchmarking challenges
Some of the most common reasons for objecting to comparing SAM results are:

  • Applications use different technological stacks, with variable thresholds on the level of difference that matters, such as:
    • Object-oriented vs. procedural, implying that comparing object-oriented applications together is OK
    • JEE vs. .NET, implying that comparing object-oriented applications together is not OK if they are not of the same flavor
    • With or without Hibernate framework (or any other framework for that matter), implying that comparing applications using different frameworks is not OK
  • Measurement capability evolves over time, with
    • New measure elements, to check for new risk-inducing patterns
    • Improvements to existing measure elements, to reduce false positives
  • The measurement process relies on contextual information, such as
    • Target architecture
    • Vetted libraries
    • In-house naming norms

All of the above reasons make perfect sense, and one has to be well aware of these situations. However, they are no grounds for dismissing any possibility of comparison altogether.

Built-in clutch
A well-designed measurement model can help overcome these challenges. Indeed, with a measurement model that aligns with a Goal-Question-Metric (GQM) approach, the three levels act like a built-in clutch.

How so? The Metrics required to answer a given Question can differ from one technology stack to another, or evolve from one release of the measurement platform to the next, without invalidating the results. The same is true of the number and nature of the Questions one asks to measure the level of achievement of each Goal.

Then, with a measurement model that uses a compliance ratio to consolidate and aggregate results (as opposed to a raw count of non-conformities), the statistical processing itself acts as a built-in clutch.
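As a minimal sketch of that rollup (the rule names and counts below are invented for illustration, not taken from any real measurement model), compliance ratios computed per Metric can be averaged into Question and Goal scores:

```python
# Illustrative GQM rollup using compliance ratios; all names and
# figures are hypothetical.

def compliance(passed: int, checked: int) -> float:
    """Compliance ratio: share of checked objects that pass the rule."""
    return passed / checked if checked else 1.0

# Metric level: (objects passing, objects checked) for each rule.
metrics = {
    "avoid_deep_inheritance": (90, 100),
    "avoid_dynamic_sql": (45, 50),
}

# Question level: average the compliance ratios of its Metrics.
question_score = sum(compliance(p, c) for p, c in metrics.values()) / len(metrics)

# Goal level: average over its Questions (a single Question here).
goal_score = question_score
print(f"{goal_score:.2f}")  # 0.90
```

Because each level consumes ratios rather than raw counts, swapping in a different set of Metrics for another technology, or for a new platform release, does not change the meaning of the scores above it.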

From one technology stack to another
At the Question level, the number of contributing Metrics can legitimately differ: capabilities differ from one technology to another, and so do the sources of issues, hence a different number of Metrics. As long as the Question is properly answered by that smaller or larger set of Metrics, the difference is normal and acceptable.

At the Goal level, the same holds even if the number of contributing Questions differs: the difference is inherent to the technology and is therefore normal and acceptable.
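As a toy illustration of that point (the ratios are invented), a Question answered by four Metrics in one technology and only two in another can still yield directly comparable scores:

```python
# Hypothetical compliance ratios per Metric: JEE has more applicable
# object-oriented rules than COBOL, yet the Question scores compare.

def question_score(ratios: list[float]) -> float:
    """Question-level score: mean of its Metrics' compliance ratios."""
    return sum(ratios) / len(ratios)

jee_metrics = [0.92, 0.88, 0.85, 0.90]   # more OO-related rules apply
cobol_metrics = [0.91, 0.87]             # fewer rules apply to COBOL

print(round(question_score(jee_metrics), 2))    # 0.89
print(round(question_score(cobol_metrics), 2))  # 0.89
```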

For example, there is no way to over-complexify COBOL code with coding and architectural practices related to object-oriented capabilities, while there are many ways to do so in JEE. This comparison might seem unfair to JEE, which is assessed with more rules and technical criteria than COBOL, but the difference is inherent to their respective capabilities, and a fair assessment of transferability or changeability risk must take it into account.

As this assessment result will guide resource allocation decisions, it is critical to know that a given JEE component carries not only some regular algorithmic or SQL or XYZ complexity, but also an additional load of complexity due to excessive polymorphism or XYZ.

From one release to another
New releases of the measurement system are designed to deliver more accurate assessment results. However, that added accuracy will affect results. The use of a compliance ratio can help limit the impact, in the cases where it should indeed be limited.

  • If the newly supported syntax or extra capability leads to testing new objects whose quality level is similar to that of objects with previously supported syntax, the ratio will remain stable.
  • If the newly supported syntax or extra capability leads to testing new objects whose quality level is markedly worse, the ratio will drop, and for the better: it makes quality issues visible.
  • The same reasoning applies to a new quality check: no impact if the quality level is similar, an impact if there is a new piece of information to be known.
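The first two scenarios can be checked with a few invented figures:

```python
# Hypothetical figures: a new release starts testing 50 extra objects
# on top of the 200 already measured, 20 of which violate the rule.

def ratio(violating: int, total: int) -> float:
    """Compliance ratio: share of objects with no violation."""
    return 1 - violating / total

before = ratio(20, 200)            # 0.90 before the new release

# Similar quality in the new objects (5 of 50 violate): ratio stable.
similar = ratio(20 + 5, 200 + 50)  # still 0.90

# Much worse quality in the new objects (25 of 50 violate): the ratio
# drops, surfacing previously invisible issues.
worse = ratio(20 + 25, 200 + 50)   # 0.82

print(before, similar, worse)
```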

Why not rely on a raw count of violations?
With a raw count of violations, any new rule would increase the number of violations regardless of the actual quality level, because it turns invisible quality issues into visible ones, even when the compliance ratio for the new rule is better than that of the rest of the pack.
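A small numeric contrast (figures invented) shows why: the raw count can only grow when a rule is added, even while the overall compliance ratio improves:

```python
# Hypothetical figures: existing rules find 100 violations over 1000
# checks (90% compliant); a new rule adds 5 violations over 100 checks
# (95% compliant, i.e. better than the rest of the pack).

existing_viol, existing_checks = 100, 1000
new_viol, new_checks = 5, 100

raw_before = existing_viol               # 100 violations
raw_after = existing_viol + new_viol     # 105 violations: looks worse

ratio_before = 1 - existing_viol / existing_checks
ratio_after = 1 - (existing_viol + new_viol) / (existing_checks + new_checks)

# The raw count rises while the compliance ratio actually improves.
print(raw_before, raw_after, round(ratio_before, 3), round(ratio_after, 3))
```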

From one context to another
This area is perhaps the most delicate, because the ability to compare relies on the assumption that human contextual input differs but is fairly set.

To expect fairness may seem naïve, but a lot of the measurement process already relies on some amount of fairness. For example, what are the true boundaries of the application you're measuring? Nowadays, with cross-app principles, the definition is blurred and it could be easy to omit part of the application to hide some facts from management.

One could also define a target Architecture that works in their own best interest, but that would mean entering flawed configuration data into the system, data that anyone can review.

One could likewise vet libraries so that security-related vulnerability checks no longer flag them, improving assessment results, but that too would mean entering flawed configuration data.

At the end of the day
Yes, there can be true differences between the way multiple applications are assessed.

However, this is no reason to dismiss benchmarking altogether. What matters is not to hide the differences and their impact.

As Dr. Bill Curtis stated during the CISQ Seminar in Berlin on June 19th, when explaining the key factors of a sound productivity analysis: always inspect the data, investigate outliers and extreme values, and always question the results.

In other words, always use your brain.


About Lev Lesokhin
Lev Lesokhin is responsible for CAST's market development, strategy, thought leadership and product marketing worldwide. He has a passion for making customers successful, building the ecosystem, and advancing the state of the art in business technology. Lev comes to CAST from SAP, where he was Director, Global SME Marketing. Prior to SAP, Lev was at the Corporate Executive Board as one of the leaders of the Applications Executive Council, where he worked with the heads of applications organizations at Fortune 1000 companies to identify best management practices.
