In some circumstances, Agile can be used to rescue troubled IT projects. Learn when to use Agile as a tool and how to employ it for success.
This report reviews the role of IT governance with regard to leadership, management, clients and users of IT. It also reviews effective organizational structures for establishing governance that achieves business goals, and the contributions that practitioners within IT Providers make to those structures.
Is the function point method still viable “as is” in newer technologies? The answer is “yes,” but the function point counter must know how to apply the FPA guidelines appropriately. This article examines three types of architectures used in today’s business sectors: client/server and cloud computing; real-time, process control and embedded systems; and service-oriented architecture (SOA). It discusses what to consider when counting in each of those scenarios.
The transition to Agile, or the evolution of Agile within the organization, is just one of the changes that an IT Manager finds on her plate today. In order to get the biggest bang for the buck from Agile, an IT Manager must have a basic understanding of Agile. This report examines the need-to-know characteristics of Agile.
The volume of software to be delivered by a project and the effort taken to deliver it are the principal drivers of software productivity. However, this is not a simple correlation, and in this report we consider how system characteristics affect productivity. We also consider what other factors can influence productivity and which industries might be impacted.
This report examines the role functional metrics play in newer development frameworks like Agile.
For the purposes of this report, we adopt and test the hypothesis that Agile development and iterative development are different, look at the characteristics they do and do not share, and then either accept or reject the hypothesis.
We often read articles about software development best practices. There is no known industry standard or certification that qualifies something as a best practice, nor are there any rules or guidelines that help to classify something as a best practice. So, how do we know if something is a best practice? What gives a practice or a process that special distinction of being the ‘best’? This report examines how IT can identify its own best practices.
Software estimating continues to present challenges for programmers, project managers, and senior-level IT managers. Most organizations consider their estimating practices to be ineffective and have no real sense of how to make them better. However, if an IT organization is serious about improving its estimating practices and wants to estimate more effectively (and accurately), there are workable solutions available.
This report examines how to utilize software estimation techniques more effectively.
This report examines why software development teams may have a separate Project Management Office, as well as the benefits and issues associated with that decision.
This report focuses on the use of the Capability Maturity Model Integration (CMMI®) in organizations employing an Agile approach to development. We answer the question of whether Agile and CMMI can work together or whether they are polar opposites.
Benchmarks of software development processes are now commonplace in our industry, and they can be used effectively in order to understand how a software development organization is performing in relation to others against certain key metrics, like productivity, quality and time to market.
Unfortunately, abuse of the process is common, leading to a devaluing of the process and the tendency to resist the use of metrics. In this report, we discuss how benchmarks can be used effectively within a commercial framework.
This report looks at what we consider to be current software development outsourcing best practices and then speculates on future directions.
This report explains the IT Capability Maturity Framework and compares it to other popular industry frameworks.
This report examines whether we are able to compare the two different function point counting methodologies, IFPUG and COSMIC, and whether there is a distinct correlation between the two methods.
This report examines the pros and cons of distributing estimation to SMEs versus centralizing it.
This report examines the Agile software development framework and how it compares to other SDLCs.
This report examines the presumption that the adoption of a software tool for performing software estimates changes the emphasis of an organization's software estimation process from a reliance on subject matter experts to a reliance on project parameterization and the use of historic actual data for previous “similar” projects.
This Trusted Advisor report explores the genesis of Agile, current best practices and the three future trends clearly seen in today's marketplace.
This report examines if automated function point counting is useful and/or effective in today's IT industry.
This report explains how to successfully use both waterfall and Agile in combination.
Delivering software systems and services has always been a balancing act between delivering on time and delivering “best practices." Every so often, compromises are made on each end and, whether intentional or unintentional, technical debt is created in the process. If left unmanaged, technical debt can hinder, debilitate, and even render an entire organization obsolete. This report identifies some root causes of technical debt and then outlines what can be done to identify and manage it.
The use of project metrics is often contentious and depends on the user’s viewpoint. In this report we take a look at what so-called “classic” project metrics are, how they might be defined, the consequences of the definitions, and how measures can be used effectively as part of assessing the outcome of a software development project.
This paper seeks to clarify how the TMMi (Test Maturity Model integration) works and what it contains.
This report is intended to provide a value analysis, or business case, for how an effective estimating practice contributes value to the organization in both financial and non-financial terms.
This report is intended to show where Agile project integration and acceptance testing fit in the overall flow of Agile and why they are integral to effective Agile.
Find out how you can manage change in order to get developers on board with using function points for software estimation.
Everyone has a different definition of Software Analytics and so it is fair to say that this is a broad topic. Hence, this report defines Software Analytics as much by the examples it gives as by one single formal definition. In answering the “who should care?” part of the question, the report seeks to identify roles that should be using Software Analytics already and/or could be using it more. We also look into the future to see how Software Analytics might change software development in organizations in the way that business intelligence has impacted some organizations.
There are a variety of ways to look at this question. The first response that comes to mind is to simply say, it depends. It depends on who is asking the question. For example, quality may be far more important than productivity if you are talking about a customer who is using the software. It depends on how we define quality and define productivity. Finally, it depends on what we mean by important.
This report seeks evidence that agile software development methods have had a positive impact on traditional performance metrics such as productivity, quality and time-to-market. We consider the assertion that this is not a fair question because, for agile, customer satisfaction is the primary metric. We also consider the possibility that the reported data is skewed by not considering the failure rates of agile projects and/or self-serving optimism from the large and vociferous agile coaching/consulting community (in which we must include ourselves). Finally, we provide the data.
In this report, we look at productivity primarily through a project management perspective. That said, we consider the strategic or top-down perspective and the tactical or bottom-up perspective separately.
How can I manage a project’s productivity? While the question may invite a complex answer, the rationale for asking it is quite simple: increased productivity reduces costs. Of course, this assumes that all other things stay the same, and the report considers the interactions between cost, quality and time to market. It has been estimated that increasing global software productivity by even small percentages would translate into billions of dollars in savings and/or increased profitability. It is no small matter that organizations are constantly looking to improve their software delivery performance.
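To make the cost arithmetic concrete, here is a minimal illustrative sketch of the productivity-to-cost logic described above. The simple cost model and every figure in it are assumptions chosen purely for illustration, not data from this report.

```python
# Illustrative only: a simple cost model assuming effort = size / productivity
# and cost = effort * labor rate. All figures below are hypothetical.

def delivery_cost(size_fp: float, productivity_fp_per_pm: float, rate_per_pm: float) -> float:
    """Estimated cost of delivering size_fp function points."""
    effort_person_months = size_fp / productivity_fp_per_pm
    return effort_person_months * rate_per_pm

baseline = delivery_cost(size_fp=1000, productivity_fp_per_pm=10, rate_per_pm=15_000)
improved = delivery_cost(size_fp=1000, productivity_fp_per_pm=11, rate_per_pm=15_000)  # ~10% productivity gain
print(f"Baseline: ${baseline:,.0f}  Improved: ${improved:,.0f}  Saving: ${baseline - improved:,.0f}")
# All else being equal, a 10% productivity gain reduces effort, and therefore cost, by roughly 9%.
```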
Should I even try? While on the surface this question may seem rhetorical and the response superficial, the real answer lies in two things: the choice of goals for the project (we assume in this report that productivity is a high priority, or else the question is moot) and the degree to which managing for high productivity can be achieved without compromising secondary objectives, the most difficult of which might be customer satisfaction.
The question of whether estimates based on historical data are better than estimates from subject matter experts (SMEs) is a difficult one. We suggest that because SMEs are themselves repositories of historical data (their memory, as good or bad as that might be, assuming they are ever informed about actuals), the question is a false dichotomy. Rather, the question is better framed as whether a tool-based estimate is better than an SME/expert-based estimate.
The question driving this report assumes that a software defect backlog already exists. Hence, some but perhaps not all defects have been identified and logged. It assumes that the defects described in the software defect backlog are not sufficient to prevent the software from functioning. Hence, this report ignores much of the literature about software defect detection and tracking (which is adequately covered in the Sources listed for interested parties) to focus on the narrower perspective of what, if anything, should be done about those unaddressed defects in the software that have been reported by developers, testers, end-users or code analysis software.
The report concludes that there is no way to know if a particular Software Defect Backlog for a particular application or product matters without further analysis. The report recommends an approach for this further analysis.
IT Governance is about defining what decisions need to be made, who should make them and how they should be made. One of the biggest decision areas that business and IT providers should exercise strong governance over is outsourcing.
Many books and articles have been written about IT outsourcing. The business and social impact of outsourcing and, particularly, offshoring have been huge in the 21st century. It seems that for every success story, there is a story of a painful transition. In 2008, when we published “The Business Value of IT”, we believed that there was no turning back and that the future masters of IT would necessarily be masters of outsourcing. Six years later, we still believe that there is no turning back, although we would temper that stark assertion today with the observation that companies, including some of our clients, have recognized that “Total Outsourcing” of software development represents a significant corporate risk to some businesses.
This report reviews the most significant risks of outsourcing software development and offers up some mitigation strategies.
Most if not all CFPS (Certified Function Point Specialists) have encountered projects that sit outside the organizational norm with significant development requirements for non-functional work.
Function points, by their very name, are focused on delivered functionality and so do not provide for non-functional development.
SNAP (Software Non-functional Assessment Process) has been developed to supplement the FP sizing methodology and provide a sizing technique for the non-functional component.
This report considers the following:
When do I need to use a non-functional measure?
Where did SNAP come from?
What does it capture?
How do I implement it?
Do estimating products cover SNAP?
Can SNAP productivity be compared against the rest of the Industry?
Are there alternatives?
In this report we examine what people consider as excellence in software development, and how they compare performance of development teams – the process of benchmarking. We will show that concentration on one aspect of excellence has a direct influence on other possible views. We will determine how individual views of excellence may coincide with aspects of the business lifecycle. Finally, we will look at how benchmarks tend to be driven to one conclusion, which may be optimum for one view of excellence but generally ignores the other factors.
The discussion that follows will explore:
• The key components of an effective estimating model
• The benefits of effective estimating
• The challenges impacting effective estimating
• Pros and cons of vendor-developed estimating models
To properly address the question and to evaluate the potential benefits of a vendor-developed estimating model, we should understand the characteristics of an effective estimating model. Similarly, we should know the benefits and challenges of estimating in order to evaluate the impact an estimating model may have on realizing those benefits and meeting those challenges. Once we have formed our basis of understanding in that regard, we are well positioned to evaluate the pros and cons of vendor-developed estimating models.
To answer this question with a simple yes or no, we need to look beyond its hyperbole and address three separate questions: whether all types of testing can be automated, whether automated testing is sufficient, and finally whether developers can replace testers.
We discuss the different types of functional size measures other than the IFPUG methodology and review the pros and cons of each.
This report addresses the following questions:
This report provides a definition of Lean Software Development and explains some key characteristics. It explores the similarities and differences between Lean Software Development, Lean Manufacturing and Lean Six Sigma. Finally, it considers the extent to which traditional waterfall and agile (primarily scrum) approaches to software development can be considered as “Lean Software Development.”
This report is not about the ROI of agile methods versus other SDLCs. Instead, we consider whether the traditional approach to producing business cases for projects or programs, by predicting financial outflows (project costs) and financial inflows (new income or savings), is still appropriate or even meaningful for agile software development based on Scrum and/or enterprise-wide extensions of Scrum such as SAFe or DSDM.
This report identifies evidence that projects are late, over budget or deliver less than promised. It then considers various potential causes of these failures, including culture, process and estimation, and how getting these things right can contribute to success.
This report discusses the challenge Information Technology professionals face in marketing and selling their capabilities to their peers, or internal clients, and how to meet that challenge in order to remain competitive in the ever-commoditizing world of technology.
This report investigates how changes to the SDLC (Software Development Life Cycle) can improve the delivery of demonstrable value to the business. We consider how we might measure “demonstrable value” in a way that the business will understand. We review the theory of “Lean Software Engineering” and suggest some ways that the theory can be applied to optimize different SDLCs. Finally, we discuss the importance of Value Visualization: requiring each story or requirement in the SDLC to have a demonstrable and highly visible set of business value criteria to drive tactical decision making.
Estimation is one of the lightning rod issues in software development and maintenance. Over the past few years, the concept of #NoEstimates has emerged and become a movement within the Agile community. Because of its newness, #NoEstimates has several camps revolving around a central concept of not generating task-level estimates. The newness of the movement also means there are no (or very few) large example projects that can be used as references. Finally, there are no published quantitative studies comparing the results of work performed using #NoEstimates techniques to the results of other methods. In order to have a conversation, we need to begin by establishing a shared context and language across the gamut of estimating ideas, whether Agile or not. Without a shared language that includes #NoEstimates, we will not be able to compare the concept to classical estimation concepts.
This report discusses what is meant by value, the process of sizing and estimating the software deliverable, and the benefits of those results. It covers:
• What is “Value”?
• Functional Value
• More on the estimation process
• Case study example
• Conclusion
This paper discusses the time constraints of testing, its impact on several testing stakeholders, and possible ways to mitigate this problem. It includes:
• Statistics on testing length.
• Who are some of the stakeholders for software testing?
• What kinds of delays do testers frequently face?
• Making more time to test.
Story Points and Function Points are both methods for ‘sizing’ software. This Trusted Advisor report will establish why sizing is important and present an overview of the two sizing methods followed by a discussion on the merits of both Story Points and Function Points by answering some very common questions:
Can I use function points on an agile project?
Are story points really easier and faster to use than function points?
Is there a relationship between story points and function points?
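As a purely illustrative aside on the third question, one way teams sometimes explore the relationship is to derive a local ratio from their own completed work. The sketch below is an assumption-laden example with invented sample data; it is not a finding of this report.

```python
# Illustrative sketch: deriving a team-local function-point-per-story-point ratio
# from completed work items. The sample data below is hypothetical.
from statistics import mean

completed_items = [
    {"story_points": 5, "function_points": 12},
    {"story_points": 3, "function_points": 7},
    {"story_points": 8, "function_points": 21},
]

ratios = [item["function_points"] / item["story_points"] for item in completed_items]
print(f"FP per story point: mean={mean(ratios):.2f}, min={min(ratios):.2f}, max={max(ratios):.2f}")
# A wide spread suggests the two measures are not interchangeable for this team;
# a narrow spread may justify a rough, team-specific conversion factor.
```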
Scrum defines three basic roles within a Scrum team: developers (including testers), a scrum master/coach and a product owner. Each of these roles is critical for delivering value effectively and efficiently. The product owner role is deceptively simple. The product owner is the voice of the customer, a conduit for bringing business knowledge into the team. The product owner defines what needs to be delivered to support the business (or at least finds out), dynamically provides answers and feedback to the team, and prioritizes the backlog. From a business perspective, the product owner is the face of the project. This essay will highlight the role of the product owner and why a role that seems so easy is generally the hardest one on an Agile team.
The job description of a product owner is fairly straightforward: act as the voice of the customer, prioritize the backlog, answer or get answers to the team’s questions, and accept or reject the work that the team generates. However, the devil is in the details. Understanding the nuances of the role is important to functioning successfully as part of an Agile team.
This report addresses the question in the following areas:
In this paper, we consider the impact of digital transformation on software development, and whether the Agile Scrum approach, which many organizations use to help software development teams respond more quickly and effectively to business demands, can be applied more widely across the organization to drive digital transformation.
We focus on what has been termed “SMAC,” an acronym derived from the names of what many believe to be the driving forces of the latest wave of digital transformation:
• Social media
• Mobile
• Analytics (or “big data”)
• Cloud
It is our belief that Agile principles and methods can be applied throughout an organization to deliver effective digital transformation.
Every retrospective requires some sort of tool. Tools can be as simple as a whiteboard and markers or as complex as mind-mapping and screen-sharing software. When a team is distributed, screen-sharing and teleconferencing/videoconferencing tools are necessities. The combination of technique and level of team distribution will influence tool selection; likewise, tool availability will influence technique selection. For example, use a mind-mapping tool and screen sharing when running a listing retrospective for a distributed team, so that each location can see the ideas and participate. If a distributed team cannot use those tools, you will have to find a different approach. Generally the technique defines the toolset, but that is not always the case. When everyone is in the same room, sticky notes are great, but when team members are teleconferencing into the retrospective, electronic tools are required.
The retrospective can’t become ritualized to the point that it lacks meaning. Each retrospective needs to provide a platform for the Agile team to reflect on their performance and to determine how they can achieve more. This is a team activity that requires a free flow of conversation and ideas in order to maximize effectiveness, which means someone needs to facilitate the process and police the boundaries. No team is perfect, and all teams can learn and improve on a continuous basis. Most obstacles to effective retrospectives are solvable with a bit of coaching and education, if you recognize the obstacles before you abandon the technique. Facilitation skills, retrospective techniques and tools are all important for an effective retrospective. The technique is driven by the needs of the team; the coach/facilitator needs to be aware of those needs and of the proper tools to support the chosen technique, and if those tools are not available, to pick another technique. Once the retrospective begins, however, facilitation skills are always the most important factor. Even with the best technique and tools, retrospectives are all about the people.
Every company wants to maximize its profits while meeting its customer expectations. The primary purpose of software delivery is to provide a product to the customer that will validate a business idea, and ultimately provide value to the end-user. There must be feedback between the customer and the business, and this iterative process must be performed quickly, cheaply and reliably.1 The real question is how does an organization know whether its software delivery is performing at optimal levels?
This report considers the following topics.
This report discusses the tension between an organization’s need for budgetary data for planned Agile deliverables and traditional project cost accounting. Lean-budgeting best practices for Agile projects at the portfolio level are highlighted to illuminate the importance of estimating and budgeting as Agile scales in an organization. The Scaled Agile Framework (SAFe) portfolio and value stream levels, as presented in SAFe 4.0, provide the backdrop for this discussion.
To assess the value of function points (of any variety), it is important to step back and address two questions: first, “What are function points (in a macro sense)?” and second, “Why do we measure?”
The effective use of function points centers around three primary functions: estimation, benchmarking and identifying service-level measures.
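The hypothetical sketch below shows, in miniature, how a function point count can feed each of those three uses. The delivery rates, benchmark figure and defect counts are invented for illustration only.

```python
# Illustrative sketch of the three uses of function points named above.
# All figures are hypothetical.

project_size_fp = 400            # functional size of a planned project
historical_rate_fp_per_pm = 12   # delivery rate observed on past projects (FP per person-month)

# 1. Estimation: derive an effort estimate from size and a historical delivery rate.
estimated_effort_pm = project_size_fp / historical_rate_fp_per_pm

# 2. Benchmarking: compare our delivery rate with an external reference rate.
benchmark_rate_fp_per_pm = 10    # assumed reference figure, not real industry data
relative_productivity = historical_rate_fp_per_pm / benchmark_rate_fp_per_pm

# 3. Service-level measures: normalize a quality measure by functional size.
defects_found = 18
defect_density = defects_found / project_size_fp  # defects per function point

print(f"Effort estimate: {estimated_effort_pm:.1f} person-months")
print(f"Productivity vs. benchmark: {relative_productivity:.0%}")
print(f"Defect density: {defect_density:.3f} defects per FP")
```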
More and more organizations are adopting some form of Agile framework for application development and enhancement. The most recent VersionOne State of Agile Survey reveals that 94% of organizations practice Agile. Hot technologies such as big data, analytics, cloud computing, portlets and APIs are becoming ever more popular in the industry.
This report explores each of the three primary functions of function points and their relevance in today's Agile-dominated IT world and with new technologies.
Since the invention of Function Points (FPs), any time new development methods, techniques, or technologies are introduced, the same questions arise: “Can we still use FPs?”, “Do FPs apply?”, “How do we approach FP counting?”. These questions came up around middleware, real-time systems, web applications, component-based development, and object-oriented development, to name a few. With the increased use of Agile methodologies, and therefore the increased use of User Stories, these questions are being asked again. It is good to ask these questions and have conversations to ensure that the use and application of FPs is consistent throughout the industry in all situations. The short answers to the questions are: Can we still use FPs? YES. Do FPs apply? YES. How do we approach FP counting? The answer to this last question is what this article will address.
We’re pleased to share this month’s Trusted Advisor, which was written by Capers Jones. Capers is a well-known author and speaker on topics related to software estimation. He is the co-founder, Vice President, and Chief Technology Officer of Namcook Analytics LLC, which builds patent-pending advanced risk, quality, and cost estimation tools.
Many thanks to Capers for participating in Trusted Advisor and allowing us to publish his report!
The 30th anniversary of the International Function Point User’s Group (IFPUG) is approaching. As such, this report presents a brief history of the origin of function points. The author, Capers Jones, was working at IBM in the 1960s and 1970s, observing the origins of several IBM technologies, such as inspections, parametric estimation tools, and function point metrics. This report discusses the origins and evolution of function point metrics.
In this report, we suggest some considerations for executives seeking to grow the number of agile teams in their organization. At some point, changes are needed at the top. In particular, the portfolio management team needs to reorganize the proposed software development work to allow it to be pulled by the programs and teams from a portfolio backlog prioritized by economic value.
This month’s report will focus on two key areas of vendor management. The first is vendor price evaluation, which involves projecting the expected price for delivering on the requirements. The second is vendor governance: the process of monitoring and measuring vendor output through the use of service-level measures.
This month’s report will focus on how to improve estimation practices by incorporating the Software Non-functional Assessment Process (SNAP) developed by the International Function Point User’s Group (IFPUG) into the estimation process.
Many venture capitalists, investors, and managers have experienced unforeseen and unnecessary losses due to hidden challenges in a target company’s software. Excessive enhancement requirements stemming from the size and/or complexity of a software asset can lead to significant upgrade and maintenance costs or, worse, non-performing functionality. These sometimes-large issues can remain unidentified until very late in the development lifecycle. If your company is acquiring another company, you can plan to integrate their software with yours by working with the M&A team as early as possible to gather information about the risks and challenges that you are likely to face during the due diligence process.
It is commonly accepted that most organizations today have moved, are moving, or are evaluating a move toward the use of the Agile methodology. This report considers: (a) why organizations move to Agile; (b) what it means to adopt the Agile methodology and undergo a transformation; (c) how to measure whether your transformation is successful; and (d) how to ensure that the effects of the transformation continue.
Within the agile world, story points are considered the go-to metric for teams that want to estimate the relative effort of their user stories. However, within organizations that use multiple agile teams, when those story points are extrapolated to the Epic level, the aggregated metric starts to show some flaws that need to be acknowledged.
This report analyzes the use of story points at the Epic level and proposes some alternative sizing solutions.
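To illustrate the kind of flaw the report has in mind, consider the hypothetical sketch below: two teams assign the same number of story points to an Epic but calibrate their points very differently, so the raw Epic total misrepresents the work involved. The teams, point values and velocities are invented purely for illustration.

```python
# Hypothetical illustration: two teams whose story-point scales differ, so raw
# story-point totals at the Epic level are not directly comparable.

teams = {
    # team name: story points assigned to the epic, and points delivered per sprint
    "Team A": {"epic_points": 40, "velocity": 20},
    "Team B": {"epic_points": 40, "velocity": 10},
}

raw_total = sum(t["epic_points"] for t in teams.values())
print(f"Raw Epic total: {raw_total} story points")  # looks like two equal shares of work

for name, t in teams.items():
    sprints_needed = t["epic_points"] / t["velocity"]
    print(f"{name}: {sprints_needed:.1f} sprints of effort for the same 40 points")
# Identical point totals hide a 2x difference in effort, which is one reason
# alternative sizing approaches are considered at the Epic level.
```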
Estimation and software measurements are interrelated concepts but they are not the same. This paper examines the potential impact of adopting a #NoEstimates approach for estimation on software measurement.
Testing is an essential part of software development, and it is even more critical with the arrival of complex integrated systems whose business transactions need to be bulletproof against defects. Whether you follow agile or classic waterfall, testing is still going to be a big part of your lifecycle, so it is imperative that it be as efficient as possible without reducing the expected quality. But there is another variable in this equation: the cost of testing, which has become a concern for many organizations. This report is divided into two parts. The first focuses on cost tracking and on capturing other variables that help us make decisions regarding our testing strategy. The second analyzes several options for improving our testing efficiency.