Version Control


Version control systems are a category of software tools that help a software team manage changes to the source code over time. A version control tool tracks every change and modification the code undergoes in a special type of database. If a developer makes a mistake, one can easily go back to an earlier version of the code, before the breaking change existed, and compare it with the current code to locate the error more easily.

For almost every software project, the source code is like the crown jewels: precious and valuable. It is an invaluable repository of the knowledge and domain understanding that the developers have studied and perfected. Version control tools protect the integrity of the source code against human error and accidental mistakes.

Furthermore, the source code is alive, constantly changing and being edited by numerous members of a team. Version control tools help resolve conflicts between all of these changes. Changes one developer is working on may be incompatible with changes being made by another developer; this situation should be detected and resolved without blocking other team members.

Advantages of version control systems

  • The first obvious advantage of using a VCS is a complete history of every change made to every file in the project over time.
  • Branches and pull requests. One can create a branch and write or modify code without blocking other team members. If an error occurs, one can easily roll back to an earlier version of the code.
  • Tracking power. One can see who is responsible for the changes made in a file. This is sometimes used to blame whoever broke the code.


Git is probably the most commonly used VCS. The main difference between Git and any other VCS is the way data is handled. Most VCSs store information as a list of file changes.

Storing data as changes to a base version of each file.

Git does not store information in this manner. Instead, Git stores a set of snapshots in a system of miniature files. Every time you add a change or store the state of the project, Git takes a snapshot of how the files look at that point in time and stores a reference to that snapshot. So Git stores its data as a sequence of snapshots.

Git stores data as snapshots of the project over time.
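
The snapshot idea can be illustrated with a short sketch (illustrative only, not Git's actual object format): each snapshot maps every file name to a hash of its full content, so unchanged files keep pointing at the same stored content instead of accumulating deltas.

```python
import hashlib

def snapshot(files):
    """Record a snapshot: map each file name to a hash of its full content."""
    return {name: hashlib.sha1(content.encode()).hexdigest()
            for name, content in files.items()}

# Two snapshots of a tiny project. Only the file that changed gets a new
# hash; the unchanged file keeps the same reference, so no data is duplicated.
v1 = snapshot({"main.py": "print('hello')", "README": "docs"})
v2 = snapshot({"main.py": "print('hello, world')", "README": "docs"})
```

Real Git works similarly at heart: file contents (blobs) are addressed by the hash of their content, and each commit references a tree of such hashes.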



Software Testing Environments


Software companies are constantly, and increasingly, concerned with developing quality applications and systems: products that satisfy the client's needs. Companies will go the extra mile to meet these demands, which is why they spend large amounts of money on them.

Developers now have to deal with multiple teams working concurrently under tight schedules that are becoming even more stringent. Systems also have to work and communicate with a number of different systems and platforms. These circumstances make quality assurance protocols, and therefore software testing environments, really important.

Optimal testing environments improve the efficiency of the testing phases. According to Gartner, the economic cost of fixing a bug during the analysis phase is around 70 USD; fixing the same bug in production costs around 14,000 USD, a huge difference. Software testing environments play a big role in mitigating bugs in production by finding them during earlier phases of development.


There are two global tendencies that have started to emerge in the software world. These help us better understand how testing environments should be managed.

TEM (Test Environment Management):

This model proposes a single, centralized administration of testing environments that monitors and maintains them and resolves emerging issues, establishing a sole entity responsible for the environment.

TDM (Test Data Management):

This model ensures that the testing environment databases contain everything necessary to deliver good tests, guaranteeing speed, efficiency and the continuous deployment of the software.
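
As a minimal sketch of the idea (the table and seed rows here are invented for illustration), a test suite can build its own small database with known data, so every test run starts from exactly the same state:

```python
import sqlite3

def make_test_db():
    """Create an in-memory database seeded with known, repeatable test data."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)",
                     [("alice",), ("bob",)])
    conn.commit()
    return conn

# Every call yields an identical, isolated environment for the tests.
conn = make_test_db()
user_count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

At scale, a TDM process automates exactly this kind of provisioning: generating, masking and refreshing test data so environments stay trustworthy.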


Peer review

What is it?

Peer review can be defined as the review of code by a member of the team other than the author of the original code. Its objective is to find defects and to propose and evaluate alternative solutions. Furthermore, peer review serves as a facilitator to spread knowledge across the team and, where applicable, across the organization.

Where does it come from?

This technique has its roots in the publication of scientific articles, where it has been the prevailing practice since the middle of the 20th century.


What things should I have in mind in a peer review session?

Peer review sessions aim for the early detection of bugs, which is why they should be performed incrementally as the phases of the development life cycle are completed. It is better to have many small peer review sessions throughout the development of the software than one big session at the end of the life cycle.

The focus of a peer review session should always be the code at hand. The people around the code should not be the center of attention.

Starting from these principles, it is possible to build peer review systems of varying complexity: basic systems where the review is performed by a single developer who takes the role of reviewer, or systems where the review is performed by a group of developers.

Focusing on the key elements

The value of peer review sessions lies in catching problems in new code early. That is why, during a peer review session, one has to keep the focus on the points that really matter:

  • Are you following the principles of Object Oriented Programming?
  • Are you correctly using the libraries and frameworks you are integrating? Are you using one that is not standard?
  • Can the code be refactored in a way that is more readable?
  • Can one foresee issues with performance or memory leaks?
  • Are exceptions and logs being used correctly?
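
To make the last point concrete, here is a small, hypothetical example of the kind of change a reviewer might ask for: catch a specific exception and log the fallback, instead of silently swallowing every error.

```python
import logging

logger = logging.getLogger(__name__)

def parse_port(value, default=8080):
    """Parse a port number from a string, falling back to a default.

    A reviewer would flag a bare `except:` here; catching only ValueError
    and logging the fallback keeps failures visible in the logs.
    """
    try:
        return int(value)
    except ValueError:
        logger.warning("invalid port %r, using default %d", value, default)
        return default
```

The function, its name and its default are made up for illustration; the review point is the pattern, not the specific code.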

Planning V&V

What is Validation and Verification?

Validation and verification is an activity that plays a very big role in achieving quality processes and products. Given the historic problems that have emerged from software development, the study of V&V is an important step for the future evolution of software technologies.

Planning V&V

The planning of V&V depends entirely on the goals and objectives of the project at hand. To achieve those goals, different life cycle methods have been developed, but all of them require a specific process that ensures quality within the project. That is why, at the end of every life cycle phase, it should be verified that the work done so far complies with the aforementioned objectives.

The final objective of the V&V process is to prove that the system is fit for its purpose, applying specific techniques known as tests and reviews. The validation and verification process is a set of procedures, activities and tools used in parallel with software development, with the end goal of ensuring that the product solves the problem it was originally intended to solve.


Verification tests the consistency of the software with respect to the requirements; that is, it answers the question: is the software being built correctly? The process determines whether the product resulting from a life cycle phase complies with the requirements established in the previous phase, and whether the resulting software is complete, consistent and correct enough to start the next phase.


Validation checks whether what was specified and implemented is what the user really wants; that is, it answers the question: has the correct software been built? The process determines whether the software accomplishes its specifications and ensures that the software built behaves as expected and complies with the client's expectations.



MoProSoft

What is it?

MoProSoft is a process reference model that encapsulates good practices and software management processes. These help companies in charge of the development and management of software. Its goals are to improve their workflow, quality and competitiveness.

MoProSoft comprises a set of integrated processes, with their workflows, roles and products, that serve as a framework for software engineering companies.


  • It specializes in software development and maintenance.
  • It facilitates compliance with the requirements of the following models: ISO/IEC 29110, ISO 9001:2008, ISO/IEC 15504, CMMI-DEV and ISO/IEC 12207.
  • It is easy to understand and implement.
  • It is useful when implemented.
  • The official document is less than 200 pages long, which makes it really concise relative to other models and standards.
  • It follows the structure of Mexican organizations dedicated to the development and maintenance of software products.
  • It is oriented towards the improvement of software processes.
  • It has a low training cost as well as a fast adoption time.


  • It improves the quality of the software shipped by the company that adopts the model.
  • It raises the company's capacity to offer quality services and reach international standards of competitiveness.
  • It integrates all of the organization's processes and maintains a direct relation with its strategic objectives.
  • It allows the organization to be recognized as mature and established.
  • The organization appears in the global list of companies that adhere to this model, which serves as a reference for clients, authorities and competitors.




What is Software Quality?

OK, so the first blog post for #TC3045 will start by defining software quality.


In a software development project there will be numerous requirements, implicit and explicit. Software quality is the degree of conformance to these requirements, so one can be as strict as needed. This strictness is defined mostly by the stakeholders, but also by the clients and end users.

Of course, there are multiple definitions of software quality by multiple people. The IEEE defines it as the degree to which a system, component, or process meets specified requirements. The International Software Testing Qualifications Board defines it as the totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs.

There are key aspects to talk about when discussing software quality:

  • Good design. UI and UX have become extremely important for the user. A good way to think about it: for the end user, the interface IS the system.
  • Reliability. Good software needs to function bug-free.
  • Durability. A quality piece of software will stand the test of time.
  • Consistency. One should expect a piece of software to behave in a predictable manner.
  • Maintainability. This is very important for the stakeholders: money will be spent maintaining a piece of software, so it should be maintainable by different programmers.

Software Quality Models

  • McCall’s Quality Model
  • Boehm quality model
  • Dromey’s quality model

McCall’s Quality Model

This model tries to maintain harmony between users and developers. It defines eleven quality factors, which can be grouped as shown in the image below.

McCall Software Quality Model in Software Quality Assurance

Course Evaluation

What I liked

In this course I liked the book, Head First Object-Oriented Analysis and Design. I think it was a good book with good examples. I learned quite a lot of things that will help me in my professional career.

What I didn’t like

Looking back, I think I would have liked to develop a project throughout the semester. Maybe, with the help of the teacher, we could have built a project similar to one the teacher had developed in the past, and seen how it is done in real life.

ECOAs evidence


Mastery 13 – Test Driven Development

What is it?

Test Driven Development (TDD) is a software engineering practice that requires unit tests to be written before the code they are supposed to validate. Coming from the Agile world, in which it is a basic practice of the Extreme Programming (XP) method, TDD is nowadays recognized as a discipline in its own right that is also used outside the agile context.

TDD Principles

By combining programming, unit test writing and refactoring, TDD is a structuring practice that makes it possible to obtain clean code that is easy to modify and that answers the expressed needs, which remain the first priority when developing an application. TDD has three phases:

  1. RED. First write a unit test that fails. Code that does not compile counts as a failure.
  2. GREEN. Write, as quickly as possible, just enough production code to pass this unit test, even if that means allowing the “worst” solutions. Of course, if a clean and simple solution appears immediately it should be implemented, but otherwise it is not serious: the code will be improved incrementally during the refactoring phases. The aim here is to reach the green bar of passing unit tests as soon as possible.
  3. REFACTOR. This phase is often neglected but is essential, because it eliminates possible code duplication and also makes it possible to change the architecture, factoring and presentation. This refactoring concerns both the production code and the test code, and must not modify the external behavior of the program, which is confirmed by a test execution bar that remains green.
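
One RED/GREEN cycle can be sketched as follows (the fizzbuzz function is a made-up example, not tied to any real project): the test is written first and fails while fizzbuzz does not exist; the implementation is then the minimum needed to turn the bar green.

```python
import unittest

# GREEN phase: the minimal implementation, written only after the test below.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# RED phase: this test was written first; it failed (did not even compile/run)
# until fizzbuzz() existed.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples(self):
        self.assertEqual(fizzbuzz(3), "Fizz")
        self.assertEqual(fizzbuzz(5), "Buzz")
        self.assertEqual(fizzbuzz(15), "FizzBuzz")
        self.assertEqual(fizzbuzz(7), "7")
```

A REFACTOR pass would then clean up any duplication in both the production and the test code while the test keeps passing.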



Mastery 12 – Testing in OO

What is testing?

Once the program code is written, it must be tested to detect, and subsequently handle, all errors in it. A number of schemes are used for testing purposes.

Another important aspect is the fitness for purpose of a program, which ascertains whether the program serves the purpose it aims for. This fitness defines the software quality.

Testing in Object Oriented Systems

Unit Testing

In unit testing, the individual classes are tested. It is seen whether the class attributes are implemented as per design and whether the methods and the interfaces are error-free. Unit testing is the responsibility of the application engineer who implements the structure.
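
A minimal illustration of this (the Stack class is a toy example invented here): the class is exercised in isolation, checking that its attributes and methods behave as designed.

```python
class Stack:
    """Class under test: a tiny LIFO stack."""
    def __init__(self):
        self.items = []          # attribute implemented as per design

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()

# Unit test: exercise the class alone and check its interface is error-free.
s = Stack()
s.push(1)
s.push(2)
top = s.pop()        # last item pushed should come out first
```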

Subsystem Testing

This involves testing a particular module or a subsystem and is the responsibility of the subsystem lead. It involves testing the associations within the subsystem as well as the interaction of the subsystem with the outside. Subsystem tests can be used as regression tests for each newly released version of the subsystem.

System Testing

System testing involves testing the system as a whole and is the responsibility of the quality-assurance team. The team often uses system tests as regression tests when assembling new releases.

Object-Oriented Metrics

Metrics can be broadly classified into three categories: project metrics, product metrics, and process metrics.

Project Metrics

Project Metrics enable a software project manager to assess the status and performance of an ongoing project. The following metrics are appropriate for object-oriented software projects:

  • Number of scenario scripts
  • Number of key classes
  • Number of support classes
  • Number of subsystems

Process Metrics

Process metrics help in measuring how a process is performing. They are collected over all projects over long periods of time. They are used as indicators for long-term software process improvements. Some process metrics are:

  • Number of KLOC (Kilo Lines of Code)
  • Defect removal efficiency
  • Average number of failures detected during testing
  • Number of latent defects per KLOC
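
For instance, defect removal efficiency can be computed directly from defect counts (the numbers below are made up for illustration):

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """DRE = defects removed before release / total defects found."""
    total = found_before_release + found_after_release
    return found_before_release / total

# Hypothetical project: 90 defects caught in testing, 10 escaped to production.
dre = defect_removal_efficiency(90, 10)   # 0.9, i.e. 90 % efficiency
```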

Software verification and validation

In this week’s mastery topic we’ll talk about verification and validation: what each of them means, what the difference between the two is, and why they are often confused with one another.

Verification And Validation:

These steps are part of software testing. Verification and validation are the processes that check whether a software system meets its specifications and fulfills its intended purpose. Verification and validation is also known as V&V, and may also be referred to as software quality control. It is normally the responsibility of software testers as part of the software development life cycle.


Verification is the process of ensuring that we are building the product right, i.e., checking the requirements we have and verifying that we are developing the product accordingly.

There are several activities in the verification step, such as inspections, reviews and walkthroughs.


Validation is the process of checking that we are building the right product, i.e., validating whether the product we have developed is the right one.

The main activity involved in this stage is testing the software application.

Verification vs Validation

These are hugely confused and debated terms in the software testing world. You will encounter all kinds of usage and interpretations of these terms. I’ll try to explain the difference.

The underlying question

Verification: Are we building the product right?

Validation: Are we building the right product?

Difference in objective

Both ensure that the product does what it is supposed to do; however, they test it against different criteria. Verification checks your project against its requirements. Validation checks whether your project fulfills the user's needs.

It is worth noting that these are independent of one another in a project. Your product may pass validation and fail verification, or vice versa.

