Friday, June 5, 2015

Introduction to Multicore Software Design

Introduction

Multi-core programming is the branch of computer science that deals with processors where more than one CPU core is available on the same chip. These cores work together to deliver greater processing power.

Multi-core systems can be classified in several ways, based on the hardware cores that build the system and on the software running over those cores. In this post, we will describe two major taxonomies: one from the hardware point of view (Heterogeneous vs Homogeneous systems) and one from the point of view of the software running on the cores (Symmetric vs Asymmetric Multiprocessing systems).

Figure 1. Generic multicore processor design, illustrating the general concept of a multicore processor

Heterogeneous vs Homogeneous

The terms heterogeneous and homogeneous describe the types of cores found on the processor chip.
In a heterogeneous system, the cores on the chip differ from each other, while in a homogeneous system the cores are identical to each other.

Homogeneous systems

The following example represents a multi-core system where all the cores are homogeneous: the Freescale i.MX6 Quad processor (quad core, high performance, advanced 3D graphics, HD video, advanced multimedia, ARM® Cortex®-A9 cores).

Figure 2. Freescale i.MX6 Quad Processor
It is composed of four ARM Cortex-A9 cores. In this system, core0 boots first and then starts the other cores, and all of them have the same CPU context.

Heterogeneous systems

The following is an example of a heterogeneous multi-core chip from Texas Instruments: the TCI6630K2L, a multicore DSP+ARM KeyStone II System-on-Chip.

Figure 3. TCI6630K2L SoC, Multicore DSP+ARM KeyStone

It is composed of two ARM Cortex-A15 cores and four C66x DSP cores. In this system, an ARM core starts as boot core0; once the software on the ARM side is ready, it starts the DSP cores, which execute binaries written specifically for the DSP using the DSP instruction set.

Notice here that each group of cores (like the ARM cores or the DSP cores) can be considered a homogeneous system within the group.

Homogeneous Multicore hardware communication

At the hardware level, SoC designers usually provide several levels of communication channels between the cores. We can list here the techniques available on the ARM Cortex-A9 as a generic hint of what such communication techniques look like. Notice that the techniques described here have similar implementations on other hardware platforms, such as Intel. We will not discuss ARM-specific details, so that the information remains generally applicable.

Snoop Control Unit (SCU)

The goal of this unit is to maintain coherency between the caches of the ARM cores. If each core dealt only with its internal L1 cache, it would be isolated from the shared L2 cache, and moreover from main memory, which is the main interface between the cores and the external world. The SCU enables the cores to keep their caches synchronized with each other, with the L2 cache, and with memory, according to a well-defined hardware protocol called the snoop protocol. With the caches kept synchronized, threads running on these cores can exchange information through shared memory without the software designer having to worry about cache synchronization, since any shared variable changed in the L1 cache of one core will be reflected to the other cores.
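To make this concrete, here is a minimal sketch of two threads exchanging data through shared memory on an SMP Linux target (the example, including its names and counts, is my own illustration, not vendor code). The hardware coherency keeps the L1 caches in sync; the atomic type only protects against lost updates between the threads:

/* shared_counter.c: build with  gcc -pthread shared_counter.c
   Two threads update one variable in shared memory. Cache coherency
   hardware (e.g. the SCU on Cortex-A9) makes each update visible to
   the other core; no explicit cache maintenance is needed. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int shared_counter;       /* lives in coherent shared memory */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&shared_counter, 1);  /* atomic: no lost updates */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", atomic_load(&shared_counter)); /* 200000 */
    return 0;
}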

Generic Interrupt Controller (GIC)

The goal of the GIC is to handle interrupts coming from the system to the multi-core ARM Cortex-A9. This unit can serve one or more ARM cores. The idea is that the cores configure the GIC with the interrupts they want to listen to. When an interrupt occurs, the interested cores receive a generic interrupt signal. This signal alone is ambiguous to the core, carrying no information about which peripheral or internal device caused the interrupt. Each interested core then reads the status from the GIC to learn the actual parameters of the interrupt that occurred, such as the interrupt source and interrupt parameters.

The GIC manages three types of interrupts (a minimal SGI example follows the list):

Software Generated Interrupt (SGI) This interrupt is generated explicitly by software by writing to a dedicated distributor register, the Software Generated Interrupt Register. It is most commonly used for inter-core communication. SGIs can be targeted at all, or at a selected group of cores in the system. Interrupt numbers 0-15 are reserved for this. The software manages the exact interrupt number used for communication.
Private Peripheral Interrupt (PPI) This interrupt is generated by a peripheral that is private to an individual core (such as the core's private timer or watchdog). Interrupt numbers 16-31 are reserved for this. PPIs identify interrupt sources private to the core, and are independent of the same source on another core, for example, the per-core timer.
Shared Peripheral Interrupt (SPI) This interrupt is generated by a peripheral that the Interrupt Controller can route to more than one core. Interrupt numbers 32-1020 are used for this. SPIs are used to signal interrupts from various peripherals accessible across the whole system.
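To illustrate the SGI mechanism, here is a minimal bare-metal sketch for a GICv2 distributor. The register layout (GICD_SGIR at distributor offset 0xF00, with the target list in bits [23:16] and the SGI ID in bits [3:0]) follows the GICv2 architecture; the base address below is a hypothetical placeholder that must come from your SoC manual:

/* sgi.c: send a Software Generated Interrupt to selected cores (GICv2). */
#include <stdint.h>

#define GICD_BASE 0x1E001000u                 /* hypothetical: see SoC manual */
#define GICD_SGIR (*(volatile uint32_t *)(GICD_BASE + 0xF00u))

/* Send SGI number sgi_id (0-15) to the cores selected by target_mask. */
static void send_sgi(uint32_t sgi_id, uint32_t target_mask)
{
    /* Bits [25:24] = 0b00: use the CPU target list in bits [23:16];
       bits [3:0] select the SGI interrupt ID. */
    GICD_SGIR = ((target_mask & 0xFFu) << 16) | (sgi_id & 0xFu);
}

/* Example: send_sgi(0, 1u << 1); tells core 1 that new data is ready. */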

Heterogeneous Multicore hardware communication

On heterogeneous systems, communication between cores is done through SoC-specific interfaces. This can be shared memory between the cores, or an interrupt control unit dedicated to inter-core communication. What is important to know here is that heterogeneous communication is much more complex and SoC-specific than communication between homogeneous cores.
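One SoC-agnostic pattern is still worth sketching: a shared-memory mailbox that one core fills and the other polls. Everything below is an assumption for illustration only; the shared region address is hypothetical, each side would be built with its own toolchain, and real designs usually pair the mailbox with an inter-core interrupt instead of polling:

/* mailbox.c: a minimal one-word mailbox in a memory region both cores see. */
#include <stdint.h>

struct mailbox {
    volatile uint32_t ready;    /* 0 = empty, 1 = message present */
    volatile uint32_t payload;  /* the message word itself */
};

#define SHARED_MAILBOX ((struct mailbox *)0x80000000u) /* hypothetical address */

/* Producer side (say, the ARM core). */
static void mailbox_send(uint32_t value)
{
    SHARED_MAILBOX->payload = value;
    __sync_synchronize();             /* publish payload before the flag */
    SHARED_MAILBOX->ready = 1;
}

/* Consumer side (say, the DSP core) busy-waits for a message. */
static uint32_t mailbox_receive(void)
{
    while (SHARED_MAILBOX->ready == 0)
        ;                             /* poll until a message arrives */
    __sync_synchronize();
    uint32_t value = SHARED_MAILBOX->payload;
    SHARED_MAILBOX->ready = 0;        /* mark the mailbox empty again */
    return value;
}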

SMP vs AMP

Overview

SMP stands for Symmetric Multiprocessing.
AMP stands for Asymmetric Multiprocessing.

SMP and AMP are terms related to how code threads execute on the cores. When we talk about SMP and AMP, we are looking at the system from the point of view of the processing done on the cores. You can consider SMP vs AMP to be more about software architecture than hardware architecture: it describes how the running software utilizes the underlying hardware. In the SMP case, the software is aware that the underlying cores are symmetric, so it uses the same instruction set for all the cores. The running threads are all identical to each other, both in the instruction set used to write them and in the CPU context to be saved per thread.

On the other hand, in the AMP case, each core (or group of cores) runs an isolated group of threads. Each thread group may be written in the instruction set specific to its underlying core(s), and each group may have different context data to be saved for the underlying core(s).
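On an SMP operating system any core can run any thread, but the software can still steer the scheduler. As a minimal Linux-specific sketch (pthread_attr_setaffinity_np is a GNU extension, so this is one OS's way of doing it), the code below pins a worker thread to core 0:

/* affinity.c: build with  gcc -pthread affinity.c  on Linux. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("running on core %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);                 /* allow core 0 only */

    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);

    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}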

Mixed SMP and AMP

In many cases the underlying hardware may contain several groups of identical cores, as in the case of the TCI6630K2L SoC. In this case, the system can be AMP and SMP at the same time: SMP at the level of each group of identical cores, and AMP at the level of the SoC.

Impact on the software design

Selection of the build system

As the generated machine code is affected by the underlying architecture, the designer has to take into account the hardware and software architecture to be used for development. For example, in an SMP system the generated code will most of the time be linked into one executable, while in an AMP system each core will have its own executable to run. The same goes for homogeneous and heterogeneous systems: in homogeneous systems the same compiler toolchain can be used for all the SMP and AMP software, while in heterogeneous systems a specific toolchain, or specific compilation parameters, must be used to generate machine code suitable for each specific underlying core type. The designer has to pay attention to the ability of the chosen toolchain to generate executables for the selected hardware/software architecture.
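As a small illustration of the toolchain split, shared source in a heterogeneous ARM+DSP project often selects core-specific code at compile time. The sketch below uses macros commonly predefined by GCC for ARM and by the TI C6000 compiler; treat the exact macro names and toolchain invocations as assumptions to verify against your compilers:

/* core_select.h: one shared header, compiled by two different toolchains
   in an AMP build, e.g. arm-linux-gnueabihf-gcc for the ARM cores and
   cl6x for the DSPs (invocations are illustrative). */
#if defined(__arm__) || defined(__aarch64__)
    #define CORE_NAME "ARM"          /* built by the ARM toolchain */
#elif defined(_TMS320C6X)
    #define CORE_NAME "C66x DSP"     /* built by the TI C6000 toolchain */
#else
    #error "Unknown target core: add a case for your toolchain"
#endif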

Selection of the operating system

Modern software designs use an operating system to manage the hardware and software resources of the system. Selecting a suitable operating system for the intended software architecture can be a tedious and difficult task if no requirements have been specified beforehand. The designer has to decide whether the operating system will run in SMP or AMP mode. If the system is heterogeneous, can the same operating system be used for the different core architectures, or is a specific operating system needed per core architecture? Modern hardware also provides a Memory Management Unit (MMU) for memory protection, memory address virtualization, and paging; the designer has to decide whether the operating system will use the MMU and how memory management will be done in AMP and SMP software architectures. Other hardware factors, like DMA and hardware cryptography units, also play a significant role. For cryptography, for example, hardware such as ARM TrustZone is simple to use in an SMP system, but difficult in an AMP system, especially if multiple cores need to use it. All these points, and more, should be addressed when specifying the requirements for the operating system, in order to select the most suitable one. The selection of the operating system is also affected by the selection of the build system.

To Virtualize or not to Virtualize? That is a big question

Virtualization means that the underlying hardware can be completely abstracted from the running software. In other words, virtualization software (a hypervisor, also known as a virtual machine) in the ideal case allows machine code written for one CPU architecture to run on a very different hardware architecture; take QEMU as an example (http://wiki.qemu.org/Main_Page). It can also let the running software assume it is running on a different number of cores than the underlying hardware actually provides. The software designer should therefore consider wisely whether or not to use virtualization. Some multi-core hardware architectures, like the ARM Cortex-A15, support hardware virtualization, where the hardware itself assists the hypervisor in its virtualization functions; this is called hardware-enabled virtualization, and its performance is expected to be close to optimal. However, in most embedded systems hardware-enabled virtualization is not available, and in that case the performance of the hypervisor might not be acceptable. In all cases, a hypervisor can, for example, allow a system written for SMP to run across all cores of a heterogeneous system. That is why this is a big question the designer has to ask themselves before making a decision: what will be sacrificed versus what will be gained, what the underlying hardware supports and what it does not, and what the hypervisor can deliver versus what is expected. For example, running an operating system like Linux on a hypervisor like QEMU is very common on an architecture like ARM. For an architecture like the Zilog Z80 (an 8-bit microcontroller), however, QEMU supports the architecture through an external patch and can emulate Z80 code on your PC, but that does not mean Linux can run on a Z80. So a hypervisor is not necessarily the best option for running software on every architecture.


And as usual, thanks for reading :)

Monday, February 2, 2015

Are you aware of your electronic identity?

Social networks like Facebook, Google+, LinkedIn, and Twitter are more active now than ever before. With a lot of members joining every day, each one of us can now have a very wide network of friends and people we know, because it has become very easy to connect to anyone just by searching for their name or circle of connections. In addition, every minute you comment on, like, subscribe to, or share a piece of data, and this includes videos, comics, or even simple text. On some websites like Facebook you can even share your daily feelings, and pictures of yourself in different situations, with information about your feelings and emotions attached to each shared item. To describe it precisely: a website like Facebook can record, if you are helpful enough with information, a diary of your feelings and emotions, what you like and what you don't like. It can build a description of your mood swings with detailed data about your daily behavior, and it can even perform facial and limb recognition to interpret your inner feelings from a picture of yourself. Can you imagine it? With enough details supplied to the social network, it can know on which days you are going to be in a good mood, and which days are your cursed days. In brief, it can predict your response to various situations, in addition to your future actions.

Dating your dream girl

Now imagine yourself dreaming of dating some girl. She is very difficult for you, or used to be, maybe. But I will tell you what: just succeeding in adding her to your social network circle can help you a lot in finding the backdoor through her hard rock and making your way to her heart.

With good analysis of her daily posts and activities on the social network, you can know which color she loves, which days are her lucky ones, and when she has enough free time to talk to you. You can also learn about her hobbies, her favorite meal, and which kinds of perfume and cafes she would be interested in. You know, a good analysis can lead you to the best gift, one she would accept without any resistance. Yes, social networks contain bulks of raw data which, if consolidated and processed well, could be critical to our lives.

The critical social network AI Agent

Up to this point, we cannot claim that a social network is misusing our profiles and stored data. No one can prove it, yet no one can disprove it. But what if there were an Artificial Intelligence software agent, an AI Agent, reading the profiles of all users, with wide, unlimited access to each and every piece of data? I am not saying this AI Agent exists; I am just assuming. If this AI Agent had a good enough artificial intelligence model, natural language processing capabilities, computer vision algorithms, and voice recognition, it could process these bulks of data and interpret and predict far more critical information about us.

A simple feature like Google location tracking can build a precise timetable of your daily activities and, given enough access to your daily location, predict where you will be an hour from now. Another feature, face recognition in Facebook, can detect your face precisely within a group of people when you are in a certain location at a certain moment in time. Now think about the scenario where you mix those two pieces of information together: if this AI Agent were programmed to search for you on a certain date and time and capture your image in a certain place, it could guide Android-based mobiles to start their cameras silently at the locations identified by Google and start capturing images of the place; using Facebook's face recognition capabilities, it could then detect you within a crowd of people. You are detected and identified, my friend.

A brief about human awareness

Human awareness is one of the most complex physiological and psychological properties of humans: a person is aware of himself and of the world around him, and knows what is real and what is fake. A person's awareness of himself is called self-awareness and develops at an early stage after birth. A person's awareness of the surroundings, and of reality versus illusion, develops over a long period of brain development after birth, based on the person's experiences, and is called simply awareness.

Human awareness is concentrated in two categories, or parts, of the central nervous system: the conscious and the subconscious. The conscious part of the brain is responsible for handling the voluntary actions of the human, while the subconscious manages the learned patterns that no longer need concentration to perform. A major example is the habit of driving a car. At the beginning, when the driver is learning, he performs the actions of driving using the conscious part of his brain. After a while, when the driver gets used to driving, with a real desire and passion to learn it, the driving patterns become a habit; scientists have found that humans who drive out of habit do so through the subconscious part, and the conscious part is no longer involved except to approve or disapprove the actions taken by the subconscious.

Impact of using social network for long time

Similarly, the actions we take on a social network, like updating our status or even checking the network's updates, start as conscious actions the human is totally aware of. After some time of continuous usage, the desire we find in the social network makes surfing it a habit handled by the subconscious, which may include status updates, location check-ins, etc.

Given that such an AI Agent is doable and could be smart enough to interpret this much information, it could be a great threat to our privacy and safety.

Self-aware AI Agent .. the rise of Skynet

Self-awareness in humans arises through complex subsystems of the central nervous system. In the computer world, self-awareness is still under research, first to define what self-awareness even is, and then to decide how to give a computer machine self-awareness. A self-aware machine of the future would know that it is a computer machine, and would also be aware of its capabilities, of the threats to its existence, and maybe of what knowledge it needs to acquire to become stronger and much smarter.

Skynet was an idea introduced in the Terminator series: a computer became self-aware and was able to learn from the connected networks. At some moment, Skynet decided that humans were a threat to its existence and that it would be better to get rid of them. This idea is not explored only in Terminator; you can find it in many other films like I, Robot, Eagle Eye, the Matrix series, The Thirteenth Floor, and many others. Although these films are classified as sci-fi films about artificial intelligence, in my opinion we are not very far from reaching it, especially an idea like the one described in Eagle Eye.

Eagle-eye from scratch

In Eagle Eye, many sci-fi technologies were brought together to show that a computer system can track humans and push them into terror crimes and assassination, controlling their fate. A supercomputer, ARIIA, with unlimited privileges, is able to find a suitable human agent for a mission to assassinate certain targets, monitoring him through many everyday devices like surveillance cameras, cell phones, etc. In some cases it even used military drones to chase him. One of the truly amazing tricks was how ARIIA knew what Jerry Shaw (the hero of the film, played by Shia LaBeouf) was saying while he was speaking near a cup of coffee: the vibration of his voice changed the reflection on the liquid surface ever so slightly, and ARIIA picked up the sound from the room with this method using a surveillance camera. In 2014, scientists did something similar using an ordinary digital camera.

With the power given by the data collected every day by social networks, a self-aware computer could even rule us and manage everything about our lives. This is not sci-fi but a real threat, and in my opinion it is actually very close. Social networks can recommend advertisements based on your profile; what if they were able to know which dates and techniques are most suitable to persuade you to buy a certain item? This is achievable based on the stored profile, and in this case the social network could easily guide you to buy, or not to buy, a certain item, for example by showing you messages tailored to an analysis of your profile. By the way, there is a branch of nervous-system science devoted to this: subconscious reprogramming through visual effects.

I hope you may find my article to be useful to you, and I am opened to any comments, discussions, feedback, questions, and even ideas to discuss together about computer science and engineering.

Thanks for reading ;)

Monday, January 19, 2015

Why is it called building the code?

Sometimes I wonder why we call the process of compiling and linking the code a build process. The final output is usually either a running executable or a library. We don't even call the output a building or a structure; in fact, structures are found inside the source code itself. So, if we call it a build process, many questions arise. We can list some of them here:
1- If this is a build process for a product, where are the blueprints for that building?
2- Who is the actual builder, the human or the computer?
3- How do we detect build errors at the level of the blueprints, to avoid wasting time?

These are philosophical questions, and I really don't know the origin of who first called it a build process; if anyone knows the origin well, it would be worth sharing it across the WWW. However, I am going to try to put forward answers to these questions, which could be useful for our day-to-day activities.

Where are the blueprints for the building?

If the process of building the source code results in a built product, which is the executable, then there should be blueprints for this process. This is the flesh of the discussion: the blueprints are usually the input the builder needs to start doing their job. In this case, is it the source code? The source code plus other artifacts? In my point of view, it is the source code only. After all, the builder of the source code has no idea about any other artifacts, and will never require them.

This makes us pay more attention to the source code, and ask: if we are working in a company, an earthquake occurs, the building starts to fall down, and you can escape the company building with only one thing in hand, which artifact would you keep with you? Is it the design document? The specifications? The architecture? Or the source code? In my point of view, it would be the source code, not because it is the deliverable to the customer, but because the source code contains the consolidation of all the information and analysis of the system; it is the actual, full design of the product, ready to be baked and built to get our product out.

Understanding that source code is the blueprint of the product helps us understand why successful companies always try to devote significant investment to optimizing, standardizing, and maintaining source code. In the world of embedded systems, where the executable is part of the product, it is efficient to have the source ready as soon as possible, with minimum cost and maximum quality. The executable on its own costs nothing to download to the PCB; the actual cost lies in the effort of creating the executable itself. Having standard software components, standard libraries, and high-quality software building bricks is the factor that makes the difference in the world of embedded software development, and I think it is the same in other areas.


Who is the actual builder?

Is the actual builder the person who wrote the source code, or the build system that creates the executable? In my point of view, it is the build system, and accepting that explains why people need a very efficient, high-quality compiler toolchain and build system. A high-quality makefile system and compiler toolchain can lead to better optimized software, more RAM- and ROM-efficient, and even to a faster, more secure executable.

Compiler options play a significant role in RAM, ROM, and execution-speed optimization, as well as in security on modern CPU architectures. For example, a high-end compiler may offer options to check array boundaries, uninitialized pointers, and maybe even dangling pointers. These options can affect the executable's ROM size, but they protect the software against malicious behavior from other applications on the system.
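As a minimal sketch of the kind of defect such options catch (assuming GCC or Clang; the exact flags vary by compiler), the off-by-one below may be reported at compile time with -Wall -Warray-bounds -O2, and is trapped at run time when built with -fsanitize=address:

/* buggy.c: a classic off-by-one array access.
   gcc -Wall -Warray-bounds -O2 buggy.c          (may warn at compile time)
   gcc -fsanitize=address -g buggy.c && ./a.out  (traps at run time) */
#include <stdio.h>

int main(void)
{
    int values[4] = {1, 2, 3, 4};

    for (int i = 0; i <= 4; i++)    /* BUG: should be i < 4 */
        printf("%d\n", values[i]);  /* i == 4 reads past the array */

    return 0;
}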

In addition, compilers nowadays can also enforce standards on the source code, like ANSI C compliance, MISRA compliance, and many others. They can even go the extra mile and provide code quality checks like function cyclomatic complexity, line counts, and consumption of critical resources like RAM and ROM. All of these capabilities mean that the actual builder here is the build system, while the designer of the product is the code writer; selecting a good builder will help you get a high-quality product for sure.


How to detect the build errors on level of blueprints?

This may seem a strange question: why would I need to detect build errors at the level of the blueprints? In other words, why would I need to detect compilation and linking errors at the level of the software source code? Elaborating on the meaning of these errors will make the question logical.

Software development involves one or more software engineers writing code, and most of the time using other libraries. In a huge software design, an incorrect cast, misspelling one variable as another, or even misplaced parentheses are very common human mistakes; sometimes they are harmless, yet they can also be the source of very stinky bugs. On the other hand, bugs like referencing null pointers, dangling pointers, and out-of-bounds memory access are very normal in large software dealing with many chunks of dynamic memory, files, shared memory, etc. (the sketch below shows one such bug).
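As a minimal sketch of a dangling pointer (the function is hypothetical, the C is standard), this is exactly the kind of defect code-checking tools flag: GCC warns about it with -Wall, and analyzers such as cppcheck or clang --analyze report it as well:

/* dangling.c: returning the address of a local variable. */
#include <stdio.h>

static int *broken_counter(void)
{
    int count = 42;
    return &count;      /* BUG: 'count' dies when the function returns */
}

int main(void)
{
    int *p = broken_counter();
    printf("%d\n", *p); /* undefined behavior: reads a dead stack slot */
    return 0;
}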

Using suitable code-checking tools can save much precious time and effort for the developers, and for the project manager who pays for bug analysis and fixing. Sometimes it is a compromise between using or not using code-checking tools, and between free tools and commercial ones. As far as I know, there are many free tools that can be used without restriction to do the necessary code checking. The real cost is in utilizing the tool, with the necessary automation scripts, to get it working and to maintain it across code versions. If somebody does the math and finds it cost-effective to use these tools, then it is better to use them; otherwise, if it is a waste of time and effort, never use them, as they will be a burden on developers forced to use tools they don't even understand the need for. In my opinion, when customers ask whether code-checking tools are used, it is better to justify why you don't need them and keep your developers motivated, rather than forcing developers to use them with no added value and leaving them bored and demotivated. Using these tools or not is subject to the math and the need as well.

Writing and maintaining optimized, standardized code is what makes systems like Linux survive today, and a high-quality, easy-to-use build toolchain like GNU's now leads the market with many ideas and de facto concepts. Lastly, code checkers can be peacemakers or neighbors from hell; it all depends on which kind of tools you use, and on whether your software's scale really needs them.

I hope you may find my article to be useful to you, and I am opened to any comments, discussions, feedback, questions, and even ideas to discuss together about computer science and engineering.

Thanks for reading ;)

Sunday, January 18, 2015

Software architecture .. Once upon a time on Computerico

The ultimate debate

One of the greatest debates I have ever been involved in is the definition of software architecture; even when we are doing very simple software development, it is sometimes necessary to define a software architecture for the product.

Sometimes, however, a person cannot explain what the architecture is or what the activity of the software architect consists of. In some companies, the software architecture is a niche document stored on highly secure storage media, usually used to show the client or customer that the company has a software architecture for its product. When the engineers open this document, they discover it is of no use, with no relation at all to the current software version; it was only created to describe the software from a bird's-eye view high in the sky.

However, a good understanding of software architecture is useful not only for day-to-day development but also for innovation and the production of new ideas. In my humble opinion, it is an essential part of the software development cycle, starting from awareness of customer needs and extending to the marketing of a new innovative idea. This includes capturing customer requirements, software design, and validation. Even the verification of the software code against some standard requires interaction with the architecture for the verification to be done correctly.

In Computerico

To understand my point of view, I encourage all of you to visit with me, for a while, the planet Computerico. It is a very advanced planet where the machines have taken the lead and control everything on the planet, even the humans. In Computerico, the master is the intelligent computer, and it is the one responsible for defining the tasks; humans, on the other hand, are no more than intelligent slaves who do everything their masters want, but with a little bit of intelligence.

In this parallel-universe model, the computers think in terms of software, meaning that the product of their thinking is software. The computers need to transform their thinking into concrete ideas, and hence into instructions to be carried out by humans. As we all know, humans can only understand instructions that are documented and explained readably. So the master computers have to transform these pieces of software into understandable instructions, explained in human-readable language; as we know, not all humans are software engineers!

Here, a human software engineer can play the intermediate link between computer and human by explaining the software in a way that can be easily understood by any non-experienced human. This translator must not only understand the software, but also be able to reverse it into natural language. Say a master computer needs a new table to be used as a throne for the computer king. The thinking of the master computer takes the form of software to make a table! If the engineer reverses that software into natural language, we find that it holds the steps to define the table's dimensions, colors, and the materials necessary to make it. Other activities include the blueprint for creating the table, along with every step necessary to bring the table to life. Without that software engineer, normal humans would take a very long time to grasp what the master computer wants to say! They might even be killed for misunderstanding or for carrying out an instruction incorrectly. So the software engineer's role is very important, as a translator and mediator between the computer and the other humans.

Now consider a more complex need: the product to be created is not just a single table, but a product to be used by every computer. Then there will be a focus on standardization and optimization, to reach the maximum production rate with minimum resources, on top of the usual production steps. The software engineer has to understand all of these activities and translate them for the other humans so that they can accomplish their tasks well; otherwise the master computer will be disappointed and may kill all of them, including the software engineer!

And the question is: what is the role of that facilitating, mediating software engineer? In my simple opinion, this man or woman is the real architect of the table. They know everything about every step, facilitate and support every other member, and give guidance based on their full understanding of the master computer's needs and expectations. This translator may or may not play a role in the creation of the table itself, but they will surely have a major impact on its development.

Back on earth

Back on earth, in my humble opinion, what we have just discussed about that software engineer is in fact the role of the software architect in a software development project. The architect's role is mainly to understand well the needs of the customer, which to the other engineers are talismans and hard-to-decipher symbols, and to translate them into a well-defined, understandable language. Seen this way, the software architecture is not just a simple document; it is a complete process activity running in parallel with the software life cycle, explaining and facilitating to the other engineers how to do their jobs correctly according to the customer's requirements and needs.

In Agile/SCRUM methods

The best way to describe a software architect, still according to my point of view, is the Agile/SCRUM Master role. This person should be in direct contact with the product owner and have full understanding and awareness of what the product is mainly going to do and how, in detail, it could be made; or at least the ability to use the available clues to head towards several possible optimal solutions.

In the Agile/SCRUM methods, the SCRUM Master is more a facilitator and team leader than a manager: he or she plays in the match with the team, rather than sitting outside the field like a manager, and leads them to success. To be able to do so, it is not enough to document the match plan on paper and share it across the team members. The plan is an activity that should be carried out by all members together, and it needs someone within the team to lead the members through it, by being the first to apply it and guiding them through each step.

Architecture then plays a role in every phase of software development, from requirements specification to the final validation of the final delivery. Because Agile methods depend more on collaboration and shared team understanding, the architecture document should not be a very heavy one; it is enough to develop it on an ongoing basis during the software cycle, storing at each step only the information that needs to be documented. Since the evolution of the project, the sprints and releases in SCRUM methods, stops when the project is terminated and delivered to the product owner, the architecture activity stops at that point, and so does the document's evolution.

In traditional process

People in traditional processes, however, like strict V-cycle and Waterfall, with their traditional techniques and heavy documentation activities, may find this point of view strange or imprecise. I can understand their point of view and accept it, but I would like to share my experience, which explains why I see software architecture this way. In fact, in my work experience as a software architect on several projects, the software architect never finishes his or her work at a certain point in the project. They interact all the time, because they are the people with the deepest understanding of the requirements and the top-level vision. The architect is the one who is always responsible for reviewing code modifications, solving complex bugs that require major changes to the software, accepting or rejecting software modifications according to the overall behavior of the project, and defining the software testing techniques and methods needed to cover all the use cases and requirements in a correct and right way.

In real-time embedded software, for example, a small change in the main clock value may seem very minor; an architect, however, may see it as a major change due to its huge impact on the timing of the system. It is therefore an architecture task to accept or reject such a software modification, and to evaluate the updates to be made to other parts of the software as a result of the clock change.

If we look at software architecture from this point of view, as in the example above, we can understand why software architecture is an everlasting activity that stops only at the end of the project.

In conclusion

In my point of view, software architecture is not a niche work product; it should not even be seen as a mere software project artifact. Software architecture is an activity tightly coupled to the life cycle of the project, and it should be performed at every necessary step. The software architect is not a manager: he or she is a software project leader, as well as a very strong technical reference point in the project, whose role never ends until the project is completely closed.

I hope you may find my article to be useful to you, and I am opened to any comments, discussions, feedback, questions, and even ideas to discuss together about computer science and engineering.

Thanks for reading ;)

Paying back my debt to the computer science community

After about 11 years of working with computers, I have a deep feeling of indebtedness to the computer science community. With the support of websites like StackOverflow, Bytes IT Community, and many others, I can say there has never been a problem without a solution somewhere on the internet. The Google search engine was, and still is, of great aid in facilitating the search for solutions, and the concept of the World Wide Web itself has helped humans a lot in finding them.

No one can deny that, and I think all of you will agree with what I am claiming; so I find it important to pay the computer science community back with all the knowledge and experience I am allowed to disclose, to help new computer engineers with the little pieces of information that could be useful.

I hope you may find my articles to be useful to you, and I am opened to any comments, discussions, feedback, questions, and even ideas to discuss together about computer science and engineering.

Thanks for reading ;)