Project author: claudiomarpda

Project description:
Implementation of a microprogrammed control unit for didactic purpose

Language: C
Project address: git://github.com/claudiomarpda/control_unit.git
Created: 2017-09-06T17:12:18Z
Project community: https://github.com/claudiomarpda/control_unit



This is a didactic implementation of a microprogrammed control unit based on the Harvard architecture, with a cache memory system.

Motivation


Memory access is slow compared to processor performance, so it is important to understand how the hardware works in order to make good use of it when developing software. This is especially important for performance-critical applications.



Project


Language: C11 Standard

IDE: CLion 2017.2.2

Build: CMake

Part 1: Control Unit



Instruction Cycle



Fetch


Fetches one instruction from the instruction memory at the address given by the program counter.


Decode


Decodes the instruction to identify the kind of operation and its operands.


Execute


Executes the operation with the operand values resolved in the decode step.
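
A deliberately simplified sketch of this fetch-decode-execute loop in C follows. The instruction format (plain numeric operands) and all names are illustrative, not the project's actual format or API:

    #include <stdio.h>
    #include <string.h>

    /* Simplified sketch of the fetch-decode-execute cycle (illustrative only). */
    typedef struct { char op[16]; int a, b; } decoded_t;

    int main(void) {
        const char *instruction_memory[] = { "MOVE 1 2017", "ADD 1 1" };
        const int program_size = 2;
        int reg[10] = {0};
        int pc = 0;
        char ir[32];

        while (pc < program_size) {
            /* Fetch: copy the instruction addressed by PC into IR, advance PC. */
            strcpy(ir, instruction_memory[pc]);
            pc++;

            /* Decode: split the instruction into an opcode and its operands. */
            decoded_t d = { .a = 0, .b = 0 };
            sscanf(ir, "%15s %d %d", d.op, &d.a, &d.b);

            /* Execute: act on the decoded operation. */
            if (strcmp(d.op, "MOVE") == 0)
                reg[d.a] = d.b;           /* Rn <- constant */
            else if (strcmp(d.op, "ADD") == 0)
                reg[d.a] += reg[d.b];     /* Rn <- Rn + Rm  */
        }
        printf("R1 = %d\n", reg[1]);      /* prints 4034    */
        return 0;
    }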


Memory



The memory is divided into two separate blocks: the Instructions Memory and the Data Memory.

Registers



REG: General Purpose Register
PC: Program Counter
IR: Instruction Register
MAR: Memory Access Register
MBR: Memory Buffer Register
RC: Comparison Register
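
One possible way to model this register set in C is sketched below; the field names, widths and the number of general purpose registers are guesses for illustration, not taken from the source:

    /* Sketch of the register set; names and sizes are illustrative. */
    #define GENERAL_PURPOSE_REGISTERS 10

    typedef struct {
        int reg[GENERAL_PURPOSE_REGISTERS]; /* REG: general purpose registers            */
        int pc;                             /* PC:  index of the next instruction        */
        char ir[64];                        /* IR:  instruction currently being executed */
        int mar;                            /* MAR: address sent to the data memory      */
        int mbr;                            /* MBR: value read from / written to memory  */
        int rc;                             /* RC:  result of the last COMPARE           */
    } registers_t;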

Operations



LOAD


Loads into the register the value stored at the given address.

The address can be given as a constant or as the value of a register.

Pattern: LOAD Rn CONSTANT

Example: LOAD R1 100

Pattern: LOAD Rn Rn

Example: LOAD R1 R2
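
A hedged sketch of what executing LOAD could look like, building on the registers_t sketch above; routing the access through MAR and MBR and the data_memory array are assumptions for illustration, not confirmed by the source:

    /* LOAD Rn X: Rn <- data_memory[X], where X is either a constant address
       or a register whose value is used as the address. Illustrative only.  */
    #define DATA_MEMORY_SIZE 1024
    int data_memory[DATA_MEMORY_SIZE];

    void execute_load(registers_t *r, int dest, int operand, int operand_is_register) {
        int address = operand_is_register ? r->reg[operand] : operand;
        r->mar = address;               /* MAR holds the address being accessed */
        r->mbr = data_memory[r->mar];   /* MBR receives the value read          */
        r->reg[dest] = r->mbr;          /* the value finally reaches Rn         */
    }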


MOVE



Moves the value of the right-hand operand, which can be a register or a constant, into the first register.

Pattern 1: MOVE Rn CONSTANT

Example 1: MOVE R1 2017

Pattern 2: MOVE Rn Rn

Example 2: MOVE R1 R2


STORE



Stores the value of the register at the given address. The address can be a constant or a register value.

Pattern: STORE Rn CONSTANT

Example: STORE R1 100

Pattern: STORE Rn Rn

Example: STORE R1 R2


ADD, MULTIPLY, SUBTRACT and DIVIDE



Performs the arithmetic operation on the two right-hand operands and assigns the result to the first register.

Pattern: OPERATION Rn Rn Rn

Example 1: ADD R1 R2 R3

Example 2: DIVIDE R1 R2 R3 is equivalent to R1 = (R2 / R3)
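
These four instructions could share one execution path, for example (a sketch on top of the registers_t model above; the opcode enum is invented for illustration):

    /* OPERATION Rd Ra Rb: Rd <- Ra op Rb. Illustrative only. */
    typedef enum { OP_ADD, OP_SUBTRACT, OP_MULTIPLY, OP_DIVIDE } arith_op_t;

    void execute_arithmetic(registers_t *r, arith_op_t op, int rd, int ra, int rb) {
        switch (op) {
            case OP_ADD:      r->reg[rd] = r->reg[ra] + r->reg[rb]; break;
            case OP_SUBTRACT: r->reg[rd] = r->reg[ra] - r->reg[rb]; break;
            case OP_MULTIPLY: r->reg[rd] = r->reg[ra] * r->reg[rb]; break;
            case OP_DIVIDE:   r->reg[rd] = r->reg[ra] / r->reg[rb]; break; /* integer division */
        }
    }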


JUMP


CONDITIONAL JUMP


Decodes the instruction to jump to a given index of the instruction memory.

The first constant is compared with the value in the comparison register; if they are equal, the jump condition is true.
Note that a COMPARE operation must be executed beforehand so that the comparison register holds an up-to-date value.

Pattern: JUMPc CONSTANT INDEX

Example: JUMPc 0 5

It means: if the value in the comparison register is 0, jump to the instruction at index 5.


UNCONDITIONAL JUMP


Decodes an unconditional jump operation.

Pattern: JUMPu CONSTANT

Example: JUMPu 10

It means: jump to the instruction at index 10.
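
Both jump forms reduce to rewriting the program counter. A sketch, again on top of the registers_t model above (function names are illustrative):

    /* JUMPu INDEX:       PC <- INDEX, unconditionally.
       JUMPc VALUE INDEX: PC <- INDEX only if RC == VALUE (RC must have been
       set by a previous COMPARE).                                            */
    void execute_jump_unconditional(registers_t *r, int index) {
        r->pc = index;
    }

    void execute_jump_conditional(registers_t *r, int value, int index) {
        if (r->rc == value) {
            r->pc = index;
        }
    }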


INCREMENT and DECREMENT



Increments or decrements the value of a register by 1.

Pattern 1: INCREMENT Rn

Example 1: INCREMENT R1

Pattern 2: DECREMENT Rn

Example 2: DECREMENT R1


COMPARE



Compares two operands and stores the result in the comparison register.

The comparison is performed by the arithmetic logic unit.

The first operand must be a register and the second can be a register or a constant.

Pattern: COMPARE Rn Rn

Example: COMPARE R1 R2

Pattern: COMPARE Rn CONSTANT

Example: COMPARE R1 0
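
The text does not spell out what the comparison register holds, but the example programs below test it against -1, 0 and 1 with JUMPc, which suggests a three-way result. A sketch under that assumption (a and b are the already resolved operand values):

    /* COMPARE a b: RC <- -1 if a < b, 0 if a == b, 1 if a > b.
       This three-way convention is inferred from the example programs
       (JUMPc -1 ... and JUMPc 1 ...); it is not stated explicitly.     */
    void execute_compare(registers_t *r, int a, int b) {
        r->rc = (a > b) - (a < b);
    }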


Program Examples



These are some example programs written in this project's assembly language. In the data memory listings below, each entry is an address followed by the value initially stored at that address.

Arithmetic operations


Instructions Memory


LOAD R1 100
MOVE R2 10
ADD R3 R1 R2
MOVE R2 2
MULTIPLY R1 R3 R2
DIVIDE R3 R1 R2
MOVE R2 20
SUBTRACT R1 R3 R2
STORE R1 101

Data Memory


100 10
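
Stepping through this program by hand, it computes ((10 + 10) * 2 / 2) - 20 = 0 and stores the result at address 101. A rough C equivalent of that trace (illustrative, not generated by the project):

    #include <stdio.h>

    /* Rough C equivalent of the assembly program above (illustrative trace). */
    int main(void) {
        int data[200] = {0};
        data[100] = 10;                  /* Data Memory: address 100 -> 10    */

        int r1, r2, r3;
        r1 = data[100];                  /* LOAD R1 100        -> R1 = 10     */
        r2 = 10;                         /* MOVE R2 10         -> R2 = 10     */
        r3 = r1 + r2;                    /* ADD R3 R1 R2       -> R3 = 20     */
        r2 = 2;                          /* MOVE R2 2          -> R2 = 2      */
        r1 = r3 * r2;                    /* MULTIPLY R1 R3 R2  -> R1 = 40     */
        r3 = r1 / r2;                    /* DIVIDE R3 R1 R2    -> R3 = 20     */
        r2 = 20;                         /* MOVE R2 20         -> R2 = 20     */
        r1 = r3 - r2;                    /* SUBTRACT R1 R3 R2  -> R1 = 0      */
        data[101] = r1;                  /* STORE R1 101       -> data[101]=0 */

        printf("data[101] = %d\n", data[101]);
        return 0;
    }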

Fibonacci Series


Instructions Memory


LOAD R9 100
MOVE R8 103
MOVE R2 1
MOVE R3 1
STORE R2 101
STORE R3 102
MOVE R7 2
COMPARE R9 R7
JUMPc 1 10
JUMPu 99
MOVE R1 R3
ADD R3 R2 R3
MOVE R2 R1
STORE R3 R8
INCREMENT R8
DECREMENT R9
JUMPu 6

Data Memory


100 20

Bubble Sort



Instructions Memory


LOAD R9 100
MOVE R8 0
MOVE R1 0
COMPARE R1 R9
JUMPc -1 6
JUMPu 99
MOVE R2 R9
COMPARE R2 -1
JUMPc 1 10
JUMPu 23
INCREMENT R8
MOVE R4 R2
MOVE R3 R4
DECREMENT R3
LOAD R6 R4
LOAD R5 R3
COMPARE R6 R5
JUMPc -1 19
JUMPu 21
STORE R6 R3
STORE R5 R4
DECREMENT R2
JUMPu 7
INCREMENT R1
JUMPu 3

Data Memory


100 10
1 2
2 12
3 2
4 9
5 35
6 8
7 0
8 88
9 1
10 3

Part 2: Cache Memory System



Memory Access


Memory Access Decision Flow


Mapping Functions

Only one cache level is used. The user can choose one of the two mapping functions below and check the cache hit and cache miss performance.

Direct Mapping


Each block of main memory is mapped to exactly one line of the cache. Since main memory is larger than the cache, the mapping function folds every main memory address into the cache, so each block can only ever occupy its single assigned line.
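
A generic sketch of a direct-mapped lookup follows; the block size, number of lines, struct and function names are textbook placeholders, not the values used in this project:

    /* Direct mapping sketch:
         line = block_address mod CACHE_LINES
         tag  = block_address / CACHE_LINES
       The constants and struct are illustrative, not taken from the project. */
    #define BLOCK_SIZE  4
    #define CACHE_LINES 8

    typedef struct {
        int valid;
        int tag;
        int data[BLOCK_SIZE];
    } cache_line_t;

    int direct_mapped_lookup(cache_line_t cache[CACHE_LINES], int address, int *value) {
        int block = address / BLOCK_SIZE;
        int line  = block % CACHE_LINES;
        int tag   = block / CACHE_LINES;
        if (cache[line].valid && cache[line].tag == tag) {
            *value = cache[line].data[address % BLOCK_SIZE];
            return 1;   /* cache hit */
        }
        return 0;       /* cache miss: the block must be brought from main memory */
    }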



Associative Mapping


A block of main memory can be loaded into any line of the cache. The tag uniquely identifies a block, so all lines must be checked when fetching a block, until it is found or there are no lines left. The replacement algorithm is FIFO.
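
A sketch of the corresponding associative lookup with FIFO replacement, reusing the illustrative cache_line_t and constants from the direct mapping sketch above:

    /* Associative mapping sketch: a block may sit in any line, so every tag
       must be checked; on a miss, the oldest loaded line (FIFO) is replaced. */
    int associative_lookup(cache_line_t cache[CACHE_LINES], int address, int *value) {
        int block = address / BLOCK_SIZE;       /* the whole block number acts as the tag */
        for (int i = 0; i < CACHE_LINES; i++) {
            if (cache[i].valid && cache[i].tag == block) {
                *value = cache[i].data[address % BLOCK_SIZE];
                return 1;                       /* cache hit */
            }
        }
        return 0;                               /* cache miss */
    }

    int fifo_victim(int *next_to_replace) {
        int victim = *next_to_replace;                  /* oldest loaded line  */
        *next_to_replace = (victim + 1) % CACHE_LINES;  /* advance FIFO cursor */
        return victim;
    }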



Matrix Loading Program


A program that exercises memory access so that the cache system can be tested.



Instructions Memory


LOAD R9 100
MOVE R8 101
MOVE R1 0
COMPARE R1 R9
JUMPc -1 6
JUMPu 99
INCREMENT R1
MOVE R2 0
COMPARE R2 R9
JUMPc -1 11
JUMPu 3
INCREMENT R2
LOAD R3 R8
INCREMENT R8
JUMPu 8

Data Memory


100 5
101 1
102 2
103 3
104 4
105 5
106 6
107 7
108 8
109 9
110 10
111 11
112 12
113 13
114 14
115 15
116 16
117 17
118 18
119 19
120 20
121 21
122 22
123 23
124 24
125 25
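
Read as pseudocode, the program above is a pair of nested loops that touches every element of the 5x5 matrix stored at addresses 101 to 125. A rough C equivalent (illustrative; it assumes the three-way COMPARE convention discussed earlier):

    #include <stdio.h>

    /* Rough C equivalent of the matrix loading program (illustrative).
       It reads the 5x5 matrix stored at addresses 101..125 element by
       element, which is the access pattern that exercises the cache.  */
    int main(void) {
        int data[200] = {0};
        data[100] = 5;                       /* matrix dimension              */
        for (int a = 101; a <= 125; a++)     /* values 1..25, as listed above */
            data[a] = a - 100;

        int n = data[100];                   /* LOAD R9 100                   */
        int addr = 101;                      /* MOVE R8 101                   */
        int sum = 0;
        for (int i = 0; i < n; i++) {        /* outer loop over R1            */
            for (int j = 0; j < n; j++) {    /* inner loop over R2            */
                int r3 = data[addr];         /* LOAD R3 R8: one memory access */
                addr++;                      /* INCREMENT R8                  */
                sum += r3;                   /* just to use the value         */
            }
        }
        printf("touched %d elements, sum = %d\n", n * n, sum);
        return 0;
    }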

Cache Memory System Trending


These results were obtained by executing the above program several times, with different cache sizes, without clearing the cache between runs. As the cache size increases, cache hits tend to go up and cache misses tend to go down, but beyond a certain point both tend to stabilize even as the cache size keeps growing.



Direct Mapping


Associative Mapping
