mirror of https://github.com/myhdl/myhdl.git synced 2024-12-14 07:44:38 +08:00

added new doc based on sphinx

This commit is contained in:
jand 2008-03-20 20:34:04 +00:00
parent 0464469a03
commit 8678647c72
11 changed files with 4614 additions and 0 deletions

66
doc/Makefile Normal file

@@ -0,0 +1,66 @@
# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build.py
PAPER         =

ALLSPHINXOPTS = -d build/doctrees -D latex_paper_size=$(PAPER) \
                $(SPHINXOPTS) source

.PHONY: help clean html web htmlhelp latex changes linkcheck

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html      to make standalone HTML files"
	@echo "  web       to make files usable by Sphinx.web"
	@echo "  htmlhelp  to make HTML files and a HTML help project"
	@echo "  latex     to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  changes   to make an overview over all changed/added/deprecated items"
	@echo "  linkcheck to check all external links for integrity"

clean:
	-rm -rf build/*

html:
	mkdir -p build/html build/doctrees
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
	@echo
	@echo "Build finished. The HTML pages are in build/html."

web:
	mkdir -p build/web build/doctrees
	$(SPHINXBUILD) -b web $(ALLSPHINXOPTS) build/web
	@echo
	@echo "Build finished; now you can run"
	@echo "  python -m sphinx.web build/web"
	@echo "to start the server."

htmlhelp:
	mkdir -p build/htmlhelp build/doctrees
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in build/htmlhelp."

latex:
	mkdir -p build/latex build/doctrees
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
	@echo
	@echo "Build finished; the LaTeX files are in build/latex."
	@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
	      "run these through (pdf)latex."

changes:
	mkdir -p build/changes build/doctrees
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
	@echo
	@echo "The overview file is in build/changes."

linkcheck:
	mkdir -p build/linkcheck build/doctrees
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in build/linkcheck/output.txt."

132
doc/source/conf.py Normal file

@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
#
# MyHDL documentation build configuration file, created by
# sphinx-quickstart.py on Thu Mar 20 11:33:23 2008.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# All configuration values have a default value; values that are commented out
# serve to show the default value.
import sys
# If your extensions are in another directory, add it here.
#sys.path.append('some/directory')
# General configuration
# ---------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.addons.*') or your custom ones.
#extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General substitutions.
project = 'MyHDL'
copyright = '2008, Jan Decaluwe'
# The default replacements for |version| and |release|, also used in various
# other places throughout the built documents.
#
# The short X.Y version.
version = '0.6dev'
# The full version, including alpha/beta/rc tags.
release = '0.6dev'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# Options for HTML output
# -----------------------
# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
html_style = 'default.css'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['.static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Content template for the index page.
#html_index = ''
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_use_modindex = True
# If true, the reST sources are included in the HTML build as _sources/<name>.
#html_copy_source = True
# Output file base name for HTML help builder.
htmlhelp_basename = 'MyHDLdoc'
# Options for LaTeX output
# ------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
#latex_documents = []
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_use_modindex = True

21
doc/source/index.rst Normal file

@@ -0,0 +1,21 @@
.. MyHDL documentation master file, created by sphinx-quickstart.py on Thu Mar 20 11:33:23 2008.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to MyHDL's documentation!
=================================

Contents:

.. toctree::
   :maxdepth: 2

   manual/MyHDL

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@@ -0,0 +1,84 @@
********************
The MyHDL manual
********************
.. % \renewcommand{\ttdefault}{cmtt}
.. % \renewcommand{\sfdefault}{cmss}
.. % \newcommand{\myhdl}{\protect \mbox{MyHDL}}
XXX: input{boilerplate} :XXX
XXX: input{copyright} :XXX
.. topic:: Abstract

   The goal of the MyHDL project is to empower hardware designers with the elegance
   and simplicity of the Python language.

   MyHDL is a free, open-source (LGPL) package for using Python as a hardware
   description and verification language. Python is a very high level language, and
   hardware designers can use its full power to model and simulate their designs.
   Moreover, MyHDL can convert a design to Verilog. In combination with an external
   synthesis tool, it provides a complete path from Python to a silicon
   implementation.
*Modeling*
Python's power and clarity make MyHDL an ideal solution for high level modeling.
Python is famous for enabling elegant solutions to complex modeling problems.
Moreover, Python is outstanding for rapid application development and
experimentation.
The key idea behind MyHDL is the use of Python generators to model hardware
concurrency. Generators are best described as resumable functions. In MyHDL,
generators are used in a specific way so that they become similar to always
blocks in Verilog or processes in VHDL.
A hardware module is modeled as a function that returns any number of
generators. This approach makes it straightforward to support features such as
arbitrary hierarchy, named port association, arrays of instances, and
conditional instantiation.
Furthermore, MyHDL provides classes that implement traditional hardware
description concepts. It provides a signal class to support communication
between generators, a class to support bit oriented operations, and a class for
enumeration types.
*Simulation and Verification*
The built-in simulator runs on top of the Python interpreter. It supports
waveform viewing by tracing signal changes in a VCD file.
With MyHDL, the Python unit test framework can be used on hardware designs.
Although unit testing is a popular modern software verification technique, it is
not yet common in the hardware design world, making it one more area in which
MyHDL innovates.
MyHDL can also be used as hardware verification language for VHDL and Verilog
designs, by co-simulation with traditional HDL simulators.
*Conversion to Verilog*
The converter to Verilog works on an instantiated design that has been fully
elaborated. Consequently, the original design structure can be arbitrarily
complex.
The converter automates certain tasks that are tedious or hard in Verilog
directly. Notable features are the possibility to choose between various FSM
state encodings based on a single attribute, the mapping of certain high-level
objects to RAM and ROM descriptions, and the automated handling of signed
arithmetic issues.
Contents:

.. toctree::
   :maxdepth: 2

   background
   intro
   modeling
   unittest
   cosimulation
   conversion
   reference


@@ -0,0 +1,171 @@
.. _background:
**********************
Background information
**********************
.. _prerequisites:
Prerequisites
=============
You need a basic understanding of Python to use MyHDL. If you don't know Python,
don't worry: it is one of the easiest programming languages to learn [#]_.
Learning Python is one of the best time investments that engineering
professionals can make [#]_.
For starters, http://www.python.org/doc/current/tut/tut.html is probably the
best choice for an on-line tutorial. For alternatives, see
http://www.python.org/doc/Newbies.html.
A working knowledge of a hardware description language such as Verilog or VHDL
is helpful.
Code examples in this manual are sometimes shortened for clarity. Complete
executable examples can be found in the distribution directory at
:file:`example/manual/`.
.. _tutorial:
A small tutorial on generators
==============================
.. index:: single: generators; tutorial on
Generators are a relatively recent Python feature. They were introduced in
Python 2.2. Because generators are the key concept in MyHDL, a small tutorial is
included here.

Consider the following nonsensical function::

   def function():
       for i in range(5):
           return i
You can see why it doesn't make a lot of sense. As soon as the first loop
iteration is entered, the function returns::

   >>> function()
   0
Returning is fatal for the function call. Further loop iterations never get a
chance, and nothing is left over from the function call when it returns.
To change the function into a generator function, we replace :keyword:`return`
with :keyword:`yield`::

   def generator():
       for i in range(5):
           yield i

Now we get::

   >>> generator()
   <generator object at 0x815d5a8>
When a generator function is called, it returns a generator object. A generator
object supports the iterator protocol, which is an expensive way of saying that
you can let it generate subsequent values by calling its :func:`next` method::
   >>> g = generator()
   >>> g.next()
   0
   >>> g.next()
   1
   >>> g.next()
   2
   >>> g.next()
   3
   >>> g.next()
   4
   >>> g.next()
   Traceback (most recent call last):
     File "<stdin>", line 1, in ?
   StopIteration
Now we can generate the subsequent values from the for loop on demand, until
they are exhausted. What happens is that the :keyword:`yield` statement is like
a :keyword:`return`, except that it is non-fatal: the generator remembers its
state and the point in the code when it yielded. A higher order agent can decide
when to get the next value by calling the generator's :func:`next` method. We
say that generators are :dfn:`resumable functions`.
.. index::
single: VHDL; process
single: Verilog; always block
If you are familiar with hardware description languages, this may ring a bell.
In hardware simulations, there is also a higher order agent, the Simulator, that
interacts with such resumable functions; they are called :dfn:`processes` in
VHDL and :dfn:`always blocks` in Verilog. Similarly, Python generators provide
an elegant and efficient method to model concurrency, without having to resort
to some form of threading.
.. index:: single: sensitivity list
The use of generators to model concurrency is the first key concept in MyHDL.
The second key concept is a related one: in MyHDL, the yielded values are used
to specify the conditions on which the generator should wait before resuming. In
other words, :keyword:`yield` statements work as general sensitivity lists.
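The idea can be illustrated outside MyHDL with a toy scheduler (a hypothetical sketch of ours in present-day Python syntax, not MyHDL code): each generator yields the signal it wants to wait on, and the scheduler resumes it whenever that signal is updated.

```python
# Toy illustration of yielded values acting as a sensitivity list (not MyHDL).
class Sig(object):
    def __init__(self, val):
        self.val = val

def incrementer(a, b):
    # wait on 'a'; whenever it is updated, drive 'b'
    while True:
        yield a              # the yielded signal acts as the sensitivity list
        b.val = a.val + 1

def run(generators, updates):
    # map each signal to the generators currently waiting on it
    waiting = {}
    for g in generators:
        waiting.setdefault(next(g), []).append(g)
    # apply updates and resume the waiters of each updated signal
    for sig, val in updates:
        sig.val = val
        for g in waiting.pop(sig, []):
            waiting.setdefault(next(g), []).append(g)

a, b = Sig(0), Sig(0)
run([incrementer(a, b)], [(a, 5)])
print(b.val)  # -> 6
```

The scheduler here plays the role of the "higher order agent" described above: the generator never runs on its own, it is resumed exactly when its yielded condition occurs.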
For more info about generators, consult the on-line Python documentation, e.g.
at http://www.python.org/doc/2.2.2/whatsnew.
.. _deco:
About decorators
================
.. index:: single: decorators; about
Python 2.4 introduced a new feature called decorators. MyHDL 0.5 takes advantage
of this new feature by defining a number of decorators that facilitate hardware
descriptions. However, many users may not yet be familiar with decorators.
Therefore, an introduction is included here.
A decorator consists of special syntax in front of a function declaration. It
refers to a decorator function. The decorator function automatically transforms
the declared function into some other callable object.
A decorator function :func:`deco` is used in a decorator statement as follows::

   @deco
   def func(arg1, arg2, ...):
       <body>

This code is equivalent to the following::

   def func(arg1, arg2, ...):
       <body>
   func = deco(func)
Note that the decorator statement goes directly in front of the function
declaration, and that the function name :func:`func` is automatically reused for
the final result.
MyHDL 0.5 uses decorators to create ready-to-simulate generators from local
function definitions. Their functionality and usage will be described
extensively in this manual.
For more info about Python decorators, consult the on-line Python documentation,
e.g. at http://www.python.org/doc/2.4/whatsnew/node6.html.
.. warning::

   Because MyHDL 0.5 uses decorators, it requires Python 2.4 or a later version.
.. rubric:: Footnotes
.. [#] You must be bored by such claims, but in Python's case it's true.
.. [#] I am not biased.

File diff suppressed because it is too large


@@ -0,0 +1,415 @@
.. _cosim:
***********************************
Co-simulation with Verilog and VHDL
***********************************
.. _cosim-intro:
Introduction
============
One of the most exciting possibilities of MyHDL is to use it as a hardware
verification language (HVL). An HVL is a language used to write test benches and
verification environments, and to control simulations.
Nowadays, it is generally acknowledged that HVLs should be equipped with modern
software techniques, such as object orientation. The reason is that verification
is the most complex and time-consuming task of the design process. Consequently,
every useful technique is welcome. Moreover, test benches are not required to be
implementable. Therefore, unlike with synthesizable code, there are no
constraints on creativity.
Technically, verification of a design implemented in another language requires
co-simulation. MyHDL is enabled for co-simulation with any HDL simulator that
has a procedural language interface (PLI). The MyHDL side is designed to be
independent of a particular simulator. On the other hand, for each HDL simulator
a specific PLI module will have to be written in C. Currently, the MyHDL release
contains a PLI module for two Verilog simulators: Icarus and Cver.
.. _cosim-hdl:
The HDL side
============
To introduce co-simulation, we will continue to use the Gray encoder example
from the previous chapters. Suppose that we want to synthesize it and write it
in Verilog for that purpose. Clearly we would like to reuse our unit test
verification environment.
To start, let's recall what the Gray encoder in MyHDL looks like::

   def bin2gray(B, G, width):

       """ Gray encoder.

       B -- input intbv signal, binary encoded
       G -- output intbv signal, gray encoded
       width -- bit width

       """

       @always_comb
       def logic():
           for i in range(width):
               G.next[i] = B[i+1] ^ B[i]

       return logic
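As a cross-check of the encoding logic, the same transformation can be written as plain integer arithmetic (a reference model of ours, independent of MyHDL; the function name is hypothetical):

```python
def bin2gray_ref(b):
    # each Gray bit is the XOR of adjacent bits of the zero-extended input,
    # which for a nonnegative integer is simply b ^ (b >> 1)
    return b ^ (b >> 1)

# defining property: successive codewords differ in exactly one bit,
# so their XOR is a nonzero power of two
for i in range(15):
    diff = bin2gray_ref(i) ^ bin2gray_ref(i + 1)
    assert diff != 0 and diff & (diff - 1) == 0
```

Such a model is handy as a golden reference in the unit tests discussed elsewhere in this manual.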
To show the co-simulation flow, we don't need the Verilog implementation yet,
but only the interface. Our Gray encoder in Verilog would have the following
interface::

   module bin2gray(B, G);
       parameter width = 8;
       input [width-1:0] B;
       output [width-1:0] G;
       ....
To write a test bench, one creates a new module that instantiates the design
under test (DUT). The test bench declares nets and regs (or signals in VHDL)
that are attached to the DUT, and to stimulus generators and response checkers.
In an all-HDL flow, the generators and checkers are written in the HDL itself,
but we will want to write them in MyHDL. To make the connection, we need to
declare which regs & nets are driven and read by the MyHDL simulator. For our
example, this is done as follows::

   module dut_bin2gray;

       reg [`width-1:0] B;
       wire [`width-1:0] G;

       initial begin
           $from_myhdl(B);
           $to_myhdl(G);
       end

       bin2gray dut (.B(B), .G(G));
       defparam dut.width = `width;

   endmodule
The ``$from_myhdl`` task call declares which regs are driven by MyHDL, and the
``$to_myhdl`` task call which regs & nets are read by it. These tasks take an
arbitrary number of arguments. They are defined in a PLI module written in C
and made available in a simulator-dependent manner. In Icarus Verilog, the
tasks are defined in a ``myhdl.vpi`` module that is compiled from C source code.
.. _cosim-myhdl:
The MyHDL side
==============
MyHDL supports co-simulation by a ``Cosimulation`` object. A ``Cosimulation``
object must know how to run a HDL simulation. Therefore, the first argument to
its constructor is a command string to execute a simulation.
The way to generate and run a simulation executable is simulator dependent.
For example, in Icarus Verilog, a simulation executable for our example can be
obtained by running the ``iverilog`` compiler as follows::

   % iverilog -o bin2gray -Dwidth=4 bin2gray.v dut_bin2gray.v

This generates a ``bin2gray`` executable for a parameter ``width`` of 4, by
compiling the contributing Verilog files.

The simulation itself is run by the ``vvp`` command::

   % vvp -m ./myhdl.vpi bin2gray
This runs the ``bin2gray`` simulation, and specifies to use the ``myhdl.vpi``
PLI module present in the current directory. (This is just a command line usage
example; actually simulating with the ``myhdl.vpi`` module is only meaningful
from a ``Cosimulation`` object.)
We can use a ``Cosimulation`` object to provide a HDL version of a design to the
MyHDL simulator. Instead of a generator function, we write a function that
returns a ``Cosimulation`` object. For our example and the Icarus Verilog
simulator, this is done as follows::
   import os

   from myhdl import Cosimulation

   cmd = "iverilog -o bin2gray -Dwidth=%s bin2gray.v dut_bin2gray.v"

   def bin2gray(B, G, width):
       os.system(cmd % width)
       return Cosimulation("vvp -m ./myhdl.vpi bin2gray", B=B, G=G)
After the executable command argument, the ``Cosimulation`` constructor takes an
arbitrary number of keyword arguments. Those arguments make the link between
MyHDL Signals and HDL nets, regs, or signals, by named association. The keyword
is the name of an argument in a ``$to_myhdl`` or ``$from_myhdl`` call; the
argument is a MyHDL Signal.
With all this in place, we can now use the existing unit test to verify the
Verilog implementation. Note that we kept the same name and parameters for
the ``bin2gray`` function: all we need to do is to provide this alternative
definition to the existing unit test.
Let's try it on the Verilog design::

   module bin2gray(B, G);

       parameter width = 8;
       input [width-1:0] B;
       output [width-1:0] G;
       reg [width-1:0] G;

       integer i;
       wire [width:0] extB;

       assign extB = {1'b0, B}; // zero-extend input

       always @(extB) begin
           for (i=0; i < width; i=i+1)
               G[i] <= extB[i+1] ^ extB[i];
       end

   endmodule
When we run our unit test, we get::

   % python test_bin2gray.py
   Check that only one bit changes in successive codewords ... ok
   Check that all codewords occur exactly once ... ok
   Check that the code is an original Gray code ... ok

   ----------------------------------------------------------------------
   Ran 3 tests in 2.729s

   OK
.. _cosim-restr:
Restrictions
============
In the ideal case, it should be possible to simulate any HDL description
seamlessly with MyHDL. Moreover, the communicating signals on each side should
act transparently as a single one, enabling fully race-free operation.
For various reasons, it may not be possible or desirable to achieve full
generality. As anyone who has developed applications with the Verilog PLI can
testify, the restrictions in a particular simulator, and the differences across
various simulators, can be quite frustrating. Moreover, full generality may
require a disproportionate amount of development work compared to a slightly
less general solution that may be sufficient for the target application.
Consequently, I have tried to achieve a solution which is simple enough so that
one can reasonably expect that any PLI-enabled simulator can support it, and
that is relatively easy to verify and maintain. At the same time, the solution
is sufficiently general to cover the target application space.
The result is a compromise that places certain restrictions on the HDL code. In
this section, these restrictions are presented.
.. _cosim-pass:
Only passive HDL can be co-simulated
------------------------------------
The most important restriction of the MyHDL co-simulation solution is that only
"passive" HDL can be co-simulated. This means that the HDL code should not
contain any statements with time delays. In other words, the MyHDL simulator
should be the master of time; in particular, any clock signal should be
generated at the MyHDL side.
At first this may seem like an important restriction, but if one considers the
target application for co-simulation, it probably isn't.
MyHDL supports co-simulation so that test benches for HDL designs can be written
in Python. Let's consider the nature of the target HDL designs. For high-level,
behavioral models that are not intended for implementation, it should come as no
surprise that I would recommend to write them in MyHDL directly; that is one of
the goals of the MyHDL effort. Likewise, gate level designs with annotated
timing are not the target application: static timing analysis is a much better
verification method for such designs.
Rather, the targeted HDL designs are naturally models that are intended for
implementation, most likely through synthesis. As time delays are meaningless in
synthesizable code, the restriction is compatible with the target application.
.. _cosim-race:
Race sensitivity issues
-----------------------
In typical RTL code, some events cause other events to occur in the same time
step. For example, when a clock signal triggers, some signals may change in the
same time step. For race-free operation, an HDL must differentiate between such
events within a time step. This is done by the concept of "delta" cycles. In a
fully general, race-free co-simulation, the co-simulators would communicate at
the level of delta cycles. However, in MyHDL co-simulation, this is not entirely
the case.

Delta cycles from the MyHDL simulator toward the HDL co-simulator are preserved.
However, in the opposite direction, they are not. The signal changes are only
returned to the MyHDL simulator after all delta cycles have been performed in
the HDL co-simulator.
What does this mean? Let's start with the good news. As explained in the
previous section, the concept behind MyHDL co-simulation implies that clocks are
generated at the MyHDL side. *When using a MyHDL clock and its corresponding
HDL signal directly as a clock, co-simulation is race free.* In other words, the
case that most closely reflects the MyHDL co-simulation approach, is race free.
The situation is different when you want to use a signal driven by the HDL (and
the corresponding MyHDL signal) as a clock. Communication triggered by such a
clock is not race free. The solution is to treat such an interface as a chip
interface instead of an RTL interface. For example, when data is triggered at
positive clock edges, it can safely be sampled at negative clock edges.
Alternatively, the MyHDL data signals can be declared with a delay value, so
that they are guaranteed to change after the clock edge.
.. _cosim-impl:
Implementation notes
====================
This section requires some knowledge of PLI terminology.
Enabling a simulator for co-simulation requires a PLI module written in C. In
Verilog, the PLI is part of the "standard". However, different simulators
implement different versions and portions of the standard. Worse yet, the
behavior of certain PLI callbacks is not defined on some essential points. As a
result, one should plan to write or at least customize a specific PLI module for
any simulator. The release contains a PLI module for the open source Icarus and
Cver simulators.
This section documents the current approach and status of the PLI module
implementation and some reflections on future implementations.
.. _cosim-icarus:
Icarus Verilog
--------------
.. _cosim-icarus-delta:
Delta cycle implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^
To make co-simulation work, a specific type of PLI callback is needed. The
callback should be run when all pending events have been processed, while
allowing the creation of new events in the current time step (e.g. by the MyHDL
simulator). In some Verilog simulators, the ``cbReadWriteSync`` callback does
exactly that. However, in others, including Icarus, it does not. The callback's
behavior is not fully standardized; some simulators run the callback before
non-blocking assignment events have been processed.

Consequently, I had to look for a workaround. One half of the solution is to use
the ``cbReadOnlySync`` callback. This callback runs after all pending events
have been processed. However, it does not permit the creation of new events in
the current time step. The second half of the solution is to map MyHDL delta
cycles onto real Verilog time steps. Fortunately, I had some freedom here
because of the restriction that only passive HDL code can be co-simulated.
I chose to make the time granularity in the Verilog simulator 1000 times finer
than in the MyHDL simulator. For each MyHDL time step, 1000 Verilog time steps
are available for MyHDL delta cycles. In practice, only a few delta cycles per
time step should be needed. Exceeding this limit almost certainly indicates a
design error; the limit is checked at run-time. The factor of 1000 also makes it
easy to distinguish "real" time from delta cycle time when printing out the
Verilog time.
.. _cosim-icarus-pass:
Passive Verilog check
^^^^^^^^^^^^^^^^^^^^^
As explained before, co-simulated Verilog should not contain delay statements.
Ideally, there should be a run-time check to flag non-compliant code. However,
there is currently no such check in the Icarus module.
The check can be written using the ``cbNextSimTime`` VPI callback in Verilog.
However, Icarus 0.7 doesn't support this callback. In the meantime, support for
it has been added to the Icarus development branch. When Icarus 0.8 is
released, a check will be added.

Until then, just don't do this. It may appear to "work", but it really won't,
as events will be missed over the co-simulation interface.
.. _cosim-cver:
Cver
----
MyHDL co-simulation is supported with the open source Verilog simulator Cver.
The PLI module is based on the one for Icarus and basically has the same
functionality. Only some cosmetic modifications were required.
.. _cosim-impl-verilog:
Other Verilog simulators
------------------------
The Icarus module is written with VPI calls, which are provided by the most
recent generation of the Verilog PLI. Some simulators may only support TF/ACC
calls, requiring a complete redesign of the interface module.
If the simulator supports VPI, the Icarus module should be reusable to a large
extent. However, it may be possible to improve on it. The workaround to support
delta cycles described in Section :ref:`cosim-icarus-delta` may not be
necessary. In some simulators, the ``cbReadWriteSync`` callback occurs after all
events (including non-blocking assignments) have been processed. In that case,
the functionality can be supported without a finer time granularity in the
Verilog simulator.
There are also Verilog standardization efforts underway to resolve the ambiguity
of the ``cbReadWriteSync`` callback. The solution will be to introduce new, well
defined callbacks. From reading some proposals, I conclude that the
``cbEndOfSimTime`` callback would provide the required functionality.
The MyHDL project currently has no access to commercial Verilog simulators, so
progress in co-simulation support depends on external interest and
participation. Users have reported that they are using MyHDL co-simulation with
the simulators from Aldec and Modelsim.
.. _cosim-impl-syscalls:
Interrupted system calls
------------------------
The PLI module uses ``read`` and ``write`` system calls to communicate between
the co-simulators. The implementation assumes that these calls are restarted
automatically by the operating system when interrupted. This is apparently what
happens on the Linux box on which MyHDL is developed.
It is known how non-restarted interrupted system calls should be handled, but
currently such code cannot be tested on the MyHDL development platform. Also, it
is not clear whether this is still a relevant issue with modern operating
systems. Therefore, this issue has not been addressed at this moment. However,
assertions have been included that should trigger when this situation occurs.
Whenever an assertion fires in the PLI module, please report it. The same holds
for Python exceptions that cannot be easily explained.
.. _cosim-impl-vhdl:
VHDL
----
It would be nice to have an interface to VHDL simulators such as the Modelsim
VHDL simulator. This will require a PLI module using the PLI of the VHDL
simulator.
The MyHDL project currently has no access to commercial VHDL simulators, so
progress in co-simulation support will depend on external interest and
participation.

537
doc/source/manual/intro.rst Normal file

@@ -0,0 +1,537 @@
.. _intro:
*********************
Introduction to MyHDL
*********************
.. _intro-basic:
A basic MyHDL simulation
========================
We will introduce MyHDL with a classic ``Hello World`` style example. All
example code can be found in the distribution directory under
:file:`example/manual/`. Here are the contents of a MyHDL simulation script
called :file:`hello1.py`::

   from myhdl import Signal, delay, always, now, Simulation

   def HelloWorld():

       interval = delay(10)

       @always(interval)
       def sayHello():
           print "%s Hello World!" % now()

       return sayHello

   inst = HelloWorld()
   sim = Simulation(inst)
   sim.run(30)
When we run this simulation, we get the following output::

   % python hello1.py
   10 Hello World!
   20 Hello World!
   30 Hello World!
   _SuspendSimulation: Simulated 30 timesteps
The first line of the script imports a number of objects from the ``myhdl``
package. In Python we can only use identifiers that are literally defined in the
source file [#]_.
Then, we define a function called :func:`HelloWorld`. In MyHDL, classic
functions are used to model hardware modules. In particular, the parameter list
is used to define the interface. In this first example, the interface is empty.
.. index:: single: decorator; always
Inside the top level function we declare a local function called
:func:`sayHello` that defines the desired behavior. This function is decorated
with an :func:`always` decorator that has a delay object as its parameter. The
meaning is that the function will be executed whenever the specified delay
interval has expired.
Behind the curtains, the :func:`always` decorator creates a Python *generator*
and reuses the name of the decorated function for it. Generators are the
fundamental objects in MyHDL, and we will say much more about them further on.
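The mechanics can be illustrated with a simplified sketch in plain Python. This is not myhdl's actual implementation; the `always_sketch` decorator below is a hypothetical stand-in that only shows how a decorator can wrap a function in an endless generator and rebind the function's name to the resulting generator object:

```python
import types

def always_sketch(interval):
    """Simplified sketch (not myhdl's implementation) of how a decorator
    like always() turns a plain function into a generator."""
    def decorator(func):
        def gen():
            while True:
                yield interval   # suspend until the trigger "fires"
                func()           # then execute the decorated behavior
        return gen()             # the decorated name now refers to a generator
    return decorator

ticks = []

@always_sketch(10)
def say_hello():
    ticks.append("hello")

# The name say_hello is now bound to a generator object, not a function:
assert isinstance(say_hello, types.GeneratorType)

# Driving it by hand, as a simulator would:
next(say_hello)   # runs up to the first yield and suspends
next(say_hello)   # trigger fires: the body runs once, then it suspends again
```

In the real package, the :class:`Simulation` object plays the role of the code that resumes the generator.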
Finally, the top level function returns the local generator. This is the
simplest case of the basic MyHDL code pattern to define the contents of a
hardware module. We will describe the general case further on.
In MyHDL, we create an *instance* of a hardware module by calling the
corresponding function. In the example, variable ``inst`` refers to an instance
of :func:`HelloWorld`. To simulate the instance, we pass it as an argument to a
:class:`Simulation` object constructor. We then run the simulation for the
desired amount of timesteps.
.. _intro-conc:
Signals, ports, and concurrency
===============================
In the previous section, we simulated a design with a single generator and no
concurrency. On the other hand, real hardware descriptions are typically
massively concurrent. MyHDL supports this by allowing an arbitrary number of
concurrently running generators.
With concurrency comes the problem of deterministic communication. Hardware
languages use special objects to support deterministic communication between
concurrent code. In particular, MyHDL has a :class:`Signal` object which is
roughly modeled after VHDL signals.
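The essential property of a signal — writes go to a pending "next" value, while readers keep seeing the stable current value until the simulator performs the update — can be sketched with a small plain-Python class. This `SignalSketch` is an illustration of the semantics only, not myhdl's `Signal` implementation:

```python
class SignalSketch:
    """Toy model of a MyHDL Signal: 'next' buffers the new value;
    'val' only changes when the simulator applies the update."""
    def __init__(self, val=None):
        self.val = val
        self.next = val

    def update(self):
        # In MyHDL, this step is performed by the simulator between
        # delta cycles; user code never calls it directly.
        changed = self.val != self.next
        self.val = self.next
        return changed

clk = SignalSketch(0)
clk.next = not clk.val   # a writer schedules the new value...
assert clk.val == 0      # ...but readers still see the old one
clk.update()
assert clk.val == 1      # the change becomes visible after the update
```

This buffering is what makes communication between concurrent generators deterministic: within one time step, every reader sees the same value regardless of execution order.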
We will demonstrate signals and concurrency by extending and modifying our first
example. We define two hardware modules, one that drives a clock signal, and one
that is sensitive to a positive edge on a clock signal::
from myhdl import Signal, delay, always, now, Simulation
def ClkDriver(clk):
halfPeriod = delay(10)
@always(halfPeriod)
def driveClk():
clk.next = not clk
return driveClk
def HelloWorld(clk):
@always(clk.posedge)
def sayHello():
print "%s Hello World!" % now()
return sayHello
clk = Signal(0)
clkdriver_inst = ClkDriver(clk)
hello_inst = HelloWorld(clk)
sim = Simulation(clkdriver_inst, hello_inst)
sim.run(50)
.. index::
single: VHDL; signal assignment
single: Verilog; non-blocking assignment
The clock driver function :func:`ClkDriver` has a clock signal as its parameter.
This is how a *port* is modeled in MyHDL. The function defines a generator that
continuously toggles a clock signal after a certain delay. A new value of a
signal is specified by assigning to its ``next`` attribute. This is the MyHDL
equivalent of the VHDL signal assignment and the Verilog non-blocking
assignment.
.. index:: single: wait; for a rising edge
The :func:`HelloWorld` function is modified from the first example. It now also
takes a clock signal as parameter. Its generator is made sensitive to a rising
edge of the clock signal. This is specified by the ``posedge`` attribute of a
signal. The edge specifier is the argument of the ``always`` decorator. As a
result, the decorated function will be executed on every rising clock edge.
The ``clk`` signal is constructed with an initial value ``0``. When creating an
instance of each hardware module, the same clock signal is passed as the
argument. The result is that the instances are now connected through the clock
signal. The :class:`Simulation` object is constructed with the two instances.
When we run the simulation, we get::
% python hello2.py
10 Hello World!
30 Hello World!
50 Hello World!
_SuspendSimulation: Simulated 50 timesteps
.. _intro-hier:
Parameters and hierarchy
========================
We have seen that MyHDL uses functions to model hardware modules. We have also
seen that ports are modeled by using signals as parameters. To make designs
reusable we will also want to use other objects as parameters. For example, we
can change the clock generator function to make it more general and reusable, by
making the clock period parameterizable, as follows::
from myhdl import Signal, delay, instance, always, now, Simulation
def ClkDriver(clk, period=20):
lowTime = int(period/2)
highTime = period - lowTime
@instance
def driveClk():
while True:
yield delay(lowTime)
clk.next = 1
yield delay(highTime)
clk.next = 0
return driveClk
In addition to the clock signal, the clock period is a parameter, with a default
value of ``20``.
.. index:: single: decorator; instance
As the low time of the clock may differ from the high time in case of an odd
period, we cannot use the :func:`always` decorator with a single delay value
anymore. Instead, the :func:`driveClk` function is now a generator function with
an explicit definition of the desired behavior. It is decorated with the
:func:`instance` decorator. You can see that :func:`driveClk` is a generator
function because it contains ``yield`` statements.
When a generator function is called, it returns a generator object. This is
basically what the :func:`instance` decorator does. It is less sophisticated
than the :func:`always` decorator, but it can be used to create a generator from
any local generator function.
The ``yield`` statement is a general Python construct, but MyHDL uses it in a
dedicated way. In MyHDL, it has a similar meaning as the wait statement in
VHDL: the statement suspends execution of a generator, and its clauses specify
the conditions on which the generator should wait before resuming. In this case,
the generator waits for a certain delay.
Note that to make sure that the generator runs "forever", we wrap its behavior
in a ``while True`` loop.
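The suspend-and-resume behavior of ``yield`` can be observed with plain Python generators, independently of MyHDL. In the sketch below, each yielded value stands for a requested delay that the driving code (the simulator, in MyHDL) receives when it resumes the generator:

```python
def drive_clk_sketch(low_time, high_time):
    """Plain generator mimicking driveClk: each yield suspends the
    generator and hands a requested delay back to the caller."""
    while True:
        yield low_time    # suspend; "wait" for the low period
        yield high_time   # suspend; "wait" for the high period

gen = drive_clk_sketch(10, 9)
assert next(gen) == 10   # resumes up to the first yield
assert next(gen) == 9    # resumes up to the second yield
assert next(gen) == 10   # the while-True loop makes it run forever
```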
Similarly, we can define a general :func:`Hello` function as follows::
def Hello(clk, to="World!"):
@always(clk.posedge)
def sayHello():
print "%s Hello %s" % (now(), to)
return sayHello
.. index:: single: instance; defined
We can create any number of instances by calling the functions with the
appropriate parameters. Hierarchy can be modeled by defining the instances in a
higher-level function, and returning them. This pattern can be repeated for an
arbitrary number of hierarchical levels. Consequently, the general definition of
a MyHDL instance is recursive: an instance is either a sequence of instances, or
a generator.
As an example, we will create a higher-level function with four instances of the
lower-level functions, and simulate it::
def greetings():
clk1 = Signal(0)
clk2 = Signal(0)
clkdriver_1 = ClkDriver(clk1) # positional and default association
clkdriver_2 = ClkDriver(clk=clk2, period=19) # named association
hello_1 = Hello(clk=clk1) # named and default association
hello_2 = Hello(to="MyHDL", clk=clk2) # named association
return clkdriver_1, clkdriver_2, hello_1, hello_2
inst = greetings()
sim = Simulation(inst)
sim.run(50)
As in standard Python, positional or named parameter association can be used in
instantiations, or a mix of both [#]_. All these styles are demonstrated in the
example above. Named association can be very useful if there are a lot of
parameters, as the argument order in the call does not matter in that case.
The simulation produces the following output::
% python greetings.py
9 Hello MyHDL
10 Hello World!
28 Hello MyHDL
30 Hello World!
47 Hello MyHDL
50 Hello World!
_SuspendSimulation: Simulated 50 timesteps
.. warning::
Some commonly used terminology has different meanings in Python versus hardware
design. Rather than artificially changing terminology, I think it is best to
keep it and explicitly describe the differences.
.. index:: single: module; in Python versus hardware design
A :dfn:`module` in Python refers to all source code in a particular file. A
module can be reused by other modules by importing. In hardware design, a module
is a reusable block of hardware with a well defined interface. It can be reused
in another module by :dfn:`instantiating` it.
.. index:: single: instance; in Python versus hardware design
An :dfn:`instance` in Python (and other object-oriented languages) refers to the
object created by a class constructor. In hardware design, an instance is a
particular incarnation of a hardware module.
Normally, the meaning should be clear from the context. Occasionally, I may
qualify terms with the words 'hardware' or 'MyHDL' to avoid ambiguity.
.. _intro-bit:
Bit oriented operations
=======================
Hardware design involves dealing with bits and bit-oriented operations. The
standard Python type :class:`int` has most of the desired features, but lacks
support for indexing and slicing. For this reason, MyHDL provides the
:class:`intbv` class. The name was chosen to suggest an integer with bit vector
flavor.
Class :class:`intbv` works transparently with other integer-like types. Like
class :class:`int`, it provides access to the underlying two's complement
representation for bitwise operations. In addition, it is a mutable type that
provides indexing and slicing operations, and some additional bit-oriented
support such as concatenation.
.. _intro-indexing:
Bit indexing
------------
.. index:: single: bit indexing
As an example, we will consider the design of a Gray encoder. The following code
is a Gray encoder modeled in MyHDL::
from myhdl import Signal, delay, Simulation, always_comb, instance, intbv, bin
def bin2gray(B, G, width):
""" Gray encoder.
B -- input intbv signal, binary encoded
G -- output intbv signal, gray encoded
width -- bit width
"""
@always_comb
def logic():
for i in range(width):
G.next[i] = B[i+1] ^ B[i]
return logic
This code introduces a few new concepts. The string in triple quotes at the
start of the function is a :dfn:`doc string`. This is standard Python practice
for structured documentation of code.
.. index::
single: decorator; always_comb
single: wait; for a signal value change
single: combinatorial logic
Furthermore, we introduce a third decorator: :func:`always_comb`. It is used
with a classic function and specifies that the resulting generator should wait
for a value change on any input signal. This is typically used to describe
combinatorial logic. The :func:`always_comb` decorator automatically infers
which signals are used as inputs.
Finally, the code contains bit indexing operations and an exclusive-or operator
as required for a Gray encoder. By convention, the lsb of an :class:`intbv`
object has index ``0``.
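The per-bit XOR in :func:`bin2gray` is equivalent to the classic integer formula ``G = B ^ (B >> 1)``, which is easy to verify with plain ints (no myhdl objects needed):

```python
def bin2gray_int(b, width):
    """Same computation as the per-bit loop in bin2gray, on a plain int:
    gray bit i is b[i+1] ^ b[i]."""
    g = 0
    for i in range(width):
        g |= (((b >> (i + 1)) ^ (b >> i)) & 1) << i
    return g

width = 3
for b in range(2 ** width):
    # every bit-loop result matches the shift-and-xor formula
    assert bin2gray_int(b, width) == b ^ (b >> 1)

# The first few codes match the simulation output shown below:
assert [format(bin2gray_int(b, 3), '03b') for b in range(4)] == \
       ['000', '001', '011', '010']
```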
To verify the Gray encoder, we write a test bench that prints input and output
for all possible input values::
def testBench(width):
B = Signal(intbv(0))
G = Signal(intbv(0))
dut = bin2gray(B, G, width)
@instance
def stimulus():
for i in range(2**width):
B.next = intbv(i)
yield delay(10)
print "B: " + bin(B, width) + " | G: " + bin(G, width)
return dut, stimulus
We use the conversion function ``bin`` to get a binary string representation of
the signal values. This function is exported by the ``myhdl`` package and
supplements the standard Python ``hex`` and ``oct`` conversion functions.
As a demonstration, we set up a simulation for a small width::
sim = Simulation(testBench(width=3))
sim.run()
The simulation produces the following output::
% python bin2gray.py
B: 000 | G: 000
B: 001 | G: 001
B: 010 | G: 011
B: 011 | G: 010
B: 100 | G: 110
B: 101 | G: 111
B: 110 | G: 101
B: 111 | G: 100
StopSimulation: No more events
.. _intro-slicing:
Bit slicing
-----------
.. index:: single: bit slicing
For a change, we will use a traditional function as an example to illustrate
slicing. The following function calculates the HEC byte of an ATM header. ::
from myhdl import intbv, concat
COSET = 0x55
def calculateHec(header):
""" Return hec for an ATM header, represented as an intbv.
The hec polynomial is 1 + x + x**2 + x**8.
"""
hec = intbv(0)
for bit in header[32:]:
hec[8:] = concat(hec[7:2],
bit ^ hec[1] ^ hec[7],
bit ^ hec[0] ^ hec[7],
bit ^ hec[7]
)
return hec ^ COSET
The code shows how slicing access and assignment is supported on the
:class:`intbv` data type. In accordance with the most common hardware
convention, and unlike standard Python, slicing ranges are downward. The code
also demonstrates concatenation of :class:`intbv` objects.
As in standard Python, the slicing range is half-open: the highest index bit is
not included. Unlike standard Python however, this index corresponds to the
*leftmost* item. Both indices can be omitted from the slice. If the leftmost
index is omitted, the meaning is to access "all" higher order bits. If the
rightmost index is omitted, it is ``0`` by default.
The half-openness of a slice may seem awkward at first, but it helps to avoid
off-by-one count issues in practice. For example, the slice ``hec[8:]`` has exactly
``8`` bits. Likewise, the slice ``hec[7:2]`` has ``7-2=5`` bits. You can think
about it as follows: for a slice ``[i:j]``, only bits below index ``i`` are
included, and the bit with index ``j`` is the lowest bit included.
When an intbv object is sliced, a new intbv object is returned. This new intbv
object is always positive, even when the original object was negative.
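The slicing and concatenation in :func:`calculateHec` can be mimicked with shifts and masks on plain ints, which also makes explicit which bits each slice selects. This is an illustrative sketch, not a replacement for the :class:`intbv` version:

```python
COSET = 0x55

def calculate_hec_int(header):
    """Plain-int version of calculateHec for a 32-bit header.
    hec[7:2] selects bits 6 downto 2; iterating over header[32:]
    visits the bits msb-first, as intbv iteration does."""
    hec = 0
    for i in range(31, -1, -1):
        bit = (header >> i) & 1
        b7 = (hec >> 7) & 1
        hec = ((((hec >> 2) & 0x1F) << 3)             # hec[7:2], shifted up
               | (bit ^ ((hec >> 1) & 1) ^ b7) << 2   # bit ^ hec[1] ^ hec[7]
               | (bit ^ (hec & 1) ^ b7) << 1          # bit ^ hec[0] ^ hec[7]
               | (bit ^ b7))                          # bit ^ hec[7]
    return hec ^ COSET

# An all-zero header leaves the CRC at zero, so only the coset remains:
assert calculate_hec_int(0) == 0x55
```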
.. _intro-python:
Some remarks on MyHDL and Python
================================
To conclude this introductory chapter, it is useful to stress that MyHDL is not
a language in itself. The underlying language is Python, and MyHDL is
implemented as a Python package called ``myhdl``. Moreover, it is a design goal
to keep the ``myhdl`` package as minimalistic as possible, so that MyHDL
descriptions are very much "pure Python".
To have Python as the underlying language is significant in several ways:
* Python is a very powerful high level language. This translates into high
productivity and elegant solutions to complex problems.
* Python is continuously improved by some very clever minds, supported by a
large and fast growing user base. Python profits fully from the open source
development model.
* Python comes with an extensive standard library. Some functionality is likely
to be of direct interest to MyHDL users: examples include string handling,
regular expressions, random number generation, unit test support, operating
system interfacing and GUI development. In addition, there are modules for
mathematics, database connections, network programming, internet data
handling, and so on.
* Python has a powerful C extension model. All built-in types are written with
the same C API that is available for custom extensions. To a module user, there
is no difference between a standard Python module and a C extension module ---
except performance. The typical Python development model is to prototype
everything in Python until the application is stable, and (only) then rewrite
performance critical modules in C if necessary.
.. _intro-summary:
Summary and perspective
=======================
Here is an overview of what we have learned in this chapter:
* Generators are the basic building blocks of MyHDL models. They provide the way
to model massive concurrency and sensitivity lists.
* MyHDL provides decorators that create useful generators from local functions.
* Hardware structure and hierarchy is described with classic Python functions.
* ``Signal`` objects are used to communicate between concurrent generators.
* ``intbv`` objects are used to describe bit-oriented operations.
* A ``Simulation`` object is used to simulate MyHDL models.
These concepts are sufficient to start describing and simulating MyHDL models.
However, there is much more to MyHDL. Here is an overview of what can be learned
from the following chapters:
* MyHDL supports sophisticated and high level modeling techniques. This is
described in Chapter :ref:`model`.
* MyHDL enables the use of modern software verification techniques, such as unit
testing, on hardware designs. This is the topic of Chapter :ref:`unittest`.
* It is possible to co-simulate MyHDL models with other HDL languages such as
Verilog and VHDL. This is described in Chapter :ref:`cosim`.
* Last but not least, MyHDL models can be converted to Verilog, providing a path
to a silicon implementation. This is the topic of Chapter :ref:`conv`.
.. rubric:: Footnotes
.. [#] The exception is the ``from module import *`` syntax, that imports all the
symbols from a module. Although this is generally considered bad practice, it
can be tolerated for large modules that export a lot of symbols. One may argue
that ``myhdl`` falls into that category.
.. [#] All positional parameters have to go before any named parameter.

.. _ref:
*********
Reference
*********
MyHDL is implemented as a Python package called ``myhdl``. This chapter
describes the objects that are exported by this package.
.. _ref-sim:
Simulation
==========
.. _ref-simclass:
The :class:`Simulation` class
-----------------------------
.. class:: Simulation(arg [, arg ...])
Class to construct a new simulation. Each argument should be a MyHDL instance.
In MyHDL, an instance is recursively defined as being either a sequence of
instances, or a MyHDL generator, or a Cosimulation object. See section
:ref:`ref-gen` for the definition of MyHDL generators and their interaction with
a :class:`Simulation` object. See Section :ref:`ref-cosim` for the
:class:`Cosimulation` object. At most one :class:`Cosimulation` object can be
passed to a :class:`Simulation` constructor.
A :class:`Simulation` object has the following method:
.. method:: Simulation.run([duration])
Run the simulation forever (by default) or for a specified duration.
.. _ref-simsupport:
Simulation support functions
----------------------------
.. function:: now()
Returns the current simulation time.
.. exception:: StopSimulation()
Base exception that is caught by the ``Simulation.run()`` method to stop a
simulation.
.. _ref-trace:
Waveform tracing
----------------
.. function:: traceSignals(func [, *args] [, **kwargs])
Enables signal tracing to a VCD file for waveform viewing. *func* is a function
that returns an instance. :func:`traceSignals` calls *func* under its control
and passes *\*args* and *\*\*kwargs* to the call. In this way, it finds the
hierarchy and the signals to be traced.
The return value is the same as would be returned by the call ``func(*args,
**kwargs)``. The top-level instance name and the basename of the VCD output
filename are ``func.func_name`` by default. If the VCD file already exists, it
will be moved to a backup file by attaching a timestamp to it, before creating
the new file.
The ``traceSignals`` callable has the following attribute:
.. attribute:: traceSignals.name
This attribute is used to overwrite the default top-level instance name and the
basename of the VCD output filename.
.. _ref-model:
Modeling
========
.. _ref-sig:
The :class:`Signal` class
-------------------------
.. class:: Signal([val=None] [, delay=0])
This class is used to construct a new signal and to initialize its value to
*val*. Optionally, a delay can be specified.
A :class:`Signal` object has the following attributes:
.. attribute:: Signal.posedge
Attribute that represents the positive edge of a signal, to be used in
sensitivity lists.
.. attribute:: Signal.negedge
Attribute that represents the negative edge of a signal, to be used in
sensitivity lists.
.. attribute:: Signal.next
Read-write attribute that represents the next value of the signal.
.. attribute:: Signal.val
Read-only attribute that represents the current value of the signal.
This attribute is always available to access the current value; however in many
practical cases it will not be needed. Whenever there is no ambiguity, the Signal
object's current value is used implicitly. In particular, all Python's standard
numeric, bit-wise, logical and comparison operators are implemented on a Signal
object by delegating to its current value. The exception is augmented
assignment. These operators are not implemented as they would break the rule
that the current value should be a read-only attribute. In addition, when a
Signal object is assigned to the ``next`` attribute of another Signal object,
its current value is assigned instead.
.. attribute:: Signal.min
Read-only attribute that is the minimum value (inclusive) of a numeric signal,
or *None* for no minimum.
.. attribute:: Signal.max
Read-only attribute that is the maximum value (exclusive) of a numeric signal,
or *None* for no maximum.
.. attribute:: Signal.driven
Writable attribute that can be used to indicate that the signal is supposed to
be driven from the MyHDL code, and how it should be declared in Verilog after
conversion. The allowed values are ``'reg'`` and ``'wire'``.
This attribute is useful when the Verilog converter cannot infer automatically
whether and how a signal is driven. This occurs when the signal is driven from
user-defined Verilog code.
.. _ref-gen:
MyHDL generators and trigger objects
------------------------------------
.. index:: single: sensitivity list
MyHDL generators are standard Python generators with specialized
:keyword:`yield` statements. In hardware description languages, the equivalent
statements are called *sensitivity lists*. The general format of
:keyword:`yield` statements in MyHDL generators is::

   yield clause [, clause ...]
When a generator executes a :keyword:`yield` statement, its execution is
suspended at that point. At the same time, each *clause* is a *trigger object*
which defines the condition upon which the generator should be resumed. However,
per invocation of a :keyword:`yield` statement, the generator resumes exactly
once, regardless of the number of clauses. This happens on the first trigger
that occurs.
In this section, the trigger objects and their functionality will be described.
Some MyHDL objects that are described elsewhere can directly be used as trigger
objects. In particular, a signal can be used as a trigger object. Whenever a
signal changes value, the generator resumes. Likewise, the objects referred to
by the signal attributes ``posedge`` and ``negedge`` are trigger objects. The
generator resumes on the occurrence of a positive or a negative edge on the
signal, respectively. An edge occurs when there is a change from false to true
(positive) or vice versa (negative). For the full description of the
:class:`Signal` class and its attributes, see section :ref:`ref-sig`.
Furthermore, MyHDL generators can be used as clauses in ``yield`` statements.
Such a generator is forked, and starts operating immediately, while the original
generator waits for it to complete. The original generator resumes when the
forked generator returns.
In addition, the following functions return trigger objects:
.. function:: delay(t)
Return a trigger object that specifies that the generator should resume after a
delay *t*.
.. function:: join(arg [, arg ...])
Join a number of trigger objects together and return a joined trigger object.
The effect is that the joined trigger object will trigger when *all* of its
arguments have triggered.
Finally, as a special case, the Python ``None`` object can be present in a
``yield`` statement. It is the do-nothing trigger object. The generator
immediately resumes, as if no ``yield`` statement were present. This can be
useful if the ``yield`` statement also has generator clauses: those generators
are forked, while the original generator resumes immediately.
.. _ref-deco:
Decorator functions
-------------------
MyHDL defines a number of decorator functions that make it easier to create
generators from local generator functions.
.. function:: instance()
The :func:`instance` decorator is the most general decorator. It automatically
creates a generator by calling the decorated generator function.
It is used as follows::
def top(...):
...
@instance
def inst():
<generator body>
...
return inst, ...
This is equivalent to::
def top(...):
...
def _gen_func():
<generator body>
...
inst = _gen_func()
...
return inst, ...
.. function:: always(arg [, *args])
The :func:`always` decorator is a specialized decorator that targets a widely
used coding pattern. It is used as follows::
def top(...):
...
@always(event1, event2, ...)
def inst():
<body>
...
return inst, ...
This is equivalent to the following::
def top(...):
...
def _func():
<body>
def _gen_func():
while True:
yield event1, event2, ...
_func()
...
inst = _gen_func()
...
return inst, ...
The argument list of the decorator corresponds to the sensitivity list. Only
signals, edge specifiers, or delay objects are allowed. The decorated function
should be a classic function.
.. function:: always_comb()
The :func:`always_comb` decorator is used to describe combinatorial logic. ::
def top(...):
...
@always_comb
def comb_inst():
<combinatorial body>
...
return comb_inst, ...
The :func:`always_comb` decorator infers the inputs of the combinatorial logic
and the corresponding sensitivity list automatically. The decorated function
should be a classic function.
.. _ref-intbv:
The :class:`intbv` class
------------------------
.. class:: intbv([val=None] [, min=None] [, max=None])
This class represents :class:`int`\ -like objects with some additional features
that make it suitable for hardware design. The *val* argument can be an
:class:`int`, a :class:`long`, an :class:`intbv` or a bit string (a string with
only '0's or '1's). For a bit string argument, the value is calculated as in
``int(bitstring, 2)``. The optional *min* and *max* arguments can be used to
specify the minimum and maximum value of the :class:`intbv` object. As in
standard Python practice for ranges, the minimum value is inclusive and the
maximum value is exclusive.
The minimum and maximum values of an :class:`intbv` object are available as
attributes:
.. attribute:: intbv.min
Read-only attribute that is the minimum value (inclusive) of an :class:`intbv`,
or *None* for no minimum.
.. attribute:: intbv.max
Read-only attribute that is the maximum value (exclusive) of an :class:`intbv`,
or *None* for no maximum.
Unlike :class:`int` objects, :class:`intbv` objects are mutable; this is also
the reason for their existence. Mutability is needed to support assignment to
indexes and slices, as is common in hardware design. For the same reason,
:class:`intbv` is not a subclass of :class:`int`, even though :class:`int`
provides most of the desired functionality. (It is not possible to derive a
mutable subtype from an immutable base type.)
An :class:`intbv` object supports the same comparison, numeric, bitwise,
logical, and conversion operations as :class:`int` objects. See
http://www.python.org/doc/current/lib/typesnumeric.html for more information on
such operations. In all binary operations, :class:`intbv` objects can work
together with :class:`int` objects. For mixed-type numeric operations, the
result type is an :class:`int` or a :class:`long`. For mixed-type bitwise
operations, the result type is an :class:`intbv`.
In addition, :class:`intbv` objects support indexing and slicing operations:
+-----------------+---------------------------------+--------+
| Operation | Result | Notes |
+=================+=================================+========+
| ``bv[i]`` | item *i* of *bv* | \(1) |
+-----------------+---------------------------------+--------+
| ``bv[i] = x`` | item *i* of *bv* is replaced by | \(1) |
| | *x* | |
+-----------------+---------------------------------+--------+
| ``bv[i:j]`` | slice of *bv* from *i* downto | (2)(3) |
| | *j* | |
+-----------------+---------------------------------+--------+
| ``bv[i:j] = t`` | slice of *bv* from *i* downto | (2)(4) |
| | *j* is replaced by *t* | |
+-----------------+---------------------------------+--------+
(1)
Indexing follows the most common hardware design conventions: the lsb bit is the
rightmost bit, and it has index 0. This has the following desirable property: if
the :class:`intbv` value is decomposed as a sum of powers of 2, the bit with
index *i* corresponds to the term ``2**i``.
(2)
In contrast to standard Python sequencing conventions, slicing ranges are
downward. This is a consequence of the indexing convention, combined with the
common convention that the most significant digits of a number are the leftmost
ones. The Python convention of half-open ranges is followed: the bit with the
highest index is not included. However, it is the *leftmost* bit in this case.
As in standard Python, this takes care of off-by-one issues in many practical
cases: in particular, ``bv[i:]`` returns *i* bits; ``bv[i:j]`` has ``i-j`` bits.
When the low index *j* is omitted, it defaults to ``0``. When the high index *i*
is omitted, it means "all" higher order bits.
(3)
The object returned from a slicing access operation is always a positive
:class:`intbv`; higher order bits are implicitly assumed to be zero. The bit
width is implicitly stored in the return object, so that it can be used in
concatenations and as an iterator. In addition, for a bit width w, the *min* and
*max* attributes are implicitly set to ``0`` and ``2**w``, respectively.
(4)
When setting a slice to a value, it is checked whether the slice is wide enough.
In addition, an :class:`intbv` object supports the iterator protocol. This makes
it possible to iterate over all its bits, from the high index to index 0. This
is only possible for :class:`intbv` objects with a defined bit width.
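On plain ints, the downward half-open slice semantics of the table above correspond to simple shift-and-mask operations. The sketch below illustrates notes (2) through (4); a real :class:`intbv` additionally tracks its bit width and *min*/*max* bounds:

```python
def get_slice(val, i, j=0):
    """bv[i:j] on a plain int: bits i-1 downto j, always non-negative."""
    return (val >> j) & ((1 << (i - j)) - 1)

def set_slice(val, i, j, x):
    """bv[i:j] = x on a plain int, with the width check of note (4)."""
    width = i - j
    if x >= (1 << width):
        raise ValueError("value too wide for %d-bit slice" % width)
    mask = ((1 << width) - 1) << j
    return (val & ~mask) | (x << j)

bv = 0b10110100
assert get_slice(bv, 8) == 0b10110100   # bv[8:] returns all 8 bits
assert get_slice(bv, 7, 2) == 0b01101   # bv[7:2] returns 7-2=5 bits
assert set_slice(0, 4, 0, 0b1010) == 0b1010
```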
.. _ref-model-misc:
Miscellaneous modeling support functions
----------------------------------------
.. function:: bin(num [, width])
Returns a bit string representation. If the optional *width* is provided, and if
it is larger than the width of the default representation, the bit string is
padded with the sign bit.
This function complements the standard Python conversion functions ``hex`` and
``oct``. A binary string representation is often useful in hardware design.
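The behavior can be sketched on plain ints as follows; this is an approximation for illustration, and the actual ``myhdl`` function may differ in corner cases:

```python
def bin_sketch(num, width=0):
    """Sketch of myhdl's bin(): minimal two's-complement bit string,
    padded to 'width' with the sign bit."""
    if num >= 0:
        s = format(num, 'b')
        sign = '0'
    else:
        w = 1
        while -(1 << (w - 1)) > num:   # smallest two's-complement width
            w += 1
        s = format(num & ((1 << w) - 1), '0%db' % w)
        sign = '1'
    if width > len(s):
        s = sign * (width - len(s)) + s
    return s

assert bin_sketch(5, 8) == '00000101'   # zero-padded positive number
assert bin_sketch(-1, 4) == '1111'      # sign-bit padding for negatives
```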
.. function:: concat(base [, arg ...])
Returns an :class:`intbv` object formed by concatenating the arguments.
The following argument types are supported: :class:`intbv` objects with a
defined bit width, :class:`bool` objects, signals of the previous objects, and
bit strings. All these objects have a defined bit width. The first argument
*base* is special as it doesn't need to have a defined bit width. In addition to
the previously mentioned objects, unsized :class:`intbv`, :class:`int` and
:class:`long` objects are supported, as well as signals of such objects.
.. function:: downrange(high [, low=0])
Generates a downward range list of integers.
This function is modeled after the standard ``range`` function, but works in the
downward direction. The returned interval is half-open, with the *high* index
not included. *low* is optional and defaults to zero. This function is
especially useful in conjunction with the :class:`intbv` class, which also works
with downward indexing.
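The described behavior amounts to a one-line wrapper around the standard ``range`` function, sketched here for illustration:

```python
def downrange_sketch(high, low=0):
    """Sketch of downrange(): like range(), but downward, and
    half-open at the high end (high itself is not included)."""
    return list(range(high - 1, low - 1, -1))

assert downrange_sketch(4) == [3, 2, 1, 0]
assert downrange_sketch(7, 2) == [6, 5, 4, 3, 2]
```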
.. function:: enum(arg [, arg ...] [, encoding='binary'])
Returns an enumeration type.
The arguments should be string literals that represent the desired names of the
enumeration type attributes. The returned type should be assigned to a type
name. For example::
t_EnumType = enum('ATTR_NAME_1', 'ATTR_NAME_2', ...)
The enumeration type identifiers are available as attributes of the type name,
for example: ``t_EnumType.ATTR_NAME_1``
The optional keyword argument *encoding* specifies the encoding scheme used in
Verilog output. The available encodings are ``'binary'``, ``'one_hot'``, and
``'one_cold'``.
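A minimal sketch of the idea, ignoring the encoding machinery, is a factory that creates a type whose attributes are distinct, printable items. The `enum_sketch` name and its internals are hypothetical illustrations, not myhdl's implementation:

```python
def enum_sketch(*names):
    """Minimal sketch of enum(): returns a type whose attributes are
    distinct enumeration items (Verilog encoding handling omitted)."""
    class EnumItem(object):
        def __init__(self, name):
            self._name = name
        def __repr__(self):
            return self._name
    class EnumType(object):
        pass
    for name in names:
        setattr(EnumType, name, EnumItem(name))
    return EnumType

t_State = enum_sketch('IDLE', 'BUSY')
assert repr(t_State.IDLE) == 'IDLE'       # items print as their names
assert t_State.IDLE is not t_State.BUSY   # items are distinct objects
```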
.. function:: instances()
Looks up all MyHDL instances in the local name space and returns them in a list.
.. _ref-cosim:
Co-simulation
=============
.. _ref-cosim-myhdl:
MyHDL
-----
.. class:: Cosimulation(exe, **kwargs)
Class to construct a new Cosimulation object.
The *exe* argument is a command string to execute an HDL simulation. The
*kwargs* keyword arguments provide a named association between signals (regs &
nets) in the HDL simulator and signals in the MyHDL simulator. Each keyword
should be a name listed in a ``$to_myhdl`` or ``$from_myhdl`` call in the HDL
code. Each argument should be a :class:`Signal` declared in the MyHDL code.
.. _ref-cosim-verilog:
Verilog
-------
.. function:: $to_myhdl(arg [, arg ...])
Task that defines which signals (regs & nets) should be read by the MyHDL
simulator. This task should be called at the start of the simulation.
.. function:: $from_myhdl(arg [, arg ...])
Task that defines which signals should be driven by the MyHDL simulator. In
Verilog, only regs can be specified. This task should be called at the start of
the simulation.
.. _ref-cosim-vhdl:
VHDL
----
Not implemented yet.
.. _ref-conv:
Conversion to Verilog
=====================
.. _ref-conv-conv:
Conversion
----------
.. function:: toVerilog(func [, *args] [, **kwargs])
Converts a MyHDL design instance to equivalent Verilog code, and also generates
a test bench to verify it. *func* is a function that returns an instance.
:func:`toVerilog` calls *func* under its control and passes *\*args* and
*\*\*kwargs* to the call.
The return value is the same as would be returned by the call ``func(*args,
**kwargs)``. It should be assigned to an instance name.
The top-level instance name and the basename of the Verilog output filename are
``func.func_name`` by default.
For more information about the restrictions on convertible MyHDL code, see
section :ref:`conv-subset` in Chapter :ref:`conv`.
The :func:`toVerilog` callable has the following attribute:
.. attribute:: toVerilog.name
This attribute is used to overwrite the default top-level instance name and the
basename of the Verilog output filename.
.. _ref-conv-user:
User-defined Verilog code
-------------------------
A user can insert user-defined code in the Verilog output by using the
``__verilog__`` hook.
.. data:: __verilog__
When defined within a function under elaboration, the ``__verilog__`` hook
variable specifies user-defined code that should be used instead of converted
code for that function. The user-defined code should be a Python format string
that uses keys to refer to the variables that should be interpolated in the
string. Any variable in the function context can be referred to.
Note that this hook cannot be used inside generator functions or decorated local
functions, as these are not elaborated.
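The interpolation mechanism is ordinary Python ``%``-style string formatting with named keys. A hypothetical illustration (the signal names ``clk`` and ``count`` are made up for this example):

```python
# Suppose the function context contains a clock signal and a counter;
# the user-defined code refers to them by key:
user_code = "always @(posedge %(clk)s) %(q)s <= %(q)s + 1;"

# Interpolating the names from the context amounts to:
context = {'clk': 'clk', 'q': 'count'}
result = user_code % context
```

In actual use, MyHDL supplies the values from the function context during elaboration; the dictionary above only stands in for that context.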

.. _unittest:
************
Unit testing
************
.. _unittest-intro:
Introduction
============
Many aspects of the design flow for modern digital hardware can be viewed as a
special kind of software development. From that viewpoint, it is natural to ask
whether advances in software design techniques can also be applied to hardware
design.
.. index:: single: extreme programming
One software design approach that has received a lot of attention recently is
*Extreme Programming* (XP). It is a fascinating set of techniques and
guidelines that often seems to go against conventional wisdom. On other
occasions, XP simply emphasizes common sense, which doesn't always coincide
with common practice. For example, XP stresses the importance of normal
workweeks, if we are to have the fresh mind needed for good software
development.
It is neither my intention nor my qualification to present a tutorial on
Extreme Programming. Instead, in this section I will highlight one XP concept
that I think is very relevant to hardware design: the importance and
methodology of unit testing.
.. _unittest-why:
The importance of unit tests
============================
Unit testing is one of the corner stones of Extreme Programming. Other XP
concepts, such as collective ownership of code and continuous refinement, are
only possible by having unit tests. Moreover, XP emphasizes that writing unit
tests should be automated, that they should test everything in every class, and
that they should run perfectly all the time.
I believe that these concepts apply directly to hardware design. In addition,
unit tests are a way to manage simulation time. For example, a state machine
that runs very slowly on infrequent events may be impossible to verify at the
system level, even on the fastest simulator. On the other hand, it may be easy
to verify it exhaustively in a unit test, even on the slowest simulator.
It is clear that unit tests have compelling advantages. On the other hand, if we
need to test everything, we have to write lots of unit tests. So it should be
easy and pleasant to create, manage and run them. Therefore, XP emphasizes the
need for a unit test framework that supports these tasks. In this chapter, we
will explore the use of the ``unittest`` module from the standard Python library
for creating unit tests for hardware designs.
.. _unittest-dev:
Unit test development
=====================
In this section, we will informally explore the application of unit test
techniques to hardware design. We will do so by a (small) example: testing a
binary to Gray encoder as introduced in section :ref:`intro-indexing`.
.. _unittest-req:
Defining the requirements
-------------------------
We start by defining the requirements. For a Gray encoder, we want the output
to comply with Gray code characteristics. Let's define a :dfn:`code` as a list
of :dfn:`codewords`, where a codeword is a bit string. A code of order ``n`` has
``2**n`` codewords.
A well-known characteristic is the one that Gray codes are all about:
Consecutive codewords in a Gray code should differ in a single bit.
Is this sufficient? Not quite: suppose for example that an implementation
returns the lsb of each binary input. This would comply with the requirement,
but is obviously not what we want. Also, we don't want the bit width of Gray
codewords to exceed the bit width of the binary codewords.
Each codeword in a Gray code of order ``n`` must occur exactly once in the
binary code of the same order.
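The two requirements together can be captured in a small pure-Python predicate. This is a sketch for reasoning about the specification itself, independent of any MyHDL code:

```python
def is_gray_code(code):
    """Check both requirements on a list of equal-width bit strings."""
    # Requirement 1: consecutive codewords differ in exactly one bit.
    for a, b in zip(code, code[1:]):
        if sum(x != y for x, y in zip(a, b)) != 1:
            return False
    # Requirement 2: every n-bit codeword occurs exactly once,
    # i.e. the codewords are a permutation of all n-bit values.
    n = len(code[0])
    return sorted(int(w, 2) for w in code) == list(range(2 ** n))
```

Note how the lsb-only counterexample ``['0', '1', '0', '1']`` passes the first requirement but fails the second.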
With the requirements written down we can proceed.
.. _unittest-first:
Writing the test first
----------------------
A fascinating guideline in the XP world is to write the unit test first. That
is, before implementing something, first write the test that will verify it.
This seems to go against our natural inclination, and certainly against common
practices. Many engineers like to implement first and think about verification
afterwards.
But if you think about it, it makes a lot of sense to deal with verification
first. Verification is about the requirements only --- so your thoughts are not
yet cluttered with implementation details. The unit tests are an executable
description of the requirements, so they will be better understood and it will
be very clear what needs to be done. Consequently, the implementation should go
smoother. Perhaps most importantly, the test is available when you are done
implementing, and can be run anytime by anybody to verify changes.
Python has a standard ``unittest`` module that facilitates writing, managing and
running unit tests. With ``unittest``, a test case is written by creating a
class that inherits from ``unittest.TestCase``. Individual tests are created by
methods of that class: all method names that start with ``test`` are considered
to be tests of the test case.
We will define a test case for the Gray code properties, and then write a test
for each of the requirements. The outline of the test case class is as follows::
from unittest import TestCase
class TestGrayCodeProperties(TestCase):
def testSingleBitChange(self):
""" Check that only one bit changes in successive codewords """
....
def testUniqueCodeWords(self):
""" Check that all codewords occur exactly once """
....
Each method will be a small test bench that tests a single requirement. To write
the tests, we don't need an implementation of the Gray encoder, but we do need
the interface of the design. We can specify this by a dummy implementation, as
follows::
def bin2gray(B, G, width):
### NOT IMPLEMENTED YET! ###
yield None
For the first requirement, we will write a test bench that applies all
consecutive input numbers, and compares the current output with the previous one
for each input. Then we check that the difference is a single bit. We will test
all Gray codes up to a certain order ``MAX_WIDTH``. ::
def testSingleBitChange(self):
""" Check that only one bit changes in successive codewords """
def test(B, G, width):
B.next = intbv(0)
yield delay(10)
for i in range(1, 2**width):
G_Z.next = G
B.next = intbv(i)
yield delay(10)
diffcode = bin(G ^ G_Z)
self.assertEqual(diffcode.count('1'), 1)
for width in range(1, MAX_WIDTH):
B = Signal(intbv(-1))
G = Signal(intbv(0))
G_Z = Signal(intbv(0))
dut = bin2gray(B, G, width)
check = test(B, G, width)
sim = Simulation(dut, check)
sim.run(quiet=1)
Note how the actual check is performed by a ``self.assertEqual`` method, defined
by the ``unittest.TestCase`` class.
Similarly, we write a test bench for the second requirement. Again, we simulate
all numbers, and put the result in a list. The requirement implies that if we
sort the result list, we should get a range of numbers::
def testUniqueCodeWords(self):
""" Check that all codewords occur exactly once """
def test(B, G, width):
actual = []
for i in range(2**width):
B.next = intbv(i)
yield delay(10)
actual.append(int(G))
actual.sort()
expected = range(2**width)
self.assertEqual(actual, expected)
for width in range(1, MAX_WIDTH):
B = Signal(intbv(-1))
G = Signal(intbv(0))
dut = bin2gray(B, G, width)
check = test(B, G, width)
sim = Simulation(dut, check)
sim.run(quiet=1)
.. _unittest-impl:
Test-driven implementation
--------------------------
With the test written, we begin with the implementation. For illustration
purposes, we will intentionally write some incorrect implementations to see how
the test behaves.
The easiest way to run tests defined with the ``unittest`` framework is to put
a call to its ``main`` method at the end of the test module::
unittest.main()
Let's run the test using the dummy Gray encoder shown earlier::
% python test_gray.py -v
Check that only one bit changes in successive codewords ... FAIL
Check that all codewords occur exactly once ... FAIL
<trace backs not shown>
As expected, this fails completely. Let us try an incorrect implementation that
puts the lsb of the input on the output::
def bin2gray(B, G, width):
### INCORRECT - DEMO PURPOSE ONLY! ###
@always_comb
def logic():
G.next = B[0]
return logic
Running the test produces::
% python test_gray.py -v
Check that only one bit changes in successive codewords ... ok
Check that all codewords occur exactly once ... FAIL
======================================================================
FAIL: Check that all codewords occur exactly once
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_gray.py", line 109, in testUniqueCodeWords
sim.run(quiet=1)
...
File "test_gray.py", line 104, in test
self.assertEqual(actual, expected)
File "/usr/local/lib/python2.2/unittest.py", line 286, in failUnlessEqual
raise self.failureException, \
AssertionError: [0, 0, 1, 1] != [0, 1, 2, 3]
----------------------------------------------------------------------
Ran 2 tests in 0.785s
Now the test passes the first requirement, as expected, but fails the second
one. After the test feedback, a full traceback is shown that can help to debug
the test output.
Finally, if we use the correct implementation as in section
:ref:`intro-indexing`, the output is::
% python test_gray.py -v
Check that only one bit changes in successive codewords ... ok
Check that all codewords occur exactly once ... ok
----------------------------------------------------------------------
Ran 2 tests in 6.364s
OK
.. _unittest-change:
Changing requirements
---------------------
In the previous section, we concentrated on the general requirements of a Gray
code. It is possible to specify these without specifying the actual code. It is
easy to see that there are several codes that satisfy these requirements. In
good XP style, we only tested the requirements and nothing more.
It may be that more control is needed. For example, the requirement may be for a
particular code, instead of compliance with general properties. As an
illustration, we will show how to test for *the* original Gray code, which is
one specific instance that satisfies the requirements of the previous section.
In this particular case, this test will actually be easier than the previous
one.
We denote the original Gray code of order ``n`` as ``Ln``. Some examples::
L1 = ['0', '1']
L2 = ['00', '01', '11', '10']
L3 = ['000', '001', '011', '010', '110', '111', '101', '100']
It is possible to specify these codes by a recursive algorithm, as follows:
#. L1 = ['0', '1']
#. Ln+1 can be obtained from Ln as follows. Create a new code Ln0 by prefixing
all codewords of Ln with '0'. Create another new code Ln1 by prefixing all
codewords of Ln with '1', and reversing their order. Ln+1 is the concatenation
of Ln0 and Ln1.
Python is well-known for its elegant algorithmic descriptions, and this is a
good example. We can write the algorithm in Python as follows::
def nextLn(Ln):
""" Return Gray code Ln+1, given Ln. """
Ln0 = ['0' + codeword for codeword in Ln]
Ln1 = ['1' + codeword for codeword in Ln]
Ln1.reverse()
return Ln0 + Ln1
The code ``['0' + codeword for ...]`` is called a :dfn:`list comprehension`. It
is a concise way to describe lists built by short computations in a for loop.
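As a quick sanity check (repeating the ``nextLn`` definition here so the snippet stands alone), the first few codes come out exactly as listed above:

```python
def nextLn(Ln):
    """ Return Gray code Ln+1, given Ln. """
    Ln0 = ['0' + codeword for codeword in Ln]
    Ln1 = ['1' + codeword for codeword in Ln]
    Ln1.reverse()
    return Ln0 + Ln1

L1 = ['0', '1']
L2 = nextLn(L1)  # ['00', '01', '11', '10']
L3 = nextLn(L2)  # ['000', '001', '011', '010', '110', '111', '101', '100']
```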
The requirement is now that the output code matches the expected code Ln. We use
the ``nextLn`` function to compute the expected result. The new test case code
is as follows::
class TestOriginalGrayCode(TestCase):
def testOriginalGrayCode(self):
""" Check that the code is an original Gray code """
Rn = []
def stimulus(B, G, n):
for i in range(2**n):
B.next = intbv(i)
yield delay(10)
Rn.append(bin(G, width=n))
Ln = ['0', '1'] # n == 1
for n in range(2, MAX_WIDTH):
Ln = nextLn(Ln)
del Rn[:]
B = Signal(intbv(-1))
G = Signal(intbv(0))
dut = bin2gray(B, G, n)
stim = stimulus(B, G, n)
sim = Simulation(dut, stim)
sim.run(quiet=1)
self.assertEqual(Ln, Rn)
As it happens, our implementation apparently produces the original Gray code::
% python test_gray.py -v TestOriginalGrayCode
Check that the code is an original Gray code ... ok
----------------------------------------------------------------------
Ran 1 test in 3.091s
OK