Compilers: Principles, Techniques, and Tools, 2nd Edition Ebook
Compilers: Principles, Techniques, and Tools, known to professors, students, and developers worldwide as the “Dragon Book,” is available in a new edition. Every chapter has been completely revised to reflect developments in software engineering, programming languages, and computer architecture that have occurred since 1986, when the previous edition was published. The authors, recognizing that few readers will ever go on to construct a compiler, retain their focus on the broader set of problems faced in software design and software development.
Total pages: 1035
File Type: PDF
Use of the Book
It takes at least two quarters, or even two semesters, to cover all or most of the material in this book. It is common to cover the first half in an undergraduate course and the second half of the book, stressing code optimization, in a second course at the graduate or mezzanine level. Here is an outline of the chapters:
Chapter 1 contains motivational material and also presents some background issues in computer architecture and programming-language principles.
Chapter 2 develops a miniature compiler and introduces many of the important concepts, which are then developed in later chapters. The compiler itself
appears in the appendix.
Chapter 3 covers lexical analysis, regular expressions, finite-state machines, and scanner-generator tools. This material is fundamental to text processing of all sorts.
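As a taste of the scanner-generator idea Chapter 3 covers, here is a minimal tokenizer sketch in Python. The token names and patterns are illustrative choices of ours, not taken from the book, and Python's `re` tries alternatives in the order listed rather than applying Lex's longest-match rule.

```python
import re

# Each token class is a named regular expression; a master pattern
# combines them and finditer walks the input left to right.
# Token names and the toy grammar are illustrative, not from the book.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("ID",     r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),           # whitespace is matched but discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    tokens = []
    for m in MASTER.finditer(text):
        kind = m.lastgroup        # name of the group that matched
        if kind != "SKIP":
            tokens.append((kind, m.group()))
    return tokens

print(tokenize("x = 42 + y"))
```

A generator such as Lex or Flex performs essentially this translation, compiling the regular expressions into a single deterministic finite automaton rather than trying them one by one.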
Chapter 4 covers the major parsing methods, top-down (recursive-descent, LL)
and bottom-up (LR and its variants).
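To illustrate the top-down method Chapter 4 names, here is a tiny recursive-descent parser for the made-up grammar expr -> term (('+'|'-') term)*, term -> NUMBER; the grammar and function names are our own illustration, not the book's examples.

```python
# Recursive descent: one function per nonterminal, each consuming
# tokens and returning (value, next position). This sketch evaluates
# the expression while parsing it.
def parse_expr(tokens, pos=0):
    value, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] in ("+", "-"):
        op = tokens[pos]
        rhs, pos = parse_term(tokens, pos + 1)
        value = value + rhs if op == "+" else value - rhs
    return value, pos

def parse_term(tokens, pos):
    tok = tokens[pos]
    if not tok.isdigit():
        raise SyntaxError(f"expected number, got {tok!r}")
    return int(tok), pos + 1

print(parse_expr(["7", "+", "3", "-", "2"])[0])
```

Bottom-up (LR) parsers instead build a table-driven shift/reduce automaton, which handles a strictly larger class of grammars than this hand-written style.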
Chapter 5 introduces the principal ideas in syntax-directed definitions and syntax-directed translations.
Chapter 6 takes the theory of Chapter 5 and shows how to use it to generate
intermediate code for a typical programming language.
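A common form of intermediate code of the kind Chapter 6 generates is three-address code. The sketch below walks a toy expression tree and emits one instruction per interior node; the tuple-based tree shape and the temporary-naming scheme are our illustrative assumptions.

```python
import itertools

_temps = itertools.count()            # fresh temporary names t0, t1, ...

def gen(node, code):
    """Emit three-address code for an expression tree.

    A leaf is a variable name (str); an interior node is (op, left, right).
    Returns the name holding the node's value.
    """
    if isinstance(node, str):
        return node
    op, left, right = node
    l = gen(left, code)
    r = gen(right, code)
    t = f"t{next(_temps)}"
    code.append(f"{t} = {l} {op} {r}")
    return t

code = []
gen(("+", "a", ("*", "b", "c")), code)   # the tree for  a + b * c
for line in code:
    print(line)
# emits:  t0 = b * c   then   t1 = a + t0
```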
Chapter 7 covers run-time environments, especially management of the run-time
stack and garbage collection.
Chapter 8 is on object-code generation. It covers the construction of basic blocks, generation of code from expressions and basic blocks, and register allocation.
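Register allocation is commonly cast as graph coloring: two values that are live at the same time interfere and must get different registers. Here is a greedy coloring sketch on a made-up interference graph; the graph, ordering, and register numbering are our illustrative assumptions, not an example from the book.

```python
# Nodes are virtual registers; an edge means the two values are live
# simultaneously, so they cannot share a machine register.
interference = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}

def color(graph):
    assignment = {}
    for node in sorted(graph):        # fixed order keeps the sketch deterministic
        taken = {assignment[n] for n in graph[node] if n in assignment}
        reg = 0
        while reg in taken:           # pick the lowest register not used
            reg += 1                  # by an already-colored neighbor
        assignment[node] = reg
    return assignment

print(color(interference))
```

Production allocators add spilling (when registers run out) and smarter node orderings, but the interference-graph formulation is the same.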
Chapter 9 introduces the technology of code optimization, including flow graphs, data-flow frameworks, and iterative algorithms for solving these frameworks.
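The iterative algorithms Chapter 9 describes repeatedly propagate sets around the flow graph until nothing changes. Below is a minimal round-robin solver for one such framework, live-variable analysis; the three-block control-flow graph and its use/def sets are invented for illustration.

```python
# CFG: block 0 -> 1, block 1 -> 1 (a loop) and 1 -> 2, block 2 exits.
succ = {0: [1], 1: [1, 2], 2: []}
use  = {0: set(),  1: {"i", "n"}, 2: {"i"}}   # variables read before any write
defs = {0: {"i"},  1: {"i"},      2: set()}   # variables written

live_in  = {b: set() for b in succ}
live_out = {b: set() for b in succ}
changed = True
while changed:                    # iterate to a fixed point
    changed = False
    for b in succ:
        # out[b] = union of in[s] over successors; in[b] = use ∪ (out − def)
        out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
        inn = use[b] | (out - defs[b])
        if out != live_out[b] or inn != live_in[b]:
            live_out[b], live_in[b] = out, inn
            changed = True

print(live_in[0])   # variables live on entry to block 0
```

For this graph the solver concludes that only `n` is live on entry to block 0: `i` is defined there before the loop reads it, while `n` is read in the loop but never written.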
Chapter 10 covers instruction-level optimization. The emphasis is on extracting parallelism from small sequences of instructions and scheduling them on single processors that can do more than one thing at once.
Chapter 11 talks about larger-scale parallelism detection and exploitation. Here,
the emphasis is on numeric codes that have many tight loops that range over multidimensional arrays.
Chapter 12 is on interprocedural analysis. It covers pointer analysis, aliasing,
and data-flow analysis that takes into account the sequence of procedure calls
that reach a given point in the code.