A simple lexical analyzer implemented in Python that breaks a given source file into a sequence of tokens.
- Classification of keywords, identifiers, and operators.
- Classification of integer, float, and character data types.
- Single-line comments, introduced by `#`.
- Multi-line comments, delimited by `@@`.
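The token classes above can be sketched with a small regex-based lexer. This is a hypothetical illustration of the rules described (the keyword set and token names are assumptions), not the repository's actual implementation:

```python
import re

# Each pair is (token name, regex). Order matters: comments and floats
# must be tried before operators and plain integers.
TOKEN_SPEC = [
    ("MULTI_COMMENT", r"@@.*?@@"),   # multi-line comments delimited by @@
    ("COMMENT",       r"#[^\n]*"),   # single-line comments starting with #
    ("FLOAT",         r"\d+\.\d+"),
    ("INT",           r"\d+"),
    ("CHAR",          r"'[^']'"),
    ("IDENT",         r"[A-Za-z_]\w*"),
    ("OP",            r"[+\-*/=<>!]+"),
    ("SKIP",          r"\s+"),
]
KEYWORDS = {"if", "else", "while", "for", "return"}  # assumed keyword set

def tokenize(source):
    # re.DOTALL lets the @@ ... @@ pattern span multiple lines.
    pattern = re.compile(
        "|".join(f"(?P<{name}>{regex})" for name, regex in TOKEN_SPEC),
        re.DOTALL,
    )
    tokens = []
    for match in pattern.finditer(source):
        kind, text = match.lastgroup, match.group()
        if kind in ("SKIP", "COMMENT", "MULTI_COMMENT"):
            continue  # whitespace and comments produce no tokens
        if kind == "IDENT" and text in KEYWORDS:
            kind = "KEYWORD"  # reclassify reserved words
        tokens.append((kind, text))
    return tokens
```

For example, `tokenize("x = 3.14 # note\nif y")` classifies `x` and `y` as identifiers, `=` as an operator, `3.14` as a float, `if` as a keyword, and drops the comment.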
- Clone the repository with `git clone https://github.com/UroobaShameem/lexical-analyzer.git`, then `cd lexical-analyzer`.
- Write your source code in the `input.txt` file.
- Run the `main.py` file.
- View the generated tokens in the `tokens.txt` file.