
bitrec's Issues

Need additional site types for PIP fuzzing

The code restricts the sites usable as endpoints for nets when fuzzing pips. Currently for INT_* tiles it restricts them to TIEOFF or SLICEL sites. A number of PIPs (30+) only terminate on sites other than those (and therefore the DFS will not find valid endpoints). Experimentation has shown that adding SLICEM and BUFHCE sites to the list results in all PIPs finding endpoints.

Adding these to the list will require the code that creates cells to host the endpoints to be modified to accommodate the additional site types.

Code running error

[screenshot of the error]

When I execute the example snapshot script with `--family=artix7 --tile=BRAM_L`, I get the error shown in the attached image. Executing other tile types, such as CLBLL_L and DSP_L, can produce errors of the same type, which prevent generation of a correct database. Do you know of a solution?

data_analysis.py/condense_data() relies on set ordering

The routine data_analysis.py/condense_data() iterates across enumerate(features) and across enumerate(bits) and records the order of the enumeration. However, features and bits are both sets, which are unordered. Later in the code (26 lines later) it converts features and bits into lists, and other code in the file then relies on the ordering of the list entries matching the ordering recorded when it iterated across the original sets.

It seems the conversion to lists should happen first, and the enumerated iteration should be across the lists. The current code evidently works in practice, but is iterating across a set guaranteed to give the same order as a later conversion to a list?

See comment in code (reproduced below)...

    # TODO The next 6 lines of code are iterating across sets (which are unordered).
    # This assumes the enumerate ordering in the for loops will be the same order they end up in 25 lines below 
    # when they are turned into lists.
    # Shouldn't the conversion to lists happen before the next 6 lines of code?
    feature_dict = {}
    bit_dict = {}
    # Make mappings from feature names to feature numbers
    # E.g. feature_dict['C:1:DSP48E1:CARRYININV:CARRYIN'] = 0, feature_dict['C:Tile_Pip:DSP_R.DSP_IMUX44_0->DSP_1_A22'] = 1, ...
    for i,f in enumerate(features):
        feature_dict[f] = i
    # Make mappings from bit names to bit numbers
    # E.g. bit_dict['27_288'] = 0, bit_dict['26_43'] = 1, ...
    for i,b in enumerate(bits):
        bit_dict[b] = i

...

    features = list(features)
    bits = list(bits)
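A minimal sketch of the suggested fix (the feature and bit names are examples taken from the comment above; the small `features` and `bits` sets stand in for the ones built earlier in condense_data()): freeze the sets into lists first, then build the dicts from the lists, so the recorded indices match the list positions by construction.

```python
# Example sets standing in for the ones built in condense_data()
features = {'C:1:DSP48E1:CARRYININV:CARRYIN',
            'C:Tile_Pip:DSP_R.DSP_IMUX44_0->DSP_1_A22'}
bits = {'27_288', '26_43'}

# Freeze the iteration order once, before any enumeration.
features = list(features)
bits = list(bits)

# Now the mapped indices are list positions by construction,
# with no reliance on set iteration order being stable.
feature_dict = {f: i for i, f in enumerate(features)}
bit_dict = {b: i for i, b in enumerate(bits)}

assert all(features[i] == f for f, i in feature_dict.items())
assert all(bits[i] == b for b, i in bit_dict.items())
```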

How are offset values in tilegrid.json computed?

I am trying to understand how the offset values in tilegrid.json are obtained.

Looking at the function get_byte_offset in rapid_tilegrid.py, the variable height, originally defined in get_db.tcl, is used together with the variables crc and words_per_int to compute the offset of target tiles.

However, their values seem heuristic, and I cannot find how they are defined or why they are used that way.

Can you please provide some explanation for this part?

Pip fuzzing - re-use of Node in uphill and downhill path

The following occurs for at least the GFAN0->>BYP_ALT1 pip: both the uphill DFS and the downhill DFS re-use the same node in the originating tile, leading to an unroutable net. It is unclear how often this happens, but printing the uphill and downhill paths for GFAN0->>BYP_ALT1 shows it happens for at least that one PIP.

Such a net is unroutable by Vivado. Although it presumably doesn't affect the results, it does waste time. Is there any reason to add code to avoid it? Unclear.

But, if so, there are two ways to do so:

  1. Leave the searching unchanged, but then check each uphill/downhill pair found for duplicate nodes and discard any pair that re-uses a node. This still allows bogus routes to be found by dfs(), but prevents Vivado from wasting time trying to route them.

  2. Or, on the downhill search, prevent it from using any nodes used by the uphill search. A code snippet to do this is below. This requires a check against that list every time a node is visited, which might add to the run time. But it would avoid ever creating such an illegal net, which Vivado must otherwise attempt (and fail) to route.

# In run_pip_generation():
...
                avoid = [ str(P.getEndNode()) ]                
                ret_up = dfs_main(P,"UP",max_depth, avoid)
                avoid = [ str(P.getStartNode()) ]
                if ret_up is not None:
                    avoid += ret_up[0]
                ret_down = dfs_main(P,"DOWN",max_depth, avoid)
...

# In dfs():
...
    elif sN in avoid:
        return None
...
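Option 1 above can be sketched in a few lines, assuming `uphill` and `downhill` are lists of node name strings such as those collected by dfs_main() (the node names below are illustrative):

```python
def paths_are_disjoint(uphill, downhill):
    """Return True if the uphill and downhill paths share no node."""
    return not set(uphill) & set(downhill)

# A pair that re-uses a node in the originating tile would be discarded.
up = ["INT_L_X0Y100/GFAN0", "INT_L_X0Y100/BYP_ALT1"]
down = ["INT_L_X0Y100/BYP_L1", "INT_L_X0Y100/GFAN0"]
assert not paths_are_disjoint(up, down)
```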

Some PIPs can only be solved with a SLICEM LUTRAM endpoint

Some PIPs, such as "FAN_ALT0->>FAN_L0" cannot be solved with the current code. The only place "FAN_ALT0->>FAN_L0" leads when going downhill looking for site pins is to a SLICEM AI pin.

The original code did not allow SLICEM pins at all to avoid having to correctly configure SLICEM's with all their special cases.

In #12 it was suggested that allowing SLICEM's but restricting the allowable site pins to be those on SLICEL's would let SLICEM's be used but allow them to be treated like SLICEL's. That worked for PIPs like "GFAN0->>BYP_ALT1" which tie to SLICEL-compatible pins on a SLICEM. Sample code for this fix is given in #12, has been tested, and works for a subset of previously unsolved PIPs.

But there are other PIPs, like "FAN_ALT0->>FAN_L0", which tie only to SLICEM-only pins (any of {WE, AI, BI, CI, DI}) and so will not be reconstructed correctly.

Proposed fix: create a single LUTRAM instance that is legal (obeys all DRC rules) that is always used when any of these 5 pins are used. Difficulty level: low?

Sensitivity_analysis_v2: doesn't check both directions for one additional feature

In sensitivity_analysis_v2() the code looks for a pair of tiles, T1 and T2, whose sets of on features differ by exactly one feature.

That is, if T1's feature set minus T2's feature set is a single feature, that tells us which bits program the additional feature.

It would seem the code should also check T2 - T1, to see whether T2 contains the one additional feature. It doesn't.
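A hedged sketch of the suggested symmetric check (`t1` and `t2` stand for the feature sets of the two tiles; the helper name is hypothetical, not from data_analysis.py):

```python
def one_extra_feature(t1, t2):
    """If one tile has exactly one feature the other lacks (and nothing
    else differs), return that feature; otherwise return None."""
    if len(t1 - t2) == 1 and not (t2 - t1):
        return (t1 - t2).pop()
    if len(t2 - t1) == 1 and not (t1 - t2):   # the missing T2 - T1 direction
        return (t2 - t1).pop()
    return None

assert one_extra_feature({"A", "B"}, {"A"}) == "B"
assert one_extra_feature({"A"}, {"A", "B"}) == "B"   # found only with the added check
assert one_extra_feature({"A", "B"}, {"A", "C"}) is None
```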

PIP fuzzing - some paths may require use of SLICEM pin

Once again, this was found when analyzing why the GFAN0->>BYP_ALT1 pip would never solve (at least check_pip_files kept it in the pip_list, regardless of how many iterations you run).

There are only two legal downhill paths for GFAN0->>BYP_ALT1. One of them bounces around (literally, using bounce PIPs) a few times before leaving the tile. The other leaves immediately but terminates only at the input pin of a SLICEM. The current code will not use a SLICEM as a legal terminating slice (see is_valid_SP()); the reason given is to avoid all the DRC issues associated with using a SLICEM to drive arbitrary other sites.

A tested solution is to allow a SLICEM to be used but restrict the usable pins to those that also exist on a SLICEL. This allows a SLICEM to be used just like a SLICEL, providing additional options for the fuzzer.

Sample code for this can be found in mypips.py. Snippets below:

# pips_rapid.py:

# In routine main():
    # Build a list of allowable SLICEM pins (those that also exist on a SLICEL)
    site = getTileOfType(device, "CLBLL_L").getSites()[0]
    assert str(site.getSiteTypeEnum()) == "SLICEL"
    allowedSLICEMpins = [site.getPinName(i) for i in range(site.getSitePinCount())]

...

# In routine is_valid_sp():
    elif ST == "SLICEM" and SP.getPinName() not in allowedSLICEMpins:
        return 0
    elif ST not in ["SLICEL", "SLICEM", "TIEOFF"] and TT != tile_type :
        return 0
    if str(SP).split("/")[-1] in banned_pin_list:
        return 0

Pip fuzzing - remaining wires in INT_L tile

By iteration 135, the pip fuzzer has solved for all PIPs in an INT_L tile except for 8.

The 8 unsolved pips are:

GFAN0->>BYP_ALT1
LV_L18<<->>LH0
LV_L0<<->>LH0
LVB_L0<<->>LVB_L12
LV_L18<<->>LH12
LV_L0<<->>LH12
LH0<<->>LH12
LV_L0<<->>LV_L18

The top one is different from the others in that it is uni-directional and doesn't deal with long lines.

All the others are bi-directional and deal with long lines.
However, these are not the only bi-directional PIPs in an INT_L tile associated with long lines.
Thus, the problem would not seem to be long lines themselves, but rather these specific long lines.

What is unknown is whether these pips are actually solved for
or whether the problem is that the code that checks if they are disambiguated needs fixing.

Figuring this out could help complete the pip fuzzer.

Solving for GFAN0->>BYP_ALT1

This is related to #6 and is specifically about GFAN0->>BYP_ALT1.

There are two downhill pips from this.

  1. The first one is BYP_ALT1->>BYP_BOUNCE1,
    and from there many solutions are found with a depth 4 or 5 search.

  2. The second one is BYP_ALT1->>BYP_L1
    and it ONLY terminates into a SLICEM AX input.
    This is not a valid site pin in the current code
    and so will not be used.

Thus, restricting the PIP fuzzer to only use SLICEL or TIEOFF sites
(which the code does) means this pip can NEVER be solved, because
ALL solutions will include BYP_ALT1->>BYP_BOUNCE1 on the downhill side.

One solution would be to allow SLICEM site pins but restrict
them to pins that are common with SLICELs.
That is, a SLICEM can be used identically to a SLICEL;
there is no need to use the extended SLICEM functionality.

I have tested this with my own recursive search and modified version
of is_valid_SP() and it does find multiple disjoint solutions for the
downhill pips. :-)

However, this has also shown that many of the uphill PIP solutions also use
the BYP_ALT1->>BYP_BOUNCE1 pip.
If the pip order is randomized enough times an uphill solution which does not use
that particular PIP will eventually be found. But, it may take many iterations.

I was curious if there was a deterministic way to solve for this.
It seems the requirements are that you need
a. a pair of uphill solutions that are disjoint from each other (have no common pips),
b. a pair of downhill solutions that are disjoint from each other (have no common pips), and
c. these two pairs of solutions must also be disjoint from one another.

It may be computationally intense, but conceptually it should result in a solution for a given PIP
in exactly one pass if such a solution exists. The trade-off is fewer iterations
in Vivado against a possibly more computationally intense python search for a solution.

I have coded up a test program as follows and it does find a solution for this otherwise impossible to solve pip in one iteration:

  1. For a given search depth, find all solutions uphill and put them in a list (usolutions).
  2. Do the same for downhill solutions and put them in a separate list (dsolutions).
  3. Identify ALL pair-wise disjoint solutions from usolutions and put them in a new list.
  4. Do the same from dsolutions.
  5. Now, find some pair from the list from 3 that is disjoint with some pair from 4.

The result is a guaranteed solution for the PIP because you have two uphill solutions and two downhill solutions,
and all four are disjoint from one another. The test program does find such a solution for this pip.

If usolutions and dsolutions were sorted by length of solution, it would further bias steps 3-5 toward finding the smallest solution possible, further reducing the number of needed Vivado runs.
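The five steps above can be sketched as follows (paths are modeled as lists of PIP name strings; the toy data is illustrative, not real PIP names):

```python
from itertools import combinations

def disjoint(*paths):
    """True if no PIP appears in more than one of the given paths."""
    seen = set()
    for p in paths:
        s = set(p)
        if seen & s:
            return False
        seen |= s
    return True

def find_solution(usolutions, dsolutions):
    # Sorting by length biases the search toward the smallest solution.
    usolutions = sorted(usolutions, key=len)
    dsolutions = sorted(dsolutions, key=len)
    # Steps 3-4: all pair-wise disjoint pairs, uphill and downhill.
    upairs = [p for p in combinations(usolutions, 2) if disjoint(*p)]
    dpairs = [p for p in combinations(dsolutions, 2) if disjoint(*p)]
    # Step 5: an uphill pair that is also disjoint from a downhill pair.
    for u1, u2 in upairs:
        for d1, d2 in dpairs:
            if disjoint(u1, u2, d1, d2):
                return (u1, u2), (d1, d2)
    return None

usolutions = [["u1", "u2"], ["u3", "u4"], ["u1", "u3"]]
dsolutions = [["d1", "d2"], ["d3", "d4"], ["d1", "d3"]]
assert find_solution(usolutions, dsolutions) is not None
```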

Re-loading checkpoint after write_bitstream

It seems that in fuzzer.py:fuzzer() the code is trying to go through and set a single property:value pair on each cell. Doing this minimizes bit pollution caused when other properties are also set on the same cell (you can't separate out the bits).

To do so, the code:

  1. Fills the design with tiles of the needed type and then places a single cell of the proper type, and then writes a checkpoint (in the checkpoints directory) of that design.
  2. It then moves on and applies a single property:value combination to each cell, incrementing a cell counter as it goes.

For some tile types, however, there are so many properties and values that you run out of cells to apply them to. The code detects this (when calling get_next_count()), writes a bitstream for the existing design, and then starts a new design so it can continue applying properties to cells.

Except it doesn't start a new design. Rather, it simply (a) resets its cell counter and (b) continues applying property values, starting again with cell C_0. Thus the properties accumulate: each cell in later specimens contains not just one set property:value combination but the accumulation of the previous specimens' property:value pairs as well.

It would seem the proper approach would be to reload the checkpoint saved above (with the cells placed onto the tiles but in their default states) before proceeding after a write_bitstream. The code does not do that. The fix would be trivial: in get_next_count(), after calling gen_bitstream(), call open_checkpoint() and then disable_drc().
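The effect can be modeled in a few lines (all names below are illustrative, not from fuzzer.py): without a state reset after each "bitstream write", property:value pairs from earlier specimens accumulate on the cells.

```python
def run_specimens(n_cells, props, reload_checkpoint):
    """Apply one property:value pair per cell; emit a 'specimen' (a
    snapshot of all cell properties) whenever the cells run out."""
    cells = {i: {} for i in range(n_cells)}      # fresh checkpoint state
    specimens = []
    counter = 0
    for prop, val in props:
        if counter == n_cells:                   # out of cells: write bitstream
            specimens.append({i: dict(c) for i, c in cells.items()})
            counter = 0
            if reload_checkpoint:                # the proposed fix
                cells = {i: {} for i in range(n_cells)}
        cells[counter][prop] = val
        counter += 1
    specimens.append({i: dict(c) for i, c in cells.items()})
    return specimens

buggy = run_specimens(2, [("A", 1), ("B", 2), ("C", 3)], reload_checkpoint=False)
fixed = run_specimens(2, [("A", 1), ("B", 2), ("C", 3)], reload_checkpoint=True)
assert buggy[1][0] == {"A": 1, "C": 3}   # cell 0 accumulated two properties
assert fixed[1][0] == {"C": 3}           # cell 0 holds exactly one
```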
