
qiskit-experiments's Introduction

Qiskit Experiments


Qiskit Experiments is a repository of tools for building, running, and analyzing experiments on noisy quantum computers using Qiskit.

To learn more about the package, you can see the most up-to-date documentation corresponding to the main branch of this repository or the documentation for the latest stable release.

Contribution Guidelines

If you'd like to contribute to Qiskit Experiments, please take a look at our contribution guidelines. This project adheres to Qiskit's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs. Please join the Qiskit Slack community and use the #experiments channel for discussion and simple questions. For questions better suited to a forum, use the Qiskit tag on Stack Exchange.

Authors and Citation

Qiskit Experiments is the work of many people who contribute to the project at different levels. If you use Qiskit Experiments, please cite our paper as per the included citation file.

License

Apache License 2.0

qiskit-experiments's People

Contributors

ahayman314, arnaucasau, bicycle315, catornow, chriseclectic, conradhaupt, coruscating, dekelmeirom, dependabot[bot], eendebakpt, eggerdj, eliarbel, eric-arellano, gadial, itamargoldman, itoko, jakelishman, jyu00, kevinsung, laurinfischer, merav-aharoni, mriedem, mtreinish, nayan2167, nkanazawa1989, shellygarion, thaddeus-pellegrini, tsafrira, wshanks, yaelbh

qiskit-experiments's Issues

Calculation of ys_lower and ys_upper

plot_curve_fit contains the following lines:

            params_upper = [param + error for param, error in zip(fit_params, fit_errors)]
            params_lower = [param - error for param, error in zip(fit_params, fit_errors)]
            ys_upper = func(xs, *params_upper)
            ys_lower = func(xs, *params_lower)

Consider the case where func increases with fit_params[0] and decreases with fit_params[1]. Then ys_upper and ys_lower lose their meaning. In other words, one needs to check all 2^(number of parameters) combinations, where in each combination every parameter takes either its maximum or its minimum value, and pick for ys_upper the pointwise maximum of func over all combinations and for ys_lower the pointwise minimum. Even this is correct only if func is monotonic (either increasing or decreasing) in each parameter.
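A minimal sketch of this corner search (the helper name confidence_band is hypothetical, and it still assumes func is monotonic in each parameter over the error interval):

import itertools
import numpy as np

def confidence_band(func, xs, fit_params, fit_errors):
    """Evaluate func at every +/- error corner and take the pointwise extrema."""
    corners = []
    for signs in itertools.product([-1, 1], repeat=len(fit_params)):
        params = [p + s * e for p, s, e in zip(fit_params, signs, fit_errors)]
        corners.append(func(xs, *params))
    corners = np.asarray(corners)
    return corners.min(axis=0), corners.max(axis=0)  # ys_lower, ys_upper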

Build and improve API docs

Currently the API docs index is empty so none of our documentation is being built by sphinx (which also means the API docs aren't being tested for bugs in the doc strings).

Building API docs

Some things we need to do:

  • Add auto summaries to the API docs and module init doc strings to build documentation
  • Add build for tutorial notebooks
  • Add nice way of citing references in API docs (maybe sphinxcontrib-bibtex?)
  • Add proper documentation to experiment class docstrings

Experiment Documentation

Currently all our experiment classes are missing documentation and just have one-line placeholders like "Some experiment class". The experiment class docstrings should all include a moderately detailed description of the experiment. They should allow someone not already familiar with the experiment to learn why and how to use it, and should cite relevant papers so people can look at them for further details.

Existing experiments that need documentation:

  • T1
  • RB
  • IRB
  • Parallel experiment
  • Batch experiment

Improve code for bounds in RBAnalysis._setup_fitting

The current code consists of the following lines, related to bounds calculation:

        user_bounds = self._get_option("bounds")
        ...
        fit_option = {
            "p0": {
                 ...
            },
            "bounds": {
                "a": user_bounds["a"] or (0.0, 1.0),
                "alpha": user_bounds["alpha"] or (0.0, 1.0),
                "b": user_bounds["b"] or (0.0, 1.0),
            },
        }

The default bound values are already set in _default_options, so they don't need to be entered again here; what's called user_bounds is in fact either the user's bounds or the defaults. In short, this could be shortened to

fit_option = {
    "p0": {
        ...
    },
    "bounds": self._get_option("bounds"),
}

Refrain from accessing private members

Here are some code lines, extracted from composite_experiment.py:

for expr in experiment_data._composite_expdata:
    sub_types.append(expr._experiment._type)
    sub_ids.append(expr.experiment_id)
    sub_qubits.append(expr.experiment().physical_qubits)

The first and third lines inside the loop refer to the experiment that's in expr (expr itself, despite its name, is an experiment data object and not an experiment). The third line correctly retrieves the experiment using the experiment method (should it become a property?), whereas the first line does the same thing by directly accessing the _experiment private data member of expr.

We should search the code for all instances of ._ that are not preceded by self and replace them with calls to getters and setters.

By the way, to get the experiment data class of an experiment, we write experiment.__experiment_data__; to get the analysis class, we write experiment.analysis(). Perhaps it's a similar situation.
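A throwaway sketch of such a search (a hypothetical helper script, not part of the repository; the regex is only a rough filter):

import pathlib
import re

# Matches ._name accesses whose receiver is not literally "self".
PRIVATE_ACCESS = re.compile(r"(?<!self)\._[A-Za-z]\w*")

for path in pathlib.Path("qiskit_experiments").rglob("*.py"):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if PRIVATE_ACCESS.search(line):
            print(f"{path}:{lineno}: {line.strip()}")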

Scaling in figures of T1 and T2Ramsey

Here's how a graph looks (see attached image): there's an automatic scaling mechanism that detects how to scale (by 1e-5 in this example). It would be nice if the label were, for example, Delay (microseconds) instead of Delay (s).
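A hedged sketch of the idea with Matplotlib (illustrative data and values, not the package's plotting code): scale the data once and label the axis accordingly.

import matplotlib.pyplot as plt
import numpy as np

delays = np.linspace(0.0, 5e-5, 20)      # delays in seconds
signal = np.exp(-delays / 2e-5)          # fake T1-like decay for illustration

ax = plt.gca()
ax.plot(delays * 1e6, signal, "o")       # plot in microseconds
ax.set_xlabel("Delay (microseconds)")    # label matches the chosen scaling
plt.show()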

Verify correctness of run options in tests

Recently there has been a change in BaseExperiment.run. The options argument should now consist only of run options, unlike before, when transpile and analysis options were also included. Verify that calls to run in the tests pass only run options; if not, fix them. The fix will involve not only removing the non-run options but also setting them in the new way, using new methods like set_analysis_options.

Note that passing a non-existent run option does not trigger an error, hence the importance of this issue. This behavior is expected to change, either in the experiments module or in the backends.

Conversion error in T2Star when calculating p0

Information

  • Qiskit Experiments version: main
  • Python version: 3.8
  • Operating system: Linux

What is the current behavior?

The bug occurs when the conversion factor is not 1 (for example when the unit is ms), and the user did not set p0 (i.e., user_p0 is None). In _run_analysis we have the lines:

        si_xdata = xdata * conversion_factor
        t2star_estimate = np.mean(si_xdata)

And later in _t2star_default_params we have:

 t2star = t2star_input * conversion_factor

So we've multiplied twice by conversion_factor, resulting in an incorrect p0 and subsequently a failing fit.

Steps to reproduce the problem

Write tests for this case.

What is the expected behavior?

Suggested solutions
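One possible direction, as a sketch with hypothetical variable names: apply the conversion factor once and reuse the already-converted estimate when building p0.

import numpy as np

xdata = np.array([10.0, 20.0, 40.0, 80.0])   # delays in ms (illustrative)
conversion_factor = 1e-3                      # ms -> s

si_xdata = xdata * conversion_factor
t2star_estimate = np.mean(si_xdata)           # already in seconds

# When building p0, do NOT multiply by conversion_factor again:
p0_t2star = t2star_estimate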

Warnings when running the tests

Running the tests, the following warnings appear:

Adding a job from a backend (qasm_simulator) that is different than the current ExperimentData backend (fake_paris).
PendingDeprecationWarning: The `QasmSimulator` backend will be deprecated in the future. It has been superseded by the `AerSimulator` backend.

Exception handling of analysis

What is the expected behavior?

Currently an error raised in the analysis routine is not caught by experiment.run, and a failing fit can crash the code without returning any result object. Once this happens, the user cannot take any action, since the data processing chain is tightly bound to the analysis flow.

The error should be handled gracefully, and the user should be able to get a result object with at least the raw measured data.
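A minimal sketch of the requested behavior, using a hypothetical wrapper rather than the actual BaseAnalysis/BaseExperiment code:

def run_analysis_safely(analysis, experiment_data, **options):
    """Run the analysis, but never lose the raw measured data if the fit fails."""
    try:
        analysis.run(experiment_data, **options)
    except Exception as err:  # pylint: disable=broad-except
        # Record the failure instead of propagating it, so the caller still
        # receives experiment_data containing the raw measured counts.
        print(f"Analysis failed: {err!r}; returning raw data only.")
    return experiment_data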

Parallel and batch experiment tests

What is the expected behavior?

Now that we have some concrete experiment implementations, we need to add tests for the parallel and batch experiments.

Circuit result object

What is the expected behavior?

Currently the experiment data only takes the metadata and counts from the returned data field.
https://github.com/Qiskit/qiskit-experiments/blob/fbe19dd01fbc8e572a27b8d9a06d1bed5a58f7da/qiskit_experiments/experiment_data.py#L176-L180

However, it also contains other useful information, such as creg_size, meas_level, meas_return, etc., and this information can be consumed by the data processor. This free-form dict should be formatted as a data class or some dict-wrapper class (e.g. CircuitResult) so that we can guarantee these fields are extracted from the returned data.
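A sketch of what such a wrapper could look like; the field names just mirror the ones mentioned above and are not an agreed design.

from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class CircuitResult:
    """Formatted view of one entry of the returned data field."""
    counts: Optional[Dict[str, int]] = None
    metadata: Dict[str, Any] = field(default_factory=dict)
    creg_size: Optional[int] = None
    meas_level: Optional[int] = None
    meas_return: Optional[str] = None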

Spectroscopy qubits don't pass to CurveAnalysisResult

What is the current behavior?

The following lines are taken from CurveAnalysis._run_analysis:

        # TODO update this with experiment metadata PR #67
        try:
            self.__qubits = experiment_data.data(0)["metadata"]["qubits"]
        except KeyError:
            pass

But for spectroscopy, the metadata key is qubit (singular).

Suggested solutions

This is anyway temporary until #67 is merged. But I'm worried because I'd expect tests to fail, so possibly something is wrong with the tests or with their coverage.
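Until #67 lands, a tolerant lookup would be a small stopgap; a sketch with an illustrative metadata entry (not the actual CurveAnalysis code):

metadata = {"experiment_type": "QubitSpectroscopy", "qubit": 0}  # example entry

qubits = metadata.get("qubits")
if qubits is None and "qubit" in metadata:
    qubits = [metadata["qubit"]]  # spectroscopy stores a single qubit under "qubit"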

Standardize analysis options

Different experiments use different names for the same analysis options. For users' and developers' convenience, it makes sense to apply (maybe even enforce, in some software way) a standard. Standards do have a downside, however, in that they hide nuances that differ slightly between experiments, and the standard can also be a sub-optimal choice for a specific experiment.

Need to decide:

  • Do we want a standard?
  • If yes: what should it be?
  • Also if yes: do we want to enforce it? If yes then how?
  • And even if no: do we want at least to work on making some of the options of some of the experiments more similar?

Keep metadata in data processor

What is the expected behavior?

The data processor currently discards the metadata attached to a circuit result. It could be kept as a data action instance variable, which would be helpful for processing some data. For example, "outcome" and "qubit" can be extracted from the metadata and used by a data action that calculates a probability.

Composite experiment with existing sub-experiments

Suppose that I run a T1 experiment on qubit 3. Then I want to run the T1 experiment again, this time on qubits 3 and 5 in parallel, and I want to reuse the same experiment data for qubit 3, namely, add the results of the new experiment to those of the old one. The current code of CompositeExperiment always assigns fresh experiment data to the sub-experiments, even if they have existing data from before.

Reflection about the fake backends

We have two types of fake backends:

  • Those that are used in the spectroscopy and Rabi tests, and only look at the circuit's metadata.
  • Those that are used in the T1 and T2* tests, and look at the circuit's instructions, half-simulating them (still under assumptions about the circuit's structure).

Not sure which one to prefer. The first type is simpler, and the second type half-tests the circuits.

Migration guide

What is the expected behavior?

We need to write a migration guide for migrating from Qiskit Ignis to Qiskit Experiments.

Should `BaseExperiment` add basic experiment metadata

Currently the BaseExperiment class doesn't add any metadata to circuits; subclasses have to do that when defining their circuits method. I am wondering if it would make sense for the base class to add minimal metadata to all circuits automatically (probably just the experiment type and physical qubits) so subclasses only have to add the specific metadata they need for analysis.

One way to do this would be to rename the current abstract circuits method to _circuits, and then add a non-abstract circuits method like:

class BaseExperiment(ABC):
    ...

    def circuits(self, backend=None, **circuit_options):
        """doc str"""
        circuits = self._circuits(backend=backend, **circuit_options)
        for circ in circuits:
            if not circ.metadata:
                circ.metadata = {}
            circ.metadata['experiment_type'] = self._type
            circ.metadata['experiment_qubits'] = self.physical_qubits
        return circuits

    @abstractmethod
    def _circuits(self, backend=None, **circuit_options):
        """doc str"""

We would probably still want to override this method in subclasses just to add explicit kwargs and change the docstring for any required options (which gives an annoying pylint warning that must be disabled). So in this case subclassing might look like:

class MyExperiment(BaseExperiment):
    """My experiment"""
    # pylint: disable = arguments-differ

    def _circuits(self, backend=None, option1=None, option2=None):
        """Generate my experiment circuits
        Args:
            backend: blah
            option1: blah blah
            option2: blah blah blah
        Returns:
            list: circuits
        """
        circ1, circ2 = something
        circ1.metadata = {'option': option1}
        circ2.metadata = {'option': option2}
        return [circ1, circ2]

    def circuits(self, backend=None, option1=None, option2=None):
        """Generate my experiment circuits
        Args:
            backend: blah
            option1: blah blah
            option2: blah blah blah
        Returns:
            list: circuits
        """
        return super().circuits(backend=backend, option1=option1, option2=option2)

Integrate experiments and calibrations

What is the expected behavior?

Users should be able to execute experiments within the context of the calibration framework. There are currently two proposals for this integration. See PRs #80 and #79 which also discuss the pros and cons.

Class docstring template for curve fit analysis.

This is the curve analysis class docstring template I wrote with @eggerdj:

    r"""Single line description of this analysis.

    Overview
        This analysis takes two series. These series are fit by the ...
        You can write technical aspect or add some reference here.

    Fit Model
        The fit is based on the following functions.

        .. math::

            F_1(x_1) &= a x_1^2 + b x_1 + c  ... {\rm Experiment 1}\\
            F_2(x_2) &= d x_2^2 + e x_2 + c  ... {\rm Experiment 2}

    Fit Parameters
        - :math:`a`: Description of parameter a
        - :math:`b`: Description of parameter b
        ...

    Initial Guesses
        - :math:`a`: This parameter is estimated by :math:`\sqrt{y_1 - c - b x_1} / x_1` where ...
        - :math:`b`: This parameter is estimated by ...  some very very
          long description.  # This requires exactly two spaces before the sentence to ignore line feed.
        ...

    Bounds
        - :math:`a`: [-1, 1]
        - :math:`b`: [min(:math:`x_1`), max(:math:`x_1`)]
        ...

    """

Parameters can be written in LaTeX format for readability, e.g. \sigma, but the representation should be carefully chosen so that the user can easily find the corresponding parameter in the analysis result.

This template will nicely show all the information that a user may need. Below is an example from SpectroscopyAnalysis.

(screenshot of the rendered SpectroscopyAnalysis docstring)

Combining expdata with the same xdata

exp_data = exp.run(backend=...)
exp.run(backend=..., experiment_data=exp_data)

If the data points are x_1, ..., x_n, then the xdata of the second execution is x_1, ..., x_n, x_1, ..., x_n. This means:

  • The fitter fits 2n points instead of n points with better std. Is this what we want? If not, we need to merge the y's of x's that appear in both the first and second executions.
  • The plot is not nice when the x's are not sorted (and they are not sorted in this case, because x_n > x_1). We should sort when plotting anyway, also for the case of a single execution where the user provided unsorted x's. This can be done by changing the line in plot_curve_fit to ax.plot(sorted(xs), [y for _, y in sorted(zip(xs, ys_fit))], **plot_opts). However, even then the plot is not nice in our case, because we have pairs of equal x's, resulting in vertical lines. So, if we choose not to merge the y's of equal x's, we should at least do it when plotting (see the sketch below).

What do you think, to merge or not to merge?
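If we do decide to merge, a sketch of averaging the y values of repeated x values (a hypothetical helper, not part of the package):

import numpy as np

def merge_duplicate_x(xs, ys):
    """Average the y values of identical x values and return sorted arrays."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    unique_xs = np.unique(xs)                                      # unique and sorted
    merged_ys = np.array([ys[xs == x].mean() for x in unique_xs])
    return unique_xs, merged_ys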

Variable names in RB tests

RB tests contain the lines:

experiment_obj = rb_exp.run(backend)
exp_data = experiment_obj.experiment

However, run returns an object of type ExperimentData, and its data member experiment is of type Experiment, so the variable names are misleading.

Analysis options overridden by default options

Still need to verify that there is really a bug here. It looks that way from the following lines, taken from BaseAnalysis.run:

        analysis_options = self._default_options()
        analysis_options.update_options(**options)

I see two potential issues here:

  • The first line retrieves the default options instead of the current ones, hence overriding the current options with the defaults.
  • Possibly the second line overrides the default options.

BaseMetadata

What is the expected behavior?

As I wrote here, having formatted metadata will be useful for extracting x and y values, see #23. Currently, the x value appears under a different name in each PR ("delay" in #5, "meas_basis" in #7, "xdata" in #18), so the naming rule is up to the person who implements the module. Though this improves the readability of the metadata, it will be a real headache when writing the analysis superclass.

Here I propose to define a dataclass with some helper methods:

import dataclasses
from abc import abstractmethod
from typing import Any, List


@dataclasses.dataclass
class ExperimentMetadata:
    experiment_type: str
    qubits: List[int]
    exp_id: str = None

    def to_dict(self):
        return dataclasses.asdict(self)

    def check_entry(self, **series_kwargs):
        return all(self.to_dict()[key] == value for key, value in series_kwargs.items())

    @abstractmethod
    def get_x_value(self) -> Any:
        ...

We assume we can identify an experiment entry with x_value and series, i.e., x_value is the horizontal axis of the graph, while series indicates the label of a line. Some experiments may have only a series; values can be provided by a method so that we don't need to fill the metadata with empty values (we can still guarantee that the extraction method proposed in #23 can access the values).

The extraction method may become

def extract_xy_values(exp_data: ExperimentData, **series: str):

since the x value is provided by the metadata itself. series becomes kwargs because it may be defined by a dictionary.

# e.g. QPT
extract_xy_values(exp_data, meas_basis=('X',), prep_basis=('Xp',))

The .check_entry method will return True if the input kwargs match the metadata.

I assume we can cover almost all typical experiments with the three subtypes below:

No scan:

Discriminator experiment

@dataclasses.dataclass
class DiscriminatorExperiment(ExperimentMetadata):
    prep_state: str
    
    def get_x_value(self) -> float:
        return None

extract_xy_values(exp_data, prep_state='00')

Process tomography

@dataclasses.dataclass
class TomographyMetadata(ExperimentMetadata):
    meas_basis: str
    prep_basis: str
    
    def get_x_value(self) -> float:
        return None

extract_xy_values(exp_data, meas_basis=('X',), prep_basis=('Xp',))

Line scan:

Interleaved randomized benchmarking

@dataclasses.dataclass
class RBMetadata(ExperimentMetadata):
    n_clifford: float
    interleaved: bool
    
    def get_x_value(self) -> float:
        return self.n_clifford

extract_xy_values(exp_data, interleaved=True)

T1 measurement

@dataclasses.dataclass
class T1Metadata(ExperimentMetadata):
    delay: int
    
    def get_x_value(self) -> float:
        return self.delay

extract_xy_values(exp_data)

Line scan with multiple series

Hamiltonian tomography

@dataclasses.dataclass
class HamTomographyMetadata(ExperimentMetadata):
    pulse_duration: int
    meas_basis: str
    control_state: int
    
    def get_x_value(self) -> float:
        return self.pulse_duration

extract_xy_values(exp_data, meas_basis='X', control_state=0)

`CompositeExperiment` discards calibrations

Information

  • Qiskit Experiments version: main branch as of writing
  • Python version: 3.9
  • Operating system: Ubuntu

What is the current behavior?

Pulse calibrations are not carried over by CompositeExperiment.

Steps to reproduce the problem

The following gives a good example:

from qiskit_experiments.composite import ParallelExperiment
from qiskit_experiments.calibration.experiments import Rabi

exps = [Rabi(i) for i in range(3)]

par_exp = ParallelExperiment(exps)

print(exps[0].circuits()[0].calibrations)
print(par_exp.circuits()[0].calibrations)

results in

{'Rabi': {((0,), (-0.95,)): ScheduleBlock(Play(Gaussian(duration=160, amp=(-0.95+0j), sigma=40), DriveChannel(0)), name="block0", transform=AlignLeft())}}
{}

and shows that the calibrations are missing in the par_exp.

What is the expected behavior?

The calibrations need to be carried over.

Suggested solutions

Fix ParallelExperiment.circuits() by carrying over the calibrations.
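A sketch of what carrying the calibrations over could look like, assuming the standard QuantumCircuit.calibrations layout and add_calibration method; this is not the actual ParallelExperiment implementation.

def copy_calibrations(sub_circuit, combined_circuit):
    """Copy pulse calibrations from a sub-experiment circuit into the combined circuit."""
    for gate_name, entries in sub_circuit.calibrations.items():
        for (qubits, params), schedule in entries.items():
            combined_circuit.add_calibration(gate_name, qubits, schedule, params)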

CI to work with Terra's master branch

It seems that CI installs Terra 0.16.4 instead of cloning and installing the up-to-date Terra master branch from source. As far as I understand, this is the cause of the test failures in #5.

Data processor for creating a single outcome from multiple circuit

What is the expected behavior?

I think this situation is not assumed in the current data processor. For example,

qc1 = QuantumCircuit(1, 1)
qc1.append(my_gate, [0])
qc1.measure(0, 0)

qc2 = QuantumCircuit(1, 1)
qc2.append(my_gate, [0])
qc2.x(0)
qc2.measure(0, 0)

this pair of circuits can give us an estimate of the g-, e-, and f-state populations without a custom discriminator.

P0 = 1 - P_qc1   ... (1)
P1 = 1 - P_qc2   ... (2)
P2 = 1 - P0 - P1 ... (3)

This is convenient because we don't need to download huge serialized data arrays of level-1 measurements (e.g. an RB experiment is ~10s of MB) to determine the populations of a qutrit system. However, it seems we cannot perform this type of processing in the current data processor implementation.

The current solution would be:

  • Do standard population processing with the data processor, and post-process (1)-(3) on the curve analysis side.
  • Allow the data processor to take multiple circuits.

The second approach would make the processor logic super complicated, because the processor needs to manage the merging of metadata for data sorting by the analysis class.
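For reference, the post-processing in equations (1)-(3) is simple once both count dictionaries are available; a sketch with hypothetical inputs, assuming P_qc1 and P_qc2 are the probabilities of measuring "1" (this is not a data-processor API):

def qutrit_populations(counts_qc1, counts_qc2):
    """Estimate g/e/f populations from the counts of qc1 and qc2."""
    p_qc1 = counts_qc1.get("1", 0) / sum(counts_qc1.values())
    p_qc2 = counts_qc2.get("1", 0) / sum(counts_qc2.values())
    p0 = 1 - p_qc1          # (1)
    p1 = 1 - p_qc2          # (2)
    p2 = 1 - p0 - p1        # (3)
    return p0, p1, p2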

Getting x and y values

What is the expected behavior?

We need to define a common methodology to extract x and y values for fits from the (possibly processed) data. Suppose an experiment with n circuits. The data associated with this experiment is stored in ExperimentData.data (which has the form List[Dict[str, Any]] and is of length n). Currently, an entry in this data may look like

{'populations': [0.687],
 'metadata': {
    'experiment_type': 'RoughAmplitude',
    'pulse_schedule_name': 'RoughAmplitude',
    'series': None,
    'x_values': -0.7040,
    'exp_id': ...,
    ...
  }
}

Here, the x-value is in the metadata and the y-value is under the key populations. To extract the x and y value it appears we would need something like

def extract_xy_values(exp_data: ExperimentData, data_key: str, series: str = None) -> Tuple[np.array, np.array]:
    """
    Args:
        exp_data: The data that contains x and y values.
        data_key: The key in the exp_data.data dictionaries that contains the y-value.
        series: Optionally, the series for which to get the x and y values.

    Returns:
        The x and y data.
    """

This function could either be contained in its own dedicated class, e.g. a DataExtractor or be part of the DataProcessor.
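A minimal sketch of how this could be implemented against the example entry above; the "x_values" and "series" metadata keys follow that entry, and the data() accessor is assumed rather than a settled interface.

from typing import Optional, Tuple

import numpy as np

def extract_xy_values(exp_data, data_key: str, series: Optional[str] = None) -> Tuple[np.ndarray, np.ndarray]:
    """Collect x values from the metadata and y values from data_key."""
    xs, ys = [], []
    for datum in exp_data.data():
        metadata = datum["metadata"]
        if series is not None and metadata.get("series") != series:
            continue
        xs.append(metadata["x_values"])
        ys.append(datum[data_key][0])  # e.g. data_key="populations" -> [0.687]
    return np.asarray(xs), np.asarray(ys)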

Improve fake backends performance

In #74, the counts computation is done for all the shots together by calling

binomial(1, prob, size=shots)

By contrast, the tests for T1, T2*, and spectroscopy compute the shots one-by-one, by:

binomial(1, prob)

If possible, move the last three tests to an all-shots computation, as done in #74.
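For illustration, a small NumPy sketch of the two approaches (values are arbitrary):

import numpy as np

rng = np.random.default_rng()
prob, shots = 0.3, 1024

# per-shot loop, as in the T1/T2*/spectroscopy test backends
ones_loop = sum(rng.binomial(1, prob) for _ in range(shots))

# single vectorized draw, as in #74: same distribution, much faster
ones_vectorized = rng.binomial(1, prob, size=shots).sum()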

Should `transpiled_circuits` be an internal function?

Experiment.transpiled_circuits is a function required for use in the Experiment.run function. Should it be renamed to _transpiled_circuits so it is not part of the public API of this class? Most of the time a user would never directly call this function unless they wanted to look at the transpiled circuits being executed.

Remove duplication in CurveAnalysis._run_analysis

# Fit for each fit parameter combination
if isinstance(fit_candidates, dict):
    # Only single initial guess
    fit_options = self._format_fit_options(**fit_candidates)
    fit_result = curve_fitter(
        funcs=[series_def.fit_func for series_def in self.__series__],
        series=_data_index,
        xdata=_xdata,
        ydata=_ydata,
        sigma=_sigma,
        **fit_options,
    )
    analysis_result.update(**fit_result)
else:
    # Multiple initial guesses
    fit_options_candidates = [
        self._format_fit_options(**fit_options) for fit_options in fit_candidates
    ]
    fit_results = [
        curve_fitter(
            funcs=[series_def.fit_func for series_def in self.__series__],
            series=_data_index,
            xdata=_xdata,
            ydata=_ydata,
            sigma=_sigma,
            **fit_options,
        )
        for fit_options in fit_options_candidates
    ]
    # Sort by chi squared value
    fit_results = sorted(fit_results, key=lambda r: r["reduced_chisq"])
    analysis_result.update(**fit_results[0])

It looks like the if statement can be removed; a single guess is a special case of any number of guesses.
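A sketch of that simplification, reusing the variable names from the snippet above: normalize a single guess to a one-element list and keep only the multi-guess path.

# Normalize: a single initial guess becomes a one-element list of guesses.
if isinstance(fit_candidates, dict):
    fit_candidates = [fit_candidates]

fit_results = [
    curve_fitter(
        funcs=[series_def.fit_func for series_def in self.__series__],
        series=_data_index,
        xdata=_xdata,
        ydata=_ydata,
        sigma=_sigma,
        **self._format_fit_options(**fit_options),
    )
    for fit_options in fit_candidates
]
# Keep the result with the smallest reduced chi-squared.
analysis_result.update(**min(fit_results, key=lambda r: r["reduced_chisq"]))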

Unit management in curve analysis

What is the expected behavior?

Currently each subclass implements logic to manage the units of the x and y axis values. Though some experiments, e.g. RB, don't need units, considering that many experiments do need units and that some flexibility in supplementary units helps usability, it would be better to implement the basic unit management logic in the curve analysis base class.

Behavior of set_run_options in composite experiments

Suppose that exp1 has, as a run option, shots equal to 1000. And exp2 has shots equal to 2000. And we create a parallel experiment, and set its shots to 3000. Now we run. What will happen?

Check what will happen. Decide what we want to happen. Fix if needed.

CPMG

What is the expected behavior?

Analysis options in two places

What is the expected behavior?

In #67, analysis options will be placed in two places, namely the **options of the run method and as part of the experiment_data metadata. This may confuse developers, because the same data is available in two places. We need some logic update.

Missing tutorial notebooks

Tutorials

Right now there is only an RB notebook in the tutorials/ folder. We should add other tutorials:

  • Introduction to experiment classes (including composite/parallel experiments)
  • Tutorial for creating experiment class
  • Tutorial for resultsdb
  • T1 experiment
  • Tomography
  • T2* experiment
  • Calibrations
  • RB
  • QV

Resultsdb to be decided after merging

T_phi experiment

What is the expected behavior?

Now that T1 and T2* are ready, we can write a composite experiment for T_phi.

"Work of many people"

The README says:

Qiskit Experiments is the work of many people who contribute to the project...

But it points to the contributors graph of Terra instead of Experiments. On the other hand, if we change it to the correct link, "many people" becomes 3!

Scan values should be run-time option

What is the expected behavior?

In the current implementation of the T1 experiment, delays is passed to the constructor and becomes an immutable instance variable. This should be a runtime option so that we can update the scan range without creating a new instance.

For me, in an experiment that scans some parameter, it is preferable to be able to inspect the parametrized circuit rather than a list of circuits with assigned parameters. This can be implemented with

self._delay = Parameter('delay')

experiment_circuit = QuantumCircuit(1, 1)
experiment_circuit.x(0)
experiment_circuit.barrier(0)
experiment_circuit.delay(self._delay, 0, self._unit)
experiment_circuit.barrier(0)
experiment_circuit.measure(0, 0)

self._experiment_circuit = experiment_circuit  # we have some property to show this, no setter

and in .circuits method

t1_circuits = []
for delay in delays:
    circ = self._experiment_circuit.assign_parameters({self._delay: delay}, inplace=False)
    circ.metadata = ...
    t1_circuits.append(circ)

This will allow us to check the circuit with parameter.

t1_exp.experiment_circuit.draw()

     ┌───┐ ░ ┌──────────────────┐ ░ ┌─┐
q_0: ┤ X ├─░─┤ DELAY(delay[dt]) ├─░─┤M├
     └───┘ ░ └──────────────────┘ ░ └╥┘
c: 1/════════════════════════════════╩═
                                     0 

Incorrect check if experiment data is compatible

In base_experiment.py, we need to change it like this:

diff --git a/qiskit_experiments/base_experiment.py b/qiskit_experiments/base_experiment.py
index 7bd2ca4..f818f93 100644
--- a/qiskit_experiments/base_experiment.py
+++ b/qiskit_experiments/base_experiment.py
@@ -110,7 +110,7 @@ class BaseExperiment(ABC):
         else:
             # Validate experiment is compatible with existing data container
             metadata = experiment_data.metadata()
-            if metadata.get("experiment_data") != self._type:
+            if metadata.get("experiment_type") != self._type:
                 raise QiskitError(
                     "Existing ExperimentData contains data from a different experiment."
                 )
