concepts.benchmark.algorithm_env.graph_env.GraphEnvBase#

class GraphEnvBase[source]#

Bases: RandomizedEnv

Graph Env Base.

Methods

close()

Override close in your subclass to perform any necessary cleanup.

render([mode])

Renders the environment.

reset(*[, seed, return_info, options])

Resets the environment to an initial state and returns the initial observation.

seed([seed])

(Deprecated) Sets the seed for the environment's random number generator(s).

step(action)

Run one timestep of the environment's dynamics.

Attributes

action_space

graph

The generated graph.

metadata

np_random

Returns the environment's internal _np_random; if it is not set, it will be initialised with a random seed.

observation_space

reward_range

spec

unwrapped

Returns the base non-wrapped environment.

__init__(nr_nodes, p=0.5, directed=False, gen_method='edge', np_random=None, seed=None)[source]#

Initialize the environment.

Parameters:
  • nr_nodes (int) – the number of nodes in the graph.

  • p (float) – parameter for random generation. (Default: 0.5)
      - 'edge' method: the probability that an edge does not exist in the directed graph.
      - 'dnc' method: controls the range from which each node's out-degree is sampled.
      - other methods: unused.

  • directed (bool) – whether to generate a directed graph. (Default: False, i.e., undirected)

  • gen_method (str) – which method to use to randomly generate the graph.
      - 'edge': sample the existence of each edge independently.
      - 'dnc': sample the out-degree (\(m\)) of each node, then link it to its nearest neighbors in the unit square.
      - 'list': generate a chain-like graph.

  • np_random (RandomState | None) –

  • seed (int | None) –
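The 'edge' generation method described above can be sketched as independent Bernoulli sampling over an adjacency matrix. This is an illustrative stand-alone sketch using the standard-library `random` module, not the actual `GraphEnvBase` implementation; the function name `gen_edge_graph` and the adjacency-matrix representation are assumptions for illustration only.

```python
import random

def gen_edge_graph(nr_nodes, p=0.5, directed=False, rng=None):
    """Sketch of the 'edge' generation method: sample each edge independently.

    Returns an adjacency matrix where edges[i][j] == 1 iff edge i -> j exists.
    (Illustrative only; the real GraphEnvBase may represent graphs differently.)
    """
    rng = rng or random.Random()
    edges = [[0] * nr_nodes for _ in range(nr_nodes)]
    for i in range(nr_nodes):
        for j in range(nr_nodes):
            if i == j:
                continue  # no self-loops
            if directed:
                # Per the docstring, p is the probability that a directed edge
                # does *not* exist, so the edge exists with probability 1 - p.
                edges[i][j] = int(rng.random() >= p)
            elif j > i:
                # Undirected: sample each unordered pair once, then mirror it.
                edges[i][j] = edges[j][i] = int(rng.random() >= p)
    return edges
```

Passing an explicit `random.Random(seed)` as `rng` makes the generated graph reproducible, mirroring the `np_random`/`seed` arguments of `__init__`.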

__new__(**kwargs)#
close()#

Override close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

render(mode='human')#

Renders the environment.

The set of supported modes varies per environment. (Some third-party environments may not support rendering at all.) By convention, if mode is:

  • human: render to the current display or terminal and return nothing. Usually for human consumption.

  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.

  • ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

Note

Make sure that your class’s metadata ‘render_modes’ key includes the list of supported modes. It’s recommended to call super() in implementations to use the functionality of this method.

Example

>>> import numpy as np
>>> from gym import Env
>>> class MyEnv(Env):
...     metadata = {'render_modes': ['human', 'rgb_array']}
...
...     def render(self, mode='human'):
...         if mode == 'rgb_array':
...             return np.array(...)  # return an RGB frame suitable for video
...         elif mode == 'human':
...             ...  # pop up a window and render
...         else:
...             super().render(mode=mode)  # just raise an exception
Parameters:

mode – the mode to render with, valid modes are env.metadata[“render_modes”]

reset(*, seed=None, return_info=False, options=None)#

Resets the environment to an initial state and returns the initial observation.

This method can reset the environment’s random number generator(s) if seed is an integer or if the environment has not yet initialized a random number generator. If the environment already has a random number generator and reset() is called with seed=None, the RNG should not be reset. Moreover, reset() should (in the typical use case) be called with an integer seed right after initialization and then never again.

Parameters:
  • seed (optional int) – The seed that is used to initialize the environment's PRNG. If the environment does not already have a PRNG and seed=None (the default) is passed, a seed will be chosen from some source of entropy (e.g. timestamp or /dev/urandom). However, if the environment already has a PRNG and seed=None is passed, the PRNG will not be reset. If you pass an integer, the PRNG will be reset even if it already exists. Usually, you want to pass an integer right after the environment has been initialized and then never again.

  • return_info (bool) – If true, return additional information along with initial observation. This info should be analogous to the info returned in step()

  • options (optional dict) – Additional information to specify how the environment is reset (optional, depending on the specific environment)

Returns:
  • observation (object) – Observation of the initial state. This will be an element of observation_space (typically a numpy array) and is analogous to the observation returned by step().

  • info (optional dict) – Only returned if return_info=True is passed. It contains auxiliary information complementing observation and should be analogous to the info returned by step().
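The seeding paradigm described above (pass an integer seed to reset() once, right after construction, and never again) can be sketched with a minimal gym-style stand-in. `CountingEnv` is a hypothetical class invented for this illustration, not the real `GraphEnvBase`.

```python
import random

class CountingEnv:
    """Minimal gym-style stand-in illustrating the recommended seeding
    paradigm: seed once via reset(seed=...), then call reset() bare."""

    def __init__(self):
        self._rng = None

    def reset(self, *, seed=None, return_info=False, options=None):
        # Re-create the RNG only if an integer seed is given or no RNG
        # exists yet; reset(seed=None) on an already-seeded environment
        # leaves the RNG untouched, as the reset() contract specifies.
        if seed is not None or self._rng is None:
            self._rng = random.Random(seed)
        obs = self._rng.randint(0, 9)
        return (obs, {}) if return_info else obs

env = CountingEnv()
first = env.reset(seed=42)   # seed once, right after initialization...
later = env.reset()          # ...then reset without a seed: RNG continues
```

Because the RNG is only re-created when an integer seed is supplied, two environments seeded with the same integer produce identical observation sequences across subsequent unseeded resets.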

seed(seed=None)#
Deprecated:

function that sets the seed for the environment’s random number generator(s).

Use env.reset(seed=seed) as the new API for setting the seed of the environment.

Note

Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren’t accidental correlations between multiple generators.

Parameters:

seed (optional int) – The seed value for the random number generator.

Returns:

The list of seeds used in this environment's random number generators. The first value in the list should be the "main" seed, i.e., the value which a reproducer should pass to seed(). Often, the main seed equals the provided seed, but this won't be true if seed=None, for example.

Return type:

seeds (List[int])

step(action)#

Run one timestep of the environment’s dynamics. When end of episode is reached, you are responsible for calling reset() to reset this environment’s state.

Accepts an action and returns a tuple (observation, reward, done, info).

Parameters:

action (Any) – an action provided by the agent

Returns:
  • observation (object) – the agent's observation of the current environment.

  • reward (float) – the amount of reward returned after the previous action.

  • done (bool) – whether the episode has ended, in which case further step() calls will return undefined results.

  • info (dict) – auxiliary diagnostic information (helpful for debugging, and sometimes learning).

Return type:

(observation, reward, done, info)
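The step() contract above implies the standard interaction loop: act until done is returned, then reset(). The sketch below uses a toy `RandomWalkEnv` invented for illustration (its name, goal logic, and reward scheme are assumptions, not part of `GraphEnvBase`).

```python
class RandomWalkEnv:
    """Toy gym-style environment illustrating the
    (observation, reward, done, info) contract of step()."""

    def __init__(self, goal=3):
        self.goal = goal
        self.pos = 0

    def reset(self, *, seed=None, return_info=False, options=None):
        self.pos = 0
        return self.pos

    def step(self, action):
        # action moves the agent by +1 or -1 along a line; the episode
        # ends (done=True) once the goal position is reached.
        self.pos += action
        done = self.pos >= self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}

# Typical interaction loop: step until done, then the caller resets.
env = RandomWalkEnv()
obs = env.reset(seed=0)
done = False
total_reward = 0.0
while not done:
    obs, reward, done, info = env.step(+1)
    total_reward += reward
```

Note that once done is True the loop must stop and reset() must be called before stepping again; further step() calls would return undefined results.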

property action_space#
property graph#

The generated graph.

metadata = {'render_modes': []}#
property np_random: RandomState#

Returns the environment's internal _np_random; if it is not set, it will be initialised with a random seed.

property observation_space#
reward_range = (-inf, inf)#
spec = None#
property unwrapped: Env#

Returns the base non-wrapped environment.

Returns:

The base non-wrapped gym.Env instance

Return type:

Env