Columns (name: dtype, observed min to max):
id: int64 (20 to 338k)
vocab_size: int64 (2 to 671)
ast_levels: int64 (4 to 32)
nloc: int64 (1 to 451)
n_ast_nodes: int64 (12 to 5.6k)
n_identifiers: int64 (1 to 186)
n_ast_errors: int64 (0 to 10)
n_words: int64 (2 to 2.17k)
n_whitespaces: int64 (2 to 13.8k)
fun_name: string (lengths 2 to 73)
commit_message: string (lengths 51 to 15.3k)
url: string (lengths 31 to 59)
code: string (lengths 51 to 31k)
ast_errors: string (lengths 0 to 1.46k)
token_counts: int64 (6 to 3.32k)
file_name: string (lengths 5 to 56)
language: string (1 class)
path: string (lengths 7 to 134)
commit_id: string (lengths 40 to 40)
repo: string (lengths 3 to 28)
complexity: int64 (1 to 153)
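Each record pairs a single Python function (code) with size and structure metrics (nloc, token_counts, complexity, the ast_* counts) and the commit that introduced it (commit_message, commit_id, repo, path, url). Below is a minimal sketch of how a table with these columns could be loaded and filtered using the Hugging Face datasets library; the dataset identifier and the exact column semantics are assumptions for illustration, not taken from this page.

# Minimal sketch: load the table and keep small, low-complexity functions.
# "your-org/your-dataset" is a placeholder identifier, not the real dataset name.
from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")

# Keep functions with at most 30 logical lines and cyclomatic complexity <= 5.
small_fns = ds.filter(lambda row: row["nloc"] <= 30 and row["complexity"] <= 5)

for row in small_fns.select(range(min(3, len(small_fns)))):
    print(row["repo"], row["path"], row["fun_name"])
    print(row["code"])

Filtering on nloc and complexity like this is just one way to carve out a readable subset; any of the integer columns above can be used the same way.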
43,002
7
8
26
30
7
0
7
21
_sync_dag_view_permissions
Fix permission issue for dag that has dot in name (#23510) How we determine if a DAG is a subdag in airflow.security.permissions.resource_name_for_dag is not right. If a dag_id contains a dot, the permission is not recorded correctly. The current solution makes a query every time we check for permission for dags that has a dot in the name. Not that I like it but I think it's better than other options I considered such as changing how we name dags for subdag. That's not good in UX. Another option I considered was making a query when parsing, that's not good and it's avoided by passing root_dag to resource_name_for_dag Co-authored-by: Ash Berlin-Taylor <ash_github@firemirror.com> Co-authored-by: Tzu-ping Chung <uranusjr@gmail.com>
https://github.com/apache/airflow.git
def _sync_dag_view_permissions(self, dag_id, access_control):
    dag_resource_name = permissions.resource_name_for_dag(dag_id)
116
security.py
Python
airflow/www/security.py
cc35fcaf89eeff3d89e18088c2e68f01f8baad56
airflow
7
82,968
30
11
5
101
14
0
33
49
_format_description
feat: Basic support for C4 model primitives. (#508) * Basic support for C4 model primitives. * Use the "rect" shape for nodes With the record shape we used before, graphviz would trip over edges that set constraint=False. * Adopt C4 terminology: Rename Dependency -> Relationship * Adopt C4 terminology: Rename type -> technology * Extract a shared C4Node This makes the code more DRY, but also allows to add company- specific extensions more easily. One need we have is to slightly adapt the terminology. At Spotify, we happen to call `Container` a `Component` for example. This is now easier to implement on top of the shared `C4Node`. * Add "C4" shield to the README * Document how to produce a C4 diagram
https://github.com/mingrammer/diagrams.git
def _format_description(description):
    wrapper = textwrap.TextWrapper(width=40, max_lines=3)
    lines = [html.escape(line) for line in wrapper.wrap(description)]
    lines += [""] * (3 - len(lines))  # fill up with empty lines so it is always three
    return "<br/>".join(lines)
60
__init__.py
Python
diagrams/c4/__init__.py
90dd23926bc42c1da8ae704d3c3c8645e118cea1
diagrams
2
310,150
6
6
3
22
4
0
6
20
available
Remove ping from mypy ignored modules (#64439) Co-authored-by: epenet <epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
def available(self) -> bool: return self._available
12
binary_sensor.py
Python
homeassistant/components/ping/binary_sensor.py
211b99e22d82f9d1a1553349d642e5fb5a36d776
core
1
167,524
6
7
13
24
3
0
6
20
_validate_names
REF: Reduce duplicative methods between XML parser classes (#47553) * REF: Reduce duplicative methods between XML parser classes * Add typing to base class methods
https://github.com/pandas-dev/pandas.git
def _validate_names(self) -> None: raise AbstractMethodError(self)
13
xml.py
Python
pandas/io/xml.py
ebc96aef57c3d99f6da6275a5cc42f5cfd6c1c8c
pandas
1
136,320
38
11
8
93
11
0
49
143
stop
[RLlib] Fault tolerant and elastic WorkerSets used across RLlib's algorithms (for sampling and evaluation). (#30118)
https://github.com/ray-project/ray.git
def stop(self) -> None:
    # If we have an env -> Release its resources.
    if self.env is not None:
        self.async_env.stop()
    # Close all policies' sessions (if tf static graph).
    for policy in self.policy_map.cache.values():
        sess = policy.get_session()
        # Closes the tf session, if any.
        if sess is not None:
            sess.close()
54
rollout_worker.py
Python
rllib/evaluation/rollout_worker.py
76cb42c578adf19a70a6b4401098a7a21e0d3b29
ray
4
249,833
16
11
10
110
14
0
17
59
test_keyid_containing_forward_slash
Fix /key/v2/server calls with URL-unsafe key IDs (#14490) Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
https://github.com/matrix-org/synapse.git
def test_keyid_containing_forward_slash(self) -> None:
    fetcher = ServerKeyFetcher(self.hs)
    self.get_success(fetcher.get_keys("example.com", ["key/potato"], 0))

    self.http_client.get_json.assert_called_once()
    args, kwargs = self.http_client.get_json.call_args
    self.assertEqual(kwargs["path"], "/_matrix/key/v2/server/key%2Fpotato")
64
test_keyring.py
Python
tests/crypto/test_keyring.py
e1b15f25f3ad4b45b381544ca6b3cd2caf43d25d
synapse
1
286,833
20
12
18
65
9
0
22
80
get_weights
Portfolio optimization bug fixes (#3675) * remove hcp * add prams dict to statics * little controller bug * fix view bug * avoid crash if one stock * check params on setup * create parameter class * check and convert parameters using class * change params name to avoid confusion * create a parameter statics dict * change some funcs names * remove unused imports * remove default dict * optional type * add multi choices * cast only int and float * fix completer * fix bugs with parameter validation * fix bugs with parameter validation * add excel formatting * sdk needs mapping as well, controller takes care of this in terminal * small formating * add some safe guard try except * controller bugs * oops * change export path of parameters to portfolio folder * add more commands to scripts * catch optimization exception * log errors * black and exceptions * add flag to test * black * flake8 * forgot this * pylint * change defaults * fix ef default * fix bl * sync sdk defaults * sync last defaults * fix plot heat and add more choices to controller autocomplete * patch weights * fix wrong bool parsing Co-authored-by: James Maslek <jmaslek11@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_weights(self, warning=True) -> Dict[str, float]:
    if not self._weights:
        if warning:
            console.print("No weights found. Please perform some optimization.")
        return {}
    return self._weights
39
po_engine.py
Python
openbb_terminal/portfolio/portfolio_optimization/po_engine.py
faca7ab67d1ce5d0ae0e5c862332bcfc37f72ea9
OpenBBTerminal
3
212,726
46
14
10
304
19
0
49
141
make_window
Catching up on the many many demo programs that were not checked in....
https://github.com/PySimpleGUI/PySimpleGUI.git
def make_window():
    sg.theme(settings.get('-theme-', 'DarkBlue2'))  # set the theme

    layout = [[sg.Text('Settings Window')],
              [sg.Input(settings.get('-input-', ''), k='-IN-')],
              [sg.Listbox(sg.theme_list(), default_values=[settings['-theme-'],], size=(15, 10), k='-LISTBOX-')],
              [sg.CB('Option 1', settings.get('-option1-', True), k='-CB1-')],
              [sg.CB('Option 2', settings.get('-option2-', False), k='-CB2-')],
              [sg.T('Settings file = ' + settings.get_filename())],
              [sg.Button('Save'), sg.Button('Settings Dictionary'), sg.Button('Exit without saving', k='Exit')]]

    window = sg.Window('A Settings Window', layout)
181
Demo_User_Settings_Class.py
Python
DemoPrograms/Demo_User_Settings_Class.py
cfe2c96a1fa6fc721c998179298a7d430ccbaefd
PySimpleGUI
1
43,876
7
7
3
29
6
0
7
21
serialize_for_task_group
Fix remaining mypy issues in "core" Airflow (#20795) Co-authored-by: Josh Fell <josh.d.fell@astronomer.io> Co-authored-by: Tzu-ping Chung <tp@astronomer.io> Co-authored-by: Jarek Potiuk <jarek@potiuk.com>
https://github.com/apache/airflow.git
def serialize_for_task_group(self) -> Tuple[DagAttributeTypes, Any]: raise NotImplementedError()
17
taskmixin.py
Python
airflow/models/taskmixin.py
2fdc23333909096d427171002582e2906f8bbc0a
airflow
1
20,339
15
11
6
60
4
0
19
69
_get_text_color
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _get_text_color(self, style):
    if style['color'] is not None:
        fill = '#' + style['color']
    else:
        fill = '#000'
    return fill
32
img.py
Python
pipenv/patched/notpip/_vendor/pygments/formatters/img.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
295,120
28
13
8
135
18
1
30
86
websocket_device_automation_list_conditions
Device Automation: enforce passing in device-automation-enum (#69013)
https://github.com/home-assistant/core.git
async def websocket_device_automation_list_conditions(hass, connection, msg):
    device_id = msg["device_id"]
    conditions = (
        await async_get_device_automations(
            hass, DeviceAutomationType.CONDITION, [device_id]
        )
    ).get(device_id)
    connection.send_result(msg["id"], conditions)


@websocket_api.websocket_command(
    {
        vol.Required("type"): "device_automation/trigger/list",
        vol.Required("device_id"): str,
    }
)
@websocket_api.async_response
@handle_device_errors
@websocket_api.websocket_command( { vol.Required("type"): "device_automation/trigger/list", vol.Required("device_id"): str, } ) @websocket_api.async_response @handle_device_errors
49
__init__.py
Python
homeassistant/components/device_automation/__init__.py
824066f519321161b47641afcf6b779439dc0e20
core
1
189,717
7
7
14
25
4
0
7
20
vector
Improved structure of the :mod:`.mobject` module (#2476) * group graphing and update its references * group text and update its references * group opengl and update its references * group three_d and update its references * group geometry and update (most) references * move some chaning.py + updater files into animation * refactor arc.py * refactor line.py * refactor polygram.py * refactor tips.py * black + isort * import new files in __init__.py * refactor places where geometry was used * black + isort again * remove unused imports * update reference.rst * add descriptions to files * fix circular imports * forgot ArrowTip * fix tests * fix doctests * satisfy mypy? * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix ALL merge conflicts * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * one VMobject import slipped through * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * re-add imports to `manim/opengl/__init__.py` * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix reference manual * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * ignore unknown directive type * fix arrow tip imports in docstrings Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Benjamin Hackl <devel@benjamin-hackl.at>
https://github.com/ManimCommunity/manim.git
def vector(self): r return self.tip_point - self.base
15
tips.py
Python
manim/mobject/geometry/tips.py
e040bcacd38378386749db18aeba575b93f4ebca
manim
1
288,843
22
10
7
96
14
0
25
78
old_unique_id
Migrate HomeKit Controller to use stable identifiers (#80064)
https://github.com/home-assistant/core.git
def old_unique_id(self) -> str:
    info = self.accessory_info
    serial = info.value(CharacteristicsTypes.SERIAL_NUMBER)
    if valid_serial_number(serial):
        return f"homekit-{serial}-{self._iid}"
    # Some accessories do not have a serial number
    return f"homekit-{self._accessory.unique_id}-{self._aid}-{self._iid}"
35
entity.py
Python
homeassistant/components/homekit_controller/entity.py
f23b1750e85f07091eb896a0b12b8f95e5646338
core
2
76,492
24
10
13
94
9
0
35
122
was_modified_since
Removed outdated handling of length parameter to If-Modified-Since header - The length parameter is not described in RFC-7232 and it's against HTTP/1.0 and HTTP/1.1 specifications. - It was an old and unofficial extension set by some ancient versions of IE. - See https://httpwg.org/specs/rfc7232.html#header.if-modified-since - https://github.com/django/django/pull/15500
https://github.com/wagtail/wagtail.git
def was_modified_since(header=None, mtime=0):
    try:
        if header is None:
            raise ValueError
        header_date = parsedate_tz(header)
        if header_date is None:
            raise ValueError
        header_mtime = mktime_tz(header_date)
        if mtime > header_mtime:
            raise ValueError
    except (ValueError, OverflowError):
        return True
    return False
58
sendfile_streaming_backend.py
Python
wagtail/utils/sendfile_streaming_backend.py
7b4cf43e2ef1335609fd9d0ec73dca240917d670
wagtail
5
314,031
6
6
8
22
4
0
6
20
should_poll
Enable polling for hardwired powerview devices (#73659) * Enable polling for hardwired powerview devices * Update homeassistant/components/hunterdouglas_powerview/cover.py * Update homeassistant/components/hunterdouglas_powerview/cover.py * docs were wrong * Update homeassistant/components/hunterdouglas_powerview/cover.py * Update homeassistant/components/hunterdouglas_powerview/sensor.py
https://github.com/home-assistant/core.git
def should_poll(self) -> bool: return self._is_hard_wired
12
cover.py
Python
homeassistant/components/hunterdouglas_powerview/cover.py
120479acef9a8e9e52fa356f036e55465e441d31
core
1
133,768
101
17
38
440
37
0
134
644
test_marwil_compilation_and_learning_from_offline_file
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def test_marwil_compilation_and_learning_from_offline_file(self):
    rllib_dir = Path(__file__).parent.parent.parent.parent
    print("rllib dir={}".format(rllib_dir))
    data_file = os.path.join(rllib_dir, "tests/data/cartpole/large.json")
    print("data_file={} exists={}".format(data_file, os.path.isfile(data_file)))

    config = marwil.DEFAULT_CONFIG.copy()
    config["num_workers"] = 2
    config["evaluation_num_workers"] = 1
    config["evaluation_interval"] = 3
    config["evaluation_duration"] = 5
    config["evaluation_parallel_to_training"] = True
    # Evaluate on actual environment.
    config["evaluation_config"] = {"input": "sampler"}
    # Learn from offline data.
    config["input"] = [data_file]
    num_iterations = 350
    min_reward = 70.0

    # Test for all frameworks.
    for _ in framework_iterator(config, frameworks=("tf", "torch")):
        trainer = marwil.MARWILTrainer(config=config, env="CartPole-v0")
        learnt = False
        for i in range(num_iterations):
            results = trainer.train()
            check_train_results(results)
            print(results)

            eval_results = results.get("evaluation")
            if eval_results:
                print(
                    "iter={} R={} ".format(i, eval_results["episode_reward_mean"])
                )
                # Learn until some reward is reached on an actual live env.
                if eval_results["episode_reward_mean"] > min_reward:
                    print("learnt!")
                    learnt = True
                    break

        if not learnt:
            raise ValueError(
                "MARWILTrainer did not reach {} reward from expert "
                "offline data!".format(min_reward)
            )

        check_compute_single_action(trainer, include_prev_action_reward=True)

        trainer.stop()
249
test_marwil.py
Python
rllib/agents/marwil/tests/test_marwil.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
6
337,426
3
7
2
22
3
0
3
17
clear
Create alias for Accelerator.free_memory (#318) * Add `Accelerator.clear` alias Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
https://github.com/huggingface/accelerate.git
def clear(self): self.free_memory()
11
accelerator.py
Python
src/accelerate/accelerator.py
5791d3dd6bdd26f8df05026c3b8e28620bb3463f
accelerate
1
272,311
10
9
4
49
6
1
11
26
_large_compatible_negative
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _large_compatible_negative(tensor_type):
    if tensor_type == tf.float16:
        return tf.float16.min
    return -1e9


@keras_export("keras.layers.Softmax")
@keras_export("keras.layers.Softmax")
22
softmax.py
Python
keras/layers/activation/softmax.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
313,741
12
8
9
48
8
0
12
88
extra_restore_state_data
Support restoring NumberEntity native_value (#73475)
https://github.com/home-assistant/core.git
def extra_restore_state_data(self) -> NumberExtraStoredData:
    return NumberExtraStoredData(
        self.native_max_value,
        self.native_min_value,
        self.native_step,
        self.native_unit_of_measurement,
        self.native_value,
    )
32
__init__.py
Python
homeassistant/components/number/__init__.py
23fa19b75a398d89fe585e8c1d26c175f2024e47
core
1
203,109
25
12
8
137
10
0
38
108
sanitize_file_name
Fixed #33062 -- Made MultiPartParser remove non-printable chars from file names.
https://github.com/django/django.git
def sanitize_file_name(self, file_name):
    file_name = html.unescape(file_name)
    file_name = file_name.rsplit('/')[-1]
    file_name = file_name.rsplit('\\')[-1]
    # Remove non-printable characters.
    file_name = ''.join([char for char in file_name if char.isprintable()])
    if file_name in {'', '.', '..'}:
        return None
    return file_name

IE_sanitize = sanitize_file_name
75
multipartparser.py
Python
django/http/multipartparser.py
3fadf141e66c8d0baaa66574fa3b63c4d3655482
django
4
189,618
27
16
27
105
13
0
35
240
should_update_mobjects
Added finer control over :meth:`.Scene.wait` being static (i.e., no updaters) or not (#2504) * added freeze_frame kwarg to Wait + documentation, changed logic of should_mobject_update * changed default behavior when adding updaters to opengl mobjects * added tests * fixed OpenGL behavior (?) * black * Scene.pause, type hints, documentation * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * actually handle frozen frames in OpenGL renderer * black * remove stray print Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
https://github.com/ManimCommunity/manim.git
def should_update_mobjects(self) -> bool:
    wait_animation = self.animations[0]
    if wait_animation.is_static_wait is None:
        should_update = (
            self.always_update_mobjects
            or self.updaters
            or any(
                [
                    mob.has_time_based_updater()
                    for mob in self.get_mobject_family_members()
                ],
            )
        )
        wait_animation.is_static_wait = not should_update
    return not wait_animation.is_static_wait
65
scene.py
Python
manim/scene/scene.py
8852ceee6e256dfa66b03337affbde028c0f2ccb
manim
5
142,932
6
6
2
20
5
0
6
20
__call__
[tune/structure] Introduce `stopper` package (#26040) Split different stoppers and put them in a `stopper` package.
https://github.com/ray-project/ray.git
def __call__(self, trial_id, result): raise NotImplementedError
12
stopper.py
Python
python/ray/tune/stopper/stopper.py
7e93370c914f06f2b47813e2ae7e9af4a15982a6
ray
1
47,576
28
12
9
148
15
0
34
125
test_roots
Replace usage of `DummyOperator` with `EmptyOperator` (#22974) * Replace usage of `DummyOperator` with `EmptyOperator`
https://github.com/apache/airflow.git
def test_roots(self):
    with DAG("test_dag", start_date=DEFAULT_DATE) as dag:
        op1 = EmptyOperator(task_id="t1")
        op2 = EmptyOperator(task_id="t2")
        op3 = EmptyOperator(task_id="t3")
        op4 = EmptyOperator(task_id="t4")
        op5 = EmptyOperator(task_id="t5")
        [op1, op2] >> op3 >> [op4, op5]

        assert set(dag.roots) == {op1, op2}
86
test_dag.py
Python
tests/models/test_dag.py
49e336ae0302b386a2f47269a6d13988382d975f
airflow
1
166,336
14
7
15
36
5
0
16
40
dtype
ENH: allow non-nano in DatetimeArray, TimedeltaArray._simple_new (#46901)
https://github.com/pandas-dev/pandas.git
def dtype(self) -> np.dtype:  # type: ignore[override]
    return self._ndarray.dtype

# ----------------------------------------------------------------
# Constructors

_freq = None
16
timedeltas.py
Python
pandas/core/arrays/timedeltas.py
5a599c19ec6ca97c6628853ee80cac2f65154119
pandas
1
100,824
17
12
6
100
12
0
18
39
output_shapes
Refactoring and TravisCI to Github Actions (#1239) * refactor training * travis to actions
https://github.com/deepfakes/faceswap.git
def output_shapes(self) -> List[List[Tuple]]:
    shapes = [tuple(K.int_shape(output)[-3:]) for output in self._model.outputs]
    return [shapes[:len(shapes) // 2], shapes[len(shapes) // 2:]]
50
model.py
Python
plugins/train/model/_base/model.py
ff6b0209dd5ad57b81b0aca570df7f39a7119bfb
faceswap
2
265,912
48
12
10
185
17
0
59
118
highlight_string
Closes #10560: New global search (#10676) * Initial work on new search backend * Clean up search backends * Return only the most relevant result per object * Clear any pre-existing cached entries on cache() * #6003: Implement global search functionality for custom field values * Tweak field weights & document guidance * Extend search() to accept a lookup type * Move get_registry() out of SearchBackend * Enforce object permissions when returning search results * Add indexers for remaining models * Avoid calling remove() on non-cacheable objects * Use new search backend by default * Extend search backend to filter by object type * Clean up search view form * Enable specifying lookup logic * Add indexes for value field * Remove object type selector from search bar * Introduce SearchTable and enable HTMX for results * Enable pagination * Remove legacy search backend * Cleanup * Use a UUID for CachedValue primary key * Refactoring search methods * Define max search results limit * Extend reindex command to support specifying particular models * Add clear() and size to SearchBackend * Optimize bulk caching performance * Highlight matched portion of field value * Performance improvements for reindexing * Started on search tests * Cleanup & docs * Documentation updates * Clean up SearchIndex * Flatten search registry to register by app_label.model_name * Clean up search backend classes * Clean up RestrictedGenericForeignKey and RestrictedPrefetch * Resolve migrations conflict
https://github.com/netbox-community/netbox.git
def highlight_string(value, highlight, trim_pre=None, trim_post=None, trim_placeholder='...'):
    # Split value on highlight string
    try:
        pre, match, post = re.split(fr'({highlight})', value, maxsplit=1, flags=re.IGNORECASE)
    except ValueError:
        # Match not found
        return escape(value)

    # Trim pre/post sections to length
    if trim_pre and len(pre) > trim_pre:
        pre = trim_placeholder + pre[-trim_pre:]
    if trim_post and len(post) > trim_post:
        post = post[:trim_post] + trim_placeholder

    return f'{escape(pre)}<mark>{escape(match)}</mark>{escape(post)}'
97
utils.py
Python
netbox/utilities/utils.py
9628dead07ccef9608b32906aa8194bc948e5a09
netbox
6
313,507
7
7
3
33
5
1
7
12
cache_size
Update more nest tests to use common fixtures (#73303) Update nest tests to use fixtures
https://github.com/home-assistant/core.git
def cache_size() -> int:
    return 100


@pytest.fixture(autouse=True)
@pytest.fixture(autouse=True)
9
test_media_source.py
Python
tests/components/nest/test_media_source.py
7a5fa8eb58f49282e73f454826472ba54cd37a30
core
1
249,154
26
14
13
154
18
0
27
163
test_block_room_twice
Use literals in place of `HTTPStatus` constants in tests (#13469)
https://github.com/matrix-org/synapse.git
def test_block_room_twice(self) -> None:
    self._is_blocked(self.room_id, expect=False)
    for _ in range(2):
        channel = self.make_request(
            "PUT",
            self.url % self.room_id,
            content={"block": True},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        self.assertTrue(channel.json_body["block"])
        self._is_blocked(self.room_id, expect=True)
98
test_room.py
Python
tests/rest/admin/test_room.py
c97042f7eef3748e17c90e48a4122389a89c4735
synapse
2
189,990
12
8
13
49
10
0
12
88
hash_seed
Ported improved implementation of :class:`.SVGMobject` from 3b1b/manim (#2898) * port SVGMobject from 3b1b/manim * added svgelements as dependency * revert change of default values * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * set default stroke_width of svg elements to 0 if not set * fix handling of circles with different rx/ry * turn more methods into staticmethods * removed duplicated method * set/adapt stroke-width of some test SVGs * updated control data * forgot some control data * fixed init_colors in tex_mobject and text_mobject * minor changes, added docstrings * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * module docstring, removed import * vector_to_coords changed again * nail sphinx version to below 5.1 to fix rtd (?) * update test_text control data for science * changed Brace to use VMobjectFromSVGPath * remove unused classes and methods depending on old SVG path implementation * remove style_utils and svg_path modules * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * change test_text to use monospace font * restore geometry.polygram * added get_mobject_type_class auxiliary method; changed polyline implementation to ad-hoc approach * restore test_text to previous version * skip Use tags as svgelements already populates them Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
https://github.com/ManimCommunity/manim.git
def hash_seed(self) -> tuple:
    return (
        self.__class__.__name__,
        self.svg_default,
        self.path_string_config,
        self.file_name,
        config.renderer,
    )
33
svg_mobject.py
Python
manim/mobject/svg/svg_mobject.py
309c9d41eb734ca85a7aea5533f88a6d4ee7c944
manim
1
154,496
11
8
2
36
4
0
11
18
_deploy_ray_func
FIX-#4597: Refactor Partition handling of func, args, kwargs (#4715) Co-authored-by: Iaroslav Igoshev <Poolliver868@mail.ru> Signed-off-by: Jonathan Shi <jhshi@ponder.io>
https://github.com/modin-project/modin.git
def _deploy_ray_func(func, *args, **kwargs):  # pragma: no cover
    return func(*args, **kwargs)
21
engine_wrapper.py
Python
modin/core/execution/ray/common/engine_wrapper.py
d6d503ac7c3028d871c34d9e99e925ddb0746df6
modin
1
110,871
38
13
7
122
11
0
48
76
dist_point_to_segment
Simplify/robustify segment-point distance calculation. The version in poly_editor is relatively simple because it only supports array inputs and doesn't vectorize over any input. The version in proj3d is private so the API can be changed, but it needs (currently) to at least support non-array inputs and to vectorize over `p`. - Rename the parameters to make the difference between the "segment ends" (`s0, s1`) and the "point" (`p`) parameters clearer. - Switch `p` to support (N, ndim) inputs instead of (ndim, N) (consistently with most other APIs); adjust test_lines_dists accordingly. - Use vectorized ops everywhere, which also caught the fact that previously, entries beyond the third in (what was) `p1, p2` would be silently ignored (because access was via `p1[0]`, `p1[1]`, `p2[0]`, `p2[1]`). Instead now the vectorized version naturally extends to any number of dimensions. Adjust format_coord and test_lines_dists_nowarning accordingly. - Also support vectorizing over `s0`, `s1`, if they have the same length as `p` (this comes basically for free).
https://github.com/matplotlib/matplotlib.git
def dist_point_to_segment(p, s0, s1):
    s01 = s1 - s0
    s0p = p - s0
    if (s01 == 0).all():
        return np.hypot(*s0p)
    # Project onto segment, without going past segment ends.
    p1 = s0 + np.clip((s0p @ s01) / (s01 @ s01), 0, 1) * s01
    return np.hypot(*(p - p1))
77
poly_editor.py
Python
examples/event_handling/poly_editor.py
f95ce27f2a68ae6f899b27e07bb46cef30bb23f9
matplotlib
2
126,858
49
18
21
196
31
0
58
412
_perform_iteration
[serve] Make serve agent not blocking when GCS is down. (#27526) This PR fixed several issue which block serve agent when GCS is down. We need to make sure serve agent is always alive and can make sure the external requests can be sent to the agent and check the status. - internal kv used in dashboard/agent blocks the agent. We use the async one instead - serve controller use ray.nodes which is a blocking call and blocking forever. change to use gcs client with timeout - agent use serve controller client which is a blocking call with max retries = -1. This blocks until controller is back. To enable Serve HA, we also need to setup: - RAY_gcs_server_request_timeout_seconds=5 - RAY_SERVE_KV_TIMEOUT_S=5 which we should set in KubeRay.
https://github.com/ray-project/ray.git
async def _perform_iteration(self, publisher):
    while True:
        try:
            formatted_status_string = await self._gcs_aio_client.internal_kv_get(
                DEBUG_AUTOSCALING_STATUS.encode(),
                None,
                timeout=GCS_RPC_TIMEOUT_SECONDS,
            )

            stats = self._get_all_stats()
            # Report stats only when metrics collection is enabled.
            if not self._metrics_collection_disabled:
                cluster_stats = (
                    json.loads(formatted_status_string.decode())
                    if formatted_status_string
                    else {}
                )
                records_reported = self._record_stats(stats, cluster_stats)
                self._metrics_agent.record_reporter_stats(records_reported)
            await publisher.publish_resource_usage(self._key, jsonify_asdict(stats))

        except Exception:
            logger.exception("Error publishing node physical stats.")
        await asyncio.sleep(reporter_consts.REPORTER_UPDATE_INTERVAL_MS / 1000)
119
reporter_agent.py
Python
dashboard/modules/reporter/reporter_agent.py
dac7bf17d9214dd3b79238caf0c8ec76f40328c6
ray
5
109,990
17
10
5
55
6
0
19
73
get_trifinder
Make all matplotlib.tri submodules private Users should access all elements through the outer namespace matplotlib.tri. Back-compatibility for the old module names will be added in a separate commit. If done in the same commit, git would interpret this as a modified file plus a new file and not as a rename. With the separation and the rename we keep the history.
https://github.com/matplotlib/matplotlib.git
def get_trifinder(self):
    if self._trifinder is None:
        # Default TriFinder class.
        from matplotlib.tri._trifinder import TrapezoidMapTriFinder
        self._trifinder = TrapezoidMapTriFinder(self)
    return self._trifinder
33
_triangulation.py
Python
lib/matplotlib/tri/_triangulation.py
cf8e04ddc1686dd285afdcc6e3ea8d9f29ff869b
matplotlib
2
200,574
107
13
37
648
61
0
186
508
__new__
move dummy index deduping to TensMul.__new__ Also removed _eval_subs and _xreplace. All tests pass.
https://github.com/sympy/sympy.git
def __new__(cls, *args, **kw_args):
    is_canon_bp = kw_args.get('is_canon_bp', False)
    args = list(map(_sympify, args))

    free = [get_free_indices(arg) for arg in args]
    free = set(itertools.chain(*free))  # flatten free

    newargs = []
    for arg in args:
        dum_this = set(get_dummy_indices(arg))
        dum_other = [get_dummy_indices(a) for a in newargs]
        dum_other = set(itertools.chain(*dum_other))  # flatten dum_other
        free_this = set(get_free_indices(arg))
        if len(dum_this.intersection(free)) > 0:
            exclude = free_this.union(free, dum_other)
            newarg = TensMul._dedupe_indices(arg, exclude, arg._index_structure)
        else:
            newarg = arg
        newargs.append(newarg)

    args = newargs

    # Flatten:
    args = [i for arg in args for i in (arg.args if isinstance(arg, (TensMul, Mul)) else [arg])]

    args, indices, free, dum = TensMul._tensMul_contract_indices(args, replace_indices=False)

    # Data for indices:
    index_types = [i.tensor_index_type for i in indices]
    index_structure = _IndexStructure(free, dum, index_types, indices, canon_bp=is_canon_bp)

    obj = TensExpr.__new__(cls, *args)
    obj._indices = indices
    obj._index_types = index_types[:]
    obj._index_structure = index_structure
    obj._free = index_structure.free[:]
    obj._dum = index_structure.dum[:]
    obj._free_indices = {x[0] for x in obj.free}
    obj._rank = len(obj.free)
    obj._ext_rank = len(obj._index_structure.free) + 2*len(obj._index_structure.dum)
    obj._coeff = S.One
    obj._is_canon_bp = is_canon_bp
    return obj

index_types = property(lambda self: self._index_types)
free = property(lambda self: self._free)
dum = property(lambda self: self._dum)
free_indices = property(lambda self: self._free_indices)
rank = property(lambda self: self._rank)
ext_rank = property(lambda self: self._ext_rank)
348
tensor.py
Python
sympy/tensor/tensor.py
6c55ca197b0f795047d8f8ee0d871ab36600d560
sympy
10
211,926
6
6
10
19
4
0
6
20
to_rgb
Bump min sphinx version (#11973) * Bump min sphinx version * checkpoint * comment for fully qualified names
https://github.com/bokeh/bokeh.git
def to_rgb(self) -> RGB: raise NotImplementedError
10
color.py
Python
bokeh/colors/color.py
ada85ff1dc6dc1d5857141b3202733870de5c809
bokeh
1
95,655
134
22
73
750
42
0
228
973
create_trace
chore(suspect-spans): Add some suspect spans data for mock data (#31168)
https://github.com/getsentry/sentry.git
def create_trace(slow, start_timestamp, timestamp, user, trace_id, parent_span_id, data):
    frontend = data.get("frontend")
    current_span_id = uuid4().hex[:16]
    spans = []
    new_start = start_timestamp + timedelta(milliseconds=random_normal(50, 25, 10))
    new_end = timestamp - timedelta(milliseconds=random_normal(50, 25, 10))
    for child in data["children"]:
        span_id = uuid4().hex[:16]
        description = f"GET {child['transaction']}"
        duration = random_normal((new_end - new_start).total_seconds(), 0.25, 0.01)
        spans.append(
            {
                "same_process_as_parent": True,
                "op": "http",
                "description": description,
                "data": {
                    "duration": duration,
                    "offset": 0.02,
                },
                "span_id": span_id,
                "trace_id": trace_id,
                "hash": hash_values([description]),
                # not the best but just set the exclusive time
                # equal to the duration to get some span data
                "exclusive_time": duration,
            }
        )
        create_trace(
            slow,
            start_timestamp + timedelta(milliseconds=random_normal(50, 25, 10)),
            timestamp - timedelta(milliseconds=random_normal(50, 25, 10)),
            user,
            trace_id,
            span_id,
            child,
        )

    for _ in range(data.get("errors", 0)):
        create_sample_event(
            project=data["project"],
            platform="javascript" if frontend else "python",
            user=user,
            transaction=data["transaction"],
            contexts={
                "trace": {
                    "type": "trace",
                    "trace_id": trace_id,
                    "span_id": random.choice(spans + [{"span_id": current_span_id}])["span_id"],
                }
            },
        )

    create_sample_event(
        project=data["project"],
        platform="javascript-transaction" if frontend else "transaction",
        transaction=data["transaction"],
        event_id=uuid4().hex,
        user=user,
        timestamp=timestamp,
        start_timestamp=start_timestamp,
        measurements={
            "fp": {"value": random_normal(1250 - 50, 200, 500)},
            "fcp": {"value": random_normal(1250 - 50, 200, 500)},
            "lcp": {"value": random_normal(2800 - 50, 400, 2000)},
            "fid": {"value": random_normal(5 - 0.125, 2, 1)},
        }
        if frontend
        else {},
        # Root
        parent_span_id=parent_span_id,
        span_id=current_span_id,
        trace=trace_id,
        spans=spans,
        hash=hash_values([data["transaction"]]),
        # not the best but just set the exclusive time
        # equal to the duration to get some span data
        exclusive_time=(timestamp - start_timestamp).total_seconds(),
    )

    # try to give clickhouse some breathing room
    if slow:
        time.sleep(0.05)
477
samples.py
Python
src/sentry/utils/samples.py
e599ee88519f1b114735902bd7d4f96be4404e78
sentry
7
178,418
22
10
15
125
9
0
51
144
replaceTriggerModule
Plugins: Added ability to provide implicit fake dependencies * With this, the multiprocessing plugin can become always on, avoiding one of the bigger pitfalls of forked processing become unusable with hard to detect errors.
https://github.com/Nuitka/Nuitka.git
def replaceTriggerModule(old, new):
    found = None
    for key, value in pre_modules.items():
        if value is old:
            found = key
            break

    if found is not None:
        pre_modules[found] = new

    found = None
    for key, value in post_modules.items():
        if value is old:
            found = key
            break

    if found is not None:
        post_modules[found] = new
78
Plugins.py
Python
nuitka/plugins/Plugins.py
855a78e2dea8326662ebc8eeccbfa38eff43b3e7
Nuitka
7
317,862
9
7
4
34
4
0
10
31
async_reset
Add bluetooth options flow to pick the adapter (#75701)
https://github.com/home-assistant/core.git
def async_reset(self) -> None:
    self.history = {}
    self._setup = False
19
models.py
Python
homeassistant/components/bluetooth/models.py
a813cf987bf11bd9c0ee6aea04d2299f39b26e07
core
1
285,203
28
14
17
141
17
0
33
71
get_treasury_maturities
Here we merge all API Refactor related branches (#2236) * Update api.py * Updated forex menu * refactor ycrv command * refactor ycrv command black * refactor ecocal command * Minh changes * Adding space to test pushing * title fix ecocal df * get economic calendar annotation * fix investingcom tests * refactor index command * refactor overview command * give defaults to wsj view function args * rename date args investincom * refacto bigmac command * fix ecocal typo * refactor rtps command * alphavantage gdp * alphavantage gdp per capita * alphavantage cpi * alphavantage tyld * alphavantage inf * refactor macro command * refactor macro command w helpers * refactor treasury command * fix macro on terminal * treasury labels * refactor maturities * update treasury maturities doc strings * refactor get economic calendar finhub * refactor map command api * display map filter choices * route economy api to performance map * route economy api to performance map * display group choices on valuation command * refactor performance and valuation commands * refactor spectrum model and view * add choices to spectrum controller * delete image after view * fix model tests finviz * fix finciz view tests * refactor futures * fix some tests * fix more tests * fix controller test * refactor fred series notes * update fred notes docstring * refacto fred series ids * fix pred and qa when empty datasets * refactor fred * uncomment stuff * refacto get series data * fix some tests * set defaults on args * refactor fred yield curve * black * fix spell and remove ecocal names * fix linting * linting * pylint fix * change dangerous defaults * Working through crypto fixes (#2256) * Working through crypto fixes * Continued adding crypto stuff * Added crypto overview * Added test fixes * Added fixtures * Fixed tests * Fixed charting issue * Removed broken APIs * Final adjustments * Added test fixes * map get groups and get ycrv countries into old api * exposed econdb helper funcs * remove helpers * refactor search indices * linting * refactor arg currency * pylint from currency * Started switching crpyto ascending to ascend * Merging * Portfolio model arguements, params, and docstring * Refactored for etf commands (#2292) * Refactored for etf commands * Fixed tests * Added load command * Fixed menu * Portfolio logic fixes * Added econometrics (#2260) * Added econometrics * Fixed tests * Simplified API * Added test fixes * Added test csv * Allowed examples to be loaded * Fund refactor (#2291) * Fund refactor * Changed fund_name and fund to name * Changed ascending to ascend * Stock menu refactoring for easier API usage (#2194) * Stocks refactoring for easier API usage * Linting * Refactor newly added features * Linting * Fixing tests * Refactor common files used by stocks menu * Fixing flake8 * Fix linting and tests * Linting * Fix flake8 * refactor insider_data * refactor mentions * refactor watchlist * refactor sentiment * refactor sentiment * fix yahoofinance tests * refactor load and candle * refactor get_news and display_news * refactor stocks.ins.act * candle default matplotlib * fix yahoofinance_view tests * fix ark model tests * fix ark view tests * fix business insider model * fix business insider view * refactor csimarket model * fix tests csi market model * update dd controller * fix get suppliers tests * fix dd controller tests * fix finhub tests * fix finviz tests * fix fmp tests * fix marketwatch tests * corrected argument keywords in test_bt_model * corrected argument keywords in test_bt_view * refactor fa 
controller * refactor marketwatch view * refactor gov controller * fix tests fa av * fix tests elect * fix dcf tests * fix polygon tests * fix fmp tests * fix quiverquant tests * fix yahoofinance fa tests * fix more fa tests * fix insider tests * fix more tests * fix more tests * fix options tests * fix stock gov tests * fix tests test_ba_controller * fix tests for test_finviz_compare_model.py * fixed 2 tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fixed tests * fix final tests * fixed tests * fixed tests * Fix tests * black * forgot to black tests * fixed tests * fixed tests * fixed tests * fixed tests * flakefix * Tests + code : Stocks / Discovery * fix tests * added recorder * fixed tests * fixed tests * black * black * remove unused imports * refactor display raw * sia dicts fix * pylint * linting * remove dangerous default * fix tests * fix beta model test * black * skip screener qa test * change sector path to sectors * update tests readme * fix metric defaults * black * substitute lost ticker * defaults cpic * another round on sia * refactor cramer * reduce default tweets on sentiment * refactor yf hist, corr, volume * arkorders default * refactor income, balance, cashflow * refacto scorr, screener, getfinnhub * refactor stockgrid * ibkr refactor * another round on stockgrid * add dividens end point * refactor discovery endpoints * update docstrings with similar input * refactor messages * refactor ba * refactor regioons * refactor twitter sentiment * refactor hist * refactor regions * give default to timeframe * refactor bunch of defaults and arg names * remove leftover imports * refactor vwap * let tests run * fix tests * fix stock tests * fix stockanalysis tests * flake * MYPY * Made important changes * added fixes * Fixed big issue * Added fixes to tests * fix qa tests * fix tests * fix 1 more test * last stocks failing * fix crypto test Co-authored-by: Chavithra PARANA <chavithra@gmail.com> Co-authored-by: montezdesousa <montezdesousa@gmail.com> Co-authored-by: hjoaquim <h.joaquim@campus.fct.unl.pt> Co-authored-by: montezdesousa <79287829+montezdesousa@users.noreply.github.com> Co-authored-by: colin99d <colin99delahunty@gmail.com> * fix portfolio tests * change period to window * update ca docstrings * refactor get_similar_companies func * Fixed * Update CI * Update CI 2 * Update CI 3 * Update dependencies Co-authored-by: colin99d <colin99delahunty@gmail.com> Co-authored-by: Colin Delahunty <72827203+colin99d@users.noreply.github.com> Co-authored-by: montezdesousa <montezdesousa@gmail.com> Co-authored-by: James Simmons <simmonsj330@gmail.com> Co-authored-by: Theodore Aptekarev <aptekarev@gmail.com> Co-authored-by: minhhoang1023 <40023817+minhhoang1023@users.noreply.github.com> Co-authored-by: jose-donato <43375532+jose-donato@users.noreply.github.com> Co-authored-by: montezdesousa <79287829+montezdesousa@users.noreply.github.com> Co-authored-by: northern-64bit <75195383+northern-64bit@users.noreply.github.com> Co-authored-by: hjoaquim <h.joaquim@campus.fct.unl.pt>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def get_treasury_maturities() -> pd.DataFrame:
    instrument_maturities = {
        instrument: ", ".join(values["maturities"].keys())
        for instrument, values in TREASURIES["instruments"].items()
    }

    df = pd.DataFrame.from_dict(instrument_maturities, orient="index")

    df.loc["average"] = "Defined by function"

    df.index.name = "Instrument"
    df.columns = ["Maturities"]

    return df
79
econdb_model.py
Python
openbb_terminal/economy/econdb_model.py
9e1a58e2dbedec4e4a9f9c2e32ddf091776c606b
OpenBBTerminal
2
212,760
6
8
2
27
3
0
6
12
advanced_mode
Removed the "Edit Me (This Program)" button since it caused confusion. Right click to choose "Edit me". Advanced mode is not on by default. Added PySimpleGUI version checking to warn about edit features
https://github.com/PySimpleGUI/PySimpleGUI.git
def advanced_mode(): return sg.user_settings_get_entry('-advanced mode-', True)
14
Browser_START_HERE_Demo_Programs_Browser.py
Python
DemoPrograms/Browser_START_HERE_Demo_Programs_Browser.py
1dbc22c41e0685ef8f57ad7bb4005fc04381e049
PySimpleGUI
1
19,553
9
11
2
44
7
0
9
15
cmd_list_to_shell
Code reorg utils into utils module reduces complexity (#4990) * Split apart the massive utils.py into a utils module
https://github.com/pypa/pipenv.git
def cmd_list_to_shell(args): return " ".join(shlex.quote(str(token)) for token in args)
25
shell.py
Python
pipenv/utils/shell.py
3387881a6d4fc2d8bdc0f05c484cb2f7222acfb8
pipenv
2
95,050
31
16
9
135
19
0
37
112
convert_category_value
feat(perf issues): Add search syntax for type and category (#38064) Add new search syntax for `type` and `category`
https://github.com/getsentry/sentry.git
def convert_category_value(value, projects, user, environments):
    if features.has("organizations:performance-issue-details-backend", projects[0].organization):
        results = []
        for category in value:
            group_category = getattr(GroupCategory, category.upper(), None)
            if not group_category:
                raise InvalidSearchQuery(f"Invalid category value of '{category}'")
            results.extend([type.value for type in GROUP_CATEGORY_TO_TYPES.get(group_category, [])])
        return results
84
issue_search.py
Python
src/sentry/api/issue_search.py
7be51e3ba18065e60df1f29e45689c68d414d471
sentry
5
126,747
62
10
6
101
10
0
83
149
test_simple_deployment_method_call_chain
[Serve] Use Async Handle for DAG Execution (#27411)
https://github.com/ray-project/ray.git
async def test_simple_deployment_method_call_chain(serve_instance):
    counter = Counter.bind(0)
    counter.inc.bind(1)
    counter.inc.bind(2)
    ray_dag = counter.get.bind()
    assert ray.get(ray_dag.execute()) == 3

    # note(simon): Equivalence is not guaranteed here and
    # nor should it be a supported workflow.
    # (
    #     serve_root_dag,
    #     deserialized_serve_root_dag_node,
    # ) = await _test_deployment_json_serde_helper(ray_dag, expected_num_deployments=1)
    # # Deployment to Deployment, possible DeploymentMethodNode call chain
    # # Both serve dags uses the same underlying deployments, thus the rhs value
    # # went through two execute()
    # assert ray.get(serve_root_dag.execute()) + ray.get(ray_dag.execute()) == ray.get(
    #     deserialized_serve_root_dag_node.execute()
    # )
52
test_json_serde.py
Python
python/ray/serve/tests/test_json_serde.py
efee158cecc49fbec5527e75e17faadba4fac48d
ray
1
35,025
31
13
6
79
11
0
37
121
save_pretrained
PoC for a ProcessorMixin class (#15549) * PoC for a ProcessorMixin class * Documentation * Apply suggestions from code review Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Roll out to other processors * Add base feature extractor class in init * Use args and kwargs Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com> Co-authored-by: Suraj Patil <surajp815@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
https://github.com/huggingface/transformers.git
def save_pretrained(self, save_directory):
    for attribute_name in self.attributes:
        attribute = getattr(self, attribute_name)
        # Include the processor class in the attribute config so this processor can then be reloaded with the
        # `AutoProcessor` API.
        if hasattr(attribute, "_set_processor_class"):
            attribute._set_processor_class(self.__class__.__name__)
        attribute.save_pretrained(save_directory)
47
processing_utils.py
Python
src/transformers/processing_utils.py
b5c6fdecf0cab6ffe22bee2ca5b8474afba0d813
transformers
3
267,311
85
15
38
393
26
0
109
651
parse_args
Allow result sha to be overriden with local sha (#77832)
https://github.com/ansible/ansible.git
def parse_args():
    source = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

    parser = argparse.ArgumentParser(description='Report on incidental test coverage downloaded from Azure Pipelines.')
    parser.add_argument('result', type=directory, help='path to directory containing test results downloaded from Azure Pipelines')
    parser.add_argument('--output', type=optional_directory, default=os.path.join(source, 'test', 'results', '.tmp', 'incidental'), help='path to directory where reports should be written')
    parser.add_argument('--source', type=optional_directory, default=source, help='path to git repository containing Ansible source')
    parser.add_argument('--skip-checks', action='store_true', help='skip integrity checks, use only for debugging')
    parser.add_argument('--ignore-cache', dest='use_cache', action='store_false', help='ignore cached files')
    parser.add_argument('-v', '--verbose', action='store_true', help='increase verbosity')
    parser.add_argument('--result-sha', default=None, help='Override the result sha')

    targets = parser.add_mutually_exclusive_group()
    targets.add_argument('--targets', type=regex, default='^incidental_', help='regex for targets to analyze, default: %(default)s')
    targets.add_argument('--plugin-path', help='path to plugin to report incidental coverage on')

    if argcomplete:
        argcomplete.autocomplete(parser)

    args = parser.parse_args()

    return args
226
incidental.py
Python
hacking/azp/incidental.py
a415697d70794f0e6e09135e1e107390c2f46435
ansible
2
145,825
35
14
27
134
20
0
42
122
rows
[RLlib] Issue 22625: `MultiAgentBatch.timeslices()` does not behave as expected. (#22657)
https://github.com/ray-project/ray.git
def rows(self) -> Iterator[Dict[str, TensorType]]:
    seq_lens = None if self.get(SampleBatch.SEQ_LENS, 1) is None else 1

    self_as_dict = {k: v for k, v in self.items()}

    for i in range(self.count):
        yield tree.map_structure_with_path(
            lambda p, v: v[i] if p[0] != self.SEQ_LENS else seq_lens,
            self_as_dict,
        )
90
sample_batch.py
Python
rllib/policy/sample_batch.py
c0ade5f0b7cfc9aeba46cde7af3b36068a6420df
ray
5
196,746
96
17
29
309
29
0
146
556
warns
Test that the stacklevel is set correctly in warns() The stacklevel makes it so that the warning message shows the actual user line of code (the default stacklevel=1 just shows the code that called warn()). This is somewhat ambiguous in a lot of cases, which is why there's a flag to disable it. The issue is that if the function that issues the warning is not always the top-level function called by the user, or always in the same call stack, there's no single value for stacklevel that will show the user code. However, at worst, a wrong stacklevel will just show a useless line of code, which is no different from the default stacklevel. It's mostly useful for deprecation warnings (warns_deprecated_sympy()), since in that case, we only care about the user calling the deprecated function and that's it (no other function should call deprecated code).
https://github.com/sympy/sympy.git
def warns(warningcls, *, match='', test_stacklevel=True):
    # Absorbs all warnings in warnrec
    with warnings.catch_warnings(record=True) as warnrec:
        # Hide all warnings but make sure that our warning is emitted
        warnings.simplefilter("ignore")
        warnings.filterwarnings("always", match, warningcls)
        # Now run the test
        yield

    # Raise if expected warning not found
    if not any(issubclass(w.category, warningcls) for w in warnrec):
        msg = ('Failed: DID NOT WARN.'
               ' No warnings of type %s was emitted.'
               ' The list of emitted warnings is: %s.'
               ) % (warningcls, [w.message for w in warnrec])
        raise Failed(msg)

    if test_stacklevel:
        for f in inspect.stack():
            thisfile = f.filename
            file = os.path.split(thisfile)[1]
            if file.startswith('test_'):
                break
            elif file == 'doctest.py':
                # skip the stacklevel testing in the doctests of this
                # function
                return
        else:
            raise RuntimeError("Could not find the file for the given warning to test the stacklevel")
        for w in warnrec:
            if w.filename != thisfile:
                msg = f.replace('\n', ' ')
                raise Failed(msg)
169
pytest.py
Python
sympy/testing/pytest.py
1659fa55e55d6c1b08b75bde1cc313b0b952a99d
sympy
10
63,935
95
25
50
651
52
0
135
85
get_so_with_invoices
feat: Payment Terms Status report - calculate status at runtime for payment terms based on invoices - invoices are used in FIFO method
https://github.com/frappe/erpnext.git
def get_so_with_invoices(filters):
    sorders = []

    so = qb.DocType("Sales Order")
    ps = qb.DocType("Payment Schedule")
    datediff = query_builder.CustomFunction("DATEDIFF", ["cur_date", "due_date"])
    ifelse = query_builder.CustomFunction("IF", ["condition", "then", "else"])

    conditions = get_conditions(filters)
    query_so = (
        qb.from_(so)
        .join(ps)
        .on(ps.parent == so.name)
        .select(
            so.name,
            so.transaction_date.as_("submitted"),
            ifelse(datediff(ps.due_date, functions.CurDate()) < 0, "Overdue", "Unpaid").as_("status"),
            ps.payment_term,
            ps.description,
            ps.due_date,
            ps.invoice_portion,
            ps.payment_amount,
            ps.paid_amount,
        )
        .where(
            (so.docstatus == 1)
            & (so.payment_terms_template != "NULL")
            & (so.company == conditions.company)
            & (so.transaction_date[conditions.start_date : conditions.end_date])
        )
        .orderby(so.name, so.transaction_date, ps.due_date)
    )

    if conditions.sales_order != []:
        query_so = query_so.where(so.name.isin(conditions.sales_order))

    sorders = query_so.run(as_dict=True)

    invoices = []
    if sorders != []:
        soi = qb.DocType("Sales Order Item")
        si = qb.DocType("Sales Invoice")
        sii = qb.DocType("Sales Invoice Item")
        query_inv = (
            qb.from_(sii)
            .right_join(si)
            .on(si.name == sii.parent)
            .inner_join(soi)
            .on(soi.name == sii.so_detail)
            .select(sii.sales_order, sii.parent.as_("invoice"), si.base_net_total.as_("invoice_amount"))
            .where((sii.sales_order.isin([x.name for x in sorders])) & (si.docstatus == 1))
            .groupby(sii.parent)
        )
        invoices = query_inv.run(as_dict=True)

    return sorders, invoices
402
payment_terms_status_for_sales_order.py
Python
erpnext/selling/report/payment_terms_status_for_sales_order/payment_terms_status_for_sales_order.py
1bac7930834d6f688950e836c45305a62e7ecb3f
erpnext
4
118,614
58
16
13
206
25
0
79
269
test_named_legacy_add_rows_with_clear_queue
Rename and refactor `Report` machinery (#4141) This refactor renames (almost) everything related to the outdated "report" concept with more precise concepts that we use throughout our code, primarily "script run", "session", and "app".
https://github.com/streamlit/streamlit.git
def test_named_legacy_add_rows_with_clear_queue(self):
    for method in self._get_named_data_methods():
        # Create a new data-carrying element (e.g. st._legacy_dataframe)
        el = method(DATAFRAME)

        # Make sure it has 2 rows in it.
        df_proto = data_frame._get_data_frame(self.get_delta_from_queue())
        num_rows = len(df_proto.data.cols[0].int64s.data)
        self.assertEqual(num_rows, 2)

        # This is what we're testing:
        self.forward_msg_queue.clear()
        el._legacy_add_rows(mydata1=NEW_ROWS)

        # Make sure there are 3 rows in the delta that got appended.
        ar = self.get_delta_from_queue().add_rows
        num_rows = len(ar.data.data.cols[0].int64s.data)
        self.assertEqual(num_rows, 3)

        # Clear the queue so the next loop is like a brand new test.
        get_script_run_ctx().reset()
        self.forward_msg_queue.clear()
123
legacy_add_rows_test.py
Python
lib/tests/streamlit/legacy_add_rows_test.py
704eab3478cf69847825b23dabf15813a8ac9fa2
streamlit
2
55,850
17
12
6
73
12
0
17
63
test_version_none_if_source_file_cannot_be_determined
Use `inspect.getsourcefile` instead of `__globals__`
https://github.com/PrefectHQ/prefect.git
def test_version_none_if_source_file_cannot_be_determined(self, monkeypatch):
    monkeypatch.setattr(
        "prefect.flows.inspect.getsourcefile", MagicMock(return_value=None)
    )

    f = Flow(name="test", fn=lambda **kwargs: 42)
    assert f.version is None
43
test_flows.py
Python
tests/test_flows.py
f4911664c3377f6193b1bd6b7eece2f60e55c3b2
prefect
1
191,528
33
12
10
134
14
0
42
79
test_adding_document_already_exists
wip: add method for both docstore and embeddings (#119) this will break atm but wanted to get thoughts on implementation. 1. should add() be on docstore interface? 2. should InMemoryDocstore change to take a list of documents as init? (makes this slightly easier to implement in FAISS -- if we think it is less clean then could expose a method to get the number of documents currently in the dict, and perform the logic of creating the necessary dictionary in the FAISS.add_texts method. Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
https://github.com/hwchase17/langchain.git
def test_adding_document_already_exists() -> None:
    _dict = {"foo": Document(page_content="bar")}
    docstore = InMemoryDocstore(_dict)
    new_dict = {"foo": Document(page_content="foo")}

    # Test that error is raised.
    with pytest.raises(ValueError):
        docstore.add(new_dict)

    # Test that old document is the same.
    bar_output = docstore.search("foo")
    assert isinstance(bar_output, Document)
    assert bar_output.page_content == "bar"
72
test_inmemory.py
Python
tests/unit_tests/docstore/test_inmemory.py
315b0c09c614fa44daa61529d1f1da2fe827b16c
langchain
1
264,477
14
8
6
65
6
1
14
43
badge
Standardize on get_FOO_color() method for returning ChoiceField colors
https://github.com/netbox-community/netbox.git
def badge(value, bg_color='secondary', show_empty=False):
    return {
        'value': value,
        'bg_color': bg_color,
        'show_empty': show_empty,
    }


@register.inclusion_tag('builtins/checkmark.html')
@register.inclusion_tag('builtins/checkmark.html')
29
tags.py
Python
netbox/utilities/templatetags/builtins/tags.py
1319b62acb07867150f7533149cfcce46d96252c
netbox
1
213,258
20
10
6
69
8
0
21
47
supports_inplace
added methods supports_inplace and assert_supports_inplace.
https://github.com/unifyai/ivy.git
def supports_inplace(x):
    if ivy.is_variable(x):
        return ivy.inplace_variables_supported()
    elif ivy.is_array(x):
        return ivy.inplace_arrays_supported()
    raise Exception('Input x must be either a variable or an array.')
39
general.py
Python
ivy/core/general.py
b8e9959d84035a7578ad84528a61fc5fec82e864
ivy
3
6,958
6
9
3
41
6
0
6
19
mkdir
Changed CheckpointManager to write the latest checkpoint to a consistent filename (#2123)
https://github.com/ludwig-ai/ludwig.git
def mkdir(s):
    if not os.path.exists(s):
        os.makedirs(s)
23
checkpoint_utils.py
Python
ludwig/utils/checkpoint_utils.py
0918b0b5dd614aa1d3909506a5913913679e8b1f
ludwig
2
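The `mkdir` helper above checks for existence before creating the directory, which leaves a small window in which another process can create it first. A hedged alternative sketch, not part of the Ludwig commit: `os.makedirs` with `exist_ok=True` does the same job in one call.

```python
import os

def mkdir(path: str) -> None:
    # exist_ok=True makes makedirs a no-op when the directory already exists,
    # avoiding the check-then-create race in the helper above.
    os.makedirs(path, exist_ok=True)

mkdir("/tmp/checkpoints")  # illustrative path
```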
22,230
14
9
7
73
9
1
18
42
get_contained_extras
Rename notpip to pip. Vendor in pip-22.2.1 and latest requirementslib and vistir.
https://github.com/pypa/pipenv.git
def get_contained_extras(marker):
    if not marker:
        return set()
    extras = set()
    marker = _ensure_marker(marker)
    _markers_collect_extras(marker._markers, extras)
    return extras


@lru_cache(maxsize=1024)
@lru_cache(maxsize=1024)
35
markers.py
Python
pipenv/vendor/requirementslib/models/markers.py
cd5a9683be69c86c8f3adcd13385a9bc5db198ec
pipenv
2
261,219
7
9
2
37
7
0
7
13
is_scalar_nan
DOC Ensure that sklearn.utils.is_scalar_nan passes numpydoc validation (#24562) Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com>
https://github.com/scikit-learn/scikit-learn.git
def is_scalar_nan(x):
    return isinstance(x, numbers.Real) and math.isnan(x)
22
__init__.py
Python
sklearn/utils/__init__.py
0c22fa1da475531aa31b5c67f427b5e1461835b5
scikit-learn
2
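A hedged usage sketch for the `is_scalar_nan` helper above (the calls below are illustrative, not taken from scikit-learn's test suite): the function is true only for scalar floating-point NaN values, so `None` and containers fall through the `numbers.Real` check.

```python
import numpy as np
from sklearn.utils import is_scalar_nan

assert is_scalar_nan(float("nan"))
assert is_scalar_nan(np.nan)
assert not is_scalar_nan(None)       # not a numbers.Real instance
assert not is_scalar_nan(1.0)        # a regular float
assert not is_scalar_nan([np.nan])   # a container, not a scalar
```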
256,892
20
12
19
96
13
0
23
67
test_load_yaml_pipeline_with_wrong_nodes
Pipeline's YAML: syntax validation (#2226) * Add BasePipeline.validate_config, BasePipeline.validate_yaml, and some new custom exception classes * Make error composition work properly * Clarify typing * Help mypy a bit more * Update Documentation & Code Style * Enable autogenerated docs for Milvus1 and 2 separately * Revert "Enable autogenerated docs for Milvus1 and 2 separately" This reverts commit 282be4a78a6e95862a9b4c924fc3dea5ca71e28d. * Update Documentation & Code Style * Re-enable 'additionalProperties: False' * Add pipeline.type to JSON Schema, was somehow forgotten * Disable additionalProperties on the pipeline properties too * Fix json-schemas for 1.1.0 and 1.2.0 (should not do it again in the future) * Cal super in PipelineValidationError * Improve _read_pipeline_config_from_yaml's error handling * Fix generate_json_schema.py to include document stores * Fix json schemas (retro-fix 1.1.0 again) * Improve custom errors printing, add link to docs * Add function in BaseComponent to list its subclasses in a module * Make some document stores base classes abstract * Add marker 'integration' in pytest flags * Slighly improve validation of pipelines at load * Adding tests for YAML loading and validation * Make custom_query Optional for validation issues * Fix bug in _read_pipeline_config_from_yaml * Improve error handling in BasePipeline and Pipeline and add DAG check * Move json schema generation into haystack/nodes/_json_schema.py (useful for tests) * Simplify errors slightly * Add some YAML validation tests * Remove load_from_config from BasePipeline, it was never used anyway * Improve tests * Include json-schemas in package * Fix conftest imports * Make BasePipeline abstract * Improve mocking by making the test independent from the YAML version * Add exportable_to_yaml decorator to forget about set_config on mock nodes * Fix mypy errors * Comment out one monkeypatch * Fix typing again * Improve error message for validation * Add required properties to pipelines * Fix YAML version for REST API YAMLs to 1.2.0 * Fix load_from_yaml call in load_from_deepset_cloud * fix HaystackError.__getattr__ * Add super().__init__()in most nodes and docstore, comment set_config * Remove type from REST API pipelines * Remove useless init from doc2answers * Call super in Seq3SeqGenerator * Typo in deepsetcloud.py * Fix rest api indexing error mismatch and mock version of JSON schema in all tests * Working on pipeline tests * Improve errors printing slightly * Add back test_pipeline.yaml * _json_schema.py supports different versions with identical schemas * Add type to 0.7 schema for backwards compatibility * Fix small bug in _json_schema.py * Try alternative to generate json schemas on the CI * Update Documentation & Code Style * Make linux CI match autoformat CI * Fix super-init-not-called * Accidentally committed file * Update Documentation & Code Style * fix test_summarizer_translation.py's import * Mock YAML in a few suites, split and simplify test_pipeline_debug_and_validation.py::test_invalid_run_args * Fix json schema for ray tests too * Update Documentation & Code Style * Reintroduce validation * Usa unstable version in tests and rest api * Make unstable support the latest versions * Update Documentation & Code Style * Remove needless fixture * Make type in pipeline optional in the strings validation * Fix schemas * Fix string validation for pipeline type * Improve validate_config_strings * Remove type from test p[ipelines * Update Documentation & Code Style * Fix test_pipeline * Removing more 
type from pipelines * Temporary CI patc * Fix issue with exportable_to_yaml never invoking the wrapped init * rm stray file * pipeline tests are green again * Linux CI now needs .[all] to generate the schema * Bugfixes, pipeline tests seems to be green * Typo in version after merge * Implement missing methods in Weaviate * Trying to avoid FAISS tests from running in the Milvus1 test suite * Fix some stray test paths and faiss index dumping * Fix pytest markers list * Temporarily disable cache to be able to see tests failures * Fix pyproject.toml syntax * Use only tmp_path * Fix preprocessor signature after merge * Fix faiss bug * Fix Ray test * Fix documentation issue by removing quotes from faiss type * Update Documentation & Code Style * use document properly in preprocessor tests * Update Documentation & Code Style * make preprocessor capable of handling documents * import document * Revert support for documents in preprocessor, do later * Fix bug in _json_schema.py that was breaking validation * re-enable cache * Update Documentation & Code Style * Simplify calling _json_schema.py from the CI * Remove redundant ABC inheritance * Ensure exportable_to_yaml works only on implementations * Rename subclass to class_ in Meta * Make run() and get_config() abstract in BasePipeline * Revert unintended change in preprocessor * Move outgoing_edges_input_node check inside try block * Rename VALID_CODE_GEN_INPUT_REGEX into VALID_INPUT_REGEX * Add check for a RecursionError on validate_config_strings * Address usages of _pipeline_config in data silo and elasticsearch * Rename _pipeline_config into _init_parameters * Fix pytest marker and remove unused imports * Remove most redundant ABCs * Rename _init_parameters into _component_configuration * Remove set_config and type from _component_configuration's dict * Remove last instances of set_config and replace with super().__init__() * Implement __init_subclass__ approach * Simplify checks on the existence of _component_configuration * Fix faiss issue * Dynamic generation of node schemas & weed out old schemas * Add debatable test * Add docstring to debatable test * Positive diff between schemas implemented * Improve diff printing * Rename REST API YAML files to trigger IDE validation * Fix typing issues * Fix more typing * Typo in YAML filename * Remove needless type:ignore * Add tests * Fix tests & validation feedback for accessory classes in custom nodes * Refactor RAGeneratorType out * Fix broken import in conftest * Improve source error handling * Remove unused import in test_eval.py breaking tests * Fix changed error message in tests matches too * Normalize generate_openapi_specs.py and generate_json_schema.py in the actions * Fix path to generate_openapi_specs.py in autoformat.yml * Update Documentation & Code Style * Add test for FAISSDocumentStore-like situations (superclass with init params) * Update Documentation & Code Style * Fix indentation * Remove commented set_config * Store model_name_or_path in FARMReader to use in DistillationDataSilo * Rename _component_configuration into _component_config * Update Documentation & Code Style Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
https://github.com/deepset-ai/haystack.git
def test_load_yaml_pipeline_with_wrong_nodes(tmp_path):
    with open(tmp_path / "tmp_config.yml", "w") as tmp_file:
        tmp_file.write(
            f
        )
    with pytest.raises(PipelineConfigError) as e:
        Pipeline.load_from_yaml(path=tmp_path / "tmp_config.yml")
    assert "not_existing_node" in str(e)
51
test_pipeline_yaml.py
Python
test/test_pipeline_yaml.py
11cf94a9652a577732941f27ad59eb7c8bc5063e
haystack
1
295,968
5
6
39
18
4
0
5
8
test_database_corruption_while_running
Reduce memory pressure during database migration (#69628)
https://github.com/home-assistant/core.git
async def test_database_corruption_while_running(hass, tmpdir, caplog):
264
test_init.py
Python
tests/components/recorder/test_init.py
66f0a3816a7341f0726a01f4dfdbd1ff47c27d1b
core
1
77,976
8
10
2
38
5
0
8
22
get_menu_item
Deprecate wagtail.contrib.modeladmin.menus.SubMenu in favour of wagtail.admin.menu.Menu The Menu class was not originally designed to accept menu items at constructor time (instead requiring them to be passed via hooks); ModelAdmin's SubMenu class patched this functionality in, and the documentation for extending admin views piggybacked on this. Add this functionality to the base Menu class so that we don't have this unnecessary dependency on ModelAdmin.
https://github.com/wagtail/wagtail.git
def get_menu_item(self, order=None):
    return ModelAdminMenuItem(self, order or self.get_menu_order())
23
options.py
Python
wagtail/contrib/modeladmin/options.py
b8a9a2d319b06fc2318d68d05b5a6cdf85b5b33d
wagtail
2
311,818
6
7
7
28
4
0
6
20
unique_id
Add missing type hints to homekit_controller (#65368)
https://github.com/home-assistant/core.git
def unique_id(self) -> str:
    return self.pairing_data["AccessoryPairingID"]
15
connection.py
Python
homeassistant/components/homekit_controller/connection.py
9f5d77e0df957c20a2af574d706140786f0a551a
core
1
290,519
10
9
3
41
7
0
11
25
extra_state_attributes
Fix accelerator sensor in fibaro integration (#81237) * Fix accelerator sensor in fibaro integration * Implement suggestions from code review * Implement suggestions from code review * Changes as suggested in code review * Adjust as suggested in code review
https://github.com/home-assistant/core.git
def extra_state_attributes(self) -> Mapping[str, Any] | None:
    return super().extra_state_attributes | self._own_extra_state_attributes
25
binary_sensor.py
Python
homeassistant/components/fibaro/binary_sensor.py
1fe85c9b1700f36d41374490701c9f8738137273
core
1
34,075
13
10
7
80
9
0
16
49
create_dummy_object
Better dummies (#15148) * Better dummies * See if this fixes the issue * Fix quality * Style * Add doc for DummyObject
https://github.com/huggingface/transformers.git
def create_dummy_object(name, backend_name):
    if name.isupper():
        return DUMMY_CONSTANT.format(name)
    elif name.islower():
        return DUMMY_FUNCTION.format(name, backend_name)
    else:
        return DUMMY_CLASS.format(name, backend_name)
49
check_dummies.py
Python
utils/check_dummies.py
1b730c3d11fdad0180ee9f9d3da9cff933c3b264
transformers
3
21,809
6
8
5
29
4
0
6
20
end
Update tomlkit==0.9.2 Used: python -m invoke vendoring.update --package=tomlkit
https://github.com/pypa/pipenv.git
def end(self) -> bool:
    return self._src.end()
16
parser.py
Python
pipenv/vendor/tomlkit/parser.py
8faa74cdc9da20cfdcc69f5ec29b91112c95b4c9
pipenv
1
110,498
94
16
34
394
18
0
168
660
set_array
ENH: Allow RGB(A) arrays for pcolormesh Allow a user to set the array values to explicit colors with RGB(A) values in the 3rd dimension.
https://github.com/matplotlib/matplotlib.git
def set_array(self, A):
    height, width = self._coordinates.shape[0:-1]
    misshapen_data = False
    faulty_data = False

    if self._shading == 'flat':
        h, w = height-1, width-1
    else:
        h, w = height, width

    if A is not None:
        shape = np.shape(A)
        if len(shape) == 1:
            if shape[0] != (h*w):
                faulty_data = True
        elif shape[:2] != (h, w):
            if np.prod(shape[:2]) == (h * w):
                misshapen_data = True
            else:
                faulty_data = True
        elif len(shape) == 3 and shape[2] not in {3, 4}:
            # 3D data must be RGB(A) (h, w, [3,4])
            # the (h, w) check is taken care of above
            raise ValueError(
                f"For X ({width}) and Y ({height}) with "
                f"{self._shading} shading, the expected shape of "
                f"A with RGB(A) colors is ({h}, {w}, [3 or 4]), not "
                f"{A.shape}")

        if misshapen_data:
            raise ValueError(
                f"For X ({width}) and Y ({height}) with {self._shading} "
                f"shading, the expected shape of A is ({h}, {w}), not "
                f"{A.shape}")

        if faulty_data:
            raise TypeError(
                f"Dimensions of A {A.shape} are incompatible with "
                f"X ({width}) and/or Y ({height})")

    return super().set_array(A)
197
collections.py
Python
lib/matplotlib/collections.py
580411016168b01cae5813357208c79c534cb6bd
matplotlib
11
92,359
4
12
2
38
6
0
4
18
disallow_new_enrollment
feat(2fa): option for disallowing new enrollments on 2FA interface (#36344) * feat(2fa): add option to 2FA interfaces to allow for disallowing of new enrollments Setting `{interface_id}.disallow-new-enrollment` in the config will prevent new enrollments of the two-factor interface.
https://github.com/getsentry/sentry.git
def disallow_new_enrollment(self):
    return bool(options.get(f"{self.interface_id}.disallow-new-enrollment"))
17
base.py
Python
src/sentry/auth/authenticators/base.py
659367b0bd034921a989946472612e4e0bf14fb3
sentry
1
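The commit message above describes gating new two-factor enrollments on a `{interface_id}.disallow-new-enrollment` option. A minimal sketch of that pattern, with hypothetical class and option names rather than Sentry's actual enrollment flow:

```python
# Hypothetical names; only the option-key pattern mirrors the code above.
class TotpInterface:
    interface_id = "totp"

    def __init__(self, options: dict):
        self._options = options

    @property
    def disallow_new_enrollment(self) -> bool:
        # Option key follows "<interface_id>.disallow-new-enrollment".
        return bool(self._options.get(f"{self.interface_id}.disallow-new-enrollment"))

    def enroll(self, user: str) -> None:
        if self.disallow_new_enrollment:
            raise RuntimeError("New enrollments are disabled for this interface.")
        print(f"enrolled {user}")  # stand-in for the real enrollment work


interface = TotpInterface({"totp.disallow-new-enrollment": True})
try:
    interface.enroll("alice")
except RuntimeError as err:
    print(err)
```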
275,794
37
10
11
146
12
0
59
153
test_tokenizer_oov_flag
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def test_tokenizer_oov_flag(self):
    x_train = ["This text has only known words"]
    x_test = ["This text has some unknown words"]  # 2 OOVs: some, unknown

    # Default, without OOV flag
    tokenizer = text.Tokenizer()
    tokenizer.fit_on_texts(x_train)
    x_test_seq = tokenizer.texts_to_sequences(x_test)
    self.assertLen(x_test_seq[0], 4)  # discards 2 OOVs

    # With OOV feature
    tokenizer = text.Tokenizer(oov_token="<unk>")
    tokenizer.fit_on_texts(x_train)
    x_test_seq = tokenizer.texts_to_sequences(x_test)
    self.assertLen(x_test_seq[0], 6)  # OOVs marked in place
83
text_test.py
Python
keras/preprocessing/text_test.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
290,016
10
8
3
36
5
0
10
24
name
Include esphome device name in BLE logs (#81284) * Include esphome device name in BLE logs This makes it easier to debug what is going on when there are multiple esphome proxies * revert unintended change
https://github.com/home-assistant/core.git
def name(self) -> str:
    return self.device_info.name if self.device_info else self.entry_id
22
entry_data.py
Python
homeassistant/components/esphome/entry_data.py
8db7afb2e0106f1f8a1f085d45d143024f77daf1
core
2
255,147
24
13
14
137
20
0
32
119
test_cases
Use Python type annotations rather than comments (#3962) * These have been supported since Python 3.5. ONNX doesn't support Python < 3.6, so we can use the annotations. Diffs generated by https://pypi.org/project/com2ann/. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Remove MYPY conditional logic in gen_proto.py It breaks the type annotations and shouldn't be needed. Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * Get rid of MYPY bool from more scripts Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * move Descriptors class above where its referenced in type annotation Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fixes Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * remove extra blank line Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotations Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix type annotation in gen_docs Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix Operators.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix TestCoverage.md Signed-off-by: Gary Miguel <garymiguel@microsoft.com> * fix protoc-gen-mypy.py Signed-off-by: Gary Miguel <garymiguel@microsoft.com>
https://github.com/onnx/onnx.git
def test_cases(self) -> Dict[str, Type[unittest.TestCase]]:
    test_cases = {}
    for category, items_map in self._filtered_test_items.items():
        test_case_name = str('OnnxBackend{}Test').format(category)
        test_case = self._get_test_case(test_case_name)
        for name, item in sorted(items_map.items()):
            setattr(test_case, name, item.func)
        test_cases[test_case_name] = test_case
    return test_cases
86
__init__.py
Python
onnx/backend/test/runner/__init__.py
83fa57c74edfd13ddac9548b8a12f9e3e2ed05bd
onnx
3
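The ONNX commit above replaces comment-style type hints with inline annotations (via com2ann). A hedged before/after sketch of that transformation on a made-up function, not code from the ONNX repository:

```python
from typing import List

# Before: a Python 2-era type comment.
def scale(values, factor):
    # type: (List[float], float) -> List[float]
    return [v * factor for v in values]

# After: inline annotations, available since Python 3.5 (ONNX requires >= 3.6).
def scale_annotated(values: List[float], factor: float) -> List[float]:
    return [v * factor for v in values]
```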
20,222
11
10
3
51
8
0
11
26
user_runtime_dir
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def user_runtime_dir(self) -> str:
    return self._append_app_name_and_version(os.path.expanduser("~/Library/Caches/TemporaryItems"))


__all__ = [
    "MacOS",
]
22
macos.py
Python
pipenv/patched/notpip/_vendor/platformdirs/macos.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
1
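A hedged usage sketch of the public `platformdirs` API that the vendored module above implements; on macOS the runtime directory resolves under `~/Library/Caches/TemporaryItems` as in the method shown. The app name below is illustrative.

```python
import platformdirs

# On macOS this lands under ~/Library/Caches/TemporaryItems/MyApp
# (the appauthor argument is ignored on this platform).
print(platformdirs.user_runtime_dir("MyApp"))
```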
300,467
24
12
8
141
16
1
30
71
test_process_datetime_to_timestamp_freeze_time
Fix process_datetime_to_timestamp and add test coverage (#71755)
https://github.com/home-assistant/core.git
def test_process_datetime_to_timestamp_freeze_time(time_zone, hass):
    hass.config.set_time_zone(time_zone)
    utc_now = dt_util.utcnow()
    with freeze_time(utc_now):
        epoch = utc_now.timestamp()
        assert process_datetime_to_timestamp(dt_util.utcnow()) == epoch
        now = dt_util.now()
        assert process_datetime_to_timestamp(now) == epoch


@pytest.mark.parametrize(
    "time_zone", ["Europe/Berlin", "America/Chicago", "US/Hawaii", "UTC"]
)
@pytest.mark.parametrize( "time_zone", ["Europe/Berlin", "America/Chicago", "US/Hawaii", "UTC"] )
61
test_models.py
Python
tests/components/recorder/test_models.py
1d9fb4bca871f97109684419f0f9526a0c151f2d
core
1
106,051
9
8
3
35
5
0
9
30
encode_example
Clean up remaining Main Classes docstrings (#5349) clean up docstrings
https://github.com/huggingface/datasets.git
def encode_example(self, example):
    example = cast_to_python_objects(example)
    return encode_nested_example(self, example)
21
features.py
Python
src/datasets/features/features.py
c78559cacbb0ca6e0bc8bfc313cc0359f8c23ead
datasets
1
310,158
11
12
5
47
9
0
11
43
_create_device
Remove somfy from mypy ignore list (#64462) * Adjust somfy * Remove somfy from mypy-config * Fix pylint
https://github.com/home-assistant/core.git
def _create_device(self) -> Blind:
    self._cover = Blind(
        self.device, cast(SomfyDataUpdateCoordinator, self.coordinator).client
    )
29
cover.py
Python
homeassistant/components/somfy/cover.py
7592347715c03550d6826003c987afbc9fa3e1de
core
1
137,651
19
10
9
50
4
0
22
44
shutdown_ray_cluster
Ray on spark implementation (#28771) REP: ray-project/enhancements#14
https://github.com/ray-project/ray.git
def shutdown_ray_cluster() -> None:
    global _active_ray_cluster

    if _active_ray_cluster is None:
        raise RuntimeError("No active ray cluster to shut down.")

    _active_ray_cluster.shutdown()
    _active_ray_cluster = None
27
cluster_init.py
Python
python/ray/util/spark/cluster_init.py
e76ccee69aaa7583be1a9d81cf7b2aa72cf25647
ray
2
128,176
16
9
20
65
7
0
19
47
get_progress
Progress bar (#28575) Progress bar for showing number of tasks completed / running / scheduled. Creates dashboard api endpoint which proxies to prometheus to return data for the progress bar. Requires prometheus to be running. Light dependency on #28286 since that will help make prometheus easier to run out of the box. Per job progress is shown under the "progress" column in the table of jobs. <img width="1904" alt="Screen Shot 2022-09-28 at 6 16 15 PM" src="https://user-images.githubusercontent.com/711935/192916737-e9c8dde0-18e5-4ff8-b767-1091dea57287.png"> <img width="1904" alt="Screen Shot 2022-09-28 at 6 16 44 PM" src="https://user-images.githubusercontent.com/711935/192916757-c87e9eee-ec42-4729-a9b2-74044617122d.png">
https://github.com/ray-project/ray.git
async def get_progress(self, req):
    job_id = req.query.get("job_id")
    job_id_query = f'{{JobId="{job_id}"}}' if job_id else ""
    query = f"sum(ray_tasks{job_id_query}) by (State)"
113
metrics_head.py
Python
dashboard/modules/metrics/metrics_head.py
f4da07d3f5d63e1ad6b2412c5aedd506a4705590
ray
4
140,053
35
17
22
161
21
0
40
315
subscribe
[core] Resubscribe GCS in python when GCS restarts. (#24887) This is a follow-up PRs of https://github.com/ray-project/ray/pull/24813 and https://github.com/ray-project/ray/pull/24628 Unlike the change in cpp layer, where the resubscription is done by GCS broadcast a request to raylet/core_worker and the client-side do the resubscription, in the python layer, we detect the failure in the client-side. In case of a failure, the protocol is: 1. call subscribe 2. if timeout when doing resubscribe, throw an exception and this will crash the system. This is ok because when GCS has been down for a time longer than expected, we expect the ray cluster to be down. 3. continue to poll once subscribe ok. However, there is an extreme case where things might be broken: the client might miss detecting a failure. This could happen if the long-polling has been returned and the python layer is doing its own work. And before it sends another long-polling, GCS restarts and recovered. Here we are not going to take care of this case because: 1. usually GCS is going to take several seconds to be up and the python layer's work is simply pushing data into a queue (sync version). For the async version, it's only used in Dashboard which is not a critical component. 2. pubsub in python layer is not doing critical work: it handles logs/errors for ray job; 3. for the dashboard, it can just restart to fix the issue. A known issue here is that we might miss logs in case of GCS failure due to the following reasons: - py's pubsub is only doing best effort publishing. If it failed too many times, it'll skip publishing the message (lose messages from producer side) - if message is pushed to GCS, but the worker hasn't done resubscription yet, the pushed message will be lost (lose messages from consumer side) We think it's reasonable and valid behavior given that the logs are not defined to be a critical component and we'd like to simplify the design of pubsub in GCS. Another things is `run_functions_on_all_workers`. We'll plan to stop using it within ray core and deprecate it in the longer term. But it won't cause a problem for the current cases because: 1. It's only set in driver and we don't support creating a new driver when GCS is down. 2. When GCS is down, we don't support starting new ray workers. And `run_functions_on_all_workers` is only used when we initialize driver/workers.
https://github.com/ray-project/ray.git
def subscribe(self) -> None:
    with self._lock:
        req = self._subscribe_request(self._channel)
        start = time.time()
        from ray._raylet import Config

        while True:
            try:
                if self._close.is_set():
                    return
                return self._stub.GcsSubscriberCommandBatch(req, timeout=30)
            except grpc.RpcError as e:
                self._handle_subscribe_failure(e)
                if (
                    time.time() - start
                    > Config.gcs_rpc_server_reconnect_timeout_s()
                ):
                    raise e
96
gcs_pubsub.py
Python
python/ray/_private/gcs_pubsub.py
7cf42338589d57bafea9933055781fe78e066a72
ray
5
224,451
7
6
2
22
5
0
7
21
on_nav
Move plugin events docs into source code + refactor * Create real (no-op) methods for each event in the base class. * Refactor event dispatcher to not check for methods' existence, instead just call them. * Move documentation from Markdown into docstrings of these methods. * Activate the 'mkdocstrings' plugin. * Use 'mkdocstrings' to insert documentation from those docstrings into the site.
https://github.com/mkdocs/mkdocs.git
def on_nav(self, nav, config, files):
    return nav
14
plugins.py
Python
mkdocs/plugins.py
f79b34d174e41084391868e7b503f5c61b8b1bdf
mkdocs
1
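The commit above gives every plugin event a real no-op method on the base class, so user plugins only override what they need. A hedged sketch of a plugin overriding `on_nav` (the plugin name and behaviour are illustrative); in a real project the class would still be registered through an `mkdocs.plugins` entry point in its package metadata.

```python
from mkdocs.plugins import BasePlugin

class TitleLogger(BasePlugin):
    """Logs page titles while leaving the navigation unchanged."""

    def on_nav(self, nav, config, files):
        for page in nav.pages:
            print(page.title)
        return nav
```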
163,432
24
10
8
53
8
0
24
96
iteritems
DEPR: Series/DataFrame/HDFStore.iteritems() (#45321)
https://github.com/pandas-dev/pandas.git
def iteritems(self):
    warnings.warn(
        "iteritems is deprecated and will be removed in a future version. "
        "Use .items instead.",
        FutureWarning,
        stacklevel=find_stack_level(),
    )
    yield from self.items()
29
pytables.py
Python
pandas/io/pytables.py
e255e56fa086e06127268e409adb82f440326273
pandas
1
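A hedged migration sketch for the deprecation above (generic, not from the pandas test suite): `items()` yields the same `(label, value)` pairs without the `FutureWarning`.

```python
import pandas as pd

s = pd.Series([10, 20], index=["a", "b"])

# Deprecated spelling, now warns: for label, value in s.iteritems(): ...
# Preferred spelling:
for label, value in s.items():
    print(label, value)
```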
298,008
37
13
15
99
9
0
42
231
_update_max_mireds
String formatting and max line length - Part 6 (#84525)
https://github.com/home-assistant/core.git
def _update_max_mireds(self, render):
    try:
        if render in (None, "None", ""):
            self._max_mireds = None
            return

        self._max_mireds = int(render)
    except ValueError:
        _LOGGER.error(
            (
                "Template must supply an integer temperature within the range for"
                " this light, or 'None'"
            ),
            exc_info=True,
        )
        self._max_mireds = None
57
light.py
Python
homeassistant/components/template/light.py
8819634b613f6bfd55885283bab86c3852ae40c4
core
3
153,058
26
11
15
131
13
0
38
183
mask
REFACTOR-#2656: Update modin to fit algebra (code only) (#3717) Co-authored-by: Yaroslav Igoshev <Poolliver868@mail.ru> Co-authored-by: Vasily Litvinov <vasilij.n.litvinov@intel.com> Co-authored-by: Alexey Prutskov <alexey.prutskov@intel.com> Co-authored-by: Devin Petersohn <devin-petersohn@users.noreply.github.com> Signed-off-by: Rehan Durrani <rehan@ponder.io>
https://github.com/modin-project/modin.git
def mask(self, row_labels, col_labels):
    new_obj = super().mask(row_labels, col_labels)
    if isinstance(row_labels, slice) and isinstance(
        self._length_cache, ObjectIDType
    ):
        new_obj._length_cache = compute_sliced_len.remote(
            row_labels, self._length_cache
        )
    if isinstance(col_labels, slice) and isinstance(
        self._width_cache, ObjectIDType
    ):
        new_obj._width_cache = compute_sliced_len.remote(
            col_labels, self._width_cache
        )
    return new_obj
86
partition.py
Python
modin/core/execution/ray/implementations/pandas_on_ray/partitioning/partition.py
58bbcc37477866d19c8b092a0e1974a4f0baa586
modin
5
33,427
22
9
3
70
12
0
29
41
_set_gradient_checkpointing
Add X-CLIP (#18852) * First draft * Improve conversion script * Make vision encoder work * More improvements * Improve conversion script * Fix quality * Add MultiframeIntegrationTransformer * More improvements * Make MiT output work * Fix quality * Add prompts generator * Add tests * Fix some tests * Fix some more tests * Fix more tests * Improve conversion script * Fix model outputs * Fix more tests * Add XClipProcessor * Use processor in conversion script * Fix integration test * Update README, fix docs * Fix all tests * Add MIT output to XClipOutput * Create better variable names * Rename XClip to XCLIP * Extend conversion script * Add support for large models * Add support for 16 frame models * Add another model' * Fix module issue * Apply suggestions from code review * Add figure to docs * Fix CLIPProcessor issue * Apply suggestions from code review * Delete file * Convert more checkpoints * Convert last checkpoint * Update nielsr to microsoft
https://github.com/huggingface/transformers.git
def _set_gradient_checkpointing(self, module, value=False):
    if isinstance(module, (XCLIPEncoder, XCLIPVisionEncoder)):
        module.gradient_checkpointing = value


X_CLIP_START_DOCSTRING = r

X_CLIP_TEXT_INPUTS_DOCSTRING = r

X_CLIP_VISION_INPUTS_DOCSTRING = r

X_CLIP_INPUTS_DOCSTRING = r

# Copied from transformers.models.clip.modeling_clip.CLIPEncoder with CLIP->XCLIP
28
modeling_x_clip.py
Python
src/transformers/models/x_clip/modeling_x_clip.py
bb6f6d53386bf2340eead6a8f9320ce61add3e96
transformers
2
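A hedged usage sketch of how the hook above is normally triggered: `PreTrainedModel.gradient_checkpointing_enable()` walks the submodules and calls `_set_gradient_checkpointing` with `value=True`. The checkpoint name is one of the converted X-CLIP models mentioned in the commit message; loading it requires network access.

```python
from transformers import XCLIPModel

model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
model.gradient_checkpointing_enable()  # flips gradient_checkpointing on the encoders
```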
285,727
64
13
26
221
28
0
74
352
call_price
Integrate live feeds from Pyth (#2178) * added dependency * added pyth models * dependencies * docs * some improvements to this pyth command (#2433) * some improvements to this pyth command * minor improv * dependencies * tests Co-authored-by: DidierRLopes <dro.lopes@campus.fct.unl.pt>; COlin
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_price(self, other_args):
    parser = argparse.ArgumentParser(
        add_help=False,
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        prog="price",
        description=,
    )
    parser.add_argument(
        "-s",
        "--symbol",
        required="-h" not in other_args,
        type=str,
        dest="symbol",
        help="Symbol of coin to load data for, ~100 symbols are available",
    )
    if other_args and "-" not in other_args[0][0]:
        other_args.insert(0, "-s")
    ns_parser = self.parse_known_args_and_warn(parser, other_args)
    if ns_parser:
        if ns_parser.symbol in pyth_model.ASSETS.keys():
            console.print(
                "[param]If it takes too long, you can use 'Ctrl + C' to cancel.\n[/param]"
            )
            pyth_view.display_price(ns_parser.symbol)
        else:
            console.print("[red]The symbol selected does not exist.[/red]\n")
131
crypto_controller.py
Python
openbb_terminal/cryptocurrency/crypto_controller.py
1661ddd44044c637526e9a1e812e7c1863be35fc
OpenBBTerminal
5
88,594
24
14
10
154
19
0
28
130
test_config_and_source_url
ref(stacktrace_link): Add more than one code mapping in the tests (#41409) Include more than one code mapping in the setup code. Cleaning up a bit how we tag the transactions. This makes the PR for WOR-2395 a little easier to read.
https://github.com/getsentry/sentry.git
def test_config_and_source_url(self):
    with mock.patch.object(
        ExampleIntegration, "get_stacktrace_link", return_value="https://sourceurl.com/"
    ):
        response = self.get_success_response(
            self.organization.slug, self.project.slug, qs_params={"file": self.filepath}
        )
        assert response.data["config"] == self.expected_configurations(self.code_mapping1)
        assert response.data["sourceUrl"] == "https://sourceurl.com/"
        assert response.data["integrations"] == [serialized_integration(self.integration)]
91
test_project_stacktrace_link.py
Python
tests/sentry/api/endpoints/test_project_stacktrace_link.py
2e0d2c856eb17a842c67d88363bed92c99578c20
sentry
1
48,599
60
15
13
265
29
0
77
232
test_follow_307_308_preserve_kwargs
Added test client support for HTTP 307 and 308 redirects (#8419) * Add retain test data on follow=True * Simplify TestAPITestClient.test_follow_redirect Inspired from Django's ClientTest.test_follow_307_and_308_redirect * Add 307 308 follow redirect test
https://github.com/encode/django-rest-framework.git
def test_follow_307_308_preserve_kwargs(self, *mocked_methods):
    methods = ('get', 'post', 'put', 'patch', 'delete', 'options')
    codes = (307, 308)
    for method, code in itertools.product(methods, codes):
        subtest_ctx = self.subTest(method=method, code=code)
        patch_ctx = patch.object(self.client, method, side_effect=getattr(self.client, method))
        with subtest_ctx, patch_ctx as req_method:
            kwargs = {'data': {'example': 'test'}, 'format': 'json'}
            response = req_method('/redirect-view/%s/' % code, follow=True, **kwargs)
            assert response.redirect_chain is not None
            assert response.status_code == 200
            for _, call_args, call_kwargs in req_method.mock_calls:
                assert all(call_kwargs[k] == kwargs[k] for k in kwargs if k in call_kwargs)
164
test_testing.py
Python
tests/test_testing.py
df92e57ad6c8394ca54654dfc7a2722f822ed8c8
django-rest-framework
5
309,365
16
11
5
76
10
0
16
39
test_get_controller_fails_to_connect
Fix reconnect rather than reauth when both HA and UniFi controller restarts at the same time (#63994)
https://github.com/home-assistant/core.git
async def test_get_controller_fails_to_connect(hass, side_effect, raised_exception):
    with patch("aiounifi.Controller.check_unifi_os", return_value=True), patch(
        "aiounifi.Controller.login", side_effect=side_effect
    ), pytest.raises(raised_exception):
        await get_controller(hass, **CONTROLLER_DATA)
44
test_controller.py
Python
tests/components/unifi/test_controller.py
59cea56e170d9e55e3c77b0be5dd47ded75919ce
core
1
130,277
45
22
19
190
17
0
66
361
detailed_match_files
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def detailed_match_files(patterns, files, all_matches=None):
    all_files = files if isinstance(files, Collection) else list(files)
    return_files = {}
    for pattern in patterns:
        if pattern.include is not None:
            result_files = pattern.match(all_files)
            if pattern.include:
                # Add files and record pattern.
                for result_file in result_files:
                    if result_file in return_files:
                        if all_matches:
                            return_files[result_file].patterns.append(pattern)
                        else:
                            return_files[result_file].patterns[0] = pattern
                    else:
                        return_files[result_file] = MatchDetail([pattern])
            else:
                # Remove files.
                for file in result_files:
                    del return_files[file]
    return return_files
121
util.py
Python
python/ray/_private/thirdparty/pathspec/util.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
9
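A hedged usage sketch with the public `pathspec` package (the vendored helper above is an internal building block of it); gitignore-style include and exclude patterns combine in the same way.

```python
import pathspec

spec = pathspec.PathSpec.from_lines("gitwildmatch", ["*.py", "!tests/*.py"])
print(spec.match_file("ray/worker.py"))    # True: matched by *.py
print(spec.match_file("tests/test_x.py"))  # False: excluded by !tests/*.py
```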
152,958
99
20
63
352
25
0
159
646
from_labels
FEAT-#3111: Ensure relabeling Modin Frame does not lose partition shape (#3662) Co-authored-by: Devin Petersohn <devin.petersohn@gmail.com> Signed-off-by: Naren Krishna <naren@ponder.io>
https://github.com/modin-project/modin.git
def from_labels(self) -> "PandasDataframe":
    new_row_labels = pandas.RangeIndex(len(self.index))
    if self.index.nlevels > 1:
        level_names = [
            self.index.names[i]
            if self.index.names[i] is not None
            else "level_{}".format(i)
            for i in range(self.index.nlevels)
        ]
    else:
        level_names = [
            self.index.names[0]
            if self.index.names[0] is not None
            else "index"
            if "index" not in self.columns
            else "level_{}".format(0)
        ]
    # We will also use the `new_column_names` in the calculation of the internal metadata,
    # so this is a lightweight way of ensuring the metadata matches.
    if self.columns.nlevels > 1:
        # Column labels are different for multilevel index.
        new_column_names = pandas.MultiIndex.from_tuples(
            # Set level names on the 1st columns level and fill up empty level names with empty string.
            # Expand tuples in level names. This is how reset_index works when col_level col_fill are not specified.
            [
                tuple(
                    list(level) + [""] * (self.columns.nlevels - len(level))
                    if isinstance(level, tuple)
                    else [level] + [""] * (self.columns.nlevels - 1)
                )
                for level in level_names
            ],
            names=self.columns.names,
        )
    else:
        new_column_names = pandas.Index(level_names, tupleize_cols=False)
    new_columns = new_column_names.append(self.columns)
304
dataframe.py
Python
modin/core/dataframe/pandas/dataframe/dataframe.py
3c740dbfcdd69ddc3ab45a42be996e5c61104342
modin
9
247,554
72
12
21
309
21
0
101
314
test_dont_notify_rule_overrides_message
Add an additional HTTP pusher + push rule tests. (#12188) And rename the field used for caching from _id to _cache_key.
https://github.com/matrix-org/synapse.git
def test_dont_notify_rule_overrides_message(self):
    user_id, access_token = self._make_user_with_pusher("user")
    other_user_id, other_access_token = self._make_user_with_pusher("otheruser")

    # Create a room
    room = self.helper.create_room_as(user_id, tok=access_token)

    # Disable user notifications for this room -> user
    body = {
        "conditions": [{"kind": "event_match", "key": "room_id", "pattern": room}],
        "actions": ["dont_notify"],
    }
    channel = self.make_request(
        "PUT",
        "/pushrules/global/override/best.friend",
        body,
        access_token=access_token,
    )
    self.assertEqual(channel.code, 200)

    # Check we start with no pushes
    self.assertEqual(len(self.push_attempts), 0)

    # The other user joins
    self.helper.join(room=room, user=other_user_id, tok=other_access_token)

    # The other user sends a message (ignored by dont_notify push rule set above)
    self.helper.send(room, body="Hi!", tok=other_access_token)
    self.assertEqual(len(self.push_attempts), 0)

    # The user sends a message back (sends a notification)
    self.helper.send(room, body="Hello", tok=access_token)
    self.assertEqual(len(self.push_attempts), 1)
184
test_http.py
Python
tests/push/test_http.py
735e89bd3a0755883ef0a19649adf84192b5d9fc
synapse
1
100,674
21
16
12
133
11
0
25
89
process
alignments tool - Add from-faces job - Allows user to regenerate alignments file(s) from a folder of extracted faces
https://github.com/deepfakes/faceswap.git
def process(self) -> None:
    if self.args.job in ("missing-alignments", "missing-frames", "multi-faces", "no-faces"):
        job = Check
    else:
        job = globals()[self.args.job.title().replace("-", "")]
    job = job(self.alignments, self.args)
    logger.debug(job)
    job.process()
76
alignments.py
Python
tools/alignments/alignments.py
6437cd7ab0d6f18cdca0172ba281fd71967b86ac
faceswap
2
106,063
6
8
3
31
6
0
6
20
list_indexes
Clean up Dataset and DatasetDict (#5344) * clean up docstrings * make style * apply review Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com>
https://github.com/huggingface/datasets.git
def list_indexes(self) -> List[str]:
    return list(self._indexes)
18
search.py
Python
src/datasets/search.py
cd3169f3f35afcf73a36a8276113e1881d92e5e0
datasets
1
301,140
6
7
3
28
4
0
6
20
name
Add laundrify integration (#65090) * First version of laundrify integration * Code cleanup * Code cleanup after review #2 * Move coordinator to its own file * Save devices as dict and implement available prop as fn * Validate token on init, abort if already configured * Some more cleanup after review * Add strict type hints * Minor changes after code review * Remove OptionsFlow (use default poll interval instead) * Fix CODEOWNERS to pass hassfest job * Fix formatting to pass prettier job * Fix mypy typing error * Update internal device property after fetching data * Call parental update handler and remove obsolete code * Add coordinator tests and fix some config flow tests * Refactor tests * Refactor fixtures * Device unavailable if polling fails
https://github.com/home-assistant/core.git
def name(self) -> str:
    return self._device["name"]
15
binary_sensor.py
Python
homeassistant/components/laundrify/binary_sensor.py
abf9aab18f9a6953b49c4f8aee1ca7e560911e36
core
1
269,420
23
14
6
136
12
0
40
62
stack2
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def stack2(x, filters, blocks, stride1=2, name=None):
    x = block2(x, filters, conv_shortcut=True, name=name + "_block1")
    for i in range(2, blocks):
        x = block2(x, filters, name=name + "_block" + str(i))
    x = block2(x, filters, stride=stride1, name=name + "_block" + str(blocks))
    return x
90
resnet.py
Python
keras/applications/resnet.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
36,650
32
14
10
176
19
0
46
81
get_2d_sincos_pos_embed
Add TF ViT MAE (#16255) * ported TFViTMAEIntermediate and TFViTMAEOutput. * added TFViTMAEModel and TFViTMAEDecoder. * feat: added a noise argument in the implementation for reproducibility. * feat: vit mae models with an additional noise argument for reproducibility. Co-authored-by: ariG23498 <aritra.born2fly@gmail.com> Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
https://github.com/huggingface/transformers.git
def get_2d_sincos_pos_embed(embed_dim, grid_size, add_cls_token=False):
    grid_h = tf.range(grid_size, dtype=tf.float32)
    grid_w = tf.range(grid_size, dtype=tf.float32)
    grid = tf.meshgrid(grid_w, grid_h)  # here w goes first
    grid = tf.stack(grid, axis=0)

    grid = tf.reshape(grid, [2, 1, grid_size, grid_size])
    pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid)
    if add_cls_token:
        pos_embed = tf.concat([tf.zeros((1, embed_dim)), pos_embed], axis=0)
    return pos_embed
118
modeling_tf_vit_mae.py
Python
src/transformers/models/vit_mae/modeling_tf_vit_mae.py
5b40a37bc4da9dc6cd33876ce9bb3f7f48450a03
transformers
2
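A hedged NumPy sketch of the per-axis sin-cos embedding that `get_2d_sincos_pos_embed_from_grid` (not shown above) applies to each grid dimension; this illustrates the underlying math and is not the transformers implementation.

```python
import numpy as np

def sincos_1d(embed_dim: int, positions: np.ndarray) -> np.ndarray:
    """Return an array of shape (len(positions), embed_dim): half sines, half cosines."""
    assert embed_dim % 2 == 0
    omega = 1.0 / 10000 ** (np.arange(embed_dim // 2) / (embed_dim / 2))
    angles = np.outer(positions.reshape(-1), omega)  # (N, embed_dim / 2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

print(sincos_1d(8, np.arange(4)).shape)  # (4, 8)
```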
154,057
67
13
20
206
19
0
96
340
_reset_index
FEAT-#4147: Add partial compatibility with Python 3.6 and pandas 1.1 (#4301) Signed-off-by: Devin Petersohn <devin.petersohn@gmail.com> Signed-off-by: Vasily Litvinov <fam1ly.n4me@yandex.ru> Co-authored-by: Alexey Prutskov <lehaprutskov@gmail.com> Co-authored-by: Rehan Durrani <rehan@ponder.io> Co-authored-by: Igoshev, Yaroslav <Poolliver868@mail.ru> Co-authored-by: Myachev, Anatoly <anatoly.myachev@intel.com>
https://github.com/modin-project/modin.git
def _reset_index(self, level, drop, name, inplace):  # noqa: PR01, RT01, D200
    if name is no_default:
        # For backwards compatibility, keep columns as [0] instead of
        # [None] when self.name is None
        name = 0 if self.name is None else self.name
    if drop and level is None:
        new_idx = pandas.RangeIndex(len(self.index))
        if inplace:
            self.index = new_idx
        else:
            result = self.copy()
            result.index = new_idx
            return result
    elif not drop and inplace:
        raise TypeError(
            "Cannot reset_index inplace on a Series to create a DataFrame"
        )
    else:
        obj = self.copy()
        obj.name = name
        from .dataframe import DataFrame

        return DataFrame(obj).reset_index(level=level, drop=drop, inplace=inplace)
126
series.py
Python
modin/pandas/series.py
6ce9cf4daec7f9996038205289bce2186be87611
modin
8
146,026
18
8
7
87
13
0
24
87
test_fs_checkpoint_fs
[ml] Add Ray ML / AIR checkpoint implementation (#22691) This PR splits up the changes in #22393 and introduces an implementation of the ML Checkpoint interface used by Ray Tune. This means, the TuneCheckpoint class implements the to/from_[bytes|dict|directory|object_ref|uri] conversion functions, as well as more high-level functions to transition between the different TuneCheckpoint classes. It also includes test cases for Tune's main conversion modes, i.e. dict - intermediate - dict and fs - intermediate - fs. These changes will be the basis for refactoring the tune interface to use TuneCheckpoint objects instead of TrialCheckpoints (externally) and instead of paths/objects (internally).
https://github.com/ray-project/ray.git
def test_fs_checkpoint_fs(self):
    checkpoint = self._prepare_fs_checkpoint()

    # Convert into fs checkpoint
    path = checkpoint.to_directory()
    self.assertIsInstance(path, str)

    # Create from fs
    checkpoint = Checkpoint.from_directory(path)
    self.assertTrue(checkpoint._local_path)
    self._assert_fs_checkpoint(checkpoint)
50
test_checkpoints.py
Python
python/ray/ml/tests/test_checkpoints.py
b267be475863a66e9feedb2be5f0a30a2ed8c493
ray
1
106,159
4
7
2
20
3
0
4
10
set_verbosity_debug
Clean filesystem and logging docstrings (#5356) clean filesystem and logging docstrings
https://github.com/huggingface/datasets.git
def set_verbosity_debug():
    return set_verbosity(DEBUG)
10
logging.py
Python
src/datasets/utils/logging.py
cdc20c0de4d2a6edc2c8873460af5436e4f46d04
datasets
1
21
10
11
20
62
8
0
10
64
_object2proto
MOVE GetAllRequestsMessage and GetAllRequestsResponseMessage to the proper message file
https://github.com/OpenMined/PySyft.git
def _object2proto(self) -> GetAllRequestsMessage_PB:
    return GetAllRequestsMessage_PB(
        msg_id=serialize(self.id),
        address=serialize(self.address),
        reply_to=serialize(self.reply_to),
    )
39
object_request_messages.py
Python
packages/syft/src/syft/core/node/common/node_service/object_request/object_request_messages.py
05edf746cf5742b562996cf1a319b404152960e5
PySyft
1
267,204
49
20
25
403
23
0
96
459
execute_dump
ansible-config added json/yaml output to list/dump (#77447) fixes #733644
https://github.com/ansible/ansible.git
def execute_dump(self):
    if context.CLIARGS['type'] == 'base':
        # deal with base
        output = self._get_global_configs()
    elif context.CLIARGS['type'] == 'all':
        # deal with base
        output = self._get_global_configs()
        # deal with plugins
        for ptype in C.CONFIGURABLE_PLUGINS:
            plugin_list = self._get_plugin_configs(ptype, context.CLIARGS['args'])
            if context.CLIARGS['format'] == 'display':
                output.append('\n%s:\n%s' % (ptype.upper(), '=' * len(ptype)))
                output.extend(plugin_list)
            else:
                if ptype in ('modules', 'doc_fragments'):
                    pname = ptype.upper()
                else:
                    pname = '%s_PLUGINS' % ptype.upper()
                output.append({pname: plugin_list})
    else:
        # deal with plugins
        output = self._get_plugin_configs(context.CLIARGS['type'], context.CLIARGS['args'])

    if context.CLIARGS['format'] == 'display':
        text = '\n'.join(output)
    if context.CLIARGS['format'] == 'yaml':
        text = yaml_dump(output)
    elif context.CLIARGS['format'] == 'json':
        text = json_dump(output)

    self.pager(to_text(text, errors='surrogate_or_strict'))
223
config.py
Python
lib/ansible/cli/config.py
a12e0a0e874c6c0d18a1a2d83dcb106d207136af
ansible
9
101,357
17
10
5
68
8
0
18
50
_add_queues
Bugfix: convert - Gif Writer - Fix non-launch error on Gif Writer - convert plugins - linting - convert/fs_media/preview/queue_manager - typing - Change convert items from dict to Dataclass
https://github.com/deepfakes/faceswap.git
def _add_queues(self) -> None:
    logger.debug("Adding queues. Queue size: %s", self._queue_size)
    for qname in ("convert_in", "convert_out", "patch"):
        queue_manager.add_queue(qname, self._queue_size)
39
convert.py
Python
scripts/convert.py
1022651eb8a7741014f5d2ec7cbfe882120dfa5f
faceswap
2
60,391
3
7
2
23
4
0
3
5
_SetFilters
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def _SetFilters(filters):
    _cpplint_state.SetFilters(filters)
12
cpp_lint.py
Python
code/deep/BJMMD/caffe/scripts/cpp_lint.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
1
310,335
26
12
7
81
11
0
27
88
_async_get_addon_info
Import hassio (#64561) * Import hassio * Fix HassioAPIError * Use relative import * Adjust import Co-authored-by: epenet <epenet@users.noreply.github.com>
https://github.com/home-assistant/core.git
async def _async_get_addon_info(self):
    try:
        addon_info = await hassio.async_get_addon_info(self.hass, "core_zwave")
    except hassio.HassioAPIError as err:
        _LOGGER.error("Failed to get OpenZWave add-on info: %s", err)
        raise AbortFlow("addon_info_failed") from err

    return addon_info
45
config_flow.py
Python
homeassistant/components/ozw/config_flow.py
6f631c542c54cd75360c0d5fd272433aa168c148
core
2