Column summary (name, dtype, min, max):

column           dtype          min   max
id               int64          20    338k
vocab_size       int64          2     671
ast_levels       int64          4     32
nloc             int64          1     451
n_ast_nodes      int64          12    5.6k
n_identifiers    int64          1     186
n_ast_errors     int64          0     10
n_words          int64          2     2.17k
n_whitespaces    int64          2     13.8k
fun_name         stringlengths  2     73
commit_message   stringlengths  51    15.3k
url              stringlengths  31    59
code             stringlengths  51    31k
ast_errors       stringlengths  0     1.46k
token_counts     int64          6     3.32k
file_name        stringlengths  5     56
language         stringclasses  1 value
path             stringlengths  7     134
commit_id        stringlengths  40    40
repo             stringlengths  3     28
complexity       int64          1     153

For int64 columns, min and max are value ranges; for stringlengths columns, they are string lengths in characters (k = thousands). language is a stringclasses column with a single distinct value (Python). Each record below lists its fields in this column order; the ast_errors field appears only when n_ast_errors is nonzero.
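As a rough illustration of how records with this schema can be consumed, here is a minimal sketch that iterates over such records and prints a few fields. It assumes the rows shown below have been exported to a local JSON-lines file; the file name records.jsonl is a placeholder assumption, not something stated in this preview.

```python
# Minimal sketch, assuming the records have been exported to a local
# JSON-lines file ("records.jsonl" is a placeholder name, not part of
# this preview). Each line holds one record with the columns listed above.
import json

with open("records.jsonl", encoding="utf-8") as fh:
    for line in fh:
        record = json.loads(line)
        # Numeric size/complexity metrics are plain integers.
        print(record["repo"], record["fun_name"], record["complexity"])
        # The function source is stored as a single string in "code".
        print(record["code"][:80])
```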
id: 159,095 | vocab_size: 25 | ast_levels: 9 | nloc: 6 | n_ast_nodes: 105 | n_identifiers: 14 | n_ast_errors: 1 | n_words: 27 | n_whitespaces: 47 | fun_name: test_cli_log_level_debug_used
Configurable logging for libraries (#10614) * Make library level logging to be configurable Fixes https://github.com/RasaHQ/rasa/issues/10203 * Create log level documentation under cheatsheet in Rasa docs * Add log docs to `rasa shell --debug` (and others)
https://github.com/RasaHQ/rasa.git
def test_cli_log_level_debug_used(): configure_logging_and_warnings(logging.DEBUG) rasa_logger = logging.getLogger("rasa") rasa_logger.level == logging.DEBUG matplotlib_logger = logging.getLogger("matplotlib") # Default log level for libraries is currently ERROR matplotlib_logger.level == logging.ERROR @mock.patch.dict(os.environ, {"LOG_LEVEL": "WARNING"})
@mock.patch.dict(os.environ, {"LOG_LEVEL": "WARNING"})
token_counts: 41 | file_name: test_common.py | language: Python | path: tests/utils/test_common.py | commit_id: f00148b089d326c952880a0e5e6bd4b2dcb98ce5 | repo: rasa | complexity: 1

id: 101,184 | vocab_size: 21 | ast_levels: 12 | nloc: 10 | n_ast_nodes: 126 | n_identifiers: 15 | n_ast_errors: 0 | n_words: 24 | n_whitespaces: 150 | fun_name: _obtain_mask
lib.detected_face.Mask - Add source + target offset and coverage to set_sub_crop method
https://github.com/deepfakes/faceswap.git
def _obtain_mask(cls, detected_face, mask_type): mask = detected_face.mask.get(mask_type) if not mask: return None if mask.stored_centering != "face": face = AlignedFace(detected_face.landmarks_xy) mask.set_sub_crop(face.pose.offset[mask.stored_centering], face.pose.offset["face"], centering="face") return mask.mask.squeeze()
token_counts: 77 | file_name: viewport.py | language: Python | path: tools/manual/faceviewer/viewport.py | commit_id: 32950897376b48e0f08b46385602e4df902cf49e | repo: faceswap | complexity: 3

id: 259,191 | vocab_size: 14 | ast_levels: 12 | nloc: 6 | n_ast_nodes: 62 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 15 | n_whitespaces: 45 | fun_name: _estimator_has
MNT Replace if_delegate_has_method with available_if in ensemble and semi_supervised (#20545) Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: Jérémie du Boisberranger <34657725+jeremiedbb@users.noreply.github.com>
https://github.com/scikit-learn/scikit-learn.git
def _estimator_has(attr): return lambda self: ( hasattr(self.estimators_[0], attr) if hasattr(self, "estimators_") else hasattr(self.base_estimator, attr) )
token_counts: 39 | file_name: _bagging.py | language: Python | path: sklearn/ensemble/_bagging.py | commit_id: a794c58692a1f3e7a85a42d8c7f7ddd5fcf18baa | repo: scikit-learn | complexity: 2

id: 286,398 | vocab_size: 90 | ast_levels: 14 | nloc: 82 | n_ast_nodes: 406 | n_identifiers: 36 | n_ast_errors: 0 | n_words: 114 | n_whitespaces: 880 | fun_name: call_find
More Fixes to Crypto + key sort (#3244) * fix #3095 - autocomplete and command working + key sort * fix #3056 * fix [Bug] bugs #3048 * fix [Bug] bug #3017 * sort -> sortby, not ascend, tests * fix my goof ups Co-authored-by: james <jmaslek11@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_find(self, other_args): parser = argparse.ArgumentParser( prog="find", add_help=False, formatter_class=argparse.ArgumentDefaultsHelpFormatter, description=, ) parser.add_argument( "-c", "--coin", help="Symbol Name or Id of Coin", dest="coin", required="-h" not in other_args, type=str, ) parser.add_argument( "-k", "--key", dest="key", help="Specify by which column you would like to search: symbol, name, id", type=str, choices=FIND_KEYS, default="symbol", ) parser.add_argument( "-l", "--limit", default=10, dest="limit", help="Number of records to display", type=check_positive, ) parser.add_argument( "-s", "--skip", default=0, dest="skip", help="Skip n of records", type=check_positive, ) if other_args and not other_args[0][0] == "-": other_args.insert(0, "-c") ns_parser = self.parse_known_args_and_warn( parser, other_args, EXPORT_ONLY_RAW_DATA_ALLOWED, ) # TODO: merge find + display_all_coins if ns_parser: if ns_parser.coin == "ALL": display_all_coins( symbol=ns_parser.coin, source=ns_parser.source, limit=ns_parser.limit, skip=ns_parser.skip, show_all=True, export=ns_parser.export, ) else: find( query=ns_parser.coin, source=ns_parser.source, key=ns_parser.key, limit=ns_parser.limit, export=ns_parser.export, )
token_counts: 257 | file_name: crypto_controller.py | language: Python | path: openbb_terminal/cryptocurrency/crypto_controller.py | commit_id: 09f753da1c2a2f03c41fe6a3ca2eb79f6ea58995 | repo: OpenBBTerminal | complexity: 5

id: 190,078 | vocab_size: 40 | ast_levels: 13 | nloc: 18 | n_ast_nodes: 213 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 55 | n_whitespaces: 153 | fun_name: generate_tex_file
Migrate more `os.path` to `pathlib` (#2980) * Migrate more `os.path` to `pathlib` * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix type errors with recent pathlib code * pathlib fixes * more pathlib fixes * remove unused imports introduced by pathlib migration * convert `open()` calls to pathlib * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Migrate tex_file_writing to pathlib * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * converted more old code to pathlib, and fixed a bug in module_ops * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix test failures * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix test failures * Apply suggestions from code review Co-authored-by: Benjamin Hackl <devel@benjamin-hackl.at> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Benjamin Hackl <devel@benjamin-hackl.at>
https://github.com/ManimCommunity/manim.git
def generate_tex_file(expression, environment=None, tex_template=None): if tex_template is None: tex_template = config["tex_template"] if environment is not None: output = tex_template.get_texcode_for_expression_in_env(expression, environment) else: output = tex_template.get_texcode_for_expression(expression) tex_dir = config.get_dir("tex_dir") if not tex_dir.exists(): tex_dir.mkdir() result = tex_dir / (tex_hash(output) + ".tex") if not result.exists(): logger.info( "Writing %(expression)s to %(path)s", {"expression": expression, "path": f"{result}"}, ) result.write_text(output, encoding="utf-8") return result
token_counts: 122 | file_name: tex_file_writing.py | language: Python | path: manim/utils/tex_file_writing.py | commit_id: 9d1f066d637cb15baea10e6907ab85efff8fb36f | repo: manim | complexity: 5

id: 246,370 | vocab_size: 55 | ast_levels: 12 | nloc: 29 | n_ast_nodes: 295 | n_identifiers: 23 | n_ast_errors: 0 | n_words: 88 | n_whitespaces: 330 | fun_name: test_in_flight_requests_stop_being_in_flight
Add more tests for in-flight state query duplication. (#12033)
https://github.com/matrix-org/synapse.git
def test_in_flight_requests_stop_being_in_flight(self) -> None: req1 = ensureDeferred( self.state_datastore._get_state_for_group_using_inflight_cache( 42, StateFilter.all() ) ) self.pump(by=0.1) # This should have gone to the database self.assertEqual(len(self.get_state_group_calls), 1) self.assertFalse(req1.called) # Complete the request right away. self._complete_request_fake(*self.get_state_group_calls[0]) self.assertTrue(req1.called) # Send off another request req2 = ensureDeferred( self.state_datastore._get_state_for_group_using_inflight_cache( 42, StateFilter.all() ) ) self.pump(by=0.1) # It should have gone to the database again, because the previous request # isn't in-flight and therefore isn't available for deduplication. self.assertEqual(len(self.get_state_group_calls), 2) self.assertFalse(req2.called) # Complete the request right away. self._complete_request_fake(*self.get_state_group_calls[1]) self.assertTrue(req2.called) groups, sf, d = self.get_state_group_calls[0] self.assertEqual(self.get_success(req1), FAKE_STATE) self.assertEqual(self.get_success(req2), FAKE_STATE)
token_counts: 186 | file_name: test_state_store.py | language: Python | path: tests/storage/databases/test_state_store.py | commit_id: 546b9c9e648f5e2b25bb7c8350570787ff9befae | repo: synapse | complexity: 1

id: 266,036 | vocab_size: 24 | ast_levels: 10 | nloc: 7 | n_ast_nodes: 129 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 34 | n_whitespaces: 97 | fun_name: test_cf_data
Closes #10052: The cf attribute now returns deserialized custom field data
https://github.com/netbox-community/netbox.git
def test_cf_data(self): site = Site(name='Test Site', slug='test-site') # Check custom field data on new instance site.custom_field_data['foo'] = 'abc' self.assertEqual(site.cf['foo'], 'abc') # Check custom field data from database site.save() site = Site.objects.get(name='Test Site') self.assertEqual(site.cf['foo'], 'abc')
token_counts: 69 | file_name: test_customfields.py | language: Python | path: netbox/extras/tests/test_customfields.py | commit_id: ea6d86e6c4bb6037465410db6205a7471bc81a6c | repo: netbox | complexity: 1

id: 200,289 | vocab_size: 37 | ast_levels: 19 | nloc: 11 | n_ast_nodes: 176 | n_identifiers: 16 | n_ast_errors: 0 | n_words: 54 | n_whitespaces: 150 | fun_name: convert_to_native_paths
runtests.py: Undo auto-formatting, re-add changes to blacklist for scipy, numpy
https://github.com/sympy/sympy.git
def convert_to_native_paths(lst): newlst = [] for i, rv in enumerate(lst): rv = os.path.join(*rv.split("/")) # on windows the slash after the colon is dropped if sys.platform == "win32": pos = rv.find(':') if pos != -1: if rv[pos + 1] != '\\': rv = rv[:pos + 1] + '\\' + rv[pos + 1:] newlst.append(os.path.normcase(rv)) return newlst
token_counts: 101 | file_name: runtests.py | language: Python | path: sympy/testing/runtests.py | commit_id: 6d2bbf80752549276a968fd4af78231c569d55c5 | repo: sympy | complexity: 5

id: 60,371 | vocab_size: 171 | ast_levels: 19 | nloc: 31 | n_ast_nodes: 424 | n_identifiers: 32 | n_ast_errors: 0 | n_words: 258 | n_whitespaces: 556 | fun_name: ProcessFile
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def ProcessFile(filename, vlevel, extra_check_functions=[]): _SetVerboseLevel(vlevel) try: # Support the UNIX convention of using "-" for stdin. Note that # we are not opening the file with universal newline support # (which codecs doesn't support anyway), so the resulting lines do # contain trailing '\r' characters if we are reading a file that # has CRLF endings. # If after the split a trailing '\r' is present, it is removed # below. If it is not expected to be present (i.e. os.linesep != # '\r\n' as in Windows), a warning is issued below if this file # is processed. if filename == '-': lines = codecs.StreamReaderWriter(sys.stdin, codecs.getreader('utf8'), codecs.getwriter('utf8'), 'replace').read().split('\n') else: lines = codecs.open(filename, 'r', 'utf8', 'replace').read().split('\n') carriage_return_found = False # Remove trailing '\r'. for linenum in range(len(lines)): if lines[linenum].endswith('\r'): lines[linenum] = lines[linenum].rstrip('\r') carriage_return_found = True except IOError: sys.stderr.write( "Skipping input '%s': Can't open for reading\n" % filename) return # Note, if no dot is found, this will give the entire filename as the ext. file_extension = filename[filename.rfind('.') + 1:] # When reading from stdin, the extension is unknown, so no cpplint tests # should rely on the extension. if filename != '-' and file_extension not in _valid_extensions: sys.stderr.write('Ignoring %s; not a valid file name ' '(%s)\n' % (filename, ', '.join(_valid_extensions))) else: ProcessFileData(filename, file_extension, lines, Error, extra_check_functions) if carriage_return_found and os.linesep != '\r\n': # Use 0 for linenum since outputting only one error for potentially # several lines. Error(filename, 0, 'whitespace/newline', 1, 'One or more unexpected \\r (^M) found;' 'better to use only a \\n') sys.stderr.write('Done processing %s\n' % filename)
token_counts: 230 | file_name: cpp_lint.py | language: Python | path: code/deep/BJMMD/caffe/scripts/cpp_lint.py | commit_id: cc4d0564756ca067516f71718a3d135996525909 | repo: transferlearning | complexity: 9

id: 36,020 | vocab_size: 17 | ast_levels: 6 | nloc: 8 | n_ast_nodes: 23 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 17 | n_whitespaces: 38 | fun_name: default_batch_size
Add ONNX export for ViT (#15658) * Add ONNX support for ViT * Refactor to use generic preprocessor * Add vision dep to tests * Extend ONNX slow tests to ViT * Add dummy image generator * Use model_type to determine modality * Add deprecation warnings for tokenizer argument * Add warning when overwriting the preprocessor * Add optional args to docstrings * Add minimum PyTorch version to OnnxConfig * Refactor OnnxConfig class variables from CONSTANT_NAME to snake_case * Add reasonable value for default atol Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
https://github.com/huggingface/transformers.git
def default_batch_size(self) -> int: # Using 2 avoid ONNX making assumption about single sample batch return OnnxConfig.default_fixed_batch
token_counts: 12 | file_name: config.py | language: Python | path: src/transformers/onnx/config.py | commit_id: 50dd314d939a86f3a81e19af01459f449fbaeeca | repo: transformers | complexity: 1

id: 272,037 | vocab_size: 10 | ast_levels: 11 | nloc: 4 | n_ast_nodes: 63 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 39 | fun_name: _tracking_metadata
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _tracking_metadata(self): metadata = json.loads(super()._tracking_metadata) metadata["_is_feature_layer"] = True return json.dumps(metadata, default=json_utils.get_json_type)
token_counts: 37 | file_name: dense_features.py | language: Python | path: keras/feature_column/dense_features.py | commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf | repo: keras | complexity: 1

id: 176,155 | vocab_size: 76 | ast_levels: 9 | nloc: 56 | n_ast_nodes: 334 | n_identifiers: 5 | n_ast_errors: 0 | n_words: 93 | n_whitespaces: 649 | fun_name: tutte_graph
Docstrings for the small.py module (#5240) * added description for the first 5 small graphs * modified descriptions based on comment and added description for two more functions * added doctrings to all the functions * Minor touchups. Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
https://github.com/networkx/networkx.git
def tutte_graph(create_using=None): description = [ "adjacencylist", "Tutte's Graph", 46, [ [2, 3, 4], [5, 27], [11, 12], [19, 20], [6, 34], [7, 30], [8, 28], [9, 15], [10, 39], [11, 38], [40], [13, 40], [14, 36], [15, 16], [35], [17, 23], [18, 45], [19, 44], [46], [21, 46], [22, 42], [23, 24], [41], [25, 28], [26, 33], [27, 32], [34], [29], [30, 33], [31], [32, 34], [33], [], [], [36, 39], [37], [38, 40], [39], [], [], [42, 45], [43], [44, 46], [45], [], [], ], ] G = make_small_undirected_graph(description, create_using) return G
token_counts: 267 | file_name: small.py | language: Python | path: networkx/generators/small.py | commit_id: dec723f072eb997a497a159dbe8674cd39999ee9 | repo: networkx | complexity: 1

id: 111,539 | vocab_size: 49 | ast_levels: 11 | nloc: 9 | n_ast_nodes: 250 | n_identifiers: 21 | n_ast_errors: 1 | n_words: 53 | n_whitespaces: 92 | fun_name: test_append_invalid_alias
Refactor KB for easier customization (#11268) * Add implementation of batching + backwards compatibility fixes. Tests indicate issue with batch disambiguation for custom singular entity lookups. * Fix tests. Add distinction w.r.t. batch size. * Remove redundant and add new comments. * Adjust comments. Fix variable naming in EL prediction. * Fix mypy errors. * Remove KB entity type config option. Change return types of candidate retrieval functions to Iterable from Iterator. Fix various other issues. * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Update spacy/kb_base.pyx Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Update spacy/kb_base.pyx Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Update spacy/pipeline/entity_linker.py Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Add error messages to NotImplementedErrors. Remove redundant comment. * Fix imports. * Remove redundant comments. * Rename KnowledgeBase to InMemoryLookupKB and BaseKnowledgeBase to KnowledgeBase. * Fix tests. * Update spacy/errors.py Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com> * Move KB into subdirectory. * Adjust imports after KB move to dedicated subdirectory. * Fix config imports. * Move Candidate + retrieval functions to separate module. Fix other, small issues. * Fix docstrings and error message w.r.t. class names. Fix typing for candidate retrieval functions. * Update spacy/kb/kb_in_memory.pyx Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com> * Update spacy/ml/models/entity_linker.py Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com> * Fix typing. * Change typing of mentions to be Span instead of Union[Span, str]. * Update docs. * Update EntityLinker and _architecture docs. * Update website/docs/api/entitylinker.md Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> * Adjust message for E1046. * Re-add section for Candidate in kb.md, add reference to dedicated page. * Update docs and docstrings. * Re-add section + reference for KnowledgeBase.get_alias_candidates() in docs. * Update spacy/kb/candidate.pyx * Update spacy/kb/kb_in_memory.pyx * Update spacy/pipeline/legacy/entity_linker.py * Remove canididate.md. Remove mistakenly added config snippet in entity_linker.py. Co-authored-by: Paul O'Leary McCann <polm@dampfkraft.com> Co-authored-by: Sofie Van Landeghem <svlandeg@users.noreply.github.com>
https://github.com/explosion/spaCy.git
def test_append_invalid_alias(nlp): mykb = InMemoryLookupKB(nlp.vocab, entity_vector_length=1) # adding entities mykb.add_entity(entity="Q1", freq=27, entity_vector=[1]) mykb.add_entity(entity="Q2", freq=12, entity_vector=[2]) mykb.add_entity(entity="Q3", freq=5, entity_vector=[3]) # adding aliases mykb.add_alias(alias="douglas", entities=["Q2", "Q3"], probabilities=[0.8, 0.1]) mykb.add_alias(alias="adam", entities=["Q2"], probabilities=[0.9]) # append an alias - should fail because the entities and probabilities vectors are not of equal length with pytest.raises(ValueError): mykb.append_alias(alias="douglas", entity="Q1", prior_prob=0.2) @pytest.mark.filterwarnings("ignore:\\[W036")
@pytest.mark.filterwarnings("ignore:\\[W036")
token_counts: 148 | file_name: test_entity_linker.py | language: Python | path: spacy/tests/pipeline/test_entity_linker.py | commit_id: 1f23c615d7a7326ca5a38a7d768b8b70caaa0e17 | repo: spaCy | complexity: 1

id: 102,696 | vocab_size: 36 | ast_levels: 11 | nloc: 9 | n_ast_nodes: 106 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 41 | n_whitespaces: 88 | fun_name: subscribe_to_coin_updates
Merge standalone wallet into main (#9793) * wallet changes from pac * cat changes * pool tests * pooling tests passing * offers * lint * mempool_mode * black * linting * workflow files * flake8 * more cleanup * renamed * remove obsolete test, don't cast announcement * memos are not only bytes32 * trade renames * fix rpcs, block_record * wallet rpc, recompile settlement clvm * key derivation * clvm tests * lgtm issues and wallet peers * stash * rename * mypy linting * flake8 * bad initializer * flaky tests * Make CAT wallets only create on verified hints (#9651) * fix clvm tests * return to log lvl warn * check puzzle unhardened * public key, not bytes. api caching change * precommit changes * remove unused import * mypy ci file, tests * ensure balance before creating a tx * Remove CAT logic from full node test (#9741) * Add confirmations and sleeps for wallet (#9742) * use pool executor * rever merge mistakes/cleanup * Fix trade test flakiness (#9751) * remove precommit * older version of black * lint only in super linter * Make announcements in RPC be objects instead of bytes (#9752) * Make announcements in RPC be objects instead of bytes * Lint * misc hint'ish cleanup (#9753) * misc hint'ish cleanup * unremove some ci bits * Use main cached_bls.py * Fix bad merge in main_pac (#9774) * Fix bad merge at 71da0487b9cd5564453ec24b76f1ac773c272b75 * Remove unused ignores * more unused ignores * Fix bad merge at 3b143e705057d6c14e2fb3e00078aceff0552d7e * One more byte32.from_hexstr * Remove obsolete test * remove commented out * remove duplicate payment object * remove long sync * remove unused test, noise * memos type * bytes32 * make it clear it's a single state at a time * copy over asset ids from pacr * file endl linter * Update chia/server/ws_connection.py Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com> Co-authored-by: Matt Hauff <quexington@gmail.com> Co-authored-by: Kyle Altendorf <sda@fstab.net> Co-authored-by: dustinface <35775977+xdustinface@users.noreply.github.com>
https://github.com/Chia-Network/chia-blockchain.git
async def subscribe_to_coin_updates(self, coin_names, peer, height=uint32(0)): msg = wallet_protocol.RegisterForCoinUpdates(coin_names, height) all_coins_state: Optional[RespondToCoinUpdates] = await peer.register_interest_in_coin(msg) # State for untrusted sync is processed only in wp sync | or short sync backwards if all_coins_state is not None and self.is_trusted(peer): await self.wallet_state_manager.new_coin_state(all_coins_state.coin_states, peer)
token_counts: 67 | file_name: wallet_node.py | language: Python | path: chia/wallet/wallet_node.py | commit_id: 89f15f591cc3cc3e8ae40e95ffc802f7f2561ece | repo: chia-blockchain | complexity: 3

id: 278,610 | vocab_size: 59 | ast_levels: 17 | nloc: 15 | n_ast_nodes: 194 | n_identifiers: 22 | n_ast_errors: 1 | n_words: 72 | n_whitespaces: 193 | fun_name: softmax
Remove pylint comments. PiperOrigin-RevId: 452353044
https://github.com/keras-team/keras.git
def softmax(x, axis=-1): if x.shape.rank > 1: if isinstance(axis, int): output = tf.nn.softmax(x, axis=axis) else: # nn.softmax does not support tuple axis. e = tf.exp(x - tf.reduce_max(x, axis=axis, keepdims=True)) s = tf.reduce_sum(e, axis=axis, keepdims=True) output = e / s else: raise ValueError( "Cannot apply softmax to a tensor that is 1D. " f"Received input: {x}" ) # Cache the logits to use for crossentropy loss. output._keras_logits = x return output @keras_export("keras.activations.elu") @tf.__internal__.dispatch.add_dispatch_support
@keras_export("keras.activations.elu") @tf.__internal__.dispatch.add_dispatch_support
token_counts: 104 | file_name: activations.py | language: Python | path: keras/activations.py | commit_id: 3613c3defc39c236fb1592c4f7ba1a9cc887343a | repo: keras | complexity: 3

id: 137,498 | vocab_size: 5 | ast_levels: 6 | nloc: 23 | n_ast_nodes: 31 | n_identifiers: 6 | n_ast_errors: 1 | n_words: 5 | n_whitespaces: 12 | fun_name: test_stop_job_gracefully
[Jobs] Use SIGTERM followed by SIGKILL to stop a job (#30851) Currently, when user wants to stop a job, we directly send a `SIGKILL` signal. Instead, we want to send a `SIGTERM` signal first, then send a `SIGKILL` signal after a few seconds if the child process still has not terminated.
https://github.com/ray-project/ray.git
async def test_stop_job_gracefully(job_manager): entrypoint =
entrypoint = """python -c \"
token_counts: 65 | file_name: test_job_manager.py | language: Python | path: dashboard/modules/job/tests/test_job_manager.py | commit_id: 22af73253cd48fa134aee33c145b541c99acdc8b | repo: ray | complexity: 1

id: 260,458 | vocab_size: 10 | ast_levels: 11 | nloc: 4 | n_ast_nodes: 75 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 23 | fun_name: test_agglomerative_clustering_memory_mapped
MNT Deprecate `affinity` in `AgglomerativeClustering` (#23470) Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com> Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com>
https://github.com/scikit-learn/scikit-learn.git
def test_agglomerative_clustering_memory_mapped(): rng = np.random.RandomState(0) Xmm = create_memmap_backed_data(rng.randn(50, 100)) AgglomerativeClustering(metric="euclidean", linkage="single").fit(Xmm)
token_counts: 43 | file_name: test_hierarchical.py | language: Python | path: sklearn/cluster/tests/test_hierarchical.py | commit_id: a5d50cf3c7611b4343f446a97266ea77a4175afa | repo: scikit-learn | complexity: 1

id: 198,198 | vocab_size: 46 | ast_levels: 13 | nloc: 16 | n_ast_nodes: 211 | n_identifiers: 28 | n_ast_errors: 0 | n_words: 61 | n_whitespaces: 193 | fun_name: sort_args_by_name
Rename files for array expression conversions in order to avoid naming conflicts in TAB-completion of the corresponding functions
https://github.com/sympy/sympy.git
def sort_args_by_name(self): expr = self.expr if not isinstance(expr, ArrayTensorProduct): return self args = expr.args sorted_data = sorted(enumerate(args), key=lambda x: default_sort_key(x[1])) pos_sorted, args_sorted = zip(*sorted_data) reordering_map = {i: pos_sorted.index(i) for i, arg in enumerate(args)} contraction_tuples = self._get_contraction_tuples() contraction_tuples = [[(reordering_map[j], k) for j, k in i] for i in contraction_tuples] c_tp = _array_tensor_product(*args_sorted) new_contr_indices = self._contraction_tuples_to_contraction_indices( c_tp, contraction_tuples ) return _array_contraction(c_tp, *new_contr_indices)
token_counts: 135 | file_name: array_expressions.py | language: Python | path: sympy/tensor/array/expressions/array_expressions.py | commit_id: a69c49bec6caf2cb460dc4eedf0fec184db92f0e | repo: sympy | complexity: 5

id: 288,141 | vocab_size: 6 | ast_levels: 6 | nloc: 3 | n_ast_nodes: 22 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 20 | fun_name: is_connected
Add ESPHome BleakClient (#78911) Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
https://github.com/home-assistant/core.git
def is_connected(self) -> bool: return self._is_connected
token_counts: 12 | file_name: client.py | language: Python | path: homeassistant/components/esphome/bluetooth/client.py | commit_id: 7042d6d35be54865b1252c0b28a50cce1a92eabc | repo: core | complexity: 1

id: 276,209 | vocab_size: 7 | ast_levels: 11 | nloc: 5 | n_ast_nodes: 61 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 30 | fun_name: _get_assets_dir
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _get_assets_dir(export_dir): return tf.io.gfile.join( tf.compat.as_text(export_dir), tf.compat.as_text(tf.saved_model.ASSETS_DIRECTORY), )
token_counts: 38 | file_name: saved_model_experimental.py | language: Python | path: keras/saving/saved_model_experimental.py | commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf | repo: keras | complexity: 1

id: 156,048 | vocab_size: 40 | ast_levels: 14 | nloc: 8 | n_ast_nodes: 163 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 43 | n_whitespaces: 87 | fun_name: new_da_object
absolufy-imports - No relative - PEP8 (#8796) Conversation in https://github.com/dask/distributed/issues/5889
https://github.com/dask/dask.git
def new_da_object(dsk, name, chunks, meta=None, dtype=None): if is_dataframe_like(meta) or is_series_like(meta) or is_index_like(meta): from dask.dataframe.core import new_dd_object assert all(len(c) == 1 for c in chunks[1:]) divisions = [None] * (len(chunks[0]) + 1) return new_dd_object(dsk, name, meta, divisions) else: return Array(dsk, name=name, chunks=chunks, meta=meta, dtype=dtype)
token_counts: 111 | file_name: core.py | language: Python | path: dask/array/core.py | commit_id: cccb9d8d8e33a891396b1275c2448c352ef40c27 | repo: dask | complexity: 5

id: 191,686 | vocab_size: 11 | ast_levels: 10 | nloc: 5 | n_ast_nodes: 55 | n_identifiers: 7 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 27 | fun_name: test_run_multiple_args_error
change run to use args and kwargs (#367) Before, `run` was not able to be called with multiple arguments. This expands the functionality.
https://github.com/hwchase17/langchain.git
def test_run_multiple_args_error() -> None: chain = FakeChain() with pytest.raises(ValueError): chain.run("bar", "foo")
token_counts: 28 | file_name: test_base.py | language: Python | path: tests/unit_tests/chains/test_base.py | commit_id: 8d0869c6d3ed63b2b15d4f75ea664e089dcc569d | repo: langchain | complexity: 1

id: 107,167 | vocab_size: 21 | ast_levels: 11 | nloc: 5 | n_ast_nodes: 106 | n_identifiers: 16 | n_ast_errors: 1 | n_words: 22 | n_whitespaces: 40 | fun_name: test_constrained_layout4
ENH: implement and use base layout_engine for more flexible layout.
https://github.com/matplotlib/matplotlib.git
def test_constrained_layout4(): fig, axs = plt.subplots(2, 2, layout="constrained") for ax in axs.flat: pcm = example_pcolor(ax, fontsize=24) fig.colorbar(pcm, ax=axs, pad=0.01, shrink=0.6) @image_comparison(['constrained_layout5.png'], tol=0.002)
@image_comparison(['constrained_layout5.png'], tol=0.002)
token_counts: 60 | file_name: test_constrainedlayout.py | language: Python | path: lib/matplotlib/tests/test_constrainedlayout.py | commit_id: ec4dfbc3c83866f487ff0bc9c87b0d43a1c02b22 | repo: matplotlib | complexity: 2

id: 20,474 | vocab_size: 18 | ast_levels: 11 | nloc: 6 | n_ast_nodes: 93 | n_identifiers: 12 | n_ast_errors: 0 | n_words: 19 | n_whitespaces: 69 | fun_name: check
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def check(self, pattern): if self.eos: raise EndOfText() if pattern not in self._re_cache: self._re_cache[pattern] = re.compile(pattern, self.flags) return self._re_cache[pattern].match(self.data, self.pos)
token_counts: 60 | file_name: scanner.py | language: Python | path: pipenv/patched/notpip/_vendor/pygments/scanner.py | commit_id: f3166e673fe8d40277b804d35d77dcdb760fc3b3 | repo: pipenv | complexity: 3

id: 5,819 | vocab_size: 23 | ast_levels: 11 | nloc: 4 | n_ast_nodes: 67 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 36 | n_whitespaces: 58 | fun_name: extract_text_from_element
PR - Fix `extract_text_from_element()`and `find_element*()` to `find_element()` (#6438) * Updated getUserData() and find_element* Signed-off-by: elulcao <elulcao@icloud.com> Thanks @breuerfelix for reviewing, 🚀 People in this thread please let me know if something is not OK, IG changed a lot these days. 🤗 @her
https://github.com/InstaPy/InstaPy.git
def extract_text_from_element(elem): # if element is valid and contains text withou spaces if elem and hasattr(elem, "text") and elem.text and not re.search("\s", elem.text): return elem.text # if the element is not valid, return None return None
token_counts: 38 | file_name: util.py | language: Python | path: instapy/util.py | commit_id: 2a157d452611d37cf50ccb7d56ff1a06e9790ecb | repo: InstaPy | complexity: 5

id: 98,080 | vocab_size: 49 | ast_levels: 15 | nloc: 23 | n_ast_nodes: 261 | n_identifiers: 40 | n_ast_errors: 0 | n_words: 55 | n_whitespaces: 279 | fun_name: delete_alert_rule
feat(workflow): Add audit logs on add/edit/remove metric alerts (#33296)
https://github.com/getsentry/sentry.git
def delete_alert_rule(alert_rule, user=None): if alert_rule.status == AlertRuleStatus.SNAPSHOT.value: raise AlreadyDeletedError() with transaction.atomic(): if user: create_audit_entry_from_user( user, organization_id=alert_rule.organization_id, target_object=alert_rule.id, data=alert_rule.get_audit_log_data(), event=AuditLogEntryEvent.ALERT_RULE_REMOVE, ) incidents = Incident.objects.filter(alert_rule=alert_rule) bulk_delete_snuba_subscriptions(list(alert_rule.snuba_query.subscriptions.all())) if incidents.exists(): alert_rule.update(status=AlertRuleStatus.SNAPSHOT.value) AlertRuleActivity.objects.create( alert_rule=alert_rule, user=user, type=AlertRuleActivityType.DELETED.value ) else: alert_rule.delete() if alert_rule.id: # Change the incident status asynchronously, which could take awhile with many incidents due to snapshot creations. tasks.auto_resolve_snapshot_incidents.apply_async(kwargs={"alert_rule_id": alert_rule.id})
token_counts: 162 | file_name: logic.py | language: Python | path: src/sentry/incidents/logic.py | commit_id: 5b01e6a61cbbb62bde6e1cacf155e00c5e5bc432 | repo: sentry | complexity: 5

id: 177,063 | vocab_size: 128 | ast_levels: 15 | nloc: 23 | n_ast_nodes: 407 | n_identifiers: 30 | n_ast_errors: 0 | n_words: 239 | n_whitespaces: 508 | fun_name: strategy_saturation_largest_first
strategy_saturation_largest_first now accepts partial colorings (#5888) The coloring strategy function `strategy_saturation_largest_first` allows the user to pass in a dict with a partial node coloring. This was always allowed, but previously the partial coloring dict was ignored. The partial coloring is also verified within the function.
https://github.com/networkx/networkx.git
def strategy_saturation_largest_first(G, colors): distinct_colors = {v: set() for v in G} # Add the node color assignments given in colors to the # distinct colors set for each neighbor of that node for vertex, color in colors.items(): for neighbor in G[vertex]: distinct_colors[neighbor].add(color) # Check that the color assignments in colors are valid # i.e. no neighboring nodes have the same color if len(colors) >= 2: for vertex, color in colors.items(): if color in distinct_colors[vertex]: raise nx.NetworkXError( "Neighboring vertices must have different colors" ) # If 0 nodes have been colored, simply choose the node of highest degree. if not colors: node = max(G, key=G.degree) yield node # Add the color 0 to the distinct colors set for each # neighbor of that node. for v in G[node]: distinct_colors[v].add(0) for i in range(len(G) - len(colors)): # Compute the maximum saturation and the set of nodes that # achieve that saturation. saturation = {v: len(c) for v, c in distinct_colors.items() if v not in colors} # Yield the node with the highest saturation, and break ties by # degree. node = max(saturation, key=lambda v: (saturation[v], G.degree(v))) yield node # Update the distinct color sets for the neighbors. color = colors[node] for v in G[node]: distinct_colors[v].add(color) #: Dictionary mapping name of a strategy as a string to the strategy function. STRATEGIES = { "largest_first": strategy_largest_first, "random_sequential": strategy_random_sequential, "smallest_last": strategy_smallest_last, "independent_set": strategy_independent_set, "connected_sequential_bfs": strategy_connected_sequential_bfs, "connected_sequential_dfs": strategy_connected_sequential_dfs, "connected_sequential": strategy_connected_sequential, "saturation_largest_first": strategy_saturation_largest_first, "DSATUR": strategy_saturation_largest_first, }
token_counts: 209 | file_name: greedy_coloring.py | language: Python | path: networkx/algorithms/coloring/greedy_coloring.py | commit_id: 77d7ddac9a7c69ff086dd825e55454f300f4242b | repo: networkx | complexity: 13

id: 60,244 | vocab_size: 20 | ast_levels: 10 | nloc: 9 | n_ast_nodes: 83 | n_identifiers: 3 | n_ast_errors: 0 | n_words: 34 | n_whitespaces: 74 | fun_name: choose_color_by_layertype
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def choose_color_by_layertype(layertype): color = '#6495ED' # Default if layertype == 'Convolution' or layertype == 'Deconvolution': color = '#FF5050' elif layertype == 'Pooling': color = '#FF9900' elif layertype == 'InnerProduct': color = '#CC33FF' return color
token_counts: 39 | file_name: draw.py | language: Python | path: code/deep/BJMMD/caffe/python/caffe/draw.py | commit_id: cc4d0564756ca067516f71718a3d135996525909 | repo: transferlearning | complexity: 5

id: 259,211 | vocab_size: 35 | ast_levels: 16 | nloc: 13 | n_ast_nodes: 145 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 48 | n_whitespaces: 210 | fun_name: _compute_transformed_categories
ENH Adds infrequent categories to OneHotEncoder (#16018) * ENH Completely adds infrequent categories * STY Linting * STY Linting * DOC Improves wording * DOC Lint * BUG Fixes * CLN Address comments * CLN Address comments * DOC Uses math to description float min_frequency * DOC Adds comment regarding drop * BUG Fixes method name * DOC Clearer docstring * TST Adds more tests * FIX Fixes mege * CLN More pythonic * CLN Address comments * STY Flake8 * CLN Address comments * DOC Fix * MRG * WIP * ENH Address comments * STY Fix * ENH Use functiion call instead of property * ENH Adds counts feature * CLN Rename variables * DOC More details * CLN Remove unneeded line * CLN Less lines is less complicated * CLN Less diffs * CLN Improves readiabilty * BUG Fix * CLN Address comments * TST Fix * CLN Address comments * CLN Address comments * CLN Move docstring to userguide * DOC Better wrapping * TST Adds test to handle_unknown='error' * ENH Spelling error in docstring * BUG Fixes counter with nan values * BUG Removes unneeded test * BUG Fixes issue * ENH Sync with main * DOC Correct settings * DOC Adds docstring * DOC Immprove user guide * DOC Move to 1.0 * DOC Update docs * TST Remove test * DOC Update docstring * STY Linting * DOC Address comments * ENH Neater code * DOC Update explaination for auto * Update sklearn/preprocessing/_encoders.py Co-authored-by: Roman Yurchak <rth.yurchak@gmail.com> * TST Uses docstring instead of comments * TST Remove call to fit * TST Spelling error * ENH Adds support for drop + infrequent categories * ENH Adds infrequent_if_exist option * DOC Address comments for user guide * DOC Address comments for whats_new * DOC Update docstring based on comments * CLN Update test with suggestions * ENH Adds computed property infrequent_categories_ * DOC Adds where the infrequent column is located * TST Adds more test for infrequent_categories_ * DOC Adds docstring for _compute_drop_idx * CLN Moves _convert_to_infrequent_idx into its own method * TST Increases test coverage * TST Adds failing test * CLN Careful consideration of dropped and inverse_transform * STY Linting * DOC Adds docstrinb about dropping infrequent * DOC Uses only * DOC Numpydoc * TST Includes test for get_feature_names_out * DOC Move whats new * DOC Address docstring comments * DOC Docstring changes * TST Better comments * TST Adds check for handle_unknown='ignore' for infrequent * CLN Make _infrequent_indices private * CLN Change min_frequency default to None * DOC Adds comments * ENH adds support for max_categories=1 * ENH Describe lexicon ordering for ties * DOC Better docstring * STY Fix * CLN Error when explicity dropping an infrequent category * STY Grammar Co-authored-by: Joel Nothman <joel.nothman@gmail.com> Co-authored-by: Roman Yurchak <rth.yurchak@gmail.com> Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com>
https://github.com/scikit-learn/scikit-learn.git
def _compute_transformed_categories(self, i, remove_dropped=True): cats = self.categories_[i] if self._infrequent_enabled: infreq_map = self._default_to_infrequent_mappings[i] if infreq_map is not None: frequent_mask = infreq_map < infreq_map.max() infrequent_cat = "infrequent_sklearn" # infrequent category is always at the end cats = np.concatenate( (cats[frequent_mask], np.array([infrequent_cat], dtype=object)) ) if remove_dropped: cats = self._remove_dropped_categories(cats, i) return cats
token_counts: 92 | file_name: _encoders.py | language: Python | path: sklearn/preprocessing/_encoders.py | commit_id: 7f0006c8aad1a09621ad19c3db19c3ff0555a183 | repo: scikit-learn | complexity: 4

id: 32,877 | vocab_size: 30 | ast_levels: 10 | nloc: 14 | n_ast_nodes: 86 | n_identifiers: 9 | n_ast_errors: 0 | n_words: 33 | n_whitespaces: 60 | fun_name: get_key_to_not_convert
`bitsandbytes` - `Linear8bitLt` integration into `transformers` models (#17901) * first commit * correct replace function * add final changes - works like charm! - cannot implement tests yet - tested * clean up a bit * add bitsandbytes dependencies * working version - added import function - added bitsandbytes utils file * small fix * small fix - fix import issue * fix import issues * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * refactor a bit - move bitsandbytes utils to utils - change comments on functions * reformat docstring - reformat docstring on init_empty_weights_8bit * Update src/transformers/__init__.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * revert bad formatting * change to bitsandbytes * refactor a bit - remove init8bit since it is useless * more refactoring - fixed init empty weights issue - added threshold param * small hack to make it work * Update src/transformers/modeling_utils.py * Update src/transformers/modeling_utils.py * revmoe the small hack * modify utils file * make style + refactor a bit * create correctly device map * add correct dtype for device map creation * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * apply suggestions - remove with torch.grad - do not rely on Python bool magic! * add docstring - add docstring for new kwargs * add docstring - comment `replace_8bit_linear` function - fix weird formatting * - added more documentation - added new utility function for memory footprint tracking - colab demo to add * few modifs - typo doc - force cast into float16 when load_in_8bit is enabled * added colab link * add test architecture + docstring a bit * refactor a bit testing class * make style + refactor a bit * enhance checks - add more checks - start writing saving test * clean up a bit * male style * add more details on doc * add more tests - still needs to fix 2 tests * replace by "or" - could not fix it from GitHub GUI Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * refactor a bit testing code + add readme * make style * fix import issue * Update src/transformers/modeling_utils.py Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com> * add few comments * add more doctring + make style * more docstring * raise error when loaded in 8bit * make style * add warning if loaded on CPU * add small sanity check * fix small comment * add bitsandbytes on dockerfile * Improve documentation - improve documentation from comments * add few comments * slow tests pass on the VM but not on the CI VM * Fix merge conflict * make style * another test should pass on a multi gpu setup * fix bad import in testing file * Fix slow tests - remove dummy batches - no more CUDA illegal memory errors * odify dockerfile * Update docs/source/en/main_classes/model.mdx * Update Dockerfile * Update model.mdx * Update Dockerfile * Apply suggestions from code review * few modifications - lm head can stay on disk/cpu - change model name so that test pass * change test value - change test value to the correct output - torch bmm changed to baddmm in bloom modeling when merging * modify installation guidelines * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Apply suggestions from code review Co-authored-by: Sylvain Gugger 
<35901082+sgugger@users.noreply.github.com> * replace `n`by `name` * merge `load_in_8bit` and `low_cpu_mem_usage` * first try - keep the lm head in full precision * better check - check the attribute `base_model_prefix` instead of computing the number of parameters * added more tests * Update src/transformers/utils/bitsandbytes.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> * Merge branch 'integration-8bit' of https://github.com/younesbelkada/transformers into integration-8bit * improve documentation - fix typos for installation - change title in the documentation Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Michael Benayoun <mickbenayoun@gmail.com>
https://github.com/huggingface/transformers.git
def get_key_to_not_convert(model): r # Ignore this for base models (BertModel, GPT2Model, etc.) if not hasattr(model, model.base_model_prefix): return "" # otherwise they have an attached head list_modules = list(model.named_parameters()) last_name = list_modules[-1][0] return last_name.split(".")[0]
token_counts: 50 | file_name: bitsandbytes.py | language: Python | path: src/transformers/utils/bitsandbytes.py | commit_id: 4a51075a96d2049f368b5f3dd6c0e9f08f599b62 | repo: transformers | complexity: 2

id: 156,649 | vocab_size: 10 | ast_levels: 8 | nloc: 2 | n_ast_nodes: 58 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 10 | n_whitespaces: 24 | fun_name: split
Include known inconsistency in DataFrame `str.split` accessor docstring (#9177)
https://github.com/dask/dask.git
def split(self, pat=None, n=-1, expand=False): return self._split("split", pat=pat, n=n, expand=expand)
token_counts: 38 | file_name: accessor.py | language: Python | path: dask/dataframe/accessor.py | commit_id: dadfd9b681997b20026e6a51afef3fb9ebb1513a | repo: dask | complexity: 1

id: 19,833 | vocab_size: 39 | ast_levels: 22 | nloc: 16 | n_ast_nodes: 232 | n_identifiers: 29 | n_ast_errors: 1 | n_words: 49 | n_whitespaces: 188 | fun_name: test_install_venv_project_directory
More granular control over PIPENV_VENV_IN_PROJECT variable. (#5026) * Allow PIPENV_VENV_IN_PROJECT to be read in as None, and ensure if it is set to False that it does not use .venv directory. * refactor based on PR feedback and add news fragment. * Review unit test coverage and add new tests. Remove unneccesary bits from other tests.
https://github.com/pypa/pipenv.git
def test_install_venv_project_directory(PipenvInstance): with PipenvInstance(chdir=True) as p: with temp_environ(), TemporaryDirectory( prefix="pipenv-", suffix="temp_workon_home" ) as workon_home: os.environ["WORKON_HOME"] = workon_home c = p.pipenv("install six") assert c.returncode == 0 venv_loc = None for line in c.stderr.splitlines(): if line.startswith("Virtualenv location:"): venv_loc = Path(line.split(":", 1)[-1].strip()) assert venv_loc is not None assert venv_loc.joinpath(".project").exists() @pytest.mark.cli @pytest.mark.deploy @pytest.mark.system
@pytest.mark.cli @pytest.mark.deploy @pytest.mark.system
token_counts: 129 | file_name: test_install_basic.py | language: Python | path: tests/integration/test_install_basic.py | commit_id: 949ee95d6748e8777bed589f0d990aa4792b28f8 | repo: pipenv | complexity: 4

id: 83,182 | vocab_size: 50 | ast_levels: 11 | nloc: 15 | n_ast_nodes: 172 | n_identifiers: 18 | n_ast_errors: 0 | n_words: 60 | n_whitespaces: 158 | fun_name: test_json_get_subscribers_for_guest_user
docs: Consistently hyphenate “web-public”. In English, compound adjectives should essentially always be hyphenated. This makes them easier to parse, especially for users who might not recognize that the words “web public” go together as a phrase. Signed-off-by: Anders Kaseorg <anders@zulip.com>
https://github.com/zulip/zulip.git
def test_json_get_subscribers_for_guest_user(self) -> None: guest_user = self.example_user("polonius") never_subscribed = gather_subscriptions_helper(guest_user, True).never_subscribed # A guest user can only see never subscribed streams that are web-public. # For Polonius, the only web-public stream that he is not subscribed at # this point is Rome. self.assert_length(never_subscribed, 1) web_public_stream_id = never_subscribed[0]["stream_id"] result = self.client_get(f"/json/streams/{web_public_stream_id}/members") self.assert_json_success(result) result_dict = result.json() self.assertIn("subscribers", result_dict) self.assertIsInstance(result_dict["subscribers"], list) self.assertGreater(len(result_dict["subscribers"]), 0)
token_counts: 98 | file_name: test_subs.py | language: Python | path: zerver/tests/test_subs.py | commit_id: 90e202cd38d00945c81da4730d39e3f5c5b1e8b1 | repo: zulip | complexity: 1

id: 245,574 | vocab_size: 90 | ast_levels: 16 | nloc: 30 | n_ast_nodes: 456 | n_identifiers: 30 | n_ast_errors: 0 | n_words: 144 | n_whitespaces: 486 | fun_name: convert
[Fix] replace mmcv's function and modules imported with mmengine's (#8594) * use mmengine's load_state_dict and load_checkpoint * from mmengine import dump * from mmengine import FileClient dump list_from_file * remove redundant registry * update * update * update * replace _load_checkpoint with CheckpointLoad.load_checkpoint * changes according to mmcv #2216 * changes due to mmengine #447 * changes due mmengine #447 and mmcv #2217 * changes due mmengine #447 and mmcv #2217 * update * update * update
https://github.com/open-mmlab/mmdetection.git
def convert(src, dst, depth): # load arch_settings if depth not in arch_settings: raise ValueError('Only support ResNet-50 and ResNet-101 currently') block_nums = arch_settings[depth] # load caffe model caffe_model = load(src, encoding='latin1') blobs = caffe_model['blobs'] if 'blobs' in caffe_model else caffe_model # convert to pytorch style state_dict = OrderedDict() converted_names = set() convert_conv_fc(blobs, state_dict, 'conv1', 'conv1', converted_names) convert_bn(blobs, state_dict, 'res_conv1_bn', 'bn1', converted_names) for i in range(1, len(block_nums) + 1): for j in range(block_nums[i - 1]): if j == 0: convert_conv_fc(blobs, state_dict, f'res{i + 1}_{j}_branch1', f'layer{i}.{j}.downsample.0', converted_names) convert_bn(blobs, state_dict, f'res{i + 1}_{j}_branch1_bn', f'layer{i}.{j}.downsample.1', converted_names) for k, letter in enumerate(['a', 'b', 'c']): convert_conv_fc(blobs, state_dict, f'res{i + 1}_{j}_branch2{letter}', f'layer{i}.{j}.conv{k+1}', converted_names) convert_bn(blobs, state_dict, f'res{i + 1}_{j}_branch2{letter}_bn', f'layer{i}.{j}.bn{k + 1}', converted_names) # check if all layers are converted for key in blobs: if key not in converted_names: print(f'Not Convert: {key}') # save checkpoint checkpoint = dict() checkpoint['state_dict'] = state_dict torch.save(checkpoint, dst)
token_counts: 223 | file_name: detectron2pytorch.py | language: Python | path: tools/model_converters/detectron2pytorch.py | commit_id: d0695e68654ca242be54e655491aef8c959ac345 | repo: mmdetection | complexity: 9

id: 249,486 | vocab_size: 56 | ast_levels: 11 | nloc: 15 | n_ast_nodes: 177 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 71 | n_whitespaces: 132 | fun_name: check_rust_lib_up_to_date
Check if Rust lib needs rebuilding. (#13759) This protects against the common mistake of failing to remember to rebuild Rust code after making changes.
https://github.com/matrix-org/synapse.git
def check_rust_lib_up_to_date() -> None: if not _dist_is_editable(): return synapse_dir = os.path.dirname(synapse.__file__) synapse_root = os.path.abspath(os.path.join(synapse_dir, "..")) # Double check we've not gone into site-packages... if os.path.basename(synapse_root) == "site-packages": return # ... and it looks like the root of a python project. if not os.path.exists("pyproject.toml"): return # Get the hash of all Rust source files hash = _hash_rust_files_in_directory(os.path.join(synapse_root, "rust", "src")) if hash != get_rust_file_digest(): raise Exception("Rust module outdated. Please rebuild using `poetry install`")
token_counts: 99 | file_name: rust.py | language: Python | path: synapse/util/rust.py | commit_id: ebfeac7c5ded851a2639911ec6adf9d0fcdb029a | repo: synapse | complexity: 5

id: 176,497 | vocab_size: 7 | ast_levels: 9 | nloc: 118 | n_ast_nodes: 33 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 7 | n_whitespaces: 12 | fun_name: generate_gml
Update black (#5438) * CI: sync up black dev requirements version with precommit * Run black Co-authored-by: Jarrod Millman <jarrod.millman@gmail.com>
https://github.com/networkx/networkx.git
def generate_gml(G, stringizer=None): r valid_keys = re.compile("^[A-Za-z][0-9A-Za-z_]*$")
token_counts: 285 | file_name: gml.py | language: Python | path: networkx/readwrite/gml.py | commit_id: f6755ffa00211b523c6c0bec5398bc6c3c43c8b1 | repo: networkx | complexity: 10

id: 270,714 | vocab_size: 5 | ast_levels: 7 | nloc: 2 | n_ast_nodes: 26 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 5 | n_whitespaces: 19 | fun_name: _defun_call
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _defun_call(self, inputs): return self._make_op(inputs)
token_counts: 15 | file_name: base_layer.py | language: Python | path: keras/engine/base_layer.py | commit_id: 84afc5193d38057e2e2badf9c889ea87d80d8fbf | repo: keras | complexity: 1

id: 291,724 | vocab_size: 22 | ast_levels: 10 | nloc: 11 | n_ast_nodes: 118 | n_identifiers: 13 | n_ast_errors: 0 | n_words: 27 | n_whitespaces: 72 | fun_name: skybell_mock
Upgrade pytest-aiohttp (#82475) * Upgrade pytest-aiohttp * Make sure executors, tasks and timers are closed Some test will trigger warnings on garbage collect, these warnings spills over into next test. Some test trigger tasks that raise errors on shutdown, these spill over into next test. This is to mimic older pytest-aiohttp and it's behaviour on test cleanup. Discussions on similar changes for pytest-aiohttp are here: https://github.com/pytest-dev/pytest-asyncio/pull/309 * Replace loop with event_loop * Make sure time is frozen for tests * Make sure the ConditionType is not async /home-assistant/homeassistant/helpers/template.py:2082: RuntimeWarning: coroutine 'AsyncMockMixin._execute_mock_call' was never awaited def wrapper(*args, **kwargs): Enable tracemalloc to get traceback where the object was allocated. See https://docs.pytest.org/en/stable/how-to/capture-warnings.html#resource-warnings for more info. * Increase litejet press tests with a factor 10 The times are simulated anyway, and we can't stop the normal event from occuring. * Use async handlers for aiohttp tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_still_image_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_get_stream_from_camera tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template tests/components/motioneye/test_camera.py::test_camera_option_stream_url_template /Users/joakim/src/hass/home-assistant/venv/lib/python3.9/site-packages/aiohttp/web_urldispatcher.py:189: DeprecationWarning: Bare functions are deprecated, use async ones warnings.warn( * Switch to freezegun in modbus tests The tests allowed clock to tick in between steps * Make sure skybell object are fully mocked Old tests would trigger attempts to post to could services: ``` DEBUG:aioskybell:HTTP post https://cloud.myskybell.com/api/v3/login/ Request with headers: {'content-type': 'application/json', 'accept': '*/*', 'x-skybell-app-id': 'd2b542c7-a7e4-4e1e-b77d-2b76911c7c46', 'x-skybell-client-id': '1f36a3c0-6dee-4997-a6db-4e1c67338e57'} ``` * Fix sorting that broke after rebase
https://github.com/home-assistant/core.git
def skybell_mock(): mocked_skybell_device = AsyncMock(spec=SkybellDevice) mocked_skybell = AsyncMock(spec=Skybell) mocked_skybell.async_get_devices.return_value = [mocked_skybell_device] mocked_skybell.async_send_request.return_value = {"id": USER_ID} mocked_skybell.user_id = USER_ID with patch( "homeassistant.components.skybell.config_flow.Skybell", return_value=mocked_skybell, ), patch("homeassistant.components.skybell.Skybell", return_value=mocked_skybell): yield mocked_skybell
token_counts: 68 | file_name: conftest.py | language: Python | path: tests/components/skybell/conftest.py | commit_id: c576a68d336bc91fd82c299d9b3e5dfdc1c14960 | repo: core | complexity: 1

id: 293,022 | vocab_size: 57 | ast_levels: 15 | nloc: 16 | n_ast_nodes: 193 | n_identifiers: 21 | n_ast_errors: 0 | n_words: 113 | n_whitespaces: 278 | fun_name: async_update_group_state
Improve binary sensor group when member is unknown or unavailable (#67468)
https://github.com/home-assistant/core.git
def async_update_group_state(self) -> None: all_states = [self.hass.states.get(x) for x in self._entity_ids] # filtered_states are members currently in the state machine filtered_states: list[str] = [x.state for x in all_states if x is not None] # Set group as unavailable if all members are unavailable self._attr_available = any( state != STATE_UNAVAILABLE for state in filtered_states ) valid_state = self.mode( state not in (STATE_UNKNOWN, STATE_UNAVAILABLE) for state in filtered_states ) if not valid_state: # Set as unknown if any / all member is not unknown or unavailable self._attr_is_on = None else: # Set as ON if any / all member is ON states = list(map(lambda x: x == STATE_ON, filtered_states)) state = self.mode(states) self._attr_is_on = state
token_counts: 122 | file_name: binary_sensor.py | language: Python | path: homeassistant/components/group/binary_sensor.py | commit_id: c5dd5e18c0443b332dddfbbf4a8ff03374c5c070 | repo: core | complexity: 7

id: 304,515 | vocab_size: 13 | ast_levels: 9 | nloc: 6 | n_ast_nodes: 53 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 13 | n_whitespaces: 56 | fun_name: _async_stop
Auto recover when the Bluetooth adapter stops responding (#77043)
https://github.com/home-assistant/core.git
async def _async_stop(self) -> None: if self._cancel_watchdog: self._cancel_watchdog() self._cancel_watchdog = None await self._async_stop_scanner()
token_counts: 29 | file_name: scanner.py | language: Python | path: homeassistant/components/bluetooth/scanner.py | commit_id: ced8278e3222501dde7d769ea4b57aae75f62438 | repo: core | complexity: 2

id: 320,177 | vocab_size: 100 | ast_levels: 14 | nloc: 32 | n_ast_nodes: 384 | n_identifiers: 35 | n_ast_errors: 0 | n_words: 169 | n_whitespaces: 528 | fun_name: decide_what_tags_to_keep
Fixes deleting images if the branch API returns an error code and makes the error code a failure
https://github.com/paperless-ngx/paperless-ngx.git
def decide_what_tags_to_keep(self): # Default to everything gets kept still super().decide_what_tags_to_keep() # Locate the feature branches feature_branches = {} for branch in self.branch_api.get_branches( repo=self.repo_name, ): if branch.name.startswith("feature-"): logger.debug(f"Found feature branch {branch.name}") feature_branches[branch.name] = branch logger.info(f"Located {len(feature_branches)} feature branches") if not len(feature_branches): # Our work here is done, delete nothing return # Filter to packages which are tagged with feature-* packages_tagged_feature: List[ContainerPackage] = [] for package in self.all_package_versions: if package.tag_matches("feature-"): packages_tagged_feature.append(package) # Map tags like "feature-xyz" to a ContainerPackage feature_pkgs_tags_to_versions: Dict[str, ContainerPackage] = {} for pkg in packages_tagged_feature: for tag in pkg.tags: feature_pkgs_tags_to_versions[tag] = pkg logger.info( f'Located {len(feature_pkgs_tags_to_versions)} versions of package {self.package_name} tagged "feature-"', ) # All the feature tags minus all the feature branches leaves us feature tags # with no corresponding branch self.tags_to_delete = list( set(feature_pkgs_tags_to_versions.keys()) - set(feature_branches.keys()), ) # All the tags minus the set of going to be deleted tags leaves us the # tags which will be kept around self.tags_to_keep = list( set(self.all_pkgs_tags_to_version.keys()) - set(self.tags_to_delete), ) logger.info( f"Located {len(self.tags_to_delete)} versions of package {self.package_name} to delete", )
token_counts: 199 | file_name: cleanup-tags.py | language: Python | path: .github/scripts/cleanup-tags.py | commit_id: 9214b412556e994dcf32dd2c7e050169f5cace1c | repo: paperless-ngx | complexity: 8

id: 134,379 | vocab_size: 6 | ast_levels: 6 | nloc: 8 | n_ast_nodes: 22 | n_identifiers: 4 | n_ast_errors: 0 | n_words: 6 | n_whitespaces: 20 | fun_name: is_multi_agent
[RLlib] AlgorithmConfig: Next steps (volume 01); Algos, RolloutWorker, PolicyMap, WorkerSet use AlgorithmConfig objects under the hood. (#29395)
https://github.com/ray-project/ray.git
def is_multi_agent(self) -> bool: return self._is_multi_agent
token_counts: 12 | file_name: algorithm_config.py | language: Python | path: rllib/algorithms/algorithm_config.py | commit_id: 182744bbd151c166b8028355eae12a5da63fb3cc | repo: ray | complexity: 1

id: 100,648 | vocab_size: 11 | ast_levels: 9 | nloc: 3 | n_ast_nodes: 42 | n_identifiers: 6 | n_ast_errors: 0 | n_words: 11 | n_whitespaces: 25 | fun_name: _skip_num
bugfix: extract - stop progress bar from going over max value
https://github.com/deepfakes/faceswap.git
def _skip_num(self) -> int: return self._args.extract_every_n if hasattr(self._args, "extract_every_n") else 1
token_counts: 25 | file_name: extract.py | language: Python | path: scripts/extract.py | commit_id: 0d23714875f81ddabdbe8f4e40bef6e5f29eeb19 | repo: faceswap | complexity: 2

id: 169,398 | vocab_size: 43 | ast_levels: 14 | nloc: 16 | n_ast_nodes: 204 | n_identifiers: 17 | n_ast_errors: 0 | n_words: 67 | n_whitespaces: 237 | fun_name: _wrap_result
Bug fix using GroupBy.resample produces inconsistent behavior when calling it over empty df #47705 (#47672) * DOC #45443 edited the documentation of where/mask functions * DOC #45443 edited the documentation of where/mask functions * Update generic.py * Bug 43767 fix * fixing lines doc * visual indent * visual indent * indentation * grouby.resample bug * groubby.sample * syntax * syntax * syntax * syntax * what's new * flake8 error * pytest * blank line * editting resample * editting resample * syntax * syntax * syntax * space * space * space * inplace * spelling * test * test resampler * tests * tests * Update resample.py * Update resample.py * Update resample.py * Update v1.6.0.rst
https://github.com/pandas-dev/pandas.git
def _wrap_result(self, result): # GH 47705 obj = self.obj if ( isinstance(result, ABCDataFrame) and result.empty and not isinstance(result.index, PeriodIndex) ): result = result.set_index( _asfreq_compat(obj.index[:0], freq=self.freq), append=True ) if isinstance(result, ABCSeries) and self._selection is not None: result.name = self._selection if isinstance(result, ABCSeries) and result.empty: # When index is all NaT, result is empty but index is not result.index = _asfreq_compat(obj.index[:0], freq=self.freq) result.name = getattr(obj, "name", None) return result
token_counts: 132 | file_name: resample.py | language: Python | path: pandas/core/resample.py | commit_id: fba672389b74ca4afece56040ae079a1f2b71544 | repo: pandas | complexity: 8

id: 297,321 | vocab_size: 9 | ast_levels: 8 | nloc: 3 | n_ast_nodes: 45 | n_identifiers: 6 | n_ast_errors: 1 | n_words: 9 | n_whitespaces: 14 | fun_name: mock_expires_at
Google Assistant SDK integration (#82328) * Copy google_sheets to google_assistant_sdk This is to improve diff of the next commit with the actual implementation. Commands used: cp -r homeassistant/components/google_sheets/ homeassistant/components/google_assistant_sdk/ cp -r tests/components/google_sheets/ tests/components/google_assistant_sdk/ find homeassistant/components/google_assistant_sdk/ tests/components/google_assistant_sdk/ -type f | xargs sed -i \ -e 's@google_sheets@google_assistant_sdk@g' \ -e 's@Google Sheets@Google Assistant SDK@g' \ -e 's@tkdrob@tronikos@g' * Google Assistant SDK integration Allows sending commands and broadcast messages to Google Assistant. * Remove unnecessary async_entry_has_scopes check * Bump gassist-text to fix protobuf dependency
https://github.com/home-assistant/core.git
def mock_expires_at() -> int: return time.time() + 3600 @pytest.fixture(name="config_entry")
@pytest.fixture(name="config_entry")
token_counts: 15 | file_name: conftest.py | language: Python | path: tests/components/google_assistant_sdk/conftest.py | commit_id: 5d316734659cc331658ea2f77a2985cd2c58d043 | repo: core | complexity: 1

id: 284,585 | vocab_size: 18 | ast_levels: 10 | nloc: 24 | n_ast_nodes: 96 | n_identifiers: 10 | n_ast_errors: 0 | n_words: 25 | n_whitespaces: 60 | fun_name: print_help
Bounty Hunter mood: 11 bugs fixed (#1853) * fix #1850 * fix #1831 * add extra check to Reddit API keys * ignore warning message to update praw api * improve OpenBB links * fix quick performance only on stocks class because I'm James bitch * fix quick performance only on stocks class because I'm James bitch * fix #1829 * fix #1821 * add messari to keys - fix #1819 * example of multiple oclumns to check on options/chains * minor improvement in xlabel re. #1814 * remove repeated command * fix #1698 * fix line too long * fix #1814 fr now * fix tests
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self): has_ticker_start = "[unvl]" if not self.ticker else "" has_ticker_end = "[/unvl]" if not self.ticker else "" help_text = f console.print(text=help_text, menu="Stocks - Government")
token_counts: 42 | file_name: gov_controller.py | language: Python | path: openbb_terminal/stocks/government/gov_controller.py | commit_id: a6f7e111e68346aeab315985b3704c2710693b38 | repo: OpenBBTerminal | complexity: 3

id: 252,298 | vocab_size: 40 | ast_levels: 15 | nloc: 11 | n_ast_nodes: 195 | n_identifiers: 19 | n_ast_errors: 0 | n_words: 46 | n_whitespaces: 150 | fun_name: standalone_binaries
simplify ci build script, add MSIX installer and Microsoft store.
https://github.com/mitmproxy/mitmproxy.git
def standalone_binaries():
    with archive(DIST_DIR / f"mitmproxy-{version()}-{operating_system()}") as f:
        _pyinstaller("standalone.spec")

        for tool in ["mitmproxy", "mitmdump", "mitmweb"]:
            executable = TEMP_DIR / "pyinstaller/dist" / tool
            if platform.system() == "Windows":
                executable = executable.with_suffix(".exe")

            # Test if it works at all O:-)
            print(f"> {executable} --version")
            subprocess.check_call([executable, "--version"])

            f.add(str(executable), str(executable.name))
    print(f"Packed {f.name}.")
91
build.py
Python
release/build.py
3872d33111d27430c4b5f1ae021e91e3522cc0e3
mitmproxy
3
144,311
16
9
15
79
12
0
18
64
_get_all_child_nodes
[Ray DAG] Implement experimental Ray DAG API for task/class (#22058)
https://github.com/ray-project/ray.git
def _get_all_child_nodes(self) -> Set["DAGNode"]:
    scanner = _PyObjScanner()
    children = set()
    for n in scanner.find_nodes([self._bound_args, self._bound_kwargs]):
        children.add(n)
    return children
47
dag_node.py
Python
python/ray/experimental/dag/dag_node.py
c065e3f69ec248383d98b45a8d1c00832ccfdd57
ray
2
285,873
33
17
11
171
18
0
40
154
refresh_datasets_on_menu
Forecasting Menu [Work in Progress] (#1933) * Gave forecasting memory * Fixed scripts, refactored * FIxed poetry lock * edge case check for forecast target * Improved combine and load functionality * Cleaned up translations * Fixed issue with covariates * Fixed issue checking covariates * Another covariates check fix * Ignored regr and linregr warnings * Fixed covariate issues * switched from forecasting to forecast * Finished transition to forecast * Can add entire dataset with one command * Improved combine description * Removed naming covariates * Created new installation * typo * Make plot show dates if available * Added better handling or users without the menu * Removed unused file * Fix * Better handling for nontraditional datasets * Fixed black and pylint * Fixed tests * Added darts install to main tests * Working on darts with CI * Added back test file * Made large tables print better * naive baseline * typo * Finished naive * no dollar on prediction * fixed positive MAPE bug * quick refactoring * Fixed two different args for same thing * added extra patience * linreg mape fix * info fix * Refactored API, bumped to Darts 0.21.0 * Added fixes * Increased verbosity for wrong column * Updated dependencies * Hid warnings * Fixed importing * Fixed tests * Fixed ugly seasonal plotting * Fixed forecast line color * Switched chart output to blue * Simplified lambda_price_prediction_color * fixed residuals * Chnage * Removed darts from CI per Chavi * Added fixes to tests * Added knnfix * Fixed issue where n!= o * Added changes * Added changes * Imrpoved forecast dash * Added Theo notebook * Added enhancements to dash * Added notebook * Added fix for jupyter lab * Added debug stuff * Change * Updated docs * Fixed formatting * Fixed formatting * Removed prints * Filtered some info * Added button to run model * Improved api * Added secret feautr (no peeking Martin) * Cleaned code * Fixed tests * Added test fixes * Added fixes * Fixes * FIxes for pres * Remove bad tests * Removed knn * Fixed issues with removing mc * doc for conda * Added forecast improvements * Added streamlit support * Fixed issues * fix expo with streamlit due to quantile() * fixed performance issues with streamlit for now.. 
* clean up historical forecast with new trainer * quick fix for regression trainer params * Added fixes * quick fix for other fix for regression trainer params * table formatting for timestamp * potential fix for inf in feature engineered datasets * Basic working in new format * dw * Trying * Fixed issues * Improved graphing * fixing trainer for LR and formatting * doge and linting * page break * automatic cleaning of datasets * automatic cleaning of datasets- fix * Fixed forecast dates * Made dashboard prettier * Added fixes * Added fixes * Added options * Fixed error * remove caching * adding in spinner * Added vairable n_predict in streamlit * Added mypy fix * renaming and range change * new index for n predict * check positive float for window size * Update _index.md * Update _index.md * Update _index.md * Update _index.md * Update _index.md * Update _index.md * Update _index.md * Update _index.md * Update _index.md * renaming * reorg files * Update _index.md * hidden which command for versions * Update _index.md * Update _index.md * which: ns parser * hugo for: which * hugo for: forecasting fix * formatting black * update stock controller test * Lay groundwork for better residual plotting * improved delete to allow for periods in title * improved automatic cleaning of inf * Added new API * Added new API * Added new API * formatting for black * Updated our testing CI * Reverted changes * Added forecast docs * Fixed mypy issues * Fixes tests * Did some refactoring, added a report * new api in streamlit * Added integrated tests * Update _index.md * improved loading in custom dataset * menu spacing * installer fixes * Added docs fixes * Adding comments to test if commit working * Fixed report * naming conventions * formatting * removing unused var * Made last report imporvements * Update README.md * Added fix * Switched to warning * Added fixes * Added fixes * Added fixes * Added fixes * Update economy av view test * Remove forgotten print statement * Update depencencies * Added verbosity to pytest * Added fixes * Fixed pylint * Fixed actions checkout * Added fixes Co-authored-by: colin99d <colin99delahunty@gmail.com> Co-authored-by: Colin Delahunty <72827203+colin99d@users.noreply.github.com> Co-authored-by: James Simmons <simmonsj330@gmail.com> Co-authored-by: minhhoang1023 <40023817+minhhoang1023@users.noreply.github.com> Co-authored-by: Theodore Aptekarev <aptekarev@gmail.com>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def refresh_datasets_on_menu(self):
    self.list_dataset_cols = list()
    maxfile = max(len(file) for file in self.files)
    self.loaded_dataset_cols = "\n"
    for dataset, data in self.datasets.items():
        self.loaded_dataset_cols += (
            f" {dataset} {(maxfile - len(dataset)) * ' '}: "
            f"{', '.join(data.columns)}\n"
        )

        for col in data.columns:
            self.list_dataset_cols.append(f"{dataset}.{col}")
72
forecast_controller.py
Python
openbb_terminal/forecast/forecast_controller.py
7fd72d9ee1e8847717195859bf6d608268a94e2f
OpenBBTerminal
4
198,798
31
14
10
127
11
0
37
135
apply_load
changes to just append loads of the same direction along with others made
https://github.com/sympy/sympy.git
def apply_load(self, location, magnitude, direction):
    magnitude = sympify(magnitude)
    direction = sympify(direction)

    if location not in self.node_labels:
        raise ValueError("Load must be applied at a known node")

    else:
        if location in list(self._loads):
            self._loads[location].append([magnitude, direction])
        else:
            self._loads[location] = [[magnitude, direction]]
80
truss.py
Python
sympy/physics/continuum_mechanics/truss.py
3b6e792c521077d661ae212e7de414696e13ccb6
sympy
3
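A short usage sketch of what the commit describes (repeated loads on one node are appended). The node labels/coordinates, the `add_node` signature and the `loads` accessor are assumptions about the Truss API, not facts from the diff:

from sympy.physics.continuum_mechanics.truss import Truss

t = Truss()
t.add_node("A", 0, 0)   # assumed add_node(label, x, y) signature
t.add_node("B", 4, 0)

# Two loads at the same node and direction are appended rather than merged.
t.apply_load("A", magnitude=10, direction=90)
t.apply_load("A", magnitude=5, direction=90)
print(t.loads)  # expected roughly: {'A': [[10, 90], [5, 90]]}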
246,011
6
8
7
29
5
0
6
20
get_room_max_stream_ordering
Remove redundant `get_current_events_token` (#11643) * Push `get_room_{min,max_stream_ordering}` into StreamStore Both implementations of this are identical, so we may as well push it down and get rid of the abstract base class nonsense. * Remove redundant `StreamStore` class This is empty now * Remove redundant `get_current_events_token` This was an exact duplicate of `get_room_max_stream_ordering`, so let's get rid of it. * newsfile
https://github.com/matrix-org/synapse.git
def get_room_max_stream_ordering(self) -> int: return self._stream_id_gen.get_current_token()
16
stream.py
Python
synapse/storage/databases/main/stream.py
2359ee3864a065229c80e3ff58faa981edd24558
synapse
1
144,850
6
9
4
31
4
0
6
20
as_pydict
[Datasets] Expose `TableRow` as public API; minimize copies/type conversions on row-based ops. (#22305) This PR properly exposes `TableRow` as a public API (API docs + the "Public" tag), since it's already exposed to the user in our row-based ops. In addition, the following changes are made: 1. During row-based ops, we also choose a batch format that lines up with the current dataset format in order to eliminate unnecessary copies and type conversions. 2. `TableRow` now derives from `collections.abc.Mapping`, which lets `TableRow` better interop with code expecting a mapping, and includes a few helpful mixins so we only have to implement `__getitem__`, `__iter__`, and `__len__`.
https://github.com/ray-project/ray.git
def as_pydict(self) -> dict: return dict(self.items())
17
row.py
Python
python/ray/data/row.py
53c4c7b1be54fe67a54b34f3f0c076ad502e35cc
ray
1
58,977
29
16
18
183
28
1
36
165
install_protected_system_blocks
Adds ability to delete block types via the CLI (#6849) * Ensure system blocks are protected on destructive API calls * Enable deleting block types * Ensure Block Types are protected against destructive API actions * Ensure updating protected Block Types on update doesn't impact saving * ⚫ * isort * Suppress status errors * ⚫
https://github.com/PrefectHQ/prefect.git
async def install_protected_system_blocks(session):
    for block in [
        prefect.blocks.system.JSON,
        prefect.blocks.system.DateTime,
        prefect.blocks.system.Secret,
        prefect.filesystems.LocalFileSystem,
        prefect.infrastructure.Process,
    ]:
        block_type = block._to_block_type()
        block_type.is_protected = True

        block_type = await models.block_types.create_block_type(
            session=session, block_type=block_type, override=True
        )
        block_schema = await models.block_schemas.create_block_schema(
            session=session,
            block_schema=block._to_block_schema(block_type_id=block_type.id),
            override=True,
        )


@router.post("/install_system_block_types")
@router.post("/install_system_block_types")
112
block_types.py
Python
src/prefect/orion/api/block_types.py
1a3a3adf0bf4d83206f0367b98905a9db15cfec4
prefect
2
177,078
22
11
6
112
11
0
30
56
radius
Add weight distance metrics (#5305) Adds the weight keyword argument to allow users to compute weighted distance metrics e.g. diameter, eccentricity, periphery, etc. The kwarg works in the same fashion as the weight param for shortest paths - i.e. if a string, look up with edge attr by key, if callable, compute the weight via the function. Default is None, meaning return unweighted result which is the current behavior. Co-authored-by: Dan Schult <dschult@colgate.edu> Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
https://github.com/networkx/networkx.git
def radius(G, e=None, usebounds=False, weight=None):
    if usebounds is True and e is None and not G.is_directed():
        return _extrema_bounding(G, compute="radius", weight=weight)
    if e is None:
        e = eccentricity(G, weight=weight)
    return min(e.values())
71
distance_measures.py
Python
networkx/algorithms/distance_measures.py
28f78cfa9a386620ee1179582fda1db5ffc59f84
networkx
5
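To make the new `weight` keyword concrete, a small hedged example (the graph is made up; `nx.radius` is the public wrapper around the function above):

import networkx as nx

G = nx.Graph()
G.add_edge("a", "b", weight=2)
G.add_edge("b", "c", weight=3)

print(nx.radius(G))                   # 1, eccentricities by hop count
print(nx.radius(G, weight="weight"))  # 3, eccentricities use edge weights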
21,767
6
6
3
22
4
0
6
20
is_dotted
Update tomlkit==0.9.2 Used: python -m invoke vendoring.update --package=tomlkit
https://github.com/pypa/pipenv.git
def is_dotted(self) -> bool: return self._dotted
12
items.py
Python
pipenv/vendor/tomlkit/items.py
8faa74cdc9da20cfdcc69f5ec29b91112c95b4c9
pipenv
1
260,074
27
12
11
99
6
0
33
98
_get_prediction_method
API Rename base_estimator in CalibratedClassifierCV (#22054) Co-authored-by: Kevin Roice <kevinroice@Kevins-Air.broadband> Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com>
https://github.com/scikit-learn/scikit-learn.git
def _get_prediction_method(clf):
    if hasattr(clf, "decision_function"):
        method = getattr(clf, "decision_function")
        return method, "decision_function"
    elif hasattr(clf, "predict_proba"):
        method = getattr(clf, "predict_proba")
        return method, "predict_proba"
    else:
        raise RuntimeError(
            "'estimator' has no 'decision_function' or 'predict_proba' method."
        )
53
calibration.py
Python
sklearn/calibration.py
effdd6e215c67f2ae8ed1e378ea1661e936059a4
scikit-learn
3
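A hedged illustration of the fallback order the private helper above encodes (decision_function first, then predict_proba); the classifiers are arbitrary examples and this mirrors the helper rather than calling it:

from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

for clf in (LogisticRegression(), GaussianNB()):
    if hasattr(clf, "decision_function"):
        print(type(clf).__name__, "-> decision_function")
    elif hasattr(clf, "predict_proba"):
        print(type(clf).__name__, "-> predict_proba")
# LogisticRegression -> decision_function
# GaussianNB -> predict_proba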
174,928
9
13
11
79
11
1
9
40
generate_metadata
Use data-dist-info-metadata (PEP 658) to decouple resolution from downloading (#11111) Co-authored-by: Tzu-ping Chung <uranusjr@gmail.com>
https://github.com/pypa/pip.git
def generate_metadata(self) -> bytes:
    return dedent(
        f
    ).encode("utf-8")


@pytest.fixture(scope="function")
@pytest.fixture(scope="function")
19
test_download.py
Python
tests/functional/test_download.py
bad03ef931d9b3ff4f9e75f35f9c41f45839e2a1
pip
1
176,579
79
16
95
326
30
1
115
310
weighted_projected_graph
Fixes #5403: Errors on non-distinct bipartite node sets (#5442) * is_bipartite_node_set errors if nodes not distinct * weighted_projected_graph errors if nodeset too big * update release_dev.rst * Use sets instead of lists for collecting flowfuncs in tests. (#5589) Co-authored-by: Ross Barnowski <rossbar@berkeley.edu>
https://github.com/networkx/networkx.git
def weighted_projected_graph(B, nodes, ratio=False):
    r
    if B.is_directed():
        pred = B.pred
        G = nx.DiGraph()
    else:
        pred = B.adj
        G = nx.Graph()
    G.graph.update(B.graph)
    G.add_nodes_from((n, B.nodes[n]) for n in nodes)
    n_top = float(len(B) - len(nodes))

    if n_top < 1:
        raise NetworkXAlgorithmError(
            f"the size of the nodes to project onto ({len(nodes)}) is >= the graph size ({len(B)}).\n"
            "They are either not a valid bipartite partition or contain duplicates"
        )

    for u in nodes:
        unbrs = set(B[u])
        nbrs2 = {n for nbr in unbrs for n in B[nbr]} - {u}
        for v in nbrs2:
            vnbrs = set(pred[v])
            common = unbrs & vnbrs
            if not ratio:
                weight = len(common)
            else:
                weight = len(common) / n_top
            G.add_edge(u, v, weight=weight)
    return G


@not_implemented_for("multigraph")
@not_implemented_for("multigraph")
188
projection.py
Python
networkx/algorithms/bipartite/projection.py
bc7ace58c872d527475c09345f89579ff82e4c5d
networkx
9
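A minimal usage sketch for the projection above, with a made-up bipartite graph; in the default (non-ratio) mode the projected edge weights count shared neighbours:

import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
B.add_edges_from([(0, "a"), (0, "b"), (1, "a"), (1, "b"), (2, "b")])

G = bipartite.weighted_projected_graph(B, [0, 1, 2])
print(sorted(G.edges(data=True)))
# [(0, 1, {'weight': 2}), (0, 2, {'weight': 1}), (1, 2, {'weight': 1})]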
104,814
43
14
9
136
14
0
56
101
xdirname
Add support for metadata files to `imagefolder` (#4069) * Add support for metadata files to `imagefolder` * Fix imagefolder drop_labels test * Replace csv with jsonl * Add test * Correct resolution for nested metadata files * Allow None as JSON Lines value * Add comments * Count path segments * Address comments * Improve test * Update src/datasets/packaged_modules/imagefolder/imagefolder.py Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> * test e2e imagefolder with metadata * add test for zip archives * fix test * add some debug logging to know which files are ignored * add test for bad/malformed metadata file * revert use of posix path to fix windows tests * style * Refactor tests for packaged modules Text and Csv Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
https://github.com/huggingface/datasets.git
def xdirname(a):
    a, *b = str(a).split("::")
    if is_local_path(a):
        a = os.path.dirname(Path(a).as_posix())
    else:
        a = posixpath.dirname(a)
        # if we end up at the root of the protocol, we get for example a = 'http:'
        # so we have to fix it by adding the '//' that was removed:
        if a.endswith(":"):
            a += "//"
    return "::".join([a] + b)
75
streaming_download_manager.py
Python
src/datasets/utils/streaming_download_manager.py
7017b0965f0a0cae603e7143de242c3425ecef91
datasets
3
247,306
36
11
12
157
9
0
51
164
test_add_alias
Add type hints to `tests/rest/client` (#12108) * Add type hints to `tests/rest/client` * newsfile * fix imports * add `test_account.py` * Remove one type hint in `test_report_event.py` * change `on_create_room` to `async` * update new functions in `test_third_party_rules.py` * Add `test_filter.py` * add `test_rooms.py` * change to `assertEquals` to `assertEqual` * lint
https://github.com/matrix-org/synapse.git
def test_add_alias(self) -> None:
    # Create an additional alias.
    second_alias = "#second:test"
    self._set_alias_via_directory(second_alias)

    # Add the canonical alias.
    self._set_canonical_alias({"alias": self.alias, "alt_aliases": [self.alias]})

    # Then add the second alias.
    self._set_canonical_alias(
        {"alias": self.alias, "alt_aliases": [self.alias, second_alias]}
    )

    # Canonical alias now exists!
    res = self._get_canonical_alias()
    self.assertEqual(
        res, {"alias": self.alias, "alt_aliases": [self.alias, second_alias]}
    )
90
test_rooms.py
Python
tests/rest/client/test_rooms.py
2ffaf30803f93273a4d8a65c9e6c3110c8433488
synapse
1
275,982
5
6
2
18
4
0
5
19
objects_to_serialize
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def objects_to_serialize(self, serialization_cache): raise NotImplementedError
10
base_serialization.py
Python
keras/saving/saved_model/base_serialization.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
224,044
23
13
5
118
14
0
26
44
build_page
Remove spaces at the ends of docstrings, normalize quotes
https://github.com/mkdocs/mkdocs.git
def build_page(title, path, config, md_src=''):
    files = Files([File(path, config['docs_dir'], config['site_dir'], config['use_directory_urls'])])
    page = Page(title, list(files)[0], config)
    # Fake page.read_source()
    page.markdown, page.meta = meta.get_data(md_src)
    return page, files
74
build_tests.py
Python
mkdocs/tests/build_tests.py
e7f07cc82ab2be920ab426ba07456d8b2592714d
mkdocs
1
249,813
45
13
26
138
18
0
58
154
to_synapse_error
Remove redundant types from comments. (#14412) Remove type hints from comments which have been added as Python type hints. This helps avoid drift between comments and reality, as well as removing redundant information. Also adds some missing type hints which were simple to fill in.
https://github.com/matrix-org/synapse.git
def to_synapse_error(self) -> SynapseError:
    # try to parse the body as json, to get better errcode/msg, but
    # default to M_UNKNOWN with the HTTP status as the error text
    try:
        j = json_decoder.decode(self.response.decode("utf-8"))
    except ValueError:
        j = {}

    if not isinstance(j, dict):
        j = {}

    errcode = j.pop("errcode", Codes.UNKNOWN)
    errmsg = j.pop("error", self.msg)

    return ProxiedRequestError(self.code, errmsg, errcode, j)
82
errors.py
Python
synapse/api/errors.py
d8cc86eff484b6f570f55a5badb337080c6e4dcd
synapse
3
269,919
9
10
4
45
6
0
10
42
_implements_test_batch_hooks
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def _implements_test_batch_hooks(self): return not generic_utils.is_default( self.on_test_batch_begin ) or not generic_utils.is_default(self.on_test_batch_end)
26
callbacks.py
Python
keras/callbacks.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
269,930
11
9
4
51
6
0
11
43
on_predict_begin
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def on_predict_begin(self, logs=None):
    logs = self._process_logs(logs)
    for callback in self.callbacks:
        callback.on_predict_begin(logs)
31
callbacks.py
Python
keras/callbacks.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
2
133,184
10
10
4
52
8
0
12
44
map
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def map(self, func, iterable, chunksize=None): return self._map_async( func, iterable, chunksize=chunksize, unpack_args=False ).get()
35
pool.py
Python
python/ray/util/multiprocessing/pool.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
1
129,882
30
14
13
119
14
0
37
159
_get_current_node_resource_key
[CI] Format Python code with Black (#21975) See #21316 and #21311 for the motivation behind these changes.
https://github.com/ray-project/ray.git
def _get_current_node_resource_key(self) -> str:
    current_node_id = ray.get_runtime_context().node_id.hex()
    for node in ray.nodes():
        if node["NodeID"] == current_node_id:
            # Found the node.
            for key in node["Resources"].keys():
                if key.startswith("node:"):
                    return key
    else:
        raise ValueError("Cannot find the node dictionary for current node.")
67
job_manager.py
Python
dashboard/modules/job/job_manager.py
7f1bacc7dc9caf6d0ec042e39499bbf1d9a7d065
ray
5
256,261
41
9
21
193
20
0
42
257
_get_state_dict
Apply black formatting (#2115) * Testing black on ui/ * Applying black on docstores * Add latest docstring and tutorial changes * Create a single GH action for Black and docs to reduce commit noise to the minimum, slightly refactor the OpenAPI action too * Remove comments * Relax constraints on pydoc-markdown * Split temporary black from the docs. Pydoc-markdown was obsolete and needs a separate PR to upgrade * Fix a couple of bugs * Add a type: ignore that was missing somehow * Give path to black * Apply Black * Apply Black * Relocate a couple of type: ignore * Update documentation * Make Linux CI run after applying Black * Triggering Black * Apply Black * Remove dependency, does not work well * Remove manually double trailing commas * Update documentation Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
https://github.com/deepset-ai/haystack.git
def _get_state_dict(self):
    state_dict = {
        "evaluate_every": self.evaluate_every,
        "n_gpu": self.n_gpu,
        "grad_acc_steps": self.grad_acc_steps,
        "device": self.device,
        "local_rank": self.local_rank,
        "early_stopping": self.early_stopping,
        "epochs": self.epochs,
        "checkpoint_on_sigterm": self.checkpoint_on_sigterm,
        "checkpoint_root_dir": self.checkpoint_root_dir,
        "checkpoint_every": self.checkpoint_every,
        "checkpoints_to_keep": self.checkpoints_to_keep,
        "from_epoch": self.from_epoch,
        "from_step": self.from_step,
        "global_step": self.global_step,
        "log_learning_rate": self.log_learning_rate,
        "log_loss_every": self.log_loss_every,
        "disable_tqdm": self.disable_tqdm,
    }

    return state_dict
114
base.py
Python
haystack/modeling/training/base.py
a59bca366174d9c692fa19750c24d65f47660ef7
haystack
1
281,704
10
9
31
54
9
0
10
31
print_help
Custom data context (#1193) * Add first iteration of custom context * Add sample data + improve plot * Change `head` to `show` with sorting and limit. Add "-x" to plot and dynamic update of completer * generate random time series for test csv * Make columns lower case. Check if date is in columns and convert to timestamp. Improve plotting for dates * Add qa to custom * Add pred to custom * Hugooooo * Testing * dang whitespace Co-authored-by: Colin Delahunty <72827203+colin99d@users.noreply.github.com> Co-authored-by: didierlopes.eth <dro.lopes@campus.fct.unl.pt>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    help_text = f
    console.print(text=help_text, menu="Custom - Quantitative Analysis")
22
qa_controller.py
Python
gamestonk_terminal/custom/quantitative_analysis/qa_controller.py
6a66f3f3ed934e0615ff4ba283ee67fcc43d3656
OpenBBTerminal
1
136,073
22
8
5
42
5
1
23
48
_arrow_extension_scalars_are_subclassable
[Datasets] [Arrow 7+ Support - 3/N] Add support for Arrow 8, 9, 10, and nightly. (#29999) This PR adds support for Arrow 8, 9, 10, and nightly in Ray, and is the third PR in a set of stacked PRs making up this mono-PR for Arrow 7+ support (#29161), and is stacked on top of a PR fixing task cancellation in Ray Core (#29984) and a PR adding support for Arrow 7 (#29993). The last two commits are the relevant commits for review. Summary of Changes This PR: - For Arrow 9+, add allow_bucket_creation=true to S3 URIs for the Ray Core Storage API and for the Datasets S3 write API ([Datasets] In Arrow 9+, creating S3 buckets requires explicit opt-in. #29815). - For Arrow 9+, create an ExtensionScalar subclass for tensor extension types that returns an ndarray view from .as_py() ([Datasets] For Arrow 8+, tensor column element access returns an ExtensionScalar. #29816). - For Arrow 8.*, we manually convert the ExtensionScalar to an ndarray for tensor extension types, since the ExtensionScalar type exists but isn't subclassable in Arrow 8 ([Datasets] For Arrow 8+, tensor column element access returns an ExtensionScalar. #29816). - For Arrow 10+, we match on other potential error messages when encountering permission issues when interacting with S3 ([Datasets] In Arrow 10+, S3 errors raised due to permission issues can vary beyond our current pattern matching #29994). - adds CI jobs for Arrow 8, 9, 10, and nightly - removes the pyarrow version upper bound
https://github.com/ray-project/ray.git
def _arrow_extension_scalars_are_subclassable():
    # TODO(Clark): Remove utility once we only support Arrow 9.0.0+.
    return (
        PYARROW_VERSION is None
        or PYARROW_VERSION >= MIN_PYARROW_VERSION_SCALAR_SUBCLASS
    )


@PublicAPI(stability="beta")
@PublicAPI(stability="beta")
15
arrow.py
Python
python/ray/air/util/tensor_extensions/arrow.py
06d5dc36e19f16b7e1f8d6e2e872a81624f6a42c
ray
2
181,661
32
12
15
338
18
0
78
123
test_transform
Revert "Deployed 7ccda9a with MkDocs version: 1.3.0" This reverts commit bd9629c40e01241766197119b581a99409b07068.
https://github.com/EpistasisLab/tpot.git
def test_transform():
    input = np.array(((0, 1, 2, 3, 4, 5), (0, 1, 2, 3, 4, 5))).transpose()
    ohe = OneHotEncoder()
    ohe.fit(input)
    test_data = np.array(((0, 1, 2, 6), (0, 1, 6, 7))).transpose()
    output = ohe.transform(test_data).todense()
    assert np.sum(output) == 5

    input = np.array(((0, 1, 2, 3, 4, 5), (0, 1, 2, 3, 4, 5))).transpose()
    ips = scipy.sparse.csr_matrix(input)
    ohe = OneHotEncoder()
    ohe.fit(ips)
    test_data = np.array(((0, 1, 2, 6), (0, 1, 6, 7))).transpose()
    tds = scipy.sparse.csr_matrix(test_data)
    output = ohe.transform(tds).todense()
    assert np.sum(output) == 3
233
one_hot_encoder_tests.py
Python
tests/one_hot_encoder_tests.py
388616b6247ca4ea8de4e2f340d6206aee523541
tpot
1
258,797
97
10
19
230
24
0
152
251
test_sample_y_shapes
BUG Fix covariance and stdev shape in GPR with normalize_y (#22199) Co-authored-by: Guillaume Lemaitre <g.lemaitre58@gmail.com> Co-authored-by: Nakamura-Zimmerer, Tenavi (ARC-AF) <tenavi.nakamura-zimmerer@nasa.gov>
https://github.com/scikit-learn/scikit-learn.git
def test_sample_y_shapes(normalize_y, n_targets):
    rng = np.random.RandomState(1234)

    n_features, n_samples_train = 6, 9
    # Number of spatial locations to predict at
    n_samples_X_test = 7
    # Number of sample predictions per test point
    n_samples_y_test = 5

    y_train_shape = (n_samples_train,)
    if n_targets is not None:
        y_train_shape = y_train_shape + (n_targets,)

    # By convention single-output data is squeezed upon prediction
    if n_targets is not None and n_targets > 1:
        y_test_shape = (n_samples_X_test, n_targets, n_samples_y_test)
    else:
        y_test_shape = (n_samples_X_test, n_samples_y_test)

    X_train = rng.randn(n_samples_train, n_features)
    X_test = rng.randn(n_samples_X_test, n_features)
    y_train = rng.randn(*y_train_shape)

    model = GaussianProcessRegressor(normalize_y=normalize_y)

    # FIXME: before fitting, the estimator does not have information regarding
    # the number of targets and default to 1. This is inconsistent with the shape
    # provided after `fit`. This assert should be made once the following issue
    # is fixed:
    # https://github.com/scikit-learn/scikit-learn/issues/22430
    # y_samples = model.sample_y(X_test, n_samples=n_samples_y_test)
    # assert y_samples.shape == y_test_shape

    model.fit(X_train, y_train)

    y_samples = model.sample_y(X_test, n_samples=n_samples_y_test)
    assert y_samples.shape == y_test_shape
142
test_gpr.py
Python
sklearn/gaussian_process/tests/test_gpr.py
3786daf7dc5c301478d489b0756f90d0ac5d010f
scikit-learn
4
135,927
51
15
19
112
7
0
82
399
_check_if_correct_nn_framework_installed
[RLlib] AlgorithmConfigs: Make None a valid value for methods to set properties; Use new `NotProvided` singleton, instead, to indicate no changes wanted on that property. (#30020)
https://github.com/ray-project/ray.git
def _check_if_correct_nn_framework_installed(self, _tf1, _tf, _torch):
    if self.framework_str in {"tf", "tf2"}:
        if not (_tf1 or _tf):
            raise ImportError(
                (
                    "TensorFlow was specified as the framework to use (via `config."
                    "framework([tf|tf2])`)! However, no installation was "
                    "found. You can install TensorFlow via `pip install tensorflow`"
                )
            )
    elif self.framework_str == "torch":
        if not _torch:
            raise ImportError(
                (
                    "PyTorch was specified as the framework to use (via `config."
                    "framework('torch')`)! However, no installation was found. You "
                    "can install PyTorch via `pip install torch`."
                )
            )
60
algorithm_config.py
Python
rllib/algorithms/algorithm_config.py
087548031bcf22dd73364b58acb70e61a49f2427
ray
6
103,055
72
17
65
496
33
0
106
357
test_fish_integration
Add fish pipestatus integration tests and changelog entries
https://github.com/kovidgoyal/kitty.git
def test_fish_integration(self):
    fish_prompt, right_prompt = 'left>', '<right'
    completions_dir = os.path.join(kitty_base_dir, 'shell-integration', 'fish', 'vendor_completions.d')
    with self.run_shell(
        shell='fish',
        rc=f) as pty:
        q = fish_prompt + ' ' * (pty.screen.columns - len(fish_prompt) - len(right_prompt)) + right_prompt
        pty.wait_till(lambda: pty.screen_contents().count(right_prompt) == 1)
        self.ae(pty.screen_contents(), q)

        # XDG_DATA_DIRS
        pty.send_cmd_to_child('set -q XDG_DATA_DIRS; or echo ok')
        pty.wait_till(lambda: pty.screen_contents().count(right_prompt) == 2)
        self.ae(str(pty.screen.line(1)), 'ok')

        # completion and prompt marking
        pty.send_cmd_to_child('clear')
        pty.send_cmd_to_child('_test_comp_path')
        pty.wait_till(lambda: pty.screen_contents().count(right_prompt) == 2)
        q = '\n'.join(str(pty.screen.line(i)) for i in range(1, pty.screen.cursor.y))
        self.ae(q, 'ok')
        self.ae(pty.last_cmd_output(), q)

        # resize and redraw (fish_handle_reflow)
        pty.write_to_child(r'echo $COLUMNS')
        pty.set_window_size(rows=20, columns=40)
        q = fish_prompt + 'echo $COLUMNS' + ' ' * (40 - len(fish_prompt) - len(right_prompt) - len('echo $COLUMNS')) + right_prompt
        pty.process_input_from_child()
744
shell_integration.py
Python
kitty_tests/shell_integration.py
e2f16ff62451eccb4fc901d30042ceca23940188
kitty
3
3,813
11
10
4
57
8
0
13
41
test_jobs_empty
🎉 🎉 Source FB Marketing: performance and reliability fixes (#9805) * Facebook Marketing performance improvement * add comments and little refactoring * fix integration tests with the new config * improve job status handling, limit concurrency to 10 * fix campaign jobs, refactor manager * big refactoring of async jobs, support random order of slices * update source _read_incremental to hook new state logic * fix issues with timeout * remove debugging and clean up, improve retry logic * merge changes from #8234 * fix call super _read_increment * generalize batch execution, add use_batch flag * improve coverage, do some refactoring of spec * update test, remove overrides of source * add split by AdSet * add smaller insights * fix end_date < start_date case * add account_id to PK * add notes * fix new streams * fix reversed incremental stream * update spec.json for SAT * upgrade CDK and bump version Co-authored-by: Dmytro Rezchykov <dmitry.rezchykov@zazmic.com> Co-authored-by: Eugene Kulak <kulak.eugene@gmail.com>
https://github.com/airbytehq/airbyte.git
def test_jobs_empty(self, api):
    manager = InsightAsyncJobManager(api=api, jobs=[])
    jobs = list(manager.completed_jobs())
    assert not jobs
34
test_async_job_manager.py
Python
airbyte-integrations/connectors/source-facebook-marketing/unit_tests/test_async_job_manager.py
a3aae8017a0a40ff2006e2567f71dccb04c997a5
airbyte
1
288,032
8
8
3
35
5
0
8
22
statflag
Refactor apcupsd to use config flow (#64809) * Add Config Flow to APCUPSd integration and remove YAML support. * Hide the binary sensor if user does not select STATFLAG resource. * Add tests for config flows. * Simplify config flow code. * Spell fix. * Fix pylint warnings. * Simplify the code for config flow. * First attempt to implement import flows to suppport legacy YAML configurations. * Remove unnecessary log calls. * Wrap synchronous update call with `hass.async_add_executor_job`. * Import the YAML configurations when sensor platform is set up. * Move the logger call since the variables are not properly set up. * Add codeowner. * Fix name field of manifest.json. * Fix linting issue. * Fix incorrect dependency due to incorrect rebase. * Update codeowner and config flows via hassfest. * Postpone the deprecation warning to 2022.7. * Import future annotations for init file. * Add an newline at the end to make prettier happy. * Update github id. * Add type hints for return types of steps in config flow. * Move the deprecation date for YAML config to 2022.12. * Update according to reviews. * Use async_forward_entry_setups. * Add helper properties to `APCUPSdData` class. * Add device_info for binary sensor. * Simplify config flow. * Remove options flow strings. * update the tests according to the changes. * Add `entity_registry_enabled_default` to entities and use imported CONF_RESOURCES to disable entities instead of skipping them. * Update according to reviews. * Do not use model of the UPS as the title for the integration. Instead, simply use "APCUPSd" as the integration title and let the device info serve as title for each device instead. * Change schema to be a global variable. * Add more comments. * Rewrite the tests for config flows. * Fix enabled_by_default. * Show friendly titles in the integration. * Add import check in `async_setup_platform` to avoid importing in sensor platform setup. * Add import check in `async_setup_platform` to avoid importing in sensor platform setup. * Update comments in test files. * Use parametrize instead of manually iterating different test cases. * Swap the order of the platform constants. * Avoid using broad exceptions. * Set up device info via `_attr_device_info`. * Remove unrelated test in `test_config_flow`. * Use `DeviceInfo` instead of dict to assign to `_attr_device_info`. * Add english translation. * Add `async_create_issue` for deprecated YAML configuration. * Enable UPS status by default since it could show "online, charging, on battery etc" which is meaningful for all users. * Apply suggestions from code review * Apply suggestion * Apply suggestion Co-authored-by: Martin Hjelmare <marhje52@gmail.com>
https://github.com/home-assistant/core.git
def statflag(self) -> str | None: return self.status.get("STATFLAG")
19
__init__.py
Python
homeassistant/components/apcupsd/__init__.py
52307708c843b947a2d631f2fe7ddaa8bd9a90d7
core
1
276,924
6
7
2
31
4
1
6
11
enable_interactive_logging
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def enable_interactive_logging():
    INTERACTIVE_LOGGING.enable = True


@keras_export("keras.utils.disable_interactive_logging")
@keras_export("keras.utils.disable_interactive_logging")
10
io_utils.py
Python
keras/utils/io_utils.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
269,372
9
7
2
33
4
1
9
15
preprocess_input
Reformatting the codebase with black. PiperOrigin-RevId: 450093126
https://github.com/keras-team/keras.git
def preprocess_input(x, data_format=None):  # pylint: disable=unused-argument
    return x


@keras_export("keras.applications.efficientnet_v2.decode_predictions")
@keras_export("keras.applications.efficientnet_v2.decode_predictions")
12
efficientnet_v2.py
Python
keras/applications/efficientnet_v2.py
84afc5193d38057e2e2badf9c889ea87d80d8fbf
keras
1
38,439
48
14
17
248
19
0
76
177
is_config_or_test
Update codeparrot data preprocessing (#16944) * add new preprocessing arguments * add new filters * add new filters to readme * fix config and test count, update function names and docstrings * reformat code * update readme * Update readme * rename config_test filter Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com> * rename few_assignments filter Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com> * rename tokenizer in arguments Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com> * rename functions and add limit_line argument for config_test filter * update threshold for config_test filter Co-authored-by: Leandro von Werra <lvwerra@users.noreply.github.com> Co-authored-by: Loubna ben allal <loubnabenallal@gmail.com>
https://github.com/huggingface/transformers.git
def is_config_or_test(example, scan_width=5, coeff=0.05):
    keywords = ["unit tests", "test file", "configuration file"]

    lines = example["content"].splitlines()
    count_config = 0
    count_test = 0
    # first test
    for _, line in zip(range(scan_width), lines):
        for keyword in keywords:
            if keyword in line.lower():
                return {"config_or_test": True}

    # second test
    nlines = example["content"].count("\n")
    threshold = int(coeff * nlines)
    for line in lines:
        count_config += line.lower().count("config")
        count_test += line.lower().count("test")
        if count_config > threshold or count_test > threshold:
            return {"config_or_test": True}
    return {"config_or_test": False}
145
preprocessing.py
Python
examples/research_projects/codeparrot/scripts/preprocessing.py
e730e1256732b5dfeae2bdd427beacc3fbc20e2a
transformers
7
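A quick hedged illustration of how the filter above behaves, assuming it is in scope; the example dict mimics the "content" field the preprocessing script maps over:

example = {"content": "# unit tests for the tokenizer\nimport unittest\n\nclass T(unittest.TestCase):\n    pass\n"}
print(is_config_or_test(example))                 # {'config_or_test': True}, keyword hit in the first lines
print(is_config_or_test({"content": "x = 1\n"}))  # {'config_or_test': False}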
308,879
5
6
2
17
2
0
5
12
_async_set_value
Add device configuration entities to flux_led (#62786) Co-authored-by: Chris Talkington <chris@talkingtontech.com>
https://github.com/home-assistant/core.git
async def _async_set_value(self) -> None:
8
number.py
Python
homeassistant/components/flux_led/number.py
e222e1b6f05b630bef5aed73e307ca5072b6f286
core
1
209,511
83
14
20
202
14
0
132
343
_parse_multi_byte
E275 - Missing whitespace after keyword (#3711) Co-authored-by: Alexander Aring <alex.aring@gmail.com> Co-authored-by: Anmol Sarma <me@anmolsarma.in> Co-authored-by: antoine.torre <torreantoine1@gmail.com> Co-authored-by: Antoine Vacher <devel@tigre-bleu.net> Co-authored-by: Arnaud Ebalard <arno@natisbad.org> Co-authored-by: atlowl <86038305+atlowl@users.noreply.github.com> Co-authored-by: Brian Bienvenu <brian@bienvenu.id.au> Co-authored-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Co-authored-by: CQ <cq674350529@163.com> Co-authored-by: Daniel Collins <kinap@users.noreply.github.com> Co-authored-by: Federico Maggi <federico.maggi@gmail.com> Co-authored-by: Florian Maury <florian.maury@ssi.gouv.fr> Co-authored-by: _Frky <3105926+Frky@users.noreply.github.com> Co-authored-by: g-mahieux <37588339+g-mahieux@users.noreply.github.com> Co-authored-by: gpotter2 <gabriel@potter.fr> Co-authored-by: Guillaume Valadon <guillaume@valadon.net> Co-authored-by: Hao Zheng <haozheng10@gmail.com> Co-authored-by: Haresh Khandelwal <hareshkhandelwal@gmail.com> Co-authored-by: Harri Hämäläinen <hhamalai@iki.fi> Co-authored-by: hecke <hecke@naberius.de> Co-authored-by: Jan Romann <jan.romann@gmail.com> Co-authored-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com> Co-authored-by: jdiog0 <43411724+jdiog0@users.noreply.github.com> Co-authored-by: jockque <38525640+jockque@users.noreply.github.com> Co-authored-by: Julien Bedel <30991560+JulienBedel@users.noreply.github.com> Co-authored-by: Keith Scott <kscott@mitre.org> Co-authored-by: Kfir Gollan <kfir@drivenets.com> Co-authored-by: Lars Munch <lars@segv.dk> Co-authored-by: ldp77 <52221370+ldp77@users.noreply.github.com> Co-authored-by: Leonard Crestez <cdleonard@gmail.com> Co-authored-by: Marcel Patzlaff <mpatzlaff@benocs.com> Co-authored-by: Martijn Thé <martijnthe@users.noreply.github.com> Co-authored-by: Martine Lenders <authmillenon@gmail.com> Co-authored-by: Michael Farrell <micolous+git@gmail.com> Co-authored-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Co-authored-by: mkaliszan <mkaliszan@benocs.com> Co-authored-by: mtury <maxence.tury@ssi.gouv.fr> Co-authored-by: Neale Ranns <nranns@cisco.com> Co-authored-by: Octavian Toader <Octavian.Toader@belden.com> Co-authored-by: Peter Eisenlohr <peter@eisenlohr.org> Co-authored-by: Phil <phil@secdev.org> Co-authored-by: Pierre Lalet <pierre@droids-corp.org> Co-authored-by: Pierre Lorinquer <pierre.lorinquer@ssi.gouv.fr> Co-authored-by: piersoh <42040737+piersoh@users.noreply.github.com> Co-authored-by: plorinquer <pierre.lorinquer@ssi.gouv.fr> Co-authored-by: pvinci <pvinci@users.noreply.github.com> Co-authored-by: Rahul Jadhav <nyrahul@gmail.com> Co-authored-by: Robin Jarry <robin.jarry@6wind.com> Co-authored-by: romain-perez <51962832+romain-perez@users.noreply.github.com> Co-authored-by: rperez <rperez@debian> Co-authored-by: Sabrina Dubroca <sd@queasysnail.net> Co-authored-by: Sebastian Baar <sebastian.baar@gmx.de> Co-authored-by: sebastien mainand <sebastien.mainand@ssi.gouv.fr> Co-authored-by: smehner1 <smehner1@gmail.com> Co-authored-by: speakinghedge <hecke@naberius.de> Co-authored-by: Steven Van Acker <steven@singularity.be> Co-authored-by: Thomas Faivre <thomas.faivre@6wind.com> Co-authored-by: Tran Tien Dat <peter.trantiendat@gmail.com> Co-authored-by: Wael Mahlous <wael.mahlous@gmail.com> Co-authored-by: waeva <74464394+waeva@users.noreply.github.com> Co-authored-by: Alexander Aring <alex.aring@gmail.com> Co-authored-by: Anmol Sarma <me@anmolsarma.in> Co-authored-by: antoine.torre 
<torreantoine1@gmail.com> Co-authored-by: Antoine Vacher <devel@tigre-bleu.net> Co-authored-by: Arnaud Ebalard <arno@natisbad.org> Co-authored-by: atlowl <86038305+atlowl@users.noreply.github.com> Co-authored-by: Brian Bienvenu <brian@bienvenu.id.au> Co-authored-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Co-authored-by: CQ <cq674350529@163.com> Co-authored-by: Daniel Collins <kinap@users.noreply.github.com> Co-authored-by: Federico Maggi <federico.maggi@gmail.com> Co-authored-by: Florian Maury <florian.maury@ssi.gouv.fr> Co-authored-by: _Frky <3105926+Frky@users.noreply.github.com> Co-authored-by: g-mahieux <37588339+g-mahieux@users.noreply.github.com> Co-authored-by: gpotter2 <gabriel@potter.fr> Co-authored-by: Guillaume Valadon <guillaume@valadon.net> Co-authored-by: Hao Zheng <haozheng10@gmail.com> Co-authored-by: Haresh Khandelwal <hareshkhandelwal@gmail.com> Co-authored-by: Harri Hämäläinen <hhamalai@iki.fi> Co-authored-by: hecke <hecke@naberius.de> Co-authored-by: Jan Romann <jan.romann@gmail.com> Co-authored-by: Jan Sebechlebsky <sebechlebskyjan@gmail.com> Co-authored-by: jdiog0 <43411724+jdiog0@users.noreply.github.com> Co-authored-by: jockque <38525640+jockque@users.noreply.github.com> Co-authored-by: Julien Bedel <30991560+JulienBedel@users.noreply.github.com> Co-authored-by: Keith Scott <kscott@mitre.org> Co-authored-by: Kfir Gollan <kfir@drivenets.com> Co-authored-by: Lars Munch <lars@segv.dk> Co-authored-by: ldp77 <52221370+ldp77@users.noreply.github.com> Co-authored-by: Leonard Crestez <cdleonard@gmail.com> Co-authored-by: Marcel Patzlaff <mpatzlaff@benocs.com> Co-authored-by: Martijn Thé <martijnthe@users.noreply.github.com> Co-authored-by: Martine Lenders <authmillenon@gmail.com> Co-authored-by: Michael Farrell <micolous+git@gmail.com> Co-authored-by: Michał Mirosław <mirq-linux@rere.qmqm.pl> Co-authored-by: mkaliszan <mkaliszan@benocs.com> Co-authored-by: mtury <maxence.tury@ssi.gouv.fr> Co-authored-by: Neale Ranns <nranns@cisco.com> Co-authored-by: Octavian Toader <Octavian.Toader@belden.com> Co-authored-by: Peter Eisenlohr <peter@eisenlohr.org> Co-authored-by: Phil <phil@secdev.org> Co-authored-by: Pierre Lalet <pierre@droids-corp.org> Co-authored-by: Pierre Lorinquer <pierre.lorinquer@ssi.gouv.fr> Co-authored-by: piersoh <42040737+piersoh@users.noreply.github.com> Co-authored-by: pvinci <pvinci@users.noreply.github.com> Co-authored-by: Rahul Jadhav <nyrahul@gmail.com> Co-authored-by: Robin Jarry <robin.jarry@6wind.com> Co-authored-by: romain-perez <51962832+romain-perez@users.noreply.github.com> Co-authored-by: rperez <rperez@debian> Co-authored-by: Sabrina Dubroca <sd@queasysnail.net> Co-authored-by: Sebastian Baar <sebastian.baar@gmx.de> Co-authored-by: sebastien mainand <sebastien.mainand@ssi.gouv.fr> Co-authored-by: smehner1 <smehner1@gmail.com> Co-authored-by: Steven Van Acker <steven@singularity.be> Co-authored-by: Thomas Faivre <thomas.faivre@6wind.com> Co-authored-by: Tran Tien Dat <peter.trantiendat@gmail.com> Co-authored-by: Wael Mahlous <wael.mahlous@gmail.com> Co-authored-by: waeva <74464394+waeva@users.noreply.github.com>
https://github.com/secdev/scapy.git
def _parse_multi_byte(self, s):
    # type: (str) -> int
    assert len(s) >= 2

    tmp_len = len(s)

    value = 0
    i = 1
    byte = orb(s[i])
    # For CPU sake, stops at an arbitrary large number!
    max_value = 1 << 64
    # As long as the MSG is set, an another byte must be read
    while byte & 0x80:
        value += (byte ^ 0x80) << (7 * (i - 1))
        if value > max_value:
            raise error.Scapy_Exception(
                'out-of-bound value: the string encodes a value that is too large (>2^{{64}}): {}'.format(value)  # noqa: E501
            )
        i += 1
        assert i < tmp_len, 'EINVAL: x: out-of-bound read: the string ends before the AbstractUVarIntField!'  # noqa: E501
        byte = orb(s[i])

    value += byte << (7 * (i - 1))
    value += self._max_value

    assert value >= 0
    return value
125
http2.py
Python
scapy/contrib/http2.py
08b1f9d67c8e716fd44036a027bdc90dcb9fcfdf
scapy
3
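For context, a standalone sketch of the HPACK-style continuation-byte decoding that `_parse_multi_byte` performs; this is illustrative Python, not the scapy field API:

def decode_uvarint(data: bytes, prefix_max: int) -> int:
    # Each byte contributes 7 bits; the MSB flags that another byte follows.
    value, shift = 0, 0
    for byte in data:
        value += (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return value + prefix_max
    raise ValueError("string ends before the varint terminates")

# RFC 7541's classic example: 1337 encoded with a 5-bit prefix (prefix_max = 31).
print(decode_uvarint(b"\x9a\x0a", 31))  # 1337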
300,199
47
11
20
255
19
0
99
193
test_service_calls_with_all_entities
Add ws66i core integration (#56094) * Add ws66i core integration * Remove all ws66i translations * Update ws66i unit tests to meet minimum code coverage * Update ws66i based on @bdraco review * General improvements after 2nd PR review * Disable entities if amp shutoff, set default source names, set 30sec polling * Add _attr_ and change async_on_unload * Improve entity generation * Implement coordinator * Made options fields required, retry connection on failed attempts, use ZoneStatus for attributes * Refactor WS66i entity properties, raise HomeAssistantError on restore service if no snapshot * Update to pyws66i v1.1 * Add quality scale of silver to manifest * Update config_flow test
https://github.com/home-assistant/core.git
async def test_service_calls_with_all_entities(hass):
    _ = await _setup_ws66i_with_options(hass, MockWs66i())

    # Changing media player to new state
    await _call_media_player_service(
        hass, SERVICE_VOLUME_SET, {"entity_id": ZONE_1_ID, "volume_level": 0.0}
    )
    await _call_media_player_service(
        hass, SERVICE_SELECT_SOURCE, {"entity_id": ZONE_1_ID, "source": "one"}
    )

    # Saving existing values
    await _call_ws66i_service(hass, SERVICE_SNAPSHOT, {"entity_id": "all"})

    # Changing media player to new state
    await _call_media_player_service(
        hass, SERVICE_VOLUME_SET, {"entity_id": ZONE_1_ID, "volume_level": 1.0}
    )
    await _call_media_player_service(
        hass, SERVICE_SELECT_SOURCE, {"entity_id": ZONE_1_ID, "source": "three"}
    )

    # await coordinator.async_refresh()
    # await hass.async_block_till_done()

    # Restoring media player to its previous state
    await _call_ws66i_service(hass, SERVICE_RESTORE, {"entity_id": "all"})
    await hass.async_block_till_done()

    state = hass.states.get(ZONE_1_ID)

    assert state.attributes[ATTR_MEDIA_VOLUME_LEVEL] == 0.0
    assert state.attributes[ATTR_INPUT_SOURCE] == "one"
151
test_media_player.py
Python
tests/components/ws66i/test_media_player.py
5e737bfe4fbc5a724f5fdf04ea9319c2224cb114
core
1
110,678
75
13
15
206
18
0
96
385
to_jshtml
Deprecate attributes and expire deprecation in animation
https://github.com/matplotlib/matplotlib.git
def to_jshtml(self, fps=None, embed_frames=True, default_mode=None):
    if fps is None and hasattr(self, '_interval'):
        # Convert interval in ms to frames per second
        fps = 1000 / self._interval

    # If we're not given a default mode, choose one base on the value of
    # the _repeat attribute
    if default_mode is None:
        default_mode = 'loop' if getattr(self, '_repeat', False) else 'once'

    if not hasattr(self, "_html_representation"):
        # Can't open a NamedTemporaryFile twice on Windows, so use a
        # TemporaryDirectory instead.
        with TemporaryDirectory() as tmpdir:
            path = Path(tmpdir, "temp.html")
            writer = HTMLWriter(fps=fps,
                                embed_frames=embed_frames,
                                default_mode=default_mode)
            self.save(str(path), writer=writer)
            self._html_representation = path.read_text()

    return self._html_representation
122
animation.py
Python
lib/matplotlib/animation.py
ec60128011c1009313e0c24b1dfc89313b2d1b59
matplotlib
6
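A small usage sketch for `to_jshtml`; the animation itself is a throwaway example and the fps/default_mode values are arbitrary:

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
(line,) = ax.plot([], [])

def update(frame):
    line.set_data(range(frame + 1), range(frame + 1))
    return (line,)

anim = FuncAnimation(fig, update, frames=5, interval=100)
html = anim.to_jshtml(fps=10, default_mode="loop")  # embedded frames + JS player
print(html[:60])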
60,417
27
12
7
91
7
0
37
66
CheckForBadCharacters
Balanced joint maximum mean discrepancy for deep transfer learning
https://github.com/jindongwang/transferlearning.git
def CheckForBadCharacters(filename, lines, error):
  for linenum, line in enumerate(lines):
    if u'\ufffd' in line:
      error(filename, linenum, 'readability/utf8', 5,
            'Line contains invalid UTF-8 (or Unicode replacement character).')
    if '\0' in line:
      error(filename, linenum, 'readability/nul', 5, 'Line contains NUL byte.')
55
cpp_lint.py
Python
code/deep/BJMMD/caffe/scripts/cpp_lint.py
cc4d0564756ca067516f71718a3d135996525909
transferlearning
4
69,222
8
10
13
33
6
0
9
2
get_actual_gle_dict
feat: Asset Capitalization - manual selection of entry type - GLE cleanup with smaller functions - GLE considering periodical inventory - test cases
https://github.com/frappe/erpnext.git
def get_actual_gle_dict(name):
    return dict(
        frappe.db.sql(
            ,
            name,
        )
    )
20
test_asset_capitalization.py
Python
erpnext/assets/doctype/asset_capitalization/test_asset_capitalization.py
58d430fe3ee62e93ad8d16a08bb42156a25b7d41
erpnext
1
106,001
9
9
3
38
7
0
10
31
extract
Fix description of streaming in the docs (#5313) * fix streaming description in stream section * fix audio docs * change docstrings for streaming download manager * Apply suggestions from code review * all urls -> URL(s), correct return types Co-authored-by: Quentin Lhoest <42851186+lhoestq@users.noreply.github.com> Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>
https://github.com/huggingface/datasets.git
def extract(self, url_or_urls):
    urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True)
    return urlpaths
24
streaming_download_manager.py
Python
src/datasets/download/streaming_download_manager.py
832780ac40e3bfb0e15336cc9eca894c1da4158f
datasets
1
26,744
25
15
19
111
11
0
39
221
validate_image_file
Save images to product media from external URLs (#9329) * Save images to product media from external URLs
https://github.com/saleor/saleor.git
def validate_image_file(file, field_name, error_class) -> None:
    if not file:
        raise ValidationError(
            {
                field_name: ValidationError(
                    "File is required.", code=error_class.REQUIRED
                )
            }
        )
    if not is_image_mimetype(file.content_type):
        raise ValidationError(
            {
                field_name: ValidationError(
                    "Invalid file type.", code=error_class.INVALID
                )
            }
        )
    _validate_image_format(file, field_name, error_class)
69
__init__.py
Python
saleor/graphql/core/utils/__init__.py
5763cb1ad22b84e29a270b9b0b365f32074b753d
saleor
3
216,133
88
15
34
319
11
0
122
500
test__compress_ids_multiple_module_functions
handle compressing multiple mod.fun combos under an id
https://github.com/saltstack/salt.git
def test__compress_ids_multiple_module_functions():
    # raw data entering the outputter
    data = {
        "local": {
            "cmd_|-try_this_|-echo 'hi'_|-run": {
                "__id__": "try_this",
                "__run_num__": 1,
                "__sls__": "wayne",
                "changes": {"pid": 32615, "retcode": 0, "stderr": "", "stdout": "hi"},
                "comment": 'Command "echo ' "'hi'\" run",
                "duration": 8.218,
                "name": "echo 'hi'",
                "result": True,
                "start_time": "23:43:25.715842",
            },
            "test_|-try_this_|-asdf_|-nop": {
                "__id__": "try_this",
                "__run_num__": 0,
                "__sls__": "wayne",
                "changes": {},
                "comment": "Success!",
                "duration": 0.906,
                "name": "asdf",
                "result": True,
                "start_time": "23:43:25.714010",
            },
        }
    }

    # check output text for formatting
    opts = copy.deepcopy(highstate.__opts__)
    opts["state_compress_ids"] = True
    with patch("salt.output.highstate.__opts__", opts, create=True):
        actual_output = highstate.output(data)

    # if we only cared about the ID/SLS/Result combo, this would be 4 not 2
    assert "Succeeded: 2 (changed=1)" in actual_output
    assert "Failed: 0" in actual_output
    assert "Total states run: 2" in actual_output
165
test_highstate.py
Python
tests/pytests/unit/output/test_highstate.py
b677222e9e5031afc2e3962247af5d4adfc91cbe
salt
1
241,549
47
14
12
145
14
0
54
181
reset
Reset the total fit-validation batch progress on epoch (#11244)
https://github.com/Lightning-AI/lightning.git
def reset(self) -> None:
    if self.restarting:
        self.batch_progress.reset_on_restart()
        self.scheduler_progress.reset_on_restart()
        self.batch_loop.optimizer_loop.optim_progress.reset_on_restart()
    else:
        self.batch_progress.reset_on_run()
        self.scheduler_progress.reset_on_run()
        self.batch_loop.optimizer_loop.optim_progress.reset_on_run()
        # when the epoch starts, the total val batch progress should be reset as it's supposed to count the batches
        # seen per epoch, this is useful for tracking when validation is run multiple times per epoch
        self.val_loop.epoch_loop.batch_progress.total.reset()

    self._outputs = []
84
training_epoch_loop.py
Python
pytorch_lightning/loops/epoch/training_epoch_loop.py
e9009d60588306b48a2931bf304a493c468c6740
lightning
2
214,475
4
8
2
24
4
0
4
18
__len__
feat: :sparkles: initial implementation of JsonlCorpora and Datasets
https://github.com/flairNLP/flair.git
def __len__(self): return len(self.sentences)
13
sequence_labeling.py
Python
flair/datasets/sequence_labeling.py
4b61c1f4b2e2d67fbda22eb01af78ffa23a15ab7
flair
1
281,141
27
12
15
137
19
0
32
165
test_coin_api_load_df_for_ta
Crypto menu refactor (#1119) * enabled some crypto commands in dd to be called independent of source loaded * support for coin_map_df in all dd functions + load ta and plot chart refactor * updated tests and removed coingecko scrapping where possible * removed ref of command from hugo * updated pycoingecko version * refactoring load * refactored load to fetch prices; pred can run independent of source now * load by default usd on cp/cg and usdt on cb/bin * updated to rich for formatting and updated dependencies * fixed changes requested * update docs * revert discord requirements * removed absolute from calculate change for price * fixing pr issues * fix loading issue when similar coins exist, move coins to home, fill n/a * update docs for coins * adds load to ta and pred menu
https://github.com/OpenBB-finance/OpenBBTerminal.git
def test_coin_api_load_df_for_ta(self, mock_load):
    with open(
        "tests/gamestonk_terminal/cryptocurrency/json/test_cryptocurrency_helpers/btc_usd_test_data.json",
        encoding="utf8",
    ) as f:
        sample_return = json.load(f)

    mock_load.return_value = sample_return
    mock_return, vs = load_ta_data(
        coin_map_df=self.coin_map_df,
        source="cg",
        currency="usd",
        days=30,
    )
    self.assertTrue(mock_return.shape == (31, 4))
    self.assertTrue(vs == "usd")
81
test_cryptocurrency_helpers.py
Python
tests/gamestonk_terminal/cryptocurrency/test_cryptocurrency_helpers.py
ea964109d654394cc0a5237e6ec5510ba6404097
OpenBBTerminal
1
315,206
18
12
8
100
11
0
21
85
async_step_zeroconf
Add config flow for Bose SoundTouch (#72967) Co-authored-by: Paulus Schoutsen <balloob@gmail.com>
https://github.com/home-assistant/core.git
async def async_step_zeroconf(self, discovery_info):
    self.host = discovery_info.host
    try:
        await self._async_get_device_id()
    except RequestException:
        return self.async_abort(reason="cannot_connect")

    self.context["title_placeholders"] = {"name": self.name}
    return await self.async_step_zeroconf_confirm()
56
config_flow.py
Python
homeassistant/components/soundtouch/config_flow.py
273e9b287f482f5d1f0718ebf09a56c49c66513d
core
2
19,887
11
9
8
37
4
0
11
25
_requirement_name
check point progress on only bringing in pip==22.0.4 (#4966) * vendor in pip==22.0.4 * updating vendor packaging version * update pipdeptree to fix pipenv graph with new version of pip. * Vendoring of pip-shims 0.7.0 * Vendoring of requirementslib 1.6.3 * Update pip index safety restrictions patch for pip==22.0.4 * Update patches * exclude pyptoject.toml from black to see if that helps. * Move this part of the hash collection back to the top (like prior implementation) because it affects the outcome of this test now in pip 22.0.4
https://github.com/pypa/pipenv.git
def _requirement_name(self) -> str: return str(self.req) if self.req else "unknown package"
21
exceptions.py
Python
pipenv/patched/notpip/_internal/exceptions.py
f3166e673fe8d40277b804d35d77dcdb760fc3b3
pipenv
2
288,291
28
11
9
94
11
0
32
71
is_valid_config_entry
Log config_flow errors for waze_travel_time (#79352)
https://github.com/home-assistant/core.git
def is_valid_config_entry(hass, origin, destination, region):
    origin = find_coordinates(hass, origin)
    destination = find_coordinates(hass, destination)
    try:
        WazeRouteCalculator(origin, destination, region).calc_all_routes_info()
    except WRCError as error:
        _LOGGER.error("Error trying to validate entry: %s", error)
        return False
    return True
59
helpers.py
Python
homeassistant/components/waze_travel_time/helpers.py
31ddf6cc31aaaf08a09cd6afa9eb830450d1817d
core
2
281,055
10
9
3
45
9
0
10
31
call_dashboards
Refactored Reports (#1123) * First working program * added Voila support * Attempt to fix requirements * Revert depen * Another dependency test * Fixed errors * Small changes * Fixed help * Added port warning * Fixed continued terminal use * Last change, we good fam * Added placeholder so we can have stored * Didier changes * Fixed exit * Fixed issue
https://github.com/OpenBB-finance/OpenBBTerminal.git
def call_dashboards(self, _):
    from gamestonk_terminal.jupyter.dashboards import dashboards_controller

    self.queue = dashboards_controller.menu(self.queue)
28
jupyter_controller.py
Python
gamestonk_terminal/jupyter/jupyter_controller.py
eb0dde4603468f64c5d99c57bd272816062e6a23
OpenBBTerminal
1
90,379
97
13
43
560
18
0
165
707
test_order_by
feat(api): Sort sessions by release timestamp [TET-7] (#34699) This PR adds the ability to sort by release timestamp in sessions v2 over metrics Essentially it: Runs a pre-flight query to Release model according to ordering of release timestamp whether ASC or DESC and gets the resulting release groups. It also adds environment filters to the pre flight query if environment filters are sent in the query param of the request Fetches those groups and uses them as a filter in subsequent metrics queries Formats the output so if groups are present in the Release model but not in the metrics dataset, those groups still exist but are nulled out in the output. Otherwise, it shows the results of the groups in the metrics dataset. Orders the output according to resulting groups from preflight query
https://github.com/getsentry/sentry.git
def test_order_by(self):
    # Step 1: Create 3 releases
    release1b = self.create_release(version="1B")
    release1c = self.create_release(version="1C")
    release1d = self.create_release(version="1D")
    # Step 2: Create crash free rate for each of those releases
    # Release 1c -> 66.7% Crash free rate
    for _ in range(0, 2):
        self.store_session(make_session(self.project, release=release1c.version))
    self.store_session(make_session(self.project, release=release1c.version, status="crashed"))
    # Release 1b -> 33.3% Crash free rate
    for _ in range(0, 2):
        self.store_session(
            make_session(self.project, release=release1b.version, status="crashed")
        )
    self.store_session(make_session(self.project, release=release1b.version))
    # Create Sessions in each of these releases
    # Release 1d -> 80% Crash free rate
    for _ in range(0, 4):
        self.store_session(make_session(self.project, release=release1d.version))
    self.store_session(make_session(self.project, release=release1d.version, status="crashed"))
    # Step 3: Make request
    response = self.do_request(
        {
            "project": self.project.id,  # project without users
            "statsPeriod": "1d",
            "interval": "1d",
            "field": ["crash_free_rate(session)"],
            "groupBy": ["release"],
            "orderBy": "-release.timestamp",
            "per_page": 3,
        }
    )
    # Step 4: Validate Results
    assert response.data["groups"] == [
        {
            "by": {"release": "1D"},
            "totals": {"crash_free_rate(session)": 0.8},
            "series": {"crash_free_rate(session)": [0.8]},
        },
        {
            "by": {"release": "1C"},
            "totals": {"crash_free_rate(session)": 0.6666666666666667},
            "series": {"crash_free_rate(session)": [0.6666666666666667]},
        },
        {
            "by": {"release": "1B"},
            "totals": {"crash_free_rate(session)": 0.33333333333333337},
            "series": {"crash_free_rate(session)": [0.33333333333333337]},
        },
    ]
334
test_organization_sessions.py
Python
tests/snuba/api/endpoints/test_organization_sessions.py
af66ec160dda84b9d417295a5586a530f422511d
sentry
4
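The asserted crash-free rates in the test_order_by record follow directly from the sessions it stores (crash-free rate = non-crashed sessions / total sessions per release). A quick illustrative check of that arithmetic:

# Illustration only: (release, healthy sessions, crashed sessions) as stored by the test.
for release, ok, crashed in [("1B", 1, 2), ("1C", 2, 1), ("1D", 4, 1)]:
    print(release, ok / (ok + crashed))
# Prints 0.333... for 1B, 0.666... for 1C and 0.8 for 1D, matching the asserted groups.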
283,018
15
11
26
75
11
0
16
44
print_help
Add TA and test coverage for the Forex context (#1532) * Add technical analysis to forex menu * Remove "stock" and "$" from the common ta charts * Add tests for forex context * Update forex custom reset functions * Remove -t arg (ticker) insertion into other_args in forex controller * Remove excluded command from forex ta help string * Update test data Co-authored-by: didierlopes.eth <dro.lopes@campus.fct.unl.pt>
https://github.com/OpenBB-finance/OpenBBTerminal.git
def print_help(self):
    currency_str = f" {self.ticker} (from {self.start.strftime('%Y-%m-%d')})"
    help_text = f
    console.print(text=help_text, menu="Forex - Technical Analysis")
26
ta_controller.py
Python
gamestonk_terminal/forex/technical_analysis/ta_controller.py
ede6ec8b1a0dda6c85e81084922c7be80765cd43
OpenBBTerminal
1
195,194
26
9
16
149
11
0
31
195
r2c2_prompt
Patch 8322 (#4709) * add dafetymix teacher * safety_mix teacher * safety_mix teacher pos and neg teachers * add tests for teacher * add license info * improvement * add task list * add task list and lint * add init.py * adding some patch to director * seeker changes * th * 3 * jing * changes * z and r * remove .opts * fix docs * add contrractions * lint Co-authored-by: Dexter Ju <da.ju.fr@gmail.com> Co-authored-by: Jing Xu <jingxu23@fb.com>
https://github.com/facebookresearch/ParlAI.git
def r2c2_prompt(self):
    return {
        'sdm': CONST.IS_SEARCH_REQUIRED,
        'mdm': CONST.IS_MEMORY_REQUIRED,
        'sgm': CONST.GENERATE_QUERY,
        'mgm': CONST.GENERATE_MEMORY,
        'mkm': CONST.ACCESS_MEMORY,
        'ckm': CONST.EXTRACT_ENTITY,
        'skm': CONST.GENERATE_KNOWLEDGE,
        'mrm': '',
        'crm': '',
        'srm': '',
        'vrm': '',
        'grm': '',
        'orm': '',
    }[self.value]
80
module.py
Python
projects/bb3/agents/module.py
b1acb681207559da56a787ba96e16f0e23697d92
ParlAI
1
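The r2c2_prompt record above resolves a prompt constant by indexing a dict with self.value, which suggests the method lives on an Enum of module abbreviations. A generic sketch of that enum-property pattern follows; the member names and prompt strings here are invented for illustration and are not ParlAI's actual constants.

from enum import Enum

class Module(Enum):
    SDM = 'sdm'
    SGM = 'sgm'
    MRM = 'mrm'

    @property
    def prompt(self) -> str:
        # Dict-per-value lookup, mirroring the structure of r2c2_prompt above.
        return {
            'sdm': 'is search required?',
            'sgm': 'generate a search query',
            'mrm': '',
        }[self.value]

print(Module.SDM.prompt)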
260,765
15
7
3
33
5
0
15
43
partial_fit
MAINT Add parameter validation to `HashingVectorizer`. (#24181) Co-authored-by: jeremie du boisberranger <jeremiedbb@yahoo.fr>
https://github.com/scikit-learn/scikit-learn.git
def partial_fit(self, X, y=None):
    # TODO: only validate during the first call
    self._validate_params()
    return self
19
text.py
Python
sklearn/feature_extraction/text.py
7c835d550c1dcaf44938b1c285db017a773d7dba
scikit-learn
1
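The partial_fit above is nearly a no-op because HashingVectorizer is stateless; it exists so the class fits into out-of-core pipelines. A minimal streaming sketch, assuming scikit-learn is installed (the batch contents are made up):

from sklearn.feature_extraction.text import HashingVectorizer

vectorizer = HashingVectorizer(n_features=2**10)
for batch in (["first document"], ["second document", "third one"]):
    vectorizer.partial_fit(batch)    # validates parameters, keeps no state
    X = vectorizer.transform(batch)  # hashing needs no fitted vocabulary
    print(X.shape)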
43,671
13
10
4
75
11
0
14
30
task_group
Map and Partial DAG authoring interface for Dynamic Task Mapping (#19965) * Make DAGNode a proper Abstract Base Class * Prevent mapping an already mapped Task/TaskGroup Also prevent calls like .partial(...).partial(...). It is uncertain whether these kinds of repeated partial/map calls have utility, so let's disable them entirely for now to simplify implementation. We can always add them if they are proven useful. Co-authored-by: Tzu-ping Chung <tp@astronomer.io>
https://github.com/apache/airflow.git
def task_group(python_callable=None, *tg_args, **tg_kwargs):
    if callable(python_callable):
        return TaskGroupDecorator(function=python_callable, kwargs=tg_kwargs)
    return cast("Callable[[T], T]", functools.partial(TaskGroupDecorator, kwargs=tg_kwargs))
47
task_group.py
Python
airflow/decorators/task_group.py
e9226139c2727a4754d734f19ec625c4d23028b3
airflow
2
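The branching in the task_group record above exists so the decorator works both bare and parameterized: called with a function it wraps it directly, called with keyword arguments it returns a partial that will receive the function next. A brief usage sketch under the assumption of Airflow 2.x's airflow.decorators module; the group_id and function names are illustrative, not taken from the record.

from airflow.decorators import task_group

@task_group(group_id="extract_load")  # parameterized form: task_group(...) returns a decorator
def extract_load():
    ...

@task_group  # bare form: the function itself is passed as python_callable
def transform():
    ...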